Fake OpenAI AI Tool Used to Spread Malware to Thousands of Users

Cybersecurity researchers have discovered a fake AI project online that pretended to be an official OpenAI tool but was actually designed to spread malware. The malicious project appeared on Hugging Face, a popular platform where developers share artificial intelligence models and software tools.

The fake project copied the name and description of a legitimate OpenAI “Privacy Filter” tool, making it look authentic to unsuspecting users. Researchers said the attackers used a technique known as “typosquatting,” where scammers create names that closely resemble trusted companies or products in order to trick people into downloading harmful files.
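The basic idea behind detecting typosquatting can be illustrated with a short sketch: compare a candidate name against a list of trusted names and flag anything that is very close, but not identical, to a trusted entry. The repository names below are hypothetical placeholders, not the actual names involved in this incident.

```python
# Minimal typosquat check: flag names that nearly match a trusted name.
# TRUSTED_NAMES and the example inputs are illustrative only.
from difflib import SequenceMatcher

TRUSTED_NAMES = ["openai/privacy-filter"]  # hypothetical trusted entry

def looks_like_typosquat(candidate: str, threshold: float = 0.85) -> bool:
    """Return True if candidate closely resembles, but is not, a trusted name."""
    for trusted in TRUSTED_NAMES:
        ratio = SequenceMatcher(None, candidate.lower(), trusted.lower()).ratio()
        if candidate.lower() != trusted.lower() and ratio >= threshold:
            return True
    return False

print(looks_like_typosquat("openai/privacy-filter"))   # exact match: False
print(looks_like_typosquat("0penai/privacy-filter"))   # one-character swap: True
```

Real platforms use more sophisticated checks (keyboard-distance models, homoglyph detection, reputation signals), but the principle is the same: suspicion scales with similarity to a trusted brand.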

The fake repository reportedly reached the number one trending position on the platform and, within a short time, accumulated hundreds of thousands of downloads. Security experts believe some of those download numbers may have been artificially inflated using fake accounts and automated activity to make the project appear more trustworthy and popular.

Once downloaded, the malicious files installed an “infostealer” malware program on Windows computers. This type of malware is designed to secretly collect sensitive information from victims. Researchers said the malware could steal saved browser passwords, cryptocurrency wallet information, VPN credentials, Discord tokens, and other personal or business data stored on the device.

Investigators found that the malware used multiple stages to avoid detection: it disabled certain security protections, downloaded additional hidden files from remote servers, and attempted to conceal itself from antivirus software. The final payload was reportedly written in Rust, a programming language increasingly seen in modern cyberattacks because its compiled binaries are fast and remain less familiar to many analysis tools, making them harder to detect.

Security experts warn that this incident highlights a growing problem within the AI industry. As artificial intelligence tools become more popular, cybercriminals are increasingly targeting developers, researchers, and everyday users by disguising malware as legitimate AI software. Open-source AI platforms can be especially attractive targets because anyone can upload files and projects for others to download.

Researchers say users should be cautious when downloading AI tools or software from online repositories, even if a project appears popular or trending. Experts recommend verifying the publisher, checking official links from trusted companies, and scanning files before running them on a computer. The incident serves as another reminder that cybercriminals are adapting quickly to the growing AI ecosystem and finding new ways to exploit trust in emerging technologies.
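One of the recommended precautions, checking a downloaded file against a checksum published by the vendor before running it, can be sketched as follows. The file path and expected hash are placeholders for illustration; a real check would compare against a hash obtained from the vendor's official site over a trusted channel.

```python
# Sketch of a pre-run integrity check: compute the SHA-256 hash of a
# downloaded file and compare it to a vendor-published checksum.
import hashlib

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large downloads don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_published_hash(path: str, published_hash: str) -> bool:
    """Return True only if the file's hash equals the published checksum."""
    return sha256_of(path) == published_hash.strip().lower()
```

A matching hash confirms the file is the one the vendor published; it does not prove the vendor itself is trustworthy, which is why verifying the publisher and official links remains the first step.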