OpenAI Blocks ChatGPT Accounts Linked to State-Sponsored Hacking Groups
OpenAI has recently banned several ChatGPT accounts associated with state-sponsored hacking groups from Russia and China. These accounts were reportedly used to assist in malware development, social media automation, and research related to U.S. satellite communications technologies.
The Russian-linked actor used ChatGPT to refine Windows malware, debug code across multiple languages, and set up command-and-control infrastructure. The actor demonstrated knowledge of Windows internals and took deliberate operational security precautions. In the campaign, dubbed “ScopeCreep,” the actor signed up for ChatGPT with temporary email addresses and used each account for a single conversation to make one incremental improvement to the malware before abandoning the account and creating a new one.
The malware was distributed as a trojanized version of a legitimate video game crosshair overlay tool. Users who downloaded the tampered build were infected with a loader that retrieved additional payloads from an external server. The malware was designed to escalate privileges, establish stealthy persistence, notify the threat actor, and exfiltrate sensitive data while evading detection. Techniques such as Base64 encoding, DLL side-loading, and SOCKS5 proxying were used to conceal the attacker’s presence and obscure the origin of its network traffic.
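The report does not include code, but as a generic illustration of the last of those concealment techniques: the sketch below shows how a Go program can route its outbound traffic through a SOCKS5 proxy using the golang.org/x/net/proxy package, so that the remote server sees the proxy’s address rather than the client’s. The proxy address and URL are placeholders for illustration only, not indicators from the ScopeCreep campaign.

    package main

    import (
        "fmt"
        "log"
        "net/http"

        "golang.org/x/net/proxy"
    )

    func main() {
        // Build a dialer that tunnels all outbound TCP connections through a
        // SOCKS5 proxy; the destination only ever sees the proxy's address.
        // 127.0.0.1:1080 is a placeholder proxy, not a real indicator.
        dialer, err := proxy.SOCKS5("tcp", "127.0.0.1:1080", nil, proxy.Direct)
        if err != nil {
            log.Fatal(err)
        }

        // Plug the proxied dialer into an ordinary HTTP client.
        client := &http.Client{
            Transport: &http.Transport{Dial: dialer.Dial},
        }

        // Any request made with this client appears, from the server's point
        // of view, to originate at the proxy rather than at this machine.
        resp, err := client.Get("https://example.com/")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        fmt.Println("response status:", resp.Status)
    }

Routing traffic this way hides the operator’s real network location, which is the concealment effect the report describes.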
The Chinese-linked groups, identified as APT5 and APT15, used ChatGPT for a range of activities, including open-source research, technical discussions, and troubleshooting of system configurations. Some of this activity appeared to relate to Linux system administration, software development, and infrastructure setup.
OpenAI continues to monitor and take action against misuse of its models, reinforcing its commitment to preventing the use of its technology for malicious purposes.