In a significant move to combat the malicious use of artificial intelligence, OpenAI has announced the closure of numerous ChatGPT accounts linked to state-sponsored hacking groups from Russia, China, and Iran. The company said the accounts were being exploited for a range of illicit activities, including malware development, social engineering, and the generation of disinformation campaigns.
According to OpenAI’s latest threat intelligence report, the company’s detection systems, themselves bolstered by AI, identified and disrupted these operations over the past three months. Among the most notable disclosures is the banning of accounts associated with a Russian-speaking threat actor who used ChatGPT to refine a strain of Windows malware dubbed “ScopeCreep.” The group practiced sophisticated operational security, creating multiple temporary accounts and using each one for a single, incremental improvement to its malicious code.
Chinese nation-state hacking groups, including well-known entities like APT5 and APT15, were also found to be leveraging ChatGPT. Their activities ranged from conducting open-source research on U.S. satellite communications and troubleshooting Linux system configurations to developing brute-force scripts for FTP servers and automating social media manipulation.
Furthermore, OpenAI flagged and disabled accounts linked to the Iranian influence operation “Storm-2035,” which used the AI model to generate pro-Iran messaging and support for various geopolitical causes, including Scottish independence and Palestinian rights, targeting global audiences on platforms like X.
The report also highlighted a broader ecosystem of coordinated inauthentic behavior, including North Korean IT worker schemes using ChatGPT for fraudulent job applications, and operations from the Philippines and Cambodia involved in comment spamming and task scams.
OpenAI emphasized that while these incidents demonstrate the evolving tactics of threat actors, none of the detected misuse involved sophisticated or large-scale attacks enabled solely by its tools.
The company reiterated its commitment to continually improving its detection and mitigation efforts, underscoring the importance of collaborative action across the industry to safeguard against the misuse of AI technologies. This decisive action by OpenAI serves as a stark reminder of the dual-use nature of advanced AI and the ongoing battle to ensure its responsible development and deployment.