A new report published yesterday by OpenAI is causing a stir in the cybersecurity community. It documents ten distinct cases of threat actors across the globe abusing the company's AI tools. The findings highlight the growing prominence of artificial intelligence in cyber operations and the critical need for strong, collective defenses.
Threat actors based in six countries—China, Russia, North Korea, Iran, Cambodia and the Philippines—have been using OpenAI’s models to carry out “malicious activities,” according to the report, titled “Disrupting malicious uses of AI: June 2025.”
OpenAI stresses that these AI tools haven't created entirely new categories of threats, even if they have made it easier for bad actors to mount more sophisticated attacks and operate at greater scale and efficiency.
Among the revealed campaigns, a few stand out. In one, dubbed "ScopeCreep," a Russian-speaking actor used ChatGPT to write and refine Windows malware. In another, North Korean hackers were found mass-producing counterfeit resumes for remote tech jobs in a bid to infiltrate corporate devices.
Another major operation, "Operation Sneer Review," traced to China, flooded social media platforms including TikTok and X with pro-Chinese propaganda, often relying on fictitious digital personas posing as users from multiple countries.
According to OpenAI, its detection systems identified unusual activity across all of these operations, resulting in account terminations and intelligence sharing with partner platforms.
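The report does not detail how these detection systems work, but the general idea of flagging anomalous account behavior can be made concrete. The sketch below is purely illustrative and assumes invented names (AccountActivity, is_anomalous) and thresholds; it is not OpenAI's actual pipeline.

```python
# Hypothetical sketch of anomaly flagging over account activity logs.
# All names, fields, and thresholds are invented for illustration;
# nothing here reflects OpenAI's disclosed detection methods.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class AccountActivity:
    account_id: str
    daily_requests: list[int]    # requests per day over an observation window
    flagged_prompt_ratio: float  # share of prompts matching abuse heuristics

def is_anomalous(acct: AccountActivity,
                 baseline_mean: float,
                 baseline_std: float,
                 z_threshold: float = 3.0,
                 ratio_threshold: float = 0.2) -> bool:
    """Flag accounts whose volume or content profile deviates from the baseline."""
    avg = mean(acct.daily_requests)
    z = (avg - baseline_mean) / baseline_std if baseline_std else 0.0
    return z > z_threshold or acct.flagged_prompt_ratio > ratio_threshold

# Build a baseline from presumed-benign accounts, then screen a suspect one.
benign = [AccountActivity("u1", [10, 12, 9], 0.00),
          AccountActivity("u2", [8, 11, 10], 0.01)]
rates = [mean(a.daily_requests) for a in benign]
base_mu, base_sigma = mean(rates), stdev(rates)

suspect = AccountActivity("u9", [400, 520, 610], 0.35)
print(is_anomalous(suspect, base_mu, base_sigma))  # True: extreme volume and ratio
```

Real systems would combine many more signals (payment metadata, network indicators, content classifiers) and human review before terminating an account; a single z-score is only the simplest possible stand-in for that process.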
The report also cited examples of AI being used for military intelligence, to produce marketing material for illicit services, and to create politically provocative content aimed at shaping public opinion.
Powerful as these AI-driven tactics are, such campaigns often drew little genuine engagement, OpenAI said, suggesting that the content, however sophisticated, has yet to win the trust of real audiences.
The extensive report is a red alert for security teams everywhere. OpenAI emphasized the need to stay vigilant about how adversarial actors are weaponizing large language models and offered to share real-time threat intelligence with other leading AI firms.
The results underscore the need for a unified, multilayered defense against the ever-changing AI threat landscape.