The Chinese Communist Party (CCP) law enforcement official was trying to word a sensitive message carefully.
He turned to ChatGPT, the artificial intelligence chatbot used by hundreds of millions of people worldwide, asking it to help draft and polish text tied to what appeared to be an influence and intimidation campaign targeting Chinese dissidents living overseas.
That interaction would later help investigators uncover the broader campaign.
In a threat intelligence report released this week, OpenAI, the company behind ChatGPT, said the official’s activity helped investigators identify and dismantle what they described as a sprawling campaign targeting critics of the CCP worldwide, including efforts to impersonate U.S. immigration officials.
Ben Nimmo, who leads investigative work on influence operations at OpenAI, addressed reporters in a briefing ahead of the report’s release.
Impersonation, Forged Papers and Fake Deaths
According to OpenAI’s report, actors linked to China used its tools to draft political content, translate messaging, and create online personas designed to appear authentic. In one case, operators allegedly impersonated U.S. immigration officials and sent warnings to a Chinese dissident living in the United States, claiming the individual’s public statements had violated the law. Investigators said the apparent aim was to intimidate and discourage further criticism.
The campaign also involved forged legal threats. The ChatGPT user described creating fake documents made to resemble official records from a U.S. county court, intended to pressure social media platforms into removing dissidents’ accounts.
In another instance, the user documented plans to fake a dissident’s death, including drafting a false obituary and generating images of a gravestone. False rumors of the individual’s death later appeared on Chinese-language internet platforms, matching details recorded in the ChatGPT conversations.
AI Used to Increase Productivity and Scale
OpenAI said the campaigns used artificial intelligence to improve speed and efficiency, not to replace human operators. Actors relied on ChatGPT to draft messages, edit content, translate material, and plan posting strategies. The broader operations, however, still depended on fake social media accounts, external platforms, and traditional propaganda methods.
Despite those advantages, OpenAI said it found limited evidence that the campaigns reached large audiences or generated significant engagement. Many of the accounts were identified and removed before gaining traction.
Broader Stakes in the AI Rivalry
The findings come as Washington and Beijing compete for dominance in artificial intelligence, a rivalry that extends beyond technology into surveillance, information control, and national security. Human rights groups and Western officials have long warned that Beijing pressures critics abroad through surveillance, harassment, and legal threats, a practice known as transnational repression. Investigators say AI is now strengthening those efforts, helping operators plan campaigns, craft messaging, and expand their reach.
Nimmo said the competition is unfolding not only at the technological frontier, but in how governments deploy AI across their intelligence and information systems.
Technology companies, including OpenAI, say they are strengthening monitoring systems and publishing threat reports to identify and disrupt misuse of their platforms.