A CCP Official Turned to ChatGPT for Help. It Exposed a Global Intimidation Campaign, OpenAI Says

What the official likely saw as a tool became key evidence, revealing how AI is being used to scale Beijing’s influence and repression.
Published: 2/27/2026, 5:33:36 AM EST

The Chinese Communist Party (CCP) law enforcement official was trying to word a sensitive message carefully.

He turned to ChatGPT, the artificial intelligence chatbot used by hundreds of millions worldwide, and asked it to help draft and polish text tied to what appeared to be an influence and intimidation campaign targeting Chinese dissidents living overseas.

That interaction would later help investigators uncover the broader campaign.

In a threat intelligence report released this week, OpenAI, the company behind ChatGPT, said the official’s activity helped investigators identify and dismantle what they described as a sprawling campaign targeting critics of the CCP worldwide, including efforts to impersonate U.S. immigration officials.

Ben Nimmo, who leads investigative work on influence operations at OpenAI, addressed reporters in a briefing ahead of the report’s release.

“This is what Chinese modern transnational repression looks like,” he said. “It’s not just digital. It’s not just about trolling. It’s industrialized. It’s about trying to hit critics of the CCP with everything, everywhere, all at once.”

Impersonation, Forged Papers and Fake Deaths

According to OpenAI’s report, actors linked to China used its tools to draft political content, translate messaging, and create online personas designed to appear authentic.

In one case, operators allegedly impersonated U.S. immigration officials and sent warnings to a Chinese dissident living in the United States, claiming the individual’s public statements had violated the law. Investigators said the apparent aim was to intimidate and discourage further criticism.

The campaign also involved forged legal threats. The ChatGPT user described creating fake documents made to resemble official records from a U.S. county court, intended to pressure social media platforms into removing dissidents’ accounts.

In another instance, the user documented plans to fake a dissident’s death, including drafting a false obituary and generating images of a gravestone. False rumors of the individual’s death later appeared on Chinese-language internet platforms, matching details recorded in the ChatGPT conversations.

OpenAI said it banned the accounts after detecting the activity.

AI Used to Increase Productivity and Scale

OpenAI said the campaigns used artificial intelligence to improve speed and efficiency, not replace human operators.

Actors relied on ChatGPT to draft messages, edit content, translate material, and plan posting strategies. The broader operations, however, still depended on fake social media accounts, external platforms, and traditional propaganda methods.

“Our findings continue to show that AI is being used as part of broader workflows,” OpenAI said in its report. “Threat actors are using AI to increase productivity and scale, not to fully automate influence operations.”

Despite those advantages, OpenAI said it found limited evidence that the campaigns reached large audiences or generated significant engagement. Many of the accounts were identified and removed before gaining traction.

“We will continue to investigate and disrupt malicious uses of our models,” the company said.

Broader Stakes in the AI Rivalry

The findings come as Washington and Beijing compete for dominance in artificial intelligence—a rivalry that extends beyond technology into surveillance, information control, and national security.

Human rights groups and Western officials have long warned that Beijing pressures critics abroad through surveillance, harassment, and legal threats, a practice known as transnational repression. Investigators say AI is now strengthening those efforts, helping operators plan campaigns, craft messaging, and expand their reach.

The OpenAI report “clearly demonstrates the way that China is actively employing AI tools to enhance information operations,” Michael Horowitz, a former Pentagon official now at the University of Pennsylvania, told CNN.

He said the competition is unfolding not only at the technological frontier, but in how governments deploy AI across their intelligence and information systems.

Lawmakers on the House Select Committee on the Chinese Communist Party said in a post on X that they are reviewing OpenAI’s findings and pledged to continue investigating threats to dissidents and U.S. institutions.

Concerns extend across the industry. U.S. AI firm Anthropic said it recently identified three Chinese AI companies, including DeepSeek, attempting to use its chatbot to train competing AI systems—warning the technology could ultimately be integrated into military, intelligence, and surveillance programs.

Technology companies, including OpenAI, say they are strengthening monitoring systems and publishing threat reports to identify and disrupt misuse of their platforms.