From Dating Scams to Fake Lawyers: OpenAI Details ChatGPT Misuse in New Threat Report

OpenAI said it banned accounts linked to Chinese law enforcement, romance scammers and influence operations, including a smear campaign against Japan's first woman prime minister, in a report detailing the misuse of its ChatGPT technology.
Published: 2/25/2026, 11:18:32 PM EST
A person stands next to the logo of ChatGPT, an AI-powered chatbot by OpenAI at Bharat Mandapam, one of the venues for AI Impact Summit, in New Delhi, India, on Feb. 17, 2026. (Bhawika Chhabra/Reuters)

The company said several accounts used its chatbot alongside other tools, including social media accounts, to carry out cybercrimes while posing as a dating agency, law firms and U.S. officials, among others.

Here are some details from OpenAI:

A small set of accounts that likely originated in China used OpenAI's models to request information about U.S. persons, online forums and federal building locations, and sought guidance on face-swapping software.

The same accounts also generated English-language emails to state-level U.S. officials and policy analysts working in business and finance, inviting targets to participate in paid consultations.

OpenAI said it banned a ChatGPT account linked to an individual associated with Chinese law enforcement whose activity involved orchestrating a covert influence operation targeting Japanese Prime Minister Sanae Takaichi.

A cluster of ChatGPT accounts used the chatbot to run a dating scam targeting Indonesian men and likely defrauded hundreds of victims a month, according to OpenAI.

OpenAI said the scam used ChatGPT to generate promotional text and ads for a fake dating service, luring users onto the platform and pressuring targets to complete a series of tasks requiring large payments.

Several accounts used OpenAI's models to pose as law firms and impersonate real attorneys and U.S. law enforcement, targeting fraud victims, OpenAI said.