AI-Generated Deepfake Cyberbullying Emerges as New Crisis for Schools Nationwide

Published: Dec. 23, 2025
A school bus carries children at the end of a school day at Sixth Ward Middle School in Thibodaux, La., on Dec. 11, 2025. (Stephen Smith/AP Photo)

Schools across the United States are confronting a rapidly evolving form of cyberbullying as artificial intelligence tools are increasingly used to create realistic, sexually explicit deepfake images of students, often with devastating consequences for victims and limited preparedness among educators.

Law enforcement officials and child safety advocates say the technology, which can transform ordinary photos into fabricated nude images within minutes, has outpaced school policies and disciplinary frameworks. What was once a technically complex process now requires little more than a smartphone app, making misuse both widespread and difficult to contain.

The issue gained renewed attention this fall following a case in Louisiana, where AI-generated nude images circulated among middle school students. While two boys were later charged under a newly enacted state law targeting AI-generated explicit material, the incident exposed gaps in how schools respond when digital abuse unfolds in real time.

Officials involved acknowledged that the speed and anonymity of the technology complicated efforts to intervene before the situation escalated.

Sheriff Craig Webre of Lafourche Parish warned that the accessibility of AI tools has fundamentally changed the landscape of student misconduct.

“While the ability to alter images has been available for decades, the rise of A.I. has made it easier for anyone to alter or create such images with little to no training or experience,” he said in a public statement, calling on parents to address the issue directly with their children.

The Louisiana case is not isolated. According to data from the National Conference of State Legislatures, at least half of U.S. states enacted laws in 2025 aimed at regulating deepfakes and other generative AI misuse. Some statutes specifically address simulated child sexual abuse material, reflecting growing concern over how frequently minors are being targeted.

The scope of the problem is underscored by federal reporting trends. The National Center for Missing and Exploited Children reported a sharp surge in AI-generated child sexual abuse images submitted to its cyber tipline, climbing from thousands in 2023 to hundreds of thousands within the first half of 2025 alone.

Researchers say the harm inflicted by AI-driven harassment differs from traditional bullying. Sergio Alexander, a research associate at Texas Christian University who studies emerging technologies, noted that fabricated images can recirculate online indefinitely, prolonging emotional distress. Victims often struggle to disprove the content because of its realistic appearance, contributing to anxiety, depression, and social withdrawal.

Despite the growing threat, experts say many schools remain unprepared. Sameer Hinduja, co-director of the Cyberbullying Research Center, has urged districts to update their conduct codes and clearly communicate consequences related to AI misuse. Without clear policies, he said, students may assume adults are unaware or incapable of responding.

“So many of them are just so unaware and so ignorant,” he said. “We hear about the ostrich syndrome, just kind of burying their heads in the sand, hoping that this isn’t happening amongst their youth.”

Parents are also being encouraged to take a proactive role. Technology educators emphasize open conversations about deepfakes, stressing that children should feel safe reporting incidents without fear of punishment.

As artificial intelligence continues to advance, officials warn that addressing deepfake cyberbullying will require coordinated effort from lawmakers, educators, parents, and law enforcement before more students are harmed by technology built for innovation, not abuse.

The Associated Press contributed to this report.