WASHINGTON—After a slowed-down video that made House Speaker Nancy Pelosi (D-Calif.) look drunk circulated online last month, the national conversation around fake and misleading media has spread into Congress.
U.S. lawmakers are trying to get a handle on what to do about the phenomenon of fake media, which can range from a minor tweak of an image, audio file, or video clip to a full-blown artificial creation.
“Can anyone pick out which of these faces are real and which are fake?” Congressman Adam Schiff (D-Calif.) asked the audience during a House Permanent Select Committee on Intelligence hearing on June 13, showing a collage of four very real-looking portraits. All of them were fake.
“Thinking ahead to 2020 and beyond, one does not need any great imagination to envision even more nightmarish scenarios that would leave the government, the media, and the public struggling to discern what is real and what is fake,” he said.
The witnesses at the hearing agreed that the consequences for public trust in politicians, institutions, and democracy writ large could be dire.
“And it’s likely to get much worse before it gets better, but we have to continue to do what we can,” said University at Buffalo professor David Doermann, who was one of four witnesses.
But what government should do about it—if it should do anything at all—is a hotly debated question.
Doermann argued that the tools to detect fake media should be given to those who unknowingly spread it—social media users—and that the government should play an ancillary role.
Social media platforms should play a role in identifying fake content, he said, by alerting users that what they’re seeing may not be real.
“Most of the times it’s conservatives that get banned [from social media platforms] and not Democrats,” he said. “It’s all in who’s building the filter.”
Tech Oligarchs treat Conservatives as peasants -Carlos Maza’s Reign of Terror https://t.co/JgNH9qxjw0
— Devin Nunes (@DevinNunes) June 7, 2019
University of Maryland law professor Danielle Citron argued that while using the courts to combat defamatory fake media is expensive and time-consuming, having the right laws to allow for legal redress for reputational harm is part of the solution. She also supports laws that would increase the responsibility of platforms for the content they allow on their sites.
Clint Watts, a research fellow at the German Marshall Fund, also suggested that Congress pass a law to deter government officials and agencies from publishing false or manipulated content, because the government “must always be the purveyor of facts and truth,” he said.
He also suggested that the government work with the private sector on tools that could, wherever possible, inform users of a piece of media’s origin.
Doermann warned that as technology becomes more advanced, so will creators’ ease and ability to hide or fake the origin of media.
“Let there be no question that this is a race,” he said. “The better manipulators get, the better detectors need to be.”
As for social media platforms using AI to detect fake media, Watts warned that using AI alone could not do the job. He cited a piece by The Onion, which showed a fake Joe Biden washing a car outside the White House with the headline “Shirtless Biden Washes Trans Am In White House Driveway.”
The photos are obviously fake and would probably be flagged by bots, but most Americans know that The Onion is a satirical site and wouldn’t be fooled.
“If we went to that extreme, we would have a country where everything that’s ever been changed or modified for any reason would have to be policed,” he said.
While the panelists disagreed about what exactly the government’s role should be, they all agreed that there is no silver bullet, and that tackling the problem requires a multipronged approach.