Eliot Higgins, the founder of the open-source investigative outlet Bellingcat, was reading this week about the expected indictment of Donald Trump when he decided he wanted to visualize it.
He turned to an AI art generator, giving the technology simple prompts, such as, "Donald Trump falling down while being arrested." He shared the results - images of the former president surrounded by officers, their badges blurry and indistinct - on Twitter. "Making pictures of Trump getting arrested while waiting for Trump's arrest," he wrote.
"I was just mucking about," Higgins said in an interview. "I thought maybe five people would retweet it."
Two days later, his posts depicting an event that never happened have been viewed nearly 5 million times, creating a case study in the increasing sophistication of AI-generated images, the ease with which they can be deployed and their potential to create confusion in volatile news environments. The episode also makes evident the absence of corporate standards or government regulation addressing the use of AI to create and spread falsehoods.
"Policymakers have been warning for years about the potential misuse of synthetic media to spread disinformation and more generally to sow confusion and discord," said Sen. Mark R. Warner (D-Va.), the chairman of the Senate Intelligence Committee. "While it took a few years for the capabilities to catch up, we're now at a point where these tools are widely available and incredibly capable."
Warner said developers "should already be on notice: if your product directly enables harms that are reasonably foreseeable, you can be held potentially liable." But he said policymakers also have work to do, calling for new obligations to ensure firms are addressing the dangers of artificial intelligence.
The leading online platforms where such images are disseminated have inconsistent policies on the matter. Twitter did not provide comment in response to a Washington Post inquiry about the images. A Meta spokeswoman pointed to examples of the images - which quickly leaped off Twitter to other platforms - being fact-checked on its services, including on the photo-sharing app Instagram. YouTube and TikTok did not immediately respond to requests for comment.
"Missing Context. Independent fact-checkers say information in this post could mislead people," reads the red text below one of the images in a post shared by an account with more than 3,000 followers.
On Facebook, however, one of the images was left untouched when shared by a user with three times as many followers. "In an unprecedented turn of events, former President Donald Trump was arrested and escorted to federal prison," wrote the user, who describes himself as a blogger and former U.S. infantryman. "The shocking image of Trump, with officers holding both of his hands, quickly circulated on social media."
There was a penalty for Higgins, but it was exacted by Midjourney, the art generator he had used to create the visuals. And it arrived without explanation. He said he was locked out of Midjourney's server on Wednesday but received no communication from the company about what rules he had violated.
Midjourney, which describes itself as an "independent research lab exploring new mediums of thought and expanding the imaginative powers of the human species," did not respond to a request for comment.
"I thought, 'Oops, looks like there's been a consequence for my actions,'" Higgins said.
Trump over the weekend wrote on Truth Social, his social networking site, that he expected to be arrested on Tuesday, priming his supporters for images of his apprehension.
"It's the first visual collateral of Trump getting arrested, even if he's not," said Angelo Carusone, the president of Media Matters for America, the left-leaning watchdog group. "This is going to be the image that a lot of people have in their minds even if Trump doesn't end up getting indicted - let alone arrested."
When Higgins began circulating the Trump images, he made clear that they were fakes. But the cascade of visuals he was able to seed across the internet shows "there's been a giant step forward in the ability to create fake but believable images at volume," said Sam Gregory, executive director of the human rights organization Witness. "And it's easy to see how this could be done in a coordinated way with an intent to deceive."
Defects of the technology that raise doubts about the authenticity of the images - six-fingered hands or metallic skin - don't undercut its disruptive potential, said Gregory, whose group convened experts last fall from across the technology industry, law, art and entertainment to identify ways to "support the nascent power of synthetic media for advocacy, parody and satire, while confronting the gray areas of deception, gaslighting and disinformation." Among the potential actions they identified were disclosure and labeling of how media is made.
"The aim," Gregory said, "may not be to convince people that a certain event happened but to convince people that they can't trust anything and to undermine trust in all images."
The technology is advancing fast. Higgins said he used Version 5 of Midjourney's art generator to create the Trump images. The latest version, for which he pays $30 per month, is much more sophisticated than its immediate predecessor, he said, vastly improving a set of visuals he created of U.S. presidents as popes. "The tool seems to be learning more about image coherence," he said.
The technology appears to build iteratively on its knowledge of certain visuals, Higgins said. "I'm pretty sure it's about the number of people it's been trained on with a given name," he explained. "For celebrities, the less famous they are internationally, the less accurate the images."
Once his images of Trump's fictitious arrest began taking off, he decided to complete the story arc - adding images of a trial and an ultimate escape from prison. Certain cues tripped up the technology. "I did one of Donald Trump carving a key out of soap, but it generated an orange Donald Trump out of soap, which is interesting but not what I was going for," Higgins said.
Complex themes also present problems for the technology, he said. Given fears that such tools may be used by conspiracy theorists, Higgins has sought to simulate their efforts. But his attempts have mostly fallen flat. His instructions to create an image of two bugbears of the political right laughing together - Anthony S. Fauci, the nation's former leading infectious-disease doctor, and George Soros, the liberal financier - led to an image of the two men merged into a single figure, Higgins said.
Synthetic media has been used in recent weeks to push falsehoods online, sometimes with unclear provenance and other times with the material's creator gloating about it.
Last month, a faked video spread on Twitter appearing to show Sen. Elizabeth Warren (D-Mass.) claiming that Republicans should not be allowed to vote. Twitter labeled it "altered audio," and one of the main accounts circulating the falsified material was later suspended.
When far-right activist Jack Posobiec tweeted a video seeming to show President Biden announcing a military draft to answer Russia's offensive in Ukraine, he described it as a "sneak preview of things to come." Twitter applied a label to the tweet stating, "The video shown is a 'deepfake' created with the aid of artificial intelligence (AI)."
Posobiec, in an appearance this month at the Conservative Political Action Conference, defended the tactic. "So this week, I made a deepfake of Joe Biden that got a little bit of attention," he said. The move met pushback not just from fact-checkers, he said, but also from some on the right who asked him, he said, "How can you do that? Why would you make something like this?"
"Screw them all," he said of fact-checkers and mainstream news outlets.
Major technology companies bolstered their policies against deepfakes after the rapid spread on social media of a distorted video of House Speaker Nancy Pelosi (D-Calif.) in 2019. The following year, Meta banned users from posting highly manipulated videos but left the door open for manipulated videos meant as parody or satire. Twitter also introduced a rule prohibiting users from sharing deceptive and manipulated media that may cause harm, such as tweets that could lead to violence or widespread civil unrest, or that threaten someone's privacy.
"Since then, the technology has gotten more sophisticated but a lot harder to detect," Carusone said. "And none of them have made any significant investments in how they are not only going to detect these issues but then enforce their policies against it."
He added: "What I see here is kind of the opening salvo in a new front in the war against disinformation."
Jessica González, co-CEO of the media advocacy group Free Press, said the tech giants are less equipped to combat deepfakes following widespread layoffs in the industry. Meta, Google and Twitter have collectively laid off tens of thousands of workers in recent months.
Since taking over Twitter in October, Elon Musk has softened the platform's policies against hate speech, reinstated extremist influencers and scaled back the company's content moderation practices through drastic cuts in its workforce.
"I don't know that Twitter has the personnel or the desire frankly to make sure that the content shared on its site is accurate," González said. "There's other, better deepfakes that pose an even greater risk especially as we move into election season. And it's important that people have accurate information about the people running for office."
WASHINGTON POST