
AI-generated image of Pentagon explosion that never happened spreads on Twitter

The image, originally said to show a large explosion near the Pentagon complex, appears to be AI-generated.

WASHINGTON — Officials say there was no explosion at the Pentagon Monday morning, following the spread of a viral image showing a large column of black smoke. Arlington firefighters and Pentagon Force Protection Agency officers said there was no danger to the public.

The image was originally shared on Twitter just after 9 a.m. by the user OSINTdefender, a profile described as an open-source intelligence monitor. The image quickly spread across Twitter before the original poster removed the initial tweet about an hour later.

Arlington Fire Department officials made it clear that there was no need to be alarmed by the image.

"There is NO explosion or incident taking place at or near the Pentagon reservation, and there is no immediate danger or hazards to the public," the Arlington County Fire Department tweeted. 

Nick Waters, a London-based researcher for the open-source investigative journalism group Bellingcat, says it took him just seconds to figure out the image is a fake. He points out that the building in the photograph doesn't look anything like the Pentagon. He also points to a distorted fence that morphs into a crowd barrier.

"If a bomb goes off in Washington, it's not just going to be one picture from one angle," Waters said. "You're going to get scores and scores of pictures showing the explosion, showing the after-effects of the explosion [and] people saying they saw the explosion."

The image appears to have been produced by generative artificial intelligence, or gen-AI, a form of AI that creates something completely new in response to a user's prompt, or command.

Siwei Lyu, the director of the UB Media Forensic Lab at SUNY Buffalo and a deepfake expert, said that while gen-AI can be helpful, it can also do harm in the wrong hands. It's being used to "recreate or produce realistic types of media," including images, video, audio and some text, Lyu said. This includes impersonations and fake content.

Another picture suggesting an explosion at the White House also hit Twitter Monday morning. Waters said that too was fake, but it got a lot less traction than the Pentagon image. 

Waters pointed to distorted cars in the foreground as a clear sign it was produced by AI. The White House in the image had a huge office building pasted on the back.

Waters fears the impact of hoaxes like this is that people come to mistrust everything on the internet, even though "there's so much useful information online."

“You can use AI models to recreate human faces, making them look like real people, but [they are] actually nobody alive,” Lyu said. “You can also use an algorithm to transfer or convert voices using text to voice … You can have videos where the subject is generated by, again, generative AI models to impersonate a particular person.”

Lyu offered some advice to avoid getting duped by gen-AI videos, text and audio.

When it comes to images, use a reverse image search tool such as TinEye to learn where an image came from. You can also examine the photo's metadata, the information embedded in the image file itself, which can be used along with other evidence to help verify whether an image is real.
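As a minimal illustration of the metadata check described above, the sketch below (an assumption-laden example, not a full metadata parser) scans a JPEG file's bytes for an EXIF segment, the standard container for camera metadata. AI-generated images and screenshots often carry no EXIF data at all; note that its absence is only a hint, not proof, and EXIF data can also be forged or stripped by legitimate platforms.

```python
def has_exif(jpeg_bytes: bytes) -> bool:
    """Return True if the JPEG data contains an EXIF (APP1) segment.

    JPEG files start with the SOI marker 0xFFD8. EXIF metadata, when
    present, lives in an APP1 segment (marker 0xFFE1) whose payload
    begins with the identifier b"Exif\x00\x00".
    """
    if not jpeg_bytes.startswith(b"\xff\xd8"):
        return False  # not a JPEG
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            return False  # malformed marker stream
        marker = jpeg_bytes[i + 1]
        if marker == 0xD9:  # EOI: end of image, no EXIF found
            return False
        # Each segment records its own length (includes the 2 length bytes)
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + length  # skip to the next marker
    return False


# Usage: pass the raw bytes of an image, e.g. open("photo.jpg", "rb").read()
sample_with_exif = b"\xff\xd8\xff\xe1\x00\x08Exif\x00\x00"
sample_without = b"\xff\xd8\xff\xd9"
print(has_exif(sample_with_exif))   # True
print(has_exif(sample_without))     # False
```

A dedicated tool or library gives far richer detail (camera model, timestamps, GPS), but even this simple presence check shows how much verification signal lives outside the pixels themselves.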
