An unprecedented wave of AI-generated content regarding the U.S.-Israel military campaign against Iran has garnered hundreds of millions of views on social media, polluting the information environment during a consequential regional conflict.
The deluge includes AI-generated videos, fake satellite imagery, and other manipulated visuals produced by foreign governments, state media outlets, and opportunistic creators since the conflict began on February 28. One widely shared AI-generated video appearing to show missiles striking Tel Aviv was featured in over 300 posts and viewed tens of millions of times.
Another fake clip depicting the Burj Khalifa in Dubai engulfed in flames circulated widely as residents feared drone and missile attacks. Iran’s state-affiliated Tehran Times shared a fabricated satellite image claiming to show damage to a U.S. radar facility in Qatar. Google’s SynthID watermark detection tool confirmed the image was created or altered using a Google AI product.
The fabricated image was in fact derived from genuine satellite imagery of a U.S. naval base in Bahrain taken in February 2025. Video game footage has also been passed off as real combat: Texas Governor Greg Abbott reposted a clip from the game War Thunder with the caption “Bye bye,” then deleted the post after a Community Note flagged it as game footage.
BBC Verify senior journalist Shayan Sardarizadeh stated that “this war might have already broken the record for the highest number of AI-generated videos and images that have gone viral during a conflict”. X’s AI chatbot Grok repeatedly failed to identify fake content, insisting an AI-generated video of missiles hitting Tel Aviv was real, citing reports from Newsweek and Reuters to bolster its false confirmation.
X did not respond to BBC Verify’s request for comment on the chatbot’s behavior. On March 4, X’s head of product Nikita Bier announced that creators who post AI-generated videos of armed conflict without disclosure would be suspended from the Creator Revenue Sharing Program for 90 days, with repeat offenders facing permanent removal.
The platform said it will flag violations through Community Notes and metadata from generative AI tools. Bier stated that “with today’s AI technologies, it is trivial to create content that can mislead people”. Critics argue that X’s engagement-based revenue-sharing model incentivizes sensationalized and misleading content.
Expert Mahsa Alimardani warned that “fake videos like these undermine the public’s trust in verified information available online and complicate the documentation of real evidence”. BBC Verify is a fact-checking unit of the British Broadcasting Corporation, and the investigation was published on March 6.