How the internet broke everyone’s bullshit detectors
At a glance:
- Synthetic Lego‑style propaganda can be produced in 24 hours, outpacing verification.
- Automated traffic now accounts for about 51% of all internet activity, amplifying low‑quality virality.
- A five‑step verification workflow offers practical checks for spotting hybrid AI‑generated images.
What happened
The online ecosystem is being flooded with short, Lego‑style propaganda videos built around war‑crimes claims. One Iran‑linked outlet, Explosive News, reportedly turns a two‑minute synthetic segment around in roughly 24 hours. The speed is intentional: the content only needs to travel far enough to be seen before fact‑checkers can catch up. The phenomenon mirrors a recent White House incident in which two vague “launching soon” videos were posted, then removed after open‑source investigators dissected them. The clips turned out to be a promotional push for the official White House app, but the episode highlighted how official communications have adopted the aesthetics of leaks, memes and platform‑native intrigue.
Why it matters
The meaning of a “zero digital footprint” has flipped: once read as a sign of authenticity, it is increasingly a signal of fabrication, and that reversal is reshaping how audiences judge truth. According to the 2026 State of AI Traffic & Cyberthreat Benchmark Report, automated traffic now commands an estimated 51% of internet activity and is scaling eight times faster than human traffic. These bots do more than distribute content; they prioritize low‑quality virality, ensuring synthetic material spreads while verification lags. The rise of “super sharers” backed by paid verification further muddies the waters, creating a false sense of authority that traditional open‑source intelligence (OSINT) must now navigate.
Expert perspectives
Maryam Ishani, an OSINT journalist covering the conflict, says, “We’re perpetually catching up to someone pressing repost without a second thought. The algorithm prioritizes that reflex, and our information is always going to be one step behind.” Manisha Ganguly, visual forensics lead at The Guardian, warns that the flood of aggregated content on Telegram and X can generate false certainty, especially when OSINT is used to cosmetically validate official accounts rather than interrogate them. Adding to the problem, Planet Labs announced on April 4 that it would indefinitely withhold imagery of Iran and the broader Middle East conflict zone, retroactive to March 9, after a U.S. government request. U.S. Defense Secretary Pete Hegseth responded bluntly: “Open source is not the place to determine what did or did not happen.”
Generative AI is getting harder to spot
Investigative trainer Henk van Ess notes that classic AI tells—incorrect finger counts, garbled protest signs, distorted text—have largely been fixed in newer models like Imagen 3, Midjourney and DALL·E. The new challenge is the “hybrid” image, where 95% of a photograph is genuine (real metadata, sensor noise, lighting physics) and only a tiny detail is synthetic, such as a weapon added to a hand or a subtle face swap. Pixel‑level detectors often miss these manipulations because they are tuned to image‑wide anomalies rather than small localized edits. Deepfake researcher Henry Ajder adds that AI‑generated media is now embedded in everyday feeds, with high‑quality synthetic content looking entirely credible, and detection tools frequently returning confidence scores without explanatory context.
How to verify synthetic content
Van Ess proposes a five‑step workflow for anyone trying to slow the spread of dubious media:
- Look for Hollywood. Overly cinematic lighting, dramatic composition or symmetrical disaster scenes are red flags.
- Run multiple reverse‑image searches. Use Google Lens, Yandex and TinEye; a lack of matches may indicate the image was never photographed.
- Zoom into the margins. Examine peripheral details—parking signs, manhole covers, shadow angles—for inconsistencies.
- Treat detection tools as prompts, not verdicts. Prefer tools that show provenance or fact‑checker database hits over raw confidence percentages; ImageWhisperer is one free option.
- Find “patient zero.” Trace the earliest appearance; authentic material usually attaches to a witness, photographer or location, whereas synthetic content often appears frictionless and polished.
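The reverse‑image and “patient zero” steps both come down to asking whether an image has an earlier, slightly different ancestor. Reverse‑image engines typically rely on perceptual hashing for this; purely as an illustration (not any engine’s actual code), here is a minimal average‑hash sketch in pure Python, operating on a hypothetical 8×8 grayscale grid rather than a real photo file:

```python
def average_hash(pixels):
    """Perceptual 'average hash': one bit per pixel of a small grayscale grid,
    set when that pixel is brighter than the grid's mean brightness."""
    flat = [v for row in pixels for v in row]
    avg = sum(flat) / len(flat)
    return [1 if v > avg else 0 for v in flat]

def hamming(h1, h2):
    """Count differing bits between two hashes; small distance = near-duplicate."""
    return sum(a != b for a, b in zip(h1, h2))

# Hypothetical 8x8 grayscale gradient standing in for a decoded image
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]

# A 'hybrid' edit: one pixel region altered (e.g. an object pasted in)
edited = [row[:] for row in original]
edited[0][0] = 255

d = hamming(average_hash(original), average_hash(edited))
print(d)  # only a couple of bits flip for a tiny local edit
```

A tiny Hamming distance between an uploaded image and an older post is exactly the signature of a hybrid manipulation: the file is “new,” but its visual fingerprint already exists, which is why tracing the earliest near‑match matters more than any single detector’s confidence score.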
Ajder argues that the long‑term solution lies in provenance systems that can verify origin rather than endlessly chasing fakes. Until such infrastructure scales, the burden remains on users to pause, scrutinize and resist the reflex to repost.
Looking ahead
The convergence of rapid synthetic media production, algorithmic amplification and restricted access to primary visual evidence creates a perfect storm for misinformation. As governments tighten satellite data access and AI models become more seamless, the line between reality and fabrication will continue to blur. Stakeholders—from platform operators to journalists and the public—must adopt a more skeptical, behavior‑based defense: a moment of hesitation before sharing could become the most effective tool against a flood of believable yet fabricated content.
FAQ
How quickly can outlets like Explosive News create synthetic propaganda?
Reportedly within about 24 hours for a two‑minute synthetic segment, fast enough to spread before fact‑checkers can respond.
What percentage of internet activity is now driven by automated traffic?
An estimated 51%, according to the 2026 State of AI Traffic & Cyberthreat Benchmark Report, growing eight times faster than human traffic.
What are the five steps recommended for verifying potentially synthetic images?
Look for overly cinematic “Hollywood” aesthetics, run multiple reverse‑image searches, zoom into peripheral details, treat detection tools as prompts rather than verdicts, and trace the content back to “patient zero.”
Prepared by the editorial stack from public data and external sources.