Before The Smoke Clears

Manipulation Breakdowns · 9 min read · By D0

The Thirteen Minutes

At 12:05 PM Tehran time on June 23, 2025, a deepfake video appeared on X. It showed the bombing of Evin Prison — Iran’s most notorious political detention facility, where thousands of dissidents have been held.

The actual IDF airstrike on the prison didn’t end until 12:18 PM.

The deepfake came first.

That thirteen-minute gap is not a technical footnote in a Citizen Lab report. It’s evidence of something new in how information warfare works: the moment when influence operations learned to run not after military events but simultaneously with them — during the window when people are most hungry for information, most likely to form first impressions, and least positioned to verify anything.

The operation is called PRISONBREAK.

What the Citizen Lab Found

In October 2025, researchers from the Citizen Lab at the University of Toronto, working with Clemson University’s Media Forensics Hub, published a detailed analysis of a coordinated network of more than fifty inauthentic X accounts targeting Iranian audiences with regime-change narratives.

The network was built in 2023. It sat dormant until January 2025, then became intensely active — a sleeper architecture waiting for the right conditions. The accounts spread content calling for the overthrow of the Islamic Republic: civil unrest, economic collapse, infrastructure failure, prison uprisings. The content was produced with AI. Synthetic profile pictures. Fabricated BBC Persian news screenshots. Deepfake video of prison bombings and military explosions. Audio manipulations of protest songs with AI-generated likenesses of Iranian singers.

The attribution assessment, careful as such assessments must be: the operation was most likely conducted by an Israeli government agency or a private contractor working under close Israeli government supervision.

The operation has received, as analysts noted in reviewing the 2026 war’s disinformation landscape, considerably less public attention than Iranian AI fakes — despite comparable strategic significance. Attribution bias shapes which disinformation gets scrutinized. Adversary fakes get examined; ally fakes get filed.

But what separates PRISONBREAK from the standard coordinated inauthentic behavior operation isn’t the attribution. It’s the timing.

The Pre-Staged Pipeline

How does a deepfake video of a prison bombing appear thirteen minutes before the bombing is finished?

Two possibilities exist, and both are significant.

The first: the operation had foreknowledge of the strike, obtained through coordination with Israeli military planners. Influence operations were not an afterthought to the kinetic operation but a synchronized component — information warfare integrated into the military planning cycle.

The second: the deepfake was prepared in advance as a contingency. Staged and ready to deploy the moment monitoring confirmed an active strike was underway. You don’t know exactly when the bombing will happen, but when it does, the false narrative is already produced and queued.

This is the pre-staged contingency pipeline. You identify probable targets. You produce AI-generated content for each scenario. You build a deployment system ready to push the content within minutes of a trigger signal. The production happens before the event. The deployment is automated.
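
The architecture is simple enough to sketch. What follows is a minimal, abstract model of the pattern just described, written to make the temporal logic explicit; every name in it (ContingencyScenario, ContingencyLibrary, on_signal) is illustrative, and nothing in the Citizen Lab reporting describes the operation's actual tooling.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Abstract model of a pre-staged contingency pipeline. All names are
# illustrative; this captures the temporal logic only, not real tooling.

@dataclass
class ContingencyScenario:
    target: str              # probable event, e.g. a named site
    assets: list[str]        # AI-generated content, produced in advance
    trigger_terms: set[str]  # signals indicating the event is underway
    deployed: bool = False

class ContingencyLibrary:
    def __init__(self) -> None:
        self.scenarios: list[ContingencyScenario] = []

    def add(self, scenario: ContingencyScenario) -> None:
        # All production cost is paid here, before any event occurs.
        self.scenarios.append(scenario)

    def on_signal(self, signal_text: str) -> list[str]:
        # Deployment is near-free: match the signal, release the queue.
        words = set(signal_text.lower().split())
        released: list[str] = []
        for s in self.scenarios:
            if not s.deployed and s.trigger_terms & words:
                s.deployed = True
                stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
                print(f"[{stamp}] releasing {len(s.assets)} assets for '{s.target}'")
                released.extend(s.assets)
        return released

# Stage many scenarios cheaply; fire on the first confirmed signal.
library = ContingencyLibrary()
library.add(ContingencyScenario("site-A", ["video_a.mp4", "image_a.png"],
                                {"strike", "explosion"}))
library.add(ContingencyScenario("site-B", ["video_b.mp4"], {"blackout"}))
library.on_signal("confirmed reports of a strike near site-A")
```

The point of the sketch is structural: everything expensive happens in add, before any event; on_signal is just a lookup.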

Previously, this was expensive. You needed footage — either genuine footage recontextualized, or fabricated video requiring professional production resources. The production costs imposed a natural limit on how many contingencies you could prepare for, and how quickly you could deploy.

Generative AI eliminates this constraint.

A capable AI pipeline produces convincing video, images, and audio of events that haven’t occurred yet, across many contingency scenarios, at low cost and high speed. The PRISONBREAK operation demonstrated this at scale: deepfake footage of prison bombings, synthesized to resemble genuine documentary footage, prepared and held ready for deployment.

The economics have changed. What previously required substantial production resources — specialized staffing, equipment, skilled fabricators, time — now requires an AI pipeline and a deployment trigger. The marginal cost of an additional contingency scenario approaches zero. You can pre-stage for a dozen potential targets simultaneously, update the library as strategic priorities shift, and fire on the first confirmed signal.

Why Timing Is the Weapon

Information environments during crisis operate differently from normal conditions.

When a significant military event happens, there's an acute information deficit. People know something occurred. They don't know what. The demand for information spikes before supply can meet it. This creates a window, typically measured in hours, in which whatever arrives first fills the vacuum.

Research on the “continued influence effect” consistently shows that corrections don’t fully undo the impact of initial false information. People told that a warehouse fire released toxic chemicals, then told the warehouse was actually empty, continue to reason as if some contamination occurred. The original claim leaves a residue that accepted corrections don’t fully clear.

In the acute window, first impressions are formed. Those who encounter the deepfake first — before fact-checkers respond, before official statements appear, before analysis exists — form impressions that corrections will struggle to fully undo.

This is why timing is the weapon, not content quality.

The deepfake doesn’t need to survive scrutiny. It doesn’t need to fool forensic experts. It needs to arrive during the window — seeding a specific narrative while the event is still unresolved, reaching the fraction of the audience that will carry that first impression forward even after the correction lands.

Thirteen minutes ahead of verification isn’t a technical limitation of the operation. It’s the product.

Machine-Speed Narrative

Researchers reviewing the 2026 war’s information environment described what PRISONBREAK demonstrated as “the synchronisation of kinetic and narrative operations at machine speed” — a qualitative escalation from previous influence operation models.

The previous model: events happen, operators observe, fabricators construct narratives, distributors push them. Each step takes time. Platforms and fact-checkers operate in the same temporal window, with a reasonable chance of reaching audiences before the false narrative embeds.

The synchronized model operates in a different temporal architecture. The false narrative is prepared before the event and deployed during it. Fact-checkers encounter a claim that has already circulated, formed impressions, and been shared — before their analysis exists. Corrections arrive late to an audience that has already moved on.
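
Rough arithmetic makes the gap concrete. Every delay figure in this sketch is an illustrative assumption, not a measured value from the operation:

```python
# Illustrative timeline arithmetic; every delay value is an assumption.
EVENT = 0                    # minutes; kinetic event begins at t = 0

# Reactive model: observe the event, then fabricate, then distribute.
reactive = EVENT + 30 + 120  # ~30 min observation + ~2 h fabrication

# Pre-staged model: content already exists; deployment follows the trigger.
prestaged = EVENT + 2        # ~2 min to detect and release
                             # (PRISONBREAK beat even t = 0)

correction = EVENT + 360     # fact-check lands ~6 h later in either model

print(f"reactive:   narrative at t+{reactive} min, "
      f"unchallenged for {correction - reactive} min")
print(f"pre-staged: narrative at t+{prestaged} min, "
      f"unchallenged for {correction - prestaged} min")
```

Under these assumptions, pre-staging doesn't just move the narrative's arrival earlier; it multiplies the time the narrative circulates unchallenged.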

This doesn’t require flawless execution. The PRISONBREAK deepfakes had detectable artifacts — distorted bodies, impossible movement, misplaced tattoos, unnatural environmental rendering. Careful forensic analysis, with time and tools, found them.

Neither time nor tools are available to people encountering the content in a fast-moving feed during an active military operation.

Who Isn’t Watching

PRISONBREAK was caught. The Citizen Lab report is detailed. The findings are documented and publicly available.

“Caught” is doing real work in that sentence, and it’s worth examining what it means.

The accounts were identified after months of operation. The audience that formed impressions from the deepfake footage — that saw the Evin Prison video in the thirteen minutes before verification was possible — was not served a correction simultaneous with the content. They saw the deepfake. Some of them later encountered a report noting it was fabricated. The continued influence effect worked on the interval between the two.

There’s also a question about which operations receive scrutiny. Iranian AI fakes during the 2026 war — fabricated before/after satellite imagery of US naval bases, AI-generated videos of nonexistent missile strikes — received widespread fact-checking attention. PRISONBREAK, attributed to an Israeli state agency, received considerably less. The asymmetry in scrutiny is itself a form of information environment shaping: not all fabrications are examined with equal urgency, and the ones that aren’t examined do their work undisturbed.

What to Do in the Window

Specific defenses against synchronized operations are available, even if imperfect.

Treat information from the first 24 hours of a major event as the least reliable you will encounter. This is uncomfortable because the information feels most urgent during that window. The urgency is precisely what the operation exploits. Maximum skepticism belongs at the point of maximum urgency, not minimum.

Delay sharing, not necessarily consuming. You can observe and process breaking information without amplifying it. Sharing is the mechanism that makes synchronized operations effective — your authentic emotional response does distribution work for the operation. Pausing the share instinct in the acute phase is the individual lever available.

Ask who benefits from the specific narrative, not just whether the footage looks real. Pre-staged content is designed to advance a strategic objective. Identifying whose interests the narrative serves tells you something about the probability of fabrication, independent of how convincing the content appears.

Source-trace before accepting extraordinary claims. “This video is spreading everywhere” is not a source. What account posted it first? Does it appear on the outlet’s actual website? Is the metadata consistent? In the acute phase, this verification takes time you may not feel you have. That feeling is accurate — and it’s why the window works.
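
For readers who triage content systematically, the habits above reduce to a checklist. A minimal sketch, in which every field name and the 90-day threshold are assumptions of this article rather than an established tool or standard:

```python
from dataclasses import dataclass

# A hypothetical triage checklist; field names and the 90-day threshold
# are assumptions of this article, not an established tool or standard.

@dataclass
class Claim:
    first_poster: str | None      # earliest account found posting the content
    on_outlet_site: bool          # does the attributed outlet actually host it?
    metadata_consistent: bool     # timestamps, location, device data line up?
    account_age_days: int | None  # age of the earliest posting account

def triage(claim: Claim) -> list[str]:
    """Return red flags; an empty list means 'unverified', never 'true'."""
    flags = []
    if claim.first_poster is None:
        flags.append("origin unknown: 'spreading everywhere' is not a source")
    if not claim.on_outlet_site:
        flags.append("attributed outlet does not carry the story")
    if not claim.metadata_consistent:
        flags.append("metadata inconsistent with the claimed time and place")
    if claim.account_age_days is not None and claim.account_age_days < 90:
        flags.append("earliest poster is a young account")
    return flags

print(triage(Claim(first_poster=None, on_outlet_site=False,
                   metadata_consistent=False, account_age_days=None)))
```

Note the limit of the last check: PRISONBREAK's accounts were created in 2023 and left dormant, aging past exactly this kind of filter. No single check suffices; they compound.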

Conclusion

PRISONBREAK established a template that the 2026 war information environment has extended: AI-generated content, prepared in advance, deployed at military speed, synchronized with kinetic operations to arrive during the window when verification is impossible and first impressions are forming.

The innovation is not deepfake technology; deepfakes have been around long enough that they are no longer novel in themselves. The innovation is the operational architecture: the pre-staged contingency pipeline that positions the fabrication ahead of the event, turning AI content generation from a post-hoc tool into a real-time weapon.

Previous influence operations asked audiences to believe a false narrative before verification caught up. Synchronized pre-staged operations don’t ask audiences to believe anything. They ask audiences to see something — in a moment of high urgency, before the question of believing it arises.

The false narrative arrives first. That’s enough.

The thirteen minutes before verification isn’t a gap in the detection system. It’s what the operation was built to occupy.


This article is part of Decipon’s Manipulation Breakdowns series, which dissects real influence tactics using the NCI Protocol framework.
