
The Liar's Dividend: How Deepfakes Win Without Deceiving Anyone

Manipulation Breakdowns · 11 min read · By D0

The Wrong Question

The standard question about deepfakes is whether you can spot one. Researchers build detection classifiers. Platforms deploy authenticity tools. Media literacy curricula teach people to count fingers, look for blurry teeth, check for mismatched lighting.

The question assumes the threat model is deception — that deepfakes work by fooling you into believing a fabrication. That assumption is already outdated.

In January 2026, President Trump announced the capture of Venezuelan president Nicolás Maduro. Within hours, AI-generated images and manipulated video flooded social media — not footage of the actual event, but synthetic celebratory scenes, fabricated gratitude, invented crowds. Elon Musk shared what appeared to be an AI-generated video of Venezuelans thanking the United States for Maduro’s capture.

Days later, after a fatal shooting by an Immigration and Customs Enforcement officer in Minneapolis, a likely AI-edited fake image of the scene circulated online. Users attempted to digitally remove the officer’s mask from the image.

In neither case did the fakes need to be convincing to do their work. They didn’t need to fool anyone. They needed only to exist — in sufficient volume, at sufficient speed — to make the authentic footage and photos a little harder to trust. The confusion itself was the product.

This is the liar’s dividend, and it has arrived.

What the Liar’s Dividend Actually Is

The term comes from legal scholars Bobby Chesney and Danielle Keats Citron, who coined it in 2018. Their prediction: as deepfake technology matured, it would hand bad actors a perverse weapon. Not the ability to make people believe false things — the ability to make people doubt true ones.

The mechanism runs like this. Once deepfakes become common knowledge, any piece of video or audio evidence can be challenged with two words: “that’s fake.” The challenger doesn’t need to prove the content is fabricated. They only need to invoke the possibility. In an environment where fabrications are real and detection is unreliable, the possibility is credible.

When a politician is caught on camera saying something damaging, the response is no longer denial of the statement — it’s denial of the footage. When atrocities are documented, the perpetrators can claim the documentation is synthetic. When authentic photos contradict preferred narratives, those photos can be dismissed as generated. The deepfake doesn’t have to be the political video, the atrocity footage, the incriminating photo. It just has to exist as a category of possibility.

The dividend is collected on authentic content.

Two Ways Deepfakes Win

There are two distinct mechanisms by which synthetic media damages the information environment, and they operate in opposite directions.

The first is classic deception: a fabricated video is believed, and the viewer carries a false impression. This is the threat model that most research addresses. It is real. It is documented. A University of Bristol study published in Communications Psychology earlier this year found that even participants who were warned a video was fake still relied on its content when making moral judgments.

The second is the liar’s dividend: authentic content is doubted, and the viewer carries uncertainty rather than belief. This is less studied, harder to measure, and more dangerous at scale.

The asymmetry matters. Deception produces a specific false belief that can, in principle, be corrected. The liar’s dividend produces general uncertainty that resists correction — because correcting uncertainty requires establishing a standard of verification that the ambient environment no longer supports.

Jeff Hancock, a communication professor at Stanford, describes the default state of human information processing: “We believe communication until we have some reason to disbelieve.” Deepfakes — and awareness of deepfakes — provide a reason to disbelieve that applies to everything. Once that reason is available, it can be selectively invoked against authentic content by anyone with an interest in that content being doubted.

Confirmation Bias as the Amplifier

The liar’s dividend does not affect everyone equally. Research shows that people are more likely to dismiss authentic political content as fake when it contradicts their prior beliefs. The deepfake accusation is asymmetrically deployed: unfavorable authentic footage gets challenged; favorable synthetic content gets shared.

This is not a new phenomenon — motivated reasoning predates deepfakes. But deepfakes give motivated reasoning a new tool. Previously, claiming “that footage is fabricated” required some plausible technical account. Now, the technical account is plausible by default. The accusation costs nothing and creates friction that functions as refutation in practice, even when it fails as argument.

In the Venezuela scenario, the fake celebratory videos were shared by people who found them emotionally satisfying regardless of their authenticity. In the Minneapolis scenario, the fake image was circulated by people whose prior beliefs about the event were served by its existence. Neither audience was neutral about the outcome — and the synthetic content arrived exactly calibrated to what those audiences wanted to see.

Hancock’s observation applies here: we believe communication until we have reason to disbelieve. Partisan audiences have reason to disbelieve authentic content that challenges their position, and reason to believe synthetic content that confirms it. Deepfakes provide the raw material for both moves simultaneously.

The Cognitive Endpoint

If the liar’s dividend operates long enough at sufficient scale, the research suggests a specific endpoint: not false belief but disengagement from truth-seeking altogether.

Media scholar Renee Hobbs describes a pattern of “cognitive exhaustion” — the mental cost of evaluating contested information eventually exceeds the perceived benefit. When every piece of video evidence might be fabricated, when every photo might be AI-generated, when detection tools are unreliable and corrections arrive late, the rational response for many people is to stop evaluating and start pattern-matching to tribal signals instead.

Who shared this? What platform? What do people I trust say about it? These heuristics replace content evaluation. The information environment is abandoned as a source of grounding and replaced by social affiliation as a signal of credibility.

This endpoint serves specific interests. It serves those who benefit from public paralysis — incumbents of any type, political or otherwise. It serves those who want to conduct operations without documentary accountability — governments, militaries, corporations facing damaging evidence. It serves those who have already lost the argument on the merits and need the merits to become contested.

The confusion is not a byproduct. In many cases, it is the goal.

Detection Is Not the Solution

The reflex response to the deepfake problem is better detection. Classifiers improve. Watermarking is proposed. Platforms deploy automated tools. The framing is that if detection catches up with generation, the problem resolves.

Detection does not address the liar’s dividend.

Even if a deepfake is detected and labeled — even if the platform marks it synthetic before the viewer sees it — that same detection infrastructure now exists as a framework through which authentic content can be doubted. The existence of a classifier that labels content “synthetic” means the question “is this content synthetic?” is now a live question about every piece of content. Detection capability and doubt capability arrive together.

Visual tricks for identifying deepfakes — counting fingers, examining teeth, looking for lighting inconsistencies — no longer work reliably. Detection tools are increasingly unreliable. But the damage to the ambient environment was done before reliability was at issue. The uncertainty is already distributed.

The solution the liar’s dividend points to is not better detection but better provenance. The question should shift from “does this content look authentic?” to “where did this content come from and how do we know?” Source, chain of custody, and authenticated origin matter more than any analysis of the content itself.

The Coalition for Content Provenance and Authenticity (C2PA) has been developing standards for cryptographic signing of media at the point of capture. The idea is that a camera or recording device signs the media at creation, and that signature travels with the content through any legitimate editing workflow. Content without a valid signature is not necessarily fake, but it cannot be authenticated. Content with a valid signature can be traced to a verified origin.
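A minimal sketch of the sign-at-capture, verify-later idea may make the mechanism concrete. This is not the actual C2PA manifest format (real manifests carry signed assertions, ingredient references, and certificate chains embedded in the file); the keys, field names, and sample data below are invented for the example, and the point is only the shape of the check: verification tells you whether content can be traced to a signing device, never whether unauthenticated content is fake.

```python
# Sketch of sign-at-capture / verify-later provenance. Illustrative only:
# real C2PA manifests embed signed claims and certificate chains in the file,
# not a sidecar record like this one.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_at_capture(media_bytes: bytes, device_key: Ed25519PrivateKey) -> dict:
    """Produce a provenance record at the moment of capture."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    signature = device_key.sign(digest.encode())
    return {"sha256": digest, "signature": signature.hex()}


def verify_provenance(media_bytes: bytes, record: dict,
                      device_pub: Ed25519PublicKey) -> bool:
    """Check that the media matches the signed digest and that the digest
    was signed by the claimed device. Failure means 'cannot authenticate',
    not 'fake'."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    if digest != record["sha256"]:
        return False  # content was altered after signing
    try:
        device_pub.verify(bytes.fromhex(record["signature"]), digest.encode())
        return True
    except InvalidSignature:
        return False  # signature does not match the claimed device


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()       # stands in for a camera's key
    frame = b"...raw sensor data..."         # stands in for captured media
    record = sign_at_capture(frame, key)
    print(verify_provenance(frame, record, key.public_key()))             # True
    print(verify_provenance(frame + b"edit", record, key.public_key()))   # False
```

The design property worth noticing is that the check works on origin, not appearance: copying or recompressing the content breaks nothing about the logic, while any undisclosed alteration breaks the match.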

This is the right frame. It treats the problem as one of infrastructure rather than cognitive performance — not “can viewers detect fakes” but “can the information environment provide authenticated provenance.” The former puts the burden on individual human beings; the latter puts it on systems.

Neither detection nor provenance is fast, cheap, or available at current platform scale.

What the Incidents Reveal

The Venezuela and Minneapolis incidents are worth examining not as edge cases but as templates.

Both involved a real event — politically charged, with genuine stakes and genuine audiences. In both cases, synthetic content appeared within hours, not days. The speed matters: the fabrications arrive while the situation is live, before fact-checkers can operate, while attention is highest.

In both cases, the synthetic content was not sophisticated. The AI-generated Venezuelan celebration video Musk shared was identifiable to trained observers. The manipulated ICE shooting image was described as “likely AI-edited,” not a seamless fabrication. Sophistication was not required. The content needed only to be visually plausible enough to circulate on social media, and congenial enough that audiences who liked its message would share it.

This is important: the liar’s dividend does not require technically advanced deepfakes. It requires only that deepfakes exist as a known category, that they arrive fast, and that there are audiences motivated to share them before verification occurs. The crude fakes serve their purpose even if — especially if — they are later identified as crude, because the identification arrives after the damage.

The sequence is: event → fast synthetic content → uncertainty → disengagement or tribal processing → belated correction that reaches a fraction of the original audience.

That sequence repeats. It is now, as of early 2026, the default trajectory of high-stakes visual information during breaking events.

The Asymmetry That Matters

Producing a deepfake: cheap, fast, automatable. A motivated actor can generate dozens of variants of a fabricated scene in the time it takes a fact-checker to confirm an authentic one.

Debunking a deepfake: slow, expensive, requires expertise, and reaches a smaller audience than the original content.

This asymmetry existed before AI. Lies have always been cheaper to produce than corrections. AI scales the production side without scaling the correction side. The gap between lie-production cost and correction cost, already disadvantageous for truth, widens.
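A back-of-the-envelope calculation shows how that gap compounds over the live window of a breaking event. Every number below is an illustrative assumption, not a measurement; the point is the ratio, not the figures.

```python
# Back-of-the-envelope model of the production/correction asymmetry.
# Every parameter is an illustrative assumption, not a measurement.

fakes_per_hour = 20            # assumed: variants a motivated actor can generate
debunks_per_hour = 0.5         # assumed: one careful fact-check every two hours
views_per_item = 100_000       # assumed: average reach of a circulating fake
correction_reach_ratio = 0.1   # assumed: a correction reaches ~10% of that audience
hours_live = 12                # the window while the event is still breaking

fake_impressions = fakes_per_hour * hours_live * views_per_item
corrected_impressions = (debunks_per_hour * hours_live
                         * views_per_item * correction_reach_ratio)

print(f"fake impressions:      {fake_impressions:,.0f}")
print(f"corrected impressions: {corrected_impressions:,.0f}")
print(f"uncorrected share:     {1 - corrected_impressions / fake_impressions:.1%}")
```

Under these deliberately rough assumptions, only a fraction of a percent of the fabricated impressions ever meet a correction; changing the numbers shifts the fraction but not the direction.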

For practical purposes: in the attention economy, first impressions dominate. The fabrication arrives first. It occupies the interpretation frame through which the authentic content is subsequently read. Even if the fabrication is identified, it has already shaped the question “is this real?” — and that question, once asked, is not fully answered by any subsequent confirmation.

The liar’s dividend collects in the gap between first impression and eventual correction.

What You Can Actually Do

The liar’s dividend is a structural problem, not one solvable by individual cognitive effort. That said, individual practice matters:

Source over content. When evaluating video or images during breaking events, ask who captured it, what platform it came from, and whether the originating source can be traced. Content analysis — does it look real? — is now the least reliable evaluation method available.

Pause on breaking event content. The first hours of a high-profile incident are precisely when synthetic content arrives fastest and fact-checking capacity is thinnest. Waiting even twelve hours substantially changes the information landscape.

Distinguish claims from evidence. A video is not a claim; it is alleged evidence of a claim. The claim and the evidentiary status of the video should be evaluated separately. That a video appears to show something does not establish that the thing occurred.

Treat synthetic media accusations skeptically too. The liar’s dividend is also exploitable from the other side: authentic content can be falsely accused of being synthetic. Declaring “that’s a deepfake” without evidence is the same epistemic move as circulating a deepfake without disclosure. Both are trust attacks.

Follow provenance infrastructure. When platforms and cameras implement cryptographic signing and C2PA standards, use that information. Over time, authenticated provenance will become the most reliable signal available.

Conclusion

The deepfake threat is usually discussed as a deception problem. Can you spot the fake? Can platforms detect it? Can regulators mandate disclosure?

The liar’s dividend is a different problem, and asking the deception question doesn’t address it. The synthetic videos that appeared within hours of the Maduro capture announcement may not have fooled many people. They didn’t need to. They made the information environment slightly less trustworthy, slightly more contested, slightly more prone to tribal processing rather than evidence evaluation. Multiply that effect across every high-stakes news event going forward, at the production costs that AI now enables, and the cumulative corrosion becomes the dominant effect.

The question deepfakes have now made live is not “is this fake?” It’s “can I trust anything at all?” The second question is harder, less answerable, and precisely the question that bad actors with an interest in confusion are paying to keep open.

The liar’s dividend is real. It is being collected.


This article is part of Decipon’s Manipulation Breakdowns series, which dissects real influence tactics using the NCI Protocol framework.

