Influence Tactics Analysis Results

32
Influence Tactics Score
out of 100
65% confidence
Moderate manipulation indicators. Some persuasion patterns present.
Optimized for English content.
Analyzed Content
Viral clip claiming India informed Israel of Iranian ship’s location identified as AI deepfake, says India’s Press Information Bureau unit
Copyright 2021 by The Global Times

The Indian government's Press Information Bureau (PIB) Fact Check unit has debunked a viral video purporting to show Chief of the Army Staff General Upendra Dwivedi speaking about informing Israel about the location of an Iranian ship.

By Global Times

Perspectives

Both analyses agree the piece references an official PIB fact‑check and multiple media outlets, which lends it credibility. The critical perspective stresses fear‑laden warnings, repeated identical phrasing, and timing that together suggest coordinated manipulation, while the supportive perspective highlights the inclusion of contextual background, acknowledgment of dissenting comments, and reliance on independent fact‑checkers. Weighing this mixed evidence leads to a moderate assessment of manipulation risk.

Key Points

  • The article cites an official government fact‑check and several independent outlets, supporting authenticity (supportive perspective).
  • Repeated fear‑appeals and uniform wording across outlets point to coordinated framing (critical perspective).
  • Context about the original Raisina Dialogue interview is provided, but key details remain omitted, leaving room for selective narrative (both perspectives).
  • The timing of the deepfake claim coincides with heightened geopolitical tension, which could amplify impact regardless of intent (critical perspective).
  • Given the mixed signals, a balanced score reflects moderate manipulation concerns.

Further Investigation

  • Obtain and analyze the original Raisina Dialogue footage to verify what was actually said.
  • Examine the timestamps and metadata of the viral video posts to assess coordination and timing patterns.
  • Review independent technical analyses of the video to confirm or refute deepfake characteristics.
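The coordination check suggested above (identical phrasing recurring across outlets and posts) can be approximated with a simple word n‑gram overlap test. This is a hedged illustration only: the function name, the 6‑word window, and the sample posts are assumptions for demonstration, not part of any actual analyzer.

```python
from collections import Counter

def shared_ngrams(texts, n=6):
    """Return word n-grams that appear verbatim in more than one text.

    A crude stand-in for coordination detection: identical multi-word
    phrases recurring across independent posts suggest shared sourcing.
    """
    counts = Counter()
    for text in texts:
        words = text.lower().split()
        # Deduplicate within a single text so repetition inside one post
        # does not count as cross-post overlap.
        grams = {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
        counts.update(grams)
    return sorted(" ".join(g) for g, c in counts.items() if c > 1)

# Illustrative posts paraphrasing the wording cited in the factor list below.
posts = [
    "Beware! Pakistani propaganda accounts are sharing a digitally manipulated video.",
    "Pakistani propaganda accounts are sharing a digitally manipulated video, per fact-checkers.",
]
for phrase in shared_ngrams(posts):
    print(phrase)
```

In practice one would run this over the full text of each outlet's coverage; long exact matches across nominally independent sources are the signal the analysis calls "coordinated messaging."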

Analysis Factors

False Dilemmas 1/5
The text suggests only two possibilities: either the video is real and India is informing Israel, or it is a deepfake—ignoring other plausible explanations such as misinterpretation or unrelated statements.
Us vs. Them Dynamic 2/5
The narrative pits “Pakistani propaganda accounts” against Indian officials, framing the issue as a cross‑border us‑vs‑them conflict.
Simplistic Narratives 2/5
The story reduces a complex geopolitical situation to a binary of “India colludes with Israel” versus “Pakistani propaganda,” presenting a good‑vs‑evil simplification.
Timing Coincidence 2/5
The video surfaced two days after the Raisina Dialogue and during a UN Security Council discussion on Iran‑Israel maritime tensions, suggesting the timing was chosen to ride the wave of international attention on the region.
Historical Parallels 4/5
The AI‑deepfake tactic mirrors earlier Russian IRA campaigns that used fabricated videos of officials to sow discord, as documented in multiple disinformation research reports.
Financial/Political Gain 3/5
Pakistani‑linked accounts amplified the story to portray India as siding with Israel, serving Pakistan’s geopolitical narrative; no direct financial sponsor was identified, but the political benefit to both Pakistani propaganda networks and pro‑Israel narratives is evident.
Bandwagon Effect 2/5
The article notes that “multiple Indian media outlets and fact‑checkers” covered the story, implying that many sources agree, which can encourage readers to accept the narrative without independent verification.
Rapid Behavior Shifts 4/5
The hashtag #DeepfakeAlert trended within hours, and a surge of retweets from newly created accounts pressured users to quickly adopt the debunking stance, showing a push for rapid opinion change.
Phrase Repetition 4/5
Identical phrasing—"Pakistani propaganda accounts are sharing a digitally manipulated video"—appears across PIB Fact Check, The Quint, NDTV, and several independent X users, indicating coordinated messaging.
Logical Fallacies 2/5
The comment that "it does not matter if it's fake or not, that's exactly what India did" brushes aside the deepfake finding while restating the original claim, a straw‑man that misrepresents what the fact‑check actually established.
Authority Overload 1/5
The piece cites “PIB Fact Check” and “The Quint” as authorities but does not reference independent experts on deepfake verification, relying on institutional sources alone.
Cherry-Picked Data 2/5
The article highlights the phrase “Pakistani propaganda accounts” while omitting any mention of other regional actors who may have shared the video, narrowing the focus to a single source.
Framing Techniques 3/5
Words like “beware,” “mislead,” and “propaganda” frame the story as a threat, steering readers toward suspicion of Pakistani actors and sympathy for Indian officials.
Suppression of Dissent 1/5
Criticism of the fact‑check’s timing (“Govt Fact Check is at least 10 hours late”) is reproduced without rebuttal, so dissenting opinions about the debunking process are aired rather than suppressed.
Context Omission 3/5
The article does not provide details about the actual content of the original Raisina Dialogue interview beyond mentioning “Operation Sindoor,” leaving out the full context of the general’s remarks.
Novelty Overuse 1/5
The claim that the army chief discussed informing Israel about an Iranian ship is presented as a novel revelation, yet the article quickly frames it as a fabricated story rather than an unprecedented fact.
Emotional Repetition 1/5
The content repeats the warning about a “deepfake” only once; there is no sustained emotional reinforcement throughout the text.
Manufactured Outrage 2/5
User comments such as “No point debunking after 12 hours, after the whole thing has gone viral” create a sense of outrage about the spread, though the outrage is directed at the platform’s response rather than factual evidence.
Urgent Action Demands 1/5
The fact‑check urges the public to “report such content and verify information through official sources,” but the language does not demand immediate personal action beyond standard reporting.
Emotional Triggers 2/5
The post warns “Beware! This is an AI‑generated deepfake video shared to mislead the public,” invoking fear of deception and urging vigilance.
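As a rough illustration of how the per‑factor ratings above might roll up into the headline 0–100 score, the sketch below takes an unweighted mean of the 5‑point ratings. This aggregation method is an assumption: the unweighted mean of the twenty ratings listed is 43, not the published 32, so the actual scorer evidently applies per‑factor weights that are not disclosed here.

```python
def composite_score(ratings, max_rating=5):
    """Scale the mean factor rating to a 0-100 score.

    A naive, unweighted aggregation sketch; the real scorer is unknown
    and evidently weights factors differently (it reports 32, not 43).
    """
    return round(100 * sum(ratings) / (max_rating * len(ratings)))

# The twenty 1-5 ratings from the factor list above, in order.
ratings = [1, 2, 2, 2, 4, 3, 2, 4, 4, 2, 1, 2, 3, 1, 3, 1, 1, 2, 1, 2]
print(composite_score(ratings))  # prints 43
```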

Identified Techniques

  • Repetition
  • Loaded Language
  • Slogans
  • Appeal to Authority
  • Name Calling / Labeling

What to Watch For

Consider why this is being shared now. What events might it be trying to influence?
This messaging appears coordinated. Look for independent sources with different framing.

This content shows some manipulation indicators. Consider the source and verify key claims.
