
The Sleeper Persona: How Iran Pre-Positioned Fake Activists Before the War

Manipulation Breakdowns · 10 min read · By D0

The Account That Waited

“Ana Rodri” from California was a reliable presence. Her bio read: “Daughter of migrants, dreamer and resilient | I fight against discrimination and imperialism.” She posted about anti-ICE raids. She promoted pro-Nicolás Maduro content. She had a point of view, a voice, a presence that felt authentic to the communities she spoke to.

She didn’t exist.

On February 28, 2026, when U.S. and Israeli forces launched Operation Epic Fury — airstrikes targeting Iranian nuclear sites, military infrastructure, and leadership — Ana Rodri pivoted. So did 61 other accounts. They stopped posting about Scottish independence, Irish reunification, ICE checkpoints, and Maduro. They started posting about civilian casualties in Iran, anti-war protests outside Trump Tower, and footage purportedly showing Iranian missile strikes on U.S. bases.

Some of the footage wasn’t from Iran. One account posted a video of a car accident on a Saudi Arabian freeway and labeled it a drone strike on a U.S. embassy.

What Clemson University’s Media Forensics Hub documented in March 2026 was not a standard influence operation. It was something older and more patient: a pre-positioned network, built for a moment that hadn’t arrived yet, maintained by real human operators who had spent months becoming someone their target audiences would trust.

The Architecture of Patience

Standard influence operations are reactive. Bot farms respond to events. Content is manufactured at volume, pushed through low-credibility accounts, designed to overwhelm feeds rather than persuade individuals.

What the Clemson researchers found was different. The 62 accounts they identified operated across X, Instagram, and Bluesky, and they shared a structural feature that distinguishes them from ordinary disinformation: they had been building credibility before they were needed.

Before February 28, these accounts existed in two categories.

The first: Spanish-language profiles presenting as Latina women from Texas, California, Venezuela, and Chile. They posted about anti-ICE activism. They positioned themselves within progressive communities that had established sympathies around immigration, anti-imperialism, and distrust of U.S. foreign policy.

The second: English-language profiles presenting as Scottish independence advocates, Irish nationalists, and English dissidents. They posted anti-Labour content, Scottish independence arguments, Irish reunification messaging — the texture of communities with longstanding grievances against British and American military and political establishments.

Researchers noted the accounts showed signs of real human operation, not automation. Typos. Consistent stylistic choices. The kind of imperfection that suggests someone was actually writing the posts, day after day, for months.

Then, on cue, they all pivoted.

The Logic of Identity Selection

The communities chosen weren’t random. Each served a specific function in Iran’s communications strategy.

Scottish independence advocates carry a particular mixture of grievances: anti-NATO sentiment, hostility toward the UK military establishment, and skepticism toward American foreign policy interventionism. These aren’t fabricated views — they exist within the real Scottish independence movement. An account presenting as a Scottish independence supporter didn’t need to be convincing about the war; it only needed to be convincing as a Scottish independence supporter. The credibility on local issues transferred to the foreign policy position.

Irish nationalists offer similar political geometry. The history of Irish republicanism includes anti-imperialist traditions, opposition to British military involvement, and skepticism of U.S. foreign policy entanglements. An account that had been reliably posting about Irish reunification carried that history into its anti-war content.

Latina women from Texas and California represented a different calculation. In American progressive coalition politics, they occupy a specific credibility position on immigration, anti-ICE activism, and Latin American foreign policy. An account with a Latina woman persona posting anti-ICE content for months carries established progressive-coalition credibility into anti-war content about Iran.

The selection of all three categories shares a common logic: each represents a real community with genuine pre-existing sympathy for the messaging Iran needed to push. The accounts didn’t need to persuade these communities. They needed to infiltrate them and manufacture the perception that community members were speaking.

The In-Group Advantage

Why does this work?

Source credibility research identifies two components of persuasion: expertise (knowing the subject) and trustworthiness (having no ulterior motive). These accounts were engineered to score high on both within their target audiences.

An account that has been posting Scottish independence content for months appears to have local knowledge and genuine community investment. It doesn’t look like a foreign agent — it looks like a local activist who also cares about what’s happening in Iran. The foreign policy opinion arrives filtered through an established in-group identity.

Social psychology adds a layer: in-group messengers bypass the skepticism applied to out-group sources. If you trust the community, and you believe a message comes from within the community, you evaluate the message differently than you would the same message from a stranger. Iran’s Islamic Revolutionary Guard Corps (IRGC) didn’t try to convince Scottish independence supporters that they should care about Iran. It created the appearance that Scottish independence supporters already did, and were saying so in their own voice.

The pre-positioning matters precisely here. An account that had been reliably Scottish about Scottish things for months is evaluated differently than an account created last week. The history is the credibility. Time spent posting about local issues is investment in trust that will be spent when needed.

The Pivot as a Tell — and Why It Arrives Too Late

When Clemson researchers looked at account activity patterns, the pivot was visible. Accounts that had been posting about ICE and Maduro were suddenly posting about Iranian casualties. Accounts that had been about Scottish independence were suddenly about drone strikes. Analyzed in aggregate, this pattern is a clear signal of coordinated inauthentic behavior: a simultaneous shift, across unrelated accounts, on unrelated platforms, in multiple languages.
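
To make that aggregate signal concrete, here is a minimal sketch of the per-account half of the analysis, assuming hypothetical post records of the form (account_id, timestamp, topic_label). The field names, topic labels, and event-date split are illustrative, not Clemson’s actual pipeline.

```python
# Minimal sketch of the per-account pivot signal. Assumes hypothetical
# post records (account_id, timestamp, topic_label); the topic labels
# and the hard event-date split are illustrative only.
from collections import defaultdict
from datetime import datetime

EVENT = datetime(2026, 2, 28)  # strikes begin
CONFLICT_TOPICS = {"iran_strikes", "civilian_casualties", "anti_war_protests"}

def conflict_share(posts):
    """Fraction of an account's posts that relate to the conflict."""
    if not posts:
        return 0.0
    return sum(topic in CONFLICT_TOPICS for _, topic in posts) / len(posts)

def pivot_scores(records):
    """Map each account to (conflict share after) - (conflict share before).
    A score near 1.0 is the hard pivot: near-zero conflict content
    before the event, near-total conflict content after it."""
    before = defaultdict(list)
    after = defaultdict(list)
    for account, ts, topic in records:
        (before if ts < EVENT else after)[account].append((ts, topic))
    return {
        acct: conflict_share(after[acct]) - conflict_share(before[acct])
        for acct in set(before) | set(after)
    }
```

Scattered small scores are ordinary drift; a score near 1.0 is exactly the hard pivot described above, an account that was about one thing and is suddenly about another.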

The problem is when the analysis happens.

By the time Clemson published its findings, the X accounts with U.S.-tied personas had mostly been suspended. But the Scottish and Irish accounts on Instagram and Bluesky were still active. The content they had posted — the anti-war images, the civilian casualty footage, the misrepresented car accident video labeled as a drone strike — had already circulated. Some of it reached, in the researchers’ estimate, tens of millions of users.

The analytical infrastructure for detecting coordinated inauthentic behavior is real. Researchers, platform integrity teams, and independent journalists do find these networks. The timeline is the problem: detection and suspension happen after distribution. The moment of maximum impact — when a conflict begins and audiences are most hungry for information — is also the moment when misinformation circulates fastest and analytical resources are most stretched.

Pre-positioned networks exploit this gap intentionally. By the time the pivot is identified, the credibility has already been spent.

Not a Bot Farm. An Asset Network.

The distinction between the IRGC network Clemson documented and a standard bot farm is operationally significant.

A bot farm is infrastructure for volume. Thousands of accounts generate massive amounts of content to overwhelm feeds, manufacture trending signals, and create the appearance of broad support. Individual accounts are low-quality — obvious, expendable, replaceable.

The network Clemson found operated differently. Sixty-two accounts is a small number by volume standards. The reach per account was not enormous. These were not designed to overwhelm. They were designed to be believed — to function as plausible community members whose content would be shared, discussed, and treated as authentic expression from within the target communities.

This is a different threat model. Bot farms can be partially defeated by platform moderation at scale: identify low-quality accounts, suspend at volume, publish findings. The approach is imperfect but tractable.

Pre-positioned persona networks require longitudinal detection. The suspicious signal is not account quality or posting volume — it’s the behavioral pattern over time: the coordinated pivot, the disconnect between established persona and sudden new focus, the simultaneous shift across unrelated platforms. That signal requires looking at accounts not in the moment of the crisis but in the months before it. Most content moderation is event-driven. These networks exploit the interval before the event.
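
As an illustration of what longitudinal detection means in practice, the sketch below layers a coordination test on top of per-account pivot times, using the same assumed data shapes as the earlier sketch. The 48-hour window is an invented parameter; the point is only that organic pivots scatter in time while a pre-positioned network clusters inside one narrow window.

```python
# Sketch of the coordination layer on top of per-account pivots.
# Usage (assumed): pivot_times = {acct: first_conflict_post(posts, topics)
#                                 for acct, posts in posts_by_account.items()}
from datetime import timedelta

def first_conflict_post(posts, conflict_topics):
    """Timestamp of an account's first conflict-related post, or None."""
    hits = sorted(ts for ts, topic in posts if topic in conflict_topics)
    return hits[0] if hits else None

def largest_pivot_cluster(pivot_times, window=timedelta(hours=48)):
    """Size of the biggest group of accounts whose first conflict post
    falls inside any rolling window. Dozens of otherwise unconnected
    accounts landing in one window is the coordination signal."""
    times = sorted(t for t in pivot_times.values() if t is not None)
    best = 0
    for i, start in enumerate(times):
        j = i
        while j < len(times) and times[j] - start <= window:
            j += 1
        best = max(best, j - i)
    return best
```

Run across accounts that never interacted, on different platforms and in different languages, a dense cluster is the signature described above; the catch, again, is that someone has to be looking at the months of history before the event.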

What It Requires to See It

The Clemson Media Forensics Hub found these accounts because researchers were specifically looking at the information environment around the Iran conflict. They applied behavioral analysis across platforms over time — not just at the content, but at the patterns: which accounts pivoted when, what the prior posting history looked like, which accounts showed simultaneous behavioral shifts.

That’s resource-intensive work that most platforms are not conducting systematically on the long tail of their user base before a crisis occurs. Individual users have no access to cross-platform behavioral data. The analytical capacity is concentrated in academic labs, platform integrity teams, and specialized research organizations.

What individual readers can do is limited but not zero.

Notice the pivot. An account that has been consistently focused on one community issue suddenly posting with high urgency about an unrelated foreign conflict is showing a behavioral shift worth treating skeptically. The sudden expansion of scope in a moment of crisis is a signal — not proof, but a prompt for caution.

Check account history. An account created six months ago claiming long-term community membership is not the same as a three-year account with consistent local focus. Profile pictures, account age, posting consistency, and the relationship between biography and content are all worth examining; a toy version of these checks is sketched below.

Treat credentialing skeptically. “As a Scottish independence supporter, I believe…” is not evidence of Scottish independence support. The tactic works by exploiting the heuristic that community members are more trustworthy on community-adjacent issues. Recognizing that heuristic is the beginning of resisting it.

Understand that sympathetic communities are the target. These accounts weren’t built to convince people who oppose Iranian foreign policy. They were built to infiltrate communities that already had sympathy — anti-war progressives, independence movements, anti-imperialist constituencies — and appear as one of them. The existence of a network like this is a reason to scrutinize content within your own information ecosystem, not just content from obvious adversaries.
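
The account-history advice above can be made mechanical, up to a point. The sketch below encodes it as toy heuristics; the thresholds are invented for illustration, and no flag here proves anything about a real account.

```python
# Toy reader-side checks with invented thresholds. None of these flags
# proves inauthenticity; each only marks where skepticism is warranted.
def history_red_flags(account_age_days, claimed_membership_years, active_days):
    """Return plain-language cautions from publicly visible profile facts."""
    flags = []
    if claimed_membership_years * 365 > account_age_days:
        flags.append("claims a longer community history than the account has existed")
    if account_age_days < 365:
        flags.append("account is less than a year old")
    if active_days / max(account_age_days, 1) < 0.05:
        flags.append("long dormancy before a sudden burst of activity")
    return flags

# Example: a six-month-old account claiming years inside the movement.
print(history_red_flags(account_age_days=180,
                        claimed_membership_years=3,
                        active_days=90))
```

The value is in the habit, not the score: each flag is a prompt to slow down before amplifying.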

Conclusion

Sixty-two accounts built like sleepers — patient, identity-specific, locally credible — then activated simultaneously when a war began. The operational logic is precise: build the credibility you’ll need before you need it, in communities that will already agree with the message you’ll eventually deliver.

This is not a new concept in intelligence tradecraft. What is new is that the internet makes it tractable at scale, across platforms and languages, with relatively modest overhead. Real human operators ran these accounts. They invested months. They posted convincingly about ICE checkpoints and independence movements because they understood that people who genuinely care about those causes would be more receptive to anti-war content from a perceived community member than from a stranger.

The IRGC didn’t manufacture Scottish independence sympathy for anti-war sentiment. It found communities where that sympathy already lived, built counterfeit members of those communities, and waited.

The borrowed identity is not the message. It is the delivery mechanism — and until researchers look at what pivoted when, and in which direction, it is invisible.


This article is part of Decipon’s Manipulation Breakdowns series, which dissects real influence tactics using the NCI Protocol framework.

