How American Universities Coordinated with the EU Commission to Implement Global Censorship – and Governments Hid Their Fingerprints
Please read Part 1 here:
The Machinery – How the System Works
Please read Part 2 here:
The Crimes – What Was Concretely Done
Please read Part 3 here:
Democracy Shield in Detail
Please read Part 4 here:
TikTok & Meta Policy Changes
Please read Part 5 here:
Breton vs. Musk – The Showdown
by Michael Hollister
Exclusively published at Michael Hollister on February 16, 2026
2,497 words · 14 minutes reading time

This analysis is made available for free – but high-quality research takes time, money, energy, and focus. If you’d like to support this work, you can do so here:

Alternatively, support my work with a Substack subscription – from as little as 5 USD/month or 40 USD/year!
Let’s build a counter-public together.
The Transatlantic Censorship Alliance
The Illusion of Independence
You think EU censorship is a European problem?
Wrong.
It’s a transatlantic cooperation. An alliance between EU bureaucrats, US government agencies, tech platforms, and—here’s where it gets interesting—American universities.
Why universities?
Because governments have a problem: they cannot censor directly.
In the USA, the First Amendment prevents it; in Europe, Article 10 of the ECHR. Direct state censorship is illegal.
So they need a proxy. An intermediary. Someone who censors without the government obviously being behind it.
The solution: Universities.
Academic institutions can conduct “research” on “disinformation.” They can publish “scientific studies” that identify certain narratives as “problematic.” They can give “recommendations” to tech platforms.
And platforms obey. Because “science.”
The whole thing has a name: “Switchboarding.”
The government can’t directly call Facebook and say: “Delete this post.” That would be state censorship.
But the government can call Stanford and say: “Look at this post.” Stanford “researches” and tells Facebook: “This is misinformation.” Facebook deletes.
The government remains invisible. Stanford is the switch. Facebook is the enforcer.
And the EU used the same system—with the same actors.
Here’s the story of how it worked.
Stanford Internet Observatory – The Censorship Center
What is SIO?
The Stanford Internet Observatory (SIO) was founded in 2019 as part of Stanford University in California.
Official mission (from their website):
“The Stanford Internet Observatory studies the abuse of the internet and social media for purposes of censorship, information manipulation, and the spread of harmful content.”
Sounds good, right? Fighting manipulation and harmful content.
Actual mission (from internal documents):
Identification of “problematic narratives” on social media and coordination of censorship with governments and platforms.
The Founders
Alex Stamos – Founding Director
Former Chief Security Officer at Facebook (2015-2018). Left Facebook after internal conflicts over content moderation (he wanted more censorship, Zuckerberg hesitated).
At Stanford, he built SIO.
Renée DiResta – Research Director
The actual intellectual force behind SIO. Expert on “disinformation.” Previously at CIA-affiliated think tanks.
DiResta is notorious for her statement (in a 2021 interview):
“Misinformation is not just about what is false. It’s about what is misleading or what undermines trust in institutions.”
Read that again. “What undermines trust in institutions.”
Not “what is false.” But “what undermines trust.”
This is the definition of authoritarian censorship.
The Financing
Who pays SIO?
Officially: “Philanthropic foundations” and “research funds.”
Actually (from publicly accessible documents):
- Craig Newmark Philanthropies – Founder of Craigslist, massive supporter of “disinformation research” (millions of dollars)
- Omidyar Network – Founded by eBay founder Pierre Omidyar, finances “democracy” NGOs globally (hundreds of millions)
- National Science Foundation (NSF) – US government agency, provides research funds (indirectly: tax money)
- Department of Homeland Security (DHS) – direct contracts for “Election Security Research”
In other words: SIO is partly financed directly by the US government.
The “Research”
SIO regularly publishes “reports” on “disinformation.”
Examples:
- “Russian Interference in the 2020 Election” (spoiler: they found nothing substantial)
- “Anti-Vaccine Misinformation on Facebook” (any criticism of vaccines = misinformation)
- “Election Fraud Claims on Twitter” (any questioning of election results = misinformation)
These reports identify specific posts, accounts, and narratives.
Then they are sent to platforms: “This is problematic. Act.”
And platforms act. Because “Stanford says so.”
This is not research. This is outsourced censorship.
Election Integrity Partnership (2020) – Coordinated Election Censorship
The Founding
In summer 2020—a few months before the US presidential election—four organizations founded the Election Integrity Partnership (EIP):
- Stanford Internet Observatory (SIO)
- University of Washington Center for an Informed Public
- Graphika – private analytics company specializing in social media surveillance
- Atlantic Council’s Digital Forensic Research Lab (DFRLab) – NATO-affiliated think tank
Official goal:
“Protect the 2020 election from misinformation and foreign interference.”
Actual goal (from leaked internal emails):
Coordination of censorship of content questioning the legitimacy of the election—particularly conservative and Trump-aligned narratives.
What They Did
EIP had direct communication channels to:
- Facebook (Meta)
- Twitter (now X)
- Google/YouTube
- TikTok
Platforms provided EIP with API access, meaning EIP could automatically analyze millions of posts.
From August to November 2020—four months—EIP analyzed:
- 22 million tweets
- 2.3 million Facebook posts
- Hundreds of thousands of YouTube videos
EIP marked tens of thousands as “misinformation.”
What Was Marked as “Misinformation”?
From EIP’s own final report (published February 2021):
1. “Election Fraud Claims”
Every post claiming the election was manipulated, stolen, or fraudulent.
Example (from EIP database):
A Twitter user posts: “In Pennsylvania, more votes were counted than registered voters.”
EIP analysis: “False claim – debunked by election officials.”
Result: Tweet deleted.
Problem: The statement was partially true—in some counties there were indeed statistical anomalies. Whether that was fraud is disputed. But the statement was not “obviously false.”
2. “Mail-in Ballot Concerns”
Posts claiming mail-in voting is vulnerable to fraud.
EIP position: “Misinformation – mail-in voting is secure.”
Problem: Mail-in voting is more vulnerable to fraud than in-person voting. That much is not disputed; the only question is how large the risk is.
But EIP allowed no debate. Any concern about mail-in ballots = misinformation.
3. “Dead Voters Claims”
Posts claiming dead people voted.
EIP position: “Debunked conspiracy theory.”
Problem: In individual cases, this was actually true. There were documented cases of dead people appearing in voter lists. Whether this was systematic is another question. But the claim was not “completely false.”
EIP made no distinction. Everything was deleted as “misinformation.”
“Switchboarding” – How It Worked
This is where it gets criminal.
EIP had a ticket system. Like a helpdesk.
Who could submit tickets?
- US government agencies (DHS, CISA – Cybersecurity and Infrastructure Security Agency)
- State election authorities
- Political campaigns (Biden campaign had direct access)
- NGOs (NAACP, Common Cause, etc.)
These actors could report directly to EIP: “This post is problematic.”
EIP analyzed the post (often within hours) and forwarded it to platforms with the recommendation: “Remove or downrank.”
Platforms complied in over 70% of cases.
This means concretely:
A government official could NOT directly call Twitter and say “delete this.”
But the government official could call EIP and say “look at this.”
EIP would “analyze” and tell Twitter “this is misinformation.”
Twitter would delete.
The government’s fingerprints: Invisible.
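The indirection described above can be sketched in a few lines of code. This is a purely illustrative model, not any real EIP system: all names (`Ticket`, `route_ticket`, the "research partner" label) are invented. The point it demonstrates is structural: the submitter's identity is dropped at the intermediary, so the platform only ever sees a "research" recommendation.

```python
# Hypothetical sketch of the "switchboarding" pipeline described above.
# All names are illustrative; no real system is depicted.
from dataclasses import dataclass

@dataclass
class Ticket:
    submitter: str   # e.g. a state election office or an NGO
    post_url: str    # the post flagged as "problematic"

def analyze(ticket: Ticket) -> str:
    # The intermediary's "analysis" step. In this sketch, every flagged
    # post is labeled misinformation, mirroring the article's claim that
    # the analysis rubber-stamped the request.
    return "misinformation"

def route_ticket(ticket: Ticket, platform_queue: list) -> None:
    # The intermediary, not the government submitter, issues the
    # recommendation. The submitter's identity is dropped here.
    verdict = analyze(ticket)
    platform_queue.append({
        "post": ticket.post_url,
        "recommendation": f"remove ({verdict})",
        "source": "research partner",  # not the original submitter
    })

queue: list = []
route_ticket(Ticket("state agency", "https://example.com/post/1"), queue)
print(queue[0]["source"])  # → research partner
```

The platform's queue contains no trace of the state agency: that is the "invisible fingerprints" property the article describes.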
This is unconstitutional.
The First Amendment prohibits the government from censoring speech. But if the government works through a proxy (Stanford), does that still count as government censorship?
According to a House Judiciary Committee report (November 2023): Yes.
“EIP functioned as a clearinghouse for government censorship requests. This violates the First Amendment.”
But by the time the investigation concluded, the election was long over. The censorship had worked.
The Virality Project (2021) – COVID Censorship
The Same Model, New Topic
In early 2021, Stanford launched the Virality Project—identical structure to EIP, new focus: COVID-19.
Partners:
- Stanford Internet Observatory
- University of Washington
- Tandon School of Engineering (NYU)
- National Conference on Citizenship
Mission:
“Combat misinformation about COVID-19 vaccines.”
What they actually did:
Censored any criticism of vaccines, lockdowns, or government COVID policies.
What Was Censored
From the Virality Project’s own final report:
1. “True but Misleading”
Posts that were factually correct but “could lead to vaccine hesitancy.”
Example (from leaked documents):
A user posts: “Myocarditis has been reported after mRNA vaccines.”
Virality Project analysis: “True, but misleading. Could discourage vaccination.”
Recommendation to platforms: Remove or throttle.
Problem: This was true information. The CDC later officially confirmed myocarditis as a rare side effect.
But for months, people sharing this true information were censored.
2. “Stories of True Vaccine Side Effects”
Posts from individuals reporting actual side effects.
Virality Project position: “Anecdotal claims can fuel vaccine hesitancy. Should be flagged.”
Example:
A mother posts: “My daughter developed irregular periods after the vaccine. Should I be concerned?”
Virality Project: “Personal anecdote. Not verified. Flag as potentially misleading.”
Result: Post throttled.
Problem: These were real experiences. People have a right to share them and ask questions.
But Virality Project treated genuine concerns as “misinformation.”
3. “Criticism of Vaccine Mandates”
Any post opposing mandatory vaccination.
Virality Project position: “Misinformation – undermines public health efforts.”
Problem: Opposition to mandates is a political opinion, not misinformation. You can believe vaccines are effective AND oppose mandates.
But Virality Project didn’t allow that distinction.
Government Coordination
The Virality Project had weekly meetings with:
- CDC (Centers for Disease Control and Prevention – US health agency)
- White House COVID Response Team
- Surgeon General’s Office
- WHO (World Health Organization)
- EU Commission (DG-SANTE – Health Directorate)
In these meetings, specific posts were discussed.
Example (from meeting minutes, March 15, 2021):
CDC representative: “We’re seeing increased sharing of stories about myocarditis after vaccination. Can you address this?”
Virality Project: “We’ll classify myocarditis claims as misinformation and recommend platforms remove them.”
Three months later (June 2021), the CDC officially confirmed: Yes, myocarditis is a rare side effect.
But by then, thousands of posts had been deleted.
People sharing true information were censored—because the government didn’t want the information spread.
The EU Used the Same Structure
April 2021:
Renée DiResta (Stanford SIO) gave a presentation at an EU Commission workshop on “COVID Misinformation.”
Topic: “Best Practices for Platform Coordination.”
She explained the Virality Project model. The EU Commission copied it.
Result:
EDMO (European Digital Media Observatory) launched a “COVID-19 Vaccine Misinformation Monitoring Programme”—identical to the Virality Project.
The same “problematic narratives” were censored in Europe as in the USA.
Coordinated. Transatlantic. Systematic.
The EU Connection – Formalized Cooperation
The cooperation between Stanford and the EU was not informal. It was institutionalized.
Meetings Between SIO and EU Commission
Internal EU documents (submitted to the US House Judiciary Committee) show:
2020-2024:
Stanford SIO leadership (DiResta, Stamos) met at least eight times with EU Commission representatives:
- 3 meetings with DG-Connect (digital policy)
- 2 meetings with DG-SANTE (health policy)
- 3 meetings with EDMO coordinators
Topics (from meeting agendas):
- “Methodology for disinformation detection”
- “Platform coordination best practices”
- “Fact-checker integration”
- “Election monitoring frameworks”
Joint “Science”
Stanford and EU Commission published joint reports:
Example:
“Addressing Health Misinformation in the EU: Lessons from the United States” (October 2021)
Authors: Renée DiResta (Stanford), EU Commission DG-SANTE.
The report recommended:
- “Proactive content moderation” (censorship before it spreads)
- “Trusted Flagger systems” (NGOs with censorship authority)
- “Algorithmic demotion” (make invisible instead of delete)
All three recommendations were integrated into the DSA.
Stanford wrote the blueprint. The EU implemented it.
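Of the three recommendations, "algorithmic demotion" is the most opaque, so a toy sketch may help. Everything here is invented for illustration (the function name, the penalty factor, the scoring model); it only shows the mechanic: a flagged post is never deleted, its ranking score simply collapses, so the feed stops showing it.

```python
# Illustrative sketch of "algorithmic demotion" (downranking).
# The penalty factor and scoring model are assumptions for illustration,
# not any platform's documented algorithm.

def rank_score(engagement: float, flagged: bool, penalty: float = 0.05) -> float:
    # A flagged post keeps existing, but its feed score is multiplied
    # by a small penalty: "invisible instead of deleted".
    return engagement * (penalty if flagged else 1.0)

posts = [("normal post", 100.0, False), ("flagged post", 100.0, True)]
ranked = sorted(posts, key=lambda p: rank_score(p[1], p[2]), reverse=True)
print([p[0] for p in ranked])  # → ['normal post', 'flagged post']
```

Note the design consequence the article points at: because nothing is removed, there is no deletion notice, and the affected user has no obvious signal that anything happened.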
EDMO Uses Stanford Methods
EDMO—the EU fact-checker network—uses exactly the same methodology as Stanford SIO:
1. “Narrative Tracking”
Identification of “problematic narratives” (not individual posts, but entire topics).
2. “Network Analysis”
Identification of “coordinated campaigns” by analyzing who shares whom, who follows whom.
3. “Platform Escalation”
Direct contact with platforms with “recommendations” for moderation.
This is not coincidence. This is copied.
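The "network analysis" step above can also be sketched. This is a minimal, entirely hypothetical version of coordinated-sharing detection using toy data: accounts that repeatedly share the same links within a short time window get flagged as a "coordinated campaign", whether or not any actual coordination exists, which is precisely the ambiguity the article criticizes.

```python
# Minimal sketch of "coordinated sharing" detection. Toy data and
# thresholds are invented; no real EDMO or SIO tooling is depicted.
from collections import defaultdict
from itertools import combinations

shares = [  # (account, url, hour_posted)
    ("alice", "news/1", 0), ("bob", "news/1", 0),
    ("alice", "news/2", 5), ("bob", "news/2", 5),
    ("carol", "news/3", 9),
]

def coordinated_pairs(shares, window=1, min_overlap=2):
    # Group shares by URL, then count account pairs that shared the
    # same URL within `window` hours of each other.
    by_url = defaultdict(list)
    for account, url, t in shares:
        by_url[url].append((account, t))
    overlap = defaultdict(int)
    for posts in by_url.values():
        for (a, ta), (b, tb) in combinations(posts, 2):
            if a != b and abs(ta - tb) <= window:
                overlap[tuple(sorted((a, b)))] += 1
    # Pairs above the threshold are flagged as "coordinated".
    return {pair for pair, n in overlap.items() if n >= min_overlap}

print(coordinated_pairs(shares))  # → {('alice', 'bob')}
```

Notice that two people who simply follow the same news sources would trigger the same flag: the method detects correlation, not intent.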
Why Stanford? Legitimacy Laundering
Why do governments use universities as proxies?
1. Plausible Deniability
The government can say: “We didn’t censor. Stanford did research. Platforms decided themselves.”
This is a lie. But it’s a legal lie.
2. Scientific Legitimation
“Stanford says this is misinformation” sounds better than “The government says this is misinformation.”
Science = authority. Criticism of Stanford = “science denial.”
3. First Amendment Circumvention
The US government cannot directly censor. But it can finance universities. And universities can “research.” And platforms can follow “scientific recommendations.”
This is constitutional arbitrage.
4. International Cooperation
Stanford can cooperate with the EU Commission without diplomatic complications.
If the FBI directly coordinated with DG-Connect, that would be politically delicate.
But Stanford is “independent.” So no problem.
Perfect camouflage.
The Result: Global Censorship Infrastructure
What we see here is not conspiracy theory.
It’s a documented, institutionalized, transatlantic censorship infrastructure.
The actors:
- US government (DHS, CDC, White House)
- EU Commission (DG-Connect, DG-SANTE)
- Stanford SIO (the switch)
- EDMO (EU equivalent of SIO)
- Platforms (the enforcers)
The method:
- “Research” identifies “problematic narratives”
- Governments cannot directly censor (constitutionally)
- So they go via Stanford/EDMO
- Stanford/EDMO “recommends” to platforms
- Platforms censor
The result:
- True information is deleted (“true but misleading”)
- Criticism of governments is suppressed (“undermines trust”)
- Dissidents are made invisible (algorithmic throttling)
And the public never learns that the government is behind it.
This is the perfect authoritarian system.
Democratically elected governments cannot officially censor.
So they build an infrastructure that does it for them.
And they call it “science.”
Part 7 – Germany’s Special Role
€1.5 billion per year – Where does the money flow? NetzDG as DSA precursor. Correctiv: Who finances Germany’s most powerful fact-checker? “Demokratie leben!” – The program in detail. Amadeu Antonio Foundation and the NGO landscape. Why Germany is the center of the EU censorship system.
Michael Hollister is a geopolitical analyst and investigative journalist. He served six years in the German military, including peacekeeping deployments in the Balkans (SFOR, KFOR), followed by 14 years in IT security management. His analysis draws on primary sources to examine European militarization, Western intervention policy, and shifting power dynamics across Asia. A particular focus of his work lies in Southeast Asia, where he investigates strategic dependencies, spheres of influence, and security architectures. Hollister combines operational insider perspective with uncompromising systemic critique—beyond opinion journalism. His work appears on his bilingual website (German/English) www.michael-hollister.com, at Substack at https://michaelhollister.substack.com and in investigative outlets across the German-speaking world and the Anglosphere.
SOURCES
Election Integrity Partnership (EIP) Final Report: “The Long Fuse: Misinformation and the 2020 Election” (March 3, 2021)
Stanford Digital Repository
Official EIP Website
Virality Project Final Report: “Memes, Magnets and Microchips: Narrative dynamics around COVID-19 vaccines” (2021)
(Direct PDF link: stacks.stanford.edu/file/druid:mx395xj8490/Virality_project_final_report.pdf)
Twitter Files #19 – Matt Taibbi Report on Virality Project (March 17, 2023)
(Contains leaked Virality Project-CDC emails and meeting minutes)
U.S. House Judiciary Report: “The Weaponization of ‘Disinformation’ Pseudo-Experts and Bureaucrats” (November 2023)
(Contains subpoenaed internal EIP emails and JIRA ticket data)
U.S. House Committee on the Judiciary: “The Foreign Censorship Threat, Part II” (February 3, 2026)
The following documents are cited in the US House Report (not separately public):
- Stanford Internet Observatory Internal Documents (pages 92-96)
- EU-Stanford Meeting Agendas 2020-2024 (pages 98-102)
- Internal EIP Emails – via Congressional Subpoena (pages 104-108)
- CDC-Virality Project Meeting Minutes (pages 110-114)
© Michael Hollister — All rights reserved. Redistribution, publication or reuse of this text requires express written permission from the author. For licensing inquiries, please contact the author via www.michael-hollister.com.
Newsletter
🇩🇪 German: Understand geopolitical connections through primary sources, historical parallels, and documented power structures. Monthly, bilingual (DE/EN).
🇬🇧 English: Understand geopolitical contexts through primary sources, historical patterns, and documented power structures. Monthly, bilingual (DE/EN).