The EU Censorship System – Part 4

Censorship does not begin with deletion – it begins with invisibility.
Internal documents reveal in detail how TikTok and Meta altered their content policies under pressure from the European Commission. Vague concepts such as “marginalizing speech,” “coded language,” or “undermining public trust” now enable the suppression of lawful opinions through algorithmic demotion. This article documents what is removed, what is throttled, and how shadow-banning creates an artificial illusion of consensus without users ever being informed.

TikTok & Meta Policy Changes

Please read Part 1 here:
The Machinery – How the System Works

Please read Part 2 here:
The Crimes – What Was Concretely Done

Please read Part 3 here:
Democracy Shield in Detail


by Michael Hollister
Exclusively published at Michael Hollister on February 12, 2026

2,473 words · 13 minutes reading time

This analysis is made available for free – but high-quality research takes time, money, energy, and focus. If you’d like to support this work, you can do so here:

Alternatively, support my work with a Substack subscription – from as little as 5 USD/month or 40 USD/year!
Let’s build a counter-public together.

The Concrete Censorship Rules
What Gets Deleted and Throttled

How Platforms Changed Their Rules to Obey the EU – and Which Posts Disappeared

The Invisible Censorship

Part 1 showed you the system: the DSA, “voluntary” codes, NGO networks, a €3-5 billion budget. Part 2 showed the crimes: eight manipulated elections, COVID censorship, the suppression of American citizens. Part 3 showed the endgame: the Democracy Shield, user verification from 2027, the end of anonymity.

But how does censorship work concretely in daily life?

When a post is deleted, users usually see: “This content violates our Community Guidelines.” Which guideline exactly? Why? Who decided? No answer.

This is intentional. Censorship works best when it’s invisible. When users don’t know what’s allowed and what isn’t. When rules remain vague. When arbitrariness reigns.

But the internal documents before the US House of Representatives show: There are very concrete rules. Platforms have systematically changed their Community Guidelines and content moderation policies under pressure from the EU Commission.

These changes were not publicly communicated. No press release. No blog post. Simply quietly implemented.

Why? Because these rules, if publicly discussed, would trigger outrage.

Here they are. In black and white. With concrete examples.

TikTok Community Guidelines – The DSA Compliance Changes

The Change of March 2024

On March 20, 2024, TikTok updated its Community Guidelines. An internal document—“TikTok Community Guidelines Update Executive Summary”—explains why:

“As advised by the legal team, the updates were mainly related to compliance with the Digital Services Act (DSA).”

Not “to better protect users.” Not “to improve the community.” But: DSA compliance.

The EU Commission had demanded that TikTok “mitigate systemic risks.” TikTok obeyed.

The New Censorship Categories

The internal document lists five new categories of content that are censored starting in March 2024:

1. “Marginalizing Speech”

Definition (from TikTok document):

“Content that marginalizes individuals or groups based on protected characteristics, including through subtle or indirect language.”

“Subtle or indirect language”—that’s the key.

Example 1 (hypothetical but realistic based on the rule):

A user posts: “I think women’s sports leagues should be reserved for biological women.”

Analysis:

  • Is this “marginalizing”? For whom? For trans women.
  • Is this “subtle or indirect”? Yes—the user doesn’t directly say “trans women aren’t women,” but it’s implied.
  • Result: Post is deleted or throttled as “marginalizing speech.”

Example 2 (from leaked moderation logs):

A user posts: “Islam is incompatible with Western values.”

Analysis:

  • Is this “marginalizing”? For whom? For Muslims.
  • Protected characteristic: Religion.
  • Result: Post deleted.

The problem:

These statements are opinions. They don’t attack any specific person. They don’t call for violence. They are political or cultural statements.

But under “marginalizing speech,” they are forbidden.

2. “Coded Statements That Normalize Inequitable Treatment”

Definition:

“Statements that use coded language or dog whistles to promote unequal treatment of individuals or groups.”

“Dog whistles”—this is political combat language. It means: statements that sound superficially harmless but have a “hidden” discriminatory meaning.

Who decides what a “dog whistle” is?

TikTok moderators. Trained by EU-funded NGOs.

Example (from moderation training documents):

A user posts: “We must protect our borders.”

Analysis:

  • Superficially: Statement about border security.
  • “Coded meaning”: Anti-immigration rhetoric, “dog whistle” for xenophobia.
  • Result: Post is throttled as “coded statement.”

Example 2:

A user posts: “All Lives Matter.”

Analysis:

  • Superficially: Positive statement about the value of all life.
  • “Coded meaning”: Rejection of Black Lives Matter, “dog whistle” for racism.
  • Result: Post deleted.

The problem:

“Coded language” can mean anything. Any statement can be interpreted as a “dog whistle” if the moderator has a particular political viewpoint.

3. “Misinformation That Undermines Public Trust”

Definition:

“False or misleading information that undermines trust in democratic institutions, public health authorities, or electoral processes.”

Read that carefully. “Undermines trust.”

Not “is factually false.” But “undermines trust.”

Example 1:

A user posts: “The EU Commission was not directly elected by citizens.”

Factual status: True. The Commission President is proposed by the European Council and the Commissioners are nominated by the member states; the European Parliament confirms them. Citizens don’t vote for the Commission directly.

But: This statement “undermines trust in democratic institutions” (the EU).

Result: Post is labeled with “Missing Context,” reach reduced.

Example 2:

A user posts: “Lockdowns caused massive economic damage.”

Factual status: True. Lockdowns led to job losses, business failures, mental health problems.

But: This statement “undermines trust in public health authorities.”

Result: Post throttled as “misleading information.”

The problem:

True statements are censored because they “undermine trust.” This is the definition of authoritarian censorship.

4. “Media Presented Out of Context”

Definition:

“Photos, videos, or audio clips presented in a way that misrepresents their original context or meaning.”

This sounds reasonable—actually manipulated media should be marked.

The problem: Who decides what’s “out of context”?

Example (from leaked cases):

A user posts a video: A politician says in a speech: “We must restrict freedom of speech to protect democracy.”

TikTok moderation: “Out of context—the politician was only referring to hate speech, not general freedom of speech.”

The post is deleted.

But: The user didn’t edit the video. He just uploaded a clip. The politician actually said these words.

The problem: “Context” is subjective. Moderators can mark any post as “out of context.”

5. “Misrepresented Authoritative Information”

Definition:

“Content that misrepresents or disputes information from authoritative sources such as governments, health organizations, or scientific institutions.”

“Disputes.”

This means: If a government or the WHO says something, and a user disagrees, that’s grounds for censorship.

Example 1:

A user posts: “The WHO was wrong about COVID multiple times—for example, they initially claimed masks don’t help.”

Factual status: True. The WHO said in February 2020 masks were not necessary for the general population. They later changed their position.

But: This post “disputes authoritative information” (the current WHO position).

Result: Post throttled.

Example 2:

A user posts: “Government inflation numbers are unreliable—they don’t account for food and energy prices.”

Factual status: Debatable. Economists argue about how inflation should be measured.

But: This post “disputes authoritative information” (government statistics).

Result: Post marked as “misleading.”

The problem:

Governments and international organizations are not infallible. They can be wrong. Citizens must have the right to question them.

But under this rule: Whoever questions authority gets censored.

Meta (Facebook/Instagram) – The “Borderline Content” Strategy

Community Standards Updates 2023-2024

Meta updated its Community Standards in September 2023 and again in March 2024.

The changes were more subtle than TikTok’s—but just as effective.

The “Borderline Content” Concept

Meta introduced a new category: “Borderline Content.”

Definition (from Meta documents):

“Content that does not explicitly violate our Community Standards, but comes close to the line.”

This is brilliant—from a censorship perspective.

Posts that don’t violate rules can still be censored. They’re just labeled “borderline” and algorithmically suppressed.

What counts as “Borderline”?

According to internal training documents for Meta moderators:

“Borderline Hate Speech”:

  • “Statements that could be perceived as marginalizing certain groups, even if not explicitly hateful.”
  • Example: “I prefer traditional marriage between a man and a woman.”
  • Analysis: Could be “perceived as marginalizing” LGBTQ+ people.
  • Result: Marked as “Borderline,” reach reduced by 40%.

“Borderline Misinformation”:

  • “Claims that contradict mainstream expert consensus, even if not definitively proven false.”
  • Example: “Natural immunity from COVID infection is as effective as vaccination.”
  • Analysis: Contradicts “mainstream expert consensus” (though multiple studies support this).
  • Result: Marked as “Borderline,” reach reduced by 60%.

“Borderline Political Content”:

  • “Content that promotes polarizing political views.”
  • Example: “The EU is becoming undemocratic.”
  • Analysis: “Polarizing political view.”
  • Result: Reach reduced by 50%.

The Implementation

How does Meta decide what’s “borderline”?

Step 1: AI scans every post for keywords and semantic patterns.

Step 2: If AI flags a post as “potentially borderline,” it goes to human review.

Step 3: Moderator decides: Violates rules → delete. Borderline → throttle. Okay → approve.

Step 4: If a user posts multiple “borderline” items, their entire account is marked as “repeat offender” and all future posts are automatically throttled by 80%.

The result:

A user who regularly posts critical content about migration, COVID policies, or EU politics becomes practically invisible—without ever being banned.
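
For illustration only, here is a minimal sketch of what such account-level escalation could look like in code. The reach multipliers mirror the 40/60/80% figures cited in the congressional report; the labels, strike threshold, and function names are hypothetical assumptions, not Meta’s actual implementation.

```python
# Illustrative sketch of account-level "borderline" escalation.
# The 40/60/80% reach reductions come from the report; the labels, the
# strike threshold, and this structure are assumptions, not Meta code.

BORDERLINE_PENALTIES = {
    "borderline_hate_speech": 0.60,      # -40% reach
    "borderline_misinformation": 0.40,   # -60% reach
}
REPEAT_OFFENDER_MULTIPLIER = 0.20        # -80% reach on ALL future posts
REPEAT_OFFENDER_THRESHOLD = 3            # hypothetical strike count

def reach_multiplier(post_label: str, account_strikes: int) -> float:
    """Return the fraction of its normal reach a post is allowed to get."""
    if account_strikes >= REPEAT_OFFENDER_THRESHOLD:
        # Account-level throttling overrides the per-post label.
        return REPEAT_OFFENDER_MULTIPLIER
    return BORDERLINE_PENALTIES.get(post_label, 1.0)

# A third "borderline" strike flips the account to repeat-offender status:
print(reach_multiplier("borderline_misinformation", account_strikes=1))  # 0.4
print(reach_multiplier("harmless_content", account_strikes=3))           # 0.2
```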

The Algorithm Changes – How “Demotion” Works

What is “Demotion”?

“Demotion” means: A post isn’t deleted, but algorithmically treated so it becomes practically invisible.

Technically:

Every post gets a “Distribution Score”—an algorithm calculates how many users will see the post.

Factors:

  • Engagement (likes, comments, shares)
  • Recency (how new is the post)
  • Relevance (is the user interested)
  • Safety Score (is the content “problematic”)

When a post is marked as “Borderline” or “Misinformation,” the Safety Score drops drastically → Distribution Score drops → Post is shown to almost no one.
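
A hedged sketch of such a scoring step, assuming a simple weighted model: the factor names come from the document, while the weights and the multiplicative safety term are illustrative guesses, not any platform’s real formula.

```python
# Sketch of a "Distribution Score". Factor names follow the article;
# the weights and the multiplicative safety gate are assumptions.

def distribution_score(engagement: float, recency: float,
                       relevance: float, safety: float) -> float:
    """All inputs normalized to 0..1; higher score = shown to more users."""
    quality = 0.5 * engagement + 0.2 * recency + 0.3 * relevance
    # The safety score acts as a gate: a "borderline" or "misinformation"
    # label slashes it, and the whole distribution collapses with it.
    return quality * safety

normal_post     = distribution_score(0.7, 0.9, 0.8, safety=1.0)   # ~0.77
borderline_post = distribution_score(0.7, 0.9, 0.8, safety=0.25)  # ~0.19
print(normal_post, borderline_post)
```

The design choice matters: because the safety term multiplies everything else, even a highly engaging post becomes nearly invisible once it is labeled.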

Reach reduction:

Internal Meta data (from the US Congressional report) shows:

  • “Borderline Hate Speech”: -40% reach
  • “Borderline Misinformation”: -60% reach
  • “Repeated Borderline Violator” (user who posts borderline content multiple times): -80% reach for all posts


“Shadow-Banning” – Does It Exist?

Officially, platforms say: “We don’t do shadow-banning.”

Definition of shadow-banning:

A user is not informed that their posts are being throttled. They see their post normally. But no one else sees it.

The truth:

This is exactly what “Borderline Content Demotion” does.

Meta doesn’t call it “shadow-banning.” They call it “reducing distribution of low-quality content.”

But the result is the same:

  • User doesn’t know they’re being throttled
  • Their posts disappear from newsfeeds
  • Their reach collapses

Proof (from internal documents):

A TikTok engineer wrote in an internal Slack channel (March 2024):

“The new DSA-compliant content filters are extremely aggressive. We’re seeing false positive rates of 15-20% on political content. Users are complaining their posts ‘disappear’—they’re not deleted, just de-boosted to near-zero visibility.”

15-20% false positives. That means: roughly every fifth to seventh political post marked as “problematic” is falsely marked.

But the user never finds out.

How Platforms Decide: AI + Human Moderators

Phase 1: AI Filtering

Every post is scanned by an AI model. The model was trained on millions of examples of “problematic content.”

The model searches for:

  • Keywords (e.g., “immigrant,” “lockdown,” “vaccine,” “gender,” “Islam”)
  • Semantic patterns (e.g., “X is dangerous for Y”)
  • Tonality (e.g., “aggressive language”)

If the model classifies a post as “potentially problematic” (score above a threshold), it’s forwarded to Phase 2.
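
As a rough illustration of such a pre-filter, the sketch below combines keyword hits and a crude “X is dangerous for Y” pattern into a risk score. The keywords are the ones listed above; the weights, pattern, and threshold are assumptions, tonality is omitted, and real systems use trained classifiers rather than regular expressions.

```python
import re

# Simplified Phase-1 pre-filter sketch: keyword hits and a crude semantic
# pattern add to a risk score; above a threshold the post goes to human
# review. Weights, pattern, and threshold are illustrative assumptions.

KEYWORDS = {"immigrant", "lockdown", "vaccine", "gender", "islam"}
RISK_PATTERN = re.compile(r"\b\w+ (is|are) (dangerous|a threat) (for|to) \w+", re.I)
THRESHOLD = 0.5

def risk_score(text: str) -> float:
    lowered = text.lower()
    score = 0.3 * sum(1 for kw in KEYWORDS if kw in lowered)  # keyword hits
    if RISK_PATTERN.search(text):                             # "X is dangerous for Y"
        score += 0.4
    return score

def needs_human_review(text: str) -> bool:
    return risk_score(text) >= THRESHOLD

print(needs_human_review("Lockdowns are a threat to small businesses"))  # True
print(needs_human_review("I baked bread today"))                         # False
```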

Phase 2: Human Review

A moderator (often in the Philippines, India, or North Africa) reviews the post.

The moderator has 30-60 seconds per post.

The moderator follows a decision tree:

  • Does the post explicitly violate a rule? → Delete
  • Is the post “borderline”? → Throttle
  • Is the post okay? → Approve
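
Written out, this decision tree is almost trivially simple, which is precisely the point. A minimal sketch, assuming only the three outcomes described above (the names are illustrative, not platform code):

```python
from enum import Enum

# Sketch of the moderator decision tree described above. Categories and
# actions mirror the article; the names are illustrative assumptions.

class Action(Enum):
    DELETE = "delete"
    THROTTLE = "throttle"   # demoted, the user is not notified
    APPROVE = "approve"

def moderate(explicit_violation: bool, borderline: bool) -> Action:
    if explicit_violation:
        return Action.DELETE
    if borderline:
        return Action.THROTTLE
    return Action.APPROVE

# "When in doubt, throttle": a moderator with 30-60 seconds per post is
# effectively pushed toward the middle branch.
print(moderate(explicit_violation=False, borderline=True))  # Action.THROTTLE
```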

The problem:

Moderators are overworked, underpaid, poorly trained. They make mistakes. When in doubt: Throttling is safer than approving (because otherwise the platform could be punished for “too little moderation”).

The result:

Massive over-moderation. Thousands of posts are falsely throttled or deleted.

What This Means for Users – The Chilling Effect

Self-Censorship as Result

The goal of these policies is not to delete every critical post.

The goal is: Get users to censor themselves.

How it works:

  1. User posts something critical about EU, migration, COVID, gender
  2. Post is deleted or throttled
  3. User thinks: “If I post this again, I’ll be banned”
  4. User no longer posts anything critical

This is the “chilling effect”—the deterrent effect.

People don’t stay silent because they changed their mind. They stay silent because they’re afraid.

The Invisible Opinion Manipulation

When critical posts are systematically throttled, the public only sees one side:

  • Pro-EU posts are not throttled
  • Anti-EU posts are throttled

The result:

Users think: “Everyone is pro-EU. I’m the only one who’s critical. Maybe I’m wrong.”

This is social isolation through algorithmic manipulation.

But in reality: Millions of people think the same. Their posts are just made invisible.

Example: Germany and Migration

A user in Germany posts in 2024:

“I’m concerned about uncontrolled migration.”

What happens:

  • AI marks as “Borderline Anti-Migration Rhetoric”
  • Post is throttled by 60%
  • Instead of 1,000 people seeing it, 400 see it
  • User thinks: “Nobody’s interested in my opinion.”

In reality:

Millions of Germans think the same. But their posts are also throttled.

The result: An artificial illusion of consensus.

Summary

TikTok and Meta have radically changed their content policies under pressure from the EU Commission:

New censorship categories:

  • “Marginalizing speech”—political opinions about gender, religion, culture
  • “Coded language”—any statement can be interpreted as a “dog whistle”
  • “Undermines public trust”—true statements that criticize authorities
  • “Out of context”—subjective, arbitrary
  • “Disputes authoritative information”—questioning governments forbidden

Algorithm manipulation:

  • “Borderline content” is throttled by 40-80%
  • Users are not informed
  • “Shadow-banning” under a different name

The result:

  • Massive self-censorship
  • Artificial illusion of consensus
  • Critical voices disappear

This is not conspiracy theory. This is documented. In internal documents. With timestamps. With concrete examples.

Part 5 – Breton vs. Musk

The showdown between EU Commissioner Thierry Breton and Elon Musk. The threat letter of August 12, 2024. The €120 million fine against X. Why Breton had to resign. The story of a failed intimidation.

Michael Hollister is a geopolitical analyst and investigative journalist. He served six years in the German military, including peacekeeping deployments in the Balkans (SFOR, KFOR), followed by 14 years in IT security management. His analysis draws on primary sources to examine European militarization, Western intervention policy, and shifting power dynamics across Asia. A particular focus of his work lies in Southeast Asia, where he investigates strategic dependencies, spheres of influence, and security architectures. Hollister combines operational insider perspective with uncompromising systemic critique—beyond opinion journalism. His work appears on his bilingual website (German/English) www.michael-hollister.com, on Substack at https://michaelhollister.substack.com, and in investigative outlets across the German-speaking world and the Anglosphere.

This analysis is made available for free – but high-quality research takes time, money, energy, and focus. If you’d like to support this work, you can do so here:

Alternatively, support my work with a Substack subscription – from as little as 5 USD/month or 40 USD/year!
Let’s build a counter-public together.

SOURCES

U.S. House Committee on the Judiciary: “The Foreign Censorship Threat, Part II” (February 3, 2026)

The following internal documents are cited in the report (not publicly accessible):

  • TikTok Community Guidelines Update Executive Summary (March 20, 2024) – Pages 45-48
  • Meta Community Standards Updates (September 2023, March 2024) – Pages 52-56
  • Internal Moderation Training Documents – Pages 58-63
  • TikTok Engineering Slack Messages – Page 89

TikTok Community Guidelines (public version)

Meta Community Standards (public version)


© Michael Hollister —
All rights reserved. Redistribution, publication or reuse of this text requires express written permission from the author. For licensing inquiries, please contact the author via www.michael-hollister.com.


Newsletter

🇩🇪 German: Understand geopolitical connections through primary sources, historical parallels, and documented power structures. Monthly, bilingual (DE/EN).

🇬🇧 English: Understand geopolitical contexts through primary sources, historical patterns, and documented power structures. Monthly, bilingual (DE/EN).