What our second measurement says about misinformation on major platforms in Europe


Science Feedback and partners have released a second measurement of Structural Indicators across six Very Large Online Platforms (VLOPs) in four EU member states (France, Spain, Poland, Slovakia). This second wave, conducted in October 2025, allows us for the first time to compare results across two independent measurement periods, and the consistency of findings confirms we are measuring structural features of platforms, not short-term noise.

Below we highlight the headline results on (1) Prevalence of misinformation, (2) the “misinformation premium”, (3) Monetisation, (4) AI-generated mis/disinformation (new this wave), and (5) Audience growth (new this wave).

For methods, all figures, and country breakdowns, download the full report.

This report is published under the SIMODS project, led by Science Feedback in partnership with the Universitat Oberta de Catalunya, Check First, Newtral, Demagog SK, and Pravda.

Key findings at a glance

  • Prevalence: TikTok continues to show the highest prevalence of mis/disinformation (~25% of exposure-weighted posts), up from ~20% in the first measurement. YouTube also saw a notable increase, from ~8.5% to ~12%. Three platforms (TikTok, X/Twitter, and YouTube) now contain more problematic content than credible content in our samples.
  • Misinformation premium: The interaction advantage of low-credibility accounts over high-credibility ones persisted or worsened on most platforms. On X/Twitter it rose from ~4× to ~10×, and on YouTube from ~8.5× to ~11×. LinkedIn remains the only exception where no significant premium is observed.
  • Monetisation: Platform opacity remains the rule. Where data allowed inference (YouTube and Facebook), a substantial share of eligible low-credibility accounts appears to be monetised (81% on YouTube and 22% on Facebook), indicating that demonetisation policies are not working as intended.
  • AI-generated mis/disinformation: One in four mis/disinformation posts on TikTok (24%) and nearly one in five on YouTube (19%) contain AI-generated content. Over 83% of these carry no label. Health misinformation dominates on both platforms.
  • Audience growth: On most platforms, no significant difference in follower growth was found between high- and low-credibility accounts. X/Twitter is the exception: low-credibility accounts are growing their audiences at roughly 3.5 times the rate of high-credibility ones.

Prevalence: how much misleading content do users encounter?

Using the same methodology as the first wave (exposure-weighted random samples of public-interest content, annotated by professional fact-checkers), we measured the fraction of posts containing false or misleading information across all six platforms.
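
For readers who want a concrete sense of how such a figure is produced, the minimal Python sketch below computes a prevalence estimate with a 95% confidence interval from a set of binary annotations. It assumes the sample has already been drawn proportionally to exposure and uses a standard Wilson score interval; the function name, toy numbers, and the choice of interval method are illustrative assumptions, not the report's exact estimator (the full methodology is described in the report).

```python
import math

def prevalence_with_ci(labels, z=1.96):
    """Estimate mis/disinformation prevalence and an approximate 95% Wilson
    score confidence interval from binary annotations (1 = post rated false
    or misleading, 0 = not). Assumes the sample was drawn proportionally to
    exposure, so each annotated post counts once."""
    n = len(labels)
    p = sum(labels) / n
    # Wilson score interval for a binomial proportion
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, centre - half_width, centre + half_width

# Illustrative numbers only: 300 of 1,200 annotated posts rated misleading.
p, lo, hi = prevalence_with_ci([1] * 300 + [0] * 900)
print(f"prevalence ~ {p:.1%}, 95% CI [{lo:.1%}, {hi:.1%}]")
```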

Results show significant, and in some cases worsening, differences between platforms (Figure 1):

  • TikTok exhibits the highest prevalence at 25% [22.6%, 27.5%], up from ~20% in the first measurement. Roughly one in four posts on topics we investigated contains misleading or false information.
  • Facebook follows at 15% [13.2%, 16.8%], YouTube at 12% [10.6%, 13.9%], and X/Twitter at 11% [9.0%, 12.2%].
  • Instagram stands at 8% [6.5%, 9.2%].
  • LinkedIn again records the lowest prevalence at 1% [0.5%, 1.5%].

Figure 1 – Prevalence of mis/disinformation across platforms (with 95% confidence intervals).

Health misinformation remains the dominant category, representing ~43% of all identified mis/disinformation posts, followed by the Russia–Ukraine war (23%) and national politics (12%); see Figure 2.

Figure 2 – Topic distribution of mis/disinformation posts across the studied data sample.

The “misinformation premium”: who gets more interactions for the audience they have?

As in the first wave, we compare interactions per post per 1 000 followers across high- and low-credibility accounts. Across almost all platforms, low-credibility accounts continue to receive disproportionate engagement relative to their audience size, a pattern that has persisted or worsened since the first measurement (Figure 3).

  • On YouTube, the misinformation premium now stands at ~11×: a low-credibility account receives around eleven times more interactions per post than a high-credibility one of comparable size.
  • X/Twitter shows a similar ratio of ~10×, up sharply from ~4× in the first wave.
  • Facebook stands at ~9×, Instagram at ~4×, and TikTok at ~2×.
  • LinkedIn is the only platform where no statistically significant premium is observed.

This premium has proven robust across two independent measurement periods. It is a structural feature of how most platforms amplify content, not an artefact of any single data collection window.

Figure 3 – Average interactions per post per 1 000 followers by account credibility, showing that low-credibility accounts get more interactions than credible accounts (except on LinkedIn).
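
To make the metric concrete, here is a minimal sketch of how an interaction rate and the resulting "premium" ratio could be computed from per-account data. The data layout, the simple averaging, and the toy numbers are illustrative assumptions only; the report's exact aggregation and significance testing are described in the full methodology.

```python
from statistics import mean

def interaction_rate(post_interactions, followers):
    """Interactions per post per 1,000 followers for one account: total
    interactions across sampled posts, divided by the number of posts,
    normalised by audience size in thousands."""
    per_post = sum(post_interactions) / len(post_interactions)
    return per_post / (followers / 1000)

def misinformation_premium(low_cred_accounts, high_cred_accounts):
    """Ratio of the average interaction rate of low-credibility accounts to
    that of high-credibility accounts. A value of ~10 means low-credibility
    accounts get roughly 10x the engagement for the audience they have."""
    low = mean(interaction_rate(p, f) for p, f in low_cred_accounts)
    high = mean(interaction_rate(p, f) for p, f in high_cred_accounts)
    return low / high

# Illustrative toy data: (per-post interaction counts, follower count).
low_cred = [([900, 1200, 700], 20_000), ([400, 650], 8_000)]
high_cred = [([300, 250, 280], 50_000), ([120, 90], 15_000)]
print(f"premium ~ {misinformation_premium(low_cred, high_cred):.1f}x")
```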

Monetisation: platforms are still funding the accounts spreading misinformation

For the second consecutive wave, meaningful cross-platform comparison of monetisation remains impossible due to platform opacity. Data access requests submitted under DSA Article 40.4 in January 2026 had received no response at the time of writing.

Where public signals allowed partial inference (YouTube and Facebook), the findings are consistent with the first report:

  • On YouTube, 81% of eligible low-credibility channels appear to be monetised, compared to 90% of eligible high-credibility channels. The gap is narrow.
  • On Facebook, the gap is wider (22% vs. 51%), but the fact that over one in five eligible low-credibility accounts appears to benefit from ads is a signal that enforcement is incomplete.

These results confirm that platforms are, to a meaningful extent, financially sustaining the very accounts that repeatedly spread misleading content.

AI-generated mis/disinformation: a growing and largely unlabelled threat

This wave introduces a new indicator tracking the share of mis/disinformation that contains AI-generated elements (images or video). The results point to a phenomenon that has grown rapidly and is poorly managed by platforms:

AI-generated mis/disinformation in our sample accumulated approximately 34 million views, with TikTok accounting for 69% of those views.

TikTok leads with 24% of its mis/disinformation posts containing AI-generated content; YouTube follows at 19%. Both platforms are exclusively video-based, which partly explains the higher figures. Facebook stands at 7%, while X/Twitter (4.4%) and Instagram (2.6%) show lower proportions. LinkedIn shows no instances in our sample.

Of all AI-generated mis/disinformation identified, only 16.5% carries any visible label (14% on TikTok, just 1.8% on Facebook, and 0.9% on YouTube). No labels were observed on the remaining platforms.

Health misinformation dominates the AI-generated category on both video platforms, accounting for 61% on YouTube and 57% on TikTok. A rising trend is the impersonation of real doctors or the use of fictitious AI-generated doctors.

What two waves of measurement tell us

The central value of this second report is not any single finding in isolation: it is the consistency of results across time. Prevalence estimates, the misinformation premium, and monetisation patterns are all broadly reproduced from the first wave. This reproducibility confirms that the phenomena we are measuring are structural, not incidental, and that the methodology is sound enough to support long-term monitoring.

The integration of the Code of Conduct on Disinformation into the DSA framework, effective July 2025, gives these measurements a direct regulatory function. The SIMODS indicators were designed to be comparable across platforms and stable over time. With two waves now complete, they are ready to serve as formal benchmarks in auditing and compliance assessments. What is now required is the political will to use them.

Read the full report

Second Measurement of the State of Online Disinformation in Europe on Very Large Online Platforms. Second report of the SIMODS project (Structural Indicators to Monitor Online Disinformation Scientifically) (PDF)

Includes full methods, confidence intervals, country-level results, AI-generated content analysis, audience growth data, and recommendations for policymakers, platforms, and funders.

