Truth Engine

What Percentage of Reviews are Fake?

The number one question we get asked here at TruthEngine® is always a version of the same thing:

Q: "So, how many reviews are fake then?"

The answer we give is always the same:

A: Half of the reviews we analyse are suspicious.

The reaction to that answer varies hugely.

Some people are not remotely surprised. Others react with shock, indignation, or an immediate desire to doubt the process and question our rigour.

From there, the conversation usually follows a familiar path. How do you know? How did you arrive at that number? Isn't it supposed to be more like 7-15%?

The Reality

We have spent the last five years analysing many millions of reviews, in minute detail, across the largest review platforms in the world.

There is undoubtedly a degree of commonality to the sectors we analyse - crowded and commercially lucrative markets such as utilities, telecoms, travel, hospitality and healthcare. But what we find across all of them is a remarkably similar set of dynamics and behaviours.

Time and time again, the proportion flagged as suspicious sits between 40 and 55%.

Where Does the 7 to 15% Number Come From?

The “7 to 15% of reviews are fake” figure that circulates widely is old, and it is based on two numbers that both need unpicking.

The 7% figure comes from the proportion of reviews that Trustpilot states it removes each year in its self-published transparency report, using its internal systems.

The 15% figure comes from a 2023 UK government study, which estimated that between 11 and 15% of product reviews in certain e-commerce categories were likely to be fake. The report itself noted that this was likely to be a conservative estimate, given its focus on specific segments and platforms.

Detecting fake reviews is not a trivial task. Much on-platform review-authenticity tooling still relies heavily on IP addresses and author-level markers that are relatively easy to mask or bypass.

What Makes a Review “Suspicious”?

At this point, it's important to distinguish between the words fake and suspicious. While they are often used interchangeably, there are some important differences.

The UK's DMCC Act, Australia's ACCC regime and other consumer-protection frameworks tend to use the term fake reviews for simplicity. Each then goes on to define what qualifies. In essence, a fake review is one that presents itself as a genuine, independently written account of a real experience, but isn't.

  • Reviews written by businesses themselves, or by their staff or agents, are fake.
  • Reviews written in return for a concealed payment or incentive are fake.
  • Reviews generated by bots or review farms are fake.

The motivation behind a single suspicious review on an open platform is almost impossible to determine. In practice, however, fake reviews rarely appear in isolation. They tend to arrive in bursts, often hundreds or thousands at a time, at which point the pattern they leave becomes statistically anomalous.
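The burst pattern described above can be sketched with a simple volume-anomaly check. This is an illustrative assumption on our part, not TruthEngine's actual methodology: the `flag_burst_days` helper and the z-score threshold are hypothetical, and real detection systems weigh many more signals than daily volume.

```python
from statistics import mean, stdev

def flag_burst_days(daily_counts, z_threshold=3.0):
    """Flag days whose review volume is statistically anomalous.

    daily_counts: list of (date, count) pairs.
    A day is flagged when its count sits more than z_threshold
    standard deviations above the mean daily volume.
    This z-score test is a simplified illustration only.
    """
    counts = [c for _, c in daily_counts]
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # perfectly uniform history: nothing stands out
    return [d for d, c in daily_counts if (c - mu) / sigma > z_threshold]

# A steady trickle of ~12 reviews a day, then a sudden burst.
history = [(f"2025-11-{day:02d}", 12) for day in range(1, 28)]
history += [("2025-11-28", 450), ("2025-11-29", 380), ("2025-11-30", 11)]
print(flag_burst_days(history))  # → ['2025-11-28', '2025-11-29']
```

Even this crude test catches the burst days immediately, which is the point of the paragraph above: a single planted review is invisible, but hundreds arriving together leave a statistical fingerprint.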

For that reason, we prefer the term suspicious, and we report to clients, journalists and regulators the proportion of reviews that display suspicious characteristics.

Where Does Our 50% Number Come From?

To answer this properly, we maintain a continually updated database of all the research we've completed to date.

It is built on the analysis of many millions of reviews across a wide range of competitive sectors in the UK, US and Europe. It spans all major open review platforms - including Google, Tripadvisor and Trustpilot - and includes reviews dating from the early 2000s through to the present day.

Across that dataset, the proportion currently flagged as suspicious stands at 50.156%.

Naturally, that number isn't static. It moves up and down depending on sector, platform and time period. What is striking, however, is how remarkably consistent it is, and that consistency is precisely why we're comfortable standing behind it.

One final clarification matters, however: saying that half of reviews show suspicious characteristics is not the same as saying half of businesses are at fault. In reality, most review manipulation is structural, not malicious.

It emerges from a combination of:

  • intense competitive pressure
  • platform mechanics that reward both review volume and recency
  • low consumer participation in leaving genuine reviews
  • and, increasingly, AI systems that make synthetic content cheap and scalable

The result is a review ecosystem that no longer behaves like a neutral record of customer experience and increasingly requires independent oversight to remain credible.

Why This Matters Now

Reviews influence purchasing decisions, pricing, switching behaviour, brand trust, and increasingly, how AI systems recommend products and services.

At the same time, regulators around the world are clamping down on review manipulation. In the UK, fake and misleading reviews are now explicitly illegal. Similar regimes are emerging elsewhere.

Our Takeaway

The reason the “7-15%” figure persists is not necessarily because it is accurate, but because it is comfortable. It suggests a marginal problem at the edges of an otherwise healthy system.

What we see instead is a system under increasing strain, where manipulation has become normalised and on-platform fake review detection is lagging.

That doesn't mean reviews are any less useful than they used to be. But it does mean they need to be read with caution and interpreted in context.

Written by Daniel Mohacek on January 29, 2026
