The number one question we get asked here at TruthEngine® is always a version of the same thing:
Q: "So, how many reviews are fake then?"
The answer we give is always the same:
A: Half of the reviews we analyse are suspicious.
The reaction to that answer varies hugely.
Some people are not remotely surprised. Others react with shock, indignation, or an immediate desire to doubt the process and question our rigour.
From there, the conversation usually follows a familiar path. How do you know? How did you arrive at that number? Isn't it supposed to be more like 7-15%?
We have spent the last five years analysing multiple millions of reviews, in minute detail, across the largest review platforms in the world.
There is undoubtedly a degree of commonality to the sectors we analyse - crowded and commercially lucrative markets such as utilities, telecoms, travel, hospitality and healthcare. But what we find across all of them is a remarkably similar set of dynamics and behaviours.
Time and time again, the proportion flagged as suspicious sits between 40 and 55%.
The “7 to 15% of reviews are fake” figure that circulates widely is old, and it is based on two numbers that both need unpicking.
The 7% figure comes from the proportion of reviews that Trustpilot states it removes each year in its self-published transparency report, using its internal systems.
The 15% figure comes from a 2023 UK government study, which estimated that between 11 and 15% of product reviews in certain e-commerce categories were likely to be fake. The report itself noted that this was likely to be a conservative estimate, given its focus on specific segments and platforms.
Detecting fake reviews is not a trivial task. Much on-platform review-authenticity tooling still relies heavily on IP addresses and author-level markers that are relatively easy to mask or bypass.
At this point, it's important to distinguish between the words fake and suspicious. While they are often used interchangeably, there are some important differences.
The UK's DMCC Act, Australia's ACCC regime and other consumer-protection frameworks tend to use the term fake reviews for simplicity. Each then goes on to define what qualifies. In essence, a fake review is one that presents itself as a genuine, independently written account of a real experience, but isn't.
The motivation behind a single suspicious review on an open platform is almost impossible to determine. In practice, however, fake reviews rarely appear in isolation. They tend to arrive in bursts, often hundreds or thousands at a time, at which point the pattern they leave becomes statistically anomalous.
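To illustrate why bursts stand out statistically, here is a minimal sketch of flagging days whose review volume is anomalous. This is not TruthEngine's actual methodology; the data, threshold and z-score approach are assumptions chosen for illustration only.

```python
from statistics import mean, stdev

def flag_bursts(daily_counts, z_threshold=2.0):
    """Flag days whose review volume is anomalously high.

    daily_counts: hypothetical list of review counts per day.
    A day is flagged when its count exceeds the mean by more
    than z_threshold standard deviations.
    """
    mu = mean(daily_counts)
    sigma = stdev(daily_counts)
    if sigma == 0:
        return []  # perfectly flat series: nothing stands out
    return [i for i, c in enumerate(daily_counts)
            if (c - mu) / sigma > z_threshold]

# A quiet baseline with one suspicious burst on day 6.
counts = [12, 9, 14, 11, 10, 13, 480, 12, 11, 10]
print(flag_bursts(counts))  # → [6]
```

A real system would use more robust statistics (the burst itself inflates the mean and standard deviation, which is why a large spike here scores only around z ≈ 2.8), but the principle is the same: a single review proves little, while hundreds arriving together leave a measurable signature.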
For that reason, we prefer the term suspicious, and we report to clients, journalists and regulators the proportion of reviews that display suspicious characteristics.
To answer this properly, we maintain a continually updated database of all the research we've completed to date.
It is built on the analysis of multiple millions of reviews across a wide range of competitive sectors in the UK, US and Europe. It spans all major open review platforms - including Google, Tripadvisor and Trustpilot - and includes reviews dating from the early 2000s through to the present day.
Across that dataset, the proportion currently flagged as suspicious stands at 50.156%.
Naturally, that number isn't static. It moves up and down depending on sector, platform and time period. What is striking, however, is how remarkably consistent it is, and that consistency is precisely why we're comfortable standing behind it.
One final clarification matters, however: saying that half of reviews show suspicious characteristics is not the same as saying half of businesses are at fault. In reality, most review manipulation is structural, not malicious.
It emerges from a combination of structural pressures, and the result is a review ecosystem that no longer behaves like a neutral record of customer experience and increasingly requires independent oversight to remain credible.
Reviews influence purchasing decisions, pricing, switching behaviour, brand trust, and increasingly, how AI systems recommend products and services.
At the same time, regulators around the world are clamping down on review manipulation. In the UK, fake and misleading reviews are now explicitly illegal. Similar regimes are emerging elsewhere.
The reason the “7-15%” figure persists is not necessarily because it is accurate, but because it is comfortable. It suggests a marginal problem at the edges of an otherwise healthy system.
What we see instead is a system under increasing strain, where manipulation has become normalised and on-platform fake review detection is lagging.
That doesn't mean reviews are any less useful than they used to be. But it does mean they need to be read with caution and interpreted in context.
Written by Daniel Mohacek on January 29, 2026