We didn’t actually set out to build it. Somehow, we found ourselves in a situation where we were left with little choice.
Six years ago, we were working at a business that was hit by a coordinated fake negative review campaign. Not a couple of unfair comments, but a sustained and nasty campaign by a rival to damage our credibility and reputation, and ultimately cripple our revenue.
Anyone who has been unfortunate enough to experience this will recognise the gut-wrenching feeling.
You know the reviews are 100% untrue, but suddenly the agonising burden of proving your innocence falls on you.
Everywhere we turned, the prevailing wisdom was the same - if we wanted to keep trading, we needed to bury the fake negatives with a substantial campaign of fake positives.
We engaged a leading consumer law firm to understand if this was even legal.
Their message was clear. It wasn’t. The UK government were signalling that explicit fake review rules were coming. Faking reviews was already treated as a ‘misleading commercial practice’ across much of the EU where we operated, and Australia had active fake review enforcement already in place.
The era of “everyone else is doing it” was clearly not going to last, and we spotted an opportunity. If reviews were moving from wild-west marketing hacks to highly regulated business data, somebody independent would need to step up and help verify them.
Businesses, understandably, want the best possible presentation of their brand, and commercial review platforms have real financial incentives to take their money. The deeper we looked, the clearer it became that nobody was coming to help us and we needed to solve it ourselves.
So without much choice, we began work on the world’s first fully independent, cross-platform fake review detection system.
At the time, everyone thought we’d gone mad.
Disinformation wasn’t yet part of everyday conversation, AI hadn’t gone mainstream, and very few people could picture a world where anyone would bother to fake their reviews, let alone one in which they’d become a meaningful reputational and financial risk.
As the tech took shape over the next two years, we began analysing large volumes of public review data across sectors and platforms, and discovered that the widely quoted World Economic Forum figure of 1 in 16 reviews being fake was emphatically not what we were seeing.
We were getting 1 in 3.
In 2021.
It was at that point the TruthEngine® stopped being an in-house R&D project and became a serious mission.
During 2024, the regulatory tide arrived in earnest. Market by market, guidance was hardening into legal obligations for both the businesses being reviewed and the platforms hosting their reviews.
In the UK, the Digital Markets, Competition and Consumers Act, which received Royal Assent on 24 May 2024, made fake reviews explicitly illegal, with fines of up to 10% of global turnover for brands caught manipulating their reviews and for platforms caught hosting them.
The EU’s Omnibus Directive strengthened rules around misleading consumer reviews for all 27 member states, and in the US, the Federal Trade Commission stepped up enforcement and made it clear that fake testimonials, undisclosed incentives and review suppression could be punished with civil penalties of up to $53,088 per violation.
What had once been seen as sharp marketing was rapidly becoming a clear compliance risk.
Running through all of these regimes is a common thread - responsibility for the authenticity of reviews is no longer something businesses can outsource. Regulators have made it clear that if reviews influence purchasing decisions, the organisations benefiting from them must take their own ‘reasonable and proportionate steps’ to ensure they are actually genuine.
This is exactly where we come in. The TruthEngine® was never built to accuse. It was built to work on behalf of our clients to dispassionately and accurately establish facts. Using the TruthEngine® gives organisations a clear, behind-closed-doors, evidence-based view of the risk hidden in their review portfolio so they can act responsibly and confidently.
Compliance with fake-review regulation is no longer optional, and in most markets the frameworks create a positive obligation on businesses and platforms - regulators expect them to take active steps to ensure the authenticity of their reviews and to be able to demonstrate those steps when required.
Plausible deniability is no longer enough. Without clear evidence, organisations cannot confidently reassure customers, investors, regulators or their own boards. And once doubt begins to creep in, trust and brand equity have a habit of eroding fast.
Something else happened that none of us fully anticipated when we started. AI didn't just begin reading and using reviews; it also made them far easier to generate at scale.
Bad actors can now produce convincing, consistent fake reviews faster and more cheaply than ever before.
At the same time, the same AI systems that can write reviews have begun relying on them as a major trust signal.
Large language models now summarise, recommend and rank businesses partly on the strength of their review data. If that data is distorted, outdated or manipulated, the AI output is distorted too. This loop affects visibility, conversion, reputation and even company valuation, often long before organisations have fully realised what’s happening.
Six years on, what began as a response to an agonising experience has become something much larger.
We now work with brands, platforms, investors and legal teams who all face the same underlying challenge: separating genuine customer reviews from noise, manipulation and distortion.
We work with brands conducting forensic sweeps of their entire review portfolios to help them understand what proportion is suspicious. We actively monitor new reviews as they arrive to ensure portfolios stay clean and compliant, and we work with lawyers, advisors and M&A firms to help them identify and avoid the risk of faked reviews.
Our job is not to make businesses look good. It is to quietly and objectively report the truth about their data and help them act early, fix what needs fixing, and prove they are taking both their legal obligations and their customers seriously.
That is why we first built the TruthEngine®, and it’s why we are still building it six years later.
Written by Daniel Mohacek on February 18, 2026