For well over a decade, online reviews have played a major role in shaping consumer decisions. They sit underneath, within and on top of search, guiding decisions about everything from a late-night takeaway to which solicitor to instruct. Most people barely think about them. Most businesses wish they could influence them. And for years, Google has acted as the main gatekeeper for how those reviews were collected, ranked and surfaced.
That era is ending.
Large language models are quietly becoming the first place people go for recommendations. They answer instantly. No ads (yet!). No scrolling. No juggling between ten tabs. They pull together signals, summarise sentiment and offer a single judgement that feels reassuringly human. But there is a problem. LLMs are heavily influenced by online reviews, but those reviews were never designed to withstand this level of scrutiny.
We are entering the age of GEO: Generative Engine Optimisation. A world where discovery is not shaped by search rankings but by narrative coherence and the underlying quality of the data the models consume. And in the UK, this shift is happening fast, with real implications for businesses, consumers and regulators.
UK businesses now face a strange predicament: reviews matter more than ever, but the underlying data has never been less dependable.
Reviews today are inconsistent across platforms, often unstructured, and in many sectors significantly manipulated - some deliberately, some accidentally and some historically. A company’s profile can look pristine on one platform and chaotic on another. Incentivised reviews are still widespread whilst negative reviews are suppressed more than we realise.
LLMs consume all of this without distinction.
The question is no longer how reviews affect rankings. It is how flawed review ecosystems are shaping what AI believes to be true.
There is a temptation to treat GEO as the next chapter of SEO. It isn’t.
SEO is about visibility. GEO is about integrity.
LLMs reward consistency, structure, authenticity and provenance. They do not reward volume. They do not reward marketing tactics, and they certainly do not reward messy, often conflicting review footprints spread across multitudinous platforms.
For many UK businesses this presents an exposure and a conundrum. They are now being assessed by a system they cannot see, cannot negotiate with and cannot pay to influence.
The more forward-thinking brands have already adjusted course. They are consolidating their review footprints, dealing with legacy authenticity issues, removing duplication, verifying authenticity and treating their reviews as an operational asset rather than just a marketing tool. The era of “just get more reviews” has ended. The era of “make sure the data behind your reviews is real, defensible and machine-readable” has begun.
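"Machine-readable" can be taken quite literally. As a hedged sketch (the business name, ratings and review text below are illustrative, not drawn from any real company), schema.org's Review and AggregateRating vocabulary is one established way to publish review data in a structure that crawlers and models can parse consistently:

```python
import json

# Illustrative example: review data expressed with schema.org's
# Review/AggregateRating vocabulary. All names and figures are made up.
review_markup = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Solicitors Ltd",  # hypothetical business
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "213",
    },
    "review": [
        {
            "@type": "Review",
            "author": {"@type": "Person", "name": "J. Smith"},
            "datePublished": "2025-11-02",
            "reviewRating": {"@type": "Rating", "ratingValue": "5"},
            "reviewBody": "Clear advice, quick turnaround.",
        }
    ],
}

# Serialised as JSON-LD and embedded in a page inside a
# <script type="application/ld+json"> tag, this gives machines a
# consistent, verifiable structure rather than free-floating prose.
print(json.dumps(review_markup, indent=2))
```

The point is not this particular snippet but the principle: structured, attributed, dated review data is far easier for an automated system to weigh than scattered star ratings across platforms.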
The UK’s Digital Markets, Competition and Consumers Act (DMCC Act) is the clearest sign yet that reviews are moving into a regulated phase. The law now explicitly prohibits:

- submitting fake reviews, or commissioning someone else to write them;
- publishing consumer reviews without taking reasonable and proportionate steps to verify they are genuine;
- offering services that facilitate fake reviews; and
- publishing incentivised reviews without making the incentive clear.
For years, these things sat in a grey zone. Not any more.
And the timing matters. Regulators understand that reviews are no longer just influencing websites. They now influence the AI systems that shape decisions at scale. If an LLM makes a confident recommendation based on false or manipulated reviews, the harm spreads further and faster than a misleading search snippet ever could.
It is entirely plausible that guidance on LLM-related review obligations will emerge. If AI amplifies review fraud, who is responsible? The business being misrepresented? The platform hosting the data? The model? Or the regulator who did not intervene earlier?
The UK is quickly becoming the test case for those questions.
Businesses that adapt early will develop an advantage that compounds.
Clean, authenticated review data becomes part of a brand’s digital infrastructure. It improves visibility in an LLM-led world. It reduces regulatory risk. It ensures that what AI surfaces about your business is actually accurate. And it signals to customers that you are willing to stand behind your feedback rather than hide from it.
For those who ignore the shift, the risk is blunt: you won’t slide down a list of 10 results - you simply won’t appear.
Reviews were meant for humans. Machines now interpret them at a scale we never designed for.
In the UK’s highly regulated, review-driven market, this shift is already underway. Brands with clean, consistent and authenticated review data will be the ones surfaced and surfaced favourably - the others risk fading from view.
LLMs are reshaping discovery. Success now depends on whether your review data is strong enough, genuine enough and structured well enough to be found.
Written by Andrew Winton on December 10, 2025