Nearly two out of three American adults have used an AI-powered search tool in the past six months. But here’s the stat that should keep every product builder up at night: only 15% say they trust the results “a lot.”
That gap between adoption and trust is the defining challenge for the next era of AI search. Consumers are showing up, but they are questioning the results. As product builders, we have to ask ourselves an uncomfortable question: Are we building experiences that earn and deserve consumer trust?
The Walled Garden Problem
Yelp partnered with Morning Consult to survey more than 2,200 U.S. adults on how they use and perceive AI-powered search. The findings point to a single, recurring problem: consumers feel trapped.
More than half of respondents (51%) say AI results feel like a “walled garden” that makes it hard to verify what they’re reading. Sixty-three percent say they double-check AI search results against other trusted sources like news websites and review platforms. And 57% say they’re less likely to use AI-powered search specifically because it lacks trusted sources.
The early days of AI search were defined by hallucination, with models confidently fabricating answers. Most leading platforms have largely solved that technical problem. But a deeper skepticism lingers: not just "Is this answer correct?" but "How would I even know?" When platforms strip away the sources, citations, and links to the real-world content that informed their answers, they're building walls, not bridges. Consumers are telling us, loudly, that they want the links, the sources, and the ability to verify for themselves.
What It Takes to Open the Gates and Build Trust
The research paints a remarkably consistent picture of what it would take to close the trust gap. Nearly three out of four respondents (72%) say AI platforms should always show where their information comes from. Two-thirds (66%) want more proof of trusted sources, like links to review platforms and news sites, alongside AI-generated answers, while more than half (52%) say visual evidence, like photos of a dish or before-and-after shots of a service, would increase their trust.
Consumers aren’t anti-AI. They’re anti-black boxes. They want AI to do the heavy lifting of parsing massive amounts of information and then show the receipts.
The average person isn't using AI to vibe code or for other technical use cases; they're using it for everyday local searches. More than half of respondents (57%) use AI tools to find local businesses at least monthly. They want advice on where to take their family for a birthday dinner, or on whom to let into their home to fix a burst pipe, and a self-contained AI summary without reliable proof isn't going to cut it.
And when consumers turn to AI to help with these decisions, the expectations are unambiguous: 76% say seeing where the information comes from is important, 73% say ratings and reviews from real customers matter, and 76% say seeing multiple reliable sources is important.
Local businesses are also inherently dynamic. Chefs leave, menus change, hours shift. Without authentic, regularly updated human content from trusted sources, AI risks serving up stale or unreliable information.
If anyone assumed the digital natives of Gen Z would be more trusting, the data says otherwise. Gen Z has the highest adoption rate, with 84% having used an AI search platform in the past six months, but they’re also the most demanding. Seventy-two percent say AI platforms should provide more proof of trusted sources, compared to 63% of Millennials and 59% of Gen X. This is a generation saturated with AI slop, and they’ve developed sharper instincts for distinguishing authentic from synthetic. Platforms that keep them inside a walled garden risk losing the most AI-fluent generation first.
The Counterargument, and Why It Falls Short
Some will argue that adding citations, links, and source indicators creates friction, and that the entire promise of AI search is a seamless, self-contained answer. Why send users away from your platform? But this framing confuses walls with value.
Consumers aren’t rejecting AI-generated summaries. They’re rejecting answers they can’t verify. The majority (69%) of consumers want the option to leave AI platforms and visit trusted sites to do their own research. And when we tested this in practice, showing consumers two versions of an AI search result, one with transparent sourcing and one without, 80% preferred the version that included authentic human content, trusted sources, and actionable links. Tearing down the walls doesn’t drive users away. It drives confidence.
The AI industry is at a crossroads. The platforms that win won’t be the ones generating the most convincing synthetic answers. They’ll be the ones that seamlessly connect users to authentic, real-world experiences, using AI as a bridge to trusted human content.
As the AI ecosystem matures, the platforms that strike the right balance between AI-generated summaries and transparent, authentic human-generated content won’t just close the trust gap. They’ll set the standard for what consumers expect.
And the good news is that more generous, transparent linking is a rising tide that lifts all boats: consumers get the ability to do their own research and decide with confidence, content creators and publishers receive the traffic that sustains a healthy content ecosystem, and AI platforms themselves benefit from stronger relationships with the quality sources that make their answers worth trusting in the first place.
Transparency isn’t a trade-off. In the attention economy, it’s the moat.
