Google AI Overviews List Scam Numbers in Support Searches

Google's AI Overviews feature has been criticized for listing scam phone numbers in search results for customer support, connecting users to fraudsters and, in some cases, causing financial losses. The problem stems from the AI aggregating unverified web data. Google says it is addressing the issue, but experts are calling for stronger safeguards. Users should verify contact numbers on companies' official sites.
Written by Dave Ritchie

The Perils of AI-Driven Search

In an era where artificial intelligence is increasingly integrated into everyday digital experiences, Google’s AI Overviews feature has come under scrutiny for inadvertently directing users toward fraudulent customer support lines. This tool, designed to provide quick summaries at the top of search results, has been found to list phone numbers that connect callers to scammers rather than legitimate services. According to a recent investigation by Android Authority, queries for customer support numbers of major companies like airlines and banks often yield AI-generated responses that include bogus contacts, potentially exposing users to financial scams.

The issue stems from the way Google’s AI aggregates information from the web, sometimes pulling from manipulated or low-quality sources. Scammers exploit this by optimizing fake websites to appear authoritative, tricking the AI into promoting their numbers. This has led to real-world consequences, with reports of users losing money after calling these lines and being coerced into sharing personal information or making payments.

Unpacking the Mechanism Behind the Flaw

At their core, Google’s AI Overviews rely on large language models to synthesize data from search indices, but they lack robust verification for sensitive information like contact details. Industry experts note that while the feature aims to enhance user convenience, it prioritizes speed over accuracy in certain contexts. A parallel report from The Washington Post highlights a “new AI twist on a travel scam,” where searches for airline support numbers lead to fraudsters posing as representatives, demanding fees for bogus services.
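
To make the gap concrete, here is a minimal sketch of the kind of verification layer experts describe: before a phone number is surfaced in a generated answer, check that its source domain sits on a verified allowlist for the brand in question. Everything below (the OFFICIAL_DOMAINS registry, the safe_to_surface function, the example domains) is a hypothetical illustration of the safeguard, not Google's actual pipeline.

```python
import re

# Hypothetical verified registry mapping brands to official domains.
# In a real system this would come from a vetted business database,
# not a hard-coded dict.
OFFICIAL_DOMAINS = {
    "ExampleAir": {"exampleair.com"},
    "ExampleBank": {"examplebank.com"},
}

# Loose pattern for phone-number-like strings in a snippet.
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def safe_to_surface(brand: str, snippet: str, source_domain: str) -> bool:
    """Allow a snippet containing a phone number into an AI summary
    only if it comes from the brand's verified official domain;
    anything else would be withheld or routed to human review."""
    if not PHONE_RE.search(snippet):
        return True  # no contact detail, nothing sensitive to gate
    return source_domain in OFFICIAL_DOMAINS.get(brand, set())

# A scam site SEO-optimized for "ExampleAir support number" fails the gate:
print(safe_to_surface("ExampleAir",
                      "Call ExampleAir support: +1 800 555 0100",
                      "exampleair-helpdesk.top"))   # -> False
print(safe_to_surface("ExampleAir",
                      "Call ExampleAir support: +1 800 555 0100",
                      "exampleair.com"))            # -> True
```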

This vulnerability is not isolated; similar problems have surfaced in other AI search tools, raising broader questions about accountability in tech. Google has acknowledged the issue, stating in responses to media that it is “working on broader improvements” after taking down some identified fake numbers, as detailed in coverage by Moneycontrol.

Real-World Impacts and Victim Stories

The human cost of these AI missteps is significant. One notable case involved a real estate developer who, trusting an AI-suggested number for a service provider, ended up transferring funds to scammers, as reported in the same Moneycontrol article. Such incidents underscore the risks for vulnerable populations, including older adults who may rely heavily on search engines for quick assistance.

Ironically, the proliferation of these scams coincides with Google’s own efforts to combat fraud using AI. The company has rolled out scam detection features on Android devices, as announced in the Google Online Security Blog, which use machine learning to flag suspicious calls and texts. Yet the integration of these protections with search AI appears inconsistent, leaving gaps that fraudsters exploit.

Industry Responses and Future Safeguards

Tech analysts are calling for enhanced safeguards, such as mandatory human oversight for high-stakes queries or partnerships with verified databases. Google’s Trust & Safety team, in its May 2025 scam advisory published on the Google Blog, outlined trends in online scams but stopped short of addressing AI-specific vulnerabilities directly.

Competitors like Microsoft and OpenAI face similar challenges with their AI tools, suggesting this is an industry-wide issue requiring collective action. As AI becomes more embedded in search, the balance between innovation and user safety will be pivotal.

Toward a More Secure AI Ecosystem

To mitigate these risks, users are advised to cross-verify AI-provided numbers against official websites, a tip echoed in advisories from AARP. Google, for its part, has been iterating on the feature, fixing bugs such as one in which AI Overviews incorrectly insisted the current year was still 2024 when asked about 2025, as covered by TechCrunch.
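
That cross-check can even be automated in a few lines. The Python sketch below illustrates the habit: fetch a contact page you already trust and confirm the suspect number actually appears there. The URL, function names, and matching logic are illustrative assumptions, not a vetted tool; treat it as a sketch of the advice, not a guarantee.

```python
import re
import urllib.request

def digits_only(number: str) -> str:
    """Strip a phone number down to bare digits for comparison."""
    return re.sub(r"\D", "", number)

def number_on_official_site(number: str, official_url: str) -> bool:
    """Return True if the number appears (in any formatting) on a page
    the caller already trusts, such as a company's own contact page.
    Crude on purpose: it only catches exact digit-sequence matches."""
    with urllib.request.urlopen(official_url, timeout=10) as resp:
        page = resp.read().decode("utf-8", errors="replace")
    found = {digits_only(m)
             for m in re.findall(r"\+?\d[\d\s().-]{7,}\d", page)}
    return digits_only(number) in found

# Hypothetical usage: call the number only if this check passes.
# print(number_on_official_site("+1 800 555 0100",
#                               "https://www.exampleair.com/contact"))
```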

Ultimately, this episode highlights the double-edged sword of AI in information retrieval: while it promises efficiency, without rigorous checks, it can amplify misinformation and scams. Industry insiders anticipate regulatory scrutiny, potentially leading to standards that ensure AI outputs are as trustworthy as traditional search results. As the technology evolves, ongoing vigilance from both developers and users will be essential to prevent exploitation.
