The Risky Reality of Google's AI Overviews: A Public Health Concern
Have you ever wondered whether that persistent cough is a sign of something more serious? Or whether your fatigue points to a hidden illness? For many, Google has long been the go-to source for quick medical insights. But what if the answers served up by Google's AI Overviews are not just inaccurate but potentially harmful?
Google CEO Sundar Pichai unveiled the company's plan to embed AI in its search engine in 2024. The result was AI Overviews, a feature that delivers instant, conversational answers above traditional search results. By 2025 the technology had reached a global audience, serving more than 2 billion people a month. But as its reach has grown, experts have been raising red flags.
Although AI Overviews can cite sources, they don't always recognize when those sources are wrong. Within weeks of launch, users were spotting falsehoods across a range of topics: one AI Overview claimed that Andrew Jackson, the seventh US president, graduated from college in 2005. Google's head of search, Elizabeth Reid, acknowledged that AI Overviews sometimes misinterpret web pages, producing inaccurate information. But when it comes to health, accuracy is non-negotiable.
A Guardian investigation uncovered a disturbing trend: AI Overviews providing false and misleading health information, putting people at risk. In one alarming case, Google advised pancreatic cancer patients to avoid high-fat foods, which experts say is the exact opposite of what should be recommended and could increase the risk of death. Another example involved misleading information about liver function tests, which could lead those with serious liver disease to believe they are healthy.
Google insists that AI Overviews are "reliable," but the evidence suggests otherwise. Experts warn that these summaries can lead to serious misdiagnoses and potentially life-threatening consequences. The company initially downplayed the concerns, noting that the AI Overviews linked to reputable sources and recommended seeking expert advice. However, it soon removed some of the problematic health-related AI Overviews.
But is this enough? Experts remain worried, pointing out that Google is merely addressing individual search results rather than tackling the broader issue of AI Overviews for health queries. A recent study has only added to these concerns. Researchers found that AI Overviews rely heavily on YouTube, a general-purpose video platform, as their primary source for medical information.
"This creates a new form of unregulated medical authority online," says Hannah van Kolfschooten, a researcher at the University of Basel. "When AI Overviews are built on sources not designed to meet medical standards, such as YouTube videos, it actively restructures health information, potentially leading to dangerous consequences."
Google maintains that AI Overviews surface information backed up by top web results and include links to supporting web content. However, experts argue that the single blocks of text in AI Overviews can cause confusion and prevent users from critically evaluating the information.
"Users are deprived of the opportunity to compare and assess information, even for health-related issues," says Nicole Gross, an associate professor at the National College of Ireland. "This can have serious implications for patient care and outcomes."
There is a subtler problem, too: even when AI Overviews state accurate facts, they may not distinguish between strong evidence from randomized trials and weaker evidence from observational studies. Some experts also argue that AI Overviews omit important caveats about the evidence they present.
"Having these claims listed side by side can give a false impression of their established nature," says Athena Lamnisos, CEO of the Eve Appeal cancer charity. "Answers can change as AI Overviews evolve, even when the science hasn't shifted, leading to inconsistent and potentially misleading information."
The biggest worry, according to Gross, is that bogus medical information in AI Overviews can influence patient practices and routines, even in adapted forms. "In healthcare, this can be a matter of life and death," she warns.
So, what's the solution? Experts call for more rigorous oversight and transparency in how AI Overviews are developed and deployed, especially for health-related queries. As AI continues to shape our online experiences, ensuring its accuracy and reliability is crucial to protecting public health.
What are your thoughts? Do you trust AI Overviews for medical information? Share your experiences and opinions in the comments below!