At the rapidly evolving intersection of artificial intelligence, public health information and digital governance, one of the world’s most powerful technology companies has quietly withdrawn an experimental feature that critics warned risked amplifying unverified medical advice to millions of users. Google has abandoned an artificial intelligence-driven search feature known as “What People Suggest”, a tool that attempted to aggregate and present crowdsourced health advice from strangers across the internet. The removal comes amid intensifying scrutiny of the role of artificial intelligence in shaping the health information consumed by billions of people worldwide.

The now-discontinued feature was originally introduced as part of the company’s broader strategy to embed generative artificial intelligence into its search ecosystem. At the time of its unveiling, the company framed the initiative as a technological breakthrough capable of transforming global health outcomes by enabling users to quickly access insights from individuals with lived experience of specific medical conditions. The premise was that someone searching for information about a condition such as arthritis might benefit not only from authoritative medical sources but also from personal accounts of how other individuals manage exercise, diet or treatment routines.

Yet the idea of algorithmically summarising health advice drawn from online discussions raised immediate concerns among medical experts and digital governance specialists. Unlike peer-reviewed medical literature or guidance issued by recognised health authorities, user-generated discussions often contain anecdotal claims, unverified remedies and misinformation unsupported by clinical evidence. When artificial intelligence systems reorganise such discussions into seemingly authoritative summaries, the distinction between professional medical advice and casual opinion can become blurred.

The feature was introduced publicly during “The Check Up”, an annual event in New York organised by the company to showcase technological innovations related to healthcare. At the time, Karen DeSalvo, then the company’s chief health officer, explained the reasoning behind the experiment in a blog post published on the company’s official website. She argued that although users frequently rely on search engines for reliable medical information from experts, many also seek reassurance and perspective from people who have personally experienced the same condition. According to DeSalvo, the artificial intelligence system would analyse online discussions and organise them into easily understandable themes so that users could rapidly grasp the range of experiences shared by others.

The feature was initially rolled out on mobile devices within the United States. Its introduction came during a period in which major technology companies were racing to integrate generative artificial intelligence into search engines, messaging platforms and productivity tools. In this competitive landscape, the ability to deliver conversational, personalised responses to user queries had become a central strategic objective for technology firms attempting to maintain their dominance over digital ecosystems. However, the broader rollout of artificial intelligence within search has generated growing unease among regulators, academics and medical professionals, who question whether automated summaries of health information can be relied upon without rigorous safeguards. Concerns intensified earlier this year when an investigation revealed that another artificial intelligence product embedded in the search platform, known as AI Overviews, had generated misleading or inaccurate health-related responses to certain queries. These automated summaries appear at the top of search results pages and are currently presented to approximately two billion users each month, making them among the most widely viewed algorithmic outputs in the digital world.

The investigation prompted alarm among independent experts, who warned that inaccurate health advice generated by artificial intelligence could expose users to real-world harm. Although the company initially defended the system by noting that the summaries linked to reputable sources and often recommended consulting medical professionals, it subsequently removed AI-generated summaries from certain categories of medical queries. The decision suggested that the company recognised the heightened sensitivity surrounding health-related information.

Against this backdrop, the quiet disappearance of the “What People Suggest” feature has drawn particular attention. Three individuals familiar with the decision confirmed that the tool has been discontinued; one person with direct knowledge of the matter described the feature succinctly by stating that it is now dead. A spokesperson for the company acknowledged that the feature had indeed been scrapped but insisted that the move formed part of a broader effort to simplify the design of the search results page rather than a response to concerns about the safety or reliability of the technology. According to the spokesperson, the decision had been implemented months earlier and was unrelated to the quality or safety of the feature. The company also maintained that it continues to help users access reliable health information from a variety of sources, including online forums containing the first-person perspectives that many individuals find useful when dealing with medical conditions.

Yet the explanation has not entirely satisfied observers, who note that the removal occurred during a period of intensifying public scrutiny of the role of artificial intelligence in disseminating health information. When journalists asked the company to identify where the decision had been publicly communicated, the spokesperson referred to a blog post written by John Mueller, a search advocate based at Google Switzerland. That post discussed changes to the structure of search results but did not specifically mention the elimination of the “What People Suggest” feature.

The episode illustrates a broader tension that has emerged within the technology industry as companies attempt to deploy artificial intelligence at unprecedented scale while simultaneously managing the societal risks associated with automated information systems. Search engines have long served as gateways through which billions of people access information about symptoms, diseases and treatments. The integration of generative artificial intelligence into these systems introduces a new layer of complexity because algorithmic outputs may synthesise information in ways that appear authoritative even when the underlying sources are inconsistent or anecdotal.

From a legal and policy perspective, the stakes surrounding such technologies are considerable. In many jurisdictions, digital platforms are increasingly scrutinised under regulatory frameworks designed to address the systemic risks associated with large online services. Within the European Union, for example, the Digital Services Act imposes obligations on major technology companies to mitigate risks linked to the dissemination of harmful or misleading content. Platforms designated as very large online platforms must conduct regular risk assessments and implement measures to prevent systemic harm to users, including risks to public health. The regulation operates alongside broader frameworks such as the General Data Protection Regulation, which establishes strict requirements governing the processing of personal data, including sensitive health information. Artificial intelligence systems that analyse online discussions about medical conditions may therefore raise complex questions about how such data is collected, processed and presented to users.

In the United Kingdom, the regulatory landscape includes the Online Safety Act 2023, which aims to impose duties of care on digital platforms to reduce the risk of harm arising from online content. Although the act primarily focuses on issues such as illegal content and child safety, the broader concept of platform responsibility has encouraged regulators to examine how algorithmic systems influence the dissemination of health-related information.

In the United States, similar debates are unfolding around Section 230 of the Communications Decency Act, which has historically shielded technology companies from liability for user-generated content. As artificial intelligence systems begin to summarise and reinterpret user discussions rather than simply host them, legal scholars have begun to question whether traditional liability protections remain appropriate.

The decision to abandon the crowdsourced health advice feature therefore arrives at a moment when policymakers across multiple jurisdictions are reassessing the responsibilities of technology companies whose platforms have become integral to the global information infrastructure. When a search engine used by billions of people presents a piece of medical advice, even indirectly through algorithmic aggregation, the potential consequences extend far beyond the digital environment.

For the company itself, the challenge lies in balancing the pursuit of technological innovation with the imperative to maintain public trust. Technology firms have positioned artificial intelligence as a transformative tool capable of revolutionising fields ranging from healthcare to climate science. Yet each high-profile controversy over inaccurate or misleading outputs reinforces concerns that the technology may still be insufficiently mature for certain applications.

The company’s upcoming “The Check Up” event is expected to highlight new research initiatives, partnerships and technological developments aimed at addressing global health challenges. The event will be led by Michael Howell, the company’s current chief health officer, alongside other senior staff members who will present advances in artificial intelligence and medical technology. Whether those presentations will address the lessons learned from the brief and controversial life of the “What People Suggest” feature remains uncertain.

What is clear is that the removal of the feature reflects a broader recognition that integrating artificial intelligence into health information systems carries profound responsibilities. When billions of people turn to search engines for guidance on symptoms, treatments and medical conditions, the distinction between helpful innovation and potentially harmful experimentation becomes critically important. The quiet disappearance of a tool designed to amplify amateur health advice suggests that even the most powerful technology companies are beginning to recognise the limits of what artificial intelligence should be allowed to say when the subject is human health.