Google has quietly removed certain AI-generated health summaries from search results after an investigation found the feature could endanger users by providing inaccurate medical information.
Google AI Overviews, a feature powered by artificial intelligence, appears at the top of search results and offers concise “snapshots” of key information when users ask questions, including about health and medical topics. However, a Guardian investigation found that in some instances the summaries were incorrect and potentially harmful, giving users misleading confidence about critical health issues.
In one instance, the AI presented inaccurate “normal ranges” for liver function tests, failing to account for factors such as age, gender, race, or country of origin. These factors can materially affect how results are interpreted, particularly for South African patients, whose genetic and environmental backgrounds vary widely. As a result, people with serious liver disease could wrongly conclude that their results are normal and delay essential treatment.
Experts characterized the scenario as “dangerous” and “concerning,” expressing worries that these AI-generated summaries might confuse individuals during times of health-related anxiety or when attempting to understand complicated medical tests online.
Google’s response and limitations
In reaction to the findings, Google stated that it has taken down AI Overviews for certain searches like “what is the normal range for liver blood tests” and “what is the normal range for liver function tests”.
The company said it routinely makes broad improvements to its systems where context is lacking and follows internal guidelines to improve accuracy, but declined to comment on specific removals.
Google’s statement highlighted that its internal medical reviewers determined that numerous examples mentioned were still backed by “well-known and credible sources”.
What implications does this hold for South Africans?
In South Africa, as elsewhere, Google is often the first place people turn for health information. Yet there are currently no localised safeguards to ensure that AI-generated summaries align with regional medical standards or the specific reference values used in South African healthcare. This has raised concerns among physicians and public health professionals about the accuracy of AI systems when answering health-related questions in the local context.
South African users are advised not to take Gemini-powered AI Overviews at face value, particularly on critical subjects such as health and medical issues, given documented cases of the AI generating false or inaccurate information.
Experts in global health communication emphasize that although better access to online information is beneficial, AI systems should explicitly highlight evidence-backed and trustworthy health sources instead of offering summaries that might be misinterpreted as professional medical guidance.
Despite these specific removals, AI Overviews still appear for other health-related queries, including those about cancer and mental health conditions.
Considering this, South Africans are encouraged to verify AI-generated answers with reliable local health bodies, including the National Department of Health, the Health Professions Council of South Africa, and established hospital systems, before relying on medical information found online.
IOL
Provided by SyndiGate Media Inc. (Syndigate.info)