The rapid adoption of AI in primary care raises safety concerns
From digital scribes to ChatGPT, artificial intelligence (AI) is quickly finding its way into primary care practices. A new study from the University of Sydney warns that the technology is getting ahead of security controls and putting patients and healthcare systems at risk.
The study, published in The Lancet Primary Care, summarized global insights into how AI is being used in primary care, drawing on data from the United States, the United Kingdom, Australia, several African countries, Latin America, Ireland and other regions. It found that AI tools such as ChatGPT, AI scribes and patient-facing apps are increasingly being used for clinical queries, documentation and patient consultation, but most are deployed without thorough evaluation or regulatory oversight.
“Primary care is the backbone of healthcare systems, providing accessible and continuous care. AI can ease pressure on overstretched services, but without safeguards we risk unintended consequences for patient safety and quality of care.”
Associate Professor Liliana Laranjo, Principal Investigator, Horizon Fellow at Westmead Applied Research Center
GPs and patients are turning to AI, but the evidence lags
Primary care is under pressure worldwide, from workforce shortages and physician burnout to the increasing complexity of healthcare delivery, all exacerbated by the COVID-19 pandemic. AI is being touted as a solution, with tools that save time by summarizing consultations, automating administrative tasks and supporting decision-making.
In the UK, one in five GPs reported using generative AI in clinical practice in 2024. However, the review found that most studies of AI in primary care are based on simulations rather than real-world evaluations, leaving critical gaps in the evidence on effectiveness, safety and equity.
The proportion of GPs using generative AI in Australia is not reliably known, but it is estimated at 40 per cent.
“AI is already in our clinics, but without Australian data on how many GPs are using it, or proper oversight, we are flying blind when it comes to safety,” Associate Professor Laranjo said.
While AI scribes and ambient listening technologies can reduce cognitive load and improve job satisfaction for GPs, they also pose risks such as automation bias and the loss of important social or biographical details in medical records.
"Our study found that many GPs who use AI scribes don't want to go back to typing. They say it speeds up consultations and allows them to focus on patients, but these tools can miss important personal details and introduce bias," Associate Professor Laranjo said.
For patients, symptom checkers and health apps promise convenience and personalized care, but their accuracy varies widely and many have never been independently evaluated.
“Generative models like ChatGPT can sound convincing but be factually incorrect,” said Associate Professor Laranjo. “They often agree with users even when those users are wrong, which is dangerous for patients and challenging for doctors.”
Equity and environmental risks of AI
Experts warn that while AI promises faster diagnoses and personalized care, it can also deepen health gaps as bias creeps in. For example, dermatology tools often misdiagnose darker skin tones, which are typically underrepresented in training datasets.
Conversely, researchers say that when designed well, AI can address disparities: in one arthritis study, an algorithm trained on a diverse dataset predicted patient-reported knee pain better than standard X-ray readings by doctors, doubling the number of Black patients eligible for knee replacement.
“Ignoring socioeconomic factors and universal design could result in AI in primary care going from a breakthrough to a setback,” said Associate Professor Laranjo.
The environmental costs are also substantial. Training GPT-3, the 2020 predecessor of ChatGPT, emitted carbon dioxide equivalent to 188 flights between New York and San Francisco. Data centers now account for around 1 percent of global electricity consumption, and in Ireland they are responsible for more than 20 percent of national electricity use.
“The environmental footprint of AI is a real challenge,” said Associate Professor Laranjo. “We need sustainable approaches that balance innovation with equity and planetary health.”
The researchers urge governments, clinicians and technology developers to prioritize:
- robust evaluation and real-world monitoring of AI tools
- regulatory frameworks that keep pace with innovation
- training for clinicians and the public to improve AI literacy
- bias-mitigation strategies to ensure equity in healthcare
- sustainable practices to reduce the environmental impact of AI.
“AI offers the opportunity to reshape primary care, but innovation must not come at the expense of safety or equity,” said Associate Professor Laranjo. “We need cross-industry partnerships to ensure AI benefits everyone – not just the tech-savvy or well-equipped.”
Sources:
Laranjo, L., et al. (2025). Artificial intelligence in primary care: innovation at a crossroads. The Lancet Primary Care. DOI: 10.1016/j.lanprc.2025.100078. https://www.thelancet.com/journals/lanprc/article/PIIS3050-5143(25)00078-0/fulltext