Medicine's rapid adoption of AI has researchers concerned

Hospitals and universities must step up to fill gaps in regulation, experts say

Many US hospitals rely on medical AI tools that assist with diagnoses and other tasks. Credit: Hilary Swift for The Washington Post via Getty
Artificial intelligence (AI) already helps clinicians to make diagnoses, triage critical cases and transcribe clinical notes in hospitals across the United States. But regulation of medical AI products has not kept up with the rapid pace of their adoption, argue researchers in a report published 5 June in PLOS Digital Health1.
The authors point to limitations in how the US Food and Drug Administration (FDA) approves these devices, and propose broader strategies that extend beyond the agency to help ensure that medical AI tools are safe and effective.
More than 1,000 medical AI products have been cleared by the FDA, and hospitals are rapidly adopting them. Unlike most other FDA-regulated products, AI tools continue to evolve after approval as they are updated or retrained on new data. This raises the need for continuous oversight, which current regulations have limited capacity to ensure.
The discussion is especially timely amid recent signs that the federal government might scale back AI regulation. In January, President Donald Trump revoked an executive order focused on AI safety, citing a need to remove barriers for innovation. The following month, lay-offs affected staff in the FDA’s division responsible for AI and digital health.
Without proper oversight, there is a risk that medical algorithms could give misleading recommendations and compromise patient care. “There have to be safeguards,” says Leo Anthony Celi, a clinical researcher at the Massachusetts Institute of Technology in Cambridge and a co-author of the report. “And I think relying on the FDA to come up with all those safeguards is not realistic and maybe even impossible.”
Low bar for approval
The criteria the FDA uses to review and authorize AI medical tools are often less rigorous than those for drugs: according to the agency, only tools that might pose a higher risk to patients are required to go through a clinical trial, for example. And because other countries rely on the FDA to inform their own decisions, its approach could have global implications. Another concern is that medical algorithms often perform poorly when applied to populations that differ from the ones they were trained on. Because of these limitations, regulatory approval does not ensure that a medical AI product is beneficial for the people it is intended to help, Celi says.
Ideally, hospitals should assess how well the algorithms perform in their own patient populations, and train clinicians to interpret the outputs and respond appropriately before adopting these technologies. “One of the biggest challenges is that the vast majority of hospitals and clinics do not have the resources to hire AI teams” to perform such tests, Celi says, adding that a study published this year2 found that many hospitals are buying AI tools ‘off the shelf’ and using them without local validation. “That is a recipe for disaster,” he says.
doi: https://doi.org/10.1038/d41586-025-01748-y
This story originally appeared on Nature. Author: Mariana Lenharo