The FDA has now authorized more than 1,250 AI-enabled medical devices, with 295 clearances in 2025 alone. Almost all of them (97%) entered through the 510(k) pathway, which requires demonstrating equivalence to a device already on the market rather than running original clinical trials. Mount Sinai alone has run more than 100,000 AI-assisted mammograms. Three states (California, Colorado, and Utah) now require hospitals to disclose when AI influences a patient's care. The rest of the country has no such rule.

1. This Is How We Catch What We Miss (Dr. Laurie Margolies, Mount Sinai; Dr. Hugo Aerts, Harvard)

AI is already finding cancers that experienced doctors overlook, and the evidence is hard to argue with.

We're better now, period. Mount Sinai's breast imaging program has processed over 100,000 AI-assisted mammograms, and studies consistently show AI-plus-radiologist teams find more cancers — often smaller ones — than radiologists working alone. Dr. Laurie Margolies, Vice Chair of Breast Imaging at Mount Sinai, put it plainly: "Artificial intelligence is a phenomenal tool. It does not replace the expertise of our radiologists — it enhances it."

The AI is doing incredible work. A UCLA study found that AI flagged 76% of mammograms originally read as normal but later linked to interval breast cancers (cancers diagnosed between routine screenings). For cases where cancer was visible on the mammogram but missed by a radiologist, the AI caught 90%. In Scotland, an AI system caught an aggressive breast tumor too small for the human eye to detect; without it, the tumor likely would have gone unnoticed for three more years.

And it's helping us balance workloads. Dr. Hugo Aerts, director of the Artificial Intelligence in Medicine program at Harvard and Mass General Brigham, frames AI as a workload solution. "AI can automate assessments and tasks that humans currently can do but take a lot of time," he said. "After the AI gives a result, a radiologist simply needs to review what the AI has done — did it make the correct assessment?" A survey of 487 physicians across 54 countries found 75% support AI as an aid for precision medicine.

2. But The Machine Learns Our Worst Habits (University of Michigan Researchers, ACLU)

AI trained on biased medical records doesn't correct historical racism — it automates it.

AI underestimates how sick Black patients are. Black patients are less likely than white patients to receive the diagnostic tests doctors use to identify severe illness, which means some sick Black patients are recorded as healthy in historical data. University of Michigan researchers found that AI models trained on those records inherit the gap and systematically underestimate illness severity in Black patients.
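A toy simulation makes the mechanism concrete. This is a minimal sketch of how label bias propagates, not the Michigan team's actual method; the group names, rates, and thresholds below are all illustrative assumptions. Two groups have identical true illness, but one is under-tested, so a slice of its sick patients carry "healthy" labels, and a model trained on those labels scores that group as lower-risk at the same severity.

```python
# Minimal sketch of label bias with synthetic data; not the Michigan
# study's method. Group B's sick patients are under-tested, so some are
# recorded as "healthy", and the model learns that pattern as lower risk.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)             # 0 = group A, 1 = group B
severity = rng.normal(0.0, 1.0, n)        # latent true severity, identical across groups
truly_sick = severity + rng.normal(0.0, 0.5, n) > 0.5

# Label bias: 30% of truly sick patients in group B never get the
# diagnostic test, so their records say "healthy" (illustrative rate).
tested = np.where(group == 1, rng.random(n) > 0.30, True)
recorded_sick = truly_sick & tested

model = LogisticRegression().fit(np.column_stack([severity, group]), recorded_sick)

# Same severity, different group: the model assigns group B lower risk,
# because it has learned the under-testing, not the underlying illness.
probe = np.array([[1.0, 0.0], [1.0, 1.0]])
risk_a, risk_b = model.predict_proba(probe)[:, 1]
print(f"predicted risk at severity 1.0 -> group A: {risk_a:.2f}, group B: {risk_b:.2f}")
```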

The disparities baked into the training data are not subtle. In emergency departments, 74% of white patients receive painkillers for broken bones, compared to 57% of Black patients. Doctors are 50% more likely to misdiagnose a heart attack if the patient is a woman. For appendicitis in children, 34% of white children receive opioid pain medication versus 12% of Black children. When AI learns from records shaped by these patterns, it doesn't correct the bias — it scales it.

Many FDA-cleared AI tools don't disclose the demographics of their training data. The UCSF Coordinating Center for Diagnostic Excellence flagged that AI tools are being cleared at an accelerating pace with no requirement to report that information. The ACLU warned that AI in healthcare "may only worsen medical racism" by laundering human bias through an algorithm that looks objective.

3. Actually, It Isn't THAT Great (Clinical Researchers, JMIR)

94.5% accuracy in the lab. A 15-30% performance drop in the real world. And a third of radiologists override it anyway.

The benchmark numbers don't survive contact with actual patients. AI systems that achieve 94.5% accuracy in controlled benchmarks often see performance drops of 15-30% in clinical settings. The culprit is distribution shift: the patients in a hospital in rural Mississippi don't look like the patients in a Stanford training set.
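A toy example shows how a model can ace its benchmark and still stumble at a new site. This is a minimal sketch under made-up assumptions, not the JMIR study's setup: disease depends on a latent severity, but the feature the model sees carries a cohort-specific scanner offset, so a threshold learned at one hospital misreads patients at another.

```python
# Minimal sketch of distribution shift with synthetic data; not the
# JMIR study's setup. The deployment cohort's scanner adds an offset
# to the measured feature, so the trained decision boundary no longer fits.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def make_cohort(n, scanner_offset=0.0):
    """Disease depends on latent severity; the measured feature adds a
    cohort-specific scanner offset plus measurement noise."""
    severity = rng.normal(0.0, 1.0, n)
    sick = severity + rng.normal(0.0, 0.5, n) > 0.0
    feature = severity + scanner_offset + rng.normal(0.0, 0.3, n)
    return feature.reshape(-1, 1), sick

X_train, y_train = make_cohort(10_000)                      # development site
X_same, y_same = make_cohort(2_000)                         # same distribution
X_shift, y_shift = make_cohort(2_000, scanner_offset=1.0)   # new site

model = LogisticRegression().fit(X_train, y_train)
print(f"in-distribution accuracy: {accuracy_score(y_same, model.predict(X_same)):.2f}")
print(f"shifted-cohort accuracy:  {accuracy_score(y_shift, model.predict(X_shift)):.2f}")
# The same frozen model loses substantial accuracy on the shifted cohort,
# mirroring the benchmark-to-clinic gap described above.
```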

Doctors struggle to trust it, even when it's right. Thirty-four percent of radiologists report overriding correct AI recommendations because they don't trust the system's opaque outputs. The AI says one thing, the radiologist can't see why, and they go with their gut. Clinicians also need 2.3 times longer to audit a neural network's decision than a traditional rule-based system, which eats into the promised efficiency gains.

A quarter of cleared devices have zero clinical studies behind them. The 510(k) pathway, which 97% of AI medical devices use, requires equivalence to an existing device, not original clinical trials. That means an AI tool can reach the market by proving it's similar to another AI tool that was itself cleared without clinical trials. The FDA process was designed for scalpels and stethoscopes, not for algorithms that learn and change over time.

4. And What Happens When the AI Goes Down? (Lancet Researchers, 2025)

Doctors who rely on AI get worse at diagnosing without it — and the decline is steep.

Doctors actually got worse at their job after using AI. A 2025 study in The Lancet Gastroenterology & Hepatology tracked 19 experienced endoscopists across four Polish medical centers and found that after three to six months of AI-assisted colonoscopies, their unassisted detection rate for precancerous growths (adenomas) dropped by 20%. The study's authors concluded that clinicians became "less motivated, less focused, and less responsible when making cognitive decisions without AI assistance."

When AI underperforms or goes offline, the question becomes who bears the cost. Under current U.S. malpractice law, liability rests on the "reasonable physician" standard — courts judge the doctor, not the algorithm. But when a physician follows an AI recommendation that turns out wrong, and neither the doctor nor the developer fully understands why the system made that call, distributing blame becomes a legal puzzle no court has definitively solved.

Where This Lands

Although more patients still prefer a human doctor to AI, the technology's advocates have powerful evidence: AI catches cancers that experienced radiologists miss, and in high-volume settings like mammography it is already proving its clinical worth. But the real-world performance gap, the bias baked into training data, and the deskilling evidence all point to a technology moving faster than its safeguards. Then again, the alternative (radiologists working alone under crushing caseloads) isn't exactly a golden age of diagnostic accuracy either.


Sources

https://www.nature.com/articles/s41746-025-01800-1

https://bipartisanpolicy.org/issue-brief/fda-oversight-understanding-the-regulation-of-health-ai-tools/

https://health.mountsinai.org/blog/how-mount-sinai-using-artificial-intelligence-improve-diagnosis-breast-cancer/

https://www.cancer.gov/news-events/cancer-currents-blog/2022/artificial-intelligence-cancer-imaging

https://www.aamc.org/news/it-cancer-artificial-intelligence-helps-doctors-get-clearer-picture

https://www.thelancet.com/journals/langas/article/PIIS2468-1253(25)00133-5/abstract

https://pmc.ncbi.nlm.nih.gov/articles/PMC12615213/

https://www.aclu.org/news/privacy-technology/algorithms-health-care-may-worsen-medical-racism

https://news.engin.umich.edu/2024/10/accounting-for-bias-in-medical-data-helps-prevent-ai-from-amplifying-racial-disparity/

https://magazine.publichealth.jhu.edu/2023/rooting-out-ais-biases

https://codex.ucsf.edu/news/editors-pick-study-finds-ai-medical-tools-show-bias-potential-misdiagnosis-and-patient-harm

https://www.jmir.org/2025/1/e66760/

https://healthsciences.arizona.edu/news/releases/would-you-trust-ai-doctor-new-research-shows-patients-are-split

https://pmc.ncbi.nlm.nih.gov/articles/PMC12143229/

https://pmc.ncbi.nlm.nih.gov/articles/PMC10711067/

https://journalofethics.ama-assn.org/article/are-current-tort-liability-doctrines-adequate-addressing-injury-caused-ai/2019-02

https://www.scotsman.com/news/scottish-news/scottish-woman-incredibly-lucky-ai-caught-tumour-so-small-it-was-missed-by-medics-5626398

https://newsroom.ucla.edu/releases/ai-early-detection-breast-cancers-ucla-study

https://www.foxnews.com/tech/ai-disclosure-healthcare-what-patients-must-know

https://www.statnews.com/2025/12/30/ai-patients-doctors-chatgpt-med-school-dartmouth-harvard/

https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-enabled-medical-devices