Four types of bias in medical AI are running under the FDA's radar


Although artificial intelligence is entering health care with great promise, clinical AI tools are prone to bias and real-world underperformance at every step from inception to deployment, including dataset acquisition, labeling and annotation, algorithm training, and validation. These biases can reinforce existing disparities in diagnosis and treatment. To explore how well bias is being identified in the FDA review process, we looked at virtually every health care AI product approved between 1997 and October 2022. Our audit of the data submitted to the FDA to clear clinical AI products for the market reveals major flaws in how this technology is being regulated.

OUR ANALYSIS

The FDA approved 521 AI products between 1997 and October 2022: 500 under the 510(k) pathway, meaning the new algorithm mimics an existing technology; 18 under the de novo pathway, meaning the algorithm does not mimic existing models but comes packaged with controls that make it safe; and three through premarket approval. Because the FDA publishes summaries only for the first two pathways, we analyzed the rigor of the submission data underlying 518 approvals to understand how well the submissions accounted for the ways bias can enter the equation.