Blitz Bureau
NEW DELHI: A detailed analysis of more than 500 medical AI devices revealed that nearly half of the tools authorised by the US Food and Drug Administration (FDA) lacked reported clinical validation data, researchers said on August 26.
The findings, published in the journal Nature Medicine, warned that almost half of FDA-approved AI medical devices have not been validated using real patient data.
“Although AI device manufacturers boast of the credibility of their technology with FDA authorisation, clearance does not mean that the devices have been properly evaluated for clinical effectiveness using real patient data,” said Sammy Chouffani El Fassi of the University of North Carolina (UNC) School of Medicine, a research scholar at the Duke Heart Center in the US.
AI has practically limitless applications in healthcare, ranging from auto-drafting patient messages to optimising organ transplantation and improving tumour removal accuracy.
Despite their potential benefits, these tools have been met with scepticism over patient privacy, the possibility of bias and questions about device accuracy.
To find out more, a team of researchers from the UNC School of Medicine, Duke University, Ally Bank, Oxford University, Columbia University and the University of Miami has been on a mission to build public trust and evaluate how exactly AI and algorithmic technologies are being approved for use in patient care.
Of the 521 device authorisations, 144 were labelled as “retrospectively validated,” 148 were “prospectively validated,” and 22 were validated using randomised controlled trials.
Most notably, 226 of the 521 FDA-approved medical devices, or approximately 43 per cent, lacked published clinical validation data, the team found. A few of the devices were validated using “phantom images” — computer-generated images not taken from a real patient — which did not technically meet the requirements for clinical validation.
Since 2016, the average number of medical AI device authorisations by the FDA per year has increased from 2 to 69, indicating tremendous growth in commercialisation of AI medical technologies.
“A lot of the devices that came out after 2016 were created new, or maybe they were similar to a product that already was on the market,” said Gail E Henderson, PhD, professor in the UNC Department of Social Medicine.
Furthermore, the researchers found that the latest draft guidance, published by the FDA in September 2023, does not clearly distinguish between different types of clinical validation studies in its recommendations to manufacturers.
The team hopes the findings will inspire researchers and universities globally to conduct clinical validation studies on medical AI to improve the safety and effectiveness of these technologies.