
Artificial intelligence as medical device in radiology in 2025: the regulatory scenario in the EU, USA, and China.

March 15, 2026

Authors

Pesapane F, De Cecco C, Wang H, Hauglid MK, Sardanelli F

Affiliations (5)

  • Breast Imaging Division, Radiology Department, IEO European Institute of Oncology IRCCS, Milan, Italy. [email protected].
  • Division of Cardiothoracic Imaging, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA.
  • Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China.
  • Wikborg Rein Advokatfirma AS, Oslo, Norway.
  • Lega Italiana per la Lotta contro i Tumori (LILT) Milano Monza Brianza, Milan, Italy.

Abstract

In the last decade, advanced AI methods have been applied to radiology, providing tools for clinical practice. Regulation across countries is a relevant topic, considering that AI tools must be regarded as medical devices. We describe the regulatory scenarios in the EU, USA, and China. For the EU, we considered the 2017 Medical Device Regulation, which includes AI tools as "active" medical devices; the 2018 General Data Protection Regulation, which protects data privacy; and the risk-based approach of the 2024 AI Act. For the USA, we considered the three FDA premarket pathways: 510(k) clearance, demonstrating substantial equivalence; the De Novo classification, for novel devices without predicates; and the Premarket Approval process, for high-risk applications demanding rigorous clinical evidence. Recent regulations have addressed lifecycle management, post-market surveillance, and adaptive algorithms, underscoring the importance of real-world evidence of AI tool performance. For China, we illustrate the role of the 2022 Guidance for classification and definition of AI medical software by the National Medical Products Administration, describing how to determine whether a tool is an AI-enabled medical device and how to categorize its risk level. The NMPA has published six premarket technical review guides related to AI-enabled medical devices in radiology and medical imaging; protection of patient privacy is enforced by law, and de-identification is mandatory for manufacturers. Regulations in these three scenarios show meaningful convergences regarding patient data protection, risk assessment and classification, equity and generalizability, transparency and explainability, and the need for human oversight. The radiology community will act in a world scenario more homogeneous than expected.
KEY POINTS:

  • Question: Regulatory fragmentation across the EU, USA, and China creates uncertainty for radiology AI development, validation, and clinical adoption, requiring clearer international harmonization.
  • Findings: Despite differences, regulations in the EU, USA, and China converge on core requirements: patient data protection, risk classification, transparency, bias mitigation, and human oversight.
  • Clinical relevance: By highlighting convergences across major jurisdictions, this review informs radiologists and developers on the safe integration of AI tools, ensuring patient safety, equity, and trustworthy adoption in clinical practice.

Topics

Journal Article, Review
