
Artificial Intelligence Education in Radiology Training: A Systematic Review of Effectiveness, Barriers, and Future Directions.

November 15, 2025

Authors

Keshavarz P, Mohammadigoldar Z, Bedayat A, Raman SS, Tai R

Affiliations (3)

  • Department of Radiology, Trinity Health Oakland Hospital, Pontiac, Michigan (P.K.); Department of Radiology, Wayne State University School of Medicine, Detroit, Michigan (P.K.); Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles (UCLA), Los Angeles, California (P.K., Z.M., A.B., S.S.R.). Electronic address: [email protected].
  • Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles (UCLA), Los Angeles, California (P.K., Z.M., A.B., S.S.R.).
  • Division of Musculoskeletal Imaging and Intervention, Department of Radiology, University of Massachusetts Chan Medical School, 55 Lake Avenue North, Worcester, Massachusetts (R.T.).

Abstract

The purpose of this systematic review was to characterize the current landscape of artificial intelligence (AI) education in radiology, summarizing existing curricula, outcomes, challenges, and future directions for effective integration into residency training. A comprehensive search of PubMed, Web of Science, Embase, and Google Scholar identified relevant published studies up to June 19, 2025. Of the 2646 studies screened, 14 evaluated the performance of AI-based training programs for radiology trainees; among these, 92.9% (13/14) reported improvements in trainees' performance, including better diagnostic precision and interpretation (57.1%, 8/14), greater trainee confidence (57.1%, 8/14), hands-on experience with AI platforms (85.7%, 12/14), increased AI knowledge (85.7%, 12/14), engagement with AI-based case learning (35.7%, 5/14), understanding of AI ethics and bias (7.1%, 1/14), and acceptance of AI-assisted learning (78.6%, 11/14), whereas one study (7.1%, 1/14) found no significant benefit. Performance evaluation metrics varied across studies: 35.7% (5/14) reported higher median sensitivity, specificity, and accuracy after AI training (72%, 80%, and 81.3%) than before it (62.2%, 78.9%, and 76.5%, respectively), and 28.6% (4/14) showed improved AI knowledge scores. Hands-on simulations and didactic lectures were the most common AI training formats (78.6% and 71.4%, respectively). Risks and concerns included over-reliance on AI, limited exposure to complex or rare cases, and a lack of feedback. Recommendations highlighted the need for AI-trained faculty, broader content coverage, and standardized multi-center AI training programs to facilitate wider adoption.
In summary, 92.9% of studies showed that AI-based training can enhance radiology trainees' knowledge, interpretive skills, or diagnostic performance, especially for junior trainees. However, safe adoption requires standardized curricula with diverse cases, mentorship, workflow integration, and robust evaluation, and larger studies are needed to confirm generalizability.

Topics

Journal Article
