A South Korean study finds that AI-generated chest x-ray reports are nearly as clinically acceptable as radiologist-written reports under a standard criterion, though they fall short under a stricter one.
Key Details
- AI-generated and radiologist-written reports showed similar acceptability under a standard criterion: 88.4% vs 89.2% (p = 0.36).
- Under a more stringent criterion (acceptable without revision), AI-generated reports were less acceptable: 66.8% vs 75.7% (p < 0.001).
- The model (KARA-CXR) was trained on 8.8 million chest x-rays from 42 institutions across South Korea and the U.S.
- AI-generated reports demonstrated higher sensitivity for referable abnormalities (81.2% vs 59.4%) but lower specificity (81% vs 93.6%) compared with radiologists (see the sketch after this list).
- Seven thoracic radiologists independently evaluated report acceptability; most felt AI was not yet ready to replace human radiologists.
- Editorials note that the AI's diagnostic performance sits between that of residents and board-certified radiologists.
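For context, sensitivity and specificity capture the two sides of a reader's error trade-off. The sketch below shows how they are computed from confusion-matrix counts; the counts themselves are hypothetical, chosen only to reproduce the AI's reported rates, since the study's underlying counts are not given here.

```python
# Illustrative only: how sensitivity and specificity are derived.
# The counts below are hypothetical -- the study reports only the
# resulting percentages (AI: 81.2% sensitivity, 81% specificity;
# radiologists: 59.4% sensitivity, 93.6% specificity).

def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Return (sensitivity, specificity) from confusion-matrix counts.

    Sensitivity = TP / (TP + FN): share of referable abnormalities flagged.
    Specificity = TN / (TN + FP): share of normal studies correctly cleared.
    """
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical cohort: 500 studies with a referable abnormality and
# 500 without, read at the AI's reported rates.
sens, spec = sensitivity_specificity(tp=406, fn=94, tn=405, fp=95)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")  # 81.2%, 81.0%
```

In practical terms, the AI's higher sensitivity means fewer missed abnormalities, while its lower specificity means more normal studies flagged for follow-up than radiologists would generate.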
Why It Matters
The findings suggest AI-generated reporting is approaching radiologist-level quality for routine chest x-ray interpretation, but the gap under the stricter no-revision criterion and the lower specificity indicate that radiologist review remains necessary before clinical deployment.
Source
AuntMinnie
Related News

AI-Assisted Mammography Improves Cancer Detection and Cuts Workload
AI-supported mammography increases breast cancer detection, reduces interval cancers, and substantially lowers radiologist workload versus standard double reading.

Survey Reveals Hurdles and Hopes for GenAI Adoption in U.S. Healthcare
A new survey finds high confidence in generative AI's potential among U.S. nurses, but a lack of preparedness and governance impedes its impact.

AI Enhances POCUS Cardiac Screening for Non-Cardiologists
An AI model accurately detects structural cardiac issues on point-of-care ultrasound (POCUS) images, even when the images are acquired by non-cardiologists.