A South Korean study finds that AI-generated chest x-ray reports are nearly as clinically acceptable as radiologist-written reports under standard criteria.
Key Details
- AI-generated and radiologist-written reports showed similar acceptability under a standard criterion: 88.4% vs 89.2% (p = 0.36).
- Under a more stringent criterion (acceptable without revision), AI was less acceptable: 66.8% vs 75.7% (p < 0.001).
- The model (KARA-CXR) was trained on 8.8 million chest x-rays from 42 institutions across South Korea and the U.S.
- AI-generated reports demonstrated higher sensitivity for referable abnormalities (81.2% vs 59.4%) but lower specificity (81% vs 93.6%) compared with radiologists.
- Seven thoracic radiologists independently evaluated report acceptability; most felt AI was not yet ready to replace human radiologists.
- Accompanying editorials note that the AI's diagnostic performance falls between that of residents and board-certified radiologists.
Why It Matters

Source
AuntMinnie
Related News

Harrison.ai Receives FDA Breakthrough Status for Imaging AI Device
Harrison.ai has been awarded three FDA breakthrough device designations for its imaging AI solutions, including a tool for obstructive hydrocephalus triage.

Head-to-Head Study Evaluates AI Accuracy in Fracture Detection on X-Ray
A prospective study compared three commercial AI tools for fracture detection on x-ray, showing moderate-to-high accuracy for simple cases but weaker performance in complex scenarios.

FDA Authorizes AI Tool for Detecting Mitral Annular Calcification on Routine CT
Bunkerhill Health has received FDA clearance for an AI model that detects mitral annular calcification (MAC) on routine, non-gated CT scans.