
Stanford and Rad Partners developed a structured framework for pre-deployment evaluation of radiology AI models to guide purchasing decisions.
Key Details
- Framework was developed by Stanford University and Rad Partners, detailed in the American Journal of Roentgenology.
- A workgroup of four radiologists evaluated 13 AI models from one vendor (Aidoc) between 2022 and 2024.
- Nearly 89,000 exams across multiple sites were used in the assessment.
- Attributes for evaluating value included task tediousness, likelihood of radiologist oversight, and clinical impact of misses.
- Five tasks were rated as high value, five as medium, and two as low based on the framework.
Why It Matters
This framework gives radiology groups a practical, evidence-based method for evaluating AI models before investment, addressing the gap between AI marketing claims and real-world outcomes. Adopting such structured assessment tools can improve clinical effectiveness and return on investment for AI integration in imaging practices.

Source
Radiology Business
Related News

• AuntMinnie
FDA Rejects Petition to Exempt Radiology AI Devices from 510(k) Review
FDA denies a petition to exempt certain radiology AI software from 510(k) review, stressing ongoing regulatory oversight.

• AuntMinnie
LLM Boosts Accuracy and Clarity of Patient Radiology Report Translations
A study found GPT-o1 effectively simplified and accurately translated emergency radiology reports into multiple languages, outperforming Google Translate.

• Radiology Business
AI Rarely Mentioned in Radiology Job Listings Despite Widespread Adoption
A new report finds that AI is rarely specified in radiology job postings, despite its broad use in imaging.