An explainable deep learning AI model using gadoxetic acid-enhanced MRI improves sensitivity in diagnosing hepatocellular carcinoma (HCC).
Key Details
- The AI model was trained on 1,023 liver lesions from 839 patients using multi-phase MRI.
- It classified lesions as HCC or non-HCC and provided visual explanations (LI-RADS feature identification); see the saliency sketch after this list.
- On a test set, the model achieved an AUC of 0.97 for HCC diagnosis.
- Compared with LI-RADS 5 criteria, the AI had higher sensitivity (91.6% vs. 74.8%) at slightly lower specificity (90.7% vs. 96%); see the metrics sketch after this list.
- Radiologists assisted by the AI improved their sensitivity (up to 89%) with no loss in specificity.
- Explainability is highlighted, aligning with regulatory emphasis on interpretable AI.
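The study's "visual explanations" are tied to LI-RADS features; the exact method is not described here. As a minimal, hypothetical sketch of how visual explanations are commonly produced for image classifiers, here is a gradient saliency map over a toy multi-phase input (this is not the study's model or method):

```python
# Illustrative gradient saliency for a CNN classifier; NOT the study's method.
# Shows one generic way to highlight which input regions drive an HCC score.
import torch
import torch.nn as nn

model = nn.Sequential(                      # toy stand-in for an MRI lesion classifier
    nn.Conv2d(4, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),                        # logits: [non-HCC, HCC]
)
model.eval()

# 4 contrast phases stacked as channels (a hypothetical input layout)
x = torch.randn(1, 4, 64, 64, requires_grad=True)
hcc_logit = model(x)[0, 1]
hcc_logit.backward()                        # d(HCC logit) / d(input)

# Per-pixel importance: largest absolute gradient across phases
saliency = x.grad.abs().max(dim=1).values   # shape (1, 64, 64)
print(saliency.shape)
```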
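For readers curious how the headline numbers are derived, here is a minimal sketch, using made-up labels and scores rather than the study's data, of how sensitivity, specificity, and AUC are computed for a binary HCC vs. non-HCC classifier:

```python
# Illustrative only: hypothetical labels and scores, not the study's data.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0])    # 1 = HCC, 0 = non-HCC
y_score = np.array([0.92, 0.85, 0.40, 0.10, 0.30,
                    0.77, 0.65, 0.05, 0.88, 0.20])   # model probabilities

auc = roc_auc_score(y_true, y_score)                  # threshold-free ranking metric

y_pred = (y_score >= 0.5).astype(int)                 # binarize at an example threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                          # true-positive rate (recall)
specificity = tn / (tn + fp)                          # true-negative rate

print(f"AUC={auc:.2f}, sensitivity={sensitivity:.1%}, specificity={specificity:.1%}")
```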
Why It Matters
This study demonstrates that explainable AI models can significantly increase diagnostic sensitivity for HCC on MRI, supporting radiologists without sacrificing specificity. The focus on explainability is crucial for regulatory compliance and clinical trust in AI tools for radiology.

Source
AuntMinnie
Related News

- Radiology Business: AI Triage Cuts CT Report Turnaround for Pulmonary Embolism—Daytime Only
  FDA-backed study finds AI triage tools reduce radiology CT report turnaround times for pulmonary embolism during peak hours.

- Cardiovascular Business: AI Uses Mammograms to Predict Women’s Cardiovascular Disease Risk
  AI algorithms can analyze mammograms to predict cardiovascular disease risk, expanding the utility of breast imaging.

- Health Imaging: Most FDA-Cleared AI Devices Lack Pre-Approval Safety Data, Study Finds
  A new study finds fewer than 30% of FDA-cleared AI medical devices reported key safety or adverse event data before approval.