Explainable Deep Learning Model Enhances Liver Cancer MRI Diagnosis

May 30, 2025

An explainable deep learning model using gadoxetic acid-enhanced MRI improves sensitivity in the diagnosis of hepatocellular carcinoma (HCC).

Key Details

  • The AI model was trained on 1,023 liver lesions from 839 patients using multi-phase MRI.
  • It classified lesions as HCC or non-HCC and provided visual explanations by identifying LI-RADS imaging features.
  • On the test set, the model achieved an AUC of 0.97 for HCC diagnosis.
  • Compared with LI-RADS category 5, the AI had higher sensitivity (91.6% vs. 74.8%) with similar specificity (90.7% vs. 96.0%); see the metrics sketch after this list.
  • Radiologists assisted by the AI showed improved sensitivity (up to 89%) with no loss in specificity.
  • The study highlights explainability, aligning with regulatory emphasis on interpretable AI.
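
For readers who want to see how numbers like these are produced, here is a minimal sketch (not the study's code) of computing AUC, sensitivity, and specificity for a binary HCC vs. non-HCC classifier; the labels, prediction scores, and 0.5 threshold are illustrative assumptions.

```python
# Illustrative only: made-up labels and scores, not data from the study.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])    # 1 = HCC, 0 = non-HCC (hypothetical)
y_score = np.array([0.92, 0.10, 0.85, 0.60, 0.30,
                    0.20, 0.75, 0.40, 0.95, 0.05])    # model probabilities (hypothetical)

auc = roc_auc_score(y_true, y_score)                  # area under the ROC curve

y_pred = (y_score >= 0.5).astype(int)                 # threshold scores into class labels
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                          # true-positive rate
specificity = tn / (tn + fp)                          # true-negative rate

print(f"AUC={auc:.2f}  sensitivity={sensitivity:.1%}  specificity={specificity:.1%}")
```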

Why It Matters

This study demonstrates that explainable AI models can significantly increase diagnostic sensitivity for HCC on MRI, supporting radiologists without sacrificing specificity. The focus on explainability is crucial for regulatory compliance and clinical trust in AI tools for radiology.
