Medical slice transformer for improved diagnosis and explainability on 3D medical images with DINOv2.

Authors

Müller-Franzes G, Khader F, Siepmann R, Han T, Kather JN, Nebelung S, Truhn D

Affiliations (5)

  • Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany. [email protected].
  • Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany.
  • Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany.
  • Department of Medicine I, University Hospital Dresden, Dresden, Germany.
  • National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany.

Abstract

Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) are essential clinical cross-sectional imaging techniques for diagnosing complex conditions. However, large annotated 3D datasets for deep learning are scarce. While self-supervised methods like DINOv2 have shown promise for 2D image analysis, they have not been applied to 3D medical images. Furthermore, deep learning models often lack explainability due to their "black-box" nature. This study aims to extend 2D self-supervised models, specifically DINOv2, to 3D medical imaging while evaluating their potential for explainable outcomes.

We introduce the Medical Slice Transformer (MST) framework to adapt 2D self-supervised models for 3D medical image analysis. MST combines a Transformer architecture with a 2D feature extractor, i.e., DINOv2. We evaluated its diagnostic performance against a 3D convolutional neural network (3D ResNet) across three clinical datasets: breast MRI (651 patients), chest CT (722 patients), and knee MRI (1199 patients). Both methods were tested for diagnosing breast cancer, predicting lung nodule malignancy, and detecting meniscus tears. Diagnostic performance was assessed by calculating the Area Under the Receiver Operating Characteristic Curve (AUC). Explainability was evaluated through a radiologist's qualitative comparison of saliency maps based on slice and lesion correctness. P-values were calculated using DeLong's test.

MST achieved higher AUC values than ResNet across all three datasets: breast (0.94 ± 0.01 vs. 0.91 ± 0.02, P = 0.02), chest (0.95 ± 0.01 vs. 0.92 ± 0.02, P = 0.13), and knee (0.85 ± 0.04 vs. 0.69 ± 0.05, P = 0.001). Saliency maps were consistently more precise and anatomically correct for MST than for ResNet. Self-supervised 2D models like DINOv2 can be effectively adapted to 3D medical imaging with MST, offering improved diagnostic accuracy and explainability compared to convolutional neural networks.
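To make the slice-wise design concrete, the following is a minimal PyTorch sketch of the MST idea: a frozen 2D feature extractor embeds each slice of a 3D volume, and a Transformer encoder aggregates the slice embeddings for classification. The class name, layer sizes, learnable [CLS] token, and slice-position embeddings are illustrative assumptions, not the authors' implementation; only the DINOv2 ViT-B/14 backbone (loaded via torch.hub) is a real, published model.

    import torch
    import torch.nn as nn

    class MedicalSliceTransformer(nn.Module):
        """Sketch of MST: frozen 2D backbone per slice + Transformer over slices."""

        def __init__(self, feat_dim=768, num_classes=2, depth=4, num_heads=8, max_slices=64):
            super().__init__()
            # Frozen DINOv2 ViT-B/14 backbone (embedding dim 768)
            self.backbone = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitb14')
            for p in self.backbone.parameters():
                p.requires_grad = False
            # Learnable [CLS] token and slice-position embedding (assumed design)
            self.cls_token = nn.Parameter(torch.zeros(1, 1, feat_dim))
            self.pos_embed = nn.Parameter(torch.zeros(1, max_slices + 1, feat_dim))
            encoder_layer = nn.TransformerEncoderLayer(
                d_model=feat_dim, nhead=num_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
            self.head = nn.Linear(feat_dim, num_classes)

        def forward(self, volume):
            # volume: (B, S, 3, H, W); each grayscale slice replicated to 3 channels
            b, s = volume.shape[:2]
            with torch.no_grad():
                feats = self.backbone(volume.flatten(0, 1))  # (B*S, feat_dim)
            feats = feats.view(b, s, -1)
            cls = self.cls_token.expand(b, -1, -1)
            x = torch.cat([cls, feats], dim=1) + self.pos_embed[:, : s + 1]
            x = self.encoder(x)
            return self.head(x[:, 0])  # classify from the [CLS] position

For example, logits = MedicalSliceTransformer()(torch.randn(2, 32, 3, 224, 224)) would classify a batch of two 32-slice volumes; 224 is used because DINOv2 expects spatial dimensions divisible by its 14-pixel patch size.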

Topics

  • Magnetic Resonance Imaging
  • Imaging, Three-Dimensional
  • Image Processing, Computer-Assisted
  • Journal Article
