Page 5 of 23224 results

Deep Learning-Driven High Spatial Resolution Attenuation Imaging for Ultrasound Tomography (AI-UT).

Liu M, Kou Z, Wiskin JW, Czarnota GJ, Oelze ML

pubmed · Jul 24 2025
Ultrasonic attenuation can be used to characterize tissue properties of the human breast. Both quantitative ultrasound (QUS) and ultrasound computed tomography (USCT) can provide attenuation estimates; however, both approaches have limitations. In QUS, generating attenuation maps requires dividing the image into data blocks, with an optimal block size of roughly 15 to 30 pulse lengths, which dramatically reduces the spatial resolution of attenuation imaging. In USCT, attenuation is often estimated with a full wave inversion (FWI) method, which is sensitive to background noise. To achieve a high-resolution attenuation image with low variance, a deep learning (DL) based method was proposed. In this approach, RF data from 60 angular views acquired with the QTI Breast Acoustic CT™ Scanner served as the input, with attenuation images as the output. To improve image quality for the DL method, the spatial correlation between speed of sound (SOS) and attenuation was used as a constraint in the model. The results indicated that including the SOS structural information improved model performance. With a higher-spatial-resolution attenuation image, further segmentation of the breast can be achieved. The structural information and attenuation values provided by DL-generated attenuation images were validated against values from the literature and the SOS-based segmentation map. DL-generated attenuation images can serve as an additional biomarker for breast cancer diagnosis.
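The abstract states only that SOS/attenuation spatial correlation is used as a constraint, not its functional form. A minimal numpy sketch of one plausible form (the edge-consistency penalty and the weighting `lam` are assumptions, not the paper's loss):

```python
import numpy as np

def sos_constrained_loss(att_pred, att_true, sos, lam=0.1):
    """Hypothetical training loss: attenuation MSE plus a structural term
    that penalizes attenuation edges where the SOS map is smooth."""
    mse = np.mean((att_pred - att_true) ** 2)

    def edge_map(img):
        # Finite-difference gradient magnitude as a simple edge detector.
        gy, gx = np.gradient(img)
        return np.hypot(gy, gx)

    e_att, e_sos = edge_map(att_pred), edge_map(sos)
    # Attenuation edges unsupported by SOS structure cost more.
    structural = np.mean(e_att * np.exp(-e_sos))
    return mse + lam * structural
```

The constraint encodes the prior that tissue boundaries visible in SOS should coincide with boundaries in attenuation, which is how SOS structure can regularize the attenuation estimate.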

Mammo-Mamba: A Hybrid State-Space and Transformer Architecture with Sequential Mixture of Experts for Multi-View Mammography

Farnoush Bayatmakou, Reza Taleei, Nicole Simone, Arash Mohammadi

arxiv preprint · Jul 23 2025
Breast cancer (BC) remains one of the leading causes of cancer-related mortality among women, despite recent advances in Computer-Aided Diagnosis (CAD) systems. Accurate and efficient interpretation of multi-view mammograms is essential for early detection, driving a surge of interest in Artificial Intelligence (AI)-powered CAD models. While state-of-the-art multi-view mammogram classification models are largely based on Transformer architectures, their computational complexity scales quadratically with the number of image patches, highlighting the need for more efficient alternatives. To address this challenge, we propose Mammo-Mamba, a novel framework that integrates Selective State-Space Models (SSMs), transformer-based attention, and expert-driven feature refinement into a unified architecture. Mammo-Mamba extends the MambaVision backbone by introducing the Sequential Mixture of Experts (SeqMoE) mechanism through its customized SecMamba block. The SecMamba is a modified MambaVision block that enhances representation learning in high-resolution mammographic images by enabling content-adaptive feature refinement. These blocks are integrated into the deeper stages of MambaVision, allowing the model to progressively adjust feature emphasis through dynamic expert gating, effectively mitigating the limitations of traditional Transformer models. Evaluated on the CBIS-DDSM benchmark dataset, Mammo-Mamba achieves superior classification performance across all key metrics while maintaining computational efficiency.
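The SeqMoE mechanism described above routes each token through dynamically gated experts. A toy numpy stand-in (linear experts and a linear gate replace the paper's SecMamba blocks; all shapes and names here are illustrative only):

```python
import numpy as np

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

class SeqMoEGate:
    """Per-token mixture-of-experts gating sketch: each expert is a linear
    map; a learned gate mixes expert outputs per token."""
    def __init__(self, dim, n_experts, seed=0):
        rng = np.random.default_rng(seed)
        self.experts = rng.normal(size=(n_experts, dim, dim)) / np.sqrt(dim)
        self.gate = rng.normal(size=(dim, n_experts)) / np.sqrt(dim)

    def __call__(self, x):                       # x: (tokens, dim)
        w = softmax(x @ self.gate)               # (tokens, n_experts)
        outs = np.einsum('td,edk->tek', x, self.experts)  # per-expert outputs
        return np.einsum('te,tek->tk', w, outs), w
```

The gate weights depend on the token content, which is what "content-adaptive feature refinement" refers to: different image patches emphasize different experts.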

A Hybrid CNN-VSSM model for Multi-View, Multi-Task Mammography Analysis: Robust Diagnosis with Attention-Based Fusion

Yalda Zafari, Roaa Elalfy, Mohamed Mabrok, Somaya Al-Maadeed, Tamer Khattab, Essam A. Rashed

arxiv preprint · Jul 22 2025
Early and accurate interpretation of screening mammograms is essential for effective breast cancer detection, yet it remains a complex challenge due to subtle imaging findings and diagnostic ambiguity. Many existing AI approaches fall short by focusing on single-view inputs or single-task outputs, limiting their clinical utility. To address these limitations, we propose a novel multi-view, multi-task hybrid deep learning framework that processes all four standard mammography views and jointly predicts diagnostic labels and BI-RADS scores for each breast. Our architecture integrates a hybrid CNN-VSSM backbone, combining convolutional encoders for rich local feature extraction with Visual State Space Models (VSSMs) to capture global contextual dependencies. To improve robustness and interpretability, we incorporate a gated attention-based fusion module that dynamically weights information across views, effectively handling cases with missing data. We conduct extensive experiments across diagnostic tasks of varying complexity, benchmarking our proposed hybrid models against baseline CNN architectures and VSSM models in both single-task and multi-task learning settings. Across all tasks, the hybrid models consistently outperform the baselines. In the binary BI-RADS 1 vs. 5 classification task, the shared hybrid model achieves an AUC of 0.9967 and an F1 score of 0.9830. For the more challenging ternary classification, it attains an F1 score of 0.7790, while in the five-class BI-RADS task, the best F1 score reaches 0.4904. These results highlight the effectiveness of the proposed hybrid framework and underscore both the potential and limitations of multi-task learning for improving diagnostic performance and enabling clinically meaningful mammography analysis.
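The gated fusion over four views with missing-data handling can be sketched in a few lines; here the gate score is a norm-based proxy rather than a learned attention module, and the masking scheme is an assumption about how absent views might be handled:

```python
import numpy as np

def gated_view_fusion(view_feats, present):
    """Fuse per-view embeddings with a softmax gate, zeroing absent views.
    view_feats: (4, dim) embeddings for CC/MLO of both breasts;
    present: (4,) booleans marking which views were acquired."""
    scores = np.linalg.norm(view_feats, axis=1)   # proxy for a learned gate
    scores = np.where(present, scores, -np.inf)   # absent views get -inf
    w = np.exp(scores - scores[present].max())    # stable softmax numerator
    w = np.where(present, w, 0.0)
    w = w / w.sum()                               # weights over present views
    return w @ view_feats, w
```

Masking before the softmax means missing views receive exactly zero weight, so the fused representation degrades gracefully instead of mixing in padding.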

Prediction of OncotypeDX recurrence score using H&E stained WSI images

Cohen, S., Shamai, G., Sabo, E., Cretu, A., Barshack, I., Goldman, T., Bar-Sela, G., Pearson, A. T., Huo, D., Howard, F. M., Kimmel, R., Mayer, C.

medrxiv preprint · Jul 21 2025
The OncotypeDX 21-gene assay is a widely adopted tool for estimating recurrence risk and informing chemotherapy decisions in early-stage, hormone receptor-positive, HER2-negative breast cancer. Although informative, its high cost and long turnaround time limit accessibility and delay treatment in low- and middle-income countries, creating a need for alternative solutions. This study presents a deep learning-based approach for predicting OncotypeDX recurrence scores directly from hematoxylin and eosin-stained whole slide images. Our approach leverages a deep learning foundation model pre-trained on 171,189 slides via self-supervised learning, which is fine-tuned for our task. The model was developed and validated using five independent cohorts, three of which are external. On the two external cohorts that include OncotypeDX scores, the model achieved AUCs of 0.825 and 0.817, and identified 21.9% and 25.1% of the patients as low-risk with sensitivities of 0.97 and 0.95 and negative predictive values of 0.97 and 0.96, showing strong generalizability despite variations in staining protocols and imaging devices. Kaplan-Meier analysis demonstrated that patients classified as low-risk by the model had a significantly better prognosis than those classified as high-risk, with hazard ratios of 4.1 (P<0.001) and 2.0 (P<0.01) on the two external cohorts that include patient outcomes. This artificial intelligence-driven solution offers a rapid, cost-effective, and scalable alternative to genomic testing, with the potential to enhance personalized treatment planning, especially in resource-constrained settings.
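The low-risk triage metrics reported (fraction flagged low-risk, sensitivity, NPV) all follow from a single score threshold. A small sketch with hypothetical data, since the paper's threshold and cohorts are not reproduced here:

```python
def low_risk_triage(y_true, scores, threshold):
    """Sensitivity and NPV when scores below `threshold` are called low-risk.
    y_true: 1 = event (high recurrence risk), 0 = no event."""
    low = [s < threshold for s in scores]
    tn = sum(l and y == 0 for l, y in zip(low, y_true))   # correctly low-risk
    fn = sum(l and y == 1 for l, y in zip(low, y_true))   # missed events
    tp = sum((not l) and y == 1 for l, y in zip(low, y_true))
    sensitivity = tp / (tp + fn)
    npv = tn / (tn + fn) if (tn + fn) else float('nan')
    frac_low = sum(low) / len(low)
    return sensitivity, npv, frac_low
```

A high NPV at the chosen threshold is what justifies sparing the flagged fraction of patients from the genomic assay.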

Mammo-SAE: Interpreting Breast Cancer Concept Learning with Sparse Autoencoders

Krishna Kanth Nakka

arxiv preprint · Jul 21 2025
Interpretability is critical in high-stakes domains such as medical imaging, where understanding model decisions is essential for clinical adoption. In this work, we introduce Sparse Autoencoder (SAE)-based interpretability to breast imaging by analyzing Mammo-CLIP, a vision-language foundation model pretrained on large-scale mammogram image-report pairs. We train a patch-level Mammo-SAE on Mammo-CLIP to identify and probe latent features associated with clinically relevant breast concepts such as mass and suspicious calcification. Our findings reveal that the top-activated class-level latent neurons in the SAE latent space often align with ground-truth regions, and also uncover several confounding factors influencing the model's decision-making process. Additionally, we analyze which latent neurons the model relies on during downstream fine-tuning to improve breast concept prediction. This study highlights the promise of interpretable SAE latent representations in providing deeper insight into the internal workings of foundation models at every layer for breast imaging.
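A patch-level SAE of the kind described maps each patch embedding to an overcomplete, non-negative latent code and reconstructs the embedding from it. A minimal generic sketch (tied decoder weights and the L1 coefficient are simplifying assumptions; the abstract does not specify Mammo-SAE's configuration):

```python
import numpy as np

class PatchSAE:
    """Minimal ReLU sparse autoencoder over patch embeddings."""
    def __init__(self, d_in, d_latent, seed=0):
        rng = np.random.default_rng(seed)
        self.W_enc = rng.normal(size=(d_in, d_latent)) / np.sqrt(d_in)
        self.b_enc = np.zeros(d_latent)
        self.W_dec = self.W_enc.T.copy()     # tied weights for simplicity

    def encode(self, x):
        # ReLU keeps codes non-negative; with an L1 penalty they become sparse.
        return np.maximum(x @ self.W_enc + self.b_enc, 0.0)

    def loss(self, x, l1=1e-3):
        z = self.encode(x)                   # (patches, d_latent) sparse code
        recon = z @ self.W_dec
        return np.mean((recon - x) ** 2) + l1 * np.abs(z).mean(), z
```

Once trained, individual latent dimensions can be probed by checking which patches (e.g. mass or calcification regions) maximally activate them, which is the interpretability step the paper performs.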

OpenBreastUS: Benchmarking Neural Operators for Wave Imaging Using Breast Ultrasound Computed Tomography

Zhijun Zeng, Youjia Zheng, Hao Hu, Zeyuan Dong, Yihang Zheng, Xinliang Liu, Jinzhuo Wang, Zuoqiang Shi, Linfeng Zhang, Yubing Li, He Sun

arxiv preprint · Jul 20 2025
Accurate and efficient simulation of wave equations is crucial in computational wave imaging applications, such as ultrasound computed tomography (USCT), which reconstructs tissue material properties from observed scattered waves. Traditional numerical solvers for wave equations are computationally intensive and often unstable, limiting their practical applications for quasi-real-time image reconstruction. Neural operators offer an innovative approach by accelerating PDE solving using neural networks; however, their effectiveness in realistic imaging is limited because existing datasets oversimplify real-world complexity. In this paper, we present OpenBreastUS, a large-scale wave equation dataset designed to bridge the gap between theoretical equations and practical imaging applications. OpenBreastUS includes 8,000 anatomically realistic human breast phantoms and over 16 million frequency-domain wave simulations using real USCT configurations. It enables a comprehensive benchmarking of popular neural operators for both forward simulation and inverse imaging tasks, allowing analysis of their performance, scalability, and generalization capabilities. By offering a realistic and extensive dataset, OpenBreastUS not only serves as a platform for developing innovative neural PDE solvers but also facilitates their deployment in real-world medical imaging problems. For the first time, we demonstrate efficient in vivo imaging of the human breast using neural operator solvers.
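The frequency-domain simulations described above reduce, per frequency and source, to solving a Helmholtz-type linear system. A minimal 1-D illustration of such a solve (zero Dirichlet boundaries, uniform spacing, and a real wavenumber are simplifying assumptions; the dataset's 2-D simulations use realistic USCT configurations):

```python
import numpy as np

def helmholtz_1d(k, f, h):
    """Direct solve of u'' + k^2 u = f on an interior grid with zero
    Dirichlet boundaries, via a second-order finite-difference matrix."""
    n = len(f)
    # Standard tridiagonal discretization of the second derivative.
    A = (np.diag(np.full(n, -2.0))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    A += np.diag(k**2)                    # spatially varying wavenumber
    return np.linalg.solve(A, f)
```

Dense direct solves like this scale poorly with grid size and frequency count, which is precisely the cost that neural operator surrogates aim to amortize.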

Results from a Swedish model-based analysis of the cost-effectiveness of AI-assisted digital mammography.

Lyth J, Gialias P, Husberg M, Bernfort L, Bjerner T, Wiberg MK, Levin LÅ, Gustafsson H

pubmed · Jul 19 2025
To evaluate the cost-effectiveness of AI-assisted digital mammography (AI-DM) compared to conventional biennial breast cancer digital mammography screening (cDM) with double reading of screening mammograms, and to investigate the change in cost-effectiveness based on four different sub-strategies of AI-DM. A decision-analytic state-transition Markov model was used to analyse the decision of whether to use cDM or AI-DM in breast cancer screening. In this Markov model, one-year cycles were used, and the analysis was performed from a healthcare perspective with a lifetime horizon. In the model, we analysed 1000 hypothetical individuals attending mammography screenings assessed with AI-DM compared with 1000 hypothetical individuals assessed with cDM. The total costs, including both screening-related costs and breast cancer-related costs, were €3,468,967 and €3,528,288 for AI-DM and cDM, respectively. AI-DM resulted in a cost saving of €59,320 compared to cDM. Per 1000 individuals, AI-DM gained 10.8 quality-adjusted life years (QALYs) compared to cDM. Gained QALYs at a lower cost mean that the AI-DM screening strategy was dominant compared to cDM. Break-even occurred at the second screening at age 42 years. This analysis showed that AI-assisted mammography for biennial breast cancer screening in a Swedish population of women aged 40-74 years is a cost-saving strategy compared to a conventional strategy using double human screen reading. Further clinical studies are needed, as scenario analyses showed that other strategies, more dependent on AI, are also cost-saving. Question: Is AI-DM cost-effective in comparison to conventional biennial breast cDM screening? Findings: AI-DM is cost-effective, and the break-even point occurred at the second screening at age 42 years. Clinical relevance: The implementation of AI is clearly cost-effective, as it reduces the total cost for the healthcare system and simultaneously results in a gain in QALYs.
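A state-transition Markov model with one-year cycles, as used here, accumulates discounted costs and QALYs while the cohort distribution evolves. A generic sketch (the states, probabilities, and per-state values below are placeholders, not the study's Swedish inputs):

```python
def markov_cohort(trans, cost, qaly, start, cycles, disc=0.03):
    """Cohort Markov model with one-year cycles.
    trans: {state: {state: prob}} one-cycle transition probabilities;
    cost/qaly: per state-year values; start: initial distribution.
    Returns total discounted cost and QALYs per person."""
    states = list(trans)
    dist = {s: start.get(s, 0.0) for s in states}
    tot_cost = tot_qaly = 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + disc) ** t          # discount factor for cycle t
        tot_cost += d * sum(dist[s] * cost[s] for s in states)
        tot_qaly += d * sum(dist[s] * qaly[s] for s in states)
        # Advance the cohort one cycle.
        dist = {s2: sum(dist[s1] * trans[s1].get(s2, 0.0) for s1 in states)
                for s2 in states}
    return tot_cost, tot_qaly
```

Running the same model under AI-DM and cDM parameter sets and differencing the outputs yields the incremental cost and QALY figures a dominance claim rests on.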

Divide and Conquer: A Large-Scale Dataset and Model for Left-Right Breast MRI Segmentation

Maximilian Rokuss, Benjamin Hamm, Yannick Kirchhoff, Klaus Maier-Hein

arxiv preprint · Jul 18 2025
We introduce the first publicly available breast MRI dataset with explicit left and right breast segmentation labels, encompassing more than 13,000 annotated cases. Alongside this dataset, we provide a robust deep-learning model trained for left-right breast segmentation. This work addresses a critical gap in breast MRI analysis and offers a valuable resource for the development of advanced tools in women's health. The dataset and trained model are publicly available at: www.github.com/MIC-DKFZ/BreastDivider

Development of a clinical decision support system for breast cancer detection using ensemble deep learning.

Sandhu JK, Sharma C, Kaur A, Pandey SK, Sinha A, Shreyas J

pubmed · Jul 18 2025
Advancements in diagnostic technology are required to improve patient outcomes and facilitate early diagnosis, as breast cancer is a substantial global health concern. This research presents an Ensemble Deep Learning-based Clinical Decision Support System (EDL-CDSS) that enables precise and expeditious diagnosis of breast cancer. The proposed EDL-CDSS combines numerous deep learning (DL) models into an ensemble that optimizes the advantages and reduces the disadvantages of the individual techniques. By incorporating the Kernel Extreme Learning Machine (KELM), Deep Belief Network (DBN), and other DL architectures, the system improves its capacity to extract intricate patterns and features from medical imaging data. Comprehensive testing was conducted across various datasets to assess the efficacy of this system in comparison to individual DL models and traditional diagnostic methods. The evaluation prioritizes precision, sensitivity, specificity, F1-score, and overall accuracy to mitigate false positives and negatives. The experiments demonstrate a remarkable accuracy of 96.14%, exceeding prior advanced methodologies.
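The abstract does not state how the ensemble members' outputs are combined; weighted soft voting is one common choice, sketched here as an assumption rather than the paper's fusion rule:

```python
def soft_vote(prob_lists, weights=None):
    """Weighted soft voting over per-model class probabilities.
    prob_lists: one probability vector per ensemble member (e.g. KELM, DBN).
    Returns (predicted class index, fused probability vector)."""
    n = len(prob_lists)
    weights = weights or [1.0 / n] * n
    n_classes = len(prob_lists[0])
    fused = [sum(w * p[c] for w, p in zip(weights, prob_lists))
             for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: fused[c]), fused
```

Unequal weights let stronger members (e.g. by validation accuracy) dominate, which is one way an ensemble "optimizes the advantages" of its components.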

Establishment of an interpretable MRI radiomics-based machine learning model capable of predicting axillary lymph node metastasis in invasive breast cancer.

Zhang D, Shen M, Zhang L, He X, Huang X

pubmed · Jul 18 2025
This study sought to develop a radiomics model capable of predicting axillary lymph node metastasis (ALNM) in patients with invasive breast cancer (IBC) based on dual-sequence magnetic resonance imaging (MRI) comprising diffusion-weighted imaging (DWI) and dynamic contrast-enhanced (DCE) data. The interpretability of the resultant model was probed with the SHAP (Shapley Additive Explanations) method. Established inclusion/exclusion criteria were used to retrospectively compile MRI and matching clinical data from 183 patients with pathologically confirmed IBC evaluated at our hospital between June 2021 and December 2023. All of these patients had undergone plain and enhanced MRI scans prior to treatment. Based on their pathological biopsy results, these patients were separated into those with ALNM (n = 107) and those without ALNM (n = 76), and then randomized into training (n = 128) and testing (n = 55) cohorts at a 7:3 ratio. Optimal radiomics features were selected from the extracted data. The random forest method was used to establish three predictive models (DWI, DCE, and combined DWI + DCE sequence models). Area under the curve (AUC) values for receiver operating characteristic (ROC) curves were utilized to assess model performance, and the DeLong test was utilized to compare model predictive efficacy. Model discrimination was assessed with the integrated discrimination improvement (IDI) method, and decision curves revealed the net clinical benefit of each model. The SHAP method was used to interpret the best-performing model. Clinicopathological characteristics (age, menopausal status, molecular subtypes, and estrogen receptor, progesterone receptor, human epidermal growth factor receptor 2, and Ki-67 status) were comparable when comparing the ALNM and non-ALNM groups as well as the training and testing cohorts (P > 0.05).
AUC values for the DWI, DCE, and combined models in the training cohort were 0.793, 0.774, and 0.864, respectively, with corresponding values of 0.728, 0.760, and 0.859 in the testing cohort. By the DeLong test, the predictive efficacy of the combined model differed significantly from that of both the DWI and DCE models in the training cohort (P < 0.05), while no other significant differences in model performance were noted (P > 0.05). IDI results indicated that the combined model offered predictive power 13.5% (P < 0.05) and 10.2% (P < 0.05) higher than that of the DWI and DCE models, respectively. In a decision curve analysis, the combined model offered a net clinical benefit over the DCE model. The combined dual-sequence MRI-based radiomics model constructed herein, together with the supporting interpretability analyses, can aid in predicting the ALNM status of IBC patients and help guide clinical decision-making in these cases.
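The two headline metrics in this abstract, AUC and IDI, have simple closed forms. A pure-Python sketch of both (standard definitions, not the study's code):

```python
def roc_auc(labels, scores):
    """Rank-based AUC: probability that a positive case outranks a
    negative one, with ties counted as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def idi(labels, p_new, p_old):
    """Integrated discrimination improvement: gain in mean predicted-risk
    separation between events and non-events under the new model."""
    def separation(probs):
        ev = [q for y, q in zip(labels, probs) if y == 1]
        ne = [q for y, q in zip(labels, probs) if y == 0]
        return sum(ev) / len(ev) - sum(ne) / len(ne)
    return separation(p_new) - separation(p_old)
```

A positive IDI, as reported for the combined model over the single-sequence models, means the new model pushes predicted risks for metastatic and non-metastatic cases further apart on average.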