
Region-of-Interest Augmentation for Mammography Classification under Patient-Level Cross-Validation

Farbod Bigdeli, Mohsen Mohammadagha, Ali Bigdeli

arXiv preprint · Sep 24 2025
Breast cancer screening with mammography remains central to early detection and mortality reduction. Deep learning has shown strong potential for automating mammogram interpretation, yet limited-resolution datasets and small sample sizes continue to restrict performance. We revisit the Mini-DDSM dataset (9,684 images; 2,414 patients) and introduce a lightweight region-of-interest (ROI) augmentation strategy. During training, full images are probabilistically replaced with random ROI crops sampled from a precomputed, label-free bounding-box bank, with optional jitter to increase variability. We evaluate under strict patient-level cross-validation and report ROC-AUC, PR-AUC, and training-time efficiency metrics (throughput and GPU memory). Because ROI augmentation is training-only, inference-time cost remains unchanged. On Mini-DDSM, ROI augmentation (best: p_roi = 0.10, alpha = 0.10) yields modest average ROC-AUC gains, with performance varying across folds; PR-AUC is flat to slightly lower. These results demonstrate that simple, data-centric ROI strategies can enhance mammography classification in constrained settings without requiring additional labels or architectural modifications.
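
The abstract fully specifies the augmentation's control knobs (p_roi and alpha), so a minimal sketch may help make the mechanics concrete. The snippet below is our illustration, not the authors' code: it assumes NumPy image arrays and a precomputed list of (x0, y0, x1, y1) boxes, and the function and argument names are ours.

```python
import random
import numpy as np

def roi_augment(image, bbox_bank, p_roi=0.10, alpha=0.10, rng=random):
    """Training-only ROI augmentation: with probability p_roi, replace the
    full mammogram with a random crop taken from a precomputed, label-free
    bounding-box bank; alpha controls the relative jitter of the box corners.
    (Illustrative sketch; names and defaults are assumptions.)"""
    if rng.random() >= p_roi or not bbox_bank:
        return image                          # keep the full image most of the time
    x0, y0, x1, y1 = rng.choice(bbox_bank)
    h, w = image.shape[:2]
    jx = int(alpha * (x1 - x0))               # jitter proportional to box size
    jy = int(alpha * (y1 - y0))
    if jx:
        x0 = int(np.clip(x0 + rng.randint(-jx, jx), 0, w - 2))
        x1 = int(np.clip(x1 + rng.randint(-jx, jx), x0 + 1, w))
    if jy:
        y0 = int(np.clip(y0 + rng.randint(-jy, jy), 0, h - 2))
        y1 = int(np.clip(y1 + rng.randint(-jy, jy), y0 + 1, h))
    return image[y0:y1, x0:x1]
```

Because the replacement happens only inside the training loader, inference-time cost is unchanged, which matches the training-only claim in the abstract.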

Deep learning and radiomics integration of photoacoustic/ultrasound imaging for non-invasive prediction of luminal and non-luminal breast cancer subtypes.

Wang M, Mo S, Li G, Zheng J, Wu H, Tian H, Chen J, Tang S, Chen Z, Xu J, Huang Z, Dong F

PubMed · Sep 24 2025
This study aimed to develop a Deep Learning Radiomics integrated model (DLRN), which combines photoacoustic/ultrasound (PA/US) imaging with clinical and radiomics features to distinguish between luminal and non-luminal breast cancer (BC) in a preoperative setting. A total of 388 BC patients were included, with 271 in the training group and 117 in the testing group. Radiomics and deep learning features were extracted from PA/US images using Pyradiomics and ResNet50, respectively. Feature selection was performed using independent-sample t-tests, Pearson correlation analysis, and LASSO regression to build a Deep Learning Radiomics (DLR) model. Based on the results of univariate and multivariate logistic regression analyses, the DLR model was combined with valuable clinical features to construct the DLRN model. Model efficacy was assessed using AUC, accuracy, sensitivity, specificity, and negative predictive value (NPV). The DLR model comprised 3 radiomic features and 6 deep learning features, which, when combined with significant clinical predictors, formed the DLRN model. In the testing set, the AUC of the DLRN model (0.924 [0.877-0.972]) was higher than that of the DLR (AUC 0.847 [0.758-0.936], p = 0.026), DL (AUC 0.822 [0.725-0.919], p = 0.06), Rad (AUC 0.717 [0.597-0.838], p < 0.001), and clinical (AUC 0.820 [0.745-0.895], p = 0.002) models. These findings indicate that the DLRN model (integrated model) exhibited the most favorable predictive performance among all models evaluated. The DLRN model effectively integrates PA/US imaging with clinical data, showing potential for preoperative molecular subtype prediction and guiding personalized treatment strategies for BC patients.
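
As a rough illustration of the feature-fusion pipeline the abstract describes (t-test screening, LASSO selection over pooled radiomic and deep features, then a logistic model that adds clinical predictors), here is a scikit-learn sketch; the selection thresholds and model settings below are assumptions, not the authors' configuration.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.preprocessing import StandardScaler

def select_and_fuse(radiomic, deep, clinical, y):
    """Sketch of a DLR -> DLRN pipeline: t-test screening, LASSO selection
    over the pooled radiomic + deep features, then a logistic model that adds
    the clinical predictors. y is a 0/1 NumPy array."""
    X = StandardScaler().fit_transform(np.hstack([radiomic, deep]))
    # univariate screen: keep features that differ between classes (p < 0.05)
    keep = [j for j in range(X.shape[1])
            if ttest_ind(X[y == 0, j], X[y == 1, j]).pvalue < 0.05]
    X = X[:, keep]
    # LASSO retains a sparse subset; its linear combination acts as the DLR signature
    lasso = LassoCV(cv=5).fit(X, y)
    nonzero = lasso.coef_ != 0
    signature = X[:, nonzero] @ lasso.coef_[nonzero]
    # DLRN: DLR signature plus clinical features in one logistic model
    dlrn = LogisticRegression(max_iter=1000)
    dlrn.fit(np.column_stack([signature, clinical]), y)
    return dlrn
```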

A Contrastive Learning Framework for Breast Cancer Detection

Samia Saeed, Khuram Naveed

arXiv preprint · Sep 24 2025
Breast cancer, the second leading cause of cancer-related deaths globally, accounts for a quarter of all cancer cases [1]. To lower this death rate, it is crucial to detect tumors early, as early-stage detection significantly improves treatment outcomes. Advances in non-invasive imaging techniques have made early detection possible through computer-aided detection (CAD) systems, which rely on traditional image analysis to identify malignancies. However, there is a growing shift towards deep learning methods due to their superior effectiveness. Despite their potential, deep learning methods often struggle with accuracy due to the limited availability of large labeled datasets for training. To address this issue, our study introduces a Contrastive Learning (CL) framework, which excels with smaller labeled datasets. We train ResNet-50 in a semi-supervised CL setting using a similarity index on a large amount of unlabeled mammogram data, applying various augmentations and transformations that improve the performance of our approach. Finally, we fine-tune our model on a small set of labeled data and outperform the existing state of the art, observing 96.7% accuracy in detecting breast cancer on the benchmark INbreast and MIAS datasets.
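
The abstract does not state the exact contrastive objective, only that a similarity index over unlabeled mammograms is used; a common choice for this kind of semi-supervised pretraining is the NT-Xent loss, sketched below in PyTorch as a generic stand-in rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Generic contrastive (NT-Xent) loss for two augmented views of the same
    images; z1 and z2 are (N, d) embeddings from the shared encoder.
    The paper's exact objective may differ."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)            # 2N x d
    sim = z @ z.t() / temperature                          # cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))                  # drop self-similarity
    # each view's positive is its counterpart in the other half of the batch
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```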

Radiomics integrated with machine and deep learning analysis of T2-weighted and arterial-phase T1-weighted Magnetic Resonance Imaging for non-invasive detection of metastatic axillary lymph nodes in breast cancer.

Fusco R, Granata V, Mattace Raso M, Simonetti I, Vallone P, Pupo D, Tovecci F, Iasevoli MAD, Maio F, Gargiulo P, Giannotti G, Pariante P, Simonelli S, Ferrara G, Siani C, Di Giacomo R, Setola SV, Petrillo A

PubMed · Sep 23 2025
To compare the diagnostic performance of radiomic features extracted from T2-weighted and arterial-phase T1-weighted MRI sequences using univariate, machine learning, and deep learning analyses, and to assess their effectiveness in predicting axillary lymph node (ALN) metastasis in breast cancer patients. We retrospectively analyzed MRI data from 100 breast cancer patients, comprising 52 metastatic and 103 non-metastatic lymph nodes. Radiomic features were extracted from T2-weighted and subtracted arterial-phase T1-weighted images. Feature normalization and selection were performed. Various machine learning classifiers, including logistic regression, gradient boosting, random forest, and neural networks, were trained and evaluated. Diagnostic performance was assessed using metrics such as area under the curve (AUC), sensitivity, specificity, and accuracy. T2-weighted imaging provided strong performance in multivariate modeling, with the neural network achieving the highest AUC (0.978) and accuracy (91.1%), showing statistically significant differences over the other models. The stepwise logistic regression model also showed competitive results (AUC = 0.796; accuracy = 73.3%). In contrast, arterial-phase T1-weighted imaging features performed better when analyzed individually, with the best univariate AUC reaching 0.787. When multivariate modeling was applied to arterial-phase features, the best-performing logistic regression model achieved an AUC of 0.853 and accuracy of 77.8%. Radiomic analysis of T2-weighted MRI, particularly through deep learning models such as neural networks, demonstrated the highest overall diagnostic performance for predicting metastatic ALNs. In contrast, arterial-phase T1-weighted features showed better results in univariate analysis. These findings support the integration of radiomic features, especially from T2-weighted sequences, into multivariate models to enhance noninvasive preoperative assessment.
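
The multivariate comparison described here (several classifiers trained on the same normalized radiomic features and ranked by AUC) maps naturally onto a scikit-learn loop; the sketch below is a generic reconstruction with placeholder hyperparameters, not the study's actual configuration.

```python
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def compare_classifiers(X, y, cv=5):
    """Rank several classifiers on the same radiomic feature matrix by
    cross-validated ROC-AUC (hyperparameters are illustrative placeholders)."""
    models = {
        "logistic":       LogisticRegression(max_iter=1000),
        "gradient_boost": GradientBoostingClassifier(),
        "random_forest":  RandomForestClassifier(n_estimators=500),
        "neural_network": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000),
    }
    scores = {}
    for name, model in models.items():
        pipe = make_pipeline(StandardScaler(), model)   # normalize, then classify
        scores[name] = cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc").mean()
    return scores
```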

Including AI in diffusion-weighted breast MRI has potential to increase reader confidence and reduce workload.

Bounias D, Simons L, Baumgartner M, Ehring C, Neher P, Kapsner LA, Kovacs B, Floca R, Jaeger PF, Eberle J, Hadler D, Laun FB, Ohlmeyer S, Maier-Hein L, Uder M, Wenkel E, Maier-Hein KH, Bickelhaupt S

PubMed · Sep 23 2025
Breast diffusion-weighted imaging (DWI) has shown potential as a standalone imaging technique for certain indications, e.g., supplemental screening of women with dense breasts. This study evaluates an artificial intelligence (AI)-powered computer-aided diagnosis (CAD) system for clinical interpretation and workload reduction in breast DWI. This retrospective, IRB-approved study included n = 824 examinations for model development (2017-2020) and n = 235 for evaluation (01/2021-06/2021). Readings were performed by three readers, either with the AI-CAD or manually. BI-RADS-like (Breast Imaging Reporting and Data System) classification was based on DWI. Histopathology served as ground truth. The model was nnDetection-based, trained using 5-fold cross-validation and ensembling. Statistical significance was determined using McNemar's test. Inter-rater agreement was calculated using Cohen's kappa. Model performance was calculated using the area under the receiver operating characteristic curve (AUC). The AI-augmented approach significantly reduced BI-RADS-like 3 calls in breast DWI by 29% (P = .019) and increased inter-rater agreement (0.57 ± 0.10 vs 0.49 ± 0.11), while preserving diagnostic accuracy. Two of the three readers detected more malignant lesions (63/69 vs 59/69 and 64/69 vs 62/69) with the AI-CAD. The AI model achieved an AUC of 0.78 (95% CI: [0.72, 0.85]; P < .001), which increased to 0.82 for women at screening age (95% CI: [0.73, 0.90]; P < .001), indicating a potential for workload reduction of 20.9% at 96% sensitivity. Breast DWI might benefit from AI support. In our study, AI showed potential for reducing BI-RADS-like 3 calls and increasing inter-rater agreement. However, given the limited study size, further research is needed.
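
The reported 20.9% workload reduction at 96% sensitivity corresponds to choosing a rule-out operating point on the AI score distribution. The sketch below shows one generic way to derive such a point from scores and ground-truth labels; it is an illustration of the idea, not the study's triage rule.

```python
import numpy as np

def workload_reduction_at_sensitivity(scores, labels, target_sens=0.96):
    """Pick the highest score threshold that still keeps sensitivity at or above
    target_sens, then report the fraction of examinations falling below it
    (i.e., exams the AI could triage away). Purely illustrative."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    best = None
    for t in np.sort(np.unique(scores)):
        sens = (scores[labels] >= t).mean()    # sensitivity at this cut-off
        if sens >= target_sens:
            best = t                           # keep raising the bar while safe
    if best is None:
        return 0.0
    return float((scores < best).mean())       # share of exams ruled out
```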

The LongiMam model for improved breast cancer risk prediction using longitudinal mammograms

Manel Rakez, Thomas Louis, Julien Guillaumin, Foucauld Chamming's, Pierre Fillard, Brice Amadeo, Virginie Rondeau

arXiv preprint · Sep 23 2025
Risk-adapted breast cancer screening requires robust models that leverage longitudinal imaging data. Most current deep learning models use single or limited prior mammograms and lack adaptation for real-world settings marked by imbalanced outcome distributions and heterogeneous follow-up. We developed LongiMam, an end-to-end deep learning model that integrates the current mammogram and up to four priors. LongiMam combines a convolutional and a recurrent neural network to capture spatial and temporal patterns predictive of breast cancer. The model was trained and evaluated using a large, population-based screening dataset with the disproportionate case-to-control ratio typical of clinical screening. Across several scenarios that varied the number and composition of prior exams, LongiMam consistently improved prediction when prior mammograms were included. Combining prior and current visits outperformed single-visit models, while priors alone performed less well, highlighting the importance of combining historical and recent information. Subgroup analyses confirmed the model's efficacy across key risk groups, including women with dense breasts and those aged 55 years or older. Moreover, the model performed best in women with observed changes in mammographic density over time. These findings demonstrate that longitudinal modeling enhances breast cancer prediction and support the use of repeated mammograms to refine risk stratification in screening programs. LongiMam is publicly available as open-source software.
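
LongiMam is released as open-source software, so the repository is the authoritative reference; purely as a reading aid, the following PyTorch sketch shows the generic CNN-plus-RNN pattern the abstract describes (a shared encoder per exam, a recurrent layer over the visit sequence). The backbone, hidden size, and names here are our assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class LongitudinalMammoNet(nn.Module):
    """Minimal CNN + RNN sketch: a shared image encoder embeds the current exam
    and up to four priors, and a GRU aggregates them in chronological order."""
    def __init__(self, hidden=256):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()              # 512-d embedding per exam
        self.encoder = backbone
        self.rnn = nn.GRU(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)         # cancer-risk logit

    def forward(self, exams):
        # exams: (B, T, 3, H, W); grayscale mammograms replicated to 3 channels
        b, t = exams.shape[:2]
        feats = self.encoder(exams.flatten(0, 1)).view(b, t, -1)
        _, last = self.rnn(feats)                # last hidden state summarizes the sequence
        return self.head(last.squeeze(0))
```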

Benign vs malignant tumors classification from tumor outlines in mammography scans using artificial intelligence techniques.

Beni HM, Asaei FY

PubMed · Sep 21 2025
Breast cancer is one of the leading causes of cancer-related death among women. With early diagnosis, the probability of survival increases. For this purpose, medical imaging methods, especially mammography, are used for screening and early diagnosis of breast abnormalities. The main goal of this study is to distinguish benign from malignant tumors based on morphological features computed from tumor outlines derived from mammography images. Unlike previous studies, this study does not use the mammographic image itself but only the exact outline of the tumor. These outlines were extracted from a new, publicly available mammography database published in 2024. Features of the outlines were extracted using well-known pre-trained Convolutional Neural Networks (CNNs), including VGG16, ResNet50, Xception65, AlexNet, DenseNet, GoogLeNet, Inception-v3, and combinations of them to improve performance. These pre-trained networks have been used in many studies across various fields. In the classification stage, known Machine Learning (ML) algorithms, such as Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Neural Network (NN), Naïve Bayes (NB), Decision Tree (DT), and combinations of them, were compared on accuracy, specificity, sensitivity, and precision. With data augmentation, the dataset size was increased about 6-8 times, and 5-fold cross-validation was used. Based on the performed simulations, combining the features from all pre-trained deep networks with the NB classifier produced the best outcomes: 88.13% accuracy, 92.52% specificity, 83.73% sensitivity, and 92.04% precision. Furthermore, validation on the DMID dataset using ResNet50 features with the NB classifier led to 92.03% accuracy, 95.57% specificity, 88.49% sensitivity, and 95.23% precision. This study sheds light on using AI algorithms to reduce the need for biopsies and speed up breast cancer tumor classification using tumor outlines in mammographic images.
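
The best-performing setup reported above (features from several pretrained CNNs concatenated and classified with Naive Bayes under 5-fold cross-validation) can be expressed compactly with scikit-learn; the sketch below assumes the per-backbone feature matrices have already been extracted and only illustrates the evaluation step.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def evaluate_outline_features(feature_blocks, y, cv=5):
    """Concatenate deep features extracted from the tumor outlines by several
    pretrained CNNs (one block per backbone) and score a Gaussian Naive Bayes
    classifier under k-fold cross-validation. Feature extraction not shown."""
    X = np.hstack(feature_blocks)
    clf = make_pipeline(StandardScaler(), GaussianNB())
    return cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
```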

Predictive Analysis of Neoadjuvant Chemotherapy Efficacy in Breast Cancer Using Multi-Region Ultrasound Imaging Features Combined With Pathological Parameters.

Wei C, Jia Y, Gu Y, He Z, Nie F

PubMed · Sep 20 2025
This study aimed to analyze the correlation between the ultrasonographic radiomic features of multiple regions within and surrounding the primary tumor in breast cancer patients prior to receiving neoadjuvant chemotherapy (NAC) and the efficacy of NAC. By integrating clinical and pathological parameters, a predictive model was constructed to provide an accurate basis for personalized treatment and precise prognosis in breast cancer patients. This retrospective study included 321 breast cancer patients who underwent NAC at the Second Hospital of Lanzhou University from January 2019 to December 2024. According to post-operative pathological results, the patients were divided into pathological complete response (PCR) and non-pathological complete response (non-PCR) groups. Regions of interest were outlined on 2-D ultrasound images using ITK-SNAP software. The intra-tumoral (Intra) region and the 5 mm (Peri-5 mm), 10 mm (Peri-10 mm) and 15 mm (Peri-15 mm) peri-tumoral regions were demarcated, with radiomics features extracted from each region. Patients were randomly divided into a training set (n = 224) and a validation set (n = 97) in a 7:3 ratio. All features underwent Z-score normalization followed by dimensionality reduction using t-tests, Pearson correlation coefficients and least absolute shrinkage and selection operator (LASSO) regression. Radiomics models for the Intra, Peri-5 mm, Peri-10 mm, Peri-15 mm and combined intra-tumoral and peri-tumoral (IntraPeri) regions were constructed using a random forest machine-learning classifier. The predictive performance of the models was assessed by plotting receiver operating characteristic curves and calculating the area under the curve (AUC). Additionally, calibration curves and decision curve analysis were used to evaluate the models' goodness of fit and clinical net benefit. A total of 214 radiomics features were extracted from the intra-tumoral and multi-region peri-tumoral areas. Using the LASSO regression model, eight intra-tumoral radiomics features, eight Peri-10 mm radiomics features and nine IntraPeri-10 mm radiomics features were selected as being closely associated with PCR. The AUC of the intra-tumoral model was 0.860 and 0.823 in the training and validation sets, respectively. The AUCs of the Peri-5 mm, Peri-10 mm and Peri-15 mm models were 0.836, 0.854 and 0.822 in the training set, and 0.793, 0.799 and 0.792 in the validation set. Among them, the AUC of the IntraPeri-10 mm model in the validation set was 0.842 (95% confidence interval [CI]: 0.764-0.921), which was superior to that of the IntraPeri-5 mm model (0.831; 95% CI: 0.758-0.914) and the IntraPeri-15 mm model (0.838; 95% CI: 0.761-0.917). The combined model based on IntraPeri-10 mm features and clinical pathological parameters (HER-2, Ki-67) achieved an AUC of 0.869 (95% CI: 0.800-0.937). The DeLong test showed that the AUC of the combined model was significantly superior to that of the other models. The calibration curve indicated that the combined model had a good fit, and decision curve analysis demonstrated that it provided a better clinical net benefit. The Peri-10 mm region is the optimal peri-tumoral area for predicting response to NAC in breast cancer. The IntraPeri-10 mm model incorporating clinical pathological parameters performs better at predicting the efficacy of NAC in breast cancer and can accurately assess treatment response, offering valuable guidance for subsequent treatment decisions.
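
A concrete way to obtain the Peri-5/10/15 mm regions described above is to dilate the intra-tumoral mask by each margin and subtract the tumor; the snippet below sketches that idea with SciPy morphology on a 2-D mask. The structuring element, spacing handling, and function names are our assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def peritumoral_rings(tumor_mask, pixel_spacing_mm, margins_mm=(5, 10, 15)):
    """Derive peri-tumoral rings from an intra-tumoral mask: dilate the tumor
    by each margin (converted from mm to pixels) and subtract the tumor to
    obtain the surrounding ring. Illustrative sketch for 2-D ultrasound masks."""
    tumor = tumor_mask.astype(bool)
    rings = {}
    for margin in margins_mm:
        radius = max(1, int(round(margin / pixel_spacing_mm)))
        dilated = binary_dilation(tumor, iterations=radius)
        rings[f"peri_{margin}mm"] = dilated & ~tumor
    return rings
```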

Uncertainty-Gated Deformable Network for Breast Tumor Segmentation in MR Images

Yue Zhang, Jiahua Dong, Chengtao Peng, Qiuli Wang, Dan Song, Guiduo Duan

arXiv preprint · Sep 19 2025
Accurate segmentation of breast tumors in magnetic resonance images (MRI) is essential for breast cancer diagnosis, yet existing methods face challenges in capturing irregular tumor shapes and effectively integrating local and global features. To address these limitations, we propose an uncertainty-gated deformable network that leverages the complementary information from CNNs and Transformers. Specifically, we incorporate deformable feature modeling into both convolution and attention modules, enabling adaptive receptive fields for irregular tumor contours. We also design an Uncertainty-Gated Enhancing Module (U-GEM) to selectively exchange complementary features between the CNN and Transformer branches based on pixel-wise uncertainty, enhancing both local and global representations. Additionally, a boundary-sensitive deep supervision loss is introduced to further improve tumor boundary delineation. Comprehensive experiments on two clinical breast MRI datasets demonstrate that our method achieves superior segmentation performance compared with state-of-the-art methods, highlighting its clinical potential for accurate breast tumor delineation.
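
The abstract gives only a high-level description of U-GEM, so the following PyTorch fragment is a speculative reading of "uncertainty-gated feature exchange": pixel-wise entropy of an auxiliary prediction gates how much each branch borrows from the other. It illustrates the concept and does not reproduce the authors' module.

```python
import torch

def uncertainty_gated_fusion(cnn_feat, trans_feat, aux_logits):
    """Uncertain pixels take more of the complementary branch.
    cnn_feat, trans_feat: (B, C, H, W); aux_logits: (B, 1, H, W) auxiliary prediction."""
    prob = torch.sigmoid(aux_logits)
    entropy = -(prob * torch.log(prob + 1e-6)
                + (1 - prob) * torch.log(1 - prob + 1e-6))   # pixel-wise uncertainty
    gate = entropy / entropy.amax(dim=(2, 3), keepdim=True).clamp_min(1e-6)
    fused_cnn = cnn_feat + gate * trans_feat                 # borrow from Transformer
    fused_trans = trans_feat + gate * cnn_feat               # borrow from CNN
    return fused_cnn, fused_trans
```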

Influence of Mammography Acquisition Parameters on AI and Radiologist Interpretive Performance.

Lotter W, Hippe DS, Oshiro T, Lowry KP, Milch HS, Miglioretti DL, Elmore JG, Lee CI, Hsu W

PubMed · Sep 17 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content</i>. Purpose To evaluate the impact of screening mammography acquisition parameters on the interpretive performance of AI and radiologists. Materials and Methods The associations between seven mammogram acquisition parameters-mammography machine version, kVp, x-ray exposure delivered, relative x-ray exposure, paddle size, compression force, and breast thickness-and AI and radiologist performance in interpreting two-dimensional screening mammograms acquired by a diverse health system between December 2010 and 2019 were retrospectively evaluated. The top 11 AI models and the ensemble model from the Digital Mammography DREAM Challenge were assessed. The associations between each acquisition parameter and the sensitivity and specificity of the AI models and the radiologists' interpretations were separately evaluated using generalized estimating equations-based models at the examination level, adjusted for several clinical factors. Results The dataset included 28,278 screening two-dimensional mammograms from 22,626 women (mean age 58.5 years ± 11.5 [SD]; 4913 women had multiple mammograms). Of these, 324 examinations resulted in breast cancer diagnosis within 1 year. The acquisition parameters were significantly associated with the performance of both AI and radiologists, with absolute effect sizes reaching 10% for sensitivity and 5% for specificity; however, the associations differed between AI and radiologists for several parameters. Increased exposure delivered reduced the specificity for the ensemble AI (-4.5% per 1 SD increase; <i>P</i> < .001) but not radiologists (<i>P</i> = .44). Increased compression force reduced the specificity for radiologists (-1.3% per 1 SD increase; <i>P</i> < .001) but not for AI (<i>P</i> = .60). Conclusion Screening mammography acquisition parameters impacted the performance of both AI and radiologists, with some parameters impacting performance differently. ©RSNA, 2025.