Global mapping of artificial intelligence applications in breast cancer from 1988-2024: a machine learning approach.

Nguyen THT, Jeon S, Yoon J, Park B

PubMed paper · Sep 29, 2025
Artificial intelligence (AI) has become increasingly integral to various aspects of breast cancer care, including screening, diagnosis, and treatment. This study aimed to critically examine the application of AI throughout the breast cancer care continuum to elucidate key research developments, emerging trends, and prevalent patterns. English articles and reviews published between 1988 and 2024 were retrieved from the Web of Science database, focusing on studies that applied AI in breast cancer research. Collaboration among countries was analyzed using co-authorship networks and co-occurrence mapping. Additionally, clustering analysis using Latent Dirichlet Allocation (LDA) was conducted for topic modeling, whereas linear regression was employed to assess trends in research outputs over time. A total of 8,711 publications were included in the analysis. The United States has led the research in applying AI to the breast cancer care continuum, followed by China and India. Recent publications have increasingly focused on the utilization of deep learning and machine learning (ML) algorithms for automated breast cancer detection in mammography and histopathology. Moreover, the integration of multi-omics data and molecular profiling with AI has emerged as a significant trend. However, research on the applications of robotic and ML technologies in surgical oncology and postoperative care remains limited. Overall, the volume of research addressing AI for early detection, diagnosis, and classification of breast cancer has markedly increased over the past five years. The rapid expansion of AI-related research on breast cancer underscores its potential impact. However, significant challenges remain. Ongoing rigorous investigations are essential to ensure that AI technologies yield evidence-based benefits across diverse patient populations, thereby avoiding the inadvertent exacerbation of existing healthcare disparities.
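The topic-modelling step described above maps naturally to a standard LDA pipeline. Below is a minimal sketch using scikit-learn, assuming abstracts are available as plain strings; the sample texts and the number of topics are placeholders, since the abstract does not specify the LDA configuration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder corpus: one string per publication abstract
abstracts = [
    "deep learning for mammography screening and detection",
    "radiomics features predict neoadjuvant chemotherapy response",
    "multi-omics integration with machine learning for prognosis",
    "convolutional networks for histopathology classification",
]

# Bag-of-words representation of the corpus
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)

# Fit LDA; the number of topics here is arbitrary for illustration
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(X)  # per-document topic mixtures

# Inspect the top terms that define each topic
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"Topic {k}: {', '.join(top)}")
```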

Prediction of neoadjuvant chemotherapy efficacy in patients with HER2-low breast cancer based on ultrasound radiomics.

Peng Q, Ji Z, Xu N, Dong Z, Zhang T, Ding M, Qu L, Liu Y, Xie J, Jin F, Chen B, Song J, Zheng A

PubMed paper · Sep 26, 2025
Neoadjuvant chemotherapy (NAC) is a crucial therapeutic approach for treating breast cancer, yet accurately predicting treatment response remains a significant clinical challenge. Conventional ultrasound plays a vital role in assessing tumor morphology but cannot quantitatively capture intratumoral heterogeneity. Ultrasound radiomics, which extracts high-throughput quantitative imaging features, offers a novel approach to enhance NAC response prediction. This study aims to evaluate the predictive efficacy of ultrasound radiomics models based on pre-treatment, post-treatment, and combined imaging features for assessing the NAC response in patients with HER2-low breast cancer. This retrospective multicenter study included 359 patients with HER2-low breast cancer who underwent NAC between January 1, 2016, and December 31, 2020. A total of 488 radiomic features were extracted from pre- and post-treatment ultrasound images. Feature selection was conducted in two stages: first, Pearson correlation analysis (threshold: 0.65) was applied to remove highly correlated features and reduce redundancy; then, Recursive Feature Elimination with Cross-Validation (RFECV) was employed to identify the optimal feature subset for model construction. The dataset was divided into a training set (244 patients) and an external validation set (115 patients from independent centers). Model performance was assessed via the area under the receiver operating characteristic curve (AUC), accuracy, precision, recall, and F1 score. Three models were initially developed: (1) a pre-treatment model (AUC = 0.716), (2) a post-treatment model (AUC = 0.772), and (3) a combined pre- and post-treatment model (AUC = 0.762). After RFECV-based feature selection, optimized models with reduced feature sets were obtained: (1) the pre-treatment model (AUC = 0.746), (2) the post-treatment model (AUC = 0.712), and (3) the combined model (AUC = 0.759). Ultrasound radiomics is a non-invasive and promising approach for predicting response to neoadjuvant chemotherapy in HER2-low breast cancer. The pre-treatment model yielded reliable performance after feature selection. While the combined model did not substantially enhance predictive accuracy, its stable performance suggests that longitudinal ultrasound imaging may help capture treatment-induced phenotypic changes. These findings offer preliminary support for individualized therapeutic decision-making.
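For readers who want to reproduce the two-stage selection described in this abstract (a Pearson correlation filter at 0.65 followed by RFECV), a minimal scikit-learn sketch follows; the feature matrix, labels, and base estimator are assumptions, as the abstract does not name the classifier used.

```python
import numpy as np
import pandas as pd
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(244, 488)))  # 488 radiomic features (synthetic)
y = rng.integers(0, 2, size=244)               # NAC response labels (synthetic)

# Stage 1: drop one feature from every pair with |Pearson r| > 0.65
corr = X.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
drop = [c for c in upper.columns if (upper[c] > 0.65).any()]
X_filtered = X.drop(columns=drop)

# Stage 2: recursive feature elimination with cross-validation
selector = RFECV(
    estimator=LogisticRegression(max_iter=1000),
    step=10,                      # remove 10 features per iteration to keep this tractable
    cv=StratifiedKFold(5),
    scoring="roc_auc",
)
selector.fit(X_filtered, y)
print("Selected features:", selector.n_features_)
```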

Automated deep learning method for whole-breast segmentation in contrast-free quantitative MRI.

Gao W, Zhang Y, Gao B, Xia Y, Liang W, Yang Q, Shi F, He T, Han G, Li X, Su X, Zhang Y

PubMed paper · Sep 26, 2025
To develop a deep learning segmentation method utilizing the nnU-Net architecture for fully automated whole-breast segmentation based on diffusion-weighted imaging (DWI) and synthetic MRI (SyMRI) images. A total of 98 patients with 196 breasts were evaluated. All patients underwent 3.0T magnetic resonance (MR) examinations, which incorporated DWI and SyMRI techniques. The ground truth for breast segmentation was established through a manual, slice-by-slice approach performed by two experienced radiologists. The U-Net and nnU-Net deep learning algorithms were employed to segment the whole breast. Performance was evaluated using various metrics, including the Dice Similarity Coefficient (DSC), accuracy, and Pearson's correlation coefficient. For DWI and proton density (PD) images from SyMRI, the nnU-Net outperformed the U-Net, achieving higher DSCs in both the testing set (DWI, 0.930 ± 0.029 vs. 0.785 ± 0.161; PD, 0.969 ± 0.010 vs. 0.936 ± 0.018) and the independent testing set (DWI, 0.953 ± 0.019 vs. 0.789 ± 0.148; PD, 0.976 ± 0.008 vs. 0.939 ± 0.018). The PD images exhibited better performance than DWI, attaining the highest DSC and accuracy. For nnU-Net, correlation coefficients (R²) ranged from 0.99 to 1.00 for both DWI and PD, significantly surpassing those of the U-Net. The nnU-Net exhibited exceptional segmentation performance for fully automated breast segmentation of contrast-free quantitative images. This method serves as an effective tool for processing large-scale clinical datasets and represents a significant advancement toward computer-aided quantitative analysis of breast DWI and SyMRI images.
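The Dice Similarity Coefficient that drives the comparison above is straightforward to compute; a minimal NumPy sketch, assuming binary masks, follows.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

# Toy example: two partially overlapping 2-D masks
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=int); b[1:3, 2:4] = 1
print(dice_coefficient(a, b))  # 2*2 / (4 + 4) = 0.5
```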

AI-driven MRI biomarker for triple-class HER2 expression classification in breast cancer: a large-scale multicenter study.

Wong C, Yang Q, Liang Y, Wei Z, Dai Y, Xu Z, Chen X, Du S, Han C, Liang C, Zhang L, Liu Z, Wang Y, Shi Z

PubMed paper · Sep 26, 2025
Accurate classification of human epidermal growth factor receptor 2 (HER2) expression is crucial for guiding treatment in breast cancer, especially with emerging therapies like trastuzumab deruxtecan (T-DXd) for HER2-low patients. Current gold-standard methods relying on invasive biopsy and immunohistochemistry suffer from sampling bias and interobserver variability, highlighting the need for reliable non-invasive alternatives. We developed an artificial intelligence framework that integrates a pretrained foundation model with a task-specific classifier to predict HER2 expression categories (HER2-zero, HER2-low, HER2-positive) directly from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The model was trained and validated using multicenter datasets. Model interpretability was assessed through feature visualization using t-SNE and UMAP dimensionality reduction techniques, complemented by SHAP analysis for post-hoc interpretation of critical predictive imaging features. The developed model demonstrated robust performance across datasets, achieving micro-average AUCs of 0.821 (95% CI 0.795–0.846) and 0.835 (95% CI 0.797–0.864), and macro-average AUCs of 0.833 (95% CI 0.818–0.847) and 0.857 (95% CI 0.837–0.872) in external validation. Subgroup analysis demonstrated strong discriminative power in distinguishing HER2 categories, particularly HER2-zero and HER2-low cases. Visualization techniques revealed distinct, biologically plausible clustering patterns corresponding to HER2 expression categories. This study presents a reproducible, non-invasive solution for comprehensive HER2 phenotyping using DCE-MRI, addressing fundamental limitations of biopsy-dependent assessment. Our approach enables accurate identification of HER2-low patients who may benefit from novel therapies like T-DXd. This framework represents a significant advancement in precision oncology, with potential to transform diagnostic workflows and guide targeted therapy selection in breast cancer care. The online version contains supplementary material available at 10.1186/s13058-025-02118-2.
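The micro- and macro-average AUCs quoted above generalize binary AUC to the three HER2 categories. A minimal sketch with scikit-learn, using synthetic labels and probabilities in place of the study's data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

classes = [0, 1, 2]  # HER2-zero, HER2-low, HER2-positive
y_true = np.array([0, 1, 2, 1, 0, 2, 1, 0])
# Predicted class probabilities from the model (rows sum to 1)
y_score = np.array([
    [0.7, 0.2, 0.1], [0.2, 0.6, 0.2], [0.1, 0.2, 0.7], [0.3, 0.5, 0.2],
    [0.6, 0.3, 0.1], [0.2, 0.2, 0.6], [0.4, 0.4, 0.2], [0.5, 0.3, 0.2],
])

# One-hot encode the labels, then pool (micro) or average per class (macro)
y_bin = label_binarize(y_true, classes=classes)
micro_auc = roc_auc_score(y_bin, y_score, average="micro")
macro_auc = roc_auc_score(y_bin, y_score, average="macro")
print(f"micro AUC = {micro_auc:.3f}, macro AUC = {macro_auc:.3f}")
```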

The Evolution and Clinical Impact of Deep Learning Technologies in Breast MRI.

Fujioka T, Fujita S, Ueda D, Ito R, Kawamura M, Fushimi Y, Tsuboyama T, Yanagawa M, Yamada A, Tatsugami F, Kamagata K, Nozaki T, Matsui Y, Fujima N, Hirata K, Nakaura T, Tateishi U, Naganawa S

PubMed paper · Sep 26, 2025
The integration of deep learning (DL) in breast MRI has revolutionized the field of medical imaging, notably enhancing diagnostic accuracy and efficiency. This review discusses the substantial influence of DL technologies across various facets of breast MRI, including image reconstruction, classification, object detection, segmentation, and prediction of clinical outcomes such as response to neoadjuvant chemotherapy and recurrence of breast cancer. Utilizing sophisticated models such as convolutional neural networks, recurrent neural networks, and generative adversarial networks, DL has improved image quality and precision, enabling more accurate differentiation between benign and malignant lesions and providing deeper insights into disease behavior and treatment responses. DL's predictive capabilities for patient-specific outcomes also suggest potential for more personalized treatment strategies. The advancements in DL are pioneering a new era in breast cancer diagnostics, promising more personalized and effective healthcare solutions. Nonetheless, the integration of this technology into clinical practice faces challenges, necessitating further research, validation, and development of legal and ethical frameworks to fully leverage its potential.

Machine and Deep Learning applied to Medical Microwave Imaging: a Scoping Review from Reconstruction to Classification.

Silva T, Conceicao RC, Godinho DM

PubMed paper · Sep 25, 2025
Microwave Imaging (MWI) is a promising modality due to its noninvasive nature and lower cost compared to other medical imaging techniques. These characteristics make it a potential alternative to traditional imaging techniques. It has various medical applications, most notably in breast and brain imaging. Machine Learning (ML) has also been increasingly used for medical applications. This paper provides a scoping review of the role of ML in MWI, focusing on two key areas: image reconstruction and classification. The reconstruction section discusses various ML algorithms used to enhance image quality and computational efficiency, highlighting methods such as Convolutional Neural Networks (CNNs) and Support Vector Machines (SVMs). The classification section delves into the application of ML for distinguishing between different tissue types, including applications in breast cancer detection and neurological disorder classification. By analyzing the latest studies and methodologies, this review presents the current state of ML-enhanced MWI and sheds light on its potential for clinical applications.

Deep learning powered breast ultrasound to improve characterization of breast masses: a prospective study.

Singla V, Garg D, Negi S, Mehta N, Pallavi T, Choudhary S, Dhiman A

PubMed paper · Sep 25, 2025
Background: The diagnostic performance of ultrasound (US) is heavily reliant on the operator's expertise. Advances in artificial intelligence (AI) have introduced deep learning (DL) tools that detect morphology beyond human perception, providing automated interpretations. Purpose: To evaluate Smart-Detect (S-Detect), a DL tool, for its potential to enhance diagnostic precision and standardize US assessments among radiologists with varying levels of experience. Material and Methods: This prospective observational study was conducted between May and November 2024. US and S-Detect analyses were performed by a breast imaging fellow. Images were independently analyzed by five radiologists with varying experience in breast imaging (<1 year to 15 years). Each radiologist assessed the images twice: without and with S-Detect. ROC analyses compared the diagnostic performance. True downgrades and upgrades were calculated to determine the biopsy reduction with AI assistance. Kappa statistics assessed radiologist agreement before and after incorporating S-Detect. Results: This study analyzed 230 breast masses from 216 patients. S-Detect demonstrated high specificity (92.7%), PPV (92.9%), NPV (87.9%), and accuracy (90.4%). It enhanced less experienced radiologists' performance, increasing sensitivity (85% to 93.33%), specificity (54.5% to 73.64%), and accuracy (70.43% to 83.91%; P < 0.001). AUC increased significantly for the less experienced radiologists (0.698 to 0.835; P < 0.001), with no significant gains for the expert radiologist. S-Detect also reduced variability between radiologists, with kappa agreement rising from 0.459 to 0.696, and enabled true downgrades that reduced unnecessary biopsies. Conclusion: The DL tool improves diagnostic accuracy, bridges the expertise gap, reduces reliance on invasive procedures, and enhances consistency in clinical decisions among radiologists.
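The kappa statistic used above to quantify inter-reader agreement is chance-corrected and available directly in scikit-learn; a toy sketch with synthetic reader calls:

```python
from sklearn.metrics import cohen_kappa_score

# Synthetic benign/malignant calls from two readers on the same six masses
reader_a = ["benign", "malignant", "benign", "malignant", "benign", "benign"]
reader_b = ["benign", "malignant", "malignant", "malignant", "benign", "benign"]

# Cohen's kappa corrects raw agreement for agreement expected by chance
print(cohen_kappa_score(reader_a, reader_b))
```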

Mammo-CLIP Dissect: A Framework for Analysing Mammography Concepts in Vision-Language Models

Suaiba Amina Salahuddin, Teresa Dorszewski, Marit Almenning Martiniussen, Tone Hovda, Antonio Portaluri, Solveig Thrun, Michael Kampffmeyer, Elisabeth Wetzer, Kristoffer Wickstrøm, Robert Jenssen

arXiv preprint · Sep 25, 2025
Understanding what deep learning (DL) models learn is essential for the safe deployment of artificial intelligence (AI) in clinical settings. While previous work has focused on pixel-based explainability methods, less attention has been paid to the textual concepts learned by these models, which may better reflect the reasoning used by clinicians. We introduce Mammo-CLIP Dissect, the first concept-based explainability framework for systematically dissecting DL vision models trained for mammography. Leveraging a mammography-specific vision-language model (Mammo-CLIP) as a "dissector," our approach labels neurons at specified layers with human-interpretable textual concepts and quantifies their alignment to domain knowledge. Using Mammo-CLIP Dissect, we investigate three key questions: (1) how concept learning differs between DL vision models trained on general image datasets versus mammography-specific datasets; (2) how fine-tuning for downstream mammography tasks affects concept specialisation; and (3) which mammography-relevant concepts remain underrepresented. We show that models trained on mammography data capture more clinically relevant concepts and align more closely with radiologists' workflows than models not trained on mammography data. Fine-tuning for task-specific classification enhances the capture of certain concept categories (e.g., benign calcifications) but can reduce coverage of others (e.g., density-related features), indicating a trade-off between specialisation and generalisation. Our findings show that Mammo-CLIP Dissect provides insights into how convolutional neural networks (CNNs) capture mammography-specific knowledge. By comparing models across training data and fine-tuning regimes, we reveal how domain-specific training and task-specific adaptation shape concept learning. Code and concept set are available: https://github.com/Suaiba/Mammo-CLIP-Dissect.
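As a rough illustration of the dissection idea (not the paper's exact procedure), one can tag each neuron with the concept whose CLIP similarity profile over a probe set best matches that neuron's activation profile; the correlation-based matching rule and all array shapes below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, n_neurons, n_concepts = 100, 8, 5
concepts = ["mass", "calcification", "density",
            "architectural distortion", "skin thickening"]

# activations[i, j]: activation of neuron j on probe image i (synthetic)
activations = rng.normal(size=(n_images, n_neurons))
# clip_sim[i, k]: CLIP similarity of probe image i to concept text k (synthetic)
clip_sim = rng.normal(size=(n_images, n_concepts))

def best_concept(neuron_act: np.ndarray) -> str:
    # Correlate the neuron's activation profile with each concept's profile
    corrs = [np.corrcoef(neuron_act, clip_sim[:, k])[0, 1]
             for k in range(n_concepts)]
    return concepts[int(np.argmax(corrs))]

for j in range(n_neurons):
    print(f"neuron {j}: {best_concept(activations[:, j])}")
```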

End-to-end CNN-based deep learning enhances breast lesion characterization using quantitative ultrasound (QUS) spectral parametric images.

Osapoetra LO, Moslemi A, Moore-Palhares D, Halstead S, Alberico D, Hwang A, Sannachi L, Curpen B, Czarnota GJ

PubMed paper · Sep 25, 2025
QUS spectral parametric imaging offers a fast and accurate method for breast lesion characterization. This study explored using deep CNNs to classify breast lesions from QUS spectral parametric images, aiming to improve on radiomics-based and conventional machine learning approaches. Predictive models were developed using transfer learning with pre-trained CNNs to distinguish malignant from benign lesions. The dataset included 276 participants: 184 malignant (median age, 51 years [IQR: 27-81 years]) and 92 benign cases (median age, 46 years [IQR: 18-75 years]). QUS spectral parametric imaging was applied to the US RF data, resulting in 1764 images of QUS spectral parameters (MBF, SS, and SI) and QUS scattering parameters (ASD and AAC). The data were randomly split into 60% training, 20% validation, and 20% test sets, stratified by lesion subtype, and repeated five times. The number of convolutional blocks was optimized, and the final convolutional layer was fine-tuned. Models tested included ResNet, Inception-v3, Xception, and EfficientNet. Xception-41 achieved a recall of 86 ± 3%, specificity of 87 ± 5%, balanced accuracy of 87 ± 3%, and an AUC of 0.93 ± 0.02 on test sets. EfficientNetV2-M showed similar performance, with a recall of 91 ± 1%, specificity of 81 ± 7%, balanced accuracy of 86 ± 3%, and an AUC of 0.92 ± 0.02. CNN models outperformed radiomics and conventional machine learning (p-values < 0.05). This study demonstrated the capability of end-to-end CNN-based models for accurate characterization of breast masses from QUS spectral parametric images.
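The transfer-learning recipe described above (pre-trained backbone, frozen early blocks, fine-tuned final layers) is a standard pattern; a minimal PyTorch sketch follows, with torchvision's ResNet-50 standing in for the paper's backbones and all sizes as placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

# Freeze the pre-trained convolutional blocks
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head for benign-vs-malignant output;
# the new head's parameters train from scratch
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of parametric images
x = torch.randn(4, 3, 224, 224)
y = torch.tensor([0, 1, 1, 0])
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```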

Deep learning and radiomics integration of photoacoustic/ultrasound imaging for non-invasive prediction of luminal and non-luminal breast cancer subtypes.

Wang M, Mo S, Li G, Zheng J, Wu H, Tian H, Chen J, Tang S, Chen Z, Xu J, Huang Z, Dong F

PubMed paper · Sep 24, 2025
This study aimed to develop a Deep Learning Radiomics integrated model (DLRN), which combines photoacoustic/ultrasound (PA/US) imaging with clinical and radiomics features to distinguish between luminal and non-luminal breast cancer (BC) in a preoperative setting. A total of 388 BC patients were included, with 271 in the training group and 117 in the testing group. Radiomics and deep learning features were extracted from PA/US images using Pyradiomics and ResNet50, respectively. Feature selection was performed using independent-sample t-tests, Pearson correlation analysis, and LASSO regression to build a Deep Learning Radiomics (DLR) model. Based on the results of univariate and multivariate logistic regression analyses, the DLR model was combined with valuable clinical features to construct the DLRN model. Model efficacy was assessed using AUC, accuracy, sensitivity, specificity, and NPV. The DLR model comprised 3 radiomic features and 6 deep learning features, which, when combined with significant clinical predictors, formed the DLRN model. In the testing set, the AUC of the DLRN model (0.924 [0.877-0.972]) was significantly higher than that of the DLR (AUC 0.847 [0.758-0.936], p = 0.026), DL (AUC 0.822 [0.725-0.919], p = 0.06), Rad (AUC 0.717 [0.597-0.838], p < 0.001), and clinical (AUC 0.820 [0.745-0.895], p = 0.002) models. These findings indicate that the DLRN model (integrated model) exhibited the most favorable predictive performance among all models evaluated. The DLRN model effectively integrates PA/US imaging with clinical data, showing potential for preoperative molecular subtype prediction and guiding personalized treatment strategies for BC patients.
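The LASSO selection step used above to fuse radiomic and deep features can be sketched with an L1-penalised logistic regression (a common stand-in for LASSO in binary radiomics problems); the feature matrix, labels, and regularisation strength below are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(271, 50))    # stacked radiomic + deep features (synthetic)
y = rng.integers(0, 2, size=271)  # luminal (1) vs non-luminal (0) labels (synthetic)

# L1 penalty drives the weights of uninformative features to exactly zero
lasso = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
lasso.fit(X, y)

coef = lasso.named_steps["logisticregression"].coef_.ravel()
print("features retained:", int(np.count_nonzero(coef)))
```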