Page 1 of 876 results

Global mapping of artificial intelligence applications in breast cancer from 1988-2024: a machine learning approach.

Nguyen THT, Jeon S, Yoon J, Park B

PubMed, Sep 29, 2025
Artificial intelligence (AI) has become increasingly integral to various aspects of breast cancer care, including screening, diagnosis, and treatment. This study aimed to critically examine the application of AI throughout the breast cancer care continuum to elucidate key research developments, emerging trends, and prevalent patterns. English articles and reviews published between 1988 and 2024 were retrieved from the Web of Science database, focusing on studies that applied AI in breast cancer research. Collaboration among countries was analyzed using co-authorship networks and co-occurrence mapping. Additionally, clustering analysis using Latent Dirichlet Allocation (LDA) was conducted for topic modeling, whereas linear regression was employed to assess trends in research outputs over time. A total of 8,711 publications were included in the analysis. The United States has led the research in applying AI to the breast cancer care continuum, followed by China and India. Recent publications have increasingly focused on the utilization of deep learning and machine learning (ML) algorithms for automated breast cancer detection in mammography and histopathology. Moreover, the integration of multi-omics data and molecular profiling with AI has emerged as a significant trend. However, research on the applications of robotic and ML technologies in surgical oncology and postoperative care remains limited. Overall, the volume of research addressing AI for early detection, diagnosis, and classification of breast cancer has markedly increased over the past five years. The rapid expansion of AI-related research on breast cancer underscores its potential impact. However, significant challenges remain. Ongoing rigorous investigations are essential to ensure that AI technologies yield evidence-based benefits across diverse patient populations, thereby avoiding the inadvertent exacerbation of existing healthcare disparities.

Benign vs malignant tumors classification from tumor outlines in mammography scans using artificial intelligence techniques.

Beni HM, Asaei FY

PubMed, Sep 21, 2025
Breast cancer is one of the leading causes of cancer-related death among women. With early diagnosis of this condition, the probability of survival increases. For this purpose, medical imaging methods, especially mammography, are used for screening and early diagnosis of breast abnormalities. The main goal of this study is to distinguish benign from malignant tumors based on morphology features computed from tumor outlines extracted from mammography images. Unlike previous studies, this study does not use the mammographic image itself but only the exact outline of the tumor. These outlines were extracted from a new, publicly available mammography database published in 2024. Features of the outlines were computed using well-known pre-trained Convolutional Neural Networks (CNNs), including VGG16, ResNet50, Xception65, AlexNet, DenseNet, GoogLeNet, Inception-v3, and combinations of them to improve performance. These pre-trained networks have been used in many studies across various fields. In the classification stage, well-known Machine Learning (ML) algorithms, such as Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Neural Network (NN), Naïve Bayes (NB), and Decision Tree (DT), as well as combinations of them, were compared on outcome measures, namely accuracy, specificity, sensitivity, and precision. Data augmentation increased the dataset size by about 6-8 times, and K-fold cross-validation (K = 5) was used. Based on the simulations performed, combining the features from all pre-trained deep networks with the NB classifier yielded the best outcomes: 88.13% accuracy, 92.52% specificity, 83.73% sensitivity, and 92.04% precision. Furthermore, validation on the DMID dataset using ResNet50 features with the NB classifier led to 92.03% accuracy, 95.57% specificity, 88.49% sensitivity, and 95.23% precision.
This study sheds light on using AI algorithms to avoid unnecessary biopsies and speed up breast cancer tumor classification using tumor outlines in mammographic images.
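The classification stage above pairs pre-extracted deep features with a Naive Bayes classifier under 5-fold cross-validation. A minimal sketch of that stage follows; synthetic feature vectors stand in for the concatenated CNN outline features, and the dataset sizes are illustrative assumptions.

```python
# Sketch of the classification stage: pre-extracted outline features
# (synthetic stand-ins for CNN embeddings) fed to Naive Bayes under
# 5-fold cross-validation, as described above. Dimensions are illustrative.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
n_benign, n_malignant, n_features = 100, 100, 64

# Synthetic features; in the study these come from pre-trained CNNs.
X = np.vstack([
    rng.normal(0.0, 1.0, (n_benign, n_features)),
    rng.normal(0.5, 1.0, (n_malignant, n_features)),
])
y = np.array([0] * n_benign + [1] * n_malignant)  # 0 = benign, 1 = malignant

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(GaussianNB(), X, y, cv=cv, scoring="accuracy")
mean_accuracy = scores.mean()
```

Swapping `GaussianNB()` for an SVM, KNN, or decision tree reproduces the kind of classifier comparison the study reports.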

Influence of Mammography Acquisition Parameters on AI and Radiologist Interpretive Performance.

Lotter W, Hippe DS, Oshiro T, Lowry KP, Milch HS, Miglioretti DL, Elmore JG, Lee CI, Hsu W

PubMed, Sep 17, 2025
"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence; the article will undergo copyediting, layout, and proof review before it is published in its final version, and errors discovered during production could affect the content. Purpose: To evaluate the impact of screening mammography acquisition parameters on the interpretive performance of AI and radiologists. Materials and Methods: The associations between seven mammogram acquisition parameters (mammography machine version, kVp, x-ray exposure delivered, relative x-ray exposure, paddle size, compression force, and breast thickness) and AI and radiologist performance in interpreting two-dimensional screening mammograms acquired by a diverse health system between December 2010 and 2019 were retrospectively evaluated. The top 11 AI models and the ensemble model from the Digital Mammography DREAM Challenge were assessed. The associations between each acquisition parameter and the sensitivity and specificity of the AI models and the radiologists' interpretations were evaluated separately using generalized estimating equations-based models at the examination level, adjusted for several clinical factors. Results: The dataset included 28,278 screening two-dimensional mammograms from 22,626 women (mean age 58.5 years ± 11.5 [SD]; 4,913 women had multiple mammograms). Of these, 324 examinations resulted in a breast cancer diagnosis within 1 year. The acquisition parameters were significantly associated with the performance of both AI and radiologists, with absolute effect sizes reaching 10% for sensitivity and 5% for specificity; however, the associations differed between AI and radiologists for several parameters. Increased exposure delivered reduced specificity for the ensemble AI (-4.5% per 1 SD increase; P < .001) but not for radiologists (P = .44).
Increased compression force reduced specificity for radiologists (-1.3% per 1 SD increase; P < .001) but not for AI (P = .60). Conclusion: Screening mammography acquisition parameters impacted the performance of both AI and radiologists, with some parameters impacting performance differently. ©RSNA, 2025.
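The per-SD effect sizes above come from generalized estimating equations adjusted for clinical factors. As a simplified stand-in, the sketch below estimates the association between a standardized acquisition parameter and specificity with plain logistic regression on simulated cancer-negative exams; the GEE clustering, adjustment covariates, and all numbers are omitted or invented for illustration.

```python
# Simplified sketch of testing whether an acquisition parameter (e.g.
# delivered exposure) is associated with specificity. The paper used GEE
# models adjusted for clinical factors; this uses plain logistic regression
# on simulated data as an illustrative stand-in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
exposure = rng.normal(0, 1, n)  # standardized, so coefficients are per 1 SD

# Simulate: higher exposure slightly raises the false-positive probability
# among cancer-negative exams (an assumed effect, for illustration only).
p_flag = 1 / (1 + np.exp(-(-2.0 + 0.3 * exposure)))
ai_flagged = rng.random(n) < p_flag  # AI recall on cancer-negative exams

# Specificity = P(not flagged | no cancer); model flagging vs. exposure.
model = LogisticRegression().fit(exposure.reshape(-1, 1), ai_flagged)
per_sd_log_odds = model.coef_[0][0]  # log-odds change per 1 SD of exposure
```

A positive coefficient here corresponds to the paper's finding that more delivered exposure lowered the ensemble AI's specificity.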

Predicting cardiovascular events from routine mammograms using machine learning.

Barraclough JY, Gandomkar Z, Fletcher RA, Barbieri S, Kuo NI, Rodgers A, Douglas K, Poppe KK, Woodward M, Luxan BG, Neal B, Jorm L, Brennan P, Arnott C

PubMed, Sep 16, 2025
Cardiovascular risk is underassessed in women. Many women undergo screening mammography in midlife, when the risk of cardiovascular disease rises. Mammographic features such as breast arterial calcification and tissue density are associated with cardiovascular risk. We developed and tested a deep learning algorithm for cardiovascular risk prediction based on routine mammography images. Lifepool is a cohort of women with at least one screening mammogram linked to hospitalisation and death databases. A deep learning model based on the DeepSurv architecture was developed to predict major cardiovascular events from mammography images. Model performance was compared against standard risk prediction models using the concordance index (analogous to Harrell's C-statistic). There were 49,196 women included, with a median follow-up of 8.8 years (IQR 7.7-10.6), among whom 3392 experienced a first major cardiovascular event. The DeepSurv model using mammography features and participant age had a concordance index of 0.72 (95% CI 0.71 to 0.73), performing similarly to modern models containing age and clinical variables, including the New Zealand 'PREDICT' tool and the American Heart Association 'PREVENT' equations. A deep learning algorithm based on only mammographic features and age predicted cardiovascular risk with performance comparable to traditional cardiovascular risk equations. Risk assessment based on mammography may be a novel opportunity for improving cardiovascular risk screening in women.
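The concordance index used to evaluate the DeepSurv-style model above is the fraction of comparable patient pairs in which the model assigns the higher risk score to the patient who has the event earlier. A minimal pure-Python sketch, on invented toy data:

```python
# Minimal concordance index for right-censored survival data: a pair (i, j)
# is comparable when i has an observed event before j's follow-up time;
# it is concordant when the model gives i the higher risk score.
def concordance_index(times, events, risks):
    """times: follow-up times; events: 1 if event observed; risks: model scores."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue  # the earlier member of a comparable pair must be an event
        for j in range(n):
            if times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5  # ties get half credit
    return concordant / comparable

# Perfectly ordered toy example: higher risk -> earlier event.
c = concordance_index(times=[2, 5, 8, 10], events=[1, 1, 0, 0],
                      risks=[0.9, 0.6, 0.3, 0.1])
```

A value of 0.5 is chance-level ranking and 1.0 is perfect ranking, so the study's 0.72 sits between random and perfect discrimination.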

Mammographic features in screening mammograms with high AI scores but a true-negative screening result.

Koch HW, Bergan MB, Gjesvik J, Larsen M, Bartsch H, Haldorsen IHS, Hofvind S

PubMed, Sep 16, 2025
Background: The use of artificial intelligence (AI) in screen-reading of mammograms has shown promising results for cancer detection. However, less attention has been paid to the false positives generated by AI. Purpose: To investigate mammographic features in screening mammograms with high AI scores but a true-negative screening result. Material and Methods: In this retrospective study, 54,662 screening examinations from BreastScreen Norway 2010-2022 were analyzed with a commercially available AI system (Transpara v. 2.0.0). An AI score of 1-10 indicated the suspiciousness of malignancy. We selected examinations with an AI score of 10 and a true-negative screening result, followed by two consecutive true-negative screening examinations. Of the 2,124 examinations matching these criteria, 382 randomly selected examinations underwent blinded consensus review by three experienced breast radiologists. The examinations were classified according to mammographic features, radiologist interpretation score (1-5), and mammographic breast density (BI-RADS 5th ed., a-d). Results: The reviews classified 91.1% (348/382) of the examinations as negative (interpretation score 1). All examinations (26/26) categorized as BI-RADS d were given an interpretation score of 1. Classification of mammographic features: asymmetry = 30.6% (117/382); calcifications = 30.1% (115/382); asymmetry with calcifications = 29.3% (112/382); mass = 8.9% (34/382); distortion = 0.8% (3/382); spiculated mass = 0.3% (1/382). For examinations with calcifications, 79.1% (91/115) were classified as having benign morphology. Conclusion: The majority of the false-positive screening examinations generated by AI were classified as non-suspicious in a retrospective blinded consensus review and would likely not have been recalled for further assessment in a real screening setting using AI as decision support.

Breast cancer risk assessment for screening: a hybrid artificial intelligence approach.

Tendero R, Larroza A, Pérez-Benito FJ, Perez-Cortes JC, Román M, Llobet R

PubMed, Sep 11, 2025
This study evaluates whether integrating clinical data with mammographic features using artificial intelligence (AI) improves 2-year breast cancer risk prediction compared to using either data type alone. This retrospective nested case-control study included 2193 women (mean age, 59 ± 5 years) screened at Hospital del Mar, Spain (2013-2020), with 418 cases (mammograms taken 2 years before diagnosis) and 1775 controls (cancer-free for ≥ 2 years). Three models were evaluated: (1) ERTpd + im, based on Extremely Randomized Trees (ERT), split into sub-models for personal data (ERTpd) and image features (ERTim); (2) an image-only model (CNN); and (3) a hybrid model (ERTpd + im + CNN). Five-fold cross-validation, area under the receiver operating characteristic curve (AUC), bootstrapping for confidence intervals, and DeLong tests for paired data were used to assess performance. Robustness was evaluated across breast density quartiles and detection type (screen-detected vs. interval cancers). The hybrid model achieved an AUC of 0.75 (95% CI: 0.71-0.76), significantly outperforming the CNN model (AUC, 0.74; 95% CI: 0.70-0.75; p < 0.05) and slightly surpassing ERTpd + im (AUC, 0.74; 95% CI: 0.70-0.76). Sub-models ERTpd and ERTim had AUCs of 0.59 and 0.73, respectively. The hybrid model performed consistently across breast density quartiles (p > 0.05) and better for screen-detected (AUC, 0.79) than interval cancers (AUC, 0.59; p < 0.001). This study shows that integrating clinical and mammographic data with AI improves 2-year breast cancer risk prediction, outperforming single-source models. The hybrid model demonstrated higher accuracy and robustness across breast density quartiles, with better performance for screen-detected cancers. Question: Current breast cancer risk models have limitations in accuracy. Can integrating clinical and mammographic data using artificial intelligence (AI) improve short-term risk prediction?
Findings: A hybrid model combining clinical and imaging data achieved the highest accuracy in predicting 2-year breast cancer risk, outperforming models using either data type alone. Clinical relevance: Integrating clinical and mammographic data with AI improves breast cancer risk prediction. This approach enables personalized screening strategies and supports early detection. It helps identify high-risk women and optimizes the use of additional assessments within screening programs.
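The evaluation above reports AUCs with bootstrapped confidence intervals. A minimal sketch of that procedure on synthetic risk scores follows; the case/control sizes, score distributions, and resample count are illustrative assumptions.

```python
# Sketch of AUC with a percentile-bootstrap confidence interval, as used
# in the evaluation above, on synthetic case/control risk scores.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = np.array([1] * 100 + [0] * 400)  # 1 = case, 0 = control
# Synthetic risk scores: cases score somewhat higher than controls.
scores = np.concatenate([rng.normal(0.7, 0.2, 100),
                         rng.normal(0.4, 0.2, 400)])

auc = roc_auc_score(y, scores)

# Percentile bootstrap: resample examinations with replacement.
boot = []
for _ in range(500):
    idx = rng.integers(0, len(y), len(y))
    if len(set(y[idx])) < 2:
        continue  # AUC needs both classes present in the resample
    boot.append(roc_auc_score(y[idx], scores[idx]))
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
```

Comparing two models' AUCs on the same examinations, as the study does, additionally calls for a paired test such as DeLong's rather than comparing intervals alone.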

Implementing a Resource-Light and Low-Code Large Language Model System for Information Extraction from Mammography Reports: A Pilot Study.

Dennstädt F, Fauser S, Cihoric N, Schmerder M, Lombardo P, Cereghetti GM, von Däniken S, Minder T, Meyer J, Chiang L, Gaio R, Lerch L, Filchenko I, Reichenpfader D, Denecke K, Vojvodic C, Tatalovic I, Sander A, Hastings J, Aebersold DM, von Tengg-Kobligk H, Nairz K

PubMed, Sep 10, 2025
Large language models (LLMs) have been successfully used for data extraction from free-text radiology reports. Most current studies were conducted with LLMs accessed via an application programming interface (API). We evaluated the feasibility of using open-source LLMs, deployed on limited local hardware resources for data extraction from free-text mammography reports, using a common data element (CDE)-based structure. Seventy-nine CDEs were defined by an interdisciplinary expert panel, reflecting real-world reporting practice. Sixty-one reports were classified by two independent researchers to establish ground truth. Five different open-source LLMs deployable on a single GPU were used for data extraction using the general-classifier Python package. Extractions were performed for five different prompt approaches with calculation of overall accuracy, micro-recall and micro-F1. Additional analyses were conducted using thresholds for the relative probability of classifications. High inter-rater agreement was observed between manual classifiers (Cohen's kappa 0.83). Using default prompts, the LLMs achieved accuracies of 59.2-72.9%. Chain-of-thought prompting yielded mixed results, while few-shot prompting led to decreased accuracy. Adaptation of the default prompts to precisely define classification tasks improved performance for all models, with accuracies of 64.7-85.3%. Setting certainty thresholds further improved accuracies to > 90% but reduced the coverage rate to < 50%. Locally deployed open-source LLMs can effectively extract information from mammography reports, maintaining compatibility with limited computational resources. Selection and evaluation of the model and prompting strategy are critical. Clear, task-specific instructions appear crucial for high performance. Using a CDE-based framework provides clear semantics and structure for the data extraction.
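The certainty-threshold analysis above trades coverage for accuracy: classifications whose relative probability falls below a threshold are discarded, raising accuracy on the retained subset while shrinking it. A small sketch of that trade-off on invented extraction results:

```python
# Sketch of the certainty-threshold trade-off described above: keep only
# classifications at or above a probability threshold, then measure accuracy
# on the kept subset and the fraction of items retained. Data are invented.
def threshold_metrics(predictions, threshold):
    """predictions: list of (predicted, probability, ground_truth) tuples."""
    kept = [(p, t) for p, prob, t in predictions if prob >= threshold]
    coverage = len(kept) / len(predictions)
    accuracy = (sum(p == t for p, t in kept) / len(kept)) if kept else None
    return accuracy, coverage

preds = [
    ("yes", 0.95, "yes"), ("no", 0.90, "no"), ("yes", 0.92, "yes"),
    ("no", 0.55, "yes"), ("yes", 0.60, "no"), ("no", 0.97, "no"),
]

acc_all, cov_all = threshold_metrics(preds, 0.0)    # no threshold
acc_conf, cov_conf = threshold_metrics(preds, 0.8)  # confident subset only
```

In this toy data the low-confidence items are exactly the wrong ones, so thresholding lifts accuracy to 100% while coverage drops, mirroring the study's > 90% accuracy at < 50% coverage.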

An economic scenario analysis of implementing artificial intelligence in BreastScreen Norway-Impact on radiologist person-years, costs and effects.

Moger TA, Nardin SB, Holen ÅS, Moshina N, Hofvind S

PubMed, Sep 9, 2025
Objective: To study the implications of implementing artificial intelligence (AI) as a decision support tool in the Norwegian breast cancer screening program with respect to cost-effectiveness and time savings for radiologists. Methods: In a decision tree model using recent data from AI vendors and the Cancer Registry of Norway, and assuming equal effectiveness of radiologists plus AI compared to standard practice, we simulated costs, effects, and radiologist person-years over the next 20 years under different scenarios: 1) assuming a €1 additional running cost of AI instead of the €3 assumed in the base case; 2) varying the AI-score thresholds for single vs. double readings; 3) varying the consensus and recall rates; and 4) reductions in the interval cancer rate compared to standard practice. Results: AI was unlikely to be cost-effective, even when only one radiologist was used alongside AI for all screening exams. This also applied when assuming a 10% reduction in the consensus and recall rates. However, there was a 30-50% reduction in the radiologists' screen-reading volume. Assuming an additional running cost of €1 for AI, the costs were comparable, with similar probabilities of cost-effectiveness for AI and standard practice. Assuming a 5% reduction in the interval cancer rate, AI proved to be cost-effective across all willingness-to-pay values. Conclusions: AI may be cost-effective if the interval cancer rate is reduced by 5% or more, or if its additional cost is €1 per screening exam. Although the reduction in screening volume is substantial, it remains modest relative to the total radiologist person-years available within breast centers, accounting for only 3-4% of person-years.
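The screen-reading-volume mechanism above can be sketched with back-of-the-envelope arithmetic: under an AI-score threshold, low-score exams get a single reading while the rest remain double-read. All inputs below (exam volume, routing fraction, per-read and per-exam AI costs) are invented for illustration and are not the study's figures, which also account for effects and consensus/recall costs.

```python
# Back-of-the-envelope sketch of AI-assisted triage: a fraction of exams is
# routed to single reading, the rest stay double-read. Numbers are invented.
def scenario(n_exams, single_read_fraction, cost_per_read, ai_cost_per_exam):
    reads_standard = 2 * n_exams  # double reading for every exam
    reads_ai = n_exams * (single_read_fraction * 1
                          + (1 - single_read_fraction) * 2)
    cost_standard = reads_standard * cost_per_read
    cost_ai = reads_ai * cost_per_read + n_exams * ai_cost_per_exam
    volume_reduction = 1 - reads_ai / reads_standard
    return cost_standard, cost_ai, volume_reduction

# Hypothetical inputs: 100,000 exams/year, 70% routed to single reading,
# €6 per reading, €1 additional AI running cost per exam.
cost_std, cost_ai, reduction = scenario(100_000, 0.70, 6.0, 1.0)
```

Routing 70% of exams to single reading cuts screen-reading volume by 35%, inside the 30-50% range the study reports; whether that translates into cost-effectiveness depends on the AI running cost and effects, as the scenarios above show.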

Lesion Asymmetry Screening Assisted Global Awareness Multi-view Network for Mammogram Classification.

Liu X, Sun L, Li C, Han B, Jiang W, Yuan T, Liu W, Liu Z, Yu Z, Liu B

PubMed, Sep 9, 2025
Mammography is a primary method for early screening, and developing deep learning-based computer-aided systems is of great significance. However, current deep learning models typically treat each image as an independent entity for diagnosis rather than integrating images from multiple views to diagnose the patient. These methods do not fully consider or address the complex interactions between different views, resulting in poor diagnostic performance and interpretability. To address this issue, this paper proposes a novel end-to-end framework for breast cancer diagnosis: the lesion asymmetry screening assisted global awareness multi-view network (LAS-GAM). Rather than a conventional image-level diagnostic model, LAS-GAM operates at the patient level, simulating the workflow of radiologists analyzing mammographic images. The framework processes a patient's four views and revolves around two key modules: a global module and a lesion screening module. The global module simulates the comprehensive assessment by radiologists, integrating complementary information from the craniocaudal (CC) and mediolateral oblique (MLO) views of both breasts to generate global features that represent the patient's overall condition. The lesion screening module mimics the process of locating lesions by comparing symmetric regions in contralateral views, identifying potential lesion areas and extracting lesion-specific features using a lightweight model. By combining the global features and lesion-specific features, LAS-GAM simulates the diagnostic process and makes patient-level predictions. Moreover, it is trained using only patient-level labels, significantly reducing data annotation costs. Experiments on the Digital Database for Screening Mammography (DDSM) and an in-house dataset validate LAS-GAM, achieving AUCs of 0.817 and 0.894, respectively.

Enhancing Breast Density Assessment in Mammograms Through Artificial Intelligence.

da Rocha NC, Barbosa AMP, Schnr YO, Peres LDB, de Andrade LGM, de Magalhaes Rosa GJ, Pessoa EC, Corrente JE, de Arruda Silveira LV

PubMed, Sep 5, 2025
Breast cancer is the leading cause of cancer-related deaths among women worldwide. Early detection through mammography significantly improves outcomes, with breast density acting as both a risk factor and a key interpretive feature. Although the Breast Imaging Reporting and Data System (BI-RADS) provides standardized density categories, assessments are often subjective and variable. While automated tools exist, most are proprietary and resource-intensive, limiting their use in underserved settings. There is a critical need for accessible, low-cost AI solutions that provide consistent breast density classification. This study aims to develop and evaluate an open-source, computer vision-based approach using deep learning techniques for objective breast density assessment in mammography images, with a focus on accessibility, consistency, and applicability in resource-limited healthcare environments. Our approach integrates a custom-designed convolutional neural network (CD-CNN) with an extreme learning machine (ELM) layer for image-based breast density classification. The retrospective dataset includes 10,371 full-field digital mammography images, previously categorized by radiologists into one of four BI-RADS breast density categories (A-D). The proposed model achieved a testing accuracy of 95.4%, with a specificity of 98.0% and a sensitivity of 92.5%. Agreement between the automated breast density classification and the specialists' consensus was strong, with a weighted kappa of 0.90 (95% CI: 0.82-0.98). On the external and independent mini-MIAS dataset, the model achieved an accuracy of 73.9%, a precision of 81.1%, a specificity of 87.3%, and a sensitivity of 75.1%, which is comparable to the performance reported in previous studies using this dataset. The proposed approach advances breast density assessment in mammograms, enhancing accuracy and consistency to support early breast cancer detection.
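The agreement statistic reported above is a weighted kappa between model-assigned and consensus BI-RADS density categories. A minimal sketch with scikit-learn follows; the label vectors are invented, with categories A-D mapped to 0-3, and quadratic weighting is an assumption about the variant used.

```python
# Sketch of weighted kappa between model and consensus BI-RADS density
# categories (A-D mapped to 0-3), as reported above. Labels are invented.
from sklearn.metrics import cohen_kappa_score

consensus = [0, 1, 1, 2, 2, 2, 3, 3, 1, 0, 2, 3]
model     = [0, 1, 2, 2, 2, 1, 3, 3, 1, 0, 2, 3]

# Quadratic weights penalize large disagreements (A vs. D) more heavily
# than adjacent-category disagreements (B vs. C).
kappa = cohen_kappa_score(consensus, model, weights="quadratic")
```

Because density categories are ordered, weighted kappa is a more informative agreement measure than raw accuracy: confusing B with C is a smaller error than confusing A with D.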