
AI-supported approaches for mammography single and double reading: A controlled multireader study.

Brancato B, Magni V, Saieva C, Risso GG, Buti F, Catarzi S, Ciuffi F, Peruzzi F, Regini F, Ambrogetti D, Alabiso G, Cruciani A, Doronzio V, Frati S, Giannetti GP, Guerra C, Valente P, Vignoli C, Atzori S, Carrera V, D'Agostino G, Fazzini G, Picano E, Turini FM, Vani V, Fantozzi F, Vietro D, Cavallero D, Vietro F, Plataroti D, Schiaffino S, Cozzi A

PubMed · Jun 1, 2025
To assess the impact of artificial intelligence (AI) on the diagnostic performance of radiologists with varying experience levels in mammography reading, considering single and simulated double reading approaches. In this retrospective study, 150 mammography examinations (30 with pathology-confirmed malignancies, 120 without malignancies [confirmed by 2-year follow-up]) were reviewed according to five approaches: A) human single reading by 26 radiologists of varying experience; B) AI single reading (Lunit INSIGHT MMG); C) human single reading with simultaneous AI support; D) simulated human-human double reading; E) simulated human-AI double reading, with AI as second independent reader flagging cases with a cancer probability ≥10 %. Sensitivity and specificity were calculated and compared using McNemar's test and univariate and multivariable logistic regression. Compared to single reading without AI support, single reading with simultaneous AI support improved mean sensitivity from 69.2 % (standard deviation [SD] 15.6) to 84.5 % (SD 8.1, p < 0.001), with comparable mean specificity (91.8 % versus 90.8 %, p = 0.06). The sensitivity increase provided by AI-supported single reading was largest among radiologists whose sensitivity in the non-supported single reading fell below the median, rising from 56.7 % (SD 12.1) to 79.7 % (SD 10.2, p < 0.001). In the simulated human-AI double reading approach, sensitivity further increased to 91.8 % (SD 3.4), surpassing that of the simulated human-human double reading (87.4 %, SD 8.8, p = 0.016), with comparable mean specificity (84.0 % versus 83.0 %, p = 0.17). AI support significantly enhanced sensitivity across all reading approaches, particularly benefiting the worst-performing radiologists. In the simulated double reading approaches, incorporating AI as an independent second reader significantly increased sensitivity without compromising specificity.
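To make the paired comparison reported above concrete, here is a minimal sketch of McNemar's test on per-case detection outcomes for the same cases read with and without AI support. All data and variable names are synthetic placeholders, not the study's data.

```python
# Illustrative sketch: McNemar's test on paired per-case detection
# outcomes (same 30 malignant cases, with vs. without AI support).
# The detection arrays below are invented for demonstration only.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
detected_without_ai = rng.random(30) < 0.69          # hypothetical hits
detected_with_ai = detected_without_ai | (rng.random(30) < 0.5)

# 2x2 table of paired outcomes (hit/miss without AI vs. with AI).
table = np.array([
    [np.sum(detected_without_ai & detected_with_ai),
     np.sum(detected_without_ai & ~detected_with_ai)],
    [np.sum(~detected_without_ai & detected_with_ai),
     np.sum(~detected_without_ai & ~detected_with_ai)],
])
result = mcnemar(table, exact=True)  # exact binomial form for small counts
print(f"McNemar p-value: {result.pvalue:.4f}")
```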

Deep Learning in Digital Breast Tomosynthesis: Current Status, Challenges, and Future Trends.

Wang R, Chen F, Chen H, Lin C, Shuai J, Wu Y, Ma L, Hu X, Wu M, Wang J, Zhao Q, Shuai J, Pan J

PubMed · Jun 1, 2025
The high-resolution three-dimensional (3D) images generated with digital breast tomosynthesis (DBT) in the screening of breast cancer offer new possibilities for early disease diagnosis. Early detection is especially important as the incidence of breast cancer increases. However, DBT also presents challenges in terms of poorer results for dense breasts, increased false positive rates, slightly higher radiation doses, and increased reading times. Deep learning (DL) has been shown to effectively increase the processing efficiency and diagnostic accuracy of DBT images. This article reviews the application and outlook of DL in DBT-based breast cancer screening. First, the fundamentals and challenges of DBT technology are introduced. The applications of DL in DBT are then grouped into three categories: diagnostic classification of breast diseases, lesion segmentation and detection, and medical image generation. Additionally, the current public databases for mammography are summarized in detail. Finally, this paper analyzes the main challenges in the application of DL techniques in DBT, such as the lack of public datasets and model training issues, and proposes possible directions for future research, including large language models, multisource domain transfer, and data augmentation, to encourage innovative applications of DL in medical imaging.

Mexican dataset of digital mammograms (MEXBreast) with suspicious clusters of microcalcifications.

Lozoya RSL, Barragán KN, Domínguez HJO, Azuela JHS, Sánchez VGC, Villegas OOV

PubMed · Jun 1, 2025
Breast cancer is one of the most prevalent cancers affecting women worldwide. Early detection and treatment are crucial in significantly reducing mortality rates. Microcalcifications (MCs) are of particular importance among the various breast lesions: these tiny calcium deposits within breast tissue are present in approximately 30% of malignant tumors and can serve as critical indirect indicators of early-stage breast cancer. Three or more MCs within an area of 1 cm² are considered a microcalcification cluster (MCC) and assigned BI-RADS category 4, indicating a suspicion of malignancy. Mammography is the most widely used technique for breast cancer detection, and approximately one in two mammograms showing MCCs is confirmed as cancerous through biopsy. MCCs are challenging to detect, even for experienced radiologists, underscoring the need for computer-aided detection tools such as convolutional neural networks (CNNs). CNNs require large amounts of domain-specific data with consistent resolutions for effective training. However, most publicly available mammogram datasets either lack resolution information or are compiled from heterogeneous sources. Additionally, MCCs are often either unlabeled or sparsely represented in these datasets, limiting their utility for training CNNs. Here we present MEXBreast, an annotated Mexican digital mammogram database of suspicious MCCs, containing images at resolutions of 50, 70, and 100 microns. MEXBreast aims to support the training, validation, and testing of deep learning CNNs.
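As a rough illustration of the clustering rule described above (three or more MCs within an area of 1 cm²), the sketch below checks candidate MC centroids against that criterion, converting pixel coordinates to millimetres via the detector pixel pitch. This is illustrative only, assuming a simple centred 10 mm window, and is not the dataset's official tooling.

```python
# Illustrative check of the BI-RADS clustering rule: >= 3 MCs inside
# a 1 cm x 1 cm region. Pixel pitch (microns) converts pixel
# coordinates to physical distance.
import numpy as np

def has_mcc(mc_pixels: np.ndarray, pixel_pitch_um: float) -> bool:
    """mc_pixels: (N, 2) array of MC centroids in pixel coordinates."""
    mc_mm = mc_pixels * pixel_pitch_um / 1000.0  # microns -> millimetres
    for centre in mc_mm:
        # Count MCs inside a 10 mm x 10 mm window centred on each MC.
        inside = np.all(np.abs(mc_mm - centre) <= 5.0, axis=1)
        if inside.sum() >= 3:
            return True
    return False

# Example: three MCs a few millimetres apart on a 70-micron image.
coords = np.array([[1000, 1000], [1040, 1010], [1010, 1060]])
print(has_mcc(coords, pixel_pitch_um=70.0))  # True: all within ~4 mm
```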

Keeping AI on Track: Regular monitoring of algorithmic updates in mammography.

Taib AG, James JJ, Partridge GJW, Chen Y

PubMed · Jun 1, 2025
To demonstrate a method of benchmarking the performance of two consecutive software releases of the same commercial artificial intelligence (AI) product against trained human readers using the Personal Performance in Mammographic Screening (PERFORMS) external quality assurance scheme. In this retrospective study, ten PERFORMS test sets, each consisting of 60 challenging cases, were evaluated by human readers between 2012 and 2023, and by Version 1 (V1) and Version 2 (V2) of the same AI model in 2022 and 2023, respectively. Both AI and human readers assessed each breast independently, using the highest suspicion-of-malignancy score per breast for non-malignant cases and per lesion for breasts with malignancy. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were calculated for comparison, with the study powered to detect a medium-sized effect (odds ratio, 3.5 or 0.29) for sensitivity. The study included 1,254 human readers, with a total of 328 malignant lesions, 823 normal breasts, and 55 benign breasts analysed. No significant difference was found between the AUCs for AI V1 (0.93) and V2 (0.94) (p = 0.13). In terms of sensitivity, no difference was observed between human readers and AI V1 (83.2 % vs 87.5 %, respectively; p = 0.12); however, V2 outperformed humans (88.7 %, p = 0.04). Specificity was higher for AI V1 (87.4 %) and V2 (88.2 %) than for human readers (79.0 %; p < 0.01 for both). The upgraded AI model showed no significant difference in diagnostic performance compared with its predecessor when evaluating mammograms from PERFORMS test sets.
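For readers wanting to reproduce this style of benchmarking, a minimal sketch of the reported metrics (sensitivity, specificity, AUC) on synthetic breast-level suspicion scores follows; the data and decision threshold are invented for demonstration.

```python
# Minimal sketch of the benchmarking metrics used above, computed on
# synthetic breast-level labels and suspicion scores (all hypothetical).
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 200)                        # 1 = malignant breast
scores = np.clip(y_true * 0.4 + rng.random(200), 0, 1)  # suspicion scores

auc = roc_auc_score(y_true, scores)
y_pred = scores >= 0.5                                  # recall threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUC={auc:.2f}  sensitivity={sensitivity:.2f}  "
      f"specificity={specificity:.2f}")
```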

Uncertainty Estimation for Dual View X-ray Mammographic Image Registration Using Deep Ensembles.

Walton WC, Kim SJ

PubMed · Jun 1, 2025
Techniques are developed for generating uncertainty estimates for convolutional neural network (CNN)-based methods that register the locations of lesions between the craniocaudal (CC) and mediolateral oblique (MLO) mammographic X-ray image views. Multi-view lesion correspondence is an important task that clinicians perform when characterizing lesions during routine mammographic exams. Automated registration tools can aid in this task, yet if the tools also provide confidence estimates, they can be of greater value to clinicians, especially in cases involving dense tissue where lesions may be difficult to see. A set of deep ensemble-based techniques, which leverage a negative log-likelihood (NLL)-based cost function, are implemented for estimating uncertainties. The ensemble architectures involve significant modifications to an existing CNN dual-view lesion registration algorithm. Three architectural designs are evaluated, and different ensemble sizes are compared using various performance metrics. The techniques are tested on synthetic X-ray data, real 2D X-ray data, and slices from real 3D X-ray data. The ensembles generate covariance-based uncertainty ellipses that are correlated with registration accuracy, such that the ellipse sizes can give a clinician an indication of confidence in the mapping between the CC and MLO views. The results also show that the ellipse sizes can aid in improving computer-aided detection (CAD) results by matching CC/MLO lesion detections and reducing false alarms from both views, adding to clinical utility. The uncertainty estimation techniques show promise as a means of aiding clinicians in confidently establishing multi-view lesion correspondence, thereby improving diagnostic capability.
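A hedged sketch of the core idea, an ensemble member that regresses a 2D lesion location under a Gaussian negative log-likelihood loss, is shown below; the architecture, diagonal-covariance parameterisation, and names are illustrative assumptions, not the paper's implementation.

```python
# Sketch of one deep-ensemble member for 2D lesion location regression,
# trained with a Gaussian NLL; ensemble spread yields the uncertainty
# ellipse. Illustrative only, not the paper's architecture.
import torch
import torch.nn as nn

class LocationHead(nn.Module):
    """Maps image features to a 2D mean and a diagonal log-variance."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.mu = nn.Linear(feat_dim, 2)
        self.log_var = nn.Linear(feat_dim, 2)  # log-variance for stability

    def forward(self, feats):
        return self.mu(feats), self.log_var(feats)

def gaussian_nll(mu, log_var, target):
    # NLL of target under N(mu, exp(log_var)), summed over x/y,
    # constant terms dropped.
    return 0.5 * (log_var + (target - mu) ** 2 / log_var.exp()).sum(-1).mean()

# Independently initialised members; pooling their means and variances
# gives the covariance-based ellipse described above.
ensemble = [LocationHead() for _ in range(5)]
feats = torch.randn(8, 128)   # placeholder CNN features
target = torch.rand(8, 2)     # normalised MLO lesion coordinates
loss = gaussian_nll(*ensemble[0](feats), target)
loss.backward()
```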

A Machine Learning Model Based on Global Mammographic Radiomic Features Can Predict Which Normal Mammographic Cases Radiology Trainees Find Most Difficult.

Siviengphanom S, Brennan PC, Lewis SJ, Trieu PD, Gandomkar Z

PubMed · Jun 1, 2025
This study aims to investigate whether global mammographic radiomic features (GMRFs) can distinguish the hardest- from the easiest-to-interpret normal cases for radiology trainees (RTs). Data from 137 RTs were analysed, with each interpreting seven educational self-assessment test sets comprising 60 cases (40 normal and 20 cancer). Only normal cases were examined. Difficulty scores were computed as the percentage of readers who incorrectly classified each case; cases were classified as hardest- or easiest-to-interpret when their difficulty scores fell at or above the 75th percentile or at or below the 25th percentile, respectively, yielding 140 cases in total (59 low-density and 81 high-density). Thirty-four GMRFs were extracted for each case. A random forest machine learning model was trained to differentiate between hardest- and easiest-to-interpret normal cases and validated using a leave-one-out cross-validation approach. The model's performance was evaluated using the area under the receiver operating characteristic curve (AUC). Significant features were identified through feature importance analysis. Differences between hardest- and easiest-to-interpret cases across the 34 GMRFs, and in difficulty level between low- and high-density cases, were tested using the Kruskal-Wallis test. The model achieved an AUC of 0.75, with cluster prominence and range emerging as the most useful features. Fifteen GMRFs differed significantly (p < 0.05) between hardest- and easiest-to-interpret cases. Difficulty level did not differ significantly between low- and high-density cases (p = 0.12). GMRFs can predict the hardest-to-interpret normal cases for RTs, underscoring their value in identifying the most difficult normal cases and in facilitating customised training programmes tailored to trainees' learning needs.
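A minimal sketch of this pipeline, a random forest over GMRFs validated with leave-one-out cross-validation and scored by AUC, follows; the feature matrix is synthetic and mirrors only the stated dimensions (140 cases, 34 features).

```python
# Sketch of the modelling pipeline described above: random forest over
# global radiomic features, LOOCV validation, AUC scoring, and feature
# importance analysis. Data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
X = rng.normal(size=(140, 34))     # 34 GMRFs per case (synthetic)
y = rng.integers(0, 2, 140)        # 1 = hardest-to-interpret

clf = RandomForestClassifier(n_estimators=500, random_state=0)
probs = cross_val_predict(clf, X, y, cv=LeaveOneOut(),
                          method="predict_proba")[:, 1]
print(f"LOOCV AUC: {roc_auc_score(y, probs):.2f}")

# Feature importance analysis, as used to surface the top features
# (cluster prominence and range in the study).
clf.fit(X, y)
top = np.argsort(clf.feature_importances_)[::-1][:5]
print("Top feature indices:", top)
```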

Artificial Intelligence for Assessment of Digital Mammography Positioning Reveals Persistent Challenges.

Margolies LR, Spear GG, Payne JI, Iles SE, Abdolell M

PubMed · May 30, 2025
Mammographic breast cancer detection depends on high-quality positioning, which is traditionally assessed and monitored subjectively. This study used artificial intelligence (AI) to evaluate positioning on digital screening mammograms in order to identify and quantify unmet mammography positioning quality (MPQ) criteria. Data were collected within an IRB-approved collaboration, and 126 367 digital mammography studies (553 339 images) were processed. Unmet MPQ criteria, including exaggeration, portion cutoff, missing posterior tissue, nipple not in profile, breast positioned too high on the image receptor, inadequate pectoralis length, sagging, and posterior nipple line (PNL) length difference, were evaluated using MPQ AI algorithms. The similarity of unmet MPQ occurrence and rank order was compared between the two health systems. Altogether, 163 759 and 219 785 unmet MPQ criteria were identified at the two health systems, respectively. Neither the rank order nor the probability distribution of the unmet MPQ criteria differed significantly between health systems (P = .844 and P = .92, respectively). The three most common unmet MPQ criteria were short PNL length on the craniocaudal (CC) view, inadequate pectoralis muscle, and excessive exaggeration on the CC view. The percentages of unmet positioning criteria out of the total potential unmet positioning criteria were 8.4% (163 759/1 949 922) at health system 1 and 7.3% (219 785/3 030 129) at health system 2. AI identified a similar distribution of unmet MPQ criteria in the daily work of two health systems. Knowledge of commonly unmet MPQ criteria can facilitate the improvement of mammography quality through tailored education strategies.
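As an illustration of comparing unmet-criteria distributions between two health systems, the sketch below applies a chi-square test to hypothetical criterion counts; the paper's exact statistical tests are not specified here, so this choice, the counts, and the criterion subset are all assumptions.

```python
# Hedged illustration: chi-square comparison of unmet-MPQ criterion
# distributions across two health systems, on invented counts.
import numpy as np
from scipy.stats import chi2_contingency

criteria = ["short PNL (CC)", "inadequate pectoralis", "exaggeration (CC)"]
counts = np.array([
    [52000, 41000, 30000],   # hypothetical counts, health system 1
    [69000, 56000, 38000],   # hypothetical counts, health system 2
])
chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.3f}")
```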

Mammogram mastery: Breast cancer image classification using an ensemble of deep learning with explainable artificial intelligence.

Kumar Mondal P, Jahan MK, Byeon H

PubMed · May 30, 2025
Breast cancer is a serious public health problem and one of the leading causes of cancer-related deaths in women worldwide. Early detection of the disease can significantly increase the chances of survival. However, manual analysis of mammogram images is complex and time-consuming, which can lead to disagreements among experts. For this reason, automated diagnostic systems can play a significant role in increasing the accuracy and efficiency of diagnosis. In this study, we present an effective deep learning (DL) method that classifies mammogram images into cancer and noncancer categories using a collected dataset. Our model is based on a pretrained Inception V3 architecture. First, we run 5-fold cross-validation tests on the fully trained and fine-tuned Inception V3 model. Next, we apply an ensemble combination based on likelihood and mean, in which the fine-tuned Inception V3 model demonstrated superior classification performance. Our DL model achieved 99% accuracy and a 99% F1 score. In addition, explainable AI techniques were used to enhance the transparency of the classification process. The fine-tuned Inception V3 model demonstrated the highest classification performance, confirming its effectiveness in automatic breast cancer detection. The experimental results clearly indicate that our proposed DL-based method for breast cancer image classification is highly effective, especially in image-based diagnostic applications. This study highlights the substantial potential of AI-based solutions to increase the accuracy and reliability of breast cancer diagnosis.
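A minimal sketch of the transfer-learning setup described above, an ImageNet-pretrained Inception V3 fine-tuned for binary classification under 5-fold cross-validation, appears below; data loading and the likelihood/mean ensembling step are omitted, and all shapes, epochs, and names are placeholders.

```python
# Sketch: fine-tuning ImageNet-pretrained Inception V3 with a binary
# head, evaluated via stratified 5-fold cross-validation.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from tensorflow import keras

def build_model():
    base = keras.applications.InceptionV3(
        weights="imagenet", include_top=False, pooling="avg",
        input_shape=(299, 299, 3))
    base.trainable = True  # fine-tune the whole backbone
    out = keras.layers.Dense(1, activation="sigmoid")(base.output)
    model = keras.Model(base.input, out)
    model.compile(optimizer=keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

X = np.random.rand(40, 299, 299, 3).astype("float32")  # placeholder images
y = np.random.randint(0, 2, 40)                        # 1 = cancer

folds = StratifiedKFold(5, shuffle=True, random_state=0)
for fold, (tr, va) in enumerate(folds.split(X, y)):
    model = build_model()
    model.fit(X[tr], y[tr], epochs=1, batch_size=8, verbose=0)
    acc = model.evaluate(X[va], y[va], verbose=0)[1]
    print(f"fold {fold}: acc={acc:.2f}")
```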

Bias in Artificial Intelligence: Impact on Breast Imaging.

Net JM, Collado-Mesa F

PubMed · May 30, 2025
Artificial intelligence (AI) in breast imaging has garnered significant attention given the numerous reports of improved efficiency and accuracy and its potential to bridge the gap between expanding imaging volumes and limited physician resources. While AI models are developed with specific data points, on specific equipment, and in specific populations, the real-world clinical environment is dynamic and patient populations are diverse, which can limit generalizability and widespread adoption of AI in clinical practice. Implementation of AI models into clinical practice requires focused attention on the potential of AI bias to impact outcomes. The following review presents the concept, sources, and types of AI bias to be considered when implementing AI models and offers suggestions on strategies to mitigate AI bias in practice.

Deep Learning-Based Breast Cancer Detection in Mammography: A Multi-Center Validation Study in Thai Population

Isarun Chamveha, Supphanut Chaiyungyuen, Sasinun Worakriangkrai, Nattawadee Prasawang, Warasinee Chaisangmongkon, Pornpim Korpraphong, Voraparee Suvannarerg, Shanigarn Thiravit, Chalermdej Kannawat, Kewalin Rungsinaporn, Suwara Issaragrisil, Payia Chadbunchachai, Pattiya Gatechumpol, Chawiporn Muktabhant, Patarachai Sereerat

arXiv preprint · May 29, 2025
This study presents a deep learning system for breast cancer detection in mammography, developed using a modified EfficientNetV2 architecture with enhanced attention mechanisms. The model was trained on mammograms from a major Thai medical center and validated on three distinct datasets: an in-domain test set (9,421 cases), a biopsy-confirmed set (883 cases), and an out-of-domain generalizability set (761 cases) collected from two different hospitals. For cancer detection, the model achieved AUROCs of 0.89, 0.96, and 0.94 on the respective datasets. The system's lesion localization capability, evaluated using metrics including Lesion Localization Fraction (LLF) and Non-Lesion Localization Fraction (NLF), demonstrated robust performance in identifying suspicious regions. Clinical validation through concordance tests showed strong agreement with radiologists: 83.5% classification and 84.0% localization concordance for biopsy-confirmed cases, and 78.1% classification and 79.6% localization concordance for out-of-domain cases. Expert radiologists' acceptance rate averaged 96.7% for biopsy-confirmed cases and 89.3% for out-of-domain cases. The system achieved a System Usability Scale score of 74.17 for the source hospital and 69.20 for the validation hospitals, indicating good clinical acceptance. These results demonstrate the model's effectiveness in assisting mammogram interpretation, with the potential to enhance breast cancer screening workflows in clinical practice.
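For orientation, the localization metrics named above can be computed from standard FROC-style definitions, which we assume here (the paper's operational definitions may differ): LLF is the fraction of lesions correctly marked, and NLF is the number of non-lesion marks per image. The tallies below are hypothetical.

```python
# Hedged sketch of FROC-style localization metrics; counts are invented.
def froc_point(true_positive_marks: int, total_lesions: int,
               false_positive_marks: int, total_images: int):
    llf = true_positive_marks / total_lesions   # lesion localization fraction
    nlf = false_positive_marks / total_images   # non-lesion marks per image
    return llf, nlf

# Hypothetical tally over a validation set of 883 images.
llf, nlf = froc_point(true_positive_marks=410, total_lesions=500,
                      false_positive_marks=120, total_images=883)
print(f"LLF={llf:.2f}, NLF={nlf:.3f}")
```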
