Page 3 of 876 results

Enhancing breast positioning quality through real-time AI feedback.

Sexauer R, Riehle F, Borkowski K, Ruppert C, Potthast S, Schmidt N

PubMed | Jul 15 2025
To enhance mammography quality and increase cancer detection by implementing continuous AI-driven feedback, ensuring reliable, consistent, and high-quality screening according to the 'Perfect', 'Good', 'Moderate', and 'Inadequate' (PGMI) criteria. To assess the impact of the AI software 'b-box™' on mammography quality, we conducted a comparative analysis of PGMI scores. We evaluated scores 50 days before (A) and after (B) the software's implementation in 2021, along with assessments made in the first week of August 2022 (C1) and 2023 (C2), comparing them to evaluations conducted by two readers. Except for postsurgical patients, we included all diagnostic and screening mammograms from one tertiary hospital. A total of 4577 mammograms from 1220 women (mean age: 59, range: 21-94, standard deviation: 11.18) were included: 1728 images were obtained before (A) and 2330 images after the 2021 software implementation (B), along with 269 images in 2022 (C1) and 250 images in 2023 (C2). The results indicated a significant improvement in diagnostic image quality (p < 0.01). The percentage of 'Perfect' examinations rose from 22.34% to 32.27%, while 'Inadequate' images decreased from 13.31% to 5.41% in 2021, continuing the positive trend with 4.46% and 3.20% 'Inadequate' images in 2022 and 2023, respectively (p < 0.01). Using a reliable software platform to perform AI-driven quality evaluation in real time has the potential to make lasting improvements in image quality, support radiographers' professional growth, and elevate institutional quality standards and documentation simultaneously.
Question How can AI-powered quality assessment reduce inadequate mammographic quality, which is known to impact sensitivity and increase the risk of interval cancers?
Findings AI implementation decreased 'Inadequate' mammograms from 13.31% to 3.20% and substantially improved parenchyma visualization, with consistent subgroup trends.
Clinical relevance By reducing 'inadequate' mammograms and enhancing imaging quality, AI-driven tools improve diagnostic reliability and support better outcomes in breast cancer screening.
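The significance test behind the reported p < 0.01 can be sketched as a chi-squared test on the PGMI category counts before and after implementation. The counts below are reconstructed approximately from the percentages in the abstract (the 'Good'/'Moderate' split is invented for illustration, since the abstract only reports 'Perfect' and 'Inadequate' rates):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Illustrative counts: 'Perfect' and 'Inadequate' match the reported
# percentages (n=1728 before, n=2330 after); the middle two categories
# are an assumed split of the remainder.
before = np.array([386, 555, 557, 230])   # Perfect, Good, Moderate, Inadequate
after  = np.array([752, 740, 712, 126])

stat, p, dof, expected = chi2_contingency(np.stack([before, after]))
print(f"chi2 = {stat:.1f}, dof = {dof}, p = {p:.2g}")
```

With shifts of this magnitude at these sample sizes, the test comfortably rejects the null hypothesis of identical category distributions.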

ESE and Transfer Learning for Breast Tumor Classification.

He Y, Batumalay M, Thinakaran R

PubMed | Jul 14 2025
In this study, we proposed a lightweight neural network architecture based on an inverted residual network, an efficient squeeze-excitation (ESE) module, and double transfer learning, called TLese-ResNet, for breast cancer molecular subtype recognition. The inverted ResNet reduces the number of network parameters while enhancing cross-layer gradient propagation and feature expression capabilities. The introduction of the ESE module reduces network complexity while preserving the channel relationships. The dataset for this study comes from the mammography images of patients diagnosed with invasive breast cancer at a hospital in Jiangxi, and comprises preoperative mammography images in CC and MLO views. Given that the dataset is somewhat small, double transfer learning is used in addition to the commonly used data augmentation methods. Double transfer learning includes a first transfer, in which the source domain is ImageNet and the target domain is a COVID-19 chest X-ray image dataset, and a second transfer, in which the source domain is the target domain of the first transfer and the target domain is the mammography dataset we collected. Using five-fold cross-validation, the mean accuracy and area under the receiver operating characteristic curve (AUC) on mammographic images of CC and MLO views were 0.818 and 0.883, respectively, outperforming other state-of-the-art deep learning-based models such as ResNet-50 and DenseNet-121. Therefore, the proposed model can provide clinicians with an effective and non-invasive auxiliary tool for molecular subtype identification of breast cancer.
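The ESE idea replaces the usual two-layer squeeze-excitation bottleneck with a single fully connected gate, so channel information is not compressed. A toy NumPy sketch of that channel-attention mechanism (real implementations use a 1x1 convolution inside a deep learning framework; this is only the arithmetic):

```python
import numpy as np

def ese_block(x, w, b):
    """Efficient squeeze-excitation: one FC layer instead of the usual
    reduce-then-expand pair, keeping channel relationships intact.
    x: feature map of shape (C, H, W); w: (C, C) weights; b: (C,) bias."""
    squeeze = x.mean(axis=(1, 2))                       # global average pool -> (C,)
    excite = 1.0 / (1.0 + np.exp(-(w @ squeeze + b)))   # sigmoid gate, (C,)
    return x * excite[:, None, None]                    # rescale each channel

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w = rng.standard_normal((8, 8)) * 0.1
b = np.zeros(8)
y = ese_block(x, w, b)
print(y.shape)  # (8, 4, 4)
```

Because the gate lies in (0, 1), the block can only attenuate channels, never amplify them, which is the standard squeeze-excitation behaviour.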

Breast lesion classification via colorized mammograms and transfer learning in a novel CAD framework.

Hussein AA, Valizadeh M, Amirani MC, Mirbolouk S

PubMed | Jul 11 2025
Medical imaging sciences and diagnostic techniques for Breast Cancer (BC) imaging have advanced tremendously, particularly with the use of mammography images; however, radiologists may still misinterpret medical images of the breast, resulting in limitations and flaws in the screening process. As a result, Computer-Aided Diagnosis (CAD) systems have become increasingly popular due to their ability to operate independently of human analysis. Current CAD systems use grayscale analysis, which lacks the contrast needed to differentiate benign from malignant lesions. As part of this study, an innovative CAD system is presented that transforms standard grayscale mammography images into RGB color through a three-path preprocessing framework developed for noise reduction, lesion highlighting, and tumor-centric intensity adjustment using a data-driven transfer function. In contrast to a generic approach, this approach statistically tailors colorization to emphasize malignant regions, thus enhancing the ability of both machines and humans to recognize cancerous areas. As a consequence of this conversion, breast tumors with anomalies become more visible, which allows more accurate features to be extracted from them. In a subsequent step, Machine Learning (ML) algorithms are employed to classify these tumors as malignant or benign. A pre-trained model is developed to extract comprehensive features from the colored mammography images. A variety of techniques are implemented in the preprocessing stage to minimize noise and improve image perception; the most challenging methodology is the adjustment of pixel intensity values using a data-driven transfer function derived from tumor intensity histograms. This adjustment serves to draw attention to tumors while reducing the brightness of other areas in the breast image. Measures such as accuracy, sensitivity, specificity, precision, F1-Score, and Area Under the Curve (AUC) are used to evaluate the efficacy of the employed methodologies. This work employed and tested a variety of pre-training and ML techniques. The combination of EfficientNetB0 pre-training with Support Vector Machines (SVM) produced optimal results, with accuracy, sensitivity, specificity, precision, F1-Score, and AUC of 99.4%, 98.7%, 99.1%, 99%, 98.8%, and 100%, respectively. It is clear from these results that the developed method not only advances the state-of-the-art in technical terms, but also provides radiologists with a practical tool to aid in the reduction of diagnostic errors and increase the detection of early breast cancer.
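The three-path colorization can be sketched as mapping each preprocessing path to one RGB channel, with the "data-driven transfer function" built from a tumor-intensity histogram. Everything below (the mean filter, the gamma value, the histogram weighting) is an assumed stand-in for the paper's unpublished pipeline, shown only to make the idea concrete:

```python
import numpy as np

def colorize(gray, tumor_intensities):
    """Toy three-path colorization sketch: denoised, tumor-highlighted, and
    intensity-adjusted versions of the image become the R, G, B channels."""
    g = gray.astype(np.float64)
    g = (g - g.min()) / (np.ptp(g) + 1e-9)          # normalize to [0, 1]

    # Path 1: simple 3x3 mean-filter denoising (stand-in for the real filter).
    pad = np.pad(g, 1, mode="edge")
    denoised = sum(pad[i:i + g.shape[0], j:j + g.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    # Path 2: data-driven transfer function -- boost intensities that are
    # frequent in a (hypothetical) tumor-intensity histogram.
    hist, edges = np.histogram(tumor_intensities, bins=32, range=(0, 1))
    weight = hist / (hist.max() + 1e-9)
    idx = np.clip(np.digitize(g, edges[1:-1]), 0, 31)
    highlighted = np.clip(g * (1.0 + weight[idx]), 0, 1)

    # Path 3: dim non-tumor regions with a gamma curve.
    adjusted = g ** 1.5

    return np.stack([denoised, highlighted, adjusted], axis=-1)

rng = np.random.default_rng(1)
img = rng.random((16, 16))
rgb = colorize(img, tumor_intensities=rng.normal(0.7, 0.05, 500))
print(rgb.shape)  # (16, 16, 3)
```

The key design point is that the highlighting weights come from the data (the tumor histogram) rather than a fixed colormap, which is what distinguishes this approach from generic pseudo-coloring.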

Uncertainty and normalized glandular dose evaluations in digital mammography and digital breast tomosynthesis with a machine learning methodology.

Sarno A, Massera RT, Paternò G, Cardarelli P, Marshall N, Bosmans H, Bliznakova K

PubMed | Jul 8 2025
To predict the normalized glandular dose (DgN) coefficients and the related uncertainty in mammography and digital breast tomosynthesis (DBT) using a machine learning algorithm and patient-like digital breast models. 126 patient-like digital breast phantoms were used for DgN Monte Carlo ground truth calculations. An Automatic Relevance Determination Regression algorithm was used to predict DgN from anatomical breast features. These features included compressed breast thickness, glandular fraction by volume, glandular volume, and the center of mass and standard deviation of the glandular tissue distribution in the cranio-caudal direction. A data imputation algorithm was explored to avoid relying on the latter two features. 5-fold cross-validation showed that the predictive model provides an estimation of DgN with a 1% average difference from the ground truth; this difference was less than 3% in 50% of the cases. The average uncertainty of the estimated DgN values was 9%. Excluding the information related to the glandular distribution increased this uncertainty to 17% without inducing a significant discrepancy in the estimated DgN values, with half of the predicted cases differing from the ground truth by less than 9%. The data imputation algorithm reduced the estimated uncertainty, without restoring the original performance. Predictive performance improved with increasing tube voltage. The proposed methodology predicts the DgN in mammography and DBT for patient-derived breasts with an uncertainty below 9%. Test evaluations of the predictions reported a 1% average difference from the ground truth, with 50% of the cohort cases differing by less than 5%.
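Automatic Relevance Determination regression is available directly in scikit-learn, including the per-sample predictive standard deviation that serves as an uncertainty estimate like the one reported here. A minimal sketch on synthetic stand-in data (the feature values and the linear ground truth below are invented purely to show the API, not real dosimetry):

```python
import numpy as np
from sklearn.linear_model import ARDRegression
from sklearn.model_selection import cross_val_predict

# 126 synthetic "phantoms" with the five anatomical features named in the
# abstract: thickness, glandular fraction, glandular volume, CoM, std.
rng = np.random.default_rng(42)
X = rng.random((126, 5))
dgn = 0.5 - 0.2 * X[:, 0] + 0.3 * X[:, 1] + 0.02 * rng.standard_normal(126)

model = ARDRegression()
pred = cross_val_predict(model, X, dgn, cv=5)   # 5-fold CV as in the study

# ARDRegression can return a per-sample predictive std, analogous to the
# DgN uncertainty the abstract reports.
model.fit(X, dgn)
mean, std = model.predict(X, return_std=True)
print(f"CV mean |error|: {np.mean(np.abs(pred - dgn)):.4f}")
```

ARD is a natural fit here because it automatically prunes irrelevant features, which mirrors the study's question of whether the glandular-distribution features can be dropped.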

Robust Bi-CBMSegNet framework for advancing breast mass segmentation in mammography with a dual module encoder-decoder approach.

Wang Y, Ali M, Mahmood T, Rehman A, Saba T

PubMed | Jul 8 2025
Breast cancer is a prevalent disease affecting millions of women worldwide, and early screening can significantly reduce mortality rates. Mammograms are widely used for screening, but manual readings can lead to misdiagnosis. Computer-assisted diagnosis can help physicians make faster, more accurate judgments, which benefits patients. However, segmenting and classifying breast masses in mammograms is challenging because their shapes resemble those of the surrounding glands. Current target detection algorithms have limited applicability and low accuracy. Automated segmentation of breast masses on mammograms remains a significant research challenge owing to the difficulty of both classifying masses and delineating their contours. This study introduces the Bi-Contextual Breast Mass Segmentation Framework (Bi-CBMSegNet), a novel paradigm that enhances the precision and efficiency of breast mass segmentation within full-field mammograms. Bi-CBMSegNet employs an advanced encoder-decoder architecture comprising two distinct modules: the Global Feature Enhancement Module (GFEM) and the Local Feature Enhancement Module (LFEM). GFEM aggregates and assimilates features from all positions within the mammogram, capturing extensive contextual dependencies that facilitate an enriched representation of homogeneous regions. The LFEM module accentuates semantic information pertinent to each specific position, refining the delineation of heterogeneous regions. The efficacy of Bi-CBMSegNet has been rigorously evaluated on two publicly available mammography databases, demonstrating superior computational efficiency and performance metrics. The findings suggest that Bi-CBMSegNet represents a significant step forward in medical imaging, particularly in breast cancer screening, augmenting the accuracy and efficacy of diagnostic and treatment planning processes.
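The GFEM's "aggregate features from all positions" behaviour is the hallmark of non-local (self-attention) aggregation. A generic NumPy sketch of that mechanism, under the assumption that GFEM is attention-like (the paper's exact module may differ):

```python
import numpy as np

def global_context(x):
    """Non-local-style aggregation: every spatial position attends to every
    other position, so each output feature mixes in whole-image context.
    x: (N, C), where N = H*W flattened positions and C is channels."""
    scores = x @ x.T / np.sqrt(x.shape[1])        # pairwise similarity (N, N)
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # softmax over positions
    return x + attn @ x                           # residual context injection

rng = np.random.default_rng(3)
feats = rng.standard_normal((16, 8))              # a 4x4 map with 8 channels
out = global_context(feats)
print(out.shape)  # (16, 8)
```

The residual form (`x + attn @ x`) keeps the local features intact while adding the long-range context, which is why such modules help with large homogeneous regions.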

Attention-Enhanced Deep Learning Ensemble for Breast Density Classification in Mammography

Peyman Sharifian, Xiaotong Hong, Alireza Karimian, Mehdi Amini, Hossein Arabi

arXiv preprint | Jul 8 2025
Breast density assessment is a crucial component of mammographic interpretation, with high breast density (BI-RADS categories C and D) representing both a significant risk factor for developing breast cancer and a technical challenge for tumor detection. This study proposes an automated deep learning system for robust binary classification of breast density (low: A/B vs. high: C/D) using the VinDr-Mammo dataset. We implemented and compared four advanced convolutional neural networks: ResNet18, ResNet50, EfficientNet-B0, and DenseNet121, each enhanced with channel attention mechanisms. To address the inherent class imbalance, we developed a novel Combined Focal Label Smoothing Loss function that integrates focal loss, label smoothing, and class-balanced weighting. Our preprocessing pipeline incorporated advanced techniques, including contrast-limited adaptive histogram equalization (CLAHE) and comprehensive data augmentation. The individual models were combined through an optimized ensemble voting approach, achieving superior performance (AUC: 0.963, F1-score: 0.952) compared to any single model. This system demonstrates significant potential to standardize density assessments in clinical practice, potentially improving screening efficiency and early cancer detection rates while reducing inter-observer variability among radiologists.
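The "Combined Focal Label Smoothing Loss" named above integrates three standard ingredients: focal down-weighting of easy examples, label smoothing, and class-balanced weighting. A NumPy sketch of that combination (the hyperparameter values are illustrative assumptions, not the paper's):

```python
import numpy as np

def combined_focal_ls_loss(logits, targets, gamma=2.0, eps=0.1, class_w=(1.0, 3.0)):
    """Focal loss x label smoothing x class weights for binary classification.
    logits: (N, 2) raw scores; targets: (N,) labels in {0, 1}."""
    # Softmax probabilities (shifted for numerical stability).
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)

    # Label smoothing: one-hot targets softened toward the uniform distribution.
    n, k = p.shape
    smooth = np.full((n, k), eps / k)
    smooth[np.arange(n), targets] += 1.0 - eps

    # Focal modulation down-weights easy examples; class weights rebalance.
    pt = p[np.arange(n), targets]
    focal = (1.0 - pt) ** gamma
    w = np.asarray(class_w)[targets]

    ce = -(smooth * np.log(p + 1e-12)).sum(axis=1)
    return float(np.mean(w * focal * ce))

logits = np.array([[2.0, -1.0], [0.2, 0.1], [-1.5, 2.5]])
targets = np.array([0, 1, 1])
print(f"loss = {combined_focal_ls_loss(logits, targets):.4f}")
```

A confidently correct prediction receives almost no gradient signal (small focal factor), while a confidently wrong one on the minority class is penalized on all three axes at once.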

Development and validation of an improved volumetric breast density estimation model using the ResNet technique.

Asai Y, Yamamuro M, Yamada T, Kimura Y, Ishii K, Nakamura Y, Otsuka Y, Kondo Y

PubMed | Jul 7 2025
Temporal changes in volumetric breast density (VBD) may serve as prognostic biomarkers for predicting the risk of future breast cancer development. However, accurately measuring VBD from archived X-ray mammograms remains challenging. In a previous study, we proposed a method to estimate volumetric breast density using imaging parameters (tube voltage, tube current, and exposure time) and patient age. This approach, based on a multiple regression model, achieved a determination coefficient (R²) of 0.868.
Approach: In this study, we developed and applied machine learning models (Random Forest, XGBoost) and the deep learning model Residual Network (ResNet) to the same dataset. Model performance was assessed using several metrics: determination coefficient, correlation coefficient, root mean square error, mean absolute error, root mean square percentage error, and mean absolute percentage error. Five-fold cross-validation was conducted to ensure robust validation.
Main results: The best-performing fold resulted in R² values of 0.895, 0.907, and 0.918 for Random Forest, XGBoost, and ResNet, respectively, all surpassing the previous study's results. ResNet consistently achieved the lowest error values across all metrics.
Significance: These findings suggest that ResNet successfully achieved the task of accurately determining VBD from past mammography, a task that had not been realised to date. We are confident that this achievement contributes to advancing research aimed at predicting future risks of breast cancer development by enabling high-accuracy time-series analyses of retrospective VBD.
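The six evaluation metrics listed in the Approach section are all simple functions of the residuals, and collecting them in one routine makes the comparison between models reproducible. A minimal sketch:

```python
import numpy as np

def regression_report(y_true, y_pred):
    """Compute the metrics named in the abstract: R², correlation, RMSE,
    MAE, RMSPE, and MAPE (percentage errors assume y_true has no zeros)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return {
        "R2": 1.0 - ss_res / ss_tot,
        "r": np.corrcoef(y_true, y_pred)[0, 1],
        "RMSE": np.sqrt(np.mean(err ** 2)),
        "MAE": np.mean(np.abs(err)),
        "RMSPE": np.sqrt(np.mean((err / y_true) ** 2)) * 100,
        "MAPE": np.mean(np.abs(err / y_true)) * 100,
    }

m = regression_report([10.0, 20.0, 30.0], [11.0, 19.0, 33.0])
print({k: round(v, 3) for k, v in m.items()})
```

Reporting both squared-error and percentage-error metrics is useful here because VBD spans a wide range across patients, and percentage errors weight small-density breasts fairly.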

PGMI assessment in mammography: AI software versus human readers.

Santner T, Ruppert C, Gianolini S, Stalheim JG, Frei S, Hondl M, Fröhlich V, Hofvind S, Widmann G

PubMed | Jul 5 2025
The aim of this study was to evaluate human inter-reader agreement of parameters included in PGMI (perfect-good-moderate-inadequate) classification of screening mammograms and explore the role of artificial intelligence (AI) as an alternative reader. Five radiographers from three European countries independently performed a PGMI assessment of 520 anonymized mammography screening examinations randomly selected from representative subsets from 13 imaging centres within two European countries. As a sixth reader, a dedicated AI software was used. Accuracy, Cohen's Kappa, and confusion matrices were calculated to compare the predictions of the software against the individual assessments of the readers, as well as potential discrepancies between them. A questionnaire and a personality test were used to better understand the decision-making processes of the human readers. Significant inter-reader variability among human readers with poor to moderate agreement (κ = -0.018 to κ = 0.41) was observed, with some showing more homogeneous interpretations of single features and overall quality than others. In comparison, the software surpassed human inter-reader agreement in detecting glandular tissue cuts, mammilla deviation, pectoral muscle detection, and pectoral angle measurement, while the remaining features and overall image quality exhibited performance comparable to human assessment. Notably, human inter-reader disagreement of PGMI assessment in mammography is considerably high. AI software may already reliably categorize image quality. Its potential for standardization and immediate feedback to achieve and monitor high levels of quality in screening programs needs further attention and should be included in future approaches. AI has promising potential for automated assessment of diagnostic image quality. Faster, more representative and more objective feedback may support radiographers in their quality management processes.
Direct transformation of common PGMI workflows into an AI algorithm could be challenging.
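Cohen's kappa, the agreement statistic reported above, is observed agreement corrected for the agreement expected by chance. A short NumPy sketch on toy PGMI labels (the reader labels below are invented for illustration):

```python
import numpy as np

def cohens_kappa(a, b, k):
    """Cohen's kappa for two readers' category labels in {0, ..., k-1}."""
    a, b = np.asarray(a), np.asarray(b)
    conf = np.zeros((k, k))
    np.add.at(conf, (a, b), 1)                        # confusion matrix
    n = conf.sum()
    po = np.trace(conf) / n                           # observed agreement
    pe = (conf.sum(0) * conf.sum(1)).sum() / n ** 2   # chance agreement
    return (po - pe) / (1.0 - pe)

# Hypothetical PGMI labels (0=P, 1=G, 2=M, 3=I) from two readers.
r1 = np.array([0, 1, 1, 2, 3, 0, 2, 1])
r2 = np.array([0, 1, 2, 2, 3, 1, 2, 1])
print(f"kappa = {cohens_kappa(r1, r2, 4):.3f}")
```

Values of κ = -0.018 to 0.41, as reported in the study, sit in the "poor to moderate" band of the usual Landis-Koch interpretation scale.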

Development of a deep learning-based automated diagnostic system (DLADS) for classifying mammographic lesions - a first large-scale multi-institutional clinical trial in Japan.

Yamaguchi T, Koyama Y, Inoue K, Ban K, Hirokaga K, Kujiraoka Y, Okanami Y, Shinohara N, Tsunoda H, Uematsu T, Mukai H

PubMed | Jul 3 2025
Recently, Western countries have built evidence on mammographic artificial intelligence computer-aided diagnosis (AI-CADx) systems; however, their effectiveness has not yet been sufficiently validated in Japanese women. In this study, we aimed to establish a Japanese mammographic AI-CADx system for the first time. We retrospectively collected screening or diagnostic mammograms from 63 institutions in Japan. We then randomly divided the images into training, validation, and test datasets in a balanced ratio of 8:1:1 on a case-level basis. The gold standard of annotation for the AI-CADx system was mammographic findings based on pathologic references. The AI-CADx system was developed using SE-ResNet modules and a sliding window algorithm. A cut-off concentration gradient of the heatmap image was set at 15%. The AI-CADx system was considered accurate if it detected the presence of a malignant lesion in a breast cancer mammogram. The primary endpoint of the AI-CADx system was defined as a sensitivity and specificity of over 80% for breast cancer diagnosis in the test dataset. We collected 20,638 mammograms from 11,450 Japanese women with a median age of 55 years. The mammograms included 5019 breast cancer (24.3%), 5026 benign (24.4%), and 10,593 normal (51.3%) mammograms. In the test dataset of 2059 mammograms, the AI-CADx system achieved a sensitivity of 83.5% and a specificity of 84.7% for breast cancer diagnosis. The AUC in the test dataset was 0.841 (DeLong 95% CI: 0.822-0.859). Accuracy was largely consistent regardless of breast density, mammographic findings, type of cancer, and mammography vendor (AUC range: 0.639-0.906). The developed Japanese mammographic AI-CADx system diagnosed breast cancer with the pre-specified sensitivity and specificity. We are planning a prospective study to validate the breast cancer diagnostic performance of Japanese physicians using this AI-CADx system as a second reader. UMIN, trial number UMIN000039009.
Registered 26 December 2019, https://www.umin.ac.jp/ctr/.
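The sliding-window-plus-heatmap detection scheme described above can be sketched generically: score overlapping windows with a classifier, accumulate a heatmap, and threshold it. The scoring function below is a stand-in (the study used SE-ResNet modules), and the 15% cut-off is applied here as a fraction of the maximum heat, an assumed interpretation of the abstract's wording:

```python
import numpy as np

def sliding_window_heatmap(image, score_fn, win=8, stride=4, cutoff=0.15):
    """Score each window, average overlapping scores into a heatmap, and
    flag pixels whose heat exceeds `cutoff` times the maximum heat."""
    h, w = image.shape
    heat = np.zeros_like(image, dtype=float)
    hits = np.zeros_like(image, dtype=float)
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            s = score_fn(image[y:y + win, x:x + win])  # classifier score
            heat[y:y + win, x:x + win] += s
            hits[y:y + win, x:x + win] += 1
    heat /= np.maximum(hits, 1)                        # average overlaps
    mask = heat >= cutoff * heat.max()
    return heat, mask

rng = np.random.default_rng(7)
img = rng.random((32, 32))
img[8:16, 8:16] += 2.0                                 # bright "lesion-like" patch
heat, mask = sliding_window_heatmap(img, score_fn=lambda p: float(p.mean()))
print(heat.shape, bool(mask[12, 12]))
```

In practice the window score would come from the trained network's malignancy probability, and the mammogram is called positive if any suprathreshold region remains after the cut-off.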