
Mammographic features in screening mammograms with high AI scores but a true-negative screening result.

Koch HW, Bergan MB, Gjesvik J, Larsen M, Bartsch H, Haldorsen IHS, Hofvind S

PubMed · Sep 16, 2025
Background: The use of artificial intelligence (AI) in screen-reading of mammograms has shown promising results for cancer detection. However, less attention has been paid to the false positives generated by AI.
Purpose: To investigate mammographic features in screening mammograms with high AI scores but a true-negative screening result.
Material and Methods: In this retrospective study, 54,662 screening examinations from BreastScreen Norway 2010-2022 were analyzed with a commercially available AI system (Transpara v. 2.0.0). An AI score of 1-10 indicated the suspiciousness of malignancy. We selected examinations with an AI score of 10 and a true-negative screening result, followed by two consecutive true-negative screening examinations. Of the 2,124 examinations matching these criteria, 382 randomly selected examinations underwent blinded consensus review by three experienced breast radiologists. The examinations were classified according to mammographic features, radiologist interpretation score (1-5), and mammographic breast density (BI-RADS 5th ed., a-d).
Results: The reviews classified 91.1% (348/382) of the examinations as negative (interpretation score 1). All examinations (26/26) categorized as BI-RADS d were given an interpretation score of 1. Mammographic features were distributed as follows: asymmetry, 30.6% (117/382); calcifications, 30.1% (115/382); asymmetry with calcifications, 29.3% (112/382); mass, 8.9% (34/382); distortion, 0.8% (3/382); spiculated mass, 0.3% (1/382). Among examinations with calcifications, 79.1% (91/115) were classified as having benign morphology.
Conclusion: The majority of the false-positive screening examinations generated by AI were classified as non-suspicious in a retrospective blinded consensus review and would likely not have been recalled for further assessment in a real screening setting using AI as decision support.
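To make the cohort-selection rule concrete, here is a minimal pandas sketch of the same logic; the column names and toy values are illustrative assumptions, not fields from the BreastScreen Norway data.

```python
import pandas as pd

# Hypothetical screening log; columns are assumptions for illustration.
exams = pd.DataFrame({
    "woman_id": [1, 1, 1, 2, 2, 2],
    "round":    [1, 2, 3, 1, 2, 3],
    "ai_score": [10, 3, 2, 10, 9, 8],
    "negative": [True, True, True, True, False, True],
})

# Rounds with the top AI score (10) and a true-negative screening read.
candidates = exams[(exams["ai_score"] == 10) & exams["negative"]]

# Keep only those followed by two consecutive true-negative rounds.
def followed_by_two_negatives(row):
    later = exams[(exams["woman_id"] == row["woman_id"]) &
                  (exams["round"].isin([row["round"] + 1, row["round"] + 2]))]
    return len(later) == 2 and bool(later["negative"].all())

selected = candidates[candidates.apply(followed_by_two_negatives, axis=1)]
print(selected)  # woman 1 qualifies; woman 2 has an intervening positive read
```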

Predicting cardiovascular events from routine mammograms using machine learning.

Barraclough JY, Gandomkar Z, Fletcher RA, Barbieri S, Kuo NI, Rodgers A, Douglas K, Poppe KK, Woodward M, Luxan BG, Neal B, Jorm L, Brennan P, Arnott C

PubMed · Sep 16, 2025
Cardiovascular risk is underassessed in women. Many women undergo screening mammography in midlife, when the risk of cardiovascular disease rises. Mammographic features such as breast arterial calcification and tissue density are associated with cardiovascular risk. We developed and tested a deep learning algorithm for cardiovascular risk prediction based on routine mammography images. Lifepool is a cohort of women with at least one screening mammogram linked to hospitalisation and death databases. A deep learning model based on the DeepSurv architecture was developed to predict major cardiovascular events from mammography images. Model performance was compared against standard risk prediction models using the concordance index (equivalent to Harrell's C-statistic). A total of 49,196 women were included, with a median follow-up of 8.8 years (IQR 7.7-10.6), among whom 3392 experienced a first major cardiovascular event. The DeepSurv model using mammography features and participant age had a concordance index of 0.72 (95% CI 0.71 to 0.73), with performance similar to modern models containing age and clinical variables, including the New Zealand 'PREDICT' tool and the American Heart Association 'PREVENT' equations. A deep learning algorithm based on only mammographic features and age predicted cardiovascular risk with performance comparable to traditional cardiovascular risk equations. Risk assessment based on mammography may offer a novel opportunity for improving cardiovascular risk screening in women.
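The concordance index used here can be computed with the lifelines library; below is a toy sketch with made-up follow-up data, where the risk score is negated because lifelines' concordance_index treats higher scores as predicting longer survival.

```python
import numpy as np
from lifelines.utils import concordance_index  # pip install lifelines

# Illustrative follow-up data, not values from the Lifepool cohort.
time_to_event = np.array([8.8, 5.1, 10.6, 7.7, 9.2])  # years of follow-up
event_observed = np.array([1, 1, 0, 0, 1])             # 1 = major CV event
risk_score = np.array([0.9, 0.7, 0.1, 0.3, 0.8])       # model output, higher = riskier

# Negate the risk so that higher model risk maps to shorter predicted survival.
c_index = concordance_index(time_to_event, -risk_score, event_observed)
print(f"concordance index: {c_index:.2f}")
```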

Cross-modality transformer model leveraging DCE-MRI and pathological images for predicting pathological complete response and lymph node metastasis in breast cancer.

Fan M, Zhu Z, Yu Z, Du J, Xie S, Pan X, Chen S, Li L

PubMed · Sep 16, 2025
Pathological diagnosis remains the gold standard for diagnosing breast cancer and is highly accurate and sensitive, which is crucial for assessing pathological complete response (pCR) and lymph node metastasis (LNM) following neoadjuvant chemotherapy (NACT). Dynamic contrast-enhanced MRI (DCE-MRI) is a noninvasive technique that provides detailed morphological and functional insights into tumors. How these two modalities complement each other, particularly when one is unavailable, and how they can be integrated to enhance therapeutic predictions have not been fully explored. To this end, we propose a cross-modality image transformer (CMIT) model designed for feature synthesis and fusion to predict pCR and LNM in breast cancer. This model enables interaction and integration between the two modalities via a transformer's cross-attention module. A modality information transfer module is developed to produce synthetic pathological image features (sPIFs) from DCE-MRI data and synthetic DCE-MRI features (sMRIs) from pathological images. During training, the model leverages both real and synthetic imaging features to improve predictive performance. In the prediction phase, the synthetic imaging features are fused with the corresponding real imaging features to make predictions. The experimental results demonstrate that the proposed CMIT model, which integrates DCE-MRI with sPIFs or histopathological images with sMRIs, outperforms the use of MRI or pathological images alone in predicting pCR to NACT (AUCs of 0.809 and 0.852, respectively). Similar improvements were observed in LNM prediction: the DCE-MRI model's performance improved from an AUC of 0.637 to 0.712, while the DCE-MRI-guided histopathological model achieved an AUC of 0.792. Notably, the proposed model can predict treatment response effectively via DCE-MRI, regardless of the availability of actual histopathological images.
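As a rough illustration of the cross-attention fusion described above, the PyTorch sketch below lets DCE-MRI tokens attend to pathology tokens; the token shapes, embedding size, and residual layout are assumptions, not the paper's exact CMIT architecture.

```python
import torch
import torch.nn as nn

class CrossModalityFusion(nn.Module):
    """Minimal cross-attention block: MRI tokens query pathology tokens."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, mri_tokens, path_tokens):
        # Queries come from DCE-MRI; keys/values come from pathology features,
        # so each MRI token aggregates the pathology context it finds relevant.
        fused, _ = self.cross_attn(query=mri_tokens, key=path_tokens, value=path_tokens)
        return self.norm(mri_tokens + fused)  # residual connection

mri = torch.randn(2, 16, 256)   # (batch, MRI patch tokens, embedding dim)
path = torch.randn(2, 64, 256)  # (batch, pathology patch tokens, embedding dim)
print(CrossModalityFusion()(mri, path).shape)  # torch.Size([2, 16, 256])
```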

Enhanced value of chest computed tomography radiomics features in breast density classification.

Zhou W, Yang Q, Zhang H

PubMed · Sep 15, 2025
This study investigated the correlation between chest computed tomography (CT) radiomics features and breast density classification, aiming to develop an automated radiomics model for breast density assessment from chest CT images and to evaluate its diagnostic performance as a CT-based alternative for breast density classification in clinical practice. A retrospective analysis was conducted on patients who underwent both mammography and chest CT scans. The breast density classifications derived from the mammography images were used to guide the development of the CT-based models. Radiomic features were extracted from breast regions of interest (ROIs) segmented on the chest CT images. Following dimensionality reduction and selection of dominant radiomic features, four four-class classification models were established: (1) Extreme Gradient Boosting (XGBoost), (2) One-vs-Rest Logistic Regression, (3) Gradient Boosting, and (4) Random Forest. The performance of these models in classifying breast density from CT images was then evaluated. A total of 330 patients, aged 23-79 years, were included in the analysis. The breast ROIs were automatically segmented using a U-net neural network model and subsequently refined and calibrated manually. A total of 1427 radiomic features were extracted; after dimensionality reduction and feature selection, 28 dominant features closely associated with breast density classification were retained to construct the four classification models. Among the tested models, XGBoost achieved the best performance, with a classification accuracy of 86.6%. Receiver operating characteristic analysis showed area under the curve (AUC) values of 1.00, 0.93, 0.93, and 0.99 for the four breast density categories, along with a micro-averaged AUC of 0.97 and a macro-averaged AUC of 0.96. Chest CT scans, combined with radiomics models, can accurately classify breast density, providing information relevant to breast cancer risk stratification. The proposed classification model offers a promising tool for automated breast density assessment, which could enhance personalized breast cancer screening and clinical decision-making.
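A compact sketch of this kind of radiomics-to-classifier pipeline is shown below; the PyRadiomics extraction is indicated in comments (the file paths are placeholders), and random arrays stand in for the study's 330-patient, 28-feature matrix.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier  # pip install xgboost

# Feature extraction would use PyRadiomics, roughly (paths are placeholders):
#   from radiomics import featureextractor
#   extractor = featureextractor.RadiomicsFeatureExtractor()
#   features = extractor.execute("chest_ct.nii.gz", "breast_roi_mask.nii.gz")

# Stand-in data: 330 patients x 28 selected features, 4 density classes (a-d).
rng = np.random.default_rng(0)
X = StandardScaler().fit_transform(rng.random((330, 28)))
y = rng.integers(0, 4, 330)

clf = XGBClassifier(objective="multi:softprob")
clf.fit(X, y)
probs = clf.predict_proba(X)  # one probability column per density category

# Macro-averaged AUC over the four one-vs-rest ROC curves.
print("macro AUC:", roc_auc_score(y, probs, multi_class="ovr", average="macro"))
```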

Deep Learning for Breast Mass Discrimination: Integration of B-Mode Ultrasound & Nakagami Imaging with Automatic Lesion Segmentation

Hassan, M. W., Hossain, M. M.

medRxiv preprint · Sep 15, 2025
Objective: This study aims to enhance breast cancer diagnosis by developing an automated deep learning framework for real-time, quantitative ultrasound imaging. Breast cancer is the second leading cause of cancer-related deaths among women, and early detection is crucial for improving survival rates. Conventional ultrasound, valued for its non-invasive nature and real-time capability, is limited by qualitative assessment and inter-observer variability. Quantitative ultrasound (QUS) methods, including Nakagami imaging, which models the statistical distribution of backscattered signals and lesion morphology, present an opportunity for more objective analysis.
Methods: The proposed framework integrates three convolutional neural networks (CNNs): (1) NakaSynthNet, synthesizing quantitative Nakagami parameter images from B-mode ultrasound; (2) SegmentNet, enabling automated lesion segmentation; and (3) FeatureNet, which combines anatomical and statistical features to classify lesions as benign or malignant. Training utilized a diverse dataset of 110,247 images, comprising clinical B-mode scans and various simulated examples (fruit, mammographic lesions, digital phantoms). Quantitative performance was evaluated using mean squared error (MSE), structural similarity index (SSIM), segmentation accuracy, sensitivity, specificity, and area under the curve (AUC).
Results: NakaSynthNet achieved real-time synthesis at 21 frames/s, with an MSE of 0.09% and SSIM of 98%. SegmentNet reached 98.4% accuracy, and FeatureNet delivered 96.7% overall classification accuracy, 93% sensitivity, 98% specificity, and an AUC of 98%.
Conclusion: The proposed multi-parametric deep learning pipeline enables accurate, real-time breast cancer diagnosis from ultrasound data using objective quantitative imaging.
Significance: This framework advances the clinical utility of ultrasound by reducing subjectivity and providing robust, multi-parametric information for improved breast cancer detection.
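Nakagami imaging rests on a simple moment-based estimator: for envelope samples R, the scale is Ω = E[R²] and the shape is m = Ω²/Var(R²). Below is a NumPy sketch with synthetic samples, using the fact that R² of a Nakagami(m, Ω) variable is Gamma-distributed.

```python
import numpy as np

def nakagami_params(envelope):
    """Moment-based estimates of the Nakagami shape m and scale omega
    from a patch of ultrasound echo-envelope samples."""
    r2 = np.asarray(envelope, dtype=float) ** 2
    omega = r2.mean()            # scale: mean backscattered power, E[R^2]
    m = omega**2 / r2.var()      # shape: E[R^2]^2 / Var(R^2)
    return m, omega

# Toy check: if R ~ Nakagami(m, omega), then R^2 ~ Gamma(shape=m, scale=omega/m).
rng = np.random.default_rng(0)
true_m, true_omega = 0.8, 1.0
samples = np.sqrt(rng.gamma(shape=true_m, scale=true_omega / true_m, size=100_000))
print(nakagami_params(samples))  # approximately (0.8, 1.0)
```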

Deep learning based multi-shot breast diffusion MRI: Improving imaging quality and reduced distortion.

Chien N, Cho YH, Wang MY, Tsai LW, Yeh CY, Li CW, Lan P, Wang X, Liu KL, Chang YC

PubMed · Sep 15, 2025
To investigate the imaging performance of deep learning reconstruction of multiplexed sensitivity-encoding (MUSE) diffusion-weighted imaging (MUSE DL) compared with single-shot diffusion-weighted imaging (SS-DWI) in the breast. In this prospective, institutional review board-approved study, both SS-DWI and multi-shot MUSE DWI were performed on patients, and the MUSE DWI was processed using deep learning reconstruction (MUSE DL). Quantitative analysis included apparent diffusion coefficients (ADCs) and the signal-to-noise ratio (SNR) within fibroglandular tissue (FGT), adjacent pectoralis muscle, and breast tumors. The Hausdorff distance (HD) was used as a distortion index to compare breast contours between T2-weighted anatomical images, SS-DWI, and MUSE images. Subjective visual qualitative analysis was performed using a Likert scale, and quantitative analyses were assessed using Friedman's rank-based analysis with Bonferroni correction. Sixty-one female participants (mean age, 49.07 ± 11.0 [SD] years; age range, 23-75 years) with 65 breast lesions were included, and all data were acquired using a 3 T MRI scanner. MUSE DL yielded significant improvement in image quality compared with non-DL MUSE in both 2-shot and 4-shot settings (SNR enhancement in FGT: 2-shot DL, 207.8% [125.5-309.3]; 4-shot DL, 175.1% [102.2-223.5]). No significant difference was observed in ADC between MUSE, MUSE DL, and SS-DWI for either benign (P = 0.154) or malignant tumors (P = 0.167). There was significantly less distortion in the 2- and 4-shot MUSE DL images (HD, 3.11 mm and 2.58 mm) than in the SS-DWI images (4.15 mm; P < 0.001). MUSE DL enhances SNR, minimizes image distortion, and preserves lesion diagnostic accuracy and ADC values.
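The distortion index used here, the Hausdorff distance between breast contours, can be computed with SciPy; the point sets below are toy stand-ins for contours segmented from the T2-weighted and diffusion images.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Toy breast-contour point sets (x, y in mm); real contours would be
# extracted from the T2-weighted anatomical and DWI images.
t2_contour = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 5.0], [0.0, 5.0]])
dwi_contour = np.array([[0.5, 0.3], [10.4, 0.1], [9.8, 5.6], [-0.2, 4.9]])

# Symmetric Hausdorff distance: the larger of the two directed distances.
hd = max(directed_hausdorff(t2_contour, dwi_contour)[0],
         directed_hausdorff(dwi_contour, t2_contour)[0])
print(f"Hausdorff distance: {hd:.2f} mm")  # larger = more geometric distortion
```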

GLAM: Geometry-Guided Local Alignment for Multi-View VLP in Mammography

Yuexi Du, Lihui Chen, Nicha C. Dvornek

arXiv preprint · Sep 12, 2025
Mammography screening is an essential tool for early detection of breast cancer, and the speed and accuracy of mammography interpretation have the potential to be improved with deep learning methods. However, the development of a foundation visual language model (VLM) is hindered by limited data and domain differences between natural and medical images. Existing mammography VLMs, adapted from natural images, often ignore domain-specific characteristics such as the multi-view relationships in mammography. Unlike radiologists, who analyze both views together to exploit ipsilateral correspondence, current methods either treat the views as independent images or fail to properly model multi-view correspondence, losing critical geometric context and yielding suboptimal predictions. We propose GLAM: Global and Local Alignment for Multi-view mammography, for VLM pretraining using geometry guidance. By leveraging prior knowledge about the multi-view imaging process of mammograms, our model learns local cross-view alignments and fine-grained local features through joint global and local, visual-visual, and visual-language contrastive learning. Pretrained on EMBED [14], one of the largest open mammography datasets, our model outperforms baselines across multiple datasets under different settings.
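A common way to implement the visual-visual contrastive alignment described above is a symmetric InfoNCE loss over paired view embeddings; the sketch below is a generic version with assumed batch and embedding sizes, not GLAM's exact geometry-guided objective.

```python
import torch
import torch.nn.functional as F

def info_nce(view_a, view_b, temperature=0.07):
    """Symmetric InfoNCE: matched CC/MLO embedding pairs are pulled together,
    all other pairings in the batch are pushed apart."""
    a = F.normalize(view_a, dim=-1)
    b = F.normalize(view_b, dim=-1)
    logits = a @ b.t() / temperature       # pairwise cosine similarities
    targets = torch.arange(len(a))         # i-th CC matches i-th MLO
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

cc = torch.randn(8, 128)   # CC-view embeddings (batch, dim)
mlo = torch.randn(8, 128)  # MLO-view embeddings of the same breasts
print(info_nce(cc, mlo))
```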

Machine learning model based on the radiomics features of CE-CBBCT shows promising predictive ability for HER2-positive BC.

Chen X, Li M, Liang X, Su D

PubMed · Sep 12, 2025
This study aimed to investigate whether a machine learning (ML) model based on contrast-enhanced cone-beam breast computed tomography (CE-CBBCT) radiomic features could predict human epidermal growth factor receptor 2 (HER2)-positive breast cancer (BC). Eighty-eight patients diagnosed with invasive BC who underwent preoperative CE-CBBCT were retrospectively enrolled. Patients were randomly assigned to the training and testing cohorts at a ratio of approximately 7:3. A total of 1046 quantitative radiomics features were extracted from the CE-CBBCT images using PyRadiomics. Z-score normalization was used to standardize the radiomics features, and the Pearson correlation coefficient and one-way analysis of variance were used to identify significant features. Six ML algorithms (support vector machine, random forest [RF], logistic regression, AdaBoost, linear discriminant analysis, and decision tree) were used to construct predictive models. Receiver operating characteristic curves were constructed and the area under the curve (AUC) was calculated. The top-performing radiomics features were selected to develop the six predictive models. The AUC values for support vector machine, linear discriminant analysis, RF, logistic regression, AdaBoost, and decision tree were 0.741, 0.753, 1.000, 0.752, 1.000, and 1.000, respectively, in the training cohort, and 0.700, 0.671, 0.806, 0.665, 0.706, and 0.712, respectively, in the testing cohort. Notably, the RF model exhibited the highest predictive ability, with an AUC of 0.806 in the testing cohort. For the RF model, the DeLong test showed a statistically significant difference in AUC between the training and testing cohorts (Z = 2.105, P = .035). The ML model based on CE-CBBCT radiomics features showed promising predictive ability for HER2-positive BC, with the RF model demonstrating the best diagnostic performance.
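A sketch of the described workflow (z-score normalization, univariate feature selection, an RF classifier, and a test-cohort AUC) on synthetic data is shown below; SelectKBest with an ANOVA F-test stands in for the paper's Pearson-correlation plus one-way-ANOVA selection, and all numbers are made up.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Stand-in data: 88 patients x 1046 radiomics features, binary HER2 status.
rng = np.random.default_rng(0)
X, y = rng.random((88, 1046)), rng.integers(0, 2, 88)

# ~7:3 split; fit normalization on the training cohort only to avoid leakage.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# Univariate ANOVA F-test as a stand-in for the paper's feature selection.
selector = SelectKBest(f_classif, k=6).fit(X_tr, y_tr)
X_tr, X_te = selector.transform(X_tr), selector.transform(X_te)

rf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
```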

Breast cancer risk assessment for screening: a hybrid artificial intelligence approach.

Tendero R, Larroza A, Pérez-Benito FJ, Perez-Cortes JC, Román M, Llobet R

PubMed · Sep 11, 2025
This study evaluates whether integrating clinical data with mammographic features using artificial intelligence (AI) improves 2-year breast cancer risk prediction compared with using either data type alone. This retrospective nested case-control study included 2193 women (mean age, 59 ± 5 years) screened at Hospital del Mar, Spain (2013-2020), with 418 cases (mammograms taken 2 years before diagnosis) and 1775 controls (cancer-free for ≥ 2 years). Three models were evaluated: (1) ERTpd+im, based on Extremely Randomized Trees (ERT) and split into sub-models for personal data (ERTpd) and image features (ERTim); (2) an image-only convolutional neural network (CNN); and (3) a hybrid model (ERTpd+im + CNN). Performance was assessed with five-fold cross-validation, the area under the receiver operating characteristic curve (AUC), bootstrapping for confidence intervals, and DeLong tests for paired data. Robustness was evaluated across breast density quartiles and by detection type (screen-detected vs. interval cancers). The hybrid model achieved an AUC of 0.75 (95% CI: 0.71-0.76), significantly outperforming the CNN model (AUC, 0.74; 95% CI: 0.70-0.75; p < 0.05) and slightly surpassing ERTpd+im (AUC, 0.74; 95% CI: 0.70-0.76). The sub-models ERTpd and ERTim had AUCs of 0.59 and 0.73, respectively. The hybrid model performed consistently across breast density quartiles (p > 0.05) and better for screen-detected (AUC, 0.79) than for interval cancers (AUC, 0.59; p < 0.001). This study shows that integrating clinical and mammographic data with AI improves 2-year breast cancer risk prediction, outperforming single-source models. The hybrid model demonstrated higher accuracy and robustness across breast density quartiles, with better performance for screen-detected cancers.
Question: Current breast cancer risk models have limitations in accuracy. Can integrating clinical and mammographic data using artificial intelligence (AI) improve short-term risk prediction?
Findings: A hybrid model combining clinical and imaging data achieved the highest accuracy in predicting 2-year breast cancer risk, outperforming models using either data type alone.
Clinical relevance: Integrating clinical and mammographic data with AI improves breast cancer risk prediction, enabling personalized screening strategies and supporting early detection. It helps identify high-risk women and optimizes the use of additional assessments within screening programs.
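One simple way to realize such a hybrid is late fusion of a tree-based clinical model with an image-derived CNN score; the sketch below averages the two probabilities, which is an assumption for illustration, as the abstract does not specify the exact fusion scheme.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

# Stand-in cohort: clinical/personal features plus a per-image CNN risk score.
rng = np.random.default_rng(0)
clinical = rng.random((2193, 10))   # e.g. age, density, history (assumed fields)
cnn_risk = rng.random(2193)         # probability from an image-only CNN
labels = rng.integers(0, 2, 2193)   # cancer within 2 years (1) or not (0)

# Extremely Randomized Trees on the clinical features (the "ERT" component).
ert = ExtraTreesClassifier(random_state=0).fit(clinical, labels)
ert_risk = ert.predict_proba(clinical)[:, 1]

# Hybrid score: average of the tree-based and CNN probabilities.
hybrid_risk = (ert_risk + cnn_risk) / 2
print(hybrid_risk[:5])
```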

Exploring Women's Perceptions of Traditional Mammography and the Concept of AI-Driven Thermography to Improve the Breast Cancer Screening Journey: Mixed Methods Study.

Sirka Kacafírková K, Poll A, Jacobs A, Cardone A, Ventura JJ

PubMed · Sep 10, 2025
Breast cancer is the most common cancer among women and a leading cause of mortality in Europe. Early detection through screening reduces mortality, yet participation in mammography-based programs remains suboptimal due to discomfort, radiation exposure, and accessibility issues. Thermography, particularly when driven by artificial intelligence (AI), is being explored as a noninvasive, radiation-free alternative. However, its acceptance, reliability, and impact on the screening experience remain underexplored. This study aimed to explore women's perceptions of AI-enhanced thermography (ThermoBreast) as an alternative to mammography, to identify barriers and motivators related to breast cancer screening, and to assess how ThermoBreast might improve the screening experience. A mixed methods approach was adopted, combining an online survey with follow-up focus groups. The survey captured women's knowledge, attitudes, and experiences related to breast cancer screening and was used to recruit participants for qualitative exploration. After the focus groups, the survey was relaunched to include additional respondents. Quantitative data were analyzed using SPSS (IBM Corp), and qualitative data were analyzed in MAXQDA (VERBI Software). Findings from both strands were synthesized to redesign the breast cancer screening journey. A total of 228 valid survey responses were analyzed. Of these, 154 women (68%) had previously undergone mammography, while 74 (32%) had not. The most reported motivators were belief in prevention (69/154, 45%), invitations from screening programs (68/154, 44%), and doctor recommendations (45/154, 29%). Among nonscreeners, key barriers included no recommendation from a doctor (39/74, 53%), absence of symptoms (27/74, 36%), and perceived age ineligibility (17/74, 23%). Pain, long appointment waits, and fear of radiation were also mentioned. In total, 18 women (mean age 45.3 years, SD 13.6) participated in 6 focus groups. Participants emphasized the importance of respectful and empathetic interactions with medical staff, clear communication, and emotional comfort, factors they perceived as more influential than the screening technology itself. ThermoBreast was positively received for being contactless, radiation-free, and potentially more comfortable. Participants described it as "less traumatic," "easier," and "a game changer." However, concerns were raised regarding its novelty, lack of clinical validation, and data privacy. Some participants expressed the need for human oversight in AI-supported procedures and requested more information on how AI is used. Based on these insights, an updated screening journey was developed, highlighting improvements in preparation, appointment booking, privacy, and communication of results. While AI-driven thermography shows promise as a noninvasive, user-friendly alternative to mammography, its adoption depends on trust, clinical validation, and effective communication from health care professionals. It may expand screening access for populations underserved by mammography, such as younger and immobile women, but it does not eliminate all participation barriers. Long-term studies and direct comparisons between mammography and thermography are needed to assess diagnostic accuracy, patient experience, and their impact on screening participation and outcomes.