Page 7 of 14133 results

Prediction of mammographic breast density based on clinical breast ultrasound images using deep learning: a retrospective analysis.

Bunnell A, Valdez D, Wolfgruber TK, Quon B, Hung K, Hernandez BY, Seto TB, Killeen J, Miyoshi M, Sadowski P, Shepherd JA

PubMed · Jun 1, 2025
Breast density, as derived from mammographic images and defined by the Breast Imaging Reporting & Data System (BI-RADS), is one of the strongest risk factors for breast cancer. Breast ultrasound is an alternative breast cancer screening modality, particularly useful in low-resource, rural contexts. To date, breast ultrasound has not been used to inform risk models that need breast density. The purpose of this study is to explore the use of artificial intelligence (AI) to predict BI-RADS breast density category from clinical breast ultrasound imaging. We compared deep learning methods for predicting breast density directly from breast ultrasound imaging, as well as machine learning models from breast ultrasound image gray-level histograms alone. The use of AI-derived breast ultrasound breast density as a breast cancer risk factor was compared to clinical BI-RADS breast density. Retrospective (2009-2022) breast ultrasound data were split by individual into 70/20/10% groups for training, validation, and held-out testing for reporting results. 405,120 clinical breast ultrasound images from 14,066 women (mean age 53 years, range 18-99 years) with clinical breast ultrasound exams were retrospectively selected for inclusion from three institutions: 10,393 training (302,574 images), 2,593 validation (69,842 images), and 1,074 testing (28,616 images). The AI model achieves AUROC 0.854 in breast density classification and statistically significantly outperforms all image statistic-based methods. In an existing clinical 5-year breast cancer risk model, breast ultrasound AI and clinical breast density predict 5-year breast cancer risk with 0.606 and 0.599 AUROC (DeLong's test p-value: 0.67), respectively. BI-RADS breast density can be estimated from breast ultrasound imaging with high accuracy. The AI model provided superior estimates to other machine learning approaches. Furthermore, we demonstrate that age-adjusted, AI-derived breast ultrasound breast density provides similar predictive power to mammographic breast density in our population. Estimated breast density from ultrasound may be useful in performing breast cancer risk assessment in areas where mammography may not be available. National Cancer Institute.
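The study's 70/20/10% split "by individual" (so no woman's images leak across sets) can be sketched as follows; the data structure and function names are illustrative, not from the paper.

```python
# Patient-level train/val/test split: shuffle patients, not images,
# so every image of one woman lands in exactly one partition.
import random

def split_by_patient(image_patient_pairs, seed=0):
    """image_patient_pairs: iterable of (image_id, patient_id)."""
    patients = sorted({p for _, p in image_patient_pairs})
    rng = random.Random(seed)
    rng.shuffle(patients)
    n = len(patients)
    n_train, n_val = int(0.7 * n), int(0.2 * n)
    train_p = set(patients[:n_train])
    val_p = set(patients[n_train:n_train + n_val])
    splits = {"train": [], "val": [], "test": []}
    for img, pat in image_patient_pairs:
        if pat in train_p:
            splits["train"].append(img)
        elif pat in val_p:
            splits["val"].append(img)
        else:
            splits["test"].append(img)
    return splits
```

Splitting at the patient level avoids the optimistic bias that per-image splitting would introduce when a woman contributes many near-identical images.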

AI-supported approaches for mammography single and double reading: A controlled multireader study.

Brancato B, Magni V, Saieva C, Risso GG, Buti F, Catarzi S, Ciuffi F, Peruzzi F, Regini F, Ambrogetti D, Alabiso G, Cruciani A, Doronzio V, Frati S, Giannetti GP, Guerra C, Valente P, Vignoli C, Atzori S, Carrera V, D'Agostino G, Fazzini G, Picano E, Turini FM, Vani V, Fantozzi F, Vietro D, Cavallero D, Vietro F, Plataroti D, Schiaffino S, Cozzi A

PubMed · Jun 1, 2025
To assess the impact of artificial intelligence (AI) on the diagnostic performance of radiologists with varying experience levels in mammography reading, considering single and simulated double reading approaches. In this retrospective study, 150 mammography examinations (30 with pathology-confirmed malignancies, 120 without malignancies [confirmed by 2-year follow-up]) were reviewed according to five approaches: A) human single reading by 26 radiologists of varying experience; B) AI single reading (Lunit INSIGHT MMG); C) human single reading with simultaneous AI support; D) simulated human-human double reading; E) simulated human-AI double reading, with AI as second independent reader flagging cases with a cancer probability ≥10 %. Sensitivity and specificity were calculated and compared using McNemar's test and univariate and multivariable logistic regression. Compared to single reading without AI support, single reading with simultaneous AI support improved mean sensitivity from 69.2 % (standard deviation [SD] 15.6) to 84.5 % (SD 8.1, p < 0.001), providing comparable mean specificity (91.8 % versus 90.8 %, p = 0.06). The sensitivity increase provided by the AI-supported single reading was largest in the group of radiologists with a sensitivity below the median in the non-supported single reading, from 56.7 % (SD 12.1) to 79.7 % (SD 10.2, p < 0.001). In the simulated human-AI double reading approach, sensitivity further increased to 91.8 % (SD 3.4), surpassing that of the human-human simulated double reading (87.4 %, SD 8.8, p = 0.016), with comparable mean specificity (84.0 % versus 83.0 %, p = 0.17). AI support significantly enhanced sensitivity across all reading approaches, particularly benefiting worse-performing radiologists. In the simulated double reading approaches, incorporating AI as an independent second reader significantly increased sensitivity without compromising specificity.
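McNemar's test, which the study uses to compare paired reader decisions on the same 150 cases, depends only on the discordant pairs. A minimal sketch with a continuity-corrected statistic (the counts in the test below are invented, not the paper's):

```python
# Continuity-corrected McNemar test for paired binary decisions.
import math

def mcnemar_p(b, c):
    """b = cases flagged only by reader A, c = cases flagged only by reader B.
    Concordant cases cancel out and do not enter the statistic."""
    if b + c == 0:
        return 1.0
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    # Survival function of chi-square with 1 df, via the normal erfc:
    # P(Z^2 > x) = erfc(sqrt(x / 2)) for Z ~ N(0, 1).
    return math.erfc(math.sqrt(chi2 / 2))
```

Balanced discordance (b ≈ c) yields a large p-value; a strongly one-sided discordance yields a small one, matching the intuition that only disagreements between the two readers carry information.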

BUS-M2AE: Multi-scale Masked Autoencoder for Breast Ultrasound Image Analysis.

Yu L, Gou B, Xia X, Yang Y, Yi Z, Min X, He T

PubMed · Jun 1, 2025
Masked AutoEncoder (MAE) has demonstrated significant potential in medical image analysis by reducing the cost of manual annotations. However, MAE and its recent variants are not well-developed for ultrasound images in breast cancer diagnosis, as they struggle to generalize to the task of distinguishing ultrasound breast tumors of varying sizes. This limitation hinders the model's ability to adapt to the diverse morphological characteristics of breast tumors. In this paper, we propose a novel Breast UltraSound Multi-scale Masked AutoEncoder (BUS-M2AE) model to address the limitations of the general MAE. BUS-M2AE incorporates multi-scale masking methods at both the token level during the image patching stage and the feature level during the feature learning stage. These two multi-scale masking methods enable flexible strategies to match the explicit masked patches and the implicit features with varying tumor scales. By introducing these multi-scale masking methods in the image patching and feature learning phases, BUS-M2AE allows the pre-trained vision transformer to adaptively perceive and accurately distinguish breast tumors of different sizes, thereby improving the model's overall performance in handling diverse tumor morphologies. Comprehensive experiments demonstrate that BUS-M2AE outperforms recent MAE variants and commonly used supervised learning methods in breast cancer classification and tumor segmentation tasks.
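The token-level part of the idea, masking at several patch scales so the pretext task covers both small and large tumors, can be illustrated with a toy mask generator over a ViT token grid. This is a hedged sketch of the general multi-scale masking concept, not BUS-M2AE's actual scheme (grid size, block sizes, and ratio below are invented).

```python
# Toy multi-scale token masking: drop square blocks of several sizes
# until roughly `ratio` of the tokens are hidden. True = masked token.
import numpy as np

def multiscale_mask(grid=14, block_sizes=(1, 2, 4), ratio=0.75, seed=0):
    rng = np.random.default_rng(seed)
    mask = np.zeros((grid, grid), dtype=bool)
    target = int(ratio * grid * grid)
    while mask.sum() < target:
        b = int(rng.choice(block_sizes))          # pick a masking scale
        r = int(rng.integers(0, grid - b + 1))    # top-left row of the block
        c = int(rng.integers(0, grid - b + 1))    # top-left column
        mask[r:r + b, c:c + b] = True             # hide a b x b token block
    return mask
```

Mixing block sizes forces the encoder to reconstruct both fine local texture (1x1 holes) and larger structures (4x4 holes), loosely mirroring the varying tumor scales the paper targets.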

Keeping AI on Track: Regular monitoring of algorithmic updates in mammography.

Taib AG, James JJ, Partridge GJW, Chen Y

PubMed · Jun 1, 2025
To demonstrate a method of benchmarking the performance of two consecutive software releases of the same commercial artificial intelligence (AI) product against trained human readers using the Personal Performance in Mammographic Screening (PERFORMS) external quality assurance scheme. In this retrospective study, ten PERFORMS test sets, each consisting of 60 challenging cases, were evaluated by human readers between 2012 and 2023, and by Version 1 (V1) and Version 2 (V2) of the same AI model in 2022 and 2023, respectively. Both AI and humans considered each breast independently, taking the highest suspicion-of-malignancy score per breast for non-malignant cases and per lesion for breasts with malignancy. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were calculated for comparison, with the study powered to detect a medium-sized effect (odds ratio, 3.5 or 0.29) for sensitivity. The study included 1,254 human readers, with a total of 328 malignant lesions, 823 normal, and 55 benign breasts analysed. No significant difference was found between the AUCs for AI V1 (0.93) and V2 (0.94) (p = 0.13). In terms of sensitivity, no difference was observed between human readers and AI V1 (83.2 % vs 87.5 %, respectively, p = 0.12); however, V2 outperformed humans (88.7 %, p = 0.04). Specificity was higher for both AI V1 (87.4 %) and V2 (88.2 %) than for human readers (79.0 %, p < 0.01 for both). The upgraded AI model showed no significant difference in diagnostic performance compared to its predecessor when evaluating mammograms from PERFORMS test sets.
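The per-breast sensitivity and specificity figures above reduce to simple counts over (ground truth, reader call) pairs; a generic sketch (the counts in the test are illustrative, not the study's):

```python
# Sensitivity and specificity from paired truth/decision labels.
def sens_spec(decisions):
    """decisions: iterable of (is_malignant, called_malignant) booleans."""
    tp = sum(t and c for t, c in decisions)           # hits
    fn = sum(t and not c for t, c in decisions)       # missed cancers
    tn = sum((not t) and (not c) for t, c in decisions)
    fp = sum((not t) and c for t, c in decisions)     # false recalls
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    return sens, spec
```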

Artificial intelligence-assisted magnetic resonance lymphography for evaluation of micro- and macro-sentinel lymph node metastasis in breast cancer.

Yang Z, Ling J, Sun W, Pan C, Chen T, Dong C, Zhou X, Zhang J, Zheng J, Ma X

PubMed · Jun 1, 2025
Contrast-enhanced magnetic resonance lymphography (CE-MRL) plays a crucial role in the preoperative diagnosis of tumor-metastatic sentinel lymph nodes (T-SLN), by integrating detailed information about lymphatic anatomy and drainage function from MR images. However, clinical gadolinium-based contrast agents are of seriously limited use for identifying T-SLN, owing to their small molecular structure and rapid diffusion into the bloodstream. Herein, we propose a novel albumin-modified, manganese-based nanoprobe-enhanced MRL method for accurately assessing micro- and macro-T-SLN. Specifically, the inherent concentration gradient of albumin between blood and interstitial fluid aids the movement of the nanoprobes into the lymphatic system. Micro-T-SLN exhibit a notably higher MR signal due to the formation of new lymphatic vessels and increased lymphatic flow, allowing a greater influx of nanoprobes. In contrast, macro-T-SLN show a lower MR signal as a result of tumor cell proliferation and damage to the lymphatic vessels. Additionally, a highly accurate and sensitive machine learning model has been developed to guide the identification of micro- and macro-T-SLN by analyzing manganese-enhanced MR images. In conclusion, our research presents a novel comprehensive assessment framework utilizing albumin-modified manganese-based nanoprobes for highly sensitive evaluation of micro- and macro-T-SLN in breast cancer.

Review and reflections on live AI mammographic screen reading in a large UK NHS breast screening unit.

Puri S, Bagnall M, Erdelyi G

PubMed · Jun 1, 2025
The Radiology team from a large Breast Screening Unit in the UK, with a screening population of over 135,000, took part in a service evaluation project using artificial intelligence (AI) for reading breast screening mammograms. The aims were to evaluate the clinical benefit AI may provide when implemented as a silent reader in a double reading breast screening programme, and to evaluate the feasibility and operational impact of deploying AI into the breast screening programme. The unit was one of 14 breast screening sites in the UK to take part in this project, and we present our local experience with AI in breast screening. A commercially available AI platform was deployed and worked in real time as a 'silent third reader' so as not to impact standard workflows and patient care. All cases flagged by AI but not recalled by standard double reading (positive discordant cases) were reviewed, along with all cases recalled by human readers but not flagged by AI (negative discordant cases). 9,547 cases were included in the evaluation. 1,135 positive discordant cases were reviewed; one woman was recalled from these reviews and was not found to have cancer on further assessment in the breast assessment clinic. 139 negative discordant cases were reviewed, and eight cancer cases (8.79 % of total cancers detected in this period) recalled by human readers were not detected by AI. No additional cancers were detected by AI during the study. Performance of AI was inferior to that of human readers in our unit. Having missed a significant number of cancers, it was judged unreliable and not safe to use in clinical practice. AI is not currently of sufficient accuracy to be considered in the NHS Breast Screening Programme.
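The discordance bookkeeping described above (AI-only flags versus human-only recalls) is a simple set partition; a sketch, assuming each case record carries a human recall flag and an AI flag (the record shape is invented):

```python
# Partition cases into positive discordant (AI flagged, humans did not)
# and negative discordant (humans recalled, AI did not).
def discordant_cases(cases):
    """cases: iterable of (case_id, human_recall, ai_flag) tuples."""
    positive = [cid for cid, h, a in cases if a and not h]
    negative = [cid for cid, h, a in cases if h and not a]
    return positive, negative
```

In the silent-reader design, only these two discordant lists require human review; concordant cases need no extra work, which is what keeps the evaluation operationally cheap.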

Enhancing radiomics features via a large language model for classifying benign and malignant breast tumors in mammography.

Ra S, Kim J, Na I, Ko ES, Park H

PubMed · Jun 1, 2025
Radiomics is widely used to assist in clinical decision-making, disease diagnosis, and treatment planning for various target organs, including the breast. Recent advances in large language models (LLMs) have helped enhance radiomics analysis. Herein, we sought to improve radiomics analysis by incorporating LLM-learned clinical knowledge, to classify benign and malignant tumors in breast mammography. We extracted radiomics features from the mammograms based on the region of interest and retained the features related to the target task. Using prompt engineering, we devised an input sequence that reflected the selected features and the target task. The input sequence was fed to the chosen LLM (LLaMA variant), which was fine-tuned using low-rank adaptation to enhance radiomics features. This was then evaluated on two mammogram datasets (VinDr-Mammo and INbreast) against conventional baselines. The enhanced radiomics-based method performed better than baselines using conventional radiomics features tested on two mammogram datasets, achieving accuracies of 0.671 for the VinDr-Mammo dataset and 0.839 for the INbreast dataset. Conventional radiomics models require retraining from scratch for an unseen dataset using a new set of features. In contrast, the model developed in this study effectively reused the common features between the training and unseen datasets by explicitly linking feature names with feature values, leading to extensible learning across datasets. Our method performed better than the baseline method in this retraining setting using an unseen dataset. Our method, one of the first to incorporate LLM into radiomics, has the potential to improve radiomics analysis.
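The abstract says features are passed to the LLM by "explicitly linking feature names with feature values" in a prompt. A hypothetical sketch of such an input sequence; the template wording and default task string are invented, since the paper's exact prompt is not given here:

```python
# Build an LLM input sequence from selected radiomics features by
# pairing each feature name with its value in a fixed, sorted order.
def build_prompt(features,
                 task="Classify the breast tumor as benign or malignant."):
    """features: dict mapping radiomics feature name -> numeric value."""
    lines = [f"{name}: {value:.4f}" for name, value in sorted(features.items())]
    return task + "\nRadiomics features:\n" + "\n".join(lines)
```

Because the prompt names each feature explicitly, features shared between the training set and an unseen dataset stay aligned by name rather than by column position, which is what enables the cross-dataset reuse the authors describe.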

Image normalization techniques and their effect on the robustness and predictive power of breast MRI radiomics.

Schwarzhans F, George G, Escudero Sanchez L, Zaric O, Abraham JE, Woitek R, Hatamikia S

PubMed · Jun 1, 2025
Radiomics analysis has emerged as a promising approach to aid in cancer diagnosis and treatment. However, radiomics research currently lacks standardization, and radiomics features can be highly dependent on the acquisition and pre-processing techniques used. In this study, we aim to investigate the effect of various image normalization techniques on the robustness of radiomics features extracted from breast cancer patient MRI scans. MRI scans from the publicly available MAMA-MIA dataset and an internal breast MRI test set depicting triple negative breast cancer (TNBC) were used. We compared the effect of commonly used image normalization techniques on radiomics feature robustness using the concordance correlation coefficient (CCC) between multiple combinations of normalization approaches. We also trained machine learning-based prediction models of pathologic complete response (pCR) on radiomics after different normalization techniques were used and compared their areas under the receiver operating characteristic curve (ROC-AUC). For predicting complete pathological response from pre-treatment breast cancer MRI radiomics, the highest overall ROC-AUC was achieved by a combination of three different normalization techniques, indicating their potentially powerful role when working with heterogeneous imaging data. The effect of normalization was more pronounced with smaller training data, and normalization may be less important with increasing abundance of training data. Additionally, we observed considerable differences between MRI data sets in the robustness of their features towards normalization. Overall, we were able to demonstrate the importance of selecting and standardizing normalization methods for accurate and reliable radiomics analysis in breast MRI scans, especially with small training data sets.
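The robustness metric named above, Lin's concordance correlation coefficient, measures agreement between the same feature computed under two normalization settings; unlike Pearson's r it penalizes shifts and scale changes. A minimal implementation:

```python
# Lin's concordance correlation coefficient between two paired samples.
import numpy as np

def ccc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population variances
    cov = ((x - mx) * (y - my)).mean()
    # Agreement = 1 only when the points lie on the identity line.
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

A feature with CCC near 1 across normalization settings is robust; a constant offset between settings already drags the CCC below the Pearson correlation.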

Advanced image preprocessing and context-aware spatial decomposition for enhanced breast cancer segmentation.

Kalpana G, Deepa N, Dhinakaran D

PubMed · Jun 1, 2025
Breast cancer segmentation in medical imaging is hampered by noise, contrast variation, and low resolution, which make malignant sites challenging to distinguish. In this paper, we propose a new solution that integrates AIPT (Advanced Image Preprocessing Techniques) with CASDN (Context-Aware Spatial Decomposition Network) to overcome these problems. The preprocessing pipeline applies a set of methods (Adaptive Thresholding, Hierarchical Contrast Normalization, Contextual Feature Augmentation, Multi-Scale Region Enhancement, and Dynamic Histogram Equalization) to improve image quality. These methods smooth edges, equalize contrast, and preserve contextual detail, effectively suppressing noise and yielding clearer, less distorted images. Experimental outcomes demonstrate its effectiveness, delivering a Dice Coefficient of 0.89, IoU of 0.85, and a Hausdorff Distance of 5.2, showing enhanced capability in segmenting significant tumor margins over other techniques. Furthermore, the improved preprocessing pipeline benefits classification models: Convolutional Neural Networks achieve a classification accuracy of 85.3 % with an AUC-ROC of 0.90, a significant enhancement over conventional techniques.
•Enhanced segmentation accuracy with advanced preprocessing and CASDN, achieving superior performance metrics.
•Robust multi-modality compatibility, ensuring effectiveness across mammograms, ultrasounds, and MRI scans.
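The Dice and IoU figures quoted above are standard overlap metrics on binary masks; a generic sketch (not the paper's code):

```python
# Dice coefficient and intersection-over-union for binary segmentation masks.
import numpy as np

def dice_iou(pred, truth):
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    total = pred.sum() + truth.sum()
    dice = 2 * inter / total if total else 1.0   # empty-vs-empty = perfect
    iou = inter / union if union else 1.0
    return float(dice), float(iou)
```

Dice weights the overlap against the mean mask size, IoU against the union; Dice is always at least as large as IoU for the same pair of masks, which is why papers typically report both.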

DCE-MRI based deep learning analysis of intratumoral subregion for predicting Ki-67 expression level in breast cancer.

Ding Z, Zhang C, Xia C, Yao Q, Wei Y, Zhang X, Zhao N, Wang X, Shi S

PubMed · Jun 1, 2025
To evaluate whether deep learning (DL) analysis of intratumoral subregions based on dynamic contrast-enhanced MRI (DCE-MRI) can help predict Ki-67 expression level in breast cancer. A total of 290 breast cancer patients from two hospitals were retrospectively collected. A k-means clustering algorithm identified tumor subregions. DL features of the whole tumor and its subregions were extracted from DCE-MRI images using a pre-trained 3D ResNet18 model. A logistic regression model was constructed after dimension reduction. Model performance was assessed using the area under the curve (AUC), and clinical value was demonstrated through decision curve analysis (DCA). The k-means clustering method clustered the tumor into two subregions (habitat 1 and habitat 2) based on voxel values. Both the habitat 1 model (validation set: AUC = 0.771, 95 %CI: 0.642-0.900 and external test set: AUC = 0.794, 95 %CI: 0.696-0.891) and the habitat 2 model (AUC = 0.734, 95 %CI: 0.605-0.862 and AUC = 0.756, 95 %CI: 0.646-0.866) showed better predictive capabilities for Ki-67 expression level than the whole-tumor model (AUC = 0.686, 95 %CI: 0.550-0.823 and AUC = 0.680, 95 %CI: 0.555-0.804). The combined model based on the two subregions further enhanced the predictive capability (AUC = 0.808, 95 %CI: 0.696-0.921 and AUC = 0.842, 95 %CI: 0.758-0.926), and it demonstrated higher clinical value than the other models in DCA. The deep learning model derived from tumor subregions showed better performance for predicting Ki-67 expression level in breast cancer patients. Additionally, the model that integrated the two subregions further enhanced the predictive performance.
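The habitat step, k-means with k = 2 on voxel intensities, can be sketched in pure NumPy with a few Lloyd's iterations; this stands in for the paper's implementation and assumes a flat array of tumor voxel values.

```python
# Split tumor voxels into two intensity "habitats" via 1-D k-means (k = 2).
import numpy as np

def two_habitats(voxels, iters=20, seed=0):
    v = np.asarray(voxels, float)
    rng = np.random.default_rng(seed)
    centers = rng.choice(v, size=2, replace=False)   # initial centroids
    for _ in range(iters):
        # Assign each voxel to its nearest centroid (label 0 or 1).
        labels = (np.abs(v - centers[0]) > np.abs(v - centers[1])).astype(int)
        for k in (0, 1):
            if (labels == k).any():
                centers[k] = v[labels == k].mean()   # update centroid
    return labels, np.sort(centers)
```

Each habitat's voxels would then feed the feature extractor separately, giving the two subregion models whose combination the abstract reports as strongest.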
