BUS-M2AE: Multi-scale Masked Autoencoder for Breast Ultrasound Image Analysis.

Yu L, Gou B, Xia X, Yang Y, Yi Z, Min X, He T

PubMed · Jun 1, 2025
The Masked AutoEncoder (MAE) has demonstrated significant potential in medical image analysis by reducing the cost of manual annotation. However, MAE and its recent variants are not well developed for ultrasound images in breast cancer diagnosis, as they struggle to generalize to breast tumors of varying sizes, which limits their ability to adapt to the diverse morphological characteristics of these tumors. In this paper, we propose a Breast UltraSound Multi-scale Masked AutoEncoder (BUS-M2AE) to address these limitations. BUS-M2AE incorporates multi-scale masking at both the token level, during the image patching stage, and the feature level, during feature learning. These two masking strategies flexibly match the explicit masked patches and the implicit features to varying tumor scales, allowing the pre-trained vision transformer to adaptively perceive and accurately distinguish breast tumors of different sizes. Comprehensive experiments demonstrate that BUS-M2AE outperforms recent MAE variants and commonly used supervised learning methods in breast cancer classification and tumor segmentation.
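
The abstract does not spell out the masking scheme, but the idea of token-level multi-scale masking can be illustrated. Below is a minimal sketch, assuming a ViT-style patch grid and PyTorch, in which square blocks of patches at several scales are masked until a target ratio is reached; all names, sizes, and ratios are illustrative, not the authors' implementation.

```python
import random
import torch

def multiscale_token_mask(grid_h, grid_w, mask_ratio=0.75, block_sizes=(1, 2, 4)):
    """Mask square blocks of ViT patches at several scales until the target
    ratio is reached, so masked regions cover both small and large structures.
    A sketch of token-level multi-scale masking; the paper's exact scheme may differ."""
    mask = torch.zeros(grid_h, grid_w, dtype=torch.bool)
    target = int(mask_ratio * grid_h * grid_w)
    while mask.sum() < target:
        b = random.choice(block_sizes)        # pick a masking scale
        i = random.randrange(grid_h - b + 1)
        j = random.randrange(grid_w - b + 1)
        mask[i:i + b, j:j + b] = True
    return mask.flatten()                     # one flag per token; True = masked

# e.g. a 224x224 image with 16x16 patches gives a 14x14 token grid
token_mask = multiscale_token_mask(14, 14)
print(token_mask.float().mean())              # ~0.75
```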

AI-supported approaches for mammography single and double reading: A controlled multireader study.

Brancato B, Magni V, Saieva C, Risso GG, Buti F, Catarzi S, Ciuffi F, Peruzzi F, Regini F, Ambrogetti D, Alabiso G, Cruciani A, Doronzio V, Frati S, Giannetti GP, Guerra C, Valente P, Vignoli C, Atzori S, Carrera V, D'Agostino G, Fazzini G, Picano E, Turini FM, Vani V, Fantozzi F, Vietro D, Cavallero D, Vietro F, Plataroti D, Schiaffino S, Cozzi A

PubMed · Jun 1, 2025
To assess the impact of artificial intelligence (AI) on the diagnostic performance of radiologists with varying experience levels in mammography reading, considering single and simulated double reading approaches. In this retrospective study, 150 mammography examinations (30 with pathology-confirmed malignancies, 120 without malignancies [confirmed by 2-year follow-up]) were reviewed according to five approaches: A) human single reading by 26 radiologists of varying experience; B) AI single reading (Lunit INSIGHT MMG); C) human single reading with simultaneous AI support; D) simulated human-human double reading; E) simulated human-AI double reading, with AI as second independent reader flagging cases with a cancer probability ≥10%. Sensitivity and specificity were calculated and compared using McNemar's test and univariate and multivariable logistic regression. Compared to single reading without AI support, single reading with simultaneous AI support improved mean sensitivity from 69.2% (standard deviation [SD] 15.6) to 84.5% (SD 8.1, p < 0.001), with comparable mean specificity (91.8% versus 90.8%, p = 0.06). The sensitivity increase provided by AI-supported single reading was largest among radiologists whose sensitivity in non-supported single reading was below the median, rising from 56.7% (SD 12.1) to 79.7% (SD 10.2, p < 0.001). In the simulated human-AI double reading approach, sensitivity further increased to 91.8% (SD 3.4), surpassing that of simulated human-human double reading (87.4%, SD 8.8, p = 0.016), with comparable mean specificity (84.0% versus 83.0%, p = 0.17). AI support significantly enhanced sensitivity across all reading approaches, particularly benefiting worse-performing radiologists. In the simulated double reading approaches, incorporating AI as an independent second reader significantly increased sensitivity without compromising specificity.
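
The human-AI double reading rule (approach E) is simple enough to simulate: a case is recalled if either the human reader or the AI, at the ≥10% probability threshold, flags it. The sketch below uses synthetic reader calls and AI probabilities loosely mirroring the study's 150-case, 30-cancer design; all values are illustrative.

```python
import numpy as np

def double_reading_or(y_true, human_call, ai_prob, threshold=0.10):
    """Simulated human-AI double reading: the AI acts as an independent
    second reader, and a case is recalled if the human OR the AI
    (predicted cancer probability >= threshold) flags it."""
    recall = human_call.astype(bool) | (ai_prob >= threshold)
    sens = recall[y_true == 1].mean()
    spec = (~recall)[y_true == 0].mean()
    return sens, spec

# Synthetic stand-ins mirroring the study design: 150 cases, 30 cancers.
rng = np.random.default_rng(0)
y = np.r_[np.ones(30, dtype=int), np.zeros(120, dtype=int)]
human = rng.random(150) < np.where(y == 1, 0.69, 0.08)  # ~69% sens, ~92% spec
ai = np.where(y == 1, rng.uniform(0.05, 1.0, 150), rng.uniform(0.0, 0.3, 150))
print(double_reading_or(y, human, ai))
```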

Prediction of mammographic breast density based on clinical breast ultrasound images using deep learning: a retrospective analysis.

Bunnell A, Valdez D, Wolfgruber TK, Quon B, Hung K, Hernandez BY, Seto TB, Killeen J, Miyoshi M, Sadowski P, Shepherd JA

PubMed · Jun 1, 2025
Breast density, as derived from mammographic images and defined by the Breast Imaging Reporting & Data System (BI-RADS), is one of the strongest risk factors for breast cancer. Breast ultrasound is an alternative breast cancer screening modality, particularly useful in low-resource, rural contexts. To date, breast ultrasound has not been used to inform risk models that require breast density. The purpose of this study was to explore the use of artificial intelligence (AI) to predict BI-RADS breast density category from clinical breast ultrasound imaging. We compared deep learning methods that predict breast density directly from breast ultrasound images with machine learning models trained on image gray-level histograms alone. The use of AI-derived breast ultrasound density as a breast cancer risk factor was compared to clinical BI-RADS breast density. Retrospective (2009-2022) breast ultrasound data were split by individual into 70/20/10% groups for training, validation, and held-out testing. In total, 405,120 clinical breast ultrasound images from 14,066 women (mean age 53 years, range 18-99 years) across three institutions were retrospectively selected for inclusion: 10,393 women for training (302,574 images), 2593 for validation (69,842 images), and 1074 for testing (28,616 images). The AI model achieved an AUROC of 0.854 for breast density classification and statistically significantly outperformed all image-statistic-based methods. In an existing clinical 5-year breast cancer risk model, breast ultrasound AI and clinical breast density predicted 5-year risk with AUROCs of 0.606 and 0.599, respectively (DeLong's test p-value: 0.67). BI-RADS breast density can thus be estimated from breast ultrasound imaging with high accuracy, and the AI model provided superior estimates to other machine learning approaches. Furthermore, we demonstrate that age-adjusted, AI-derived breast ultrasound density provides predictive power similar to mammographic breast density in our population. Estimated breast density from ultrasound may be useful for breast cancer risk assessment in areas where mammography is not available. Funding: National Cancer Institute.
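
Splitting "by individual" means every image from a given woman must land in the same partition; otherwise patient-level leakage inflates test performance. A minimal sketch of such a 70/20/10 group-wise split using scikit-learn's GroupShuffleSplit, with hypothetical placeholder arrays:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical placeholders: 1000 images belonging to 100 women.
image_ids = np.arange(1000)
patient_ids = np.repeat(np.arange(100), 10)

# First peel off 30% of patients, then split that 30% into 20%/10%.
gss = GroupShuffleSplit(n_splits=1, test_size=0.30, random_state=42)
train_idx, rest_idx = next(gss.split(image_ids, groups=patient_ids))

gss2 = GroupShuffleSplit(n_splits=1, test_size=1 / 3, random_state=42)
val_rel, test_rel = next(gss2.split(rest_idx, groups=patient_ids[rest_idx]))
val_idx, test_idx = rest_idx[val_rel], rest_idx[test_rel]
# ~70% train, ~20% validation, ~10% held-out test, disjoint by woman.
```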

Enhancing Pathological Complete Response Prediction in Breast Cancer: The Added Value of Pretherapeutic Contrast-Enhanced Cone Beam Breast CT Semantic Features.

Wang Y, Ma Y, Wang F, Liu A, Zhao M, Bian K, Zhu Y, Yin L, Ye Z

PubMed · Jun 1, 2025
To explore the association between pretherapeutic contrast-enhanced cone beam breast CT (CE-CBBCT) features and pathological complete response (pCR), and to develop a predictive model integrating clinicopathological and imaging features. In this prospective study, a cohort of 200 female patients who underwent CE-CBBCT prior to neoadjuvant therapy and surgery was divided into training (n=150) and test (n=50) sets in a 3:1 ratio. Optimal predictive features were identified using univariate logistic regression and recursive feature elimination with cross-validation (RFECV). Models were constructed using XGBoost and evaluated with receiver operating characteristic (ROC) curves, calibration curves, and decision curve analysis. The performance of the combined model was further evaluated across molecular subtypes, and feature importance within it was determined using the SHapley Additive exPlanations (SHAP) algorithm. The model incorporating three clinicopathological and six CE-CBBCT imaging features demonstrated robust predictive performance for pCR, with areas under the curve (AUCs) of 0.924 in the training set and 0.870 in the test set. Molecular subtype, spiculation, and adjacent vascular sign (AVS) grade emerged as the most influential SHAP features. The highest AUCs were observed for the HER2-positive subgroup (training: 0.935; test: 0.844), followed by luminal (training: 0.841; test: 0.717) and triple-negative breast cancer (TNBC; training: 0.760; test: 0.583). SHAP analysis indicated that spiculation was crucial for luminal breast cancer prediction, while AVS grade was critical for HER2-positive and TNBC cases. Integrating clinicopathological and CE-CBBCT imaging features enhanced pCR prediction accuracy, particularly in HER2-positive cases, underscoring its potential clinical applicability.
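
The modeling pipeline described here (recursive feature elimination with cross-validation feeding an XGBoost classifier) can be sketched with standard libraries. The synthetic data and hyperparameters below are placeholders, not the study's configuration; univariate screening is assumed already done.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold
from xgboost import XGBClassifier

# Placeholder data standing in for the study's clinicopathological and
# CE-CBBCT semantic features (150 training patients, 20 candidate features).
X_train, y_train = make_classification(n_samples=150, n_features=20, random_state=0)

# Recursive feature elimination with cross-validation, scored by AUC,
# wrapped around the same XGBoost estimator used for the final pCR model.
xgb = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
selector = RFECV(estimator=xgb, step=1, cv=StratifiedKFold(5), scoring="roc_auc")
selector.fit(X_train, y_train)

model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(selector.transform(X_train), y_train)  # fit on the selected subset
print("features kept:", selector.n_features_)
```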

Accelerated High-resolution T1- and T2-weighted Breast MRI with Deep Learning Super-resolution Reconstruction.

Mesropyan N, Katemann C, Leutner C, Sommer A, Isaak A, Weber OM, Peeters JM, Dell T, Bischoff L, Kuetting D, Pieper CC, Lakghomi A, Luetkens JA

PubMed · Jun 1, 2025
To assess the performance of an industry-developed deep learning (DL) algorithm for reconstructing low-resolution Cartesian T1-weighted dynamic contrast-enhanced (T1w) and T2-weighted turbo-spin-echo (T2w) sequences and to compare them to standard sequences. Female patients with indications for breast MRI were included in this prospective study. The 1.5-T study protocol included T1w and T2w sequences, each acquired at standard resolution (T1S and T2S) and at low resolution with subsequent DL reconstruction (T1DL and T2DL). For DL reconstruction, two convolutional networks were used: (1) Adaptive-CS-Net for denoising with compressed sensing, and (2) Precise-Image-Net for resolution upscaling of previously downscaled images. Overall image quality was assessed on a 5-point Likert scale (1 = non-diagnostic to 5 = excellent). Apparent signal-to-noise (aSNR) and contrast-to-noise (aCNR) ratios were calculated. Breast Imaging Reporting and Data System (BI-RADS) agreement between sequence types was assessed. A total of 47 patients were included (mean age, 58±11 years). Acquisition times for T1DL and T2DL were reduced by 51% (44 vs. 90 s per dynamic phase) and 46% (102 vs. 192 s), respectively. T1DL and T2DL showed higher overall image quality (e.g., 4 [IQR, 4-4] for T1S vs. 5 [IQR, 5-5] for T1DL, P<0.001). Both T1DL and T2DL also showed higher aSNR and aCNR than T1S and T2S (e.g., aSNR: 32.35±10.23 for T2S vs. 27.88±6.86 for T2DL, P=0.014). Cohen's κ for BI-RADS agreement was excellent (0.962, P<0.001). DL-based denoising and resolution upscaling reduce acquisition time and improve image quality for T1w and T2w breast MRI.
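
The abstract does not give the aSNR/aCNR formulas; the sketch below uses the common ROI-based definitions (mean signal over the standard deviation of a background-noise ROI), which is one plausible reading, applied to synthetic ROI values.

```python
import numpy as np

def apparent_snr(signal_roi, noise_roi):
    """aSNR = mean signal intensity / SD of a background-noise ROI
    ('apparent' because noise varies spatially with parallel imaging)."""
    return signal_roi.mean() / noise_roi.std()

def apparent_cnr(roi_a, roi_b, noise_roi):
    """aCNR = |mean(A) - mean(B)| / SD of a background-noise ROI."""
    return abs(roi_a.mean() - roi_b.mean()) / noise_roi.std()

# Hypothetical ROI samples drawn from a reconstructed slice.
rng = np.random.default_rng(1)
lesion = rng.normal(900, 40, 500)   # enhancing tissue ROI
fat = rng.normal(400, 40, 500)      # reference tissue ROI
air = rng.normal(0, 30, 500)        # background-noise ROI
print(apparent_snr(lesion, air), apparent_cnr(lesion, fat, air))
```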

FedBCD: Federated Ultrasound Video and Image Joint Learning for Breast Cancer Diagnosis.

Deng T, Huang C, Cai M, Liu Y, Liu M, Lin J, Shi Z, Zhao B, Huang J, Liang C, Han G, Liu Z, Wang Y, Han C

PubMed · Jun 1, 2025
Ultrasonography plays an essential role in breast cancer diagnosis. Current deep learning-based studies train models on either images or videos in a centralized manner, without considering either the joint benefits of combining the two modalities or the privacy issues of data centralization. In this study, we propose the first decentralized learning solution for joint learning from breast ultrasound videos and images, called FedBCD. To enable the model to learn from images and videos simultaneously and seamlessly during client-level local training, we propose a Joint Ultrasound Video and Image Learning (JUVIL) model that bridges the dimension gap between video and image data by incorporating temporal and spatial adapters. The parameter-efficient design of JUVIL, with trainable adapters and a frozen backbone, further reduces the computational cost and communication burden of federated learning, improving overall efficiency. Moreover, since conventional model-wise aggregation can destabilize federated training when clients differ in modality and data capacity and layers differ in functionality, we further propose a Fisher information matrix (FIM)-guided layer-wise aggregation method named FILA. By measuring layer-wise sensitivity with the FIM, FILA assigns higher contributions to clients with lower sensitivity, improving personalized performance during federated training. Extensive experiments on three image clients and one video client demonstrate the benefits of the joint learning architecture, especially for clients with small-scale data. FedBCD significantly outperforms nine federated learning methods on both video-based and image-based diagnosis, demonstrating its superiority and potential for clinical practice. Code is released at https://github.com/tianpeng-deng/FedBCD.
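
FILA's exact formulation is in the paper, not the abstract, but the stated idea, weighting clients inversely to layer-wise Fisher sensitivity during aggregation, can be sketched. Everything below (the scalar per-layer FIM proxy, the inverse weighting) is an assumption for illustration, not the authors' code.

```python
import numpy as np

def fila_aggregate(client_layers, client_fims, eps=1e-8):
    """Toy layer-wise aggregation guided by Fisher information: for each
    layer, clients with LOWER sensitivity receive HIGHER weight, as the
    abstract describes. client_layers[c][l] holds layer l's weights from
    client c; client_fims[c][l] is a scalar sensitivity proxy (e.g. the
    mean squared gradient, a common diagonal-FIM approximation)."""
    n_clients, n_layers = len(client_layers), len(client_layers[0])
    merged = []
    for l in range(n_layers):
        inv = np.array([1.0 / (client_fims[c][l] + eps) for c in range(n_clients)])
        w = inv / inv.sum()              # lower sensitivity -> larger share
        merged.append(sum(w[c] * client_layers[c][l] for c in range(n_clients)))
    return merged

# Three clients, two layers of fake weights.
layers = [[np.full(4, c + 1.0), np.full(2, 10.0 * (c + 1))] for c in range(3)]
fims = [[0.1 * (c + 1), 0.5 / (c + 1)] for c in range(3)]
print(fila_aggregate(layers, fims))
```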

Deep Learning in Digital Breast Tomosynthesis: Current Status, Challenges, and Future Trends.

Wang R, Chen F, Chen H, Lin C, Shuai J, Wu Y, Ma L, Hu X, Wu M, Wang J, Zhao Q, Shuai J, Pan J

PubMed · Jun 1, 2025
The high-resolution three-dimensional (3D) images generated with digital breast tomosynthesis (DBT) in breast cancer screening offer new possibilities for early diagnosis, which is increasingly important as breast cancer incidence rises. However, DBT also presents challenges: poorer performance in dense breasts, increased false-positive rates, slightly higher radiation doses, and longer reading times. Deep learning (DL) has been shown to effectively increase the processing efficiency and diagnostic accuracy of DBT images. This article reviews the application and outlook of DL in DBT-based breast cancer screening. First, the fundamentals and challenges of DBT technology are introduced. The applications of DL in DBT are then grouped into three categories: diagnostic classification of breast diseases, lesion segmentation and detection, and medical image generation. Additionally, the current public databases for mammography are summarized in detail. Finally, the paper analyzes the main challenges in applying DL techniques to DBT, such as the lack of public datasets and model training issues, and proposes possible directions for future research, including large language models, multisource domain transfer, and data augmentation, to encourage innovative applications of DL in medical imaging.

Adaptive Breast MRI Scanning Using AI.

Eskreis-Winkler S, Bhowmik A, Kelly LH, Lo Gullo R, D'Alessio D, Belen K, Hogan MP, Saphier NB, Sevilimedu V, Sung JS, Comstock CE, Sutton EJ, Pinker K

PubMed · Jun 1, 2025
Background: MRI protocols typically involve many imaging sequences and often require too much time. Purpose: To simulate artificial intelligence (AI)-directed stratified scanning for screening breast MRI with various triage thresholds and evaluate its diagnostic performance against that of the full breast MRI protocol. Materials and Methods: This retrospective reader study included consecutive contrast-enhanced screening breast MRI examinations performed between January 2013 and January 2019 at three regional cancer sites. In this simulation study, an in-house AI tool generated a suspicion score for subtraction maximum intensity projection images during a given MRI examination, and the score was used to determine whether to proceed with the full MRI protocol or end the examination early (abbreviated breast MRI [AB-MRI] protocol). Examinations with suspicion scores under the 50th percentile were read using both the AB-MRI protocol (i.e., dynamic contrast-enhanced MRI scans only) and the full MRI protocol. Diagnostic performance metrics for screening with various AI triage thresholds were compared with those for screening without AI triage. Results: Of 863 women (mean age, 52 years ± 10 [SD]; 1423 MRI examinations), 51 received a cancer diagnosis within 12 months of screening. The diagnostic performance metrics for AI-directed stratified scanning that triaged 50% of examinations to AB-MRI versus full MRI protocol scanning were as follows: sensitivity, 88.2% (45 of 51; 95% CI: 79.4, 97.1) versus 86.3% (44 of 51; 95% CI: 76.8, 95.7); specificity, 80.8% (1108 of 1372; 95% CI: 78.7, 82.8) versus 81.4% (1117 of 1372; 95% CI: 79.4, 83.5); positive predictive value 3 (i.e., the percentage of biopsies yielding cancer), 23.6% (43 of 182; 95% CI: 17.5, 29.8) versus 24.7% (42 of 170; 95% CI: 18.2, 31.2); cancer detection rate (per 1000 examinations), 31.6 (95% CI: 22.5, 40.7) versus 30.9 (95% CI: 21.9, 39.9); and interval cancer rate (per 1000 examinations), 4.2 (95% CI: 0.9, 7.6) versus 4.9 (95% CI: 1.3, 8.6). Specificity decreased by no more than 2.7 percentage points with AI triage. There were no AI-triaged examinations for which conducting the full MRI protocol would have resulted in additional cancer detection. Conclusion: AI-directed stratified MRI decreased simulated scan times while maintaining diagnostic performance. © RSNA, 2025. Supplemental material is available for this article. See also the editorial by Strand in this issue.
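
The triage rule itself is straightforward to express: exams whose AI suspicion score falls below a chosen percentile stop after the dynamic contrast-enhanced series (AB-MRI), and the rest continue with the full protocol. A sketch with synthetic scores follows; the in-house scoring model is not public, so only the routing logic is shown.

```python
import numpy as np

def triage(scores, percentile=50.0):
    """Route each exam by its AI suspicion score: below the chosen
    percentile the exam ends after the DCE series (AB-MRI); otherwise
    the full protocol continues."""
    cut = np.percentile(scores, percentile)
    return np.where(scores < cut, "AB-MRI", "full protocol")

rng = np.random.default_rng(0)
scores = rng.random(1423)                 # one score per exam, as in the study
routes = triage(scores, percentile=50.0)  # 50th-percentile threshold
print((routes == "AB-MRI").mean())        # -> ~0.5 of exams shortened
```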

Mexican dataset of digital mammograms (MEXBreast) with suspicious clusters of microcalcifications.

Lozoya RSL, Barragán KN, Domínguez HJO, Azuela JHS, Sánchez VGC, Villegas OOV

PubMed · Jun 1, 2025
Breast cancer is one of the most prevalent cancers affecting women worldwide, and early detection and treatment are crucial to significantly reducing mortality rates. Microcalcifications (MCs) are of particular importance among the various breast lesions: these tiny calcium deposits within breast tissue are present in approximately 30% of malignant tumors and can serve as critical indirect indicators of early-stage breast cancer. Three or more MCs within an area of 1 cm² are considered a microcalcification cluster (MCC) and assigned BI-RADS category 4, indicating suspicion of malignancy. Mammography is the most widely used technique for breast cancer detection, and approximately one in two mammograms showing MCCs is confirmed as cancerous through biopsy. MCCs are challenging to detect, even for experienced radiologists, underscoring the need for computer-aided detection tools such as convolutional neural networks (CNNs). CNNs require large amounts of domain-specific data with consistent resolutions for effective training; however, most publicly available mammogram datasets either lack resolution information or are compiled from heterogeneous sources, and MCCs are often unlabeled or sparsely represented, limiting their utility for training CNNs. Here we present MEXBreast, an annotated Mexican digital mammogram database of MCCs containing images at resolutions of 50, 70, and 100 microns. MEXBreast aims to support the training, validation, and testing of deep learning CNNs.
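
The clustering rule quoted above (three or more MCs within 1 cm²) depends on pixel spacing: at 70 µm, 1 cm spans roughly 143 pixels. Below is a hypothetical helper, not part of the dataset's tooling, that applies the rule to detected MC coordinates by checking 1 cm × 1 cm windows anchored at each MC.

```python
import numpy as np

def is_mcc(mc_coords_px, pixel_um):
    """BI-RADS-style rule from the dataset description: three or more
    microcalcifications within 1 cm^2 form a suspicious cluster (MCC).
    Coordinates are in pixels; spacing in microns (50/70/100 um in
    MEXBreast). Windows are anchored at each MC, an illustrative
    simplification rather than an exhaustive search."""
    side_px = 10_000.0 / pixel_um            # 1 cm in pixels (10,000 um)
    pts = np.asarray(mc_coords_px, float)
    for x0, y0 in pts:                       # slide a window from each MC
        inside = ((pts[:, 0] >= x0) & (pts[:, 0] <= x0 + side_px) &
                  (pts[:, 1] >= y0) & (pts[:, 1] <= y0 + side_px))
        if inside.sum() >= 3:
            return True
    return False

# At 70 um, 1 cm spans ~143 px; these three MCs fit in one window.
print(is_mcc([(100, 100), (180, 150), (210, 200)], pixel_um=70))  # True
```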