The African Breast Imaging Dataset for Equitable Cancer Care: Protocol for an Open Mammogram and Ultrasound Breast Cancer Detection Dataset

Musinguzi, D., Katumba, A., Kawooya, M. G., Malumba, R., Nakatumba-Nabende, J., Achuka, S. A., Adewole, M., Anazodo, U.

medRxiv preprint · Aug 28, 2025
Introduction: Breast cancer is one of the most common cancers globally. Its incidence in Africa has increased sharply, surpassing that in high-income countries. Mortality remains high due to late-stage diagnosis, when treatment is less effective. We propose the first open, longitudinal breast imaging dataset from Africa, comprising point-of-care ultrasound scans, mammograms, biopsy pathology, and clinical profiles, to support early detection using machine learning. Methods and Analysis: We will engage women through community outreach and train them in self-examination. Those with suspected lesions, particularly those with a family history of breast cancer, will be invited to participate. A total of 100 women will undergo baseline assessment at medical centers, including clinical exams, blood tests, and mammograms. Follow-up point-of-care ultrasound scans and clinical data will be collected at 3 and 6 months, with final assessments, including mammograms, at 9 months. Ethics and Dissemination: The study has been approved by the Institutional Review Boards at ECUREI and the MAI Lab. Findings will be disseminated through peer-reviewed journals and scientific conferences.

Experimental Assessment of Conventional Features, CNN-Based Features and Ensemble Schemes for Discriminating Benign Versus Malignant Lesions on Breast Ultrasound Images.

Bianconi F, Khan MU, Du H, Jassim S

PubMed paper · Aug 28, 2025
Breast ultrasound images play a pivotal role in assessing the nature of suspicious breast lesions, particularly in patients with dense tissue. Computerized analysis of breast ultrasound images has the potential to assist physicians in clinical decision-making and reduce the subjectivity of interpretation. We assess the performance of conventional features, deep learning features, and ensemble schemes for discriminating benign versus malignant breast lesions on ultrasound images. A total of 19 individual feature sets (1 morphological, 2 first-order, 10 texture-based, and 6 CNN-based) were included in the analysis. Furthermore, four combined feature sets (Best by class; Top 3, 5, and 7) and four fusion schemes (feature concatenation, majority voting, sum rule, and product rule) were considered to generate ensemble models. The experiments were carried out on three independent open-access datasets containing 252 (154 benign, 98 malignant), 232 (109 benign, 123 malignant), and 281 (187 benign, 94 malignant) lesions, respectively. CNN-based features outperformed the other individual descriptors, achieving accuracies between 77.4% and 83.6%, followed by morphological features (71.6%-80.8%) and histograms of oriented gradients (71.4%-77.6%). Ensemble models further improved accuracy, to between 80.2% and 87.5%. Fusion schemes based on the product and sum rules were generally superior to feature concatenation and majority voting. Combining individual feature sets through ensemble schemes offers a clear advantage for discriminating benign versus malignant breast lesions on ultrasound images.
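
To make the fusion schemes concrete, here is a minimal Python/NumPy sketch (names and structure are our own, not the paper's implementation) of combining per-feature-set classifier outputs by the sum rule, product rule, and majority voting; feature concatenation would instead stack the raw feature vectors before fitting a single classifier.

```python
import numpy as np

def fuse_predictions(prob_list, scheme="sum"):
    """Combine per-feature-set classifier outputs for a batch of lesions.

    prob_list: list of (n_samples, 2) arrays of class probabilities,
               one array per individual feature set.
    scheme:    "sum", "product", or "vote" (majority voting).
    Returns predicted labels (0 = benign, 1 = malignant).
    """
    probs = np.stack(prob_list)              # (n_models, n_samples, 2)
    if scheme == "sum":                      # sum rule: average the probabilities
        return probs.mean(axis=0).argmax(axis=1)
    if scheme == "product":                  # product rule: multiply the probabilities
        return probs.prod(axis=0).argmax(axis=1)
    if scheme == "vote":                     # majority voting on hard labels
        votes = probs.argmax(axis=2)         # (n_models, n_samples)
        return (votes.mean(axis=0) >= 0.5).astype(int)  # ties resolved toward class 1
    raise ValueError(f"unknown scheme: {scheme}")
```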

Mask-Guided Multi-Channel SwinUNETR Framework for Robust MRI Classification

Smriti Joshi, Lidia Garrucho, Richard Osuala, Oliver Diaz, Karim Lekadir

arXiv preprint · Aug 28, 2025
Breast cancer is one of the leading causes of cancer-related mortality in women, and early detection is essential for improving outcomes. Magnetic resonance imaging (MRI) is a highly sensitive tool for breast cancer detection, particularly in women at high risk or with dense breast tissue, where mammography is less effective. The ODELIA consortium organized a multi-center challenge to foster AI-based solutions for breast cancer diagnosis and classification. The dataset included 511 studies from six European centers, acquired on scanners from multiple vendors at both 1.5 T and 3 T. Each study was labeled for the left and right breast as no lesion, benign lesion, or malignant lesion. We developed a SwinUNETR-based deep learning framework that incorporates breast region masking, extensive data augmentation, and ensemble learning to improve robustness and generalizability. Our method achieved second place on the challenge leaderboard, highlighting its potential to support clinical breast MRI interpretation. We publicly share our codebase at https://github.com/smriti-joshi/bcnaim-odelia-challenge.git.
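
The abstract does not spell out the masking strategy, but one plausible reading of "mask-guided multi-channel" is to feed the breast-region mask alongside the image as extra input channels. A minimal PyTorch sketch under that assumption (the channel layout and normalisation are our own choices, not necessarily the challenge entry's):

```python
import torch

def make_masked_input(volume: torch.Tensor, breast_mask: torch.Tensor) -> torch.Tensor:
    """Assemble a mask-guided multi-channel input volume.

    volume:      (D, H, W) breast MRI intensities
    breast_mask: (D, H, W) binary breast-region mask
    Returns a (3, D, H, W) tensor stacking the raw image, the mask,
    and the masked image as channels for a SwinUNETR-style encoder.
    """
    volume = (volume - volume.mean()) / (volume.std() + 1e-6)  # z-score normalise
    return torch.stack([volume, breast_mask.float(), volume * breast_mask], dim=0)
```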

Deep learning-based prediction of axillary pathological complete response in patients with breast cancer using longitudinal multiregional ultrasound.

Liu Y, Wang Y, Huang J, Pei S, Wang Y, Cui Y, Yan L, Yao M, Wang Y, Zhu Z, Huang C, Liu Z, Liang C, Shi J, Li Z, Pei X, Wu L

PubMed paper · Aug 27, 2025
Noninvasive biomarkers that capture the longitudinal multiregional tumour burden in patients with breast cancer may improve the assessment of residual nodal disease and guide axillary surgery. Additionally, a significant barrier to the clinical translation of current data-driven deep learning models is their lack of interpretability. This study aims to develop and validate an information shared-private (iShape) model to predict axillary pathological complete response in patients with axillary lymph node (ALN)-positive breast cancer receiving neoadjuvant therapy (NAT) by learning common and specific image representations from longitudinal primary tumour and ALN ultrasound images. A total of 1135 patients with biopsy-proven ALN-positive breast cancer who received NAT were included in this multicentre, retrospective study. iShape was trained on a dataset of 371 patients and validated on three external validation sets (EVS1-3) of 295, 244, and 225 patients, respectively. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). The false-negative rates (FNRs) of iShape alone and in combination with sentinel lymph node biopsy (SLNB) were also evaluated. Imaging feature visualisation and RNA sequencing analysis were performed to explore the underlying basis of iShape. iShape achieved AUCs of 0.950-0.971 on EVS1-3, better than those of the clinical model and the image signatures derived from the primary tumour, longitudinal primary tumour, or ALN (P < 0.05, DeLong test). The performance of iShape remained satisfactory in subgroup analyses stratified by age, menstrual status, T stage, molecular subtype, treatment regimen, and machine type (AUCs of 0.812-1.000). More importantly, the FNR of iShape was 7.7%-8.1% in the EVSs, and the FNR of SLNB decreased from 13.4% to 3.6% with the aid of iShape in patients receiving SLNB and ALN dissection. The decision-making process of iShape was explained by feature visualisation. Additionally, RNA sequencing analysis revealed that a lower deep learning score was associated with immune infiltration and tumour proliferation pathways. The iShape model demonstrated good performance for the precise quantification of ALN status in patients with ALN-positive breast cancer receiving NAT, potentially benefiting individualised decision-making and avoiding unnecessary axillary lymph node dissection. This study was supported by (1) Noncommunicable Chronic Diseases-National Science and Technology Major Project (No. 2024ZD0531100); (2) Key-Area Research and Development Program of Guangdong Province (No. 2021B0101420006); (3) National Natural Science Foundation of China (No. 82472051, 82471947, 82271941, 82272088); (4) National Science Foundation for Young Scientists of China (No. 82402270, 82202095, 82302190); (5) Guangzhou Municipal Science and Technology Planning Project (No. 2025A04J4773, 2025A04J4774); (6) the Natural Science Foundation of Guangdong Province of China (No. 2025A1515011607); (7) Medical Scientific Research Foundation of Guangdong Province of China (No. A2024403); (8) Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application (No. 2022B1212010011); (9) Outstanding Youth Science Foundation of Yunnan Basic Research Project (No. 202401AY070001-316); (10) Innovative Research Team of Yunnan Province (No. 202505AS350013).
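
As a rough illustration of the shared-private idea (representations common to both regions plus region-specific ones), here is a toy PyTorch sketch; the encoder design, fusion, and omission of the longitudinal dimension are simplifying assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn

class SharedPrivateFusion(nn.Module):
    """Toy shared-private model in the spirit of iShape: a shared encoder
    captures representations common to the primary tumour and ALN images,
    private encoders capture region-specific ones, and the fused features
    feed an axillary pCR prediction head."""
    def __init__(self, feat_dim=128):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim))
        self.shared = encoder()                      # common representation
        self.private_tumour = encoder()              # tumour-specific representation
        self.private_aln = encoder()                 # ALN-specific representation
        self.head = nn.Linear(4 * feat_dim, 1)       # 2 shared + 2 private features

    def forward(self, tumour_img, aln_img):          # each: (B, 1, H, W)
        feats = torch.cat([self.shared(tumour_img), self.shared(aln_img),
                           self.private_tumour(tumour_img),
                           self.private_aln(aln_img)], dim=1)
        return torch.sigmoid(self.head(feats))       # probability of axillary pCR
```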

Quantum integration in swin transformer mitigates overfitting in breast cancer screening.

Xie Z, Yang X, Zhang S, Yang J, Zhu Y, Zhang A, Sun H, Dai Q, Li L, Liu H, Ming W, Dou M

PubMed paper · Aug 27, 2025
To explore the potential of quantum computing for advancing transformer-based deep learning models in breast cancer screening, this study introduces the Quantum-Enhanced Swin Transformer (QEST). The model replaces the fully connected classification layer of the Swin Transformer architecture with a Variational Quantum Circuit (VQC). In simulations, QEST exhibited accuracy and generalization competitive with the original Swin Transformer while also mitigating overfitting. Specifically, in 16-qubit simulations, the VQC reduced the parameter count by 62.5% compared with the replaced fully connected layer and improved the Balanced Accuracy (BACC) by 3.62% in external validation. Furthermore, validation experiments conducted on an actual quantum computer corroborated the effectiveness of QEST.
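
The general pattern, swapping a dense classification head for a variational quantum circuit, can be sketched with PennyLane as below; the circuit ansatz, angle embedding, and the 768-dimensional Swin feature size are assumptions for illustration, not the authors' QEST design:

```python
import pennylane as qml
import torch.nn as nn

n_qubits, n_layers = 16, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))            # features -> rotation angles
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits)) # trainable entangling ansatz
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": qml.StronglyEntanglingLayers.shape(n_layers, n_qubits)}
quantum_head = nn.Sequential(
    nn.Linear(768, n_qubits), nn.Tanh(),           # project Swin features, bound the angles
    qml.qnn.TorchLayer(circuit, weight_shapes),    # the VQC standing in for the FC layer
    nn.Linear(n_qubits, 2),                        # benign / malignant logits
)
```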

A Hybrid CNN-Transformer Deep Learning Model for Differentiating Benign and Malignant Breast Tumors Using Multi-View Ultrasound Images

Qi, Z., Jianxing, Z., Pan, T., Miao, C.

medRxiv preprint · Aug 27, 2025
Breast cancer is a leading malignancy threatening women's health globally, making early and accurate diagnosis crucial. Ultrasound is a key screening and diagnostic tool due to its non-invasive, real-time, and cost-effective nature. However, its diagnostic accuracy is highly dependent on operator experience, and conventional single-image analysis often fails to capture the comprehensive features of a lesion. This study introduces a computer-aided diagnosis (CAD) system that emulates a clinician's multi-view diagnostic process. We developed a novel hybrid deep learning model that integrates a Convolutional Neural Network (CNN) with a Transformer architecture. The model uses a pretrained EfficientNetV2 to extract spatial features from multiple, unordered ultrasound images of a single lesion. These features are then processed by a Transformer encoder, whose self-attention mechanism globally models and fuses their intrinsic correlations. A strict lesion-level data partitioning strategy ensured rigorous evaluation. On an internal test set, our CNN-Transformer model achieved an accuracy of 0.93, a sensitivity of 0.92, a specificity of 0.94, and an Area Under the Curve (AUC) of 0.98. On an external test set, it demonstrated an accuracy of 0.93, a sensitivity of 0.94, a specificity of 0.91, and an AUC of 0.97. These results significantly outperform those of a baseline single-image model, which achieved accuracies of 0.88 and 0.89 and AUCs of 0.95 and 0.94 on the internal and external test sets, respectively. This study confirms that combining CNNs with Transformers yields a highly accurate and robust diagnostic system for breast ultrasound. By effectively fusing multi-view information, our model aligns with clinical reasoning and shows strong potential for improving diagnostic reliability.
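
A compact sketch of the described pipeline, a shared EfficientNetV2 backbone extracting per-view features that a Transformer encoder fuses without positional encoding (the views are unordered), written in PyTorch; layer sizes and the mean-pooling fusion are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn
from torchvision.models import efficientnet_v2_s

class MultiViewCNNTransformer(nn.Module):
    """Multi-view breast-US classifier: per-image CNN features fused by a
    Transformer encoder over the unordered set of views of one lesion."""
    def __init__(self, d_model=256, n_classes=2):
        super().__init__()
        backbone = efficientnet_v2_s(weights="IMAGENET1K_V1")
        self.cnn = nn.Sequential(backbone.features,
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.proj = nn.Linear(1280, d_model)           # EfficientNetV2-S feature dim
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, views):                          # views: (B, N_views, 3, H, W)
        b, n = views.shape[:2]
        feats = self.cnn(views.flatten(0, 1))          # (B*N, 1280) per-view features
        tokens = self.proj(feats).view(b, n, -1)       # (B, N, d_model)
        fused = self.encoder(tokens).mean(dim=1)       # no positional encoding: views unordered
        return self.head(fused)                        # benign / malignant logits
```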

[Comparison of diagnostic performance between artificial intelligence-assisted automated breast ultrasound and handheld ultrasound in breast cancer screening].

Yi DS, Sun WY, Song HP, Zhao XL, Hu SY, Gu X, Gao Y, Zhao FH

PubMed paper · Aug 26, 2025
Objective: To compare the diagnostic performance of artificial intelligence-assisted automated breast ultrasound (AI-ABUS) with traditional handheld ultrasound (HHUS) in breast cancer screening. Methods: A total of 36 171 women undergoing breast cancer ultrasound screening in Futian District, Shenzhen, between July 1, 2023 and June 30, 2024 were prospectively recruited and assigned to either the AI-ABUS or HHUS group based on the screening modality used. In the AI-ABUS group, image acquisition was performed on-site by technicians, and two ultrasound physicians conducted remote diagnoses with AI assistance, supported by a follow-up management system. In the HHUS group, one ultrasound physician conducted both image acquisition and diagnosis on-site, and follow-up was led by clinical physicians. Based on the reported malignancy rates of different BI-RADS categories, the number of undiagnosed breast cancer cases in individuals without pathology was estimated, and adjusted detection rates were calculated. Primary outcomes included screening positive rate, biopsy rate, cancer detection rate, loss-to-follow-up rate, specificity, and sensitivity. Results: The median age [interquartile range, M (Q₁, Q₃)] of the 36 171 women was 43.8 (36.6, 50.8) years. A total of 14 766 women (40.82%) were screened with AI-ABUS and 21 405 (59.18%) with HHUS. Baseline characteristics showed no significant differences between the groups (all P>0.05). The AI-ABUS group had a lower screening positive rate [0.59% (87/14 766) vs 1.94% (416/21 405)], but a higher biopsy rate [47.13% (41/87) vs 16.10% (67/416)], a higher cancer detection rate [1.69‰ (25/14 766) vs 0.47‰ (10/21 428)], and a lower loss-to-follow-up rate (6.90% vs 71.39%) compared with the HHUS group (all P<0.05). There was no statistically significant difference in the distribution of breast cancer pathological stages among those who underwent biopsy between the two groups (P>0.05). The specificity of AI-ABUS was higher than that of HHUS [89.77% (13 231/14 739) vs 74.12% (15 858/21 394), P<0.05], while sensitivity did not differ significantly [92.59% (25/27) vs 90.91% (10/11), P>0.05]. After estimating undiagnosed cancer cases among participants without pathology, the adjusted detection rate was 2.30‰ (34/14 766) in the AI-ABUS group and ranged from 1.17‰ to 2.75‰ [(25-59)/21 428] in the HHUS group. In the minimum estimation scenario, the detection rate in the AI-ABUS group was significantly higher (P<0.05); in the maximum estimation scenario, the difference was not statistically significant (P>0.05). Conclusions: The AI-ABUS model, combined with an intelligent follow-up management system, achieves a higher breast cancer detection rate with a lower screening positive rate, improved specificity, and reduced loss to follow-up. This suggests AI-ABUS is a promising alternative model for breast cancer screening.
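
The adjusted detection rate described above can be illustrated with a small Python sketch: impute the expected cancers among screen-positive women without pathology using per-category BI-RADS malignancy rates, then recompute the rate. All inputs below are hypothetical placeholders, not the study's data:

```python
def adjusted_detection_rate(confirmed_cancers, unbiopsied_by_birads,
                            malignancy_rates, n_screened):
    """Estimate the per-mille detection rate after imputing likely cancers
    among screen-positive women without pathology, using assumed
    per-category BI-RADS malignancy rates."""
    imputed = sum(n * malignancy_rates[cat]
                  for cat, n in unbiopsied_by_birads.items())
    return (confirmed_cancers + imputed) / n_screened * 1000

# Hypothetical example: 20 confirmed cancers among 10 000 screened women,
# with unbiopsied screen-positives spread across BI-RADS 4A/4B/5.
rate = adjusted_detection_rate(
    confirmed_cancers=20,
    unbiopsied_by_birads={"4A": 30, "4B": 10, "5": 2},
    malignancy_rates={"4A": 0.10, "4B": 0.35, "5": 0.95},
    n_screened=10_000)
```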

Relation knowledge distillation 3D-ResNet-based deep learning for breast cancer molecular subtypes prediction on ultrasound videos: a multicenter study.

Wu Y, Zhou L, Zhao J, Peng Y, Li X, Wang Y, Zhu S, Hou C, Du P, Ling L, Wang Y, Tian J, Sun L

PubMed paper · Aug 26, 2025
To develop and test a relation knowledge distillation three-dimensional residual network (RKD-R3D) model for predicting breast cancer molecular subtypes from ultrasound (US) videos, to aid personalized clinical management. This multicentre retrospective study included 882 breast cancer patients (2375 US videos and 9499 images) between January 2017 and December 2021, divided into training, validation, and internal test cohorts. An additional 86 patients, collected between May 2023 and November 2023, formed the external test cohort. St. Gallen molecular subtypes (luminal A, luminal B, HER2-positive, and triple-negative) were confirmed via postoperative immunohistochemistry. RKD-R3D was developed and validated to predict the four-class molecular subtype of breast cancer from US videos. Its predictive performance was compared with RKD-R2D, a traditional R3D, and preoperative core needle biopsy (CNB). The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, accuracy, balanced accuracy, precision, recall, and F1-score were analyzed. RKD-R3D (AUC: 0.88, 0.95) outperformed RKD-R2D (AUC: 0.72, 0.85) and the traditional R3D (AUC: 0.65, 0.79) in predicting four-class breast cancer molecular subtypes in the internal and external test cohorts. RKD-R3D outperformed CNB (accuracy: 0.87 vs. 0.79) in the external test cohort, performed well in distinguishing triple-negative from non-triple-negative breast cancers (AUC: 0.98), and achieved satisfactory performance for both T1 and non-T1 lesions (AUC: 0.96, 0.90). Used with US videos, RKD-R3D is a potential supplementary tool for non-invasively assessing breast cancer molecular subtypes.
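
Relation knowledge distillation transfers the structure among embeddings rather than individual logits; below is a minimal PyTorch sketch of the classic distance-wise RKD loss (Park et al., CVPR 2019) as the generic technique — the paper's specific teacher-student pairing and loss variant may differ:

```python
import torch
import torch.nn.functional as F

def rkd_distance_loss(student_feats: torch.Tensor,
                      teacher_feats: torch.Tensor) -> torch.Tensor:
    """Distance-wise relational KD: match the normalised pairwise-distance
    structure of student and teacher embedding batches (N, dim)."""
    def pdist(e):
        d = torch.cdist(e, e, p=2)          # (N, N) pairwise Euclidean distances
        mean = d[d > 0].mean()              # normalise by the mean non-zero distance
        return d / (mean + 1e-8)
    return F.smooth_l1_loss(pdist(student_feats), pdist(teacher_feats))
```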

ESR Essentials: artificial intelligence in breast imaging-practice recommendations by the European Society of Breast Imaging.

Schiaffino S, Bernardi D, Healy N, Marino MA, Romeo V, Sechopoulos I, Mann RM, Pinker K

PubMed paper · Aug 26, 2025
Artificial intelligence (AI) can enhance the diagnostic performance of breast cancer imaging and improve workflow optimization, potentially mitigating excessive radiologist workload and suboptimal diagnostic accuracy. AI can also extend imaging capabilities through individual risk prediction, molecular subtyping, and prediction of response to neoadjuvant therapy. Evidence demonstrates AI's potential across multiple modalities. The most robust data come from mammographic screening, where AI models improve diagnostic accuracy and optimize workflow, but rigorous post-market surveillance is required before any implementation strategy in this field. Commercial tools for digital breast tomosynthesis and ultrasound, potentially able to reduce interpretation time and improve accuracy, are also available, but post-implementation evaluation studies are likewise lacking. Beyond basic tools for breast MRI with limited proven clinical benefit, AI applications for other modalities are not yet commercially available. Applications in contrast-enhanced mammography are still at the research stage, especially for radiomics-based molecular subtype classification. Large Language Model (LLM) applications are in their infancy, with none yet in clinical use. Consequently, and despite their promise, all commercially available AI tools for breast imaging should currently still be regarded as techniques that, at best, aid radiologists in image evaluation. Their use is therefore optional, and their findings may always be overruled. KEY POINTS: AI systems improve the diagnostic accuracy and efficiency of mammography screening, but long-term outcome data are lacking. Commercial tools for digital breast tomosynthesis and ultrasound are available, but post-implementation evaluation studies are lacking. AI tools for breast imaging should still be regarded as a non-obligatory aid to radiologists for image interpretation.

Breast Cancer Diagnosis Using a Dual-Modality Complementary Deep Learning Network With Integrated Attention Mechanism Fusion of B-Mode Ultrasound and Shear Wave Elastography.

Dong L, Cai X, Ge H, Sun L, Pan X, Sun F, Meng Q

PubMed paper · Aug 25, 2025
To develop and evaluate a Dual-modality Complementary Feature Attention Network (DCFAN) that integrates spatial and stiffness information from B-mode ultrasound and shear wave elastography (SWE) for improved breast tumor classification and axillary lymph node (ALN) metastasis prediction. A total of 387 paired B-mode and SWE images from 218 patients were retrospectively analyzed. The proposed DCFAN incorporates attention mechanisms to effectively fuse structural features from B-mode ultrasound with stiffness features from SWE. Two classification tasks were performed: (1) differentiating benign from malignant tumors, and (2) classifying benign tumors, malignant tumors without ALN metastasis, and malignant tumors with ALN metastasis. Model performance was assessed using accuracy, sensitivity, specificity, and AUC, and compared with conventional CNN-based models and two radiologists with varying experience. In Task 1, DCFAN achieved an accuracy of 94.36% ± 1.45% and the highest AUC of 0.97. In Task 2, it attained 91.70% ± 3.77% accuracy and an average AUC of 0.83. The multimodal approach significantly outperformed the single-modality models in both tasks. Notably, in Task 1, DCFAN demonstrated higher specificity (94.9%) compared to the experienced radiologist (p = 0.002), and yielded higher F1-scores than both radiologists. It also outperformed several state-of-the-art deep learning models in diagnostic accuracy. DCFAN demonstrated robust and superior performance over existing CNN-based methods and radiologists in both breast tumor classification and ALN metastasis prediction. This approach may serve as a valuable assistive tool to enhance diagnostic accuracy in breast ultrasound.
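
As a generic stand-in for the attention-based fusion the abstract describes, here is a short PyTorch sketch in which B-mode features query SWE stiffness features via cross-attention; the module names and dimensions are assumptions, not DCFAN's published design:

```python
import torch
import torch.nn as nn

class DualModalityAttentionFusion(nn.Module):
    """Cross-attention fusion of B-mode (structural) and SWE (stiffness)
    feature vectors for three-way classification: benign, malignant
    without ALN metastasis, malignant with ALN metastasis."""
    def __init__(self, dim=256, n_classes=3):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(2 * dim, n_classes)

    def forward(self, bmode_feat, swe_feat):          # each: (B, dim)
        q = bmode_feat.unsqueeze(1)                   # B-mode features as queries ...
        kv = swe_feat.unsqueeze(1)                    # ... attending to SWE stiffness cues
        attended, _ = self.attn(q, kv, kv)
        fused = torch.cat([attended.squeeze(1), swe_feat], dim=-1)
        return self.head(fused)                       # class logits
```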