
Mammography-based artificial intelligence for breast cancer detection, diagnosis, and BI-RADS categorization using multi-view and multi-level convolutional neural networks.

Tan H, Wu Q, Wu Y, Zheng B, Wang B, Chen Y, Du L, Zhou J, Fu F, Guo H, Fu C, Ma L, Dong P, Xue Z, Shen D, Wang M

PubMed · May 21, 2025
We developed an artificial intelligence system (AIS) using multi-view, multi-level convolutional neural networks for breast cancer detection, diagnosis, and BI-RADS categorization support in mammography. A total of 24,866 breasts from 12,433 Asian women imaged between August 2012 and December 2018 were enrolled. The study consisted of three parts: (1) evaluation of AIS performance in malignancy diagnosis; (2) stratified analysis of BI-RADS 3-4 subgroups with AIS; and (3) reassessment of BI-RADS 0 breasts with AIS assistance. We further evaluated the AIS in a counterbalance-designed AI-assisted reader study, in which ten radiologists read 1302 cases with and without AIS assistance. The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, accuracy, and F1 score were measured. The AIS yielded AUC values of 0.995, 0.933, and 0.947 for malignancy diagnosis in the validation set, testing set 1, and testing set 2, respectively. Within BI-RADS 3-4 subgroups with pathological results, the AIS downgraded 83.1% of false positives to benign and upgraded 54.1% of false negatives to malignant. The AIS also assisted radiologists in identifying 7 of 43 malignancies initially assigned BI-RADS 0, with a specificity of 96.7%. In the reader study, the average AUC across the ten readers improved significantly with AIS assistance (p = 0.001). The AIS can accurately detect and diagnose breast cancer on mammography and can further serve as a supportive tool for BI-RADS categorization. In summary, an AI risk assessment tool employing deep learning algorithms was developed and validated to enhance breast cancer diagnosis from mammograms, improve risk stratification accuracy (particularly in patients with dense breasts), and serve as a decision support aid for radiologists: the false-positive and false-negative rates of mammography remain high, the AIS yields a high AUC for malignancy diagnosis, and the AIS helps stratify BI-RADS categorization.
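The abstract does not publish the architecture; as a rough illustration of the multi-view idea it names (a shared CNN encoder applied to each mammographic view, with features fused before classification), a minimal PyTorch sketch might look like the following. The backbone, feature sizes, and head are assumptions for illustration, not the authors' design.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MultiViewClassifier(nn.Module):
    """Toy multi-view model: one shared CNN encoder per mammographic view
    (CC and MLO), pooled features concatenated and classified jointly."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()          # keep the 512-d pooled features
        self.encoder = backbone              # weights shared across views
        self.head = nn.Sequential(
            nn.Linear(512 * 2, 256), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(256, num_classes),
        )

    def forward(self, cc: torch.Tensor, mlo: torch.Tensor) -> torch.Tensor:
        f = torch.cat([self.encoder(cc), self.encoder(mlo)], dim=1)
        return self.head(f)

model = MultiViewClassifier()
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 2])
```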

Feasibility of an AI-driven Classification of Tuberous Breast Deformity: A Siamese Network Approach with a Continuous Tuberosity Score.

Vaccari S, Paderno A, Furlan S, Cavallero MF, Lupacchini AM, Di Giuli R, Klinger M, Klinger F, Vinci V

PubMed · May 20, 2025
Tuberous breast deformity (TBD) is a congenital condition characterized by constriction of the breast base, parenchymal hypoplasia, and areolar herniation. The absence of a universally accepted classification system complicates diagnosis and surgical planning, leading to variability in clinical outcomes. Artificial intelligence (AI) has emerged as a powerful adjunct in medical imaging, enabling objective, reproducible, and data-driven diagnostic assessments. This study introduces an AI-driven diagnostic tool for TBD classification using a Siamese network trained on paired frontal and lateral images. Additionally, the model generates a continuous Tuberosity Score (ranging from 0 to 1) based on embedding vector distances, offering an objective measure to enhance surgical planning and improve clinical outcomes. A dataset of 200 expertly classified frontal and lateral breast images (100 tuberous, 100 non-tuberous) was used to train a Siamese network with contrastive loss. The model extracted high-dimensional feature embeddings to differentiate tuberous from non-tuberous breasts. Five-fold cross-validation ensured robust performance evaluation. Performance metrics included accuracy, precision, recall, and F1-score. Visualization techniques, such as t-SNE clustering and occlusion sensitivity mapping, were employed to interpret model decisions. The model achieved an average accuracy of 96.2% ± 5.5%, with balanced precision and recall. The Tuberosity Score, derived from the Euclidean distance between embeddings, provided a continuous measure of deformity severity that correlated well with clinical assessments. This AI-based framework offers an objective, high-accuracy classification system for TBD. The Tuberosity Score enhances diagnostic precision, potentially aiding surgical planning and improving patient outcomes.
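The abstract specifies a Siamese network with contrastive loss and a score derived from embedding distance, but not the exact formulation. A minimal sketch under common conventions follows; the margin-based squashing in tuberosity_score is an assumption made for illustration.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, same_class, margin: float = 1.0):
    """Classic contrastive loss: pull same-class embedding pairs together,
    push different-class pairs at least `margin` apart."""
    d = F.pairwise_distance(z1, z2)
    pos = same_class * d.pow(2)
    neg = (1.0 - same_class) * F.relu(margin - d).pow(2)
    return (pos + neg).mean()

def tuberosity_score(z_case, z_reference, margin: float = 1.0):
    """Hypothetical continuous score in [0, 1]: embedding distance from a
    case to a non-tuberous reference, clipped at the contrastive margin."""
    d = F.pairwise_distance(z_case, z_reference)
    return torch.clamp(d / margin, 0.0, 1.0)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
labels = torch.randint(0, 2, (8,)).float()   # 1 = same class, 0 = different
print(contrastive_loss(z1, z2, labels).item())
print(tuberosity_score(z1, z2))
```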

An explainable AI-driven deep neural network for accurate breast cancer detection from histopathological and ultrasound images.

Alom MR, Farid FA, Rahaman MA, Rahman A, Debnath T, Miah ASM, Mansor S

PubMed · May 20, 2025
Breast cancer represents a significant global health challenge, making early and accurate detection essential to improve patient prognosis and reduce mortality. Traditional diagnostic processes that rely on manual analysis of medical images are complex and subject to inter-observer variability, highlighting the need for robust automated breast cancer detection systems. While deep learning has demonstrated potential, many current models struggle with limited accuracy and a lack of interpretability. This research introduces the Deep Neural Breast Cancer Detection (DNBCD) model, an explainable AI-based framework that uses deep learning to classify breast cancer from histopathological and ultrasound images. The proposed model employs DenseNet121 as a foundation, integrating customized convolutional neural network (CNN) layers, including GlobalAveragePooling2D, Dense, and Dropout layers, along with transfer learning to achieve both high accuracy and interpretability. The DNBCD model integrates several preprocessing techniques, including image normalization and resizing, applies augmentation to enhance robustness, and addresses class imbalance using class weights. It employs Grad-CAM (Gradient-weighted Class Activation Mapping) to offer visual justifications for its predictions, increasing trust and transparency among healthcare providers. The model was assessed using two benchmark datasets: BreakHis-400x (B-400x) and the Breast Ultrasound Images Dataset (BUSI), containing 1820 and 1578 images, respectively. We divided each dataset into training (70%), testing (20%), and validation (10%) sets, obtaining accuracies of 93.97% on the B-400x dataset (benign and malignant classes) and 89.87% on the BUSI dataset (benign, malignant, and normal classes). Experimental results demonstrate that the proposed DNBCD model outperforms existing state-of-the-art approaches, with potential uses in clinical environments. All materials are publicly accessible to the research community at: https://github.com/romzanalom/XAI-Based-Deep-Neural-Breast-Cancer-Detection
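The abstract names the building blocks (a DenseNet121 backbone with GlobalAveragePooling2D, Dense, and Dropout layers, transfer learning, and class weights) but not their configuration. A minimal Keras sketch under assumed settings (input size, layer widths, dropout rate, binary classes) might look like this:

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

# Frozen DenseNet121 backbone with a small custom classification head,
# mirroring the layer types named in the abstract (illustrative settings).
base = DenseNet121(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))
base.trainable = False  # transfer learning: train only the new head first

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.4),
    layers.Dense(2, activation="softmax"),  # e.g. benign vs. malignant
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds,
#           class_weight={0: 1.0, 1: 1.5})  # class weights for imbalance
```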

Semiautomated segmentation of breast tumor on automatic breast ultrasound image using a large-scale model with customized modules.

Zhou Y, Ye M, Ye H, Zeng S, Shu X, Pan Y, Wu A, Liu P, Zhang G, Cai S, Chen S

PubMed · May 19, 2025
To verify the capability of the Segment Anything Model for medical images in 3D (SAM-Med3D), tailored with low-rank adaptation (LoRA) strategies, in segmenting breast tumors in Automated Breast Ultrasound (ABUS) images. This retrospective study collected data from 329 patients diagnosed with breast cancer (average age 54 years). The dataset was randomly divided into training (n = 204), validation (n = 29), and test (n = 59) sets. Two experienced radiologists manually annotated the regions of interest of each sample, which served as ground truth for training and evaluating the SAM-Med3D model with additional customized modules. For semi-automatic tumor segmentation, points were randomly sampled within the lesion areas to simulate radiologists' clicks in real-world scenarios. Segmentation performance was evaluated using the Dice coefficient. A total of 492 cases (200 from the Tumor Detection, Segmentation, and Classification Challenge on Automated 3D Breast Ultrasound (TDSC-ABUS) 2023 challenge) underwent semi-automatic segmentation inference. The average Dice similarity coefficient (DSC) scores for the training, validation, and test sets of the Lishui dataset were 0.75, 0.78, and 0.75, respectively. The Breast Imaging Reporting and Data System (BI-RADS) categories of the samples ranged from BI-RADS 3 to 6, yielding average DSCs between 0.73 and 0.77. When samples (lesion volumes ranging from 1.64 to 100.03 cm³) were categorized by lesion size, the average DSC fell between 0.72 and 0.77. The overall average DSC on the TDSC-ABUS 2023 challenge dataset was 0.79, with the test set achieving a state-of-the-art score of 0.79. The SAM-Med3D model with additional customized modules demonstrates good performance in semi-automatic 3D ABUS breast tumor segmentation, indicating its feasibility for computer-aided diagnosis systems.
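For reference, the Dice similarity coefficient used throughout this evaluation, DSC = 2|A ∩ B| / (|A| + |B|), is straightforward to compute from binary masks. A short NumPy sketch with illustrative arrays:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray,
                     eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return float((2.0 * inter + eps) / (pred.sum() + truth.sum() + eps))

# Two random 3D masks stand in for prediction and ground truth.
a = np.random.rand(64, 64, 64) > 0.5
b = np.random.rand(64, 64, 64) > 0.5
print(round(dice_coefficient(a, b), 3))  # ≈ 0.5 for independent random masks
```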

Preoperative DBT-based radiomics for predicting axillary lymph node metastasis in breast cancer: a multi-center study.

He S, Deng B, Chen J, Li J, Wang X, Li G, Long S, Wan J, Zhang Y

PubMed · May 19, 2025
In the prognosis of breast cancer, the status of the axillary lymph nodes (ALN) is critically important. While traditional axillary lymph node dissection (ALND) provides comprehensive information, it carries high risks. Sentinel lymph node biopsy (SLNB), as an alternative, is less invasive but still poses a risk of overtreatment. In recent years, digital breast tomosynthesis (DBT) has emerged as a precise diagnostic tool for breast cancer, owing to its high detection capability for lesions obscured by dense glandular tissue. This multi-center study evaluates the feasibility of preoperative DBT-based radiomics, using tumor and peritumoral features, to predict ALN metastasis in breast cancer. We retrospectively collected DBT imaging data from 536 preoperative breast cancer patients across two centers: 390 cases from one hospital (internal training set) and 146 cases from another hospital (external validation set). We performed 3D region of interest (ROI) delineation on the cranio-caudal (CC) and mediolateral oblique (MLO) views of the DBT images and extracted radiomic features. Using analysis of variance (ANOVA) and the least absolute shrinkage and selection operator (LASSO), we selected radiomic features extracted from the tumor and its surrounding 3 mm, 5 mm, and 10 mm regions and constructed a radiomic feature set. We then developed a combined model that includes the optimal radiomic features and clinicopathological factors. The performance of the combined model was evaluated using the area under the curve (AUC) and compared directly with radiologists' diagnostic results. The AUCs of radiomic features from the regions surrounding the tumor were generally lower than those from the tumor itself. Among the models, Signature(tumor+10 mm) performed best, achieving an AUC of 0.806 with a logistic regression (LR) classifier used to generate the RadScore. The nomogram incorporating both Ki67 and RadScore demonstrated a slightly higher AUC (0.813) than the Signature(tumor+10 mm) model alone (0.806). By integrating relevant clinical information, the nomogram enhances potential clinical utility; it also outperformed radiologists' assessments in predictive accuracy, highlighting its added value in clinical decision-making. Radiomics based on DBT imaging of the tumor and surrounding regions can provide a non-invasive auxiliary tool to guide treatment strategies for ALN metastasis in breast cancer. Trial registration: not applicable.
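The selection-and-classification chain described here (ANOVA screening, LASSO selection, an LR-derived RadScore, AUC evaluation) maps naturally onto a scikit-learn pipeline. The sketch below is illustrative only, with random placeholder data standing in for the radiomic feature matrix and with hyperparameters the paper does not report:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, SelectFromModel, f_classif
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data: rows = patients, columns = radiomic features from the
# tumor plus its 10 mm peritumoral ring; y = ALN metastasis status.
rng = np.random.default_rng(0)
X = rng.normal(size=(390, 120))
y = rng.integers(0, 2, size=390)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("anova", SelectKBest(f_classif, k=30)),      # ANOVA univariate screen
    ("lasso", SelectFromModel(LassoCV(cv=5))),    # LASSO-based selection
    ("lr", LogisticRegression(max_iter=1000)),    # LR yields the RadScore
])
auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.3f}")          # ~0.5 on random data
```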

GuidedMorph: Two-Stage Deformable Registration for Breast MRI

Yaqian Chen, Hanxue Gu, Haoyu Dong, Qihang Li, Yuwen Chen, Nicholas Konz, Lin Li, Maciej A. Mazurowski

arXiv preprint · May 19, 2025
Accurately registering breast MR images from different time points enables the alignment of anatomical structures and the tracking of tumor progression, supporting more effective breast cancer detection, diagnosis, and treatment planning. However, the complexity of dense tissue and its highly non-rigid nature pose challenges for conventional registration methods, which primarily align general structures while overlooking intricate internal details. To address this, we propose GuidedMorph, a novel two-stage registration framework designed to better align dense tissue. In addition to a single-scale network for global structure alignment, we introduce a framework that utilizes dense tissue information to track breast movement. The learned transformation fields are fused by a Dual Spatial Transformer Network (DSTN), improving overall alignment accuracy. A novel warping method based on the Euclidean distance transform (EDT) is also proposed to accurately warp the registered dense-tissue and breast masks, preserving fine structural details during deformation. The framework supports both paradigms: with external segmentation models and with image data only. It also operates effectively with the VoxelMorph and TransMorph backbones, offering a versatile solution for breast registration. We validate our method on ISPY2 and an internal dataset, demonstrating superior performance in dense-tissue alignment, overall breast alignment, and breast structural similarity index measure (SSIM), with improvements of over 13.01% in dense-tissue Dice, 3.13% in breast Dice, and 1.21% in breast SSIM compared to the best learning-based baseline.
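The paper's DSTN fusion is not detailed in the abstract; as a generic illustration of how two learned displacement fields (a global one and a dense-tissue one) can be fused by composition and applied with a spatial transformer, here is a small 2D PyTorch sketch. The composition order and pixel-unit flow convention are assumptions, not the authors' formulation.

```python
import torch
import torch.nn.functional as F

def base_grid(h, w, device):
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    return torch.stack([xs, ys]).float().to(device)       # [2, H, W], (x, y)

def warp(img, flow):
    """Warp `img` [B,C,H,W] by a displacement field `flow` [B,2,H,W] in pixels."""
    b, _, h, w = flow.shape
    grid = base_grid(h, w, flow.device).unsqueeze(0) + flow  # absolute coords
    gx = 2.0 * grid[:, 0] / (w - 1) - 1.0                    # normalize to [-1, 1]
    gy = 2.0 * grid[:, 1] / (h - 1) - 1.0
    return F.grid_sample(img, torch.stack([gx, gy], dim=-1),
                         align_corners=True, padding_mode="border")

def compose(flow_global, flow_local):
    """Fuse two fields by composition: applying the global field first and the
    dense-tissue field second gives u(x) = u_g(x) + u_l(x + u_g(x))."""
    return flow_global + warp(flow_local, flow_global)

moving = torch.randn(1, 1, 64, 64)
u_g, u_l = torch.randn(1, 2, 64, 64), torch.randn(1, 2, 64, 64)
registered = warp(moving, compose(u_g, u_l))
print(registered.shape)  # torch.Size([1, 1, 64, 64])
```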

Breast Arterial Calcifications on Mammography: A Review of the Literature.

Rossi J, Cho L, Newell MS, Venta LA, Montgomery GH, Destounis SV, Moy L, Brem RF, Parghi C, Margolies LR

PubMed · May 17, 2025
Identifying systemic disease with medical imaging studies may improve population health outcomes. Although the pathogenesis of peripheral arterial calcification and coronary artery calcification differ, breast arterial calcification (BAC) on mammography is associated with cardiovascular disease (CVD), a leading cause of death in women. While professional society guidelines on the reporting or management of BAC have not yet been established, and assessment and quantification methods are not yet standardized, the value of reporting BAC is being considered internationally as a possible indicator of subclinical CVD. Furthermore, artificial intelligence (AI) models are being developed to identify and quantify BAC on mammography, as well as to predict the risk of CVD. This review outlines studies evaluating the association of BAC and CVD, introduces the role of preventative cardiology in clinical management, discusses reasons to consider reporting BAC, acknowledges current knowledge gaps and barriers to assessing and reporting calcifications, and provides examples of how AI can be utilized to measure BAC and contribute to cardiovascular risk assessment. Ultimately, reporting BAC on mammography might facilitate earlier mitigation of cardiovascular risk factors in asymptomatic women.

Computational modeling of breast tissue mechanics and machine learning in cancer diagnostics: enhancing precision in risk prediction and therapeutic strategies.

Ashi L, Taurin S

PubMed · May 17, 2025
Breast cancer remains a significant global health issue; despite advances in detection and treatment, it is a complex disease driven by genetic, environmental, and structural factors. Computational methods like finite element modeling (FEM) have transformed our understanding of breast cancer risk and progression. This review focuses on advanced computational approaches in breast cancer research, with an emphasis on FEM's role in simulating breast tissue mechanics and enhancing precision in therapies such as radiofrequency ablation (RFA). Machine learning (ML), particularly convolutional neural networks (CNNs), has revolutionized the analysis of imaging modalities like mammography and MRI, improving diagnostic accuracy and early detection. AI applications in analyzing histopathological images have advanced tumor classification and grading, offering consistency and reducing inter-observer variability. Explainability tools like Grad-CAM, SHAP, and LIME enhance the transparency of AI-driven models, facilitating their integration into clinical workflows. Integrating FEM and ML represents a paradigm shift in breast cancer management: FEM offers precise modeling of tissue mechanics, while ML excels in predictive analytics and image analysis. Despite challenges such as data variability and limited standardization, synergizing these approaches promises adaptive, personalized care. These computational methods have the potential to redefine diagnostics, optimize treatment, and improve patient outcomes.

Deep learning predicts HER2 status in invasive breast cancer from multimodal ultrasound and MRI.

Fan Y, Sun K, Xiao Y, Zhong P, Meng Y, Yang Y, Du Z, Fang J

PubMed · May 16, 2025
The preoperative human epidermal growth factor receptor type 2 (HER2) status of breast cancer is typically determined by pathological examination of a core needle biopsy, which influences the efficacy of neoadjuvant chemotherapy (NAC). However, the highly heterogeneous nature of breast cancer and the limitations of needle aspiration biopsy increase the instability of pathological evaluation. The aim of this study was to predict HER2 status in preoperative breast cancer using deep learning (DL) models based on ultrasound (US) and magnetic resonance imaging (MRI). The study included women with invasive breast cancer who underwent US and MRI at our institution between January 2021 and July 2024. US images and dynamic contrast-enhanced T1-weighted MRI images were used to construct DL models (DL-US: the DL model based on US; DL-MRI: the model based on MRI; and DL-MRI&US: the combined model based on both MRI and US). All classifications were based on postoperative pathological evaluation. Receiver operating characteristic analysis and the DeLong test were used to compare the diagnostic performance of the DL models. In the test cohort, DL-US differentiated the HER2 status of breast cancer with an AUC of 0.842 (95% CI: 0.708-0.931), and sensitivity and specificity of 89.5% and 79.3%, respectively. DL-MRI achieved an AUC of 0.800 (95% CI: 0.660-0.902), with sensitivity and specificity of 78.9% and 79.3%, respectively. DL-MRI&US yielded an AUC of 0.898 (95% CI: 0.777-0.967), with sensitivity and specificity of 63.2% and 100.0%, respectively.
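As a side note on the metrics reported here, AUC together with sensitivity and specificity at a chosen operating point can be computed as below; scikit-learn has no built-in DeLong test, so this sketch covers only the ROC metrics and uses synthetic stand-in data in place of the study's predictions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# y_true: pathology-confirmed HER2 status; y_score: model probabilities.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=200)
y_score = np.clip(y_true * 0.3 + rng.normal(0.4, 0.25, size=200), 0, 1)

auc = roc_auc_score(y_true, y_score)
fpr, tpr, thr = roc_curve(y_true, y_score)
j = np.argmax(tpr - fpr)                    # Youden-index operating point
sens, spec = tpr[j], 1 - fpr[j]
print(f"AUC={auc:.3f}  sensitivity={sens:.3f}  specificity={spec:.3f}")
```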