Deep learning enables fast and accurate quantification of MRI-guided near-infrared spectral tomography for breast cancer diagnosis.

Feng J, Tang Y, Lin S, Jiang S, Xu J, Zhang W, Geng M, Dang Y, Wei C, Li Z, Sun Z, Jia K, Pogue BW, Paulsen KD

PubMed · May 29, 2025
The utilization of magnetic resonance (MR) imaging to guide near-infrared spectral tomography (NIRST) shows significant potential for improving the specificity and sensitivity of breast cancer diagnosis. However, the efficiency and accuracy of NIRST image reconstruction have been limited by the complexities of light propagation modeling and MRI image segmentation. To address these challenges, we developed and evaluated a deep learning-based approach for MR-guided 3D NIRST image reconstruction (DL-MRg-NIRST). Using a network trained on synthetic data, the DL-MRg-NIRST system reconstructed images from data acquired during 38 clinical imaging exams of patients with breast abnormalities. Statistical analysis of the results demonstrated a sensitivity of 87.5%, a specificity of 92.9%, and a diagnostic accuracy of 89.5% in distinguishing pathologically defined benign from malignant lesions. Additionally, the combined use of MRI and DL-MRg-NIRST diagnoses achieved an area under the receiver operating characteristic (ROC) curve of 0.98. Remarkably, the DL-MRg-NIRST image reconstruction process required only 1.4 seconds, significantly faster than state-of-the-art MR-guided NIRST methods.
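
As a check on the reported statistics, the sketch below computes sensitivity, specificity, and accuracy from confusion-matrix counts. The 24-malignant / 14-benign split is an assumption that happens to reproduce the abstract's figures; the abstract itself does not give the breakdown.

    # Diagnostic metrics from confusion-matrix counts. The 21/24 and 13/14
    # splits below are assumed, chosen only because they reproduce the
    # reported 87.5% / 92.9% / 89.5% over 38 exams.
    def diagnostic_metrics(tp, fn, tn, fp):
        sensitivity = tp / (tp + fn)                  # true-positive rate
        specificity = tn / (tn + fp)                  # true-negative rate
        accuracy = (tp + tn) / (tp + fn + tn + fp)
        return sensitivity, specificity, accuracy

    sens, spec, acc = diagnostic_metrics(tp=21, fn=3, tn=13, fp=1)
    print(f"{sens:.1%} {spec:.1%} {acc:.1%}")         # 87.5% 92.9% 89.5%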

Deep Learning-Based Breast Cancer Detection in Mammography: A Multi-Center Validation Study in Thai Population

Isarun Chamveha, Supphanut Chaiyungyuen, Sasinun Worakriangkrai, Nattawadee Prasawang, Warasinee Chaisangmongkon, Pornpim Korpraphong, Voraparee Suvannarerg, Shanigarn Thiravit, Chalermdej Kannawat, Kewalin Rungsinaporn, Suwara Issaragrisil, Payia Chadbunchachai, Pattiya Gatechumpol, Chawiporn Muktabhant, Patarachai Sereerat

arXiv preprint · May 29, 2025
This study presents a deep learning system for breast cancer detection in mammography, developed using a modified EfficientNetV2 architecture with enhanced attention mechanisms. The model was trained on mammograms from a major Thai medical center and validated on three distinct datasets: an in-domain test set (9,421 cases), a biopsy-confirmed set (883 cases), and an out-of-domain generalizability set (761 cases) collected from two different hospitals. For cancer detection, the model achieved AUROCs of 0.89, 0.96, and 0.94 on the respective datasets. The system's lesion localization capability, evaluated using metrics including Lesion Localization Fraction (LLF) and Non-Lesion Localization Fraction (NLF), demonstrated robust performance in identifying suspicious regions. Clinical validation through concordance tests showed strong agreement with radiologists: 83.5% classification and 84.0% localization concordance for biopsy-confirmed cases, and 78.1% classification and 79.6% localization concordance for out-of-domain cases. Expert radiologists' acceptance rate also averaged 96.7% for biopsy-confirmed cases and 89.3% for out-of-domain cases. The system achieved a System Usability Scale score of 74.17 for the source hospital and 69.20 for the validation hospitals, indicating good clinical acceptance. These results demonstrate the model's effectiveness in assisting mammogram interpretation, with the potential to enhance breast cancer screening workflows in clinical practice.
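
The abstract does not detail the "enhanced attention mechanisms," so the following is only a minimal sketch of one plausible design: a torchvision EfficientNetV2-S backbone with a squeeze-and-excitation-style channel-attention block before the classification head. All module names and sizes here are illustrative.

    import torch
    import torch.nn as nn
    from torchvision.models import efficientnet_v2_s

    class ChannelAttention(nn.Module):
        """Squeeze-and-excitation style channel attention (illustrative)."""
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels), nn.Sigmoid(),
            )
        def forward(self, x):
            w = self.fc(x.mean(dim=(2, 3)))           # global average pool
            return x * w[:, :, None, None]            # reweight channels

    class MammoClassifier(nn.Module):
        def __init__(self, num_classes=2):
            super().__init__()
            backbone = efficientnet_v2_s(weights=None)
            self.features = backbone.features          # 1280-channel output
            self.attn = ChannelAttention(1280)
            self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(1280, num_classes))
        def forward(self, x):
            return self.head(self.attn(self.features(x)))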

Improving Breast Cancer Diagnosis in Ultrasound Images Using Deep Learning with Feature Fusion and Attention Mechanism.

Asif S, Yan Y, Feng B, Wang M, Zheng Y, Jiang T, Fu R, Yao J, Lv L, Song M, Sui L, Yin Z, Wang VY, Xu D

PubMed · May 27, 2025
Early detection of malignant lesions in ultrasound images is crucial for effective cancer diagnosis and treatment. While traditional methods rely on radiologists, deep learning models can improve accuracy, reduce errors, and enhance efficiency. This study explores the application of a deep learning model for classifying benign and malignant lesions, focusing on its performance and interpretability. In this study, we proposed a feature fusion-based deep learning model for classifying benign and malignant lesions in ultrasound images. The model leverages advanced architectures such as MobileNetV2 and DenseNet121, enhanced with feature fusion and attention mechanisms to boost classification accuracy. The clinical dataset comprises 2171 images collected from 1758 patients between December 2020 and May 2024. Additionally, we utilized the publicly available BUSI dataset, consisting of 780 images from female patients aged 25 to 75, collected in 2018. To enhance interpretability, we applied Grad-CAM, Saliency Maps, and Shapley additive explanations (SHAP) techniques to explain the model's decision-making. A comparative analysis with radiologists of varying expertise levels was also conducted. The proposed model exhibited the highest performance, achieving an area under the curve (AUC) of 0.9320 on our private dataset and an AUC of 0.9834 on the public dataset, significantly outperforming traditional deep convolutional neural network models. It also exceeded the diagnostic performance of radiologists, showcasing its potential as a reliable tool for medical image classification. The model's success can be attributed to its incorporation of advanced architectures, feature fusion, and attention mechanisms. The model's decision-making process was further clarified using interpretability techniques like Grad-CAM, Saliency Maps, and SHAP, offering insights into its ability to focus on relevant image features for accurate classification. The proposed deep learning model offers superior accuracy in classifying benign and malignant lesions in ultrasound images, outperforming traditional models and radiologists. Its strong performance, coupled with interpretability techniques, demonstrates its potential as a reliable and efficient tool for medical diagnostics. The datasets generated and analyzed during the current study are not publicly available due to the nature of this research and participants of this study, but may be available from the corresponding author on reasonable request.
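
A minimal sketch of the kind of feature fusion the abstract describes, assuming pooled MobileNetV2 and DenseNet121 features are concatenated and reweighted by a simple attention gate; the authors' exact fusion design is not specified in the abstract.

    import torch
    import torch.nn as nn
    from torchvision.models import mobilenet_v2, densenet121

    class FusionClassifier(nn.Module):
        """Concatenates pooled MobileNetV2 (1280-d) and DenseNet121 (1024-d)
        features, reweights them with an attention gate, and classifies."""
        def __init__(self, num_classes=2):
            super().__init__()
            self.mnet = mobilenet_v2(weights=None).features
            self.dnet = densenet121(weights=None).features
            self.pool = nn.AdaptiveAvgPool2d(1)
            fused = 1280 + 1024
            self.attn = nn.Sequential(nn.Linear(fused, fused), nn.Sigmoid())
            self.fc = nn.Linear(fused, num_classes)
        def forward(self, x):
            f = torch.cat([self.pool(self.mnet(x)).flatten(1),
                           self.pool(self.dnet(x)).flatten(1)], dim=1)
            return self.fc(f * self.attn(f))          # attention-weighted fusion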

STA-Risk: A Deep Dive of Spatio-Temporal Asymmetries for Breast Cancer Risk Prediction

Zhengbo Zhou, Dooman Arefan, Margarita Zuley, Jules Sumkin, Shandong Wu

arXiv preprint · May 27, 2025
Predicting the risk of developing breast cancer is an important clinical tool to guide early intervention and tailor personalized screening strategies. Early risk models have limited performance, and recent machine learning-based analysis of mammogram images has shown encouraging risk prediction effects. These models, however, are limited to the use of a single exam or tend to overlook the nuanced evolution of breast tissue in the spatial and temporal details of longitudinal imaging exams that is indicative of breast cancer risk. In this paper, we propose STA-Risk (Spatial and Temporal Asymmetry-based Risk Prediction), a novel Transformer-based model that captures fine-grained mammographic imaging evolution simultaneously from bilateral and longitudinal asymmetries for breast cancer risk prediction. STA-Risk is innovative in its use of side encoding and temporal encoding to learn spatial-temporal asymmetries, regulated by a customized asymmetry loss. We performed extensive experiments with two independent mammogram datasets and achieved superior performance over four representative SOTA models for 1- to 5-year future risk prediction. Source code will be released upon publication of the paper.
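
The abstract names side encoding and temporal encoding but gives no architectural details. The sketch below shows one plausible reading, with learned embeddings for breast side and exam index added to per-exam image tokens before a Transformer encoder; all dimensions, layer counts, and names are placeholders, and the customized asymmetry loss is omitted.

    import torch
    import torch.nn as nn

    class SideTemporalEncoder(nn.Module):
        """One token per (exam, breast side); learned side and time embeddings
        are added so the Transformer can relate bilateral and longitudinal
        asymmetries. Dimensions here are illustrative placeholders."""
        def __init__(self, feat_dim=256, max_exams=5):
            super().__init__()
            self.side_emb = nn.Embedding(2, feat_dim)           # left / right
            self.time_emb = nn.Embedding(max_exams, feat_dim)   # exam index
            layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=4)
            self.risk_head = nn.Linear(feat_dim, 1)
        def forward(self, tokens, side_ids, time_ids):
            # tokens: (batch, n_tokens, feat_dim) image features per side/exam
            x = tokens + self.side_emb(side_ids) + self.time_emb(time_ids)
            return self.risk_head(self.encoder(x).mean(dim=1))  # risk logit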

Decoding Breast Cancer in X-ray Mammograms: A Multi-Parameter Approach Using Fractals, Multifractals, and Structural Disorder Analysis

Santanu Maity, Mousa Alrubayan, Prabhakar Pradhan

arXiv preprint · May 27, 2025
We explored the fractal and multifractal characteristics of breast mammogram micrographs to identify quantitative biomarkers associated with breast cancer progression. In addition to conventional fractal and multifractal analyses, we employed a recently developed fractal-functional distribution method, which transforms fractal measures into Gaussian distributions for more robust statistical interpretation. Given the sparsity of mammogram intensity data, we also analyzed how variations in the intensity thresholds used to binarize images for fractal-dimension estimation follow unique trajectories that may serve as novel indicators of disease progression. Our findings demonstrate that fractal, multifractal, and fractal-functional parameters effectively differentiate between benign and cancerous tissue. Furthermore, the threshold-dependent behavior of intensity-based fractal measures presents distinct patterns in cancer cases. To complement these analyses, we applied the Inverse Participation Ratio (IPR) light localization technique to quantify structural disorder at the microscopic level. This multi-parametric approach, integrating spatial complexity and structural disorder metrics, offers a promising framework for enhancing the sensitivity and specificity of breast cancer detection.
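
A minimal sketch of threshold-dependent box counting, the standard way to estimate a fractal dimension from a binarized image. The box sizes and threshold grid are assumptions; the abstract does not specify the authors' estimator.

    import numpy as np

    def box_counting_dimension(image, threshold, sizes=(2, 4, 8, 16, 32)):
        """Box-counting fractal dimension of the set obtained by thresholding
        a grayscale image (values assumed normalized to [0, 1])."""
        binary = image > threshold
        counts = []
        for s in sizes:
            h = (binary.shape[0] // s) * s
            w = (binary.shape[1] // s) * s
            blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
            # count boxes containing at least one foreground pixel
            counts.append(max(int(blocks.any(axis=(1, 3)).sum()), 1))
        # dimension = slope of log N(s) versus log(1/s)
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
        return slope

    # The threshold sweep whose trajectory is treated as a marker:
    # dims = [box_counting_dimension(img, t) for t in np.linspace(0.1, 0.9, 9)]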

MobNas ensembled model for breast cancer prediction.

Shahzad T, Saqib SM, Mazhar T, Iqbal M, Almogren A, Ghadi YY, Saeed MM, Hamam H

PubMed · May 25, 2025
Breast cancer poses an immense threat to humankind, creating a need to diagnose this devastating disease early, accurately, and simply. While substantial progress has been made in developing machine learning, deep learning, and transfer learning models, issues with diagnostic accuracy and minimizing diagnostic errors persist. This paper introduces MobNAS, a model that uses MobileNetV2 and NASNetLarge to sort breast cancer images into benign, malignant, or normal classes. The study employs a multi-class classification design and uses a publicly available dataset comprising 1,578 ultrasound images, including 891 benign, 421 malignant, and 266 normal cases. Because MobileNetV2 runs well on devices with less computational capability than NASNetLarge requires, the ensemble remains practical to deploy, which enhances its applicability and effectiveness across tasks. The performance of the proposed MobNAS model was tested on the breast cancer image dataset; the accuracy achieved was 97%, the Mean Absolute Error (MAE) was 0.05, and the Matthews Correlation Coefficient (MCC) was 95%. From the findings of this research, it is evident that MobNAS can enhance diagnostic accuracy and reduce existing shortcomings in breast cancer detection.
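
A minimal sketch of a soft-voting ensemble of the two backbones and the reported metrics; the probability arrays below are placeholders standing in for real outputs of trained MobileNetV2 and NASNetLarge models, and the abstract does not state which ensembling rule the authors used.

    import numpy as np
    from sklearn.metrics import (accuracy_score, matthews_corrcoef,
                                 mean_absolute_error)

    # Hypothetical per-class probabilities (benign, malignant, normal = 0/1/2)
    # over a tiny batch; in practice these come from the two trained models.
    p_mobilenet = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
    p_nasnet    = np.array([[0.6, 0.3, 0.1], [0.2, 0.7, 0.1]])
    y_true      = np.array([0, 1])

    p_ensemble = (p_mobilenet + p_nasnet) / 2          # soft voting
    y_pred = p_ensemble.argmax(axis=1)
    print(accuracy_score(y_true, y_pred),
          mean_absolute_error(y_true, y_pred),
          matthews_corrcoef(y_true, y_pred))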

Deep Learning for Breast Cancer Detection: Comparative Analysis of ConvNeXt and EfficientNet

Mahmudul Hasan

arXiv preprint · May 24, 2025
Breast cancer is the most commonly occurring cancer worldwide. This cancer caused 670,000 deaths globally in 2022, as reported by the WHO. Yet since health officials began routine mammography screening for age groups deemed at risk in the 1980s, breast cancer mortality has decreased by 40% in high-income nations. Meanwhile, the number of people receiving a breast cancer diagnosis continues to grow. Reducing cancer-related deaths requires early detection and treatment. This paper compares two convolutional neural networks, ConvNeXt and EfficientNet, for predicting the likelihood of cancer in mammograms from screening exams. Image preprocessing, classification, and performance evaluation are the main stages of the procedure. Several evaluation metrics were used to compare the performance of the models. The results show that ConvNeXt performs better, with a 94.33% AUC score, 93.36% accuracy, and 95.13% F-score, compared to EfficientNet with a 92.34% AUC score, 91.47% accuracy, and 93.06% F-score on the RSNA screening mammography breast cancer dataset.
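
A small sketch of computing the three reported metrics for one model with scikit-learn; the label and score arrays are placeholders standing in for real inference outputs on the RSNA test split, and the 0.5 decision threshold is an assumption.

    import numpy as np
    from sklearn.metrics import roc_auc_score, accuracy_score, f1_score

    def evaluate(name, y_true, y_score, threshold=0.5):
        """Report AUC, accuracy, and F1 for one model's cancer probabilities."""
        y_pred = (y_score >= threshold).astype(int)
        print(f"{name}: AUC={roc_auc_score(y_true, y_score):.4f} "
              f"acc={accuracy_score(y_true, y_pred):.4f} "
              f"F1={f1_score(y_true, y_pred):.4f}")

    # Placeholder labels/scores; real values come from model inference.
    y_true = np.array([0, 1, 1, 0])
    scores = np.array([0.2, 0.9, 0.6, 0.4])
    evaluate("ConvNeXt", y_true, scores)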

MRI-based habitat analysis for intratumoral heterogeneity quantification combined with deep learning for HER2 status prediction in breast cancer.

Li QY, Liang Y, Zhang L, Li JH, Wang BJ, Wang CF

PubMed · May 23, 2025
Human epidermal growth factor receptor 2 (HER2) is a crucial determinant of breast cancer prognosis and treatment options. The study aimed to establish an MRI-based habitat model to quantify intratumoral heterogeneity (ITH) and evaluate its potential in predicting HER2 expression status. Data from 340 patients with pathologically confirmed invasive breast cancer were retrospectively analyzed. Two tasks were designed for this study: Task 1 distinguished between HER2-positive and HER2-negative breast cancer. Task 2 distinguished between HER2-low and HER2-zero breast cancer. We developed the ITH, deep learning (DL), and radiomics signatures based on the features extracted from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Clinical independent predictors were determined by multivariable logistic regression. Finally, a combined model was constructed by integrating the clinical independent predictors, ITH signature, and DL signature. The area under the receiver operating characteristic curve (AUC) served as the standard for assessing the performance of models. In task 1, the ITH signature performed well in the training set (AUC = 0.855) and the validation set (AUC = 0.842). In task 2, the AUCs of the ITH signature were 0.844 and 0.840, respectively, which still showed good prediction performance. In the validation sets of both tasks, the combined model exhibited the best prediction performance, with AUCs of 0.912 and 0.917, respectively, making it the optimal model. A combined model integrating clinical independent predictors, ITH signature, and DL signature can predict HER2 expression status preoperatively and noninvasively.
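
A minimal sketch of the combined model as described: a logistic regression over clinical predictors plus the ITH and DL signature scores. The feature columns and values below are placeholders; the paper's actual clinical predictors are selected by multivariable logistic regression on its own data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    # Each row: [clinical predictor (e.g., age), ITH score, DL score].
    # All values are illustrative placeholders.
    X_train = np.array([[52, 0.61, 0.70], [44, 0.32, 0.25],
                        [61, 0.83, 0.77], [38, 0.22, 0.31]])
    y_train = np.array([1, 0, 1, 0])       # HER2-positive = 1

    combined = LogisticRegression().fit(X_train, y_train)
    auc = roc_auc_score(y_train, combined.predict_proba(X_train)[:, 1])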

Three-Blind Validation Strategy of Deep Learning Models for Image Segmentation.

Larroza A, Pérez-Benito FJ, Tendero R, Perez-Cortes JC, Román M, Llobet R

PubMed · May 21, 2025
Image segmentation plays a central role in computer vision applications such as medical imaging, industrial inspection, and environmental monitoring. However, evaluating segmentation performance can be particularly challenging when ground truth is not clearly defined, as is often the case in tasks involving subjective interpretation. These challenges are amplified by inter- and intra-observer variability, which complicates the use of human annotations as a reliable reference. To address this, we propose a novel validation framework, referred to as the three-blind validation strategy, that enables rigorous assessment of segmentation models in contexts where subjectivity and label variability are significant. The core idea is to have a third independent expert, blind to the labeler identities, assess a shuffled set of segmentations produced by multiple human annotators and/or automated models. This allows for the unbiased evaluation of model performance and helps uncover patterns of disagreement that may indicate systematic issues with either human or machine annotations. The primary objective of this study is to introduce and demonstrate this validation strategy as a generalizable framework for robust model evaluation in subjective segmentation tasks. We illustrate its practical implementation in a mammography use case involving dense tissue segmentation while emphasizing its potential applicability to a broad range of segmentation scenarios.
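
A minimal sketch of the blinding step: pooling segmentations from annotators and models, stripping source identities, and shuffling before third-expert review, with a sealed key for later un-blinding. Function and variable names are illustrative, not the authors' implementation.

    import random

    def blind_pool(segmentations_by_source):
        """Pool masks from human annotators and/or models, strip source
        identity, and shuffle so a third expert can rate them blind.
        Returns the shuffled masks plus a key for un-blinding afterwards."""
        pool = [(src, mask)
                for src, masks in segmentations_by_source.items()
                for mask in masks]
        random.shuffle(pool)
        key = {i: src for i, (src, _) in enumerate(pool)}   # kept sealed
        blinded = [mask for _, mask in pool]
        return blinded, key

    # blinded, key = blind_pool({"annotator_A": masks_a,
    #                            "annotator_B": masks_b,
    #                            "model": masks_m})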

Enhancing nuclei segmentation in breast histopathology images using U-Net with backbone architectures.

C V LP, V G B, Bhooshan RS

PubMed · May 21, 2025
Breast cancer remains a leading cause of mortality among women worldwide, underscoring the need for accurate and timely diagnostic methods. Precise segmentation of nuclei in breast histopathology images is crucial for effective diagnosis and prognosis, offering critical insights into tumor characteristics and informing treatment strategies. This paper presents an enhanced U-Net architecture utilizing ResNet-34 as an advanced backbone, aimed at improving nuclei segmentation performance. The proposed model is evaluated and compared with standard U-Net and its other variants, including U-Net with VGG-16 and Inception-v3 backbones, using the BreCaHad dataset with nuclei masks generated through ImageJ software. The U-Net model with ResNet-34 backbone achieved superior performance, recording an Intersection over Union (IoU) score of 0.795, significantly outperforming the basic U-Net's IoU score of 0.725. The integration of advanced backbones and data augmentation techniques substantially improved segmentation accuracy, especially on limited medical imaging datasets. Comparative analysis demonstrated that ResNet-34 consistently surpassed other configurations across multiple metrics, including IoU, accuracy, precision, and F1 score. Further validation on the BNS and MoNuSeg-2018 datasets confirmed the robustness of the proposed model. This study highlights the potential of advanced deep learning architectures combined with augmentation methods to address challenges in nuclei segmentation, contributing to the development of more effective clinical diagnostic tools and improved patient care outcomes.
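
One common way to instantiate the described backbone variant is with the segmentation_models_pytorch package, shown below together with a plain IoU for binary masks; the paper's exact training configuration (augmentations, loss, input size) is not reproduced here.

    import torch
    import segmentation_models_pytorch as smp   # third-party, assumed installed

    # U-Net with a ResNet-34 encoder, matching the backbone variant above.
    model = smp.Unet(encoder_name="resnet34", encoder_weights="imagenet",
                     in_channels=3, classes=1)

    def iou(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7):
        """Intersection over Union for boolean masks."""
        inter = (pred & target).sum().float()
        union = (pred | target).sum().float()
        return (inter + eps) / (union + eps)

    # pred_mask = torch.sigmoid(model(batch)) > 0.5   # boolean prediction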