Prediction of axillary lymph node metastasis in triple negative breast cancer using MRI radiomics and clinical features.

Shen Y, Huang R, Zhang Y, Zhu J, Li Y

PubMed · Jul 1 2025
To develop and validate a machine learning-based model to predict axillary lymph node (ALN) metastasis in triple negative breast cancer (TNBC) patients using magnetic resonance imaging (MRI) and clinical characteristics. This retrospective study included TNBC patients from the First Affiliated Hospital of Soochow University and Jiangsu Province Hospital (2016-2023). We analyzed clinical characteristics and radiomic features from T2-weighted MRI. Using LASSO regression for feature selection, we applied Logistic Regression (LR), Random Forest (RF), and Support Vector Machine (SVM) classifiers to build prediction models. A total of 163 patients, with a median age of 53 years (range: 24-73), were divided into a training group (n = 115) and a validation group (n = 48). Among them, 54 (33.13%) had ALN metastasis and 109 (66.87%) did not. Nottingham grade (P = 0.005) and tumor size (P = 0.016) differed significantly between metastatic and non-metastatic cases. In the validation set, the LR-based combined model achieved the highest AUC (0.828, 95% CI: 0.706-0.950) with excellent sensitivity (0.813) and accuracy (0.812). Although the RF-based model had the highest AUC in the training set and the highest specificity (0.906) in the validation set, its performance was less consistent than that of the LR model. MRI T2WI radiomic features can predict ALN metastasis in TNBC, and integrating them into clinical models enhances preoperative prediction and personalizes management.
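A minimal sketch of the pipeline this abstract describes (LASSO feature selection feeding LR, RF, and SVM classifiers), assuming the radiomic features are already extracted into a feature matrix; the synthetic data, split, and hyperparameters below are illustrative stand-ins, not the study's.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for the 163-patient radiomics table (X) and ALN status (y).
X, y = make_classification(n_samples=163, n_features=100, n_informative=10, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=48, stratify=y, random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf", probability=True, random_state=0),
}
for name, clf in models.items():
    pipe = make_pipeline(
        StandardScaler(),
        SelectFromModel(LassoCV(cv=5, random_state=0)),  # LASSO keeps a sparse feature subset
        clf,
    )
    pipe.fit(X_tr, y_tr)
    print(f"{name}: validation AUC = {roc_auc_score(y_val, pipe.predict_proba(X_val)[:, 1]):.3f}")
```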

ADAptation: Reconstruction-based Unsupervised Active Learning for Breast Ultrasound Diagnosis

Yaofei Duan, Yuhao Huang, Xin Yang, Luyi Han, Xinyu Xie, Zhiyuan Zhu, Ping He, Ka-Hou Chan, Ligang Cui, Sio-Kei Im, Dong Ni, Tao Tan

arXiv preprint · Jul 1 2025
Deep learning-based diagnostic models often suffer performance drops due to distribution shifts between training (source) and test (target) domains. Collecting and labeling sufficient target-domain data for model retraining is an optimal solution, yet it is limited by time and scarce resources. Active learning (AL) offers an efficient way to reduce annotation costs while maintaining performance, but it struggles to handle distribution variations across different datasets. In this study, we propose a novel unsupervised Active learning framework for Domain Adaptation, named ADAptation, which efficiently selects informative samples from multi-domain data pools under a limited annotation budget. As a fundamental step, our method first uses the distribution-homogenization capability of diffusion models to bridge cross-dataset gaps by translating target images into the source-domain style. We then introduce two key innovations: (a) a hypersphere-constrained contrastive learning network for compact feature clustering, and (b) a dual-scoring mechanism that quantifies and balances sample uncertainty and representativeness. Extensive experiments on four breast ultrasound datasets (three public and one in-house, multi-center) across five common deep classifiers demonstrate that our method surpasses strong existing AL-based competitors, validating its effectiveness and generalization for clinical domain adaptation. The code is available at the anonymized link: https://github.com/miccai25-966/ADAptation.
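The dual-scoring mechanism in (b) can be sketched compactly: an uncertainty term from predictive entropy and a representativeness term from distance to feature-space cluster centroids, blended with a weight. The function names, the KMeans choice, and the blend weight below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def dual_score(probs, feats, n_clusters=10, alpha=0.5):
    """Blend sample uncertainty and representativeness into one ranking score."""
    # Uncertainty: predictive entropy of the classifier's class probabilities.
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    # Representativeness: a sample close to a cluster centroid stands in for
    # many neighbors, so a smaller distance should score higher.
    dists = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_transform(feats).min(axis=1)
    representativeness = 1.0 / (1.0 + dists)
    norm = lambda v: (v - v.min()) / (v.max() - v.min() + 1e-12)  # rescale to [0, 1]
    return alpha * norm(entropy) + (1 - alpha) * norm(representativeness)

# Example: select a 50-sample annotation budget from a 500-sample pool.
probs = np.random.dirichlet(np.ones(2), size=500)  # stand-in classifier outputs
feats = np.random.randn(500, 128)                  # stand-in embeddings
budget_idx = np.argsort(-dual_score(probs, feats))[:50]
```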

Knowledge mapping of ultrasound technology and triple-negative breast cancer: a visual and bibliometric analysis.

Wan Y, Shen Y, Wang J, Zhang T, Fu X

PubMed · Jul 1 2025
This study aims to explore the application of ultrasound technology in triple-negative breast cancer (TNBC) using bibliometric methods. It presents a visual knowledge map to exhibit global research dynamics and elucidates the research directions, hotspots, trends, and frontiers in this field. The Web of Science Core Collection database was used, and CiteSpace and VOSviewer software were employed to visualize the annual publication volume, collaborative networks (including countries, institutions, and authors), citation characteristics (such as references, co-citations, and publications), and keywords (including emergence and clustering) related to ultrasound applications in TNBC over the past 15 years. A total of 310 papers were included. The first paper was published in 2010, and publication volume grew rapidly thereafter, especially after 2020. China emerged as the leading country in terms of publication volume, while Shanghai Jiao Tong University had the highest output among institutions. Memorial Sloan Kettering Cancer Center was recognized as a key research institution within this domain. Adrada BE was the most prolific author by publication count, and Ko ES had the highest citation frequency among authors. Co-occurrence analysis of keywords revealed that the top three keywords by frequency were "triple-negative breast cancer," "breast cancer," and "sonography." The timeline visualization indicated strong temporal continuity in the clusters of "breast cancer," "recommendations," "biopsy," "estrogen receptor," and "radiomics." The keyword with the highest emergence value was "neoplasms" (6.80). Trend analysis of emerging terms indicated a growing focus on "machine learning approaches," "prognosis," and "molecular subtypes," with "machine learning approach" currently a significant keyword. This study provides a systematic analysis of the current state of ultrasound technology applications in TNBC and highlights that machine learning methods have emerged as a central focus and frontier of this research area, both now and for the foreseeable future. The findings offer valuable theoretical insights for the application of ultrasound technology in TNBC diagnosis and treatment and establish a solid foundation for further advances in TNBC-related medical imaging research.
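For readers unfamiliar with the keyword co-occurrence analysis that CiteSpace and VOSviewer perform, the core counting step reduces to tallying keyword pairs per record; the toy records below are invented for illustration.

```python
from collections import Counter
from itertools import combinations

# Each record is the author-keyword list of one paper (invented examples).
records = [
    ["triple-negative breast cancer", "sonography", "radiomics"],
    ["breast cancer", "sonography", "machine learning approach"],
    ["triple-negative breast cancer", "breast cancer", "prognosis"],
]

pair_counts = Counter()
for keywords in records:
    for a, b in combinations(sorted(set(keywords)), 2):  # unordered keyword pairs
        pair_counts[(a, b)] += 1

for (a, b), n in pair_counts.most_common(5):
    print(f"{a} <-> {b}: {n}")
```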

Multiparametric MRI-based Interpretable Machine Learning Radiomics Model for Distinguishing Between Luminal and Non-luminal Tumors in Breast Cancer: A Multicenter Study.

Zhou Y, Lin G, Chen W, Chen Y, Shi C, Peng Z, Chen L, Cai S, Pan Y, Chen M, Lu C, Ji J, Chen S

PubMed · Jul 1 2025
To construct and validate an interpretable machine learning (ML) radiomics model derived from multiparametric magnetic resonance imaging (MRI) images to differentiate between luminal and non-luminal breast cancer (BC) subtypes. This study enrolled 1098 BC participants from four medical centers, categorized into a training cohort (n = 580) and validation cohorts 1-3 (n = 252, 89, and 177, respectively). Multiparametric MRI-based radiomics features were extracted from T1-weighted imaging (T1WI), T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), apparent diffusion coefficient (ADC), and dynamic contrast-enhanced (DCE) imaging. Five ML algorithms were applied to develop radiomics models, from which the best-performing model was identified. An ML-based combined model incorporating the optimal radiomics features and clinical predictors was then constructed, with performance assessed through receiver operating characteristic (ROC) analysis. The Shapley additive explanations (SHAP) method was used to assess model interpretability. Tumor size and MR-reported lymph node status were chosen as significant clinical variables. Thirteen radiomics features were identified from the multiparametric MRI images. The extreme gradient boosting (XGBoost) radiomics model performed best, achieving areas under the curve (AUCs) of 0.941, 0.903, 0.862, and 0.894 across the training and validation cohorts 1-3, respectively. The XGBoost combined model showed favorable discriminative power, with AUCs of 0.956, 0.912, 0.894, and 0.906 in the training and validation cohorts 1-3, respectively. SHAP visualization facilitated global interpretation, identifying "ADC_wavelet-HLH_glszm_ZoneEntropy" and "DCE_wavelet-HLL_gldm_DependenceVariance" as the most significant features for the model's predictions. The XGBoost combined model derived from multiparametric MRI may proficiently differentiate between luminal and non-luminal BC and aid in treatment decision-making. An interpretable machine learning radiomics model can preoperatively predict luminal and non-luminal subtypes in breast cancer, thereby aiding therapeutic decision-making.
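A hedged sketch of the interpretation step: an XGBoost classifier plus SHAP's TreeExplainer for global feature attribution. The data are synthetic; only the two top-ranked feature names are taken from the abstract, and the clinical columns are assumptions.

```python
import numpy as np
import pandas as pd
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
cols = ["ADC_wavelet-HLH_glszm_ZoneEntropy",        # named in the abstract
        "DCE_wavelet-HLL_gldm_DependenceVariance",  # named in the abstract
        "tumor_size", "mr_lymph_node_status"]       # assumed clinical predictors
X = pd.DataFrame(rng.normal(size=(580, len(cols))), columns=cols)
y = (X.iloc[:, 0] + 0.5 * X.iloc[:, 1] + rng.normal(scale=0.5, size=580) > 0).astype(int)

model = XGBClassifier(n_estimators=300, max_depth=3, eval_metric="logloss")
model.fit(X, y)

# TreeExplainer yields per-sample, per-feature attributions; the summary plot
# is the usual global view, ranking features by mean absolute SHAP value.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)
```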

Breast cancer detection based on histological images using fusion of diffusion model outputs.

Akbari Y, Abdullakutty F, Al Maadeed S, Bouridane A, Hamoudi R

PubMed · Jul 1 2025
The precise detection of breast cancer in histopathological images remains a critical challenge in computational pathology, where accurate tissue segmentation significantly enhances diagnostic accuracy. This study introduces a novel approach leveraging a Conditional Denoising Diffusion Probabilistic Model (DDPM) to improve breast cancer detection through advanced segmentation and feature fusion. The method employs a conditional channel within the DDPM framework, first trained on a breast cancer histopathology dataset and then extended to additional datasets to achieve region-level segmentation of tumor areas and other tissue regions. These segmented regions, combined with the predicted noise from the diffusion model and the original images, are processed through an EfficientNet-B0 network to extract enhanced features. A transformer decoder then fuses these features to generate the final detection results. Extensive experiments optimizing the network architecture and fusion strategies were conducted, and the proposed method was evaluated across four distinct datasets, achieving a peak accuracy of 92.86% on the BRACS dataset, 100% on the BreCaHAD dataset, and 96.66% on the ICIAR2018 dataset. This approach represents a significant advancement in computational pathology, offering a robust tool for breast cancer detection with potential applications in broader medical imaging contexts.
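One plausible reading of the fusion stage, sketched below: each of the three inputs (original image, DDPM segmentation, predicted noise) is encoded with EfficientNet-B0, and a small transformer decoder attends over the three feature tokens. Shapes, depths, and the learned-query design are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class FusionDetector(nn.Module):
    def __init__(self, n_classes=2, d_model=1280):
        super().__init__()
        # Shared EfficientNet-B0 trunk producing one 1280-d token per stream.
        backbone = efficientnet_b0(weights=None)
        self.encoder = nn.Sequential(backbone.features, nn.AdaptiveAvgPool2d(1))
        layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.fuser = nn.TransformerDecoder(layer, num_layers=2)
        self.query = nn.Parameter(torch.randn(1, 1, d_model))  # learned fusion query
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, image, seg_map, noise_map):
        # Encode each stream, then let the query attend over the three tokens.
        tokens = torch.stack(
            [self.encoder(s).flatten(1) for s in (image, seg_map, noise_map)], dim=1)
        fused = self.fuser(self.query.expand(image.size(0), -1, -1), tokens)
        return self.head(fused.squeeze(1))

x = torch.randn(2, 3, 224, 224)
logits = FusionDetector()(x, x, x)  # stand-in tensors for all three streams
```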

An adaptive deep learning approach based on InBNFus and CNNDen-GRU networks for breast cancer and maternal fetal classification using ultrasound images.

Fatima M, Khan MA, Mirza AM, Shin J, Alasiry A, Marzougui M, Cha J, Chang B

PubMed · Jul 1 2025
Convolutional Neural Networks (CNNs), a sophisticated deep learning technique, have proven highly effective in identifying and classifying abnormalities related to various diseases. Manual classification of such abnormalities is a tedious and time-consuming process, so a computerized technique is essential. Most existing methods are designed to address a single specific problem, limiting their adaptability. In this work, we propose a novel adaptive deep-learning framework for simultaneously classifying breast cancer and maternal-fetal ultrasound datasets. Data augmentation was applied in the preprocessing phase to address the data imbalance problem. Afterward, two novel architectures are proposed: InBnFUS and CNNDen-GRU. The InBnFUS network combines a 5-block inception-based architecture (Model 1) and a 5-block inverted bottleneck-based architecture (Model 2) through a depth-wise concatenation layer, while CNNDen-GRU incorporates a 5-block dense architecture with an integrated GRU layer. After training, features were extracted from the global average pooling and GRU layers and classified using neural network classifiers. The experimental evaluation achieved enhanced accuracy rates of 99.0% for the breast cancer, 96.6% for the maternal-fetal (common planes), and 94.6% for the maternal-fetal (brain) datasets. Additionally, the models consistently achieve high precision, recall, and F1 scores across all datasets. A comprehensive ablation study was performed, and the results show the superior performance of the proposed models.
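The depth-wise concatenation at the heart of InBnFUS amounts to fusing two branches along the channel axis before a shared classification head; the toy branches below stand in for the inception-based and inverted-bottleneck models, so treat this as a shape-level sketch only.

```python
import torch
import torch.nn as nn

branch_a = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())  # "Model 1" stand-in
branch_b = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())  # "Model 2" stand-in

class DepthwiseConcatFusion(nn.Module):
    def __init__(self, a, b, n_classes=3):
        super().__init__()
        self.a, self.b = a, b
        self.pool = nn.AdaptiveAvgPool2d(1)          # global average pooling
        self.classifier = nn.Linear(32 + 32, n_classes)

    def forward(self, x):
        fused = torch.cat([self.a(x), self.b(x)], dim=1)  # concatenate along channels
        return self.classifier(self.pool(fused).flatten(1))

logits = DepthwiseConcatFusion(branch_a, branch_b)(torch.randn(4, 3, 224, 224))
```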

BIScreener: enhancing breast cancer ultrasound diagnosis through integrated deep learning with interpretability.

Chen Y, Wang P, Ouyang J, Tan M, Nie L, Zhang Y, Wang T

PubMed · Jun 30 2025
Breast cancer is the leading cause of death among women worldwide, and early detection through the standardized BI-RADS framework helps physicians assess the risk of malignancy and guide appropriate diagnostic and treatment decisions. In this study, an interpretable deep learning model (BIScreener) was proposed for predicting BI-RADS classifications from breast ultrasound images, aiding the accurate assessment of breast cancer risk and improving diagnostic efficiency. BIScreener uses the stacked generalization of three pretrained convolutional neural networks to analyze ultrasound images obtained from two specific instruments (Mindray R5 and HITACHI) used at local hospitals. BIScreener achieved a total classification accuracy of 90.0% and a ROC-AUC of 0.982 on the external test set for five BI-RADS categories, and a total classification accuracy of 83.8% with a ROC-AUC of 0.967 for seven BI-RADS categories. In addition, the model improved the diagnostic accuracy of two radiologists by more than 8.1% for five BI-RADS categories and by more than 4.8% for seven BI-RADS categories, and reduced the explanation time by more than 19.0%, demonstrating its potential to accelerate and improve the breast cancer diagnosis process.
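Stacked generalization, as used by BIScreener, trains a second-level model on the class probabilities emitted by the base networks. The sketch below fakes the three CNNs with a toy probability generator; only the stacking arithmetic is the point.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n, n_classes = 600, 5  # e.g., five BI-RADS categories
y = rng.integers(0, n_classes, size=n)

def fake_cnn_probs(y, skill):
    """Stand-in for a pretrained CNN: noisy probabilities favoring the truth."""
    logits = rng.normal(size=(len(y), n_classes))
    logits[np.arange(len(y)), y] += skill
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

# Concatenate the three base models' probability vectors as meta-features.
meta_X = np.hstack([fake_cnn_probs(y, s) for s in (1.0, 1.5, 2.0)])

meta = LogisticRegression(max_iter=1000).fit(meta_X[:480], y[:480])
print("held-out accuracy:", accuracy_score(y[480:], meta.predict(meta_X[480:])))
```

In practice the base-model probabilities should come from out-of-fold predictions, so the meta-learner never sees probabilities produced on a base model's own training data.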

Ultrasound Radio Frequency Time Series for Tissue Typing: Experiments on In-Vivo Breast Samples Using Texture-Optimized Features and Multi-Origin Method of Classification (MOMC).

Arab M, Fallah A, Rashidi S, Dastjerdi MM, Ahmadinejad N

PubMed · Jun 30 2025
One of the most promising auxiliaries for screening breast cancer (BC) is the ultrasound (US) radio-frequency (RF) time series, which has the advantage over other methods of requiring no supplementary equipment. This article proposes a machine learning (ML) method for the automated categorization of breast lesions (benign, probably benign, suspicious, or malignant) using features extracted from the accumulated US RF time series. In this research, 220 data points from the aforementioned categories, recorded from 118 patients, were analyzed. The RFTSBU dataset was registered by a SuperSonic Imagine Aixplorer® medical/research system fitted with a linear transducer. An expert radiologist manually selected regions of interest (ROIs) in B-mode images before 283 features were extracted from each ROI, using textural features such as the Gabor filter (GF), gray-level co-occurrence matrix (GLCM), gray-level run-length matrix (GLRLM), gray-level size zone matrix (GLSZM), and gray-level dependence matrix (GLDM). Subsequently, particle swarm optimization (PSO) narrowed the features to 131 highly effective ones. Ultimately, the features were classified using an innovative multi-origin method of classification (MOMC), marking a significant leap in BC diagnosis. Employing 5-fold cross-validation, the study achieved notable accuracy rates of 98.57 ± 1.09%, 91.53 ± 0.89%, and 83.71 ± 1.30% for 2-, 3-, and 4-class classification, respectively, using MOMC-SVM and MOMC-ensemble classifiers. This research introduces an innovative ML-based approach to differentiating between diverse breast lesion types using in vivo US RF time series data. The findings underscore its efficacy in improving classification accuracy, promising significant strides in computer-aided diagnosis (CAD) for BC screening.
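A sketch of one texture-feature family named above (GLCM) feeding an SVM with 5-fold cross-validation; the synthetic patches, and the omission of the GF/GLRLM/GLSZM/GLDM features and the PSO and MOMC stages, make this an illustration only.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def glcm_features(patch):
    """Contrast/homogeneity/energy/correlation at two angles, distance 1."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Stand-in ROIs: one class smoother, the other noisier in texture.
X, y = [], []
for label, noise in ((0, 10), (1, 60)):
    for _ in range(60):
        patch = np.clip(128 + rng.normal(0, noise, (32, 32)), 0, 255).astype(np.uint8)
        X.append(glcm_features(patch))
        y.append(label)

scores = cross_val_score(SVC(kernel="rbf"), np.array(X), np.array(y), cv=5)
print(f"5-fold accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```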

Towards 3D Semantic Image Synthesis for Medical Imaging

Wenwu Tang, Khaled Seyam, Bin Yang

arXiv preprint · Jun 30 2025
In the medical domain, acquiring large datasets is challenging due to both accessibility issues and stringent privacy regulations. Consequently, data availability and privacy protection are major obstacles to applying machine learning in medical imaging. To address this, our study proposes Med-LSDM (Latent Semantic Diffusion Model), which operates directly in the 3D domain and leverages de-identified semantic maps to generate synthetic data as a method of privacy preservation and data augmentation. Unlike many existing methods that focus on generating 2D slices, Med-LSDM is designed specifically for 3D semantic image synthesis, making it well-suited for applications requiring full volumetric data. Med-LSDM incorporates a guiding mechanism that controls the 3D image generation process by applying a diffusion model within the latent space of a pre-trained VQ-GAN. By operating in the compressed latent space, the model significantly reduces computational complexity while still preserving critical 3D spatial details. Our approach demonstrates strong performance in 3D semantic medical image synthesis, achieving a 3D-FID score of 0.0054 on the conditional Duke Breast dataset and a Dice score (0.70964) similar to that of real images (0.71496). These results demonstrate that the synthetic data from our model have a small domain gap with real data and are useful for data augmentation.
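The core idea (running diffusion in a frozen autoencoder's compressed latent space, conditioned on a semantic map) can be caricatured in a few lines. Every module below is a toy stand-in, the noising schedule is deliberately simplified, and conditioning by concatenation is an assumption rather than the paper's guiding mechanism.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Conv3d(1, 4, kernel_size=4, stride=4)           # stand-in VQ-GAN encoder
decoder = nn.ConvTranspose3d(4, 1, kernel_size=4, stride=4)  # stand-in decoder
denoiser = nn.Conv3d(8, 4, kernel_size=3, padding=1)         # latent + condition channels

def training_step(volume, semantic_map, t_frac=0.5):
    with torch.no_grad():
        z = encoder(volume)                  # work on the small latent grid
    noise = torch.randn_like(z)
    z_t = (1 - t_frac) * z + t_frac * noise  # simplified stand-in for the DDPM schedule
    cond = encoder(semantic_map)             # assumed: condition via the same encoder
    pred = denoiser(torch.cat([z_t, cond], dim=1))
    return F.mse_loss(pred, noise)           # epsilon-prediction objective

vol = torch.randn(1, 1, 64, 64, 64)          # a 64^3 volume becomes a 16^3 latent
loss = training_step(vol, vol.clone())
loss.backward()
synthetic = decoder(encoder(vol))            # decoding maps latents back to volume space
```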

Federated Breast Cancer Detection Enhanced by Synthetic Ultrasound Image Augmentation

Hongyi Pan, Ziliang Hong, Gorkem Durak, Ziyue Xu, Ulas Bagci

arXiv preprint · Jun 29 2025
Federated learning (FL) has emerged as a promising paradigm for collaboratively training deep learning models across institutions without exchanging sensitive medical data. However, its effectiveness is often hindered by limited data availability and non-independent and identically distributed (non-IID) data across participating clients, which can degrade model performance and generalization. To address these challenges, we propose a generative AI-based data augmentation framework that integrates synthetic image sharing into the federated training process for breast cancer diagnosis via ultrasound images. Specifically, we train two simple class-specific Deep Convolutional Generative Adversarial Networks: one for benign and one for malignant lesions. We then simulate a realistic FL setting using three publicly available breast ultrasound image datasets: BUSI, BUS-BRA, and UDIAT. FedAvg and FedProx are adopted as baseline FL algorithms. Experimental results show that incorporating a suitable number of synthetic images improved the average AUC from 0.9206 to 0.9237 for FedAvg and from 0.9429 to 0.9538 for FedProx. We also note that excessive use of synthetic data reduced performance, underscoring the importance of maintaining a balanced ratio of real and synthetic samples. Our findings highlight the potential of generative AI-based data augmentation to enhance FL results in the breast ultrasound image classification task.
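A minimal FedAvg sketch matching the setting described: each client trains locally on its own batches, then the server averages the weights. The model, the three client loaders, and the equal-weight average (FedAvg proper weights clients by sample count) are simplifications.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def fedavg(global_model, client_loaders, rounds=3, lr=1e-3):
    for _ in range(rounds):
        client_states = []
        for loader in client_loaders:  # one pass of local training per client
            local = copy.deepcopy(global_model)
            opt = torch.optim.SGD(local.parameters(), lr=lr)
            for x, y in loader:
                opt.zero_grad()
                F.cross_entropy(local(x), y).backward()
                opt.step()
            client_states.append(local.state_dict())
        # Server step: element-wise average of the client weights.
        avg = {k: torch.stack([s[k].float() for s in client_states]).mean(0)
               for k in client_states[0]}
        global_model.load_state_dict(avg)
    return global_model

model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))
# Three clients (e.g., BUSI, BUS-BRA, UDIAT); in the paper's setting each
# client's batches would mix real and DCGAN-generated images. Random
# tensors stand in here.
clients = [[(torch.randn(8, 1, 64, 64), torch.randint(0, 2, (8,)))] for _ in range(3)]
fedavg(model, clients)
```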