Page 9 of 23225 results

A deep learning framework for reconstructing Breast Amide Proton Transfer weighted imaging sequences from sparse frequency offsets to dense frequency offsets.

Yang Q, Su S, Zhang T, Wang M, Dou W, Li K, Ren Y, Zheng Y, Wang M, Xu Y, Sun Y, Liu Z, Tan T

PubMed, Jul 1 2025
Amide Proton Transfer (APT) imaging is a novel functional MRI technique that enables quantification of protein metabolism, but its wide clinical application is largely limited by its long acquisition time. One way to reduce the scanning time is to acquire fewer frequency offset images. However, sparse frequency offset images are inadequate to fit the z-spectrum, the curve essential to quantifying the APT effect, which might compromise its quantification. In this study, we develop a deep learning-based model that reconstructs dense frequency offsets from sparse ones, potentially reducing scanning time. We propose to leverage time-series convolution to extract both short- and long-range spatial and frequency features of the APT imaging sequence. Our proposed model outperforms other seq2seq models, achieving superior reconstruction with a peak signal-to-noise ratio of 45.8 (95% confidence interval (CI): [44.9, 46.7]) and a structural similarity index of 0.989 (95% CI: [0.987, 0.993]) for the tumor region. We integrated a weighted layer into our model to evaluate the impact of individual frequency offsets on the reconstruction process; the weights assigned to the offsets at ±6.5 ppm, 0 ppm, and 3.5 ppm were learned by the model to be the most significant. Experimental results demonstrate that our proposed model effectively reconstructs dense frequency offsets (n = 29, from 7 to -7 ppm at 0.5 ppm intervals) from data with 21 frequency offsets, reducing scanning time by 25%. This work presents a method for shortening the APT imaging acquisition time, offering potential guidance for parameter settings in APT imaging and serving as a valuable reference for clinicians.
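
To make the sampling scheme concrete, here is a minimal numpy sketch of the dense and sparse offset grids described above, with plain linear interpolation as the naive baseline a learned reconstruction is meant to improve on. The choice of which 21 offsets to keep and the Lorentzian-shaped z-spectrum are assumptions for illustration, not the paper's settings.

```python
import numpy as np

# Dense grid the paper targets: 29 offsets from +7 to -7 ppm in 0.5 ppm steps.
dense_offsets = np.arange(7.0, -7.5, -0.5)            # 29 points

# Hypothetical sparse acquisition of 21 offsets (the count reported in the
# abstract; the exact offsets kept by the paper are not given, so this
# even spread is illustrative).
sparse_idx = np.linspace(0, 28, 21).round().astype(int)
sparse_offsets = dense_offsets[sparse_idx]

# Toy z-spectrum: a Lorentzian water dip centred at 0 ppm (illustrative only).
def z_spectrum(offsets_ppm, width=1.5):
    return 1.0 - 0.9 * width**2 / (width**2 + offsets_ppm**2)

measured = z_spectrum(sparse_offsets)

# Baseline a learned model must beat: linear interpolation back onto the
# dense grid. np.interp requires ascending x, hence the flips.
reconstructed = np.interp(
    dense_offsets[::-1], sparse_offsets[::-1], measured[::-1])[::-1]
```

The deep model replaces the interpolation step, exploiting spatial context across voxels rather than fitting each z-spectrum independently.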

Breast tumour classification in DCE-MRI via cross-attention and discriminant correlation analysis enhanced feature fusion.

Pan F, Wu B, Jian X, Li C, Liu D, Zhang N

PubMed, Jul 1 2025
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has proven to be highly sensitive in diagnosing breast tumours, owing to its inherent kinetic and volumetric features. To utilise this kinetics-related and volume-related information, this paper aims to develop and validate a classification model for differentiating benign and malignant breast tumours based on DCE-MRI, through fusing deep features and cross-attention-encoded radiomics features using discriminant correlation analysis (DCA). Classification experiments were conducted on a dataset comprising 261 individuals who underwent DCE-MRI, including those with multiple tumours, yielding 137 benign and 163 malignant tumours. To strengthen the correlation between features and reduce feature redundancy, a novel fusion method that fuses deep features and encoded radiomics features based on DCA (eFF-DCA) is proposed. The eFF-DCA comprises three components: (1) a feature extraction module to capture kinetic information across phases, (2) a radiomics feature encoding module employing a cross-attention mechanism to enhance inter-phase feature correlation, and (3) a DCA-based fusion module that transforms features to maximise intra-class correlation while minimising inter-class redundancy, facilitating effective classification. The proposed eFF-DCA method achieved an accuracy of 90.9% and an area under the receiver operating characteristic curve of 0.942, outperforming methods using single-modal features. The proposed eFF-DCA exploits DCE-MRI kinetic-related and volume-related features to improve breast tumour diagnosis accuracy, but its non-end-to-end design limits multimodal fusion. Future research should explore unified end-to-end deep learning architectures that enable seamless multimodal feature fusion and joint optimisation of feature extraction and classification.

Prediction of axillary lymph node metastasis in triple negative breast cancer using MRI radiomics and clinical features.

Shen Y, Huang R, Zhang Y, Zhu J, Li Y

PubMed, Jul 1 2025
To develop and validate a machine learning-based model to predict axillary lymph node (ALN) metastasis in triple negative breast cancer (TNBC) patients using magnetic resonance imaging (MRI) and clinical characteristics. This retrospective study included TNBC patients from the First Affiliated Hospital of Soochow University and Jiangsu Province Hospital (2016-2023). We analyzed clinical characteristics and radiomic features from T2-weighted MRI. Using LASSO regression for feature selection, we applied Logistic Regression (LR), Random Forest (RF), and Support Vector Machine (SVM) classifiers to build prediction models. A total of 163 patients, with a median age of 53 years (range: 24-73), were divided into a training group (n = 115) and a validation group (n = 48). Among them, 54 (33.13%) had ALN metastasis and 109 (66.87%) did not. Nottingham grade (P = 0.005) and tumor size (P = 0.016) differed significantly between metastatic and non-metastatic cases. In the validation set, the LR-based combined model achieved the highest AUC (0.828, 95% CI: 0.706-0.950) with excellent sensitivity (0.813) and accuracy (0.812). Although the RF-based model had the highest AUC in the training set and the highest specificity (0.906) in the validation set, its performance was less consistent than that of the LR model. MRI-T2WI radiomic features can predict ALN metastasis in TNBC, and their integration into clinical models enhances preoperative prediction and personalized management.
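
The modelling recipe here (LASSO feature selection feeding LR, RF, and SVM classifiers) maps directly onto standard scikit-learn components. The sketch below uses synthetic data in place of the radiomics table and illustrative hyperparameters, so it shows the pipeline shape rather than the study's tuned models.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the T2WI radiomics + clinical feature matrix;
# 163 samples mirrors the cohort size, 100 features is a guess.
X, y = make_classification(n_samples=163, n_features=100, n_informative=10,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(random_state=42),
    "SVM": SVC(probability=True, random_state=42),
}
scores = {}
for name, model in models.items():
    # LASSO keeps only features with nonzero coefficients, as in the paper.
    selector = SelectFromModel(LassoCV(cv=5, random_state=42))
    pipe = make_pipeline(StandardScaler(), selector, model)
    scores[name] = pipe.fit(X_tr, y_tr).score(X_te, y_te)
```

Wrapping selection inside the pipeline keeps the LASSO fit confined to training folds, avoiding the feature-selection leakage that inflates validation AUCs.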

ADAptation: Reconstruction-based Unsupervised Active Learning for Breast Ultrasound Diagnosis

Yaofei Duan, Yuhao Huang, Xin Yang, Luyi Han, Xinyu Xie, Zhiyuan Zhu, Ping He, Ka-Hou Chan, Ligang Cui, Sio-Kei Im, Dong Ni, Tao Tan

arXiv preprint, Jul 1 2025
Deep learning-based diagnostic models often suffer performance drops due to distribution shifts between training (source) and test (target) domains. Collecting and labeling sufficient target-domain data for model retraining is the optimal solution, yet it is limited by time and scarce resources. Active learning (AL) offers an efficient way to reduce annotation costs while maintaining performance, but it struggles with distribution variations across different datasets. In this study, we propose a novel unsupervised active learning framework for domain adaptation, named ADAptation, which efficiently selects informative samples from multi-domain data pools under a limited annotation budget. As a fundamental step, our method first utilizes the distribution-homogenization capability of diffusion models to bridge cross-dataset gaps by translating target images into the source-domain style. We then introduce two key innovations: (a) a hypersphere-constrained contrastive learning network for compact feature clustering, and (b) a dual-scoring mechanism that quantifies and balances sample uncertainty and representativeness. Extensive experiments on four breast ultrasound datasets (three public and one in-house, multi-center) across five common deep classifiers demonstrate that our method surpasses existing strong AL-based competitors, validating its effectiveness and generalization for clinical domain adaptation. The code is available at the anonymized link: https://github.com/miccai25-966/ADAptation.
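
The dual-scoring idea (rank unlabeled samples by a blend of uncertainty and representativeness, then label the top scorers within a budget) can be sketched generically. The entropy/cluster-distance definitions and the trade-off weight below are common choices assumed for illustration; the paper's exact scoring functions come from its contrastive network and are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-ins for an unlabeled pool: classifier softmax outputs and embedding
# features (random here; ADAptation derives features from its
# hypersphere-constrained contrastive network).
pool_probs = rng.dirichlet(np.ones(3), size=500)      # 3-class predictions
pool_feats = rng.normal(size=(500, 32))

# Uncertainty: predictive entropy of each sample.
entropy = -np.sum(pool_probs * np.log(pool_probs + 1e-12), axis=1)

# Representativeness: proximity to the nearest cluster centre in feature
# space (samples in dense regions stand in for many pool neighbours).
km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(pool_feats)
dist = km.transform(pool_feats).min(axis=1)
representativeness = 1.0 / (1.0 + dist)

# Dual score balancing the two terms; lam is a hypothetical trade-off weight.
lam = 0.5
score = lam * (entropy / entropy.max()) \
      + (1 - lam) * (representativeness / representativeness.max())

budget = 20                                           # annotation budget
selected = np.argsort(score)[-budget:]                # indices to annotate
```

Normalising each term before blending keeps either signal from dominating when their raw scales differ.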

A Machine Learning Model for Predicting the HER2 Positive Expression of Breast Cancer Based on Clinicopathological and Imaging Features.

Qin X, Yang W, Zhou X, Yang Y, Zhang N

PubMed, Jul 1 2025
To develop a machine learning (ML) model based on clinicopathological and imaging features to predict Human Epidermal Growth Factor Receptor 2 (HER2) positive expression (HER2-p) in breast cancer (BC), and to compare its performance with that of a logistic regression (LR) model. A total of 2541 consecutive female patients with pathologically confirmed primary breast lesions were enrolled in this study. In chronological order, 2034 patients treated between January 2018 and December 2022 were designated as the retrospective development cohort, and 507 patients treated between January 2023 and May 2024 as the prospective validation cohort. Within the development cohort, patients were randomly divided into a training cohort (n = 1628) and a test cohort (n = 406) in an 8:2 ratio. Pretreatment mammography (MG) and breast MRI data, along with clinicopathological features, were recorded. Extreme Gradient Boosting (XGBoost) combined with an Artificial Neural Network (ANN), and multivariate LR analysis, were employed to extract features associated with HER2 positivity in BC and to develop an ANN model (using XGBoost features) and an LR model, respectively. Predictive value was assessed using receiver operating characteristic (ROC) curves. Following Recursive Feature Elimination with Cross-Validation (RFE-CV) for feature dimensionality reduction, the XGBoost algorithm identified tumor size, suspicious calcifications, Ki-67 index, spiculation, and minimum apparent diffusion coefficient (minimum ADC) as the key feature subset indicative of HER2-p in BC. The constructed ANN model consistently outperformed the LR model, achieving an area under the curve (AUC) of 0.853 (95% CI: 0.837-0.872) in the training cohort, 0.821 (95% CI: 0.798-0.853) in the test cohort, and 0.809 (95% CI: 0.776-0.841) in the validation cohort. The ANN model, built on the significant feature subset identified by the XGBoost algorithm with RFE-CV, demonstrates potential for predicting HER2-p in BC.
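
The two-stage design (RFE-CV over a boosted-tree model to pick the feature subset, then an ANN trained on that subset) can be sketched with scikit-learn. A gradient-boosted classifier stands in for XGBoost (which may not be installed) and an MLP stands in for the ANN; the synthetic data and all hyperparameters are illustrative assumptions, not the study's settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the clinicopathological + imaging feature matrix
# (tumor size, suspicious calcifications, Ki-67, spiculation, minimum ADC, ...).
X, y = make_classification(n_samples=400, n_features=20, n_informative=5,
                           random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=7)

# RFE-CV recursively drops the least important features of the boosted-tree
# model, keeping the subset with the best cross-validated AUC.
rfecv = RFECV(GradientBoostingClassifier(n_estimators=50, random_state=7),
              cv=3, scoring="roc_auc").fit(X_tr, y_tr)
X_tr_sel, X_te_sel = rfecv.transform(X_tr), rfecv.transform(X_te)

# A small ANN trained only on the selected subset, echoing the paper's design.
scaler = StandardScaler().fit(X_tr_sel)
ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                    random_state=7).fit(scaler.transform(X_tr_sel), y_tr)
acc = ann.score(scaler.transform(X_te_sel), y_te)
```

Selecting with a tree ensemble but classifying with a network lets the final model exploit nonlinear interactions among only the features the ensemble found predictive.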

Development and validation of an interpretable machine learning model for diagnosing pathologic complete response in breast cancer.

Zhou Q, Peng F, Pang Z, He R, Zhang H, Jiang X, Song J, Li J

PubMed, Jul 1 2025
Pathologic complete response (pCR) following neoadjuvant chemotherapy (NACT) is a critical prognostic marker for patients with breast cancer, potentially allowing omission of surgery. However, noninvasive and accurate pCR diagnosis remains a significant challenge due to the limitations of current imaging techniques, particularly in cases where tumors completely disappear post-NACT. We developed a novel framework incorporating Dimensional Accumulation for Layered Images (DALI) and an Attention-Box annotation tool to address the unique challenge of analyzing imaging data in which target lesions are absent. These methods transform three-dimensional magnetic resonance imaging into two-dimensional representations and ensure consistent target tracking across time points. Preprocessing techniques, including tissue-region normalization and subtraction imaging, were used to enhance model performance. Imaging features were extracted using radiomics and pretrained deep-learning models, and machine-learning algorithms were integrated into a stacked ensemble model. The approach was developed on the I-SPY 2 dataset and validated with an independent cohort from Tangshan People's Hospital. The stacked ensemble model achieved superior diagnostic performance, with an area under the receiver operating characteristic curve of 0.831 (95% confidence interval, 0.769-0.887) on the test set, outperforming the individual models. Tissue-region normalization and subtraction imaging significantly enhanced diagnostic accuracy, and SHAP analysis identified the variables that contributed to the model predictions, ensuring interpretability. This framework addresses the challenges of noninvasive pCR diagnosis: integrating advanced preprocessing techniques improves feature quality and model performance, supporting clinicians in identifying patients who can safely omit surgery, reducing unnecessary treatment, and improving quality of life for patients with breast cancer.
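
A stacked ensemble of the kind described (several base learners whose out-of-fold predictions feed a meta-learner) is a standard pattern with a direct scikit-learn implementation. The base-learner choices and synthetic feature table below are illustrative assumptions, not the paper's exact configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the radiomics + deep-learning feature table built
# from the DALI-flattened MRI; the real features come from I-SPY 2.
X, y = make_classification(n_samples=300, n_features=30, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

# Base learners produce out-of-fold predictions (cv=5) that train a
# logistic-regression meta-learner, preventing the meta-level from
# overfitting to base-learner training error.
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(random_state=1)),
        ("svm", SVC(probability=True, random_state=1)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

The out-of-fold stacking protocol is what lets the ensemble outperform its individual members without leaking training labels into the meta-learner.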

Breast cancer detection based on histological images using fusion of diffusion model outputs.

Akbari Y, Abdullakutty F, Al Maadeed S, Bouridane A, Hamoudi R

PubMed, Jul 1 2025
The precise detection of breast cancer in histopathological images remains a critical challenge in computational pathology, where accurate tissue segmentation significantly enhances diagnostic accuracy. This study introduces a novel approach leveraging a Conditional Denoising Diffusion Probabilistic Model (DDPM) to improve breast cancer detection through advanced segmentation and feature fusion. The method employs a conditional channel within the DDPM framework, first trained on a breast cancer histopathology dataset and then extended to additional datasets to achieve region-level segmentation of tumor areas and other tissue regions. These segmented regions, combined with the noise predicted by the diffusion model and the original images, are processed through an EfficientNet-B0 network to extract enhanced features. A transformer decoder then fuses these features to generate the final detection results. Extensive experiments optimizing the network architecture and fusion strategies were conducted, and the proposed method was evaluated across four distinct datasets, achieving a peak accuracy of 92.86% on the BRACS dataset, 100% on the BreCaHAD dataset, and 96.66% on the ICIAR2018 dataset. This approach represents a significant advancement in computational pathology, offering a robust tool for breast cancer detection with potential applications in broader medical imaging contexts.
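
The "predicted noise" the method fuses comes from the standard DDPM training setup, where a network learns to predict the noise added by the closed-form forward process q(x_t | x_0) = N(sqrt(alpha_bar_t) x_0, (1 - alpha_bar_t) I). A minimal numpy sketch of that forward step, with the common linear beta schedule assumed for illustration (the paper's conditioning channel and network are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear noise schedule over T steps, a common DDPM default.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)          # cumulative signal retention

def noising(x0, t):
    """Sample x_t ~ q(x_t | x_0) and return the noise the network must predict."""
    eps = rng.normal(size=x0.shape)
    xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
    return xt, eps

x0 = rng.normal(size=(64, 64))                # stand-in histopathology patch
xt, eps = noising(x0, t=500)                  # training pair (input, target)
```

Because alphas_bar decays toward zero, early timesteps keep most of the image while late ones are nearly pure noise, which is why the predicted noise carries tissue-structure information useful for the fusion stage.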

Dual-Modality Virtual Biopsy System Integrating MRI and MG for Noninvasive Predicting HER2 Status in Breast Cancer.

Wang Q, Zhang ZQ, Huang CC, Xue HW, Zhang H, Bo F, Guan WT, Zhou W, Bai GJ

PubMed, Jul 1 2025
Accurate determination of human epidermal growth factor receptor 2 (HER2) expression is critical for guiding targeted therapy in breast cancer. This study aimed to develop and validate a deep learning (DL)-based decision-making visual biomarker system (DM-VBS) for predicting HER2 status using radiomics and DL features derived from magnetic resonance imaging (MRI) and mammography (MG). Radiomics features were extracted from MRI, and DL features were derived from MG. Four submodels were constructed: Model I (MRI-radiomics) and Model III (mammography-DL) for distinguishing HER2-zero/low from HER2-positive cases, and Model II (MRI-radiomics) and Model IV (mammography-DL) for differentiating HER2-zero from HER2-low/positive cases. These submodels were integrated into an XGBoost model for ternary classification of HER2 status. Radiologists assessed imaging features associated with HER2 expression, and model performance was validated using two independent datasets from The Cancer Imaging Archive. A total of 550 patients were divided into training, internal validation, and external validation cohorts. Models I and III achieved an area under the curve (AUC) of 0.800-0.850 for distinguishing HER2-zero/low from HER2-positive cases, while Models II and IV demonstrated AUC values of 0.793-0.847 for differentiating HER2-zero from HER2-low/positive cases. The DM-VBS achieved average accuracies of 85.42%, 80.4%, and 89.68% for HER2-zero, -low, and -positive patients in the validation cohorts, respectively. Imaging features such as lesion size, number of lesions, enhancement type, and microcalcifications differed significantly across HER2 statuses, except between the HER2-zero and -low groups. DM-VBS can predict HER2 status and assist clinicians in making treatment decisions for breast cancer.
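
The integration step (four binary submodel scores combined by a boosted-tree model into a three-way HER2 label) can be sketched generically. Random probabilities stand in for the submodel outputs, and a scikit-learn gradient-boosted classifier stands in for XGBoost; both substitutions are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)

# Stand-in outputs of the four submodels: Models I/III score HER2-zero/low
# vs HER2-positive, Models II/IV score HER2-zero vs HER2-low/positive.
n = 400
y = rng.integers(0, 3, n)                 # 0 = zero, 1 = low, 2 = positive
submodel_probs = np.column_stack([
    rng.random(n),                        # Model I   (MRI radiomics)
    rng.random(n),                        # Model II  (MRI radiomics)
    rng.random(n),                        # Model III (MG deep learning)
    rng.random(n),                        # Model IV  (MG deep learning)
])

# Boosted-tree meta-model maps the four binary scores to the ternary label.
meta = GradientBoostingClassifier(random_state=3).fit(submodel_probs, y)
pred = meta.predict(submodel_probs)
```

Splitting the ternary problem into two complementary binary cut-points (zero/low vs positive, and zero vs low/positive) gives the meta-model two informative axes rather than forcing a single model to separate three classes at once.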

An adaptive deep learning approach based on InBNFus and CNNDen-GRU networks for breast cancer and maternal fetal classification using ultrasound images.

Fatima M, Khan MA, Mirza AM, Shin J, Alasiry A, Marzougui M, Cha J, Chang B

PubMed, Jul 1 2025
Convolutional Neural Networks (CNNs), a sophisticated deep learning technique, have proven highly effective in identifying and classifying abnormalities related to various diseases. Manual classification of these abnormalities is a tedious and time-consuming process; therefore, it is essential to develop computerized techniques. Most existing methods are designed to address a single specific problem, limiting their adaptability. In this work, we propose a novel adaptive deep-learning framework for simultaneously classifying breast cancer and maternal-fetal ultrasound datasets. Data augmentation was applied in the preprocessing phase to address the data imbalance problem. Afterward, two novel architectures are proposed: InBnFUS and CNNDen-GRU. The InBnFUS network combines a 5-block inception-based architecture (Model 1) and a 5-block inverted bottleneck-based architecture (Model 2) through a depth-wise concatenation layer, while CNNDen-GRU incorporates a 5-block dense architecture with an integrated GRU layer. After training, features were extracted from the global average pooling and GRU layers and classified using neural network classifiers. The experimental evaluation achieved enhanced accuracy rates of 99.0% for the breast cancer, 96.6% for the maternal-fetal (common planes), and 94.6% for the maternal-fetal (brain) datasets. Additionally, the models consistently achieve high precision, recall, and F1 scores across all datasets. A comprehensive ablation study has been performed, and the results show the superior performance of the proposed models.
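
The two fusion operations named above, depth-wise concatenation of the branch outputs and global-average-pooled feature extraction, reduce to two array operations. The numpy sketch below uses hypothetical spatial sizes and channel counts (the paper does not state them), so it shows the tensor mechanics rather than the InBnFUS architecture itself.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in feature maps from the two InBnFUS branches (inception-based
# Model 1 and inverted-bottleneck Model 2), NHWC layout, hypothetical shapes.
branch1 = rng.normal(size=(1, 8, 8, 128))     # Model 1 output
branch2 = rng.normal(size=(1, 8, 8, 96))      # Model 2 output

# Depth-wise concatenation: merge the branches along the channel axis,
# keeping each branch's spatial structure intact.
fused = np.concatenate([branch1, branch2], axis=-1)   # shape (1, 8, 8, 224)

# Global average pooling collapses the spatial axes into the feature
# vector that is then passed to the neural-network classifiers.
features = fused.mean(axis=(1, 2))                    # shape (1, 224)
```

Concatenation (rather than addition) preserves both branches' channels unchanged, so the downstream classifier can weight inception-style and bottleneck-style features independently.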

Innovative deep learning classifiers for breast cancer detection through hybrid feature extraction techniques.

Vijayalakshmi S, Pandey BK, Pandey D, Lelisho ME

PubMed, Jul 1 2025
Breast cancer remains a major cause of mortality among women, and early, accurate detection is critical to improving survival rates. This study presents a hybrid classification approach for mammogram analysis that combines handcrafted statistical features and deep learning techniques. The methodology involves preprocessing with the Shearlet Transform and segmentation using Improved Otsu thresholding and Canny edge detection, followed by feature extraction through the Gray Level Co-occurrence Matrix (GLCM), the Gray Level Run Length Matrix (GLRLM), and first-order statistical descriptors. These features are input into a 2D BiLSTM-CNN model designed to learn spatial and sequential patterns in mammogram images. Evaluated on the MIAS dataset, the proposed method achieved 97.14% accuracy, outperforming several benchmark models. The results indicate that this hybrid strategy improves classification performance and may assist radiologists in more effective breast cancer screening.
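
Of the handcrafted features listed, the GLCM is the least self-explanatory: it counts how often pairs of grey levels co-occur at a fixed pixel offset, and texture descriptors are computed from the normalised counts. A minimal numpy sketch for one horizontal offset, on a random stand-in patch (the real pipeline works on Shearlet-preprocessed, segmented mammograms):

```python
import numpy as np

rng = np.random.default_rng(9)

# Tiny stand-in for a mammogram patch quantised to 8 grey levels.
levels = 8
img = rng.integers(0, levels, size=(32, 32))

# GLCM for a horizontal offset of one pixel: tally co-occurring grey pairs.
glcm = np.zeros((levels, levels))
np.add.at(glcm, (img[:, :-1].ravel(), img[:, 1:].ravel()), 1)
glcm /= glcm.sum()                            # normalise to joint probabilities

# Two classic GLCM descriptors plus first-order statistics, echoing the
# handcrafted feature set fed to the 2D BiLSTM-CNN.
i, j = np.indices(glcm.shape)
contrast = float(np.sum(glcm * (i - j) ** 2))
energy = float(np.sum(glcm ** 2))
first_order = {"mean": float(img.mean()), "std": float(img.std())}
```

In practice the GLCM is computed for several offsets and angles and the descriptors are averaged, giving rotation-robust texture features.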