Page 1 of 14133 results

Breast tumor diagnosis via multimodal deep learning using ultrasound B-mode and Nakagami images.

Muhtadi S, Gallippi CM

PubMed | Nov 1 2025
We propose and evaluate multimodal deep learning (DL) approaches that combine ultrasound (US) B-mode and Nakagami parametric images for breast tumor classification. It is hypothesized that integrating tissue brightness information from B-mode images with scattering properties from Nakagami images will enhance diagnostic performance compared with single-input approaches. An EfficientNetV2B0 network was used to develop multimodal DL frameworks that took as input (i) numerical two-dimensional (2D) maps or (ii) rendered red-green-blue (RGB) representations of both B-mode and Nakagami data. The diagnostic performance of these frameworks was compared with single-input counterparts using 831 US acquisitions from 264 patients. In addition, gradient-weighted class activation mapping was applied to evaluate diagnostically relevant information utilized by the different networks. The multimodal architectures demonstrated significantly higher area under the receiver operating characteristic curve (AUC) values ( <math xmlns="http://www.w3.org/1998/Math/MathML"><mrow><mi>p</mi> <mo><</mo> <mn>0.05</mn></mrow> </math> ) than their monomodal counterparts, achieving an average improvement of 10.75%. In addition, the multimodal networks incorporated, on average, 15.70% more diagnostically relevant tissue information. Among the multimodal models, those using RGB representations as input outperformed those that utilized 2D numerical data maps ( <math xmlns="http://www.w3.org/1998/Math/MathML"><mrow><mi>p</mi> <mo><</mo> <mn>0.05</mn></mrow> </math> ). The top-performing multimodal architecture achieved a mean AUC of 0.896 [95% confidence interval (CI): 0.813 to 0.959] when performance was assessed at the image level and 0.848 (95% CI: 0.755 to 0.903) when assessed at the lesion level. 
Incorporating B-mode and Nakagami information together in a multimodal DL framework improved classification outcomes and increased the amount of diagnostically relevant information accessed by networks, highlighting the potential for automating and standardizing US breast cancer diagnostics to enhance clinical outcomes.
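The AUC comparisons above can be made concrete with a minimal, self-contained sketch of the standard Mann-Whitney formulation of ROC AUC (the scores below are invented for illustration; this is not the authors' code):

```python
def auc_from_scores(pos_scores, neg_scores):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive (malignant) case scores higher than a
    randomly chosen negative (benign) case, counting ties as 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical network outputs for malignant and benign lesions.
print(auc_from_scores([0.91, 0.85, 0.62, 0.78], [0.30, 0.45, 0.70]))
```

In a study such as this one, the per-image scores would come from the trained multimodal network, with a permutation or bootstrap test supplying the p < 0.05 comparison.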

Robust evaluation of tissue-specific radiomic features for classifying breast tissue density grades.

Dong V, Mankowski W, Silva Filho TM, McCarthy AM, Kontos D, Maidment ADA, Barufaldi B

PubMed | Nov 1 2025
Breast cancer risk depends on an accurate assessment of breast density due to lesion masking. Although governed by standardized guidelines, radiologist assessment of breast density is still highly variable. Automated breast density assessment tools leverage deep learning but are limited by model robustness and interpretability. We assessed the robustness of a feature selection methodology (RFE-SHAP) for classifying breast density grades using tissue-specific radiomic features extracted from raw central projections of digital breast tomosynthesis screenings ( <math xmlns="http://www.w3.org/1998/Math/MathML"> <mrow> <msub><mrow><mi>n</mi></mrow> <mrow><mi>I</mi></mrow> </msub> <mo>=</mo> <mn>651</mn></mrow> </math> , <math xmlns="http://www.w3.org/1998/Math/MathML"> <mrow> <msub><mrow><mi>n</mi></mrow> <mrow><mi>II</mi></mrow> </msub> <mo>=</mo> <mn>100</mn></mrow> </math> ). RFE-SHAP leverages traditional and explainable AI methods to identify highly predictive and influential features. A simple logistic regression (LR) classifier was used to assess classification performance, and unsupervised clustering was employed to investigate the intrinsic separability of density grade classes. 
LR classifiers yielded cross-validated areas under the receiver operating characteristic (AUCs) per density grade of [ <math xmlns="http://www.w3.org/1998/Math/MathML"><mrow><mi>A</mi></mrow> </math> : <math xmlns="http://www.w3.org/1998/Math/MathML"><mrow><mn>0.909</mn> <mo>±</mo> <mn>0.032</mn></mrow> </math> , <math xmlns="http://www.w3.org/1998/Math/MathML"><mrow><mi>B</mi></mrow> </math> : <math xmlns="http://www.w3.org/1998/Math/MathML"><mrow><mn>0.858</mn> <mo>±</mo> <mn>0.027</mn></mrow> </math> , <math xmlns="http://www.w3.org/1998/Math/MathML"><mrow><mi>C</mi></mrow> </math> : <math xmlns="http://www.w3.org/1998/Math/MathML"><mrow><mn>0.927</mn> <mo>±</mo> <mn>0.013</mn></mrow> </math> , <math xmlns="http://www.w3.org/1998/Math/MathML"><mrow><mi>D</mi></mrow> </math> : <math xmlns="http://www.w3.org/1998/Math/MathML"><mrow><mn>0.890</mn> <mo>±</mo> <mn>0.089</mn></mrow> </math> ] and an AUC of <math xmlns="http://www.w3.org/1998/Math/MathML"><mrow><mn>0.936</mn> <mo>±</mo> <mn>0.016</mn></mrow> </math> for classifying patients as nondense or dense. In external validation, we observed per density grade AUCs of [ <math xmlns="http://www.w3.org/1998/Math/MathML"><mrow><mi>A</mi></mrow> </math> : 0.880, <math xmlns="http://www.w3.org/1998/Math/MathML"><mrow><mi>B</mi></mrow> </math> : 0.779, <math xmlns="http://www.w3.org/1998/Math/MathML"><mrow><mi>C</mi></mrow> </math> : 0.878, <math xmlns="http://www.w3.org/1998/Math/MathML"><mrow><mi>D</mi></mrow> </math> : 0.673] and nondense/dense AUC of 0.823. Unsupervised clustering highlighted the ability of these features to characterize different density grades. Our RFE-SHAP feature selection methodology for classifying breast tissue density generalized well to validation datasets after accounting for natural class imbalance, and the identified radiomic features properly captured the progression of density grades. 
Our results potentiate future research into correlating selected radiomic features with clinical descriptors of breast tissue density.
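The recursive-feature-elimination half of RFE-SHAP can be sketched in a few lines (the feature names and importances below are invented; the paper's version derives each feature's importance from SHAP values of a fitted model rather than a fixed table):

```python
def rfe(features, importance_fn, n_keep):
    """Toy recursive feature elimination: score the surviving features and
    drop the least important one until n_keep remain. RFE-SHAP additionally
    computes the importance score from SHAP values of a fitted model."""
    surviving = list(features)
    while len(surviving) > n_keep:
        scores = {f: importance_fn(f, surviving) for f in surviving}
        surviving.remove(min(scores, key=scores.get))
    return surviving

# Invented importances for three radiomic features (not from the paper).
toy_importance = {"glcm_contrast": 0.8, "mean_intensity": 0.2, "skewness": 0.5}
print(rfe(toy_importance, lambda f, _: toy_importance[f], 2))
```

Because the model is refit after each elimination, a feature's score can change as its collinear partners are removed, which is what distinguishes RFE from one-shot filtering.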

Comparing percent breast density assessments of an AI-based method with expert reader estimates: inter-observer variability.

Romanov S, Howell S, Harkness E, Gareth Evans D, Astley S, Fergie M

PubMed | Nov 1 2025
Breast density estimation is an important part of breast cancer risk assessment, as mammographic density is associated with risk. However, density assessed by multiple experts can be subject to high inter-observer variability, so automated methods are increasingly used. We investigate the inter-reader variability and risk prediction for expert assessors and a deep learning approach. Screening data from a cohort of 1328 women, case-control matched, was used to compare between two expert readers and between a single reader and a deep learning model, Manchester artificial intelligence - visual analog scale (MAI-VAS). Bland-Altman analysis was used to assess the variability and matched concordance index to assess risk. Although the mean differences for the two experiments were alike, the limits of agreement between MAI-VAS and a single reader are substantially lower at +SD (standard deviation) 21 (95% CI: 19.65, 21.69) -SD 22 (95% CI: <math xmlns="http://www.w3.org/1998/Math/MathML"><mrow><mo>-</mo> <mn>22.71</mn></mrow> </math> , <math xmlns="http://www.w3.org/1998/Math/MathML"><mrow><mo>-</mo> <mn>20.68</mn></mrow> </math> ) than between two expert readers +SD 31 (95% CI: 32.08, 29.23) -SD 29 (95% CI: <math xmlns="http://www.w3.org/1998/Math/MathML"><mrow><mo>-</mo> <mn>29.94</mn></mrow> </math> , <math xmlns="http://www.w3.org/1998/Math/MathML"><mrow><mo>-</mo> <mn>27.09</mn></mrow> </math> ). In addition, breast cancer risk discrimination for the deep learning method and density readings from a single expert was similar, with a matched concordance of 0.628 (95% CI: 0.598, 0.658) and 0.624 (95% CI: 0.595, 0.654), respectively. The automatic method had a similar inter-view agreement to experts and maintained consistency across density quartiles. The artificial intelligence breast density assessment tool MAI-VAS has a better inter-observer agreement with a randomly selected expert reader than that between two expert readers. 
Deep learning-based density methods provide consistent density scores without compromising on breast cancer risk discrimination.
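The Bland-Altman limits of agreement reported above reduce to a short calculation (the percent-density scores below are invented; the study's ±SD values come from 1328 matched readings):

```python
from statistics import mean, stdev

def bland_altman(reader_a, reader_b):
    """Bland-Altman analysis of two paired raters: returns the bias (mean
    difference) and the 95% limits of agreement, bias +/- 1.96 * SD of
    the paired differences."""
    diffs = [a - b for a, b in zip(reader_a, reader_b)]
    bias = mean(diffs)
    spread = 1.96 * stdev(diffs)
    return bias, bias - spread, bias + spread

# Invented percent-density scores from two readers (not study data).
print(bland_altman([40, 35, 60, 22], [38, 37, 58, 24]))
```

Narrower limits of agreement, as seen for MAI-VAS versus a single reader, mean the two raters' density scores rarely diverge by more than the quoted span.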

Sureness of classification of breast cancers as pure ductal carcinoma <i>in situ</i> or with invasive components on dynamic contrast-enhanced magnetic resonance imaging: application of likelihood assurance metrics for computer-aided diagnosis.

Whitney HM, Drukker K, Edwards A, Giger ML

PubMed | Nov 1 2025
Breast cancer may persist within milk ducts (ductal carcinoma <i>in situ</i>, DCIS) or advance into surrounding breast tissue (invasive ductal carcinoma, IDC). Occasionally, invasiveness in cancer may be underestimated during biopsy, leading to adjustments in the treatment plan based on unexpected surgical findings. Artificial intelligence/computer-aided diagnosis (AI/CADx) techniques in medical imaging may have the potential to predict whether a lesion is purely DCIS or exhibits a mixture of IDC and DCIS components, serving as a valuable supplement to biopsy findings. To enhance the evaluation of AI/CADx performance, assessing variability on a lesion-by-lesion basis via likelihood assurance measures could add value. We evaluated the performance in the task of distinguishing between pure DCIS and mixed IDC/DCIS breast cancers using computer-extracted radiomic features from dynamic contrast-enhanced magnetic resonance imaging using 0.632+ bootstrapping methods (2000 folds) on 550 lesions (135 pure DCIS, 415 mixed IDC/DCIS). Lesion-based likelihood assurance was measured using a sureness metric based on the 95% confidence interval of the classifier output for each lesion. The median and 95% CI of the 0.632+-corrected area under the receiver operating characteristic curve for the task of classifying lesions as pure DCIS or mixed IDC/DCIS were 0.81 [0.75, 0.86]. The sureness metric varied across the dataset with a range of 0.0002 (low sureness) to 0.96 (high sureness), with combinations of high and low classifier output and high and low sureness for some lesions. Sureness metrics can provide additional insights into the ability of CADx algorithms to pre-operatively predict whether a lesion is invasive.
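The 0.632+ correction used above has a compact closed form (sketched here for a generic error rate under Efron and Tibshirani's formulation; the study applies the analogous weighting to AUC, with inputs aggregated over its 2000 bootstrap folds):

```python
def err_632plus(err_train, err_boot, gamma):
    """Efron & Tibshirani's 0.632+ estimator: blends the optimistic training
    error with the pessimistic out-of-bootstrap error. gamma is the
    no-information error rate; r measures relative overfitting."""
    err_boot = min(err_boot, gamma)  # cap at the no-information level
    r = (err_boot - err_train) / (gamma - err_train) if gamma > err_train else 0.0
    w = 0.632 / (1.0 - 0.368 * r)
    return (1.0 - w) * err_train + w * err_boot

# With no overfitting (train == bootstrap error), the estimate is unchanged.
print(err_632plus(0.1, 0.1, 0.5))
```

As overfitting grows, the weight w rises from 0.632 toward 1, leaning the estimate further toward the out-of-bootstrap error.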

A deep learning framework for reconstructing Breast Amide Proton Transfer weighted imaging sequences from sparse frequency offsets to dense frequency offsets.

Yang Q, Su S, Zhang T, Wang M, Dou W, Li K, Ren Y, Zheng Y, Wang M, Xu Y, Sun Y, Liu Z, Tan T

PubMed | Jul 1 2025
The Amide Proton Transfer (APT) technique is a novel functional MRI method that enables quantification of protein metabolism, but its wide clinical application is largely limited by its long acquisition time. One way to reduce scanning time is to acquire fewer frequency-offset images. However, sparse frequency-offset images are inadequate to fit the z-spectrum, the curve essential to quantifying the APT effect, which might compromise its quantification. In our study, we develop a deep learning-based model that reconstructs dense frequency offsets from sparse ones, potentially reducing scanning time. We propose to leverage time-series convolution to extract both short- and long-range spatial and frequency features of the APT imaging sequence. Our proposed model outperforms other seq2seq models, achieving superior reconstruction with a peak signal-to-noise ratio of 45.8 (95% confidence interval (CI): [44.9, 46.7]) and a structural similarity index of 0.989 (95% CI: [0.987, 0.993]) for the tumor region. We integrated a weighted layer into our model to evaluate the impact of individual frequency offsets on the reconstruction process. The weights assigned to the frequency offsets at ±6.5 ppm, 0 ppm, and 3.5 ppm demonstrate higher significance as learned by the model. Experimental results demonstrate that our proposed model effectively reconstructs dense frequency offsets (n = 29, from 7 to -7 ppm in 0.5 ppm steps) from data with 21 frequency offsets, reducing scanning time by 25%. This work presents a method for shortening APT imaging acquisition time, offering potential guidance for parameter settings in APT imaging and serving as a valuable reference for clinicians.
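As a naive point of reference for the reconstruction task, the dense grid can be filled from sparse samples by plain linear interpolation (the paper's model is a learned time-series CNN, which this sketch does not reproduce; the offsets and signal values here are illustrative):

```python
def interp_dense(sparse_ppm, sparse_sig, dense_ppm):
    """Naive baseline for the reconstruction task: linearly interpolate a
    sparsely sampled z-spectrum onto the dense frequency-offset grid."""
    pts = sorted(zip(sparse_ppm, sparse_sig))
    out = []
    for x in dense_ppm:
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if x0 <= x <= x1:  # bracketing pair found
                t = 0.0 if x1 == x0 else (x - x0) / (x1 - x0)
                out.append(y0 + t * (y1 - y0))
                break
    return out

# Dense grid: 29 offsets from -7 to +7 ppm in 0.5 ppm steps.
dense_grid = [-7 + 0.5 * i for i in range(29)]
# Illustrative 3-point "z-spectrum" (a real sparse protocol uses 21 offsets).
print(len(interp_dense([-7, 0, 7], [1.0, 0.0, 1.0], dense_grid)))  # → 29
```

Linear interpolation cannot recover the narrow CEST peaks between samples, which is precisely the gap the learned model is meant to close.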

Breast tumour classification in DCE-MRI via cross-attention and discriminant correlation analysis enhanced feature fusion.

Pan F, Wu B, Jian X, Li C, Liu D, Zhang N

PubMed | Jul 1 2025
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has proven to be highly sensitive in diagnosing breast tumours, owing to the kinetic and volumetric features inherent in it. To utilise this kinetics-related and volume-related information, this paper aims to develop and validate a classification method for differentiating benign and malignant breast tumours on DCE-MRI through fusing deep features and cross-attention-encoded radiomics features using discriminant correlation analysis (DCA). Classification experiments were conducted on a dataset comprising 261 individuals who underwent DCE-MRI, including those with multiple tumours, yielding 137 benign and 163 malignant tumours. To strengthen the correlation between features and reduce feature redundancy, a novel fusion method that fuses deep features and encoded radiomics features based on DCA (eFF-DCA) is proposed. The eFF-DCA comprises three components: (1) a feature extraction module to capture kinetic information across phases, (2) a radiomics feature encoding module employing a cross-attention mechanism to enhance inter-phase feature correlation, and (3) a DCA-based fusion module that transforms features to maximise intra-class correlation while minimising inter-class redundancy, facilitating effective classification. The proposed eFF-DCA method achieved an accuracy of 90.9% and an area under the receiver operating characteristic curve of 0.942, outperforming methods using single-modal features. The proposed eFF-DCA exploits DCE-MRI kinetic-related and volume-related features to improve breast tumour diagnostic accuracy, but its non-end-to-end design limits multimodal fusion. Future research should explore unified end-to-end deep learning architectures that enable seamless multimodal feature fusion and joint optimisation of feature extraction and classification.

Development and validation of an interpretable machine learning model for diagnosing pathologic complete response in breast cancer.

Zhou Q, Peng F, Pang Z, He R, Zhang H, Jiang X, Song J, Li J

PubMed | Jul 1 2025
Pathologic complete response (pCR) following neoadjuvant chemotherapy (NACT) is a critical prognostic marker for patients with breast cancer, potentially allowing surgery omission. However, noninvasive and accurate pCR diagnosis remains a significant challenge due to the limitations of current imaging techniques, particularly in cases where tumors completely disappear post-NACT. We developed a novel framework incorporating Dimensional Accumulation for Layered Images (DALI) and an Attention-Box annotation tool to address the unique challenge of analyzing imaging data where target lesions are absent. These methods transform three-dimensional magnetic resonance imaging into two-dimensional representations and ensure consistent target tracking across time-points. Preprocessing techniques, including tissue-region normalization and subtraction imaging, were used to enhance model performance. Imaging features were extracted using radiomics and pretrained deep-learning models, and machine-learning algorithms were integrated into a stacked ensemble model. The approach was developed using the I-SPY 2 dataset and validated with an independent Tangshan People's Hospital cohort. The stacked ensemble model achieved superior diagnostic performance, with an area under the receiver operating characteristic curve of 0.831 (95 % confidence interval, 0.769-0.887) on the test set, outperforming individual models. Tissue-region normalization and subtraction imaging significantly enhanced diagnostic accuracy. SHAP analysis identified variables that contributed to the model predictions, ensuring model interpretability. This innovative framework addresses challenges of noninvasive pCR diagnosis. Integrating advanced preprocessing techniques improves feature quality and model performance, supporting clinicians in identifying patients who can safely omit surgery. This innovation reduces unnecessary treatments and improves quality of life for patients with breast cancer.
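The stacked-ensemble idea above reduces to feeding base-model scores into a meta-model. A hypothetical sketch (the base scores and meta-weights are invented; in practice the meta-model is fit on out-of-fold predictions to avoid leakage):

```python
def stack_predict(base_models, meta_model, case):
    """Stacked ensemble: each base model scores the case and the meta-model
    combines those scores into the final pCR probability."""
    return meta_model([m(case) for m in base_models])

# Hypothetical base learners (e.g. a radiomics model and deep-feature
# models) returning fixed scores, plus invented meta-weights.
base = [lambda case: 0.8, lambda case: 0.6, lambda case: 0.7]
weights = [0.5, 0.2, 0.3]
meta = lambda scores: sum(w * s for w, s in zip(weights, scores))
print(stack_predict(base, meta, case=None))
```

The meta-model sees only the base scores, so it can learn which learners to trust without re-examining the imaging features.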

EfficientNet-Based Attention Residual U-Net With Guided Loss for Breast Tumor Segmentation in Ultrasound Images.

Jasrotia H, Singh C, Kaur S

PubMed | Jul 1 2025
Breast cancer poses a major health concern for women globally. Effective segmentation of breast tumors in ultrasound images is crucial for early diagnosis and treatment. Conventional convolutional neural networks have shown promising results in this domain but face challenges due to image complexities and are computationally expensive, limiting their practical application in real-time clinical settings. We propose Eff-AResUNet-GL, a segmentation model using EfficientNet-B3 as the encoder. This design integrates attention gates in skip connections to focus on significant features and residual blocks in the decoder to retain details and reduce gradient loss. Additionally, guided loss functions are applied at each decoder layer to generate better features, subsequently improving segmentation accuracy. Experimental results on BUSIS and Dataset B demonstrate that Eff-AResUNet-GL achieves superior performance and computational efficiency compared to state-of-the-art models, showing robustness in handling complex breast ultrasound images. Eff-AResUNet-GL offers a practical, high-performing solution for breast tumor segmentation, demonstrating clinical potential through improved segmentation accuracy and efficiency.
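The guided-loss idea (a segmentation loss attached to every decoder layer, in the spirit of deep supervision) can be sketched with a soft Dice loss; the per-layer weights and toy masks below are illustrative, not the paper's values:

```python
def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a binary mask."""
    inter = sum(p * t for p, t in zip(pred, target))
    return 1.0 - (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

def guided_loss(per_layer_preds, target, layer_weights):
    """Guided (deeply supervised) loss: a Dice loss is attached to each
    decoder layer's upsampled output and the weighted sum is optimised."""
    return sum(w * dice_loss(p, target)
               for w, p in zip(layer_weights, per_layer_preds))

# Two toy decoder outputs against a 3-pixel mask, equally weighted.
print(guided_loss([[1.0, 0.0, 1.0], [0.9, 0.1, 0.8]], [1, 0, 1], [0.5, 0.5]))
```

Supervising intermediate layers gives shallower decoder stages a direct gradient signal instead of relying solely on the final output's loss.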

Habitat-Based Radiomics for Revealing Tumor Heterogeneity and Predicting Residual Cancer Burden Classification in Breast Cancer.

Li ZY, Wu SN, Lin P, Jiang MC, Chen C, Lin WJ, Xue ES, Liang RX, Lin ZH

PubMed | Jul 1 2025
To investigate the feasibility of characterizing tumor heterogeneity in breast cancer ultrasound images using habitat analysis technology and establish a radiomics machine learning model for predicting response to neoadjuvant chemotherapy (NAC). Ultrasound images from patients with pathologically confirmed breast cancer who underwent neoadjuvant therapy at our institution between July 2021 and December 2023 were retrospectively reviewed. Initially, the region of interest was delineated and segmented into multiple habitat areas using local feature delineation and cluster analysis techniques. Subsequently, radiomics features were extracted from each habitat area to construct 3 machine learning models. Finally, the models' efficacy was assessed through receiver operating characteristic (ROC) curve analysis, decision curve analysis (DCA), and calibration curve evaluation. A total of 945 patients were enrolled, with 333 demonstrating a favorable response to NAC and 612 exhibiting an unfavorable response to NAC. Through the application of habitat analysis techniques, 3 distinct habitat regions within the tumor were identified. Subsequently, a predictive model was developed by incorporating 19 radiomics features, and all 3 machine learning models demonstrated excellent performance in predicting treatment outcomes. Notably, extreme gradient boosting (XGBoost) exhibited superior performance with an area under the curve (AUC) of 0.872 in the training cohort and 0.740 in the testing cohort. Additionally, DCA and calibration curves were employed for further evaluation. The habitat analysis technique effectively distinguishes distinct biological subregions of breast cancer, while the established radiomics machine learning model predicts NAC response by forecasting residual cancer burden (RCB) classification.
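The habitat step boils down to clustering per-pixel features within the ROI. A minimal 1-D Lloyd's k-means over intensity values (toy data, and a stand-in for the paper's local-feature plus cluster-analysis pipeline):

```python
def kmeans_1d(values, centers, iters=20):
    """Minimal Lloyd's k-means on scalar per-pixel values; the labels
    partition the tumour ROI into 'habitat' subregions."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            groups[min(range(len(centers)), key=lambda i: abs(v - centers[i]))].append(v)
        centers = [sum(g) / len(g) if g else centers[i] for i, g in enumerate(groups)]
    labels = [min(range(len(centers)), key=lambda i: abs(v - centers[i])) for v in values]
    return centers, labels

# Invented pixel intensities; 3 clusters mirrors the paper's 3 habitats.
centers, labels = kmeans_1d([0.0, 0.1, 0.9, 1.0, 0.5, 0.45], [0.0, 0.5, 1.0])
print(labels)  # → [0, 0, 2, 2, 1, 1]
```

Each label then defines a habitat mask from which radiomics features are extracted separately, which is what lets the model see intratumoral heterogeneity.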

The impact of updated imaging software on the performance of machine learning models for breast cancer diagnosis: a multi-center, retrospective study.

Cai L, Golatta M, Sidey-Gibbons C, Barr RG, Pfob A

PubMed | Jul 1 2025
Artificial intelligence models based on medical (imaging) data are increasingly developed. However, the imaging software with which the original data are generated is frequently updated, and the impact of updated imaging software on the performance of AI models is unclear. We aimed to develop machine learning models using shear wave elastography (SWE) data to identify malignant breast lesions and to test the models' generalizability by validating them on external data generated by both the original and updated software versions. We developed and validated different machine learning models (GLM, MARS, XGBoost, SVM) using multicenter, international SWE data (NCT02638935) using tenfold cross-validation. Findings were compared to the histopathologic evaluation of the biopsy specimen or 2-year follow-up. The outcome measure was the area under the curve (AUROC). We included 1288 cases in the development set using the original imaging software and 385 cases in the validation set using both the original and updated software. In the external validation set, the GLM and XGBoost models showed better performance with the updated software data compared to the original software data (AUROC 0.941 vs. 0.902, p < 0.001 and 0.934 vs. 0.872, p < 0.001). The MARS model showed worse performance with the updated software data (0.847 vs. 0.894, p = 0.045). SVM was not calibrated. In this multicenter study using SWE data, some machine learning models demonstrated great potential to bridge the gap between original and updated software, whereas others exhibited weak generalizability.