Page 12 of 41404 results

Integrative multimodal ultrasound and radiomics for early prediction of neoadjuvant therapy response in breast cancer: a clinical study.

Wang S, Liu J, Song L, Zhao H, Wan X, Peng Y

PubMed paper · Jul 9 2025
This study aimed to develop an early predictive model for neoadjuvant therapy (NAT) response in breast cancer by integrating multimodal ultrasound (conventional B-mode, shear-wave elastography, and contrast-enhanced ultrasound) and radiomics with clinical-pathological data, and to evaluate its predictive accuracy after two cycles of NAT. This retrospective study included 239 breast cancer patients receiving neoadjuvant therapy, divided into training (n = 167) and validation (n = 72) cohorts. Multimodal ultrasound (B-mode, shear-wave elastography (SWE), and contrast-enhanced ultrasound (CEUS)) was performed at baseline and after two cycles. Tumors were segmented using a U-Net-based deep learning model with radiologist adjustment, and radiomic features were extracted via PyRadiomics. Candidate variables were screened using univariate analysis and multicollinearity checks, followed by LASSO and stepwise logistic regression to build three models: a clinical-ultrasound model, a radiomics-only model, and a combined model. Model performance for early response prediction was assessed using ROC analysis. In the training cohort (n = 167), Model_Clinic achieved an AUC of 0.85, with HER2 positivity, maximum tumor stiffness (Emax), stiffness heterogeneity (Estd), and the CEUS "radiation sign" emerging as independent predictors (all P < 0.05). The radiomics model showed moderate performance at baseline (AUC 0.69) but improved after two cycles (AUC 0.83), and a model using changes in radiomic features achieved an AUC of 0.79. Model_Combined demonstrated the best performance, with a training AUC of 0.91 (sensitivity 89.4%, specificity 82.9%). In the validation cohort (n = 72), all models showed comparable AUCs (Model_Combined ≈ 0.90) without significant degradation, and Model_Combined significantly outperformed Model_Clinic and Model_RSA (DeLong P = 0.006 and 0.042, respectively). 
In our study, integrating multimodal ultrasound and radiomic features improved the early prediction of NAT response in breast cancer, and could provide valuable information to enable timely treatment adjustments and more personalized management strategies.
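All three models above are compared by ROC AUC. As a reminder of what those numbers mean, the AUC equals the probability that a randomly chosen responder is scored above a randomly chosen non-responder (the Mann-Whitney statistic). A minimal pure-Python sketch of that computation (illustrative only, not the authors' code):

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney statistic: the probability that a
    randomly chosen positive case scores higher than a randomly
    chosen negative case (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# a toy example: three of four positive/negative orderings are correct
auc = roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # 0.75
```

In practice libraries such as scikit-learn compute this, with tie handling and vectorization, via `roc_auc_score`.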

Machine learning models using non-invasive tests & B-mode ultrasound to predict liver-related outcomes in metabolic dysfunction-associated steatotic liver disease.

Kosick HM, McIntosh C, Bera C, Fakhriyehasl M, Shengir M, Adeyi O, Amiri L, Sebastiani G, Jhaveri K, Patel K

PubMed paper · Jul 8 2025
Advanced metabolic dysfunction-associated steatotic liver disease (MASLD) fibrosis (F3-4) predicts liver-related outcomes. Serum- and elastography-based non-invasive tests (NITs) cannot yet reliably predict MASLD outcomes, and the value of B-mode ultrasound (US) for outcome prediction is not yet known. We aimed to evaluate machine learning (ML) algorithms based on simple NITs and US for the prediction of adverse liver-related outcomes in MASLD. This was a retrospective cohort study of adult MASLD patients biopsied between 2010 and 2021 at one of two Canadian tertiary care centers. Random forest was used to create predictive models for the following outcomes: hepatic decompensation; liver-related outcomes (decompensation, hepatocellular carcinoma (HCC), liver transplant, and liver-related mortality); HCC; liver-related mortality; F3-4; and fibrotic metabolic dysfunction-associated steatohepatitis (MASH). Diagnostic performance was assessed using area under the curve (AUC). 457 MASLD patients were included: 44.9% had F3-4, 31.6% had diabetes, 53.8% were male, mean age was 49.2 years, and mean BMI was 32.8 kg/m². 6.3% had an adverse liver-related outcome over a mean 43 months of follow-up. AUCs for the ML predictive models were: hepatic decompensation 0.90 (0.79-0.98), liver-related outcomes 0.87 (0.76-0.96), HCC 0.72 (0.29-0.96), liver-related mortality 0.79 (0.31-0.98), F3-4 0.83 (0.76-0.87), and fibrotic MASH 0.74 (0.65-0.85). Biochemical and clinical variables had the greatest feature importance overall, compared to US parameters. FIB-4 and the AST:ALT ratio were the highest-ranked biochemical variables, while age was the highest-ranked clinical variable. ML models based on clinical, biochemical, and US-based variables accurately predicted adverse MASLD outcomes in this multi-centre cohort. Overall, biochemical variables had the greatest feature importance; US-based features were not substantial predictors of outcomes in this study.
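The feature-importance ranking reported above is a standard by-product of random forests: each variable is scored by how much its splits reduce impurity across the trees. A minimal sketch of that idea for a single decision stump (one feature, one split; illustrative only, the study used full random forests, and the feature values below are hypothetical):

```python
def gini(labels):
    """Gini impurity of a binary label list."""
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def stump_gain(xs, ys):
    """Best impurity reduction achievable by a single threshold split
    on one feature -- the quantity importance rankings accumulate."""
    base, n, best = gini(ys), len(ys), 0.0
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        g = base - (len(left) / n) * gini(left) - (len(right) / n) * gini(right)
        best = max(best, g)
    return best

# a feature that separates outcomes outranks a constant "noise" feature
fib4_like = [0.8, 1.1, 2.9, 3.4]
outcome = [0, 0, 1, 1]
ranking = sorted(
    [("fib4_like", stump_gain(fib4_like, outcome)),
     ("noise", stump_gain([1.0] * 4, outcome))],
    key=lambda kv: -kv[1],
)
```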

Adaptive batch-fusion self-supervised learning for ultrasound image pretraining.

Zhang J, Wu X, Liu S, Fan Y, Chen Y, Lyu G, Liu P, Liu Z, He S

PubMed paper · Jul 8 2025
Medical self-supervised learning eliminates the reliance on labels, making feature extraction simple and efficient. However, the intricate design of pretext tasks in single-modal self-supervised analysis, compounded by an excessive dependence on data augmentation, has become a bottleneck in medical self-supervised learning research. This paper therefore reanalyzes the feature learnability introduced by data augmentation strategies in medical image self-supervised learning. We introduce an adaptive self-supervised data augmentation method based on batch fusion, and we propose a convolutional embedding block for learning the incremental representation between these batches. On five fused-data tasks proposed by previous researchers, our method achieved a linear classification protocol accuracy of 94.25% with only 150 rounds of self-supervised feature training in a Vision Transformer (ViT), the best result among comparable methods. A detailed ablation study of previous augmentation strategies indicates that the proposed data augmentation strategy effectively represents ultrasound data features during self-supervised learning. The code and weights can be found here.
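The linear classification protocol used for evaluation above freezes the self-supervised features and trains only a linear classifier on top of them; the probe's accuracy then measures feature quality. A minimal numpy sketch with synthetic stand-in features (the data, dimensions, and hyperparameters here are hypothetical, not the paper's setup):

```python
import numpy as np

def linear_probe(feats, labels, lr=0.1, steps=500):
    """Train a logistic-regression probe on frozen features by
    plain gradient descent; only w and b are learned."""
    w, b = np.zeros(feats.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
        g = p - labels                      # gradient of log-loss
        w -= lr * feats.T @ g / len(labels)
        b -= lr * g.mean()
    return w, b

# synthetic "frozen features": two well-separated clusters
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.3, (50, 8)), rng.normal(1, 0.3, (50, 8))])
y = np.array([0] * 50 + [1] * 50)
w, b = linear_probe(X, y)
acc = ((1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5) == y).mean()
```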

Development of a deep learning model for predicting skeletal muscle density from ultrasound data: a proof-of-concept study.

Pistoia F, Macciò M, Picasso R, Zaottini F, Marcenaro G, Rinaldi S, Bianco D, Rossi G, Tovt L, Pansecchi M, Sanguinetti S, Hamedani M, Schenone A, Martinoli C

PubMed paper · Jul 8 2025
Reduced muscle mass and function are associated with increased morbidity and mortality. Ultrasound, despite being cost-effective and portable, remains underutilized in muscle trophism assessment because of its reliance on operator expertise and its measurement variability. This proof-of-concept study aimed to overcome these limitations by developing a deep learning model that predicts muscle density, as assessed by CT, from Ultrasound data, exploring the feasibility of a novel Ultrasound-based parameter for muscle trophism. A sample of adult participants undergoing CT examination in our institution's emergency department between May 2022 and March 2023 was enrolled in this single-center study. Ultrasound examinations were performed with an L11-3 MHz probe. The rectus abdominis muscles, selected as target muscles, were scanned in the transverse plane, recording one Ultrasound image per side. For each participant, the same operator calculated the average target-muscle density in Hounsfield Units from an axial CT slice closely matching the Ultrasound scanning plane. The final dataset included 1090 Ultrasound images from 551 participants (mean age 67 ± 17 years; 323 male). A deep learning model was developed to classify Ultrasound images into three muscle-density classes based on CT values. The model achieved promising performance, with a categorical accuracy of 70% and AUC values of 0.89, 0.79, and 0.90 across the three classes. This observational study introduces an innovative approach to automated muscle trophism assessment using Ultrasound imaging. Future efforts should focus on external validation in diverse populations and clinical settings, as well as on expanding the application to other muscles.
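The labeling step, mapping a mean CT attenuation value to one of three density classes, can be sketched in a few lines. The Hounsfield-unit cut-offs below are hypothetical, since the abstract does not report the class boundaries:

```python
def density_class(mean_hu, cuts=(20.0, 40.0)):
    """Bin a mean muscle attenuation (Hounsfield Units) into one of
    three density classes. The cut-offs are illustrative placeholders,
    not the study's actual boundaries."""
    lo, hi = cuts
    if mean_hu < lo:
        return "low"
    if mean_hu < hi:
        return "intermediate"
    return "high"
```

A three-class model would then be trained on Ultrasound images against these CT-derived labels.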

Efficient Ultrasound Breast Cancer Detection with DMFormer: A Dynamic Multiscale Fusion Transformer.

Guo L, Zhang H, Ma C

PubMed paper · Jul 7 2025
To develop an advanced deep learning model for accurate differentiation between benign and malignant masses in ultrasound breast cancer screening, addressing the challenges of noise, blur, and complex tissue structures in ultrasound imaging. We propose Dynamic Multiscale Fusion Transformer (DMFormer), a novel Transformer-based architecture featuring a dynamic multiscale feature fusion mechanism. The model integrates window attention for local feature interaction with grid attention for global context mixing, enabling comprehensive capture of both fine-grained tissue details and broader anatomical contexts. DMFormer was evaluated on two independent datasets and compared against state-of-the-art approaches, including convolutional neural networks, Transformer-based architectures, and hybrid models. The model achieved areas under the curve of 90.48% and 86.57% on the respective datasets, consistently outperforming all comparison models. DMFormer demonstrates superior performance in ultrasound breast cancer detection through its innovative dual-attention approach. The model's ability to effectively balance local and global feature processing while maintaining computational efficiency represents a significant advancement in medical image analysis. These results validate DMFormer's potential for enhancing the accuracy and reliability of breast cancer screening in clinical settings.
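The window/grid duality described above can be made concrete through the tensor partitions each attention type operates on: window attention groups adjacent positions, grid attention groups strided positions spanning the whole map. A minimal numpy sketch (shapes are hypothetical, and this follows the common MaxViT-style formulation rather than the authors' exact code):

```python
import numpy as np

def window_partition(x, w):
    """Split an (H, W, C) feature map into non-overlapping w-by-w
    windows: each group holds adjacent tokens for local attention."""
    H, W, C = x.shape
    return (x.reshape(H // w, w, W // w, w, C)
             .transpose(0, 2, 1, 3, 4)
             .reshape(-1, w * w, C))

def grid_partition(x, g):
    """Split into a g-by-g grid of strided tokens: each group samples
    uniformly across the whole map for global context mixing."""
    H, W, C = x.shape
    return (x.reshape(g, H // g, g, W // g, C)
             .transpose(1, 3, 0, 2, 4)
             .reshape(-1, g * g, C))

x = np.arange(8 * 8 * 3, dtype=float).reshape(8, 8, 3)
windows = window_partition(x, 4)  # 4 groups of 16 adjacent tokens
grids = grid_partition(x, 4)      # 4 groups of 16 strided tokens
```

Attention is then applied within each group; fusing both branches captures fine-grained tissue detail and broader anatomical context.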

Self-supervised Deep Learning for Denoising in Ultrasound Microvascular Imaging

Lijie Huang, Jingyi Yin, Jingke Zhang, U-Wai Lok, Ryan M. DeRuiter, Jieyang Jin, Kate M. Knoll, Kendra E. Petersen, James D. Krier, Xiang-yang Zhu, Gina K. Hesley, Kathryn A. Robinson, Andrew J. Bentall, Thomas D. Atwell, Andrew D. Rule, Lilach O. Lerman, Shigao Chen, Chengwu Huang

arXiv preprint · Jul 7 2025
Ultrasound microvascular imaging (UMI) is often hindered by low signal-to-noise ratio (SNR), especially in contrast-free or deep tissue scenarios, which impairs subsequent vascular quantification and reliable disease diagnosis. To address this challenge, we propose Half-Angle-to-Half-Angle (HA2HA), a self-supervised denoising framework specifically designed for UMI. HA2HA constructs training pairs from complementary angular subsets of beamformed radio-frequency (RF) blood flow data, across which vascular signals remain consistent while noise varies. HA2HA was trained using in-vivo contrast-free pig kidney data and validated across diverse datasets, including contrast-free and contrast-enhanced data from pig kidneys, as well as human liver and kidney. An improvement exceeding 15 dB in both contrast-to-noise ratio (CNR) and SNR was observed, indicating a substantial enhancement in image quality. In addition to power Doppler imaging, denoising directly in the RF domain is also beneficial for other downstream processing such as color Doppler imaging (CDI). CDI results of human liver derived from the HA2HA-denoised signals exhibited improved microvascular flow visualization, with a suppressed noisy background. HA2HA offers a label-free, generalizable, and clinically applicable solution for robust vascular imaging in both contrast-free and contrast-enhanced UMI.
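The pairing idea behind HA2HA can be illustrated in a few lines: compounding two complementary halves of the transmit angles yields two views that share the vascular signal but carry independent noise, so one view can serve as the training target for the other (the Noise2Noise principle). A synthetic numpy sketch, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 4 * np.pi, 256))     # stand-in vascular signal
# per-angle acquisitions: the same signal plus independent noise
frames = signal + rng.normal(0.0, 1.0, (10, 256))

# complementary angular subsets -> two noisy views of the same signal
half_a = frames[0::2].mean(axis=0)
half_b = frames[1::2].mean(axis=0)
# a network trained to map half_a -> half_b can only fit the component
# the two views share, i.e. the vascular signal, never the noise
```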

Artificial Intelligence-Enabled Point-of-Care Echocardiography: Bringing Precision Imaging to the Bedside.

East SA, Wang Y, Yanamala N, Maganti K, Sengupta PP

PubMed paper · Jul 7 2025
The integration of artificial intelligence (AI) with point-of-care ultrasound (POCUS) is transforming cardiovascular diagnostics by enhancing image acquisition, interpretation, and workflow efficiency. These advancements hold promise in expanding access to cardiovascular imaging in resource-limited settings and enabling early disease detection through screening applications. This review explores the opportunities and challenges of AI-enabled POCUS as it reshapes the landscape of cardiovascular imaging. AI-enabled systems can reduce operator dependency, improve image quality, and support clinicians-both novice and experienced-in capturing diagnostically valuable images, ultimately promoting consistency across diverse clinical environments. However, widespread adoption faces significant challenges, including concerns around algorithm generalizability, bias, explainability, clinician trust, and data privacy. Addressing these issues through standardized development, ethical oversight, and clinician-AI collaboration will be critical to safe and effective implementation. Looking ahead, emerging innovations-such as autonomous scanning, real-time predictive analytics, tele-ultrasound, and patient-performed imaging-underscore the transformative potential of AI-enabled POCUS in reshaping cardiovascular care and advancing equitable healthcare delivery worldwide.

ViTaL: A Multimodality Dataset and Benchmark for Multi-pathological Ovarian Tumor Recognition

You Zhou, Lijiang Chen, Guangxia Cui, Wenpei Bai, Yu Guo, Shuchang Lyu, Guangliang Cheng, Qi Zhao

arXiv preprint · Jul 6 2025
Ovarian tumors, a common gynecological disease, can rapidly progress to serious health crises when not detected early, posing a significant threat to women's health. Deep neural networks have the potential to identify ovarian tumors and thereby reduce mortality, but limited public datasets hinder progress. To address this gap, we introduce ViTaL, an ovarian tumor pathological recognition dataset containing Visual, Tabular and Linguistic modality data for 496 patients across six pathological categories. The ViTaL dataset comprises three subsets corresponding to the different patient data modalities: visual data from 2216 two-dimensional ultrasound images, tabular data from the medical examinations of the 496 patients, and linguistic data from their ultrasound reports. Merely distinguishing between benign and malignant ovarian tumors is insufficient in clinical practice. To enable multi-pathology classification of ovarian tumors, we propose ViTaL-Net, based on a Triplet Hierarchical Offset Attention Mechanism (THOAM) that minimizes the loss incurred during feature fusion of multi-modal data. This mechanism effectively enhances the relevance and complementarity between information from different modalities. ViTaL-Net serves as a benchmark for the task of multi-pathology, multi-modality classification of ovarian tumors. In our comprehensive experiments, the proposed method exhibited satisfactory performance, achieving accuracies exceeding 90% on the two most common pathological types of ovarian tumor and an overall accuracy of 85%. Our dataset and code are available at https://github.com/GGbond-study/vitalnet.
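A generic late-fusion baseline for combining three such modalities is easy to sketch: encode each modality separately, normalize, and concatenate before classification. This is not THOAM, just the simplest fusion scheme it improves upon, and the embedding sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical per-modality embeddings for one patient
img_emb = rng.normal(size=64)   # ultrasound-image encoder output
tab_emb = rng.normal(size=16)   # tabular exam features
txt_emb = rng.normal(size=32)   # report-text encoder output

def late_fusion(*embs):
    """L2-normalize each modality embedding, then concatenate, so no
    single modality dominates the fused vector by scale alone."""
    return np.concatenate([e / np.linalg.norm(e) for e in embs])

fused = late_fusion(img_emb, tab_emb, txt_emb)  # fed to a classifier head
```

Attention-based fusion such as THOAM replaces the plain concatenation with learned cross-modal weighting.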

Artificial Intelligence-Assisted Standard Plane Detection in Hip Ultrasound for Developmental Dysplasia of the Hip: A Novel Real-Time Deep Learning Approach.

Darilmaz MF, Demirel M, Altun HO, Adiyaman MC, Bilgili F, Durmaz H, Sağlam Y

PubMed paper · Jul 6 2025
Developmental dysplasia of the hip (DDH) encompasses a range of conditions caused by inadequate hip joint development. Early diagnosis is essential to prevent long-term complications. Ultrasound, particularly the Graf method, is commonly used for DDH screening, but its interpretation is highly operator-dependent and lacks standardization, especially in identifying the correct standard plane. This variability often leads to misdiagnosis, particularly among less experienced users. This study presents AI-SPS, AI-based software for real-time standard plane detection in hip ultrasound. Using 2,737 annotated frames (1,737 standard and 1,000 non-standard) extracted from 45 clinical ultrasound videos, we trained and evaluated two object detection models: SSD-MobileNet V2 and YOLOv11n. The software was further validated on an independent set of 934 additional frames (347 standard and 587 non-standard) from the same video sources. YOLOv11n achieved an accuracy of 86.3%, precision of 0.78, recall of 0.88, and F1-score of 0.83, outperforming SSD-MobileNet V2, which reached an accuracy of 75.2%. These results indicate that AI-SPS can detect the standard plane with expert-level performance and improve consistency in DDH screening. By reducing operator variability, the software supports more reliable ultrasound assessments. Integration with live systems and Graf typing may enable a fully automated DDH diagnostic workflow. Level of Evidence: Level III, diagnostic study.
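The detection metrics reported above are linked by standard formulas; in particular F1 = 2PR/(P+R), and the quoted precision of 0.78 and recall of 0.88 are indeed consistent with the reported F1 of 0.83. A small sketch with illustrative (not the study's) confusion counts:

```python
def detection_metrics(tp, fp, fn, tn):
    """Standard binary-detection metrics from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# sanity-check the relationship between the paper's quoted numbers
f1_from_paper = 2 * 0.78 * 0.88 / (0.78 + 0.88)  # ~0.83
```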

Predicting Cardiopulmonary Exercise Testing Performance in Patients Undergoing Transthoracic Echocardiography - An AI Based, Multimodal Model

Alishetti, S., Pan, W., Beecy, A. N., Liu, Z., Gong, A., Huang, Z., Clerkin, K. J., Goldsmith, R. L., Majure, D. T., Kelsey, C., vanMaanan, D., Ruhl, J., Tesfuzigta, N., Lancet, E., Kumaraiah, D., Sayer, G., Estrin, D., Weinberger, K., Kuleshov, V., Wang, F., Uriel, N.

medRxiv preprint · Jul 6 2025
Background and Aims: Transthoracic echocardiography (TTE) is a widely available tool for diagnosing and managing heart failure but has limited predictive value for survival. Cardiopulmonary exercise test (CPET) performance strongly correlates with survival in heart failure patients but is less accessible. We sought to develop an artificial intelligence (AI) algorithm using TTE and electronic medical records to predict CPET peak oxygen consumption (peak VO2) ≤ 14 mL/kg/min. Methods: An AI model was trained to predict peak VO2 ≤ 14 mL/kg/min from TTE images, structured TTE reports, demographics, medications, labs, and vitals. The training set included patients with a TTE within 6 months of a CPET. Performance was retrospectively tested in a held-out group from the development cohort and in an external validation cohort. Results: 1,127 CPET studies paired with concomitant TTE were identified. The best performance was achieved by using all components (TTE images plus all structured clinical data). The model performed well at predicting a peak VO2 ≤ 14 mL/kg/min, with an AUROC of 0.84 (development cohort) and 0.80 (external validation cohort). It performed consistently well using higher (≤ 18 mL/kg/min) and lower (≤ 12 mL/kg/min) cut-offs. Conclusions: This multimodal AI model effectively categorized patients into low- and high-risk groups by predicted peak VO2, demonstrating the potential to identify previously unrecognized patients in need of advanced heart failure therapies where CPET is not available.
