
Intelligent quality assessment of ultrasound images for fetal nuchal translucency measurement during the first trimester of pregnancy based on deep learning models.

Liu L, Wang T, Zhu W, Zhang H, Tian H, Li Y, Cai W, Yang P

PubMed | Jul 10, 2025
As increased nuchal translucency (NT) thickness is notably associated with fetal chromosomal abnormalities, structural defects, and genetic syndromes, accurate measurement of NT thickness is crucial for screening fetal abnormalities during the first trimester. We aimed to develop a model for quality assessment of ultrasound images to support precise measurement of fetal NT thickness. We collected 2140 ultrasound images of midsagittal sections of the fetal face between 11 and 14 weeks of gestation. Several image segmentation models were trained, and the one exhibiting the best DSC and HD95 was chosen to automatically segment the ROI. Radiomics features and deep transfer learning (DTL) features were extracted and selected to construct radiomics and DTL models. Feature screening was conducted using the t-test, Mann-Whitney U-test, Spearman's rank correlation analysis, and LASSO. We also developed early fusion and late fusion models to integrate the advantages of the radiomics and DTL models. The optimal model was compared with junior radiologists. We used SHapley Additive exPlanations (SHAP) to investigate the model's interpretability. The DeepLabV3 ResNet achieved the best segmentation performance (DSC: 98.07 ± 0.02%, HD95: 0.75 ± 0.15 mm). The feature fusion model demonstrated the optimal performance (AUC: 0.978, 95% CI: 0.965–0.990, accuracy: 93.2%, sensitivity: 93.1%, specificity: 93.4%, PPV: 93.5%, NPV: 93.0%, precision: 93.5%). This model exhibited more reliable performance than junior radiologists and significantly improved their capabilities. The SHAP summary plot showed that DTL features were the most important features for the feature fusion model. The proposed models bridge gaps in previous studies, achieving intelligent quality assessment of ultrasound images for NT measurement and highly accurate automatic segmentation of ROIs. These models are potential tools to enhance quality control for fetal ultrasound examinations, streamline clinical workflows, and improve the professional skills of less-experienced radiologists. The online version contains supplementary material available at 10.1186/s12884-025-07863-y.
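For context on the two segmentation metrics reported above, the sketch below shows how DSC and HD95 are conventionally computed for binary masks with numpy/scipy. It is illustrative only, not the authors' implementation; the `spacing` argument (pixel size in mm) is an assumption about the input format.

```python
import numpy as np
from scipy import ndimage
from scipy.spatial import cKDTree

def dice_coefficient(pred, gt):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def _surface(mask, spacing):
    """Boundary pixels of a binary mask, scaled to physical units (e.g. mm)."""
    mask = mask.astype(bool)
    border = mask & ~ndimage.binary_erosion(mask)
    return np.argwhere(border) * np.asarray(spacing)

def hd95(pred, gt, spacing=(1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance between mask boundaries."""
    p, g = _surface(pred, spacing), _surface(gt, spacing)
    d_pg, _ = cKDTree(g).query(p)  # each pred boundary point -> nearest gt point
    d_gp, _ = cKDTree(p).query(g)  # each gt boundary point -> nearest pred point
    return float(np.percentile(np.concatenate([d_pg, d_gp]), 95))
```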

Integrative multimodal ultrasound and radiomics for early prediction of neoadjuvant therapy response in breast cancer: a clinical study.

Wang S, Liu J, Song L, Zhao H, Wan X, Peng Y

PubMed | Jul 9, 2025
This study aimed to develop an early predictive model for neoadjuvant therapy (NAT) response in breast cancer by integrating multimodal ultrasound (conventional B-mode, shear-wave elastography [SWE], and contrast-enhanced ultrasound [CEUS]) and radiomics with clinical-pathological data, and to evaluate its predictive accuracy after two cycles of NAT. This retrospective study included 239 breast cancer patients receiving NAT, divided into training (n = 167) and validation (n = 72) cohorts. Multimodal ultrasound (B-mode, SWE, and CEUS) was performed at baseline and after two cycles. Tumors were segmented using a U-Net-based deep learning model with radiologist adjustment, and radiomic features were extracted via PyRadiomics. Candidate variables were screened using univariate analysis and multicollinearity checks, followed by LASSO and stepwise logistic regression, to build three models: a clinical-ultrasound model, a radiomics-only model, and a combined model. Model performance for early response prediction was assessed using ROC analysis. In the training cohort (n = 167), Model_Clinic achieved an AUC of 0.85, with HER2 positivity, maximum tumor stiffness (Emax), stiffness heterogeneity (Estd), and the CEUS "radiation sign" emerging as independent predictors (all P < 0.05). The radiomics model showed moderate performance at baseline (AUC 0.69) but improved after two cycles (AUC 0.83), and a model using changes in radiomic features achieved an AUC of 0.79. Model_Combined demonstrated the best performance, with a training AUC of 0.91 (sensitivity 89.4%, specificity 82.9%). In the validation cohort (n = 72), all models showed comparable AUCs (Model_Combined approximately 0.90) without significant degradation, and Model_Combined significantly outperformed Model_Clinic and Model_RSA (DeLong P = 0.006 and 0.042, respectively). Integrating multimodal ultrasound and radiomic features improved early prediction of NAT response in breast cancer and could provide valuable information to enable timely treatment adjustments and more personalized management strategies.
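The abstract names PyRadiomics for feature extraction and LASSO for screening; a minimal sketch of that generic pipeline follows, assuming `pairs` (image/mask file paths per patient) and `y` (binary response labels) exist. The authors' exact extractor settings are not given, so defaults are used.

```python
import pandas as pd
from radiomics import featureextractor
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

extractor = featureextractor.RadiomicsFeatureExtractor()  # default settings

rows = []
for image_path, mask_path in pairs:  # assumed: (image, tumor mask) per patient
    features = extractor.execute(image_path, mask_path)
    rows.append({k: v for k, v in features.items()
                 if not k.startswith("diagnostics")})
X = pd.DataFrame(rows).astype(float)

# LASSO screening: keep only features with non-zero coefficients.
X_scaled = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5).fit(X_scaled, y)   # y: assumed 0/1 NAT response labels
selected = X.columns[lasso.coef_ != 0]
print(f"{len(selected)} radiomic features retained")
```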

Speckle2Self: Self-Supervised Ultrasound Speckle Reduction Without Clean Data

Xuesong Li, Nassir Navab, Zhongliang Jiang

arXiv preprint | Jul 9, 2025
Image denoising is a fundamental task in computer vision, particularly in medical ultrasound (US) imaging, where speckle noise significantly degrades image quality. Although recent advances in deep neural networks have led to substantial improvements in denoising natural images, these methods cannot be directly applied to US speckle noise, which is not purely random. Instead, US speckle arises from complex wave interference within the body's microstructure, making it tissue-dependent. This dependency means that obtaining two independent noisy observations of the same scene, as required by the pioneering Noise2Noise approach, is not feasible. Blind-spot networks likewise cannot handle US speckle noise because of its high spatial dependency. To address this challenge, we introduce Speckle2Self, a novel self-supervised algorithm for speckle reduction using only single noisy observations. The key insight is that applying a multi-scale perturbation (MSP) operation introduces tissue-dependent variations in the speckle pattern across different scales while preserving the shared anatomical structure. This enables effective speckle suppression by modeling the clean image as a low-rank signal and isolating the sparse noise component. To demonstrate its effectiveness, Speckle2Self is comprehensively compared with conventional filter-based denoising algorithms and state-of-the-art learning-based methods, using both realistic simulated US images and human carotid US images. Additionally, data from multiple US machines are employed to evaluate model generalization and adaptability to images from unseen domains. Code and datasets will be released upon acceptance.
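The closing idea, modeling the clean image as low-rank across perturbed views and the speckle as the residual, can be illustrated in a few lines. This toy decomposition is not the Speckle2Self MSP algorithm, only the underlying low-rank-plus-sparse principle.

```python
import numpy as np

def lowrank_sparse_split(views: np.ndarray, rank: int = 1):
    """views: (n_views, H*W) stack of perturbed observations of one image."""
    U, s, Vt = np.linalg.svd(views, full_matrices=False)
    s_trunc = np.zeros_like(s)
    s_trunc[:rank] = s[:rank]          # keep only the shared structure
    lowrank = (U * s_trunc) @ Vt       # anatomy, common across views
    sparse = views - lowrank           # view-dependent speckle residual
    return lowrank, sparse

# Example use: a denoised estimate is the mean low-rank row, reshaped:
# clean = lowrank_sparse_split(stack, rank=1)[0].mean(axis=0).reshape(H, W)
```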

Development of a deep learning model for predicting skeletal muscle density from ultrasound data: a proof-of-concept study.

Pistoia F, Macciò M, Picasso R, Zaottini F, Marcenaro G, Rinaldi S, Bianco D, Rossi G, Tovt L, Pansecchi M, Sanguinetti S, Hamedani M, Schenone A, Martinoli C

PubMed | Jul 8, 2025
Reduced muscle mass and function are associated with increased morbidity and mortality. Ultrasound, despite being cost-effective and portable, is still underutilized in assessing muscle trophism because of its reliance on operator expertise and measurement variability. This proof-of-concept study aimed to overcome these limitations by developing a deep learning model that predicts muscle density, as assessed by CT, using ultrasound data, exploring the feasibility of a novel ultrasound-based parameter for muscle trophism. A sample of adult participants undergoing CT examination in our institution's emergency department between May 2022 and March 2023 was enrolled in this single-center study. Ultrasound examinations were performed with an L11-3 MHz probe. The rectus abdominis muscles, selected as target muscles, were scanned in the transverse plane, recording one ultrasound image per side. For each participant, the same operator calculated the average target-muscle density in Hounsfield units from an axial CT slice closely matching the ultrasound scanning plane. The final dataset included 1090 ultrasound images from 551 participants (mean age 67 ± 17 years, 323 males). A deep learning model was developed to classify ultrasound images into three muscle-density classes based on CT values. The model achieved promising performance, with a categorical accuracy of 70% and AUC values of 0.89, 0.79, and 0.90 across the three classes. This observational study introduces an innovative approach to automated assessment of muscle trophism from ultrasound imaging. Future efforts should focus on external validation in diverse populations and clinical settings, as well as expanding the application to other muscles.
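The CT-derived target label described above (mean muscle density in Hounsfield units, binned into three classes) might look like the following; the class thresholds here are hypothetical, since the paper's boundaries are not stated in the abstract.

```python
import numpy as np

def muscle_density_class(ct_slice_hu: np.ndarray, muscle_mask: np.ndarray,
                         low: float = 20.0, high: float = 40.0) -> int:
    """Return 0 (low), 1 (intermediate), or 2 (normal) density class.

    Thresholds `low`/`high` are placeholders, not the paper's values.
    """
    mean_hu = ct_slice_hu[muscle_mask.astype(bool)].mean()
    if mean_hu < low:
        return 0
    return 1 if mean_hu < high else 2
```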

Adaptive batch-fusion self-supervised learning for ultrasound image pretraining.

Zhang J, Wu X, Liu S, Fan Y, Chen Y, Lyu G, Liu P, Liu Z, He S

PubMed | Jul 8, 2025
Medical self-supervised learning eliminates the reliance on labels, making feature extraction simple and efficient. However, the intricate design of pretext tasks in single-modal self-supervised analysis, compounded by an excessive dependency on data augmentation, has created a bottleneck in medical self-supervised learning research. This paper therefore reanalyzes the feature learnability introduced by data augmentation strategies in medical image self-supervised learning. We introduce an adaptive self-supervised data augmentation method based on batch fusion, and we propose a conv embedding block for learning the incremental representation between these batches. We tested five fused data tasks proposed by previous researchers; our method achieved a linear classification protocol accuracy of 94.25% with only 150 epochs of self-supervised feature training in a Vision Transformer (ViT), the best among comparable methods. A detailed ablation study of previous augmentation strategies indicates that the proposed medical data augmentation strategy effectively represents ultrasound data features during self-supervised learning. The code and weights are available online.
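The "linear classification protocol" cited for the 94.25% figure is the standard linear-probe evaluation: freeze the pretrained encoder and train only a linear classifier on its features. A PyTorch sketch follows; `backbone`, `feat_dim`, `num_classes`, and `train_loader` are assumed to exist.

```python
import torch
import torch.nn as nn

backbone.eval()                        # frozen, pretrained ViT encoder (assumed)
probe = nn.Linear(feat_dim, num_classes)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for images, labels in train_loader:    # labeled ultrasound images (assumed)
    with torch.no_grad():
        feats = backbone(images)       # encoder features are NOT fine-tuned
    loss = loss_fn(probe(feats), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```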

Machine learning models using non-invasive tests & B-mode ultrasound to predict liver-related outcomes in metabolic dysfunction-associated steatotic liver disease.

Kosick HM, McIntosh C, Bera C, Fakhriyehasl M, Shengir M, Adeyi O, Amiri L, Sebastiani G, Jhaveri K, Patel K

PubMed | Jul 8, 2025
Advanced fibrosis (F3-4) in metabolic dysfunction-associated steatotic liver disease (MASLD) predicts liver-related outcomes. Serum- and elastography-based non-invasive tests (NITs) cannot yet reliably predict MASLD outcomes, and the value of B-mode ultrasound (US) for outcome prediction is not yet known. We aimed to evaluate machine learning (ML) algorithms based on simple NITs and US for predicting adverse liver-related outcomes in MASLD. This was a retrospective cohort study of adult MASLD patients biopsied between 2010 and 2021 at one of two Canadian tertiary care centers. Random forest was used to create predictive models for the following outcomes: hepatic decompensation; liver-related outcomes (decompensation, hepatocellular carcinoma [HCC], liver transplant, and liver-related mortality); HCC; liver-related mortality; F3-4; and fibrotic metabolic dysfunction-associated steatohepatitis (MASH). Diagnostic performance was assessed using the area under the curve (AUC). In total, 457 MASLD patients were included (44.9% F3-4, 31.6% diabetes prevalence, 53.8% male, mean age 49.2 years, mean BMI 32.8 kg/m²); 6.3% had an adverse liver-related outcome over a mean 43 months of follow-up. AUCs for the ML predictive models were: hepatic decompensation 0.90 (0.79-0.98), liver-related outcomes 0.87 (0.76-0.96), HCC 0.72 (0.29-0.96), liver-related mortality 0.79 (0.31-0.98), F3-4 0.83 (0.76-0.87), and fibrotic MASH 0.74 (0.65-0.85). Biochemical and clinical variables had the greatest feature importance overall, compared with US parameters; FIB-4 and the AST:ALT ratio were the highest-ranked biochemical variables, and age was the highest-ranked clinical variable. ML models based on clinical, biochemical, and US-based variables accurately predicted adverse MASLD outcomes in this multi-center cohort. Overall, biochemical variables had the greatest feature importance, and US-based features were not substantial predictors of outcomes.
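A minimal scikit-learn sketch of the modeling approach described above, a random forest scored by AUC with ranked feature importances, is shown below; `X` (a feature DataFrame) and `y` (outcome labels) are assumed.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)   # X, y: assumed cohort features/labels

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_train, y_train)

auc = roc_auc_score(y_test, rf.predict_proba(X_test)[:, 1])
importances = pd.Series(rf.feature_importances_,
                        index=X.columns).sort_values(ascending=False)
print(f"AUC: {auc:.2f}")
print(importances.head(5))   # e.g. FIB-4, AST:ALT ratio, age, ...
```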

Self-supervised Deep Learning for Denoising in Ultrasound Microvascular Imaging

Lijie Huang, Jingyi Yin, Jingke Zhang, U-Wai Lok, Ryan M. DeRuiter, Jieyang Jin, Kate M. Knoll, Kendra E. Petersen, James D. Krier, Xiang-yang Zhu, Gina K. Hesley, Kathryn A. Robinson, Andrew J. Bentall, Thomas D. Atwell, Andrew D. Rule, Lilach O. Lerman, Shigao Chen, Chengwu Huang

arXiv preprint | Jul 7, 2025
Ultrasound microvascular imaging (UMI) is often hindered by low signal-to-noise ratio (SNR), especially in contrast-free or deep tissue scenarios, which impairs subsequent vascular quantification and reliable disease diagnosis. To address this challenge, we propose Half-Angle-to-Half-Angle (HA2HA), a self-supervised denoising framework specifically designed for UMI. HA2HA constructs training pairs from complementary angular subsets of beamformed radio-frequency (RF) blood flow data, across which vascular signals remain consistent while noise varies. HA2HA was trained using in-vivo contrast-free pig kidney data and validated across diverse datasets, including contrast-free and contrast-enhanced data from pig kidneys, as well as human liver and kidney. An improvement exceeding 15 dB in both contrast-to-noise ratio (CNR) and SNR was observed, indicating a substantial enhancement in image quality. In addition to power Doppler imaging, denoising directly in the RF domain is also beneficial for other downstream processing such as color Doppler imaging (CDI). CDI results of human liver derived from the HA2HA-denoised signals exhibited improved microvascular flow visualization, with a suppressed noisy background. HA2HA offers a label-free, generalizable, and clinically applicable solution for robust vascular imaging in both contrast-free and contrast-enhanced UMI.
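The pair-construction idea at the heart of HA2HA, as described in the abstract, can be sketched as compounding the two complementary halves of the steering-angle axis to obtain two images with consistent vascular signal but differing noise. The array layout below is an assumption; the authors' preprocessing is certainly more involved.

```python
import numpy as np

def half_angle_pair(rf: np.ndarray):
    """rf: beamformed RF blood-flow data shaped (n_angles, H, W) (assumed)."""
    n = rf.shape[0]
    img_a = rf[: n // 2].mean(axis=0)   # compound first half of the angles
    img_b = rf[n // 2 :].mean(axis=0)   # compound second half
    return img_a, img_b                 # (input, target) for N2N-style training
```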

Efficient Ultrasound Breast Cancer Detection with DMFormer: A Dynamic Multiscale Fusion Transformer.

Guo L, Zhang H, Ma C

PubMed | Jul 7, 2025
To develop an advanced deep learning model for accurate differentiation between benign and malignant masses in ultrasound breast cancer screening, addressing the challenges of noise, blur, and complex tissue structures in ultrasound imaging. We propose Dynamic Multiscale Fusion Transformer (DMFormer), a novel Transformer-based architecture featuring a dynamic multiscale feature fusion mechanism. The model integrates window attention for local feature interaction with grid attention for global context mixing, enabling comprehensive capture of both fine-grained tissue details and broader anatomical contexts. DMFormer was evaluated on two independent datasets and compared against state-of-the-art approaches, including convolutional neural networks, Transformer-based architectures, and hybrid models. The model achieved areas under the curve of 90.48% and 86.57% on the respective datasets, consistently outperforming all comparison models. DMFormer demonstrates superior performance in ultrasound breast cancer detection through its innovative dual-attention approach. The model's ability to effectively balance local and global feature processing while maintaining computational efficiency represents a significant advancement in medical image analysis. These results validate DMFormer's potential for enhancing the accuracy and reliability of breast cancer screening in clinical settings.
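The dual-attention design described above pairs local window attention with global grid attention, in the spirit of MaxViT-style blocks. The sketch below shows only the two token-partitioning schemes (the attention itself is standard); `P` is the window/grid size, and H and W are assumed divisible by it.

```python
import torch

def window_partition(x: torch.Tensor, P: int) -> torch.Tensor:
    """(B, H, W, C) -> (B*H//P*W//P, P*P, C): tokens attend within local P x P windows."""
    B, H, W, C = x.shape
    x = x.view(B, H // P, P, W // P, P, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, P * P, C)

def grid_partition(x: torch.Tensor, P: int) -> torch.Tensor:
    """(B, H, W, C) -> (B*H//P*W//P, P*P, C): each group is a sparse P x P grid
    strided across the whole image, mixing global context."""
    B, H, W, C = x.shape
    x = x.view(B, P, H // P, P, W // P, C)
    return x.permute(0, 2, 4, 1, 3, 5).reshape(-1, P * P, C)
```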

Artificial Intelligence-Enabled Point-of-Care Echocardiography: Bringing Precision Imaging to the Bedside.

East SA, Wang Y, Yanamala N, Maganti K, Sengupta PP

PubMed | Jul 7, 2025
The integration of artificial intelligence (AI) with point-of-care ultrasound (POCUS) is transforming cardiovascular diagnostics by enhancing image acquisition, interpretation, and workflow efficiency. These advancements hold promise in expanding access to cardiovascular imaging in resource-limited settings and enabling early disease detection through screening applications. This review explores the opportunities and challenges of AI-enabled POCUS as it reshapes the landscape of cardiovascular imaging. AI-enabled systems can reduce operator dependency, improve image quality, and support clinicians-both novice and experienced-in capturing diagnostically valuable images, ultimately promoting consistency across diverse clinical environments. However, widespread adoption faces significant challenges, including concerns around algorithm generalizability, bias, explainability, clinician trust, and data privacy. Addressing these issues through standardized development, ethical oversight, and clinician-AI collaboration will be critical to safe and effective implementation. Looking ahead, emerging innovations-such as autonomous scanning, real-time predictive analytics, tele-ultrasound, and patient-performed imaging-underscore the transformative potential of AI-enabled POCUS in reshaping cardiovascular care and advancing equitable healthcare delivery worldwide.

ViTaL: A Multimodality Dataset and Benchmark for Multi-pathological Ovarian Tumor Recognition

You Zhou, Lijiang Chen, Guangxia Cui, Wenpei Bai, Yu Guo, Shuchang Lyu, Guangliang Cheng, Qi Zhao

arXiv preprint | Jul 6, 2025
Ovarian tumors, a common gynecological disease, can rapidly progress into serious health crises when not detected early, posing a significant threat to women's health. Deep neural networks have the potential to identify ovarian tumors and thereby reduce mortality rates, but limited public datasets hinder progress. To address this gap, we introduce ViTaL, an ovarian tumor pathological recognition dataset containing Visual, Tabular and Linguistic modality data from 496 patients across six pathological categories. The ViTaL dataset comprises three subsets corresponding to the different patient data modalities: visual data from 2216 two-dimensional ultrasound images, tabular data from the medical examinations of 496 patients, and linguistic data from the ultrasound reports of 496 patients. Because merely distinguishing between benign and malignant ovarian tumors is insufficient in clinical practice, we propose ViTaL-Net, based on a Triplet Hierarchical Offset Attention Mechanism (THOAM), to enable multi-pathology classification of ovarian tumors while minimizing the loss incurred during feature fusion of multimodal data. This mechanism effectively enhances the relevance and complementarity between information from different modalities. ViTaL-Net serves as a benchmark for multi-pathology, multimodality classification of ovarian tumors. In comprehensive experiments, the proposed method exhibited satisfactory performance, achieving accuracies exceeding 90% on the two most common pathological types of ovarian tumor and an overall accuracy of 85%. Our dataset and code are available at https://github.com/GGbond-study/vitalnet.
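As a point of reference for the fusion task ViTaL-Net addresses, a plain late-fusion baseline over the three modality embeddings might look like the following. This is a generic concatenation model, not the paper's THOAM; all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Concatenation baseline over visual/tabular/linguistic embeddings."""

    def __init__(self, d_img: int, d_tab: int, d_txt: int,
                 d: int = 256, n_classes: int = 6):  # six pathological categories
        super().__init__()
        self.proj = nn.ModuleList(
            [nn.Linear(k, d) for k in (d_img, d_tab, d_txt)])
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(3 * d, n_classes))

    def forward(self, img_emb, tab_emb, txt_emb):
        fused = torch.cat(
            [p(z) for p, z in zip(self.proj, (img_emb, tab_emb, txt_emb))],
            dim=-1)
        return self.head(fused)   # logits over the pathological categories
```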