Page 38 of 56556 results

EchoFM: Foundation Model for Generalizable Echocardiogram Analysis.

Kim S, Jin P, Song S, Chen C, Li Y, Ren H, Li X, Liu T, Li Q

PubMed | Jun 18 2025
Echocardiography is the first-line noninvasive cardiac imaging modality, providing rich spatio-temporal information on cardiac anatomy and physiology. Recently, foundation models trained on extensive and diverse datasets have shown strong performance in various downstream tasks. However, translating foundation models into the medical imaging domain remains challenging due to domain differences between medical and natural images and the lack of diverse patient and disease datasets. In this paper, we introduce EchoFM, a general-purpose vision foundation model for echocardiography trained on a large-scale dataset of over 20 million echocardiographic images from 6,500 patients. To enable effective learning of rich spatio-temporal representations from periodic videos, we propose a novel self-supervised learning framework based on a masked autoencoder with a spatio-temporally consistent masking strategy and periodicity-driven contrastive learning. The learned cardiac representations can be readily adapted and fine-tuned for a wide range of downstream tasks, serving as a strong and flexible backbone model. We validate EchoFM through experiments across key downstream tasks in the clinical echocardiography workflow, leveraging public and multi-center internal datasets. EchoFM consistently outperforms state-of-the-art (SOTA) methods, demonstrating superior generalization capabilities and flexibility. The code and checkpoints are available at: https://github.com/SekeunKim/EchoFM.git.
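The spatio-temporally consistent masking described above can be sketched in a few lines: the same spatial patches are hidden in every frame of a clip, so the autoencoder cannot trivially copy a masked region from a neighbouring frame. This is a minimal illustration under assumed settings (a 14x14 patch grid and a 75% mask ratio), not the paper's implementation:

```python
import numpy as np

def tube_mask(num_frames, h_patches, w_patches, mask_ratio=0.75, rng=None):
    """Sample one spatial mask and repeat it over time, so the same
    patch locations are hidden in every frame (temporal consistency)."""
    rng = np.random.default_rng(rng)
    num_spatial = h_patches * w_patches
    num_masked = int(round(num_spatial * mask_ratio))
    masked = rng.choice(num_spatial, size=num_masked, replace=False)
    spatial = np.zeros(num_spatial, dtype=bool)
    spatial[masked] = True  # True = masked patch
    return np.broadcast_to(spatial, (num_frames, num_spatial)).copy()

# 16-frame clip, 14x14 = 196 patches per frame, 75% of them masked
mask = tube_mask(num_frames=16, h_patches=14, w_patches=14, mask_ratio=0.75, rng=0)
```

Because the mask is identical along the temporal axis, no information leaks between frames through unmasked patches.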

MDEANet: A multi-scale deep enhanced attention net for popliteal fossa segmentation in ultrasound images.

Chen F, Fang W, Wu Q, Zhou M, Guo W, Lin L, Chen Z, Zou Z

PubMed | Jun 18 2025
Popliteal sciatic nerve block is a widely used technique for lower limb anesthesia. However, despite ultrasound guidance, the complex anatomical structures of the popliteal fossa can present challenges, potentially leading to complications. To accurately identify the bifurcation of the sciatic nerve for nerve blockade, we propose MDEANet, a deep learning-based segmentation network designed for the precise localization of nerves, muscles, and arteries in ultrasound images of the popliteal region. MDEANet incorporates Cascaded Multi-scale Atrous Convolutions (CMAC) to enhance multi-scale feature extraction, an Enhanced Spatial Attention Mechanism (ESAM) to focus on key anatomical regions, and Cross-level Feature Fusion (CLFF) to improve contextual representation. Together, these modules markedly improve the segmentation of nerves, muscles, and arteries. Experimental results demonstrate that MDEANet achieves an average Intersection over Union (IoU) of 88.60% and a Dice coefficient of 93.95% across all target structures, outperforming state-of-the-art models by 1.68% in IoU and 1.66% in Dice coefficient. For nerve segmentation specifically, the Dice coefficient reaches 93.31%, underscoring the effectiveness of our approach. MDEANet has the potential to provide decision support for anesthesiologists, enhancing the accuracy and efficiency of ultrasound-guided nerve blockade procedures.
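The benefit of cascading atrous (dilated) convolutions, as in a CMAC-style module, is easiest to see through the receptive field: with stride 1, each layer with kernel size k and dilation d widens the field by (k - 1) * d. A small sketch; the dilation rates are assumed for illustration, not taken from the paper:

```python
def receptive_field(kernel_size, dilations):
    """Receptive field of stacked stride-1 convolutions: each layer with
    kernel k and dilation d adds (k - 1) * d pixels to the field."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Cascading 3x3 atrous convolutions with rising dilation rates widens the
# field far faster than plain 3x3 convolutions of the same depth.
atrous = receptive_field(3, [1, 2, 4, 8])   # -> 31
plain  = receptive_field(3, [1, 1, 1, 1])   # -> 9
```

The wider field lets the same four-layer stack see structures at several scales at once, which is the point of multi-scale feature extraction.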

Applying a multi-task and multi-instance framework to predict axillary lymph node metastases in breast cancer.

Li Y, Chen Z, Ding Z, Mei D, Liu Z, Wang J, Tang K, Yi W, Xu Y, Liang Y, Cheng Y

PubMed | Jun 18 2025
Deep learning (DL) models have shown promise in predicting axillary lymph node (ALN) status. However, most existing DL models are classification-only and do not consider the practical scenario of multi-view joint prediction. Here, we propose a Multi-Task Learning (MTL) and Multi-Instance Learning (MIL) framework that simulates the real-world clinical diagnostic scenario for ALN status prediction in breast cancer. Ultrasound images of the primary tumor and ALN (if available) regions were collected, each annotated with a segmentation label. The model was trained on a training cohort and tested on both internal and external test cohorts. The proposed two-stage DL framework, using the Transformer-based SegFormer as the network backbone, was the top-performing model. It achieved an AUC of 0.832, a sensitivity of 0.815, and a specificity of 0.854 in the internal test cohort. In the external cohort, it attained an AUC of 0.918, a sensitivity of 0.851, and a specificity of 0.957. The Class Activation Mapping method demonstrated that the DL model correctly identified the characteristic areas of metastasis within the primary tumor and ALN regions. This framework may serve as an effective second reader to assist clinicians in ALN status assessment.
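In a multi-instance setting like this one, each ultrasound view is an instance and the patient is the bag; a simple aggregation turns per-view scores into a patient-level prediction. The top-k pooling below is a generic MIL sketch, not the paper's aggregation rule:

```python
import numpy as np

def bag_probability(instance_probs, top_k=1):
    """Aggregate per-image (instance) metastasis probabilities into one
    patient-level (bag) score by averaging the top-k instances."""
    p = np.sort(np.asarray(instance_probs, dtype=float))[::-1]  # descending
    return float(p[:top_k].mean())

# A patient with several ultrasound views: one suspicious view is enough
# to raise the bag-level score under max (top-1) pooling.
views = [0.10, 0.15, 0.92, 0.20]
score = bag_probability(views, top_k=1)   # -> 0.92
```

Top-1 (max) pooling encodes the standard MIL assumption that a bag is positive if any one of its instances is.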

Echo-DND: A dual noise diffusion model for robust and precise left ventricle segmentation in echocardiography

Abdur Rahman, Keerthiveena Balraj, Manojkumar Ramteke, Anurag Singh Rathore

arXiv preprint | Jun 18 2025
Recent advancements in diffusion probabilistic models (DPMs) have revolutionized image processing, demonstrating significant potential in medical applications. Accurate segmentation of the left ventricle (LV) in echocardiograms is crucial for diagnostic procedures and necessary treatments. However, ultrasound images are notoriously noisy with low contrast and ambiguous LV boundaries, thereby complicating the segmentation process. To address these challenges, this paper introduces Echo-DND, a novel dual-noise diffusion model specifically designed for this task. Echo-DND leverages a unique combination of Gaussian and Bernoulli noises. It also incorporates a multi-scale fusion conditioning module to improve segmentation precision. Furthermore, it utilizes spatial coherence calibration to maintain spatial integrity in segmentation masks. The model's performance was rigorously validated on the CAMUS and EchoNet-Dynamic datasets. Extensive evaluations demonstrate that the proposed framework outperforms existing SOTA models. It achieves high Dice scores of 0.962 and 0.939 on these datasets, respectively. The proposed Echo-DND model establishes a new standard in echocardiogram segmentation, and its architecture holds promise for broader applicability in other medical imaging tasks, potentially improving diagnostic accuracy across various medical domains. Project page: https://abdur75648.github.io/Echo-DND
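The dual-noise idea pairs a continuous Gaussian corruption with a discrete Bernoulli corruption of the binary mask. A toy forward-noising step, purely illustrative; the schedules `beta` and `sigma` are assumptions, not Echo-DND's:

```python
import numpy as np

def dual_noise_step(mask, beta, sigma, rng=None):
    """One illustrative forward-noising step on a binary segmentation mask:
    a Bernoulli branch flips each pixel with probability beta, while a
    Gaussian branch adds continuous noise of scale sigma."""
    rng = np.random.default_rng(rng)
    flips = rng.random(mask.shape) < beta
    bernoulli_noised = np.where(flips, 1.0 - mask, mask)               # discrete corruption
    gaussian_noised = mask + sigma * rng.standard_normal(mask.shape)   # continuous corruption
    return bernoulli_noised, gaussian_noised

# A checkerboard stands in for a binary LV mask.
mask = (np.arange(64).reshape(8, 8) % 2).astype(float)
b, g = dual_noise_step(mask, beta=0.1, sigma=0.2, rng=0)
```

The Bernoulli branch keeps the corrupted mask binary, which is what makes it a natural fit for segmentation targets.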

Artificial Intelligence in Breast US Diagnosis and Report Generation.

Wang J, Tian H, Yang X, Wu H, Zhu X, Chen R, Chang A, Chen Y, Dou H, Huang R, Cheng J, Zhou Y, Gao R, Yang K, Li G, Chen J, Ni D, Dong F, Xu J, Gu N

PubMed | Jun 18 2025
"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To develop and evaluate an artificial intelligence (AI) system for generating breast ultrasound (BUS) reports. Materials and Methods This retrospective study included 104,364 cases from three hospitals (January 2020-December 2022). The AI system was trained on 82,896 cases, validated on 10,385 cases, and tested on an internal set (10,383 cases) and two external sets (300 and 400 cases). Under blind review, three senior radiologists (> 10 years of experience) evaluated AI-generated reports and those written by one midlevel radiologist (7 years of experience), as well as reports from three junior radiologists (2-3 years of experience) with and without AI assistance. The primary outcomes included the acceptance rates of Breast Imaging Reporting and Data System (BI-RADS) categories and lesion characteristics. Statistical analysis included one-sided and two-sided McNemar tests for non-inferiority and significance testing. Results In external test set 1 (300 cases), the midlevel radiologist and the AI system achieved BI-RADS acceptance rates of 95.00% [285/300] and 92.33% [277/300], respectively (P < .001; non-inferiority test with a prespecified margin of 10%). In external test set 2 (400 cases), the three junior radiologists' BI-RADS acceptance rates rose from 87.00% [348/400] to 90.75% [363/400] (P = .06), from 86.50% [346/400] to 92.00% [368/400] (P = .007), and from 84.75% [339/400] to 90.25% [361/400] (P = .02) with AI assistance. Conclusion The AI system performed comparably to a midlevel radiologist and aided junior radiologists in BI-RADS classification. ©RSNA, 2025.
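The McNemar test used in these comparisons depends only on the discordant pairs, i.e. cases accepted under one reading condition but not the other. A minimal sketch with hypothetical discordant counts (the abstract does not report them):

```python
def mcnemar_statistic(b, c):
    """Continuity-corrected McNemar chi-square for paired yes/no ratings.
    b and c are the discordant counts: cases accepted under one condition
    but rejected under the other, and vice versa."""
    return (abs(b - c) - 1) ** 2 / (b + c)

# Hypothetical discordant counts for paired report acceptance:
stat = mcnemar_statistic(b=18, c=10)   # -> 1.75
```

Note that the concordant cases (accepted or rejected under both conditions) cancel out and never enter the statistic, which is why the test suits paired reader studies like this one.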

Step-by-Step Approach to Design Image Classifiers in AI: An Exemplary Application of the CNN Architecture for Breast Cancer Diagnosis

Lohani, A., Mishra, B. K., Wertheim, K. Y., Fagbola, T. M.

medRxiv preprint | Jun 17 2025
In recent years, various Convolutional Neural Network (CNN) approaches have been applied to image classification in general and to specific problems such as breast cancer diagnosis, but there is no standard approach to facilitate comparison and synergy. This paper proposes a step-by-step approach to standardise a common application of image classification, with the specific problem of classifying breast ultrasound images for breast cancer diagnosis as an illustrative example. In this study, three distinct datasets, the Breast Ultrasound Image (BUSI), Breast Ultrasound Image (BUI), and Ultrasound Breast Images for Breast Cancer (UBIBC) datasets, were used to build and fine-tune custom and pre-trained CNN models systematically. Custom CNN models were built, and transfer learning (TL) was then applied to deploy a broad range of pre-trained models, optimised through data augmentation and hyperparameter tuning. Models were trained and tested in scenarios involving limited and large datasets to gain insights into their robustness and generality. The results indicate that the custom CNN and VGG19 are the two most suitable architectures for this problem. The experimental results highlight the significance of an effective step-by-step approach in image classification tasks for enhancing the robustness and generalisation capabilities of CNN-based classifiers.
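Hyperparameter tuning of the kind described above is often run as a grid sweep. A minimal sketch; the grid values are illustrative assumptions, not the paper's settings:

```python
from itertools import product

# Illustrative hyperparameter grid of the kind swept during fine-tuning.
grid = {
    "learning_rate": [1e-4, 1e-3],
    "batch_size": [16, 32],
    "dropout": [0.3, 0.5],
}

# Enumerate every combination as a config dict for a training run.
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
len(configs)   # -> 8 candidate configurations
```

Each config would be trained and scored on a validation split, with the best-scoring combination kept for the final model.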

Enhancing Ultrasound-Based Diagnosis of Unilateral Diaphragmatic Paralysis with a Visual Transformer-Based Model.

Kalkanis A, Bakalis D, Testelmans D, Buyse B, Simos YV, Tsamis KI, Manis G

PubMed | Jun 17 2025
This paper presents a novel methodology that combines a pre-trained Visual Transformer-Based Deep Model (ViT) with a custom denoising image filter for the diagnosis of Unilateral Diaphragmatic Paralysis (UDP) using Ultrasound (US) images. The ViT is employed to extract complex features from US images of 17 volunteers, capturing intricate patterns and details that are critical for accurate diagnosis. The extracted features are then fed into an ensemble learning model to determine the presence of UDP. The proposed framework achieves an average accuracy of 93.8% on a stratified 5-fold cross-validation, surpassing relevant state-of-the-art (SOTA) image classifiers. This high level of performance underscores the robustness and effectiveness of the framework, highlighting its potential as a prominent diagnostic tool in medical imaging.
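Stratified 5-fold cross-validation, as used here, keeps the class balance of a small cohort roughly constant across folds. A self-contained sketch of the fold assignment; the cohort size and labels are illustrative, not the study's:

```python
import numpy as np

def stratified_folds(labels, k, rng=None):
    """Assign each sample to one of k folds while keeping the class
    proportions of `labels` roughly equal across folds."""
    rng = np.random.default_rng(rng)
    labels = np.asarray(labels)
    folds = np.empty(len(labels), dtype=int)
    for cls in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == cls))
        folds[idx] = np.arange(len(idx)) % k   # deal class members round-robin
    return folds

labels = [0] * 40 + [1] * 10          # imbalanced, like a small clinical cohort
folds = stratified_folds(labels, k=5, rng=0)
```

With plain (unstratified) folds, a small positive class can vanish from some folds entirely, which distorts per-fold accuracy estimates.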

2nd trimester ultrasound (anomaly).

Carocha A, Vicente M, Bernardeco J, Rijo C, Cohen Á, Cruz J

PubMed | Jun 17 2025
The second-trimester ultrasound is a crucial tool in prenatal care, typically conducted between 18 and 24 weeks of gestation to evaluate fetal anatomy and growth and to perform mid-trimester screening. This article provides a comprehensive overview of best practices and guidelines for performing this examination, with a focus on detecting fetal anomalies. The ultrasound assesses key structures and evaluates fetal growth by measuring biometric parameters, which are essential for estimating fetal weight. Additionally, the article discusses the importance of placental evaluation, amniotic fluid level measurement, and assessment of preterm birth risk through cervical length measurement. Factors that can affect the accuracy of the scan, such as the skill of the operator, the quality of the equipment, and maternal conditions such as obesity, are discussed. The article also addresses the limitations of the procedure, including variability in detection rates. Despite these challenges, the second-trimester ultrasound remains a valuable screening and diagnostic tool, providing essential information for managing pregnancies, especially in high-risk cases. Future directions include improving imaging technology, integrating artificial intelligence for anomaly detection, and standardizing ultrasound protocols to enhance diagnostic accuracy and ensure consistent prenatal care.
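Fetal weight estimation from the biometric parameters mentioned above is commonly done with a Hadlock formula. The sketch below uses the widely cited Hadlock IV coefficients (measurements in centimetres, weight in grams) and is for illustration only, not clinical use:

```python
def hadlock_efw(bpd, hc, ac, fl):
    """Estimated fetal weight (grams) from biparietal diameter, head
    circumference, abdominal circumference, and femur length (all in cm),
    using the commonly cited Hadlock IV coefficients."""
    log10_efw = (1.3596 + 0.0064 * hc + 0.0424 * ac + 0.174 * fl
                 + 0.00061 * bpd * ac - 0.00386 * ac * fl)
    return 10 ** log10_efw

# Typical measurements around 20 weeks give a weight of roughly 300 g.
efw = hadlock_efw(bpd=4.7, hc=17.5, ac=14.7, fl=3.2)
```

In practice the estimate carries a sizeable error margin (commonly quoted around 10-15%), which is one reason operator skill and image quality matter so much.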

A Semi-supervised Ultrasound Image Segmentation Network Integrating Enhanced Mask Learning and Dynamic Temperature-controlled Self-distillation.

Xu L, Huang Y, Zhou H, Mao Q, Yin W

PubMed | Jun 16 2025
Ultrasound imaging is widely used in clinical practice due to its advantages of no radiation and real-time capability. However, its image quality is often degraded by speckle noise, low contrast, and blurred boundaries, which pose significant challenges for automatic segmentation. In recent years, deep learning methods have achieved notable progress in ultrasound image segmentation. Nonetheless, these methods typically require large-scale annotated datasets, incur high computational costs, and suffer from slow inference speeds, limiting their clinical applicability. To overcome these limitations, we propose EML-DMSD, a novel semi-supervised segmentation network that combines Enhanced Mask Learning (EML) and Dynamic Temperature-Controlled Multi-Scale Self-Distillation (DMSD). The EML module improves the model's robustness to noise and boundary ambiguity, while the DMSD module introduces a teacher-free, multi-scale self-distillation strategy with dynamic temperature adjustment to boost inference efficiency and reduce reliance on extensive resources. Experiments on multiple ultrasound benchmark datasets demonstrate that EML-DMSD achieves superior segmentation accuracy with efficient inference, highlighting its strong generalization ability and clinical potential.
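Temperature-controlled self-distillation matches temperature-softened output distributions; raising the temperature softens both distributions and here shrinks the divergence the student must close. A generic sketch of the distillation loss (the logits and temperatures are illustrative, not EML-DMSD's):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T flattens the distribution."""
    z = np.asarray(z, dtype=float) / T
    z -= z.max()                     # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distill_kl(teacher_logits, student_logits, T):
    """KL divergence between temperature-softened distributions, the
    usual distillation target."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * np.log(p / q)))

t, s = [4.0, 1.0, 0.5], [3.0, 1.5, 0.5]
sharp = distill_kl(t, s, T=1.0)
soft  = distill_kl(t, s, T=4.0)   # softer targets, smaller divergence here
```

Dynamically adjusting T during training, as the DMSD module does, amounts to controlling how much of this softened "dark knowledge" the student is asked to match at each stage.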

Interpretable deep fuzzy network-aided detection of central lymph node metastasis status in papillary thyroid carcinoma.

Wang W, Ning Z, Zhang J, Zhang Y, Wang W

PubMed | Jun 16 2025
The non-invasive assessment of central lymph node metastasis (CLNM) in patients with papillary thyroid carcinoma (PTC) plays a crucial role in assisting treatment decisions and prognosis planning. This study uses an interpretable deep fuzzy network guided by expert knowledge to predict the CLNM status of patients with PTC from ultrasound images. A total of 1019 PTC patients were enrolled in this study, comprising 465 CLNM patients and 554 non-CLNM patients. Pathological diagnosis served as the gold standard for metastasis status. Clinical and morphological features of the thyroid were collected as expert knowledge to guide the deep fuzzy network in predicting CLNM status. The network consists of a region of interest (ROI) segmentation module, a knowledge-aware feature extraction module, and a fuzzy prediction module. It was trained on 652 patients, validated on 163 patients, and tested on 204 patients. The model exhibited promising performance in predicting CLNM status, achieving an area under the receiver operating characteristic curve (AUC) of 0.786 (95% CI 0.720-0.846), with accuracy, precision, sensitivity, and specificity of 0.745 (95% CI 0.681-0.799), 0.727 (95% CI 0.636-0.819), 0.696 (95% CI 0.594-0.789), and 0.786 (95% CI 0.712-0.864), respectively. In addition, the fuzzy rules learned by the model are easy to understand and explain, giving the system good interpretability. The deep fuzzy network guided by expert knowledge predicted the CLNM status of PTC patients with high accuracy and good interpretability and may serve as an effective tool to guide preoperative clinical decision-making.
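A fuzzy prediction module of this kind evaluates human-readable rules whose antecedents are membership degrees. The toy Sugeno-style rule base below is illustrative only; the features, membership functions, and rules are assumptions, not the paper's:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_metastasis_risk(size_mm, irregularity):
    """Rule strength = min of antecedent memberships; output = the
    strength-weighted average of each rule's risk consequent."""
    large = tri(size_mm, 8, 20, 32)
    small = tri(size_mm, -4, 4, 12)
    irregular, smooth = irregularity, 1.0 - irregularity
    rules = [
        (min(large, irregular), 0.9),   # large irregular nodule -> high risk
        (min(small, smooth),    0.2),   # small smooth nodule    -> low risk
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.5    # no rule fires -> indifferent

high = fuzzy_metastasis_risk(size_mm=20, irregularity=0.9)
low  = fuzzy_metastasis_risk(size_mm=4,  irregularity=0.1)
```

The interpretability claim rests on exactly this structure: each prediction can be traced back to which rules fired and how strongly.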