
Harnessing Artificial Intelligence for Shoulder Ultrasonography: A Narrative Review.

Wu WT, Shu YC, Lin CY, Gonzalez-Suarez CB, Özçakar L, Chang KV

PubMed · Sep 12 2025
Shoulder pain is a common musculoskeletal complaint requiring accurate imaging for diagnosis and management. Ultrasound is favored for its accessibility, dynamic imaging, and high-resolution soft tissue visualization. However, its operator dependency and variability in interpretation present challenges. Recent advancements in artificial intelligence (AI), particularly deep learning algorithms like convolutional neural networks, offer promising applications in musculoskeletal imaging, enhancing diagnostic accuracy and efficiency. This narrative review explores AI integration in shoulder ultrasound, emphasizing automated pathology detection, image segmentation, and outcome prediction. Deep learning models have demonstrated high accuracy in grading bicipital peritendinous effusion and discriminating rotator cuff tendon tears, while machine learning techniques have shown efficacy in predicting the success of ultrasound-guided percutaneous irrigation for rotator cuff calcification. AI-powered segmentation models have improved anatomical delineation; however, despite these advancements, challenges remain, including the need for large, well-annotated datasets, model generalizability across diverse populations, and clinical validation. Future research should optimize AI algorithms for real-time applications, integrate multimodal imaging, and enhance clinician-AI collaboration.

The Combined Use of Cervical Ultrasound and Deep Learning Improves the Detection of Patients at Risk for Spontaneous Preterm Delivery.

Sejer EPF, Pegios P, Lin M, Bashir Z, Wulff CB, Christensen AN, Nielsen M, Feragen A, Tolsgaard MG

PubMed · Sep 11 2025
Preterm birth is the leading cause of neonatal mortality and morbidity. While ultrasound-based cervical length measurement is the current standard for predicting preterm birth, its performance is limited. Artificial intelligence (AI) has shown potential in ultrasound analysis, yet only a few small-scale studies have evaluated its use for predicting preterm birth. We aimed to develop and validate an AI model for spontaneous preterm birth prediction from cervical ultrasound images and to compare its performance with cervical length. In this multicenter study, we developed a deep learning-based AI model using data from women who underwent cervical ultrasound scans as part of antenatal care between 2008 and 2018 in Denmark. Indications for ultrasound were not systematically recorded, and scans were likely performed due to risk factors or symptoms of preterm labor. We compared the performance of the AI model with cervical length measurement for spontaneous preterm birth prediction by assessing the area under the curve (AUC), sensitivity, specificity, and likelihood ratios. Subgroup analyses evaluated model performance across baseline characteristics, and saliency heat maps identified the anatomical features that influenced AI model predictions the most. The final dataset included 4,224 pregnancies and 7,862 cervical ultrasound images, with 50% resulting in spontaneous preterm birth. The AI model surpassed cervical length for predicting spontaneous preterm birth before 37 weeks, with a sensitivity of 0.51 (95% CI 0.50-0.53) versus 0.41 (0.39-0.42) at a fixed specificity of 0.85, p<0.001, and a higher AUC of 0.75 (0.74-0.76) versus 0.67 (0.66-0.68), p<0.001. For identifying late preterm births at 34-37 weeks, the AI model had 36.6% higher sensitivity than cervical length (0.47 versus 0.34, p<0.001). The AI model achieved higher AUCs across all subgroups, especially at earlier gestational ages. Saliency heat maps indicated that in 54% of preterm birth cases, the AI model focused on the posterior inner lining of the lower uterine segment, suggesting it incorporates more information than cervical length alone. To our knowledge, this is the first large-scale, multicenter study demonstrating that AI is more sensitive than cervical length measurement in identifying spontaneous preterm births across multiple baseline characteristics, 19 hospital sites, and different ultrasound machines. The AI model performs particularly well at earlier gestational ages, enabling more timely prophylactic interventions.
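
As a point of reference for how such a comparison is typically computed, the sketch below derives AUC and sensitivity at a fixed specificity of 0.85 from per-pregnancy risk scores using scikit-learn. The variable names and toy values are hypothetical, not the study's data or code.

```python
# Minimal sketch (not the authors' code): AUC and sensitivity at a fixed
# specificity of 0.85, comparing an AI risk score against cervical length.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def sensitivity_at_specificity(y_true, y_score, target_specificity=0.85):
    """Return the highest sensitivity achievable at or above the target specificity."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    specificity = 1.0 - fpr
    ok = specificity >= target_specificity
    return tpr[ok].max() if ok.any() else 0.0

# Hypothetical data: 1 = spontaneous preterm birth.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
ai_score = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3])
cervical_length_mm = np.array([18, 35, 22, 30, 28, 40, 20, 33])

# Shorter cervix means higher risk, so negate the length to use it as a score.
print("AI AUC:", roc_auc_score(y_true, ai_score))
print("CL AUC:", roc_auc_score(y_true, -cervical_length_mm))
print("AI sensitivity at specificity 0.85:", sensitivity_at_specificity(y_true, ai_score))
```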

U-ConvNext: A Robust Approach to Glioma Segmentation in Intraoperative Ultrasound.

Vahdani AM, Rahmani M, Pour-Rashidi A, Ahmadian A, Farnia P

PubMed · Sep 11 2025
Intraoperative tumor imaging is critical to achieving maximal safe resection during neurosurgery, especially for low-grade glioma resection. Given the convenience of ultrasound as an intraoperative imaging modality, but also its limitations and the time-consuming process of manual tumor segmentation, we propose a learning-based model for the accurate segmentation of low-grade gliomas in ultrasound images. We developed a novel U-Net-based architecture, titled U-ConvNext, that adopts the block architecture of the ConvNeXt V2 model and incorporates several architectural improvements, including global response normalization, fine-tuned kernel sizes, and inception layers. We also adopted the CutMix data augmentation technique for semantic segmentation, aiming for enhanced texture detection. Conformal segmentation, a novel approach to conformal prediction for binary semantic segmentation, was also developed for uncertainty quantification, providing calibrated measures of model uncertainty in a visual format. The proposed models were trained and evaluated on three subsets of images in the RESECT dataset and achieved hold-out test Dice scores of 84.63%, 74.52%, and 90.82% on the "before," "during," and "after" subsets, respectively, indicating increases of ~13-31% compared to the state of the art. Furthermore, external evaluation on the ReMIND dataset indicated robust performance (Dice score of 79.17% [95% CI: 77.82-81.62]) and only a moderate decline of <3% in expected calibration error. Our approach integrates several innovations in model design, model training, and uncertainty quantification, achieving improved results on the segmentation of low-grade glioma in ultrasound images during neurosurgery.
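
For readers unfamiliar with CutMix in a segmentation setting, the following is a minimal sketch of the general idea rather than the authors' implementation: a rectangular patch is copied from one image together with the corresponding region of its mask, so the augmented texture and labels stay consistent.

```python
# Minimal sketch (assumption: standard CutMix adapted to binary segmentation).
import numpy as np

def cutmix_segmentation(img_a, mask_a, img_b, mask_b, rng=None):
    """Paste a random rectangle from (img_b, mask_b) into a copy of (img_a, mask_a)."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = img_a.shape[:2]
    lam = rng.uniform(0.1, 0.5)                 # patch covers roughly 10-50% of the area
    ph, pw = int(h * np.sqrt(lam)), int(w * np.sqrt(lam))
    y0 = rng.integers(0, h - ph + 1)
    x0 = rng.integers(0, w - pw + 1)
    img, mask = img_a.copy(), mask_a.copy()
    img[y0:y0 + ph, x0:x0 + pw] = img_b[y0:y0 + ph, x0:x0 + pw]
    mask[y0:y0 + ph, x0:x0 + pw] = mask_b[y0:y0 + ph, x0:x0 + pw]
    return img, mask
```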

Automatic approach for B-lines detection in lung ultrasound images using You Only Look Once algorithm.

Bottino A, Botrugno C, Casciaro E, Conversano F, Lay-Ekuakille A, Lombardi FA, Morello R, Pisani P, Vetrugno L, Casciaro S

PubMed · Sep 11 2025
B-lines are among the key artifact signs observed in Lung Ultrasound (LUS), playing a critical role in differentiating pulmonary diseases and assessing overall lung condition. However, their accurate detection and quantification can be time-consuming and technically challenging, especially for less experienced operators. This study aims to evaluate the performance of a YOLO (You Only Look Once)-based algorithm for the automated detection of B-lines, offering a novel tool to support clinical decision-making. The proposed approach is designed to improve the efficiency and consistency of LUS interpretation, particularly for non-expert practitioners, and to enhance its utility in guiding respiratory management. In this observational agreement study, 644 images from an anonymized internal database and a clinical online database were evaluated. After a quality selection step, 386 images from 46 patients remained available for analysis. Ground truth was established by a blinded expert sonographer identifying B-lines within a rectangular Region Of Interest (ROI) on each frame. Algorithm performance was assessed through Precision, Recall, and F1 Score, while weighted kappa (kw) statistics were employed to quantify the agreement between the YOLO-based algorithm and the expert operator. The algorithm achieved a precision of 0.92 (95% CI 0.89-0.94), recall of 0.81 (95% CI 0.77-0.85), and F1-score of 0.86 (95% CI 0.83-0.88). The weighted kappa was 0.68 (95% CI 0.64-0.72), indicating substantial agreement between the algorithm and expert annotations. The proposed algorithm has demonstrated its potential to significantly enhance diagnostic support by accurately detecting B-lines in LUS images.
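
The reported metrics can be reproduced from raw detection counts and paired gradings; the sketch below uses hypothetical numbers (not the study's data) to show how precision, recall, F1, and a linearly weighted kappa are computed with scikit-learn.

```python
# Minimal sketch with hypothetical counts: detection metrics plus weighted kappa
# between algorithm and expert B-line gradings.
from sklearn.metrics import cohen_kappa_score

tp, fp, fn = 92, 8, 21                       # hypothetical true/false positives, false negatives
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")

# Hypothetical per-frame B-line grades (0, 1, 2, >=3) from the algorithm and the expert.
algo_grade   = [0, 1, 2, 3, 1, 0, 2, 3, 1]
expert_grade = [0, 1, 2, 2, 1, 0, 3, 3, 1]
kw = cohen_kappa_score(algo_grade, expert_grade, weights="linear")
print(f"weighted kappa={kw:.2f}")
```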

Ultrasam: a foundation model for ultrasound using large open-access segmentation datasets.

Meyer A, Murali A, Zarin F, Mutter D, Padoy N

PubMed · Sep 11 2025
Automated ultrasound (US) image analysis remains a longstanding challenge due to anatomical complexity and the scarcity of annotated data. Although large-scale pretraining has improved data efficiency in many visual domains, its impact in US is limited by a pronounced domain shift from other imaging modalities and high variability across clinical applications, such as chest, ovarian, and endoscopic imaging. To address this, we propose UltraSam, a SAM-style model trained on a heterogeneous collection of publicly available segmentation datasets, originally developed in isolation. UltraSam is trained under the prompt-conditioned segmentation paradigm, which eliminates the need for unified labels and enables generalization to a broad range of downstream tasks. We compile US-43d, a large-scale collection of 43 open-access US datasets comprising over 282,000 images with segmentation masks covering 58 anatomical structures. We explore adaptation and fine-tuning strategies for SAM and systematically evaluate transferability across downstream tasks, comparing against state-of-the-art pretraining methods. We further propose prompted classification, a new use case where object-specific prompts and image features are jointly decoded to improve classification performance. In experiments on three diverse public US datasets, UltraSam outperforms existing SAM variants on prompt-based segmentation and surpasses self-supervised US foundation models on downstream (prompted) classification and instance segmentation tasks. UltraSam demonstrates that SAM-style training on diverse, sparsely annotated US data enables effective generalization across tasks. By unlocking the value of fragmented public datasets, our approach lays the foundation for scalable, real-world US representation learning. We release our code and pretrained models at https://github.com/CAMMA-public/UltraSam and invite the community to further this effort by continuing to contribute high-quality datasets.
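
A rough sketch of the prompt-conditioned training idea described above (our own assumptions, not UltraSam's released code): each dataset keeps its own masks, and a point prompt sampled from the ground-truth mask tells a SAM-style model which structure to segment, so no unified label space is required.

```python
# Minimal sketch of prompt-conditioned segmentation training on heterogeneous
# datasets. `model` and `loss_fn` are hypothetical stand-ins for a SAM-style
# network and a segmentation loss (e.g. Dice + cross-entropy).
import numpy as np

def sample_point_prompt(mask, rng=None):
    """Pick a random foreground pixel (y, x) from a binary mask as the prompt."""
    if rng is None:
        rng = np.random.default_rng()
    ys, xs = np.nonzero(mask)
    i = rng.integers(len(ys))
    return int(ys[i]), int(xs[i])

def training_step(model, image, mask, loss_fn):
    # The prompt specifies the target structure, so datasets annotated in
    # isolation can be mixed without reconciling their label definitions.
    prompt = sample_point_prompt(mask)
    pred = model(image, prompt)          # predicted mask logits
    return loss_fn(pred, mask)
```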

Ultrasound Assessment of Muscle Atrophy During Short- and Medium-Term Head-Down Bed Rest.

Yang X, Yu L, Tian Y, Yin G, Lv Q, Guo J

PubMed · Sep 11 2025
This study aims to investigate the feasibility of ultrasound technology for assessing muscle atrophy progression in a head-down bed rest model, providing a reference for monitoring muscle functional status in a microgravity environment. A 40-day head-down bed rest model using rhesus monkeys was established to simulate the microgravity environment in space. A dual-encoder parallel deep learning model was developed to extract features from B-mode ultrasound images and radiofrequency signals separately. Additionally, an up-sampling module incorporating the Coordinate Attention mechanism and the Pixel-attention-guided fusion module was designed to enhance direction and position awareness and to improve the recognition of target boundaries and detailed features. The evaluation performance of single ultrasound signals and fused signals was compared. The assessment accuracy reached approximately 87% through inter-individual cross-validation in 6 rhesus monkeys. Fusing the ultrasound signals significantly enhanced classification performance compared with using single modalities, such as B-mode images or radiofrequency signals alone. This study demonstrates that ultrasound technology combined with deep learning algorithms can effectively assess disuse muscle atrophy. The proposed approach offers a promising reference for diagnosing muscle atrophy under long-term immobilization, with significant application value and potential for widespread adoption.
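
As an illustration of the dual-encoder design described above (a simplified sketch under our own assumptions, not the authors' architecture), one branch encodes B-mode frames with 2D convolutions, another encodes radiofrequency lines with 1D convolutions, and the two feature vectors are concatenated for classification.

```python
# Minimal PyTorch sketch of a dual-encoder (B-mode image + RF signal) classifier.
import torch
import torch.nn as nn

class DualEncoderClassifier(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        # Image branch: 2D convolutions over B-mode frames (1 x H x W).
        self.img_enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Signal branch: 1D convolutions over raw radiofrequency lines (1 x T).
        self.rf_enc = nn.Sequential(
            nn.Conv1d(1, 16, 9, stride=4, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, 9, stride=4, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32 + 32, n_classes)

    def forward(self, bmode, rf):
        fused = torch.cat([self.img_enc(bmode), self.rf_enc(rf)], dim=1)
        return self.head(fused)

# Usage with random tensors standing in for a B-mode frame and an RF line.
logits = DualEncoderClassifier()(torch.randn(2, 1, 224, 224), torch.randn(2, 1, 4096))
```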

DualTrack: Sensorless 3D Ultrasound needs Local and Global Context

Paul F. R. Wilson, Matteo Ronchetti, Rüdiger Göbl, Viktoria Markova, Sebastian Rosenzweig, Raphael Prevost, Parvin Mousavi, Oliver Zettinig

arXiv preprint · Sep 11 2025
Three-dimensional ultrasound (US) offers many clinical advantages over conventional 2D imaging, yet its widespread adoption is limited by the cost and complexity of traditional 3D systems. Sensorless 3D US, which uses deep learning to estimate a 3D probe trajectory from a sequence of 2D US images, is a promising alternative. Local features, such as speckle patterns, can help predict frame-to-frame motion, while global features, such as coarse shapes and anatomical structures, can situate the scan relative to anatomy and help predict its general shape. In prior approaches, global features are either ignored or tightly coupled with local feature extraction, restricting the ability to robustly model these two complementary aspects. We propose DualTrack, a novel dual-encoder architecture that leverages decoupled local and global encoders specialized for their respective scales of feature extraction. The local encoder uses dense spatiotemporal convolutions to capture fine-grained features, while the global encoder utilizes an image backbone (e.g., a 2D CNN or foundation model) and temporal attention layers to embed high-level anatomical features and long-range dependencies. A lightweight fusion module then combines these features to estimate the trajectory. Experimental results on a large public benchmark show that DualTrack achieves state-of-the-art accuracy and globally consistent 3D reconstructions, outperforming previous methods and yielding an average reconstruction error below 5 mm.
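
A simplified PyTorch sketch of the decoupled local/global design (our own assumptions, not the DualTrack release): a spatiotemporal 3D-convolutional local branch, a per-frame 2D backbone with temporal self-attention as the global branch, and a small fusion head regressing per-frame motion.

```python
# Minimal sketch of a dual-encoder trajectory estimator for a 2D ultrasound sweep.
import torch
import torch.nn as nn

class DualTrackSketch(nn.Module):
    def __init__(self, d=64):
        super().__init__()
        # Local branch: spatiotemporal 3D convolutions capture fine-grained speckle motion.
        self.local = nn.Sequential(
            nn.Conv3d(1, 16, (3, 5, 5), padding=(1, 2, 2)), nn.ReLU(),
            nn.Conv3d(16, d, (3, 5, 5), padding=(1, 2, 2)), nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),          # keep the time axis, pool space
        )
        # Global branch: per-frame 2D backbone followed by temporal self-attention.
        self.frame_enc = nn.Sequential(
            nn.Conv2d(1, d, 7, stride=4, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.temporal = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True), num_layers=2
        )
        self.fuse = nn.Linear(2 * d, 6)                  # per-frame translation + rotation increments

    def forward(self, frames):                           # frames: (B, T, H, W)
        b, t, h, w = frames.shape
        loc = self.local(frames.unsqueeze(1))            # (B, d, T, 1, 1)
        loc = loc.squeeze(-1).squeeze(-1).transpose(1, 2)            # (B, T, d)
        glob = self.frame_enc(frames.reshape(b * t, 1, h, w)).reshape(b, t, -1)
        glob = self.temporal(glob)                       # (B, T, d)
        return self.fuse(torch.cat([loc, glob], dim=-1)) # (B, T, 6)

poses = DualTrackSketch()(torch.randn(1, 8, 128, 128))
```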

A Lightweight CNN Approach for Hand Gesture Recognition via GAF Encoding of A-mode Ultrasound Signals.

Shangguan Q, Lian Y, Liao Z, Chen J, Song Y, Yao L, Jiang C, Lu Z, Lin Z

PubMed · Sep 10 2025
Hand gesture recognition (HGR) is a key technology in human-computer interaction and human communication. This paper presents a lightweight, parameter-free attention convolutional neural network (LPA-CNN) approach leveraging Gramian Angular Field (GAF) transformation of A-mode ultrasound signals for HGR. First, this paper maps 1-dimensional (1D) A-mode ultrasound signals, collected from the forearm muscles of 10 healthy participants, into 2-dimensional (2D) images. Second, GAF is selected owing to its higher sensitivity for HGR compared with the Markov Transition Field (MTF) and Recurrence Plot (RP) encodings. Third, a novel LPA-CNN consisting of four components, i.e., a convolution-pooling block, an attention mechanism, an inverted residual block, and a classification block, is proposed. Among them, the convolution-pooling block consists of convolutional and pooling layers, the attention mechanism is applied to generate 3D weights, the inverted residual block consists of multiple channel-shuffling units, and the classification block is implemented with fully connected layers. Fourth, comparative experiments were conducted on GoogLeNet, MobileNet, and LPA-CNN to validate the effectiveness of the proposed method. Experimental results show that, compared to GoogLeNet and MobileNet, LPA-CNN has a smaller model size and better recognition performance, achieving a classification accuracy of 0.98 ± 0.02. This paper achieves efficient and high-accuracy HGR by encoding A-mode ultrasound signals into 2D images and integrating the LPA-CNN model, providing a new technological approach for HGR based on ultrasonic signals.
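
For reference, the standard Gramian Angular Summation Field encoding that maps a 1D signal to a 2D image can be written in a few lines; this is a generic sketch of the transform, not the paper's code.

```python
# Minimal sketch (assuming the standard Gramian Angular Summation Field):
# encode a 1D A-mode ultrasound signal as a 2D image.
import numpy as np

def gramian_angular_field(x):
    """Rescale to [-1, 1], map to polar angles, and sum angles pairwise."""
    x = np.asarray(x, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1.0   # rescale to [-1, 1]
    x = np.clip(x, -1.0, 1.0)
    phi = np.arccos(x)                                   # polar-coordinate angle
    return np.cos(phi[:, None] + phi[None, :])           # G[i, j] = cos(phi_i + phi_j)

image = gramian_angular_field(np.random.randn(256))      # 256 x 256 GAF image
```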

LD-ViCE: Latent Diffusion Model for Video Counterfactual Explanations

Payal Varshney, Adriano Lucieri, Christoph Balada, Sheraz Ahmed, Andreas Dengel

arXiv preprint · Sep 10 2025
Video-based AI systems are increasingly adopted in safety-critical domains such as autonomous driving and healthcare. However, interpreting their decisions remains challenging due to the inherent spatiotemporal complexity of video data and the opacity of deep learning models. Existing explanation techniques often suffer from limited temporal coherence, insufficient robustness, and a lack of actionable causal insights. Current counterfactual explanation methods typically do not incorporate guidance from the target model, reducing semantic fidelity and practical utility. We introduce Latent Diffusion for Video Counterfactual Explanations (LD-ViCE), a novel framework designed to explain the behavior of video-based AI models. Compared to previous approaches, LD-ViCE reduces the computational costs of generating explanations by operating in latent space using a state-of-the-art diffusion model, while producing realistic and interpretable counterfactuals through an additional refinement step. Our experiments demonstrate the effectiveness of LD-ViCE across three diverse video datasets, including EchoNet-Dynamic (cardiac ultrasound), FERV39k (facial expression), and Something-Something V2 (action recognition). LD-ViCE outperforms a recent state-of-the-art method, achieving an increase in R2 score of up to 68% while reducing inference time by half. Qualitative analysis confirms that LD-ViCE generates semantically meaningful and temporally coherent explanations, offering valuable insights into the target model behavior. LD-ViCE represents a valuable step toward the trustworthy deployment of AI in safety-critical domains.
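
To make the counterfactual idea concrete, here is a deliberately simplified latent-space search guided by the target model, using plain gradient descent rather than the paper's diffusion-based generator and refinement step; `decoder` and `target_model` are hypothetical stand-ins.

```python
# Minimal sketch (conceptual only, not LD-ViCE): find a latent code whose decoded
# video flips the target model's prediction while staying close to the original.
import torch

def latent_counterfactual(z0, decoder, target_model, target_class,
                          steps=100, lr=0.05, dist_weight=0.1):
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        logits = target_model(decoder(z))
        # Push toward the counterfactual class, stay near the original latent.
        loss = -logits[:, target_class].mean() + dist_weight * (z - z0).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()
```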

Role of artificial intelligence in congenital heart disease.

Niyogi SG, Nag DS, Shah MM, Swain A, Naskar C, Srivastava P, Kant R

PubMed · Sep 9 2025
This mini-review explores the transformative potential of artificial intelligence (AI) in improving the diagnosis, management, and long-term care of congenital heart diseases (CHDs). AI offers significant advancements across the spectrum of CHD care, from prenatal screening to postnatal management and long-term monitoring. The use of AI algorithms, enhanced fetal echocardiography, and genetic testing improves prenatal diagnosis and risk stratification. Postnatally, AI revolutionizes diagnostic imaging analysis, providing more accurate and efficient identification of CHD subtypes and severity. Compared with traditional methods, advanced signal processing techniques enable a more precise assessment of hemodynamic parameters. AI-driven decision support systems tailor treatment strategies, thereby optimizing therapeutic interventions and predicting patient outcomes with greater accuracy. This personalized approach leads to better clinical outcomes and reduced morbidity. Furthermore, AI-enabled remote monitoring and wearable devices facilitate ongoing surveillance, enabling early detection of complications and prompt intervention. This continuous monitoring is crucial in the immediate postoperative period and throughout the patient's life. Despite the immense potential of AI, challenges remain. These include the need for standardized datasets, the development of transparent and understandable AI algorithms, ethical considerations, and seamless integration into existing clinical workflows. Overcoming these obstacles through collaborative data sharing and responsible implementation will unlock the full potential of AI to improve the lives of patients with CHD, ultimately leading to better patient outcomes and improved quality of life.