
Fusing Radiomic Features with Deep Representations for Gestational Age Estimation in Fetal Ultrasound Images

Fangyijie Wang, Yuan Liang, Sourav Bhattacharjee, Abey Campbell, Kathleen M. Curran, Guénolé Silvestre

arXiv preprint · Jun 25 2025
Accurate gestational age (GA) estimation, ideally through fetal ultrasound measurement, is a crucial aspect of providing excellent antenatal care. However, deriving GA from manual fetal biometric measurements is operator-dependent and time-consuming, so automatic computer-assisted methods are needed in clinical practice. In this paper, we present a novel feature fusion framework to estimate GA using fetal ultrasound images without any measurement information. We adopt a deep learning model to extract deep representations from ultrasound images, and we extract radiomic features to reveal patterns and characteristics of fetal brain growth. To harness the interpretability of radiomics in medical imaging analysis, we estimate GA by fusing radiomic features and deep representations. Our framework estimates GA with a mean absolute error of 8.0 days across three trimesters, outperforming current machine learning-based methods at these gestational ages. Experimental results demonstrate the robustness of our framework across different populations in diverse geographical regions. Our code is publicly available at https://github.com/13204942/RadiomicsImageFusion_FetalUS.
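
A minimal sketch of the feature-level fusion idea, assuming the radiomic features (e.g., from PyRadiomics) are already extracted into a fixed-length vector; the backbone choice and layer sizes below are illustrative, not the paper's exact architecture:

```python
import torch
import torch.nn as nn
from torchvision import models

class FusionGAEstimator(nn.Module):
    """Fuse a deep image embedding with a radiomic feature vector
    to regress gestational age (illustrative sizes, not the paper's)."""
    def __init__(self, n_radiomic: int = 100):
        super().__init__()
        backbone = models.resnet18(weights=None)   # deep representation extractor
        backbone.fc = nn.Identity()                # expose the 512-d embedding
        self.backbone = backbone
        self.head = nn.Sequential(                 # regression head on fused features
            nn.Linear(512 + n_radiomic, 128),
            nn.ReLU(),
            nn.Linear(128, 1),                     # gestational age in days
        )

    def forward(self, image, radiomic):
        deep = self.backbone(image)                # (B, 512)
        fused = torch.cat([deep, radiomic], dim=1) # feature-level fusion
        return self.head(fused)

model = FusionGAEstimator(n_radiomic=100)
ga = model(torch.randn(2, 3, 224, 224), torch.randn(2, 100))  # (2, 1)
```

Concatenating the two vectors before the head keeps the radiomic branch inspectable, which is what gives the fusion its interpretability.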

Ultrasound Displacement Tracking Techniques for Post-Stroke Myofascial Shear Strain Quantification.

Ashikuzzaman M, Huang J, Bonwit S, Etemadimanesh A, Ghasemi A, Debs P, Nickl R, Enslein J, Fayad LM, Raghavan P, Bell MAL

PubMed · Jun 24 2025
Ultrasound shear strain is a potential biomarker of myofascial dysfunction. However, the quality of estimated shear strains can be impacted by differences in ultrasound displacement tracking techniques, potentially altering clinical conclusions surrounding myofascial pain. This work assesses the reliability of four displacement estimation algorithms under a novel clinical hypothesis that the shear strain between muscles on a stroke-affected (paretic) shoulder with myofascial pain is lower than that on the non-paretic side of the same patient. After initial validation with simulations, four approaches were evaluated with in vivo data acquired from ten research participants with myofascial post-stroke shoulder pain: (1) Search is a common window-based method that determines displacements by searching for maximum normalized cross-correlations within windowed data, whereas (2) OVERWIND-Search, (3) SOUL-Search, and (4) L1-SOUL-Search fine-tune the Search initial estimates by optimizing cost functions comprising data and regularization terms, utilizing L1-norm-based first-order regularization, L2-norm-based first- and second-order regularization, and L1-norm-based first- and second-order regularization, respectively. SOUL-Search and L1-SOUL-Search most accurately and reliably estimate shear strain relative to our clinical hypothesis, when validated with visual inspection of ultrasound cine loops and quantitative T1ρ magnetic resonance imaging. In addition, L1-SOUL-Search produced the most reliable displacement tracking performance by generating lateral displacement images with smooth displacement gradients (measured as the mean and variance of displacement derivatives) and sharp edges (which enables distinction of shoulder muscle layers). Among the four investigated methods, L1-SOUL-Search emerged as the most suitable option to investigate myofascial pain and dysfunction, despite the drawback of slow runtimes, which can potentially be resolved with a deep learning solution. This work advances musculoskeletal health, ultrasound shear strain imaging, and related applications by establishing the foundation required to develop reliable image-based biomarkers for accurate diagnoses and treatments.
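
For intuition, the Search baseline's window-based normalized cross-correlation can be sketched in one dimension as follows (toy window and search sizes; the published methods operate on 2-D RF data and add regularization on top of these initial estimates):

```python
import numpy as np

def ncc_search(pre, post, win=32, search=8):
    """Window-based displacement estimation: for each window of the
    pre-deformation RF line, find the integer shift in the
    post-deformation line that maximizes normalized cross-correlation."""
    displacements = []
    for start in range(0, len(pre) - win - search, win):
        ref = pre[start:start + win]
        ref = (ref - ref.mean()) / (ref.std() + 1e-12)
        best_shift, best_ncc = 0, -np.inf
        for shift in range(-search, search + 1):
            s = start + shift
            if s < 0 or s + win > len(post):
                continue
            cand = post[s:s + win]
            cand = (cand - cand.mean()) / (cand.std() + 1e-12)
            ncc = float(np.mean(ref * cand))
            if ncc > best_ncc:
                best_ncc, best_shift = ncc, shift
        displacements.append(best_shift)
    return np.array(displacements)

rf_pre = np.random.randn(1024)
rf_post = np.roll(rf_pre, 3)          # simulate a 3-sample displacement
print(ncc_search(rf_pre, rf_post))    # ~3 everywhere: the shift is recovered
```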

Bedside Ultrasound Vector Doppler Imaging System with GPU Processing and Deep Learning.

Nahas H, Yiu BYS, Chee AJY, Ishii T, Yu ACH

PubMed · Jun 24 2025
Recent innovations in vector flow imaging promise to bring the modality closer to clinical application and allow for more comprehensive high-frame-rate vascular assessments. One such innovation is plane-wave multi-angle vector Doppler, where pulsed Doppler principles from multiple steering angles are used to realize vector flow imaging at frame rates upward of 1,000 frames per second (fps). Currently, vector Doppler is limited by the presence of aliasing artifacts that have prevented its reliable realization at the bedside. In this work, we present a new aliasing-resistant vector Doppler imaging system that can be deployed at the bedside using a programmable ultrasound core, graphics processing unit (GPU) processing, and deep learning principles. The framework supports two operational modes: 1) live imaging at 17 fps where vector flow imaging serves to guide image view navigation in blood vessels with complex dynamics; 2) on-demand replay mode where flow data acquired at high frame rates of over 1,000 fps is depicted as a slow-motion playback at 60 fps using an aliasing-resistant vector projectile visualization. Using our new system, aliasing-free vector flow cineloops were successfully obtained in a stenosis phantom experiment and in human bifurcation imaging scans. This system represents a major engineering advance towards the clinical adoption of vector flow imaging.
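
Per pixel, the multi-angle principle reduces to a small least-squares problem: each steering angle observes the velocity projected onto its effective beam direction. A hedged sketch with illustrative 2-D geometry (not the authors' exact Doppler model):

```python
import numpy as np

def vector_doppler(angles_deg, v_measured):
    """Recover a 2-D velocity vector from per-angle Doppler projections.
    Each steered acquisition i observes v_i = vx*sin(a_i) + vz*cos(a_i);
    stacking angles gives an overdetermined system solved in the
    least-squares sense (illustrative geometry)."""
    a = np.deg2rad(np.asarray(angles_deg))
    A = np.column_stack([np.sin(a), np.cos(a)])   # projection directions
    v, *_ = np.linalg.lstsq(A, np.asarray(v_measured), rcond=None)
    return v                                      # (vx, vz)

true_v = np.array([0.3, -0.1])                    # m/s: lateral, axial
angles = [-10, -5, 0, 5, 10]
proj = [true_v @ [np.sin(np.deg2rad(t)), np.cos(np.deg2rad(t))] for t in angles]
print(vector_doppler(angles, proj))               # ~[0.3, -0.1]
```

The redundancy across angles is also what gives aliasing-resistant schemes room to work: an aliased projection is inconsistent with the others and can be detected or corrected.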

Brain ultrasonography in neurosurgical patients.

Mahajan C, Kapoor I, Prabhakar H

PubMed · Jun 24 2025
Brain ultrasound is a popular point-of-care test that helps visualize brain structures. This review highlights recent developments in brain ultrasonography. There is a need to keep pace with ongoing technological advancements and to establish standardized quality criteria that improve its utility in clinical practice. Newer automated indices derived from transcranial Doppler help establish its role as a noninvasive monitor of intracranial pressure and in diagnosing vasospasm/delayed cerebral ischemia. A novel robotic transcranial Doppler system equipped with artificial intelligence allows real-time continuous neuromonitoring. Intraoperative ultrasound assists neurosurgeons in real-time localization of brain lesions and in assessing the extent of resection, thereby enhancing surgical precision and safety. Optic nerve sheath diameter point-of-care ultrasonography is an effective means of diagnosing raised intracranial pressure and aids triage and prognostication; a quality criteria checklist can help standardize this technique. Newer advancements such as focused ultrasound, contrast-enhanced ultrasound, and functional ultrasound are also discussed. Brain ultrasound continues to be a critical bedside tool in neurologically injured patients. With technological advancements, its utility has widened and its capabilities have expanded, making it more accurate and versatile in clinical practice.

Semantic Scene Graph for Ultrasound Image Explanation and Scanning Guidance

Xuesong Li, Dianye Huang, Yameng Zhang, Nassir Navab, Zhongliang Jiang

arXiv preprint · Jun 24 2025
Understanding medical ultrasound imaging remains a long-standing challenge due to significant visual variability caused by differences in imaging and acquisition parameters. Recent advancements in large language models (LLMs) have been used to automatically generate terminology-rich summaries oriented to clinicians with sufficient physiological knowledge. Nevertheless, the increasing demand for improved ultrasound interpretability and basic scanning guidance among non-expert users, e.g., in point-of-care settings, has not yet been explored. In this study, we first introduce the scene graph (SG) for ultrasound images to explain image content to ordinary users and to provide guidance for ultrasound scanning. The ultrasound SG is first computed using a transformer-based one-stage method, eliminating the need for explicit object detection. To generate a graspable image explanation for ordinary users, the user query is then used to further refine the abstract SG representation through LLMs. Additionally, the predicted SG is explored for its potential in guiding ultrasound scanning toward missing anatomies within the current imaging view, assisting ordinary users in achieving more standardized and complete anatomical exploration. The effectiveness of this SG-based image explanation and scanning guidance has been validated on images from the left and right neck regions, including the carotid and thyroid, across five volunteers. The results demonstrate the potential of the method to maximally democratize ultrasound by enhancing its interpretability and usability for ordinary users.
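
As a rough illustration, a scene graph for an ultrasound view can be held in a simple node/edge structure, and "missing anatomies" then fall out as a set difference against the anatomies expected in a standard view (hypothetical structure and labels, not the paper's implementation):

```python
from dataclasses import dataclass, field

@dataclass
class UltrasoundSG:
    """Hypothetical scene-graph container: nodes are detected anatomies,
    edges are (subject, relation, object) triplets."""
    nodes: set = field(default_factory=set)
    edges: list = field(default_factory=list)

    def missing(self, expected: set) -> set:
        """Anatomies expected in a standard view but absent from the
        graph, usable as scanning guidance toward missing structures."""
        return expected - self.nodes

sg = UltrasoundSG(nodes={"carotid artery", "thyroid lobe"},
                  edges=[("thyroid lobe", "lateral to", "carotid artery")])
print(sg.missing({"carotid artery", "thyroid lobe", "jugular vein"}))
# {'jugular vein'} -> guide the probe toward the jugular vein
```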

General Methods Make Great Domain-specific Foundation Models: A Case-study on Fetal Ultrasound

Jakob Ambsdorf, Asbjørn Munk, Sebastian Llambias, Anders Nymark Christensen, Kamil Mikolaj, Randall Balestriero, Martin Tolsgaard, Aasa Feragen, Mads Nielsen

arXiv preprint · Jun 24 2025
With access to large-scale, unlabeled medical datasets, researchers are confronted with two questions: Should they attempt to pretrain a custom foundation model on this medical data, or use transfer learning from an existing generalist model? And, if a custom model is pretrained, are novel methods required? In this paper we explore these questions by conducting a case study, in which we train a foundation model on a large regional fetal ultrasound dataset of 2M images. By selecting the well-established DINOv2 method for pretraining, we achieve state-of-the-art results on three fetal ultrasound datasets, covering data from different countries and spanning classification, segmentation, and few-shot tasks. We compare against a series of models pretrained on natural images, ultrasound images, and supervised baselines. Our results demonstrate two key insights: (i) Pretraining on custom data is worth it, even if smaller models are trained on less data, as scaling in natural image pretraining does not translate to ultrasound performance. (ii) Well-tuned methods from computer vision make it feasible to train custom foundation models for a given medical domain, requiring no hyperparameter tuning and little methodological adaptation. Given these findings, we argue that a bias towards methodological innovation should be avoided when developing domain-specific foundation models under common computational resource constraints.
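
For context, the natural-image transfer-learning baseline the paper compares against takes only a few lines with torch.hub; the paper's contribution is instead to run the same DINOv2 recipe from scratch on the 2M-image fetal ultrasound corpus. A sketch of that baseline (head size and class count are illustrative):

```python
import torch
import torch.nn as nn

# Natural-image DINOv2 backbone via torch.hub (transfer-learning baseline);
# the paper's custom model is pretrained on fetal ultrasound data instead.
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
backbone.eval()

# Linear probe on the frozen CLS embedding (ViT-S/14 -> 384-d);
# 5 output classes is a hypothetical number of fetal standard planes.
head = nn.Linear(384, 5)

with torch.no_grad():
    feats = backbone(torch.randn(1, 3, 224, 224))  # (1, 384) CLS features
logits = head(feats)
```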

Multimodal Deep Learning Based on Ultrasound Images and Clinical Data for Better Ovarian Cancer Diagnosis.

Su C, Miao K, Zhang L, Yu X, Guo Z, Li D, Xu M, Zhang Q, Dong X

PubMed · Jun 24 2025
This study aimed to develop and validate a multimodal deep learning model that leverages 2D grayscale ultrasound (US) images alongside readily available clinical data to improve diagnostic performance for ovarian cancer (OC). A retrospective analysis was conducted involving 1899 patients who underwent preoperative US examinations and subsequent surgeries for adnexal masses between 2019 and 2024. A multimodal deep learning model was constructed for OC diagnosis and for extracting US morphological features from the images. The model's performance was evaluated using metrics such as receiver operating characteristic (ROC) curves, accuracy, and F1 score. The multimodal deep learning model exhibited superior performance compared to the image-only model, achieving areas under the curve (AUCs) of 0.9393 (95% CI 0.9139-0.9648) and 0.9317 (95% CI 0.9062-0.9573) in the internal and external test sets, respectively. The model significantly improved the AUCs for OC diagnosis by radiologists and enhanced inter-reader agreement. Regarding US morphological feature extraction, the model demonstrated robust performance, attaining accuracies of 86.34% and 85.62% in the internal and external test sets, respectively. Multimodal deep learning has the potential to enhance the diagnostic accuracy and consistency of radiologists in identifying OC. The model's effective feature extraction from ultrasound images underscores the capability of multimodal deep learning to automate the generation of structured ultrasound reports.
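
A minimal sketch of such an image-plus-clinical late-fusion classifier (clinical feature count, backbone, and layer sizes are illustrative, not the study's architecture):

```python
import torch
import torch.nn as nn
from torchvision import models

class MultimodalOCClassifier(nn.Module):
    """Late-fusion classifier: grayscale US image branch plus clinical-data
    branch, concatenated before a shared head (illustrative sizes)."""
    def __init__(self, n_clinical: int = 12):
        super().__init__()
        cnn = models.resnet18(weights=None)
        cnn.conv1 = nn.Conv2d(1, 64, 7, 2, 3, bias=False)  # 1-channel grayscale input
        cnn.fc = nn.Identity()                              # expose the 512-d embedding
        self.image_branch = cnn
        self.clinical_branch = nn.Sequential(nn.Linear(n_clinical, 64), nn.ReLU())
        self.head = nn.Linear(512 + 64, 1)                  # benign-vs-malignant logit

    def forward(self, image, clinical):
        fused = torch.cat([self.image_branch(image),
                           self.clinical_branch(clinical)], dim=1)
        return self.head(fused)

model = MultimodalOCClassifier()
logit = model(torch.randn(2, 1, 224, 224), torch.randn(2, 12))  # (2, 1)
```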

Development and validation of a SOTA-based system for biliopancreatic segmentation and station recognition in EUS.

Zhang J, Zhang J, Chen H, Tian F, Zhang Y, Zhou Y, Jiang Z

PubMed · Jun 23 2025
Endoscopic ultrasound (EUS) is a vital tool for diagnosing biliopancreatic disease, offering detailed imaging to identify key abnormalities. Its interpretation demands expertise, which limits its accessibility for less trained practitioners. Thus, the creation of tools or systems to assist in interpreting EUS images is crucial for improving diagnostic accuracy and efficiency. We aimed to develop an AI-assisted EUS system for accurate pancreatic and biliopancreatic duct segmentation, and to evaluate its impact on endoscopists' ability to identify biliary-pancreatic diseases during segmentation and anatomical localization. The EUS-AI system was designed to perform station positioning and anatomical structure segmentation. A total of 45,737 EUS images from 1852 patients were used for model training. Among them, 2881 images were for internal testing, and 2747 images from 208 patients were for external validation. Additionally, 340 images formed a man-machine competition test set. During the research process, various newer state-of-the-art (SOTA) deep learning algorithms were also compared. In the station recognition (classification) task, compared to the ResNet-50 and YOLOv8-CLS algorithms, the Mean Teacher algorithm achieved the highest accuracy, with an average of 95.60% (92.07%-99.12%) in the internal test set and 92.72% (88.30%-97.15%) in the external test set. For segmentation, compared to the UNet++ and YOLOv8 algorithms, the U-Net v2 algorithm was optimal. Ultimately, the EUS-AI system was constructed using the optimal models from the two tasks, and a man-machine competition experiment was conducted. The results demonstrated that the EUS-AI system significantly outperformed mid-level endoscopists in both the position recognition (p < 0.001) and pancreas and biliopancreatic duct segmentation tasks (p < 0.001, p = 0.004). The EUS-AI system is expected to significantly shorten the learning curve for pancreatic EUS examination and enhance procedural standardization.
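
Mean Teacher, the best-performing station-recognition algorithm here, is a semi-supervised scheme in which a teacher network tracks an exponential moving average (EMA) of the student and supplies consistency targets on unlabeled images. A minimal sketch (EMA decay and layer sizes are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, alpha: float = 0.999):
    """Teacher weights track an exponential moving average of the student
    (parameters only; buffers are omitted for brevity in this sketch)."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(alpha).add_(s, alpha=1 - alpha)

def consistency_loss(student_logits, teacher_logits):
    """On unlabeled images, pull student predictions toward the
    (stop-gradient) teacher predictions under different augmentations."""
    return F.mse_loss(F.softmax(student_logits, dim=1),
                      F.softmax(teacher_logits.detach(), dim=1))

# Toy usage: teacher starts as a copy of the student, then lags via EMA.
student = nn.Linear(16, 7)   # 7 hypothetical EUS stations
teacher = nn.Linear(16, 7)
teacher.load_state_dict(student.state_dict())
x = torch.randn(4, 16)
loss = consistency_loss(student(x), teacher(x))
ema_update(teacher, student)
```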

Self-Supervised Optimization of RF Data Coherence for Improving Breast Reflection UCT Reconstruction.

He L, Liu Z, Cai Y, Zhang Q, Zhou L, Yuan J, Xu Y, Ding M, Yuchi M, Qiu W

PubMed · Jun 23 2025
Reflection Ultrasound Computed Tomography (UCT) is gaining prominence as an essential instrument for breast cancer screening. However, reflection UCT quality is often compromised by the variability in sound speed across breast tissue. Traditionally, reflection UCT utilizes the Delay and Sum (DAS) algorithm, in which the Time of Flight is computed under an oversimplified assumption of uniform sound speed, significantly affecting the coherence of the reflected radio frequency (RF) data. This study introduces three meticulously engineered modules that leverage the spatial correlation of receiving arrays to improve the coherence of RF data and enable more effective summation: the self-supervised blind RF data segment block (BSegB) and the state-space model-based strong reflection prediction block (SSM-SRP), followed by a polarity-based adaptive replacing refinement (PARR) strategy to suppress sidelobe noise caused by aperture narrowing. To assess the effectiveness of our method, we utilized standard image quality metrics, including Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Root Mean Squared Error (RMSE). Additionally, the coherence factor (CF) and variance (Var) were employed to verify the method's ability to enhance signal coherence at the RF data level. The findings reveal that our approach greatly improves performance, achieving an average PSNR of 19.64 dB, an average SSIM of 0.71, and an average RMSE of 0.10, notably under conditions of sparse transmission. The conducted experimental analyses affirm the superior performance of our framework compared to alternative enhancement strategies, including adaptive beamforming methods and deep learning-based beamforming approaches.
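
The coherence factor used here as an RF-level metric has a compact definition: the ratio of coherent to incoherent channel energy, CF = |Σᵢ sᵢ|² / (N Σᵢ |sᵢ|²), which equals 1 when all N receive channels are perfectly aligned and drops toward 0 as alignment degrades. A small synthetic sketch:

```python
import numpy as np

def coherence_factor(channel_data):
    """CF per sample for channel_data of shape (n_channels, n_samples):
    |sum over channels|^2 / (N * sum of |.|^2 over channels)."""
    n = channel_data.shape[0]
    coherent = np.abs(channel_data.sum(axis=0)) ** 2
    incoherent = n * (np.abs(channel_data) ** 2).sum(axis=0)
    return coherent / (incoherent + 1e-12)

aligned = np.tile(np.sin(np.linspace(0, 6 * np.pi, 256)), (32, 1))
noisy = aligned + 0.8 * np.random.randn(32, 256)
print(coherence_factor(aligned).mean(), coherence_factor(noisy).mean())
# ~1.0 for identical channels; noticeably lower once alignment degrades
```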