
Deformable detection transformers for domain adaptable ultrasound localization microscopy with robustness to point spread function variations.

Gharamaleki SK, Helfield B, Rivaz H

PubMed · Jul 10 2025
Super-resolution imaging has emerged as a rapidly advancing field in diagnostic ultrasound. Ultrasound Localization Microscopy (ULM) achieves sub-wavelength precision in microvasculature imaging by tracking gas microbubbles (MBs) flowing through blood vessels. However, MB localization faces challenges due to dynamic point spread functions (PSFs) caused by harmonic and sub-harmonic emissions, as well as depth-dependent PSF variations in ultrasound imaging. Additionally, deep learning models often struggle to generalize from simulated to in vivo data due to significant disparities between the two domains. To address these issues, we propose a novel approach using the DEformable DEtection TRansformer (DE-DETR). This object detection network tackles object deformations by utilizing multi-scale feature maps and incorporating a deformable attention module. We further refine the super-resolution map by employing a KDTree algorithm for efficient MB tracking across consecutive frames. We evaluated our method using both simulated and in vivo data, demonstrating improved precision and recall compared to current state-of-the-art methodologies. These results highlight the potential of our approach to enhance ULM performance in clinical applications.
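The abstract mentions a KDTree-based step for tracking microbubbles across consecutive frames but gives no details. The sketch below is a minimal nearest-neighbor frame-to-frame linking example using SciPy's cKDTree; the coordinate format, distance threshold, and greedy one-to-one matching are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of KDTree-based microbubble linking between consecutive frames.
# Coordinates, threshold, and greedy matching are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

def link_frames(prev_xy, curr_xy, max_dist=0.5):
    """Match each localization in curr_xy to its nearest neighbor in prev_xy.

    prev_xy, curr_xy: (N, 2) arrays of microbubble coordinates.
    Returns a list of (prev_index, curr_index) pairs within max_dist.
    """
    tree = cKDTree(prev_xy)
    dists, idx = tree.query(curr_xy, k=1)          # nearest previous-frame point
    pairs, used = [], set()
    # Greedy one-to-one assignment, closest matches first.
    for j in np.argsort(dists):
        i = idx[j]
        if dists[j] <= max_dist and i not in used:
            pairs.append((int(i), int(j)))
            used.add(i)
    return pairs

# Example: two frames of synthetic localizations
prev_xy = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]])
curr_xy = prev_xy + 0.1 * np.random.randn(*prev_xy.shape)
print(link_frames(prev_xy, curr_xy))
```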

Speckle2Self: Self-Supervised Ultrasound Speckle Reduction Without Clean Data

Xuesong Li, Nassir Navab, Zhongliang Jiang

arXiv preprint · Jul 9 2025
Image denoising is a fundamental task in computer vision, particularly in medical ultrasound (US) imaging, where speckle noise significantly degrades image quality. Although recent advancements in deep neural networks have led to substantial improvements in denoising for natural images, these methods cannot be directly applied to US speckle noise, as it is not purely random. Instead, US speckle arises from complex wave interference within the body microstructure, making it tissue-dependent. This dependency means that obtaining two independent noisy observations of the same scene, as required by the pioneering Noise2Noise approach, is not feasible. Blind-spot networks likewise cannot handle US speckle noise due to its high spatial dependency. To address this challenge, we introduce Speckle2Self, a novel self-supervised algorithm for speckle reduction using only single noisy observations. The key insight is that applying a multi-scale perturbation (MSP) operation introduces tissue-dependent variations in the speckle pattern across different scales, while preserving the shared anatomical structure. This enables effective speckle suppression by modeling the clean image as a low-rank signal and isolating the sparse noise component. To demonstrate its effectiveness, Speckle2Self is comprehensively compared with conventional filter-based denoising algorithms and state-of-the-art learning-based methods, using both realistic simulated US images and human carotid US images. Additionally, data from multiple US machines are employed to evaluate model generalization and adaptability to images from unseen domains. Code and datasets will be released upon acceptance.
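The abstract only summarizes the multi-scale perturbation (MSP) operation. Below is one plausible, hedged reading of it: repeatedly down- and up-sample a noisy frame at several scales so that fine, tissue-dependent speckle varies across views while anatomy is preserved. The scale set and the use of skimage resizing are assumptions, not the paper's implementation.

```python
# Hedged MSP sketch: down/up-sample a noisy US frame at several scales, producing
# views that share anatomy but differ in speckle. Scales are illustrative assumptions.
import numpy as np
from skimage.transform import resize

def multi_scale_perturb(img, scales=(0.5, 0.7, 0.9)):
    """Down- then up-sample img at several scales; anatomy is shared, speckle varies."""
    h, w = img.shape
    views = []
    for s in scales:
        small = resize(img, (max(1, int(h * s)), max(1, int(w * s))), anti_aliasing=True)
        views.append(resize(small, (h, w), anti_aliasing=True))
    return np.stack(views)  # (num_scales, H, W)

noisy = np.random.rand(128, 128).astype(np.float32)  # stand-in for a noisy US frame
print(multi_scale_perturb(noisy).shape)
```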

Multi-Stage Cascaded Deep Learning-Based Model for Acute Aortic Syndrome Detection: A Multisite Validation Study.

Chang J, Lee KJ, Wang TH, Chen CM

PubMed · Jul 7 2025
Background: Acute Aortic Syndrome (AAS), encompassing aortic dissection (AD), intramural hematoma (IMH), and penetrating atherosclerotic ulcer (PAU), presents diagnostic challenges due to its varied manifestations and the critical need for rapid assessment. Methods: We developed a multi-stage deep learning model trained on chest computed tomography angiography (CTA) scans. The model utilizes a U-Net architecture for aortic segmentation, followed by a cascaded classification approach for detecting AD and IMH, and a multiscale CNN for identifying PAU. External validation was conducted on 260 anonymized CTA scans from 14 U.S. clinical sites, encompassing data from four different CT manufacturers. Performance metrics, including sensitivity, specificity, and area under the receiver operating characteristic curve (AUC), were calculated with 95% confidence intervals (CIs) using Wilson's method. Model performance was compared against predefined benchmarks. Results: The model achieved a sensitivity of 0.94 (95% CI: 0.88-0.97), specificity of 0.93 (95% CI: 0.89-0.97), and an AUC of 0.96 (95% CI: 0.94-0.98) for overall AAS detection, with p-values < 0.001 when compared to the 0.80 benchmark. Subgroup analyses demonstrated consistent performance across different patient demographics, CT manufacturers, slice thicknesses, and anatomical locations. Conclusions: This deep learning model effectively detects the full spectrum of AAS across diverse populations and imaging platforms, suggesting its potential utility in clinical settings to enable faster triage and expedite patient management.
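The confidence intervals above are reported as computed with Wilson's method. As a reference, the standard Wilson score interval for a binomial proportion (e.g., sensitivity) can be computed as below; the counts used are made up purely to illustrate the calculation and are not taken from the study.

```python
# Wilson score interval for a proportion (e.g., sensitivity = TP / (TP + FN)).
# Hypothetical counts for illustration only.
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical example: 94 true positives out of 100 positive scans
print(wilson_ci(94, 100))  # -> roughly (0.875, 0.972)
```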

Artifact-robust Deep Learning-based Segmentation of 3D Phase-contrast MR Angiography: A Novel Data Augmentation Approach.

Tamada D, Oechtering TH, Heidenreich JF, Starekova J, Takai E, Reeder SB

PubMed · Jul 5 2025
This study presents a novel data augmentation approach to improve deep learning (DL)-based segmentation for 3D phase-contrast magnetic resonance angiography (PC-MRA) images affected by pulsation artifacts. Augmentation was achieved by simulating pulsation artifacts through the addition of periodic errors in k-space magnitude. The approach was evaluated on PC-MRA datasets from 16 volunteers, comparing DL segmentation with and without pulsation artifact augmentation to a level-set algorithm. Results demonstrate that DL methods significantly outperform the level-set approach and that pulsation artifact augmentation further improves segmentation accuracy, especially for images with lower velocity encoding. Quantitative analysis using Dice-Sørensen coefficient, Intersection over Union, and Average Symmetric Surface Distance metrics confirms the effectiveness of the proposed method. This technique shows promise for enhancing vascular segmentation in various anatomical regions affected by pulsation artifacts, potentially improving clinical applications of PC-MRA.
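The augmentation is described as adding periodic errors to the k-space magnitude. The sketch below shows one plausible realization with NumPy FFTs: modulate the k-space magnitude sinusoidally along the phase-encoding direction and reconstruct. The modulation depth, period, and axis are assumptions, not the paper's parameters.

```python
# Hedged sketch: pulsation-like ghosting via periodic modulation of k-space magnitude.
import numpy as np

def add_pulsation_artifact(img, depth=0.2, period=8, axis=0):
    k = np.fft.fftshift(np.fft.fft2(img))
    mag, phase = np.abs(k), np.angle(k)
    lines = np.arange(img.shape[axis])
    modulation = 1.0 + depth * np.sin(2 * np.pi * lines / period)  # periodic magnitude error
    shape = [1, 1]
    shape[axis] = -1
    mag = mag * modulation.reshape(shape)                          # apply per k-space line
    k_corrupted = mag * np.exp(1j * phase)
    return np.real(np.fft.ifft2(np.fft.ifftshift(k_corrupted)))

clean = np.zeros((64, 64)); clean[24:40, 24:40] = 1.0              # toy magnitude image
augmented = add_pulsation_artifact(clean)
print(augmented.shape)
```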

PROTEUS: A Physically Realistic Contrast-Enhanced Ultrasound Simulator-Part I: Numerical Methods.

Blanken N, Heiles B, Kuliesh A, Versluis M, Jain K, Maresca D, Lajoinie G

PubMed · Jul 1 2025
Ultrasound contrast agents (UCAs) have been used as vascular reporters for the past 40 years. The ability to enhance vascular features in ultrasound images with engineered lipid-shelled microbubbles has enabled breakthroughs such as the detection of tissue perfusion or super-resolution imaging of the microvasculature. However, advances in the field of contrast-enhanced ultrasound are hindered by experimental variables that are difficult to control in a laboratory setting, such as complex vascular geometries, the lack of ground truth, and tissue nonlinearities. In addition, the demand for large datasets to train deep learning-based computational ultrasound imaging methods calls for the development of a simulation tool that can reproduce the physics of ultrasound wave interactions with tissues and microbubbles. Here, we introduce a physically realistic contrast-enhanced ultrasound simulator (PROTEUS) consisting of four interconnected modules that account for blood flow dynamics in segmented vascular geometries, intravascular microbubble trajectories, ultrasound wave propagation, and nonlinear microbubble scattering. The first part of this study describes the numerical methods that enabled this development. We demonstrate that PROTEUS can generate contrast-enhanced radio-frequency (RF) data in various vascular architectures across the range of medical ultrasound frequencies. PROTEUS offers a customizable framework to explore novel ideas in the field of contrast-enhanced ultrasound imaging. It is released as an open-source tool for the scientific community.
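PROTEUS's nonlinear microbubble scattering module is not specified in the abstract; shelled-bubble models are typically used. As a hedged illustration of nonlinear bubble dynamics only, the sketch below integrates the classical Rayleigh-Plesset equation for a free (unshelled) bubble under a sinusoidal drive, with made-up driving parameters; it is not the simulator's implementation.

```python
# Hedged sketch: free-bubble Rayleigh-Plesset response to an ultrasound pulse.
import numpy as np
from scipy.integrate import solve_ivp

rho, mu, sigma = 1000.0, 1e-3, 0.072        # water density, viscosity, surface tension (SI)
p0, kappa, R0 = 101325.0, 1.07, 1e-6        # ambient pressure, polytropic exponent, rest radius
f0, pa = 3e6, 50e3                          # 3 MHz drive, 50 kPa amplitude (assumed)

def rp_rhs(t, y):
    R, Rdot = y
    p_gas = (p0 + 2 * sigma / R0) * (R0 / R) ** (3 * kappa)
    p_drive = pa * np.sin(2 * np.pi * f0 * t)
    Rddot = ((p_gas - p0 - p_drive - 4 * mu * Rdot / R - 2 * sigma / R) / rho
             - 1.5 * Rdot ** 2) / R
    return [Rdot, Rddot]

sol = solve_ivp(rp_rhs, (0, 2e-6), [R0, 0.0], max_step=1e-9)   # two microseconds of drive
print(sol.y[0].max() / R0)                                      # peak relative expansion
```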

Hierarchical Corpus-View-Category Refinement for Carotid Plaque Risk Grading in Ultrasound

Zhiyuan Zhu, Jian Wang, Yong Jiang, Tong Han, Yuhao Huang, Ang Zhang, Kaiwen Yang, Mingyuan Luo, Zhe Liu, Yaofei Duan, Dong Ni, Tianhong Tang, Xin Yang

arXiv preprint · Jun 29 2025
Accurate carotid plaque grading (CPG) is vital to assess the risk of cardiovascular and cerebrovascular diseases. Due to the small size and high intra-class variability of plaque, CPG is commonly evaluated using a combination of transverse and longitudinal ultrasound views in clinical practice. However, most existing deep learning-based multi-view classification methods focus on feature fusion across different views, neglecting the importance of representation learning and the difference in class features. To address these issues, we propose a novel Corpus-View-Category Refinement Framework (CVC-RF) that processes information at the Corpus, View, and Category levels, enhancing model performance. Our contribution is four-fold. First, to the best of our knowledge, this is the first deep learning-based method for CPG that follows the latest Carotid Plaque-RADS guidelines. Second, we propose a novel center-memory contrastive loss, which enhances the network's global modeling capability by comparing samples with representative cluster centers and diverse negative samples at the Corpus level. Third, we design a cascaded down-sampling attention module to fuse multi-scale information and achieve implicit feature interaction at the View level. Finally, a parameter-free mixture-of-experts weighting strategy is introduced to leverage class-clustering knowledge to weight different experts, enabling feature decoupling at the Category level. Experimental results indicate that CVC-RF effectively models global features via multi-level refinement, achieving state-of-the-art performance on the challenging CPG task.
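The center-memory contrastive loss is only named in the abstract. One plausible, hedged reading is an InfoNCE-style loss where each embedding is pulled toward the memory-bank center of its class and pushed away from other centers and banked negatives; the tensor shapes and temperature below are assumptions, not the authors' formulation.

```python
# Hedged sketch of a center-memory contrastive loss (one possible interpretation).
import torch
import torch.nn.functional as F

def center_memory_contrastive_loss(z, labels, centers, negatives, tau=0.1):
    """
    z:         (B, D) sample embeddings
    labels:    (B,)   class indices in [0, K)
    centers:   (K, D) memory-bank cluster centers, one per class
    negatives: (M, D) additional negative embeddings from the memory bank
    """
    z = F.normalize(z, dim=1)
    keys = F.normalize(torch.cat([centers, negatives], dim=0), dim=1)  # (K + M, D)
    logits = z @ keys.t() / tau                                        # cosine similarities
    # Positive for sample i = center of its own class; all other keys act as negatives.
    return F.cross_entropy(logits, labels)

# Toy usage with random tensors
B, D, K, M = 8, 32, 4, 16
loss = center_memory_contrastive_loss(
    torch.randn(B, D), torch.randint(0, K, (B,)),
    torch.randn(K, D), torch.randn(M, D))
print(loss.item())
```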

Interventional Radiology Reporting Standards and Checklist for Artificial Intelligence Research Evaluation (iCARE).

Anibal JT, Huth HB, Boeken T, Daye D, Gichoya J, Muñoz FG, Chapiro J, Wood BJ, Sze DY, Hausegger K

PubMed · Jun 25 2025
As artificial intelligence (AI) becomes increasingly prevalent within interventional radiology (IR) research and clinical practice, steps must be taken to ensure the robustness of novel technological systems presented in peer-reviewed journals. This report introduces comprehensive standards and an evaluation checklist (iCARE) that covers the application of modern AI methods in IR-specific contexts. The iCARE checklist encompasses the full "code-to-clinic" pipeline of AI development, including dataset curation, pre-training, task-specific training, explainability, privacy protection, bias mitigation, reproducibility, and model deployment. The iCARE checklist aims to support the development of safe, generalizable technologies for enhancing IR workflows, the delivery of care, and patient outcomes.

Aneurysm Analysis Using Deep Learning

Bagheri Rajeoni, A., Pederson, B., Lessner, S. M., Valafar, H.

medRxiv preprint · Jun 25 2025
Precise aneurysm volume measurement offers a transformative edge for risk assessment and treatment planning in clinical settings. Currently, clinical assessments rely heavily on manual review of medical imaging, a process that is time-consuming and prone to inter-observer variability. The widely accepted standard of care primarily focuses on measuring aneurysm diameter at its widest point, providing a limited perspective on aneurysm morphology and lacking efficient methods to measure aneurysm volumes. Yet volume measurement can offer deeper insight into aneurysm progression and severity. In this study, we propose an automated approach that leverages the strengths of pre-trained neural networks and expert systems to delineate aneurysm boundaries and compute volumes on an unannotated dataset from 60 patients. The dataset includes slice-level start/end annotations for the aneurysm but no pixel-wise aorta segmentations. Our method utilizes a pre-trained UNet to automatically locate the aorta, employs SAM2 to track the aorta through vascular irregularities such as aneurysms down to the iliac bifurcation, and finally uses a Long Short-Term Memory (LSTM) network or an expert system to identify the beginning and end points of the aneurysm within the aorta. Despite using no manual aorta segmentation, our approach achieves promising accuracy, predicting the aneurysm start point with an R² score of 71%, the end point with an R² score of 76%, and the volume with an R² score of 92%. This technique has the potential to facilitate large-scale aneurysm analysis and improve clinical decision-making by reducing dependence on annotated datasets.
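Once the pipeline yields per-slice aorta masks and the predicted aneurysm start/end slices, the volume reduces to counting segmented voxels and scaling by the voxel size. The sketch below is a generic illustration of that final step, not the authors' code; the pixel spacing and slice thickness are placeholder values.

```python
# Hedged sketch of the volume step: sum segmented voxels between the predicted
# start/end slices and scale by voxel volume. Spacings are illustrative values.
import numpy as np

def aneurysm_volume_ml(masks, start_idx, end_idx, pixel_spacing_mm=(0.8, 0.8),
                       slice_thickness_mm=1.0):
    """masks: (num_slices, H, W) boolean array; indices bound the aneurysm extent."""
    voxel_mm3 = pixel_spacing_mm[0] * pixel_spacing_mm[1] * slice_thickness_mm
    n_voxels = masks[start_idx:end_idx + 1].sum()
    return n_voxels * voxel_mm3 / 1000.0          # mm^3 -> mL

# Toy example: a 40-slice stack with a circular cross-section in slices 10..25
masks = np.zeros((40, 128, 128), dtype=bool)
yy, xx = np.ogrid[:128, :128]
masks[10:26] = (yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2
print(round(aneurysm_volume_ml(masks, 10, 25), 1), "mL")
```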

Weighted Mean Frequencies: a handcraft Fourier feature for 4D Flow MRI segmentation

Simon Perrin, Sébastien Levilly, Huajun Sun, Harold Mouchère, Jean-Michel Serfaty

arXiv preprint · Jun 25 2025
In recent decades, the use of 4D Flow MRI has enabled the quantification of velocity fields within a volume of interest and along the cardiac cycle. However, the lack of resolution and the presence of noise in these data are significant issues. As indicated by recent studies, biomarkers such as wall shear stress are particularly affected by the poor resolution of vessel segmentation. Phase Contrast Magnetic Resonance Angiography (PC-MRA) is the state-of-the-art method to facilitate segmentation. The objective of this work is to introduce a new handcrafted feature that provides a novel visualisation of 4D Flow MRI images and is useful in the segmentation task. This feature, termed Weighted Mean Frequencies (WMF), reveals, in three dimensions, the regions whose voxels are traversed by pulsatile flow; it is representative of the hull of all pulsatile-velocity voxels. The value of this feature is illustrated by two experiments, in which 4D Flow MRI images were segmented using optimal thresholding and deep learning methods. The results demonstrate a substantial enhancement in IoU and Dice, with respective increases of 0.12 and 0.13 compared with the PC-MRA feature on the deep learning task. This feature has the potential to yield valuable insights that could inform future segmentation processes in other vascular regions, such as the heart or the brain.
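The abstract defines WMF only qualitatively. A hedged reading is, per voxel, the spectrum-magnitude-weighted mean temporal frequency of the velocity signal over the cardiac cycle, so that pulsatile voxels stand out from static tissue; the exact weighting in the paper may differ, and the array shapes and frame spacing below are assumptions.

```python
# Hedged sketch of a WMF-style feature: per-voxel weighted mean temporal frequency.
import numpy as np

def weighted_mean_frequency(vel, dt):
    """
    vel: (T, X, Y, Z) velocity magnitude over T cardiac phases.
    dt:  temporal spacing between phases in seconds.
    Returns an (X, Y, Z) map of weighted mean frequencies in Hz.
    """
    spec = np.abs(np.fft.rfft(vel, axis=0))          # (T//2 + 1, X, Y, Z)
    freqs = np.fft.rfftfreq(vel.shape[0], d=dt)      # (T//2 + 1,)
    spec, freqs = spec[1:], freqs[1:]                # drop the DC component
    weights = spec.sum(axis=0) + 1e-12
    return np.tensordot(freqs, spec, axes=(0, 0)) / weights

# Toy 4D Flow-like volume: one pulsatile region, the rest low-level noise
T = 20
vel = 0.01 * np.random.rand(T, 8, 8, 8)
t = np.arange(T)[:, None, None, None]
vel[:, 2:4, 2:4, 2:4] += np.abs(np.sin(2 * np.pi * t / T))
wmf = weighted_mean_frequency(vel, dt=0.04)
print(wmf[3, 3, 3], wmf[0, 0, 0])
```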

Semantic Scene Graph for Ultrasound Image Explanation and Scanning Guidance

Xuesong Li, Dianye Huang, Yameng Zhang, Nassir Navab, Zhongliang Jiang

arXiv preprint · Jun 24 2025
Understanding medical ultrasound imaging remains a long-standing challenge due to significant visual variability caused by differences in imaging and acquisition parameters. Recent advancements in large language models (LLMs) have been used to automatically generate terminology-rich summaries oriented toward clinicians with sufficient physiological knowledge. Nevertheless, the increasing demand for improved ultrasound interpretability and basic scanning guidance among non-expert users, e.g., in point-of-care settings, has not yet been explored. In this study, we first introduce the scene graph (SG) for ultrasound images to explain image content to ordinary users and to provide guidance for ultrasound scanning. The ultrasound SG is first computed using a transformer-based one-stage method, eliminating the need for explicit object detection. To generate a graspable image explanation for ordinary users, the user query is then used to further refine the abstract SG representation through LLMs. Additionally, the predicted SG is explored for its potential in guiding ultrasound scanning toward missing anatomies within the current imaging view, assisting ordinary users in achieving more standardized and complete anatomical exploration. The effectiveness of this SG-based image explanation and scanning guidance has been validated on images from the left and right neck regions, including the carotid and thyroid, across five volunteers. The results demonstrate the potential of the method to maximally democratize ultrasound by enhancing its interpretability and usability for ordinary users.
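The LLM refinement step is described only at a high level. As a hedged illustration of just the interface between the two stages, the sketch below serializes predicted scene-graph triples and a user query into a prompt for a lay-friendly explanation; the triples, relation names, and prompt wording are made up, and the actual SG schema and LLM interface are not specified in the abstract.

```python
# Hedged sketch: turn predicted scene-graph triples plus a user query into an LLM prompt.
def build_explanation_prompt(triples, user_query):
    """triples: list of (subject, relation, object) tuples predicted for the image."""
    facts = "\n".join(f"- {s} {r} {o}" for s, r, o in triples)
    return (
        "You are assisting a non-expert ultrasound user.\n"
        "Facts detected in the current image:\n"
        f"{facts}\n"
        f"Question: {user_query}\n"
        "Answer in plain language and, if an expected anatomy is missing, "
        "suggest how to adjust the probe."
    )

triples = [
    ("carotid artery", "left of", "internal jugular vein"),
    ("thyroid lobe", "adjacent to", "trachea"),
]
print(build_explanation_prompt(triples, "What am I looking at?"))
```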
