Page 1 of 439 results

Interventional Radiology Reporting Standards and Checklist for Artificial Intelligence Research Evaluation (iCARE).

Anibal JT, Huth HB, Boeken T, Daye D, Gichoya J, Muñoz FG, Chapiro J, Wood BJ, Sze DY, Hausegger K

PubMed · Jun 25, 2025
As artificial intelligence (AI) becomes increasingly prevalent within interventional radiology (IR) research and clinical practice, steps must be taken to ensure the robustness of novel technological systems presented in peer-reviewed journals. This report introduces comprehensive standards and an evaluation checklist (iCARE) that covers the application of modern AI methods in IR-specific contexts. The iCARE checklist encompasses the full "code-to-clinic" pipeline of AI development, including dataset curation, pre-training, task-specific training, explainability, privacy protection, bias mitigation, reproducibility, and model deployment. The iCARE checklist aims to support the development of safe, generalizable technologies for enhancing IR workflows, the delivery of care, and patient outcomes.

Weighted Mean Frequencies: a handcraft Fourier feature for 4D Flow MRI segmentation

Simon Perrin, Sébastien Levilly, Huajun Sun, Harold Mouchère, Jean-Michel Serfaty

arXiv preprint · Jun 25, 2025
In recent decades, 4D Flow MRI has enabled the quantification of velocity fields within a volume of interest and along the cardiac cycle. However, the lack of resolution and the presence of noise in these images are significant issues. As recent studies indicate, biomarkers such as wall shear stress are particularly affected by the poor resolution of vessel segmentation. Phase Contrast Magnetic Resonance Angiography (PC-MRA) is the state-of-the-art method for facilitating segmentation. The objective of this work is to introduce a new handcrafted feature that provides a novel visualisation of 4D Flow MRI images and is useful for the segmentation task. This feature, termed Weighted Mean Frequencies (WMF), reveals, in three dimensions, the regions traversed by pulsatile flow; it is representative of the hull of all pulsatile velocity voxels. Its value is illustrated by two experiments in which 4D Flow MRI images were segmented using optimal thresholding and deep learning methods. The results demonstrate a substantial enhancement in IoU and Dice, with respective increases of 0.12 and 0.13 over the PC-MRA feature in the deep learning task. This feature could inform future segmentation processes in other vascular regions, such as the heart or the brain.
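The abstract does not give the authors' exact formula, but a spectral-magnitude-weighted mean of temporal frequencies per voxel is one plausible reading of "Weighted Mean Frequencies"; the sketch below computes such a feature from a velocity-magnitude time series, with the DC bin dropped so static tissue does not dominate. Array shapes and the DC-removal step are assumptions for illustration.

```python
import numpy as np

def weighted_mean_frequency(velocity, dt):
    """Per-voxel weighted mean of temporal frequencies, weighted by
    spectral magnitude. velocity: array of shape (T, X, Y, Z) holding
    velocity magnitude over the cardiac cycle; dt: frame spacing in s."""
    T = velocity.shape[0]
    spectrum = np.abs(np.fft.rfft(velocity, axis=0))   # (F, X, Y, Z)
    freqs = np.fft.rfftfreq(T, d=dt)                   # (F,)
    # Drop the DC bin so static tissue does not dominate the weighting.
    spectrum, freqs = spectrum[1:], freqs[1:]
    weights = spectrum.sum(axis=0)
    num = (freqs[:, None, None, None] * spectrum).sum(axis=0)
    return num / np.maximum(weights, 1e-12)            # (X, Y, Z)
```

A voxel carrying pulsatile flow at the cardiac frequency yields a WMF near that frequency, while static voxels map to zero, which is consistent with the feature highlighting the hull of pulsatile-velocity voxels.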

Semantic Scene Graph for Ultrasound Image Explanation and Scanning Guidance

Xuesong Li, Dianye Huang, Yameng Zhang, Nassir Navab, Zhongliang Jiang

arXiv preprint · Jun 24, 2025
Understanding medical ultrasound imaging remains a long-standing challenge due to significant visual variability caused by differences in imaging and acquisition parameters. Recent advances in large language models (LLMs) have been used to automatically generate terminology-rich summaries oriented to clinicians with sufficient physiological knowledge. Nevertheless, the growing demand for improved ultrasound interpretability and basic scanning guidance among non-expert users, e.g., in point-of-care settings, has not yet been addressed. In this study, we first introduce the scene graph (SG) for ultrasound images to explain image content to ordinary users and provide guidance for ultrasound scanning. The ultrasound SG is first computed using a transformer-based one-stage method, eliminating the need for explicit object detection. To generate a graspable image explanation for ordinary users, the user query is then used to further refine the abstract SG representation through LLMs. Additionally, the predicted SG is explored for its potential in guiding ultrasound scanning toward missing anatomies within the current imaging view, helping ordinary users achieve more standardized and complete anatomical exploration. The effectiveness of this SG-based image explanation and scanning guidance has been validated on images from the left and right neck regions, including the carotid and thyroid, across five volunteers. The results demonstrate the potential of the method to broadly democratize ultrasound by enhancing its interpretability and usability for ordinary users.
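A scene graph of the kind described reduces to anatomies as nodes and (subject, relation, object) triplets as edges; query refinement then selects the sub-graph touching the queried anatomy, and scanning guidance compares detected nodes against the anatomies expected in a standard view. The structure and anatomy names below are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class SceneGraph:
    """Minimal ultrasound scene graph: nodes are detected anatomies,
    edges are (subject, relation, object) triplets."""
    nodes: set = field(default_factory=set)
    triplets: list = field(default_factory=list)

    def add(self, subj, rel, obj):
        self.nodes.update({subj, obj})
        self.triplets.append((subj, rel, obj))

    def query(self, anatomy):
        # The refined sub-graph touching the queried anatomy -- the part
        # that would be verbalized for the user (e.g. via an LLM).
        return [t for t in self.triplets if anatomy in (t[0], t[2])]

    def missing(self, expected):
        # Anatomies expected in a standard view but absent from the
        # current graph: candidates for scanning guidance.
        return sorted(set(expected) - self.nodes)
```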

Bedside Ultrasound Vector Doppler Imaging System with GPU Processing and Deep Learning.

Nahas H, Yiu BYS, Chee AJY, Ishii T, Yu ACH

PubMed · Jun 24, 2025
Recent innovations in vector flow imaging promise to bring the modality closer to clinical application and allow for more comprehensive high-frame-rate vascular assessments. One such innovation is plane-wave multi-angle vector Doppler, where pulsed Doppler principles from multiple steering angles are used to realize vector flow imaging at frame rates upward of 1,000 frames per second (fps). Currently, vector Doppler is limited by the presence of aliasing artifacts that have prevented its reliable realization at the bedside. In this work, we present a new aliasing-resistant vector Doppler imaging system that can be deployed at the bedside using a programmable ultrasound core, graphics processing unit (GPU) processing, and deep learning principles. The framework supports two operational modes: 1) live imaging at 17 fps where vector flow imaging serves to guide image view navigation in blood vessels with complex dynamics; 2) on-demand replay mode where flow data acquired at high frame rates of over 1,000 fps is depicted as a slow-motion playback at 60 fps using an aliasing-resistant vector projectile visualization. Using our new system, aliasing-free vector flow cineloops were successfully obtained in a stenosis phantom experiment and in human bifurcation imaging scans. This system represents a major engineering advance towards the clinical adoption of vector flow imaging.
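The multi-angle principle behind vector Doppler can be sketched as a least-squares problem: each steered acquisition measures the projection of the true velocity onto its beam direction, and several such projections over-determine the 2-D vector. The projection model below is a textbook simplification (it ignores the separate transmit/receive geometry of the actual system).

```python
import numpy as np

def solve_vector_velocity(doppler_vels, steer_angles_deg):
    """Least-squares recovery of a 2-D velocity vector (vx, vz) from
    axial Doppler estimates acquired at several steering angles,
    assuming each measurement is the projection of the true velocity
    onto the steered beam direction."""
    th = np.deg2rad(np.asarray(steer_angles_deg))
    A = np.stack([np.sin(th), np.cos(th)], axis=1)   # one row per angle
    v, *_ = np.linalg.lstsq(A, np.asarray(doppler_vels), rcond=None)
    return v  # (vx, vz)
```

With three or more distinct angles the system is over-determined, which also gives the redundancy that aliasing-resistant estimators can exploit.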

Angio-Diff: Learning a Self-Supervised Adversarial Diffusion Model for Angiographic Geometry Generation

Zhifeng Wang, Renjiao Yi, Xin Wen, Chenyang Zhu, Kai Xu, Kunlun He

arXiv preprint · Jun 24, 2025
Vascular diseases pose a significant threat to human health, and X-ray angiography is established as the gold standard for diagnosis, allowing detailed observation of blood vessels. However, angiographic X-rays expose personnel and patients to higher radiation levels than non-angiographic X-rays, an undesirable exposure. Modality translation from non-angiographic to angiographic X-rays is therefore desirable, but data-driven deep approaches are hindered by the lack of paired large-scale X-ray angiography datasets. This makes high-quality vascular angiography synthesis crucial, yet it remains challenging. We find that current medical image synthesis methods operate primarily at the pixel level and struggle to adapt to the complex geometric structure of blood vessels, resulting in unsatisfactory synthesis quality, such as disconnections or unnatural curvatures. To overcome this issue, we propose a self-supervised method via diffusion models to transform non-angiographic X-rays into angiographic X-rays, mitigating data shortages for data-driven approaches. Our model comprises a diffusion model that learns the distribution of vascular data from the diffusion latent, a generator for vessel synthesis, and a mask-based adversarial module. To enhance geometric accuracy, we propose a parametric vascular model to fit the shape and distribution of blood vessels. The proposed method contributes a pipeline and a synthetic dataset for X-ray angiography. We conducted extensive comparative and ablation experiments to evaluate Angio-Diff. The results demonstrate that our method achieves state-of-the-art synthetic angiography image quality and more accurately synthesizes the geometric structure of blood vessels. The code is available at https://github.com/zfw-cv/AngioDiff.
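The abstract does not specify the parametric vascular model, but the general idea of fitting vessel shape with a low-dimensional parameterization can be illustrated with a toy model: a quadratic Bezier centerline plus a linearly tapering radius. The control points, taper, and sampling are hypothetical choices, not the paper's formulation.

```python
import numpy as np

def bezier_vessel(p0, p1, p2, r0, r1, n=50):
    """Toy parametric vessel: quadratic Bezier centerline through
    control points p0, p1, p2 with radius tapering linearly r0 -> r1.
    Illustrates constraining synthesized vessels to a smooth,
    connected geometry rather than free pixel-level generation."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    p0, p1, p2 = map(np.asarray, (p0, p1, p2))
    centerline = (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2
    radius = (1 - t[:, 0]) * r0 + t[:, 0] * r1
    return centerline, radius
```

Because every sample lies on one continuous curve, a mask rasterized from such a model cannot exhibit the disconnections the abstract criticizes in pixel-level synthesis.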

Physiological Response of Tissue-Engineered Vascular Grafts to Vasoactive Agents in an Ovine Model.

Guo M, Villarreal D, Watanabe T, Wiet M, Ulziibayar A, Morrison A, Nelson K, Yuhara S, Hussaini SF, Shinoka T, Breuer C

PubMed · Jun 23, 2025
Tissue-engineered vascular grafts (TEVGs) are emerging as promising alternatives to synthetic grafts, particularly in pediatric cardiovascular surgery. While TEVGs have demonstrated growth potential, compliance, and resistance to calcification, their functional integration into the circulation, especially their ability to respond to physiological stimuli, remains underexplored. Vasoreactivity, the dynamic contraction or dilation of blood vessels in response to vasoactive agents, is a key property of native vessels that affects systemic hemodynamics and long-term vascular function. This study aimed to develop and validate an in vivo protocol to assess the vasoreactive capacity of TEVGs implanted as inferior vena cava (IVC) interposition grafts in a large animal model. Bone marrow-seeded TEVGs were implanted in the thoracic IVC of Dorset sheep. A combination of intravascular ultrasound (IVUS) imaging and invasive hemodynamic monitoring was used to evaluate vessel response to norepinephrine (NE) and sodium nitroprusside (SNP). Cross-sectional luminal area changes were measured using a custom Python-based software package (VIVUS) that leverages deep learning for IVUS image segmentation. Physiological parameters including blood pressure, heart rate, and cardiac output were continuously recorded. NE injections induced significant, dose-dependent vasoconstriction of TEVGs, with peak reductions in luminal area averaging ∼15% and corresponding increases in heart rate and mean arterial pressure. Conversely, SNP did not elicit measurable vasodilation in TEVGs, likely due to structural differences in venous tissue, the low-pressure environment of the thoracic IVC, and systemic confounders. Overall, the TEVGs demonstrated active, rapid, and reversible vasoconstrictive behavior in response to pharmacologic stimuli. This study presents a novel in vivo method for assessing TEVG vasoreactivity using real-time imaging and hemodynamic data. TEVGs possess functional vasoactivity, suggesting they may play an active role in modulating venous return and systemic hemodynamics. These findings are particularly relevant for Fontan patients and other scenarios where dynamic venous regulation is critical. Future work will compare TEVG vasoreactivity with native veins and synthetic grafts to further characterize their physiological integration and potential clinical benefits.
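The reported ∼15% peak reduction in luminal area comes down to comparing segmented cross-sectional areas before and after drug injection. A minimal sketch of that computation from binary IVUS masks follows; the pixel-area calibration factor is an assumed input, and this is not the VIVUS package itself.

```python
import numpy as np

def pct_luminal_change(mask_before, mask_after, pixel_area_mm2=1.0):
    """Percent change in segmented luminal cross-sectional area between
    two binary IVUS masks (positive = constriction). pixel_area_mm2 is
    an assumed calibration factor from the imaging geometry."""
    a0 = mask_before.sum() * pixel_area_mm2
    a1 = mask_after.sum() * pixel_area_mm2
    return 100.0 * (a0 - a1) / a0
```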

Generative deep-learning-model based contrast enhancement for digital subtraction angiography using a text-conditioned image-to-image model.

Takata T, Yamada K, Yamamoto M, Kondo H

PubMed · Jun 20, 2025
Digital subtraction angiography (DSA) is an essential imaging technique in interventional radiology, enabling detailed visualization of blood vessels by subtracting pre- and post-contrast images. However, reduced contrast, whether accidental or intentional, can impair the clarity of vascular structures. This issue becomes particularly critical in patients with chronic kidney disease (CKD), where minimizing iodinated contrast is necessary to reduce the risk of contrast-induced nephropathy (CIN). This study explored the potential of a generative deep-learning-based contrast enhancement technique for DSA. A text-conditioned image-to-image model was developed using Stable Diffusion, augmented with ControlNet to reduce hallucinations and Low-Rank Adaptation (LoRA) for model fine-tuning. A total of 1207 DSA series were used for training and testing, with additional low-contrast images generated through data augmentation. The model was trained using tagged text labels and evaluated using Root Mean Square (RMS) contrast, Michelson contrast, signal-to-noise ratio (SNR), and entropy. Evaluation indicated significant improvements: RMS contrast increased from 7.91 to 17.7, Michelson contrast from 0.875 to 0.992, and entropy from 3.60 to 5.60, reflecting enhanced detail. However, SNR decreased from 21.3 to 8.50, indicating increased noise. This study demonstrates the feasibility of deep-learning-based contrast enhancement for DSA images and highlights the potential of generative models to improve angiographic imaging. Further refinement, particularly in artifact suppression and clinical validation, is necessary for practical implementation in medical settings.
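The four evaluation metrics named above have common textbook definitions, sketched below (the paper's exact formulas, e.g. its SNR convention, may differ): RMS contrast as intensity standard deviation, Michelson contrast from the extreme intensities, SNR as mean over standard deviation, and Shannon entropy of the gray-level histogram.

```python
import numpy as np

def image_metrics(img, bins=256):
    """RMS contrast, Michelson contrast, SNR, and histogram entropy of
    a grayscale image, under common textbook definitions."""
    img = np.asarray(img, dtype=float)
    rms = img.std()                                   # RMS contrast
    michelson = (img.max() - img.min()) / (img.max() + img.min())
    snr = img.mean() / img.std()                      # one common SNR convention
    counts, _ = np.histogram(img, bins=bins)
    p = counts[counts > 0] / counts.sum()             # gray-level probabilities
    entropy = -(p * np.log2(p)).sum()                 # Shannon entropy, bits
    return {"rms": rms, "michelson": michelson, "snr": snr, "entropy": entropy}
```

Note how these definitions explain the reported trade-off: stretching intensities raises RMS/Michelson contrast and entropy, but amplified noise inflates the intensity standard deviation and drives SNR down.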

Automatic Multi-Task Segmentation and Vulnerability Assessment of Carotid Plaque on Contrast-Enhanced Ultrasound Images and Videos via Deep Learning.

Hu B, Zhang H, Jia C, Chen K, Tang X, He D, Zhang L, Gu S, Chen J, Zhang J, Wu R, Chen SL

PubMed · Jun 20, 2025
Intraplaque neovascularization (IPN) within carotid plaque is a crucial indicator of plaque vulnerability. Contrast-enhanced ultrasound (CEUS) is a valuable tool for assessing IPN by evaluating the location and quantity of microbubbles within the carotid plaque. However, this task is typically performed by experienced radiologists. Here we propose a deep learning-based multi-task model for the automatic segmentation and IPN grade classification of carotid plaque on CEUS images and videos. We also compare the performance of our model with that of radiologists. To simulate the clinical practice of radiologists, who often use CEUS videos with dynamic imaging to track microbubble flow and identify IPN, we develop a workflow for plaque vulnerability assessment using CEUS videos. Our multi-task model outperformed individually trained segmentation and classification models, achieving superior performance in IPN grade classification based on CEUS images. Specifically, our model achieved a high segmentation Dice coefficient of 84.64% and a high classification accuracy of 81.67%. Moreover, our model surpassed the performance of junior and medium-level radiologists, providing more accurate IPN grading of carotid plaque on CEUS images. For CEUS videos, our model achieved a classification accuracy of 80.00% in IPN grading. Overall, our multi-task model demonstrates strong performance in automatic, accurate, objective, and efficient IPN grading on both CEUS images and videos. This work holds significant promise for enhancing the clinical diagnosis of plaque vulnerability associated with IPN in CEUS evaluations.
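The Dice coefficient reported for the segmentation head (84.64%) is the standard overlap measure between predicted and reference binary masks; a minimal implementation:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks:
    2 * |pred AND target| / (|pred| + |target|)."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```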

VesselSDF: Distance Field Priors for Vascular Network Reconstruction

Salvatore Esposito, Daniel Rebain, Arno Onken, Changjian Li, Oisin Mac Aodha

arXiv preprint · Jun 19, 2025
Accurate segmentation of vascular networks from sparse CT scan slices remains a significant challenge in medical imaging, particularly due to the thin, branching nature of vessels and the inherent sparsity between imaging planes. Existing deep learning approaches, based on binary voxel classification, often struggle with structural continuity and geometric fidelity. To address this challenge, we present VesselSDF, a novel framework that leverages signed distance fields (SDFs) for robust vessel reconstruction. Our method reformulates vessel segmentation as a continuous SDF regression problem, where each point in the volume is represented by its signed distance to the nearest vessel surface. This continuous representation inherently captures the smooth, tubular geometry of blood vessels and their branching patterns. We obtain accurate vessel reconstructions while eliminating common SDF artifacts such as floating segments, thanks to our adaptive Gaussian regularizer which ensures smoothness in regions far from vessel surfaces while producing precise geometry near the surface boundaries. Our experimental results demonstrate that VesselSDF significantly outperforms existing methods and preserves vessel geometry and connectivity, enabling more reliable vascular analysis in clinical settings.
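The abstract does not state the exact objective, but an SDF regression with an "adaptive Gaussian regularizer" can be sketched as an L1 data term plus a smoothness penalty whose weight grows with distance from the vessel surface, so far-from-surface regions are smoothed while near-surface geometry stays sharp. The specific weighting (one minus a Gaussian of the ground-truth distance) and the 1-D gradient are illustrative assumptions.

```python
import numpy as np

def sdf_loss(pred, gt, sigma=5.0, lam=0.1):
    """Sketch of an SDF regression objective: L1 data term plus a
    smoothness penalty weighted away from the zero level set. 1-D for
    simplicity; a volume would difference along all three axes."""
    data = np.abs(pred - gt).mean()
    grad = np.abs(np.diff(pred))                       # finite-difference gradient
    far_weight = 1.0 - np.exp(-gt[:-1] ** 2 / (2.0 * sigma ** 2))
    smooth = (far_weight * grad).mean()                # suppresses floaters far away
    return data + lam * smooth
```

Penalizing gradients only far from the surface is one way to discourage spurious sign changes (floating segments) without blurring the vessel boundary itself.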