
Contrast-enhanced image synthesis using latent diffusion model for precise online tumor delineation in MRI-guided adaptive radiotherapy for brain metastases.

Ma X, Ma Y, Wang Y, Li C, Liu Y, Chen X, Dai J, Bi N, Men K

pubmed · Jun 25 2025
Objective: Magnetic resonance imaging-guided adaptive radiotherapy (MRIgART) is a promising technique for long-course RT of large-volume brain metastasis (BM), owing to its capacity to track tumor changes throughout the treatment course. Contrast-enhanced T1-weighted (T1CE) MRI is essential for BM delineation, yet it is often unavailable during online treatment because it requires contrast agent injection. This study aims to develop a synthetic T1CE (sT1CE) generation method to facilitate accurate online adaptive BM delineation. Approach: We developed a novel ControlNet-coupled latent diffusion model (CTN-LDM), combined with a personalized transfer learning strategy and a denoising diffusion implicit model (DDIM) inversion method, to generate high-quality sT1CE images from online T2-weighted (T2) or fluid attenuated inversion recovery (FLAIR) images. Visual quality of sT1CE images generated by the CTN-LDM was compared with that of classical deep learning models. BM delineation results using the combination of our sT1CE images and online T2/FLAIR images were compared with results using online T2/FLAIR images alone, which is the current clinical method. Main results: Visual quality of sT1CE images from our CTN-LDM was superior to that of classical models both quantitatively and qualitatively. Leveraging sT1CE images, radiation oncologists achieved significantly higher precision of adaptive BM delineation, with an average Dice similarity coefficient of 0.93 ± 0.02 vs. 0.86 ± 0.04 (p < 0.01), compared with using online T2/FLAIR images alone. Significance: The proposed method can generate high-quality sT1CE images and significantly improve the accuracy of online adaptive tumor delineation for long-course MRIgART of large-volume BM, potentially enhancing treatment outcomes and minimizing toxicity.
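As a rough illustration of the DDIM inversion step this pipeline relies on, the sketch below runs the deterministic DDIM update in reverse to map a clean latent back toward noise before conditional regeneration. The toy ConvNet noise predictor, the linear beta schedule, and the names `ToyEpsNet` and `ddim_invert` are assumptions for illustration only, not the authors' CTN-LDM.

```python
import torch
import torch.nn as nn

class ToyEpsNet(nn.Module):
    """Stand-in noise predictor eps_theta(x_t, t); a real LDM would use a conditioned UNet."""
    def __init__(self, channels=4):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(channels + 1, 32, 3, padding=1), nn.SiLU(),
                                 nn.Conv2d(32, channels, 3, padding=1))

    def forward(self, x, t_frac):
        t_map = torch.full_like(x[:, :1], t_frac)      # broadcast the timestep as an extra channel
        return self.net(torch.cat([x, t_map], dim=1))

@torch.no_grad()
def ddim_invert(eps_model, latent, num_steps=50, beta_start=1e-4, beta_end=2e-2):
    """Deterministic DDIM update run in reverse: clean latent -> approximate noise latent."""
    betas = torch.linspace(beta_start, beta_end, num_steps)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)
    x = latent
    for t in range(num_steps - 1):
        a_t, a_next = alpha_bar[t], alpha_bar[t + 1]
        eps = eps_model(x, t / num_steps)
        # Predict x0 from the current latent, then step "forward" in noise level.
        x0_pred = (x - torch.sqrt(1.0 - a_t) * eps) / torch.sqrt(a_t)
        x = torch.sqrt(a_next) * x0_pred + torch.sqrt(1.0 - a_next) * eps
    return x

if __name__ == "__main__":
    z = torch.randn(1, 4, 32, 32)          # e.g. a VAE-encoded T2/FLAIR latent
    print(ddim_invert(ToyEpsNet(), z).shape)
```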

BronchoGAN: anatomically consistent and domain-agnostic image-to-image translation for video bronchoscopy.

Soliman A, Keuth R, Himstedt M

pubmed · Jun 25 2025
Purpose: The limited availability of bronchoscopy images makes image synthesis particularly interesting for training deep learning models. Robust image translation across different domains (virtual bronchoscopy, phantom, as well as in vivo and ex vivo image data) is pivotal for clinical applications. Methods: This paper proposes BronchoGAN, which integrates anatomical constraints into a conditional GAN for image-to-image translation. In particular, we force bronchial orifices to match across input and output images. We further propose using foundation-model-generated depth images as an intermediate representation, ensuring robustness across a variety of input domains and yielding models with substantially less reliance on individual training datasets. Moreover, our intermediate depth image representation allows paired image data for training to be constructed easily. Results: Our experiments showed that input images from different domains (e.g., virtual bronchoscopy, phantoms) can be successfully translated to images mimicking realistic human airway appearance. We demonstrated that anatomical structures (i.e., bronchial orifices) are robustly preserved with our approach, as shown qualitatively and quantitatively by improved FID, SSIM, and Dice coefficient scores. Our anatomical constraints enabled an improvement in the Dice coefficient of up to 0.43 for synthetic images. Conclusion: Through foundation models for intermediate depth representations and bronchial orifice segmentation integrated as anatomical constraints into conditional GANs, we are able to robustly translate images from different bronchoscopy input domains. BronchoGAN allows public CT scan data (virtual bronchoscopy) to be incorporated in order to generate large-scale bronchoscopy image datasets with realistic appearance, helping to bridge the gap left by the scarcity of public bronchoscopy images.
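A minimal sketch of the kind of anatomically constrained generator objective described here: an adversarial realism term plus a Dice term that forces bronchial-orifice masks to agree between the depth-image input and the synthesized frame. The frozen orifice segmenter, the non-saturating adversarial loss, and the weight `lambda_anat` are assumptions for illustration, not BronchoGAN's exact formulation.

```python
import torch
import torch.nn.functional as F

def soft_dice(pred, target, eps=1e-6):
    """Soft Dice over probability maps in [0, 1], per batch item."""
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return (2.0 * inter + eps) / (union + eps)

def generator_loss(fake_logits, orifice_probs_input, orifice_probs_fake, lambda_anat=10.0):
    """Adversarial term + anatomical consistency term.

    fake_logits:          discriminator logits on generated frames
    orifice_probs_input:  orifice segmentation of the depth-image input (frozen segmenter)
    orifice_probs_fake:   orifice segmentation of the generated frame (same segmenter)
    """
    adv = F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
    anat = 1.0 - soft_dice(orifice_probs_fake, orifice_probs_input).mean()
    return adv + lambda_anat * anat

if __name__ == "__main__":
    fake_logits = torch.randn(2, 1)
    seg_in, seg_fake = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
    print(generator_loss(fake_logits, seg_in, seg_fake).item())
```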

Med-Art: Diffusion Transformer for 2D Medical Text-to-Image Generation

Changlu Guo, Anders Nymark Christensen, Morten Rieger Hannemose

arXiv preprint · Jun 25 2025
Text-to-image generative models have achieved remarkable breakthroughs in recent years. However, their application in medical image generation still faces significant challenges, including small dataset sizes and the scarcity of medical textual data. To address these challenges, we propose Med-Art, a framework specifically designed for medical image generation with limited data. Med-Art leverages vision-language models to generate visual descriptions of medical images, which overcomes the scarcity of applicable medical textual data. Med-Art adapts a large-scale pre-trained text-to-image model, PixArt-$\alpha$, based on the Diffusion Transformer (DiT), achieving high performance under limited data. Furthermore, we propose an innovative Hybrid-Level Diffusion Fine-tuning (HLDF) method, which enables pixel-level losses, effectively addressing issues such as overly saturated colors. We achieve state-of-the-art performance on two medical image datasets, as measured by FID, KID, and downstream classification performance.
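A rough sketch of a "hybrid-level" fine-tuning loss in the spirit described above: the usual latent-space noise-prediction MSE plus a pixel-space penalty computed by decoding the model's current clean-latent estimate. The toy decoder, the L1 pixel term, and the weight `lambda_pixel` are illustrative assumptions, not the paper's HLDF formulation.

```python
import torch
import torch.nn.functional as F

def hybrid_diffusion_loss(eps_pred, eps_true, z_noisy, alpha_bar_t, decode_fn,
                          target_image, lambda_pixel=0.1):
    """Latent MSE + pixel-level L1 on the decoded x0 estimate."""
    latent_loss = F.mse_loss(eps_pred, eps_true)
    # Recover the model's current estimate of the clean latent and decode it to pixels.
    z0_pred = (z_noisy - torch.sqrt(1.0 - alpha_bar_t) * eps_pred) / torch.sqrt(alpha_bar_t)
    img_pred = decode_fn(z0_pred)
    pixel_loss = F.l1_loss(img_pred, target_image)
    return latent_loss + lambda_pixel * pixel_loss

if __name__ == "__main__":
    decode = torch.nn.ConvTranspose2d(4, 3, kernel_size=8, stride=8)   # toy stand-in for a VAE decoder
    z = torch.randn(1, 4, 16, 16)
    eps = torch.randn_like(z)
    loss = hybrid_diffusion_loss(eps_pred=eps + 0.1 * torch.randn_like(eps), eps_true=eps,
                                 z_noisy=z, alpha_bar_t=torch.tensor(0.7),
                                 decode_fn=decode, target_image=torch.rand(1, 3, 128, 128))
    print(loss.item())
```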

Radiomic fingerprints for knee MR images assessment

Yaxi Chen, Simin Ni, Shaheer U. Saeed, Aleksandra Ivanova, Rikin Hargunani, Jie Huang, Chaozong Liu, Yipeng Hu

arXiv preprint · Jun 25 2025
Accurate interpretation of knee MRI scans relies on expert clinical judgment, often with high variability and limited scalability. Existing radiomic approaches use a fixed set of radiomic features (the signature), selected at the population level and applied uniformly to all patients. While interpretable, these signatures are often too constrained to represent individual pathological variations. As a result, conventional radiomic-based approaches are limited in performance compared with recent end-to-end deep learning (DL) alternatives that do not use interpretable radiomic features. We argue that the individual-agnostic nature of current radiomic selection is not central to its interpretability, but is responsible for the poor generalization in our application. Here, we propose a novel radiomic fingerprint framework, in which a radiomic feature set (the fingerprint) is dynamically constructed for each patient, selected by a DL model. Unlike existing radiomic signatures, our fingerprints are derived on a per-patient basis by predicting feature relevance within a large radiomic feature pool and selecting only those features that are predictive of clinical conditions for the individual patient. The radiomic-selecting model is trained simultaneously with a low-dimensional (considered relatively explainable) logistic regression for downstream classification. We validate our method across multiple diagnostic tasks, including general knee abnormalities, anterior cruciate ligament (ACL) tears, and meniscus tears, demonstrating comparable or superior diagnostic accuracy relative to state-of-the-art end-to-end DL models. More importantly, we show that the interpretability inherent in our approach facilitates meaningful clinical insights and potential biomarker discovery, with detailed discussion and quantitative and qualitative analysis of real-world clinical cases evidencing these advantages.
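A minimal sketch of the per-patient fingerprint idea: a small network predicts a relevance gate over a large radiomic feature pool for each patient, and a logistic-regression head classifies from the gated features. The dimensions, the gating network, and the straight-through hard selection are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class RadiomicFingerprint(nn.Module):
    def __init__(self, n_features=1000, hidden=128):
        super().__init__()
        self.gate_net = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                      nn.Linear(hidden, n_features))
        self.classifier = nn.Linear(n_features, 1)       # logistic-regression head

    def forward(self, radiomic_feats, keep_top_k=32):
        soft_gate = torch.sigmoid(self.gate_net(radiomic_feats))
        # Hard per-patient selection of the k most relevant features, with a
        # straight-through estimator so the gating network still receives gradients.
        topk = torch.topk(soft_gate, keep_top_k, dim=-1)
        hard_gate = torch.zeros_like(soft_gate).scatter(-1, topk.indices, 1.0)
        gate = hard_gate + soft_gate - soft_gate.detach()
        logits = self.classifier(gate * radiomic_feats)
        return logits, hard_gate                          # hard_gate = this patient's fingerprint

if __name__ == "__main__":
    model = RadiomicFingerprint()
    feats = torch.randn(4, 1000)                          # e.g. radiomic features per knee MRI
    logits, fingerprint = model(feats)
    print(logits.shape, fingerprint.sum(dim=-1))          # (4, 1); each row keeps 32 features
```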

Integrating handheld ultrasound in rheumatology: A review of benefits and drawbacks.

Sabido-Sauri R, Eder L, Emery P, Aydin SZ

pubmed · Jun 25 2025
Musculoskeletal ultrasound is a key tool in rheumatology for diagnosing and managing inflammatory arthritis. Traditional ultrasound systems, while effective, can be cumbersome and costly, limiting their use in many clinical settings. Handheld ultrasound (HHUS) devices, which are portable, affordable, and user-friendly, have emerged as a promising alternative. This review explores the role of HHUS in rheumatology, specifically evaluating its impact on diagnostic accuracy, ease of use, and utility in screening for inflammatory arthritis. The review also addresses key challenges, such as image quality, storage and data security, and the potential for integrating artificial intelligence to improve device performance. We compare HHUS devices to cart-based ultrasound machines, discuss their advantages and limitations, and examine the potential for widespread adoption. Our findings suggest that HHUS devices can effectively support musculoskeletal assessments and offer significant benefits in resource-limited settings. However, proper training, standardized protocols, and continued technological advancements are essential for optimizing their use in clinical practice.

Semantic Scene Graph for Ultrasound Image Explanation and Scanning Guidance

Xuesong Li, Dianye Huang, Yameng Zhang, Nassir Navab, Zhongliang Jiang

arXiv preprint · Jun 24 2025
Understanding medical ultrasound imaging remains a long-standing challenge due to significant visual variability caused by differences in imaging and acquisition parameters. Recent advancements in large language models (LLMs) have been used to automatically generate terminology-rich summaries oriented to clinicians with sufficient physiological knowledge. Nevertheless, the increasing demand for improved ultrasound interpretability and basic scanning guidance among non-expert users, e.g., in point-of-care settings, has not yet been explored. In this study, we first introduce the scene graph (SG) for ultrasound images to explain image content to ordinary users and to provide guidance for ultrasound scanning. The ultrasound SG is first computed using a transformer-based one-stage method, eliminating the need for explicit object detection. To generate a graspable image explanation for ordinary users, the user query is then used to further refine the abstract SG representation through LLMs. Additionally, the predicted SG is explored for its potential in guiding ultrasound scanning toward missing anatomies within the current imaging view, assisting ordinary users in achieving more standardized and complete anatomical exploration. The effectiveness of this SG-based image explanation and scanning guidance has been validated on images from the left and right neck regions, including the carotid and thyroid, across five volunteers. The results demonstrate the potential of the method to maximally democratize ultrasound by enhancing its interpretability and usability for ordinary users.
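A small illustration of how a predicted scene graph could drive scanning guidance: compare detected anatomy nodes against an expected checklist for the current protocol and phrase the missing structures, together with the user query, as a prompt for an LLM to turn into lay-user instructions. The graph schema, the checklist, and the prompt wording are assumptions, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SGTriplet:
    subject: str
    relation: str
    obj: str

NECK_PROTOCOL = {"carotid artery", "internal jugular vein", "thyroid"}   # assumed checklist

def missing_anatomies(triplets):
    """Structures in the protocol checklist that do not appear in the predicted graph."""
    seen = {t.subject for t in triplets} | {t.obj for t in triplets}
    return sorted(NECK_PROTOCOL - seen)

def guidance_prompt(triplets, user_query):
    """Assemble an LLM prompt from the scene graph, the gaps, and the user's question."""
    missing = missing_anatomies(triplets)
    graph_txt = "; ".join(f"{t.subject} {t.relation} {t.obj}" for t in triplets)
    return (f"Scene graph: {graph_txt}\n"
            f"Structures not yet imaged: {', '.join(missing) or 'none'}\n"
            f"User question: {user_query}\n"
            "Explain the current view in plain language and, if structures are missing, "
            "suggest how to adjust the probe to find them.")

if __name__ == "__main__":
    graph = [SGTriplet("carotid artery", "left of", "internal jugular vein")]
    print(guidance_prompt(graph, "What am I looking at?"))
```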

AI-based large-scale screening of gastric cancer from noncontrast CT imaging.

Hu C, Xia Y, Zheng Z, Cao M, Zheng G, Chen S, Sun J, Chen W, Zheng Q, Pan S, Zhang Y, Chen J, Yu P, Xu J, Xu J, Qiu Z, Lin T, Yun B, Yao J, Guo W, Gao C, Kong X, Chen K, Wen Z, Zhu G, Qiao J, Pan Y, Li H, Gong X, Ye Z, Ao W, Zhang L, Yan X, Tong Y, Yang X, Zheng X, Fan S, Cao J, Yan C, Xie K, Zhang S, Wang Y, Zheng L, Wu Y, Ge Z, Tian X, Zhang X, Wang Y, Zhang R, Wei Y, Zhu W, Zhang J, Qiu H, Su M, Shi L, Xu Z, Zhang L, Cheng X

pubmed · Jun 24 2025
Early detection through screening is critical for reducing gastric cancer (GC) mortality. However, in most high-prevalence regions, large-scale screening remains challenging due to limited resources, low compliance, and the suboptimal detection rate of upper endoscopic screening. Therefore, there is an urgent need for more efficient screening protocols. Noncontrast computed tomography (CT), routinely performed for clinical purposes, presents a promising avenue for large-scale designed or opportunistic screening. Here we developed the Gastric Cancer Risk Assessment Procedure with Artificial Intelligence (GRAPE), leveraging noncontrast CT and deep learning to identify GC. Our study comprised three phases. First, we developed GRAPE using a cohort from 2 centers in China (3,470 GC and 3,250 non-GC cases) and validated its performance on an internal validation set (1,298 cases, area under the curve = 0.970) and an independent external cohort from 16 centers (18,160 cases, area under the curve = 0.927). Subgroup analysis showed that the detection rate of GRAPE increased with advancing T stage but was independent of tumor location. Next, we compared the interpretations of GRAPE with those of radiologists and assessed its potential in assisting diagnostic interpretation. Reader studies demonstrated that GRAPE significantly outperformed radiologists, improving sensitivity by 21.8% and specificity by 14.0%, particularly for early-stage GC. Finally, we evaluated GRAPE in real-world opportunistic screening using 78,593 consecutive noncontrast CT scans from a comprehensive cancer center and 2 independent regional hospitals. GRAPE identified persons at high risk, with GC detection rates of 24.5% and 17.7% in the 2 regional hospitals and with 23.2% and 26.8% of detected cases at T1/T2 stage. Additionally, GRAPE detected GC cases that radiologists had initially missed, enabling earlier diagnosis of GC during follow-up for other diseases. In conclusion, GRAPE demonstrates strong potential for large-scale GC screening, offering a feasible and effective approach for early detection. ClinicalTrials.gov registration: NCT06614179.
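A sketch of the opportunistic-screening step described here: run a 3D risk model over a preprocessed noncontrast CT volume and flag the study when the predicted probability exceeds an operating point chosen on a validation set. The tiny 3D CNN and the threshold value are placeholders; the actual GRAPE model and operating point are not given in the abstract.

```python
import torch
import torch.nn as nn

class TinyRiskNet(nn.Module):
    """Stand-in 3D classifier; a real system would use a large pretrained 3D backbone."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                                      nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool3d(1))
        self.head = nn.Linear(16, 1)

    def forward(self, volume):
        return self.head(self.features(volume).flatten(1))

@torch.no_grad()
def screen_study(model, volume, threshold=0.30):
    """Return (risk probability, high-risk flag) for one normalized CT volume."""
    prob = torch.sigmoid(model(volume)).item()
    return prob, prob >= threshold

if __name__ == "__main__":
    ct = torch.randn(1, 1, 64, 128, 128)      # (batch, channel, depth, height, width), already normalized
    prob, flag = screen_study(TinyRiskNet().eval(), ct)
    print(f"risk={prob:.3f}, refer for endoscopy={flag}")
```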

Advances and Integrations of Computer-Assisted Planning, Artificial Intelligence, and Predictive Modeling Tools for Laser Interstitial Thermal Therapy in Neurosurgical Oncology.

Warman A, Moorthy D, Gensler R, Horowtiz MA, Ellis J, Tomasovic L, Srinivasan E, Ahmed K, Azad TD, Anderson WS, Rincon-Torroella J, Bettegowda C

pubmed · Jun 24 2025
Laser interstitial thermal therapy (LiTT) has emerged as a minimally invasive, MRI-guided treatment of brain tumors that are otherwise considered inoperable because of their location or the patient's poor surgical candidacy. By directing thermal energy at neoplastic lesions while minimizing damage to surrounding healthy tissue, LiTT offers promising therapeutic outcomes for both newly diagnosed and recurrent tumors. However, challenges such as postprocedural edema and unpredictable heat diffusion near blood vessels and ventricles underscore the need for improved real-time planning and monitoring. Incorporating artificial intelligence (AI) presents a viable solution to many of these obstacles. AI has already demonstrated effectiveness in optimizing surgical trajectories, predicting seizure-free outcomes in epilepsy cases, and generating heat distribution maps to guide real-time ablation. This technology could be similarly deployed in neurosurgical oncology to identify patients most likely to benefit from LiTT, refine trajectory planning, and predict tissue-specific heat responses. Despite promising initial studies, further research is needed to establish the robust datasets and clinical trials necessary to develop and validate AI-driven LiTT protocols. Such advancements have the potential to bolster LiTT's efficacy, minimize complications, and ultimately transform the neurosurgical management of primary and metastatic brain tumors.
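A hedged toy example of the kind of predictive heat modeling this review refers to: an explicit finite-difference solution of the 1D Pennes bioheat equation around a localized laser source. All tissue constants, the source term, and the geometry are generic textbook-style assumptions, not a validated LiTT planning model.

```python
import numpy as np

def pennes_1d(n=200, dx=5e-4, dt=0.01, steps=3000):
    """Explicit finite-difference march of the 1D Pennes bioheat equation (SI units)."""
    rho, c, k = 1050.0, 3600.0, 0.5                      # tissue density, heat capacity, conductivity
    w_b, rho_b, c_b, T_a = 0.008, 1050.0, 3800.0, 37.0   # perfusion rate and blood properties
    T = np.full(n, 37.0)                                 # start at body temperature
    q = np.zeros(n)
    q[n // 2 - 2 : n // 2 + 2] = 2.0e6                   # localized laser power density (W/m^3), assumed
    for _ in range(steps):
        lap = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2
        lap[0] = lap[-1] = 0.0                           # fixed-temperature boundaries
        dTdt = (k * lap + w_b * rho_b * c_b * (T_a - T) + q) / (rho * c)
        T = T + dt * dTdt
        T[0] = T[-1] = 37.0
    return T

if __name__ == "__main__":
    T = pennes_1d()
    print(f"peak temperature ~{T.max():.1f} C at {np.argmax(T) * 5e-4 * 1000:.1f} mm")
```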

Brain ultrasonography in neurosurgical patients.

Mahajan C, Kapoor I, Prabhakar H

pubmed · Jun 24 2025
Brain ultrasound is a popular point-of-care test that helps visualize brain structures. This review highlights recent developments in brain ultrasonography. There is a need to keep pace with ongoing technological advancements and to establish standardized quality criteria for improving its utility in clinical practice. Newer automated indices derived from transcranial Doppler help establish its role as a noninvasive monitor of intracranial pressure and in diagnosing vasospasm/delayed cerebral ischemia. A novel robotic transcranial Doppler system equipped with artificial intelligence allows real-time continuous neuromonitoring. Intraoperative ultrasound assists neurosurgeons in real-time localization of brain lesions and in assessing the extent of resection, thereby enhancing surgical precision and safety. Optic nerve sheath diameter point-of-care ultrasonography is an effective means of diagnosing raised intracranial pressure and of aiding triage and prognostication. A quality criteria checklist can help standardize this technique. Newer advancements such as focused ultrasound, contrast-enhanced ultrasound, and functional ultrasound are also discussed. Brain ultrasound continues to be a critical bedside tool in neurologically injured patients. With the advent of technological advancements, its utility has widened and its capabilities have expanded, making it more accurate and versatile in clinical practice.
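A tiny worked example of the automated transcranial Doppler indices mentioned above. Gosling's pulsatility index and Pourcelot's resistance index are standard definitions; the velocity values below are made-up illustrative inputs, not patient data.

```python
def pulsatility_index(psv, edv, mean_v):
    """PI = (peak systolic - end diastolic velocity) / mean flow velocity."""
    return (psv - edv) / mean_v

def resistance_index(psv, edv):
    """RI = (peak systolic - end diastolic velocity) / peak systolic velocity."""
    return (psv - edv) / psv

if __name__ == "__main__":
    psv, edv, mean_v = 95.0, 35.0, 55.0          # cm/s, illustrative middle cerebral artery values
    print(f"PI = {pulsatility_index(psv, edv, mean_v):.2f}")   # ~1.09
    print(f"RI = {resistance_index(psv, edv):.2f}")
```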