SHFormer: Dynamic spectral filtering convolutional neural network and high-pass kernel generation transformer for adaptive MRI reconstruction.

Ramanarayanan S, G S R, Fahim MA, Ram K, Venkatesan R, Sivaprakasam M

PubMed · Jul 1 2025
Attention Mechanism (AM) selectively focuses on essential information for imaging tasks and captures relationships between regions from distant pixel neighborhoods to compute feature representations. Accelerated magnetic resonance image (MRI) reconstruction can benefit from AM, as the imaging process involves acquiring Fourier domain measurements that influence the image representation in a non-local manner. However, AM-based models are more adept at capturing low-frequency information and have limited capacity in constructing high-frequency representations, restricting the models to smooth reconstructions. Second, AM-based models need mode-specific retraining for multimodal MRI data, as their knowledge is restricted to local contextual variations within modes, which might be inadequate to capture the diverse transferable features across heterogeneous data domains. To address these challenges, we propose a neuromodulation-based discriminative multi-spectral AM for scalable MRI reconstruction that can (i) propagate context-aware high-frequency details for high-quality image reconstruction, and (ii) capture features reusable to deviated unseen domains in multimodal MRI, offering high practical value for the healthcare industry and researchers. The proposed network consists of a spectral filtering convolutional neural network, which captures mode-specific transferable features to generalize to deviated MRI data domains, and a dynamic high-pass kernel generation transformer that focuses on high-frequency details for improved reconstruction. We have evaluated our model on various aspects, such as comparative studies in supervised and self-supervised learning, diffusion model-based training, closed-set and open-set generalization under heterogeneous MRI data, and interpretation-based analysis. Our results show that the proposed method offers scalable and high-quality reconstruction, with best improvement margins of ∼1 dB in PSNR and ∼0.01 in SSIM under unseen scenarios. Our code is available at https://github.com/sriprabhar/SHFormer.
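To make the frequency-domain idea concrete, below is a minimal PyTorch sketch of learnable spectral filtering (FFT, a per-frequency gain, inverse FFT). The class name, shapes, and initialization are illustrative assumptions; this is a generic sketch of the concept, not the released SHFormer code linked above.

```python
import torch
import torch.nn as nn

class SpectralFilter2d(nn.Module):
    """Generic frequency-domain filtering: FFT -> learnable per-frequency
    gain -> inverse FFT. A sketch of the spectral-filtering concept only."""
    def __init__(self, channels, height, width):
        super().__init__()
        # one real-valued gain per channel and rFFT frequency bin
        self.gain = nn.Parameter(torch.ones(channels, height, width // 2 + 1))

    def forward(self, x):                        # x: (B, C, H, W)
        spec = torch.fft.rfft2(x, norm="ortho")  # complex spectrum
        spec = spec * self.gain                  # re-weight frequency bins
        return torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")

x = torch.randn(2, 16, 64, 64)                   # toy feature map
print(SpectralFilter2d(16, 64, 64)(x).shape)     # torch.Size([2, 16, 64, 64])
```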

TCDE-Net: An unsupervised dual-encoder network for 3D brain medical image registration.

Yang X, Li D, Deng L, Huang S, Wang J

PubMed · Jul 1 2025
Medical image registration is a critical task in aligning medical images from different time points, modalities, or individuals, essential for accurate diagnosis and treatment planning. Despite significant progress in deep learning-based registration methods, current approaches still face considerable challenges, such as insufficient capture of local details, difficulty in effectively modeling global contextual information, and limited robustness in handling complex deformations. These limitations hinder the precision of high-resolution registration, particularly when dealing with medical images with intricate structures. To address these issues, this paper presents a novel registration network (TCDE-Net), an unsupervised medical image registration method based on a dual-encoder architecture. The dual encoders complement each other in feature extraction, enabling the model to effectively handle large-scale nonlinear deformations and capture intricate local details, thereby enhancing registration accuracy. Additionally, the detail-enhancement attention module aids in restoring fine-grained features, improving the network's capability to address complex deformations such as those at gray-white matter boundaries. Experimental results on the OASIS, IXI, and Hammers-n30r95 3D brain MR datasets demonstrate that this method outperforms commonly used registration techniques across multiple evaluation metrics, achieving superior performance and robustness. Our code is available at https://github.com/muzidongxue/TCDE-Net.
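The workhorse of most unsupervised registration networks is a spatial-transformer step that warps the moving volume with the predicted displacement field. Below is a generic 3D warping sketch in PyTorch; the function name and the voxel-displacement convention are assumptions for illustration, not TCDE-Net's implementation.

```python
import torch
import torch.nn.functional as F

def warp3d(moving, flow):
    """Warp a moving volume with a dense displacement field given in voxels.
    moving: (B, C, D, H, W); flow: (B, 3, D, H, W) ordered (dx, dy, dz)."""
    _, _, D, H, W = moving.shape
    zz, yy, xx = torch.meshgrid(torch.arange(D), torch.arange(H),
                                torch.arange(W), indexing="ij")
    base = torch.stack((xx, yy, zz), dim=-1).float().to(moving.device)  # (D,H,W,3)
    new = base.unsqueeze(0) + flow.permute(0, 2, 3, 4, 1)               # (B,D,H,W,3)
    # normalize to [-1, 1] as grid_sample expects (x ~ W, y ~ H, z ~ D)
    sizes = torch.tensor([W, H, D], dtype=torch.float32, device=moving.device)
    new = 2.0 * new / (sizes - 1) - 1.0
    return F.grid_sample(moving, new, mode="bilinear", align_corners=True)

moved = warp3d(torch.randn(1, 1, 32, 32, 32), torch.zeros(1, 3, 32, 32, 32))
print(moved.shape)  # torch.Size([1, 1, 32, 32, 32])
```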

Multi-modal MRI synthesis with conditional latent diffusion models for data augmentation in tumor segmentation.

Kebaili A, Lapuyade-Lahorgue J, Vera P, Ruan S

PubMed · Jul 1 2025
Multimodality is often necessary for improving object segmentation tasks, especially in the case of multilabel tasks, such as tumor segmentation, which is crucial for clinical diagnosis and treatment planning. However, a major challenge in utilizing multimodality with deep learning remains: the limited availability of annotated training data, primarily due to the time-consuming acquisition process and the necessity for expert annotations. Although deep learning has significantly advanced many tasks in medical imaging, conventional augmentation techniques are often insufficient due to the inherent complexity of volumetric medical data. To address this problem, we propose an innovative slice-based latent diffusion architecture for the generation of 3D multi-modal images and their corresponding multi-label masks. Our approach enables the simultaneous generation of the image and mask in a slice-by-slice fashion, leveraging a positional encoding and a Latent Aggregation module to maintain spatial coherence and capture slice sequentiality. This method effectively reduces the computational complexity and memory demands typically associated with diffusion models. Additionally, we condition our architecture on tumor characteristics to generate a diverse array of tumor variations and enhance texture using a refining module that acts like a super-resolution mechanism, mitigating the inherent blurriness caused by data scarcity in the autoencoder. We evaluate the effectiveness of our synthesized volumes using the BRATS2021 dataset to segment the tumor with three tissue labels and compare them with other state-of-the-art diffusion models through a downstream segmentation task, demonstrating the superior performance and efficiency of our method. While our primary application is tumor segmentation, this method can be readily adapted to other modalities. Code is available at https://github.com/Arksyd96/multi-modal-mri-and-mask-synthesis-with-conditional-slice-based-ldm.
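To illustrate how slice order can be injected into per-slice latents and how those latents might be pooled across a volume, here is a small hypothetical sketch using a standard sinusoidal position encoding and self-attention. The module names are invented; this is not the paper's positional encoding or Latent Aggregation module.

```python
import math
import torch
import torch.nn as nn

def slice_position_encoding(num_slices, dim):
    """Standard sinusoidal encoding of the slice index."""
    pos = torch.arange(num_slices, dtype=torch.float32).unsqueeze(1)
    div = torch.exp(torch.arange(0, dim, 2, dtype=torch.float32)
                    * (-math.log(10000.0) / dim))
    pe = torch.zeros(num_slices, dim)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe

class SliceLatentAggregator(nn.Module):
    """Toy aggregation: self-attention over per-slice latents so each slice
    sees its neighbours, encouraging cross-slice coherence."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, z):                       # z: (B, num_slices, dim)
        z = z + slice_position_encoding(z.size(1), z.size(2)).to(z.device)
        out, _ = self.attn(z, z, z)
        return out

z = torch.randn(2, 64, 128)                     # 64 slice latents of dim 128
print(SliceLatentAggregator(128)(z).shape)      # torch.Size([2, 64, 128])
```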

μ²Tokenizer: Differentiable Multi-Scale Multi-Modal Tokenizer for Radiology Report Generation

Siyou Li, Pengyao Qin, Huanan Wu, Dong Nie, Arun J. Thirunavukarasu, Juntao Yu, Le Zhang

arXiv preprint · Jun 30 2025
Automated radiology report generation (RRG) aims to produce detailed textual reports from clinical imaging, such as computed tomography (CT) scans, to improve the accuracy and efficiency of diagnosis and provision of management advice. RRG is complicated by two key challenges: (1) inherent complexity in extracting relevant information from imaging data under resource constraints, and (2) difficulty in objectively evaluating discrepancies between model-generated and expert-written reports. To address these challenges, we propose μ²LLM, a multiscale multimodal large language model for RRG tasks. The novel μ²Tokenizer, as an intermediate layer, integrates multi-modal features from the multiscale visual tokenizer and the text tokenizer, then enhances report generation quality through direct preference optimization (DPO) guided by GREEN-RedLlama. Experimental results on four large CT image-report medical datasets demonstrate that our method outperforms existing approaches, highlighting the potential of our fine-tuned μ²LLMs on limited data for RRG tasks. At the same time, for prompt engineering, we introduce a five-stage, LLM-driven pipeline that converts routine CT reports into paired visual-question-answer triples and citation-linked reasoning narratives, creating a scalable, high-quality supervisory corpus for explainable multimodal radiology LLMs. All code, datasets, and models will be publicly available in our official repository: https://github.com/Siyou-Li/u2Tokenizer
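The report-refinement step relies on direct preference optimization. For reference, here is the standard DPO objective (Rafailov et al., 2023), in which a chosen report is preferred over a rejected one relative to a frozen reference model; the function name, beta value, and toy inputs are illustrative, not the authors' training code.

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected,
             beta=0.1):
    """Standard DPO objective on sequence log-probabilities."""
    chosen_margin = logp_chosen - ref_logp_chosen
    rejected_margin = logp_rejected - ref_logp_rejected
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()

# toy log-probabilities for a batch of 4 chosen/rejected report pairs
loss = dpo_loss(torch.randn(4), torch.randn(4), torch.randn(4), torch.randn(4))
print(loss.item())
```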

Uncertainty-aware Diffusion and Reinforcement Learning for Joint Plane Localization and Anomaly Diagnosis in 3D Ultrasound

Yuhao Huang, Yueyue Xu, Haoran Dou, Jiaxiao Deng, Xin Yang, Hongyu Zheng, Dong Ni

arXiv preprint · Jun 30 2025
Congenital uterine anomalies (CUAs) can lead to infertility, miscarriage, preterm birth, and an increased risk of pregnancy complications. Compared to traditional 2D ultrasound (US), 3D US can reconstruct the coronal plane, providing a clear visualization of the uterine morphology for assessing CUAs accurately. In this paper, we propose an intelligent system for simultaneous automated plane localization and CUA diagnosis. Our highlights are: 1) we develop a denoising diffusion model with local (plane) and global (volume/text) guidance, using an adaptive weighting strategy to optimize attention allocation to different conditions; 2) we introduce a reinforcement learning-based framework with unsupervised rewards to extract the key slice summary from redundant sequences, fully integrating information across multiple planes to reduce learning difficulty; 3) we provide text-driven uncertainty modeling for coarse prediction, and leverage it to adjust the classification probability for overall performance improvement. Extensive experiments on a large 3D uterine US dataset show the efficacy of our method, in terms of plane localization and CUA diagnosis. Code is available at https://github.com/yuhoo0302/CUA-US.
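One plausible way to realize an adaptive weighting over several guidance signals (e.g. plane-, volume-, and text-level conditions) is a learned softmax over condition embeddings, as in the toy sketch below. The module is a hypothetical illustration of the general idea, not the paper's adaptive weighting strategy.

```python
import torch
import torch.nn as nn

class AdaptiveConditionFusion(nn.Module):
    """Learn one softmax weight per guidance signal and mix the embeddings."""
    def __init__(self, num_conditions):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_conditions))

    def forward(self, conditions):              # list of (B, dim) tensors
        w = torch.softmax(self.logits, dim=0)   # (K,)
        stacked = torch.stack(conditions, dim=0)            # (K, B, dim)
        return (w.view(-1, 1, 1) * stacked).sum(dim=0)      # (B, dim)

fusion = AdaptiveConditionFusion(3)
mixed = fusion([torch.randn(2, 256) for _ in range(3)])
print(mixed.shape)  # torch.Size([2, 256])
```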

Diffusion Model-based Data Augmentation Method for Fetal Head Ultrasound Segmentation

Fangyijie Wang, Kevin Whelan, Félix Balado, Guénolé Silvestre, Kathleen M. Curran

arXiv preprint · Jun 30 2025
Medical image data is less accessible than in other domains due to privacy and regulatory constraints. In addition, labeling requires costly, time-intensive manual image annotation by clinical experts. To overcome these challenges, synthetic medical data generation offers a promising solution. Generative AI (GenAI), employing generative deep learning models, has proven effective at producing realistic synthetic images. This study proposes a novel mask-guided GenAI approach using diffusion models to generate synthetic fetal head ultrasound images paired with segmentation masks. These synthetic pairs augment real datasets for supervised fine-tuning of the Segment Anything Model (SAM). Our results show that the synthetic data captures real image features effectively, and this approach achieves state-of-the-art fetal head segmentation, especially when trained with a limited number of real image-mask pairs. In particular, the segmentation reaches Dice scores of 94.66% and 94.38% using a handful of ultrasound images from the Spanish and African cohorts, respectively. Our code, models, and data are available on GitHub.
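The reported numbers are Dice similarity coefficients between predicted and reference masks; a generic implementation is shown below with arbitrary toy masks.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((256, 256), dtype=np.uint8)
a[64:192, 64:192] = 1
b = np.zeros((256, 256), dtype=np.uint8)
b[80:200, 80:200] = 1
print(round(dice_score(a, b), 4))  # ~0.815 for these overlapping squares
```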

MDPG: Multi-domain Diffusion Prior Guidance for MRI Reconstruction

Lingtong Zhang, Mengdie Song, Xiaohan Hao, Huayu Mai, Bensheng Qiu

arXiv preprint · Jun 30 2025
Magnetic Resonance Imaging (MRI) reconstruction is essential in medical diagnostics. Diffusion models (DMs), the latest class of generative models, have struggled to produce high-fidelity images when operating directly in the image domain due to their stochastic nature. Latent diffusion models (LDMs) yield both compact and detailed prior knowledge in latent domains, which can guide the model towards more effective learning of the original data distribution. Inspired by this, we propose Multi-domain Diffusion Prior Guidance (MDPG), provided by pre-trained LDMs, to enhance data consistency in MRI reconstruction tasks. Specifically, we first construct a Visual-Mamba-based backbone, which enables efficient encoding and reconstruction of under-sampled images. Pre-trained LDMs are then integrated to provide conditional priors in both the latent and image domains. A novel Latent Guided Attention (LGA) module is proposed for efficient fusion in multi-level latent domains. Simultaneously, to effectively utilize priors in both the k-space and image domains, under-sampled images are fused with generated fully-sampled images by the Dual-domain Fusion Branch (DFB) for self-adaptive guidance. Lastly, to further enhance data consistency, we propose a k-space regularization strategy based on the non-auto-calibration signal (NACS) set. Extensive experiments on two public MRI datasets fully demonstrate the effectiveness of the proposed methodology. The code is available at https://github.com/Zolento/MDPG.
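A standard ingredient of data consistency in undersampled MRI reconstruction is re-inserting the acquired k-space samples into the predicted spectrum. The sketch below shows that generic step; the function name and shapes are assumptions, and this is not MDPG's Dual-domain Fusion Branch or NACS regularization.

```python
import torch

def kspace_data_consistency(recon, kspace_meas, mask):
    """Keep the prediction where no data were acquired and re-insert measured
    k-space samples where they exist.
    recon: (B, H, W) complex image; kspace_meas: (B, H, W) complex; mask: bool."""
    k_pred = torch.fft.fft2(recon, norm="ortho")
    m = mask.to(k_pred.real.dtype)
    k_dc = m * kspace_meas + (1 - m) * k_pred
    return torch.fft.ifft2(k_dc, norm="ortho")

img = torch.randn(1, 128, 128, dtype=torch.complex64)
meas = torch.fft.fft2(img, norm="ortho")
mask = torch.rand(1, 128, 128) < 0.3           # 30% of k-space acquired
out = kspace_data_consistency(torch.zeros_like(img), meas, mask)
print(out.shape)  # torch.Size([1, 128, 128])
```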

Deep Learning-Based Semantic Segmentation for Real-Time Kidney Imaging and Measurements with Augmented Reality-Assisted Ultrasound

Gijs Luijten, Roberto Maria Scardigno, Lisle Faray de Paiva, Peter Hoyer, Jens Kleesiek, Domenico Buongiorno, Vitoantonio Bevilacqua, Jan Egger

arXiv preprint · Jun 30 2025
Ultrasound (US) is widely accessible and radiation-free but has a steep learning curve due to its dynamic nature and non-standard imaging planes. Additionally, the constant need to shift focus between the US screen and the patient poses a challenge. To address these issues, we integrate deep learning (DL)-based semantic segmentation for real-time (RT) automated kidney volumetric measurements, which are essential for clinical assessment but are traditionally time-consuming and prone to fatigue. This automation allows clinicians to concentrate on image interpretation rather than manual measurements. Complementing DL, augmented reality (AR) enhances the usability of US by projecting the display directly into the clinician's field of view, improving ergonomics and reducing the cognitive load associated with screen-to-patient transitions. Two AR-DL-assisted US pipelines on HoloLens-2 are proposed: one streams directly via the application programming interface for a wireless setup, while the other supports any US device with video output for broader accessibility. We evaluate RT feasibility and accuracy using the Open Kidney Dataset and open-source segmentation models (nnU-Net, Segmenter, YOLO with MedSAM and LiteMedSAM). Our open-source GitHub pipeline includes model implementations, measurement algorithms, and a Wi-Fi-based streaming solution, enhancing US training and diagnostics, especially in point-of-care settings.
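Kidney volume from orthogonal ultrasound diameters is commonly estimated with the prolate-ellipsoid formula, sketched below. This is a standard clinical approximation and not necessarily the measurement algorithm used in the authors' pipeline.

```python
from math import pi

def ellipsoid_volume_ml(length_cm, width_cm, depth_cm):
    """Prolate-ellipsoid estimate: V = pi/6 * L * W * D (1 cm^3 == 1 mL)."""
    return (pi / 6.0) * length_cm * width_cm * depth_cm

print(round(ellipsoid_volume_ml(10.5, 5.0, 4.5), 1))  # ~123.7 mL
```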

Statistical Toolkit for Analysis of Radiotherapy DICOM Data.

Kinz M, Molodowitch C, Killoran J, Hesser JW, Zygmanski P

PubMed · Jun 30 2025
Background: Radiotherapy (RT) has become increasingly sophisticated, necessitating advanced tools for analyzing extensive treatment data in hospital databases. Such analyses can enhance future treatments, particularly through Knowledge-Based Planning, and aid in developing new treatment modalities like convergent kV RT.
Purpose: The objective is to develop automated software tools for large-scale retrospective analysis of over 10,000 MeV x-ray radiotherapy plans. This aims to identify trends and references in plans delivered at our institution across all treatment sites, focusing on: (A) Planning-Target-Volume, Clinical-Target-Volume, Gross-Tumor-Volume, and Organ-At-Risk (PTV/CTV/GTV/OAR) topology, morphology, and dosimetry, and (B) RT plan efficiency and complexity.
Methods: The software tools are coded in Python. Topological metrics are evaluated using principal component analysis, including center of mass, volume, size, and depth. Morphology is quantified using Hounsfield Units, while dose distribution is characterized by conformity and homogeneity indexes. The total dose within the target versus the body is defined as the Dose Balance Index.
Results: The primary outcome of this study is the toolkit and an analysis of our database. For example, the mean minimum and maximum PTV depths are about 2.5±2.3 cm and 9±3 cm, respectively.
Conclusions: This study provides a statistical basis for RT plans and the necessary tools to generate them. It aids in selecting plans for knowledge-based models and deep-learning networks. The site-specific volume and depth results help identify the limitations and opportunities of current and future treatment modalities, in our case convergent kV RT. The compiled statistics and tools are versatile for training, quality assurance, comparing plans from different periods or institutions, and establishing guidelines. The toolkit is publicly available at https://github.com/m-kinz/STAR.
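As a flavor of the topological metrics described in the Methods, the sketch below computes a structure's center of mass, volume, and PCA-based axis lengths from a binary mask. Variable names and the spacing handling are illustrative; this does not reproduce the STAR toolkit code.

```python
import numpy as np

def mask_topology(mask, spacing=(1.0, 1.0, 1.0)):
    """Center of mass (mm), volume (cc), and principal-axis extents (mm)
    of a binary structure mask, via PCA of its voxel coordinates."""
    coords = np.argwhere(mask > 0) * np.asarray(spacing)   # (N, 3) in mm
    com = coords.mean(axis=0)
    cov = np.cov((coords - com).T)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    axis_extents = 2.0 * np.sqrt(eigvals)                  # ~±1-sigma extents
    volume_cc = mask.sum() * np.prod(spacing) / 1000.0
    return com, volume_cc, axis_extents

m = np.zeros((50, 50, 50))
m[10:30, 20:40, 15:35] = 1
print(mask_topology(m, spacing=(2.0, 2.0, 2.0)))
```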

MedRegion-CT: Region-Focused Multimodal LLM for Comprehensive 3D CT Report Generation

Sunggu Kyung, Jinyoung Seo, Hyunseok Lim, Dongyeong Kim, Hyungbin Park, Jimin Sung, Jihyun Kim, Wooyoung Jo, Yoojin Nam, Namkug Kim

arXiv preprint · Jun 29 2025
The recent release of RadGenome-Chest CT has significantly advanced CT-based report generation. However, existing methods primarily focus on global features, making it challenging to capture region-specific details, which may cause certain abnormalities to go unnoticed. To address this, we propose MedRegion-CT, a region-focused Multi-Modal Large Language Model (MLLM) framework, featuring three key innovations. First, we introduce Region Representative (R²) Token Pooling, which utilizes a 2D-wise pretrained vision model to efficiently extract 3D CT features. This approach generates global tokens representing overall slice features and region tokens highlighting target areas, enabling the MLLM to process comprehensive information effectively. Second, a universal segmentation model generates pseudo-masks, which are then processed by a mask encoder to extract region-centric features. This allows the MLLM to focus on clinically relevant regions, using six predefined region masks. Third, we leverage segmentation results to extract patient-specific attributes, including organ size, diameter, and location. These are converted into text prompts, enriching the MLLM's understanding of patient-specific contexts. To ensure rigorous evaluation, we conducted benchmark experiments on report generation using the RadGenome-Chest CT dataset. MedRegion-CT achieved state-of-the-art performance, outperforming existing methods in natural language generation quality and clinical relevance while maintaining interpretability. The code for our framework is publicly available.
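Region tokens of this kind are often built by masked average pooling of voxel features under each (pseudo-)mask; a generic sketch follows. The function is an assumption for illustration, not the paper's R² Token Pooling module.

```python
import torch

def region_tokens(features, masks, eps=1e-6):
    """Masked average pooling: one token per region from the voxels that a
    (pseudo-)segmentation mask assigns to that region.
    features: (B, C, D, H, W); masks: (B, R, D, H, W), binary."""
    f = features.flatten(2)                     # (B, C, N)
    m = masks.flatten(2).float()                # (B, R, N)
    summed = torch.einsum("bcn,brn->brc", f, m)
    return summed / (m.sum(-1, keepdim=True) + eps)        # (B, R, C)

tok = region_tokens(torch.randn(1, 64, 16, 32, 32),
                    torch.rand(1, 6, 16, 32, 32) > 0.5)
print(tok.shape)  # torch.Size([1, 6, 64])
```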