RECISTSurv: Hybrid Multi-task Transformer for Hepatocellular Carcinoma Response and Survival Evaluation.

Jiao R, Liu Q, Zhang Y, Pu B, Xue B, Cheng Y, Yang K, Liu X, Qu J, Jin C, Zhang Y, Wang Y, Zhang YD

PubMed · Jun 18, 2025
Transarterial Chemoembolization (TACE) is a widely applied alternative treatment for patients with hepatocellular carcinoma who are not eligible for liver resection or transplantation. However, clinical outcomes after TACE are highly heterogeneous. There remains an urgent need for effective and efficient strategies to accurately assess tumor response and predict long-term outcomes using longitudinal and multi-center datasets. To address this challenge, we introduce RECISTSurv, a novel response-driven Transformer model that integrates multi-task learning with a response-driven co-attention mechanism to simultaneously perform liver and tumor segmentation, predict tumor response to TACE, and estimate overall survival based on longitudinal Computed Tomography (CT) imaging. The proposed Response-driven Co-attention layer models the interactions between pre-TACE and post-TACE features guided by the treatment response embedding. This design enables the model to capture complex relationships between imaging features, treatment response, and survival outcomes, thereby enhancing both prediction accuracy and interpretability. In a multi-center validation study, RECISTSurv-predicted prognosis demonstrated superior precision to state-of-the-art methods, with C-indexes ranging from 0.595 to 0.780. Furthermore, when integrated with multi-modal data, RECISTSurv emerged as an independent prognostic factor in all three validation cohorts, with hazard ratios (HR) ranging from 1.693 to 20.7 (P = 0.001-0.042). Our results highlight the potential of RECISTSurv as a powerful tool for personalized treatment planning and outcome prediction in hepatocellular carcinoma patients undergoing TACE. The experimental code is made publicly available at https://github.com/rushier/RECISTSurv.
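
The response-driven co-attention idea can be made concrete with a small sketch. Everything below is an assumption-laden reading of the abstract, not the authors' released code: the module name, tensor shapes, and the choice to inject the response embedding into the query stream are all hypothetical.

```python
import torch
import torch.nn as nn

class ResponseCoAttention(nn.Module):
    """Hypothetical co-attention block: pre-TACE features query post-TACE
    features, with queries conditioned on a treatment-response embedding."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.response_proj = nn.Linear(dim, dim)  # injects the response signal
        self.norm = nn.LayerNorm(dim)

    def forward(self, pre_feats, post_feats, response_emb):
        # pre_feats, post_feats: (B, N, dim) token sequences from a CT encoder
        # response_emb: (B, dim) embedding of the treatment response
        q = self.norm(pre_feats + self.response_proj(response_emb).unsqueeze(1))
        fused, _ = self.attn(q, post_feats, post_feats)
        return pre_feats + fused  # residual fusion of pre/post information

# Toy usage with random tensors
layer = ResponseCoAttention(dim=256)
pre, post = torch.randn(2, 196, 256), torch.randn(2, 196, 256)
resp = torch.randn(2, 256)
out = layer(pre, post, resp)  # (2, 196, 256)
```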

DM-FNet: Unified multimodal medical image fusion via diffusion process-trained encoder-decoder

Dan He, Weisheng Li, Guofen Wang, Yuping Huang, Shiqiang Liu

arXiv preprint · Jun 18, 2025
Multimodal medical image fusion (MMIF) extracts the most meaningful information from multiple source images, enabling a more comprehensive and accurate diagnosis. Achieving high-quality fusion results requires a careful balance of brightness, color, contrast, and detail; this ensures that the fused images effectively display relevant anatomical structures and reflect the functional status of the tissues. However, existing MMIF methods have limited capacity to capture detailed features during conventional training and suffer from insufficient cross-modal feature interaction, leading to suboptimal fused image quality. To address these issues, this study proposes a two-stage diffusion model-based fusion network (DM-FNet) to achieve unified MMIF. In Stage I, a diffusion process trains a UNet for image reconstruction. The UNet captures detailed information through progressive denoising and represents multilevel data, providing a rich set of feature representations for the subsequent fusion network. In Stage II, noisy images at various steps are input into the fusion network to enhance the model's feature recognition capability. Three key fusion modules are also integrated to process medical images from different modalities adaptively. Ultimately, the robust network structure and a hybrid loss function are integrated to harmonize the fused image's brightness, color, contrast, and detail, enhancing its quality and information density. The experimental results across various medical image types demonstrate that the proposed method performs exceptionally well on objective evaluation metrics. The fused image preserves appropriate brightness, a comprehensive distribution of radioactive tracers, rich textures, and clear edges. The code is available at https://github.com/HeDan-11/DM-FNet.
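
Stage II's "noisy images at various steps" follows the standard DDPM forward process, which is easy to sketch. This is a generic illustration of that input-preparation step under assumed shapes and an assumed noise schedule, not DM-FNet's actual training code.

```python
import torch

def ddpm_noisy_input(x0, t, alphas_cumprod):
    """Standard DDPM forward process: x_t = sqrt(abar_t)*x0 + sqrt(1-abar_t)*eps."""
    abar = alphas_cumprod[t].view(-1, 1, 1, 1)
    return abar.sqrt() * x0 + (1.0 - abar).sqrt() * torch.randn_like(x0)

T = 1000
betas = torch.linspace(1e-4, 0.02, T)              # common linear beta schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

mri = torch.randn(4, 1, 128, 128)                  # toy single-channel batches
pet = torch.randn(4, 1, 128, 128)
t = torch.randint(0, T, (4,))                      # a different step per sample
noisy_mri = ddpm_noisy_input(mri, t, alphas_cumprod)
noisy_pet = ddpm_noisy_input(pet, t, alphas_cumprod)
# (noisy_mri, noisy_pet, t) would then be fed to the Stage II fusion network.
```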

Multimodal Large Language Models for Medical Report Generation via Customized Prompt Tuning

Chunlei Li, Jingyang Hou, Yilei Shi, Jingliang Hu, Xiao Xiang Zhu, Lichao Mou

arXiv preprint · Jun 18, 2025
Medical report generation from imaging data remains a challenging task in clinical practice. While large language models (LLMs) show great promise in addressing this challenge, their effective integration with medical imaging data still deserves in-depth exploration. In this paper, we present MRG-LLM, a novel multimodal large language model (MLLM) that combines a frozen LLM with a learnable visual encoder and introduces a dynamic prompt customization mechanism. Our key innovation lies in generating instance-specific prompts tailored to individual medical images through conditional affine transformations derived from visual features. We propose two implementations: prompt-wise and promptbook-wise customization, enabling precise and targeted report generation. Extensive experiments on IU X-ray and MIMIC-CXR datasets demonstrate that MRG-LLM achieves state-of-the-art performance in medical report generation. Our code will be made publicly available.
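
The conditional affine prompt customization can be sketched as scale-and-shift modulation of learnable prompt tokens. The sketch below is one plausible reading of the abstract with hypothetical names and dimensions; the authors' implementation may differ.

```python
import torch
import torch.nn as nn

class PromptCustomizer(nn.Module):
    """Hypothetical prompt-wise customization: an affine transform, predicted
    from visual features, modulates a bank of learnable prompt tokens."""

    def __init__(self, num_prompts: int, dim: int, vis_dim: int):
        super().__init__()
        self.base_prompts = nn.Parameter(torch.randn(num_prompts, dim) * 0.02)
        self.to_scale = nn.Linear(vis_dim, dim)
        self.to_shift = nn.Linear(vis_dim, dim)

    def forward(self, vis_feat):
        # vis_feat: (B, vis_dim) pooled feature of one medical image
        gamma = 1.0 + self.to_scale(vis_feat).unsqueeze(1)    # scale, near identity
        beta = self.to_shift(vis_feat).unsqueeze(1)           # shift
        return gamma * self.base_prompts.unsqueeze(0) + beta  # (B, P, dim)

customizer = PromptCustomizer(num_prompts=16, dim=4096, vis_dim=768)
prompts = customizer(torch.randn(2, 768))  # instance-specific prompts for a frozen LLM
```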

Mono-Modalizing Extremely Heterogeneous Multi-Modal Medical Image Registration

Kyobin Choo, Hyunkyung Han, Jinyeong Kim, Chanyong Yoon, Seong Jae Hwang

arXiv preprint · Jun 18, 2025
In clinical practice, imaging modalities with functional characteristics, such as positron emission tomography (PET) and fractional anisotropy (FA), are often aligned with a structural reference (e.g., MRI, CT) for accurate interpretation or group analysis, necessitating multi-modal deformable image registration (DIR). However, due to the extreme heterogeneity of these modalities compared to standard structural scans, conventional unsupervised DIR methods struggle to learn reliable spatial mappings and often distort images. We find that the similarity metrics guiding these models fail to capture alignment between highly disparate modalities. To address this, we propose M2M-Reg (Multi-to-Mono Registration), a novel framework that trains multi-modal DIR models using only mono-modal similarity while preserving the established architectural paradigm for seamless integration into existing models. We also introduce GradCyCon, a regularizer that leverages M2M-Reg's cyclic training scheme to promote diffeomorphism. Furthermore, our framework naturally extends to a semi-supervised setting, integrating pre-aligned and unaligned pairs only, without requiring ground-truth transformations or segmentation masks. Experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate that M2M-Reg achieves up to 2x higher DSC than prior methods for PET-MRI and FA-MRI registration, highlighting its effectiveness in handling highly heterogeneous multi-modal DIR. Our code is available at https://github.com/MICV-yonsei/M2M-Reg.
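
The mono-modal cyclic idea can be illustrated with a toy 2D example: compose the two cross-modal deformations so the similarity loss compares PET only against cyclically warped PET. This is a simplified, assumption-based sketch (sequential warping approximates true flow composition, and the flow convention is assumed), not M2M-Reg itself.

```python
import torch
import torch.nn.functional as F

def warp(img, flow):
    """Warp a (B, C, H, W) image with a flow given in normalized (x, y) coords."""
    B, _, H, W = img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W),
                            indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(B, -1, -1, -1)
    return F.grid_sample(img, base + flow.permute(0, 2, 3, 1), align_corners=True)

pet = torch.randn(1, 1, 96, 96)
flow_pm = torch.zeros(1, 2, 96, 96, requires_grad=True)  # PET -> MRI deformation
flow_mp = torch.zeros(1, 2, 96, 96, requires_grad=True)  # MRI -> PET deformation

# Sequential warping approximates composing the two deformations.
cycled_pet = warp(warp(pet, flow_pm), flow_mp)
mono_loss = F.mse_loss(cycled_pet, pet)  # similarity measured within one modality
```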

Pediatric Pancreas Segmentation from MRI Scans with Deep Learning

Elif Keles, Merve Yazol, Gorkem Durak, Ziliang Hong, Halil Ertugrul Aktas, Zheyuan Zhang, Linkai Peng, Onkar Susladkar, Necati Guzelyel, Oznur Leman Boyunaga, Cemal Yazici, Mark Lowe, Aliye Uc, Ulas Bagci

arXiv preprint · Jun 18, 2025
Objective: Our study aimed to evaluate and validate PanSegNet, a deep learning (DL) algorithm for pediatric pancreas segmentation on MRI in children with acute pancreatitis (AP), chronic pancreatitis (CP), and healthy controls. Methods: With IRB approval, we retrospectively collected 84 MRI scans (1.5T/3T Siemens Aera/Verio) from children aged 2-19 years at Gazi University (2015-2024). The dataset includes healthy children as well as patients diagnosed with AP or CP based on clinical criteria. Pediatric and general radiologists manually segmented the pancreas, and the segmentations were then confirmed by a senior pediatric radiologist. PanSegNet-generated segmentations were assessed using the Dice Similarity Coefficient (DSC) and 95th percentile Hausdorff distance (HD95). Cohen's kappa measured observer agreement. Results: Pancreas MRI T2W scans were obtained from 42 children with AP/CP (mean age: 11.73 ± 3.9 years) and 42 healthy children (mean age: 11.19 ± 4.88 years). PanSegNet achieved DSC scores of 88% (controls), 81% (AP), and 80% (CP), with HD95 values of 3.98 mm (controls), 9.85 mm (AP), and 15.67 mm (CP). Inter-observer kappa was 0.86 (controls) and 0.82 (pancreatitis); intra-observer agreement reached 0.88 and 0.81. Strong agreement was observed between automated and manual volumes (R² = 0.85 in controls, 0.77 in diseased), demonstrating clinical reliability. Conclusion: PanSegNet represents the first validated deep learning solution for pancreatic MRI segmentation, achieving expert-level performance across healthy and diseased states. The tool and algorithm, along with our annotated dataset, are freely available on GitHub and OSF, advancing accessible, radiation-free pediatric pancreatic imaging and fostering collaborative research in this underserved domain.
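
For reference, the two reported metrics have compact textbook definitions; the versions below are written from those standard definitions, not from the study's evaluation code.

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum() + 1e-8)

def hd95(pred_pts: np.ndarray, gt_pts: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between surface point sets (N, 3)."""
    d = np.linalg.norm(pred_pts[:, None, :] - gt_pts[None, :, :], axis=-1)
    return max(np.percentile(d.min(axis=1), 95), np.percentile(d.min(axis=0), 95))
```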

Diffusion-based Counterfactual Augmentation: Towards Robust and Interpretable Knee Osteoarthritis Grading

Zhe Wang, Yuhua Ru, Aladine Chetouani, Tina Shiang, Fang Chen, Fabian Bauer, Liping Zhang, Didier Hans, Rachid Jennane, William Ewing Palmer, Mohamed Jarraya, Yung Hsin Chen

arXiv preprint · Jun 18, 2025
Automated grading of Knee Osteoarthritis (KOA) from radiographs is challenged by significant inter-observer variability and the limited robustness of deep learning models, particularly near critical decision boundaries. To address these limitations, this paper proposes a novel framework, Diffusion-based Counterfactual Augmentation (DCA), which enhances model robustness and interpretability by generating targeted counterfactual examples. The method navigates the latent space of a diffusion model using a Stochastic Differential Equation (SDE) whose dynamics balance a classifier-informed boundary drive against a manifold constraint. The resulting counterfactuals are then used within a self-corrective learning strategy to improve the classifier by focusing on its specific areas of uncertainty. Extensive experiments on the public Osteoarthritis Initiative (OAI) and Multicenter Osteoarthritis Study (MOST) datasets demonstrate that this approach significantly improves classification accuracy across multiple model architectures. Furthermore, the method provides interpretability by visualizing minimal pathological changes and revealing that the learned latent space topology aligns with clinical knowledge of KOA progression. The DCA framework effectively converts model uncertainty into a robust training signal, offering a promising pathway to developing more accurate and trustworthy automated diagnostic systems. Our code is available at https://github.com/ZWang78/DCA.
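
The SDE-guided latent navigation can be sketched as a single Euler-Maruyama step whose drift balances a classifier-informed boundary term against a manifold term. The weights, the placeholder manifold score, and the toy classifier below are all assumptions for illustration, not the authors' formulation.

```python
import torch

def sde_step(z, classifier, target, manifold_score, dt=0.01, sigma=0.1,
             w_cls=1.0, w_man=1.0):
    """One Euler-Maruyama update of the latent z (all weights are placeholders)."""
    z = z.detach().requires_grad_(True)
    logp = torch.log_softmax(classifier(z), dim=-1)[:, target].sum()
    boundary_drive = torch.autograd.grad(logp, z)[0]  # push toward the target grade
    drift = w_cls * boundary_drive + w_man * manifold_score(z)
    return z + drift * dt + sigma * (dt ** 0.5) * torch.randn_like(z)

clf = torch.nn.Linear(64, 5)      # toy stand-in for a 5-grade KOA classifier head
score = lambda z: -z              # toy manifold score pulling latents toward origin
z_next = sde_step(torch.randn(2, 64), clf, target=3, manifold_score=score)
```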

DiffM4RI: A Latent Diffusion Model with Modality Inpainting for Synthesizing Missing Modalities in MRI Analysis.

Ye W, Guo Z, Ren Y, Tian Y, Shen Y, Chen Z, He J, Ke J, Shen Y

PubMed · Jun 17, 2025
Foundation Models (FMs) have shown great promise for multimodal medical image analysis such as Magnetic Resonance Imaging (MRI). However, certain MRI sequences may be unavailable due to various constraints, such as limited scanning time, patient discomfort, or scanner limitations. The absence of certain modalities can hinder the performance of FMs in clinical applications, making effective missing modality imputation crucial for ensuring their applicability. Previous approaches, including generative adversarial networks (GANs), have been employed to synthesize missing modalities in either a one-to-one or many-to-one manner. However, these methods have limitations, as they require training a new model for different missing scenarios and are prone to mode collapse, generating limited diversity in the synthesized images. To address these challenges, we propose DiffM4RI, a diffusion model for many-to-many missing modality imputation in MRI. DiffM4RI innovatively formulates missing modality imputation as a modality-level inpainting task, enabling it to handle arbitrary missing modality situations without the need for training multiple networks. Experiments on the BraTS datasets demonstrate that DiffM4RI achieves an average SSIM improvement of 0.15 over MustGAN, 0.1 over SynDiff, and 0.02 over VQ-VAE-2. These results highlight the potential of DiffM4RI in enhancing the reliability of FMs in clinical applications. The code is available at https://github.com/27yw/DiffM4RI.
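
Modality-level inpainting can be sketched in the RePaint style: stack sequences as channels, and at each reverse step re-impose the observed channels at the matching noise level. The sketch below uses a stand-in denoiser and an assumed noise schedule; it is not the DiffM4RI implementation.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)        # assumed linear schedule
abar = torch.cumprod(1.0 - betas, dim=0)

def forward_noise(x0, t):
    """Re-noise clean data to diffusion step t."""
    return abar[t].sqrt() * x0 + (1.0 - abar[t]).sqrt() * torch.randn_like(x0)

def inpaint_missing(x_obs, missing_mask, denoise_step):
    # x_obs: (B, M, H, W), M stacked MRI sequences; missing_mask: 1 where missing
    x = torch.randn_like(x_obs)
    for t in reversed(range(T)):
        x = denoise_step(x, t)  # one reverse-diffusion update (trained model)
        known = forward_noise(x_obs, t - 1) if t > 0 else x_obs
        x = missing_mask * x + (1.0 - missing_mask) * known  # keep observed channels
    return x

x_obs = torch.randn(1, 4, 64, 64)                              # four sequences as channels
missing = torch.tensor([1.0, 0.0, 0.0, 0.0]).view(1, 4, 1, 1)  # first sequence missing
denoise_step = lambda x, t: 0.999 * x                          # stand-in denoiser
filled = inpaint_missing(x_obs, missing, denoise_step)
```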

PRO: Projection Domain Synthesis for CT Imaging

Kang Chen, Bin Huang, Xuebin Yang, Junyan Zhang, Qiegen Liu

arXiv preprint · Jun 16, 2025
Synthesizing high quality CT images remains a significant challenge due to the limited availability of annotated data and the complex nature of CT imaging. In this work, we present PRO, a novel framework that, to the best of our knowledge, is the first to perform CT image synthesis in the projection domain using latent diffusion models. Unlike previous approaches that operate in the image domain, PRO learns rich structural representations from raw projection data and leverages anatomical text prompts for controllable synthesis. This projection domain strategy enables more faithful modeling of underlying imaging physics and anatomical structures. Moreover, PRO functions as a foundation model, capable of generalizing across diverse downstream tasks by adjusting its generative behavior via prompt inputs. Experimental results demonstrated that incorporating our synthesized data significantly improves performance across multiple downstream tasks, including low-dose and sparse-view reconstruction, even with limited training data. These findings underscore the versatility and scalability of PRO in data generation for various CT applications. These results highlight the potential of projection domain synthesis as a powerful tool for data augmentation and robust CT imaging. Our source code is publicly available at: https://github.com/yqx7150/PRO.
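
To ground why projection-domain samples are broadly useful: a synthesized sinogram drops straight into standard reconstruction pipelines. The snippet below, which reconstructs a placeholder random sinogram with scikit-image's filtered back-projection (assumes a recent scikit-image with the filter_name argument), illustrates that hand-off; it is not part of PRO.

```python
import numpy as np
from skimage.transform import iradon  # filtered back-projection

theta = np.linspace(0.0, 180.0, 180, endpoint=False)       # projection angles in degrees
sinogram = np.random.rand(256, 180)                        # placeholder for a synthesized sample
recon = iradon(sinogram, theta=theta, filter_name="ramp")  # FBP reconstruction
```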

Kernelized weighted local information based picture fuzzy clustering with multivariate coefficient of variation and modified total Bregman divergence measure for brain MRI image segmentation.

Lohit H, Kumar D

PubMed · Jun 16, 2025
This paper proposes a novel clustering method for noisy image segmentation using a kernelized weighted local information approach under the Picture Fuzzy Set (PFS) framework. Existing kernel-based fuzzy clustering methods struggle with noisy environments and non-linear structures, while intuitionistic fuzzy clustering methods face limitations in handling uncertainty in real-world medical images. To address these challenges, we introduce a local picture fuzzy information measure, developed for the first time using Multivariate Coefficient of Variation (MCV) theory, enhancing robustness in segmentation. Additionally, we integrate non-Euclidean distance measures, including kernel distance for local information computation and modified total Bregman divergence (MTBD) measure for improving clustering accuracy. This combination enhances both local spatial consistency and global membership estimation, leading to precise segmentation. The proposed method is extensively evaluated on synthetic images with Gaussian, Salt and Pepper, and mixed noise, along with Brainweb, IBSR, and MRBrainS18 MRI datasets under varying Rician noise levels, and a CT image template. Furthermore, we benchmark our proposed method against two deep learning-based segmentation models, ResNet34-LinkNet and patch-based U-Net. Experimental results demonstrate significant improvements in segmentation accuracy, as validated by metrics such as Dice Score, Fuzzy Performance Index, Modified Partition Entropy, Average Volume Difference (AVD), and the XB index. Additionally, Friedman's statistical test confirms the superior performance of our approach compared to state-of-the-art clustering methods for noisy image segmentation. To facilitate reproducibility, the implementation of our proposed method is made publicly available at: Google Drive Repository.
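
The kernelized distance at the core of such methods has a standard closed form: with a Gaussian kernel K(x, v) = exp(-||x - v||² / σ²), the induced feature-space squared distance is 2(1 - K(x, v)). The snippet below implements that textbook formula only; the paper's full algorithm additionally incorporates local information, MCV weighting, and the MTBD measure.

```python
import numpy as np

def gaussian_kernel_distance(X, centers, sigma=1.0):
    """Kernel-induced squared distances 2*(1 - K(x, v)) for a Gaussian kernel."""
    sq = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (n, c) squared dists
    return 2.0 * (1.0 - np.exp(-sq / sigma ** 2))

X = np.random.rand(1000, 1)            # toy gray-level features
V = np.array([[0.2], [0.5], [0.8]])    # three cluster prototypes
D = gaussian_kernel_distance(X, V)     # feeds the fuzzy membership update
```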
