Hierarchical Diffusion Framework for Pseudo-Healthy Brain MRI Inpainting with Enhanced 3D Consistency

Dou Hoon Kwark, Shirui Luo, Xiyue Zhu, Yudu Li, Zhi-Pei Liang, Volodymyr Kindratenko

arXiv preprint · Jul 23, 2025
Pseudo-healthy image inpainting is an essential preprocessing step for analyzing pathological brain MRI scans. Most current inpainting methods favor slice-wise 2D models for their high in-plane fidelity, but their independence across slices produces discontinuities in the volume. Fully 3D models alleviate this issue, but their high model capacity demands extensive training data for reliable, high-fidelity synthesis, which is often impractical in medical settings. We address these limitations with a hierarchical diffusion framework that replaces direct 3D modeling with two perpendicular coarse-to-fine 2D stages. An axial diffusion model first yields a coarse, globally consistent inpainting; a coronal diffusion model then refines anatomical details. By combining perpendicular spatial views with adaptive resampling, our method balances data efficiency and volumetric consistency. Our experiments show our approach outperforms state-of-the-art baselines in both realism and volumetric consistency, making it a promising solution for pseudo-healthy image inpainting. Code is available at https://github.com/dou0000/3dMRI-Consistent-Inpaint.
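
To make the two-stage design concrete, here is a minimal sketch of a perpendicular coarse-to-fine pass, not taken from the authors' repository: `axial_model` and `coronal_model` stand in for pretrained 2D inpainting networks, and the slice-wise call interface is an assumption for illustration.

```python
import torch

def hierarchical_inpaint(volume, mask, axial_model, coronal_model):
    """volume, mask: (D, H, W) tensors; mask == 1 marks tissue to inpaint."""
    # Stage 1: axial 2D inpainting, slice by slice along the depth axis.
    coarse = volume.clone()
    for z in range(coarse.shape[0]):
        if mask[z].any():
            coarse[z] = axial_model(coarse[z], mask[z])

    # Stage 2: coronal refinement. Reslicing the coarse volume along H lets
    # the second model see (and smooth) the across-slice discontinuities
    # that independent axial slices leave behind.
    refined = coarse.permute(1, 0, 2).contiguous()   # (H, D, W)
    mask_cor = mask.permute(1, 0, 2)
    for y in range(refined.shape[0]):
        if mask_cor[y].any():
            refined[y] = coronal_model(refined[y], mask_cor[y])
    return refined.permute(1, 0, 2).contiguous()     # back to (D, H, W)

# Identity stand-ins so the sketch runs end to end.
identity = lambda sl, m: sl
vol = torch.rand(32, 64, 64)
msk = torch.zeros_like(vol)
msk[10:20, 20:40, 20:40] = 1
out = hierarchical_inpaint(vol, msk, identity, identity)
```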

CAPRI-CT: Causal Analysis and Predictive Reasoning for Image Quality Optimization in Computed Tomography

Sneha George Gnanakalavathy, Hairil Abdul Razak, Robert Meertens, Jonathan E. Fieldsend, Xujiong Ye, Mohammed M. Abdelsamea

arXiv preprint · Jul 23, 2025
In computed tomography (CT), achieving high image quality while minimizing radiation exposure remains a key clinical challenge. This paper presents CAPRI-CT, a novel causal-aware deep learning framework for Causal Analysis and Predictive Reasoning for Image Quality Optimization in CT imaging. CAPRI-CT integrates image data with acquisition metadata (such as tube voltage, tube current, and contrast agent types) to model the underlying causal relationships that influence image quality. An ensemble of Variational Autoencoders (VAEs) is employed to extract meaningful features and generate causal representations from observational data, including CT images and associated imaging parameters. These input features are fused to predict the Signal-to-Noise Ratio (SNR) and support counterfactual inference, enabling what-if simulations, such as changes in contrast agents (types and concentrations) or scan parameters. CAPRI-CT is trained and validated using an ensemble learning approach, achieving strong predictive performance. By facilitating both prediction and interpretability, CAPRI-CT provides actionable insights that could help radiologists and technicians design more efficient CT protocols without repeated physical scans. The source code and dataset are publicly available at https://github.com/SnehaGeorge22/capri-ct.
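
As a loose illustration of the metadata-fusion idea, the sketch below regresses SNR from an image latent concatenated with acquisition parameters and answers a what-if query by editing those parameters. It uses a single plain encoder rather than the paper's VAE ensemble, and all layer sizes and the metadata encoding are assumptions.

```python
import torch
import torch.nn as nn

class SnrPredictor(nn.Module):
    def __init__(self, latent_dim=32, meta_dim=3):
        super().__init__()
        # Image encoder producing a compact latent (VAE sampling omitted).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, latent_dim),
        )
        # Head fusing the latent with metadata (e.g. kVp, mA, contrast code).
        self.head = nn.Sequential(
            nn.Linear(latent_dim + meta_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, image, meta):
        return self.head(torch.cat([self.encoder(image), meta], dim=1))

model = SnrPredictor()
ct = torch.rand(1, 1, 128, 128)
snr_observed = model(ct, torch.tensor([[120.0, 200.0, 1.0]]))
# What-if query: same image, halved tube current.
snr_counterfactual = model(ct, torch.tensor([[120.0, 100.0, 1.0]]))
```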

Mitigating Data Bias in Healthcare AI with Self-Supervised Standardization.

Lan G, Zhu Y, Xiao S, Iqbal M, Yang J

PubMed paper · Jul 23, 2025
The rapid advancement of artificial intelligence (AI) in healthcare has accelerated innovations in medical algorithms, yet its broader adoption faces critical ethical and technical barriers. A key challenge lies in algorithmic bias stemming from heterogeneous medical data across institutions, equipment, and workflows, which may perpetuate disparities in AI-driven diagnoses and exacerbate inequities in patient care. While AI's ability to extract deep features from large-scale data offers transformative potential, its effectiveness heavily depends on standardized, high-quality datasets. Current standardization gaps not only limit model generalizability but also raise concerns about reliability and fairness in real-world clinical settings, particularly for marginalized populations. Addressing these urgent issues, this paper proposes an ethical AI framework centered on a novel self-supervised medical image standardization method. By integrating self-supervised image style conversion, channel attention mechanisms, and contrastive learning-based loss functions, our approach enhances structural and style consistency in diverse datasets while preserving patient privacy through decentralized learning paradigms. Experiments across multi-institutional medical image datasets demonstrate that our method significantly improves AI generalizability without requiring centralized data sharing. By bridging the data standardization gap, this work advances technical foundations for trustworthy AI in healthcare.
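
A contrastive objective of the kind mentioned above can be illustrated with a minimal NT-Xent-style loss that pulls each scan toward its style-standardized counterpart and away from other scans in the batch; the paper's exact loss may differ.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z_orig, z_std, temperature=0.1):
    """z_orig, z_std: (N, d) embeddings of paired views of the same scans."""
    z = F.normalize(torch.cat([z_orig, z_std], dim=0), dim=1)   # (2N, d)
    sim = z @ z.t() / temperature                               # (2N, 2N)
    n = z_orig.shape[0]
    # The positive for sample i is its paired view at index i + n (and back).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    sim.fill_diagonal_(float('-inf'))   # never match a sample with itself
    return F.cross_entropy(sim, targets)

loss = contrastive_loss(torch.randn(8, 64), torch.randn(8, 64))
```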

Artificial Intelligence Empowers Novice Users to Acquire Diagnostic-Quality Echocardiography.

Trost B, Rodrigues L, Ong C, Dezellus A, Goldberg YH, Bouchat M, Roger E, Moal O, Singh V, Moal B, Lafitte S

PubMed paper · Jul 22, 2025
Cardiac ultrasound exams provide real-time data to guide clinical decisions but require highly trained sonographers. Artificial intelligence (AI) that uses deep learning algorithms to guide novices in the acquisition of diagnostic echocardiographic studies may broaden access and improve care. The objective of this trial was to evaluate whether nurses without previous ultrasound experience (novices) could obtain diagnostic-quality acquisitions of 10 echocardiographic views using AI-based software. This noninferiority study was prospective, international, nonrandomized, and conducted at 2 medical centers, in the United States and France, from November 2023 to August 2024. Two limited cardiac exams were performed on adult patients scheduled for a clinically indicated echocardiogram; one was conducted by a novice using AI guidance and one by an expert (experienced sonographer or cardiologist) without it. Primary endpoints were evaluated by 5 experienced cardiologists to assess whether the novice exam was of sufficient quality to visually analyze the left ventricular size and function, the right ventricle size, and the presence of nontrivial pericardial effusion. Secondary endpoints included 8 additional cardiac parameters. A total of 240 patients (mean age 62.6 years; 117 women (48.8%); mean body mass index 26.6 kg/m²) completed the study. One hundred percent of the exams performed by novices with the studied software were of sufficient quality to assess the primary endpoints. Cardiac parameters assessed in exams conducted by novices and experts were strongly correlated. AI-based software provides a safe means for novices to perform diagnostic-quality cardiac ultrasounds after a short training period.

Pyramid Hierarchical Masked Diffusion Model for Imaging Synthesis

Xiaojiao Xiao, Qinmin Vivian Hu, Guanghui Wang

arXiv preprint · Jul 22, 2025
Medical image synthesis plays a crucial role in clinical workflows, addressing the common issue of missing imaging modalities due to factors such as extended scan times, scan corruption, artifacts, patient motion, and intolerance to contrast agents. The paper presents a novel image synthesis network, the Pyramid Hierarchical Masked Diffusion Model (PHMDiff), which employs a multi-scale hierarchical approach for more detailed control over synthesizing high-quality images across different resolutions and layers. Specifically, the model uses randomly sampled multi-scale, high-proportion masks to speed up diffusion model training while balancing detail fidelity and overall structure. The Transformer-based diffusion process incorporates cross-granularity regularization, modeling mutual-information consistency across each granularity's latent space and thereby enhancing pixel-level perceptual accuracy. Comprehensive experiments on two challenging datasets demonstrate that PHMDiff achieves superior performance in both Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM), highlighting its capability to produce high-quality synthesized images with excellent structural integrity. Ablation studies further confirm the contributions of each component. Furthermore, PHMDiff, as a multi-scale image synthesis framework across and within medical imaging modalities, shows significant advantages over other methods. The source code is available at https://github.com/xiaojiao929/PHMDiff.
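
The random multi-scale masking idea can be sketched as follows, assuming block masks drawn at one randomly chosen grid resolution per training step with a high (here 75%) masking ratio; the grid sizes and ratio are illustrative, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def random_scale_mask(size=256, grids=(4, 8, 16), ratio=0.75):
    """Block mask at one randomly chosen grid scale; 1 = visible, 0 = masked."""
    g = grids[torch.randint(len(grids), (1,)).item()]
    keep = (torch.rand(1, 1, g, g) > ratio).float()   # keep ~25% of blocks
    return F.interpolate(keep, size=(size, size), mode='nearest')

m = random_scale_mask()
print(f"visible fraction: {m.mean():.3f}")
```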

MAN-GAN: a mask-adaptive normalization based generative adversarial networks for liver multi-phase CT image generation.

Zhao W, Chen W, Fan L, Shang Y, Wang Y, Situ W, Li W, Liu T, Yuan Y, Liu J

PubMed paper · Jul 22, 2025
Liver multiphase enhanced computed tomography (MPECT) is vital in clinical practice, but its utility is limited by various factors. We aimed to develop a deep learning network capable of automatically generating MPECT images from standard non-contrast CT scans. Dataset 1 included 374 patients and was divided into three parts: a training set, a validation set, and a test set. Dataset 2 included 144 patients with one specific liver disease and was used as an internal test dataset. We further collected another dataset comprising 83 patients for external validation. We then propose a Mask-Adaptive Normalization-based Generative Adversarial Network with Cycle-Consistency Loss (MAN-GAN) to achieve non-contrast CT to MPECT translation. To assess the efficiency of MAN-GAN, we conducted a comparative analysis with state-of-the-art methods commonly employed in diverse medical image synthesis tasks. Moreover, two subjective radiologist evaluation studies were performed to verify the clinical usefulness of the generated images. MAN-GAN outperformed the baseline network and other state-of-the-art methods in generating all three phases. These results were verified on the internal and external datasets. According to the radiological evaluation, the image quality of the generated images in all three phases is above average, and the similarity between real and generated images in all three phases is satisfactory. MAN-GAN demonstrates the feasibility of liver MPECT image translation based on non-contrast images and achieves state-of-the-art performance via the subtraction strategy. It has great potential for resolving the dilemma of liver CT contrast scanning and aiding further clinical scenarios involving liver intervention.
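
One plausible reading of "mask-adaptive normalization" is a SPADE-style layer in which the organ mask predicts a per-pixel scale and shift applied after instance normalization; the sketch below shows that pattern with assumed layer sizes and should not be taken as the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskAdaptiveNorm(nn.Module):
    def __init__(self, channels, hidden=64):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.shared = nn.Sequential(nn.Conv2d(1, hidden, 3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden, channels, 3, padding=1)
        self.beta = nn.Conv2d(hidden, channels, 3, padding=1)

    def forward(self, feat, mask):
        # Resize the organ mask to the feature resolution, then let it
        # predict a per-pixel scale and shift for the normalized features.
        mask = F.interpolate(mask, size=feat.shape[2:], mode='nearest')
        h = self.shared(mask)
        return self.norm(feat) * (1 + self.gamma(h)) + self.beta(h)

layer = MaskAdaptiveNorm(32)
out = layer(torch.randn(2, 32, 64, 64), torch.rand(2, 1, 256, 256))
```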

Supervised versus unsupervised GAN for pseudo-CT synthesis in brain MR-guided radiotherapy.

Kermani MZ, Tavakoli MB, Khorasani A, Abedi I, Sadeghi V, Amouheidari A

PubMed paper · Jul 22, 2025
Radiotherapy is a crucial treatment for malignant brain tumors. To address the limitations of CT-based treatment planning, recent research has explored MR-only radiotherapy, requiring precise MR-to-CT synthesis. This study compares two deep learning approaches, supervised (Pix2Pix) and unsupervised (CycleGAN), for generating pseudo-CT (pCT) images from T1- and T2-weighted MR sequences. 3270 paired T1- and T2-weighted MRI images were collected and registered with corresponding CT images. After preprocessing, a supervised pCT generative model was trained using the Pix2Pix framework, and an unsupervised generative network (CycleGAN) was also trained to enable a comparative assessment of pCT quality relative to the Pix2Pix model. To assess differences between pCT and reference CT images, three key metrics (SSIM, PSNR, and MAE) were used. Additionally, a dosimetric evaluation was performed on selected cases to assess clinical relevance. The average SSIM, PSNR, and MAE for Pix2Pix on T1 images were 0.964 ± 0.03, 32.812 ± 5.21 dB, and 79.681 ± 9.52 HU, respectively. Statistical analysis revealed that Pix2Pix significantly outperformed CycleGAN in generating high-fidelity pCT images (p < 0.05). There was no notable difference in the effectiveness of T1-weighted versus T2-weighted MR images for generating pCT (p > 0.05). Dosimetric evaluation confirmed comparable dose distributions between pCT and reference CT, supporting clinical feasibility. Both supervised and unsupervised methods demonstrated the capability to generate accurate pCT images from conventional T1- and T2-weighted MR sequences. While supervised methods like Pix2Pix achieve higher accuracy, unsupervised approaches such as CycleGAN offer greater flexibility by eliminating the need for paired training data, making them suitable for applications where paired data is unavailable.
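
For reference, the three reported metrics are typically computed as below, here with scikit-image on Hounsfield-valued arrays; the `data_range` choice is an illustrative assumption, not taken from the paper.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def pct_metrics(ct, pct, data_range=2000.0):
    """ct, pct: 2D arrays in Hounsfield units."""
    ssim = structural_similarity(ct, pct, data_range=data_range)
    psnr = peak_signal_noise_ratio(ct, pct, data_range=data_range)  # dB
    mae = np.abs(ct - pct).mean()                                   # HU
    return ssim, psnr, mae

ct = np.random.uniform(-1000, 1000, (128, 128))
pct = ct + np.random.normal(0, 20, ct.shape)
print(pct_metrics(ct, pct))
```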

The safety and accuracy of radiation-free spinal navigation using a short, scoliosis-specific BoneMRI-protocol, compared to CT.

Lafranca PPG, Rommelspacher Y, Walter SG, Muijs SPJ, van der Velden TA, Shcherbakova YM, Castelein RM, Ito K, Seevinck PR, Schlösser TPC

PubMed paper · Jul 21, 2025
Spinal navigation systems require pre- and/or intra-operative 3-D imaging, which exposes young patients to harmful radiation. We assessed a scoliosis-specific MRI protocol that provides T2-weighted MRI and AI-generated synthetic-CT (sCT) scans through deep learning algorithms. This study aims to compare MRI-based synthetic-CT spinal navigation to CT for safety and accuracy of pedicle screw planning and placement at thoracic and lumbar levels. Spines of 5 cadavers were scanned with thin-slice CT and the scoliosis-specific MRI protocol (to create sCT). Preoperatively, screw trajectories were planned on both CT and sCT. Subsequently, four spine surgeons performed surface-matched, navigated placement of 2.5 mm k-wires in all pedicles from T3 to L5. Randomization for CT/sCT, surgeon, and side was performed (1:1 ratio). On postoperative CT scans, virtual screws were simulated over the k-wires. Maximum angulation, distance between planned and postoperative screw positions, and medial breach rate (Gertzbein-Robbins classification) were assessed. 140 k-wires were inserted; 3 were excluded. There were no pedicle breaches > 2 mm. Of the sCT-guided screws, 59 were grade A and 10 grade B. Of the CT-guided screws, 47 were grade A and 21 grade B (p = 0.022). The average distance (± SD) between intraoperative and postoperative screw positions was 2.3 ± 1.5 mm for sCT-guided screws and 2.4 ± 1.8 mm for CT (p = 0.78); the average maximum angulation (± SD) was 3.8 ± 2.5° for sCT and 3.9 ± 2.9° for CT (p = 0.75). MRI-based, AI-generated synthetic-CT spinal navigation allows for safe and accurate planning and placement of thoracic and lumbar pedicle screws in a cadaveric model, without significant differences in distance and angulation between planned and postoperative screw positions compared to CT.
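
The two geometric endpoints, entry-point distance and axis angulation between planned and postoperative screws, reduce to simple vector arithmetic; the sketch below uses made-up coordinates in millimeters.

```python
import numpy as np

def screw_deviation(entry_plan, tip_plan, entry_post, tip_post):
    """Entry-point offset (mm) and angle (deg) between two screw axes."""
    d_entry = np.linalg.norm(np.asarray(entry_post) - np.asarray(entry_plan))
    v1 = np.asarray(tip_plan) - np.asarray(entry_plan)
    v2 = np.asarray(tip_post) - np.asarray(entry_post)
    cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return d_entry, np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

dist, ang = screw_deviation([0, 0, 0], [0, 5, 40], [1.5, 0.5, 0], [2, 6, 39])
print(f"entry offset {dist:.1f} mm, angulation {ang:.1f} deg")
```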

Artificial intelligence-generated apparent diffusion coefficient (AI-ADC) maps for prostate gland assessment: a multi-reader study.

Ozyoruk KB, Harmon SA, Yilmaz EC, Huang EP, Gelikman DG, Gaur S, Giganti F, Law YM, Margolis DJ, Jadda PK, Raavi S, Gurram S, Wood BJ, Pinto PA, Choyke PL, Turkbey B

PubMed paper · Jul 21, 2025
To compare the quality of AI-ADC maps and standard ADC maps in a multi-reader study. The multi-reader study included 74 consecutive patients (median age = 66 years [IQR = 57.25-71.75 years]; median PSA = 4.30 ng/mL [IQR = 1.33-7.75 ng/mL]) with suspected or confirmed prostate cancer (PCa), who underwent mpMRI between October 2023 and January 2024. The study was conducted in two rounds, separated by a 4-week wash-out period. In each round, four readers evaluated T2W-MRI and either standard or AI-generated ADC (AI-ADC) maps. Fleiss' kappa and quadratic-weighted Cohen's kappa statistics were used to assess inter-reader agreement. Linear mixed effect models were employed to compare the quality evaluation of standard versus AI-ADC maps. AI-ADC maps exhibited significantly higher imaging quality than standard ADC maps, with higher ratings for windowing ease (β = 0.67 [95% CI 0.30-1.04], p < 0.05) and prostate boundary delineation (β = 1.38 [95% CI 1.03-1.73], p < 0.001), and greater reductions in distortion (β = 1.68 [95% CI 1.30-2.05], p < 0.001) and noise (β = 0.56 [95% CI 0.24-0.88], p < 0.001). AI-ADC maps reduced reacquisition requirements for all readers (β = 2.23 [95% CI 1.69-2.76], p < 0.001), supporting potential workflow efficiency gains. No differences were observed in inter-reader agreement between AI-ADC and standard ADC maps. Our multi-reader study demonstrated that AI-ADC maps improved prostate boundary delineation and had lower image noise, fewer distortions, and higher overall image quality compared to standard ADC maps. Question: Can we synthesize apparent diffusion coefficient (ADC) maps with AI to achieve higher-quality maps? Findings: On average, readers rated quality factors of AI-ADC maps higher than those of standard ADC maps in 34.80% of cases, compared to 5.07% for standard ADC (p < 0.01). Clinical relevance: AI-ADC maps may serve as a reliable diagnostic support tool thanks to their high quality, particularly when the acquired ADC maps include artifacts.
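
The quadratic-weighted Cohen's kappa used for inter-reader agreement is available directly in scikit-learn; the rating vectors below are made-up examples on a 1-5 quality scale.

```python
from sklearn.metrics import cohen_kappa_score

reader_a = [4, 5, 3, 4, 5, 2, 4, 4]
reader_b = [4, 4, 3, 5, 5, 2, 3, 4]
kappa = cohen_kappa_score(reader_a, reader_b, weights="quadratic")
print(f"quadratic-weighted kappa: {kappa:.2f}")
```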

Benchmarking GANs, Diffusion Models, and Flow Matching for T1w-to-T2w MRI Translation

Andrea Moschetto, Lemuel Puglisi, Alec Sargood, Pierluigi Dell'Acqua, Francesco Guarnera, Sebastiano Battiato, Daniele Ravì

arXiv preprint · Jul 19, 2025
Magnetic Resonance Imaging (MRI) enables the acquisition of multiple image contrasts, such as T1-weighted (T1w) and T2-weighted (T2w) scans, each offering distinct diagnostic insights. However, acquiring all desired modalities increases scan time and cost, motivating research into computational methods for cross-modal synthesis. To address this, recent approaches aim to synthesize missing MRI contrasts from those already acquired, reducing acquisition time while preserving diagnostic quality. Image-to-image (I2I) translation provides a promising framework for this task. In this paper, we present a comprehensive benchmark of generative models, specifically Generative Adversarial Networks (GANs), diffusion models, and flow matching (FM) techniques, for T1w-to-T2w 2D MRI I2I translation. All frameworks are implemented with comparable settings and evaluated on three publicly available MRI datasets of healthy adults. Our quantitative and qualitative analyses show that the GAN-based Pix2Pix model outperforms diffusion and FM-based methods in terms of structural fidelity, image quality, and computational efficiency. Consistent with existing literature, these results suggest that flow-based models are prone to overfitting on small datasets and simpler tasks, and may require more data to match or surpass GAN performance. These findings offer practical guidance for deploying I2I translation techniques in real-world MRI workflows and highlight promising directions for future research in cross-modal medical image synthesis. Code and models are publicly available at https://github.com/AndreaMoschetto/medical-I2I-benchmark.
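
For readers less familiar with the third family benchmarked here, a minimal conditional flow matching objective in its linear-interpolation form looks like the sketch below; `velocity_net` is a toy placeholder on 2D points, not one of the paper's models, which condition on the T1w input.

```python
import torch
import torch.nn as nn

# Toy velocity field over 2D points plus a time coordinate.
velocity_net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 2))

def fm_loss(x0, x1):
    """x0: noise samples, x1: data samples, both (N, 2)."""
    t = torch.rand(x0.shape[0], 1)
    xt = (1 - t) * x0 + t * x1          # straight-line probability path
    target = x1 - x0                    # constant velocity along that path
    pred = velocity_net(torch.cat([xt, t], dim=1))
    return ((pred - target) ** 2).mean()

loss = fm_loss(torch.randn(16, 2), torch.randn(16, 2))
```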