Page 1 of 17 results

2.5D Multi-view Averaging Diffusion Model for 3D Medical Image Translation: Application to Low-count PET Reconstruction with CT-less Attenuation Correction.

Chen T, Hou J, Zhou Y, Xie H, Chen X, Liu Q, Guo X, Xia M, Duncan JS, Liu C, Zhou B

PubMed | May 15, 2025
Positron Emission Tomography (PET) is an important clinical imaging tool but inevitably introduces radiation exposure to patients and healthcare providers. Reducing the tracer injection dose and eliminating the CT acquisition for attenuation correction can reduce the overall radiation dose, but often results in PET with high noise and bias. Thus, it is desirable to develop 3D methods to translate non-attenuation-corrected low-dose PET (NAC-LDPET) into attenuation-corrected standard-dose PET (AC-SDPET). Recently, diffusion models have emerged as a new state-of-the-art deep learning method for image-to-image translation, outperforming traditional CNN-based methods. However, due to their high computation cost and memory burden, their use has been largely limited to 2D applications. To address these challenges, we developed a novel 2.5D Multi-view Averaging Diffusion Model (MADM) for 3D image-to-image translation, applied to NAC-LDPET-to-AC-SDPET translation. Specifically, MADM employs separate diffusion models for axial, coronal, and sagittal views, whose outputs are averaged at each sampling step to ensure 3D generation quality from multiple views. To accelerate the 3D sampling process, we also propose a strategy that uses a CNN-based 3D generation as a prior for the diffusion model. Our experimental results on human patient studies suggest that MADM can generate high-quality 3D translation images, outperforming previous CNN-based and diffusion-based baseline methods. The code is available at https://github.com/tianqic/MADM.
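The multi-view averaging step described in the abstract can be sketched in a few lines. This is an illustrative reading of the idea, not the authors' implementation: `denoise_view` is a toy stand-in for a trained per-view 2D diffusion denoiser, and the function names (`madm_sample`) are invented for this sketch.

```python
import numpy as np

def denoise_view(volume, axis):
    """Toy stand-in for a per-view 2D denoiser: processes the volume
    slice by slice along the given axis. A real model would run a
    trained 2D diffusion network on every slice of that view."""
    out = np.empty_like(volume)
    for i in range(volume.shape[axis]):
        sl = np.take(volume, i, axis=axis)
        # toy "denoising": shrink each slice toward its mean
        smoothed = 0.9 * sl + 0.1 * sl.mean()
        idx = [slice(None)] * volume.ndim
        idx[axis] = i
        out[tuple(idx)] = smoothed
    return out

def madm_sample(x_t, n_steps=10):
    """One reading of MADM's multi-view averaging: at every sampling
    step, denoise the 3D volume independently along the axial,
    coronal, and sagittal axes, then average the three outputs so all
    views agree on the evolving 3D estimate."""
    for _ in range(n_steps):
        views = [denoise_view(x_t, axis) for axis in (0, 1, 2)]
        x_t = np.mean(views, axis=0)
    return x_t
```

The averaging at every step (rather than averaging three finished volumes once at the end) is what keeps the three per-view models from drifting apart during sampling.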

Whole-body CT-to-PET synthesis using a customized transformer-enhanced GAN.

Xu B, Nie Z, He J, Li A, Wu T

PubMed | May 14, 2025
Positron emission tomography with 2-deoxy-2-[fluorine-18]fluoro-D-glucose integrated with computed tomography (18F-FDG PET-CT) is a multi-modality medical imaging technique widely used for screening and diagnosing lesions and tumors: CT provides detailed anatomical structures, while PET shows metabolic activity. Nevertheless, PET-CT has disadvantages such as long scanning times, high cost, and relatively high radiation doses.

Purpose: We propose a deep learning model for the whole-body CT-to-PET synthesis task, generating high-quality synthetic PET images that are comparable to real ones in both clinical relevance and diagnostic value.

Materials: We collect 102 pairs of 3D CT and PET scans, which are sliced into 27,240 pairs of 2D CT and PET images (training: 21,855 pairs; validation: 2,810 pairs; testing: 2,575 pairs).

Methods: We propose a Transformer-enhanced Generative Adversarial Network, CPGAN, for the whole-body CT-to-PET synthesis task. The model uses residual blocks and Fully Connected Transformer Residual (FCTR) blocks to capture both local features and global contextual information. A customized loss function incorporating structural consistency is designed to improve the quality of the synthesized PET images.

Results: Both quantitative and qualitative evaluations demonstrate the effectiveness of the CPGAN model. The mean and standard deviation of the NRMSE, PSNR, and SSIM values on the test set are (16.90 ± 12.27) × 10<sup>-4</sup>, 28.71 ± 2.67, and 0.926 ± 0.033, respectively, outperforming seven other state-of-the-art models. Three radiologists independently and blindly scored 100 randomly chosen PET images (50 real, 50 synthetic); a Wilcoxon signed-rank test found no statistically significant differences between the synthetic PET images and the real ones.
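For readers who want to reproduce metrics like those reported above, the three quantities can be computed roughly as follows. This is a generic sketch, not the paper's evaluation code: normalisation conventions for NRMSE vary between papers, and `ssim_global` is a single-window simplification of full sliding-window SSIM, so exact values may differ from the authors' pipeline.

```python
import numpy as np

def nrmse(ref, pred):
    """RMSE normalised by the reference dynamic range (one common
    convention; others normalise by the mean or the L2 norm)."""
    rmse = np.sqrt(np.mean((ref - pred) ** 2))
    return rmse / (ref.max() - ref.min())

def psnr(ref, pred, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - pred) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(ref, pred, data_range=1.0):
    """Single-window (global) SSIM; the full metric averages this
    statistic over local sliding windows."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), pred.mean()
    var_x, var_y = ref.var(), pred.var()
    cov = np.mean((ref - mu_x) * (pred - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

In practice, library implementations (e.g. scikit-image) are preferable for published comparisons, since the choice of data range and window strongly affects PSNR and SSIM values.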

Conclusions: Although CT images cannot directly reflect the biological information of metabolic tissues, the CPGAN model effectively synthesizes satisfactory PET images from CT scans, showing potential to reduce reliance on actual PET-CT scans.

A deep learning sex-specific body composition ageing biomarker using dual-energy X-ray absorptiometry scan.

Lian J, Cai P, Huang F, Huang J, Vardhanabhuti V

PubMed | May 13, 2025
Chronic diseases are closely linked to alterations in body composition, yet there is a need for reliable biomarkers to assess disease risk and progression. This study aimed to develop and validate a biological age indicator based on body composition derived from dual-energy X-ray absorptiometry (DXA) scans, offering a novel approach to evaluating health status and predicting disease outcomes. A deep learning model was trained on a reference population from the UK Biobank to estimate body composition biological age (BCBA). The model's performance was assessed across various groups, including individuals with typical and atypical body composition, those with pre-existing diseases, and those who developed diseases after DXA imaging. Key metrics such as the c-index were employed to examine BCBA's diagnostic and prognostic potential for type 2 diabetes, major adverse cardiovascular events (MACE), atherosclerotic cardiovascular disease (ASCVD), and hypertension. Here we show that BCBA is strongly associated with chronic disease diagnosis and risk prediction. BCBA demonstrated significant associations with type 2 diabetes (odds ratio 1.08 for females and 1.04 for males, p < 0.0005), MACE (odds ratio 1.10 for females and 1.11 for males, p < 0.0005), ASCVD (odds ratio 1.07 for females and 1.10 for males, p < 0.0005), and hypertension (odds ratio 1.06 for females and 1.04 for males, p < 0.0005). It outperformed standard cardiovascular risk profiles in predicting MACE and ASCVD. BCBA is a promising biomarker for assessing chronic disease risk and progression, with potential to improve clinical decision-making. Its integration into routine health assessments could aid early disease detection and personalised interventions.
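The c-index mentioned above can be illustrated with a minimal Harrell-style implementation; for binary outcomes such as a type 2 diabetes diagnosis it reduces to the ROC AUC. This is a generic sketch for intuition, not the study's code, and a production implementation (e.g. in lifelines) handles censoring and ties more carefully.

```python
def concordance_index(times, events, scores):
    """Harrell's c-index: among comparable pairs (the subject with the
    earlier time has an observed event), the fraction where the higher
    risk score belongs to the earlier event; score ties count half.
    times:  event or censoring times
    events: 1 if the event was observed at that time, 0 if censored
    scores: predicted risk (higher = worse prognosis)"""
    num, den = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair is comparable only if i's event is observed earlier
            if times[i] < times[j] and events[i]:
                den += 1
                if scores[i] > scores[j]:
                    num += 1.0       # concordant pair
                elif scores[i] == scores[j]:
                    num += 0.5       # tied risk scores
    return num / den
```

A value of 0.5 corresponds to random ranking and 1.0 to perfect risk ordering, which is why the c-index is a natural headline metric for a prognostic biomarker like BCBA.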

Fully volumetric body composition analysis for prognostic overall survival stratification in melanoma patients.

Borys K, Lodde G, Livingstone E, Weishaupt C, Römer C, Künnemann MD, Helfen A, Zimmer L, Galetzka W, Haubold J, Friedrich CM, Umutlu L, Heindel W, Schadendorf D, Hosch R, Nensa F

PubMed | May 12, 2025
Accurate assessment of expected survival in melanoma patients is crucial for treatment decisions. This study explores deep learning-based body composition analysis to predict overall survival (OS) using baseline Computed Tomography (CT) scans and to identify fully volumetric, prognostic body composition features. A deep learning network segmented baseline abdomen and thorax CTs from a cohort of 495 patients. The Sarcopenia Index (SI), Myosteatosis Fat Index (MFI), and Visceral Fat Index (VFI) were derived and statistically assessed for their prognostic value for OS. External validation was performed with 428 patients. SI was significantly associated with OS on both CT regions: abdomen (P ≤ 0.0001, HR: 0.36) and thorax (P ≤ 0.0001, HR: 0.27), with higher SI associated with prolonged survival. MFI was also associated with OS on abdomen (P ≤ 0.0001, HR: 1.16) and thorax CTs (P ≤ 0.0001, HR: 1.08), where higher MFI was linked to worse outcomes. Lastly, VFI was associated with OS on abdomen CTs (P ≤ 0.001, HR: 1.90), with higher VFI linked to poor outcomes. External validation replicated these results. SI, MFI, and VFI showed substantial potential as prognostic factors for OS in malignant melanoma patients. This approach leveraged existing CT scans without additional procedural or financial burdens, highlighting the seamless integration of DL-based body composition analysis into standard oncologic staging routines.

From Genome to Phenome: Opportunities and Challenges of Molecular Imaging.

Tian M, Hood L, Chiti A, Schwaiger M, Minoshima S, Watanabe Y, Kang KW, Zhang H

PubMed | May 8, 2025
The study of the human phenome is essential for understanding the complexities of wellness and disease and their transitions, with molecular imaging being a vital tool in this exploration. Molecular imaging embodies the 4 principles of human phenomics: precise measurement, accurate calculation or analysis, well-controlled manipulation or intervention, and innovative invention or creation. Its application has significantly enhanced the precision, individualization, and effectiveness of medical interventions. This article provides an overview of molecular imaging's technologic advancements and presents the potential use of molecular imaging in human phenomics and precision medicine. The integration of molecular imaging with multiomics data and artificial intelligence has the potential to transform health care, promoting proactive and preventive strategies. This evolving approach promises to deepen our understanding of the human phenome, lead to preclinical diagnostics and treatments, and establish quantitative frameworks for precision health management.

Multistage Diffusion Model With Phase Error Correction for Fast PET Imaging.

Gao Y, Huang Z, Xie X, Zhao W, Yang Q, Yang X, Yang Y, Zheng H, Liang D, Liu J, Chen R, Hu Z

PubMed | May 7, 2025
Fast PET imaging is clinically important for reducing motion artifacts and improving patient comfort. While recent diffusion-based deep learning methods have shown promise, they often fail to capture the true PET degradation process, suffer from accumulated inference errors, introduce artifacts, and require extensive reconstruction iterations. To address these challenges, we propose a novel multistage diffusion framework tailored for fast PET imaging. At the coarse level, we design a multistage structure to approximate the temporal non-linear PET degradation process in a data-driven manner, using paired PET images collected under different acquisition durations. A Phase Error Correction Network (PECNet) ensures consistency across stages by correcting accumulated deviations. At the fine level, we introduce a deterministic cold diffusion mechanism, which simulates intra-stage degradation through interpolation between known acquisition durations, significantly reducing reconstruction iterations to as few as 10. Evaluations on [<sup>68</sup>Ga]FAPI and [<sup>18</sup>F]FDG PET datasets demonstrate the superiority of our approach, achieving peak PSNRs of 36.2 dB and 39.0 dB, respectively, with average SSIMs above 0.97. Our framework offers high-fidelity PET imaging with fewer iterations, making it practical for accelerated clinical imaging.
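The deterministic cold-diffusion idea, degrading by interpolation between images from known acquisition durations and repeatedly re-degrading a clean estimate, can be sketched as below. The names `degrade` and `cold_restore` are assumptions for illustration; the paper's multistage design with PECNet is more elaborate than this single-stage sketch.

```python
import numpy as np

def degrade(x_full, x_short, t):
    """Deterministic 'cold' degradation: linear interpolation between
    the full-duration image (t = 0) and the short-duration image
    (t = 1), standing in for intra-stage degradation between known
    acquisition durations."""
    return (1.0 - t) * x_full + t * x_short

def cold_restore(x_short, restore_fn, n_steps=10):
    """Generic cold-diffusion sampling: at each step, predict the
    clean image with restore_fn, then re-degrade the estimate to a
    slightly smaller t and continue until t reaches 0."""
    x_t = x_short
    for k in range(n_steps, 0, -1):
        t = k / n_steps
        x0_hat = restore_fn(x_t, t)                    # clean estimate
        x_t = degrade(x0_hat, x_short, (k - 1) / n_steps)
    return x_t
```

Because the degradation is deterministic rather than Gaussian noising, no stochastic sampling is needed, which is what makes as few as 10 iterations plausible.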

Potential of artificial intelligence for radiation dose reduction in computed tomography -A scoping review.

Bani-Ahmad M, England A, McLaughlin L, Hadi YH, McEntee M

PubMed | May 7, 2025
Artificial intelligence (AI) is now transforming medical imaging, with extensive ramifications for nearly every aspect of diagnostic imaging, including computed tomography (CT). This work aims to review, evaluate, and summarise the role of AI in radiation dose optimisation across three fundamental domains in CT: patient positioning, scan range determination, and image reconstruction. A comprehensive scoping review of the literature was performed. Electronic databases including Scopus, Ovid, EBSCOhost, and PubMed were searched for articles published between January 2018 and December 2024. Relevant articles were identified from their titles, had their abstracts evaluated, and, if deemed relevant, had their full text reviewed. Data extracted from the selected studies included the AI application, radiation dose, anatomical region, and any relevant evaluation metrics for the CT parameter to which AI was applied. Ninety articles met the selection criteria. The included studies evaluated the performance of AI for dose optimisation through patient positioning, scan range determination, and reconstruction across various CT examinations, including the abdomen, chest, head, neck, and pelvis, as well as CT angiography. A concise overview of the present state of AI in these three domains is provided, emphasising benefits, limitations, and impact on dose reduction in CT scanning. AI methods can help minimise positioning offsets and over-scanning caused by manual errors, and can help overcome the limitations associated with low-dose CT settings through deep learning image reconstruction algorithms. Further clinical integration of AI will continue to allow improvements in optimising CT scan protocols and radiation dose. This review underscores the significance of AI in optimising radiation doses in CT imaging, focusing on three key areas: patient positioning, scan range determination, and image reconstruction.