
Brain tau PET-based identification and characterization of subpopulations in patients with Alzheimer's disease using deep learning-derived saliency maps.

Li Y, Wang X, Ge Q, Graeber MB, Yan S, Li J, Li S, Gu W, Hu S, Benzinger TLS, Lu J, Zhou Y

PubMed · Jun 9 2025
Alzheimer's disease (AD) is a heterogeneous neurodegenerative disorder in which tau neurofibrillary tangles are a pathological hallmark closely associated with cognitive dysfunction and neurodegeneration. In this study, we used brain tau data to investigate AD heterogeneity by identifying and characterizing subpopulations among patients. We included 615 cognitively normal and 159 AD brain <sup>18</sup>F-flortaucipir PET scans, along with T1-weighted MRI, from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. A three-dimensional convolutional neural network model was employed for AD detection using standardized uptake value ratio (SUVR) images. Model-derived saliency maps were generated and employed as informative image features for clustering AD participants. Demographics, neuropsychological measures, and SUVRs were compared among the identified subpopulations. Correlations between neuropsychological measures and regional SUVRs were assessed. A generalized linear model was used to investigate the interaction effect of sex and APOE ε4 on regional SUVRs. Two distinct subpopulations of AD patients were revealed, denoted S<sub>Hi</sub> and S<sub>Lo</sub>. Compared to the S<sub>Lo</sub> group, the S<sub>Hi</sub> group exhibited a significantly higher global tau burden in the brain, but the two groups showed similar distributions of cognitive performance. In the S<sub>Hi</sub> group, the associations between neuropsychological measurements and regional tau deposition were weakened. Moreover, a significant interaction effect of sex and APOE ε4 on tau deposition was observed in the S<sub>Lo</sub> group, but no such effect was found in the S<sub>Hi</sub> group. Our results suggest that tau tangles, as shown by SUVR, continue to accumulate even after cognitive function plateaus in AD patients, highlighting the advantages of PET in later disease stages. The differing relationships between cognition and tau deposition, and between sex, APOE ε4, and tau deposition, offer potential for subtype-specific treatments. Targeting sex-specific and genetic factors influencing tau deposition, as well as interventions aimed at tau's impact on cognition, may be effective.
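
The abstract does not specify the saliency or clustering algorithms. Below is a minimal, hypothetical sketch of the saliency-driven clustering step, assuming a trained 3D CNN classifier `model` that takes a (1, 1, D, H, W) SUVR tensor and returns a pair of logits; gradient saliency and k-means stand in for whatever specific choices the authors made.

```python
import numpy as np
import torch
from sklearn.cluster import KMeans

def saliency_map(model, suvr_volume):
    """Gradient of the AD logit with respect to the input SUVR volume."""
    x = suvr_volume.clone().requires_grad_(True)   # shape (1, 1, D, H, W)
    ad_logit = model(x)[0, 1]                      # assume class index 1 = AD
    ad_logit.backward()
    return x.grad.abs().squeeze().numpy()

def cluster_ad_patients(model, ad_volumes, k=2):
    """Cluster AD patients on flattened saliency maps (k=2 -> S_Hi / S_Lo)."""
    model.eval()
    maps = [saliency_map(model, v).ravel() for v in ad_volumes]
    return KMeans(n_clusters=k, n_init=10).fit_predict(np.stack(maps))
```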

Clinical validation of a deep learning model for low-count PET image enhancement.

Long Q, Tian Y, Pan B, Xu Z, Zhang W, Xu L, Fan W, Pan T, Gong NJ

PubMed · Jun 5 2025
To investigate the effects of the deep learning model RaDynPET on fourfold reduced-count whole-body PET examinations. A total of 120 patients (84 in the internal cohort and 36 in the external cohort) undergoing <sup>18</sup>F-FDG PET/CT examinations were enrolled. PET images were reconstructed using the OSEM algorithm with 120-s (G120) and 30-s (G30) list-mode data. RaDynPET was developed to generate enhanced images (R30) from G30. Two experienced nuclear medicine physicians independently evaluated subjective image quality using a 5-point Likert scale. Standardized uptake values (SUV), standard deviations, liver signal-to-noise ratio (SNR), lesion tumor-to-background ratio (TBR), and contrast-to-noise ratio (CNR) were compared. Subgroup analyses evaluated performance across demographics, and lesion detectability was evaluated using the external dataset. RaDynPET was also compared to other deep learning methods. In the internal cohort, R30 demonstrated significantly higher image quality scores than G30 and G120. R30 showed excellent agreement with G120 for liver and lesion SUV values and surpassed G120 in liver SNR and CNR. Liver SNR and CNR of R30 were comparable to G120 in the thin subgroup, and the CNR of R30 was comparable to G120 in the younger age subgroup. In the external cohort, R30 maintained strong SUV agreement with G120, with lesion-level sensitivity and specificity of 95.45% and 98.41%, respectively. There was no statistically significant difference in lesion detection between R30 and G120. RaDynPET achieved the highest PSNR and SSIM among the compared deep learning methods. The RaDynPET model effectively restored high image quality while maintaining SUV agreement for <sup>18</sup>F-FDG PET scans acquired in 25% of the standard acquisition time.
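
For readers unfamiliar with the reported metrics, the sketch below gives the usual definitions of liver SNR, lesion TBR, and CNR computed from an SUV image and boolean region masks; the paper's exact ROI conventions are not stated and may differ.

```python
import numpy as np

def liver_snr(suv, liver_mask):
    """Mean over standard deviation within the liver ROI."""
    roi = suv[liver_mask]
    return roi.mean() / roi.std()

def lesion_tbr(suv, lesion_mask, background_mask):
    """Lesion SUVmax over mean background uptake."""
    return suv[lesion_mask].max() / suv[background_mask].mean()

def lesion_cnr(suv, lesion_mask, background_mask):
    """Lesion-background contrast over background noise."""
    bkg = suv[background_mask]
    return (suv[lesion_mask].mean() - bkg.mean()) / bkg.std()
```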

Predicting clinical outcomes using <sup>18</sup>F-FDG PET/CT-based radiomic features and machine learning algorithms in patients with esophageal cancer.

Mutevelizade G, Aydin N, Duran Can O, Teke O, Suner AF, Erdugan M, Sayit E

PubMed · Jun 4 2025
This study evaluated the relationships between <sup>18</sup>F-fluorodeoxyglucose PET/computed tomography (<sup>18</sup>F-FDG PET/CT) radiomic features and clinical parameters, including tumor localization, histopathological subtype, lymph node metastasis, mortality, and treatment response, in esophageal cancer (EC) patients undergoing chemoradiotherapy, as well as the predictive performance of various machine learning (ML) models. In this retrospective study, 39 patients with EC who underwent pretreatment <sup>18</sup>F-FDG PET/CT and received concurrent chemoradiotherapy were analyzed. Texture features were extracted using LIFEx software. Logistic regression, naive Bayes, random forest, extreme gradient boosting (XGB), and support vector machine classifiers were applied to predict clinical outcomes. Cox regression and Kaplan-Meier analyses were used to evaluate overall survival (OS), and the accuracy of the ML algorithms was quantified using the area under the receiver operating characteristic curve. Radiomic features showed significant associations with several clinical parameters. Lymph node metastasis, tumor localization, and treatment response emerged as predictors of OS. Among the ML models, XGB demonstrated the most consistent and highest predictive performance across clinical outcomes. Radiomic features extracted from <sup>18</sup>F-FDG PET/CT, when combined with ML approaches, may aid in predicting treatment response and clinical outcomes in EC. Radiomic features demonstrated value in assessing tumor heterogeneity; however, clinical parameters retained stronger prognostic value for OS.
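
As an illustration of the modeling step, here is a minimal sketch using XGBoost and scikit-learn, assuming the LIFEx texture features have been exported to a per-patient table (the file and column names are hypothetical):

```python
import pandas as pd
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

df = pd.read_csv("lifex_features.csv")           # one row per patient
X = df.drop(columns=["treatment_response"])      # texture features
y = df["treatment_response"]                     # 1 = responder, 0 = non-responder

model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"5-fold cross-validated AUC: {auc:.2f}")
```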

Personalized MR-Informed Diffusion Models for 3D PET Image Reconstruction

George Webber, Alexander Hammers, Andrew P. King, Andrew J. Reader

arXiv preprint · Jun 4 2025
Recent work has shown improved lesion detectability and flexibility to reconstruction hyperparameters (e.g. scanner geometry or dose level) when PET images are reconstructed by leveraging pre-trained diffusion models. Such methods train a diffusion model (without sinogram data) on high-quality, but still noisy, PET images. In this work, we propose a simple method for generating subject-specific PET images from a dataset of multi-subject PET-MR scans, synthesizing "pseudo-PET" images by transforming between different patients' anatomy using image registration. The images we synthesize retain information from the subject's MR scan, leading to higher resolution and the retention of anatomical features compared to the original set of PET images. With simulated and real [$^{18}$F]FDG datasets, we show that pre-training a personalized diffusion model with subject-specific "pseudo-PET" images improves reconstruction accuracy with low-count data. In particular, the method shows promise in combining information from a guidance MR scan without overly imposing anatomical features, demonstrating an improved trade-off between reconstructing PET-unique image features versus features present in both PET and MR. We believe this approach for generating and utilizing synthetic data has further applications to medical imaging tasks, particularly because patient-specific PET images can be generated without resorting to generative deep learning or large training datasets.
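
Conceptually, pseudo-PET synthesis reduces to inter-subject registration: align a donor subject's MR to the target subject's MR, then push the donor's PET through the same transform. A sketch with ANTsPy follows (the toolkit choice is our assumption; the paper specifies only image registration):

```python
import ants

def make_pseudo_pet(target_mr_path, donor_mr_path, donor_pet_path):
    target_mr = ants.image_read(target_mr_path)
    donor_mr = ants.image_read(donor_mr_path)
    donor_pet = ants.image_read(donor_pet_path)
    # Deformable (SyN) registration between the two subjects' MR scans
    reg = ants.registration(fixed=target_mr, moving=donor_mr,
                            type_of_transform="SyN")
    # Warp the donor's PET into the target subject's anatomy
    return ants.apply_transforms(fixed=target_mr, moving=donor_pet,
                                 transformlist=reg["fwdtransforms"])
```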

Multi-organ metabolic profiling with [<sup>18</sup>F]FDG PET/CT predicts pathological response to neoadjuvant immunochemotherapy in resectable NSCLC.

Ma Q, Yang J, Guo X, Mu W, Tang Y, Li J, Hu S

PubMed · Jun 2 2025
To develop and validate a novel nomogram combining multi-organ PET metabolic metrics for major pathological response (MPR) prediction in resectable non-small cell lung cancer (rNSCLC) patients receiving neoadjuvant immunochemotherapy. This retrospective cohort included rNSCLC patients who underwent baseline [<sup>18</sup>F]FDG PET/CT prior to neoadjuvant immunochemotherapy at Xiangya Hospital from April 2020 to April 2024. Patients were randomly stratified into training (70%) and validation (30%) cohorts. Using deep learning-based automated segmentation, we quantified metabolic parameters (SUV<sub>mean</sub>, SUV<sub>max</sub>, SUV<sub>peak</sub>, MTV, TLG) and their ratios to the corresponding liver metabolic parameters for the primary tumor and nine key organs. Feature selection employed a tripartite approach: univariate analysis, LASSO regression, and random forest optimization. The final multivariable model was translated into a clinically interpretable nomogram, with validation assessing discrimination, calibration, and clinical utility. Among 115 patients (MPR rate: 63.5%, n = 73), five metabolic parameters emerged as predictive biomarkers for MPR: Spleen_SUV<sub>mean</sub>, Colon_SUV<sub>peak</sub>, Spine_TLG, Lesion_TLG, and the spleen-to-liver SUV<sub>max</sub> ratio. The nomogram demonstrated consistent performance across cohorts (training AUC = 0.78 [95% CI 0.67-0.88]; validation AUC = 0.78 [95% CI 0.62-0.94]), with robust calibration and enhanced clinical net benefit on decision curve analysis. Compared to tumor-only parameters, the multi-organ model showed higher specificity (100% vs. 92%) and positive predictive value (100% vs. 90%) in the validation set, while maintaining 76% overall accuracy. This first-reported multi-organ metabolic nomogram noninvasively predicts MPR in rNSCLC patients receiving neoadjuvant immunochemotherapy, outperforming conventional tumor-centric approaches. By quantifying systemic host-tumor metabolic crosstalk, this tool could help guide personalized therapeutic decisions while mitigating treatment-related risks, representing a paradigm shift toward precision immuno-oncology management.
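
The tripartite feature selection can be sketched as follows: a hypothetical implementation where `X` is a per-patient table of the multi-organ metabolic metrics and `y` the binary MPR label; the thresholds and hyperparameters are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LassoCV

def select_features(X, y, p_thresh=0.05, top_k=5):
    # 1) Univariate screen: keep metrics that differ between MPR groups
    keep = [c for c in X.columns
            if mannwhitneyu(X.loc[y == 1, c], X.loc[y == 0, c]).pvalue < p_thresh]
    # 2) LASSO shrinks coefficients of uninformative features to zero
    lasso = LassoCV(cv=5).fit(X[keep], y)
    keep = [c for c, w in zip(keep, lasso.coef_) if w != 0]
    # 3) Random-forest importance ranking keeps the strongest predictors
    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X[keep], y)
    order = np.argsort(rf.feature_importances_)[::-1][:top_k]
    return [keep[i] for i in order]
```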

Direct parametric reconstruction in dynamic PET using deep image prior and a novel parameter magnification strategy.

Hong X, Wang F, Sun H, Arabi H, Lu L

PubMed · Jun 2 2025
Multi-parametric imaging in positron emission tomography (PET) is challenging due to noisy dynamic data and the complex mapping to kinetic parameters. Although methods like direct parametric reconstruction have been proposed to improve image quality, limitations persist, particularly for nonlinear and small-value micro-parameters (e.g., k<sub>2</sub>, k<sub>3</sub>). This study presents a novel unsupervised deep learning approach to reconstruct and improve the quality of these micro-parameters. We proposed a direct parametric image reconstruction model, DIP-PM, integrating a deep image prior (DIP) with a parameter magnification (PM) strategy. The model employs a U-Net generator to predict multiple parametric images using a CT image prior, with each output channel subsequently magnified by a factor to adjust its intensity. The model was optimized with a log-likelihood loss computed between the measured projection data and the forward-projected data. Two tracer datasets were simulated for evaluation: <sup>82</sup>Rb data using the 1-tissue compartment (1 TC) model and <sup>18</sup>F-FDG data using the 2-tissue compartment (2 TC) model, with 10-fold magnification applied to the 1 TC k<sub>2</sub> and the 2 TC k<sub>3</sub>, respectively. DIP-PM was compared to the indirect method, a direct algorithm (OTEM), and the DIP method without parameter magnification (DIP-only). Performance was assessed on phantom data using peak signal-to-noise ratio (PSNR), normalized root mean square error (NRMSE), and the structural similarity index (SSIM), as well as on a real <sup>18</sup>F-FDG scan from a male subject. For the 1 TC model, OTEM performed well in K<sub>1</sub> reconstruction, but both the indirect and OTEM methods showed high noise and poor performance in k<sub>2</sub>. The DIP-only method suppressed noise in k<sub>2</sub> but failed to reconstruct fine structures in the myocardium. DIP-PM outperformed the other methods with well-preserved detailed structures, particularly in k<sub>2</sub>, achieving the best metrics (PSNR: 19.00, NRMSE: 0.3002, SSIM: 0.9289). For the 2 TC model, traditional methods exhibited high noise and blurred structures in estimating all nonlinear parameters (K<sub>1</sub>, k<sub>2</sub>, k<sub>3</sub>), while DIP-based methods significantly improved image quality. DIP-PM outperformed all methods in k<sub>3</sub> (PSNR: 21.89, NRMSE: 0.4054, SSIM: 0.8797) and consequently produced the most accurate 2 TC K<sub>i</sub> images (PSNR: 22.74, NRMSE: 0.4897, SSIM: 0.8391). On real FDG data, DIP-PM also showed clear advantages in estimating K<sub>1</sub>, k<sub>2</sub>, and k<sub>3</sub> while preserving myocardial structures. The results underscore the efficacy of DIP-based direct parametric imaging in generating and improving the quality of PET parametric images. This study suggests that the proposed DIP-PM method with the parameter magnification strategy can enhance the fidelity of nonlinear micro-parameter images.
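
A conceptual PyTorch sketch of the parameter-magnification idea: the generator predicts magnified micro-parameter channels from the CT prior, the channels are scaled back by their magnification factors before kinetic modeling, and the loss is the negative Poisson log-likelihood against the measured projections. Here `kinetic_model` and `projector` are placeholders for components the abstract does not detail.

```python
import torch

def dip_pm_loss(unet, ct_prior, kinetic_model, projector, measured, factors):
    params = unet(ct_prior)                        # (1, P, D, H, W) magnified maps
    scale = torch.tensor(factors).view(1, -1, 1, 1, 1)
    params = params / scale                        # e.g. divide the k2 channel by 10
    dynamic = kinetic_model(params)                # parametric maps -> time frames
    expected = projector(dynamic)                  # forward projection per frame
    # Negative Poisson log-likelihood between expected and measured data
    return (expected - measured * torch.log(expected + 1e-8)).sum()
```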

UR-CycleGAN: Denoising full-body low-dose PET images using cycle-consistent generative adversarial networks.

Liu Y, Sun Z, Liu H

PubMed · Jun 2 2025
This study aims to develop a CycleGAN-based denoising model to enhance the quality of low-dose PET (LDPET) images, making them as close as possible to standard-dose PET (SDPET) images. Using a Philips Vereos PET/CT system, whole-body <sup>18</sup>F-fluorodeoxyglucose (<sup>18</sup>F-FDG) PET images were acquired from 37 patients to facilitate the development of the UR-CycleGAN model. In this model, low-dose data were simulated by reconstructing PET images from a 30-s acquisition, while standard-dose data were reconstructed from a 2.5-min acquisition. The network was trained in a supervised manner on 13,210 pairs of PET images, and image quality was objectively evaluated using peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). Compared to the simulated low-dose data, the denoised PET images generated by our model showed significant improvement, with a clear trend toward SDPET image quality. The proposed method reduces acquisition time by 80% compared to standard-dose imaging while achieving image quality close to that of SDPET images. It also enhances visual detail fidelity, demonstrating the feasibility and practical utility of the model for significantly reducing imaging time while maintaining high image quality.
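
The generator objective can be sketched as below: an LSGAN adversarial term, a cycle-consistency term, and, because training here is supervised on paired data, a paired L1 term. The exact losses and weights are assumptions; the abstract does not enumerate them.

```python
import torch
import torch.nn.functional as F

def generator_loss(G_ld2sd, G_sd2ld, D_sd, low_dose, std_dose, lam=10.0):
    fake_sd = G_ld2sd(low_dose)           # denoised (pseudo standard-dose) image
    rec_ld = G_sd2ld(fake_sd)             # cycle back to the low-dose domain
    d_out = D_sd(fake_sd)
    adv = F.mse_loss(d_out, torch.ones_like(d_out))   # LSGAN adversarial term
    cyc = F.l1_loss(rec_ld, low_dose)                 # cycle consistency
    sup = F.l1_loss(fake_sd, std_dose)                # paired supervision
    return adv + lam * (cyc + sup)
```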

A CT-free deep-learning-based attenuation and scatter correction for copper-64 PET in different time-point scans.

Adeli Z, Hosseini SA, Salimi Y, Vahidfar N, Sheikhzadeh P

PubMed · Jun 1 2025
This study aimed to develop and evaluate a deep learning model for attenuation and scatter correction in whole-body <sup>64</sup>Cu-based PET imaging. A SwinUNETR model was implemented using the MONAI framework. Whole-body PET-nonAC and PET-CTAC image pairs were used for training, where PET-nonAC served as the input and PET-CTAC as the output. Due to the limited number of <sup>64</sup>Cu-based PET/CT images, a model pre-trained on 51 Ga-PSMA PET images was fine-tuned on 15 <sup>64</sup>Cu-based PET images via transfer learning. The model was trained without freezing any layers, adapting the learned features to the <sup>64</sup>Cu-based dataset. For testing, six additional <sup>64</sup>Cu-based PET images were used, representing 1-h, 12-h, and 48-h time points, with two images per group. The model performed best at the 12-h time point, with an MSE of 0.002 ± 0.0004 SUV<sup>2</sup>, PSNR of 43.14 ± 0.08 dB, and SSIM of 0.981 ± 0.002. At 48 h, accuracy decreased slightly (MSE = 0.036 ± 0.034 SUV<sup>2</sup>), but image quality remained high (PSNR = 44.49 ± 1.09 dB, SSIM = 0.981 ± 0.006). At 1 h, the model also showed strong results (MSE = 0.024 ± 0.002 SUV<sup>2</sup>, PSNR = 45.89 ± 5.23 dB, SSIM = 0.984 ± 0.005), demonstrating consistency across time points. Despite the limited size of the training dataset, fine-tuning a previously pre-trained model yielded acceptable performance. The results demonstrate that the proposed deep learning model can effectively generate PET-DLAC images that closely resemble PET-CTAC images, with only minor errors.
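
The fine-tuning step can be sketched with MONAI as below; the checkpoint path, patch size, and hyperparameters are illustrative, and `img_size` handling varies between MONAI versions.

```python
import torch
from monai.networks.nets import SwinUNETR

model = SwinUNETR(img_size=(96, 96, 96), in_channels=1, out_channels=1)
model.load_state_dict(torch.load("ga_psma_pretrained.pt"))  # Ga-PSMA weights

# Transfer learning without freezing: every layer keeps adapting to 64Cu data
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = torch.nn.MSELoss()

def train_step(pet_nonac, pet_ctac):
    """One update mapping a PET-nonAC patch to its PET-CTAC target."""
    optimizer.zero_grad()
    loss = loss_fn(model(pet_nonac), pet_ctac)
    loss.backward()
    optimizer.step()
    return loss.item()
```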

Eliminating the second CT scan of dual-tracer total-body PET/CT via deep learning-based image synthesis and registration.

Lin Y, Wang K, Zheng Z, Yu H, Chen S, Tang W, He Y, Gao H, Yang R, Xie Y, Yang J, Hou X, Wang S, Shi H

PubMed · Jun 1 2025
This study aims to develop and validate a deep learning framework designed to eliminate the second CT scan in dual-tracer total-body PET/CT imaging. We retrospectively included three cohorts totaling 247 patients who underwent dual-tracer total-body PET/CT imaging on two separate days (time interval: 1-11 days). Of these, 167 underwent [<sup>68</sup>Ga]Ga-DOTATATE/[<sup>18</sup>F]FDG, 50 underwent [<sup>68</sup>Ga]Ga-PSMA-11/[<sup>18</sup>F]FDG, and 30 underwent [<sup>68</sup>Ga]Ga-FAPI-04/[<sup>18</sup>F]FDG. A deep learning framework was developed that integrates a registration generative adversarial network (RegGAN) with non-rigid registration techniques. This approach allows the attenuation-correction CT (ACCT) images from the first scan to be transformed into pseudo-ACCT images for the second scan, which are then used for attenuation and scatter correction (ASC) of the second-tracer PET images. Additionally, the derived registration transform facilitates dual-tracer image fusion and analysis. The deep learning-based ASC PET images were evaluated using quantitative metrics, including mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM), across the whole body and specific regions. Furthermore, the quantitative accuracy of the PET images was assessed by calculating the standardized uptake value (SUV) bias in normal organs and lesions. The MAE for whole-body pseudo-ACCT images ranged from 97.64 to 112.59 HU across the four tracers. The deep learning-based ASC PET images demonstrated high similarity to the ground-truth PET images. The MAE of SUV for whole-body PET images was 0.06 for [<sup>68</sup>Ga]Ga-DOTATATE, 0.08 for [<sup>68</sup>Ga]Ga-PSMA-11, 0.06 for [<sup>68</sup>Ga]Ga-FAPI-04, and 0.05 for [<sup>18</sup>F]FDG. Additionally, the median absolute percent deviation of SUV was less than 2.6% for all normal organs, while the mean absolute percent deviation of SUV was less than 3.6% for lesions across the four tracers. The proposed deep learning framework, combining RegGAN and non-rigid registration, shows promise in reducing CT radiation dose for dual-tracer total-body PET/CT imaging, with successful validation across multiple tracers.
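
The reported image metrics can be reproduced with standard tools; the sketch below uses scikit-image, and the ROI conventions and SUV deviation definition are our reading of the abstract.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pseudo_acct(pseudo_ct, true_ct):
    """MAE (HU), PSNR, and SSIM of a pseudo-ACCT volume against ground truth."""
    mae_hu = np.abs(pseudo_ct - true_ct).mean()
    rng = true_ct.max() - true_ct.min()
    psnr = peak_signal_noise_ratio(true_ct, pseudo_ct, data_range=rng)
    ssim = structural_similarity(true_ct, pseudo_ct, data_range=rng)
    return mae_hu, psnr, ssim

def suv_percent_deviation(pred_suv, ref_suv, organ_mask):
    """Percent deviation of mean SUV within an organ or lesion mask."""
    ref = ref_suv[organ_mask].mean()
    return 100.0 * (pred_suv[organ_mask].mean() - ref) / ref
```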

Extracerebral Normalization of <sup>18</sup>F-FDG PET Imaging Combined with Behavioral CRS-R Scores Predicts Recovery from Disorders of Consciousness.

Guo K, Li G, Quan Z, Wang Y, Wang J, Kang F, Wang J

PubMed · Jun 1 2025
Identifying patients likely to regain consciousness early on is a challenge. The assessment of consciousness levels and the prediction of wakefulness probabilities are facilitated by <sup>18</sup>F-fluorodeoxyglucose (<sup>18</sup>F-FDG) positron emission tomography (PET). This study aimed to develop a prognostic model for predicting 1-year postinjury outcomes in prolonged disorders of consciousness (DoC) using <sup>18</sup>F-FDG PET alongside clinical behavioral scores. Eighty-seven patients with newly diagnosed prolonged DoC who had behavioral Coma Recovery Scale-Revised (CRS-R) scores and <sup>18</sup>F-FDG PET/computed tomography (<sup>18</sup>F-FDG PET/CT) scans were included. PET images were normalized by the cerebellum and by extracerebral tissue, respectively. Images were divided into training and independent test sets at a ratio of 5:1. Image-based classification was conducted using the DenseNet121 network, whereas tabular-based deep learning was employed to train on depth features extracted from the imaging models together with behavioral CRS-R scores. The performance of the models was assessed and compared using the McNemar test. Among the 87 patients with DoC who received routine treatments, 52 recovered consciousness and 35 did not. The model classifying standardized uptake value ratio images normalized by extracerebral tissue demonstrated higher specificity and lower sensitivity in predicting consciousness recovery than the model using cerebellum-normalized images, with area under the curve values of 0.751 ± 0.093 and 0.412 ± 0.104 on the test set, respectively; the difference was not statistically significant (P = 0.73). Combining the extracerebral-tissue-normalized standardized uptake value ratio and computed tomography depth features with behavioral CRS-R scores yielded the highest classification accuracy, with area under the curve values of 0.950 ± 0.027 and 0.933 ± 0.015 on the training and test sets, respectively, outperforming any individual modality. In this preliminary study, a multimodal prognostic model based on <sup>18</sup>F-FDG PET extracerebral normalization and behavioral CRS-R scores facilitated the prediction of recovery in DoC.
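
A hedged sketch of the multimodal combination: global-average-pooled depth features from a 3D DenseNet121 (MONAI) concatenated with the behavioral CRS-R total score and passed to a small classifier head. The head architecture and feature dimensionality are assumptions, not taken from the paper.

```python
import torch
from monai.networks.nets import DenseNet121

backbone = DenseNet121(spatial_dims=3, in_channels=1, out_channels=2)

def depth_features(suvr_volume):
    """Global-average-pooled DenseNet feature vector for one SUVR volume."""
    fmap = backbone.features(suvr_volume)          # (1, 1024, d, h, w)
    return fmap.mean(dim=(2, 3, 4))                # (1, 1024)

head = torch.nn.Sequential(
    torch.nn.Linear(1024 + 1, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2)
)

def predict_recovery(suvr_volume, crs_r_total):
    feats = depth_features(suvr_volume)
    score = torch.tensor([[float(crs_r_total)]])
    logits = head(torch.cat([feats, score], dim=1))
    return logits.softmax(dim=1)                   # [P(no recovery), P(recovery)]
```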