Page 9 of 11101 results

An explainable transformer model integrating PET and tabular data for histologic grading and prognosis of follicular lymphoma: a multi-institutional digital biopsy study.

Jiang C, Jiang Z, Zhang Z, Huang H, Zhou H, Jiang Q, Teng Y, Li H, Xu B, Li X, Xu J, Ding C, Li K, Tian R

PubMed · Jun 1 2025
Pathological grade is a critical determinant of clinical outcomes and decision-making in follicular lymphoma (FL). This study aimed to develop a deep learning model as a digital biopsy for the non-invasive identification of FL grade. We retrospectively included 513 FL patients from five independent hospital centers, randomly divided into training, internal validation, and external validation cohorts. A multimodal fusion Transformer model was developed integrating 3D PET tumor images with tabular data to predict FL grade. Additionally, the model was equipped with explainable modules, including Gradient-weighted Class Activation Mapping (Grad-CAM) for PET images, SHapley Additive exPlanations (SHAP) analysis for tabular data, and the calculation of predictive contribution ratios for both modalities, to enhance clinical interpretability and reliability. The predictive performance was evaluated using the area under the receiver operating characteristic curve (AUC) and accuracy, and its prognostic value was also assessed. The Transformer model demonstrated high accuracy in grading FL, with AUCs of 0.964-0.985 and accuracies of 90.2-96.7% in the training cohort, and similar performance in the validation cohorts (AUCs: 0.936-0.971, accuracies: 86.4-97.0%). Ablation studies confirmed that the fusion model outperformed single-modality models (AUCs: 0.974 vs. 0.956; accuracies: 89.8% vs. 85.8%). Interpretability analysis revealed that PET images contributed 81-89% of the predictive value, and Grad-CAM highlighted the tumor and peri-tumor regions. The model also effectively stratified patients by survival risk (P < 0.05), highlighting its prognostic value. Our study developed an explainable multimodal fusion Transformer model for accurate grading and prognosis of FL, with the potential to aid clinical decision-making.
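The per-modality "predictive contribution ratio" described above can be sketched as a late-fusion computation. Everything here (the norm-share definition, shapes, and function names) is an illustrative assumption, not the paper's actual architecture:

```python
import numpy as np

def modality_contribution(img_feat, tab_feat, w_img, w_tab):
    # Hypothetical two-branch fusion: each modality's contribution is
    # taken as its share of the projected embedding magnitude.
    c_img = np.linalg.norm(w_img @ img_feat)
    c_tab = np.linalg.norm(w_tab @ tab_feat)
    total = c_img + c_tab
    return c_img / total, c_tab / total

rng = np.random.default_rng(0)
img_feat = rng.normal(size=128)      # PET-branch embedding (toy)
tab_feat = rng.normal(size=16)       # tabular-branch embedding (toy)
w_img = rng.normal(size=(64, 128))   # projections into a shared space
w_tab = rng.normal(size=(64, 16))
r_img, r_tab = modality_contribution(img_feat, tab_feat, w_img, w_tab)
```

By construction the two ratios sum to one, so a reported "PET contributed 81-89%" maps directly to `r_img` in this toy framing.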

Robust whole-body PET image denoising using 3D diffusion models: evaluation across various scanners, tracers, and dose levels.

Yu B, Ozdemir S, Dong Y, Shao W, Pan T, Shi K, Gong K

PubMed · Jun 1 2025
Whole-body PET imaging plays an essential role in cancer diagnosis and treatment but suffers from low image quality. Traditional deep learning-based denoising methods work well for a specific acquisition but are less effective in handling diverse PET protocols. In this study, we proposed and validated a 3D Denoising Diffusion Probabilistic Model (3D DDPM) as a robust and universal solution for whole-body PET image denoising. The proposed 3D DDPM gradually injected noise into the images during the forward diffusion phase, allowing the model to learn to reconstruct the clean data during the reverse diffusion process. A 3D convolutional network was trained using high-quality data from the Biograph Vision Quadra PET/CT scanner to generate the score function, enabling the model to capture accurate PET distribution information extracted from the total-body datasets. The trained 3D DDPM was evaluated on datasets from four scanners, four tracer types, and six dose levels, representing a broad spectrum of clinical scenarios. The proposed 3D DDPM consistently outperformed 2D DDPM, 3D UNet, and 3D GAN, demonstrating its superior denoising performance across all tested conditions. Additionally, the model's uncertainty maps exhibited lower variance, reflecting its higher confidence in its outputs. The proposed 3D DDPM can effectively handle various clinical settings, including variations in dose levels, scanners, and tracers, establishing it as a promising foundational model for PET image denoising. The trained 3D DDPM can be used off the shelf by researchers as a whole-body PET image denoising solution. The code and model are available at https://github.com/Miche11eU/PET-Image-Denoising-Using-3D-Diffusion-Model.
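The forward diffusion phase mentioned above has a well-known closed form: any timestep can be sampled directly from the clean image. A minimal sketch, assuming the standard DDPM formulation with a linear noise schedule (the schedule and shapes here are illustrative, not the paper's settings):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    # Closed-form forward step of a DDPM:
    # x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps, eps

betas = np.linspace(1e-4, 0.02, 1000)   # linear schedule (a common choice)
rng = np.random.default_rng(0)
x0 = rng.normal(size=(8, 8, 8))         # toy 3D volume standing in for PET
xt, eps = forward_diffuse(x0, 999, betas, rng)
```

At the final timestep `alpha_bar` is nearly zero, so `xt` is almost pure noise; the denoising network is trained to predict `eps` and invert this process step by step.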

Eliminating the second CT scan of dual-tracer total-body PET/CT via deep learning-based image synthesis and registration.

Lin Y, Wang K, Zheng Z, Yu H, Chen S, Tang W, He Y, Gao H, Yang R, Xie Y, Yang J, Hou X, Wang S, Shi H

PubMed · Jun 1 2025
This study aims to develop and validate a deep learning framework designed to eliminate the second CT scan of dual-tracer total-body PET/CT imaging. We retrospectively included three cohorts of 247 patients who underwent dual-tracer total-body PET/CT imaging on two separate days (time interval: 1-11 days). Of these, 167 underwent [<sup>68</sup>Ga]Ga-DOTATATE/[<sup>18</sup>F]FDG, 50 underwent [<sup>68</sup>Ga]Ga-PSMA-11/[<sup>18</sup>F]FDG, and 30 underwent [<sup>68</sup>Ga]Ga-FAPI-04/[<sup>18</sup>F]FDG. A deep learning framework was developed that integrates a registration generative adversarial network (RegGAN) with non-rigid registration techniques. This approach allows for the transformation of attenuation-correction CT (ACCT) images from the first scan into pseudo-ACCT images for the second scan, which are then used for attenuation and scatter correction (ASC) of the second tracer's PET images. Additionally, the derived registration transform facilitates dual-tracer image fusion and analysis. The deep learning-based ASC PET images were evaluated using quantitative metrics, including mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM), across the whole body and specific regions. Furthermore, the quantitative accuracy of PET images was assessed by calculating standardized uptake value (SUV) bias in normal organs and lesions. The MAE for whole-body pseudo-ACCT images ranged from 97.64 to 112.59 HU across the four tracers. The deep learning-based ASC PET images demonstrated high similarity to the ground-truth PET images. The MAE of SUV for whole-body PET images was 0.06 for [<sup>68</sup>Ga]Ga-DOTATATE, 0.08 for [<sup>68</sup>Ga]Ga-PSMA-11, 0.06 for [<sup>68</sup>Ga]Ga-FAPI-04, and 0.05 for [<sup>18</sup>F]FDG.
Additionally, the median absolute percent deviation of SUV was less than 2.6% for all normal organs, while the mean absolute percent deviation of SUV was less than 3.6% for lesions across four tracers. The proposed deep learning framework, combining RegGAN and non-rigid registration, shows promise in reducing CT radiation dose for dual-tracer total-body PET/CT imaging, with successful validation across multiple tracers.
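Two of the evaluation metrics above, MAE (reported in HU for the pseudo-ACCT images) and PSNR, have simple standard definitions. A minimal sketch (function names and the toy arrays are my own, not the study's code):

```python
import numpy as np

def mae(pred, ref):
    # Mean absolute error, e.g. in HU between pseudo-ACCT and true ACCT
    return float(np.mean(np.abs(pred - ref)))

def psnr(pred, ref, data_range):
    # Peak signal-to-noise ratio in dB for a given dynamic range
    mse = np.mean((pred - ref) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))
```

For example, a constant error of 2 over a dynamic range of 2 gives an MAE of 2.0 and a PSNR of 0 dB, since the MSE then equals the squared range.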

Machine Learning Models of Voxel-Level [<sup>18</sup>F] Fluorodeoxyglucose Positron Emission Tomography Data Excel at Predicting Progressive Supranuclear Palsy Pathology.

Braun AS, Satoh R, Pham NTT, Singh-Reilly N, Ali F, Dickson DW, Lowe VJ, Whitwell JL, Josephs KA

PubMed · May 30 2025
To determine whether a machine learning model of voxel-level [<sup>18</sup>F]fluorodeoxyglucose positron emission tomography (PET) data could predict progressive supranuclear palsy (PSP) pathology, as well as outperform currently available biomarkers. One hundred and thirty-seven autopsied patients with PSP (n = 42) and other neurodegenerative diseases (n = 95) who underwent antemortem [<sup>18</sup>F]fluorodeoxyglucose PET and 3.0 Tesla magnetic resonance imaging (MRI) scans were analyzed. A linear support vector machine was applied to differentiate pathological groups, with sensitivity analyses performed to assess the influence of voxel size and region removal. A radial basis function model was also trained on the most important voxels to create a secondary model. The models were optimized on the main dataset (n = 104), and their performance was compared with the magnetic resonance parkinsonism index measured on MRI in the independent test dataset (n = 33). The model had the highest accuracy (0.91) and F-score (0.86) when voxel size was 6 mm. In this optimized model, important voxels for differentiating the groups were observed in the thalamus, midbrain, and cerebellar dentate. The secondary models found the combination of thalamus and dentate to have the highest accuracy (0.89) and F-score (0.81). The optimized secondary model showed the highest accuracy (0.91) and F-score (0.86) in the test dataset and outperformed the magnetic resonance parkinsonism index (0.81 and 0.70, respectively). The results suggest that glucose hypometabolism in the thalamus and cerebellar dentate has the highest potential for predicting PSP pathology. Our optimized machine learning model outperformed the best currently available biomarker for predicting PSP pathology. ANN NEUROL 2025.
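The accuracy and F-score figures quoted above follow directly from confusion-matrix counts. A minimal sketch of the standard definitions (the example counts are illustrative, not the study's confusion matrix):

```python
def accuracy(tp, tn, fp, fn):
    # Fraction of all cases classified correctly
    return (tp + tn) / (tp + tn + fp + fn)

def f_score(tp, fp, fn):
    # Harmonic mean of precision and recall (F1)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2.0 * precision * recall / (precision + recall)
```

With 9 true positives, 9 true negatives, 1 false positive, and 1 false negative, both accuracy and F1 come out to 0.9, which is the kind of arithmetic behind the paper's 0.91/0.86 pair.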

The value of artificial intelligence in PSMA PET: a pathway to improved efficiency and results.

Dadgar H, Hong X, Karimzadeh R, Ibragimov B, Majidpour J, Arabi H, Al-Ibraheem A, Khalaf AN, Anwar FM, Marafi F, Haidar M, Jafari E, Zarei A, Assadi M

PubMed · May 30 2025
This systematic review investigates the potential of artificial intelligence (AI) in improving the accuracy and efficiency of prostate-specific membrane antigen positron emission tomography (PSMA PET) scans for detecting metastatic prostate cancer. A comprehensive literature search was conducted across Medline, Embase, and Web of Science, adhering to PRISMA guidelines. Key search terms included "artificial intelligence," "machine learning," "deep learning," "prostate cancer," and "PSMA PET." The PICO framework guided the selection of studies focusing on AI's application in evaluating PSMA PET scans for staging lymph node and distant metastasis in prostate cancer patients. Inclusion criteria prioritized original English-language articles published up to October 2024, excluding studies using non-PSMA radiotracers, those analyzing only the CT component of PSMA PET-CT, studies focusing solely on intra-prostatic lesions, and non-original research articles. The review included 22 studies, with a mix of prospective and retrospective designs. AI algorithms employed included machine learning (ML), deep learning (DL), and convolutional neural networks (CNNs). The studies explored various applications of AI, including improving diagnostic accuracy, sensitivity, differentiation from benign lesions, standardization of reporting, and predicting treatment response. Results showed sensitivity ranging from 62% to 97% and accuracy (AUC up to 98%) in detecting metastatic disease, but also considerable variability in positive predictive value (39.2% to 66.8%). AI demonstrates significant promise in enhancing PSMA PET scan analysis for metastatic prostate cancer, offering improved efficiency and potentially better diagnostic accuracy.
However, the variability in performance and the "black box" nature of some algorithms highlight the need for larger prospective studies, improved model interpretability, and the continued involvement of experienced nuclear medicine physicians in interpreting AI-assisted results. AI should be considered a valuable adjunct, not a replacement, for expert clinical judgment.
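Much of the variability in positive predictive value reported above is expected on purely statistical grounds: PPV depends on disease prevalence in the cohort, not only on the model. A minimal sketch of Bayes' rule making that point (the example sensitivity/specificity/prevalence values are illustrative, not taken from the reviewed studies):

```python
def ppv(sensitivity, specificity, prevalence):
    # Bayes' rule: PPV = P(disease | positive test).
    # At fixed sensitivity/specificity, PPV still varies strongly
    # with the pre-test probability (prevalence) of the cohort.
    tp = sensitivity * prevalence
    fp = (1.0 - specificity) * (1.0 - prevalence)
    return tp / (tp + fp)
```

For instance, a test with 90% sensitivity and 90% specificity has a PPV of 0.9 at 50% prevalence but only 0.5 at 10% prevalence, a spread comparable to the 39-67% range the review reports.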

Motion-resolved parametric imaging derived from short dynamic [<sup>18</sup>F]FDG PET/CT scans.

Artesani A, van Sluis J, Providência L, van Snick JH, Slart RHJA, Noordzij W, Tsoumpas C

PubMed · May 29 2025
This study aims to assess the added value of utilizing short-dynamic whole-body PET/CT scans and implementing motion correction before quantifying metabolic rate, offering more insights into physiological processes. While this approach may not be commonly adopted, addressing motion effects is crucial due to their demonstrated potential to cause significant errors in parametric imaging. A 15-minute dynamic FDG PET acquisition protocol was utilized for four lymphoma patients undergoing therapy evaluation. Parametric imaging was obtained using a population-based input function (PBIF) derived from twelve patients with full 65-minute dynamic FDG PET acquisitions. AI-based registration methods were employed to correct misalignments both between PET and ACCT and between PET frames. Tumour characteristics were assessed using both parametric images and standardized uptake values (SUV). The motion correction process significantly reduced mismatches between images without significantly altering voxel intensity values, except for SUV<sub>max</sub>. Following the alignment of the attenuation correction map with the PET frame, an increase in SUV<sub>max</sub> in FDG-avid lymph nodes was observed, indicating its susceptibility to spatial misalignments. In contrast, the Patlak K<sub>i</sub> parameter was highly sensitive to misalignment across PET frames, which notably altered the Patlak slope. Upon completion of the motion correction process, the parametric representation revealed heterogeneous behaviour among lymph nodes compared to SUV images. Notably, a reduced volume of elevated metabolic rate was observed in the mediastinal lymph nodes despite an SUV of 5 g/ml, indicating potential perfusion or inflammation. Motion-resolved short-dynamic PET can enhance the utility and reliability of parametric imaging, an aspect often overlooked in commercial software.
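The Patlak slope the abstract refers to is the result of a simple linear regression: after an equilibration time t*, the ratio of tissue to plasma activity plotted against "normalized time" (the running integral of the input function divided by its current value) becomes a line whose slope is the net influx rate K<sub>i</sub>. A minimal sketch with synthetic curves built from a known K<sub>i</sub> (the curves, t*, and function names are illustrative assumptions):

```python
import numpy as np

def cumtrapz(y, t):
    # Running trapezoidal integral of y(t)
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return out

def patlak_ki(t, cp, ct, t_star=10.0):
    # Patlak plot: y = C_t/C_p vs x = (int_0^t C_p) / C_p.
    # After t*, the relation is linear with slope Ki.
    x = cumtrapz(cp, t) / cp
    y = ct / cp
    mask = t >= t_star
    ki, _intercept = np.polyfit(x[mask], y[mask], 1)
    return float(ki)

# Synthetic tissue curve built from a known Ki = 0.03 and V = 0.4
t = np.linspace(0.1, 60.0, 120)           # minutes
cp = np.exp(-0.05 * t) + 0.2              # toy plasma input function
ct = 0.03 * cumtrapz(cp, t) + 0.4 * cp    # irreversible-uptake tissue curve
```

Because K<sub>i</sub> is a slope fit across frames, any frame-to-frame misalignment perturbs individual points on the Patlak line and can bias the fit, which is why the abstract finds it more motion-sensitive than a single-frame SUV.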

Evaluation of locoregional invasiveness of early lung adenocarcinoma manifesting as ground-glass nodules via [<sup>68</sup>Ga]Ga-FAPI-46 PET/CT imaging.

Ruan D, Shi S, Guo W, Pang Y, Yu L, Cai J, Wu Z, Wu H, Sun L, Zhao L, Chen H

PubMed · May 24 2025
Accurate differentiation of the histologic invasiveness of early-stage lung adenocarcinoma is crucial for determining surgical strategies. This study aimed to investigate the potential of [<sup>68</sup>Ga]Ga-FAPI-46 PET/CT in assessing the invasiveness of early lung adenocarcinoma presenting as ground-glass nodules (GGNs) and identifying imaging features with strong predictive potential. This prospective study (NCT04588064) was conducted between July 2020 and July 2022, focusing on GGNs that were confirmed postoperatively to be either invasive adenocarcinoma (IAC), minimally invasive adenocarcinoma (MIA), or precursor glandular lesions (PGL). A total of 45 patients with 53 pulmonary GGNs were included in the study: 19 GGNs associated with PGL-MIA and 34 with IAC. Lung nodules were segmented using the Segment Anything Model in Medical Images (MedSAM) and the PET Tumor Segmentation Extension. Clinical characteristics, along with conventional and high-throughput radiomics features from high-resolution CT (HRCT) and PET scans, were analysed. The predictive performance of these features in differentiating between PGL or MIA (PGL-MIA) and IAC was assessed using 5-fold cross-validation across six machine learning algorithms. Model validation was performed on an independent external test set (n = 11). The Chi-squared, Fisher's exact, and DeLong tests were employed to compare the performance of the models. The maximum standardised uptake value (SUVmax) derived from [<sup>68</sup>Ga]Ga-FAPI-46 PET was identified as an independent predictor of IAC. A cut-off value of 1.82 yielded a sensitivity of 94% (32/34), specificity of 84% (16/19), and an overall accuracy of 91% (48/53) in the training set, while achieving 100% (12/12) accuracy in the external test set.
Radiomics-based classification further improved diagnostic performance, achieving a sensitivity of 97% (33/34), specificity of 89% (17/19), accuracy of 94% (50/53), and an area under the receiver operating characteristic curve (AUC) of 0.97 [95% CI: 0.93-1.00]. Compared with the CT-based radiomics model and the PET-based model, the combined PET/CT radiomics model did not show significant improvement in predictive performance. The key predictive feature was [<sup>68</sup>Ga]Ga-FAPI-46 PET log-sigma-7-mm-3D_firstorder_RootMeanSquared. The SUVmax derived from [<sup>68</sup>Ga]Ga-FAPI-46 PET/CT can effectively differentiate the invasiveness of early-stage lung adenocarcinoma manifesting as GGNs. Integrating high-throughput features from [<sup>68</sup>Ga]Ga-FAPI-46 PET/CT images can considerably enhance classification accuracy. NCT04588064; URL: https://clinicaltrials.gov/study/NCT04588064.
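The sensitivity/specificity/accuracy triple for a single SUVmax cut-off, as reported above, reduces to counting threshold decisions against labels. A minimal sketch (the toy SUV values and labels are illustrative, not the study cohort; labels: 1 = IAC, 0 = PGL-MIA):

```python
def cutoff_metrics(suv, labels, cutoff):
    # Classify as IAC when SUVmax >= cutoff, then tally the confusion matrix
    tp = sum(1 for s, y in zip(suv, labels) if s >= cutoff and y == 1)
    fn = sum(1 for s, y in zip(suv, labels) if s < cutoff and y == 1)
    tn = sum(1 for s, y in zip(suv, labels) if s < cutoff and y == 0)
    fp = sum(1 for s, y in zip(suv, labels) if s >= cutoff and y == 0)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    acc = (tp + tn) / len(labels)
    return sensitivity, specificity, acc
```

Applying the paper's cut-off of 1.82 to four toy nodules shows the mechanics; with the study's actual counts (32/34, 16/19, 48/53) the same arithmetic reproduces the reported 94%/84%/91%.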

Joint Reconstruction of Activity and Attenuation in PET by Diffusion Posterior Sampling in Wavelet Coefficient Space

Clémentine Phung-Ngoc, Alexandre Bousse, Antoine De Paepe, Hong-Phuong Dang, Olivier Saut, Dimitris Visvikis

arXiv preprint · May 24 2025
Attenuation correction (AC) is necessary for accurate activity quantification in positron emission tomography (PET). Conventional reconstruction methods typically rely on attenuation maps derived from a co-registered computed tomography (CT) or magnetic resonance imaging scan. However, this additional scan may complicate the imaging workflow, introduce misalignment artifacts, and increase radiation exposure. In this paper, we propose a joint reconstruction of activity and attenuation (JRAA) approach that eliminates the need for auxiliary anatomical imaging by relying solely on emission data. This framework combines a wavelet diffusion model (WDM) with diffusion posterior sampling (DPS) to reconstruct fully three-dimensional (3-D) data. Experimental results show that our method outperforms maximum likelihood activity and attenuation (MLAA) estimation and MLAA with UNet-based post-processing, and yields high-quality noise-free reconstructions across various count settings when time-of-flight (TOF) information is available. It is also able to reconstruct non-TOF data, although the reconstruction quality significantly degrades in low-count (LC) conditions, limiting its practical effectiveness in such settings. This approach represents a step towards stand-alone PET imaging by reducing the dependence on anatomical modalities while maintaining quantification accuracy, even in low-count scenarios when TOF information is available.
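The wavelet-coefficient space a WDM operates in can be illustrated with the simplest orthonormal wavelet, the Haar transform: the signal splits into approximation and detail coefficients, and the transform is exactly invertible. This 1-D sketch is for intuition only (the paper works with 3-D volumes and a specific wavelet choice not stated here):

```python
import numpy as np

def haar_fwd(x):
    # One level of the orthonormal Haar transform:
    # pairwise averages (approximation) and differences (detail)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_inv(a, d):
    # Exact inverse: interleave reconstructed even/odd samples
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

x = np.random.default_rng(0).normal(size=16)
a, d = haar_fwd(x)
```

Because the transform is orthonormal, energy is preserved and a diffusion model trained on `(a, d)` loses nothing relative to working on `x` directly, while the multiscale structure concentrates image content in fewer coefficients.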

Non-invasive arterial input function estimation using an MRA atlas and machine learning.

Vashistha R, Moradi H, Hammond A, O'Brien K, Rominger A, Sari H, Shi K, Vegh V, Reutens D

PubMed · May 23 2025
Quantifying biological parameters of interest through dynamic positron emission tomography (PET) requires an arterial input function (AIF) conventionally obtained from arterial blood samples. The AIF can also be non-invasively estimated from blood pools in PET images, often identified using co-registered MRI images. Deploying methods without blood sampling or the use of MRI generally requires total-body PET systems with a long axial field-of-view (LAFOV) that includes a large cardiovascular blood pool. However, the number of such systems in clinical use is currently much smaller than that of short axial field-of-view (SAFOV) scanners. We propose a data-driven approach for AIF estimation on SAFOV PET scanners that is non-invasive and requires neither MRI nor blood sampling, using only brain PET scans. The proposed method was validated using dynamic [<sup>18</sup>F]fluorodeoxyglucose ([<sup>18</sup>F]FDG) total-body PET data from 10 subjects. A variational inference-based machine learning approach was employed to correct for peak activity. The prior was estimated using a probabilistic vascular MRI atlas, registered to each subject's PET image to identify cerebral arteries in the brain. The estimated AIF using brain PET images (IDIF-Brain) was compared to that obtained using data from the descending aorta of the heart (IDIF-DA). Kinetic rate constants (K<sub>1</sub>, k<sub>2</sub>, k<sub>3</sub>) and net radiotracer influx (K<sub>i</sub>) for both cases were computed and compared. Qualitatively, the shape of IDIF-Brain matched that of IDIF-DA, capturing information on both the peak and tail of the AIF. The area under the curve (AUC) of IDIF-Brain and IDIF-DA were similar, with an average relative error of 9%. The mean Pearson correlations between kinetic parameters (K<sub>1</sub>, k<sub>2</sub>, k<sub>3</sub>) estimated with IDIF-DA and IDIF-Brain for each voxel were between 0.92 and 0.99 in all subjects, and for K<sub>i</sub>, it was above 0.97.
This study introduces a new approach for AIF estimation in dynamic PET using brain PET images, a probabilistic vascular atlas, and machine learning techniques. The findings demonstrate the feasibility of non-invasive and subject-specific AIF estimation for SAFOV scanners.
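The rate constants compared above combine into the net influx K<sub>i</sub> through the standard irreversible two-tissue-compartment relation used for [<sup>18</sup>F]FDG. A minimal sketch (the example rate-constant values are illustrative, not the study's estimates):

```python
def net_influx(K1, k2, k3):
    # Ki = K1 * k3 / (k2 + k3): net influx rate of the irreversible
    # two-tissue-compartment model commonly applied to [18F]FDG,
    # where K1 is plasma-to-tissue delivery, k2 efflux, k3 phosphorylation.
    return K1 * k3 / (k2 + k3)
```

Because K<sub>i</sub> is a ratio of the individual constants, correlated errors in K<sub>1</sub>, k<sub>2</sub>, and k<sub>3</sub> can partially cancel, consistent with the study finding higher agreement for K<sub>i</sub> (above 0.97) than for the individual parameters.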
