Page 6 of 876 results

Extracerebral Normalization of <sup>18</sup>F-FDG PET Imaging Combined with Behavioral CRS-R Scores Predict Recovery from Disorders of Consciousness.

Guo K, Li G, Quan Z, Wang Y, Wang J, Kang F, Wang J

PubMed | Jun 1 2025
Identifying early which patients are likely to regain consciousness remains a challenge. The assessment of consciousness levels and the prediction of wakefulness probabilities are facilitated by <sup>18</sup>F-fluorodeoxyglucose (<sup>18</sup>F-FDG) positron emission tomography (PET). This study aimed to develop a prognostic model for predicting 1-year postinjury outcomes in prolonged disorders of consciousness (DoC) using <sup>18</sup>F-FDG PET alongside clinical behavioral scores. Eighty-seven patients with newly diagnosed prolonged DoC who had behavioral Coma Recovery Scale-Revised (CRS-R) scores and <sup>18</sup>F-FDG PET/computed tomography (<sup>18</sup>F-FDG PET/CT) scans were included. PET images were normalized to the cerebellum and to extracerebral tissue, respectively. Images were divided into training and independent test sets at a ratio of 5:1. Image-based classification was conducted using the DenseNet121 network, whereas tabular-based deep learning was employed to train on depth features extracted from the imaging models together with behavioral CRS-R scores. Model performance was assessed and compared using the McNemar test. Among the 87 patients with DoC who received routine treatments, 52 recovered consciousness and 35 did not. The model classifying the standardized uptake value ratio normalized to extracerebral tissue showed higher specificity and lower sensitivity in predicting consciousness recovery than the model normalized to the cerebellum, with area under the curve values of 0.751 ± 0.093 and 0.412 ± 0.104 on the test set, respectively; the difference was not statistically significant (P = 0.73).
The combination of standardized uptake value ratio by extracerebral tissue and computed tomography depth features with behavioral CRS-R scores yielded the highest classification accuracy, with area under the curve values of 0.950 ± 0.027 and 0.933 ± 0.015 on the training and test sets, respectively, outperforming any individual modality. In this preliminary study, a multimodal prognostic model based on <sup>18</sup>F-FDG PET extracerebral normalization and behavioral CRS-R scores facilitated the prediction of recovery in DoC.
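Both normalization schemes compared in this study reduce to the same operation: dividing the PET volume, voxel by voxel, by the mean uptake in a chosen reference region. A minimal NumPy sketch of that step (function name and array shapes are illustrative, not from the paper):

```python
import numpy as np

def suvr_normalize(pet, reference_mask):
    """Divide a PET volume by the mean uptake inside a reference region.

    pet            : array of standardized uptake values (SUV)
    reference_mask : boolean array of the same shape selecting the
                     normalization region (cerebellum, or extracerebral
                     tissue as in the study above)
    """
    ref_mean = float(pet[reference_mask].mean())
    if ref_mean <= 0:
        raise ValueError("reference region has non-positive mean uptake")
    return pet / ref_mean  # voxelwise SUV ratio (SUVR)
```

The resulting SUVR volume is what the classification network then consumes; the choice of reference region changes only the scaling, which is why the two models above can diverge in sensitivity and specificity.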

A Novel Theranostic Strategy for Malignant Pulmonary Nodules by Targeted CEACAM6 with <sup>89</sup>Zr/<sup>131</sup>I-Labeled Tinurilimab.

Chen C, Zhu K, Wang J, Pan D, Wang X, Xu Y, Yan J, Wang L, Yang M

PubMed | Jun 1 2025
Lung adenocarcinoma (LUAD) is a major cause of cancer-related death worldwide, and early identification of malignant pulmonary nodules is the most effective way to reduce LUAD mortality. Despite the wide use of low-dose computed tomography (LDCT) in early LUAD screening, distinguishing malignant pulmonary nodules on LDCT remains a challenge. In this study, CEACAM6 (also called CD66c) is investigated as a potential biomarker for differentiating malignant lung nodules, and the CEACAM6-targeting monoclonal antibody (mAb, tinurilimab) is radiolabeled with <sup>89</sup>Zr and <sup>131</sup>I for theranostic applications. On the diagnostic side, machine learning confirms CEACAM6 as a specific extracellular marker for discriminating LUAD from benign nodules. The <sup>89</sup>Zr-labeled mAb shows highly specific uptake in CEACAM6-positive LUAD on positron emission tomography (PET) imaging, and its ability to distinguish malignant pulmonary nodules is significantly better than that of <sup>18</sup>F-fluorodeoxyglucose (FDG) on positron emission tomography/magnetic resonance (PET/MR) imaging. On the therapeutic side, a single treatment with the <sup>131</sup>I-labeled mAb significantly suppressed tumor growth. These results prove that <sup>89</sup>Zr/<sup>131</sup>I-labeled tinurilimab enables both differentiation of malignant pulmonary nodules and radioimmunotherapy of LUAD in preclinical models. Further clinical evaluation and translation of this CEACAM6-targeted theranostic may significantly aid the diagnosis and treatment of LUAD.

Automated Neural Architecture Search for Cardiac Amyloidosis Classification from [18F]-Florbetaben PET Images.

Bargagna F, Zigrino D, De Santi LA, Genovesi D, Scipioni M, Favilli B, Vergaro G, Emdin M, Giorgetti A, Positano V, Santarelli MF

PubMed | Jun 1 2025
Medical image classification using convolutional neural networks (CNNs) is promising but often requires extensive manual tuning to define an optimal model. Neural architecture search (NAS) automates this process, significantly reducing human intervention. This study applies NAS to [18F]-Florbetaben PET cardiac images for classifying cardiac amyloidosis (CA) subtypes (amyloid light chain (AL) and transthyretin amyloid (ATTR)) and controls. Following data preprocessing and augmentation, an evolutionary cell-based NAS approach with a fixed network macro-structure is employed, automatically deriving the cells' micro-structure. The algorithm is executed five times, evaluating 100 mutating architectures per run on an augmented dataset of 4048 images (originally 597), for a total of 500 architectures evaluated. The best network (NAS-Net) achieves 76.95% overall accuracy. K-fold analysis yields mean ± SD percentages of sensitivity, specificity, and accuracy on the test dataset: AL subjects (98.7 ± 2.9, 99.3 ± 1.1, 99.7 ± 0.7), ATTR-CA subjects (93.3 ± 7.8, 78.0 ± 2.9, 70.9 ± 3.7), and controls (35.8 ± 14.6, 77.1 ± 2.0, 96.7 ± 4.4). The NAS-derived network rivals manually designed networks in the literature while using fewer parameters, validating the efficacy of the automated approach.
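The evolutionary cell-based search described above can be boiled down to a mutate-and-keep-the-best loop over a cell's operation list. The toy sketch below illustrates only that search skeleton; the operation names, cell encoding, and fitness function are hypothetical placeholders (in the study, fitness would be validation accuracy of the network built from the cell):

```python
import random

# Hypothetical micro-structure vocabulary for one cell
OPS = ["conv3x3", "conv1x1", "maxpool", "identity"]

def mutate(cell, rng):
    """Randomly replace one operation in the cell's micro-structure."""
    cell = list(cell)
    cell[rng.randrange(len(cell))] = rng.choice(OPS)
    return tuple(cell)

def evolve(fitness, n_generations=200, cell_len=4, seed=0):
    """Toy evolutionary cell search: mutate the incumbent, keep improvements."""
    rng = random.Random(seed)
    best = tuple(rng.choice(OPS) for _ in range(cell_len))
    best_fit = fitness(best)
    for _ in range(n_generations):
        candidate = mutate(best, rng)
        f = fitness(candidate)
        if f > best_fit:          # accept only strict improvements
            best, best_fit = candidate, f
    return best, best_fit
```

Real NAS systems evaluate populations of architectures (100 per run here) and train each candidate, which is what makes the automation expensive; the control flow, however, is essentially this loop.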

A Robust [<sup>18</sup>F]-PSMA-1007 Radiomics Ensemble Model for Prostate Cancer Risk Stratification.

Pasini G, Stefano A, Mantarro C, Richiusa S, Comelli A, Russo GI, Sabini MG, Cosentino S, Ippolito M, Russo G

PubMed | Jun 1 2025
The aim of this study is to investigate the role of [<sup>18</sup>F]-PSMA-1007 PET in differentiating high- and low-risk prostate cancer (PCa) through a robust radiomics ensemble model. This retrospective study included 143 PCa patients who underwent [<sup>18</sup>F]-PSMA-1007 PET/CT imaging. PCa areas were manually contoured on PET images, and 1781 image biomarker standardization initiative (IBSI)-compliant radiomics features were extracted. A preliminary analysis pipeline, iterated 30 times and comprising the least absolute shrinkage and selection operator (LASSO) for feature selection and fivefold cross-validation for model optimization, was adopted to identify the features most robust to dataset variations, select candidate models for ensemble modelling, and optimize hyperparameters. Thirteen subsets of selected features were used to train the model ensemble: 11 generated from the preliminary analysis, plus two additional subsets, the first based on the combination of robust and fine-tuning features and the second on fine-tuning features only. Accuracy, area under the curve (AUC), sensitivity, specificity, precision, and f-score were calculated to quantify model performance. The Friedman test, followed by post hoc tests with Dunn-Sidak correction for multiple comparisons, was used to verify whether statistically significant differences existed between the ensemble models over the 30 iterations. The model ensemble trained with the combination of robust and fine-tuning features obtained the highest average accuracy (79.52%), AUC (85.75%), specificity (84.29%), precision (82.85%), and f-score (78.26%). Statistically significant differences (p < 0.05) were found for some performance metrics. These findings support the role of [<sup>18</sup>F]-PSMA-1007 PET radiomics in improving risk stratification for PCa by reducing dependence on biopsies.
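The "30 times iterated" robustness idea above amounts to re-running LASSO on resampled versions of the dataset and keeping features that survive (almost) every run. A compact sketch using scikit-learn's `Lasso` as the selector; the function name, subsampling scheme, and thresholds are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np
from sklearn.linear_model import Lasso

def robust_features(X, y, n_iter=30, alpha=0.1, keep_frac=0.8, seed=0):
    """Selection frequency of each feature under repeated LASSO fits.

    Each iteration fits LASSO on a random subsample of the rows; a feature
    'survives' an iteration if its coefficient is nonzero. Features with
    frequency near 1.0 are robust to dataset variations.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p, dtype=int)
    for _ in range(n_iter):
        idx = rng.choice(n, size=int(keep_frac * n), replace=False)
        model = Lasso(alpha=alpha, max_iter=10000).fit(X[idx], y[idx])
        counts += (np.abs(model.coef_) > 1e-8).astype(int)
    return counts / n_iter  # per-feature selection frequency in [0, 1]
```

In a radiomics setting, `X` would hold the 1781 IBSI features per patient and `y` the risk label; the high-frequency features would then feed the candidate models of the ensemble.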

Treatment Response Assessment According to Updated PROMISE Criteria in Patients with Metastatic Prostate Cancer Using an Automated Imaging Platform for Identification, Measurement, and Temporal Tracking of Disease.

Benitez CM, Sahlstedt H, Sonni I, Brynolfsson J, Berenji GR, Juarez JE, Kane N, Tsai S, Rettig M, Nickols NG, Duriseti S

PubMed | Jun 1 2025
Prostate-specific membrane antigen (PSMA) molecular imaging is widely used for disease assessment in prostate cancer (PC). Artificial intelligence (AI) platforms such as automated Prostate Cancer Molecular Imaging Standardized Evaluation (aPROMISE) identify and quantify locoregional and distant disease, thereby expediting lesion identification and standardizing reporting. Our aim was to evaluate the ability of the updated aPROMISE platform to assess treatment responses based on integration of the RECIP (Response Evaluation Criteria in PSMA positron emission tomography-computed tomography [PET/CT]) 1.0 classification. The study included 33 patients with castration-sensitive PC (CSPC) and 34 with castration-resistant PC (CRPC) who underwent PSMA-targeted molecular imaging before and ≥2 mo after completion of treatment. Tracer-avid lesions were identified using aPROMISE on pretreatment and post-treatment PET/CT scans. Detected lesions were manually approved by an experienced nuclear medicine physician, and total tumor volume (TTV) was calculated. Response was assessed according to RECIP 1.0 as CR (complete response), PR (partial response), PD (progressive disease), or SD (stable disease).

KEY FINDINGS AND LIMITATIONS: aPROMISE identified 1576 lesions on baseline scans and 1631 lesions on follow-up imaging, 618 (35%) of which were new. Of the 67 patients, aPROMISE classified four as CR, 16 as PR, 34 as SD, and 13 as PD; five cases were misclassified. The agreement between aPROMISE and clinician validation was 89.6% (κ = 0.79). aPROMISE may serve as a novel assessment tool for treatment response that integrates PSMA PET/CT results and RECIP imaging criteria. The precision and accuracy of this automated process should be validated in prospective clinical studies.

PATIENT SUMMARY: We used an artificial intelligence (AI) tool to analyze scans for prostate cancer before and after treatment to see if we could track how cancer spots respond to treatment.
We found that the AI approach was successful in tracking individual tumor changes, showing which tumors disappeared, and identifying new tumors in response to prostate cancer treatment.
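Once TTV and new-lesion status are extracted per patient, the RECIP 1.0 call itself is a small decision rule. The sketch below uses the commonly cited thresholds (≥20% TTV increase plus new lesions for PD, ≥30% TTV decrease without new lesions for PR); treat these as an assumption and consult the RECIP 1.0 publication for the authoritative definitions:

```python
def recip_response(ttv_base, ttv_follow, new_lesions):
    """Classify PSMA PET/CT response per RECIP 1.0 (schematic).

    ttv_base, ttv_follow : total tumor volume before / after treatment
    new_lesions          : True if any new lesion appeared on follow-up
    """
    if ttv_follow == 0 and not new_lesions:
        return "CR"  # complete response: no residual PSMA-avid disease
    change = (ttv_follow - ttv_base) / ttv_base  # fractional TTV change
    if change >= 0.20 and new_lesions:
        return "PD"  # progression: +20% TTV and new lesions
    if change <= -0.30 and not new_lesions:
        return "PR"  # partial response: -30% TTV, no new lesions
    return "SD"      # everything else is stable disease
```

The automation described above sits upstream of this rule: aPROMISE contributes the lesion detection and TTV measurement, which are the labor-intensive steps.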

Empowering PET imaging reporting with retrieval-augmented large language models and reading reports database: a pilot single center study.

Choi H, Lee D, Kang YK, Suh M

PubMed | Jun 1 2025
Large Language Models (LLMs) have the potential to enhance a variety of clinical natural language tasks, including medical imaging reporting. This pilot study examines the efficacy of a retrieval-augmented generation (RAG) system that combines the zero-shot capability of LLMs with a comprehensive database of PET reading reports to improve referencing of prior reports and decision making. We developed a custom LLM framework with retrieval capabilities, leveraging a database of over 10 years of PET imaging reports from a single center. The system uses vector space embedding to facilitate similarity-based retrieval. Queries prompt the system to generate context-based answers and identify similar cases or differential diagnoses. For routine clinical PET readings, experienced nuclear medicine physicians evaluated the performance of the system in terms of the relevance of retrieved similar cases and the appropriateness of suggested potential diagnoses. The system efficiently organized embedded vectors from PET reports, showing that imaging reports were accurately clustered within the embedded vector space according to diagnosis or PET study type. Based on this system, a proof-of-concept chatbot was developed and demonstrated the framework's potential for referencing reports of previous similar cases and identifying exemplary cases for various purposes. For routine clinical PET readings, 84.1% of the cases retrieved relevant similar cases, as agreed upon by all three readers. Using the RAG system, the appropriateness score of the suggested potential diagnoses was significantly better than that of the LLM without RAG. Additionally, the system demonstrated the capability to offer differential diagnoses, leveraging the vast database to enhance the completeness and precision of generated reports.
The integration of a RAG LLM with a large database of PET imaging reports shows potential to support the clinical practice of nuclear medicine image reading through AI tasks such as retrieving similar cases and deriving potential diagnoses from them. This study underscores the potential of advanced AI tools to transform medical imaging reporting practices.
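The retrieval core of such a RAG system is cosine similarity between a query embedding and the archived report embeddings. A minimal NumPy sketch (embedding vectors are assumed to come from whatever embedding model the system uses; the function name is illustrative):

```python
import numpy as np

def top_k_similar(query_vec, report_vecs, k=3):
    """Rank stored report embeddings by cosine similarity to a query.

    query_vec   : 1D embedding of the current query or reading
    report_vecs : 2D array, one row per archived report embedding
    Returns (indices, similarities) of the k best matches, best first.
    """
    q = query_vec / np.linalg.norm(query_vec)
    R = report_vecs / np.linalg.norm(report_vecs, axis=1, keepdims=True)
    sims = R @ q                     # cosine similarity per report
    order = np.argsort(-sims)        # descending similarity
    return order[:k], sims[order[:k]]
```

In production, this brute-force scan is typically replaced by an approximate nearest-neighbor index, but the ranking criterion is the same; the retrieved reports are then injected into the LLM prompt as context.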

An explainable transformer model integrating PET and tabular data for histologic grading and prognosis of follicular lymphoma: a multi-institutional digital biopsy study.

Jiang C, Jiang Z, Zhang Z, Huang H, Zhou H, Jiang Q, Teng Y, Li H, Xu B, Li X, Xu J, Ding C, Li K, Tian R

PubMed | Jun 1 2025
Pathological grade is a critical determinant of clinical outcomes and decision-making in follicular lymphoma (FL). This study aimed to develop a deep learning model as a digital biopsy for the non-invasive identification of FL grade. The study retrospectively included 513 FL patients from five independent hospital centers, randomly divided into training, internal validation, and external validation cohorts. A multimodal fusion Transformer model was developed that integrates 3D PET tumor images with tabular data to predict FL grade. Additionally, the model was equipped with explainability modules, including Gradient-weighted Class Activation Mapping (Grad-CAM) for PET images, SHapley Additive exPlanations (SHAP) analysis for tabular data, and the calculation of predictive contribution ratios for both modalities, to enhance clinical interpretability and reliability. Predictive performance was evaluated using the area under the receiver operating characteristic curve (AUC) and accuracy, and prognostic value was also assessed. The Transformer model demonstrated high accuracy in grading FL, with AUCs of 0.964-0.985 and accuracies of 90.2-96.7% in the training cohort, and similar performance in the validation cohorts (AUCs: 0.936-0.971, accuracies: 86.4-97.0%). Ablation studies confirmed that the fusion model outperformed single-modality models (AUCs: 0.974 vs. 0.956; accuracies: 89.8% vs. 85.8%). Interpretability analysis revealed that PET images contributed 81-89% of the predictive value, and Grad-CAM highlighted the tumor and peri-tumor regions. The model also effectively stratified patients by survival risk (P < 0.05), highlighting its prognostic value. Our study developed an explainable multimodal fusion Transformer model for accurate grading and prognosis of FL, with the potential to aid clinical decision-making.
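The "predictive contribution ratio" idea above can be illustrated with a much simpler late-fusion head than the paper's Transformer: each modality branch produces a scalar logit, and the ratio is each branch's share of the absolute pre-sigmoid evidence. A toy NumPy sketch under that simplifying assumption (all names and the fusion rule are hypothetical, not the paper's architecture):

```python
import numpy as np

def late_fusion(img_feat, tab_feat, w_img, w_tab, b=0.0):
    """Fuse an image-branch and a tabular-branch feature vector linearly.

    Returns (probability, image_contribution_ratio), where the ratio is
    the image branch's share of the total absolute logit evidence.
    """
    z_img = float(img_feat @ w_img)     # image-branch logit
    z_tab = float(tab_feat @ w_tab)     # tabular-branch logit
    total = abs(z_img) + abs(z_tab)
    ratio_img = abs(z_img) / total if total > 0 else 0.5
    prob = 1.0 / (1.0 + np.exp(-(z_img + z_tab + b)))  # sigmoid
    return prob, ratio_img
```

An 81-89% image contribution, as reported above, would correspond to `ratio_img` in that range; in the actual model the attribution is computed over learned Transformer representations rather than a single linear head.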

Robust whole-body PET image denoising using 3D diffusion models: evaluation across various scanners, tracers, and dose levels.

Yu B, Ozdemir S, Dong Y, Shao W, Pan T, Shi K, Gong K

PubMed | Jun 1 2025
Whole-body PET imaging plays an essential role in cancer diagnosis and treatment but suffers from low image quality. Traditional deep learning-based denoising methods work well for a specific acquisition but are less effective in handling diverse PET protocols. In this study, we proposed and validated a 3D Denoising Diffusion Probabilistic Model (3D DDPM) as a robust and universal solution for whole-body PET image denoising. The proposed 3D DDPM gradually injected noise into the images during the forward diffusion phase, allowing the model to learn to reconstruct the clean data during the reverse diffusion process. A 3D convolutional network was trained using high-quality data from the Biograph Vision Quadra PET/CT scanner to generate the score function, enabling the model to capture accurate PET distribution information extracted from the total-body datasets. The trained 3D DDPM was evaluated on datasets from four scanners, four tracer types, and six dose levels representing a broad spectrum of clinical scenarios. The proposed 3D DDPM consistently outperformed 2D DDPM, 3D UNet, and 3D GAN, demonstrating its superior denoising performance across all tested conditions. Additionally, the model's uncertainty maps exhibited lower variance, reflecting its higher confidence in its outputs. The proposed 3D DDPM can effectively handle various clinical settings, including variations in dose levels, scanners, and tracers, establishing it as a promising foundational model for PET image denoising. The trained 3D DDPM model of this work can be utilized off the shelf by researchers as a whole-body PET image denoising solution. The code and model are available at https://github.com/Miche11eU/PET-Image-Denoising-Using-3D-Diffusion-Model .
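The forward diffusion phase described above has a well-known closed form: x_t can be sampled directly from x_0 without simulating every intermediate step. A minimal NumPy sketch of that sampling step (the denoising network that learns to invert it is omitted):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) for a DDPM in closed form:

        x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps,

    where alpha_bar_t is the cumulative product of (1 - beta_s) for s <= t
    and eps is standard Gaussian noise. Returns (x_t, eps); the model is
    trained to predict eps from x_t during the reverse process.
    """
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
    return xt, eps
```

For a 3D DDPM as in this work, `x0` is a 3D PET patch or volume and the score network is a 3D convolutional model; the noising math is unchanged.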

The value of artificial intelligence in PSMA PET: a pathway to improved efficiency and results.

Dadgar H, Hong X, Karimzadeh R, Ibragimov B, Majidpour J, Arabi H, Al-Ibraheem A, Khalaf AN, Anwar FM, Marafi F, Haidar M, Jafari E, Zarei A, Assadi M

PubMed | May 30 2025
This systematic review investigates the potential of artificial intelligence (AI) to improve the accuracy and efficiency of prostate-specific membrane antigen positron emission tomography (PSMA PET) scans for detecting metastatic prostate cancer. A comprehensive literature search was conducted across Medline, Embase, and Web of Science, adhering to PRISMA guidelines. Key search terms included "artificial intelligence," "machine learning," "deep learning," "prostate cancer," and "PSMA PET." The PICO framework guided the selection of studies focusing on AI's application in evaluating PSMA PET scans for staging lymph node and distant metastasis in prostate cancer patients. Inclusion criteria prioritized original English-language articles published up to October 2024, excluding studies using non-PSMA radiotracers, those analyzing only the CT component of PSMA PET-CT, studies focusing solely on intra-prostatic lesions, and non-original research articles. The review included 22 studies with a mix of prospective and retrospective designs. AI algorithms employed included machine learning (ML), deep learning (DL), and convolutional neural networks (CNNs). The studies explored various applications of AI, including improving diagnostic accuracy, sensitivity, differentiation from benign lesions, standardization of reporting, and prediction of treatment response. Reported sensitivity ranged from 62% to 97% and accuracy was high (AUC up to 98%) in detecting metastatic disease, but positive predictive value varied considerably (39.2% to 66.8%). AI demonstrates significant promise in enhancing PSMA PET scan analysis for metastatic prostate cancer, offering improved efficiency and potentially better diagnostic accuracy.
However, the variability in performance and the "black box" nature of some algorithms highlight the need for larger prospective studies, improved model interpretability, and the continued involvement of experienced nuclear medicine physicians in interpreting AI-assisted results. AI should be considered a valuable adjunct, not a replacement, for expert clinical judgment.

Machine Learning Models of Voxel-Level [<sup>18</sup>F] Fluorodeoxyglucose Positron Emission Tomography Data Excel at Predicting Progressive Supranuclear Palsy Pathology.

Braun AS, Satoh R, Pham NTT, Singh-Reilly N, Ali F, Dickson DW, Lowe VJ, Whitwell JL, Josephs KA

PubMed | May 30 2025
To determine whether a machine learning model of voxel-level [<sup>18</sup>F]fluorodeoxyglucose positron emission tomography (PET) data could predict progressive supranuclear palsy (PSP) pathology, as well as outperform currently available biomarkers. One hundred and thirty-seven autopsied patients with PSP (n = 42) and other neurodegenerative diseases (n = 95) who underwent antemortem [<sup>18</sup>F]fluorodeoxyglucose PET and 3.0 Tesla magnetic resonance imaging (MRI) scans were analyzed. A linear support vector machine was applied to differentiate the pathological groups, with sensitivity analyses performed to assess the influence of voxel size and region removal. A secondary model using a radial basis function kernel was also built from the most important voxels. The models were optimized on the main dataset (n = 104), and their performance was compared with the magnetic resonance parkinsonism index measured on MRI in the independent test dataset (n = 33). The model had the highest accuracy (0.91) and F-score (0.86) when the voxel size was 6 mm. In this optimized model, the voxels most important for differentiating the groups were observed in the thalamus, midbrain, and cerebellar dentate. Among the secondary models, the combination of thalamus and dentate had the highest accuracy (0.89) and F-score (0.81). The optimized secondary model showed the highest accuracy (0.91) and F-score (0.86) in the test dataset and outperformed the magnetic resonance parkinsonism index (0.81 and 0.70, respectively). The results suggest that glucose hypometabolism in the thalamus and cerebellar dentate has the highest potential for predicting PSP pathology. Our optimized machine learning model outperformed the best currently available biomarker for predicting PSP pathology. ANN NEUROL 2025.
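Voxel-level classification of this kind reduces to fitting a linear SVM on flattened uptake vectors and reading off the weight magnitudes to find discriminative voxels. A self-contained scikit-learn sketch on synthetic data (the data, grid size, and "focal hypometabolism" block are invented stand-ins, not the study's FDG-PET scans):

```python
import numpy as np
from sklearn.svm import LinearSVC

# Synthetic stand-in for voxelized PET data: each "scan" is a flattened
# vector of voxel intensities; class 1 scans have reduced uptake in a
# fixed block of voxels, mimicking focal hypometabolism.
rng = np.random.default_rng(0)
n_per_class, n_voxels = 60, 216          # e.g. a 6x6x6 grid, flattened
controls = rng.normal(1.0, 0.1, (n_per_class, n_voxels))
patients = rng.normal(1.0, 0.1, (n_per_class, n_voxels))
patients[:, 40:60] -= 0.3                # hypometabolic region

X = np.vstack([controls, patients])
y = np.array([0] * n_per_class + [1] * n_per_class)

clf = LinearSVC(C=1.0, max_iter=10000).fit(X, y)

# The weight vector indicates which voxels drive the separation,
# analogous to the study's "important voxels".
important = np.argsort(-np.abs(clf.coef_[0]))[:20]
```

Restricting a second, nonlinear model (e.g. an RBF-kernel SVM) to only these high-weight voxels mirrors the secondary-model step described above.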
