
Precision Diagnosis and Treatment Monitoring of Glioma via PET Radiomics.

Zhou C, Ji P, Gong B, Kou Y, Fan Z, Wang L

PubMed · Jul 17 2025
Glioma, the most common primary intracranial tumor, poses significant challenges to precision diagnosis and treatment due to its heterogeneity and invasiveness. With the introduction of the 2021 WHO classification standard based on molecular biomarkers, the role of imaging in non-invasive subtyping and therapeutic monitoring of gliomas has become increasingly crucial. While conventional MRI shows limitations in assessing metabolic status and differentiating tumor recurrence, positron emission tomography (PET) combined with radiomics and artificial intelligence technologies offers a novel paradigm for precise diagnosis and treatment monitoring through quantitative extraction of multimodal imaging features (e.g., intensity, texture, dynamic parameters). This review systematically summarizes the technical workflow of PET radiomics (including tracer selection, image segmentation, feature extraction, and model construction) and its applications in predicting molecular subtypes (such as IDH mutation and MGMT methylation), distinguishing recurrence from treatment-related changes, and prognostic stratification. Studies demonstrate that amino acid tracers (e.g., <sup>18</sup>F-FET, <sup>11</sup>C-MET) combined with multimodal radiomics models significantly outperform traditional parametric analysis in diagnostic efficacy. Nevertheless, current research still faces challenges including data heterogeneity, insufficient model interpretability, and lack of clinical validation. Future advancements require multicenter standardized protocols, open-source algorithm frameworks, and multi-omics integration to facilitate the transformative clinical translation of PET radiomics from research to practice.
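
A minimal sketch of the generic workflow the review describes (ROI-based feature extraction followed by model construction), assuming a pyradiomics/scikit-learn toolchain; the file paths, IDH labels, and logistic-regression classifier are illustrative placeholders rather than anything specified by the review.

```python
# Sketch of a PET radiomics pipeline: extract intensity/texture features
# from a segmented tumor, then fit a subtype classifier. Paths and labels
# below are hypothetical placeholders.
import numpy as np
from radiomics import featureextractor            # pyradiomics
from sklearn.linear_model import LogisticRegression

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("firstorder")  # intensity features
extractor.enableFeatureClassByName("glcm")        # texture features

def features_for_case(pet_path: str, mask_path: str) -> np.ndarray:
    """Feature vector from one PET volume and its tumor segmentation."""
    result = extractor.execute(pet_path, mask_path)
    return np.array([v for k, v in result.items()
                     if k.startswith("original_")], dtype=float)

# Model construction: e.g., predict IDH mutation status from the features.
X = np.stack([features_for_case(f"case{i:03d}_pet.nii.gz",
                                f"case{i:03d}_tumor.nii.gz")
              for i in range(100)])                # hypothetical cohort
y = np.random.randint(0, 2, size=100)              # placeholder IDH labels
model = LogisticRegression(max_iter=1000).fit(X, y)
```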

An interpretable machine learning model for predicting bone marrow invasion in patients with lymphoma via <sup>18</sup>F-FDG PET/CT: a multicenter study.

Zhu X, Lu D, Wu Y, Lu Y, He L, Deng Y, Mu X, Fu W

PubMed · Jul 15 2025
Accurate identification of bone marrow invasion (BMI) is critical for determining the prognosis of and treatment strategies for lymphoma. Although bone marrow biopsy (BMB) is the current gold standard, its invasive nature and sampling errors highlight the necessity for noninvasive alternatives. We aimed to develop and validate an interpretable machine learning model that integrates clinical data, <sup>18</sup>F-fluorodeoxyglucose positron emission tomography/computed tomography (<sup>18</sup>F-FDG PET/CT) parameters, radiomic features, and deep learning features to predict BMI in lymphoma patients. We included 159 newly diagnosed lymphoma patients (118 from Center I and 41 from Center II), excluding those with prior treatment, incomplete data, or age under 18 years. Data from Center I were randomly allocated to training (n = 94) and internal test (n = 24) sets; Center II served as an external validation set (n = 41). Clinical parameters, PET/CT features, radiomic characteristics, and deep learning features were comprehensively analyzed and integrated into machine learning models. Model interpretability was elucidated via Shapley Additive exPlanations (SHAP). Additionally, a comparative diagnostic study evaluated reader performance with and without model assistance. BMI was confirmed in 70 (44%) patients. The key clinical predictors included B symptoms and platelet count. Among the tested models, the ExtraTrees classifier achieved the best performance. For external validation, the combined model (clinical + PET/CT + radiomics + deep learning) achieved an area under the receiver operating characteristic curve (AUC) of 0.886, outperforming models that used only clinical (AUC 0.798), radiomic (AUC 0.708), or deep learning features (AUC 0.662). SHAP analysis revealed that PET radiomic features (especially PET_lbp_3D_m1_glcm_DependenceEntropy), platelet count, and B symptoms were significant predictors of BMI. Model assistance significantly enhanced junior reader performance (AUC improved from 0.663 to 0.818, p = 0.03) and improved senior reader accuracy, although not significantly (AUC 0.768 to 0.867, p = 0.10). Our interpretable machine learning model, which integrates clinical, imaging, radiomic, and deep learning features, demonstrated robust BMI prediction performance and notably enhanced physician diagnostic accuracy. These findings underscore the clinical potential of interpretable AI to complement medical expertise and potentially reduce the reliance on invasive BMB for lymphoma staging.
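
The interpretability step reported above, an ExtraTrees classifier explained with SHAP, can be sketched as follows; the random matrix stands in for the fused clinical, PET/CT, radiomic, and deep learning features, and none of the study's actual preprocessing is reproduced.

```python
# Hedged sketch: tree-ensemble classifier + SHAP attributions of the kind
# that highlighted DependenceEntropy, platelet count, and B symptoms.
import numpy as np
import shap
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(159, 20))     # 159 patients, 20 fused features (placeholder)
y = rng.integers(0, 2, size=159)   # 1 = bone marrow invasion (placeholder)

clf = ExtraTreesClassifier(n_estimators=300, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(clf)    # per-feature attributions
shap_values = explainer.shap_values(X)
```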

Advanced finite segmentation model with hybrid classifier learning for high-precision brain tumor delineation in PET imaging.

Murugan K, Palanisamy S, Sathishkumar N, Alshalali TAN

PubMed · Jul 15 2025
Brain tumor segmentation plays a crucial role in clinical diagnostics and treatment planning, yet accurate and efficient segmentation remains a significant challenge due to complex tumor structures and variations in imaging modalities. Multi-feature selection and region classification depend on continuous, homogeneous features to improve the precision of tumor detection; such classification must suppress discreteness across varying feature extraction rates so that even the smallest infected region can be delineated. This study proposes a Finite Segmentation Model (FSM) with Improved Classifier Learning (ICL) to enhance segmentation accuracy in Positron Emission Tomography (PET) images. The FSM-ICL framework integrates advanced textural feature extraction, deep learning-based classification, and an adaptive segmentation approach to differentiate between tumor and non-tumor regions with high precision. Our model is trained and validated on the Synthetic Whole-Head Brain Tumor Segmentation Dataset, consisting of 1000 training and 426 testing images, achieving a segmentation accuracy of 92.57%, significantly outperforming existing approaches such as NRAN (62.16%), DSSE-V-Net (71.47%), and DenseUNet+ (83.93%). Furthermore, FSM-ICL enhances classification precision to 95.59%, reduces classification error to 5.67%, and shortens classification time to 572.39 ms, demonstrating a 10.09% improvement in precision and a 10.96% boost in classification rate over state-of-the-art methods. The hybrid classifier learning approach effectively addresses segmentation discreteness, ensuring detection of both continuous and discrete tumor regions with superior feature differentiation. This work has significant implications for automated tumor detection, personalized treatment strategies, and AI-driven medical imaging advancements. Future directions include incorporating micro-segmentation and pre-classification techniques to further optimize performance on densely pixel-packed datasets.
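
FSM-ICL itself is not publicly available, so only the generic textural-feature step that such a pipeline typically begins with can be illustrated. A minimal GLCM-based sketch using scikit-image, with the 32-level quantization an assumption:

```python
# Gray-level co-occurrence texture features from a 2D PET slice; the
# quantization scheme and feature set are illustrative, not FSM-ICL's.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(pet_slice: np.ndarray, levels: int = 32) -> dict:
    """Quantize a PET slice and compute co-occurrence texture features."""
    lo, hi = pet_slice.min(), pet_slice.max()
    quantized = ((pet_slice - lo) / (hi - lo + 1e-8) * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(quantized, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

print(texture_features(np.random.rand(64, 64)))   # synthetic slice
```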

<sup>18</sup>F-FDG PET-based liver segmentation using deep-learning.

Kaneko Y, Miwa K, Yamao T, Miyaji N, Nishii R, Yamazaki K, Nishikawa N, Yusa M, Higashi T

PubMed · Jul 15 2025
Organ segmentation using <sup>18</sup>F-FDG PET images alone has not been extensively explored. Deep learning (DL)-based segmentation methods have traditionally relied on CT or MRI images, which are vulnerable to alignment issues and artifacts. This study aimed to develop a DL approach for segmenting the entire liver based solely on <sup>18</sup>F-FDG PET images. We analyzed data from 120 patients who were assessed using <sup>18</sup>F-FDG PET. A three-dimensional (3D) U-Net from the nnU-Net framework served as the DL model, with preprocessed PET images as its input. The model was trained with 5-fold cross-validation on data from 100 patients, and segmentation accuracy was evaluated on an independent test set of 20 patients. Accuracy was assessed using the Intersection over Union (IoU), the Dice coefficient, and liver volume. Image quality was evaluated using the mean (SUVmean) and maximum (SUVmax) standardized uptake values and the signal-to-noise ratio (SNR). The model achieved an average IoU of 0.89 and an average Dice coefficient of 0.94 on the test data from 20 patients, indicating high segmentation accuracy. No significant discrepancies in image quality metrics were identified compared with ground truth. Liver regions were accurately extracted from <sup>18</sup>F-FDG PET images, allowing rapid and stable evaluation of liver uptake in individual patients without the need for CT or MRI assessments.
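
The reported IoU and Dice scores follow the standard overlap definitions; a short numpy sketch, with the masks assumed to be binary volumes:

```python
# Overlap metrics between a predicted and a reference liver mask.
import numpy as np

def dice_and_iou(pred: np.ndarray, truth: np.ndarray):
    """Dice coefficient and IoU for two binary masks of the same shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2 * inter / (pred.sum() + truth.sum() + 1e-8)
    iou = inter / (union + 1e-8)
    return dice, iou
```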

Comparison of diagnostic performance between manual diagnosis following PROMISE V2 and aPROMISE utilizing Ga/F-PSMA PET/CT.

Enei Y, Yanagisawa T, Okada A, Kuruma H, Okazaki C, Watanabe K, Lenzo NP, Kimura T, Miki K

PubMed · Jul 15 2025
Automated PROMISE (aPROMISE), an artificial intelligence-supported software for prostate-specific membrane antigen (PSMA) PET/CT based on PROMISE V2, has demonstrated diagnostic utility with better correspondence rates compared to manual diagnosis. However, previous studies have consistently utilized <sup>18</sup>F-PSMA PET/CT. Therefore, we investigated the diagnostic utility of aPROMISE using both <sup>18</sup>F- and <sup>68</sup>Ga-PSMA PET/CT in Japanese patients with metastatic prostate cancer (mPCa). We retrospectively evaluated 21 PSMA PET/CT images (<sup>68</sup>Ga-PSMA PET/CT: n = 12, <sup>18</sup>F-PSMA PET/CT: n = 9) from 21 patients with mPCa. A single experienced nuclear radiologist performed manual diagnosis following PROMISE V2 and subsequently performed aPROMISE-assisted diagnosis to assess miTNM stage and the details of metastatic sites. We compared the diagnostic time and the correspondence rates of miTNM diagnosis between manual and aPROMISE-assisted diagnoses. Additionally, we investigated differences in diagnostic performance between the two radioisotopes. aPROMISE-assisted diagnosis was significantly associated with a shorter median diagnostic time compared to manual diagnosis (427 s [IQR: 370-834] vs. 1,114 s [IQR: 922-1,291], p < 0.001). The time reduction with aPROMISE-assisted diagnosis was particularly notable with <sup>68</sup>Ga-PSMA PET/CT. aPROMISE had high diagnostic accuracy, with 100% sensitivity for miT, M1a, and M1b stages. Notably, for the M1b stage, aPROMISE achieved 100% sensitivity and specificity regardless of the radioisotope used. However, aPROMISE misinterpreted lymph node findings in some cases and missed five visceral metastases (2 adrenal and 3 liver), resulting in lower sensitivity for the miM1c stage (63%). In addition to detecting metastatic sites, aPROMISE successfully provided detailed metrics, including the number of metastatic lesions, total metastatic volume, and SUVmean. Despite the preliminary nature of the study, aPROMISE-assisted diagnosis significantly reduced diagnostic time and achieved satisfactory accuracy compared to manual diagnosis. While aPROMISE is effective in detecting bone metastases, its limitations in identifying lymph node and visceral metastases must be carefully addressed. This study supports the utility of aPROMISE in Japanese patients with mPCa and underscores the need for further validation in larger cohorts.
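
The abstract reports a paired comparison of diagnostic times (p < 0.001) but does not name the statistical test; a Wilcoxon signed-rank test on per-patient times is one plausible choice for paired, non-normal data. A sketch with made-up placeholder times:

```python
# Hypothetical paired comparison of manual vs. aPROMISE-assisted reading
# times; both the test choice and the simulated times are assumptions.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
manual_s = rng.normal(1100, 150, size=21)    # per-patient manual times (s)
assisted_s = rng.normal(500, 120, size=21)   # assisted times (s)

stat, p = wilcoxon(manual_s, assisted_s)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.2g}")
```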

A Multi-Modal Deep Learning Framework for Predicting PSA Progression-Free Survival in Metastatic Prostate Cancer Using PSMA PET/CT Imaging

Ghaderi, H., Shen, C., Issa, W., Pomper, M. G., Oz, O. K., Zhang, T., Wang, J., Yang, D. X.

medRxiv preprint · Jul 14 2025
PSMA PET/CT imaging has been increasingly utilized in the management of patients with metastatic prostate cancer (mPCa). Imaging biomarkers derived from PSMA PET may provide improved prognostication and prediction of treatment response for mPCa patients. This study investigates a novel deep learning-derived imaging biomarker framework for outcome prediction using multi-modal PSMA PET/CT and clinical features. A single-institution cohort of 99 mPCa patients with 396 lesions was evaluated. Imaging features were extracted from cropped lesion areas and combined with clinical variables including body mass index, ECOG performance status, prostate-specific antigen (PSA) level, Gleason score, and treatments received. The PSA progression-free survival (PFS) model was trained using a ResNet architecture with a Cox proportional hazards loss function and five-fold cross-validation. Performance was assessed using the concordance index (C-index) and Kaplan-Meier survival analysis. Among the evaluated model architectures, the ResNet-18 backbone offered the best performance. The multi-modal deep learning framework achieved a 5-fold cross-validation C-index ranging from 0.75 to 0.94, outperforming models incorporating imaging only (0.70-0.89) and clinical features only (0.53-0.65). Kaplan-Meier survival analysis performed on the deep learning-derived predictions demonstrated clear risk stratification, with a median PSA PFS of 19.7 months in the high-risk group and 26 months in the low-risk group (p < 0.001). A deep learning-derived imaging biomarker based on PSMA PET/CT can effectively predict PSA PFS for mPCa patients. Further clinical validation in prospective cohorts is warranted.
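
The Cox proportional hazards loss used to train the ResNet risk scores has a standard form, the negative log partial likelihood over risk sets; a minimal PyTorch sketch (tensor names, shapes, and the tie handling are assumptions):

```python
# Negative log Cox partial likelihood for censored survival data.
import torch

def cox_ph_loss(risk: torch.Tensor, time: torch.Tensor,
                event: torch.Tensor) -> torch.Tensor:
    """risk: (N,) network scores; time: (N,) follow-up; event: (N,) 1 = progressed."""
    order = torch.argsort(time, descending=True)     # descending time =>
    risk, event = risk[order], event[order].float()  # cumsum forms risk sets
    log_risk_set = torch.logcumsumexp(risk, dim=0)   # log sum of exp(risk) over set
    n_events = event.sum().clamp(min=1.0)            # ties ignored for brevity
    return -((risk - log_risk_set) * event).sum() / n_events

# e.g.: loss = cox_ph_loss(model(images).squeeze(-1), times, events)
```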

Effect of data-driven motion correction for respiratory movement on lesion detectability in PET-CT: a phantom study.

de Winter MA, Gevers R, Lavalaye J, Habraken JBA, Maspero M

PubMed · Jul 11 2025
While data-driven motion correction (DDMC) techniques have proven to enhance the visibility of lesions affected by motion, their impact on overall detectability remains unclear. This study investigates whether DDMC improves lesion detectability in <sup>18</sup>F-FDG PET-CT. A moving platform simulated respiratory motion in a NEMA IEC body phantom with varying amplitudes (0, 7, 10, 20, and 30 mm) and target-to-background ratios (2, 5, and 10.5). Scans were reconstructed with and without DDMC, and the maximal and mean recovery coefficients (RC) and the contrast-to-noise ratio (CNR) of the spherical targets were measured. DDMC results in higher RC values in the target spheres. CNR values increase for small, strongly motion-affected targets but decrease for larger spheres with smaller amplitudes. A sub-analysis shows that DDMC increases the contrast of the sphere along with a 36% increase in background noise. While DDMC significantly enhances contrast (RC), its impact on detectability (CNR) is less profound due to the increased background noise. CNR improves for small targets with high motion amplitude, potentially enhancing the detectability of low-uptake lesions. Given that the increased background noise may reduce detectability for targets unaffected by motion, we suggest that DDMC reconstructions are best used in addition to non-DDMC reconstructions.
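
RC and CNR here carry their usual phantom-study meanings: RC relates the measured sphere uptake to the known filled activity, and CNR scales sphere-to-background contrast by background noise. A small numpy-style sketch with illustrative variable names:

```python
# Standard phantom metrics; the inputs are illustrative scalars or arrays.
def recovery_coefficient(measured_uptake, true_activity):
    """RC: measured sphere uptake relative to the known filled activity."""
    return measured_uptake / true_activity

def contrast_to_noise(sphere_mean, background_mean, background_std):
    """CNR: sphere-to-background contrast scaled by background noise."""
    return (sphere_mean - background_mean) / background_std

# e.g., a sphere filled at ratio 10.5 measured at 8.4 (made-up numbers):
print(recovery_coefficient(8.4, 10.5), contrast_to_noise(8.4, 1.0, 0.2))
```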

Deep supervised transformer-based noise-aware network for low-dose PET denoising across varying count levels.

Azimi MS, Felfelian V, Zeraatkar N, Dadgar H, Arabi H, Zaidi H

PubMed · Jul 8 2025
Reducing the radiation dose from PET imaging is essential to minimize cancer risks; however, it often leads to increased noise and degraded image quality, compromising diagnostic reliability. Recent advances in deep learning have shown promising results in addressing these limitations through effective denoising. However, existing networks trained on specific noise levels often fail to generalize across diverse acquisition conditions, and training multiple models for different noise levels is impractical due to data and computational constraints. This study aimed to develop a supervised Swin Transformer-based unified noise-aware network (ST-UNN) that handles diverse noise levels and reconstructs high-quality images in low-dose PET imaging. ST-UNN incorporates multiple sub-networks, each designed to address a specific noise level ranging from 1% to 10%, and an adaptive weighting mechanism that dynamically integrates the outputs of these sub-networks to achieve effective denoising. The model was trained and evaluated using a PET/CT dataset encompassing the entire head and malignant lesions in the head and neck region. Performance was assessed using a combination of structural and statistical metrics, including the Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), mean standardized uptake value (SUV<sub>mean</sub>) bias, SUV<sub>max</sub> bias, and Root Mean Square Error (RMSE). This comprehensive evaluation ensured reliable results for both global and localized regions within PET images. ST-UNN consistently outperformed conventional networks, particularly in ultra-low-dose scenarios. At the 1% count level, it achieved a PSNR of 34.77, an RMSE of 0.05, and an SSIM of 0.97, notably surpassing the baseline networks. It also achieved the lowest SUV<sub>mean</sub> bias (0.08) and lesion RMSE (0.12) at this level. Across all count levels, ST-UNN maintained high performance and low error, demonstrating strong generalization and diagnostic integrity. ST-UNN offers a scalable, transformer-based solution for low-dose PET imaging. By dynamically integrating sub-networks, it effectively addresses noise variability and provides superior image quality, thereby advancing the capabilities of low-dose and dynamic PET imaging.
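
The adaptive weighting idea can be illustrated schematically: several denoising sub-networks, one per count level, fused by input-dependent weights from a small gating head. The branches and gate below are stand-ins, not the published ST-UNN layers:

```python
# Sketch of noise-aware fusion of per-count-level denoisers (PyTorch).
import torch
import torch.nn as nn

class NoiseAwareFusion(nn.Module):
    def __init__(self, sub_networks):
        super().__init__()
        self.subs = nn.ModuleList(sub_networks)
        # tiny gate: pooled image intensity -> softmax weights over branches
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(1, len(sub_networks)))

    def forward(self, noisy_pet):
        outs = torch.stack([net(noisy_pet) for net in self.subs], dim=1)  # (B,K,C,H,W)
        w = torch.softmax(self.gate(noisy_pet), dim=1)                    # (B,K)
        return (w[:, :, None, None, None] * outs).sum(dim=1)

# Placeholder branches standing in for the 1%-10% count-level sub-networks:
model = NoiseAwareFusion([nn.Conv2d(1, 1, 3, padding=1) for _ in range(4)])
denoised = model(torch.randn(2, 1, 64, 64))
```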

AI lesion tracking in PET/CT imaging: a proposal for a Siamese-based CNN pipeline applied to PSMA PET/CT scans.

Hein SP, Schultheiss M, Gafita A, Zaum R, Yagubbayli F, Tauber R, Rauscher I, Eiber M, Pfeiffer F, Weber WA

PubMed · Jul 8 2025
Assessing tumor response to systemic therapies is one of the main applications of PET/CT. Routinely, only a small subset of index lesions out of multiple lesions is analyzed. However, this operator-dependent selection may bias the results due to possibly significant inter-metastatic heterogeneity of response to therapy. Automated, AI-based approaches for lesion tracking hold promise in enabling the analysis of many more lesions and thus providing a better assessment of tumor response. This work introduces a Siamese CNN approach for lesion tracking between PET/CT scans. Our approach is applied to the laborious task of tracking a high number of bone lesions in full-body baseline and follow-up [<sup>68</sup>Ga]Ga- or [<sup>18</sup>F]F-PSMA PET/CT scans after two cycles of [<sup>177</sup>Lu]Lu-PSMA therapy in patients with metastatic castration-resistant prostate cancer. Data preparation includes lesion segmentation and affine registration. Our algorithm extracts suitable lesion patches and forwards them to a Siamese CNN trained to classify lesion patch pairs as corresponding or non-corresponding lesions. Experiments were performed with different input patch types and with 2D and 3D Siamese networks. The CNN model successfully learned to classify lesion assignments, reaching an accuracy of 83% in its best configuration, with an AUC of 0.91. For corresponding lesions, the pipeline achieved a lesion tracking accuracy of 89%. We showed that a CNN can facilitate the tracking of multiple lesions in PSMA PET/CT scans. Future clinical studies are necessary to determine whether this improves the prediction of therapy outcomes.
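
The core of the approach, twin shared-weight encoders scoring a lesion patch pair as corresponding or not, can be sketched in a few lines of PyTorch; the 2D encoder below is a stand-in, not the authors' published architecture:

```python
# Siamese patch matcher: shared encoder, feature-difference head.
import torch
import torch.nn as nn

class SiameseLesionMatcher(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(           # weights shared by both patches
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, 1)            # corresponding vs. not

    def forward(self, patch_a, patch_b):
        fa, fb = self.encoder(patch_a), self.encoder(patch_b)
        return torch.sigmoid(self.head(torch.abs(fa - fb)))

# A batch of baseline/follow-up patch pairs (32x32, single channel):
matcher = SiameseLesionMatcher()
prob_same = matcher(torch.randn(4, 1, 32, 32), torch.randn(4, 1, 32, 32))
```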

Emerging Frameworks for Objective Task-based Evaluation of Quantitative Medical Imaging Methods

Yan Liu, Huitian Xia, Nancy A. Obuchowski, Richard Laforest, Arman Rahmim, Barry A. Siegel, Abhinav K. Jha

arXiv preprint · Jul 7 2025
Quantitative imaging (QI) is demonstrating strong promise across multiple clinical applications. For clinical translation of QI methods, objective evaluation on clinically relevant tasks is essential. To address this need, multiple evaluation strategies are being developed. In this paper, based on previous literature, we outline four emerging frameworks to perform evaluation studies of QI methods. We first discuss the use of virtual imaging trials (VITs) to evaluate QI methods. Next, we outline a no-gold-standard evaluation framework to clinically evaluate QI methods without ground truth. Third, a framework to evaluate QI methods for joint detection and quantification tasks is outlined. Finally, we outline a framework to evaluate QI methods that output multi-dimensional parameters, such as radiomic features. We review these frameworks, discussing their utilities and limitations. Further, we examine future research areas in evaluation of QI methods. Given the recent advancements in PET, including long axial field-of-view scanners and the development of artificial-intelligence algorithms, we present these frameworks in the context of PET.