Page 57 of 1381373 results

Robust whole-body PET image denoising using 3D diffusion models: evaluation across various scanners, tracers, and dose levels.

Yu B, Ozdemir S, Dong Y, Shao W, Pan T, Shi K, Gong K

PubMed · Jun 1 2025
Whole-body PET imaging plays an essential role in cancer diagnosis and treatment but suffers from low image quality. Traditional deep learning-based denoising methods work well for a specific acquisition but are less effective in handling diverse PET protocols. In this study, we proposed and validated a 3D Denoising Diffusion Probabilistic Model (3D DDPM) as a robust and universal solution for whole-body PET image denoising. The proposed 3D DDPM gradually injected noise into the images during the forward diffusion phase, allowing the model to learn to reconstruct the clean data during the reverse diffusion process. A 3D convolutional network was trained using high-quality data from the Biograph Vision Quadra PET/CT scanner to generate the score function, enabling the model to capture accurate PET distribution information extracted from the total-body datasets. The trained 3D DDPM was evaluated on datasets from four scanners, four tracer types, and six dose levels, representing a broad spectrum of clinical scenarios. The proposed 3D DDPM consistently outperformed 2D DDPM, 3D UNet, and 3D GAN, demonstrating its superior denoising performance across all tested conditions. Additionally, the model's uncertainty maps exhibited lower variance, reflecting its higher confidence in its outputs. The proposed 3D DDPM can effectively handle various clinical settings, including variations in dose levels, scanners, and tracers, establishing it as a promising foundational model for PET image denoising. The trained 3D DDPM from this work can be used off the shelf by researchers as a whole-body PET image denoising solution. The code and model are available at https://github.com/Miche11eU/PET-Image-Denoising-Using-3D-Diffusion-Model.
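The forward diffusion process described in this abstract has a standard closed form; below is a minimal NumPy sketch of the generic DDPM forward step (sampling x_t directly from x_0), not the authors' released model. The linear beta schedule, the time-step count, and the toy 3D patch shape are illustrative assumptions.

```python
import numpy as np

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule; returns alpha_bar[t] = prod_{s<=t}(1 - beta_s)."""
    betas = np.linspace(beta_start, beta_end, T)
    return np.cumprod(1.0 - betas)

def q_sample(x0, t, alpha_bar, rng):
    """Closed-form forward diffusion: x_t = sqrt(a_bar_t)*x0 + sqrt(1-a_bar_t)*eps."""
    eps = rng.standard_normal(x0.shape)
    a = alpha_bar[t]
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * eps, eps

rng = np.random.default_rng(0)
alpha_bar = make_schedule()
x0 = rng.standard_normal((4, 4, 4))          # toy 3D PET patch
xt, eps = q_sample(x0, t=999, alpha_bar=alpha_bar, rng=rng)
# At the final step alpha_bar is tiny, so x_t is almost pure noise; the
# reverse process learns to undo this corruption step by step.
```

A network trained to predict eps from (x_t, t) can then denoise a low-dose scan by running the learned reverse chain, which is the sense in which the abstract's model acts as a universal denoiser.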

Optimizing MR-based attenuation correction in hybrid PET/MR using deep learning: validation with a flatbed insert and consistent patient positioning.

Wang H, Wang Y, Xue Q, Zhang Y, Qiao X, Lin Z, Zheng J, Zhang Z, Yang Y, Zhang M, Huang Q, Huang Y, Cao T, Wang J, Li B

PubMed · Jun 1 2025
To address the challenges of verifying MR-based attenuation correction (MRAC) in PET/MR due to CT positional mismatches and alignment issues, this study utilized a flatbed insert and arms-down positioning during PET/CT scans to achieve precise MR-CT matching for accurate MRAC evaluation. A validation dataset of 21 patients underwent whole-body [¹⁸F]FDG PET/CT followed by [¹⁸F]FDG PET/MR. A flatbed insert ensured consistent positioning, allowing direct comparison of four MRAC methods (four-tissue and five-tissue models with discrete and continuous μ-maps) against CT-based attenuation correction (CTAC). A deep learning-based framework, trained on a dataset of 300 patients, was used to generate synthesized CTs from MR images, forming the basis for all MRAC methods. Quantitative analyses were conducted at the whole-body, region-of-interest, and lesion levels, with lesion-distance analysis evaluating the impact of bone proximity on standardized uptake value (SUV) quantification. Distinct differences were observed among MRAC methods in the spine and femur regions. Joint histogram analysis showed that MRAC-4 (continuous μ-map) closely aligned with CTAC. Lesion-distance analysis revealed that MRAC-4 minimized bone-induced SUV interference (r = 0.01, p = 0.8643). However, tissues prone to bone segmentation interference, such as the spine and liver, exhibited greater SUV variability and lower reproducibility with MRAC-4 compared to MRAC-2 (2D bone segmentation, discrete μ-map) and MRAC-3 (3D bone segmentation, discrete μ-map). Using a flatbed insert, this study validated MRAC with high precision. The continuous μ-value MRAC method (MRAC-4) demonstrated superior accuracy and minimized bone-related SUV errors but faced challenges in reproducibility, particularly in bone-rich regions.
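The SUV quantification at the center of this comparison has a simple body-weight definition; a minimal sketch with toy numbers, not study data, assuming the injected dose is already decay-corrected to scan start:

```python
def suv_bw(activity_conc_bq_ml: float, injected_dose_bq: float, weight_kg: float) -> float:
    """Body-weight SUV: tissue activity concentration (Bq/mL) normalized by
    injected dose per gram of body weight. Assumes a decay-corrected dose."""
    weight_g = weight_kg * 1000.0
    return activity_conc_bq_ml / (injected_dose_bq / weight_g)

# e.g. 5000 Bq/mL in tissue, 370 MBq injected, 74 kg patient
print(suv_bw(5000.0, 370e6, 74.0))  # → 1.0
```

Because the attenuation map enters the reconstruction of the activity concentration, μ-map errors near bone propagate directly into this numerator, which is what the lesion-distance analysis above probes.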

Influence of prior probability information on large language model performance in radiological diagnosis.

Fukushima T, Kurokawa R, Hagiwara A, Sonoda Y, Asari Y, Kurokawa M, Kanzawa J, Gonoi W, Abe O

PubMed · Jun 1 2025
Large language models (LLMs) show promise in radiological diagnosis, but their performance may be affected by the context of the cases presented. Our purpose is to investigate how providing information about prior probabilities influences the diagnostic performance of an LLM in radiological quiz cases. We analyzed 322 consecutive cases from Radiology's "Diagnosis Please" quiz using Claude 3.5 Sonnet under three conditions: without context (Condition 1), informed as quiz cases (Condition 2), and presented as primary care cases (Condition 3). Diagnostic accuracy was compared using McNemar's test. The overall accuracy rate significantly improved in Condition 2 compared to Condition 1 (70.2% vs. 64.9%, p = 0.029). Conversely, the accuracy rate significantly decreased in Condition 3 compared to Condition 1 (59.9% vs. 64.9%, p = 0.027). Providing information that may influence prior probabilities significantly affects the diagnostic performance of the LLM in radiological cases. This suggests that LLMs may incorporate Bayesian-like principles and adjust the weighting of their diagnostic responses based on prior information, highlighting the potential for optimizing LLM's performance in clinical settings by providing relevant contextual information.
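The paired-accuracy comparison above relies on McNemar's test, which depends only on the discordant pairs (cases answered correctly under one condition but not the other); a minimal exact (binomial) version with toy counts, not the study's data:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact two-sided McNemar p-value from discordant pair counts:
    b = correct only under condition A, c = correct only under condition B.
    Under H0 the discordant pairs follow Binomial(b + c, 0.5)."""
    n = b + c
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)

# Toy counts, not the paper's data: 25 vs. 10 discordant pairs.
p = mcnemar_exact(25, 10)
print(p < 0.05)  # → True
```

Concordant pairs (correct or incorrect under both conditions) carry no information about the difference, which is why the test reduces to this binomial form.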

Impact of contrast enhancement phase on CT-based radiomics analysis for predicting post-surgical recurrence in renal cell carcinoma.

Khene ZE, Bhanvadia R, Tachibana I, Sharma P, Trevino I, Graber W, Bertail T, Fleury R, Acosta O, De Crevoisier R, Bensalah K, Lotan Y, Margulis V

PubMed · Jun 1 2025
To investigate the effect of CT enhancement phase on radiomics features for predicting post-surgical recurrence of clear cell renal cell carcinoma (ccRCC). This retrospective study included 144 patients who underwent radical or partial nephrectomy for ccRCC. Preoperative multiphase abdominal CT scans (non-contrast, corticomedullary, and nephrographic phases) were obtained for each patient. Automated segmentation of renal masses was performed using the nnU-Net framework. Radiomics signatures (RS) were developed for each phase using ensembles of machine learning-based models (Random Survival Forests [RSF], Survival Support Vector Machines [S-SVM], and Extreme Gradient Boosting [XGBoost]) with and without feature selection. Feature selection was performed using Affinity Propagation Clustering. The primary endpoint was disease-free survival, assessed by concordance index (C-index). The study included 144 patients. Radical and partial nephrectomies were performed in 81% and 19% of patients, respectively, with 81% of tumors classified as high grade. Disease recurrence occurred in 74 patients (51%). A total of 1,316 radiomics features were extracted per phase per patient. Without feature selection, C-index values for RSF, S-SVM, XGBoost, and Penalized Cox models ranged from 0.43 to 0.61 across phases. With Affinity Propagation feature selection, C-index values improved to 0.51-0.74, with the corticomedullary phase achieving the highest performance (C-index up to 0.74). The results of our study indicate that radiomics analysis of corticomedullary phase contrast-enhanced CT images may provide valuable predictive insight into recurrence risk for non-metastatic ccRCC following surgical resection. However, the lack of external validation is a limitation, and further studies are needed to confirm these findings in independent cohorts.
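The concordance index used as the primary endpoint above can be computed directly from pairwise survival orderings; a generic Harrell's C-index sketch (quadratic in n for clarity), not the authors' pipeline:

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index: fraction of comparable pairs whose predicted risk
    ordering agrees with the observed survival ordering. A pair (i, j) is
    comparable when the earlier time corresponds to an observed event."""
    concordant = tied = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable

# Perfectly ordered toy data: shorter survival pairs with higher predicted risk.
print(concordance_index([1, 2, 3, 4], [1, 1, 1, 0], [4.0, 3.0, 2.0, 1.0]))  # → 1.0
```

A C-index of 0.5 corresponds to random ranking, which puts the reported improvement from 0.43-0.61 to 0.51-0.74 after feature selection in context.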

Ultra-fast biparametric MRI in prostate cancer assessment: Diagnostic performance and image quality compared to conventional multiparametric MRI.

Pausch AM, Filleböck V, Elsner C, Rupp NJ, Eberli D, Hötker AM

PubMed · Jun 1 2025
To compare the diagnostic performance and image quality of a deep-learning-assisted ultra-fast biparametric MRI (bpMRI) protocol with conventional multiparametric MRI (mpMRI) for the diagnosis of clinically significant prostate cancer (csPCa). This prospective single-center study enrolled 123 biopsy-naïve patients undergoing conventional mpMRI and additional ultra-fast bpMRI at 3 T between 06/2023 and 02/2024. Two radiologists (R1: 4 years and R2: 3 years of experience) independently assigned PI-RADS scores (PI-RADS v2.1) and assessed image quality (mPI-QUAL score) in two blinded study readouts. Weighted Cohen's kappa (κ) was calculated to evaluate inter-reader agreement. Diagnostic performance was analyzed using clinical data and histopathological results from clinically indicated biopsies. Inter-reader agreement was good for both mpMRI (κ = 0.83) and ultra-fast bpMRI (κ = 0.87). Both readers demonstrated high sensitivity (≥94%/≥91%, R1/R2) and NPV (≥96%/≥95%) for csPCa detection using both protocols. The more experienced reader showed notably higher specificity (≥77%/≥53%), PPV (≥62%/≥45%), and diagnostic accuracy (≥82%/≥65%) than the less experienced reader. There was no significant difference between the two protocols in correctly identifying csPCa (p > 0.05). The ultra-fast bpMRI protocol had significantly better image quality ratings (p < 0.001) and reduced scan time by 80% compared to conventional mpMRI. Deep-learning-assisted ultra-fast bpMRI protocols offer a promising alternative to conventional mpMRI for diagnosing csPCa in biopsy-naïve patients, with comparable inter-reader agreement and diagnostic performance at superior image quality. However, reader experience remains essential for diagnostic performance.
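The inter-reader agreement figures above are weighted Cohen's κ; a generic linear-weights implementation on 0-indexed ordinal scores (e.g. PI-RADS 1-5 shifted to 0-4), not the study's software:

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat=5, weights="linear"):
    """Weighted Cohen's kappa for two raters on ordinal scores 0..n_cat-1.
    Linear weights penalize disagreements in proportion to their distance
    on the scale; "quadratic" squares that distance."""
    O = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        O[a, b] += 1
    O /= O.sum()                              # observed joint distribution
    p1, p2 = O.sum(axis=1), O.sum(axis=0)     # marginals per rater
    E = np.outer(p1, p2)                      # chance-agreement distribution
    i, j = np.indices((n_cat, n_cat))
    d = np.abs(i - j)
    W = d if weights == "linear" else d ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()

# Perfect agreement on toy scores → kappa = 1
print(weighted_kappa([0, 1, 2, 3, 4], [0, 1, 2, 3, 4]))  # → 1.0
```

Weighting matters for PI-RADS because a 4-vs-5 disagreement is clinically far less severe than a 2-vs-5 one; unweighted kappa would treat them identically.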

Advances in biomarker discovery and diagnostics for Alzheimer's disease.

Bhatia V, Chandel A, Minhas Y, Kushawaha SK

PubMed · Jun 1 2025
Alzheimer's disease (AD) is a progressive neurodegenerative disorder characterized by intracellular neurofibrillary tangles of tau protein and extracellular β-amyloid plaques. Early and accurate diagnosis is crucial for effective treatment and management. The purpose of this review is to examine the current diagnostic criteria for AD, such as clinical evaluations, cognitive testing, and biomarker-based techniques, and to investigate new technologies that improve diagnostic accuracy. A thorough literature review was conducted to assess both conventional and contemporary diagnostic methods. Multimodal strategies integrating clinical, imaging, and biochemical evaluations were emphasised, and recent developments in biomarker discovery were examined. Current diagnostic approaches include cerebrospinal fluid (CSF) biomarkers, imaging tools (MRI, PET), cognitive tests, and new blood-based markers. Integrating these technologies into multimodal diagnostic procedures enhances diagnostic accuracy and helps distinguish dementia from other conditions. New technologies that hold promise for improving biomarker identification and diagnostic reliability include mass spectrometry and artificial intelligence. Advancements in AD diagnostics underscore the need for accessible, minimally invasive, and cost-effective techniques to facilitate early detection and intervention. The integration of novel technologies with traditional methods may significantly enhance the accuracy and feasibility of AD diagnosis.

An explainable transformer model integrating PET and tabular data for histologic grading and prognosis of follicular lymphoma: a multi-institutional digital biopsy study.

Jiang C, Jiang Z, Zhang Z, Huang H, Zhou H, Jiang Q, Teng Y, Li H, Xu B, Li X, Xu J, Ding C, Li K, Tian R

PubMed · Jun 1 2025
Pathological grade is a critical determinant of clinical outcomes and decision-making in follicular lymphoma (FL). This study aimed to develop a deep learning model as a digital biopsy for the non-invasive identification of FL grade. The study retrospectively included 513 FL patients from five independent hospital centers, randomly divided into training, internal validation, and external validation cohorts. A multimodal fusion Transformer model was developed that integrates 3D PET tumor images with tabular data to predict FL grade. Additionally, the model is equipped with explainability modules, including Gradient-weighted Class Activation Mapping (Grad-CAM) for PET images, SHapley Additive exPlanations (SHAP) analysis for tabular data, and the calculation of predictive contribution ratios for both modalities, to enhance clinical interpretability and reliability. Predictive performance was evaluated using the area under the receiver operating characteristic curve (AUC) and accuracy, and prognostic value was also assessed. The Transformer model demonstrated high accuracy in grading FL, with AUCs of 0.964-0.985 and accuracies of 90.2-96.7% in the training cohort, and similar performance in the validation cohorts (AUCs: 0.936-0.971; accuracies: 86.4-97.0%). Ablation studies confirmed that the fusion model outperformed single-modality models (AUCs: 0.974 vs. 0.956; accuracies: 89.8% vs. 85.8%). Interpretability analysis revealed that PET images contributed 81-89% of the predictive value. Grad-CAM highlighted the tumor and peri-tumor regions. The model also effectively stratified patients by survival risk (p < 0.05), highlighting its prognostic value. Our study developed an explainable multimodal fusion Transformer model for accurate grading and prognosis of FL, with the potential to aid clinical decision-making.
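The abstract does not spell out how its per-modality "predictive contribution ratios" are computed; one simple illustrative definition for a late-fusion head is the share of the fused pre-activation magnitude coming from each branch. The function, shapes, and weights below are hypothetical, a sketch of that idea rather than the paper's method:

```python
import numpy as np

def contribution_ratios(feat_img, feat_tab, w_img, w_tab):
    """Share of the fused pre-activation's magnitude contributed by each
    modality branch in a linear late-fusion head. Illustrative definition,
    not the paper's exact formulation."""
    c_img = np.abs(w_img @ feat_img).sum()
    c_tab = np.abs(w_tab @ feat_tab).sum()
    total = c_img + c_tab
    return c_img / total, c_tab / total

rng = np.random.default_rng(1)
feat_img = rng.standard_normal(16) * 3.0   # toy imaging embedding (scaled up)
feat_tab = rng.standard_normal(8)          # toy tabular embedding
r_img, r_tab = contribution_ratios(feat_img, feat_tab,
                                   rng.standard_normal((2, 16)),
                                   rng.standard_normal((2, 8)))
print(round(r_img + r_tab, 6))  # → 1.0
```

By construction the two ratios sum to one, mirroring the 81-89% PET share reported above.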

Automated contouring for breast cancer radiotherapy in the isocentric lateral decubitus position: a neural network-based solution for enhanced precision and efficiency.

Loap P, Monteil R, Kirova Y, Vu-Bezin J

PubMed · Jun 1 2025
Adjuvant radiotherapy is essential for reducing local recurrence and improving survival in breast cancer patients, but it carries a risk of ischemic cardiac toxicity, which increases with heart exposure. The isocentric lateral decubitus position, in which the breast rests flat on a support, reduces heart exposure and allows delivery of a more uniform dose. This position is particularly beneficial for patients with unique anatomies, such as those with pectus excavatum or larger breast sizes. While artificial intelligence (AI) algorithms for autocontouring have shown promise, they have not been tailored to this specific position. This study aimed to develop and evaluate a neural network-based autocontouring algorithm for patients treated in the isocentric lateral decubitus position. In this single-center study, 1189 breast cancer patients treated after breast-conserving surgery were included. Their simulation CT scans (1209 scans) were used to train and validate a neural network-based autocontouring algorithm (nnU-Net): 1087 scans were used for training, and 122 scans were reserved for validation. The algorithm's performance was assessed using the Dice similarity coefficient (DSC) to compare the automatically delineated volumes with manual contours. A clinical evaluation of the algorithm was performed on 30 additional patients, with contours rated by two expert radiation oncologists. The neural network-based algorithm achieved a segmentation time of approximately 4 min, compared to 20 min for manual segmentation. The DSC values for the validation cohort were 0.88 for the treated breast, 0.90 for the heart, 0.98 for the right lung, and 0.97 for the left lung. In the clinical evaluation, 90% of the automatically contoured breast volumes were rated as acceptable without corrections, while the remaining 10% required minor adjustments. All lung contours were accepted without corrections, and heart contours were rated as acceptable in 93.3% of cases, with minor corrections needed in 6.6% of cases. This neural network-based autocontouring algorithm offers a practical, time-saving solution for breast cancer radiotherapy planning in the isocentric lateral decubitus position. Its strong geometric performance, clinical acceptability, and significant time efficiency make it a valuable tool for modern radiotherapy practices, particularly in high-volume centers.
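The DSC values reported above follow the standard overlap definition, 2|A ∩ B| / (|A| + |B|); a minimal sketch on toy binary masks:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|); defined as 1 when both masks are empty."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

a = np.zeros((4, 4), dtype=bool); a[:2] = True   # 8 pixels
b = np.zeros((4, 4), dtype=bool); b[1:3] = True  # 8 pixels, 4 overlapping
print(dice(a, b))  # → 0.5
```

Note that DSC is volume-weighted, which partly explains why large, well-defined organs such as the lungs (0.97-0.98) score higher than the breast (0.88).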

A continuous-action deep reinforcement learning-based agent for coronary artery centerline extraction in coronary CT angiography images.

Zhang Y, Luo G, Wang W, Cao S, Dong S, Yu D, Wang X, Wang K

PubMed · Jun 1 2025
The lumen centerline of the coronary artery allows vessel reconstruction used to detect stenoses and plaques. Discrete-action-based centerline extraction methods suffer in the presence of artifacts and plaques. This study aimed to develop a continuous-action-based method that performs more effectively in cases involving artifacts or plaques. A continuous-action deep reinforcement learning-based model was trained to predict the artery's direction and radius value. The model is based on an Actor-Critic architecture. The Actor learns a deterministic policy that outputs the actions made by an agent; these actions indicate the centerline's direction and radius value at each step. The Critic learns a value function to evaluate the quality of the agent's actions. A novel DDR reward was introduced to measure the agent's action (both centerline extraction and radius estimation) at each step. The method achieved an average OV of 95.7%, OF of 93.6%, OT of 97.3%, and AI of 0.22 mm on 80 test cases. In the 53 cases with artifacts or plaques, it achieved an average OV of 95.0%, OF of 91.5%, OT of 96.7%, and AI of 0.23 mm. The 95% limits of agreement between the reference and estimated radius values were −0.46 mm and 0.43 mm across the 80 test cases. Experiments demonstrate that the Actor-Critic architecture can achieve efficient centerline extraction and radius estimation. Compared with discrete-action-based methods, our method performs more effectively in cases involving artifacts or plaques. The extracted centerlines and radius values allow accurate coronary artery reconstruction, facilitating the detection of stenoses and plaques.
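The quoted 95% limits of agreement are conventionally the Bland-Altman bounds, mean difference ± 1.96 SD of the paired differences; a sketch with toy radius values, not the study's measurements:

```python
import numpy as np

def limits_of_agreement(reference, estimate):
    """Bland-Altman 95% limits of agreement: mean of the paired differences
    plus/minus 1.96 times their sample standard deviation."""
    d = np.asarray(estimate) - np.asarray(reference)
    m, s = d.mean(), d.std(ddof=1)
    return m - 1.96 * s, m + 1.96 * s

# Toy radius pairs in mm, for illustration only.
ref = np.array([1.0, 1.2, 0.9, 1.1, 1.3])
est = np.array([1.1, 1.1, 1.0, 1.0, 1.4])
lo, hi = limits_of_agreement(ref, est)
print(lo < 0 < hi)  # → True
```

Roughly symmetric limits around zero, as in the −0.46 mm / 0.43 mm reported above, indicate little systematic bias in the radius estimates.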

BCT-Net: semantic-guided breast cancer segmentation on BUS.

Xin J, Yu Y, Shen Q, Zhang S, Su N, Wang Z

PubMed · Jun 1 2025
Accurately and swiftly segmenting breast tumors is critical for cancer diagnosis and treatment. Ultrasound imaging is one of the most widely employed methods in clinical practice. However, due to challenges such as low contrast, blurred boundaries, and prevalent shadows in ultrasound images, tumor segmentation remains a daunting task. In this study, we propose BCT-Net, a network amalgamating CNN and Transformer components for breast tumor segmentation. BCT-Net integrates a dual-level attention mechanism to capture more features and redefines the skip-connection module. We introduce a classification task as an auxiliary task to impart additional semantic information to the segmentation network, employing supervised contrastive learning. A hybrid objective loss function is proposed that combines pixel-wise cross-entropy, binary cross-entropy, and a supervised contrastive learning loss. Experiments conducted on the BUSI dataset of breast ultrasound images demonstrate that BCT-Net achieves high accuracy in breast tumor segmentation, with precision (Pre) and DSC of 86.12% and 88.70%, respectively.
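The hybrid objective above is a weighted sum of the three named terms; a minimal numeric sketch, with equal weights as an illustrative assumption (the paper's weighting is not given here):

```python
import numpy as np

def bce(p, y, eps=1e-7):
    """Pixel-wise binary cross-entropy between predicted probabilities and mask."""
    p = np.clip(p, eps, 1 - eps)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

def hybrid_loss(seg_p, seg_y, cls_bce, con_loss, w=(1.0, 1.0, 1.0)):
    """Weighted sum of the three terms the abstract names: segmentation
    cross-entropy, auxiliary-classification BCE, and a supervised
    contrastive term. Equal weights are an illustrative assumption."""
    return w[0] * bce(seg_p, seg_y) + w[1] * cls_bce + w[2] * con_loss

seg_y = np.array([[0.0, 1.0], [1.0, 0.0]])
perfect = hybrid_loss(seg_y, seg_y, cls_bce=0.0, con_loss=0.0)
print(perfect < 1e-5)  # → True
```

The auxiliary classification and contrastive terms act as regularizers: they shape the shared encoder's features without directly changing the segmentation target.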