External validation of deep learning-derived 18F-FDG PET/CT delta biomarkers for loco-regional control in head and neck cancer.

Kovacs DG, Aznar M, Van Herk M, Mohamed I, Price J, Ladefoged CN, Fischer BM, Andersen FL, McPartlin A, Osorio EMV, Abravan A

PubMed · Aug 30 2025
Delta biomarkers that reflect changes in tumour burden over time can support personalised follow-up in head and neck cancer. However, their clinical use can be limited by the need for manual image segmentation. This study externally evaluates a deep learning model for automatic determination of volume change from serial 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography/computed tomography (PET/CT) scans to stratify patients by loco-regional outcome.

Patients/material and methods: An externally developed deep learning algorithm for tumour segmentation was applied to pre- and post-radiotherapy (RT, with or without concomitant chemotherapy) PET/CT scans of 50 consecutive head and neck cancer patients from The Christie NHS Foundation Trust, UK. The model, originally trained on pre-treatment scans from a different institution, was deployed to derive tumour volumes at both time points, and the AI-derived change in PET gross tumour volume (ΔPET-GTV) was calculated for each patient. Kaplan-Meier analysis assessed loco-regional control based on ΔPET-GTV, dichotomised at the cohort median. In a separate secondary analysis confined to the pre-treatment scans, a radiation oncologist qualitatively evaluated the AI-generated PET-GTV contours.

Patients with higher ΔPET-GTV (i.e. greater tumour shrinkage) had significantly improved loco-regional control (log-rank p = 0.02). At 2 years, loco-regional control was 94.1% (95% CI: 83.6-100%) in the high ΔPET-GTV group vs. 53.6% (95% CI: 32.2-89.1%) in the low ΔPET-GTV group, and only one of nine failures occurred in the high ΔPET-GTV group. Clinician review found the AI volumes acceptable for planning in 78% of cases. In two cases, the algorithm identified oropharyngeal primaries on pre-treatment PET/CT before clinical identification.

Deep learning-derived ΔPET-GTV may support clinically meaningful assessment of post-treatment disease status and risk stratification, offering a scalable alternative to manual segmentation in PET/CT follow-up.
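
The ΔPET-GTV stratification described above can be illustrated with a short, hedged sketch: synthetic per-patient volumes and follow-up data (all column names and values are placeholders, not the study's data) are dichotomised at the cohort median and compared with a Kaplan-Meier/log-rank analysis using the lifelines package.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "pre_gtv_ml": rng.uniform(5, 60, 50),        # placeholder pre-RT PET-GTV volumes
    "post_gtv_ml": rng.uniform(0, 20, 50),       # placeholder post-RT PET-GTV volumes
    "follow_up_months": rng.uniform(6, 36, 50),  # placeholder follow-up times
    "event": rng.integers(0, 2, 50),             # 1 = loco-regional failure
})

# Relative volume change: larger values mean greater tumour shrinkage after RT.
df["delta_pet_gtv"] = (df["pre_gtv_ml"] - df["post_gtv_ml"]) / df["pre_gtv_ml"]

# Dichotomise at the cohort median, as in the study.
high = df["delta_pet_gtv"] >= df["delta_pet_gtv"].median()

km_high, km_low = KaplanMeierFitter(), KaplanMeierFitter()
km_high.fit(df.loc[high, "follow_up_months"], df.loc[high, "event"], label="high delta PET-GTV")
km_low.fit(df.loc[~high, "follow_up_months"], df.loc[~high, "event"], label="low delta PET-GTV")

test = logrank_test(df.loc[high, "follow_up_months"], df.loc[~high, "follow_up_months"],
                    event_observed_A=df.loc[high, "event"], event_observed_B=df.loc[~high, "event"])
print("log-rank p =", test.p_value)
```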

Multi-DECT Image-based Interpretable Model Incorporating Habitat Radiomics and Vision Transformer Deep Learning for Preoperative Prediction of Muscle Invasion in Bladder Cancer.

Du C, Wei W, Hu M, He J, Shen J, Liu Y, Li J, Liu L

PubMed · Aug 30 2025
The research aims to evaluate the effectiveness of a multi-dual-energy CT (DECT) image-based interpretable model that integrates habitat radiomics with a 3D Vision Transformer (ViT) deep learning (DL) model for preoperative prediction of muscle invasion in bladder cancer (BCa). This retrospective study analyzed 200 BCa patients, who were divided into a training cohort (n=140) and a test cohort (n=60) in a 7:3 ratio. Univariate and multivariate analyses were performed on the DECT quantitative parameters to identify independent predictors, which were subsequently used to develop a DECT model. The K-means algorithm was employed to generate habitat sub-regions of BCa. A traditional radiomics (Rad) model, a habitat model, a ResNet18 model, a ViT model, and fusion models were constructed from the 40, 70, and 100 keV virtual monochromatic images (VMIs) in DECT. All models were evaluated using the area under the receiver operating characteristic curve (AUC), calibration curves, decision curve analysis (DCA), the net reclassification index (NRI), and the integrated discrimination improvement (IDI). The SHAP method was employed to interpret the optimal model and visualize its decision-making process. The Habitat-ViT model demonstrated superior performance compared to the other single models, achieving an AUC of 0.997 (95% CI 0.992, 1.000) in the training cohort and 0.892 (95% CI 0.814, 0.971) in the test cohort. The incorporation of DECT quantitative parameters did not improve performance. DCA and calibration curve assessments indicated that the Habitat-ViT model provided a favorable net benefit and strong calibration. Furthermore, SHAP clarified the decision-making processes underlying the model's predicted outcomes. A multi-DECT image-based interpretable model integrating habitat radiomics with ViT DL holds promise for predicting muscle invasion status in BCa, providing valuable insights for personalized treatment planning and prognostic assessment.
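
The habitat-generation step can be sketched as follows, assuming K-means clustering of voxel-wise intensities from the three VMIs inside the tumour mask; the cluster count, arrays, and feature choice here are illustrative placeholders rather than the authors' exact configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
vmi = rng.normal(size=(32, 64, 64, 3))   # placeholder 40/70/100 keV VMIs, shape (Z, Y, X, energy)
mask = np.zeros((32, 64, 64), dtype=bool)
mask[10:22, 20:44, 20:44] = True         # placeholder tumour (GTV) mask

# Each voxel inside the tumour becomes one sample described by its three VMI intensities.
voxel_features = vmi[mask]               # shape (n_voxels, 3)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(voxel_features)

# Write cluster labels back onto the image grid: 0 = background, 1..K = habitat sub-regions.
habitat_map = np.zeros(mask.shape, dtype=np.int16)
habitat_map[mask] = labels + 1
print("habitat voxel counts:", np.bincount(labels))
```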

Brain Atrophy Does Not Predict Clinical Progression in Progressive Supranuclear Palsy.

Quattrone A, Franzmeier N, Huppertz HJ, Seneca N, Petzold GC, Spottke A, Levin J, Prudlo J, Düzel E, Höglinger GU

PubMed · Aug 30 2025
Clinical progression rate is the typical primary endpoint measure in progressive supranuclear palsy (PSP) clinical trials. This longitudinal multicohort study investigated whether baseline clinical severity and regional brain atrophy could predict clinical progression in PSP-Richardson's syndrome (PSP-RS). PSP-RS patients (n = 309) from the placebo arms of clinical trials (NCT03068468, NCT01110720, NCT02985879, NCT01049399) and the DescribePSP cohort were included. We investigated associations of baseline clinical and volumetric magnetic resonance imaging (MRI) data with 1-year longitudinal PSP rating scale (PSPRS) change. Machine learning (ML) models were tested to predict individual clinical trajectories. PSP-RS patients showed a mean PSPRS score increase of 10.3 points/yr. The frontal lobe volume showed the strongest association with subsequent clinical progression (β: -0.34, P < 0.001). However, ML models did not accurately predict individual progression rates (R<sup>2</sup> < 0.15). Baseline clinical severity and brain atrophy could not predict individual clinical progression, suggesting no need for MRI-based stratification of patients in future PSP trials. © 2025 The Author(s). Movement Disorders published by Wiley Periodicals LLC on behalf of the International Parkinson and Movement Disorder Society.
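
A minimal sketch of the kind of prediction task evaluated here, using placeholder data: regress 1-year PSPRS change on baseline clinical and volumetric features and report cross-validated R²; the model choice (random forest) and the feature set are assumptions for illustration, not the study's exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(309, 12))                  # placeholder baseline PSPRS + regional volumes
y = rng.normal(loc=10.3, scale=6.0, size=309)   # placeholder 1-year PSPRS change

model = RandomForestRegressor(n_estimators=300, random_state=0)
r2_scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", r2_scores.mean())  # values below 0.15 would mirror the reported result
```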

Synthesize contrast-enhanced ultrasound image of thyroid nodules via generative adversarial networks.

Lai M, Yao J, Zhou Y, Zhou L, Jiang T, Sui L, Tang J, Zhu X, Huang J, Wang Y, Liu J, Xu D

PubMed · Aug 30 2025
This study aims to explore the feasibility of employing generative adversarial networks (GAN) to generate synthetic contrast-enhanced ultrasound (CEUS) images from grayscale ultrasound images of patients with thyroid nodules, without the need for ultrasound contrast agent injection. Patients who underwent preoperative thyroid CEUS examinations between January 2020 and July 2022 were collected retrospectively. A cycle-GAN framework integrating paired and unpaired learning modules was employed to develop the non-invasive image generation process. Synthetic CEUS images were generated for three phases: pre-arterial, plateau, and venous. The evaluation included quantitative similarity metrics, classification performance, and qualitative assessment by radiologists. CEUS videos of 360 thyroid nodules from 314 patients (45 years ± 12 [SD]; 272 women) in the internal dataset and 202 thyroid nodules from 183 patients (46 years ± 13 [SD]; 148 women) in the external dataset were included. In the external testing dataset, quantitative analysis revealed a high degree of similarity between real and synthetic CEUS images (structural similarity index, 0.89 ± 0.04; peak signal-to-noise ratio, 28.17 ± 2.42). Radiologists deemed 126 of 132 [95%] synthetic CEUS images diagnostically useful. The accuracy of radiologists in distinguishing between real and synthetic images was 55.6% (95% CI: 0.49, 0.63), with an AUC of 61.0% (95% CI: 0.65, 0.68). No statistically significant difference (p > 0.05) was observed when radiologists assessed peak intensity and enhancement patterns on real versus synthetic CEUS. Both the quantitative analysis and the radiologist evaluations showed that synthetic CEUS images generated by generative adversarial networks were similar to real CEUS images.

Question: Is it feasible to generate synthetic thyroid contrast-enhanced ultrasound images using generative adversarial networks without ultrasound contrast agent injection?

Findings: Compared to real contrast-enhanced ultrasound images, synthetic contrast-enhanced ultrasound images exhibited high similarity and image quality.

Clinical relevance: This non-invasive and intelligent transformation may reduce the requirement for ultrasound contrast agents in certain cases, particularly in scenarios where contrast agent administration is contraindicated, such as in patients with allergies, poor tolerance, or limited access to resources.
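
The reported similarity metrics (structural similarity index and peak signal-to-noise ratio) between real and synthetic CEUS frames can be computed as in the following sketch, which uses scikit-image on random placeholder arrays rather than ultrasound data.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(0)
real_ceus = rng.random((256, 256)).astype(np.float32)                       # placeholder real frame
synthetic_ceus = np.clip(real_ceus + rng.normal(scale=0.05, size=(256, 256)),
                         0, 1).astype(np.float32)                           # placeholder synthetic frame

ssim = structural_similarity(real_ceus, synthetic_ceus, data_range=1.0)
psnr = peak_signal_noise_ratio(real_ceus, synthetic_ceus, data_range=1.0)
print(f"SSIM={ssim:.2f}, PSNR={psnr:.2f} dB")
```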

A Modality-agnostic Multi-task Foundation Model for Human Brain Imaging

Peirong Liu, Oula Puonti, Xiaoling Hu, Karthik Gopinath, Annabel Sorby-Adams, Daniel C. Alexander, W. Taylor Kimberly, Juan E. Iglesias

arXiv preprint · Aug 30 2025
Recent learning-based approaches have made astonishing advances in calibrated medical imaging like computerized tomography (CT), yet they struggle to generalize in uncalibrated modalities -- notably magnetic resonance (MR) imaging, where performance is highly sensitive to the differences in MR contrast, resolution, and orientation. This prevents broad applicability to diverse real-world clinical protocols. Here we introduce BrainFM, a modality-agnostic, multi-task vision foundation model for human brain imaging. With the proposed "mild-to-severe" intra-subject generation and "real-synth" mix-up training strategy, BrainFM is resilient to the appearance of acquired images (e.g., modality, contrast, deformation, resolution, artifacts), and can be directly applied to five fundamental brain imaging tasks, including image synthesis for CT and T1w/T2w/FLAIR MRI, anatomy segmentation, scalp-to-cortical distance, bias field estimation, and registration. We evaluate the efficacy of BrainFM on eleven public datasets, and demonstrate its robustness and effectiveness across all tasks and input modalities. Code is available at https://github.com/jhuldr/BrainFM.
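
A hedged sketch of a generic "real-synth" mix-up step is shown below: a real volume and its synthetic counterpart are blended with a Beta-sampled weight. This is the common mixup formulation, not necessarily BrainFM's exact training recipe; see the linked repository for the authors' implementation.

```python
import torch

def real_synth_mixup(real: torch.Tensor, synth: torch.Tensor, alpha: float = 0.4) -> torch.Tensor:
    """Convex combination of a real and a synthetic volume of the same shape."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * real + (1.0 - lam) * synth

real = torch.rand(1, 1, 64, 64, 64)   # placeholder real MR volume
synth = torch.rand(1, 1, 64, 64, 64)  # placeholder synthetic volume of the same subject
mixed = real_synth_mixup(real, synth)
print(mixed.shape)
```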

A network-assisted joint image and motion estimation approach for robust 3D MRI motion correction across severity levels.

Nghiem B, Wu Z, Kashyap S, Kasper L, Uludağ K

PubMed · Aug 29 2025
The purpose of this work was to develop and evaluate a novel method that leverages neural networks and physical modeling for 3D motion correction at different levels of corruption. The novel method ("UNet+JE") combines an existing neural network ("UNet<sub>mag</sub>") with a physics-informed algorithm for jointly estimating motion parameters and the motion-compensated image ("JE"). UNet<sub>mag</sub> and UNet+JE were each trained separately on two training datasets with different distributions of motion corruption severity and compared to JE as a benchmark. All five resulting methods were tested on T<sub>1</sub>w 3D MPRAGE scans of healthy participants with simulated (n = 40) and in vivo (n = 10) motion corruption ranging from mild to severe. UNet+JE provided better motion correction than UNet<sub>mag</sub> (p < 10<sup>-2</sup> for all metrics on both simulated and in vivo data) under both training datasets. UNet<sub>mag</sub> exhibited residual image artifacts and blurring, as well as greater susceptibility to data distribution shifts than UNet+JE. UNet+JE and JE did not significantly differ in image correction quality (p > 0.05 for all metrics), even under strong distribution shifts for UNet+JE. However, UNet+JE reduced runtimes by median factors of 2.00 to 3.80 in the simulation study and 4.05 in the in vivo study. UNet+JE benefited from the robustness of joint estimation and the fast image improvement provided by the neural network, enabling high-quality 3D image correction across a wide range of motion corruption within shorter runtimes.
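
The physical model that joint estimation ("JE") relies on, and the motion-parameter search it alternates with the image update, can be illustrated in 2D with a single translational shift; this is a didactic simplification with placeholder data, not the paper's 3D implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))
dy = 3                                           # vertical shift of the object, in pixels

shifted = np.roll(img, dy, axis=0)               # the "moved" image
ky = np.fft.fftfreq(img.shape[0])[:, None]
ramp = lambda d: np.exp(-2j * np.pi * ky * d)    # k-space phase ramp equivalent to a shift d

# Forward model check: translation in image space <=> linear phase in k-space.
print(np.allclose(np.fft.fft2(shifted), np.fft.fft2(img) * ramp(dy)))   # True

# Motion-update step of an alternating scheme: grid search for the shift that best
# explains the "acquired" (shifted) k-space given the current image estimate.
data = np.fft.fft2(shifted)
candidates = np.arange(-10, 11)
residuals = [np.linalg.norm(np.fft.fft2(img) * ramp(d) - data) for d in candidates]
print("estimated shift:", candidates[int(np.argmin(residuals))])        # 3
```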

Advancing Positron Emission Tomography Image Quantification: Artificial Intelligence-Driven Methods, Clinical Challenges, and Emerging Opportunities in Long-Axial Field-of-View Positron Emission Tomography/Computed Tomography Imaging.

Yousefirizi F, Dassanayake M, Lopez A, Reader A, Cook GJR, Mingels C, Rahmim A, Seifert R, Alberts I

PubMed · Aug 29 2025
Positron emission tomography/computed tomography (PET/CT) imaging plays a pivotal role in oncology, aiding tumor metabolism assessment, disease staging, and therapy response evaluation. Traditionally, semi-quantitative metrics such as SUVmax have been extensively used, though these methods face limitations in reproducibility and predictive capability. Recent advancements in artificial intelligence (AI), particularly deep learning, have revolutionized PET imaging, significantly enhancing image quantification accuracy and biomarker extraction capabilities, thereby enabling more precise clinical decision-making.
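
As a concrete reference point for the semi-quantitative metrics mentioned here, a body-weight-normalised SUV and SUVmax can be computed as in the following sketch; the activity values, injected dose, and body weight are placeholders, and decay correction is omitted for brevity.

```python
import numpy as np

def suv_bw(activity_bq_per_ml: np.ndarray, injected_dose_bq: float, weight_g: float) -> np.ndarray:
    """SUV = tissue activity concentration / (injected dose / body weight), assuming ~1 g/mL tissue."""
    return activity_bq_per_ml / (injected_dose_bq / weight_g)

rng = np.random.default_rng(0)
lesion_voi = rng.normal(loc=12_000, scale=2_000, size=(20, 20, 20))  # placeholder Bq/mL values in a VOI
suv_map = suv_bw(lesion_voi, injected_dose_bq=300e6, weight_g=75_000)
print("SUVmax:", float(suv_map.max()), "SUVmean:", float(suv_map.mean()))
```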

Ultrafast Multi-tracer Total-body PET Imaging Using a Transformer-Based Deep Learning Model.

Sun H, Sanaat A, Yi W, Salimi Y, Huang Y, Decorads CE, Castarède I, Wu H, Lu L, Zaidi H

PubMed · Aug 29 2025
Reducing PET scan acquisition time is in constant demand, as it minimizes motion-related artifacts and improves patient comfort. This study proposes a deep-learning framework for synthesizing diagnostic-quality PET images from ultrafast scans in multi-tracer total-body PET imaging. A retrospective analysis was conducted on clinical uEXPLORER PET/CT datasets from a single institution, including [<sup>18</sup>F]FDG (N=50), [<sup>18</sup>F]FAPI (N=45) and [<sup>68</sup>Ga]FAPI (N=60) studies. Standard 300-s acquisitions were performed for each patient, with ultrafast PET images (3, 6, 15, 30, and 40 s) generated through list-mode data truncation. We developed two variants of a 3D SwinUNETR-V2 architecture: Model 1 (PET-only input) and Model 2 (PET+CT fusion input). The proposed methodology was trained and tested on all three datasets using 5-fold cross-validation. Both Model 1 and Model 2 significantly enhanced subjective image quality and lesion detectability in multi-tracer PET images compared to the original ultrafast scans, and both improved objective image quality metrics. For the [<sup>18</sup>F]FDG datasets, both approaches improved peak signal-to-noise ratio (PSNR) across ultra-short acquisitions: 3 s: 48.169±6.121 (Model 1) vs. 48.123±6.103 (Model 2) vs. 44.092±7.508 (ultrafast), p < 0.001; 6 s: 48.997±5.960 vs. 48.461±5.897 vs. 46.503±7.190, p < 0.001; 15 s: 50.310±5.674 vs. 50.042±5.734 vs. 49.331±6.732, p < 0.001. The proposed Model 1 and Model 2 effectively enhance the image quality of multi-tracer total-body PET scans with ultrafast acquisition times, and the predicted PET images demonstrate image quality and lesion detectability comparable to the standard acquisitions.
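
The paired image-quality comparison underlying the reported PSNR results can be sketched as follows: per-case PSNR of ultrafast and model-enhanced images against the 300-s reference, compared with a paired Wilcoxon test; all arrays here are random placeholders rather than PET data.

```python
import numpy as np
from scipy.stats import wilcoxon
from skimage.metrics import peak_signal_noise_ratio

rng = np.random.default_rng(0)
psnr_fast, psnr_enhanced = [], []
for _ in range(20):  # placeholder cases
    reference = rng.random((64, 64, 64))                                     # stands in for the 300-s scan
    ultrafast = reference + rng.normal(scale=0.10, size=reference.shape)     # noisier ultrafast image
    enhanced = reference + rng.normal(scale=0.03, size=reference.shape)      # stand-in for the model output
    psnr_fast.append(peak_signal_noise_ratio(reference, ultrafast, data_range=1.0))
    psnr_enhanced.append(peak_signal_noise_ratio(reference, enhanced, data_range=1.0))

print("ultrafast PSNR:", np.mean(psnr_fast), "enhanced PSNR:", np.mean(psnr_enhanced))
print("paired Wilcoxon p =", wilcoxon(psnr_fast, psnr_enhanced).pvalue)
```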

Radiomics and deep learning methods for predicting the growth of subsolid nodules based on CT images.

Chen J, Yan W, Shi Y, Pan X, Yu R, Wang D, Zhang X, Wang L, Liu K

PubMed · Aug 29 2025
The growth of subsolid nodules (SSNs) is a strong predictor of lung adenocarcinoma. However, the heterogeneity in the biological behavior of SSNs poses significant challenges for clinical management. This study aimed to evaluate the clinical utility of deep learning and radiomics approaches in predicting SSN growth based on computed tomography (CT) images. A total of 353 patients with 387 SSNs were enrolled in this retrospective study. All cases were divided into growth (n = 195) and non-growth (n = 192) groups and were randomly assigned to the training (n = 247), validation (n = 62), and test (n = 78) sets in a ratio of 3:1:1. We obtained 1454 radiomics features from each volumetric region of interest (VOI). The Pearson correlation coefficient and least absolute shrinkage and selection operator (LASSO) methods were used for radiomics signature determination. A ResNet18 architecture was used to construct the deep-learning model. The two models were combined via a ResNet-based fusion network to construct an ensemble model. The area under the receiver operating characteristic curve (AUC) was calculated and decision curve analysis (DCA) was performed to determine the clinical performance of the three models. The combined model (AUC = 0.926, 95% CI: 0.869-0.977) outperformed the radiomics (AUC = 0.894, 95% CI: 0.808-0.957) and deep-learning (AUC = 0.802, 95% CI: 0.695-0.899) models in the test set. The DeLong test showed a statistically significant difference between the combined model and the deep-learning model (P = .012), and DCA supported the clinical value of the combined model. This study demonstrates that integrating radiomics with deep learning offers promising potential for the preoperative prediction of SSN growth.
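
The radiomics signature determination step (Pearson filtering followed by LASSO) can be sketched as below, with random placeholder features standing in for the 1454 radiomics features per VOI; the correlation threshold and the use of LassoCV are illustrative assumptions rather than the study's exact settings.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(247, 200)),
                 columns=[f"feat_{i}" for i in range(200)])   # placeholder training-set features
y = rng.integers(0, 2, size=247)                              # placeholder growth vs non-growth labels

# Step 1: drop one feature of every pair with |Pearson r| > 0.9.
corr = X.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
drop = [col for col in upper.columns if (upper[col] > 0.9).any()]
X_filtered = X.drop(columns=drop)

# Step 2: LASSO with cross-validated regularisation; non-zero coefficients form the signature.
lasso = LassoCV(cv=5, random_state=0).fit(StandardScaler().fit_transform(X_filtered), y)
signature = X_filtered.columns[lasso.coef_ != 0]
print(f"{len(signature)} features selected for the radiomics signature")
```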

Liver fat quantification at 0.55 T enabled by locally low-rank enforced deep learning reconstruction.

Helo M, Nickel D, Kannengiesser S, Kuestner T

PubMed · Aug 29 2025
The emergence of new medications for fatty liver conditions has increased the need for reliable and widely available assessment of MRI proton density fat fraction (MRI-PDFF). Whereas low-field MRI presents a promising solution, its utilization is challenging due to the low SNR. This work aims to enhance SNR and enable precise PDFF quantification at low-field MRI using a novel locally low-rank deep learning-based (LLR-DL) reconstruction. LLR-DL alternates between regularized SENSE and a neural network (U-Net) throughout several iterations, operating on complex-valued data. The network processes the spectral projection onto singular value bases, which are computed on local patches across the echoes dimension. The output of the network is recast into the basis of the original echoes and used as a prior for the following iteration. The final echoes are processed by a multi-echo Dixon algorithm. Two different protocols were proposed for imaging at 0.55 T. An iron-and-fat phantom and 10 volunteers were scanned on both 0.55 and 1.5 T systems. Linear regression, t-statistics, and Bland-Altman analyses were conducted. LLR-DL achieved significantly improved image quality compared to the conventional reconstruction technique, with a 32.7% increase in peak SNR and a 25% improvement in structural similarity index. PDFF repeatability was 2.33% in phantoms (0% to 100%) and 0.79% in vivo (3% to 18%), with narrow cross-field strength limits of agreement below 1.67% in phantoms and 1.75% in vivo. An LLR-DL reconstruction was developed and investigated to enable precise PDFF quantification at 0.55 T and improve consistency with 1.5 T results.
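
The locally low-rank operation at the core of LLR-DL can be sketched as follows: for each local patch, form a Casorati matrix (voxels × echoes), take its SVD, and keep only the leading singular components; patch size, rank, and the complex-valued data are illustrative placeholders, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
echoes = rng.normal(size=(6, 64, 64)) + 1j * rng.normal(size=(6, 64, 64))  # placeholder multi-echo data
patch, rank = 8, 2
denoised = np.zeros_like(echoes)

for y in range(0, 64, patch):
    for x in range(0, 64, patch):
        block = echoes[:, y:y + patch, x:x + patch]        # (n_echoes, patch, patch)
        casorati = block.reshape(block.shape[0], -1).T     # (voxels, echoes)
        u, s, vh = np.linalg.svd(casorati, full_matrices=False)
        s[rank:] = 0                                       # truncate to a local low-rank model
        low_rank = (u * s) @ vh
        denoised[:, y:y + patch, x:x + patch] = low_rank.T.reshape(block.shape)
```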