
Association Between Body Composition and Cardiometabolic Outcomes: A Prospective Cohort Study.

Jung M, Reisert M, Rieder H, Rospleszcz S, Lu MT, Bamberg F, Raghu VK, Weiss J

Sep 30 2025
Current measures of adiposity have limitations. Artificial intelligence (AI) models may accurately and efficiently estimate body composition (BC) from routine imaging. To assess the association of AI-derived BC compartments from magnetic resonance imaging (MRI) with cardiometabolic outcomes. Prospective cohort study. UK Biobank (UKB) observational cohort study. 33 432 UKB participants with no history of diabetes, myocardial infarction, or ischemic stroke (mean age, 65.0 years [SD, 7.8]; mean body mass index [BMI], 25.8 kg/m² [SD, 4.2]; 52.8% female) who underwent whole-body MRI. An AI tool was applied to MRI to derive 3-dimensional (3D) BC measures, including subcutaneous adipose tissue (SAT), visceral adipose tissue (VAT), skeletal muscle (SM), and SM fat fraction (SMFF), and to calculate their relative distribution. Sex-stratified associations of these relative compartments with incident diabetes mellitus (DM) and major adverse cardiovascular events (MACE) were assessed using restricted cubic splines. Adipose tissue compartments and SMFF increased and SM decreased with age. After adjustment for age, smoking, and hypertension, greater adiposity and lower SM proportion were associated with higher incidence of DM and MACE after a median follow-up of 4.2 years in sex-stratified analyses. However, after additional adjustment for BMI and waist circumference (WC), only elevated VAT proportion and high SMFF (top fifth percentile in the cohort for each) were associated with increased risk for DM (respective adjusted hazard ratios [aHRs], 2.16 [95% CI, 1.59 to 2.94] and 1.27 [CI, 0.89 to 1.80] in females; 1.84 [CI, 1.48 to 2.27] and 1.84 [CI, 1.43 to 2.37] in males) and MACE (1.37 [CI, 1.00 to 1.88] and 1.72 [CI, 1.23 to 2.41] in females; 1.22 [CI, 0.99 to 1.50] and 1.25 [CI, 0.98 to 1.60] in males). In addition, in males only, those in the bottom fifth percentile of SM proportion had increased risk for DM (aHR, 1.96 [CI, 1.45 to 2.65]) and MACE (aHR, 1.55 [CI, 1.15 to 2.09]). Results may not be generalizable to non-White individuals or to people outside the United Kingdom. AI-derived BC proportions were strongly associated with cardiometabolic risk, but after BMI and WC were accounted for, only VAT proportion and SMFF (both sexes) and SM proportion (males only) added prognostic information. Funding: None.
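As a rough illustration of the modeling described above, the sketch below fits a sex-stratified Cox model with a restricted cubic spline for one relative body-composition compartment. This is not the authors' code; the input file and column names (vat_prop, event_dm, and so on) are hypothetical stand-ins for the UKB variables.

```python
# Illustrative sketch only: sex-stratified Cox regression with a restricted
# cubic spline for relative VAT. File and column names are hypothetical.
import pandas as pd
from patsy import dmatrix
from lifelines import CoxPHFitter

df = pd.read_csv("ukb_body_composition.csv")  # hypothetical UKB extract

for sex, grp in df.groupby("sex"):
    grp = grp.reset_index(drop=True)
    # Natural (restricted) cubic spline basis for the relative VAT compartment
    spline = dmatrix("cr(vat_prop, df=4) - 1", grp, return_type="dataframe")
    covars = grp[["age", "smoking", "hypertension", "bmi", "wc",
                  "follow_up_years", "event_dm"]]
    X = pd.concat([covars, spline], axis=1)
    cph = CoxPHFitter()
    cph.fit(X, duration_col="follow_up_years", event_col="event_dm")
    print(f"--- {sex} ---")
    print(cph.hazard_ratios_)
```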

Low-Count PET Image Reconstruction with Generalized Sparsity Priors via Unrolled Deep Networks.

Fu M, Fang M, Liao B, Liang D, Hu Z, Wu FX

Sep 29 2025
Deep learning has demonstrated remarkable efficacy in reconstructing low-count PET (positron emission tomography) images, attracting considerable attention in the medical imaging community. However, most existing deep learning approaches have not fully exploited the unique physical characteristics of PET imaging in the design of their fidelity and prior regularization terms, constraining model performance and interpretability. In light of these considerations, we introduce an unrolled deep network based on maximum likelihood estimation for the Poisson distribution and a Generalized domain transformation for Sparsity learning, dubbed GS-Net. To address this complex optimization challenge, we employ the Alternating Direction Method of Multipliers (ADMM) framework, integrating a modified Expectation Maximization (EM) approach for the primary objective and a shrinkage thresholding approach for the L1 norm term. Additionally, within this unrolled deep network, all hyperparameters are adaptively adjusted through end-to-end learning, eliminating the need for manual parameter tuning. In extensive experiments on simulated patient brain datasets and real patient whole-body clinical datasets with multiple count levels, our method demonstrates superior performance over traditional non-iterative and iterative reconstruction, deep learning-based direct reconstruction, and hybrid unrolled methods in both qualitative and quantitative evaluations.
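The two inner updates named in the abstract, a Poisson EM step for the data fidelity and shrinkage thresholding for the L1 term, can be sketched in isolation as follows. This is a simplified numpy illustration, not GS-Net: the system matrix A and sparsifying transform T are hypothetical placeholders, and the EM step omits the ADMM coupling penalty for brevity.

```python
# Simplified numpy sketch of the two ADMM subproblem updates (not GS-Net).
# A: hypothetical system matrix; T: hypothetical learned sparsifying transform.
import numpy as np

def soft_threshold(z, tau):
    # Proximal operator of tau * ||.||_1, i.e., shrinkage thresholding
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def mlem_step(x, y, A, eps=1e-12):
    # One EM update for the Poisson likelihood: x <- x / (A^T 1) * A^T (y / Ax)
    sens = A.T @ np.ones_like(y)
    return x / np.maximum(sens, eps) * (A.T @ (y / np.maximum(A @ x, eps)))

def admm_iteration(x, z, u, y, A, T, tau):
    x = mlem_step(x, y, A)               # (approximate) fidelity subproblem
    z = soft_threshold(T @ x + u, tau)   # L1 subproblem in the transform domain
    u = u + T @ x - z                    # scaled dual update
    return x, z, u
```

In the unrolled network each iteration becomes a layer, with tau and the other step parameters learned end to end rather than hand-tuned.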

Automatic Body Region Classification in CT Scans Using Deep Learning.

Golzan M, Lee H, Ngatched TMN, Zhang L, Michalak M, Chow V, Beg MF, Popuri K

Sep 26 2025
Accurate classification of anatomical regions in computed tomography (CT) scans is essential for optimizing downstream diagnostic and analytic workflows in medical imaging. We demonstrate the high performance that deep learning (DL) algorithms can achieve in classifying body regions in CT images acquired under various protocols. Our model was trained on a dataset of 5485 anonymized Neuroimaging Informatics Technology Initiative (NIfTI) CT scans collected from 45 different health centers. The dataset was split into 3290 scans for training, 1097 for validation, and 1098 for testing. Each CT scan was classified into one of six classes covering the whole body: chest, abdomen, pelvis, chest-abdomen, abdomen-pelvis, and chest-abdomen-pelvis. The DL model achieved an accuracy, precision, recall, and F1-score of 97.53% (95% CI: 96.62%, 98.45%), 97.56% (95% CI: 96.6%, 98.4%), 97.6% (95% CI: 96.7%, 98.5%), and 97.56% (95% CI: 96.6%, 98.4%), respectively, in identifying the different body regions. These findings demonstrate the robustness of our approach to wide variation in both acquisition protocols and patient demographics. This study underlines the potential that DL holds for medical imaging and, in particular, for automating body region classification in CT. Our findings suggest that such models could be implemented in clinical routines to improve diagnostic efficiency and consistency.
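The form of the reported metrics (point estimate with a 95% CI over the held-out scans) can be reproduced with a percentile bootstrap, as in the sketch below. The labels here are randomly simulated stand-ins, not the study's data.

```python
# Sketch of the evaluation: accuracy with a percentile-bootstrap 95% CI over
# the 1098 test scans. Labels are simulated for illustration only.
import numpy as np
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def bootstrap_ci(y_true, y_pred, metric, n_boot=2000):
    n = len(y_true)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)      # resample test scans with replacement
        stats.append(metric(y_true[idx], y_pred[idx]))
    return np.percentile(stats, [2.5, 97.5])

# Simulated stand-ins for the six-class test labels (0..5)
y_true = rng.integers(0, 6, 1098)
y_pred = y_true.copy()
wrong = rng.random(1098) < 0.025         # ~2.5% error rate for illustration
y_pred[wrong] = rng.integers(0, 6, wrong.sum())

print(accuracy_score(y_true, y_pred))
print(bootstrap_ci(y_true, y_pred, accuracy_score))
```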

Performance Comparison of Cutting-Edge Large Language Models on the ACR In-Training Examination: An Update for 2025.

Young A, Paloka R, Islam A, Prasanna P, Hill V, Payne D

Sep 24 2025
This study represents a continuation of prior work by Payne et al. evaluating large language model (LLM) performance on radiology board-style assessments, specifically the ACR diagnostic radiology in-training examination (DXIT). Building on earlier findings with GPT-4, we assess the performance of newer models, including GPT-4o, GPT-o1, GPT-o3, Claude, Gemini, and Grok, on standardized DXIT questions. In addition to overall performance, we compare model accuracy on text-based versus image-based questions to assess multi-modal reasoning capabilities. As a secondary aim, we investigate the potential impact of data contamination by comparing model performance on original versus revised image-based questions. Seven LLMs (GPT-4, GPT-4o, GPT-o1, GPT-o3, Claude 3.5 Sonnet, Gemini 1.5 Pro, and Grok 2.0) were evaluated on 106 publicly available DXIT questions. Each model was prompted with a standardized instruction set to simulate a radiology resident answering board-style questions. For each question, the model's selected answer, rationale, and confidence score were recorded. Unadjusted accuracy (based on correct answer selection) and logic-adjusted accuracy (based on clinical reasoning pathways) were calculated. Subgroup analysis compared model performance on text-based versus image-based questions. Additionally, 63 image-based questions were revised to test novel reasoning while preserving the original diagnostic image, to assess the impact of potential training data contamination. Across the 106 DXIT questions, GPT-o1 demonstrated the highest unadjusted accuracy (71.7%), followed closely by GPT-4o (69.8%) and GPT-o3 (68.9%). GPT-4 and Grok 2.0 scored lower (59.4% and 52.8%, respectively), and Claude 3.5 Sonnet had the lowest unadjusted accuracy (34.9%). Similar trends were observed for logic-adjusted accuracy, with GPT-o1 (60.4%), GPT-4o (59.4%), and GPT-o3 (59.4%) again outperforming the other models, while Grok 2.0 and Claude 3.5 Sonnet lagged behind (34.0% and 30.2%, respectively). GPT-4o's performance was significantly higher on text-based questions than on image-based ones. Unadjusted accuracy on the revised DXIT questions was 49.2%, compared with 56.1% on the matched original questions; logic-adjusted accuracy on the revised questions was 40.0%, compared with 44.4% on the matched originals. No significant difference in performance was observed between original and revised questions. Modern LLMs, especially those from OpenAI, demonstrate strong and improving performance on board-style radiology assessments. Comparable performance on revised prompts suggests that data contamination may have played a limited role. As LLMs improve, they hold strong potential to support radiology resident learning through personalized feedback and board-style question review.
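A minimal sketch of the scoring scheme described above, distinguishing unadjusted accuracy (answer choice only) from logic-adjusted accuracy (answer plus a human-graded reasoning pathway), might look like the following; the record fields are hypothetical.

```python
# Hypothetical scoring records for a DXIT-style evaluation (not the study code).
from dataclasses import dataclass

@dataclass
class GradedResponse:
    chosen: str            # model's selected answer, e.g., "B"
    correct: str           # answer key
    reasoning_valid: bool  # human-graded clinical reasoning pathway
    image_based: bool      # text-only vs. image-based question

def accuracies(responses):
    n = len(responses)
    unadjusted = sum(r.chosen == r.correct for r in responses) / n
    logic_adjusted = sum(r.chosen == r.correct and r.reasoning_valid
                         for r in responses) / n
    return unadjusted, logic_adjusted

def split_by_modality(responses):
    text = [r for r in responses if not r.image_based]
    image = [r for r in responses if r.image_based]
    return accuracies(text), accuracies(image)
```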

Fully Automated Image-Based Multiplexing of Serial PET/CT Imaging for Facilitating Comprehensive Disease Phenotyping.

Shiyam Sundar LK, Gutschmayer S, Pires M, Ferrara D, Nguyen T, Abdelhafez YG, Spencer B, Cherry SR, Badawi RD, Kersting D, Fendler WP, Kim MS, Lassen ML, Hasbak P, Schmidt F, Linder P, Mu X, Jiang Z, Abenavoli EM, Sciagrà R, Frille A, Wirtz H, Hesse S, Sabri O, Bailey D, Chan D, Callahan J, Hicks RJ, Beyer T

Sep 18 2025
Combined PET/CT imaging provides critical insights into both anatomic and molecular processes, yet traditional single-tracer approaches limit multidimensional disease phenotyping. To address this, we developed the PET Unified Multitracer Alignment (PUMA) framework, an open-source postprocessing tool that multiplexes serial PET/CT scans for comprehensive voxelwise tissue characterization. Methods: PUMA uses artificial intelligence-based CT segmentation from multiorgan objective segmentation to generate multilabel maps of 24 body regions, which guide a 2-step registration: affine alignment followed by symmetric diffeomorphic registration. Tracer images are then normalized and assigned to red-green-blue channels for simultaneous visualization of up to 3 tracers. The framework was evaluated on longitudinal PET/CT scans from 114 subjects across multiple centers and vendors. Rigid, affine, and deformable registration methods were compared for optimal coregistration. Performance was assessed using the Dice similarity coefficient (DSC) for organ alignment and absolute percentage differences in organ intensity and tumor SUVmean. Results: Deformable registration consistently achieved superior alignment, with DSC values exceeding 0.90 in 60% of organs while keeping organ intensity differences below 3%; similarly, SUVmean differences for tumors were minimal at 1.6% ± 0.9%, confirming that PUMA preserves quantitative PET data while enabling robust spatial multiplexing. Conclusion: PUMA provides a vendor-independent solution for postacquisition multiplexing of serial PET/CT images, integrating complementary tracer data voxelwise into a composite image without modifying clinical protocols. This enhances multidimensional disease phenotyping and supports better diagnostic and therapeutic decisions based on serial multitracer PET/CT imaging.
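A rough sketch of the two-step alignment and RGB multiplexing, here using ANTsPy rather than the PUMA code itself (file paths and tracer choices are hypothetical):

```python
# Rough ANTsPy sketch (not the PUMA source); file paths are hypothetical.
import ants
import numpy as np

fixed_ct = ants.image_read("ct_visit1.nii.gz")
moving_ct = ants.image_read("ct_visit2.nii.gz")

# "SyN" runs an affine initialization followed by symmetric diffeomorphic
# registration, mirroring the 2-step scheme described above.
reg = ants.registration(fixed=fixed_ct, moving=moving_ct,
                        type_of_transform="SyN")

# Warp the second visit's PET with the CT-derived transforms
pet2 = ants.image_read("pet_visit2.nii.gz")
pet2_aligned = ants.apply_transforms(fixed=fixed_ct, moving=pet2,
                                     transformlist=reg["fwdtransforms"])

def normalize(img):
    arr = img.numpy().astype(np.float32)
    return np.clip(arr / np.percentile(arr[arr > 0], 99.5), 0.0, 1.0)

# One tracer per red-green-blue channel; an empty slot is left for a third
ch1 = normalize(ants.image_read("pet_visit1.nii.gz"))
ch2 = normalize(pet2_aligned)
rgb = np.stack([ch1, ch2, np.zeros_like(ch1)], axis=-1)
```

Estimating the deformation on the CT and applying it to the paired PET, as apply_transforms does here, is what lets the quantitative PET values survive the alignment.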

Automated Field of View Prescription for Whole-body Magnetic Resonance Imaging Using Deep Learning Based Body Region Segmentations.

Quinsten AS, Bojahr C, Nassenstein K, Straus J, Holtkamp M, Salhöfer L, Umutlu L, Forsting M, Haubold J, Wen Y, Kohnke J, Borys K, Nensa F, Hosch R

Sep 16 2025
Manual field-of-view (FoV) prescription in whole-body magnetic resonance imaging (WB-MRI) is vital for ensuring comprehensive anatomic coverage and minimizing artifacts, thereby enhancing image quality. However, this procedure is time-consuming, subject to operator variability, and adversely impacts both patient comfort and workflow efficiency. To overcome these limitations, an automated system was developed and evaluated that prescribes multiple consecutive FoV stations for WB-MRI using deep learning (DL)-based three-dimensional anatomic segmentations. A total of 374 patients (mean age: 50.5 ± 18.2 y; 52% female) who underwent WB-MRI, including T2-weighted Half-Fourier acquisition single-shot turbo spin-echo (T2-HASTE) and fast whole-body localizer (FWBL) sequences acquired during continuous table movement on a 3T MRI system, were retrospectively collected between March 2012 and January 2025. An external cohort of 10 patients, acquired on two 1.5T scanners, was used for generalizability testing. Complementary nnUNet-v2 models were fine-tuned to segment tissue compartments, organs, and a whole-body (WB) outline on FWBL images. From these predicted segmentations, 5 consecutive FoVs (head/neck, thorax, liver, pelvis, and spine) were generated. Segmentation accuracy was quantified by the Sørensen-Dice coefficient (DSC), precision (P), recall (R), and specificity (S). Clinical utility was assessed on 30 test cases by 4 blinded experts using Likert scores and a 4-way ranking against 3 radiographer prescriptions. Interrater reliability and statistical comparisons were assessed using the intraclass correlation coefficient (ICC), Kendall W, and Friedman and Wilcoxon signed-rank tests. Mean DSCs were 0.98 for torso (P = 0.98, R = 0.98, S = 1.00), 0.96 for head/neck (P = 0.95, R = 0.96, S = 1.00), 0.94 for abdominal cavity (P = 0.95, R = 0.94, S = 1.00), 0.90 for thoracic cavity (P = 0.90, R = 0.91, S = 1.00), 0.86 for liver (P = 0.85, R = 0.87, S = 1.00), and 0.63 for spinal cord (P = 0.64, R = 0.63, S = 1.00). Clinical utility was supported by assessments from 2 expert radiologists and 2 radiographers, with 98.3% and 87.5% of cases rated as clinically acceptable in the internal and external test data sets, respectively. Predicted FoVs received the highest ranking in 60% of cases and placed within the top 2 in 85.8% of cases, outperforming radiographers with 9 and 13 years of experience (P < 0.001) and matching the performance of a radiographer with 20 years of experience. DL-based three-dimensional anatomic segmentations enable accurate and reliable multistation FoV prescription for WB-MRI, achieving expert-level performance while substantially reducing manual workload. Automated FoV planning has the potential to standardize WB-MRI acquisition, reduce interoperator variability, and enhance workflow efficiency, thereby facilitating broader clinical adoption.
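Deriving an FoV station from a predicted segmentation reduces to taking the label's bounding box and padding it with a safety margin, as sketched below; the label ID and margin are hypothetical, not values from the paper.

```python
# Sketch: FoV box = padded bounding box of a predicted label. Label ID and
# margin are hypothetical illustrations.
import numpy as np
import nibabel as nib

def fov_from_mask(mask, label, margin_mm, spacing_mm):
    idx = np.argwhere(mask == label)
    lo, hi = idx.min(axis=0), idx.max(axis=0)
    pad = np.ceil(margin_mm / np.asarray(spacing_mm)).astype(int)
    lo = np.maximum(lo - pad, 0)
    hi = np.minimum(hi + pad, np.asarray(mask.shape) - 1)
    return lo, hi  # voxel-space corners of the prescribed FoV station

seg = nib.load("fwbl_segmentation.nii.gz")   # hypothetical nnU-Net output
mask = np.asarray(seg.dataobj)
spacing = seg.header.get_zooms()[:3]
liver_lo, liver_hi = fov_from_mask(mask, label=5, margin_mm=20.0,
                                   spacing_mm=spacing)
```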

Fixed point method for PET reconstruction with learned plug-and-play regularization.

Savanier M, Comtat C, Sureau F

Sep 10 2025
Objective: Deep learning has shown great promise for improving medical image reconstruction, including PET. However, concerns remain about the stability and robustness of these methods, especially when trained on limited data. This work aims to explore the use of the Plug-and-Play (PnP) framework in PET reconstruction to address these concerns. Approach: We propose a convergent PnP algorithm for low-count PET reconstruction based on the Douglas-Rachford splitting method. We consider several denoisers trained to satisfy fixed-point conditions, with convergence properties ensured either during training or by design, including a spectrally normalized network and a deep equilibrium model. We evaluate the bias-standard deviation tradeoff across clinically relevant regions and an unseen pathological case in a synthetic experiment and a real study. Comparisons are made with model-based iterative reconstruction, post-reconstruction denoising, a deep end-to-end unfolded network, and PnP with a Gaussian denoiser. Main Results: Our method achieves lower bias than post-reconstruction processing and reduced standard deviation at matched bias compared to model-based iterative reconstruction. While spectral normalization underperforms in generalization, the deep equilibrium model remains competitive with convolutional networks for plug-and-play reconstruction and generalizes better to the unseen pathology. Compared to the end-to-end unfolded network, it also generalizes more consistently. Significance: This study demonstrates the potential of the PnP framework to improve image quality and quantification accuracy in PET reconstruction. It also highlights the importance of how convergence conditions are imposed on the denoising network to ensure robust and generalizable performance.
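The Douglas-Rachford PnP iteration the abstract builds on can be written in a few lines once the data-fidelity proximal operator and the learned denoiser are given; both are placeholders here, not the paper's implementation.

```python
# Schematic Douglas-Rachford PnP loop. prox_f (the Poisson data-fidelity
# proximal step) and denoiser are placeholder callables.
import numpy as np

def douglas_rachford_pnp(z0, prox_f, denoiser, n_iter=100):
    z = z0.copy()
    for _ in range(n_iter):
        y = prox_f(z)              # data-fidelity proximal step
        x = denoiser(2.0 * y - z)  # learned denoiser replaces prox of the prior
        z = z + x - y              # Douglas-Rachford reflection update
    return prox_f(z)
```

Convergence of this loop hinges on the fixed-point conditions imposed on the denoiser, which is exactly what the spectrally normalized and deep equilibrium variants in the paper address.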

A comprehensive review of techniques, algorithms, advancements, challenges, and clinical applications of multi-modal medical image fusion for improved diagnosis.

Zubair M, Hussain M, Albashrawi MA, Bendechache M, Owais M

Sep 9 2025
Multi-modal medical image fusion (MMIF) is increasingly recognized as an essential technique for enhancing diagnostic precision and facilitating effective clinical decision-making within computer-aided diagnosis systems. MMIF combines data from X-ray, MRI, CT, PET, SPECT, and ultrasound to create detailed, clinically useful images of patient anatomy and pathology. These integrated representations significantly advance diagnostic accuracy, lesion detection, and segmentation. This comprehensive review surveys the evolution, methodologies, algorithms, current advancements, and clinical applications of MMIF. We present a critical comparative analysis of traditional fusion approaches, including pixel-, feature-, and decision-level methods, and delve into recent advancements driven by deep learning, generative models, and transformer-based architectures, highlighting differences between conventional and contemporary techniques in robustness, computational efficiency, and interpretability. The article addresses extensive clinical applications across oncology, neurology, and cardiology, demonstrating MMIF's vital role in precision medicine through improved patient-specific therapeutic outcomes. Moreover, the review investigates the persistent challenges affecting MMIF's broad adoption, including issues related to data privacy, heterogeneity, computational complexity, interpretability of AI-driven algorithms, and integration within clinical workflows. It also identifies significant future research avenues, such as the integration of explainable AI, adoption of privacy-preserving federated learning frameworks, development of real-time fusion systems, and standardization efforts for regulatory compliance. This review organizes key knowledge, outlines challenges, and highlights opportunities, guiding researchers, clinicians, and developers in advancing MMIF for routine clinical use and promoting personalized healthcare. To support further research, we provide a shared GitHub repository that includes popular multi-modal medical imaging datasets along with recent models.
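As a concrete instance of the pixel-level methods surveyed here, the toy sketch below fuses two co-registered slices with a classic wavelet rule: average the approximation band, keep the detail coefficient with the larger magnitude. It is a didactic example, not a method from the review.

```python
# Toy pixel-level fusion: wavelet decomposition, averaged approximation band,
# max-absolute selection of detail bands. Inputs are assumed co-registered
# 2D arrays of equal shape (e.g., one MRI and one CT slice).
import numpy as np
import pywt

def wavelet_fuse(a, b, wavelet="db2", level=2):
    ca = pywt.wavedec2(a, wavelet, level=level)
    cb = pywt.wavedec2(b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                  # approximation band
    for da, db_ in zip(ca[1:], cb[1:]):              # (cH, cV, cD) per level
        fused.append(tuple(np.where(np.abs(h) >= np.abs(k), h, k)
                           for h, k in zip(da, db_)))
    return pywt.waverec2(fused, wavelet)
```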

New imaging techniques and trends in radiology.

Kantarcı M, Aydın S, Oğul H, Kızılgöz V

Sep 8 2025
Radiology is a field of medicine inherently intertwined with technology: image acquisition in ultrasound (US), computed tomography (CT), and magnetic resonance imaging (MRI) depends heavily on it. Although radiation dose reduction does not apply to US and MRI, technological advancements have made it possible in CT, with ongoing studies aimed at further optimization. The resolution and diagnostic quality of images obtained with each modality are steadily improving, and technological progress has significantly shortened acquisition times for CT and MRI. Artificial intelligence (AI), now increasingly widespread worldwide, has also been incorporated into radiology. AI can produce more accurate and reproducible results in US examinations, and machine learning offers great potential for improving image quality, creating more distinct and useful images, and even developing new US imaging modalities. Furthermore, AI technologies are increasingly prevalent in CT and MRI for image evaluation, image generation, and enhanced image quality.

Artificial intelligence-assisted assessment of metabolic response to tebentafusp in metastatic uveal melanoma: a long axial field-of-view [¹⁸F]FDG PET/CT study.

Sachpekidis C, Machiraju D, Strauss DS, Pan L, Kopp-Schneider A, Edenbrandt L, Dimitrakopoulou-Strauss A, Hassel JC

Sep 6 2025
Tebentafusp has emerged as the first systemic therapy to significantly prolong survival in treatment-naïve HLA-A*02:01+ patients with unresectable or metastatic uveal melanoma (mUM). Notably, a survival benefit has been observed even in the absence of radiographic response. This study investigates the feasibility and prognostic value of artificial intelligence (AI)-assisted quantification and metabolic response assessment of [¹⁸F]FDG long axial field-of-view (LAFOV) PET/CT in mUM patients undergoing tebentafusp therapy. Fifteen patients with mUM treated with tebentafusp underwent [¹⁸F]FDG LAFOV PET/CT at baseline and 3 months post-treatment. Total metabolic tumor volume (TMTV) and total lesion glycolysis (TLG) were quantified using a deep learning-based segmentation tool on the RECOMIA platform. Metabolic response was assessed according to AI-assisted PERCIST 1.0 criteria. Associations between PET-derived parameters and overall survival (OS) were evaluated using Kaplan-Meier survival analysis. The median follow-up was 14.1 months (95% CI: 12.9 months to not available). Automated TMTV and TLG measurements were successfully obtained in all patients. Elevated baseline TMTV and TLG were significantly associated with shorter OS (TMTV: 16.9 vs. 27.2 months; TLG: 16.9 vs. 27.2 months; p < 0.05). Similarly, higher TMTV and TLG at 3 months post-treatment predicted poorer survival outcomes (TMTV: 14.3 vs. 24.5 months; TLG: 14.3 vs. 24.5 months; p < 0.05). AI-assisted PERCIST response evaluation identified six patients with disease control (complete metabolic response, partial metabolic response, or stable metabolic disease) and nine with progressive metabolic disease. A trend toward improved OS was observed in patients with disease control (24.5 vs. 14.6 months, p = 0.08). Circulating tumor DNA (ctDNA) levels based on GNAQ and GNA11 mutations were available in 8 patients; after 3 months of tebentafusp treatment, 5 showed reduced or stable ctDNA levels and 3 showed an increase (median OS: 24.5 vs. 3.3 months; p = 0.13). Patients with increasing ctDNA levels exhibited significantly higher TMTV and TLG on follow-up imaging. AI-assisted whole-body quantification of [¹⁸F]FDG PET/CT and PERCIST-based response assessment are feasible and hold prognostic significance in tebentafusp-treated mUM. TMTV and TLG may serve as non-invasive imaging biomarkers for risk stratification and treatment monitoring in this malignancy.
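The two biomarkers driving the analysis, TMTV and TLG, are straightforward to compute once a tumor mask is available; a minimal sketch with hypothetical array names follows.

```python
# Minimal sketch of TMTV and TLG from an SUV volume and a binary tumor mask.
# Array names and the voxel size are hypothetical.
import numpy as np

def tmtv_tlg(suv, tumor_mask, voxel_volume_ml):
    """TMTV in mL; TLG as SUVmean x volume, summed over all tumor voxels."""
    tmtv = tumor_mask.sum() * voxel_volume_ml
    tlg = suv[tumor_mask].sum() * voxel_volume_ml
    return tmtv, tlg

# Example: 2 mm isotropic voxels -> 0.008 mL per voxel
# tmtv, tlg = tmtv_tlg(suv_volume, mask.astype(bool), voxel_volume_ml=0.008)
```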