Page 16 of 141 (1,403 results)

Multimodal AI-driven Biomarker for Early Detection of Cancer Cachexia

Ahmed, S., Parker, N., Park, M., Davis, E. W., Jeong, D., Permuth, J. B., Schabath, M. B., Yilmaz, Y., Rasool, G.

medRxiv preprint · Sep 19, 2025
Cancer cachexia, a multifactorial metabolic syndrome characterized by severe muscle wasting and weight loss, contributes to poor outcomes across various cancer types but lacks a standardized, generalizable biomarker for early detection. We present a multimodal AI-based biomarker trained on real-world clinical, radiologic, laboratory, and unstructured clinical note data, leveraging foundation models and large language models (LLMs) to identify cachexia at the time of cancer diagnosis. Prediction accuracy improved with each added modality: 77% using clinical variables alone, 81% with added laboratory data, and 85% with structured symptom features extracted from clinical notes. Incorporating embeddings from clinical text and CT images further improved accuracy to 92%. The framework also demonstrated prognostic utility, improving survival prediction as data modalities were integrated. Designed for real-world clinical deployment, the framework accommodates missing modalities without requiring imputation or case exclusion, supporting scalability across diverse oncology settings. Unlike prior models trained on curated datasets, our approach utilizes standard-of-care clinical data, facilitating integration into oncology workflows. In contrast to fixed-threshold composite indices such as the cachexia index (CXI), the model generates patient-specific predictions, enabling adaptable, cancer-agnostic performance. To enhance clinical reliability and safety, the framework incorporates uncertainty estimation to flag low-confidence cases for expert review. This work advances a clinically applicable, scalable, and trustworthy AI-driven decision support tool for early cachexia detection and personalized oncology care.
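The abstract above describes flagging low-confidence cases for expert review via uncertainty estimation. A minimal, dependency-free sketch of one common way to do this (predictive-entropy thresholding); the function names and the threshold value are illustrative assumptions, not taken from the paper:

```python
import math

def predictive_entropy(probs):
    """Shannon entropy of a predicted class distribution (in nats)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def flag_for_review(probs, threshold=0.5):
    """Flag a prediction for expert review when its entropy exceeds a
    threshold. The threshold is an illustrative operating point only."""
    return predictive_entropy(probs) > threshold

# A confident prediction passes; a near-uniform one gets flagged.
confident = [0.97, 0.03]   # low entropy, kept automatic
uncertain = [0.55, 0.45]   # high entropy, routed to expert review
```

In a deployed system the threshold would be tuned on held-out data so that flagged cases capture most model errors.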

Intratumoral and peritumoral heterogeneity based on CT to predict the pathological response after neoadjuvant chemoimmunotherapy in esophageal squamous cell carcinoma.

Ling X, Yang X, Wang P, Li Y, Wen Z, Wang J, Chen K, Yu Y, Liu A, Ma J, Meng W

PubMed · Sep 19, 2025
Neoadjuvant chemoimmunotherapy (NACI; camrelizumab plus paclitaxel and nedaplatin) has shown promising potential in patients with esophageal squamous cell carcinoma (ESCC), but accurately predicting the therapeutic response remains a challenge. We aimed to develop and validate a CT-based machine learning model that incorporates both intratumoral and peritumoral heterogeneity for predicting the pathological response of ESCC patients after NACI. Patients with ESCC who underwent surgery following NACI between June 2020 and July 2024 were included retrospectively and prospectively. Univariate and multivariate logistic regression analyses were performed to identify clinical variables associated with pathological response. Traditional radiomics features and habitat radiomics features from the intratumoral and peritumoral regions were extracted from post-treatment CT images, and six predictive models were established using 14 machine learning algorithms. The combined model was developed by integrating intratumoral and peritumoral habitat radiomics features with clinical variables. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). A total of 157 patients (mean [SD] age, 59.6 [6.5] years) were enrolled, of whom 60 (38.2%) achieved major pathological response (MPR) and 40 (25.5%) achieved pathological complete response (pCR). The combined model demonstrated excellent predictive ability for MPR after NACI, with an AUC of 0.915 (95% CI, 0.844-0.981), accuracy of 0.872, sensitivity of 0.733, and specificity of 0.938 in the test set. In a sensitivity analysis focusing on pCR, the combined model exhibited robust performance, with an AUC of 0.895 (95% CI, 0.782-0.980) in the test set. The combined model integrating intratumoral and peritumoral habitat radiomics features with clinical variables can accurately predict MPR in ESCC patients after NACI and shows promising potential in predicting pCR.
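The models above are compared by AUC. A minimal, dependency-free sketch of how an AUC can be computed from prediction scores and binary labels (the Mann-Whitney formulation; the toy data values are made up):

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case scores higher than a randomly chosen
    negative case (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: higher scores should track the positive class.
y = [0, 0, 1, 1]
s = [0.1, 0.4, 0.35, 0.8]
# One positive/negative pair is misordered, so AUC is 3/4 = 0.75.
```

This pairwise definition is equivalent to the area under the ROC curve and is what libraries such as scikit-learn compute for binary outcomes.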

AI-driven innovations for dental implant treatment planning: A systematic review.

Zaww K, Abbas H, Vanegas Sáenz JR, Hong G

PubMed · Sep 19, 2025
This systematic review evaluates the effectiveness of artificial intelligence (AI) models in dental implant treatment planning, focusing on: 1) identification, detection, and segmentation of anatomical structures; 2) technical assistance during treatment planning; and 3) additional relevant applications. A literature search of PubMed/MEDLINE, Scopus, and Web of Science was conducted for studies published in English until July 31, 2024. The included studies explored AI applications in implant treatment planning, excluding expert opinions, guidelines, and protocols. Three reviewers independently assessed study quality using the Joanna Briggs Institute (JBI) Critical Appraisal Checklist for Quasi-Experimental Studies, resolving disagreements by consensus. Of the 28 included studies, four were of high, four of medium, and 20 of low quality according to the JBI scale. Eighteen studies on anatomical segmentation demonstrated AI models with accuracy rates ranging from 66.4% to 99.1%. Eight studies examined AI's role in technical assistance for surgical planning, demonstrating its potential in predicting jawbone mineral density, optimizing drilling protocols, and classifying plans for maxillary sinus augmentation. One study indicated a learning curve for AI in implant planning, recommending at least 50 images to achieve over 70% predictive accuracy. Another study reported 83% accuracy in localizing stent markers for implant sites, suggesting additional imaging planes to address a 17% miss rate and a 2.8% false-positive rate. AI models show potential for automating dental implant planning, with high accuracy in anatomical segmentation and useful technical assistance. However, further well-designed studies with standardized evaluation parameters are required for pragmatic integration into clinical settings.

AI-Based Algorithm to Detect Heart and Lung Disease From Acute Chest Computed Tomography Scans: Protocol for an Algorithm Development and Validation Study.

Olesen ASO, Miger K, Ørting SN, Petersen J, de Bruijne M, Boesen MP, Andersen MB, Grand J, Thune JJ, Nielsen OW

PubMed · Sep 19, 2025
Dyspnea is a common cause of hospitalization, posing diagnostic challenges among older adult patients with multimorbid conditions. Chest computed tomography (CT) scans are increasingly used in patients with dyspnea and offer superior diagnostic accuracy over chest radiographs, but face limited use due to a shortage of radiologists. This study aims to develop and validate artificial intelligence (AI) algorithms to enable automatic analysis of acute CT scans and provide immediate feedback on the likelihood of pneumonia, pulmonary embolism, and cardiac decompensation. This protocol focuses on cardiac decompensation. We designed a retrospective method development and validation study, approved by the Danish National Committee on Health Research Ethics (1575037). We extracted 4672 acute chest CT scans with corresponding radiological reports from Copenhagen University Hospital-Bispebjerg and Frederiksberg, Denmark, from 2016 to 2021. The scans will be randomly split into training (2/3) and internal validation (1/3) sets. Development of the AI algorithm involves parameter tuning and feature selection using cross-validation. Internal validation uses radiological reports as the ground truth, with algorithm-specific thresholds based on true-positive and true-negative rates of 90% or greater for heart and lung diseases. The AI models will be validated on low-dose chest CT scans from consecutive patients admitted with acute dyspnea and on coronary CT angiography scans from patients with acute coronary syndrome. As of August 2025, CT data extraction has been completed. Algorithm development, including image segmentation and natural language processing, is ongoing; for pulmonary congestion, however, algorithm development has been completed. Internal and external validation are planned, with overall validation expected to conclude in 2025 and final results available in 2026.
The results are expected to enhance clinical decision-making by providing immediate, AI-driven insights from CT scans, which will be beneficial for both clinicians and patients. DERR1-10.2196/77030.
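The protocol above fixes algorithm-specific thresholds from true-positive and true-negative rates of 90% or greater. A minimal sketch of scanning candidate score thresholds for one that satisfies both constraints; the helper names and all data values are illustrative, not from the protocol:

```python
def rates(labels, scores, thr):
    """True-positive and true-negative rates at a given score threshold."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= thr)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < thr)
    p = sum(labels)
    n = len(labels) - p
    return tp / p, tn / n

def pick_threshold(labels, scores, target=0.90):
    """Return the first candidate threshold with TPR >= target and
    TNR >= target, or None when no threshold satisfies both."""
    for thr in sorted(set(scores)):
        tpr, tnr = rates(labels, scores, thr)
        if tpr >= target and tnr >= target:
            return thr
    return None
```

In practice such a threshold would be chosen on the training folds and then frozen before internal and external validation.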

Visual language model-assisted spectral CT reconstruction by diffusion and low-rank priors from limited-angle measurements.

Wang Y, Liang N, Ren J, Zhang X, Shen Y, Cai A, Zheng Z, Li L, Yan B

PubMed · Sep 19, 2025
Spectral computed tomography (CT) is a critical tool in clinical practice, offering capabilities in multi-energy spectrum imaging and material identification. The limited-angle (LA) scanning strategy has attracted attention for its advantages in fast data acquisition and reduced radiation exposure, aligning with the as-low-as-reasonably-achievable (ALARA) principle. However, most deep learning-based methods require a separate model for each LA setting, which limits their flexibility in adapting to new conditions. In this study, we developed a novel Visual-Language model-assisted Spectral CT Reconstruction (VLSR) method to address LA artifacts and enable multi-setting adaptation within a single model. The VLSR method integrates the image-text perception ability of visual-language models with the image generation potential of diffusion models. Prompt engineering is introduced to better represent LA artifact characteristics, further improving artifact characterization. Additionally, a collaborative sampling framework combining data consistency, low-rank regularization, and image-domain diffusion models is developed to produce high-quality and consistent spectral CT reconstructions. The performance of VLSR is superior to that of the comparison methods: under scanning angles of 90° and 60° for simulated data, VLSR improves peak signal-to-noise ratio (PSNR) by at least 0.41 dB and 1.13 dB, respectively. The VLSR method can reconstruct high-quality spectral CT images under diverse LA configurations, allowing faster and more flexible scans with reduced dose.
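The reconstruction quality above is reported in peak signal-to-noise ratio. A minimal, dependency-free sketch of the standard PSNR definition over flattened image arrays (the toy image values are made up):

```python
import math

def psnr(reference, reconstruction, peak=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).
    Returns infinity for a perfect reconstruction (zero MSE)."""
    mse = sum((r - x) ** 2
              for r, x in zip(reference, reconstruction)) / len(reference)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

# A reconstruction closer to the reference scores a higher PSNR.
ref  = [0.0, 0.5, 1.0, 0.5]
good = [0.01, 0.5, 0.99, 0.5]
bad  = [0.2, 0.3, 0.8, 0.7]
```

For CT images normalized to Hounsfield-unit windows, `peak` is set to the maximum representable intensity of that window.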

Multi-modal CT Perfusion-based Deep Learning for Predicting Stroke Lesion Outcomes in Complete and No Recanalization Scenarios.

Yang H, George Y, Mehta D, Lin L, Chen C, Yang D, Sun J, Lau KF, Bain C, Yang Q, Parsons MW, Ge Z

PubMed · Sep 19, 2025
Predicting the final location and volume of lesions in acute ischemic stroke (AIS) is crucial for clinical management. While CT perfusion (CTP) imaging is routinely used for estimating lesion outcomes, conventional threshold-based methods have limitations. We developed specialized outcome-prediction deep learning models that predict the infarct core in successful reperfusion cases and the combined core-penumbra region in unsuccessful reperfusion cases. We developed single-modal and multi-modal deep learning models using CTP parameter maps to predict the final infarct lesion on follow-up diffusion-weighted imaging (DWI). Using a multi-center dataset, deep learning models were developed and evaluated separately for patients with complete recanalization (CR, successful reperfusion, n=350) and no recanalization (NR, unsuccessful reperfusion, n=138) after treatment. The CR model was designed to predict the infarct core region, while the NR model predicted the expanded hypoperfused tissue encompassing both core and penumbra regions. Five-fold cross-validation was performed for robust evaluation. The multi-modal 3D nnU-Net model demonstrated superior performance, achieving mean Dice scores of 35.36% in CR patients and 50.22% in NR patients. This significantly outperformed the currently used clinical method, providing more accurate outcome estimates than the conventional single-modality threshold-based measures, which yielded Dice scores of 15.73% and 39.71% for the CR and NR groups, respectively. Our approach provides tissue outcome estimates for both successful and unsuccessful reperfusion scenarios, enabling clinicians to better evaluate treatment eligibility for reperfusion therapies and assess potential treatment benefits.
This advancement facilitates more personalized treatment recommendations and has the potential to significantly enhance clinical decision-making in AIS management by providing more accurate tissue outcome predictions than conventional single-modality threshold-based approaches. AIS=acute ischemic stroke; CR=complete recanalization; NR=no recanalization; DT=delay time; IQR=interquartile range; GT=ground truth; HD95=95% Hausdorff distance; ASSD=average symmetric surface distance; MLV=mismatch lesion volume.
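The lesion-prediction models above are scored with the Dice similarity coefficient. A minimal, dependency-free sketch of Dice between two flattened binary masks (the toy masks are made up):

```python
def dice(pred, truth):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) for binary
    masks, conventionally defined as 1.0 when both masks are empty."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 1.0 if total == 0 else 2.0 * inter / total

# Toy 5-voxel masks: 2 overlapping voxels out of 3 predicted and 3 true.
pred  = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 1, 1]
# dice(pred, truth) == 2*2 / (3+3) ≈ 0.667
```

Dice is reported as a percentage in the abstract above (e.g. 35.36% corresponds to a coefficient of 0.3536).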

Insertion of hepatic lesions into clinical photon-counting-detector CT projection data.

Gong H, Kharat S, Wellinghoff J, El Sadaney AO, Fletcher JG, Chang S, Yu L, Leng S, McCollough CH

PubMed · Sep 19, 2025
To facilitate task-driven image quality assessment of lesion detectability in clinical photon-counting-detector CT (PCD-CT), it is desired to have patient image data with known pathology and precise annotation. Standard patient case collection and reference standard establishment are time- and resource-intensive. To mitigate this challenge, we aimed to develop a projection-domain lesion insertion framework that efficiently creates realistic patient cases by digitally inserting real radiopathologic features into patient PCD-CT images. 
Approach. This framework used artificial intelligence (AI)-assisted semi-automatic annotation to generate digital lesion models from real lesion images. The x-ray energy used for commercial beam-hardening correction in the PCD-CT system was estimated and used to calculate multi-energy forward projections of these lesion models at different energy thresholds. Lesion projections were subsequently added to patient projections from PCD-CT exams. The modified projections were reconstructed to form realistic lesion-present patient images using the CT manufacturer's offline reconstruction software. Image quality was qualitatively and quantitatively validated in phantom scans and patient cases with liver lesions, using visual inspection, CT number accuracy, the structural similarity index (SSIM), and radiomic feature analysis. Statistical tests were performed using the Wilcoxon signed-rank test.
Main results. No statistically significant discrepancy (p>0.05) of CT numbers was observed between original and re-inserted tissue- and contrast-media-mimicking rods and hepatic lesions (mean ± standard deviation): rods 0.4 ± 2.3 HU, lesions -1.8 ± 6.4 HU. The original and inserted lesions showed similar morphological features at original and re-inserted locations: mean ± standard deviation of SSIM 0.95 ± 0.02. Additionally, the corresponding radiomic features presented highly similar feature clusters with no statistically significant differences (p>0.05). 
Significance. The proposed framework can generate patient PCD-CT exams with realistic liver lesions using archived patient data and lesion images. It will facilitate systematic evaluation of PCD-CT systems and advanced reconstruction and post-processing algorithms with target pathological features.
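Because post-log detector readings are line integrals of attenuation, the insertion step described above reduces, in projection space, to an element-wise sum of the patient sinogram and the lesion's forward projection. A minimal toy sketch of just that additive step; all values are illustrative, and the per-threshold forward projection and the manufacturer's reconstruction are omitted:

```python
def insert_lesion(patient_proj, lesion_proj):
    """Add a lesion's forward projections to patient projections.

    In the post-log domain each detector reading is a line integral of
    attenuation, so inserting a lesion is element-wise addition.
    """
    assert len(patient_proj) == len(lesion_proj)
    return [p + l for p, l in zip(patient_proj, lesion_proj)]

patient = [2.0, 2.0, 1.5, 2.0]    # toy post-log detector readings
lesion  = [0.0, 0.25, 0.5, 0.0]   # forward projection of a lesion model
combined = insert_lesion(patient, lesion)
# The modified sinogram would then be reconstructed offline to yield a
# realistic lesion-present image.
```

In the actual framework this addition is repeated per energy threshold, since a PCD-CT sinogram exists for each threshold.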

Transplant-Ready? Evaluating AI Lung Segmentation Models in Candidates with Severe Lung Disease

Jisoo Lee, Michael R. Harowicz, Yuwen Chen, Hanxue Gu, Isaac S. Alderete, Lin Li, Maciej A. Mazurowski, Matthew G. Hartwig

arXiv preprint · Sep 18, 2025
This study evaluates publicly available deep-learning-based lung segmentation models in transplant-eligible patients to determine their performance across disease severity levels, pathology categories, and lung sides, and to identify limitations affecting their use in preoperative planning for lung transplantation. This retrospective study included 32 patients who underwent chest CT scans at Duke University Health System between 2017 and 2019 (a total of 3,645 2D axial slices). Patients with standard axial CT scans were selected based on the presence of two or more lung pathologies of varying severity. Lung segmentation was performed using three previously developed deep learning models: Unet-R231, TotalSegmentator, and MedSAM. Performance was assessed using quantitative metrics (volumetric similarity, Dice similarity coefficient, Hausdorff distance) and a qualitative measure (a four-point clinical acceptability scale). Unet-R231 consistently outperformed TotalSegmentator and MedSAM overall and across severity levels and pathology categories (p<0.05). All models showed significant performance declines from mild to moderate-to-severe cases, particularly in volumetric similarity (p<0.05), without significant differences between lung sides or among pathology types. Unet-R231 provided the most accurate automated lung segmentation among the evaluated models, with TotalSegmentator a close second, though the performance of both declined significantly in moderate-to-severe cases, emphasizing the need for specialized model fine-tuning in severe pathology contexts.
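Among the quantitative metrics above, volumetric similarity compares only mask volumes, not spatial overlap, which is why segmentation studies report it alongside Dice and Hausdorff distance. A minimal sketch of the usual definition (the toy volumes are made up):

```python
def volumetric_similarity(vol_pred, vol_truth):
    """Volumetric similarity: 1 - |Vp - Vt| / (Vp + Vt).

    Equals 1.0 for identical volumes; note that it ignores overlap,
    so two equal-volume masks score 1.0 even if they are disjoint.
    """
    if vol_pred + vol_truth == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return 1.0 - abs(vol_pred - vol_truth) / (vol_pred + vol_truth)
```

Here the volumes would be voxel counts of the predicted and ground-truth lung masks, optionally scaled by voxel size.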

Radiology Report Conditional 3D CT Generation with Multi-Encoder Latent Diffusion Model

Sina Amirrajab, Zohaib Salahuddin, Sheng Kuang, Henry C. Woodruff, Philippe Lambin

arXiv preprint · Sep 18, 2025
Text-to-image latent diffusion models have recently advanced medical image synthesis, but applications to 3D CT generation remain limited. Existing approaches rely on simplified prompts, neglecting the rich semantic detail in full radiology reports, which reduces text-image alignment and clinical fidelity. We propose Report2CT, a radiology-report-conditional latent diffusion framework for synthesizing 3D chest CT volumes directly from free-text radiology reports, incorporating both the findings and impression sections using multiple text encoders. Report2CT integrates three pretrained medical text encoders (BiomedVLP CXR BERT, MedEmbed, and ClinicalBERT) to capture nuanced clinical context. Radiology reports and voxel spacing information condition a 3D latent diffusion model trained on 20,000 CT volumes from the CT-RATE dataset. Model performance was evaluated using Fréchet Inception Distance (FID) for real-synthetic distributional similarity and CLIP-based metrics for semantic alignment, with additional qualitative and quantitative comparisons against the GenerateCT model. Report2CT generated anatomically consistent CT volumes with excellent visual quality and text-image alignment. Multi-encoder conditioning improved CLIP scores, indicating stronger preservation of fine-grained clinical details from the free-text radiology reports. Classifier-free guidance further enhanced alignment with only a minor trade-off in FID. We ranked first in the VLM3D Challenge at MICCAI 2025 on Text-Conditional CT Generation and achieved state-of-the-art performance across all evaluation metrics. By leveraging complete radiology reports and multi-encoder text conditioning, Report2CT advances 3D CT synthesis, producing clinically faithful and high-quality synthetic data.
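Classifier-free guidance, credited above with improving alignment at a small FID cost, combines the conditional and unconditional denoiser outputs at each sampling step. A minimal sketch of the standard combination rule over plain lists (the toy vectors and guidance weight are illustrative):

```python
def cfg_combine(eps_uncond, eps_cond, w):
    """Classifier-free guidance: e_uncond + w * (e_cond - e_uncond).

    w = 1 recovers the plain conditional prediction; w > 1 extrapolates
    further toward the condition, trading sample diversity (and FID)
    for stronger text alignment.
    """
    return [u + w * (c - u) for u, c in zip(eps_uncond, eps_cond)]

uncond = [0.0, 1.0]  # toy unconditional noise prediction
cond   = [1.0, 1.0]  # toy report-conditioned noise prediction
# w = 1 gives [1.0, 1.0]; w = 2 extrapolates to [2.0, 1.0].
```

Training for this requires randomly dropping the report conditioning so the same network learns both predictions.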

DICE: Diffusion Consensus Equilibrium for Sparse-view CT Reconstruction

Leon Suarez-Rodriguez, Roman Jacome, Romario Gualdron-Hurtado, Ana Mantilla-Dulcey, Henry Arguello

arXiv preprint · Sep 18, 2025
Sparse-view computed tomography (CT) reconstruction is fundamentally challenging due to undersampling, leading to an ill-posed inverse problem. Traditional iterative methods incorporate handcrafted or learned priors to regularize the solution but struggle to capture the complex structures present in medical images. In contrast, diffusion models (DMs) have recently emerged as powerful generative priors that can accurately model complex image distributions. In this work, we introduce Diffusion Consensus Equilibrium (DICE), a framework that integrates a two-agent consensus equilibrium into the sampling process of a DM. DICE alternates between: (i) a data-consistency agent, implemented through a proximal operator enforcing measurement consistency, and (ii) a prior agent, realized by a DM performing a clean image estimation at each sampling step. By balancing these two complementary agents iteratively, DICE effectively combines strong generative prior capabilities with measurement consistency. Experimental results show that DICE significantly outperforms state-of-the-art baselines in reconstructing high-quality CT images under uniform and non-uniform sparse-view settings of 15, 30, and 60 views (out of a total of 180), demonstrating both its effectiveness and robustness.
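The two-agent alternation in DICE can be illustrated on a toy 1-D problem: a proximal data-consistency step pulls the estimate toward the measurement, while a stand-in "prior" agent shrinks it toward a reference value. Both agents here are drastic simplifications of the paper's operators (the real prior agent is a diffusion model's clean-image estimate), so this is a structural sketch only:

```python
def prox_data_consistency(x, y, rho):
    """Proximal step for 0.5*(x - y)^2: pull the estimate toward the
    measurement y, with strength controlled by rho."""
    return (x + rho * y) / (1.0 + rho)

def prior_step(x, ref, alpha):
    """Stand-in for the prior agent's clean-image estimate: shrink
    toward a reference value `ref` (illustrative only)."""
    return (1.0 - alpha) * x + alpha * ref

def dice_iterate(y, ref, steps=50, rho=1.0, alpha=0.5):
    """Alternate the two agents; the fixed point balances measurement
    consistency against the prior."""
    x = 0.0
    for _ in range(steps):
        x = prox_data_consistency(x, y, rho)  # agent (i): data consistency
        x = prior_step(x, ref, alpha)         # agent (ii): prior estimate
    return x
```

With `y = 2.0` and `ref = 0.0` the iteration satisfies x ← x/4 + 1/2 and converges to the equilibrium 2/3, between the measurement and the prior, mirroring how DICE balances its two complementary agents.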
