
Development and validation of machine learning models to predict vertebral artery injury by C2 pedicle screws.

Ye B, Sun Y, Chen G, Wang B, Meng H, Shan L

PubMed | Aug 12, 2025
Cervical 2 pedicle screw (C2PS) fixation is widely used in posterior cervical surgery but carries a risk of vertebral artery injury (VAI), a rare yet severe complication. This study aimed to identify risk factors for VAI during C2PS placement and to develop a machine learning (ML)-based predictive model to enhance preoperative risk assessment. Clinical and radiological data from 280 patients undergoing head and neck CT angiography were retrospectively analyzed. Three-dimensional reconstructed images simulated C2PS placement, classifying patients into injury (n = 98) and non-injury (n = 182) groups. Fifteen variables, including patient characteristics and anatomical measurements, were evaluated. Eight ML algorithms were trained (70% training cohort) and validated (30% validation cohort). Model performance was assessed using AUC, sensitivity, and specificity, with SHAP (SHapley Additive exPlanations) used for interpretability. Six key risk factors were identified: pedicle diameter, high-riding vertebral artery (HRVA), intra-axial vertebral artery (IAVA), vertebral artery diameter (VAD), the distance between the transverse foramen and the posterior end of the vertebral body (TFPEVB), and the distance between the vertebral artery and the vertebral body (VAVB). The neural network model (NNet) demonstrated the best predictive performance, achieving AUCs of 0.929 (training) and 0.936 (validation). SHAP analysis confirmed these variables as primary contributors to VAI risk. This study established an ML-driven predictive model for VAI during C2PS placement, highlighting six critical anatomical and radiological risk factors. Integrating this model into clinical workflows may optimize preoperative planning, reduce complications, and improve surgical outcomes. External validation in multicenter cohorts is warranted to enhance generalizability.
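
As a rough illustration of the workflow described above (tabular anatomical predictors, a trained classifier, AUC evaluation, and SHAP attribution), the sketch below uses synthetic data and a gradient-boosting model; the feature names mirror the six risk factors, but the data, model choice, and code are assumptions for illustration, not the study's pipeline.

```python
# Illustrative sketch only: SHAP-based interpretation of a binary VAI classifier
# trained on tabular anatomical features. Data are synthetic; the original study
# compared eight algorithms, with a neural network (NNet) performing best.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

features = ["pedicle_diameter", "HRVA", "IAVA", "VAD", "TFPEVB", "VAVB"]
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(280, len(features))), columns=features)
y = rng.integers(0, 2, size=280)  # 1 = simulated vertebral artery injury

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("validation AUC:", roc_auc_score(y_va, model.predict_proba(X_va)[:, 1]))

# SHAP values attribute each prediction to individual anatomical features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_va)
shap.summary_plot(shap_values, X_va, show=False)
```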

CRCFound: A Colorectal Cancer CT Image Foundation Model Based on Self-Supervised Learning.

Yang J, Cai D, Liu J, Zhuang Z, Zhao Y, Wang FA, Li C, Hu C, Gai B, Chen Y, Li Y, Wang L, Gao F, Wu X

PubMed | Aug 12, 2025
Accurate risk stratification is crucial for determining the optimal treatment plan for patients with colorectal cancer (CRC). However, existing deep learning models perform poorly in the preoperative diagnosis of CRC and exhibit limited generalizability, primarily due to insufficient annotated data. To address these issues, CRCFound, a self-supervised learning-based CT image foundation model for CRC, is proposed. After pretraining on 5137 unlabeled CRC CT images, CRCFound learns universal feature representations and provides efficient and reliable adaptability for various clinical applications. Comprehensive benchmark tests were conducted on six diagnostic tasks and two prognosis tasks to validate the performance of the pretrained model. Experimental results demonstrate that CRCFound transfers easily to most CRC tasks and exhibits outstanding performance and generalization ability. Overall, CRCFound addresses the problem of insufficient annotated data and performs well across a wide range of downstream CRC tasks, making it a promising solution for accurate diagnosis and personalized treatment of CRC patients.
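
The transfer setting described above (a self-supervised pretrained encoder reused across downstream tasks) can be sketched roughly as follows; the encoder class, feature dimension, and checkpoint path are hypothetical placeholders, not the CRCFound release.

```python
# Illustrative sketch: adapting a pretrained CT encoder to a downstream CRC
# classification task by attaching a lightweight head (linear probing).
import torch
import torch.nn as nn

class DownstreamClassifier(nn.Module):
    def __init__(self, encoder: nn.Module, feat_dim: int, n_classes: int, freeze: bool = True):
        super().__init__()
        self.encoder = encoder
        if freeze:  # keep the pretrained representations fixed
            for p in self.encoder.parameters():
                p.requires_grad = False
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        feats = self.encoder(x)   # assumed to return (B, feat_dim) pooled features
        return self.head(feats)   # task-specific logits

# Hypothetical usage (encoder class and checkpoint name are placeholders):
# encoder.load_state_dict(torch.load("crcfound_pretrained.pt"))
# model = DownstreamClassifier(encoder, feat_dim=768, n_classes=2)
```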

Machine learning models for diagnosing lymph node recurrence in postoperative PTC patients: a radiomic analysis.

Pang F, Wu L, Qiu J, Guo Y, Xie L, Zhuang S, Du M, Liu D, Tan C, Liu T

PubMed | Aug 12, 2025
Postoperative papillary thyroid cancer (PTC) patients often have enlarged cervical lymph nodes due to inflammation or hyperplasia, which complicates the assessment of recurrence or metastasis. This study aimed to explore the ability of computed tomography (CT) imaging and radiomic analysis to distinguish recurrent cervical lymph nodes in postoperative PTC patients. A retrospective analysis of 194 PTC patients who underwent total thyroidectomy was conducted, comprising 98 cases with cervical lymph node recurrence and 96 cases without recurrence. Using 3D Slicer software, regions of interest (ROIs) were delineated on contrast-enhanced venous-phase CT images, yielding 302 positive and 391 negative lymph nodes for analysis. These nodes were randomly divided into training and validation sets in a 3:2 ratio. Python was used to extract radiomic features from the ROIs and to develop the radiomic models. Univariate and multivariate analyses identified statistically significant clinical risk factors for cervical lymph node recurrence, which, combined with radiomic scores, formed a nomogram to predict recurrence risk. The diagnostic efficacy and clinical utility of the models were assessed using ROC curves, calibration curves, and decision curve analysis (DCA). This study analyzed 693 lymph nodes (302 positive and 391 negative) and identified 35 significant radiomic features through dimensionality reduction and selection. The three machine learning models, including the LASSO regression, support vector machine (SVM), and random forest (RF) radiomics models, showed.
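
A minimal sketch of a radiomics pipeline of this general shape is shown below: feature extraction from a CT volume and an ROI mask (PyRadiomics is an assumption; the abstract only states that features were extracted in Python), LASSO-based feature screening, and an SVM classifier. File names and extractor settings are illustrative.

```python
# Illustrative sketch: radiomic feature extraction for one lymph node ROI, then
# LASSO screening followed by an SVM classifier over the assembled cohort.
import numpy as np
from radiomics import featureextractor          # pip install pyradiomics (assumption)
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

extractor = featureextractor.RadiomicsFeatureExtractor()
# Each case: venous-phase CT volume plus the 3D Slicer ROI mask of one lymph node.
features = extractor.execute("ct_venous.nrrd", "lymph_node_roi.nrrd")
numeric = {k: v for k, v in features.items() if not k.startswith("diagnostics")}

def select_and_fit(X: np.ndarray, y: np.ndarray):
    """X: nodes x radiomic features, y: recurrence labels (0/1)."""
    lasso = make_pipeline(StandardScaler(), LassoCV(cv=5)).fit(X, y)
    keep = lasso.named_steps["lassocv"].coef_ != 0   # features surviving LASSO shrinkage
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    return clf.fit(X[:, keep], y), keep
```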

Lung-DDPM+: Efficient Thoracic CT Image Synthesis using Diffusion Probabilistic Model

Yifan Jiang, Ahmad Shariftabrizi, Venkata SK. Manem

arXiv preprint | Aug 12, 2025
Generative artificial intelligence (AI) has been playing an important role in various domains. Leveraging its high capability to generate high-fidelity and diverse synthetic data, generative AI is widely applied in diagnostic tasks, such as lung cancer diagnosis using computed tomography (CT). However, existing generative models for lung cancer diagnosis suffer from low efficiency and anatomical imprecision, which limit their clinical applicability. To address these drawbacks, we propose Lung-DDPM+, an improved version of our previous model, Lung-DDPM. This novel approach is a denoising diffusion probabilistic model (DDPM) guided by nodule semantic layouts and accelerated by a pulmonary DPM-solver, enabling the method to focus on lesion areas while achieving a better trade-off between sampling efficiency and quality. Evaluation results on the public LIDC-IDRI dataset suggest that the proposed method achieves 8× fewer FLOPs (floating-point operations), 6.8× lower GPU memory consumption, and 14× faster sampling compared to Lung-DDPM. Moreover, it maintains sample quality comparable to both Lung-DDPM and other state-of-the-art (SOTA) generative models in two downstream segmentation tasks. We also conducted a Visual Turing Test with an experienced radiologist, which showed the high quality and fidelity of the synthetic samples generated by the proposed method. These experimental results demonstrate that Lung-DDPM+ can effectively generate high-quality thoracic CT images with lung nodules, highlighting its potential for broader applications, such as general tumor synthesis and lesion generation in medical imaging. The code and pretrained models are available at https://github.com/Manem-Lab/Lung-DDPM-PLUS.
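
The efficiency comparison reported above (GPU memory and sampling time) could be measured with a harness along these lines; `ddpm_sample` and `dpm_solver_sample` are hypothetical stand-ins for the two samplers, and FLOP counting is omitted. This is not the authors' benchmarking code.

```python
# Illustrative sketch: comparing per-sample wall-clock time and peak GPU memory
# for a slow sampler (e.g., a many-step DDPM loop) vs. an accelerated sampler.
import time
import torch

def benchmark(sample_fn, shape=(1, 1, 128, 128, 128), device="cuda"):
    torch.cuda.reset_peak_memory_stats(device)
    torch.cuda.synchronize(device)
    t0 = time.perf_counter()
    with torch.no_grad():
        _ = sample_fn(torch.randn(shape, device=device))  # noise-to-image sampling
    torch.cuda.synchronize(device)
    elapsed = time.perf_counter() - t0
    peak_gb = torch.cuda.max_memory_allocated(device) / 1024**3
    return elapsed, peak_gb

# t_slow, m_slow = benchmark(ddpm_sample)        # baseline many-step DDPM loop
# t_fast, m_fast = benchmark(dpm_solver_sample)  # accelerated few-step sampler
# print(f"speedup {t_slow / t_fast:.1f}x, memory ratio {m_slow / m_fast:.1f}x")
```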

Fully Automatic Volume Segmentation Using Deep Learning Approaches to Assess the Thoracic Aorta, Visceral Abdominal Aorta, and Visceral Vasculature.

Pouncey AL, Charles E, Bicknell C, Bérard X, Ducasse E, Caradu C

PubMed | Aug 12, 2025
Computed tomography angiography (CTA) imaging is essential to evaluate and analyse complex abdominal and thoraco-abdominal aortic aneurysms. However, CTA analyses are labour intensive, time consuming, and prone to interphysician variability. Fully automatic volume segmentation (FAVS) using artificial intelligence with deep learning has been validated for infrarenal aorta imaging but requires further testing for thoracic and visceral aorta segmentation. This study assessed FAVS accuracy against physician controlled manual segmentation (PCMS) in the descending thoracic aorta, visceral abdominal aorta, and visceral vasculature. This was a retrospective, multicentre, observational cohort study. Fifty pre-operative CTAs of patients with abdominal aortic aneurysm were randomly selected. Comparisons between FAVS and PCMS and assessment of inter- and intra-observer reliability of PCMS were performed. Volumetric segmentation performance was evaluated using sensitivity, specificity, Dice similarity coefficient (DSC), and Jaccard index (JI). Visceral vessel identification was compared by analysing branchpoint coordinates. Bland-Altman limits of agreement (BA-LoA) were calculated for proximal visceral diameters (excluding duplicate renals). FAVS demonstrated performance comparable with PCMS for volumetric segmentation, with a median DSC of 0.93 (interquartile range [IQR] 0.03), JI of 0.87 (IQR 0.05), sensitivity of 0.99 (IQR 0.01), and specificity of 1.00 (IQR 0.00). These metrics are similar to interphysician comparisons: median DSC 0.93 (IQR 0.07), JI 0.87 (IQR 0.12), sensitivity 0.90 (IQR 0.08), and specificity 1.00 (IQR 0.00). FAVS correctly identified 99.5% (183/184) of visceral vessels. Branchpoint coordinates for FAVS and PCMS were within the limits of CTA spatial resolution (Δx -0.33 [IQR 2.82], Δy 0.61 [IQR 4.85], Δz 2.10 [IQR 4.69] mm). BA-LoA for proximal visceral diameter measurements showed reasonable agreement: FAVS vs. PCMS mean difference -0.11 ± 5.23 mm compared with interphysician variability of 0.03 ± 5.27 mm. FAVS provides accurate, efficient segmentation of the thoracic and visceral aorta, delivering performance comparable with manual segmentation by expert physicians. This technology may enhance clinical workflows for monitoring and planning treatments for complex abdominal and thoraco-abdominal aortic aneurysms.
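
For reference, the two overlap metrics quoted above can be computed directly from binary voxel masks; the sketch below assumes NumPy arrays holding the automatic and manual segmentations.

```python
# Illustrative sketch: Dice similarity coefficient (DSC) and Jaccard index (JI)
# between an automatic aortic segmentation and a manual reference mask.
import numpy as np

def dice_and_jaccard(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8):
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    dice = 2.0 * intersection / (pred.sum() + ref.sum() + eps)
    jaccard = intersection / (union + eps)
    return dice, jaccard

# Note: JI = DSC / (2 - DSC), so a median DSC of 0.93 corresponds to a JI of
# roughly 0.87, matching the paired medians reported above.
```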

Construction and validation of a urinary stone composition prediction model based on machine learning.

Guo J, Zhang J, Zhang J, Xu C, Wang X, Liu C

PubMed | Aug 11, 2025
The composition of urinary calculi is a critical determinant of personalized surgical strategy; however, such compositional data are often unavailable preoperatively. This study aims to develop a machine learning-based preoperative prediction model for stone composition and to evaluate its clinical utility. A retrospective cohort design was employed, including patients with urinary calculi admitted to the Department of Urology at the Second Affiliated Hospital of Zhengzhou University from 2019 to 2024. Feature selection was performed using least absolute shrinkage and selection operator (LASSO) regression combined with multivariate logistic regression, and binary prediction models for stone composition were subsequently constructed. Model validation used metrics such as the area under the curve (AUC), while Shapley Additive Explanations (SHAP) values were applied to interpret the predictive outcomes. Among 708 eligible patients, distinct prediction models were established for four stone types. For calcium oxalate stones, logistic regression achieved optimal performance (AUC = 0.845), with maximum stone CT value, 24-hour urinary oxalate, and stone size as the top SHAP-ranked predictors. For infection stones, logistic regression (AUC = 0.864) prioritized stone size, urinary pH, and recurrence history. For uric acid stones, a LASSO-ridge-elastic net model demonstrated exceptional accuracy (AUC = 0.961), driven by maximum CT value, 24-hour oxalate, and urinary calcium. For calcium-containing stones, logistic regression attained strong performance (AUC = 0.953), relying on CT value, 24-hour calcium, and stone size. This study developed a machine learning prediction model based on multi-algorithm integration, achieving accurate preoperative discrimination of urinary stone composition. The integration of key imaging features with metabolic indicators enhanced the model's predictive performance.
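
A rough sketch of the LASSO-plus-logistic-regression pattern described above, for one binary stone-composition endpoint; the pipeline, feature names, and hyperparameters are assumptions for illustration, not the study's exact configuration.

```python
# Illustrative sketch: LASSO-based feature screening followed by a multivariate
# logistic regression for one binary endpoint (e.g., uric acid stone vs. other).
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

model = make_pipeline(
    StandardScaler(),
    SelectFromModel(LassoCV(cv=5)),      # keep features with non-zero LASSO weights
    LogisticRegression(max_iter=1000),   # multivariate logistic model on the survivors
)

# X: predictors such as maximum stone CT value, stone size, 24-hour urinary oxalate
# and calcium, urinary pH, recurrence history; y: 1 if the target stone type.
# model.fit(X_train, y_train)
# auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
```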

Ratio of visceral-to-subcutaneous fat area improves long-term mortality prediction over either measure alone: automated CT-based AI measures with longitudinal follow-up in a large adult cohort.

Liu D, Kuchnia AJ, Blake GM, Lee MH, Garrett JW, Pickhardt PJ

PubMed | Aug 11, 2025
Fully automated AI-based algorithms can quantify adipose tissue on abdominal CT images. The aim of this study was to investigate the clinical value of these biomarkers by determining the association between adipose tissue measures and all-cause mortality. This retrospective study included 151,141 patients who underwent abdominal CT for any reason between 2000 and 2021. A validated AI-based algorithm quantified subcutaneous (SAT) and visceral (VAT) adipose tissue cross-sectional area. A visceral-to-subcutaneous adipose tissue area ratio (VSR) was calculated. Clinical data (age at the time of CT, sex, date of death, date of last contact) were obtained from a database search of the electronic health record. Hazard ratios (HR) and Kaplan-Meier curves assessed the relationship between adipose tissue measures and mortality. The endpoint of interest was all-cause mortality, with additional subgroup analyses by age and sex. In total, 138,169 patients were included in the final analysis. Higher VSR was associated with increased mortality; this association was strongest in younger women (HR 3.32 for the highest vs. lowest risk quartile at ages 18-39 years). Lower SAT was associated with increased mortality regardless of sex or age group (HR up to 1.63 at ages 18-39 years). Higher VAT was associated with increased mortality in younger age groups, with the trend weakening and reversing with age; this association was stronger in women. AI-based CT measures of SAT, VAT, and VSR are predictive of mortality, with VSR the highest performing fat area biomarker overall. These metrics tended to perform better for women and younger patients. Incorporating AI tools can augment patient assessment and management, improving outcomes.
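
The VSR calculation and its association with survival can be sketched as follows on synthetic data; the lifelines Cox model is an assumption about tooling, and the values are generated purely so the example runs, not drawn from the cohort.

```python
# Illustrative sketch: derive VSR from AI-measured fat areas and estimate its
# hazard ratio for all-cause mortality with a Cox proportional hazards model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter   # assumption: lifelines used for survival modeling

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "vat_cm2": rng.uniform(50, 300, n),   # visceral adipose tissue area from CT
    "sat_cm2": rng.uniform(80, 400, n),   # subcutaneous adipose tissue area from CT
    "age": rng.uniform(18, 90, n),
})
df["vsr"] = df["vat_cm2"] / df["sat_cm2"]
# Synthetic follow-up in which higher VSR shortens time-to-event, mimicking the trend above.
df["followup_years"] = rng.exponential(10 / (1 + df["vsr"]))
df["died"] = (df["followup_years"] < 8).astype(int)

cph = CoxPHFitter()
cph.fit(df[["vsr", "age", "followup_years", "died"]],
        duration_col="followup_years", event_col="died")
cph.print_summary()   # exp(coef) for "vsr" is the hazard ratio per unit increase
```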

Deep learning and radiomics fusion for predicting the invasiveness of lung adenocarcinoma within ground glass nodules.

Sun Q, Yu L, Song Z, Wang C, Li W, Chen W, Xu J, Han S

PubMed | Aug 11, 2025
Microinvasive adenocarcinoma (MIA) and invasive adenocarcinoma (IAC) require distinct treatment strategies and are associated with different prognoses, underscoring the importance of accurate differentiation. This study aims to develop a predictive model that combines radiomics and deep learning to effectively distinguish between MIA and IAC. In this retrospective study, 252 pathologically confirmed cases of ground-glass nodules (GGNs) were included, with 177 allocated to the training set and 75 to the testing set. Radiomics, 2D deep learning, and 3D deep learning models were constructed based on CT images. In addition, two fusion strategies were employed to integrate these modalities: early fusion, which concatenates features from all modalities prior to classification, and late fusion, which ensembles the output probabilities of the individual models. The predictive performance of all five models was evaluated using the area under the receiver operating characteristic curve (AUC), and DeLong's test was performed to compare differences in AUC between models. The radiomics model achieved an AUC of 0.794 (95% CI: 0.684-0.898), while the 2D and 3D deep learning models achieved AUCs of 0.754 (95% CI: 0.594-0.882) and 0.847 (95% CI: 0.724-0.945), respectively, in the testing set. Among the fusion models, the late fusion strategy demonstrated the highest predictive performance, with an AUC of 0.898 (95% CI: 0.784-0.962), outperforming the early fusion model, which achieved an AUC of 0.857 (95% CI: 0.731-0.936). Although the differences were not statistically significant, the late fusion model yielded the highest numerical values for diagnostic accuracy, sensitivity, and specificity across all models. The fusion of radiomics and deep learning features shows potential in improving the differentiation of MIA and IAC in GGNs. The late fusion strategy demonstrated promising results, warranting further validation in larger, multicenter studies.
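
A compact sketch of the two fusion strategies compared above: early fusion concatenates the radiomics and deep-learning feature vectors before a single classifier, while late fusion averages the output probabilities of separately trained models. The logistic regression classifier is an assumption, not the study's exact setup.

```python
# Illustrative sketch: early vs. late fusion of radiomics and deep-learning branches.
import numpy as np
from sklearn.linear_model import LogisticRegression

def early_fusion_fit(radiomic_feats: np.ndarray, deep_feats: np.ndarray, y: np.ndarray):
    # Concatenate per-nodule feature vectors from both modalities, then train one model.
    X = np.concatenate([radiomic_feats, deep_feats], axis=1)
    return LogisticRegression(max_iter=1000).fit(X, y)

def late_fusion_predict(models, feature_sets):
    # models[i] was trained on feature_sets[i]; ensemble by averaging probabilities.
    probs = [m.predict_proba(X)[:, 1] for m, X in zip(models, feature_sets)]
    return np.mean(probs, axis=0)   # fused probability of invasive adenocarcinoma
```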

Anatomy-Aware Low-Dose CT Denoising via Pretrained Vision Models and Semantic-Guided Contrastive Learning

Runze Wang, Zeli Chen, Zhiyun Song, Wei Fang, Jiajin Zhang, Danyang Tu, Yuxing Tang, Minfeng Xu, Xianghua Ye, Le Lu, Dakai Jin

arXiv preprint | Aug 11, 2025
To reduce radiation exposure and improve the diagnostic efficacy of low-dose computed tomography (LDCT), numerous deep learning-based denoising methods have been developed to mitigate noise and artifacts. However, most of these approaches ignore the anatomical semantics of human tissues, which may result in suboptimal denoising outcomes. To address this problem, we propose ALDEN, an anatomy-aware LDCT denoising method that integrates semantic features of pretrained vision models (PVMs) with adversarial and contrastive learning. Specifically, we introduce an anatomy-aware discriminator that dynamically fuses hierarchical semantic features from reference normal-dose CT (NDCT) via cross-attention mechanisms, enabling tissue-specific realism evaluation in the discriminator. In addition, we propose a semantic-guided contrastive learning module that enforces anatomical consistency by contrasting PVM-derived features from LDCT, denoised CT, and NDCT, preserving tissue-specific patterns through positive pairs and suppressing artifacts via dual negative pairs. Extensive experiments on two LDCT denoising datasets show that ALDEN achieves state-of-the-art performance, offering superior anatomy preservation and substantially reducing the over-smoothing seen in previous work. Further validation on a downstream multi-organ segmentation task (encompassing 117 anatomical structures) affirms the model's ability to maintain anatomical awareness.
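
As a rough illustration of the semantic-guided contrastive idea (not the exact ALDEN objective), the sketch below pulls pooled PVM features of the denoised output toward the matching NDCT features as the positive pair and pushes them away from the LDCT features as a negative pair, in an InfoNCE-style loss.

```python
# Illustrative sketch: a contrastive objective over pooled features extracted by a
# pretrained vision model (PVM) from denoised CT, NDCT, and LDCT patches.
import torch
import torch.nn.functional as F

def contrastive_loss(f_denoised, f_ndct, f_ldct, temperature: float = 0.1):
    # Each tensor: (B, D) pooled PVM features for one batch of patches.
    f_denoised = F.normalize(f_denoised, dim=1)
    f_ndct = F.normalize(f_ndct, dim=1)
    f_ldct = F.normalize(f_ldct, dim=1)
    pos = (f_denoised * f_ndct).sum(dim=1) / temperature   # similarity to clean CT
    neg = (f_denoised * f_ldct).sum(dim=1) / temperature   # similarity to noisy CT
    logits = torch.stack([pos, neg], dim=1)
    targets = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, targets)                # index 0 = positive pair
```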

Generative Artificial Intelligence to Automate Cerebral Perfusion Mapping in Acute Ischemic Stroke from Non-contrast Head Computed Tomography Images: Pilot Study.

Primiano NJ, Changa AR, Kohli S, Greenspan H, Cahan N, Kummer BR

PubMed | Aug 11, 2025
Acute ischemic stroke (AIS) is a leading cause of death and long-term disability worldwide, and rapid reperfusion remains critical for salvaging brain tissue. Although CT perfusion (CTP) imaging provides essential hemodynamic information, its limitations, including extended processing times, additional radiation exposure, and variable software outputs, can delay treatment. In contrast, non-contrast head CT (NCHCT) is ubiquitously available in acute stroke settings. This study explores a generative artificial intelligence approach to predict key perfusion parameters (relative cerebral blood flow [rCBF] and time-to-maximum [Tmax]) directly from NCHCT, potentially streamlining stroke imaging workflows and expanding access to critical perfusion data. We retrospectively identified patients evaluated for AIS who underwent NCHCT, CT angiography, and CTP. Ground truth perfusion maps (rCBF and Tmax) were extracted from VIZ.ai post-processed CTP studies. A modified pix2pix-turbo generative adversarial network (GAN) was developed to translate co-registered NCHCT images into corresponding perfusion maps. The network was trained using paired NCHCT-CTP data, with training, validation, and testing splits of 80%:10%:10%. Performance was assessed on the test set using quantitative metrics including the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and Fréchet inception distance (FID). Of 120 patients identified, studies from the 99 patients who met our inclusion and exclusion criteria formed the primary cohort (mean age 73.3 ± 13.5 years; 46.5% female). Cerebral occlusions were predominantly in the middle cerebral artery. GAN-generated Tmax maps achieved an SSIM of 0.827, PSNR of 16.99, and FID of 62.21, while the rCBF maps demonstrated comparable performance (SSIM 0.79, PSNR 16.38, FID 59.58). These results indicate that the model approximates ground truth perfusion maps to a moderate degree and successfully captures key cerebral hemodynamic features. Our findings demonstrate the feasibility of generating functional perfusion maps directly from widely available NCHCT images using a modified GAN. This cross-modality approach may serve as a valuable adjunct in AIS evaluation, particularly in resource-limited settings or when traditional CTP provides limited diagnostic information. Future studies with larger, multicenter datasets and further model refinements are warranted to enhance clinical accuracy and utility.
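
The per-image metrics reported above can be reproduced for a single generated/reference pair roughly as follows; scikit-image is an assumption about tooling, and FID is omitted because it is computed over feature distributions of the whole test set rather than per image.

```python
# Illustrative sketch: SSIM and PSNR between a GAN-generated perfusion map and the
# ground-truth CTP-derived map (both as 2D float arrays).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(generated: np.ndarray, reference: np.ndarray):
    data_range = reference.max() - reference.min()
    ssim = structural_similarity(reference, generated, data_range=data_range)
    psnr = peak_signal_noise_ratio(reference, generated, data_range=data_range)
    return ssim, psnr

# Hypothetical usage with co-registered Tmax maps:
# ssim, psnr = evaluate_pair(gan_tmax_map, ctp_tmax_map)
```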