Page 126 of 1411403 results

Deep Learning and Radiomic Signatures Associated with Tumor Immune Heterogeneity Predict Microvascular Invasion in Colon Cancer.

Jia J, Wang J, Zhang Y, Bai G, Han L, Niu Y

pubmed · May 23 2025
This study aims to develop and validate a deep learning radiomics signature (DLRS) that integrates radiomics and deep learning features for the non-invasive prediction of microvascular invasion (MVI) in patients with colon cancer (CC). Furthermore, the study explores the potential association between DLRS and tumor immune heterogeneity. This multi-center retrospective study included a total of 1007 patients with colon cancer (CC) from three medical centers and The Cancer Genome Atlas (TCGA-COAD) database. Patients from Medical Centers 1 and 2 were divided into a training cohort (n = 592) and an internal validation cohort (n = 255) in a 7:3 ratio. Medical Center 3 (n = 135) and the TCGA-COAD database (n = 25) were used as external validation cohorts. Radiomics and deep learning features were extracted from contrast-enhanced venous-phase CT images. Feature selection was performed using machine learning algorithms, and three predictive models were developed: a radiomics model, a deep learning (DL) model, and a combined deep learning radiomics (DLR) model. The predictive performance of each model was evaluated using multiple metrics, including the area under the curve (AUC), sensitivity, and specificity. Additionally, differential gene expression analysis was conducted on RNA-seq data from the TCGA-COAD dataset to explore the association between the DLRS and tumor immune heterogeneity within the tumor microenvironment. Compared to the standalone radiomics and deep learning models, the DLR fusion model demonstrated superior predictive performance. The AUC for the internal validation cohort was 0.883 (95% CI: 0.828-0.937), while the AUC for the external validation cohort reached 0.855 (95% CI: 0.775-0.935). Furthermore, stratifying patients from the TCGA-COAD dataset into high-risk and low-risk groups based on the DLRS revealed significant differences in immune cell infiltration and immune checkpoint expression between the two groups (P < 0.05).
The contrast-enhanced CT-based DLR fusion model developed in this study effectively predicts the MVI status in patients with CC. This model serves as a non-invasive preoperative assessment tool and reveals a potential association between the DLRS and immune heterogeneity within the tumor microenvironment, providing insights to optimize individualized treatment strategies.
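The fusion step described above — standardizing radiomics and DL feature vectors, concatenating them, and holding out a 7:3 validation split — can be sketched in a few lines of numpy. This is an illustrative toy (random matrices, hypothetical feature dimensions), not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature matrices: rows = patients, columns = features.
radiomics = rng.normal(size=(100, 30))   # e.g. shape/texture features
deep = rng.normal(size=(100, 128))       # e.g. a CNN embedding

def zscore(x):
    # Standardize each feature column (mean 0, std 1) before fusion
    return (x - x.mean(axis=0)) / x.std(axis=0)

# "DLR" fusion: concatenate the two standardized feature blocks
fused = np.hstack([zscore(radiomics), zscore(deep)])

# 7:3 split, mirroring the study's 592/255 training/validation ratio
n_train = int(0.7 * len(fused))
perm = rng.permutation(len(fused))
train, val = fused[perm[:n_train]], fused[perm[n_train:]]
print(train.shape, val.shape)  # (70, 158) (30, 158)
```

Any downstream classifier can then be fit on `train` and scored on `val`.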

Automated Detection of Severe Cerebral Edema Using Explainable Deep Transfer Learning after Hypoxic Ischemic Brain Injury.

Wang Z, Kulpanowski AM, Copen WA, Rosenthal ES, Dodelson JA, McCrory DE, Edlow BL, Kimberly WT, Amorim E, Westover M, Ning M, Zabihi M, Schaefer PW, Malhotra R, Giacino JT, Greer DM, Wu O

pubmed · May 23 2025
Substantial gaps exist in the neuroprognostication of cardiac arrest patients who remain comatose after the restoration of spontaneous circulation. Most studies focus on predicting survival, a measure confounded by the withdrawal of life-sustaining treatment decisions. Severe cerebral edema (SCE) may serve as an objective proximal imaging-based surrogate of neurologic injury. We retrospectively analyzed data from 288 patients to automate SCE detection with machine learning (ML) and to test the hypothesis that the quantitative values produced by these algorithms (ML_SCE) can improve predictions of neurologic outcomes. Ground-truth SCE (GT_SCE) classification was based on radiology reports. The model attained a cross-validated testing accuracy of 87% [95% CI: 84%, 89%] for detecting SCE. Attention maps explaining SCE classification focused on cisternal regions (p<0.05). Multivariable analyses showed that older age (p<0.001), non-shockable initial cardiac rhythm (p=0.004), and greater ML_SCE values (p<0.001) were significant predictors of poor neurologic outcomes, with GT_SCE (p=0.064) as a non-significant covariate. Our results support the feasibility of automated SCE detection. Future prospective studies with standardized neurologic assessments are needed to substantiate the utility of quantitative ML_SCE values to improve neuroprognostication.
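The reported 87% [95% CI: 84%, 89%] accuracy pairs a point estimate with a confidence interval; one common way to obtain such an interval is a percentile bootstrap over the test predictions. A minimal sketch with made-up labels (the abstract does not specify the study's exact CI method):

```python
import numpy as np

def bootstrap_accuracy_ci(y_true, y_pred, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for classification accuracy."""
    rng = np.random.default_rng(seed)
    correct = (np.asarray(y_true) == np.asarray(y_pred)).astype(float)
    n = len(correct)
    # Resample patients with replacement and recompute accuracy each time
    accs = [correct[rng.integers(0, n, n)].mean() for _ in range(n_boot)]
    lo, hi = np.percentile(accs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return correct.mean(), lo, hi

# Toy example: 8 of 10 predictions correct
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 1, 0, 1])
acc, lo, hi = bootstrap_accuracy_ci(y_true, y_pred)
print(round(acc, 2))  # 0.8
```

With the study's larger n (288 patients), the interval tightens, consistent with the narrow [84%, 89%] band reported.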

How We Won the ISLES'24 Challenge by Preprocessing

Tianyi Ren, Juampablo E. Heras Rivera, Hitender Oswal, Yutong Pan, William Henry, Sophie Walters, Mehmet Kurt

arxiv preprint · May 23 2025
Stroke is among the top three causes of death worldwide, and accurate identification of stroke lesion boundaries is critical for diagnosis and treatment. Supervised deep learning methods have emerged as the leading solution for stroke lesion segmentation but require large, diverse, and annotated datasets. The ISLES'24 challenge addresses this need by providing longitudinal stroke imaging data, including CT scans taken on arrival to the hospital and follow-up MRI taken 2-9 days from initial arrival, with annotations derived from follow-up MRI. Importantly, models submitted to the ISLES'24 challenge are evaluated using only CT inputs, requiring prediction of lesion progression that may not be visible in CT scans for segmentation. Our winning solution shows that a carefully designed preprocessing pipeline including deep-learning-based skull stripping and custom intensity windowing is beneficial for accurate segmentation. Combined with a standard large residual nnU-Net architecture for segmentation, this approach achieves a mean test Dice of 28.5 with a standard deviation of 21.27.
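Custom intensity windowing, which the authors highlight as a key preprocessing step, maps a Hounsfield-unit range of interest to [0, 1]; Dice is the metric the challenge reports. A minimal numpy sketch of both (the window center/width values here are illustrative, not the challenge pipeline's):

```python
import numpy as np

def window_ct(hu, center=40.0, width=80.0):
    """Clip Hounsfield units to a display window and rescale to [0, 1]."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

def dice(pred, gt):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

hu = np.array([-1000.0, 0.0, 40.0, 80.0, 3000.0])
print(window_ct(hu))  # values: 0, 0, 0.5, 1, 1
```

Air (-1000 HU) and bone (3000 HU) saturate at the window edges, concentrating the model's input range on soft tissue where ischemic change is visible.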

Monocular Marker-free Patient-to-Image Intraoperative Registration for Cochlear Implant Surgery

Yike Zhang, Eduardo Davalos Anaya, Jack H. Noble

arxiv preprint · May 23 2025
This paper presents a novel method for monocular patient-to-image intraoperative registration, specifically designed to operate without any external hardware tracking equipment or fiducial point markers. Leveraging a synthetic microscopy surgical scene dataset with a wide range of transformations, our approach directly maps preoperative CT scans to 2D intraoperative surgical frames through a lightweight neural network for real-time cochlear implant surgery guidance via a zero-shot learning approach. Unlike traditional methods, our framework seamlessly integrates with monocular surgical microscopes, making it highly practical for clinical use without additional hardware requirements. Our method estimates camera poses, which include a rotation matrix and a translation vector, by learning from the synthetic dataset, enabling accurate and efficient intraoperative registration. The proposed framework was evaluated on nine clinical cases using a patient-specific and cross-patient validation strategy. The results suggest that our approach achieves clinically relevant accuracy in predicting 6D camera poses for registering 3D preoperative CT scans to 2D surgical scenes, with an angular error within 10 degrees in most cases, while also addressing limitations of traditional methods, such as reliance on external tracking systems or fiducial markers.
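The "angular error within 10 degrees" criterion is typically the geodesic distance between the estimated and ground-truth rotation matrices. A hedged sketch of that metric (the paper may compute it differently):

```python
import numpy as np

def angular_error_deg(R_est, R_gt):
    """Geodesic angle (degrees) between two 3x3 rotation matrices."""
    # trace(R_est @ R_gt.T) = 1 + 2*cos(theta) for the relative rotation
    cos = (np.trace(R_est @ R_gt.T) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def rot_z(deg):
    # Rotation about the z-axis, for the toy check below
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0,        0.0,       1.0]])

print(angular_error_deg(rot_z(5.0), np.eye(3)))  # ≈ 5.0
```

The translation component of the 6D pose is usually scored separately, e.g. as a Euclidean distance in millimeters.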

A Unified Multi-Scale Attention-Based Network for Automatic 3D Segmentation of Lung Parenchyma & Nodules In Thoracic CT Images

Muhammad Abdullah, Furqan Shaukat

arxiv preprint · May 23 2025
Lung cancer is one of the leading causes of cancer mortality worldwide. Computer-aided detection (CAD) can help in early detection and thus can help increase the survival rate. Accurate lung parenchyma segmentation (to include the juxta-pleural nodules) and lung nodule segmentation, the primary symptom of lung cancer, play a crucial role in the overall accuracy of the Lung CAD pipeline. Lung nodule segmentation is quite challenging because of the diverse nodule types and other inhibiting structures present within the lung lobes. Traditional machine/deep learning methods suffer from generalization and robustness. Recent Vision Language Models/Foundation Models perform well on the anatomical level, but they suffer on fine-grained segmentation tasks, and their semi-automatic nature limits their effectiveness in real-time clinical scenarios. In this paper, we propose a novel method for accurate 3D segmentation of lung parenchyma and lung nodules. The proposed architecture is an attention-based network with residual blocks at each encoder-decoder state. Max pooling is replaced by strided convolutions at the encoder, and trilinear interpolation is replaced by transposed convolutions at the decoder to maximize the number of learnable parameters. Dilated convolutions at each encoder-decoder stage allow the model to capture the larger context without increasing computational costs. The proposed method has been evaluated extensively on one of the largest publicly available datasets, namely LUNA16, and is compared with recent notable work in the domain using standard performance metrics like Dice score, IOU, etc. The results show that the proposed method achieves better performance than state-of-the-art methods. The source code, datasets, and pre-processed data can be accessed using the link: https://github.com/EMeRALDsNRPU/Attention-Based-3D-ResUNet.
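The claim that dilated convolutions "capture the larger context without increasing computational costs" follows from how the receptive field grows with dilation while the per-layer parameter count stays fixed. A tiny sketch of the 1D receptive-field arithmetic (the dilation schedule is illustrative, not necessarily the paper's):

```python
def receptive_field(kernel=3, dilations=(1, 2, 4)):
    """Receptive field (per axis) of stacked stride-1 dilated convolutions."""
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d  # each layer adds (kernel - 1) * dilation
    return rf

# Three 3x3x3 layers: dilations (1,1,1) vs (1,2,4). Context roughly doubles
# per axis while the weight count per layer is identical (kernel unchanged).
print(receptive_field(dilations=(1, 1, 1)),
      receptive_field(dilations=(1, 2, 4)))  # 7 15
```

The same formula applies per spatial axis in 3D, so the contextual volume grows cubically with no extra weights.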

Development and validation of a multi-omics hemorrhagic transformation model based on hyperattenuated imaging markers following mechanical thrombectomy.

Jiang L, Zhu G, Wang Y, Hong J, Fu J, Hu J, Xiao S, Chu J, Hu S, Xiao W

pubmed · May 23 2025
This study aimed to develop a predictive model integrating clinical, radiomics, and deep learning (DL) features of hyperattenuated imaging markers (HIM) from computed tomography scans immediately following mechanical thrombectomy (MT) to predict hemorrhagic transformation (HT). A total of 239 patients with HIM who underwent MT were enrolled, with 191 patients (80%) in the training cohort and 48 patients (20%) in the validation cohort. Additionally, the model was tested on an internal prospective cohort of 49 patients. A total of 1834 radiomics features and 2048 DL features were extracted from HIM images. Statistical methods, such as analysis of variance, Pearson's correlation coefficient, principal component analysis, and least absolute shrinkage and selection operator, were used to select the most significant features. A K-Nearest Neighbor classifier was employed to develop a combined model integrating clinical, radiomics, and DL features for HT prediction. Model performance was evaluated using metrics such as accuracy, sensitivity, specificity, receiver operating characteristic curves, and area under the curve (AUC). In the training, validation, and test cohorts, the combined model achieved AUCs of 0.926, 0.923, and 0.887, respectively, outperforming other models, including clinical, radiomics, and DL models, as well as hybrid models combining subsets of features (Clinical + Radiomics, DL + Radiomics, and Clinical + DL) in predicting HT. The combined model, which integrates clinical, radiomics, and DL features derived from HIM, demonstrated efficacy in noninvasively predicting HT. These findings suggest its potential utility in guiding clinical decision-making for patients undergoing MT.
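The classifier at the core of the combined model is K-Nearest Neighbors: predict the majority label among the k training samples closest to each test sample. A minimal from-scratch sketch (toy 1-D data; the study's fused feature space had thousands of dimensions):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Minimal KNN classifier: Euclidean distance, majority vote."""
    preds = []
    for x in np.atleast_2d(X_test):
        dist = np.linalg.norm(X_train - x, axis=1)      # distance to each train point
        nearest = y_train[np.argsort(dist)[:k]]          # labels of k closest
        labels, counts = np.unique(nearest, return_counts=True)
        preds.append(labels[np.argmax(counts)])          # majority vote
    return np.array(preds)

X = np.array([[0.0], [1.0], [10.0], [11.0], [12.0]])
y = np.array([0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([[0.5], [10.5]]), k=3))  # [0 1]
```

Because KNN votes over raw distances, the feature-selection stage (ANOVA, Pearson, PCA, LASSO) matters: it removes uninformative dimensions that would otherwise dilute the distance metric.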

Multimodal fusion model for prognostic prediction and radiotherapy response assessment in head and neck squamous cell carcinoma.

Tian R, Hou F, Zhang H, Yu G, Yang P, Li J, Yuan T, Chen X, Chen Y, Hao Y, Yao Y, Zhao H, Yu P, Fang H, Song L, Li A, Liu Z, Lv H, Yu D, Cheng H, Mao N, Song X

pubmed · May 23 2025
Accurate prediction of prognosis and postoperative radiotherapy response is critical for personalized treatment in head and neck squamous cell carcinoma (HNSCC). We developed a multimodal deep learning model (MDLM) integrating computed tomography, whole-slide images, and clinical features from 1087 HNSCC patients across multiple centers. The MDLM exhibited good performance in predicting overall survival (OS) and disease-free survival in external test cohorts. Additionally, the MDLM outperformed unimodal models. Patients with a high-risk score who underwent postoperative radiotherapy exhibited prolonged OS compared to those who did not (P = 0.016), whereas no significant improvement in OS was observed among patients with a low-risk score (P = 0.898). Biological exploration indicated that the model may be related to changes in the cytochrome P450 metabolic pathway, tumor microenvironment, and myeloid-derived cell subpopulations. Overall, the MDLM effectively predicts prognosis and postoperative radiotherapy response, offering a promising tool for personalized HNSCC therapy.

End-to-end prognostication in pancreatic cancer by multimodal deep learning: a retrospective, multicenter study.

Schuurmans M, Saha A, Alves N, Vendittelli P, Yakar D, Sabroso-Lasa S, Xue N, Malats N, Huisman H, Hermans J, Litjens G

pubmed · May 23 2025
Pancreatic cancer treatment plans involving surgery and/or chemotherapy are highly dependent on disease stage. However, current staging systems are ineffective and poorly correlated with survival outcomes. We investigate how artificial intelligence (AI) can enhance prognostic accuracy in pancreatic cancer by integrating multiple data sources. Patients with histopathology and/or radiology/follow-up confirmed pancreatic ductal adenocarcinoma (PDAC) from a Dutch center (2004-2023) were included in the development cohort. Two additional PDAC cohorts from a Dutch and Spanish center were used for external validation. Prognostic models including clinical variables, contrast-enhanced CT images, and a combination of both were developed to predict high-risk short-term survival. All models were trained using five-fold cross-validation and assessed by the area under the time-dependent receiver operating characteristic curve (AUC). The models were developed on 401 patients (203 females, 198 males, median survival (OS) = 347 days, IQR: 171-585), with 98 (24.4%) short-term survivors (OS < 230 days) and 303 (75.6%) long-term survivors. The external validation cohorts included 361 patients (165 females, 138 males, median OS = 404 days, IQR: 173-736), with 110 (30.5%) short-term survivors and 251 (69.5%) long-term survivors. The best AUC for predicting short vs. long-term survival was achieved with the multi-modal model (AUC = 0.637 (95% CI: 0.500-0.774)) in the internal validation set. External validation showed AUCs of 0.571 (95% CI: 0.453-0.689) and 0.675 (95% CI: 0.593-0.757). Multimodal AI can predict long vs. short-term survival in PDAC patients, showing potential as a prognostic tool in clinical decision-making.
Question: Prognostic tools for pancreatic ductal adenocarcinoma (PDAC) remain limited, with TNM staging offering suboptimal accuracy in predicting patient survival outcomes.
Findings: The multimodal AI model demonstrated improved prognostic performance over TNM and unimodal models for predicting short- and long-term survival in PDAC patients.
Clinical relevance: Multimodal AI provides enhanced prognostic accuracy compared to current staging systems, potentially improving clinical decision-making and personalized management strategies for PDAC patients.
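The AUCs quoted throughout (0.637 internal; 0.571 and 0.675 external) can be computed without an explicit ROC sweep via the Mann-Whitney identity: AUC is the probability that a randomly chosen positive case outranks a randomly chosen negative. A minimal numpy sketch (toy scores; the paper used a time-dependent variant of the ROC):

```python
import numpy as np

def roc_auc(y_true, scores):
    """AUC as P(score_pos > score_neg): the Mann-Whitney U formulation."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()   # positive outranks negative
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

y = np.array([0, 0, 1, 1])
s = np.array([0.1, 0.4, 0.35, 0.8])
print(roc_auc(y, s))  # 0.75
```

An AUC of 0.5 is chance level, which puts the external result of 0.571 in context: better than chance, but with room for improvement.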

Evaluation of a deep-learning segmentation model for patients with colorectal cancer liver metastases (COALA) in the radiological workflow.

Zeeuw M, Bereska J, Strampel M, Wagenaar L, Janssen B, Marquering H, Kemna R, van Waesberghe JH, van den Bergh J, Nota I, Moos S, Nio Y, Kop M, Kist J, Struik F, Wesdorp N, Nelissen J, Rus K, de Sitter A, Stoker J, Huiskens J, Verpalen I, Kazemier G

pubmed · May 23 2025
For patients with colorectal liver metastases (CRLM), total tumor volume (TTV) is prognostic. A deep-learning segmentation model for CRLM to assess TTV, called COlorectal cAncer Liver metastases Assessment (COALA), has been developed. This study evaluated COALA's performance and practical utility in the radiological picture archiving and communication system (PACS). A secondary aim was to provide lessons for future researchers on the implementation of artificial intelligence (AI) models. Patients discussed between January and December 2023 in a multidisciplinary meeting for CRLM were included. In those patients, CRLM was automatically segmented in portal-venous phase CT scans by COALA and integrated with PACS. Eight expert abdominal radiologists completed a questionnaire addressing segmentation accuracy and PACS integration. They were also asked to write down general remarks. In total, 57 patients were evaluated. In those patients, 112 contrast-enhanced portal-venous phase CT scans were analyzed. Of eight radiologists, six (75%) evaluated the model as user-friendly in their radiological workflow. Areas of improvement for the COALA model were the segmentation of small lesions, heterogeneous lesions, and lesions at the border of the liver with involvement of the diaphragm or heart. Key lessons for implementation were a multidisciplinary approach, a robust method prior to model development, and organizing evaluation sessions with end-users early in the development phase. This study demonstrates that the deep-learning segmentation model for patients with CRLM (COALA) is user-friendly in the radiologist's PACS. Future researchers striving for implementation should have a multidisciplinary approach, propose a robust methodology, and involve end-users prior to model development. Many segmentation models are being developed, but none of those models have been evaluated in the (radiological) workflow or clinically implemented. 
Our model is implemented in the radiological workflow, providing valuable lessons for researchers to achieve clinical implementation. Developed segmentation models should be implemented in the radiological workflow. Our implemented segmentation model provides valuable lessons for future researchers. If implemented in clinical practice, our model could allow for objective radiological evaluation.
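COALA's clinical target, total tumor volume (TTV), reduces to counting segmented voxels and scaling by the voxel volume implied by the CT spacing. A minimal sketch with a synthetic mask (the spacing values are illustrative, not the study's):

```python
import numpy as np

def total_tumor_volume_ml(mask, spacing_mm=(1.0, 1.0, 1.0)):
    """TTV in mL: segmented-voxel count times voxel volume (mm^3 -> mL)."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return float(mask.sum()) * voxel_mm3 / 1000.0

# Synthetic binary segmentation: a 10x10x10-voxel "lesion"
mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask[20:30, 20:30, 20:30] = 1

# 0.8 x 0.8 x 2.5 mm voxels -> 1000 voxels * 1.6 mm^3 = 1.6 mL
print(total_tumor_volume_ml(mask, spacing_mm=(0.8, 0.8, 2.5)))  # ≈ 1.6
```

Because TTV is a simple sum over the mask, segmentation errors on small lesions (the weakness radiologists flagged) translate directly into volume underestimation.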

Development of a non-contrast CT-based radiomics nomogram for early prediction of delayed cerebral ischemia in aneurysmal subarachnoid hemorrhage.

Chen L, Wang X, Wang S, Zhao X, Yan Y, Yuan M, Sun S

pubmed · May 23 2025
Delayed cerebral ischemia (DCI) is a significant complication following aneurysmal subarachnoid hemorrhage (aSAH), leading to poor prognosis and high mortality. This study developed a non-contrast CT (NCCT)-based radiomics nomogram for early DCI prediction in aSAH patients. Three hundred seventy-seven aSAH patients were included in this retrospective study. Radiomic features from the baseline CTs were extracted using PyRadiomics. Feature selection was conducted using t-tests, Pearson correlation, and Lasso regression to identify those features most closely associated with DCI. Multivariable logistic regression was used to identify independent clinical and demographic risk factors. Eight machine learning algorithms were applied to construct radiomics-only and radiomics-clinical fusion nomogram models. The nomogram integrated the radscore and three clinically significant parameters (aneurysm, aneurysm treatment, and admission Hunt-Hess score), with the Support Vector Machine model yielding the highest performance in the validation set. The radiomics model and nomogram produced AUCs of 0.696 (95% CI: 0.578-0.815) and 0.831 (95% CI: 0.739-0.923), respectively. The nomogram achieved an accuracy of 0.775, a sensitivity of 0.750, a specificity of 0.795, and an F1 score of 0.750. The NCCT-based radiomics nomogram demonstrated high predictive performance for DCI in aSAH patients, providing a valuable tool for early DCI identification and formulating appropriate treatment strategies.
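Of the feature-selection steps listed (t-tests, Pearson correlation, Lasso), the Pearson filter is the simplest to sketch: rank features by absolute correlation with the label and keep the strongest. A toy numpy version (hypothetical feature matrix, not the study's PyRadiomics features):

```python
import numpy as np

def select_by_correlation(X, y, top_k=1):
    """Rank features by |Pearson r| with the label; return top_k column indices."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()))
    return np.argsort(-np.abs(r))[:top_k]

y = np.array([0.0, 1.0, 0.0, 1.0])
X = np.column_stack([y,                           # col 0: perfectly correlated
                     np.array([0., 0., 1., 1.]),  # col 1: uncorrelated with y
                     np.array([3., 1., 4., 1.])]) # col 2: strongly anticorrelated
print(select_by_correlation(X, y, top_k=2))  # [0 2]
```

In practice this filter is only a first pass; Lasso then handles redundancy among the surviving features, which plain correlation ranking ignores.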
