
Brain connectome gradient dysfunction in patients with end-stage renal disease and its association with clinical phenotype and cognitive deficits.

Li P, Li N, Ren L, Yang YP, Zhu XY, Yuan HJ, Luo ZY, Mu JY, Wang W, Zhang M

PubMed · May 6, 2025
A cortical hierarchical architecture is vital for encoding and integrating sensorimotor-to-cognitive information. However, whether this gradient structure is disrupted in end-stage renal disease (ESRD) patients, and how any such disruption relates to clinical symptoms, remains unknown. We prospectively enrolled 77 ESRD patients and 48 healthy controls. Using resting-state functional magnetic resonance imaging, we studied ESRD-related hierarchical alterations. The Neurosynth platform and machine-learning models with 10-fold cross-validation were applied. ESRD patients had abnormal gradient metrics in core regions of the default mode network, sensorimotor network, and frontoparietal network. These changes correlated with creatinine, depression, and cognitive functions. A logistic regression classifier achieved a maximum performance of 84.8% accuracy and 0.901 area under the ROC curve (AUC). Our results highlight hierarchical imbalances in ESRD patients that correlate with diverse cognitive deficits and may serve as potential neuroimaging markers of clinical symptoms.
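
A minimal sketch, assuming tabular gradient metrics per participant, of the classification setup named in this abstract: a logistic regression classifier evaluated with 10-fold cross-validation and scored by accuracy and ROC AUC. The data below are synthetic placeholders, not the study's cohort.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in for gradient metrics: 77 ESRD patients + 48 healthy controls.
rng = np.random.default_rng(0)
X = rng.normal(size=(125, 20))
y = np.array([1] * 77 + [0] * 48)   # 1 = ESRD, 0 = healthy control

clf = LogisticRegression(max_iter=1000)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

# Evaluate with 10-fold cross-validation, reporting accuracy and ROC AUC.
acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"mean accuracy: {acc.mean():.3f}, mean AUC: {auc.mean():.3f}")
```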

Machine learning algorithms integrating positron emission tomography/computed tomography features to predict pathological complete response after neoadjuvant chemoimmunotherapy in lung cancer.

Sheng Z, Ji S, Chen Y, Mi Z, Yu H, Zhang L, Wan S, Song N, Shen Z, Zhang P

PubMed · May 6, 2025
Reliable methods for predicting pathological complete response (pCR) in non-small cell lung cancer (NSCLC) patients undergoing neoadjuvant chemoimmunotherapy are still under exploration. Although fluorine-18 fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) features reflect tumour response, their utility in predicting pCR remains controversial. This retrospective analysis included NSCLC patients who received neoadjuvant chemoimmunotherapy followed by 18F-FDG PET/CT imaging at Shanghai Pulmonary Hospital from October 2019 to August 2024. Eligible patients were randomly divided into training and validation cohorts at a 7:3 ratio. Relevant 18F-FDG PET/CT features were evaluated as individual predictors and incorporated into 5 machine learning (ML) models. Model performance was assessed using the area under the receiver operating characteristic curve (AUC), and Shapley additive explanation was applied for model interpretation. A total of 205 patients were included, with 91 (44.4%) achieving pCR. Post-treatment tumour maximum standardized uptake value (SUVmax) demonstrated the highest predictive performance among individual predictors, achieving an AUC of 0.72 (95% CI 0.65-0.79), while ΔT SUVmax achieved an AUC of 0.65 (95% CI 0.53-0.77). The Light Gradient Boosting Machine algorithm outperformed other models and individual predictors, achieving an average AUC of 0.87 (95% CI 0.78-0.97) in the training cohort and 0.83 (95% CI 0.72-0.94) in the validation cohort. Shapley additive explanation analysis identified post-treatment tumour SUVmax and post-treatment nodal volume as key contributors. These ML models offer a non-invasive and effective approach for predicting pCR after neoadjuvant chemoimmunotherapy in NSCLC.
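
A sketch (not the authors' pipeline) of the strongest model reported above: a LightGBM classifier trained on PET/CT-derived predictors, evaluated by AUC, and interpreted with SHAP. The lightgbm and shap packages are assumed to be installed; the features and labels below are synthetic.

```python
import numpy as np
import lightgbm as lgb
import shap
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for PET/CT predictors (e.g. post-treatment SUVmax, nodal volume).
rng = np.random.default_rng(0)
X = rng.normal(size=(205, 8))
y = rng.integers(0, 2, size=205)   # 1 = pathological complete response
X_tr, X_va, y_tr, y_va = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

model = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("validation AUC:", roc_auc_score(y_va, model.predict_proba(X_va)[:, 1]))

# SHAP values give per-feature contributions for each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_va)
```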

A Vision-Language Model for Focal Liver Lesion Classification

Song Jian, Hu Yuchang, Wang Hui, Chen Yen-Wei

arXiv preprint · May 6, 2025
Accurate classification of focal liver lesions is crucial for diagnosis and treatment in hepatology. However, traditional supervised deep learning models depend on large-scale annotated datasets, which are often limited in medical imaging. Recently, vision-language models (VLMs) such as the Contrastive Language-Image Pre-training (CLIP) model have been applied to image classification. Compared with conventional convolutional neural networks (CNNs), which classify images based on visual information only, VLMs leverage multimodal learning with text and images, allowing them to learn effectively even with a limited amount of labeled data. Inspired by CLIP, we propose Liver-VLM, a model specifically designed for focal liver lesion (FLL) classification. First, Liver-VLM incorporates class information into the text encoder without introducing additional inference overhead. Second, by calculating the pairwise cosine similarities between image and text embeddings and optimizing the model with a cross-entropy loss, Liver-VLM effectively aligns image features with class-level text features. Experimental results on the MPCT-FLLs dataset demonstrate that Liver-VLM outperforms both the standard CLIP and MedCLIP models in terms of accuracy and area under the curve (AUC). Further analysis shows that using a lightweight ResNet18 backbone enhances classification performance, particularly under data-constrained conditions.
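
A compact sketch of the CLIP-style alignment described: class names are embedded by a text encoder, image embeddings are compared to them by cosine similarity, and the similarity logits are trained with cross-entropy. The random tensors stand in for encoder outputs, and the lesion class list is an assumption for illustration, not the MPCT-FLLs label set.

```python
import torch
import torch.nn.functional as F

class_names = ["cyst", "hemangioma", "HCC", "metastasis"]   # illustrative FLL classes
num_classes, dim, batch = len(class_names), 512, 8

text_emb = torch.randn(num_classes, dim)                    # stand-in for text-encoder outputs
image_emb = torch.randn(batch, dim, requires_grad=True)     # stand-in for image-encoder outputs
labels = torch.randint(0, num_classes, (batch,))

# Cosine similarity = dot product of L2-normalized embeddings; one logit per class.
logits = F.normalize(image_emb, dim=-1) @ F.normalize(text_emb, dim=-1).T
loss = F.cross_entropy(logits / 0.07, labels)               # 0.07 = temperature, as in CLIP
loss.backward()
```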

STG: Spatiotemporal Graph Neural Network with Fusion and Spatiotemporal Decoupling Learning for Prognostic Prediction of Colorectal Cancer Liver Metastasis

Yiran Zhu, Wei Yang, Yan su, Zesheng Li, Chengchang Pan, Honggang Qi

arXiv preprint · May 6, 2025
We propose a multimodal spatiotemporal graph neural network (STG) framework to predict colorectal cancer liver metastasis (CRLM) progression. Current clinical models do not effectively integrate the tumor's spatial heterogeneity, dynamic evolution, and complex multimodal data relationships, limiting their predictive accuracy. Our STG framework combines preoperative CT imaging and clinical data into a heterogeneous graph structure, enabling joint modeling of tumor distribution and temporal evolution through spatial topology and cross-modal edges. The framework uses GraphSAGE to aggregate spatiotemporal neighborhood information and leverages supervised and contrastive learning strategies to enhance the model's ability to capture temporal features and improve robustness. A lightweight version of the model reduces parameter count by 78.55%, maintaining near-state-of-the-art performance. The model jointly optimizes recurrence risk regression and survival analysis tasks, with contrastive loss improving feature representational discriminability and cross-modal consistency. Experimental results on the MSKCC CRLM dataset show a time-adjacent accuracy of 85% and a mean absolute error of 1.1005, significantly outperforming existing methods. The innovative heterogeneous graph construction and spatiotemporal decoupling mechanism effectively uncover the associations between dynamic tumor microenvironment changes and prognosis, providing reliable quantitative support for personalized treatment decisions.
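
A minimal sketch, assuming PyTorch Geometric is available, of the GraphSAGE aggregation step this framework builds on: node features on a small graph with spatial and temporal edges are updated from their neighbours, then fed to a recurrence-risk regression head. The graph, feature sizes, and head are illustrative and not the authors' STG architecture.

```python
import torch
from torch_geometric.nn import SAGEConv

num_nodes, in_dim, hidden = 6, 16, 32
x = torch.randn(num_nodes, in_dim)            # e.g. lesion + clinical node features
edge_index = torch.tensor([[0, 1, 2, 3, 4],   # spatial / cross-modal / temporal edges
                           [1, 2, 3, 4, 5]])

# One round of GraphSAGE neighbourhood aggregation.
conv = SAGEConv(in_dim, hidden)
h = torch.relu(conv(x, edge_index))

# Simple recurrence-risk regression head over the aggregated node embeddings.
risk_head = torch.nn.Linear(hidden, 1)
risk = risk_head(h).squeeze(-1)
print(risk.shape)   # one risk score per node
```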

Comprehensive Cerebral Aneurysm Rupture Prediction: From Clustering to Deep Learning

Zakeri, M., Atef, A., Aziznia, M., Jafari, A.

medRxiv preprint · May 6, 2025
Cerebral aneurysm is a silent yet prevalent condition that affects a substantial portion of the global population. Aneurysms can develop due to various factors and present differently, necessitating diverse treatment approaches. Choosing the appropriate treatment upon diagnosis is paramount, as the severity of the disease dictates the course of action. The vulnerability of an aneurysm, particularly in the circle of Willis, is a critical concern; rupture can lead to irreversible consequences, including death. The primary objective of this study is to predict the rupture status of cerebral aneurysms using a comprehensive dataset that includes clinical, morphological, and hemodynamic data extracted from blood flow simulations of patients with actual vessels. Our goal is to provide valuable insights that can aid in treatment decision-making and potentially save the lives of future patients. Diagnosing and predicting the rupture status of aneurysms based solely on brain scans poses a significant challenge, often with limited accuracy, even for experienced physicians. However, harnessing statistical and machine learning (ML) techniques can enhance rupture prediction and treatment strategy selection. We employed a diverse set of supervised and unsupervised algorithms, training them on a database comprising over 700 cerebral aneurysms described by 55 parameters: 3 clinical, 35 morphological, and 17 hemodynamic features. Two of our models, a stochastic gradient descent (SGD) classifier and a multi-layer perceptron (MLP), achieved a maximum area under the curve (AUC) of 0.86, a precision of 0.86, and a recall of 0.90 for prediction of cerebral aneurysm rupture. Given the sensitivity of the data and the critical nature of the condition, recall is a more vital metric than accuracy and precision; our study achieved an acceptable recall score. Key features for rupture prediction included ellipticity index, low shear area ratio, and irregularity. Additionally, a one-dimensional CNN model predicted rupture status along a continuous spectrum, achieving 0.78 accuracy on the testing dataset and providing nuanced insights into rupture propensity.
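
A sketch of the two best-performing classifiers named above (SGD and MLP) trained on a synthetic stand-in for the 55 clinical, morphological, and hemodynamic features, scored by recall since the abstract emphasizes it. Hyperparameters and data are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 3 clinical + 35 morphological + 17 hemodynamic features.
rng = np.random.default_rng(0)
X = rng.normal(size=(700, 55))
y = rng.integers(0, 2, size=700)   # 1 = ruptured aneurysm

for name, clf in [("SGD", SGDClassifier(loss="log_loss")),
                  ("MLP", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500))]:
    pipe = make_pipeline(StandardScaler(), clf)   # scaling matters for both learners
    recall = cross_val_score(pipe, X, y, cv=5, scoring="recall")
    print(name, "mean recall:", recall.mean().round(3))
```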

From manual clinical criteria to machine learning algorithms: Comparing outcome endpoints derived from diverse electronic health record data modalities.

Chappidi S, Belue MJ, Harmon SA, Jagasia S, Zhuge Y, Tasci E, Turkbey B, Singh J, Camphausen K, Krauze AV

PubMed · May 1, 2025
Progression-free survival (PFS) is a critical clinical outcome endpoint during cancer management and treatment evaluation. Yet, PFS is often missing from publicly available datasets due to the currently subjective, expert-driven, and time-intensive nature of generating PFS metrics. Given emerging research in multi-modal machine learning (ML), we explored the benefits and challenges associated with mining different electronic health record (EHR) data modalities and automating extraction of PFS metrics via ML algorithms. We analyzed EHR data from 92 pathology-proven glioblastoma (GBM) patients, obtaining 233 corticosteroid prescriptions, 2080 radiology reports, and 743 brain MRI scans. Three methods were developed to derive clinical PFS: 1) frequency analysis of corticosteroid prescriptions, 2) natural language processing (NLP) of radiology reports, and 3) computer vision (CV) volumetric analysis of imaging. Outputs from these methods were compared to manually annotated clinical guideline PFS metrics. Employing data-driven methods, standalone progression rates were 63% (prescription), 78% (NLP), and 54% (CV), compared to the 99% progression rate from manually applied clinical guidelines using integrated data sources. The prescription method identified progression an average of 5.2 months later than the clinical standard, while the CV and NLP algorithms identified progression earlier by 2.6 and 6.9 months, respectively. Although lesion growth is a clinical guideline progression indicator, only half of patients exhibited increasing contrast-enhancing tumor volumes during scan-based CV analysis. Our results indicate that data-driven algorithms can extract tumor progression outcomes from existing EHR data. However, ML methods are subject to varying availability bias, supporting contextual information, and pre-processing resource burdens that influence the extracted PFS endpoint distributions. Our scan-based CV results also suggest that automation of clinical criteria may not align with human intuition. Our findings indicate a need for improved data source integration, validation, and revisiting of clinical criteria in parallel with multi-modal ML algorithm development.
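
A rough illustration, not the authors' algorithm, of a scan-based volumetric progression rule of the kind described: progression is flagged when contrast-enhancing tumour volume rises more than a threshold above its running nadir. The 25% threshold mirrors common response criteria, and the example volumes are made-up values for illustration.

```python
def progression_date(scan_dates, volumes_ml, threshold=0.25):
    """Return the first scan date where volume exceeds the running nadir by `threshold`."""
    nadir = volumes_ml[0]
    for date, vol in zip(scan_dates, volumes_ml):
        nadir = min(nadir, vol)                    # track the smallest volume seen so far
        if nadir > 0 and (vol - nadir) / nadir > threshold:
            return date                            # relative growth beyond threshold -> progression
    return None                                    # no progression detected in this series

dates = ["2020-01-10", "2020-04-12", "2020-07-20", "2020-10-25"]
volumes = [12.0, 8.5, 9.0, 12.4]                   # 12.4 mL is >25% above the 8.5 mL nadir
print(progression_date(dates, volumes))            # -> "2020-10-25"
```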

Upper-lobe CT imaging features improve prediction of lung function decline in COPD.

Makimoto K, Virdee S, Koo M, Hogg JC, Bourbeau J, Tan WC, Kirby M

PubMed · May 1, 2025
It is unknown whether prediction models for lung function decline using computed tomography (CT) imaging-derived features from the upper lobes improve performance compared with globally derived features in individuals at risk of and with COPD. Individuals at risk (current or former smokers) and those with COPD from the Canadian Cohort Obstructive Lung Disease (CanCOLD) retrospective study were investigated. A total of 103 CT features were extracted globally and regionally and were used together with 12 clinical features (demographics, questionnaires, and spirometry) to predict rapid lung function decline in individuals at risk and those with COPD. Machine-learning models were evaluated in a hold-out test set using the area under the receiver operating characteristic curve (AUC), with DeLong's test for comparison. A total of 780 participants were included (n=276 at risk; n=298 Global Initiative for Chronic Obstructive Lung Disease (GOLD) 1 COPD; n=206 GOLD 2+ COPD). For predicting rapid lung function decline in those at risk, the upper-lobe CT model obtained a significantly higher AUC (AUC=0.80) than the lower-lobe CT model (AUC=0.63) and the global model (AUC=0.66; p<0.05). For predicting rapid lung function decline in COPD, there were no significant differences between the upper-lobe (AUC=0.63), lower-lobe (AUC=0.59), and global CT feature models (AUC=0.59; p>0.05). CT features extracted from the upper lobes obtained significantly improved prediction performance compared with globally extracted features for rapid lung function decline in early/mild COPD.
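
A sketch of the comparison design: one model trained on upper-lobe CT features plus clinical variables, another on global features, with held-out AUCs compared. The random-forest learner and synthetic data are assumptions; DeLong's test is not available in scikit-learn and would need a dedicated implementation, so it is omitted here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_clinical, n_ct = 780, 12, 103
clinical = rng.normal(size=(n, n_clinical))
upper_ct = rng.normal(size=(n, n_ct))      # upper-lobe CT features (synthetic)
global_ct = rng.normal(size=(n, n_ct))     # globally extracted CT features (synthetic)
y = rng.integers(0, 2, size=n)             # 1 = rapid lung function decline

idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.3,
                                  random_state=0, stratify=y)

for name, feats in [("upper-lobe", upper_ct), ("global", global_ct)]:
    X = np.hstack([clinical, feats])       # clinical + regional/global CT features
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X[idx_tr], y[idx_tr])
    auc = roc_auc_score(y[idx_te], model.predict_proba(X[idx_te])[:, 1])
    print(f"{name} model AUC: {auc:.2f}")
```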

Deep learning-based fine-grained assessment of aneurysm wall characteristics using 4D-CT angiography.

Kumrai T, Maekawa T, Chen Y, Sugiyama Y, Takagaki M, Yamashiro S, Takizawa K, Ichinose T, Ishida F, Kishima H

PubMed · Jan 1, 2025
This study proposes a novel deep learning-based approach for assessing aneurysm wall characteristics, including thin-walled (TW) and hyperplastic-remodeling (HR) regions. We analyzed fifty-two unruptured cerebral aneurysms employing 4D computed tomography angiography (4D-CTA) and intraoperative recordings. The TW and HR regions were identified in intraoperative images. The 3D trajectories of observation points on aneurysm walls were processed to compute a time series of 3D speed, acceleration, and smoothness of motion, aiming to evaluate the aneurysm wall characteristics. To facilitate point-level risk evaluation using the time-series data, we developed a convolutional neural network (CNN)-long short-term memory (LSTM)-based regression model enriched with attention layers. In order to accommodate patient heterogeneity, a patient-independent feature extraction mechanism was introduced. Furthermore, unlabeled data were incorporated to enhance the data-intensive deep model. The proposed method achieved an average diagnostic accuracy of 92%, significantly outperforming a simpler model lacking attention. These results underscore the significance of patient-independent feature extraction and the use of unlabeled data. This study demonstrates the efficacy of a fine-grained deep learning approach in predicting aneurysm wall characteristics using 4D-CTA. Notably, incorporating an attention-based network structure proved particularly effective, contributing to enhanced performance.
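
A minimal PyTorch sketch of the model family described: 1D convolutions over the per-point time series (speed, acceleration, smoothness), an LSTM over the convolved sequence, and attention pooling before a regression head. Layer sizes and the pooling scheme are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class CNNLSTMAttention(nn.Module):
    def __init__(self, in_channels=3, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU())
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)       # scores each time step
        self.head = nn.Linear(hidden, 1)       # wall-characteristic risk score

    def forward(self, x):                      # x: (batch, channels, time)
        h = self.conv(x).transpose(1, 2)       # -> (batch, time, 32)
        out, _ = self.lstm(h)                  # -> (batch, time, hidden)
        w = torch.softmax(self.attn(out), dim=1)
        pooled = (w * out).sum(dim=1)          # attention-weighted temporal pooling
        return self.head(pooled).squeeze(-1)

model = CNNLSTMAttention()
ts = torch.randn(4, 3, 100)                    # 4 observation points, 100 time steps
print(model(ts).shape)                         # torch.Size([4])
```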

Brain tumor classification using MRI images and deep learning techniques.

Wong Y, Su ELM, Yeong CF, Holderbaum W, Yang C

PubMed · Jan 1, 2025
Brain tumors pose a significant medical challenge, necessitating early detection and precise classification for effective treatment. This study aims to address this challenge by introducing an automated brain tumor classification system that utilizes deep learning (DL) and magnetic resonance imaging (MRI) images. The main purpose of this research is to develop a model that can accurately detect and classify different types of brain tumors, including glioma, meningioma, pituitary tumors, and normal brain scans. A convolutional neural network (CNN) architecture with pretrained VGG16 as the base model is employed, and diverse public datasets are utilized to ensure comprehensive representation. Data augmentation techniques are employed to enhance the training dataset, resulting in a total of 17,136 brain MRI images across the four classes. The model achieved an accuracy of 99.24%, higher than that reported in similar works, demonstrating its potential clinical utility. This accuracy was achieved mainly through the use of a large and diverse dataset, improved network configuration, a fine-tuning strategy to adjust pretrained weights, and data augmentation techniques that enhanced classification performance for brain tumor detection. In addition, a web application was developed using HTML and Dash components to enhance usability, allowing easy image upload and tumor prediction. By harnessing artificial intelligence (AI), the developed system addresses the need to reduce human error and enhance diagnostic accuracy. The proposed approach provides an efficient and reliable solution for brain tumor classification, facilitating early diagnosis and enabling timely medical interventions. This work signifies a potential advancement in brain tumor classification, promising improved patient care and outcomes.
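
A minimal sketch, assuming TensorFlow/Keras, of the transfer-learning recipe described: a frozen pretrained VGG16 base, on-the-fly data augmentation, and a small four-class head. Input size and head layers are assumptions, not the paper's exact configuration.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# On-the-fly data augmentation applied during training.
augment = models.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

# Frozen VGG16 base; top blocks can be unfrozen later for fine-tuning.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    augment,
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(4, activation="softmax"),   # glioma, meningioma, pituitary, normal
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```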

The Role of Computed Tomography and Artificial Intelligence in Evaluating the Comorbidities of Chronic Obstructive Pulmonary Disease: A One-Stop CT Scanning for Lung Cancer Screening.

Lin X, Zhang Z, Zhou T, Li J, Jin Q, Li Y, Guan Y, Xia Y, Zhou X, Fan L

PubMed · Jan 1, 2025
Chronic obstructive pulmonary disease (COPD) is a major cause of morbidity and mortality worldwide. Comorbidities in patients with COPD significantly increase morbidity, mortality, and healthcare costs, posing a significant burden on the management of COPD. Given the complex clinical manifestations and varying severity of COPD comorbidities, accurate diagnosis and evaluation are particularly important in selecting appropriate treatment options. With the development of medical imaging technology, AI-based chest CT, as a noninvasive imaging modality, provides a detailed assessment of COPD comorbidities. Recent studies have shown that certain radiographic features on chest CT can be used as alternative markers of comorbidities in COPD patients. CT-based radiomics features provided incremental predictive value over clinical risk factors alone, achieving an AUC of 0.73 for COPD combined with cardiovascular disease (CVD). However, AI has inherent limitations, such as a lack of interpretability, and further research is needed to address them. This review evaluates the progress of AI technology combined with chest CT imaging in COPD comorbidities, including lung cancer, cardiovascular disease, osteoporosis, sarcopenia, excess adipose depots, and pulmonary hypertension, with the aim of improving the understanding of imaging and the management of COPD comorbidities for the purposes of disease screening, efficacy assessment, and prognostic evaluation.
