
XDementNET: An Explainable Attention Based Deep Convolutional Network to Detect Alzheimer Progression from MRI data

Soyabul Islam Lincoln, Mirza Mohd Shahriar Maswood

arXiv preprint · May 20 2025
A common neurodegenerative disease, Alzheimer's disease requires precise diagnosis and efficient treatment, particularly in light of escalating healthcare expenses and the expanding use of artificial intelligence in medical diagnostics. Many recent studies show that combining brain Magnetic Resonance Imaging (MRI) with deep neural networks achieves promising results for diagnosing AD. Using deep convolutional neural networks, this paper introduces a novel deep learning architecture that incorporates multiresidual blocks, specialized spatial attention blocks, grouped query attention, and multi-head attention. The study assessed the model's performance on four publicly accessible datasets, covering both binary and multiclass classification tasks across various categories. The paper also addresses the explainability of AD progression, comparing state-of-the-art methods, namely Gradient Class Activation Mapping (GradCAM), Score-CAM, Faster Score-CAM, and XGradCAM. Our methodology consistently outperforms current approaches, achieving 99.66% accuracy in 4-class classification, 99.63% in 3-class classification, and 100% in binary classification on Kaggle datasets. On the Open Access Series of Imaging Studies (OASIS) datasets, the accuracies are 99.92%, 99.90%, and 99.95%, respectively. The Alzheimer's Disease Neuroimaging Initiative-1 (ADNI-1) dataset was used for experiments in three planes (axial, sagittal, and coronal) and a combination of all planes, yielding accuracies of 99.08% (axial), 99.85% (sagittal), 99.5% (coronal), and 99.17% (all planes combined); the corresponding figures for ADNI-2 were 97.79% and 8.60%. The network's excellent accuracy in categorizing AD stages demonstrates its ability to retrieve important information from MRI images.
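
The abstract names grouped query attention as one of its building blocks but publishes no code. As a minimal NumPy sketch of that mechanism (all shapes, weights, and the single-matrix projections here are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def grouped_query_attention(x, wq, wk, wv, n_q_heads, n_kv_heads):
    """Grouped query attention: n_q_heads query heads share n_kv_heads
    key/value heads; each KV head serves n_q_heads // n_kv_heads queries."""
    seq, d_model = x.shape
    d_head = d_model // n_q_heads
    group = n_q_heads // n_kv_heads

    # Project inputs and split into heads.
    q = (x @ wq).reshape(seq, n_q_heads, d_head)    # (S, Hq, Dh)
    k = (x @ wk).reshape(seq, n_kv_heads, d_head)   # (S, Hkv, Dh)
    v = (x @ wv).reshape(seq, n_kv_heads, d_head)

    out = np.empty_like(q)
    for h in range(n_q_heads):
        kv = h // group                              # shared KV head index
        scores = q[:, h, :] @ k[:, kv, :].T / np.sqrt(d_head)
        scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
        attn = np.exp(scores)
        attn /= attn.sum(axis=-1, keepdims=True)
        out[:, h, :] = attn @ v[:, kv, :]
    return out.reshape(seq, d_model)

rng = np.random.default_rng(0)
seq, d_model, n_q, n_kv = 6, 32, 8, 2
x = rng.standard_normal((seq, d_model))
wq = rng.standard_normal((d_model, d_model)) * 0.1
wk = rng.standard_normal((d_model, (d_model // n_q) * n_kv)) * 0.1
wv = rng.standard_normal((d_model, (d_model // n_q) * n_kv)) * 0.1
y = grouped_query_attention(x, wq, wk, wv, n_q, n_kv)
```

With 8 query heads and 2 KV heads, the K/V projections are 4x smaller than in standard multi-head attention, which is the memory saving GQA is usually chosen for.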

A 3D deep learning model based on MRI for predicting lymphovascular invasion in rectal cancer.

Wang T, Chen C, Liu C, Li S, Wang P, Yin D, Liu Y

PubMed · May 20 2025
The assessment of lymphovascular invasion (LVI) is crucial in the management of rectal cancer; however, accurately evaluating LVI preoperatively using imaging remains challenging. Recent advances in radiomics have created opportunities for developing more accurate diagnostic tools. This study aimed to develop and validate a deep learning model for predicting LVI in rectal cancer patients using preoperative MR imaging. A total of 334 cases were randomly divided into a training cohort (n = 233) and a validation cohort (n = 101) at a ratio of 7:3. Based on the pathological reports, patients were classified into positive and negative groups according to their LVI status. On the preoperative axial T2WI MR images, regions of interest (ROIs) were defined from the tumor itself and from the tumor edges extended outward by 5, 10, 15, and 20 pixels. The 2D and 3D deep learning features were extracted using the DenseNet121 architecture, and ten deep learning models were constructed: 2D and 3D versions of GTV (the tumor itself), GPTV5 (the tumor plus a 5-pixel margin), GPTV10, GPTV15, and GPTV20. To assess model performance, we used the area under the curve (AUC) and the DeLong test to compare models and identify the optimal one for predicting LVI in rectal cancer. Among the 2D models, 2D GPTV10 performed best, with an AUC of 0.891 (95% confidence interval [CI] 0.850-0.933) in the training cohort and 0.841 (95% CI 0.767-0.915) in the validation cohort; the difference in AUC between this model and the other 2D models was not statistically significant (DeLong test, p > 0.05). Among the 3D models, 3D GPTV10 had the highest AUC, with 0.961 (95% CI 0.940-0.982) in the training cohort and 0.928 (95% CI 0.881-0.976) in the validation cohort. The DeLong test demonstrated that 3D GPTV10 surpassed the other 3D models as well as 2D GPTV10 (p < 0.05). The study thus developed a deep learning model, 3D GPTV10, that uses preoperative MRI to accurately predict LVI in rectal cancer patients. Trained on the tumor and its surrounding 10-pixel margin as the region of interest, the model achieved superior performance compared with the other deep learning models. These findings have significant implications for clinicians in formulating personalized treatment plans for rectal cancer patients.
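
The GPTV ROIs extend the tumor mask outward by a fixed pixel margin. A toy NumPy sketch of that expansion, using iterative 8-connected binary dilation as an assumed stand-in for whatever morphological operation the authors actually used:

```python
import numpy as np

def expand_roi(mask, pixels):
    """Grow a binary tumor mask outward by `pixels`, one ring per iteration,
    via 8-connected dilation (illustrative stand-in for the GPTV margins)."""
    out = mask.astype(bool)
    for _ in range(pixels):
        padded = np.pad(out, 1)
        grown = np.zeros_like(out)
        # OR together the 3x3 neighborhood of every pixel.
        for di in (0, 1, 2):
            for dj in (0, 1, 2):
                grown |= padded[di:di + out.shape[0], dj:dj + out.shape[1]]
        out = grown
    return out

gtv = np.zeros((25, 25), dtype=bool)
gtv[12, 12] = True            # a one-pixel "tumor"
gptv5 = expand_roi(gtv, 5)    # the tumor plus a 5-pixel margin
```

In practice one would apply this per-slice to the segmented tumor on the axial T2WI images; the expanded mask then defines the crop fed to the feature extractor.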

Deep learning-based radiomics and machine learning for prognostic assessment in IDH-wildtype glioblastoma after maximal safe surgical resection: a multicenter study.

Liu J, Jiang S, Wu Y, Zou R, Bao Y, Wang N, Tu J, Xiong J, Liu Y, Li Y

PubMed · May 20 2025
Glioblastoma (GBM) is a highly aggressive brain tumor with poor prognosis. This study aimed to construct and validate a radiomics-based machine learning model for predicting overall survival (OS) in IDH-wildtype GBM after maximal safe surgical resection using magnetic resonance imaging. A total of 582 patients were retrospectively enrolled, comprising 301 in the training cohort, 128 in the internal validation cohort, and 153 in the external validation cohort. Volumes of interest (VOIs) from contrast-enhanced T1-weighted imaging (CE-T1WI) were segmented into three regions (contrast-enhancing tumor, necrotic non-enhancing core, and peritumoral edema) using a ResNet-based segmentation network. A total of 4,227 radiomic features were extracted and filtered using LASSO-Cox regression to identify prognostic signatures. The prognostic model was constructed using the Mime prediction framework, categorizing patients into high- and low-risk groups based on the median OS. Model performance was assessed using the concordance index and Kaplan-Meier survival analysis. Independent prognostic factors were identified through multivariable Cox regression analysis, and a nomogram was developed for individualized risk assessment. The Step Cox [backward] + RSF model achieved concordance indices of 0.89, 0.81, and 0.76 in the training, internal validation, and external validation cohorts, respectively. Log-rank tests demonstrated significant survival differences between high- and low-risk groups across all cohorts (P < 0.05). Multivariate Cox analysis identified age (HR: 1.022; 95% CI: 0.979, 1.009; P < 0.05), KPS score (HR: 0.970; 95% CI: 0.960, 0.978; P < 0.05), the rad-score of the necrotic non-enhancing core (HR: 8.164; 95% CI: 2.439, 27.331; P < 0.05), and the rad-score of peritumoral edema (HR: 3.748; 95% CI: 1.212, 11.594; P < 0.05) as independent predictors of OS. A nomogram integrating these predictors provided individualized risk assessment. This deep learning segmentation-based radiomics model demonstrated robust performance in predicting OS in GBM after maximal safe surgical resection. By incorporating radiomic signatures and advanced machine learning algorithms, it offers a non-invasive tool for personalized prognostic assessment and supports clinical decision-making.
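
The concordance index reported above (0.89/0.81/0.76) is Harrell's C-index. A self-contained sketch of its definition, written for fully observed events and small cohorts (real implementations also handle censoring after the event time, which this toy omits):

```python
def concordance_index(times, events, risks):
    """Harrell's C-index: among comparable pairs (where the earlier time is
    an observed event), the fraction in which the higher predicted risk
    failed first; ties in predicted risk count as 0.5."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:  # patient i failed before j
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Perfectly anti-ranked risks (shorter survival -> higher risk) give C = 1.0.
times = [2.0, 5.0, 7.0, 9.0]
events = [1, 1, 1, 1]
risks = [0.9, 0.7, 0.4, 0.1]
ci = concordance_index(times, events, risks)
```

A C-index of 0.5 corresponds to random ranking, so the external-cohort value of 0.76 means roughly three out of four comparable patient pairs are ordered correctly by the model.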

Challenges in Using Deep Neural Networks Across Multiple Readers in Delineating Prostate Gland Anatomy.

Abudalou S, Choi J, Gage K, Pow-Sang J, Yilmaz Y, Balagurunathan Y

PubMed · May 20 2025
Deep learning methods hold enormous promise for automating manually intensive tasks such as medical image segmentation and for providing workflow assistance to clinical experts. Deep neural networks (DNNs) require a significant number of training examples and a variety of expert opinions to capture nuance and context, a challenging proposition in oncological studies (H. Wang et al., Nature, vol. 620, no. 7972, pp. 47-60, Aug 2023). Inter-reader variability among clinical experts is a real-world problem that severely impacts the generalization and reproducibility of DNNs. This study proposes quantifying the variability in DNN performance across expert opinions and explores strategies to train the network to adapt between them. We address the inter-reader variability problem in the context of prostate gland segmentation using a well-studied DNN, the 3D U-Net model. Reference data include T2-weighted magnetic resonance imaging (MRI) with prostate glandular anatomy annotations from two expert readers (R#1, n = 342 and R#2, n = 204). The 3D U-Net trained and tested on each expert's examples achieved average Dice coefficients of 0.825 (CI [0.81, 0.84]) for R#1 and 0.85 (CI [0.82, 0.88]) for R#2. Combined training with a representative cohort proportion (R#1, n = 100 and R#2, n = 150) yielded enhanced model reproducibility across readers, achieving average test Dice coefficients of 0.863 (CI [0.85, 0.87]) for R#1 and 0.869 (CI [0.87, 0.88]) for R#2. Re-evaluating the model across gland volumes (large, small) showed improved performance for large glands, with average Dice coefficients of 0.846 (CI [0.82, 0.87]) and 0.872 (CI [0.86, 0.89]) for R#1 and R#2, respectively, estimated using fivefold cross-validation. Performance for small glands diminished, with average Dice coefficients of 0.8 (CI [0.79, 0.82]) and 0.8 (CI [0.79, 0.83]) for R#1 and R#2, respectively.
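
All of the numbers in this abstract are Dice coefficients between a predicted and a reference segmentation. For readers unfamiliar with the metric, a minimal NumPy implementation on synthetic masks:

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice coefficient between two binary masks: 2|A intersect B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + eps)

# Two 4x4 squares overlapping in a 2x2 corner: 2*4 / (16 + 16) = 0.25.
pred = np.zeros((10, 10), dtype=bool); pred[2:6, 2:6] = True
ref  = np.zeros((10, 10), dtype=bool); ref[4:8, 4:8] = True
score = dice(pred, ref)
```

Because Dice normalizes by total mask size, small glands are penalized more per misplaced voxel than large ones, which is consistent with the drop from ~0.87 to ~0.80 the study reports for small gland volumes.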

Prediction of prognosis of immune checkpoint inhibitors combined with anti-angiogenic agents for unresectable hepatocellular carcinoma by machine learning-based radiomics.

Xu X, Jiang X, Jiang H, Yuan X, Zhao M, Wang Y, Chen G, Li G, Duan Y

PubMed · May 19 2025
This study aims to develop and validate a novel radiomics model utilizing magnetic resonance imaging (MRI) to predict progression-free survival (PFS) in patients with unresectable hepatocellular carcinoma (uHCC) receiving a combination of immune checkpoint inhibitors (ICIs) and anti-angiogenic agents, an area not previously explored using MRI-based radiomics. A total of 111 patients with uHCC were enrolled. After univariate Cox regression and the least absolute shrinkage and selection operator (LASSO) algorithm were applied to select radiological features, the Rad-score was calculated with both a Cox proportional hazards regression model and a random survival forest (RSF) model, and the optimal method was selected by comparing Harrell's concordance index (C-index) values. The Rad-score was then combined with independent clinical risk factors to create a nomogram. The C-index, time-dependent receiver operating characteristic (ROC) curves, calibration curves, and decision curve analysis were employed to assess the predictive ability of the risk models. The combined nomogram incorporating independent clinical factors and the RSF-derived Rad-score demonstrated the best PFS prediction, with C-indices of 0.846 and 0.845 in the training and validation cohorts, respectively, indicating that the model performs well and could enable more precise patient stratification and personalized treatment strategies. Based on risk level, participants were classified into two distinct groups, a high-risk signature (HRS) group and a low-risk signature (LRS) group, with a significant difference between them (P < 0.01). This clinical-radiomics nomogram based on MRI is a promising tool for predicting prognosis in uHCC patients receiving ICIs combined with anti-angiogenic agents, potentially leading to more effective clinical outcomes.
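
The HRS/LRS split above follows a common radiomics pattern: compute a per-patient Rad-score from the selected features, then dichotomize at the median. A hedged sketch of that pattern (the linear form, coefficients, and feature count are illustrative assumptions; the paper's RSF-derived Rad-score is nonlinear):

```python
import numpy as np

def rad_score(features, coefs):
    """Linear Rad-score: a weighted sum of the selected radiomic features.
    (Illustrative; the study's RSF model produces scores differently.)"""
    return features @ coefs

def stratify(scores):
    """Assign patients to high-/low-risk groups at the median score."""
    cut = np.median(scores)
    return np.where(scores > cut, "high", "low")

rng = np.random.default_rng(1)
feats = rng.standard_normal((10, 3))   # 10 patients, 3 hypothetical features
coefs = np.array([0.8, -0.3, 0.5])     # assumed LASSO-selected weights
groups = stratify(rad_score(feats, coefs))
```

The two groups would then be compared with a log-rank test on their PFS curves, as the abstract's P < 0.01 result does.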

Morphometric and radiomics analysis toward the prediction of epilepsy associated with supratentorial low-grade glioma in children.

Tsai ML, Hsieh KL, Liu YL, Yang YS, Chang H, Wong TT, Peng SJ

PubMed · May 19 2025
Understanding the impact of epilepsy on pediatric brain tumors is crucial to diagnostic precision and optimal treatment selection. This study investigated MRI radiomics features, tumor location, voxel-based morphometry (VBM) for gray matter density, and tumor volumetry to differentiate between children with low-grade glioma (LGG)-associated epilepsy and those without, identified key radiomics features predictive of epilepsy risk in children with supratentorial LGG, and constructed an epilepsy prediction model. A total of 206 radiomics features of tumors and voxel-based morphometric tumor location features were extracted from presurgical T2-FLAIR images in a primary cohort of 48 children with LGG, with epilepsy (n = 23) or without epilepsy (n = 25). Feature selection was performed using the minimum redundancy maximum relevance (mRMR) algorithm, and leave-one-out cross-validation was applied to assess the performance of radiomics and tumor location signatures in differentiating epilepsy-associated LGG from non-epilepsy cases. Voxel-based morphometric analysis showed significant positive t-scores within the bilateral temporal cortex and negative t-scores in the basal ganglia between the epilepsy and non-epilepsy groups. Eight radiomics features were identified as significant predictors of epilepsy in LGG, encompassing 2 location, 2 shape, 1 gray-scale intensity, and 3 texture characteristics. The most important predictor was temporal lobe involvement, followed by high dependence high grey level emphasis, elongation, area density, information correlation 1, midbrain involvement, and intensity range. A linear support vector machine (SVM) model implemented with a combination of radiomics and tumor location features yielded the best prediction performance: precision 0.955, recall 0.913, specificity 0.960, accuracy 0.938, F1 score 0.933, and area under the curve (AUC) 0.950. Our findings demonstrate the efficacy of machine learning models based on radiomics features and voxel-based anatomical locations in predicting the risk of epilepsy in supratentorial LGG. The model provides a highly accurate tool for distinguishing epilepsy-associated LGG in children, supporting precise treatment planning.
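
The mRMR selection step used above greedily balances relevance against redundancy. A simplified correlation-based sketch of the idea (true mRMR typically uses mutual information; absolute Pearson correlation is an assumed proxy here):

```python
import numpy as np

def mrmr_select(X, y, k):
    """Greedy mRMR-style selection: at each step pick the feature with the
    highest |correlation with the label| (relevance) minus the mean
    |correlation with already-selected features| (redundancy)."""
    n_feat = X.shape[1]
    selected, remaining = [], list(range(n_feat))
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                          for j in range(n_feat)])
    while remaining and len(selected) < k:
        best, best_score = None, -np.inf
        for j in remaining:
            redundancy = (np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                   for s in selected]) if selected else 0.0)
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(2)
y = rng.standard_normal(50)
X = np.column_stack([y + 0.01 * rng.standard_normal(50),  # near-copy of label
                     rng.standard_normal(50),
                     rng.standard_normal(50)])
picked = mrmr_select(X, y, 2)
```

Feature 0 is nearly identical to the label, so it is selected first; the redundancy term then discourages picking other features correlated with it.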

The Role of Machine Learning to Detect Occult Neck Lymph Node Metastases in Early-Stage (T1-T2/N0) Oral Cavity Carcinomas.

Troise S, Ugga L, Esposito M, Positano M, Elefante A, Capasso S, Cuocolo R, Merola R, Committeri U, Abbate V, Bonavolontà P, Nocini R, Dell'Aversana Orabona G

PubMed · May 19 2025
Oral cavity carcinomas (OCCs) represent roughly 50% of all head and neck cancers. The risk of occult neck metastases for early-stage OCCs ranges from 15% to 35%, hence the need for tools that can support the detection of these metastases. Machine learning and radiomic features are emerging as effective tools in this field. The aim of this study is therefore to demonstrate the effectiveness of radiomic features in predicting the risk of occult neck metastases in early-stage (T1-T2/N0) OCCs. This was a retrospective, single-institution analysis (Maxillo-facial Surgery Unit, University of Naples Federico II) of 75 patients surgically treated for early-stage OCC. For all patients, TNM data, in particular pN status after histopathological examination, were obtained, and radiomic features were extracted from MRI. Fifty-six patients had N0 status confirmed after surgery, while 19 were pN+. The radiomic features, processed by a machine-learning algorithm, preoperatively discriminated occult neck metastases with a sensitivity of 78%, specificity of 83%, AUC of 86%, accuracy of 80%, and positive predictive value (PPV) of 63%. Our results confirm that radiomic features extracted by machine learning methods are effective tools for detecting occult neck metastases in early-stage OCCs. The clinical relevance of this study is that radiomics could be used routinely as a preoperative tool to support diagnosis and to help surgeons in the surgical decision-making process, particularly regarding indications for neck lymph node treatment.
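
The sensitivity/specificity/PPV figures above all derive from a 2x2 confusion matrix. A small reference implementation (the counts below are illustrative, not the study's raw table):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    ppv = tp / (tp + fp)                  # positive predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, ppv, accuracy

# Hypothetical cohort: 20 true positives among 100 patients.
sens, spec, ppv, acc = diagnostic_metrics(tp=15, fp=9, tn=71, fn=5)
```

Note how PPV lags sensitivity when positives are rare (here 20 of 100), which mirrors the study's PPV of 63% despite 78% sensitivity: only 19 of 75 patients were pN+.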

GuidedMorph: Two-Stage Deformable Registration for Breast MRI

Yaqian Chen, Hanxue Gu, Haoyu Dong, Qihang Li, Yuwen Chen, Nicholas Konz, Lin Li, Maciej A. Mazurowski

arXiv preprint · May 19 2025
Accurately registering breast MR images from different time points enables the alignment of anatomical structures and tracking of tumor progression, supporting more effective breast cancer detection, diagnosis, and treatment planning. However, the complexity of dense tissue and its highly non-rigid nature pose challenges for conventional registration methods, which primarily focus on aligning general structures while overlooking intricate internal details. To address this, we propose GuidedMorph, a novel two-stage registration framework designed to better align dense tissue. In addition to a single-scale network for global structure alignment, we introduce a framework that utilizes dense tissue information to track breast movement. The learned transformation fields are fused by a Dual Spatial Transformer Network (DSTN), improving overall alignment accuracy. A novel warping method based on the Euclidean distance transform (EDT) is also proposed to accurately warp the registered dense tissue and breast masks, preserving fine structural details during deformation. The framework supports both paradigms that require external segmentation models and those operating on image data alone, and it works with either the VoxelMorph or TransMorph backbone, offering a versatile solution for breast registration. We validate our method on the ISPY2 and an internal dataset, demonstrating superior performance in dense tissue alignment, overall breast alignment, and breast structural similarity index measure (SSIM), with improvements of over 13.01% in dense tissue Dice, 3.13% in breast Dice, and 1.21% in breast SSIM compared with the best learning-based baseline.
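
The EDT underlying the proposed mask-warping step assigns each background voxel its distance to the nearest mask voxel. A brute-force 2D sketch of the transform itself (O(n^2), demo-sized only; real pipelines use linear-time algorithms such as scipy.ndimage.distance_transform_edt, and how the paper applies the EDT inside its warp is not reproduced here):

```python
import numpy as np

def edt(mask):
    """Brute-force Euclidean distance transform: for every background pixel,
    the distance to the nearest foreground pixel of a binary mask."""
    fg = np.argwhere(mask)
    out = np.zeros(mask.shape)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if not mask[i, j]:
                out[i, j] = np.sqrt(((fg - (i, j)) ** 2).sum(axis=1)).min()
    return out

m = np.zeros((7, 7), dtype=bool)
m[3, 3] = True                 # a single foreground pixel
d = edt(m)
```

Warping the smooth distance field instead of the hard binary mask avoids the aliasing that nearest-neighbor interpolation introduces at mask boundaries.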

Federated Learning for Renal Tumor Segmentation and Classification on Multi-Center MRI Dataset.

Nguyen DT, Imami M, Zhao LM, Wu J, Borhani A, Mohseni A, Khunte M, Zhong Z, Shi V, Yao S, Wang Y, Loizou N, Silva AC, Zhang PJ, Zhang Z, Jiao Z, Kamel I, Liao WH, Bai H

PubMed · May 19 2025
Deep learning (DL) models for accurate renal tumor characterization may benefit from multi-center datasets for improved generalizability; however, data-sharing constraints necessitate privacy-preserving solutions such as federated learning (FL). This retrospective multi-center study assessed the performance and reliability of FL for renal tumor segmentation and classification on multi-institutional MRI datasets. A total of 987 patients (403 female) from six hospitals were included; 73% (723/987) had malignant renal tumors, primarily clear cell carcinoma (n = 509). Patients were split into training (n = 785), validation (n = 104), and test (n = 99) sets, stratified across three simulated institutions. MRI was performed at 1.5 T and 3 T using T2-weighted imaging (T2WI) and contrast-enhanced T1-weighted imaging (CE-T1WI) sequences. Both FL and non-FL approaches used nnU-Net for tumor segmentation and ResNet for classification; FL trained models across the three simulated institutional clients with central weight aggregation, while the non-FL approach used centralized training on the full dataset. Segmentation was evaluated using Dice coefficients, and classification of malignant versus benign lesions was assessed using accuracy, sensitivity, specificity, and area under the curve (AUC). FL and non-FL performance was compared using the Wilcoxon test for segmentation Dice and DeLong's test for AUC (p < 0.05). No significant difference was observed between the FL and non-FL models in segmentation (Dice: 0.43 vs. 0.45, p = 0.202) or classification (AUC: 0.69 vs. 0.64, p = 0.959) on the test set, nor in classification accuracy (p = 0.912), sensitivity (p = 0.862), or specificity (p = 0.847). FL demonstrated performance comparable to non-FL approaches in renal tumor segmentation and classification, supporting its potential as a privacy-preserving alternative for multi-institutional DL models. Evidence Level: 4. Technical Efficacy: Stage 2.
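
The "central weight aggregation" step of FL is usually federated averaging (FedAvg): each client trains locally, then the server combines the weights, weighted by client dataset size. A minimal NumPy sketch of the aggregation alone (client sizes and the one-layer "models" here are illustrative; whether the study used size-weighted FedAvg specifically is an assumption):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: combine per-client model weights into a global
    model, weighting each client by its local dataset size."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [sum(w[l] * (n / total) for w, n in zip(client_weights, client_sizes))
            for l in range(n_layers)]

# Three simulated institutions, each holding a one-layer "model".
w_a = [np.array([1.0, 1.0])]
w_b = [np.array([3.0, 3.0])]
w_c = [np.array([5.0, 5.0])]
sizes = [785 // 3, 785 // 3, 785 // 3]   # equal splits of the training set
global_w = fedavg([w_a, w_b, w_c], sizes)
```

In a full round, the server would broadcast `global_w` back to the clients for the next local training epoch; raw images never leave the institutions.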

Functional MRI Analysis of Cortical Regions to Distinguish Lewy Body Dementia From Alzheimer's Disease.

Kashyap B, Hanson LR, Gustafson SK, Sherman SJ, Sughrue ME, Rosenbloom MH

PubMed · May 19 2025
Cortical regions such as parietal area H (PH) and the fundus of the superior temporal sulcus (FST) are involved in higher visual function and may play a role in dementia with Lewy bodies (DLB), which is frequently associated with hallucinations. The authors evaluated functional connectivity between these two regions to distinguish participants with DLB from those with Alzheimer's disease (AD) or mild cognitive impairment (MCI) and from cognitively normal (CN) individuals, seeking a functional connectivity MRI signature for DLB. Eighteen DLB participants completed cognitive testing and functional MRI scans and were matched to AD or MCI and CN individuals whose data were obtained from the Alzheimer's Disease Neuroimaging Initiative database (https://adni.loni.usc.edu). Images were analyzed alongside data from Human Connectome Project (HCP) comparison individuals using a machine learning-based, subject-specific HCP atlas built on diffusion tractography. Bihemispheric functional connectivity of the PH to left FST regions was reduced in the DLB group compared with the AD and CN groups (mean±SD connectivity score = 0.307±0.009 vs. 0.456±0.006 and 0.433±0.006, respectively). No significant differences were detected among the groups in connectivity within basal ganglia structures, and no significant correlations were observed between neuropsychological test results and PH-FST functional connectivity. Performance on clock-drawing and number-cancellation tests was significantly and negatively correlated with connectivity between the right caudate nucleus and right substantia nigra in DLB participants but not in AD or CN participants. Functional connectivity between the PH and FST regions is uniquely affected by DLB and may help distinguish this condition from AD.
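
The connectivity scores above are, in the most common formulation, Pearson correlations between region-averaged BOLD time series; whether this study adds further normalization is not stated, so the following is a generic sketch with synthetic signals (the `ph`/`fst` series are hypothetical stand-ins for the two regions' time courses):

```python
import numpy as np

def functional_connectivity(ts_a, ts_b):
    """Functional connectivity between two regions, computed as the Pearson
    correlation of their mean BOLD time series."""
    return np.corrcoef(ts_a, ts_b)[0, 1]

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 200)
ph  = np.sin(t) + 0.1 * rng.standard_normal(200)   # hypothetical PH signal
fst = np.sin(t) + 0.1 * rng.standard_normal(200)   # correlated FST signal
fc = functional_connectivity(ph, fst)
```

Two regions driven by the same underlying signal with small independent noise yield a connectivity score near 1; the DLB group's reduced PH-FST score (0.307 vs. ~0.44) reflects weaker shared fluctuation between the regions.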