
Stratifying trigeminal neuralgia and characterizing an abnormal property of brain functional organization: a resting-state fMRI and machine learning study.

Wu M, Qiu J, Chen Y, Jiang X

PubMed | Jul 1, 2025
Increasing evidence suggests that the primary trigeminal neuralgia (TN) subtypes, classical TN (CTN) and idiopathic TN (ITN), share biological, neuropsychological, and clinical features despite differing diagnostic criteria. Neuroimaging studies have shown differences in neurovascular compression (NVC) between these disorders; however, changes in brain dynamics across the two TN subtypes remain unknown. The authors aimed to examine functional connectivity differences among CTN, ITN, and pain-free controls. A total of 93 subjects, 50 TN patients and 43 pain-free controls, underwent resting-state functional magnetic resonance imaging (rs-fMRI). All TN patients underwent surgery, and the NVC type was verified. Functional connectivity and spontaneous brain activity were analyzed, and the significant alterations in rs-fMRI indices were selected to train classification models. Patients with TN showed increased connectivity between several brain regions, such as the medial prefrontal cortex (mPFC) and the left planum temporale, and decreased connectivity between the mPFC and the left superior frontal gyrus. CTN patients exhibited a further reduction in connectivity between the left insular lobe and the left occipital pole. Compared with controls, TN patients had heightened neural activity in frontal regions, and CTN patients showed reduced activity in the right temporal pole compared with ITN patients. These patterns effectively distinguished TN patients from controls, with an accuracy of 74.19% and an area under the receiver operating characteristic curve of 0.80. This study revealed alterations in rs-fMRI metrics in TN patients compared with controls and is the first to show differences between CTN and ITN. The support vector machine model built on rs-fMRI indices exhibited moderate performance in discriminating TN patients from controls. These findings reveal potential biomarkers for TN and its subtypes that can support further investigation of the pathophysiology of the disease.
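
As an illustration only (not the authors' code), the sketch below shows the kind of pipeline the abstract describes: a linear support vector machine trained on subject-level rs-fMRI indices and evaluated with cross-validated accuracy and AUC. The feature matrix is a random placeholder; only the subject counts follow the abstract.

import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_validate, StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(93, 20))          # 93 subjects, 20 hypothetical rs-fMRI features
y = np.array([1] * 50 + [0] * 43)      # 50 TN patients, 43 pain-free controls

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_validate(clf, X, y, cv=cv, scoring=["accuracy", "roc_auc"])
print(f"accuracy: {scores['test_accuracy'].mean():.3f}")
print(f"AUC:      {scores['test_roc_auc'].mean():.3f}")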

Preoperative MRI-based deep learning reconstruction and classification model for assessing rectal cancer.

Yuan Y, Ren S, Lu H, Chen F, Xiang L, Chamberlain R, Shao C, Lu J, Shen F, Chen L

PubMed | Jul 1, 2025
To determine whether deep learning reconstruction (DLR) could improve the image quality of rectal MR images, and to explore discrimination of the TN stage of rectal cancer by different readers and by deep learning classification models, compared with conventional MR images without DLR. Images of high-resolution T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), and contrast-enhanced T1-weighted imaging (CE-T1WI) from patients with pathologically diagnosed rectal cancer were retrospectively processed with and without DLR and assessed by five readers. The first two readers measured the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of the lesions. The overall image quality and lesion display performance of each sequence with and without DLR were independently scored on a five-point scale, and the TN stage of rectal cancer lesions was evaluated by the other three readers. Fifty patients were randomly selected for a further comparison between DLR and a traditional denoising filter. Deep learning classification models were developed and compared for the TN stage. Receiver operating characteristic (ROC) curve analysis and decision curve analysis (DCA) were used to evaluate the diagnostic performance of the proposed model. Overall, 178 patients were evaluated. The SNR and CNR of the lesions on images with DLR were significantly higher than those without DLR for T2WI, DWI, and CE-T1WI (p < 0.0001). A significant difference was also observed in overall image quality and lesion display performance between images with and without DLR (p < 0.0001). For all three sequences, the image quality scores, SNR, and CNR of the DLR image set were significantly higher than those of the original and filter-enhanced image sets (all p < 0.05). The deep learning classification models with DLR achieved good discrimination of the TN stage, with area under the curve (AUC) values of 0.937 (95% CI 0.839-0.977) and 0.824 (95% CI 0.684-0.913) in the test sets, respectively. Deep learning reconstruction and classification models can improve the image quality of rectal MR images and enhance diagnostic performance for determining the TN stage of patients with rectal cancer.
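
For readers unfamiliar with decision curve analysis (DCA), the snippet below computes the net benefit of a hypothetical classifier across threshold probabilities, the quantity DCA plots against "treat all" and "treat none" strategies. The labels and predicted probabilities are synthetic and do not come from the study.

import numpy as np

def net_benefit(y_true, y_prob, thresholds):
    """Net benefit = TP/n - (FP/n) * t/(1-t) at each threshold t."""
    n = len(y_true)
    out = []
    for t in thresholds:
        pred = y_prob >= t
        tp = np.sum(pred & (y_true == 1))
        fp = np.sum(pred & (y_true == 0))
        out.append(tp / n - fp / n * t / (1 - t))
    return np.array(out)

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=200)                                  # synthetic outcomes
p = np.clip(y * 0.6 + rng.normal(0.2, 0.2, size=200), 0.01, 0.99)  # synthetic probabilities

ths = np.linspace(0.05, 0.95, 19)
nb_model = net_benefit(y, p, ths)
nb_all = np.array([y.mean() - (1 - y.mean()) * t / (1 - t) for t in ths])  # "treat all" line
for t, m, a in zip(ths, nb_model, nb_all):
    print(f"threshold {t:.2f}: model {m:+.3f}, treat-all {a:+.3f}")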

Accelerating brain T2-weighted imaging using artificial intelligence-assisted compressed sensing combined with deep learning-based reconstruction: a feasibility study at 5.0T MRI.

Wen Y, Ma H, Xiang S, Feng Z, Guan C, Li X

PubMed | Jul 1, 2025
T2-weighted imaging (T2WI), renowned for its sensitivity to edema and lesions, faces clinical limitations due to prolonged scanning times, which increase patient discomfort and motion artifacts. The individual applications of artificial intelligence-assisted compressed sensing (ACS) and deep learning-based reconstruction (DLR) have demonstrated effectiveness for accelerated scanning, but the synergistic potential of ACS combined with DLR at 5.0T remains unexplored. This study systematically evaluates the diagnostic efficacy of the integrated ACS-DLR technique for T2WI at 5.0T, comparing it with a conventional parallel imaging (PI) protocol. The prospective analysis included 98 participants who underwent brain T2WI scans using the ACS, DLR, and PI techniques. Two observers evaluated overall image quality, truncation artifacts, motion artifacts, cerebrospinal fluid flow artifacts, vascular pulsation artifacts, and lesion conspicuity, and subjective rating differences among the three sequences were compared. Objective assessment involved the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) in gray matter, white matter, and cerebrospinal fluid for each sequence; the SNR, CNR, and acquisition time of each sequence were compared. The acquisition time for ACS and DLR was reduced by 78%. The overall image quality of DLR was higher than that of ACS (P < 0.001) and equivalent to PI (P > 0.05). The SNR of the DLR sequence was the highest, and the CNR of DLR was higher than that of the ACS sequence (P < 0.001) and equivalent to PI (P > 0.05). The integration of ACS and DLR enables ultrafast acquisition of brain T2WI while maintaining superior SNR and comparable CNR relative to PI sequences.
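
As a rough sketch of how objective metrics like those above are commonly computed (a convention assumed here, not taken from the paper), SNR and CNR can be derived from ROI statistics; the voxel values below are simulated.

import numpy as np

def snr(roi_signal: np.ndarray, roi_noise: np.ndarray) -> float:
    # SNR = mean signal in a tissue ROI / standard deviation of a noise ROI
    return float(roi_signal.mean() / roi_noise.std())

def cnr(roi_a: np.ndarray, roi_b: np.ndarray, roi_noise: np.ndarray) -> float:
    # CNR = absolute difference of two tissue means / standard deviation of noise
    return float(abs(roi_a.mean() - roi_b.mean()) / roi_noise.std())

rng = np.random.default_rng(2)
gray  = rng.normal(600, 25, size=500)   # hypothetical gray-matter ROI voxels
white = rng.normal(450, 25, size=500)   # hypothetical white-matter ROI voxels
noise = rng.normal(0, 12, size=500)     # background noise ROI

print(f"SNR (gray matter):   {snr(gray, noise):.1f}")
print(f"CNR (gray vs white): {cnr(gray, white, noise):.1f}")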

Multimodal deep learning-based radiomics for meningioma consistency prediction: integrating T1 and T2 MRI in a multi-center study.

Lin H, Yue Y, Xie L, Chen B, Li W, Yang F, Zhang Q, Chen H

PubMed | Jul 1, 2025
Meningioma consistency critically impacts surgical planning, as soft tumors are easier to resect than hard tumors, but current MRI-based assessments of tumor consistency are subjective and lack quantitative accuracy. Integrating deep learning and radiomics could enhance the predictive accuracy of meningioma consistency assessment. A retrospective study analyzed 204 meningioma patients from two centers: the Second Affiliated Hospital of Guangzhou Medical University and the Southern Theater Command Hospital PLA. Three models were developed: a radiomics model (Rad_Model), a deep learning model (DL_Model), and a combined model (DLR_Model). Model performance was evaluated using AUC, accuracy, sensitivity, specificity, and precision. The DLR_Model outperformed the other models across all cohorts. In the training set, it achieved an AUC of 0.957, an accuracy of 0.908, and a precision of 0.965. In the external test cohort, it maintained superior performance with an AUC of 0.854, an accuracy of 0.778, and a precision of 0.893, surpassing both the Rad_Model (AUC = 0.768) and the DL_Model (AUC = 0.720). Combining radiomics and deep learning features improved predictive performance and robustness. This study introduced and evaluated a deep learning radiomics model (DLR_Model) that accurately predicts the consistency of meningiomas and has the potential to improve preoperative assessment and surgical planning.
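
A minimal sketch of the fusion idea behind a combined radiomics plus deep learning model, assuming a simple late-fusion design: handcrafted radiomics features are concatenated with deep-network features and a single classifier is fit. Feature matrices and labels are random placeholders, and logistic regression stands in for whatever classifier the authors used.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(3)
radiomics = rng.normal(size=(204, 107))   # e.g. shape/intensity/texture features
deep      = rng.normal(size=(204, 512))   # e.g. CNN embedding per patient
y         = rng.integers(0, 2, size=204)  # 0 = soft, 1 = hard consistency (placeholder)

X = np.hstack([radiomics, deep])          # late fusion by concatenation
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3, stratify=y)
clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
prob = clf.predict_proba(X_te)[:, 1]
print(f"AUC: {roc_auc_score(y_te, prob):.3f}  accuracy: {accuracy_score(y_te, prob > 0.5):.3f}")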

Hybrid model integration with explainable AI for brain tumor diagnosis: a unified approach to MRI analysis and prediction.

Vamsidhar D, Desai P, Joshi S, Kolhar S, Deshpande N, Gite S

PubMed | Jul 1, 2025
Effective treatment of brain tumors, a critical health condition, relies on accurate detection, and medical imaging plays a pivotal role in improving tumor detection and diagnosis at an early stage. This study presents two approaches to the tumor detection problem in the healthcare domain. The first approach combines image processing, a vision transformer (ViT), and machine learning algorithms to analyze medical images. The second is a parallel model integration technique, in which two pre-trained deep learning models, ResNet101 and Xception, are integrated, followed by local interpretable model-agnostic explanations (LIME) to explain the model. The results show an accuracy of 98.17% for the combination of vision transformer, random forest, and contrast-limited adaptive histogram equalization, and 99.67% for the parallel model integration (ResNet101 and Xception). Based on these results, this paper proposes the parallel model integration technique as the most effective method. Future work aims to extend the model to multi-class classification for tumor type detection and to improve model generalization for broader applicability.
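
The sketch below illustrates one plausible reading of "parallel model integration", assuming feature-level fusion: two ImageNet-pretrained backbones (ResNet101 and Xception) process the same input and their pooled features are concatenated before a small classification head. It is not the authors' implementation, and per-backbone preprocessing is omitted for brevity.

from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet101, Xception

inp = layers.Input(shape=(299, 299, 3))
res = ResNet101(include_top=False, weights="imagenet", pooling="avg")
xcp = Xception(include_top=False, weights="imagenet", pooling="avg")
res.trainable = False      # keep both sets of pre-trained weights frozen
xcp.trainable = False

# In practice each backbone would get its own preprocess_input; omitted here.
merged = layers.Concatenate()([res(inp), xcp(inp)])
out = layers.Dense(1, activation="sigmoid")(layers.Dropout(0.3)(merged))

model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()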

Radiomics analysis based on dynamic contrast-enhanced MRI for predicting early recurrence after hepatectomy in hepatocellular carcinoma patients.

Wang KD, Guan MJ, Bao ZY, Shi ZJ, Tong HH, Xiao ZQ, Liang L, Liu JW, Shen GL

PubMed | Jul 1, 2025
This study aimed to develop a machine learning model based on magnetic resonance imaging (MRI) radiomics for predicting early recurrence after curative surgery in patients with hepatocellular carcinoma (HCC). A retrospective analysis was conducted on 200 patients with HCC who underwent curative hepatectomy. Patients were randomly allocated to training (n = 140) and validation (n = 60) cohorts. Preoperative arterial, portal venous, and delayed phase images were acquired. Tumor regions of interest (ROIs) were manually delineated, with an additional ROI obtained by expanding the tumor boundary by 5 mm. Radiomic features were extracted and selected using the Least Absolute Shrinkage and Selection Operator (LASSO). Multiple machine learning algorithms were employed to develop predictive models, and model performance was evaluated using receiver operating characteristic (ROC) curves, decision curve analysis, and calibration curves. The 20 most discriminative radiomic features were integrated with tumor size and satellite nodules for model development. In the validation cohort, the clinical-peritumoral radiomics model demonstrated superior predictive accuracy (AUC = 0.85, 95% CI: 0.74-0.95) compared with the clinical-intratumoral radiomics model (AUC = 0.82, 95% CI: 0.68-0.93) and the radiomics-only model (AUC = 0.82, 95% CI: 0.69-0.93). Furthermore, calibration curves and decision curve analyses indicated superior calibration and clinical benefit for this model. The MRI-based peritumoral radiomics model demonstrates significant potential for predicting early recurrence of HCC.
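
A hedged sketch of the LASSO selection step named above: an L1-penalized logistic regression zeroes out most coefficients, keeping a sparse subset of radiomic features for a downstream recurrence model. The feature matrix is random, and the feature count and exact LASSO variant are assumptions, not taken from the paper.

import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 851))                    # 200 patients x hypothetical radiomic features
signal = X[:, :5].sum(axis=1)                      # make a few features informative
y = (signal + rng.normal(scale=1.0, size=200) > 0).astype(int)  # early recurrence yes/no

lasso = make_pipeline(
    StandardScaler(),
    LogisticRegressionCV(Cs=10, penalty="l1", solver="liblinear", cv=5, max_iter=5000),
)
lasso.fit(X, y)
coef = lasso.named_steps["logisticregressioncv"].coef_.ravel()
selected = np.flatnonzero(coef)                    # indices of features with nonzero weight
print(f"{selected.size} features retained out of {X.shape[1]}")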

Brain structural features with functional priori to classify Parkinson's disease and multiple system atrophy using diagnostic MRI.

Zhou K, Li J, Huang R, Yu J, Li R, Liao W, Lu F, Hu X, Chen H, Gao Q

PubMed | Jul 1, 2025
Clinical two-dimensional (2D) MRI data have seen limited application in the early diagnosis of Parkinson's disease (PD) and multiple system atrophy (MSA) due to quality limitations, yet their diagnostic and therapeutic potential remains underexplored. This study presents a novel machine learning framework that uses reconstructed clinical images to accurately distinguish PD from MSA and to identify disease-specific neuroimaging biomarkers. The structure constrained super-resolution network (SCSRN) algorithm was employed to reconstruct clinical 2D MRI data for 56 PD and 58 MSA patients. Features were derived from a functional template, and hierarchical SHAP-based feature selection improved model accuracy and interpretability. In the test set, the Extra Trees and logistic regression models based on the functional template demonstrated an improved accuracy of 95.65% and an AUC of 0.99. The positive and negative impacts of the various features in predicting PD and MSA were clarified, with larger fourth-ventricle and smaller brainstem volumes being the most significant. The proposed framework provides new insights into the comprehensive use of clinical 2D MRI to explore neuroimaging biomarkers that distinguish PD from MSA, highlighting disease-specific alterations in brain morphology observed in these conditions.
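
A hedged sketch of the feature-ranking idea described above: an Extra Trees classifier on regional structural features, with SHAP values used to rank feature importance. Features are synthetic placeholders; the functional-template parcellation, the SCSRN reconstruction, and the hierarchical selection scheme from the paper are not reproduced.

import numpy as np
import shap
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(5)
X = rng.normal(size=(114, 30))          # 56 PD + 58 MSA subjects, 30 hypothetical regional features
y = np.array([0] * 56 + [1] * 58)       # 0 = PD, 1 = MSA

model = ExtraTreesClassifier(n_estimators=300, random_state=5).fit(X, y)

sv = shap.TreeExplainer(model).shap_values(X)
sv = sv[1] if isinstance(sv, list) else sv[..., 1]   # SHAP output layout differs across versions
ranking = np.argsort(np.abs(sv).mean(axis=0))[::-1]  # rank features by mean |SHAP|
print("top features by mean |SHAP|:", ranking[:5])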

Machine learning for Parkinson's disease: a comprehensive review of datasets, algorithms, and challenges.

Shokrpour S, MoghadamFarid A, Bazzaz Abkenar S, Haghi Kashani M, Akbari M, Sarvizadeh M

PubMed | Jul 1, 2025
Parkinson's disease (PD) is a devastating neurological ailment affecting both mobility and cognitive function, posing considerable health problems for the elderly worldwide. The absence of a conclusive treatment underscores the need to investigate cutting-edge diagnostic techniques to improve patient outcomes. Machine learning (ML) has the potential to revolutionize PD detection by applying large repositories of structured data to enhance diagnostic accuracy. A total of 133 papers published between 2021 and April 2024 were reviewed using a systematic literature review (SLR) methodology and subsequently classified into five categories: acoustic data, biomarkers, medical imaging, movement data, and multimodal datasets. This comprehensive analysis offers valuable insights into the applications of ML in PD diagnosis. Our SLR identifies the datasets and ML algorithms used for PD diagnosis, along with their merits, limitations, and evaluation factors, and discusses challenges, future directions, and outstanding issues.

Hybrid transfer learning and self-attention framework for robust MRI-based brain tumor classification.

Panigrahi S, Adhikary DRD, Pattanayak BK

PubMed | Jul 1, 2025
Brain tumors are a significant contributor to cancer-related deaths worldwide. Accurate and prompt detection is crucial to reduce mortality rates and improve patient survival prospects. Magnetic Resonance Imaging (MRI) is crucial for diagnosis, but manual analysis is resource-intensive and error-prone, highlighting the need for robust Computer-Aided Diagnosis (CAD) systems. This paper proposes a novel hybrid model combining Transfer Learning (TL) and attention mechanisms to enhance brain tumor classification accuracy. Leveraging features from the pre-trained DenseNet201 Convolutional Neural Network (CNN) and integrating a Transformer-based architecture, our approach addresses challenges such as computational intensity, detail detection, and noise sensitivity. We also evaluated five additional pre-trained models (VGG19, InceptionV3, Xception, MobileNetV2, and ResNet50V2) and incorporated Multi-Head Self-Attention (MHSA) and Squeeze-and-Excitation Attention (SEA) blocks individually to improve feature representation. Using the Br35H dataset of 3,000 MRI images, our proposed DenseTransformer model achieved a consistent accuracy of 99.41%, demonstrating its reliability as a diagnostic tool. Statistical analysis using a Z-test on Cohen's kappa score, DeLong's test on the AUC, and McNemar's test on the F1-score confirmed the model's reliability. Additionally, Explainable AI (XAI) techniques such as Gradient-weighted Class Activation Mapping (Grad-CAM) and Local Interpretable Model-agnostic Explanations (LIME) enhanced model transparency and interpretability. This study underscores the potential of hybrid Deep Learning (DL) models in advancing brain tumor diagnosis and improving patient outcomes.
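
A compact sketch, under stated assumptions rather than the paper's exact DenseTransformer, of pairing a frozen DenseNet201 backbone with a multi-head self-attention block over its spatial feature map before the classification head.

from tensorflow.keras import layers, Model
from tensorflow.keras.applications import DenseNet201

inp = layers.Input(shape=(224, 224, 3))
backbone = DenseNet201(include_top=False, weights="imagenet")
backbone.trainable = False                         # frozen transfer-learning backbone

fmap = backbone(inp)                               # (7, 7, 1920) spatial feature map
seq = layers.Reshape((-1, 1920))(fmap)             # flatten the spatial grid into a token sequence
attn = layers.MultiHeadAttention(num_heads=4, key_dim=64)(seq, seq)
seq = layers.LayerNormalization()(layers.Add()([seq, attn]))   # residual + norm, transformer-style
pooled = layers.GlobalAveragePooling1D()(seq)
out = layers.Dense(1, activation="sigmoid")(layers.Dropout(0.3)(pooled))

model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()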

Synergizing advanced algorithm of explainable artificial intelligence with hybrid model for enhanced brain tumor detection in healthcare.

Lamba K, Rani S, Shabaz M

PubMed | Jul 1, 2025
Brain tumors have life-threatening consequences, so timely detection and accurate classification are critical for determining appropriate treatment plans and improving patient outcomes. However, conventional approaches to brain tumor diagnosis, such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) scans, are often labor-intensive, prone to human error, and heavily reliant on the expertise of radiologists. In recent years, the integration of advanced techniques such as Machine Learning (ML) and Deep Learning (DL) has transformed the healthcare sector through their ability to analyze medical images, demonstrating great potential for accurate and improved outcomes; however, their black-box nature raises concerns about trustworthiness, interpretability, and transparency in clinical settings, because understanding the reasoning behind their predictions remains a major challenge for healthcare professionals. To address this, an explainable hybrid framework is proposed that synergizes an advanced explainable artificial intelligence (XAI) algorithm with a hybrid model: a DenseNet201 network extracts the most important features from the input MRI data, and a supervised Support Vector Machine (SVM) classifier performs robust binary classification of brain scans. A region-adaptive preprocessing pipeline is used to enhance tumor visibility and feature clarity. To address the need for interpretability, multiple XAI techniques, namely Grad-CAM, Integrated Gradients (IG), and Layer-wise Relevance Propagation (LRP), have been incorporated. Our comparative evaluation shows that LRP achieves the highest performance across all explainability metrics, with 98.64% accuracy, a 0.74 F1-score, and a 0.78 IoU. The proposed model provides transparent and highly accurate diagnostic predictions, offering a reliable clinical decision support tool. It achieves 0.9801 accuracy, 0.9223 sensitivity, 0.9909 specificity, 0.9154 precision, and a 0.9360 F1-score, demonstrating strong potential for real-world brain tumor diagnosis and personalized treatment strategies.
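
To make the DenseNet201 + SVM hybrid concrete, here is a minimal sketch under stated assumptions: globally pooled DenseNet201 features are extracted for a batch of randomly generated stand-in images and passed to an RBF-kernel SVM for binary classification. The region-adaptive preprocessing and XAI steps from the paper are not reproduced.

import numpy as np
from tensorflow.keras.applications import DenseNet201
from tensorflow.keras.applications.densenet import preprocess_input
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Frozen backbone used purely as a feature extractor (global average pooling -> 1920-dim vectors).
extractor = DenseNet201(include_top=False, weights="imagenet", pooling="avg")

rng = np.random.default_rng(6)
images = rng.uniform(0, 255, size=(40, 224, 224, 3)).astype("float32")  # stand-in MRI slices
labels = rng.integers(0, 2, size=40)                                    # 0 = no tumor, 1 = tumor

features = extractor.predict(preprocess_input(images), verbose=0)       # (40, 1920)
X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.25, random_state=6)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"held-out accuracy: {accuracy_score(y_te, svm.predict(X_te)):.2f}")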