The performance of artificial intelligence in image-based prediction of hematoma enlargement: a systematic review and meta-analysis.

Fan W, Wu Z, Zhao W, Jia L, Li S, Wei W, Chen X

PubMed · Dec 1 2025
Accurately predicting hematoma enlargement (HE) is crucial for improving the prognosis of patients with cerebral haemorrhage. Artificial intelligence (AI) is a potentially reliable assistant for medical image recognition. This study systematically reviews medical imaging articles on the predictive performance of AI in HE. We retrieved relevant studies published before October 2024 from the Embase, Institute of Electrical and Electronics Engineers (IEEE), PubMed, Web of Science, and Cochrane Library databases. Eligible studies were diagnostic-test studies of AI models trained on CT images to predict hematoma enlargement that reported 2 × 2 contingency tables or provided sensitivity (SE) and specificity (SP) for calculation. Two reviewers independently screened the retrieved citations and extracted data. The methodological quality of the studies was assessed using QUADAS-AI, and the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guideline was used to ensure standardised reporting. Subgroup analyses were performed based on sample size, risk of bias, year of publication, ratio of training set to test set, and number of centres involved. Thirty-six articles were included in the qualitative analysis of this systematic review, of which 23 provided sufficient information for quantitative analysis. Among these articles, 7 used deep learning (DL) and 16 used machine learning (ML). The pooled SE and SP of ML were 78% (95% CI: 69-85%) and 85% (78-90%), respectively, with an AUC of 0.89 (0.86-0.91). The pooled SE and SP of DL were 87% (95% CI: 80-92%) and 75% (67-81%), respectively, with an AUC of 0.88 (0.85-0.91). The subgroup analysis found that when the ratio of the training set to the test set was 7:3, sensitivity was 0.77 (0.62-0.91), p = 0.03. In terms of specificity, the group with a sample size of more than 200 had higher specificity, 0.83 (0.75-0.92), p = 0.02; among the risk-of-bias groups in study design, the at-risk group had higher specificity, 0.83 (0.76-0.89), p = 0.02. Articles published before 2021 had higher specificity, 0.84 (0.77-0.90), and data from single research centres had higher specificity, 0.85 (0.80-0.91), p < 0.001. Imaging-based artificial intelligence algorithms have shown good performance in predicting HE.
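
A minimal sketch of the per-study quantities pooled in this meta-analysis: deriving sensitivity and specificity from a 2 × 2 contingency table. The counts below are hypothetical, not taken from any included study.

```python
# Minimal sketch (hypothetical counts, not from the review): sensitivity and
# specificity from one study's 2x2 contingency table.
def se_sp_from_2x2(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    """Return (sensitivity, specificity) for a single diagnostic study."""
    sensitivity = tp / (tp + fn)   # true-positive rate among patients with HE
    specificity = tn / (tn + fp)   # true-negative rate among patients without HE
    return sensitivity, specificity

# Hypothetical example: 47 TP, 9 FP, 13 FN, 51 TN
se, sp = se_sp_from_2x2(47, 9, 13, 51)
print(f"SE = {se:.2f}, SP = {sp:.2f}")
```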

Nonsuicidal self-injury prediction with pain-processing neural circuits using interpretable graph neural network.

Wu S, Xue Y, Hang Y, Xie Y, Zhang P, Liang M, Zhong Y, Wang C

PubMed · Dec 1 2025
Nonsuicidal self-injury (NSSI) involves the intentional destruction of one's own body tissues without suicidal intent. Prior research has shown that individuals with NSSI exhibit abnormal pain perception; however, the pain-processing neural circuits underlying NSSI remain poorly understood. This study leverages graph neural networks to predict NSSI risk and examine the learned connectivity of neural underpinnings using multimodal data. Resting-state functional MRI and diffusion tensor imaging were collected from 50 patients with NSSI, 79 healthy controls (HC), and 44 patients with mental disorders who did not engage in NSSI, serving as disease controls (DC). We constructed pain-related brain networks for each participant. An interpretable graph attention network (GAT) model was developed, considering demographic factors, to predict NSSI risk and highlight NSSI-specific connectivity using learned attention matrices. The proposed GAT model based on imaging data achieved an accuracy of 80%, which increased to 88% when self-reported pain scales were incorporated alongside imaging data in distinguishing patients with NSSI from HC. It highlighted amygdala-parahippocampus and inferior frontal gyrus (IFG)-insula connectivity as pivotal in NSSI-related pain processing. After incorporating imaging data from the DC group, the model's accuracy reached 74%, underscoring consistent neural connectivity patterns. The GAT model demonstrates high predictive accuracy for NSSI, enhanced by including self-reported pain scales. Our proposed GAT model underscores the significance of the functional integration of limbic regions, paralimbic regions, and the IFG in NSSI pain processing. Our findings suggest altered pain processing as a key mechanism in NSSI, providing insights for potential neural modulation intervention strategies.
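
A minimal sketch, not the authors' code, of the kind of interpretable graph attention model described here: two GAT layers over a brain network whose nodes are ROIs, with the learned edge attention weights exposed for inspection. Layer sizes and the PyTorch Geometric usage are assumptions.

```python
# Minimal sketch (assumed architecture): interpretable GAT over a pain-related
# brain network; attention coefficients (alpha) can be examined per edge.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv, global_mean_pool

class PainGAT(torch.nn.Module):
    def __init__(self, in_dim: int, hidden: int = 32, n_classes: int = 2):
        super().__init__()
        self.gat1 = GATConv(in_dim, hidden, heads=4, concat=True)
        self.gat2 = GATConv(hidden * 4, hidden, heads=1, concat=False)
        self.head = torch.nn.Linear(hidden, n_classes)

    def forward(self, x, edge_index, batch):
        x = F.elu(self.gat1(x, edge_index))
        # return_attention_weights exposes the learned edge attention,
        # which an interpretability analysis would inspect.
        x, (att_edges, alpha) = self.gat2(x, edge_index,
                                          return_attention_weights=True)
        logits = self.head(global_mean_pool(x, batch))
        return logits, (att_edges, alpha)
```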

Cerebral ischemia detection using deep learning techniques.

Pastor-Vargas R, Antón-Munárriz C, Haut JM, Robles-Gómez A, Paoletti ME, Benítez-Andrades JA

PubMed · Dec 1 2025
Cerebrovascular accident (CVA), commonly known as stroke, stands as a significant contributor to contemporary mortality and morbidity rates, often leading to lasting disabilities. Early identification is crucial in mitigating its impact and reducing mortality. Non-contrast computed tomography (NCCT) remains the primary diagnostic tool in stroke emergencies due to its speed, accessibility, and cost-effectiveness. NCCT enables the exclusion of hemorrhage and directs attention to ischemic causes resulting from arterial flow obstruction. Quantification of NCCT findings employs the Alberta Stroke Program Early Computed Tomography Score (ASPECTS), which evaluates affected brain structures. This study seeks to identify early alterations in NCCT density in patients with stroke symptoms using a binary classifier distinguishing NCCT scans with and without stroke. To achieve this, 3D variants of well-known deep learning architectures validated in the ImageNet challenges (VGG3D, ResNet3D, and DenseNet3D) are implemented on 3D images covering the entire brain volume. The training results of these networks are presented, with various hyperparameters examined for optimal performance. The DenseNet3D network emerges as the most effective model, attaining a training set accuracy of 98% and a test set accuracy of 95%. The aim is to alert medical professionals to potential stroke cases in their early stages based on NCCT findings displaying altered density patterns.
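
A minimal sketch, as a simplified stand-in rather than the paper's DenseNet3D, showing how a 3D convolutional binary classifier consumes a whole-brain NCCT volume. Input shape and layer widths are assumptions.

```python
# Minimal sketch (simplified stand-in): a small 3D CNN binary classifier over a
# whole-brain NCCT volume shaped (channels, depth, height, width).
import torch
import torch.nn as nn

class Tiny3DClassifier(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),          # global average pool over the volume
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

logits = Tiny3DClassifier()(torch.randn(1, 1, 64, 64, 64))  # one synthetic scan
```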

Convolutional autoencoder-based deep learning for intracerebral hemorrhage classification using brain CT images.

Nageswara Rao B, Acharya UR, Tan RS, Dash P, Mohapatra M, Sabut S

PubMed · Dec 1 2025
Intracerebral haemorrhage (ICH) is a common form of stroke that affects millions of people worldwide. Its incidence is associated with high rates of mortality and morbidity. Accurate diagnosis using brain non-contrast computed tomography (NCCT) is crucial for decision-making on potentially life-saving surgery. Limited access to expert readers and inter-observer variability impose barriers to timely and accurate ICH diagnosis. We proposed a hybrid deep learning model for automated ICH diagnosis using NCCT images, which comprises a convolutional autoencoder (CAE) to extract features with reduced data dimensionality and a dense neural network (DNN) for classification. To ensure that the model generalizes to new data, we trained it using tenfold cross-validation and holdout methods. Principal component analysis (PCA)-based dimensionality reduction and classification was systematically implemented for comparison. The study dataset comprises 1645 labelled images in the "ICH" class and 1648 in the "Normal" class (from patients with non-hemorrhagic stroke), obtained from 108 patients who had undergone CT examination on a 64-slice computed tomography scanner at Kalinga Institute of Medical Sciences between 2020 and 2023. Our developed CAE-DNN hybrid model attained 99.84% accuracy, 99.69% sensitivity, 100% specificity, 100% precision, and 99.84% F1-score, which outperformed the comparator PCA-DNN model as well as published results in the literature. In addition, using saliency maps, our CAE-DNN model can highlight areas on the images that are closely correlated with regions of ICH that have been manually contoured by expert readers. The CAE-DNN model demonstrates proof-of-concept for accurate ICH detection and localization, which can potentially be implemented to prioritize treatment using NCCT images in clinical settings.
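
A minimal sketch, not the published model, of the CAE-DNN idea: a convolutional encoder compresses the NCCT image into a low-dimensional latent vector, a decoder reconstructs the image (the autoencoder objective), and a small dense network classifies the latent vector as ICH or normal. Image size and latent dimension are assumptions.

```python
# Minimal sketch (assumed shapes): convolutional autoencoder + dense classifier.
import torch
import torch.nn as nn

class CAEClassifier(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(                                  # 1 x 128 x 128 input
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),       # -> 16 x 64 x 64
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),      # -> 32 x 32 x 32
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, latent_dim),
        )
        self.decoder = nn.Sequential(                                  # reconstruction branch
            nn.Linear(latent_dim, 32 * 32 * 32), nn.ReLU(),
            nn.Unflatten(1, (32, 32, 32)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        self.dnn = nn.Sequential(                                      # dense classification head
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 2),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.dnn(z)   # reconstruction + ICH/normal logits
```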

Aphasia severity prediction using a multi-modal machine learning approach.

Hu X, Varkanitsa M, Kropp E, Betke M, Ishwar P, Kiran S

PubMed · Aug 15 2025
The present study examined integrated multimodal neuroimaging (structural T1, diffusion tensor imaging (DTI), and resting-state fMRI (rsfMRI)) to predict aphasia severity, measured by the Western Aphasia Battery-Revised Aphasia Quotient (WAB-R AQ), in 76 individuals with post-stroke aphasia. We employed Support Vector Regression (SVR) and Random Forest (RF) models with supervised feature selection and a stacked feature prediction approach. The SVR model outperformed RF, achieving an average root mean square error (RMSE) of 16.38±5.57, Pearson's correlation coefficient (r) of 0.70±0.13, and mean absolute error (MAE) of 12.67±3.27, compared to RF's RMSE of 18.41±4.34, r of 0.66±0.15, and MAE of 14.64±3.04. Resting-state neural activity and structural integrity emerged as crucial predictors of aphasia severity, appearing in the top 20% of predictor combinations for both SVR and RF. Finally, the feature selection method revealed that functional connectivity in both hemispheres and between homologous language areas is critical for predicting language outcomes in patients with aphasia. The statistically significant difference in performance between the model using only a single modality and the optimal multi-modal SVR/RF model (which included both resting-state connectivity and structural information) underscores that aphasia severity is influenced by factors beyond lesion location and volume. These findings suggest that integrating multiple neuroimaging modalities enhances the prediction of language outcomes in aphasia beyond lesion characteristics alone, offering insights that could inform personalized rehabilitation strategies.
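
A minimal sketch, with a hypothetical feature matrix rather than the study's multimodal features, of an SVR pipeline with supervised feature selection evaluated by cross-validated RMSE, MAE, and Pearson's r against WAB-R AQ scores.

```python
# Minimal sketch (synthetic data): SVR with univariate feature selection,
# scored by cross-validated RMSE, MAE, and Pearson's r.
import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(76, 500))          # 76 patients x placeholder imaging features
y = rng.uniform(0, 100, size=76)        # WAB-R AQ (0-100), placeholder values

model = make_pipeline(StandardScaler(),
                      SelectKBest(f_regression, k=50),
                      SVR(kernel="rbf", C=10.0))
pred = cross_val_predict(model, X, y,
                         cv=KFold(n_splits=5, shuffle=True, random_state=0))

rmse = np.sqrt(mean_squared_error(y, pred))
mae = mean_absolute_error(y, pred)
r, _ = pearsonr(y, pred)
print(f"RMSE={rmse:.2f}  MAE={mae:.2f}  r={r:.2f}")
```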

Machine learning based differential diagnosis of schizophrenia, major depression disorder and bipolar disorder using structural magnetic resonance imaging.

Cao P, Li R, Li Y, Dong Y, Tang Y, Xu G, Si Q, Chen C, Chen L, Liu W, Yao Y, Sui Y, Zhang J

PubMed · Aug 15 2025
Cortical morphological abnormalities in schizophrenia (SCZ), major depressive disorder (MDD), and bipolar disorder (BD) have been identified in past research. However, their potential as objective biomarkers to differentiate these disorders remains uncertain. Machine learning models may offer a novel diagnostic tool. Structural MRI (sMRI) scans of 220 patients with SCZ, 220 with MDD, 220 with BD, and 220 healthy controls were obtained using a 3T scanner. Volume, thickness, surface area, and mean curvature of 68 cerebral cortices were extracted using FreeSurfer. A total of 272 features underwent 3 feature selection techniques to isolate important variables for model construction. These features were incorporated into 3 classifiers for classification. After model evaluation and hyperparameter tuning, the best-performing model was identified, along with the most significant brain measures. The univariate feature selection-Naive Bayes model achieved the best performance, with an accuracy of 0.66, a macro-average AUC of 0.86, sensitivities ranging from 0.58 to 0.86, and specificities ranging from 0.81 to 0.93. Key features included thickness of the right isthmus-cingulate cortex, area of the left inferior temporal gyrus, thickness of the right superior temporal gyrus, mean curvature of the right pars orbitalis, thickness of the left transverse temporal cortex, volume of the left caudal anterior-cingulate cortex, area of the right banks of the superior temporal sulcus, and thickness of the right temporal pole. The machine learning model based on sMRI data shows promise for aiding in the differential diagnosis of SCZ, MDD, and BD. Cortical features from the cingulate and temporal lobes may highlight distinct biological mechanisms underlying each disorder.
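
A minimal sketch, on synthetic data rather than the study's cohort, of the best-performing combination described above: univariate feature selection followed by a Gaussian Naive Bayes classifier over 272 FreeSurfer-derived cortical measures for the four diagnostic groups.

```python
# Minimal sketch (synthetic data): univariate feature selection + Gaussian
# Naive Bayes for a four-class problem (SCZ / MDD / BD / HC).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(880, 272))     # 4 groups x 220 subjects x 272 cortical features
y = np.repeat([0, 1, 2, 3], 220)    # SCZ, MDD, BD, HC labels

clf = make_pipeline(SelectKBest(f_classif, k=40), GaussianNB())
acc = cross_val_score(clf, X, y,
                      cv=StratifiedKFold(5, shuffle=True, random_state=0))
print(f"mean accuracy: {acc.mean():.2f}")
```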

A software ecosystem for brain tractometry processing, analysis, and insight.

Kruper J, Richie-Halford A, Qiao J, Gilmore A, Chang K, Grotheer M, Roy E, Caffarra S, Gomez T, Chou S, Cieslak M, Koudoro S, Garyfallidis E, Satterthwaite TD, Yeatman JD, Rokem A

PubMed · Aug 14 2025
Tractometry uses diffusion-weighted magnetic resonance imaging (dMRI) to assess physical properties of brain connections. Here, we present an integrative ecosystem of software that performs all steps of tractometry: post-processing of dMRI data, delineation of major white matter pathways, and modeling of the tissue properties within them. This ecosystem also provides a set of interoperable and extensible tools for visualization and interpretation of the results that extract insights from these measurements. These include novel machine learning and statistical analysis methods adapted to the characteristic structure of tract-based data. We benchmark the performance of these statistical analysis methods in different datasets and analysis tasks, including hypothesis testing on group differences and predictive analysis of subject age. We also demonstrate that computational advances implemented in the software offer orders of magnitude of acceleration. Taken together, these open-source software tools-freely available at https://tractometry.org-provide a transformative environment for the analysis of dMRI data.
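
A minimal, generic sketch of the kind of group-difference hypothesis test this abstract mentions, not the ecosystem's own API: node-wise testing of a tract profile (e.g., fractional anisotropy sampled at 100 nodes along a bundle) between two groups, with FDR correction across nodes. The data and node count are assumptions.

```python
# Minimal sketch (synthetic data): node-wise group comparison along a tract
# profile with FDR correction.
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
group_a = rng.normal(0.45, 0.05, size=(30, 100))   # 30 subjects x 100 nodes of FA
group_b = rng.normal(0.47, 0.05, size=(28, 100))   # 28 subjects x 100 nodes of FA

t_stat, p_vals = ttest_ind(group_a, group_b, axis=0)
reject, p_fdr, _, _ = multipletests(p_vals, alpha=0.05, method="fdr_bh")
print(f"nodes with significant group difference: {int(reject.sum())} / 100")
```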

Optimized AI-based Neural Decoding from BOLD fMRI Signal for Analyzing Visual and Semantic ROIs in the Human Visual System.

Veronese L, Moglia A, Pecco N, Della Rosa P, Scifo P, Mainardi LT, Cerveri P

PubMed · Aug 14 2025
AI-based neural decoding reconstructs visual perception by leveraging generative models to map brain activity measured through functional MRI (fMRI) into the observed visual stimulus. Traditionally, ridge linear models transform fMRI into a latent space, which is then decoded using variational autoencoders (VAE) or latent diffusion models (LDM). Owing to the complexity and noisiness of fMRI data, newer approaches split the reconstruction into two sequential stages, the first providing a rough visual approximation using a VAE, the second incorporating semantic information through an LDM guided by contrastive language-image pre-training (CLIP) embeddings. This work addressed some key scientific and technical gaps of two-stage neural decoding by: 1) implementing a gated recurrent unit (GRU)-based architecture to establish a non-linear mapping between the fMRI signal and the VAE latent space, 2) optimizing the dimensionality of the VAE latent space, 3) systematically evaluating the contribution of the first reconstruction stage, and 4) analyzing the impact of different brain regions of interest (ROIs) on reconstruction quality. Experiments on the Natural Scenes Dataset, containing 73,000 unique natural images along with fMRI from eight subjects, demonstrated that the proposed architecture maintained competitive performance while reducing the complexity of its first stage by 85%. The sensitivity analysis showed that the first reconstruction stage is essential for preserving high structural similarity in the final reconstructions. Restricting the analysis to semantic ROIs while excluding early visual areas diminished visual coherence, though semantic content was preserved. The inter-subject repeatability across ROIs was about 92% and 98% for visual and semantic metrics, respectively. This study represents a key step toward optimized neural decoding architectures leveraging non-linear models for stimulus prediction. Sensitivity analysis highlighted the interplay between the two reconstruction stages, while ROI-based analysis provided strong evidence that the two-stage AI model reflects the brain's hierarchical processing of visual information.
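
A minimal sketch, with assumed dimensions rather than the paper's architecture, of a GRU that maps a sequence of fMRI features non-linearly into a VAE latent vector for the first reconstruction stage.

```python
# Minimal sketch (assumed dimensions): GRU mapping fMRI features to a VAE latent.
import torch
import torch.nn as nn

class FMRIToLatentGRU(nn.Module):
    def __init__(self, in_dim: int = 256, hidden: int = 512, latent_dim: int = 1024):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, batch_first=True)
        self.to_latent = nn.Linear(hidden, latent_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_steps, in_dim); the final hidden state summarizes the sequence
        _, h_n = self.gru(x)
        return self.to_latent(h_n[-1])

latent = FMRIToLatentGRU()(torch.randn(4, 60, 256))   # -> (4, 1024)
```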

Delineation of the Centromedian Nucleus for Epilepsy Neuromodulation Using Deep Learning Reconstruction of White Matter-Nulled Imaging.

Ryan MV, Satzer D, Hu H, Litwiller DV, Rettmann DW, Tanabe J, Thompson JA, Ojemann SG, Kramer DR

PubMed · Aug 14 2025
Neuromodulation of the centromedian nucleus (CM) of the thalamus has shown promise in treating refractory epilepsy, particularly idiopathic generalized epilepsy and Lennox-Gastaut syndrome. However, precise targeting of CM remains challenging. The combination of deep learning reconstruction (DLR) and fast gray matter acquisition T1 inversion recovery (FGATIR) offers potential improvements in visualization of CM for deep brain stimulation (DBS) targeting. This study evaluated visualization of the putative CM on DLR-FGATIR and its alignment with atlas-defined CM boundaries, with the aim of facilitating direct targeting of CM for neuromodulation. This retrospective study included 12 patients with drug-resistant epilepsy treated with thalamic neuromodulation using DLR-FGATIR for direct targeting. Postcontrast T1-weighted MRI, DLR-FGATIR, and postoperative CT were coregistered and normalized into Montreal Neurological Institute (MNI) space and compared with the Morel histologic atlas. Contrast-to-noise ratios were measured between CM and neighboring nuclei. CM segmentations were compared between an experienced rater, a trainee rater, the Morel atlas, and the Thalamus Optimized Multi Atlas Segmentation (THOMAS) atlas (derived from expert segmentation of high-field MRI) using the Sørensen-Dice coefficient (Dice score, a measure of overlap) and volume ratios. The number of electrode contacts within the Morel atlas CM was assessed. On DLR-FGATIR, CM was visible as an ovoid hypointensity in the intralaminar thalamus. Contrast-to-noise ratios were highest (P < .001) for the mediodorsal and medial pulvinar nuclei. The Dice score with the Morel atlas CM was higher (median 0.49, interquartile range 0.40-0.58) for the experienced rater (P < .001) than the trainee rater (0.32, 0.19-0.46) and no different (P = .32) from the THOMAS atlas CM (0.56, 0.55-0.58). Both raters and the THOMAS atlas tended to under-segment the lateral portion of the Morel atlas CM, reflected by smaller segmentation volumes (P < .001). All electrodes targeting CM based on DLR-FGATIR traversed the Morel atlas CM. DLR-FGATIR permitted visualization and delineation of CM commensurate with a group atlas derived from high-field MRI. This technique provided reliable guidance for accurate electrode placement within CM, highlighting its potential use for direct targeting.
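
A minimal, generic sketch of the overlap metric used above: the Sørensen-Dice coefficient between two binary segmentations, computed here on hypothetical label volumes rather than the study's data.

```python
# Minimal sketch (synthetic masks): Sørensen-Dice coefficient between two
# binary segmentations of the same volume.
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice = 2 * |A intersect B| / (|A| + |B|) for boolean volumes of equal shape."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

rng = np.random.default_rng(0)
rater_1 = rng.random((64, 64, 64)) > 0.7   # hypothetical segmentation, rater 1
rater_2 = rng.random((64, 64, 64)) > 0.7   # hypothetical segmentation, rater 2
print(f"Dice overlap: {dice(rater_1, rater_2):.2f}")
```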

A novel hybrid convolutional and recurrent neural network model for automatic pituitary adenoma classification using dynamic contrast-enhanced MRI.

Motamed M, Bastam M, Tabatabaie SM, Elhaie M, Shahbazi-Gahrouei D

PubMed · Aug 14 2025
Pituitary adenomas, ranging from subtle microadenomas to mass-effect macroadenomas, pose diagnostic challenges for radiologists due to increasing scan volumes and the complexity of dynamic contrast-enhanced MRI interpretation. A hybrid CNN-LSTM model was trained and validated on a multi-center dataset of 2,163 samples from Tehran and Babolsar, Iran. Transfer learning and preprocessing techniques (e.g., Wiener filters) were utilized to improve classification performance for microadenomas (< 10 mm) and macroadenomas (> 10 mm). The model achieved 90.5% accuracy, an area under the receiver operating characteristic curve (AUROC) of 0.92, and 89.6% sensitivity (93.5% for microadenomas, 88.3% for macroadenomas), outperforming standard CNNs by 5-18% across metrics. With a processing time of 0.17 s per scan, the model demonstrated robustness to variations in imaging conditions, including scanner differences and contrast variations, excelling in real-time detection and differentiation of adenoma subtypes. This dual-path approach, the first to synergize spatial and temporal MRI features for pituitary diagnostics, offers high precision and efficiency. Supported by comparisons with existing models, it provides a scalable, reproducible tool to improve patient outcomes, with potential adaptability to broader neuroimaging challenges.
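
A minimal sketch, with assumed shapes rather than the published model, of the hybrid idea named in the title: a small CNN encodes each dynamic contrast-enhanced frame and an LSTM pools the temporal sequence of frame embeddings into adenoma-class logits.

```python
# Minimal sketch (assumed shapes): per-frame CNN encoder + LSTM over DCE phases.
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    def __init__(self, feat_dim: int = 64, hidden: int = 128, n_classes: int = 2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, 1, H, W) -- one channel per DCE time point
        b, t = x.shape[:2]
        frames = self.cnn(x.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.lstm(frames)
        return self.head(h_n[-1])

logits = CNNLSTMClassifier()(torch.randn(2, 8, 1, 96, 96))   # 8 DCE phases per scan
```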