
Characterizing ASD Subtypes Using Morphological Features from sMRI with Unsupervised Learning.

Raj A, Ratnaik R, Sengar SS, Fredo ARJ

PubMed · May 15, 2025
In this study, we attempted to identify subtypes of autism spectrum disorder (ASD) using anatomical alterations found in structural magnetic resonance imaging (sMRI) data of the ASD brain and machine learning tools. The sMRI data were first preprocessed using the FreeSurfer toolbox, and the brain was parcellated into 148 regions of interest using the Destrieux atlas. Features such as volume, thickness, surface area, and mean curvature were extracted for each brain region. We performed principal component analysis independently on the volume, thickness, surface area, and mean curvature features and identified the top 10 features. We then applied k-means clustering to these top 10 features and validated the number of clusters using the Elbow and Silhouette methods. Our study identified two clusters in the dataset, suggesting the existence of two ASD subtypes. We identified features such as the volume of scaled lh_G_front middle, thickness of scaled rh_S_temporal transverse, area of scaled lh_S_temporal sup, and mean curvature of scaled lh_G_precentral as the significant features discriminating the two clusters, with statistically significant p-values (p < 0.05). Thus, our proposed method is effective for the identification of ASD subtypes and may also be useful for the screening of other similar neurological disorders.
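The pipeline described above, dimensionality reduction to the top principal components followed by k-means with k = 2, can be sketched with a numpy-only stand-in. The synthetic data, feature dimensions, and plain Lloyd's-algorithm implementation below are illustrative assumptions, not the study's FreeSurfer-derived features:

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_top_components(X, n_components=10):
    """Project standardized features onto the top principal components via SVD."""
    Xc = (X - X.mean(0)) / X.std(0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def kmeans(X, k=2, iters=100):
    """Plain Lloyd's algorithm; returns cluster labels and centroids."""
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = dists.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(0)
    return labels, centroids

# Synthetic stand-in for 148-region morphometric features from two subgroups
X = np.vstack([rng.normal(0, 1, (30, 148)), rng.normal(2, 1, (30, 148))])
Z = pca_top_components(X, 10)
labels, _ = kmeans(Z, k=2)
```

In practice each feature type (volume, thickness, area, curvature) would be reduced separately, as the abstract describes, before clustering.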

A CVAE-based generative model for generalized B<sub>1</sub> inhomogeneity corrected chemical exchange saturation transfer MRI at 5 T.

Zhang R, Zhang Q, Wu Y

PubMed · May 15, 2025
Chemical exchange saturation transfer (CEST) magnetic resonance imaging (MRI) has emerged as a powerful tool to image endogenous or exogenous macromolecules. CEST contrast depends strongly on the radiofrequency irradiation B<sub>1</sub> level, and spatial inhomogeneity of the B<sub>1</sub> field biases CEST measurement. Conventional interpolation-based B<sub>1</sub> correction methods require CEST data acquisition at multiple B<sub>1</sub> levels, substantially prolonging scan time. A recently proposed supervised deep learning approach reconstructs the B<sub>1</sub> inhomogeneity corrected CEST effect only at the B<sub>1</sub> level of its training data, hindering generalization to other B<sub>1</sub> levels. In this study, we proposed a Conditional Variational Autoencoder (CVAE)-based generative model to generate B<sub>1</sub> inhomogeneity corrected Z spectra from a single CEST acquisition. The model was trained on pixel-wise source-target paired Z spectra acquired under multiple B<sub>1</sub> levels, with the target B<sub>1</sub> as a conditional variable. Numerical simulation and healthy human brain imaging at 5 T were performed to evaluate the performance of the proposed model in B<sub>1</sub> inhomogeneity corrected CEST MRI. Results showed that the generated B<sub>1</sub>-corrected Z spectra agreed well with the reference averaged from regions with subtle B<sub>1</sub> inhomogeneity. Moreover, the performance of the proposed model in correcting B<sub>1</sub> inhomogeneity in the APT CEST effect, as measured by both MTR<sub>asym</sub> and [Formula: see text] at 3.5 ppm, was superior to conventional Z/contrast-B<sub>1</sub>-interpolation and other deep learning methods, especially when the target B<sub>1</sub> was not included in the sampling or training dataset. In summary, the proposed model allows generalized B<sub>1</sub> inhomogeneity correction, benefiting quantitative CEST MRI in clinical routine.
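The conditional-generation step, encoding a spectrum with its source B<sub>1</sub>, then decoding with the target B<sub>1</sub> as the condition, can be sketched schematically. The spectrum length, latent size, and single random linear maps below are assumptions standing in for trained encoder/decoder networks, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

N_OFFSETS, LATENT = 54, 8   # hypothetical Z-spectrum length and latent size

# Random linear maps stand in for trained networks (+1 input for the B1 value)
W_enc = rng.normal(0, 0.1, (N_OFFSETS + 1, 2 * LATENT))
W_dec = rng.normal(0, 0.1, (LATENT + 1, N_OFFSETS))

def cvae_generate(z_spectrum, b1_source, b1_target):
    """Encode (spectrum, source B1) to a latent code; decode with target B1."""
    h = np.concatenate([z_spectrum, [b1_source]]) @ W_enc
    mu, log_var = h[:LATENT], h[LATENT:]
    z = mu + np.exp(0.5 * log_var) * rng.normal(size=LATENT)  # reparameterization
    return np.concatenate([z, [b1_target]]) @ W_dec            # conditioned decode

out = cvae_generate(rng.random(N_OFFSETS), b1_source=0.8, b1_target=1.0)
```

The key point is that the target B<sub>1</sub> enters only as a conditioning input at decode time, which is what lets a trained model generalize to B<sub>1</sub> levels outside the training set.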

Joint resting state and structural networks characterize pediatric bipolar patients compared to healthy controls: a multimodal fusion approach.

Yi X, Ma M, Wang X, Zhang J, Wu F, Huang H, Xiao Q, Xie A, Liu P, Grecucci A

PubMed · May 15, 2025
Pediatric bipolar disorder (PBD) is a highly debilitating condition characterized by alternating episodes of mania and depression with intervening periods of remission. Limited information is available about the functional and structural abnormalities in PBD, particularly when comparing the type I and type II subtypes. Resting-state brain activity and structural grey matter, assessed through MRI, may provide insight into the neurobiological biomarkers of this disorder. In this study, resting-state Regional Homogeneity (ReHo) and grey matter concentration (GMC) data of 58 PBD patients and 21 healthy controls (HC) matched for age, gender, education, and IQ were analyzed with a data-fusion unsupervised machine learning approach known as transposed Independent Vector Analysis. Two networks significantly differed between PBD and HC. The first network included fronto-medial regions, such as the medial and superior frontal gyri and the cingulate, and displayed higher ReHo and GMC values in PBD compared to HC. The second network included temporo-posterior regions, as well as the insula, the caudate, and the precuneus, and displayed lower ReHo and GMC values in PBD compared to HC. Additionally, two networks differed between type I and type II PBD: an occipito-cerebellar network with increased ReHo and GMC in type I compared to type II, and a fronto-parietal network with decreased ReHo and GMC in type I compared to type II. Of note, the first network positively correlated with depression scores. These findings shed new light on the functional and structural abnormalities displayed by pediatric bipolar patients.
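The multimodal-fusion idea, extracting subject-level loadings on components shared across ReHo and GMC, can be illustrated with a much simpler stand-in: z-score each modality, stack along the feature axis, and take a joint SVD. Transposed Independent Vector Analysis itself is considerably more involved; the subject and voxel counts here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

n_subjects, n_voxels = 79, 500           # 58 PBD + 21 HC in the study; voxel count assumed
reho = rng.normal(size=(n_subjects, n_voxels))
gmc = rng.normal(size=(n_subjects, n_voxels))

def zscore(m):
    """Standardize each feature column so modalities contribute comparably."""
    return (m - m.mean(0)) / m.std(0)

# Stack modalities along the feature axis: subjects x (2 * voxels)
joint = np.hstack([zscore(reho), zscore(gmc)])

# Shared subject loadings on joint components (a crude proxy for fused networks)
U, S, Vt = np.linalg.svd(joint, full_matrices=False)
loadings = U[:, :2]                       # per-subject weights on two joint components
```

Group differences would then be tested on these per-subject loadings (PBD vs. HC, or type I vs. type II), which is the role the tIVA subject weights play in the paper.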

MRI-derived deep learning models for predicting 1p/19q codeletion status in glioma patients: a systematic review and meta-analysis of diagnostic test accuracy studies.

Ahmadzadeh AM, Broomand Lomer N, Ashoobi MA, Elyassirad D, Gheiji B, Vatanparast M, Rostami A, Abouei Mehrizi MA, Tabari A, Bathla G, Faghani S

PubMed · May 15, 2025
We conducted a systematic review and meta-analysis to evaluate the performance of magnetic resonance imaging (MRI)-derived deep learning (DL) models in predicting 1p/19q codeletion status in glioma patients. The literature search was performed in four databases: PubMed, Web of Science, Embase, and Scopus. We included studies that evaluated the performance of end-to-end DL models in predicting glioma 1p/19q codeletion status. The quality of the included studies was assessed with the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool and the METhodological RadiomICs Score (METRICS). We calculated pooled diagnostic estimates, and heterogeneity was evaluated using I<sup>2</sup>. Subgroup and sensitivity analyses were conducted to explore sources of heterogeneity. Publication bias was evaluated with Deeks' funnel plots. Twenty studies were included in the systematic review, of which only two were of low quality. A meta-analysis of ten studies demonstrated a pooled sensitivity of 0.77 (95% CI: 0.63-0.87), a specificity of 0.85 (95% CI: 0.74-0.92), a positive diagnostic likelihood ratio (DLR) of 5.34 (95% CI: 2.88-9.89), a negative DLR of 0.26 (95% CI: 0.16-0.45), a diagnostic odds ratio of 20.24 (95% CI: 8.19-50.02), and an area under the curve of 0.89 (95% CI: 0.86-0.91). The subgroup analysis identified a significant difference between groups depending on the segmentation method used. DL models can predict glioma 1p/19q codeletion status with high accuracy and may enhance non-invasive tumor characterization and aid in the selection of optimal therapeutic strategies.
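As a back-of-envelope check on the pooled estimates, the diagnostic likelihood ratios and odds ratio follow directly from sensitivity and specificity. The values computed this way differ slightly from the reported ones because the paper pools with a bivariate model rather than combining point estimates:

```python
def diagnostic_ratios(sens, spec):
    """Positive/negative diagnostic likelihood ratios and diagnostic odds ratio."""
    dlr_pos = sens / (1 - spec)      # how much a positive result raises the odds
    dlr_neg = (1 - sens) / spec      # how much a negative result lowers the odds
    dor = dlr_pos / dlr_neg          # overall discriminative odds ratio
    return dlr_pos, dlr_neg, dor

# Pooled sensitivity 0.77 and specificity 0.85 from the meta-analysis
dlr_pos, dlr_neg, dor = diagnostic_ratios(0.77, 0.85)
```

This gives DLR+ about 5.13, DLR- about 0.27, and a DOR near 19, broadly consistent with the reported 5.34, 0.26, and 20.24.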

MIMI-ONET: Multi-Modal image augmentation via Butterfly Optimized neural network for Huntington Disease Detection.

Amudaria S, Jawhar SJ

PubMed · May 15, 2025
Huntington's disease (HD) is a chronic neurodegenerative ailment marked by cognitive decline, motor impairment, and psychiatric symptoms. However, existing HD detection methods struggle with limited annotated datasets, which restricts their generalization performance. This work proposes a novel MIMI-ONET for primary detection of HD using augmented multi-modal brain MRI images. The two-dimensional stationary wavelet transform (2DSWT) decomposes the MRI images into frequency wavelet sub-bands. These sub-bands are enhanced with Contrast Stretching Adaptive Histogram Equalization (CSAHE) and Multi-scale Adaptive Retinex (MSAR), reducing irrelevant distortions. The proposed MIMI-ONET introduces a Hepta Generative Adversarial Network (Hepta-GAN) to generate noise-free HD images at seven azimuth angles (45°, 90°, 135°, 180°, 225°, 270°, 315°). Hepta-GAN incorporates an Affine Estimation Module (AEM) to extract multi-scale features using dilated convolutional layers for efficient HD image generation. Moreover, Hepta-GAN is tuned with the Butterfly Optimization (BO) algorithm, which balances its parameters to enhance augmentation performance. Finally, the generated images are fed to a deep neural network (DNN) for classification into normal control (NC), Adult-Onset HD (AHD), and Juvenile HD (JHD) cases. The proposed MIMI-ONET is evaluated with precision, specificity, F1 score, recall, accuracy, PSNR, and MSE. In the experiments, MIMI-ONET attains an accuracy of 98.85% and a PSNR of 48.05 on the gathered Image-HD dataset, improving overall accuracy by 9.96%, 1.85%, 5.91%, 13.80%, and 13.5% over 3DCNN, KNN, FCN, RNN, and ML frameworks, respectively.
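The first stage of the pipeline, a stationary (undecimated) wavelet decomposition, can be illustrated with a one-level 2-D Haar version in numpy. The Haar filter choice and circular boundary handling are simplifying assumptions; the paper does not specify its wavelet:

```python
import numpy as np

def swt2_haar(img):
    """One-level undecimated 2-D Haar transform.
    Returns approximation plus horizontal/vertical/diagonal detail bands,
    each the same size as the input (circular boundary handling)."""
    lo = lambda a, ax: (a + np.roll(a, -1, axis=ax)) / 2.0  # averaging filter
    hi = lambda a, ax: (a - np.roll(a, -1, axis=ax)) / 2.0  # differencing filter
    L, H = lo(img, 0), hi(img, 0)          # filter along rows, no downsampling
    return lo(L, 1), hi(L, 1), lo(H, 1), hi(H, 1)  # then along columns

img = np.arange(16.0).reshape(4, 4)
cA, cH, cV, cD = swt2_haar(img)
```

Because no downsampling occurs, the four sub-bands keep the input's resolution and sum back to the original image, which is what makes per-band enhancement (CSAHE, MSAR) straightforward to apply before recombination.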

CLIF-Net: Intersection-guided Cross-view Fusion Network for Infection Detection from Cranial Ultrasound.

Yu M, Peterson MR, Burgoine K, Harbaugh T, Olupot-Olupot P, Gladstone M, Hagmann C, Cowan FM, Weeks A, Morton SU, Mulondo R, Mbabazi-Kabachelor E, Schiff SJ, Monga V

PubMed · May 15, 2025
This paper addresses the problem of detecting possible serious bacterial infection (pSBI) of infancy, i.e., a clinical presentation consistent with bacterial sepsis in newborn infants, using cranial ultrasound (cUS) images. The captured image set for each patient provides multi-view imagery, coronal and sagittal, with geometric overlap. To exploit this geometric relation, we develop a new learning framework, the intersection-guided Cross-view Local- and Image-level Fusion Network (CLIF-Net). Our technique employs two distinct convolutional neural network branches to extract features from coronal and sagittal images with newly developed multi-level fusion blocks. Specifically, we leverage the spatial position of these images to locate the intersecting region. We then identify and enhance the semantic features from this region across multiple levels using cross-attention modules, facilitating the acquisition of mutually beneficial and more representative features from both views. The final enhanced features from the two views are then integrated and projected through the image-level fusion layer, outputting pSBI and non-pSBI class probabilities. We contend that our method of exploiting multi-view cUS images enables a first-of-its-kind, robust 3D representation tailored for pSBI detection. When evaluated on a dataset of 302 cUS scans from Mbale Regional Referral Hospital in Uganda, CLIF-Net demonstrates substantially enhanced performance, surpassing state-of-the-art infection detection techniques.
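The cross-attention step, letting features from one view attend to the intersecting region of the other, can be sketched with scaled dot-product attention in numpy. Identity query/key/value projections, the token count, and the feature dimension are simplifications; CLIF-Net's actual modules are learned:

```python
import numpy as np

rng = np.random.default_rng(3)

def cross_attention(q_feats, kv_feats):
    """Scaled dot-product cross-attention: queries from one view attend to
    keys/values from the other. Identity projections for brevity."""
    d = q_feats.shape[-1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(1, keepdims=True))
    weights /= weights.sum(1, keepdims=True)      # softmax over the other view's positions
    return weights @ kv_feats

coronal = rng.normal(size=(32, 64))    # 32 intersection-region tokens, dim 64 (assumed)
sagittal = rng.normal(size=(32, 64))
enhanced_coronal = coronal + cross_attention(coronal, sagittal)  # residual enhancement
```

Running the symmetric direction (sagittal attending to coronal) and fusing both outputs mirrors the "mutually beneficial features from both views" idea.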

MR Fingerprinting for Imaging Brain Hemodynamics and Oxygenation.

Coudert T, Delphin A, Barrier A, Barbier EL, Lemasson B, Warnking JM, Christen T

PubMed · May 15, 2025
Over the past decade, several studies have explored the potential of magnetic resonance fingerprinting (MRF) for the quantification of brain hemodynamics, oxygenation, and perfusion. Recent advances in simulation models and reconstruction frameworks have also significantly enhanced the accuracy of vascular parameter estimation. This review provides an overview of key vascular MRF studies, emphasizing advancements in geometrical models for vascular simulations, novel sequences, and state-of-the-art reconstruction techniques incorporating machine learning and deep learning algorithms. Both pre-clinical and clinical applications are discussed. Based on these findings, we outline future directions and development areas that need to be addressed to facilitate clinical translation of these techniques. EVIDENCE LEVEL: N/A. TECHNICAL EFFICACY: Stage 1.

Deep normative modelling reveals insights into early-stage Alzheimer's disease using multi-modal neuroimaging data.

Lawry Aguila A, Lorenzini L, Janahi M, Barkhof F, Altmann A

PubMed · May 15, 2025
Exploring the early stages of Alzheimer's disease (AD) is crucial for timely intervention to help manage symptoms and set expectations for affected individuals and their families. However, the study of the early stages of AD involves analysing heterogeneous disease cohorts which may present challenges for some modelling techniques. This heterogeneity stems from the diverse nature of AD itself, as well as the inclusion of undiagnosed or 'at-risk' AD individuals or the presence of comorbidities which differentially affect AD biomarkers within the cohort. Normative modelling is an emerging technique for studying heterogeneous disorders that can quantify how brain imaging-based measures of individuals deviate from a healthy population. The normative model provides a statistical description of the 'normal' range that can be used at subject level to detect deviations, which may relate to pathological effects. In this work, we applied a deep learning-based normative model, pre-trained on MRI scans in the UK Biobank, to investigate ageing and identify abnormal age-related decline. We calculated deviations, relative to the healthy population, in multi-modal MRI data of non-demented individuals in the external EPAD (ep-ad.org) cohort and explored these deviations with the aim of determining whether normative modelling could detect AD-relevant subtle differences between individuals. We found that aggregate measures of deviation based on the entire brain correlated with measures of cognitive ability and biological phenotypes, indicating the effectiveness of a general deviation metric in identifying AD-related differences among individuals. We then explored deviations in individual imaging features, stratified by cognitive performance and genetic risk, across different brain regions and found that the brain regions showing deviations corresponded to those affected by AD such as the hippocampus. Finally, we found that 'at-risk' individuals in the EPAD cohort exhibited increasing deviation over time, with an approximately 6.4 times greater t-statistic in a pairwise t-test compared to a 'super-healthy' cohort. This study highlights the capability of deep normative modelling approaches to detect subtle differences in brain morphology among individuals at risk of developing AD in a non-demented population. Our findings allude to the potential utility of normative deviation metrics in monitoring disease progression.
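The core normative-deviation idea, scoring a subject's imaging features against a healthy reference distribution and summarizing whole-brain deviation, can be sketched simply. The reference cohort, feature count, and root-mean-square aggregate below are illustrative stand-ins for the paper's deep normative model:

```python
import numpy as np

rng = np.random.default_rng(4)

def deviation_z(subject_feats, ref_mean, ref_std):
    """Per-feature deviation of one subject from the healthy reference."""
    return (subject_feats - ref_mean) / ref_std

# Healthy reference cohort (subjects x imaging features), dimensions assumed
ref = rng.normal(0, 1, (200, 50))
mu, sd = ref.mean(0), ref.std(0)

subject = rng.normal(0.5, 1, 50)             # hypothetical 'at-risk' subject
z = deviation_z(subject, mu, sd)             # per-region deviation map
aggregate = float(np.sqrt((z ** 2).mean()))  # whole-brain deviation summary
```

The per-feature map supports the region-level analysis (e.g. hippocampal deviations), while the scalar aggregate plays the role of the general deviation metric correlated with cognition.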

Machine learning for grading prediction and survival analysis in high grade glioma.

Li X, Huang X, Shen Y, Yu S, Zheng L, Cai Y, Yang Y, Zhang R, Zhu L, Wang E

PubMed · May 15, 2025
We developed and validated a magnetic resonance imaging (MRI)-based radiomics model for the classification of high-grade glioma (HGG) and determined the optimal machine learning (ML) approach. This retrospective analysis included 184 patients (59 grade III and 125 grade IV lesions). Radiomics features were extracted from T1-weighted images (T1WI). The least absolute shrinkage and selection operator (LASSO) feature selection method and seven classification methods, including logistic regression, XGBoost, Decision Tree, Random Forest (RF), AdaBoost, Gradient Boosting Decision Tree, and a Stacking fusion model, were used to differentiate HGG. Performance was compared on AUC, sensitivity, accuracy, precision, and specificity. Among the non-fusion models, the XGBoost classifier performed best, and applying SMOTE to address class imbalance improved the performance of all classifiers. The Stacking fusion model performed best overall, with an AUC of 0.95 (sensitivity 0.84; accuracy 0.85; F1 score 0.85). MRI-based quantitative radiomics features perform well in classifying HGG: XGBoost outperforms the other non-fusion classifiers, and the Stacking fusion model outperforms the non-fusion models.
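The stacking idea, base learners emit scores that a meta-learner combines, can be sketched on synthetic data. The two trivial base learners and the gradient-descent logistic-regression meta-learner below are toy stand-ins for the paper's seven classifiers; real stacking also uses out-of-fold base predictions, compressed here to a single pass for brevity:

```python
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(v):
    return 1 / (1 + np.exp(-v))

def fit_logreg(Z, y, lr=0.5, steps=2000):
    """Meta-learner: plain gradient-descent logistic regression with intercept."""
    Zb = np.column_stack([np.ones(len(Z)), Z])
    w = np.zeros(Zb.shape[1])
    for _ in range(steps):
        w -= lr * Zb.T @ (sigmoid(Zb @ w) - y) / len(y)
    return w

X = rng.normal(size=(200, 5))
y = (X[:, 0] + X.mean(1) > 0).astype(float)   # synthetic binary target
base1 = sigmoid(X[:, 0])                      # base learner 1: single-feature score
base2 = sigmoid(X.mean(1))                    # base learner 2: mean-feature score
Z = np.column_stack([base1, base2])           # stacked meta-features
w = fit_logreg(Z, y)
acc = ((sigmoid(np.column_stack([np.ones(200), Z]) @ w) > 0.5) == y).mean()
```

The gain from stacking comes from the meta-learner weighting complementary base predictions, which is the mechanism behind the fusion model's edge over any single classifier here.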

Machine learning prediction prior to onset of mild cognitive impairment using T1-weighted magnetic resonance imaging radiomic of the hippocampus.

Zhan S, Wang J, Dong J, Ji X, Huang L, Zhang Q, Xu D, Peng L, Wang X, Zhang Y, Liang S, Chen L

PubMed · May 15, 2025
Early identification of individuals who progress from normal cognition (NC) to mild cognitive impairment (MCI) may help prevent cognitive decline. We aimed to build predictive models using radiomics features of the bilateral hippocampus in combination with scores from neuropsychological assessments. Using the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, we studied 175 NC individuals, 50 of whom progressed to MCI within seven years. Employing the Least Absolute Shrinkage and Selection Operator (LASSO) on T1-weighted images, we extracted and refined hippocampal features. Classification models, including Logistic Regression (LR), Support Vector Machine (SVM), Random Forest (RF), and Light Gradient Boosting Machine (LightGBM), were built from the significant neuropsychological scores. Model validation was conducted using 5-fold cross-validation, and hyperparameters were optimized with Scikit-learn, using an 80:20 data split for training and testing. The LightGBM model achieved an area under the receiver operating characteristic (ROC) curve (AUC) of 0.89 and an accuracy of 0.79 on the training set, and an AUC of 0.80 and an accuracy of 0.74 on the test set. The study shows that T1-weighted MRI radiomics of the hippocampus can predict progression to MCI at the normal cognition stage, which may provide new insight for clinical research.
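The validation scheme, an 80:20 train/test split with 5-fold cross-validation inside the training portion, can be made concrete with plain index bookkeeping (the study used Scikit-learn for this; the numpy version below is a minimal equivalent):

```python
import numpy as np

rng = np.random.default_rng(6)

def train_test_split_idx(n, test_frac=0.2):
    """Shuffle subject indices and hold out a test fraction (80:20 here)."""
    idx = rng.permutation(n)
    cut = int(round(n * (1 - test_frac)))
    return idx[:cut], idx[cut:]

def kfold_indices(idx, k=5):
    """Yield (train, validation) index pairs over the training split only,
    so the held-out test set never leaks into model selection."""
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

train_idx, test_idx = train_test_split_idx(175)   # 175 NC subjects in the study
splits = list(kfold_indices(train_idx, k=5))
```

Hyperparameters are tuned on the five inner splits; the 35 held-out subjects are scored once at the end, which is what the separate train/test AUCs reflect.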
