Page 98 of 119 (1186 results)

Functional connectome-based predictive modeling of suicidal ideation.

Averill LA, Tamman AJF, Fouda S, Averill CL, Nemati S, Ragnhildstveit A, Gosnell S, Akiki TJ, Salas R, Abdallah CG

PubMed | May 27, 2025
Suicide represents a grave threat to society despite major advancements in medicine, in part due to limited knowledge of the biological mechanisms of suicidal behavior. We apply a connectome-based predictive modeling machine learning approach to identify a reproducible brain network associated with suicidal ideation, with the aim of revealing possible targets for novel anti-suicidal therapeutics. Patients with a current major depressive episode and recurrent major depressive disorder were recruited from an inpatient facility at The Menninger Clinic in Houston, Texas (N = 261; 181 with active and specific suicidal ideation) and underwent resting-state functional magnetic resonance imaging. The participants' ages ranged from 18 to 70 (mean ± SEM = 31.6 ± 0.8 years), and 136 (52%) were male. Using this approach, we found a robust and reproducible biomarker of suicidal ideation relative to controls without ideation: increased suicidal ideation was associated with greater internal connectivity and reduced internetwork external connectivity in the central executive, default mode, and dorsal salience networks. We also found evidence that higher external connectivity between the ventral salience and sensorimotor/visual networks was associated with increased suicidal ideation. Overall, these observed differences may reflect reduced network integration and greater segregation of connectivity in individuals at increased suicide risk. Our findings provide avenues for future work testing novel drugs that target these identified neural alterations, for instance drugs that increase network integration.
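Connectome predictive modeling, in its standard form, is a simple pipeline: correlate each connectivity edge with the behavioral measure across subjects, keep the edges passing a significance threshold, and regress the measure on their summed strength. A minimal sketch of the positive-network variant on synthetic data (the threshold and toy dimensions are illustrative, not the study's):

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression

def cpm_fit(edges, scores, p_thresh=0.01):
    """Connectome predictive modeling, positive-network variant:
    keep edges whose strength correlates positively with the score,
    then regress the score on the summed strength of those edges."""
    keep = []
    for j in range(edges.shape[1]):
        r, p = pearsonr(edges[:, j], scores)
        if r > 0 and p < p_thresh:
            keep.append(j)
    keep = np.asarray(keep)
    summary = edges[:, keep].sum(axis=1, keepdims=True)
    return keep, LinearRegression().fit(summary, scores)

# toy data: 50 subjects, 200 edges; the score is driven by the first 5 edges
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 200))
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=50)
keep, model = cpm_fit(X, y)
```

In practice the edge selection and model fit are wrapped in cross-validation (e.g., leave-one-out) so the reported brain-behavior association is estimated out of sample rather than on the training data as here.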

Neurostimulation for the Management of Epilepsy: Advances in Targeted Therapy.

Verma S, Malviya R, Sridhar SB, Sundram S, Shareef J

PubMed | May 27, 2025
Epilepsy is a multifaceted neurological disorder marked by seizures that can present with a wide range of symptoms. Although anti-epileptic drugs (AEDs) are in widespread use, approximately 20-30% of patients remain drug-resistant, and adverse effects present considerable obstacles, highlighting the need for innovative therapeutic strategies. This study aimed to explore advancements in epilepsy diagnosis and treatment utilizing modern technology and medicines. The literature survey was carried out using Scopus, ScienceDirect, and Google Scholar, with preference given to data from the last 10 years. Emerging technologies, such as artificial intelligence, gene therapy, and wearable devices, have transformed epilepsy care. EEG and MRI play essential roles in diagnosis, while AI aids in evaluating large datasets for more accurate seizure identification. Machine learning and artificial intelligence are increasingly integrated into diagnostic workflows to enhance seizure detection and classification. Wearable technology improves patient self-monitoring and supports clinical research. Furthermore, gene therapies offer promise by addressing the fundamental causes of seizure activity, while stem cell therapies provide neuroprotective and regenerative benefits. Dietary interventions, including ketogenic diets, are being examined for their ability to modify neurochemical pathways implicated in epilepsy. Recent technological and therapeutic developments offer major benefits in epilepsy assessment and treatment, with AI and wearable devices enhancing seizure detection and patient monitoring. Nonetheless, additional study is essential to ensure broader clinical application and efficacy. Future perspectives include the potential of optogenetics and advanced signal processing techniques to revolutionize treatment paradigms, emphasizing the importance of personalized medicine in epilepsy care. Overall, a comprehensive understanding of the multifaceted nature of epilepsy is essential for developing effective interventions and improving patient outcomes.

Advances in Diagnostic Approaches for Alzheimer's Disease: From Biomarkers to Deep Learning Technology.

Asif M, Ullah H, Jamil N, Riaz M, Zain M, Pushparaj PN, Rasool M

PubMed | May 27, 2025
Alzheimer's disease (AD) is a devastating neurological disorder and a major contributor to dementia. It is characterized by cognitive dysfunction that impairs an individual's ability to perform daily tasks. In AD, nerve cells in areas of the brain related to cognitive function are damaged. Despite extensive research, there is currently no specific therapeutic or diagnostic approach for this fatal disease. However, scientists worldwide have developed effective techniques for diagnosing and managing this challenging disorder. Among the methods used to diagnose AD are feedback from blood relatives and observations of changes in an individual's behavioral and cognitive abilities. Biomarkers, such as amyloid beta and measures of neurodegeneration, aid in the early detection of AD through cerebrospinal fluid (CSF) samples and brain imaging techniques like Magnetic Resonance Imaging (MRI). Advanced medical imaging technologies, including X-ray, CT, MRI, ultrasound, mammography, and PET, provide valuable insights into human anatomy and function. MRI, in particular, is non-invasive and useful for scanning both the structural and functional aspects of the brain. Additionally, Machine Learning (ML) and Deep Learning (DL) technologies, especially Convolutional Neural Networks (CNNs), have demonstrated high accuracy in diagnosing AD by detecting brain changes. However, these technologies are intended to support, rather than replace, clinical assessments by medical professionals.

DeepMultiConnectome: Deep Multi-Task Prediction of Structural Connectomes Directly from Diffusion MRI Tractography

Marcus J. Vroemen, Yuqian Chen, Yui Lo, Tengfei Xu, Weidong Cai, Fan Zhang, Josien P. W. Pluim, Lauren J. O'Donnell

arXiv preprint | May 27, 2025
Diffusion MRI (dMRI) tractography enables in vivo mapping of brain structural connections, but traditional connectome generation is time-consuming and requires gray matter parcellation, posing challenges for large-scale studies. We introduce DeepMultiConnectome, a deep-learning model that predicts structural connectomes directly from tractography, bypassing the need for gray matter parcellation while supporting multiple parcellation schemes. Using a point-cloud-based neural network with multi-task learning, the model classifies streamlines according to their connected regions across two parcellation schemes, sharing a learned representation. We train and validate DeepMultiConnectome on tractography from the Human Connectome Project Young Adult dataset ($n = 1000$), labeled with 84- and 164-region gray matter parcellation schemes. DeepMultiConnectome predicts multiple structural connectomes from a whole-brain tractogram containing 3 million streamlines in approximately 40 seconds. DeepMultiConnectome is evaluated by comparing predicted connectomes with traditional connectomes generated using the conventional method of labeling streamlines with a gray matter parcellation. The predicted connectomes are highly correlated with traditionally generated connectomes ($r = 0.992$ for the 84-region scheme; $r = 0.986$ for the 164-region scheme) and largely preserve network properties. A test-retest analysis of DeepMultiConnectome demonstrates reproducibility comparable to traditionally generated connectomes. The predicted connectomes perform similarly to traditionally generated connectomes in predicting age and cognitive function. Overall, DeepMultiConnectome provides a scalable, fast model for generating subject-specific connectomes across multiple parcellation schemes.
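Once each streamline carries a predicted (region, region) label pair, assembling the connectome is a symmetric count over region pairs. A minimal sketch with toy labels and region counts:

```python
import numpy as np

def build_connectome(stream_labels, n_regions):
    """Aggregate per-streamline (region_a, region_b) labels into a
    symmetric region-by-region streamline-count matrix."""
    C = np.zeros((n_regions, n_regions), dtype=np.int64)
    for a, b in stream_labels:
        C[a, b] += 1
        if a != b:
            C[b, a] += 1  # mirror off-diagonal entries
    return C

# toy labels: three streamlines connecting regions 0 and 2, one loop in region 1
labels = [(0, 2), (2, 0), (1, 1), (0, 2)]
C = build_connectome(labels, n_regions=3)
```

Other edge weightings (e.g., mean streamline length or FA along the bundle) follow the same aggregation pattern, accumulating a value instead of a count.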

Automatic identification of Parkinsonism using clinical multi-contrast brain MRI: a large self-supervised vision foundation model strategy.

Suo X, Chen M, Chen L, Luo C, Kemp GJ, Lui S, Sun H

PubMed | May 27, 2025
Valid non-invasive biomarkers for Parkinson's disease (PD) and Parkinson-plus syndrome (PPS) are urgently needed. Based on our recent self-supervised vision foundation model, the Shifted-window UNEt TRansformer (Swin UNETR), which uses clinical multi-contrast whole-brain MRI, we aimed to develop an efficient and practical model ('SwinClassifier') for discriminating PD from PPS using routine clinical MRI scans. We used 75,861 clinical head MRI scans, including T1-weighted, T2-weighted, and fluid-attenuated inversion recovery imaging, as a pre-training dataset to develop a foundation model, using self-supervised learning with a cross-contrast context recovery task. Clinical head MRI scans from n = 1992 participants with PD and n = 1989 participants with PPS were then used as a downstream PD vs PPS classification dataset. We assessed SwinClassifier's performance using confusion matrices, comparing it with a self-supervised vanilla Vision Transformer (ViT) autoencoder ('ViTClassifier') and with two convolutional neural networks (DenseNet121 and ResNet50) trained from scratch. SwinClassifier showed very good performance (F1 score 0.83, 95% confidence interval [CI] 0.79-0.87, AUC 0.89) in PD vs PPS discrimination on independent test datasets (n = 173 participants with PD and n = 165 participants with PPS). This self-supervised classifier with pretrained weights outperformed the ViTClassifier and the convolutional classifiers trained from scratch (F1 score 0.77-0.82, AUC 0.83-0.85). Occlusion sensitivity mapping in the correctly classified cases (n = 160 PD and n = 114 PPS) highlighted the brain regions guiding discrimination, mainly sensorimotor and midline structures including the cerebellum, brain stem, ventricles, and basal ganglia. Our self-supervised model based on routine clinical head MRI discriminated PD from PPS with good accuracy and sensitivity. With incremental improvements, the approach may be diagnostically useful in early disease.
Funding: National Key Research and Development Program of China.
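Occlusion sensitivity mapping, used above to localize discriminative regions, slides an occluding patch across the input and records how much the classifier's score drops; large drops mark regions the model relies on. A 2D sketch with a hypothetical scoring function (the study works on 3D volumes and a trained network):

```python
import numpy as np

def occlusion_sensitivity(predict, image, patch=4, baseline=0.0):
    """Slide a patch of constant `baseline` intensity over the image and
    record the drop in the model's score at each patch position."""
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    ref = predict(image)  # score on the unoccluded input
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heat[i // patch, j // patch] = ref - predict(occluded)
    return heat

# toy "model": score is the mean intensity of the top-left 8x8 quadrant,
# so only occlusions inside that quadrant should register
def toy_predict(img):
    return img[:8, :8].mean()

img = np.ones((16, 16))
heat = occlusion_sensitivity(toy_predict, img)
```

The resulting heat map is typically upsampled back to image resolution and overlaid on the anatomy for interpretation.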

Interpretable Machine Learning Models for Differentiating Glioblastoma From Solitary Brain Metastasis Using Radiomics.

Xia X, Wu W, Tan Q, Gou Q

PubMed | May 27, 2025
To develop and validate interpretable machine learning models for differentiating glioblastoma (GB) from solitary brain metastasis (SBM) using radiomics features from contrast-enhanced T1-weighted MRI (CE-T1WI), and to compare the impact of low-order and high-order features on model performance. A cohort of 434 patients with histopathologically confirmed GB (n = 226) and SBM (n = 208) was retrospectively analyzed. Radiomic features were derived from CE-T1WI, with feature selection conducted through minimum redundancy maximum relevance (mRMR) and least absolute shrinkage and selection operator (LASSO) regression. Machine learning models, including GradientBoost and LightGBM (LGBM), were trained using low-order and high-order features. Model performance was assessed through receiver operating characteristic (ROC) analysis and computation of the area under the curve (AUC), along with accuracy, specificity, and sensitivity. SHapley Additive exPlanations (SHAP) analysis was used to measure the influence of each feature on the model's predictions. The performances of the machine learning models differed notably between the training and validation datasets. In the training group, the LGBM, CatBoost, multilayer perceptron (MLP), and GradientBoost models achieved the highest AUC scores, all exceeding 0.9, demonstrating strong discriminative power. The LGBM model exhibited the best stability, with a minimal AUC difference of only 0.005 between the training and test sets, suggesting strong generalizability. In the validation group, the GradientBoost classifier achieved the maximum AUC of 0.927, closely followed by random forest at 0.925. GradientBoost also demonstrated high sensitivity (0.911) and negative predictive value (NPV, 0.889), effectively identifying true positives. The LGBM model showed the highest test accuracy (86.2%) and performed excellently in terms of sensitivity (0.911), NPV (0.895), and positive predictive value (PPV, 0.837). The models utilizing high-order features outperformed those based on low-order features on all metrics. SHAP analysis further enhanced model interpretability, providing insight into feature importance and contributions to classification decisions. Machine learning techniques based on radiomics can effectively distinguish GB from SBM, with gradient-boosted tree models such as LGBM demonstrating superior performance. High-order features significantly improve model accuracy and robustness. SHAP enhances the interpretability and transparency of models for distinguishing brain tumors, providing intuitive visualization of the contribution of radiomic features to classification.
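The core pipeline described here (sparse feature selection followed by a gradient-boosted classifier) can be sketched with scikit-learn. In this illustration an L1-penalized logistic regression stands in for LASSO selection, synthetic features stand in for the radiomic ones, and the mRMR step and SHAP analysis are omitted:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# stand-in for radiomic features: 434 cases, 100 features, 10 informative
X, y = make_classification(n_samples=434, n_features=100,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = make_pipeline(
    StandardScaler(),
    # L1-penalized model keeps only features with nonzero coefficients,
    # playing the role of the LASSO selection step
    SelectFromModel(LogisticRegression(penalty="l1",
                                       solver="liblinear", C=0.1)),
    GradientBoostingClassifier(random_state=0),
)
pipe.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, pipe.predict_proba(X_te)[:, 1])
```

Fitting the selector inside the pipeline keeps feature selection within the training fold, avoiding the information leak that occurs when features are selected on the full dataset before splitting.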

Scalable Segmentation for Ultra-High-Resolution Brain MR Images

Xiaoling Hu, Peirong Liu, Dina Zemlyanker, Jonathan Williams Ramirez, Oula Puonti, Juan Eugenio Iglesias

arXiv preprint | May 27, 2025
Although deep learning has shown great success in 3D brain MRI segmentation, achieving accurate and efficient segmentation of ultra-high-resolution brain images remains challenging due to the lack of labeled training data for fine-scale anatomical structures and high computational demands. In this work, we propose a novel framework that leverages easily accessible, low-resolution coarse labels as spatial references and guidance, without incurring additional annotation cost. Instead of directly predicting discrete segmentation maps, our approach regresses per-class signed distance transform maps, enabling smooth, boundary-aware supervision. Furthermore, to enhance scalability, generalizability, and efficiency, we introduce a scalable class-conditional segmentation strategy, where the model learns to segment one class at a time conditioned on a class-specific input. This novel design not only reduces memory consumption during both training and testing, but also allows the model to generalize to unseen anatomical classes. We validate our method through comprehensive experiments on both synthetic and real-world datasets, demonstrating its superior performance and scalability compared to conventional segmentation approaches.
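The per-class signed distance transform targets described above can be built from a binary label map with standard tools: positive distance outside the structure, negative inside, so the segmentation is recoverable by thresholding the regressed map at zero. A 2D sketch:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# toy binary label: a 3x3 square in a 7x7 image
mask = np.zeros((7, 7), dtype=bool)
mask[2:5, 2:5] = True

# signed distance transform: Euclidean distance to the structure outside,
# negative distance to the background inside
sdt = distance_transform_edt(~mask) - distance_transform_edt(mask)

# the mask is recovered by thresholding at zero
recovered = sdt < 0
```

Because the target varies smoothly across the boundary, the regression loss carries gradient information near edges that a hard one-hot target lacks, which is what makes the supervision "boundary-aware".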

Dose calculation in nuclear medicine with magnetic resonance imaging images using Monte Carlo method.

Vu LH, Thao NTP, Trung NT, Hau PVT, Hong Loan TT

PubMed | May 27, 2025
In recent years, scientists have been trying to convert magnetic resonance imaging (MRI) images into computed tomography (CT) images for dose calculations while taking advantage of the benefits of MRI. The main approaches to image conversion are bulk density assignment, Atlas registration, and machine learning. These methods have limitations in accuracy and time consumption and require large datasets. In this study, the novel 'voxels spawn voxels' technique, combined with the 'orthonormalize' feature in Carimas software, was developed to build a conversion dataset from MRI intensity to Hounsfield unit (HU) values for several structural regions, including the gluteus maximus, liver, kidneys, spleen, pancreas, and colon. The original CT images and the converted MRI images were imported into the Geant4/Gamos software for dose calculation. Agreement was good (dose differences <5%) in most organs, with the largest discrepancy in the intestine (18%).
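The paper's 'voxels spawn voxels' technique is not detailed here, but the underlying idea of a per-region mapping from MRI intensity to HU can be illustrated with a simple piecewise-linear lookup; the calibration points below are hypothetical, not values from the study:

```python
import numpy as np

# hypothetical calibration pairs (MRI intensity, HU) for one organ,
# sorted by intensity as np.interp requires
mri_pts = np.array([50.0, 120.0, 300.0, 600.0])
hu_pts = np.array([-100.0, 20.0, 60.0, 80.0])

def mri_to_hu(intensity):
    """Piecewise-linear lookup from MRI intensity to Hounsfield units;
    values outside the calibrated range are clamped to the end points."""
    return np.interp(intensity, mri_pts, hu_pts)

hu = mri_to_hu(np.array([50.0, 85.0, 600.0]))
```

In a Monte Carlo workflow the resulting HU volume is then converted to material and density maps before transport simulation, exactly as a native CT would be.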

Deep learning-based CAD system for Alzheimer's diagnosis using deep downsized KPLS.

Neffati S, Mekki K, Machhout M

PubMed | May 27, 2025
Alzheimer's disease (AD) is the most prevalent type of dementia. It is linked with a gradual decline in various brain functions, such as memory. Many research efforts are now directed toward non-invasive procedures for early diagnosis, because early detection greatly benefits patient care and treatment outcomes. In addition to providing an accurate diagnosis and reducing the rate of misdiagnosis, Computer-Aided Diagnosis (CAD) systems are built to give a definitive diagnosis. This paper presents a novel CAD system to determine the stages of AD. Initially, deep learning techniques are utilized to extract features from AD brain MRIs. The extracted features are then reduced using a proposed feature reduction technique named Deep Downsized Kernel Partial Least Squares (DDKPLS). The proposed approach selects a reduced number of samples from the initial information matrix; the chosen samples give rise to a new data matrix further processed by KPLS to deal with the high dimensionality. The reduced feature space is finally classified using an Extreme Learning Machine (ELM). The implementation is named DDKPLS-ELM. Reference tests performed on the Kaggle MRI dataset exhibit the efficacy of the DDKPLS-based classifier; it achieves an accuracy of up to 95.4% and an F1 score of 95.1%.

PlaNet-S: an Automatic Semantic Segmentation Model for Placenta Using U-Net and SegNeXt.

Saito I, Yamamoto S, Takaya E, Harigai A, Sato T, Kobayashi T, Takase K, Ueda T

PubMed | May 27, 2025
This study aimed to develop a fully automated semantic placenta segmentation model that integrates the U-Net and SegNeXt architectures through ensemble learning. A total of 218 pregnant women with suspected placental abnormalities who underwent magnetic resonance imaging (MRI) were enrolled, yielding 1090 annotated images for developing a deep learning model for placental segmentation. The images were standardized and divided into training and test sets. The performance of the Placental Segmentation Network (PlaNet-S), which integrates U-Net and SegNeXt within an ensemble framework, was assessed using Intersection over Union (IoU) and counting connected components (CCC) against U-Net, U-Net++, and DS-transUNet. PlaNet-S had significantly higher IoU (0.78, SD = 0.10) than U-Net (0.73, SD = 0.13) (p < 0.005) and DS-transUNet (0.64, SD = 0.16) (p < 0.005), while the difference from U-Net++ (0.77, SD = 0.12) was not statistically significant. The CCC for PlaNet-S was significantly higher than that for U-Net (p < 0.005), U-Net++ (p < 0.005), and DS-transUNet (p < 0.005), matching the ground truth in 86.0%, 56.7%, 67.9%, and 20.9% of cases, respectively. PlaNet-S achieved higher IoU than U-Net and DS-transUNet, and IoU comparable to U-Net++. Moreover, PlaNet-S significantly outperformed all three models in CCC, indicating better agreement with the ground truth. This model addresses the challenge of time-consuming, physician-assisted manual segmentation and offers potential for diverse applications in placental imaging analyses.
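The two evaluation metrics used here, IoU and the connected-component count match (CCC), are straightforward to compute on binary masks. A minimal 2D sketch:

```python
import numpy as np
from scipy.ndimage import label

def iou(pred, gt):
    """Intersection over Union between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def ccc_match(pred, gt):
    """True when the prediction has the same number of connected
    components as the ground truth (the CCC agreement criterion)."""
    return label(pred)[1] == label(gt)[1]

# toy masks: prediction matches the placenta region but adds a
# spurious isolated voxel, so IoU stays high but CCC disagrees
gt = np.zeros((8, 8), dtype=bool)
gt[1:4, 1:4] = True
pred = gt.copy()
pred[6, 6] = True
```

CCC penalizes exactly this failure mode: fragmented or spurious components that a high overlap score alone would not reveal.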
