Page 35 of 53522 results

Functional connectome-based predictive modeling of suicidal ideation.

Averill LA, Tamman AJF, Fouda S, Averill CL, Nemati S, Ragnhildstveit A, Gosnell S, Akiki TJ, Salas R, Abdallah CG

pubmed · May 27, 2025
Suicide remains a grave threat to society despite major advancements in medicine, in part due to limited knowledge of the biological mechanisms of suicidal behavior. We applied a connectome-based predictive modeling machine learning approach to identify a reproducible brain network associated with suicidal ideation, with the aim of highlighting possible targets for novel anti-suicidal therapeutics. Patients were recruited from an inpatient facility at The Menninger Clinic in Houston, Texas (N = 261; 181 with active and specific suicidal ideation). All had a current major depressive episode and recurrent major depressive disorder and underwent resting-state functional magnetic resonance imaging. Participants' ages ranged from 18 to 70 (mean ± SEM = 31.6 ± 0.8 years), and 136 (52%) were male. Using this approach, we found a robust and reproducible biomarker of suicidal ideation relative to controls without ideation: increased suicidal ideation was associated with greater internal connectivity and reduced internetwork external connectivity in the central executive, default mode, and dorsal salience networks. We also found that higher external connectivity between the ventral salience and sensorimotor/visual networks was associated with increased suicidal ideation. Overall, these differences may reflect reduced network integration and greater segregation of connectivity in individuals at increased suicide risk. Our findings provide avenues for future work testing novel drugs that target these identified neural alterations, for instance drugs that increase network integration.
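The connectome-based predictive modeling approach described above can be sketched in miniature: select edges whose connectivity correlates with the behavioral score, summarize each subject by summed strength over those edges, and check that the summary tracks the score. This is a toy illustration on synthetic data with an assumed correlation threshold, not the authors' exact pipeline.

```python
import random

random.seed(0)
n_subjects, n_edges = 30, 50

# Synthetic functional-connectivity edges and ideation scores; by construction,
# only the first 5 edges carry signal.
edges = [[random.gauss(0, 1) for _ in range(n_edges)] for _ in range(n_subjects)]
scores = [sum(e[:5]) + random.gauss(0, 0.5) for e in edges]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Step 1: select edges whose connectivity correlates with the ideation score.
selected = [j for j in range(n_edges)
            if abs(pearson([e[j] for e in edges], scores)) > 0.3]

# Step 2: summarize each subject by summed strength over the selected edges.
strength = [sum(e[j] for j in selected) for e in edges]

# Step 3: the network-strength summary should track the behavioral score.
r = pearson(strength, scores)
print(len(selected), round(r, 2))
```

In the real study the edges come from resting-state fMRI connectomes and the model is cross-validated; the threshold of 0.3 here is purely illustrative.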

Advances in Diagnostic Approaches for Alzheimer's Disease: From Biomarkers to Deep Learning Technology.

Asif M, Ullah H, Jamil N, Riaz M, Zain M, Pushparaj PN, Rasool M

pubmed · May 27, 2025
Alzheimer's disease (AD) is a devastating neurological disorder that affects humans and is a major contributor to dementia. It is characterized by cognitive dysfunction, impairing an individual's ability to perform daily tasks. In AD, nerve cells in areas of the brain related to cognitive function are damaged. Despite extensive research, there is currently no specific therapeutic or diagnostic approach for this fatal disease. However, scientists worldwide have developed effective techniques for diagnosing and managing this challenging disorder. Among the various methods used to diagnose AD are feedback from blood relatives and observations of changes in an individual's behavioral and cognitive abilities. Biomarkers, such as amyloid beta and measures of neurodegeneration, aid in the early detection of AD through cerebrospinal fluid (CSF) samples and brain imaging techniques like Magnetic Resonance Imaging (MRI). Advanced medical imaging technologies, including X-ray, CT, MRI, ultrasound, mammography, and PET, provide valuable insights into human anatomy and function. MRI, in particular, is non-invasive and useful for scanning both the structural and functional aspects of the brain. Additionally, machine learning (ML) and deep learning (DL) technologies, especially Convolutional Neural Networks (CNNs), have demonstrated high accuracy in diagnosing AD by detecting brain changes. However, these technologies are intended to support, rather than replace, clinical assessments by medical professionals.

Neurostimulation for the Management of Epilepsy: Advances in Targeted Therapy.

Verma S, Malviya R, Sridhar SB, Sundram S, Shareef J

pubmed · May 27, 2025
Epilepsy is a multifaceted neurological disorder marked by seizures that can present with a wide range of symptoms. Despite advancements in anti-epileptic drugs (AEDs), approximately 20-30% of patients remain drug-resistant, and adverse effects present further obstacles, highlighting the need for innovative therapeutic strategies. This study aimed to explore advancements in epilepsy diagnosis and treatment utilizing modern technology and medicines. The literature survey was carried out using Scopus, ScienceDirect, and Google Scholar, preferentially including data from the last 10 years. Emerging technologies, such as artificial intelligence (AI), gene therapy, and wearable devices, have transformed epilepsy care. EEG and MRI play essential roles in diagnosis, while AI aids in evaluating large datasets for more accurate seizure identification. Machine learning and AI are increasingly integrated into diagnostic workflows to enhance seizure detection and classification. Wearable technology improves patient self-monitoring and supports clinical research. Furthermore, gene therapies offer promise by addressing the fundamental causes of seizure activity, while stem cell therapies provide neuroprotective and regenerative benefits. Dietary interventions, including ketogenic diets, are being examined for their ability to modify neurochemical pathways implicated in epilepsy. Recent technological and therapeutic developments provide major benefits in epilepsy assessment and treatment, with AI and wearable devices enhancing seizure detection and patient monitoring. Nonetheless, additional study is essential to ensure broader clinical application and efficacy. Future perspectives include the potential of optogenetics and advanced signal processing techniques to revolutionize treatment paradigms, emphasizing the importance of personalized medicine in epilepsy care.
Overall, a comprehensive understanding of the multifaceted nature of epilepsy is essential for developing effective interventions and improving patient outcomes.

Multicentre evaluation of deep learning CT autosegmentation of the head and neck region for radiotherapy.

Pang EPP, Tan HQ, Wang F, Niemelä J, Bolard G, Ramadan S, Kiljunen T, Capala M, Petit S, Seppälä J, Vuolukka K, Kiitam I, Zolotuhhin D, Gershkevitsh E, Lehtiö K, Nikkinen J, Keyriläinen J, Mokka M, Chua MLK

pubmed · May 27, 2025
This multi-institutional study evaluated a head-and-neck CT auto-segmentation software across seven institutions globally. Eleven lymph node levels and seven organ-at-risk contours were evaluated in a two-phase study design. Time savings were measured in both phases, and inter-observer variability across the seven institutions was quantified in phase two. Overall time savings were 42% in phase one and 49% in phase two. Lymph node levels IA, IB, III, IVA, and IVB showed no significant time savings, with some centers reporting longer editing times than manual delineation. All edited ROIs showed reduced inter-observer variability compared with manual segmentation. Our study shows that auto-segmentation can play a crucial role in harmonizing contouring practices globally. However, the clinical benefits of auto-segmentation software vary significantly across ROIs and between clinics. To maximize its potential, institution-specific commissioning is required to optimize the clinical benefits.
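The two quantities reported above, editing time savings and inter-observer variability, can be computed as in this sketch. The timings and contour masks are synthetic stand-ins, and pairwise Dice overlap is one common (here assumed) way to quantify inter-observer variability:

```python
def dice(a, b):
    """Dice overlap between two binary masks given as sets of voxel indices."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

# Illustrative per-case delineation times (minutes).
manual_minutes, edited_minutes = 120.0, 61.0
time_savings = 100 * (1 - edited_minutes / manual_minutes)

# Three observers' edited contours for one ROI (toy voxel index sets).
obs = [set(range(0, 100)), set(range(5, 105)), set(range(2, 98))]
pairwise = [dice(obs[i], obs[j]) for i in range(3) for j in range(i + 1, 3)]
mean_dice = sum(pairwise) / len(pairwise)

print(round(time_savings, 1), round(mean_dice, 3))
```

A higher mean pairwise Dice across observers corresponds to the reduced inter-observer variability the study reports for edited ROIs.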

Interpretable Machine Learning Models for Differentiating Glioblastoma From Solitary Brain Metastasis Using Radiomics.

Xia X, Wu W, Tan Q, Gou Q

pubmed · May 27, 2025
To develop and validate interpretable machine learning models for differentiating glioblastoma (GB) from solitary brain metastasis (SBM) using radiomics features from contrast-enhanced T1-weighted MRI (CE-T1WI), and to compare the impact of low-order and high-order features on model performance. A cohort of 434 patients with histopathologically confirmed GB (n = 226) and SBM (n = 208) was retrospectively analyzed. Radiomic features were derived from CE-T1WI, with feature selection conducted through minimum redundancy maximum relevance (mRMR) and least absolute shrinkage and selection operator (LASSO) regression. Machine learning models, including GradientBoost and LightGBM (LGBM), were trained using low-order and high-order features. Model performance was assessed through receiver operating characteristic (ROC) analysis and computation of the area under the curve (AUC), along with other indicators, including accuracy, specificity, and sensitivity. SHapley Additive exPlanations (SHAP) analysis was used to measure the influence of each feature on the model's predictions. The performance of the various machine learning models differed notably between the training and validation datasets. In the training group, the LGBM, CatBoost, multilayer perceptron (MLP), and GradientBoost models achieved the highest AUC scores, all exceeding 0.9, demonstrating strong discriminative power. The LGBM model exhibited the best stability, with a minimal AUC difference of only 0.005 between the training and test sets, suggesting strong generalizability. In the validation group, the GradientBoost classifier achieved the highest AUC (0.927), closely followed by random forest (0.925). GradientBoost also demonstrated high sensitivity (0.911) and negative predictive value (NPV, 0.889), effectively identifying true positives. The LGBM model showed the highest test accuracy (86.2%) and performed excellently in terms of sensitivity (0.911), NPV (0.895), and positive predictive value (PPV, 0.837). Models using high-order features outperformed those based on low-order features on all metrics. Machine learning techniques based on radiomics can effectively distinguish GB from SBM, with gradient boosting tree-based models such as LGBM demonstrating superior performance. High-order features significantly improve model accuracy and robustness. SHAP analysis enhances the interpretability and transparency of the models, providing intuitive visualization of the contribution of radiomic features to classification decisions.
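The metrics reported above (AUC, sensitivity, NPV) can be reproduced on toy data. This sketch computes AUC via the Mann-Whitney rank formulation and the confusion-matrix measures at an assumed 0.5 threshold; the probabilities are invented for illustration, not the study's predictions:

```python
def auc(labels, probs):
    """AUC as the probability that a positive case outranks a negative one."""
    pos = [p for l, p in zip(labels, probs) if l == 1]
    neg = [p for l, p in zip(labels, probs) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0, 1, 0]          # 1 = GB, 0 = SBM (toy ground truth)
probs  = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.7, 0.1]
preds  = [int(p >= 0.5) for p in probs]

tp = sum(l == 1 and p == 1 for l, p in zip(labels, preds))
fn = sum(l == 1 and p == 0 for l, p in zip(labels, preds))
tn = sum(l == 0 and p == 0 for l, p in zip(labels, preds))
fp = sum(l == 0 and p == 1 for l, p in zip(labels, preds))

sensitivity = tp / (tp + fn)   # true-positive rate
npv = tn / (tn + fn)           # negative predictive value
print(round(auc(labels, probs), 4), sensitivity, npv)
```

In practice one would use a library implementation (e.g. scikit-learn's `roc_auc_score`), but the rank formulation above is the same statistic.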

Machine learning decision support model construction for craniotomy approach of pineal region tumors based on MRI images.

Chen Z, Chen Y, Su Y, Jiang N, Wanggou S, Li X

pubmed · May 27, 2025
Pineal region tumors (PRTs) are rare but deep-seated brain tumors, and complete surgical resection is crucial for effective treatment. The choice of surgical approach is often challenging due to the low incidence and deep location. This study aims to combine machine learning and deep learning algorithms with pre-operative MRI images to build a model for recommending surgical approaches for PRTs, striving to model clinical experience for practical reference and education. This retrospective study enrolled a total of 173 patients radiologically diagnosed with PRTs at our hospital. Three traditional surgical approaches were recorded as prediction labels. Clinical and VASARI-related radiological features were selected for machine learning model construction, and MRI images in axial, sagittal, and coronal orientations were used to establish and evaluate deep learning craniotomy approach prediction models. Five machine learning methods were applied to construct predictive classifiers from the clinical and VASARI features, and all achieved area under the receiver operating characteristic (ROC) curve (AUC) values above 0.7. Three deep learning algorithms (ResNet-50, EfficientNetV2-m, and ViT) were also applied to the MRI images from the different orientations. EfficientNetV2-m achieved the highest AUC of 0.89, demonstrating high predictive performance. Class activation mapping revealed that the tumor itself and its surrounding structures are crucial areas for the model's decision-making. In this study, we used machine learning and deep learning to construct surgical approach recommendation models; deep learning achieved high predictive performance and can provide an efficient, personalized decision support tool for the PRT surgical approach.

DeepMultiConnectome: Deep Multi-Task Prediction of Structural Connectomes Directly from Diffusion MRI Tractography

Marcus J. Vroemen, Yuqian Chen, Yui Lo, Tengfei Xu, Weidong Cai, Fan Zhang, Josien P. W. Pluim, Lauren J. O'Donnell

arxiv preprint · May 27, 2025
Diffusion MRI (dMRI) tractography enables in vivo mapping of brain structural connections, but traditional connectome generation is time-consuming and requires gray matter parcellation, posing challenges for large-scale studies. We introduce DeepMultiConnectome, a deep-learning model that predicts structural connectomes directly from tractography, bypassing the need for gray matter parcellation while supporting multiple parcellation schemes. Using a point-cloud-based neural network with multi-task learning, the model classifies streamlines according to their connected regions across two parcellation schemes, sharing a learned representation. We train and validate DeepMultiConnectome on tractography from the Human Connectome Project Young Adult dataset ($n = 1000$), labeled with 84-region and 164-region gray matter parcellation schemes. DeepMultiConnectome predicts multiple structural connectomes from a whole-brain tractogram containing 3 million streamlines in approximately 40 seconds. DeepMultiConnectome is evaluated by comparing its predicted connectomes with traditional connectomes generated by the conventional method of labeling streamlines with a gray matter parcellation. The predicted connectomes are highly correlated with traditionally generated connectomes ($r = 0.992$ for the 84-region scheme; $r = 0.986$ for the 164-region scheme) and largely preserve network properties. A test-retest analysis of DeepMultiConnectome demonstrates reproducibility comparable to traditionally generated connectomes. The predicted connectomes perform similarly to traditionally generated connectomes in predicting age and cognitive function. Overall, DeepMultiConnectome provides a scalable, fast model for generating subject-specific connectomes across multiple parcellation schemes.
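The correlation-based comparison of predicted and traditionally generated connectomes can be illustrated with toy 4-region matrices: Pearson r is computed over the upper-triangular edge weights, since a connectome matrix is symmetric. The matrix values below are invented, not DeepMultiConnectome outputs:

```python
def upper_tri(m):
    """Flatten the strictly upper-triangular entries of a square matrix."""
    n = len(m)
    return [m[i][j] for i in range(n) for j in range(i + 1, n)]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Toy 4-region connectomes: entry [i][j] counts streamlines linking i and j.
traditional = [[0, 12, 3, 7], [12, 0, 5, 1], [3, 5, 0, 9], [7, 1, 9, 0]]
predicted   = [[0, 11, 4, 7], [11, 0, 5, 2], [4, 5, 0, 8], [7, 2, 8, 0]]

r = pearson(upper_tri(traditional), upper_tri(predicted))
print(round(r, 3))
```

For an 84-region parcellation the same computation would run over 84·83/2 = 3486 edge weights per subject.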

Deep Learning Auto-segmentation of Diffuse Midline Glioma on Multimodal Magnetic Resonance Images.

Fernández-Patón M, Montoya-Filardi A, Galiana-Bordera A, Martínez-Gironés PM, Veiga-Canuto D, Martínez de Las Heras B, Cerdá-Alberich L, Martí-Bonmatí L

pubmed · May 27, 2025
Diffuse midline glioma (DMG), H3 K27M-altered, is a rare pediatric brainstem cancer with poor prognosis. To advance the development of predictive models and gain a deeper understanding of DMG, there is a crucial need for seamlessly integrated, highly accurate automatic tumor segmentation techniques. Only one previous method has addressed this task in this cancer; this study therefore develops a modified CNN-based 3D U-Net tool to automatically and accurately segment DMG in magnetic resonance (MR) images. The dataset consisted of 52 DMG patients and 70 images, each with T1W and T2W or FLAIR images. Three datasets were created: T1W images, T2W/FLAIR images, and a combined set of T1W and T2W/FLAIR images. Denoising, bias field correction, spatial resampling, and normalization were applied as preprocessing steps, and patching techniques were used to enlarge the dataset. For tumor segmentation, a 3D U-Net architecture with residual blocks was used. The best results were obtained for the dataset comprising all T1W and T2W/FLAIR images, reaching an average Dice Similarity Coefficient (DSC) of 0.883 on the test dataset. These results are comparable to other brain tumor segmentation models and to state-of-the-art results in DMG segmentation using fewer sequences. Our results demonstrate the effectiveness of the proposed 3D U-Net architecture for DMG tumor segmentation. This advancement holds potential for enhancing the precision of diagnostic and predictive models in the context of this challenging pediatric cancer.
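The patching step mentioned above, enlarging a small dataset by tiling volumes into overlapping sub-volumes, might look like the following sketch. The volume shape, patch size, and stride are illustrative assumptions, not the study's actual values:

```python
def extract_patches(shape, patch, stride):
    """Return the start indices of all 3-D patches that fit inside `shape`."""
    starts = []
    for z in range(0, shape[0] - patch[0] + 1, stride):
        for y in range(0, shape[1] - patch[1] + 1, stride):
            for x in range(0, shape[2] - patch[2] + 1, stride):
                starts.append((z, y, x))
    return starts

# A toy 16^3 volume tiled into overlapping 8^3 patches with stride 4:
starts = extract_patches(shape=(16, 16, 16), patch=(8, 8, 8), stride=4)
print(len(starts))  # 3 valid positions per axis -> 27 patches
```

Overlapping strides multiply the number of training examples per volume, which matters when only 70 images are available.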

Multi-instance Learning as Downstream Task of Self-Supervised Learning-based Pre-trained Model

Koki Matsuishi, Tsuyoshi Okita

arxiv preprint · May 27, 2025
In deep multi-instance learning, the number of applicable instances depends on the dataset. In histopathology images, deep multi-instance learners usually assume there are hundreds to thousands of instances in a bag. However, when the number of instances in a bag increases to 256 in brain hematoma CT, learning becomes extremely difficult. In this paper, we address this drawback: we propose using a model pre-trained with self-supervised learning, with the multi-instance learner as a downstream task. With this method, even when the original target task suffers from the spurious correlation problem, we show improvements of 5% to 13% in accuracy and 40% to 55% in the F1 measure for the hypodensity marker classification of brain hematoma CT.
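The bag-level setting described above can be illustrated with a toy multi-instance pooling rule: a bag of 256 instance scores is reduced to one bag-level prediction. Max pooling here is a simple stand-in for the paper's learned MIL head on self-supervised features:

```python
def bag_predict(instance_scores, threshold=0.5):
    """A bag is positive if its most suspicious instance crosses the threshold."""
    return int(max(instance_scores) >= threshold)

# Toy bags of 256 per-patch scores from a brain hematoma CT slice set.
negative_bag = [0.1] * 256            # all patches look unremarkable
positive_bag = [0.1] * 255 + [0.9]    # one hypodense patch stands out

print(bag_predict(negative_bag), bag_predict(positive_bag))
```

The difficulty the paper targets is visible even in this toy: with 256 instances, a single noisy instance score can flip the bag label, which is why a stronger pre-trained instance representation helps.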

Scalable Segmentation for Ultra-High-Resolution Brain MR Images

Xiaoling Hu, Peirong Liu, Dina Zemlyanker, Jonathan Williams Ramirez, Oula Puonti, Juan Eugenio Iglesias

arxiv preprint · May 27, 2025
Although deep learning has shown great success in 3D brain MRI segmentation, achieving accurate and efficient segmentation of ultra-high-resolution brain images remains challenging due to the lack of labeled training data for fine-scale anatomical structures and high computational demands. In this work, we propose a novel framework that leverages easily accessible, low-resolution coarse labels as spatial references and guidance, without incurring additional annotation cost. Instead of directly predicting discrete segmentation maps, our approach regresses per-class signed distance transform maps, enabling smooth, boundary-aware supervision. Furthermore, to enhance scalability, generalizability, and efficiency, we introduce a scalable class-conditional segmentation strategy, where the model learns to segment one class at a time conditioned on a class-specific input. This novel design not only reduces memory consumption during both training and testing, but also allows the model to generalize to unseen anatomical classes. We validate our method through comprehensive experiments on both synthetic and real-world datasets, demonstrating its superior performance and scalability compared to conventional segmentation approaches.
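The per-class signed distance transform regression target described above can be illustrated in one dimension: for each class, the distance to the nearest voxel of the complementary region, negated inside the class. A tiny 1-D label row stands in for the paper's 3-D volumes, and this brute-force distance computation is only a sketch:

```python
def signed_distance(labels, cls):
    """Per-voxel signed distance to the boundary of `cls` (negative inside)."""
    n = len(labels)
    inside = [l == cls for l in labels]
    # Unsigned distance to the nearest voxel with the opposite membership.
    dist = [min(abs(i - j) for j in range(n) if inside[j] != inside[i])
            for i in range(n)]
    # Sign convention: negative inside the class, positive outside.
    return [-d if inside[i] else d for i, d in enumerate(dist)]

row = [0, 0, 1, 1, 1, 0]
print(signed_distance(row, cls=1))  # [2, 1, -1, -2, -1, 1]
```

Regressing this smooth map instead of a hard 0/1 mask gives the boundary-aware supervision the paper describes; the final segmentation can be recovered by thresholding the predicted map at zero. In practice one would use an efficient Euclidean distance transform (e.g. SciPy's `distance_transform_edt`) rather than this O(n²) loop.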
