
Interpretable machine learning model for characterizing magnetic susceptibility-based biomarkers in first episode psychosis.

Franco P, Montalba C, Caulier-Cisterna R, Milovic C, González A, Ramirez-Mahaluf JP, Undurraga J, Salas R, Crossley N, Tejos C, Uribe S

PubMed · Sep 6, 2025
Several studies have shown changes in neurochemicals within the deep brain nuclei of patients with psychosis. These alterations indicate a dysfunction in dopamine within subcortical regions affected by fluctuations in iron concentration. Quantitative Susceptibility Mapping (QSM) is a method for measuring iron concentration, offering a potential means to identify dopamine dysfunction in these subcortical areas. This study employed a random forest algorithm to predict First-Episode Psychosis (FEP) and the response to antipsychotics from susceptibility features, interpreted using SHapley Additive exPlanations (SHAP) values. 3D multi-echo Gradient Echo (GRE) and T1-weighted GRE images were obtained in 61 healthy volunteers (HV) and 76 FEP patients (32% Treatment-Resistant Schizophrenia (TRS) and 68% Treatment-Responsive Schizophrenia (RS)) using a 3T Philips Ingenia MRI scanner. QSM and R2* maps were reconstructed and averaged within twenty-two segmented regions of interest. We used Sequential Forward Selection for feature selection and a Random Forest to classify FEP patients and their response to antipsychotics. We further applied the SHAP framework to identify informative features and their interpretations. Finally, multiple correlation patterns among the magnetic susceptibility parameters were extracted using hierarchical clustering. Our approach classified HV vs FEP patients with 76.48 ± 10.73% accuracy and TRS vs RS patients with 76.43 ± 12.57% accuracy (four features each), using 10-fold stratified cross-validation. The SHAP analyses indicated the top four nonlinear relationships among the selected features. Hierarchical clustering revealed two groups of correlated features for each study. Early prediction of treatment response enables tailored strategies for FEP patients with treatment resistance, ensuring timely and effective interventions.
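
A minimal sketch of how such a pipeline could look, assuming region-averaged QSM/R2* values are already arranged in a subject-by-feature matrix X with binary labels y; the placeholder data, sizes, and hyperparameters below are illustrative assumptions, not the authors' code.

# Forward feature selection, random forest, 10-fold stratified cross-validation,
# and SHAP interpretation on region-averaged susceptibility features (sketch only).
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(137, 44))          # placeholder: 22 ROIs x (QSM, R2*) per subject
y = rng.integers(0, 2, size=137)        # placeholder labels: 0 = HV, 1 = FEP

rf = RandomForestClassifier(n_estimators=300, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

# Sequential Forward Selection down to four features, as reported in the abstract.
sfs = SequentialFeatureSelector(rf, n_features_to_select=4, direction="forward", cv=cv)
sfs.fit(X, y)
X_sel = sfs.transform(X)

acc = cross_val_score(rf, X_sel, y, cv=cv, scoring="accuracy")
print(f"10-fold accuracy: {acc.mean():.3f} +/- {acc.std():.3f}")

# SHAP values to interpret the four selected features.
rf.fit(X_sel, y)
explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X_sel)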

A novel multimodal framework combining habitat radiomics, deep learning, and conventional radiomics for predicting MGMT gene promoter methylation in Glioma: Superior performance of integrated models.

Zhu FY, Chen WJ, Chen HY, Ren SY, Zhuo LY, Wang TD, Ren CC, Yin XP, Wang JN

PubMed · Sep 6, 2025
The present study aimed to develop a noninvasive predictive framework that integrates clinical data, conventional radiomics, habitat imaging, and deep learning for the preoperative stratification of MGMT gene promoter methylation in glioma. This retrospective study included 410 patients from the University of California, San Francisco, USA, and 102 patients from our hospital. Seven models were constructed using preoperative contrast-enhanced T1-weighted MRI with gadobenate dimeglumine as the contrast agent. Habitat radiomics features were extracted from tumor subregions defined by k-means clustering, while deep learning features were acquired using a 3D convolutional neural network. Model performance was evaluated based on area under the curve (AUC) value, F1-score, and decision curve analysis. The combined model integrating clinical data, conventional radiomics, habitat imaging features, and deep learning achieved the highest performance (training AUC = 0.979 [95% CI: 0.969-0.990], F1-score = 0.944; testing AUC = 0.777 [0.651-0.904], F1-score = 0.711). Among the single-modality models, habitat radiomics outperformed the other models (training AUC = 0.960 [0.954-0.983]; testing AUC = 0.724 [0.573-0.875]). The proposed multimodal framework considerably enhances preoperative prediction of MGMT gene promoter methylation, with habitat radiomics highlighting the critical role of tumor heterogeneity. This approach provides a scalable tool for personalized management of glioma.
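
A simplified sketch of the habitat-definition step under stated assumptions: voxels inside a tumor mask are clustered with k-means into subregions ("habitats"), after which per-habitat features can be computed. The volume, mask, number of clusters, and first-order statistics below are placeholders; a real habitat-radiomics pipeline would use multi-parametric inputs and a dedicated radiomics toolbox.

# Define tumor habitats by k-means over voxel intensities, then summarize each habitat.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
volume = rng.normal(size=(64, 64, 64))            # placeholder contrast-enhanced T1 volume
tumor_mask = np.zeros(volume.shape, dtype=bool)
tumor_mask[20:40, 20:40, 20:40] = True            # placeholder tumor segmentation

voxels = volume[tumor_mask].reshape(-1, 1)
k = 3                                             # number of habitats (assumed)
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(voxels)

habitat_map = np.zeros(volume.shape, dtype=int)
habitat_map[tumor_mask] = labels + 1              # 0 = background, 1..k = habitats

features = {}
for h in range(1, k + 1):
    vals = volume[habitat_map == h]
    features[f"habitat{h}"] = {
        "volume_fraction": vals.size / voxels.size,
        "mean": float(vals.mean()),
        "std": float(vals.std()),
    }
print(features)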

Prenatal diagnosis of cerebellar hypoplasia in fetal ultrasound using deep learning under the constraint of the anatomical structures of the cerebellum and cistern.

Wu X, Liu F, Xu G, Ma Y, Cheng C, He R, Yang A, Gan J, Liang J, Wu X, Zhao S

PubMed · Sep 5, 2025
The objective of this retrospective study is to develop and validate an artificial intelligence model constrained by the anatomical structure of the brain, with the aim of improving the accuracy of prenatal diagnosis of fetal cerebellar hypoplasia using ultrasound imaging. Fetal central nervous system dysplasia is one of the most prevalent congenital malformations, and cerebellar hypoplasia represents a significant manifestation of this anomaly. Accurate clinical diagnosis is important for prenatal screening of fetal health. Although ultrasound has been extensively utilized to assess fetal development, accurate assessment of cerebellar development remains challenging due to the inherent limitations of ultrasound imaging, including low resolution, artifacts, and acoustic shadowing of the skull. This retrospective study included 302 cases diagnosed with cerebellar hypoplasia and 549 normal pregnancies collected from the Maternal and Child Health Hospital of Hubei Province between September 2019 and September 2023. For each case, experienced ultrasound physicians selected appropriate brain ultrasound images and delineated the boundaries of the skull, cerebellum, and cerebellomedullary cistern. These cases were divided into one training set and two test sets based on the examination dates. This study proposed a dual-branch deep learning classification network, the anatomical structure-constrained network (ASC-Net), which took ultrasound images and anatomical structure masks as separate inputs. The performance of ASC-Net was extensively evaluated and compared with several state-of-the-art deep learning networks, and the impact of the anatomical structures on its performance was carefully examined. ASC-Net demonstrated superior performance in the diagnosis of cerebellar hypoplasia, achieving classification accuracies of 0.9778 and 0.9222, as well as areas under the receiver operating characteristic curve of 0.9986 and 0.9265, on the two test sets. These results significantly outperformed several state-of-the-art networks on the same dataset. In comparison with other studies on auxiliary diagnosis of cerebellar hypoplasia, ASC-Net demonstrated comparable or even better performance. A subgroup analysis revealed that ASC-Net was more capable of distinguishing cerebellar hypoplasia in cases with gestational age greater than 30 weeks. Furthermore, when constrained by the anatomical structures of both the cerebellum and cistern, ASC-Net exhibited the best performance compared with other kinds of structural constraint. The development and validation of ASC-Net have significantly enhanced the accuracy of prenatal diagnosis of cerebellar hypoplasia using ultrasound images. This study highlights the importance of the anatomical structures of the fetal cerebellum and cistern for the performance of diagnostic artificial intelligence models in ultrasound. This might provide new insights for the clinical diagnosis of cerebellar hypoplasia, assist clinicians in providing more targeted advice and treatment during pregnancy, and contribute to improved perinatal healthcare. ASC-Net is open-sourced and publicly available in a GitHub repository at https://github.com/Wwwwww111112/ASC-Net.
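
A minimal PyTorch sketch of a dual-branch classifier in the spirit described above, with the ultrasound image and the anatomical-structure masks encoded by separate branches before fusion. The layer sizes, channel assignments, and fusion strategy are assumptions for illustration; the released ASC-Net in the linked repository is the authoritative implementation.

# Dual-branch image + anatomical-mask classifier (illustrative sketch, not ASC-Net itself).
import torch
import torch.nn as nn

def conv_branch(in_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class DualBranchNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.image_branch = conv_branch(1)   # grayscale ultrasound image
        self.mask_branch = conv_branch(3)    # skull / cerebellum / cistern masks (assumed channels)
        self.head = nn.Linear(32 + 32, n_classes)

    def forward(self, image, mask):
        feats = torch.cat([self.image_branch(image), self.mask_branch(mask)], dim=1)
        return self.head(feats)

model = DualBranchNet()
logits = model(torch.randn(2, 1, 224, 224), torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 2])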

Prediction of intracranial aneurysm rupture from computed tomography angiography using an automated artificial intelligence framework.

Choi JH, Sobisch J, Kim M, Park JC, Ahn JS, Kwun BD, Špiclin Ž, Bizjak Ž, Park W

PubMed · Sep 5, 2025
Intracranial aneurysms (IAs) are common vascular pathologies with a risk of fatal rupture. Human assessment of rupture risk is error prone, and treatment decisions for unruptured IAs often rely on expert opinion and institutional policy. Therefore, we aimed to develop a computer-assisted aneurysm rupture prediction framework to help guide the decision-making process and create future decision criteria. This retrospective study included 335 patients with 500 IAs; 250 IAs were labeled as ruptured and 250 as unruptured. A skilled radiologist and a neurosurgeon visually examined the computed tomography angiography (CTA) images and labeled the IAs. For external validation, we included 24 IAs (10 ruptured and 15 unruptured) imaged with 3D rotational angiography (3D-RA) from the Aneurisk dataset. A pretrained nnU-Net model was used for automated vessel segmentation, whose output was fed to pretrained PointNet++ models for vessel labeling and aneurysm segmentation. From these, latent keypoint representations were extracted as vessel shape and aneurysm shape features, respectively. Additionally, conventional features, such as IA morphological measurements and location, and patient data, such as age and sex, were used for training and testing eight machine learning models for rupture status classification. The top-performing model, a random forest with feature selection, achieved an area under the receiver operating characteristic curve (AUC) of 0.851, an accuracy of 0.782, a sensitivity of 0.804, and a specificity of 0.760. This model used 14 aneurysm shape features, seven conventional features, and one vessel shape feature. On the external dataset, it achieved an AUC of 0.805. While aneurysm shape features consistently contributed significantly across the classification models, vessel shape features contributed only a small portion. Our proposed automated artificial intelligence framework could assist in clinical decision-making by assessing aneurysm rupture risk from screening examinations such as CTA and 3D-RA.
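
A hedged sketch of the final classification stage only: aneurysm-shape, vessel-shape, and conventional features are concatenated, a random forest predicts rupture status, and group-wise contribution is probed with permutation importance. Feature counts match the abstract, but the data, split, and importance analysis below are illustrative assumptions rather than the authors' framework.

# Rupture-status classification from concatenated feature groups (sketch only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
aneurysm_shape = rng.normal(size=(n, 14))     # latent keypoint features (placeholder)
vessel_shape = rng.normal(size=(n, 1))
conventional = rng.normal(size=(n, 7))        # morphology, location, age, sex, ... (placeholder)
X = np.hstack([aneurysm_shape, vessel_shape, conventional])
y = rng.integers(0, 2, size=n)                # 1 = ruptured, 0 = unruptured

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))

# Contribution of each feature group, estimated by permutation importance.
imp = permutation_importance(rf, X_te, y_te, scoring="roc_auc", n_repeats=20, random_state=0)
groups = {"aneurysm shape": slice(0, 14), "vessel shape": slice(14, 15), "conventional": slice(15, 22)}
for name, sl in groups.items():
    print(name, imp.importances_mean[sl].sum())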

Semi-supervised Deep Transfer for Regression without Domain Alignment

Mainak Biswas, Ambedkar Dukkipati, Devarajan Sridharan

arXiv preprint · Sep 5, 2025
Deep learning models deployed in real-world applications (e.g., medicine) face challenges because source models do not generalize well to domain-shifted target data. Many successful domain adaptation (DA) approaches require full access to source data. Yet, such requirements are unrealistic in scenarios where source data cannot be shared either because of privacy concerns or because it is too large and incurs prohibitive storage or computational costs. Moreover, resource constraints may limit the availability of labeled targets. We illustrate this challenge in a neuroscience setting where source data are unavailable, labeled target data are meager, and predictions involve continuous-valued outputs. We build upon Contradistinguisher (CUDA), an efficient framework that learns a shared model across the labeled source and unlabeled target samples, without intermediate representation alignment. Yet, CUDA was designed for unsupervised DA, with full access to source data, and for classification tasks. We develop CRAFT -- a Contradistinguisher-based Regularization Approach for Flexible Training -- for source-free (SF), semi-supervised transfer of pretrained models in regression tasks. We showcase the efficacy of CRAFT in two neuroscience settings: gaze prediction with electroencephalography (EEG) data and "brain age" prediction with structural MRI data. For both datasets, CRAFT yielded up to 9% improvement in root-mean-squared error (RMSE) over fine-tuned models when labeled training examples were scarce. Moreover, CRAFT leveraged unlabeled target data and outperformed four competing state-of-the-art source-free domain adaptation models by more than 3%. Lastly, we demonstrate the efficacy of CRAFT on two other real-world regression benchmarks. We propose CRAFT as an efficient approach for source-free, semi-supervised deep transfer for regression that is ubiquitous in biology and medicine.
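
A highly simplified PyTorch sketch of the source-free, semi-supervised setting: a pretrained model is fine-tuned with a supervised loss on the few labeled target samples plus a regularization term on abundant unlabeled target data (here a simple consistency penalty under input noise). This is an illustrative stand-in for the general recipe, not the actual CRAFT/contradistinguisher objective; the model, data, and weighting are placeholders.

# Source-free, semi-supervised regression fine-tuning (illustrative sketch).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))  # stand-in for a pretrained source model
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
mse = nn.MSELoss()

x_lab = torch.randn(32, 64); y_lab = torch.randn(32, 1)   # scarce labeled target data
x_unlab = torch.randn(256, 64)                            # abundant unlabeled target data
lam = 0.1                                                  # weight of the unlabeled term (assumed)

for step in range(100):
    opt.zero_grad()
    sup = mse(model(x_lab), y_lab)
    # consistency on unlabeled targets: predictions should be stable under small perturbations
    noisy = x_unlab + 0.05 * torch.randn_like(x_unlab)
    unsup = mse(model(noisy), model(x_unlab).detach())
    loss = sup + lam * unsup
    loss.backward()
    opt.step()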

A Replicable and Generalizable Neuroimaging-Based Indicator of Pain Sensitivity Across Individuals.

Zhang LB, Lu XJ, Zhang HJ, Wei ZX, Kong YZ, Tu YH, Iannetti GD, Hu L

PubMed · Sep 5, 2025
Revealing the neural underpinnings of pain sensitivity is crucial for understanding how the brain encodes individual differences in pain and for advancing personalized pain treatments. Here, six large and diverse functional magnetic resonance imaging (fMRI) datasets (total N = 1046) are leveraged to uncover the neural mechanisms of pain sensitivity. Replicable and generalizable correlations are found between nociceptive-evoked fMRI responses and pain sensitivity for laser heat, contact heat, and mechanical pain. These fMRI responses correlate more strongly with pain sensitivity than with tactile, auditory, and visual sensitivity. Moreover, a machine learning model is developed that accurately predicts not only pain sensitivity (r = 0.20-0.56, all p < 0.05) but also the analgesic effects of different treatments in healthy individuals (r = 0.17-0.25, all p < 0.05). Notably, these findings are influenced considerably by sample size, requiring >200 participants for univariate whole-brain correlation analysis and >150 for multivariate machine learning modeling. Altogether, this study demonstrates that fMRI activations encode pain sensitivity across various types of pain, thus facilitating interpretation of subjective pain reports and promoting more mechanistically informed investigations into pain physiology.
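
A minimal sketch of the kind of cross-validated brain-to-behavior prediction reported above: a linear model predicts pain sensitivity from nociceptive-evoked fMRI response features, scored by the Pearson correlation between predicted and observed values. The feature matrix, model choice, and fold structure are assumptions for illustration, not the study's actual modeling pipeline.

# Cross-validated prediction of pain sensitivity from fMRI response features (sketch).
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(1046, 200))   # placeholder fMRI response features per participant
y = rng.normal(size=1046)          # placeholder pain-sensitivity ratings

model = RidgeCV(alphas=np.logspace(-3, 3, 13))
y_pred = cross_val_predict(model, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0))
r, p = pearsonr(y, y_pred)
print(f"r = {r:.2f}, p = {p:.3g}")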

Interpretable Transformer Models for rs-fMRI Epilepsy Classification and Biomarker Discovery

Jeyabose Sundar, A., Boerwinkle, V. L., Robinson Vimala, B., Leggio, O., Kazemi, M.

medRxiv preprint · Sep 4, 2025
Background: Automated interpretation of resting-state fMRI (rs-fMRI) for epilepsy diagnosis remains a challenge. We developed a regularized transformer that models parcel-wise spatial patterns and long-range temporal dynamics to classify epilepsy and generate interpretable, network-level candidate biomarkers. Methods: Inputs were Schaefer-200 parcel time series extracted after standardized preprocessing (fMRIPrep). The regularized transformer is an attention-based sequence model with learned positional encoding and multi-head self-attention, combined with fMRI-specific regularization (dropout, weight decay, gradient clipping) and augmentation to improve robustness on modest clinical cohorts. Training used stratified group 4-fold cross-validation on n = 65 (30 epilepsy, 35 controls) with fMRI-specific augmentation (time-warping, adaptive noise, structured masking). We compared the transformer with seven baselines (MLP, 1D-CNN, LSTM, CNN-LSTM, GCN, GAT, Attention-Only). External validation used an independent set (10 UNC epilepsy cohort, 10 controls). Biomarker discovery combined gradient-based attributions with parcel-wise statistics and connectivity contrasts. Results: On an illustrative best-performing fold, the transformer attained an accuracy of 0.77, sensitivity of 0.83, specificity of 0.88, F1-score of 0.77, and AUC of 0.76. Averaged cross-validation performance was lower but consistent with these findings. External testing yielded an accuracy of 0.60, AUC of 0.64, specificity of 0.80, and sensitivity of 0.40. Attribution-guided analysis identified distributed, network-level candidate biomarkers concentrated in limbic, somatomotor, default-mode, and salience systems. Conclusions: A regularized transformer on parcel-level rs-fMRI can achieve strong within-fold discrimination and produce interpretable candidate biomarkers. The results are encouraging but preliminary; larger multi-site validation, stability testing, and multiple-comparison control are required prior to clinical translation.
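
A minimal PyTorch sketch in the spirit of the model described above: a transformer encoder over Schaefer-200 parcel time series with a learned positional encoding, dropout, weight decay, and gradient clipping, pooled over time for a two-class output. The dimensions, sequence length, and training step are assumptions for illustration, not the authors' architecture or hyperparameters.

# Transformer classifier over parcel time series (illustrative sketch).
import torch
import torch.nn as nn

class ParcelTransformer(nn.Module):
    def __init__(self, n_parcels=200, d_model=128, n_heads=4, n_layers=2, max_len=300):
        super().__init__()
        self.embed = nn.Linear(n_parcels, d_model)
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))   # learned positional encoding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=256,
                                           dropout=0.3, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 2)

    def forward(self, x):                      # x: (batch, time, parcels)
        h = self.embed(x) + self.pos[:, : x.shape[1]]
        h = self.encoder(h)
        return self.head(h.mean(dim=1))        # temporal average pooling before the classifier

model = ParcelTransformer()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-2)  # weight decay as regularization
x = torch.randn(8, 180, 200)                   # 8 scans, 180 TRs, 200 parcels (placeholder)
y = torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)         # gradient clipping
opt.step()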

TauGenNet: Plasma-Driven Tau PET Image Synthesis via Text-Guided 3D Diffusion Models

Yuxin Gong, Se-in Jang, Wei Shao, Yi Su, Kuang Gong

arXiv preprint · Sep 4, 2025
Accurate quantification of tau pathology via tau positron emission tomography (PET) scans is crucial for diagnosing and monitoring Alzheimer's disease (AD). However, the high cost and limited availability of tau PET restrict its widespread use. In contrast, structural magnetic resonance imaging (MRI) and plasma-based biomarkers provide non-invasive and widely available complementary information related to brain anatomy and disease progression. In this work, we propose a text-guided 3D diffusion model for 3D tau PET image synthesis, leveraging multimodal conditions from both structural MRI and plasma measurements. Specifically, the textual prompt is derived from the plasma p-tau217 measurement, which is a key indicator of AD progression, while MRI provides anatomical structure constraints. The proposed framework is trained and evaluated using clinical AV1451 tau PET data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Experimental results demonstrate that our approach can generate realistic, clinically meaningful 3D tau PET images across a range of disease stages. The proposed framework can help perform tau PET data augmentation under different settings, provide a non-invasive, cost-effective alternative for visualizing tau pathology, and support the simulation of disease progression under varying plasma biomarker levels and cognitive conditions.
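
An illustrative sketch, simplified far beyond the paper's model, of how the two conditions could enter a diffusion denoiser: the structural MRI is concatenated as an extra input channel, and an embedding of the plasma p-tau217 text prompt is broadcast as additional channels, with the network trained to predict the added noise. The prompt string, embedding, network, and noise schedule below are all placeholders.

# Toy conditional denoiser for MRI- and plasma-conditioned tau PET synthesis (sketch only).
import torch
import torch.nn as nn

class ToyConditionalDenoiser(nn.Module):
    def __init__(self, d_cond=32):
        super().__init__()
        self.cond_proj = nn.Linear(d_cond, 8)                 # projected text/plasma embedding
        self.net = nn.Sequential(
            nn.Conv3d(2 + 8, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1),
        )

    def forward(self, noisy_pet, mri, cond_emb):
        b, _, d, h, w = noisy_pet.shape
        c = self.cond_proj(cond_emb).view(b, 8, 1, 1, 1).expand(b, 8, d, h, w)
        x = torch.cat([noisy_pet, mri, c], dim=1)
        return self.net(x)                                     # predicted noise

prompt = "plasma p-tau217: 0.42 pg/mL"     # hypothetical prompt; real text encoding omitted
cond_emb = torch.randn(1, 32)              # stand-in for an encoded prompt
pet, mri = torch.randn(1, 1, 32, 32, 32), torch.randn(1, 1, 32, 32, 32)
noise = torch.randn_like(pet)
alpha_bar_sqrt = 0.7                        # stand-in for the noise schedule at timestep t
noisy = alpha_bar_sqrt * pet + (1 - alpha_bar_sqrt ** 2) ** 0.5 * noise
loss = nn.functional.mse_loss(ToyConditionalDenoiser()(noisy, mri, cond_emb), noise)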

Deep Learning for Segmenting Ischemic Stroke Infarction in Non-contrast CT Scans by Utilizing Asymmetry.

Sun J, Ju GL, Qu YH, Xie HH, Sun HX, Han SY, Li YF, Jia XQ, Yang Q

PubMed · Sep 4, 2025
Non-contrast computed tomography (NCCT) is a first-line imaging technique for determining treatment options for acute ischemic stroke (AIS). However, its poor contrast and signal-to-noise ratio limit diagnostic accuracy for radiologists, and automated AIS lesion segmentation on NCCT also remains a challenge. This study aims to develop a segmentation method for ischemic lesions in NCCT scans that combines symmetry-based principles with the nnU-Net segmentation model. Our approach integrates a Generative Module (GM) based on a 2.5D ResUNet and an Upstream Segmentation Module (UM) with additional inputs and constraints under the 3D nnU-Net segmentation model, using symmetry-based learning to enhance the identification and segmentation of ischemic regions. We utilized the publicly accessible AISD dataset for our experiments. This dataset contains 397 NCCT scans of acute ischemic stroke taken within 24 h of symptom onset. Our method was trained and validated using 345 scans, while the remaining 52 scans were used for internal testing. Additionally, we included 60 positive cases (External Set 1) with segmentation labels obtained from our hospital for external validation of the segmentation task. External Set 2 was employed to evaluate the model's sensitivity and specificity in case-level classification, further assessing its clinical performance. We introduced innovative features such as an intensity-based lesion probability (ILP) function and specific input channels for suspected lesion areas to augment the model's sensitivity and specificity. The method demonstrated commendable segmentation efficacy, attaining a Dice Similarity Coefficient (DSC) of 0.6720 and a 95th-percentile Hausdorff Distance (HD95) of 35.28 on the internal test dataset. Similarly, on the external test dataset, the method yielded satisfactory segmentation outcomes, with a DSC of 0.4891 and an HD95 of 46.06. These metrics reflect a substantial overlap with expert-drawn boundaries and demonstrate the model's potential for reliable clinical application. In terms of classification performance, the method achieved an Area Under the Curve (AUC) of 0.991 on the external test set, surpassing the performance of nnU-Net, which recorded an AUC of 0.947. This study introduces a novel segmentation technique for ischemic lesions in NCCT scans that leverages symmetry-based principles integrated with nnU-Net and shows potential for improving clinical decision-making in stroke care.
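
A minimal sketch of the symmetry idea only, under stated assumptions: the NCCT volume is mirrored across the midsagittal plane and subtracted to produce an asymmetry map, from which a crude intensity-based lesion probability can be derived and stacked with the image as extra input channels for a segmentation network. The axis convention, lesion-probability heuristic, and channel layout are placeholders, not the published GM/UM pipeline.

# Asymmetry map and intensity-based lesion probability for NCCT (illustrative sketch).
import numpy as np

def asymmetry_map(ct_volume, axis=2):
    """Left-right asymmetry, assuming the volume is already midline-aligned."""
    mirrored = np.flip(ct_volume, axis=axis)
    return ct_volume - mirrored            # hypodense ischemia gives negative values

rng = np.random.default_rng(0)
ct = rng.normal(loc=30.0, scale=5.0, size=(32, 256, 256))   # placeholder NCCT in HU
asym = asymmetry_map(ct)

# Crude intensity-based lesion probability: voxels darker than their mirrored counterpart.
lesion_prob = np.clip(-asym, 0, None)
lesion_prob = lesion_prob / (lesion_prob.max() + 1e-8)

model_input = np.stack([ct, asym, lesion_prob], axis=0)      # channels for the segmentation network
print(model_input.shape)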

Temporally-Aware Diffusion Model for Brain Progression Modelling with Bidirectional Temporal Regularisation

Mattia Litrico, Francesco Guarnera, Mario Valerio Giuffrida, Daniele Ravì, Sebastiano Battiato

arXiv preprint · Sep 3, 2025
Generating realistic MRIs that accurately predict future changes in brain structure is an invaluable tool for clinicians in assessing clinical outcomes and analysing disease progression at the patient level. However, existing methods present several limitations: (i) some approaches fail to explicitly capture the relationship between structural changes and time intervals, especially when trained on age-imbalanced datasets; (ii) others rely only on scan interpolation, which lacks clinical utility, as they generate intermediate images between timepoints rather than future pathological progression; and (iii) most approaches rely on 2D slice-based architectures, thereby disregarding the full 3D anatomical context, which is essential for accurate longitudinal predictions. We propose a 3D Temporally-Aware Diffusion Model (TADM-3D), which accurately predicts brain progression on MRI volumes. To better model the relationship between time interval and brain changes, TADM-3D uses a pre-trained Brain-Age Estimator (BAE) that guides the diffusion model to generate MRIs that accurately reflect the expected age difference between the baseline and generated follow-up scans. Additionally, to further improve the temporal awareness of TADM-3D, we propose Back-In-Time Regularisation (BITR), training TADM-3D to predict bidirectionally from the baseline to the follow-up (forward) as well as from the follow-up to the baseline (backward). Although predicting past scans has limited clinical applications, this regularisation helps the model generate temporally more accurate scans. We train and evaluate TADM-3D on the OASIS-3 dataset and validate the generalisation performance on an external test set from the NACC dataset. The code will be available upon acceptance.
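
A simplified PyTorch sketch of the Back-In-Time Regularisation idea, reduced to a toy deterministic generator rather than the paper's diffusion model: the same network, conditioned on a signed time interval, is trained to map baseline to follow-up (positive interval) and follow-up to baseline (negative interval), and both reconstruction terms enter the loss. The architecture, conditioning scheme, and data are placeholders for illustration.

# Bidirectional (forward + backward) temporal regularisation with a toy generator (sketch).
import torch
import torch.nn as nn

class ToyProgressionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv3d(16, 1, 3, padding=1))

    def forward(self, volume, dt_years):
        b, _, d, h, w = volume.shape
        dt = dt_years.view(b, 1, 1, 1, 1).expand(b, 1, d, h, w)   # signed time-interval channel
        return self.net(torch.cat([volume, dt], dim=1))

model = ToyProgressionNet()
baseline = torch.randn(2, 1, 32, 32, 32)
followup = torch.randn(2, 1, 32, 32, 32)
dt = torch.tensor([2.0, 3.5])                                     # years between scans (placeholder)

forward_loss = nn.functional.mse_loss(model(baseline, dt), followup)
backward_loss = nn.functional.mse_loss(model(followup, -dt), baseline)   # back-in-time term
loss = forward_loss + backward_loss
loss.backward()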