Page 5 of 116 · 1152 results

Prediction of cervical cancer lymph node metastasis based on multisequence magnetic resonance imaging radiomics and deep learning features: a dual-center study.

Luo S, Guo Y, Ye Y, Mu Q, Huang W, Tang G

pubmed · Aug 10 2025
Cervical cancer is a leading cause of death from malignant tumors in women, and accurate evaluation of occult lymph node metastasis (OLNM) is crucial for optimal treatment. This study aimed to develop several predictive models, including a Clinical model, Radiomics models (RD), Deep Learning models (DL), Radiomics-Deep Learning fusion models (RD-DL), and a Clinical-RD-DL combined model, for assessing the risk of OLNM in cervical cancer patients. The study included 130 patients from Center 1 (training set) and 55 from Center 2 (test set). Clinical data and imaging sequences (T1, T2, and DWI) were used to extract features for model construction. Model performance was assessed using the DeLong test, and SHAP analysis was used to examine feature contributions. Both the RD-combined (AUC = 0.803) and DL-combined (AUC = 0.818) models outperformed the single-sequence models as well as the standalone Clinical model (AUC = 0.702). The RD-DL model yielded the highest performance, achieving an AUC of 0.981 in the training set and 0.903 in the test set. Notably, integrating clinical variables did not further improve predictive performance; the Clinical-RD-DL model performed comparably to the RD-DL model. SHAP analysis showed that deep learning features had the greatest impact on model predictions. Both the RD and DL models effectively predict OLNM, with the RD-DL model offering superior performance. These findings support a rapid, non-invasive method for clinical prediction of OLNM.

Neurobehavioral mechanisms of fear and anxiety in multiple sclerosis.

Meyer-Arndt L, Rust R, Bellmann-Strobl J, Schmitz-Hübsch T, Marko L, Forslund S, Scheel M, Gold SM, Hetzer S, Paul F, Weygandt M

pubmed · Aug 9 2025
Anxiety is a common yet often underdiagnosed and undertreated comorbidity in multiple sclerosis (MS). While altered fear processing is a hallmark of anxiety in other populations, its neurobehavioral mechanisms in MS remain poorly understood. This study investigates the extent to which neurobehavioral mechanisms of fear generalization contribute to anxiety in MS. We recruited 18 persons with MS (PwMS) and anxiety, 36 PwMS without anxiety, and 23 healthy persons (HPs). Participants completed a functional MRI (fMRI) fear generalization task to assess fear processing and diffusion-weighted MRI for graph-based structural connectome analyses. Consistent with findings in non-MS anxiety populations, PwMS with anxiety exhibit fear overgeneralization, perceiving non-threatening stimuli as threatening. A machine learning model trained on HPs in a multivariate pattern analysis (MVPA) cross-decoding approach accurately predicts behavioral fear generalization in both MS groups from whole-brain fMRI fear response patterns. Regional fMRI prediction and graph-based structural connectivity analyses reveal that fear response activity and structural network integrity of partially overlapping areas, such as the hippocampus (for fear stimulus comparison) and anterior insula (for fear excitation), are crucial for fear generalization in MS. Reduced network integrity in these regions is a direct indicator of MS anxiety. Our findings demonstrate that MS anxiety is substantially characterized by fear overgeneralization. That a model trained to associate fMRI fear response patterns with fear ratings in HPs also predicts fear ratings from fMRI data in both MS groups suggests that generic fear processing mechanisms contribute substantially to anxiety in MS.
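The cross-decoding logic described above can be sketched in a few lines: fit a linear model mapping response patterns to ratings in one group, then apply it unchanged to a held-out group. This is a minimal numpy sketch with synthetic data; the study's actual features, model class, and preprocessing are not specified here, and the ridge regressor and dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X^T X + alpha*I)^-1 X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

# Synthetic "voxel patterns" sharing one linear relation to fear ratings.
w_true = rng.normal(size=20)
X_hp = rng.normal(size=(50, 20))           # healthy persons (training group)
y_hp = X_hp @ w_true + 0.1 * rng.normal(size=50)
X_ms = rng.normal(size=(30, 20))           # PwMS (held-out group)
y_ms = X_ms @ w_true + 0.1 * rng.normal(size=30)

w = ridge_fit(X_hp, y_hp)                  # train only on the HP group
pred_ms = X_ms @ w                         # cross-decode onto the MS group
r = np.corrcoef(pred_ms, y_ms)[0, 1]       # prediction-outcome correlation
print(round(float(r), 2))
```

A high correlation on the held-out group, as in this toy setup, is what licenses the "generic fear processing mechanisms" interpretation: the mapping learned in HPs transfers without refitting.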

Spatio-Temporal Conditional Diffusion Models for Forecasting Future Multiple Sclerosis Lesion Masks Conditioned on Treatments

Gian Mario Favero, Ge Ya Luo, Nima Fathi, Justin Szeto, Douglas L. Arnold, Brennan Nichyporuk, Chris Pal, Tal Arbel

arxiv preprint · Aug 9 2025
Image-based personalized medicine has the potential to transform healthcare, particularly for diseases that exhibit heterogeneous progression such as Multiple Sclerosis (MS). In this work, we introduce the first treatment-aware spatio-temporal diffusion model that is able to generate future masks demonstrating lesion evolution in MS. Our voxel-space approach incorporates multi-modal patient data, including MRI and treatment information, to forecast new and enlarging T2 (NET2) lesion masks at a future time point. Extensive experiments on a multi-centre dataset of 2131 patient 3D MRIs from randomized clinical trials for relapsing-remitting MS demonstrate that our generative model is able to accurately predict NET2 lesion masks for patients across six different treatments. Moreover, we demonstrate our model has the potential for real-world clinical applications through downstream tasks such as future lesion count and location estimation, binary lesion activity classification, and generating counterfactual future NET2 masks for several treatments with different efficacies. This work highlights the potential of causal, image-based generative models as powerful tools for advancing data-driven prognostics in MS.

Deep Learning-aided <sup>1</sup>H-MR Spectroscopy for Differentiating between Patients with and without Hepatocellular Carcinoma.

Bae JS, Lee HH, Kim H, Song IC, Lee JY, Han JK

pubmed · Aug 9 2025
Among patients with hepatitis B virus-associated liver cirrhosis (HBV-LC), there may be differences in the hepatic parenchyma between those with and without hepatocellular carcinoma (HCC). Proton MR spectroscopy (<sup>1</sup>H-MRS) is a well-established tool for noninvasive metabolomics, but its application in the liver has been challenging, with only a few metabolites other than lipids detectable. This study explores the potential of <sup>1</sup>H-MRS of the liver, in conjunction with deep learning, to differentiate between HBV-LC patients with and without HCC. Between August 2018 and March 2021, <sup>1</sup>H-MRS data were collected from 37 HBV-LC patients who underwent MRI for HCC surveillance, without HCC (HBV-LC group, n = 20) and with HCC (HBV-LC-HCC group, n = 17). Based on a priori knowledge from the first 10 patients of each group, large simulated spectral datasets were generated to develop two kinds of convolutional neural networks (CNNs): CNNs quantifying 15 metabolites and 5 lipid resonances (qCNNs), and CNNs classifying patients into HBV-LC and HBV-LC-HCC (cCNNs). The performance of the cCNNs was assessed using the remaining patients in the two groups (10 HBV-LC and 7 HBV-LC-HCC patients). On a simulated dataset, quantification errors with the qCNNs were significantly lower than those with a conventional nonlinear least-squares fitting method for all metabolites and lipids (P ≤ 0.004). The cCNNs exhibited sensitivity, specificity, and accuracy of 100% (7/7), 90% (9/10), and 94% (16/17), respectively, for identifying the HBV-LC-HCC group. Deep-learning-aided <sup>1</sup>H-MRS with data augmentation by spectral simulation may have potential for differentiating between HBV-LC patients with and without HCC.

Fusion-Based Brain Tumor Classification Using Deep Learning and Explainable AI, and Rule-Based Reasoning

Melika Filvantorkaman, Mohsen Piri, Maral Filvan Torkaman, Ashkan Zabihi, Hamidreza Moradi

arxiv preprint · Aug 9 2025
Accurate and interpretable classification of brain tumors from magnetic resonance imaging (MRI) is critical for effective diagnosis and treatment planning. This study presents an ensemble-based deep learning framework that combines MobileNetV2 and DenseNet121 convolutional neural networks (CNNs) using a soft voting strategy to classify three common brain tumor types: glioma, meningioma, and pituitary adenoma. The models were trained and evaluated on the Figshare dataset using a stratified 5-fold cross-validation protocol. To enhance transparency and clinical trust, the framework integrates an Explainable AI (XAI) module employing Grad-CAM++ for class-specific saliency visualization, alongside a symbolic Clinical Decision Rule Overlay (CDRO) that maps predictions to established radiological heuristics. The ensemble classifier achieved superior performance compared to individual CNNs, with an accuracy of 91.7%, precision of 91.9%, recall of 91.7%, and F1-score of 91.6%. Grad-CAM++ visualizations revealed strong spatial alignment between model attention and expert-annotated tumor regions, supported by Dice coefficients up to 0.88 and IoU scores up to 0.78. Clinical rule activation further validated model predictions in cases with distinct morphological features. A human-centered interpretability assessment involving five board-certified radiologists yielded high Likert-scale scores for both explanation usefulness (mean = 4.4) and heatmap-region correspondence (mean = 4.0), reinforcing the framework's clinical relevance. Overall, the proposed approach offers a robust, interpretable, and generalizable solution for automated brain tumor classification, advancing the integration of deep learning into clinical neurodiagnostics.
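The soft-voting step at the core of this ensemble is simple to state: average the per-class probabilities from the member networks and take the argmax. The sketch below shows that step with stand-in probability arrays; it does not reproduce the actual MobileNetV2/DenseNet121 outputs, and the unweighted 50/50 average is an assumption consistent with plain soft voting.

```python
import numpy as np

classes = ["glioma", "meningioma", "pituitary"]

# Stand-in softmax outputs for two samples from each member network.
p_mobilenet = np.array([[0.70, 0.20, 0.10],
                        [0.30, 0.40, 0.30]])
p_densenet  = np.array([[0.60, 0.30, 0.10],
                        [0.20, 0.25, 0.55]])

p_ensemble = (p_mobilenet + p_densenet) / 2.0   # unweighted soft vote
pred = [classes[i] for i in p_ensemble.argmax(axis=1)]
print(pred)
```

Note how the second sample is rescued by the vote: one network alone favors meningioma, but the averaged probabilities tip the decision to pituitary, which is the usual argument for soft over hard voting.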

Spinal-QDCNN: advanced feature extraction for brain tumor detection using MRI images.

T L, J JJ, Rani VV, Saini ML

pubmed · Aug 9 2025
Brain tumors arise from the abnormal development of cells in the brain. They adversely affect human health, and early diagnosis is required to improve patient survival rates. Various brain tumor detection models have therefore been developed, but existing methods often suffer from limited accuracy and inefficient learning architectures, and traditional approaches cannot effectively detect small and subtle changes in brain cells. To overcome these limitations, a SpinalNet-Quantum Dilated Convolutional Neural Network (Spinal-QDCNN) model is proposed for detecting brain tumors using MRI images. The Spinal-QDCNN method combines QDCNN and SpinalNet for brain tumor detection using MRI. First, the input brain image is pre-processed using RoI extraction. Image enhancement is then performed using a thresholding transformation, followed by segmentation using Projective Adversarial Networks (PAN). Next, operations such as random erasing, flipping, and resizing are applied in the image augmentation phase. This is followed by feature extraction, where statistical features (mean, average contrast, kurtosis, and skewness), Gabor wavelet features, and Discrete Wavelet Transform (DWT) features with Gradient Binary Pattern (GBP) are extracted; detection is finally performed using Spinal-QDCNN. The proposed method attained a maximum accuracy of 86.356%, sensitivity of 87.37%, and specificity of 88.357%.
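The simple statistical features named in this pipeline follow standard moment formulas. The sketch below computes them over a synthetic grayscale ROI, taking "average contrast" as the intensity standard deviation, a common convention; the paper's exact definitions and normalizations may differ.

```python
import numpy as np

def statistical_features(roi):
    """Mean, average contrast (std), skewness, and excess kurtosis of an ROI."""
    x = np.asarray(roi, dtype=float).ravel()
    mean = x.mean()
    std = x.std()                        # "average contrast"
    z = (x - mean) / std                 # standardized intensities
    skewness = np.mean(z ** 3)           # third standardized moment
    kurtosis = np.mean(z ** 4) - 3.0     # fourth moment, excess convention
    return {"mean": mean, "contrast": std,
            "skewness": skewness, "kurtosis": kurtosis}

rng = np.random.default_rng(42)
roi = rng.normal(loc=120.0, scale=15.0, size=(64, 64))  # synthetic 64x64 ROI
feats = statistical_features(roi)
print({k: round(float(v), 2) for k, v in feats.items()})
```

For a roughly Gaussian ROI like this one, skewness and excess kurtosis sit near zero; tumor regions typically shift these moments, which is why they are useful as texture descriptors.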

BrainATCL: Adaptive Temporal Brain Connectivity Learning for Functional Link Prediction and Age Estimation

Yiran Huang, Amirhossein Nouranizadeh, Christine Ahrends, Mengjia Xu

arxiv preprint · Aug 9 2025
Functional Magnetic Resonance Imaging (fMRI) is an imaging technique widely used to study human brain activity. fMRI signals in areas across the brain transiently synchronise and desynchronise their activity in a highly structured manner, even when an individual is at rest. These functional connectivity dynamics may be related to behaviour and neuropsychiatric disease. To model these dynamics, temporal brain connectivity representations are essential, as they reflect evolving interactions between brain regions and provide insight into transient neural states and network reconfigurations. However, conventional graph neural networks (GNNs) often struggle to capture long-range temporal dependencies in dynamic fMRI data. To address this challenge, we propose BrainATCL, an unsupervised, nonparametric framework for adaptive temporal brain connectivity learning, enabling functional link prediction and age estimation. Our method dynamically adjusts the lookback window for each snapshot based on the rate of newly added edges. Graph sequences are subsequently encoded using a GINE-Mamba2 backbone to learn spatial-temporal representations of dynamic functional connectivity in resting-state fMRI data of 1,000 participants from the Human Connectome Project. To further improve spatial modeling, we incorporate brain structure and function-informed edge attributes, i.e., the left/right hemispheric identity and subnetwork membership of brain regions, enabling the model to capture biologically meaningful topological patterns. We evaluate our BrainATCL on two tasks: functional link prediction and age estimation. The experimental results demonstrate superior performance and strong generalization, including in cross-session prediction scenarios.
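The adaptive-lookback idea, adjusting the window per snapshot based on the rate of newly added edges, can be sketched as a simple controller: shrink the window when connectivity reconfigures quickly, widen it when the graph is stable. The thresholds, bounds, and one-step update rule below are illustrative assumptions, not the actual BrainATCL mechanism.

```python
def adaptive_lookback(new_edge_counts, min_win=1, max_win=5, threshold=10):
    """Return a lookback window length for each graph snapshot.

    new_edge_counts: number of newly added edges per snapshot, in time order.
    """
    windows = []
    win = min_win
    for n_new in new_edge_counts:
        if n_new > threshold:          # rapid reconfiguration: look back less
            win = max(min_win, win - 1)
        else:                          # stable connectivity: look back further
            win = min(max_win, win + 1)
        windows.append(win)
    return windows

# Two quiet snapshots, a burst of new edges, then quiet again.
print(adaptive_lookback([2, 3, 25, 30, 4, 1]))
```

The point of the adaptation is that a fixed window either blurs fast network reconfigurations or starves slow ones of context; tying the window to edge turnover lets one encoder handle both regimes.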

SST-DUNet: Smart Swin Transformer and Dense UNet for automated preclinical fMRI skull stripping.

Soltanpour S, Utama R, Chang A, Nasseef MT, Madularu D, Kulkarni P, Ferris CF, Joslin C

pubmed · Aug 9 2025
Skull stripping is a common preprocessing step in Magnetic Resonance Imaging (MRI) pipelines and is often performed manually. Automating this process is challenging for preclinical data due to variations in brain geometry, resolution, and tissue contrast. Existing methods for MRI skull stripping often struggle with the low resolution and varying slice sizes found in preclinical functional MRI (fMRI) data. This study proposes a novel method that integrates a Dense UNet-based architecture with a feature extractor based on the Smart Swin Transformer (SST), called SST-DUNet. The Smart Shifted Window Multi-Head Self-Attention (SSW-MSA) module in SST replaces the mask-based module in the Swin Transformer (ST), enabling the learning of distinct channel-wise features while focusing on relevant dependencies within brain structures. This modification allows the model to better handle the complexities of fMRI skull stripping, such as low resolution and variable slice sizes. To address class imbalance in preclinical data, a combined loss function using Focal and Dice loss is applied. The model was trained on rat fMRI images and evaluated across three in-house datasets, achieving Dice similarity scores of 98.65%, 97.86%, and 98.04%. We compared our method with conventional and deep learning-based approaches, demonstrating its superiority over state-of-the-art methods. The fMRI results using SST-DUNet closely align with those from manual skull stripping for both seed-based and independent component analyses, indicating that SST-DUNet can effectively substitute manual brain extraction in rat fMRI analysis.
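The combined Focal-plus-Dice loss used to handle class imbalance can be sketched in numpy for a binary mask. The hyperparameters (gamma = 2, alpha = 0.25, an equal 50/50 mix) are common defaults, not necessarily the values used for SST-DUNet.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss: down-weights easy, well-classified voxels."""
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)          # probability of the true class
    w = np.where(y == 1, alpha, 1 - alpha)   # class-balancing weight
    return float(np.mean(-w * (1 - pt) ** gamma * np.log(pt)))

def dice_loss(p, y, eps=1e-7):
    """Soft Dice loss: 1 minus the overlap score between mask and prediction."""
    inter = np.sum(p * y)
    return 1.0 - (2 * inter + eps) / (np.sum(p) + np.sum(y) + eps)

def combined_loss(p, y):
    return 0.5 * focal_loss(p, y) + 0.5 * dice_loss(p, y)

y = np.array([1, 1, 0, 0], dtype=float)      # toy "brain mask" voxels
good = np.array([0.9, 0.8, 0.1, 0.2])        # confident, mostly correct
bad  = np.array([0.3, 0.4, 0.7, 0.6])        # mostly wrong
assert combined_loss(good, y) < combined_loss(bad, y)
```

Pairing the two terms is a standard remedy for imbalance in segmentation: Dice directly optimizes overlap regardless of how few foreground voxels there are, while the focal term keeps per-voxel gradients informative on hard examples.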

FoundBioNet: A Foundation-Based Model for IDH Genotyping of Glioma from Multi-Parametric MRI

Somayeh Farahani, Marjaneh Hejazi, Antonio Di Ieva, Sidong Liu

arxiv preprint · Aug 9 2025
Accurate, noninvasive detection of isocitrate dehydrogenase (IDH) mutation is essential for effective glioma management. Traditional methods rely on invasive tissue sampling, which may fail to capture a tumor's spatial heterogeneity. While deep learning models have shown promise in molecular profiling, their performance is often limited by scarce annotated data. In contrast, foundation deep learning models offer a more generalizable approach for glioma imaging biomarkers. We propose a Foundation-based Biomarker Network (FoundBioNet) that utilizes a SWIN-UNETR-based architecture to noninvasively predict IDH mutation status from multi-parametric MRI. Two key modules are incorporated: Tumor-Aware Feature Encoding (TAFE) for extracting multi-scale, tumor-focused features, and Cross-Modality Differential (CMD) for highlighting subtle T2-FLAIR mismatch signals associated with IDH mutation. The model was trained and validated on a diverse, multi-center cohort of 1705 glioma patients from six public datasets. Our model achieved AUCs of 90.58%, 88.08%, 65.41%, and 80.31% on independent test sets from EGD, TCGA, Ivy GAP, RHUH, and UPenn, consistently outperforming baseline approaches (p <= 0.05). Ablation studies confirmed that both the TAFE and CMD modules are essential for improving predictive accuracy. By integrating large-scale pretraining and task-specific fine-tuning, FoundBioNet enables generalizable glioma characterization. This approach enhances diagnostic accuracy and interpretability, with the potential to enable more personalized patient care.

Advanced dynamic ensemble framework with explainability driven insights for precision brain tumor classification across datasets.

Singh R, Gupta S, Ibrahim AO, Gabralla LA, Bharany S, Rehman AU, Hussen S

pubmed · Aug 8 2025
Accurate detection of brain tumors remains a significant challenge due to the diversity of tumor types and the role of human intervention in the diagnostic process. This study proposes a novel ensemble deep learning system for accurate brain tumor classification using MRI data. The proposed system integrates a fine-tuned Convolutional Neural Network (CNN), ResNet-50, and EfficientNet-B5 in a dynamic ensemble framework. An adaptive dynamic weight distribution strategy is employed during training to optimize the contribution of each network in the framework. To address class imbalance and improve model generalization, a customized weighted cross-entropy loss function is incorporated. The model gains interpretability through explainable artificial intelligence (XAI) techniques, including Grad-CAM, SHAP, SmoothGrad, and LIME, which provide deeper insight into its prediction rationale. The proposed system achieves a classification accuracy of 99.4% on the test set, 99.48% on the validation set, and 99.31% in cross-dataset validation. Furthermore, entropy-based uncertainty analysis quantifies prediction confidence, yielding an average entropy of 0.3093 and effectively identifying uncertain predictions to mitigate diagnostic errors. Overall, the proposed framework demonstrates high accuracy, robustness, and interpretability, highlighting its potential for integration into automated brain tumor diagnosis systems.
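The entropy-based uncertainty step described above reduces to computing the Shannon entropy of each softmax output and flagging high-entropy predictions for review. This is a minimal sketch; the 0.5-nat cutoff and the toy probability rows are illustrative assumptions, not values from the paper.

```python
import numpy as np

def prediction_entropy(probs, eps=1e-12):
    """Shannon entropy (in nats) of each row of class probabilities."""
    p = np.clip(probs, eps, 1.0)
    return -np.sum(p * np.log(p), axis=1)

probs = np.array([
    [0.97, 0.01, 0.01, 0.01],   # confident prediction -> low entropy
    [0.40, 0.30, 0.20, 0.10],   # ambiguous prediction -> high entropy
])
H = prediction_entropy(probs)
uncertain = H > 0.5             # flag high-entropy cases for expert review
print(uncertain.tolist())
```

Routing only the flagged cases to a radiologist is what makes this a practical error-mitigation mechanism: confident predictions pass through automatically while ambiguous ones get human attention.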
