Page 103 of 3513502 results

BrainATCL: Adaptive Temporal Brain Connectivity Learning for Functional Link Prediction and Age Estimation

Yiran Huang, Amirhossein Nouranizadeh, Christine Ahrends, Mengjia Xu

arXiv preprint · Aug 9, 2025
Functional Magnetic Resonance Imaging (fMRI) is an imaging technique widely used to study human brain activity. fMRI signals in areas across the brain transiently synchronise and desynchronise their activity in a highly structured manner, even when an individual is at rest. These functional connectivity dynamics may be related to behaviour and neuropsychiatric disease. To model these dynamics, temporal brain connectivity representations are essential, as they reflect evolving interactions between brain regions and provide insight into transient neural states and network reconfigurations. However, conventional graph neural networks (GNNs) often struggle to capture long-range temporal dependencies in dynamic fMRI data. To address this challenge, we propose BrainATCL, an unsupervised, nonparametric framework for adaptive temporal brain connectivity learning, enabling functional link prediction and age estimation. Our method dynamically adjusts the lookback window for each snapshot based on the rate of newly added edges. Graph sequences are subsequently encoded using a GINE-Mamba2 backbone to learn spatial-temporal representations of dynamic functional connectivity in resting-state fMRI data of 1,000 participants from the Human Connectome Project. To further improve spatial modeling, we incorporate brain structure and function-informed edge attributes, i.e., the left/right hemispheric identity and subnetwork membership of brain regions, enabling the model to capture biologically meaningful topological patterns. We evaluate our BrainATCL on two tasks: functional link prediction and age estimation. The experimental results demonstrate superior performance and strong generalization, including in cross-session prediction scenarios.
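BrainATCL's key mechanism is a lookback window that adapts per snapshot to the rate of newly added edges. The paper's exact update rule is not given in the abstract; the sketch below illustrates one plausible reading, where the window shrinks when edge turnover is high (older snapshots are stale) and grows when the graph is stable. All function names and thresholds here are hypothetical.

```python
def new_edge_rate(prev_edges: set, curr_edges: set) -> float:
    """Fraction of the current snapshot's edges that were absent previously."""
    if not curr_edges:
        return 0.0
    return len(curr_edges - prev_edges) / len(curr_edges)

def adapt_lookback(window: int, rate: float,
                   low: float = 0.2, high: float = 0.6,
                   w_min: int = 1, w_max: int = 10) -> int:
    """Shrink the lookback window when many edges are new (history is stale),
    grow it when the graph is stable; clamp to [w_min, w_max]."""
    if rate > high:
        window -= 1
    elif rate < low:
        window += 1
    return max(w_min, min(w_max, window))
```

Under this reading, a resting-state session with rapidly reconfiguring connectivity would be encoded with shorter histories than a stable one.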

Spatio-Temporal Conditional Diffusion Models for Forecasting Future Multiple Sclerosis Lesion Masks Conditioned on Treatments

Gian Mario Favero, Ge Ya Luo, Nima Fathi, Justin Szeto, Douglas L. Arnold, Brennan Nichyporuk, Chris Pal, Tal Arbel

arXiv preprint · Aug 9, 2025
Image-based personalized medicine has the potential to transform healthcare, particularly for diseases that exhibit heterogeneous progression such as Multiple Sclerosis (MS). In this work, we introduce the first treatment-aware spatio-temporal diffusion model that is able to generate future masks demonstrating lesion evolution in MS. Our voxel-space approach incorporates multi-modal patient data, including MRI and treatment information, to forecast new and enlarging T2 (NET2) lesion masks at a future time point. Extensive experiments on a multi-centre dataset of 2131 patient 3D MRIs from randomized clinical trials for relapsing-remitting MS demonstrate that our generative model is able to accurately predict NET2 lesion masks for patients across six different treatments. Moreover, we demonstrate our model has the potential for real-world clinical applications through downstream tasks such as future lesion count and location estimation, binary lesion activity classification, and generating counterfactual future NET2 masks for several treatments with different efficacies. This work highlights the potential of causal, image-based generative models as powerful tools for advancing data-driven prognostics in MS.

Fusion-Based Brain Tumor Classification Using Deep Learning and Explainable AI, and Rule-Based Reasoning

Melika Filvantorkaman, Mohsen Piri, Maral Filvan Torkaman, Ashkan Zabihi, Hamidreza Moradi

arXiv preprint · Aug 9, 2025
Accurate and interpretable classification of brain tumors from magnetic resonance imaging (MRI) is critical for effective diagnosis and treatment planning. This study presents an ensemble-based deep learning framework that combines MobileNetV2 and DenseNet121 convolutional neural networks (CNNs) using a soft voting strategy to classify three common brain tumor types: glioma, meningioma, and pituitary adenoma. The models were trained and evaluated on the Figshare dataset using a stratified 5-fold cross-validation protocol. To enhance transparency and clinical trust, the framework integrates an Explainable AI (XAI) module employing Grad-CAM++ for class-specific saliency visualization, alongside a symbolic Clinical Decision Rule Overlay (CDRO) that maps predictions to established radiological heuristics. The ensemble classifier achieved superior performance compared to individual CNNs, with an accuracy of 91.7%, precision of 91.9%, recall of 91.7%, and F1-score of 91.6%. Grad-CAM++ visualizations revealed strong spatial alignment between model attention and expert-annotated tumor regions, supported by Dice coefficients up to 0.88 and IoU scores up to 0.78. Clinical rule activation further validated model predictions in cases with distinct morphological features. A human-centered interpretability assessment involving five board-certified radiologists yielded high Likert-scale scores for both explanation usefulness (mean = 4.4) and heatmap-region correspondence (mean = 4.0), reinforcing the framework's clinical relevance. Overall, the proposed approach offers a robust, interpretable, and generalizable solution for automated brain tumor classification, advancing the integration of deep learning into clinical neurodiagnostics.
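The soft-voting step averages the two CNNs' class-probability vectors before taking the argmax. A minimal sketch of that combination rule, assuming the three classes from the study (glioma, meningioma, pituitary adenoma); the probability values are illustrative, not from the paper:

```python
def soft_vote(prob_lists):
    """Average per-class probabilities across models and return
    (index of the highest mean probability, mean probability vector)."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    mean = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=mean.__getitem__), mean

# Illustrative outputs for one MRI slice: [glioma, meningioma, pituitary]
mobilenet_probs = [0.6, 0.3, 0.1]
densenet_probs = [0.4, 0.5, 0.1]
pred, mean_probs = soft_vote([mobilenet_probs, densenet_probs])
```

Soft voting preserves each model's confidence, so a strongly confident model can outweigh a weakly confident disagreement, unlike hard (majority) voting.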

OctreeNCA: Single-Pass 184 MP Segmentation on Consumer Hardware

Nick Lemke, John Kalkhof, Niklas Babendererde, Anirban Mukhopadhyay

arXiv preprint · Aug 9, 2025
Medical applications demand segmentation of large inputs, like prostate MRIs, pathology slices, or videos of surgery. These inputs should ideally be inferred at once to provide the model with proper spatial or temporal context. When segmenting large inputs, the VRAM consumption of the GPU becomes the bottleneck. Architectures like UNets or Vision Transformers scale very poorly in VRAM consumption, resulting in patch- or frame-wise approaches that compromise global consistency and inference speed. The lightweight Neural Cellular Automaton (NCA) is a bio-inspired model that is by construction size-invariant. However, due to its local-only communication rules, it lacks global knowledge. We propose OctreeNCA by generalizing the neighborhood definition using an octree data structure. Our generalized neighborhood definition enables the efficient traversal of global knowledge. Since deep learning frameworks are mainly developed for large multi-layer networks, their implementation does not fully leverage the advantages of NCAs. We implement an NCA inference function in CUDA that further reduces VRAM demands and increases inference speed. Our OctreeNCA segments high-resolution images and videos quickly while occupying 90% less VRAM than a UNet during evaluation. This allows us to segment 184 Megapixel pathology slices or 1-minute surgical videos at once.

DiffUS: Differentiable Ultrasound Rendering from Volumetric Imaging

Noe Bertramo, Gabriel Duguey, Vivek Gopalakrishnan

arXiv preprint · Aug 9, 2025
Intraoperative ultrasound imaging provides real-time guidance during numerous surgical procedures, but its interpretation is complicated by noise, artifacts, and poor alignment with high-resolution preoperative MRI/CT scans. To bridge the gap between preoperative planning and intraoperative guidance, we present DiffUS, a physics-based, differentiable ultrasound renderer that synthesizes realistic B-mode images from volumetric imaging. DiffUS first converts 3D MRI scans into acoustic impedance volumes using a machine learning approach. Next, we simulate ultrasound beam propagation using ray tracing with coupled reflection-transmission equations. DiffUS formulates wave propagation as a sparse linear system that captures multiple internal reflections. Finally, we reconstruct B-mode images via depth-resolved echo extraction across a fan-shaped acquisition geometry, incorporating realistic artifacts including speckle noise and depth-dependent degradation. DiffUS is entirely implemented as differentiable tensor operations in PyTorch, enabling gradient-based optimization for downstream applications such as slice-to-volume registration and volumetric reconstruction. Evaluation on the ReMIND dataset demonstrates DiffUS's ability to generate anatomically accurate ultrasound images from brain MRI data.
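The coupled reflection-transmission equations rest on the standard impedance-mismatch relation from acoustics. A minimal sketch for normal incidence (the impedance figures in the comment are typical textbook values in MRayl, not taken from the paper):

```python
def reflection_coefficient(z1: float, z2: float) -> float:
    """Intensity reflection coefficient at an interface between media
    with acoustic impedances z1 and z2, at normal incidence:
    R = ((z2 - z1) / (z2 + z1))**2."""
    return ((z2 - z1) / (z2 + z1)) ** 2

def transmission_coefficient(z1: float, z2: float) -> float:
    """By energy conservation, the intensity not reflected is transmitted."""
    return 1.0 - reflection_coefficient(z1, z2)

# Typical values: soft tissue ~1.63 MRayl, bone ~7.8 MRayl.
# The large mismatch yields a strong echo, which is why bone casts
# acoustic shadows in B-mode images.
r_tissue_bone = reflection_coefficient(1.63, 7.8)
```

Tracing a ray through a stack of such interfaces, with each reflection feeding back into earlier depths, is what produces the sparse linear system the abstract describes.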

FoundBioNet: A Foundation-Based Model for IDH Genotyping of Glioma from Multi-Parametric MRI

Somayeh Farahani, Marjaneh Hejazi, Antonio Di Ieva, Sidong Liu

arXiv preprint · Aug 9, 2025
Accurate, noninvasive detection of isocitrate dehydrogenase (IDH) mutation is essential for effective glioma management. Traditional methods rely on invasive tissue sampling, which may fail to capture a tumor's spatial heterogeneity. While deep learning models have shown promise in molecular profiling, their performance is often limited by scarce annotated data. In contrast, foundation deep learning models offer a more generalizable approach for glioma imaging biomarkers. We propose a Foundation-based Biomarker Network (FoundBioNet) that utilizes a SWIN-UNETR-based architecture to noninvasively predict IDH mutation status from multi-parametric MRI. Two key modules are incorporated: Tumor-Aware Feature Encoding (TAFE) for extracting multi-scale, tumor-focused features, and Cross-Modality Differential (CMD) for highlighting subtle T2-FLAIR mismatch signals associated with IDH mutation. The model was trained and validated on a diverse, multi-center cohort of 1705 glioma patients from six public datasets. Our model achieved AUCs of 90.58%, 88.08%, 65.41%, and 80.31% on independent test sets from EGD, TCGA, Ivy GAP, RHUH, and UPenn, consistently outperforming baseline approaches (p <= 0.05). Ablation studies confirmed that both the TAFE and CMD modules are essential for improving predictive accuracy. By integrating large-scale pretraining and task-specific fine-tuning, FoundBioNet enables generalizable glioma characterization. This approach enhances diagnostic accuracy and interpretability, with the potential to enable more personalized patient care.

Multi-institutional study for comparison of detectability of hypovascular liver metastases between 70- and 40-keV images: DELMIO study.

Ichikawa S, Funayama S, Hyodo T, Ozaki K, Ito A, Kakuya M, Kobayashi T, Tanahashi Y, Kozaka K, Igarashi S, Suto T, Noda Y, Matsuo M, Narita A, Okada H, Suzuki K, Goshima S

PubMed paper · Aug 9, 2025
To compare the lesion detectability of hypovascular liver metastases between 70-keV and 40-keV images from dual-energy computed tomography (CT) reconstructed with deep-learning image reconstruction (DLIR). This multi-institutional, retrospective study included adult patients both pre- and post-treatment for gastrointestinal adenocarcinoma. All patients underwent contrast-enhanced CT with reconstruction at 40 keV and 70 keV. Liver metastases were confirmed using gadoxetic acid-enhanced magnetic resonance imaging. Four radiologists independently assessed lesion conspicuity (per patient and per lesion) using a 5-point scale. A radiologic technologist measured image noise, tumor-to-liver contrast, and contrast-to-noise ratio (CNR). Quantitative and qualitative results were compared between 70-keV and 40-keV images. The study included 138 patients (mean age, 69 ± 12 years; 80 men) with 208 liver metastases. Seventy-one patients had liver metastases, while 67 did not. Primary cancer sites comprised 68 pancreatic, 50 colorectal, 12 gastric, and 8 gallbladder/bile duct cancers. No significant difference in per-patient lesion detectability was found between 70-keV images (sensitivity, 71.8-90.1%; specificity, 61.2-85.1%; accuracy, 73.9-79.7%) and 40-keV images (sensitivity, 76.1-90.1%; specificity, 53.7-82.1%; accuracy, 71.7-79.0%) (p = 0.18 to >0.99). Similarly, no significant difference in per-lesion detectability was observed between 70-keV (sensitivity, 67.3-82.2%) and 40-keV images (sensitivity, 68.8-81.7%) (p = 0.20 to >0.99). However, image noise was significantly higher at 40 keV, as were tumor-to-liver contrast and the CNRs of both hepatic parenchyma and tumors (p < 0.01). There was no significant difference in hypovascular liver metastasis detectability between 70-keV and 40-keV images reconstructed with DLIR.
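The study's exact CNR formula is not given in the abstract; a common definition, sketched below, divides the attenuation difference between the tumor and liver ROIs by the image noise (the standard deviation of attenuation in a uniform ROI). The numeric values in the usage line are illustrative only.

```python
def cnr(tumor_hu: float, liver_hu: float, noise_sd: float) -> float:
    """Contrast-to-noise ratio: |HU_tumor - HU_liver| / image noise SD.
    Higher CNR generally favors lesion conspicuity, though here the
    higher noise at 40 keV offset the contrast gain perceptually."""
    return abs(tumor_hu - liver_hu) / noise_sd

# Illustrative: hypovascular metastasis at 50 HU, liver at 110 HU, noise 10 HU.
example_cnr = cnr(50.0, 110.0, 10.0)
```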

Reducing motion artifacts in the aorta: super-resolution deep learning reconstruction with motion reduction algorithm.

Yasaka K, Tsujimoto R, Miyo R, Abe O

PubMed paper · Aug 9, 2025
To assess the efficacy of super-resolution deep learning reconstruction (SR-DLR) with a motion reduction algorithm (SR-DLR-M) in mitigating aortic motion artifacts, compared to SR-DLR and deep learning reconstruction with a motion reduction algorithm (DLR-M). This retrospective study included 86 patients (mean age, 65.0 ± 14.1 years; 53 males) who underwent contrast-enhanced CT including the chest region. CT images were reconstructed with SR-DLR-M, SR-DLR, and DLR-M. Circular or ovoid regions of interest were placed on the aorta, and the standard deviation of the CT attenuation was recorded as quantitative noise. From the CT attenuation profile along a line region of interest intersecting the left common carotid artery wall, edge rise slope and edge rise distance were calculated. Two readers assessed the images for artifact, sharpness, noise, structure depiction, and diagnostic acceptability (for aortic dissection). Quantitative noise was 7.4/5.4/8.3 Hounsfield units (HU) in SR-DLR-M/SR-DLR/DLR-M, with significant differences between SR-DLR-M and both SR-DLR and DLR-M (p < 0.001). Edge rise slope and edge rise distance were 107.1/108.8/85.8 HU/mm and 1.6/1.5/2.0 mm, respectively, in SR-DLR-M/SR-DLR/DLR-M, with statistically significant differences between SR-DLR-M and DLR-M (p ≤ 0.001 for both). Two readers scored artifacts in SR-DLR-M as significantly better than those in SR-DLR (p < 0.001). Scores for sharpness, noise, and structure depiction in SR-DLR-M were significantly better than those in DLR-M (p ≤ 0.005). Diagnostic acceptability of SR-DLR-M was significantly better than that of SR-DLR and DLR-M (p < 0.001). SR-DLR-M provided significantly better CT images for diagnosing aortic dissection compared to SR-DLR and DLR-M.

Automated 3D segmentation of rotator cuff muscle and fat from longitudinal CT for shoulder arthroplasty evaluation.

Yang M, Jun BJ, Owings T, Subhas N, Polster J, Winalski CS, Ho JC, Entezari V, Derwin KA, Ricchetti ET, Li X

PubMed paper · Aug 9, 2025
To develop and validate a deep learning model for automated 3D segmentation of rotator cuff muscles on longitudinal CT scans to quantify muscle volume and fat fraction in patients undergoing total shoulder arthroplasty (TSA). The proposed segmentation models adopted DeepLabV3+ with a ResNet50 backbone. The models were trained, validated, and tested on preoperative or minimum 2-year follow-up CT scans from 53 TSA subjects. 3D Dice similarity scores, average symmetric surface distance (ASSD), 95th percentile Hausdorff distance (HD95), and relative absolute volume difference (RAVD) were used to evaluate model performance on hold-out test sets. The trained models were applied to a cohort of 172 patients to quantify rotator cuff muscle volumes and fat fractions across preoperative and minimum 2- and 5-year follow-ups. Compared to the ground truth, the models achieved mean Dice of 0.928 and 0.916, mean ASSD of 0.844 mm and 1.028 mm, mean HD95 of 3.071 mm and 4.173 mm, and mean RAVD of 0.025 and 0.068 on the hold-out test sets for the preoperative and minimum 2-year follow-up CT scans, respectively. This study developed accurate and reliable deep learning models for automated 3D segmentation of rotator cuff muscles on clinical CT scans in TSA patients. These models substantially reduce the time required for muscle volume and fat fraction analysis and provide a practical tool for investigating how rotator cuff muscle health relates to surgical outcomes, with the potential to inform patient selection, rehabilitation planning, and surgical decision-making in TSA and rotator cuff repair (RCR).
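The Dice score and RAVD reported above are standard overlap and volume metrics; a minimal sketch over voxel index sets (the toy sets in the test are illustrative, not from the study):

```python
def dice(pred: set, ref: set) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|).
    1.0 is perfect overlap; 0.0 is no overlap."""
    if not pred and not ref:
        return 1.0  # convention: two empty masks agree perfectly
    return 2 * len(pred & ref) / (len(pred) + len(ref))

def relative_abs_volume_diff(pred: set, ref: set) -> float:
    """RAVD: |V_pred - V_ref| / V_ref, with volume measured in voxels."""
    return abs(len(pred) - len(ref)) / len(ref)
```

A mean Dice of 0.928, as reported for the preoperative scans, therefore indicates that on average nearly 93% of the combined predicted and reference muscle voxels overlap.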

Ultrasound-Based Machine Learning and SHapley Additive exPlanations Method Evaluating Risk of Gallbladder Cancer: A Bicentric and Validation Study.

Chen B, Zhong H, Lin J, Lyu G, Su S

PubMed paper · Aug 9, 2025
This study aims to construct and evaluate eight machine learning models integrating ultrasound imaging features, clinical characteristics, and serological features to assess the risk of gallbladder cancer (GBC) in patients. A retrospective analysis was conducted on ultrasound and clinical data of 300 suspected GBC patients who visited the Second Affiliated Hospital of Fujian Medical University from January 2020 to January 2024 and 69 patients who visited the Zhongshan Hospital Affiliated to Xiamen University from January 2024 to January 2025. Key relevant features were selected using Least Absolute Shrinkage and Selection Operator (LASSO) regression. Predictive models were constructed using XGBoost, logistic regression, support vector machine, k-nearest neighbors, random forest, decision tree, naive Bayes, and neural network, with the SHapley Additive exPlanations (SHAP) method employed to interpret model predictions. LASSO regression identified gender, age, alkaline phosphatase (ALP), clarity of the interface with the liver, stratification of the gallbladder wall, intracapsular anechoic lesions, and intracapsular punctiform strong lesions as key features for GBC. The XGBoost model achieved areas under the receiver operating characteristic curve (AUC) of 0.934, 0.916, and 0.813 in the training, validation, and test sets, respectively. SHAP analysis ranked the factors by importance as clarity of the interface with the liver, stratification of the gallbladder wall, intracapsular anechoic lesions, intracapsular punctiform strong lesions, ALP, gender, and age. Personalized prediction explanations through SHAP values showed the contribution of each feature to the final prediction, enhancing result interpretability. Furthermore, decision plots displayed the influence trajectory of each feature on model predictions, helping identify which features contributed most to mispredictions and thereby facilitating further model optimization or feature adjustment. This study proposed a GBC machine learning model based on ultrasound, clinical, and serological characteristics, demonstrating the superior performance of the XGBoost model and enhancing its interpretability through the SHAP method.
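The LASSO step keeps only features whose coefficients survive L1 shrinkage. The mechanism behind that selection is coordinate-wise soft thresholding, sketched below; the threshold value in the test is illustrative, not the study's tuned penalty.

```python
def soft_threshold(x: float, lam: float) -> float:
    """Proximal operator of the L1 penalty: shrink a coefficient toward
    zero by lam, and set it exactly to zero if |x| <= lam. This is how
    LASSO drives weak features (e.g., uninformative ultrasound signs)
    out of the model entirely."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0
```

Coefficients that are zeroed out correspond to dropped features, which is why the study's final model uses only the seven features listed above rather than the full candidate set.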