
Novel radiotherapy target definition using AI-driven predictions of glioblastoma recurrence from metabolic and diffusion MRI.

Tran N, Luks TL, Li Y, Jakary A, Ellison J, Liu B, Adegbite O, Nair D, Kakhandiki P, Molinaro AM, Villanueva-Meyer JE, Butowski N, Clarke JL, Chang SM, Braunstein SE, Morin O, Lin H, Lupo JM

pubmed · Aug 7, 2025
The current standard-of-care (SOC) practice for defining the clinical target volume (CTV) for radiation therapy (RT) in patients with glioblastoma still employs an isotropic 1-2 cm expansion of the T2-hyperintensity lesion, without considering the heterogeneous infiltrative nature of these tumors. This study aims to improve RT CTV definition in patients with glioblastoma by incorporating biologically relevant metabolic and physiologic imaging acquired before RT, along with a deep learning model that predicts regions of subsequent tumor progression, defined by either the presence of contrast enhancement or T2-hyperintensity. The results were compared against two standard CTV definitions. Our multi-parametric deep learning model significantly outperformed the uniform 2 cm expansion of the T2-lesion CTV in terms of specificity (0.89 ± 0.05 vs 0.79 ± 0.11; p = 0.004), while achieving comparable sensitivity (0.92 ± 0.11 vs 0.95 ± 0.08; p = 0.10), thereby sparing more normal brain tissue. Model performance was significantly enhanced by incorporating lesion size-weighted loss functions during training and including metabolic images as inputs.
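The sensitivity/specificity comparison above is a voxel-wise calculation over binary masks. As a point of reference, a minimal NumPy sketch (toy masks, not the study's data; function and variable names are illustrative):

```python
import numpy as np

def sensitivity_specificity(pred_ctv: np.ndarray, progression: np.ndarray):
    """Voxel-wise sensitivity/specificity of a binary CTV mask
    against a binary mask of subsequent tumor progression."""
    pred = pred_ctv.astype(bool)
    true = progression.astype(bool)
    tp = np.sum(pred & true)    # progression voxels covered by the CTV
    fn = np.sum(~pred & true)   # progression voxels missed
    tn = np.sum(~pred & ~true)  # normal tissue correctly spared
    fp = np.sum(pred & ~true)   # normal tissue irradiated unnecessarily
    return tp / (tp + fn), tn / (tn + fp)

# Toy 4x4 "slice": the CTV covers the progression plus one extra voxel.
prog = np.zeros((4, 4), dtype=bool); prog[1:3, 1:3] = True
ctv = prog.copy(); ctv[0, 0] = True
sens, spec = sensitivity_specificity(ctv, prog)  # sens 1.0, spec 11/12
```

Higher specificity at comparable sensitivity is exactly the "spare more normal brain" trade-off reported above.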

Few-Shot Deployment of Pretrained MRI Transformers in Brain Imaging Tasks

Mengyu Li, Guoyao Shen, Chad W. Farris, Xin Zhang

arxiv preprint · Aug 7, 2025
Machine learning using transformers has shown great potential in medical imaging, but its real-world applicability remains limited due to the scarcity of annotated data. In this study, we propose a practical framework for the few-shot deployment of pretrained MRI transformers in diverse brain imaging tasks. By utilizing the Masked Autoencoder (MAE) pretraining strategy on a large-scale, multi-cohort brain MRI dataset comprising over 31 million slices, we obtain highly transferable latent representations that generalize well across tasks and datasets. For high-level tasks such as classification, a frozen MAE encoder combined with a lightweight linear head achieves state-of-the-art accuracy in MRI sequence identification with minimal supervision. For low-level tasks such as segmentation, we propose MAE-FUnet, a hybrid architecture that fuses multiscale CNN features with pretrained MAE embeddings. This model consistently outperforms other strong baselines in both skull stripping and multi-class anatomical segmentation under data-limited conditions. With extensive quantitative and qualitative evaluations, our framework demonstrates efficiency, stability, and scalability, suggesting its suitability for low-resource clinical environments and broader neuroimaging applications.
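The classification recipe described here (freeze the pretrained encoder, train only a lightweight linear head on a few labeled examples) can be sketched as follows. The fixed random projection stands in for the pretrained MAE encoder, and all data are synthetic:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained encoder: a fixed linear map to a latent space.
W = rng.normal(size=(256, 32))
def encode(x):
    return x @ W  # frozen: W is never updated during few-shot adaptation

# Few-shot setting: a handful of labeled examples, labels defined in latent space.
X = rng.normal(size=(40, 256))
Z = encode(X)                              # (40, 32) latent representations
y = (Z[:, 0] > 0).astype(int)              # toy "sequence type" label

head = LogisticRegression(max_iter=1000).fit(Z[:20], y[:20])  # train head only
acc = head.score(Z[20:], y[20:])           # evaluate on held-out shots
```

Only the linear head's handful of weights are fit, which is what makes the approach viable with minimal supervision.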

Best Machine Learning Model for Predicting Axial Symptoms After Unilateral Laminoplasty: Based on C2 Spinous Process Muscle Radiomics Features and Sagittal Parameters.

Zheng B, Zhu Z, Liang Y, Liu H

pubmed · Aug 7, 2025
Study Design: Retrospective study. Objective: To develop a machine learning model for predicting axial symptoms (AS) after unilateral laminoplasty by integrating C2 spinous process muscle radiomics features and cervical sagittal parameters. Methods: In this retrospective study of 96 cervical myelopathy patients (30 with AS, 66 without) who underwent unilateral laminoplasty between 2018 and 2022, we extracted radiomics features from preoperative MRI of the C2 spinous muscles using PyRadiomics. Clinical data, including the C2-C7 Cobb angle, cervical sagittal vertical axis (cSVA), T1 slope (T1S), and C2 muscle fat infiltration, were collected for clinical model construction. After LASSO regression feature selection, we constructed six machine learning models (SVM, KNN, Random Forest, ExtraTrees, XGBoost, and LightGBM) and evaluated their performance using ROC curves and AUC. Results: The AS group demonstrated a significantly lower preoperative C2-C7 Cobb angle (12.80° ± 7.49° vs 18.02° ± 8.59°, P = .006), higher cSVA (3.01 ± 0.87 cm vs 2.46 ± 1.19 cm, P = .026), higher T1S (26.68° ± 5.12° vs 23.66° ± 7.58°, P = .025), and higher C2 muscle fat infiltration (23.73 ± 7.78 vs 20.62 ± 6.93, P = .026). Key radiomics features included local binary pattern texture features and wavelet transform characteristics. The combined model integrating radiomics and clinical parameters achieved the best performance, with a test AUC of 0.881, sensitivity of 0.833, and specificity of 0.786. Conclusion: The machine learning model based on C2 spinous process muscle radiomics features and clinical parameters (C2-C7 Cobb angle, cSVA, T1S, and C2 muscle fat infiltration) effectively predicts AS occurrence after unilateral laminoplasty, providing clinicians with a valuable tool for preoperative risk assessment and personalized treatment planning.
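The modeling pipeline (LASSO feature selection, then comparison of candidate classifiers by test AUC) can be sketched with scikit-learn. The data, feature counts, and model list below are illustrative stand-ins, not the study's cohort or tuning:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Toy stand-ins for radiomics + sagittal features: 96 patients, 30 features.
X = rng.normal(size=(96, 30))
y = (X[:, 0] - X[:, 3] + 0.5 * rng.normal(size=96) > 0).astype(int)

# 1) LASSO keeps only features with non-zero coefficients.
lasso = LassoCV(cv=5, random_state=0).fit(X, y)
keep = np.flatnonzero(lasso.coef_)
if keep.size == 0:            # fall back to all features if LASSO zeroes everything
    keep = np.arange(X.shape[1])
X_sel = X[:, keep]

# 2) Compare candidate classifiers by AUC on a held-out test split.
Xtr, Xte, ytr, yte = train_test_split(X_sel, y, test_size=0.3,
                                      random_state=0, stratify=y)
models = {
    "SVM": SVC(probability=True, random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
}
aucs = {name: roc_auc_score(yte, m.fit(Xtr, ytr).predict_proba(Xte)[:, 1])
        for name, m in models.items()}
```

The study's "combined model" corresponds to concatenating the selected radiomics features with the clinical parameters before this comparison step.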

Coarse-to-Fine Joint Registration of MR and Ultrasound Images via Imaging Style Transfer

Junyi Wang, Xi Zhu, Yikun Guo, Zixi Wang, Haichuan Gao, Le Zhang, Fan Zhang

arxiv preprint · Aug 7, 2025
We developed a pipeline for registering pre-surgery Magnetic Resonance (MR) images and post-resection Ultrasound (US) images. Our approach leverages unpaired style transfer using 3D CycleGAN to generate synthetic T1 images, thereby enhancing registration performance. Additionally, our registration process employs both affine and local deformable transformations for a coarse-to-fine registration. The results demonstrate that our approach improves the consistency between MR and US image pairs in most cases.

Longitudinal development of sex differences in the limbic system is associated with age, puberty and mental health

Matte Bon, G., Walther, J., Comasco, E., Derntl, B., Kaufmann, T.

medrxiv preprint · Aug 7, 2025
Sex differences in mental health become more evident across adolescence, with a two-fold increase in the prevalence of mood disorders in females compared to males. The underlying brain mechanisms remain understudied. Here, we investigated the role of age, puberty and mental health in determining the longitudinal development of sex differences in brain structure. We captured sex differences in limbic and non-limbic structures using machine learning models trained on cross-sectional brain imaging data of 1132 youths, yielding limbic and non-limbic estimates of brain sex. Applied to two independent longitudinal samples (total: 8184 youths), our models revealed increasingly pronounced sex differences in brain structure with increasing age. For females, brain sex was sensitive to pubertal development (menarche) over time and, for limbic structures, to mood-related mental health. Our findings highlight the limbic system as a key contributor to the development of sex differences in the brain and the potential of machine learning models for brain sex classification to investigate sex-specific processes relevant to mental health.

Conditional Fetal Brain Atlas Learning for Automatic Tissue Segmentation

Johannes Tischer, Patric Kienast, Marlene Stümpflen, Gregor Kasprian, Georg Langs, Roxane Licandro

arxiv preprint · Aug 6, 2025
Magnetic Resonance Imaging (MRI) of the fetal brain has become a key tool for studying brain development in vivo. Yet, its assessment remains challenging due to variability in brain maturation, imaging protocols, and uncertain estimates of Gestational Age (GA). To overcome these, brain atlases provide a standardized reference framework that facilitates objective evaluation and comparison across subjects by aligning the atlas and subjects in a common coordinate system. In this work, we introduce a novel deep-learning framework for generating continuous, age-specific fetal brain atlases for real-time fetal brain tissue segmentation. The framework combines a direct registration model with a conditional discriminator, and is trained on a curated dataset of 219 neurotypical fetal MRIs spanning 21 to 37 weeks of gestation. The method achieves high registration accuracy, captures dynamic anatomical changes with sharp structural detail, and delivers robust segmentation performance, with an average Dice Similarity Coefficient (DSC) of 86.3% across six brain tissues. Furthermore, volumetric analysis of the generated atlases reveals detailed neurotypical growth trajectories, providing valuable insights into the maturation of the fetal brain. This approach enables individualized developmental assessment with minimal pre-processing and real-time performance, supporting both research and clinical applications. The model code is available at https://github.com/cirmuw/fetal-brain-atlas
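Segmentation quality here is reported as the Dice Similarity Coefficient; for reference, a minimal implementation over binary masks (toy example, not the paper's evaluation code):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    a = a.astype(bool); b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.sum(a & b) / denom

# Toy example: two 3x3 squares offset by one voxel (overlap is 2x2).
a = np.zeros((5, 5), dtype=bool); a[1:4, 1:4] = True
b = np.zeros((5, 5), dtype=bool); b[2:5, 2:5] = True
score = dice(a, b)  # 2*4 / (9+9) = 8/18
```

A mean DSC of 86.3% across six tissue classes means the predicted and reference masks overlap substantially relative to their combined size.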

DDTracking: A Deep Generative Framework for Diffusion MRI Tractography with Streamline Local-Global Spatiotemporal Modeling

Yijie Li, Wei Zhang, Xi Zhu, Ye Wu, Yogesh Rathi, Lauren J. O'Donnell, Fan Zhang

arxiv preprint · Aug 6, 2025
This paper presents DDTracking, a novel deep generative framework for diffusion MRI tractography that formulates streamline propagation as a conditional denoising diffusion process. In DDTracking, we introduce a dual-pathway encoding network that jointly models local spatial encoding (capturing fine-scale structural details at each streamline point) and global temporal dependencies (ensuring long-range consistency across the entire streamline). Furthermore, we design a conditional diffusion model module, which leverages the learned local and global embeddings to predict streamline propagation orientations for tractography in an end-to-end trainable manner. We conduct a comprehensive evaluation across diverse, independently acquired dMRI datasets, including both synthetic and clinical data. Experiments on two well-established benchmarks with ground truth (ISMRM Challenge and TractoInferno) demonstrate that DDTracking largely outperforms current state-of-the-art tractography methods. Furthermore, our results highlight DDTracking's strong generalizability across heterogeneous datasets, spanning varying health conditions, age groups, imaging protocols, and scanner types. Collectively, DDTracking offers anatomically plausible and robust tractography, presenting a scalable, adaptable, and end-to-end learnable solution for broad dMRI applications. Code is available at: https://github.com/yishengpoxiao/DDtracking.git
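Abstracting away the generative model itself, streamline tractography reduces to iteratively stepping along predicted orientations. A minimal sketch with a stand-in direction predictor (the callable below plays the role DDTracking assigns to its conditional diffusion module; names and parameters are illustrative):

```python
import numpy as np

def track_streamline(seed, predict_direction, step=0.5, n_steps=100):
    """Propagate a streamline from a seed point by repeatedly querying
    a direction predictor (any callable point -> 3-vector)."""
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        d = np.asarray(predict_direction(pts[-1]), dtype=float)
        d = d / np.linalg.norm(d)          # enforce a unit step direction
        pts.append(pts[-1] + step * d)     # advance one step along it
    return np.stack(pts)                   # (n_steps + 1, 3) point array

# Toy predictor: a constant left-to-right orientation field.
line = track_streamline([0.0, 0.0, 0.0], lambda p: np.array([1.0, 0.0, 0.0]))
```

DDTracking's contribution is in how `predict_direction` is computed: a denoising diffusion model conditioned on local spatial and global temporal embeddings of the streamline so far.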

UNISELF: A Unified Network with Instance Normalization and Self-Ensembled Lesion Fusion for Multiple Sclerosis Lesion Segmentation

Jinwei Zhang, Lianrui Zuo, Blake E. Dewey, Samuel W. Remedios, Yihao Liu, Savannah P. Hays, Dzung L. Pham, Ellen M. Mowry, Scott D. Newsome, Peter A. Calabresi, Aaron Carass, Jerry L. Prince

arxiv preprint · Aug 6, 2025
Automated segmentation of multiple sclerosis (MS) lesions using multicontrast magnetic resonance (MR) images improves efficiency and reproducibility compared to manual delineation, with deep learning (DL) methods achieving state-of-the-art performance. However, these DL-based methods have yet to simultaneously optimize in-domain accuracy and out-of-domain generalization when trained on a single source with limited data, or their performance has been unsatisfactory. To fill this gap, we propose a method called UNISELF, which achieves high accuracy within a single training domain while demonstrating strong generalizability across multiple out-of-domain test datasets. UNISELF employs a novel test-time self-ensembled lesion fusion to improve segmentation accuracy, and leverages test-time instance normalization (TTIN) of latent features to address domain shifts and missing input contrasts. Trained on the ISBI 2015 longitudinal MS segmentation challenge training dataset, UNISELF ranks among the best-performing methods on the challenge test dataset. Additionally, UNISELF outperforms all benchmark methods trained on the same ISBI training data across diverse out-of-domain test datasets, including the public MICCAI 2016 and UMCL datasets as well as a private multisite dataset. These test datasets exhibit domain shifts and/or missing contrasts caused by variations in acquisition protocols, scanner types, and imaging artifacts arising from imperfect acquisition. Our code is available at https://github.com/uponacceptance.
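Test-time instance normalization counters per-channel intensity shifts by standardizing each feature channel with its own spatial statistics, computed on the test input itself. A minimal NumPy sketch of the idea (illustrative, not the UNISELF implementation):

```python
import numpy as np

def instance_norm(x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Instance normalization of a feature map of shape (C, H, W):
    each channel is standardized over its own spatial mean and std."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    return (x - mean) / (std + eps)

# A domain shift modeled as a per-channel intensity scale/offset
# is (almost exactly) removed by the normalization.
rng = np.random.default_rng(0)
feat = rng.normal(size=(3, 8, 8))
shifted = 2.5 * feat + 1.0                     # simulated scanner shift
diff = np.abs(instance_norm(feat) - instance_norm(shifted)).max()
```

Because the statistics come from the test instance rather than the training set, affine intensity differences between scanners cancel out, which is why TTIN helps under domain shift.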

Dynamic neural network modulation associated with rumination in major depressive disorder: a prospective observational comparative analysis of cognitive behavioral therapy and pharmacotherapy.

Katayama N, Shinagawa K, Hirano J, Kobayashi Y, Nakagawa A, Umeda S, Kamiya K, Tajima M, Amano M, Nogami W, Ihara S, Noda S, Terasawa Y, Kikuchi T, Mimura M, Uchida H

pubmed · Aug 6, 2025
Cognitive behavioral therapy (CBT) and pharmacotherapy are primary treatments for major depressive disorder (MDD). However, their differential effects on the neural networks associated with rumination, or repetitive negative thinking, remain poorly understood. This study included 135 participants, whose rumination severity was measured using the rumination response scale (RRS) and whose resting brain activity was measured using functional magnetic resonance imaging (fMRI) at baseline and after 16 weeks. MDD patients received either standard CBT based on Beck's manual (n = 28) or pharmacotherapy (n = 32). Using a hidden Markov model, we observed that MDD patients exhibited increased activity in the default mode network (DMN) and decreased occupancies in the sensorimotor and central executive networks (CEN). The DMN occurrence rate correlated positively with rumination severity. CBT, while not specifically designed to target rumination, reduced DMN occurrence rate and facilitated transitions toward a CEN-dominant brain state as part of broader therapeutic effects. Pharmacotherapy shifted DMN activity to the posterior region of the brain. These findings suggest that CBT and pharmacotherapy modulate brain network dynamics related to rumination through distinct therapeutic pathways.
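The DMN "occurrence rate" from a hidden Markov model is simply the fraction of time points the decoded state sequence spends in each state (often called fractional occupancy). A toy sketch (state labels and the sequence are illustrative, not the study's data):

```python
import numpy as np

def fractional_occupancy(states: np.ndarray, n_states: int) -> np.ndarray:
    """Fraction of time points assigned to each hidden state in a
    decoded HMM state sequence."""
    return np.bincount(states, minlength=n_states) / states.size

# Toy decoded sequence over 10 fMRI volumes; 0=DMN, 1=CEN, 2=sensorimotor.
seq = np.array([0, 0, 1, 0, 2, 0, 1, 1, 0, 0])
occ = fractional_occupancy(seq, 3)   # DMN occupies 6/10 of the scan
```

It is this per-subject occupancy vector that the study correlates with rumination severity and tracks before and after treatment.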

EATHOA: Elite-evolved hiking algorithm for global optimization and precise multi-thresholding image segmentation in intracerebral hemorrhage images.

Abdel-Salam M, Houssein EH, Emam MM, Samee NA, Gharehchopogh FS, Bacanin N

pubmed · Aug 6, 2025
Intracerebral hemorrhage (ICH) is a life-threatening condition caused by bleeding in the brain, with high mortality rates, particularly in the acute phase. Accurate diagnosis through medical image segmentation plays a crucial role in early intervention and treatment. However, existing segmentation methods, such as region-growing, clustering, and deep learning, face significant limitations when applied to complex images like ICH, especially in multi-threshold image segmentation (MTIS). As the number of thresholds increases, these methods often become computationally expensive and exhibit degraded segmentation performance. To address these challenges, this paper proposes an Elite-Adaptive-Turbulent Hiking Optimization Algorithm (EATHOA), an enhanced version of the Hiking Optimization Algorithm (HOA), specifically designed for high-dimensional and multimodal optimization problems like ICH image segmentation. EATHOA integrates three novel strategies including Elite Opposition-Based Learning (EOBL) for improving population diversity and exploration, Adaptive k-Average-Best Mutation (AKAB) for dynamically balancing exploration and exploitation, and a Turbulent Operator (TO) for escaping local optima and enhancing the convergence rate. Extensive experiments were conducted on the CEC2017 and CEC2022 benchmark functions to evaluate EATHOA's global optimization performance, where it consistently outperformed other state-of-the-art algorithms. The proposed EATHOA was then applied to solve the MTIS problem in ICH images at six different threshold levels. EATHOA achieved peak values of PSNR (34.4671), FSIM (0.9710), and SSIM (0.8816), outperforming recent methods in segmentation accuracy and computational efficiency. These results demonstrate the superior performance of EATHOA and its potential as a powerful tool for medical image analysis, offering an effective and computationally efficient solution for the complex challenges of ICH image segmentation.
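Once an optimizer such as EATHOA has chosen the threshold values, multi-threshold image segmentation itself is a simple binning step: n thresholds partition the intensity range into n+1 classes. A minimal NumPy sketch (toy image and thresholds, not the paper's pipeline):

```python
import numpy as np

def apply_thresholds(image: np.ndarray, thresholds) -> np.ndarray:
    """Map each pixel to a class label given a list of intensity
    thresholds; n thresholds yield labels 0..n."""
    return np.digitize(image, sorted(thresholds))

img = np.array([[10, 80, 160],
                [200, 40, 120]])
labels = apply_thresholds(img, [50, 100, 150])  # 4 classes from 3 thresholds
```

The hard part, and the paper's focus, is choosing the thresholds: the search space grows combinatorially with the threshold count, which is what motivates a metaheuristic like EATHOA over exhaustive search.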
