Page 45 of 93924 results

Cognition-Eye-Brain Connection in Alzheimer's Disease Spectrum Revealed by Multimodal Imaging.

Shi Y, Shen T, Yan S, Liang J, Wei T, Huang Y, Gao R, Zheng N, Ci R, Zhang M, Tang X, Qin Y, Zhu W

PubMed · Jun 29, 2025
Background: The connection between cognition, eye, and brain remains inconclusive in Alzheimer's disease (AD) spectrum disorders. Purpose: To explore the relationship between cognitive function, retinal biometrics, and brain alterations in the AD spectrum. Study Type: Prospective. Population: Healthy control (HC) (n = 16), subjective cognitive decline (SCD) (n = 35), mild cognitive impairment (MCI) (n = 18), and AD (n = 7) groups. Field Strength/Sequence: 3-T, 3D T1-weighted Brain Volume (BRAVO) and resting-state functional MRI (fMRI). Assessment: In all subgroups, cortical thickness was measured from BRAVO images and segmented using the Desikan-Killiany-Tourville (DKT) atlas. The fractional amplitude of low-frequency fluctuations (FALFF) and regional homogeneity (ReHo) were measured from fMRI using voxel-based analysis. The eye was imaged by optical coherence tomography angiography (OCTA), with the deep learning model FARGO segmenting the foveal avascular zone (FAZ) and retinal vessels. FAZ area and perimeter, retinal blood vessel curvature (RBVC), and thicknesses of the retinal nerve fiber layer (RNFL) and ganglion cell layer-inner plexiform layer (GCL-IPL) were calculated. Cognition-eye-brain associations were compared across the HC group and each AD spectrum stage. Statistical Tests: Multivariable linear regression analysis. Statistical significance was set at p < 0.05 with FWE correction for fMRI and p < 1/62 (Bonferroni-corrected) for structural analyses. Results: Reductions of FALFF in temporal regions, especially the left superior temporal gyrus (STG) in MCI patients, were significantly linked to decreased RNFL thickness and increased FAZ area. In AD patients, reduced ReHo values in occipital regions, especially the right middle occipital gyrus (MOG), were significantly associated with an enlarged FAZ area. The SCD group showed widespread cortical thickening significantly associated with all aforementioned retinal biometrics, with notable thickening in the right fusiform gyrus (FG) and right parahippocampal gyrus (PHG) correlating with reduced GCL-IPL thickness. Data Conclusion: Brain function and structure may be associated with cognition and retinal biometrics across the AD spectrum. Specifically, cognition-eye-brain connections may be present in SCD. Evidence Level: 2. Technical Efficacy: Stage 3.
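As a rough illustration of the statistical setup described above (not the authors' code), the sketch below fits a multivariable linear model by ordinary least squares and applies the paper's Bonferroni-corrected threshold of p < 1/62; the variable names and toy data are invented.

```python
import numpy as np

def fit_multivariable_linear(X, y):
    """Ordinary least squares for a cognition ~ retinal-metrics style model.
    X: (n_samples, n_predictors), y: (n_samples,). Returns [intercept, betas...]."""
    X1 = np.column_stack([np.ones(len(X)), X])  # prepend an intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

# Bonferroni-corrected significance threshold used for the structural analyses
p_threshold = 1 / 62

# Toy, noise-free data: outcome generated as 2 + 0.5*x1 - 1.0*x2
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = 2.0 + 0.5 * X[:, 0] - 1.0 * X[:, 1]
beta = fit_multivariable_linear(X, y)
```

With noise-free data the fit recovers the generating coefficients exactly, which makes the sketch easy to sanity-check.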

Physics informed guided diffusion for accelerated multi-parametric MRI reconstruction

Perla Mayo, Carolin M. Pirkl, Alin Achim, Bjoern Menze, Mohammad Golbabaee

arXiv preprint · Jun 29, 2025
We introduce MRF-DiPh, a novel physics-informed denoising diffusion approach for multiparametric tissue mapping from highly accelerated, transient-state quantitative MRI acquisitions such as Magnetic Resonance Fingerprinting (MRF). Our method is derived from a proximal splitting formulation, incorporating a pretrained denoising diffusion model as an effective image prior to regularize the MRF inverse problem. Further, during reconstruction it simultaneously enforces two key physical constraints: (1) k-space measurement consistency and (2) adherence to the Bloch response model. Numerical experiments on in vivo brain scan data show that MRF-DiPh outperforms deep learning and compressed sensing MRF baselines, providing more accurate parameter maps while better preserving measurement fidelity and physical model consistency, which is critical for reliably solving inverse problems in medical imaging.
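The proximal-splitting idea, alternating a prior (denoiser) step with a k-space measurement-consistency projection, can be sketched as a toy plug-and-play loop. This is a minimal illustration rather than MRF-DiPh itself: soft-thresholding stands in for the pretrained diffusion denoiser, and the Bloch response constraint is omitted.

```python
import numpy as np

def data_consistency(x, k_meas, mask):
    """Replace x's k-space at the acquired locations with the measured samples."""
    k = np.fft.fft2(x)
    k[mask] = k_meas[mask]
    return np.real(np.fft.ifft2(k))

def prior_step(x, lam=0.01):
    """Stand-in proximal prior step (soft-thresholding); a pretrained
    denoising diffusion model would be plugged in here instead."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def pnp_reconstruct(k_meas, mask, n_iters=20, lam=0.01):
    x = np.real(np.fft.ifft2(np.where(mask, k_meas, 0)))  # zero-filled init
    for _ in range(n_iters):
        x = prior_step(x, lam)                  # image-prior step
        x = data_consistency(x, k_meas, mask)   # measurement-consistency step
    return x

# Sanity check: with a fully sampled mask, the consistency step is exact
truth = np.zeros((8, 8)); truth[2, 3] = 1.0
full_mask = np.ones((8, 8), dtype=bool)
recon = pnp_reconstruct(np.fft.fft2(truth), full_mask)
```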

Hierarchical Characterization of Brain Dynamics via State Space-based Vector Quantization

Yanwu Yang, Thomas Wolfers

arXiv preprint · Jun 28, 2025
Understanding brain dynamics through functional Magnetic Resonance Imaging (fMRI) remains a fundamental challenge in neuroscience, particularly in capturing how the brain transitions between various functional states. Recently, metastability, which refers to temporarily stable brain states, has offered a promising paradigm to quantify complex brain signals into interpretable, discretized representations. In particular, compared to cluster-based machine learning approaches, tokenization approaches leveraging vector quantization have shown promise in representation learning with powerful reconstruction and predictive capabilities. However, most existing methods ignore brain transition dependencies and lack a quantification of brain dynamics into representative and stable embeddings. In this study, we propose a Hierarchical State space-based Tokenization network, termed HST, which quantizes brain states and transitions in a hierarchical structure based on a state space-based model. We introduce a refined clustered Vector-Quantization Variational AutoEncoder (VQ-VAE) that incorporates quantization error feedback and clustering to improve quantization performance while facilitating metastability with representative and stable token representations. We validate our HST on two public fMRI datasets, demonstrating its effectiveness in quantifying the hierarchical dynamics of the brain and its potential in disease diagnosis and reconstruction performance. Our method offers a promising framework for the characterization of brain dynamics, facilitating the analysis of metastability.
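The core vector-quantization step a VQ-VAE-style tokenizer performs, assigning each latent to its nearest codebook entry, can be sketched as follows. This is a generic illustration with an invented codebook, not the HST implementation.

```python
import numpy as np

def vector_quantize(z, codebook):
    """Map each latent vector to its nearest codebook entry (the discretization
    step of a VQ-VAE). z: (n, d), codebook: (K, d). Returns quantized latents
    and their integer token indices."""
    # Squared Euclidean distance between every latent and every code
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)          # one discrete token per latent
    return codebook[idx], idx

# Toy 2-D latents against a 3-entry codebook
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, -1.0]])
z = np.array([[0.1, -0.1], [0.9, 1.2]])
zq, tokens = vector_quantize(z, codebook)
```

In a full VQ-VAE the codebook is learned and a straight-through gradient plus a commitment loss drive training; only the assignment step is shown here.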

Deep Learning-Based Automated Detection of the Middle Cerebral Artery in Transcranial Doppler Ultrasound Examinations.

Lee H, Shi W, Mukaddim RA, Brunelle E, Palisetti A, Imaduddin SM, Rajendram P, Incontri D, Lioutas VA, Heldt T, Raju BI

PubMed · Jun 28, 2025
Transcranial Doppler (TCD) ultrasound has significant clinical value for assessing cerebral hemodynamics, but its reliance on operator expertise limits broader clinical adoption. In this work, we present a lightweight real-time deep learning-based approach capable of automatically identifying the middle cerebral artery (MCA) in TCD Color Doppler images. Two state-of-the-art object detection models, YOLOv10 and Real-Time Detection Transformers (RT-DETR), were investigated for automated MCA detection in real time. TCD Color Doppler data (41 subjects; 365 videos; 61,611 frames) were collected from neurologically healthy individuals (n = 31) and stroke patients (n = 10). MCA bounding box annotations were performed by clinical experts on all frames. Model training consisted of pretraining on a large abdominal ultrasound dataset followed by fine-tuning on the acquired TCD data. Detection performance at the instance and frame levels, as well as inference speed, was assessed through four-fold cross-validation. Inter-rater agreement between the model and two human expert readers was assessed using the distance between bounding boxes, and inter-rater variability was quantified using the individual equivalence coefficient (IEC) metric. Both YOLOv10 and RT-DETR showed comparable frame-level accuracy for MCA presence, with F1 scores of 0.884 ± 0.023 and 0.884 ± 0.019, respectively. YOLOv10 outperformed RT-DETR in instance-level localization accuracy (AP: 0.817 vs. 0.780) and had considerably faster inference on a desktop CPU (11.6 ms vs. 91.14 ms). Furthermore, YOLOv10 showed an average inference time of 36 ms per frame on a tablet device. The IEC was -1.08 (95% confidence interval: [-1.45, -0.19]), showing that the AI predictions deviated less from each reader than the readers' annotations deviated from each other.
Real-time automated detection of the MCA is feasible and can be implemented on mobile platforms, potentially enabling wider clinical adoption by less-trained operators in point-of-care settings.
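Two common ways to compare bounding boxes, relevant to the localization and inter-rater analysis above, are sketched below. The abstract does not specify the exact distance metric beyond "distance between bounding boxes", so these are generic examples, not the study's code.

```python
def box_iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def center_distance(a, b):
    """Euclidean distance between box centers, one simple inter-rater measure."""
    cax, cay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    cbx, cby = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    return ((cax - cbx) ** 2 + (cay - cby) ** 2) ** 0.5
```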

Revealing the Infiltration: Prognostic Value of Automated Segmentation of Non-Contrast-Enhancing Tumor in Glioblastoma

Gomez-Mahiques, M., Lopez-Mateu, C., Gil-Terron, F. J., Montosa-i-Mico, V., Svensson, S. F., Mendoza Mireles, E. E., Vik-Mo, E. O., Emblem, K., Balana, C., Puig, J., Garcia-Gomez, J. M., Fuster-Garcia, E.

medRxiv preprint · Jun 28, 2025
Background: Precise delineation of non-contrast-enhancing tumor (nCET) in glioblastoma (GB) is critical for maximal safe resection, yet routine imaging cannot reliably separate infiltrative tumor from vasogenic edema. The aim of this study was to develop and validate an automated method to identify nCET and assess its prognostic value. Methods: Pre-operative T2-weighted and FLAIR MRI from 940 patients with newly diagnosed GB in four multicenter cohorts were analyzed. A deep-learning model segmented enhancing tumor, edema, and necrosis; a non-local spatially varying finite mixture model then isolated edema subregions containing nCET. The ratio of nCET to total edema volume, the Diffuse Infiltration Index (DII), was calculated. Associations between DII and overall survival (OS) were examined with Kaplan-Meier curves and multivariable Cox regression. Results: The algorithm distinguished nCET from vasogenic edema in 97.5% of patients, showing a mean signal-intensity gap > 5%. Higher DII stratified patients with shorter OS. In the NCT03439332 cohort, DII above the optimal threshold doubled the hazard of death (hazard ratio 2.09, 95% confidence interval 1.34-3.25; p = 0.0012) and reduced median survival by 122 days. Significant, though smaller, effects were confirmed in GLIOCAT & BraTS (hazard ratio 1.31; p = 0.022), OUS (hazard ratio 1.28; p = 0.007), and in pooled analysis (hazard ratio 1.28; p = 0.0003). DII remained an independent predictor after adjustment for age, extent of resection, and MGMT methylation. Conclusions: We present a reproducible, server-hosted tool for automated nCET delineation and DII biomarker extraction that enables robust, independent prognostic stratification. It promises to guide supramaximal surgical planning and personalized neuro-oncology research and care.
Key Points:
- KP1: Robust automated MRI tool segments non-contrast-enhancing (nCET) glioblastoma.
- KP2: Introduced and validated the Diffuse Infiltration Index with prognostic value.
- KP3: nCET mapping enables RANO supramaximal resection for personalized surgery.
Importance of the Study: This study underscores the clinical importance of accurately delineating non-contrast-enhancing tumor (nCET) regions in glioblastoma (GB) using standard MRI. Despite their lack of contrast enhancement, nCET areas often harbor infiltrative tumor cells critical for disease progression and recurrence. By integrating deep learning segmentation with a non-local finite mixture model, we developed a reproducible, automated methodology for nCET delineation and introduced the Diffuse Infiltration Index (DII), a novel imaging biomarker. Higher DII values were independently associated with reduced overall survival across large, heterogeneous cohorts. These findings highlight the prognostic relevance of imaging-defined infiltration patterns and support the use of nCET segmentation in clinical decision-making. Importantly, this methodology aligns with and operationalizes recent RANO criteria on supramaximal resection, offering a practical, image-based tool to improve surgical planning. In doing so, our work advances efforts toward more personalized neuro-oncological care, potentially improving outcomes while minimizing functional compromise.
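The DII itself is a simple ratio of segmented volumes. Assuming binary masks for nCET and total edema (an illustrative sketch, not the published pipeline), it could be computed as:

```python
import numpy as np

def diffuse_infiltration_index(ncet_mask, edema_mask, voxel_volume_ml=1.0):
    """DII = nCET volume / total edema volume, from binary segmentation masks.
    voxel_volume_ml cancels in the ratio but is kept for clarity."""
    ncet_vol = ncet_mask.sum() * voxel_volume_ml
    edema_vol = edema_mask.sum() * voxel_volume_ml
    if edema_vol == 0:
        raise ValueError("edema mask is empty")
    return ncet_vol / edema_vol

# Toy 3D masks: 12 nCET voxels inside 48 edema voxels
edema = np.zeros((4, 4, 4), dtype=bool); edema[:3] = True   # 48 voxels
ncet = np.zeros_like(edema); ncet[0, :3, :] = True          # 12 voxels
dii = diffuse_infiltration_index(ncet, edema)
```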

CA-Diff: Collaborative Anatomy Diffusion for Brain Tissue Segmentation

Qilong Xing, Zikai Song, Yuteng Ye, Yuke Chen, Youjia Zhang, Na Feng, Junqing Yu, Wei Yang

arXiv preprint · Jun 28, 2025
Segmentation of brain structures from MRI is crucial for evaluating brain morphology, yet existing CNN- and transformer-based methods struggle to delineate complex structures accurately. While current diffusion models have shown promise in image segmentation, they are inadequate when applied directly to brain MRI because they neglect anatomical information. To address this, we propose Collaborative Anatomy Diffusion (CA-Diff), a framework integrating spatial anatomical features to enhance the segmentation accuracy of the diffusion model. Specifically, we introduce a distance field as an auxiliary anatomical condition to provide global spatial context, alongside a collaborative diffusion process to model its joint distribution with anatomical structures, enabling effective utilization of anatomical features for segmentation. Furthermore, we introduce a consistency loss to refine relationships between the distance field and anatomical structures, and design a time-adapted channel attention module to enhance the U-Net feature fusion procedure. Extensive experiments show that CA-Diff outperforms state-of-the-art (SOTA) methods.
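The auxiliary distance-field condition can be pictured as a distance transform of a segmentation mask. Below is a brute-force numpy sketch; in practice scipy.ndimage.distance_transform_edt computes this efficiently, and nothing here comes from the CA-Diff code.

```python
import numpy as np

def distance_field(mask):
    """Distance of every voxel to the nearest foreground voxel, brute force.
    O(N^2) and illustrative only; use an EDT routine for real volumes."""
    fg = np.argwhere(mask)
    out = np.empty(mask.shape)
    for idx in np.ndindex(mask.shape):
        out[idx] = np.sqrt(((fg - np.array(idx)) ** 2).sum(axis=1)).min()
    return out

# Single foreground voxel at the center of a 3x3 grid
mask = np.zeros((3, 3), dtype=bool); mask[1, 1] = True
df = distance_field(mask)
```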

Novel Artificial Intelligence-Driven Infant Meningitis Screening From High-Resolution Ultrasound Imaging.

Sial HA, Carandell F, Ajanovic S, Jiménez J, Quesada R, Santos F, Buck WC, Sidat M, Bassat Q, Jobst B, Petrone P

PubMed · Jun 28, 2025
Infant meningitis can be a life-threatening disease and requires prompt and accurate diagnosis to prevent severe outcomes or death. Gold-standard diagnosis requires lumbar puncture (LP) to obtain and analyze cerebrospinal fluid (CSF). Despite being standard practice, LPs are invasive, pose risks for the patient, and often yield negative results, either due to contamination with red blood cells from the puncture itself or because LPs are routinely performed to rule out a life-threatening infection despite the disease's relatively low incidence. Furthermore, in low-income settings where incidence is highest, LPs and CSF exams are rarely feasible, and suspected meningitis cases are generally treated empirically. There is a growing need for non-invasive, accurate diagnostic methods. We developed a three-stage deep learning framework using Neosonics ultrasound technology, applied to 30 infants with suspected meningitis and a permeable fontanelle at three Spanish university hospitals (2021-2023). In stage 1, 2194 images were processed for quality control using a vessel/non-vessel model, with a focus on vessel identification and manual removal of images exhibiting artifacts such as poor coupling and clutter. This refinement process resulted in a final cohort comprising 16 patients: 6 cases (336 images) and 10 controls (445 images), yielding 781 images for the second stage. The second stage involved the use of a deep learning model to classify images based on a white blood cell count threshold (set at 30 cells/mm<sup>3</sup>) into control or meningitis categories. The third stage integrated explainable artificial intelligence (XAI) methods, such as Grad-CAM visualizations, alongside image statistical analysis, to provide transparency and interpretability of the model's decision-making process in our artificial intelligence-driven screening tool.
Our approach achieved 96% accuracy in quality control and 93% precision and 92% accuracy in image-level meningitis detection, with an overall patient-level accuracy of 94%. It identified 6 meningitis cases and 10 controls with 100% sensitivity and 90% specificity, demonstrating only a single misclassification. The use of gradient-weighted class activation mapping-based XAI significantly enhanced diagnostic interpretability, and, to further refine our insights, we incorporated a statistics-based XAI approach. By analyzing image metrics such as entropy and standard deviation, we identified texture variations in the images attributable to the presence of cells, which improved the interpretability of our diagnostic tool. This study supports the efficacy of a multi-stage deep learning model for non-invasive screening of infant meningitis and its potential to guide the need for LPs. It also highlights the transformative potential of artificial intelligence in medical diagnostic screening for neonatal health care, paving the way for future research and innovations.
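The statistics-based XAI step boils down to simple per-image metrics. A sketch of the two mentioned, Shannon entropy and standard deviation, for a grayscale patch (the bin count is an arbitrary choice, not taken from the paper):

```python
import numpy as np

def patch_statistics(img, n_bins=32):
    """Shannon entropy (bits) and standard deviation of a grayscale patch;
    texture with more distinct intensity levels yields higher entropy."""
    img = np.asarray(img, dtype=float)
    hist, _ = np.histogram(img, bins=n_bins, range=(img.min(), img.max() + 1e-12))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before taking logs
    entropy = -(p * np.log2(p)).sum()
    return entropy, img.std()

# A perfectly uniform patch has zero entropy and zero spread
entropy, sd = patch_statistics(np.full((8, 8), 3.0))
```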

Deep learning for hydrocephalus prognosis: Advances, challenges, and future directions: A review.

Huang J, Shen N, Tan Y, Tang Y, Ding Z

PubMed · Jun 27, 2025
Diagnosis of hydrocephalus involves a careful review of the patient's history and a thorough neurological assessment. Diagnosis has traditionally depended on physicians' professional judgment grounded in clinical experience, but with the advance of precision medicine and individualized treatment, such experience-based methods no longer keep pace with clinical requirements. To meet this need, the medical community is actively pursuing data-driven intelligent diagnostic solutions. Building prognosis prediction models for hydrocephalus has thus become a new focus, among which deep learning-based prediction systems offer new technical advantages for clinical diagnosis and treatment decisions. Over the past several years, deep learning algorithms have demonstrated conspicuous advantages in medical image analysis. Studies have reported that convolutional neural networks can diagnose hydrocephalus from magnetic resonance imaging with accuracy reaching 90%, with sensitivity and specificity better than those of traditional methods. As deep learning sees wider medical use, its successful application to modeling hydrocephalus prognosis has also drawn broad attention and recognition. This review explores the application of deep learning in hydrocephalus diagnosis and prognosis, focusing on image-based, biochemical, and structured data models. Highlighting recent advancements, challenges, and future trajectories, the study emphasizes deep learning's potential to enhance personalized treatment and improve outcomes.
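The accuracy, sensitivity, and specificity figures quoted above all derive from a confusion matrix. For reference, a minimal generic helper (not from any cited study):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall on positives), and specificity
    (recall on negatives) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity
```

For example, 9 true positives, 1 false positive, 8 true negatives, and 2 false negatives give 85% accuracy, ~82% sensitivity, and ~89% specificity.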

BrainMT: A Hybrid Mamba-Transformer Architecture for Modeling Long-Range Dependencies in Functional MRI Data

Arunkumar Kannan, Martin A. Lindquist, Brian Caffo

arXiv preprint · Jun 27, 2025
Recent advances in deep learning have made it possible to predict phenotypic measures directly from functional magnetic resonance imaging (fMRI) brain volumes, sparking significant interest in the neuroimaging community. However, existing approaches, primarily based on convolutional neural networks or transformer architectures, often struggle to model the complex relationships inherent in fMRI data, limited by their inability to capture long-range spatial and temporal dependencies. To overcome these shortcomings, we introduce BrainMT, a novel hybrid framework designed to efficiently learn and integrate long-range spatiotemporal attributes in fMRI data. Our framework operates in two stages: (1) a bidirectional Mamba block with a temporal-first scanning mechanism to capture global temporal interactions in a computationally efficient manner; and (2) a transformer block leveraging self-attention to model global spatial relationships across the deep features processed by the Mamba block. Extensive experiments on two large-scale public datasets, the UK Biobank and the Human Connectome Project, demonstrate that BrainMT achieves state-of-the-art performance on both classification (sex prediction) and regression (cognitive intelligence prediction) tasks, outperforming existing methods by a significant margin. Our code and implementation details will be made publicly available at https://github.com/arunkumar-kannan/BrainMT-fMRI

Machine learning to identify hypoxic-ischemic brain injury on early head CT after pediatric cardiac arrest.

Kirschen MP, Li J, Elmer J, Manteghinejad A, Arefan D, Graham K, Morgan RW, Nadkarni V, Diaz-Arrastia R, Berg R, Topjian A, Vossough A, Wu S

PubMed · Jun 27, 2025
We aimed to train deep learning models to detect hypoxic-ischemic brain injury (HIBI) on early CT scans after pediatric out-of-hospital cardiac arrest (OHCA) and to determine whether the models could identify HIBI that was not visually appreciable to a radiologist. This was a retrospective study of children who had a CT scan within 24 hours of OHCA, compared with age-matched controls. We designed models to detect HIBI by discriminating CT images from OHCA cases and controls, and to predict death and unfavorable outcome (PCPC 4-6 at hospital discharge) among cases. Model performance was measured by AUC. We trained a second model to distinguish OHCA cases with radiologist-identified HIBI from controls without OHCA and tested the model on OHCA cases without radiologist-identified HIBI. We compared outcomes between OHCA cases with and without model-categorized HIBI. We analyzed 117 OHCA cases (age 3.1 [0.7-12.2] years); 43% died and 58% had unfavorable outcome. Median time from arrest to CT was 2.1 [1.0-7.2] hours. Deep learning models discriminated OHCA cases from controls with a mean AUC of 0.87 ± 0.05. Among OHCA cases, mean AUCs for predicting death and unfavorable outcome were 0.79 ± 0.06 and 0.69 ± 0.06, respectively. Mean AUC was 0.98 ± 0.01 for discriminating between 44 OHCA cases with radiologist-identified HIBI and controls. Among 73 OHCA cases without radiologist-identified HIBI, the model identified 36% as having presumed HIBI; 31% of these died, compared to 17% of cases without HIBI identified either radiologically or by the model (p = 0.174). Deep learning models can identify HIBI on early CT images after pediatric OHCA and can detect some presumed HIBI not visually identified by a radiologist.
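AUC, the performance measure used throughout the study, equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one, with ties counting half. A small rank-based sketch (generic, not the authors' evaluation code):

```python
def auc_from_scores(pos_scores, neg_scores):
    """Empirical AUC over all positive/negative score pairs."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5   # ties contribute half a win
    return wins / (len(pos_scores) * len(neg_scores))
```

Perfect separation yields 1.0, a coin-flip classifier 0.5, and perfectly inverted scores 0.0.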