Page 6 of 53522 results

CA-Diff: Collaborative Anatomy Diffusion for Brain Tissue Segmentation

Qilong Xing, Zikai Song, Yuteng Ye, Yuke Chen, Youjia Zhang, Na Feng, Junqing Yu, Wei Yang

arXiv preprint, Jun 28 2025
Segmentation of brain structures from MRI is crucial for evaluating brain morphology, yet existing CNN and transformer-based methods struggle to delineate complex structures accurately. While current diffusion models have shown promise in image segmentation, they are inadequate when applied directly to brain MRI because they neglect anatomical information. To address this, we propose Collaborative Anatomy Diffusion (CA-Diff), a framework integrating spatial anatomical features to enhance the segmentation accuracy of the diffusion model. Specifically, we introduce a distance field as an auxiliary anatomical condition to provide global spatial context, alongside a collaborative diffusion process that models its joint distribution with anatomical structures, enabling effective use of anatomical features for segmentation. Furthermore, we introduce a consistency loss to refine the relationship between the distance field and anatomical structures, and design a time-adapted channel attention module to enhance the U-Net feature fusion procedure. Extensive experiments show that CA-Diff outperforms state-of-the-art (SOTA) methods.
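The distance-field condition described above can be sketched in a few lines: given a binary anatomy mask, each voxel is tagged with its Euclidean distance to the nearest structure voxel, yielding a global spatial context map. This is a brute-force toy version (the function name and grid are illustrative, not the paper's implementation):

```python
import numpy as np

def distance_field(mask):
    # Brute-force Euclidean distance from every voxel to the nearest
    # foreground voxel; O(N*M) in voxel counts, fine for a toy grid.
    fg = np.argwhere(mask)
    coords = np.argwhere(np.ones_like(mask))
    d = np.linalg.norm(coords[:, None, :] - fg[None, :, :], axis=-1)
    return d.min(axis=1).reshape(mask.shape)

mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True  # a single "anatomical structure" voxel
df = distance_field(mask)
```

In practice one would use an optimized distance transform over full 3D volumes; the brute-force form here is only for clarity.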

Hierarchical Characterization of Brain Dynamics via State Space-based Vector Quantization

Yanwu Yang, Thomas Wolfers

arXiv preprint, Jun 28 2025
Understanding brain dynamics through functional magnetic resonance imaging (fMRI) remains a fundamental challenge in neuroscience, particularly in capturing how the brain transitions between functional states. Recently, metastability, which refers to temporarily stable brain states, has offered a promising paradigm for discretizing complex brain signals into interpretable representations. In particular, compared to cluster-based machine learning approaches, tokenization approaches leveraging vector quantization have shown promise in representation learning, with powerful reconstruction and predictive capabilities. However, most existing methods ignore brain transition dependencies and do not quantize brain dynamics into representative, stable embeddings. In this study, we propose a Hierarchical State space-based Tokenization network, termed HST, which quantizes brain states and transitions hierarchically using a state space-based model. We introduce a refined clustered Vector-Quantized Variational AutoEncoder (VQ-VAE) that incorporates quantization error feedback and clustering to improve quantization performance while promoting metastability through representative, stable token representations. We validate HST on two public fMRI datasets, demonstrating its effectiveness in quantifying the hierarchical dynamics of the brain and its potential for disease diagnosis and signal reconstruction. Our method offers a promising framework for characterizing brain dynamics and facilitates the analysis of metastability.
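The VQ-VAE quantization step this abstract builds on is, at its core, a nearest-codebook lookup; a minimal numpy sketch (the codebook and latents below are made-up toy values, not HST's learned parameters):

```python
import numpy as np

def quantize(z, codebook):
    # Nearest-neighbour lookup: map each latent vector to the index of
    # the closest codebook entry -- the basic VQ-VAE quantization step.
    d = np.linalg.norm(z[:, None, :] - codebook[None, :, :], axis=-1)
    idx = d.argmin(axis=1)
    return idx, codebook[idx]

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])   # two toy "brain state" tokens
z = np.array([[0.1, -0.1], [0.9, 1.2]])         # two toy latent vectors
idx, zq = quantize(z, codebook)
```

A trained VQ-VAE additionally passes gradients through this lookup and, per the abstract, feeds the quantization error back to refine the clustering.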

Quantifying Sagittal Craniosynostosis Severity: A Machine Learning Approach With CranioRate.

Tao W, Somorin TJ, Kueper J, Dixon A, Kass N, Khan N, Iyer K, Wagoner J, Rogers A, Whitaker R, Elhabian S, Goldstein JA

PubMed paper, Jun 27 2025
Objective: To develop and validate machine learning (ML) models for objective and comprehensive quantification of sagittal craniosynostosis (SCS) severity, enhancing clinical assessment, management, and research. Design: A cross-sectional study combining the analysis of computed tomography (CT) scans and expert ratings. Setting: The study was conducted at a children's hospital and a major computer imaging institution. Our survey collected expert ratings from participating surgeons. Participants: The study included 195 patients with nonsyndromic SCS, 221 patients with nonsyndromic metopic craniosynostosis (CS), and 178 age-matched controls. Fifty-four craniofacial surgeons participated in rating 20 patients' head CT scans. Interventions: Computed tomography scans for cranial morphology assessment and a radiographic diagnosis of nonsyndromic SCS. Main Outcomes: Accuracy of the proposed Sagittal Severity Score (SSS) in predicting expert ratings compared to the cephalic index (CI). Secondary outcomes compared Likert ratings with SCS status, the predictive power of skull-based versus skin-based landmarks, and assessments of an unsupervised ML model, the Cranial Morphology Deviation (CMD), as an alternative that requires no ratings. Results: The SSS achieved significantly higher accuracy than CI in predicting expert responses (P < .05). Likert ratings outperformed SCS status in supervising ML models to quantify within-group variation. Skin-based landmarks demonstrated predictive power equivalent to skull-based landmarks (P < .05, threshold 0.02). The CMD showed a strong correlation with the SSS (Pearson coefficient: 0.92; Spearman coefficient: 0.90; P < .01). Conclusions: The SSS and CMD provide accurate, consistent, and comprehensive quantification of SCS severity. Implementing these data-driven ML models can significantly advance CS care through standardized assessments, enhanced precision, and informed surgical planning.
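The cephalic index baseline that the SSS is compared against is a simple ratio of maximum cranial width to length; a sketch (diagnostic cutoffs vary by clinic, so none is hard-coded here):

```python
def cephalic_index(width_mm, length_mm):
    # Cephalic index: maximum cranial width over maximum cranial length,
    # expressed as a percentage; low values suggest the elongated
    # (scaphocephalic) head shape typical of sagittal craniosynostosis.
    return 100.0 * width_mm / length_mm

ci = cephalic_index(120.0, 160.0)  # toy measurements in millimetres
```

Because CI collapses the whole cranial shape to one ratio, it misses the within-group variation that the learned SSS and CMD scores capture.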

Hybrid segmentation model and CAViaR-based Xception Maxout network for brain tumor detection using MRI images.

Swapna S, Garapati Y

PubMed paper, Jun 27 2025
A brain tumor (BT) is a rapid growth of abnormal cells in the brain. If a BT is not identified and treated at an early stage, it can be fatal. Although several methods have been developed for segmenting and identifying BTs, detection remains complicated by the variable position and size of tumors. To address these issues, this paper proposes the Conditional Autoregressive Value-at-Risk Xception Maxout Network (Caviar_XM-Net) for BT detection using magnetic resonance imaging (MRI) images. The input MRI image, gathered from the dataset, is denoised using an adaptive bilateral filter (ABF), and tumor region segmentation is performed with BFC-MRFNet-RVSeg. Here, segmentation is carried out separately by Bayesian fuzzy clustering (BFC) and a multi-branch residual fusion network (MRF-Net), and the outputs of the two techniques are combined using the RV coefficient. Image augmentation is performed to increase the number of images available for training. Afterwards, features are extracted, including the local optimal oriented pattern (LOOP), convolutional neural network (CNN) features, the median binary pattern (MBP) with statistical features, and the local Gabor XOR pattern (LGXP). Lastly, BT detection is carried out by Caviar_XM-Net, obtained by combining the Xception model and a deep Maxout network (DMN) with the CAViaR approach. The effectiveness of Caviar_XM-Net is evaluated in terms of sensitivity, accuracy, specificity, precision, and F1-score, attaining 91.59%, 91.36%, 90.83%, 90.99%, and 91.29%, respectively. Hence, Caviar_XM-Net outperforms traditional methods with high efficiency.
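The Maxout unit inside the DMN component takes the elementwise maximum over several affine pieces; a minimal numpy sketch with toy weights (not the paper's trained network):

```python
import numpy as np

def maxout(x, W, b):
    # Maxout activation: k affine pieces z_k = W_k x + b_k, output is
    # their elementwise max. W has shape (k, out, in), b has (k, out).
    z = np.einsum('koi,i->ko', W, x) + b
    return z.max(axis=0)

# Two pieces computing x0 and -x0: their max is |x0|, showing how
# maxout can learn activation shapes (here, an absolute value).
W = np.array([[[1.0, 0.0]], [[-1.0, 0.0]]])  # k=2, out=1, in=2
b = np.zeros((2, 1))
y = maxout(np.array([-3.0, 5.0]), W, b)
```

With enough pieces, a maxout layer can approximate arbitrary convex activations, which is the appeal of the DMN building block here.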

Automated Sella-Turcica Annotation and Mesh Alignment of 3D Stereophotographs for Craniosynostosis Patients Using a PCA-FFNN Based Approach.

Bielevelt F, Chargi N, van Aalst J, Nienhuijs M, Maal T, Delye H, de Jong G

PubMed paper, Jun 27 2025
Craniosynostosis, characterized by the premature fusion of cranial sutures, can lead to significant neurological and developmental complications, necessitating early diagnosis and precise treatment. Traditional cranial morphologic assessment has relied on CT scans, which expose infants to ionizing radiation. Recently, 3D stereophotogrammetry has emerged as a noninvasive alternative, but accurately aligning 3D photographs within standardized reference frames, such as the Sella-turcica-Nasion (S-N) frame, remains a challenge. This study proposes a novel method for predicting the Sella turcica (ST) coordinate from 3D cranial surface models using Principal Component Analysis (PCA) combined with a Feedforward Neural Network (FFNN). The accuracy of this method is compared with the conventional Computed Cranial Focal Point (CCFP) method, which has limitations, especially in cases of asymmetric cranial deformation such as plagiocephaly. A data set of 153 CT scans, including 68 craniosynostosis subjects, was used to train and test the PCA-FFNN model. The results demonstrate that the PCA-FFNN approach outperforms CCFP, achieving significantly lower deviations in ST coordinate predictions (3.61 vs. 8.38 mm, P < 0.001), particularly along the y- and z-axes. In addition, mesh realignment within the S-N reference frame showed improved accuracy with the PCA-FFNN method, evidenced by lower mean deviations and reduced dispersion in distance maps. These findings highlight the potential of the PCA-FFNN approach to provide a more reliable, noninvasive solution for cranial assessment, improving craniosynostosis follow-up and enhancing clinical outcomes.
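The PCA half of the PCA-FFNN pipeline compresses cranial surface coordinates into a few shape scores before a small network maps them to the ST coordinate; a numpy sketch of that projection step (toy 2D points, with the FFNN itself omitted):

```python
import numpy as np

def pca_project(X, n_components):
    # Center the coordinate matrix and project it onto its leading
    # principal directions (via SVD), producing the compact shape
    # descriptor a feedforward network would consume downstream.
    mu = X.mean(axis=0)
    Xc = X - mu
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T, Vt[:n_components], mu

# Four collinear toy "surface points": one component captures all variance.
X = np.array([[0.0, 0.0], [2.0, 0.0], [4.0, 0.0], [6.0, 0.0]])
scores, components, mu = pca_project(X, 1)
```

Note the sign of each principal direction is arbitrary, so only the magnitudes of the scores are meaningful in this sketch.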

Association of Covert Cerebrovascular Disease With Falls Requiring Medical Attention.

Clancy Ú, Puttock EJ, Chen W, Whiteley W, Vickery EM, Leung LY, Luetmer PH, Kallmes DF, Fu S, Zheng C, Liu H, Kent DM

PubMed paper, Jun 27 2025
The impact of covert cerebrovascular disease on falls in the general population is not well known. Here, we determine the time to a first fall following incidentally detected covert cerebrovascular disease during a clinical neuroimaging episode. This longitudinal cohort study assessed computed tomography (CT) and magnetic resonance imaging (MRI) from 2009 to 2019 in patients aged >50 years registered with Kaiser Permanente Southern California, a healthcare organization combining health plan coverage with coordinated medical services, excluding those with prior stroke/dementia. We extracted evidence of incidental covert brain infarcts (CBI) and white matter hyperintensities/hypoattenuation (WMH) from imaging reports using natural language processing. We examined associations of CBI and WMH with falls requiring medical attention, using Cox proportional hazards regression models adjusted for 12 variables including age, sex, ethnicity, multimorbidity, polypharmacy, and incontinence. We assessed 241 050 patients, mean age 64.9 (SD, 10.42) years, 61.3% female, detecting covert cerebrovascular disease in 31.1% over a mean follow-up of 3.04 years. A recorded fall occurred in 21.2% (51 239/241 050) during follow-up. On CT, the single-fall incidence rate per 1000 person-years (p-y) was highest in individuals with both CBI and WMH (129.3 falls/1000 p-y [95% CI, 123.4-135.5]), followed by WMH (109.9 falls/1000 p-y [108.0-111.9]). On MRI, the incidence rate was highest with both CBI and WMH (76.3 falls/1000 p-y [95% CI, 69.7-83.2]), followed by CBI (71.4 falls/1000 p-y [95% CI, 65.9-77.2]). The adjusted hazard ratio for a single index fall in individuals with CBI was 1.13 (95% CI, 1.09-1.17) on CT versus 1.17 (95% CI, 1.08-1.27) on MRI. On CT, the risk of a single index fall increased incrementally for mild (1.37 [95% CI, 1.32-1.43]), moderate (1.57 [95% CI, 1.48-1.67]), and severe WMH (1.57 [95% CI, 1.45-1.70]). On MRI, index fall risk similarly increased with WMH severity: mild (1.11 [95% CI, 1.07-1.17]), moderate (1.21 [95% CI, 1.13-1.28]), and severe WMH (1.34 [95% CI, 1.22-1.46]). In a large population undergoing neuroimaging, CBI and WMH are independently associated with greater risk of an index fall, and increasing WMH severity is incrementally associated with fall risk across imaging modalities.
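The incidence rates quoted in this abstract are crude rates per 1000 person-years; the arithmetic is simply events divided by total exposure time (toy numbers below, not the study's counts):

```python
def incidence_rate(events, person_years):
    # Crude incidence rate per 1000 person-years: event count divided by
    # the summed follow-up time, scaled to a per-1000 denominator.
    return 1000.0 * events / person_years

rate = incidence_rate(50, 400.0)  # 50 falls over 400 person-years
```

Unlike the adjusted hazard ratios, these crude rates do not account for covariates such as age or polypharmacy; that adjustment is what the Cox model provides.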

Deep Learning-Based Prediction of PET Amyloid Status Using MRI.

Kim D, Ottesen JA, Kumar A, Ho BC, Bismuth E, Young CB, Mormino E, Zaharchuk G

PubMed paper, Jun 27 2025
Identifying amyloid-beta (Aβ)-positive patients is essential for Alzheimer's disease (AD) clinical trials and disease-modifying treatments but currently requires PET or cerebrospinal fluid sampling. Previous MRI-based deep learning models, using only T1-weighted (T1w) images, have shown moderate performance. Multi-contrast MRI and PET-based quantitative Aβ deposition were retrospectively obtained from three public datasets: ADNI, OASIS3, and A4. Aβ positivity was defined using each dataset's recommended centiloid threshold. Two EfficientNet models were trained to predict amyloid positivity: one using only T1w images and another incorporating both T1w and T2-FLAIR. Model performance was assessed on an internal held-out test set using AUC, accuracy, sensitivity, and specificity. External validation was conducted on an independent cohort from the Stanford Alzheimer's Disease Research Center. DeLong's and McNemar's tests were used to compare AUC and accuracy, respectively. A total of 4,056 exams (mean [SD] age: 71.6 [6.3] years; 55% female; 55% amyloid-positive) were used for network development, and 149 exams were used for external testing (mean [SD] age: 72.1 [9.6] years; 58% female; 56% amyloid-positive). The multi-contrast model outperformed the single-modality model on the internal held-out test set (AUC: 0.67, 95% CI: 0.65-0.70, P < 0.001; accuracy: 0.63, 95% CI: 0.62-0.65, P < 0.001) compared to the T1w-only model (AUC: 0.61; accuracy: 0.59). Among cognitive subgroups, the highest performance (AUC: 0.71) was observed in mild cognitive impairment. The multi-contrast model also demonstrated consistent performance on the external test set (AUC: 0.65, 95% CI: 0.60-0.71, P = 0.014; accuracy: 0.62, 95% CI: 0.58-0.65, P < 0.001). The use of multi-contrast MRI, specifically incorporating T2-FLAIR in addition to T1w images, significantly improved the prediction of PET-determined amyloid status from MRI using a deep learning approach. Abbreviations: Aβ = amyloid-beta; AD = Alzheimer's disease; AUC = area under the receiver operating characteristic curve; CN = cognitively normal; MCI = mild cognitive impairment; T1w = T1-weighted; T2-FLAIR = T2-weighted fluid-attenuated inversion recovery; FBP = 18F-florbetapir; FBB = 18F-florbetaben; SUVR = standard uptake value ratio.
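The accuracy, sensitivity, and specificity reported for this classifier come from the standard confusion-matrix definitions; a small sketch with made-up labels (not the study's predictions):

```python
def binary_metrics(y_true, y_pred):
    # Confusion-matrix counts for binary labels (1 = amyloid-positive),
    # then accuracy, sensitivity (recall on positives), and specificity
    # (recall on negatives) -- the headline metrics of the abstract.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return acc, sens, spec

acc, sens, spec = binary_metrics([1, 1, 0, 0], [1, 0, 0, 1])
```

AUC, by contrast, is computed over the model's continuous scores rather than hard labels, which is why DeLong's test is used for it and McNemar's for accuracy.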

Towards Scalable and Robust White Matter Lesion Localization via Multimodal Deep Learning

Julia Machnio, Sebastian Nørgaard Llambias, Mads Nielsen, Mostafa Mehdipour Ghazi

arXiv preprint, Jun 27 2025
White matter hyperintensities (WMH) are radiological markers of small vessel disease and neurodegeneration, whose accurate segmentation and spatial localization are crucial for diagnosis and monitoring. While multimodal MRI offers complementary contrasts for detecting and contextualizing WM lesions, existing approaches often lack flexibility in handling missing modalities and fail to integrate anatomical localization efficiently. We propose a deep learning framework for WM lesion segmentation and localization that operates directly in native space using single- and multi-modal MRI inputs. Our study evaluates four input configurations: FLAIR-only, T1-only, concatenated FLAIR and T1, and a modality-interchangeable setup. It further introduces a multi-task model for jointly predicting lesion and anatomical region masks to estimate region-wise lesion burden. Experiments conducted on the MICCAI WMH Segmentation Challenge dataset demonstrate that multimodal input significantly improves the segmentation performance, outperforming unimodal models. While the modality-interchangeable setting trades accuracy for robustness, it enables inference in cases with missing modalities. Joint lesion-region segmentation using multi-task learning was less effective than separate models, suggesting representational conflict between tasks. Our findings highlight the utility of multimodal fusion for accurate and robust WMH analysis, and the potential of joint modeling for integrated predictions.
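The region-wise lesion burden estimated by the multi-task setup can be illustrated by intersecting a binary lesion mask with an integer region atlas; a toy numpy sketch (labels and masks are illustrative, not the challenge data):

```python
import numpy as np

def region_lesion_burden(lesion_mask, region_labels):
    # For each nonzero atlas label, compute the fraction of that
    # region's voxels covered by lesion -- a simple per-region burden.
    burden = {}
    for r in np.unique(region_labels):
        if r == 0:  # 0 is treated as background here
            continue
        in_region = region_labels == r
        burden[int(r)] = float(lesion_mask[in_region].mean())
    return burden

regions = np.array([[1, 1], [2, 2]])  # two toy anatomical regions
lesions = np.array([[1, 0], [1, 1]])  # binary lesion mask
b = region_lesion_burden(lesions, regions)
```

In the paper's joint model the lesion and region masks are both predicted; this sketch only shows how the two combine into a burden estimate.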

BrainMT: A Hybrid Mamba-Transformer Architecture for Modeling Long-Range Dependencies in Functional MRI Data

Arunkumar Kannan, Martin A. Lindquist, Brian Caffo

arXiv preprint, Jun 27 2025
Recent advances in deep learning have made it possible to predict phenotypic measures directly from functional magnetic resonance imaging (fMRI) brain volumes, sparking significant interest in the neuroimaging community. However, existing approaches, primarily based on convolutional neural networks or transformer architectures, often struggle to model the complex relationships inherent in fMRI data, limited by their inability to capture long-range spatial and temporal dependencies. To overcome these shortcomings, we introduce BrainMT, a novel hybrid framework designed to efficiently learn and integrate long-range spatiotemporal attributes in fMRI data. Our framework operates in two stages: (1) a bidirectional Mamba block with a temporal-first scanning mechanism to capture global temporal interactions in a computationally efficient manner; and (2) a transformer block leveraging self-attention to model global spatial relationships across the deep features processed by the Mamba block. Extensive experiments on two large-scale public datasets, UKBioBank and the Human Connectome Project, demonstrate that BrainMT achieves state-of-the-art performance on both classification (sex prediction) and regression (cognitive intelligence prediction) tasks, outperforming existing methods by a significant margin. Our code and implementation details will be made publicly available at https://github.com/arunkumar-kannan/BrainMT-fMRI
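The temporal-first scanning order can be illustrated by reordering a (T, X, Y, Z) fMRI volume so that each voxel's full time course becomes contiguous; a toy reshape sketch (this is only the data layout, not the Mamba block itself):

```python
import numpy as np

def temporal_first_sequence(vol4d):
    # Flatten the spatial axes and transpose so each row holds one
    # voxel's complete time course -- the "temporal-first" order in
    # which a sequence model would scan the data.
    T = vol4d.shape[0]
    return vol4d.reshape(T, -1).T  # shape: (num_voxels, T)

# Tiny toy volume: T=2 timepoints over a 2x2x1 spatial grid.
vol = np.arange(2 * 2 * 2 * 1).reshape(2, 2, 2, 1)
seq = temporal_first_sequence(vol)
```

Scanning time before space keeps each voxel's temporal dependencies local to one row, which is what makes the bidirectional Mamba pass over long scans efficient.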

Semi-automatic segmentation of elongated interventional instruments for online calibration of C-arm imaging system.

Chabi N, Illanes A, Beuing O, Behme D, Preim B, Saalfeld S

PubMed paper, Jun 26 2025
The C-arm biplane imaging system, designed for cerebral angiography, detects pathologies like aneurysms using dual rotating detectors for high-precision, real-time vascular imaging. However, accuracy can be affected by source-detector trajectory deviations caused by gravitational artifacts and mechanical instabilities. This study addresses calibration challenges and suggests leveraging interventional devices with radio-opaque markers to optimize C-arm geometry. We propose an online calibration method using image-specific features derived from interventional devices such as guidewires and catheters (in the remainder of this paper, the term "catheter" refers to both catheters and guidewires). The process begins with gantry-recorded data, refined through iterative nonlinear optimization. A machine learning approach detects and segments elongated devices by identifying candidates via thresholding on a weighted sum of curvature, derivative, and high-frequency indicators. An ensemble classifier segments these regions, followed by post-processing to remove false positives, integrating vessel maps, manual correction, and identification markers. An interpolation step then fills gaps along the catheter. Among the optimized ensemble classifiers, the one trained on the first frames achieved the best performance, with a specificity of 99.43% and a precision of 86.41%. The calibration method was evaluated on three clinical datasets and four phantom angiogram pairs, reducing the mean backprojection error from 4.11 ± 2.61 to 0.15 ± 0.01 mm. Additionally, 3D accuracy analysis showed an average root mean square error of 3.47% relative to the true marker distance. This study explores using interventional tools with radio-opaque markers for C-arm self-calibration. The proposed method significantly reduces the 2D backprojection error and 3D RMSE, enabling accurate 3D vascular reconstruction.
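The backprojection error that such a calibration minimizes is the mean distance between marker positions reprojected through the current geometry and their detected image positions; a toy sketch (the marker coordinates below are made up):

```python
import numpy as np

def mean_backprojection_error(projected, detected):
    # Mean Euclidean distance (in detector units, e.g. mm) between
    # markers reprojected through the estimated C-arm geometry and
    # their detected 2D positions -- the quantity the iterative
    # nonlinear optimization drives toward zero.
    return float(np.linalg.norm(projected - detected, axis=1).mean())

proj = np.array([[0.0, 0.0], [3.0, 4.0]])  # reprojected marker positions
det = np.array([[0.0, 1.0], [0.0, 0.0]])   # detected marker positions
err = mean_backprojection_error(proj, det)
```

The abstract reports this error dropping from 4.11 ± 2.61 mm to 0.15 ± 0.01 mm after calibration; the sketch only shows how the metric itself is computed.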
