Application of improved graph convolutional network for cortical surface parcellation.

Tan J, Ren X, Chen Y, Yuan X, Chang F, Yang R, Ma C, Chen X, Tian M, Chen W, Wang Z

PubMed · May 12 2025
Accurate cortical surface parcellation is essential for elucidating brain organizational principles, functional mechanisms, and the neural substrates underlying higher cognitive and emotional processes. However, the cortical surface is a highly folded, complex geometry, and large regional variations make the analysis of surface data challenging. Current methods rely on geometric simplification, such as spherical expansion, in which spherical mapping and registration take hours; this popular but costly process does not take full advantage of the inherent structural information. In this study, we propose an Attention-guided Deep Graph Convolutional network (ADGCN) for end-to-end parcellation on primitive cortical surface manifolds. ADGCN consists of a deep graph convolutional layer with a symmetrical U-shaped structure, which enables it to effectively transmit detailed information from the original brain map and learn the complex graph structure, helping the network enhance its feature extraction capability. In addition, we introduce a Squeeze-and-Excitation (SE) module, which enables the network to better capture key features, suppress unimportant ones, and significantly improve parcellation performance at little computational cost. We evaluated the model on a public dataset of 100 manually labeled brain surfaces. Compared with other methods, the proposed network achieves a Dice coefficient of 88.53% and an accuracy of 90.27%. The network segments the cortex directly in the original domain and offers high efficiency, simple operation, and strong interpretability. This approach facilitates the investigation of cortical changes during development, aging, and disease progression, with the potential to enhance the accuracy of neurological disease diagnosis and the objectivity of treatment efficacy evaluation.
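
The squeeze-and-excitation idea used in ADGCN is generic enough to sketch. Below is a minimal illustration, in PyTorch, of SE-style channel recalibration applied to per-node graph features; the pooling choice, layer sizes, and reduction ratio are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GraphSEBlock(nn.Module):
    """Squeeze-and-Excitation over the channel features of graph nodes.

    x: (num_nodes, channels) node feature matrix.
    The "squeeze" is a global mean over nodes; the "excitation" produces
    per-channel weights that rescale all node features.
    """
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        squeeze = x.mean(dim=0)          # (channels,)
        weights = self.fc(squeeze)       # (channels,) in [0, 1]
        return x * weights               # broadcast over nodes

# Toy usage: 1000 cortical vertices with 64-dimensional features.
x = torch.randn(1000, 64)
se = GraphSEBlock(channels=64)
print(se(x).shape)  # torch.Size([1000, 64])
```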

Preoperative prediction of malignant transformation in sinonasal inverted papilloma: a novel MRI-based deep learning approach.

Ding C, Wen B, Han Q, Hu N, Kang Y, Wang Y, Wang C, Zhang L, Xian J

PubMed · May 12 2025
To develop a novel MRI-based deep learning (DL) diagnostic model, utilizing multicenter large-sample data, for the preoperative differentiation of sinonasal inverted papilloma (SIP) from SIP-transformed squamous cell carcinoma (SIP-SCC). This study included 568 patients from four centers with confirmed SIP (n = 421) and SIP-SCC (n = 147). Deep learning models were built using T1WI, T2WI, and CE-T1WI. A combined model was constructed by integrating the features of these sequences through an attention mechanism. The diagnostic performance of radiologists, both with and without the model's assistance, was compared. Model performance was evaluated through receiver operating characteristic (ROC) analysis, calibration curves, and decision curve analysis (DCA). The combined model demonstrated superior performance in differentiating SIP from SIP-SCC, achieving AUCs of 0.954, 0.897, and 0.859 in the training, internal validation, and external validation cohorts, respectively. It showed optimal accuracy, stability, and clinical benefit, as confirmed by Brier scores and calibration curves. The diagnostic performance of radiologists, especially less experienced ones, was significantly improved with model assistance. The MRI-based deep learning model enhances the capability to predict malignant transformation of sinonasal inverted papilloma before surgery. By facilitating earlier diagnosis and promoting timely pathological examination or surgical intervention, this approach holds the potential to improve patient prognosis. Question: Sinonasal inverted papilloma (SIP) is prone to local malignant transformation, leading to poor prognosis; current diagnostic methods are invasive and inaccurate, necessitating effective preoperative differentiation. Findings: The MRI-based deep learning model accurately diagnoses malignant transformation of SIP, enabling junior radiologists to achieve greater clinical benefit with the assistance of the model. Clinical relevance: A novel MRI-based deep learning model enhances preoperative diagnosis of malignant transformation in sinonasal inverted papilloma, providing a non-invasive tool for personalized treatment planning.
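
As a rough illustration of attention-based fusion of per-sequence features (e.g., embeddings from T1WI, T2WI, and CE-T1WI backbones), the sketch below weights each sequence with a learned softmax score before classification; the feature dimensions and scoring network are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Fuse per-sequence embeddings with learned attention weights."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)        # one scalar score per sequence
        self.classifier = nn.Linear(feat_dim, 1)   # binary SIP vs. SIP-SCC head

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_sequences, feat_dim), e.g. T1WI / T2WI / CE-T1WI
        scores = self.score(feats).squeeze(-1)               # (batch, num_sequences)
        weights = torch.softmax(scores, dim=1)               # attention over sequences
        fused = (weights.unsqueeze(-1) * feats).sum(dim=1)   # (batch, feat_dim)
        return torch.sigmoid(self.classifier(fused))         # P(SIP-SCC)

# Toy usage: batch of 8 patients, 3 MRI sequences, 256-dim features each.
model = AttentionFusion(feat_dim=256)
probs = model(torch.randn(8, 3, 256))
print(probs.shape)  # torch.Size([8, 1])
```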

Groupwise image registration with edge-based loss for low-SNR cardiac MRI.

Lei X, Schniter P, Chen C, Ahmad R

PubMed · May 12 2025
The purpose of this study is to perform image registration and averaging of multiple free-breathing single-shot cardiac images, where the individual images may have a low signal-to-noise ratio (SNR). To address low SNR encountered in single-shot imaging, especially at low field strengths, we propose a fast deep learning (DL)-based image registration method, called Averaging Morph with Edge Detection (AiM-ED). AiM-ED jointly registers multiple noisy source images to a noisy target image and utilizes a noise-robust pre-trained edge detector to define the training loss. We validate AiM-ED using synthetic late gadolinium enhanced (LGE) images from the MR extended cardiac-torso (MRXCAT) phantom and free-breathing single-shot LGE images from healthy subjects (24 slices) and patients (5 slices) under various levels of added noise. Additionally, we demonstrate the clinical feasibility of AiM-ED by applying it to data from patients (6 slices) scanned on a 0.55T scanner. Compared with a traditional energy-minimization-based image registration method and DL-based VoxelMorph, images registered using AiM-ED exhibit higher values of recovery SNR and three perceptual image quality metrics. An ablation study shows the benefit of both jointly processing multiple source images and using an edge map in AiM-ED. For single-shot LGE imaging, AiM-ED outperforms existing image registration methods in terms of image quality. With fast inference, minimal training data requirements, and robust performance at various noise levels, AiM-ED has the potential to benefit single-shot CMR applications.
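
The edge-based training loss can be illustrated with a simple sketch: warp the source image with a dense displacement field and penalize the difference between edge maps of the warped source and the target. The Sobel operator below is only a stand-in for the noise-robust pre-trained edge detector described in the abstract, and the warping details are illustrative.

```python
import torch
import torch.nn.functional as F

def sobel_edges(img: torch.Tensor) -> torch.Tensor:
    """Approximate edge magnitude of a (B, 1, H, W) image with Sobel filters."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def warp(img: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp a (B, 1, H, W) image with a (B, 2, H, W) displacement field (in pixels)."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=img.device),
                            torch.arange(w, device=img.device), indexing="ij")
    grid_x = (xs + flow[:, 0]) / (w - 1) * 2 - 1   # normalize to [-1, 1]
    grid_y = (ys + flow[:, 1]) / (h - 1) * 2 - 1
    grid = torch.stack((grid_x, grid_y), dim=-1)   # (B, H, W, 2)
    return F.grid_sample(img, grid, align_corners=True)

def edge_loss(source, target, flow):
    """L1 distance between edge maps of the warped source and the target."""
    warped = warp(source, flow)
    return F.l1_loss(sobel_edges(warped), sobel_edges(target))
```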

MRI-Based Diagnostic Model for Alzheimer's Disease Using 3D-ResNet.

Chen D, Yang H, Li H, He X, Mu H

PubMed · May 12 2025
Alzheimer's disease (AD), a progressive neurodegenerative disorder, is the leading cause of dementia worldwide and remains incurable once it begins. Therefore, early and accurate diagnosis is essential for effective intervention. Leveraging recent advances in deep learning, this study proposes a novel diagnostic model based on the 3D-ResNet architecture to classify three cognitive states: AD, mild cognitive impairment (MCI), and cognitively normal (CN) individuals, using MRI data. The model integrates the strengths of ResNet and 3D convolutional neural networks (3D-CNN), and incorporates a special attention mechanism (SAM) within the residual structure to enhance feature representation. The study utilized the ADNI dataset, comprising 800 brain MRI scans. The dataset was split in a 7:3 ratio for training and testing, and the network was trained using data augmentation and cross-validation strategies. The proposed model achieved 92.33% accuracy in the three-class classification task, and 97.61%, 95.83%, and 93.42% accuracy in the binary classifications of AD vs. CN, AD vs. MCI, and CN vs. MCI, respectively, outperforming existing state-of-the-art methods. Furthermore, Grad-CAM heatmaps and 3D MRI reconstructions revealed that the cerebral cortex and hippocampus are critical regions for AD classification. These findings demonstrate a robust and interpretable AI-based diagnostic framework for AD, providing valuable technical support for its timely detection and clinical intervention.
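
A generic sketch of a 3D residual block with a channel-attention gate is shown below; the gate is a simple squeeze-style module standing in for the paper's SAM, and all kernel sizes and channel counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttnResBlock3D(nn.Module):
    """3D residual block with a simple channel-attention gate (illustrative)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(channels)
        self.attn = nn.Sequential(             # stand-in for the paper's SAM
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // 4, 1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // 4, channels, 1),
            nn.Sigmoid(),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out * self.attn(out)   # rescale channels by attention weights
        return self.relu(out + x)    # residual connection

# Toy usage on a small MRI-like volume: (batch, channels, D, H, W).
block = AttnResBlock3D(channels=16)
print(block(torch.randn(2, 16, 32, 32, 32)).shape)
```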

New developments in imaging in ALS.

Kleinerova J, Querin G, Pradat PF, Siah WF, Bede P

PubMed · May 12 2025
Neuroimaging in ALS has contributed considerable academic insights in recent years, demonstrating genotype-specific topological changes decades before phenoconversion and characterising longitudinal propagation patterns in specific phenotypes. It has elucidated the radiological underpinnings of specific clinical phenomena such as pseudobulbar affect, apathy, behavioural change, spasticity, and language deficits. Academic concepts such as sexual dimorphism, motor reserve, cognitive reserve, adaptive changes, connectivity-based propagation, pathological stages, and compensatory mechanisms have also been evaluated by imaging. The underpinnings of extra-motor manifestations such as cerebellar, sensory, extrapyramidal and cognitive symptoms have been studied by purpose-designed imaging protocols. Clustering approaches have been implemented to uncover radiologically distinct disease subtypes, and machine-learning models have been piloted to accurately classify individual patients into relevant diagnostic, phenotypic, and prognostic categories. Prediction models have been developed for survival in symptomatic patients and phenoconversion in asymptomatic mutation carriers. A range of novel imaging modalities have been implemented, and 7 Tesla MRI platforms are increasingly being used in ALS studies. Non-ALS MND conditions, such as PLS, SBMA, and SMA, are now also being increasingly studied by quantitative neuroimaging approaches. A unifying theme of recent imaging papers is the departure from describing focal brain changes to focusing on dynamic structural and functional connectivity alterations. Progressive cortico-cortical, cortico-basal, cortico-cerebellar, cortico-bulbar, and cortico-spinal disconnection has been consistently demonstrated by recent studies and recognised as the primary driver of clinical decline. These studies have led to the reconceptualisation of ALS as a "network" or "circuitry" disease.

Multi-Plane Vision Transformer for Hemorrhage Classification Using Axial and Sagittal MRI Data

Badhan Kumar Das, Gengyan Zhao, Boris Mailhe, Thomas J. Re, Dorin Comaniciu, Eli Gibson, Andreas Maier

arXiv preprint · May 12 2025
Identifying brain hemorrhages from magnetic resonance imaging (MRI) is a critical task for healthcare professionals. The diverse nature of MRI acquisitions, with varying contrasts and orientations, introduces complexity in identifying hemorrhages using neural networks. For acquisitions with varying orientations, traditional methods often involve resampling images to a fixed plane, which can lead to information loss. To address this, we propose a 3D multi-plane vision transformer (MP-ViT) for hemorrhage classification on data with varying orientations. It employs two separate transformer encoders for axial and sagittal contrasts, using cross-attention to integrate information across orientations. MP-ViT also includes a modality indication vector to provide missing contrast information to the model. The effectiveness of the proposed model is demonstrated with extensive experiments on a real-world clinical dataset consisting of 10,084 training, 1,289 validation, and 1,496 test subjects. MP-ViT achieved a substantial improvement in area under the curve (AUC), outperforming the vision transformer (ViT) by 5.5% and CNN-based architectures by 1.8%. These results highlight the potential of MP-ViT in improving performance for hemorrhage detection when different orientation contrasts are needed.
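
A minimal sketch of cross-attention between axial and sagittal token sequences, with a modality-indication vector appended before classification, is given below; token dimensions, pooling, and head sizes are assumptions rather than the authors' exact design.

```python
import torch
import torch.nn as nn

class CrossOrientationFusion(nn.Module):
    """Cross-attention from axial tokens to sagittal tokens, plus a modality vector."""
    def __init__(self, dim: int = 256, num_heads: int = 8, num_modalities: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.head = nn.Linear(dim + num_modalities, 2)  # hemorrhage vs. no hemorrhage

    def forward(self, axial_tokens, sagittal_tokens, modality_vec):
        # axial_tokens: (B, Na, dim), sagittal_tokens: (B, Ns, dim)
        # modality_vec: (B, num_modalities) flags indicating available contrasts
        fused, _ = self.cross_attn(query=axial_tokens,
                                   key=sagittal_tokens,
                                   value=sagittal_tokens)
        pooled = fused.mean(dim=1)                              # (B, dim)
        return self.head(torch.cat([pooled, modality_vec], dim=1))

# Toy usage with random token sequences.
model = CrossOrientationFusion()
logits = model(torch.randn(2, 64, 256), torch.randn(2, 48, 256), torch.zeros(2, 4))
print(logits.shape)  # torch.Size([2, 2])
```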

Accelerating prostate rs-EPI DWI with deep learning: Halving scan time, enhancing image quality, and validating in vivo.

Zhang P, Feng Z, Chen S, Zhu J, Fan C, Xia L, Min X

PubMed · May 12 2025
This study aims to evaluate the feasibility and effectiveness of deep learning-based super-resolution techniques to reduce scan time while preserving image quality in high-resolution prostate diffusion-weighted imaging (DWI) with readout-segmented echo-planar imaging (rs-EPI). We retrospectively and prospectively analyzed prostate rs-EPI DWI data, employing deep learning super-resolution models, particularly the Multi-Scale Self-Similarity Network (MSSNet), to reconstruct low-resolution images into high-resolution images. Performance metrics such as the structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and normalized root mean squared error (NRMSE) were used to compare reconstructed images against the high-resolution ground truth (HRGT). Additionally, we evaluated the apparent diffusion coefficient (ADC) values and signal-to-noise ratio (SNR) across different models. The MSSNet model demonstrated superior performance in image reconstruction, achieving a maximum SSIM of 0.9798 and significant improvements in PSNR and NRMSE compared to other models. The deep learning approach reduced the rs-EPI DWI scan time by 54.4% while maintaining image quality comparable to HRGT. Pearson correlation analysis revealed a strong correlation between ADC values from deep learning-reconstructed images and the ground truth, with differences remaining within 5%. Furthermore, all models showed significant SNR enhancement, with MSSNet performing best in most cases. Deep learning-based super-resolution techniques, particularly MSSNet, effectively reduce scan time and enhance image quality in prostate rs-EPI DWI, making them promising tools for clinical applications.
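
The reported image-quality metrics are standard and straightforward to reproduce; the sketch below computes SSIM, PSNR, and NRMSE against a high-resolution ground truth using scikit-image, with random arrays standing in for real DWI slices.

```python
import numpy as np
from skimage.metrics import (structural_similarity,
                             peak_signal_noise_ratio,
                             normalized_root_mse)

def reconstruction_metrics(recon: np.ndarray, hr_gt: np.ndarray) -> dict:
    """Compare a reconstructed slice against the high-resolution ground truth."""
    data_range = hr_gt.max() - hr_gt.min()
    return {
        "SSIM": structural_similarity(hr_gt, recon, data_range=data_range),
        "PSNR": peak_signal_noise_ratio(hr_gt, recon, data_range=data_range),
        "NRMSE": normalized_root_mse(hr_gt, recon),
    }

# Toy usage with synthetic images standing in for real DWI slices.
rng = np.random.default_rng(0)
hr = rng.random((128, 128))
recon = hr + 0.01 * rng.standard_normal((128, 128))
print(reconstruction_metrics(recon, hr))
```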

Study on predicting breast cancer Ki-67 expression using a combination of radiomics and deep learning based on multiparametric MRI.

Wang W, Wang Z, Wang L, Li J, Pang Z, Qu Y, Cui S

PubMed · May 11 2025
To develop a multiparametric breast MRI radiomics and deep learning-based multimodal model for predicting preoperative Ki-67 expression status in breast cancer, with the potential to advance individualized treatment and precision medicine for breast cancer patients. We included 176 invasive breast cancer patients who underwent breast MRI and had Ki-67 results. The dataset was randomly split into training (70%) and test (30%) sets. Features from T1-weighted imaging (T1WI), diffusion-weighted imaging (DWI), T2-weighted imaging (T2WI), and dynamic contrast-enhanced MRI (DCE-MRI) were fused. Separate models were created for each sequence: T1, DWI, T2, and DCE. A multiparametric MRI (mp-MRI) model was then developed by combining features from all sequences. Models were trained using five-fold cross-validation and evaluated on the test set with the receiver operating characteristic (ROC) curve area under the curve (AUC), accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and F1 score. DeLong's test compared the mp-MRI model with the other models, with P < 0.05 indicating statistical significance. All five models demonstrated good performance, with AUCs of 0.83 for the T1 model, 0.85 for the DWI model, 0.90 for the T2 model, 0.92 for the DCE model, and 0.96 for the mp-MRI model. DeLong's test indicated statistically significant differences between the mp-MRI model and the other four models, with P values < 0.05. The multiparametric breast MRI radiomics and deep learning-based multimodal model performs well in predicting preoperative Ki-67 expression status in breast cancer.
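
The evaluation scheme (five-fold cross-validation with AUC on held-out folds) can be sketched with scikit-learn as below; the logistic-regression classifier and random feature matrix are placeholders for the fused radiomics and deep-learning features, not the paper's model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

# Placeholder features/labels standing in for fused radiomics + deep features.
rng = np.random.default_rng(42)
X = rng.standard_normal((176, 50))    # 176 patients, 50 features
y = rng.integers(0, 2, size=176)      # Ki-67 high (1) vs. low (0)

aucs = []
for train_idx, val_idx in StratifiedKFold(n_splits=5, shuffle=True,
                                          random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores = clf.predict_proba(X[val_idx])[:, 1]
    aucs.append(roc_auc_score(y[val_idx], scores))

print(f"Mean cross-validated AUC: {np.mean(aucs):.3f}")
```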

Altered intrinsic ignition dynamics linked to Amyloid-β and tau pathology in Alzheimer's disease

Patow, G. A., Escrichs, A., Martinez-Molina, N., Ritter, P., Deco, G.

bioRxiv preprint · May 11 2025
Alzheimer's disease (AD) progressively alters brain structure and function, yet the associated changes in large-scale brain network dynamics remain poorly understood. We applied the intrinsic ignition framework to resting-state functional MRI (rs-fMRI) data from AD patients, individuals with mild cognitive impairment (MCI), and cognitively healthy controls (HC) to elucidate how AD shapes intrinsic brain activity. We assessed node-metastability at the whole-brain level and in 7 canonical resting-state networks (RSNs). Our results revealed a progressive decline in dynamical complexity across the disease continuum. HC exhibited the highest node-metastability, whereas it was substantially reduced in MCI and AD patients. The cortical hierarchy of information processing was also disrupted, indicating that rich-club hubs may be selectively affected in AD progression. Furthermore, we used linear mixed-effects models to evaluate the influence of Amyloid-β (Aβ) and tau pathology on brain dynamics at both regional and whole-brain levels. We found significant associations between both protein burdens and alterations in node-metastability. Lastly, a machine learning classifier trained on brain dynamics, Aβ, and tau burden features achieved high accuracy in discriminating between disease stages. Together, our findings highlight the progressive disruption of intrinsic ignition across the whole brain and RSNs in AD and support the use of node-metastability in conjunction with proteinopathy as a novel framework for tracking disease progression.
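
Node-metastability in the intrinsic ignition framework can be loosely approximated as the temporal variability of each region's phase synchrony with the rest of the brain. The sketch below follows that intuition using Hilbert phases of band-pass-filtered BOLD signals; it is a simplification for illustration, not the authors' exact pipeline.

```python
import numpy as np
from scipy.signal import hilbert

def node_metastability(ts: np.ndarray) -> np.ndarray:
    """Rough per-node metastability estimate from band-pass-filtered rs-fMRI.

    ts: (regions, timepoints) array of BOLD time series.
    For each region, instantaneous synchrony with the rest of the brain is
    taken as the mean cosine of its phase differences to all regions;
    metastability is then the standard deviation of that quantity over time.
    """
    phases = np.angle(hilbert(ts, axis=1))              # (regions, T)
    mean_field = np.exp(1j * phases).mean(axis=0)       # (T,) complex order parameter
    sync = np.real(np.exp(-1j * phases) * mean_field)   # mean_j cos(phi_j - phi_i)
    return sync.std(axis=1)                             # variability over time

# Toy usage: 90 regions, 200 timepoints of synthetic signal.
rng = np.random.default_rng(1)
print(node_metastability(rng.standard_normal((90, 200))).shape)  # (90,)
```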

A Clinical Neuroimaging Platform for Rapid, Automated Lesion Detection and Personalized Post-Stroke Outcome Prediction

Brzus, M., Griffis, J. C., Riley, C. J., Bruss, J., Shea, C., Johnson, H. J., Boes, A. D.

medRxiv preprint · May 11 2025
Predicting long-term functional outcomes for individuals with stroke is a significant challenge. Solving this challenge will open new opportunities for improving stroke management by informing acute interventions and guiding personalized rehabilitation strategies. The location of the stroke is a key predictor of outcomes, yet no clinically deployed tools incorporate lesion location information for outcome prognostication. This study responds to this critical need by introducing a fully automated, three-stage neuroimaging processing and machine learning pipeline that predicts personalized outcomes from clinical imaging in adult ischemic stroke patients. In the first stage, our system automatically processes raw DICOM inputs, registers the brain to a standard template, and uses deep learning models to segment the stroke lesion. In the second stage, lesion location and automatically derived network features are input into statistical models trained to predict long-term impairments from a large independent cohort of lesion patients. In the third stage, a structured PDF report is generated using a large language model that describes the stroke's location, the arterial distribution, and personalized prognostic information. We demonstrate the viability of this approach in a proof-of-concept application predicting select cognitive outcomes in a stroke cohort. Brain-behavior models were pre-trained to predict chronic impairment on 28 different cognitive outcomes in a large cohort of patients with focal brain lesions (N=604). The automated pipeline used these models to predict outcomes from clinically acquired MRIs in an independent ischemic stroke cohort (N=153). Starting from raw clinical DICOM images, we show that our pipeline can generate outcome predictions for individual patients in less than 3 minutes with 96% concordance relative to methods requiring manual processing. We also show that prediction accuracy is enhanced using models that incorporate lesion location, lesion-associated network information, and demographics. Our results provide a strong proof-of-concept and lay the groundwork for developing imaging-based clinical tools for stroke outcome prognostication.
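
The three-stage structure described above can be outlined as a simple orchestration skeleton; every function below is a hypothetical placeholder for the corresponding component (DICOM ingestion and registration, deep-learning lesion segmentation, outcome models, LLM-based report generation), not an actual API.

```python
from pathlib import Path

# Hypothetical placeholders; each would wrap the real component in practice.
def load_and_register(dicom_dir: Path):
    # Stage 1a: read raw DICOMs and register the brain to a standard template.
    return {"volume": "registered_volume"}

def segment_lesion(volume):
    # Stage 1b: deep-learning lesion segmentation on the registered volume.
    return {"mask": "lesion_mask"}

def extract_network_features(lesion):
    # Stage 2a: lesion location and lesion-associated network features.
    return {"features": [0.0]}

def predict_outcomes(features, demographics):
    # Stage 2b: pre-trained brain-behavior models predict cognitive outcomes.
    return {"predicted_impairments": {}}

def write_report(predictions, out_pdf: Path):
    # Stage 3: structured PDF report generated with a large language model.
    print(f"Would write report to {out_pdf}")

def run_pipeline(dicom_dir: Path, demographics: dict, out_pdf: Path) -> None:
    volume = load_and_register(dicom_dir)
    lesion = segment_lesion(volume)
    features = extract_network_features(lesion)
    predictions = predict_outcomes(features, demographics)
    write_report(predictions, out_pdf)

run_pipeline(Path("dicoms/"), {"age": 65}, Path("report.pdf"))
```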