Page 16 of 91902 results

Development and validation of a cranial ultrasound imaging-based deep learning model for periventricular-intraventricular haemorrhage detection and grading: a two-centre study.

Peng Y, Hu Z, Wen M, Deng Y, Zhao D, Yu Y, Liang W, Dai X, Wang Y

pubmed · Jul 29 2025
Periventricular-intraventricular haemorrhage (IVH) is the most prevalent type of neonatal intracranial haemorrhage. It is especially threatening to preterm infants, in whom it is associated with significant morbidity and mortality. Cranial ultrasound has become an important means of screening for periventricular IVH in infants. The integration of artificial intelligence with neonatal ultrasound is promising for enhancing diagnostic accuracy, reducing physician workload, and consequently improving periventricular IVH outcomes. The study investigated whether deep learning-based analysis of the cranial ultrasound images of infants could detect and grade periventricular IVH. This multicentre observational study included 1,060 cases and healthy controls from two hospitals. The retrospective modelling dataset encompassed 773 participants from January 2020 to July 2023, while the prospective two-centre validation dataset included 287 participants from August 2023 to January 2024. The periventricular IVH net model, a deep learning model incorporating the convolutional block attention module mechanism, was developed. The model's effectiveness was assessed by randomly dividing the retrospective data into training and validation sets, followed by independent validation with the prospective two-centre data. To evaluate the model, we measured its recall, precision, accuracy, F1-score, and area under the curve (AUC). The regions of interest (ROI) that influenced the detection by the deep learning model were visualised in significance maps, and the t-distributed stochastic neighbour embedding (t-SNE) algorithm was used to visualise the clustering of model detection parameters. The final retrospective dataset included 773 participants (mean (standard deviation (SD)) gestational age, 32.7 (4.69) weeks; mean (SD) weight, 1,862.60 (855.49) g).
For the retrospective data, the model's AUC was 0.99 (95% confidence interval (CI), 0.98-0.99), precision was 0.92 (0.89-0.95), recall was 0.93 (0.89-0.95), and F1-score was 0.93 (0.90-0.95). For the prospective two-centre validation data, the model's AUC was 0.961 (95% CI, 0.94-0.98) and accuracy was 0.89 (95% CI, 0.86-0.92). The two-centre prospective validation results of the periventricular IVH net model demonstrated its strong potential for paediatric clinical applications. Combining artificial intelligence with paediatric ultrasound can enhance the accuracy and efficiency of periventricular IVH diagnosis, especially in primary hospitals or community hospitals.
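As a toy illustration of the evaluation metrics this study reports (precision, recall, accuracy, F1-score), a minimal pure-Python sketch on made-up binary labels, not the study's data:

```python
def binary_metrics(y_true, y_pred):
    """Return (precision, recall, accuracy, f1) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = (tp + tn) / len(y_true)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, accuracy, f1

# Toy labels (illustrative only): 3 of 4 haemorrhage cases caught, 1 false alarm.
p, r, a, f = binary_metrics([1, 1, 1, 0, 0, 0, 1, 0], [1, 1, 0, 0, 0, 1, 1, 0])
```

AUC is a ranking metric computed from continuous scores rather than hard labels, so it is not shown here.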

Neural Autoregressive Modeling of Brain Aging

Ridvan Yesiloglu, Wei Peng, Md Tauhidul Islam, Ehsan Adeli

arxiv preprint · Jul 29 2025
Brain aging synthesis is a critical task with broad applications in clinical and computational neuroscience. The ability to predict the future structural evolution of a subject's brain from an earlier MRI scan provides valuable insights into aging trajectories. Yet the high dimensionality of the data, subtle structural changes across ages, and subject-specific patterns make synthesis of the aging brain challenging. To overcome these challenges, we propose NeuroAR, a novel brain aging simulation model based on generative autoregressive transformers. NeuroAR synthesizes the aging brain by autoregressively estimating the discrete token maps of a future scan from a convenient space of concatenated token embeddings of the previous and future scans. To guide the generation, it concatenates the subject's previous scan into each scale and injects the scan's acquisition age and the target age at each block via cross-attention. We evaluate our approach on both elderly and adolescent subjects, demonstrating superior image fidelity over state-of-the-art generative models, including latent diffusion models (LDM) and generative adversarial networks. Furthermore, we employ a pre-trained age predictor to further validate the consistency and realism of the synthesized images with respect to expected aging patterns. NeuroAR significantly outperforms key models, including LDM, demonstrating its ability to model subject-specific brain aging trajectories with high fidelity.
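The core autoregressive idea, each new token predicted from the fixed conditioning context plus everything generated so far, can be sketched generically. The predictor below is a deterministic toy stand-in, not NeuroAR's transformer:

```python
def autoregressive_generate(next_token_fn, context, length):
    """Greedy autoregressive decoding: each step conditions on the
    fixed context plus the tokens generated so far."""
    tokens = []
    for _ in range(length):
        tokens.append(next_token_fn(context, tokens))
    return tokens


def toy_predictor(ctx, prefix):
    # Stand-in rule over a 5-symbol token vocabulary (illustrative only).
    return (ctx + len(prefix)) % 5
```

In NeuroAR the role of `next_token_fn` is played by a transformer whose attention sees the previous scan's token embeddings and the two ages; the loop structure is the same.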

Enhancing efficiency in paediatric brain tumour segmentation using a pathologically diverse single-center clinical dataset

A. Piffer, J. A. Buchner, A. G. Gennari, P. Grehten, S. Sirin, E. Ross, I. Ezhov, M. Rosier, J. C. Peeken, M. Piraud, B. Menze, A. Guerreiro Stücklin, A. Jakab, F. Kofler

arxiv preprint · Jul 29 2025
Background Brain tumours are the most common solid malignancies in children, encompassing diverse histological and molecular subtypes, imaging features, and outcomes. Paediatric brain tumours (PBTs), including high- and low-grade gliomas (HGG, LGG), medulloblastomas (MB), ependymomas, and rarer forms, pose diagnostic and therapeutic challenges. Deep learning (DL)-based segmentation offers promising tools for tumour delineation, yet its performance across heterogeneous PBT subtypes and MRI protocols remains uncertain. Methods A retrospective single-centre cohort of 174 paediatric patients with HGG, LGG, MB, ependymomas, and other rarer subtypes was used. MRI sequences included T1, T1 post-contrast (T1-C), T2, and FLAIR. Manual annotations were provided for four tumour subregions: whole tumour (WT), T2-hyperintensity (T2H), enhancing tumour (ET), and cystic component (CC). A 3D nnU-Net model was trained and tested (121/53 split), with segmentation performance assessed using the Dice similarity coefficient (DSC) and compared against intra- and inter-rater variability. Results The model achieved robust performance for WT and T2H (mean DSC: 0.85), comparable to human annotator variability (mean DSC: 0.86). ET segmentation was moderately accurate (mean DSC: 0.75), while CC performance was poor. Segmentation accuracy varied by tumour type, MRI sequence combination, and location. Notably, T1, T1-C, and T2 alone produced results nearly equivalent to the full protocol. Conclusions DL is feasible for PBTs, particularly for T2H and WT. Challenges remain for ET and CC segmentation, highlighting the need for further refinement. These findings support the potential for protocol simplification and automation to enhance volumetric assessment and streamline paediatric neuro-oncology workflows.
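The Dice similarity coefficient used to score these segmentations is simple to compute; a minimal sketch on flattened binary masks (toy data, not the cohort's annotations):

```python
def dice_coefficient(mask_a, mask_b):
    """DSC = 2|A ∩ B| / (|A| + |B|) for flat binary masks;
    defined as 1.0 when both masks are empty."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0

# Toy masks: 2 overlapping voxels out of 3 + 2 labelled voxels -> DSC = 0.8.
dsc = dice_coefficient([1, 1, 1, 0], [1, 1, 0, 0])
```

A DSC of 0.85, as reported for WT and T2H, means the model and reference masks overlap in 85% of their combined labelled volume on average.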

A hybrid filtering and deep learning approach for early Alzheimer's disease identification.

Ahamed MKU, Hossen R, Paul BK, Hasan M, Al-Arashi WH, Kazi M, Talukder MA

pubmed · Jul 29 2025
Alzheimer's disease is a progressive neurological disorder that profoundly affects cognitive functions and daily activities. Rapid and precise identification is essential for effective intervention and improved patient outcomes. This research introduces an innovative hybrid filtering approach with a deep transfer learning model for detecting Alzheimer's disease utilizing brain imaging data. The hybrid filtering method integrates the Adaptive Non-Local Means filter with a Sharpening filter for image preprocessing. Furthermore, the deep learning model used in this study is constructed on the EfficientNetV2B3 architecture, augmented with additional layers and fine-tuning to ensure effective classification among four categories: mild, moderate, very mild, and non-demented. The work employs Grad-CAM++ to enhance interpretability by localizing disease-relevant characteristics in brain images. The experimental assessment, performed on a publicly accessible dataset, demonstrates that the model achieves an accuracy of 99.45%. These findings underscore the capability of sophisticated deep learning methodologies to aid clinicians in accurately identifying Alzheimer's disease.
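Sharpening filters of this kind are commonly implemented as a 3x3 high-boost kernel; a minimal pure-Python sketch (the kernel and toy grid are illustrative assumptions, and the Adaptive Non-Local Means stage is not reproduced here):

```python
# Classic 3x3 sharpening kernel: boosts the centre pixel, subtracts neighbours.
SHARPEN = [[0, -1, 0], [-1, 5, -1], [0, -1, 0]]

def convolve3x3(img, kernel):
    """Apply a 3x3 kernel to the interior pixels of a 2D grid;
    border pixels are copied through unchanged."""
    h, w = len(img), len(img[0])
    out = [[img[r][c] for c in range(w)] for r in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r][c] = sum(kernel[i][j] * img[r - 1 + i][c - 1 + j]
                            for i in range(3) for j in range(3))
    return out
```

Because the kernel's weights sum to 1, flat regions are left untouched while isolated bright pixels (edges, fine detail) are amplified.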

A novel deep learning-based brain age prediction framework for routine clinical MRI scans.

Kim H, Park S, Seo SW, Na DL, Jang H, Kim JP, Kim HJ, Kang SH, Kwak K

pubmed · Jul 29 2025
Physiological brain aging is associated with cognitive impairment and neuroanatomical changes. Brain age prediction from routine clinical 2D brain MRI scans has been understudied and often unsuccessful. We developed a novel brain age prediction framework for clinical 2D T1-weighted MRI scans using a deep learning-based model trained with research-grade 3D MRI scans, mostly from publicly available datasets (N = 8681; age = 51.76 ± 21.74). Our model showed accurate and fast brain age prediction on clinical 2D MRI scans from cognitively unimpaired (CU) subjects (N = 175), with an MAE of 2.73 years after age bias correction (Pearson's r = 0.918). The brain age gap of Alzheimer's disease (AD) subjects was significantly greater than that of CU subjects (p < 0.001), and an increase in brain age gap was associated with disease progression in both AD (p < 0.05) and Parkinson's disease (p < 0.01). Our framework can be extended to other MRI modalities and potentially applied to routine clinical examinations, enabling early detection of structural anomalies and improving patient outcomes.
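The abstract does not specify its age-bias-correction scheme; one common approach regresses the brain-age gap (predicted minus chronological age) on chronological age and subtracts the fitted trend. A hedged sketch of that approach on synthetic numbers:

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def corrected_gap(age, predicted):
    """Brain-age gap after regressing out the linear age bias
    (one common scheme; not necessarily the paper's exact method)."""
    gap = [p - a for p, a in zip(predicted, age)]
    slope, intercept = fit_line(age, gap)
    return [g - (slope * x + intercept) for g, x in zip(gap, age)]

# Synthetic cohort whose predictions carry a purely linear bias of 0.1*age + 1:
ages = [20, 40, 60, 80]
predicted = [a + 0.1 * a + 1 for a in ages]
residual = corrected_gap(ages, predicted)  # bias fully removed -> all ~0
```

After correction, a nonzero residual gap (as in the AD and Parkinson's groups here) reflects deviation from healthy aging rather than the model's systematic age bias.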

GDAIP: A Graph-Based Domain Adaptive Framework for Individual Brain Parcellation

Jianfei Zhu, Haiqi Zhu, Shaohui Liu, Feng Jiang, Baichun Wei, Chunzhi Yi

arxiv preprint · Jul 29 2025
Recent deep learning approaches have shown promise in learning individual brain parcellations from functional magnetic resonance imaging (fMRI). However, most existing methods assume consistent data distributions across domains and struggle with the domain shifts inherent to real-world cross-dataset scenarios. To address this challenge, we propose Graph Domain Adaptation for Individual Parcellation (GDAIP), a novel framework that integrates Graph Attention Networks (GAT) with Minimax Entropy (MME)-based domain adaptation. We construct cross-dataset brain graphs at both the group and individual levels. By leveraging semi-supervised training and adversarial optimization of the prediction entropy on unlabeled vertices from the target brain graph, the reference atlas is adapted from the group-level brain graph to the individual brain graph, enabling individual parcellation under cross-dataset settings. We evaluated our method using parcellation visualization, the Dice coefficient, and functional homogeneity. Experimental results demonstrate that GDAIP produces individual parcellations with topologically plausible boundaries, strong cross-session consistency, and the ability to reflect functional organization.
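The quantity MME optimizes adversarially is the Shannon entropy of the classifier's softmax output on unlabeled target vertices; a minimal sketch of that quantity (toy logits, not GDAIP's GAT outputs):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def prediction_entropy(logits):
    """Shannon entropy (in nats) of the softmax prediction for one vertex.
    High for uncertain (flat) predictions, near zero for confident ones."""
    return -sum(p * math.log(p) for p in softmax(logits) if p > 0)
```

In MME-style training, one player pushes this entropy up on unlabeled vertices while the other pushes it down, driving target-domain features toward confident, class-consistent clusters.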

A data assimilation framework for predicting the spatiotemporal response of high-grade gliomas to chemoradiation.

Miniere HJM, Hormuth DA, Lima EABF, Farhat M, Panthi B, Langshaw H, Shanker MD, Talpur W, Thrower S, Goldman J, Ty S, Chung C, Yankeelov TE

pubmed · Jul 29 2025
High-grade gliomas are highly invasive and respond variably to chemoradiation. Accurate, patient-specific predictions of tumor response could enhance treatment planning. We present a novel computational platform that assimilates MRI data to continually predict spatiotemporal tumor changes during chemoradiotherapy. Tumor growth and response to chemoradiation were described using a two-species reaction-diffusion model of enhancing and non-enhancing regions of the tumor. Two evaluation scenarios were used to test the predictive accuracy of this model. In scenario 1, the model was calibrated on a patient-specific basis (n = 21) to weekly MRI data during the course of chemoradiotherapy. A data assimilation framework was used to update model parameters with each new imaging visit, which were then used to update model predictions. In scenario 2, we evaluated the predictive accuracy of the model when fewer data points are available by calibrating the same model using only the first two imaging visits and then predicting tumor response over the remaining five weeks of treatment. We investigated three approaches to assign model parameters for scenario 2: (1) predictions using only parameters estimated by fitting the data obtained from an individual patient's first two imaging visits, (2) predictions made by averaging the patient-specific parameters with the cohort-derived parameters, and (3) predictions using only cohort-derived parameters. Scenario 1 achieved a median [range] concordance correlation coefficient (CCC) between the predicted and measured total tumor cell counts of 0.91 [0.84, 0.95], and a median [range] percent error in tumor volume of -2.6% [-19.7, 8.0%], demonstrating strong agreement throughout the course of treatment.
For scenario 2, the three approaches yielded CCCs of: (1) 0.65 [0.51, 0.88], (2) 0.74 [0.70, 0.91], (3) 0.76 [0.73, 0.92] with significant differences between the approach (1) that does not use the cohort parameters and the two approaches (2 and 3) that do. The proposed data assimilation framework enhances the accuracy of tumor growth forecasts by integrating patient-specific and cohort-based data. These findings show a practical method for identifying more personalized treatment strategies in high-grade glioma patients.
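The concordance correlation coefficient used here to compare predicted and measured tumor cell counts is Lin's CCC, which penalizes both poor correlation and systematic offset; a minimal sketch on toy sequences:

```python
def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two sequences:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

Unlike Pearson's r, a constant bias lowers CCC: perfectly correlated but shifted predictions (e.g. y = x + 1 on toy data) score below 1, which is why CCC suits agreement between predicted and measured quantities.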

A hybrid M-DbneAlexnet for brain tumour detection using MRI images.

Kotti J, Chalasani V, Rajan C

pubmed · Jul 29 2025
A brain tumour (BT) is characterised by the uncontrolled proliferation of cells within the brain, which can result in cancer. Detecting a BT at an early stage significantly increases the patient's survival chances. Existing BT detection methods often struggle with high computational complexity, limited feature discrimination, and poor generalisation. To mitigate these issues, an effective brain tumour detection and segmentation method based on a hybrid network named MobileNet-Deep Batch-Normalized eLU AlexNet (M-DbneAlexnet) is developed for Magnetic Resonance Imaging (MRI). Image enhancement is performed with a Piecewise Linear Transformation (PLT) function. The BT region is segmented using Transformer Brain Tumour Segmentation (TransBTSV2), after which features are extracted. Finally, the BT is detected using the M-DbneAlexnet model, which is devised by combining MobileNet and Deep Batch-Normalized eLU AlexNet (DbneAlexnet). Results: The proposed model achieved an accuracy of 92.68%, sensitivity of 93.02%, and specificity of 92.85%, demonstrating its effectiveness in accurately detecting brain tumours from MRI images. The proposed model enhances training speed and performs well on limited datasets, making it effective for distinguishing between tumour and healthy tissues. Its practical utility lies in enabling early detection and diagnosis of brain tumours, which can significantly reduce mortality rates.
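A piecewise linear transformation of the kind used for enhancement maps intensities through segments defined by two control points; a minimal sketch (the control points below are illustrative assumptions, not the paper's values):

```python
def piecewise_linear(v, r1, s1, r2, s2, vmax=255):
    """Map intensity v through a three-segment piecewise linear curve with
    control points (r1, s1) and (r2, s2); endpoints map 0 -> 0, vmax -> vmax."""
    if v < r1:
        return s1 * v / r1 if r1 else s1
    if v <= r2:
        return s1 + (s2 - s1) * (v - r1) / (r2 - r1)
    return s2 + (vmax - s2) * (v - r2) / (vmax - r2)

# Toy contrast stretch: expand the mid-range [70, 140] onto [30, 220].
stretched = [piecewise_linear(v, 70, 30, 140, 220) for v in (0, 70, 140, 255)]
```

Choosing (r1, s1) below and (r2, s2) above the identity line steepens the middle segment, stretching contrast in the mid-intensity range where tumour boundaries typically sit.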

Prediction of MGMT methylation status in glioblastoma patients based on radiomics feature extracted from intratumoral and peritumoral MRI imaging.

Chen WS, Fu FX, Cai QL, Wang F, Wang XH, Hong L, Su L

pubmed · Jul 29 2025
Assessing MGMT promoter methylation is crucial for determining appropriate glioblastoma therapy. Previous studies have focused on intratumoral regions, overlooking the peritumoral area. This study aimed to develop a radiomic model using MRI-derived features from both regions. We included 96 glioblastoma patients randomly allocated to training and testing sets. Radiomic features were extracted from intratumoral and peritumoral regions. We constructed and compared radiomic models based on intratumoral, peritumoral, and combined features. Model performance was evaluated using the area under the receiver-operating characteristic curve (AUC). The combined radiomic model achieved an AUC of 0.814 (95% CI: 0.767-0.862) in the training set and 0.808 (95% CI: 0.736-0.859) in the testing set, outperforming models based on intratumoral or peritumoral features alone. Calibration and decision curve analyses demonstrated excellent model fit and clinical utility. The radiomic model incorporating both intratumoral and peritumoral features shows promise in differentiating MGMT methylation status, potentially informing clinical treatment strategies for glioblastoma.
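The AUC reported for these radiomic models is equivalent to the Mann-Whitney U statistic: the probability that a randomly chosen positive case (here, methylated) scores above a randomly chosen negative one, with ties counting half. A minimal sketch on toy scores:

```python
def auc_mann_whitney(scores_pos, scores_neg):
    """AUC as P(score_pos > score_neg) + 0.5 * P(tie),
    computed by exhaustive pairwise comparison (fine for small toy data)."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

# Toy model scores: perfectly separated classes give AUC = 1.0.
auc = auc_mann_whitney([0.9, 0.8], [0.1, 0.2])
```

On this reading, the combined model's AUC of about 0.81 means a methylated tumour outscores an unmethylated one roughly 81% of the time.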

Radiomics, machine learning, and deep learning for hippocampal sclerosis identification: a systematic review and diagnostic meta-analysis.

Baptista JM, Brenner LO, Koga JV, Ohannesian VA, Ito LA, Nabarro PH, Santos LP, Henrique A, de Oliveira Almeida G, Berbet LU, Paranhos T, Nespoli V, Bertani R

pubmed · Jul 29 2025
Hippocampal sclerosis (HS) is the primary pathological finding in temporal lobe epilepsy (TLE) and a common cause of refractory seizures. Conventional diagnostic methods, such as EEG and MRI, have limitations. Artificial intelligence (AI) and radiomics, utilizing machine learning and deep learning, offer a non-invasive approach to enhance diagnostic accuracy. This study synthesized recent AI and radiomics research to improve HS detection in TLE. PubMed/Medline, Embase, and Web of Science were systematically searched following PRISMA-DTA guidelines until May 2024. Statistical analysis was conducted using STATA 14. A bivariate model was used to pool sensitivity (SEN) and specificity (SPE) for HS detection, with I2 assessing heterogeneity. Six studies were included. The pooled sensitivity and specificity of AI-based models for HS detection in medial temporal lobe epilepsy (MTLE) were 0.91 (95% CI: 0.83-0.96; I2 = 71.48%) and 0.90 (95% CI: 0.83-0.94; I2 = 69.62%), with an AUC of 0.96. AI alone showed higher sensitivity (0.92) and specificity (0.93) than AI combined with radiomics (sensitivity: 0.88; specificity: 0.90). Among algorithms, support vector machine (SVM) had the highest performance (SEN: 0.92; SPE: 0.95), followed by convolutional neural networks (CNN) and logistic regression (LR). AI models, particularly SVM, demonstrate high accuracy in detecting HS, with AI alone outperforming its combination with radiomics. These findings support the integration of AI into non-invasive diagnostic workflows, potentially enabling earlier detection and more personalized clinical decision-making in epilepsy care, ultimately contributing to improved patient outcomes and behavioral management.
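The sensitivity and specificity pooled by the bivariate model are derived, study by study, from a 2x2 diagnostic table; a minimal sketch of that per-study computation (the counts below are toy numbers, not data from the six included studies):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a single study's 2x2 diagnostic table:
    sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Toy study: 90 of 100 HS cases detected, 85 of 100 controls correctly cleared.
sen, spe = sens_spec(tp=90, fn=10, tn=85, fp=15)
```

The bivariate meta-analytic model then pools these paired (SEN, SPE) estimates across studies while accounting for their correlation and between-study heterogeneity (the I2 values reported above).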