
Machine Learning Models of Voxel-Level [<sup>18</sup>F] Fluorodeoxyglucose Positron Emission Tomography Data Excel at Predicting Progressive Supranuclear Palsy Pathology.

Braun AS, Satoh R, Pham NTT, Singh-Reilly N, Ali F, Dickson DW, Lowe VJ, Whitwell JL, Josephs KA

PubMed · May 30, 2025
To determine whether a machine learning model of voxel-level [<sup>18</sup>F]fluorodeoxyglucose positron emission tomography (PET) data could predict progressive supranuclear palsy (PSP) pathology, as well as outperform currently available biomarkers. One hundred and thirty-seven autopsied patients with PSP (n = 42) and other neurodegenerative diseases (n = 95) who underwent antemortem [<sup>18</sup>F]fluorodeoxyglucose PET and 3.0 Tesla magnetic resonance imaging (MRI) scans were analyzed. A linear support vector machine was applied to differentiate pathological groups, with sensitivity analyses performed to assess the influence of voxel size and region removal. A support vector machine with a radial basis function kernel was also trained to create a secondary model using the most important voxels. The models were optimized on the main dataset (n = 104), and their performance was compared with the magnetic resonance parkinsonism index measured on MRI in the independent test dataset (n = 33). The model had the highest accuracy (0.91) and F-score (0.86) when voxel size was 6 mm. In this optimized model, important voxels for differentiating the groups were observed in the thalamus, midbrain, and cerebellar dentate. The secondary models found the combination of thalamus and dentate to have the highest accuracy (0.89) and F-score (0.81). The optimized secondary model showed the highest accuracy (0.91) and F-score (0.86) in the test dataset and outperformed the magnetic resonance parkinsonism index (0.81 and 0.70, respectively). The results suggest that glucose hypometabolism in the thalamus and cerebellar dentate has the highest potential for predicting PSP pathology. Our optimized machine learning model outperformed the best currently available biomarker for predicting PSP pathology. ANN NEUROL 2025.
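A minimal sketch of the linear-SVM approach on synthetic "voxel" features (entirely simulated data; patient counts match the abstract, but voxel counts and effect sizes are invented for illustration):

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)

# Hypothetical voxel-level FDG-PET features: rows = patients, columns = voxels.
# PSP cases (label 1) get simulated hypometabolism in a "thalamus/dentate" block.
n_psp, n_other, n_voxels = 42, 95, 200
X_other = rng.normal(1.0, 0.1, (n_other, n_voxels))
X_psp = rng.normal(1.0, 0.1, (n_psp, n_voxels))
X_psp[:, :20] -= 0.3  # reduced uptake in the first 20 voxels
X = np.vstack([X_other, X_psp])
y = np.array([0] * n_other + [1] * n_psp)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
clf = LinearSVC(C=1.0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(f"accuracy={accuracy_score(y_te, pred):.2f}  F1={f1_score(y_te, pred):.2f}")

# With a linear kernel, |coef_| ranks voxels by importance, analogous to the
# voxel-importance maps the study used to localize discriminative regions.
top_voxels = np.argsort(-np.abs(clf.coef_[0]))[:5]
print("most important voxels:", top_voxels)
```

On this toy data the top-ranked voxels fall inside the simulated hypometabolic block, mirroring how the study identified the thalamus and dentate as the most informative regions.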

Bidirectional Projection-Based Multi-Modal Fusion Transformer for Early Detection of Cerebral Palsy in Infants.

Qi K, Huang T, Jin C, Yang Y, Ying S, Sun J, Yang J

PubMed · May 30, 2025
Periventricular white matter injury (PWMI) is the most frequent magnetic resonance imaging (MRI) finding in infants with cerebral palsy (CP). We aim to detect CP and identify subtle, sparse PWMI lesions in infants under two years of age with immature brain structures. Based on the observation that the responsible lesions are located within five target regions, we first construct a multi-modal dataset of 243 cases with mask annotations of the five target regions delineating anatomical structures on T1-weighted imaging (T1WI) images, lesion masks on T2-weighted imaging (T2WI) images, and categories (CP or non-CP). We then develop a bidirectional projection-based multi-modal fusion transformer (BiP-MFT), incorporating a Bidirectional Projection Fusion Module (BPFM) to integrate the features of the five target regions on T1WI images with those of the lesions on T2WI images. Our BiP-MFT achieves a subject-level classification accuracy of 0.90, specificity of 0.87, and sensitivity of 0.94. It surpasses the best results of nine comparative methods, with improvements of 0.10, 0.08, and 0.09 in classification accuracy, specificity, and sensitivity, respectively. Our BPFM outperforms eight compared feature fusion strategies using Transformer and U-Net backbones on our dataset. Ablation studies on the dataset annotations and model components confirm the effectiveness of our annotation method and the soundness of the model design. The proposed dataset and code are available at https://github.com/Kai-Qi/BiP-MFT.
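A generic cross-attention fusion between two modalities' token sets, as a loose stand-in for the paper's bidirectional projection fusion (the actual BPFM design differs; shapes and feature dimensions here are invented):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats):
    """Project one modality's tokens onto another via scaled dot-product
    attention -- a generic sketch, not the paper's BPFM."""
    d = q_feats.shape[-1]
    attn = softmax(q_feats @ kv_feats.T / np.sqrt(d), axis=-1)
    return attn @ kv_feats

rng = np.random.default_rng(0)
t1_tokens = rng.normal(size=(5, 32))   # e.g. five target-region features (T1WI)
t2_tokens = rng.normal(size=(12, 32))  # e.g. lesion-patch features (T2WI)

# Bidirectional: each modality attends to the other, then features are pooled.
t1_enriched = cross_attention(t1_tokens, t2_tokens)
t2_enriched = cross_attention(t2_tokens, t1_tokens)
fused = np.concatenate([t1_enriched.mean(0), t2_enriched.mean(0)])
print(fused.shape)  # (64,)
```

The fused vector would then feed a small classification head for the CP / non-CP decision.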

Automated Computer Vision Methods for Image Segmentation, Stereotactic Localization, and Functional Outcome Prediction of Basal Ganglia Hemorrhages.

Kashkoush A, Davison MA, Achey R, Gomes J, Rasmussen P, Kshettry VR, Moore N, Bain M

PubMed · May 30, 2025
Basal ganglia intracranial hemorrhage (bgICH) morphology is associated with postoperative functional outcomes. We hypothesized that bgICH spatial representation modeling could be automated for functional outcome prediction after minimally invasive surgical (MIS) evacuation. A training set of 678 computed tomography head and computed tomography angiography images from 63 patients was used to train key-point detection and instance segmentation convolutional neural network-based models for anatomic landmark identification and bgICH segmentation. Anatomic landmarks included the bilateral orbital rims at the globe's maximum diameter and the posterior-most aspect of the tentorial incisura, which were used to define a universal stereotactic reference frame across patients. Convolutional neural network models were tested using volumetric computed tomography head/computed tomography angiography scans from 45 patients who underwent MIS bgICH evacuation with recorded modified Rankin Scales within one year after surgery. bgICH volumes were highly correlated (R2 = 0.95, P < .001) between manual (median 39 mL) and automatic (median 38 mL) segmentation methods. The absolute median difference between groups was 2 mL (IQR: 1-6 mL). Median localization accuracy (distance between automated and manually designated coordinate frames) was 4 mm (IQR: 3-6 mm). Landmark coordinates were highly correlated in the x- (medial-lateral), y- (anterior-posterior), and z-axes (rostral-caudal) for all 3 landmarks (R2 range = 0.95-0.99, P < .001 for all). Functional outcome (modified Rankin Scale 4-6) was predicted with similar model performance using automated (area under the receiver operating characteristic curve = 0.81, 95% CI: 0.67-0.94) and manually (area under the receiver operating characteristic curve = 0.84, 95% CI: 0.72-0.96) constructed spatial representation models (P = .173). Computer vision models can accurately replicate manual bgICH segmentation and stereotactic localization, and can prognosticate functional outcomes after MIS bgICH evacuation.
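The volume-agreement analysis (R² and median absolute difference between manual and automated segmentations) can be sketched as follows, with synthetic numbers standing in for the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
manual = rng.uniform(10, 80, 45)         # hypothetical manual bgICH volumes (mL)
auto = manual + rng.normal(0, 2.0, 45)   # automated volumes with small error

# Coefficient of determination between the two measurement methods
ss_res = np.sum((manual - auto) ** 2)
ss_tot = np.sum((manual - manual.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

abs_diff = np.abs(manual - auto)
print(f"R^2={r2:.3f}  median |diff|={np.median(abs_diff):.1f} mL")
```

With a measurement error small relative to the spread of volumes, R² approaches 1, which is the regime the study reports (R² = 0.95, median difference 2 mL).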

Beyond the LUMIR challenge: The pathway to foundational registration models

Junyu Chen, Shuwen Wei, Joel Honkamaa, Pekka Marttinen, Hang Zhang, Min Liu, Yichao Zhou, Zuopeng Tan, Zhuoyuan Wang, Yi Wang, Hongchao Zhou, Shunbo Hu, Yi Zhang, Qian Tao, Lukas Förner, Thomas Wendler, Bailiang Jian, Benedikt Wiestler, Tim Hable, Jin Kim, Dan Ruan, Frederic Madesta, Thilo Sentker, Wiebke Heyer, Lianrui Zuo, Yuwei Dai, Jing Wu, Jerry L. Prince, Harrison Bai, Yong Du, Yihao Liu, Alessa Hering, Reuben Dorent, Lasse Hansen, Mattias P. Heinrich, Aaron Carass

arXiv preprint · May 30, 2025
Medical image challenges have played a transformative role in advancing the field, catalyzing algorithmic innovation and establishing new performance standards across diverse clinical applications. Image registration, a foundational task in neuroimaging pipelines, has similarly benefited from the Learn2Reg initiative. Building on this foundation, we introduce the Large-scale Unsupervised Brain MRI Image Registration (LUMIR) challenge, a next-generation benchmark designed to assess and advance unsupervised brain MRI registration. Distinct from prior challenges that leveraged anatomical label maps for supervision, LUMIR removes this dependency by providing over 4,000 preprocessed T1-weighted brain MRIs for training without any label maps, encouraging biologically plausible deformation modeling through self-supervision. In addition to evaluating performance on 590 held-out test subjects, LUMIR introduces a rigorous suite of zero-shot generalization tasks, spanning out-of-domain imaging modalities (e.g., FLAIR, T2-weighted, T2*-weighted), disease populations (e.g., Alzheimer's disease), acquisition protocols (e.g., 9.4T MRI), and species (e.g., macaque brains). A total of 1,158 subjects and over 4,000 image pairs were included for evaluation. Performance was assessed using both segmentation-based metrics (Dice coefficient, 95th percentile Hausdorff distance) and landmark-based registration accuracy (target registration error). Across both in-domain and zero-shot tasks, deep learning-based methods consistently achieved state-of-the-art accuracy while producing anatomically plausible deformation fields. The top-performing deep learning-based models demonstrated diffeomorphic properties and inverse consistency, outperforming several leading optimization-based methods, and showing strong robustness to most domain shifts, the exception being a drop in performance on out-of-domain contrasts.
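The segmentation-based metrics named above (Dice coefficient, 95th-percentile Hausdorff distance) can be sketched for binary masks like so; foreground voxels stand in for surfaces, and the masks are toy examples:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum())

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance between the foreground
    voxels of two binary masks (surface extraction omitted for brevity)."""
    pa = np.argwhere(a).astype(float)
    pb = np.argwhere(b).astype(float)
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1))
    return max(np.percentile(d.min(axis=1), 95), np.percentile(d.min(axis=0), 95))

fixed = np.zeros((32, 32), bool); fixed[8:20, 8:20] = True
warped = np.zeros((32, 32), bool); warped[9:21, 8:20] = True  # one-voxel misalignment

print(f"Dice={dice(fixed, warped):.3f}  HD95={hd95(fixed, warped):.1f}")
```

A one-voxel misalignment of a 12×12 square yields a Dice of about 0.92 and an HD95 of 1 voxel, illustrating how the two metrics capture overlap and boundary error respectively.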

Real-time brain tumor detection in intraoperative ultrasound: From model training to deployment in the operating room.

Cepeda S, Esteban-Sinovas O, Romero R, Singh V, Shett P, Moiyadi A, Zemmoura I, Giammalva GR, Del Bene M, Barbotti A, DiMeco F, West TR, Nahed BV, Arrese I, Hornero R, Sarabia R

PubMed · May 30, 2025
Intraoperative ultrasound (ioUS) is a valuable tool in brain tumor surgery due to its versatility, affordability, and seamless integration into the surgical workflow. However, its adoption remains limited, primarily because of the challenges associated with image interpretation and the steep learning curve required for effective use. This study aimed to enhance the interpretability of ioUS images by developing a real-time brain tumor detection system deployable in the operating room. We collected 2D ioUS images from the BraTioUS and ReMIND datasets, annotated with expert-refined tumor labels. Using the YOLO11 architecture and its variants, we trained object detection models to identify brain tumors. The dataset included 1732 images from 192 patients, divided into training, validation, and test sets. Data augmentation expanded the training set to 11,570 images. In the test dataset, YOLO11s achieved the best balance of precision and computational efficiency, with a mAP@50 of 0.95, mAP@50-95 of 0.65, and a processing speed of 34.16 frames per second. The proposed solution was prospectively validated in a cohort of 20 consecutively operated patients diagnosed with brain tumors. Neurosurgeons confirmed its seamless integration into the surgical workflow, with real-time predictions accurately delineating tumor regions. These findings highlight the potential of real-time object detection algorithms to enhance ioUS-guided brain tumor surgery, addressing key challenges in interpretation and providing a foundation for future development of computer vision-based tools for neuro-oncological surgery.
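The mAP@50 criterion used above counts a detection as correct when its intersection-over-union (IoU) with a ground-truth box is at least 0.5; a minimal IoU check on hypothetical boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

gt = (20, 20, 80, 80)    # hypothetical ground-truth tumor box
pred = (25, 22, 85, 78)  # hypothetical detector output
print(f"IoU={iou(gt, pred):.2f}")  # >= 0.5, so a hit at the mAP@50 threshold
```

mAP@50-95 averages this precision over IoU thresholds from 0.5 to 0.95, which is why it is the stricter of the two figures reported.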

ROC Analysis of Biomarker Combinations in Fragile X Syndrome-Specific Clinical Trials: Evaluating Treatment Efficacy via Exploratory Biomarkers

Norris, J. E., Berry-Kravis, E. M., Harnett, M. D., Reines, S. A., Reese, M., Auger, E. K., Outterson, A., Furman, J., Gurney, M. E., Ethridge, L. E.

medRxiv preprint · May 29, 2025
Fragile X Syndrome (FXS) is a rare neurodevelopmental disorder caused by a trinucleotide repeat expansion in the 5′ untranslated region of the FMR1 gene. FXS is characterized by intellectual disability, anxiety, sensory hypersensitivity, and difficulties with executive function. A recent phase 2 placebo-controlled clinical trial assessing BPN14770, a first-in-class phosphodiesterase 4D allosteric inhibitor, in 30 adult males (age 18-41 years) with FXS demonstrated cognitive improvements on the NIH Toolbox Cognitive Battery in domains related to language, and caregiver reports of improvement in both daily functioning and language. However, individual physiological measures from electroencephalography (EEG) demonstrated only marginal significance for trial efficacy. A secondary analysis of resting-state EEG data collected as part of the phase 2 clinical trial evaluating BPN14770 was conducted using a machine learning classification algorithm to classify trial conditions (i.e., baseline, drug, placebo) via linear combinations of EEG variables. The algorithm identified a composite of peak alpha frequencies (PAF) across multiple brain regions as a potential biomarker demonstrating BPN14770 efficacy. Increased PAF from baseline was associated with the drug but not with placebo. Given the relationship between PAF and cognitive function among typically developing adults and those with intellectual disability, as well as previously reported reductions in alpha frequency and power in FXS, PAF represents a potential physiological measure of BPN14770 efficacy.
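Peak alpha frequency is conventionally estimated from a power spectrum as the frequency of maximal power within the 8-13 Hz band; a toy illustration on synthetic data (not the trial's EEG pipeline):

```python
import numpy as np

fs = 250  # sampling rate in Hz, assumed for this example
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
# Synthetic resting-state signal: a 9.5 Hz alpha rhythm plus broadband noise
eeg = np.sin(2 * np.pi * 9.5 * t) + 0.5 * rng.normal(size=t.size)

# Peak alpha frequency: frequency of maximal power within 8-13 Hz
freqs = np.fft.rfftfreq(eeg.size, 1 / fs)
power = np.abs(np.fft.rfft(eeg)) ** 2
band = (freqs >= 8) & (freqs <= 13)
paf = freqs[band][np.argmax(power[band])]
print(f"peak alpha frequency = {paf:.2f} Hz")
```

A drug-related increase in PAF would show up as this peak shifting upward relative to baseline.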

Estimating Head Motion in Structural MRI Using a Deep Neural Network Trained on Synthetic Artifacts

Charles Bricout, Samira Ebrahimi Kahou, Sylvain Bouix

arXiv preprint · May 29, 2025
Motion-related artifacts are inevitable in Magnetic Resonance Imaging (MRI) and can bias automated neuroanatomical metrics such as cortical thickness. Manual review cannot objectively quantify motion in anatomical scans, and existing automated approaches often require specialized hardware or rely on unbalanced noisy training data. Here, we train a 3D convolutional neural network to estimate motion severity using only synthetically corrupted volumes. We validate our method with one held-out site from our training cohort and with 14 fully independent datasets, including one with manual ratings, achieving a representative $R^2 = 0.65$ versus manual labels and significant thickness-motion correlations in 12/15 datasets. Furthermore, our predicted motion correlates with subject age in line with prior studies. Our approach generalizes across scanner brands and protocols, enabling objective, scalable motion assessment in structural MRI studies without prospective motion correction.
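One common way to synthesize motion corruption (the paper's exact pipeline is not specified here) is to apply random phase perturbations to a fraction of k-space lines, since rigid in-plane translation corresponds to a phase ramp in k-space:

```python
import numpy as np

def add_motion_artifact(image, severity, rng):
    """Corrupt an image with synthetic motion by phase-shifting a fraction
    of k-space lines -- a common simulation strategy, used here as an
    illustrative stand-in for the paper's corruption pipeline."""
    k = np.fft.fftshift(np.fft.fft2(image))
    n_lines = int(severity * image.shape[0])
    rows = rng.choice(image.shape[0], n_lines, replace=False)
    # A linear phase ramp in k-space corresponds to an in-plane translation
    shifts = rng.uniform(-3, 3, n_lines)
    cols = np.arange(image.shape[1])
    k[rows, :] *= np.exp(-2j * np.pi * shifts[:, None] * cols / image.shape[1])
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
corrupted = add_motion_artifact(clean, severity=0.3, rng=rng)
err = np.mean((corrupted - clean) ** 2)
print(f"MSE vs clean image: {err:.4f}")
```

A regression network can then be trained to predict the known `severity` from the corrupted volume alone, which is the supervision signal the synthetic approach provides.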

Diagnosis of trigeminal neuralgia based on plain skull radiography using convolutional neural network.

Han JH, Ji SY, Kim M, Kwon JE, Park JB, Kang H, Hwang K, Kim CY, Kim T, Jeong HG, Ahn YH, Chung HT

PubMed · May 29, 2025
This study aimed to determine whether trigeminal neuralgia (TN) can be diagnosed using convolutional neural networks (CNNs) based on plain X-ray skull images. A labeled dataset of 166 skull images from patients aged over 16 years with trigeminal neuralgia was compiled, alongside a control dataset of 498 images from patients with unruptured intracranial aneurysms. The images were randomly partitioned into training, validation, and test datasets in a 6:2:2 ratio. Classifier performance was assessed using accuracy and the area under the receiver operating characteristic curve (AUROC). Gradient-weighted class activation mapping was applied to identify regions of interest. External validation was conducted using a dataset obtained from another institution. The CNN achieved an overall accuracy of 87.2%, with sensitivity and specificity of 0.72 and 0.91, respectively, and an AUROC of 0.90 on the test dataset. In most cases, the sphenoid body and clivus were identified as key areas for predicting trigeminal neuralgia. Validation on the external dataset yielded an accuracy of 71.0%, highlighting the potential of deep learning-based models in distinguishing X-ray skull images of patients with trigeminal neuralgia from those of control individuals. Our preliminary results suggest that plain X-ray could potentially be used as an adjunct to conventional MRI, ideally with CISS sequences, to aid in the clinical diagnosis of TN. Further refinement could establish this approach as a valuable screening tool.
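The reported AUROC and sensitivity/specificity operating point can be illustrated with synthetic classifier scores (invented numbers, not the study's outputs):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
# Hypothetical CNN probabilities: TN cases (label 1) score higher than controls (0)
y_true = np.array([1] * 33 + [0] * 100)
scores = np.concatenate([rng.normal(0.7, 0.15, 33), rng.normal(0.35, 0.15, 100)])

auc = roc_auc_score(y_true, scores)
fpr, tpr, thr = roc_curve(y_true, scores)
# Pick the operating point maximizing Youden's J = sensitivity + specificity - 1
best = np.argmax(tpr - fpr)
print(f"AUROC={auc:.2f}  sensitivity={tpr[best]:.2f}  specificity={1 - fpr[best]:.2f}")
```

Sensitivity/specificity pairs like the study's 0.72/0.91 correspond to one point on this curve; the AUROC summarizes all of them.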

Standardizing Heterogeneous MRI Series Description Metadata Using Large Language Models.

Kamel PI, Doo FX, Savani D, Kanhere A, Yi PH, Parekh VS

PubMed · May 29, 2025
MRI metadata, particularly the free-text series descriptions (SDs) used to identify sequences, are highly heterogeneous due to variable inputs by manufacturers and technologists. This variability poses challenges in correctly identifying series for hanging protocols and dataset curation. The purpose of this study was to evaluate the ability of large language models (LLMs) to automatically classify MRI SDs. We analyzed non-contrast brain MRIs performed between 2016 and 2022 at our institution, identifying all unique SDs in the metadata. A practicing neuroradiologist manually classified the SD text into: "T1," "T2," "T2/FLAIR," "SWI," "DWI," "ADC," or "Other." Then, various LLMs, including GPT-3.5 Turbo, GPT-4, GPT-4o, Llama 3 8B, and Llama 3 70B, were asked to classify each SD into one of the sequence categories. Model performances were compared to ground truth classification using area under the curve (AUC) as the primary metric. Additionally, GPT-4o was tasked with generating regular expression templates to match each category. In 2510 MRI brain examinations, there were 1395 unique SDs, with 727/1395 (52.1%) appearing only once, indicating high variability. GPT-4o demonstrated the highest performance, achieving an average AUC of 0.983 ± 0.020 for all series with detailed prompting. GPT models significantly outperformed Llama models, with smaller differences within the GPT family. Regular expression generation was inconsistent, demonstrating an average AUC of 0.774 ± 0.161 for all sequences. Our findings suggest that LLMs are effective for interpreting and standardizing heterogeneous MRI SDs.
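Regular-expression templates of the kind the study asked GPT-4o to generate might look like the following (hypothetical patterns; real series descriptions are far more variable, which helps explain why the regex approach underperformed the LLMs):

```python
import re

# Hypothetical templates, one per sequence category. Order matters:
# "FLAIR" must be tested before the generic "T2" pattern.
TEMPLATES = {
    "T2/FLAIR": re.compile(r"flair", re.I),
    "DWI": re.compile(r"\bdwi\b|diffusion", re.I),
    "ADC": re.compile(r"\badc\b", re.I),
    "SWI": re.compile(r"\bswi\b|susceptibility", re.I),
    "T1": re.compile(r"\bt1\b|mprage", re.I),
    "T2": re.compile(r"\bt2\b", re.I),
}

def classify_sd(sd):
    """Return the first category whose template matches the series description."""
    for label, pattern in TEMPLATES.items():
        if pattern.search(sd):
            return label
    return "Other"

print(classify_sd("AX T2 FLAIR 3MM"))  # T2/FLAIR
print(classify_sd("Ax DWI b1000"))     # DWI
print(classify_sd("SAG T1 MPRAGE"))    # T1
print(classify_sd("scout"))            # Other
```

Fixed patterns like these break on the long tail of one-off descriptions (52.1% of unique SDs appeared only once), which is where LLM classification has the advantage.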

A combined attention mechanism for brain tumor segmentation of lower-grade glioma in magnetic resonance images.

Hedibi H, Beladgham M, Bouida A

PubMed · May 29, 2025
Low-grade gliomas (LGGs) are among the most difficult brain tumors to segment reliably in FLAIR MRI, and effective delineation of these lesions is critical for clinical diagnosis, treatment planning, and patient monitoring. Nevertheless, conventional U-Net-based approaches usually suffer from the loss of critical structural details owing to repeated down-sampling, while the encoder features often retain irrelevant information that is not properly utilized by the decoder. To address these challenges, this paper offers a dual-attention U-shaped design, named ECASE-Unet, which seamlessly integrates Efficient Channel Attention (ECA) and Squeeze-and-Excitation (SE) blocks in both the encoder and decoder stages. By selectively recalibrating channel-wise information, the model emphasizes diagnostically significant regions of interest and suppresses noise. Furthermore, dilated convolutions are introduced at the bottleneck layer to capture multi-scale contextual cues without inflating computational complexity, and dropout regularization is systematically applied to prevent overfitting on heterogeneous data. Experimental results on the Kaggle Low-Grade-Glioma dataset show that ECASE-Unet outperforms previous segmentation algorithms, reaching a Dice coefficient of 0.9197 and an Intersection over Union (IoU) of 0.8521. Comprehensive ablation studies further reveal that integrating the ECA and SE modules delivers complementary benefits, supporting the model's robust efficacy in precisely identifying LGG boundaries. These findings underline the potential of ECASE-Unet to streamline clinical workflows and improve patient outcomes. Future work will focus on extending the model to new MRI modalities and studying the integration of clinical characteristics for a more comprehensive characterization of brain tumors.
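A Squeeze-and-Excitation block of the kind ECASE-Unet integrates can be sketched in a few lines of NumPy (shapes, reduction ratio, and initialization are illustrative, not the paper's implementation):

```python
import numpy as np

def squeeze_excite(feats, w1, w2):
    """Squeeze-and-Excitation: global-average-pool each channel, pass the
    descriptor through a small bottleneck MLP, and rescale the channels.
    feats: (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    squeeze = feats.mean(axis=(1, 2))        # (C,) channel descriptor
    hidden = np.maximum(w1 @ squeeze, 0)     # ReLU bottleneck
    gate = 1 / (1 + np.exp(-(w2 @ hidden)))  # sigmoid gates in (0, 1)
    return feats * gate[:, None, None]       # channel-wise recalibration

rng = np.random.default_rng(0)
C, H, W, r = 8, 16, 16, 2
feats = rng.normal(size=(C, H, W))
w1 = rng.normal(size=(C // r, C)) * 0.1
w2 = rng.normal(size=(C, C // r)) * 0.1
out = squeeze_excite(feats, w1, w2)
print(out.shape)  # same shape as the input, with channels rescaled
```

ECA follows the same recalibration idea but replaces the bottleneck MLP with a lightweight 1D convolution over the channel descriptor, which is what keeps its parameter count negligible.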
