
SDS-Net: A Synchronized Dual-Stage Network for Predicting Patients Within 4.5-h Thrombolytic Treatment Window Using MRI.

Zhang X, Luan Y, Cui Y, Zhang Y, Lu C, Zhou Y, Zhang Y, Li H, Ju S, Tang T

PubMed · Jun 1, 2025
Timely and precise identification of acute ischemic stroke (AIS) within 4.5 h is imperative for effective treatment decision-making. This study aims to construct a novel network that utilizes limited datasets to recognize AIS patients within this critical window. We conducted a retrospective analysis of 265 AIS patients who underwent both fluid-attenuated inversion recovery (FLAIR) and diffusion-weighted imaging (DWI) within 24 h of symptom onset. Patients were categorized based on the time since stroke onset (TSS) into two groups: TSS ≤ 4.5 h and TSS > 4.5 h. The TSS was calculated as the time from stroke onset to MRI completion. We proposed a synchronized dual-stage network (SDS-Net) and a sequential dual-stage network (Dual-stage Net), each comprising an infarct voxel identification stage and a TSS classification stage. The models were trained on 181 patients and validated on an independent external cohort of 84 patients using the metrics of area under the curve (AUC), sensitivity, specificity, and accuracy. A DeLong test was used to statistically compare the performance of the two models. SDS-Net achieved an accuracy of 0.844 with an AUC of 0.914 in the validation dataset, outperforming the Dual-stage Net, which had an accuracy of 0.822 and an AUC of 0.846. In the external test dataset, SDS-Net further demonstrated superior performance with an accuracy of 0.800 and an AUC of 0.879, compared to the accuracy of 0.694 and AUC of 0.744 of Dual-stage Net (P = 0.049). SDS-Net is a robust and reliable tool for identifying AIS patients within a 4.5-h treatment window using MRI. This model can assist clinicians in making timely treatment decisions, potentially improving patient outcomes.
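The classification metrics reported in this abstract (AUC, sensitivity, specificity) can be sketched in a few lines of plain Python. This is an illustrative, dependency-free implementation, not the study's code; the 0.5 decision threshold and variable names are assumptions.

```python
# AUC via the Mann-Whitney formulation: the probability that a randomly
# chosen positive case scores higher than a randomly chosen negative case.
def auc_score(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP) at a fixed threshold.
def sensitivity_specificity(labels, scores, threshold=0.5):
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)
```

The DeLong test used to compare the two models' AUCs builds on this same rank-based view of the AUC, adding a variance estimate over the paired scores.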

Decoding Glioblastoma Heterogeneity: Neuroimaging Meets Machine Learning.

Fares J, Wan Y, Mayrand R, Li Y, Mair R, Price SJ

PubMed · Jun 1, 2025
Recent advancements in neuroimaging and machine learning have significantly improved our ability to diagnose and categorize isocitrate dehydrogenase (IDH)-wildtype glioblastoma, a disease characterized by notable tumoral heterogeneity, which is crucial for effective treatment. Neuroimaging techniques, such as diffusion tensor imaging and magnetic resonance radiomics, provide noninvasive insights into tumor infiltration patterns and metabolic profiles, aiding in accurate diagnosis and prognostication. Machine learning algorithms further enhance glioblastoma characterization by identifying distinct imaging patterns and features, facilitating precise diagnoses and treatment planning. Integration of these technologies allows for the development of image-based biomarkers, potentially reducing the need for invasive biopsy procedures and enabling personalized therapy targeting specific pro-tumoral signaling pathways and resistance mechanisms. Although significant progress has been made, ongoing innovation is essential to address remaining challenges and further improve these methodologies. Future directions should focus on refining machine learning models, integrating emerging imaging techniques, and elucidating the complex interplay between imaging features and underlying molecular processes. This review highlights the pivotal role of neuroimaging and machine learning in glioblastoma research, offering invaluable noninvasive tools for diagnosis, prognosis prediction, and treatment planning, ultimately improving patient outcomes. These advances in the field promise to usher in a new era in the understanding and classification of IDH-wildtype glioblastoma.

Structural alterations as a predictor of depression - a 7-Tesla MRI-based multidimensional approach.

Schnellbächer GJ, Rajkumar R, Veselinović T, Ramkiran S, Hagen J, Collee M, Shah NJ, Neuner I

PubMed · Jun 1, 2025
Major depressive disorder (MDD) is a debilitating condition that is associated with changes in the default-mode network (DMN). Commonly reported features include alterations in gray matter volume (GMV), cortical thickness (CoT), and gyrification. A comprehensive examination of these variables using ultra-high field strength MRI and machine learning methods may lead to novel insights into the pathophysiology of depression and help develop a more personalized therapy. Cerebral images were obtained from 41 patients with confirmed MDD and 41 healthy controls, matched for age and gender, using a 7 T MRI. DMN parcellation followed the Schaefer 600 Atlas. Based on the results of a mixed-model repeated measures analysis, a support vector machine (SVM) calculation followed by leave-one-out cross-validation determined the predictive ability of structural features for the presence of MDD. A consecutive permutation procedure identified which areas contributed to the classification results. Correlating changes in those areas with BDI-II and AMDP scores added an explanatory aspect to this study. CoT did not delineate relevant changes in the mixed model and was excluded from further analysis. The SVM achieved a good prediction accuracy of 0.76 using gyrification data. GMV was not a viable predictor for disease presence; however, it correlated in the left parahippocampal gyrus with disease severity as measured by the BDI-II. Structural data of the DMN may therefore contain the necessary information to predict the presence of MDD. However, there may be inherent challenges with predicting disease course or treatment response due to high GMV variance and the static character of gyrification. Further improvements in data acquisition and analysis may help to overcome these difficulties.
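The leave-one-out cross-validation scheme described above can be sketched as follows. Note the stand-ins: the study used an SVM on gyrification features, whereas this dependency-free example substitutes a nearest-centroid classifier on a single scalar feature; that substitution and all names are illustrative assumptions.

```python
# Predict the class whose mean training feature value is closest to x
# (a 1-D nearest-centroid classifier, standing in for the study's SVM).
def nearest_centroid_predict(train_x, train_y, x):
    dists = {}
    for label in set(train_y):
        feats = [f for f, y in zip(train_x, train_y) if y == label]
        centroid = sum(feats) / len(feats)
        dists[label] = abs(x - centroid)
    return min(dists, key=dists.get)

# Leave-one-out CV: hold out each sample in turn, train on the rest,
# and score the held-out prediction.
def loocv_accuracy(xs, ys):
    correct = 0
    for i in range(len(xs)):
        train_x = xs[:i] + xs[i + 1:]
        train_y = ys[:i] + ys[i + 1:]
        correct += nearest_centroid_predict(train_x, train_y, xs[i]) == ys[i]
    return correct / len(xs)
```

With only 82 subjects, leave-one-out makes maximal use of the data: every subject serves once as the test case while the remaining 81 train the model.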

CNS-CLIP: Transforming a Neurosurgical Journal Into a Multimodal Medical Model.

Alyakin A, Kurland D, Alber DA, Sangwon KL, Li D, Tsirigos A, Leuthardt E, Kondziolka D, Oermann EK

PubMed · Jun 1, 2025
Classical biomedical data science models are trained on a single modality and aimed at one specific task. However, the exponential increase in the size and capabilities of foundation models inside and outside medicine shows a shift toward task-agnostic models using large-scale, often internet-based, data. Recent research into smaller foundation models trained on specific literature, such as programming textbooks, demonstrated that they can display capabilities similar to or superior to large generalist models, suggesting a potential middle ground between small task-specific and large foundation models. This study attempts to introduce a domain-specific multimodal model, Congress of Neurological Surgeons (CNS)-Contrastive Language-Image Pretraining (CLIP), developed for neurosurgical applications, leveraging data exclusively from Neurosurgery Publications. We constructed a multimodal data set of articles from Neurosurgery Publications through PDF data collection and figure-caption extraction using an artificial intelligence pipeline for quality control. Our final data set included 24,021 figure-caption pairs. We then developed a fine-tuning protocol for the OpenAI CLIP model. The model was evaluated on tasks including neurosurgical information retrieval, computed tomography imaging classification, and zero-shot ImageNet classification. CNS-CLIP demonstrated superior performance in neurosurgical information retrieval with a Top-1 accuracy of 24.56%, compared with 8.61% for the baseline. The average area under the receiver operating characteristic curve across 6 neuroradiology tasks achieved by CNS-CLIP was 0.95, slightly superior to OpenAI's Contrastive Language-Image Pretraining at 0.94 and significantly outperforming a vanilla vision transformer at 0.62. In generalist classification, CNS-CLIP reached a Top-1 accuracy of 47.55%, a decrease from the baseline of 52.37%, demonstrating a catastrophic forgetting phenomenon.
This study presents a pioneering effort in building a domain-specific multimodal model using data from a medical society publication. The results indicate that domain-specific models, while less globally versatile, can offer advantages in specialized contexts. This emphasizes the importance of using tailored data and domain-focused development in training foundation models in neurosurgery and general medicine.
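The CLIP objective underlying this kind of figure-caption alignment is a symmetric contrastive (InfoNCE) loss: each figure embedding should match its own caption more closely than any other caption in the batch, and vice versa. The sketch below computes that loss over toy 2-D embeddings; real CLIP fine-tuning runs learned image and text encoders, which this example omits, and the temperature value is just the common default.

```python
import math

# -log softmax(logits)[target_idx], computed stably.
def softmax_cross_entropy(logits, target_idx):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[target_idx] / sum(exps))

# Symmetric InfoNCE over a batch of paired image/text embeddings:
# row i of the similarity matrix should peak at column i (and vice versa).
def clip_loss(image_emb, text_emb, temperature=0.07):
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    n = len(image_emb)
    sims = [[dot(image_emb[i], text_emb[j]) / temperature for j in range(n)]
            for i in range(n)]
    img_to_txt = sum(softmax_cross_entropy(sims[i], i) for i in range(n)) / n
    txt_to_img = sum(softmax_cross_entropy([sims[i][j] for i in range(n)], j)
                     for j in range(n)) / n
    return (img_to_txt + txt_to_img) / 2
```

When each image embedding aligns with its own caption the loss approaches zero; shuffled pairings drive it up, which is what the 24,021 figure-caption pairs give the optimizer to push against.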

Dual Energy CT for Deep Learning-Based Segmentation and Volumetric Estimation of Early Ischemic Infarcts.

Kamel P, Khalid M, Steger R, Kanhere A, Kulkarni P, Parekh V, Yi PH, Gandhi D, Bodanapally U

PubMed · Jun 1, 2025
Ischemic changes are not visible on non-contrast head CT until several hours after infarction, though deep convolutional neural networks have shown promise in the detection of subtle imaging findings. This study aims to assess whether dual-energy CT (DECT) acquisition can improve early infarct visibility for machine learning. The retrospective dataset consisted of 330 DECTs acquired up to 48 h prior to confirmation of a DWI-positive infarct on MRI between 2016 and 2022. Infarct segmentation maps were generated from the MRI and co-registered to the CT to serve as ground truth for segmentation. A self-configuring 3D nnU-Net was trained for segmentation on (1) standard 120 kV mixed images, (2) 190 keV virtual monochromatic images, and (3) 120 kV + 190 keV images as dual-channel inputs. Algorithm performance was assessed with Dice scores and paired t-tests on a test set. Global aggregate Dice scores were 0.616, 0.645, and 0.665 for standard 120 kV images, 190 keV images, and combined-channel inputs, respectively. Differences in overall Dice scores were statistically significant, with the highest performance for combined-channel inputs (p < 0.01). Small but statistically significant differences were observed for infarcts between 6 and 12 h from last-known-well, with higher performance for larger infarcts. Volumetric accuracy trended higher with combined inputs, but differences were not statistically significant (p = 0.07). Supplementation of standard head CT images with dual-energy data provides earlier and more accurate segmentation of infarcts for machine learning, particularly between 6 and 12 h after last-known-well.
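The Dice similarity coefficient used above to score the infarct segmentations has a compact definition: twice the overlap between predicted and ground-truth masks, divided by their combined size. A minimal sketch, with masks flattened to 0/1 voxel lists for brevity:

```python
# Dice = 2|P ∩ T| / (|P| + |T|); defined as 1.0 when both masks are empty.
def dice_score(pred, truth):
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 1.0 if total == 0 else 2.0 * intersection / total
```

A Dice of 0.665 (the combined-channel result) thus means roughly two-thirds overlap-weighted agreement between the CT-predicted infarct and the MRI-derived ground truth.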

Incorporating Radiologist Knowledge Into MRI Quality Metrics for Machine Learning Using Rank-Based Ratings.

Tang C, Eisenmenger LB, Rivera-Rivera L, Huo E, Junn JC, Kuner AD, Oechtering TH, Peret A, Starekova J, Johnson KM

PubMed · Jun 1, 2025
Deep learning (DL) often requires an image quality metric; however, widely used metrics are not designed for medical images. To develop an image quality metric that is specific to MRI using radiologists' image rankings and DL models. Retrospective. A total of 19,344 rankings on 2916 unique image pairs from the NYU fastMRI Initiative neuro database were used for training the neural network-based image quality metrics, with an 80%/20% training/validation split and fivefold cross-validation. 1.5 T and 3 T T1, T1 postcontrast, T2, and fluid-attenuated inversion recovery (FLAIR). Synthetically corrupted image pairs were ranked by radiologists (N = 7), with a subset also scoring images using a Likert scale (N = 2). DL models were trained to match rankings using two architectures (EfficientNet and IQ-Net) with and without reference image subtraction and compared to ranking based on mean squared error (MSE) and structural similarity (SSIM). The image-quality-assessing DL models were evaluated as alternatives to MSE and SSIM as optimization targets for DL denoising and reconstruction. Radiologists' agreement was assessed by a percentage metric and quadratic weighted Cohen's kappa. Ranking accuracies were compared using repeated measurements analysis of variance. Reconstruction models trained with IQ-Net score, MSE, and SSIM were compared by paired t test. P < 0.05 was considered significant. Compared to direct Likert scoring, ranking produced a higher level of agreement between radiologists (70.4% vs. 25%). Image ranking was subjective, with a high level of intraobserver agreement (94.9% ± 2.4%) and lower interobserver agreement (61.47% ± 5.51%).
IQ-Net and EfficientNet accurately predicted rankings with a reference image (75.2% ± 1.3% and 79.2% ± 1.7%, respectively). However, EfficientNet resulted in images with artifacts and high MSE when used in denoising tasks, while IQ-Net-optimized networks performed well for both denoising and reconstruction tasks. Image quality networks can be trained from image ranking and used to optimize DL tasks. Evidence Level: 3. Technical Efficacy: Stage 1.
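The quadratic weighted Cohen's kappa used above to quantify radiologist agreement penalizes disagreements by the squared distance between rating categories, so a one-step disagreement costs far less than a large one. An illustrative pure-Python implementation (ratings as integer category indices 0..k-1; names are assumptions):

```python
# kappa = 1 - (weighted observed disagreement / weighted chance disagreement),
# with quadratic weights w(i,j) = (i-j)^2 / (k-1)^2.
def quadratic_weighted_kappa(rater_a, rater_b, num_classes):
    n = len(rater_a)
    observed = [[0.0] * num_classes for _ in range(num_classes)]
    for a, b in zip(rater_a, rater_b):
        observed[a][b] += 1.0 / n
    # Marginal rating distributions give the chance-agreement baseline.
    hist_a = [rater_a.count(c) / n for c in range(num_classes)]
    hist_b = [rater_b.count(c) / n for c in range(num_classes)]
    num = den = 0.0
    for i in range(num_classes):
        for j in range(num_classes):
            w = (i - j) ** 2 / (num_classes - 1) ** 2
            num += w * observed[i][j]
            den += w * hist_a[i] * hist_b[j]
    return 1.0 - num / den
```

Perfect agreement yields kappa = 1, chance-level agreement yields 0, and systematically reversed ratings go negative.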

Semi-Supervised Learning Allows for Improved Segmentation With Reduced Annotations of Brain Metastases Using Multicenter MRI Data.

Ottesen JA, Tong E, Emblem KE, Latysheva A, Zaharchuk G, Bjørnerud A, Grøvik E

PubMed · Jun 1, 2025
Deep learning-based segmentation of brain metastases relies on large amounts of data fully annotated by domain experts. Semi-supervised learning offers potentially efficient methods to improve model performance without an excessive annotation burden. This work tests the viability of semi-supervision for brain metastases segmentation. Retrospective. There were 156, 65, 324, and 200 labeled scans from four institutions and 519 unlabeled scans from a single institution. All subjects included in the study had been diagnosed with brain metastases. 1.5 T and 3 T, 2D and 3D T1-weighted pre- and post-contrast, and fluid-attenuated inversion recovery (FLAIR). Three semi-supervision methods (mean teacher, cross-pseudo supervision, and interpolation consistency training) were adapted with the U-Net architecture. The three semi-supervised methods were compared to their respective supervised baselines on the full and half-sized training sets. Evaluation was performed on a multinational test set from four different institutions using 5-fold cross-validation. Method performance was evaluated by the following: the number of false-positive predictions, the number of true-positive predictions, the 95th percentile Hausdorff distance, and the Dice similarity coefficient (DSC). Significance was tested using a paired-samples t test for a single fold and across all folds within a given cohort. Semi-supervision outperformed the supervised baseline for all sites: the best-performing semi-supervised method achieved average DSC improvements of 6.3% ± 1.6%, 8.2% ± 3.8%, 8.6% ± 2.6%, and 15.4% ± 1.4% on the four test cohorts when trained on half the dataset, and 3.6% ± 0.7%, 2.0% ± 1.5%, 1.8% ± 5.7%, and 4.7% ± 1.7% when trained on the full dataset, compared to the supervised baseline. In addition, in three of four datasets, the semi-supervised training produced equal or better results than the supervised models trained on twice the labeled data.
Semi-supervised learning allows for improved segmentation performance over the supervised baseline, and the improvement was particularly notable for independent external test sets when trained on small amounts of labeled data. Artificial intelligence requires extensive datasets with large amounts of data annotated by medical experts, which can be difficult to acquire due to the large workload. To compensate for this, it is possible to utilize large amounts of unannotated clinical data in addition to annotated data. However, this method has not been widely tested for the most common intracranial brain tumor, brain metastases. This study shows that this approach allows for data-efficient deep learning models across multiple institutions with different clinical protocols and scanners. Evidence Level: 3. Technical Efficacy: Stage 2.
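Of the three semi-supervision methods named above, mean teacher is built around one small mechanism: the teacher network's weights are an exponential moving average (EMA) of the student's, so the teacher provides slowly varying, more stable targets for the unlabeled data. A minimal sketch of that update, with weights as flat float lists rather than U-Net parameter tensors (an illustrative simplification):

```python
# teacher <- decay * teacher + (1 - decay) * student, element-wise.
# A decay near 1 (e.g. 0.99) makes the teacher a slow average of the student.
def ema_update(teacher_weights, student_weights, decay=0.99):
    return [decay * t + (1.0 - decay) * s
            for t, s in zip(teacher_weights, student_weights)]
```

During training the student is updated by gradient descent on labeled and consistency losses, and after each step the teacher is refreshed with this EMA rule rather than by backpropagation.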

Advances in Biomarker Discovery and Diagnostics for Alzheimer's Disease.

Bhatia V, Chandel A, Minhas Y, Kushawaha SK

PubMed · Jun 1, 2025
Alzheimer's disease (AD) is a progressive neurodegenerative disorder characterized by intracellular neurofibrillary tangles of tau protein and extracellular β-amyloid plaques. Early and accurate diagnosis is crucial for effective treatment and management. The purpose of this review is to investigate new technologies that improve diagnostic accuracy while examining the current diagnostic criteria for AD, such as clinical evaluations, cognitive testing, and biomarker-based techniques. A thorough review of the literature was conducted to assess both conventional and contemporary diagnostic methods. Multimodal strategies integrating clinical, imaging, and biochemical evaluations were emphasized. The promise of current developments in biomarker discovery was also examined, including mass spectrometry and artificial intelligence. Current diagnostic approaches include cerebrospinal fluid (CSF) biomarkers, imaging tools (MRI, PET), cognitive tests, and new blood-based markers. Integrating these technologies into multimodal diagnostic procedures enhances diagnostic accuracy and distinguishes dementia from other conditions. New technologies that hold promise for improving biomarker identification and diagnostic reliability include mass spectrometry and artificial intelligence. Advancements in AD diagnostics underscore the need for accessible, minimally invasive, and cost-effective techniques to facilitate early detection and intervention. The integration of novel technologies with traditional methods may significantly enhance the accuracy and feasibility of AD diagnosis.

Regional Cerebral Atrophy Contributes to Personalized Survival Prediction in Amyotrophic Lateral Sclerosis: A Multicentre, Machine Learning, Deformation-Based Morphometry Study.

Lajoie I, Kalra S, Dadar M

PubMed · Jun 1, 2025
Accurate personalized survival prediction in amyotrophic lateral sclerosis is essential for effective patient care planning. This study investigates whether grey and white matter changes measured by magnetic resonance imaging can improve individual survival predictions. We analyzed data from 178 patients with amyotrophic lateral sclerosis and 166 healthy controls in the Canadian Amyotrophic Lateral Sclerosis Neuroimaging Consortium study. A voxel-wise linear mixed-effects model assessed disease-related and survival-related atrophy detected through deformation-based morphometry, controlling for age, sex, and scanner variations. Additional linear mixed-effects models explored associations between regional imaging and clinical measurements, and their associations with time to the composite outcome of death, tracheostomy, or permanent assisted ventilation. We evaluated whether incorporating imaging features alongside clinical data could improve the performance of an individual survival distribution model. Deformation-based morphometry uncovered distinct voxel-wise atrophy patterns linked to disease progression and survival, with many of these regional atrophies significantly associated with clinical manifestations of the disease. By integrating regional imaging features with clinical data, we observed a substantial enhancement in the performance of survival models across key metrics. Our analysis identified specific brain regions, such as the corpus callosum, rostral middle frontal gyrus, and thalamus, where atrophy predicted an increased risk of mortality. This study suggests that brain atrophy patterns measured by deformation-based morphometry provide valuable insights beyond clinical assessments for prognosis. It offers a more comprehensive approach to prognosis and highlights brain regions involved in disease progression and survival, potentially leading to a better understanding of amyotrophic lateral sclerosis. ANN NEUROL 2025;97:1144-1157.
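Individual survival models of the kind evaluated above are commonly scored with Harrell's concordance index, which asks how often a higher predicted risk corresponds to an earlier observed event while respecting censoring. The abstract does not name its exact metric, so this choice and all names are illustrative; a minimal sketch:

```python
# Harrell's C-index: fraction of comparable pairs ordered correctly
# (risk ties count 0.5). A pair (i, j) is comparable only when subject i
# has the earlier time AND an observed event (events[i] == 1); censored
# earlier times tell us nothing about ordering.
def concordance_index(times, events, risks):
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable
```

A C-index of 0.5 is chance-level ordering and 1.0 is perfect; improvements from adding imaging features to clinical data would show up as gains on metrics of this kind.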

UniBrain: Universal Brain MRI diagnosis with hierarchical knowledge-enhanced pre-training.

Lei J, Dai L, Jiang H, Wu C, Zhang X, Zhang Y, Yao J, Xie W, Zhang Y, Li Y, Zhang Y, Wang Y

PubMed · Jun 1, 2025
Magnetic Resonance Imaging (MRI) has become a pivotal tool in diagnosing brain diseases, with a wide array of computer-aided artificial intelligence methods being proposed to enhance diagnostic accuracy. However, early studies were often limited by small-scale datasets and a narrow range of disease types, which posed challenges in model generalization. This study presents UniBrain, a hierarchical knowledge-enhanced pre-training framework designed for universal brain MRI diagnosis. UniBrain leverages a large-scale dataset comprising 24,770 imaging-report pairs from routine diagnostics for pre-training. Unlike previous approaches that either focused solely on visual representation learning or used brute-force alignment between vision and language, the framework introduces a hierarchical alignment mechanism. This mechanism extracts structured knowledge from free-text clinical reports at multiple granularities, enabling vision-language alignment at both the sequence and case levels, thereby significantly improving feature learning efficiency. A coupled vision-language perception module is further employed for text-guided multi-label classification, which facilitates zero-shot evaluation and fine-tuning of downstream tasks without modifying the model architecture. UniBrain is validated on both in-domain and out-of-domain datasets, consistently surpassing existing state-of-the-art diagnostic models and demonstrating performance on par with radiologists in specific disease categories. It shows strong generalization capabilities across diverse tasks, highlighting its potential for broad clinical application. The code is available at https://github.com/ljy19970415/UniBrain.
