Page 108 of 116 (1159 results)

Texture-based probability mapping for automatic assessment of myocardial injury in late gadolinium enhancement images after revascularized STEMI.

Frøysa V, Berg GJ, Singsaas E, Eftestøl T, Woie L, Ørn S

pubmed · May 15, 2025
Late gadolinium enhancement cardiac magnetic resonance imaging (LGE-CMR) is the gold standard for assessing myocardial infarction (MI) size. Texture-based probability mapping (TPM) is a novel machine-learning-based analysis of LGE images of myocardial injury. The ability of TPM to assess acute myocardial injury has not been determined. This proof-of-concept study aimed to determine how TPM responds to the dynamic changes in myocardial injury during one-year follow-up after a first-time revascularized acute MI. Forty-one patients with first-time acute ST-elevation MI and single-vessel occlusion underwent successful PCI. LGE-CMR images were obtained 2 days, 1 week, 2 months, and 1 year following MI. TPM size was compared with manual LGE-CMR-based MI size, LV remodeling, and biomarkers. TPM size remained larger than MI size by LGE-CMR at all time points, decreasing from 2 days to 2 months (p < 0.001) but increasing from 2 months to 1 year (p < 0.01). TPM correlated strongly with peak troponin T (p < 0.001) and NT-proBNP (p < 0.001). At 1 week, 2 months, and 1 year, TPM showed a stronger correlation with NT-proBNP than MI size by LGE-CMR did. Analyzing all collected pixels from 2 months to 1 year revealed a general increase in pixel scar probability in both the infarcted and non-infarcted regions. This proof-of-concept study suggests that TPM may offer additional insights into myocardial alterations in both infarcted and non-infarcted regions following acute MI. These findings indicate a potential role for TPM in assessing the overall myocardial response to infarction and the subsequent healing and remodeling process.
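The core output of a TPM-style analysis is a per-pixel scar-probability map; a hedged sketch of how an injury size could be read off such a map (the threshold, pixel area, and toy probabilities below are illustrative assumptions, not the authors' method):

```python
# Hypothetical sketch: estimating injured-myocardium size from a per-pixel
# scar-probability map, in the spirit of texture-based probability mapping (TPM).
# The 0.5 threshold, pixel area, and toy map are assumptions for illustration.

def tpm_injury_size(prob_map, pixel_area_mm2=1.0, threshold=0.5):
    """Sum the area of pixels whose scar probability exceeds a cutoff."""
    return sum(pixel_area_mm2 for row in prob_map for p in row if p > threshold)

toy_map = [
    [0.1, 0.7, 0.9],
    [0.2, 0.6, 0.4],
]
print(tpm_injury_size(toy_map))  # 3 pixels above 0.5 -> 3.0
```

The longitudinal comparison in the study then amounts to tracking how this probability mass shifts between time points.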

Comparison of lumbar disc degeneration grading between deep learning model SpineNet and radiologist: a longitudinal study with a 14-year follow-up.

Murto N, Lund T, Kautiainen H, Luoma K, Kerttula L

pubmed · May 15, 2025
To assess the agreement between lumbar disc degeneration (DD) grading by the convolutional neural network model SpineNet and radiologists' visual grading. In a 14-year follow-up MRI study of 19 male volunteers, lumbar DD was assessed by SpineNet and two radiologists using the Pfirrmann classification at baseline (age 37) and after 14 years (age 51). Pfirrmann summary scores (PSS) were calculated by summing individual disc grades. The agreement between the first radiologist and SpineNet was analyzed, with the second radiologist's grading used for inter-observer agreement. Significant differences were observed in the Pfirrmann grades and PSS assigned by the radiologist and SpineNet at both time points. SpineNet assigned Pfirrmann grade 1 to several discs and grade 5 to more discs than the radiologists did. The concordance correlation coefficients (CCC) of PSS between the radiologist and SpineNet were 0.54 (95% CI: 0.28 to 0.79) at baseline and 0.54 (0.27 to 0.80) at follow-up. The average kappa (κ) values were 0.74 (0.68 to 0.81) at baseline and 0.68 (0.58 to 0.77) at follow-up. The CCC of PSS between the radiologists was 0.83 (0.69 to 0.97) at baseline and 0.78 (0.61 to 0.95) at follow-up, with κ values ranging from 0.73 to 0.96. We found fair to substantial agreement in DD grading between SpineNet and the radiologist, albeit with notable discrepancies. These findings indicate that AI-based systems like SpineNet hold promise as complementary tools in radiological evaluation, including in longitudinal studies, but emphasize the need for ongoing refinement of AI algorithms.
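The agreement statistic reported here, Lin's concordance correlation coefficient, is simple to compute from paired summary scores; a minimal sketch (the score vectors are toy values, not the study data):

```python
# Illustrative computation of Lin's concordance correlation coefficient (CCC),
# which penalizes both poor correlation and systematic shifts between raters.
from statistics import fmean

def ccc(x, y):
    """Lin's concordance correlation coefficient for paired ratings."""
    mx, my = fmean(x), fmean(y)
    n = len(x)
    vx = sum((a - mx) ** 2 for a in x) / n          # population variance
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

scores = [1, 2, 3, 4, 5]
print(ccc(scores, scores))                       # 1.0 — perfect agreement
print(ccc(scores, [s + 1 for s in scores]))      # 0.8 — perfectly correlated but shifted
```

The second call shows why CCC sits below Pearson's r when one rater grades systematically higher, as SpineNet did here for grade 5.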

Machine learning prediction prior to onset of mild cognitive impairment using T1-weighted magnetic resonance imaging radiomic of the hippocampus.

Zhan S, Wang J, Dong J, Ji X, Huang L, Zhang Q, Xu D, Peng L, Wang X, Zhang Y, Liang S, Chen L

pubmed · May 15, 2025
Early identification of individuals who progress from normal cognition (NC) to mild cognitive impairment (MCI) may help prevent cognitive decline. We aimed to build predictive models using radiomic features of the bilateral hippocampus in combination with scores from neuropsychological assessments. Using the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, we studied 175 NC individuals, 50 of whom progressed to MCI within seven years. We extracted hippocampal radiomic features from T1-weighted images and refined them with the Least Absolute Shrinkage and Selection Operator (LASSO). Classification models, including logistic regression (LR), support vector machine (SVM), random forest (RF), and Light Gradient Boosting Machine (LightGBM), were built based on significant neuropsychological scores. Model validation was conducted using 5-fold cross-validation, and hyperparameters were optimized with Scikit-learn, using an 80:20 data split for training and testing. The LightGBM model achieved an area under the receiver operating characteristic (ROC) curve (AUC) of 0.89 and an accuracy of 0.79 in the training set, and an AUC of 0.80 and an accuracy of 0.74 in the test set. These findings suggest that hippocampal radiomics on T1-weighted MRI can predict progression to MCI while individuals are still cognitively normal, which may provide new insight for clinical research.
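The AUC values reported here have a direct probabilistic reading: the chance that a random converter is scored higher than a random non-converter. A self-contained sketch of that computation (scores and labels are toy values, not ADNI data):

```python
# Hedged sketch: AUC computed as the Mann-Whitney probability that a positive
# case outranks a negative case, with ties counted as half.

def auc(scores, labels):
    """AUC = P(score_pos > score_neg) + 0.5 * P(tie), over all pos/neg pairs."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # 1.0 — perfect separation
```

An AUC of 0.80 on the test set, as reported, means a converter outranks a non-converter in roughly four of five random pairings.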

MIMI-ONET: Multi-Modal image augmentation via Butterfly Optimized neural network for Huntington Disease Detection.

Amudaria S, Jawhar SJ

pubmed · May 15, 2025
Huntington's disease (HD) is a chronic neurodegenerative disorder marked by cognitive decline, motor impairment, and psychiatric symptoms. Existing HD detection methods, however, struggle with limited annotated datasets, which restricts their generalization performance. This work proposes MIMI-ONET, a novel framework for detection of HD using augmented multi-modal brain MRI images. The two-dimensional stationary wavelet transform (2DSWT) decomposes the MRI images into frequency wavelet sub-bands, which are then enhanced with Contrast Stretching Adaptive Histogram Equalization (CSAHE) and Multi-scale Adaptive Retinex (MSAR) to reduce irrelevant distortions. MIMI-ONET introduces a Hepta Generative Adversarial Network (Hepta-GAN) that generates noise-free HD images at seven azimuth angles (45°, 90°, 135°, 180°, 225°, 270°, 315°). Hepta-GAN incorporates an Affine Estimation Module (AEM) that extracts multi-scale features with dilated convolutional layers for efficient HD image generation, and its parameters are balanced with the Butterfly Optimization (BO) algorithm to enhance augmentation performance. Finally, the generated images are fed to a deep neural network (DNN) to classify normal control (NC), adult-onset HD (AHD), and juvenile HD (JHD) cases. MIMI-ONET is evaluated with precision, specificity, F1 score, recall, accuracy, PSNR, and MSE. On the gathered Image-HD dataset, the proposed MIMI-ONET attains an accuracy of 98.85% and a PSNR of 48.05, improving overall accuracy by 9.96%, 1.85%, 5.91%, 13.80%, and 13.5% over 3DCNN, KNN, FCN, RNN, and ML frameworks, respectively.
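The simplest ingredient of the enhancement stage described above is contrast stretching; a minimal sketch of min-max stretching (the full CSAHE/MSAR pipeline is far more involved, and the 8-bit range and toy intensities here are assumptions):

```python
# Minimal sketch of min-max contrast stretching: linearly remap the observed
# intensity range onto the full display range. This is only the simplest
# component of the contrast-enhancement step described in the abstract.

def stretch(pixels, lo=0, hi=255):
    """Linearly map the observed intensity range onto [lo, hi]."""
    pmin, pmax = min(pixels), max(pixels)
    if pmax == pmin:                     # flat image: nothing to stretch
        return [lo] * len(pixels)
    scale = (hi - lo) / (pmax - pmin)
    return [round(lo + (p - pmin) * scale) for p in pixels]

print(stretch([10, 20, 30, 40]))  # [0, 85, 170, 255]
```

Adaptive histogram equalization applies a related remapping locally, per tile, rather than once globally.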

An Annotated Multi-Site and Multi-Contrast Magnetic Resonance Imaging Dataset for the study of the Human Tongue Musculature.

Ribeiro FL, Zhu X, Ye X, Tu S, Ngo ST, Henderson RD, Steyn FJ, Kiernan MC, Barth M, Bollmann S, Shaw TB

pubmed · May 14, 2025
This dataset provides the first annotated, openly available MRI-based imaging dataset for investigations of tongue musculature, including multi-contrast and multi-site MRI data from non-disease participants. The present dataset includes 47 participants collated from three studies: BeLong (four participants; T2-weighted images), EATT4MND (19 participants; T2-weighted images), and BMC (24 participants; T1-weighted images). We provide manually corrected segmentations of five key tongue muscles, labeled as the superior longitudinal, combined transverse/vertical (two muscles merged into a single label), genioglossus, and inferior longitudinal muscles. Other phenotypic measures, including age, sex, weight, height, and tongue muscle volume, are also available. This dataset will benefit researchers across domains interested in the structure and function of the tongue in health and disease. For instance, researchers can use this data to train new machine learning models for tongue segmentation, which can be leveraged to segment and track the different tongue muscles engaged in speech formation in health and disease. Altogether, this dataset gives the scientific community the means to investigate the intricate tongue musculature and its role in physiological processes and speech production.

Fed-ComBat: A Generalized Federated Framework for Batch Effect Harmonization in Collaborative Studies

Silva, S., Lorenzi, M., Altmann, A., Oxtoby, N.

biorxiv preprint · May 14, 2025
In neuroimaging research, the utilization of multi-centric analyses is crucial for obtaining sufficient sample sizes and representative clinical populations. Data harmonization techniques are typically part of the pipeline in multi-centric studies to address systematic biases and ensure the comparability of the data. However, most multi-centric studies require centralized data, which may result in exposing individual patient information. This poses a significant challenge in data governance, leading to the implementation of regulations such as the GDPR and the CCPA, which attempt to address these concerns but also hinder data access for researchers. Federated learning offers a privacy-preserving alternative approach in machine learning, enabling models to be collaboratively trained on decentralized data without the need for data centralization or sharing. In this paper, we present Fed-ComBat, a federated framework for batch effect harmonization on decentralized data. Fed-ComBat extends existing centralized linear methods, such as ComBat and its distributed counterpart d-ComBat, as well as nonlinear approaches like ComBat-GAM, to account for potentially nonlinear and multivariate covariate effects. By doing so, Fed-ComBat preserves nonlinear covariate effects without requiring centralization of data and without prior knowledge of which variables should be considered nonlinear or of their interactions, differentiating it from ComBat-GAM. We assessed Fed-ComBat and existing approaches on simulated data and multiple cohorts comprising healthy controls (CN) and subjects with various disorders such as Parkinson's disease (PD), Alzheimer's disease (AD), and autism spectrum disorder (ASD). The results of our study show that Fed-ComBat performs better than centralized ComBat when dealing with nonlinear effects and is on par with centralized methods like ComBat-GAM.
Through experiments using synthetic data, Fed-ComBat demonstrates a superior ability to reconstruct the target unbiased function, achieving a 35% improvement (RMSE=0.5952) compared to d-ComBat (RMSE=0.9162) and a 12% improvement compared to d-ComBat-GAM (RMSE=0.6751), our proposed federated version of ComBat-GAM. Additionally, Fed-ComBat achieves results comparable to centralized methods like ComBat-GAM for MRI-derived phenotypes without requiring prior knowledge of potential nonlinearities.
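The location-scale idea at the heart of the ComBat family can be sketched in a few lines: each batch is standardized and re-expressed on the pooled scale. This toy version omits the empirical-Bayes shrinkage, covariate adjustment, and federation that distinguish the real methods:

```python
# Simplified, non-federated sketch of ComBat-style location-scale harmonization.
# Real ComBat additionally shrinks batch parameters via empirical Bayes and
# preserves covariate effects; this toy version only aligns means and scales.
from statistics import fmean, pstdev

def harmonize(values, batches):
    """Remove per-batch mean/scale differences relative to the pooled data."""
    pooled_m, pooled_s = fmean(values), pstdev(values)
    out = []
    for v, b in zip(values, batches):
        grp = [x for x, g in zip(values, batches) if g == b]
        m, s = fmean(grp), pstdev(grp)
        z = (v - m) / s if s else 0.0        # standardize within batch
        out.append(pooled_m + z * pooled_s)  # re-express on pooled scale
    return out

vals = [1.0, 2.0, 3.0, 11.0, 12.0, 13.0]     # site B shifted by +10
sites = ["A", "A", "A", "B", "B", "B"]
adj = harmonize(vals, sites)
print(fmean(adj[:3]), fmean(adj[3:]))        # both site means now equal 7.0
```

In the federated setting, the per-batch means and variances stay at each site and only these summary statistics are exchanged, which is what makes harmonization possible without pooling patient data.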

A multi-layered defense against adversarial attacks in brain tumor classification using ensemble adversarial training and feature squeezing.

Yinusa A, Faezipour M

pubmed · May 14, 2025
Deep learning, particularly convolutional neural networks (CNNs), has proven valuable for brain tumor classification, aiding diagnostic and therapeutic decisions in medical imaging. Despite their accuracy, these models are vulnerable to adversarial attacks, compromising their reliability in clinical settings. In this research, we utilized a VGG16-based CNN model to classify brain tumors, achieving 96% accuracy on clean magnetic resonance imaging (MRI) data. To assess robustness, we exposed the model to Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks, which reduced accuracy to 32% and 13%, respectively. We then applied a multi-layered defense strategy, including adversarial training with FGSM and PGD examples and feature squeezing techniques such as bit-depth reduction and Gaussian blurring. This approach improved model resilience, achieving 54% accuracy on FGSM and 47% on PGD adversarial examples. Our results highlight the importance of proactive defense strategies for maintaining the reliability of AI in medical imaging under adversarial conditions.
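One of the feature-squeezing defenses named above, bit-depth reduction, is easy to illustrate: quantizing inputs to fewer intensity levels erases the small perturbations that FGSM/PGD rely on. A minimal sketch (inputs in [0, 1] are assumed; the attack itself needs a model and is omitted):

```python
# Sketch of the bit-depth-reduction "feature squeezing" defense: quantize each
# input value to 2**bits levels so that tiny adversarial nudges collapse back
# onto the same quantized value as the clean input.

def squeeze_bits(x, bits=4):
    """Quantize each value in [0, 1] to 2**bits levels."""
    levels = 2 ** bits - 1
    return [round(v * levels) / levels for v in x]

clean = [0.50, 0.25]
perturbed = [0.51, 0.26]                  # small adversarial nudge
print(squeeze_bits(clean, 4) == squeeze_bits(perturbed, 4))  # True — nudge removed
```

The trade-off, reflected in the paper's partial accuracy recovery, is that squeezing also discards some legitimate image detail.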

Deep learning for cerebral vascular occlusion segmentation: A novel ConvNeXtV2 and GRN-integrated U-Net framework for diffusion-weighted imaging.

Ince S, Kunduracioglu I, Algarni A, Bayram B, Pacal I

pubmed · May 14, 2025
Cerebral vascular occlusion is a serious condition that can lead to stroke and permanent neurological damage due to insufficient oxygen and nutrients reaching brain tissue. Early diagnosis and accurate segmentation are critical for effective treatment planning. Due to its high soft-tissue contrast, Magnetic Resonance Imaging (MRI) is commonly used to detect such occlusions, as in ischemic stroke. However, challenges such as low contrast, noise, and heterogeneous lesion structures in MRI images complicate manual segmentation and often lead to misinterpretations. As a result, deep learning-based Computer-Aided Diagnosis (CAD) systems are essential for faster and more accurate diagnosis and treatment methods, although they can sometimes face challenges such as high computational costs and difficulties in segmenting small or irregular lesions. This study proposes a novel U-Net architecture enhanced with ConvNeXtV2 blocks and GRN-based Multi-Layer Perceptrons (MLP) to address these challenges in cerebral vascular occlusion segmentation. This is the first application of ConvNeXtV2 in this domain. The proposed model significantly improves segmentation accuracy, even in low-contrast regions, while maintaining high computational efficiency, which is crucial for real-world clinical applications. To reduce false positives and improve overall accuracy, small lesions (≤5 pixels) were removed in the preprocessing step with the support of expert clinicians. Experimental results on the ISLES 2022 dataset showed superior performance with an Intersection over Union (IoU) of 0.8015 and a Dice coefficient of 0.8894. Comparative analyses indicate that the proposed model achieves higher segmentation accuracy than existing U-Net variants and other methods, offering a promising solution for clinical use.
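The two overlap metrics reported here, IoU and Dice, come directly from the predicted and ground-truth masks; an illustrative computation on flattened binary masks (the masks below are toy examples):

```python
# Illustrative computation of Intersection over Union (IoU) and the Dice
# coefficient from two equal-length flattened binary masks.

def iou_dice(pred, truth):
    """Return (IoU, Dice) for two equal-length binary masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    iou = inter / union if union else 1.0
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    return iou, dice

print(iou_dice([1, 1, 0, 0], [1, 0, 1, 0]))  # (0.333..., 0.5)
```

Dice is always at least as large as IoU (Dice = 2·IoU/(1+IoU)), which is why the paper's Dice of 0.8894 sits above its IoU of 0.8015.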

A fully automatic radiomics pipeline for postoperative facial nerve function prediction of vestibular schwannoma.

Song G, Li K, Wang Z, Liu W, Xue Q, Liang J, Zhou Y, Geng H, Liu D

pubmed · May 14, 2025
Vestibular schwannoma (VS) is the most prevalent intracranial schwannoma. Surgery is one of the options for the treatment of VS, with the preservation of facial nerve (FN) function being the primary objective. Therefore, postoperative FN function prediction is essential. However, achieving automation for such a method remains a challenge. In this study, we proposed a fully automatic deep learning approach based on multi-sequence magnetic resonance imaging (MRI) to predict FN function after surgery in VS patients. We first developed a segmentation network, 2.5D Trans-UNet, which combined Transformer and U-Net to optimize contour segmentation for radiomic feature extraction. Next, we built a deep learning network integrating a 1D Convolutional Neural Network (1DCNN) and a Gated Recurrent Unit (GRU) to predict postoperative FN function using the extracted features. We trained and tested the 2.5D Trans-UNet segmentation network on public and private datasets, achieving accuracies of 89.51% and 90.66%, respectively, confirming the model's strong performance. Feature extraction and selection were then performed on the private dataset's segmentation results from 2.5D Trans-UNet, and the selected features were used to train the 1DCNN-GRU network for classification. The results showed that our proposed fully automatic radiomics pipeline outperformed the traditional radiomics pipeline on the test set, achieving an accuracy of 88.64% and demonstrating its effectiveness in predicting postoperative FN function in VS patients. Our proposed automatic method has the potential to become a valuable decision-making tool in neurosurgery, assisting neurosurgeons in making more informed decisions regarding surgical interventions and improving the treatment of VS patients.
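The feature-extraction step in a pipeline like this starts with first-order statistics of the intensities inside the segmented contour; a hedged sketch of a few such descriptors (real radiomics toolkits add shape and texture families, and the ROI values below are toy data):

```python
# Hedged sketch of first-order radiomic features computed over a segmented
# region of interest (ROI). Full radiomics pipelines extract many more
# descriptors; these toy intensities are for illustration only.
from statistics import fmean, pstdev
import math

def first_order_features(roi):
    """A few first-order radiomic descriptors of an intensity list."""
    n = len(roi)
    counts = {v: roi.count(v) for v in set(roi)}
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {"mean": fmean(roi), "std": pstdev(roi),
            "range": max(roi) - min(roi), "entropy": entropy}

feats = first_order_features([10, 10, 20, 30])
print(feats["mean"], feats["entropy"])  # 17.5 and 1.5 bits
```

Vectors of such features, one per patient, are what the downstream 1DCNN-GRU classifier consumes.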

Large language models for efficient whole-organ MRI score-based reports and categorization in knee osteoarthritis.

Xie Y, Hu Z, Tao H, Hu Y, Liang H, Lu X, Wang L, Li X, Chen S

pubmed · May 14, 2025
To evaluate the performance of large language models (LLMs) in automatically generating whole-organ MRI score (WORMS)-based structured MRI reports and predicting osteoarthritis (OA) severity for the knee. A total of 160 consecutive patients suspected of OA were included. Knee MRI reports were reviewed by three radiologists to establish the WORMS reference standard for 39 key features. GPT-4o and GPT-4o mini were prompted using in-context knowledge (ICK) and chain-of-thought (COT) to generate WORMS-based structured reports from original reports and to automatically predict OA severity. Four orthopedic surgeons reviewed original and LLM-generated reports to conduct pairwise preference and difficulty tests, and their review times were recorded. GPT-4o demonstrated perfect performance in extracting the laterality of the knee (accuracy = 100%). GPT-4o outperformed GPT-4o mini in generating WORMS reports (accuracy: 93.9% vs 76.2%). GPT-4o achieved higher recall (87.3% vs 46.7%, p < 0.001) while maintaining higher precision compared to GPT-4o mini (94.2% vs 71.2%, p < 0.001). For predicting OA severity, GPT-4o outperformed GPT-4o mini across all prompt strategies (best accuracy: 98.1% vs 68.7%). Surgeons found it easier to extract information from, and gave more preference to, LLM-generated reports over the original reports (both p < 0.001) while spending less time on each report (51.27 ± 9.41 vs 87.42 ± 20.26 s, p < 0.001). GPT-4o generated expert multi-feature, WORMS-based reports from original free-text knee MRI reports. GPT-4o with COT achieved high accuracy in categorizing OA severity. Surgeons reported greater preference and higher efficiency when using LLM-generated reports. The perfect performance in generating WORMS-based reports and the high efficiency and ease of use suggest that integrating LLMs into clinical workflows could greatly enhance productivity and alleviate the documentation burden faced by clinicians in knee OA.
GPT-4o successfully generated WORMS-based knee MRI reports. GPT-4o with COT prompting achieved impressive accuracy in categorizing knee OA severity. Greater preference and higher efficiency were reported for LLM-generated reports.
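The report-structuring step described above boils down to prompting a model for WORMS features and parsing a structured reply. A hypothetical sketch of that scaffolding: the prompt wording, feature names, and sample reply are invented for illustration, and the actual GPT-4o API call is omitted:

```python
# Hypothetical scaffolding for WORMS report structuring: build a chain-of-thought
# prompt that asks an LLM for features as JSON, then parse the JSON out of a
# possibly chatty reply. Prompt text and keys are illustrative assumptions.
import json

def build_prompt(report_text):
    return ("You are a musculoskeletal radiologist. Think step by step, then "
            "output WORMS features for this knee MRI report as JSON with keys "
            "'laterality' and 'cartilage_score'.\n\nReport:\n" + report_text)

def parse_reply(reply):
    """Extract the first-to-last brace span of the reply and decode it as JSON."""
    start, end = reply.index("{"), reply.rindex("}") + 1
    return json.loads(reply[start:end])

sample_reply = 'Reasoning: ... Final answer: {"laterality": "left", "cartilage_score": 3}'
print(parse_reply(sample_reply))  # {'laterality': 'left', 'cartilage_score': 3}
```

Robust pipelines also validate the parsed keys and value ranges against the WORMS schema before accepting a model's output.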