FlowMRI-Net: A Generalizable Self-Supervised 4D Flow MRI Reconstruction Network.

Jacobs L, Piccirelli M, Vishnevskiy V, Kozerke S

PubMed · May 16 2025
Image reconstruction from highly undersampled 4D flow MRI data can be very time-consuming and may result in significant underestimation of velocities depending on regularization, thereby limiting the applicability of the method. The objective of the present work was to develop a generalizable self-supervised deep learning-based framework for fast and accurate reconstruction of highly undersampled 4D flow MRI and to demonstrate the utility of the framework for aortic and cerebrovascular applications. The proposed deep learning-based framework, called FlowMRI-Net, employs physics-driven unrolled optimization using a complex-valued convolutional recurrent neural network and is trained in a self-supervised manner. The generalizability of the framework is evaluated using aortic and cerebrovascular 4D flow MRI acquisitions acquired on systems from two different vendors for various undersampling factors (R=8, 16, 24) and compared to compressed sensing (CS-LLR) reconstructions. Evaluation includes an ablation study and a qualitative and quantitative analysis of image and velocity magnitudes. FlowMRI-Net outperforms CS-LLR for aortic 4D flow MRI reconstruction, resulting in significantly lower vectorial normalized root mean square error and mean directional errors for velocities in the thoracic aorta. Furthermore, the feasibility of FlowMRI-Net's generalizability is demonstrated for cerebrovascular 4D flow MRI reconstruction. Reconstruction times ranged from 3 to 7 minutes on commodity CPU/GPU hardware. FlowMRI-Net enables fast and accurate reconstruction of highly undersampled aortic and cerebrovascular 4D flow MRI, with possible applications to other vascular territories.
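The sketch below illustrates the general idea of physics-driven unrolled optimization for undersampled MRI reconstruction. It is a minimal single-coil PyTorch example, not the authors' FlowMRI-Net; the network UnrolledRecon, its layer sizes, and the simple Cartesian sampling model are assumptions made for illustration.

    # Illustrative sketch of physics-driven unrolled MRI reconstruction (not the
    # authors' FlowMRI-Net). Assumes a single-coil Cartesian model; the learned
    # regularizer is a small real-valued CNN over 2-channel (real/imag) images.
    import torch
    import torch.nn as nn

    class UnrolledRecon(nn.Module):
        def __init__(self, n_iters: int = 5, step: float = 0.5):
            super().__init__()
            self.n_iters = n_iters
            self.step = nn.Parameter(torch.tensor(step))
            # One small CNN shared across iterations acts as the learned regularizer.
            self.reg = nn.Sequential(
                nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 2, 3, padding=1),
            )

        def forward(self, kspace: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
            # kspace: complex64 [B, H, W] undersampled measurements; mask: [B, H, W]
            x = torch.fft.ifft2(kspace)               # zero-filled initial image
            for _ in range(self.n_iters):
                # Data-consistency gradient: A^H (A x - y) with A = mask * FFT
                resid = mask * torch.fft.fft2(x) - kspace
                x = x - self.step * torch.fft.ifft2(mask * resid)
                # Learned regularization step on real/imag channels
                xr = torch.stack([x.real, x.imag], dim=1)
                xr = xr + self.reg(xr)
                x = torch.complex(xr[:, 0], xr[:, 1])
            return x

    # Toy usage: one 64x64 slice with a random 4x undersampling mask.
    y = torch.randn(1, 64, 64, dtype=torch.complex64)
    m = (torch.rand(1, 64, 64) < 0.25).float()
    recon = UnrolledRecon()(y * m, m)

Self-supervised training in this setting would typically split the acquired k-space samples into disjoint subsets and enforce consistency between them, avoiding the need for fully sampled reference data.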

GOUHFI: a novel contrast- and resolution-agnostic segmentation tool for Ultra-High Field MRI

Marc-Antoine Fortin, Anne Louise Kristoffersen, Michael Staff Larsen, Laurent Lamalle, Ruediger Stirnberg, Paal Erik Goa

arXiv preprint · May 16 2025
Recently, Ultra-High Field MRI (UHF-MRI) has become more widely available and is one of the best tools to study the brain. A common step in quantitative neuroimaging is brain segmentation. However, the differences between UHF-MRI and 1.5-3T images are such that automatic segmentation techniques optimized at these lower field strengths usually produce unsatisfactory results for UHF images. This has made it particularly challenging to perform the quantitative analyses typically done with 1.5-3T data, considerably limiting the potential of UHF-MRI. Hence, we propose a novel Deep Learning (DL)-based segmentation technique called GOUHFI: Generalized and Optimized segmentation tool for Ultra-High Field Images, designed to segment UHF images of various contrasts and resolutions. For training, we used a total of 206 label maps from four datasets acquired at 3T, 7T and 9.4T. In contrast to most DL strategies, we used a previously proposed domain randomization approach, in which synthetic images generated from the label maps are used to train a 3D U-Net. GOUHFI was tested on seven different datasets and compared to techniques such as FastSurferVINN and CEREBRUM-7T. GOUHFI was able to segment the six contrasts and seven resolutions tested at 3T, 7T and 9.4T. Average Dice-Sorensen Similarity Coefficient (DSC) scores of 0.87, 0.84, and 0.91 were computed against the ground truth segmentations at 3T, 7T and 9.4T, respectively. Moreover, GOUHFI demonstrated impressive resistance to the intensity inhomogeneities typical of UHF-MRI, making it a powerful new segmentation tool that allows the usual quantitative analysis pipelines to be applied at UHF as well. Ultimately, GOUHFI is a promising new segmentation tool, the first of its kind to offer a contrast- and resolution-agnostic alternative for UHF-MRI, and a strong candidate for neuroscientists working with UHF-MRI or even lower field strengths.
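A minimal sketch of the domain randomization idea follows: synthetic images are generated from a label map by assigning random intensities per structure and adding blur and a bias field, so the downstream segmentation network never sees a fixed contrast. This is an illustration under assumed parameters, not the authors' pipeline; the function synthesize_from_labels and its ranges are hypothetical.

    # Minimal sketch of domain-randomized synthesis from label maps (not the
    # authors' code): random per-label intensities, random blur, and a smooth
    # bias field mimic arbitrary, unseen contrasts for training a 3D U-Net.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def synthesize_from_labels(label_map: np.ndarray, rng: np.random.Generator) -> np.ndarray:
        image = np.zeros(label_map.shape, dtype=np.float32)
        for lab in np.unique(label_map):
            # Random mean/std per structure mimics an arbitrary contrast (assumed ranges).
            mean, std = rng.uniform(0.0, 1.0), rng.uniform(0.01, 0.1)
            mask = label_map == lab
            image[mask] = rng.normal(mean, std, size=mask.sum())
        image = gaussian_filter(image, sigma=rng.uniform(0.0, 1.0))          # random blur
        bias = gaussian_filter(rng.normal(1.0, 0.3, label_map.shape), sigma=20)
        return np.clip(image * bias, 0.0, None)                              # smooth bias field

    rng = np.random.default_rng(0)
    fake_labels = rng.integers(0, 5, size=(64, 64, 64))
    synthetic = synthesize_from_labels(fake_labels, rng)   # paired with fake_labels for training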

Development and validation of clinical-radiomics deep learning model based on MRI for endometrial cancer molecular subtypes classification.

Yue W, Han R, Wang H, Liang X, Zhang H, Li H, Yang Q

PubMed · May 16 2025
This study aimed to develop and validate a clinical-radiomics deep learning (DL) model based on MRI for endometrial cancer (EC) molecular subtype classification. This multicenter retrospective study included EC patients undergoing surgery, MRI, and molecular pathology diagnosis across three institutions from January 2020 to March 2024. Patients were divided into training, internal, and external validation cohorts. A total of 386 handcrafted radiomics features were extracted from each MR sequence, and MoCo-v2 was employed for contrastive self-supervised learning to extract 2048 DL features per patient. After feature selection, the selected features were integrated into 12 machine learning methods. Model performance was evaluated with the AUC. A total of 526 patients were included (mean age, 55.01 ± 11.07 years). The radiomics model and clinical model demonstrated comparable performance across the internal and external validation cohorts, with macro-average AUCs of 0.70 vs 0.69 and 0.70 vs 0.67 (p = 0.51), respectively. The radiomics DL model, compared to the radiomics model, improved AUCs for POLEmut (0.68 vs 0.79), NSMP (0.71 vs 0.74), and p53abn (0.76 vs 0.78) in the internal validation cohort (p = 0.08). The clinical-radiomics DL model outperformed both the clinical model and the radiomics DL model (macro-average AUC = 0.79 vs 0.69 and 0.73 in the internal validation [p = 0.02], and 0.74 vs 0.67 and 0.69 in the external validation [p = 0.04]). The clinical-radiomics DL model based on MRI effectively distinguished EC molecular subtypes and demonstrated strong potential, with robust validation across multiple centers. Future research should explore larger datasets to further uncover DL's potential. Our clinical-radiomics DL model based on MRI has the potential to distinguish EC molecular subtypes. This insight aids clinicians in tailoring individualized treatments for EC patients. Accurate classification of EC molecular subtypes is crucial for prognostic risk assessment. The clinical-radiomics DL model outperformed both the clinical model and the radiomics DL model. The MRI features exhibited better diagnostic performance for POLEmut and p53abn.
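As a rough illustration of how handcrafted radiomics, self-supervised DL features, and clinical variables can be fused for multi-class subtype prediction with a macro-average AUC, a scikit-learn sketch on synthetic data follows. The feature dimensions match those stated in the abstract, but the clinical variables, the selector, and the single classifier are assumptions, not the authors' 12-method pipeline.

    # Sketch of clinical + radiomics + deep-feature fusion for multi-class
    # subtype prediction (assumed implementation, synthetic data).
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    radiomics = rng.normal(size=(526, 386))      # 386 handcrafted features per patient
    deep = rng.normal(size=(526, 2048))          # 2048 MoCo-v2 features per patient
    clinical = rng.normal(size=(526, 5))         # stand-in clinical variables (assumed)
    X = np.hstack([clinical, radiomics, deep])
    y = rng.integers(0, 4, size=526)             # 4 molecular subtypes

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    model = make_pipeline(StandardScaler(),
                          SelectKBest(f_classif, k=64),
                          LogisticRegression(max_iter=2000))
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)
    print("macro-average AUC:", roc_auc_score(y_te, proba, multi_class="ovr", average="macro"))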

Texture-based probability mapping for automatic assessment of myocardial injury in late gadolinium enhancement images after revascularized STEMI.

Frøysa V, Berg GJ, Singsaas E, Eftestøl T, Woie L, Ørn S

PubMed · May 15 2025
Late gadolinium enhancement cardiac magnetic resonance imaging (LGE-CMR) is the gold standard for assessing myocardial infarction (MI) size. Texture-based probability mapping (TPM) is a novel machine learning-based analysis of LGE images of myocardial injury. The ability of TPM to assess acute myocardial injury has not been determined. This proof-of-concept study aimed to determine how TPM responds to the dynamic changes in myocardial injury during one-year follow-up after a first-time revascularized acute MI. Forty-one patients with first-time acute ST-elevation MI and single-vessel occlusion underwent successful PCI. LGE-CMR images were obtained 2 days, 1 week, 2 months, and 1 year following MI. TPM size was compared with manual LGE-CMR-based MI size, LV remodeling, and biomarkers. TPM size remained larger than MI size by LGE-CMR at all time points, decreasing from 2 days to 2 months (p < 0.001) but increasing from 2 months to 1 year (p < 0.01). TPM correlated strongly with peak troponin T (p < 0.001) and NT-proBNP (p < 0.001). At 1 week, 2 months, and 1 year, TPM showed a stronger correlation with NT-proBNP than MI size by LGE-CMR. Analyzing all collected pixels from 2 months to 1 year revealed a general increase in pixel scar probability in both the infarcted and non-infarcted regions. This proof-of-concept study suggests that TPM may offer additional insights into myocardial alterations in both infarcted and non-infarcted regions following acute MI. These findings indicate a potential role for TPM in assessing the overall myocardial response to infarction and the subsequent healing and remodeling process.
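The abstract does not describe the TPM algorithm itself, so the following is only a hypothetical sketch of a texture-based probability mapping workflow: local texture features are computed per pixel and a trained classifier assigns each pixel a scar probability, from which a TPM size can be derived. The features, the classifier choice, and the threshold are all assumptions.

    # Hypothetical sketch of a texture-based probability mapping (TPM) style
    # analysis on a toy LGE slice; not the published method.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def pixel_texture_features(image: np.ndarray, radius: int = 2) -> np.ndarray:
        """Mean/std/range of the local neighbourhood around every pixel."""
        padded = np.pad(image, radius, mode="edge")
        feats = []
        for i in range(image.shape[0]):
            for j in range(image.shape[1]):
                patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                feats.append([patch.mean(), patch.std(), patch.max() - patch.min()])
        return np.asarray(feats)

    rng = np.random.default_rng(0)
    lge = rng.normal(size=(32, 32))                      # stand-in LGE slice
    labels = (lge > 1.0).astype(int).ravel()             # stand-in scar labels
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(pixel_texture_features(lge), labels)
    prob_map = clf.predict_proba(pixel_texture_features(lge))[:, 1].reshape(lge.shape)
    tpm_size = (prob_map > 0.5).sum()                    # pixels counted toward TPM size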

Machine learning prediction prior to onset of mild cognitive impairment using T1-weighted magnetic resonance imaging radiomic of the hippocampus.

Zhan S, Wang J, Dong J, Ji X, Huang L, Zhang Q, Xu D, Peng L, Wang X, Zhang Y, Liang S, Chen L

PubMed · May 15 2025
Early identification of individuals who progress from normal cognition (NC) to mild cognitive impairment (MCI) may help prevent cognitive decline. We aimed to build predictive models using radiomic features of the bilateral hippocampus in combination with scores from neuropsychological assessments. We utilized the Alzheimer's Disease Neuroimaging Initiative (ADNI) database to study 175 NC individuals, identifying 50 who progressed to MCI within seven years. Hippocampal radiomic features were extracted from T1-weighted images and refined using the Least Absolute Shrinkage and Selection Operator (LASSO). Classification models, including Logistic Regression (LR), Support Vector Machine (SVM), Random Forest (RF), and Light Gradient Boosting Machine (LightGBM), were built based on the selected features and significant neuropsychological scores. Model validation was conducted using 5-fold cross-validation, and hyperparameters were optimized with Scikit-learn, using an 80:20 data split for training and testing. We found that the LightGBM model achieved an area under the receiver operating characteristic (ROC) curve (AUC) of 0.89 and an accuracy of 0.79 in the training set, and an AUC of 0.80 and an accuracy of 0.74 in the test set. The study suggests that T1-weighted MRI radiomics of the hippocampus can be used to predict progression to MCI at the normal cognitive stage, which might provide new insight into clinical research.
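A minimal sketch of the described workflow follows, with assumed implementation details: LASSO-based feature selection, an 80:20 split, LightGBM classification, and 5-fold cross-validation on synthetic stand-in data.

    # Sketch of LASSO feature selection + LightGBM with 5-fold CV (assumed details).
    import numpy as np
    from lightgbm import LGBMClassifier
    from sklearn.linear_model import LassoCV
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import cross_val_score, train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(175, 200))          # hippocampal radiomic + neuropsych features (toy)
    y = np.r_[np.ones(50), np.zeros(125)]    # 50 progressors to MCI, 125 stable NC

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
    selected = np.abs(LassoCV(cv=5).fit(X_tr, y_tr).coef_) > 0
    if not selected.any():                   # guard for this toy data
        selected = np.ones(X.shape[1], dtype=bool)

    clf = LGBMClassifier(n_estimators=200, learning_rate=0.05)
    print("5-fold CV AUC:", cross_val_score(clf, X_tr[:, selected], y_tr, cv=5, scoring="roc_auc").mean())
    clf.fit(X_tr[:, selected], y_tr)
    print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te[:, selected])[:, 1]))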

MIMI-ONET: Multi-Modal image augmentation via Butterfly Optimized neural network for Huntington Disease Detection.

Amudaria S, Jawhar SJ

PubMed · May 15 2025
Huntington's disease (HD) is a chronic neurodegenerative disorder characterized by cognitive decline, motor impairment, and psychiatric symptoms. However, existing HD detection methods struggle with limited annotated datasets, which restricts their generalization performance. This work proposes MIMI-ONET, a novel framework for primary detection of HD using augmented multi-modal brain MRI images. The two-dimensional stationary wavelet transform (2DSWT) decomposes the MRI images into different frequency wavelet sub-bands. These sub-bands are enhanced with Contrast Stretching Adaptive Histogram Equalization (CSAHE) and Multi-scale Adaptive Retinex (MSAR), reducing irrelevant distortions. The proposed MIMI-ONET introduces a Hepta Generative Adversarial Network (Hepta-GAN) to generate noise-free HD images at seven azimuth angles (45°, 90°, 135°, 180°, 225°, 270°, 315°). Hepta-GAN incorporates an Affine Estimation Module (AEM) that extracts multi-scale features using dilated convolutional layers for efficient HD image generation. Moreover, Hepta-GAN is tuned with the Butterfly Optimization (BO) algorithm, which balances its parameters to enhance augmentation performance. Finally, the generated images are given to a deep neural network (DNN) for the classification of normal control (NC), adult-onset HD (AHD), and juvenile HD (JHD) cases. The proposed MIMI-ONET is evaluated with precision, specificity, F1 score, recall, accuracy, PSNR, and MSE. In the experiments, MIMI-ONET attains an accuracy of 98.85% and a PSNR of 48.05 on the gathered Image-HD dataset, improving overall accuracy by 9.96%, 1.85%, 5.91%, 13.80%, and 13.5% over 3DCNN, KNN, FCN, RNN, and ML frameworks, respectively.
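To make the preprocessing stage concrete, here is a small sketch of a level-1 2D stationary wavelet decomposition with per-sub-band contrast enhancement; skimage's CLAHE is used as a stand-in for the paper's CSAHE/MSAR steps, and the wavelet, clip limit, and toy image are assumptions.

    # Sketch of 2D stationary wavelet decomposition + sub-band enhancement
    # (assumed details; CLAHE stands in for CSAHE/MSAR).
    import numpy as np
    import pywt
    from skimage import exposure

    rng = np.random.default_rng(0)
    mri_slice = rng.random((64, 64)).astype(np.float64)   # stand-in brain MRI slice

    # Level-1 stationary wavelet transform -> approximation + 3 detail sub-bands
    (cA, (cH, cV, cD)), = pywt.swt2(mri_slice, wavelet="haar", level=1)

    def enhance(band: np.ndarray) -> np.ndarray:
        band = (band - band.min()) / (band.max() - band.min() + 1e-8)   # rescale to [0, 1]
        return exposure.equalize_adapthist(band, clip_limit=0.02)       # CLAHE stand-in

    enhanced_bands = [enhance(b) for b in (cA, cH, cV, cD)]
    # The enhanced sub-bands would then feed the GAN-based augmentation and the
    # downstream NC / adult-onset HD / juvenile HD classifier.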

Characterizing ASD Subtypes Using Morphological Features from sMRI with Unsupervised Learning.

Raj A, Ratnaik R, Sengar SS, Fredo ARJ

PubMed · May 15 2025
In this study, we attempted to identify subtypes of autism spectrum disorder (ASD) using anatomical alterations found in structural magnetic resonance imaging (sMRI) data of the ASD brain together with machine learning tools. The sMRI data were first preprocessed using the FreeSurfer toolbox. The brain was then segmented into 148 regions of interest using the Destrieux atlas, and features such as volume, thickness, surface area, and mean curvature were extracted for each region. We performed principal component analysis independently on the volume, thickness, surface area, and mean curvature features and identified the top 10 features. We then applied k-means clustering to these top 10 features and validated the number of clusters using the Elbow and Silhouette methods. Our study identified two clusters in the dataset, suggesting the existence of two ASD subtypes. Features such as the volume of scaled lh_G_front middle, the thickness of scaled rh_S_temporal transverse, the area of scaled lh_S_temporal sup, and the mean curvature of scaled lh_G_precentral discriminated the two clusters with statistically significant differences (p < 0.05). Thus, the proposed method is effective for identifying ASD subtypes and may also be useful for screening other, similar neurological disorders.
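A compact sketch of this clustering workflow (PCA, k-means over candidate cluster counts, silhouette validation) is given below on synthetic data; the feature matrix shape and the standardization step are assumptions.

    # Sketch of PCA + k-means subtype discovery with silhouette validation (assumed details).
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA
    from sklearn.metrics import silhouette_score
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    features = rng.normal(size=(100, 148 * 4))    # volume/thickness/area/curvature per region (toy)

    X = PCA(n_components=10).fit_transform(StandardScaler().fit_transform(features))
    for k in range(2, 7):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        print(k, "silhouette:", round(silhouette_score(X, labels), 3))
    # Per-feature group differences between the chosen clusters would then be
    # tested for significance (e.g. p < 0.05).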

Data-Agnostic Augmentations for Unknown Variations: Out-of-Distribution Generalisation in MRI Segmentation

Puru Vaish, Felix Meister, Tobias Heimann, Christoph Brune, Jelmer M. Wolterink

arXiv preprint · May 15 2025
Medical image segmentation models are often trained on curated datasets, leading to performance degradation when deployed in real-world clinical settings due to mismatches between training and test distributions. While data augmentation techniques are widely used to address these challenges, traditional visually consistent augmentation strategies lack the robustness needed for diverse real-world scenarios. In this work, we systematically evaluate alternative augmentation strategies, focusing on MixUp and Auxiliary Fourier Augmentation. These methods mitigate the effects of multiple variations without explicitly targeting specific sources of distribution shifts. We demonstrate how these techniques significantly improve out-of-distribution generalization and robustness to imaging variations across a wide range of transformations in cardiac cine MRI and prostate MRI segmentation. We quantitatively find that these augmentation methods enhance learned feature representations by promoting separability and compactness. Additionally, we highlight how their integration into nnU-Net training pipelines provides an easy-to-implement, effective solution for enhancing the reliability of medical segmentation models in real-world applications.
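As an illustration of one of the evaluated strategies, the sketch below shows MixUp applied to a segmentation batch: images and one-hot label maps are blended with a Beta-distributed coefficient. It is a generic PyTorch example, not the authors' implementation, and alpha = 0.4 is an assumed value.

    # Minimal MixUp sketch for segmentation batches (illustrative, not the authors' code).
    import torch

    def mixup_segmentation(images: torch.Tensor, onehot_labels: torch.Tensor,
                           alpha: float = 0.4):
        # images: [B, C, H, W], onehot_labels: [B, K, H, W]
        lam = torch.distributions.Beta(alpha, alpha).sample()
        perm = torch.randperm(images.size(0))
        mixed_x = lam * images + (1 - lam) * images[perm]
        mixed_y = lam * onehot_labels + (1 - lam) * onehot_labels[perm]
        return mixed_x, mixed_y

    x = torch.randn(4, 1, 128, 128)                        # toy cine MRI batch
    y = torch.nn.functional.one_hot(torch.randint(0, 3, (4, 128, 128)), 3).permute(0, 3, 1, 2).float()
    mx, my = mixup_segmentation(x, y)                      # soft labels train with a Dice/CE loss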

Machine learning-based prognostic subgrouping of glioblastoma: A multicenter study.

Akbari H, Bakas S, Sako C, Fathi Kazerooni A, Villanueva-Meyer J, Garcia JA, Mamourian E, Liu F, Cao Q, Shinohara RT, Baid U, Getka A, Pati S, Singh A, Calabrese E, Chang S, Rudie J, Sotiras A, LaMontagne P, Marcus DS, Milchenko M, Nazeri A, Balana C, Capellades J, Puig J, Badve C, Barnholtz-Sloan JS, Sloan AE, Vadmal V, Waite K, Ak M, Colen RR, Park YW, Ahn SS, Chang JH, Choi YS, Lee SK, Alexander GS, Ali AS, Dicker AP, Flanders AE, Liem S, Lombardo J, Shi W, Shukla G, Griffith B, Poisson LM, Rogers LR, Kotrotsou A, Booth TC, Jain R, Lee M, Mahajan A, Chakravarti A, Palmer JD, DiCostanzo D, Fathallah-Shaykh H, Cepeda S, Santonocito OS, Di Stefano AL, Wiestler B, Melhem ER, Woodworth GF, Tiwari P, Valdes P, Matsumoto Y, Otani Y, Imoto R, Aboian M, Koizumi S, Kurozumi K, Kawakatsu T, Alexander K, Satgunaseelan L, Rulseh AM, Bagley SJ, Bilello M, Binder ZA, Brem S, Desai AS, Lustig RA, Maloney E, Prior T, Amankulor N, Nasrallah MP, O'Rourke DM, Mohan S, Davatzikos C

PubMed · May 15 2025
Glioblastoma (GBM) is the most aggressive adult primary brain cancer, characterized by significant heterogeneity, posing challenges for patient management, treatment planning, and clinical trial stratification. We developed a highly reproducible, personalized prognostication and clinical subgrouping system using machine learning (ML) on routine clinical data, magnetic resonance imaging (MRI), and molecular measures from 2838 demographically diverse patients across 22 institutions and 3 continents. Patients were stratified into favorable, intermediate, and poor prognostic subgroups (I, II, and III) using Kaplan-Meier analysis and a Cox proportional hazards model with hazard ratios (HR). The ML model stratified patients into distinct prognostic subgroups with HRs between subgroups I-II and I-III of 1.62 (95% CI: 1.43-1.84, P < .001) and 3.48 (95% CI: 2.94-4.11, P < .001), respectively. Analysis of imaging features revealed several tumor properties contributing unique prognostic value, supporting the feasibility of a generalizable prognostic classification system in a diverse cohort. Our ML model demonstrates extensive reproducibility and online accessibility, utilizing routine imaging data rather than complex imaging protocols. This platform offers a unique approach to personalized patient management and clinical trial stratification in GBM.
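The survival comparison between ML-derived subgroups can be reproduced in spirit with the lifelines package, as sketched below on synthetic data; the Kaplan-Meier curves and the Cox model yielding hazard ratios follow the analysis named in the abstract, but the data frame layout and covariate coding are assumptions.

    # Sketch of subgroup survival analysis with lifelines (assumed implementation, toy data).
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter, KaplanMeierFitter

    rng = np.random.default_rng(0)
    n = 300
    df = pd.DataFrame({
        "subgroup": rng.integers(1, 4, n),      # 1 = favorable, 2 = intermediate, 3 = poor
        "event": rng.integers(0, 2, n),         # death observed (1) or censored (0)
    })
    df["months"] = rng.exponential(scale=30 / df["subgroup"], size=n)  # worse subgroup -> shorter survival

    # Kaplan-Meier curve per subgroup
    for g, sub in df.groupby("subgroup"):
        KaplanMeierFitter().fit(sub["months"], sub["event"], label=f"subgroup {g}")

    # Cox model with subgroup as a categorical covariate; exp(coef) gives the HR
    cph = CoxPHFitter().fit(pd.get_dummies(df, columns=["subgroup"], drop_first=True, dtype=float),
                            duration_col="months", event_col="event")
    print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%", "p"]])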

Machine learning for grading prediction and survival analysis in high grade glioma.

Li X, Huang X, Shen Y, Yu S, Zheng L, Cai Y, Yang Y, Zhang R, Zhu L, Wang E

PubMed · May 15 2025
We developed and validated a magnetic resonance imaging (MRI)-based radiomics model for the classification of high-grade glioma (HGG) and determined the optimal machine learning (ML) approach. This retrospective analysis included 184 patients (59 grade III lesions and 125 grade IV lesions). Radiomics features were extracted from T1-weighted MRI (T1WI). The least absolute shrinkage and selection operator (LASSO) feature selection method and seven classification methods, including logistic regression, XGBoost, Decision Tree, Random Forest (RF), Adaboost, Gradient Boosting Decision Tree, and a Stacking fusion model, were used to differentiate HGG grades. Performance was compared in terms of AUC, sensitivity, accuracy, precision, and specificity. Among the non-fusion models, the best performance was achieved with the XGBoost classifier, and applying SMOTE to address the class imbalance improved the performance of all classifiers. The Stacking fusion model performed best overall, with an AUC of 0.95 (sensitivity 0.84; accuracy 0.85; F1 score 0.85). MRI-based quantitative radiomics features thus perform well in classifying HGG: XGBoost outperforms the other non-fusion classifiers, and the Stacking fusion model outperforms the non-fusion models.
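A brief sketch of the imbalance handling and stacking fusion on synthetic data follows, using imbalanced-learn's SMOTE and scikit-learn's StackingClassifier with an XGBoost base learner; the base learners, their hyperparameters, and the split are assumptions rather than the authors' exact configuration.

    # Sketch of SMOTE + stacking fusion for grade III vs IV classification (assumed details).
    import numpy as np
    from imblearn.over_sampling import SMOTE
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from xgboost import XGBClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(184, 100))              # T1WI radiomics features (toy)
    y = np.r_[np.zeros(59), np.ones(125)]        # grade III vs grade IV

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # balance the grades

    stack = StackingClassifier(
        estimators=[("xgb", XGBClassifier(eval_metric="logloss")),
                    ("rf", RandomForestClassifier(n_estimators=200))],
        final_estimator=LogisticRegression(max_iter=1000),
    )
    stack.fit(X_res, y_res)
    print("AUC:", roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1]))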