Page 24 of 162 · 1612 results

Self-Supervised Cross-Encoder for Neurodegenerative Disease Diagnosis

Fangqi Cheng, Yingying Zhao, Xiaochen Yang

arXiv preprint · Sep 9 2025
Deep learning has shown significant potential in diagnosing neurodegenerative diseases from MRI data. However, most existing methods rely heavily on large volumes of labeled data and often yield representations that lack interpretability. To address both challenges, we propose a novel self-supervised cross-encoder framework that leverages the temporal continuity in longitudinal MRI scans for supervision. This framework disentangles learned representations into two components: a static representation, constrained by contrastive learning, which captures stable anatomical features; and a dynamic representation, guided by input-gradient regularization, which reflects temporal changes and can be effectively fine-tuned for downstream classification tasks. Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate that our method achieves superior classification accuracy and improved interpretability. Furthermore, the learned representations exhibit strong zero-shot generalization on the Open Access Series of Imaging Studies (OASIS) dataset and cross-task generalization on the Parkinson Progression Marker Initiative (PPMI) dataset. The code for the proposed method will be made publicly available.
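The abstract does not specify the contrastive objective used to constrain the static representation; a common choice for "same subject, two timepoints" supervision is an InfoNCE-style loss that pulls the two scans of a subject together and pushes other subjects apart. A minimal NumPy sketch under that assumption (all names and shapes are hypothetical, not the paper's implementation):

```python
import numpy as np

def info_nce(static_t0, static_t1, temperature=0.1):
    """InfoNCE-style contrastive loss: the two scans of each subject form
    a positive pair; every other subject in the batch is a negative.
    Inputs are (batch, dim) representation matrices."""
    a = static_t0 / np.linalg.norm(static_t0, axis=1, keepdims=True)
    b = static_t1 / np.linalg.norm(static_t1, axis=1, keepdims=True)
    logits = a @ b.T / temperature                    # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))               # positives lie on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 32))                          # per-subject static codes
aligned = info_nce(z, z + 0.01 * rng.normal(size=(8, 32)))   # stable anatomy
shuffled = info_nce(z, rng.normal(size=(8, 32)))             # unrelated scans
```

As expected for a contrastive objective, the loss is much lower when the two timepoints encode the same stable anatomy than when they are unrelated.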

Two-Step Semi-Automated Classification of Choroidal Metastases on MRI: Orbit Localization via Bounding Boxes Followed by Binary Classification via Evolutionary Strategies.

Shi JS, McRae-Posani B, Haque S, Holodny A, Shalu H, Stember J

PubMed · Sep 9 2025
The choroid of the eye is a rare site for metastatic tumor spread, and as small lesions on the periphery of brain MRI studies, these choroidal metastases are often missed. To improve their detection, we aimed to use artificial intelligence to distinguish between brain MRI scans containing normal orbits and choroidal metastases. We present a novel hierarchical deep learning framework for sequential cropping and classification on brain MRI images to detect choroidal metastases. The key innovation of this approach lies in training an orbit localization network based on a YOLOv5 architecture to focus on the orbits, isolating the structures of interest and eliminating irrelevant background information. The initial sub-task of localization ensures that the input to the subsequent classification network is restricted to the precise anatomical region where choroidal metastases are likely to occur. In Step 1, we trained a localization network on 386 T2-weighted brain MRI axial slices from 97 patients. Using the localized orbit images from Step 1, in Step 2 we trained a binary classifier network with 33 normal and 33 choroidal metastasis-containing brain MRIs. To address the challenges posed by the small dataset, we employed a data-efficient evolutionary strategies approach, which has been shown to avoid both overfitting and underfitting in small training sets. Our orbit localization model identified globes with 100% accuracy and a mean Average Precision averaged over Intersection over Union thresholds of 0.5 to 0.95 (mAP(0.5:0.95)) of 0.47 on held-out testing data. Similarly, the model generalized well to our Step 2 dataset, which included orbits demonstrating pathologies, achieving 100% accuracy and an mAP(0.5:0.95) of 0.44. mAP(0.5:0.95) appeared low because the model could not distinguish left and right orbits. Using the cropped orbits as inputs, our evolutionary strategies-trained convolutional neural network achieved a testing set area under the curve (AUC) of 0.93 (95% CI [0.83, 1.03]), with 100% sensitivity and 87% specificity at the optimal Youden's index. The semi-automated pipeline from brain MRI slices to choroidal metastasis classification demonstrates the utility of a sequential localization-then-classification approach, and its clinical relevance for identifying small, "corner-of-the-image", easily overlooked lesions. AI = artificial intelligence; AUC = area under the curve; CNN = convolutional neural network; DNE = deep neuroevolution; IoU = intersection over union; mAP = mean average precision; ROC = receiver operating characteristic.
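The reported operating point (100% sensitivity, 87% specificity) is the one maximizing Youden's J = sensitivity + specificity − 1. As a concrete illustration of that criterion on toy scores (not the study's data):

```python
import numpy as np

def youden_optimal(scores, labels):
    """Return (threshold, sensitivity, specificity) at the cut-point
    maximizing Youden's J = sensitivity + specificity - 1."""
    order = np.argsort(-scores)          # sweep thresholds from high to low
    scores, labels = scores[order], labels[order]
    p, n = labels.sum(), (1 - labels).sum()
    best = (None, 0.0, 1.0, -1.0)        # (threshold, sens, spec, J)
    tp = fp = 0
    for s, y in zip(scores, labels):
        tp += y
        fp += 1 - y
        sens, spec = tp / p, 1 - fp / n
        j = sens + spec - 1
        if j > best[3]:
            best = (s, sens, spec, j)
    return best[:3]

scores = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1])
labels = np.array([1,   1,   1,   0,   1,   0,   0,   0])
thr, sens, spec = youden_optimal(scores, labels)
```

Here the optimal cut-point is 0.7, giving sensitivity 0.75 at perfect specificity.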

Assessing the ability of large language models to simplify lumbar spine imaging reports into patient-facing text: a pilot study of GPT-4.

Khazanchi R, Chen AR, Desai P, Herrera D, Staub JR, Follett MA, Krushelnytskyy M, Kemeny H, Hsu WK, Patel AA, Divi SN

PubMed · Sep 9 2025
To assess the ability of large language models (LLMs) to accurately simplify lumbar spine magnetic resonance imaging (MRI) reports. Patients who underwent lumbar decompression and/or fusion surgery in 2022 at one tertiary academic medical center were identified using appropriate CPT codes. We then identified all patients with a preoperative ICD diagnosis of lumbar spondylolisthesis and extracted the latest preoperative spine MRI radiology report text. The GPT-4 API was deployed on deidentified reports with a prompt to produce translations, which were evaluated for accuracy and readability. An enhanced GPT prompt was constructed using high-scoring reports and evaluated on low-scoring reports. Of 93 included reports, GPT effectively reduced the average reading level (11.47 versus 8.50, p < 0.001). While most reports had no accuracy issues, 34% of translations omitted at least one clinically relevant piece of information, and 6% produced a clinically significant inaccuracy in the translation. An enhanced prompt model built from high-scoring reports maintained the reading level while significantly improving the omission rate (p < 0.0001). However, even with the enhanced prompt, GPT made several errors regarding the location of stenosis, descriptions of prior spine surgery, and descriptions of other spine pathologies. GPT-4 effectively simplifies the reading level of lumbar spine MRI reports. The model tends to omit key information in its translations, which can be mitigated with enhanced prompting. Further validation in the domain of spine radiology needs to be performed to facilitate clinical integration.
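The reading-level comparison (11.47 versus 8.50) implies a grade-level readability formula such as Flesch-Kincaid; the abstract does not say which metric was used. A rough sketch of that computation with a deliberately naive syllable counter (illustrative only, not a validated implementation):

```python
import re

def naive_syllables(word):
    """Rough syllable count: runs of vowels, minus one for a silent trailing 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def fk_grade(text):
    """Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(naive_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

simple = "The scan looks fine. There is no sign of harm."
complex_ = ("Degenerative spondylolisthesis demonstrates multilevel "
            "foraminal stenosis with circumferential disc protrusion.")
```

Even this crude estimator separates patient-facing phrasing from dense radiology prose by several grade levels.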

Spherical Harmonics Representation Learning for High-Fidelity and Generalizable Super-Resolution in Diffusion MRI.

Wu R, Cheng J, Li C, Zou J, Fan W, Ma X, Guo H, Liang Y, Wang S

PubMed · Sep 9 2025
Diffusion magnetic resonance imaging (dMRI) often suffers from low spatial and angular resolution due to inherent limitations in imaging hardware and system noise, adversely affecting the accurate estimation of microstructural parameters with fine anatomical details. Deep learning-based super-resolution techniques have shown promise in enhancing dMRI resolution without increasing acquisition time. However, most existing methods are confined to either spatial or angular super-resolution, disrupting the information exchange between the two domains and limiting their effectiveness in capturing detailed microstructural features. Furthermore, traditional pixel-wise loss functions only consider pixel differences, and struggle to recover intricate image details essential for high-resolution reconstruction. We propose SHRL-dMRI, a novel Spherical Harmonics Representation Learning framework for high-fidelity, generalizable super-resolution in dMRI to address these challenges. SHRL-dMRI explores implicit neural representations and spherical harmonics to model continuous spatial and angular representations, simultaneously enhancing both spatial and angular resolution while improving the accuracy of microstructural parameter estimation. To further preserve image fidelity, a data-fidelity module and wavelet-based frequency loss are introduced, ensuring that the super-resolved images remain consistent with the acquired data and retain fine details. Extensive experiments demonstrate that, compared to five other state-of-the-art methods, our method significantly enhances dMRI data resolution, improves the accuracy of microstructural parameter estimation, and provides better generalization capabilities. It maintains stable performance even under a 45× downsampling factor. The proposed method can effectively improve the resolution of dMRI data without increasing the acquisition time, providing new possibilities for future clinical applications.
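SHRL-dMRI's internals are not given in the abstract, but the core idea of a spherical-harmonic angular representation can be illustrated independently: fit SH coefficients to signal samples on the sphere, then resample the continuous representation at arbitrary directions (angular super-resolution). The sketch below uses only the axially symmetric m = 0 terms Y00 and Y20 and synthetic data:

```python
import numpy as np

# Real spherical harmonics for m = 0 (axially symmetric signal)
Y00 = lambda theta: np.full_like(theta, 0.5 / np.sqrt(np.pi))
Y20 = lambda theta: np.sqrt(5 / (16 * np.pi)) * (3 * np.cos(theta) ** 2 - 1)

rng = np.random.default_rng(1)
theta = rng.uniform(0, np.pi, 200)        # polar angles of sampled gradient directions
true_c = np.array([1.0, 0.4])             # ground-truth SH coefficients
signal = true_c[0] * Y00(theta) + true_c[1] * Y20(theta)

B = np.column_stack([Y00(theta), Y20(theta)])      # SH design matrix
coeffs, *_ = np.linalg.lstsq(B, signal, rcond=None)  # least-squares SH fit

# The fitted coefficients define a continuous angular signal that can be
# evaluated at any direction -- here a dense grid the scanner never sampled.
dense_theta = np.linspace(0, np.pi, 64)
recon = coeffs[0] * Y00(dense_theta) + coeffs[1] * Y20(dense_theta)
```

On noiseless data the fit recovers the coefficients exactly; in practice the same machinery regularizes and resamples noisy angular measurements.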

Machine learning for myocarditis diagnosis using cardiovascular magnetic resonance: a systematic review, diagnostic test accuracy meta-analysis, and comparison with human physicians.

Łajczak P, Sahin OK, Matyja J, Puglla Sanchez LR, Sayudo IF, Ayesha A, Lopes V, Majeed MW, Krishna MM, Joseph M, Pereira M, Obi O, Silva R, Lecchi C, Schincariol M

PubMed · Sep 9 2025
Myocarditis is an inflammation of heart tissue. Cardiovascular magnetic resonance imaging (CMR) has emerged as an important non-invasive imaging tool for diagnosing myocarditis; however, interpretation remains a challenge for novice physicians. Advancements in machine learning (ML) models have further improved diagnostic accuracy, demonstrating good performance. Our study aims to assess the diagnostic accuracy of ML in identifying myocarditis using CMR. A systematic search was performed using PubMed, Embase, Web of Science, Cochrane, and Scopus to identify studies reporting the diagnostic accuracy of ML in the detection of myocarditis using CMR. The included studies evaluated both image-based and report-based assessments using various ML models. Diagnostic accuracy was estimated using a random-effects model (R software). We found a total of 141 ML model results across 12 studies, which were included in the systematic review. The best models achieved 0.93 (95% Confidence Interval (CI) 0.88-0.96) sensitivity and 0.95 (95% CI 0.89-0.97) specificity. The pooled area under the curve was 0.97 (95% CI 0.93-0.98). Comparisons with human physicians showed comparable diagnostic accuracy for myocarditis. Quality assessment concerns and heterogeneity were present. CMR augmented with ML models using advanced algorithms can provide high diagnostic accuracy for myocarditis, even surpassing novice CMR radiologists. However, high heterogeneity, quality assessment concerns, and a lack of information on cost-effectiveness may limit the clinical implementation of ML. Future investigations should explore cost-effectiveness and minimize biases in their methodologies.
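The abstract says pooling used a random-effects model in R but does not name the estimator; DerSimonian-Laird on logit-transformed proportions is a standard choice. A NumPy sketch under that assumption (the per-study counts below are invented for illustration, not taken from the review):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with the DerSimonian-Laird tau^2."""
    w = 1 / variances
    fixed = np.sum(w * effects) / np.sum(w)          # fixed-effect mean
    q = np.sum(w * (effects - fixed) ** 2)           # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    w_star = 1 / (variances + tau2)                  # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    return pooled, se, tau2

# Hypothetical per-study true/false negatives for sensitivity pooling
tp = np.array([45, 30, 80, 22])
fn = np.array([5, 6, 4, 8])
logit = np.log(tp / fn)                  # sensitivity on the logit scale
var = 1 / tp + 1 / fn                    # approximate variance of a logit proportion
pooled, se, tau2 = dersimonian_laird(logit, var)
pooled_sens = 1 / (1 + np.exp(-pooled))  # back-transform to a proportion
```

Working on the logit scale keeps the pooled sensitivity inside (0, 1) after back-transformation.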

Development of an MRI-Based Comprehensive Model Fusing Clinical, Habitat Radiomics, and Deep Learning Models for Preoperative Identification of Tumor Deposits in Rectal Cancer.

Li X, Zhu Y, Wei Y, Chen Z, Wang Z, Li Y, Jin X, Chen Z, Zhan J, Chen X, Wang M

PubMed · Sep 9 2025
Tumor deposits (TDs) are an important prognostic factor in rectal cancer. However, integrated models combining clinical, habitat radiomics, and deep learning (DL) features for preoperative TDs detection remain unexplored. To investigate fusion models based on MRI for preoperative TDs identification and prognosis in rectal cancer. Retrospective. Surgically diagnosed rectal cancer patients (n = 635): training (n = 259) and internal validation (n = 112) from center 1; center 2 (n = 264) for external validation. 1.5/3T, T2-weighted image (T2WI) using fast spin echo sequence. Four models (clinical, habitat radiomics, DL, fusion) were developed for preoperative TDs diagnosis (184 TDs positive). T2WI was segmented using nnUNet, and habitat radiomics and DL features were extracted separately. Clinical parameters were analyzed independently. The fusion model integrated selected features from all three approaches through two-stage selection. Disease-free survival (DFS) analysis was used to assess the models' prognostic performance. Intraclass correlation coefficient (ICC), logistic regression, Mann-Whitney U tests, Chi-squared tests, LASSO, area under the curve (AUC), decision curve analysis (DCA), calibration curves, Kaplan-Meier analysis. The AUCs for the four models ranged from 0.778 to 0.930 in the training set. In the internal validation cohort, the AUCs of clinical, habitat radiomics, DL, and fusion models were 0.785 (95% CI 0.767-0.803), 0.827 (95% CI 0.809-0.845), 0.828 (95% CI 0.815-0.841), and 0.862 (95% CI 0.828-0.896), respectively. In the external validation cohort, the corresponding AUCs were 0.711 (95% CI 0.599-0.644), 0.817 (95% CI 0.801-0.833), 0.759 (95% CI 0.743-0.773), and 0.820 (95% CI 0.770-0.860), respectively. TDs-positive patients predicted by the fusion model had significantly poorer DFS (median: 30.7 months) than TDs-negative patients (median follow-up period: 39.9 months). 
A fusion model may identify TDs in rectal cancer and could allow stratification of DFS risk.
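The two-stage feature selection is not detailed, but the statistics listed (Mann-Whitney U, LASSO) suggest a univariate filter followed by an embedded selector. A NumPy-only sketch of the same idea on synthetic data: a rank-based (Mann-Whitney/AUC) filter, then a correlation-redundancy prune standing in for LASSO (every name and threshold here is an assumption):

```python
import numpy as np

def feature_auc(x, y):
    """AUC of one feature via the rank-sum (Mann-Whitney U) identity."""
    order = x.argsort()
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(x) + 1)
    n_pos, n_neg = y.sum(), len(y) - y.sum()
    u = ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

def two_stage_select(X, y, k=10, max_corr=0.8):
    """Stage 1: keep the k features most separating by |AUC - 0.5|.
    Stage 2: greedily drop features highly correlated with a kept one."""
    scores = np.array([abs(feature_auc(X[:, j], y) - 0.5) for j in range(X.shape[1])])
    candidates = np.argsort(-scores)[:k]
    kept = []
    for j in candidates:
        if all(abs(np.corrcoef(X[:, j], X[:, i])[0, 1]) < max_corr for i in kept):
            kept.append(j)
    return kept

rng = np.random.default_rng(2)
y = (rng.random(120) < 0.3).astype(int)              # synthetic TDs labels
X = rng.normal(size=(120, 50))                       # synthetic radiomics features
X[:, 0] += 2.0 * y                                   # one informative feature
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=120)      # its redundant near-copy
selected = two_stage_select(X, y, k=5)
```

The filter surfaces the informative feature, and the redundancy stage keeps only one of the two near-duplicates.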

Faster, Self-Supervised Super-Resolution for Anisotropic Multi-View MRI Using a Sparse Coordinate Loss

Maja Schlereth, Moritz Schillinger, Katharina Breininger

arXiv preprint · Sep 9 2025
Acquiring images in high resolution is often a challenging task. Especially in the medical sector, image quality has to be balanced with acquisition time and patient comfort. To strike a compromise between scan time and quality for Magnetic Resonance (MR) imaging, two anisotropic scans with different low-resolution (LR) orientations can be acquired. Typically, LR scans are analyzed individually by radiologists, which is time consuming and can lead to inaccurate interpretation. To tackle this, we propose a novel approach for fusing two orthogonal anisotropic LR MR images to reconstruct anatomical details in a unified representation. Our multi-view neural network is trained in a self-supervised manner, without requiring corresponding high-resolution (HR) data. To optimize the model, we introduce a sparse coordinate-based loss, enabling the integration of LR images with arbitrary scaling. We evaluate our method on MR images from two independent cohorts. Our results demonstrate comparable or even improved super-resolution (SR) performance compared to state-of-the-art (SOTA) self-supervised SR methods for different upsampling scales. By combining a patient-agnostic offline and a patient-specific online phase, we achieve a substantial speed-up of up to ten times for patient-specific reconstruction while achieving similar or better SR quality. Code is available at https://github.com/MajaSchle/tripleSR.
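The sparse coordinate-based loss is described only at a high level; one way to read it is: supervise a continuous high-resolution prediction only at randomly sampled coordinates, mapped into each anisotropic LR view at that view's own scale, so no HR ground truth is needed. A 2-D NumPy caricature of that reading (nearest-neighbour lookup stands in for proper interpolation; all details are assumptions):

```python
import numpy as np

def sample_nearest(img, coords, shape_hr):
    """Look up LR intensities at continuous HR coordinates by rescaling
    into the LR grid and taking the nearest voxel."""
    scale = np.array(img.shape) / np.array(shape_hr)
    idx = np.clip((coords * scale).astype(int), 0, np.array(img.shape) - 1)
    return img[idx[:, 0], idx[:, 1]]

def sparse_coordinate_loss(pred_hr, lr_views, n_samples=256, seed=0):
    """MSE evaluated only at randomly sampled coordinates, summed over
    all anisotropic LR views -- self-supervised, no HR reference."""
    rng = np.random.default_rng(seed)
    coords = rng.uniform(0, 1, (n_samples, 2)) * (np.array(pred_hr.shape) - 1)
    pred = pred_hr[coords[:, 0].astype(int), coords[:, 1].astype(int)]
    loss = 0.0
    for lr in lr_views:
        loss += np.mean((pred - sample_nearest(lr, coords, pred_hr.shape)) ** 2)
    return loss

hr = np.arange(64 * 64, dtype=float).reshape(64, 64) / (64 * 64)  # smooth "anatomy"
lr_a = hr[::4, :]          # anisotropic view, low resolution along axis 0
lr_b = hr[:, ::4]          # orthogonal view, low resolution along axis 1
good = sparse_coordinate_loss(hr, [lr_a, lr_b])          # consistent prediction
bad = sparse_coordinate_loss(np.zeros_like(hr), [lr_a, lr_b])  # inconsistent one
```

A prediction consistent with both orthogonal LR views incurs a near-zero loss; an inconsistent one is penalized, which is the signal the self-supervised training exploits.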

Prediction of double expression status of primary CNS lymphoma using multiparametric MRI radiomics combined with habitat radiomics: a double-center study.

Zhao J, Liang L, Li J, Li Q, Li F, Niu L, Xue C, Fu W, Liu Y, Song S, Liu X

PubMed · Sep 9 2025
Double expression lymphoma (DEL) is an independent high-risk prognostic factor for primary CNS lymphoma (PCNSL), and its diagnosis currently relies on invasive methods. This study is the first to integrate radiomics and habitat radiomics features, enhancing preoperative DEL status prediction models through intratumoral heterogeneity analysis. Clinical, pathological, and MRI imaging data of 139 PCNSL patients from two independent centers were collected. Radiomics, habitat radiomics, and combined models were constructed using machine learning classifiers, including k-nearest neighbors (KNN), decision tree (DT), logistic regression (LR), and support vector machine (SVM). The AUC in the test set was used to identify the optimal predictive model. Decision curve analysis (DCA) and calibration curves were employed to evaluate the predictive performance of the models. SHAP analysis was utilized to visualize the contribution of each feature in the optimal model. Among the radiomics-based models, the Combined radiomics model constructed with LR demonstrated better performance, with an AUC of 0.8779 (95% CI: 0.8171-0.9386) in the training set and 0.7166 (95% CI: 0.497-0.9361) in the test set. The Habitat radiomics model (SVM) based on T1-CE showed an AUC of 0.7446 (95% CI: 0.6503-0.8388) in the training set and 0.7433 (95% CI: 0.5322-0.9545) in the test set. Finally, the Combined all model exhibited the highest predictive performance: LR achieved AUC values of 0.8962 (95% CI: 0.8299-0.9625) and 0.8289 (95% CI: 0.6785-0.9793) in the training and test sets, respectively. The Combined all model developed in this study can serve as an effective reference for predicting the DEL status of PCNSL, and habitat radiomics significantly enhances the predictive efficacy.
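Habitat radiomics typically partitions the tumor into subregions ("habitats") by clustering voxel-wise features before extracting per-habitat statistics; the abstract does not describe the clustering used. A deterministic k-means sketch on synthetic two-channel voxel features (the blob labels and values are invented):

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Lloyd's k-means with farthest-point initialization (deterministic
    for this sketch). Returns per-point labels and the centroids."""
    cents = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in cents], axis=0)
        cents.append(X[d.argmax()])              # next seed: farthest point
    centroids = np.array(cents)
    for _ in range(iters):
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)                     # assign to nearest centroid
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(0)
    return labels, centroids

# Hypothetical voxel-wise features inside a tumor mask: two MRI intensities
rng = np.random.default_rng(3)
core = rng.normal([0.2, 0.8], 0.05, (100, 2))    # e.g. necrotic-like voxels
rim = rng.normal([0.9, 0.3], 0.05, (100, 2))     # e.g. enhancing-like voxels
voxels = np.vstack([core, rim])
labels, cents = kmeans(voxels, k=2)
```

Each resulting cluster defines one habitat; radiomics features are then computed per habitat rather than over the whole lesion.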

PUUMA (Placental patch and whole-Uterus dual-branch U-Mamba-based Architecture): Functional MRI Prediction of Gestational Age at Birth and Preterm Risk

Diego Fajardo-Rojas, Levente Baljer, Jordina Aviles Verdera, Megan Hall, Daniel Cromb, Mary A. Rutherford, Lisa Story, Emma C. Robinson, Jana Hutter

arXiv preprint · Sep 8 2025
Preterm birth is a major cause of mortality and lifelong morbidity in childhood. Its complex and multifactorial origins limit the effectiveness of current clinical predictors and impede optimal care. In this study, a dual-branch deep learning architecture (PUUMA) was developed to predict gestational age (GA) at birth using T2* fetal MRI data from 295 pregnancies, encompassing a heterogeneous and imbalanced population. The model integrates both global whole-uterus and local placental features. Its performance was benchmarked against linear regression on cervical length measurements obtained by experienced clinicians from anatomical MRI, and against other deep learning architectures. GA-at-birth predictions were assessed using mean absolute error. Accuracy, sensitivity, and specificity were used to assess preterm classification. Both the fully automated MRI-based pipeline and the cervical length regression achieved comparable mean absolute errors (3 weeks) and good sensitivity (0.67) for detecting preterm birth, despite pronounced class imbalance in the dataset. These results provide a proof of concept for automated prediction of GA at birth from functional MRI, and underscore the value of whole-uterus functional imaging in identifying at-risk pregnancies. Additionally, we demonstrate that manual, high-definition cervical length measurements derived from MRI, not currently routine in clinical practice, offer valuable predictive information. Future work will focus on expanding the cohort size and incorporating additional organ-specific imaging to improve generalisability and predictive performance.
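The evaluation combines a regression metric (MAE on GA at birth) with classification metrics after thresholding predictions at the preterm cutoff; 37 weeks is the standard definition, though the abstract does not state the cutoff explicitly. A sketch of both computations on made-up values:

```python
import numpy as np

def preterm_metrics(ga_true, ga_pred, cutoff=37.0):
    """MAE of gestational-age regression, plus accuracy / sensitivity /
    specificity after thresholding at the preterm cutoff (weeks)."""
    mae = np.mean(np.abs(ga_true - ga_pred))
    t, p = ga_true < cutoff, ga_pred < cutoff    # preterm = positive class
    tp = np.sum(t & p); tn = np.sum(~t & ~p)
    fn = np.sum(t & ~p); fp = np.sum(~t & p)
    return {
        "mae": mae,
        "accuracy": (tp + tn) / len(ga_true),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Hypothetical true and predicted gestational ages at birth (weeks)
ga_true = np.array([40.0, 39.5, 36.0, 28.0, 41.0, 33.0])
ga_pred = np.array([39.0, 38.0, 37.5, 30.0, 40.0, 34.5])
m = preterm_metrics(ga_true, ga_pred)
```

Note how a small regression error can still flip a borderline case (36.0 predicted as 37.5) across the cutoff, which is why both metric families are reported.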

Evaluating artificial intelligence for a focal nodular hyperplasia diagnosis using magnetic resonance imaging: preliminary findings.

Kantarcı M, Kızılgöz V, Terzi R, Kılıç AE, Kabalcı H, Durmaz Ö, Tokgöz N, Harman M, Sağır Kahraman A, Avanaz A, Aydın S, Elpek GÖ, Yazol M, Aydınlı B

PubMed · Sep 8 2025
This study aimed to evaluate the effectiveness of artificial intelligence (AI) in diagnosing focal nodular hyperplasia (FNH) of the liver using magnetic resonance imaging (MRI) and compare its performance with that of radiologists. In the first phase of the study, the MRIs of 60 patients (30 patients with FNH and 30 patients with no lesions or lesions other than FNH) were processed using a segmentation program and introduced to an AI model. After the learning process, the MRIs of 42 different patients, previously unseen by the AI model, were introduced to the system. In addition, a radiology resident and a radiology specialist evaluated the same patients using the same MR sequences. Sensitivity and specificity values were obtained from all three reviews. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of the AI model were found to be 0.769, 0.966, 0.909, and 0.903, respectively. Its sensitivity and specificity were higher than those of the radiology resident and lower than those of the radiology specialist. Comparing the specialist with the AI model revealed a good level of agreement, with a kappa (κ) value of 0.777. For the diagnosis of FNH, the sensitivity, specificity, PPV, and NPV of the AI model were higher than those of the radiology resident and lower than those of the radiology specialist. With additional studies focused on different specific liver lesions, AI models are expected to be able to diagnose each liver lesion with high accuracy in the future. AI is being studied to provide assisted or automated interpretation of radiological images with accurate and reproducible imaging diagnoses.
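The four reported figures (sensitivity 0.769, specificity 0.966, PPV 0.909, NPV 0.903) happen to be consistent with a 2×2 table of tp=10, fp=1, fn=3, tn=28 over the 42 test patients; that reconstruction is a guess, used here only to illustrate how the metrics and Cohen's κ are computed:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

def cohens_kappa(tp, fp, fn, tn):
    """Agreement beyond chance between two raters (or model vs. reference)."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n                                   # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical counts consistent with the reported values (not the study's raw data)
m = diagnostic_metrics(tp=10, fp=1, fn=3, tn=28)
kappa = cohens_kappa(tp=10, fp=1, fn=3, tn=28)
```

With these counts the four metrics round to exactly the values in the abstract; κ here measures agreement between the two reviews rather than accuracy against pathology.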
