Page 17 of 1241236 results

Few-shot learning for highly accelerated 3D time-of-flight MRA reconstruction.

Li H, Chiew M, Dragonu I, Jezzard P, Okell TW

pubmed · Sep 10 2025
To develop a deep learning-based reconstruction method for highly accelerated 3D time-of-flight MRA (TOF-MRA) that achieves high-quality reconstruction with robust generalization using extremely limited acquired raw data, addressing the challenge of time-consuming acquisition of high-resolution, whole-head angiograms. A novel few-shot learning-based reconstruction framework is proposed, featuring a 3D variational network specifically designed for 3D TOF-MRA that is pre-trained on simulated complex-valued, multi-coil raw k-space datasets synthesized from diverse open-source magnitude images and fine-tuned using only two single-slab experimentally acquired datasets. The proposed approach was evaluated against existing methods on acquired retrospectively undersampled in vivo k-space data from five healthy volunteers and on prospectively undersampled data from two additional subjects. The proposed method achieved superior reconstruction performance on experimentally acquired in vivo data over comparison methods, preserving most fine vessels with minimal artifacts with up to eight-fold acceleration. Compared to other simulation techniques, the proposed method generated more realistic raw k-space data for 3D TOF-MRA. Consistently high-quality reconstructions were also observed on prospectively undersampled data. By leveraging few-shot learning, the proposed method enabled highly accelerated 3D TOF-MRA relying on minimal experimentally acquired data, achieving promising results on both retrospective and prospective in vivo data while outperforming existing methods. Given the challenges of acquiring and sharing large raw k-space datasets, this holds significant promise for advancing research and clinical applications in high-resolution, whole-head 3D TOF-MRA imaging.
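The simulation step described above — synthesizing complex-valued, multi-coil raw k-space from open-source magnitude images — can be sketched roughly as below. The coil-sensitivity model, noise level, and input image are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

# Hedged sketch of the kind of simulation the abstract describes: synthesize
# multi-coil complex k-space from a magnitude image using made-up smooth coil
# sensitivities and additive complex Gaussian noise.
rng = np.random.default_rng(0)
img = rng.random((64, 64))  # stand-in for an open-source magnitude image

def simulate_multicoil_kspace(mag, n_coils=4, noise_sd=0.01):
    ny, nx = mag.shape
    y, x = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nx), indexing="ij")
    kspace = []
    for c in range(n_coils):
        # Smooth, spatially varying complex sensitivity (illustrative form only).
        phase = np.exp(1j * np.pi * (x * np.cos(c) + y * np.sin(c)))
        sens = np.exp(-((x - np.cos(c)) ** 2 + (y - np.sin(c)) ** 2)) * phase
        coil_img = mag * sens
        # Centered 2D FFT of the coil image gives the noiseless k-space.
        k = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(coil_img)))
        k += noise_sd * (rng.standard_normal(k.shape) + 1j * rng.standard_normal(k.shape))
        kspace.append(k)
    return np.stack(kspace)  # (n_coils, ny, nx), complex-valued

ks = simulate_multicoil_kspace(img)
print(ks.shape, ks.dtype)
```

A network pre-trained on such synthetic raw data would then, per the abstract, be fine-tuned on the two experimentally acquired datasets.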

Attention Gated-VGG with deep learning-based features for Alzheimer's disease classification.

Moorthy DK, Nagaraj P

pubmed · Sep 10 2025
Alzheimer's disease (AD) is a neurodegenerative disease associated with cognitive deficits and dementia. Early detection of AD is therefore a high priority. Here, images undergo a pre-processing phase that combines image resizing with median filtering. The processed images are then subjected to data augmentation. Features extracted by a WOA-based ResNet, together with convolutional neural network (CNN) features extracted from the pre-processed images, are used to train the proposed deep learning model to classify AD. Classification is performed by the proposed Attention Gated-VGG model. When tested, the proposed method outperformed standard methodologies, achieving an accuracy of 96.7%, sensitivity of 97.8%, and specificity of 96.3%. These results show that the Attention Gated-VGG model is a promising technique for classifying AD.

Artificial Intelligence in Early Detection of Autism Spectrum Disorder for Preschool ages: A Systematic Literature Review

Hasan, H. H.

medrxiv preprint · Sep 10 2025
Background: Early detection of autism spectrum disorder (ASD) improves outcomes, yet clinical assessment is time-intensive. Artificial intelligence (AI) may support screening in preschool children by analysing behavioural, neurophysiological, imaging, and biomarker data. Aim: To synthesise studies that applied AI in ASD assessment and evaluate whether the underlying data and AI approaches can distinguish ASD characteristics in early childhood. Methods: A systematic search of 15 databases was conducted on 30 November 2024 using predefined terms. Inclusion criteria were empirical studies applying AI to ASD detection in children aged 0-7 years. Reporting followed PRISMA 2020. Results: Twelve studies met criteria. Reported performance (AUC) ranged from 0.65 to 0.997. Modalities included behavioural (eye-tracking, home videos), motor (tablet/reaching), EEG, diffusion MRI, and blood/epigenetic biomarkers. The largest archival dataset (M-CHAT-R) achieved near-perfect AUC with neural networks. Common limitations were small samples, male-skewed cohorts, and limited external validation. Conclusions: AI can aid early ASD screening in infants and preschoolers, but larger and more diverse datasets, rigorous external validation, and multimodal integration are needed before clinical deployment.

3D-CNN Enhanced Multiscale Progressive Vision Transformer for AD Diagnosis.

Huang F, Chen N, Qiu A

pubmed · Sep 10 2025
Vision Transformer (ViT) applied to structural magnetic resonance images (sMRI) has demonstrated success in the diagnosis of Alzheimer's disease (AD) and mild cognitive impairment (MCI). However, three key challenges have yet to be well addressed: 1) ViT requires a large labeled dataset to mitigate overfitting, while most current AD-related sMRI datasets fall short in sample size. 2) ViT neglects within-patch feature learning, e.g., local brain atrophy, which is crucial for AD diagnosis. 3) While ViT can better capture local features by reducing the patch size and increasing the number of patches, its computational complexity increases quadratically with the number of patches, incurring prohibitive overhead. To this end, this paper proposes a 3D convolutional neural network (CNN) Enhanced Multiscale Progressive ViT (3D-CNN-MPVT). First, a 3D CNN is pre-trained on sMRI data to extract detailed local image features and alleviate overfitting. Second, an MPVT module is proposed with an inner CNN module to explicitly characterize the within-patch interactions that are conducive to AD diagnosis. Third, a stitch operation is proposed to merge cross-patch features and progressively reduce the number of patches. The inner CNN alongside the stitch operation in the MPVT module enhances local feature characterization while mitigating computational costs. Evaluations using the Alzheimer's Disease Neuroimaging Initiative dataset with 6610 scans and the Open Access Series of Imaging Studies-3 with 1866 scans demonstrated its superior performance. With minimal preprocessing, our approach achieved 90% accuracy in AD classification and 80% in MCI conversion prediction, surpassing recent baselines.
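A stitch-style merge that progressively cuts the patch count can be illustrated with a minimal numpy sketch; the grid size, embedding dimension, and linear projection below are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

# Illustrative sketch: concatenate each 2x2x2 neighborhood of 3D patch tokens
# and project the result back to the embedding dimension, reducing the number
# of patches by a factor of 8.
rng = np.random.default_rng(0)

def stitch_merge(tokens, grid, dim, proj):
    """tokens: (P, dim) patch embeddings on a (grid, grid, grid) lattice."""
    g = grid
    x = tokens.reshape(g, g, g, dim)
    # Group neighboring patches along each axis, then flatten each 2x2x2 block.
    x = x.reshape(g // 2, 2, g // 2, 2, g // 2, 2, dim)
    x = x.transpose(0, 2, 4, 1, 3, 5, 6).reshape(-1, 8 * dim)
    return x @ proj  # linear projection back to `dim`

dim, grid = 32, 8                           # 8^3 = 512 patches
tokens = rng.standard_normal((grid**3, dim))
proj = rng.standard_normal((8 * dim, dim)) / np.sqrt(8 * dim)
merged = stitch_merge(tokens, grid, dim, proj)
print(merged.shape)  # 512 patches merged down to 64
```

Because attention cost grows quadratically in the patch count, each such merge reduces that cost by roughly 64x at the following stage.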

Prediction of double expression status of primary CNS lymphoma using multiparametric MRI radiomics combined with habitat radiomics: a double-center study.

Zhao J, Liang L, Li J, Li Q, Li F, Niu L, Xue C, Fu W, Liu Y, Song S, Liu X

pubmed · Sep 9 2025
Double expression lymphoma (DEL) is an independent high-risk prognostic factor for primary CNS lymphoma (PCNSL), and its diagnosis currently relies on invasive methods. This study is the first to integrate radiomics and habitat radiomics features, via intratumoral heterogeneity analysis, to enhance preoperative DEL status prediction models. Clinical, pathological, and MRI imaging data of 139 PCNSL patients from two independent centers were collected. Radiomics, habitat radiomics, and combined models were constructed using machine learning classifiers, including KNN, DT, LR, and SVM. The AUC in the test set was used to select the optimal predictive model. Decision curve analysis (DCA) and calibration curves were employed to evaluate the predictive performance of the models. SHAP analysis was used to visualize the contribution of each feature in the optimal model. Among the radiomics-based models, the Combined radiomics model constructed with LR performed best, with an AUC of 0.8779 (95% CI: 0.8171-0.9386) in the training set and 0.7166 (95% CI: 0.497-0.9361) in the test set. The Habitat radiomics model (SVM) based on T1-CE showed an AUC of 0.7446 (95% CI: 0.6503-0.8388) in the training set and 0.7433 (95% CI: 0.5322-0.9545) in the test set. Finally, the Combined all model exhibited the highest predictive performance: LR achieved AUC values of 0.8962 (95% CI: 0.8299-0.9625) and 0.8289 (95% CI: 0.6785-0.9793) in the training and test sets, respectively. The Combined all model developed in this study provides an effective reference for predicting the DEL status of PCNSL, and habitat radiomics significantly enhances the predictive efficacy.
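As a minimal illustration of the model-selection metric used above (test-set AUC), the Mann-Whitney form of AUC can be computed in a few lines; the labels and scores below are made up for illustration, not study data:

```python
# AUC = P(score of a random positive > score of a random negative),
# with ties counted as half a win (Mann-Whitney formulation).
def auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.7, 0.1]  # hypothetical classifier outputs
print(auc(labels, scores))  # -> 0.9375
```

In a comparison like the study's, the same metric would be computed on held-out test predictions from each classifier (KNN, DT, LR, SVM) and the highest-AUC model retained.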

Self-Supervised Cross-Encoder for Neurodegenerative Disease Diagnosis

Fangqi Cheng, Yingying Zhao, Xiaochen Yang

arxiv preprint · Sep 9 2025
Deep learning has shown significant potential in diagnosing neurodegenerative diseases from MRI data. However, most existing methods rely heavily on large volumes of labeled data and often yield representations that lack interpretability. To address both challenges, we propose a novel self-supervised cross-encoder framework that leverages the temporal continuity in longitudinal MRI scans for supervision. This framework disentangles learned representations into two components: a static representation, constrained by contrastive learning, which captures stable anatomical features; and a dynamic representation, guided by input-gradient regularization, which reflects temporal changes and can be effectively fine-tuned for downstream classification tasks. Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate that our method achieves superior classification accuracy and improved interpretability. Furthermore, the learned representations exhibit strong zero-shot generalization on the Open Access Series of Imaging Studies (OASIS) dataset and cross-task generalization on the Parkinson Progression Marker Initiative (PPMI) dataset. The code for the proposed method will be made publicly available.
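Conceptually, the static/dynamic disentanglement can be sketched as splitting an embedding into two halves, with the static halves of two timepoints encouraged to agree; the dimensions, noise scales, and loss form below are illustrative assumptions, not the released code:

```python
import numpy as np

# Toy sketch: for two scans of the same subject, the "static" half of the
# embedding should barely change, while the "dynamic" half carries the
# longitudinal drift. A cosine-agreement term scores the static half.
rng = np.random.default_rng(1)

def split_static_dynamic(z):
    half = z.shape[-1] // 2
    return z[..., :half], z[..., half:]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

z_t0 = rng.standard_normal(16)  # embedding at baseline scan (made up)
z_t1 = z_t0 + np.concatenate([
    0.01 * rng.standard_normal(8),  # static part: nearly unchanged
    1.00 * rng.standard_normal(8),  # dynamic part: drifts over time
])
s0, d0 = split_static_dynamic(z_t0)
s1, d1 = split_static_dynamic(z_t1)
static_loss = 1.0 - cosine(s0, s1)  # near 0 when static halves agree
print(round(static_loss, 3))
```

In the actual framework the dynamic component is additionally shaped by input-gradient regularization and then fine-tuned for classification.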

Brain CT for Diagnosis of Intracranial Disease in Ambulatory Cancer Patients: Assessment of the Diagnostic Value of Scanning Without Contrast Prior to With Contrast.

Wang E, Darbandi A, Tu L, Ballester LY, Morales CJ, Chen M, Gule-Monroe MK, Johnson JM

pubmed · Sep 9 2025
Brain imaging with MRI or CT is standard in screening for intracranial disease among ambulatory cancer patients. Although MRI offers greater sensitivity, CT is frequently employed due to its accessibility, affordability, and faster acquisition time. However, the necessity of routinely performing a non-contrast CT with the contrast-enhanced study is unknown. This study evaluates the clinical and economic utility of the non-contrast portion of the brain CT examination. A board-certified neuroradiologist reviewed 737 brain CT reports from outpatients at MD Anderson Cancer Center who underwent contrast and non-contrast CT for cancer staging (October 2014 to March 2016) to assess whether significant findings were identified only on non-contrast CT. A GPT-3 model was then fine-tuned to flag reports with a high likelihood of unique and significant non-contrast findings among 1,980 additional brain CT reports (January 2017 to April 2022). These reports were manually reviewed by two neuroradiologists, with adjudication by a third reviewer if needed. The incremental cost-effectiveness ratio of including non-contrast CT was then calculated based on Medicare reimbursement and the 95% confidence interval of the proportion of all reports in which non-contrast CT was necessary for identifying significant findings. RESULTS: Seven of 737 reports in the initial dataset revealed significant findings unique to the non-contrast CT, all of which were hemorrhage. The GPT-3 model identified 145 additional reports from the second dataset of 1,980 as having a high likelihood of unique non-contrast CT findings; these were manually reviewed. Nineteen of these reports had unique and significant non-contrast CT findings. In total, 0.96% (95% CI: 0.63%-1.40%) of reports had significant findings identified only on non-contrast CT. The incremental cost-effectiveness ratio for identifying a single significant finding on non-contrast CT that was missed on the contrast-enhanced study was $1,855 to $4,122.
In brain CT for ambulatory screening for intracranial disease in cancer patients, non-contrast CT offers limited diagnostic value beyond contrast-enhanced CT alone. Considering the financial cost, workload, and patient radiation exposure associated with performing a non-contrast CT, contrast-enhanced brain CT alone is sufficient for cancer staging in asymptomatic cancer patients. GPT-3 = Generative Pre-trained Transformer 3.
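The ICER arithmetic reduces to dividing the incremental cost of the non-contrast series by the proportion of scans on which it changes the read. The per-scan cost below is a hypothetical Medicare-like figure chosen only to show how a range like the reported $1,855 to $4,122 follows from the 95% CI on finding prevalence:

```python
# Back-of-envelope sketch, not the paper's exact cost model.
cost_per_scan = 26.0               # ASSUMED incremental cost of the non-contrast series
ci_low, ci_high = 0.0063, 0.0140   # 95% CI for unique significant findings (0.63%-1.40%)

icer_low = cost_per_scan / ci_high   # best case: more findings per dollar spent
icer_high = cost_per_scan / ci_low   # worst case
print(f"${icer_low:,.0f} to ${icer_high:,.0f} per finding detected")
```

With this assumed cost, the bounds land near $1,857 and $4,127, close to the study's reported range; the actual figure used by the authors is not stated in the abstract.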

Spherical Harmonics Representation Learning for High-Fidelity and Generalizable Super-Resolution in Diffusion MRI.

Wu R, Cheng J, Li C, Zou J, Fan W, Ma X, Guo H, Liang Y, Wang S

pubmed · Sep 9 2025
Diffusion magnetic resonance imaging (dMRI) often suffers from low spatial and angular resolution due to inherent limitations in imaging hardware and system noise, adversely affecting the accurate estimation of microstructural parameters with fine anatomical details. Deep learning-based super-resolution techniques have shown promise in enhancing dMRI resolution without increasing acquisition time. However, most existing methods are confined to either spatial or angular super-resolution, disrupting the information exchange between the two domains and limiting their effectiveness in capturing detailed microstructural features. Furthermore, traditional pixel-wise loss functions only consider pixel differences, and struggle to recover intricate image details essential for high-resolution reconstruction. We propose SHRL-dMRI, a novel Spherical Harmonics Representation Learning framework for high-fidelity, generalizable super-resolution in dMRI to address these challenges. SHRL-dMRI explores implicit neural representations and spherical harmonics to model continuous spatial and angular representations, simultaneously enhancing both spatial and angular resolution while improving the accuracy of microstructural parameter estimation. To further preserve image fidelity, a data-fidelity module and wavelet-based frequency loss are introduced, ensuring the super-resolved images preserve image consistency and retain fine details. Extensive experiments demonstrate that, compared to five other state-of-the-art methods, our method significantly enhances dMRI data resolution, improves the accuracy of microstructural parameter estimation, and provides better generalization capabilities. It maintains stable performance even under a 45× downsampling factor. The proposed method can effectively improve the resolution of dMRI data without increasing the acquisition time, providing new possibilities for future clinical applications.
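As background for the spherical-harmonics angular model, a toy sketch: fit real SH coefficients (orders l ≤ 2 here) to signal samples on the sphere by least squares, then evaluate the fit at denser directions (the angular-upsampling idea). Directions, signal, and truncation order are synthetic assumptions, not SHRL-dMRI itself:

```python
import numpy as np

def real_sh_basis(dirs):
    """Real spherical harmonics up to l=2 for unit vectors `dirs` of shape (N, 3)."""
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),                 # l=0
        0.488603 * y, 0.488603 * z, 0.488603 * x,   # l=1
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3 * z**2 - 1),
        1.092548 * x * z, 0.546274 * (x**2 - y**2), # l=2
    ], axis=1)

rng = np.random.default_rng(0)
dirs = rng.standard_normal((30, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # 30 gradient directions
coef_true = rng.standard_normal(9)
signal = real_sh_basis(dirs) @ coef_true              # synthetic band-limited signal

# Least-squares SH fit, then evaluation at 3x as many new directions.
coef_fit, *_ = np.linalg.lstsq(real_sh_basis(dirs), signal, rcond=None)
new_dirs = rng.standard_normal((90, 3))
new_dirs /= np.linalg.norm(new_dirs, axis=1, keepdims=True)
upsampled = real_sh_basis(new_dirs) @ coef_fit
print(np.allclose(coef_fit, coef_true))
```

The framework in the paper goes much further, pairing such angular representations with implicit neural representations for joint spatial-angular super-resolution.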

Transposing intensive care innovation from modern warfare to other resource-limited settings.

Jarrassier A, de Rocquigny G, Delagarde C, Ezanno AC, Josse F, Dubost C, Duranteau O, Boussen S, Pasquier P

pubmed · Sep 9 2025
Delivering intensive care in conflict zones and other resource-limited settings presents unique clinical, logistical, and ethical challenges. These contexts, characterized by disrupted infrastructure, limited personnel, and prolonged field care, require adapted strategies to ensure critical care delivery. This scoping review aims to identify and characterize medical innovations developed or implemented in recent conflicts that may be relevant and transposable to intensive care units operating in other resource-limited settings. A scoping review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) guidelines. Five major databases were searched for English-language publications from 2014 to 2025. Studies describing innovations applicable to intensive care in modern warfare or resource-limited settings were included. While many studies relied on experimental or simulated models, a subset described real-world applications in resource-limited environments, including ultrasound-guided regional analgesia, resuscitative endovascular balloon occlusion of the aorta, portable blood transfusion platforms, and artificial intelligence-supported monitoring of traumatic brain injury. Training strategies such as teleconsultation/telementoring and low-cost simulation were also emphasized. Few of these intensive care innovations have been validated in real-life wartime conditions. Innovations from modern warfare offer pragmatic and potentially transposable solutions for intensive care in resource-limited settings. Successfully adapting them requires validation and contextual adaptation, as well as concrete collaborative strategies, including tailored training programs, joint simulation exercises, and structured knowledge translation initiatives, to ensure effective and sustainable integration.

Predicting Breath Hold Task Compliance From Head Motion.

Weng TB, Porwal G, Srinivasan D, Inglis B, Rodriguez S, Jacobs DR, Schreiner PJ, Sorond FA, Sidney S, Lewis C, Launer L, Erus G, Nasrallah IM, Bryan RN, Dula AN

pubmed · Sep 8 2025
Cerebrovascular reactivity (CVR) reflects changes in cerebral blood flow in response to an acute stimulus and indicates the brain's ability to match blood flow to demand. Functional MRI with a breath-hold task can be used to elicit this vasoactive response, but data validity hinges on subject compliance, and determining breath-hold compliance often requires external monitoring equipment. This study aimed to develop a non-invasive, data-driven quality filter for breath-hold compliance using only measurements of head motion during imaging. Prospective cohort. Longitudinal data from healthy middle-aged subjects enrolled in the Coronary Artery Risk Development in Young Adults Brain MRI Study, N = 1141, 47.1% female. 3.0 Tesla gradient-echo MRI. Manual labelling of respiratory-belt data was used to determine breath-hold compliance during the MRI scan. A model to estimate the probability of non-compliance with the breath-hold task was developed using measures of head motion. The model's ability to identify scans in which the participant was not performing the breath hold was summarized using performance metrics including sensitivity, specificity, recall, and F1 score. The model was applied to additional unmarked data to assess effects on population measures of CVR. Sensitivity analysis revealed that excluding non-compliant scans using the developed model did not affect median cerebrovascular reactivity (median [q1, q3] = 1.32 [0.96, 1.71]) compared with using manual review of respiratory-belt data (1.33 [1.02, 1.74]), while reducing the interquartile range. The final model, based on a multi-layer perceptron machine learning classifier, estimated non-compliance with an accuracy of 76.9% and an F1 score of 69.5%, indicating a moderate balance between precision and recall for identifying scans in which the participant was not compliant.
The developed model provides the probability of non-compliance with a breath-hold task, which could later be used as a quality filter or included in statistical analyses. TECHNICAL EFFICACY: Stage 3.
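Accuracy and F1 of such a compliance classifier can be reproduced mechanically from a confusion matrix; the counts below are hypothetical, not the study's data:

```python
# Hypothetical confusion-matrix counts for predicting non-compliance
# (positive class = participant did NOT perform the breath hold).
tp, fp, fn, tn = 41, 14, 22, 123

accuracy = (tp + tn) / (tp + fp + fn + tn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)            # sensitivity for the non-compliance class
f1 = 2 * precision * recall / (precision + recall)
print(f"accuracy={accuracy:.3f}, F1={f1:.3f}")
```

F1 is the harmonic mean of precision and recall, which is why the abstract describes it as summarizing the balance between the two.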