
Multimodal Integration of Plasma, MRI, and Genetic Risk for Cerebral Amyloid Prediction

Yichen, W., Chen, H., Yuxin, C., Yuyan, C., Shiyun, Z., Kexin, W., Yidong, J., Tianyu, B., Yanxi, H., Mingkai, Z., Chengxiang, Y., Guozheng, F., Weijie, H., Ni, S., Ying, H.

medRxiv preprint · May 8, 2025
Accurate estimation of cerebral amyloid-β (Aβ) burden is critical for early detection and risk stratification in Alzheimer's disease (AD). While Aβ positron emission tomography (PET) remains the gold standard, its high cost, invasive nature, and limited accessibility hinder broad clinical application. Blood-based biomarkers offer a non-invasive and cost-effective alternative, but their standalone predictive accuracy remains limited by biological heterogeneity and weak reflection of central nervous system pathology. Here, we present a high-precision, multimodal machine learning prediction model that integrates plasma biomarkers, brain structural magnetic resonance imaging (sMRI) features, diffusion tensor imaging (DTI)-derived structural connectomes, and genetic risk profiles. The model was trained on 150 participants from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and externally validated on 111 participants from the SILCODE cohort. Multimodal integration substantially improved Aβ prediction, with R² increasing from 0.515 using plasma biomarkers alone to 0.637 when imaging and genetic features were added. These results highlight the potential of this multimodal machine learning approach as a scalable, non-invasive, and economically viable alternative to PET for estimating Aβ burden.
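
As a rough illustration of the multimodal integration the abstract describes, the sketch below concatenates plasma, sMRI, connectome, and genetic feature blocks and compares cross-validated R² against a plasma-only baseline. The feature blocks, dimensions, data, and ridge-regression model are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical sketch of multimodal late fusion for amyloid-burden regression.
# Feature blocks and model choice are assumptions, not the paper's method.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 150  # training-cohort size reported in the abstract

# Stand-in feature blocks (real inputs would be measured per participant).
plasma     = rng.normal(size=(n, 5))    # e.g., p-tau, Abeta42/40 ratio, ...
smri       = rng.normal(size=(n, 100))  # regional volumes / thickness
connectome = rng.normal(size=(n, 200))  # DTI structural-connectome edges
genetics   = rng.normal(size=(n, 10))   # e.g., APOE status, polygenic scores
y = rng.normal(size=n)                  # continuous amyloid burden (target)

X_plasma = plasma
X_multi = np.hstack([plasma, smri, connectome, genetics])

model = make_pipeline(StandardScaler(), RidgeCV(alphas=np.logspace(-2, 3, 20)))
for name, X in [("plasma only", X_plasma), ("multimodal", X_multi)]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: cross-validated R^2 = {r2:.3f}")
```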

Automated detection of bottom-of-sulcus dysplasia on MRI-PET in patients with drug-resistant focal epilepsy

Macdonald-Laurs, E., Warren, A. E. L., Mito, R., Genc, S., Alexander, B., Barton, S., Yang, J. Y., Francis, P., Pardoe, H. R., Jackson, G., Harvey, A. S.

medRxiv preprint · May 8, 2025
Background and Objectives: Bottom-of-sulcus dysplasia (BOSD) is a diagnostically challenging subtype of focal cortical dysplasia, with 60% missed on patients' first MRI. Automated MRI-based detection methods have been developed for focal cortical dysplasia, but not for BOSD specifically. Use of FDG-PET alongside MRI is not established in automated methods. We report the development and performance of an automated BOSD detector using combined MRI+PET data.

Methods: The training set comprised 54 mostly operated patients with BOSD. The test sets comprised 17 subsequently diagnosed patients with BOSD from the same center and 12 published patients from a different center. Across the training and test sets, 81% of patients had reportedly normal first MRIs, and most BOSDs were <1.5 cm³. In the training set, 12 features from T1-MRI, FLAIR-MRI, and FDG-PET were evaluated using a novel "pseudo-control" normalization approach to determine which features best distinguished dysplastic from normal-appearing cortex. Using the Multi-centre Epilepsy Lesion Detection group's machine-learning detection method with the addition of FDG-PET, neural network classifiers were then trained and tested on MRI+PET, MRI-only, and PET-only features. The proportion of patients whose BOSD was overlapped by the top output cluster, and by the top five output clusters, was assessed.

Results: Cortical and subcortical hypometabolism on FDG-PET were superior to MRI features in discriminating dysplastic from normal-appearing cortex. When the BOSD detector was trained on MRI+PET features, 87% of BOSDs were overlapped by one of the top five clusters (69% by the top cluster) in the training set, 76% in the prospective test set (71% top cluster), and 75% in the published test set (42% top cluster). Cluster overlap was similar when the detector was trained and tested on PET-only features, but lower when trained and tested on MRI-only features.

Conclusion: Detection of BOSD is possible using established MRI-based automated detection methods supplemented with FDG-PET features and trained on a BOSD-specific cohort. In clinical practice, an MRI+PET BOSD detector could improve assessment and outcomes in seemingly MRI-negative patients being considered for epilepsy surgery.
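
The abstract does not spell out the "pseudo-control" normalization. One plausible reading, sketched below with fabricated arrays, is z-scoring each vertex-wise feature against a reference distribution built from normal-appearing cortex, so that focal hypometabolism stands out as an outlier. The arrays and threshold are assumptions, not the authors' method.

```python
# Illustrative z-score normalization of surface features against a
# "pseudo-control" distribution drawn from normal-appearing cortex.
import numpy as np

rng = np.random.default_rng(1)
n_vertices = 10_000
features = rng.normal(size=n_vertices)   # e.g., vertex-wise FDG-PET uptake
lesion_mask = np.zeros(n_vertices, dtype=bool)
lesion_mask[:50] = True
features[lesion_mask] -= 3.0             # simulate focal hypometabolism

# Build the pseudo-control distribution from non-lesional vertices only.
ref = features[~lesion_mask]
z = (features - ref.mean()) / ref.std()

candidates = np.flatnonzero(z < -2.5)    # strongly hypometabolic vertices
overlap = np.intersect1d(candidates, np.flatnonzero(lesion_mask))
print(f"{len(candidates)} candidate vertices, {len(overlap)} inside the lesion")
```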

Improved Brain Tumor Detection in MRI: Fuzzy Sigmoid Convolution in Deep Learning

Muhammad Irfan, Anum Nawaz, Riku Klen, Abdulhamit Subasi, Tomi Westerlund, Wei Chen

arXiv preprint · May 8, 2025
Early detection and accurate diagnosis are essential to improving patient outcomes. The use of convolutional neural networks (CNNs) for tumor detection has shown promise, but existing models often suffer from overparameterization, which limits their performance gains. In this study, fuzzy sigmoid convolution (FSC) is introduced along with two additional modules: top-of-the-funnel and middle-of-the-funnel. The proposed methodology significantly reduces the number of trainable parameters without compromising classification accuracy. A novel convolutional operator is central to this approach, effectively dilating the receptive field while preserving input data integrity. This enables efficient feature map reduction and enhances the model's tumor detection capability. In the FSC-based model, fuzzy sigmoid activation functions are incorporated within convolutional layers to improve feature extraction and classification. The inclusion of fuzzy logic into the architecture improves its adaptability and robustness. Extensive experiments on three benchmark datasets demonstrate the superior performance and efficiency of the proposed model. The FSC-based architecture achieved classification accuracies of 99.17%, 99.75%, and 99.89% on three different datasets. The model employs 100 times fewer parameters than large-scale transfer learning architectures, highlighting its computational efficiency and suitability for detecting brain tumors early. This research offers lightweight, high-performance deep-learning models for medical imaging applications.
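
The abstract does not define the fuzzy sigmoid operator precisely; below is a small PyTorch sketch of one plausible form, a convolution whose output passes through a sigmoid membership function with learnable per-channel steepness and crossover point. The shapes, parameterization, and module name are illustrative assumptions, not the paper's FSC definition.

```python
# Hypothetical sketch of a convolution followed by a learnable "fuzzy sigmoid"
# membership function. The paper's exact FSC operator may differ.
import torch
import torch.nn as nn

class FuzzySigmoidConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2)
        # Learnable steepness (a) and crossover point (c) per channel.
        self.a = nn.Parameter(torch.ones(out_ch))
        self.c = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.conv(x)
        a = self.a.view(1, -1, 1, 1)
        c = self.c.view(1, -1, 1, 1)
        return torch.sigmoid(a * (z - c))  # fuzzy membership in [0, 1]

x = torch.randn(2, 1, 64, 64)   # e.g., a batch of grayscale MRI slices
layer = FuzzySigmoidConv(1, 16)
print(layer(x).shape)           # torch.Size([2, 16, 64, 64])
```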

Automated Thoracolumbar Stump Rib Detection and Analysis in a Large CT Cohort

Hendrik Möller, Hanna Schön, Alina Dima, Benjamin Keinert-Weth, Robert Graf, Matan Atad, Johannes Paetzold, Friederike Jungmann, Rickmer Braren, Florian Kofler, Bjoern Menze, Daniel Rueckert, Jan S. Kirschke

arXiv preprint · May 8, 2025
Thoracolumbar stump ribs are one of the essential indicators of thoracolumbar transitional vertebrae or enumeration anomalies. While some studies assess these anomalies manually and describe the ribs qualitatively, this study aims to automate thoracolumbar stump rib detection and to analyze their morphology quantitatively. To this end, we train a high-resolution deep-learning model for rib segmentation and show significant improvements over existing models (Dice score 0.997 vs. 0.779, p-value < 0.01). In addition, we use an iterative algorithm and piecewise linear interpolation to assess the length of the ribs, achieving a success rate of 98.2%. When analyzing morphological features, we show that stump ribs articulate more posteriorly at the vertebrae (−19.2 ± 3.8 vs. −13.8 ± 2.5, p-value < 0.01), are thinner (260.6 ± 103.4 vs. 563.6 ± 127.1, p-value < 0.01), and are oriented more downwards and sideways within the first centimeters, in contrast to full-length ribs. We show that with partially visible ribs, these features can achieve an F1-score of 0.84 in differentiating stump ribs from regular ones. We publish the model weights and masks for public use.
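
As a toy version of the length measurement described above, one can sum Euclidean distances along an ordered centerline, i.e., the arc length of a piecewise linear curve. The centerline coordinates below are fabricated purely for illustration.

```python
# Toy arc-length computation for a rib centerline via piecewise linear
# interpolation; the centerline coordinates are fabricated.
import numpy as np

t = np.linspace(0, np.pi / 2, 25)
centerline = np.stack([100 * np.cos(t), 100 * np.sin(t), 2 * t], axis=1)  # mm

segments = np.diff(centerline, axis=0)              # vectors between samples
length_mm = np.linalg.norm(segments, axis=1).sum()  # sum of segment lengths
print(f"estimated rib length: {length_mm:.1f} mm")
```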

FF-PNet: A Pyramid Network Based on Feature and Field for Brain Image Registration

Ying Zhang, Shuai Guo, Chenxi Sun, Yuchen Zhu, Jinhai Xiang

arXiv preprint · May 8, 2025
In recent years, deformable medical image registration techniques have made significant progress. However, existing models still lack efficiency in the parallel extraction of coarse- and fine-grained features. To address this, we construct a new pyramid registration network based on feature and deformation field (FF-PNet). For coarse-grained feature extraction, we design a Residual Feature Fusion Module (RFFM); for fine-grained image deformation, we propose a Residual Deformation Field Fusion Module (RDFFM). Through the parallel operation of these two modules, the model can effectively handle complex image deformations. It is worth emphasizing that the encoding stage of FF-PNet employs only traditional convolutional neural networks, without any attention mechanisms or multilayer perceptrons, yet it still achieves remarkable improvements in registration accuracy, fully demonstrating the superior feature-decoding capabilities of RFFM and RDFFM. We conducted extensive experiments on the LPBA and OASIS datasets. The results show that our network consistently outperforms popular methods on metrics such as the Dice Similarity Coefficient.
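
The internals of RFFM and RDFFM are not given here. The PyTorch sketch below shows a generic residual feature-fusion block of the kind such a pyramid network might use, offered only as an assumption about the pattern, not the actual FF-PNet design.

```python
# Generic residual feature-fusion block, one plausible reading of an
# "RFFM"-style module; the actual FF-PNet design may differ.
import torch
import torch.nn as nn

class ResidualFusion(nn.Module):
    """Fuse fixed- and moving-image features with a residual connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv3d(2 * channels, channels, 3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv3d(channels, channels, 3, padding=1),
        )

    def forward(self, feat_fixed, feat_moving):
        fused = self.fuse(torch.cat([feat_fixed, feat_moving], dim=1))
        return feat_fixed + fused  # residual connection

f = torch.randn(1, 8, 16, 16, 16)
m = torch.randn(1, 8, 16, 16, 16)
print(ResidualFusion(8)(f, m).shape)  # torch.Size([1, 8, 16, 16, 16])
```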

MoRe-3DGSMR: Motion-resolved reconstruction framework for free-breathing pulmonary MRI based on 3D Gaussian representation

Tengya Peng, Ruyi Zha, Qing Zou

arXiv preprint · May 8, 2025
This study presents an unsupervised, motion-resolved reconstruction framework for high-resolution, free-breathing pulmonary magnetic resonance imaging (MRI), built on a three-dimensional Gaussian representation (3DGS). The proposed method leverages 3DGS to address the challenges of motion-resolved 3D isotropic pulmonary MRI reconstruction by enabling data smoothing between voxels for continuous spatial representation. Pulmonary MRI data are acquired using a golden-angle radial sampling trajectory, with respiratory motion signals extracted from the center of k-space in each radial spoke. Based on the estimated motion signal, the k-space data are sorted into multiple respiratory phases. A 3DGS framework is then applied to reconstruct a reference image volume from the first motion state. Subsequently, a patient-specific convolutional neural network is trained to estimate the deformation vector fields (DVFs), which are used to generate the remaining motion states through spatial transformation of the reference volume. The proposed reconstruction pipeline is evaluated on six datasets from six subjects and benchmarked against three state-of-the-art reconstruction methods. The experimental findings demonstrate that the proposed framework effectively reconstructs high-resolution, motion-resolved pulmonary MR images. Compared with existing approaches, it achieves superior image quality, reflected in higher signal-to-noise and contrast-to-noise ratios. The proposed unsupervised 3DGS-based method enables accurate motion-resolved pulmonary MRI with isotropic spatial resolution, and its superior performance on image quality metrics highlights its potential as a robust solution for clinical pulmonary MR imaging.
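
One concrete step in that pipeline, sorting radial spokes into respiratory phases from the k-space-center signal, can be sketched as below. The simulated motion signal, noise level, and number of phases are invented for illustration and are not the authors' parameters.

```python
# Illustrative binning of radial k-space spokes into respiratory phases using
# a (simulated) k-space-center magnitude as the surrogate motion signal.
import numpy as np

rng = np.random.default_rng(2)
n_spokes = 2000
t = np.arange(n_spokes) * 0.005                    # spoke timestamps (s)
motion = np.sin(2 * np.pi * 0.25 * t)              # ~15 breaths per minute
signal = motion + 0.1 * rng.normal(size=n_spokes)  # noisy k-space center

n_phases = 4
# Amplitude binning: equal-count bins from end-expiration to end-inspiration.
edges = np.quantile(signal, np.linspace(0, 1, n_phases + 1))
phase = np.clip(np.digitize(signal, edges[1:-1]), 0, n_phases - 1)

for p in range(n_phases):
    print(f"phase {p}: {np.sum(phase == p)} spokes")
```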

Cross-scale prediction of glioblastoma MGMT methylation status based on deep learning combined with magnetic resonance images and pathology images

Wu, X., Wei, W., Li, Y., Ma, M., Hu, Z., Xu, Y., Hu, W., Chen, G., Zhao, R., Kang, X., Yin, H., Xi, Y.

medRxiv preprint · May 8, 2025
Background: In glioblastoma (GBM), promoter methylation of O6-methylguanine-DNA methyltransferase (MGMT) is associated with benefit from chemotherapy but has not been accurately evaluated from radiological and pathological sections. We aimed to develop and validate an MRI- and pathology-image-based deep learning radiopathomics model for predicting MGMT promoter methylation in patients with GBM.

Methods: Pathologically confirmed isocitrate dehydrogenase (IDH) wild-type GBM patients (n=207) were retrospectively collected from three centers, all of whom underwent MRI scanning within 2 weeks prior to surgery. A pre-trained ResNet50 was used as the feature extractor. Features of 1024 dimensions were extracted from MRI and pathological images, respectively, and screened for modeling. Feature fusion was then performed by calculating normalized multimodal MRI fusion features and pathological features, and prediction models of MGMT based on deep learning radiomics, pathomics, and radiopathomics (DLRM, DLPM, DLRPM) were constructed and applied to internal and external validation cohorts.

Results: In the training, internal validation, and external validation cohorts, the DLRPM further improved predictive performance, performing significantly better than the DLRM and DLPM, with AUCs of 0.920 (95% CI 0.870-0.968), 0.854 (95% CI 0.702-1), and 0.840 (95% CI 0.625-1), respectively.

Conclusion: We developed and validated cross-scale radiology and pathology models for predicting MGMT methylation status, with the DLRPM performing best. This cross-scale approach paves the way for further research and clinical applications.
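
The feature-extraction step might look roughly like the torchvision sketch below. Note that ResNet50's final pooled output is 2048-dimensional; pooling the layer3 output (1024 channels) is one way to obtain 1024-D features, but the authors' exact extraction layer is an assumption here.

```python
# Sketch of 1024-D deep-feature extraction with a pretrained ResNet50 by
# global-average-pooling the layer3 output; the exact layer is an assumption.
import torch
from torchvision import models

resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
resnet.eval()

def extract_1024d(x: torch.Tensor) -> torch.Tensor:
    """Pooled layer3 features for a batch of 3-channel images."""
    with torch.no_grad():
        x = resnet.conv1(x); x = resnet.bn1(x); x = resnet.relu(x)
        x = resnet.maxpool(x)
        x = resnet.layer1(x); x = resnet.layer2(x); x = resnet.layer3(x)
        return x.mean(dim=(2, 3))  # (N, 1024)

mri_slice = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed slice
print(extract_1024d(mri_slice).shape)    # torch.Size([1, 1024])
```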

Neuroanatomical-Based Machine Learning Prediction of Alzheimer's Disease Across Sex and Age

Jogeshwar, B. K., Lu, S., Nephew, B. C.

medRxiv preprint · May 7, 2025
Alzheimer's disease (AD) is a progressive neurodegenerative disorder characterized by cognitive decline and memory loss. In 2024, in the US alone, it affected approximately 1 in 9 people aged 65 and older, equivalent to 6.9 million individuals. Early detection and accurate AD diagnosis are crucial for improving patient outcomes. Magnetic resonance imaging (MRI) has emerged as a valuable tool for examining brain structure and identifying potential AD biomarkers. This study employs machine learning on numerical features derived from anatomical MRI scans to identify key brain regions associated with AD, going beyond standard statistical methods. Using a random forest algorithm, we achieved 92.87% accuracy in distinguishing AD from mild cognitive impairment and cognitively normal participants. Subgroup analyses across nine sex- and age-based cohorts (69-76 years, 77-84 years, and unified 69-84 years) revealed the hippocampus, amygdala, and entorhinal cortex as consistent top-ranked predictors. These regions showed distinct volume reductions across age and sex groups, reflecting distinct age- and sex-related neuroanatomical patterns. For instance, younger males and females (aged 69-76) exhibited volume decreases in the right hippocampus, suggesting its importance in the early stages of AD. Older males (77-84) showed substantial volume decreases in the left inferior temporal cortex. Additionally, the left middle temporal cortex showed decreased volume in females, suggesting a potential female-specific influence, while the right entorhinal cortex may have a male-specific impact. These age-specific sex differences could inform clinical research and treatment strategies, aiding in identifying neuroanatomical markers and therapeutic targets for future clinical interventions.
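
A minimal scikit-learn analogue of that pipeline, using synthetic regional-volume features and labels, could look like this; the region names, data, and hyperparameters are invented, and feature importances stand in for the region ranking the study reports.

```python
# Minimal sketch of a Random Forest classifier over regional brain volumes,
# with feature importances to rank regions; data and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
regions = ["hippocampus_R", "amygdala_L", "entorhinal_R", "mid_temporal_L"]
X = rng.normal(size=(300, len(regions)))  # stand-in regional volumes
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.8, size=300) < 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.3f}")
for name, imp in sorted(zip(regions, clf.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name}: {imp:.3f}")
```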

Interpretable MRI-Based Deep Learning for Alzheimer's Risk and Progression

Lu, B., Chen, Y.-R., Li, R.-X., Zhang, M.-K., Yan, S.-Z., Chen, G.-Q., Castellanos, F. X., Thompson, P. M., Lu, J., Han, Y., Yan, C.-G.

medRxiv preprint · May 7, 2025
Timely intervention for Alzheimer's disease (AD) requires early detection. The development of immunotherapies targeting amyloid-beta and tau underscores the need for accessible, time-efficient biomarkers for early diagnosis. Here, we directly applied our previously developed MRI-based deep learning model for AD to the large Chinese SILCODE cohort (722 participants, 1,105 brain MRI scans). The model, initially trained on North American data, demonstrated robust cross-ethnic generalization without any retraining or fine-tuning, achieving an AUC of 91.3% for AD classification with a sensitivity of 95.2%. It successfully identified 86.7% of individuals at risk of AD progression more than 5 years in advance. Individuals identified as high-risk exhibited significantly shorter median progression times. By integrating an interpretable deep learning brain-risk-map approach, we identified AD brain subtypes, including an MCI subtype associated with rapid cognitive decline. The model's risk scores showed significant correlations with cognitive measures and plasma biomarkers, such as tau proteins and neurofilament light chain (NfL). These findings underscore the exceptional generalizability and clinical utility of MRI-based deep learning models, especially in large and diverse populations, offering valuable tools for early therapeutic intervention. The model has been made open source and deployed to a free online website for AD risk prediction, to assist in early screening and intervention.
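
The external-validation metrics quoted above, ROC AUC plus sensitivity at a chosen operating point, can be computed as in the sketch below; the scores, labels, and threshold are placeholders rather than the study's outputs.

```python
# Sketch of the external-validation metrics quoted above: ROC AUC plus
# sensitivity at a fixed decision threshold. Scores/labels are placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
y_true = rng.integers(0, 2, size=500)             # 1 = AD
scores = y_true * 1.2 + rng.normal(size=500)      # model risk scores

auc = roc_auc_score(y_true, scores)
threshold = 0.5                                   # assumed operating point
sens = np.mean(scores[y_true == 1] >= threshold)  # true-positive rate
print(f"AUC = {auc:.3f}, sensitivity = {sens:.3f}")
```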

False Promises in Medical Imaging AI? Assessing Validity of Outperformance Claims

Evangelia Christodoulou, Annika Reinke, Pascaline Andrè, Patrick Godau, Piotr Kalinowski, Rola Houhou, Selen Erkan, Carole H. Sudre, Ninon Burgos, Sofiène Boutaj, Sophie Loizillon, Maëlys Solal, Veronika Cheplygina, Charles Heitz, Michal Kozubek, Michela Antonelli, Nicola Rieke, Antoine Gilson, Leon D. Mayer, Minu D. Tizabi, M. Jorge Cardoso, Amber Simpson, Annette Kopp-Schneider, Gaël Varoquaux, Olivier Colliot, Lena Maier-Hein

arXiv preprint · May 7, 2025
Performance comparisons are fundamental in medical imaging Artificial Intelligence (AI) research, often driving claims of superiority based on relative improvements in common performance metrics. However, such claims frequently rely solely on empirical mean performance. In this paper, we investigate whether newly proposed methods genuinely outperform the state of the art by analyzing a representative cohort of medical imaging papers. We quantify the probability of false claims with a Bayesian approach that leverages reported results alongside empirically estimated model congruence to assess whether the relative ranking of methods is likely to have occurred by chance. According to our results, the majority of papers (>80%) claim outperformance when introducing a new method. Our analysis further revealed a high (>5%) probability of false outperformance claims in 86% of classification papers and 53% of segmentation papers. These findings highlight a critical flaw in current benchmarking practices: claims of outperformance in medical imaging AI are frequently unsubstantiated, posing a risk of misdirecting future research efforts.
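
A toy Monte Carlo of the core question, how often a method with no true advantage would still win the ranking by chance given congruent per-case performance, might look like this. The effect sizes, congruence level, and sample sizes are invented, and this is the intuition behind the analysis, not the authors' Bayesian procedure.

```python
# Toy Monte Carlo: how often does method B "outperform" method A by chance
# when their per-case scores are highly congruent and there is no true gap?
import numpy as np

rng = np.random.default_rng(5)
n_cases, n_sims, rho = 100, 10_000, 0.9  # test-set size, runs, congruence

wins = 0
for _ in range(n_sims):
    base = rng.normal(size=n_cases)      # shared per-case difficulty
    noise_a = rng.normal(size=n_cases)
    noise_b = rng.normal(size=n_cases)
    a = 0.80 + 0.05 * (rho * base + np.sqrt(1 - rho**2) * noise_a)
    b = 0.80 + 0.05 * (rho * base + np.sqrt(1 - rho**2) * noise_b)
    wins += b.mean() > a.mean()          # B would claim outperformance

print(f"P(B beats A by chance) ~= {wins / n_sims:.3f}")  # ~0.5 with no gap
```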