
Deep learning ensemble for abdominal aortic calcification scoring from lumbar spine X-ray and DXA images.

Voss A, Suoranta S, Nissinen T, Hurskainen O, Masarwah A, Sund R, Tohka J, Väänänen SP

PubMed · Aug 22 2025
Abdominal aortic calcification (AAC) is an independent predictor of cardiovascular diseases (CVDs). AAC is typically detected as an incidental finding in spine scans. Early detection of AAC through opportunistic screening using any available imaging modalities could help identify individuals with a higher risk of developing clinical CVDs. However, AAC is not routinely assessed in clinics, and manual scoring from projection images is time-consuming and prone to inter-rater variability. Automated AAC scoring methods exist, but earlier approaches have not accounted for the inherent variability in AAC scoring and were each developed for a single imaging modality. We propose an automated method for quantifying AAC from lumbar spine X-ray and Dual-energy X-ray Absorptiometry (DXA) images using an ensemble of convolutional neural network models that predicts a distribution of probable AAC scores. We treat the AAC score as a normally distributed random variable to account for the variability of manual scoring. The mean and variance of the assumed normal AAC distributions are estimated from the manual annotations, and the models in the ensemble are trained by simulating AAC scores from these distributions. Our proposed ensemble approach successfully extracted AAC scores from both X-ray and DXA images, with predicted score distributions demonstrating strong agreement with manual annotations, as evidenced by concordance correlation coefficients of 0.930 for X-ray and 0.912 for DXA. The prediction error between the average estimates of our approach and the average manual annotations was lower than the errors reported previously, highlighting the benefit of incorporating uncertainty in AAC scoring.
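The training idea described above (treating each manual AAC score as a draw from a per-image normal distribution) can be sketched in a few lines. This is a minimal illustration, not the authors' code; the per-image means and spreads, and the clipping to a 24-point scale, are assumptions for the example.

```python
# Minimal sketch (not the authors' implementation): resampling per-image AAC
# targets from assumed normal distributions when training one ensemble member.
import numpy as np

rng = np.random.default_rng(0)

def sample_targets(mu, sigma):
    """Draw one simulated AAC score per image from N(mu_i, sigma_i^2).
    Clipping to [0, 24] assumes a 24-point scoring scale."""
    return np.clip(rng.normal(mu, sigma), 0.0, 24.0)

# Hypothetical per-image statistics derived from manual annotations.
mu = np.array([2.0, 10.5, 17.0])     # mean annotated AAC score per image
sigma = np.array([0.8, 1.5, 2.0])    # annotation spread per image

for epoch in range(3):
    targets = sample_targets(mu, sigma)   # fresh simulated scores every epoch
    # ... train the CNN regression model on (image, target) pairs here ...
    print(epoch, targets.round(2))
```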

Real-world federated learning for the brain imaging scientist

Denissen, S., Laton, J., Grothe, M., Vaneckova, M., Uher, T., Kudrna, M., Horakova, D., Baijot, J., Penner, I.-K., Kirsch, M., Motyl, J., De Vos, M., Chen, O. Y., Van Schependom, J., Sima, D. M., Nagels, G.

medRxiv preprint · Aug 22 2025
Background: Federated learning (FL) could boost deep learning in neuroimaging but is rarely deployed in a real-world scenario, where its true potential lies. Here, we propose FLightcase, a new FL toolbox tailored for brain research. We tested FLightcase on a real-world FL network to predict the cognitive status of patients with multiple sclerosis (MS) from brain magnetic resonance imaging (MRI). Methods: We first trained a DenseNet neural network to predict age from T1-weighted brain MRI on three open-source datasets, IXI (586 images), SALD (491 images) and CamCAN (653 images). These were distributed across the three centres in our FL network, Brussels (BE), Greifswald (DE) and Prague (CZ). We benchmarked this federated model against a centralised version. The best-performing brain age model was then fine-tuned to predict performance on the Symbol Digit Modalities Test (SDMT) of patients with MS (Brussels: 96 images, Greifswald: 756 images, Prague: 2424 images). Shallow transfer learning (TL) was compared with deep transfer learning, updating the weights of the last layer or of the entire network, respectively. Results: Centralised training outperformed federated training, predicting age with a mean absolute error (MAE) of 6.00 versus 9.02. Federated training yielded a Pearson correlation (all p < .001) between true and predicted age of .78 (IXI, Brussels), .78 (SALD, Greifswald) and .86 (CamCAN, Prague). Fine-tuning of the centralised model to SDMT was most successful with a deep TL paradigm (MAE = 9.12) compared to shallow TL (MAE = 14.08), and, for Brussels, Greifswald and Prague respectively, predicted SDMT with an MAE of 11.50, 9.64 and 8.86 and a Pearson correlation between true and predicted SDMT of .10 (p = .668), .42 (p < .001) and .51 (p < .001). Conclusion: Real-world federated learning using FLightcase is feasible for neuroimaging research in MS, enabling access to a large MS imaging database without sharing this data. The federated SDMT-decoding model is promising and could be improved in the future by adopting FL algorithms that address the non-IID data issue and by considering other imaging modalities. We hope our detailed real-world experiments and open-source distribution of FLightcase will prompt researchers to move beyond simulated FL environments.
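The shallow-versus-deep transfer learning comparison (updating only the last layer versus the whole network) can be illustrated with a short PyTorch sketch. This is an assumption-laden illustration: torchvision's 2D DenseNet-121 stands in for the paper's DenseNet (which operates on 3D MRI), and the single-output regression head for SDMT is hypothetical.

```python
# Minimal sketch (assumptions: 2D torchvision DenseNet-121 as a stand-in for the
# paper's DenseNet; a 1-output regression head for SDMT prediction).
import torch
import torch.nn as nn
from torchvision.models import densenet121

def build_model() -> nn.Module:
    model = densenet121(weights=None)
    model.classifier = nn.Linear(model.classifier.in_features, 1)  # regression head
    return model

def trainable_params(model: nn.Module, mode: str = "shallow"):
    """Shallow TL: update only the last layer. Deep TL: update the entire network."""
    if mode == "shallow":
        for p in model.parameters():
            p.requires_grad = False
        for p in model.classifier.parameters():
            p.requires_grad = True
        return model.classifier.parameters()
    for p in model.parameters():
        p.requires_grad = True
    return model.parameters()

model = build_model()
optimizer = torch.optim.Adam(trainable_params(model, mode="deep"), lr=1e-4)
# ... the usual training loop over (MRI, SDMT score) pairs follows ...
```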

Covid-19 diagnosis using privacy-preserving data monitoring: an explainable AI deep learning model with blockchain security.

Bala K, Kumar KA, Venu D, Dudi BP, Veluri SP, Nirmala V

PubMed · Aug 22 2025
The COVID-19 pandemic emphasised the necessity for prompt, precise diagnostics, secure data storage, and robust privacy protection in healthcare. Existing diagnostic systems often suffer from limited transparency, inadequate performance, and challenges in ensuring data security and privacy. The research proposes a novel privacy-preserving diagnostic framework, Heterogeneous Convolutional-recurrent attention Transfer learning based ResNeXt with Modified Greater Cane Rat optimisation (HCTR-MGR), that integrates deep learning, Explainable Artificial Intelligence (XAI), and blockchain technology. The HCTR model combines convolutional layers for spatial feature extraction, recurrent layers for capturing spatial dependencies, and attention mechanisms to highlight diagnostically significant regions. A ResNeXt-based transfer learning backbone enhances performance, while the MGR algorithm improves robustness and convergence. A trust-based permissioned blockchain stores encrypted patient metadata to ensure data security and integrity and eliminates centralised vulnerabilities. The framework also incorporates SHAP and LIME for interpretable predictions. Experimental evaluation on two benchmark chest X-ray datasets demonstrates superior diagnostic performance, achieving 98-99% accuracy, 97-98% precision, 95-97% recall, 99% specificity, and 95-98% F1-score, offering a 2-6% improvement over conventional models such as ResNet, SARS-Net, and PneuNet. These results underscore the framework's potential for scalable, secure, and clinically trustworthy deployment in real-world healthcare systems.
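The convolutional-recurrent-attention combination described above can be sketched generically. The block below is a simplified stand-in, not the HCTR-MGR architecture: the layer sizes, the GRU over flattened spatial positions, and the attention-pooling head are all illustrative assumptions.

```python
# Minimal sketch (generic conv + recurrent + attention head; not the paper's model).
import torch
import torch.nn as nn

class ConvRecurrentAttention(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.gru = nn.GRU(input_size=64, hidden_size=64, batch_first=True)
        self.attn = nn.Linear(64, 1)                  # scores each spatial position
        self.fc = nn.Linear(64, num_classes)

    def forward(self, x):                             # x: (B, 1, H, W) chest X-ray
        f = self.conv(x)                              # (B, 64, H', W') spatial features
        seq = f.flatten(2).transpose(1, 2)            # (B, H'*W', 64) position sequence
        out, _ = self.gru(seq)                        # recurrent pass over positions
        weights = torch.softmax(self.attn(out), dim=1)
        pooled = (weights * out).sum(dim=1)           # attention-weighted pooling
        return self.fc(pooled)

model = ConvRecurrentAttention()
logits = model(torch.randn(2, 1, 224, 224))
print(logits.shape)                                   # torch.Size([2, 2])
```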

Diagnostic performance of T1-Weighted MRI gray matter biomarkers in Parkinson's disease: A systematic review and meta-analysis.

Torres-Parga A, Gershanik O, Cardona S, Guerrero J, Gonzalez-Ojeda LM, Cardona JF

PubMed · Aug 22 2025
T1-weighted structural MRI has advanced our understanding of Parkinson's disease (PD), yet its diagnostic utility in clinical settings remains unclear. To assess the diagnostic performance of T1-weighted MRI gray matter (GM) metrics in distinguishing PD patients from healthy controls and to identify limitations affecting clinical applicability. A systematic review and meta-analysis were conducted on studies reporting sensitivity, specificity, or AUC for PD classification using T1-weighted MRI. Of 2906 screened records, 26 met inclusion criteria, and 10 provided sufficient data for quantitative synthesis. The risk of bias and heterogeneity were evaluated, and sensitivity analyses were performed by excluding influential studies. Pooled estimates showed a sensitivity of 0.71 (95% CI: 0.70-0.72), specificity of 0.889 (95% CI: 0.86-0.92), and overall accuracy of 0.909 (95% CI: 0.89-0.93). These metrics improved after excluding outliers, reducing heterogeneity (I² dropping from 95.7% to 0%). Frequently reported regions showing structural alterations included the substantia nigra, striatum, thalamus, medial temporal cortex, and middle frontal gyrus. However, region-specific diagnostic metrics could not be consistently synthesized due to methodological variability. Machine learning approaches, particularly support vector machines and neural networks, showed enhanced performance with appropriate validation. T1-weighted MRI gray matter metrics demonstrate moderate accuracy in differentiating PD from controls but are not yet suitable as standalone diagnostic tools. Greater methodological standardization, external validation, and integration with clinical and biological data are needed to support precision neurology and clinical translation.
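For readers unfamiliar with the heterogeneity statistic cited above, the sketch below shows how a pooled estimate, Cochran's Q and I² are computed from study-level results. The input values are purely illustrative and are not the review's data.

```python
# Minimal sketch (illustrative numbers only): inverse-variance pooling of
# study-level sensitivities with Cochran's Q and the I^2 heterogeneity statistic.
import numpy as np

def pool_and_heterogeneity(estimates, variances):
    w = 1.0 / np.asarray(variances)
    theta = np.asarray(estimates)
    pooled = np.sum(w * theta) / np.sum(w)          # inverse-variance pooled estimate
    q = np.sum(w * (theta - pooled) ** 2)           # Cochran's Q
    df = len(theta) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, q, i2

# Hypothetical per-study sensitivities and their variances.
pooled, q, i2 = pool_and_heterogeneity([0.68, 0.73, 0.70, 0.74],
                                        [0.001, 0.002, 0.0015, 0.001])
print(round(pooled, 3), round(q, 2), round(i2, 1))
```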

Learning Explainable Imaging-Genetics Associations Related to a Neurological Disorder

Jueqi Wang, Zachary Jacokes, John Darrell Van Horn, Michael C. Schatz, Kevin A. Pelphrey, Archana Venkataraman

arXiv preprint · Aug 22 2025
While imaging-genetics holds great promise for unraveling the complex interplay between brain structure and genetic variation in neurological disorders, traditional methods are limited to simplistic linear models or to black-box techniques that lack interpretability. In this paper, we present NeuroPathX, an explainable deep learning framework that uses an early fusion strategy powered by cross-attention mechanisms to capture meaningful interactions between structural variations in the brain derived from MRI and established biological pathways derived from genetics data. To enhance interpretability and robustness, we introduce two loss functions over the attention matrix - a sparsity loss that focuses on the most salient interactions and a pathway similarity loss that enforces consistent representations across the cohort. We validate NeuroPathX on both autism spectrum disorder and Alzheimer's disease. Our results demonstrate that NeuroPathX outperforms competing baseline approaches and reveals biologically plausible associations linked to the disorder. These findings underscore the potential of NeuroPathX to advance our understanding of complex brain disorders. Code is available at https://github.com/jueqiw/NeuroPathX .
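The two attention regularisers described in the abstract can be approximated as follows. The exact loss forms are not given, so this sketch makes assumptions: an entropy penalty stands in for the sparsity loss, and a negative mean pairwise cosine similarity between subjects' attention maps stands in for the pathway similarity loss.

```python
# Minimal sketch (assumed loss forms, not NeuroPathX's actual definitions):
# regularisers over a cross-attention matrix of shape (batch, brain_ROIs, pathways).
import torch
import torch.nn.functional as F

def sparsity_loss(attn: torch.Tensor) -> torch.Tensor:
    """Mean row entropy; lower entropy concentrates attention on few salient interactions."""
    eps = 1e-8
    return -(attn * (attn + eps).log()).sum(dim=-1).mean()

def pathway_similarity_loss(attn: torch.Tensor) -> torch.Tensor:
    """Encourage consistent attention patterns across the subjects in a batch."""
    flat = F.normalize(attn.flatten(1), dim=1)     # (batch, ROIs * pathways)
    sim = flat @ flat.t()                          # pairwise cosine similarities
    n = attn.shape[0]
    off_diag = sim.sum() - sim.diagonal().sum()
    return -off_diag / (n * (n - 1))               # maximise mean off-diagonal similarity

attn = torch.softmax(torch.randn(8, 100, 50), dim=-1)   # toy maps: 8 subjects, 100 ROIs, 50 pathways
reg = 0.1 * sparsity_loss(attn) + 0.1 * pathway_similarity_loss(attn)
print(reg.item())   # added to the task loss during training
```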

Unlocking the potential of radiomics in identifying fibrosing and inflammatory patterns in interstitial lung disease.

Colligiani L, Marzi C, Uggenti V, Colantonio S, Tavanti L, Pistelli F, Alì G, Neri E, Romei C

PubMed · Aug 22 2025
To differentiate interstitial lung diseases (ILDs) with fibrotic and inflammatory patterns using high-resolution computed tomography (HRCT) and a radiomics-based artificial intelligence (AI) pipeline. This single-center study included 84 patients: 50 with idiopathic pulmonary fibrosis (IPF), representative of a fibrotic pattern, and 34 with cellular non-specific interstitial pneumonia (NSIP) secondary to connective tissue disease (CTD), as an example of a mostly inflammatory pattern. For a secondary objective, we analyzed 50 additional patients with COVID-19 pneumonia. We performed semi-automatic segmentation of ILD regions using a deep learning model followed by manual review. From each segmented region, 103 radiomic features were extracted. Classification was performed using an XGBoost model with 1000 bootstrap repetitions, and SHapley Additive exPlanations (SHAP) were applied to identify the most predictive features. The model accurately distinguished a fibrotic ILD pattern from an inflammatory ILD one, achieving an average test set accuracy of 0.91 and an AUROC of 0.98. The classification was driven by radiomic features capturing differences in lung morphology, intensity distribution, and textural heterogeneity between the two disease patterns. In differentiating cellular NSIP from COVID-19, the model achieved an average accuracy of 0.89. Inflammatory ILDs exhibited more uniform imaging patterns compared to the greater variability typically observed in viral pneumonia. Radiomics combined with explainable AI offers promising diagnostic support in distinguishing fibrotic from inflammatory ILD patterns and differentiating inflammatory ILDs from viral pneumonias. This approach could enhance diagnostic precision and provide quantitative support for personalized ILD management.
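A compressed version of the classification step (XGBoost with bootstrap resampling, followed by SHAP attributions) is sketched below. The feature matrix, labels, and hyperparameters are toy assumptions; only the overall pattern mirrors the pipeline described.

```python
# Minimal sketch (toy data, not the study's pipeline): XGBoost classification of
# fibrotic vs. inflammatory ILD from radiomic features, with bootstrap resampling
# and SHAP values for feature attribution.
import numpy as np
import shap
from xgboost import XGBClassifier
from sklearn.utils import resample

X = np.random.rand(84, 103)              # 103 radiomic features per patient (toy)
y = np.r_[np.ones(50), np.zeros(34)]     # 1 = IPF (fibrotic), 0 = cellular NSIP

for b in range(10):                      # bootstrap repetitions (the paper uses 1000)
    Xb, yb = resample(X, y, random_state=b)
    model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
    model.fit(Xb, yb)
    # ... evaluate each fit on held-out cases and collect accuracy / AUROC ...

explainer = shap.TreeExplainer(model)    # attributions for the last fitted model
shap_values = explainer.shap_values(X)
```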

Relationship Between [¹⁸F]FDG PET/CT Texture Analysis and Progression-Free Survival in Patients Diagnosed With Invasive Breast Carcinoma.

Bülbül O, Bülbül HM, Göksel S

PubMed · Aug 22 2025
Breast cancer is the most common cancer and the leading cause of cancer-related deaths in women. Texture analysis provides crucial prognostic information about many types of cancer, including breast cancer. The aim was to examine the relationship between texture features (TFs) of 2-deoxy-2-[¹⁸F]fluoro-D-glucose positron emission tomography (PET)/computed tomography and disease progression in patients with invasive breast cancer. TFs of the primary malignant lesion were extracted from PET images of 112 patients. TFs that showed significant differences between patients who achieved one-, three-, and five-year progression-free survival (PFS) and those who did not were selected and subjected to the least absolute shrinkage and selection operator regression method to reduce features and prevent overfitting. Machine learning (ML) was used to predict PFS using TFs and selected clinicopathological parameters. In models using only TFs, random forest predicted one-, three-, and five-year PFS with area under the curve (AUC) values of 0.730, 0.758, and 0.797, respectively. Naive Bayes predicted one-, three-, and five-year PFS with AUC values of 0.857, 0.804, and 0.843, respectively. The neural network predicted one-, three-, and five-year PFS with AUC values of 0.782, 0.828, and 0.780, respectively. These findings indicated increased AUC values when the models combined TFs with clinicopathological parameters. The lowest AUC values of the models combining TFs and clinicopathological parameters when predicting one-year, three-year, and five-year PFS were 0.867, 0.898, and 0.867, respectively. ML models incorporating PET-derived TFs and clinical parameters may assist in predicting progression during the pre-treatment period in patients with invasive breast carcinoma.
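The feature-selection-plus-classifier workflow (LASSO to reduce the texture features, then random forest, naive Bayes and a neural network) can be condensed into the following sketch. The data, feature count, and cross-validation scheme are illustrative assumptions.

```python
# Minimal sketch (toy data; not the study's features or folds): LASSO-based
# feature reduction followed by the three classifiers reported for PFS prediction.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(112, 60)          # PET texture features (toy)
y = np.random.randint(0, 2, 112)     # 1 = progression within the time window (toy)

lasso = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(lasso.coef_)           # features with non-zero coefficients
X_sel = X[:, selected] if selected.size else X

for clf in (RandomForestClassifier(), GaussianNB(), MLPClassifier(max_iter=1000)):
    auc = cross_val_score(clf, X_sel, y, cv=5, scoring="roc_auc").mean()
    print(type(clf).__name__, round(auc, 3))
```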

AlzhiNet: Traversing from 2D-CNN to 3D-CNN, Towards Early Detection and Diagnosis of Alzheimer's Disease.

Akindele RG, Adebayo S, Yu M, Kanda PS

PubMed · Aug 22 2025
Alzheimer's disease (AD) is a progressive neurodegenerative disorder with increasing prevalence among the ageing population, necessitating early and accurate diagnosis for effective disease management. In this study, we present a novel hybrid deep learning framework, AlzhiNet, that integrates both 2D convolutional neural networks (2D-CNNs) and 3D convolutional neural networks (3D-CNNs), along with a custom loss function and volumetric data augmentation, to enhance feature extraction and improve classification performance in AD diagnosis. According to extensive experiments, AlzhiNet outperforms standalone 2D and 3D models, highlighting the importance of combining these complementary representations of data. The depth and quality of 3D volumes derived from the augmented 2D slices also significantly influence the model's performance. The results indicate that carefully selecting weighting factors in hybrid predictions is imperative for achieving optimal results. Our framework has been validated on magnetic resonance imaging (MRI) data from the Kaggle and MIRIAD datasets, obtaining accuracies of 98.9% and 99.99%, respectively, with an AUC of 100%. Furthermore, AlzhiNet was studied under a variety of perturbation scenarios on the Alzheimer's Kaggle dataset, including Gaussian noise, brightness, contrast, salt and pepper noise, color jitter, and occlusion. The results obtained show that AlzhiNet is more robust to perturbations than ResNet-18, making it an excellent choice for real-world applications. This approach represents a promising advancement in the early diagnosis and treatment planning for AD.
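The "weighting factors in hybrid predictions" mentioned above suggest a weighted fusion of the 2D and 3D branch outputs. The sketch below assumes a simple convex combination of class probabilities; AlzhiNet's actual fusion rule and custom loss are not detailed in the abstract.

```python
# Minimal sketch (assumed fusion rule, not AlzhiNet's): weighted combination of
# class probabilities from the 2D-CNN and 3D-CNN branches.
import torch

def hybrid_predict(logits_2d: torch.Tensor, logits_3d: torch.Tensor, w: float = 0.5) -> torch.Tensor:
    """Combine per-class probabilities from the 2D and 3D branches with weight w."""
    p2d = torch.softmax(logits_2d, dim=1)
    p3d = torch.softmax(logits_3d, dim=1)
    return w * p2d + (1.0 - w) * p3d

logits_2d = torch.randn(4, 4)   # toy batch, 4 AD stages
logits_3d = torch.randn(4, 4)
for w in (0.3, 0.5, 0.7):       # the weighting factor is a tunable hyperparameter
    preds = hybrid_predict(logits_2d, logits_3d, w).argmax(dim=1)
    print(w, preds.tolist())
```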

Application of contrast-enhanced CT-driven multimodal machine learning models for pulmonary metastasis prediction in head and neck adenoid cystic carcinoma.

Gong W, Cui Q, Fu S, Wu Y

PubMed · Aug 22 2025
This study explores radiomics and deep learning for predicting pulmonary metastasis in head and neck Adenoid Cystic Carcinoma (ACC) and assesses the performance of machine learning (ML) algorithms. The study retrospectively analyzed contrast-enhanced CT imaging data and clinical records from 130 patients with pathologically confirmed ACC in the head and neck region. The dataset was randomly split into training and test sets at a 7:3 ratio. Radiomic features and deep learning-derived features were extracted and subsequently integrated through multi-feature fusion. Z-score normalization was applied to the training and test sets. Hypothesis testing selected significant features, followed by LASSO regression (5-fold CV) identifying 7 predictive features. Nine machine learning algorithms were employed to build predictive models for ACC pulmonary metastasis: ada, KNN, rf, NB, GLM, LDA, rpart, SVM-RBF, and GBM. Models were trained using the training set and tested on the test set. Model performance was evaluated using metrics such as recall, sensitivity, PPV, F1-score, precision, prevalence, NPV, specificity, accuracy, detection rate, detection prevalence, and balanced accuracy. Machine learning models based on multi-feature fusion of enhanced CT, utilizing KNN, SVM, rpart, GBM, NB, GLM, and LDA, demonstrated AUC values in the test set of 0.687, 0.863, 0.737, 0.793, 0.763, 0.867, and 0.844, respectively. The rf and ada models showed significant overfitting. Among these, GBM and GLM showed higher stability in predicting pulmonary metastasis of head and neck ACC. Radiomics and deep learning methods based on enhanced CT imaging can provide effective auxiliary tools for predicting pulmonary metastasis in head and neck ACC patients, showing promising potential for clinical application.
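One detail worth making explicit from the preprocessing above is that the Z-score normalisation statistics should come from the training split only and then be applied to the test split. The sketch below illustrates this with toy data; the feature counts and split parameters are assumptions.

```python
# Minimal sketch (toy arrays): Z-score normalisation fitted on the training split
# and applied to the test split, before hypothesis testing and LASSO selection.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

X = np.random.rand(130, 200)            # fused radiomic + deep-learning features (toy)
y = np.random.randint(0, 2, 130)        # 1 = pulmonary metastasis (toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
scaler = StandardScaler().fit(X_tr)     # statistics come from the training set only
X_tr_z, X_te_z = scaler.transform(X_tr), scaler.transform(X_te)
# ... hypothesis testing + LASSO (5-fold CV) feature selection, then the nine classifiers ...
```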

Digital versus analogue PET in parathyroid imaging: comparison of PET metrics and machine learning-based characterisation of hyperfunctioning lesions (the DIGI-PET study).

Filippi L, Bianconi F, Ferrari C, Linguanti F, Battisti C, Urbano N, Minestrini M, Messina SG, Buci L, Baldoncini A, Rubini G, Schillaci O, Palumbo B

PubMed · Aug 22 2025
To compare PET-derived metrics between digital and analogue PET/CT in hyperparathyroidism, and to assess whether machine learning (ML) applied to quantitative PET parameters can distinguish parathyroid adenoma (PA) from hyperplasia (PH). From an initial multi-centre cohort of 179 patients, 86 were included, comprising 89 PET-positive lesions confirmed histologically (74 PA, 15 PH). The quantitative PET parameters maximum standardised uptake value (SUVmax), metabolic tumour volume (MTV), target-to-background ratio (TBR), and maximum diameter, along with serum PTH and calcium levels, were compared between digital and analogue PET scanners using the Mann-Whitney U test. Receiver operating characteristic (ROC) analysis identified optimal threshold values. ML models (LASSO, decision tree, Gaussian naïve Bayes) were trained on harmonised quantitative features to distinguish PA from PH. Digital PET detected significantly smaller lesions than analogue PET, in both metabolic volume (1.32 ± 1.39 vs. 2.36 ± 2.01 cc; p < 0.001) and maximum diameter (8.35 ± 4.32 vs. 11.87 ± 5.29 mm; p < 0.001). PA lesions showed significantly higher SUVmax and TBR compared to PH (SUVmax: 8.58 ± 3.70 vs. 5.27 ± 2.34; TBR: 14.67 ± 6.99 vs. 8.82 ± 5.90; both p < 0.001). The optimal thresholds for identifying PA were SUVmax > 5.89 and TBR > 11.5. The best ML model (LASSO) achieved an AUC of 0.811, with 79.7% accuracy and balanced sensitivity and specificity. Digital PET outperforms analogue systems in detecting small parathyroid lesions. Additionally, ML analysis of PET-derived metrics and PTH may support non-invasive distinction between adenoma and hyperplasia.
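The ROC-derived cut-offs quoted above (e.g. SUVmax > 5.89) come from standard threshold selection on the ROC curve; a common choice is the Youden index, sketched below on toy lesion values. Both the data and the use of the Youden criterion are assumptions for illustration.

```python
# Minimal sketch (toy values): choosing an SUVmax cut-off with the Youden index
# from an ROC curve, analogous to the thresholds reported for adenoma vs. hyperplasia.
import numpy as np
from sklearn.metrics import roc_curve

suvmax = np.array([4.1, 5.2, 6.3, 7.8, 8.9, 9.5, 5.0, 4.6, 10.2, 6.8])  # toy lesions
is_adenoma = np.array([0, 0, 1, 1, 1, 1, 0, 0, 1, 1])

fpr, tpr, thresholds = roc_curve(is_adenoma, suvmax)
best = np.argmax(tpr - fpr)                    # Youden's J = sensitivity + specificity - 1
print("optimal SUVmax cut-off:", thresholds[best])
```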
