Page 181 of 2352345 results

Radiomics across modalities: a comprehensive review of neurodegenerative diseases.

Inglese M, Conti A, Toschi N

pubmed logopapers · Jun 1 2025
Radiomics enables the extraction of quantitative features from medical images that can reveal tissue patterns generally invisible to human observers. Despite the challenges in visually interpreting radiomic features and the computational resources required to generate them, they hold significant value in downstream automated processing. For instance, in statistical or machine learning frameworks, radiomic features enhance sensitivity and specificity, making them indispensable for tasks such as diagnosis, prognosis, prediction, monitoring, image-guided interventions, and evaluating therapeutic responses. This review explores the application of radiomics in neurodegenerative diseases, with a focus on Alzheimer's disease, Parkinson's disease, Huntington's disease, and multiple sclerosis. While the radiomics literature often focuses on magnetic resonance imaging (MRI) and computed tomography (CT), this review also covers its broader application in nuclear medicine, with use cases of positron emission tomography (PET) and single-photon emission computed tomography (SPECT) radiomics. Additionally, we review integrated radiomics, where features from multiple imaging modalities are fused to improve model performance. This review also highlights the growing integration of radiomics with artificial intelligence and the need for feature standardisation and reproducibility to facilitate its translation into clinical practice.
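To make the notion of "quantitative features invisible to human observers" concrete, the sketch below computes a handful of first-order radiomic features (mean, variance, skewness, energy, entropy) from a region of interest. It is an illustrative minimal example, not the standardised feature set used in the radiomics literature; the function name and the assumption of non-negative integer intensities for the entropy term are mine.

```python
import numpy as np

def first_order_features(roi: np.ndarray) -> dict:
    """A few first-order radiomic features from a region of interest.

    Illustrative sketch only. Entropy assumes non-negative integer
    intensities so a histogram can be built with np.bincount.
    """
    x = roi.astype(float).ravel()
    mu = x.mean()
    probs = np.bincount(x.astype(int)).astype(float) / x.size
    return {
        "mean": mu,
        "variance": x.var(),
        # skewness with a small epsilon to avoid division by zero
        "skewness": ((x - mu) ** 3).mean() / (x.std() ** 3 + 1e-12),
        "energy": float((x ** 2).sum()),
        "entropy": -sum(p * np.log2(p) for p in probs if p > 0),
    }
```

Features like these feed the statistical or machine learning frameworks the review describes.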

A magnetic resonance imaging (MRI)-based deep learning radiomics model predicts recurrence-free survival in lung cancer patients after surgical resection of brain metastases.

Li B, Li H, Chen J, Xiao F, Fang X, Guo R, Liang M, Wu Z, Mao J, Shen J

pubmed logopapers · Jun 1 2025
To develop and validate a magnetic resonance imaging (MRI)-based deep learning radiomics model (DLRM) to predict recurrence-free survival (RFS) in lung cancer patients after surgical resection of brain metastases (BrMs). A total of 215 lung cancer patients with BrMs confirmed by surgical pathology were retrospectively included from five centres: 167 patients were assigned to the training cohort and 48 to the external test cohort. All patients underwent regular follow-up brain MRIs. Clinical and morphological MRI models for predicting RFS were built using univariate and multivariate Cox regressions. Handcrafted and deep learning (DL) signatures were constructed from pretreatment BrMs MR images using the least absolute shrinkage and selection operator (LASSO) method. A DLRM was established by integrating the clinical and morphological MRI predictors with the handcrafted and DL signatures, based on the multivariate Cox regression coefficients. The Harrell C-index, area under the receiver operating characteristic curve (AUC), and Kaplan-Meier survival analysis were used to evaluate model performance. The DLRM showed satisfactory performance in predicting RFS and 6- to 18-month intracranial recurrence in lung cancer patients after BrMs resection, achieving a C-index of 0.79 and AUCs of 0.84-0.90 in the training set and a C-index of 0.74 and AUCs of 0.71-0.85 in the external test set. The DLRM outperformed the clinical model, morphological MRI model, handcrafted signature, DL signature, and clinical-morphological MRI model in predicting RFS (P < 0.05). The DLRM successfully classified patients into high-risk and low-risk intracranial recurrence groups (P < 0.001). This MRI-based DLRM could predict RFS in lung cancer patients after surgical resection of BrMs.
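The Harrell C-index used to evaluate the DLRM has a simple pairwise definition: among comparable patient pairs, the fraction where the higher predicted risk corresponds to the earlier observed event. A minimal pure-Python sketch (simplified tie handling, hypothetical inputs, not the study's evaluation code):

```python
import itertools

def harrell_c_index(times, events, risk_scores):
    """Harrell's concordance index over all comparable pairs.

    A pair is comparable when the subject with the shorter follow-up time
    had an event (events[i] truthy). Ties in risk score count as 0.5.
    Simplified sketch: pairs tied on time are skipped.
    """
    concordant, comparable = 0.0, 0
    for i, j in itertools.combinations(range(len(times)), 2):
        if times[i] > times[j]:          # order so i has the shorter time
            i, j = j, i
        if times[i] == times[j] or not events[i]:
            continue                      # tie in time, or earlier subject censored
        comparable += 1
        if risk_scores[i] > risk_scores[j]:
            concordant += 1
        elif risk_scores[i] == risk_scores[j]:
            concordant += 0.5
    return concordant / comparable
```

A C-index of 0.5 corresponds to random risk ordering; the reported 0.74 on the external test set means the model orders patient risks correctly in roughly three of four comparable pairs.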

Predictive models of severe disease in patients with COVID-19 pneumonia at an early stage on CT images using topological properties.

Iwasaki T, Arimura H, Inui S, Kodama T, Cui YH, Ninomiya K, Iwanaga H, Hayashi T, Abe O

pubmed logopapers · Jun 1 2025
Prediction of severe disease (SVD) in patients with coronavirus disease (COVID-19) pneumonia at an early stage could allow for more appropriate triage and improve patient prognosis. Moreover, visualization of the topological properties of COVID-19 pneumonia could help clinicians explain the reasons for their decisions. We aimed to construct predictive models of SVD in patients with COVID-19 pneumonia at an early stage on computed tomography (CT) images using SVD-specific features that can be visualized on accumulated Betti number (BN) maps. BN maps (b0 and b1 maps) were generated by calculating the BNs within a shifting kernel in a manner similar to a convolution. Accumulated BN maps were constructed by summing the BN maps derived from a range of multiple threshold values. Topological features were computed as intrinsic topological properties of COVID-19 pneumonia from the accumulated BN maps. Predictive models of SVD were constructed with two feature selection methods and three machine learning models using nested fivefold cross-validation. The proposed model achieved an area under the receiver-operating characteristic curve of 0.854 and a sensitivity of 0.908 in a test fold. These results suggest that topological image features can characterize, at an early stage, COVID-19 pneumonia that will progress to SVD.
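The Betti numbers at the core of this method have a concrete meaning on a binary image patch: b0 counts connected components and b1 counts holes. The sketch below computes both for a single binary patch, using 4-connectivity and the Euler characteristic χ = V − E + F = b0 − b1 of the cubical complex; the paper's shifting-kernel scan and multi-threshold accumulation are omitted, and the function name is mine.

```python
import numpy as np

def betti_numbers(img: np.ndarray):
    """(b0, b1) of a binary image under 4-connectivity.

    b0 via flood fill; b1 recovered from the Euler characteristic
    chi = V - E + F, where V = pixels, E = 4-adjacent pixel pairs,
    F = all-foreground 2x2 blocks. Illustrative sketch only.
    """
    img = img.astype(bool)
    h, w = img.shape
    seen = np.zeros_like(img, dtype=bool)
    b0 = 0
    for sy in range(h):
        for sx in range(w):
            if img[sy, sx] and not seen[sy, sx]:
                b0 += 1                      # new connected component
                stack, seen[sy, sx] = [(sy, sx)], True
                while stack:
                    y, x = stack.pop()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and img[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
    V = img.sum()
    E = (img[:, :-1] & img[:, 1:]).sum() + (img[:-1, :] & img[1:, :]).sum()
    F = (img[:-1, :-1] & img[:-1, 1:] & img[1:, :-1] & img[1:, 1:]).sum()
    b1 = b0 - (V - E + F)
    return int(b0), int(b1)
```

Evaluating this inside a kernel shifted across a thresholded CT slice, and summing over thresholds, yields the accumulated BN maps described above.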

Prediction of mammographic breast density based on clinical breast ultrasound images using deep learning: a retrospective analysis.

Bunnell A, Valdez D, Wolfgruber TK, Quon B, Hung K, Hernandez BY, Seto TB, Killeen J, Miyoshi M, Sadowski P, Shepherd JA

pubmed logopapers · Jun 1 2025
Breast density, as derived from mammographic images and defined by the Breast Imaging Reporting & Data System (BI-RADS), is one of the strongest risk factors for breast cancer. Breast ultrasound is an alternative breast cancer screening modality, particularly useful in low-resource, rural contexts. To date, breast ultrasound has not been used to inform risk models that need breast density. The purpose of this study is to explore the use of artificial intelligence (AI) to predict BI-RADS breast density category from clinical breast ultrasound imaging. We compared deep learning methods for predicting breast density directly from breast ultrasound imaging, as well as machine learning models from breast ultrasound image gray-level histograms alone. The use of AI-derived breast ultrasound breast density as a breast cancer risk factor was compared to clinical BI-RADS breast density. Retrospective (2009-2022) breast ultrasound data were split by individual into 70/20/10% groups for training, validation, and held-out testing for reporting results. 405,120 clinical breast ultrasound images from 14,066 women (mean age 53 years, range 18-99 years) with clinical breast ultrasound exams were retrospectively selected for inclusion from three institutions: 10,393 training (302,574 images), 2593 validation (69,842), and 1074 testing (28,616). The AI model achieves AUROC 0.854 in breast density classification and statistically significantly outperforms all image statistic-based methods. In an existing clinical 5-year breast cancer risk model, breast ultrasound AI and clinical breast density predict 5-year breast cancer risk with 0.606 and 0.599 AUROC (DeLong's test p-value: 0.67), respectively. BI-RADS breast density can be estimated from breast ultrasound imaging with high accuracy. The AI model provided superior estimates to other machine learning approaches. 
Furthermore, we demonstrate that age-adjusted, AI-derived breast ultrasound breast density provides similar predictive power to mammographic breast density in our population. Estimated breast density from ultrasound may be useful in performing breast cancer risk assessment in areas where mammography may not be available. National Cancer Institute.
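The AUROC figures reported above (0.854 for density classification, 0.606 vs. 0.599 for 5-year risk) have a direct probabilistic reading: the chance that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal pairwise (Mann-Whitney) sketch with hypothetical labels and scores, not the study's evaluation code:

```python
def auroc(labels, scores):
    """AUROC via the pairwise definition: probability that a random
    positive outscores a random negative, ties counting as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

On this reading, the near-identical 0.606 and 0.599 AUROCs (DeLong p = 0.67) say the ultrasound-derived and mammographic densities rank 5-year risk almost equally well.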

Performance of GPT-4 Turbo and GPT-4o in Korean Society of Radiology In-Training Examinations.

Choi A, Kim HG, Choi MH, Ramasamy SK, Kim Y, Jung SE

pubmed logopapers · Jun 1 2025
Despite the potential of large language models for radiology training, their ability to handle image-based radiological questions remains poorly understood. This study aimed to evaluate the performance of the GPT-4 Turbo and GPT-4o in radiology resident examinations, to analyze differences across question types, and to compare their results with those of residents at different levels. A total of 776 multiple-choice questions from the Korean Society of Radiology In-Training Examinations were used, forming two question sets: one originally written in Korean and the other translated into English. We evaluated the performance of GPT-4 Turbo (gpt-4-turbo-2024-04-09) and GPT-4o (gpt-4o-2024-11-20) on these questions with the temperature set to zero, determining the accuracy based on the majority vote from five independent trials. We analyzed their results using the question type (text-only vs. image-based) and benchmarked them against nationwide radiology residents' performance. The impact of the input language (Korean or English) on model performance was examined. GPT-4o outperformed GPT-4 Turbo for both image-based (48.2% vs. 41.8%, <i>P</i> = 0.002) and text-only questions (77.9% vs. 69.0%, <i>P</i> = 0.031). On image-based questions, GPT-4 Turbo and GPT-4o showed comparable performance to that of 1st-year residents (41.8% and 48.2%, respectively, vs. 43.3%, <i>P</i> = 0.608 and 0.079, respectively) but lower performance than that of 2nd- to 4th-year residents (vs. 56.0%-63.9%, all <i>P</i> ≤ 0.005). For text-only questions, GPT-4 Turbo and GPT-4o performed better than residents across all years (69.0% and 77.9%, respectively, vs. 44.7%-57.5%, all <i>P</i> ≤ 0.039). Performance on the English- and Korean-version questions showed no significant differences for either model (all <i>P</i> ≥ 0.275). GPT-4o outperformed the GPT-4 Turbo in all question types. 
On image-based questions, both models' performance matched that of 1st-year residents but was lower than that of higher-year residents. Both models demonstrated superior performance compared to residents for text-only questions. The models showed consistent performances across English and Korean inputs.
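The evaluation protocol above, scoring each question by the majority answer over five independent trials, can be sketched in a few lines. The answer strings and helper names are hypothetical; note that `Counter.most_common` breaks ties by first occurrence, a simplification the study may handle differently.

```python
from collections import Counter

def majority_vote(trial_answers):
    """Most frequent answer across repeated model runs for one question.
    Ties resolve to the first answer encountered (Counter semantics)."""
    return Counter(trial_answers).most_common(1)[0][0]

def exam_accuracy(per_question_trials, answer_key):
    """Fraction of questions whose majority-vote answer matches the key."""
    correct = sum(majority_vote(trials) == key
                  for trials, key in zip(per_question_trials, answer_key))
    return correct / len(answer_key)
```

Running with temperature zero, as the study did, makes individual trials near-deterministic; the five-trial majority vote then mainly guards against residual run-to-run variation.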

Eigenhearts: Cardiac diseases classification using eigenfaces approach.

Groun N, Villalba-Orero M, Casado-Martín L, Lara-Pezzi E, Valero E, Le Clainche S, Garicano-Mena J

pubmed logopapers · Jun 1 2025
In the realm of cardiovascular medicine, medical imaging plays a crucial role in accurately classifying cardiac diseases and making precise diagnoses. However, the integration of data science techniques in this field presents significant challenges, as it requires a large volume of images, while ethical constraints, high costs, and variability in imaging protocols limit data acquisition. As a consequence, it is necessary to investigate different avenues to overcome this challenge. In this contribution, we offer an innovative tool to overcome this limitation. In particular, we delve into the application of a well-recognized method known as the eigenfaces approach to classify cardiac diseases. This approach was originally developed to represent face images efficiently using principal component analysis, which provides a set of eigenvectors (the eigenfaces) explaining the variation between face images. Given its effectiveness in face recognition, we sought to evaluate its applicability to more complex medical imaging datasets. In particular, we integrate this approach with convolutional neural networks to classify echocardiography images taken from mice in five distinct cardiac conditions (healthy, diabetic cardiomyopathy, myocardial infarction, obesity and TAC hypertension). The results show a substantial and noteworthy enhancement when employing the singular value decomposition for pre-processing, with classification accuracy increasing by approximately 50%.
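The eigenfaces idea transfers directly: flatten each image, subtract the mean image, and take the top singular vectors of the centered data as "eigenhearts" onto which images are projected. A minimal NumPy sketch under those assumptions; the paper's exact SVD pre-processing pipeline may differ, and the function names are mine.

```python
import numpy as np

def eigen_decompose(images: np.ndarray, k: int):
    """Eigenfaces-style decomposition of a stack of images (n, H, W).

    Returns the mean image, the top-k eigen-images (right singular
    vectors), and each image's k projection coefficients.
    """
    n = images.shape[0]
    flat = images.reshape(n, -1).astype(float)
    mean = flat.mean(axis=0)
    centered = flat - mean                      # remove the average image
    # economy-size SVD of the (n_samples x n_pixels) data matrix
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    components = Vt[:k]                         # top-k eigen-images
    coeffs = centered @ components.T            # per-image coefficients
    return mean, components, coeffs

def reconstruct(mean, components, coeffs):
    """Approximate the flattened images from their top-k coefficients."""
    return coeffs @ components + mean
```

The low-dimensional coefficients (or the rank-k reconstructions) can then be fed to a CNN classifier, which is the pre-processing gain the abstract reports.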

Deep learning for multiple sclerosis lesion classification and stratification using MRI.

Umirzakova S, Shakhnoza M, Sevara M, Whangbo TK

pubmed logopapers · Jun 1 2025
Multiple sclerosis (MS) is a chronic neurological disease characterized by inflammation, demyelination, and neurodegeneration within the central nervous system. Conventional magnetic resonance imaging (MRI) techniques often struggle to detect small or subtle lesions, particularly in challenging regions such as the cortical gray matter and brainstem. This study introduces a novel deep learning-based approach, combined with a robust preprocessing pipeline and optimized MRI protocols, to improve the precision of MS lesion classification and stratification. We designed a convolutional neural network (CNN) architecture specifically tailored for high-resolution T2-weighted imaging (T2WI), augmented by deep learning-based reconstruction (DLR) techniques. The model incorporates dual attention mechanisms, including spatial and channel attention modules, to enhance feature extraction. A comprehensive preprocessing pipeline was employed, featuring bias field correction, skull stripping, image registration, and intensity normalization. The proposed framework was trained and validated on four publicly available datasets and evaluated using precision, sensitivity, specificity, and area under the curve (AUC) metrics. The model demonstrated exceptional performance, achieving a precision of 96.27%, sensitivity of 95.54%, specificity of 94.70%, and an AUC of 0.975. It outperformed existing state-of-the-art methods, particularly in detecting lesions in underdiagnosed regions such as the cortical gray matter and brainstem. The integration of advanced attention mechanisms enabled the model to focus on critical MRI features, leading to significant improvements in lesion classification and stratification. This study presents a novel and scalable approach for MS lesion detection and classification, offering a practical solution for clinical applications. 
By integrating advanced deep learning techniques with optimized MRI protocols, the proposed framework achieves superior diagnostic accuracy and generalizability, paving the way for enhanced patient care and more personalized treatment strategies. This work sets a new benchmark for MS diagnosis and management in both research and clinical practice.
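The dual attention mechanisms mentioned above come in two generic flavours: channel attention (rescale each feature channel by a learned gate) and spatial attention (rescale each pixel location). A NumPy sketch of both, with hypothetical weight matrices `w1`/`w2`; the paper's exact module designs are not specified here.

```python
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(fmap: np.ndarray, w1: np.ndarray, w2: np.ndarray):
    """Squeeze-and-excitation-style channel attention on a (C, H, W) map:
    global-average-pool each channel, pass through a ReLU bottleneck MLP,
    squash to (0, 1), and rescale the channels."""
    squeeze = fmap.mean(axis=(1, 2))           # (C,)
    hidden = np.maximum(0.0, w1 @ squeeze)     # bottleneck, ReLU
    gate = _sigmoid(w2 @ hidden)               # (C,) channel gates
    return fmap * gate[:, None, None]

def spatial_attention(fmap: np.ndarray):
    """Spatial attention: a per-pixel gate from channel-average pooling,
    broadcast back over all channels."""
    pooled = fmap.mean(axis=0)                 # (H, W)
    return fmap * _sigmoid(pooled)[None, :, :]
```

Applied in sequence, the two gates let the network emphasize both which feature channels and which image locations matter, which is how attention helps surface subtle cortical and brainstem lesions.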

Myo-Guide: A Machine Learning-Based Web Application for Neuromuscular Disease Diagnosis With MRI.

Verdu-Diaz J, Bolano-Díaz C, Gonzalez-Chamorro A, Fitzsimmons S, Warman-Chardon J, Kocak GS, Mucida-Alvim D, Smith IC, Vissing J, Poulsen NS, Luo S, Domínguez-González C, Bermejo-Guerrero L, Gomez-Andres D, Sotoca J, Pichiecchio A, Nicolosi S, Monforte M, Brogna C, Mercuri E, Bevilacqua JA, Díaz-Jara J, Pizarro-Galleguillos B, Krkoska P, Alonso-Pérez J, Olivé M, Niks EH, Kan HE, Lilleker J, Roberts M, Buchignani B, Shin J, Esselin F, Le Bars E, Childs AM, Malfatti E, Sarkozy A, Perry L, Sudhakar S, Zanoteli E, Di Pace FT, Matthews E, Attarian S, Bendahan D, Garibaldi M, Fionda L, Alonso-Jiménez A, Carlier R, Okhovat AA, Nafissi S, Nalini A, Vengalil S, Hollingsworth K, Marini-Bettolo C, Straub V, Tasca G, Bacardit J, Díaz-Manera J

pubmed logopapers · Jun 1 2025
Neuromuscular diseases (NMDs) are rare disorders characterized by progressive muscle fibre loss, leading to replacement by fibrotic and fatty tissue, muscle weakness and disability. Early diagnosis is critical for therapeutic decisions, care planning and genetic counselling. Muscle magnetic resonance imaging (MRI) has emerged as a valuable diagnostic tool by identifying characteristic patterns of muscle involvement. However, the increasing complexity of these patterns complicates their interpretation, limiting their clinical utility. Additionally, multi-study data aggregation introduces heterogeneity challenges. This study presents a novel multi-study harmonization pipeline for muscle MRI and an AI-driven diagnostic tool to assist clinicians in identifying disease-specific muscle involvement patterns. We developed a preprocessing pipeline to standardize MRI fat content across datasets, minimizing source bias. An ensemble of XGBoost models was trained to classify patients based on intramuscular fat replacement, age at MRI and sex. The SHapley Additive exPlanations (SHAP) framework was adapted to analyse model predictions and identify disease-specific muscle involvement patterns. To address class imbalance, training and evaluation were conducted using class-balanced metrics. The model's performance was compared against four expert clinicians using 14 previously unseen MRI scans. Using our harmonization approach, we curated a dataset of 2961 MRI samples from genetically confirmed cases of 20 paediatric and adult NMDs. The model achieved a balanced accuracy of 64.8% ± 3.4%, with a weighted top-3 accuracy of 84.7% ± 1.8% and top-5 accuracy of 90.2% ± 2.4%. It also identified key features relevant for differential diagnosis, aiding clinical decision-making. Compared to four expert clinicians, the model obtained the highest top-3 accuracy (75.0% ± 4.8%). The diagnostic tool has been implemented as a free web platform, providing global access to the medical community. 
The application of AI in muscle MRI for NMD diagnosis remains underexplored due to data scarcity. This study introduces a framework for dataset harmonization, enabling advanced computational techniques. Our findings demonstrate the potential of AI-based approaches to enhance differential diagnosis by identifying disease-specific muscle involvement patterns. The developed tool surpasses expert performance in diagnostic ranking and is accessible to clinicians worldwide via the Myo-Guide online platform.
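The class-balanced top-k metrics reported for Myo-Guide (e.g., top-3 and top-5 accuracy over 20 NMDs) average per-class hit rates so that rare diagnoses weigh as much as common ones. A minimal sketch with hypothetical labels, not the study's evaluation code:

```python
from collections import defaultdict

def balanced_top_k_accuracy(true_labels, ranked_predictions, k):
    """Top-k accuracy averaged per class: a case counts as a hit when its
    true diagnosis appears among the model's k highest-ranked diagnoses,
    and per-class hit rates are averaged so rare classes are not swamped."""
    hits, totals = defaultdict(int), defaultdict(int)
    for label, ranking in zip(true_labels, ranked_predictions):
        totals[label] += 1
        if label in ranking[:k]:
            hits[label] += 1
    return sum(hits[c] / totals[c] for c in totals) / len(totals)
```

With 20 possible diagnoses, a top-3 figure of 84.7% means the true disease is almost always among the model's first three suggestions, which is the ranking behaviour the clinician comparison tested.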

Future prospects of deep learning in esophageal cancer diagnosis and clinical decision support (Review).

Lin A, Song L, Wang Y, Yan K, Tang H

pubmed logopapers · Jun 1 2025
Esophageal cancer (EC), one of the leading causes of cancer-related mortality worldwide, still faces significant challenges in early diagnosis and prognosis. Early EC lesions often present subtle symptoms, and current diagnostic methods are limited in accuracy due to tumor heterogeneity, lesion morphology and variable image quality. These limitations are particularly prominent in the early detection of precancerous lesions such as Barrett's esophagus. Traditional diagnostic approaches, such as endoscopic examination, pathological analysis and computed tomography, require improvements in diagnostic precision and staging accuracy. Deep learning (DL), a key branch of artificial intelligence, shows great promise in improving the detection of early EC lesions, distinguishing benign from malignant lesions and aiding cancer staging and prognosis. However, challenges remain, including image quality variability, insufficient data annotation and limited generalization. The present review summarizes recent advances in the application of DL to medical images obtained through various imaging techniques for the diagnosis of EC at different stages. It assesses the role of DL in tumor pathology, prognosis prediction and clinical decision support, highlighting its advantages in EC diagnosis and prognosis evaluation. Finally, it provides an objective analysis of the challenges currently facing the field and prospects for future applications.

Pediatric chest X-ray diagnosis using neuromorphic models.

Bokhari SM, Sohaib S, Shafi M

pubmed logopapers · Jun 1 2025
This research presents an innovative neuromorphic method utilizing Spiking Neural Networks (SNNs) to analyze pediatric chest X-rays and identify prevalent thoracic illnesses. We incorporate spiking-based machine learning models, such as Spiking Convolutional Neural Networks (SCNN), Spiking Residual Networks (S-ResNet), and Hierarchical Spiking Neural Networks (HSNN), for pediatric chest radiographic analysis on the publicly available benchmark PediCXR dataset. These models employ spatiotemporal feature extraction, residual connections, and event-driven processing to improve diagnostic precision. The HSNN model surpasses benchmark approaches from the literature, achieving a classification accuracy of 96% across six thoracic illness categories, an F1-score of 0.95, and a specificity of 1.0 in pneumonia detection. Our research demonstrates that neuromorphic computing is a feasible and biologically inspired approach to real-time medical imaging diagnostics, significantly improving performance.
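The event-driven processing behind SNNs rests on spiking neuron models such as the leaky integrate-and-fire (LIF) unit: the membrane potential leaks each time step, integrates input current, and emits a spike (then resets) when it crosses a threshold. A textbook sketch, not the paper's specific neuron model; parameter values are illustrative.

```python
def lif_spike_train(input_current, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron over a sequence of input currents.

    Each step: v <- leak * v + current; emit a spike and reset to 0 when
    v crosses the threshold. Returns the binary spike train.
    """
    v, spikes = 0.0, []
    for current in input_current:
        v = leak * v + current          # leak, then integrate the input
        if v >= threshold:
            spikes.append(1)
            v = 0.0                     # reset after spiking
        else:
            spikes.append(0)
    return spikes
```

Networks of such units communicate only through sparse spike events rather than dense activations, which is what makes neuromorphic models attractive for low-power, real-time diagnostics.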
