Page 159 of 2432424 results

Clinician-Led Code-Free Deep Learning for Detecting Papilloedema and Pseudopapilloedema Using Optic Disc Imaging

Shenoy, R., Samra, G. S., Sekhri, R., Yoon, H.-J., Teli, S., DeSilva, I., Tu, Z., Maconachie, G. D., Thomas, M. G.

medRxiv preprint · Jun 26 2025
Importance: Differentiating pseudopapilloedema from papilloedema is challenging but critical for prompt diagnosis and for avoiding unnecessary invasive procedures. Following a diagnosis of papilloedema, objectively grading severity is important for determining the urgency of management and the therapeutic response. Automated machine learning (AutoML) has emerged as a promising tool for diagnosis in medical imaging and may provide accessible opportunities for consistent, accurate diagnosis and severity grading of papilloedema.
Objective: This study evaluates the feasibility of AutoML models for distinguishing the presence and severity of papilloedema using near-infrared reflectance (NIR) images obtained from standard optical coherence tomography (OCT), comparing the performance of different AutoML platforms.
Design, setting and participants: A retrospective cohort study was conducted using data from University Hospitals of Leicester NHS Trust. The study involved 289 adult and paediatric patients (813 images) who underwent optic nerve head-centred OCT imaging between 2021 and 2024. The dataset included patients with normal optic discs (69 patients, 185 images), papilloedema (135 patients, 372 images), and optic disc drusen (ODD) (85 patients, 256 images). Three AutoML platforms (Amazon Rekognition, Medic Mind and Google Vertex) were evaluated for their ability to classify and grade papilloedema severity.
Main outcomes and measures: Two classification tasks were performed: (1) distinguishing papilloedema from normal discs and ODD; (2) grading papilloedema severity (mild/moderate vs. severe). Model performance was evaluated using area under the curve (AUC), precision, recall, F1 score, and confusion matrices for all six models.
Results: Amazon Rekognition outperformed the other platforms, achieving the highest AUC (0.90) and F1 score (0.81) in distinguishing papilloedema from normal/ODD. For papilloedema severity grading, Amazon Rekognition also performed best, with an AUC of 0.90 and F1 score of 0.79. Google Vertex and Medic Mind demonstrated good performance but had slightly lower accuracy and higher misclassification rates.
Conclusions and relevance: This evaluation of three widely available AutoML platforms using NIR images obtained from standard OCT shows promise in distinguishing and grading papilloedema. These models provide an accessible, scalable solution for clinical teams without coding expertise to feasibly develop intelligent diagnostic systems that recognise and characterise papilloedema. Further external validation and prospective testing are needed to confirm their clinical utility and applicability in diverse settings.
Key Points:
Question: Can clinician-led, code-free deep learning models using automated machine learning (AutoML) accurately differentiate papilloedema from pseudopapilloedema using optic disc imaging?
Findings: Three widely available AutoML platforms were used to develop models that successfully distinguish the presence and severity of papilloedema on optic disc imaging, with Amazon Rekognition demonstrating the highest performance.
Meaning: AutoML may assist clinical teams, even those with limited coding expertise, in diagnosing papilloedema, potentially reducing the need for invasive investigations.
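The abstract reports precision, recall and F1 alongside AUC; as a reminder of how these relate to confusion-matrix counts, here is a minimal stdlib sketch (illustrative counts only, not the study's own code or data):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

# Illustrative counts only (not taken from the study)
p, r, f1 = precision_recall_f1(tp=80, fp=20, fn=20)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.8 0.8 0.8
```

F1 is the harmonic mean of precision and recall, which is why a single F1 value (e.g. 0.81 above) summarises both error types at the chosen operating point.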

Deep transfer learning radiomics combined with explainable machine learning for preoperative thymoma risk prediction based on CT.

Wu S, Fan L, Wu Y, Xu J, Guo Y, Zhang H, Xu Z

PubMed · Jun 26 2025
To develop and validate a computed tomography (CT)‑based deep transfer learning radiomics model combined with explainable machine learning for preoperative risk prediction of thymoma. This retrospective study included 173 pathologically confirmed thymoma patients from our institution in the training group and 93 patients from two external centers in the external validation group. Tumors were classified according to the World Health Organization simplified criteria as low‑risk types (A, AB, and B1) or high‑risk types (B2 and B3). Radiomics features and deep transfer learning features were extracted from venous‑phase contrast‑enhanced CT images by using a modified Inception V3 network. Principal component analysis and least absolute shrinkage and selection operator (LASSO) regression identified 20 key predictors. Six classifiers (decision tree, gradient boosting machine, k‑nearest neighbors, naïve Bayes, random forest (RF), and support vector machine) were trained on five feature sets: CT imaging model, radiomics feature model, deep transfer learning feature model, combined feature model, and combined model. Interpretability was assessed with SHapley Additive exPlanations (SHAP), and an interactive web application was developed for real‑time individualized risk prediction and visualization. In the external validation group, the RF classifier achieved the highest area under the receiver operating characteristic curve (AUC) value of 0.956. In the training group, the AUC values for the CT imaging model, radiomics feature model, deep transfer learning feature model, combined feature model, and combined model were 0.684, 0.831, 0.815, 0.893, and 0.910, respectively. The corresponding AUC values in the external validation group were 0.604, 0.865, 0.880, 0.934, and 0.956, respectively. SHAP visualizations revealed the relative contribution of each feature, while the web application provided real‑time individual prediction probabilities with interpretative outputs. 
We developed a CT‑based deep transfer learning radiomics model combined with explainable machine learning and an interactive web application; this model achieved high accuracy and transparency for preoperative thymoma risk stratification, facilitating personalized clinical decision‑making.
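The LASSO step mentioned above performs feature selection by shrinking coefficients toward zero; the mechanism behind that shrinkage is the soft-thresholding operator. A minimal sketch of that operator (not the authors' pipeline; thresholds and coefficients are made up for illustration):

```python
def soft_threshold(x, lam):
    """LASSO's proximal operator: shrink x toward zero by lam,
    setting any coefficient with |x| <= lam exactly to zero."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

# Hypothetical raw coefficients for five candidate features
coefs = [0.9, -0.05, 0.3, -0.6, 0.02]
shrunk = [soft_threshold(c, 0.1) for c in coefs]
selected = [i for i, c in enumerate(shrunk) if c != 0.0]
print(selected)  # [0, 2, 3]
```

Features whose coefficients are zeroed out drop from the model entirely, which is how LASSO reduces a large radiomics pool to a compact predictor set (20 features in this study).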

Deep Learning MRI Models for the Differential Diagnosis of Tumefactive Demyelination versus <i>IDH</i> Wild-Type Glioblastoma.

Conte GM, Moassefi M, Decker PA, Kosel ML, McCarthy CB, Sagen JA, Nikanpour Y, Fereidan-Esfahani M, Ruff MW, Guido FS, Pump HK, Burns TC, Jenkins RB, Erickson BJ, Lachance DH, Tobin WO, Eckel-Passow JE

PubMed · Jun 26 2025
Diagnosis of tumefactive demyelination can be challenging. The diagnosis of indeterminate brain lesions on MRI often requires tissue confirmation via brain biopsy. Noninvasive methods for accurate diagnosis of tumor and nontumor etiologies allow for tailored therapy, optimal tumor control, and a reduced risk of iatrogenic morbidity and mortality. Tumefactive demyelination has imaging features that mimic <i>isocitrate dehydrogenase</i> wild-type glioblastoma (<i>IDH</i>wt GBM). We hypothesized that deep learning applied to postcontrast T1-weighted (T1C) and T2-weighted (T2) MRI can discriminate tumefactive demyelination from <i>IDH</i>wt GBM. Patients with tumefactive demyelination (<i>n</i> = 144) and <i>IDH</i>wt GBM (<i>n</i> = 455) were identified by clinical registries. A 3D DenseNet121 architecture was used to develop models to differentiate tumefactive demyelination and <i>IDH</i>wt GBM by using both T1C and T2 MRI, as well as only T1C and only T2 images. A 3-stage design was used: 1) model development and internal validation via 5-fold cross validation by using a sex-, age-, and MRI technology-matched set of tumefactive demyelination and <i>IDH</i>wt GBM, 2) validation of model specificity on independent <i>IDH</i>wt GBM, and 3) prospective validation on tumefactive demyelination and <i>IDH</i>wt GBM. Stratified areas under the receiver operating characteristic curve (AUROCs) were used to evaluate model performance stratified by sex, age at diagnosis, MRI scanner strength, and MRI acquisition. The deep learning model developed by using both T1C and T2 images had a prospective validation AUROC of 0.88 (95% CI: 0.82-0.95). In the prospective validation stage, a model score threshold of 0.28 resulted in 91% sensitivity of correctly classifying tumefactive demyelination and 80% specificity (correctly classifying <i>IDH</i>wt GBM). Stratified AUROCs demonstrated that model performance may be improved if thresholds were chosen stratified by age and MRI acquisition. 
MRI can provide the basis for applying deep learning models to aid in the differential diagnosis of brain lesions. Further validation is needed to evaluate how well the model generalizes across institutions, patient populations, and technology, and to evaluate optimal thresholds for classification. Next steps also should incorporate additional tumor etiologies such as CNS lymphoma and brain metastases.
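The study's sensitivity/specificity figures follow from fixing a score threshold (0.28 here) on the model's outputs. A stdlib sketch of that computation, with toy scores and labels rather than study data:

```python
def sens_spec_at_threshold(scores, labels, threshold):
    """Sensitivity and specificity of a binary classifier at a score cutoff.
    labels: 1 = positive class (here, tumefactive demyelination), 0 = negative."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Toy scores and labels for illustration only
scores = [0.9, 0.7, 0.4, 0.3, 0.2, 0.1]
labels = [1, 1, 1, 0, 0, 0]
print(sens_spec_at_threshold(scores, labels, 0.28))
```

Lowering the threshold trades specificity for sensitivity, which is why the authors note that stratum-specific thresholds (by age or acquisition) could improve performance.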

Predicting brain metastases in EGFR-positive lung adenocarcinoma patients using pre-treatment CT lung imaging data.

He X, Guan C, Chen T, Wu H, Su L, Zhao M, Guo L

PubMed · Jun 26 2025
This study aims to establish a dual-feature fusion model integrating radiomic features with deep learning features, utilizing single-modality pre-treatment lung CT image data to achieve early warning of brain metastasis (BM) risk within 2 years in EGFR-positive lung adenocarcinoma. After rigorous screening of 362 EGFR-positive lung adenocarcinoma patients with pre-treatment lung CT images, 173 eligible participants were ultimately enrolled in this study, including 93 patients with BM and 80 without BM. Radiomic features were extracted from manually segmented lung nodule regions, and a selection of features was used to develop radiomics models. For deep learning, ROI-level CT images were processed using several deep learning networks, including the novel vision mamba, which was applied for the first time in this context. A feature-level fusion model was developed by combining radiomic and deep learning features. Model performance was assessed using receiver operating characteristic (ROC) curves and decision curve analysis (DCA), with statistical comparisons of area under the curve (AUC) values using the DeLong test. Among the models evaluated, the fused vision mamba model demonstrated the best classification performance, achieving an AUC of 0.86 (95% CI: 0.82-0.90), with a recall of 0.88, F1-score of 0.70, and accuracy of 0.76. This fusion model outperformed both radiomics-only and deep learning-only models, highlighting its superior predictive accuracy for early BM risk detection in EGFR-positive lung adenocarcinoma patients. The fused vision mamba model, utilizing single CT imaging data, significantly enhances the prediction of brain metastasis within two years in EGFR-positive lung adenocarcinoma patients. This novel approach, combining radiomic and deep learning features, offers promising clinical value for early detection and personalized treatment.
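Feature-level fusion, as used in the dual-feature model above, typically amounts to concatenating each patient's radiomic vector with their deep-feature vector before classification. A minimal sketch under that assumption (dimensions are hypothetical, not the study's):

```python
def fuse_features(radiomic, deep):
    """Feature-level fusion: concatenate per-patient radiomic and
    deep-learning feature vectors into one combined row per patient."""
    assert len(radiomic) == len(deep), "one row per patient in each modality"
    return [r + d for r, d in zip(radiomic, deep)]

# Hypothetical dimensions: 3 patients, 4 radiomic + 5 deep features each
radiomic = [[0.1] * 4 for _ in range(3)]
deep = [[0.2] * 5 for _ in range(3)]
fused = fuse_features(radiomic, deep)
print(len(fused), len(fused[0]))  # 3 9
```

The downstream classifier then sees both handcrafted and learned descriptors jointly, which is what lets the fused model outperform either feature family alone.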

Radiomic fingerprints for knee MR images assessment

Yaxi Chen, Simin Ni, Shaheer U. Saeed, Aleksandra Ivanova, Rikin Hargunani, Jie Huang, Chaozong Liu, Yipeng Hu

arXiv preprint · Jun 25 2025
Accurate interpretation of knee MRI scans relies on expert clinical judgment, often with high variability and limited scalability. Existing radiomic approaches use a fixed set of radiomic features (the signature), selected at the population level and applied uniformly to all patients. While interpretable, these signatures are often too constrained to represent individual pathological variations. As a result, conventional radiomic-based approaches are found to be limited in performance, compared with recent end-to-end deep learning (DL) alternatives that forgo interpretable radiomic features. We argue that the individual-agnostic nature of current radiomic selection is not central to its interpretability, but is responsible for the poor generalization in our application. Here, we propose a novel radiomic fingerprint framework, in which a radiomic feature set (the fingerprint) is dynamically constructed for each patient, selected by a DL model. Unlike the existing radiomic signatures, our fingerprints are derived on a per-patient basis by predicting the feature relevance in a large radiomic feature pool, and selecting only those that are predictive of clinical conditions for individual patients. The radiomic-selecting model is trained simultaneously with a low-dimensional (considered relatively explainable) logistic regression for downstream classification. We validate our methods across multiple diagnostic tasks including general knee abnormalities, anterior cruciate ligament (ACL) tears, and meniscus tears, demonstrating comparable or superior diagnostic accuracy relative to state-of-the-art end-to-end DL models. More importantly, we show that the interpretability inherent in our approach facilitates meaningful clinical insights and potential biomarker discovery, with detailed discussion, quantitative and qualitative analysis of real-world clinical cases to evidence these advantages.

Framework for enhanced respiratory disease identification with clinical handcrafted features.

Khokan MIP, Tonni TJ, Rony MAH, Fatema K, Hasan MZ

PubMed · Jun 25 2025
Respiratory disorders cause approximately 4 million deaths annually worldwide, making them the third leading cause of mortality. Early detection is critical to improving survival rates and recovery outcomes. However, chest X-rays require expertise, and computational intelligence provides valuable support to improve diagnostic accuracy and support medical professionals in decision-making. This study presents an automated system to classify respiratory diseases using three diverse datasets comprising 18,000 chest X-ray images and masks, categorized into six classes. Image preprocessing techniques, such as resizing for input standardization and CLAHE for contrast enhancement, were applied to ensure uniformity and improve the visual quality of the images. Albumentations-based augmentation methods addressed class imbalances, while bitwise segmentation focused on extracting the region of interest (ROI). Furthermore, clinically handcrafted feature extraction enabled the accurate identification of 20 critical clinical features essential for disease classification. The K-nearest neighbors (KNN) graph construction technique was utilized to transform tabular data into graph structures for effective node classification. We employed feature analysis to identify critical attributes that contribute to class predictions within the graph structure. Additionally, the GNNExplainer was utilized to validate these findings by highlighting significant nodes, edges, and features that influence the model's decision-making process. The proposed model, Chest X-ray Graph Neural Network (CHXGNN), a robust Graph Neural Network (GNN) architecture, incorporates advanced layers, batch normalization, dropout regularization, and optimization strategies. Extensive testing and ablation studies demonstrated the model's exceptional performance, achieving an accuracy of 99.56%. 
Our CHXGNN model shows significant potential in detecting and classifying respiratory diseases, promising to enhance diagnostic efficiency and improve patient outcomes in respiratory healthcare.
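The KNN graph construction step described above connects each sample (here, an image's handcrafted feature row) to its nearest neighbours in feature space, so a GNN can propagate information along those edges. A stdlib sketch of the edge-building idea (not the authors' implementation; distance metric and k are illustrative choices):

```python
import math

def knn_edges(rows, k):
    """Build directed k-nearest-neighbour edges between tabular feature
    rows using Euclidean distance, turning a feature table into a graph."""
    edges = []
    for i, a in enumerate(rows):
        dists = sorted((math.dist(a, b), j) for j, b in enumerate(rows) if j != i)
        edges.extend((i, j) for _, j in dists[:k])
    return edges

# Toy 2-D feature rows forming two tight clusters
rows = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
print(knn_edges(rows, k=1))  # [(0, 1), (1, 0), (2, 3), (3, 2)]
```

In a real pipeline the resulting edge list would feed a graph library's node-classification model, with each node carrying its 20 clinical features.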

Diagnostic Performance of Radiomics for Differentiating Intrahepatic Cholangiocarcinoma from Hepatocellular Carcinoma: A Systematic Review and Meta-analysis.

Wang D, Sun L

PubMed · Jun 25 2025
Differentiating intrahepatic cholangiocarcinoma (ICC) from hepatocellular carcinoma (HCC) is essential for selecting the most effective treatment strategies. However, traditional imaging modalities and serum biomarkers often lack sufficient specificity. Radiomics, a sophisticated image analysis approach that derives quantitative data from medical imaging, has emerged as a promising non-invasive tool. To systematically review and meta-analyze the diagnostic accuracy of radiomics in differentiating ICC from HCC. PubMed, EMBASE, and Web of Science databases were systematically searched through January 24, 2025. Studies evaluating radiomics models for distinguishing ICC from HCC were included. The quality of included studies was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) and METhodological RadiomICs Score (METRICS) tools. Pooled sensitivity, specificity, and area under the curve (AUC) were calculated using a bivariate random-effects model. Subgroup and publication bias analyses were also performed. Twelve studies with 2541 patients were included, with 14 validation cohorts entered into the meta-analysis. The pooled sensitivity and specificity of radiomics models were 0.82 (95% CI: 0.76-0.86) and 0.90 (95% CI: 0.85-0.93), respectively, with an AUC of 0.88 (95% CI: 0.85-0.91). Subgroup analyses revealed variations based on segmentation method, software used, and sample size, though not all differences were statistically significant. Publication bias was not detected. Radiomics demonstrates high diagnostic accuracy in distinguishing ICC from HCC and offers a non-invasive adjunct to conventional diagnostics. Further prospective, multicenter studies with standardized workflows are needed to enhance clinical applicability and reproducibility.
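Meta-analytic pooling of per-study sensitivities is commonly done on the logit scale with inverse-variance weights. The sketch below is a simplified univariate stand-in for the bivariate random-effects model the review actually used, with hypothetical per-study values:

```python
import math

def pooled_logit(props, ns):
    """Inverse-variance pooling of proportions on the logit scale:
    a simplified univariate stand-in for a bivariate random-effects model."""
    logits, weights = [], []
    for p, n in zip(props, ns):
        logit = math.log(p / (1 - p))
        var = 1 / (n * p * (1 - p))  # delta-method variance of the logit
        logits.append(logit)
        weights.append(1 / var)
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1 / (1 + math.exp(-pooled))  # back-transform to a proportion

# Hypothetical per-study sensitivities and sample sizes (not from the review)
print(round(pooled_logit([0.80, 0.85, 0.78], [120, 200, 90]), 3))
```

The bivariate model additionally estimates between-study heterogeneity and the correlation between sensitivity and specificity, which this simplified version omits.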

Machine Learning-Based Risk Assessment of Myasthenia Gravis Onset in Thymoma Patients and Analysis of Their Correlations and Causal Relationships.

Liu W, Wang W, Zhang H, Guo M

PubMed · Jun 25 2025
The study aims to utilize interpretable machine learning models to predict the risk of myasthenia gravis onset in thymoma patients and investigate the intrinsic correlations and causal relationships between them. A comprehensive retrospective analysis was conducted on 172 thymoma patients diagnosed at two medical centers between 2018 and 2024. The cohort was bifurcated into a training set (n = 134) and test set (n = 38) to develop and validate risk predictive models. Radiomic and deep features were extracted from tumor regions across three CT phases: non-enhanced, arterial, and venous. Through rigorous feature selection employing Spearman's rank correlation coefficient and LASSO (Least Absolute Shrinkage and Selection Operator) regularization, 12 optimal imaging features were identified. These were integrated with 11 clinical parameters and one pathological subtype variable to form a multi-dimensional feature matrix. Six machine learning algorithms were subsequently implemented for model construction and comparative analysis. We utilized SHAP (SHapley Additive exPlanation) to interpret the model and employed doubly robust learner to perform a potential causal analysis between thymoma and myasthenia gravis (MG). All six models demonstrated satisfactory predictive capabilities, with the support vector machine (SVM) model exhibiting superior performance on the test cohort. It achieved an area under the curve (AUC) of 0.904 (95% confidence interval [CI] 0.798-1.000), outperforming other models such as logistic regression, multilayer perceptron (MLP), and others. The model's predictive result substantiates the strong correlation between thymoma and MG. Additionally, our analysis revealed the existence of a significant causal relationship between them, and high-risk tumors significantly elevated the risk of MG by an average treatment effect (ATE) of 9.2%. 
This implies that thymoma patients with types B2 and B3 face a considerably higher risk of developing MG compared to those with types A, AB, and B1. The model provides a novel and effective tool for evaluating the risk of MG development in patients with thymoma. Furthermore, correlation and causal analysis have unveiled pathways that connect the tumor to the risk of MG, with a notably higher incidence of MG observed in high-risk pathological subtypes. These insights contribute to a deeper understanding of MG and drive a paradigm shift in medical practice from passive treatment to proactive intervention.

Alterations in the functional MRI-based temporal brain organisation in individuals with obesity.

Lee S, Namgung JY, Han JH, Park BY

PubMed · Jun 25 2025
Obesity is associated with functional alterations in the brain. Although spatial organisation changes in the brains of individuals with obesity have been widely studied, the temporal dynamics in their brains remain poorly understood. Therefore, in this study, we investigated variations in the intrinsic neural timescale (INT) across different degrees of obesity using resting-state functional and diffusion magnetic resonance imaging data from the enhanced Nathan Kline Institute Rockland Sample database. We examined the relationship between the INT and obesity phenotypes using supervised machine learning, controlling for age and sex. To further explore the structure-function characteristics of these regions, we assessed the modular network properties by analysing the participation coefficients and within-module degree derived from the structure-function coupling matrices. Finally, the INT values of the identified regions were used to predict eating behaviour traits. A significant negative correlation between the INT and obesity phenotypes was observed, particularly in the default mode, limbic and reward networks. We found a negative association with the participation coefficients, suggesting that shorter INT values in higher-order association areas are related to reduced network integration. Moreover, the INT values of these identified regions moderately predicted eating behaviours, underscoring the potential of the INT as a candidate marker for obesity and eating behaviours. These findings provide insight into the temporal organisation of neural activity in obesity, highlighting the role of specific brain networks in shaping behavioural outcomes.
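The intrinsic neural timescale is often estimated from a region's resting-state signal by summing its autocorrelation function over positive lags until the first zero crossing. A stdlib sketch of that common estimator (one possible INT definition, not necessarily the exact variant used in this study):

```python
def intrinsic_timescale(signal):
    """Estimate an intrinsic neural timescale: sum the autocorrelation
    function over increasing lags until it first drops below zero.
    Returned value is in units of the sampling step (e.g. the TR)."""
    n = len(signal)
    mean = sum(signal) / n
    dev = [x - mean for x in signal]
    var = sum(d * d for d in dev)
    total = 0.0
    for lag in range(1, n):
        acf = sum(dev[t] * dev[t + lag] for t in range(n - lag)) / var
        if acf < 0:
            break
        total += acf
    return total

# A slowly varying signal should show a longer timescale than a fast one
slow = [0, 1, 2, 3, 4, 5, 4, 3, 2, 1] * 2
fast = [0, 5] * 10
print(intrinsic_timescale(slow) > intrinsic_timescale(fast))  # True
```

Shorter timescales indicate faster decay of temporal autocorrelation, which is the property the study relates to reduced network integration in higher-order association areas.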

[Analysis of the global competitive landscape in artificial intelligence medical device research].

Chen J, Pan L, Long J, Yang N, Liu F, Lu Y, Ouyang Z

PubMed · Jun 25 2025
The objective of this study is to map the global scientific competitive landscape in the field of artificial intelligence (AI) medical devices using scientific data. A bibliometric analysis was conducted using the Web of Science Core Collection to examine global research trends in AI-based medical devices. As of the end of 2023, a total of 55 147 relevant publications were identified worldwide, with 76.6% published between 2018 and 2024. Research in this field has primarily focused on AI-assisted medical image and physiological signal analysis. At the national level, China (17 991 publications) and the United States (14 032 publications) lead in output. China has shown a rapid increase in publication volume, with its 2023 output exceeding twice that of the U.S.; however, the U.S. maintains a higher average citation per paper (China: 16.29; U.S.: 35.99). At the institutional level, seven Chinese institutions and three U.S. institutions rank among the global top ten in terms of publication volume. At the researcher level, prominent contributors include Acharya U Rajendra, Rueckert Daniel and Tian Jie, who have extensively explored AI-assisted medical imaging. Some researchers have specialized in specific imaging applications, such as Yang Xiaofeng (AI-assisted precision radiotherapy for tumors) and Shen Dinggang (brain imaging analysis). Others, including Gao Xiaorong and Ming Dong, focus on AI-assisted physiological signal analysis. The results confirm the rapid global development of AI in the medical device field, with "AI + imaging" emerging as the most mature direction. China and the U.S. maintain absolute leadership in this area: China slightly leads in publication volume, while the U.S., having started earlier, demonstrates higher research quality. Both countries host a large number of active research teams in this domain.
