
Evaluating the diagnostic accuracy of WHO-recommended treatment decision algorithms for childhood tuberculosis using an individual person dataset: a study protocol.

Olbrich L, Larsson L, Dodd PJ, Palmer M, Nguyen MHTN, d'Elbée M, Hesseling AC, Heinrich N, Zar HJ, Ntinginya NE, Khosa C, Nliwasa M, Verghese V, Bonnet M, Wobudeya E, Nduna B, Moh R, Mwanga J, Mustapha A, Breton G, Taguebue JV, Borand L, Marcy O, Chabala C, Seddon J, van der Zalm MM

PubMed | Sep 17, 2025
In 2022, the WHO conditionally recommended the use of treatment decision algorithms (TDAs) for treatment decision-making in children <10 years with presumptive tuberculosis (TB), aiming to decrease the substantial case detection gap and improve treatment access in high TB-incidence settings. WHO also called for external validation of these TDAs. Within the Decide-TB project (PACT ID: PACTR202407866544155, 23 July 2024), we aim to generate an individual-participant dataset (IPD) from prospective TB diagnostic accuracy cohorts (RaPaed-TB, UMOYA and two cohorts from TB-Speed). Using the IPD, we aim to: (1) assess the diagnostic accuracy of published TDAs using a set of consensus case definitions produced by the National Institutes of Health as reference standard (confirmed and unconfirmed vs unlikely TB); (2) evaluate the added value of novel tools (including biomarkers and artificial intelligence-interpreted radiology) in the existing TDAs; (3) generate an artificial population, modelling the target population of children eligible for WHO-endorsed TDAs presenting at primary and secondary healthcare levels, and assess the diagnostic accuracy of published TDAs in it; and (4) identify clinical predictors of radiological disease severity in the study population of children with presumptive TB. This study will externally validate the first data-driven WHO TDAs in a large, well-characterised and diverse paediatric IPD derived from four large paediatric cohorts of children investigated for TB. The study has received ethical clearance for sharing secondary deidentified data from the ethics committees of the parent studies (RaPaed-TB, UMOYA and TB-Speed); as the aims of this study were part of the parent studies' protocols, a separate approval was not necessary. Study findings will be published in peer-reviewed journals and disseminated at local, regional and international scientific meetings and conferences.
This database will serve as a catalyst for the assessment of the inclusion of novel tools and the generation of an artificial population to simulate the impact of novel diagnostic pathways for TB in children at lower levels of healthcare. TDAs have the potential to close the diagnostic gap in childhood TB. Further finetuning of the currently available algorithms will facilitate this and improve access to care.
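The protocol's first aim reduces to computing sensitivity and specificity of each TDA against the consensus reference standard, with "confirmed" and "unconfirmed" TB treated as disease-positive and "unlikely" TB as disease-negative. A minimal sketch of that evaluation, with a hypothetical TDA score threshold and made-up records:

```python
# Hedged sketch: evaluating a treatment decision algorithm (TDA) against the
# NIH consensus reference standard, where "confirmed" and "unconfirmed" TB
# count as disease-positive and "unlikely" TB as disease-negative.
# The example records and the score threshold of 10 are hypothetical.

def tda_accuracy(records, threshold=10):
    """records: list of (tda_score, reference_label) pairs.
    A child is TDA-positive when the score meets the threshold."""
    tp = fp = fn = tn = 0
    for score, label in records:
        positive = score >= threshold
        diseased = label in ("confirmed", "unconfirmed")
        if positive and diseased:
            tp += 1
        elif positive:
            fp += 1
        elif diseased:
            fn += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

cohort = [(14, "confirmed"), (11, "unconfirmed"), (6, "unlikely"),
          (12, "unlikely"), (8, "confirmed"), (13, "unconfirmed")]
sens, spec = tda_accuracy(cohort)   # 3/4 diseased detected, 1/2 healthy cleared
```

The same confusion-matrix counts also yield the case-detection-gap framing in the abstract: false negatives are precisely the missed cases a better-tuned TDA would recover.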

<sup>18</sup>F-FDG PET/CT-based Radiomics Analysis of Different Machine Learning Models for Predicting Pathological Highly Invasive Non-small Cell Lung Cancer.

Li Y, Shen MJ, Yi JW, Zhao QQ, Zhao QP, Hao LY, Qi JJ, Li WH, Wu XD, Zhao L, Wang Y

PubMed | Sep 17, 2025
This study aimed to develop and validate machine learning models integrating clinicoradiological and radiomic features from 2-[<sup>18</sup>F]-fluoro-2-deoxy-D-glucose (<sup>18</sup>F-FDG) positron emission tomography/computed tomography (PET/CT) to predict pathological high invasiveness in cT1-sized (tumor size ≤ 3 cm) non-small cell lung cancer (NSCLC). We retrospectively reviewed 1459 patients with NSCLC (633 with pathological high invasiveness and 826 without) from two medical centers. Patients with cT1-sized NSCLC were included. A total of 1145 radiomic features were extracted per modality (PET and CT) for each patient. Optimal predictors were selected to construct a radiomics score (Rad-score) for the PET/CT radiomics model. A combined model incorporating significant clinicoradiological features and the Rad-score was developed. Logistic regression (LR), random forest (RF), support vector machine (SVM), and extreme gradient boosting (XGBoost) algorithms were used to train the combined model. Model performance was assessed using the area under the receiver operating characteristic (ROC) curve (AUC), calibration curves, and decision curve analysis (DCA). Shapley Additive Explanations (SHAP) was applied to visualize the prediction process. The radiomics model was built using 11 radiomic features, achieving AUCs of 0.851 (training), 0.859 (internal validation), and 0.829 (external validation). Among all models, the XGBoost combined model demonstrated the best predictive performance, with AUCs of 0.958, 0.919, and 0.903, respectively, along with good calibration and high net benefit. The XGBoost combined model showed strong performance in predicting pathological high invasiveness in cT1-sized NSCLC.
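The two-step design described here (a Rad-score from a radiomics-only model, then a combined clinical + Rad-score classifier) can be sketched as follows. This is an illustration on synthetic data, with scikit-learn's GradientBoostingClassifier standing in for the XGBoost named in the abstract:

```python
# Hedged sketch of the two-step design: a radiomics score (Rad-score) from a
# logistic model over selected radiomic features, then a combined
# clinical + Rad-score classifier. Data are synthetic; feature counts are
# illustrative, not the paper's.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=14, n_informative=6,
                           random_state=0)
radiomic, clinical = X[:, :11], X[:, 11:]   # 11 "radiomic" + 3 "clinical" cols
Xr_tr, Xr_te, Xc_tr, Xc_te, y_tr, y_te = train_test_split(
    radiomic, clinical, y, test_size=0.3, random_state=0)

# Step 1: Rad-score = predicted probability of the radiomics-only model.
rad_model = LogisticRegression(max_iter=1000).fit(Xr_tr, y_tr)
rad_tr = rad_model.predict_proba(Xr_tr)[:, 1:]
rad_te = rad_model.predict_proba(Xr_te)[:, 1:]

# Step 2: combined model on clinical features plus the Rad-score.
combined = GradientBoostingClassifier(random_state=0).fit(
    np.hstack([Xc_tr, rad_tr]), y_tr)
auc = roc_auc_score(
    y_te, combined.predict_proba(np.hstack([Xc_te, rad_te]))[:, 1])
```

Collapsing the radiomic block into a single Rad-score keeps the combined model small, which is one reason this pattern recurs across radiomics papers.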

Lightweight Edge-Aware Feature Extraction for Point-of-Care Health Monitoring.

Riaz F, Muzammal M, Atanbori J, Sodhro AH

PubMed | Sep 17, 2025
Osteoporosis classification from X-ray images remains challenging due to the high visual similarity between scans of healthy individuals and osteoporotic patients. In this paper, we propose a novel framework that extracts a discriminative gradient-based map from each X-ray image, capturing subtle structural differences that are not readily apparent to the human eye. The method uses analytic Gabor filters to decompose the image into multi-scale, multi-orientation components. At each pixel, we construct a filter response matrix, from which second-order texture features are derived via covariance analysis, followed by eigenvalue decomposition to capture dominant local patterns. The resulting Gabor Eigen Map serves as a compact, information-rich representation that is both interpretable and lightweight, making it well-suited for deployment on edge devices. These feature maps are further processed using a convolutional neural network (CNN) to extract high-level descriptors, followed by classification using standard machine learning algorithms. Experimental results demonstrate that the proposed framework outperforms existing methods in identifying osteoporotic cases, while offering strong potential for real-time, privacy-preserving inference at the point of care.
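The core of the pipeline (a per-pixel filter-response matrix, a local covariance of those responses, and eigenvalue decomposition to pick out the dominant pattern) can be sketched in NumPy. Filter sizes, orientations, and window sizes below are illustrative assumptions, and the real Gabor kernel here omits the analytic (complex) part used in the paper:

```python
# Hedged sketch of the Gabor Eigen Map idea: per-pixel responses from a small
# Gabor filter bank, a local covariance of those responses, and the dominant
# eigenvalue as the pixel's feature. All parameters are illustrative.
import numpy as np

def gabor_kernel(size, theta, freq, sigma):
    """Real Gabor kernel: a cosine carrier under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def gabor_eigen_map(image, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4),
                    freq=0.25, size=7, sigma=2.0, win=3):
    h, w = image.shape
    pad = size // 2
    padded = np.pad(image, pad, mode="reflect")
    # Filter response matrix: one channel per orientation, per pixel.
    responses = np.empty((len(thetas), h, w))
    for i, th in enumerate(thetas):
        k = gabor_kernel(size, th, freq, sigma)
        for r in range(h):
            for c in range(w):
                responses[i, r, c] = np.sum(padded[r:r+size, c:c+size] * k)
    # Dominant eigenvalue of the response covariance in a local window.
    out = np.zeros((h, w))
    m = win // 2
    for r in range(m, h - m):
        for c in range(m, w - m):
            patch = responses[:, r-m:r+m+1, c-m:c+m+1].reshape(len(thetas), -1)
            cov = np.cov(patch)
            out[r, c] = np.linalg.eigvalsh(cov)[-1]  # largest eigenvalue
    return out

img = np.zeros((12, 12)); img[:, 6:] = 1.0   # toy image with a vertical edge
emap = gabor_eigen_map(img)                   # large near the edge, ~0 in flat areas
```

The map responds where filter outputs vary locally (texture, edges) and stays near zero in flat regions, which is what makes it a compact structural descriptor for the downstream CNN.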

Multimodal deep learning integration for predicting renal function outcomes in living donor kidney transplantation: a retrospective cohort study.

Kim JM, Jung H, Kwon HE, Ko Y, Jung JH, Shin S, Kim YH, Kim YH, Jun TJ, Kwon H

PubMed | Sep 17, 2025
Accurately predicting post-transplant renal function is essential for optimizing donor-recipient matching and improving long-term outcomes in kidney transplantation (KT). Traditional models using only structured clinical data often fail to account for complex biological and anatomical factors. This study aimed to develop and validate a multimodal deep learning model that integrates computed tomography (CT) imaging, radiology report text, and structured clinical variables to predict 1-year estimated glomerular filtration rate (eGFR) in living donor kidney transplantation (LDKT) recipients. A retrospective cohort of 1,937 LDKT recipients was selected from 3,772 KT cases. Exclusions included deceased donor KT, immunologic high-risk recipients (n = 304), missing CT imaging, early graft complications, and anatomical abnormalities. eGFR at 1 year post-transplant was classified into four categories: > 90, 75-90, 60-75, and 45-60 mL/min/1.73 m<sup>2</sup>. Radiology reports were embedded using BioBERT, while CT videos were encoded using a CLIP-based visual extractor. These were fused with structured clinical features and input into ensemble classifiers including XGBoost. Model performance was evaluated using cross-validation and SHapley Additive exPlanations (SHAP) analysis. The full multimodal model achieved a macro F1 score of 0.675, micro F1 score of 0.704, and weighted F1 score of 0.698, substantially outperforming the clinical-only model (macro F1 = 0.292). CT imaging contributed more than text data (clinical + CT macro F1 = 0.651; clinical + text = 0.486). The model showed the highest accuracy in the >90 (F1 = 0.7773) and 60-75 (F1 = 0.7303) categories. SHAP analysis identified donor age, BMI, and donor sex as key predictors. Dimensionality reduction confirmed internal feature validity. Multimodal deep learning integrating clinical, imaging, and textual data enhances prediction of post-transplant renal function.
This framework offers a robust and interpretable approach for individualized risk stratification in LDKT, supporting precision medicine in transplantation.
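The three F1 summaries reported above weight the four eGFR categories differently: macro averages per-class F1 equally, micro pools all predictions (equal to accuracy for single-label multiclass), and weighted scales per-class F1 by class support. A small worked example with scikit-learn, using hypothetical category predictions:

```python
# Hedged sketch: macro vs micro vs weighted F1 on toy four-category eGFR
# predictions (">90", "75-90", "60-75", "45-60"). Labels are illustrative.
from sklearn.metrics import f1_score

y_true = [">90", ">90", "75-90", "60-75", "60-75", "45-60", ">90", "75-90"]
y_pred = [">90", "75-90", "75-90", "60-75", "60-75", "45-60", ">90", "60-75"]

macro = f1_score(y_true, y_pred, average="macro")        # unweighted class mean
micro = f1_score(y_true, y_pred, average="micro")        # pooled over samples
weighted = f1_score(y_true, y_pred, average="weighted")  # class-support weights
```

The three can diverge sharply under class imbalance, which is why the abstract reports all of them: a model that ignores a rare eGFR category is punished by macro F1 but not by micro F1.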

Automating classification of treatment responses to combined targeted therapy and immunotherapy in HCC.

Quan B, Dai M, Zhang P, Chen S, Cai J, Shao Y, Xu P, Li P, Yu L

PubMed | Sep 17, 2025
Tyrosine kinase inhibitors (TKIs) combined with immunotherapy regimens are now widely used for treating advanced hepatocellular carcinoma (HCC), but their clinical efficacy is limited to a subset of patients. Considering that the vast majority of advanced HCC patients lose the opportunity for liver resection and thus cannot provide tumor tissue samples, we leveraged clinical and imaging data to construct a multimodal convolutional neural network (CNN)-Transformer model for predicting and analyzing tumor response to TKI-immunotherapy. An automatic liver tumor segmentation system, based on a two-stage 3D U-Net framework, delineates lesions by first segmenting the liver parenchyma and then precisely localizing the tumor. This approach effectively addresses the variability in clinical data and significantly reduces bias introduced by manual intervention. Thus, we developed a clinical model using only pre-treatment clinical information, a CNN model using only pre-treatment magnetic resonance imaging data, and an advanced multimodal CNN-Transformer model that fused imaging and clinical parameters using a training cohort (n = 181) and then validated them using an independent cohort (n = 30). In the validation cohort, the area under the curve (95% confidence interval) values were 0.720 (0.710-0.731), 0.695 (0.683-0.707), and 0.785 (0.760-0.810) for the clinical, CNN, and multimodal models, respectively, indicating that the multimodal model significantly outperformed the single-modality baseline models across validations. Finally, single-cell sequencing of the surgical tumor specimens revealed tumor ecosystem diversity associated with treatment response, providing a preliminary biological validation for the prediction model. In summary, this multimodal model effectively integrates imaging and clinical features of HCC patients, has superior performance in predicting tumor response to TKI-immunotherapy, and provides a reliable tool for optimizing personalized treatment strategies.
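AUC values with 95% confidence intervals, as reported above, are commonly obtained by bootstrap resampling of the validation cohort. A sketch with synthetic scores, computing AUC via the rank (Mann-Whitney) formulation; the data, resample count, and effect size are all assumptions for illustration:

```python
# Hedged sketch: point AUC plus a percentile-bootstrap 95% CI, on synthetic
# responder labels and model scores. Not the paper's CI method, which is
# unstated in the abstract.
import numpy as np

def auc_rank(labels, scores):
    """AUC = P(score_pos > score_neg), via the Mann-Whitney rank formula."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
s = y * 0.8 + rng.normal(size=200)      # responders score higher on average
point = auc_rank(y, s)

boots = []
for _ in range(500):
    idx = rng.integers(0, len(y), len(y))
    if 0 < y[idx].sum() < len(idx):     # need both classes in the resample
        boots.append(auc_rank(y[idx], s[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])
```

The narrow CIs in the abstract (e.g. 0.760-0.810) suggest many resamples or an analytic method such as DeLong's; either way the interpretation is the same.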

A machine learning model based on high-frequency ultrasound for differentiating benign and malignant skin tumors.

Qin Y, Zhang Z, Qu X, Liu W, Yan Y, Huang Y

PubMed | Sep 17, 2025
This study aims to explore the potential of machine learning as a non-invasive automated tool for skin tumor differentiation. Data were included from 156 lesions, collected retrospectively from September 2021 to February 2024. Univariate and multivariate analyses of traditional clinical features were performed to establish a logistic regression model. Ultrasound-based radiomics features were extracted from grayscale images after delineating regions of interest (ROIs). Independent samples t-tests, Mann-Whitney U tests, and Least Absolute Shrinkage and Selection Operator (LASSO) regression were employed to select ultrasound-based radiomics features. Subsequently, five machine learning methods were used to construct radiomics models based on the selected features. Model performance was evaluated using receiver operating characteristic (ROC) curves and the DeLong test. Age, poorly defined margins, and irregular shape were identified as independent risk factors for malignant skin tumors. The multilayer perceptron (MLP) model achieved the best performance, with area under the curve (AUC) values of 0.963 and 0.912, respectively. The results of DeLong's test revealed a statistically significant difference in performance between the MLP and clinical models (Z=2.611, p=0.009). Machine learning-based skin tumor models may serve as a potential non-invasive method to improve diagnostic efficiency.
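The LASSO step in this pipeline shrinks uninformative radiomic coefficients exactly to zero, so feature selection is just "keep the non-zero coefficients". A sketch on synthetic data with scikit-learn's LassoCV; in the paper this follows the t-test / Mann-Whitney U filters, which are omitted here:

```python
# Hedged sketch of LASSO-based radiomic feature selection: standardize, fit
# LassoCV, keep features with non-zero coefficients. Data are synthetic;
# the 156-sample, 60-feature shape is illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=156, n_features=60, n_informative=8,
                           random_state=0)
Xs = StandardScaler().fit_transform(X)   # LASSO assumes comparable scales

lasso = LassoCV(cv=5, random_state=0).fit(Xs, y)
selected = np.flatnonzero(lasso.coef_)   # indices of retained radiomic features
```

Standardizing first matters: without it, the L1 penalty preferentially zeroes out features whose raw units happen to be large.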

Machine learning in sex estimation using CBCT morphometric measurements of canines.

Silva-Sousa AC, Dos Santos Cardoso G, Branco AC, Küchler EC, Baratto-Filho F, Candemil AP, Sousa-Neto MD, de Araujo CM

PubMed | Sep 17, 2025
The aim of this study was to assess measurements of the maxillary canines using Cone Beam Computed Tomography (CBCT) and develop a machine learning model for sex estimation. CBCT scans from 610 patients were screened. The maxillary canines were examined to measure total tooth length, average enamel thickness, and mesiodistal width. Various supervised machine learning algorithms were employed to construct predictive models, including Decision Tree, Gradient Boosting Classifier, K-Nearest Neighbors (KNN), Logistic Regression, Multi-Layer Perceptron (MLP), Random Forest Classifier, Support Vector Machine (SVM), XGBoost, LightGBM, and CatBoost. Validation of each model was performed using a 10-fold cross-validation approach. Metrics such as area under the curve (AUC), accuracy, recall, precision, and F1 Score were computed, with ROC curves generated for visualization. The total length of the tooth proved to be the variable with the highest predictive power. The algorithms that demonstrated superior performance in terms of AUC were LightGBM and Logistic Regression, achieving AUC values of 0.77 [CI95% = 0.65-0.89] and 0.75 [CI95% = 0.62-0.86] for the test data, and 0.74 [CI95% = 0.70-0.80] and 0.75 [CI95% = 0.70-0.79] in cross-validation, respectively. Both models also showed high precision values. The use of maxillary canine measurements, combined with supervised machine learning techniques, has proven to be viable for sex estimation. The machine learning approach combined with CBCT measurements is a low-cost option, as it relies solely on a single anatomical structure.
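The evaluation protocol described here (10-fold cross-validation with AUC, accuracy, recall, precision, and F1 computed per fold) maps directly onto scikit-learn's multi-metric `cross_validate`. A sketch with synthetic stand-ins for the three canine measurements:

```python
# Hedged sketch: 10-fold CV with the metrics listed in the abstract, using
# LogisticRegression (one of the paper's models) on synthetic stand-ins for
# total tooth length, enamel thickness, and mesiodistal width.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

X, y = make_classification(n_samples=610, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)
scores = cross_validate(LogisticRegression(max_iter=1000), X, y, cv=10,
                        scoring=["roc_auc", "accuracy", "recall",
                                 "precision", "f1"])
mean_auc = scores["test_roc_auc"].mean()   # averaged over the 10 folds
```

Reporting both held-out-test and cross-validation AUCs with CIs, as the abstract does, guards against an optimistic single split.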

Taylor-Series Expanded Kolmogorov-Arnold Network for Medical Imaging Classification

Kaniz Fatema, Emad A. Mohammed, Sukhjit Singh Sehra

arXiv preprint | Sep 17, 2025
Effective and interpretable classification of medical images is a challenge in computer-aided diagnosis, especially in resource-limited clinical settings. This study introduces spline-based Kolmogorov-Arnold Networks (KANs) for accurate medical image classification with limited, diverse datasets. The models include SBTAYLOR-KAN, integrating B-splines with Taylor series; SBRBF-KAN, combining B-splines with Radial Basis Functions; and SBWAVELET-KAN, embedding B-splines in Morlet wavelet transforms. These approaches leverage spline-based function approximation to capture both local and global nonlinearities. The models were evaluated on brain MRI, chest X-rays, tuberculosis X-rays, and skin lesion images without preprocessing, demonstrating the ability to learn directly from raw data. Extensive experiments, including cross-dataset validation and data reduction analysis, showed strong generalization and stability. SBTAYLOR-KAN achieved up to 98.93% accuracy, with a balanced F1-score, maintaining over 86% accuracy using only 30% of the training data across three datasets. Despite class imbalance in the skin cancer dataset, experiments on both imbalanced and balanced versions showed SBTAYLOR-KAN outperforming other models, achieving 68.22% accuracy. Unlike traditional CNNs, which require millions of parameters (e.g., ResNet50 with 24.18M), SBTAYLOR-KAN achieves comparable performance with just 2,872 trainable parameters, making it more suitable for constrained medical environments. Gradient-weighted Class Activation Mapping (Grad-CAM) was used for interpretability, highlighting relevant regions in medical images. This framework provides a lightweight, interpretable, and generalizable solution for medical image classification, addressing the challenges of limited datasets and data-scarce scenarios in clinical AI applications.
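The tiny parameter count quoted above (2,872 vs ResNet50's 24.18M) comes from replacing learned convolutional filters with learned coefficients over fixed function bases. A simplified illustration of the Taylor-series half of that idea, expanding each scalar input into a truncated Taylor basis (this is not the paper's B-spline + Taylor hybrid, just the underlying approximation principle):

```python
# Hedged sketch: a truncated Taylor basis as a cheap learnable edge function.
# With all coefficients set to 1, the basis reproduces exp(x)'s Taylor
# polynomial, showing how few parameters a smooth 1-D function needs.
import numpy as np

def taylor_basis(x, degree=4):
    """Map each x to [1, x, x^2/2!, x^3/3!, ...]."""
    terms = [np.ones_like(x)]
    fact = 1.0
    for k in range(1, degree + 1):
        fact *= k
        terms.append(x**k / fact)
    return np.stack(terms, axis=-1)

x = np.linspace(-1, 1, 5)
basis = taylor_basis(x)          # shape (5, 5): five points, degree-4 basis
coeffs = np.ones(5)              # exp(x) has all-ones coefficients in this basis
approx = basis @ coeffs          # close to exp(x) on [-1, 1]
```

In a KAN, such per-edge coefficient vectors are the trainable parameters, so the total count scales with edges times basis size rather than with filter banks.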

Deep learning-based automated detection and diagnosis of gouty arthritis in ultrasound images of the first metatarsophalangeal joint.

Xiao L, Zhao Y, Li Y, Yan M, Liu M, Ning C

PubMed | Sep 17, 2025
This study aimed to develop a deep learning (DL) model for automatic detection and diagnosis of gouty arthritis (GA) in the first metatarsophalangeal joint (MTPJ) using ultrasound (US) images. A retrospective study included individuals who underwent first MTPJ ultrasonography between February and July 2023. A five-fold cross-validation method (4:1 training-to-testing split per fold) was employed. A deep residual convolutional neural network (CNN) was trained, and Gradient-weighted Class Activation Mapping (Grad-CAM) was used for visualization. Different ResNet18 models with varying residual blocks (2, 3, 4, 6) were compared to select the optimal model for image classification. Diagnostic decisions were based on a threshold proportion of abnormal images, determined from the training set. A total of 2401 US images from 260 patients (149 gout, 111 control) were analyzed. The model with 3 residual blocks performed best, achieving an AUC of 0.904 (95% CI: 0.887~0.927). Visualization results aligned with radiologist opinions in 2000 images. The diagnostic model attained an accuracy of 91.1% (95% CI: 90.4%~91.8%) on the testing set, with a diagnostic threshold of 0.328. The DL model demonstrated excellent performance in automatically detecting and diagnosing GA in the first MTPJ.
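The patient-level decision rule described here is simple to state in code: a patient is classified as gout when the proportion of their US images flagged abnormal by the CNN exceeds the learned threshold (0.328 in the abstract). A sketch with hypothetical per-image CNN outputs:

```python
# Hedged sketch of the paper's image-to-patient aggregation rule. The
# per-image flags below are hypothetical; 0.328 is the threshold reported
# in the abstract.
def diagnose(image_flags, threshold=0.328):
    """image_flags: per-image CNN outputs for one patient (1 = abnormal)."""
    proportion = sum(image_flags) / len(image_flags)
    return proportion > threshold

patient_a = [1, 1, 0, 1, 0, 1, 1, 0]   # 6/8 images abnormal -> gout
patient_b = [0, 0, 1, 0, 0, 0, 0, 0]   # 1/8 images abnormal -> control
```

Aggregating over all of a patient's images this way makes the diagnosis robust to occasional misclassified frames, which is presumably why the threshold is tuned on the training set rather than fixed at 0.5.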

Augmenting conventional criteria: a CT-based deep learning radiomics nomogram for early recurrence risk stratification in hepatocellular carcinoma after liver transplantation.

Wu Z, Liu D, Ouyang S, Hu J, Ding J, Guo Q, Gao J, Luo J, Ren K

PubMed | Sep 17, 2025
We developed a deep learning radiomics nomogram (DLRN) using CT scans to improve clinical decision-making and risk stratification for early recurrence of hepatocellular carcinoma (HCC) after transplantation, which typically has a poor prognosis. In this two-center study, 245 HCC patients who had contrast-enhanced CT before liver transplantation were split into a training set (n = 184) and a validation set (n = 61). We extracted radiomics and deep learning features from tumor and peritumor areas on preoperative CT images. The DLRN was created by combining these features with significant clinical variables using multivariate logistic regression. Its performance was validated against four traditional risk criteria to assess its additional value. The DLRN model showed strong predictive accuracy for early HCC recurrence post-transplant, with AUCs of 0.884 and 0.829 in the training and validation groups. High DLRN scores were associated with a 16.370-fold increase in recurrence risk (95% CI: 7.100-31.690; p < 0.001). Combining DLRN with Metro-Ticket 2.0 criteria yielded the best prediction (AUC: training/validation: 0.936/0.863). The CT-based DLRN offers a non-invasive method for predicting early recurrence following liver transplantation in patients with HCC. Furthermore, it provides substantial additional predictive value when combined with traditional prognostic scoring systems. AI-driven predictive models utilizing preoperative CT imaging enable accurate identification of early HCC recurrence risk following liver transplantation, facilitating risk-stratified surveillance protocols and optimized post-transplant management. A CT-based DLRN for predicting early HCC recurrence post-transplant was developed. The DLRN predicted recurrence with high accuracy (AUC: 0.829), with a 16.370-fold increased recurrence risk in high-score patients. Combining DLRN with Metro-Ticket 2.0 criteria achieved optimal prediction (AUC: 0.863).
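The "16.370-fold increased risk" quoted above is an odds ratio, i.e. exp(beta) for the binarised DLRN score in the multivariate logistic model. A sketch of how such a figure is derived, on synthetic data with an assumed true effect size:

```python
# Hedged sketch: odds ratio as exp(coefficient) from a logistic model. The
# effect size (beta = 2.5, OR ~ 12) and data are synthetic, not the paper's.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
high_dlrn = rng.integers(0, 2, 300)            # 1 = high DLRN score
logit = -2.0 + 2.5 * high_dlrn                 # assumed true log-odds model
y = rng.random(300) < 1 / (1 + np.exp(-logit)) # simulated recurrence outcomes

# Large C ~ near-unpenalized fit, so coef_ approximates the MLE of beta.
model = LogisticRegression(C=1e6, max_iter=1000).fit(
    high_dlrn.reshape(-1, 1), y)
odds_ratio = float(np.exp(model.coef_[0, 0]))  # estimate of exp(beta)
```

The confidence interval around such an estimate (7.100-31.690 in the abstract) would come from the coefficient's standard error or a bootstrap, neither of which scikit-learn reports directly.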
