
Machine Learning Models for Predicting Mortality in Pneumonia Patients.

Pavlovic V, Haque MS, Grubor N, Pavlovic A, Stanisavljevic D, Milic N

PubMed · Jun 26, 2025
Pneumonia remains a significant cause of hospital mortality, prompting the need for precise mortality prediction methods. This study conducted a systematic review to identify predictors of mortality used in Machine Learning (ML) models and then applied these methods to hospitalized pneumonia patients at the University Clinical Centre Zvezdara. The systematic review identified 16 studies (313,572 patients), revealing common mortality predictors including age, oxygen levels, and albumin. A Random Forest (RF) model was developed using local data (n=343), achieving an accuracy of 99% and an AUC of 0.99. Key predictors identified were chest X-ray worsening, ventilator use, age, and oxygen support. ML demonstrated high potential for accurately predicting pneumonia mortality, surpassing traditional severity scores and highlighting its practical clinical utility.
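For readers who want to see the shape of such a model, here is a minimal sketch of training a Random Forest on tabular clinical predictors and reporting accuracy and AUC with scikit-learn. It is not the authors' code; the file name and column names are hypothetical placeholders.

```python
# Illustrative sketch (not the study's code): Random Forest on tabular
# clinical predictors, reporting accuracy and AUC on a held-out split.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

df = pd.read_csv("pneumonia_cohort.csv")          # hypothetical local dataset
X = df[["age", "oxygen_support", "ventilator_use",
        "cxr_worsening", "albumin"]]              # hypothetical predictor columns
y = df["in_hospital_death"]                       # hypothetical binary outcome

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

rf = RandomForestClassifier(n_estimators=500, random_state=42)
rf.fit(X_train, y_train)

pred = rf.predict(X_test)
prob = rf.predict_proba(X_test)[:, 1]
print("accuracy:", accuracy_score(y_test, pred))
print("AUC:", roc_auc_score(y_test, prob))
```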

Deep transfer learning radiomics combined with explainable machine learning for preoperative thymoma risk prediction based on CT.

Wu S, Fan L, Wu Y, Xu J, Guo Y, Zhang H, Xu Z

PubMed · Jun 26, 2025
To develop and validate a computed tomography (CT)-based deep transfer learning radiomics model combined with explainable machine learning for preoperative risk prediction of thymoma. This retrospective study included 173 pathologically confirmed thymoma patients from our institution in the training group and 93 patients from two external centers in the external validation group. Tumors were classified according to the World Health Organization simplified criteria as low-risk types (A, AB, and B1) or high-risk types (B2 and B3). Radiomics features and deep transfer learning features were extracted from venous-phase contrast-enhanced CT images using a modified Inception V3 network. Principal component analysis and least absolute shrinkage and selection operator (LASSO) regression identified 20 key predictors. Six classifiers (decision tree, gradient boosting machine, k-nearest neighbors, naïve Bayes, random forest (RF), and support vector machine) were trained on five feature sets: the CT imaging model, radiomics feature model, deep transfer learning feature model, combined feature model, and combined model. Interpretability was assessed with SHapley Additive exPlanations (SHAP), and an interactive web application was developed for real-time individualized risk prediction and visualization. In the external validation group, the RF classifier achieved the highest area under the receiver operating characteristic curve (AUC), 0.956. In the training group, the AUC values for the CT imaging model, radiomics feature model, deep transfer learning feature model, combined feature model, and combined model were 0.684, 0.831, 0.815, 0.893, and 0.910, respectively; the corresponding AUC values in the external validation group were 0.604, 0.865, 0.880, 0.934, and 0.956. SHAP visualizations revealed the relative contribution of each feature, while the web application provided real-time individual prediction probabilities with interpretative outputs. We developed a CT-based deep transfer learning radiomics model combined with explainable machine learning and an interactive web application; the model achieved high accuracy and transparency for preoperative thymoma risk stratification, facilitating personalized clinical decision-making.
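As an illustration of the feature-selection and interpretability steps named in this abstract (PCA, LASSO, a random forest classifier, SHAP), a brief sketch follows. It is an assumption-laden stand-in, not the study's pipeline; the feature matrix and labels are synthetic.

```python
# Minimal sketch of PCA + LASSO feature selection, RF classification, and
# SHAP interpretation on a synthetic feature matrix (illustrative only).
import numpy as np
import shap
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
features = rng.normal(size=(173, 200))   # stand-in for radiomics + deep features
labels = rng.integers(0, 2, size=173)    # 0 = low-risk, 1 = high-risk thymoma

# Dimensionality reduction followed by LASSO-based selection of key predictors.
pca = PCA(n_components=50, random_state=0)
reduced = pca.fit_transform(features)
lasso = LassoCV(cv=5, random_state=0).fit(reduced, labels)
selected = np.argsort(np.abs(lasso.coef_))[::-1][:20]   # top 20 components

rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(reduced[:, selected], labels)

# SHAP values quantify each selected feature's contribution to the prediction.
explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(reduced[:, selected])
```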

Efficacy of an Automated Pulmonary Embolism (PE) Detection Algorithm on Routine Contrast-Enhanced Chest CT Imaging for Non-PE Studies.

Troutt HR, Huynh KN, Joshi A, Ling J, Refugio S, Cramer S, Lopez J, Wei K, Imanzadeh A, Chow DS

PubMed · Jun 25, 2025
The urgency to accelerate PE management and minimize patient risk has driven the development of artificial intelligence (AI) algorithms designed to provide a swift and accurate diagnosis on dedicated chest imaging (computed tomography pulmonary angiography; CTPA) for suspected PE; however, the accuracy of AI algorithms in detecting incidental PE on non-dedicated CT imaging studies remains unclear and untested. This study explores the potential for a commercial AI algorithm to identify incidental PE on non-dedicated contrast-enhanced CT chest imaging studies. The Viz PE algorithm was deployed to identify the presence of PE on 130 dedicated and 63 non-dedicated contrast-enhanced CT chest exams. For the non-dedicated contrast-enhanced chest CT studies, predictions were 90.48% accurate, with a sensitivity of 14% and a specificity of 100%. Overall, the Viz PE algorithm demonstrated an accuracy of 90.16%, with a specificity of 96% and a sensitivity of 41%. Although the high specificity is promising for ruling in PE, the low sensitivity is a limitation, indicating that the algorithm may miss a substantial number of true-positive incidental PEs. This study demonstrates that commercial AI detection tools hold promise as integral support for detecting PE, particularly when there is a strong clinical indication for their use; however, current limitations in sensitivity, especially for incidental cases, underscore the need for ongoing radiologist oversight.
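The reported operating characteristics follow directly from a confusion matrix; the short helper below shows the arithmetic. The example counts are hypothetical and are not the study's raw data.

```python
# Generic diagnostic-metric helper; the sample counts are hypothetical.
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
        "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
    }

# A mostly-negative cohort where most true positives are missed yields
# high accuracy and specificity but low sensitivity.
print(diagnostic_metrics(tp=1, fp=0, fn=6, tn=56))
```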

The Current State of Artificial Intelligence on Detecting Pulmonary Embolism via Computerised Tomography Pulmonary Angiogram: A Systematic Review.

Hassan MSTA, Elhotiby MAM, Shah V, Rocha H, Rad AA, Miller G, Malawana J

PubMed · Jun 25, 2025
Aims/Background: Pulmonary embolism (PE) is a life-threatening condition with significant diagnostic challenges due to high rates of missed or delayed detection. Computed tomography pulmonary angiography (CTPA) is the current standard for diagnosing PE; however, demand for imaging places strain on healthcare systems and increases error rates. This systematic review aims to assess the diagnostic accuracy and clinical applicability of artificial intelligence (AI)-based models for PE detection on CTPA, exploring their potential to enhance diagnostic reliability and efficiency across clinical settings. Methods: A systematic review was conducted in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Excerpta Medica Database (EMBASE), Medical Literature Analysis and Retrieval System Online (MEDLINE), Cochrane, PubMed, and Google Scholar were searched for original articles from inception to September 2024. Articles were included if they reported successful AI integration, whether partial or full, alongside CTPA scans for PE detection in patients. Results: The literature search identified 919 articles, with 745 remaining after duplicate removal. Following rigorous screening and appraisal against the inclusion and exclusion criteria, 12 studies were included in the final analysis. Three primary AI modalities emerged: convolutional neural networks (CNNs), segmentation models, and natural language processing (NLP), collectively used in the analysis of 341,112 radiographic images. CNNs were the most frequently applied modality in this review. Models such as AdaBoost and EmbNet have demonstrated high sensitivity, with EmbNet achieving 88-90.9% per scan and reducing false positives to 0.45 per scan. Conclusion: AI shows significant promise as a diagnostic tool for identifying PE on CTPA scans, particularly when combined with other forms of clinical data. However, challenges remain, including ensuring generalisability, addressing potential bias, and conducting rigorous external validation. Variability in study methodologies and the lack of standardised reporting of key metrics complicate comparisons. Future research must focus on refining models, improving peripheral emboli detection, and validating performance across diverse settings to fully realise AI's potential.

Machine Learning-Based Risk Assessment of Myasthenia Gravis Onset in Thymoma Patients and Analysis of Their Correlations and Causal Relationships.

Liu W, Wang W, Zhang H, Guo M

PubMed · Jun 25, 2025
The study aims to use interpretable machine learning models to predict the risk of myasthenia gravis (MG) onset in thymoma patients and to investigate the intrinsic correlations and causal relationships between the two conditions. A comprehensive retrospective analysis was conducted on 172 thymoma patients diagnosed at two medical centers between 2018 and 2024. The cohort was split into a training set (n = 134) and a test set (n = 38) to develop and validate risk prediction models. Radiomic and deep features were extracted from tumor regions across three CT phases: non-enhanced, arterial, and venous. Through rigorous feature selection employing Spearman's rank correlation coefficient and LASSO (Least Absolute Shrinkage and Selection Operator) regularization, 12 optimal imaging features were identified. These were integrated with 11 clinical parameters and one pathological subtype variable to form a multi-dimensional feature matrix. Six machine learning algorithms were subsequently implemented for model construction and comparative analysis. We used SHAP (SHapley Additive exPlanations) to interpret the model and employed a doubly robust learner to perform a potential causal analysis between thymoma and MG. All six models demonstrated satisfactory predictive capabilities, with the support vector machine (SVM) model exhibiting superior performance on the test cohort. It achieved an area under the curve (AUC) of 0.904 (95% confidence interval [CI] 0.798-1.000), outperforming other models such as logistic regression and the multilayer perceptron (MLP). The model's predictive performance substantiates the strong correlation between thymoma and MG. Additionally, our analysis revealed a significant causal relationship between them: high-risk tumors elevated the risk of MG by an average treatment effect (ATE) of 9.2%. This implies that thymoma patients with types B2 and B3 face a considerably higher risk of developing MG than those with types A, AB, and B1. The model provides a novel and effective tool for evaluating the risk of MG development in patients with thymoma. Furthermore, the correlation and causal analyses unveiled pathways connecting tumor characteristics to MG risk, with a notably higher incidence of MG observed in high-risk pathological subtypes. These insights contribute to a deeper understanding of MG and support a shift in medical practice from passive treatment to proactive intervention.
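The doubly robust (augmented inverse-probability-weighted) ATE estimate mentioned above can be sketched with plain scikit-learn models, as below. This illustrates the general technique only; the data are synthetic and the covariates are placeholders, not the study's cohort.

```python
# Hedged sketch of a doubly robust (AIPW) ATE estimate for the effect of a
# high-risk thymoma subtype on MG onset, using synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(172, 5))                     # covariates (imaging/clinical stand-ins)
T = rng.integers(0, 2, size=172)                  # 1 = high-risk subtype (B2/B3)
Y = rng.integers(0, 2, size=172)                  # 1 = myasthenia gravis onset

# Propensity model e(X) = P(T=1 | X) and outcome models m_t(X) = P(Y=1 | X, T=t).
e = LogisticRegression(max_iter=1000).fit(X, T).predict_proba(X)[:, 1]
m1 = LogisticRegression(max_iter=1000).fit(X[T == 1], Y[T == 1]).predict_proba(X)[:, 1]
m0 = LogisticRegression(max_iter=1000).fit(X[T == 0], Y[T == 0]).predict_proba(X)[:, 1]

# Augmented inverse-probability-weighted (doubly robust) ATE.
ate = np.mean(m1 - m0 + T * (Y - m1) / e - (1 - T) * (Y - m0) / (1 - e))
print(f"doubly robust ATE estimate: {ate:.3f}")
```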

BronchoGAN: anatomically consistent and domain-agnostic image-to-image translation for video bronchoscopy.

Soliman A, Keuth R, Himstedt M

PubMed · Jun 25, 2025
Purpose: The limited availability of bronchoscopy images makes image synthesis particularly attractive for training deep learning models. Robust image translation across different domains (virtual bronchoscopy, phantom, in vivo, and ex vivo image data) is pivotal for clinical applications. Methods: This paper proposes BronchoGAN, which introduces anatomical constraints for image-to-image translation integrated into a conditional GAN. In particular, we force bronchial orifices to match across input and output images. We further propose using foundation-model-generated depth images as an intermediate representation, ensuring robustness across a variety of input domains and yielding models with substantially less reliance on individual training datasets. Moreover, the intermediate depth-image representation makes it easy to construct paired image data for training. Results: Our experiments showed that input images from different domains (e.g., virtual bronchoscopy, phantoms) can be successfully translated into images mimicking realistic human airway appearance. We demonstrated that anatomical structures (i.e., bronchial orifices) are robustly preserved by our approach, shown qualitatively and quantitatively through improved FID, SSIM, and Dice coefficient scores. The anatomical constraints improved the Dice coefficient by up to 0.43 for synthetic images. Conclusion: Through foundation models for intermediate depth representations and bronchial orifice segmentation integrated as anatomical constraints into conditional GANs, we are able to robustly translate images from different bronchoscopy input domains. BronchoGAN allows public CT scan data (virtual bronchoscopy) to be incorporated in order to generate large-scale bronchoscopy image datasets with realistic appearance, helping to bridge the gap of missing public bronchoscopy images.
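One plausible way to encode the bronchial-orifice constraint described here is a soft Dice penalty between orifice masks segmented from the input and from the translated output, added to the usual generator loss. The following PyTorch sketch illustrates that general mechanism under stated assumptions; it is not BronchoGAN's actual loss, and the weighting is a placeholder.

```python
# Sketch of an anatomical-consistency term for a conditional GAN generator:
# a soft Dice loss between orifice masks from input and translated images.
import torch

def soft_dice_loss(pred_mask: torch.Tensor, target_mask: torch.Tensor,
                   eps: float = 1e-6) -> torch.Tensor:
    """1 - Dice overlap between two soft masks in [0, 1], shape (N, 1, H, W)."""
    inter = (pred_mask * target_mask).sum(dim=(1, 2, 3))
    union = pred_mask.sum(dim=(1, 2, 3)) + target_mask.sum(dim=(1, 2, 3))
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

def generator_loss(adv_loss: torch.Tensor,
                   orifice_mask_input: torch.Tensor,
                   orifice_mask_output: torch.Tensor,
                   lambda_anat: float = 10.0) -> torch.Tensor:
    """Adversarial term plus a Dice penalty forcing orifices to match."""
    return adv_loss + lambda_anat * soft_dice_loss(orifice_mask_output,
                                                   orifice_mask_input)
```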

Framework for enhanced respiratory disease identification with clinical handcrafted features.

Khokan MIP, Tonni TJ, Rony MAH, Fatema K, Hasan MZ

PubMed · Jun 25, 2025
Respiratory disorders cause approximately 4 million deaths annually worldwide, making them the third leading cause of mortality. Early detection is critical to improving survival rates and recovery outcomes. However, interpreting chest X-rays requires expertise, and computational intelligence provides valuable support for improving diagnostic accuracy and assisting medical professionals in decision-making. This study presents an automated system to classify respiratory diseases using three diverse datasets comprising 18,000 chest X-ray images and masks, categorized into six classes. Image preprocessing techniques, such as resizing for input standardization and CLAHE for contrast enhancement, were applied to ensure uniformity and improve the visual quality of the images. Albumentations-based augmentation methods addressed class imbalance, while bitwise segmentation focused on extracting the region of interest (ROI). Furthermore, clinically handcrafted feature extraction enabled the accurate identification of 20 critical clinical features essential for disease classification. The K-nearest neighbors (KNN) graph construction technique was used to transform tabular data into graph structures for effective node classification. We employed feature analysis to identify critical attributes that contribute to class predictions within the graph structure. Additionally, GNNExplainer was used to validate these findings by highlighting significant nodes, edges, and features that influence the model's decision-making process. The proposed model, Chest X-ray Graph Neural Network (CHXGNN), a robust Graph Neural Network (GNN) architecture, incorporates advanced layers, batch normalization, dropout regularization, and optimization strategies. Extensive testing and ablation studies demonstrated the model's exceptional performance, achieving an accuracy of 99.56%. The CHXGNN model shows significant potential for detecting and classifying respiratory diseases, promising to enhance diagnostic efficiency and improve patient outcomes in respiratory healthcare.
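To make the KNN-graph construction step concrete, here is a small sketch that turns a tabular feature matrix into a K-nearest-neighbors graph with scikit-learn and classifies nodes with a two-layer GCN in PyTorch Geometric. It is illustrative only; the data, layer sizes, and architecture are placeholders, not CHXGNN.

```python
# Sketch: tabular handcrafted features -> KNN graph -> GCN node classification.
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.neighbors import kneighbors_graph
from torch_geometric.nn import GCNConv
from torch_geometric.utils import from_scipy_sparse_matrix

rng = np.random.default_rng(0)
feats = rng.normal(size=(300, 20)).astype("float32")   # 20 handcrafted features per image
labels = rng.integers(0, 6, size=300)                   # 6 disease classes

adj = kneighbors_graph(feats, n_neighbors=8, mode="connectivity")
edge_index, _ = from_scipy_sparse_matrix(adj)

class SimpleGNN(torch.nn.Module):
    def __init__(self, in_dim: int, hidden: int, classes: int):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

model = SimpleGNN(20, 64, 6)
logits = model(torch.tensor(feats), edge_index)
loss = F.cross_entropy(logits, torch.tensor(labels))
```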

Diagnostic Performance of Universal versus Stratified Computer-Aided Detection Thresholds for Chest X-Ray-Based Tuberculosis Screening

Sung, J., Kitonsa, P. J., Nalutaaya, A., Isooba, D., Birabwa, S., Ndyabayunga, K., Okura, R., Magezi, J., Nantale, D., Mugabi, I., Nakiiza, V., Dowdy, D. W., Katamba, A., Kendall, E. A.

medRxiv preprint · Jun 24, 2025
Background: Computer-aided detection (CAD) software analyzes chest X-rays for features suggestive of tuberculosis (TB) and provides a numeric abnormality score. However, estimates of CAD accuracy for TB screening are hindered by the lack of confirmatory data among people with lower CAD scores, including those without symptoms. Additionally, the appropriate CAD score thresholds for obtaining further testing may vary according to population and client characteristics. Methods: We screened for TB in Ugandan individuals aged ≥15 years using portable chest X-rays with CAD (qXR v3). Participants were offered screening regardless of their symptoms. Those with X-ray scores above a threshold of 0.1 (range, 0-1) were asked to provide sputum for Xpert Ultra testing. We estimated the diagnostic accuracy of CAD for detecting Xpert-positive TB when using the same threshold for all individuals (under different assumptions about TB prevalence among people with X-ray scores <0.1), and compared this estimate to age- and/or sex-stratified approaches. Findings: Of 52,835 participants screened for TB using CAD, 8,949 (16.9%) had X-ray scores ≥0.1. Of 7,219 participants with valid Xpert Ultra results, 382 (5.3%) were Xpert-positive, including 81 with trace results. Assuming 0.1% of participants with X-ray scores <0.1 would have been Xpert-positive if tested, qXR had an estimated AUC of 0.920 (95% confidence interval 0.898-0.941) for Xpert-positive TB. Stratifying CAD thresholds according to age and sex improved accuracy; for example, at 96.1% specificity, estimated sensitivity was 75.0% for a universal threshold (of ≥0.65) versus 76.9% for thresholds stratified by age and sex (p=0.046). Interpretation: The accuracy of CAD for TB screening among all screening participants, including those without symptoms or abnormal chest X-rays, is higher than previously estimated. Stratifying CAD thresholds based on client characteristics such as age and sex could further improve accuracy, enabling a more effective and personalized approach to TB screening. Funding: National Institutes of Health.

Research in context. Evidence before this study: The World Health Organization (WHO) has endorsed computer-aided detection (CAD) as a screening tool for tuberculosis (TB), but the appropriate CAD score that triggers further diagnostic evaluation for TB varies by population. The WHO recommends determining the appropriate CAD threshold for specific settings and populations and considering unique thresholds for specific populations, including older age groups, among whom CAD may perform poorly. We performed a PubMed literature search for articles published until September 9, 2024, using the search terms "tuberculosis" AND ("computer-aided detection" OR "computer aided detection" OR "CAD" OR "computer-aided reading" OR "computer aided reading" OR "artificial intelligence"), which returned 704 articles. Among them, we identified studies that evaluated the performance of CAD for TB screening and additionally reviewed relevant references. Most prior studies reported areas under the curve (AUC) ranging from 0.76 to 0.88 but limited their evaluations to individuals with symptoms or abnormal chest X-rays. Some prior studies identified subgroups (including older individuals and people with prior TB) among whom CAD had lower-than-average AUCs, and the authors discussed how the prevalence of such characteristics could affect the optimal value of a population-wide CAD threshold; however, none estimated the accuracy that could be gained by adjusting CAD thresholds between individuals based on personal characteristics. Added value of this study: In this study, all consenting individuals in a high-prevalence setting were offered chest X-ray screening, regardless of symptoms, if they were ≥15 years old, not pregnant, and not on TB treatment. A very low CAD score cutoff (qXR v3 score of 0.1 on a 0-1 scale) was used to select individuals for confirmatory sputum molecular testing, enabling the detection of radiographically mild forms of TB and facilitating comparisons of diagnostic accuracy at different CAD thresholds. With this more expansive, symptom-neutral evaluation of CAD, we estimated an AUC of 0.920, and we found that the qXR v3 threshold needed to decrease to under 0.1 to meet the WHO target product profile goal of ≥90% sensitivity and ≥70% specificity. Compared to using the same threshold for all participants, adjusting CAD thresholds by age and sex strata resulted in a 1 to 2% increase in sensitivity without affecting specificity. Implications of all the available evidence: To obtain high sensitivity with CAD screening in high-prevalence settings, low score thresholds may be needed. However, countries with a high burden of TB often do not have sufficient resources to test all individuals above a low threshold. In such settings, adjusting CAD thresholds based on individual characteristics associated with TB prevalence (e.g., male sex) and those associated with false-positive X-ray results (e.g., old age) can potentially improve the efficiency of TB screening programs.
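The stratified-threshold idea evaluated in this study can be sketched in a few lines: each age/sex stratum gets its own CAD cutoff, and sensitivity and specificity are computed over the whole screened population. The data and per-stratum cutoffs below are synthetic stand-ins, not the study's values.

```python
# Sketch of stratified CAD thresholds on synthetic screening data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "cad_score": rng.uniform(0, 1, size=5000),
    "sex": rng.choice(["male", "female"], size=5000),
    "age_group": rng.choice(["15-34", "35-54", "55+"], size=5000),
    "xpert_positive": rng.random(5000) < 0.05,
})

strata_thresholds = {            # hypothetical per-stratum cutoffs
    ("male", "15-34"): 0.55, ("male", "35-54"): 0.60, ("male", "55+"): 0.70,
    ("female", "15-34"): 0.60, ("female", "35-54"): 0.65, ("female", "55+"): 0.75,
}

def screen_positive(row) -> bool:
    return row["cad_score"] >= strata_thresholds[(row["sex"], row["age_group"])]

flagged = df.apply(screen_positive, axis=1)
tp = (flagged & df["xpert_positive"]).sum()
fn = (~flagged & df["xpert_positive"]).sum()
tn = (~flagged & ~df["xpert_positive"]).sum()
fp = (flagged & ~df["xpert_positive"]).sum()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```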

Non-invasive prediction of NSCLC immunotherapy efficacy and tumor microenvironment through unsupervised machine learning-driven CT Radiomic subtypes: a multi-cohort study.

Guo Y, Gong B, Li Y, Mo P, Chen Y, Fan Q, Sun Q, Miao L, Li Y, Liu Y, Tan W, Yang L, Zheng C

PubMed · Jun 24, 2025
Radiomics analyzes quantitative features from medical images to reveal tumor heterogeneity, offering new insights for diagnosis, prognosis, and treatment prediction. This study explored radiomics-based biomarkers to predict immunotherapy response and its association with the tumor microenvironment in non-small cell lung cancer (NSCLC) using unsupervised machine learning models derived from CT imaging. The study included 1539 NSCLC patients from seven independent cohorts. K-means unsupervised clustering was applied to 1834 radiomic features extracted from 869 NSCLC patients to identify radiomic subtypes, and a random forest model extended subtype classification to external cohorts; model accuracy, sensitivity, and specificity were evaluated. Bulk RNA sequencing (RNA-seq) and single-cell transcriptome sequencing (scRNA-seq) of tumors characterized the immune microenvironment and allowed evaluation of the association between radiomic subtypes and immunotherapy efficacy, immune scores, and immune cell infiltration. Unsupervised clustering stratified NSCLC patients into two subtypes (Cluster 1 and Cluster 2). Principal component analysis confirmed significant distinctions between subtypes across all cohorts. Cluster 2 exhibited significantly longer median overall survival (35 vs. 30 months, P = 0.006) and progression-free survival (19 vs. 16 months, P = 0.020) compared to Cluster 1. Multivariate Cox regression identified radiomic subtype as an independent predictor of overall survival (HR 0.738, 95% CI 0.583-0.935, P = 0.012), which was validated in two external cohorts. Bulk RNA-seq showed elevated interaction signaling and immune scores in Cluster 2, and scRNA-seq demonstrated higher proportions of T cells, B cells, and NK cells in Cluster 2. This study establishes a radiomic subtype associated with NSCLC immunotherapy efficacy and the tumor immune microenvironment. The findings provide a non-invasive tool for personalized treatment, enabling early identification of immunotherapy-responsive patients and optimized therapeutic strategies.
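A compact sketch of the two-step scheme described here (K-means to define radiomic subtypes in a discovery cohort, then a random forest to assign those subtypes in external cohorts) is shown below, with synthetic feature matrices standing in for the real data.

```python
# Sketch: unsupervised subtype discovery + supervised extension to new cohorts.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
discovery = rng.normal(size=(869, 1834))    # radiomic features, discovery cohort
external = rng.normal(size=(300, 1834))     # radiomic features, external cohort

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(discovery)
subtypes = kmeans.labels_                    # Cluster 1 vs Cluster 2

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(discovery, subtypes)
external_subtypes = clf.predict(external)    # subtype assignment outside discovery
```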

Differentiating adenocarcinoma and squamous cell carcinoma in lung cancer using semi-automated segmentation and radiomics.

Vijitha R, Wickramasinghe WMIS, Perera PAS, Jayatissa RMGCSB, Hettiarachchi RT, Alwis HARV

PubMed · Jun 24, 2025
Adenocarcinoma (AD) and squamous cell carcinoma (SCC) are frequently observed forms of non-small cell lung cancer (NSCLC) and play a significant role in global cancer mortality. This research categorizes NSCLC subtypes by analyzing image characteristics using computer-assisted semi-automatic segmentation and radiomic features for model development. The study included 80 patients (50 AD and 30 SCC), whose scans were analyzed in 3D Slicer, with 107 quantitative radiomic features extracted per patient. After eliminating correlated attributes, a LASSO binary logistic regression model and 10-fold cross-validation were used for feature selection. The Shapiro-Wilk test assessed radiomic score normality, and the Mann-Whitney U test compared score distributions. Random Forest (RF) and Support Vector Machine (SVM) classification models were implemented for subtype classification. Receiver operating characteristic (ROC) curves evaluated the radiomics score, which showed moderate predictive ability, with a training-set area under the curve (AUC) of 0.679 (95% CI, 0.541-0.871) and a validation-set AUC of 0.560 (95% CI, 0.342-0.778). Rad-score distributions were normal for AD and non-normal for SCC. The RF and SVM models built on the selected features achieved accuracies of 0.73 (RF) and 0.87 (SVM), with respective AUC values of 0.54 and 0.87. These findings support the premise that the two NSCLC subtypes can be differentiated. The study demonstrates that radiomic analysis improves diagnostic accuracy and offers a non-invasive alternative. However, the AUCs and ROC curves of the machine learning models must be critically evaluated to ensure clinical acceptability. If robust, such models could reduce the need for biopsies and enhance personalized treatment planning. Further research is needed to validate these findings and integrate radiomics into NSCLC clinical practice.
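The selection-and-scoring step described above can be sketched as an L1-penalized (LASSO) logistic regression with 10-fold cross-validation producing a radiomics score evaluated by ROC AUC. The data below are synthetic and the settings are assumptions, not the study's configuration.

```python
# Sketch: LASSO-penalized logistic regression with 10-fold CV as a radiomics score.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 107))          # 107 radiomic features per patient
y = np.array([1] * 50 + [0] * 30)       # 1 = adenocarcinoma, 0 = squamous cell

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                           stratify=y, random_state=0)

lasso_lr = LogisticRegressionCV(Cs=10, cv=10, penalty="l1",
                                solver="liblinear", scoring="roc_auc")
lasso_lr.fit(X_tr, y_tr)

rad_score = lasso_lr.decision_function(X_te)     # per-patient radiomics score
print("validation AUC:", roc_auc_score(y_te, rad_score))
```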