
Application of artificial intelligence in X-ray imaging analysis for knee arthroplasty: A systematic review.

Zhang Z, Hui X, Tao H, Fu Z, Cai Z, Zhou S, Yang K

PubMed | Jan 1, 2025
Artificial intelligence (AI) is a promising and powerful technology with increasing use in orthopedics. The global volume of knee arthroplasty is expanding. This study investigated the use of AI algorithms to review radiographs of knee arthroplasty. The Ovid-Embase, Web of Science, Cochrane Library, PubMed, China National Knowledge Infrastructure (CNKI), WeiPu (VIP), WanFang, and China Biology Medicine (CBM) databases were systematically screened from inception to March 2024 (PROSPERO study protocol registration: CRD42024507549). Risk of bias was assessed with the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool. A total of 21 studies were included in the analysis. Of these, 10 studies identified and classified implant brands, 6 measured implant size and component alignment, 3 detected implant loosening, and 2 diagnosed prosthetic joint infections (PJI). For classifying and identifying implant brands, 5 studies demonstrated near-perfect prediction with an area under the curve (AUC) ranging from 0.98 to 1.0, and 10 achieved accuracy (ACC) between 96% and 100%. Regarding implant measurement, one study showed an AUC of 0.62, and two others exhibited over 80% ACC in determining component sizes. Moreover, AI showed good to excellent reliability across all angles in three separate studies (intraclass correlation coefficient > 0.78). In predicting PJI, one study achieved an AUC of 0.91 with a corresponding ACC of 90.5%, while another reported a positive predictive value ranging from 75% to 85%. For detecting implant loosening, the AUC was at least 0.976, with ACC ranging from 85.8% to 97.5%. These studies show that AI is promising for recognizing implants in knee arthroplasty. Future research should follow a rigorous approach to AI development, with comprehensive and transparent reporting of methods and the creation of open-source software and commercial tools that can provide clinicians with objective clinical decision support.

AISIM: evaluating impacts of user interface elements of an AI assisting tool.

Wiratchawa K, Wanna Y, Junsawang P, Titapun A, Techasen A, Boonrod A, Laopaiboon V, Chamadol N, Bulathwela S, Intharah T

PubMed | Jan 1, 2025
While Artificial Intelligence (AI) has demonstrated human-level capabilities in many prediction tasks, collaboration between humans and machines is crucial in mission-critical applications, especially in the healthcare sector. An important factor that enables successful human-AI collaboration is the user interface (UI). This paper evaluated the UI of BiTNet, an intelligent assisting tool for human biliary tract diagnosis via ultrasound images. We evaluated the UI of the assisting tool with 11 healthcare professionals through two main research questions: 1) did the assisting tool help improve the diagnostic performance of the healthcare professionals who used it? and 2) how did different UI elements of the assisting tool influence the users' decisions? To analyze the impacts of different UI elements without multiple rounds of experiments, we propose the novel AISIM strategy. We demonstrate that AISIM can be used to analyze the influence of different user interface elements in one go. Our main findings show that the assisting tool improved the diagnostic performance of healthcare professionals across different levels of experience (OR = 3.326, p-value < 10<sup>-15</sup>). In addition, high AI prediction confidence and a correct AI attention area more than doubled the odds that users would follow the AI suggestion. Finally, the interview results agreed with the experimental results: BiTNet boosted users' confidence when they were asked to diagnose abnormalities in the biliary tract from ultrasound images.
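The headline numbers above (OR = 3.326, more-than-doubled odds of following the AI suggestion) come from an odds-ratio analysis. The sketch below shows, under illustrative assumptions, how such an odds ratio could be estimated with a plain logistic regression; the data frame, column names, and simulated values are hypothetical, this is not the authors' AISIM implementation, and it ignores the repeated-measures structure per clinician that a full analysis would need to address.

```python
# Minimal sketch (hypothetical data, not the AISIM code): estimating an odds ratio
# for "correct diagnosis" with vs. without the assisting tool via logistic regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical long-format data: one row per diagnosis decision.
df = pd.DataFrame({
    "correct":   np.random.binomial(1, 0.7, 500),   # 1 = correct diagnosis
    "with_tool": np.random.binomial(1, 0.5, 500),   # 1 = AI assistance shown
})

X = sm.add_constant(df[["with_tool"]])
fit = sm.Logit(df["correct"], X).fit(disp=False)

odds_ratio = np.exp(fit.params["with_tool"])        # OR > 1 favors the tool
p_value = fit.pvalues["with_tool"]
print(f"OR = {odds_ratio:.3f}, p = {p_value:.2e}")
```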

Integrating multimodal imaging and peritumoral features for enhanced prostate cancer diagnosis: A machine learning approach.

Zhou H, Xie M, Shi H, Shou C, Tang M, Zhang Y, Hu Y, Liu X

PubMed | Jan 1, 2025
Prostate cancer is a common malignancy in men, and accurately distinguishing between benign and malignant nodules at an early stage is crucial for optimizing treatment. Multimodal imaging (such as ADC and T2) plays an important role in the diagnosis of prostate cancer, but effectively combining these imaging features for accurate classification remains a challenge. This retrospective study included MRI data from 199 prostate cancer patients. Radiomic features from both the tumor and peritumoral regions were extracted, and a random forest model was used to select the most contributive features for classification. Three machine learning models (Random Forest, XGBoost, and Extra Trees) were then constructed and trained on four different feature combinations (tumor ADC, tumor T2, tumor ADC+T2, and tumor + peritumoral ADC+T2). The model incorporating multimodal imaging features and peritumoral characteristics showed superior classification performance. The Extra Trees model outperformed the others across all feature combinations, particularly in the tumor + peritumoral ADC+T2 group, where the AUC reached 0.729. The AUC values for the other combinations also exceeded 0.65. While the Random Forest and XGBoost models performed slightly worse, they still demonstrated strong classification ability, with AUCs ranging from 0.63 to 0.72. SHAP analysis revealed that key features, such as tumor texture and peritumoral gray-level features, contributed significantly to the model's classification decisions. The combination of multimodal imaging data with peritumoral features moderately improved the accuracy of prostate cancer classification. This model provides a non-invasive and effective diagnostic tool for clinical use and supports future personalized treatment decisions.
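As a rough illustration of the pipeline described above (random-forest feature selection followed by an Extra Trees classifier on combined tumor + peritumoral ADC+T2 radiomic features), the sketch below uses scikit-learn; the synthetic data, feature counts, and hyperparameters are assumptions, not the authors' settings.

```python
# Minimal sketch (not the authors' code): random-forest feature selection followed by
# an Extra Trees classifier on combined tumor + peritumoral ADC+T2 radiomic features.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(199, 400))      # 199 patients x radiomic features (tumor + peritumoral, ADC + T2)
y = rng.integers(0, 2, size=199)     # 0 = benign, 1 = malignant (placeholder labels)

model = make_pipeline(
    SelectFromModel(RandomForestClassifier(n_estimators=200, random_state=0)),  # keep most contributive features
    ExtraTreesClassifier(n_estimators=500, random_state=0),
)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.3f}")
```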

Improved swin transformer-based thorax disease classification with optimal feature selection using chest X-ray.

Rana N, Coulibaly Y, Noor A, Noor TH, Alam MI, Khan Z, Tahir A, Khan MZ

PubMed | Jan 1, 2025
Thoracic diseases, including pneumonia, tuberculosis, lung cancer, and others, pose significant health risks and require timely and accurate diagnosis to ensure proper treatment. In this research, a deep learning model for thorax disease classification from chest X-rays is proposed. The input is pre-processed by resizing, normalizing pixel values, and applying data augmentation to address imbalanced datasets and improve model generalization. Significant features are extracted from the images using an Enhanced Auto-Encoder (EnAE) model, which combines a stacked auto-encoder architecture with an attention module to enhance feature representation and classification accuracy. To further improve feature selection, we utilize the Chaotic Whale Optimization (ChWO) algorithm, which selects the most relevant attributes from the extracted features. Finally, disease classification is performed using the novel Improved Swin Transformer (IMSTrans) model, which is designed to efficiently process high-dimensional medical image data and achieve superior classification performance. The proposed EnAE + ChWO + IMSTrans model was evaluated on extensive chest X-ray datasets and the Lung Disease Dataset, achieving accuracy, precision, recall, F-score, MCC, and MAE of 0.964, 0.977, 0.9845, 0.964, 0.9647, and 0.184, respectively, indicating a reliable and efficient solution for thorax disease classification.
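The EnAE, ChWO, and IMSTrans components are custom to this work, but the pre-processing step described above (resizing, pixel normalization, and augmentation) is standard; a minimal torchvision sketch is shown below, where the image size, normalization statistics, and augmentation choices are illustrative assumptions rather than the paper's settings.

```python
# Minimal sketch (assumptions, not the authors' pipeline): resizing, pixel normalization,
# and augmentation for grayscale chest X-rays, written with torchvision transforms.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Resize((224, 224)),                  # resize to a fixed input size
    transforms.RandomHorizontalFlip(p=0.5),         # simple augmentations to mitigate imbalance/overfitting
    transforms.RandomRotation(degrees=10),
    transforms.ToTensor(),                          # scales pixel values to [0, 1]
    transforms.Normalize(mean=[0.5], std=[0.5]),    # normalize single-channel X-rays
])

eval_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5], std=[0.5]),
])
```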

Improving lung cancer diagnosis and survival prediction with deep learning and CT imaging.

Wang X, Sharpnack J, Lee TCM

PubMed | Jan 1, 2025
Lung cancer is a major cause of cancer-related deaths, and early diagnosis and treatment are crucial for improving patients' survival outcomes. In this paper, we propose to employ convolutional neural networks to model the non-linear relationship between the risk of lung cancer and the lung morphology revealed in CT images. We apply a mini-batched loss that extends the Cox proportional hazards model to handle the non-convexity induced by neural networks and to enable training on large data sets. Additionally, we combine the mini-batched loss with binary cross-entropy to predict both lung cancer occurrence and the risk of mortality. Simulation results demonstrate the effectiveness of the mini-batched loss with and without the censoring mechanism, as well as its combination with binary cross-entropy. We evaluate our approach on the National Lung Screening Trial data set with several 3D convolutional neural network architectures, achieving high AUC and C-index scores for lung cancer classification and survival prediction. These results, obtained from simulations and real-data experiments, highlight the potential of our approach to improve the diagnosis and treatment of lung cancer.
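The core technical ingredient here is a Cox partial likelihood evaluated per mini-batch and combined with binary cross-entropy. The sketch below is a minimal PyTorch rendering of that idea under stated assumptions (tensor shapes, equal loss weighting); it is not the authors' implementation.

```python
# Minimal sketch (assumptions, not the authors' code): mini-batched negative Cox
# partial log-likelihood combined with binary cross-entropy.
import torch
import torch.nn.functional as F

def cox_ph_loss(risk, time, event):
    """Negative Cox partial log-likelihood within a mini-batch.

    risk:  (B,) predicted log-risk scores from the network
    time:  (B,) observed survival or censoring times
    event: (B,) 1 if death observed, 0 if censored
    """
    order = torch.argsort(time, descending=True)      # longest survivors first, so risk sets are cumulative
    risk, event = risk[order], event[order].float()
    log_cumsum = torch.logcumsumexp(risk, dim=0)       # log of sum of exp(risk) over each risk set
    return -((risk - log_cumsum) * event).sum() / event.sum().clamp(min=1.0)

def combined_loss(risk, logits, time, event, label, alpha=0.5):
    """Weighted sum of the survival loss and the occurrence classification loss.
    label: (B,) float targets in {0.0, 1.0} for lung-cancer occurrence."""
    return alpha * cox_ph_loss(risk, time, event) + \
           (1 - alpha) * F.binary_cross_entropy_with_logits(logits, label)
```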

A novel spectral transformation technique based on special functions for improved chest X-ray image classification.

Aljohani A

PubMed | Jan 1, 2025
Chest X-ray image classification plays an important role in medical diagnostics, and machine learning algorithms have enhanced the performance of these classifiers by introducing advanced techniques. Such algorithms often require converting the medical image to another space in which the original data are reduced to important values or moments. We developed a mechanism that converts a given medical image to a spectral space whose basis is composed of special functions. In this study, we propose a chest X-ray image classification method based on spectral coefficients derived from an orthogonal system of Legendre-type smooth polynomials. We developed the mathematical theory to calculate spectral moments in the Legendre polynomial space and used these moments to train traditional classifiers, such as SVM and random forest, for the classification task. The procedure was applied to a recent data set of chest X-ray images comprising three classes of patients: normal, COVID-infected, and pneumonia. The moments designed in this study, when used with SVM or random forest, improve the classifiers' ability to label a given X-ray image with high accuracy. A parametric study of the proposed approach is presented, the performance of the spectral moments is evaluated with both the support vector machine and random forest algorithms, and the efficiency and accuracy of the method are reported in detail. All simulations were performed in MATLAB and Python: image pre-processing and spectral moment generation were carried out in MATLAB, and the classifiers were implemented in Python. The proposed approach works well and provides satisfactory results (0.975 accuracy); however, further studies are required to establish a more accurate and faster version of this approach.
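A minimal sketch of the general idea, computing 2D Legendre moments of an image and training an SVM on them, is given below; the moment order, normalization, and data are illustrative assumptions and may differ from the paper's MATLAB formulation.

```python
# Minimal sketch (assumptions, not the authors' MATLAB/Python code): 2D Legendre
# moments of an image fed to an SVM classifier.
import numpy as np
from numpy.polynomial import legendre
from sklearn.svm import SVC

def legendre_moments(img, max_order=10):
    """Project an image onto products of Legendre polynomials P_p(y) * P_q(x)."""
    h, w = img.shape
    x = np.linspace(-1, 1, w)
    y = np.linspace(-1, 1, h)
    # Row k holds the k-th Legendre polynomial evaluated on the grid axis.
    Px = np.stack([legendre.legval(x, np.eye(max_order + 1)[k]) for k in range(max_order + 1)])
    Py = np.stack([legendre.legval(y, np.eye(max_order + 1)[k]) for k in range(max_order + 1)])
    # moment_pq = sum_y sum_x P_p(y) * P_q(x) * img(y, x), up to a normalization constant.
    return (Py @ img @ Px.T).ravel()

# Hypothetical usage: X-ray images as 2D float arrays with placeholder labels.
images = [np.random.rand(128, 128) for _ in range(6)]
labels = ["normal", "covid", "pneumonia"] * 2
features = np.stack([legendre_moments(im) for im in images])
clf = SVC(kernel="rbf").fit(features, labels)
```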

Investigating methods to enhance interpretability and performance in cardiac MRI for myocardial scarring diagnosis using convolutional neural network classification and One Match.

Udin MH, Armstrong S, Kai A, Doyle ST, Pokharel S, Ionita CN, Sharma UC

PubMed | Jan 1, 2025
Machine learning (ML) classification of myocardial scarring in cardiac MRI is often hindered by limited explainability, particularly with convolutional neural networks (CNNs). To address this, we developed One Match (OM), an algorithm that builds on template matching to improve both the explainability and performance of ML myocardial scarring classification. By incorporating OM, we aim to foster trust in AI models for medical diagnostics and demonstrate that improved interpretability does not have to compromise classification accuracy. Using a cardiac MRI dataset from 279 patients, this study evaluates One Match, which classifies myocardial scarring by matching each image to a set of labeled template images; it uses the highest correlation score from these matches for classification and is compared to a traditional sequential CNN. Enhancements such as autodidactic enhancement (AE) and patient-level classifications (PLCs) were applied to improve the predictive accuracy of both methods. Results are reported as accuracy, sensitivity, specificity, precision, and F1-score. The highest classification performance was observed with the OM algorithm when enhanced by both AE and PLCs: 95.3% accuracy, 92.3% sensitivity, 96.7% specificity, 92.3% precision, and 92.3% F1-score, marking a significant improvement over the base configurations. AE alone had a positive impact on OM, increasing accuracy from 89.0% to 93.2%, but decreased the accuracy of the CNN from 85.3% to 82.9%. In contrast, PLCs improved accuracy for both models, raising the CNN's accuracy by 4.2% and OM's by 7.4%. This study demonstrates the effectiveness of OM in classifying myocardial scars, particularly when enhanced with AE and PLCs. The interpretability of OM also enabled the examination of misclassifications, providing insights that could accelerate development and foster greater trust among clinical stakeholders.
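The core of One Match as described above is straightforward to illustrate: match an image against labeled templates and take the label of the highest correlation score. The sketch below is a hypothetical minimal version (Pearson correlation over whole same-sized images) and does not include the published code, the AE step, or the PLC aggregation.

```python
# Minimal sketch (assumptions, not the published One Match code): classify an image
# by its highest correlation against a bank of labeled template images.
import numpy as np

def one_match_classify(image, templates, labels):
    """Return the label of the template with the highest Pearson correlation to `image`."""
    img = image.ravel().astype(float)
    scores = [np.corrcoef(img, t.ravel().astype(float))[0, 1] for t in templates]
    best = int(np.argmax(scores))
    return labels[best], scores[best]     # predicted label and its correlation score

# Hypothetical usage with same-sized cardiac MRI slices.
templates = [np.random.rand(64, 64) for _ in range(4)]
labels = ["scar", "scar", "no_scar", "no_scar"]
pred, score = one_match_classify(np.random.rand(64, 64), templates, labels)
```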

Radiomics machine learning based on asymmetrically prominent cortical and deep medullary veins combined with clinical features to predict prognosis in acute ischemic stroke: a retrospective study.

Li H, Chang C, Zhou B, Lan Y, Zang P, Chen S, Qi S, Ju R, Duan Y

PubMed | Jan 1, 2025
Acute ischemic stroke (AIS) has a poor prognosis and a high recurrence rate. Predicting the outcomes of AIS patients in the early stages of the disease is therefore important. The establishment of intracerebral collateral circulation significantly improves the survival of brain cells and the outcomes of AIS patients. However, no machine learning method has been applied to investigate the correlation between the dynamic evolution of intracerebral venous collateral circulation and AIS prognosis. Therefore, we employed a support vector machine (SVM) algorithm to analyze asymmetrically prominent cortical veins (APCVs) and deep medullary veins (DMVs) to establish a radiomic model for predicting the prognosis of AIS in combination with clinical indicators. The magnetic resonance imaging (MRI) data and clinical indicators of 150 AIS patients were retrospectively analyzed. Regions of interest corresponding to the DMVs and APCVs were delineated, and least absolute shrinkage and selection operator (LASSO) regression was used to select features extracted from these regions. An APCV-DMV radiomic model was created via the SVM algorithm, and independent clinical risk factors associated with AIS were combined with the radiomic model to generate a joint model. The SVM algorithm was selected because of its proven efficacy in handling high-dimensional radiomic data compared with alternative classifiers (<i>e.g.</i>, random forest) in pilot experiments. Nine radiomic features associated with AIS patient outcomes were ultimately selected. In the internal training test set, the AUCs of the clinical, APCV-DMV radiomic, and joint models were 0.816, 0.976, and 0.996, respectively. The DeLong test revealed that the predictive performance of the joint model was better than that of the individual models, with a test-set AUC of 0.996, sensitivity of 0.905, and specificity of 1.000 (<i>P</i> < 0.05). Using radiomic methods, we propose a novel joint predictive model that combines the radiomic features of the APCV and DMV with clinical indicators. This model quantitatively characterizes the morphological and functional attributes of venous collateral circulation, elucidating its important role in accurately evaluating the prognosis of patients with AIS and providing a noninvasive and highly accurate imaging tool for early prognostic prediction.
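A minimal scikit-learn sketch of the radiomic pipeline described above, LASSO feature selection followed by an SVM classifier, is shown below; the synthetic data, outcome coding, and hyperparameters are assumptions, not the authors' settings.

```python
# Minimal sketch (assumptions, not the authors' code): LASSO-based selection of
# APCV/DMV radiomic features followed by an SVM classifier.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 300))   # 150 AIS patients x radiomic features from APCV + DMV regions
# Placeholder binary outcome tied to two features so LASSO has a signal to find.
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=150) > 0).astype(int)

model = make_pipeline(
    StandardScaler(),
    SelectFromModel(LassoCV(cv=5, random_state=0)),   # LASSO keeps a sparse feature subset
    SVC(kernel="rbf"),
)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"radiomic-model AUC: {auc:.3f}")
```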

Radiomic Model Associated with Tumor Microenvironment Predicts Immunotherapy Response and Prognosis in Patients with Locoregionally Advanced Nasopharyngeal Carcinoma.

Sun J, Wu X, Zhang X, Huang W, Zhong X, Li X, Xue K, Liu S, Chen X, Li W, Liu X, Shen H, You J, He W, Jin Z, Yu L, Li Y, Zhang S, Zhang B

PubMed | Jan 1, 2025
<b>Background:</b> No robust biomarkers have been identified to predict the efficacy of programmed cell death protein 1 (PD-1) inhibitors in patients with locoregionally advanced nasopharyngeal carcinoma (LANPC). We aimed to develop radiomic models using pre-immunotherapy MRI to predict the response to PD-1 inhibitors and the patient prognosis. <b>Methods:</b> This study included 246 LANPC patients (training cohort, <i>n</i> = 117; external test cohort, <i>n</i> = 129) from 10 centers. The best-performing machine learning classifier was employed to create the radiomic models. A combined model was constructed by integrating clinical and radiomic data. A radiomic interpretability study was performed with whole slide images (WSIs) stained with hematoxylin and eosin (H&E) and immunohistochemistry (IHC). A total of 150 patient-level nuclear morphological features (NMFs) and 12 cell spatial distribution features (CSDFs) were extracted from WSIs. The correlation between the radiomic and pathological features was assessed using Spearman correlation analysis. <b>Results:</b> The radiomic model outperformed the clinical and combined models in predicting treatment response (area under the curve: 0.760 vs. 0.559 vs. 0.652). For overall survival estimation, the combined model performed comparably to the radiomic model but outperformed the clinical model (concordance index: 0.858 vs. 0.812 vs. 0.664). Six treatment response-related radiomic features correlated with 50 H&E-derived (146 pairs, |<i>r</i>| = 0.31 to 0.46) and 2 to 26 IHC-derived NMFs, particularly for CD45RO (69 pairs, |<i>r</i>| = 0.31 to 0.48), CD8 (84, |<i>r</i>| = 0.30 to 0.59), PD-L1 (73, |<i>r</i>| = 0.32 to 0.48), and CD163 (53, |<i>r</i>| = 0.32 to 0.59). Eight prognostic radiomic features correlated with 11 H&E-derived (16 pairs, |<i>r</i>| = 0.48 to 0.61) and 2 to 31 IHC-derived NMFs, particularly for PD-L1 (80 pairs, |<i>r</i>| = 0.44 to 0.64), CD45RO (65, |<i>r</i>| = 0.42 to 0.67), CD19 (35, |<i>r</i>| = 0.44 to 0.58), CD66b (61, |<i>r</i>| = 0.42 to 0.67), and FOXP3 (21, |<i>r</i>| = 0.41 to 0.71). In contrast, fewer CSDFs exhibited correlations with specific radiomic features. <b>Conclusion:</b> The radiomic model and combined model are feasible for predicting immunotherapy response and outcomes in LANPC patients. The radiology-pathology correlation suggests a potential biological basis for the predictive models.
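The interpretability analysis above boils down to pairwise Spearman correlations between radiomic features and WSI-derived pathological features. A minimal sketch under illustrative assumptions (synthetic feature matrices, an |r| >= 0.30 cut-off comparable to the reported values) follows; it is not the authors' code.

```python
# Minimal sketch (assumptions, not the authors' code): Spearman correlation between
# each radiomic feature and each WSI-derived pathological feature.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
radiomic = rng.normal(size=(60, 6))       # patients x radiomic features
pathologic = rng.normal(size=(60, 150))   # patients x nuclear morphological features (NMFs)

pairs = []
for i in range(radiomic.shape[1]):
    for j in range(pathologic.shape[1]):
        r, p = spearmanr(radiomic[:, i], pathologic[:, j])
        if abs(r) >= 0.30 and p < 0.05:   # keep pairs above an illustrative threshold
            pairs.append((i, j, r, p))
print(f"{len(pairs)} correlated radiomic-pathology feature pairs")
```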

Deep learning-based fine-grained assessment of aneurysm wall characteristics using 4D-CT angiography.

Kumrai T, Maekawa T, Chen Y, Sugiyama Y, Takagaki M, Yamashiro S, Takizawa K, Ichinose T, Ishida F, Kishima H

PubMed | Jan 1, 2025
This study proposes a novel deep learning-based approach for assessing aneurysm wall characteristics, including thin-walled (TW) and hyperplastic-remodeling (HR) regions. We analyzed fifty-two unruptured cerebral aneurysms using 4D computed tomography angiography (4D-CTA) and intraoperative recordings. The TW and HR regions were identified in intraoperative images. The 3D trajectories of observation points on the aneurysm walls were processed to compute time series of 3D speed, acceleration, and smoothness of motion, aiming to evaluate the aneurysm wall characteristics. To facilitate point-level risk evaluation using the time-series data, we developed a convolutional neural network (CNN)-long short-term memory (LSTM)-based regression model enriched with attention layers. To accommodate patient heterogeneity, a patient-independent feature extraction mechanism was introduced. Furthermore, unlabeled data were incorporated to enhance the data-intensive deep model. The proposed method achieved an average diagnostic accuracy of 92%, significantly outperforming a simpler model lacking attention. These results underscore the significance of patient-independent feature extraction and the use of unlabeled data. This study demonstrates the efficacy of a fine-grained deep learning approach in predicting aneurysm wall characteristics using 4D-CTA. Notably, incorporating an attention-based network structure proved particularly effective, contributing to enhanced performance.
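A minimal PyTorch sketch of a CNN-LSTM regression model with a simple attention layer over time, applied to per-point time series of speed, acceleration, and smoothness features as described above, is given below; layer sizes, the attention form, and input shapes are illustrative assumptions, and the patient-independent feature extraction and unlabeled-data components are omitted.

```python
# Minimal sketch (assumptions, not the authors' architecture): CNN-LSTM regression
# with attention over time steps for per-point wall-motion time series.
import torch
import torch.nn as nn

class CnnLstmAttention(nn.Module):
    def __init__(self, in_channels=3, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                        # local temporal patterns
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)                 # one attention score per time step
        self.head = nn.Linear(hidden, 1)                 # regression output (wall-risk score)

    def forward(self, x):                                # x: (batch, time, channels)
        h = self.cnn(x.transpose(1, 2)).transpose(1, 2)  # -> (batch, time, 32)
        out, _ = self.lstm(h)                            # -> (batch, time, hidden)
        weights = torch.softmax(self.attn(out), dim=1)   # attention over time steps
        context = (weights * out).sum(dim=1)             # weighted temporal summary
        return self.head(context).squeeze(-1)

model = CnnLstmAttention()
scores = model(torch.randn(8, 100, 3))                   # 8 wall points, 100 time steps, 3 features
```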