Page 124 of 126 · 1,258 results

Application of artificial intelligence in X-ray imaging analysis for knee arthroplasty: A systematic review.

Zhang Z, Hui X, Tao H, Fu Z, Cai Z, Zhou S, Yang K

PubMed · Jan 1, 2025
Artificial intelligence (AI) is a promising and powerful technology with increasing use in orthopedics. The global volume of knee arthroplasty procedures is growing. This study investigated the use of AI algorithms to review radiographs of knee arthroplasty. The Ovid-Embase, Web of Science, Cochrane Library, PubMed, China National Knowledge Infrastructure (CNKI), WeiPu (VIP), WanFang, and China Biology Medicine (CBM) databases were systematically screened from inception to March 2024 (PROSPERO study protocol registration: CRD42024507549). Risk of bias was assessed with the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool. A total of 21 studies were included in the analysis. Of these, 10 studies identified and classified implant brands, 6 measured implant size and component alignment, 3 detected implant loosening, and 2 diagnosed prosthetic joint infections (PJI). For classifying and identifying implant brands, 5 studies demonstrated near-perfect prediction with an area under the curve (AUC) ranging from 0.98 to 1.0, and 10 achieved accuracy (ACC) between 96% and 100%. Regarding implant measurement, one study showed an AUC of 0.62, and two others exhibited over 80% ACC in determining component sizes. Moreover, artificial intelligence showed good to excellent reliability across all angles in three separate studies (intraclass correlation coefficient > 0.78). In predicting PJI, one study achieved an AUC of 0.91 with a corresponding ACC of 90.5%, while another reported a positive predictive value ranging from 75% to 85%. For detecting implant loosening, the AUC was found to be at least as high as 0.976, with ACC ranging from 85.8% to 97.5%. These studies show that AI is promising in recognizing implants in knee arthroplasty. Future research should follow a rigorous approach to AI development, with comprehensive and transparent reporting of methods and the creation of open-source software programs and commercial tools that can provide clinicians with objective clinical decisions.

OA-HybridCNN (OHC): An advanced deep learning fusion model for enhanced diagnostic accuracy in knee osteoarthritis imaging.

Liao Y, Yang G, Pan W, Lu Y

PubMed · Jan 1, 2025
Knee osteoarthritis (KOA) is a leading cause of disability globally. Early and accurate diagnosis is paramount in preventing its progression and improving patients' quality of life. However, the inconsistency in radiologists' expertise and the onset of visual fatigue during prolonged image analysis often compromise diagnostic accuracy, highlighting the need for automated diagnostic solutions. In this study, we present an advanced deep learning model, OA-HybridCNN (OHC), which integrates ResNet and DenseNet architectures. This integration effectively addresses the gradient vanishing issue in DenseNet and augments prediction accuracy. To evaluate its performance, we conducted a thorough comparison with other deep learning models using five-fold cross-validation and external tests. The OHC model outperformed its counterparts across all performance metrics. In external testing, OHC exhibited an accuracy of 91.77%, precision of 92.34%, and recall of 91.36%. During the five-fold cross-validation, its average AUC and ACC were 86.34% and 87.42%, respectively. Deep learning, particularly exemplified by the OHC model, has greatly improved the efficiency and accuracy of KOA imaging diagnosis. The adoption of such technologies not only alleviates the burden on radiologists but also significantly enhances diagnostic precision.
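As a general illustration of the five-fold cross-validation protocol used to evaluate OHC (this is not the authors' code; `kfold_indices` is a hypothetical helper name):

```python
def kfold_indices(n, k=5):
    """Yield (train, val) index lists for k-fold cross-validation.

    Each sample appears in exactly one validation fold; per-fold
    metrics (e.g. AUC, ACC) are then averaged across the k folds.
    """
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    indices = list(range(n))
    start = 0
    for size in fold_sizes:
        val = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, val
        start += size

# Averaging a per-fold metric across folds (illustrative numbers):
fold_aucs = [0.85, 0.87, 0.86, 0.88, 0.86]
mean_auc = sum(fold_aucs) / len(fold_aucs)
```

Reporting the mean across folds, as the abstract does for AUC and ACC, reduces the variance that a single train/test split would introduce.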

Comparative analysis of diagnostic performance in mammography: A reader study on the impact of AI assistance.

Ramli Hamid MT, Ab Mumin N, Abdul Hamid S, Mohd Ariffin N, Mat Nor K, Saib E, Mohamed NA

PubMed · Jan 1, 2025
This study evaluates the impact of artificial intelligence (AI) assistance on the diagnostic performance of radiologists with varying levels of experience in interpreting mammograms in a Malaysian tertiary referral center, particularly in women with dense breasts. This retrospective study included 434 digital mammograms interpreted by two general radiologists (12 and 6 years of experience) and two trainees (2 years of experience each). Diagnostic performance was assessed with and without AI assistance (Lunit INSIGHT MMG), using sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and area under the receiver operating characteristic curve (AUC). Inter-reader agreement was measured using kappa statistics. AI assistance significantly improved the diagnostic performance of all reader groups across all metrics (p < 0.05). The senior radiologist consistently achieved the highest sensitivity (86.5% without AI, 88.0% with AI) and specificity (60.5% without AI, 59.2% with AI). The junior radiologist demonstrated the highest PPV (56.9% without AI, 74.6% with AI) and NPV (90.3% without AI, 92.2% with AI). The trainees showed the lowest performance, but AI significantly enhanced their accuracy. AI assistance was particularly beneficial in interpreting mammograms of women with dense breasts. AI assistance significantly enhances the diagnostic accuracy and consistency of radiologists in mammogram interpretation, with notable benefits for less experienced readers. These findings support the integration of AI into clinical practice, particularly in resource-limited settings where access to specialized breast radiologists is constrained.
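The kappa statistics used above for inter-reader agreement can be illustrated with a minimal Cohen's kappa for two readers; this is a generic sketch, not the study's analysis code:

```python
def cohens_kappa(reader1, reader2):
    """Cohen's kappa: agreement between two readers, corrected for
    the agreement expected by chance alone."""
    assert len(reader1) == len(reader2)
    n = len(reader1)
    labels = set(reader1) | set(reader2)
    # Observed proportion of cases where the readers agree.
    p_obs = sum(a == b for a, b in zip(reader1, reader2)) / n
    # Expected agreement if each reader assigned labels independently.
    p_exp = sum((reader1.count(l) / n) * (reader2.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)
```

A kappa of 1 indicates perfect agreement, 0 indicates chance-level agreement, and negative values indicate systematic disagreement.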

Integrating multimodal imaging and peritumoral features for enhanced prostate cancer diagnosis: A machine learning approach.

Zhou H, Xie M, Shi H, Shou C, Tang M, Zhang Y, Hu Y, Liu X

PubMed · Jan 1, 2025
Prostate cancer is a common malignancy in men, and accurately distinguishing between benign and malignant nodules at an early stage is crucial for optimizing treatment. Multimodal imaging (such as apparent diffusion coefficient (ADC) maps and T2-weighted sequences) plays an important role in the diagnosis of prostate cancer, but effectively combining these imaging features for accurate classification remains a challenge. This retrospective study included MRI data from 199 prostate cancer patients. Radiomic features from both the tumor and peritumoral regions were extracted, and a random forest model was used to select the most contributive features for classification. Three machine learning models (Random Forest, XGBoost, and Extra Trees) were then constructed and trained on four different feature combinations (tumor ADC, tumor T2, tumor ADC+T2, and tumor + peritumoral ADC+T2). The model incorporating multimodal imaging features and peritumoral characteristics showed superior classification performance. The Extra Trees model outperformed the others across all feature combinations, particularly in the tumor + peritumoral ADC+T2 group, where the AUC reached 0.729. The AUC values for the other combinations also exceeded 0.65. While the Random Forest and XGBoost models performed slightly lower, they still demonstrated strong classification abilities, with AUCs ranging from 0.63 to 0.72. SHAP analysis revealed that key features, such as tumor texture and peritumoral gray-level features, significantly contributed to the model's classification decisions. The combination of multimodal imaging data with peritumoral features moderately improved the accuracy of prostate cancer classification. This model provides a non-invasive and effective diagnostic tool for clinical use and supports future personalized treatment decisions.
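The AUCs reported above are equivalent to the Mann-Whitney (rank) statistic: the probability that a randomly chosen malignant case is scored above a randomly chosen benign one. A minimal, library-free sketch (illustrative only, not the study's pipeline):

```python
def auc_score(labels, scores):
    """AUC as the probability that a random positive case is scored
    above a random negative case; ties count as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This pairwise definition makes clear why an AUC of 0.729 means the model ranks a malignant nodule above a benign one about 73% of the time.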

Neurovision: A deep learning driven web application for brain tumour detection using weight-aware decision approach.

Santhosh TRS, Mohanty SN, Pradhan NR, Khan T, Derbali M

PubMed · Jan 1, 2025
Accurate diagnosis of brain tumours is a crucial task in modern medical systems, yet identifying a potential tumour is challenging owing to the complex behaviour and structure of the human brain. To address this issue, a deep learning-driven framework consisting of four pre-trained models, namely DenseNet169, VGG-19, Xception, and EfficientNetV2B2, is developed to classify potential brain tumours from magnetic resonance images. First, the deep learning models are trained and fine-tuned on the training dataset, and their validation scores are taken as model-wise weights. The trained models are then evaluated on the test dataset to generate model-specific predictions. In the weight-aware decision module, the class bucket of a probable output class is updated with the weights of the deep models whose predictions match that class. Finally, the bucket with the highest aggregated value is selected as the final output class for the input image. This novel weight-aware decision mechanism is a key feature of the framework, as it handles tie situations in multi-class classification more effectively than conventional majority-voting techniques. The framework achieved promising accuracies of 98.7%, 97.52%, and 94.94% on three different datasets. The entire framework is seamlessly integrated into an end-to-end web application for user convenience. The source code, dataset and other particulars are publicly released at https://github.com/SaiSanthosh1508/Brain-Tumour-Image-classification-app [Rishik Sai Santhosh, "Brain Tumour Image Classification Application," https://github.com/SaiSanthosh1508/Brain-Tumour-Image-classification-app] for academic, research and other non-commercial usage.
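The weight-aware decision module described above can be sketched as follows; the function name and example numbers are illustrative, with the per-model weights standing in for the validation scores mentioned in the abstract:

```python
def weight_aware_decision(predictions, weights, n_classes):
    """Each model adds its validation-score weight to the bucket of
    the class it predicts; the bucket with the highest aggregated
    value wins, which resolves ties that defeat plain majority voting."""
    buckets = [0.0] * n_classes
    for cls, w in zip(predictions, weights):
        buckets[cls] += w
    return max(range(n_classes), key=buckets.__getitem__)

# Four models split 2-2 between classes 0 and 1: a plain majority vote
# is tied, but the validation-score weights break the tie.
winner = weight_aware_decision([0, 1, 1, 0], [0.98, 0.90, 0.85, 0.80],
                               n_classes=4)
```

Here class 0's bucket totals 1.78 against 1.75 for class 1, so the stronger models carry the decision even though the raw vote is split.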

The application of ultrasound artificial intelligence in the diagnosis of endometrial diseases: Current practice and future development.

Wei Q, Xiao Z, Liang X, Guo Z, Zhang Y, Chen Z

PubMed · Jan 1, 2025
Diagnosis and treatment of endometrial diseases are crucial for women's health. Over the past decade, ultrasound has emerged as a non-invasive, safe, and cost-effective imaging tool, significantly contributing to endometrial disease diagnosis and generating extensive datasets. The introduction of artificial intelligence has enabled the application of machine learning and deep learning to extract valuable information from these datasets, enhancing ultrasound diagnostic capabilities. This paper reviews the progress of artificial intelligence in ultrasound image analysis for endometrial diseases, focusing on applications in diagnosis, decision support, and prognosis analysis. We also summarize current research challenges and propose potential solutions and future directions to advance ultrasound artificial intelligence technology in endometrial disease diagnosis, ultimately improving women's health through digital tools.

AISIM: evaluating impacts of user interface elements of an AI assisting tool.

Wiratchawa K, Wanna Y, Junsawang P, Titapun A, Techasen A, Boonrod A, Laopaiboon V, Chamadol N, Bulathwela S, Intharah T

PubMed · Jan 1, 2025
While Artificial Intelligence (AI) has demonstrated human-level capabilities in many prediction tasks, collaboration between humans and machines is crucial in mission-critical applications, especially in the healthcare sector. An important factor that enables successful human-AI collaboration is the user interface (UI). This paper evaluated the UI of BiTNet, an intelligent assisting tool for human biliary tract diagnosis via ultrasound images. We evaluated the UI of the assisting tool with 11 healthcare professionals through two main research questions: 1) did the assisting tool improve the diagnostic performance of the healthcare professionals who used it? and 2) how did different UI elements of the assisting tool influence the users' decisions? To analyse the impact of different UI elements without multiple rounds of experiments, we propose the novel AISIM strategy, and we demonstrate that it can be used to analyse the influence of different user-interface elements in one go. Our main findings show that the assisting tool improved the diagnostic performance of healthcare professionals across all levels of experience (OR = 3.326, p < 10⁻¹⁵). In addition, high AI prediction confidence and a correct AI attention area more than doubled the odds that users would follow the AI suggestion. Finally, the interview results agreed with the experimental finding that BiTNet boosted users' confidence when they were assigned to diagnose abnormalities in the biliary tract from ultrasound images.
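For context, the reported odds ratio comes from a fitted logistic model, where a coefficient maps to an OR through the exponential function. A minimal sketch of that relationship (illustrative only, not the paper's analysis):

```python
import math

def coef_to_odds_ratio(coef):
    """A logistic-regression coefficient b maps to an odds ratio exp(b):
    a one-unit change in the predictor multiplies the odds by exp(b)."""
    return math.exp(coef)
```

A coefficient of 0 corresponds to OR = 1 (no effect), and an OR of 3.326, as reported above, corresponds to a coefficient of about ln(3.326) ≈ 1.20.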

Convolutional neural network using magnetic resonance brain imaging to predict outcome from tuberculosis meningitis.

Dong THK, Canas LS, Donovan J, Beasley D, Thuong-Thuong NT, Phu NH, Ha NT, Ourselin S, Razavi R, Thwaites GE, Modat M

PubMed · Jan 1, 2025
Tuberculous meningitis (TBM) leads to high mortality, especially amongst individuals with HIV. Predicting the incidence of disease-related complications is challenging, and the value of brain magnetic resonance imaging (MRI) for this purpose has not been well investigated. We used a convolutional neural network (CNN) to explore the complementary contribution of brain MRI to the conventional prognostic determinants. We pooled data from two randomised controlled trials of HIV-positive and HIV-negative adults with clinical TBM in Vietnam to predict the occurrence of death or new neurological complications in the first two months after the subject's first MRI session. We developed and compared three models: a logistic regression with clinical, demographic and laboratory data as reference, a CNN that utilised only T1-weighted MRI volumes, and a model that fused all available information. All models were fine-tuned using two repetitions of 5-fold cross-validation. The final evaluation was based on a random 70/30 training/test split, stratified by the outcome and HIV status. Based on the selected model, we explored the interpretability maps derived from the models. 215 patients were included, with an event prevalence of 22.3%. On the test set, our non-imaging model had a higher AUC (71.2% ± 1.1%) than the imaging-only model (67.3% ± 2.6%). The fused model was superior to both, with an average AUC of 77.3% ± 4.0% on the test set. The non-imaging variables were more informative in the HIV-positive group, while the imaging features were more predictive in the HIV-negative group. All three models performed better in the HIV-negative cohort. The interpretability maps show the model's focus on the lateral fissures, the corpus callosum, the midbrain, and peri-ventricular tissues. Imaging information can provide added value in predicting unwanted outcomes of TBM. However, a larger dataset is needed to confirm this finding.
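The stratified 70/30 training/test split can be sketched as below. This is a generic illustration with a hypothetical helper name; the trial stratified jointly by outcome and HIV status, which amounts to using the combined (outcome, HIV) pair as the label here:

```python
import random

def stratified_split(labels, test_frac=0.3, seed=42):
    """Split sample indices into train/test while preserving the
    class proportions of `labels` in both parts."""
    rng = random.Random(seed)
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    train, test = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)                      # randomise within each stratum
        n_test = round(len(idxs) * test_frac)  # 30% of each stratum to test
        test.extend(idxs[:n_test])
        train.extend(idxs[n_test:])
    return train, test
```

Stratifying keeps the 22.3% event prevalence roughly equal in both parts, so test-set metrics are not distorted by an accidental class imbalance.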

Radiomics of Dynamic Contrast-Enhanced MRI for Predicting Radiation-Induced Hepatic Toxicity After Intensity Modulated Radiotherapy for Hepatocellular Carcinoma: A Machine Learning Predictive Model Based on the SHAP Methodology.

Liu F, Chen L, Wu Q, Li L, Li J, Su T, Li J, Liang S, Qing L

PubMed · Jan 1, 2025
To develop an interpretable machine learning (ML) model using dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) radiomic data, dosimetric parameters, and clinical data for predicting radiation-induced hepatic toxicity (RIHT) in patients with hepatocellular carcinoma (HCC) following intensity-modulated radiation therapy (IMRT). A retrospective analysis of 150 HCC patients was performed, with a 7:3 ratio used to divide the data into training and validation cohorts. Radiomic features from the original MRI sequences and Delta-radiomic features were extracted. Seven ML models based on radiomics were developed: logistic regression (LR), random forest (RF), support vector machine (SVM), eXtreme Gradient Boosting (XGBoost), adaptive boosting (AdaBoost), decision tree (DT), and artificial neural network (ANN). The predictive performance of the models was evaluated using receiver operating characteristic (ROC) curve analysis and calibration curves. Shapley additive explanations (SHAP) were employed to interpret the contribution of each variable and its risk threshold. Original radiomic features and Delta-radiomic features were extracted from DCE-MRI images and filtered to generate Radiomics-scores and Delta-Radiomics-scores. These were then combined with independent risk factors (body mass index (BMI), V5, and pre-Child-Pugh score (pre-CP)) identified through univariate and multivariate logistic regression and Spearman correlation analysis to construct the ML models. In the training cohort, the AUC values were 0.8651 for LR, 0.7004 for RF, 0.6349 for SVM, 0.6706 for XGBoost, 0.7341 for AdaBoost, 0.6806 for DT, and 0.6786 for ANN. The corresponding accuracies were 84.4%, 65.6%, 75.0%, 65.6%, 71.9%, 68.8%, and 71.9%, respectively. The validation cohort further confirmed the superiority of the LR model, which was selected as the optimal model. SHAP analysis revealed that Delta-radiomics made a substantial positive contribution to the model. The interpretable ML model based on radiomics provides a non-invasive tool for predicting RIHT in patients with HCC, demonstrating satisfactory discriminative performance.
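The selected logistic regression model combines the radiomics scores and clinical risk factors into a single risk estimate through the sigmoid function. A sketch of the functional form only, with placeholder coefficients rather than the fitted values from the study:

```python
import math

def logistic_risk(features, coefficients, intercept):
    """Predicted event probability from a fitted logistic regression:
    the sigmoid of the intercept plus the weighted feature sum."""
    z = intercept + sum(c * x for c, x in zip(coefficients, features))
    return 1.0 / (1.0 + math.exp(-z))
```

With this form, SHAP-style attributions decompose the linear term `z` into per-feature contributions, which is what makes the LR model readily interpretable.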

XLLC-Net: A lightweight and explainable CNN for accurate lung cancer classification using histopathological images.

Jim JR, Rayed ME, Mridha MF, Nur K

PubMed · Jan 1, 2025
Lung cancer imaging plays a crucial role in early diagnosis and treatment, where machine learning and deep learning have significantly advanced the accuracy and efficiency of disease classification. This study introduces the Explainable and Lightweight Lung Cancer Net (XLLC-Net), a streamlined convolutional neural network designed for classifying lung cancer from histopathological images. Using the LC25000 dataset, which includes three lung cancer classes and two colon cancer classes, we focused solely on the three lung cancer classes for this study. XLLC-Net effectively discerns complex disease patterns within these classes. The model consists of four convolutional layers and contains merely 3 million parameters, considerably reducing its computational footprint compared to existing deep learning models. This compact architecture facilitates efficient training, completing each epoch in just 60 seconds. Remarkably, XLLC-Net achieves a classification accuracy of 99.62% ± 0.16%, with precision, recall, and F1 score of 99.33% ± 0.30%, 99.67% ± 0.30%, and 99.70% ± 0.30%, respectively. Furthermore, the integration of Explainable AI techniques, such as saliency maps and Grad-CAM, enhances the interpretability of the model, offering clear visual insights into its decision-making process. Our results underscore the potential of lightweight DL models in medical imaging, providing high accuracy and rapid training while ensuring model transparency and reliability.
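The precision, recall, and F1 figures reported above follow the standard confusion-matrix definitions; a minimal reference sketch:

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from confusion-matrix counts:
    precision = TP / (TP + FP), recall = TP / (TP + FN),
    F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

Because F1 is the harmonic mean, it always lies between precision and recall; in multi-class settings these metrics are typically macro-averaged over the classes.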