
A Deep Learning Approach for Mandibular Condyle Segmentation on Ultrasonography.

Keser G, Yülek H, Öner Talmaç AG, Bayrakdar İŞ, Namdar Pekiner F, Çelik Ö

May 6 2025
Deep learning techniques have demonstrated potential in various fields, including segmentation, and have recently been applied to medical image processing. This study aims to develop and evaluate computer-based diagnostic software designed to assess the segmentation of the mandibular condyle in ultrasound images. A total of 668 retrospective, anonymized ultrasound images of adult mandibular condyles were analyzed. The CranioCatch labeling program (CranioCatch, Eskişehir, Turkey) was utilized to annotate the mandibular condyle using a polygonal labeling method. These annotations were subsequently reviewed and validated by experts in oral and maxillofacial radiology. All test images were detected and segmented using the YOLOv8 deep learning artificial intelligence (AI) model. On the test set, the model achieved an F1 score of 0.93, a sensitivity of 0.90, and a precision of 0.96. The automatic segmentation of the mandibular condyle from ultrasound images presents a promising application of artificial intelligence. This approach can help surgeons, radiologists, and other specialists save time in the diagnostic process.
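A minimal sketch of how such a YOLOv8 segmentation model might be run, assuming the ultralytics API and a hypothetical fine-tuned weights file (the study's CranioCatch training pipeline is not public); the arithmetic at the end reproduces the reported F1 score from its precision and sensitivity:

```python
# Hedged sketch only: weights file and image path are placeholders, not study artifacts.
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")  # placeholder; the study fine-tuned on 668 ultrasound images
results = model.predict("condyle_ultrasound.png", conf=0.25)  # hypothetical test image

# F1 follows directly from the reported precision and sensitivity (recall):
precision, recall = 0.96, 0.90
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.2f}")  # ~0.93, consistent with the abstract
```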

Real-time brain tumour diagnoses using a novel lightweight deep learning model.

Alnageeb MHO, M H S

May 6 2025
Brain tumours continue to be a primary cause of death worldwide, highlighting the critical need for effective and accurate diagnostic tools. This article presents MK-YOLOv8, a lightweight deep learning framework developed for the real-time detection and categorization of brain tumours from MRI images. Based on the YOLOv8 architecture, the proposed model incorporates Ghost Convolution, the C3Ghost module, and the SPPELAN module to improve feature extraction and substantially decrease computational complexity. An x-small object detection layer has been added, supporting precise detection of small and x-small tumours, which is crucial for early diagnosis. Trained on the Figshare Brain Tumour (FBT) dataset comprising 3,064 MRI images, MK-YOLOv8 achieved a mean Average Precision (mAP) of 99.1% at IoU 0.50 and 88.4% at IoU 0.50-0.95, outperforming YOLOv8 (98% and 78.8%, respectively). Glioma recall improved by 26%, underscoring the enhanced sensitivity to challenging tumour types. With a computational footprint of only 96.9 GFLOPs (37.5% of YOLOv8x's FLOPs) and 12.6 million parameters (a mere 18.5% of YOLOv8x's), MK-YOLOv8 delivers high efficiency with reduced resource demands. The model was also trained on the Br35H dataset (801 images) to verify its robustness and generalization, achieving a mAP of 98.6% at IoU 0.50. The proposed model operates at 62 frames per second (FPS) and is suited for real-time clinical workflows. These developments establish MK-YOLOv8 as an innovative framework, overcoming challenges in tiny tumour identification and providing a generalizable, adaptable, and precise detection approach for brain tumour diagnostics in clinical settings.
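Ghost Convolution, one of the components named above, reduces FLOPs by generating half of the output channels with a cheap depthwise convolution. A minimal PyTorch sketch of the GhostNet-style idea follows; MK-YOLOv8's exact configuration is not published here, so treat this as an illustration, not the paper's implementation:

```python
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    """Primary conv produces half the channels; a cheap depthwise conv 'ghosts' the rest."""
    def __init__(self, c_in, c_out, k=1, s=1):
        super().__init__()
        c_half = c_out // 2
        self.primary = nn.Sequential(
            nn.Conv2d(c_in, c_half, k, s, k // 2, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())
        # 5x5 depthwise convolution: far fewer FLOPs than a full convolution.
        self.cheap = nn.Sequential(
            nn.Conv2d(c_half, c_half, 5, 1, 2, groups=c_half, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

x = torch.randn(1, 64, 80, 80)
print(GhostConv(64, 128)(x).shape)  # torch.Size([1, 128, 80, 80])
```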

Upper-lobe CT imaging features improve prediction of lung function decline in COPD.

Makimoto K, Virdee S, Koo M, Hogg JC, Bourbeau J, Tan WC, Kirby M

May 1 2025
It is unknown whether prediction models for lung function decline using computed tomography (CT) imaging-derived features from the upper lobes improve performance compared with globally derived features in individuals at risk of and with COPD. Individuals at risk (current or former smokers) and those with COPD from the Canadian Cohort Obstructive Lung Disease (CanCOLD) retrospective study were investigated. A total of 103 CT features were extracted globally and regionally and were used with 12 clinical features (demographics, questionnaires and spirometry) to predict rapid lung function decline for individuals at risk and those with COPD. Machine-learning models were evaluated in a hold-out test set using the area under the receiver operating characteristic curve (AUC), with DeLong's test for comparison. A total of 780 participants were included (n=276 at risk; n=298 Global Initiative for Chronic Obstructive Lung Disease (GOLD) 1 COPD; n=206 GOLD 2+ COPD). For predicting rapid lung function decline in those at risk, the upper-lobe CT model obtained a significantly higher AUC (0.80) than the lower-lobe CT model (0.63) and the global model (0.66; p<0.05). For predicting rapid lung function decline in COPD, there were no significant differences between the upper-lobe (AUC=0.63), lower-lobe (AUC=0.59) and global (AUC=0.59) CT feature models (p>0.05). CT features extracted from the upper lobes obtained significantly improved prediction performance compared with globally extracted features for rapid lung function decline in early/mild COPD.
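A hedged sketch of the comparison design: fit one classifier on upper-lobe CT features plus clinical variables and another on global features, then compare hold-out AUCs. The data here are synthetic and the feature matrices hypothetical; the paper's DeLong test is not shown (scikit-learn has no routine for it, so a separate implementation or R's pROC would be needed):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X_upper = rng.normal(size=(780, 103 + 12))   # upper-lobe CT + 12 clinical features (synthetic)
X_global = rng.normal(size=(780, 103 + 12))  # globally extracted CT + clinical features
y = rng.integers(0, 2, size=780)             # rapid decline yes/no (synthetic labels)

for name, X in [("upper-lobe", X_upper), ("global", X_global)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name} AUC = {auc:.2f}")
```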

Artificial intelligence-based echocardiography assessment to detect pulmonary hypertension.

Salehi M, Alabed S, Sharkey M, Maiter A, Dwivedi K, Yardibi T, Selej M, Hameed A, Charalampopoulos A, Kiely DG, Swift AJ

May 1 2025
Tricuspid regurgitation jet velocity (TRJV) on echocardiography is used for screening patients with suspected pulmonary hypertension (PH). Artificial intelligence (AI) tools, such as the US2.AI, have been developed for automated evaluation of echocardiograms and can yield measurements that aid PH detection. This study evaluated the performance and utility of the US2.AI in a consecutive cohort of patients with suspected PH. 1031 patients who had been investigated for suspected PH between 2009 and 2021 were retrospectively identified from the ASPIRE registry. All patients had undergone echocardiography and right heart catheterisation (RHC). Based on RHC results, 771 (75%) patients with a mean pulmonary arterial pressure >20 mmHg were classified as having a diagnosis of PH (as per the 2022 European guidelines). Echocardiograms were evaluated manually and by the US2.AI tool to yield TRJV measurements. The AI tool demonstrated high interpretation yield, successfully measuring TRJV in 87% of echocardiograms. Manually and automatically derived TRJV values showed excellent agreement (intraclass correlation coefficient 0.94, 95% CI 0.94-0.95) with minimal bias (Bland-Altman analysis). Automated TRJV measurements showed equally high diagnostic accuracy for PH as manual measurements (area under the curve 0.88, 95% CI 0.84-0.90 <i>versus</i> 0.88, 95% CI 0.86-0.91). Automated TRJV measurements on echocardiography were similar to manual measurements, with similarly high and noninferior diagnostic accuracy for PH. These findings demonstrate that automated measurement of TRJV on echocardiography is feasible, accurate and reliable, and support the implementation of AI-based approaches to echocardiogram evaluation and diagnostic imaging for PH.
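A minimal sketch of the agreement analysis reported above: Bland-Altman bias and 95% limits of agreement between manual and AI-derived TRJV values. The arrays are illustrative, not study data:

```python
import numpy as np

manual = np.array([2.8, 3.1, 3.5, 2.6, 4.0, 3.3])  # TRJV in m/s, hypothetical readings
auto = np.array([2.9, 3.0, 3.6, 2.5, 4.1, 3.4])    # corresponding automated measurements

diff = auto - manual
bias = diff.mean()                 # systematic offset between methods
loa = 1.96 * diff.std(ddof=1)      # half-width of the 95% limits of agreement
print(f"bias = {bias:+.3f} m/s, LoA = [{bias - loa:.3f}, {bias + loa:.3f}]")
```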

From manual clinical criteria to machine learning algorithms: Comparing outcome endpoints derived from diverse electronic health record data modalities.

Chappidi S, Belue MJ, Harmon SA, Jagasia S, Zhuge Y, Tasci E, Turkbey B, Singh J, Camphausen K, Krauze AV

May 1 2025
Progression-free survival (PFS) is a critical clinical outcome endpoint during cancer management and treatment evaluation. Yet PFS is often missing from publicly available datasets due to the subjective, expert-driven, and time-intensive nature of generating PFS metrics. Given emerging research in multi-modal machine learning (ML), we explored the benefits and challenges associated with mining different electronic health record (EHR) data modalities and automating extraction of PFS metrics via ML algorithms. We analyzed EHR data from 92 patients with pathology-proven glioblastoma (GBM), obtaining 233 corticosteroid prescriptions, 2080 radiology reports, and 743 brain MRI scans. Three methods were developed to derive clinical PFS: 1) frequency analysis of corticosteroid prescriptions, 2) natural language processing (NLP) of reports, and 3) computer vision (CV) volumetric analysis of imaging. Outputs from these methods were compared to manually annotated clinical guideline PFS metrics. Employing data-driven methods, standalone progression rates were 63% (prescription), 78% (NLP), and 54% (CV), compared to the 99% progression rate from manually applied clinical guidelines using integrated data sources. The prescription method identified progression an average of 5.2 months later than the clinical standard, while the CV and NLP algorithms identified progression earlier by 2.6 and 6.9 months, respectively. While lesion growth is a clinical guideline progression indicator, only half of patients exhibited increasing contrast-enhancing tumor volumes during scan-based CV analysis. Our results indicate that data-driven algorithms can extract tumor progression outcomes from existing EHR data. However, ML methods are subject to varying availability bias, supporting contextual information, and pre-processing resource burdens that influence the extracted PFS endpoint distributions. Our scan-based CV results also suggest that the automation of clinical criteria may not align with human intuition. Our findings indicate a need for improved data source integration, validation, and revisiting of clinical criteria in parallel to multi-modal ML algorithm development.
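A hedged sketch of a scan-based (CV) progression rule of the kind described: flag progression when contrast-enhancing tumour volume rises above the post-treatment nadir by some threshold. The 40% threshold below is an assumption for illustration only; the paper's exact volumetric criterion is not restated in the abstract:

```python
def volumetric_progression(volumes_ml, threshold=0.40):
    """Return the index of the first scan meeting the growth criterion, or None.

    volumes_ml: longitudinal tumour volumes (mL) in scan order.
    threshold: assumed fractional increase over the running nadir (hypothetical).
    """
    nadir = volumes_ml[0]
    for i, v in enumerate(volumes_ml[1:], start=1):
        nadir = min(nadir, volumes_ml[i - 1])  # nadir over all prior scans
        if nadir > 0 and (v - nadir) / nadir >= threshold:
            return i
    return None

print(volumetric_progression([12.0, 9.5, 10.1, 14.8]))  # -> 3 (14.8 mL vs nadir 9.5 mL)
```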

Artificial intelligence in bronchoscopy: a systematic review.

Cold KM, Vamadevan A, Laursen CB, Bjerrum F, Singh S, Konge L

Apr 1 2025
Artificial intelligence (AI) systems have been implemented to improve the diagnostic yield and operators' skills within endoscopy. Similar AI systems are now emerging in bronchoscopy. Our objective was to identify and describe AI systems in bronchoscopy. A systematic review was performed using the MEDLINE, Embase and Scopus databases, focusing on two terms: bronchoscopy and AI. All studies had to evaluate their AI against human ratings. The methodological quality of each study was assessed using the Medical Education Research Study Quality Instrument (MERSQI). 1196 studies were identified, of which 20 met the eligibility criteria. The studies could be divided into three categories: nine studies in airway anatomy and navigation, seven studies in computer-aided detection and classification of nodules in endobronchial ultrasound, and four studies in rapid on-site evaluation. 16 were assessment studies, with 12 showing equal performance and four showing superior performance of AI compared with human ratings. Four studies within airway anatomy implemented their AI, all favouring AI guidance over no AI guidance. The methodological quality of the studies was moderate (mean MERSQI 12.9 points, out of a maximum 18 points). 20 studies developed AI systems, with only four examining the implementation of their AI. These four studies were all within airway navigation and favoured AI over no AI in a simulated setting. Future implementation studies are warranted to test for the clinical effect of AI systems within bronchoscopy.

Artificial intelligence demonstrates potential to enhance orthopaedic imaging across multiple modalities: A systematic review.

Longo UG, Lalli A, Nicodemi G, Pisani MG, De Sire A, D'Hooghe P, Nazarian A, Oeding JF, Zsidai B, Samuelsson K

Apr 1 2025
While several artificial intelligence (AI)-assisted medical imaging applications are reported in the recent orthopaedic literature, comparison of the clinical efficacy and utility of these applications is currently lacking. The aim of this systematic review is to evaluate the effectiveness and reliability of AI applications in orthopaedic imaging, focusing on their impact on diagnostic accuracy, image segmentation and operational efficiency across various imaging modalities. Based on the PRISMA guidelines, a comprehensive literature search of PubMed, Cochrane and Scopus databases was performed, using combinations of keywords and MeSH descriptors ('AI', 'ML', 'deep learning', 'orthopaedic surgery' and 'imaging') from inception to March 2024. Included were studies published between September 2018 and February 2024, which evaluated machine learning (ML) model effectiveness in improving orthopaedic imaging. Studies with insufficient data regarding the output variable used to assess the reliability of the ML model, those applying deterministic algorithms, unrelated topics, protocol studies, and other systematic reviews were excluded from the final synthesis. The Joanna Briggs Institute (JBI) Critical Appraisal tool and the Risk Of Bias In Non-randomised Studies-of Interventions (ROBINS-I) tool were applied for the assessment of bias among the included studies. The 53 included studies reported the use of 11,990,643 images from several diagnostic instruments. A total of 39 studies reported details in terms of the Dice Similarity Coefficient (DSC), while both accuracy and sensitivity were documented across 15 studies. Precision was reported by 14, specificity by nine, and the F1 score by four of the included studies. Three studies applied the area under the curve (AUC) method to evaluate ML model performance. Among the studies included in the final synthesis, Convolutional Neural Networks (CNN) emerged as the most frequently applied category of ML models, present in 17 studies (32%). The systematic review highlights the diverse application of AI in orthopaedic imaging, demonstrating the capability of various machine learning models in accurately segmenting and analysing orthopaedic images. The results indicate that AI models achieve high performance metrics across different imaging modalities. However, the current body of literature lacks comprehensive statistical analysis and randomized controlled trials, underscoring the need for further research to validate these findings in clinical settings. Systematic Review; Level of evidence IV.
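The Dice Similarity Coefficient reported by 39 of the included studies is defined as DSC = 2|A ∩ B| / (|A| + |B|) for predicted and ground-truth segmentation masks A and B. A minimal numpy sketch, for reference:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0  # both masks empty -> perfect agreement

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(a, b))  # 2*2 / (3+3) = 0.667
```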

Auxiliary Diagnosis of Pulmonary Nodules' Benignancy and Malignancy Based on Machine Learning: A Retrospective Study.

Wang W, Yang B, Wu H, Che H, Tong Y, Zhang B, Liu H, Chen Y

Jan 1 2025
Lung cancer, one of the most lethal malignancies globally, often presents insidiously as pulmonary nodules. Its nonspecific clinical presentation and heterogeneous imaging characteristics hinder accurate differentiation between benign and malignant lesions, while biopsy's invasiveness and procedural constraints underscore the critical need for non-invasive early diagnostic approaches. In this retrospective study, we analyzed outpatient and inpatient records from the First Medical Center of Chinese PLA General Hospital between 2011 and 2021, focusing on pulmonary nodules measuring 5-30 mm on CT scans without overt signs of malignancy. Pathological examination served as the reference standard. Comparative experiments evaluated SVM, RF, XGBoost, FNN, and Atten_FNN using five-fold cross-validation to assess AUC, sensitivity, and specificity. The dataset was split 70%/30%, and stratified five-fold cross-validation was applied to the training set. The optimal model was interpreted with SHAP to identify the most influential predictive features. This study enrolled 3355 patients, including 1156 with benign and 2199 with malignant pulmonary nodules. The Atten_FNN model demonstrated superior performance in five-fold cross-validation, achieving an AUC of 0.82, accuracy of 0.75, sensitivity of 0.77, and F1 score of 0.80. SHAP analysis revealed key predictive factors: demographic variables (age, sex, BMI), CT-derived features (maximum nodule diameter, morphology, density, calcification, ground-glass opacity), and laboratory biomarkers (neuroendocrine markers, carcinoembryonic antigen). This study integrates electronic medical records and pathology data to predict pulmonary nodule malignancy using machine/deep learning models. SHAP-based interpretability analysis uncovered key clinical determinants. Acknowledging limitations in cross-center generalizability, we propose the development of a multimodal diagnostic system that combines CT imaging and radiomics, to be validated in multi-center prospective cohorts to facilitate clinical translation. This framework establishes a novel paradigm for early precision diagnosis of lung cancer.
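A hedged sketch of SHAP-based interpretation on one of the compared models (XGBoost is shown because shap.TreeExplainer supports it directly; the study's best model was the Atten_FNN, which would need a model-agnostic explainer instead). The data are synthetic and the feature names are placeholders drawn from the abstract:

```python
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
features = ["age", "sex", "BMI", "max_diameter", "density", "CEA"]
X = rng.normal(size=(200, len(features)))
y = rng.integers(0, 2, size=200)  # benign/malignant labels (synthetic)

model = xgb.XGBClassifier(n_estimators=50).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean absolute SHAP value per feature approximates global importance:
for name, imp in zip(features, np.abs(shap_values).mean(axis=0)):
    print(f"{name}: {imp:.3f}")
```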

Radiomics of Dynamic Contrast-Enhanced MRI for Predicting Radiation-Induced Hepatic Toxicity After Intensity Modulated Radiotherapy for Hepatocellular Carcinoma: A Machine Learning Predictive Model Based on the SHAP Methodology.

Liu F, Chen L, Wu Q, Li L, Li J, Su T, Li J, Liang S, Qing L

Jan 1 2025
To develop an interpretable machine learning (ML) model using dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) radiomic data, dosimetric parameters, and clinical data for predicting radiation-induced hepatic toxicity (RIHT) in patients with hepatocellular carcinoma (HCC) following intensity-modulated radiation therapy (IMRT). A retrospective analysis of 150 HCC patients was performed, with a 7:3 ratio used to divide the data into training and validation cohorts. Radiomic features from the original MRI sequences and Delta-radiomic features were extracted. Seven ML models based on radiomics were developed: logistic regression (LR), random forest (RF), support vector machine (SVM), eXtreme Gradient Boosting (XGBoost), adaptive boosting (AdaBoost), decision tree (DT), and artificial neural network (ANN). The predictive performance of the models was evaluated using receiver operating characteristic (ROC) curve analysis and calibration curves. Shapley additive explanations (SHAP) were employed to interpret the contribution of each variable and its risk threshold. Original radiomic features and Delta-radiomic features were extracted from DCE-MRI images and filtered to generate Radiomics-scores and Delta-Radiomics-scores. These were then combined with independent risk factors (body mass index (BMI), V5, and pre-Child-Pugh score (pre-CP)) identified through univariate and multivariate logistic regression and Spearman correlation analysis to construct the ML models. In the training cohort, the AUC values were 0.8651 for LR, 0.7004 for RF, 0.6349 for SVM, 0.6706 for XGBoost, 0.7341 for AdaBoost, 0.6806 for DT, and 0.6786 for ANN. The corresponding accuracies were 84.4%, 65.6%, 75.0%, 65.6%, 71.9%, 68.8%, and 71.9%, respectively. The validation cohort further confirmed the superiority of the LR model, which was selected as the optimal model. SHAP analysis revealed that Delta-radiomics made a substantial positive contribution to the model. The interpretable ML model based on radiomics provides a non-invasive tool for predicting RIHT in patients with HCC, demonstrating satisfactory discriminative performance.
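A minimal sketch of the Delta-radiomics idea: the relative change of each radiomic feature between two DCE-MRI time points, shown here as a pandas operation on hypothetical feature tables (the paper's extraction pipeline and feature filtering are not reproduced):

```python
import pandas as pd

# Hypothetical radiomic feature tables at two imaging time points (rows = patients).
pre = pd.DataFrame({"firstorder_Mean": [112.0, 98.5], "glcm_Contrast": [4.2, 3.8]})
post = pd.DataFrame({"firstorder_Mean": [120.0, 95.0], "glcm_Contrast": [5.1, 3.5]})

delta = (post - pre) / pre  # relative change per feature, per patient
print(delta.round(3))
```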

Neurovision: A deep learning driven web application for brain tumour detection using weight-aware decision approach.

Santhosh TRS, Mohanty SN, Pradhan NR, Khan T, Derbali M

Jan 1 2025
Accurate diagnosis of brain tumours is a crucial task in modern medical systems, yet identifying a potential brain tumour is challenging owing to the complex structure of the human brain. To address this issue, a deep learning-driven framework consisting of four pre-trained models, namely DenseNet169, VGG-19, Xception, and EfficientNetV2B2, is developed to classify potential brain tumours from magnetic resonance images. First, the deep learning models are trained and fine-tuned on the training dataset, and the validation scores of the trained models are taken as model-wise weights. The trained models are then evaluated on the test dataset to generate model-specific predictions. In the weight-aware decision module, the class bucket of a probable output class is credited with the weights of the deep models whose predictions match that class. Finally, the bucket with the highest aggregated value is selected as the final output class for the input image. This weight-aware decision mechanism is the key feature of the framework: it effectively resolves tie situations in multi-class classification, unlike conventional majority-voting techniques. The developed framework obtained promising accuracies of 98.7%, 97.52%, and 94.94% on three different datasets. The entire framework is seamlessly integrated into an end-to-end web application for user convenience. The source code, dataset and other particulars are publicly released at https://github.com/SaiSanthosh1508/Brain-Tumour-Image-classification-app [Rishik Sai Santhosh, "Brain Tumour Image Classification Application," https://github.com/SaiSanthosh1508/Brain-Tumour-Image-classification-app] for academic, research and other non-commercial usage.
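A sketch of the weight-aware decision module as described in the abstract: each model's validation score acts as its weight, the predicted class's bucket is credited with that weight, and the highest bucket wins. Model names match the paper; the scores and class labels below are illustrative:

```python
from collections import defaultdict

def weight_aware_vote(predictions: dict[str, str], weights: dict[str, float]) -> str:
    """Aggregate model-wise weights into per-class buckets and return the top class."""
    buckets: dict[str, float] = defaultdict(float)
    for model, cls in predictions.items():
        buckets[cls] += weights[model]  # credit the class with the model's validation score
    return max(buckets, key=buckets.get)

weights = {"DenseNet169": 0.97, "VGG-19": 0.94, "Xception": 0.96, "EfficientNetV2B2": 0.95}
preds = {"DenseNet169": "glioma", "VGG-19": "meningioma",
         "Xception": "glioma", "EfficientNetV2B2": "meningioma"}

# A 2-2 tie under plain majority voting resolves here by aggregated weight:
print(weight_aware_vote(preds, weights))  # glioma (0.97 + 0.96 > 0.94 + 0.95)
```

This is where the scheme differs from conventional majority voting: with an even number of models, ties are broken by the validation-score weights rather than arbitrarily.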