Page 193 of 198 · 1980 results

Machine learning algorithms integrating positron emission tomography/computed tomography features to predict pathological complete response after neoadjuvant chemoimmunotherapy in lung cancer.

Sheng Z, Ji S, Chen Y, Mi Z, Yu H, Zhang L, Wan S, Song N, Shen Z, Zhang P

PubMed · May 6, 2025
Reliable methods for predicting pathological complete response (pCR) in non-small cell lung cancer (NSCLC) patients undergoing neoadjuvant chemoimmunotherapy are still under exploration. Although fluorine-18 fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) features reflect tumour response, their utility in predicting pCR remains controversial. This retrospective analysis included NSCLC patients who received neoadjuvant chemoimmunotherapy followed by 18F-FDG PET/CT imaging at Shanghai Pulmonary Hospital from October 2019 to August 2024. Eligible patients were randomly divided into training and validation cohorts at a 7:3 ratio. Relevant 18F-FDG PET/CT features were evaluated as individual predictors and incorporated into 5 machine learning (ML) models. Model performance was assessed using the area under the receiver operating characteristic curve (AUC), and Shapley additive explanation was applied for model interpretation. A total of 205 patients were included, with 91 (44.4%) achieving pCR. Post-treatment tumour maximum standardized uptake value (SUVmax) demonstrated the highest predictive performance among individual predictors, achieving an AUC of 0.72 (95% CI 0.65-0.79), while ΔT SUVmax achieved an AUC of 0.65 (95% CI 0.53-0.77). The Light Gradient Boosting Machine algorithm outperformed other models and individual predictors, achieving an average AUC of 0.87 (95% CI 0.78-0.97) in the training cohort and 0.83 (95% CI 0.72-0.94) in the validation cohort. Shapley additive explanation analysis identified post-treatment tumour SUVmax and post-treatment nodal volume as key contributors. These ML models offer a non-invasive and effective approach for predicting pCR after neoadjuvant chemoimmunotherapy in NSCLC.
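The single-predictor AUCs quoted above have a simple rank-based (Mann-Whitney) interpretation: the probability that a randomly chosen responder outranks a non-responder. A minimal sketch of that computation follows; the SUVmax values and pCR labels are hypothetical toy data, not values from the study:

```python
def roc_auc(scores, labels):
    """ROC AUC via the Mann-Whitney formulation: the probability that a
    randomly chosen positive case scores higher than a negative one
    (ties count as half a win)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: lower post-treatment SUVmax tends to accompany pCR,
# so patients are scored by the negated SUVmax.
suvmax = [2.1, 9.8, 3.4, 7.5, 1.8, 8.9]   # hypothetical post-treatment values
pcr    = [1,   0,   1,   0,   1,   0]     # 1 = pathological complete response
auc = roc_auc([-s for s in suvmax], pcr)
print(auc)  # perfect separation in this toy data -> 1.0
```

In practice a gradient-boosting model such as LightGBM would combine many such features, but the evaluation metric reduces to this same pairwise-ranking quantity.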

Artificial intelligence-based echocardiography assessment to detect pulmonary hypertension.

Salehi M, Alabed S, Sharkey M, Maiter A, Dwivedi K, Yardibi T, Selej M, Hameed A, Charalampopoulos A, Kiely DG, Swift AJ

PubMed · May 1, 2025
Tricuspid regurgitation jet velocity (TRJV) on echocardiography is used for screening patients with suspected pulmonary hypertension (PH). Artificial intelligence (AI) tools, such as the US2.AI, have been developed for automated evaluation of echocardiograms and can yield measurements that aid PH detection. This study evaluated the performance and utility of the US2.AI in a consecutive cohort of patients with suspected PH. 1031 patients who had been investigated for suspected PH between 2009 and 2021 were retrospectively identified from the ASPIRE registry. All patients had undergone echocardiography and right heart catheterisation (RHC). Based on RHC results, 771 (75%) patients with a mean pulmonary arterial pressure >20 mmHg were classified as having a diagnosis of PH (as per the 2022 European guidelines). Echocardiograms were evaluated manually and by the US2.AI tool to yield TRJV measurements. The AI tool demonstrated high interpretation yield, successfully measuring TRJV in 87% of echocardiograms. Manually and automatically derived TRJV values showed excellent agreement (intraclass correlation coefficient 0.94, 95% CI 0.94-0.95) with minimal bias (Bland-Altman analysis). Automated TRJV measurements showed equally high diagnostic accuracy for PH as manual measurements (area under the curve 0.88, 95% CI 0.84-0.90 <i>versus</i> 0.88, 95% CI 0.86-0.91). Automated TRJV measurements on echocardiography were similar to manual measurements, with similarly high and noninferior diagnostic accuracy for PH. These findings demonstrate that automated measurement of TRJV on echocardiography is feasible, accurate and reliable, and they support the implementation of AI-based approaches to echocardiogram evaluation and diagnostic imaging for PH.
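The Bland-Altman analysis cited above summarises rater agreement as a mean difference (bias) with 95% limits of agreement. A minimal sketch, using hypothetical paired TRJV measurements rather than registry data:

```python
import numpy as np

def bland_altman(manual, auto):
    """Bias and 95% limits of agreement between two raters:
    mean of the paired differences +/- 1.96 * their sample SD."""
    manual = np.asarray(manual, dtype=float)
    auto = np.asarray(auto, dtype=float)
    diff = auto - manual
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired TRJV measurements in m/s (manual vs automated)
manual = [2.8, 3.4, 4.1, 2.6, 3.9]
auto   = [2.9, 3.3, 4.2, 2.6, 4.0]
bias, (lo, hi) = bland_altman(manual, auto)
print(f"bias={bias:+.2f} m/s, LoA=({lo:+.2f}, {hi:+.2f})")
```

A bias near zero with narrow limits of agreement, as reported in the study, indicates the automated tool can substitute for manual measurement without systematic offset.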

From manual clinical criteria to machine learning algorithms: Comparing outcome endpoints derived from diverse electronic health record data modalities.

Chappidi S, Belue MJ, Harmon SA, Jagasia S, Zhuge Y, Tasci E, Turkbey B, Singh J, Camphausen K, Krauze AV

PubMed · May 1, 2025
Progression free survival (PFS) is a critical clinical outcome endpoint during cancer management and treatment evaluation. Yet, PFS is often missing from publicly available datasets due to the current subjective, expert, and time-intensive nature of generating PFS metrics. Given emerging research in multi-modal machine learning (ML), we explored the benefits and challenges associated with mining different electronic health record (EHR) data modalities and automating extraction of PFS metrics via ML algorithms. We analyzed EHR data from 92 pathology-proven GBM patients, obtaining 233 corticosteroid prescriptions, 2080 radiology reports, and 743 brain MRI scans. Three methods were developed to derive clinical PFS: 1) frequency analysis of corticosteroid prescriptions, 2) natural language processing (NLP) of reports, and 3) computer vision (CV) volumetric analysis of imaging. Outputs from these methods were compared to manually annotated clinical guideline PFS metrics. Employing data-driven methods, standalone progression rates were 63% (prescription), 78% (NLP), and 54% (CV), compared to the 99% progression rate from manually applied clinical guidelines using integrated data sources. The prescription method identified progression an average of 5.2 months later than the clinical standard, while the CV and NLP algorithms identified progression earlier by 2.6 and 6.9 months, respectively. While lesion growth is a clinical guideline progression indicator, only half of patients exhibited increasing contrast-enhancing tumor volumes during scan-based CV analysis. Our results indicate that data-driven algorithms can extract tumor progression outcomes from existing EHR data. However, ML methods are subject to varying availability bias, supporting contextual information, and pre-processing resource burdens that influence the extracted PFS endpoint distributions. Our scan-based CV results also suggest that the automation of clinical criteria may not align with human intuition. 
Our findings indicate a need for improved data source integration, validation, and revisiting of clinical criteria in parallel to multi-modal ML algorithm development.
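The NLP arm of the pipeline above flags progression from free-text radiology reports. The study does not publish its algorithm; a minimal rule-based sketch of the idea, with a hypothetical keyword pattern, might look like this:

```python
import re

# Hypothetical pattern of phrases a rule-based detector might treat as
# evidence of tumour progression in a radiology report.
PROGRESSION_TERMS = re.compile(
    r"\b(progress(ion|ive|ed)?|increased? in size|new (lesion|enhancement))\b",
    re.IGNORECASE,
)

def flags_progression(report: str) -> bool:
    """Return True if the report text contains a progression phrase."""
    return bool(PROGRESSION_TERMS.search(report))

print(flags_progression("Interval increase in size of the enhancing lesion."))
print(flags_progression("Stable postoperative changes."))
```

Real clinical NLP must also handle negation ("no new lesion") and hedging, which is one reason the standalone NLP progression rate (78%) diverged from the 99% achieved with manually integrated data sources.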

Designing a computer-assisted diagnosis system for cardiomegaly detection and radiology report generation.

Zhu T, Xu K, Son W, Linton-Reid K, Boubnovski-Martell M, Grech-Sollars M, Lain AD, Posma JM

PubMed · May 1, 2025
Chest X-rays (CXRs) are a diagnostic tool for cardiothoracic assessment, making up 50% of all diagnostic imaging tests. With hundreds of images examined every day, radiologists can suffer from fatigue. This fatigue may reduce diagnostic accuracy and slow down report generation. We describe a prototype computer-assisted diagnosis (CAD) pipeline employing computer vision (CV) and natural language processing (NLP). It was trained and evaluated on the publicly available MIMIC-CXR dataset. We perform image quality assessment, view labelling, and segmentation-based cardiomegaly severity classification. We use the output of the severity classification for large language model-based report generation. Four board-certified radiologists assessed the output accuracy of our CAD pipeline. Across the dataset composed of 377,100 CXR images and 227,827 free-text radiology reports, our system identified 0.18% of cases with mixed-sex mentions, 0.02% of poor quality images (F1 = 0.81), and 0.28% of wrongly labelled views (accuracy 99.4%). We assigned views for the 4.18% of images with unlabelled views. Our binary cardiomegaly classification model has 95.2% accuracy. The inter-radiologist agreement on evaluating the generated report's semantics and correctness for radiologist-MIMIC is 0.62 (strict agreement) and 0.85 (relaxed agreement), similar to the radiologist-CAD agreement of 0.55 (strict) and 0.93 (relaxed). Our work found and corrected several incorrect or missing metadata annotations for the MIMIC-CXR dataset. These results suggest our CAD system performs on par with human radiologists. Future improvements revolve around improved text generation and the development of CV tools for other diseases.
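The F1 score reported for the poor-quality-image detector balances precision and recall, which matters for rare classes where raw accuracy is misleading. A minimal sketch from confusion-matrix counts; the counts below are hypothetical, chosen only to illustrate the formula:

```python
def f1(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision (tp/(tp+fp)) and recall (tp/(tp+fn))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for a rare-class detector such as a
# poor-quality-image flagger (true positives, false positives, false negatives).
score = f1(tp=38, fp=10, fn=8)
print(round(score, 2))  # -> 0.81
```

Because only 0.02% of images are poor quality, a detector that never flags anything would score near-perfect accuracy; F1 exposes that failure by going to zero when there are no true positives.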

Upper-lobe CT imaging features improve prediction of lung function decline in COPD.

Makimoto K, Virdee S, Koo M, Hogg JC, Bourbeau J, Tan WC, Kirby M

PubMed · May 1, 2025
It is unknown whether prediction models for lung function decline using computed tomography (CT) imaging-derived features from the upper lobes improve performance compared with globally derived features in individuals at risk of and with COPD. Individuals at risk (current or former smokers) and those with COPD from the retrospective Canadian Cohort Obstructive Lung Disease (CanCOLD) study were investigated. A total of 103 CT features were extracted globally and regionally, and were used with 12 clinical features (demographics, questionnaires and spirometry) to predict rapid lung function decline for individuals at risk and those with COPD. Machine-learning models were evaluated in a hold-out test set using the area under the receiver operating characteristic curve (AUC) with DeLong's test for comparison. A total of 780 participants were included (n=276 at risk; n=298 Global Initiative for Chronic Obstructive Lung Disease (GOLD) 1 COPD; n=206 GOLD 2+ COPD). For predicting rapid lung function decline in those at risk, the upper-lobe CT model obtained a significantly higher AUC (AUC=0.80) than the lower-lobe CT model (AUC=0.63) and global model (AUC=0.66; p<0.05). For predicting rapid lung function decline in COPD, there were no significant differences between the upper-lobe (AUC=0.63), lower-lobe (AUC=0.59) or global CT features model (AUC=0.59; p>0.05). CT features extracted from the upper lobes obtained significantly improved prediction performance compared with globally extracted features for rapid lung function decline in early/mild COPD.

Automated Bi-Ventricular Segmentation and Regional Cardiac Wall Motion Analysis for Rat Models of Pulmonary Hypertension.

Niglas M, Baxan N, Ashek A, Zhao L, Duan J, O'Regan D, Dawes TJW, Nien-Chen C, Xie C, Bai W, Zhao L

PubMed · April 1, 2025
Artificial intelligence-based cardiac motion mapping offers predictive insights into pulmonary hypertension (PH) disease progression and its impact on the heart. We propose an automated deep learning pipeline for bi-ventricular segmentation and 3D wall motion analysis in PH rodent models to bridge preclinical and clinical developments. A data set of 163 short-axis cine cardiac magnetic resonance scans was collected longitudinally from monocrotaline (MCT) and Sugen-hypoxia (SuHx) PH rats and used to train a fully convolutional network for automated segmentation. The model produced an accurate annotation in <1 s for each scan (Dice metric >0.92). High-resolution atlas fitting was performed to produce 3D cardiac mesh models and calculate the regional wall motion between end-diastole and end-systole. Prominent right ventricular hypokinesia was observed in PH rats (-37.7% ± 12.2 MCT; -38.6% ± 6.9 SuHx) compared to healthy controls, attributed primarily to the loss in basal longitudinal and apical radial motion. This automated bi-ventricular rat-specific pipeline provides an efficient and novel translational tool for rodent studies in alignment with clinical cardiac imaging AI developments.
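The Dice metric used to validate the segmentations above measures overlap between a predicted and a reference binary mask. A minimal sketch on hypothetical toy masks:

```python
import numpy as np

def dice(a, b) -> float:
    """Dice similarity coefficient between two binary masks:
    2|A intersect B| / (|A| + |B|). Returns 1.0 for two empty masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 8x8 masks: a predicted square shifted one row from the reference.
pred = np.zeros((8, 8), dtype=int); pred[2:6, 2:6] = 1  # 16 px predicted
gt   = np.zeros((8, 8), dtype=int); gt[3:7, 2:6] = 1    # 16 px reference
print(round(dice(pred, gt), 2))  # 12 overlapping px -> 2*12/32 = 0.75
```

A Dice score >0.92, as reported for the rat ventricles, corresponds to near-complete overlap and is considered strong agreement for cardiac segmentation.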

Artificial intelligence demonstrates potential to enhance orthopaedic imaging across multiple modalities: A systematic review.

Longo UG, Lalli A, Nicodemi G, Pisani MG, De Sire A, D'Hooghe P, Nazarian A, Oeding JF, Zsidai B, Samuelsson K

PubMed · April 1, 2025
While several artificial intelligence (AI)-assisted medical imaging applications are reported in the recent orthopaedic literature, comparison of the clinical efficacy and utility of these applications is currently lacking. The aim of this systematic review is to evaluate the effectiveness and reliability of AI applications in orthopaedic imaging, focusing on their impact on diagnostic accuracy, image segmentation and operational efficiency across various imaging modalities. Based on the PRISMA guidelines, a comprehensive literature search of PubMed, Cochrane and Scopus databases was performed, using combinations of keywords and MeSH descriptors ('AI', 'ML', 'deep learning', 'orthopaedic surgery' and 'imaging') from inception to March 2024. Included were studies published between September 2018 and February 2024, which evaluated machine learning (ML) model effectiveness in improving orthopaedic imaging. Studies with insufficient data regarding the output variable used to assess the reliability of the ML model, those applying deterministic algorithms, unrelated topics, protocol studies, and other systematic reviews were excluded from the final synthesis. The Joanna Briggs Institute (JBI) Critical Appraisal tool and the Risk Of Bias In Non-randomised Studies-of Interventions (ROBINS-I) tool were applied for the assessment of bias among the included studies. The 53 included studies reported the use of 11,990,643 images from several diagnostic instruments. A total of 39 studies reported details in terms of the Dice Similarity Coefficient (DSC), while both accuracy and sensitivity were documented across 15 studies. Precision was reported by 14, specificity by nine, and the F1 score by four of the included studies. Three studies applied the area under the curve (AUC) method to evaluate ML model performance.
Among the studies included in the final synthesis, Convolutional Neural Networks (CNN) emerged as the most frequently applied category of ML models, present in 17 studies (32%). The systematic review highlights the diverse application of AI in orthopaedic imaging, demonstrating the capability of various machine learning models in accurately segmenting and analysing orthopaedic images. The results indicate that AI models achieve high performance metrics across different imaging modalities. However, the current body of literature lacks comprehensive statistical analysis and randomized controlled trials, underscoring the need for further research to validate these findings in clinical settings. Systematic Review; Level of evidence IV.

Artificial intelligence in bronchoscopy: a systematic review.

Cold KM, Vamadevan A, Laursen CB, Bjerrum F, Singh S, Konge L

PubMed · April 1, 2025
Artificial intelligence (AI) systems have been implemented to improve the diagnostic yield and operators' skills within endoscopy. Similar AI systems are now emerging in bronchoscopy. Our objective was to identify and describe AI systems in bronchoscopy. A systematic review was performed using MEDLINE, Embase and Scopus databases, focusing on two terms: bronchoscopy and AI. All studies had to evaluate their AI against human ratings. The methodological quality of each study was assessed using the Medical Education Research Study Quality Instrument (MERSQI). 1196 studies were identified, with 20 passing the eligibility criteria. The studies could be divided into three categories: nine studies in airway anatomy and navigation, seven studies in computer-aided detection and classification of nodules in endobronchial ultrasound, and four studies in rapid on-site evaluation. 16 were assessment studies, with 12 showing equal performance and four showing superior performance of AI compared with human ratings. Four studies within airway anatomy implemented their AI, all favouring AI guidance to no AI guidance. The methodological quality of the studies was moderate (mean MERSQI 12.9 points, out of a maximum 18 points). 20 studies developed AI systems, with only four examining the implementation of their AI. The four studies were all within airway navigation and favoured AI to no AI in a simulated setting. Future implementation studies are warranted to test for the clinical effect of AI systems within bronchoscopy.

Deep learning-based fine-grained assessment of aneurysm wall characteristics using 4D-CT angiography.

Kumrai T, Maekawa T, Chen Y, Sugiyama Y, Takagaki M, Yamashiro S, Takizawa K, Ichinose T, Ishida F, Kishima H

PubMed · January 1, 2025
This study proposes a novel deep learning-based approach for assessing aneurysm wall characteristics, including thin-walled (TW) and hyperplastic-remodeling (HR) regions. We analyzed fifty-two unruptured cerebral aneurysms employing 4D-computed tomography angiography (4D-CTA) and intraoperative recordings. The TW and HR regions were identified in intraoperative images. The 3D trajectories of observation points on aneurysm walls were processed to compute a time series of 3D speed, acceleration, and smoothness of motion, aiming to evaluate the aneurysm wall characteristics. To facilitate point-level risk evaluation using the time-series data, we developed a convolutional neural network (CNN)-long short-term memory (LSTM)-based regression model enriched with attention layers. In order to accommodate patient heterogeneity, a patient-independent feature extraction mechanism was introduced. Furthermore, unlabeled data were incorporated to enhance the data-intensive deep model. The proposed method achieved an average diagnostic accuracy of 92%, significantly outperforming a simpler model lacking attention. These results underscore the significance of patient-independent feature extraction and the use of unlabeled data. This study demonstrates the efficacy of a fine-grained deep learning approach in predicting aneurysm wall characteristics using 4D-CTA. Notably, incorporating an attention-based network structure proved to be particularly effective, contributing to enhanced performance.
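The speed and acceleration time series described above can be derived from tracked 3D wall-point trajectories by finite differences. A minimal sketch under assumed uniform frame spacing, using a hypothetical trajectory rather than 4D-CTA data:

```python
import numpy as np

def motion_features(points, dt):
    """Per-frame 3D speed and acceleration magnitude of a tracked wall
    point, computed by finite differences (np.gradient) over the frames.

    points : (T, 3) array of 3D positions, one row per frame
    dt     : frame interval in seconds
    """
    points = np.asarray(points, dtype=float)
    vel = np.gradient(points, dt, axis=0)          # (T, 3) velocity
    speed = np.linalg.norm(vel, axis=1)            # |v| per frame
    acc = np.linalg.norm(np.gradient(vel, dt, axis=0), axis=1)  # |a| per frame
    return speed, acc

# Hypothetical trajectory: uniform motion along x at 10 mm/s, 10 ms frames.
traj = [[0.1 * i, 0.0, 0.0] for i in range(5)]
speed, acc = motion_features(traj, dt=0.01)
print(speed, acc)  # constant speed 10 mm/s, zero acceleration
```

Sequences of such features, one per observation point, are what a CNN-LSTM regressor of the kind described could consume for point-level wall-risk evaluation.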

Brain tumor classification using MRI images and deep learning techniques.

Wong Y, Su ELM, Yeong CF, Holderbaum W, Yang C

PubMed · January 1, 2025
Brain tumors pose a significant medical challenge, necessitating early detection and precise classification for effective treatment. This study aims to address this challenge by introducing an automated brain tumor classification system that utilizes deep learning (DL) and Magnetic Resonance Imaging (MRI) images. The main purpose of this research is to develop a model that can accurately detect and classify different types of brain tumors, including glioma, meningioma, pituitary tumors, and normal brain scans. A convolutional neural network (CNN) architecture with pretrained VGG16 as the base model is employed, and diverse public datasets are utilized to ensure comprehensive representation. Data augmentation techniques are employed to enhance the training dataset, resulting in a total of 17,136 brain MRI images across the four classes. The accuracy of this model was 99.24%, a higher accuracy than other similar works, demonstrating its potential clinical utility. This higher accuracy was achieved mainly due to the utilization of a large and diverse dataset, the improvement of network configuration, the application of a fine-tuning strategy to adjust pretrained weights, and the implementation of data augmentation techniques in enhancing classification performance for brain tumor detection. In addition, a web application was developed by leveraging HTML and Dash components to enhance usability, allowing for easy image upload and tumor prediction. By harnessing artificial intelligence (AI), the developed system addresses the need to reduce human error and enhance diagnostic accuracy. The proposed approach provides an efficient and reliable solution for brain tumor classification, facilitating early diagnosis and enabling timely medical interventions. This work signifies a potential advancement in brain tumor classification, promising improved patient care and outcomes.
