Page 386 of 390 · 3899 results

Upper-lobe CT imaging features improve prediction of lung function decline in COPD.

Makimoto K, Virdee S, Koo M, Hogg JC, Bourbeau J, Tan WC, Kirby M

PubMed · May 1, 2025
It is unknown whether prediction models for lung function decline using computed tomography (CT) imaging-derived features from the upper lobes improve performance compared with globally derived features in individuals at risk of and with COPD. Individuals at risk (current or former smokers) and those with COPD from the retrospective Canadian Cohort Obstructive Lung Disease (CanCOLD) study were investigated. A total of 103 CT features were extracted globally and regionally and were used with 12 clinical features (demographics, questionnaires and spirometry) to predict rapid lung function decline in individuals at risk and those with COPD. Machine-learning models were evaluated in a hold-out test set using the area under the receiver operating characteristic curve (AUC), with DeLong's test for comparison. A total of 780 participants were included (n=276 at risk; n=298 Global Initiative for Chronic Obstructive Lung Disease (GOLD) 1 COPD; n=206 GOLD 2+ COPD). For predicting rapid lung function decline in those at risk, the upper-lobe CT model obtained a significantly higher AUC (AUC=0.80) than the lower-lobe CT model (AUC=0.63) and the global model (AUC=0.66; p<0.05). For predicting rapid lung function decline in COPD, there were no significant differences between the upper-lobe (AUC=0.63), lower-lobe (AUC=0.59) and global CT feature models (AUC=0.59; p>0.05). CT features extracted from the upper lobes obtained significantly improved prediction performance compared with globally extracted features for rapid lung function decline in early/mild COPD.
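The model comparison above hinges on the AUC, which can be computed directly as the probability that a randomly chosen declining case is ranked above a randomly chosen stable one (the Mann-Whitney formulation on which DeLong's test builds). A minimal pure-Python sketch with hypothetical risk scores (the values below are illustrative, not from the study):

```python
def auc_concordance(pos_scores, neg_scores):
    """AUC as the fraction of (positive, negative) score pairs ranked
    concordantly; ties count as half a concordant pair."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical predicted risks of rapid decline
decliners = [0.9, 0.8, 0.35]
stable = [0.1, 0.4, 0.3]
print(auc_concordance(decliners, stable))  # 8/9 ≈ 0.889
```

DeLong's test then compares two such AUCs computed on the same subjects (e.g. upper-lobe versus global features), accounting for the correlation induced by sharing the test set.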

Artificial intelligence-based echocardiography assessment to detect pulmonary hypertension.

Salehi M, Alabed S, Sharkey M, Maiter A, Dwivedi K, Yardibi T, Selej M, Hameed A, Charalampopoulos A, Kiely DG, Swift AJ

PubMed · May 1, 2025
Tricuspid regurgitation jet velocity (TRJV) on echocardiography is used for screening patients with suspected pulmonary hypertension (PH). Artificial intelligence (AI) tools, such as the US2.AI, have been developed for automated evaluation of echocardiograms and can yield measurements that aid PH detection. This study evaluated the performance and utility of the US2.AI in a consecutive cohort of patients with suspected PH. 1031 patients who had been investigated for suspected PH between 2009 and 2021 were retrospectively identified from the ASPIRE registry. All patients had undergone echocardiography and right heart catheterisation (RHC). Based on RHC results, 771 (75%) patients with a mean pulmonary arterial pressure >20 mmHg were classified as having a diagnosis of PH (as per the 2022 European guidelines). Echocardiograms were evaluated manually and by the US2.AI tool to yield TRJV measurements. The AI tool demonstrated high interpretation yield, successfully measuring TRJV in 87% of echocardiograms. Manually and automatically derived TRJV values showed excellent agreement (intraclass correlation coefficient 0.94, 95% CI 0.94-0.95) with minimal bias (Bland-Altman analysis). Automated TRJV measurements showed equally high diagnostic accuracy for PH as manual measurements (area under the curve 0.88, 95% CI 0.84-0.90 versus 0.88, 95% CI 0.86-0.91). Automated TRJV measurements on echocardiography were similar to manual measurements, with similarly high and noninferior diagnostic accuracy for PH. These findings demonstrate that automated measurement of TRJV on echocardiography is feasible, accurate and reliable and support the implementation of AI-based approaches to echocardiogram evaluation and diagnostic imaging for PH.
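The Bland-Altman analysis cited above summarizes agreement as the mean difference (bias) between paired measurements plus 95% limits of agreement. A minimal stdlib sketch, with hypothetical paired TRJV readings (not the study's data):

```python
import statistics

def bland_altman(manual, auto):
    """Bias and 95% limits of agreement (bias ± 1.96·SD of the
    pairwise differences) between two sets of paired measurements."""
    diffs = [m - a for m, a in zip(manual, auto)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical manual vs automated TRJV readings (m/s)
manual = [3.1, 2.8, 4.0, 3.5]
auto = [3.0, 2.9, 4.1, 3.4]
bias, (lo, hi) = bland_altman(manual, auto)
```

"Minimal bias" corresponds to `bias` near zero with narrow limits; in practice the differences are also plotted against the pairwise means to check that bias does not drift with measurement magnitude.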

From manual clinical criteria to machine learning algorithms: Comparing outcome endpoints derived from diverse electronic health record data modalities.

Chappidi S, Belue MJ, Harmon SA, Jagasia S, Zhuge Y, Tasci E, Turkbey B, Singh J, Camphausen K, Krauze AV

PubMed · May 1, 2025
Progression free survival (PFS) is a critical clinical outcome endpoint during cancer management and treatment evaluation. Yet, PFS is often missing from publicly available datasets due to the current subjective, expert, and time-intensive nature of generating PFS metrics. Given emerging research in multi-modal machine learning (ML), we explored the benefits and challenges associated with mining different electronic health record (EHR) data modalities and automating extraction of PFS metrics via ML algorithms. We analyzed EHR data from 92 patients with pathology-proven glioblastoma (GBM), obtaining 233 corticosteroid prescriptions, 2080 radiology reports, and 743 brain MRI scans. Three methods were developed to derive clinical PFS: 1) frequency analysis of corticosteroid prescriptions, 2) natural language processing (NLP) of reports, and 3) computer vision (CV) volumetric analysis of imaging. Outputs from these methods were compared to manually annotated clinical guideline PFS metrics. Employing data-driven methods, standalone progression rates were 63% (prescription), 78% (NLP), and 54% (CV), compared to the 99% progression rate from manually applied clinical guidelines using integrated data sources. The prescription method identified progression an average of 5.2 months later than the clinical standard, while the CV and NLP algorithms identified progression earlier by 2.6 and 6.9 months, respectively. While lesion growth is a clinical guideline progression indicator, only half of patients exhibited increasing contrast-enhancing tumor volumes in the scan-based CV analysis. Our results indicate that data-driven algorithms can extract tumor progression outcomes from existing EHR data. However, ML methods are subject to varying data-availability bias, dependence on supporting contextual information, and pre-processing resource burdens, all of which influence the extracted PFS endpoint distributions. Our scan-based CV results also suggest that the automation of clinical criteria may not align with human intuition.
Our findings indicate a need for improved data source integration, validation, and revisiting of clinical criteria in parallel to multi-modal ML algorithm development.
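The CV method above derives progression from serial tumour volumes. A minimal sketch of a nadir-based threshold rule (the function and the 25% cut-off are hypothetical illustrations of the idea, not the study's actual criterion):

```python
def flags_progression(volumes_ml, threshold=0.25):
    """Flag progression when enhancing-tumour volume rises more than
    `threshold` (fractional) above the running nadir. The 25% cut-off
    is a hypothetical illustration, not the study's rule.
    Returns the index of the first progressing scan, or None."""
    nadir = volumes_ml[0]
    for i, v in enumerate(volumes_ml[1:], start=1):
        if v > nadir * (1 + threshold):
            return i
        nadir = min(nadir, v)
    return None

# Hypothetical serial volumes (mL): shrinks, then regrows past nadir + 25%
print(flags_progression([10.0, 9.0, 9.5, 12.0]))  # 3
```

Comparing against the nadir rather than baseline is what makes post-treatment shrinkage followed by regrowth detectable, which is one reason automated volumetrics and manually applied guidelines can disagree on progression dates.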

Artificial intelligence in bronchoscopy: a systematic review.

Cold KM, Vamadevan A, Laursen CB, Bjerrum F, Singh S, Konge L

PubMed · Apr 1, 2025
Artificial intelligence (AI) systems have been implemented to improve the diagnostic yield and operators' skills within endoscopy. Similar AI systems are now emerging in bronchoscopy. Our objective was to identify and describe AI systems in bronchoscopy. A systematic review was performed using MEDLINE, Embase and Scopus databases, focusing on two terms: bronchoscopy and AI. All studies had to evaluate their AI against human ratings. The methodological quality of each study was assessed using the Medical Education Research Study Quality Instrument (MERSQI). 1196 studies were identified, with 20 passing the eligibility criteria. The studies could be divided into three categories: nine studies in airway anatomy and navigation, seven studies in computer-aided detection and classification of nodules in endobronchial ultrasound, and four studies in rapid on-site evaluation. 16 were assessment studies, with 12 showing equal performance and four showing superior performance of AI compared with human ratings. Four studies within airway anatomy implemented their AI, all favouring AI guidance over no AI guidance. The methodological quality of the studies was moderate (mean MERSQI 12.9 points, out of a maximum 18 points). 20 studies developed AI systems, with only four examining the implementation of their AI. The four studies were all within airway navigation and favoured AI over no AI in a simulated setting. Future implementation studies are warranted to test for the clinical effect of AI systems within bronchoscopy.

Artificial intelligence demonstrates potential to enhance orthopaedic imaging across multiple modalities: A systematic review.

Longo UG, Lalli A, Nicodemi G, Pisani MG, De Sire A, D'Hooghe P, Nazarian A, Oeding JF, Zsidai B, Samuelsson K

PubMed · Apr 1, 2025
While several artificial intelligence (AI)-assisted medical imaging applications are reported in the recent orthopaedic literature, comparison of the clinical efficacy and utility of these applications is currently lacking. The aim of this systematic review is to evaluate the effectiveness and reliability of AI applications in orthopaedic imaging, focusing on their impact on diagnostic accuracy, image segmentation and operational efficiency across various imaging modalities. Based on the PRISMA guidelines, a comprehensive literature search of PubMed, Cochrane and Scopus databases was performed, using combinations of keywords and MeSH descriptors ('AI', 'ML', 'deep learning', 'orthopaedic surgery' and 'imaging') from inception to March 2024. Included were studies published between September 2018 and February 2024, which evaluated machine learning (ML) model effectiveness in improving orthopaedic imaging. Studies with insufficient data regarding the output variable used to assess the reliability of the ML model, those applying deterministic algorithms, unrelated topics, protocol studies, and other systematic reviews were excluded from the final synthesis. The Joanna Briggs Institute (JBI) Critical Appraisal tool and the Risk Of Bias In Non-randomised Studies-of Interventions (ROBINS-I) tool were applied for the assessment of bias among the included studies. The 53 included studies reported the use of 11,990,643 images from several diagnostic instruments. A total of 39 studies reported details in terms of the Dice Similarity Coefficient (DSC), while both accuracy and sensitivity were documented across 15 studies. Precision was reported by 14, specificity by nine, and the F1 score by four of the included studies. Three studies applied the area under the curve (AUC) method to evaluate ML model performance.
Among the studies included in the final synthesis, Convolutional Neural Networks (CNN) emerged as the most frequently applied category of ML models, present in 17 studies (32%). The systematic review highlights the diverse application of AI in orthopaedic imaging, demonstrating the capability of various machine learning models in accurately segmenting and analysing orthopaedic images. The results indicate that AI models achieve high performance metrics across different imaging modalities. However, the current body of literature lacks comprehensive statistical analysis and randomized controlled trials, underscoring the need for further research to validate these findings in clinical settings. Systematic Review; Level of evidence IV.
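The Dice Similarity Coefficient reported by most of the included segmentation studies is the overlap metric 2|A∩B| / (|A|+|B|) between a predicted and a reference mask. A minimal sketch on flat binary masks (toy data, purely illustrative):

```python
def dice(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary masks given as
    flat 0/1 sequences of equal length; 1.0 when both masks are empty."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

a = [1, 1, 0, 0]
b = [1, 0, 1, 0]
print(dice(a, b))  # 0.5
```

DSC ranges from 0 (no overlap) to 1 (perfect overlap) and, unlike pixel accuracy, is insensitive to the large background class, which is why segmentation papers report it.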

Enhancing Attention Network Spatiotemporal Dynamics for Motor Rehabilitation in Parkinson's Disease.

Pei G, Hu M, Ouyang J, Jin Z, Wang K, Meng D, Wang Y, Chen K, Wang L, Cao LZ, Funahashi S, Yan T, Fang B

PubMed · Jan 1, 2025
Optimizing resource allocation for Parkinson's disease (PD) motor rehabilitation necessitates identifying biomarkers of responsiveness and dynamic neuroplasticity signatures underlying efficacy. A cohort study of 52 early-stage PD patients undergoing 2-week multidisciplinary intensive rehabilitation therapy (MIRT) was conducted, which stratified participants into responders and nonresponders. A multimodal analysis of resting-state electroencephalography (EEG) microstates and functional magnetic resonance imaging (fMRI) coactivation patterns was performed to characterize MIRT-induced spatiotemporal network reorganization. Responders demonstrated clinically meaningful improvement in motor symptoms, exceeding the minimal clinically important difference threshold of 3.25 on the Unified PD Rating Scale part III, alongside significant reductions in bradykinesia and a significant enhancement in quality-of-life scores at the 3-month follow-up. Resting-state EEG in responders showed a significant attenuation in microstate C and a significant enhancement in microstate D occurrences, along with significantly increased transitions from microstate A/B to D, which significantly correlated with motor function, especially in bradykinesia gains. Concurrently, fMRI analyses identified a prolonged dwell time of the dorsal attention network coactivation/ventral attention network deactivation pattern, which was significantly inversely associated with microstate C occurrence and significantly linked to motor improvement. The identified brain spatiotemporal neural markers were validated using machine learning models to assess the efficacy of MIRT in motor rehabilitation for PD patients, achieving an average accuracy rate of 86%. These findings suggest that MIRT may facilitate a shift in neural networks from sensory processing to higher-order cognitive control, with the dynamic reallocation of attentional resources. 
This preliminary study validates the necessity of integrating cognitive-motor strategies for the motor rehabilitation of PD and identifies novel neural markers for assessing treatment efficacy.
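The microstate findings above rest on counting transitions between labelled EEG states (e.g. A/B to D). A minimal stdlib sketch of a transition counter over a hypothetical microstate label sequence (the sequence below is illustrative only):

```python
from collections import Counter

def transition_counts(sequence):
    """Count consecutive microstate transitions, excluding
    self-transitions; e.g. 'AABD' -> {('A','B'): 1, ('B','D'): 1}."""
    return Counter(
        (s, t) for s, t in zip(sequence, sequence[1:]) if s != t
    )

# Hypothetical microstate label sequence
counts = transition_counts("AABDCDDB")
```

Dividing each count by the total number of transitions yields the transition probabilities that are then compared between responders and nonresponders (here, increased A/B→D transitions).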

Convolutional neural network using magnetic resonance brain imaging to predict outcome from tuberculosis meningitis.

Dong THK, Canas LS, Donovan J, Beasley D, Thuong-Thuong NT, Phu NH, Ha NT, Ourselin S, Razavi R, Thwaites GE, Modat M

PubMed · Jan 1, 2025
Tuberculous meningitis (TBM) leads to high mortality, especially amongst individuals with HIV. Predicting the incidence of disease-related complications is challenging, for which purpose the value of brain magnetic resonance imaging (MRI) has not been well investigated. We used a convolutional neural network (CNN) to explore the complementary contribution of brain MRI to the conventional prognostic determinants. We pooled data from two randomised controlled trials of HIV-positive and HIV-negative adults with clinical TBM in Vietnam to predict the occurrence of death or new neurological complications in the first two months after the subject's first MRI session. We developed and compared three models: a logistic regression with clinical, demographic and laboratory data as reference, a CNN that utilised only T1-weighted MRI volumes, and a model that fused all available information. All models were fine-tuned using two repetitions of 5-fold cross-validation. The final evaluation was based on a random 70/30 training/test split, stratified by the outcome and HIV status. We then explored interpretability maps derived from the selected model. 215 patients were included, with an event prevalence of 22.3%. On the test set our non-imaging model had higher AUC (71.2% ± 1.1%) than the imaging-only model (67.3% ± 2.6%). The fused model was superior to both, with an average AUC = 77.3% ± 4.0% in the test set. The non-imaging variables were more informative in the HIV-positive group, while the imaging features were more predictive in the HIV-negative group. All three models performed better in the HIV-negative cohort. The interpretability maps show the model's focus on the lateral fissures, the corpus callosum, the midbrain, and peri-ventricular tissues. Imaging information can provide added value to predict unwanted outcomes of TBM. However, to confirm this finding, a larger dataset is needed.
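The 70/30 split stratified by outcome and HIV status, as described above, keeps each stratum's proportion identical in the training and test sets. A pure-Python sketch of the idea (in practice scikit-learn's `train_test_split(stratify=...)` does this; the labels below are hypothetical):

```python
import random
from collections import defaultdict

def stratified_split(labels, test_frac=0.30, seed=0):
    """70/30 split preserving each stratum's proportion. Labels may be
    tuples, e.g. (outcome, hiv_status). Returns (train_idx, test_idx)."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for i, lab in enumerate(labels):
        by_stratum[lab].append(i)
    train, test = [], []
    for idx in by_stratum.values():
        rng.shuffle(idx)
        cut = round(len(idx) * test_frac)
        test.extend(idx[:cut])
        train.extend(idx[cut:])
    return sorted(train), sorted(test)

# Hypothetical strata: 10 events among HIV-positive, 20 non-events among HIV-negative
labels = [("event", "hiv+")] * 10 + [("no_event", "hiv-")] * 20
train, test = stratified_split(labels)
```

Stratifying on the joint (outcome, HIV) label matters here because the event prevalence is only 22.3%: an unstratified split could leave the test set with too few events to estimate AUC reliably.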

OA-HybridCNN (OHC): An advanced deep learning fusion model for enhanced diagnostic accuracy in knee osteoarthritis imaging.

Liao Y, Yang G, Pan W, Lu Y

PubMed · Jan 1, 2025
Knee osteoarthritis (KOA) is a leading cause of disability globally. Early and accurate diagnosis is paramount in preventing its progression and improving patients' quality of life. However, the inconsistency in radiologists' expertise and the onset of visual fatigue during prolonged image analysis often compromise diagnostic accuracy, highlighting the need for automated diagnostic solutions. In this study, we present an advanced deep learning model, OA-HybridCNN (OHC), which integrates ResNet and DenseNet architectures. This integration effectively addresses the gradient vanishing issue in DenseNet and augments prediction accuracy. To evaluate its performance, we conducted a thorough comparison with other deep learning models using five-fold cross-validation and external tests. The OHC model outperformed its counterparts across all performance metrics. In external testing, OHC exhibited an accuracy of 91.77%, precision of 92.34%, and recall of 91.36%. During the five-fold cross-validation, its average AUC and ACC were 86.34% and 87.42%, respectively. Deep learning, particularly exemplified by the OHC model, has greatly improved the efficiency and accuracy of KOA imaging diagnosis. The adoption of such technologies not only alleviates the burden on radiologists but also significantly enhances diagnostic precision.
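The five-fold cross-validation used above partitions the data into five validation folds, training on the remaining four each time and averaging the metrics. A minimal sketch of the index bookkeeping (generic, not the OHC pipeline):

```python
def kfold_indices(n, k=5):
    """Partition range(n) into k contiguous, near-equal validation folds;
    yields (train_indices, val_indices) for each fold."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, val
        start += size

folds = list(kfold_indices(10, k=5))
```

Each sample serves as validation data exactly once, so the averaged AUC/ACC (86.34%/87.42% here) reflects performance on every sample while never scoring a model on data it was trained on; in practice the data are shuffled (or stratified by class) before partitioning.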

YOLOv8 framework for COVID-19 and pneumonia detection using synthetic image augmentation.

A Hasib U, Md Abu R, Yang J, Bhatti UA, Ku CS, Por LY

PubMed · Jan 1, 2025
Early and accurate detection of COVID-19 and pneumonia through medical imaging is critical for effective patient management. This study aims to develop a robust framework that integrates synthetic image augmentation with advanced deep learning (DL) models to address dataset imbalance, improve diagnostic accuracy, and enhance trust in artificial intelligence (AI)-driven diagnoses through Explainable AI (XAI) techniques. The proposed framework benchmarks state-of-the-art models (InceptionV3, DenseNet, ResNet) for initial performance evaluation. Synthetic images are generated using Feature Interpolation through Linear Mapping and principal component analysis to enrich dataset diversity and balance class distribution. YOLOv8 and InceptionV3 models, fine-tuned via transfer learning, are trained on the augmented dataset. Grad-CAM is used for model explainability, while large language models (LLMs) support visualization analysis to enhance interpretability. YOLOv8 achieved superior performance with 97% accuracy, precision, recall, and F1-score, outperforming benchmark models. Synthetic data generation effectively reduced class imbalance and improved recall for underrepresented classes. Comparative analysis demonstrated significant advancements over existing methodologies. XAI visualizations (Grad-CAM heatmaps) highlighted anatomically plausible focus areas aligned with clinical markers of COVID-19 and pneumonia, thereby validating the model's decision-making process. The integration of synthetic data generation, advanced DL, and XAI significantly enhances the detection of COVID-19 and pneumonia while fostering trust in AI systems. YOLOv8's high accuracy, coupled with interpretable Grad-CAM visualizations and LLM-driven analysis, promotes transparency crucial for clinical adoption. 
Future research will focus on developing a clinically viable, human-in-the-loop diagnostic workflow, further optimizing performance through the integration of transformer-based language models to improve interpretability and decision-making.
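The synthetic-augmentation step above interpolates in feature space to balance under-represented classes. A generic sketch of convex interpolation between two same-class feature vectors (an illustration of the principle, not the paper's exact Feature Interpolation through Linear Mapping pipeline):

```python
def interpolate_features(x1, x2, lam=0.5):
    """Synthesize a new sample as a convex combination of two same-class
    feature vectors: x_new = lam*x1 + (1-lam)*x2, with 0 <= lam <= 1.
    Generic interpolation-based augmentation, not the study's exact method."""
    return [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]

# Midpoint of two hypothetical feature vectors
midpoint = interpolate_features([0.0, 2.0], [2.0, 4.0])  # [1.0, 3.0]
```

Sampling `lam` uniformly per synthetic example populates the line segment between real samples, enriching a minority class without copying any image verbatim.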

Same-model and cross-model variability in knee cartilage thickness measurements using 3D MRI systems.

Katano H, Kaneko H, Sasaki E, Hashiguchi N, Nagai K, Ishijima M, Ishibashi Y, Adachi N, Kuroda R, Tomita M, Masumoto J, Sekiya I

PubMed · Jan 1, 2025
Magnetic Resonance Imaging (MRI) based three-dimensional analysis of knee cartilage has evolved to become fully automatic. However, when implementing these measurements across multiple clinical centers, scanner variability becomes a critical consideration. Our purposes were to quantify and compare same-model variability (between repeated scans on the same MRI system) and cross-model variability (across different MRI systems) in knee cartilage thickness measurements using MRI scanners from five manufacturers, as analyzed with a specific 3D volume analysis software. Ten healthy volunteers (eight males and two females, aged 22-60 years) underwent two scans of their right knee on 3T MRI systems from five manufacturers (Canon, Fujifilm, GE, Philips, and Siemens). The imaging protocol included fat-suppressed spoiled gradient echo and proton density weighted sequences. Cartilage regions were automatically segmented into 7 subregions using a specific deep learning-based 3D volume analysis software. This resulted in 350 measurements for same-model variability and 2,800 measurements for cross-model variability. For same-model variability, 82% of measurements showed variability ≤0.10 mm, and 98% showed variability ≤0.20 mm. For cross-model variability, 51% showed variability ≤0.10 mm, and 84% showed variability ≤0.20 mm. The mean same-model variability (0.06 ± 0.05 mm) was significantly lower than cross-model variability (0.11 ± 0.09 mm) (p < 0.001). This study demonstrates that knee cartilage thickness measurements exhibit significantly higher variability across different MRI systems compared to repeated measurements on the same system, when analyzed using this specific software. This finding has important implications for multi-center studies and longitudinal assessments using different MRI systems and highlights the software-dependent nature of such variability assessments.
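The variability figures above (fraction of measurements within 0.10 mm and 0.20 mm) come from absolute differences between paired scans. A minimal stdlib sketch with hypothetical repeated thickness measurements (the values are illustrative, not the study's data):

```python
def variability_summary(scan1, scan2, thresholds=(0.10, 0.20)):
    """Mean absolute difference (mm) between paired measurements, plus the
    fraction of measurements at or below each threshold."""
    diffs = [abs(a - b) for a, b in zip(scan1, scan2)]
    mean = sum(diffs) / len(diffs)
    frac = {t: sum(d <= t for d in diffs) / len(diffs) for t in thresholds}
    return mean, frac

# Hypothetical repeated cartilage-thickness measurements (mm)
first = [2.10, 1.95, 2.40, 2.05]
second = [2.15, 1.90, 2.55, 2.05]
mean_dev, fractions = variability_summary(first, second)
```

Same-model variability pairs two scans from one scanner; cross-model variability pairs scans from different scanners, which is what drives the 0.06 mm versus 0.11 mm gap reported above.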