
Upper-lobe CT imaging features improve prediction of lung function decline in COPD.

Makimoto K, Virdee S, Koo M, Hogg JC, Bourbeau J, Tan WC, Kirby M

PubMed · May 1, 2025
It is unknown whether prediction models for lung function decline using computed tomography (CT) imaging-derived features from the upper lobes improve performance compared with globally derived features in individuals at risk of and with COPD. Individuals at risk (current or former smokers) and those with COPD from the Canadian Cohort Obstructive Lung Disease (CanCOLD) retrospective study were investigated. A total of 103 CT features were extracted globally and regionally and were used with 12 clinical features (demographics, questionnaires and spirometry) to predict rapid lung function decline in individuals at risk and those with COPD. Machine-learning models were evaluated in a hold-out test set using the area under the receiver operating characteristic curve (AUC), with DeLong's test for comparison. A total of 780 participants were included (n=276 at risk; n=298 Global Initiative for Chronic Obstructive Lung Disease (GOLD) 1 COPD; n=206 GOLD 2+ COPD). For predicting rapid lung function decline in those at risk, the upper-lobe CT model obtained a significantly higher AUC (AUC=0.80) than the lower-lobe CT model (AUC=0.63) and the global model (AUC=0.66; p<0.05). For predicting rapid lung function decline in COPD, there were no significant differences between the upper-lobe (AUC=0.63), lower-lobe (AUC=0.59) and global CT feature models (AUC=0.59; p>0.05). CT features extracted from the upper lobes obtained significantly improved prediction performance compared with globally extracted features for rapid lung function decline in early/mild COPD.
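As a rough illustration of the modelling setup described above, the hedged sketch below trains random-forest models on two hypothetical feature sets (upper-lobe vs. globally extracted CT features, each combined with 12 clinical variables) and compares their hold-out AUCs. All data are random stand-ins, and the study's DeLong test for comparing paired AUCs is not reproduced here.

```python
# Hedged sketch: comparing hold-out AUCs of regional vs. global CT feature models.
# Feature matrices and labels are random stand-ins for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
clinical = rng.random((n, 12))            # demographics, questionnaires, spirometry
ct_upper = rng.random((n, 103))           # upper-lobe CT features (assumed shape)
ct_global = rng.random((n, 103))          # globally extracted CT features
y = (rng.random(n) > 0.7).astype(int)     # 1 = rapid lung-function decline

def holdout_auc(features):
    """Fit a classifier on a train split and score it on a hold-out test set."""
    X = np.hstack([features, clinical])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)
    clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

print("upper-lobe AUC:", holdout_auc(ct_upper), "global AUC:", holdout_auc(ct_global))
```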

Artificial intelligence in bronchoscopy: a systematic review.

Cold KM, Vamadevan A, Laursen CB, Bjerrum F, Singh S, Konge L

PubMed · Apr 1, 2025
Artificial intelligence (AI) systems have been implemented to improve the diagnostic yield and operators' skills within endoscopy. Similar AI systems are now emerging in bronchoscopy. Our objective was to identify and describe AI systems in bronchoscopy. A systematic review was performed using the MEDLINE, Embase and Scopus databases, focusing on two terms: bronchoscopy and AI. All studies had to evaluate their AI against human ratings. The methodological quality of each study was assessed using the Medical Education Research Study Quality Instrument (MERSQI). 1196 studies were identified, with 20 meeting the eligibility criteria. The studies fell into three categories: nine studies in airway anatomy and navigation, seven studies in computer-aided detection and classification of nodules in endobronchial ultrasound, and four studies in rapid on-site evaluation. 16 were assessment studies, with 12 showing equal performance and four showing superior performance of AI compared with human ratings. Four studies within airway anatomy implemented their AI, all favouring AI guidance over no AI guidance. The methodological quality of the studies was moderate (mean MERSQI 12.9 points, out of a maximum of 18 points). 20 studies developed AI systems, but only four examined the implementation of their AI. These four studies were all within airway navigation and favoured AI over no AI in a simulated setting. Future implementation studies are warranted to test the clinical effect of AI systems within bronchoscopy.

YOLOv8 framework for COVID-19 and pneumonia detection using synthetic image augmentation.

A Hasib U, Md Abu R, Yang J, Bhatti UA, Ku CS, Por LY

PubMed · Jan 1, 2025
Early and accurate detection of COVID-19 and pneumonia through medical imaging is critical for effective patient management. This study aims to develop a robust framework that integrates synthetic image augmentation with advanced deep learning (DL) models to address dataset imbalance, improve diagnostic accuracy, and enhance trust in artificial intelligence (AI)-driven diagnoses through Explainable AI (XAI) techniques. The proposed framework benchmarks state-of-the-art models (InceptionV3, DenseNet, ResNet) for initial performance evaluation. Synthetic images are generated using Feature Interpolation through Linear Mapping and principal component analysis to enrich dataset diversity and balance class distribution. YOLOv8 and InceptionV3 models, fine-tuned via transfer learning, are trained on the augmented dataset. Grad-CAM is used for model explainability, while large language models (LLMs) support visualization analysis to enhance interpretability. YOLOv8 achieved superior performance with 97% accuracy, precision, recall, and F1-score, outperforming benchmark models. Synthetic data generation effectively reduced class imbalance and improved recall for underrepresented classes. Comparative analysis demonstrated significant advancements over existing methodologies. XAI visualizations (Grad-CAM heatmaps) highlighted anatomically plausible focus areas aligned with clinical markers of COVID-19 and pneumonia, thereby validating the model's decision-making process. The integration of synthetic data generation, advanced DL, and XAI significantly enhances the detection of COVID-19 and pneumonia while fostering trust in AI systems. YOLOv8's high accuracy, coupled with interpretable Grad-CAM visualizations and LLM-driven analysis, promotes transparency crucial for clinical adoption. Future research will focus on developing a clinically viable, human-in-the-loop diagnostic workflow, further optimizing performance through the integration of transformer-based language models to improve interpretability and decision-making.
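The synthetic-augmentation step can be sketched as follows. Because the paper's exact Feature Interpolation through Linear Mapping procedure is not reproduced here, a simple interpolation between PCA codes of real minority-class images stands in for it, with the image size and component count chosen arbitrarily.

```python
# Hedged sketch of PCA-based synthetic image augmentation for an underrepresented
# class, assuming grayscale chest X-rays resized to 128x128. Linear interpolation
# between PCA codes of two real images stands in for the paper's exact method.
import numpy as np
from sklearn.decomposition import PCA

def synthesize_minority_images(images, n_new, n_components=64, seed=0):
    """images: (N, H, W) array of one class; returns (n_new, H, W) synthetic images."""
    rng = np.random.default_rng(seed)
    n, h, w = images.shape
    flat = images.reshape(n, -1).astype(np.float32)
    pca = PCA(n_components=min(n_components, n)).fit(flat)
    codes = pca.transform(flat)
    out = []
    for _ in range(n_new):
        i, j = rng.choice(n, size=2, replace=False)    # pick two real images
        t = rng.uniform(0.2, 0.8)                      # interpolation weight
        z = (1 - t) * codes[i] + t * codes[j]          # blend their latent codes
        out.append(pca.inverse_transform(z).reshape(h, w))
    return np.clip(np.stack(out), 0.0, 1.0)

# Example with random stand-in data (replace with real minority-class images):
fake_class = np.random.default_rng(1).random((50, 128, 128)).astype(np.float32)
synthetic = synthesize_minority_images(fake_class, n_new=20)
print(synthetic.shape)  # (20, 128, 128)
```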

The Role of Computed Tomography and Artificial Intelligence in Evaluating the Comorbidities of Chronic Obstructive Pulmonary Disease: A One-Stop CT Scanning for Lung Cancer Screening.

Lin X, Zhang Z, Zhou T, Li J, Jin Q, Li Y, Guan Y, Xia Y, Zhou X, Fan L

PubMed · Jan 1, 2025
Chronic obstructive pulmonary disease (COPD) is a major cause of morbidity and mortality worldwide. Comorbidities in patients with COPD significantly increase morbidity, mortality, and healthcare costs, posing a significant burden on the management of COPD. Given the complex clinical manifestations and varying severity of COPD comorbidities, accurate diagnosis and evaluation are particularly important in selecting appropriate treatment options. With the development of medical imaging technology, AI-based chest CT, as a noninvasive imaging modality, provides a detailed assessment of COPD comorbidities. Recent studies have shown that certain radiographic features on chest CT can serve as surrogate markers of comorbidities in COPD patients. CT-based radiomics features provided incremental predictive value over clinical risk factors alone, achieving an AUC of 0.73 for COPD combined with cardiovascular disease (CVD). However, AI has inherent limitations, such as a lack of interpretability, and further research is needed to address them. This review evaluates the progress of AI technology combined with chest CT imaging in COPD comorbidities, including lung cancer, cardiovascular disease, osteoporosis, sarcopenia, excess adipose depots, and pulmonary hypertension, with the aim of improving the understanding of imaging and the management of COPD comorbidities, thereby supporting disease screening, efficacy assessment, and prognostic evaluation.

XLLC-Net: A lightweight and explainable CNN for accurate lung cancer classification using histopathological images.

Jim JR, Rayed ME, Mridha MF, Nur K

PubMed · Jan 1, 2025
Lung cancer imaging plays a crucial role in early diagnosis and treatment, where machine learning and deep learning have significantly advanced the accuracy and efficiency of disease classification. This study introduces the Explainable and Lightweight Lung Cancer Net (XLLC-Net), a streamlined convolutional neural network designed for classifying lung cancer from histopathological images. Using the LC25000 dataset, which includes three lung cancer classes and two colon cancer classes, we focused solely on the three lung cancer classes for this study. XLLC-Net effectively discerns complex disease patterns within these classes. The model consists of four convolutional layers and contains merely 3 million parameters, considerably reducing its computational footprint compared to existing deep learning models. This compact architecture facilitates efficient training, completing each epoch in just 60 seconds. Remarkably, XLLC-Net achieves a classification accuracy of 99.62% ± 0.16%, with precision, recall, and F1 score of 99.33% ± 0.30%, 99.67% ± 0.30%, and 99.70% ± 0.30%, respectively. Furthermore, the integration of Explainable AI techniques, such as Saliency Map and GRAD-CAM, enhances the interpretability of the model, offering clear visual insights into its decision-making process. Our results underscore the potential of lightweight DL models in medical imaging, providing high accuracy and rapid training while ensuring model transparency and reliability.
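For orientation, a minimal PyTorch sketch of a four-convolutional-layer classifier in the spirit of XLLC-Net is shown below. The channel widths, 128x128 input size, and dense head are assumptions chosen only to land near the reported ~3 million parameters; they are not the published architecture.

```python
# Hedged sketch of a lightweight 4-convolutional-layer classifier for the three
# LC25000 lung-cancer classes. All layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class LightweightLungCNN(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),    # 128 -> 64
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 16 -> 8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = LightweightLungCNN()
print(sum(p.numel() for p in model.parameters()))  # roughly 2-3 million parameters
x = torch.randn(1, 3, 128, 128)
print(model(x).shape)  # torch.Size([1, 3])
```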

A novel spectral transformation technique based on special functions for improved chest X-ray image classification.

Aljohani A

PubMed · Jan 1, 2025
Chest X-ray image classification plays an important role in medical diagnostics. Machine learning algorithms have enhanced the performance of these classification tasks by introducing advanced techniques. Such algorithms often require converting medical data to another space in which the original data are reduced to important values or moments. We developed a mechanism that converts a given medical image to a spectral space whose basis is composed of special functions. In this study, we propose a chest X-ray image classification method based on spectral coefficients. The spectral coefficients are derived from an orthogonal system of Legendre-type smooth polynomials. We developed the mathematical theory to calculate spectral moments in the Legendre polynomial space and used these moments to train traditional classifiers, such as SVM and random forest, for the classification task. The procedure was applied to a recent dataset of X-ray images composed of three classes of patients: normal, COVID-infected, and pneumonia. The moments designed in this study, when used with SVM or random forest, improve the ability to classify a given X-ray image with high accuracy. A parametric study of the proposed approach is presented, and the performance of the spectral moments is evaluated with the support vector machine and random forest algorithms. The efficiency and accuracy of the proposed method are presented in detail. All simulations were performed in Matlab and Python: image preprocessing and spectral moment generation in Matlab, and the classifiers in Python. The proposed approach works well and provides satisfactory results (0.975 accuracy); however, further studies are required to establish a more accurate and faster version of this approach.
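A minimal sketch of the core idea, computing 2D Legendre spectral moments of an image and feeding them to an SVM, is shown below. The paper's exact normalization and moment order are not specified here, so the values are illustrative.

```python
# Hedged sketch: 2D Legendre spectral moments of a grayscale chest X-ray used as
# features for a classical classifier. Moment order and image size are assumptions.
import numpy as np
from numpy.polynomial import legendre
from sklearn.svm import SVC

def legendre_moments(img, order=10):
    """Project an (H, W) image onto products P_q(y) * P_p(x), p, q <= order."""
    h, w = img.shape
    y = np.linspace(-1.0, 1.0, h)
    x = np.linspace(-1.0, 1.0, w)
    Py = legendre.legvander(y, order)   # (H, order+1) values of P_q(y)
    Px = legendre.legvander(x, order)   # (W, order+1) values of P_p(x)
    # Moment matrix M[q, p] = sum_y sum_x P_q(y) * img[y, x] * P_p(x)
    M = Py.T @ img @ Px
    return M.ravel()

# Stand-in data: random "images" for two classes; replace with real X-ray arrays.
rng = np.random.default_rng(0)
X = np.stack([legendre_moments(rng.random((64, 64))) for _ in range(40)])
y = np.array([0] * 20 + [1] * 20)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```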

Metal artifact reduction combined with deep learning image reconstruction algorithm for CT image quality optimization: a phantom study.

Zou H, Wang Z, Guo M, Peng K, Zhou J, Zhou L, Fan B

PubMed · Jan 1, 2025
This study aimed to evaluate the effects of the smart metal artifact reduction (MAR) algorithm and combinations of various scanning parameters, including radiation dose level, tube voltage, and reconstruction algorithm, on metal artifact reduction and overall image quality, in order to identify the optimal protocol for clinical application. A phantom with a pacemaker was examined using a standard dose (effective dose (ED): 3 mSv) and a low dose (ED: 0.5 mSv), with three scan voltages (70, 100, and 120 kVp) selected for each dose. Raw data were reconstructed using 50% adaptive statistical iterative reconstruction-V (ASIR-V), ASIR-V with MAR, high-strength deep learning image reconstruction (DLIR-H), and DLIR-H with MAR. Quantitative analyses (artifact index (AI), noise, signal-to-noise ratio (SNR) of artifact-impaired pulmonary nodules (PNs), and noise power spectrum (NPS) of artifact-free regions) and qualitative evaluation were performed. Quantitatively, the deep learning image reconstruction (DLIR) algorithm and high tube voltages exhibited lower noise compared with ASIR-V and low tube voltages (p < 0.001). The AI of images with MAR or high tube voltages was significantly lower than that of images without MAR or with low tube voltages (p < 0.001). No significant difference was observed in AI between low-dose images with 120 kVp DLIR-H MAR and standard-dose images with 70 kVp ASIR-V MAR (p = 0.143). Only the 70 kVp 3 mSv protocol demonstrated statistically significant differences in SNR for artifact-impaired PNs (p = 0.041). The f-peak and f-avg values of the NPS were similar across the various scenarios, indicating that the MAR algorithm did not alter the image texture in artifact-free regions. The qualitative results for the extent of metal artifacts, the confidence in diagnosing artifact-impaired PNs, and the overall image quality were generally consistent with the quantitative results. The MAR algorithm combined with DLIR-H can reduce metal artifacts and enhance overall image quality, particularly at high tube voltages.
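The ROI-based metrics can be sketched as below, using one commonly used phantom-study definition of the artifact index, AI = sqrt(SD_artifact^2 - SD_reference^2), and a simple mean-over-SD signal-to-noise ratio; the paper's exact ROI placement and formulas may differ.

```python
# Hedged sketch of ROI-based CT image-quality metrics. The artifact-index and SNR
# definitions below are common conventions, not necessarily the paper's exact ones.
import numpy as np

def artifact_index(roi_artifact, roi_reference):
    """ROIs are arrays of HU values; the reference ROI is artifact-free."""
    diff = np.var(roi_artifact) - np.var(roi_reference)
    return float(np.sqrt(max(diff, 0.0)))

def snr(roi_signal, roi_background):
    return float(np.mean(roi_signal) / np.std(roi_background))

# Stand-in HU values for illustration only.
rng = np.random.default_rng(0)
ref = rng.normal(-850, 10, 500)      # artifact-free lung background
art = rng.normal(-850, 35, 500)      # same region degraded by metal artifact
nodule = rng.normal(-650, 12, 200)   # artifact-impaired pulmonary nodule
print(artifact_index(art, ref), snr(nodule, ref))
```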

Auxiliary Diagnosis of Pulmonary Nodules' Benignancy and Malignancy Based on Machine Learning: A Retrospective Study.

Wang W, Yang B, Wu H, Che H, Tong Y, Zhang B, Liu H, Chen Y

PubMed · Jan 1, 2025
Lung cancer, one of the most lethal malignancies globally, often presents insidiously as pulmonary nodules. Its nonspecific clinical presentation and heterogeneous imaging characteristics hinder accurate differentiation between benign and malignant lesions, while the invasiveness and procedural constraints of biopsy underscore the critical need for non-invasive early diagnostic approaches. In this retrospective study, we analyzed outpatient and inpatient records from the First Medical Center of Chinese PLA General Hospital between 2011 and 2021, focusing on pulmonary nodules measuring 5-30 mm on CT scans without overt signs of malignancy. Pathological examination served as the reference standard. Comparative experiments evaluated SVM, RF, XGBoost, FNN, and Atten_FNN using five-fold cross-validation to assess AUC, sensitivity, and specificity. The dataset was split 70%/30%, and stratified five-fold cross-validation was applied to the training set. The optimal model was interpreted with SHAP to identify the most influential predictive features. This study enrolled 3355 patients, including 1156 with benign and 2199 with malignant pulmonary nodules. The Atten_FNN model demonstrated superior performance in five-fold cross-validation, achieving an AUC of 0.82, accuracy of 0.75, sensitivity of 0.77, and F1 score of 0.80. SHAP analysis revealed key predictive factors: demographic variables (age, sex, BMI), CT-derived features (maximum nodule diameter, morphology, density, calcification, ground-glass opacity), and laboratory biomarkers (neuroendocrine markers, carcinoembryonic antigen). This study integrates electronic medical records and pathology data to predict pulmonary nodule malignancy using machine/deep learning models, and SHAP-based interpretability analysis uncovered key clinical determinants. Acknowledging limitations in cross-center generalizability, we propose the development of a multimodal diagnostic system that combines CT imaging and radiomics, to be validated in multi-center prospective cohorts to facilitate clinical translation. This framework establishes a novel paradigm for early precision diagnosis of lung cancer.
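A hedged sketch of the comparison workflow, a 70/30 split with stratified five-fold cross-validation over several classifiers followed by SHAP interpretation, is given below; the features, labels, and models are stand-ins, and the study's Atten_FNN (an attention-based neural network) is not reproduced.

```python
# Hedged sketch of classifier comparison with stratified 5-fold CV; data are random
# stand-ins for the clinical, CT-derived, and laboratory features described above.
import numpy as np
from sklearn.model_selection import train_test_split, StratifiedKFold, cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((400, 12))                 # stand-in feature matrix
y = (rng.random(400) > 0.4).astype(int)   # stand-in malignancy label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in [("SVM", SVC(probability=True)),
                    ("RF", RandomForestClassifier(n_estimators=200, random_state=0))]:
    aucs = cross_val_score(model, X_tr, y_tr, cv=cv, scoring="roc_auc")
    print(name, aucs.mean().round(3))

# SHAP interpretation of the best tree-based model (requires the `shap` package):
# import shap
# rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
# explainer = shap.TreeExplainer(rf)
# shap.summary_plot(explainer.shap_values(X_te), X_te)
```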

Improving lung cancer diagnosis and survival prediction with deep learning and CT imaging.

Wang X, Sharpnack J, Lee TCM

PubMed · Jan 1, 2025
Lung cancer is a major cause of cancer-related deaths, and early diagnosis and treatment are crucial for improving patients' survival outcomes. In this paper, we propose to employ convolutional neural networks to model the non-linear relationship between the risk of lung cancer and the lungs' morphology revealed in CT images. We apply a mini-batched loss that extends the Cox proportional hazards model to handle the non-convexity induced by neural networks, which also enables training on large data sets. Additionally, we propose to combine the mini-batched loss with binary cross-entropy to predict both lung cancer occurrence and the risk of mortality. Simulation results demonstrate the effectiveness of the mini-batched loss with and without the censoring mechanism, as well as its combination with binary cross-entropy. We evaluate our approach on the National Lung Screening Trial data set with several 3D convolutional neural network architectures, achieving high AUC and C-index scores for lung cancer classification and survival prediction. These results, obtained from simulations and real data experiments, highlight the potential of our approach to improve the diagnosis and treatment of lung cancer.
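The mini-batched Cox loss and its combination with binary cross-entropy can be sketched in PyTorch as follows; the loss weighting and the toy batch are assumptions, and the 3D CNN that would produce the risk scores is omitted.

```python
# Hedged sketch of a mini-batch Cox partial-likelihood loss combined with binary
# cross-entropy, in the spirit of the approach above (Breslow-style ties handling).
import torch

def cox_ph_loss(risk, time, event):
    """Negative partial log-likelihood within one mini-batch.
    risk: (B,) predicted log-risk; time: (B,) follow-up time; event: (B,) 1 if death."""
    order = torch.argsort(time, descending=True)       # longest follow-up first
    risk, event = risk[order], event[order]
    log_cum_hazard = torch.logcumsumexp(risk, dim=0)   # log-sum-exp over the risk set
    ll = (risk - log_cum_hazard) * event               # only events contribute
    return -ll.sum() / event.sum().clamp(min=1.0)

def combined_loss(risk, logits, time, event, cancer_label, alpha=0.5):
    """Survival loss plus BCE for cancer occurrence; alpha is an assumed weight."""
    bce = torch.nn.functional.binary_cross_entropy_with_logits(logits, cancer_label)
    return alpha * cox_ph_loss(risk, time, event) + (1 - alpha) * bce

# Toy batch (replace risk/logits with outputs of a 3D CNN on CT volumes):
B = 8
risk, logits = torch.randn(B), torch.randn(B)
time = torch.rand(B) * 5.0
event = torch.randint(0, 2, (B,)).float()
cancer_label = torch.randint(0, 2, (B,)).float()
print(combined_loss(risk, logits, time, event, cancer_label))
```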

Enhancing Disease Detection in Radiology Reports Through Fine-tuning Lightweight LLM on Weak Labels.

Wei Y, Wang X, Ong H, Zhou Y, Flanders A, Shih G, Peng Y

PubMed · Jan 1, 2025
Despite significant progress in applying large language models (LLMs) to the medical domain, several limitations still prevent them from practical application. Among these are constraints on model size and the lack of cohort-specific labeled datasets. In this work, we investigated the potential of improving a lightweight LLM, such as Llama 3.1-8B, through fine-tuning with datasets using synthetic labels. Two tasks are jointly trained by combining their respective instruction datasets. When the quality of the task-specific synthetic labels is relatively high (e.g., generated by GPT-4o), Llama 3.1-8B achieves satisfactory performance on the open-ended disease detection task, with a micro F1 score of 0.91. Conversely, when the quality of the task-relevant synthetic labels is relatively low (e.g., from the MIMIC-CXR dataset), the fine-tuned Llama 3.1-8B is able to surpass its noisy teacher labels (micro F1 score of 0.67 vs 0.63) when calibrated against curated labels, indicating the strong inherent capability of the model. These findings demonstrate the potential of fine-tuning LLMs with synthetic labels, offering a promising direction for future research on LLM specialization in the medical domain.
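For reference, a minimal sketch of the micro-F1 evaluation for open-ended disease detection, treating each report's predicted and curated disease mentions as label sets, might look like this (label strings are illustrative only):

```python
# Hedged sketch: micro F1 for open-ended disease detection, comparing model-predicted
# disease mentions against curated reference labels per radiology report.
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.metrics import f1_score

reference = [{"pneumonia"}, {"pleural effusion", "cardiomegaly"}, set()]
predicted = [{"pneumonia"}, {"pleural effusion"}, {"atelectasis"}]

mlb = MultiLabelBinarizer().fit(reference + predicted)
y_true = mlb.transform(reference)
y_pred = mlb.transform(predicted)
print(f1_score(y_true, y_pred, average="micro"))  # micro F1 across all disease mentions
```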