
[Advances in the application of artificial intelligence for pulmonary function assessment based on chest imaging in thoracic surgery].

Huang LC, Liang HR, Jiang Y, Lin YC, He JX

PubMed, Sep 27 2025
In recent years, lung function assessment has attracted increasing attention in the perioperative management of thoracic surgery. However, traditional pulmonary function testing methods remain limited in clinical practice due to high equipment requirements and complex procedures. With the rapid development of artificial intelligence (AI) technology, lung function assessment based on multimodal chest imaging (such as X-rays, CT, and MRI) has become a new research focus. Through deep learning algorithms, AI models can accurately extract imaging features of patients and have made significant progress in quantitative analysis of pulmonary ventilation, evaluation of diffusion capacity, measurement of lung volumes, and prediction of lung function decline. Previous studies have demonstrated that AI models perform well in predicting key indicators such as forced expiratory volume in one second (FEV1), diffusing capacity for carbon monoxide (DLCO), and total lung capacity (TLC). Despite these promising prospects, challenges remain in clinical translation, including insufficient data standardization, limited model interpretability, and the lack of prediction models for postoperative complications. In the future, greater emphasis should be placed on multicenter collaboration, the construction of high-quality databases, the promotion of multimodal data integration, and clinical validation to further enhance the application value of AI technology in precision decision-making for thoracic surgery.
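
To make the imaging-to-lung-function mapping concrete, here is a minimal sketch (not from the review) of a 3D CNN that regresses FEV1, DLCO, and TLC from a chest CT volume; the architecture, input shape, and three-output head are illustrative assumptions.

```python
# Hypothetical sketch: a small 3D CNN regressing lung-function indices from CT.
# Layer sizes and the three-target head are assumptions, not a published model.
import torch
import torch.nn as nn

class LungFunctionRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(64, 3)  # predicts [FEV1, DLCO, TLC]

    def forward(self, ct_volume):                    # (B, 1, D, H, W)
        x = self.features(ct_volume).flatten(1)
        return self.head(x)

model = LungFunctionRegressor()
pred = model(torch.randn(2, 1, 64, 128, 128))        # two dummy CT volumes
print(pred.shape)                                    # torch.Size([2, 3])
```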

COVID-19 Pneumonia Diagnosis Using Medical Images: Deep Learning-Based Transfer Learning Approach.

Dharmik A

PubMed, Sep 26 2025
SARS-CoV-2, the causative agent of COVID-19, remains a global health concern due to its high transmissibility and evolving variants. Although vaccination efforts and therapeutic advancements have mitigated disease severity, emerging mutations continue to challenge diagnostics and containment strategies. As of mid-February 2025, global test positivity has risen to 11%, marking the highest level in over 6 months, despite widespread immunization efforts. Newer variants demonstrate enhanced host cell binding, increasing both infectivity and diagnostic complexity. This study aimed to evaluate the effectiveness of deep transfer learning in delivering a rapid, accurate, and mutation-resilient COVID-19 diagnosis from medical imaging, with a focus on scalability and accessibility. An automated detection system was developed using state-of-the-art convolutional neural networks, including VGG16 (Visual Geometry Group network-16 layers), ResNet50 (residual network-50 layers), ConvNeXtTiny (convolutional next-tiny), MobileNet (mobile network), NASNetMobile (neural architecture search network-mobile version), and DenseNet121 (densely connected convolutional network-121 layers), to detect COVID-19 from chest X-ray and computed tomography (CT) images. Among all the models evaluated, DenseNet121 emerged as the best-performing architecture for COVID-19 diagnosis using X-ray and CT images. It achieved an impressive accuracy of 98%, with a precision of 96.9%, a recall of 98.9%, an F1-score of 97.9%, and an area under the curve score of 99.8%, indicating a high degree of consistency and reliability in detecting both positive and negative cases. The confusion matrix showed minimal false positives and false negatives, underscoring the model's robustness in real-world diagnostic scenarios. Given its performance, DenseNet121 is a strong candidate for deployment in clinical settings and serves as a benchmark for future improvements in artificial intelligence-assisted diagnostic tools. The study results underscore the potential of artificial intelligence-powered diagnostics in supporting early detection and global pandemic response. With careful optimization, deep learning models can address critical gaps in testing, particularly in settings constrained by limited resources or emerging variants.
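
The paper's code is not reproduced here, but a plausible sketch of the transfer-learning setup it describes is an ImageNet-pretrained DenseNet121 with its classifier replaced by a two-class head and the convolutional features frozen; the learning rate and freezing strategy below are assumptions, not the authors' settings.

```python
# Hedged sketch of DenseNet121 transfer learning for COVID vs. non-COVID CXR.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
backbone.classifier = nn.Linear(backbone.classifier.in_features, 2)

for p in backbone.features.parameters():   # freeze pretrained conv features
    p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in backbone.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()

# one illustrative training step on dummy data
x, y = torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 1, 0])
loss = criterion(backbone(x), y)
loss.backward()
optimizer.step()
```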

Pathomics-based machine learning models for optimizing LungPro navigational bronchoscopy in peripheral lung lesion diagnosis: a retrospective study.

Ying F, Bao Y, Ma X, Tan Y, Li S

PubMed, Sep 26 2025
To construct a pathomics-based machine learning model to enhance the diagnostic efficacy of LungPro navigational bronchoscopy for peripheral pulmonary lesions and to optimize the management strategy for LungPro-diagnosed negative lesions. Clinical data and hematoxylin and eosin (H&E)-stained whole slide images (WSIs) were collected from 144 consecutive patients undergoing LungPro virtual bronchoscopy at a single institution between January 2022 and December 2023. Patients were stratified into diagnosis-positive and diagnosis-negative cohorts based on histopathological or etiological confirmation. An artificial intelligence (AI) model was developed and validated using 94 diagnosis-positive cases. Logistic regression (LR) identified associations between clinical/imaging characteristics and malignant pulmonary lesion risk factors. We implemented a convolutional neural network (CNN) with weakly supervised learning to extract image-level features, followed by multiple instance learning (MIL) for patient-level feature aggregation. Multiple machine learning (ML) algorithms were applied to model the extracted features. A multimodal diagnostic framework integrating clinical, imaging, and pathomics data was subsequently developed and evaluated on 50 LungPro-negative patients to assess its diagnostic performance and predictive validity. Univariable and multivariable logistic regression analyses identified age, lesion boundary, and mean computed tomography (CT) attenuation as independent risk factors for malignant peripheral pulmonary lesions (P < 0.05). A histopathological model using a MIL fusion strategy showed strong diagnostic performance for lung cancer, with area under the curve (AUC) values of 0.792 (95% CI 0.680-0.903) in the training cohort and 0.777 (95% CI 0.531-1.000) in the test cohort. Combining predictive clinical features with pathological characteristics enhanced the diagnostic yield for peripheral pulmonary lesions to 0.848 (95% CI 0.6945-1.0000). In patients with initially negative LungPro biopsy results, the model identified 20 of 28 malignant lesions (sensitivity: 71.43%) and 15 of 22 benign lesions (specificity: 68.18%). Class activation mapping (CAM) validated the model by highlighting key malignant features, including conspicuous nucleoli and nuclear atypia. The fusion diagnostic model that incorporates clinical and pathomic features markedly enhances the diagnostic accuracy of LungPro in this retrospective cohort. This model aids in the detection of subtle malignant characteristics, thereby offering evidence to support precise and targeted therapeutic interventions for lesions that LungPro classifies as negative in clinical settings.
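
The patient-level aggregation step can be illustrated with attention-based MIL pooling (in the style of Ilse et al.); the feature dimension, hidden size, and single-logit head below are assumptions rather than the authors' exact design.

```python
# Hedged sketch: attention-MIL pooling of WSI patch features into one
# patient-level representation, then a malignant-vs-benign logit.
import torch
import torch.nn as nn

class AttentionMILPool(nn.Module):
    def __init__(self, dim=512, hidden=128):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))
        self.classifier = nn.Linear(dim, 1)

    def forward(self, patch_feats):        # (N_patches, dim) for one patient
        w = torch.softmax(self.attn(patch_feats), dim=0)   # weight per patch
        bag = (w * patch_feats).sum(dim=0)                 # patient-level feature
        return self.classifier(bag), w

pool = AttentionMILPool()
logit, weights = pool(torch.randn(200, 512))  # 200 patches, 512-d features
```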

MedIENet: medical image enhancement network based on conditional latent diffusion model.

Yuan W, Feng Y, Wen T, Luo G, Liang J, Sun Q, Liang S

PubMed, Sep 26 2025
Deep learning necessitates a substantial amount of data, yet obtaining sufficient medical images is difficult due to concerns about patient privacy and high collection costs. To address this issue, we propose a conditional latent diffusion model-based medical image enhancement network, referred to as the Medical Image Enhancement Network (MedIENet). To meet the rigorous standards required for image generation in the medical imaging field, a multi-attention module is incorporated in the encoder of the denoising U-Net backbone. Additionally, Rotary Position Embedding (RoPE) is integrated into the self-attention module to effectively capture positional information, while cross-attention is utilised to integrate class information into the diffusion process. MedIENet is evaluated on three datasets: Chest CT-Scan images, Chest X-Ray Images (Pneumonia), and the Tongue dataset. Compared to existing methods, MedIENet demonstrates superior performance in both fidelity and diversity of the generated images. Experimental results indicate that for downstream classification tasks using ResNet50, the Area Under the Receiver Operating Characteristic curve (AUROC) achieved with real data alone is 0.76 for the Chest CT-Scan images dataset, 0.87 for the Chest X-Ray Images (Pneumonia) dataset, and 0.78 for the Tongue dataset. When using mixed data consisting of real data and generated data, the AUROC improves to 0.82, 0.94, and 0.82, respectively, reflecting increases of approximately 6%, 7%, and 4%. These findings indicate that the images generated by MedIENet can enhance the performance of downstream classification tasks, providing an effective solution to the scarcity of medical image training data.
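
A hedged sketch of the RoPE mechanism the abstract integrates into self-attention: each query/key dimension pair is rotated by a position-dependent angle before the attention product. Tensor shapes and the 10000 frequency base follow the standard RoPE convention, not details confirmed by the paper.

```python
# Minimal RoPE applied to queries and keys before scaled dot-product attention.
import torch

def rope(x):                  # x: (B, heads, seq, head_dim), head_dim even
    d = x.shape[-1]
    pos = torch.arange(x.shape[-2], dtype=torch.float32)
    freq = 1.0 / (10000 ** (torch.arange(0, d, 2, dtype=torch.float32) / d))
    angle = pos[:, None] * freq[None, :]            # (seq, d/2)
    cos, sin = angle.cos(), angle.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin            # rotate each 2-D pair
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

q = rope(torch.randn(1, 8, 64, 32))   # queries with positional rotation
k = rope(torch.randn(1, 8, 64, 32))   # keys with the same rotation
attn = torch.softmax(q @ k.transpose(-2, -1) / 32 ** 0.5, dim=-1)
```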

Enhanced CoAtNet based hybrid deep learning architecture for automated tuberculosis detection in human chest X-rays.

Siddharth G, Ambekar A, Jayakumar N

PubMed, Sep 26 2025
Tuberculosis (TB) is a serious infectious disease that remains a global health challenge. While chest X-rays (CXRs) are widely used for TB detection, manual interpretation can be subjective and time-consuming. Automated classification of CXRs into TB and non-TB cases can significantly support healthcare professionals in timely and accurate diagnosis. This paper introduces a hybrid deep learning approach for classifying CXR images. The solution is based on the CoAtNet framework, which combines the strengths of Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs). The model is pre-trained on the large-scale ImageNet dataset to ensure robust generalization across diverse images. The evaluation is conducted on the IN-CXR tuberculosis dataset from ICMR-NIRT, which contains a comprehensive collection of CXR images of both normal and abnormal categories. The hybrid model achieves a binary classification accuracy of 86.39% and an ROC-AUC score of 93.79%, outperforming tested baseline models that rely exclusively on either CNNs or ViTs when trained on this dataset. Furthermore, the integration of Local Interpretable Model-agnostic Explanations (LIME) enhances the interpretability of the model's predictions. This combination of reliable performance and transparent, interpretable results strengthens the model's role in AI-driven medical imaging research. Code will be made available upon request.
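
A minimal conv-then-attention hybrid in the spirit of CoAtNet (not the authors' implementation, which is available only on request): early convolutional stages capture local CXR texture, and a transformer stage models global context before the binary TB/non-TB head. All sizes are illustrative.

```python
# Hedged sketch of a CNN + ViT hybrid for binary CXR classification.
import torch
import torch.nn as nn

class MiniCoAtNet(nn.Module):
    def __init__(self, dim=64, num_classes=2):
        super().__init__()
        self.conv = nn.Sequential(                  # CNN stage: local features
            nn.Conv2d(1, dim, 3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.GELU(),
        )
        enc = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc, num_layers=2)  # global stage
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                           # (B, 1, H, W) CXR
        f = self.conv(x)                            # (B, dim, H/4, W/4)
        tokens = f.flatten(2).transpose(1, 2)       # (B, seq, dim)
        return self.head(self.transformer(tokens).mean(dim=1))

logits = MiniCoAtNet()(torch.randn(2, 1, 128, 128))
```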

Intratumoral heterogeneity score enhances invasiveness prediction in pulmonary ground-glass nodules via stacking ensemble machine learning.

Zuo Z, Zeng Y, Deng J, Lin S, Qi W, Fan X, Feng Y

PubMed, Sep 26 2025
The preoperative differentiation of adenocarcinomas in situ, minimally invasive adenocarcinoma, and invasive adenocarcinoma using computed tomography (CT) is crucial for guiding clinical management decisions. However, accurately classifying ground-glass nodules poses a significant challenge. Incorporating quantitative intratumoral heterogeneity scores may improve the accuracy of this ternary classification. In this multicenter retrospective study, we developed ternary classification models by leveraging insights from both base and stacking ensemble machine learning models, incorporating intratumoral heterogeneity scores along with clinical-radiological features to distinguish adenocarcinomas in situ, minimally invasive adenocarcinoma, and invasive adenocarcinoma. The machine learning models were trained, and final model selection depended on maximizing the macro-average area under the curve (macro-AUC) in both the internal and external validation sets. Data from 802 patients from three centers were divided into a training set (n = 477) and an internal test set (n = 205), in a 7:3 ratio, with an additional external validation set comprising 120 patients. The stacking classifier exhibited superior performance relative to the other models, achieving macro-AUC values of 0.7850 and 0.7717 for the internal and external validation sets, respectively. Moreover, an interpretability analysis utilizing the Shapley Additive Explanation identified four key features of this ternary classification: intratumoral heterogeneity score, nodule size, nodule type, and age. The stacking classifier, recognized as the optimal algorithm for integrating the intratumoral heterogeneity score and clinical-radiological features, effectively served as a ternary classification model for assessing the invasiveness of lung adenocarcinoma in chest CT images. Our stacking classifier integrated intratumoral heterogeneity scores and clinical-radiological features to improve the ternary classification of lung adenocarcinoma invasiveness (adenocarcinomas in situ/minimally invasive adenocarcinoma/invasive adenocarcinoma), aiding in precise diagnosis and clinical decision-making for ground-glass nodules. The intratumoral heterogeneity score effectively assessed the invasiveness of lung adenocarcinoma. The stacking classifier outperformed other methods for this ternary classification task. Intratumoral heterogeneity score, nodule size, nodule type, and age predict invasiveness.
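
The stacking design maps naturally onto scikit-learn's StackingClassifier, scored by the macro-average one-vs-rest AUC the study maximizes. The sketch below uses synthetic stand-ins for the four key features and arbitrary base learners; the authors' actual base models and preprocessing are not specified here.

```python
# Hedged sketch: stacking ensemble for the AIS/MIA/IAC ternary task,
# evaluated with macro-average one-vs-rest AUC. Data are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X = np.random.rand(802, 4)        # heterogeneity score, nodule size/type, age
y = np.random.randint(0, 3, 802)  # 0 = AIS, 1 = MIA, 2 = IAC

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_tr, y_tr)
proba = stack.predict_proba(X_te)
print("macro-AUC:", roc_auc_score(y_te, proba, multi_class="ovr", average="macro"))
```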

NextGen lung disease diagnosis with explainable artificial intelligence.

Veeramani N, S A RS, S SP, S S, Jayaraman P

PubMed, Sep 26 2025
The COVID-19 pandemic has been the most catastrophic global health emergency of the 21st century, resulting in hundreds of millions of reported cases and five million deaths. Chest X-ray (CXR) images are highly valuable for early detection of lung diseases when monitoring and investigating pulmonary disorders such as COVID-19, pneumonia, and tuberculosis. These CXR images offer crucial features about the lung's health condition and can assist in making accurate diagnoses. Manual interpretation of CXR images is challenging even for expert radiologists due to overlapping radiological features, so artificial intelligence (AI)-based image processing has taken on a central role in healthcare. However, it remains difficult to trust the predictions of a black-box AI model; this can be addressed by implementing explainable artificial intelligence (XAI) tools that transform a black-box AI into a glass-box model. In this research article, we propose a novel XAI-TRANS model with inception-based transfer learning that addresses the challenge of overlapping features in multiclass classification of CXR images, along with an improved U-Net lung segmentation model dedicated to extracting the radiological features used for classification. The proposed approach achieved a maximum precision of 98% and accuracy of 97% in multiclass lung disease classification, an evident improvement of 4.75%, and leverages XAI techniques, specifically LIME and Grad-CAM, to provide detailed and accurate explanations for the model's predictions.
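
Of the two XAI tools named, Grad-CAM is easy to sketch with forward/backward hooks: gradients of the predicted class with respect to the last convolutional feature map weight the activations into a coarse heatmap over the image. The ResNet-18 backbone below is a stand-in assumption, since the XAI-TRANS architecture itself is not public.

```python
# Hedged Grad-CAM sketch on a stand-in backbone (not the paper's model).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).eval()
feats, grads = {}, {}
layer = model.layer4
layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in CXR tensor
score = model(x).max()                               # top-class logit
score.backward()

weights = grads["g"].mean(dim=(2, 3), keepdim=True)  # GAP of gradients
cam = F.relu((weights * feats["a"]).sum(dim=1))      # weighted activations
cam = F.interpolate(cam[None], size=x.shape[-2:], mode="bilinear")[0, 0]
print(cam.shape)                                     # heatmap over the input
```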

A Deep Learning-Based EffConvNeXt Model for Automatic Classification of Cystic Bronchiectasis: An Explainable AI Approach.

Tekin V, Tekinhatun M, Özçelik STA, Fırat H, Üzen H

PubMed, Sep 25 2025
Cystic bronchiectasis and pneumonia are respiratory conditions that significantly impact morbidity and mortality worldwide. Because they present with overlapping features on chest X-rays (CXRs), accurate diagnosis is challenging, yet early detection can greatly improve patient outcomes. Recent advancements in deep learning (DL) have improved diagnostic accuracy in medical imaging. This study proposes the EffConvNeXt model, a hybrid approach combining EfficientNetB1 and ConvNeXtTiny, designed to enhance classification accuracy for cystic bronchiectasis, pneumonia, and normal cases in CXRs. The model balances EfficientNetB1's efficiency with ConvNeXtTiny's advanced feature extraction, allowing better identification of complex patterns in CXR images, and the combination addresses the limitations of each model individually: EfficientNetB1's SE blocks improve focus on critical image areas while keeping the model lightweight and fast, and ConvNeXtTiny enhances detection of subtle abnormalities, making the combined model highly effective for rapid and accurate CXR image analysis in clinical settings. For the performance analysis of the EffConvNeXt model, experimental studies were conducted using 5899 CXR images collected from Dicle University Medical Faculty. When used individually, ConvNeXtTiny achieved an accuracy rate of 97.12%, while EfficientNetB1 reached 97.79%. Combining both models, EffConvNeXt raised the accuracy to 98.25%, a 0.46% improvement that outperformed the other DL models tested. These findings indicate that EffConvNeXt provides a reliable, automated solution for distinguishing cystic bronchiectasis and pneumonia, supporting clinical decision-making with enhanced diagnostic accuracy.
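
A hedged sketch of the dual-backbone fusion idea: pooled features from torchvision's EfficientNet-B1 and ConvNeXt-Tiny are concatenated before a shared three-class head (cystic bronchiectasis / pneumonia / normal). The concatenation fusion and head are assumptions; the paper's exact EffConvNeXt wiring may differ.

```python
# Hedged sketch of an EfficientNet-B1 + ConvNeXt-Tiny feature-fusion hybrid.
import torch
import torch.nn as nn
from torchvision import models

class EffConvNeXtSketch(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.eff = models.efficientnet_b1(weights="IMAGENET1K_V1").features
        self.cnx = models.convnext_tiny(weights="IMAGENET1K_V1").features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(1280 + 768, num_classes)  # B1 + Tiny feature dims

    def forward(self, x):                               # (B, 3, H, W) CXR
        a = self.pool(self.eff(x)).flatten(1)           # (B, 1280)
        b = self.pool(self.cnx(x)).flatten(1)           # (B, 768)
        return self.head(torch.cat([a, b], dim=1))      # fused 3-class logits

logits = EffConvNeXtSketch()(torch.randn(2, 3, 224, 224))
```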

The identification and severity staging of chronic obstructive pulmonary disease using quantitative CT parameters, radiomics features, and deep learning features.

Feng S, Zhang W, Zhang R, Yang Y, Wang F, Miao C, Chen Z, Yang K, Yao Q, Liang Q, Zhao H, Chen Y, Liang C, Liang X, Chen R, Liang Z

PubMed, Sep 25 2025
To evaluate the value of quantitative CT (QCT) parameters, radiomics features, and deep learning (DL) features based on inspiratory and expiratory CT for the identification and severity staging of chronic obstructive pulmonary disease (COPD). This retrospective analysis included 223 COPD patients and 59 healthy controls from the Guangzhou cohort. We stratified the participants into a training cohort and a testing cohort (7:3) and extracted DL features with the VGG-16 method, radiomics features with the pyradiomics package, and QCT parameters with the NeuLungCARE software. Logistic regression was employed to construct models for the identification and severity staging of COPD. The Shenzhen cohort was used as the external validation cohort to assess the generalizability of the models. Among the COPD identification models, Model 5-B1 (the QCT combined with DL model in biphasic CT) showed the best predictive performance, with AUCs of 0.920 and 0.897 in the testing cohort and external validation cohort, respectively. Among the COPD severity staging models, the predictive performance of Model 4-B2 (the model combining QCT with radiomics features in biphasic CT) and Model 5-B2 (the model combining QCT with DL features in biphasic CT) was superior to that of the other models. This biphasic CT-based multi-modal approach integrating QCT, radiomics, or DL features offers a clinically valuable tool for COPD identification and severity staging.
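
The fusion-then-logistic-regression modeling step can be sketched as below, with synthetic stand-ins for the QCT parameters and pooled VGG-16 features; feature counts and preprocessing are assumptions.

```python
# Hedged sketch: concatenate QCT parameters with DL features and fit
# logistic regression for COPD vs. control. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

qct = np.random.rand(282, 10)          # e.g., emphysema %, air-trapping indices
dl = np.random.rand(282, 64)           # pooled VGG-16 features per patient
y = np.r_[np.ones(223), np.zeros(59)]  # 223 COPD patients, 59 controls

X = np.hstack([qct, dl])               # QCT + DL feature fusion
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```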

Multimodal text guided network for chest CT pneumonia classification.

Feng Y, Huang G, Ju F, Cui H

PubMed, Sep 25 2025
Pneumonia is a prevalent and serious respiratory disease, responsible for substantial morbidity and mortality globally. With advancements in deep learning, the automatic diagnosis of pneumonia has attracted significant research attention in medical image classification. However, current methods still face several challenges. First, since lesions are often visible in only a few slices, slice-based classification algorithms may overlook critical spatial contextual information in CT sequences, and slice-level annotations are labor-intensive. Moreover, chest CT sequence-based pneumonia classification algorithms that rely solely on sequence-level coarse-grained labels remain limited, especially in integrating multi-modal information. To address these challenges, we propose a Multi-modal Text-Guided Network (MTGNet) for pneumonia classification using chest CT sequences. In this model, we design a sequential graph pooling network to encode the CT sequences by gradually selecting important slice features to obtain a sequence-level representation. Additionally, a CT description encoder is developed to learn representations from textual reports. To simulate the clinical diagnostic process, we employ multi-modal training and single-modal testing. A modal transfer module is proposed to generate simulated textual features from CT sequences. Cross-modal attention is then employed to fuse the sequence-level and simulated textual representations, thereby enhancing feature learning within the CT sequences by incorporating semantic information from textual descriptions. Furthermore, contrastive learning is applied to learn discriminative features by maximizing the similarity of positive sample pairs and minimizing the similarity of negative sample pairs. Extensive experiments on a self-constructed pneumonia CT sequences dataset demonstrate that the proposed model significantly improves classification performance.
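
A minimal sketch of two pieces the abstract describes: cross-modal attention between simulated text features and CT sequence features, and an InfoNCE-style contrastive term. Dimensions, the temperature, and the head counts are illustrative assumptions, not MTGNet's published settings.

```python
# Hedged sketch: cross-modal attention fusion plus a contrastive alignment term.
import torch
import torch.nn as nn

dim = 256
cross_attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)

ct_seq = torch.randn(2, 32, dim)      # sequence-level CT slice features
sim_text = torch.randn(2, 1, dim)     # simulated report feature per study

fused, attn_w = cross_attn(query=sim_text, key=ct_seq, value=ct_seq)
logits = nn.Linear(dim, 2)(fused.squeeze(1))   # pneumonia vs. non-pneumonia

# Contrastive term: pull matched (CT, text) pairs together, InfoNCE-style.
z_ct = nn.functional.normalize(ct_seq.mean(dim=1), dim=-1)
z_tx = nn.functional.normalize(sim_text.squeeze(1), dim=-1)
sim = z_ct @ z_tx.t() / 0.07                   # similarity matrix, temp 0.07
contrastive = nn.functional.cross_entropy(sim, torch.arange(2))
```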