
Foundational Segmentation Models and Clinical Data Mining Enable Accurate Computer Vision for Lung Cancer.

Swinburne NC, Jackson CB, Pagano AM, Stember JN, Schefflein J, Marinelli B, Panyam PK, Autz A, Chopra MS, Holodny AI, Ginsberg MS

PubMed | Jun 1 2025
This study aims to assess the effectiveness of integrating Segment Anything Model (SAM) and its variant MedSAM into the automated mining, object detection, and segmentation (MODS) methodology for developing robust lung cancer detection and segmentation models without post hoc labeling of training images. In a retrospective analysis, 10,000 chest computed tomography scans from patients with lung cancer were mined. Line measurement annotations were converted to bounding boxes, excluding boxes < 1 cm or > 7 cm. The You Only Look Once object detection architecture was used for teacher-student learning to label unannotated lesions on the training images. Subsequently, a final tumor detection model was trained and employed with SAM and MedSAM for tumor segmentation. Model performance was assessed on a manually annotated test dataset, with additional evaluations conducted on an external lung cancer dataset before and after detection model fine-tuning. Bootstrap resampling was used to calculate 95% confidence intervals. Data mining yielded 10,789 line annotations, resulting in 5403 training boxes. The baseline detection model achieved an internal F1 score of 0.847, improving to 0.860 after self-labeling. Tumor segmentation using the final detection model attained internal Dice similarity coefficients (DSCs) of 0.842 (SAM) and 0.822 (MedSAM). After fine-tuning, external validation showed an F1 of 0.832 and DSCs of 0.802 (SAM) and 0.804 (MedSAM). Integrating foundational segmentation models into the MODS framework results in high-performing lung cancer detection and segmentation models using only mined clinical data. Both SAM and MedSAM hold promise as foundational segmentation models for radiology images.
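For readers who want to reproduce the data-mining step, the sketch below shows one way the described conversion of a line-measurement annotation into a bounding box with the 1-7 cm size filter might look. The function name, coordinate convention, and absence of box padding are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def line_to_box(p1, p2, spacing_mm, min_cm=1.0, max_cm=7.0):
    """Convert a line-measurement annotation (two endpoints in pixel
    coordinates) into an axis-aligned bounding box, discarding lesions
    whose measured length falls outside the 1-7 cm range used in the study."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    length_cm = np.linalg.norm((p2 - p1) * spacing_mm) / 10.0  # mm -> cm
    if not (min_cm <= length_cm <= max_cm):
        return None  # excluded from the training set
    x_min, y_min = np.minimum(p1, p2)
    x_max, y_max = np.maximum(p1, p2)
    return x_min, y_min, x_max, y_max

# Example: a 48 mm lesion measured on a slice with 0.7 mm pixel spacing
print(line_to_box((100, 120), (100 + 48 / 0.7, 120), spacing_mm=0.7))
```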

Integrating VAI-Assisted Quantified CXRs and Multimodal Data to Assess the Risk of Mortality.

Chen YC, Fang WH, Lin CS, Tsai DJ, Hsiang CW, Chang CK, Ko KH, Huang GS, Lee YT, Lin C

PubMed | Jun 1 2025
To address the unmet need for a widely available examination for mortality prediction, this study developed a foundation visual artificial intelligence (VAI) model to enhance mortality risk stratification using chest X-rays (CXRs). The VAI employed deep learning to extract CXR features and a Cox proportional hazards model to generate a hazard score ("CXR-risk"). We retrospectively collected CXRs from patients who visited the outpatient department and the physical examination center, and then reviewed mortality and morbidity outcomes from electronic medical records. The dataset consisted of 41,945, 10,492, 31,707, and 4441 patients in the training, validation, internal test, and external test sets, respectively. Over a median follow-up of 3.2 (IQR, 1.2-6.1) years in both the internal and external test sets, the "CXR-risk" score demonstrated C-indexes of 0.859 (95% confidence interval (CI), 0.851-0.867) and 0.870 (95% CI, 0.844-0.896), respectively. Patients with a high "CXR-risk" (above the 85th percentile) had a significantly higher risk of mortality than those with a low risk (below the 50th percentile). Adding clinical and laboratory data and radiographic reports further improved predictive accuracy, yielding C-indexes of 0.888 and 0.900. The VAI can provide accurate predictions of mortality and morbidity outcomes from a single CXR and can complement other risk prediction indicators to help physicians assess patient risk more effectively.
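The hazard-score construction the abstract describes (deep-learning CXR features fed into a Cox proportional hazards model, evaluated with a C-index and percentile-based risk groups) can be sketched as below. The synthetic data, feature names, and lifelines-based workflow are stand-ins, not the authors' code.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

# Synthetic stand-in for deep-learning CXR features with survival outcomes
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame(rng.normal(size=(n, 5)),
                  columns=[f"cxr_feat_{i}" for i in range(5)])
risk = df["cxr_feat_0"] + 0.5 * df["cxr_feat_1"]
df["time_years"] = rng.exponential(scale=np.exp(-risk))  # shorter survival at higher risk
df["died"] = rng.random(n) < 0.6                         # event indicator

# Cox proportional hazards on the image-derived features -> "CXR-risk" score
cph = CoxPHFitter().fit(df, duration_col="time_years", event_col="died")
df["cxr_risk"] = cph.predict_partial_hazard(df)

# Discrimination, analogous to the reported C-index (higher risk -> shorter survival)
print(concordance_index(df["time_years"], -df["cxr_risk"], df["died"]))

# Risk groups as in the abstract: high > 85th percentile, low < 50th percentile
high = df["cxr_risk"] > df["cxr_risk"].quantile(0.85)
low = df["cxr_risk"] < df["cxr_risk"].quantile(0.50)
```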

Developing approaches to incorporate donor-lung computed tomography images into machine learning models to predict severe primary graft dysfunction after lung transplantation.

Ma W, Oh I, Luo Y, Kumar S, Gupta A, Lai AM, Puri V, Kreisel D, Gelman AE, Nava R, Witt CA, Byers DE, Halverson L, Vazquez-Guillamet R, Payne PRO, Sotiras A, Lu H, Niazi K, Gurcan MN, Hachem RR, Michelson AP

PubMed | Jun 1 2025
Primary graft dysfunction (PGD) is a common complication after lung transplantation associated with poor outcomes. Although risk factors have been identified, the complex interactions between clinical variables affecting PGD risk are not well understood, which can complicate decisions about donor-lung acceptance. Previously, we developed a machine learning model to predict grade 3 PGD using donor and recipient electronic health record data, but it lacked granular information from donor-lung computed tomography (CT) scans, which are routinely assessed during offer review. In this study, we used a gated approach to determine optimal methods for analyzing donor-lung CT scans among patients receiving first-time, bilateral lung transplants at a single center over 10 years. We assessed 4 computer vision approaches and fused the best with electronic health record data at 3 points in the machine learning process. A total of 160 patients had donor-lung CT scans for analysis. The best imaging-only approach employed a 3D ResNet model, yielding median (interquartile range) areas under the receiver operating characteristic and precision-recall curves of 0.63 (0.49-0.72) and 0.48 (0.35-0.60), respectively. Combining imaging with clinical data using late fusion provided the highest performance, with median areas under the receiver operating characteristic and precision-recall curves of 0.74 (0.59-0.85) and 0.61 (0.47-0.72), respectively.
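A minimal illustration of late fusion as described (combining independently trained imaging and clinical models at the prediction level rather than at the feature or input level) might look like this; the equal weighting and toy probabilities are assumptions for demonstration only.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def late_fusion(p_imaging, p_clinical, w_imaging=0.5):
    """Late fusion: blend probabilities from two independently trained models."""
    return w_imaging * np.asarray(p_imaging) + (1 - w_imaging) * np.asarray(p_clinical)

# p_img: 3D ResNet probabilities of grade 3 PGD; p_ehr: EHR-model probabilities
p_img = np.array([0.8, 0.3, 0.6, 0.2])
p_ehr = np.array([0.7, 0.4, 0.5, 0.1])
y = np.array([1, 0, 1, 0])

p_fused = late_fusion(p_img, p_ehr)
print("AUROC:", roc_auc_score(y, p_fused))
print("AUPRC:", average_precision_score(y, p_fused))
```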

Retaking assessment system based on the inspiratory state of chest X-ray image.

Matsubara N, Teramoto A, Takei M, Kitoh Y, Kawakami S

PubMed | Jun 1 2025
When a chest X-ray is taken, the patient is asked to hold maximum inspiration and the radiological technologist exposes the image at the appropriate moment. If the image is not acquired at maximum inspiration, it must be retaken; however, judgments of whether a retake is necessary vary between operators. We therefore developed a retake assessment system that uses a convolutional neural network (CNN) to evaluate whether a retake is needed, aiming to reduce this variation. Training the CNN requires chest X-ray images paired with labels indicating whether a retake is necessary, but a single chest X-ray cannot by itself reveal whether inspiration was sufficient (no retake needed) or insufficient (retake required). We therefore generated training images and labels from dynamic digital radiography (DDR). Verification using 18 dynamic chest X-ray cases (5400 images) and 48 actual chest X-ray cases (96 images) showed that the VGG16-based architecture achieved an assessment accuracy of 82.3% even on actual chest X-ray images. If used in hospitals, the proposed method could therefore reduce the variability in judgment between operators.
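A VGG16-based binary retake classifier of the kind the abstract describes could be set up roughly as below; the input size, optimizer, and label convention are assumptions, and the authors' preprocessing of DDR frames is not shown.

```python
import torch
import torch.nn as nn
from torchvision import models

# VGG16 backbone with a two-class head: "adequate inspiration" vs "retake needed"
model = models.vgg16(weights=None)          # ImageNet weights could also be used
model.classifier[6] = nn.Linear(4096, 2)    # replace the final 1000-class layer

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of chest images
images = torch.randn(4, 3, 224, 224)        # DDR-derived frames, resized to 224x224
labels = torch.tensor([0, 1, 0, 1])         # 1 = insufficient inspiration, retake

optimizer.zero_grad()
logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```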

DKCN-Net: Deep kronecker convolutional neural network-based lung disease detection with federated learning.

Meda A, Nelson L, Jagdish M

PubMed | Jun 1 2025
In the healthcare field, lung disease detection techniques based on deep learning (DL) are widely used. However, achieving high stability while maintaining privacy remains a challenge. To address this, this research employs Federated Learning (FL), which lets clinicians train models locally without sharing patient data with unauthorized parties, thereby preserving privacy. The study introduces the Deep Kronecker Convolutional Neural Network (DKCN-Net) for lung disease detection. Input computed tomography (CT) images are sourced from the LIDC-IDRI database and denoised using an Adaptive Gaussian Filter (AGF). Lung lobe and nodule segmentation are then performed using Deep Fuzzy Clustering (DFC) and a 3-Dimensional Fully Convolutional Neural Network (3D-FCN). During feature extraction, statistical, Convolutional Neural Network (CNN), and Gray-Level Co-Occurrence Matrix (GLCM) features are obtained. Lung diseases are then detected using DKCN-Net, which combines the Deep Kronecker Neural Network (DKN) and a Parallel Convolutional Neural Network (PCNN). The DKCN-Net achieves an accuracy of 92.18 %, a loss of 7.82 %, a Mean Squared Error (MSE) of 0.858, a True Positive Rate (TPR) of 92.99 %, and a True Negative Rate (TNR) of 92.19 %, with a processing time of 50 s per timestamp.
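Of the handcrafted feature families listed, the GLCM texture features are straightforward to reproduce. The sketch below uses scikit-image with an assumed Hounsfield-unit window and quantization that may differ from the paper's settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch, levels=64):
    """Gray-Level Co-Occurrence Matrix texture features for a 2D CT patch,
    one of the handcrafted feature families listed in the abstract."""
    # Quantize the Hounsfield-unit patch to a small number of gray levels
    patch = np.clip(patch, -1000, 400)
    patch = ((patch + 1000) / 1400 * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

print(glcm_features(np.random.randint(-1000, 400, (64, 64))))
```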

NeoPred: dual-phase CT AI forecasts pathologic response to neoadjuvant chemo-immunotherapy in NSCLC.

Zheng J, Yan Z, Wang R, Xiao H, Chen Z, Ge X, Li Z, Liu Z, Yu H, Liu H, Wang G, Yu P, Fu J, Zhang G, Zhang J, Liu B, Huang Y, Deng H, Wang C, Fu W, Zhang Y, Wang R, Jiang Y, Lin Y, Huang L, Yang C, Cui F, He J, Liang H

PubMed | May 31 2025
Accurate preoperative prediction of major pathological response or pathological complete response after neoadjuvant chemo-immunotherapy remains a critical unmet need in resectable non-small-cell lung cancer (NSCLC). Conventional size-based imaging criteria offer limited reliability, while biopsy confirmation is available only post-surgery. We retrospectively assembled 509 consecutive NSCLC cases from four Chinese thoracic-oncology centers (March 2018 to March 2023) and prospectively enrolled 50 additional patients. Three 3-dimensional convolutional neural networks (pre-treatment CT, pre-surgical CT, dual-phase CT) were developed; the best-performing dual-phase model (NeoPred) optionally integrated clinical variables. Model performance was measured by area under the receiver-operating-characteristic curve (AUC) and compared with nine board-certified radiologists. In an external validation set (n=59), NeoPred achieved an AUC of 0.772 (95% CI: 0.650 to 0.895), sensitivity 0.591, specificity 0.733, and accuracy 0.627; incorporating clinical data increased the AUC to 0.787. In a prospective cohort (n=50), NeoPred reached an AUC of 0.760 (95% CI: 0.628 to 0.891), surpassing the experts' mean AUC of 0.720 (95% CI: 0.574 to 0.865). Model assistance raised the pooled expert AUC to 0.829 (95% CI: 0.707 to 0.951) and accuracy to 0.820. Marked performance persisted within radiological stable-disease subgroups (external AUC 0.742, 95% CI: 0.468 to 1.000; prospective AUC 0.833, 95% CI: 0.497 to 1.000). Combining dual-phase CT and clinical variables, NeoPred reliably and non-invasively predicts pathological response to neoadjuvant chemo-immunotherapy in NSCLC, outperforms unaided expert assessment, and significantly enhances radiologist performance. Further multinational trials are needed to confirm generalizability and support surgical decision-making.
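The bootstrap confidence intervals quoted alongside the AUC point estimates can be approximated with a percentile bootstrap like the one below; the resample count and seed are arbitrary choices, not taken from the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, y_prob, n_boot=2000, alpha=0.05, seed=0):
    """Point estimate and percentile-bootstrap 95% CI for the AUC."""
    rng = np.random.default_rng(seed)
    y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) < 2:   # a resample needs both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], y_prob[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return roc_auc_score(y_true, y_prob), (lo, hi)

y = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
p = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.3, 0.8, 0.5, 0.65, 0.1])
print(bootstrap_auc_ci(y, p))
```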

Deep learning reconstruction improves computer-aided pulmonary nodule detection and measurement accuracy for ultra-low-dose chest CT.

Wang J, Zhu Z, Pan Z, Tan W, Han W, Zhou Z, Hu G, Ma Z, Xu Y, Ying Z, Sui X, Jin Z, Song L, Song W

PubMed | May 30 2025
To compare image quality, pulmonary nodule detectability, and measurement accuracy between deep learning reconstruction (DLR) and hybrid iterative reconstruction (HIR) of chest ultra-low-dose CT (ULDCT). Participants who underwent chest standard-dose CT (SDCT) followed by ULDCT from October 2020 to January 2022 were prospectively included. ULDCT images reconstructed with HIR and DLR were compared with SDCT images to evaluate image quality, nodule detection rate, and measurement accuracy using a commercially available deep learning-based nodule evaluation system. The Wilcoxon signed-rank test was used to compare the percentage errors of nodule size and nodule volume between HIR and DLR images. Eighty-four participants (54 ± 13 years; 26 men) were enrolled. The effective radiation doses of ULDCT and SDCT were 0.16 ± 0.02 mSv and 1.77 ± 0.67 mSv, respectively (P < 0.001). Lung tissue noise (mean ± standard deviation) was 61.4 ± 3.0 HU for SDCT, and 61.5 ± 2.8 HU and 55.1 ± 3.4 HU for ULDCT reconstructed with the HIR-Strong (HIR-Str) and DLR-Strong (DLR-Str) settings, respectively (P < 0.001). A total of 535 nodules were detected. The nodule detection rates of ULDCT HIR-Str and ULDCT DLR-Str were 74.0% and 83.4%, respectively (P < 0.001). The absolute percentage error in nodule volume relative to SDCT was 19.5% for ULDCT HIR-Str versus 17.9% for ULDCT DLR-Str (P < 0.001). Compared with HIR, DLR reduced image noise, increased the nodule detection rate, and improved the measurement accuracy of nodule volume at chest ULDCT.
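The volume-error comparison between reconstructions (absolute percentage error against SDCT, compared with a Wilcoxon signed-rank test) can be sketched as follows; the nodule volumes shown are invented toy values.

```python
import numpy as np
from scipy.stats import wilcoxon

def abs_pct_error(measured, reference):
    """Absolute percentage error of ULDCT nodule volumes against the SDCT reference."""
    measured, reference = np.asarray(measured, float), np.asarray(reference, float)
    return np.abs(measured - reference) / reference * 100

# Per-nodule volumes (mm^3): SDCT reference vs. the two ULDCT reconstructions
v_sdct = np.array([120.0, 85.0, 300.0, 45.0, 210.0])
v_hir  = np.array([150.0, 70.0, 250.0, 55.0, 240.0])
v_dlr  = np.array([135.0, 78.0, 270.0, 50.0, 225.0])

err_hir = abs_pct_error(v_hir, v_sdct)
err_dlr = abs_pct_error(v_dlr, v_sdct)

# Paired comparison of the two error distributions, as in the abstract
stat, p = wilcoxon(err_hir, err_dlr)
print(err_hir.mean(), err_dlr.mean(), p)
```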

Multimodal AI framework for lung cancer diagnosis: Integrating CNN and ANN models for imaging and clinical data analysis.

Oncu E, Ciftci F

PubMed | May 30 2025
Lung cancer remains a leading cause of cancer-related mortality worldwide, emphasizing the critical need for accurate and early diagnostic solutions. This study introduces a novel multimodal artificial intelligence (AI) framework that integrates Convolutional Neural Networks (CNNs) and Artificial Neural Networks (ANNs) to improve lung cancer classification and severity assessment. The CNN model, trained on 1019 preprocessed CT images, classifies lung tissue into four histological categories (adenocarcinoma, large cell carcinoma, squamous cell carcinoma, and normal) with a weighted accuracy of 92 %. Interpretability is enhanced using Gradient-weighted Class Activation Mapping (Grad-CAM), which highlights the salient image regions influencing the model's predictions. In parallel, an ANN trained on clinical data from 999 patients, spanning 24 key features such as demographic, symptomatic, and genetic factors, achieves 99 % accuracy in predicting cancer severity (low, medium, high). SHapley Additive exPlanations (SHAP) are employed to provide both global and local interpretability of the ANN model, enabling transparent decision-making. Both models were rigorously validated using k-fold cross-validation to ensure robustness and reduce overfitting. This hybrid approach effectively combines spatial imaging data and structured clinical information, demonstrating strong predictive performance and offering an interpretable and comprehensive AI-based solution for lung cancer diagnosis and management.
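The SHAP side of the interpretability pipeline might be wired up roughly as below; the small MLP, synthetic 24-feature table, and explainer configuration are stand-ins for the authors' ANN and clinical data, and the Grad-CAM half is not shown.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.neural_network import MLPClassifier

# Stand-in for the clinical ANN: a small MLP on a synthetic 24-feature table
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(300, 24)),
                 columns=[f"clin_feat_{i}" for i in range(24)])
y = (X["clin_feat_0"] + X["clin_feat_1"] > 0).astype(int)   # toy severity label
ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X.values, y)

# SHAP explanations: global (beeswarm) and local (waterfall), as in the paper
explainer = shap.Explainer(lambda d: ann.predict_proba(np.asarray(d))[:, 1],
                           shap.sample(X, 50))
shap_values = explainer(X.iloc[:50])
shap.plots.beeswarm(shap_values)      # which features drive severity overall
shap.plots.waterfall(shap_values[0])  # explanation for a single patient
```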

Machine learning-based hemodynamics quantitative assessment of pulmonary circulation using computed tomographic pulmonary angiography.

Xie H, Zhao X, Zhang N, Liu J, Yang G, Cao Y, Xu J, Xu L, Sun Z, Wen Z, Chai S, Liu D

PubMed | May 30 2025
Pulmonary hypertension (PH) is a malignant disease of the pulmonary circulation. Right heart catheterization (RHC) is the gold standard procedure for quantitative evaluation of pulmonary hemodynamics, but accurate and noninvasive quantitative evaluation remains challenging given the limitations of currently available assessment methods. Patients who underwent computed tomographic pulmonary angiography (CTPA) and RHC examinations within 2 weeks were included. The dataset was randomly divided into a training set and a test set at an 8:2 ratio. A radiomic feature model and a two-dimensional (2D) feature model were constructed to quantitatively evaluate pulmonary hemodynamics. Model performance was assessed by calculating the mean squared error, intraclass correlation coefficient (ICC), and area under the precision-recall curve (AUC-PR), and by performing Bland-Altman analyses. A total of 345 patients were identified: 271 with PH (mean age 50 ± 17 years, 93 men) and 74 without PH (mean age 55 ± 16 years, 26 men). Hemodynamic predictions from the radiomic feature model, which integrated 5 2D features and 30 radiomic features, were consistent with the RHC results and outperformed those of the 2D feature model. The radiomic feature model showed moderate to good reproducibility for predicting pulmonary hemodynamic parameters (ICC up to 0.87). In addition, PH could be accurately identified with a classification model (AUC-PR = 0.99). This study provides a noninvasive method for comprehensively and quantitatively evaluating pulmonary hemodynamics using CTPA images, which has the potential to serve as an alternative to RHC, pending further validation.
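The Bland-Altman agreement analysis mentioned for the hemodynamic predictions reduces to a bias and limits-of-agreement calculation like the one below; mean pulmonary arterial pressure is used only as an assumed example parameter, and the values are invented.

```python
import numpy as np

def bland_altman(pred, ref):
    """Bland-Altman agreement between model-predicted and RHC-measured
    hemodynamic values: bias and 95% limits of agreement."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    diff = pred - ref
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, (bias - loa, bias + loa)

# Example: mean pulmonary arterial pressure (mmHg), model-predicted vs. RHC
mpap_pred = [38, 52, 25, 61, 44, 30]
mpap_rhc  = [40, 50, 22, 65, 47, 28]
print(bland_altman(mpap_pred, mpap_rhc))
```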

DeepChest: Dynamic Gradient-Free Task Weighting for Effective Multi-Task Learning in Chest X-ray Classification

Youssef Mohamed, Noran Mohamed, Khaled Abouhashad, Feilong Tang, Sara Atito, Shoaib Jameel, Imran Razzak, Ahmed B. Zaky

arXiv preprint | May 29 2025
While Multi-Task Learning (MTL) offers inherent advantages in complex domains such as medical imaging by enabling shared representation learning, effectively balancing task contributions remains a significant challenge. This paper addresses this critical issue by introducing DeepChest, a novel, computationally efficient and effective dynamic task-weighting framework specifically designed for multi-label chest X-ray (CXR) classification. Unlike existing heuristic or gradient-based methods that often incur substantial overhead, DeepChest leverages a performance-driven weighting mechanism based on effective analysis of task-specific loss trends. Given a network architecture (e.g., ResNet18), our model-agnostic approach adaptively adjusts task importance without requiring gradient access, thereby significantly reducing memory usage and achieving a threefold increase in training speed. It can be easily applied to improve various state-of-the-art methods. Extensive experiments on a large-scale CXR dataset demonstrate that DeepChest not only outperforms state-of-the-art MTL methods by 7% in overall accuracy but also yields substantial reductions in individual task losses, indicating improved generalization and effective mitigation of negative transfer. The efficiency and performance gains of DeepChest pave the way for more practical and robust deployment of deep learning in critical medical diagnostic applications. The code is publicly available at https://github.com/youssefkhalil320/DeepChest-MTL
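The abstract does not spell out the weighting rule beyond "performance-driven" and "based on task-specific loss trends", so the sketch below is only a plausible gradient-free heuristic in that spirit (up-weight tasks whose smoothed losses are improving slowly); the actual DeepChest rule is in the linked repository.

```python
import numpy as np

class LossTrendWeighter:
    """Illustrative gradient-free task weighting: tasks whose smoothed losses
    are falling slowly get larger weights. This moving-average heuristic is an
    assumption for demonstration, not the published DeepChest update rule."""

    def __init__(self, n_tasks, momentum=0.9):
        self.avg_loss = np.ones(n_tasks)
        self.prev_avg = np.ones(n_tasks)
        self.momentum = momentum

    def update(self, task_losses):
        task_losses = np.asarray(task_losses, float)
        self.prev_avg = self.avg_loss.copy()
        self.avg_loss = self.momentum * self.avg_loss + (1 - self.momentum) * task_losses
        # Relative improvement per task; slower-improving tasks get larger weights
        improvement = np.clip((self.prev_avg - self.avg_loss) / self.prev_avg, 1e-6, None)
        weights = 1.0 / improvement
        return weights / weights.sum() * len(weights)   # normalize to mean 1

weighter = LossTrendWeighter(n_tasks=3)
print(weighter.update([0.9, 0.7, 0.8]))  # per-task losses from the current epoch
```

The returned weights would then scale each task's loss in the summed training objective, without touching per-task gradients.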