
Multimodal feature distinguishing and deep learning approach to detect lung disease from MRI images.

Alanazi TM

PubMed paper · Aug 29 2025
Precise and early detection and diagnosis of lung disease reduce the risk to life and the further spread of infection in patients. Computer-based image processing techniques use magnetic resonance imaging (MRI) as input for computation, detection, and segmentation to improve processing efficacy. This article introduces a Multimodal Feature Distinguishing Method (MFDM) for augmenting the precision of lung disease detection. The method distinguishes the extractable features of an MRI lung input using a homogeneity measure. Depending on the possible differentiations for heterogeneous-feature detection, training with a transformer network is pursued. This network performs differentiation verification and training classification independently and integrates the two to identify heterogeneous features. The integrated classifications are used to detect the infected region based on feature precision. If differentiation fails, the transformer process restarts from the last known homogeneous feature between successive segments. The distinguishing multimodal features between successive segments are therefore validated at different differentiation levels, augmenting accuracy. The introduced system achieves gains of 8.78% in sensitivity and 8.81% in precision, and a 9.75% reduction in differentiation time, when analyzing various lung features. These results indicate that the MFDM model can be used in medical applications to improve the disease recognition rate.
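The authors do not publish code, and the abstract leaves the homogeneity measure unspecified. As a purely illustrative sketch, one plausible choice is a GLCM homogeneity score compared between successive segments; the threshold and all names below are assumptions, not the paper's method:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def segment_homogeneity(segment: np.ndarray) -> float:
    """GLCM homogeneity of one 8-bit MRI segment (higher = more uniform)."""
    glcm = graycomatrix(segment, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return float(graycoprops(glcm, "homogeneity").mean())

def flag_heterogeneous_pairs(segments, threshold=0.05):
    """Flag successive segment pairs whose homogeneity drops sharply --
    hypothetical candidates for a downstream differentiation stage."""
    scores = [segment_homogeneity(s) for s in segments]
    return [i for i in range(1, len(scores))
            if scores[i - 1] - scores[i] > threshold]

# Toy usage: three random 64x64 "segments" standing in for MRI patches.
rng = np.random.default_rng(0)
segs = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(3)]
print(flag_heterogeneous_pairs(segs))
```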

Radiomics and deep learning methods for predicting the growth of subsolid nodules based on CT images.

Chen J, Yan W, Shi Y, Pan X, Yu R, Wang D, Zhang X, Wang L, Liu K

PubMed paper · Aug 29 2025
The growth of subsolid nodules (SSNs) is a strong predictor of lung adenocarcinoma. However, the heterogeneity in the biological behavior of SSNs poses significant challenges for clinical management. This study aimed to evaluate the clinical utility of deep learning and radiomics approaches in predicting SSN growth from computed tomography (CT) images. A total of 353 patients with 387 SSNs were enrolled in this retrospective study. All cases were divided into growth (n = 195) and non-growth (n = 192) groups and were randomly assigned to the training (n = 247), validation (n = 62), and test (n = 78) sets in a ratio of 3:1:1. We obtained 1454 radiomics features from each volumetric region of interest (VOI). The Pearson correlation coefficient and the least absolute shrinkage and selection operator (LASSO) were used to determine the radiomics signature. A ResNet18 architecture was used to construct the deep-learning model. The 2 models were combined via a ResNet-based fusion network to construct an ensemble model. The area under the curve (AUC) was calculated and decision curve analysis (DCA) was performed to determine the clinical performance of the 3 models. The combined model (AUC = 0.926, 95% CI: 0.869-0.977) outperformed the radiomics (AUC = 0.894, 95% CI: 0.808-0.957) and deep-learning (AUC = 0.802, 95% CI: 0.695-0.899) models in the test set. The DeLong test showed a statistically significant difference between the combined and deep-learning models (P = .012), and DCA supported the combined model's clinical value. This study demonstrates that integrating radiomics with deep learning offers promising potential for the preoperative prediction of SSN growth.
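As a hedged sketch of the feature-selection step described above (thresholds, variable names, and the synthetic data are illustrative, not the authors' settings), a Pearson correlation filter followed by LASSO could look like:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

def select_radiomics_features(X: pd.DataFrame, y, corr_cutoff: float = 0.9):
    """Drop one feature of each highly inter-correlated pair (Pearson),
    then keep the features with nonzero LASSO coefficients."""
    corr = X.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    redundant = [c for c in upper.columns if (upper[c] > corr_cutoff).any()]
    X_filtered = X.drop(columns=redundant)

    Xs = StandardScaler().fit_transform(X_filtered)
    lasso = LassoCV(cv=5, random_state=0).fit(Xs, y)
    return list(X_filtered.columns[lasso.coef_ != 0]), lasso

# Toy usage with 1454 synthetic features, mirroring the paper's count.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(200, 1454)),
                 columns=[f"f{i}" for i in range(1454)])
y = (X["f0"] - X["f1"] + rng.normal(scale=0.5, size=200) > 0).astype(int)
features, model = select_radiomics_features(X, y)
print(len(features), "features retained")
```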

A hybrid computer vision model to predict lung cancer in diverse populations

Zakkar, A., Perwaiz, N., Harikrishnan, V., Zhong, W., Narra, V., Krule, A., Yousef, F., Kim, D., Burrage-Burton, M., Lawal, A. A., Gadi, V., Korpics, M. C., Kim, S. J., Chen, Z., Khan, A. A., Molina, Y., Dai, Y., Marai, E., Meidani, H., Nguyen, R., Salahudeen, A. A.

medRxiv preprint · Aug 29 2025
PURPOSE Disparities in lung cancer incidence exist in Black populations, and screening criteria underserve Black populations due to disparately elevated risk in the screening-eligible population. Prediction models that integrate clinical and imaging-based features to individualize lung cancer risk are a potential means to mitigate these disparities. PATIENTS AND METHODS This multicenter (NLST) and catchment-population-based (UIH; urban and suburban Cook County) study utilized participants at risk of lung cancer with available lung CT imaging and follow-up between the years 2015 and 2024. 53,452 participants in NLST and 11,654 in UIH were included based on age- and tobacco-use-based risk factors for lung cancer. The cohorts were used for training and testing of deep and machine learning models using clinical features alone or combined with CT image features (hybrid computer vision). RESULTS An optimized 7-clinical-feature model achieved ROC-AUC values of 0.64-0.67 in the NLST cohort and 0.60-0.65 in the UIH cohort across multiple years. Incorporating imaging features to form a hybrid computer vision model significantly improved ROC-AUC values to 0.78-0.91 in NLST, but performance deteriorated in UIH, with ROC-AUC values of 0.68-0.80, attributable to Black participants, for whom ROC-AUC values ranged from 0.63-0.72 across multiple years. Retraining the hybrid computer vision model by incorporating Black and other participants from the UIH cohort improved performance, with ROC-AUC values of 0.70-0.87 in a held-out UIH test set. CONCLUSION Hybrid computer vision predicted risk with improved accuracy compared to clinical risk models alone. However, potential biases in image training data reduced model generalizability in Black participants. Performance improved upon retraining with a subset of the UIH cohort, suggesting that inclusive training and validation datasets can minimize racial disparities. Future studies incorporating vision models trained on representative datasets may demonstrate improved health equity upon clinical use.
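The preprint's architecture is not reproduced here; the following is a minimal late-fusion sketch of the hybrid idea, with a CT image embedding concatenated to tabular clinical features. The layer sizes, input format, and backbone choice are all assumptions:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class HybridRiskModel(nn.Module):
    """Concatenate a CNN image embedding with tabular clinical features."""
    def __init__(self, n_clinical: int = 7):  # 7 clinical features, per the abstract
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()          # expose the 512-d embedding
        self.image_encoder = backbone
        self.head = nn.Sequential(
            nn.Linear(512 + n_clinical, 64),
            nn.ReLU(),
            nn.Linear(64, 1),                # logit of lung cancer risk
        )

    def forward(self, image, clinical):
        z = self.image_encoder(image)
        return self.head(torch.cat([z, clinical], dim=1))

model = HybridRiskModel()
img = torch.randn(2, 3, 224, 224)   # CT-derived input; actual format is assumed
clin = torch.randn(2, 7)            # e.g., age, pack-years, and other factors
print(torch.sigmoid(model(img, clin)).shape)  # torch.Size([2, 1])
```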

Artificial intelligence as an independent reader of risk-dominant lung nodules: influence of CT reconstruction parameters.

Mao Y, Heuvelmans MA, van Tuinen M, Yu D, Yi J, Oudkerk M, Ye Z, de Bock GH, Dorrius MD

PubMed paper · Aug 29 2025
To assess the impact of reconstruction parameters on AI performance in detecting and classifying risk-dominant nodules in baseline low-dose CT (LDCT) screening of a Chinese general population. Baseline LDCT scans from 300 consecutive participants in the Netherlands and China Big-3 (NELCIN-B3) trial were included. AI analyzed each scan reconstructed with four settings: 1 mm/0.7 mm thickness/interval with medium-soft and hard kernels (D45f/1 mm, B80f/1 mm) and 2 mm/1 mm with soft and medium-soft kernels (B30f/2 mm, D45f/2 mm). Results from a consensus read by two radiologists served as the reference standard. At the scan level, inter-reader agreement between AI and the reference standard, sensitivity, and specificity in determining the presence of a risk-dominant nodule were evaluated. For reference-standard risk-dominant nodules, the nodule detection rate and the agreement in nodule type classification between AI and the reference standard were assessed. AI-D45f/1 mm demonstrated significantly higher sensitivity than AI-B80f/1 mm in determining the presence of a risk-dominant nodule per scan (77.5% vs. 31.5%, p < 0.0001). For reference-standard risk-dominant nodules (111/300, 37.0%), kernel variations (AI-D45f/1 mm vs. AI-B80f/1 mm) did not significantly affect AI's nodule detection rate (87.4% vs. 82.0%, p = 0.26) but substantially influenced the agreement in nodule type classification between AI and the reference standard (87.7% [50/57] vs. 17.7% [11/62], p < 0.0001). Change in thickness/interval (AI-D45f/1 mm vs. AI-D45f/2 mm) had no substantial influence on any aspect of AI performance (p > 0.05). Variations in reconstruction kernels significantly affected AI's performance in risk-dominant nodule type classification, but not nodule detection. Ensuring consistency with radiologist-preferred kernels significantly improved agreement in nodule type classification and may help integrate AI more smoothly into clinical workflows. Question: Patient management in lung cancer screening depends on the risk-dominant nodule, yet no prior studies have assessed the impact of reconstruction parameters on AI performance for these nodules. Findings: The difference between reconstruction kernels (AI-D45f/1 mm vs. AI-B80f/1 mm, or AI-B30f/2 mm vs. AI-D45f/2 mm) significantly affected AI's performance in risk-dominant nodule type classification, but not nodule detection. Clinical relevance: Using a kernel for AI consistent with the radiologist's choice is likely to improve the overall performance of AI-based CAD systems as an independent reader and support greater clinical acceptance and integration of AI tools into routine practice.
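As an illustration (not the authors' analysis code) of how paired per-scan results from two reconstruction kernels can be compared, McNemar's test on hypothetical outcomes might look like:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical per-scan outcomes (1 = risk-dominant nodule present/detected).
rng = np.random.default_rng(0)
truth = rng.integers(0, 2, 300)                            # reference standard
ai_d45f = (truth & (rng.random(300) < 0.87)).astype(int)   # assumed hit rates,
ai_b80f = (truth & (rng.random(300) < 0.82)).astype(int)   # not the study's data

correct_a = ai_d45f == truth
correct_b = ai_b80f == truth
table = [[np.sum(correct_a & correct_b), np.sum(correct_a & ~correct_b)],
         [np.sum(~correct_a & correct_b), np.sum(~correct_a & ~correct_b)]]
print(mcnemar(table, exact=True))  # paired comparison of the two kernels
```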

Deep Active Learning for Lung Disease Severity Classification from Chest X-rays: Learning with Less Data in the Presence of Class Imbalance

Roy M. Gabriel, Mohammadreza Zandehshahvar, Marly van Assen, Nattakorn Kittisut, Kyle Peters, Carlo N. De Cecco, Ali Adibi

arXiv preprint · Aug 28 2025
To reduce the amount of required labeled data for lung disease severity classification from chest X-rays (CXRs) under class imbalance, this study applied deep active learning with a Bayesian Neural Network (BNN) approximation and weighted loss function. This retrospective study collected 2,319 CXRs from 963 patients (mean age, 59.2 ± 16.6 years; 481 female) at Emory Healthcare affiliated hospitals between January and November 2020. All patients had clinically confirmed COVID-19. Each CXR was independently labeled by 3 to 6 board-certified radiologists as normal, moderate, or severe. A deep neural network with Monte Carlo Dropout was trained using active learning to classify disease severity. Various acquisition functions were used to iteratively select the most informative samples from an unlabeled pool. Performance was evaluated using accuracy, area under the receiver operating characteristic curve (AU ROC), and area under the precision-recall curve (AU PRC). Training time and acquisition time were recorded. Statistical analysis included descriptive metrics and performance comparisons across acquisition strategies. Entropy Sampling achieved 93.7% accuracy (AU ROC, 0.91) in binary classification (normal vs. diseased) using 15.4% of the training data. In the multi-class setting, Mean STD sampling achieved 70.3% accuracy (AU ROC, 0.86) using 23.1% of the labeled data. These methods outperformed more complex and computationally expensive acquisition functions and significantly reduced labeling needs. Deep active learning with BNN approximation and weighted loss effectively reduces labeled data requirements while addressing class imbalance, maintaining or exceeding diagnostic performance.
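A minimal sketch of the entropy-sampling acquisition step under MC Dropout (the model, loader, and batch format are placeholders; the study's implementation is not shown):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def entropy_acquisition(model, pool_loader, n_mc: int = 20, k: int = 100):
    """Rank unlabeled CXRs by predictive entropy under MC Dropout and
    return the indices of the k most informative pool samples."""
    model.train()  # keep dropout layers stochastic at inference (MC Dropout)
    entropies = []
    for x, _ in pool_loader:  # assumes the loader yields (images, ids) batches
        probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(n_mc)])
        mean_p = probs.mean(dim=0)                        # MC-averaged posterior
        h = -(mean_p * torch.log(mean_p + 1e-12)).sum(1)  # predictive entropy
        entropies.append(h)
    return torch.topk(torch.cat(entropies), k).indices
```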

Comparison of Outcomes Between Ablation and Lobectomy in Stage IA Non-Small Cell Lung Cancer: A Retrospective Multicenter Study.

Xu B, Chen Z, Liu D, Zhu Z, Zhang F, Lin L

PubMed paper · Aug 28 2025
Image-guided thermal ablation (IGTA) has been increasingly used in patients with stage IA non-small cell lung cancer (NSCLC) without surgical contraindications, but its long-term outcomes compared to lobectomy remain unknown. This study aims to evaluate the long-term outcomes of IGTA versus lobectomy and explore which patients may benefit most from ablation. After propensity score matching, a total of 290 patients with stage IA NSCLC between 2015 and 2023 were included. Progression-free survival (PFS) and overall survival (OS) were estimated using the Kaplan-Meier method. A Markov model was constructed to evaluate cost-effectiveness. Finally, a radiomics model based on preoperative computed tomography (CT) was developed to perform risk stratification. After matching, the median follow-up intervals were 34.8 months for the lobectomy group and 47.2 months for the ablation group. There were no significant differences between the groups in terms of 5-year PFS (hazard ratio [HR], 1.83; 95% CI, 0.86-3.92; p = 0.118) or OS (HR, 2.44; 95% CI, 0.87-6.63; p = 0.092). In low-income regions, lobectomy was not cost-effective in 99% of simulations. The CT-based radiomics model outperformed the traditional TNM model (AUC, 0.759 vs. 0.650; p < 0.01). Moreover, disease-free survival was significantly lower in the high-risk group than in the low-risk group (p = 0.009). This study comprehensively evaluated IGTA versus lobectomy in terms of survival outcomes, cost-effectiveness, and prognostic prediction. The findings suggest that IGTA may be a safe and feasible alternative to conventional surgery for carefully selected patients.
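A sketch of the survival comparison using the lifelines package (synthetic durations and events, not the study's data; the log-rank test is one standard choice, though the abstract reports only hazard ratios):

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical matched cohorts (145 per arm): months of follow-up and events.
rng = np.random.default_rng(0)
t_igta, e_igta = rng.exponential(60, 145), rng.integers(0, 2, 145)
t_lobe, e_lobe = rng.exponential(70, 145), rng.integers(0, 2, 145)

kmf = KaplanMeierFitter()
kmf.fit(t_igta, event_observed=e_igta, label="IGTA")
print(kmf.median_survival_time_)            # Kaplan-Meier median estimate

result = logrank_test(t_igta, t_lobe, event_observed_A=e_igta,
                      event_observed_B=e_lobe)
print(f"log-rank p = {result.p_value:.3f}")
```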

Dual-model approach for accurate chest disease detection using GViT and swin transformer V2.

Ahmad K, Rehman HU, Shah B, Ali F, Hussain I

PubMed paper · Aug 28 2025
The precise detection and localization of abnormalities in radiological images are crucial for clinical diagnosis and treatment planning. Building reliable models requires large, annotated datasets that contain disease labels and abnormality locations. Radiologists often face challenges in identifying and segmenting thoracic diseases such as COVID-19, pneumonia, tuberculosis, and lung cancer due to overlapping visual patterns in X-ray images. This study proposes a dual-model approach: Gated Vision Transformers (GViT) for classification and Swin Transformer V2 for segmentation and localization. GViT successfully identifies thoracic diseases that exhibit similar radiographic features, while Swin Transformer V2 maps lung areas and pinpoints affected regions. Classification metrics, including precision, recall, and F1-scores, surpassed 0.95, while the Intersection over Union (IoU) score reached 90.98%. Performance assessment via the Dice coefficient, Boundary F1-score, and Hausdorff distance demonstrated the system's effectiveness. This artificial intelligence solution can help radiologists reduce their cognitive workload while improving diagnostic precision in healthcare systems facing resource constraints. The study results indicate that transformer-based architectures show strong promise for enhancing medical imaging procedures. Future AI tools should build on this foundation, focusing on comprehensive and precise detection of chest diseases to support effective clinical decision-making.
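For reference, the reported IoU and Dice metrics follow their standard definitions for binary masks, as in this small sketch (not the authors' code):

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """Dice coefficient and IoU for a pair of binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return dice, iou

# Toy masks: predicted vs. ground-truth affected region.
pred = np.zeros((256, 256), dtype=bool); pred[50:150, 50:150] = True
gt = np.zeros((256, 256), dtype=bool); gt[60:160, 60:160] = True
print("Dice=%.3f IoU=%.3f" % dice_and_iou(pred, gt))
```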

Development of a Large-Scale Dataset of Chest Computed Tomography Reports in Japanese and a High-Performance Finding Classification Model: Dataset Development and Validation Study.

Yamagishi Y, Nakamura Y, Kikuchi T, Sonoda Y, Hirakawa H, Kano S, Nakamura S, Hanaoka S, Yoshikawa T, Abe O

PubMed paper · Aug 28 2025
Recent advances in large language models have highlighted the need for high-quality multilingual medical datasets. Although Japan is a global leader in computed tomography (CT) scanner deployment and use, the absence of large-scale Japanese radiology datasets has hindered the development of specialized language models for medical imaging analysis. Despite the emergence of multilingual models and language-specific adaptations, the development of Japanese-specific medical language models has been constrained by a lack of comprehensive datasets, particularly in radiology. This study aims to address this critical gap in Japanese medical natural language processing resources, for which a comprehensive Japanese CT report dataset was developed through machine translation, to establish a specialized language model for structured classification. In addition, a rigorously validated evaluation dataset was created through expert radiologist refinement to ensure a reliable assessment of model performance. We translated the CT-RATE dataset (24,283 CT reports from 21,304 patients) into Japanese using GPT-4o mini. The training dataset consisted of 22,778 machine-translated reports, and the validation dataset included 150 reports carefully revised by radiologists. We developed CT-BERT-JPN, a specialized Bidirectional Encoder Representations from Transformers (BERT) model for Japanese radiology text, based on the "tohoku-nlp/bert-base-japanese-v3" architecture, to extract 18 structured findings from reports. Translation quality was assessed with Bilingual Evaluation Understudy (BLEU) and Recall-Oriented Understudy for Gisting Evaluation (ROUGE) scores and further evaluated by radiologists in a dedicated human-in-the-loop experiment. In that experiment, each report in a randomly selected subset was independently reviewed by 2 radiologists, 1 senior (postgraduate year [PGY] 6-11) and 1 junior (PGY 4-5), using a 5-point Likert scale to rate (1) grammatical correctness, (2) medical terminology accuracy, and (3) overall readability. Inter-rater reliability was measured via quadratic weighted kappa (QWK). Model performance was benchmarked against GPT-4o using accuracy, precision, recall, F1-score, ROC (receiver operating characteristic)-AUC (area under the curve), and average precision. General text structure was preserved (BLEU: 0.731 findings, 0.690 impression; ROUGE: 0.770-0.876 findings, 0.748-0.857 impression), though expert review identified 3 categories of necessary refinements: contextual adjustment of technical terms, completion of incomplete translations, and localization of Japanese medical terminology. The radiologist-revised translations scored significantly higher than raw machine translations across all dimensions, and all improvements were statistically significant (P<.001). CT-BERT-JPN outperformed GPT-4o on 11 of 18 findings (61%), achieving perfect F1-scores for 4 conditions and F1-scores >0.95 for 14 conditions, despite varied sample sizes (7-82 cases). Our study established a robust Japanese CT report dataset and demonstrated the effectiveness of a specialized language model in the structured classification of findings. This hybrid approach of machine translation and expert validation enabled the creation of large-scale datasets while maintaining high quality standards. This study provides essential resources for advancing medical artificial intelligence research in Japanese health care settings, with the datasets and models made publicly available for research to facilitate further advancement in the field.
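A minimal sketch of how such a structured-finding classifier can be set up with Hugging Face transformers (an untrained 18-label head on the base model named in the paper; CT-BERT-JPN's training code is not shown here, and the Japanese tokenizer additionally requires the fugashi and unidic-lite packages):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# 18 findings treated as independent binary labels (multi-label head).
MODEL = "tohoku-nlp/bert-base-japanese-v3"   # base model named in the paper
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL, num_labels=18, problem_type="multi_label_classification")

report = "両肺に結節影を認める。胸水は認めない。"  # toy Japanese CT report
inputs = tokenizer(report, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits           # shape: (1, 18)
print((torch.sigmoid(logits) > 0.5).int())    # per-finding binary predictions
```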

Ultra-Low-Dose CTPA Using Sparse Sampling CT Combined with the U-Net for Deep Learning-Based Artifact Reduction: An Exploratory Study.

Sauter AP, Thalhammer J, Meurer F, Dorosti T, Sasse D, Ritter J, Leonhardt Y, Pfeiffer F, Schaff F, Pfeiffer D

PubMed paper · Aug 27 2025
This retrospective study evaluates U-Net-based artifact reduction for dose-reduced sparse-sampling CT (SpSCT) in terms of image quality and diagnostic performance using a reader study and automated detection. CT pulmonary angiograms from 89 patients were used to generate SpSCT data with 16 to 512 views. Twenty patients were reserved for a reader study and test set, the remaining 69 were used to train (53) and validate (16) a dual-frame U-Net for artifact reduction. U-Net post-processed images were assessed for image quality, diagnostic performance, and automated pulmonary embolism (PE) detection using the top-performing network from the 2020 RSNA PE detection challenge. Statistical comparisons were made using two-sided Wilcoxon signed-rank and DeLong two-sided tests. Post-processing with the dual-frame U-Net significantly improved image quality in the internal test set, with a structural similarity index of 0.634/0.378/0.234/0.152 for FBP and 0.894/0.892/0.866/0.778 for U-Net at 128/64/32/16 views, respectively. The reader study showed significantly enhanced image quality (3.15 vs. 3.53 for 256 views, 0.00 vs. 2.52 for 32 views), increased diagnostic confidence (0.00 vs. 2.38 for 32 views), and fewer artifacts across all subsets (P < 0.05). Diagnostic performance, measured by the Sørensen-Dice coefficient, was significantly better for 64- and 32-view images (0.23 vs. 0.44 and 0.00 vs. 0.09, P < 0.05). Automated PE detection was better at fewer views (64 views: 0.77 vs. 0.80, 16 views: 0.59 vs. 0.80), although the differences were not statistically significant. U-Net-based post-processing of SpSCT data significantly enhances image quality and diagnostic performance, supporting substantial dose reduction in CT pulmonary angiography.
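As a sketch of how the reported structural similarity comparison can be computed, with synthetic images standing in for the CTPA reconstructions:

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

# Synthetic stand-ins: ground truth, noisy sparse-view FBP, denoised U-Net output.
rng = np.random.default_rng(0)
gt = rng.random((512, 512)).astype(np.float32)
fbp = np.clip(gt + rng.normal(0, 0.30, gt.shape), 0, 1).astype(np.float32)
unet = np.clip(gt + rng.normal(0, 0.05, gt.shape), 0, 1).astype(np.float32)

for name, img in [("FBP", fbp), ("U-Net", unet)]:
    print(name, round(ssim(gt, img, data_range=1.0), 3))
```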

Automatic opportunistic osteoporosis screening using chest X-ray images via deep neural networks.

Tang J, Yin X, Lai J, Luo K, Wu D

PubMed paper · Aug 27 2025
Osteoporosis is a bone disease characterized by reduced bone mineral density and quality, which increases the risk of fragility fractures. The current diagnostic gold standard, dual-energy X-ray absorptiometry (DXA), faces limitations such as low equipment penetration, high testing costs, and radiation exposure, restricting its feasibility as a screening tool. To address these limitations, we retrospectively collected data from 1995 patients who visited Daping Hospital in Chongqing from January 2019 to August 2024 and developed an opportunistic screening method using chest X-rays. We designed three deep neural network models using transfer learning: Inception v3, VGG16, and ResNet50. These models were evaluated on their classification performance for osteoporosis from chest X-ray images, with external validation via multi-center data. The ResNet50 model demonstrated superior performance, achieving average accuracies of 87.85% and 90.38% on the internal test dataset across two experiments, with AUC values of 0.945 and 0.957, respectively. These results outperformed traditional convolutional neural networks. In the external validation, the ResNet50 model achieved an AUC of 0.904, accuracy of 89%, sensitivity of 90%, and specificity of 88.57%, demonstrating strong generalization ability. The model also showed robust performance in the presence of concurrent pulmonary pathologies. This study provides an automatic screening method for osteoporosis using chest X-rays, with no additional radiation exposure or cost. The ResNet50 model's high performance supports clinicians in the early identification and treatment of osteoporosis patients.
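A hedged sketch of the ResNet50 transfer-learning setup (the paper's exact fine-tuning recipe, preprocessing, and head design are not specified; freezing the backbone is one common choice):

```python
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

def build_osteoporosis_classifier(n_classes: int = 2) -> nn.Module:
    """ImageNet-pretrained ResNet50 with a fresh classification head."""
    model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
    for p in model.parameters():     # freeze the backbone (one common recipe)
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, n_classes)  # trainable head
    return model

model = build_osteoporosis_classifier()
```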