
Comparative Analysis of Machine Learning Models for Lung Cancer Mutation Detection and Staging Using 3D CT Scans

Yiheng Li, Francisco Carrillo-Perez, Mohammed Alawad, Olivier Gevaert

arXiv preprint · May 28, 2025
Lung cancer is the leading cause of cancer mortality worldwide, and non-invasive methods for detecting key mutations and staging are essential for improving patient outcomes. Here, we compare the performance of two machine learning models - FMCIB+XGBoost, a supervised model with domain-specific pretraining, and Dinov2+ABMIL, a self-supervised learning (SSL) model with attention-based multiple-instance learning - on 3D lung nodule data from the Stanford Radiogenomics and Lung-CT-PT-Dx cohorts. In the task of KRAS and EGFR mutation detection, FMCIB+XGBoost consistently outperformed Dinov2+ABMIL, achieving accuracies of 0.846 and 0.883 for KRAS and EGFR mutations, respectively. In cancer staging, Dinov2+ABMIL demonstrated competitive generalization, achieving an accuracy of 0.797 for T-stage prediction in the Lung-CT-PT-Dx cohort, suggesting SSL's adaptability across diverse datasets. Our results emphasize the clinical utility of supervised models in mutation detection and highlight the potential of SSL to improve staging generalization, while identifying areas for enhancement in mutation sensitivity.
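
To make the supervised arm concrete, here is a minimal sketch of training a gradient-boosted classifier on precomputed foundation-model nodule embeddings; synthetic data stands in for the FMCIB features, and all dimensions and hyperparameters are illustrative assumptions, not the authors' configuration:

```python
# Hedged sketch: gradient-boosted mutation classifier on precomputed nodule
# embeddings. make_classification stands in for FMCIB feature extraction.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Stand-in for (n_nodules, d) FMCIB embeddings and mutation labels (1 = mutant)
X, y = make_classification(n_samples=200, n_features=512, n_informative=32,
                           random_state=0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.2,
                                          random_state=0)
clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                    eval_metric="logloss")
clf.fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```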

Chest Disease Detection In X-Ray Images Using Deep Learning Classification Method

Alanna Hazlett, Naomi Ohashi, Timothy Rodriguez, Sodiq Adewole

arXiv preprint · May 28, 2025
In this work, we investigate the performance of multiple classification models in classifying chest X-ray images into four categories: COVID-19, pneumonia, tuberculosis (TB), and normal cases. We leveraged transfer learning with state-of-the-art pre-trained Convolutional Neural Network (CNN) models and fine-tuned these architectures on a labeled medical X-ray image dataset. The initial results are promising, with high accuracy and strong performance in key classification metrics such as precision, recall, and F1 score. We applied Gradient-weighted Class Activation Mapping (Grad-CAM) for model interpretability, providing visual explanations for classification decisions and improving trust and transparency in clinical applications.
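
A minimal sketch of this kind of pipeline, assuming a torchvision ResNet-50 backbone (the abstract does not name the specific CNNs) with a compact hand-rolled Grad-CAM:

```python
# Sketch: swap the classifier head for 4 classes, fine-tune, then Grad-CAM.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V2")       # assumed backbone
model.fc = nn.Linear(model.fc.in_features, 4)          # COVID-19/pneumonia/TB/normal
# ... fine-tune on the labeled X-ray dataset here ...

feats, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(a=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

def grad_cam(x, cls):
    model.zero_grad()
    model(x)[0, cls].backward()                        # gradient of the class score
    w = grads["a"].mean(dim=(2, 3), keepdim=True)      # channel-wise weights
    cam = torch.relu((w * feats["a"]).sum(dim=1))      # weighted sum of feature maps
    return (cam / cam.max()).detach()                  # normalize to [0, 1]

heatmap = grad_cam(torch.randn(1, 3, 224, 224), cls=0)
```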

Look & Mark: Leveraging Radiologist Eye Fixations and Bounding boxes in Multimodal Large Language Models for Chest X-ray Report Generation

Yunsoo Kim, Jinge Wu, Su-Hwan Kim, Pardeep Vasudev, Jiashu Shen, Honghan Wu

arXiv preprint · May 28, 2025
Recent advancements in multimodal Large Language Models (LLMs) have significantly enhanced the automation of medical image analysis, particularly in generating radiology reports from chest X-rays (CXR). However, these models still suffer from hallucinations and clinically significant errors, limiting their reliability in real-world applications. In this study, we propose Look & Mark (L&M), a novel grounding fixation strategy that integrates radiologist eye fixations (Look) and bounding box annotations (Mark) into the LLM prompting framework. Unlike conventional fine-tuning, L&M leverages in-context learning to achieve substantial performance gains without retraining. Evaluated across multiple domain-specific and general-purpose models, L&M delivers significant gains, including a 1.2% improvement in overall metrics (A.AVG) for CXR-LLaVA compared to baseline prompting and a remarkable 9.2% boost for LLaVA-Med. General-purpose models also benefit from L&M combined with in-context learning, with LLaVA-OV achieving an 87.3% clinical average performance (C.AVG), the highest among all models, even surpassing those explicitly trained for CXR report generation. Expert evaluations further confirm that L&M reduces clinically significant errors (by 0.43 average errors per report), such as false predictions and omissions, enhancing both accuracy and reliability. These findings highlight L&M's potential as a scalable and efficient solution for AI-assisted radiology, paving the way for improved diagnostic workflows in low-resource clinical settings.
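
The abstract does not publish its prompt template, so the following is a purely hypothetical illustration of how gaze fixations (Look) and bounding boxes (Mark) might be serialized into an in-context prompt; all field names and formats are assumptions:

```python
# Hypothetical prompt assembly; not the paper's actual template.
def build_lm_prompt(fixations, boxes, labels):
    look = "; ".join(f"({x:.2f}, {y:.2f}) for {d} ms" for x, y, d in fixations)
    mark = "; ".join(f"{lab} at ({x1}, {y1}, {x2}, {y2})"
                     for lab, (x1, y1, x2, y2) in zip(labels, boxes))
    return ("You are writing a chest X-ray report.\n"
            f"Radiologist fixations (Look): {look}\n"
            f"Annotated regions (Mark): {mark}\n"
            "Ground each finding in the regions above.")

print(build_lm_prompt(
    fixations=[(0.42, 0.31, 850), (0.60, 0.55, 420)],  # normalized x, y, dwell time
    boxes=[(120, 90, 260, 210)],                       # pixel coordinates
    labels=["right upper lobe opacity"],
))
```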

Machine learning-driven imaging data for early prediction of lung toxicity in breast cancer radiotherapy.

Ungvári T, Szabó D, Győrfi A, Dankovics Z, Kiss B, Olajos J, Tőkési K

PubMed · May 27, 2025
One possible adverse effect of breast irradiation is the development of pulmonary fibrosis. The aim of this study was to determine whether planning CT scans can predict which patients are more likely to develop lung lesions after treatment. A retrospective analysis of 242 patient records was performed using different machine learning models. These models showed a remarkable correlation between the occurrence of fibrosis and the Hounsfield unit (HU) values of the lungs in CT data. Three different classification methods (Tree, Kernel-based, k-Nearest Neighbors) achieved predictive values above 60%. The human predictive factor (HPF), a mathematical predictive model, further strengthened the association between lung HU metrics and radiation-induced lung injury (RILI). These approaches help optimize radiation treatment plans to preserve lung health. Machine learning models and the HPF can also provide effective diagnostic and therapeutic support for other diseases.
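
As a rough illustration of the three classifier families named above, here is a sketch with synthetic stand-in data; the study's actual HU features and preprocessing are not described in the abstract:

```python
# Compare Tree, Kernel-based, and k-Nearest Neighbors classifiers by
# cross-validated accuracy on stand-in features.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in for per-patient lung HU summary statistics from planning CT
X, y = make_classification(n_samples=242, n_features=10, random_state=0)

models = {
    "Tree": DecisionTreeClassifier(max_depth=4),
    "Kernel-based": SVC(kernel="rbf", C=1.0),
    "k-Nearest Neighbors": KNeighborsClassifier(n_neighbors=7),
}
for name, clf in models.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()      # 5-fold accuracy
    print(f"{name}: {acc:.2f}")
```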

Privacy-Preserving Chest X-ray Report Generation via Multimodal Federated Learning with ViT and GPT-2

Md. Zahid Hossain, Mustofa Ahmed, Most. Sharmin Sultana Samu, Md. Rakibul Islam

arXiv preprint · May 27, 2025
The automated generation of radiology reports from chest X-ray images holds significant promise for enhancing diagnostic workflows while preserving patient privacy. Traditional centralized approaches often require sensitive data transfer, posing privacy concerns. To address this, the study proposes a Multimodal Federated Learning framework for chest X-ray report generation using the IU-Xray dataset. The system uses a Vision Transformer (ViT) as the encoder and GPT-2 as the report generator, enabling decentralized training without sharing raw data. Three Federated Learning (FL) aggregation strategies were evaluated: FedAvg, Krum aggregation, and a novel Loss-aware Federated Averaging (L-FedAvg). Among these, Krum aggregation demonstrated superior performance across lexical and semantic evaluation metrics such as ROUGE, BLEU, BERTScore, and RaTEScore. The results show that FL can match or surpass centralized models in generating clinically relevant and semantically rich radiology reports. This lightweight and privacy-preserving framework paves the way for collaborative medical AI development without compromising data confidentiality.
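
The aggregation step is where the strategies differ. Below is a minimal sketch of standard FedAvg alongside one plausible reading of "loss-aware" averaging (weights inversely proportional to client loss); the paper's exact L-FedAvg weighting is an assumption here:

```python
# Server-side aggregation only; client_states are float state_dicts from
# local ViT+GPT-2 training rounds.
import torch

def fed_avg(client_states, client_sizes):
    # Standard FedAvg: weight each client by its dataset size.
    total = sum(client_sizes)
    return {k: sum(s[k] * (n / total) for s, n in zip(client_states, client_sizes))
            for k in client_states[0]}

def loss_aware_avg(client_states, client_losses, eps=1e-8):
    # Assumed L-FedAvg variant: lower client loss -> larger weight.
    w = torch.tensor([1.0 / (l + eps) for l in client_losses])
    w = w / w.sum()
    return {k: sum(s[k] * wi for s, wi in zip(client_states, w))
            for k in client_states[0]}
```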

China Protocol for early screening, precise diagnosis, and individualized treatment of lung cancer.

Wang C, Chen B, Liang S, Shao J, Li J, Yang L, Ren P, Wang Z, Luo W, Zhang L, Liu D, Li W

PubMed · May 27, 2025
Early screening, diagnosis, and treatment of lung cancer are pivotal in clinical practice, since tumor stage remains the dominant factor affecting patient survival. Previous initiatives have tried to develop new tools for lung cancer decision-making. In this study, we proposed the China Protocol, a complete lung cancer workflow tailored to the Chinese population, implemented in three steps: early screening by evaluation of risk factors and the three-dimensional thin-layer image reconstruction technique for low-dose computed tomography (Tre-LDCT); accurate diagnosis via artificial intelligence (AI) and novel biomarkers; and individualized treatment through non-invasive molecular visualization strategies. The application of this protocol has improved the early diagnosis and 5-year survival rates of lung cancer in China. The proportion of early-stage (stage I) lung cancer has increased from 46.3% to 65.6%, with a 5-year survival rate of 90.4%. Moreover, for stage IA1 lung cancer specifically, the diagnosis rate has improved from 16% to 27.9%, and the 5-year survival rate of this group has reached 97.5%. Thus, we defined stage IA1 lung cancer, a cohort that benefits significantly from early diagnosis and treatment, as "ultra-early stage lung cancer", aiming to provide an intuitive description for more precise management and survival improvement. In the future, we will promote our findings to multicenter remote areas through medical alliances and mobile health services, with the aim of advancing the diagnosis and treatment of lung cancer.

A Deep Neural Network Framework for the Detection of Bacterial Diseases from Chest X-Ray Scans.

Jain S, Jindal H, Bharti M

PubMed · May 27, 2025
This research aims to develop an advanced deep-learning framework for detecting respiratory diseases, including COVID-19, pneumonia, and tuberculosis (TB), from chest X-ray scans. A Deep Neural Network (DNN)-based system was developed to analyze medical images and extract key features from chest X-rays. The system leverages various DNN learning algorithms to study color-, curve-, and edge-based features of the X-ray scans. The Adam optimizer is employed to minimize error rates and enhance model training. A dataset of 1800 chest X-ray images, consisting of COVID-19, pneumonia, TB, and normal cases, was evaluated across multiple DNN models. The highest accuracy was achieved with the VGG19 model. The proposed system demonstrated an accuracy of 94.72%, with a sensitivity of 92.73%, a specificity of 96.68%, and an F1-score of 94.66%. The error rate was 5.28% when trained on 80% of the dataset and tested on the remaining 20%. The VGG19 model showed significant accuracy improvements of 32.69%, 36.65%, 42.16%, and 8.1% over AlexNet, GoogleNet, InceptionV3, and VGG16, respectively. The prediction time was also remarkably low, ranging between 3 and 5 seconds. The proposed deep learning model efficiently detects respiratory diseases, including COVID-19, pneumonia, and TB, within seconds. The method ensures high reliability and efficiency by optimizing feature extraction while keeping system complexity low, making it a valuable tool for clinicians in rapid disease diagnosis.
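
The reported sensitivity, specificity, and F1 follow directly from a confusion matrix; here is a quick worked check with hypothetical counts for a 360-image (20%) test split, chosen to roughly reproduce the reported rates:

```python
# Standard confusion-matrix metrics; the counts below are illustrative.
def metrics(tp, tn, fp, fn):
    sens = tp / (tp + fn)                          # sensitivity / recall
    spec = tn / (tn + fp)                          # specificity
    prec = tp / (tp + fp)                          # precision
    f1 = 2 * prec * sens / (prec + sens)
    acc = (tp + tn) / (tp + tn + fp + fn)
    return sens, spec, f1, acc

sens, spec, f1, acc = metrics(tp=167, tn=174, fp=6, fn=13)
print(f"sensitivity {sens:.2%}, specificity {spec:.2%}, "
      f"F1 {f1:.2%}, accuracy {acc:.2%}")
# -> roughly 92.8%, 96.7%, 94.6%, 94.7%
```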

Quantitative computed tomography imaging classification of cement dust-exposed patients-based Kolmogorov-Arnold networks.

Chau NK, Kim WJ, Lee CH, Chae KJ, Jin GY, Choi S

PubMed · May 27, 2025
Occupational health assessment is critical for detecting respiratory issues caused by harmful exposures, such as cement dust. Quantitative computed tomography (QCT) imaging provides detailed insights into lung structure and function, enhancing the diagnosis of lung diseases. However, its high dimensionality poses challenges for traditional machine learning methods. In this study, Kolmogorov-Arnold networks (KANs) were used for the binary classification of QCT imaging data to assess respiratory conditions associated with cement dust exposure. The dataset comprised QCT images from 609 individuals, including 311 subjects exposed to cement dust and 298 healthy controls. We derived 141 QCT-based variables and employed KANs with two hidden layers of 15 and 8 neurons. The network parameters, including grid intervals, polynomial order, learning rate, and penalty strengths, were carefully fine-tuned. The performance of the model was assessed through various metrics, including accuracy, precision, recall, F1 score, specificity, and the Matthews correlation coefficient (MCC). Five-fold cross-validation was employed to enhance the robustness of the evaluation. SHAP analysis was applied to interpret the sensitive QCT features. The KAN model demonstrated consistently high performance across all metrics, with an average accuracy of 98.03%, precision of 97.35%, recall of 98.70%, F1 score of 98.01%, and specificity of 97.40%. The MCC value further confirmed the robustness of the model in managing imbalanced datasets. The comparative analysis demonstrated that the KAN model outperformed traditional methods and other deep learning approaches, such as TabPFN, ANN, FT-Transformer, VGG19, MobileNets, ResNet101, XGBoost, SVM, random forest, and decision tree. SHAP analysis highlighted structural and functional lung features, such as airway geometry, wall thickness, and lung volume, as key predictors. KANs significantly improved the classification of QCT imaging data, enhancing early detection of cement dust-induced respiratory conditions. SHAP analysis supported model interpretability, enhancing its potential for clinical translation in occupational health assessments.
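
A KAN replaces fixed activations with a learnable univariate function on each edge. For intuition only, here is a toy sketch under a simplifying assumption (edges modeled as combinations of fixed Gaussian basis functions rather than the tuned spline grids the study describes), sized to the abstract's 141-input, 15- and 8-neuron architecture:

```python
# Toy KAN-style layer: each input->output edge applies its own learned
# 1-D function, implemented as a weighted sum of fixed Gaussian bases.
import torch
import torch.nn as nn

class KANLayer(nn.Module):
    def __init__(self, d_in, d_out, n_basis=8):
        super().__init__()
        self.register_buffer("centers", torch.linspace(-2, 2, n_basis))
        self.coef = nn.Parameter(0.1 * torch.randn(d_in, d_out, n_basis))

    def forward(self, x):                              # x: (batch, d_in)
        # RBF activations per input coordinate: (batch, d_in, n_basis)
        phi = torch.exp(-(x.unsqueeze(-1) - self.centers) ** 2)
        # Per-edge functions, summed over inputs: (batch, d_out)
        return torch.einsum("bin,ion->bo", phi, self.coef)

# 141 QCT variables -> hidden layers of 15 and 8 -> one logit (exposed vs. control)
model = nn.Sequential(KANLayer(141, 15), KANLayer(15, 8), KANLayer(8, 1))
logits = model(torch.randn(4, 141))
```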

MedBridge: Bridging Foundation Vision-Language Models to Medical Image Diagnosis

Yitong Li, Morteza Ghahremani, Christian Wachinger

arXiv preprint · May 27, 2025
Recent vision-language foundation models deliver state-of-the-art results on natural image classification but falter on medical images due to pronounced domain shifts. At the same time, training a medical foundation model requires substantial resources, including extensive annotated data and high computational capacity. To bridge this gap with minimal overhead, we introduce MedBridge, a lightweight multimodal adaptation framework that re-purposes pretrained VLMs for accurate medical image diagnosis. MedBridge comprises three key components. First, a Focal Sampling module extracts high-resolution local regions to capture subtle pathological features and compensate for the limited input resolution of general-purpose VLMs. Second, a Query Encoder (QEncoder) injects a small set of learnable queries that attend to the frozen feature maps of the VLM, aligning them with medical semantics without retraining the entire backbone. Third, a Mixture of Experts mechanism, driven by the learnable queries, harnesses the complementary strengths of diverse VLMs to maximize diagnostic performance. We evaluate MedBridge on five medical imaging benchmarks across three key adaptation tasks, demonstrating superior performance in both cross-domain and in-domain adaptation settings, even under varying levels of training data availability. Notably, MedBridge achieved 6-15% improvements in AUC over state-of-the-art VLM adaptation methods in multi-label thoracic disease diagnosis, underscoring its effectiveness in leveraging foundation models for accurate and data-efficient medical diagnosis. Our code is available at https://github.com/ai-med/MedBridge.
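
The QEncoder component, as described, amounts to cross-attention from a small set of learnable queries onto frozen VLM feature maps. A minimal sketch under assumed dimensions (embedding size, query count, and label count are illustrative):

```python
# Learnable queries cross-attend to frozen feature maps; only the queries,
# attention, and head are trained.
import torch
import torch.nn as nn

class QEncoder(nn.Module):
    def __init__(self, n_queries=16, dim=768, n_classes=14):
        super().__init__()
        self.queries = nn.Parameter(0.02 * torch.randn(n_queries, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, frozen_feats):                   # (batch, n_patches, dim)
        q = self.queries.unsqueeze(0).expand(frozen_feats.size(0), -1, -1)
        out, _ = self.attn(q, frozen_feats, frozen_feats)  # queries read VLM features
        return self.head(out.mean(dim=1))              # pool queries -> multi-label logits

logits = QEncoder()(torch.randn(2, 196, 768))          # e.g. 14x14 patch grid of a frozen ViT
```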

Development and validation of a CT-based radiomics machine learning model for differentiating immune-related interstitial pneumonia.

Luo T, Guo J, Xi J, Luo X, Fu Z, Chen W, Huang D, Chen K, Xiao Q, Wei S, Wang Y, Du H, Liu L, Cai S, Dong H

PubMed · May 27, 2025
Immune checkpoint inhibitor-related interstitial pneumonia (CIP) poses a diagnostic challenge due to its radiographic similarity to other pneumonias. We developed a non-invasive model using CT imaging to differentiate CIP from other pneumonias (OTP). We analyzed patients who developed CIP or OTP after immunotherapy at five medical centers between 2020 and 2023 and randomly divided them into training and validation cohorts at a 7:3 ratio. A radiomics model was developed using random forest analysis, and a second model was then built by combining the independent risk factors for CIP. The models were evaluated using ROC, calibration, and decision curve analyses. A total of 238 patients with pneumonia following immunotherapy were included, 116 with CIP and 122 with OTP. After random allocation, the training cohort included 166 patients and the validation cohort 72. A radiomics model composed of 11 radiomic features was established using the random forest method, with an AUC of 0.833 in the training cohort and 0.821 in the validation cohort. Univariate and multivariate logistic regression analyses revealed significant differences in smoking history, radiotherapy history, and radiomics score between CIP and OTP (p < 0.05). A new model was constructed from these three factors and presented as a nomogram. This model showed good calibration and net benefit in both the training and validation cohorts, with AUCs of 0.872 and 0.860, respectively. Using the random forest machine learning method, we successfully constructed a CT-based radiomics model for the differential diagnosis of CIP that can accurately, non-invasively, and rapidly provide clinicians with etiological support for pneumonia diagnosis.
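
As a hedged sketch of the two-stage design described above, the snippet below trains a random-forest radiomics score and then combines it with the two clinical factors in a logistic model; all data and names are synthetic stand-ins, not the study's cohort:

```python
# Stage 1: random-forest radiomics score. Stage 2: logistic model over
# smoking history, radiotherapy history, and the radiomics score.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X_rad, y = make_classification(n_samples=238, n_features=11, random_state=0)
rng = np.random.default_rng(0)
smoking, radio_hx = rng.integers(0, 2, 238), rng.integers(0, 2, 238)

X_tr, X_va, y_tr, y_va, s_tr, s_va, r_tr, r_va = train_test_split(
    X_rad, y, smoking, radio_hx, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
rad_tr, rad_va = rf.predict_proba(X_tr)[:, 1], rf.predict_proba(X_va)[:, 1]

combo = LogisticRegression().fit(np.column_stack([s_tr, r_tr, rad_tr]), y_tr)
probs = combo.predict_proba(np.column_stack([s_va, r_va, rad_va]))[:, 1]
print(f"validation AUC: {roc_auc_score(y_va, probs):.3f}")
```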