Page 138 of 139 · 1390 results

A hybrid AI method for lung cancer classification using explainable AI techniques.

Shivwanshi RR, Nirala NS

pubmed logopapersMay 8 2025
The use of Artificial Intelligence (AI) methods for the analysis of computed tomography (CT) images has greatly contributed to the development of effective computer-assisted diagnosis (CAD) systems for lung cancer (LC). However, complex structures, multiple radiographic interrelations, and the dynamic locations of abnormalities within lung CT images make extracting relevant information to process and implement LC CAD systems difficult. This paper addresses these problems by presenting a hybrid method of LC malignancy classification, which may help researchers and experts properly engineer the model's performance by observing how the model makes decisions. The proposed methodology is named IncCat-LCC: Explainer (InceptionNet CatBoost LC Classification: Explainer), which consists of feature extraction (FE) using the handcrafted radiomic feature (HcRdF) extraction technique, InceptionNet CNN feature (INCF) extraction, and Vision Transformer feature (ViTF) extraction, followed by XGBoost (XGB)-based feature selection and GPU-based CatBoost (CB) classification. The proposed framework achieves the highest performance scores for lung nodule multiclass malignancy classification, with accuracy, precision, recall, F1-score, specificity, and area under the ROC curve of 96.74%, 93.68%, 96.74%, 95.19%, 98.47%, and 99.76%, respectively, for the highly normal class. Observing the explainable artificial intelligence (XAI) explanations will help readers understand the model's performance and the statistical outcomes of the evaluation parameters. The work presented in this article may improve existing LC CAD systems and help assess, using XAI, the parameters contributing to enhanced performance and reliability.
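The pipeline above fuses three feature families and then prunes them before classification. As a minimal sketch of that idea (with toy vectors and importance scores standing in for XGBoost's learned feature importances, not the authors' implementation):

```python
# Hypothetical sketch of the IncCat-LCC feature pipeline: concatenate
# handcrafted radiomic, CNN, and ViT feature vectors, then keep the top-k
# features by an importance score (a stand-in for XGBoost-based selection).
# All values are illustrative, not from the paper.

def fuse_features(hcrdf, incf, vitf):
    """Concatenate the three feature vectors into one fused vector."""
    return hcrdf + incf + vitf

def select_top_k(features, importances, k):
    """Keep the k features with the highest importance scores."""
    ranked = sorted(range(len(features)), key=lambda i: importances[i], reverse=True)
    keep = sorted(ranked[:k])            # preserve original feature order
    return [features[i] for i in keep]

fused = fuse_features([0.1, 0.2], [0.3, 0.4], [0.5])
selected = select_top_k(fused, importances=[0.9, 0.1, 0.5, 0.7, 0.2], k=3)
```

The selected subset would then feed a gradient-boosting classifier such as CatBoost.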

A deep learning model combining circulating tumor cells and radiological features in the multi-classification of mediastinal lesions in comparison with thoracic surgeons: a large-scale retrospective study.

Wang F, Bao M, Tao B, Yang F, Wang G, Zhu L

pubmed logopapersMay 7 2025
CT images and circulating tumor cells (CTCs) are indispensable for diagnosing mediastinal lesions, providing radiological and intra-tumoral information. This study aimed to develop and validate a deep multimodal fusion network (DMFN) combining CTCs and CT images for the multi-classification of mediastinal lesions. In this retrospective diagnostic study, we enrolled 1074 patients with 1500 enhanced CT images and 1074 CTC results between Jan 1, 2020, and Dec 31, 2023. Patients were divided into the training cohort (n = 434), validation cohort (n = 288), and test cohort (n = 352). The DMFN and monomodal convolutional neural network (CNN) models were developed and validated using the CT images and CTC results. The diagnostic performances of the DMFN and monomodal CNN models were benchmarked against paraffin-embedded pathology from surgical tissues. Predictive abilities were compared with those of thoracic resident physicians, attending physicians, and chief physicians using the area under the receiver operating characteristic (ROC) curve, and diagnostic results were visualized in heatmaps. For binary classification, the predictive performance of the DMFN (AUC = 0.941, 95% CI 0.901-0.982) was better than that of the monomodal CNN model (AUC = 0.710, 95% CI 0.664-0.756). In addition, the DMFN model achieved better predictive performance than the thoracic chief physicians, attending physicians, and resident physicians (P = 0.054, 0.020, and 0.016, respectively). For multiclassification, the DMFN achieved encouraging predictive ability (AUC = 0.884, 95% CI 0.837-0.931), significantly outperforming the monomodal CNN (AUC = 0.722, 95% CI 0.705-0.739), and also outperforming the chief physicians (AUC = 0.787, 95% CI 0.714-0.862), attending physicians (AUC = 0.632, 95% CI 0.612-0.654), and resident physicians (AUC = 0.541, 95% CI 0.508-0.574). This study showed the feasibility and effectiveness of a CNN model combining CT images and CTC levels in diagnosing mediastinal lesions. It could serve as a useful method to assist thoracic surgeons in improving diagnostic accuracy and has the potential to inform management decisions.
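The core idea of the DMFN is fusing an imaging branch with a CTC measurement. A minimal late-fusion sketch, assuming a logistic combination of an image-branch probability and a normalized CTC level (the weights and normalization constant are illustrative, not values from the paper):

```python
import math

# Toy late-fusion layer: combine an image-branch probability with a
# normalized CTC count through a logistic function. Weights are assumptions.

def late_fusion_score(image_prob, ctc_count, ctc_max=100.0,
                      w_img=2.0, w_ctc=1.5, bias=-1.5):
    """Return a fused malignancy probability in (0, 1)."""
    ctc_norm = min(ctc_count / ctc_max, 1.0)   # scale CTC count to [0, 1]
    z = w_img * image_prob + w_ctc * ctc_norm + bias
    return 1.0 / (1.0 + math.exp(-z))

p = late_fusion_score(image_prob=0.8, ctc_count=40)
```

In the actual DMFN the fusion is learned end-to-end rather than hand-weighted, but the principle of joining the two modalities before the final decision is the same.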

Potential of artificial intelligence for radiation dose reduction in computed tomography -A scoping review.

Bani-Ahmad M, England A, McLaughlin L, Hadi YH, McEntee M

pubmed logopapersMay 7 2025
Artificial intelligence (AI) is now transforming medical imaging, with extensive ramifications for nearly every aspect of diagnostic imaging, including computed tomography (CT). This work aims to review, evaluate, and summarise the role of AI in radiation dose optimisation across three fundamental domains in CT: patient positioning, scan range determination, and image reconstruction. A comprehensive scoping review of the literature was performed. Electronic databases including Scopus, Ovid, EBSCOhost and PubMed were searched between January 2018 and December 2024. Candidate articles were identified by title, screened by abstract, and those deemed relevant underwent full-text review. Data extracted from the selected studies included the application of AI, radiation dose, anatomical region, and any relevant evaluation metrics for the CT parameter to which AI was applied. Ninety articles met the selection criteria. Included studies evaluated the performance of AI for dose optimisation through patient positioning, scan range determination, and reconstruction across various CT examinations, including the abdomen, chest, head, neck, and pelvis, as well as CT angiography. A concise overview of the present state of AI in these three domains is provided, emphasising benefits, limitations, and impact on the transformation of dose reduction in CT scanning. AI methods can help minimise positioning offsets and over-scanning caused by manual errors, and can overcome the limitations associated with low-dose CT settings through deep learning image reconstruction algorithms. Further clinical integration of AI will continue to allow for improvements in optimising CT scan protocols and radiation dose. This review underscores the significance of AI in optimising radiation dose in CT imaging, focusing on three key areas: patient positioning, scan range determination, and image reconstruction.

Automated Detection of Black Hole Sign for Intracerebral Hemorrhage Patients Using Self-Supervised Learning.

Wang H, Schwirtlich T, Houskamp EJ, Hutch MR, Murphy JX, do Nascimento JS, Zini A, Brancaleoni L, Giacomozzi S, Luo Y, Naidech AM

pubmed logopapersMay 7 2025
Intracerebral Hemorrhage (ICH) is a devastating form of stroke. Hematoma expansion (HE), growth of the hematoma on interval scans, predicts death and disability. Accurate prediction of HE is crucial for targeted interventions to improve patient outcomes. The black hole sign (BHS) on non-contrast computed tomography (CT) scans is a predictive marker for HE. An automated method to recognize the BHS and predict HE could speed precise patient selection for treatment. In this paper, we present a novel framework leveraging self-supervised learning (SSL) techniques for BHS identification on head CT images. A ResNet-50 encoder model was pre-trained on over 1.7 million unlabeled head CT images. Layers for binary classification were added on top of the pre-trained model. The resulting model was fine-tuned using the training data and evaluated on the held-out test set to collect AUC and F1 scores. The evaluations were performed at the scan and slice levels. We ran different panels, one using two multi-center datasets for external validation and one including parts of them in the pre-training. Our model demonstrated strong performance in identifying BHS when compared with the baseline model. Specifically, the model achieved scan-level AUC scores between 0.75 and 0.89 and F1 scores between 0.60 and 0.70. Furthermore, it exhibited robustness and generalizability on one external dataset, achieving a scan-level AUC score of up to 0.85 and an F1 score of up to 0.60, while it performed less well on another dataset with more heterogeneous samples. These negative effects could be mitigated by including parts of the external datasets in the fine-tuning process. This study introduced a novel framework integrating SSL into medical image classification, particularly for BHS identification from head CT scans. The resulting pre-trained head CT encoder model shows potential to minimize manual annotation, which would significantly reduce labor, time, and costs. After fine-tuning, the framework demonstrated promising performance on a specific downstream task, identifying the BHS to predict HE, upon comprehensive evaluation on diverse datasets. This approach holds promise for enhancing medical image analysis, particularly in scenarios with limited data availability. ICH = Intracerebral Hemorrhage; HE = Hematoma Expansion; BHS = Black Hole Sign; CT = Computed Tomography; SSL = Self-Supervised Learning; AUC = Area Under the Receiver Operating Characteristic Curve; CNN = Convolutional Neural Network; SimCLR = Simple framework for Contrastive Learning of visual Representations; HU = Hounsfield Unit; CLAIM = Checklist for Artificial Intelligence in Medical Imaging; VNA = Vendor Neutral Archive; DICOM = Digital Imaging and Communications in Medicine; NIfTI = Neuroimaging Informatics Technology Initiative; INR = International Normalized Ratio; GPU = Graphics Processing Unit; NIH = National Institutes of Health.
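The abbreviation list cites SimCLR, whose pre-training objective is the NT-Xent contrastive loss: each embedding is pulled toward its augmented-view partner and pushed from all other samples in the batch. A minimal sketch of that loss over toy 2-D embeddings (not the authors' implementation):

```python
import math

# NT-Xent (normalized temperature-scaled cross-entropy) loss, the SimCLR
# objective. Embeddings, pairs, and temperature here are toy assumptions.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nt_xent(embeddings, pairs, temperature=0.5):
    """Mean NT-Xent loss over the given (anchor, positive) index pairs."""
    losses = []
    for i, j in pairs:
        # Similarities from anchor i to every other embedding (negatives + positive).
        logits = [cosine(embeddings[i], embeddings[k]) / temperature
                  for k in range(len(embeddings)) if k != i]
        pos = cosine(embeddings[i], embeddings[j]) / temperature
        denom = sum(math.exp(l) for l in logits)
        losses.append(-math.log(math.exp(pos) / denom))
    return sum(losses) / len(losses)

# Two views of two images: (0, 1) and (2, 3) are positive pairs.
emb = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
loss = nt_xent(emb, pairs=[(0, 1), (1, 0), (2, 3), (3, 2)])
```

Because the positive pairs above are nearly parallel and the negatives nearly orthogonal, the loss is small; pre-training drives a real encoder toward exactly this geometry.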

STG: Spatiotemporal Graph Neural Network with Fusion and Spatiotemporal Decoupling Learning for Prognostic Prediction of Colorectal Cancer Liver Metastasis

Yiran Zhu, Wei Yang, Yan su, Zesheng Li, Chengchang Pan, Honggang Qi

arxiv logopreprintMay 6 2025
We propose a multimodal spatiotemporal graph neural network (STG) framework to predict colorectal cancer liver metastasis (CRLM) progression. Current clinical models do not effectively integrate the tumor's spatial heterogeneity, dynamic evolution, and complex multimodal data relationships, limiting their predictive accuracy. Our STG framework combines preoperative CT imaging and clinical data into a heterogeneous graph structure, enabling joint modeling of tumor distribution and temporal evolution through spatial topology and cross-modal edges. The framework uses GraphSAGE to aggregate spatiotemporal neighborhood information and leverages supervised and contrastive learning strategies to enhance the model's ability to capture temporal features and improve robustness. A lightweight version of the model reduces parameter count by 78.55%, maintaining near-state-of-the-art performance. The model jointly optimizes recurrence risk regression and survival analysis tasks, with contrastive loss improving feature representational discriminability and cross-modal consistency. Experimental results on the MSKCC CRLM dataset show a time-adjacent accuracy of 85% and a mean absolute error of 1.1005, significantly outperforming existing methods. The innovative heterogeneous graph construction and spatiotemporal decoupling mechanism effectively uncover the associations between dynamic tumor microenvironment changes and prognosis, providing reliable quantitative support for personalized treatment decisions.
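The STG framework uses GraphSAGE to aggregate spatiotemporal neighborhood information. A minimal sketch of one GraphSAGE mean-aggregation step on a toy graph (the features, neighbors, and weights are illustrative assumptions, not from the paper):

```python
# One GraphSAGE mean-aggregation step: each node's new feature is a weighted
# combination of its own feature and the mean of its neighbors' features.
# Toy graph and scalar weights; a real model uses learned weight matrices.

def sage_mean_step(features, neighbors, w_self=0.5, w_neigh=0.5):
    """h_v' = w_self * h_v + w_neigh * mean(h_u for u in N(v)), per dimension."""
    updated = {}
    for v, h_v in features.items():
        nbrs = neighbors.get(v, [])
        if nbrs:
            mean = [sum(features[u][d] for u in nbrs) / len(nbrs)
                    for d in range(len(h_v))]
        else:
            mean = [0.0] * len(h_v)     # isolated node keeps only its own signal
        updated[v] = [w_self * h_v[d] + w_neigh * mean[d] for d in range(len(h_v))]
    return updated

feats = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [1.0, 1.0]}
nbrs = {"a": ["b", "c"], "b": ["a"], "c": ["a", "b"]}
out = sage_mean_step(feats, nbrs)
```

Stacking several such steps lets information propagate across the heterogeneous graph's spatial and cross-modal edges.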

Nonperiodic dynamic CT reconstruction using backward-warping INR with regularization of diffeomorphism (BIRD)

Muge Du, Zhuozhao Zheng, Wenying Wang, Guotao Quan, Wuliang Shi, Le Shen, Li Zhang, Liang Li, Yinong Liu, Yuxiang Xing

arxiv logopreprintMay 6 2025
Dynamic computed tomography (CT) reconstruction faces significant challenges in addressing motion artifacts, particularly for nonperiodic rapid movements such as cardiac imaging with fast heart rates. Traditional methods struggle with the extreme limited-angle problems inherent in nonperiodic cases. Deep learning methods have improved performance but face generalization challenges. Recent implicit neural representation (INR) techniques show promise through self-supervised deep learning, but have critical limitations: computational inefficiency due to forward-warping modeling, difficulty balancing DVF complexity with anatomical plausibility, and challenges in preserving fine details without additional patient-specific pre-scans. This paper presents a novel INR-based framework, BIRD, for nonperiodic dynamic CT reconstruction. It addresses these challenges through four key contributions: (1) backward-warping deformation that enables direct computation of each dynamic voxel with significantly reduced computational cost, (2) diffeomorphism-based DVF regularization that ensures anatomically plausible deformations while maintaining representational capacity, (3) motion-compensated analytical reconstruction that enhances fine details without requiring additional pre-scans, and (4) dimensional-reduction design for efficient 4D coordinate encoding. Through various simulations and practical studies, including digital and physical phantoms and retrospective patient data, we demonstrate the effectiveness of our approach for nonperiodic dynamic CT reconstruction with enhanced details and reduced motion artifacts. The proposed framework enables more accurate dynamic CT reconstruction with potential clinical applications, such as one-beat cardiac reconstruction, cinematic image sequences for functional imaging, and motion artifact reduction in conventional CT scans.
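Contribution (1) above hinges on backward warping: each output voxel is computed directly by sampling the reference image at its displaced position, rather than scattering reference voxels forward. A 2-D sketch with bilinear interpolation and a toy constant displacement field (the paper's DVF is a learned diffeomorphic field, not this):

```python
# Backward warping: for each output pixel (y, x), sample the reference image
# at (y, x) + dvf(y, x) with bilinear interpolation. 2D toy version.

def bilinear(img, y, x):
    """Sample img at fractional (y, x) with bilinear interpolation."""
    y0, x0 = int(y), int(x)
    y1, x1 = min(y0 + 1, len(img) - 1), min(x0 + 1, len(img[0]) - 1)
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * img[y0][x0] + (1 - dy) * dx * img[y0][x1]
            + dy * (1 - dx) * img[y1][x0] + dy * dx * img[y1][x1])

def backward_warp(ref, dvf):
    """Compute each output pixel by sampling ref at its displaced position."""
    out = []
    for y in range(len(ref)):
        row = []
        for x in range(len(ref[0])):
            vy, vx = dvf(y, x)
            sy = min(max(y + vy, 0.0), len(ref) - 1)    # clamp to image bounds
            sx = min(max(x + vx, 0.0), len(ref[0]) - 1)
            row.append(bilinear(ref, sy, sx))
        out.append(row)
    return out

ref = [[0.0, 1.0], [2.0, 3.0]]
warped = backward_warp(ref, dvf=lambda y, x: (0.5, 0.0))  # half-pixel shift down
```

Because every output voxel is an independent lookup, this formulation avoids the scatter-and-normalize cost of forward warping, which is the efficiency gain the paper exploits.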

Artificial intelligence applications for the diagnosis of pulmonary nodules.

Ost DE

pubmed logopapersMay 6 2025
This review evaluates the role of artificial intelligence (AI) in diagnosing solitary pulmonary nodules (SPNs), focusing on clinical applications and limitations in pulmonary medicine. It explores AI's utility in imaging and blood/tissue-based diagnostics, emphasizing practical challenges over technical details of deep learning methods. AI enhances computed tomography (CT)-based computer-aided diagnosis (CAD) through steps like nodule detection, false positive reduction, segmentation, and classification, leveraging convolutional neural networks and machine learning. Segmentation achieves Dice similarity coefficients of 0.70-0.92, while malignancy classification yields areas under the curve of 0.86-0.97. AI-driven blood tests, incorporating RNA sequencing and clinical data, report AUCs up to 0.907 for distinguishing benign from malignant nodules. However, most models lack prospective, multi-institutional validation, risking overfitting and limited generalizability. The "black box" nature of AI, coupled with overlapping inputs (e.g., nodule size, smoking history) with physician assessments, complicates integration into clinical workflows and precludes standard Bayesian analysis. AI shows promise for SPN diagnosis but requires rigorous validation in diverse populations and better clinician training for effective use. Rather than replacing judgment, AI should serve as a second opinion, with its reported performance metrics understood as study-specific, not directly applicable at the bedside due to double-counting issues.
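As a reminder of what the segmentation figures quoted above measure, here is a minimal Dice similarity coefficient over flattened binary masks (toy data, purely illustrative):

```python
# Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks.
# A value of 1.0 means perfect overlap; 0.0 means none.

def dice(pred, truth):
    """Compute Dice over two equal-length binary masks (lists of 0/1)."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0   # both empty: define as 1.0

d = dice([1, 1, 0, 0], [1, 0, 1, 0])
```

The reported 0.70-0.92 range thus spans moderate to very strong overlap between predicted and expert nodule contours.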

Stacking classifiers based on integrated machine learning model: fusion of CT radiomics and clinical biomarkers to predict lymph node metastasis in locally advanced gastric cancer patients after neoadjuvant chemotherapy.

Ling T, Zuo Z, Huang M, Ma J, Wu L

pubmed logopapersMay 6 2025
The early prediction of lymph node positivity (LN+) after neoadjuvant chemotherapy (NAC) is crucial for optimizing individualized treatment strategies. This study aimed to integrate radiomic features and clinical biomarkers through machine learning (ML) approaches to enhance prediction accuracy by focusing on patients with locally advanced gastric cancer (LAGC). We retrospectively enrolled 277 patients with LAGC and randomly divided them into training (n = 193) and validation (n = 84) sets at a 7:3 ratio. In total, 1,130 radiomics features were extracted from pre-treatment portal venous phase computed tomography scans. These features were linearly combined to develop a radiomics score (rad score) through feature engineering. Then, using the rad score and clinical biomarkers as input features, we applied simple statistical strategies (relying on a single ML model) and integrated statistical strategies (including classification model integration techniques, such as hard voting, soft voting, and stacking) to predict LN+ post-NAC. The diagnostic performance of the model was assessed using receiver operating characteristic curves with corresponding areas under the curve (AUC). Of all ML models, the stacking classifier, an integrated statistical strategy, exhibited the best performance, achieving an AUC of 0.859 for predicting LN+ in patients with LAGC. This predictive model was transformed into a publicly available online risk calculator. We developed a stacking classifier that integrates radiomics and clinical biomarkers to predict LN+ in patients with LAGC undergoing surgical resection, providing personalized treatment insights.
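The study compares hard voting, soft voting, and stacking. A minimal sketch of the three strategies over toy base-model probabilities (the probabilities and meta-model weights are illustrative assumptions, not fitted values from the study):

```python
# Three ensemble strategies over base classifiers' predicted P(LN+).

def hard_vote(probs, threshold=0.5):
    """Majority vote over each base model's thresholded prediction."""
    votes = sum(1 for p in probs if p >= threshold)
    return 1 if votes > len(probs) / 2 else 0

def soft_vote(probs, threshold=0.5):
    """Average the predicted probabilities, then threshold."""
    return 1 if sum(probs) / len(probs) >= threshold else 0

def stack(probs, meta_weights, meta_bias=0.0, threshold=0.5):
    """Stacking: feed base-model probabilities to a linear meta-model."""
    score = sum(w * p for w, p in zip(meta_weights, probs)) + meta_bias
    return 1 if score >= threshold else 0

base_probs = [0.8, 0.3, 0.6]                 # three base classifiers' P(LN+)
h = hard_vote(base_probs)
s = soft_vote(base_probs)
k = stack(base_probs, meta_weights=[0.5, 0.2, 0.3])
```

Stacking differs from voting in that the meta-model's weights are learned from held-out predictions, which is why it can outperform both voting schemes, as it did here (AUC 0.859).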

Comprehensive Cerebral Aneurysm Rupture Prediction: From Clustering to Deep Learning

Zakeri, M., Atef, A., Aziznia, M., Jafari, A.

medrxiv logopreprintMay 6 2025
Cerebral aneurysm is a silent yet prevalent condition that affects a substantial portion of the global population. Aneurysms can develop due to various factors and present differently, necessitating diverse treatment approaches. Choosing the appropriate treatment upon diagnosis is paramount, as the severity of the disease dictates the course of action. The vulnerability of an aneurysm, particularly in the circle of Willis, is a critical concern; rupture can lead to irreversible consequences, including death. The primary objective of this study is to predict the rupture status of cerebral aneurysms using a comprehensive dataset that includes clinical, morphological, and hemodynamic data extracted from blood flow simulations of patients with actual vessels. Our goal is to provide valuable insights that can aid in treatment decision-making and potentially save the lives of future patients. Diagnosing and predicting the rupture status of aneurysms based solely on brain scans poses a significant challenge, often with limited accuracy, even for experienced physicians. However, harnessing statistical and machine learning (ML) techniques can enhance rupture prediction and treatment strategy selection. We employed a diverse set of supervised and unsupervised algorithms, training them on a database comprising over 700 cerebral aneurysms with 55 parameters: 3 clinical, 35 morphological, and 17 hemodynamic features. Two of our models, stochastic gradient descent (SGD) and multi-layer perceptron (MLP), achieved a maximum area under the curve (AUC) of 0.86, a precision of 0.86, and a recall of 0.90 for the prediction of cerebral aneurysm rupture. Given the sensitivity of the data and the critical nature of the condition, recall is a more vital metric than accuracy and precision; our study achieved an acceptable recall score. Key features for rupture prediction included the ellipticity index, low shear area ratio, and irregularity. Additionally, a one-dimensional CNN model predicted rupture status along a continuous spectrum, achieving 0.78 accuracy on the testing dataset and providing nuanced insights into rupture propensity.
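The authors' point that recall outweighs precision for rupture screening is easy to see with a minimal computation over toy labels (illustrative data only):

```python
# Precision and recall over toy predictions: a screening model that over-calls
# rupture can reach perfect recall (no missed ruptures) at the cost of precision.

def precision_recall(pred, truth):
    """Compute (precision, recall) for binary label lists of equal length."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

prec, rec = precision_recall(pred=[1, 1, 1, 0], truth=[1, 1, 0, 0])
```

Here one false alarm lowers precision to 2/3 while recall stays at 1.0; for a condition where a missed rupture can be fatal, that trade-off is usually acceptable.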

A Vision-Language Model for Focal Liver Lesion Classification

Song Jian, Hu Yuchang, Wang Hui, Chen Yen-Wei

arxiv logopreprintMay 6 2025
Accurate classification of focal liver lesions is crucial for diagnosis and treatment in hepatology. However, traditional supervised deep learning models depend on large-scale annotated datasets, which are often limited in medical imaging. Recently, Vision-Language Models (VLMs) such as the Contrastive Language-Image Pre-training model (CLIP) have been applied to image classification. Compared to the conventional convolutional neural network (CNN), which classifies images based on visual information alone, a VLM leverages multimodal learning with text and images, allowing it to learn effectively even with a limited amount of labeled data. Inspired by CLIP, we propose Liver-VLM, a model specifically designed for focal liver lesion (FLL) classification. First, Liver-VLM incorporates class information into the text encoder without introducing additional inference overhead. Second, by calculating the pairwise cosine similarities between image and text embeddings and optimizing the model with a cross-entropy loss, Liver-VLM effectively aligns image features with class-level text features. Experimental results on the MPCT-FLLs dataset demonstrate that the Liver-VLM model outperforms both the standard CLIP and MedCLIP models in terms of accuracy and Area Under the Curve (AUC). Further analysis shows that using a lightweight ResNet18 backbone enhances classification performance, particularly under data-constrained conditions.
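The CLIP-style classification step the abstract describes, cosine similarities between an image embedding and each class's text embedding, softmaxed into class probabilities, can be sketched as follows (toy 2-D embeddings and placeholder class prompts, not model outputs):

```python
import math

# CLIP-style zero-shot classification: score an image embedding against one
# text embedding per class via cosine similarity, then softmax.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def classify(image_emb, text_embs, temperature=0.1):
    """Softmax over temperature-scaled cosine similarities to each class."""
    logits = [cosine(image_emb, t) / temperature for t in text_embs]
    m = max(logits)                        # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

texts = [[1.0, 0.0], [0.0, 1.0]]           # embeddings of two class prompts
probs = classify([0.9, 0.1], texts)
```

Training Liver-VLM amounts to minimizing cross-entropy between these probabilities and the true class, which pulls image embeddings toward their class-level text embeddings.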
