Page 46 of 141 (1,410 results)

Quantitative radiomic analysis of computed tomography scans using machine and deep learning techniques accurately predicts histological subtypes of non-small cell lung cancer: A retrospective analysis.

Panchawagh S, Halder A, Haldule S, Sanker V, Lalwani D, Sequeria R, Naik H, Desai A

PubMed · Aug 9 2025
Non-small cell lung cancer (NSCLC) histological subtypes impact treatment decisions. While pre-surgical histopathological examination is ideal, it is not always feasible. CT radiomic analysis shows promise in predicting NSCLC histological subtypes. This study aimed to predict NSCLC histological subtypes with machine learning and deep learning models built on radiomic features. A total of 422 lung CT scans from The Cancer Imaging Archive (TCIA) were analyzed. Primary neoplasms were segmented by expert radiologists. Using PyRadiomics, 2,446 radiomic features were extracted; after feature selection, 179 features remained. Machine learning models including logistic regression (LR), support vector machine (SVM), random forest (RF), XGBoost, LightGBM, and CatBoost were employed, alongside a deep neural network (DNN). RF demonstrated the highest accuracy at 78% (95% CI: 70%-84%) and an AUC-ROC of 94% (95% CI: 90%-96%). LightGBM, XGBoost, and CatBoost had AUC-ROC values of 95%, 93%, and 93%, respectively. The DNN's AUC was 94.4% (95% CI: 94.1%-94.6%). Logistic regression performed worst. For histological subtype prediction, random forest, boosting models, and the DNN were superior. Quantitative radiomic analysis with machine learning can accurately determine NSCLC histological subtypes. Random forest, ensemble models, and DNNs show significant promise for pre-operative NSCLC classification, which can streamline therapy decisions.
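As a rough illustration of the pipeline this abstract describes, the sketch below extracts radiomic features with PyRadiomics and cross-validates a random forest classifier; the file names, labels, and the univariate feature-selection step are assumptions, not the authors' exact protocol.

```python
# Minimal sketch of a radiomics subtype-prediction pipeline, assuming paired
# CT/segmentation NIfTI files and subtype labels (all file names hypothetical).
import pandas as pd
from radiomics import featureextractor
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score

scan_pairs = [("ct_001.nii.gz", "seg_001.nii.gz"),
              ("ct_002.nii.gz", "seg_002.nii.gz")]  # ... one pair per patient
labels = ["adenocarcinoma", "squamous"]             # ... histological subtypes

extractor = featureextractor.RadiomicsFeatureExtractor()  # default feature classes
rows = []
for image_path, mask_path in scan_pairs:
    result = extractor.execute(image_path, mask_path)
    # Keep numeric radiomic features; drop the diagnostics metadata entries.
    rows.append({k: v for k, v in result.items() if not k.startswith("diagnostics_")})

X = pd.DataFrame(rows).astype(float)
X_sel = SelectKBest(f_classif, k=179).fit_transform(X, labels)  # 179 retained features

rf = RandomForestClassifier(n_estimators=500, random_state=0)
print(cross_val_score(rf, X_sel, labels, cv=5, scoring="roc_auc_ovr").mean())
```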

GPT-4 vs. Radiologists: who advances mediastinal tumor classification better across report quality levels? A cohort study.

Wen R, Li X, Chen K, Sun M, Zhu C, Xu P, Chen F, Ji C, Mi P, Li X, Deng X, Yang Q, Song W, Shang Y, Huang S, Zhou M, Wang J, Zhou C, Chen W, Liu C

PubMed · Aug 8 2025
Accurate mediastinal tumor classification is crucial for treatment planning, but diagnostic performance varies with radiologists' experience and report quality. This study evaluated GPT-4's diagnostic accuracy in classifying mediastinal tumors from radiological reports, compared with radiologists of different experience levels, across reports of varying quality. We conducted a retrospective study of 1,494 patients from five tertiary hospitals with mediastinal tumors diagnosed via chest CT and pathology. Radiological reports were categorized as low-, medium-, or high-quality based on predefined criteria assessed by experienced radiologists. Six radiologists (two residents, two attending radiologists, and two associate senior radiologists) and GPT-4 evaluated the chest CT reports. Diagnostic performance was analyzed overall, by report quality, and by tumor type using Wald χ² tests and 95% CIs calculated via the Wilson method. GPT-4 achieved an overall diagnostic accuracy of 73.3% (95% CI: 71.0-75.5), comparable to associate senior radiologists (74.3%, 95% CI: 72.0-76.5; p > 0.05). For low-quality reports, GPT-4 outperformed associate senior radiologists (60.8% vs. 51.1%, p < 0.001). For high-quality reports, GPT-4 was comparable to attending radiologists (80.6% vs. 79.4%, p > 0.05). Diagnostic performance varied by tumor type: GPT-4 was comparable to radiology residents for neurogenic tumors (44.9% vs. 50.3%, p > 0.05), similar to associate senior radiologists for teratomas (68.1% vs. 65.9%, p > 0.05), and superior in diagnosing lymphoma (75.4% vs. 60.4%, p < 0.001). GPT-4 demonstrated interpretation accuracy comparable to associate senior radiologists, excelling on low-quality reports and outperforming them in diagnosing lymphoma. These findings underscore GPT-4's potential to enhance diagnostic performance in challenging diagnostic scenarios.
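For reference, the Wilson 95% CI quoted throughout the abstract can be reproduced as below; the sketch uses the reported GPT-4 overall accuracy and cohort size, and statsmodels rather than whatever software the authors used.

```python
# Wilson 95% CI for a diagnostic-accuracy proportion.
from statsmodels.stats.proportion import proportion_confint

n_total = 1494                        # patients in the cohort
n_correct = round(0.733 * n_total)    # GPT-4's reported 73.3% overall accuracy
lo, hi = proportion_confint(n_correct, n_total, alpha=0.05, method="wilson")
print(f"accuracy = {n_correct / n_total:.1%}, 95% CI: {lo:.1%}-{hi:.1%}")
```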

Clinical insights to improve medical deep learning design: A comprehensive review of methods and benefits.

Thornblad TAE, Ewals LJS, Nederend J, Luyer MDP, De With PHN, van der Sommen F

PubMed · Aug 8 2025
The success of deep learning and computer vision on natural images has led to increased interest in deep learning applications for medical imaging. However, introducing black-box deep learning models leaves little room for domain-specific knowledge in the final diagnosis. For medical computer vision applications, not only accuracy but also robustness, interpretability, and explainability are essential to earn clinicians' trust. Medical deep learning applications can therefore benefit from insights into the application at hand, gained by involving clinical staff and considering the clinical diagnostic process. This review surveys clinically inspired methods, covering clinical insights used at different stages of deep learning design for three-dimensional (3D) computed tomography (CT) image data. It examines 400 research articles spanning deep learning-based approaches for diagnosing different diseases, assessing how clinical insights were incorporated in the published work. On this basis, the 47 articles that draw on clinical inspiration are reviewed in further detail. The clinically inspired methods concerned preparation for training, 3D medical image data processing, integration of clinical data, and model architecture selection and development. This highlights the different ways in which domain-specific knowledge can inform the design of deep learning systems.

Vision-Language Model-Based Semantic-Guided Imaging Biomarker for Lung Nodule Malignancy Prediction.

Zhuang L, Tabatabaei SMH, Salehi-Rad R, Tran LM, Aberle DR, Prosper AE, Hsu W

PubMed · Aug 8 2025
Machine learning models have utilized semantic features, deep features, or both to assess lung nodule malignancy. However, their reliance on manual annotation during inference, limited interpretability, and sensitivity to imaging variations hinder their application in real-world clinical settings. This research therefore integrates semantic features derived from radiologists' assessments of nodules, guiding the model to learn clinically relevant, robust, and explainable imaging features for predicting lung cancer. We obtained 938 low-dose CT scans from the National Lung Screening Trial (NLST) with 1,246 nodules and semantic features. Additionally, the Lung Image Database Consortium dataset contains 1,018 CT scans with 2,625 lesions annotated for nodule characteristics. Three external datasets were obtained from UCLA Health, the LUNGx Challenge, and the Duke Lung Cancer Screening. We fine-tuned a pretrained Contrastive Language-Image Pretraining (CLIP) model with a parameter-efficient fine-tuning approach to align imaging and semantic text features and predict one-year lung cancer diagnosis. Our model outperformed state-of-the-art (SOTA) models on the NLST test set with an AUROC of 0.901 and an AUPRC of 0.776, and it showed robust results on the external datasets. Using CLIP, we also obtained zero-shot predictions of semantic features such as nodule margin (AUROC: 0.812), nodule consistency (0.812), and pleural attachment (0.840). Our approach surpasses SOTA models in predicting lung cancer across datasets collected from diverse clinical settings and provides explainable outputs, aiding clinicians in understanding the basis of model predictions. It also discourages the model from learning shortcuts and generalizes across clinical settings. The code is available at https://github.com/luotingzhuang/CLIP_nodule.
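A minimal sketch of the zero-shot semantic-feature inference the abstract mentions, using a generic off-the-shelf CLIP checkpoint via Hugging Face transformers; the prompts, the 2D slice rendering, and the image file are assumptions, and the authors' fine-tuned model lives in the linked repository.

```python
# Zero-shot prediction of a semantic nodule attribute with a generic CLIP model.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

slice_img = Image.open("nodule_slice.png")  # hypothetical rendered CT slice
prompts = ["a lung nodule with a smooth margin",
           "a lung nodule with a spiculated margin"]

inputs = processor(text=prompts, images=slice_img, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=-1)
print(dict(zip(prompts, probs.squeeze().tolist())))
```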

A Deep Learning Model to Detect Acute MCA Occlusion on High Resolution Non-Contrast Head CT.

Fussell DA, Lopez JL, Chang PD

PubMed · Aug 8 2025
To assess the feasibility and accuracy of a deep learning (DL) model to identify acute middle cerebral artery (MCA) occlusion on high-resolution non-contrast CT (NCCT) imaging data. In this study, a total of 4,648 consecutive exams (July 2021 to December 2023) were retrospectively used for model training and validation, while an additional 1,011 consecutive exams (January 2024 to August 2024) were used for independent testing. Using high-resolution NCCT acquired at a slice thickness of 1.0 mm or less, MCA thrombus was labeled using same-day CTA as ground truth. A 3D DL model was trained for per-voxel thrombus segmentation, with the sum of positive voxels used to estimate the likelihood of acute MCA occlusion. For detection of acute MCA M1 segment occlusion, the model yielded an AUROC of 0.952 [0.904-1.00], accuracy of 93.6% [88.1-98.2], sensitivity of 90.9% [83.1-100], and specificity of 93.6% [88.0-98.3]. Inclusion of M2 segment occlusions reduced performance only slightly, yielding an AUROC of 0.884 [0.825-0.942], accuracy of 93.2% [85.1-97.2], sensitivity of 77.4% [69.3-92.2], and specificity of 93.6% [85.1-97.8]. A DL model can detect acute MCA occlusion from high-resolution NCCT with accuracy approaching that of CTA. Using this tool, a majority of candidate thrombectomy patients may be identified with NCCT alone, which could aid stroke triage in settings that lack CTA or are otherwise resource constrained.
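The occlusion-likelihood score the abstract describes, i.e., summing the positive voxels of the per-voxel segmentation, reduces to a few lines; the synthetic probability map, binarization threshold, and operating point below are assumptions.

```python
import numpy as np

# Stand-in for the model's per-voxel thrombus probability map (D, H, W).
prob_map = np.zeros((160, 512, 512), dtype=np.float32)
prob_map[80:83, 250:260, 250:260] = 0.9        # synthetic thrombus blob

positive_voxels = int((prob_map > 0.5).sum())  # binarize per-voxel predictions
likelihood_score = positive_voxels             # proxy: total predicted thrombus volume
flag_occlusion = likelihood_score > 50         # hypothetical operating point
print(positive_voxels, flag_occlusion)
```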

An Anisotropic Cross-View Texture Transfer with Multi-Reference Non-Local Attention for CT Slice Interpolation.

Uhm KH, Cho H, Hong SH, Jung SW

PubMed · Aug 8 2025
Computed tomography (CT) is one of the most widely used non-invasive imaging modalities for medical diagnosis. In clinical practice, CT images are usually acquired with large slice thicknesses due to the high cost of memory storage and operation time, resulting in an anisotropic CT volume with much lower inter-slice resolution than in-plane resolution. Since such inconsistent resolution may lead to difficulties in disease diagnosis, deep learning-based volumetric super-resolution methods have been developed to improve inter-slice resolution. Most existing methods conduct single-image super-resolution on the through-plane or synthesize intermediate slices from adjacent slices; however, the anisotropic characteristic of 3D CT volume has not been well explored. In this paper, we propose a novel cross-view texture transfer approach for CT slice interpolation by fully utilizing the anisotropic nature of 3D CT volume. Specifically, we design a unique framework that takes high-resolution in-plane texture details as a reference and transfers them to low-resolution through-plane images. To this end, we introduce a multi-reference non-local attention module that extracts meaningful features for reconstructing through-plane high-frequency details from multiple in-plane images. Through extensive experiments, we demonstrate that our method performs significantly better in CT slice interpolation than existing competing methods on public CT datasets including a real-paired benchmark, verifying the effectiveness of the proposed framework. The source code of this work is available at https://github.com/khuhm/ACVTT.
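A minimal PyTorch sketch of what a multi-reference non-local attention step could look like, with queries from through-plane tokens and keys/values pooled across several in-plane reference maps; the shapes and single linear projections are assumptions, and the authors' actual module is in the linked repository.

```python
import torch
import torch.nn.functional as F
from torch import nn

class MultiRefNonLocalAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Linear(channels, channels)
        self.kv = nn.Linear(channels, 2 * channels)

    def forward(self, through_plane, in_plane_refs):
        # through_plane: (B, N, C) tokens; in_plane_refs: (B, R, M, C).
        b, r, m, c = in_plane_refs.shape
        q = self.q(through_plane)                                    # (B, N, C)
        k, v = self.kv(in_plane_refs.reshape(b, r * m, c)).chunk(2, dim=-1)
        attn = F.softmax(q @ k.transpose(1, 2) / c ** 0.5, dim=-1)   # (B, N, R*M)
        return through_plane + attn @ v                              # residual texture transfer

module = MultiRefNonLocalAttention(64)
out = module(torch.randn(1, 256, 64), torch.randn(1, 4, 256, 64))
print(out.shape)  # torch.Size([1, 256, 64])
```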

Three-dimensional pulp chamber volume quantification in first molars using CBCT: Implications for machine learning-assisted age estimation

Ding, Y., Zhong, T., He, Y., Wang, W., Zhang, S., Zhang, X., Shi, W., Jin, B.

medRxiv preprint · Aug 8 2025
Accurate adult age estimation represents a critical component of forensic individual identification. However, traditional methods relying on skeletal developmental characteristics are susceptible to preservation status and developmental variation. Teeth, owing to their exceptional taphonomic resistance and minimal postmortem alteration, emerge as premier biological samples. Utilizing the high-resolution capabilities of Cone Beam Computed Tomography (CBCT), this study retrospectively analyzed 1,857 right first molars obtained from Han Chinese adults in Sichuan Province (883 males, 974 females; aged 18-65 years). Pulp chamber volume (PCV) was measured using semi-automatic segmentation in Mimics software (v21.0). Statistically significant differences in PCV were observed based on sex and tooth position (maxillary vs. mandibular). Significant negative correlations existed between PCV and age (r = -0.86 to -0.81). The strongest correlation (r = -0.88) was identified in female maxillary first molars. Eleven curvilinear regression models and six machine learning models (Linear Regression, Lasso Regression, Neural Network, Random Forest, Gradient Boosting, and XGBoost) were developed. Among the curvilinear regression models, the cubic model demonstrated the best performance, with the female maxillary-specific model achieving a mean absolute error (MAE) of 4.95 years. Machine learning models demonstrated superior accuracy. Specifically, the sex- and tooth position-specific XGBoost model for female maxillary first molars achieved an MAE of 3.14 years (R² = 0.87). This represents a significant 36.5% reduction in error compared to the optimal cubic regression model. These findings demonstrate that PCV measurements in first molars, combined with machine learning algorithms (specifically XGBoost), effectively overcome the limitations of traditional methods, providing a highly precise and reproducible approach for forensic age estimation.
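A minimal sketch of the kind of sex- and tooth-position-specific XGBoost age regressor evaluated here, trained on synthetic PCV-age data; the simulated trend and hyperparameters are assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
age = rng.uniform(18, 65, 1000)                   # synthetic subgroup cohort
pcv = 60 - 0.7 * age + rng.normal(0, 3, 1000)     # synthetic negative PCV-age trend
X = pcv.reshape(-1, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, age, test_size=0.2, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(X_tr, y_tr)
print(f"MAE: {mean_absolute_error(y_te, model.predict(X_te)):.2f} years")
```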

A Cohort Study of Pediatric Severe Community-Acquired Pneumonia Involving AI-Based CT Image Parameters and Electronic Health Record Data.

He M, Yuan J, Liu A, Pu R, Yu W, Wang Y, Wang L, Nie X, Yi J, Xue H, Xie J

PubMed · Aug 8 2025
Community-acquired pneumonia (CAP) is a significant concern for children worldwide and is associated with high morbidity and mortality. To improve patient outcomes, early intervention and accurate diagnosis are essential. Artificial intelligence (AI) can mine and label imaging data and thus may contribute to precision research and personalized clinical management. The baseline characteristics of 230 children with severe CAP hospitalized from January 2023 to October 2024 were retrospectively analyzed. The patients were divided into two groups according to the presence of respiratory failure. The ability of AI-derived chest computed tomography (CT) indices alone to predict respiratory failure was assessed via logistic regression analysis, and receiver operating characteristic (ROC) curves were plotted for these regression models. After adjusting for age, white blood cell count, neutrophils, lymphocytes, creatinine, wheezing, and fever > 5 days, a greater number of involved lung lobes [odds ratio 1.347, 95% confidence interval (95% CI) 1.036-1.750, P = 0.026] and bilateral lung involvement (odds ratio 2.734, 95% CI 1.084-6.893, P = 0.033) were significantly associated with respiratory failure. The discriminatory power (measured by the area under the curve) of Models 2 and 3, which combined electronic health record data with the CT imaging features, was better than that of Models 0 and 1, which contained only the chest CT parameters. The sensitivity and specificity of Model 2 at the optimal cutoff (0.441) were 84.3% and 59.8%, respectively; those of Model 3 at the optimal cutoff (0.446) were 68.6% and 76.0%. AI-derived chest CT indices may achieve high diagnostic accuracy and guide precise interventions for patients with severe CAP, but clinical, laboratory, and AI-derived chest CT indices should all be included to accurately predict and treat severe CAP.
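A minimal sketch of the adjusted logistic-regression analysis producing odds ratios with 95% CIs, shown here with only two synthetic CT-derived predictors for brevity; the simulated cohort and coefficients are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "lobes_involved": rng.integers(1, 6, 230),   # AI-counted involved lobes
    "bilateral": rng.integers(0, 2, 230),        # bilateral involvement flag
})
lin = -1.5 + 0.3 * df["lobes_involved"] + 1.0 * df["bilateral"]
df["resp_failure"] = (rng.random(230) < 1 / (1 + np.exp(-lin))).astype(int)

X = sm.add_constant(df[["lobes_involved", "bilateral"]])
fit = sm.Logit(df["resp_failure"], X).fit(disp=0)
summary = pd.concat([np.exp(fit.params).rename("OR"),
                     np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})],
                    axis=1)
print(summary)  # odds ratios with 95% CIs
```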

Text Embedded Swin-UMamba for DeepLesion Segmentation

Ruida Cheng, Tejas Sudharshan Mathai, Pritam Mukherjee, Benjamin Hou, Qingqing Zhu, Zhiyong Lu, Matthew McAuliffe, Ronald M. Summers

arXiv preprint · Aug 8 2025
Segmentation of lesions on CT enables automatic measurement for clinical assessment of chronic diseases (e.g., lymphoma). Integrating large language models (LLMs) into the lesion segmentation workflow offers the potential to combine imaging features with descriptions of lesion characteristics from radiology reports. In this study, we investigate the feasibility of integrating text into the Swin-UMamba architecture for the task of lesion segmentation. The publicly available ULS23 DeepLesion dataset was used along with short-form descriptions of the findings from the reports. On the test dataset, a high Dice score of 82% and a low Hausdorff distance of 6.58 pixels were obtained for lesion segmentation. The proposed Text-Swin-UMamba model outperformed prior approaches: a 37% improvement over the LLM-driven LanGuideMedSeg model (p < 0.001), and it surpassed the purely image-based xLSTM-UNet and nnUNet models by 1.74% and 0.22%, respectively. The dataset and code can be accessed at https://github.com/ruida/LLM-Swin-UMamba.
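For reference, the two reported metrics, Dice score and Hausdorff distance in pixels, can be computed for binary masks as below; the toy masks are assumptions, not DeepLesion data.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

pred = np.zeros((64, 64), dtype=bool); pred[20:40, 20:40] = True  # predicted mask
gt = np.zeros((64, 64), dtype=bool); gt[22:42, 22:42] = True      # ground-truth mask

dice = 2 * (pred & gt).sum() / (pred.sum() + gt.sum())
p_pts, g_pts = np.argwhere(pred), np.argwhere(gt)
hausdorff = max(directed_hausdorff(p_pts, g_pts)[0],
                directed_hausdorff(g_pts, p_pts)[0])  # symmetric Hausdorff
print(f"Dice: {dice:.2%}, Hausdorff: {hausdorff:.2f} px")
```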

Advanced Deep Learning Techniques for Accurate Lung Cancer Detection and Classification

Mobarak Abumohsen, Enrique Costa-Montenegro, Silvia García-Méndez, Amani Yousef Owda, Majdi Owda

arXiv preprint · Aug 8 2025
Lung cancer (LC) ranks among the most frequently diagnosed cancers and is one of the most common causes of death for men and women worldwide. Computed tomography (CT) is the preferred diagnostic imaging method because of its low cost and fast processing times. Many researchers have proposed ways of identifying lung cancer from CT images, but such techniques suffer from significant false positives, leading to low accuracy; the fundamental cause is the use of small, imbalanced datasets. This paper introduces an approach for LC detection and classification from CT images based on the DenseNet201 model. Our approach comprises several advanced methods, including focal loss, data augmentation, and regularization, to overcome the imbalanced-data issue and the overfitting challenge. The findings demonstrate the suitability of the proposal, attaining a promising accuracy of 98.95%.
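A minimal sketch of the focal loss mentioned in the abstract, written here in PyTorch; the gamma and alpha values are common defaults, not necessarily the paper's settings.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma: float = 2.0, alpha: float = 0.25):
    """Multi-class focal loss: down-weights well-classified examples."""
    ce = F.cross_entropy(logits, targets, reduction="none")  # -log p_t per sample
    p_t = torch.exp(-ce)                                     # model prob of true class
    return (alpha * (1 - p_t) ** gamma * ce).mean()

logits = torch.randn(8, 3)           # batch of 8, 3 hypothetical classes
targets = torch.randint(0, 3, (8,))
print(focal_loss(logits, targets))
```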