AI-based large-scale screening of gastric cancer from noncontrast CT imaging.

Hu C, Xia Y, Zheng Z, Cao M, Zheng G, Chen S, Sun J, Chen W, Zheng Q, Pan S, Zhang Y, Chen J, Yu P, Xu J, Xu J, Qiu Z, Lin T, Yun B, Yao J, Guo W, Gao C, Kong X, Chen K, Wen Z, Zhu G, Qiao J, Pan Y, Li H, Gong X, Ye Z, Ao W, Zhang L, Yan X, Tong Y, Yang X, Zheng X, Fan S, Cao J, Yan C, Xie K, Zhang S, Wang Y, Zheng L, Wu Y, Ge Z, Tian X, Zhang X, Wang Y, Zhang R, Wei Y, Zhu W, Zhang J, Qiu H, Su M, Shi L, Xu Z, Zhang L, Cheng X

PubMed | Jun 24 2025
Early detection through screening is critical for reducing gastric cancer (GC) mortality. However, in most high-prevalence regions, large-scale screening remains challenging due to limited resources, low compliance and suboptimal detection rate of upper endoscopic screening. Therefore, there is an urgent need for more efficient screening protocols. Noncontrast computed tomography (CT), routinely performed for clinical purposes, presents a promising avenue for large-scale designed or opportunistic screening. Here we developed the Gastric Cancer Risk Assessment Procedure with Artificial Intelligence (GRAPE), leveraging noncontrast CT and deep learning to identify GC. Our study comprised three phases. First, we developed GRAPE using a cohort from 2 centers in China (3,470 GC and 3,250 non-GC cases) and validated its performance on an internal validation set (1,298 cases, area under the curve = 0.970) and an independent external cohort from 16 centers (18,160 cases, area under the curve = 0.927). Subgroup analysis showed that the detection rate of GRAPE increased with advancing T stage but was independent of tumor location. Next, we compared the interpretations of GRAPE with those of radiologists and assessed its potential in assisting diagnostic interpretation. Reader studies demonstrated that GRAPE significantly outperformed radiologists, improving sensitivity by 21.8% and specificity by 14.0%, particularly in early-stage GC. Finally, we evaluated GRAPE in real-world opportunistic screening using 78,593 consecutive noncontrast CT scans from a comprehensive cancer center and 2 independent regional hospitals. GRAPE identified persons at high risk, with GC detection rates of 24.5% and 17.7% in the 2 regional hospitals and 23.2% and 26.8% of detected cases in T1/T2 stage. Additionally, GRAPE detected GC cases that radiologists had initially missed, enabling earlier diagnosis of GC during follow-up for other diseases.
In conclusion, GRAPE demonstrates strong potential for large-scale GC screening, offering a feasible and effective approach for early detection. ClinicalTrials.gov registration: NCT06614179.

NeRF-based CBCT Reconstruction needs Normalization and Initialization

Zhuowei Xu, Han Li, Dai Sun, Zhicheng Li, Yujia Li, Qingpeng Kong, Zhiwei Cheng, Nassir Navab, S. Kevin Zhou

arXiv preprint | Jun 24 2025
Cone Beam Computed Tomography (CBCT) is widely used in medical imaging. However, the limited number and intensity of X-ray projections make reconstruction an ill-posed problem with severe artifacts. NeRF-based methods have achieved great success in this task. However, they suffer from a local-global training mismatch between their two key components: the hash encoder and the neural network. Specifically, in each training step, only a subset of the hash encoder's parameters is used (local sparse), whereas all parameters in the neural network participate (global dense). Consequently, hash features generated in each step are highly misaligned, as they come from different subsets of the hash encoder. These misalignments from different training steps are then fed into the neural network, causing repeated inconsistent global updates in training, which leads to unstable training, slower convergence, and degraded reconstruction quality. Aiming to alleviate the impact of this local-global optimization mismatch, we introduce a Normalized Hash Encoder, which enhances feature consistency and mitigates the mismatch. Additionally, we propose a Mapping Consistency Initialization (MCI) strategy that initializes the neural network before training by leveraging the global mapping property from a well-trained model. The initialized neural network exhibits improved stability during early training, enabling faster convergence and enhanced reconstruction performance. Our method is simple yet effective, requiring only a few lines of code while substantially improving training efficiency on 128 CT cases collected from 4 different datasets, covering 7 distinct anatomical regions.
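The local-global mismatch described above can be illustrated in a toy form: only a few rows of a hash table are looked up per batch, and standardizing the looked-up features per dimension keeps the inputs to the shared network statistically consistent across steps. Everything below (table size, feature dimension, the `hash_lookup` and `normalize_batch` helpers) is an illustrative assumption, not the paper's implementation:

```python
import math
import random

# Hypothetical toy hash encoder: a 3D coordinate is mapped to a feature
# vector by hashing its grid cell into a small parameter table.
TABLE_SIZE, FEAT_DIM = 64, 4
random.seed(0)
table = [[random.gauss(0, 1) for _ in range(FEAT_DIM)] for _ in range(TABLE_SIZE)]

def hash_lookup(x, y, z, resolution=16):
    # Only the rows hit by this batch receive gradients (local sparse),
    # while the downstream MLP is updated on every step (global dense).
    cell = (int(x * resolution), int(y * resolution), int(z * resolution))
    return table[hash(cell) % TABLE_SIZE]

def normalize_batch(feats):
    """Per-dimension standardization of a batch of hash features, so the
    downstream network sees statistically consistent inputs across steps."""
    dim = len(feats[0])
    cols = []
    for d in range(dim):
        col = [f[d] for f in feats]
        mu = sum(col) / len(col)
        std = math.sqrt(sum((v - mu) ** 2 for v in col) / len(col)) + 1e-8
        cols.append([(v - mu) / std for v in col])
    return [list(row) for row in zip(*cols)]  # back to per-sample vectors

batch = [hash_lookup(random.random(), random.random(), random.random())
         for _ in range(32)]
normed = normalize_batch(batch)  # zero mean, unit variance per dimension
```

In this sketch the normalization plays the role the paper assigns to the Normalized Hash Encoder: whichever subset of table rows a step happens to touch, the feature statistics fed onward stay comparable.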

Non-invasive prediction of NSCLC immunotherapy efficacy and tumor microenvironment through unsupervised machine learning-driven CT Radiomic subtypes: a multi-cohort study.

Guo Y, Gong B, Li Y, Mo P, Chen Y, Fan Q, Sun Q, Miao L, Li Y, Liu Y, Tan W, Yang L, Zheng C

PubMed | Jun 24 2025
Radiomics analyzes quantitative features from medical images to reveal tumor heterogeneity, offering new insights for diagnosis, prognosis, and treatment prediction. This study explored radiomics-based biomarkers to predict immunotherapy response and its association with the tumor microenvironment in non-small cell lung cancer (NSCLC) using unsupervised machine learning models derived from CT imaging. This study included 1539 NSCLC patients from seven independent cohorts. K-means unsupervised clustering was applied to 1834 radiomic features extracted from 869 NSCLC patients to identify radiomic subtypes. A random forest model extended subtype classification to external cohorts; model accuracy, sensitivity, and specificity were evaluated. Bulk RNA sequencing (RNA-seq) and single-cell transcriptome sequencing (scRNA-seq) of tumors were used to characterize the tumor immune microenvironment and to evaluate the association between radiomic subtypes and immunotherapy efficacy, immune scores, and immune cell infiltration. Unsupervised clustering stratified NSCLC patients into two subtypes (Cluster 1 and Cluster 2). Principal component analysis confirmed significant distinctions between subtypes across all cohorts. Cluster 2 exhibited significantly longer median overall survival (35 vs. 30 months, P = 0.006) and progression-free survival (19 vs. 16 months, P = 0.020) compared to Cluster 1. Multivariate Cox regression identified radiomic subtype as an independent predictor of overall survival (HR: 0.738, 95% CI 0.583-0.935, P = 0.012), validated in two external cohorts. Bulk RNA-seq showed elevated interaction signaling and immune scores in Cluster 2, and scRNA-seq demonstrated higher proportions of T cells, B cells, and NK cells in Cluster 2. This study establishes a radiomic subtype associated with NSCLC immunotherapy efficacy and the tumor immune microenvironment.
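The subtype-discovery step, K-means clustering of radiomic feature vectors, can be sketched with a minimal stdlib implementation on toy data. The two-feature vectors and the `kmeans2` helper below are illustrative assumptions; the study clustered 1,834 features per patient:

```python
import math
import random

random.seed(42)

# Toy stand-in for radiomic feature vectors (two features per "patient").
group_a = [(random.gauss(0, 0.5), random.gauss(0, 0.5)) for _ in range(50)]
group_b = [(random.gauss(4, 0.5), random.gauss(4, 0.5)) for _ in range(50)]
data = group_a + group_b

def kmeans2(points, iters=25):
    """Two-cluster Lloyd's algorithm with a simple deterministic init."""
    centers = [points[0], points[-1]]
    for _ in range(iters):
        groups = [[], []]
        for p in points:  # assignment step: nearest center
            idx = 0 if math.dist(p, centers[0]) <= math.dist(p, centers[1]) else 1
            groups[idx].append(p)
        # update step: move each center to its group's mean
        centers = [tuple(sum(v) / len(g) for v in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    labels = [0 if math.dist(p, centers[0]) <= math.dist(p, centers[1]) else 1
              for p in points]
    return labels, centers

labels, centers = kmeans2(data)  # two radiomic "subtypes"
```

In the study, the resulting cluster assignments then served as labels for a supervised random forest that extended the subtyping to external cohorts.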
The findings provide a non-invasive tool for personalized treatment, enabling early identification of immunotherapy-responsive patients and optimized therapeutic strategies.

Validation of a Pretrained Artificial Intelligence Model for Pancreatic Cancer Detection on Diagnosis and Prediagnosis Computed Tomography Scans.

Degand L, Abi-Nader C, Bône A, Vetil R, Placido D, Chmura P, Rohé MM, De Masi F, Brunak S

PubMed | Jun 24 2025
To evaluate PANCANAI, a previously developed AI model for pancreatic cancer (PC) detection, on a longitudinal cohort of patients. In particular, detection of PC on scans acquired before histopathologic diagnosis was assessed. The model had previously been trained to predict PC suspicion on 2134 portal venous CTs. In this study, the algorithm was evaluated on a retrospective cohort of Danish patients with biopsy-confirmed PC and with CT scans acquired between 2006 and 2016. The sensitivity was measured, and bootstrapping was performed to provide the median and 95% CI. The study included 1083 PC patients (mean age: 69 y ± 11, 575 men). CT scans were divided into 2 groups: (1) concurrent diagnosis (CD): 1022 CT scans acquired within 2 months around histopathologic diagnosis, and (2) prediagnosis (PD): 198 CT scans acquired before histopathologic diagnosis (median 7 months before diagnosis). The sensitivity was 91.8% (938 of 1022; 95% CI: 89.9-93.5) and 68.7% (137 of 198; 95% CI: 62.1-75.3) in the CD and PD groups, respectively. Sensitivity on CT scans acquired 1 year or more before diagnosis was 53.9% (36 of 67; 95% CI: 41.8-65.7). Sensitivity on CT scans acquired at stage I was 82.9% (29 of 35; 95% CI: 68.6-94.3). PANCANAI showed high sensitivity for automatic PC detection on a large retrospective cohort of biopsy-confirmed patients. PC suspicion was detected in more than half of the CT scans that were acquired at least a year before histopathologic diagnosis.
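The bootstrap procedure used to attach a median and 95% CI to a sensitivity estimate can be sketched as follows. The outcome counts mirror the reported prediagnosis group (137 detected of 198 scans), but the resampling details (2,000 replicates, percentile method) are assumptions:

```python
import random

random.seed(0)

# Illustrative detection outcomes: 1 = model flagged the cancer, 0 = missed.
outcomes = [1] * 137 + [0] * 61  # 137/198 detected, as in the PD group

def bootstrap_sensitivity(results, n_boot=2000):
    """Resample with replacement; return median and percentile 95% CI."""
    n = len(results)
    stats = []
    for _ in range(n_boot):
        sample = [random.choice(results) for _ in range(n)]
        stats.append(sum(sample) / n)
    stats.sort()
    return (
        stats[n_boot // 2],          # median
        stats[int(0.025 * n_boot)],  # 2.5th percentile
        stats[int(0.975 * n_boot)],  # 97.5th percentile
    )

median, lo, hi = bootstrap_sensitivity(outcomes)
```

With these counts the resampled median lands near 137/198 ≈ 0.69, and the percentile interval is close to the 62.1-75.3% CI reported for the PD group.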

Comprehensive predictive modeling in subarachnoid hemorrhage: integrating radiomics and clinical variables.

Urbanos G, Castaño-León AM, Maldonado-Luna M, Salvador E, Ramos A, Lechuga C, Sanz C, Juárez E, Lagares A

PubMed | Jun 24 2025
Subarachnoid hemorrhage (SAH) is a severe condition with high morbidity and long-term neurological consequences. Radiomics, by extracting quantitative features from computed tomography (CT) scans, may reveal imaging biomarkers predictive of outcomes. This study evaluates the predictive value of radiomics in SAH for multiple outcomes and compares its performance to models based on clinical data. Radiomic features were extracted from admission CTs using segmentations of brain tissue (white and gray matter) and hemorrhage. Machine learning models with cross-validation were trained using clinical data, radiomics, or both, to predict 6-month mortality, Glasgow Outcome Scale (GOS), vasospasm, and long-term hydrocephalus. SHapley Additive exPlanations (SHAP) analysis was used to interpret feature contributions. The training dataset included 403 aneurysmal SAH patients; GOS predictions used all patients, while vasospasm and hydrocephalus predictions excluded those with incomplete data or early death, leaving 328 and 332 patients, respectively. Radiomics and clinical models demonstrated comparable performance, achieving AUCs above 85% in the validation set for six-month mortality and clinical outcome, and 75% and 86% for vasospasm and hydrocephalus, respectively. In an independent cohort of 41 patients, the combined models yielded AUCs of 89% for mortality, 87% for clinical outcome, 66% for vasospasm, and 72% for hydrocephalus. SHAP analysis highlighted significant contributions of radiomic features from brain tissue and hemorrhage segmentation, alongside key clinical variables, in predicting SAH outcomes. This study underscores the potential of radiomics-based approaches for SAH outcome prediction, demonstrating predictive power comparable to traditional clinical models and enhancing understanding of SAH-related complications. Clinical trial number: Not applicable.

Enabling Early Identification of Malignant Vertebral Compression Fractures via 2.5D Convolutional Neural Network Model with CT Image Analysis.

Huang C, Li E, Hu J, Huang Y, Wu Y, Wu B, Tang J, Yang L

PubMed | Jun 23 2025
This study employed a retrospective data analysis approach combined with model development and validation. The present study introduces a 2.5D convolutional neural network (CNN) model leveraging CT imaging to facilitate the early detection of malignant vertebral compression fractures (MVCFs), potentially reducing reliance on invasive biopsies. Vertebral histopathological biopsy is recognized as the gold standard for differentiating between osteoporotic and malignant vertebral compression fractures (VCFs). Nevertheless, its application is restricted due to its invasive nature and high cost, highlighting the necessity for alternative methods to identify MVCFs. The clinical, imaging, and pathological data of patients who underwent vertebral augmentation and biopsy at Institution 1 and Institution 2 were collected and analyzed. Based on the vertebral CT images of these patients, 2D, 2.5D, and 3D CNN models were developed to identify patients with osteoporotic vertebral compression fractures (OVCFs) and MVCFs. To verify the clinical application value of the CNN model, two rounds of reader studies were performed. The 2.5D CNN model performed well, and its performance in identifying MVCF patients was significantly superior to that of the 2D and 3D CNN models. In the training dataset, the 2.5D CNN model achieved an area under the receiver operating characteristic curve (AUC) of 0.996 and an F1 score of 0.915. In the external test cohort, the AUC was 0.815 and the F1 score was 0.714. The 2.5D CNN model also enhanced clinicians' ability to identify MVCF patients. With the assistance of the 2.5D CNN model, the AUC of senior clinicians was 0.882, and the F1 score was 0.774. For junior clinicians, the 2.5D CNN model-assisted AUC was 0.784 and the F1 score was 0.667. The development of our 2.5D CNN model marks a significant step towards non-invasive identification of MVCF patients. 
The 2.5D CNN model may be a potential model to assist clinicians in better identifying MVCF patients.
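A 2.5D input is commonly built by stacking the target axial slice with its neighbors as the input channels, keeping some 3D context at near-2D cost. Below is a minimal sketch assuming a three-slice slab with border clamping; the paper's exact slab configuration is not stated here:

```python
def make_volume(depth, h, w):
    """Synthetic CT volume: each voxel value encodes its slice index."""
    return [[[z for _ in range(w)] for _ in range(h)] for z in range(depth)]

def slab_25d(volume, z, half=1):
    """Return slices [z-half, ..., z+half] as a channel stack,
    clamping indices at the volume borders."""
    depth = len(volume)
    idxs = [min(max(z + dz, 0), depth - 1) for dz in range(-half, half + 1)]
    return [volume[i] for i in idxs]  # shape: (2*half+1, H, W)

vol = make_volume(depth=10, h=4, w=4)
center = slab_25d(vol, z=5)  # channels from slices 4, 5, 6
edge = slab_25d(vol, z=0)    # clamped at the border: slices 0, 0, 1
```

The resulting (channels, H, W) stack is then consumed by an ordinary 2D CNN, which is what distinguishes the 2.5D design from a full 3D convolutional model.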

Intelligent Virtual Dental Implant Placement via 3D Segmentation Strategy.

Cai G, Wen B, Gong Z, Lin Y, Liu H, Zeng P, Shi M, Wang R, Chen Z

PubMed | Jun 23 2025
Virtual dental implant placement in cone-beam computed tomography (CBCT) is a prerequisite for digital implant surgery, carrying clinical significance. However, manual placement is a complex process that should meet clinical essential requirements of restoration orientation, bone adaptation, and anatomical safety. This complexity presents challenges in balancing multiple considerations comprehensively and automating the entire workflow efficiently. This study aims to achieve intelligent virtual dental implant placement through a 3-dimensional (3D) segmentation strategy. Focusing on the missing mandibular first molars, we developed a segmentation module based on nnU-Net to generate the virtual implant from the edentulous region of CBCT and employed an approximation module for mathematical optimization. The generated virtual implant was integrated with the original CBCT to meet clinical requirements. A total of 190 CBCT scans from 4 centers were collected for model development and testing. This tool segmented the virtual implant with a surface Dice coefficient (sDice) of 0.903 and 0.884 on the internal and external testing sets, respectively. Compared to the ground truth, the average deviations of the implant platform, implant apex, and angle were 0.850 ± 0.554 mm, 1.442 ± 0.539 mm, and 4.927 ± 3.804° on the internal testing set and 0.822 ± 0.353 mm, 1.467 ± 0.560 mm, and 5.517 ± 2.850° on the external testing set, respectively. The 3D segmentation-based artificial intelligence tool demonstrated good performance in predicting both the dimension and position of the virtual implants, showing significant clinical application potential in implant planning.
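The reported geometry metrics (platform deviation, apex deviation, and axis angle) can be computed from two point pairs per implant. A minimal sketch with hypothetical coordinates, assuming each implant is represented by its platform and apex points in millimeters:

```python
import math

# Hypothetical implant representations: (platform point, apex point) in mm.
gt_platform, gt_apex = (0.0, 0.0, 0.0), (0.0, 0.0, -11.0)
pred_platform, pred_apex = (0.5, 0.3, 0.1), (0.8, 0.2, -10.5)

def axis_angle(p1, a1, p2, a2):
    """Angle in degrees between the two implant axes (platform -> apex)."""
    v1 = [a - b for a, b in zip(a1, p1)]
    v2 = [a - b for a, b in zip(a2, p2)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    cos = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp for float safety
    return math.degrees(math.acos(cos))

plat_dev = math.dist(gt_platform, pred_platform)  # platform deviation (mm)
apex_dev = math.dist(gt_apex, pred_apex)          # apex deviation (mm)
angle = axis_angle(gt_platform, gt_apex, pred_platform, pred_apex)
```

With the toy coordinates above, the platform and apex deviations are simple Euclidean distances (≈0.59 mm and ≈0.96 mm) and the axis angle is under 2°, illustrating the three quantities the study averages over its testing sets.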

Machine Learning Models Based on CT Enterography for Differentiating Between Ulcerative Colitis and Colonic Crohn's Disease Using Intestinal Wall, Mesenteric Fat, and Visceral Fat Features.

Wang X, Wang X, Lei J, Rong C, Zheng X, Li S, Gao Y, Wu X

PubMed | Jun 23 2025
This study aimed to develop radiomic-based machine learning models using computed tomography enterography (CTE) features derived from the intestinal wall, mesenteric fat, and visceral fat to differentiate between ulcerative colitis (UC) and colonic Crohn's disease (CD). Clinical and imaging data from 116 patients with inflammatory bowel disease (IBD) (68 with UC and 48 with colonic CD) were retrospectively collected. Radiomic features were extracted from venous-phase CTE images. Feature selection was performed via the intraclass correlation coefficient (ICC), correlation analysis, SelectKBest, and least absolute shrinkage and selection operator (LASSO) regression. Support vector machine models were constructed using features from individual and combined regions, with model performance evaluated using the area under the ROC curve (AUC). The combined radiomic model, integrating features from all three regions, exhibited superior classification performance (AUC = 0.857, 95% CI: 0.732-0.982), with a sensitivity of 0.762 (95% CI: 0.547-0.903) and specificity of 0.857 (95% CI: 0.601-0.960) in the testing cohort. The models based on features from the intestinal wall, mesenteric fat, and visceral fat achieved AUCs of 0.847 (95% CI: 0.710-0.984), 0.707 (95% CI: 0.526-0.889), and 0.731 (95% CI: 0.553-0.910), respectively, in the testing cohort. The intestinal wall model demonstrated the best calibration. This study demonstrated the feasibility of constructing machine learning models based on radiomic features of the intestinal wall, mesenteric fat, and visceral fat to distinguish between UC and colonic CD.
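The AUC used to compare the region-based models equals the Mann-Whitney statistic: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one, with ties counted as half. A minimal sketch on illustrative labels and scores (not study data):

```python
def auc(labels, scores):
    """Pairwise (Mann-Whitney) AUC: fraction of positive/negative pairs
    where the positive case receives the higher score."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count as half a win
    return wins / (len(pos) * len(neg))

# Illustrative example: 1 = colonic CD, 0 = UC; scores are model outputs.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
value = auc(labels, scores)  # 8 of 9 pairs ranked correctly
```

This O(n²) form is fine for a sketch; production code would use a rank-based formulation or a library routine, but the quantity computed is the same.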

Fine-tuned large language model for classifying CT-guided interventional radiology reports.

Yasaka K, Nishimura N, Fukushima T, Kubo T, Kiryu S, Abe O

PubMed | Jun 23 2025
Background: Manual data curation was necessary to extract radiology reports due to the ambiguities of natural language. Purpose: To develop a fine-tuned large language model that classifies computed tomography (CT)-guided interventional radiology reports into technique categories and to compare its performance with that of the readers. Material and Methods: This retrospective study included patients who underwent CT-guided interventional radiology between August 2008 and November 2024. Patients were chronologically assigned to the training (n = 1142; 646 men; mean age = 64.1 ± 15.7 years), validation (n = 131; 83 men; mean age = 66.1 ± 16.1 years), and test (n = 332; 196 men; mean age = 66.1 ± 14.8 years) datasets. In establishing a reference standard, reports were manually classified into categories 1 (drainage), 2 (lesion biopsy within fat or soft tissue density tissues), 3 (lung biopsy), and 4 (bone biopsy). The bidirectional encoder representations from transformers (BERT) model was fine-tuned with the training dataset, and the model with the best performance in the validation dataset was selected. The performance and required time for classification in the test dataset were compared between the best-performing model and the two readers. Results: Categories 1/2/3/4 included 309/367/270/196, 30/42/40/19, and 75/124/78/55 patients for the training, validation, and test datasets, respectively. The model demonstrated an accuracy of 0.979 in the test dataset, which was significantly better than that of the readers (0.922-0.940) (P ≤ 0.012). The model classified reports within a 49.8-53.5-fold shorter time compared to the readers. Conclusion: The fine-tuned large language model classified CT-guided interventional radiology reports into four categories with high accuracy and within a remarkably short time.

[Incidental pulmonary nodules on CT imaging: what to do?].

van der Heijden EHFM, Snoeren M, Jacobs C

PubMed | Jun 23 2025
Incidental pulmonary nodules are very frequently found on CT imaging and may represent (early stage) lung cancers without any signs or symptoms. These incidental findings can be solid lesions or ground glass lesions that may be solitary or multiple. Careful and systematic evaluation of these findings on imaging is needed to determine the risk of malignancy, based on imaging characteristics, patient factors such as smoking habits, prior cancers or family history, and growth rate, preferably determined by volume measurements. Once the risk of malignancy is increased, minimally invasive image-guided biopsy is warranted, preferably by navigation bronchoscopy. We present two cases to illustrate this clinical workup: one case with a benign solitary pulmonary nodule, and a second case with multiple ground glass opacities, diagnosed as synchronous primary adenocarcinomas of the lung. This is followed by a review of the current status of computer- and artificial intelligence-aided diagnostic support and clinical workflow optimization.
