Page 63 of 1421416 results

Integrative radiomics of intra- and peri-tumoral features for enhanced risk prediction in thymic tumors: a multimodal analysis of tumor microenvironment contributions.

Zhu L, Li J, Wang X, He Y, Li S, He S, Deng B

pubmed logopapersJul 17 2025
This study aims to explore the role of intra- and peri-tumoral radiomics features in tumor risk prediction, with a particular focus on the impact of peri-tumoral characteristics on the tumor microenvironment. A total of 133 patients, including 128 with thymomas and 5 with thymic carcinomas, were ultimately enrolled in this study. Based on the high- and low-risk classification, the cohort was divided into a training set (n = 93) and a testing set (n = 40) for subsequent analysis. Based on imaging data from these 133 patients, multiple radiomics prediction models integrating intra-tumoral and peri-tumoral features were developed. The data were sourced from patients treated at the Affiliated Hospital of Guangdong Medical University between 2015 and 2023, with all imaging obtained through preoperative CT scans. Radiomics feature extraction involved three primary categories: first-order features, shape features, and high-order features. Initially, the tumor's region of interest (ROI) was manually delineated using ITK-SNAP software. A custom Python algorithm was then used to automatically expand the peri-tumoral area, extracting features within 1 mm, 2 mm, and 3 mm zones surrounding the tumor. Additionally, considering the multimodal nature of the imaging data, image fusion techniques were incorporated to further enhance the model's ability to capture the tumor microenvironment. To build the radiomics models, selected features were first standardized using z-scores. Initial feature selection was performed using a t-test (p < 0.05), followed by Spearman correlation analysis to remove redundancy by retaining only one feature from each pair with a correlation coefficient ≥ 0.90. Subsequently, hierarchical clustering and the LASSO algorithm were applied to identify the most predictive features. These selected features were then used to train machine learning models, which were optimized on the training dataset and assessed for predictive performance.
To further evaluate the effectiveness of these models, various statistical methods were applied, including DeLong's test, NRI, and IDI, to compare predictive differences among models. Decision curve analysis (DCA) was also conducted to assess the clinical applicability of the models. The results indicate that the IntraPeri1mm model performed the best, achieving an AUC of 0.837, with sensitivity and specificity of 0.846 and 0.84, respectively, significantly outperforming the other models. SHAP value analysis identified several key features, such as peri_log_sigma_2_0_mm 3D_firstorder RootMeanSquared and intra_wavelet_LLL_firstorder Skewness, which made substantial contributions to the model's predictive accuracy. NRI and IDI analyses further confirmed the model's superior clinical applicability, and the DCA curve demonstrated robust performance across different thresholds. DeLong's test highlighted the statistical significance of the IntraPeri1mm model, underscoring its potential utility in radiomics research. Overall, this study provides a new perspective on tumor risk assessment, highlighting the importance of peri-tumoral features in the analysis of the tumor microenvironment, and aims to offer valuable insights for the development of personalized treatment plans.
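The redundancy-removal step described above (z-score standardization, then Spearman-correlation pruning that keeps one feature from each pair with |ρ| ≥ 0.90) can be sketched in pure Python. This is a minimal stdlib illustration under assumed toy data, not the study's code; the t-test, hierarchical clustering, and LASSO steps are omitted.

```python
def zscore(values):
    """Standardize a list of floats to zero mean, unit variance."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5 or 1.0
    return [(v - mean) / sd for v in values]

def ranks(values):
    """Average 1-based ranks; ties share the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def prune_redundant(features, threshold=0.90):
    """features: dict name -> list of values, in priority order.
    Keep a feature only if it is not highly correlated (|rho| >=
    threshold) with any feature already kept."""
    kept = []
    for name in features:
        if all(abs(spearman(features[name], features[k])) < threshold
               for k in kept):
            kept.append(name)
    return kept
```

In practice the retained-feature choice within a correlated pair is itself a design decision (e.g. keep the one more associated with the outcome); the first-come rule here is only for illustration.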

Task based evaluation of sparse view CT reconstruction techniques for intracranial hemorrhage diagnosis using an AI observer model.

Tivnan M, Kikkert ID, Wu D, Yang K, Wolterink JM, Li Q, Gupta R

pubmed logopapersJul 17 2025
Sparse-view computed tomography (CT) holds promise for reducing radiation exposure and enabling novel system designs. Traditional reconstruction algorithms, including Filtered Backprojection (FBP) and Model-Based Iterative Reconstruction (MBIR), often produce artifacts in sparse-view data. Deep Learning Reconstruction (DLR) offers potential improvements, but task-based evaluations of DLR in sparse-view CT remain limited. This study employs an Artificial Intelligence (AI) observer to evaluate the diagnostic accuracy of FBP, MBIR, and DLR for intracranial hemorrhage detection and classification, offering a cost-effective alternative to human radiologist studies. A public brain CT dataset with labeled intracranial hemorrhages was used to train an AI observer model. Sparse-view CT data were simulated, with reconstructions performed using FBP, MBIR, and DLR. Reconstruction quality was assessed using metrics such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Learned Perceptual Image Patch Similarity (LPIPS). Diagnostic utility was evaluated using Receiver Operating Characteristic (ROC) analysis and Area Under the Curve (AUC) values for One-vs-Rest and One-vs-One classification tasks. DLR outperformed FBP and MBIR in all quality metrics, demonstrating reduced noise, improved structural similarity, and fewer artifacts. The AI observer achieved the highest classification accuracy with DLR, while FBP surpassed MBIR in task-based accuracy despite inferior image quality metrics, emphasizing the value of task-based evaluations. DLR provides an effective balance of artifact reduction and anatomical detail in sparse-view CT brain imaging. This proof-of-concept study highlights AI observer models as a viable, cost-effective alternative for evaluating CT reconstruction techniques.
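Two of the evaluation quantities named above, PSNR for image quality and AUC for the task-based classification, can be written down compactly. A stdlib-only sketch on flattened toy images and scores (not the study's implementation; the rank-statistic AUC below is the standard Mann-Whitney formulation):

```python
import math

def psnr(reference, test, max_val=1.0):
    """Peak Signal-to-Noise Ratio between two equal-length images,
    here flattened to plain lists of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)

def auc(scores, labels):
    """Binary ROC AUC via the Mann-Whitney rank statistic:
    the probability a random positive outscores a random negative."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

One-vs-Rest AUC, as used in the study, applies this binary AUC once per class against all remaining classes.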

Validation of artificial intelligence software for automatic calcium scoring in cardiac and chest computed tomography.

Hamelink II, Nie ZZ, Severijn TEJT, van Tuinen MM, van Ooijen PMAP, Kwee TCT, Dorrius MDM, van der Harst PP, Vliegenthart RR

pubmed logopapersJul 16 2025
Coronary artery calcium scoring (CACS), i.e. quantification of the Agatston (AS) or volume score (VS), can be time-consuming. The aim of this study was to compare automated, artificial intelligence (AI)-based CACS to manual scoring, in cardiac CT and in chest CT for lung cancer screening. We selected 684 participants (59 ± 4.8 years; 48.8% men) who underwent cardiac and non-ECG-triggered chest CT, including 484 participants with AS > 0 on cardiac CT. AI-based results were compared to manual AS and VS, by assessing sensitivity and accuracy, intraclass correlation coefficient (ICC), Bland-Altman analysis and Cohen's kappa for classification in AS strata (0;1-99;100-299;≥300). AI showed a high CAC detection rate: 98.1% in cardiac CT (accuracy 97.1%) and 92.4% in chest CT (accuracy 92.1%). AI showed excellent agreement with manual AS (ICC:0.997 and 0.992) and manual VS (ICC:0.997 and 0.991), in cardiac CT and chest CT, respectively. In Bland-Altman analysis, there was a mean difference of 2.3 (limits of agreement (LoA):-42.7, 47.4) for AS on cardiac CT; 1.9 (LoA:-36.4, 40.2) for VS on cardiac CT; -0.3 (LoA:-74.8, 74.2) for AS on chest CT; and -0.6 (LoA:-65.7, 64.5) for VS on chest CT. Cohen's kappa was 0.952 (95%CI:0.934-0.970) for cardiac CT and 0.901 (95%CI:0.875-0.926) for chest CT, with concordance in 95.9% and 91.4% of cases, respectively. AI-based CACS shows a high detection rate and strong correlation with manual CACS, with excellent risk classification agreement. AI may reduce evaluation time and enable opportunistic screening for CAC on low-dose chest CT.
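The two categorical-agreement pieces above, binning Agatston scores into the 0 / 1-99 / 100-299 / ≥300 strata and computing Cohen's kappa between AI and manual strata, are simple enough to sketch directly (a stdlib illustration, not the study's code):

```python
def as_stratum(score):
    """Map an Agatston score to the risk strata used in the study."""
    if score == 0:
        return "0"
    if score < 100:
        return "1-99"
    if score < 300:
        return "100-299"
    return ">=300"

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' categorical labels:
    observed agreement corrected for chance agreement."""
    n = len(a)
    cats = sorted(set(a) | set(b))
    po = sum(x == y for x, y in zip(a, b)) / n
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe) if pe != 1 else 1.0
```

Kappa rather than raw concordance is the right summary here because with four strata a scorer can agree by chance a nontrivial fraction of the time.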

Automated CAD-RADS scoring from multiplanar CCTA images using radiomics-driven machine learning.

Corti A, Ronchetti F, Lo Iacono F, Chiesa M, Colombo G, Annoni A, Baggiano A, Carerj ML, Del Torto A, Fazzari F, Formenti A, Junod D, Mancini ME, Maragna R, Marchetti F, Sbordone FP, Tassetti L, Volpe A, Mushtaq S, Corino VDA, Pontone G

pubmed logopapersJul 16 2025
Coronary Artery Disease-Reporting and Data System (CAD-RADS), a standardized reporting system of stenosis severity from coronary computed tomography angiography (CCTA), is performed manually by expert radiologists, making it time-consuming and prone to interobserver variability. While deep learning methods automating CAD-RADS scoring have been proposed, radiomics-based machine-learning approaches are lacking, despite their improved interpretability. This study aims to introduce a novel radiomics-based machine-learning approach for automating CAD-RADS scoring from CCTA images with multiplanar reconstruction. This retrospective monocentric study included 251 patients (male 70%; mean age 60.5 ± 12.7) who underwent CCTA in 2016-2018 for clinical evaluation of CAD. Images were automatically segmented, and radiomic features were extracted. Clinical characteristics were collected. The image dataset was partitioned into training and test sets (90%-10%). The training phase encompassed feature scaling and selection, data balancing and model training within a 5-fold cross-validation. A cascade pipeline was implemented for both 6-class CAD-RADS scoring and 4-class therapy-oriented classification (0-1, 2, 3-4, 5), through consecutive sub-tasks. For each classification task the cascade pipeline was applied to develop clinical, radiomic, and combined models. The radiomic, combined and clinical models yielded AUC = 0.88 [0.86-0.88], AUC = 0.90 [0.88-0.90], and AUC = 0.66 [0.66-0.67] for the CAD-RADS scoring, and AUC = 0.93 [0.91-0.93], AUC = 0.97 [0.96-0.97], and AUC = 0.79 [0.78-0.79] for the therapy-oriented classification. The radiomic and combined models significantly outperformed (DeLong p-value < 0.05) the clinical one in classes 1 and 2 (CAD-RADS cascade) and class 2 (therapy-oriented cascade). This study presents the first radiomic model for CAD-RADS classification, offering higher explainability and a promising support system for coronary artery stenosis assessment.
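The cascade idea above, resolving an ordinal score through consecutive binary sub-tasks, reduces to asking "is the class above threshold t?" stage by stage; the first "no" fixes the class. A hypothetical stdlib sketch with stand-in threshold tests on a toy score (the paper's stages are trained radiomic classifiers, not these lambdas):

```python
def cascade_predict(score, stage_tests):
    """stage_tests: ordered list of binary predicates, one per
    sub-task, each answering 'is the class above this level?'.
    Returns an ordinal class index 0..len(stage_tests)."""
    for cls, is_above in enumerate(stage_tests):
        if not is_above(score):
            return cls          # first 'no' fixes the class
    return len(stage_tests)     # passed every threshold

# Toy 4-level cascade with illustrative numeric thresholds.
toy_stages = [lambda v: v > 10, lambda v: v > 20, lambda v: v > 30]
```

A design note: a cascade keeps each sub-task binary (and often better balanced), at the cost that an error at an early stage cannot be recovered downstream.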

Multi-DECT image-based radiomics with interpretable machine learning for preoperative prediction of tumor budding grade and prognosis in colorectal cancer: a dual-center study.

Lin G, Chen W, Chen Y, Cao J, Mao W, Xia S, Chen M, Xu M, Lu C, Ji J

pubmed logopapersJul 16 2025
This study evaluates the predictive ability of multiparametric dual-energy computed tomography (multi-DECT) radiomics for tumor budding (TB) grade and prognosis in patients with colorectal cancer (CRC). This study comprised 510 CRC patients at two institutions. The radiomics features of multi-DECT images (including polyenergetic, virtual monoenergetic, iodine concentration [IC], and effective atomic number images) were screened to build radiomics models utilizing nine machine learning (ML) algorithms. An ML-based fusion model comprising clinical-radiological variables and radiomics features was developed. Model performance was assessed using the area under the receiver operating characteristic curve (AUC), and model interpretability was assessed with Shapley additive explanations (SHAP). The prognostic significance of the fusion model was determined via survival analysis. The CT-reported lymph node status and normalized IC were used to develop a clinical-radiological model. Among the nine examined ML algorithms, the extreme gradient boosting (XGB) algorithm performed best. The XGB-based fusion model containing multi-DECT radiomics features outperformed the clinical-radiological model in predicting TB grade, demonstrating superior AUCs of 0.969 in the training cohort, 0.934 in the internal validation cohort, and 0.897 in the external validation cohort. The SHAP analysis identified the variables influencing model predictions. Patients with a model-predicted high TB grade had worse recurrence-free survival (RFS) in both the training (P < 0.001) and internal validation (P = 0.016) cohorts. The XGB-based fusion model using multi-DECT radiomics could serve as a non-invasive tool to predict TB grade and RFS in patients with CRC preoperatively.
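The SHAP attributions mentioned above are estimates of Shapley values: each feature's average marginal contribution to the prediction over all orderings in which features can be "switched on" from a baseline. A brute-force toy illustration of that definition (real SHAP, e.g. TreeSHAP for XGBoost, computes this efficiently; this exhaustive version is for intuition only and the model and baseline here are invented):

```python
from itertools import permutations

def shapley_values(model, x, baseline):
    """Exact Shapley values for a model over len(x) features.
    Features absent from a coalition take their baseline value;
    each permutation adds features one by one and credits each
    feature with the change in model output it causes."""
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        present = list(baseline)
        prev = model(present)
        for i in order:
            present[i] = x[i]
            cur = model(present)
            phi[i] += cur - prev
            prev = cur
    return [p / len(perms) for p in phi]
```

For a linear model the Shapley value of each feature is exactly its coefficient times its deviation from baseline, and the attributions sum to model(x) minus model(baseline), the "efficiency" property SHAP plots rely on.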

An end-to-end interpretable machine-learning-based framework for early-stage diagnosis of gallbladder cancer using multi-modality medical data.

Zhao H, Miao C, Zhu Y, Shu Y, Wu X, Yin Z, Deng X, Gong W, Yang Z, Zou W

pubmed logopapersJul 16 2025
The accurate early-stage diagnosis of gallbladder cancer (GBC) is regarded as one of the major challenges in the field of oncology. However, few studies have focused on the comprehensive classification of GBC based on multiple modalities. This study aims to develop a comprehensive diagnostic framework for GBC based on both imaging and non-imaging medical data. This retrospective study reviewed 298 clinical patients with gallbladder disease or volunteers from two devices. A novel end-to-end interpretable diagnostic framework for GBC is proposed to handle multiple medical modalities, including CT imaging, demographics, tumor markers, coagulation function tests, and routine blood tests. To achieve better feature extraction and fusion of the imaging modality, a novel global-hybrid-local network, namely GHL-Net, has also been developed. An ensemble learning strategy is employed to fuse the multi-modality data and obtain the final classification result. In addition, two interpretable methods are applied to help clinicians understand the model-based decisions. Model performance was evaluated through accuracy, precision, specificity, sensitivity, F1-score, area under the curve (AUC), and Matthews correlation coefficient (MCC). In both binary and multi-class classification scenarios, the proposed method outperformed the comparison methods on both datasets. In the binary classification scenario in particular, the proposed method achieved the highest accuracy, sensitivity, specificity, precision, F1-score, ROC-AUC, PR-AUC, and MCC of 95.24%, 93.55%, 96.87%, 96.67%, 95.08%, 0.9591, 0.9636, and 0.9051, respectively. The visualization results obtained with the interpretable methods also demonstrated high clinical relevance of the intermediate decision-making processes. Ablation studies then provided an in-depth understanding of the methodology.
The machine learning-based framework can effectively improve the accuracy of GBC diagnosis and is expected to have a more significant impact in other cancer diagnosis scenarios.
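Of the metrics listed above, MCC is the least standard; it summarizes all four confusion-matrix cells in one correlation-like number in [-1, 1], which makes it robust to class imbalance. A minimal sketch from raw counts:

```python
import math

def mcc(tp, fp, fn, tn):
    """Matthews correlation coefficient from confusion-matrix counts.
    +1 = perfect prediction, 0 = chance level, -1 = total disagreement."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0
```

Unlike accuracy, MCC cannot be inflated by always predicting the majority class, which matters in screening-style cohorts where positives are rare.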

Single Inspiratory Chest CT-based Generative Deep Learning Models to Evaluate Functional Small Airway Disease.

Zhang D, Zhao M, Zhou X, Li Y, Guan Y, Xia Y, Zhang J, Dai Q, Zhang J, Fan L, Zhou SK, Liu S

pubmed logopapersJul 16 2025
Purpose To develop a deep learning model that uses a single inspiratory chest CT scan to generate parametric response maps (PRM) and predict functional small airway disease (fSAD). Materials and Methods In this retrospective study, predictive and generative deep learning models for PRM using inspiratory chest CT were developed using a model development dataset with fivefold cross-validation, with PRM derived from paired respiratory CT as the reference standard. Voxel-wise metrics, including sensitivity, area under the receiver operating characteristic curve (AUC), and structural similarity, were used to evaluate model performance in predicting PRM and expiratory CT images. The best performing model was tested on three internal test sets and an external test set. Results The model development dataset of 308 patients (median age, 67 years [IQR: 62-70 years]; 113 female) was divided into the training set (n = 216), the internal validation set (n = 31), and the first internal test set (n = 61). The generative model outperformed the predictive model in detecting fSAD (sensitivity 86.3% vs 38.9%; AUC 0.86 vs 0.70). The generative model performed well in the second internal (AUCs of 0.64, 0.84, 0.97 for emphysema, fSAD and normal lung tissue), the third internal (AUCs of 0.63, 0.83, 0.97), and the external (AUCs of 0.58, 0.85, 0.94) test sets. Notably, the model also performed well in the PRISm group of the fourth internal test set (AUCs of 0.62, 0.88, and 0.96).
Conclusion The proposed generative model, using a single inspiratory CT, outperformed existing algorithms in PRM evaluation, achieving results comparable to paired respiratory CT. Published under a CC BY 4.0 license.
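For context, the PRM reference standard used above labels each co-registered lung voxel from its paired inspiratory/expiratory attenuation. A sketch using the commonly cited thresholds (-950 HU inspiratory, -856 HU expiratory); these thresholds are an assumption from the broader PRM literature, not taken from this paper:

```python
def prm_label(insp_hu, exp_hu, insp_thr=-950, exp_thr=-856):
    """Classify one lung voxel from paired inspiratory/expiratory HU.
    Air-trapped voxels (low expiratory HU) are emphysema if already
    low-attenuating on inspiration, otherwise fSAD. Note: fuller PRM
    schemes mark insp < thr with exp >= thr as 'uncategorized';
    this sketch folds that rare case into 'normal'."""
    if exp_hu < exp_thr:
        return "emphysema" if insp_hu < insp_thr else "fSAD"
    return "normal"
```

This voxel-wise definition is what makes the paper's generative route attractive: synthesizing a plausible expiratory scan from the inspiratory one supplies the second input PRM needs, without a second acquisition.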

CT-ScanGaze: A Dataset and Baselines for 3D Volumetric Scanpath Modeling

Trong-Thang Pham, Akash Awasthi, Saba Khan, Esteban Duran Marti, Tien-Phat Nguyen, Khoa Vo, Minh Tran, Ngoc Son Nguyen, Cuong Tran Van, Yuki Ikebe, Anh Totti Nguyen, Anh Nguyen, Zhigang Deng, Carol C. Wu, Hien Van Nguyen, Ngan Le

arxiv logopreprintJul 16 2025
Understanding radiologists' eye movement during Computed Tomography (CT) reading is crucial for developing effective interpretable computer-aided diagnosis systems. However, CT research in this area has been limited by the lack of publicly available eye-tracking datasets and the three-dimensional complexity of CT volumes. To address these challenges, we present the first publicly available eye gaze dataset on CT, called CT-ScanGaze. Then, we introduce CT-Searcher, a novel 3D scanpath predictor designed specifically to process CT volumes and generate radiologist-like 3D fixation sequences, overcoming the limitations of current scanpath predictors that only handle 2D inputs. Since deep learning models benefit from a pretraining step, we develop a pipeline that converts existing 2D gaze datasets into 3D gaze data to pretrain CT-Searcher. Through both qualitative and quantitative evaluations on CT-ScanGaze, we demonstrate the effectiveness of our approach and provide a comprehensive assessment framework for 3D scanpath prediction in medical imaging.
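Scanpath predictors like the one described above are typically scored by comparing the predicted fixation sequence against a radiologist's. One classic baseline, once fixations are discretized into anatomical regions (or, here, 3D sub-volumes), is string edit distance between the two region sequences; this is a generic illustration of that family of metrics, not the paper's evaluation protocol:

```python
def levenshtein(a, b):
    """Edit distance between two fixation-region sequences
    (any hashable symbols), via the standard rolling-row DP."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]
```

In 3D the discretization step itself (how CT volumes are carved into regions) strongly affects the score, which is one reason dedicated 3D scanpath benchmarks like CT-ScanGaze matter.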

Imaging analysis using Artificial Intelligence to predict outcomes after endovascular aortic aneurysm repair: protocol for a retrospective cohort study.

Lareyre F, Raffort J, Kakkos SK, D'Oria M, Nasr B, Saratzis A, Antoniou GA, Hinchliffe RJ

pubmed logopapersJul 16 2025
Endovascular aortic aneurysm repair (EVAR) requires long-term surveillance to detect and treat postoperative complications. However, prediction models to optimise follow-up strategies are still lacking. The primary objective of this study is to develop predictive models of post-operative outcomes following elective EVAR using Artificial Intelligence (AI)-driven analysis. The secondary objective is to investigate morphological aortic changes following EVAR. This international, multicentre, observational study will retrospectively include 500 patients who underwent elective EVAR. Primary outcomes are EVAR postoperative complications including deaths, re-interventions, endoleaks, limb occlusion and stent-graft migration occurring within 1 year and at mid-term follow-up (1 to 3 years). Secondary outcomes are aortic anatomical changes. Morphological changes following EVAR will be analysed and compared based on preoperative and postoperative CT angiography (CTA) images (within 1 to 12 months, and at the last follow-up) using the AI-based software PRAEVAorta 2 (Nurea). Deep learning algorithms will be applied to stratify the risk of postoperative outcomes into low or high-risk categories. The training and testing dataset will be respectively composed of 70% and 30% of the cohort. The study protocol is designed to ensure that the sponsor and the investigators comply with the principles of the Declaration of Helsinki and the ICH E6 good clinical practice guideline. The study has been approved by the ethics committee of the University Hospital of Patras (Patras, Greece) under the number 492/05.12.2024. The results of the study will be presented at relevant national and international conferences and submitted for publication to peer-review journals.
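The 70%/30% training/testing partition planned above is a single seeded shuffle-and-split over patient identifiers. A minimal stdlib sketch (illustrative only; the protocol does not specify the splitting code, and outcome-stratified splitting would be a reasonable refinement for rare complications):

```python
import random

def split_cohort(patient_ids, test_frac=0.30, seed=42):
    """Shuffle patient IDs reproducibly and return
    (training_ids, testing_ids) at roughly (1 - test_frac)/test_frac."""
    ids = list(patient_ids)
    random.Random(seed).shuffle(ids)  # seeded, so the split is repeatable
    n_test = round(len(ids) * test_frac)
    return ids[n_test:], ids[:n_test]
```

Splitting at the patient level (rather than per scan) is the important detail here: the same patient's preoperative and follow-up CTAs must never straddle the train/test boundary, or performance estimates leak.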
