Prediction of PD-L1 expression in NSCLC patients using PET/CT radiomics and prognostic modelling for immunotherapy in PD-L1-positive NSCLC patients.

Peng M, Wang M, Yang X, Wang Y, Xie L, An W, Ge F, Yang C, Wang K

Jul 1 2025
To develop a positron emission tomography/computed tomography (PET/CT)-based radiomics model for predicting programmed cell death ligand 1 (PD-L1) expression in non-small cell lung cancer (NSCLC) patients and for estimating progression-free survival (PFS) and overall survival (OS) in PD-L1-positive patients undergoing first-line immunotherapy. We retrospectively analysed 143 NSCLC patients who underwent pretreatment ¹⁸F-fluorodeoxyglucose (¹⁸F-FDG) PET/CT scans, of whom 86 were PD-L1-positive. Clinical data collected included gender, age, smoking history, Tumor-Node-Metastasis (TNM) stage, pathologic type, laboratory parameters, and PET metabolic parameters. Four machine learning algorithms (Bayes, logistic regression, random forest, and support vector machine [SVM]) were used to build models, and predictive performance was validated using receiver operating characteristic (ROC) curves. Univariate and multivariate Cox analyses identified independent predictors of OS and PFS in PD-L1-positive patients undergoing immunotherapy, and a nomogram was created to predict OS. A total of 20 models were built for predicting PD-L1 expression. The combined clinical and PET/CT radiomics model based on the SVM algorithm performed best (area under the curve for the training and test sets: 0.914 and 0.877, respectively). The Cox analyses showed that smoking history independently predicted PFS, while SUVmean, monocyte percentage, and white blood cell count were independent predictors of OS; the nomogram predicting 1-year, 2-year, and 3-year OS was based on these three factors. We developed PET/CT-based machine learning models to help predict PD-L1 expression in NSCLC patients and identified independent predictors of PFS and OS in PD-L1-positive patients receiving immunotherapy, thereby aiding precision treatment.
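The kind of pipeline described here (an SVM trained on combined clinical and radiomics features, evaluated with ROC curves) can be sketched in a few lines of scikit-learn. The snippet below is illustrative only: the synthetic arrays stand in for the study's clinical and PET/CT features, and the hyperparameters are assumptions, not the authors' settings.

```python
# Illustrative sketch: SVM on combined clinical + radiomics features, scored by ROC AUC.
# Feature values and labels are synthetic placeholders, not the study's data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients, n_features = 143, 30                 # cohort size from the abstract; feature count assumed
X = rng.normal(size=(n_patients, n_features))    # stand-in for clinical + PET/CT radiomics features
y = rng.integers(0, 2, size=n_patients)          # stand-in for PD-L1 status (1 = positive)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Probability-calibrated SVM so ROC curves can be drawn from predicted scores.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True, random_state=0))
model.fit(X_train, y_train)

for name, (Xs, ys) in {"train": (X_train, y_train), "test": (X_test, y_test)}.items():
    auc = roc_auc_score(ys, model.predict_proba(Xs)[:, 1])
    print(f"{name} AUC = {auc:.3f}")
```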

CausalMixNet: A mixed-attention framework for causal intervention in robust medical image diagnosis.

Zhang Y, Huang YA, Hu Y, Liu R, Wu J, Huang ZA, Tan KC

Jul 1 2025
Confounding factors inherent in medical images can significantly impair the causal reasoning capabilities of deep learning models, resulting in compromised accuracy and diminished generalization performance. In this paper, we present an innovative methodology named CausalMixNet that employs query-mixed intra-attention and key&value-mixed inter-attention to probe causal relationships between input images and labels. To mitigate unobservable confounding factors, CausalMixNet integrates a non-local reasoning module (NLRM) and the key&value-mixed inter-attention (KVMIA) to implement a front-door adjustment strategy. Furthermore, CausalMixNet incorporates a patch-masked ranking module (PMRM) and query-mixed intra-attention (QMIA) to enhance mediator learning, thereby facilitating causal intervention. The patch mixing mechanism applied to the query and key&value features within QMIA and KVMIA specifically targets lesion-related feature enhancement and the inference of the average causal effect. CausalMixNet consistently outperforms existing methods, achieving superior accuracy and F1-scores across in-domain and out-of-domain scenarios on multiple datasets, with an average improvement of 3% over the closest competitor. Demonstrating robustness against noise, gender bias, and attribute bias, CausalMixNet excels in handling unobservable confounders, maintaining stable performance even in challenging conditions.
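To make the "query-mixed" idea concrete, the toy module below blends patch tokens before they serve as queries in a standard multi-head attention block. This is a loose reconstruction of the general concept only, not the CausalMixNet authors' implementation; the mixing ratio, permutation-based mixing, and layer sizes are all assumptions.

```python
# Toy sketch of patch mixing applied to the query stream of an attention block.
# Illustrative reconstruction of the concept, not the published CausalMixNet code.
import torch
import torch.nn as nn

class QueryMixedAttention(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8, mix_ratio: float = 0.5):
        super().__init__()
        self.mix_ratio = mix_ratio  # assumed mixing strength
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_patches, dim) patch embeddings of an image.
        # Blend each query token with a permuted patch from the same image,
        # then attend over the original (unmixed) keys and values.
        perm = torch.randperm(tokens.size(1), device=tokens.device)
        mixed_queries = (1 - self.mix_ratio) * tokens + self.mix_ratio * tokens[:, perm, :]
        out, _ = self.attn(mixed_queries, tokens, tokens)
        return out

x = torch.randn(2, 196, 256)           # e.g. 14x14 patches with 256-dim embeddings
print(QueryMixedAttention()(x).shape)  # torch.Size([2, 196, 256])
```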

Automatic quality control of brain 3D FLAIR MRIs for a clinical data warehouse.

Loizillon S, Bottani S, Maire A, Ströer S, Chougar L, Dormont D, Colliot O, Burgos N

Jul 1 2025
Clinical data warehouses, which have arisen over the last decade, bring together the medical data of millions of patients and offer the potential to train and validate machine learning models in real-world scenarios. The quality of MRIs collected in clinical data warehouses differs significantly from that generally observed in research datasets, reflecting the variability inherent to clinical practice. Consequently, the use of clinical data requires the implementation of robust quality control tools. Using a substantial number of pre-existing, manually labelled T1-weighted MR images (5,500) alongside a smaller set of newly labelled FLAIR images (926), we present a novel semi-supervised adversarial domain adaptation architecture designed to exploit shared representations between MRI sequences via a shared feature extractor, while accounting for the specificities of FLAIR via a sequence-specific classification head. The architecture thus consists of a common invariant feature extractor, a domain classifier, and two classification heads specific to the source and target, all designed to deal effectively with potential class distribution shifts between the source and target data. The primary objectives of this paper were: (1) to identify images which are not proper 3D FLAIR brain MRIs; (2) to rate the overall image quality. For the first objective, our approach demonstrated excellent results, with a balanced accuracy of 89%, comparable to that of human raters. For the second objective, our approach achieved good performance, although lower than that of human raters. Nevertheless, the automatic approach accurately identified poor-quality images (balanced accuracy >79%). In conclusion, our proposed approach overcomes the initial barrier of heterogeneous image quality in clinical data warehouses, thereby facilitating the development of new research using routine clinical 3D FLAIR brain images.
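The general pattern described here (shared feature extractor, domain classifier trained adversarially, and separate heads for the source and target sequences) can be sketched compactly in PyTorch. The snippet below uses a gradient-reversal layer, a common way to implement the adversarial domain classifier; layer sizes are placeholders and this is not the authors' implementation.

```python
# Compact sketch of a domain-adversarial architecture with sequence-specific heads.
# Illustrative only: layer sizes and the gradient-reversal trick are assumptions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse gradients flowing into the encoder, scaled by lambda.
        return -ctx.lamb * grad_output, None

class DomainAdaptiveQC(nn.Module):
    def __init__(self, feat_dim: int = 128, n_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, feat_dim), nn.ReLU())
        self.head_t1 = nn.Linear(feat_dim, n_classes)     # source-specific (T1) head
        self.head_flair = nn.Linear(feat_dim, n_classes)  # target-specific (FLAIR) head
        self.domain_clf = nn.Linear(feat_dim, 2)          # T1 vs FLAIR discriminator

    def forward(self, x, lamb: float = 1.0):
        z = self.encoder(x)
        return (self.head_t1(z), self.head_flair(z),
                self.domain_clf(GradReverse.apply(z, lamb)))

model = DomainAdaptiveQC()
logits_t1, logits_flair, domain_logits = model(torch.randn(4, 1, 64, 64), lamb=0.5)
print(logits_t1.shape, logits_flair.shape, domain_logits.shape)
```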

Tumor grade-titude: XGBoost radiomics paves the way for RCC classification.

Ellmann S, von Rohr F, Komina S, Bayerl N, Amann K, Polifka I, Hartmann A, Sikic D, Wullich B, Uder M, Bäuerle T

Jul 1 2025
This study aimed to develop and evaluate a non-invasive XGBoost-based machine learning model using radiomic features extracted from pre-treatment CT images to differentiate grade 4 renal cell carcinoma (RCC) from lower-grade tumours. A total of 102 RCC patients who underwent contrast-enhanced CT scans were included in the analysis. Radiomic features were extracted, and a two-step feature selection methodology was applied to identify the most relevant features for classification. The XGBoost model demonstrated high performance in both training (AUC = 0.87) and testing (AUC = 0.92) sets, with no significant difference between the two (p = 0.521). The model also exhibited high sensitivity, specificity, positive predictive value, and negative predictive value. The selected radiomic features captured both the distribution of intensity values and spatial relationships, which may provide valuable insights for personalized treatment decision-making. Our findings suggest that the XGBoost model has the potential to be integrated into clinical workflows to facilitate personalized adjuvant immunotherapy decision-making, ultimately improving patient outcomes. Further research is needed to validate the model in larger, multicentre cohorts and explore the potential of combining radiomic features with other clinical and molecular data.
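A minimal version of this workflow (two-step feature selection followed by an XGBoost classifier evaluated by AUC) might look like the sketch below. The variance filter, univariate ranking, data, and hyperparameters are illustrative assumptions rather than the study's actual settings.

```python
# Illustrative sketch: two-step feature selection + XGBoost for RCC grade classification.
# Synthetic data; thresholds and hyperparameters are assumptions, not the study's.
import numpy as np
from sklearn.feature_selection import VarianceThreshold, SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(102, 400))   # stand-in for radiomic features from 102 patients
y = rng.integers(0, 2, size=102)  # stand-in label: grade 4 vs lower-grade RCC

# Step 1: drop near-constant features; step 2: keep the top-k univariate features.
X_sel = VarianceThreshold(1e-3).fit_transform(X)
X_sel = SelectKBest(f_classif, k=20).fit_transform(X_sel, y)

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3, stratify=y, random_state=42)
clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.05, eval_metric="logloss")
clf.fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```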

Quantitative CT biomarkers for renal cell carcinoma subtype differentiation: a comparison of DECT, PCT, and CT texture analysis.

Sah A, Goswami S, Gupta A, Garg S, Yadav N, Dhanakshirur R, Das CJ

Jul 1 2025
To evaluate and compare the diagnostic performance of CT texture analysis (CTTA), perfusion CT (PCT), and dual-energy CT (DECT) in distinguishing between clear-cell renal cell carcinoma (ccRCC) and non-ccRCC. This retrospective study included 66 patients with RCC (52 ccRCC and 14 non-ccRCC) who underwent DECT and PCT imaging before surgery (2017-2022). DECT parameters (iodine concentration and iodine ratio [IR]) and PCT parameters (blood flow, blood volume, mean transit time, and time to peak) were measured using circular regions of interest (ROIs). CT texture analysis features were extracted from manually annotated corticomedullary-phase images. A machine learning (ML) model was developed to differentiate RCC subtypes, with performance evaluated using k-fold cross-validation. Multivariate logistic regression analysis was performed to assess the predictive value of each imaging modality. All three imaging modalities demonstrated high diagnostic accuracy, with F1 scores of 0.9107, 0.9358, and 0.9348 for PCT, DECT, and CTTA, respectively; the three models did not differ significantly (P > 0.05). While each modality could effectively differentiate between ccRCC and non-ccRCC, higher IR on DECT and increased entropy on CTTA were independent predictors of ccRCC, with F1 scores of 0.9345 and 0.9272, respectively (P < 0.001). Dual-energy CT achieved the highest individual performance, with IR being the best predictor (F1 = 0.902). Iodine ratio was significantly higher in ccRCC (65.12 ± 23.73) compared to non-ccRCC (35.17 ± 17.99, P < 0.001), yielding an area under the curve (AUC) of 0.91, sensitivity of 87.5%, and specificity of 89.3%. Entropy on CTTA was the strongest texture feature, with higher values in ccRCC (7.94 ± 0.336) than non-ccRCC (6.43 ± 0.297, P < 0.001), achieving an AUC of 0.94, sensitivity of 83.0%, and specificity of 92.3%. The combined ML model integrating DECT, PCT, and CTTA parameters yielded the highest diagnostic accuracy, with an F1 score of 0.954. PCT, DECT, and CTTA effectively differentiate RCC subtypes. However, IR (DECT) and entropy (CTTA) emerged as key independent markers, suggesting their clinical utility in RCC characterization. Accurate, non-invasive biomarkers are essential to differentiate RCC subtypes, aiding in prognosis and guiding targeted therapies, particularly in ccRCC, where treatment options differ significantly.
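Comparing feature sets with a cross-validated classifier, as described above, can be prototyped as follows. The arrays are synthetic stand-ins for the PCT, DECT, and texture measurements, and logistic regression with 5-fold cross-validation is an assumed setup rather than the study's exact protocol.

```python
# Illustrative sketch: compare PCT, DECT, CTTA, and combined feature sets with
# k-fold cross-validated logistic regression, reporting mean F1. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 66                                   # cohort size from the abstract
y = rng.integers(0, 2, size=n)           # stand-in: ccRCC vs non-ccRCC
feature_sets = {
    "PCT": rng.normal(size=(n, 4)),      # blood flow, blood volume, MTT, TTP
    "DECT": rng.normal(size=(n, 2)),     # iodine concentration, iodine ratio
    "CTTA": rng.normal(size=(n, 20)),    # texture features (assumed count)
}
feature_sets["Combined"] = np.hstack(list(feature_sets.values()))

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=7)
for name, X in feature_sets.items():
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    f1 = cross_val_score(model, X, y, cv=cv, scoring="f1").mean()
    print(f"{name}: mean F1 = {f1:.3f}")
```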

Radiomics for lung cancer diagnosis, management, and future prospects.

Boubnovski Martell M, Linton-Reid K, Chen M, Aboagye EO

Jul 1 2025
Lung cancer remains the leading cause of cancer-related mortality worldwide, with its early detection and effective treatment posing significant clinical challenges. Radiomics, the extraction of quantitative features from medical imaging, has emerged as a promising approach for enhancing diagnostic accuracy, predicting treatment responses, and personalising patient care. This review explores the role of radiomics in lung cancer diagnosis and management, with methods ranging from handcrafted radiomics to deep learning techniques that can capture biological intricacies. The key applications are highlighted across various stages of lung cancer care, including nodule detection, histology prediction, and disease staging, where artificial intelligence (AI) models demonstrate superior specificity and sensitivity. The article also examines future directions, emphasising the integration of large language models, explainable AI (XAI), and super-resolution imaging techniques as transformative developments. By merging diverse data sources and incorporating interpretability into AI models, radiomics stands poised to redefine clinical workflows, offering more robust and reliable tools for lung cancer diagnosis, treatment planning, and outcome prediction. These advancements underscore radiomics' potential in supporting precision oncology and improving patient outcomes through data-driven insights.

Integrating prior knowledge with deep learning for optimized quality control in corneal images: A multicenter study.

Li FF, Li GX, Yu XX, Zhang ZH, Fu YN, Wu SQ, Wang Y, Xiao C, Ye YF, Hu M, Dai Q

Jul 1 2025
Artificial intelligence (AI) models are effective for analyzing high-quality slit-lamp images but often face challenges in real-world clinical settings due to image variability. This study aims to develop and evaluate a hybrid AI-based image quality control system to classify slit-lamp images, improving diagnostic accuracy and efficiency, particularly in telemedicine applications. Cross-sectional study. The internal dataset comprised 2982 slit-lamp images from Zhejiang Eye Hospital. Two external datasets were included: 13,554 images from the Aier Guangming Eye Hospital (AGEH) and 9853 images from the First People's Hospital of Aksu District in Xinjiang (FPH of Aksu). We developed Hybrid Prior-Net (HP-Net), a novel network that combines a ResNet-based classification branch with a prior knowledge branch leveraging the Hough circle transform and frequency-domain blur detection. The two branches' features are channel-wise concatenated at the fully connected layer, enhancing representational power and improving the network's ability to classify eligible, misaligned, blurred, and underexposed corneal images. Model performance was evaluated using metrics such as accuracy, precision, recall, specificity, and F1-score, and compared against the performance of other deep learning models. HP-Net outperformed all other models, achieving an accuracy of 99.03 %, precision of 98.21 %, recall of 95.18 %, specificity of 99.36 %, and an F1-score of 96.54 % in image classification. The results demonstrated that HP-Net was also highly effective in filtering slit-lamp images from the other two datasets, AGEH and FPH of Aksu, with accuracies of 97.23 % and 96.97 %, respectively. These results underscore the superior feature extraction and classification capabilities of HP-Net across all evaluated metrics. Our AI-based image quality control system offers a robust and efficient solution for classifying corneal images, with significant implications for telemedicine applications. By incorporating slightly blurred but diagnostically usable images into training datasets, the system enhances the reliability and adaptability of AI tools for medical imaging quality control, paving the way for more accurate and efficient diagnostic workflows.
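The "prior knowledge branch" idea described above (hand-crafted cues concatenated with CNN features before the final classifier) can be roughed out as below. The Hough-circle cue, FFT-based blur score, backbone choice, and layer sizes are all assumptions for illustration; this is not the published HP-Net code.

```python
# Rough sketch: hand-crafted priors (Hough circle cue + frequency-domain blur score)
# concatenated with ResNet features for a 4-class quality label
# (eligible / misaligned / blurred / underexposed). Illustrative only.
import cv2
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import resnet18

def prior_features(gray: np.ndarray) -> torch.Tensor:
    # Hough circle transform: crude cue for whether a cornea-like circle is present.
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=100,
                               param1=100, param2=30, minRadius=40, maxRadius=200)
    has_circle = 0.0 if circles is None else 1.0
    # Frequency-domain blur score: share of spectral energy outside the low-frequency core.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray.astype(np.float32))))
    h, w = gray.shape
    center = spectrum[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
    high_freq_ratio = 1.0 - center.sum() / (spectrum.sum() + 1e-8)
    return torch.tensor([has_circle, high_freq_ratio], dtype=torch.float32)

class HybridQualityNet(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()                    # expose 512-dim CNN features
        self.backbone = backbone
        self.classifier = nn.Linear(512 + 2, n_classes)

    def forward(self, image: torch.Tensor, priors: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(image)
        return self.classifier(torch.cat([feats, priors], dim=1))

gray = (np.random.rand(256, 256) * 255).astype(np.uint8)
model = HybridQualityNet()
logits = model(torch.randn(1, 3, 256, 256), prior_features(gray).unsqueeze(0))
print(logits.shape)  # torch.Size([1, 4])
```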

Optimizing clinical risk stratification of localized prostate cancer.

Gnanapragasam VJ

Jul 1 2025
To review the current risk and prognostic stratification systems in localised prostate cancer. To explore some of the most promising adjuncts to clinical models and what the evidence has shown regarding their value. There are many new biomarker-based models seeking to improve, optimise or replace clinical models. There are promising data on the value of MRI, radiomics, genomic classifiers and, most recently, artificial intelligence tools in refining stratification. Despite the extensive literature, however, there remains uncertainty about where in the pathway they can provide the most benefit and whether a biomarker is most useful for prognostic or predictive purposes. Comparison studies have also often overlooked the fact that clinical models have themselves evolved, and the baseline against which biomarker studies have demonstrated superiority must be considered in context. For new biomarkers to be included in stratification models, well-designed prospective clinical trials are needed. Until then, caution is needed when interpreting their use for day-to-day decision making. It is critical that users balance any purported incremental value against the performance of the latest clinical classification and multivariate models, especially as the latter are cost free and widely available.

Development and validation of an interpretable machine learning model for diagnosing pathologic complete response in breast cancer.

Zhou Q, Peng F, Pang Z, He R, Zhang H, Jiang X, Song J, Li J

Jul 1 2025
Pathologic complete response (pCR) following neoadjuvant chemotherapy (NACT) is a critical prognostic marker for patients with breast cancer, potentially allowing surgery omission. However, noninvasive and accurate pCR diagnosis remains a significant challenge due to the limitations of current imaging techniques, particularly in cases where tumors completely disappear post-NACT. We developed a novel framework incorporating Dimensional Accumulation for Layered Images (DALI) and an Attention-Box annotation tool to address the unique challenge of analyzing imaging data where target lesions are absent. These methods transform three-dimensional magnetic resonance imaging into two-dimensional representations and ensure consistent target tracking across time-points. Preprocessing techniques, including tissue-region normalization and subtraction imaging, were used to enhance model performance. Imaging features were extracted using radiomics and pretrained deep-learning models, and machine-learning algorithms were integrated into a stacked ensemble model. The approach was developed using the I-SPY 2 dataset and validated with an independent Tangshan People's Hospital cohort. The stacked ensemble model achieved superior diagnostic performance, with an area under the receiver operating characteristic curve of 0.831 (95 % confidence interval, 0.769-0.887) on the test set, outperforming individual models. Tissue-region normalization and subtraction imaging significantly enhanced diagnostic accuracy. SHAP analysis identified variables that contributed to the model predictions, ensuring model interpretability. This innovative framework addresses challenges of noninvasive pCR diagnosis. Integrating advanced preprocessing techniques improves feature quality and model performance, supporting clinicians in identifying patients who can safely omit surgery. This innovation reduces unnecessary treatments and improves quality of life for patients with breast cancer.
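The core modelling step described above (a stacked ensemble over radiomics and deep-learning-derived features, inspected with SHAP) can be prototyped as follows. The component models, explainer choice, and synthetic features are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal sketch: stacking ensemble over hand-crafted + deep features, with a
# model-agnostic SHAP explanation of the predicted pCR probability. Synthetic data.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 50))    # stand-in for radiomics + pretrained CNN features
y = rng.integers(0, 2, size=300)  # stand-in for pCR vs non-pCR
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=1)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=1)),
                ("svm", SVC(probability=True, random_state=1))],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_tr, y_tr)

# Model-agnostic SHAP attributions for the ensemble's predicted probability of pCR.
explainer = shap.Explainer(lambda data: stack.predict_proba(data)[:, 1], X_tr[:100])
shap_values = explainer(X_te[:20])
print(shap_values.values.shape)   # (20, 50): per-sample, per-feature attributions
```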

Prediction of early recurrence in primary central nervous system lymphoma based on multimodal MRI-based radiomics: A preliminary study.

Wang X, Wang S, Zhao X, Chen L, Yuan M, Yan Y, Sun X, Liu Y, Sun S

Jul 1 2025
To evaluate the role of multimodal magnetic resonance imaging radiomics features in predicting early recurrence of primary central nervous system lymphoma (PCNSL) and to investigate their correlation with patient prognosis. A retrospective analysis was conducted on 145 patients with PCNSL who were treated with high-dose methotrexate-based chemotherapy. Clinical data and MR images were collected, with tumor regions segmented using ITK-SNAP software. Radiomics features were extracted via Pyradiomics, and predictive models were developed using various machine learning algorithms. The predictive performance of these models was assessed using receiver operating characteristic (ROC) curves. Additionally, Cox regression analysis was employed to identify risk factors associated with progression-free survival (PFS). In the cohort of 145 PCNSL patients (72 recurrence, 73 non-recurrence), clinical characteristics were comparable between groups except for the frequency of multiple lesions (61.1% vs. 39.7%, p < 0.05) and of not receiving consolidation therapy (44.4% vs. 13.7%, p < 0.05). A total of 2392 radiomics features were extracted from the CET1 and T2WI MRI sequences. After combining the radiomics features with clinical variables, 10 features were retained following feature selection. The logistic regression (LR) model exhibited superior predictive performance for early PCNSL relapse in the test set, with an area under the curve (AUC) of 0.887 (95 % confidence interval: 0.785-0.988). Multivariate Cox regression identified the Cli-Rad score as an independent prognostic factor for PFS. A significant difference in PFS was observed between high- and low-risk groups defined by the Cli-Rad score (8.24 months vs. 24.17 months, p < 0.001). The LR model based on multimodal MRI radiomics and clinical features can effectively predict early recurrence of PCNSL, while the Cli-Rad score can independently forecast PFS among PCNSL patients.
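The survival-analysis step described here (a Cox model on a risk score plus a comparison of high- and low-risk groups) can be sketched with the lifelines library. The dataframe columns below (score, months, relapse) are illustrative stand-ins for a Cli-Rad-style score and the PFS data, not the study's actual variables.

```python
# Small sketch: Cox regression on a risk score and a log-rank comparison of
# high- vs low-risk groups, using lifelines. Synthetic data, illustrative only.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "score": rng.normal(size=145),                  # stand-in for a Cli-Rad-style risk score
    "months": rng.exponential(scale=18, size=145),  # stand-in PFS in months
    "relapse": rng.integers(0, 2, size=145),        # 1 = progression observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="relapse")
cph.print_summary()                                 # hazard ratio for the risk score

# Dichotomize at the median score and compare PFS between the two groups.
high = df["score"] >= df["score"].median()
result = logrank_test(df.loc[high, "months"], df.loc[~high, "months"],
                      event_observed_A=df.loc[high, "relapse"],
                      event_observed_B=df.loc[~high, "relapse"])
print("log-rank p =", result.p_value)
```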