Page 64 of 65642 results

The added value of artificial intelligence using Quantib Prostate for the detection of prostate cancer at multiparametric magnetic resonance imaging.

Russo T, Quarta L, Pellegrino F, Cosenza M, Camisassa E, Lavalle S, Apostolo G, Zaurito P, Scuderi S, Barletta F, Marzorati C, Stabile A, Montorsi F, De Cobelli F, Brembilla G, Gandaglia G, Briganti A

PubMed · May 7, 2025
Artificial intelligence (AI) has been proposed to assist radiologists in reporting multiparametric magnetic resonance imaging (mpMRI) of the prostate. We evaluated the diagnostic performance of radiologists with different levels of experience when reporting mpMRI with the support of available AI-based software (Quantib Prostate). This single-center study (NCT06298305) involved 110 patients. Those with a positive mpMRI (PI-RADS ≥ 3) underwent targeted plus systematic biopsy (TBx plus SBx), while those with a negative mpMRI but high clinical suspicion of prostate cancer (PCa) underwent SBx. Three readers with different levels of experience, identified as R1, R2, and R3, reviewed all mpMRI scans. We assessed inter-reader agreement among the three readers with and without the assistance of Quantib Prostate, as well as sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and diagnostic accuracy for the detection of clinically significant PCa (csPCa). Overall, 102 patients underwent prostate biopsy, with a csPCa detection rate of 47%. Using Quantib Prostate increased the number of lesions identified by R3 (101 vs. 127). Inter-reader agreement increased slightly, from 0.37 without to 0.41 with Quantib Prostate. The PPV, NPV, and diagnostic accuracy (measured by the area under the curve [AUC]) of R3 improved (0.51 vs. 0.55, 0.65 vs. 0.82, and 0.56 vs. 0.62, respectively). Conversely, no changes were observed for R1 and R2. Quantib Prostate did not enhance the csPCa detection rate for readers with prior experience in prostate imaging; for an inexperienced reader, however, the software improved performance. Name of registry: clinicaltrials.gov. NCT06298305. Date of registration: 2022-09.
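Several abstracts in this listing report sensitivity, specificity, PPV, and NPV. As a quick illustrative sketch (not code or data from the study; the counts below are hypothetical), all four metrics follow directly from a 2x2 confusion matrix:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard 2x2 confusion-matrix metrics for a binary diagnostic test."""
    return {
        "sens": tp / (tp + fn),               # sensitivity (true-positive rate)
        "spec": tn / (tn + fp),               # specificity (true-negative rate)
        "ppv": tp / (tp + fp),                # positive predictive value
        "npv": tn / (tn + fn),                # negative predictive value
        "acc": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts, chosen only to exercise the formulas:
m = diagnostic_metrics(tp=40, fp=20, tn=34, fn=8)
```

Note that PPV and NPV, unlike sensitivity and specificity, shift with disease prevalence in the studied cohort.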

A Vision-Language Model for Focal Liver Lesion Classification

Song Jian, Hu Yuchang, Wang Hui, Chen Yen-Wei

arXiv preprint · May 6, 2025
Accurate classification of focal liver lesions is crucial for diagnosis and treatment in hepatology. However, traditional supervised deep learning models depend on large-scale annotated datasets, which are often limited in medical imaging. Recently, vision-language models (VLMs) such as the Contrastive Language-Image Pre-training model (CLIP) have been applied to image classification. Compared to a conventional convolutional neural network (CNN), which classifies images based on visual information only, a VLM leverages multimodal learning with text and images, allowing it to learn effectively even with a limited amount of labeled data. Inspired by CLIP, we propose Liver-VLM, a model specifically designed for focal liver lesion (FLL) classification. First, Liver-VLM incorporates class information into the text encoder without introducing additional inference overhead. Second, by computing the pairwise cosine similarities between image and text embeddings and optimizing the model with a cross-entropy loss, Liver-VLM effectively aligns image features with class-level text features. Experimental results on the MPCT-FLLs dataset demonstrate that Liver-VLM outperforms both the standard CLIP and MedCLIP models in terms of accuracy and area under the curve (AUC). Further analysis shows that a lightweight ResNet18 backbone enhances classification performance, particularly under data-constrained conditions.
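The CLIP-style alignment step described above (pairwise cosine similarities between image and text embeddings, trained with a cross-entropy loss) can be sketched in plain Python. The tiny two-dimensional embeddings below are hypothetical stand-ins for real encoder outputs:

```python
import math

def l2_normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def clip_style_loss(image_emb, class_text_embs, true_class):
    """Cosine similarities act as logits; loss is -log softmax at the true class."""
    img = l2_normalize(image_emb)
    logits = [sum(a * b for a, b in zip(img, l2_normalize(t)))
              for t in class_text_embs]
    exps = [math.exp(z) for z in logits]
    probs = [e / sum(exps) for e in exps]
    return -math.log(probs[true_class]), probs

# Hypothetical 3-class example: the image embedding points toward class 0's text.
loss, probs = clip_style_loss(
    image_emb=[1.0, 0.1],
    class_text_embs=[[0.9, 0.2], [-0.5, 1.0], [-1.0, -0.2]],
    true_class=0,
)
```

Minimizing this loss pushes each image embedding toward its class-level text embedding and away from the others, which is the alignment effect the abstract describes.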

Deep Learning for Classification of Solid Renal Parenchymal Tumors Using Contrast-Enhanced Ultrasound.

Bai Y, An ZC, Du LF, Li F, Cai YY

PubMed · May 6, 2025
The purpose of this study was to assess the ability of deep learning models to classify different subtypes of solid renal parenchymal tumors using contrast-enhanced ultrasound (CEUS) images and to compare their classification performance. A retrospective study was conducted using CEUS images of 237 kidney tumors, including 46 angiomyolipomas (AML), 118 clear cell renal cell carcinomas (ccRCC), 48 papillary RCCs (pRCC), and 25 chromophobe RCCs (chRCC), collected from January 2017 to December 2019. Two deep learning models, based on the ResNet-18 and RepVGG architectures, were trained and validated to distinguish between these subtypes. The models' performance was assessed using sensitivity, specificity, positive predictive value, negative predictive value, F1 score, Matthews correlation coefficient, accuracy, area under the receiver operating characteristic curve (AUC), and confusion matrix analysis. Class activation mapping (CAM) was applied to visualize the specific regions that contributed to the models' predictions. The ResNet-18 and RepVGG-A0 models achieved overall accuracies of 76.7% and 84.5%, respectively, across all four subtypes. The AUCs for AML, ccRCC, pRCC, and chRCC were 0.832, 0.829, 0.806, and 0.795 for the ResNet-18 model, compared to 0.906, 0.911, 0.840, and 0.827 for the RepVGG-A0 model, respectively. The deep learning models could reliably differentiate between the histological subtypes of renal tumors using CEUS images in an objective and non-invasive manner.
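The per-subtype AUCs above are one-vs-rest areas under the ROC curve. An AUC can be computed directly as the probability that a randomly chosen positive case is scored above a randomly chosen negative one (the Mann-Whitney formulation); a minimal sketch with hypothetical classifier scores:

```python
def auc_mann_whitney(pos_scores, neg_scores):
    """AUC = P(score_pos > score_neg), with ties counted as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical scores for one subtype (positives) vs. the rest (negatives):
auc = auc_mann_whitney([0.8, 0.4], [0.5, 0.3])
```

An AUC of 0.5 corresponds to chance-level ranking and 1.0 to perfect separation, which makes the reported 0.795-0.911 range easy to interpret.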

Stacking classifiers based on integrated machine learning model: fusion of CT radiomics and clinical biomarkers to predict lymph node metastasis in locally advanced gastric cancer patients after neoadjuvant chemotherapy.

Ling T, Zuo Z, Huang M, Ma J, Wu L

PubMed · May 6, 2025
The early prediction of lymph node positivity (LN+) after neoadjuvant chemotherapy (NAC) is crucial for optimizing individualized treatment strategies. This study aimed to integrate radiomic features and clinical biomarkers through machine learning (ML) approaches to enhance prediction accuracy, focusing on patients with locally advanced gastric cancer (LAGC). We retrospectively enrolled 277 patients with LAGC and randomly divided them into training (n = 193) and validation (n = 84) sets at a 7:3 ratio. In total, 1,130 radiomics features were extracted from pre-treatment portal venous phase computed tomography scans. These features were linearly combined through feature engineering to develop a radiomics score (rad score). Then, using the rad score and clinical biomarkers as input features, we applied simple statistical strategies (relying on a single ML model) and integrated statistical strategies (classification-model ensembling techniques such as hard voting, soft voting, and stacking) to predict LN+ post-NAC. Diagnostic performance was assessed using receiver operating characteristic curves and the corresponding areas under the curve (AUCs). Of all ML models, the stacking classifier, an integrated statistical strategy, performed best, achieving an AUC of 0.859 for predicting LN+ in patients with LAGC. This predictive model was transformed into a publicly available online risk calculator. In summary, we developed a stacking classifier that integrates radiomics and clinical biomarkers to predict LN+ in patients with LAGC undergoing surgical resection, providing personalized treatment insights.
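The integration strategies named above differ in how base-model outputs are combined: hard voting takes a majority over discrete predictions, soft voting averages probability vectors, and stacking feeds the base outputs into a trained meta-learner. A minimal sketch of the two voting schemes in plain Python (the probabilities are hypothetical; stacking is not shown, since it requires fitting a second-level model on out-of-fold predictions):

```python
def hard_vote(prob_lists):
    """Majority vote over each base model's argmax prediction."""
    preds = [probs.index(max(probs)) for probs in prob_lists]
    return max(set(preds), key=preds.count)

def soft_vote(prob_lists):
    """Argmax of the averaged class-probability vectors."""
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / len(prob_lists)
           for c in range(n_classes)]
    return avg.index(max(avg))

# Hypothetical [P(LN-), P(LN+)] outputs from three base models for one patient:
base_outputs = [[0.6, 0.4], [0.55, 0.45], [0.1, 0.9]]
```

Note the two schemes can disagree: here two models narrowly favor LN- while one is strongly confident in LN+, so the averaged probabilities flip the call that the majority vote makes.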

STG: Spatiotemporal Graph Neural Network with Fusion and Spatiotemporal Decoupling Learning for Prognostic Prediction of Colorectal Cancer Liver Metastasis

Yiran Zhu, Wei Yang, Yan su, Zesheng Li, Chengchang Pan, Honggang Qi

arXiv preprint · May 6, 2025
We propose a multimodal spatiotemporal graph neural network (STG) framework to predict colorectal cancer liver metastasis (CRLM) progression. Current clinical models do not effectively integrate the tumor's spatial heterogeneity, dynamic evolution, and complex multimodal data relationships, which limits their predictive accuracy. Our STG framework combines preoperative CT imaging and clinical data into a heterogeneous graph structure, enabling joint modeling of tumor distribution and temporal evolution through spatial topology and cross-modal edges. The framework uses GraphSAGE to aggregate spatiotemporal neighborhood information and leverages supervised and contrastive learning strategies to enhance the model's ability to capture temporal features and improve robustness. A lightweight version of the model reduces the parameter count by 78.55% while maintaining near-state-of-the-art performance. The model jointly optimizes recurrence-risk regression and survival-analysis tasks, with a contrastive loss improving the discriminability of feature representations and cross-modal consistency. Experimental results on the MSKCC CRLM dataset show a time-adjacent accuracy of 85% and a mean absolute error of 1.1005, significantly outperforming existing methods. The innovative heterogeneous graph construction and spatiotemporal decoupling mechanism effectively uncover associations between dynamic tumor-microenvironment changes and prognosis, providing reliable quantitative support for personalized treatment decisions.
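GraphSAGE, used above to aggregate spatiotemporal neighborhood information, updates each node by combining its own features with an aggregate (here, the mean) of its neighbors' features. A bare-bones sketch on a hypothetical toy graph, omitting the learned weight matrix and nonlinearity a real layer would apply:

```python
def sage_mean_step(features, adjacency):
    """One GraphSAGE step: concatenate self features with the neighbor mean."""
    updated = {}
    for node, own in features.items():
        neighbors = adjacency.get(node, [])
        if neighbors:
            dim = len(own)
            mean = [sum(features[n][d] for n in neighbors) / len(neighbors)
                    for d in range(dim)]
        else:
            mean = [0.0] * len(own)
        # A real layer would apply W @ (own + mean) and a nonlinearity here.
        updated[node] = own + mean
    return updated

# Hypothetical 3-node graph with 2-dimensional node features:
feats = {"a": [0.0, 0.0], "b": [1.0, 2.0], "c": [3.0, 4.0]}
adj = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}
h = sage_mean_step(feats, adj)
```

Stacking several such steps lets information propagate across multi-hop neighborhoods, which is what allows the framework to relate spatially distributed lesions through the graph's edges.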

Clinical-radiomics models with machine-learning algorithms to distinguish uncomplicated from complicated acute appendicitis in adults: a multiphase multicenter cohort study.

Li L, Sun Y, Sun Y, Gao Y, Zhang B, Qi R, Sheng F, Yang X, Liu X, Liu L, Lu C, Chen L, Zhang K

PubMed · Jan 1, 2025
Increasing evidence suggests that non-operative management (NOM) with antibiotics could serve as a safe alternative to surgery for the treatment of uncomplicated acute appendicitis (AA). However, accurately differentiating between uncomplicated and complicated AA remains challenging. Our aim was to develop and validate machine-learning-based diagnostic models to differentiate uncomplicated from complicated AA. This was a multicenter cohort trial conducted between January 2021 and December 2022 across five tertiary hospitals. Three distinct diagnostic models were created: a clinical-parameter-based model, a CT-radiomics-based model, and a clinical-radiomics-fused model. These models were developed using a comprehensive set of eight machine-learning algorithms: logistic regression (LR), support vector machine (SVM), random forest (RF), decision tree (DT), gradient boosting (GB), K-nearest neighbors (KNN), Gaussian Naïve Bayes (GNB), and multi-layer perceptron (MLP). The performance and accuracy of these models were compared. All models exhibited excellent diagnostic performance in the training cohort, achieving a maximal AUC of 1.00. For the clinical-parameter model, the GB classifier yielded the optimal AUC of 0.77 (95% confidence interval [CI]: 0.64-0.90) in the testing cohort, while the LR classifier yielded the optimal AUC of 0.76 (95% CI: 0.66-0.86) in the validation cohort. For the CT-radiomics-based model, the GB classifier achieved the best AUC of 0.74 (95% CI: 0.60-0.88) in the testing cohort, and the SVM yielded the optimal AUC of 0.63 (95% CI: 0.51-0.75) in the validation cohort. For the clinical-radiomics-fused model, the RF classifier yielded optimal AUCs of 0.84 (95% CI: 0.74-0.95) in the testing cohort and 0.76 (95% CI: 0.67-0.86) in the validation cohort. An open-access, user-friendly online tool was developed for clinical application.
This multicenter study suggests that the clinical-radiomics-fused model, constructed using the RF algorithm, effectively differentiated between complicated and uncomplicated AA.
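Logistic regression (LR), one of the eight algorithms compared above, is simple enough to sketch from scratch. This toy single-feature version trained on hypothetical, linearly separable data uses plain stochastic gradient descent on the log loss; real pipelines would of course use an optimized library implementation:

```python
import math

def train_logreg(xs, ys, lr=0.5, epochs=2000):
    """Fit y ~ sigmoid(w*x + b) by stochastic gradient descent on the log loss."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            grad = p - y          # derivative of the log loss w.r.t. the logit
            w -= lr * grad * x
            b -= lr * grad
    return w, b

def predict(w, b, x):
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5 else 0

# Hypothetical toy data: complicated AA (1) has higher scores than uncomplicated (0).
xs = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logreg(xs, ys)
```

The fitted model outputs a probability, so the same classifier supports both the hard labels used for accuracy and the continuous scores needed for the ROC/AUC analysis reported above.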

Application research of artificial intelligence software in the analysis of thyroid nodule ultrasound image characteristics.

Xu C, Wang Z, Zhou J, Hu F, Wang Y, Xu Z, Cai Y

PubMed · Jan 1, 2025
Thyroid nodules, a common clinical endocrine condition, have become increasingly prevalent worldwide. Ultrasound, the premier method of thyroid imaging, plays an important role in accurately diagnosing and managing thyroid nodules. However, there is a high degree of inter- and intra-observer variability in image interpretation, owing to differences in the knowledge and experience of sonographers, who face heavy daily examination workloads. Artificial intelligence based on computer-aided diagnosis technology may improve the accuracy and time efficiency of thyroid nodule diagnosis. This study introduced an artificial intelligence software package, SW-TH01/II, to evaluate ultrasound image characteristics of thyroid nodules, including echogenicity, shape, border, margin, and calcification. We included 225 ultrasound images from each of two hospitals in Shanghai. The sonographers and the software performed characteristics analysis on the same group of images. We analyzed the consistency of the two sets of results and, using the sonographers' results as the gold standard, evaluated the accuracy of SW-TH01/II. A total of 449 images were included in the statistical analysis. For the seven indicators, the proportions of agreement between SW-TH01/II and the sonographers' analyses were all greater than 0.8. For echogenicity (with very hypoechoic), aspect ratio, and margin, the kappa coefficients between the two methods were above 0.75 (P < 0.001). The kappa coefficients for echogenicity (echotexture and echogenicity level), border, and calcification were above 0.6 (P < 0.001). The median times for the software and the sonographers to interpret an image were 3 (2, 3) seconds and 26.5 (21.17, 34.33) seconds, respectively, a statistically significant difference (z = -18.36, P < 0.001). SW-TH01/II showed a high degree of accuracy and substantial time-efficiency benefits in judging the characteristics of thyroid nodules.
It can provide more objective results and improve the efficiency of ultrasound examination. SW-TH01/II can be used to assist sonographers in characterizing thyroid nodule ultrasound images.
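The agreement statistics above are kappa coefficients, which correct raw percent agreement for the agreement expected by chance. A minimal Cohen's kappa for two raters (the binary ratings below are hypothetical, not study data):

```python
def cohens_kappa(rater_a, rater_b):
    """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal rate, summed over labels.
    p_chance = sum((rater_a.count(label) / n) * (rater_b.count(label) / n)
                   for label in labels)
    return (p_observed - p_chance) / (1.0 - p_chance)

# Hypothetical binary ratings (e.g., calcification present/absent) from two readers:
kappa = cohens_kappa([1, 1, 0, 0], [1, 0, 0, 0])
```

This chance correction is why a kappa above 0.6 or 0.75 is a stronger claim than a raw agreement proportion above 0.8: two raters can agree often by chance alone when one category dominates.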

The application of ultrasound artificial intelligence in the diagnosis of endometrial diseases: Current practice and future development.

Wei Q, Xiao Z, Liang X, Guo Z, Zhang Y, Chen Z

PubMed · Jan 1, 2025
Diagnosis and treatment of endometrial diseases are crucial for women's health. Over the past decade, ultrasound has emerged as a non-invasive, safe, and cost-effective imaging tool, significantly contributing to endometrial disease diagnosis and generating extensive datasets. The introduction of artificial intelligence has enabled the application of machine learning and deep learning to extract valuable information from these datasets, enhancing ultrasound diagnostic capabilities. This paper reviews the progress of artificial intelligence in ultrasound image analysis for endometrial diseases, focusing on applications in diagnosis, decision support, and prognosis analysis. We also summarize current research challenges and propose potential solutions and future directions to advance ultrasound artificial intelligence technology in endometrial disease diagnosis, ultimately improving women's health through digital tools.

Radiomics and Deep Learning as Important Techniques of Artificial Intelligence - Diagnosing Perspectives in Cytokeratin 19 Positive Hepatocellular Carcinoma.

Wang F, Yan C, Huang X, He J, Yang M, Xian D

PubMed · Jan 1, 2025
Currently, studies of preoperative prediction of Cytokeratin 19 (CK19) expression in HCC using traditional imaging, radiomics, and deep learning report inconsistent findings. We aimed to systematically analyze and compare the performance of non-invasive methods for predicting CK19-positive HCC, thereby providing insights for the stratified management of HCC patients. A comprehensive literature search was conducted in PubMed, EMBASE, Web of Science, and the Cochrane Library from inception to February 2025. Two investigators independently screened studies and extracted data based on inclusion and exclusion criteria. Eligible studies were included, and key findings were summarized in tables to provide a clear overview. Ultimately, 22 studies involving 3,395 HCC patients were included: 72.7% (16/22) examined traditional imaging, 36.4% (8/22) radiomics, 9.1% (2/22) deep learning, and 54.5% (12/22) combined models. Magnetic resonance imaging was the most commonly used modality (19/22), and over half of the studies (12/22) were published between 2022 and 2025. Moreover, 27.3% (6/22) were multicenter studies, 36.4% (8/22) included a validation set, and only 13.6% (3/22) were prospective. The AUC range for clinical and traditional imaging models was 0.560 to 0.917; for radiomics, 0.648 to 0.951; and for deep learning, 0.718 to 0.820. Notably, combined models of clinical, imaging, radiomics, and deep learning features reached AUCs of 0.614 to 0.995. Nevertheless, multicenter external data were limited, with only 13.6% (3/22) incorporating external validation. Combined models integrating traditional imaging, radiomics, and deep learning show strong potential and performance for predicting CK19 in HCC.
Given current limitations, future research should focus on building an easy-to-use dynamic online tool and on combining multicenter, multimodal imaging with advanced deep learning approaches to enhance the accuracy and robustness of model predictions.

Integrating multimodal imaging and peritumoral features for enhanced prostate cancer diagnosis: A machine learning approach.

Zhou H, Xie M, Shi H, Shou C, Tang M, Zhang Y, Hu Y, Liu X

PubMed · Jan 1, 2025
Prostate cancer is a common malignancy in men, and accurately distinguishing between benign and malignant nodules at an early stage is crucial for optimizing treatment. Multimodal imaging (such as ADC and T2) plays an important role in the diagnosis of prostate cancer, but effectively combining these imaging features for accurate classification remains a challenge. This retrospective study included MRI data from 199 prostate cancer patients. Radiomic features from both the tumor and peritumoral regions were extracted, and a random forest model was used to select the most contributive features for classification. Three machine learning models (Random Forest, XGBoost, and Extra Trees) were then constructed and trained on four different feature combinations (tumor ADC, tumor T2, tumor ADC+T2, and tumor + peritumoral ADC+T2). The model incorporating multimodal imaging features and peritumoral characteristics showed superior classification performance. The Extra Trees model outperformed the others across all feature combinations, particularly in the tumor + peritumoral ADC+T2 group, where the AUC reached 0.729. The AUC values for the other combinations also exceeded 0.65. While the Random Forest and XGBoost models performed slightly lower, they still demonstrated strong classification abilities, with AUCs ranging from 0.63 to 0.72. SHAP analysis revealed that key features, such as tumor texture and peritumoral gray-level features, contributed significantly to the model's classification decisions. Combining multimodal imaging data with peritumoral features moderately improved the accuracy of prostate cancer classification. This model provides a non-invasive and effective diagnostic tool for clinical use and supports future personalized treatment decisions.
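SHAP analysis, as used above, attributes a model's decisions to individual features. A much simpler relative, permutation importance, conveys the same intuition: disrupt one feature column and measure the accuracy drop. A deterministic toy sketch (the model and data are hypothetical, and the column is reversed rather than randomly shuffled to keep the example reproducible):

```python
def accuracy(model, rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(model, rows, labels, feature_idx):
    """Accuracy drop when one feature column is scrambled (here: reversed)."""
    baseline = accuracy(model, rows, labels)
    column = [r[feature_idx] for r in rows][::-1]
    scrambled = [r[:feature_idx] + [v] + r[feature_idx + 1:]
                 for r, v in zip(rows, column)]
    return baseline - accuracy(model, scrambled, labels)

# Hypothetical classifier that decides purely from feature 0:
model = lambda row: int(row[0] > 0.5)
rows = [[0.1, 0.9], [0.2, 0.1], [0.8, 0.8], [0.9, 0.2]]
labels = [0, 0, 1, 1]
imp0 = permutation_importance(model, rows, labels, 0)
imp1 = permutation_importance(model, rows, labels, 1)
```

As expected, scrambling the feature the model relies on destroys its accuracy, while scrambling the ignored feature changes nothing; SHAP refines this idea into per-prediction, per-feature attributions.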
