
A 3D deep learning model based on MRI for predicting lymphovascular invasion in rectal cancer.

Wang T, Chen C, Liu C, Li S, Wang P, Yin D, Liu Y

PubMed · May 20, 2025
The assessment of lymphovascular invasion (LVI) is crucial in the management of rectal cancer; however, accurately evaluating LVI preoperatively with imaging remains challenging. Recent advances in radiomics have created opportunities for developing more accurate diagnostic tools. This study aimed to develop and validate a deep learning model for predicting LVI in rectal cancer patients using preoperative MR imaging. The enrolled cases were randomly divided into a training cohort (n = 233) and a validation cohort (n = 101) at a ratio of 7:3. Based on the pathological reports, patients were classified as LVI-positive or LVI-negative. On the preoperative axial T2WI images, regions of interest (ROIs) were defined as the tumor itself and the tumor plus margins extending outward by 5, 10, 15, and 20 pixels. 2D and 3D deep learning features were extracted using the DenseNet121 architecture, and ten deep learning models were constructed: 2D and 3D versions of GTV (the tumor itself), GPTV5 (the tumor plus a 5-pixel margin), GPTV10, GPTV15, and GPTV20. Model performance was assessed with the area under the curve (AUC), and the DeLong test was used to compare models and identify the optimal one for predicting LVI in rectal cancer. Among the 2D models, the 2D GPTV10 model performed best, with an AUC of 0.891 (95% confidence interval [CI] 0.850-0.933) in the training cohort and 0.841 (95% CI 0.767-0.915) in the validation cohort; the difference in AUC between this model and the other 2D models was not statistically significant (DeLong test, p > 0.05). Among the 3D models, the 3D GPTV10 model had the highest AUC, 0.961 (95% CI 0.940-0.982) in the training cohort and 0.928 (95% CI 0.881-0.976) in the validation cohort, and the DeLong test showed that it outperformed the other 3D models as well as the 2D GPTV10 model (p < 0.05). In summary, the study developed a deep learning model, 3D GPTV10, that uses preoperative MRI to accurately predict LVI in rectal cancer patients. Trained on an ROI comprising the tumor and a surrounding 10-pixel margin, the model outperformed the other deep learning models. These findings may help clinicians formulate personalized treatment plans for rectal cancer patients.
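A minimal sketch of the GPTV-style ROI construction and 3D feature extraction described above, assuming MONAI's DenseNet121 and a binary tumor mask aligned to the T2WI volume; the margin handling, array shapes, and classification head are illustrative, not the authors' code.

```python
# Sketch: build a "GPTV10"-style ROI (tumor + 10-pixel margin) and feed it to a
# 3D DenseNet121 backbone with a binary (LVI-positive vs. LVI-negative) head.
# Assumes MONAI is installed; all inputs below are placeholders.
import numpy as np
import torch
from scipy.ndimage import binary_dilation
from monai.networks.nets import DenseNet121

def gptv_roi(volume: np.ndarray, tumor_mask: np.ndarray, margin_px: int = 10) -> np.ndarray:
    """Mask the volume to the tumor plus an isotropic margin of `margin_px` voxels."""
    dilated = binary_dilation(tumor_mask, iterations=margin_px)
    return volume * dilated

model = DenseNet121(spatial_dims=3, in_channels=1, out_channels=2)

volume = np.random.rand(64, 128, 128).astype(np.float32)   # placeholder T2WI volume
tumor_mask = np.zeros(volume.shape, dtype=bool)
tumor_mask[28:36, 60:70, 60:70] = True                      # placeholder tumor mask

roi = gptv_roi(volume, tumor_mask, margin_px=10)             # "GPTV10" input
x = torch.from_numpy(roi)[None, None]                        # (batch, channel, D, H, W)
logits = model(x)                                            # train / infer as usual
```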

A multi-modal model integrating MRI habitat and clinicopathology to predict platinum sensitivity in patients with high-grade serous ovarian cancer: a diagnostic study.

Bi Q, Ai C, Meng Q, Wang Q, Li H, Zhou A, Shi W, Lei Y, Wu Y, Song Y, Xiao Z, Li H, Qiang J

PubMed · May 20, 2025
Platinum resistance in high-grade serous ovarian cancer (HGSOC) cannot currently be recognized by specific molecular biomarkers. We aimed to compare the capacity of various models integrating MRI habitat, whole-slide images (WSIs), and clinical parameters to predict platinum sensitivity in HGSOC patients. A retrospective study involving 998 eligible patients from four hospitals was conducted. MRI habitats were clustered using the K-means algorithm on multi-parametric MRI. Following feature extraction and selection, a Habitat model was developed. A Vision Transformer (ViT) and multi-instance learning were trained to derive patch-level and WSI-level predictions on hematoxylin and eosin (H&E)-stained WSIs, respectively, forming a Pathology model. Logistic regression (LR) was used to create a Clinic model. A multi-modal model integrating Clinic, Habitat, and Pathology (CHP) was constructed using Multi-Head Attention (MHA) and compared with the unimodal models and Ensemble multi-modal models. The area under the curve (AUC) and the integrated discrimination improvement (IDI) were used to assess model performance and gains. In the internal validation cohort and the external test cohort, the Habitat model showed higher AUCs (0.722 and 0.685) than the Clinic model (0.683 and 0.681) and the Pathology model (0.533 and 0.565), respectively. The AUCs of the MHA-based multi-modal CHP model (0.789 and 0.807) were the highest among all unimodal and Ensemble multi-modal models, with positive IDI values. MRI-based habitat imaging showed potential for predicting platinum sensitivity in HGSOC patients, and MHA-based multi-modal integration of CHP helped improve prediction performance.
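A minimal sketch of voxel-wise habitat clustering with the K-means algorithm, as described above; the number of habitats, the stacked feature maps, and the normalization are illustrative assumptions rather than the study's implementation.

```python
# Sketch: cluster tumor voxels of co-registered multi-parametric MRI maps into
# "habitats" with K-means. Inputs and the habitat count are placeholders.
import numpy as np
from sklearn.cluster import KMeans

def cluster_habitats(feature_maps: np.ndarray, tumor_mask: np.ndarray, n_habitats: int = 3) -> np.ndarray:
    """feature_maps: (n_sequences, D, H, W); returns an integer habitat map (0 = background)."""
    voxels = feature_maps[:, tumor_mask].T                    # (n_voxels, n_sequences)
    voxels = (voxels - voxels.mean(0)) / (voxels.std(0) + 1e-8)
    labels = KMeans(n_clusters=n_habitats, n_init=10, random_state=0).fit_predict(voxels)
    habitat_map = np.zeros(tumor_mask.shape, dtype=np.int16)
    habitat_map[tumor_mask] = labels + 1
    return habitat_map
```

Radiomic features would then be extracted per habitat label before the feature-selection step mentioned above.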

Expert-guided StyleGAN2 image generation elevates AI diagnostic accuracy for maxillary sinus lesions.

Zeng P, Song R, Chen S, Li X, Li H, Chen Y, Gong Z, Cai G, Lin Y, Shi M, Huang K, Chen Z

PubMed · May 20, 2025
The progress of artificial intelligence (AI) research in dental medicine is hindered by data-acquisition challenges and imbalanced distributions. These problems are especially apparent when developing AI-based diagnostic or analytic tools for lesions such as maxillary sinus lesions (MSL), including mucosal thickening and polypoid lesions. Traditional unsupervised generative models struggle to simultaneously control image realism, diversity, and lesion-type specificity. This study establishes an expert-guided framework to overcome these limitations and elevate AI-based diagnostic accuracy. A StyleGAN2 framework was developed for generating clinically relevant MSL images (mucosal thickening and polypoid lesions) under expert control. The generated images were then integrated into training datasets to evaluate their effect on ResNet50's diagnostic performance. Here we show that: 1) both lesion subtypes achieve satisfactory fidelity metrics, with structural similarity indices (SSIM > 0.996), maximum mean discrepancy values (MMD < 0.032), and clinical validation scores close to those of real images; and 2) augmenting the baseline datasets with synthetic images significantly enhances diagnostic accuracy on both internal and external test sets, improving the area under the precision-recall curve (AUPRC) by approximately 8% and 14% for mucosal thickening and polypoid lesions in the internal test set, respectively. The StyleGAN2-based image generation tool effectively addressed data scarcity and imbalance through high-quality MSL image synthesis, consequently boosting diagnostic model performance. This work not only facilitates AI-assisted preoperative assessment for maxillary sinus lift procedures but also establishes a methodological framework for overcoming data limitations in medical image analysis.
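The abstract quotes maximum mean discrepancy (MMD < 0.032) as a fidelity metric for the synthetic images; below is a minimal sketch of an RBF-kernel MMD² between batches of real and generated images, with the kernel bandwidth and input representation as assumptions.

```python
# Sketch: squared maximum mean discrepancy with a Gaussian (RBF) kernel between
# real and synthetic image batches. Inputs are flattened or embedded images;
# the bandwidth sigma is an illustrative choice.
import numpy as np

def mmd_rbf(real: np.ndarray, fake: np.ndarray, sigma: float = 10.0) -> float:
    """real, fake: (n_samples, n_features) arrays; returns the biased MMD^2 estimate."""
    def kernel(a: np.ndarray, b: np.ndarray) -> np.ndarray:
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
        return np.exp(-d2 / (2 * sigma ** 2))
    k_rr, k_ff, k_rf = kernel(real, real), kernel(fake, fake), kernel(real, fake)
    return float(k_rr.mean() + k_ff.mean() - 2 * k_rf.mean())
```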

Automated Fetal Biometry Assessment with Deep Ensembles using Sparse-Sampling of 2D Intrapartum Ultrasound Images

Jayroop Ramesh, Valentin Bacher, Mark C. Eid, Hoda Kalabizadeh, Christian Rupprecht, Ana IL Namburete, Pak-Hei Yeung, Madeleine K. Wyburd, Nicola K. Dinsdale

arXiv preprint · May 20, 2025
The International Society of Ultrasound in Obstetrics and Gynecology (ISUOG) advocates intrapartum ultrasound (US) imaging to monitor labour progression through changes in fetal head position. Two reliable ultrasound-derived parameters used to predict outcomes of instrumental vaginal delivery are the angle of progression (AoP) and head-symphysis distance (HSD). In this work, as part of the Intrapartum Ultrasound Grand Challenge (IUGC) 2024, we propose an automated fetal biometry measurement pipeline to reduce intra- and inter-observer variability and improve measurement reliability. Our pipeline consists of three key tasks: (i) classification of standard planes (SP) from US videos, (ii) segmentation of the fetal head and pubic symphysis from the detected SPs, and (iii) computation of the AoP and HSD from the segmented regions. We perform sparse sampling to mitigate class imbalance and reduce spurious correlations in task (i), and utilize ensemble-based deep learning methods for tasks (i) and (ii) to enhance generalizability under different US acquisition settings. Finally, to promote robustness in task (iii) with respect to the structural fidelity of the measurements, we retain the largest connected components and apply ellipse fitting to the segmentations. Our solution achieved ACC: 0.9452, F1: 0.9225, AUC: 0.983, MCC: 0.8361, DSC: 0.918, HD: 19.73, ASD: 5.71, $\Delta_{AoP}$: 8.90 and $\Delta_{HSD}$: 14.35 across an unseen hold-out set of 4 patients and 224 US frames. The results from the proposed automated pipeline can improve understanding of the causes of labour arrest and guide the development of clinical risk stratification tools for efficient and effective prenatal care.
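A minimal sketch of the task (iii) post-processing described above, keeping the largest connected component of a segmentation mask and fitting an ellipse to its contour; the OpenCV/SciPy calls are standard, but the mask handling is illustrative rather than the challenge submission's code.

```python
# Sketch: retain the largest connected component of a binary segmentation and
# fit an ellipse to its outer contour, as a robustness step before computing
# AoP/HSD. The mask is a placeholder; contours need >= 5 points for fitEllipse.
import cv2
import numpy as np
from scipy import ndimage

def largest_component(mask: np.ndarray) -> np.ndarray:
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask.astype(np.uint8)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return (labels == (int(np.argmax(sizes)) + 1)).astype(np.uint8)

def fit_ellipse(mask: np.ndarray):
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)
    return cv2.fitEllipse(contour)      # ((cx, cy), (major_axis, minor_axis), angle)
```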

Portable Ultrasound Bladder Volume Measurement Over Entire Volume Range Using a Deep Learning Artificial Intelligence Model in a Selected Cohort: A Proof of Principle Study.

Jeong HJ, Seol A, Lee S, Lim H, Lee M, Oh SJ

PubMed · May 19, 2025
We aimed to prospectively investigate whether bladder volume measured using a deep learning artificial intelligence (AI) algorithm (AI-BV) is more accurate than that measured using the conventional method (C-BV) when using a portable ultrasound bladder scanner (PUBS). Patients who underwent filling cystometry because of lower urinary tract symptoms between January 2021 and July 2022 were enrolled. The bladder was filled serially with normal saline from 0 mL to maximum cystometric capacity in 50 mL increments, and C-BV was measured with PUBS at each step. Ultrasound images obtained during this process were manually annotated to define the bladder contour, which was used to build a deep learning AI model. The true bladder volume (T-BV) in each volume range was compared with C-BV and AI-BV. We enrolled 250 patients (213 men and 37 women), and the deep learning AI model was established using 1912 bladder images. There was a significant difference between C-BV (205.5 ± 170.8 mL) and T-BV (190.5 ± 165.7 mL) (p = 0.001), but no significant difference between AI-BV (197.0 ± 161.1 mL) and T-BV (p = 0.081). In the 101-150, 151-200, and 201-300 mL ranges, there were significant differences in the percentage volume differences between [C-BV and T-BV] and [AI-BV and T-BV] (p < 0.05), but not when converted to absolute values (p > 0.05). C-BV (R² = 0.91, p < 0.001) and AI-BV (R² = 0.90, p < 0.001) were both highly correlated with T-BV. The mean difference between AI-BV and T-BV (6.5 ± 50.4 mL) was significantly smaller than that between C-BV and T-BV (15.0 ± 50.9 mL) (p = 0.001). Following image pre-processing, deep learning AI-BV estimated true bladder volume more accurately than the conventional method in this selected cohort on internal validation. Determining the clinical relevance of these findings and the performance in external cohorts requires further study. The clinical trial was conducted using an approved product for its approved indication, so approval from the Ministry of Food and Drug Safety (MFDS) was not required; therefore, there is no clinical trial registration number.
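A loose sketch of the agreement analysis reported above, comparing an estimated volume series against the cystometric ground truth with a paired test and correlation; the arrays, variable names, and exact statistical tests are assumptions rather than the study's analysis code.

```python
# Sketch: paired comparison of volume estimates (C-BV or AI-BV) against the
# true bladder volume (T-BV), reporting mean bias, a paired test, and R^2.
# All inputs are placeholders.
import numpy as np
from scipy import stats

def compare_to_truth(est_ml: np.ndarray, true_ml: np.ndarray, label: str) -> None:
    diff = est_ml - true_ml
    _, p = stats.ttest_rel(est_ml, true_ml)          # paired test on the bias
    r, _ = stats.pearsonr(est_ml, true_ml)           # correlation with the truth
    print(f"{label}: mean diff {diff.mean():.1f} ± {diff.std(ddof=1):.1f} mL, "
          f"p = {p:.3f}, R^2 = {r ** 2:.2f}")

# Example usage with measured series (hypothetical arrays):
# compare_to_truth(c_bv, t_bv, "C-BV"); compare_to_truth(ai_bv, t_bv, "AI-BV")
```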

Prediction of prognosis of immune checkpoint inhibitors combined with anti-angiogenic agents for unresectable hepatocellular carcinoma by machine learning-based radiomics.

Xu X, Jiang X, Jiang H, Yuan X, Zhao M, Wang Y, Chen G, Li G, Duan Y

PubMed · May 19, 2025
This study aimed to develop and validate a novel magnetic resonance imaging (MRI)-based radiomics model to predict progression-free survival (PFS) in patients with unresectable hepatocellular carcinoma (uHCC) receiving a combination of immune checkpoint inhibitors (ICIs) and anti-angiogenic agents, an area not previously explored with MRI-based radiomics. A total of 111 patients with uHCC were enrolled. After univariate Cox regression and the least absolute shrinkage and selection operator (LASSO) algorithm were used to select radiological features, the Rad-score was calculated with both a Cox proportional hazards regression model and a random survival forest (RSF) model, and the optimal approach was selected by comparing Harrell's concordance index (C-index) values. The Rad-score was then combined with independent clinical risk factors to create a nomogram. The C-index, time-dependent receiver operating characteristic (ROC) curves, calibration curves, and decision curve analysis were employed to assess the predictive ability of the risk models. The combined nomogram incorporating independent clinical factors and the RSF-derived Rad-score demonstrated better prognostic prediction for PFS, with C-indexes of 0.846 and 0.845 in the training and validation cohorts, respectively, suggesting the model may enable more precise patient stratification and personalized treatment strategies. Based on risk level, participants were classified into a high-risk signature (HRS) group and a low-risk signature (LRS) group, with a significant difference between the groups (p < 0.01). The clinical-radiomics nomogram based on MRI is a promising tool for predicting prognosis in uHCC patients receiving ICIs combined with anti-angiogenic agents, potentially leading to more effective clinical outcomes.
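A minimal sketch of the Rad-score construction described above, comparing a LASSO-penalized Cox model against a random survival forest by Harrell's C-index using scikit-survival; the feature matrix, hyperparameters, and in-sample evaluation are placeholders, not the study's pipeline.

```python
# Sketch: fit a LASSO-penalised Cox model and a random survival forest on
# radiomic features, then pick the one with the higher Harrell's C-index.
# In practice the C-index should be computed on held-out data.
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.linear_model import CoxnetSurvivalAnalysis
from sksurv.metrics import concordance_index_censored
from sksurv.util import Surv

def best_rad_score_model(X: np.ndarray, event: np.ndarray, time: np.ndarray):
    """X: (n_patients, n_features); event: 1 = progression observed; time: PFS time."""
    y = Surv.from_arrays(event=event.astype(bool), time=time)
    models = {
        "cox_lasso": CoxnetSurvivalAnalysis(l1_ratio=1.0).fit(X, y),
        "rsf": RandomSurvivalForest(n_estimators=200, random_state=0).fit(X, y),
    }
    c_indexes = {
        name: concordance_index_censored(y["event"], y["time"], model.predict(X))[0]
        for name, model in models.items()
    }
    best = max(c_indexes, key=c_indexes.get)
    return models[best], c_indexes
```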

Deep learning models based on multiparametric magnetic resonance imaging and clinical parameters for identifying synchronous liver metastases from rectal cancer.

Sun J, Wu PY, Shen F, Chen X, She J, Luo M, Feng F, Zheng D

PubMed · May 19, 2025
To establish and validate deep learning (DL) models based on pre-treatment multiparametric magnetic resonance imaging (MRI) of primary rectal cancer and basic clinical data for predicting synchronous liver metastases (SLM) in patients with rectal cancer (RC). In this retrospective study, 176 and 31 patients with RC who underwent multiparametric MRI at two centers were enrolled in the primary and external validation cohorts, respectively. Clinical factors, including sex, primary tumor site, CEA level, and CA199 level, were assessed. A clinical feature (CF) model was first developed by multivariate logistic regression; two residual-network DL models were then constructed from multiparametric MRI of the primary cancer, with and without CF incorporation. The SLM prediction models were validated by 5-fold cross-validation and external validation, and performance was evaluated by decision curve analysis (DCA) and receiver operating characteristic (ROC) analysis. Among the three SLM prediction models, the combined DL model integrating primary tumor MRI and basic clinical data achieved the best performance (AUC = 0.887 in the primary study cohort; AUC = 0.876 in the external validation cohort). In the primary study cohort, the CF model, MRI DL model, and combined DL model achieved AUCs of 0.816 (95% CI: 0.750, 0.881), 0.788 (95% CI: 0.720, 0.857), and 0.887 (95% CI: 0.834, 0.940), respectively. In the external validation cohort, the CF model, the DL model without CF, and the DL model with CF achieved AUCs of 0.824 (95% CI: 0.664, 0.984), 0.662 (95% CI: 0.461, 0.863), and 0.876 (95% CI: 0.728, 1.000), respectively. The combined DL model demonstrates promising potential for predicting SLM in patients with RC, thereby enabling individualized imaging strategies. Accurate SLM risk stratification is important for treatment planning and improving prognosis; the proposed DL signature may be employed to better understand an individual patient's SLM risk, aiding treatment planning and the selection of further imaging examinations to personalize clinical decisions. Trial registration: not applicable.
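A minimal sketch of a combined image-plus-clinical model in the spirit of the one described above, concatenating residual-network image features with a small clinical branch before a binary SLM head; the backbone choice, layer sizes, and fusion scheme are assumptions, not the authors' architecture.

```python
# Sketch: fuse ResNet image features with clinical covariates (sex, tumor site,
# CEA, CA199) for a binary SLM prediction head. Sizes are illustrative.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class CombinedSLMModel(nn.Module):
    def __init__(self, n_clinical: int = 4):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()                      # expose 512-d image features
        self.backbone = backbone
        self.clinical = nn.Sequential(nn.Linear(n_clinical, 16), nn.ReLU())
        self.head = nn.Linear(512 + 16, 2)               # SLM-positive vs. SLM-negative

    def forward(self, image: torch.Tensor, clinical: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.backbone(image), self.clinical(clinical)], dim=1)
        return self.head(feats)

# Example forward pass with placeholder tensors:
# logits = CombinedSLMModel()(torch.randn(2, 3, 224, 224), torch.randn(2, 4))
```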

Advances in pancreatic cancer diagnosis: from DNA methylation to AI-Assisted imaging.

Sharma R, Komal K, Kumar S, Ghosh R, Pandey P, Gupta GD, Kumar M

PubMed · May 19, 2025
Pancreatic cancer (PC) is a highly aggressive tumor that is mainly diagnosed at later stages. Imaging technologies such as CT, MRI, and EUS have limitations in early PC diagnosis. This review therefore explores innovative biomarkers for PC detection, such as DNA methylation, non-coding RNAs, and proteomic biomarkers, and the role of AI in detecting PC at early stages. DNA methylation markers show high specificity and sensitivity in PC diagnosis. Various non-coding RNAs, such as long non-coding RNAs (lncRNAs) and microRNAs, show high diagnostic accuracy and serve as diagnostic and prognostic biomarkers, and proteomic biomarkers retain high diagnostic accuracy across different body fluids. In addition, studies utilizing AI have reported that AI can surpass radiologists' diagnostic performance in PC detection. The combination of AI and advanced biomarkers could revolutionize early PC detection; however, large-scale prospective studies are needed to validate its clinical utility. Furthermore, standardization of biomarker panels and AI algorithms is a vital step toward their reliable application in early PC detection, ultimately improving patient outcomes.

Federated Learning for Renal Tumor Segmentation and Classification on Multi-Center MRI Dataset.

Nguyen DT, Imami M, Zhao LM, Wu J, Borhani A, Mohseni A, Khunte M, Zhong Z, Shi V, Yao S, Wang Y, Loizou N, Silva AC, Zhang PJ, Zhang Z, Jiao Z, Kamel I, Liao WH, Bai H

PubMed · May 19, 2025
Deep learning (DL) models for accurate renal tumor characterization may benefit from multi-center datasets for improved generalizability; however, data-sharing constraints necessitate privacy-preserving solutions such as federated learning (FL). The aim of this retrospective multi-center study was to assess the performance and reliability of FL for renal tumor segmentation and classification on multi-institutional MRI datasets. A total of 987 patients (403 female) from six hospitals were included in the analysis; 73% (723/987) had malignant renal tumors, primarily clear cell carcinoma (n = 509). Patients were split into training (n = 785), validation (n = 104), and test (n = 99) sets, stratified across three simulated institutions. MRI was performed at 1.5 T and 3 T using T2-weighted imaging (T2WI) and contrast-enhanced T1-weighted imaging (CE-T1WI) sequences. Both FL and non-FL approaches used nnU-Net for tumor segmentation and ResNet for tumor classification. The FL approach trained models across three simulated institutional clients with central weight aggregation, while the non-FL approach used centralized training on the full dataset. Segmentation was evaluated using Dice coefficients, and classification of malignant versus benign lesions was assessed using accuracy, sensitivity, specificity, and area under the curve (AUC). FL and non-FL performance was compared using the Wilcoxon test for segmentation Dice and DeLong's test for AUC (p < 0.05). No significant difference was observed between FL and non-FL models in segmentation (Dice: 0.43 vs. 0.45, p = 0.202) or classification (AUC: 0.69 vs. 0.64, p = 0.959) on the test set. For classification, there was also no significant difference between the models in accuracy (p = 0.912), sensitivity (p = 0.862), or specificity (p = 0.847). FL demonstrated performance comparable to non-FL approaches in renal tumor segmentation and classification, supporting its potential as a privacy-preserving alternative for multi-institutional DL models. Level of Evidence: 4. Technical Efficacy: Stage 2.
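A minimal sketch of FedAvg-style central weight aggregation across simulated institutional clients, the kind of scheme described above; the sample-count weighting and state-dict handling are assumptions rather than the study's exact FL configuration.

```python
# Sketch: FedAvg central aggregation -- average the clients' model weights,
# weighted by how many local samples each client trained on. Model objects
# and sample counts are placeholders.
from typing import Dict, List
import torch

def fedavg(client_states: List[Dict[str, torch.Tensor]],
           n_samples: List[int]) -> Dict[str, torch.Tensor]:
    """Return the weighted average of the clients' state_dicts."""
    total = float(sum(n_samples))
    global_state: Dict[str, torch.Tensor] = {}
    for key in client_states[0]:
        global_state[key] = sum(
            (n / total) * state[key].float()
            for state, n in zip(client_states, n_samples)
        )
    return global_state

# One communication round (hypothetical objects):
# global_model.load_state_dict(
#     fedavg([m.state_dict() for m in client_models], samples_per_client))
```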

Transformer model based on Sonazoid contrast-enhanced ultrasound for microvascular invasion prediction in hepatocellular carcinoma.

Qin Q, Pang J, Li J, Gao R, Wen R, Wu Y, Liang L, Que Q, Liu C, Peng J, Lv Y, He Y, Lin P, Yang H

PubMed · May 19, 2025
Microvascular invasion (MVI) is strongly associated with the prognosis of patients with hepatocellular carcinoma (HCC). This study evaluated the value of Transformer models with Sonazoid contrast-enhanced ultrasound (CEUS) for the preoperative prediction of MVI. This retrospective study included 164 HCC patients. Deep learning features and radiomic features were extracted from arterial-phase and Kupffer-phase images, and clinicopathological parameters were collected. Normality was assessed using the Shapiro-Wilk test. The Mann-Whitney U-test and the least absolute shrinkage and selection operator (LASSO) algorithm were applied to screen features. Transformer, radiomic, and clinical prediction models for MVI were constructed with logistic regression. Repeated random splits followed a 7:3 ratio, with model performance evaluated over 50 iterations. The area under the receiver operating characteristic curve (AUC, with 95% confidence interval [CI]), sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), decision curves, and calibration curves were used to evaluate model performance. The DeLong test was applied to compare models, with the Bonferroni method used to control the type I error rate arising from multiple comparisons; a two-sided p-value < 0.05 was considered statistically significant. In the training set, the diagnostic performance of the arterial-phase Transformer (AT) and Kupffer-phase Transformer (KT) models was better than that of the radiomic and clinical (Clin) models (p < 0.0001). In the validation set, both the AT and KT models also outperformed the radiomic and Clin models (p < 0.05). The AUC (95% CI) of the AT model was 0.821 (0.720-0.925) with an accuracy of 80.0%, and that of the KT model was 0.859 (0.766-0.977) with an accuracy of 70.0%. Logistic regression analysis indicated that tumor size (p = 0.016) and alpha-fetoprotein (AFP) (p = 0.046) were independent predictors of MVI. Transformer models using Sonazoid CEUS show potential for effectively identifying MVI-positive patients preoperatively.
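A minimal sketch of the feature screening and modelling steps described above: a Mann-Whitney U filter, LASSO selection, and a logistic-regression MVI model evaluated by AUC; the thresholds, LASSO variant, and apparent (in-sample) AUC are illustrative simplifications, not the study's code.

```python
# Sketch: univariate Mann-Whitney U screening, LASSO feature selection, then a
# logistic-regression MVI model with an apparent ROC AUC. Inputs are placeholders.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import roc_auc_score

def build_mvi_model(X: np.ndarray, y: np.ndarray, alpha: float = 0.05):
    """X: (n_patients, n_features); y: 1 = MVI-positive, 0 = MVI-negative."""
    # 1) Univariate screen: keep features differing between MVI+ and MVI- groups.
    keep = [j for j in range(X.shape[1])
            if mannwhitneyu(X[y == 1, j], X[y == 0, j]).pvalue < alpha]
    # 2) LASSO selection on the screened features.
    lasso = LassoCV(cv=5, random_state=0).fit(X[:, keep], y)
    selected = [keep[j] for j, coef in enumerate(lasso.coef_) if coef != 0]
    # 3) Logistic-regression prediction model on the selected features.
    clf = LogisticRegression(max_iter=1000).fit(X[:, selected], y)
    auc = roc_auc_score(y, clf.predict_proba(X[:, selected])[:, 1])  # apparent AUC
    return clf, selected, auc
```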