
Renal Transplant Survival Prediction From Unsupervised Deep Learning-Based Radiomics on Early Dynamic Contrast-Enhanced MRI.

Milecki L, Bodard S, Kalogeiton V, Poinard F, Tissier AM, Boudhabhay I, Correas JM, Anglicheau D, Vakalopoulou M, Timsit MO

pubmed · May 23 2025
End-stage renal disease is characterized by an irreversible decline in kidney function. Despite a risk of chronic dysfunction of the transplanted kidney, renal transplantation is considered the most effective solution among available treatment options. Clinical attributes of graft survival prediction, such as allocation variables or results of pathological examinations, have been widely studied. Nevertheless, medical imaging is clinically used only to assess current transplant status. This study investigated the use of unsupervised deep learning-based algorithms to identify rich radiomic features that may be linked to graft survival from early dynamic contrast-enhanced magnetic resonance imaging data of renal transplants. A retrospective cohort of 108 transplanted patients (mean age 50 ± 15 years, 67 men) undergoing systematic magnetic resonance imaging follow-up examinations (2013 to 2015) was used to train deep convolutional neural network models based on an unsupervised contrastive learning approach. A 5-year graft survival analysis was performed on the obtained artificial intelligence radiomics features using penalized Cox models and Kaplan-Meier estimates. Using a validation set of 48 patients (mean age 54 ± 13 years, 30 men) with 1-month post-transplantation magnetic resonance imaging examinations, the proposed approach demonstrated promising 5-year graft survival prediction, with a concordance index of 72.7% from the artificial intelligence radiomics features. Unsupervised clustering of these radiomics features enabled statistically significant stratification of patients (p = 0.029). This proof-of-concept study demonstrated the potential of artificial intelligence algorithms to extract relevant radiomics features that enable renal transplant survival prediction. Further studies are needed to demonstrate the robustness of this technique and to identify appropriate procedures for integrating such an approach into multimodal and clinical settings.
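
As a rough illustration of the survival-analysis step described above (not the authors' code), the sketch below fits a penalized Cox model on a synthetic feature matrix, reports the concordance index, and stratifies patients into Kaplan-Meier risk groups; the feature names, penalty strength, and follow-up columns are assumptions.

```python
# Hypothetical sketch: penalized Cox regression + Kaplan-Meier stratification
# on deep-learning radiomics features (column names, penalty, data are assumptions).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n, p = 108, 16                                   # cohort size and feature count are illustrative
X = pd.DataFrame(rng.normal(size=(n, p)), columns=[f"ai_feat_{i}" for i in range(p)])
df = X.copy()
df["time_months"] = rng.exponential(60, size=n)  # follow-up time (synthetic)
df["graft_loss"] = rng.integers(0, 2, size=n)    # event indicator (synthetic)

# Penalized Cox model (elastic-net-style penalty via `penalizer`).
cph = CoxPHFitter(penalizer=0.1, l1_ratio=0.5)
cph.fit(df, duration_col="time_months", event_col="graft_loss")
print("Concordance index:", round(cph.concordance_index_, 3))

# Kaplan-Meier curves for two risk groups split at the median linear predictor.
risk = cph.predict_partial_hazard(df)
group_hi, group_lo = df[risk >= risk.median()], df[risk < risk.median()]
km_hi, km_lo = KaplanMeierFitter(), KaplanMeierFitter()
km_hi.fit(group_hi["time_months"], group_hi["graft_loss"], label="high risk")
km_lo.fit(group_lo["time_months"], group_lo["graft_loss"], label="low risk")
res = logrank_test(group_hi["time_months"], group_lo["time_months"],
                   group_hi["graft_loss"], group_lo["graft_loss"])
print("Log-rank p-value:", round(res.p_value, 3))
```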

Deep Learning and Radiomic Signatures Associated with Tumor Immune Heterogeneity Predict Microvascular Invasion in Colon Cancer.

Jia J, Wang J, Zhang Y, Bai G, Han L, Niu Y

pubmed · May 23 2025
This study aims to develop and validate a deep learning radiomics signature (DLRS) that integrates radiomics and deep learning features for the non-invasive prediction of microvascular invasion (MVI) in patients with colon cancer (CC). Furthermore, the study explores the potential association between the DLRS and tumor immune heterogeneity. This multi-center retrospective study included a total of 1007 patients with CC from three medical centers and The Cancer Genome Atlas (TCGA-COAD) database. Patients from Medical Centers 1 and 2 were divided into a training cohort (n = 592) and an internal validation cohort (n = 255) in a 7:3 ratio. Medical Center 3 (n = 135) and the TCGA-COAD database (n = 25) were used as external validation cohorts. Radiomics and deep learning features were extracted from contrast-enhanced venous-phase CT images. Feature selection was performed using machine learning algorithms, and three predictive models were developed: a radiomics model, a deep learning (DL) model, and a combined deep learning radiomics (DLR) model. The predictive performance of each model was evaluated using multiple metrics, including the area under the curve (AUC), sensitivity, and specificity. Additionally, differential gene expression analysis was conducted on RNA-seq data from the TCGA-COAD dataset to explore the association between the DLRS and tumor immune heterogeneity within the tumor microenvironment. Compared to the standalone radiomics and deep learning models, the DLR fusion model demonstrated superior predictive performance. The AUC for the internal validation cohort was 0.883 (95% CI: 0.828-0.937), while the AUC for the external validation cohort reached 0.855 (95% CI: 0.775-0.935). Furthermore, stratifying patients from the TCGA-COAD dataset into high-risk and low-risk groups based on the DLRS revealed significant differences in immune cell infiltration and immune checkpoint expression between the two groups (P < 0.05). The contrast-enhanced CT-based DLR fusion model developed in this study effectively predicts MVI status in patients with CC. This model serves as a non-invasive preoperative assessment tool and reveals a potential association between the DLRS and immune heterogeneity within the tumor microenvironment, providing insights to optimize individualized treatment strategies.
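
The sketch below is a minimal, hedged illustration of feature-level fusion and the reported metrics (AUC, sensitivity, specificity); the logistic-regression fusion head, feature dimensions, and synthetic labels are assumptions, not the study's DLR model.

```python
# Hypothetical sketch: fuse handcrafted radiomics and deep-learning features,
# then evaluate AUC / sensitivity / specificity on a held-out split.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 847                                              # illustrative cohort size
radiomics = rng.normal(size=(n, 50))                 # e.g., handcrafted CT radiomics
deep_feats = rng.normal(size=(n, 128))               # e.g., CNN embedding
y = rng.integers(0, 2, size=n)                       # MVI status (synthetic labels)

X = np.concatenate([radiomics, deep_feats], axis=1)  # simple feature-level fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
prob = clf.predict_proba(X_te)[:, 1]
pred = (prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("AUC:", round(roc_auc_score(y_te, prob), 3))
print("Sensitivity:", round(tp / (tp + fn), 3))
print("Specificity:", round(tn / (tn + fp), 3))
```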

Radiomics-Based Early Triage of Prostate Cancer: A Multicenter Study from the CHAIMELEON Project

Vraka, A., Marfil-Trujillo, M., Ribas-Despuig, G., Flor-Arnal, S., Cerda-Alberich, L., Jimenez-Gomez, P., Jimenez-Pastor, A., Marti-Bonmati, L.

medrxiv preprint · May 22 2025
Prostate cancer (PCa) is the most commonly diagnosed malignancy in men worldwide. Accurate triage of patients based on tumor aggressiveness and staging is critical for selecting appropriate management pathways. While magnetic resonance imaging (MRI) has become a mainstay in PCa diagnosis, most predictive models rely on multiparametric imaging or invasive inputs, limiting generalizability in real-world clinical settings. This study aimed to develop and validate machine learning (ML) models using radiomic features extracted from T2-weighted MRI, alone and in combination with clinical variables, to predict ISUP grade (tumor aggressiveness), lymph node involvement (cN) and distant metastasis (cM). A retrospective multicenter cohort from three European sites in the CHAIMELEON project was analyzed. Radiomic features were extracted from prostate zone segmentations and lesion masks, following standardized preprocessing and ComBat harmonization. Feature selection and model optimization were performed using nested cross-validation and Bayesian tuning. Hybrid models were trained using XGBoost and interpreted with SHAP values. The ISUP model achieved an AUC of 0.66, while the cN and cM models reached AUCs of 0.77 and 0.80, respectively. The best-performing models consistently combined prostate zone radiomics with clinical features such as PSA, PIRADSv2 and ISUP grade. SHAP analysis confirmed the importance of both clinical and texture-based radiomic features, with entropy and non-uniformity measures playing central roles in all tasks. Our results demonstrate the feasibility of using T2-weighted MRI and zonal radiomics for robust prediction of aggressiveness, nodal involvement and distant metastasis in PCa. This fully automated pipeline offers an interpretable, accessible and clinically translatable tool for first-line PCa triage, with potential integration into real-world diagnostic workflows.
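
For readers unfamiliar with the XGBoost-plus-SHAP pattern mentioned above, a minimal sketch follows; the feature names (PSA, PIRADS, entropy, non-uniformity), hyperparameters, and synthetic labels are assumptions used only for illustration.

```python
# Hypothetical sketch: an XGBoost classifier on radiomic + clinical features,
# interpreted with SHAP values to rank feature importance.
import numpy as np
import pandas as pd
import xgboost as xgb
import shap

rng = np.random.default_rng(0)
n = 300
X = pd.DataFrame({
    "psa": rng.lognormal(2, 0.5, n),                 # clinical variable
    "pirads": rng.integers(2, 6, n).astype(float),   # clinical variable
    "glcm_entropy": rng.normal(size=n),              # zonal radiomic feature
    "glrlm_non_uniformity": rng.normal(size=n),      # zonal radiomic feature
})
y = rng.integers(0, 2, n)                            # e.g., cN involvement (synthetic)

model = xgb.XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.05,
                          eval_metric="logloss")
model.fit(X, y)

# TreeExplainer gives per-feature, per-sample contributions to the prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
mean_abs = np.abs(shap_values).mean(axis=0)
for name, val in sorted(zip(X.columns, mean_abs), key=lambda t: -t[1]):
    print(f"{name}: mean |SHAP| = {val:.3f}")
```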

Daily proton dose re-calculation on deep-learning corrected cone-beam computed tomography scans.

Vestergaard CD, Muren LP, Elstrøm UV, Stolarczyk L, Nørrevang O, Petersen SE, Taasti VT

pubmed · May 22 2025
Synthetic CT (sCT) generation from cone-beam CT (CBCT) must maintain stable performance and allow for accurate dose calculation across all treatment fractions to effectively support adaptive proton therapy. This study evaluated a 3D deep-learning (DL) network for sCT generation for prostate cancer patients over the full treatment course. Data from 25 prostate cancer patients were used to train the DL network and data from six patients to test it. Patients in the test set had a planning CT, 39 CBCT images, and at least one repeat CT (reCT) used for replanning. The generated sCT images were compared to fan-beam planning CT and reCT images in terms of i) CT number accuracy and stability within spherical regions-of-interest (ROIs) in the bladder, prostate, and femoral heads, ii) proton range calculation accuracy through single-spot plans, and iii) dose trends in target coverage over the treatment course (one patient). The sCT images demonstrated image quality comparable to CT, while preserving the CBCT anatomy. The mean CT numbers on the sCT and CT images were comparable, e.g., for the prostate ROI they ranged from 29 HU to 59 HU for sCT, and from 36 HU to 50 HU for CT. The largest median proton range difference was 1.9 mm. Proton dose calculations showed excellent target coverage (V95% ≥ 99.6%) for the high-dose target. The DL network effectively generated high-quality sCT images with CT numbers, proton range, and dose characteristics comparable to fan-beam CT. Its robustness against intra-patient variations makes it a feasible tool for adaptive proton therapy.
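
A hedged sketch of the CT-number comparison step described above: mean HU inside a spherical ROI, computed on synthetic volumes. The ROI center, radius, voxel spacing, and image content are assumptions, not the study's data.

```python
# Hypothetical sketch: mean CT number (HU) inside a spherical ROI,
# used to compare sCT and planning-CT stability.
import numpy as np

def spherical_roi_mask(shape, center_vox, radius_mm, spacing_mm):
    """Boolean mask of a sphere given voxel center, radius in mm, and voxel spacing."""
    zz, yy, xx = np.indices(shape)
    dz = (zz - center_vox[0]) * spacing_mm[0]
    dy = (yy - center_vox[1]) * spacing_mm[1]
    dx = (xx - center_vox[2]) * spacing_mm[2]
    return dz**2 + dy**2 + dx**2 <= radius_mm**2

# Synthetic volumes standing in for an sCT and a planning CT (values in HU).
rng = np.random.default_rng(0)
shape, spacing = (80, 256, 256), (2.0, 1.0, 1.0)
sct = rng.normal(40, 15, size=shape)
ct = rng.normal(42, 15, size=shape)

mask = spherical_roi_mask(shape, center_vox=(40, 128, 128), radius_mm=10.0, spacing_mm=spacing)
print("sCT mean HU:", round(float(sct[mask].mean()), 1))
print("CT  mean HU:", round(float(ct[mask].mean()), 1))
print("Difference :", round(float(sct[mask].mean() - ct[mask].mean()), 1))
```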

FasNet: a hybrid deep learning model with attention mechanisms and uncertainty estimation for liver tumor segmentation on LiTS17.

Singh R, Gupta S, Almogren A, Rehman AU, Bharany S, Altameem A, Choi J

pubmed · May 21 2025
Liver cancer, especially hepatocellular carcinoma (HCC), remains one of the most fatal cancers globally, emphasizing the critical need for accurate tumor segmentation to enable timely diagnosis and effective treatment planning. Traditional imaging techniques, such as CT and MRI, rely on manual interpretation, which can be both time-intensive and subject to variability. This study introduces FasNet, an innovative hybrid deep learning model that combines ResNet-50 and VGG-16 architectures, incorporating Channel and Spatial Attention mechanisms alongside Monte Carlo Dropout to improve segmentation precision and reliability. FasNet leverages ResNet-50's robust feature extraction and VGG-16's detailed spatial feature capture to deliver superior liver tumor segmentation accuracy. The channel and spatial attention mechanisms selectively focus on the most relevant features and spatial regions, supporting accurate and reliable segmentation. Monte Carlo Dropout estimates uncertainty and adds robustness, which is critical for high-stakes medical applications. Tested on the LiTS17 dataset, FasNet achieved a Dice Coefficient of 0.8766 and a Jaccard Index of 0.8487, surpassing several state-of-the-art methods. The Channel and Spatial Attention mechanisms in FasNet enhance feature selection, focusing on the most relevant spatial and channel information, while Monte Carlo Dropout improves model robustness and uncertainty estimation. These results position FasNet as a powerful diagnostic tool, offering precise and automated liver tumor segmentation that aids in early detection and treatment, ultimately enhancing patient outcomes.
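
The sketch below illustrates the general Monte Carlo Dropout technique referenced above: dropout layers are kept active at inference and multiple stochastic forward passes yield a mean prediction and a voxel-wise uncertainty map. The tiny network, dropout rate, and input are assumptions and stand-ins, not FasNet itself.

```python
# Hypothetical sketch: Monte Carlo Dropout at inference time to estimate
# per-pixel segmentation uncertainty with a toy PyTorch network.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p=0.5),                      # kept active during MC sampling
            nn.Conv2d(8, 1, 3, padding=1),
        )

    def forward(self, x):
        return torch.sigmoid(self.features(x))

def mc_dropout_predict(model, x, n_samples=20):
    model.eval()
    # Re-enable only the dropout layers, leaving e.g. batch norm in eval mode.
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d)):
            m.train()
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)   # mean mask, pixel-wise uncertainty

model = TinySegNet()
ct_slice = torch.randn(1, 1, 128, 128)               # synthetic input slice
mean_pred, uncertainty = mc_dropout_predict(model, ct_slice)
print("Mean foreground prob:", float(mean_pred.mean()))
print("Mean predictive std :", float(uncertainty.mean()))
```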

Deep Learning with Domain Randomization in Image and Feature Spaces for Abdominal Multiorgan Segmentation on CT and MRI Scans.

Shi Y, Wang L, Qureshi TA, Deng Z, Xie Y, Li D

pubmed · May 21 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To develop a deep learning segmentation model that can segment abdominal organs on CT and MR images with high accuracy and generalization ability. Materials and Methods In this study, an extended nnU-Net model was trained for abdominal organ segmentation. A domain randomization method in both the image and feature space was developed to improve the generalization ability under cross-site and cross-modality settings on public prostate MRI and abdominal CT and MRI datasets. The prostate MRI dataset contains data from multiple health care institutions with domain shifts. The abdominal CT and MRI dataset is structured for cross-modality evaluation, training on one modality (eg, MRI) and testing on the other (eg, CT). This domain randomization method was then used to train a segmentation model with enhanced generalization ability on the abdominal multiorgan segmentation challenge (AMOS) dataset to improve abdominal CT and MR multiorgan segmentation, and the model was compared with two commonly used segmentation algorithms (TotalSegmentator and MRSegmentator). Model performance was evaluated using the Dice similarity coefficient (DSC). Results The proposed domain randomization method showed improved generalization ability on the cross-site and cross-modality datasets compared with the state-of-the-art methods. The segmentation model using this method outperformed two other publicly available segmentation models on data from unseen test domains (Average DSC: 0.88 versus 0.79; <i>P</i> < .001 and 0.88 versus 0.76; <i>P</i> < .001). Conclusion The combination of image and feature domain randomizations improved the accuracy and generalization ability of deep learning-based abdominal segmentation on CT and MR images. © RSNA, 2025.

An Ultrasound Image-Based Deep Learning Radiomics Nomogram for Differentiating Between Benign and Malignant Indeterminate Cytology (Bethesda III) Thyroid Nodules: A Retrospective Study.

Zhong L, Shi L, Li W, Zhou L, Wang K, Gu L

pubmed · May 21 2025
Our objective is to develop and validate a deep learning radiomics nomogram (DLRN) based on preoperative ultrasound images and clinical features for predicting the malignancy of thyroid nodules with indeterminate cytology (Bethesda III). Between June 2017 and June 2022, we conducted a retrospective study of 194 patients at our hospital with cytologically indeterminate (Bethesda III) thyroid nodules and surgical confirmation. The training and internal validation cohorts consisted of 155 and 39 patients, respectively, in a 7:3 ratio. To facilitate external validation, we selected an additional 80 patients from each of the remaining two medical centers. Utilizing preoperative ultrasound data, we obtained imaging markers that encompass both deep learning and manually extracted radiomic features. After feature selection, we developed a comprehensive diagnostic model to evaluate the predictive value for Bethesda III benign and malignant cases. The model's diagnostic accuracy, calibration, and clinical applicability were systematically assessed. The results showed that the prediction model, which integrated 512 DTL features extracted from the pre-trained ResNet34 network, ultrasound radiomics, and clinical features, exhibited superior stability in distinguishing between benign and malignant indeterminate thyroid nodules (Bethesda Class III). In the validation set, the AUC was 0.92 (95% CI: 0.831-1.000), and the accuracy, sensitivity, specificity, precision, and recall were 0.897, 0.882, 0.909, 0.882, and 0.882, respectively. The comprehensive multidimensional data model based on deep transfer learning, ultrasound radiomics features, and clinical characteristics can effectively distinguish between benign and malignant indeterminate thyroid nodules (Bethesda Class III), providing valuable guidance for treatment selection in patients with indeterminate thyroid nodules (Bethesda Class III).
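
One common way to obtain a 512-dimensional deep-transfer-learning (DTL) feature vector, as described above, is to drop the classification head of a pretrained ResNet34. The sketch below shows that pattern under stated assumptions (ImageNet weights, 224×224 three-channel crops); it is not the authors' pipeline.

```python
# Hypothetical sketch: extracting 512-d DTL features from a pretrained ResNet34.
import torch
import torch.nn as nn
from torchvision import models

resnet = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)  # downloads ImageNet weights
resnet.fc = nn.Identity()          # drop the classification head, keep the 512-d embedding
resnet.eval()

# A grayscale ultrasound crop would be replicated to 3 channels and resized to 224x224.
ultrasound_batch = torch.randn(4, 3, 224, 224)        # synthetic stand-in images
with torch.no_grad():
    features = resnet(ultrasound_batch)
print(features.shape)              # torch.Size([4, 512])
```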

Challenges in Using Deep Neural Networks Across Multiple Readers in Delineating Prostate Gland Anatomy.

Abudalou S, Choi J, Gage K, Pow-Sang J, Yilmaz Y, Balagurunathan Y

pubmed · May 20 2025
Deep learning methods provide enormous promise in automating manually intense tasks such as medical image segmentation and provide workflow assistance to clinical experts. Deep neural networks (DNNs) require a significant amount of training examples and a variety of expert opinions to capture the nuances and the context, a challenging proposition in oncological studies (H. Wang et al., Nature, vol. 620, no. 7972, pp. 47-60, Aug 2023). Inter-reader variability among clinical experts is a real-world problem that severely impacts the generalization and reproducibility of DNNs. This study proposes quantifying the variability in DNN performance using expert opinions and exploring strategies to train the network and adapt between expert opinions. We address the inter-reader variability problem in the context of prostate gland segmentation using a well-studied DNN, the 3D U-Net model. Reference data include magnetic resonance imaging (MRI, T2-weighted) with prostate glandular anatomy annotations from two expert readers (R#1, n = 342 and R#2, n = 204). The 3D U-Net was trained and tested with individual expert examples (R#1 and R#2) and had average Dice coefficients of 0.825 (CI: [0.81, 0.84]) and 0.85 (CI: [0.82, 0.88]), respectively. Combined training with a representative cohort proportion (R#1, n = 100 and R#2, n = 150) yielded enhanced model reproducibility across readers, achieving an average test Dice coefficient of 0.863 (CI: [0.85, 0.87]) for R#1 and 0.869 (CI: [0.87, 0.88]) for R#2. We re-evaluated the model performance across gland volumes (large, small) and found improved performance for large glands, with average Dice coefficients of 0.846 (CI: [0.82, 0.87]) and 0.872 (CI: [0.86, 0.89]) for R#1 and R#2, respectively, estimated using fivefold cross-validation. Performance for small glands diminished, with average Dice coefficients of 0.80 (CI: [0.79, 0.82]) and 0.80 (CI: [0.79, 0.83]) for R#1 and R#2, respectively.
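
A hedged sketch of how a confidence interval for a reader's mean Dice coefficient, like those quoted above, can be obtained with a percentile bootstrap; the per-case Dice scores here are synthetic and the bootstrap settings are assumptions.

```python
# Hypothetical sketch: percentile-bootstrap 95% CI for a mean Dice coefficient.
import numpy as np

def bootstrap_ci(values, n_boot=2000, alpha=0.05, seed=0):
    """Mean of `values` with a percentile bootstrap (1 - alpha) CI."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values)
    means = [rng.choice(values, size=values.size, replace=True).mean() for _ in range(n_boot)]
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return values.mean(), (lo, hi)

rng = np.random.default_rng(1)
dice_reader1 = np.clip(rng.normal(0.83, 0.05, size=68), 0, 1)   # synthetic per-case DSCs
mean, (lo, hi) = bootstrap_ci(dice_reader1)
print(f"Mean Dice {mean:.3f} (95% CI [{lo:.2f}, {hi:.2f}])")
```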

Pancreas segmentation in CT scans: A novel MOMUNet based workflow.

Juwita J, Hassan GM, Datta A

pubmed · May 20 2025
Automatic pancreas segmentation in CT scans is crucial for various medical applications, including early diagnosis and computer-assisted surgery. However, existing segmentation methods remain suboptimal due to significant pancreas size variations across slices and severe class imbalance caused by the pancreas's small size and CT scanner movement during imaging. Traditional computer vision techniques struggle with these challenges, while deep learning-based approaches, despite their success in other domains, still face limitations in pancreas segmentation. To address these issues, we propose a novel, three-stage workflow that enhances segmentation accuracy and computational efficiency. First, we introduce External Contour Cropping (ECC), a background cleansing technique that mitigates class imbalance. Second, we propose a Size Ratio (SR) technique that restructures the training dataset based on the relative size of the target organ, improving the robustness of the model against anatomical variations. Third, we develop MOMUNet, an ultra-lightweight segmentation model with only 1.31 million parameters, designed for optimal performance on limited computational resources. Our proposed workflow achieves an improvement in Dice Score (DSC) of 2.56% over state-of-the-art (SOTA) models on the NIH-Pancreas dataset and 2.97% on the MSD-Pancreas dataset. Furthermore, applying the proposed model to another small structure, colon cancer segmentation on the MSD-Colon dataset, yielded a DSC of 68.4%, surpassing the SOTA models. These results demonstrate the effectiveness of our approach in significantly improving segmentation accuracy for small abdominal organs, including the pancreas and colon, making deep learning more accessible for low-resource medical facilities.
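
The sketch below shows one simple way a background-cleansing step in the spirit of External Contour Cropping could work: threshold out air, find the body's bounding box, and crop. The -500 HU threshold, margin, and synthetic slice are assumptions; the paper's ECC technique may differ.

```python
# Hypothetical sketch: crop a CT slice to the bounding box of the patient's body
# to reduce background and mitigate class imbalance.
import numpy as np

def crop_to_body(ct_slice_hu, air_threshold=-500, margin=5):
    """Crop a 2D CT slice (in HU) to the body's bounding box plus a small margin."""
    body = ct_slice_hu > air_threshold
    rows, cols = np.any(body, axis=1), np.any(body, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    r0, c0 = max(r0 - margin, 0), max(c0 - margin, 0)
    r1 = min(r1 + margin, ct_slice_hu.shape[0] - 1)
    c1 = min(c1 + margin, ct_slice_hu.shape[1] - 1)
    return ct_slice_hu[r0:r1 + 1, c0:c1 + 1]

# Synthetic slice: air everywhere except a soft-tissue "body" region in the middle.
ct = np.full((512, 512), -1000.0)
ct[128:384, 160:352] = 40.0
print("Original shape:", ct.shape, "-> cropped shape:", crop_to_body(ct).shape)
```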

Non-Invasive Tumor Budding Evaluation and Correlation with Treatment Response in Bladder Cancer: A Multi-Center Cohort Study.

Li X, Zou C, Wang C, Chang C, Lin Y, Liang S, Zheng H, Liu L, Deng K, Zhang L, Liu B, Gao M, Cai P, Lao J, Xu L, Wu D, Zhao X, Wu X, Li X, Luo Y, Zhong W, Lin T

pubmed · May 20 2025
The clinical benefits of neoadjuvant chemoimmunotherapy (NACI) have been demonstrated in patients with bladder cancer (BCa); however, more than half fail to achieve a pathological complete response (pCR). This study utilizes multi-center cohorts of 2322 patients with pathologically diagnosed BCa, collected between January 1, 2014, and December 31, 2023, to explore the correlation between tumor budding (TB) status and NACI response and disease prognosis. A deep learning model is developed to noninvasively evaluate TB status based on CT images. The deep learning model accurately predicts TB status, with area under the curve values of 0.932 (95% confidence interval: 0.898-0.965) in the training cohort, 0.944 (0.897-0.991) in the internal validation cohort, 0.882 (0.832-0.933) in external validation cohort 1, 0.944 (0.908-0.981) in external validation cohort 2, and 0.854 (0.739-0.970) in the NACI validation cohort. Patients predicted to have a high TB status exhibit a worse prognosis (p < 0.05) and a lower pCR rate of 25.9% (7/20) than those predicted to have a low TB status (pCR rate: 73.9% [17/23]; p < 0.001). Hence, this model may be a reliable, noninvasive tool for predicting TB status, aiding clinicians in prognosis assessment and NACI strategy formulation.
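
As a hedged illustration of the AUC-with-95%-CI reporting used above, the sketch below computes a ROC AUC and a percentile-bootstrap confidence interval on synthetic labels and scores; all data and bootstrap settings are assumptions.

```python
# Hypothetical sketch: ROC AUC with a percentile-bootstrap 95% confidence interval.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)                                    # TB status (synthetic)
scores = np.clip(y * 0.5 + rng.normal(0.3, 0.25, size=200), 0, 1)   # model output (synthetic)

auc = roc_auc_score(y, scores)
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y), len(y))
    if len(np.unique(y[idx])) < 2:                   # a resample needs both classes
        continue
    boot.append(roc_auc_score(y[idx], scores[idx]))
lo, hi = np.quantile(boot, [0.025, 0.975])
print(f"AUC {auc:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```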