A study on predicting recurrence of non-muscle-invasive bladder cancer within 2 years using mp-MRI radiomics.

Chen B, Zhou Y, Li Z, Chen J, Zuo J, Wang H, Li Z, Fu S

PubMed · Oct 2 2025
Non-muscle-invasive bladder cancer (NMIBC) has a high rate of postoperative recurrence, and existing clinical prediction models have limited efficacy. This study combined multiparametric magnetic resonance imaging (mp-MRI) radiomic features with clinical characteristics to construct a machine learning model that accurately predicts the risk of recurrence within 2 years after surgery in NMIBC patients. In a retrospective cohort of 183 NMIBC patients (57 in the recurrence group, 126 in the non-recurrence group), radiomic features were extracted from mp-MRI (T2W, ADC, and contrast-enhanced sequences). LASSO selection identified 4 key imaging features (MajorAxisLength, SZNN, S/V, Skewness), which were combined with 6 clinical features based on the EAU 2021 risk stratification to form the clinical-imaging dataset. Among 10 machine learning models compared, the support vector machine (SVM) performed best (training-set AUC = 0.973, validation-set AUC = 0.891), and external independent validation (108 cases) yielded AUCs of 0.88 and 0.87, demonstrating good generalization ability. A nomogram integrating the radiomics score (Rad-Score) with clinical features provides an intuitive prognostic tool. The study indicates that the SVM-based clinical-imaging radiomics model substantially improves NMIBC recurrence prediction, addressing the shortcomings of traditional risk assessment and offering a reliable basis for personalized postoperative management. Limitations include the retrospective design and the absence of molecular biomarkers; multicenter prospective validation is needed.
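
For readers who want to reproduce the general shape of this pipeline, the sketch below shows LASSO-based feature selection followed by an SVM classifier evaluated by AUC, using scikit-learn. The data, feature count, and hyperparameters are placeholders, not the study's actual dataset or settings.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(183, 100))  # placeholder radiomic + clinical feature matrix
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=183) > 0).astype(int)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, stratify=y, random_state=0)

# LASSO keeps only the features with non-zero coefficients
lasso = make_pipeline(StandardScaler(), LassoCV(cv=5)).fit(X_tr, y_tr)
keep = np.flatnonzero(lasso[-1].coef_)

# SVM with probability outputs so AUC can be computed on the held-out split
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
svm.fit(X_tr[:, keep], y_tr)
auc = roc_auc_score(y_va, svm.predict_proba(X_va[:, keep])[:, 1])
print(f"validation AUC: {auc:.3f}")
```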

Interpretable deep learning model and nomogram for predicting pathological grading of PNETs based on endoscopic ultrasound.

Mo S, Zhang Y, Liu N, Jiang R, Yi N, Wang Y, Zhao H, Qin S, Cai H

PubMed · Oct 2 2025
This study aimed to develop and validate an interpretable deep learning (DL) model and a nomogram based on endoscopic ultrasound (EUS) images for predicting the pathological grade of pancreatic neuroendocrine tumors (PNETs). This multicenter retrospective study included 108 patients with PNETs, divided into a training cohort (n = 81, internal center) and a test cohort (n = 27, external centers). Univariate and multivariate logistic regression were used to screen demographic characteristics and EUS semantic features. Deep transfer learning with a pre-trained ResNet18 model was used to extract features from EUS images. Feature selection was performed with the least absolute shrinkage and selection operator (LASSO), and various machine learning algorithms were used to construct DL models. The optimal model was then integrated with clinical features to develop a nomogram. Model performance was assessed using the area under the curve (AUC), calibration curves, decision curve analysis (DCA), and clinical impact curves (CIC). The nomogram, which integrates the optimal DL model (naive Bayes) with clinical features, achieved AUC values of 0.928 (95% CI 0.849–0.981) in the training cohort and 0.882 (95% CI 0.778–0.954) in the test cohort. Calibration curves showed minimal discrepancies between predicted and actual probabilities, with mean absolute errors of 4.5% and 6.6% in the training and test cohorts, respectively. DCA and CIC demonstrated substantial net benefit and clinical utility. The SHapley Additive exPlanations (SHAP) method provided insight into the contribution of each DL feature to the model's predictions. This study developed and validated a novel interpretable DL model and nomogram using EUS images and machine learning, which holds promise for enhancing the clinical application of EUS in grading PNETs. The online version contains supplementary material available at 10.1186/s12911-025-03193-3.
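
As an illustration of the deep transfer-learning step described here, the sketch below extracts 512-dimensional penultimate-layer features from an ImageNet-pretrained ResNet18 using torchvision. The EUS image path is hypothetical, and the study's exact preprocessing may differ.

```python
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT
backbone = models.resnet18(weights=weights)
backbone.fc = torch.nn.Identity()   # expose the 512-d penultimate features
backbone.eval()

preprocess = weights.transforms()   # the matching resize/normalize pipeline

@torch.no_grad()
def extract_features(path: str) -> torch.Tensor:
    """Return a 512-d deep feature vector for one image."""
    img = Image.open(path).convert("RGB")
    return backbone(preprocess(img).unsqueeze(0)).squeeze(0)  # shape (512,)

# feats = extract_features("eus_case_001.png")  # hypothetical EUS image file
```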

Real-Time Deep-Learning Image Reconstruction and Instrument Tracking in MR-Guided Biopsies.

Noordman CR, Te Molder LPW, Maas MC, Overduin CG, Fütterer JJ, Huisman HJ

PubMed · Oct 1 2025
Transrectal in-bore MR-guided biopsy (MRGB) is accurate but time-consuming, limiting clinical throughput. Faster imaging could improve workflow and enable real-time instrument tracking, but existing acceleration methods often rely on simulated data and lack validation in clinical settings. This prospective feasibility study aimed to accelerate MRGB by using deep learning for undersampled image reconstruction and instrument tracking, trained on multi-slice MR DICOM images and evaluated on raw k-space acquisitions. The study included 1289 male patients (aged 44-87, median age 68) for model training and 8 male patients (aged 59-78, median age 65) for prospective feasibility testing, imaged with a 2D Cartesian balanced steady-state free precession sequence at 3 T. Segmentation and reconstruction models were trained on 8464 MRGB confirmation scans containing a biopsy needle guide instrument and evaluated on 10 prospectively acquired dynamic k-space samples. Needle-guide tracking accuracy was assessed using the instrument tip prediction (ITP) error, computed per frame as the Euclidean distance from reference positions defined via pre- and post-movement scans. Feasibility was measured as the proportion of frames with an error below 5 mm, and additional experiments tested model robustness under increasing undersampling rates. In a segmentation validation experiment, a one-sample t-test assessed whether the mean ITP error was below 5 mm, with statistical significance defined as p < 0.05. In the tracking experiments, the mean, standard deviation, and Wilson 95% CI of the ITP success rate were computed per sample across undersampling levels. ITP was first evaluated independently on 201 fully sampled scans, yielding an error of 1.55 ± 1.01 mm (95% CI: 1.41-1.69). Tracking performance remained high under acceleration, with ITP success rates of 97.5% ± 5.8% (68.8%-99.9%) at 8× and 92.5% ± 10.3% (62.5%-98.9%) at 16× undersampling, declining to 74.6% ± 33.6% (43.8%-91.7%) at 18×. The results confirm stable needle-guide tip prediction accuracy and support the robustness of the reconstruction model for tracking at high undersampling. Evidence Level: 2. Technical Efficacy: Stage 2.
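
The two headline metrics here are straightforward to compute: the per-frame ITP error is a Euclidean distance in millimetres, and the success-rate uncertainty uses the Wilson score interval. Below is a small self-contained sketch with placeholder coordinates standing in for predicted and reference tip positions.

```python
import numpy as np

def itp_error(pred_mm: np.ndarray, ref_mm: np.ndarray) -> np.ndarray:
    """Euclidean distance (mm) between predicted and reference tip, per frame."""
    return np.linalg.norm(pred_mm - ref_mm, axis=1)

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (95% CI by default)."""
    p = successes / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

# Placeholder 3D tip coordinates for 40 frames (mm)
rng = np.random.default_rng(0)
errors = itp_error(rng.random((40, 3)) * 4, rng.random((40, 3)) * 4)
ok = int((errors < 5.0).sum())          # frames within the 5 mm threshold
print(wilson_ci(ok, errors.size))
```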

An interpretable hybrid deep learning framework for gastric cancer diagnosis using histopathological imaging.

Ren T, Govindarajan V, Bourouis S, Wang X, Ke S

PubMed · Oct 1 2025
The increasing incidence of gastric cancer and the complexity of histopathological image interpretation present significant challenges for accurate and timely diagnosis. Manual assessment is often subjective and time-intensive, leading to a growing demand for reliable, automated diagnostic tools in digital pathology. This study proposes a hybrid deep learning approach combining convolutional neural networks (CNNs) and Transformer-based architectures to classify gastric histopathological images with high precision. The model is designed to enhance feature representation and spatial contextual understanding, particularly across diverse tissue subtypes and staining variations. Three publicly available datasets (GasHisSDB, TCGA-STAD, and NCT-CRC-HE-100K) were used to train and evaluate the model. Image patches were preprocessed through stain normalization, augmented using standard techniques, and fed into the hybrid model. The CNN backbone extracts local spatial features, while the Transformer encoder captures global context. Performance was assessed using fivefold cross-validation and evaluated through accuracy, F1-score, AUC, and Grad-CAM-based interpretability. The proposed model achieved 99.2% accuracy on the GasHisSDB dataset, with a macro F1-score of 0.991 and an AUC of 0.996. External validation on TCGA-STAD and NCT-CRC-HE-100K further confirmed the model's robustness, and Grad-CAM visualizations highlighted biologically relevant regions, demonstrating interpretability and alignment with expert annotations. This hybrid deep learning framework offers a reliable, interpretable, and generalizable tool for gastric cancer diagnosis; its performance and explainability highlight its clinical potential for deployment in digital pathology workflows.
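
The described architecture pairs a convolutional backbone for local texture with a Transformer encoder for global context. The PyTorch sketch below illustrates that general pattern; the layer sizes, depth, and pooling strategy are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class HybridClassifier(nn.Module):
    def __init__(self, n_classes: int = 2, dim: int = 256):
        super().__init__()
        # CNN backbone: local spatial features, downsampled to (B, dim, H/8, W/8)
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Transformer encoder: global context over the flattened feature grid
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.cnn(x)                        # (B, dim, h, w)
        tokens = f.flatten(2).transpose(1, 2)  # (B, h*w, dim)
        tokens = self.encoder(tokens)
        return self.head(tokens.mean(dim=1))   # mean-pool tokens, then classify

logits = HybridClassifier()(torch.randn(2, 3, 224, 224))  # (2, n_classes)
```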

Application of artificial intelligence in assisting treatment of gynecologic tumors: a systematic review.

Guo L, Zhang S, Chen H, Li Y, Liu Y, Liu W, Wang Q, Tang Z, Jiang P, Wang J

PubMed · Oct 1 2025
In recent years, the application of artificial intelligence (AI) to medical image analysis has drawn increasing attention in clinical studies of gynecologic tumors. This study reviews the development and prospects of AI applications for assisting treatment in gynecological oncology. The Web of Science database was screened for articles published up to August 2023, using the keywords "artificial intelligence," "deep learning," "machine learning," "radiomics," "radiotherapy," "chemoradiotherapy," "neoadjuvant therapy," "immunotherapy," "gynecological malignancy," "cervical carcinoma," "cervical cancer," "ovarian cancer," "endometrial cancer," "vulvar cancer," and "vaginal cancer." Research articles related to AI-assisted treatment of gynecological cancers were included. A total of 317 articles were retrieved, and 133 were selected after applying the inclusion and exclusion criteria: 114 on cervical cancer, 10 on endometrial cancer, and 9 on ovarian cancer. Among the included studies, 44 (33%) focused on prognosis prediction, 24 (18%) on treatment-response prediction, 13 (10%) on adverse-event prediction, 5 (4%) on dose-distribution prediction, and 47 (35%) on target-volume delineation. Target-volume delineation and dose prediction were performed using deep learning methods. For predicting treatment response, prognosis, and adverse events, 57 studies (70%) used conventional radiomics methods, 13 (16%) used deep learning methods, 8 (10%) used spatially oriented unconventional radiomics methods, and 3 (4%) used temporally oriented unconventional radiomics methods. In cervical and endometrial cancers, prediction targets mostly included treatment response, overall survival, recurrence, radiotherapy toxicity, lymph node metastasis, and dose distribution; for ovarian cancer, they included platinum sensitivity and postoperative complications. The majority of the studies were single-center, retrospective, and small-scale: 101 studies (76%) used single-center data, 125 (94%) were retrospective, and 127 (95%) included fewer than 500 cases. The application of AI to assisting treatment in gynecological oncology remains limited: although AI shows strong performance in predicting response, prognosis, adverse events, and dose distribution, these tasks have not yet been validated on substantial multicenter data.

AortaDiff: A Unified Multitask Diffusion Framework For Contrast-Free AAA Imaging

Yuxuan Ou, Ning Bi, Jiazhen Pan, Jiancheng Yang, Boliang Yu, Usama Zidan, Regent Lee, Vicente Grau

arXiv preprint · Oct 1 2025
While contrast-enhanced CT (CECT) is standard for assessing abdominal aortic aneurysms (AAA), the required iodinated contrast agents pose significant risks, including nephrotoxicity, patient allergies, and environmental harm. To reduce contrast agent use, recent deep learning methods have focused on generating synthetic CECT from non-contrast CT (NCCT) scans. However, most adopt a multi-stage pipeline that first generates images and then performs segmentation, which leads to error accumulation and fails to leverage shared semantic and anatomical structures. To address this, we propose a unified deep learning framework that generates synthetic CECT images from NCCT scans while simultaneously segmenting the aortic lumen and thrombus. Our approach integrates conditional diffusion models (CDM) with multi-task learning, enabling end-to-end joint optimization of image synthesis and anatomical segmentation. Unlike previous multitask diffusion models, our approach requires no initial predictions (e.g., a coarse segmentation mask), shares both encoder and decoder parameters across tasks, and employs a semi-supervised training strategy to learn from scans with missing segmentation labels, a common constraint in real-world clinical data. We evaluated our method on a cohort of 264 patients, where it consistently outperformed state-of-the-art single-task and multi-stage models. For image synthesis, our model achieved a PSNR of 25.61 dB, compared to 23.80 dB from a single-task CDM. For anatomical segmentation, it improved the lumen Dice score to 0.89 from 0.87 and the challenging thrombus Dice score to 0.53 from 0.48 (nnU-Net). These segmentation enhancements led to more accurate clinical measurements, reducing the lumen diameter MAE to 4.19 mm from 5.78 mm and the thrombus area error to 33.85% from 41.45% when compared to nnU-Net. Code is available at https://github.com/yuxuanou623/AortaDiff.git.
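
The core of the joint optimization is a single objective combining the synthesis and segmentation terms, with the segmentation loss masked out for unlabeled scans. Below is a minimal sketch of such a loss, assuming MSE for the synthesis term and cross-entropy for segmentation; the paper's exact diffusion losses and weighting may differ.

```python
import torch
import torch.nn.functional as F

def joint_loss(pred_img, target_img, pred_seg, target_seg, has_label, lam=1.0):
    """Semi-supervised multitask loss.

    pred_img/target_img: (B, 1, H, W) synthetic and reference CECT
    pred_seg: (B, C, H, W) segmentation logits; target_seg: (B, H, W) labels
    has_label: (B,) bool mask marking scans that carry segmentation labels
    """
    img_loss = F.mse_loss(pred_img, target_img)          # synthesis term
    if has_label.any():
        # segmentation term only on the labeled subset of the batch
        seg_loss = F.cross_entropy(pred_seg[has_label], target_seg[has_label])
    else:
        seg_loss = pred_seg.sum() * 0.0                  # keep the graph intact
    return img_loss + lam * seg_loss
```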

Automated machine learning for prostate cancer detection and Gleason score prediction using T2WI: a diagnostic multi-center study.

Jin L, Ma Z, Gao F, Li M, Li H, Geng D

PubMed · Oct 1 2025
Prostate cancer (PCa) is one of the most common malignancies in men, and accurate assessment of tumor aggressiveness is crucial for treatment planning. The Gleason score (GS) remains the gold standard for risk stratification, yet it relies on invasive biopsy, which carries inherent risks and sampling errors. The aim of this study was to detect PCa and non-invasively predict the GS, enabling early detection and stratification of clinically significant cases. We used single-modality T2-weighted imaging (T2WI) with an automated machine learning (ML) approach, MLJAR. The internal dataset comprised PCa patients who underwent magnetic resonance imaging (MRI) at our hospital from September 2015 to June 2022, before prostate biopsy, surgery, radiotherapy, or endocrine therapy, and who had corresponding histopathological results. An external dataset from another medical center and a public challenge dataset were used for external validation. The Kolmogorov-Smirnov curve was used to evaluate the risk-differentiation ability of the PCa detection model, and the area under the receiver operating characteristic curve (AUC) was calculated with confidence intervals to compare model performance. The internal MRI dataset included 198 non-PCa and 291 PCa patients with histopathological results obtained through biopsy or surgery; the external and public challenge datasets included 45 and 68 PCa patients, respectively. The AUC for PCa detection in the internal testing cohort (n = 147, PCa = 78) was 0.99. For GS prediction, AUCs in the internal testing cohort (PCa = 88) were: GS 3+3, 0.97; GS 3+4, 0.97; GS 3+5, 1.00; GS 4+3, 0.87; GS 4+4, 0.91; GS 4+5, 0.95; GS 5+4, 1.00; GS 5+5, 0.99. In the external testing cohort (PCa = 45): GS 3+3, 0.95; GS 3+4, 0.76; GS 3+5, 0.77; GS 4+3, 0.88; GS 4+4, 0.82; GS 4+5, 0.87; GS 5+4, 0.95; GS 5+5, 0.85. In the public challenge cohort (PCa = 68): GS 3+4, 0.89; GS 4+3, 0.75; GS 4+4, 0.65; GS 4+5, 0.91. This multi-center study shows that an auto-ML model using only T2WI can accurately detect PCa and predict Gleason scores non-invasively, offering the potential to reduce reliance on biopsy and improve early risk stratification. These results warrant further validation and exploration for integration into clinical workflows.
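
MLJAR here refers to the open-source mljar-supervised AutoML package. The sketch below shows its typical tabular workflow; the CSV path, column names, and settings are illustrative assumptions, not the study's configuration.

```python
import pandas as pd
from supervised.automl import AutoML  # pip install mljar-supervised

# Hypothetical feature table: one row per patient, T2WI-derived features
X = pd.read_csv("t2wi_features.csv")
y = X.pop("label")                    # e.g., PCa vs. non-PCa

# AutoML searches model families and hyperparameters, optimizing AUC
automl = AutoML(mode="Compete", eval_metric="auc", total_time_limit=3600)
automl.fit(X, y)

proba = automl.predict_proba(X)       # class probabilities for ROC analysis
```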

U-DFA: A Unified DINOv2-Unet with Dual Fusion Attention for Multi-Dataset Medical Segmentation

Zulkaif Sajjad, Furqan Shaukat, Junaid Mir

arXiv preprint · Oct 1 2025
Accurate medical image segmentation plays a crucial role in overall diagnosis and is one of the most essential tasks in the diagnostic pipeline. CNN-based models, despite their extensive use, suffer from a local receptive field and fail to capture global context. A common approach that combines CNNs with transformers attempts to bridge this gap but fails to effectively fuse local and global features. Recently emerged VLMs and foundation models have been adapted for downstream medical imaging tasks; however, they suffer from an inherent domain gap and high computational cost. To this end, we propose U-DFA, a unified DINOv2-Unet encoder-decoder architecture that integrates a novel Local-Global Fusion Adapter (LGFA) to enhance segmentation performance. LGFA modules inject spatial features from a CNN-based Spatial Pattern Adapter (SPA) module into frozen DINOv2 blocks at multiple stages, enabling effective fusion of high-level semantic and spatial features. Our method achieves state-of-the-art performance on the Synapse and ACDC datasets with only 33% of the trainable model parameters. These results demonstrate that U-DFA is a robust and scalable framework for medical image segmentation across multiple modalities.
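
The sketch below illustrates the general adapter pattern the abstract describes: projecting CNN feature maps into token space and adding them to the tokens flowing through a frozen transformer backbone. It is a schematic stand-in under those assumptions, not the authors' LGFA/SPA implementation.

```python
import torch
import torch.nn as nn

class SpatialAdapter(nn.Module):
    """Projects CNN feature maps to token dimension and adds them to tokens."""
    def __init__(self, cnn_dim: int, token_dim: int):
        super().__init__()
        self.proj = nn.Conv2d(cnn_dim, token_dim, kernel_size=1)

    def forward(self, tokens: torch.Tensor, feat: torch.Tensor) -> torch.Tensor:
        b, n, d = tokens.shape
        side = int(n ** 0.5)  # assumes a square token grid (no CLS token)
        f = nn.functional.adaptive_avg_pool2d(self.proj(feat), side)
        return tokens + f.flatten(2).transpose(1, 2)  # (B, N, D)

# Usage idea: keep the foundation backbone frozen and train only adapters
# and the decoder, e.g.:
#   for p in dinov2_backbone.parameters():
#       p.requires_grad_(False)
```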

Machine learning combined with CT-based radiomics predicts the prognosis of oesophageal squamous cell carcinoma.

Liu M, Lu R, Wang B, Fan J, Wang Y, Zhu J, Luo J

PubMed · Oct 1 2025
This retrospective study aimed to develop a machine learning model integrating preoperative CT radiomics and clinicopathological data to predict 3-year recurrence and recurrence patterns after surgery for oesophageal squamous cell carcinoma. Tumour regions were segmented using 3D-Slicer, and radiomic features were extracted via Python; LASSO regression selected prognostic features for model integration. Clinicopathological data included tumour length, lymph node positivity, differentiation grade, and neurovascular infiltration. A machine learning model was then built by combining the selected imaging features with the clinicopathological data, and model performance was validated. A nomogram was constructed for survival prediction, and risk stratification was carried out from the predictions of the machine learning model and the nomogram. Survival analysis was performed for stage-based patient subgroups across risk strata to identify cohorts likely to benefit from adjuvant therapy. Patients were randomly divided in a 7:3 ratio into a training cohort (368 patients) and a validation cohort (158 patients). LASSO regression selected 6 features for recurrence prediction and 9 for recurrence-pattern prediction. Among 526 patients (mean age 63; 427 males), the model achieved high accuracy in predicting recurrence (training-cohort AUC: 0.826 [logistic regression]/0.820 [SVM]; validation cohort: 0.830/0.825) and recurrence patterns (training: 0.801/0.799; validation: 0.806/0.798). Risk stratification based on the machine learning model and nomogram predictions revealed that adjuvant therapy significantly improved disease-free survival in stage II-III patients with predicted recurrence and low predicted survival (HR 0.372, 95% CI: 0.206-0.669; p < 0.001). Machine learning models thus perform well in predicting recurrence after surgery for oesophageal squamous cell carcinoma. Radiomic features from contrast-enhanced CT can predict the prognosis of patients with oesophageal squamous cell carcinoma, helping clinicians stratify risk and identify patient populations likely to benefit from adjuvant therapy, thereby aiding medical decision-making. Current research lacks prognostic models for oesophageal squamous cell carcinoma; the model developed here achieves high accuracy by combining radiomic features and clinicopathological data, aids risk stratification, and supports clinical decision-making through its predictions.
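
The risk-stratified survival comparison reported here (adjuvant therapy vs. none within a predicted high-risk subgroup) maps onto a standard log-rank test and Cox model. Below is a hedged sketch using the lifelines package; the DataFrame file and column names are placeholders.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("escc_cohort.csv")            # hypothetical follow-up table
high = df[df["predicted_high_risk"] == 1]      # model-flagged high-risk subgroup

# Log-rank test: adjuvant therapy vs. none within the high-risk subgroup
res = logrank_test(
    high.loc[high.adjuvant == 1, "dfs_months"],
    high.loc[high.adjuvant == 0, "dfs_months"],
    event_observed_A=high.loc[high.adjuvant == 1, "recurrence"],
    event_observed_B=high.loc[high.adjuvant == 0, "recurrence"],
)

# Cox model for the hazard ratio of adjuvant therapy on disease-free survival
cph = CoxPHFitter().fit(high[["dfs_months", "recurrence", "adjuvant"]],
                        duration_col="dfs_months", event_col="recurrence")
print(res.p_value, cph.hazard_ratios_["adjuvant"])
```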

Radiomics analysis using machine learning to predict perineural invasion in pancreatic cancer.

Sun Y, Li Y, Li M, Hu T, Wang J

PubMed · Sep 30 2025
Pancreatic cancer is one of the most aggressive and lethal malignancies of the digestive system, characterized by an extremely low five-year survival rate. Perineural invasion (PNI) status in patients with pancreatic cancer is associated with adverse prognosis, including worse overall survival and recurrence-free survival. Emerging radiomic methods can reveal subtle variations in tumor structure by analyzing preoperative contrast-enhanced computed tomography (CECT) imaging data. We therefore developed a preoperative CECT-based radiomic model to predict the risk of PNI in patients with pancreatic cancer. This study enrolled patients with pancreatic malignancies who underwent radical resection. Computerized tools were employed to extract radiomic features from tumor regions of interest (ROIs), and the radiomic features most strongly associated with PNI were selected to construct a radiomics score (RadScore). The model's reliability was comprehensively evaluated by integrating clinical and follow-up information, with SHapley Additive exPlanations (SHAP)-based visualization to interpret the decision-making process. A total of 167 patients with pancreatic malignancies were included. From the CECT images, 851 radiomic features were extracted, 22 of which were identified as most strongly correlated with PNI. These 22 features were evaluated using seven machine learning methods; the Gaussian naive Bayes model was ultimately selected, demonstrating robust predictive performance in both the training and validation cohorts with areas under the ROC curve (AUC) of 0.899 and 0.813, respectively. Among the clinical features, maximum tumor diameter, CA19-9 level, blood glucose concentration, and lymph node metastasis were independent risk factors for PNI. The integrated model yielded AUCs of 0.945 (training cohort) and 0.881 (validation cohort), and decision curve analysis confirmed its clinical utility for predicting perineural invasion. The combined model integrating clinical and radiomic features exhibited excellent performance in predicting the probability of perineural invasion in patients with pancreatic cancer and has significant potential to optimize therapeutic decision-making and prognostic evaluation.
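
A brief sketch of the interpretability step: fit a Gaussian naive Bayes classifier on the selected features and explain it with SHAP's model-agnostic KernelExplainer. The data below are synthetic placeholders standing in for the 22 selected radiomic features.

```python
import numpy as np
import shap
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(167, 22))   # placeholder: 22 selected radiomic features
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=167) > 0).astype(int)

model = GaussianNB().fit(X, y)

# KernelExplainer works with any predict_proba; a small background sample
# keeps the estimation tractable
explainer = shap.KernelExplainer(model.predict_proba, shap.sample(X, 50))
shap_values = explainer.shap_values(X[:5])   # per-feature contributions
```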