
Does Machine Learning Prediction of Magnetic Resonance Imaging PI-RADS Correlate with Target Prostate Biopsy Results?

Arafa MA, Farhat KH, Lotfy N, Khan FK, Mokhtar A, Althunayan AM, Al-Taweel W, Al-Khateeb SS, Azhari S, Rabah DM

PubMed | May 26, 2025
This study aimed to predict and classify MRI PI-RADS scores using different machine learning algorithms and to assess the concordance of PI-RADS scoring with the target prostate biopsy outcome. Machine learning (ML) algorithms were used to develop best-fitting models for the prediction and classification of MRI PI-RADS scores. The Random Forest and Extra Trees models achieved the best performance compared with the other methods: both reached an accuracy of 91.95%, with an AUC of 0.9329 for the Random Forest model and 0.9404 for the Extra Trees model. PSA level, PSA density, and diameter of the largest lesion were the most important features for outcome classification. ML prediction enhanced the PI-RADS classification: clinically significant prostate cancer (csPCa) cases increased from 0% to 1.9% in the low-risk PI-RADS class, showing that the model identified some previously missed cases. Predictive machine learning models showed an excellent ability to predict MRI PI-RADS scores and discriminate between low- and high-risk scores. However, caution should be exercised, as a high percentage of negative biopsy cases were assigned PI-RADS 4 and PI-RADS 5 scores. ML integration may enhance the utility of PI-RADS by reducing unnecessary biopsies in low-risk patients (via better csPCa detection) and refining high-risk categorization. Combining such PI-RADS scores with significant parameters, such as PSA density, lesion diameter, number of lesions, and age, in decision curve analysis and utility paradigms would assist physicians' clinical decisions.
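
For readers who want to reproduce this kind of analysis, the sketch below shows tree-ensemble classification with scikit-learn on synthetic stand-ins for the reported predictors (PSA level, PSA density, largest-lesion diameter, age). It is illustrative only and not the authors' code or data.

```python
# Minimal sketch, assuming scikit-learn; feature names and synthetic data are
# illustrative stand-ins, not the study's dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical predictors: PSA level, PSA density, largest-lesion diameter, age.
X = np.column_stack([
    rng.lognormal(1.5, 0.6, n),     # PSA (ng/mL)
    rng.lognormal(-2.0, 0.5, n),    # PSA density (ng/mL/cc)
    rng.normal(12, 5, n).clip(3),   # lesion diameter (mm)
    rng.integers(50, 80, n),        # age (years)
])
# Toy binary target derived from the synthetic features (balanced by design).
risk = 2.0 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 0.3, n)
y = (risk > np.median(risk)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
models = {
    "RandomForest": RandomForestClassifier(n_estimators=300, random_state=0),
    "ExtraTrees": ExtraTreesClassifier(n_estimators=300, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    prob = model.predict_proba(X_te)[:, 1]
    print(f"{name}: accuracy={accuracy_score(y_te, model.predict(X_te)):.3f}, "
          f"AUC={roc_auc_score(y_te, prob):.3f}")
```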

SPARS: Self-Play Adversarial Reinforcement Learning for Segmentation of Liver Tumours

Catalina Tan, Yipeng Hu, Shaheer U. Saeed

arXiv preprint | May 25, 2025
Accurate tumour segmentation is vital for various targeted diagnostic and therapeutic procedures for cancer, e.g., planning biopsies or tumour ablations. Manual delineation is extremely labour-intensive, requiring substantial expert time. Fully-supervised machine learning models aim to automate such localisation tasks, but require a large number of costly and often subjective 3D voxel-level labels for training. The high variance and subjectivity of such labels impact model generalisability, even when large datasets are available. Histopathology may offer more objective labels, but the infeasibility of acquiring pixel-level annotations in vivo makes it challenging to develop histology-based tumour localisation methods. In this work, we propose a novel weakly-supervised semantic segmentation framework called SPARS (Self-Play Adversarial Reinforcement Learning for Segmentation), which utilises an object presence classifier, trained on a small number of image-level binary cancer presence labels, to localise cancerous regions on CT scans. Such binary labels of patient-level cancer presence can be sourced more feasibly from biopsies and histopathology reports, enabling more objective cancer localisation on medical images. Evaluating with real patient data, we observed that SPARS yielded a mean Dice score of $77.3 \pm 9.4$, outperforming other weakly-supervised methods by large margins. This performance was comparable with recent fully-supervised methods that require voxel-level annotations. Our results demonstrate the potential of using SPARS to reduce the need for extensive human-annotated labels to detect cancer in real-world healthcare settings.
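
The Dice similarity coefficient used to report SPARS performance can be computed as in the following sketch; the masks below are placeholders, not data from the study.

```python
# Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks.
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 3D masks standing in for predicted and reference tumour segmentations.
pred = np.zeros((32, 64, 64), dtype=bool); pred[10:20, 20:40, 20:40] = True
ref  = np.zeros((32, 64, 64), dtype=bool); ref[12:22, 22:42, 22:42] = True
print(f"Dice: {dice_score(pred, ref):.3f}")
```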

A novel network architecture for post-applicator placement CT auto-contouring in cervical cancer HDR brachytherapy.

Lei Y, Chao M, Yang K, Gupta V, Yoshida EJ, Wang T, Yang X, Liu T

PubMed | May 25, 2025
High-dose-rate brachytherapy (HDR-BT) is an integral part of treatment for locally advanced cervical cancer, requiring accurate segmentation of the high-risk clinical target volume (HR-CTV) and organs at risk (OARs) on post-applicator CT (pCT) for precise and safe dose delivery. Manual contouring, however, is time-consuming and highly variable, with challenges heightened in cervical HDR-BT due to complex anatomy and low tissue contrast. An effective auto-contouring solution could significantly enhance efficiency, consistency, and accuracy in cervical HDR-BT planning. To develop a machine learning-based approach that improves the accuracy and efficiency of HR-CTV and OAR segmentation on pCT images for cervical HDR-BT. The proposed method employs two sequential deep learning models to segment target and OARs from planning CT data. The intuitive model, a U-Net, initially segments simpler structures such as the bladder and HR-CTV, utilizing shallow features and iodine contrast agents. Building on this, the sophisticated model targets complex structures like the sigmoid, rectum, and bowel, addressing challenges from low contrast, anatomical proximity, and imaging artifacts. This model incorporates spatial information from the intuitive model and uses total variation regularization to improve segmentation smoothness by penalizing abrupt changes in the spatial gradient. This dual-model approach improves accuracy and consistency in segmenting high-risk clinical target volumes and organs at risk in cervical HDR-BT. To validate the proposed method, 32 cervical cancer patients treated with tandem and ovoid (T&O) HDR brachytherapy (3-5 fractions, 115 CT images) were retrospectively selected. The method's performance was assessed using four-fold cross-validation, comparing segmentation results to manual contours across five metrics: Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), mean surface distance (MSD), center-of-mass distance (CMD), and volume difference (VD). Dosimetric evaluations included D90 for HR-CTV and D2cc for OARs. The proposed method demonstrates high segmentation accuracy for HR-CTV, bladder, and rectum, achieving DSC values of 0.79 ± 0.06, 0.83 ± 0.10, and 0.76 ± 0.15, MSD values of 1.92 ± 0.77 mm, 2.24 ± 1.20 mm, and 4.18 ± 3.74 mm, and absolute VD values of 5.34 ± 4.85 cc, 17.16 ± 17.38 cc, and 18.54 ± 16.83 cc, respectively. Despite challenges in bowel and sigmoid segmentation due to poor soft tissue contrast in CT and variability in manual contouring (ground truth volumes of 128.48 ± 95.9 cc and 51.87 ± 40.67 cc), the method significantly outperforms two state-of-the-art methods on DSC, MSD, and CMD metrics (p-value < 0.05). For HR-CTV, the mean absolute D90 difference was 0.42 ± 1.17 Gy (p-value > 0.05), less than 5% of the prescription dose. Over 75% of cases showed changes within ± 0.5 Gy, and fewer than 10% exceeded ± 1 Gy. The mean and variation in structure volume and D2cc parameters between manual and segmented contours for OARs showed no significant differences (p-value > 0.05), with mean absolute D2cc differences within 0.5 Gy, except for the bladder, which exhibited higher variability (0.97 Gy). Our innovative auto-contouring method showed promising results in segmenting HR-CTV and OARs from pCT, potentially enhancing the efficiency of cervical HDR-BT treatment planning. Further validation and clinical implementation are required to fully realize its clinical benefits.
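
A minimal illustration of a total-variation smoothness penalty of the kind the abstract describes, assuming PyTorch; this is a generic sketch of the regularizer, not the authors' implementation.

```python
# Hedged sketch: an anisotropic total-variation penalty on a predicted
# probability volume, added to a segmentation loss with a small weight.
import torch
import torch.nn.functional as F

def tv_loss_3d(prob: torch.Tensor) -> torch.Tensor:
    """Anisotropic total variation of a (B, C, D, H, W) probability volume."""
    dz = (prob[:, :, 1:, :, :] - prob[:, :, :-1, :, :]).abs().mean()
    dy = (prob[:, :, :, 1:, :] - prob[:, :, :, :-1, :]).abs().mean()
    dx = (prob[:, :, :, :, 1:] - prob[:, :, :, :, :-1]).abs().mean()
    return dz + dy + dx

# Toy usage: weight the TV term by a small lambda alongside the main loss.
prob = torch.rand(1, 1, 16, 64, 64, requires_grad=True)
target = (torch.rand_like(prob) > 0.5).float()
seg_loss = F.binary_cross_entropy(prob, target)
total = seg_loss + 0.01 * tv_loss_3d(prob)
total.backward()
```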

[Clinical value of medical imaging artificial intelligence in the diagnosis and treatment of peritoneal metastasis in gastrointestinal cancers].

Fang MJ, Dong D, Tian J

PubMed | May 25, 2025
Peritoneal metastasis is a key factor in the poor prognosis of patients with advanced gastrointestinal cancer. Traditional radiological diagnosis faces challenges such as insufficient sensitivity. Through technologies like radiomics and deep learning, artificial intelligence can deeply analyze tumor heterogeneity and microenvironment features in medical images, revealing markers of peritoneal metastasis and constructing high-precision predictive models. These technologies have demonstrated advantages in tasks such as predicting peritoneal metastasis, assessing the risk of peritoneal recurrence, and identifying small metastatic foci during surgery. This paper summarizes the representative progress and application prospects of medical imaging artificial intelligence in the diagnosis and treatment of peritoneal metastasis, and discusses potential development directions such as multimodal data fusion and large models. The integration of medical imaging artificial intelligence with clinical practice is expected to advance personalized and precision medicine in the diagnosis and treatment of peritoneal metastasis in gastrointestinal cancers.

Quantitative image quality metrics enable resource-efficient quality control of clinically applied AI-based reconstructions in MRI.

White OA, Shur J, Castagnoli F, Charles-Edwards G, Whitcher B, Collins DJ, Cashmore MTD, Hall MG, Thomas SA, Thompson A, Harrison CA, Hopkinson G, Koh DM, Winfield JM

PubMed | May 24, 2025
AI-based MRI reconstruction techniques improve efficiency by reducing acquisition times whilst maintaining or improving image quality. Recent recommendations from professional bodies suggest centres should perform quality assessments on AI tools. However, monitoring long-term performance presents challenges, due to model drift or system updates. Radiologist-based assessments are resource-intensive and may be subjective, highlighting the need for efficient quality control (QC) measures. This study explores using image quality metrics (IQMs) to assess AI-based reconstructions. 58 patients undergoing standard-of-care rectal MRI were imaged using AI-based and conventional T2-weighted sequences. Paired and unpaired IQMs were calculated. Sensitivity of IQMs to detect retrospective perturbations in AI-based reconstructions was assessed using control charts, and statistical comparisons between the four MR systems in the evaluation were performed. Two radiologists evaluated the image quality of the perturbed images, giving an indication of their clinical relevance. Paired IQMs demonstrated sensitivity to changes in AI-reconstruction settings, identifying deviations outside ± 2 standard deviations of the reference dataset. Unpaired metrics showed less sensitivity. Paired IQMs showed no difference in performance between 1.5 T and 3 T systems (p > 0.99), whilst minor but significant (p < 0.0379) differences were noted for unpaired IQMs. IQMs are effective for QC of AI-based MR reconstructions, offering resource-efficient alternatives to repeated radiologist evaluations. Future work should expand this to other imaging applications and assess additional measures.
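
A rough sketch of the paired-IQM control-chart idea, assuming scikit-image's SSIM as an example paired metric; the reference data, perturbation, and ±2 SD limits are illustrative, not the study's pipeline.

```python
# Compute a paired image-quality metric per exam and flag values outside
# +/- 2 SD of a reference dataset. Data below are synthetic placeholders.
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(1)
reference = rng.normal(0.5, 0.1, (256, 256))  # stand-in conventional reconstruction

def paired_iqm(ai_recon: np.ndarray, conventional: np.ndarray) -> float:
    return ssim(ai_recon, conventional,
                data_range=conventional.max() - conventional.min())

# Baseline distribution of the IQM over a reference set of exams.
baseline = [paired_iqm(reference + rng.normal(0, 0.02, reference.shape), reference)
            for _ in range(30)]
mean, sd = np.mean(baseline), np.std(baseline)
lower, upper = mean - 2 * sd, mean + 2 * sd

# A new (possibly perturbed) reconstruction is flagged if its IQM leaves the band.
new_value = paired_iqm(reference + rng.normal(0, 0.08, reference.shape), reference)
print("out of control:", not (lower <= new_value <= upper))
```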

Preoperative risk assessment of invasive endometrial cancer using MRI-based radiomics: a systematic review and meta-analysis.

Gao Y, Liang F, Tian X, Zhang G, Zhang H

PubMed | May 24, 2025
Image-derived machine learning (ML) is a robust and growing field in diagnostic imaging for both clinicians and radiologists. Accurate preoperative radiological evaluation of the invasiveness of endometrial cancer (EC) can increase clinical benefit. The present study aimed to investigate the diagnostic performance of magnetic resonance imaging (MRI)-derived artificial intelligence for accurate preoperative assessment of invasion risk. The PubMed, Embase, Cochrane Library and Web of Science databases were searched, and pertinent English-language papers were collected. The pooled sensitivity, specificity, diagnostic odds ratio (DOR), and positive and negative likelihood ratios (PLR and NLR, respectively) of all the papers were calculated using Stata software. The results were plotted on a summary receiver operating characteristic (SROC) curve; publication bias and threshold effects were evaluated; and meta-regression and subgroup analyses were conducted to explore the possible causes of between-study heterogeneity. MRI-based radiomics revealed pooled sensitivity (SEN) and specificity (SPE) values of 0.85 and 0.82 for the prediction of high-grade EC; 0.80 and 0.85 for deep myometrial invasion (DMI); 0.85 and 0.73 for lymphovascular space invasion (LVSI); 0.79 and 0.85 for microsatellite instability (MSI); and 0.90 and 0.72 for lymph node metastasis (LNM), respectively. For LVSI prediction and high-grade histological analysis, meta-regression revealed that image segmentation and MRI-based radiomics modeling contributed to heterogeneity (p = 0.003 and 0.04). Through a systematic review and meta-analysis of the reported literature, preoperative MRI-derived ML could help clinicians accurately evaluate EC risk factors, potentially guiding individual treatment thereafter.
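
The accuracy metrics pooled in such meta-analyses follow directly from per-study 2x2 counts. The sketch below uses naive summation over made-up studies purely for illustration; real analyses typically fit bivariate random-effects models (e.g., in Stata).

```python
# Sensitivity, specificity, likelihood ratios, and DOR from pooled 2x2 counts.
# The study counts are hypothetical.
import numpy as np

# Per-study 2x2 counts: (TP, FP, FN, TN)
studies = np.array([
    [45,  8,  7, 60],
    [30,  5, 10, 55],
    [52, 12,  6, 70],
])

tp, fp, fn, tn = studies.sum(axis=0)
sens = tp / (tp + fn)
spec = tn / (tn + fp)
plr = sens / (1 - spec)   # positive likelihood ratio
nlr = (1 - sens) / spec   # negative likelihood ratio
dor = plr / nlr           # diagnostic odds ratio

print(f"pooled sensitivity={sens:.2f}, specificity={spec:.2f}, "
      f"PLR={plr:.2f}, NLR={nlr:.2f}, DOR={dor:.1f}")
```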

Construction of a Prediction Model for Adverse Perinatal Outcomes in Foetal Growth Restriction Based on a Machine Learning Algorithm: A Retrospective Study.

Meng X, Wang L, Wu M, Zhang N, Li X, Wu Q

PubMed | May 23, 2025
To create and validate a machine learning (ML)-based model for predicting adverse perinatal outcomes (APO) in foetal growth restriction (FGR) at diagnosis. This retrospective, multi-centre study in China included pregnancies affected by FGR. We enrolled singleton foetuses with a perinatal diagnosis of FGR who were admitted between January 2021 and November 2023. A total of 361 pregnancies from Beijing Obstetrics and Gynecology Hospital were used as the training set and the internal test set, while data from 50 pregnancies from Haidian Maternal and Child Health Hospital were used as the external test set. Feature screening was performed using random forest (RF), the Least Absolute Shrinkage and Selection Operator (LASSO) and logistic regression (LR). Subsequently, six ML methods, including Stacking, were used to construct models to predict the APO of FGR. Model performance was evaluated through indicators such as the area under the receiver operating characteristic curve (AUROC). Shapley Additive Explanation analysis was used to rank each model feature and explain the final model. Mean ± SD gestational age at diagnosis was 32.3 ± 4.8 weeks in the absent APO group and 27.3 ± 3.7 weeks in the present APO group. Women in the present APO group had a higher rate of hypertension related to pregnancy (74.8% vs. 18.8%, p < 0.001). Among 17 candidate predictors (including maternal characteristics, maternal comorbidities, obstetric characteristics and ultrasound parameters), the integration of RF, LASSO and LR methodologies identified maternal body mass index, hypertension, gestational age at diagnosis of FGR, estimated foetal weight (EFW) z score, EFW growth velocity and abnormal umbilical artery Doppler (defined as a pulsatility index above the 95th percentile or instances of absent/reversed diastolic flow) as significant predictors. The Stacking model demonstrated good performance in both the internal test set [AUROC: 0.861, 95% confidence interval (CI), 0.838-0.896] and the external test set [AUROC: 0.906, 95% CI, 0.875-0.947]. The calibration curve showed high agreement between the predicted and observed risks, and the Hosmer-Lemeshow test for the internal and external test sets gave p = 0.387 and p = 0.825, respectively. The ML model, which integrates maternal clinical factors and ultrasound parameters, demonstrates good predictive value for APO in FGR at diagnosis. This suggests that ML techniques may be a valid approach for the early detection of high-risk APO in FGR pregnancies.
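
A minimal sketch of a stacking ensemble of the kind described, assuming scikit-learn; the synthetic features stand in for the maternal and ultrasound predictors named above, so this shows the pattern rather than the study's model.

```python
# Stacking ensemble: base learners feed a logistic-regression meta-learner,
# evaluated with AUROC. Data are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=400, n_features=6, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1]))
```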

Anatomy-Guided Multitask Learning for MRI-Based Classification of Placenta Accreta Spectrum and its Subtypes

Hai Jiang, Qiongting Liu, Yuanpin Zhou, Jiawei Pan, Ting Song, Yao Lu

arXiv preprint | May 23, 2025
Placenta Accreta Spectrum Disorders (PAS) pose significant risks during pregnancy, frequently leading to postpartum hemorrhage during cesarean deliveries and other severe clinical complications, with bleeding severity correlating with the degree of placental invasion. Consequently, accurate prenatal diagnosis of PAS and its subtypes, namely placenta accreta (PA), placenta increta (PI), and placenta percreta (PP), is crucial. However, existing guidelines and methodologies predominantly focus on the presence of PAS, with limited research addressing subtype recognition. Additionally, previous multi-class diagnostic efforts have primarily relied on inefficient two-stage cascaded binary classification tasks. In this study, we propose a novel convolutional neural network (CNN) architecture designed for efficient one-stage multiclass diagnosis of PAS and its subtypes, based on 4,140 magnetic resonance imaging (MRI) slices. Our model features two branches: the main classification branch uses a backbone composed of multiple residual blocks, while the second branch integrates anatomical features of the uteroplacental area and the adjacent uterine serous layer to enhance the model's attention during classification. Furthermore, we implement a multitask learning strategy to leverage both branches effectively. Experiments conducted on a real clinical dataset demonstrate that our model achieves state-of-the-art performance.
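
A rough PyTorch sketch of the general pattern described, namely a residual main branch plus an anatomy-guided auxiliary branch trained with a multitask loss. The layer sizes, auxiliary task, and loss weighting are assumptions for illustration, not the authors' architecture.

```python
# Two-branch classifier with a residual image branch and an anatomy branch,
# combined through a multitask loss. All shapes and tasks are illustrative.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
    def forward(self, x):
        return torch.relu(x + self.body(x))

class TwoBranchClassifier(nn.Module):
    def __init__(self, n_classes=4):  # e.g., non-PAS, PA, PI, PP
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU())
        self.main = nn.Sequential(*[ResidualBlock(32) for _ in range(3)],
                                  nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Auxiliary branch consumes an anatomy mask (e.g., uteroplacental area).
        self.anat = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.cls = nn.Linear(32 + 16, n_classes)
        self.aux = nn.Linear(16, 1)  # hypothetical auxiliary head
    def forward(self, image, anatomy_mask):
        f_img = self.main(self.stem(image))
        f_anat = self.anat(anatomy_mask)
        return self.cls(torch.cat([f_img, f_anat], dim=1)), self.aux(f_anat)

model = TwoBranchClassifier()
img, mask = torch.randn(2, 1, 128, 128), torch.rand(2, 1, 128, 128)
logits, aux_out = model(img, mask)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 2])) \
     + 0.3 * nn.BCEWithLogitsLoss()(aux_out, torch.ones(2, 1))
loss.backward()
```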

A Foundation Model Framework for Multi-View MRI Classification of Extramural Vascular Invasion and Mesorectal Fascia Invasion in Rectal Cancer

Yumeng Zhang, Zohaib Salahuddin, Danial Khan, Shruti Atul Mali, Henry C. Woodruff, Sina Amirrajab, Eduardo Ibor-Crespo, Ana Jimenez-Pastor, Luis Marti-Bonmati, Philippe Lambin

arXiv preprint | May 23, 2025
Background: Accurate MRI-based identification of extramural vascular invasion (EVI) and mesorectal fascia invasion (MFI) is pivotal for risk-stratified management of rectal cancer, yet visual assessment is subjective and vulnerable to inter-institutional variability. Purpose: To develop and externally evaluate a multicenter, foundation-model-driven framework that automatically classifies EVI and MFI on axial and sagittal T2-weighted MRI. Methods: This retrospective study used 331 pre-treatment rectal cancer MRI examinations from three European hospitals. After TotalSegmentator-guided rectal patch extraction, a self-supervised frequency-domain harmonization pipeline was trained to minimize scanner-related contrast shifts. Four classifiers were compared: ResNet50, SeResNet, the universal biomedical pretrained transformer (UMedPT) with a lightweight MLP head, and a logistic-regression variant using frozen UMedPT features (UMedPT_LR). Results: UMedPT_LR achieved the best EVI detection when axial and sagittal features were fused (AUC = 0.82; sensitivity = 0.75; F1 score = 0.73), surpassing the Chaimeleon Grand-Challenge winner (AUC = 0.74). The highest MFI performance was attained by UMedPT on axial harmonized images (AUC = 0.77), surpassing the Chaimeleon Grand-Challenge winner (AUC = 0.75). Frequency-domain harmonization improved MFI classification but variably affected EVI performance. Conventional CNNs (ResNet50, SeResNet) underperformed, especially in F1 score and balanced accuracy. Conclusion: These findings demonstrate that combining foundation model features, harmonization, and multi-view fusion significantly enhances diagnostic performance in rectal MRI.
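
A sketch of the frozen-feature logistic-regression variant (UMedPT_LR) with axial/sagittal fusion by concatenation. The UMedPT extractor is stubbed with random vectors and the feature dimension is an assumption, so only the fusion and classifier pattern is shown.

```python
# Frozen foundation-model features from two views, concatenated and fed to a
# logistic-regression head. Features and labels below are synthetic stubs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_cases, dim = 331, 256                     # feature dimension is an assumption

feat_axial = rng.normal(size=(n_cases, dim))      # stand-in for frozen features
feat_sagittal = rng.normal(size=(n_cases, dim))
X = np.concatenate([feat_axial, feat_sagittal], axis=1)  # multi-view fusion
y = rng.integers(0, 2, n_cases)                          # EVI label placeholder

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```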

Explainable Anatomy-Guided AI for Prostate MRI: Foundation Models and In Silico Clinical Trials for Virtual Biopsy-based Risk Assessment

Danial Khan, Zohaib Salahuddin, Yumeng Zhang, Sheng Kuang, Shruti Atul Mali, Henry C. Woodruff, Sina Amirrajab, Rachel Cavill, Eduardo Ibor-Crespo, Ana Jimenez-Pastor, Adrian Galiana-Bordera, Paula Jimenez Gomez, Luis Marti-Bonmati, Philippe Lambin

arXiv preprint | May 23, 2025
We present a fully automated, anatomically guided deep learning pipeline for prostate cancer (PCa) risk stratification using routine MRI. The pipeline integrates three key components: an nnU-Net module for segmenting the prostate gland and its zones on axial T2-weighted MRI; a classification module based on the UMedPT Swin Transformer foundation model, fine-tuned on 3D patches with optional anatomical priors and clinical data; and a VAE-GAN framework for generating counterfactual heatmaps that localize decision-driving image regions. The system was developed using 1,500 PI-CAI cases for segmentation and 617 biparametric MRIs with metadata from the CHAIMELEON challenge for classification (split into 70% training, 10% validation, and 20% testing). Segmentation achieved mean Dice scores of 0.95 (gland), 0.94 (peripheral zone), and 0.92 (transition zone). Incorporating gland priors improved AUC from 0.69 to 0.72, with a three-scale ensemble achieving top performance (AUC = 0.79, composite score = 0.76), outperforming the 2024 CHAIMELEON challenge winners. Counterfactual heatmaps reliably highlighted lesions within segmented regions, enhancing model interpretability. In a prospective multi-center in-silico trial with 20 clinicians, AI assistance increased diagnostic accuracy from 0.72 to 0.77 and Cohen's kappa from 0.43 to 0.53, while reducing review time per case by 40%. These results demonstrate that anatomy-aware foundation models with counterfactual explainability can enable accurate, interpretable, and efficient PCa risk assessment, supporting their potential use as virtual biopsies in clinical practice.
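
The reader-study metrics reported here (accuracy and Cohen's kappa, with and without AI assistance) can be computed as in the short sketch below; the reader decisions are hypothetical.

```python
# Accuracy and Cohen's kappa for unassisted vs. AI-assisted reads,
# using made-up decisions purely to show the calculation.
from sklearn.metrics import accuracy_score, cohen_kappa_score

ground_truth     = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
reads_unassisted = [1, 0, 0, 1, 1, 0, 0, 0, 1, 1]  # hypothetical clinician calls
reads_with_ai    = [1, 0, 1, 1, 0, 0, 0, 0, 1, 1]

for name, reads in [("unassisted", reads_unassisted),
                    ("AI-assisted", reads_with_ai)]:
    print(name,
          "accuracy:", accuracy_score(ground_truth, reads),
          "kappa:", round(cohen_kappa_score(ground_truth, reads), 2))
```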
