Page 5 of 41408 results

Preoperative prediction of post hepatectomy liver failure after surgery for hepatocellular carcinoma on CT-scan by machine learning and radiomics analyses.

Famularo S, Maino C, Milana F, Ardito F, Rompianesi G, Ciulli C, Conci S, Gallotti A, La Barba G, Romano M, De Angelis M, Patauner S, Penzo C, De Rose AM, Marescaux J, Diana M, Ippolito D, Frena A, Boccia L, Zanus G, Ercolani G, Maestri M, Grazi GL, Ruzzenente A, Romano F, Troisi RI, Giuliante F, Donadon M, Torzilli G

PubMed · Jul 1 2025
No instruments are available to preoperatively predict the risk of posthepatectomy liver failure (PHLF) in HCC patients. The aim was to predict the occurrence of PHLF preoperatively from radiomics and clinical data through machine-learning algorithms. Clinical data and three-phase CT scans were retrospectively collected from 13 Italian centres between 2008 and 2022. Radiomics features were extracted from the non-tumoral liver area. Data were split between training (70%) and test (30%) sets. Oversampling (ADASYN) was run in the training set. Random forest (RF), extreme gradient boosting (XGB), and support vector machine (SVM) models were fitted to predict PHLF. Final evaluation of the metrics was run in the test set. The best models were included in an averaging ensemble model (AEM). Five hundred consecutive preoperative CT scans were collected with the corresponding clinical data. Of these patients, 17 (3.4%) experienced PHLF. Two hundred sixteen radiomics features were extracted per patient. PCA selected 19 dimensions explaining >75% of the variance. Associated clinical variables were: size, macrovascular invasion, cirrhosis, major resection, and MELD score. Data were split into a training cohort (70%, n = 351) and a test cohort (30%, n = 149). The RF model obtained an AUC of 89.1% (specificity 70.1%, sensitivity 100%, accuracy 71.1%, PPV 10.4%, NPV 100%). The XGB model showed an AUC of 89.4% (specificity 100%, sensitivity 20.0%, accuracy 97.3%, PPV 20%, NPV 97.3%). The AEM combined the XGB and RF models, obtaining an AUC of 90.1% (specificity 89.5%, sensitivity 80.0%, accuracy 89.2%, PPV 21.0%, NPV 99.2%). The AEM obtained the best results in terms of discrimination and true-positive identification. This could help to better identify patients fit or unfit for liver resection.
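The averaging-ensemble step described above can be sketched as follows. This is a minimal illustration on synthetic data, with scikit-learn's GradientBoostingClassifier standing in for XGBoost; all sizes and hyperparameters are arbitrary assumptions, not the study's pipeline:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic, imbalanced stand-in for the radiomics + clinical feature matrix.
X, y = make_classification(n_samples=500, n_features=24, weights=[0.96],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
gb = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Averaging ensemble: mean of the two models' predicted probabilities.
p_ens = (rf.predict_proba(X_te)[:, 1] + gb.predict_proba(X_te)[:, 1]) / 2
print(f"ensemble AUC: {roc_auc_score(y_te, p_ens):.3f}")
```

An equivalent built-in is scikit-learn's `VotingClassifier(voting='soft')`, which averages predicted probabilities the same way.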

Accuracy of machine learning models for pre-diagnosis and diagnosis of pancreatic ductal adenocarcinoma in contrast-CT images: a systematic review and meta-analysis.

Lopes Costa GL, Tasca Petroski G, Machado LG, Eulalio Santos B, de Oliveira Ramos F, Feuerschuette Neto LM, De Luca Canto G

PubMed · Jul 1 2025
To evaluate the diagnostic performance and methodological quality of ML models in detecting pancreatic ductal adenocarcinoma (PDAC) in contrast-enhanced CT images. Included studies assessed adults diagnosed with PDAC, confirmed by histopathology, with the index tests interpreted by ML algorithms and data reported on sensitivity and specificity. Studies that did not meet the inclusion criteria, segmentation-focused studies, multiple-classifier studies, and non-diagnostic studies were excluded. PubMed, the Cochrane Central Register of Controlled Trials, and Embase were searched without restrictions. Risk of bias was assessed using QUADAS-2; methodological quality was evaluated using the Radiomics Quality Score (RQS) and the Checklist for AI in Medical Imaging (CLAIM). Bivariate random-effects models were used for meta-analysis of sensitivity and specificity, with I² values and subgroup analysis used to assess heterogeneity. Nine studies were included and 12,788 participants were evaluated, of whom 3,997 were included in the meta-analysis. AI models based on CT scans showed an accuracy of 88.7% (95% CI, 87.7%-89.7%), sensitivity of 87.9% (95% CI, 82.9%-91.6%), and specificity of 92.2% (95% CI, 86.8%-95.5%). The average score of the six radiomics studies was 17.83 RQS points, and the nine ML methods had an average CLAIM score of 30.55 points. Our study is the first to quantitatively synthesize these independent studies, offering insights for clinical application. Despite favorable sensitivity and specificity results, the included studies were of low quality, limiting definitive conclusions. Further research is necessary to validate these models before widespread adoption.
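The bivariate random-effects pooling used in this meta-analysis requires specialized software; as a rough illustration of the underlying idea only, a fixed-effect, logit-scale pooling of per-study sensitivities (a deliberate simplification, with hypothetical counts) might look like:

```python
import numpy as np

def pool_logit(tp, fn):
    """Inverse-variance pooling of per-study sensitivities on the logit
    scale (a fixed-effect simplification of the bivariate random-effects
    model; 0.5 continuity correction applied)."""
    tp = np.asarray(tp, float) + 0.5
    fn = np.asarray(fn, float) + 0.5
    logit = np.log(tp / fn)                  # logit of sensitivity
    var = 1.0 / tp + 1.0 / fn                # approximate logit variance
    w = 1.0 / var                            # inverse-variance weights
    pooled = np.sum(w * logit) / np.sum(w)   # weighted mean on logit scale
    return 1.0 / (1.0 + np.exp(-pooled))     # back-transform to [0, 1]

# Hypothetical per-study true-positive / false-negative counts.
sens = pool_logit([45, 88, 130], [5, 10, 22])
print(f"pooled sensitivity: {sens:.3f}")
```

A full bivariate model additionally estimates between-study variance and the sensitivity-specificity correlation, which this sketch omits.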

A multimodal deep-learning model based on multichannel CT radiomics for predicting pathological grade of bladder cancer.

Zhao T, He J, Zhang L, Li H, Duan Q

PubMed · Jul 1 2025
To construct a predictive model using deep-learning radiomics and clinical risk factors for assessing the preoperative histopathological grade of bladder cancer from computed tomography (CT) images. A retrospective analysis was conducted of 201 bladder cancer patients with definitive pathological grading after surgical excision at our institution between January 2019 and June 2023. The cohort was split into a training set of 120 cases and a test set of 81 cases. Hand-crafted radiomics (HCR) features and deep-learning (DL) features were extracted from the CT images. Prediction models were built using 12 machine-learning classifiers that integrate HCR features, DL features, and clinical data. Model performance was assessed using the area under the curve (AUC), calibration curves, and decision-curve analysis (DCA). Among the classifiers tested, the logistic regression model combining DL and HCR features demonstrated the best performance, with AUC values of 0.912 (training set) and 0.777 (test set). The clinical model achieved AUC values of 0.850 (training set) and 0.804 (test set). The combined model reached AUC values of 0.933 (training set) and 0.824 (test set), outperforming both the clinical and HCR-only models. The CT-based combined model demonstrated considerable diagnostic capability in differentiating high-grade from low-grade bladder cancer, serving as a valuable noninvasive instrument for preoperative pathological evaluation.
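The early-fusion scheme used here (concatenating HCR and DL feature vectors before fitting a logistic regression) can be sketched roughly as follows; feature counts and data are synthetic stand-ins, not the study's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 201                                   # cohort size from the abstract
hcr = rng.normal(size=(n, 30))            # hand-crafted radiomics features
dl = rng.normal(size=(n, 64))             # deep-learning features
y = (hcr[:, 0] + dl[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Early fusion: concatenate HCR and DL features, then fit one classifier.
X = np.hstack([hcr, dl])
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X[:120], y[:120])                 # 120-case training split
auc = roc_auc_score(y[120:], clf.predict_proba(X[120:])[:, 1])
print(f"test AUC: {auc:.3f}")
```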

Response prediction for neoadjuvant treatment in locally advanced rectal cancer patients-improvement in decision-making: A systematic review.

Boldrini L, Charles-Davies D, Romano A, Mancino M, Nacci I, Tran HE, Bono F, Boccia E, Gambacorta MA, Chiloiro G

PubMed · Jul 1 2025
Predicting pathological complete response (pCR) from pre- or post-treatment features could significantly improve clinical decision-making and enable a more personalized treatment approach with better outcomes. However, the lack of external validation of predictive models, missing from several published articles, is a major issue that can limit the reliability and applicability of these models in clinical settings. This systematic review therefore describes externally validated methods of predicting response to neoadjuvant chemoradiotherapy (nCRT) in locally advanced rectal cancer (LARC) patients and how they could improve clinical decision-making. An extensive search for eligible articles was performed on PubMed, Cochrane, and Scopus between 2018 and 2023, using the keywords: (Response OR outcome) prediction AND (neoadjuvant OR chemoradiotherapy) treatment in 'locally advanced Rectal Cancer'. Inclusion criteria: (i) studies including patients diagnosed with LARC (T3/4 and N- or any T and N+) by pre-treatment medical imaging and pathological examination, or as stated by the authors; (ii) standardized nCRT completed; (iii) treatment with long- or short-course radiotherapy; (iv) studies reporting on the prediction of response to nCRT with pCR as the primary outcome; (v) studies reporting external validation results for response prediction; (vi) only articles in English. Exclusion criteria: (i) case reports, conference abstracts, reviews, and studies reporting patients with distant metastases at diagnosis; (ii) studies reporting response prediction with only internally validated approaches. Three researchers (DC-D, FB, HT) independently reviewed and screened the titles and abstracts of all articles retrieved after de-duplication. Disagreements were resolved through discussion among the three researchers; if necessary, three other researchers (LB, GC, MG) were consulted to make the final decision. Data extraction was performed using the CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies (CHARMS) template, and quality assessment was done using the Prediction model Risk Of Bias Assessment Tool (PROBAST). A total of 4547 records were identified from the three databases. After excluding 392 duplicates, 4155 records underwent title and abstract screening, of which 3800 were excluded and 355 articles were retrieved. Of the 355 retrieved articles, 51 studies were assessed for eligibility; 19 reports were then excluded for lack of external validation and 4 for not evaluating pCR as the primary outcome. Twenty-eight articles were eligible and included in this systematic review. In terms of quality assessment, 89% of the models had low concern in the participants domain, while 11% had an unclear rating; 96% of the models were of low concern in both the predictors and outcome domains. The overall rating showed high applicability potential, with 82% of the models rated low concern and 18% unclear. Most of the externally validated techniques showed promising performance and the potential to be applied in clinical settings, a crucial step towards evidence-based medicine. However, more studies focused on the external validation of these models in larger cohorts are necessary to ensure that they can reliably predict outcomes in diverse populations.

Scout-Dose-TCM: Direct and Prospective Scout-Based Estimation of Personalized Organ Doses from Tube Current Modulated CT Exams

Maria Jose Medrano, Sen Wang, Liyan Sun, Abdullah-Al-Zubaer Imran, Jennie Cao, Grant Stevens, Justin Ruey Tse, Adam S. Wang

arXiv preprint · Jun 30 2025
This study proposes Scout-Dose-TCM for direct, prospective estimation of organ-level doses under tube current modulation (TCM) and compares its performance to two established methods. We analyzed contrast-enhanced chest-abdomen-pelvis CT scans from 130 adults (120 kVp, TCM). Reference doses for six organs (lungs, kidneys, liver, pancreas, bladder, spleen) were calculated using MC-GPU and TotalSegmentator. Based on these, we trained Scout-Dose-TCM, a deep learning model that predicts organ doses corresponding to discrete cosine transform (DCT) basis functions, enabling real-time estimates for any TCM profile. The model combines a feature learning module that extracts contextual information from lateral and frontal scouts and scan range with a dose learning module that outputs DCT-based dose estimates. A customized loss function incorporated the DCT formulation during training. For comparison, we implemented size-specific dose estimation per AAPM TG 204 (Global CTDIvol) and its organ-level TCM-adapted version (Organ CTDIvol). A 5-fold cross-validation assessed generalizability by comparing mean absolute percentage dose errors and r-squared correlations with benchmark doses. Average absolute percentage errors were 13% (Global CTDIvol), 9% (Organ CTDIvol), and 7% (Scout-Dose-TCM), with bladder showing the largest discrepancies (15%, 13%, and 9%). Statistical tests confirmed Scout-Dose-TCM significantly reduced errors vs. Global CTDIvol across most organs and improved over Organ CTDIvol for the liver, bladder, and pancreas. It also achieved higher r-squared values, indicating stronger agreement with Monte Carlo benchmarks. Scout-Dose-TCM outperformed Global CTDIvol and was comparable to or better than Organ CTDIvol, without requiring organ segmentations at inference, demonstrating its promise as a tool for prospective organ-level dose estimation in CT.
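The DCT idea above, predicting the dose contribution of each basis function so that the dose for any TCM profile follows by linearity, can be illustrated with a toy example (all numbers are hypothetical, not from the study):

```python
import numpy as np

def dct_basis(n, k):
    """Orthonormal DCT-II basis function k sampled at n slice positions."""
    i = np.arange(n)
    b = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    return b * (np.sqrt(1.0 / n) if k == 0 else np.sqrt(2.0 / n))

n_slices, n_basis = 64, 8
B = np.stack([dct_basis(n_slices, k) for k in range(n_basis)])  # (8, 64)

# Hypothetical per-basis organ doses: what a model might predict as the
# dose delivered when the tube current follows each basis function.
dose_per_basis = np.array([2.0, 0.3, -0.1, 0.05, 0.0, 0.02, 0.0, 0.01])

# Any TCM profile is projected onto the basis; dose follows by linearity.
tcm = 100 + 40 * np.sin(np.linspace(0, np.pi, n_slices))  # mA profile
coeffs = B @ tcm
organ_dose = coeffs @ dose_per_basis
print(f"estimated organ dose: {organ_dose:.1f}")
```

Because the mapping is linear in the profile, the per-basis doses only need to be predicted once per patient; any candidate TCM profile can then be evaluated in real time.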

Deep Learning-Based Semantic Segmentation for Real-Time Kidney Imaging and Measurements with Augmented Reality-Assisted Ultrasound

Gijs Luijten, Roberto Maria Scardigno, Lisle Faray de Paiva, Peter Hoyer, Jens Kleesiek, Domenico Buongiorno, Vitoantonio Bevilacqua, Jan Egger

arXiv preprint · Jun 30 2025
Ultrasound (US) is widely accessible and radiation-free but has a steep learning curve due to its dynamic nature and non-standard imaging planes. Additionally, the constant need to shift focus between the US screen and the patient poses a challenge. To address these issues, we integrate deep learning (DL)-based semantic segmentation for real-time (RT) automated kidney volumetric measurements, which are essential for clinical assessment but are traditionally time-consuming and prone to fatigue. This automation allows clinicians to concentrate on image interpretation rather than manual measurements. Complementing DL, augmented reality (AR) enhances the usability of US by projecting the display directly into the clinician's field of view, improving ergonomics and reducing the cognitive load associated with screen-to-patient transitions. Two AR-DL-assisted US pipelines on HoloLens-2 are proposed: one streams directly via the application programming interface for a wireless setup, while the other supports any US device with video output for broader accessibility. We evaluate RT feasibility and accuracy using the Open Kidney Dataset and open-source segmentation models (nnU-Net, Segmenter, YOLO with MedSAM and LiteMedSAM). Our open-source GitHub pipeline includes model implementations, measurement algorithms, and a Wi-Fi-based streaming solution, enhancing US training and diagnostics, especially in point-of-care settings.
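The volumetric measurement itself reduces to counting segmented voxels and scaling by voxel size; a minimal sketch (the mask and spacing below are hypothetical, not from the pipeline):

```python
import numpy as np

def mask_volume_ml(mask, spacing_mm):
    """Volume of a binary segmentation mask in millilitres:
    voxel count times per-voxel volume (mm^3 -> mL)."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0

# Hypothetical kidney mask: a solid 40 x 30 x 50 voxel block
# at 0.5 x 0.5 x 1.0 mm spacing.
mask = np.zeros((64, 64, 64), dtype=bool)
mask[10:50, 10:40, 5:55] = True
print(f"{mask_volume_ml(mask, (0.5, 0.5, 1.0)):.1f} mL")
```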

GUSL: A Novel and Efficient Machine Learning Model for Prostate Segmentation on MRI

Jiaxin Yang, Vasileios Magoulianitis, Catherine Aurelia Christie Alexander, Jintang Xue, Masatomo Kaneko, Giovanni Cacciamani, Andre Abreu, Vinay Duddalwar, C. -C. Jay Kuo, Inderbir S. Gill, Chrysostomos Nikias

arXiv preprint · Jun 30 2025
Prostate and zonal segmentation is a crucial step for clinical diagnosis of prostate cancer (PCa). Computer-aided diagnosis tools for prostate segmentation are based on the deep learning (DL) paradigm. However, deep neural networks are perceived as "black-box" solutions by physicians, thus making them less practical for deployment in the clinical setting. In this paper, we introduce a feed-forward machine learning model, named Green U-shaped Learning (GUSL), suitable for medical image segmentation without backpropagation. GUSL introduces a multi-layer regression scheme for coarse-to-fine segmentation. Its feature extraction is based on a linear model, which enables seamless interpretability during feature extraction. Also, GUSL introduces a mechanism for attention on the prostate boundaries, which is an error-prone region, by employing regression to refine the predictions through residue correction. In addition, a two-step pipeline approach is used to mitigate the class imbalance, an issue inherent in medical imaging problems. After conducting experiments on two publicly available datasets and one private dataset, in both prostate gland and zonal segmentation tasks, GUSL achieves state-of-the-art performance compared with DL-based models. Notably, GUSL features a very energy-efficient pipeline, with a model size several times smaller and lower complexity than the competing solutions. In all datasets, GUSL achieved a Dice Similarity Coefficient (DSC) greater than 0.9 for gland segmentation. Considering also its lightweight model size and transparency in feature extraction, it offers a competitive and practical package for medical imaging applications.

Diffusion Model-based Data Augmentation Method for Fetal Head Ultrasound Segmentation

Fangyijie Wang, Kevin Whelan, Félix Balado, Guénolé Silvestre, Kathleen M. Curran

arXiv preprint · Jun 30 2025
Medical image data is less accessible than in other domains due to privacy and regulatory constraints. In addition, labeling requires costly, time-intensive manual image annotation by clinical experts. To overcome these challenges, synthetic medical data generation offers a promising solution. Generative AI (GenAI), employing generative deep learning models, has proven effective at producing realistic synthetic images. This study proposes a novel mask-guided GenAI approach using diffusion models to generate synthetic fetal head ultrasound images paired with segmentation masks. These synthetic pairs augment real datasets for supervised fine-tuning of the Segment Anything Model (SAM). Our results show that the synthetic data captures real image features effectively, and this approach reaches state-of-the-art fetal head segmentation, especially when trained with a limited number of real image-mask pairs. In particular, the segmentation reaches Dice Scores of 94.66% and 94.38% using a handful of ultrasound images from the Spanish and African cohorts, respectively. Our code, models, and data are available on GitHub.
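The Dice Score used to report segmentation quality here can be computed as follows (a standard formulation on toy masks, not the paper's evaluation code):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |intersection| / (|pred| + |target|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

a = np.zeros((128, 128), bool); a[32:96, 32:96] = True   # ground truth
b = np.zeros((128, 128), bool); b[40:104, 32:96] = True  # shifted prediction
print(f"Dice: {dice(b, a):.4f}")
```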

Uncertainty-aware Diffusion and Reinforcement Learning for Joint Plane Localization and Anomaly Diagnosis in 3D Ultrasound

Yuhao Huang, Yueyue Xu, Haoran Dou, Jiaxiao Deng, Xin Yang, Hongyu Zheng, Dong Ni

arXiv preprint · Jun 30 2025
Congenital uterine anomalies (CUAs) can lead to infertility, miscarriage, preterm birth, and an increased risk of pregnancy complications. Compared to traditional 2D ultrasound (US), 3D US can reconstruct the coronal plane, providing a clear visualization of the uterine morphology for assessing CUAs accurately. In this paper, we propose an intelligent system for simultaneous automated plane localization and CUA diagnosis. Our highlights are: 1) we develop a denoising diffusion model with local (plane) and global (volume/text) guidance, using an adaptive weighting strategy to optimize attention allocation to different conditions; 2) we introduce a reinforcement learning-based framework with unsupervised rewards to extract the key slice summary from redundant sequences, fully integrating information across multiple planes to reduce learning difficulty; 3) we provide text-driven uncertainty modeling for coarse prediction, and leverage it to adjust the classification probability for overall performance improvement. Extensive experiments on a large 3D uterine US dataset show the efficacy of our method, in terms of plane localization and CUA diagnosis. Code is available at https://github.com/yuhoo0302/CUA-US.

A Hierarchical Slice Attention Network for Appendicitis Classification in 3D CT Scans

Chia-Wen Huang, Haw Hwai, Chien-Chang Lee, Pei-Yuan Wu

arXiv preprint · Jun 29 2025
Timely and accurate diagnosis of appendicitis is critical in clinical settings to prevent serious complications. While CT imaging remains the standard diagnostic tool, the growing number of cases can overwhelm radiologists, potentially causing delays. In this paper, we propose a deep learning model that leverages 3D CT scans for appendicitis classification, incorporating Slice Attention mechanisms guided by external 2D datasets to enhance small lesion detection. Additionally, we introduce a hierarchical classification framework using pre-trained 2D models to differentiate between simple and complicated appendicitis. Our approach improves AUC by 3% for appendicitis and 5.9% for complicated appendicitis, offering a more efficient and reliable diagnostic solution compared to previous work.
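The slice-attention idea, weighting per-slice 2D features by learned attention before a scan-level prediction, can be sketched as simple attention pooling (toy dimensions and a random scoring vector, not the authors' architecture):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(slice_feats, w):
    """Attention pooling over CT slices: score each slice's feature
    vector, softmax the scores, and return the weighted average."""
    scores = slice_feats @ w            # one score per slice
    alpha = softmax(scores)             # attention weights, sum to 1
    return alpha @ slice_feats, alpha   # pooled feature, weights

rng = np.random.default_rng(0)
feats = rng.normal(size=(40, 16))       # 40 slices, 16-dim features each
w = rng.normal(size=16)                 # hypothetical learned scoring vector
pooled, alpha = attention_pool(feats, w)
print(pooled.shape, alpha.sum())
```

In a trained model the scoring vector is learned, so slices containing small lesions can receive higher weights than uninformative ones.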
