Page 57 of 100991 results

Deep learning-based auto-contouring of organs/structures-at-risk for pediatric upper abdominal radiotherapy.

Ding M, Maspero M, Littooij AS, van Grotel M, Fajardo RD, van Noesel MM, van den Heuvel-Eibrink MM, Janssens GO

PubMed · Jul 1, 2025
This study aimed to develop a computed tomography (CT)-based multi-organ segmentation model for delineating organs-at-risk (OARs) in pediatric upper abdominal tumors and to evaluate its robustness across multiple datasets. In-house postoperative CTs from pediatric patients with renal tumors and neuroblastoma (n = 189) and a public dataset (n = 189) with CTs covering thoracoabdominal regions were used. Seventeen OARs were delineated: nine by clinicians (Type 1) and eight using TotalSegmentator (Type 2). Auto-segmentation models were trained on the in-house dataset (Model-PMC-UMCU) and on the in-house and public datasets combined (Model-Combined). Performance was assessed with the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and mean surface distance (MSD). Two clinicians rated clinical acceptability on a 5-point Likert scale for the contours of 15 patients. Model robustness was evaluated against sex, age, intravenous contrast, and tumor type. Model-PMC-UMCU achieved mean DSC values above 0.95 for five of nine OARs, while the spleen and heart ranged between 0.90 and 0.95. The stomach-bowel and pancreas exhibited DSC values below 0.90. Model-Combined demonstrated improved robustness across both datasets. Clinical evaluation revealed good usability, with both clinicians rating six of nine Type 1 OARs above four and six of eight Type 2 OARs above three. Significant performance differences were found only across age groups in both datasets, specifically in the left lung and pancreas, with the 0-2 years age group showing the lowest performance. A multi-organ segmentation model was developed, showcasing enhanced robustness when trained on combined datasets. This model is suitable for various OARs and can be applied to multiple datasets in clinical settings.
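As a rough illustration of the DSC and HD95 metrics reported above (standard definitions, not the authors' code; the toy masks, the voxel spacing and all names below are assumptions):

```python
# Minimal sketch: Dice similarity coefficient (DSC) and 95th-percentile
# Hausdorff distance (HD95) between two binary 3D masks.
import numpy as np
from scipy import ndimage

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

def surface_voxels(mask: np.ndarray) -> np.ndarray:
    """Coordinates of voxels on the mask boundary (mask minus its erosion)."""
    eroded = ndimage.binary_erosion(mask)
    return np.argwhere(mask & ~eroded)

def hd95(pred: np.ndarray, ref: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th percentile of symmetric surface-to-surface distances (in mm)."""
    p = surface_voxels(pred) * np.asarray(spacing)
    r = surface_voxels(ref) * np.asarray(spacing)
    d = np.linalg.norm(p[:, None, :] - r[None, :, :], axis=-1)  # pairwise distances
    forward, backward = d.min(axis=1), d.min(axis=0)
    return np.percentile(np.concatenate([forward, backward]), 95)

# Toy example: two overlapping spheres standing in for a predicted and a reference OAR.
zz, yy, xx = np.mgrid[:32, :32, :32]
ref = (zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 < 8 ** 2
pred = (zz - 17) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 < 8 ** 2
print(f"DSC = {dice(pred, ref):.3f}, HD95 = {hd95(pred, ref):.1f} mm")
```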

TIER-LOC: Visual Query-based Video Clip Localization in fetal ultrasound videos with a multi-tier transformer.

Mishra D, Saha P, Zhao H, Hernandez-Cruz N, Patey O, Papageorghiou AT, Noble JA

PubMed · Jul 1, 2025
In this paper, we introduce the Visual Query-based task of Video Clip Localization (VQ-VCL) for medical video understanding. Specifically, we aim to retrieve a video clip containing frames similar to a given exemplar frame from a given input video. To solve this task, we propose a novel visual query-based video clip localization model called TIER-LOC. TIER-LOC is designed to improve video clip retrieval, especially in fine-grained videos, by extracting features at different levels, i.e., from coarse to fine-grained, referred to as TIERS. The aim is to utilize multi-Tier features to detect subtle differences and adapt to scale or resolution variations, leading to improved video-clip retrieval. TIER-LOC has three main components: (1) a Multi-Tier Spatio-Temporal Transformer that fuses spatio-temporal features extracted from multiple Tiers of video frames with features from multiple Tiers of the visual query, enabling better video understanding; (2) a Multi-Tier, Dual Anchor Contrastive Loss to deal with real-world annotation noise, which can be notable at event boundaries and in videos featuring highly similar objects; and (3) a Temporal Uncertainty-Aware Localization Loss designed to reduce the model's sensitivity to imprecise event boundaries. This is achieved by relaxing hard boundary constraints, allowing the model to learn underlying class patterns rather than being influenced by individual noisy samples. To demonstrate the efficacy of TIER-LOC, we evaluate it on two ultrasound video datasets and an open-source egocentric video dataset. First, we develop a sonographer workflow assistive task model to detect standard-frame clips in fetal ultrasound heart sweeps. Second, we assess our model's performance in retrieving standard-frame clips for detecting fetal anomalies in routine ultrasound scans, using the large-scale PULSE dataset. Lastly, we test our model's performance on an open-source computer vision video dataset by creating a VQ-VCL fine-grained video dataset based on the Ego4D dataset. Our model outperforms the best-performing state-of-the-art model by 7%, 4%, and 4% on the three video datasets, respectively.
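The VQ-VCL task itself can be illustrated with a much simpler baseline than TIER-LOC: given per-frame embeddings and a query-frame embedding, score every fixed-length window by mean cosine similarity and return the best one. This is only a hedged sketch of the task setup; the embeddings, window length and scoring rule are assumptions, and none of TIER-LOC's multi-tier components are implemented here.

```python
# Illustrative baseline for visual query-based video clip localization (not TIER-LOC).
import numpy as np

def best_clip(frame_embs: np.ndarray, query_emb: np.ndarray, win: int = 16):
    """Return (start, end) of the window whose frames are most similar to the query."""
    f = frame_embs / np.linalg.norm(frame_embs, axis=1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb)
    sims = f @ q                                                  # cosine similarity per frame
    scores = np.convolve(sims, np.ones(win) / win, mode="valid")  # mean similarity per window
    start = int(np.argmax(scores))
    return start, start + win

# Toy usage: 200 frames of 128-D embeddings, query taken from inside the target clip.
rng = np.random.default_rng(0)
embs = rng.normal(size=(200, 128))
embs[80:110] += 2.0                       # frames 80-110 resemble the query
query = embs[95]
print(best_clip(embs, query))             # roughly (80, 96)
```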

Tumor grade-titude: XGBoost radiomics paves the way for RCC classification.

Ellmann S, von Rohr F, Komina S, Bayerl N, Amann K, Polifka I, Hartmann A, Sikic D, Wullich B, Uder M, Bäuerle T

PubMed · Jul 1, 2025
This study aimed to develop and evaluate a non-invasive XGBoost-based machine learning model using radiomic features extracted from pre-treatment CT images to differentiate grade 4 renal cell carcinoma (RCC) from lower-grade tumours. A total of 102 RCC patients who underwent contrast-enhanced CT scans were included in the analysis. Radiomic features were extracted, and a two-step feature selection methodology was applied to identify the most relevant features for classification. The XGBoost model demonstrated high performance in both training (AUC = 0.87) and testing (AUC = 0.92) sets, with no significant difference between the two (p = 0.521). The model also exhibited high sensitivity, specificity, positive predictive value, and negative predictive value. The selected radiomic features captured both the distribution of intensity values and spatial relationships, which may provide valuable insights for personalized treatment decision-making. Our findings suggest that the XGBoost model has the potential to be integrated into clinical workflows to facilitate personalized adjuvant immunotherapy decision-making, ultimately improving patient outcomes. Further research is needed to validate the model in larger, multicentre cohorts and explore the potential of combining radiomic features with other clinical and molecular data.
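A hedged sketch of the general recipe the abstract describes (radiomic features, a two-step feature selection, an XGBoost classifier, AUC evaluation); the synthetic data, the particular selection steps (univariate filter then correlation pruning) and all hyperparameters are assumptions, not the authors' pipeline:

```python
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Stand-in for a radiomics table: 102 patients x 400 features, binary grade label.
X, y = make_classification(n_samples=102, n_features=400, n_informative=15, random_state=0)
X = pd.DataFrame(X, columns=[f"radiomic_{i}" for i in range(X.shape[1])])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# Step 1: univariate filter keeps the strongest candidate features.
selector = SelectKBest(f_classif, k=50).fit(X_tr, y_tr)
keep = X.columns[selector.get_support()]

# Step 2: drop features highly correlated with an already-kept feature.
corr = X_tr[keep].corr().abs()
final = []
for col in keep:
    if all(corr.loc[col, f] < 0.9 for f in final):
        final.append(col)

model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.05,
                      eval_metric="logloss", random_state=0)
model.fit(X_tr[final], y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te[final])[:, 1])
print(f"{len(final)} features kept, test AUC = {auc:.2f}")
```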

Development and validation of a fusion model based on multi-phase contrast CT radiomics combined with clinical features for predicting Ki-67 expression in gastric cancer.

Song T, Xue B, Liu M, Chen L, Cao A, Du P

PubMed · Jul 1, 2025
The present study aimed to develop and validate a fusion model based on multi-phase contrast-enhanced computed tomography (CECT) radiomics features combined with clinical features to preoperatively predict Ki-67 expression levels in patients with gastric cancer (GC). A total of 164 patients with GC who underwent surgical treatment at our hospital between September 2015 and September 2023 were retrospectively included and randomly divided into a training set (n=114) and a testing set (n=50). Using Pyradiomics, radiomics features were extracted from multi-phase CECT images and combined with significant clinical features through various machine learning algorithms [support vector machine (SVM), random forest (RandomForest), K-nearest neighbors (KNN), LightGBM and XGBoost] to build a fusion model. Receiver operating characteristic (ROC) curves, area under the curve (AUC), calibration curves and decision curve analysis (DCA) were used to evaluate, validate and compare the predictive performance and clinical utility of the models. Among the three single-phase models: for the arterial phase, the SVM radiomics model achieved the highest training-set AUC (0.697) and the RandomForest radiomics model the highest testing-set AUC (0.658); for the venous phase, the SVM radiomics model achieved the highest training-set AUC (0.783) and the LightGBM radiomics model the highest testing-set AUC (0.747); for the delayed phase, the KNN radiomics model achieved the highest training-set AUC (0.772) and the SVM radiomics model the highest testing-set AUC (0.719). The clinical feature model had the lowest AUC values in both the training and testing sets (0.614 and 0.520, respectively). Notably, the multi-phase model and the fusion model, constructed by combining the clinical features with the multi-phase features, demonstrated excellent discriminative performance, with the fusion model achieving AUC values of 0.933 and 0.817 in the training and testing sets, thus outperforming the other models (DeLong test, both P<0.05). The calibration curve showed that the fusion model was well calibrated (Hosmer-Lemeshow test, >0.5 in the training and validation sets). The DCA showed that the net benefit of the fusion model in identifying high Ki-67 expression was improved compared with that of the other models. Furthermore, the fusion model achieved an AUC of 0.805 on external validation data from The Cancer Imaging Archive. In conclusion, the fusion model established in the present study demonstrated excellent performance and is expected to serve as a non-invasive tool for predicting Ki-67 status and guiding clinical treatment.
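The fusion idea can be sketched as simple feature concatenation followed by one of the listed learners; everything below (arrays, feature counts, the SVM settings) is an illustrative assumption rather than the study's code:

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 164                                     # patients
arterial = rng.normal(size=(n, 30))         # radiomics features per CECT phase
venous   = rng.normal(size=(n, 30))
delayed  = rng.normal(size=(n, 30))
clinical = rng.normal(size=(n, 4))          # selected clinical variables (placeholder)
y = rng.integers(0, 2, size=n)              # Ki-67 high (1) vs low (0), toy labels

# Fusion = concatenation of all phase-wise radiomics blocks with clinical features.
X_fusion = np.hstack([arterial, venous, delayed, clinical])
X_tr, X_te, y_tr, y_te = train_test_split(X_fusion, y, test_size=50, stratify=y, random_state=0)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True, random_state=0))
svm.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, svm.predict_proba(X_te)[:, 1])
print(f"fusion-model test AUC = {auc:.2f}")  # ~0.5 here, since the toy data carry no signal
```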

Deep Learning Reveals Liver MRI Features Associated With PNPLA3 I148M in Steatotic Liver Disease.

Chen Y, Laevens BPM, Lemainque T, Müller-Franzes GA, Seibel T, Dlugosch C, Clusmann J, Koop PH, Gong R, Liu Y, Jakhar N, Cao F, Schophaus S, Raju TB, Raptis AA, van Haag F, Joy J, Loomba R, Valenti L, Kather JN, Brinker TJ, Herzog M, Costa IG, Hernando D, Schneider KM, Truhn D, Schneider CV

PubMed · Jul 1, 2025
Steatotic liver disease (SLD) is the most common liver disease worldwide, affecting 30% of the global population. It is strongly associated with the interplay of genetic and lifestyle-related risk factors. The genetic variant accounting for the largest fraction of SLD heritability is PNPLA3 I148M, which is carried by 23% of the Western population and increases the risk of SLD two- to three-fold. However, identification of variant carriers is not part of routine clinical care, which prevents patients from receiving personalised care. We analysed MRI images and common genetic variants in PNPLA3, TM6SF2, MTARC1, HSD17B13 and GCKR from a cohort of 45,603 individuals from the UK Biobank. Proton density fat fraction (PDFF) maps were generated using a water-fat separation toolbox applied to the magnitude and phase MRI data. The liver region was segmented using a U-Net model trained on 600 manually segmented ground truth images. The resulting liver masks and PDFF maps were subsequently used to calculate liver PDFF values. Individuals with SLD (PDFF ≥ 5%) and without SLD (PDFF < 5%) were selected as the study cohort and used to train and test a Vision Transformer classification model with five-fold cross-validation. We aimed to differentiate individuals who are homozygous for the PNPLA3 I148M variant from non-carriers, as evaluated by the area under the receiver operating characteristic curve (AUROC). To ensure a clear genetic contrast, all heterozygous individuals were excluded. To interpret our model, we generated attention maps that highlight the regions most predictive of the outcomes. Homozygosity for the PNPLA3 I148M variant demonstrated the best predictive performance among the five variants, with an AUROC of 0.68 (95% CI: 0.64-0.73) in SLD patients and 0.57 (95% CI: 0.52-0.61) in non-SLD patients. The AUROCs for the other SNPs ranged from 0.54 to 0.57 in SLD patients and from 0.52 to 0.54 in non-SLD patients. Predictive performance was generally higher in SLD patients than in non-SLD patients. Attention maps for PNPLA3 I148M carriers showed that fat deposition in regions adjacent to the hepatic vessels, near the liver hilum, plays an important role in predicting the presence of the I148M variant. Our study marks novel progress in the non-invasive detection of homozygosity for PNPLA3 I148M through the application of deep learning models to MRI images. Our findings suggest that PNPLA3 I148M might affect the liver fat distribution and could be used to predict the presence of PNPLA3 variants in patients with fatty liver. The findings of this research have the potential to be integrated into standard clinical practice, particularly when combined with clinical and biochemical data from other modalities to increase accuracy, enabling easier identification of at-risk individuals and facilitating the development of tailored interventions for PNPLA3 I148M-associated liver disease.
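The PDFF readout described above reduces to fat / (water + fat) averaged over the liver mask; a simplified sketch follows, with placeholder arrays standing in for the water-fat separation toolbox output and the U-Net liver mask, and with toolbox-specific corrections omitted:

```python
import numpy as np

def liver_pdff(water: np.ndarray, fat: np.ndarray, liver_mask: np.ndarray) -> float:
    """Mean PDFF (%) over the liver mask, from water- and fat-signal magnitude maps."""
    eps = 1e-6                                   # avoid division by zero outside tissue
    pdff_map = 100.0 * fat / (water + fat + eps)
    return float(pdff_map[liver_mask].mean())

# Toy volumes: roughly 8% fat fraction inside the "liver".
rng = np.random.default_rng(0)
water = rng.uniform(0.8, 1.0, size=(16, 64, 64))
fat = 0.08 * water / 0.92                        # fat / (water + fat) ≈ 8%
mask = np.zeros_like(water, dtype=bool)
mask[:, 16:48, 16:48] = True
pdff = liver_pdff(water, fat, mask)
print(f"liver PDFF = {pdff:.1f}%  ->  SLD (PDFF >= 5%): {pdff >= 5}")
```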

Liver Fat Fraction and Machine Learning Improve Steatohepatitis Diagnosis in Liver Transplant Patients.

Hajek M, Sedivy P, Burian M, Mikova I, Trunecka P, Pajuelo D, Dezortova M

PubMed · Jul 1, 2025
Machine learning identifies liver fat fraction (FF) measured by ¹H MR spectroscopy, insulinemia, and elastography as robust, non-invasive biomarkers for diagnosing steatohepatitis in liver transplant patients, validated through decision tree analysis. Compared with the general population (~5.8% prevalence), MASH is significantly more common in liver transplant recipients (~30%-50%). In patients with FF > 5.3%, the positive predictive value for MASH reached 97%, more than twice the value observed in the general population.
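A toy sketch of the decision-tree idea (fat fraction, insulinemia and elastography as inputs); the synthetic data and the fitted thresholds are illustrative, and only the FF > 5.3% cut-off is taken from the abstract:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 300
ff = rng.uniform(1, 25, n)              # 1H-MRS liver fat fraction (%)
insulin = rng.uniform(2, 40, n)         # fasting insulinemia (mU/L)
stiffness = rng.uniform(3, 15, n)       # elastography liver stiffness (kPa)
# Toy label: steatohepatitis more likely with high FF plus high insulin or stiffness.
y = ((ff > 5.3) & ((insulin > 15) | (stiffness > 8))).astype(int)

X = np.column_stack([ff, insulin, stiffness])
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["FF_percent", "insulin", "stiffness_kPa"]))
```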

A quantitative tumor-wide analysis of morphological heterogeneity of colorectal adenocarcinoma.

Dragomir MP, Popovici V, Schallenberg S, Čarnogurská M, Horst D, Nenutil R, Bosman F, Budinská E

PubMed · Jul 1, 2025
The intertumoral and intratumoral heterogeneity of colorectal adenocarcinoma (CRC) at the morphologic level is poorly understood. Previously, we identified morphological patterns associated with CRC molecular subtypes and their distinct molecular motifs. Here we aimed to evaluate the heterogeneity of these patterns across CRC. Three pathologists evaluated dominant, secondary, and tertiary morphology on four sections from four different FFPE blocks per tumor in a pilot set of 22 CRCs. An AI-based image analysis tool was trained on these tumors to evaluate the morphologic heterogeneity on an extended set of 161 stage I-IV primary CRCs (n = 644 H&E sections). We found that most tumors had two or three different dominant morphotypes and the complex tubular (CT) morphotype was the most common. The CT morphotype showed no combinatorial preferences. Desmoplastic (DE) morphotype was rarely dominant and rarely combined with other dominant morphotypes. Mucinous (MU) morphotype was mostly combined with solid/trabecular (TB) and papillary (PP) morphotypes. Most tumors showed medium or high heterogeneity, but no associations were found between heterogeneity and clinical parameters. A higher proportion of DE morphotype was associated with higher T-stage, N-stage, distant metastases, AJCC stage, and shorter overall survival (OS) and relapse-free survival (RFS). A higher proportion of MU morphotype was associated with higher grade, right side, and microsatellite instability (MSI). PP morphotype was associated with earlier T- and N-stage, absence of metastases, and improved OS and RFS. CT was linked to left side, lower grade, and better survival in stage I-III patients. MSI tumors showed higher proportions of MU and TB, and lower CT and PP morphotypes. These findings suggest that morphological shifts accompany tumor progression and highlight the need for extensive sampling and AI-based analysis. In conclusion, we observed unexpectedly high intratumoral morphological heterogeneity of CRC and found that it is not heterogeneity per se, but the proportions of morphologies that are associated with clinical outcomes.
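A hedged sketch of how a morphotype's tumour-wide proportion could be tested for association with overall survival using Cox regression (lifelines); the toy data frame is fabricated for illustration and is not the study's cohort or its statistical code:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 161
df = pd.DataFrame({
    "prop_DE": rng.uniform(0, 0.4, n),   # tumour-wide proportion of desmoplastic morphotype
    "prop_PP": rng.uniform(0, 0.4, n),   # proportion of papillary morphotype
})
# Toy survival times: shorter with more DE, longer with more PP (mirroring the reported trend).
df["OS_months"] = rng.exponential(60 * np.exp(-2 * df["prop_DE"] + 1.5 * df["prop_PP"]))
df["event"] = rng.integers(0, 2, n)      # 1 = death observed, 0 = censored

cph = CoxPHFitter()
cph.fit(df, duration_col="OS_months", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])
```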

Agreement between Routine-Dose and Lower-Dose CT with and without Deep Learning-based Denoising for Active Surveillance of Solid Small Renal Masses: A Multiobserver Study.

Borgbjerg J, Breen BS, Kristiansen CH, Larsen NE, Medrud L, Mikalone R, Müller S, Naujokaite G, Negård A, Nielsen TK, Salte IM, Frøkjær JB

PubMed · Jul 1, 2025
Purpose To assess the agreement between routine-dose (RD) and lower-dose (LD) contrast-enhanced CT scans, with and without Digital Imaging and Communications in Medicine (DICOM)-based deep learning denoising (DLD), in evaluating small renal masses (SRMs) during active surveillance. Materials and Methods In this retrospective study, CT scans from patients undergoing active surveillance for an SRM were included. Using a validated simulation technique, LD CT images were generated from the RD images to simulate 75% (LD75) and 90% (LD90) radiation dose reductions. Two additional LD image sets, in which DLD was applied (LD75-DLD and LD90-DLD), were generated. Between January 2023 and June 2024, nine radiologists from three institutions independently evaluated 350 CT scans across the five datasets for tumor size, tumor nearness to the collecting system (TN), and tumor shape irregularity (TSI). Interobserver reproducibility and agreement were assessed using the 95% limits of agreement with the mean (LOAM) and the Gwet AC2 coefficient, respectively. Subjective and quantitative image quality assessments were also performed. Results The study sample included 70 patients (mean age, 73.2 years ± 9.2 [SD]; 48 male, 22 female). LD75 CT was in agreement with RD scans for assessing SRM diameter, with a LOAM of ±2.4 mm (95% CI: 2.3, 2.6) for LD75 compared with ±2.2 mm (95% CI: 2.1, 2.4) for RD. However, a 90% dose reduction compromised reproducibility (LOAM ±3.0 mm; 95% CI: 2.8, 3.2), whereas LD90-DLD preserved measurement reproducibility (LOAM ±2.4 mm; 95% CI: 2.3, 2.6). Observer agreement for TN and TSI assessments was comparable across all image sets, with no statistically significant differences identified (all comparisons P ≥ .35 for TN and P ≥ .02 for TSI; Holm-corrected significance threshold, P = .013). Subjective and quantitative image quality assessments confirmed that DLD effectively restored image quality at reduced dose levels: LD75-DLD had the highest overall image quality, significantly lower noise, and improved contrast-to-noise ratio compared with RD (P < .001). Conclusion A 75% reduction in radiation dose is feasible for SRM assessment in active surveillance using CT with a conventional iterative reconstruction technique, whereas applying DLD allows submillisievert dose reduction. Keywords: CT, Urinary, Kidney, Radiation Safety, Observer Performance, Technology Assessment. Supplemental material is available for this article. © RSNA, 2025. See also the commentary by Muglia in this issue.
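For intuition about the agreement statistics, here is a simplified two-reader Bland-Altman computation of 95% limits of agreement on toy diameter measurements; the study's LOAM statistic extends this idea to nine readers, and neither the formula nor the numbers below reproduce that analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
true_diam = rng.uniform(15, 40, 70)                # 70 small renal masses, true diameter (mm)
reader_a = true_diam + rng.normal(0, 1.2, 70)      # measurement noise, reader A
reader_b = true_diam + rng.normal(0, 1.2, 70)      # measurement noise, reader B

diff = reader_a - reader_b
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                      # half-width of the 95% limits of agreement
print(f"bias = {bias:+.2f} mm, 95% limits of agreement = ±{loa:.1f} mm around the bias")
```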

A Deep Learning Model Based on High-Frequency Ultrasound Images for Classification of Different Stages of Liver Fibrosis.

Zhang L, Tan Z, Li C, Mou L, Shi YL, Zhu XX, Luo Y

PubMed · Jul 1, 2025
To develop a deep learning model based on high-frequency ultrasound images to classify different stages of liver fibrosis in chronic hepatitis B patients. This retrospective multicentre study included chronic hepatitis B patients who underwent both high-frequency and low-frequency liver ultrasound examinations between January 2014 and August 2024 at six hospitals. Paired images were used to train the high-frequency (HF-DL) and low-frequency (LF-DL) deep learning models independently. Three binary tasks were conducted: (1) Significant Fibrosis (S0-1 vs. S2-4); (2) Advanced Fibrosis (S0-2 vs. S3-4); (3) Cirrhosis (S0-3 vs. S4). Hepatic pathological results constituted the ground truth for algorithm development and evaluation. The diagnostic value of high-frequency and low-frequency liver ultrasound images was compared across commonly used CNN networks. The HF-DL model's performance was compared against the LF-DL model, FIB-4, APRI and, in the external test set, SWE. Model calibration was plotted and clinical benefit was calculated. Subgroup analyses were conducted for patients with different characteristics (BMI, ALT, inflammation level, alcohol consumption level). The HF-DL model demonstrated consistently superior diagnostic performance across all stages of liver fibrosis compared with the LF-DL model, FIB-4, APRI and SWE, particularly in classifying advanced fibrosis (0.93 [95% CI 0.90-0.95], 0.93 [95% CI 0.89-0.96], p < 0.01). The HF-DL model also demonstrated significantly improved performance in both target patient detection and negative population exclusion. The HF-DL model based on high-frequency ultrasound images outperforms other routinely used non-invasive modalities across different stages of liver fibrosis, particularly in advanced fibrosis, and may offer considerable clinical value.
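The blood-based scores the HF-DL model is benchmarked against have standard closed-form definitions; a small sketch with illustrative patient values (not study data):

```python
import math

def fib4(age_years: float, ast: float, alt: float, platelets_10e9_per_L: float) -> float:
    """FIB-4 = (age x AST) / (platelets x sqrt(ALT))."""
    return (age_years * ast) / (platelets_10e9_per_L * math.sqrt(alt))

def apri(ast: float, ast_upper_limit_normal: float, platelets_10e9_per_L: float) -> float:
    """APRI = (AST / upper limit of normal) / platelets x 100."""
    return (ast / ast_upper_limit_normal) / platelets_10e9_per_L * 100

# Example: 52-year-old, AST 48 U/L, ALT 40 U/L, platelets 150 x 10^9/L, AST ULN 40 U/L.
print(f"FIB-4 = {fib4(52, 48, 40, 150):.2f}")   # ~2.63
print(f"APRI  = {apri(48, 40, 150):.2f}")       # ~0.80
```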

Mechanically assisted non-invasive ventilation for liver SABR: Improve CBCT, treat more accurately.

Pierrard J, Audag N, Massih CA, Garcia MA, Moreno EA, Colot A, Jardinet S, Mony R, Nevez Marques AF, Servaes L, Tison T, den Bossche VV, Etume AW, Zouheir L, Ooteghem GV

PubMed · Jul 1, 2025
Cone-beam computed tomography (CBCT) for image-guided radiotherapy (IGRT) during liver stereotactic ablative radiotherapy (SABR) is degraded by respiratory motion artefacts, potentially jeopardising treatment accuracy. Mechanically assisted non-invasive ventilation-induced breath-hold (MANIV-BH) can reduce these artefacts. This study compares MANIV-BH and free-breathing (FB) CBCTs regarding image quality, IGRT variability, automatic registration accuracy, and deep-learning auto-segmentation performance. Liver SABR CBCTs were presented blindly to 14 operators: 25 patients with FB and 25 with MANIV-BH. Operators rated CBCT quality and the ease of IGRT (rigid registration with the planning CT). Interoperator IGRT variability was compared between FB and MANIV-BH. Automatic gross tumour volume (GTV) mapping accuracy was compared using automatic rigid registration and image-guided deformable registration. Deep-learning organ-at-risk (OAR) auto-segmentation was rated by an operator, who recorded the time dedicated to manual correction of these volumes. MANIV-BH significantly improved CBCT image quality ("Excellent"/"Good": 83.4% versus 25.4% with FB, p < 0.001), facilitated IGRT ("Very easy"/"Easy": 68.0% versus 38.9% with FB, p < 0.001), and reduced IGRT variability, particularly for trained operators (overall variability of 3.2 mm versus 4.6 mm with FB, p = 0.010). MANIV-BH also improved deep-learning auto-segmentation performance (80.0% rated "Excellent"/"Good" versus 4.0% with FB, p < 0.001) and reduced the median manual correction time by 54.2% compared with FB (p < 0.001). However, automatic GTV mapping accuracy was not significantly different between MANIV-BH and FB. In liver SABR, MANIV-BH significantly improves CBCT quality, reduces interoperator IGRT variability, and enhances OAR auto-segmentation. Beyond being safe and effective for respiratory motion mitigation, MANIV increases accuracy during treatment delivery, although its implementation requires resources.
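A rough sketch of one way interoperator IGRT variability could be quantified: per patient, the spread of the 3D registration shifts chosen by different operators. The synthetic shifts and the summary statistic below are placeholders, not the study's data or its exact method:

```python
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_operators = 25, 14
# Simulated translational registration shifts (mm) per patient and operator (x, y, z).
shifts = rng.normal(0, 1.5, size=(n_patients, n_operators, 3))

# Deviation of each operator's shift from the patient-wise mean shift, summarised overall.
mean_shift = shifts.mean(axis=1, keepdims=True)
deviation = np.linalg.norm(shifts - mean_shift, axis=-1)   # mm, per patient and operator
print(f"overall interoperator variability ≈ {deviation.mean():.1f} mm")
```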
