
KEVS: enhancing segmentation of visceral adipose tissue in pre-cystectomy CT with Gaussian kernel density estimation.

Boucher T, Tetlow N, Fung A, Dewar A, Arina P, Kerneis S, Whittle J, Mazomenos EB

PubMed · May 9 2025
The distribution of visceral adipose tissue (VAT) in cystectomy patients is indicative of the incidence of postoperative complications. Existing VAT segmentation methods for computed tomography (CT) employing intensity thresholding have limitations relating to inter-observer variability. Moreover, the difficulty in creating ground-truth masks limits the development of deep learning (DL) models for this task. This paper introduces a novel method for VAT prediction in pre-cystectomy CT, which is fully automated and does not require ground-truth VAT masks for training, overcoming the aforementioned limitations. We introduce the kernel density-enhanced VAT segmentator (KEVS), combining a DL semantic segmentation model, for multi-body feature prediction, with Gaussian kernel density estimation analysis of predicted subcutaneous adipose tissue to achieve accurate scan-specific predictions of VAT in the abdominal cavity. Uniquely for a DL pipeline, KEVS does not require ground-truth VAT masks. We verify the ability of KEVS to accurately segment abdominal organs in unseen CT data and compare KEVS VAT segmentation predictions to existing state-of-the-art (SOTA) approaches in a dataset of 20 pre-cystectomy CT scans, collected from University College London Hospital (UCLH-Cyst), with expert ground-truth annotations. KEVS achieves a 4.80% and 6.02% improvement in Dice coefficient over the second-best DL and thresholding-based VAT segmentation techniques, respectively, when evaluated on UCLH-Cyst. This research introduces KEVS, an automated, SOTA method for the prediction of VAT in pre-cystectomy CT which eliminates inter-observer variability and is trained entirely on open-source CT datasets that do not contain ground-truth VAT masks.
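
The abstract does not include implementation details, but the scan-specific KDE step it describes can be illustrated with a short sketch: fit a Gaussian KDE to the Hounsfield-unit distribution of the predicted subcutaneous fat, then keep abdominal-cavity voxels whose intensities are plausible under that density. The function name, mask inputs, and `density_cutoff` threshold below are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.stats import gaussian_kde

def vat_from_sat_kde(ct_hu, cavity_mask, sat_mask, density_cutoff=0.05):
    """Scan-specific VAT selection via Gaussian KDE (hypothetical sketch).

    ct_hu: 3D array of Hounsfield units.
    cavity_mask / sat_mask: boolean masks for the abdominal cavity and
    subcutaneous adipose tissue, e.g. from a DL segmentation model.
    """
    # Fit a KDE to the HU distribution of the predicted SAT voxels.
    kde = gaussian_kde(ct_hu[sat_mask])
    # Score every abdominal-cavity voxel under the SAT intensity density.
    density = kde(ct_hu[cavity_mask])
    # Keep voxels whose HU is plausible adipose tissue for *this* scan.
    vat_mask = np.zeros_like(cavity_mask, dtype=bool)
    vat_mask[cavity_mask] = density >= density_cutoff * density.max()
    return vat_mask
```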

Artificial intelligence applied to ultrasound diagnosis of pelvic gynecological tumors: a systematic review and meta-analysis.

Geysels A, Garofalo G, Timmerman S, Barreñada L, De Moor B, Timmerman D, Froyman W, Van Calster B

PubMed · May 8 2025
To perform a systematic review on artificial intelligence (AI) studies focused on identifying and differentiating pelvic gynecological tumors on ultrasound scans. Studies developing or validating AI models for diagnosing gynecological pelvic tumors on ultrasound scans were eligible for inclusion. We systematically searched PubMed, Embase, Web of Science, and Cochrane Central from their database inception until April 30th, 2024. To assess the quality of the included studies, we adapted the QUADAS-2 risk of bias tool to address the unique challenges of AI in medical imaging. Using multi-level random effects models, we performed a meta-analysis to generate summary estimates of the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. To provide ultrasound examiners with a reference point for current diagnostic support tools, we descriptively compared the pooled performance to that of the well-recognized ADNEX model on external validation. Subgroup analyses were performed to explore sources of heterogeneity. From 9151 records retrieved, 44 studies were eligible: 40 on ovarian, three on endometrial, and one on myometrial pathology. Overall, 95% were at high risk of bias - primarily due to inappropriate study inclusion criteria, the absence of a patient-level split of training and testing image sets, and no calibration assessment. For ovarian tumors, the summary AUC for AI models distinguishing benign from malignant tumors was 0.89 (95% CI: 0.85-0.92). In lower-risk studies (at least three low-risk domains), the summary AUC dropped to 0.87 (0.83-0.90), with deep learning models outperforming radiomics-based machine learning approaches in this subset. Only five studies included an external validation, and six evaluated calibration performance. In a recent systematic review of external validation studies, the ADNEX model had a pooled AUC of 0.93 (0.91-0.94) in studies at low risk of bias. Studies on endometrial and myometrial pathologies were reported individually. Although AI models show promising discriminative performances for diagnosing gynecological tumors on ultrasound, most studies have methodological shortcomings that result in a high risk of bias. In addition, the ADNEX model appears to outperform most AI approaches for ovarian tumors. Future research should emphasize robust study designs - ideally large, multicenter, and prospective cohorts that mirror real-world populations - along with external validation, proper calibration, and standardized reporting. This study was pre-registered with Open Science Framework (OSF): https://doi.org/10.17605/osf.io/bhkst.
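
As background on how such summary estimates arise, the sketch below pools logit-transformed AUCs with a DerSimonian-Laird random-effects estimator. This is a simplified single-level stand-in: the review itself fitted multi-level random effects models, and the estimator and delta-method variance approximation here are assumptions for illustration.

```python
import numpy as np

def pool_auc_random_effects(aucs, ses):
    """Single-level random-effects pooling of logit-transformed AUCs."""
    aucs, ses = np.asarray(aucs, float), np.asarray(ses, float)
    y = np.log(aucs / (1 - aucs))              # logit keeps estimates in (0, 1)
    v = (ses / (aucs * (1 - aucs))) ** 2       # delta-method variances on logit scale
    w = 1 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)         # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)    # between-study variance (DL estimator)
    w_star = 1 / (v + tau2)
    pooled = np.sum(w_star * y) / np.sum(w_star)
    return 1 / (1 + np.exp(-pooled))           # back-transform to the AUC scale

# e.g. pool_auc_random_effects([0.89, 0.85, 0.92], [0.03, 0.04, 0.02])
```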

Comparative analysis of open-source against commercial AI-based segmentation models for online adaptive MR-guided radiotherapy.

Langner D, Nachbar M, Russo ML, Boeke S, Gani C, Niyazi M, Thorwarth D

PubMed · May 8 2025
Online adaptive magnetic resonance-guided radiotherapy (MRgRT) has emerged as a state-of-the-art treatment option for multiple tumour entities, accounting for daily anatomical and tumour volume changes, thus allowing sparing of relevant organs at risk (OARs). However, the annotation of treatment-relevant anatomical structures in the context of online plan adaptation remains challenging, often relying on commercial segmentation solutions due to limited availability of clinically validated alternatives. The aim of this study was to investigate whether an open-source artificial intelligence (AI) segmentation network can compete with the annotation accuracy of a commercial solution, both trained on the identical dataset, questioning the need for commercial models in clinical practice. For 47 pelvic patients, T2w MR imaging data acquired on a 1.5 T MR-Linac were manually contoured, identifying prostate, seminal vesicles, rectum, anal canal, bladder, penile bulb, and bony structures. These training data were used for the generation of an in-house AI segmentation model, an nnU-Net with a residual encoder architecture featuring a streamlined single image inference pipeline, and re-training of a commercial solution. For quantitative evaluation, 20 MR images were contoured by a radiation oncologist, considered ground-truth contours (GTC), and compared with the in-house/commercial AI-based contours (iAIC/cAIC) using the Dice Similarity Coefficient (DSC), 95% Hausdorff distances (HD95), and surface DSC (sDSC). For qualitative evaluation, four radiation oncologists assessed the usability of OAR/target iAIC within an online adaptive workflow using a four-point Likert scale: (1) acceptable without modification, (2) requiring minor adjustments, (3) requiring major adjustments, and (4) not usable. Patient-individual annotations were generated in a median [range] time of 23 [16-34] s for iAIC and 152 [121-198] s for cAIC, respectively. OARs showed a maximum median DSC of 0.97/0.97 (iAIC/cAIC) for the bladder and a minimum median DSC of 0.78/0.79 (iAIC/cAIC) for the anal canal/penile bulb. The maximal and minimal median HD95 were observed for the rectum, at 17.3/20.6 mm (iAIC/cAIC), and for the bladder, at 5.6/6.0 mm (iAIC/cAIC), respectively. Overall, the average median DSC/HD95 values were 0.87/11.8 mm (iAIC) and 0.83/10.2 mm (cAIC) for OAR/targets and 0.90/11.9 mm (iAIC) and 0.91/16.5 mm (cAIC) for bony structures. For a tolerance of 3 mm, the highest sDSC was determined for the bladder (iAIC: 1.00, cAIC: 0.99), and the lowest for the prostate in iAIC (0.89) and the anal canal in cAIC (0.80). Qualitatively, 84.8% of analysed iAIC contours were considered clinically acceptable, while 12.9% required minor adjustments and 2.3% required major adjustments or were classed as unusable. Contour-specific analysis showed that iAIC achieved the highest mean score of 1.00 for the anal canal and the lowest of 1.61 for the prostate. This study demonstrates that an open-source segmentation framework can achieve annotation accuracy comparable to commercial solutions for pelvic anatomy in online adaptive MRgRT. The adapted framework not only maintained high segmentation performance, with 84.8% of contours accepted by physicians and a further 12.9% requiring only minor corrections, but also enhanced the clinical workflow efficiency of online adaptive MRgRT through reduced inference times. These findings establish open-source frameworks as viable alternatives to commercial systems in supervised clinical workflows.
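
The evaluation metrics (DSC, HD95) are standard and can be reproduced on binary masks with NumPy/SciPy; the sketch below is a simplified reference implementation, assuming well-formed non-empty masks, with voxel spacing handled via the `sampling` argument.

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance (mm) between mask surfaces."""
    sa = a ^ ndimage.binary_erosion(a)   # surface = mask minus its erosion
    sb = b ^ ndimage.binary_erosion(b)
    # Distance from each surface voxel of one mask to the other mask's surface.
    da = ndimage.distance_transform_edt(~sb, sampling=spacing)[sa]
    db = ndimage.distance_transform_edt(~sa, sampling=spacing)[sb]
    return max(np.percentile(da, 95), np.percentile(db, 95))
```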

Predicting treatment response to systemic therapy in advanced gallbladder cancer using multiphase enhanced CT images.

Wu J, Zheng Z, Li J, Shen X, Huang B

PubMed · May 8 2025
Accurate estimation of treatment response can help clinicians identify patients who would potentially benefit from systemic therapy. This study aimed to develop and externally validate a model for predicting treatment response to systemic therapy in advanced gallbladder cancer (GBC). We recruited 399 eligible GBC patients across four institutions. Multivariable logistic regression analysis was performed to identify independent clinical factors related to therapeutic efficacy. A deep learning (DL) radiomics signature was developed for predicting treatment response using multiphase enhanced CT images. Then, the DL radiomic-clinical (DLRSC) model was built by combining the DL signature and significant clinical factors, and its predictive performance was evaluated using the area under the curve (AUC). Gradient-weighted class activation mapping analysis was performed to help clinicians better understand the predictive results. Furthermore, patients were stratified into low- and high-score groups by the DLRSC model. The progression-free survival (PFS) and overall survival (OS) of the two groups were compared. Multivariable analysis revealed that tumor size was a significant predictor of efficacy. The DLRSC model showed strong predictive performance, with AUCs of 0.86 (95% CI, 0.82-0.89) and 0.84 (95% CI, 0.80-0.87) in the internal and external test datasets, respectively. The model showed good discrimination, calibration, and clinical utility. Moreover, Kaplan-Meier survival analysis revealed that patients in the low-score group, predicted by the DLRSC model to be insensitive to systemic therapy, had worse PFS and OS. The DLRSC model allows for predicting treatment response in advanced GBC patients receiving systemic therapy. The survival benefit provided by the DLRSC model was also assessed. Question: No effective tools exist for identifying patients who would potentially benefit from systemic therapy in clinical practice. Findings: Our combined model allows for predicting treatment response to systemic therapy in advanced gallbladder cancer. Clinical relevance: With the help of this model, clinicians could inform patients of the risk of potentially ineffective treatment. Such a strategy can reduce unnecessary adverse events and help reallocate societal healthcare resources effectively.
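
The abstract combines a DL signature with significant clinical factors (tumor size) but does not state the fusion mechanism; a common and minimal choice is logistic regression over the two inputs, sketched below on synthetic data. All variable names, coefficients, and the data itself are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 300
dl_score = rng.normal(size=n)              # hypothetical DL radiomics signature
tumor_size = rng.gamma(2.0, 1.5, size=n)   # clinical factor named in the abstract
p = 1 / (1 + np.exp(-(1.2 * dl_score - 0.4 * tumor_size)))
responded = (rng.random(n) < p).astype(int)

# Fuse the two predictors with logistic regression and score held-out patients.
X = np.column_stack([dl_score, tumor_size])
clf = LogisticRegression().fit(X[:200], responded[:200])
print("held-out AUC:", roc_auc_score(responded[200:], clf.predict_proba(X[200:])[:, 1]))
```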

A diffusion-stimulated CT-US registration model with self-supervised learning and synthetic-to-real domain adaptation.

Li S, Jia B, Huang W, Zhang X, Zhou W, Wang C, Teng G

PubMed · May 8 2025
In abdominal interventional procedures, achieving precise registration of 2D ultrasound (US) frames with 3D computed tomography (CT) scans presents a significant challenge. Traditional tracking methods often rely on high-precision sensors, which can be prohibitively expensive. Furthermore, the clinical need for real-time registration with a broad capture range frequently exceeds the performance of standard image-based optimization techniques. Current automatic registration methods that utilize deep learning are either heavily reliant on manual annotations for training or struggle to effectively bridge the gap between different imaging domains. To address these challenges, we propose a novel diffusion-stimulated CT-US registration model. This model harnesses the physical diffusion properties of US to generate synthetic US images from preoperative CT data. Additionally, we introduce a synthetic-to-real domain adaptation strategy using a diffusion model to mitigate the discrepancies between real and synthetic US images. A dual-stream self-supervised regression neural network, trained on these synthetic images, is then used to estimate the pose within the CT space. The effectiveness of our proposed approach is verified through validation using US and CT scans from a dual-modality human abdominal phantom. The results of our experiments confirm that our method can accurately initialize the US image pose within an acceptable range of error and subsequently refine it to achieve precise alignment. This enables real-time, tracker-independent, and robust rigid registration of CT and US images.
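
The dual-stream pose regressor is not specified beyond the abstract; as a rough sketch under assumed architectural choices (ResNet-18 encoders, a 6-DoF rigid pose output, grayscale images tiled to three channels), it might look like this:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class DualStreamPoseNet(nn.Module):
    """Hypothetical dual-stream regressor: one encoder per input style
    (intraoperative US vs. CT-derived synthetic US), fused into a rigid
    6-DoF pose (3 translations + 3 rotations)."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.us_stream = resnet18(num_classes=feat_dim)
        self.syn_stream = resnet18(num_classes=feat_dim)
        self.head = nn.Linear(2 * feat_dim, 6)  # tx, ty, tz, rx, ry, rz

    def forward(self, us_img, syn_img):
        # Inputs: (B, 3, H, W); grayscale US assumed repeated across channels.
        f = torch.cat([self.us_stream(us_img), self.syn_stream(syn_img)], dim=1)
        return self.head(f)
```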

Accelerated inference for thyroid nodule recognition in ultrasound imaging using FPGA.

Ma W, Wu X, Zhang Q, Li X, Wu X, Wang J

PubMed · May 7 2025
Thyroid cancer is the most prevalent malignant tumour in the endocrine system, with its incidence steadily rising in recent years. Current central processing units (CPUs) and graphics processing units (GPUs) face significant challenges in terms of processing speed, energy consumption, cost, and scalability in the identification of thyroid nodules, making them inadequate for the demands of future green, efficient, and accessible healthcare. To overcome these limitations, this study proposes an efficient quantized inference method using a field-programmable gate array (FPGA). We employ the YOLOv4-tiny neural network model, enhancing software performance with the K-means++ optimization algorithm and improving hardware performance through techniques such as 8-bit weight quantization, batch normalization, and convolutional layer fusion. The study is based on the ZYNQ7020 FPGA platform. Experimental results demonstrate an average accuracy of 81.44% on the TN3K dataset and 81.20% on the internal test set from a Chinese tertiary hospital. The power consumption of the FPGA platform, CPU (Intel Core i5-10200H), and GPU (NVIDIA RTX 4090) was 3.119 W, 45 W, and 68 W, respectively, with energy efficiency ratios of 5.45, 0.31, and 5.56. This indicates that the FPGA's energy efficiency is 17.6 times that of the CPU and 0.98 times that of the GPU. These results show that the FPGA not only significantly outperforms the CPU in speed but also consumes far less power than the GPU. Moreover, using mid-to-low-end FPGAs yields performance comparable to that of commercial-grade GPUs. This technology presents a novel solution for medical imaging diagnostics, with the potential to significantly enhance the speed, accuracy, and environmental sustainability of ultrasound image analysis, thereby supporting the future development of medical care.
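
Two of the named optimizations are common enough to sketch generically: folding batch normalization into the preceding convolution, and 8-bit weight quantization. The symmetric per-tensor int8 scheme below is an assumption; the abstract does not specify the exact scheme deployed on the ZYNQ7020.

```python
import numpy as np

def fuse_conv_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BN parameters into the preceding conv's weights and bias.
    w: (out, in, kh, kw); b, gamma, beta, mean, var: (out,)."""
    std = np.sqrt(var + eps)
    w_fused = w * (gamma / std).reshape(-1, 1, 1, 1)  # scale per output channel
    b_fused = beta + (b - mean) * gamma / std
    return w_fused, b_fused

def quantize_int8(w):
    """Symmetric per-tensor 8-bit quantization: w is approximated by scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale
```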

The added value of artificial intelligence using Quantib Prostate for the detection of prostate cancer at multiparametric magnetic resonance imaging.

Russo T, Quarta L, Pellegrino F, Cosenza M, Camisassa E, Lavalle S, Apostolo G, Zaurito P, Scuderi S, Barletta F, Marzorati C, Stabile A, Montorsi F, De Cobelli F, Brembilla G, Gandaglia G, Briganti A

PubMed · May 7 2025
Artificial intelligence (AI) has been proposed to assist radiologists in reporting multiparametric magnetic resonance imaging (mpMRI) of the prostate. We evaluated the diagnostic performance of radiologists with different levels of experience when reporting mpMRI with the support of available AI-based software (Quantib Prostate). This is a single-center study (NCT06298305) involving 110 patients. Those with a positive mpMRI (PI-RADS ≥ 3) underwent targeted plus systematic biopsy (TBx plus SBx), while those with a negative mpMRI but a high clinical suspicion of prostate cancer (PCa) underwent SBx. Three readers with different levels of experience, identified as R1, R2, and R3, reviewed all mpMRI scans. Inter-reader agreement among the three readers with and without the assistance of Quantib Prostate, as well as sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and diagnostic accuracy for the detection of clinically significant PCa (csPCa), were assessed. Overall, 102 patients underwent prostate biopsy and the csPCa detection rate was 47%. Using Quantib Prostate increased the number of lesions identified by R3 (101 vs. 127). Inter-reader agreement increased slightly with Quantib Prostate, from 0.37 to 0.41. The PPV, NPV, and diagnostic accuracy (measured by the area under the curve [AUC]) of R3 improved (0.51 vs. 0.55, 0.65 vs. 0.82, and 0.56 vs. 0.62, respectively). Conversely, no changes were observed for R1 and R2. Using Quantib Prostate did not enhance the detection rate of csPCa for readers with some experience in prostate imaging. However, for an inexperienced reader, this AI-based software was shown to improve performance. Name of registry: clinicaltrials.gov. NCT06298305. Date of registration: 2022-09.

STG: Spatiotemporal Graph Neural Network with Fusion and Spatiotemporal Decoupling Learning for Prognostic Prediction of Colorectal Cancer Liver Metastasis

Yiran Zhu, Wei Yang, Yan su, Zesheng Li, Chengchang Pan, Honggang Qi

arXiv preprint · May 6 2025
We propose a multimodal spatiotemporal graph neural network (STG) framework to predict colorectal cancer liver metastasis (CRLM) progression. Current clinical models do not effectively integrate the tumor's spatial heterogeneity, dynamic evolution, and complex multimodal data relationships, limiting their predictive accuracy. Our STG framework combines preoperative CT imaging and clinical data into a heterogeneous graph structure, enabling joint modeling of tumor distribution and temporal evolution through spatial topology and cross-modal edges. The framework uses GraphSAGE to aggregate spatiotemporal neighborhood information and leverages supervised and contrastive learning strategies to enhance the model's ability to capture temporal features and improve robustness. A lightweight version of the model reduces parameter count by 78.55%, maintaining near-state-of-the-art performance. The model jointly optimizes recurrence risk regression and survival analysis tasks, with contrastive loss improving feature representational discriminability and cross-modal consistency. Experimental results on the MSKCC CRLM dataset show a time-adjacent accuracy of 85% and a mean absolute error of 1.1005, significantly outperforming existing methods. The innovative heterogeneous graph construction and spatiotemporal decoupling mechanism effectively uncover the associations between dynamic tumor microenvironment changes and prognosis, providing reliable quantitative support for personalized treatment decisions.
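
The abstract names GraphSAGE as the aggregation operator; the toy PyTorch Geometric sketch below shows that core step on a graph flattened to a single homogeneous `edge_index`, with a recurrence-risk regression head. The heterogeneous spatial/cross-modal edge types, contrastive loss, and survival head of the full STG are omitted, and all dimensions are assumptions.

```python
import torch
from torch_geometric.nn import SAGEConv

class STGSketch(torch.nn.Module):
    def __init__(self, in_dim, hid_dim=64):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hid_dim)   # aggregate neighborhood features
        self.conv2 = SAGEConv(hid_dim, hid_dim)
        self.risk_head = torch.nn.Linear(hid_dim, 1)

    def forward(self, x, edge_index):
        # x: (num_nodes, in_dim) node features; edge_index: (2, num_edges).
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        return self.risk_head(h.mean(dim=0))     # graph-level risk estimate
```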

A Vision-Language Model for Focal Liver Lesion Classification

Song Jian, Hu Yuchang, Wang Hui, Chen Yen-Wei

arXiv preprint · May 6 2025
Accurate classification of focal liver lesions is crucial for diagnosis and treatment in hepatology. However, traditional supervised deep learning models depend on large-scale annotated datasets, which are often limited in medical imaging. Recently, Vision-Language Models (VLMs) such as the Contrastive Language-Image Pre-training (CLIP) model have been applied to image classification. Compared to a conventional convolutional neural network (CNN), which classifies images based on visual information only, a VLM leverages multimodal learning with text and images, allowing it to learn effectively even with a limited amount of labeled data. Inspired by CLIP, we propose Liver-VLM, a model specifically designed for focal liver lesion (FLL) classification. First, Liver-VLM incorporates class information into the text encoder without introducing additional inference overhead. Second, by calculating the pairwise cosine similarities between image and text embeddings and optimizing the model with a cross-entropy loss, Liver-VLM effectively aligns image features with class-level text features. Experimental results on the MPCT-FLLs dataset demonstrate that the Liver-VLM model outperforms both the standard CLIP and MedCLIP models in terms of accuracy and area under the curve (AUC). Further analysis shows that using a lightweight ResNet18 backbone enhances classification performance, particularly under data-constrained conditions.
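
The alignment step is described concretely enough to sketch: pairwise cosine similarities between image embeddings and class-level text embeddings, optimized with cross-entropy. The temperature value and function signature below are assumptions.

```python
import torch
import torch.nn.functional as F

def class_text_alignment_loss(img_emb, txt_emb, labels, temperature=0.07):
    """img_emb: (B, D) image features; txt_emb: (C, D), one embedding per
    lesion class; labels: (B,) integer class indices."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature    # pairwise cosine similarities
    return F.cross_entropy(logits, labels)  # pull each image toward its class text
```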

Deep Learning for Classification of Solid Renal Parenchymal Tumors Using Contrast-Enhanced Ultrasound.

Bai Y, An ZC, Du LF, Li F, Cai YY

PubMed · May 6 2025
The purpose of this study is to assess the ability of deep learning models to classify different subtypes of solid renal parenchymal tumors using contrast-enhanced ultrasound (CEUS) images and to compare their classification performance. A retrospective study was conducted using CEUS images of 237 kidney tumors, including 46 angiomyolipomas (AML), 118 clear cell renal cell carcinomas (ccRCC), 48 papillary RCCs (pRCC), and 25 chromophobe RCCs (chRCC), collected from January 2017 to December 2019. Two deep learning models, based on the ResNet-18 and RepVGG architectures, were trained and validated to distinguish between these subtypes. The models' performance was assessed using sensitivity, specificity, positive predictive value, negative predictive value, F1 score, Matthews correlation coefficient, accuracy, area under the receiver operating characteristic curve (AUC), and confusion matrix analysis. Class activation mapping (CAM) was applied to visualize the specific regions that contributed to the models' predictions. The ResNet-18 and RepVGG-A0 models achieved overall accuracies of 76.7% and 84.5%, respectively, across all four subtypes. The AUCs for AML, ccRCC, pRCC, and chRCC were 0.832, 0.829, 0.806, and 0.795 for the ResNet-18 model, compared to 0.906, 0.911, 0.840, and 0.827 for the RepVGG-A0 model, respectively. The deep learning models could reliably differentiate between various histological subtypes of renal tumors using CEUS images in an objective and non-invasive manner.
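
The CAM visualization mentioned in the methods applies directly to architectures like ResNet-18 that end in global average pooling followed by a linear classifier; a minimal sketch (the hook, variable names, and 4-class head are illustrative) is:

```python
import torch
from torchvision.models import resnet18

model = resnet18(num_classes=4).eval()   # AML / ccRCC / pRCC / chRCC (assumed head)
feats = {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(last=o))

def class_activation_map(x, cls):
    """Classic CAM: weight the final conv maps by the fc weights of `cls`.
    Valid for ResNet because global average pooling precedes the fc layer."""
    with torch.no_grad():
        model(x)                              # x: (1, 3, H, W) CEUS frame
    fmap = feats["last"][0]                   # (512, h, w) feature maps
    w = model.fc.weight[cls]                  # (512,) class weights
    cam = torch.relu((w[:, None, None] * fmap).sum(0))
    return cam / cam.max()                    # normalized heatmap, (h, w)
```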