
Real-time Monitoring of Urinary Stone Status During Shockwave Lithotripsy.

Noble PA

PubMed · Jul 24, 2025
To develop a standardized, real-time feedback system for monitoring urinary stone fragmentation during shockwave lithotripsy (SWL), thereby optimizing treatment efficacy and minimizing patient risk. A two-pronged approach was implemented to quantify stone fragmentation in C-arm X-ray images. First, the initial pre-treatment stone image was compared to subsequent images to measure stone area loss. Second, a Convolutional Neural Network (CNN) was trained to estimate the probability that an image contains a urinary stone. These two criteria were integrated to create a real-time signaling system capable of evaluating shockwave efficacy during SWL. The system was developed using data from 522 shockwave treatments encompassing 4,057 C-arm X-ray images. The combined area-loss metric and CNN output enabled consistent real-time assessment of stone fragmentation, providing actionable feedback to guide SWL in diverse clinical contexts. The proposed system offers a novel and reliable method for monitoring urinary stone fragmentation during SWL. By helping to balance treatment efficacy with patient safety, it holds significant promise for semi-automated SWL platforms, particularly in resource-limited or remote environments such as arid regions and extended space missions.
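
A minimal sketch of the two-pronged feedback idea described above: combine the measured stone area loss against the pre-treatment baseline with the CNN's stone-presence probability into a single operator signal. The function names and the 50%-loss / 0.5-probability thresholds are illustrative assumptions, not values reported in the paper.

```python
import numpy as np

def area_loss(baseline_mask: np.ndarray, current_mask: np.ndarray) -> float:
    """Fraction of the initial stone area no longer visible."""
    baseline = baseline_mask.sum()
    return 0.0 if baseline == 0 else 1.0 - current_mask.sum() / baseline

def swl_signal(baseline_mask, current_mask, cnn_stone_prob,
               loss_target=0.5, prob_floor=0.5) -> str:
    """Coarse real-time status for the operator (thresholds hypothetical)."""
    loss = area_loss(baseline_mask, current_mask)
    if loss >= loss_target or cnn_stone_prob < prob_floor:
        return "fragmentation adequate - consider stopping"
    return "continue shockwave delivery"

# Toy example: a stone mask shrinking from 400 to 150 pixels.
base = np.zeros((64, 64), dtype=bool); base[20:40, 20:40] = True
curr = np.zeros((64, 64), dtype=bool); curr[25:40, 25:35] = True
print(swl_signal(base, curr, cnn_stone_prob=0.42))
```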

TextSAM-EUS: Text Prompt Learning for SAM to Accurately Segment Pancreatic Tumor in Endoscopic Ultrasound

Pascal Spiegler, Taha Koleilat, Arash Harirpoush, Corey S. Miller, Hassan Rivaz, Marta Kersten-Oertel, Yiming Xiao

arXiv preprint · Jul 24, 2025
Pancreatic cancer carries a poor prognosis and relies on endoscopic ultrasound (EUS) for targeted biopsy and radiotherapy. However, the speckle noise, low contrast, and unintuitive appearance of EUS make segmentation of pancreatic tumors with fully supervised deep learning (DL) models both error-prone and dependent on large, expert-curated annotation datasets. To address these challenges, we present TextSAM-EUS, a novel, lightweight, text-driven adaptation of the Segment Anything Model (SAM) that requires no manual geometric prompts at inference. Our approach leverages text prompt learning (context optimization) through the BiomedCLIP text encoder in conjunction with a LoRA-based adaptation of SAM's architecture to enable automatic pancreatic tumor segmentation in EUS, tuning only 0.86% of the total parameters. On the public Endoscopic Ultrasound Database of the Pancreas, TextSAM-EUS with automatic prompts attains 82.69% Dice and 85.28% normalized surface distance (NSD), and with manual geometric prompts reaches 83.10% Dice and 85.70% NSD, outperforming both existing state-of-the-art (SOTA) supervised DL models and foundation models (e.g., SAM and its variants). As the first attempt to incorporate prompt learning in SAM-based medical image segmentation, TextSAM-EUS offers a practical option for efficient and robust automatic EUS segmentation. Code is available at https://github.com/HealthX-Lab/TextSAM-EUS
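
A hedged PyTorch sketch of the LoRA mechanism the abstract attributes to SAM's adaptation: the pretrained weight is frozen and only a low-rank update is trained, which is how tuning can touch well under 1% of parameters. The rank, scaling, and layer shown are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze the pretrained weights
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Hypothetical 768-d attention projection: only ~1% of it ends up trainable.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable fraction: {trainable / total:.2%}")
```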

AI-Driven Framework for Automated Detection of Kidney Stones in CT Images: Integration of Deep Learning Architectures and Transformers.

Alshenaifi R, Alqahtani Y, Ma S, Umapathy S

PubMed · Jul 24, 2025
Kidney stones, a prevalent urological condition associated with acute pain, require prompt and precise diagnosis for optimal therapeutic intervention. While computed tomography (CT) imaging remains the definitive diagnostic modality, manual interpretation of these images is a labor-intensive and error-prone process. This research introduces an artificial intelligence-based methodology for automated detection and classification of renal calculi in CT images. To identify CT images containing kidney stones, a comprehensive exploration of various ML and DL architectures, along with rigorous experimentation with diverse hyperparameters, was undertaken to refine model performance. The proposed workflow involves two key stages: (1) precise segmentation of pathological regions of interest (ROIs) using DL algorithms, and (2) binary classification of the segmented ROIs using both ML and DL models. The SwinTResNet model, optimized using the RMSProp algorithm with a learning rate of 0.0001, demonstrated optimal performance, achieving a training accuracy of 97.27% and a validation accuracy of 96.16% in the segmentation task. The Vision Transformer (ViT) architecture, when coupled with the ADAM optimizer and a learning rate of 0.0001, exhibited robust convergence and consistently achieved the highest classification performance, attaining a peak training accuracy of 96.63% and a validation accuracy of 95.67%. The results demonstrate the potential of this integrated framework to enhance diagnostic accuracy and efficiency, thereby supporting improved clinical decision-making in the management of kidney stones.
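
The two-stage workflow lends itself to a compact sketch: a segmentation model proposes pathological ROIs, and a classifier labels each segmented region. `seg_model` and `cls_model` below are placeholders standing in for the paper's SwinTResNet and ViT; shapes and thresholds are assumed for illustration.

```python
import numpy as np

def detect_stones(ct_slice: np.ndarray, seg_model, cls_model,
                  mask_thresh: float = 0.5) -> str:
    """Stage 1: segment candidate ROIs; Stage 2: binary-classify them."""
    prob_map = seg_model(ct_slice)        # assumed (H, W) map in [0, 1]
    mask = prob_map > mask_thresh
    if not mask.any():
        return "no candidate ROI"
    roi = ct_slice * mask                 # restrict to the segmented region
    return "stone" if cls_model(roi) > 0.5 else "no stone"
```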

Artificial intelligence for multi-time-point arterial phase contrast-enhanced MRI profiling to predict prognosis after transarterial chemoembolization in hepatocellular carcinoma.

Yao L, Adwan H, Bernatz S, Li H, Vogl TJ

PubMed · Jul 24, 2025
Contrast-enhanced magnetic resonance imaging (CE-MRI) monitoring across multiple time points is critical for optimizing hepatocellular carcinoma (HCC) prognosis during transarterial chemoembolization (TACE) treatment. The aim of this retrospective study was to develop and validate artificial intelligence (AI)-powered models utilizing multi-time-point arterial phase CE-MRI data for HCC prognosis stratification in TACE patients. A total of 543 individual arterial phase CE-MRI scans from 181 HCC patients were retrospectively collected in this study. All patients underwent TACE and longitudinal arterial phase CE-MRI assessments at three time points: prior to treatment, and following the first and second TACE sessions. Among them, 110 patients received TACE monotherapy, while the remaining 71 patients underwent TACE in combination with microwave ablation (MWA). All images were subjected to standardized preprocessing procedures. We developed an end-to-end deep learning model, ProgSwin-UNETR, based on the Swin Transformer architecture, to perform four-class prognosis stratification directly from input imaging data. The model was trained using multi-time-point arterial phase CE-MRI data and evaluated via fourfold cross-validation. Classification performance was assessed using the area under the receiver operating characteristic curve (AUC). For comparative analysis, we benchmarked performance against traditional radiomics-based classifiers and the mRECIST criteria. Prognostic utility was further assessed using Kaplan-Meier (KM) survival curves. Additionally, multivariate Cox proportional hazards regression was performed as a post hoc analysis to evaluate the independent and complementary prognostic value of the model outputs and clinical variables. Grad-CAM++ was applied to visualize the imaging regions contributing most to model prediction. The ProgSwin-UNETR model achieved an accuracy of 0.86 and an AUC of 0.92 (95% CI: 0.90-0.95) for the four-class prognosis stratification task, outperforming radiomic models across all risk groups. Furthermore, KM survival analyses were performed using three different approaches (AI model, radiomics-based classifiers, and mRECIST criteria) to stratify patients by risk. Of the three approaches, only the AI-based ProgSwin-UNETR model achieved statistically significant risk stratification across the entire cohort and in both TACE-alone and TACE + MWA subgroups (p < 0.005). In contrast, the mRECIST and radiomics models did not yield significant survival differences across subgroups (p > 0.05). Multivariate Cox regression analysis further demonstrated that the model was a robust independent prognostic factor (p = 0.01), effectively stratifying patients into four distinct risk groups (Class 0 to Class 3) with Log(HR) values of 0.97, 0.51, -0.53, and -0.92, respectively. Additionally, Grad-CAM++ visualizations highlighted critical regional features contributing to prognosis prediction, providing interpretability of the model. ProgSwin-UNETR accurately stratifies the risk groups of HCC patients undergoing TACE therapy and can further be applied for personalized prognosis prediction.
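
A hedged sketch of the survival-analysis pattern described above: stratify patients by the model's four predicted classes, compare the resulting Kaplan-Meier curves with a log-rank test, and fit a Cox model to check independent prognostic value. A DataFrame `df` with 'time', 'event', and 'pred_class' columns is assumed, and lifelines is one common choice of tooling, not necessarily the authors'.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import multivariate_logrank_test

def stratified_km(df: pd.DataFrame) -> float:
    """Plot one KM curve per predicted class and return the log-rank p-value."""
    kmf = KaplanMeierFitter()
    for cls, grp in df.groupby("pred_class"):
        kmf.fit(grp["time"], grp["event"], label=f"Class {cls}")
        kmf.plot_survival_function()
    test = multivariate_logrank_test(df["time"], df["pred_class"], df["event"])
    return test.p_value  # the paper reports significance at p < 0.005

def cox_independence(df: pd.DataFrame) -> pd.DataFrame:
    """Cox PH fit with pred_class alongside whatever clinical covariates df holds."""
    return CoxPHFitter().fit(df, duration_col="time", event_col="event").summary
```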

A Multi-Modal Pelvic MRI Dataset for Deep Learning-Based Pelvic Organ Segmentation in Endometriosis.

Liang X, Alpuing Radilla LA, Khalaj K, Dawoodally H, Mokashi C, Guan X, Roberts KE, Sheth SA, Tammisetti VS, Giancardo L

PubMed · Jul 24, 2025
Endometriosis affects approximately 190 million females of reproductive age worldwide. Magnetic Resonance Imaging (MRI) has been recommended as the primary non-invasive diagnostic method for endometriosis. This study presents new female pelvic MRI multicenter datasets for endometriosis and reports the baseline segmentation performance of two auto-segmentation pipelines: the self-configuring nnU-Net and RAovSeg, a custom network. Multi-sequence endometriosis MRI scans were collected from two clinical institutions. A multicenter dataset of 51 subjects with manual labels for multiple pelvic structures from three raters was used to assess interrater agreement. A second single-center dataset of 81 subjects with labels for multiple pelvic structures from one rater was used to develop the ovary auto-segmentation pipelines. Uterus and ovary segmentations are available for all subjects; endometrioma segmentation is available for all subjects in whom it is detectable in the image. This study highlights the challenges of manual ovary segmentation in endometriosis MRI and emphasizes the need for an auto-segmentation method. The dataset is publicly available for further research in pelvic MRI auto-segmentation to support endometriosis research.
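
One plausible way to quantify the three-rater agreement mentioned above is pairwise Dice overlap between manual masks of the same structure; a minimal sketch follows, with mask arrays and rater names assumed for illustration (the paper's exact agreement metric is not specified here).

```python
import itertools
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two boolean masks of the same structure."""
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * inter / denom

def pairwise_dice(masks: dict) -> dict:
    """masks: rater name -> boolean segmentation; returns Dice per rater pair."""
    return {(r1, r2): dice(masks[r1], masks[r2])
            for r1, r2 in itertools.combinations(masks, 2)}
```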

Minimal Ablative Margin Quantification Using Hepatic Arterial Versus Portal Venous Phase CT for Colorectal Metastases Segmentation: A Dual-center, Retrospective Analysis.

Siddiqi NS, Lin YM, Marques Silva JA, Laimer G, Schullian P, Scharll Y, Dunker AM, O'Connor CS, Jones KA, Brock KK, Bale R, Odisio BC, Paolucci I

PubMed · Jul 24, 2025
To compare the predictive value of minimal ablative margin (MAM) quantification using tumor segmentation on intraprocedural contrast-enhanced hepatic arterial (HAP) versus portal venous phase (PVP) CT on local outcomes following percutaneous thermal ablation of colorectal liver metastases (CRLM). This dual-center retrospective study included patients undergoing thermal ablation of CRLM with intraprocedural preablation and postablation contrast-enhanced CT imaging between 2009 and 2021. Tumors were segmented in both HAP and PVP CT phases using an artificial intelligence-based auto-segmentation model and reviewed by a trained radiologist. The MAM was quantified using a biomechanical deformable image registration process. The area under the receiver operating characteristic curve (AUROC) was used to compare the prognostic value for predicting local tumor progression (LTP). Among 81 patients (mean age, 60 ± 13 years; 53 men), 151 CRLMs were included. During 29.4 months of median follow-up, LTP was noted in 24/151 (15.9%). Median tumor volumes on HAP and PVP CT were 1.7 mL and 1.2 mL, respectively, with respective median MAMs of 2.3 and 4.0 mm (both P < 0.001). The AUROC for 1-year LTP prediction was 0.78 (95% CI: 0.70-0.85) on HAP and 0.84 (95% CI: 0.78-0.91) on PVP (P = 0.002). During CT-guided percutaneous thermal ablation, MAM measured based on tumors segmented on PVP images conferred a higher predictive accuracy of ablation outcomes among CRLM patients than those segmented on HAP images, supporting the use of PVP rather than HAP images for segmentation during ablation of CRLMs.
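
A hedged sketch of the outcome comparison above: compute the AUROC of the minimal ablative margin for predicting 1-year LTP, once per contrast phase. Margins are negated so that smaller margins score as higher risk; the toy data and variable names are illustrative, not the study's measurements.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def mam_auroc(mam_mm: np.ndarray, ltp_within_1yr: np.ndarray) -> float:
    """Smaller margins should predict progression, hence the negation."""
    return roc_auc_score(ltp_within_1yr, -mam_mm)

# Toy data standing in for per-tumor margins on HAP vs. PVP segmentations.
rng = np.random.default_rng(0)
ltp = rng.integers(0, 2, 151)
hap = rng.normal(2.3, 2.0, 151) - ltp        # weaker separation
pvp = rng.normal(4.0, 2.0, 151) - 2.5 * ltp  # stronger separation
print(f"HAP AUROC: {mam_auroc(hap, ltp):.2f}, PVP AUROC: {mam_auroc(pvp, ltp):.2f}")
```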

Malignancy classification of thyroid incidentalomas using 18F-fluorodeoxy-d-glucose PET/computed tomography-derived radiomics.

Yeghaian M, Piek MW, Bartels-Rutten A, Abdelatty MA, Herrero-Huertas M, Vogel WV, de Boer JP, Hartemink KJ, Bodalal Z, Beets-Tan RGH, Trebeschi S, van der Ploeg IMC

PubMed · Jul 24, 2025
Thyroid incidentalomas (TIs) are incidental thyroid lesions detected on fluorodeoxy-d-glucose (18F-FDG) PET/computed tomography (PET/CT) scans. This study aims to investigate the role of noninvasive PET/CT-derived radiomic features in characterizing 18F-FDG PET/CT TIs and distinguishing benign from malignant thyroid lesions in oncological patients. We included 46 patients with PET/CT TIs who underwent thyroid ultrasound and thyroid surgery at our oncological referral hospital. Radiomic features were extracted from regions of interest (ROIs) in both PET and CT images and analyzed for their association with thyroid cancer and their predictive ability. The TIs were graded using the ultrasound TIRADS classification, and histopathological results served as the reference standard. Univariate and multivariate analyses were performed using features from each modality individually and combined. The performance of radiomic features was compared to the TIRADS classification. Among the 46 included patients, 36 patients (78%) had malignant thyroid lesions, while 10 patients (22%) had benign lesions. The combined run length nonuniformity radiomic feature from PET and CT cubical ROIs demonstrated the highest area under the curve (AUC) of 0.88 (P < 0.05), with a negative correlation with malignancy. This performance was comparable to the TIRADS classification (AUC: 0.84, P < 0.05), which showed a positive correlation with thyroid cancer. Multivariate analysis showed higher predictive performance using CT-derived radiomics (AUC: 0.86 ± 0.13) compared to TIRADS (AUC: 0.80 ± 0.08). This study highlights the potential of 18F-FDG PET/CT-derived radiomics to distinguish benign from malignant thyroid lesions. Further studies with larger cohorts and deep learning-based methods could obtain more robust results.
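
A minimal sketch of the univariate analysis above: score a single radiomic feature (e.g., run length nonuniformity) against histopathology labels with an AUC. Feature extraction itself (e.g., via a toolkit such as pyradiomics) is omitted; the orientation flip reflects that a feature can correlate negatively with malignancy, as reported.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def univariate_auc(feature: np.ndarray, malignant: np.ndarray) -> float:
    """AUC of one radiomic feature, invariant to the direction of correlation."""
    auc = roc_auc_score(malignant, feature)
    return max(auc, 1.0 - auc)
```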

Interpretable Deep Learning Approaches for Reliable GI Image Classification: A Study with the HyperKvasir Dataset

Wahid, S. B., Rothy, Z. T., News, R. K., Rieyan, S. A.

medRxiv preprint · Jul 23, 2025
Deep learning has emerged as a promising tool for automating gastrointestinal (GI) disease diagnosis. However, multi-class GI disease classification remains underexplored. This study addresses this gap by presenting a framework that uses advanced models like InceptionNetV3 and ResNet50, combined with boosting algorithms (XGB, LGBM), to classify lower GI abnormalities. InceptionNetV3 with XGB achieved the best recall of 0.81 and an F1 score of 0.90. To assist clinicians in understanding model decisions, the Grad-CAM technique, a form of explainable AI, was employed to highlight the critical regions influencing predictions, fostering trust in these systems. This approach significantly improves both the accuracy and reliability of GI disease diagnosis.
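
A hedged sketch of the hybrid design above: a pretrained CNN serves as a frozen feature extractor and a gradient-boosting classifier is fit on the pooled features. torchvision's InceptionV3 stands in for the paper's InceptionNetV3, and the XGBoost hyperparameters are assumptions; dataset loading is omitted.

```python
import torch
from torchvision.models import inception_v3, Inception_V3_Weights
from xgboost import XGBClassifier

weights = Inception_V3_Weights.DEFAULT
backbone = inception_v3(weights=weights)
backbone.fc = torch.nn.Identity()   # expose the 2048-d pooled features
backbone.eval()

@torch.no_grad()
def features(images: torch.Tensor) -> torch.Tensor:
    """images: (N, 3, 299, 299), normalized per weights.transforms()."""
    return backbone(images)

# X_train: (N, 2048) CNN features; y_train: class index per GI finding.
clf = XGBClassifier(n_estimators=300, learning_rate=0.1)
# clf.fit(features(train_images).numpy(), y_train)
```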