Automated characterization of abdominal MRI exams using deep learning.

Kim J, Chae A, Duda J, Borthakur A, Rader DJ, Gee JC, Kahn CE, Witschey WR, Sagreiya H

PubMed · Jul 25 2025
Advances in magnetic resonance imaging (MRI) have revolutionized disease detection and treatment planning. However, the growing volume and complexity of MRI data, along with heterogeneity in imaging protocols, scanner technology, and labeling practices, create a need for standardized tools to automatically identify and characterize key imaging attributes. Such tools are essential for large-scale, multi-institutional studies that rely on harmonized data to train robust machine learning models. In this study, we developed convolutional neural networks (CNNs) to automatically classify three core attributes of abdominal MRI: pulse sequence type, imaging orientation, and contrast enhancement status. Three distinct CNNs with similar backbone architectures were trained to classify single image slices into one of 12 pulse sequences, 4 orientations, or 2 contrast classes. The models achieved high classification accuracies of 99.51%, 99.87%, and 99.99% for pulse sequence, orientation, and contrast, respectively. We applied Grad-CAM to visualize image regions influencing pulse sequence predictions and to highlight relevant anatomical features. To enhance performance, we implemented a majority voting approach to aggregate slice-level predictions, achieving 100% accuracy at the volume level for all tasks. External validation using the Duke Liver Dataset demonstrated strong generalizability; after adjusting for class label mismatch, volume-level accuracies exceeded 96.9% across all classification tasks.
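The slice-to-volume aggregation described here is a plain majority vote. A minimal sketch follows; the label names are illustrative, since the abstract does not enumerate the 12 pulse-sequence classes:

```python
from collections import Counter

def volume_label(slice_predictions):
    """Aggregate per-slice class predictions into one volume-level label
    by majority vote, as described in the abstract."""
    label, _ = Counter(slice_predictions).most_common(1)[0]
    return label

# Hypothetical per-slice pulse-sequence predictions for a single volume
slices = ["T2_HASTE", "T2_HASTE", "T1_VIBE", "T2_HASTE", "T2_HASTE"]
print(volume_label(slices))  # -> T2_HASTE
```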

3D-WDA-PMorph: Efficient 3D MRI/TRUS Prostate Registration using Transformer-CNN Network and Wavelet-3D-Depthwise-Attention.

Mahmoudi H, Ramadan H, Riffi J, Tairi H

PubMed · Jul 25 2025
Multimodal image registration is crucial in medical imaging, particularly for aligning Magnetic Resonance Imaging (MRI) and Transrectal Ultrasound (TRUS) data, which are widely used in prostate cancer diagnosis and treatment planning. However, this task presents significant challenges due to the inherent differences between these imaging modalities, including variations in resolution, contrast, and noise. Conventional Convolutional Neural Network (CNN)-based registration methods, while effective at extracting local features, often struggle to capture global contextual information and fail to adapt to complex deformations in multimodal data. Conversely, Transformer-based methods excel at capturing long-range dependencies and hierarchical features but face difficulties in integrating fine-grained local details, which are essential for accurate spatial alignment. To address these limitations, we propose a novel 3D image registration framework that combines the strengths of both paradigms. Our method employs a Swin Transformer (ST)-CNN encoder-decoder architecture, with a key innovation focused on enhancing the skip connection stages. Specifically, we introduce an innovative module named Wavelet-3D-Depthwise-Attention (WDA). The WDA module leverages an attention mechanism that integrates wavelet transforms for multi-scale spatial-frequency representation and 3D depthwise convolution to improve computational efficiency and modality fusion. Experimental evaluations on clinical MRI/TRUS datasets confirm that the proposed method achieves a median Dice score of 0.94 and a target registration error of 0.85, indicating an improvement in registration accuracy and robustness over existing state-of-the-art (SOTA) methods. The WDA-enhanced skip connections significantly empower the registration network to preserve critical anatomical details, making our method a promising advancement in prostate multimodal registration. Furthermore, the proposed framework shows strong potential for generalization to other image registration tasks.
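To make the skip-connection gating idea concrete, here is a minimal PyTorch sketch of a depthwise attention gate over 3D skip features. It reproduces only the 3D depthwise convolution and attention-gating ingredients of the WDA idea, omits the wavelet branch, and uses illustrative names rather than the authors' implementation:

```python
import torch
import torch.nn as nn

class DepthwiseAttention3D(nn.Module):
    """Attention gate for 3D skip connections built from a depthwise
    Conv3d (groups == channels) followed by a pointwise mixing layer."""
    def __init__(self, channels: int):
        super().__init__()
        self.depthwise = nn.Conv3d(channels, channels, kernel_size=3,
                                   padding=1, groups=channels)
        self.pointwise = nn.Conv3d(channels, channels, kernel_size=1)
        self.gate = nn.Sigmoid()

    def forward(self, skip: torch.Tensor) -> torch.Tensor:
        # Compute a per-voxel attention map and re-weight the skip features
        attn = self.gate(self.pointwise(self.depthwise(skip)))
        return skip * attn

x = torch.randn(1, 32, 16, 64, 64)  # (batch, channels, depth, height, width)
print(DepthwiseAttention3D(32)(x).shape)  # torch.Size([1, 32, 16, 64, 64])
```

Depthwise convolution keeps the parameter count linear in the channel count, which is the efficiency argument the abstract makes for the WDA module.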

Minimal Ablative Margin Quantification Using Hepatic Arterial Versus Portal Venous Phase CT for Colorectal Metastases Segmentation: A Dual-center, Retrospective Analysis.

Siddiqi NS, Lin YM, Marques Silva JA, Laimer G, Schullian P, Scharll Y, Dunker AM, O'Connor CS, Jones KA, Brock KK, Bale R, Odisio BC, Paolucci I

PubMed · Jul 24 2025
To compare the predictive value of minimal ablative margin (MAM) quantification using tumor segmentation on intraprocedural contrast-enhanced hepatic arterial phase (HAP) versus portal venous phase (PVP) CT on local outcomes following percutaneous thermal ablation of colorectal liver metastases (CRLM). This dual-center retrospective study included patients undergoing thermal ablation of CRLM with intraprocedural preablation and postablation contrast-enhanced CT imaging between 2009 and 2021. Tumors were segmented in both HAP and PVP CT phases using an artificial intelligence-based auto-segmentation model and reviewed by a trained radiologist. The MAM was quantified using a biomechanical deformable image registration process. The area under the receiver operating characteristic curve (AUROC) was used to compare the prognostic value for predicting local tumor progression (LTP). Among 81 patients (age 60 ± 13 years; 53 men), 151 CRLMs were included. During 29.4 months of median follow-up, LTP was noted in 24/151 (15.9%). Median tumor volumes on HAP and PVP CT were 1.7 mL and 1.2 mL, respectively, with respective median MAMs of 2.3 and 4.0 mm (both P < 0.001). The AUROC for 1-year LTP prediction was 0.78 (95% CI: 0.70-0.85) on HAP and 0.84 (95% CI: 0.78-0.91) on PVP (P = 0.002). During CT-guided percutaneous thermal ablation, MAM quantified from tumors segmented on PVP images predicted ablation outcomes in CRLM patients more accurately than MAM from tumors segmented on HAP images, supporting the use of PVP rather than HAP images for segmentation during ablation of CRLMs.
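The HAP-versus-PVP comparison reduces to computing an AUROC for LTP prediction from each phase's margins. A minimal sketch on synthetic stand-in data (the values below are illustrative, not the study's):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Synthetic stand-in data: 151 tumors, binary 1-year LTP labels, and
# minimal ablative margins (mm) measured on each CT phase
rng = np.random.default_rng(0)
ltp = rng.integers(0, 2, 151)                   # 1 = local tumor progression
mam_hap = rng.normal(2.3, 1.0, 151) - 1.0 * ltp
mam_pvp = rng.normal(4.0, 1.0, 151) - 2.0 * ltp

# Smaller margins predict progression, so negate the margin before scoring
print("HAP AUROC:", roc_auc_score(ltp, -mam_hap))
print("PVP AUROC:", roc_auc_score(ltp, -mam_pvp))
```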

Malignancy classification of thyroid incidentalomas using 18F-fluorodeoxy-d-glucose PET/computed tomography-derived radiomics.

Yeghaian M, Piek MW, Bartels-Rutten A, Abdelatty MA, Herrero-Huertas M, Vogel WV, de Boer JP, Hartemink KJ, Bodalal Z, Beets-Tan RGH, Trebeschi S, van der Ploeg IMC

PubMed · Jul 24 2025
Thyroid incidentalomas (TIs) are incidental thyroid lesions detected on 18F-fluorodeoxy-d-glucose (18F-FDG) PET/computed tomography (PET/CT) scans. This study aims to investigate the role of noninvasive PET/CT-derived radiomic features in characterizing 18F-FDG PET/CT TIs and distinguishing benign from malignant thyroid lesions in oncological patients. We included 46 patients with PET/CT TIs who underwent thyroid ultrasound and thyroid surgery at our oncological referral hospital. Radiomic features were extracted from regions of interest (ROIs) in both PET and CT images and analyzed for their association with thyroid cancer and their predictive ability. The TIs were graded using the ultrasound TIRADS classification, and histopathological results served as the reference standard. Univariate and multivariate analyses were performed using features from each modality individually and combined. The performance of radiomic features was compared to the TIRADS classification. Among the 46 included patients, 36 (78%) had malignant thyroid lesions, while 10 (22%) had benign lesions. The combined run length nonuniformity radiomic feature from PET and CT cubical ROIs demonstrated the highest area under the curve (AUC) of 0.88 (P < 0.05), with a negative correlation with malignancy. This performance was comparable to the TIRADS classification (AUC: 0.84, P < 0.05), which showed a positive correlation with thyroid cancer. Multivariate analysis showed higher predictive performance using CT-derived radiomics (AUC: 0.86 ± 0.13) compared to TIRADS (AUC: 0.80 ± 0.08). This study highlights the potential of 18F-FDG PET/CT-derived radiomics to distinguish benign from malignant thyroid lesions. Further studies with larger cohorts and deep learning-based methods could yield more robust results.
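A minimal pyradiomics sketch for extracting the run length nonuniformity (GLRLM) feature named above, assuming NIfTI image and mask files; the file paths are placeholders, and the extractor's default settings will not exactly match the study's pipeline:

```python
from radiomics import featureextractor

# Configure pyradiomics to compute only the GLRLM run length nonuniformity
extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeaturesByName(glrlm=["RunLengthNonUniformity"])

# Placeholder paths: a PET (or CT) volume and its thyroid ROI mask
features = extractor.execute("pet_image.nii.gz", "roi_mask.nii.gz")
print(features["original_glrlm_RunLengthNonUniformity"])
```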

Artificial intelligence for multi-time-point arterial phase contrast-enhanced MRI profiling to predict prognosis after transarterial chemoembolization in hepatocellular carcinoma.

Yao L, Adwan H, Bernatz S, Li H, Vogl TJ

PubMed · Jul 24 2025
Contrast-enhanced magnetic resonance imaging (CE-MRI) monitoring across multiple time points is critical for optimizing hepatocellular carcinoma (HCC) prognosis during transarterial chemoembolization (TACE) treatment. The aim of this retrospective study is to develop and validate an artificial intelligence (AI)-powered model utilizing multi-time-point arterial phase CE-MRI data for HCC prognosis stratification in TACE patients. A total of 543 individual arterial phase CE-MRI scans from 181 HCC patients were retrospectively collected in this study. All patients underwent TACE and longitudinal arterial phase CE-MRI assessments at three time points: prior to treatment, and following the first and second TACE sessions. Among them, 110 patients received TACE monotherapy, while the remaining 71 patients underwent TACE in combination with microwave ablation (MWA). All images were subjected to standardized preprocessing procedures. We developed an end-to-end deep learning model, ProgSwin-UNETR, based on the Swin Transformer architecture, to perform four-class prognosis stratification directly from input imaging data. The model was trained using multi-time-point arterial phase CE-MRI data and evaluated via fourfold cross-validation. Classification performance was assessed using the area under the receiver operating characteristic curve (AUC). For comparative analysis, we benchmarked performance against traditional radiomics-based classifiers and the mRECIST criteria. Prognostic utility was further assessed using Kaplan-Meier (KM) survival curves. Additionally, multivariate Cox proportional hazards regression was performed as a post hoc analysis to evaluate the independent and complementary prognostic value of the model outputs and clinical variables. Grad-CAM++ was applied to visualize the imaging regions contributing most to model prediction. The ProgSwin-UNETR model achieved an accuracy of 0.86 and an AUC of 0.92 (95% CI: 0.90-0.95) for the four-class prognosis stratification task, outperforming radiomic models across all risk groups. Furthermore, KM survival analyses were performed using three different approaches (the AI model, radiomics-based classifiers, and the mRECIST criteria) to stratify patients by risk. Of the three approaches, only the AI-based ProgSwin-UNETR model achieved statistically significant risk stratification across the entire cohort and in both TACE-alone and TACE + MWA subgroups (p < 0.005). In contrast, the mRECIST and radiomics models did not yield significant survival differences across subgroups (p > 0.05). Multivariate Cox regression analysis further demonstrated that the model was a robust independent prognostic factor (p = 0.01), effectively stratifying patients into four distinct risk groups (Class 0 to Class 3) with log(HR) values of 0.97, 0.51, -0.53, and -0.92, respectively. Additionally, Grad-CAM++ visualizations highlighted critical regional features contributing to prognosis prediction, providing interpretability of the model. ProgSwin-UNETR accurately predicts the risk group of HCC patients undergoing TACE therapy and can further be applied for personalized prognosis prediction.
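A minimal sketch of the kind of multivariate Cox analysis described, using the lifelines library on a hypothetical toy table; all column names and values are illustrative, not the study's data:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical table: model-predicted risk class (0-3), one clinical
# covariate, time to recurrence (months), and event indicator
df = pd.DataFrame({
    "risk_class": [0, 1, 2, 3, 0, 1, 2, 3, 1, 2],
    "age":        [61, 55, 70, 66, 59, 63, 72, 68, 57, 64],
    "months":     [8, 12, 20, 30, 10, 15, 18, 26, 11, 24],
    "recurred":   [1, 1, 0, 0, 1, 0, 1, 0, 1, 0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="recurred")
cph.print_summary()  # per-covariate coef = log(HR), analogous to the reported values
```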

TextSAM-EUS: Text Prompt Learning for SAM to Accurately Segment Pancreatic Tumor in Endoscopic Ultrasound

Pascal Spiegler, Taha Koleilat, Arash Harirpoush, Corey S. Miller, Hassan Rivaz, Marta Kersten-Oertel, Yiming Xiao

arXiv preprint · Jul 24 2025
Pancreatic cancer carries a poor prognosis and relies on endoscopic ultrasound (EUS) for targeted biopsy and radiotherapy. However, the speckle noise, low contrast, and unintuitive appearance of EUS make segmentation of pancreatic tumors with fully supervised deep learning (DL) models both error-prone and dependent on large, expert-curated annotation datasets. To address these challenges, we present TextSAM-EUS, a novel, lightweight, text-driven adaptation of the Segment Anything Model (SAM) that requires no manual geometric prompts at inference. Our approach leverages text prompt learning (context optimization) through the BiomedCLIP text encoder in conjunction with a LoRA-based adaptation of SAM's architecture to enable automatic pancreatic tumor segmentation in EUS, tuning only 0.86% of the total parameters. On the public Endoscopic Ultrasound Database of the Pancreas, TextSAM-EUS with automatic prompts attains 82.69% Dice and 85.28% normalized surface distance (NSD), and with manual geometric prompts reaches 83.10% Dice and 85.70% NSD, outperforming both existing state-of-the-art (SOTA) supervised DL models and foundation models (e.g., SAM and its variants). As the first attempt to incorporate prompt learning in SAM-based medical image segmentation, TextSAM-EUS offers a practical option for efficient and robust automatic EUS segmentation.
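For readers unfamiliar with LoRA, the following PyTorch sketch shows the idea behind tuning only a small fraction of parameters. It is a generic low-rank adapter on a single linear layer, not the authors' SAM integration, and the 768-dimension layer size is illustrative:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Freeze a pretrained linear layer and add a trainable low-rank
    update scale * (B @ A), so only A and B receive gradients."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768), rank=4)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable fraction: {trainable / total:.2%}")  # small, cf. the 0.86% reported
```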

AI-Driven Framework for Automated Detection of Kidney Stones in CT Images: Integration of Deep Learning Architectures and Transformers.

Alshenaifi R, Alqahtani Y, Ma S, Umapathy S

PubMed · Jul 24 2025
Kidney stones, a prevalent urological condition associated with acute pain, require prompt and precise diagnosis for optimal therapeutic intervention. While computed tomography (CT) imaging remains the definitive diagnostic modality, manual interpretation of these images is a labor-intensive and error-prone process. This research introduces an artificial intelligence-based methodology for automated detection and classification of renal calculi within CT images. To identify CT images with kidney stones, a comprehensive exploration of various ML and DL architectures, along with rigorous experimentation with diverse hyperparameters, was undertaken to refine the model's performance. The proposed workflow involves two key stages: (1) precise segmentation of pathological regions of interest (ROIs) using DL algorithms, and (2) binary classification of the segmented ROIs using both ML and DL models. The SwinTResNet model, optimized using the RMSProp algorithm with a learning rate of 0.0001, demonstrated optimal performance, achieving a training accuracy of 97.27% and a validation accuracy of 96.16% in the segmentation task. The Vision Transformer (ViT) architecture, when coupled with the ADAM optimizer and a learning rate of 0.0001, exhibited robust convergence and consistently achieved the highest performance metrics. Specifically, the model attained a peak training accuracy of 96.63% and a validation accuracy of 95.67%. The results demonstrate the potential of this integrated framework to enhance diagnostic accuracy and efficiency, thereby supporting improved clinical decision-making in the management of kidney stones.
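The two-stage workflow can be summarized in a few lines. A minimal sketch with placeholder callables standing in for the trained networks:

```python
import numpy as np

def detect_kidney_stone(ct_slice, segmenter, classifier):
    """Two-stage workflow from the abstract: (1) segment the pathological
    ROI, (2) classify the masked region. `segmenter` and `classifier`
    stand in for the trained SwinTResNet and ViT models."""
    mask = segmenter(ct_slice)   # stage 1: binary ROI mask
    roi = ct_slice * mask        # keep only the segmented region
    return classifier(roi)       # stage 2: binary stone / no-stone label

# Toy stand-ins so the sketch runs end to end
segmenter = lambda img: (img > img.mean()).astype(img.dtype)
classifier = lambda roi: int(roi.sum() > 0)
print(detect_kidney_stone(np.random.rand(512, 512), segmenter, classifier))
```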

MRI-Based Models Using Habitat Imaging for Predicting Distinct Vascular Patterns in Hepatocellular Carcinoma.

Xie Y, Zhang T, Liu Z, Yan Z, Yu Y, Qu Q, Gu C, Ding C, Zhang X

PubMed · Jul 24 2025
To develop two distinct models for predicting microvascular invasion (MVI) and vessels encapsulating tumor clusters (VETC) based on habitat imaging, and to integrate these models for prognosis assessment. In this multicenter retrospective study, patients from two institutions were enrolled and categorized for MVI (n=295) and VETC (n=276) prediction. Tumor and peritumoral regions on hepatobiliary phase images were segmented into subregions, from which all relevant features were extracted. The MVI and VETC predictive models were constructed by analyzing these features with various machine learning algorithms and classifying patients into high-risk and low-risk groups. Cox regression analysis was utilized to identify risk factors for early recurrence. The MVI and VETC prediction models demonstrated excellent performance in both the training and external validation cohorts (AUC: 0.961 and 0.838 for MVI; 0.931 and 0.820 for VETC). Based on model predictions, patients were classified into a high-risk group (high-risk MVI/high-risk VETC), a medium-risk group (high-risk MVI/low-risk VETC or low-risk MVI/high-risk VETC), and a low-risk group (low-risk MVI/low-risk VETC). Multivariable Cox regression analysis revealed that risk group, number of tumors, and gender were independent predictors of early recurrence. Models based on habitat imaging can be used for the preoperative, noninvasive prediction of MVI and VETC, offering valuable stratification and diagnostic insights for HCC patients.
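The grouping rule described above maps the two binary model outputs onto three risk strata. A minimal sketch:

```python
def risk_group(mvi_high: bool, vetc_high: bool) -> str:
    """Combine the MVI and VETC model predictions into the three risk
    groups defined in the abstract."""
    if mvi_high and vetc_high:
        return "high-risk"
    if mvi_high or vetc_high:
        return "medium-risk"
    return "low-risk"

print(risk_group(True, False))  # -> medium-risk
```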

A Multi-Modal Pelvic MRI Dataset for Deep Learning-Based Pelvic Organ Segmentation in Endometriosis.

Liang X, Alpuing Radilla LA, Khalaj K, Dawoodally H, Mokashi C, Guan X, Roberts KE, Sheth SA, Tammisetti VS, Giancardo L

PubMed · Jul 24 2025
Endometriosis affects approximately 190 million females of reproductive age worldwide. Magnetic resonance imaging (MRI) has been recommended as the primary non-invasive diagnostic method for endometriosis. This study presents new female pelvic MRI multicenter datasets for endometriosis and reports the baseline segmentation performance of two auto-segmentation pipelines: the self-configuring nnU-Net and RAovSeg, a custom network. Multi-sequence endometriosis MRI scans were collected from two clinical institutions. A multicenter dataset of 51 subjects with manual labels for multiple pelvic structures from three raters was used to assess interrater agreement. A second single-center dataset of 81 subjects with labels for multiple pelvic structures from one rater was used to develop the ovary auto-segmentation pipelines. Uterus and ovary segmentations are available for all subjects; endometrioma segmentation is available for all subjects in whom it is detectable in the image. This study highlights the challenges of manual ovary segmentation in endometriosis MRI and emphasizes the need for an auto-segmentation method. The dataset is publicly available for further research in pelvic MRI auto-segmentation to support endometriosis research.
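Interrater agreement for segmentations of this kind is typically assessed with pairwise Dice overlap; the abstract does not name the metric, so the following is a generic sketch on hypothetical masks from the three raters:

```python
from itertools import combinations
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Hypothetical binary ovary masks from the three raters for one subject
rng = np.random.default_rng(2)
masks = [rng.integers(0, 2, (64, 64), dtype=bool) for _ in range(3)]
for (i, m1), (j, m2) in combinations(enumerate(masks), 2):
    print(f"rater {i} vs rater {j}: Dice = {dice(m1, m2):.3f}")
```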
