Deep Learning-Based 3D and 2D Approaches for Skeletal Muscle Segmentation on Low-Dose CT Images.

Timpano G, Veltri P, Vizza P, Cascini GL, Manti F

PubMed · Aug 27 2025
Automated segmentation of skeletal muscle from computed tomography (CT) images is essential for large-scale quantitative body composition analysis. However, manual segmentation is time-consuming and impractical for routine or high-throughput use. This study presents a systematic comparison of two-dimensional (2D) and three-dimensional (3D) deep learning architectures for segmenting skeletal muscle at the anatomically standardized level of the third lumbar vertebra (L3) in low-dose computed tomography (LDCT) scans. We implemented and evaluated the DeepLabv3+ (2D) and UNet3+ (3D) architectures on a curated dataset of 537 LDCT scans, applying preprocessing protocols, L3 slice selection, and region of interest extraction. Model performance was assessed using a comprehensive set of metrics, including the Dice similarity coefficient (DSC) and the 95th percentile Hausdorff distance (HD95). DeepLabv3+ achieved the highest segmentation accuracy (DSC = 0.982 ± 0.010, HD95 = 1.04 ± 0.46 mm), while UNet3+ showed competitive performance (DSC = 0.967 ± 0.013, HD95 = 1.27 ± 0.58 mm) with 26 times fewer parameters (1.27 million vs. 33.6 million) and a shorter inference time. Both models exceeded or matched results reported in the recent CT-based muscle segmentation literature. This work offers practical insights into architecture selection for automated LDCT-based muscle segmentation workflows, with a focus on the L3 vertebral level, which remains the gold standard in muscle quantification protocols.
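
For readers who want to reproduce the two headline metrics, the sketch below computes DSC and an HD95 approximation for a pair of binary masks with NumPy/SciPy. It is a generic illustration, not the authors' evaluation code; the surface extraction step is simplified to voxel-to-mask distances, and the `spacing` argument is an assumed pixel spacing in mm.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def hd95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0)) -> float:
    """Approximate 95th percentile symmetric Hausdorff distance in mm.

    Uses voxel-to-mask distance maps rather than explicit surface extraction,
    a common simplification for single-slice evaluation.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    dist_to_gt = distance_transform_edt(~gt, sampling=spacing)     # distance of each voxel to the GT mask
    dist_to_pred = distance_transform_edt(~pred, sampling=spacing)  # distance of each voxel to the prediction
    all_dists = np.hstack([dist_to_gt[pred], dist_to_pred[gt]])
    return float(np.percentile(all_dists, 95)) if all_dists.size else 0.0
```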

Validation of an Automated CT Image Analysis in the Prevention of Urinary Stones with Hydration Trial.

Tasian GE, Maalouf NM, Harper JD, Sivalingam S, Logan J, Al-Khalidi HR, Lieske JC, Selman-Fermin A, Desai AC, Lai H, Kirkali Z, Scales CD, Fan Y

PubMed · Aug 26 2025
Introduction and Objective: Kidney stone growth and new stone formation are common clinical trial endpoints and are associated with future symptomatic events. To date, a manual review of CT scans has been required to assess stone growth and new stone formation, which is laborious. We validated the performance of a software algorithm that automatically identified, registered, and measured stones over longitudinal CT studies. Methods: We validated the performance of a pretrained machine learning algorithm to classify stone outcomes on longitudinal CT scan images at baseline and at the end of the 2-year follow-up period for 62 participants aged >18 years in the Prevention of Urinary Stones with Hydration (PUSH) randomized controlled trial. Stones were defined as an area of voxels with a minimum linear dimension of 2 mm that was higher in density than the mean plus 4 standard deviations of all nonnegative HU values within the kidney. The four outcomes assessed were: (1) growth of at least one existing stone by ≥2 mm, (2) formation of at least one new ≥2 mm stone, (3) no stone growth or new stone formation, and (4) loss of at least one stone. The accuracy of the algorithm was determined by comparing its outcomes to the gold standard of independent review of the CT images by at least two expert clinicians. Results: The algorithm correctly classified outcomes for 61 paired scans (98.4%). One pair that the algorithm incorrectly classified as stone growth was a new renal artery calcification on end-of-study CT. Conclusions: An automated image analysis method validated for the prospective PUSH trial was highly accurate for determining clinical outcomes of new stone formation, stone growth, stable stone size, and stone loss on longitudinal CT images. This method has the potential to improve the accuracy and efficiency of clinical care and endpoint determination for future clinical trials.
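
The stone definition above (voxels denser than the kidney's mean plus 4 SD of nonnegative HU, with a minimum linear dimension of 2 mm) maps naturally onto a threshold-and-filter step. The following is a minimal NumPy/SciPy sketch of that rule, not the trial's validated software; the kidney mask, voxel spacing, and bounding-box size check are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def candidate_stone_mask(ct_hu, kidney_mask, spacing_mm=(1.0, 1.0, 1.0), min_dim_mm=2.0):
    """Flag voxel clusters inside the kidney that satisfy the density and size rule."""
    # Density threshold: mean + 4*SD of all nonnegative HU values within the kidney.
    kidney_hu = ct_hu[kidney_mask.astype(bool)]
    kidney_hu = kidney_hu[kidney_hu >= 0]
    threshold = kidney_hu.mean() + 4.0 * kidney_hu.std()

    bright = (ct_hu > threshold) & kidney_mask.astype(bool)

    # Keep only connected components whose smallest bounding-box extent is >= min_dim_mm.
    labels, n_components = ndimage.label(bright)
    keep = np.zeros_like(bright, dtype=bool)
    for idx, bbox in enumerate(ndimage.find_objects(labels), start=1):
        if bbox is None:
            continue
        extents_mm = [(s.stop - s.start) * sp for s, sp in zip(bbox, spacing_mm)]
        if min(extents_mm) >= min_dim_mm:
            keep |= labels == idx
    return keep
```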

Predicting Microsatellite Instability in Endometrial Cancer by Multimodal Magnetic Resonance Radiomics Combined with Clinical Factors.

Wei QY, Li Y, Huang XL, Wei YC, Yang CZ, Yu YY, Liao JY

PubMed · Aug 26 2025
To develop a nomogram integrating clinical and multimodal MRI features for non-invasive prediction of microsatellite instability (MSI) in endometrial cancer (EC), and to evaluate its diagnostic performance. This retrospective multicenter study included 216 EC patients (mean age, 54.68 ± 8.72 years) from two institutions (2017-2023). Patients were classified as MSI (n=59) or microsatellite stable (MSS, n=157) based on immunohistochemistry. Institution A data were randomly split into training (n=132) and testing (n=33) sets (8:2 ratio), while Institution B data (n=51) served as external validation. Eight machine learning algorithms were used to construct models. A nomogram combining radiomics score and clinical predictors was developed. Performance was evaluated via receiver operating characteristic (ROC) curves, calibration, and decision curve analysis (DCA). The T2-weighted imaging (T2WI) radiomics model showed the highest area under the receiver operating characteristic curve (AUC) among single sequences (training set: 0.908; test set: 0.838). The combined-sequence radiomics model achieved superior performance (AUC: training set = 0.983, test set = 0.862). The support vector machine (SVM) outperformed other algorithms. The nomogram integrating rad-score and clinical features demonstrated higher predictive efficacy than the clinical model (test set: AUC = 0.904 vs. 0.654; p < 0.05) and performance comparable to that of the multimodal radiomics model. DCA indicated significant clinical utility for both the nomogram and the radiomics models. The clinical-radiomics nomogram effectively predicts MSI status in EC, offering a non-invasive tool for guiding immunotherapy decisions.
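
Since the abstract's best-performing learner is an SVM evaluated by ROC AUC on an 8:2 split, a minimal scikit-learn sketch of that evaluation pattern is shown below. The feature matrix, labels, and hyperparameters are synthetic stand-ins, not the study's data or final model.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Stand-in radiomics matrix (165 Institution A patients x 30 selected features)
# and binary labels (1 = MSI, 0 = MSS); real features would come from pyradiomics.
rng = np.random.default_rng(0)
X = rng.normal(size=(165, 30))
y = rng.integers(0, 2, size=165)

# 8:2 training/testing split, as in the study design.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = make_pipeline(StandardScaler(),
                      SVC(kernel="rbf", probability=True, class_weight="balanced"))
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"test-set AUC = {auc:.3f}")
```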

Beyond the norm: Exploring the diverse facets of adrenal lesions.

Afif S, Mahmood Z, Zaheer A, Azadi JR

PubMed · Aug 26 2025
Radiological diagnosis of adrenal lesions can be challenging due to the overlap between benign and malignant imaging features. The primary challenge in managing adrenal lesions is to accurately identify and characterize them so as to minimize unnecessary diagnostic examinations and interventions; however, there are substantial risks of underdiagnosis and misdiagnosis. This review article provides a comprehensive overview of the typical, atypical, and overlapping imaging features of both common and rare adrenal lesions. It also explores emerging applications of artificial intelligence-powered analysis of CT and MRI, which could play a pivotal role in distinguishing benign from malignant and functioning from non-functioning adrenal lesions with high diagnostic accuracy, thereby enhancing diagnostic confidence and potentially reducing unnecessary interventions.

A Novel Model for Predicting Microsatellite Instability in Endometrial Cancer: Integrating Deep Learning-Pathomics and MRI-Based Radiomics.

Zhou L, Zheng L, Hong C, Hu Y, Wang Z, Guo X, Du Z, Feng Y, Mei J, Zhu Z, Zhao Z, Xu M, Lu C, Chen M, Ji J

PubMed · Aug 26 2025
To develop and validate a novel model based on multiparametric MRI (mpMRI) and whole slide images (WSIs) for predicting microsatellite instability (MSI) status in endometrial cancer (EC) patients. A total of 136 surgically confirmed EC patients were included in this retrospective study. Patients were randomly divided into a training set (96 patients) and a validation set (40 patients) in a 7:3 ratio. Deep learning with ResNet50 was used to extract deep-learning pathomics features, while Pyradiomics was applied to extract radiomics features from sequences including T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), and the late arterial phase (AP). We developed a deep learning pathoradiomics model (DLPRM) using a multilayer perceptron (MLP) based on the radiomics and pathomics features. Furthermore, we validated the DLPRM comprehensively and compared it with the two single-scale signatures using the area under the receiver operating characteristic (ROC) curve, accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and F1-score. Finally, we employed Shapley additive explanations (SHAP) to elucidate the mechanism of the prediction model. After feature selection, a final set of nine radiomics features and 27 pathomics features was used to construct the radiomics signature (RS) and the deep learning pathomics signature (DLPS). The DLPRM combining the RS and DLPS had favorable performance for predicting MSI status in the training set (AUC 0.960 [95% CI 0.936-0.984]) and in the validation set (AUC 0.917 [95% CI 0.824-1.000]). The AUCs of the DLPS and RS ranged from 0.817 to 0.943 across the training and validation sets. Decision curve analysis indicated the DLPRM had relatively higher clinical net benefits. The DLPRM can effectively predict MSI status in EC patients from pretreatment pathoradiomics images with high accuracy and robustness and could provide a novel tool to assist clinicians in the individualized management of EC.
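
As a rough illustration of the pathomics-plus-radiomics fusion described above, the sketch below pairs an ImageNet-pretrained ResNet50 feature extractor (for WSI tiles) with a small PyTorch MLP that takes the selected pathomics and radiomics vectors. Layer sizes, the 27/9 feature counts, and the tile preprocessing are assumptions based only on the abstract; this is not the authors' published architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

# Deep pathomics features: 2048-d embeddings from an ImageNet-pretrained ResNet50
# with its classification head removed (feature selection down to 27 dims not shown).
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
wsi_encoder = nn.Sequential(*list(resnet.children())[:-1]).eval()

class FusionMLP(nn.Module):
    """Multilayer perceptron fusing pathomics and radiomics feature vectors."""
    def __init__(self, n_pathomics=27, n_radiomics=9, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_pathomics + n_radiomics, hidden),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(hidden, 1),  # logit for MSI vs. MSS
        )

    def forward(self, pathomics, radiomics):
        return self.net(torch.cat([pathomics, radiomics], dim=1))

with torch.no_grad():
    tiles = torch.randn(4, 3, 224, 224)          # stand-in WSI tiles
    deep_feats = wsi_encoder(tiles).flatten(1)   # shape (4, 2048) per-tile features

model = FusionMLP()
logits = model(torch.randn(4, 27), torch.randn(4, 9))  # stand-in selected features
```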

Anatomy-aware transformer-based model for precise rectal cancer detection and localization in MRI scans.

Li S, Zhang Y, Hong Y, Yuan W, Sun J

PubMed · Aug 25 2025
Rectal cancer is a major cause of cancer-related mortality, requiring accurate diagnosis via MRI scans. However, detecting rectal cancer in MRI scans is challenging due to image complexity and the need for precise localization. While transformer-based object detection has excelled in natural images, applying these models to medical data is hindered by limited medical imaging resources. To address this, we propose the Spatially Prioritized Detection Transformer (SP DETR), which incorporates a Spatially Prioritized (SP) Decoder to constrain anchor boxes to regions of interest (ROI) based on anatomical maps, focusing the model on areas most likely to contain cancer. Additionally, the SP cross-attention mechanism refines the learning of anchor box offsets. To improve small cancer detection, we introduce the Global Context-Guided Feature Fusion Module (GCGFF), leveraging a transformer encoder for global context and a Globally-Guided Semantic Fusion Block (GGSF) to enhance high-level semantic features. Experimental results show that our model significantly improves detection accuracy, especially for small rectal cancers, demonstrating the effectiveness of integrating anatomical priors with transformer-based models for clinical applications.
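
The key idea, constraining decoder anchor boxes to an anatomically plausible region, can be illustrated in a few lines of PyTorch. The helper below clamps normalized anchor centers to the bounding box of a binary ROI mask; it is a generic sketch of the idea with made-up tensor names, not the SP DETR decoder itself.

```python
import torch

def constrain_anchors_to_roi(anchors: torch.Tensor, roi_mask: torch.Tensor) -> torch.Tensor:
    """Clamp normalized (cx, cy, w, h) anchor boxes so their centers stay
    inside the bounding box of a binary anatomical ROI mask (H x W)."""
    ys, xs = torch.nonzero(roi_mask, as_tuple=True)
    h, w = roi_mask.shape
    x_lo, x_hi = xs.min().float() / w, xs.max().float() / w
    y_lo, y_hi = ys.min().float() / h, ys.max().float() / h
    cx = anchors[..., 0].clamp(x_lo.item(), x_hi.item())
    cy = anchors[..., 1].clamp(y_lo.item(), y_hi.item())
    return torch.cat([cx.unsqueeze(-1), cy.unsqueeze(-1), anchors[..., 2:]], dim=-1)

# Usage: 100 decoder queries restricted to a toy ROI occupying the image center.
roi = torch.zeros(256, 256)
roi[96:160, 96:160] = 1
anchors = torch.rand(100, 4)                      # random normalized (cx, cy, w, h)
constrained = constrain_anchors_to_roi(anchors, roi)
```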

TransSeg: Leveraging Transformer with Channel-Wise Attention and Semantic Memory for Semi-Supervised Ultrasound Segmentation.

Lyu J, Li L, Al-Hazzaa SAF, Wang C, Hossain MS

PubMed · Aug 25 2025
During labor, transperineal ultrasound imaging can acquire real-time midsagittal images, through which the pubic symphysis and fetal head can be accurately identified, and the angle of progression (AoP) between them can be calculated, thereby quantitatively evaluating the descent and position of the fetal head in the birth canal in real time. However, current segmentation methods based on convolutional neural networks (CNNs) and Transformers generally depend heavily on large-scale manually annotated data, which limits their adoption in practical applications. In light of this limitation, this paper develops a new Transformer-based Semi-supervised Segmentation Network (TransSeg). This method employs a Vision Transformer as the backbone network and introduces a Channel-wise Cross Attention (CCA) mechanism to effectively reconstruct the features of unlabeled samples into the labeled feature space, promoting architectural innovation in semi-supervised segmentation and eliminating the need for complex training strategies. In addition, we design a Semantic Information Storage (S-InfoStore) module and a Channel Semantic Update (CSU) strategy to dynamically store and update feature representations of unlabeled samples, thereby continuously enhancing their expressiveness in the feature space and significantly improving the model's utilization of unlabeled data. We conduct a systematic evaluation of the proposed method on the FH-PS-AoP dataset. Experimental results demonstrate that TransSeg outperforms existing mainstream methods across all evaluation metrics, verifying its effectiveness and advancement in semi-supervised semantic segmentation tasks.
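
A generic channel-wise cross-attention block, roughly in the spirit of the CCA mechanism described above, can be written as follows in PyTorch: queries come from unlabeled-sample tokens, keys/values from labeled-sample tokens, and attention is computed across channels rather than across tokens. Dimensions and projections are assumptions for illustration; this is not the published TransSeg module.

```python
import torch
import torch.nn as nn

class ChannelwiseCrossAttention(nn.Module):
    """Cross-attention over channels: re-expresses unlabeled features in the
    labeled feature space by attending channel-to-channel."""
    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Linear(channels, channels, bias=False)
        self.k = nn.Linear(channels, channels, bias=False)
        self.v = nn.Linear(channels, channels, bias=False)

    def forward(self, unlabeled: torch.Tensor, labeled: torch.Tensor) -> torch.Tensor:
        # Both inputs: (batch, tokens, channels) token sequences from a ViT backbone.
        q, k, v = self.q(unlabeled), self.k(labeled), self.v(labeled)
        n_tokens = q.shape[1]
        # Channel-by-channel affinity matrix: (batch, channels, channels).
        attn = torch.softmax(q.transpose(1, 2) @ k / n_tokens ** 0.5, dim=-1)
        # Project labeled values back onto the unlabeled token layout.
        return (attn @ v.transpose(1, 2)).transpose(1, 2)

cca = ChannelwiseCrossAttention(channels=256)
out = cca(torch.randn(2, 196, 256), torch.randn(2, 196, 256))  # -> (2, 196, 256)
```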

DYNAFormer: Enhancing transformer segmentation with dynamic anchor mask for medical imaging.

Nguyen TC, Phung KA, Dao TTP, Nguyen-Mau TH, Nguyen-Quang T, Pham CN, Le TN, Shen J, Nguyen TV, Tran MT

PubMed · Aug 25 2025
Polyp shape is critical for diagnosing colorectal polyps and assessing cancer risk, yet there is limited data on segmenting pedunculated and sessile polyps. This paper introduces PolypDB_INS, a dataset of 4403 images containing 4918 annotated polyps, specifically for sessile and pedunculated polyps. In addition, we propose DYNAFormer, a novel transformer-based model utilizing an anchor mask-guided mechanism that incorporates cross-attention, dynamic query updates, and query denoising for improved object segmentation. Treating each positional query as an anchor mask dynamically updated through decoder layers enhances perceptual information regarding the object's position, allowing for more precise segmentation of complex structures like polyps. Extensive experiments on the PolypDB_INS dataset using standard evaluation metrics for both instance and semantic segmentation show that DYNAFormer significantly outperforms state-of-the-art methods. Ablation studies confirm the effectiveness of the proposed techniques, highlighting the model's robustness for diagnosing colorectal cancer. The source code and dataset are available at https://github.com/ntcongvn/DYNAFormer.

Validation of automated computed tomography segmentation software to assess body composition among cancer patients.

Salehin M, Yang Chow VT, Lee H, Weltzien EK, Nguyen L, Li JM, Akella V, Caan BJ, Cespedes Feliciano EM, Ma D, Beg MF, Popuri K

PubMed · Aug 25 2025
Assessing body composition using computed tomography (CT) can help predict the clinical outcomes of cancer patients, including surgical complications, chemotherapy toxicity, and survival. However, manual segmentation of CT images is labor-intensive and can lead to significant inter-observer variability. In this study, we validate the accuracy and reliability of automatic CT-based segmentation using the Data Analysis Facilitation Suite (DAFS) Express software package, which rapidly segments single CT slices. The study analyzed single-slice images at the third lumbar vertebra (L3) level (n = 5973) of patients diagnosed with non-metastatic colorectal (n = 3098) and breast cancer (n = 2875) at Kaiser Permanente Northern California. Manual segmentation used SliceOmatic with Alberta protocol HU ranges; automated segmentation used DAFS Express with identical HU limits. The accuracy of the automated segmentation was evaluated using the DICE index, the reliability was assessed by intra-class correlation coefficients (ICC) with 95% CI, and the agreement between automatic and manual segmentations was assessed by Bland-Altman analysis. DICE scores below 20% and 70% were considered failed and poor segmentations, respectively, and underwent additional review. The mortality risk associated with each tissue's area was estimated using Cox proportional hazard ratios (HR) with 95% CI, adjusted for patient-specific variables including age, sex, race/ethnicity, cancer stage and grade, treatment receipt, and smoking status. A blinded review process categorized images with various characteristics for sensitivity analysis. The mean (standard deviation, SD) ages of the colorectal and breast cancer patients were 62.6 (11.4) and 56 (11.8) years, respectively. Automatic segmentation showed high accuracy vs. manual segmentation, with mean DICE scores above 96% for skeletal muscle (SKM), visceral adipose tissue (VAT), and subcutaneous adipose tissue (SAT), and above 77% for intermuscular adipose tissue (IMAT), with three failures, representing 0.05% of the cohort. Bland-Altman analysis of 5,973 measurements showed mean cross-sectional area differences of -5.73, -0.84, -2.82, and -1.02 cm² for SKM, VAT, SAT, and IMAT, respectively, indicating good agreement, with slight underestimation in SKM and SAT. Reliability coefficients ranged from 0.88 to 1.00 for colorectal cancer and 0.95 to 1.00 for breast cancer, with simple kappa values of 0.65-0.99 and 0.67-0.97, respectively. Additionally, mortality associations for automated and manual segmentations were similar, with comparable hazard ratios, confidence intervals, and p-values. Kaplan-Meier survival estimates showed mortality differences below 2.14%. DAFS Express enables rapid, accurate body composition analysis by automating segmentation, reducing expert time and computational burden. Such rapid body composition analysis is a prerequisite for the large-scale research that could ultimately support use in the clinical setting. Automated CT segmentations may be utilized to assess markers of sarcopenia, muscle loss, and adiposity and to predict clinical outcomes.
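
The agreement statistic at the center of this validation, Bland-Altman bias with 95% limits of agreement between automated and manual cross-sectional areas, is simple to compute; a minimal NumPy sketch is given below. Array names and values are assumptions for illustration, not the study data.

```python
import numpy as np

def bland_altman(auto_cm2: np.ndarray, manual_cm2: np.ndarray):
    """Bland-Altman agreement between automated and manual cross-sectional
    areas (cm^2): mean difference (bias) and 95% limits of agreement."""
    diff = auto_cm2 - manual_cm2
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Example: stand-in skeletal muscle areas for a handful of scans.
auto = np.array([148.2, 131.7, 160.5, 122.9, 141.0])
manual = np.array([150.1, 133.0, 161.2, 124.5, 142.3])
bias, lo, hi = bland_altman(auto, manual)
print(f"bias = {bias:.2f} cm^2, 95% LoA = [{lo:.2f}, {hi:.2f}] cm^2")
```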

Bias in deep learning-based image quality assessments of T2-weighted imaging in prostate MRI.

Nakai H, Froemming AT, Kawashima A, LeGout JD, Kurata Y, Gloe JN, Borisch EA, Riederer SJ, Takahashi N

PubMed · Aug 25 2025
To determine whether deep learning (DL)-based image quality (IQ) assessment of T2-weighted images (T2WI) could be biased by the presence of clinically significant prostate cancer (csPCa). In this three-center retrospective study, five abdominal radiologists categorized the IQ of 2,105 transverse T2WI series as optimal, mild, moderate, or severe degradation. An IQ classification model was developed using 1,719 series (development set). The agreement between the model and the radiologists was assessed on the remaining 386 series using a quadratic weighted kappa. The model was then applied to 11,723 examinations that were not included in the development set and that had no documented prostate cancer at the time of MRI (patient age, 65.5 ± 8.3 years [mean ± standard deviation]). Examinations categorized as mild to severe degradation were used as target groups, whereas those categorized as optimal were used to construct matched control groups. Case-control matching was performed to mitigate the effects of pre-MRI confounding factors, such as age and prostate-specific antigen value. The proportion of patients with csPCa was compared between the target and matched control groups using the chi-squared test. The agreement between the model and the radiologists was moderate, with a quadratic weighted kappa of 0.53. The mild and moderate IQ-degraded groups each had significantly higher csPCa proportions than the matched control groups with optimal IQ: moderate (N = 126) vs. optimal (N = 504), 26.3% vs. 22.7%, respectively, difference = 3.6% [95% confidence interval: 0.4%, 6.8%], p = 0.03; mild (N = 1,399) vs. optimal (N = 1,399), 22.9% vs. 20.2%, respectively, difference = 2.7% [0.7%, 4.7%], p = 0.008. The DL-based IQ rating tended to be worse in patients with csPCa, raising concerns about its clinical application.
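
Two of the statistics in this abstract, the quadratic weighted kappa for model-radiologist agreement and the chi-squared comparison of csPCa proportions, have standard implementations; the sketch below shows both with stand-in data. The grade labels and the 2x2 counts are illustrative approximations derived from the reported group sizes and percentages, not the study data.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from scipy.stats import chi2_contingency

# Agreement between the DL model and a reference reading on ordinal IQ grades
# (0 = optimal, 1 = mild, 2 = moderate, 3 = severe degradation); stand-in labels.
model_grades = np.array([0, 1, 1, 2, 3, 0, 2, 1])
reader_grades = np.array([0, 1, 2, 2, 3, 0, 1, 1])
kappa = cohen_kappa_score(model_grades, reader_grades, weights="quadratic")

# Compare csPCa proportions between an IQ-degraded target group and its
# matched optimal-IQ control group with a chi-squared test.
#                  csPCa  no csPCa
table = np.array([[ 33,      93],    # moderate IQ degradation (approx. counts)
                  [114,     390]])   # matched optimal-IQ controls (approx. counts)
chi2, p, dof, expected = chi2_contingency(table)
print(f"quadratic weighted kappa = {kappa:.2f}, chi-squared p = {p:.3f}")
```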