
Evolution of deep learning tooth segmentation from CT/CBCT images: a systematic review and meta-analysis.

Kot WY, Au Yeung SY, Leung YY, Leung PH, Yang WF

PubMed · May 26, 2025
Deep learning has been utilized to segment teeth from computed tomography (CT) or cone-beam CT (CBCT). However, the performance of deep learning is unknown due to multiple models and diverse evaluation metrics. This systematic review and meta-analysis aims to evaluate the evolution and performance of deep learning in tooth segmentation. We systematically searched PubMed, Web of Science, Scopus, IEEE Xplore, arXiv.org, and ACM for studies investigating deep learning in human tooth segmentation from CT/CBCT. Included studies were assessed using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool. Data were extracted for meta-analyses by random-effects models. A total of 30 studies were included in the systematic review, and 28 of them were included for meta-analyses. Various deep learning algorithms were categorized according to the backbone network, encompassing single-stage convolutional models, convolutional models with U-Net architecture, Transformer models, convolutional models with attention mechanisms, and combinations of multiple models. Convolutional models with U-Net architecture were the most commonly used deep learning algorithms. The integration of attention mechanisms into convolutional models has emerged as a new research direction. Twenty-nine evaluation metrics were identified, with Dice Similarity Coefficient (DSC) being the most popular. The pooled results were 0.93 [0.93, 0.93] for DSC, 0.86 [0.85, 0.87] for Intersection over Union (IoU), 0.22 [0.19, 0.24] for Average Symmetric Surface Distance (ASSD), 0.92 [0.90, 0.94] for sensitivity, 0.71 [0.26, 1.17] for 95% Hausdorff distance, and 0.96 [0.93, 0.98] for precision. No significant difference was observed in the segmentation of single-rooted or multi-rooted teeth. No obvious correlation between sample size and segmentation performance was observed.
Multiple deep learning algorithms have been successfully applied to tooth segmentation from CT/CBCT, and their evolution has been summarized and categorized according to their backbone structures. In the future, studies with standardized protocols and open, labelled datasets are needed.
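The pooled DSC and IoU figures above can be reproduced for any pair of binary masks with a minimal sketch (pure Python; the toy masks below are illustrative, not from any reviewed study):

```python
def dice_and_iou(pred, truth):
    """Compute Dice similarity coefficient and IoU for two binary masks.

    pred, truth: flat sequences of 0/1 voxel labels of equal length.
    """
    inter = sum(p & t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dsc = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dsc, iou

# Toy flattened "masks": 3 overlapping voxels out of 4 predicted / 4 true
pred = [1, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 1, 1, 0]
dsc, iou = dice_and_iou(pred, truth)
print(round(dsc, 3), round(iou, 3))  # 0.75 0.6
```

Note that DSC is always at least as large as IoU for the same masks, which is one reason pooled DSC values read higher than pooled IoU values.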

Deep learning radiomics of left atrial appendage features for predicting atrial fibrillation recurrence.

Yin Y, Jia S, Zheng J, Wang W, Wang Z, Lin J, Lin W, Feng C, Xia S, Ge W

PubMed · May 26, 2025
Structural remodeling of the left atrial appendage (LAA) is characteristic of atrial fibrillation (AF), and LAA morphology impacts radiofrequency catheter ablation (RFCA) outcomes. In this study, we aimed to develop and validate a predictive model for AF ablation outcomes using LAA morphological features, deep learning (DL) radiomics, and clinical variables. In this multicenter retrospective study, 480 consecutive patients who underwent RFCA for AF at three tertiary hospitals between January 2016 and December 2022 were analyzed, with follow-up through December 2023. Preprocedural CT angiography (CTA) images and laboratory data were systematically collected. LAA segmentation was performed using an nnUNet-based model, followed by radiomic feature extraction. Cox proportional hazards regression analysis assessed the relationship between AF recurrence and LAA volume. The dataset was randomly split into training (70%) and validation (30%) cohorts using stratified sampling. An AF recurrence prediction model integrating LAA DL radiomics with clinical variables was developed. The cohort had a median follow-up of 22 months (IQR 15-32), with 103 patients (21.5%) experiencing AF recurrence. The nnUNet segmentation model achieved a Dice coefficient of 0.89. Multivariate analysis showed that each unit increase in LAA volume was associated with a 5.8% increase in hazard (aHR 1.058, 95% CI 1.021-1.095; p = 0.002). The model combining LAA DL radiomics with clinical variables demonstrated an AUC of 0.92 (95% CI 0.87-0.96) in the test set, maintaining robust predictive performance across subgroups. LAA morphology and volume are strongly linked to AF RFCA outcomes. We developed an LAA segmentation network and a predictive model that combines DL radiomics and clinical variables to estimate the probability of AF recurrence.
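Under the proportional-hazards assumption behind the reported aHR, a per-unit hazard ratio compounds multiplicatively across larger volume differences; a minimal sketch (the 10-unit comparison is hypothetical, not from the study):

```python
import math

def hazard_ratio(hr_per_unit, delta):
    """Scale a per-unit Cox hazard ratio to a delta-unit difference.

    Under the proportional-hazards model, HR(delta) = exp(beta * delta),
    where beta = ln(hr_per_unit) is the regression coefficient.
    """
    beta = math.log(hr_per_unit)
    return math.exp(beta * delta)

# Reported aHR for LAA volume: 1.058 per unit increase
print(round(hazard_ratio(1.058, 1), 3))   # 1.058
print(round(hazard_ratio(1.058, 10), 2))  # roughly 1.76: ~76% higher hazard
```

This is why a seemingly small per-unit ratio can translate into a clinically meaningful difference between patients at opposite ends of the LAA volume range.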

The extent of skeletal muscle wasting in prolonged critical illness and its association with survival: insights from a retrospective single-center study.

Kolck J, Hosse C, Fehrenbach U, Beetz NL, Auer TA, Pille C, Geisel D

PubMed · May 26, 2025
Muscle wasting in critically ill patients, particularly those with prolonged hospitalization, poses a significant challenge to recovery and long-term outcomes. The aim of this study was to characterize long-term muscle wasting trajectories in ICU patients with acute respiratory distress syndrome (ARDS) due to COVID-19 and acute pancreatitis (AP), to evaluate correlations between muscle wasting and patient outcomes, and to identify clinically feasible thresholds that have the potential to enhance patient care strategies. A cohort of 154 ICU patients (100 AP and 54 COVID-19 ARDS) with a minimum ICU stay of 10 days and at least three abdominal CT scans was retrospectively analyzed. AI-driven segmentation of CT scans quantified changes in psoas muscle area (PMA). A mixed-model analysis was used to assess the correlation between mortality and muscle wasting; Cox regression was applied to identify potential predictors of survival. Muscle loss rates, survival thresholds, and outcome correlations were assessed using Kaplan-Meier and receiver operating characteristic (ROC) analyses. Muscle loss in ICU patients was most pronounced in the first two weeks, peaking at −2.42% and −2.39% PMA loss per day in weeks 1 and 2, respectively, followed by a progressive decline. The median total PMA loss was 48.3%, with significantly greater losses in non-survivors. Mixed-model analysis confirmed the correlation between muscle wasting and mortality. Cox regression identified visceral adipose tissue (VAT), sequential organ failure assessment (SOFA) score, and muscle wasting as significant risk factors, while increased skeletal muscle area (SMA) was protective. ROC and Kaplan-Meier analyses showed strong correlations between PMA loss thresholds and survival, with a daily loss > 4% predicting the worst survival (39.7%). To our knowledge, this is the first study to highlight the substantial progression of muscle wasting in ICU patients with prolonged hospitalization.
The mortality-related thresholds for muscle wasting rates identified in this study may provide a basis for clinical risk stratification. Future research should validate these findings in larger cohorts and explore strategies to mitigate muscle loss.
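The > 4%/day survival threshold can be applied to serial PMA measurements with a simple sketch (the patient values below are hypothetical, and the study's own rate calculation may differ in detail):

```python
def daily_pma_loss(baseline_area, followup_area, days):
    """Average daily psoas muscle area (PMA) loss, as % of baseline per day."""
    return 100.0 * (baseline_area - followup_area) / (baseline_area * days)

def risk_stratum(loss_per_day, threshold=4.0):
    """Dichotomize by the study's survival-associated cutoff (> 4%/day)."""
    return "high-risk" if loss_per_day > threshold else "lower-risk"

# Hypothetical patient: PMA 20 cm^2 at baseline, 13 cm^2 after 7 days
rate = daily_pma_loss(20.0, 13.0, 7)
print(round(rate, 2), risk_stratum(rate))  # 5.0 high-risk
```

Serial CT scans (at least three per patient in this cohort) allow the rate to be recomputed per interval rather than once over the whole stay.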

Rep3D: Re-parameterize Large 3D Kernels with Low-Rank Receptive Modeling for Medical Imaging

Ho Hin Lee, Quan Liu, Shunxing Bao, Yuankai Huo, Bennett A. Landman

arXiv preprint · May 26, 2025
In contrast to vision transformers, which model long-range dependencies through global self-attention, large kernel convolutions provide a more efficient and scalable alternative, particularly in high-resolution 3D volumetric settings. However, naively increasing kernel size often leads to optimization instability and degradation in performance. Motivated by the spatial bias observed in effective receptive fields (ERFs), we hypothesize that different kernel elements converge at variable rates during training. To support this, we derive a theoretical connection between element-wise gradients and first-order optimization, showing that structurally re-parameterized convolution blocks inherently induce spatially varying learning rates. Building on this insight, we introduce Rep3D, a 3D convolutional framework that incorporates a learnable spatial prior into large kernel training. A lightweight two-stage modulation network generates a receptive-biased scaling mask, adaptively re-weighting kernel updates and enabling local-to-global convergence behavior. Rep3D adopts a plain encoder design with large depthwise convolutions, avoiding the architectural complexity of multi-branch compositions. We evaluate Rep3D on five challenging 3D segmentation benchmarks and demonstrate consistent improvements over state-of-the-art baselines, including transformer-based and fixed-prior re-parameterization methods. By unifying spatial inductive bias with optimization-aware learning, Rep3D offers an interpretable and scalable solution for 3D medical image analysis. The source code is publicly available at https://github.com/leeh43/Rep3D.
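The spatially varying learning-rate idea can be sketched as an element-wise scaling mask applied to the kernel gradient. The mask here is a hard-coded Gaussian prior for illustration only; Rep3D learns it with a two-stage modulation network:

```python
import numpy as np

def masked_kernel_update(kernel, grad, lr=1e-2):
    """Apply a receptive-biased scaling mask to a 3D kernel gradient step.

    The Gaussian mask decays with distance from the kernel center, so
    central elements effectively receive larger learning rates than the
    periphery (a fixed stand-in for Rep3D's learned modulation mask).
    """
    k = kernel.shape[0]
    c = (k - 1) / 2.0
    zz, yy, xx = np.meshgrid(*([np.arange(k)] * 3), indexing="ij")
    dist2 = (zz - c) ** 2 + (yy - c) ** 2 + (xx - c) ** 2
    mask = np.exp(-dist2 / (2 * (k / 4.0) ** 2))  # 1 at center, small at corners
    return kernel - lr * mask * grad              # spatially varying step

rng = np.random.default_rng(0)
K = rng.normal(size=(7, 7, 7))   # a 7x7x7 large depthwise kernel
G = np.ones_like(K)              # toy gradient: uniform everywhere
K_new = masked_kernel_update(K, G)
# Central element moves by the full step; corner elements barely move
print(K[3, 3, 3] - K_new[3, 3, 3] > K[0, 0, 0] - K_new[0, 0, 0])  # True
```

With a uniform gradient, the update difference between center and corner is pure mask effect, which is the spatial prior the paper's derivation motivates.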

Automated landmark-based mid-sagittal plane: reliability for 3-dimensional mandibular asymmetry assessment on head CT scans.

Alt S, Gajny L, Tilotta F, Schouman T, Dot G

PubMed · May 26, 2025
The determination of the mid-sagittal plane (MSP) on three-dimensional (3D) head imaging is key to the assessment of facial asymmetry. The aim of this study was to evaluate the reliability of an automated landmark-based MSP to quantify mandibular asymmetry on head computed tomography (CT) scans. A dataset of 368 CT scans, including orthognathic surgery patients, was automatically annotated with 3D cephalometric landmarks via a previously published deep learning-based method. Five of these landmarks were used to automatically construct an MSP orthogonal to the Frankfurt horizontal plane. The reliability of automatic MSP construction was compared with the reliability of manual MSP construction based on 6 manual localizations by 3 experienced operators on 19 randomly selected CT scans. The mandibular asymmetry of the 368 CT scans with respect to the MSP was calculated and compared with clinical expert judgment. The construction of the MSP was found to be highly reliable, both manually and automatically. The manual reproducibility 95% limit of agreement was less than 1 mm for translation along the y-axis and less than 1.1° for rotation around the x- and z-axes, and the automatic measurement lay within the confidence interval of the manual method. The automatic MSP construction was shown to be clinically relevant, with the mandibular asymmetry measures being consistent with the expertly assessed levels of asymmetry. The proposed automatic landmark-based MSP construction was found to be as reliable as manual construction and clinically relevant in assessing the mandibular asymmetry of 368 head CT scans. Once implemented in clinical software, fully automated landmark-based MSP construction could be clinically used to assess mandibular asymmetry on head CT scans.
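A landmark-based plane and a signed-distance asymmetry measure can be sketched as follows. This simplified version fits the plane through three midline landmarks, whereas the study builds the MSP from five landmarks constrained orthogonal to the Frankfurt plane:

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def msp_from_landmarks(p1, p2, p3):
    """Unit normal and anchor point of the plane through three landmarks."""
    n = cross(sub(p2, p1), sub(p3, p1))
    length = math.sqrt(sum(x * x for x in n))
    return tuple(x / length for x in n), p1

def signed_distance(q, normal, origin):
    """Signed distance of landmark q from the MSP: a simple asymmetry measure."""
    return sum(n * d for n, d in zip(normal, sub(q, origin)))

# Hypothetical midline landmarks defining the sagittal plane x = 0
normal, origin = msp_from_landmarks((0, 0, 0), (0, 1, 0), (0, 0, 1))
print(round(signed_distance((2.5, 4.0, 1.0), normal, origin), 2))  # 2.5
```

Comparing the signed distances of paired left/right mandibular landmarks then yields a lateralized asymmetry score with respect to the plane.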

Auto-segmentation of cerebral cavernous malformations using a convolutional neural network.

Chou CJ, Yang HC, Lee CC, Jiang ZH, Chen CJ, Wu HM, Lin CF, Lai IC, Peng SJ

PubMed · May 26, 2025
This paper presents a deep learning model for the automated segmentation of cerebral cavernous malformations (CCMs). The model was trained using treatment planning data from 199 Gamma Knife (GK) exams, comprising 171 cases with a single CCM and 28 cases with multiple CCMs. The training data included initial MRI images with target CCM regions manually annotated by neurosurgeons. For the extraction of the brain parenchyma, we employed a mask region-based convolutional neural network (Mask R-CNN). The extracted data were then processed using a 3D convolutional neural network known as DeepMedic. The efficacy of the brain parenchyma extraction model was demonstrated via five-fold cross-validation, resulting in an average Dice similarity coefficient of 0.956 ± 0.002. The segmentation models for CCMs achieved an average Dice similarity coefficient of 0.741 ± 0.028 based solely on T2W images. The Dice similarity coefficients for each Zabramski classification type were as follows: type I (0.743), type II (0.742), and type III (0.740). We also developed a user-friendly graphical user interface to facilitate the use of these models in clinical analysis. This paper presents a deep learning model for the automated segmentation of CCMs, demonstrating sufficient performance across various Zabramski classifications.
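The two-stage handoff, restricting the lesion model's input to the extracted brain parenchyma, can be sketched as a simple masking step (toy volume below; in the paper the mask comes from Mask R-CNN and the second stage is DeepMedic):

```python
import numpy as np

def apply_brain_mask(volume, brain_mask):
    """Zero out non-parenchyma voxels before the second-stage model.

    volume:     3D intensity array (e.g. a T2W MRI volume)
    brain_mask: 3D binary array from the first-stage extraction model
    """
    return np.where(brain_mask.astype(bool), volume, 0)

# Toy 3x3x3 "volume" with a single-slab "brain" mask
vol = np.arange(27, dtype=float).reshape(3, 3, 3)
mask = np.zeros((3, 3, 3))
mask[1] = 1  # only the middle slab is parenchyma
masked = apply_brain_mask(vol, mask)
print(masked[0].sum(), masked[1].sum())  # 0.0 117.0
```

Restricting the search space this way removes skull and background voxels that would otherwise produce false-positive lesion candidates.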

A novel network architecture for post-applicator placement CT auto-contouring in cervical cancer HDR brachytherapy.

Lei Y, Chao M, Yang K, Gupta V, Yoshida EJ, Wang T, Yang X, Liu T

PubMed · May 25, 2025
High-dose-rate brachytherapy (HDR-BT) is an integral part of treatment for locally advanced cervical cancer, requiring accurate segmentation of the high-risk clinical target volume (HR-CTV) and organs at risk (OARs) on post-applicator CT (pCT) for precise and safe dose delivery. Manual contouring, however, is time-consuming and highly variable, with challenges heightened in cervical HDR-BT due to complex anatomy and low tissue contrast. An effective auto-contouring solution could significantly enhance efficiency, consistency, and accuracy in cervical HDR-BT planning. To develop a machine learning-based approach that improves the accuracy and efficiency of HR-CTV and OAR segmentation on pCT images for cervical HDR-BT. The proposed method employs two sequential deep learning models to segment target and OARs from planning CT data. The intuitive model, a U-Net, initially segments simpler structures such as the bladder and HR-CTV, utilizing shallow features and iodine contrast agents. Building on this, the sophisticated model targets complex structures like the sigmoid, rectum, and bowel, addressing challenges from low contrast, anatomical proximity, and imaging artifacts. This model incorporates spatial information from the intuitive model and uses total variation regularization to improve segmentation smoothness by applying a penalty to changes in gradient. This dual-model approach improves accuracy and consistency in segmenting high-risk clinical target volumes and organs at risk in cervical HDR-BT. To validate the proposed method, 32 cervical cancer patients treated with tandem and ovoid (T&O) HDR brachytherapy (3-5 fractions, 115 CT images) were retrospectively selected. 
The method's performance was assessed using four-fold cross-validation, comparing segmentation results to manual contours across five metrics: Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), mean surface distance (MSD), center-of-mass distance (CMD), and volume difference (VD). Dosimetric evaluations included D90 for HR-CTV and D2cc for OARs. The proposed method demonstrates high segmentation accuracy for HR-CTV, bladder, and rectum, achieving DSC values of 0.79 ± 0.06, 0.83 ± 0.10, and 0.76 ± 0.15, MSD values of 1.92 ± 0.77 mm, 2.24 ± 1.20 mm, and 4.18 ± 3.74 mm, and absolute VD values of 5.34 ± 4.85 cc, 17.16 ± 17.38 cc, and 18.54 ± 16.83 cc, respectively. Despite challenges in bowel and sigmoid segmentation due to poor soft tissue contrast in CT and variability in manual contouring (ground truth volumes of 128.48 ± 95.9 cc and 51.87 ± 40.67 cc), the method significantly outperforms two state-of-the-art methods on DSC, MSD, and CMD metrics (p-value < 0.05). For HR-CTV, the mean absolute D90 difference was 0.42 ± 1.17 Gy (p-value > 0.05), less than 5% of the prescription dose. Over 75% of cases showed changes within ± 0.5 Gy, and fewer than 10% exceeded ± 1 Gy. The mean and variation in structure volume and D2cc parameters between manual and segmented contours for OARs showed no significant differences (p-value > 0.05), with mean absolute D2cc differences within 0.5 Gy, except for the bladder, which exhibited higher variability (0.97 Gy). Our innovative auto-contouring method showed promising results in segmenting HR-CTV and OARs from pCT, potentially enhancing the efficiency of cervical HDR-BT treatment planning. Further validation and clinical implementation are required to fully realize its clinical benefits.
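The total variation regularization used in the sophisticated model penalizes the sum of absolute differences between neighboring elements of the output; a 2-D sketch (toy maps, not from the paper, and the paper applies the idea in 3D):

```python
import numpy as np

def tv_penalty(prob_map):
    """Anisotropic total-variation penalty on a 2D probability map.

    Sums absolute differences between vertically and horizontally
    adjacent pixels; adding this term to a segmentation loss discourages
    ragged, high-gradient contour boundaries.
    """
    dy = np.abs(np.diff(prob_map, axis=0)).sum()
    dx = np.abs(np.diff(prob_map, axis=1)).sum()
    return dy + dx

smooth = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1]], dtype=float)  # one clean boundary
noisy = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)   # checkerboard artifact
print(tv_penalty(smooth), tv_penalty(noisy))    # 2.0 10.0
```

Because the penalty grows with every extra boundary crossing, minimizing it pushes the network toward the smoother of two otherwise similar contours.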

Sex-related differences and associated transcriptional signatures in the brain ventricular system and cerebrospinal fluid development in full-term neonates.

Sun Y, Fu C, Gu L, Zhao H, Feng Y, Jin C

PubMed · May 25, 2025
The cerebrospinal fluid (CSF) is known to serve as a unique environment for neurodevelopment, with specific proteins secreted by epithelial cells of the choroid plexus (CP) playing crucial roles in cortical development and cell differentiation. Sex-related differences in the brain in early life have been widely identified, but few studies have investigated the neonatal CSF system and associated transcriptional signatures. This study included 75 full-term neonates [44 males and 31 females; gestational age (GA) = 37-42 weeks] without significant MRI abnormalities from the dHCP (developing Human Connectome Project) database. Deep-learning automated segmentation was used to measure various metrics of the brain ventricular system and CSF. Sex-related differences and relationships with postnatal age were analyzed by linear regression. Correlations between the CP and CSF space metrics were also examined. LASSO regression was further applied to identify the key genes contributing to the sex-related CSF system differences by using regional gene expression data from the Allen Human Brain Atlas. Right lateral ventricles [2.42 ± 0.98 vs. 2.04 ± 0.45 cm³ (mean ± standard deviation), p = 0.036] and right CP (0.16 ± 0.07 vs. 0.13 ± 0.04 cm³, p = 0.024) were larger in males, with a stronger volume correlation (male/female correlation coefficients r: 0.798 vs. 0.649, p < 1 × 10⁻⁴). No difference was found in total CSF volume, while peripheral CSF (male/female β: 1.218 vs. 1.064) and CP (male/female β: 0.008 vs. 0.005) exhibited relatively faster growth in males. Additionally, the volumes of the lateral ventricular system, third ventricle, peripheral CSF, and total CSF were significantly correlated with their corresponding CP volume (r: 0.362 to 0.799, p < 0.05).
DERL2 (Degradation in Endoplasmic Reticulum Protein 2) (r = 0.1319) and MRPL48 (mitochondrial ribosomal protein L48) (r = −0.0370) were identified as potential key genes associated with sex-related differences in the CSF system. Male neonates present larger volumes and faster growth of the right lateral ventricle, likely linked to the corresponding CP volume and growth pattern. The downregulation of DERL2 and upregulation of MRPL48 may contribute to these sex-related variations in the CSF system, suggesting a molecular basis for sex-specific brain development.
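The LASSO step selects key genes by shrinking most regression coefficients exactly to zero; a minimal proximal-gradient (ISTA) sketch on synthetic data (the study used a standard LASSO implementation, and the "gene" features here are simulated, not real expression data):

```python
import numpy as np

def lasso_ista(X, y, lam=0.05, iters=500):
    """Minimal LASSO solver via proximal gradient descent (ISTA).

    Minimizes (1/2n)||Xw - y||^2 + lam * ||w||_1; the soft-threshold
    step is what drives irrelevant coefficients exactly to zero.
    """
    n, p = X.shape
    lr = n / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
    w = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / n
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 8))                  # 100 samples, 8 candidate "genes"
true_w = np.array([2.0, 0, 0, -1.5, 0, 0, 0, 0])
y = X @ true_w + 0.01 * rng.normal(size=100)
w = lasso_ista(X, y)
print(np.nonzero(np.abs(w) > 0.1)[0])          # expected support: features 0 and 3
```

The penalty weight `lam` controls how many features survive; cross-validating it is the usual way to pick the final gene set.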

CDPDNet: Integrating Text Guidance with Hybrid Vision Encoders for Medical Image Segmentation

Jiong Wu, Yang Xing, Boxiao Yu, Wei Shao, Kuang Gong

arXiv preprint · May 25, 2025
Most publicly available medical segmentation datasets are only partially labeled, with annotations provided for a subset of anatomical structures. When multiple datasets are combined for training, this incomplete annotation poses challenges, as it limits the model's ability to learn shared anatomical representations among datasets. Furthermore, vision-only frameworks often fail to capture complex anatomical relationships and task-specific distinctions, leading to reduced segmentation accuracy and poor generalizability to unseen datasets. In this study, we proposed a novel CLIP-DINO Prompt-Driven Segmentation Network (CDPDNet), which combined a self-supervised vision transformer with CLIP-based text embedding and introduced task-specific text prompts to tackle these challenges. Specifically, the framework was constructed upon a convolutional neural network (CNN) and incorporated DINOv2 to extract both fine-grained and global visual features, which were then fused using a multi-head cross-attention module to overcome the limited long-range modeling capability of CNNs. In addition, CLIP-derived text embeddings were projected into the visual space to help model complex relationships among organs and tumors. To further address the partial label challenge and enhance inter-task discriminative capability, a Text-based Task Prompt Generation (TTPG) module that generated task-specific prompts was designed to guide the segmentation. Extensive experiments on multiple medical imaging datasets demonstrated that CDPDNet consistently outperformed existing state-of-the-art segmentation methods. Code and pretrained model are available at: https://github.com/wujiong-hub/CDPDNet.git.
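The fusion of projected text embeddings with visual tokens rests on scaled dot-product cross-attention; a single-head sketch (dimensions and inputs are illustrative, and CDPDNet's actual module is multi-head with learned projections):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention.

    Visual tokens (queries) attend over projected text embeddings
    (keys/values); each output row is a text-informed mixture with
    the same shape as the query token.
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ values

rng = np.random.default_rng(0)
visual_tokens = rng.normal(size=(16, 32))  # e.g. patch features from a ViT
text_prompts = rng.normal(size=(4, 32))    # e.g. projected text embeddings
fused = cross_attention(visual_tokens, text_prompts, text_prompts)
print(fused.shape)  # (16, 32)
```

Because each output token is a convex combination of the text embeddings, task-specific prompts can steer every spatial location of the segmentation head.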
