Auto-segmentation of cerebral cavernous malformations using a convolutional neural network.

Chou CJ, Yang HC, Lee CC, Jiang ZH, Chen CJ, Wu HM, Lin CF, Lai IC, Peng SJ

pubmed · May 26, 2025
This paper presents a deep learning model for the automated segmentation of cerebral cavernous malformations (CCMs). The model was trained using treatment planning data from 199 Gamma Knife (GK) exams, comprising 171 cases with a single CCM and 28 cases with multiple CCMs. The training data included initial MRI images with target CCM regions manually annotated by neurosurgeons. Brain parenchyma was extracted using a mask region-based convolutional neural network (Mask R-CNN), and the extracted data were then processed using a 3D convolutional neural network known as DeepMedic. The efficacy of the brain parenchyma extraction model was demonstrated via five-fold cross-validation, yielding an average Dice similarity coefficient of 0.956 ± 0.002. The CCM segmentation models achieved an average Dice similarity coefficient of 0.741 ± 0.028 based solely on T2W images. Dice similarity coefficients by Zabramski classification were as follows: type I, 0.743; type II, 0.742; and type III, 0.740. We also developed a user-friendly graphical user interface to facilitate the use of these models in clinical analysis. Overall, the model demonstrated sufficient performance across the various Zabramski classifications.
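
The results above are reported as Dice similarity coefficients (DSC). For reference, a minimal NumPy sketch of the DSC for binary masks; this is illustrative only, not the authors' evaluation code:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks.

    DSC = 2|A ∩ B| / (|A| + |B|); defined as 1.0 when both masks are empty.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```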

Detecting microcephaly and macrocephaly from ultrasound images using artificial intelligence.

Mengistu AK, Assaye BT, Flatie AB, Mossie Z

pubmed · May 26, 2025
Microcephaly and macrocephaly, abnormal congenital markers, are associated with developmental and neurologic deficits, making early ultrasound imaging medically imperative. However, resource-limited countries such as Ethiopia face shortages of trained personnel and diagnostic machines that prevent accurate and continuous diagnosis. This study aims to develop a fetal head abnormality detection model from ultrasound images via deep learning. Data were collected from three Ethiopian healthcare facilities to increase model generalizability; the recruitment period ran from November 9 to November 30, 2024. Several preprocessing techniques were applied, including augmentation, noise reduction, and normalization. SegNet, UNet, FCN, MobileNetV2, and EfficientNet-B0 were applied to segment and measure fetal head structures in ultrasound images. The measurements were classified as microcephaly, macrocephaly, or normal using WHO guidelines for gestational age, and model performance was then compared with that of industry experts. Evaluation metrics included accuracy, precision, recall, the F1 score, and the Dice coefficient. This study demonstrated the feasibility of using SegNet for automatic segmentation, measurement of fetal head abnormalities, and classification of macrocephaly and microcephaly, with an accuracy of 98% and a Dice coefficient of 0.97. Compared with industry experts, the model achieved accuracies of 92.5% and 91.2% for the biparietal diameter (BPD) and head circumference (HC) measurements, respectively. Deep learning models can enhance prenatal diagnosis workflows, especially in resource-constrained settings. Future work should optimize model performance, explore more complex models, and expand datasets to improve generalizability. If adopted, these technologies could support prenatal care delivery.
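
The classification step described above maps each head measurement to a gestational-age reference. A hedged sketch of that logic follows; the ±2 SD cut-off and the reference values are illustrative placeholders, not the WHO tables used in the study:

```python
# Hypothetical gestational-age reference: week -> (mean HC in mm, SD in mm).
# Values below are placeholders, NOT the actual WHO reference data.
REFERENCE = {
    20: (175.0, 8.0),
    28: (262.0, 11.0),
    36: (322.0, 13.0),
}

def classify_head(hc_mm: float, ga_weeks: int, z_cutoff: float = 2.0) -> str:
    """Classify a head circumference (HC) against a gestational-age norm."""
    mean, sd = REFERENCE[ga_weeks]
    z = (hc_mm - mean) / sd          # standard score relative to the norm
    if z < -z_cutoff:
        return "microcephaly"
    if z > z_cutoff:
        return "macrocephaly"
    return "normal"
```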

The extent of skeletal muscle wasting in prolonged critical illness and its association with survival: insights from a retrospective single-center study.

Kolck J, Hosse C, Fehrenbach U, Beetz NL, Auer TA, Pille C, Geisel D

pubmed · May 26, 2025
Muscle wasting in critically ill patients, particularly those with prolonged hospitalization, poses a significant challenge to recovery and long-term outcomes. The aim of this study was to characterize long-term muscle wasting trajectories in ICU patients with acute respiratory distress syndrome (ARDS) due to COVID-19 and in patients with acute pancreatitis (AP), to evaluate correlations between muscle wasting and patient outcomes, and to identify clinically feasible thresholds with the potential to enhance patient care strategies. A collective of 154 ICU patients (100 AP and 54 COVID-19 ARDS) with a minimum ICU stay of 10 days and at least three abdominal CT scans was retrospectively analyzed. AI-driven segmentation of CT scans quantified changes in psoas muscle area (PMA). A mixed-model analysis was used to assess the correlation between mortality and muscle wasting, and Cox regression was applied to identify potential predictors of survival. Muscle loss rates, survival thresholds, and outcome correlations were assessed using Kaplan-Meier and receiver operating characteristic (ROC) analyses. Muscle loss in ICU patients was most pronounced in the first two weeks, peaking at -2.42% and -2.39% PMA loss per day in weeks 1 and 2, respectively, followed by a progressive decline. The median total PMA loss was 48.3%, with significantly greater losses in non-survivors. Mixed-model analysis confirmed the correlation of muscle wasting with mortality. Cox regression identified visceral adipose tissue (VAT), sequential organ failure assessment (SOFA) score, and muscle wasting as significant risk factors, while increased skeletal muscle area (SMA) was protective. ROC and Kaplan-Meier analyses showed strong correlations between PMA loss thresholds and survival, with a daily loss > 4% predicting the worst survival (39.7%). To our knowledge, this is the first study to highlight the substantial progression of muscle wasting in ICU patients with prolonged hospitalization. The mortality-related thresholds for muscle wasting rates identified in this study may provide a basis for clinical risk stratification. Future research should validate these findings in larger cohorts and explore strategies to mitigate muscle loss.
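
The > 4% daily PMA loss threshold above implies a simple rate computation between serial CT time points. A hypothetical sketch; function and variable names are ours, not the study's:

```python
from datetime import date

def daily_pma_loss(baseline_pma: float, followup_pma: float,
                   t0: date, t1: date) -> float:
    """Percent psoas muscle area (PMA) lost per day between two CT scans."""
    days = (t1 - t0).days
    if days <= 0:
        raise ValueError("follow-up scan must be after the baseline scan")
    return 100.0 * (baseline_pma - followup_pma) / baseline_pma / days

# Illustrative usage with made-up areas (cm^2) and dates:
loss_rate = daily_pma_loss(12.5, 9.8, date(2021, 3, 1), date(2021, 3, 8))
high_risk = loss_rate > 4.0  # threshold the abstract links to poor survival
```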

Deep learning radiomics of left atrial appendage features for predicting atrial fibrillation recurrence.

Yin Y, Jia S, Zheng J, Wang W, Wang Z, Lin J, Lin W, Feng C, Xia S, Ge W

pubmed · May 26, 2025
Structural remodeling of the left atrial appendage (LAA) is characteristic of atrial fibrillation (AF), and LAA morphology impacts radiofrequency catheter ablation (RFCA) outcomes. In this study, we aimed to develop and validate a predictive model for AF ablation outcomes using LAA morphological features, deep learning (DL) radiomics, and clinical variables. In this multicenter retrospective study, 480 consecutive patients who underwent RFCA for AF at three tertiary hospitals between January 2016 and December 2022 were analyzed, with follow-up through December 2023. Preprocedural CT angiography (CTA) images and laboratory data were systematically collected. LAA segmentation was performed using an nnUNet-based model, followed by radiomic feature extraction. Cox proportional hazards regression was used to assess the relationship between AF recurrence and LAA volume. The dataset was randomly split into training (70%) and validation (30%) cohorts using stratified sampling. An AF recurrence prediction model integrating LAA DL radiomics with clinical variables was developed. The cohort had a median follow-up of 22 months (IQR 15-32), with 103 patients (21.5%) experiencing AF recurrence. The nnUNet segmentation model achieved a Dice coefficient of 0.89. Multivariate analysis showed that each unit increase in LAA volume was associated with a 5.8% increase in hazard (aHR 1.058, 95% CI 1.021-1.095; p = 0.002). The model combining LAA DL radiomics with clinical variables demonstrated an AUC of 0.92 (95% CI 0.87-0.96) in the test set, maintaining robust predictive performance across subgroups. LAA morphology and volume are strongly linked to AF RFCA outcomes. We developed an LAA segmentation network and a predictive model that combines DL radiomics and clinical variables to estimate the probability of AF recurrence.
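
The per-unit hazard ratio for LAA volume reported above comes from Cox proportional hazards regression. A toy sketch using the lifelines library, with hypothetical column names and made-up data:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy cohort: LAA volume (mL), months of follow-up, recurrence indicator.
df = pd.DataFrame({
    "laa_volume":      [7.1, 9.4, 6.3, 11.2, 10.8, 10.5, 5.9, 9.9],
    "months_to_event": [22, 9, 30, 6, 18, 8, 28, 12],
    "recurrence":      [0, 1, 0, 1, 0, 1, 0, 1],  # 1 = AF recurred
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_event", event_col="recurrence")
print(cph.hazard_ratios_)  # hazard ratio per unit increase in LAA volume
```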

Impact of contrast-enhanced agent on segmentation using a deep learning-based software "Ai-Seg" for head and neck cancer.

Kihara S, Ueda Y, Harada S, Masaoka A, Kanayama N, Ikawa T, Inui S, Akagi T, Nishio T, Konishi K

pubmed · May 26, 2025
In radiotherapy, deep learning auto-segmentation tools assist in contouring organs at risk (OARs). We developed a segmentation model for head and neck (HN) OARs dedicated to contrast-enhanced (CE) computed tomography (CT) using the segmentation software Ai-Seg, and compared performance between CE and non-CE (nCE) CT. This retrospective study included 321 patients with HN cancer, whose CE CT scans were used to train a segmentation model (the CE model). The CE model was installed in Ai-Seg and applied to an additional 25 patients with both CE and nCE CT. The Dice similarity coefficient (DSC) and average Hausdorff distance (AHD) were calculated between the ground truth and Ai-Seg contours for the brain, brainstem, chiasm, optic nerves, cochleae, oral cavity, parotid glands, pharyngeal constrictor muscle, and submandibular glands (SMGs). For six OARs, we compared the CE model with the existing model in Ai-Seg trained on nCE CT. The CE model obtained significantly higher DSCs on CE CT for the parotid glands and SMGs than the existing model. On nCE CT, the CE model yielded significantly lower DSC values and higher AHD values for the SMGs than on CE CT, but comparable values for the other OARs. The CE model achieved significantly better performance than the existing model and can be used on nCE CT images without a significant performance difference, except for the SMGs. Our results may facilitate the adoption of segmentation tools in clinical practice. In summary, we developed a segmentation model for HN OARs dedicated to CE CT using Ai-Seg and evaluated its usability on nCE CT.
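
The AHD metric used above can be computed as the symmetric mean of nearest-neighbour distances between contour point sets. A minimal sketch, assuming point coordinates already in mm; not the authors' implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def average_hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric average Hausdorff distance between point sets a, b (N x 3)."""
    d_ab = cKDTree(b).query(a)[0]  # nearest-neighbour distances a -> b
    d_ba = cKDTree(a).query(b)[0]  # nearest-neighbour distances b -> a
    return 0.5 * (d_ab.mean() + d_ba.mean())
```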

A novel network architecture for post-applicator placement CT auto-contouring in cervical cancer HDR brachytherapy.

Lei Y, Chao M, Yang K, Gupta V, Yoshida EJ, Wang T, Yang X, Liu T

pubmed · May 25, 2025
High-dose-rate brachytherapy (HDR-BT) is an integral part of treatment for locally advanced cervical cancer, requiring accurate segmentation of the high-risk clinical target volume (HR-CTV) and organs at risk (OARs) on post-applicator CT (pCT) for precise and safe dose delivery. Manual contouring, however, is time-consuming and highly variable, with challenges heightened in cervical HDR-BT due to complex anatomy and low tissue contrast. An effective auto-contouring solution could significantly enhance efficiency, consistency, and accuracy in cervical HDR-BT planning. The aim of this work was to develop a machine learning-based approach that improves the accuracy and efficiency of HR-CTV and OAR segmentation on pCT images for cervical HDR-BT. The proposed method employs two sequential deep learning models to segment the target and OARs from planning CT data. The intuitive model, a U-Net, initially segments simpler structures such as the bladder and HR-CTV, utilizing shallow features and iodine contrast agents. Building on this, the sophisticated model targets complex structures such as the sigmoid, rectum, and bowel, addressing challenges from low contrast, anatomical proximity, and imaging artifacts. This model incorporates spatial information from the intuitive model and uses total variation regularization to improve segmentation smoothness by penalizing changes in gradient. This dual-model approach improves accuracy and consistency in segmenting the HR-CTV and OARs in cervical HDR-BT. To validate the proposed method, 32 cervical cancer patients treated with tandem-and-ovoid (T&O) HDR brachytherapy (3-5 fractions, 115 CT images) were retrospectively selected. The method's performance was assessed using four-fold cross-validation, comparing segmentation results to manual contours across five metrics: Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), mean surface distance (MSD), center-of-mass distance (CMD), and volume difference (VD). Dosimetric evaluations included D90 for HR-CTV and D2cc for OARs. The proposed method demonstrates high segmentation accuracy for the HR-CTV, bladder, and rectum, achieving DSC values of 0.79 ± 0.06, 0.83 ± 0.10, and 0.76 ± 0.15; MSD values of 1.92 ± 0.77 mm, 2.24 ± 1.20 mm, and 4.18 ± 3.74 mm; and absolute VD values of 5.34 ± 4.85 cc, 17.16 ± 17.38 cc, and 18.54 ± 16.83 cc, respectively. Despite challenges in bowel and sigmoid segmentation due to poor soft-tissue contrast in CT and variability in manual contouring (ground-truth volumes of 128.48 ± 95.9 cc and 51.87 ± 40.67 cc), the method significantly outperforms two state-of-the-art methods on the DSC, MSD, and CMD metrics (p < 0.05). For the HR-CTV, the mean absolute D90 difference was 0.42 ± 1.17 Gy (p > 0.05), less than 5% of the prescription dose. Over 75% of cases showed changes within ± 0.5 Gy, and fewer than 10% exceeded ± 1 Gy. The mean and variation in structure volume and D2cc parameters between manual and segmented contours for OARs showed no significant differences (p > 0.05), with mean absolute D2cc differences within 0.5 Gy, except for the bladder, which exhibited higher variability (0.97 Gy). Our innovative auto-contouring method showed promising results in segmenting the HR-CTV and OARs from pCT, potentially enhancing the efficiency of cervical HDR-BT treatment planning. Further validation and clinical implementation are required to fully realize its clinical benefits.
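
The total variation regularization mentioned above penalizes spatial gradients of the predicted segmentation to encourage smooth contours. A simplified PyTorch stand-in for such a penalty, not the paper's exact loss:

```python
import torch

def tv_loss_3d(prob: torch.Tensor) -> torch.Tensor:
    """Total-variation penalty on a (B, C, D, H, W) probability volume:
    mean absolute difference between neighbouring voxels along each axis."""
    dz = (prob[:, :, 1:, :, :] - prob[:, :, :-1, :, :]).abs().mean()
    dy = (prob[:, :, :, 1:, :] - prob[:, :, :, :-1, :]).abs().mean()
    dx = (prob[:, :, :, :, 1:] - prob[:, :, :, :, :-1]).abs().mean()
    return dz + dy + dx

# Typical usage: total_loss = seg_loss + lambda_tv * tv_loss_3d(softmax_probs)
```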

CDPDNet: Integrating Text Guidance with Hybrid Vision Encoders for Medical Image Segmentation

Jiong Wu, Yang Xing, Boxiao Yu, Wei Shao, Kuang Gong

arxiv preprint · May 25, 2025
Most publicly available medical segmentation datasets are only partially labeled, with annotations provided for a subset of anatomical structures. When multiple datasets are combined for training, this incomplete annotation poses challenges, as it limits the model's ability to learn shared anatomical representations among datasets. Furthermore, vision-only frameworks often fail to capture complex anatomical relationships and task-specific distinctions, leading to reduced segmentation accuracy and poor generalizability to unseen datasets. In this study, we proposed a novel CLIP-DINO Prompt-Driven Segmentation Network (CDPDNet), which combined a self-supervised vision transformer with CLIP-based text embedding and introduced task-specific text prompts to tackle these challenges. Specifically, the framework was constructed upon a convolutional neural network (CNN) and incorporated DINOv2 to extract both fine-grained and global visual features, which were then fused using a multi-head cross-attention module to overcome the limited long-range modeling capability of CNNs. In addition, CLIP-derived text embeddings were projected into the visual space to help model complex relationships among organs and tumors. To further address the partial label challenge and enhance inter-task discriminative capability, a Text-based Task Prompt Generation (TTPG) module that generated task-specific prompts was designed to guide the segmentation. Extensive experiments on multiple medical imaging datasets demonstrated that CDPDNet consistently outperformed existing state-of-the-art segmentation methods. Code and pretrained model are available at: https://github.com/wujiong-hub/CDPDNet.git.
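
The fusion step described above projects CLIP text embeddings into the visual space and combines them with DINOv2 features via multi-head cross-attention. A rough PyTorch sketch; dimensions and module layout are guesses from the abstract, not the released CDPDNet code:

```python
import torch
import torch.nn as nn

class TextGuidedFusion(nn.Module):
    def __init__(self, vis_dim: int = 768, txt_dim: int = 512, heads: int = 8):
        super().__init__()
        self.proj = nn.Linear(txt_dim, vis_dim)   # text -> visual space
        self.attn = nn.MultiheadAttention(vis_dim, heads, batch_first=True)

    def forward(self, vis_tokens: torch.Tensor, txt_emb: torch.Tensor):
        # vis_tokens: (B, N, vis_dim) patch tokens; txt_emb: (B, T, txt_dim)
        txt = self.proj(txt_emb)                   # project text embeddings
        fused, _ = self.attn(query=vis_tokens, key=txt, value=txt)
        return vis_tokens + fused                  # residual fusion
```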

SPARS: Self-Play Adversarial Reinforcement Learning for Segmentation of Liver Tumours

Catalina Tan, Yipeng Hu, Shaheer U. Saeed

arxiv preprint · May 25, 2025
Accurate tumour segmentation is vital for various targeted diagnostic and therapeutic procedures for cancer, e.g., planning biopsies or tumour ablations. Manual delineation is extremely labour-intensive, requiring substantial expert time. Fully-supervised machine learning models aim to automate such localisation tasks, but require a large number of costly and often subjective 3D voxel-level labels for training. The high variance and subjectivity of such labels impact model generalisability, even when large datasets are available. Histopathology may offer more objective labels, but the infeasibility of acquiring pixel-level annotations in vivo makes developing histology-based tumour localisation methods challenging. In this work, we propose a novel weakly-supervised semantic segmentation framework called SPARS (Self-Play Adversarial Reinforcement Learning for Segmentation), which utilises an object presence classifier, trained on a small number of image-level binary cancer presence labels, to localise cancerous regions on CT scans. Such binary labels of patient-level cancer presence can be sourced more feasibly from biopsies and histopathology reports, enabling more objective cancer localisation on medical images. Evaluating with real patient data, we observed that SPARS yielded a mean Dice score of $77.3 \pm 9.4$, outperforming other weakly-supervised methods by large margins. This performance was comparable with recent fully-supervised methods that require voxel-level annotations. Our results demonstrate the potential of using SPARS to reduce the need for extensive human-annotated labels to detect cancer in real-world healthcare settings.
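
A loose sketch of the weak-supervision idea above, in which an image-level presence classifier scores candidate regions; the reward definition here is an illustrative guess, and SPARS itself wraps this kind of signal in self-play adversarial reinforcement learning:

```python
import torch

@torch.no_grad()
def mask_reward(classifier, image: torch.Tensor, mask: torch.Tensor) -> float:
    """Reward = classifier confidence that the masked region contains tumour.

    `classifier` is any image-level cancer-presence model returning a logit;
    `image` and `mask` share shape (C, D, H, W). Hypothetical interface.
    """
    masked = image * mask                 # keep only the proposed region
    logit = classifier(masked.unsqueeze(0))
    return torch.sigmoid(logit).item()
```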

Pulse Pressure, White Matter Hyperintensities, and Cognition: Mediating Effects Across the Adult Lifespan.

Hannan J, Newman-Norlund S, Busby N, Wilson SC, Newman-Norlund R, Rorden C, Fridriksson J, Bonilha L, Riccardi N

pubmed · May 25, 2025
To investigate whether pulse pressure or mean arterial pressure (MAP) mediates the relationship between age and white matter hyperintensity (WMH) load, and to examine the mediating effect of WMHs on cognition. Demographic information, blood pressure, current medication lists, and Montreal Cognitive Assessment (MoCA) scores for 231 stroke- and dementia-free adults were retrospectively obtained from the Aging Brain Cohort study. Total WMH load was determined from T2-FLAIR magnetic resonance scans using the TrUE-Net deep learning tool for white matter segmentation. In separate models, we used mediation analysis to assess whether pulse pressure or MAP mediates the relationship between age and total WMH load, controlling for cardiovascular confounds. We also assessed whether WMH load mediated the relationship between age and cognitive scores. Pulse pressure, but not MAP, significantly mediated the relationship between age and WMH load. WMH load partially mediated the relationship between age and MoCA score. Our results indicate that pulse pressure, but not mean arterial pressure, is mechanistically associated with age-related accumulation of WMHs, independent of other cardiovascular risk factors. WMH load was a mediator of cognitive scores across the adult lifespan. Effective management of pulse pressure may be especially important for the maintenance of brain health and cognition.
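
The mediation analysis described above can be expressed as the classic regression decomposition into direct and indirect paths (age -> pulse pressure -> WMH load). A compact statsmodels sketch with illustrative variable names; the study additionally adjusted for cardiovascular confounds:

```python
import statsmodels.api as sm

def mediation_paths(df):
    """Baron-Kenny-style path estimates for age -> pulse_pressure -> wmh_load."""
    # a path: age predicts the mediator (pulse pressure)
    X = sm.add_constant(df["age"])
    a = sm.OLS(df["pulse_pressure"], X).fit().params["age"]
    # b and c' paths: mediator and age jointly predict the outcome (WMH load)
    Xm = sm.add_constant(df[["age", "pulse_pressure"]])
    fit = sm.OLS(df["wmh_load"], Xm).fit()
    b = fit.params["pulse_pressure"]
    c_prime = fit.params["age"]
    return {"indirect": a * b, "direct": c_prime}
```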