Page 15 of 31302 results

Segmentation of the Left Ventricle and Its Pathologies for Acute Myocardial Infarction After Reperfusion in LGE-CMR Images.

Li S, Wu C, Feng C, Bian Z, Dai Y, Wu LM

PubMed · May 26, 2025
Due to its association with a higher incidence of left ventricular dysfunction and complications, segmentation of the left ventricle and its related pathological tissues (microvascular obstruction and myocardial infarction) from late gadolinium enhancement cardiac magnetic resonance images is crucially important. However, the lack of datasets, diverse shapes and locations, extreme class imbalance, and severe overlap of intensity distributions are the main challenges. We first release a late gadolinium enhancement cardiac magnetic resonance benchmark dataset, LGE-LVP, containing 140 patients with left ventricular myocardial infarction and concomitant microvascular obstruction. We then propose a progressive deep learning model, LVPSegNet, to segment the left ventricle and its pathologies via adaptive region-of-interest extraction, sample augmentation, curriculum learning, and multiple-receptive-field fusion to address these challenges. Comprehensive comparisons with state-of-the-art models on internal and external datasets demonstrate that the proposed model performs best on both geometric and clinical metrics and most closely matches the clinicians' performance. Overall, the released LGE-LVP dataset, alongside the proposed LVPSegNet, offers a practical solution for automated segmentation of the left ventricle and its pathologies by providing data support and facilitating effective segmentation. The dataset and source code will be released via https://github.com/DFLAG-NEU/LVPSegNet.
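Curriculum learning, one of the training strategies listed above, feeds the model easy samples first and progressively harder ones. A minimal plain-Python sketch, where the difficulty scores and the linear pacing schedule are illustrative assumptions rather than LVPSegNet's actual criteria:

```python
# Curriculum sampler: reveal training samples from easy to hard.
# The difficulty scores and the linear pacing schedule below are
# illustrative assumptions, not LVPSegNet's actual criteria.

def curriculum_subset(samples, difficulties, epoch, total_epochs, start_frac=0.3):
    """Return the easiest fraction of samples, growing linearly with epoch."""
    order = sorted(range(len(samples)), key=lambda i: difficulties[i])
    frac = min(1.0, start_frac + (1.0 - start_frac) * epoch / max(1, total_epochs - 1))
    k = max(1, round(frac * len(samples)))
    return [samples[i] for i in order[:k]]

samples = ["s0", "s1", "s2", "s3", "s4"]
difficulties = [0.9, 0.1, 0.5, 0.3, 0.7]  # e.g., pathology size or contrast

early = curriculum_subset(samples, difficulties, epoch=0, total_epochs=10)
late = curriculum_subset(samples, difficulties, epoch=9, total_epochs=10)
```

Early epochs see only the easiest samples; by the final epoch the whole (difficulty-ordered) dataset is in play.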

Research-based clinical deployment of artificial intelligence algorithm for prostate MRI.

Harmon SA, Tetreault J, Esengur OT, Qin M, Yilmaz EC, Chang V, Yang D, Xu Z, Cohen G, Plum J, Sherif T, Levin R, Schmidt-Richberg A, Thompson S, Coons S, Chen T, Choyke PL, Xu D, Gurram S, Wood BJ, Pinto PA, Turkbey B

PubMed · May 26, 2025
A critical limitation to deployment and utilization of Artificial Intelligence (AI) algorithms in radiology practice is the integration of algorithms directly into the clinical Picture Archiving and Communication System (PACS). Here, we sought to integrate an AI-based pipeline for prostate organ and intraprostatic lesion segmentation within a clinical PACS environment to enable point-of-care utilization under a prospective clinical trial scenario. A previously trained, publicly available AI model for segmentation of intraprostatic findings on multiparametric Magnetic Resonance Imaging (mpMRI) was converted into a containerized environment compatible with MONAI Deploy Express. An inference server and dedicated clinical PACS workflow were established within our institution for evaluation of real-time use of the AI algorithm. PACS-based deployment was prospectively evaluated in two phases: first, in a consecutive cohort of patients undergoing diagnostic imaging at our institution and, second, in a consecutive cohort of patients undergoing biopsy based on mpMRI findings. The AI pipeline was executed from within the PACS environment by the radiologist, and AI findings were imported into clinical biopsy planning software for target definition. Metrics assessing deployment success, timing, and detection performance were recorded and summarized. In phase one, clinical PACS deployment was successfully executed in 57/58 cases, with results obtained within one minute of activation (median 33 s [range 21-50 s]). Comparison with expert radiologist annotation demonstrated model performance consistent with independent validation studies. In phase two, 40/40 cases were successfully executed via PACS deployment and results were imported for biopsy targeting. Cancer detection rates were 82.1% for ROI targets detected by both AI and the radiologist, 47.8% for targets proposed by AI and accepted by the radiologist, and 33.3% for targets identified by the radiologist alone.
Integration of novel AI algorithms requiring multi-parametric input into clinical PACS environment is feasible and model outputs can be used for downstream clinical tasks.

Detecting microcephaly and macrocephaly from ultrasound images using artificial intelligence.

Mengistu AK, Assaye BT, Flatie AB, Mossie Z

PubMed · May 26, 2025
Microcephaly and macrocephaly, abnormal congenital markers, are associated with developmental and neurological deficits, so early ultrasound imaging is medically imperative. However, resource-limited countries such as Ethiopia face shortages of trained personnel and diagnostic machines that prevent accurate and continuous diagnosis. This study aims to develop a fetal head abnormality detection model from ultrasound images via deep learning. Data were collected from three Ethiopian healthcare facilities to increase model generalizability; the recruitment period ran from November 9, 2024, to November 30, 2024. Several preprocessing techniques were applied, including augmentation, noise reduction, and normalization. SegNet, UNet, FCN, MobileNetV2, and EfficientNet-B0 were applied to segment and measure fetal head structures in ultrasound images. The measurements were classified as microcephaly, macrocephaly, or normal using WHO guidelines for gestational age, and model performance was compared with that of industry experts. Evaluation metrics included accuracy, precision, recall, the F1 score, and the Dice coefficient. The study demonstrated the feasibility of using SegNet for automatic segmentation, measurement of fetal head abnormalities, and classification of macrocephaly and microcephaly, with an accuracy of 98% and a Dice coefficient of 0.97. Compared with industry experts, the model achieved accuracies of 92.5% and 91.2% for the biparietal diameter (BPD) and head circumference (HC) measurements, respectively. Deep learning models can enhance prenatal diagnosis workflows, especially in resource-constrained settings. Future work should optimize model performance, explore more complex models, and expand datasets to improve generalizability. If adopted, these technologies can support prenatal care delivery.
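The guideline-based classification step described above amounts to comparing a head measurement against gestational-age reference values. A hedged sketch in Python: the mean/SD table and the ±2 SD cutoff are hypothetical placeholders, not values from the study or the WHO charts:

```python
# Classify a fetal head circumference (HC) measurement against
# gestational-age reference values. The mean/SD table and the z-score
# cutoff are hypothetical placeholders -- real thresholds come from
# published reference charts, not from this sketch.

HC_REFERENCE = {  # gestational week -> (mean mm, SD mm); illustrative only
    36: (328.0, 11.0),
    38: (340.0, 11.5),
    40: (350.0, 12.0),
}

def classify_head(hc_mm, ga_weeks, z_cutoff=2.0):
    """Label a measurement microcephaly / macrocephaly / normal by z-score."""
    mean, sd = HC_REFERENCE[ga_weeks]
    z = (hc_mm - mean) / sd
    if z < -z_cutoff:
        return "microcephaly"
    if z > z_cutoff:
        return "macrocephaly"
    return "normal"
```

In the study's pipeline the HC value would come from the segmentation model rather than being entered by hand.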

Rate and Patient Specific Risk Factors for Periprosthetic Acetabular Fractures during Primary Total Hip Arthroplasty using a Pressfit Cup.

Simon S, Gobi H, Mitterer JA, Frank BJ, Huber S, Aichmair A, Dominkus M, Hofstaetter JG

PubMed · May 26, 2025
Periprosthetic acetabular fractures following primary total hip arthroplasty (THA) using a cementless acetabular component range from occult to severe fractures. The aims of this study were to evaluate the perioperative periprosthetic acetabular fracture rate and the patient-specific risks of a modular cementless acetabular component. We included 7,016 primary THAs (61.4% women, 38.6% men; age, 67 years; interquartile range, 58 to 74) that received a cementless, hydroxyapatite-coated, modular titanium press-fit acetabular component from a single manufacturer between January 2013 and September 2022. All perioperative radiographs and computed tomography (CT) scans were analyzed for all causes. Patient-specific data and the revision rate were retrieved, and radiographic measurements were performed using artificial intelligence-based software. Following matching based on patient demographics, patients who did and did not have periacetabular fractures were compared to identify patient-specific and radiographic risk factors for periacetabular fractures. The fracture rate was 0.8% (56 of 7,016). Overall, 33.9% (19 of 56) were small occult fractures solely visible on CT, and 21 of 56 (37.5%) were stable small fractures. Both groups combined (40 of 56; 71.4%) were treated nonoperatively. Revision THA was necessary in 16 of 56 cases, an overall revision rate of 0.2% (16 of 7,016). Patient-specific risk factors were a small acetabular component size (≤ 50), a low body mass index (BMI) (< 24.5), higher age (> 68 years), female sex, a low lateral center-edge angle (< 24°), a high extrusion index (> 20%), a high Sharp angle (> 38°), and a high Tönnis angle (> 10°). A wide range of periprosthetic acetabular fractures was observed following primary cementless THA; in total, 71.4% of acetabular fractures were small cracks that did not necessitate revision surgery.
Identifying patient-specific risk factors, such as advanced age, female sex, low BMI, and dysplastic hips, may help reduce future complications.

CDPDNet: Integrating Text Guidance with Hybrid Vision Encoders for Medical Image Segmentation

Jiong Wu, Yang Xing, Boxiao Yu, Wei Shao, Kuang Gong

arXiv preprint · May 25, 2025
Most publicly available medical segmentation datasets are only partially labeled, with annotations provided for a subset of anatomical structures. When multiple datasets are combined for training, this incomplete annotation poses challenges, as it limits the model's ability to learn shared anatomical representations among datasets. Furthermore, vision-only frameworks often fail to capture complex anatomical relationships and task-specific distinctions, leading to reduced segmentation accuracy and poor generalizability to unseen datasets. In this study, we proposed a novel CLIP-DINO Prompt-Driven Segmentation Network (CDPDNet), which combined a self-supervised vision transformer with CLIP-based text embedding and introduced task-specific text prompts to tackle these challenges. Specifically, the framework was constructed upon a convolutional neural network (CNN) and incorporated DINOv2 to extract both fine-grained and global visual features, which were then fused using a multi-head cross-attention module to overcome the limited long-range modeling capability of CNNs. In addition, CLIP-derived text embeddings were projected into the visual space to help model complex relationships among organs and tumors. To further address the partial label challenge and enhance inter-task discriminative capability, a Text-based Task Prompt Generation (TTPG) module that generated task-specific prompts was designed to guide the segmentation. Extensive experiments on multiple medical imaging datasets demonstrated that CDPDNet consistently outperformed existing state-of-the-art segmentation methods. Code and pretrained model are available at: https://github.com/wujiong-hub/CDPDNet.git.
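The multi-head cross-attention fusion described above can be illustrated with a single-head, pure-Python toy: one CNN-feature query attends over DINOv2-style token keys/values. Dimensions and values are illustrative assumptions; the real module is multi-head with learned projections:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(query, keys, values):
    """Single-head scaled dot-product cross-attention: one query vector
    (e.g., a CNN feature) attends over token keys/values (e.g., DINOv2
    tokens). Toy, unlearned projections -- a sketch, not CDPDNet's module."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    out = [sum(w * v[i] for w, v in zip(weights, values))
           for i in range(len(values[0]))]
    return out, weights

q = [1.0, 0.0]                      # query aligned with the first key
ks = [[1.0, 0.0], [0.0, 1.0]]       # two token keys
vs = [[10.0, 0.0], [0.0, 10.0]]     # corresponding values
fused, w = cross_attention(q, ks, vs)
```

The query attends more strongly to the key it aligns with, so the fused output is dominated by that token's value.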

SPARS: Self-Play Adversarial Reinforcement Learning for Segmentation of Liver Tumours

Catalina Tan, Yipeng Hu, Shaheer U. Saeed

arXiv preprint · May 25, 2025
Accurate tumour segmentation is vital for various targeted diagnostic and therapeutic procedures for cancer, e.g., planning biopsies or tumour ablations. Manual delineation is extremely labour-intensive, requiring substantial expert time. Fully-supervised machine learning models aim to automate such localisation tasks, but require a large number of costly and often subjective 3D voxel-level labels for training. The high variance and subjectivity of such labels impact model generalisability, even when large datasets are available. Histopathology can provide more objective labels, but the infeasibility of acquiring pixel-level annotations in vivo makes developing histology-based tumour localisation methods challenging. In this work, we propose a novel weakly-supervised semantic segmentation framework called SPARS (Self-Play Adversarial Reinforcement Learning for Segmentation), which utilises an object presence classifier, trained on a small number of image-level binary cancer presence labels, to localise cancerous regions on CT scans. Such binary labels of patient-level cancer presence can be sourced more feasibly from biopsies and histopathology reports, enabling more objective cancer localisation on medical images. Evaluating on real patient data, we observed that SPARS yielded a mean Dice score of $77.3 \pm 9.4$, outperforming other weakly-supervised methods by large margins. This performance was comparable with recent fully-supervised methods that require voxel-level annotations. Our results demonstrate the potential of using SPARS to reduce the need for extensive human-annotated labels to detect cancer in real-world healthcare settings.
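SPARS is evaluated with the Dice score, defined as 2|A∩B| / (|A| + |B|) for predicted and reference masks. A minimal implementation on flat binary masks:

```python
def dice_score(pred, target):
    """Dice similarity coefficient between two binary masks given as
    flat lists of 0/1. Returns 1.0 for two empty masks by convention."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 if total == 0 else 2.0 * inter / total

a = [1, 1, 0, 0, 1]
b = [1, 0, 0, 1, 1]
# dice_score(a, b): 2 voxels overlap out of 3 + 3 labelled -> 2*2/6
```

The same formula applies voxel-wise in 3D after flattening the volumes.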

A novel network architecture for post-applicator placement CT auto-contouring in cervical cancer HDR brachytherapy.

Lei Y, Chao M, Yang K, Gupta V, Yoshida EJ, Wang T, Yang X, Liu T

PubMed · May 25, 2025
High-dose-rate brachytherapy (HDR-BT) is an integral part of treatment for locally advanced cervical cancer, requiring accurate segmentation of the high-risk clinical target volume (HR-CTV) and organs at risk (OARs) on post-applicator CT (pCT) for precise and safe dose delivery. Manual contouring, however, is time-consuming and highly variable, with challenges heightened in cervical HDR-BT due to complex anatomy and low tissue contrast. An effective auto-contouring solution could significantly enhance efficiency, consistency, and accuracy in cervical HDR-BT planning. To develop a machine learning-based approach that improves the accuracy and efficiency of HR-CTV and OAR segmentation on pCT images for cervical HDR-BT. The proposed method employs two sequential deep learning models to segment the target and OARs from planning CT data. The intuitive model, a U-Net, initially segments simpler structures such as the bladder and HR-CTV, utilizing shallow features and iodine contrast agents. Building on this, the sophisticated model targets complex structures such as the sigmoid, rectum, and bowel, addressing challenges from low contrast, anatomical proximity, and imaging artifacts. This model incorporates spatial information from the intuitive model and uses total variation regularization to improve segmentation smoothness by penalizing large spatial gradients in the predictions. This dual-model approach improves accuracy and consistency in segmenting the HR-CTV and OARs in cervical HDR-BT. To validate the proposed method, 32 cervical cancer patients treated with tandem and ovoid (T&O) HDR brachytherapy (3-5 fractions, 115 CT images) were retrospectively selected.
The method's performance was assessed using four-fold cross-validation, comparing segmentation results to manual contours across five metrics: Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), mean surface distance (MSD), center-of-mass distance (CMD), and volume difference (VD). Dosimetric evaluations included D90 for HR-CTV and D2cc for OARs. The proposed method demonstrates high segmentation accuracy for HR-CTV, bladder, and rectum, achieving DSC values of 0.79 ± 0.06, 0.83 ± 0.10, and 0.76 ± 0.15, MSD values of 1.92 ± 0.77 mm, 2.24 ± 1.20 mm, and 4.18 ± 3.74 mm, and absolute VD values of 5.34 ± 4.85 cc, 17.16 ± 17.38 cc, and 18.54 ± 16.83 cc, respectively. Despite challenges in bowel and sigmoid segmentation due to poor soft tissue contrast in CT and variability in manual contouring (ground truth volumes of 128.48 ± 95.9 cc and 51.87 ± 40.67 cc), the method significantly outperforms two state-of-the-art methods on DSC, MSD, and CMD metrics (p < 0.05). For HR-CTV, the mean absolute D90 difference was 0.42 ± 1.17 Gy (p > 0.05), less than 5% of the prescription dose. Over 75% of cases showed changes within ± 0.5 Gy, and fewer than 10% exceeded ± 1 Gy. The mean and variation in structure volume and D2cc parameters between manual and segmented contours for OARs showed no significant differences (p > 0.05), with mean absolute D2cc differences within 0.5 Gy, except for the bladder, which exhibited higher variability (0.97 Gy). Our innovative auto-contouring method showed promising results in segmenting HR-CTV and OARs from pCT, potentially enhancing the efficiency of cervical HDR-BT treatment planning. Further validation and clinical implementation are required to fully realize its clinical benefits.
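The total variation regularization mentioned above penalizes differences between neighboring predictions. A minimal anisotropic-TV sketch on a 2D array; the paper's exact formulation is not specified here, so this is a generic version:

```python
def total_variation(img):
    """Anisotropic total variation of a 2D array (list of lists):
    the sum of absolute differences between horizontally and vertically
    adjacent values. Adding this to a loss encourages smooth predictions;
    this generic form is a sketch, not the paper's exact regularizer."""
    h, w = len(img), len(img[0])
    tv = 0.0
    for i in range(h):
        for j in range(w):
            if j + 1 < w:
                tv += abs(img[i][j + 1] - img[i][j])  # horizontal neighbor
            if i + 1 < h:
                tv += abs(img[i + 1][j] - img[i][j])  # vertical neighbor
    return tv

flat = [[0.5, 0.5], [0.5, 0.5]]    # constant prediction -> zero penalty
noisy = [[0.0, 1.0], [1.0, 0.0]]   # checkerboard -> maximal penalty
```

A constant map incurs no penalty, while a checkerboard (the least smooth prediction) incurs the largest one.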

Sex-related differences and associated transcriptional signatures in the brain ventricular system and cerebrospinal fluid development in full-term neonates.

Sun Y, Fu C, Gu L, Zhao H, Feng Y, Jin C

PubMed · May 25, 2025
The cerebrospinal fluid (CSF) is known to serve as a unique environment for neurodevelopment, with specific proteins secreted by epithelial cells of the choroid plexus (CP) playing crucial roles in cortical development and cell differentiation. Sex-related differences in the brain in early life have been widely identified, but few studies have investigated the neonatal CSF system and associated transcriptional signatures. This study included 75 full-term neonates [44 males and 31 females; gestational age (GA) = 37-42 weeks] without significant MRI abnormalities from the dHCP (developing Human Connectome Project) database. Deep-learning automated segmentation was used to measure various metrics of the brain ventricular system and CSF. Sex-related differences and relationships with postnatal age were analyzed by linear regression. Correlations between the CP and CSF space metrics were also examined. LASSO regression was further applied to identify the key genes contributing to the sex-related CSF system differences by using regional gene expression data from the Allen Human Brain Atlas. Right lateral ventricles [2.42 ± 0.98 vs. 2.04 ± 0.45 cm³ (mean ± standard deviation), p = 0.036] and right CP (0.16 ± 0.07 vs. 0.13 ± 0.04 cm³, p = 0.024) were larger in males, with a stronger volume correlation (male/female correlation coefficients r: 0.798 vs. 0.649, p < 1 × 10⁻⁴). No difference was found in total CSF volume, while peripheral CSF (male/female β: 1.218 vs. 1.064) and CP (male/female β: 0.008 vs. 0.005) exhibited relatively faster growth in males. Additionally, the volumes of the lateral ventricular system, third ventricle, peripheral CSF, and total CSF were significantly correlated with their corresponding CP volume (r: 0.362 to 0.799, p < 0.05).
DERL2 (Degradation in Endoplasmic Reticulum Protein 2) (r = 0.1319) and MRPL48 (Mitochondrial Ribosomal Protein L48) (r = -0.0370) were identified as potential key genes associated with sex-related differences in the CSF system. Male neonates present larger volumes and faster growth of the right lateral ventricle, likely linked to the corresponding CP volume and growth pattern. The downregulation of DERL2 and upregulation of MRPL48 may contribute to these sex-related variations in the CSF system, suggesting a molecular basis for sex-specific brain development.

Pulse Pressure, White Matter Hyperintensities, and Cognition: Mediating Effects Across the Adult Lifespan.

Hannan J, Newman-Norlund S, Busby N, Wilson SC, Newman-Norlund R, Rorden C, Fridriksson J, Bonilha L, Riccardi N

PubMed · May 25, 2025
To investigate whether pulse pressure or mean arterial pressure (MAP) mediates the relationship between age and white matter hyperintensity (WMH) load, and to examine the mediating effect of WMHs on cognition. Demographic information, blood pressure, current medication lists, and Montreal Cognitive Assessment (MoCA) scores for 231 stroke- and dementia-free adults were retrospectively obtained from the Aging Brain Cohort study. Total WMH load was determined from T2-FLAIR magnetic resonance scans using the TrUE-Net deep learning tool for white matter segmentation. In separate models, we used mediation analysis to assess whether pulse pressure or MAP mediates the relationship between age and total WMH load, controlling for cardiovascular confounds. We also assessed whether WMH load mediated the relationship between age and cognitive scores. Pulse pressure, but not MAP, significantly mediated the relationship between age and WMH load. WMH load partially mediated the relationship between age and MoCA score. Our results indicate that pulse pressure, but not MAP, is mechanistically associated with age-related accumulation of WMHs, independent of other cardiovascular risk factors. WMH load was a mediator of cognitive scores across the adult lifespan. Effective management of pulse pressure may be especially important for maintenance of brain health and cognition.
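The mediation analysis above can be sketched with the product-of-coefficients approach: the a-path slope (age → pulse pressure) times the b-path slope (pulse pressure → WMH load, controlling for age). The synthetic data and coefficients below are made up for illustration, and the study's covariate adjustment is omitted:

```python
def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    """Sample covariance; cov(x, x) is the sample variance."""
    mx, my = mean(xs), mean(ys)
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / (len(xs) - 1)

def indirect_effect(x, m, y):
    """Product-of-coefficients mediation (Baron-Kenny style): the a-path
    slope (x -> m) times the b-path partial slope of m in the regression
    of y on x and m. No covariates, unlike the study's adjusted models."""
    a = cov(x, m) / cov(x, x)
    b = (cov(y, m) * cov(x, x) - cov(y, x) * cov(m, x)) / (
        cov(m, m) * cov(x, x) - cov(m, x) ** 2)
    return a * b

# Synthetic illustration: pulse pressure rises with age, WMH load rises
# with pulse pressure. The coefficients 0.5 and 0.8 are made up, and the
# alternating +/-1 term adds mediator variation not explained by age.
age = [20.0 + i for i in range(60)]
pp = [0.5 * v + (-1 if i % 2 == 0 else 1) for i, v in enumerate(age)]
wmh = [0.8 * v for v in pp]
effect = indirect_effect(age, pp, wmh)  # roughly 0.5 * 0.8 = 0.4
```

In practice the a- and b-path models would also include the cardiovascular confounds, and the indirect effect would be tested with bootstrapped confidence intervals.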
