
Children Are Not Small Adults: Addressing Limited Generalizability of an Adult Deep Learning CT Organ Segmentation Model to the Pediatric Population.

Chatterjee D, Kanhere A, Doo FX, Zhao J, Chan A, Welsh A, Kulkarni P, Trang A, Parekh VS, Yi PH

PubMed · Jun 1, 2025
Deep learning (DL) tools developed on adult datasets may not generalize well to pediatric patients, posing potential safety risks. We evaluated the performance of TotalSegmentator, a state-of-the-art adult-trained CT organ segmentation model, on a subset of organs in a pediatric CT dataset and explored optimization strategies to improve pediatric segmentation performance. TotalSegmentator was retrospectively evaluated on abdominal CT scans from an external adult dataset (n = 300) and an external pediatric dataset (n = 359). Generalizability was quantified by comparing Dice scores between the adult and pediatric external datasets using Mann-Whitney U tests. Two DL optimization approaches were then evaluated: (1) a 3D nnU-Net model trained on pediatric data only, and (2) an adult nnU-Net model fine-tuned on the pediatric cases. Our results show TotalSegmentator had significantly lower overall mean Dice scores on pediatric vs. adult CT scans (0.73 vs. 0.81, P < .001), demonstrating limited generalizability to pediatric CT scans. Stratified by organ, mean pediatric Dice scores were lower for four organs (P < .001 for all): the right and left adrenal glands (right adrenal, 0.41 [0.39-0.43] vs. 0.69 [0.66-0.71]; left adrenal, 0.35 [0.32-0.37] vs. 0.68 [0.65-0.71]), the duodenum (0.47 [0.45-0.49] vs. 0.67 [0.64-0.69]), and the pancreas (0.73 [0.72-0.74] vs. 0.79 [0.77-0.81]). Performance on pediatric CT scans improved both by developing pediatric-specific models and by fine-tuning an adult-trained model on pediatric images; both methods significantly improved segmentation accuracy over TotalSegmentator for all organs, especially for smaller anatomical structures (e.g., > 0.2 higher mean Dice for the adrenal glands; P < .001).
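
A minimal sketch of the comparison described above, assuming per-scan binary masks: it computes Dice scores for each cohort and applies a Mann-Whitney U test. The score arrays shown are placeholder values, not the study's data.

```python
# Sketch: per-scan Dice scores compared between cohorts with a Mann-Whitney
# U test, as in the abstract. Score arrays below are placeholders.
import numpy as np
from scipy.stats import mannwhitneyu

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A∩B| / (|A|+|B|) for binary masks."""
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

adult_dice = np.array([0.81, 0.79, 0.84, 0.80])       # hypothetical values
pediatric_dice = np.array([0.73, 0.70, 0.75, 0.71])   # hypothetical values

stat, p = mannwhitneyu(pediatric_dice, adult_dice, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")
```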

Automatic Segmentation of Ultrasound-Guided Quadratus Lumborum Blocks Based on Artificial Intelligence.

Wang Q, He B, Yu J, Zhang B, Yang J, Liu J, Ma X, Wei S, Li S, Zheng H, Tang Z

PubMed · Jun 1, 2025
Ultrasound-guided quadratus lumborum block (QLB) has become a widely used perioperative analgesia technique for abdominal and pelvic surgeries. Due to the anatomical complexity and individual variability of the quadratus lumborum muscle (QLM) on ultrasound images, nerve blocks rely heavily on anesthesiologist experience. Using artificial intelligence (AI) to identify different tissue regions in ultrasound images is therefore crucial. In our study, we retrospectively collected data from 112 patients (3162 images) and developed a deep learning model named Q-VUM, a U-shaped network based on the Visual Geometry Group 16 (VGG16) network. Q-VUM precisely segments various tissues, including the QLM, the external oblique muscle, the internal oblique muscle, the transversus abdominis muscle (collectively referred to as the EIT), and the bones. Our model demonstrated robust performance, achieving mean intersection over union (mIoU), mean pixel accuracy, Dice coefficient, and accuracy values of 0.734, 0.829, 0.841, and 0.944, respectively. The IoU, recall, precision, and Dice coefficient achieved for the QLM were 0.711, 0.813, 0.850, and 0.831, respectively. Additionally, Q-VUM's predictions showed that 85% of the pixels in the blocked area fell within the actual blocked area. Finally, our model exhibited stronger segmentation performance than two common deep learning segmentation networks (mIoU 0.734 vs. 0.720 and 0.720). In summary, we proposed Q-VUM, a model that accurately identifies the anatomical structure of the quadratus lumborum in real time. This model aids anesthesiologists in precisely locating the nerve block site, thereby reducing potential complications and enhancing the effectiveness of nerve block procedures.
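
For reference, a sketch of how the reported multi-class metrics (mIoU and mean pixel accuracy) can be computed from integer label maps via a confusion matrix; the smoothing epsilon and inputs are illustrative assumptions:

```python
# Illustrative computation of mIoU and mean pixel accuracy for multi-class
# segmentation; `pred` and `truth` are integer label maps of the same shape.
import numpy as np

def confusion_matrix(pred: np.ndarray, truth: np.ndarray, num_classes: int) -> np.ndarray:
    # Rows index the true class, columns the predicted class.
    idx = num_classes * truth.ravel().astype(int) + pred.ravel().astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def miou_and_mpa(pred: np.ndarray, truth: np.ndarray, num_classes: int):
    cm = confusion_matrix(pred, truth, num_classes).astype(float)
    tp = np.diag(cm)
    iou = tp / (cm.sum(axis=0) + cm.sum(axis=1) - tp + 1e-9)  # per-class IoU
    acc = tp / (cm.sum(axis=1) + 1e-9)                        # per-class pixel accuracy
    return iou.mean(), acc.mean()
```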

Cross-site Validation of AI Segmentation and Harmonization in Breast MRI.

Huang Y, Leotta NJ, Hirsch L, Gullo RL, Hughes M, Reiner J, Saphier NB, Myers KS, Panigrahi B, Ambinder E, Di Carlo P, Grimm LJ, Lowell D, Yoon S, Ghate SV, Parra LC, Sutton EJ

PubMed · Jun 1, 2025
This work aims to perform a cross-site validation of automated segmentation for breast cancers in MRI and to compare its performance to that of radiologists. A three-dimensional (3D) U-Net was trained to segment cancers in dynamic contrast-enhanced axial MRIs using a large dataset from Site 1 (n = 15,266; 449 malignant and 14,817 benign). Performance was validated on site-specific test data from this and two additional sites, and on common publicly available testing data. Four radiologists from each of the three clinical sites provided two-dimensional (2D) segmentations as ground truth. Segmentation performance did not differ between the network and radiologists on the test data from Sites 1 and 2 or the common public data (median Dice score: Site 1, network 0.86 vs. radiologist 0.85, n = 114; Site 2, 0.91 vs. 0.91, n = 50; common, 0.93 vs. 0.90). For Site 3, an affine input layer was fine-tuned using segmentation labels, resulting in comparable performance between the network and radiologists (0.88 vs. 0.89, n = 42). Radiologist performance differed on the common test data, and the network numerically outperformed 11 of the 12 radiologists (median Dice: 0.85-0.94, n = 20). In conclusion, a deep network with a novel supervised harmonization technique matches radiologists' performance in MRI tumor segmentation across clinical sites. We make code and weights publicly available to promote reproducible AI in radiology.
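
One plausible realization of the fine-tuned affine input layer is a learnable per-channel scale and shift prepended to the frozen pretrained network, trained only on the new site's labeled cases. The PyTorch sketch below is an assumption about the mechanism rather than the authors' released code; `backbone` stands in for their pretrained 3D U-Net:

```python
# Hedged sketch of site harmonization via a learnable affine input layer: a
# per-channel scale and shift is fine-tuned on the new site's labeled cases
# while the pretrained network stays frozen. `backbone` is a placeholder for
# the pretrained 3-D U-Net; the paper's exact layer may differ.
import torch
import torch.nn as nn

class AffineHarmonizer(nn.Module):
    def __init__(self, backbone: nn.Module, channels: int = 1):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(1, channels, 1, 1, 1))
        self.shift = nn.Parameter(torch.zeros(1, channels, 1, 1, 1))
        self.backbone = backbone
        for p in self.backbone.parameters():   # freeze pretrained weights
            p.requires_grad = False

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, depth, height, width) MRI volume
        return self.backbone(self.scale * x + self.shift)
```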

Intra-Individual Reproducibility of Automated Abdominal Organ Segmentation-Performance of TotalSegmentator Compared to Human Readers and an Independent nnU-Net Model.

Abel L, Wasserthal J, Meyer MT, Vosshenrich J, Yang S, Donners R, Obmann M, Boll D, Merkle E, Breit HC, Segeroth M

PubMed · Jun 1, 2025
The purpose of this study is to assess the segmentation reproducibility of an artificial intelligence-based algorithm, TotalSegmentator, across 34 anatomical structures using multiphasic abdominal CT scans, comparing unenhanced, arterial, and portal venous phases in the same patients. A total of 1252 multiphasic abdominal CT scans acquired at our institution between January 1, 2012, and December 31, 2022, were retrospectively included. TotalSegmentator was used to derive volumetric measurements of 34 abdominal organs and structures from the total of 3756 CT series. Reproducibility was evaluated across the three contrast phases per CT and compared to two human readers and an independent nnU-Net trained on the BTCV dataset. Relative deviations in segmented volumes and absolute volume deviations (AVD) were reported. A volume deviation within 5% was considered reproducible; non-inferiority testing was therefore conducted using a 5% margin. Twenty-nine of the 34 structures had volume deviations within 5% and were considered reproducible. Volume deviations for the adrenal glands, gallbladder, spleen, and duodenum were above 5%. The highest reproducibility was observed for bones (-0.58% [95% CI: -0.58, -0.57]) and muscles (-0.33% [-0.35, -0.32]). Among abdominal organs, the volume deviation was 1.67% (1.60, 1.74). TotalSegmentator's reproducibility outperformed that of the nnU-Net trained on the BTCV dataset, with an AVD of 6.50% (6.41, 6.59) vs. 10.03% (9.86, 10.20; p < 0.0001), most notably in cases with pathologic findings. Similarly, TotalSegmentator's AVD between different contrast phases was superior to the interreader AVD for the same contrast phase (p = 0.036). TotalSegmentator demonstrated high intra-individual reproducibility for most abdominal structures in multiphasic abdominal CT scans. Although reproducibility was lower in pathologic cases, it outperforms both human readers and an nnU-Net trained on the BTCV dataset.
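
A sketch of the reproducibility measure under the stated 5% margin, assuming per-patient organ volumes from two contrast phases; the volume arrays are placeholders, not study data:

```python
# Relative volume deviation between contrast phases, with the 5% margin used
# to call a structure reproducible. Volumes (mL) are placeholder values.
import numpy as np

def relative_deviation(vol_a: np.ndarray, vol_b: np.ndarray) -> np.ndarray:
    """Signed per-patient deviation of phase B relative to phase A, in percent."""
    return 100.0 * (vol_b - vol_a) / vol_a

unenhanced = np.array([152.0, 148.5, 160.2])     # e.g., spleen volumes, phase 1
portal_venous = np.array([150.9, 149.1, 158.8])  # same patients, phase 2

dev = relative_deviation(unenhanced, portal_venous)
avd = np.abs(dev).mean()  # absolute volume deviation (AVD), in percent
print(f"mean deviation {dev.mean():+.2f}%, AVD {avd:.2f}% (reproducible if within 5%)")
```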

A systematic review on deep learning-enabled coronary CT angiography for plaque and stenosis quantification and cardiac risk prediction.

Shrivastava P, Kashikar S, Parihar PH, Kasat P, Bhangale P, Shrivastava P

PubMed · Jun 1, 2025
Coronary artery disease (CAD) is a major worldwide health concern, contributing significantly to the global burden of cardiovascular diseases (CVDs). According to the 2023 World Health Organization (WHO) report, CVDs account for approximately 17.9 million deaths annually. This emphasizes the need for advanced diagnostic tools such as coronary computed tomography angiography (CCTA). The incorporation of deep learning (DL) technologies could significantly improve CCTA analysis by automating the quantification of plaque and stenosis, thus enhancing the precision of cardiac risk assessments. A recent meta-analysis highlights the evolving role of CCTA in patient management, showing that CCTA-guided diagnosis and management reduced adverse cardiac events and improved event-free survival in patients with stable and acute coronary syndromes. An extensive literature search was carried out across electronic databases, including MEDLINE, Embase, and the Cochrane Library, using a strategy that combined Medical Subject Headings (MeSH) terms and pertinent keywords. The review adhered to PRISMA guidelines and focused on studies published between 2019 and 2024 that employed DL for CCTA in patients aged 18 years or older. After applying specific inclusion and exclusion criteria, a total of 10 articles were selected for systematic evaluation of quality and bias. The included studies demonstrated the high diagnostic performance and predictive capabilities of various deep learning models compared to different imaging modalities, highlighting their effectiveness in enhancing diagnostic accuracy. Notably, strong correlations were observed between DL-derived measurements and intravascular ultrasound findings, enhancing clinical decision-making and risk stratification for CAD. Deep learning-enabled CCTA represents a promising advancement in the quantification of coronary plaques and stenosis, facilitating improved cardiac risk prediction and enhancing clinical workflow efficiency. Despite variability in study designs and potential biases, the findings support the integration of DL technologies into routine clinical practice for better patient outcomes in CAD management.

Automated Coronary Artery Segmentation with 3D PSPNET using Global Processing and Patch Based Methods on CCTA Images.

Chachadi K, Nirmala SR, Netrakar PG

PubMed · Jun 1, 2025
Coronary artery disease (CAD) has become a major cause of death worldwide in recent years. Accurate segmentation of the coronary arteries is important in the clinical diagnosis and treatment of CAD, including stenosis detection and plaque analysis. Deep learning techniques have been shown to assist medical experts in diagnosing diseases from biomedical imaging, and many methods employ 2D DL models for medical image segmentation. The 2D Pyramid Scene Parsing Network (PSPNet) has potential in this domain but has not been explored for segmenting coronary arteries from 3D Coronary Computed Tomography Angiography (CCTA) images. The contribution of the present work is a modification of the 2D PSPNet into a 3D PSPNet for segmenting the coronary arteries from 3D CCTA images. A further novel aspect is the evaluation of network performance using both global processing and patch-based processing methods. The experiments achieved a Dice Similarity Coefficient (DSC) of 0.76 for the global processing method and 0.73 for the patch-based method on a subset of 200 images from the ImageCAS dataset.
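
The two evaluation modes can be contrasted with a short sketch: global processing passes the full CCTA volume through the network once, while patch-based processing tiles it with overlapping sub-volumes and averages the overlapping predictions. The patch and stride sizes below are illustrative assumptions, `model` is assumed to map a sub-volume to a same-shaped probability map, and for brevity the sketch does not pad the volume edges:

```python
# Contrast of the two inference modes. Global processing is a single pass:
#     probs = model(volume)
# Patch-based processing tiles the volume with overlapping sub-volumes and
# averages the overlapping predictions.
import numpy as np

def patch_inference(volume, model, patch=(64, 64, 64), stride=(32, 32, 32)):
    out = np.zeros(volume.shape, dtype=float)
    count = np.zeros(volume.shape, dtype=float)
    Z, Y, X = volume.shape
    for z in range(0, max(Z - patch[0], 0) + 1, stride[0]):
        for y in range(0, max(Y - patch[1], 0) + 1, stride[1]):
            for x in range(0, max(X - patch[2], 0) + 1, stride[2]):
                sl = (slice(z, z + patch[0]),
                      slice(y, y + patch[1]),
                      slice(x, x + patch[2]))
                out[sl] += model(volume[sl])
                count[sl] += 1.0
    return out / np.maximum(count, 1.0)   # average over overlapping patches
```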

HResFormer: Hybrid Residual Transformer for Volumetric Medical Image Segmentation.

Ren S, Li X

PubMed · Jun 1, 2025
The Vision Transformer shows great superiority in medical image segmentation due to its ability to learn long-range dependencies. For medical image segmentation from 3-D data, such as computed tomography (CT), existing methods can be broadly classified into 2-D-based and 3-D-based methods. A key limitation of 2-D-based methods is that inter-slice information is ignored, while the limitation of 3-D-based methods is high computation cost and memory consumption, resulting in a limited feature representation of inner-slice information. During clinical examination, radiologists primarily use the axial plane and then routinely review both axial and coronal planes to form a 3-D understanding of the anatomy. Motivated by this, our key insight is to design a hybrid model that first learns fine-grained inner-slice information and then builds a 3-D understanding of anatomy by incorporating inter-slice information. We present a novel Hybrid Residual TransFormer (HResFormer) for 3-D medical image segmentation. Building upon standard 2-D and 3-D Transformer backbones, HResFormer involves two novel key designs: 1) a Hybrid Local-Global fusion Module (HLGM) to effectively and adaptively fuse inner-slice information from the 2-D Transformer and inter-slice information from 3-D volumes for the 3-D Transformer, with local fine-grained and global long-range representation, and 2) residual learning of the hybrid model, which can effectively leverage inner-slice and inter-slice information for a better 3-D understanding of anatomy. Experiments show that HResFormer outperforms prior art on widely used medical image segmentation benchmarks. This article sheds light on an important but neglected way to design Transformers for 3-D medical image segmentation.
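
A schematic reading of the hybrid idea, not the paper's HLGM implementation: per-slice features from a 2-D branch are restacked into a volume and fused with 3-D branch features through a 1×1×1 convolution plus a residual connection. Both backbones are assumptions here, taken to preserve spatial size and emit `channels` feature maps:

```python
# Schematic sketch (not the paper's HLGM): fuse inner-slice features from a
# 2-D branch with inter-slice features from a 3-D branch, residually.
import torch
import torch.nn as nn

class HybridFusion(nn.Module):
    def __init__(self, backbone2d: nn.Module, backbone3d: nn.Module, channels: int):
        super().__init__()
        self.backbone2d, self.backbone3d = backbone2d, backbone3d
        self.fuse = nn.Conv3d(2 * channels, channels, kernel_size=1)

    def forward(self, vol: torch.Tensor) -> torch.Tensor:
        b, c, d, h, w = vol.shape                              # (B, C, D, H, W)
        slices = vol.permute(0, 2, 1, 3, 4).reshape(b * d, c, h, w)
        f2d = self.backbone2d(slices)                          # inner-slice features
        f2d = f2d.reshape(b, d, -1, h, w).permute(0, 2, 1, 3, 4)
        f3d = self.backbone3d(vol)                             # inter-slice features
        return f3d + self.fuse(torch.cat([f2d, f3d], dim=1))   # residual fusion
```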

A Large Convolutional Neural Network for Clinical Target and Multi-organ Segmentation in Gynecologic Brachytherapy with Multi-stage Learning

Mingzhe Hu, Yuan Gao, Yuheng Li, Richard LJ Qiu, Chih-Wei Chang, Keyur D. Shah, Priyanka Kapoor, Beth Bradshaw, Yuan Shao, Justin Roper, Jill Remick, Zhen Tian, Xiaofeng Yang

arXiv preprint · Jun 1, 2025
Purpose: Accurate segmentation of clinical target volumes (CTV) and organs-at-risk is crucial for optimizing gynecologic brachytherapy (GYN-BT) treatment planning. However, anatomical variability, low soft-tissue contrast in CT imaging, and limited annotated datasets pose significant challenges. This study presents GynBTNet, a novel multi-stage learning framework designed to enhance segmentation performance through self-supervised pretraining and hierarchical fine-tuning strategies. Methods: GynBTNet employs a three-stage training strategy: (1) self-supervised pretraining on large-scale CT datasets using sparse submanifold convolution to capture robust anatomical representations, (2) supervised fine-tuning on a comprehensive multi-organ segmentation dataset to refine feature extraction, and (3) task-specific fine-tuning on a dedicated GYN-BT dataset to optimize segmentation performance for clinical applications. The model was evaluated against state-of-the-art methods using the Dice Similarity Coefficient (DSC), 95th percentile Hausdorff Distance (HD95), and Average Surface Distance (ASD). Results: Our GynBTNet achieved superior segmentation performance, significantly outperforming nnU-Net and Swin-UNETR. Notably, it yielded a DSC of 0.837 ± 0.068 for the CTV, 0.940 ± 0.052 for the bladder, 0.842 ± 0.070 for the rectum, and 0.871 ± 0.047 for the uterus, with reduced HD95 and ASD compared to baseline models. Self-supervised pretraining led to consistent performance improvements, particularly for structures with complex boundaries. However, segmentation of the sigmoid colon remained challenging, likely due to anatomical ambiguities and inter-patient variability. Statistical significance analysis confirmed that GynBTNet's improvements were significant compared to baseline models.
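
For reference, a simplified sketch of the HD95 metric used in the evaluation: it extracts mask surfaces by erosion and takes the 95th percentile of symmetric surface-to-surface distances. It assumes non-empty binary masks and isotropic 1 mm voxel spacing, which real evaluations would correct for:

```python
# Simplified HD95: 95th percentile of symmetric surface distances between two
# binary masks. Assumes non-empty masks and isotropic voxel spacing.
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial import cKDTree

def surface_points(mask: np.ndarray) -> np.ndarray:
    mask = mask.astype(bool)
    border = mask & ~binary_erosion(mask)   # voxels lost by a one-voxel erosion
    return np.argwhere(border)

def hd95(pred: np.ndarray, truth: np.ndarray) -> float:
    p, t = surface_points(pred), surface_points(truth)
    d_pt = cKDTree(t).query(p)[0]   # pred surface -> nearest truth surface
    d_tp = cKDTree(p).query(t)[0]   # truth surface -> nearest pred surface
    return float(max(np.percentile(d_pt, 95), np.percentile(d_tp, 95)))
```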

Comparison of Sarcopenia Assessment in Liver Transplant Recipients by Computed Tomography Freehand Region-of-Interest versus an Automated Deep Learning System.

Miller W, Fate K, Fisher J, Thul J, Ko Y, Kim KW, Pruett T, Teigen L

PubMed · Jun 1, 2025
Sarcopenia, the loss of muscle quality and quantity, has been associated with poor clinical outcomes in liver transplantation, such as infection, increased length of stay, and increased patient mortality. Abdominal computed tomography (CT) scans are used to measure core musculature as a measurement of sarcopenia. Core body musculature can be quantified either by freehand region-of-interest (ROI) measurement or by machine learning algorithms that quantitate total muscle within a given area. This study directly compares these two collection methods, leveraging length of stay (LOS) outcomes previously found to be associated with freehand ROI measurements. A total of 50 individuals who underwent liver transplantation at our single center between January 1, 2016, and May 30, 2021, and had a non-contrast abdominal CT scan within 6 months of surgery were included. CT-derived skeletal muscle measures at the third lumbar vertebra (L3) were obtained using freehand ROI and an automated deep learning system. The freehand psoas muscle measures, psoas area index (PAI) and mean Hounsfield units (mHU), were significantly correlated with the automated deep learning system's total skeletal muscle measures at L3, skeletal muscle index (SMI) and skeletal muscle density (SMD), respectively (R² = 0.4221, p < 0.0001; R² = 0.6297, p < 0.0001). The automated deep learning model's SMI predicted ~20% of the variability in hospital length of stay (R² = 0.2013), while PAI predicted only about 10% of the variability in total healthcare length of stay (R² = 0.0919). In contrast, both the freehand ROI mHU and the automated deep learning model's muscle density variables were associated with ~20% of the variability in inpatient length of stay (R² = 0.2383 and 0.1810, respectively) and total healthcare length of stay (R² = 0.2190 and 0.1947, respectively). Sarcopenia measurements represent an important risk stratification tool for liver transplantation outcomes. For association with LOS, freehand measures of sarcopenia perform similarly to automated deep learning system measurements.
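
A minimal sketch of the correlation analysis reported, fitting the automated SMI against the freehand PAI by ordinary least squares and reporting R²; the paired arrays are hypothetical values, not study data:

```python
# Sketch: OLS correlation of automated vs. freehand muscle measures, with R².
import numpy as np
from scipy.stats import linregress

pai = np.array([6.1, 7.4, 5.8, 8.2, 6.9])        # freehand psoas area index
smi = np.array([41.0, 48.3, 39.5, 52.1, 45.2])   # automated skeletal muscle index

fit = linregress(pai, smi)
print(f"R² = {fit.rvalue ** 2:.4f}, p = {fit.pvalue:.4g}")
```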

Measurement of adipose body composition using an artificial intelligence-based CT Protocol and its association with severe acute pancreatitis in hospitalized patients.

Cortés P, Mistretta TA, Jackson B, Olson CG, Al Qady AM, Stancampiano FF, Korfiatis P, Klug JR, Harris DM, Dan Echols J, Carter RE, Ji B, Hardway HD, Wallace MB, Kumbhari V, Bi Y

PubMed · Jun 1, 2025
The clinical utility of body composition in predicting the severity of acute pancreatitis (AP) remains unclear. We aimed to measure body composition using artificial intelligence (AI) to predict severe AP in hospitalized patients. We performed a retrospective study of patients hospitalized with AP at three tertiary care centers in 2018. Patients with computed tomography (CT) imaging of the abdomen at admission were included. A fully automated and validated abdominal segmentation algorithm was used for body composition analysis. The primary outcome was severe AP, defined as persistent single- or multi-organ failure per the revised Atlanta classification. A total of 352 patients were included, and severe AP occurred in 35 (9.9%). In multivariable analysis adjusting for male sex and first episode of AP, intermuscular adipose tissue (IMAT) was associated with severe AP (OR = 1.06 per 5 cm², p = 0.0207). Subcutaneous adipose tissue (SAT) area approached significance (OR = 1.05, p = 0.17). Neither visceral adipose tissue (VAT) nor skeletal muscle (SM) was associated with severe AP. In obese patients, higher SM was associated with severe AP in unadjusted analysis (86.7 vs. 75.1 and 70.3 cm² in moderate and mild AP, respectively; p = 0.009). In this multi-site retrospective study using AI to measure body composition, we found elevated IMAT to be associated with severe AP. Although SAT was non-significant for severe AP, it approached statistical significance. Neither VAT nor SM was significant. Further research in larger prospective studies may be beneficial.
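
A sketch of the multivariable analysis described, assuming a pandas DataFrame `df` with hypothetical column names: IMAT is rescaled so the fitted odds ratio is per 5 cm², with male sex and first episode as covariates:

```python
# Hedged sketch: logistic regression of severe AP on IMAT, adjusted for male
# sex and first episode. Column names are hypothetical, not the study's schema.
import numpy as np
import statsmodels.api as sm

def imat_odds_ratio(df):
    X = df[["imat_cm2", "male", "first_episode"]].copy()
    X["imat_cm2"] = X["imat_cm2"] / 5.0        # one unit = 5 cm² of IMAT
    X = sm.add_constant(X)
    fit = sm.Logit(df["severe_ap"], X).fit(disp=0)
    odds_ratio = float(np.exp(fit.params["imat_cm2"]))
    return odds_ratio, float(fit.pvalues["imat_cm2"])
```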