Page 3 of 33324 results

Digital removal of dermal denticle layer using geometric AI from 3D CT scans of shark craniofacial structures enhances anatomical precision.

Kim SW, Yuen AHL, Kim HW, Lee S, Lee SB, Lee YM, Jung WJ, Poon CTC, Park D, Kim S, Kim SG, Kang JW, Kwon J, Jo SJ, Giri SS, Park H, Seo JP, Kim DS, Kim BY, Park SC

PubMed · Jun 4, 2025
Craniofacial morphometrics in sharks provide crucial insights into evolutionary history, geographical variation, sexual dimorphism, and developmental patterns. However, the fragile cartilaginous nature of the shark craniofacial skeleton poses significant challenges for traditional specimen preparation, often resulting in damaged cranial landmarks and compromised measurement accuracy. While computed tomography (CT) offers a non-invasive alternative for anatomical observation, the high electron density of dermal denticles in sharks creates a unique challenge, obstructing clear visualization of internal structures in three-dimensional volume-rendered images (3DVRI). This study presents an artificial intelligence (AI)-based solution using machine-learning algorithms for digitally removing the dermal denticle layer from CT scans of the shark craniofacial skeleton. We developed geometric AI-driven software (SKINPEELER) that selectively removes high-intensity voxels corresponding to the dermal denticle layer while preserving underlying anatomical structures. We evaluated this approach using CT scans from 20 sharks (16 Carcharhinus brachyurus, 2 Alopias vulpinus, 1 Sphyrna lewini, and 1 Prionace glauca), applying our AI-driven software to process the Digital Imaging and Communications in Medicine (DICOM) images. The processed scans were reconstructed using bone reconstruction algorithms to enable precise craniofacial measurements. We assessed the accuracy of our method by comparing measurements from the processed 3DVRIs with traditional manual measurements. The AI-assisted approach demonstrated high accuracy (86.16-98.52%) relative to manual measurements. Additionally, we evaluated reproducibility and repeatability using intraclass correlation coefficients (ICC), finding high reproducibility (ICC: 0.456-0.998) and repeatability (ICC: 0.985-1.000 for operator 1 and 0.882-0.999 for operator 2).
Our results indicate that this AI-enhanced digital denticle removal technique, combined with 3D CT reconstruction, provides a reliable and non-destructive alternative to traditional specimen preparation methods for investigating shark craniofacial morphology. This novel approach enhances measurement precision while preserving specimen integrity, potentially advancing various aspects of shark research including evolutionary studies, conservation efforts, and anatomical investigations.
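The core idea of the abstract above — removing high-intensity voxels confined to a thin outer shell of the specimen while preserving deeper structures — can be illustrated with a toy sketch. SKINPEELER itself is a geometric AI model; the intensity threshold, shell depth, and synthetic volume below are purely illustrative assumptions:

```python
import numpy as np

def erode(mask, iterations):
    """Binary erosion with a 6-connected structuring element (numpy-only)."""
    inner = mask.copy()
    for _ in range(iterations):
        nb = inner.copy()
        for ax in (0, 1, 2):
            for shift in (1, -1):
                nb &= np.roll(inner, shift, axis=ax)
        inner = nb
    return inner

def peel_denticles(volume, body_mask, hu_threshold=700, shell_depth=4):
    """Suppress high-intensity voxels lying within a thin outer shell of the
    specimen, approximating removal of a dermal denticle layer."""
    shell = body_mask & ~erode(body_mask, shell_depth)   # thin outer rind
    denticles = shell & (volume >= hu_threshold)         # bright surface voxels
    peeled = volume.copy()
    peeled[denticles] = volume[body_mask & ~denticles].min()  # set to background
    return peeled, denticles

# Toy specimen: a cartilage-like ball wrapped in a bright denticle-like skin.
zz, yy, xx = np.ogrid[:40, :40, :40]
r = np.sqrt((zz - 20) ** 2 + (yy - 20) ** 2 + (xx - 20) ** 2)
body = r <= 15
vol = np.zeros((40, 40, 40))
vol[body] = 300                # cartilage-like intensity
vol[body & (r >= 14)] = 1200   # dense surface shell
peeled, removed = peel_denticles(vol, body)
```

Because the bright shell is both superficial (caught by the erosion-based shell mask) and high-intensity (caught by the threshold), it is removed while the equally connected interior cartilage survives.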

Upper Airway Volume Predicts Brain Structure and Cognition in Adolescents.

Kanhere A, Navarathna N, Yi PH, Parekh VS, Pickle J, Cloak CC, Ernst T, Chang L, Li D, Redline S, Isaiah A

PubMed · Jun 3, 2025
One in ten children experiences sleep-disordered breathing (SDB). Untreated SDB is associated with poor cognition, but the underlying mechanisms remain incompletely understood. We assessed the relationship between magnetic resonance imaging (MRI)-derived upper airway volume and children's cognition and regional cortical gray matter volumes. We used five-year data from the Adolescent Brain Cognitive Development study (n=11,875 children, 9-10 years at baseline). Upper airway volumes were derived using a deep learning model applied to 5,552,640 brain MRI slices. The primary outcome was the Total Cognition Composite score from the National Institutes of Health Toolbox (NIH-TB). Secondary outcomes included other NIH-TB measures and cortical gray matter volumes. The habitual snoring group had significantly smaller airway volumes than non-snorers (mean difference=1.2 cm<sup>3</sup>; 95% CI, 1.0-1.4 cm<sup>3</sup>; P<0.001). Deep learning-derived airway volume predicted the Total Cognition Composite score (estimated mean difference=3.68 points; 95% CI, 2.41-4.96; P<0.001) per one-unit increase in the natural log of airway volume (~2.7-fold raw volume increase). This airway volume increase was also associated with an average 0.02 cm<sup>3</sup> increase in right temporal pole volume (95% CI, 0.01-0.02 cm<sup>3</sup>; P<0.001). Airway volume similarly predicted most NIH-TB domain scores and multiple frontal and temporal gray matter volumes. These brain volumes mediated the relationship between airway volume and cognition. We demonstrate a novel application of deep learning-based airway segmentation in a large pediatric cohort. Upper airway volume is a potential biomarker for cognitive outcomes in pediatric SDB, offers insights into neurobiological mechanisms, and informs future studies on risk stratification.
This article is open access and distributed under the terms of the Creative Commons Attribution Non-Commercial No Derivatives License 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Comparisons of AI automated segmentation techniques to manual segmentation techniques of the maxilla and maxillary sinus for CT or CBCT scans: a systematic review.

Park JH, Hamimi M, Choi JJE, Figueredo CMS, Cameron MA

PubMed · Jun 3, 2025
Accurate segmentation of the maxillary sinus from medical images is essential for diagnostic purposes and surgical planning. Manual segmentation of the maxillary sinus, while the gold standard, is time-consuming and requires adequate training. To overcome this problem, AI-enabled automatic segmentation software has been developed. The purpose of this review is to systematically analyse the current literature and investigate the accuracy and efficiency of automatic segmentation of the maxillary sinus compared with manual segmentation. A systematic analysis of the existing literature was performed following PRISMA guidelines. Data for this study were obtained from the Pubmed, Medline, Embase, and Google Scholar databases, and inclusion and exclusion eligibility criteria were used to shortlist relevant studies. The sample size, anatomical structures segmented, operator experience, manual segmentation software, automatic segmentation software, statistical comparative method, and segmentation time were analysed. This systematic review presents 10 studies that compared the accuracy and efficiency of automatic segmentation of the maxillary sinus with manual segmentation. All included studies were found to have a low risk of bias. Sample sizes ranged from 3 to 144, a variety of operators manually segmented the CBCT scans, and manual segmentation was performed primarily in 3D Slicer and Mimics software. Comparisons were primarily made against U-Net-based architectures, with the Dice coefficient as the primary means of comparison. This systematic review showed that automatic segmentation was consistently faster than manual segmentation and over 90% accurate when compared with the gold standard of manual segmentation.

Effect of contrast enhancement on diagnosis of interstitial lung abnormality in automatic quantitative CT measurement.

Choi J, Ahn Y, Kim Y, Noh HN, Do KH, Seo JB, Lee SM

PubMed · Jun 3, 2025
To investigate the effect of contrast enhancement on the diagnosis of interstitial lung abnormalities (ILA) by automatic quantitative CT measurement in patients with paired pre- and post-contrast scans. Patients who underwent chest CT for thoracic surgery between April 2017 and December 2020 were retrospectively analyzed. ILA quantification was performed using deep learning-based automated software. Cases were categorized as ILA or non-ILA according to the Fleischner Society's definition, based on the quantification results or radiologist assessment (reference standard). Measurement variability, agreement, and diagnostic performance between the pre- and post-contrast scans were evaluated. In the 1134 included patients, post-contrast scans quantified a slightly larger volume of nonfibrotic ILA (mean difference: -0.2%), due to increased ground-glass opacity and reticulation volumes (-0.2% and -0.1%), whereas the fibrotic ILA volume remained unchanged (0.0%). ILA was diagnosed in 15 (1.3%), 22 (1.9%), and 40 (3.5%) patients by pre-contrast scans, post-contrast scans, and radiologists, respectively. The agreement between the pre- and post-contrast scans was substantial (κ = 0.75), but both pre-contrast (κ = 0.46) and post-contrast (κ = 0.54) scans demonstrated only moderate agreement with the radiologist. The sensitivity for ILA (32.5% vs. 42.5%, p = 0.221) and specificity for non-ILA (99.8% vs. 99.5%, p = 0.248) were comparable between pre- and post-contrast scans. Reclassification of equivocal ILA due to unilateral abnormalities by the radiologist increased the sensitivity for ILA in both pre- and post-contrast scans (to 67.5% and 75.0%, respectively). Applying automated quantification to post-contrast scans appears acceptable in terms of agreement and diagnostic performance; however, radiologists may need to reclassify equivocal ILA to improve sensitivity.
Question: The effect of contrast enhancement on the automated quantification of interstitial lung abnormality (ILA) remains unknown.
Findings: Automated quantification measured slightly larger ground-glass opacity and reticulation volumes on post-contrast scans than on pre-contrast scans; however, contrast enhancement did not affect the sensitivity for interstitial lung abnormality.
Clinical relevance: Applying automated quantification to post-contrast scans appears acceptable in terms of agreement and diagnostic performance.
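The agreement figures in the abstract above are Cohen's kappa values, which correct observed agreement for chance. A minimal sketch of the computation from a 2x2 contingency table (the cell counts below are invented for illustration, chosen only to be consistent with the reported 15 pre-contrast and 22 post-contrast ILA diagnoses in 1134 patients):

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa from a square contingency table of paired ratings."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_observed = np.trace(table) / n                       # raw agreement
    p_expected = (table.sum(axis=1) * table.sum(axis=0)).sum() / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Invented cross-tabulation of pre- vs. post-contrast ILA calls.
#                  post: non-ILA  post: ILA
table = [[1110,          9],      # pre: non-ILA
         [   2,         13]]      # pre: ILA
print(round(cohens_kappa(table), 2))
```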

Deep learning-based automatic segmentation of arterial vessel walls and plaques in MR vessel wall images for quantitative assessment.

Yang L, Yang X, Gong Z, Mao Y, Lu SS, Zhu C, Wan L, Huang J, Mohd Noor MH, Wu K, Li C, Cheng G, Li Y, Liang D, Liu X, Zheng H, Hu Z, Zhang N

PubMed · Jun 3, 2025
To develop and validate a deep-learning-based automatic method for segmentation of vessel walls and atherosclerotic plaques for quantitative evaluation in MR vessel wall images. A total of 193 patients (107 for training and validation, 39 for internal test, 47 for external test) with atherosclerotic plaque from five centers underwent T1-weighted MRI scans and were included in the dataset. The first step of the proposed method constructs a purely learning-based convolutional neural network (CNN) named Vessel-SegNet to segment the lumen and the vessel wall. The second step uses vessel wall priors (including a manual prior and a Tversky-loss-based automatic prior) to improve the plaque segmentation, exploiting the morphological similarity between the vessel wall and the plaque. The Dice similarity coefficient (DSC), intraclass correlation coefficient (ICC), etc., were used to evaluate similarity, agreement, and correlations. Most of the DSCs for lumen and vessel wall segmentation were above 90%. The introduction of vessel wall priors increased the DSC for plaque segmentation by over 10%, reaching 88.45%. Compared to Dice-loss-based vessel wall priors, the Tversky-loss-based priors further improved DSC by nearly 3%, reaching 82.84%. Most of the ICC values between the Vessel-SegNet and manual methods across the 6 quantitative measurements were greater than 85% (p-value < 0.001). The proposed CNN-based segmentation model can quickly and accurately segment vessel walls and plaques for quantitative evaluation. Because testing with other equipment, populations, and anatomical sites is lacking, the reliability of the results requires further exploration.
Question: How can the accuracy and efficiency of vessel component segmentation for quantification, including the lumen, vessel wall, and plaque, be improved?
Findings: Improved CNN models, manual/automatic vessel wall priors, and Tversky loss can improve the performance of semi-automatic/automatic vessel component segmentation for quantification.
Clinical relevance: Manual segmentation of vessel components is a time-consuming yet important process. Rapid and accurate segmentation of the lumen, vessel walls, and plaques for quantitative assessment helps patients obtain more accurate, efficient, and timely stroke risk assessments and clinical recommendations.
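The Tversky loss used for the automatic prior above generalizes the Dice loss by weighting false positives and false negatives asymmetrically, which is useful when small structures like plaques are easily missed. A minimal numpy sketch (the alpha/beta values and toy arrays are illustrative, not the paper's settings):

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-7):
    """Tversky loss: alpha weights false positives, beta false negatives.
    alpha = beta = 0.5 recovers the Dice loss."""
    pred, target = pred.ravel(), target.ravel()
    tp = (pred * target).sum()
    fp = (pred * (1 - target)).sum()
    fn = ((1 - pred) * target).sum()
    return 1 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

pred = np.array([0.9, 0.8, 0.2, 0.1])    # soft predictions for 4 voxels
target = np.array([1.0, 1.0, 1.0, 0.0])  # ground truth
# With beta > alpha, the largely missed foreground voxel (third entry)
# is penalized more heavily than a false alarm of the same size would be.
loss_tversky = tversky_loss(pred, target)
loss_dice = tversky_loss(pred, target, alpha=0.5, beta=0.5)
print(loss_tversky > loss_dice)
```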

Deep Learning Pipeline for Automated Assessment of Distances Between Tonsillar Tumors and the Internal Carotid Artery.

Jain A, Amanian A, Nagururu N, Creighton FX, Prisman E

PubMed · Jun 3, 2025
Evaluating the minimum distance (dTICA) between the internal carotid artery (ICA) and tonsillar tumors (TT) on imaging is essential for preoperative planning; we propose a tool to automatically extract dTICA. CT scans of 96 patients with TT were selected from The Cancer Imaging Archive. nnU-Net, a deep learning framework, was implemented to automatically segment both the TT and ICA from these scans. The Dice similarity coefficient (DSC) and average Hausdorff distance (AHD) were used to evaluate the performance of the nnU-Net. Thereafter, an automated tool was built to calculate the magnitude of dTICA from these segmentations. The average DSC and AHD were 0.67 and 2.44 mm for the TT, and 0.83 and 0.49 mm for the ICA, respectively. The mean dTICA was 6.66 mm and varied significantly by tumor T stage (p = 0.00456). The proposed pipeline can accurately and automatically capture dTICA, potentially assisting clinicians in preoperative evaluation.
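The DSC and AHD metrics reported above can be computed directly from binary masks. A minimal 2D sketch (the study evaluates 3D CT segmentations; the 8x8 masks here are toy data, and the AHD is computed over all foreground points rather than boundary points for brevity):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum())

def average_hausdorff(a, b):
    """Symmetric average Hausdorff distance between two point sets,
    here all foreground pixels, in pixel units."""
    pa = np.argwhere(a)
    pb = np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return (d.min(axis=1).mean() + d.min(axis=0).mean()) / 2

a = np.zeros((8, 8), bool); a[2:6, 2:6] = True  # reference mask
b = np.zeros((8, 8), bool); b[3:7, 2:6] = True  # prediction shifted by one row
print(round(dice(a, b), 2))
```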

Co-Evidential Fusion with Information Volume for Medical Image Segmentation

Yuanpeng He, Lijian Li, Tianxiang Zhan, Chi-Man Pun, Wenpin Jiao, Zhi Jin

arXiv preprint · Jun 3, 2025
Although existing semi-supervised image segmentation methods have achieved good performance, they cannot effectively utilize multiple sources of voxel-level uncertainty for targeted learning. Therefore, we propose two main improvements. First, we introduce a novel pignistic co-evidential fusion strategy using generalized evidential deep learning, extended by traditional D-S evidence theory, to obtain a more precise uncertainty measure for each voxel in medical samples. This assists the model in learning mixed labeled information and establishing semantic associations between labeled and unlabeled data. Second, we introduce the concept of information volume of mass function (IVUM) to evaluate the constructed evidence, implementing two evidential learning schemes. One optimizes evidential deep learning by combining the information volume of the mass function with original uncertainty measures. The other integrates the learning pattern based on the co-evidential fusion strategy, using IVUM to design a new optimization objective. Experiments on four datasets demonstrate the competitive performance of our method.
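The D-S evidence theory underlying the fusion strategy above combines per-voxel mass functions with Dempster's rule, redistributing conflicting mass so that agreeing sources reinforce each other. A minimal sketch over a two-class frame with an explicit "unknown" mass (the frame and mass values are illustrative; this is plain Dempster combination, not the paper's pignistic co-evidential fusion or IVUM formulation):

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the frame {fg, bg}, each with an
    'unknown' mass on the full frame, via Dempster's rule."""
    conflict = m1['fg'] * m2['bg'] + m1['bg'] * m2['fg']
    k = 1 - conflict  # normalization: discard conflicting mass
    fused = {
        'fg': (m1['fg'] * m2['fg'] + m1['fg'] * m2['unknown']
               + m1['unknown'] * m2['fg']) / k,
        'bg': (m1['bg'] * m2['bg'] + m1['bg'] * m2['unknown']
               + m1['unknown'] * m2['bg']) / k,
    }
    fused['unknown'] = m1['unknown'] * m2['unknown'] / k
    return fused

# Two weak, agreeing sources of evidence that one voxel is foreground.
m1 = {'fg': 0.6, 'bg': 0.1, 'unknown': 0.3}
m2 = {'fg': 0.5, 'bg': 0.2, 'unknown': 0.3}
fused = dempster_combine(m1, m2)
print(fused['fg'] > max(m1['fg'], m2['fg']))  # agreement strengthens belief
```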

A first-of-its-kind two-body statistical shape model of the arthropathic shoulder: enhancing biomechanics and surgical planning.

Blackman J, Giles JW

PubMed · Jun 3, 2025
Statistical Shape Models are machine learning tools in computational orthopedics that enable the study of anatomical variability and the creation of synthetic models for pathogenetic analysis and surgical planning. Current models of the glenohumeral joint either describe individual bones or are limited to non-pathologic datasets, failing to capture coupled shape variation in arthropathic anatomy. We aimed to develop a novel combined scapula-proximal-humerus model applicable to clinical populations. Preoperative computed tomography scans from 45 Reverse Total Shoulder Arthroplasty patients were used to generate three-dimensional models of the scapula and proximal humerus. Correspondence point clouds were combined into a two-body shape model using Principal Component Analysis. Individual scapula-only and proximal-humerus-only shape models were also created for comparison. The models were validated using compactness, specificity, generalization ability, and leave-one-out cross-validation. The modes of variation for each model were also compared. The combined model was described using eigenvector decomposition into single body models. The models were further compared in their ability to predict the shape of one body when given the shape of its counterpart, and the generation of diverse realistic synthetic pairs de novo. The scapula and proximal-humerus models performed comparably to previous studies with median average leave-one-out cross-validation errors of 1.08 mm (IQR: 0.359 mm), and 0.521 mm (IQR: 0.111 mm); the combined model was similar with median error of 1.13 mm (IQR: 0.239 mm). The combined model described coupled variations between the shapes equalling 43.2% of their individual variabilities, including the relationship between glenoid and humeral head erosions. The combined model outperformed the individual models generatively with reduced missing shape prediction bias (> 10%) and uniformly diverse shape plausibility (uniformity p-value < .001 vs. .59). 
This study developed the first two-body scapulohumeral shape model that captures coupled variations in arthropathic shoulder anatomy and the first proximal-humeral statistical model constructed using a clinical dataset. While single-body models are effective for descriptive tasks, combined models excel in generating joint-level anatomy. This model can be used to augment computational analyses of synthetic populations investigating shoulder biomechanics and surgical planning.
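A statistical shape model of the kind described above is built by stacking corresponding landmark coordinates into shape vectors and applying Principal Component Analysis; new plausible anatomy is then synthesized from the mean shape plus weighted modes. A minimal sketch with fabricated random "shapes" (a real model would use aligned correspondence point clouds extracted from the CT meshes):

```python
import numpy as np

rng = np.random.default_rng(0)

# 45 training "shapes", each 100 correspondence points in 3D, flattened.
n_shapes, n_points = 45, 100
shapes = rng.normal(size=(n_shapes, n_points * 3))

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape

# PCA via SVD of the centered data matrix; rows of vt are the modes.
u, s, vt = np.linalg.svd(centered, full_matrices=False)
variances = s ** 2 / (n_shapes - 1)  # variance explained by each mode

# Synthesize a new plausible shape from the first k modes.
k = 5
weights = rng.normal(size=k) * np.sqrt(variances[:k])
new_shape = mean_shape + weights @ vt[:k]
print(new_shape.shape)  # (300,)
```

The two-body model in the study applies the same machinery to the concatenated scapula and humerus point clouds, so the leading modes capture coupled variation across both bones.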

Patient-specific prostate segmentation in kilovoltage images for radiation therapy intrafraction monitoring via deep learning.

Mylonas A, Li Z, Mueller M, Booth JT, Brown R, Gardner M, Kneebone A, Eade T, Keall PJ, Nguyen DT

PubMed · Jun 3, 2025
During radiation therapy, the natural movement of organs can lead to underdosing the cancer and overdosing the healthy tissue, compromising treatment efficacy. Real-time image-guided adaptive radiation therapy can track the tumour and account for the motion. Typically, fiducial markers are implanted as a surrogate for the tumour position due to the low radiographic contrast of soft tissues in kilovoltage (kV) images. A segmentation approach that does not require markers would eliminate the costs, delays, and risks associated with marker implantation. We trained patient-specific conditional Generative Adversarial Networks for prostate segmentation in kV images. The networks were trained using synthetic kV images generated from each patient's own imaging and planning data, which are available prior to the commencement of treatment. We validated the networks on two treatment fractions from 30 patients using multi-centre data from two clinical trials. Here, we present a large-scale proof-of-principle study of x-ray-based markerless prostate segmentation for globally available cancer therapy systems. Our results demonstrate the feasibility of a deep learning approach using kV images to track prostate motion across the entire treatment arc for 30 patients with prostate cancer. The mean absolute deviation is 1.4 and 1.6 mm in the anterior-posterior/lateral and superior-inferior directions, respectively. Markerless segmentation via deep learning may enable real-time image guidance on conventional cancer therapy systems without requiring implanted markers or additional hardware, thereby expanding access to real-time adaptive radiation therapy.

A Novel Deep Learning Framework for Nipple Segmentation in Digital Mammography.

Rogozinski M, Hurtado J, Sierra-Franco CA, R Hall Barbosa C, Raposo A

PubMed · Jun 3, 2025
This study introduces a novel methodology to enhance nipple segmentation in digital mammography, a critical component for accurate medical analysis and computer-aided detection systems. The nipple is a key anatomical landmark for multi-view and multi-modality breast image registration, where accurate localization is vital for ensuring image quality and enabling precise registration of anomalies across different mammographic views. The proposed approach significantly outperforms baseline methods, particularly in challenging cases where previous techniques failed. It achieved successful detection across all cases and reached a mean Intersection over Union (mIoU) of 0.63 in instances where the baseline failed entirely. Additionally, it yielded nearly a tenfold improvement in Hausdorff distance and consistent gains in overlap-based metrics, with the mIoU increasing from 0.7408 to 0.8011 in the craniocaudal (CC) view and from 0.7488 to 0.7767 in the mediolateral oblique (MLO) view. Furthermore, its generalizability suggests the potential for application to other breast imaging modalities and related domains facing challenges such as class imbalance and high variability in object characteristics.
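The mIoU metric reported above is the Intersection over Union averaged across cases; for a single pair of binary masks it is computed as follows (a minimal sketch on toy 2D masks):

```python
import numpy as np

def iou(pred, target):
    """Intersection over union of two binary masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

a = np.zeros((10, 10), bool); a[2:6, 2:6] = True  # ground-truth region
b = np.zeros((10, 10), bool); b[4:8, 4:8] = True  # overlapping prediction
print(iou(a, b))
```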