Page 9 of 99986 results

Assessing the spatial relationship between mandibular third molars and the inferior alveolar canal using a deep learning-based approach: a proof-of-concept study.

Lyu W, Lou S, Huang J, Huang Z, Zheng H, Liao H, Qiao Y, OuYang K

PubMed · Aug 6, 2025
The distance between the mandibular third molar (M3) and the mandibular canal (MC) is a key factor in assessing the risk of injury to the inferior alveolar nerve (IAN). However, existing deep learning systems have not yet been able to accurately quantify the M3-MC distance in 3D space. The aim of this study was to develop and validate a deep learning-based system for accurate measurement of M3-MC spatial relationships in cone-beam computed tomography (CBCT) images and to evaluate its accuracy against conventional methods. We propose an innovative approach for low-resource environments, using DeeplabV3 + for semantic segmentation of CBCT-extracted 2D images, followed by multi-category 3D reconstruction and visualization. Based on the reconstruction model, we applied the KD-Tree algorithm to measure the spatial minimum distance between M3 and MC. Through internal validation with randomly selected CBCT images, we compared the differences between the AI system, conventional measurement methods on the CBCT, and the gold standard measured by senior experts. Statistical analysis was performed using one-way ANOVA with Tukey HSD post-hoc tests (p < 0.05), employing multiple error metrics for comprehensive evaluation. One-way ANOVA revealed significant differences among measurement methods. Subsequent Tukey HSD post-hoc tests showed significant differences between the AI reconstruction model and conventional methods. The measurement accuracy of the AI system compared to the gold standard was 0.19 for mean error (ME), 0.18 for mean absolute error (MAE), 0.69 for mean square error (MSE), 0.83 for root mean square error (RMSE), and 0.96 for coefficient of determination (R<sup>2</sup>) (p < 0.01). These results indicate that the proposed AI system is highly accurate and reliable in M3-MC distance measurement and provides a powerful tool for preoperative risk assessment of M3 extraction.
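The core measurement step, finding the minimum distance between the reconstructed M3 and MC point clouds, can be sketched as follows. This is a minimal numpy illustration with made-up toy points, not the authors' implementation; the paper accelerates the same nearest-neighbour query with a KD-Tree.

```python
import numpy as np

def min_surface_distance(points_a, points_b):
    """Minimum Euclidean distance between two 3D point clouds.

    Brute force O(n*m) for clarity; a KD-tree built over one cloud
    (e.g. scipy.spatial.cKDTree) answers the same nearest-neighbour
    queries in O(log m) each, which is what makes dense
    reconstruction models practical."""
    diffs = points_a[:, None, :] - points_b[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(axis=-1)).min())

# Toy example: two small clouds separated by 2.0 along x.
m3 = np.array([[0.0, 0, 0], [1, 0, 0], [1, 1, 1]])
mc = np.array([[3.0, 0, 0], [4, 0, 0], [3, 1, 1]])
print(min_surface_distance(m3, mc))  # 2.0
```

The nearest pair here is (1, 0, 0) and (3, 0, 0), so the minimum distance is 2.0.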

Conditional Fetal Brain Atlas Learning for Automatic Tissue Segmentation

Johannes Tischer, Patric Kienast, Marlene Stümpflen, Gregor Kasprian, Georg Langs, Roxane Licandro

arXiv preprint · Aug 6, 2025
Magnetic Resonance Imaging (MRI) of the fetal brain has become a key tool for studying brain development in vivo. Yet, its assessment remains challenging due to variability in brain maturation, imaging protocols, and uncertain estimates of Gestational Age (GA). To overcome these challenges, brain atlases provide a standardized reference framework that facilitates objective evaluation and comparison across subjects by aligning the atlas and subjects in a common coordinate system. In this work, we introduce a novel deep-learning framework for generating continuous, age-specific fetal brain atlases for real-time fetal brain tissue segmentation. The framework combines a direct registration model with a conditional discriminator and is trained on a curated dataset of 219 neurotypical fetal MRIs spanning 21 to 37 weeks of gestation. The method achieves high registration accuracy, captures dynamic anatomical changes with sharp structural detail, and delivers robust segmentation performance with an average Dice Similarity Coefficient (DSC) of 86.3% across six brain tissues. Furthermore, volumetric analysis of the generated atlases reveals detailed neurotypical growth trajectories, providing valuable insights into the maturation of the fetal brain. This approach enables individualized developmental assessment with minimal pre-processing and real-time performance, supporting both research and clinical applications. The model code is available at https://github.com/cirmuw/fetal-brain-atlas
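The Dice Similarity Coefficient reported above is a standard overlap metric; a minimal numpy sketch (toy masks, not the paper's evaluation code):

```python
import numpy as np

def dice(pred, target):
    """Dice Similarity Coefficient between two binary masks:
    2|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, target).sum() / denom

pred   = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(pred, target))  # 2*2 / (3+3) ≈ 0.667
```

A per-tissue DSC is obtained by applying this to each class's binary mask and averaging, which is how multi-tissue scores like the 86.3% above are typically aggregated.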

EATHOA: Elite-evolved hiking algorithm for global optimization and precise multi-thresholding image segmentation in intracerebral hemorrhage images.

Abdel-Salam M, Houssein EH, Emam MM, Samee NA, Gharehchopogh FS, Bacanin N

PubMed · Aug 6, 2025
Intracerebral hemorrhage (ICH) is a life-threatening condition caused by bleeding in the brain, with high mortality rates, particularly in the acute phase. Accurate diagnosis through medical image segmentation plays a crucial role in early intervention and treatment. However, existing segmentation methods, such as region-growing, clustering, and deep learning, face significant limitations when applied to complex images like ICH, especially in multi-threshold image segmentation (MTIS). As the number of thresholds increases, these methods often become computationally expensive and exhibit degraded segmentation performance. To address these challenges, this paper proposes an Elite-Adaptive-Turbulent Hiking Optimization Algorithm (EATHOA), an enhanced version of the Hiking Optimization Algorithm (HOA), specifically designed for high-dimensional and multimodal optimization problems like ICH image segmentation. EATHOA integrates three novel strategies including Elite Opposition-Based Learning (EOBL) for improving population diversity and exploration, Adaptive k-Average-Best Mutation (AKAB) for dynamically balancing exploration and exploitation, and a Turbulent Operator (TO) for escaping local optima and enhancing the convergence rate. Extensive experiments were conducted on the CEC2017 and CEC2022 benchmark functions to evaluate EATHOA's global optimization performance, where it consistently outperformed other state-of-the-art algorithms. The proposed EATHOA was then applied to solve the MTIS problem in ICH images at six different threshold levels. EATHOA achieved peak values of PSNR (34.4671), FSIM (0.9710), and SSIM (0.8816), outperforming recent methods in segmentation accuracy and computational efficiency. These results demonstrate the superior performance of EATHOA and its potential as a powerful tool for medical image analysis, offering an effective and computationally efficient solution for the complex challenges of ICH image segmentation.
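In multi-threshold image segmentation, a candidate threshold set is scored by quantising the image and measuring fidelity metrics such as the PSNR reported above; that score is the fitness an optimizer like EATHOA maximizes. A minimal numpy sketch under those assumptions (random toy image, mean-intensity quantisation; not the authors' objective function):

```python
import numpy as np

def apply_thresholds(img, thresholds):
    """Quantise a grayscale image: pixels between consecutive
    thresholds are replaced by the mean intensity of that band."""
    edges = [0] + sorted(thresholds) + [256]
    out = np.zeros_like(img, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = (img >= lo) & (img < hi)
        if band.any():
            out[band] = img[band].mean()
    return out

def psnr(original, segmented):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(float) - segmented) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
seg = apply_thresholds(img, [64, 128, 192])
print(round(psnr(img, seg), 2))
```

Because band means minimize per-band squared error, adding thresholds can only raise (or keep) the PSNR, which is why the abstract notes degraded performance and rising cost as the threshold count grows: the search space, not the metric, becomes the bottleneck.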

Development of a deep learning based approach for multi-material decomposition in spectral CT: a proof of principle in silico study.

Rajagopal JR, Rapaka S, Farhadi F, Abadi E, Segars WP, Nowak T, Sharma P, Pritchard WF, Malayeri A, Jones EC, Samei E, Sahbaee P

PubMed · Aug 6, 2025
Conventional approaches to material decomposition in spectral CT face challenges related to precise algorithm calibration across imaged conditions and low signal quality caused by variable object size and reduced dose. In this proof-of-principle study, a deep learning approach to multi-material decomposition was developed to quantify iodine, gadolinium, and calcium in spectral CT. A dual-phase network architecture was trained using synthetic datasets containing computational models of cylindrical and virtual patient phantoms. Classification and quantification performance was evaluated across a range of patient size and dose parameters. The model was found to accurately classify (accuracy: cylinders - 98%, virtual patients - 97%) and quantify materials (mean absolute percentage difference: cylinders - 8-10%, virtual patients - 10-15%) in both datasets. Performance in virtual patient phantoms improved as the hybrid training dataset included a larger contingent of virtual patient phantoms (accuracy: 48% with 0 virtual patients to 97% with 8 virtual patients). For both datasets, the algorithm was able to maintain strong performance under challenging conditions of large patient size and reduced dose. This study shows the validity of a deep-learning based approach to multi-material decomposition trained with in-silico images that can overcome the limitations of conventional material decomposition approaches.
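The conventional baseline the deep model replaces is a linear inversion of the spectral forward model: measured attenuation per energy bin is approximately a mass-attenuation matrix times the material concentrations. A minimal numpy sketch, with entirely hypothetical attenuation coefficients and concentrations (noiseless, so the inversion is exact):

```python
import numpy as np

# Hypothetical mass-attenuation matrix: rows = energy bins,
# columns = basis materials (iodine, gadolinium, calcium).
A = np.array([
    [5.2, 7.1, 1.9],
    [3.4, 4.0, 1.2],
    [2.1, 2.4, 0.8],
    [1.5, 1.6, 0.6],
])

true_conc = np.array([2.0, 0.5, 3.0])  # made-up concentrations per material
measured = A @ true_conc               # simulated noiseless spectral signal

# Conventional decomposition: least-squares inversion of the forward model.
est, *_ = np.linalg.lstsq(A, measured, rcond=None)
print(np.round(est, 3))  # recovers [2.0, 0.5, 3.0]
```

With noise and size-dependent beam hardening this inversion degrades, which is the calibration and low-signal weakness the abstract says motivates the learned approach.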

Towards a zero-shot low-latency navigation for open surgery augmented reality applications.

Schwimmbeck M, Khajarian S, Auer C, Wittenberg T, Remmele S

PubMed · Aug 5, 2025
Augmented reality (AR) enhances surgical navigation by superimposing visible anatomical structures with three-dimensional virtual models using head-mounted displays (HMDs). In particular, interventions such as open liver surgery can benefit from AR navigation, as it aids in identifying and distinguishing tumors and risk structures. However, there is a lack of automatic and markerless methods that are robust against real-world challenges, such as partial occlusion and organ motion. We introduce a novel multi-device approach for automatic live navigation in open liver surgery that enhances the visualization and interaction capabilities of a HoloLens 2 HMD through precise and reliable registration using an Intel RealSense RGB-D camera. The intraoperative RGB-D segmentation and the preoperative CT data are utilized to register a virtual liver model to the target anatomy. An AR-prompted Segment Anything Model (SAM) enables robust segmentation of the liver in situ without the need for additional training data. To mitigate algorithmic latency, Double Exponential Smoothing (DES) is applied to forecast registration results. We conducted a phantom study for open liver surgery, investigating various scenarios of liver motion, viewpoints, and occlusion. The mean registration errors (8.31-18.78 mm TRE) are comparable to those reported in prior work, while our approach demonstrates high success rates even for high occlusion factors and strong motion. Using forecasting, we bypassed the algorithmic latency of 79.8 ms per frame, with median forecasting errors below 2 mm and 1.5 degrees between the quaternions. To our knowledge, this is the first work to approach markerless in situ visualization by combining a multi-device method with forecasting and a foundation model for segmentation and tracking. This enables a more reliable and precise AR registration of surgical targets with low latency. Our approach can be applied to other surgical applications and AR hardware with minimal effort.
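Double Exponential Smoothing, the forecasting method named above, tracks a level and a trend and extrapolates ahead to hide processing latency. A minimal sketch with made-up smoothing constants and a toy pose series (the paper's actual parameters and state representation are not given in the abstract):

```python
def des_forecast(series, alpha=0.5, beta=0.5, horizon=1):
    """Double Exponential Smoothing (Holt's linear method): maintains
    a level and a trend estimate, then extrapolates `horizon` steps
    ahead, e.g. to compensate for a known per-frame latency."""
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend

# A perfectly linear signal (one pose coordinate over time) is
# forecast exactly, whatever the smoothing constants.
poses = [float(t) for t in range(10)]
print(des_forecast(poses, horizon=2))  # 11.0
```

For 6-DoF registration the same filter would be run per translation axis, with rotations handled separately (the abstract reports quaternion-angle forecasting errors), since quaternions cannot be smoothed component-wise without renormalization.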

Nutritional impact of leucine-enriched supplements: evaluating protein type through artificial intelligence (AI)-augmented muscle ultrasonography in hypercaloric, hyperproteic support.

López Gómez JJ, Gutiérrez JG, Jauregui OI, Cebriá Á, Asensio LE, Martín DP, Velasco PF, Pérez López P, Sahagún RJ, Bargues DR, Godoy EJ, de Luis Román DA

PubMed · Aug 5, 2025
Malnutrition adversely affects physical function and body composition in patients with chronic diseases. Leucine supplementation has shown benefits in improving body composition and clinical outcomes. This study aimed to evaluate the effects of a leucine-enriched oral nutritional supplement (ONS) on the nutritional status of patients at risk of malnutrition. This prospective observational study followed two cohorts of malnourished patients receiving personalized nutritional interventions over 3 months. One group received a leucine-enriched oral supplement (20% protein, 100% whey, 3 g leucine), while the other received a standard supplement (hypercaloric and normo-hyperproteic) with mixed protein sources. Nutritional status was assessed at baseline and after 3 months using anthropometry, bioelectrical impedance analysis, AI-assisted muscle ultrasound, and handgrip strength. RESULTS: A total of 142 patients were included (76 Leucine-ONS, 66 Standard-ONS), mostly women (65.5%), with a mean age of 62.00 (18.66) years. Malnutrition was present in 90.1% and sarcopenia in 34.5%. Cancer was the most common condition (30.3%). The Leucine-ONS group showed greater improvements in phase angle (+2.08% vs. -1.57%; p=0.02) and rectus femoris thickness (+1.72% vs. -5.89%; p=0.03). Multivariate analysis confirmed associations between Leucine-ONS and improved phase angle (OR=2.41; 95%CI: 1.18-4.92; p=0.02) and reduced intramuscular fat (OR=2.24; 95%CI: 1.13-4.46; p=0.02). The leucine-enriched ONS significantly improved phase angle and muscle thickness compared to the standard ONS, supporting its role in enhancing body composition in malnourished patients. These results must be interpreted in the context of the study's observational design, the heterogeneity of the comparison groups, and the short duration of the intervention. Further randomized controlled trials are needed to confirm these results and assess long-term clinical and functional outcomes.

GRASPing Anatomy to Improve Pathology Segmentation

Keyi Li, Alexander Jaus, Jens Kleesiek, Rainer Stiefelhagen

arXiv preprint · Aug 5, 2025
Radiologists rely on anatomical understanding to accurately delineate pathologies, yet most current deep learning approaches use pure pattern recognition and ignore the anatomical context in which pathologies develop. To narrow this gap, we introduce GRASP (Guided Representation Alignment for the Segmentation of Pathologies), a modular plug-and-play framework that enhances pathology segmentation models by leveraging existing anatomy segmentation models through pseudolabel integration and feature alignment. Unlike previous approaches that obtain anatomical knowledge via auxiliary training, GRASP integrates into standard pathology optimization regimes without retraining anatomical components. We evaluate GRASP on two PET/CT datasets, conduct systematic ablation studies, and investigate the framework's inner workings. We find that GRASP consistently achieves top rankings across multiple evaluation metrics and diverse architectures. The framework's dual anatomy injection strategy, combining anatomical pseudo-labels as input channels with transformer-guided anatomical feature fusion, effectively incorporates anatomical context.
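One of the two anatomy-injection routes described above, feeding anatomy pseudo-labels to the pathology model as extra input channels, can be sketched in a few lines. This is a hedged numpy illustration of the general idea (shapes and class counts are made up; GRASP's actual tensor layout is not specified in the abstract):

```python
import numpy as np

def with_anatomy_channels(image, anatomy_pseudolabel, n_classes):
    """Stack one-hot anatomy pseudo-labels onto the image as extra
    input channels, so a pathology segmentation network sees the
    anatomical context each voxel belongs to.
    Shapes: image (C, H, W); pseudo-label (H, W) of integer classes."""
    one_hot = (np.arange(n_classes)[:, None, None] == anatomy_pseudolabel)
    return np.concatenate([image, one_hot.astype(image.dtype)], axis=0)

img = np.zeros((1, 4, 4))                 # single-channel toy scan
labels = np.random.randint(0, 3, (4, 4))  # anatomy classes 0..2
x = with_anatomy_channels(img, labels, n_classes=3)
print(x.shape)  # (4, 4, 4): 1 image channel + 3 anatomy channels
```

The appeal of this route, as the abstract emphasizes, is that the anatomy model is used frozen: only the pathology network's first layer needs to accept the widened channel dimension.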

A novel lung cancer diagnosis model using hybrid convolution (2D/3D)-based adaptive DenseUnet with attention mechanism.

Deepa J, Badhu Sasikala L, Indumathy P, Jerrin Simla A

PubMed · Aug 5, 2025
Existing Lung Cancer Diagnosis (LCD) models have difficulty detecting early-stage lung cancer because the disease is asymptomatic, which increases patient mortality. It is therefore important to diagnose lung disease at an early stage to save the lives of affected persons. Hence, this work develops an efficient lung disease diagnosis using deep learning techniques for the early and accurate detection of lung cancer. Initially, the proposed model collects the required CT images from standard benchmark datasets. Lung cancer segmentation is then performed with the developed Hybrid Convolution (2D/3D)-based Adaptive DenseUnet with Attention mechanism (HC-ADAM). Hybrid Sewing Training with Spider Monkey Optimization (HSTSMO) is introduced to optimize the parameters of the HC-ADAM segmentation approach. Finally, the segmented lung nodule images are passed to the lung cancer classification stage, where the Hybrid Adaptive Dilated Networks with Attention mechanism (HADN-AM), a serial cascade of ResNet and Long Short-Term Memory (LSTM), is implemented for better categorization performance. The accuracy, precision, and F1-score of the developed model on the LIDC-IDRI dataset are 96.3%, 96.38%, and 96.36%, respectively.
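The accuracy, precision, and F1-score reported above follow the standard confusion-matrix definitions; a minimal sketch with made-up counts (not the paper's data):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = classification_metrics(tp=90, fp=10, tn=85, fn=15)
print(round(acc, 3), round(prec, 3), round(f1, 3))  # 0.875 0.9 0.878
```

F1 is the harmonic mean of precision and recall, which is why the three reported scores being nearly identical (96.3-96.4%) implies precision and recall were themselves well balanced.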

Real-time 3D US-CT fusion-based semi-automatic puncture robot system: clinical evaluation.

Nakayama M, Zhang B, Kuromatsu R, Nakano M, Noda Y, Kawaguchi T, Li Q, Maekawa Y, Fujie MG, Sugano S

PubMed · Aug 5, 2025
Conventional systems supporting percutaneous radiofrequency ablation (PRFA) have faced difficulties in ensuring safe and accurate puncture due to issues inherent to the medical images used and organ displacement caused by patients' respiration. To address this problem, this study proposes a semi-automatic puncture robot system that integrates real-time ultrasound (US) images with computed tomography (CT) images. The purpose of this paper is to evaluate the system's usefulness through a pilot clinical experiment involving participants. For the clinical experiment, an improved U-net model based on fivefold cross-validation was constructed. Following the workflow of the proposed system, the model was trained using US images acquired from patients via the robotic arm. The average Dice coefficient for the entire validation dataset was 0.87, so the model was implemented in the robotic system and applied to the clinical experiment. The clinical experiment was conducted with the developed AI model on five adult male and female participants. The centroid distances between the point clouds from each modality were evaluated in the 3D US-CT fusion process, assuming the blood vessel centerline represents the overall structural position. The centroid distances showed a minimum of 0.38 mm, a maximum of 4.81 mm, and an average of 1.97 mm. Although the five participants had different Child-Pugh (CP) classifications and the derived US images exhibited individual variability, all centroid distances satisfied the 5.00 mm ablation margin considered in PRFA, suggesting the potential accuracy and utility of the robotic system for puncture navigation. The results also suggest the generalization potential of an AI model trained with data acquired according to the robotic system's workflow.
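The centroid-distance evaluation described above reduces each modality's vessel-centerline point cloud to its centroid and measures the offset between them. A minimal numpy sketch with toy coordinates (the real clouds and the 5.00 mm margin come from the study; the points here are made up):

```python
import numpy as np

def centroid_distance(cloud_a, cloud_b):
    """Distance between the centroids of two 3D point clouds, used as
    a coarse check of US-CT fusion alignment."""
    return float(np.linalg.norm(cloud_a.mean(axis=0) - cloud_b.mean(axis=0)))

us_centerline = np.array([[0.0, 0, 0], [2, 0, 0], [4, 0, 0]])
ct_centerline = us_centerline + np.array([1.2, 0.9, 0.0])  # simulated offset
d = centroid_distance(us_centerline, ct_centerline)
print(round(d, 2), d <= 5.0)  # 1.5 True
```

Note this is a global measure: it can understate local misalignment when errors cancel across the cloud, which is presumably why the study frames it as representing the "overall structural position" rather than a point-wise registration error.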

Brain tumor segmentation by optimizing deep learning U-Net model.

Asiri AA, Hussain L, Irfan M, Mehdar KM, Awais M, Alelyani M, Alshuhri M, Alghamdi AJ, Alamri S, Nadeem MA

PubMed · Aug 5, 2025
Background: Magnetic Resonance Imaging (MRI) is a cornerstone in diagnosing brain tumors. However, the complex nature of these tumors makes accurate segmentation in MRI images a demanding task. Objective: To develop and evaluate a novel UNet-based architecture for improved brain tumor segmentation in MRI images; accurate segmentation remains a critical challenge in medical image analysis, with early detection crucial for improving patient outcomes. Methods: The proposed UNet architecture incorporates Leaky ReLU activation, batch normalization, and regularization to enhance training and performance, and uses varying numbers of layers and kernel sizes to capture different levels of detail. To address class imbalance in medical image segmentation, we employ focal loss and generalized Dice (GDL) loss functions. Results: Evaluated on the BraTS'2020 dataset, the model achieved an accuracy of 99.64% and Dice coefficients of 0.8984, 0.8431, and 0.8824 for the necrotic core, edema, and enhancing tumor regions, respectively. Conclusion: These findings demonstrate the efficacy of our approach in accurately segmenting tumors, with the potential to enhance diagnostic systems and improve patient outcomes.
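The generalized Dice loss mentioned above weights each class by its inverse squared volume, so rare classes (like enhancing tumor) are not drowned out by background. A minimal numpy sketch of the standard GDL formulation, not necessarily the authors' exact implementation (shapes and the toy masks are made up):

```python
import numpy as np

def generalized_dice_loss(probs, onehot, eps=1e-6):
    """Generalized Dice loss over (classes, voxels) arrays: per-class
    Dice terms weighted by 1 / (class volume)^2, which counters the
    class imbalance typical of tumor segmentation."""
    w = 1.0 / (onehot.sum(axis=1) ** 2 + eps)
    intersect = (w * (probs * onehot).sum(axis=1)).sum()
    union = (w * (probs + onehot).sum(axis=1)).sum()
    return 1.0 - 2.0 * intersect / (union + eps)

# Two classes over four voxels; class 1 is 3x rarer, so its weight
# is 9x larger. A perfect prediction drives the loss toward 0.
onehot = np.array([[1.0, 1, 1, 0],
                   [0.0, 0, 0, 1]])
print(round(generalized_dice_loss(onehot, onehot), 6))  # ≈ 0.0
```

In practice this is typically combined with a voxel-wise term such as focal loss, as the abstract describes, so the optimizer gets both region-level and pixel-level gradients.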
