An open deep learning-based framework and model for tooth instance segmentation in dental CBCT.

Zhou Y, Xu Y, Khalil B, Nalley A, Tarce M

PubMed · Sep 25 2025
Current dental CBCT segmentation tools often lack accuracy, accessibility, or comprehensive anatomical coverage. To address this, we constructed a densely annotated dental CBCT dataset and developed a deep learning model, OralSeg, for tooth-level instance segmentation, deployed as a one-click tool and made freely accessible for non-commercial use. We established a standardized annotated dataset covering 35 key oral anatomical structures and employed UNETR as the backbone network, combining a Swin Transformer with a spatial Mamba module for multi-scale residual feature fusion. The OralSeg model was designed and optimized for precise instance segmentation of dental CBCT images and integrated into the 3D Slicer platform, providing a graphical user interface for one-click segmentation. OralSeg achieved a Dice similarity coefficient of 0.8316 ± 0.0305 on CBCT instance segmentation, outperforming SwinUNETR and 3D U-Net. The model significantly improves segmentation performance, especially in complex oral anatomical structures such as apical areas, alveolar bone margins, and mandibular nerve canals. The OralSeg model presented in this study provides an effective solution for instance segmentation of dental CBCT images. The tool allows clinical dentists and researchers with no AI background to perform one-click segmentation and may be applicable in various clinical and research contexts. OralSeg offers researchers and clinicians a user-friendly tool for tooth-level instance segmentation, which may assist in clinical diagnosis, educational training, and research, and contribute to the broader adoption of digital dentistry in precision medicine.
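For readers unfamiliar with the headline metric: the Dice similarity coefficient reported above compares a predicted mask A with a ground-truth mask B as 2|A∩B|/(|A|+|B|). A minimal NumPy sketch (our illustration, not the authors' code), extended to per-tooth scoring over an instance-labeled volume:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def per_instance_dice(pred_labels: np.ndarray, gt_labels: np.ndarray) -> dict:
    """Per-tooth Dice on integer instance maps (0 = background)."""
    scores = {}
    for label in np.unique(gt_labels):
        if label == 0:
            continue
        scores[int(label)] = dice_coefficient(pred_labels == label,
                                              gt_labels == label)
    return scores
```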

A Deep Learning-Based Fully Automated Vertebra Segmentation and Labeling Workflow.

Lu H, Liu M, Yu K, Fang Y, Zhao J, Shi Y

PubMed · Sep 25 2025
<b>Aims/Background</b> Spinal disorders, such as herniated discs and scoliosis, are highly prevalent conditions with rising incidence in the aging global population. Accurate analysis of spinal anatomical structures is a critical prerequisite for achieving high-precision positioning with surgical navigation robots. However, traditional manual segmentation methods are limited by low efficiency and poor consistency. This work aims to develop a fully automated deep learning-based vertebral segmentation and labeling workflow to provide efficient and accurate preoperative analysis support for spine surgery navigation robots. <b>Methods</b> In the localization stage, the You Only Look Once version 7 (YOLOv7) network predicts bounding boxes of individual vertebrae on computed tomography (CT) sagittal slices, transforming the 3D localization problem into a 2D one. The Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm then aggregates the 2D detection results into 3D vertebral centers. This approach significantly reduces inference time and enhances localization accuracy. In the segmentation stage, a 3D U-Net model integrated with an attention mechanism was trained using a region of interest (ROI) centered on each vertebral center as input, effectively extracting the 3D structural features of vertebrae to achieve precise segmentation. In the labeling stage, a vertebra labeling network combining ResNet and Transformer architectures, which extract rich intervertebral features, produces the final labels through post-processing based on positional logic analysis. To verify the effectiveness of this workflow, experiments were conducted on 106 spinal CT scans sourced from various devices, covering a wide range of clinical scenarios. <b>Results</b> The method performed excellently in the three key tasks of localization, segmentation, and labeling, with a mean localization error (MLE) of 1.42 mm. Segmentation accuracy metrics included a Dice similarity coefficient (DSC) of 0.968 ± 0.014, intersection over union (IoU) of 0.879 ± 0.018, pixel accuracy (PA) of 0.988 ± 0.005, mean symmetric distance (MSD) of 1.09 ± 0.19 mm, and Hausdorff distance (HD) of 5.42 ± 2.05 mm. Labeling accuracy reached 94.36%. <b>Conclusion</b> These quantitative assessments and visualizations confirm the effectiveness of our method across vertebra localization, segmentation, and labeling, indicating its potential for deployment in spinal surgery navigation robots to provide accurate and efficient preoperative analysis and navigation support for spinal surgeries.
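The localization stage described above (per-slice 2D detections aggregated into 3D vertebral centers with DBSCAN) can be sketched with scikit-learn as follows; the detection format and clustering parameters are our assumptions, not values from the paper:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def aggregate_detections_to_3d(detections, eps=5.0, min_samples=3):
    """Cluster per-slice 2D box centers into 3D vertebral centers.

    detections: list of (slice_index, x_center, y_center) tuples, where
    x/y come from 2D bounding boxes predicted on sagittal slices.
    Returns one 3D center (mean of cluster members) per vertebra.
    """
    points = np.array([(x, y, s) for s, x, y in detections], dtype=float)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    centers = []
    for label in set(labels):
        if label == -1:  # DBSCAN noise points, i.e., spurious detections
            continue
        centers.append(points[labels == label].mean(axis=0))
    return np.array(centers)
```

A practical note on this design: detections of the same vertebra recur on neighboring sagittal slices, so they form dense clusters in (x, y, slice) space, while isolated false positives are rejected as DBSCAN noise.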

Artificial Intelligence-Led Whole Coronary Artery OCT Analysis; Validation and Identification of Drug Efficacy and Higher-Risk Plaques.

Jessney B, Chen X, Gu S, Huang Y, Goddard M, Brown A, Obaid D, Mahmoudi M, Garcia Garcia HM, Hoole SP, Räber L, Prati F, Schönlieb CB, Roberts M, Bennett M

PubMed · Sep 25 2025
Intracoronary optical coherence tomography (OCT) can identify changes following drug/device treatment and high-risk plaques, but analysis requires expert clinician or core laboratory interpretation, while artifacts and limited sampling markedly impair reproducibility. Assistive technologies such as artificial intelligence-based analysis may therefore aid both detailed OCT interpretation and patient management. We determined whether artificial intelligence-based OCT analysis (AutoOCT) can rapidly process, optimize, and analyze OCT images, and identify plaque composition changes that predict drug success/failure and high-risk plaques. AutoOCT deep learning modules were designed to correct segmentation errors from poor-quality or artifact-containing OCT images; identify tissue/plaque composition; classify plaque types; measure multiple parameters including lumen area, lipid and calcium arcs, and fibrous cap thickness; and output segmented images and clinically useful parameters. Model development used 36 212 frames (127 whole pullbacks, 106 patients). Internal validation of tissue and plaque classification and measurements used ex vivo OCT pullbacks from autopsy arteries, while external validation for plaque stabilization and identification of high-risk plaques used core laboratory analysis of the IBIS-4 (Integrated Biomarkers and Imaging Study-4) high-intensity statin study (83 patients) and the CLIMA (Relationship Between Coronary Plaque Morphology of Left Anterior Descending Artery and Long-Term Clinical Outcome) study (62 patients), respectively. AutoOCT recovered images containing common artifacts, with tissue and plaque classification and measurement accuracy of 83% versus histology, equivalent to expert clinician readers. AutoOCT replicated core laboratory plaque composition changes after high-intensity statin therapy, including reduced lesion lipid arc (13.3° versus 12.5°) and increased minimum fibrous cap thickness (18.9 µm versus 24.4 µm). AutoOCT also identified high-risk plaque features leading to patient events, including minimal lumen area <3.5 mm<sup>2</sup>, lipid arc >180°, and fibrous cap thickness <75 µm, similar to the CLIMA core laboratory. AutoOCT-based analysis of whole coronary artery OCT identifies tissue and plaque types and measures features correlating with plaque stabilization and high-risk plaques. Artificial intelligence-based OCT analysis may augment clinician or core laboratory analysis of intracoronary OCT images for trials of drug/device efficacy and identification of high-risk lesions.
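The high-risk thresholds quoted above translate directly into a rule-based check; a minimal sketch, with field names of our own choosing:

```python
from dataclasses import dataclass

@dataclass
class PlaqueMeasurements:
    min_lumen_area_mm2: float    # minimal lumen area (mm^2)
    lipid_arc_deg: float         # maximal lipid arc (degrees)
    min_cap_thickness_um: float  # minimum fibrous cap thickness (um)

def high_risk_features(m: PlaqueMeasurements) -> list[str]:
    """Flag the CLIMA-style high-risk features cited in the abstract."""
    flags = []
    if m.min_lumen_area_mm2 < 3.5:
        flags.append("MLA < 3.5 mm^2")
    if m.lipid_arc_deg > 180.0:
        flags.append("lipid arc > 180 deg")
    if m.min_cap_thickness_um < 75.0:
        flags.append("fibrous cap < 75 um")
    return flags

# Example: a lesion meeting all three criteria.
print(high_risk_features(PlaqueMeasurements(3.1, 195.0, 62.0)))
```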

Automated segmentation of brain metastases in magnetic resonance imaging using deep learning in radiotherapy.

Zhang R, Liu Y, Li M, Jin A, Chen C, Dai Z, Zhang W, Jia L, Peng P

PubMed · Sep 25 2025
Brain metastases (BMs) are the most common intracranial tumors, and stereotactic radiotherapy has improved the quality of life of patients with BMs; however, precise delineation of BMs by oncologists demands considerable time and experience. Deep learning techniques have shown promising applications in radiation oncology. We therefore propose a deep learning-based method for automatic segmentation of primary tumor volumes of BMs. Magnetic resonance imaging (MRI) scans of 158 eligible patients with BMs were retrospectively collected. An automatic segmentation model called BUC-Net, based on U-Net with a cascade strategy and a bottleneck module, was proposed for auto-segmentation of BMs. The model was evaluated using geometric metrics (Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and average surface distance (ASD)) for segmentation performance; precision-recall (PR) and receiver operating characteristic (ROC) curves for detection performance; and relative volume difference (RVD) for clinical evaluation. Compared with U-Net and U-Net Cascade, BUC-Net achieved average DSC of 0.912 and 0.797, HD95 of 0.901 mm and 0.922 mm, and ASD of 0.332 mm and 0.210 mm for binary and multi-class segmentation, respectively. For tumor detection, the average area under the curve (AUC) was 0.934 for the PR curve and 0.835 for the ROC curve. BUC-Net also achieved the lowest RVD across various tumor diameter ranges in the clinical evaluation. BUC-Net can complete segmentation and modification of BMs for one patient within 10 min, instead of the 3-6 h required for conventional manual delineation, markedly improving the efficiency and accuracy of radiation therapy.
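HD95, one of the geometric metrics reported above, is the 95th percentile of symmetric surface distances between prediction and ground truth. A brute-force NumPy/SciPy sketch (illustrative only, not the authors' implementation; a KD-tree is preferable for large volumes):

```python
import numpy as np
from scipy import ndimage

def surface_points(mask: np.ndarray) -> np.ndarray:
    """Voxel coordinates on the boundary of a binary mask."""
    mask = mask.astype(bool)
    eroded = ndimage.binary_erosion(mask)
    return np.argwhere(mask & ~eroded)

def hd95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric Hausdorff distance, in mm given voxel spacing."""
    p = surface_points(pred) * np.asarray(spacing)
    g = surface_points(gt) * np.asarray(spacing)
    # Distance from each point on one surface to the nearest point on the other.
    # O(n^2) memory; use scipy.spatial.cKDTree for large surfaces.
    d_pg = np.min(np.linalg.norm(p[:, None, :] - g[None, :, :], axis=-1), axis=1)
    d_gp = np.min(np.linalg.norm(g[:, None, :] - p[None, :, :], axis=-1), axis=1)
    return float(np.percentile(np.concatenate([d_pg, d_gp]), 95))
```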

CACTUS: Multiview classifier for Punctate White Matter Lesions detection & segmentation in cranial ultrasound volumes.

Estermann F, Kaftandjian V, Guy P, Quetin P, Delachartre P

PubMed · Sep 25 2025
Punctate white matter lesions (PWML) are the most common white matter injuries found in preterm neonates, with several studies indicating a connection between these lesions and negative long-term outcomes. Automated detection of PWML through ultrasound (US) imaging could assist clinicians in diagnosis more effectively and at a lower cost than MRI. However, this task is highly challenging because of the lesions' small size and low contrast, and because the number of lesions varies significantly between subjects. In this work, we propose a two-phase approach: (1) segmentation using a vision transformer to increase the lesion detection rate; (2) multi-view classification leveraging cross-attention to reduce false positives and enhance precision. We also investigate multiple postprocessing approaches to ensure prediction quality and compare our results with what is known in MRI. Our method demonstrates improved performance in PWML detection on US images, achieving recall and precision of 0.84 and 0.70, respectively, an increase of 2% and 10% over the best published US models. Moreover, by reducing the task to a slightly simpler problem (detection of MRI-visible PWML), the model achieves 0.82 recall and 0.89 precision, equivalent to the latest method in MRI.
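The multi-view cross-attention idea in phase (2) — tokens from one view attending to tokens from another before classification — can be sketched in PyTorch as follows; dimensions and module layout are our own assumptions, not the CACTUS architecture:

```python
import torch
import torch.nn as nn

class CrossViewClassifier(nn.Module):
    """Toy two-view classifier: view A tokens attend to view B tokens."""
    def __init__(self, dim=128, heads=4, num_classes=2):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, view_a: torch.Tensor, view_b: torch.Tensor):
        # view_a, view_b: (batch, tokens, dim) feature sequences per view.
        fused, _ = self.cross_attn(query=view_a, key=view_b, value=view_b)
        fused = self.norm(view_a + fused)    # residual connection
        return self.head(fused.mean(dim=1))  # pooled logits per candidate

# Smoke test on random candidate-lesion features from two US views.
logits = CrossViewClassifier()(torch.randn(2, 16, 128), torch.randn(2, 16, 128))
```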

PHASE: Personalized Head-based Automatic Simulation for Electromagnetic Properties in 7T MRI.

Lu Z, Liang H, Lu M, Martin D, Hardy BM, Dawant BM, Wang X, Yan X, Huo Y

PubMed · Sep 25 2025
Accurate and individualized human head models are becoming increasingly important for electromagnetic (EM) simulations. These simulations depend on precise anatomical representations to realistically model electric and magnetic field distributions, particularly when evaluating specific absorption rate (SAR) within safety guidelines. State-of-the-art simulations rely on the Virtual Population due to limited public resources and the impracticality of manually annotating patient data at scale. This paper introduces Personalized Head-based Automatic Simulation for EM properties (PHASE), an automated open-source toolbox that generates high-resolution, patient-specific head models with 14 tissue labels for EM simulations, using paired T1-weighted (T1w) magnetic resonance imaging (MRI) and computed tomography (CT) scans. To evaluate the performance of PHASE models, we conducted semi-automated segmentation and EM simulations on 15 real human patients, serving as the gold-standard reference. The PHASE model achieved comparable global SAR and localized SAR averaged over 10 grams of tissue (SAR-10g), demonstrating its potential as a promising tool for generating large-scale human model datasets in the future. The code and models of the PHASE toolbox are publicly available: https://github.com/hrlblab/PHASE.
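For context, point SAR is conventionally computed as σ|E|²/(2ρ) for a peak-amplitude field, with σ the tissue conductivity and ρ the mass density; a minimal NumPy sketch (the SAR-10g mass-averaging step defined in IEEE/IEC 62704-1 is omitted here):

```python
import numpy as np

def point_sar(e_field_peak: np.ndarray, sigma: np.ndarray, rho: np.ndarray):
    """Point SAR (W/kg): sigma * |E|^2 / (2 * rho) for peak field amplitude.

    e_field_peak: complex E-field components on a voxel grid, shape (..., 3)
    sigma: conductivity (S/m); rho: mass density (kg/m^3), same grid shape.
    """
    e_sq = np.sum(np.abs(e_field_peak) ** 2, axis=-1)
    return sigma * e_sq / (2.0 * rho)

# SAR-10g additionally requires averaging over ~10 g cubes of tissue around
# each voxel (IEEE/IEC 62704-1); tissue labels map each voxel to sigma/rho.
```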

SA<sup>2</sup>Net: Scale-adaptive structure-affinity transformation for spine segmentation from ultrasound volume projection imaging.

Xie H, Huang Z, Zuo Y, Ju Y, Leung FHF, Law NF, Lam KM, Zheng YP, Ling SH

PubMed · Sep 25 2025
Spine segmentation based on ultrasound volume projection imaging (VPI) plays a vital role in intelligent scoliosis diagnosis in clinical applications. However, this task faces several significant challenges. First, the global contextual knowledge of spines may not be well learned if we neglect the high spatial correlation of different bone features. Second, the spine bones contain rich structural knowledge regarding their shapes and positions, which deserves to be encoded into the segmentation process. To address these challenges, we propose a novel scale-adaptive structure-aware network (SA<sup>2</sup>Net) for effective spine segmentation. First, we propose a scale-adaptive complementary strategy to learn cross-dimensional long-distance correlation features for spinal images. Second, motivated by the consistency between multi-head self-attention in Transformers and semantic-level affinity, we propose a structure-affinity transformation that transforms semantic features with class-specific affinity, combined with a Transformer decoder for structure-aware reasoning. In addition, we adopt a feature-mixing loss aggregation method to enhance model training, improving the robustness and accuracy of the segmentation process. Experimental results demonstrate that SA<sup>2</sup>Net achieves superior segmentation performance compared with other state-of-the-art methods. Moreover, the adaptability of SA<sup>2</sup>Net to various backbones enhances its potential as a promising tool for advanced scoliosis diagnosis using intelligent spinal image analysis.
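The affinity notion invoked above — reading self-attention weights as pairwise token affinities and using them to propagate class-specific features — can be sketched as follows; all shapes and the propagation step are our illustrative assumptions, not the SA<sup>2</sup>Net design:

```python
import torch

def token_affinity(features: torch.Tensor) -> torch.Tensor:
    """Pairwise affinity between spatial tokens via scaled dot product.

    features: (batch, tokens, dim). Returns (batch, tokens, tokens) with
    rows summing to 1, analogous to a single self-attention head's weights.
    """
    dim = features.shape[-1]
    scores = features @ features.transpose(1, 2) / dim ** 0.5
    return scores.softmax(dim=-1)

def propagate(class_features: torch.Tensor, affinity: torch.Tensor):
    """Refine class-specific features by mixing tokens with high affinity."""
    return affinity @ class_features  # (batch, tokens, dim)

feats = torch.randn(1, 64, 32)
refined = propagate(feats, token_affinity(feats))
```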

Clinical deployment and prospective validation of an AI model for limb-length discrepancy measurements using an open-source platform.

Tsai A, Samal S, Lamonica P, Morris N, McNeil J, Pienaar R

PubMed · Sep 24 2025
To deploy an AI model to measure limb-length discrepancy (LLD) and prospectively validate its performance, we encoded the inference of an LLD AI model into a Docker container, incorporated it into a computational platform for clinical deployment, and conducted two prospective validation studies: a shadow trial (07/2024-09/2024) and a clinical trial (11/2024-01/2025). During each trial period, we queried for LLD EOS scanograms to serve as inputs to our model. For the shadow trial, we hid the AI-annotated outputs from the radiologists; for the clinical trial, we displayed the AI-annotated output to the radiologists at the time of study interpretation. Afterward, we collected the bilateral femoral and tibial lengths from the radiology reports and compared them against those generated by the AI model. We used median absolute difference (MAD) and interquartile range (IQR) as summary statistics to assess the performance of our model. Our shadow trial consisted of 84 EOS scanograms from 84 children, with 168 femoral and tibial lengths; the MAD (IQR) of the femoral and tibial lengths were 0.2 cm (0.3 cm) and 0.2 cm (0.3 cm), respectively. Our clinical trial consisted of 114 EOS scanograms from 114 children, with 228 femoral and tibial lengths; the MAD (IQR) of the femoral and tibial lengths were 0.3 cm (0.4 cm) and 0.2 cm (0.3 cm), respectively. We successfully employed a computational platform for seamless integration and deployment of an LLD AI model into our clinical workflow, and prospectively validated its performance. Question: No AI models have been clinically deployed for limb-length discrepancy (LLD) assessment in children, and the prospective validation of these models is unknown. Findings: We deployed an LLD AI model using a homegrown platform, with prospective trials showing a median absolute difference of 0.2-0.3 cm in estimating bone lengths. Clinical relevance: An LLD AI model with performance comparable to that of radiologists can serve as a secondary reader, increasing the confidence and accuracy of LLD measurements.
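The MAD/IQR summary statistics used in both trials are straightforward to reproduce; a minimal NumPy sketch with hypothetical lengths (the values below are invented for illustration):

```python
import numpy as np

def mad_iqr(ai_lengths: np.ndarray, report_lengths: np.ndarray):
    """Median absolute difference and IQR between AI and report lengths (cm)."""
    abs_diff = np.abs(ai_lengths - report_lengths)
    mad = np.median(abs_diff)
    q1, q3 = np.percentile(abs_diff, [25, 75])
    return mad, q3 - q1

# Hypothetical femoral lengths (cm) from the AI model and radiology reports.
ai = np.array([44.1, 46.3, 43.8, 45.0])
report = np.array([44.3, 46.0, 44.2, 45.1])
print(mad_iqr(ai, report))
```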

A novel hybrid deep learning model for segmentation and Fuzzy Res-LeNet-based classification for Alzheimer's disease.

R S, Maganti S, Akundi SH

PubMed · Sep 24 2025
Alzheimer's disease (AD) is a progressive illness that can cause behavioural abnormalities, personality changes, and memory loss. Early detection helps with future planning for both the affected person and caregivers. Thus, an innovative hybrid deep learning (DL) method is introduced for the segmentation and classification of AD, with classification performed by a Fuzzy Res-LeNet model. First, an input magnetic resonance imaging (MRI) image is obtained from the database. Image preprocessing is then performed with a bilateral filter (BF) to enhance image quality by denoising. Segmentation is carried out by the proposed O-SegUNet, which integrates the O-SegNet and U-Net models using Pearson correlation coefficient-based fusion. After segmentation, augmentation is carried out using the Synthetic Minority Oversampling Technique (SMOTE) to address class imbalance, followed by feature extraction. Finally, AD classification is performed by the Fuzzy Res-LeNet, which is devised by integrating fuzzy logic, ResNeXt, and LeNet. The stages are classified as Mild Cognitive Impairment (MCI), AD, Cognitively Normal (CN), Early Mild Cognitive Impairment (EMCI), and Late Mild Cognitive Impairment (LMCI). The proposed Fuzzy Res-LeNet obtained the best performance, with an accuracy of 93.887%, sensitivity of 94.587%, and specificity of 94.008%.
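SMOTE, used above for class balancing, synthesizes minority-class samples by interpolating between nearest neighbors in feature space; a minimal imbalanced-learn sketch on hypothetical extracted feature vectors (SMOTE operates on flat vectors, so image features would be extracted or flattened first):

```python
import numpy as np
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
# Hypothetical extracted feature vectors: 100 CN samples vs 15 AD samples.
X = np.vstack([rng.normal(0, 1, (100, 32)), rng.normal(1, 1, (15, 32))])
y = np.array([0] * 100 + [1] * 15)

# Oversample the minority class by interpolating between its neighbors.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(np.bincount(y_res))  # classes now balanced: [100 100]
```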

HiPerformer: A High-Performance Global-Local Segmentation Model with Modular Hierarchical Fusion Strategy

Dayu Tan, Zhenpeng Xu, Yansen Su, Xin Peng, Chunhou Zheng, Weimin Zhong

arXiv preprint · Sep 24 2025
Both local details and global context are crucial in medical image segmentation, and effectively integrating them is essential for achieving high accuracy. However, existing mainstream methods based on CNN-Transformer hybrid architectures typically employ simple feature fusion techniques such as serial stacking, endpoint concatenation, or pointwise addition, which struggle to address inconsistencies between features and are prone to information conflict and loss. To address these challenges, we propose HiPerformer. The encoder of HiPerformer employs a novel modular hierarchical architecture that dynamically fuses multi-source features in parallel, enabling layer-wise deep integration of heterogeneous information. The modular hierarchical design not only retains the independent modeling capability of each branch in the encoder, but also ensures sufficient information transfer between layers, effectively avoiding the feature degradation and information loss that come with traditional stacking methods. Furthermore, we design a Local-Global Feature Fusion (LGFF) module to achieve precise and efficient integration of local details and global semantic information, effectively alleviating the feature inconsistency problem and yielding a more comprehensive feature representation. To further enhance multi-scale feature representation and suppress noise interference, we also propose a Progressive Pyramid Aggregation (PPA) module to replace traditional skip connections. Experiments on eleven public datasets show that the proposed method outperforms existing segmentation techniques, demonstrating higher segmentation accuracy and robustness. The code is available at https://github.com/xzphappy/HiPerformer.
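A toy sketch of the general local-global fusion idea (a convolutional local branch and an attention-based global branch blended by a learned gate); this illustrates the concept only and is not the paper's LGFF module:

```python
import torch
import torch.nn as nn

class LocalGlobalFusion(nn.Module):
    """Gated fusion of a local (conv) and a global (self-attention) branch."""
    def __init__(self, channels=64, heads=4):
        super().__init__()
        self.local = nn.Conv2d(channels, channels, 3, padding=1)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.gate = nn.Sequential(nn.Conv2d(2 * channels, channels, 1),
                                  nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.local(x)                           # local details
        tokens = x.flatten(2).transpose(1, 2)           # (b, h*w, c)
        global_, _ = self.attn(tokens, tokens, tokens)  # global context
        global_ = global_.transpose(1, 2).reshape(b, c, h, w)
        g = self.gate(torch.cat([local, global_], dim=1))
        return g * local + (1 - g) * global_            # learned gated blend

out = LocalGlobalFusion()(torch.randn(1, 64, 32, 32))
```

The gate lets the network weight local versus global evidence per location and channel, rather than committing to a fixed concatenation or addition.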