
Quantifying the Trajectory of Percutaneous Endoscopic Lumbar Discectomy in 3D Lumbar Models Based on Automated MR Image Segmentation: A Cross-Sectional Study.

Su Z, Wang Y, Huang C, He Q, Lu J, Liu Z, Zhang Y, Zhao Q, Zhang Y, Cai J, Pang S, Yuan Z, Chen Z, Chen T, Lu H

PubMed | Jul 31, 2025
Creating a 3D lumbar model and planning a personalized puncture trajectory offers advantages for establishing the working channel in percutaneous endoscopic lumbar discectomy (PELD). However, existing 3D lumbar models seldom include reconstructions of the lumbar nerves and dural sac, and primarily depend on CT images for preoperative trajectory planning. Our study therefore investigates the relationship between different virtual working channels and a 3D lumbar model built from automated MR image segmentation of the lumbar bone, nerves, and dural sac at the L4/L5 level. Preoperative lumbar MR images of 50 patients with L4/L5 lumbar disc herniation were collected from a teaching hospital between March 2020 and July 2020. Automated MR image segmentation was first used to create a 3D model of the lumbar spine, including the L4 vertebra, L5 vertebra, intervertebral disc, L4 nerves, dural sac, and skin. Thirty cases were then randomly chosen from the segmentation results to clarify the relationship between various virtual working channels and the lumbar 3D model, using bivariate Spearman's rank correlation analysis. Preoperative MR images of the 50 patients (34 males; mean age 45.6 ± 6 years) were used to train and validate the automated segmentation model, which achieved mean Dice scores of 0.906, 0.891, 0.896, 0.695, 0.892, and 0.892 for the L4 vertebra, L5 vertebra, intervertebral disc, L4 nerves, dural sac, and skin, respectively. As the coronal plane angle (CPA) increased, the intersection volume involving the L4 nerves and atypical structures decreased, while the intersection volume encompassing the dural sac, L4 inferior articular process, and L5 superior articular process increased; the total intersection volume fluctuated, first decreasing, then increasing, then decreasing again. As the cross-section angle (CSA) increased, the intersection volumes of the L4 nerves and the dural sac rose; the intersection volume of the L4 inferior articular process grew while that of the L5 superior articular process diminished; and both the total intersection volume and the intersection volume of atypical structures first decreased and then increased. Based on these patterns, the optimal angles for L4/L5 PELD are a CSA of 15° and a CPA of 15°-20°, minimizing harm to the vertebral bones, facet joints, spinal nerves, and dural sac. Additionally, our 3D preoperative planning method could improve puncture trajectories for individual patients, potentially advancing surgical navigation, robotics, and artificial intelligence in PELD procedures.
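
The core quantity in this analysis, the intersection volume between a virtual working channel and a segmented structure, reduces to a voxel-wise overlap count. Below is a minimal sketch of that measurement, assuming binary NumPy masks resampled to a common grid; the `build_channel_mask` helper in the usage comment is hypothetical, not the authors' code.

```python
import numpy as np

def intersection_volume_mm3(structure_mask: np.ndarray,
                            channel_mask: np.ndarray,
                            voxel_volume_mm3: float) -> float:
    """Overlap volume between a segmented structure and a virtual channel,
    both given as binary masks on the same voxel grid."""
    overlap_voxels = np.logical_and(structure_mask > 0, channel_mask > 0).sum()
    return float(overlap_voxels) * voxel_volume_mm3

# Hypothetical usage: sweep the coronal plane angle (CPA) at a fixed
# cross-section angle (CSA) and record the overlap with the L4 nerves.
# for cpa in range(0, 35, 5):
#     channel = build_channel_mask(cpa=cpa, csa=15)   # hypothetical helper
#     print(cpa, intersection_volume_mm3(l4_nerve_mask, channel, voxel_vol))
```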

A brain tumor segmentation enhancement in MRI images using U-Net and transfer learning.

Pourmahboubi A, Arsalani Saeed N, Tabrizchi H

PubMed | Jul 31, 2025
This paper presents a novel transfer learning approach for segmenting brain tumors in Magnetic Resonance Imaging (MRI) images. Using Fluid-Attenuated Inversion Recovery (FLAIR) abnormality segmentation masks and MRI scans from The Cancer Genome Atlas (TCGA) lower-grade glioma collection, the proposed approach uses a VGG19-based U-Net architecture with fixed pretrained weights. The experimental findings (Area Under the Curve (AUC) of 0.9957, F1-score of 0.9679, Dice coefficient of 0.9679, precision of 0.9541, recall of 0.9821, and Intersection-over-Union (IoU) of 0.9378) demonstrate the effectiveness of the proposed framework. On these metrics, the VGG19-powered U-Net outperforms not only the conventional U-Net model but also the compared variants that use other pretrained backbones in the U-Net encoder. Clinical trial registration: Not applicable, as this study used an existing publicly available dataset and did not involve a clinical trial.
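
As a rough illustration of this setup (a sketch under assumptions, not the authors' implementation), the encoder of a U-Net can be replaced by torchvision's ImageNet-pretrained VGG19 with its weights frozen, with skip connections tapped just before each pooling stage:

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights

encoder = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features
for p in encoder.parameters():
    p.requires_grad = False          # "fixed pretrained weights": encoder is frozen

SKIP_IDS = (3, 8, 17, 26, 35)        # ReLU layers preceding each of the 5 max-pools

def encode(x: torch.Tensor):
    """Run VGG19 features, collecting skip tensors for the U-Net decoder."""
    skips = []
    for i, layer in enumerate(encoder):
        x = layer(x)
        if i in SKIP_IDS:
            skips.append(x)
    return x, skips

class UpBlock(nn.Module):
    """One decoder step: upsample, concatenate the skip, convolve."""
    def __init__(self, in_ch: int, skip_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, 3, padding=1),
            nn.ReLU(inplace=True))

    def forward(self, x, skip):
        return self.conv(torch.cat([self.up(x), skip], dim=1))
```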

Topology Optimization in Medical Image Segmentation with Fast Euler Characteristic

Liu Li, Qiang Ma, Cheng Ouyang, Johannes C. Paetzold, Daniel Rueckert, Bernhard Kainz

arXiv preprint | Jul 31, 2025
Deep learning-based medical image segmentation techniques have shown promising results when evaluated with conventional metrics such as the Dice score or Intersection-over-Union. However, these fully automatic methods often fail to meet clinically acceptable accuracy, especially when topological constraints should be observed, e.g., continuous boundaries or closed surfaces. In medical image segmentation, the correctness of a segmentation in terms of the required topological genus is sometimes even more important than the pixel-wise accuracy. Existing topology-aware approaches commonly estimate and constrain the topological structure via the concept of persistent homology (PH). However, these methods are difficult to apply to high-dimensional data due to their polynomial computational complexity. To overcome this problem, we propose a novel and fast approach for topology-aware segmentation based on the Euler characteristic ($\chi$). First, we propose a fast formulation for $\chi$ computation in both 2D and 3D. The scalar $\chi$ error between the prediction and the ground truth serves as the topological evaluation metric. Then we estimate the spatial topology correctness of any segmentation network via a so-called topological violation map, i.e., a detailed map that highlights regions with $\chi$ errors. Finally, the segmentation results from an arbitrary network are refined by a topology-aware correction network guided by the topological violation maps. Our experiments on both 2D and 3D datasets show that our method significantly improves topological correctness while preserving pixel-wise segmentation accuracy.
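
For intuition, the scalar $\chi$ error used as the topological metric can be reproduced with off-the-shelf tools (the paper's contribution is a faster $\chi$ formulation, which this sketch does not implement):

```python
import numpy as np
from skimage.measure import euler_number

def chi_error(pred: np.ndarray, gt: np.ndarray, connectivity: int = 1) -> int:
    """|chi(pred) - chi(gt)| for binary 2D or 3D masks."""
    return abs(euler_number(pred.astype(bool), connectivity=connectivity)
               - euler_number(gt.astype(bool), connectivity=connectivity))

# Example: a square frame has one component and one hole, so chi = 0;
# a prediction that fills the hole has chi = 1, incurring a chi error of 1
# even though the pixel-wise overlap is high.
gt = np.ones((32, 32), bool)
gt[8:24, 8:24] = False       # frame with a hole, chi = 0
pred = np.ones((32, 32), bool)  # filled square, chi = 1
print(chi_error(pred, gt))   # -> 1
```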

Towards Affordable Tumor Segmentation and Visualization for 3D Breast MRI Using SAM2

Solha Kang, Eugene Kim, Joris Vankerschaver, Utku Ozbulak

arXiv preprint | Jul 31, 2025
Breast MRI provides high-resolution volumetric imaging critical for tumor assessment and treatment planning, yet manual interpretation of 3D scans remains labor-intensive and subjective. While AI-powered tools hold promise for accelerating medical image analysis, adoption of commercial medical AI products remains limited in low- and middle-income countries due to high license costs, proprietary software, and infrastructure demands. In this work, we investigate whether the Segment Anything Model 2 (SAM2) can be adapted for low-cost, minimal-input 3D tumor segmentation in breast MRI. Using a single bounding box annotation on one slice, we propagate segmentation predictions across the 3D volume using three different slice-wise tracking strategies: top-to-bottom, bottom-to-top, and center-outward. We evaluate these strategies across a large cohort of patients and find that center-outward propagation yields the most consistent and accurate segmentations. Despite being a zero-shot model not trained for volumetric medical data, SAM2 achieves strong segmentation performance under minimal supervision. We further analyze how segmentation performance relates to tumor size, location, and shape, identifying key failure modes. Our results suggest that general-purpose foundation models such as SAM2 can support 3D medical image analysis with minimal supervision, offering an accessible and affordable alternative for resource-constrained settings.
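
The slice-ordering logic behind the best-performing strategy is simple to state; a minimal sketch follows, with the actual SAM2 inference hidden behind a hypothetical `segment_slice` callable (the real SAM2 interface differs):

```python
import numpy as np

def center_outward_order(n_slices: int, center: int) -> list:
    """Visit the annotated center slice first, then alternate outward."""
    order = [center]
    for step in range(1, n_slices):
        if center + step < n_slices:
            order.append(center + step)
        if center - step >= 0:
            order.append(center - step)
    return order

def propagate(volume, center, box_prompt, segment_slice):
    """volume: (S, H, W) array; box_prompt applies to the center slice only."""
    masks = np.zeros(volume.shape, dtype=bool)
    for idx in center_outward_order(volume.shape[0], center):
        if idx == center:
            masks[idx] = segment_slice(volume[idx], box=box_prompt)
        else:
            ref = idx - 1 if idx > center else idx + 1  # neighbor nearer the center
            masks[idx] = segment_slice(volume[idx], prior_mask=masks[ref])
    return masks
```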

Deep Learning-based Hierarchical Brain Segmentation with Preliminary Analysis of the Repeatability and Reproducibility.

Goto M, Kamagata K, Andica C, Takabayashi K, Uchida W, Goto T, Yuzawa T, Kitamura Y, Hatano T, Hattori N, Aoki S, Sakamoto H, Sakano Y, Kyogoku S, Daida H

PubMed | Jul 31, 2025
We developed a new deep learning-based hierarchical brain segmentation (DLHBS) method that can segment T1-weighted MR images (T1WI) into 107 brain subregions and calculate the volume of each subregion. This study aimed to evaluate the repeatability and reproducibility of volume estimation using DLHBS and compare them with those of representative brain segmentation tools such as statistical parametric mapping (SPM) and FreeSurfer (FS). Hierarchical segmentation using multiple deep learning models was employed to segment brain subregions within a clinically feasible processing time. T1WI and brain mask pairs from 486 subjects were used to train the deep learning segmentation models. Training data were generated using a multi-atlas registration-based method, and their high quality was confirmed through visual evaluation and manual correction by neuroradiologists. Scan-rescan 3D-T1WI data from 11 healthy subjects were acquired on three MRI scanners to evaluate repeatability and reproducibility. The volumes of eight ROIs (gray matter, white matter, cerebrospinal fluid, hippocampus, orbital gyrus, cerebellum posterior lobe, putamen, and thalamus) were obtained using DLHBS, SPM 12 with default settings, and FS with the "recon-all" pipeline, and were then used to evaluate repeatability and reproducibility. In the volume measurements, the bilateral thalamus showed higher repeatability with DLHBS than with SPM. Furthermore, DLHBS demonstrated higher repeatability than FS across all eight ROIs. Additionally, DLHBS showed higher reproducibility in both hemispheres of six ROIs compared with SPM and in five ROIs compared with FS; DLHBS did not show lower repeatability or reproducibility in any comparison. Our results indicate that DLHBS achieved the best repeatability and reproducibility among the three tools.
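
As one concrete way to summarize scan-rescan repeatability of the volume estimates (an illustration, not the paper's exact statistics), the within-subject coefficient of variation can be computed per ROI:

```python
import numpy as np

def within_subject_cv(volumes: np.ndarray) -> float:
    """volumes: (n_subjects, n_repeats) ROI volumes from scan-rescan data.
    Returns the mean within-subject coefficient of variation in percent;
    lower values indicate more repeatable volume estimates."""
    means = volumes.mean(axis=1)
    stds = volumes.std(axis=1, ddof=1)
    return float(np.mean(stds / means) * 100.0)

# Hypothetical usage: thalamus volumes from 11 subjects scanned twice,
# compared across tools (e.g., DLHBS vs. SPM vs. FS).
# cv_dlhbs = within_subject_cv(thalamus_volumes_dlhbs)
```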

SAMSA: Segment Anything Model Enhanced with Spectral Angles for Hyperspectral Interactive Medical Image Segmentation

Alfie Roddan, Tobias Czempiel, Chi Xu, Daniel S. Elson, Stamatia Giannarou

arXiv preprint | Jul 31, 2025
Hyperspectral imaging (HSI) provides rich spectral information for medical imaging, yet faces significant challenges due to data limitations and hardware variations. We introduce SAMSA, a novel interactive segmentation framework that combines an RGB foundation model with spectral analysis. SAMSA efficiently uses user clicks to guide both RGB segmentation and spectral similarity computations. The method addresses key limitations in HSI segmentation through a unique spectral feature fusion strategy that operates independently of spectral band count and resolution. Evaluation on publicly available datasets shows 81.0% 1-click and 93.4% 5-click Dice on a neurosurgical hyperspectral dataset, and 81.1% 1-click and 89.2% 5-click Dice on an intraoperative porcine hyperspectral dataset. Experimental results demonstrate SAMSA's effectiveness in few-shot and zero-shot learning scenarios using minimal training examples. Our approach enables seamless integration of datasets with different spectral characteristics, providing a flexible framework for hyperspectral medical image analysis.
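
The spectral side of such a framework typically relies on the spectral angle between a clicked pixel's spectrum and all others, which is naturally independent of band count. A minimal sketch, assuming a `(H, W, B)` hyperspectral cube (the fusion with the RGB model's output is not reproduced here):

```python
import numpy as np

def spectral_angle_map(cube: np.ndarray, click: tuple) -> np.ndarray:
    """cube: (H, W, B) hyperspectral image; click: (row, col) of the user click.
    Returns the per-pixel spectral angle to the clicked spectrum, in radians;
    smaller angles mean more similar spectra, regardless of band count B."""
    ref = cube[click]                                    # (B,) clicked spectrum
    dots = np.einsum('hwb,b->hw', cube, ref)
    norms = np.linalg.norm(cube, axis=-1) * np.linalg.norm(ref)
    cos = np.clip(dots / np.maximum(norms, 1e-12), -1.0, 1.0)
    return np.arccos(cos)
```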

Advanced multi-label brain hemorrhage segmentation using an attention-based residual U-Net model.

Lin X, Zou E, Chen W, Chen X, Lin L

PubMed | Jul 31, 2025
This study aimed to develop and assess an advanced attention-based residual U-Net (ResUNet) model for accurately segmenting different types of brain hemorrhages from CT images, overcoming the limitations of manual segmentation and of current automated methods in precision and generalizability. A dataset of 1,347 patient CT scans was collected retrospectively, covering six types of hemorrhage: subarachnoid hemorrhage (SAH, 231 cases), subdural hematoma (SDH, 198 cases), epidural hematoma (EDH, 236 cases), cerebral contusion (CC, 230 cases), intraventricular hemorrhage (IVH, 188 cases), and intracerebral hemorrhage (ICH, 264 cases). The dataset was divided into 80% for training, using 10-fold cross-validation, and 20% for testing. All CT scans were standardized to a common anatomical space, and intensity normalization was applied for uniformity. The ResUNet model includes attention mechanisms to enhance focus on important features and residual connections to support stable learning and efficient gradient flow. Model performance was assessed using the Dice Similarity Coefficient (DSC), Intersection over Union (IoU), and directed Hausdorff distance (dHD). The ResUNet model showed excellent performance during both training and testing. On training data, the model achieved DSC scores of 95 ± 1.2 for SAH, 94 ± 1.4 for SDH, 93 ± 1.5 for EDH, 91 ± 1.4 for CC, 89 ± 1.6 for IVH, and 93 ± 2.4 for ICH. IoU values ranged from 88 to 93, with dHD between 2.1 and 2.7 mm. Testing results confirmed strong generalization, with DSC scores of 93 for SAH, 93 for SDH, 92 for EDH, 90 for CC, 88 for IVH, and 92 for ICH. IoU values were also high, indicating precise segmentation and minimal boundary errors. The ResUNet model outperformed standard U-Net variants, achieving higher multi-label segmentation accuracy, making it a valuable tool for clinical applications that require fast and reliable brain hemorrhage analysis. Future research could investigate semi-supervised techniques and 3D segmentation to further enhance clinical use. Clinical trial number: Not applicable.
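
For readers unfamiliar with the two building blocks, the following sketch shows a generic residual convolution block and an additive attention gate on a skip connection, in the style of Attention U-Net; this is an assumption about the architecture, not the authors' code:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two convolutions with an identity shortcut for stable gradient flow."""
    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))

    def forward(self, x):
        return torch.relu(x + self.body(x))

class AttentionGate(nn.Module):
    """Re-weights encoder skip features using the decoder's gating signal
    (assumed here to be resized to the skip's spatial size)."""
    def __init__(self, skip_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.w_x = nn.Conv2d(skip_ch, inter_ch, 1)
        self.w_g = nn.Conv2d(gate_ch, inter_ch, 1)
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, 1), nn.Sigmoid())

    def forward(self, skip, gate):
        attn = self.psi(torch.relu(self.w_x(skip) + self.w_g(gate)))
        return skip * attn   # suppress regions irrelevant to the decoder
```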

SAM-Med3D: A Vision Foundation Model for General-Purpose Segmentation on Volumetric Medical Images.

Wang H, Guo S, Ye J, Deng Z, Cheng J, Li T, Chen J, Su Y, Huang Z, Shen Y, Fu B, Zhang S, He J

PubMed | Jul 31, 2025
Existing volumetric medical image segmentation models are typically task-specific, excelling at specific targets but struggling to generalize across anatomical structures or modalities. This limitation restricts their broader clinical use. In this article, we introduce Segment Anything Model (SAM)-Med3D, a vision foundation model (VFM) for general-purpose segmentation on volumetric medical images. Given only a few 3-D prompt points, SAM-Med3D can accurately segment diverse anatomical structures and lesions across various modalities. To achieve this, we gather and preprocess a large-scale 3-D medical image segmentation dataset, SA-Med3D-140K, from 70 public datasets and 8K licensed private cases from hospitals. This dataset includes 22K 3-D images and 143K corresponding masks. SAM-Med3D, a promptable segmentation model characterized by its fully learnable 3-D structure, is trained on this dataset using a two-stage procedure and exhibits impressive performance on both seen and unseen segmentation targets. We comprehensively evaluate SAM-Med3D on 16 datasets covering diverse medical scenarios, including different anatomical structures, modalities, targets, and zero-shot transferability to new/unseen tasks. The evaluation demonstrates the efficiency and efficacy of SAM-Med3D, as well as its promising application to diverse downstream tasks as a pretrained model. Our approach illustrates that substantial medical resources can be harnessed to develop a general-purpose medical AI for various potential applications. Our dataset, code, and models are available at: https://github.com/uni-medical/SAM-Med3D.
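
Evaluating with "a few 3-D prompt points" usually means sampling foreground voxels from a reference mask and passing them to the promptable model. A minimal sketch of the sampling step; the `model.predict` call in the usage comment is a hypothetical stand-in, not the released SAM-Med3D API (see the linked repository for the actual interface):

```python
import numpy as np

def sample_prompt_points(mask: np.ndarray, n_points: int = 3, rng=None) -> np.ndarray:
    """Return (n_points, 3) voxel coordinates drawn from the mask foreground."""
    rng = rng or np.random.default_rng(0)
    fg = np.argwhere(mask > 0)                       # (K, 3) foreground voxels
    idx = rng.choice(len(fg), size=min(n_points, len(fg)), replace=False)
    return fg[idx]

# Hypothetical usage with a promptable 3-D segmentation model:
# points = sample_prompt_points(gt_mask, n_points=3)
# pred = model.predict(image, point_coords=points,
#                      point_labels=np.ones(len(points)))
```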

IHE-Net: Hidden feature discrepancy fusion and triple consistency training for semi-supervised medical image segmentation.

Ju M, Wang B, Zhao Z, Zhang S, Yang S, Wei Z

PubMed | Jul 31, 2025
Teacher-student (TS) networks have become a mainstream framework for semi-supervised deep learning and are widely used in medical image segmentation. However, traditional TS frameworks based on a single encoder, or on homogeneous encoders, often struggle to capture the rich semantic details required for complex, fine-grained tasks. To address this, we propose a novel semi-supervised medical image segmentation framework (IHE-Net) that exploits the feature discrepancies between two heterogeneous encoders to improve segmentation performance. The two encoders are instantiated with networks from different learning paradigms, namely a CNN and a Transformer/Mamba, to extract richer and more robust context representations from unlabeled data. On this basis, we propose a simple yet powerful multi-level feature discrepancy fusion module (MFDF), which effectively integrates the different modal features and their discrepancies from the two heterogeneous encoders. This design enhances the representational capacity of the model through efficient fusion without introducing additional computational overhead. Furthermore, we introduce a triple consistency learning strategy that improves predictive stability by using dual decoders and adding mixed-output consistency. Extensive experimental results on three skin lesion segmentation datasets, ISIC2017, ISIC2018, and PH2, demonstrate the superiority of our framework. Ablation studies further validate the rationale and effectiveness of the proposed method. Code is available at: https://github.com/joey-AI-medical-learning/IHE-Net.
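
A minimal sketch of what a triple consistency term can look like under this description (dual decoder predictions plus a mixed output as a shared target); the exact IHE-Net losses are in the authors' repository and may differ:

```python
import torch
import torch.nn.functional as F

def triple_consistency(p1: torch.Tensor, p2: torch.Tensor) -> torch.Tensor:
    """p1, p2: (B, C, H, W) softmax outputs of the two decoders on an
    unlabeled image. Three pairwise agreement terms are summed:
    decoder-to-decoder, and each decoder to the mixed (averaged) output."""
    mixed = ((p1 + p2) / 2).detach()   # detached so the mix acts as a target
    return (F.mse_loss(p1, p2)
            + F.mse_loss(p1, mixed)
            + F.mse_loss(p2, mixed))
```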

Optimizing Federated Learning Configurations for MRI Prostate Segmentation and Cancer Detection: A Simulation Study

Ashkan Moradi, Fadila Zerka, Joeran S. Bosma, Mohammed R. S. Sunoqrot, Bendik S. Abrahamsen, Derya Yakar, Jeroen Geerdink, Henkjan Huisman, Tone Frost Bathen, Mattijs Elschot

arXiv preprint | Jul 30, 2025
Purpose: To develop and optimize a federated learning (FL) framework across multiple clients for biparametric MRI prostate segmentation and clinically significant prostate cancer (csPCa) detection. Materials and Methods: A retrospective study was conducted using Flower FL to train an nnU-Net-based architecture for MRI prostate segmentation and csPCa detection, using data collected from January 2010 to August 2021. Model development included training and optimizing local epochs, federated rounds, and aggregation strategies for FL-based prostate segmentation on T2-weighted MRIs (four clients, 1294 patients) and csPCa detection using biparametric MRIs (three clients, 1440 patients). Performance was evaluated on independent test sets using the Dice score for segmentation and the Prostate Imaging: Cancer Artificial Intelligence (PI-CAI) score, defined as the average of the area under the receiver operating characteristic curve and the average precision, for csPCa detection. P-values for performance differences were calculated using permutation testing. Results: The FL configurations were optimized independently for the two tasks, with the best performance at 1 local epoch and 300 rounds using FedMedian for prostate segmentation, and 5 local epochs and 200 rounds using FedAdagrad for csPCa detection. Compared with the average performance of the clients, the optimized FL model significantly improved performance in prostate segmentation and csPCa detection on the independent test set. The optimized FL model showed higher lesion detection performance than the FL-baseline model, but no evidence of a difference was observed for prostate segmentation. Conclusions: FL enhanced the performance and generalizability of MRI prostate segmentation and csPCa detection compared with local models, and optimizing its configuration further improved lesion detection performance.
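
The PI-CAI score as defined in this abstract is straightforward to compute with scikit-learn; note that the official PI-CAI challenge evaluation adds lesion-level detail beyond this case-level sketch:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def pi_cai_score(y_true: np.ndarray, y_score: np.ndarray) -> float:
    """y_true: binary csPCa labels per case; y_score: model confidence.
    Returns the mean of AUROC and average precision, per the definition above."""
    return 0.5 * (roc_auc_score(y_true, y_score)
                  + average_precision_score(y_true, y_score))
```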