Automated Assessment of Choroidal Mass Dimensions Using Static and Dynamic Ultrasonographic Imaging

Emmert, N., Wall, G., Nabavi, A., Rahdar, A., Wilson, M., King, B., Cernichiaro-Espinosa, L., Yousefi, S.

medRxiv preprint · Aug 1, 2025
Purpose: To develop and validate an artificial intelligence (AI)-based model that automatically measures choroidal mass dimensions on B-scan ophthalmic ultrasound still images and cine loops.

Design: Retrospective diagnostic accuracy study with internal and external validation.

Participants: The dataset included 1,822 still images and 283 cine loops of choroidal masses for model development and testing. An additional 182 still images were used for external validation, and 302 control images with other diagnoses were included to assess specificity.

Methods: A deep convolutional neural network (CNN) based on the U-Net architecture was developed to automatically measure the apical height and basal diameter of choroidal masses on B-scan ultrasound. All still images were manually annotated by expert graders and reviewed by a senior ocular oncologist. Cine loops were analyzed frame by frame, and the frame with the largest detected mass dimensions was selected for evaluation.

Outcome Measures: The primary outcome was the model's measurement accuracy, defined as the mean absolute error (MAE) in millimeters relative to expert manual annotations, for both apical height and basal diameter. Secondary metrics included the Dice coefficient, coefficient of determination (R²), and mean pixel distance between predicted and reference measurements.

Results: On the internal test set of still images, the model successfully detected the tumor in 99.7% of cases. The MAE was 0.38 ± 0.55 mm for apical height (95.1% of measurements within 1 mm of the expert annotation) and 0.99 ± 1.15 mm for basal diameter (64.4% within 1 mm). Linear agreement between predicted and reference measurements was strong, with R² values of 0.74 for apical height and 0.89 for basal diameter. On the set of 302 control images, the model demonstrated a moderate false-positive rate. On the external validation set, the model maintained comparable accuracy. Among the cine loops, the model detected tumors in 89.4% of cases with comparable accuracy.

Conclusion: Deep learning can deliver fast, reproducible, millimeter-level measurements of choroidal mass dimensions with robust performance across different mass types and imaging sources. These findings support the potential clinical utility of AI-assisted measurement tools in ocular oncology workflows.
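
The cine-loop step lends itself to a simple frame-selection loop. Below is a minimal sketch of that idea; `segment_mass` is a hypothetical stand-in for the paper's U-Net, and the bounding-box size proxy is an assumption, since the abstract does not specify how "largest detected mass dimensions" is computed.

```python
import numpy as np

def largest_mass_frame(cine_frames, segment_mass):
    """Pick the cine frame whose segmented mass is largest.

    `segment_mass` maps a B-scan frame to a binary mask (or None when
    no mass is detected); it stands in for the paper's U-Net.
    """
    best_frame, best_mask, best_size = None, None, -1.0
    for idx, frame in enumerate(cine_frames):
        mask = segment_mass(frame)
        if mask is None or not mask.any():
            continue
        ys, xs = np.nonzero(mask)
        # Proxy for apical height x basal diameter: bounding-box extent.
        size = (ys.max() - ys.min()) * (xs.max() - xs.min())
        if size > best_size:
            best_frame, best_mask, best_size = idx, mask, size
    return best_frame, best_mask
```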

SAM-Med3D: A Vision Foundation Model for General-Purpose Segmentation on Volumetric Medical Images.

Wang H, Guo S, Ye J, Deng Z, Cheng J, Li T, Chen J, Su Y, Huang Z, Shen Y, Fu B, Zhang S, He J

PubMed paper · Jul 31, 2025
Existing volumetric medical image segmentation models are typically task-specific, excelling at specific targets but struggling to generalize across anatomical structures or modalities. This limitation restricts their broader clinical use. In this article, we introduce segment anything model (SAM)-Med3D, a vision foundation model (VFM) for general-purpose segmentation on volumetric medical images. Given only a few 3-D prompt points, SAM-Med3D can accurately segment diverse anatomical structures and lesions across various modalities. To achieve this, we gather and preprocess a large-scale 3-D medical image segmentation dataset, SA-Med3D-140K, from 70 public datasets and 8K licensed private cases from hospitals. This dataset includes 22K 3-D images and 143K corresponding masks. SAM-Med3D, a promptable segmentation model characterized by its fully learnable 3-D structure, is trained on this dataset using a two-stage procedure and exhibits impressive performance on both seen and unseen segmentation targets. We comprehensively evaluate SAM-Med3D on 16 datasets covering diverse medical scenarios, including different anatomical structures, modalities, targets, and zero-shot transferability to new/unseen tasks. The evaluation demonstrates the efficiency and efficacy of SAM-Med3D, as well as its promising application to diverse downstream tasks as a pretrained model. Our approach illustrates that substantial medical resources can be harnessed to develop a general-purpose medical AI for various potential applications. Our dataset, code, and models are available at: https://github.com/uni-medical/SAM-Med3D.
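
To make the prompting workflow concrete, here is a sketch of point-prompted volumetric inference. The call signature (`point_coords`, `point_labels`) and wrapper are illustrative assumptions, not the repository's actual API; consult https://github.com/uni-medical/SAM-Med3D for the real entry points.

```python
import torch

def segment_with_point_prompts(model, volume, clicks):
    """Prompt a 3-D segmentation foundation model with a few clicks.

    All names here are hypothetical stand-ins for the SAM-Med3D API;
    only the prompt-points-in, mask-out contract matches the paper.

    volume: (D, H, W) float tensor, a preprocessed 3-D scan
    clicks: list of (z, y, x) coordinates inside the target structure
    """
    points = torch.tensor(clicks, dtype=torch.float32)[None]  # (1, N, 3)
    labels = torch.ones(1, len(clicks))                       # 1 = foreground
    with torch.no_grad():
        logits = model(volume[None, None], point_coords=points,
                       point_labels=labels)                   # (1, 1, D, H, W)
    return (logits.sigmoid() > 0.5).squeeze().cpu().numpy()
```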

Deep Learning-based Hierarchical Brain Segmentation with Preliminary Analysis of the Repeatability and Reproducibility.

Goto M, Kamagata K, Andica C, Takabayashi K, Uchida W, Goto T, Yuzawa T, Kitamura Y, Hatano T, Hattori N, Aoki S, Sakamoto H, Sakano Y, Kyogoku S, Daida H

PubMed paper · Jul 31, 2025
We developed a new deep learning-based hierarchical brain segmentation (DLHBS) method that segments T1-weighted MR images (T1WI) into 107 brain subregions and calculates the volume of each subregion. This study aimed to evaluate the repeatability and reproducibility of volume estimation using DLHBS and compare them with those of representative brain segmentation tools, statistical parametric mapping (SPM) and FreeSurfer (FS). Hierarchical segmentation using multiple deep learning models was employed to segment brain subregions within a clinically feasible processing time. T1WI and brain mask pairs from 486 subjects were used to train the deep learning segmentation models. Training data were generated using a multi-atlas registration-based method, and their high quality was confirmed through visual evaluation and manual correction by neuroradiologists. Scan-rescan 3D-T1WI data from 11 healthy subjects were obtained using three MRI scanners to evaluate repeatability and reproducibility. The volumes of eight ROIs (gray matter, white matter, cerebrospinal fluid, hippocampus, orbital gyrus, cerebellum posterior lobe, putamen, and thalamus) were obtained using DLHBS, SPM 12 with default settings, and FS with the "recon-all" pipeline, and were then used to evaluate repeatability and reproducibility. In the volume measurements, the bilateral thalamus showed higher repeatability with DLHBS than with SPM, and DLHBS demonstrated higher repeatability than FS across all eight ROIs. Higher reproducibility was also observed with DLHBS in both hemispheres of six ROIs compared with SPM and of five ROIs compared with FS; DLHBS showed lower repeatability or reproducibility in no comparison. Our results indicate that DLHBS achieved the best performance in both repeatability and reproducibility compared with SPM and FS.
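
As a concrete example of a scan-rescan repeatability statistic, the sketch below computes a within-subject coefficient of variation over repeated volume measurements. This is a generic metric chosen for illustration; the abstract does not state which repeatability statistic the authors used.

```python
import numpy as np

def within_subject_cv(volumes):
    """Within-subject coefficient of variation (CV%) as a repeatability proxy.

    `volumes` is an (n_subjects, n_repeats) array of one ROI's volumes
    from scan-rescan data; lower CV means more repeatable volumetry.
    """
    v = np.asarray(volumes, dtype=float)
    per_subject_cv = v.std(axis=1, ddof=1) / v.mean(axis=1)
    return 100.0 * per_subject_cv.mean()

# e.g. thalamus volumes (mL) for 3 subjects scanned twice on one scanner
print(within_subject_cv([[6.1, 6.2], [5.8, 5.9], [6.4, 6.3]]))
```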

Topology Optimization in Medical Image Segmentation with Fast Euler Characteristic

Liu Li, Qiang Ma, Cheng Ouyang, Johannes C. Paetzold, Daniel Rueckert, Bernhard Kainz

arXiv preprint · Jul 31, 2025
Deep learning-based medical image segmentation techniques have shown promising results when evaluated with conventional metrics such as the Dice score or Intersection-over-Union. However, these fully automatic methods often fail to meet clinically acceptable accuracy, especially when topological constraints should be observed, e.g., continuous boundaries or closed surfaces. In medical image segmentation, the correctness of a segmentation in terms of the required topological genus is sometimes even more important than the pixel-wise accuracy. Existing topology-aware approaches commonly estimate and constrain the topological structure via the concept of persistent homology (PH). However, these methods are difficult to apply to high-dimensional data due to their polynomial computational complexity. To overcome this problem, we propose a novel and fast approach for topology-aware segmentation based on the Euler Characteristic ($\chi$). First, we propose a fast formulation for $\chi$ computation in both 2D and 3D. The scalar $\chi$ error between the prediction and the ground truth serves as the topological evaluation metric. Then we estimate the spatial topology correctness of any segmentation network via a so-called topological violation map, i.e., a detailed map that highlights regions with $\chi$ errors. Finally, the segmentation results from an arbitrary network are refined based on the topological violation maps by a topology-aware correction network. Our experiments are conducted on both 2D and 3D datasets and show that our method can significantly improve topological correctness while preserving pixel-wise segmentation accuracy.
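
For a binary mask, $\chi$ can be counted directly on the cubical complex, which hints at why a fast formulation is possible. Below is a minimal 2D sketch of that counting idea (the paper's formulation, which also covers 3D, may differ in detail): $\chi$ = V - E + F, so a hole reduces $\chi$ by one.

```python
import numpy as np

def euler_characteristic_2d(mask: np.ndarray) -> int:
    """Euler characteristic of a binary mask via V - E + F on the
    cubical complex (each foreground pixel is a closed unit square)."""
    m = np.pad(mask.astype(bool), 1)
    # Vertices: lattice corners touched by at least one foreground pixel.
    V = (m[:-1, :-1] | m[:-1, 1:] | m[1:, :-1] | m[1:, 1:]).sum()
    # Edges: grid edges bounding at least one foreground pixel.
    E = (m[:-1, 1:-1] | m[1:, 1:-1]).sum() + (m[1:-1, :-1] | m[1:-1, 1:]).sum()
    F = m.sum()  # Faces: the foreground pixels themselves.
    return int(V - E + F)

disk = np.ones((3, 3), dtype=np.uint8)   # chi = 1: one component, no hole
ring = disk.copy(); ring[1, 1] = 0       # chi = 0: one component, one hole
chi_error = abs(euler_characteristic_2d(ring) - euler_characteristic_2d(disk))
print(euler_characteristic_2d(disk), euler_characteristic_2d(ring), chi_error)
```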

Towards Affordable Tumor Segmentation and Visualization for 3D Breast MRI Using SAM2

Solha Kang, Eugene Kim, Joris Vankerschaver, Utku Ozbulak

arXiv preprint · Jul 31, 2025
Breast MRI provides high-resolution volumetric imaging critical for tumor assessment and treatment planning, yet manual interpretation of 3D scans remains labor-intensive and subjective. While AI-powered tools hold promise for accelerating medical image analysis, adoption of commercial medical AI products remains limited in low- and middle-income countries due to high license costs, proprietary software, and infrastructure demands. In this work, we investigate whether the Segment Anything Model 2 (SAM2) can be adapted for low-cost, minimal-input 3D tumor segmentation in breast MRI. Using a single bounding box annotation on one slice, we propagate segmentation predictions across the 3D volume using three different slice-wise tracking strategies: top-to-bottom, bottom-to-top, and center-outward. We evaluate these strategies across a large cohort of patients and find that center-outward propagation yields the most consistent and accurate segmentations. Despite being a zero-shot model not trained for volumetric medical data, SAM2 achieves strong segmentation performance under minimal supervision. We further analyze how segmentation performance relates to tumor size, location, and shape, identifying key failure modes. Our results suggest that general-purpose foundation models such as SAM2 can support 3D medical image analysis with minimal supervision, offering an accessible and affordable alternative for resource-constrained settings.
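
The center-outward strategy is easy to state precisely: visit the seed slice first, then alternate one slice up, one slice down, and so on, each time conditioning on the nearest already-segmented neighbor. The sketch below captures that ordering; `predict_slice` is a hypothetical stand-in for a SAM2 call, not its actual API.

```python
def center_outward_order(n_slices: int, seed: int):
    """Slice visiting order: seed first, then alternate outward."""
    order, step = [seed], 1
    while len(order) < n_slices:
        if seed + step < n_slices:
            order.append(seed + step)
        if seed - step >= 0:
            order.append(seed - step)
        step += 1
    return order

def propagate(volume, seed_slice, seed_box, predict_slice):
    """Propagate from one annotated slice through the whole volume.

    `predict_slice(image, reference)` is a hypothetical per-slice
    predictor: `reference` is the seed bounding box on the first slice
    and the nearest neighbouring mask afterwards.
    """
    masks = {}
    for z in center_outward_order(volume.shape[0], seed_slice):
        ref = masks.get(z - 1)
        if ref is None:
            ref = masks.get(z + 1)
        if ref is None:
            ref = seed_box          # only the seed slice reaches here
        masks[z] = predict_slice(volume[z], ref)
    return masks
```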

SAMSA: Segment Anything Model Enhanced with Spectral Angles for Hyperspectral Interactive Medical Image Segmentation

Alfie Roddan, Tobias Czempiel, Chi Xu, Daniel S. Elson, Stamatia Giannarou

arXiv preprint · Jul 31, 2025
Hyperspectral imaging (HSI) provides rich spectral information for medical imaging, yet it faces significant challenges due to data limitations and hardware variations. We introduce SAMSA, a novel interactive segmentation framework that combines an RGB foundation model with spectral analysis. SAMSA efficiently uses user clicks to guide both RGB segmentation and spectral similarity computations. The method addresses key limitations in HSI segmentation through a unique spectral feature fusion strategy that operates independently of spectral band count and resolution. Performance evaluation on publicly available datasets showed 81.0% 1-click and 93.4% 5-click Dice on a neurosurgical hyperspectral dataset, and 81.1% 1-click and 89.2% 5-click Dice on an intraoperative porcine hyperspectral dataset. Experimental results demonstrate SAMSA's effectiveness in few-shot and zero-shot learning scenarios with minimal training examples. Our approach enables seamless integration of datasets with different spectral characteristics, providing a flexible framework for hyperspectral medical image analysis.
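
The spectral side of the method builds on the classic spectral angle: the angle between a pixel's spectrum and a reference spectrum, which is invariant to the number of bands. Here is a minimal sketch of a click-conditioned spectral angle map; the fusion with RGB foundation-model features is SAMSA's contribution and is not reproduced here.

```python
import numpy as np

def spectral_angle_map(cube: np.ndarray, click_yx: tuple) -> np.ndarray:
    """Spectral angle (radians) between every pixel's spectrum and the
    spectrum under a user click. Smaller angle = more similar material."""
    h, w, b = cube.shape                      # cube: (rows, cols, bands)
    ref = cube[click_yx]                      # (b,) reference spectrum
    flat = cube.reshape(-1, b)
    cos = (flat @ ref) / (
        np.linalg.norm(flat, axis=1) * np.linalg.norm(ref) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0)).reshape(h, w)

cube = np.random.rand(64, 64, 100)            # any band count works
angles = spectral_angle_map(cube, (32, 32))   # click at the image centre
```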

Quantifying the Trajectory of Percutaneous Endoscopic Lumbar Discectomy in 3D Lumbar Models Based on Automated MR Image Segmentation-A Cross-Sectional Study.

Su Z, Wang Y, Huang C, He Q, Lu J, Liu Z, Zhang Y, Zhao Q, Zhang Y, Cai J, Pang S, Yuan Z, Chen Z, Chen T, Lu H

PubMed paper · Jul 31, 2025
Creating a 3D lumbar model and planning a personalized puncture trajectory offers advantages in establishing the working channel for percutaneous endoscopic lumbar discectomy (PELD). However, existing 3D lumbar models seldom include lumbar nerve and dural sac reconstructions and depend primarily on CT images for preoperative trajectory planning. Our study therefore investigates the relationship between different virtual working channels and a 3D lumbar model built from automated MR image segmentation of lumbar bone, nerves, and dural sac at the L4/L5 level. Preoperative lumbar MR images of 50 patients with L4/L5 lumbar disc herniation were collected from a teaching hospital between March 2020 and July 2020. Automated MR image segmentation was first used to create a 3D model of the lumbar spine, including the L4 vertebra, L5 vertebra, intervertebral disc, L4 nerves, dural sac, and skin. Thirty cases were then randomly chosen from the segmentation results to clarify the relationship between various virtual working channels and the lumbar 3D model. A bivariate Spearman's rank correlation analysis was used. Preoperative MR images of the 50 patients (34 males, mean age 45.6 ± 6 years) were used to train and validate the automated segmentation model, which achieved mean Dice scores of 0.906, 0.891, 0.896, 0.695, 0.892, and 0.892 for the L4 vertebra, L5 vertebra, intervertebral disc, L4 nerves, dural sac, and skin, respectively. As the coronal plane angle (CPA) increased, the intersection volume involving the L4 nerves and atypical structures decreased, whereas the intersection volume encompassing the dural sac, L4 inferior articular process, and L5 superior articular process increased; the total intersection volume first decreased, then increased, and then decreased again. As the cross-section angle (CSA) increased, the intersection volumes of the L4 nerves and the dural sac rose; the intersection volume of the L4 inferior articular process grew while that of the L5 superior articular process diminished; and the overall intersection volume and the intersection volume of atypical structures first decreased and then increased. Based on these patterns, the optimal angles for L4/L5 PELD are a CSA of 15° and a CPA of 15°-20°, minimizing harm to the vertebral bones, facet joints, spinal nerves, and dural sac. Additionally, our 3D preoperative planning method could enhance puncture trajectories for individual patients, potentially advancing surgical navigation, robotics, and artificial intelligence in PELD procedures.
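
The "virtual working channel" analysis reduces to a geometric computation: sweep a cylinder along the planned trajectory and count how many voxels of each labeled structure it intersects. Below is a sketch of that computation, assuming a straight, fixed-radius channel in voxel space; the authors' software may model the channel differently.

```python
import numpy as np

def channel_intersection_volumes(labels, entry, target, radius_vox):
    """Voxel counts of each anatomical label inside a straight cylindrical
    working channel from a skin entry point to the disc target.

    labels:        (D, H, W) integer segmentation (0 = background)
    entry, target: (z, y, x) voxel coordinates defining the trajectory
    radius_vox:    channel radius in voxels
    """
    p0, p1 = np.asarray(entry, float), np.asarray(target, float)
    axis = p1 - p0
    length = np.linalg.norm(axis)
    axis /= length
    zz, yy, xx = np.indices(labels.shape)
    pts = np.stack([zz, yy, xx], axis=-1).astype(float) - p0
    t = pts @ axis                          # projection along the trajectory
    radial = np.linalg.norm(pts - t[..., None] * axis, axis=-1)
    inside = (t >= 0) & (t <= length) & (radial <= radius_vox)
    hit = labels[inside]
    return {int(l): int((hit == l).sum()) for l in np.unique(hit) if l != 0}
```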

A brain tumor segmentation enhancement in MRI images using U-Net and transfer learning.

Pourmahboubi A, Arsalani Saeed N, Tabrizchi H

PubMed paper · Jul 31, 2025
This paper presents a novel transfer learning approach for segmenting brain tumors in Magnetic Resonance Imaging (MRI) images. Using Fluid-Attenuated Inversion Recovery (FLAIR) abnormality segmentation masks and MRI scans from The Cancer Genome Atlas's (TCGA's) lower-grade glioma collection, the proposed approach uses a VGG19-based U-Net architecture with fixed pretrained weights. The experimental findings (Area Under the Curve (AUC) of 0.9957, F1-score of 0.9679, Dice coefficient of 0.9679, precision of 0.9541, recall of 0.9821, and Intersection-over-Union (IoU) of 0.9378) demonstrate the effectiveness of the proposed framework. By these metrics, the VGG19-powered U-Net outperforms not only the conventional U-Net model but also variants using other pretrained backbones in the U-Net encoder. Clinical trial registration: Not applicable, as this study used an existing publicly available dataset and did not involve a clinical trial.
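
The core transfer-learning pattern, a pretrained VGG19 encoder with frozen weights feeding a U-Net decoder, can be sketched as follows. This is an illustrative PyTorch reconstruction, not the authors' code; the encoder stage boundaries and decoder widths are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights

class VGG19UNet(nn.Module):
    """U-Net whose encoder is ImageNet-pretrained VGG19, kept frozen."""

    def __init__(self, n_classes: int = 1):
        super().__init__()
        f = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features
        # Encoder stages end at the ReLU before each max-pool.
        self.encs = nn.ModuleList([f[:4], f[4:9], f[9:18], f[18:27], f[27:36]])
        for p in self.encs.parameters():      # fixed pretrained weights
            p.requires_grad = False
        chs = [64, 128, 256, 512, 512]        # channels per encoder stage
        self.ups = nn.ModuleList(
            [nn.ConvTranspose2d(chs[i], chs[i - 1], 2, 2)
             for i in range(4, 0, -1)])
        self.decs = nn.ModuleList(
            [nn.Sequential(nn.Conv2d(2 * chs[i - 1], chs[i - 1], 3, padding=1),
                           nn.ReLU(inplace=True))
             for i in range(4, 0, -1)])
        self.head = nn.Conv2d(chs[0], n_classes, 1)

    def forward(self, x):
        skips = []
        for enc in self.encs:
            x = enc(x)
            skips.append(x)
        x = skips.pop()                       # deepest features, 1/16 res
        for up, dec in zip(self.ups, self.decs):
            x = up(x)                         # upsample, then fuse the skip
            x = dec(torch.cat([x, skips.pop()], dim=1))
        return self.head(x)                   # logits; sigmoid for masks

model = VGG19UNet()
out = model(torch.randn(1, 3, 256, 256))      # -> (1, 1, 256, 256)
```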

Advanced multi-label brain hemorrhage segmentation using an attention-based residual U-Net model.

Lin X, Zou E, Chen W, Chen X, Lin L

PubMed paper · Jul 31, 2025
This study aimed to develop and assess an advanced Attention-Based Residual U-Net (ResUNet) model for accurately segmenting different types of brain hemorrhage on CT images, overcoming the limitations of manual segmentation and of current automated methods in precision and generalizability. A dataset of 1,347 patient CT scans was collected retrospectively, covering six hemorrhage types: subarachnoid hemorrhage (SAH, 231 cases), subdural hematoma (SDH, 198 cases), epidural hematoma (EDH, 236 cases), cerebral contusion (CC, 230 cases), intraventricular hemorrhage (IVH, 188 cases), and intracerebral hemorrhage (ICH, 264 cases). The dataset was divided into 80% for training, using a 10-fold cross-validation approach, and 20% for testing. All CT scans were standardized to a common anatomical space, and intensity normalization was applied for uniformity. The ResUNet model includes attention mechanisms to enhance focus on important features and residual connections to support stable learning and efficient gradient flow. Model performance was assessed using the Dice Similarity Coefficient (DSC), Intersection over Union (IoU), and directed Hausdorff distance (dHD). The model showed excellent performance during both training and testing. On training data, it achieved DSC scores of 95 ± 1.2 for SAH, 94 ± 1.4 for SDH, 93 ± 1.5 for EDH, 91 ± 1.4 for CC, 89 ± 1.6 for IVH, and 93 ± 2.4 for ICH. IoU values ranged from 88 to 93, with dHD between 2.1 and 2.7 mm. Testing results confirmed strong generalization, with DSC scores of 93 for SAH, 93 for SDH, 92 for EDH, 90 for CC, 88 for IVH, and 92 for ICH. IoU values were also high, indicating precise segmentation with minimal boundary errors. The ResUNet model outperformed standard U-Net variants, achieving higher multi-label segmentation accuracy, which makes it a valuable tool for clinical applications that require fast and reliable brain hemorrhage analysis. Future research could investigate semi-supervised techniques and 3D segmentation to further enhance clinical use. Clinical trial registration: Not applicable.
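
The abstract does not detail the attention mechanism, but additive attention gates on skip connections (as in Attention U-Net) are the standard choice for this architecture family. The sketch below shows such a gate as an assumption about the design, not the authors' exact block.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate on a U-Net skip connection: the coarser
    decoder features ("gate") decide which skip features to keep."""

    def __init__(self, skip_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.w_skip = nn.Conv2d(skip_ch, inter_ch, 1)
        self.w_gate = nn.Conv2d(gate_ch, inter_ch, 1)
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, 1), nn.Sigmoid())

    def forward(self, skip, gate):
        # Upsample the gate to the skip's spatial size, then score pixels.
        g = nn.functional.interpolate(gate, size=skip.shape[2:],
                                      mode="bilinear", align_corners=False)
        attn = self.psi(torch.relu(self.w_skip(skip) + self.w_gate(g)))
        return skip * attn  # suppress irrelevant skip features

skip = torch.randn(1, 64, 128, 128)   # encoder features
gate = torch.randn(1, 128, 64, 64)    # decoder features, one level coarser
gated = AttentionGate(64, 128, 32)(skip, gate)  # -> (1, 64, 128, 128)
```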

A successive framework for brain tumor interpretation using Yolo variants.

Priyadharshini S, Bhoopalan R, Manikandan D, Ramaswamy K

PubMed paper · Jul 31, 2025
Accurate identification and segmentation of brain tumors in Magnetic Resonance Imaging (MRI) images are critical for timely diagnosis and treatment. MRI is frequently used to diagnose these disorders, but manual evaluation of MRI images is challenging for clinicians because of time constraints and inter-reader variability. Computerized methods such as R-CNNs, attention models, and earlier YOLO variants face limitations due to high computational demands and suboptimal segmentation performance. To overcome these limitations, this study proposes a successive framework that evaluates YOLOv9, YOLOv10, and YOLOv11 for tumor detection and segmentation using the Figshare Brain Tumor dataset (2,100 images) and the BraTS2020 dataset (3,170 MRI slices). Preprocessing involves a log transformation for intensity normalization, histogram equalization for contrast enhancement, and edge-based ROI extraction. The models were trained on 80% of the combined dataset and evaluated on the remaining 20%. YOLOv11 demonstrated superior performance, achieving 96.22% classification accuracy on BraTS2020 and 96.41% on Figshare, with an F1-score of 0.990, recall of 0.984, mAP@0.5 of 0.993, and mAP@[0.5:0.95] of 0.801 during testing. With a fast inference time of 5.3 ms and a balanced precision-recall profile, YOLOv11 proves to be a robust, real-time solution for brain tumor detection in clinical applications.
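
The three preprocessing steps are standard image operations and can be sketched directly; the Canny thresholds and the crop-to-edge-extent ROI rule below are illustrative assumptions, since the abstract does not specify them.

```python
import cv2
import numpy as np

def preprocess_mri(img_gray: np.ndarray) -> np.ndarray:
    """Sketch of the paper's preprocessing: log transform, histogram
    equalization, and an edge-based crop to the brain ROI."""
    # 1) Log transform: compress bright outliers, lift dark detail.
    f = img_gray.astype(np.float32)
    log = 255.0 * np.log1p(f) / np.log1p(f.max() + 1e-6)
    # 2) Histogram equalization for contrast enhancement.
    eq = cv2.equalizeHist(log.astype(np.uint8))
    # 3) Edge-based ROI: bound the strongest edges and crop to them.
    edges = cv2.Canny(eq, 50, 150)
    ys, xs = np.nonzero(edges)
    if len(xs) == 0:
        return eq  # no edges found; keep the full frame
    return eq[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```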