Multiscale segmentation using hierarchical phase-contrast tomography and deep learning.

February 2, 2026

Authors

Zhou Y, Aslani S, Javanmardi Y, Brunet J, Stansby D, Carroll S, Bellier A, Ackermann M, Tafforeau P, Lee PD, Walsh CL

Affiliations (9)

  • Multiscale X-ray Imaging (MXI) Lab, Department of Mechanical Engineering, University College London, London, United Kingdom.
  • Satsuma Lab, Hawkes Institute, University College London, London, United Kingdom.
  • Department of Respiratory Medicine, University College London, London, United Kingdom.
  • European Synchrotron Radiation Facility, Grenoble, France.
  • Advanced Research Computing Centre, University College London, London, United Kingdom.
  • Univ. Grenoble Alpes, Department of Anatomy (LADAF), AGEIS, CIC INSERM, Grenoble, France.
  • Institute of Anatomy, University Medical Center of the Johannes Gutenberg University Mainz, Mainz, Germany.
  • Institute of Pathology, Uniklinik RWTH Aachen, Aachen, Germany.
  • Institute of Pathology and Department of Molecular Pathology, Helios University Clinic Wuppertal, Wuppertal, Germany.

Abstract

Biomedical systems span multiple spatial scales, from tiny functional units to entire organs. Interpreting these systems through image segmentation requires the effective propagation and integration of information across scales. However, most existing segmentation methods are optimised for single-scale imaging modalities, limiting their ability to capture and analyse small functional units throughout complete human organs. To facilitate multiscale biomedical image segmentation, we utilised Hierarchical Phase-Contrast Tomography (HiP-CT), an advanced imaging modality that can generate 3D multiscale datasets spanning high-resolution volumes of interest (VOIs) at ca. 1 µm/voxel to whole-organ scans at ca. 20 µm/voxel. Building on these hierarchical multiscale datasets, we developed a deep learning-based segmentation pipeline that is first trained on manually annotated high-resolution HiP-CT data and then extended to lower-resolution whole-organ scans using pseudo-labels generated from high-resolution predictions and multiscale image registration. As a case study, we focused on glomeruli in human kidneys, benchmarking four 3D deep learning models for biomedical image segmentation on a manually annotated high-resolution dataset extracted from VOIs (2.58 to ca. 5 µm/voxel) of four human kidneys. Among them, nnUNet demonstrated the best performance, achieving an average test Dice score of 0.906, and was subsequently used as the baseline model for multiscale segmentation in the pipeline. Applying this pipeline to two low-resolution whole-organ datasets at ca. 25 µm/voxel, the model identified 1,019,890 and 231,179 glomeruli in a 62-year-old donor without kidney disease and a 94-year-old hypertensive donor, respectively, enabling comprehensive morphological analyses, including cortical spatial statistics and glomerular distributions, which aligned well with previous anatomical studies.
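The label-propagation step described above — carrying a high-resolution prediction down to the coarser whole-organ grid as a pseudo-label — can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the function name `downsample_pseudo_label` is an assumption, and a real pipeline would apply the full multiscale registration transform rather than a pure rescale. Nearest-neighbour interpolation (`order=0` in `scipy.ndimage.zoom`) is used so the resampled mask stays binary.

```python
import numpy as np
from scipy.ndimage import zoom

def downsample_pseudo_label(mask: np.ndarray, src_um: float, dst_um: float) -> np.ndarray:
    """Resample a high-resolution binary prediction onto a coarser voxel grid.

    Nearest-neighbour interpolation keeps labels binary. Assumes the
    high- and low-resolution scans are already registered (axis-aligned).
    """
    factor = src_um / dst_um  # e.g. 2.5 um -> 25 um gives a 0.1x rescale
    return zoom(mask.astype(np.uint8), factor, order=0).astype(bool)

# toy example: a 100^3 high-resolution mask containing one labelled blob
hi = np.zeros((100, 100, 100), dtype=bool)
hi[20:60, 20:60, 20:60] = True
lo = downsample_pseudo_label(hi, src_um=2.5, dst_um=25.0)
print(lo.shape)  # (10, 10, 10)
```

In practice the resampled mask would be paired with the corresponding low-resolution image to fine-tune or retrain the segmentation model at whole-organ scale.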
Our results highlight the effectiveness of the proposed pipeline for segmenting small functional units in multiscale bioimaging datasets and suggest its broader applicability to other organ systems.
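For reference, the Dice score used to benchmark the models measures voxel-wise overlap between a predicted and a ground-truth mask (1.0 is perfect agreement). A minimal NumPy sketch, with an illustrative function name:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect overlap
    return 2.0 * intersection / denom

# toy 3D example: two partially overlapping cubes
a = np.zeros((10, 10, 10), dtype=bool); a[2:8, 2:8, 2:8] = True
b = np.zeros((10, 10, 10), dtype=bool); b[3:9, 3:9, 3:9] = True
print(round(dice_score(a, b), 3))  # → 0.579
```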

Topics

Journal Article
