
New Framework Compares AI Segmentation Without Ground Truth Annotations

EurekAlert · Research

Researchers introduce an open-source approach for evaluating AI anatomy segmentation models in medical imaging without requiring ground truth annotations.

Key Details

  • Six open-source AI segmentation models were evaluated on chest CT scans from the National Lung Screening Trial (NLST).
  • All model outputs were standardized using the DICOM format and harmonized medical lexicons (SNOMED CT) for fair comparison.
  • Strong agreement was found across models for lung segmentation, but inconsistencies and systematic errors appeared for bones and cardiac structures.
  • Interactive visualization tools built on the OHIF Viewer and a 3D Slicer plugin enabled detailed side-by-side model review.
  • The framework and datasets are open source, enabling identification of reliable models and flagging of problematic cases.
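The article does not specify how cross-model agreement was quantified. A common choice for comparing segmentations without ground truth is pairwise Dice overlap between model outputs; the sketch below illustrates the idea with hypothetical voxel-coordinate sets (the function name and toy data are assumptions, not part of the published framework).

```python
def dice(a: set, b: set) -> float:
    """Dice overlap between two voxel-coordinate sets (1.0 = identical)."""
    denom = len(a) + len(b)
    return 2.0 * len(a & b) / denom if denom else 1.0

# Hypothetical voxel sets from two models segmenting the same structure
model_a = {(1, 1), (1, 2), (2, 1), (2, 2)}
model_b = {(1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3)}

print(dice(model_a, model_b))  # prints 0.8
```

High pairwise Dice across models (as reported for the lungs) suggests reliable segmentation; low or inconsistent scores (as for bones and cardiac structures) flag cases for expert review.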

Why It Matters

This framework addresses a major barrier for the radiology AI field by enabling fair, evidence-based model comparison in settings lacking expert-annotated ground truth. Such approaches help researchers select appropriate AI tools for large-scale studies and highlight the variability between widely used segmentation models.
