A probabilistic, non-parametric framework for inter-modality label fusion. Academic Article

Overview

abstract

  • Multi-atlas techniques are commonplace in medical image segmentation due to their high performance and ease of implementation. Locally weighting the contributions from the different atlases in the label fusion process can improve the quality of the segmentation. However, how to define these weights in a principled way in inter-modality scenarios remains an open problem. Here we propose a label fusion scheme that does not require voxel intensity consistency between the atlases and the target image to be segmented. The method is based on a generative model of image data in which each intensity in the atlases has an associated conditional distribution of corresponding intensities in the target. The segmentation is computed using variational expectation-maximization (VEM) in a Bayesian framework. The method was evaluated with a dataset of eight proton density-weighted brain MRI scans with nine labeled structures of interest. The results show that the algorithm outperforms majority voting and a recently published inter-modality label fusion algorithm.
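
  The generative idea in the abstract lends itself to a compact illustration. The sketch below (Python/NumPy) is entirely hypothetical: the equal-width binning, histogram smoothing, and all function names are assumptions for illustration, not the paper's method. It estimates a non-parametric conditional distribution of target intensities given each atlas's intensities, uses those conditionals as voxel-wise fusion weights, and includes the majority-voting baseline mentioned in the abstract. The paper's actual inference, a Bayesian model solved with variational EM, is deliberately omitted here.

  ```python
  # Hypothetical sketch of intensity-conditioned multi-atlas label fusion.
  # NOT the paper's VEM algorithm; a simplified stand-in for the core idea:
  # each atlas intensity has an associated conditional distribution of
  # target intensities, and those conditionals supply local fusion weights.
  import numpy as np


  def bin_indices(img, n_bins):
      """Map intensities to bin indices 0..n_bins-1 (equal-width bins)."""
      edges = np.linspace(img.min(), img.max(), n_bins + 1)[1:-1]
      return np.digitize(img, edges)


  def conditional_histogram(a_bins, t_bins, n_bins):
      """Row-normalized joint histogram: row i approximates
      p(target-intensity bin | atlas-intensity bin i)."""
      joint = np.zeros((n_bins, n_bins))
      np.add.at(joint, (a_bins.ravel(), t_bins.ravel()), 1.0)
      joint += 1e-3  # smoothing: keep every conditional strictly positive
      return joint / joint.sum(axis=1, keepdims=True)


  def fuse_labels(atlas_imgs, atlas_labels, target_img, n_labels, n_bins=32):
      """Voxel-wise weighted vote; each atlas's weight at a voxel is the
      likelihood its conditional model assigns to the observed target
      intensity, so no intensity consistency across modalities is needed."""
      t_bins = bin_indices(target_img, n_bins)
      votes = np.zeros((n_labels,) + target_img.shape)
      for a_img, a_lab in zip(atlas_imgs, atlas_labels):
          a_bins = bin_indices(a_img, n_bins)
          cond = conditional_histogram(a_bins, t_bins, n_bins)
          w = cond[a_bins, t_bins]  # p(observed target | atlas intensity)
          for lab in range(n_labels):
              votes[lab] += w * (a_lab == lab)
      return votes.argmax(axis=0)


  def majority_vote(atlas_labels, n_labels):
      """Baseline from the abstract: unweighted majority voting."""
      votes = np.zeros((n_labels,) + np.asarray(atlas_labels[0]).shape)
      for a_lab in atlas_labels:
          for lab in range(n_labels):
              votes[lab] += (a_lab == lab)
      return votes.argmax(axis=0)


  if __name__ == "__main__":
      # Tiny synthetic check: the target has inverted contrast relative to
      # the atlases, so direct intensity matching would fail, but the
      # learned conditional mapping still yields sensible weights.
      rng = np.random.default_rng(0)
      truth = (rng.random((64, 64)) > 0.5).astype(int)
      atlas_labels = [np.where(rng.random(truth.shape) < 0.1, 1 - truth, truth)
                      for _ in range(5)]  # noisy atlas label maps
      atlas_imgs = [lab * 100.0 + rng.normal(0, 5, truth.shape)
                    for lab in atlas_labels]
      target_img = 200.0 - truth * 100.0 + rng.normal(0, 5, truth.shape)
      fused = fuse_labels(atlas_imgs, atlas_labels, target_img, n_labels=2)
      print("weighted fusion agreement:", (fused == truth).mean())
      print("majority vote agreement:  ",
            (majority_vote(atlas_labels, 2) == truth).mean())
  ```

  The row-normalized joint histogram is one simple non-parametric estimator of the conditional intensity distributions the abstract refers to; where an atlas voxel's label was wrong, the observed target intensity tends to fall in a low-probability bin of that atlas's conditional, so the erroneous vote is downweighted.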

publication date

  • January 1, 2013

Research

keywords

  • Brain
  • Magnetic Resonance Imaging
  • Models, Anatomic
  • Models, Neurological
  • Models, Statistical
  • Pattern Recognition, Automated
  • Subtraction Technique

Identity

PubMed Central ID

  • PMC3974705

Scopus Document Identifier

  • 84894632325

Digital Object Identifier (DOI)

  • 10.1007/978-3-642-40760-4_72

PubMed ID

  • 24505808

Additional Document Info

volume

  • 16

issue

  • Pt 3