
GrabCut Extensions

Björn Fröhlich, Christoph Göring and Joachim Denzler


Overview

This work analyzes how to utilize the power of the popular GrabCut algorithm for the task of pixel-wise labeling of images, which is also known as semantic segmentation and is an important step for scene understanding in various application domains. In contrast to the original GrabCut method, the aim is to segment objects in images in a completely automatic manner and label them as one of the previously learned object categories. In this paper, we introduce and analyze two different approaches that extend GrabCut to make use of training images. C-GrabCut generates multiple class-specific segmentations and classifies them using shape and color information. L-GrabCut first applies an object localization algorithm, which returns a classified bounding box as a hypothesis for an object in the image. This hypothesis is then used as the initialization for the GrabCut algorithm. In our experiments, we show that both methods lead to similar results and demonstrate their benefits compared to semantic segmentation methods based only on local features.
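
To make the L-GrabCut pipeline concrete, the following Python sketch shows how a classified bounding box from a detector could seed a GrabCut segmentation using OpenCV's cv2.grabCut. This is an illustrative sketch, not the authors' implementation: the function name l_grabcut and the box/label inputs are placeholders for whatever object localizer is used.

    import cv2
    import numpy as np

    def l_grabcut(image, box, label, iterations=5):
        """Refine a detector's classified bounding box into a pixel-wise mask.

        `box` is (x, y, w, h) from an object localizer and `label` is the
        class it predicted; both are assumed inputs in this sketch.
        """
        mask = np.zeros(image.shape[:2], np.uint8)
        bgd_model = np.zeros((1, 65), np.float64)  # GrabCut's internal GMM state
        fgd_model = np.zeros((1, 65), np.float64)
        # Pixels outside the box are treated as background,
        # pixels inside as "probably foreground".
        cv2.grabCut(image, mask, box, bgd_model, fgd_model,
                    iterations, cv2.GC_INIT_WITH_RECT)
        # Collapse GrabCut's four states into a binary foreground mask.
        foreground = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0)
        return label, foreground.astype(np.uint8)

The returned pair mirrors the method's output: the class label from the localizer together with the refined foreground segmentation.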


Figure 1 (SSG_framework.png): This flowchart shows both approaches. C-GrabCut: First, the image is segmented using different class-specific models to obtain initial candidate segmentations. In a second step, a classifier determines which of the candidate segmentations is most likely the correct one. L-GrabCut: The class of the object and its bounding box are determined. GrabCut is then run using the bounding box as initialization. The result of both methods is the class label and the segmentation of the foreground object.
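
The C-GrabCut branch of the flowchart can be sketched in the same spirit, again only as an assumed illustration: one GrabCut run per class-specific prior, followed by a classifier that picks the best segmentation. Here class_priors (a mapping from class label to an initial GrabCut mask derived from that class's appearance model) and score_segmentation (a stand-in for the shape-and-color classifier) are hypothetical inputs, not part of the published method.

    import cv2
    import numpy as np

    def c_grabcut(image, class_priors, score_segmentation, iterations=5):
        """Run GrabCut once per class prior and keep the best-scoring result."""
        best_label, best_mask, best_score = None, None, -np.inf
        for label, init_mask in class_priors.items():
            # init_mask must already contain GrabCut states
            # (GC_BGD / GC_FGD / GC_PR_BGD / GC_PR_FGD) for this class.
            mask = init_mask.copy()
            bgd = np.zeros((1, 65), np.float64)
            fgd = np.zeros((1, 65), np.float64)
            cv2.grabCut(image, mask, None, bgd, fgd,
                        iterations, cv2.GC_INIT_WITH_MASK)
            fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                          1, 0).astype(np.uint8)
            score = score_segmentation(label, fg)
            if score > best_score:
                best_label, best_mask, best_score = label, fg, score
        return best_label, best_mask

As in Figure 1, the output is again a class label plus the corresponding foreground segmentation; the difference is that the class decision is made after segmentation rather than before it.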