
Model Transfer for Active Appearance Models

Daniel Haase and Joachim Denzler


For object modeling, landmark detection and analysis, Active Appearance Models (AAMs) are among the most successfully used methods. AAMs are statistical generative models that describe deformable objects based on a shape and a (shape-free) texture component. During training, principal component analysis (PCA) is performed on training images with annotated landmarks to obtain a parameterised description of shape and texture. A major drawback of this method is the large number of annotated images required for the training step. When learned from only a few examples, AAMs show a weak generalisation ability.
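The PCA step of the shape component can be sketched as follows. This is a generic, minimal illustration, assuming Procrustes-aligned landmark coordinates as input; the function name and the 95% variance threshold are illustrative choices, not details from the paper.

```python
import numpy as np

def fit_shape_model(shapes, var_kept=0.95):
    """Fit a PCA shape model to aligned landmark sets.

    shapes: (n_samples, 2 * n_landmarks) array of annotated,
    Procrustes-aligned landmark coordinates (assumed input format).
    Returns the mean shape, the principal modes of variation,
    and the variance explained by each retained mode.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # PCA via SVD of the centered data matrix
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    var = s ** 2 / (len(shapes) - 1)
    # keep just enough modes to explain the requested variance fraction
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_kept)) + 1
    return mean, vt[:k], var[:k]

# Any shape is then approximated as mean + b @ modes
# for a low-dimensional parameter vector b.
```

The texture component is built analogously, with PCA applied to the shape-normalised image intensities instead of the landmark coordinates.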
A possible strategy is to extend an existing AAM with a small number of meaningful images. This can be done using transfer learning, a technique that has gained more and more attention in recent years. By transferring the new information into the target model, we overcome the above-mentioned drawback of AAMs.

Instance Weight Transfer Learning

Selective Weighted Transfer Learning Overview
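One simple way instance weights can enter the model-building step is a weighted PCA over the pooled source and target samples, where source instances that resemble the target task receive larger weights. The sketch below is a generic weighted-PCA formulation under that assumption; it is not the exact weighting scheme of the paper.

```python
import numpy as np

def weighted_pca(data, weights, n_modes):
    """Weighted PCA over pooled source + target samples.

    data: (n, d) stacked samples (hypothetical layout);
    weights: (n,) non-negative instance weights, e.g. large for
    target samples and relevant source samples, small otherwise.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    mean = w @ data                              # weighted mean
    # scale each centered sample by sqrt(weight) so the SVD of the
    # result yields the eigenvectors of the weighted covariance
    centered = (data - mean) * np.sqrt(w)[:, None]
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_modes]
```

With uniform weights this reduces to ordinary PCA, so the target-only and union baselines are special cases of the same formulation.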

Results on facial datasets

Results (quantitative) on IMM and CK+

The transfer method was tested on two popular face benchmark datasets: IMM (Stegmann et al., 2003) and CK+ (Lucey et al., 2010). Both datasets contain images with annotated landmarks and various standardised facial expressions. In the experiments, two separate AAMs were trained: on the one side, the target model, from a small number of neutral images showing a frontal head pose and minimal facial expressiveness, so that the model is not able to detect emotional expressions; on the other side, the generic source model, from the whole variety of facial actions.
The results are compared against state-of-the-art AAM transfer methods: the source-only, target-only and source-target-union transfer by Daumé (2007); the full source variation transfer by de la Hunty et al. (2010); and the subspace transfer by Theobald et al. (2007) and Anderson et al. (2013). Our approach outperforms all previous approaches on both evaluation datasets. At first glance, the median geometric pixel errors of the subspace transfer and our method are quite close, but the differences are significant.
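The evaluation measure can be computed as sketched below, assuming fitted and ground-truth landmarks per image; the median is then taken over all test images and landmarks (the exact aggregation protocol is an assumption here).

```python
import numpy as np

def geometric_pixel_error(pred, gt):
    """Per-landmark Euclidean pixel distance between a fitted shape
    and the ground-truth annotation; both arrays (n_landmarks, 2)."""
    return np.linalg.norm(pred - gt, axis=1)

# e.g.: errors = np.concatenate(
#     [geometric_pixel_error(p, g) for p, g in fitting_results])
# reported_value = np.median(errors)
```

Using the median rather than the mean keeps the reported figure robust against the few test images on which a fit diverges completely.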

Results (qualitative) on IMM dataset

Qualitative results for different AAM transfer approaches for two example individuals from the IMM face dataset. The target AAM was trained on neutral images only and thus has never seen non-neutral expressions before.



Daniel Haase, Erik Rodner and Joachim Denzler. Instance-weighted Transfer Learning of Active Appearance Models. Conference on Computer Vision and Pattern Recognition (CVPR). 2014. [pdf] [bib]