
Incremental Learning

How to deal with new objects and classes?

Alexander Freytag, Paul Bodesheim, Erik Rodner, Christoph Käding, and Joachim Denzler


Motivation

This project addresses key components of life-long learning. Our goal is to develop methods that allow for efficient learning when new examples of known classes become available or even completely new classes are discovered. Obviously, a tabula-rasa re-training is not feasible for realistic applications, since it takes far too much time and memory resources. Consequently, we aim at developing methods to efficiently incorporate new knowledge without losing the ability to differentiate between the categories known so far.


Incremental Learning with Deep Neural Networks


[Kaeding16_FDN]

Christoph Käding and Erik Rodner and Alexander Freytag and Joachim Denzler. Fine-tuning Deep Neural Networks in Continuous Learning Scenario. ACCV Workshop on Interpretation and Visualization of Deep Neural Nets (ACCV-WS). 2016. (Oral). [pdf] [bib]
Abstract: The revival of deep neural networks and the availability of ImageNet laid the foundation for recent success in highly complex recognition tasks. However, ImageNet does not cover all visual concepts of all possible application scenarios. Hence, application experts still record new data constantly and expect the data to be used upon its availability. In this paper, we follow this observation and apply the classical concept of fine-tuning deep neural networks to scenarios where data from known or completely new classes is continuously added. Besides a straightforward realization of continuous fine-tuning, we empirically analyze how computational burdens of training can be further reduced. Finally, we visualize how the network's attention maps evolve over time, which allows for visually investigating what the network learned during continuous fine-tuning.
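The core idea of continuous fine-tuning can be illustrated with a deliberately small sketch: instead of re-training from scratch, the current model is updated in place whenever a new batch arrives, and a newly discovered class simply extends the output layer. This is a toy numpy softmax classifier under our own illustrative assumptions (linear model, plain gradient descent, replay of a few old examples), not the CNN setup used in the paper.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

class ContinualSoftmax:
    """Toy linear softmax classifier that is fine-tuned in place
    whenever a new batch of labelled data arrives."""

    def __init__(self, n_features, n_classes, lr=0.2):
        self.W = np.zeros((n_classes, n_features))
        self.lr = lr

    def add_class(self):
        # A newly discovered class extends the output layer by one
        # zero-initialised row; all previously learned weights are kept.
        self.W = np.vstack([self.W, np.zeros((1, self.W.shape[1]))])

    def fine_tune(self, X, y, epochs=200):
        # Plain gradient descent on the cross-entropy loss, starting
        # from the current weights instead of from scratch.
        for _ in range(epochs):
            P = softmax(X @ self.W.T)         # (n, C) class probabilities
            Y = np.eye(self.W.shape[0])[y]    # one-hot labels
            self.W -= self.lr * (P - Y).T @ X / len(X)

    def predict(self, X):
        return (X @ self.W.T).argmax(axis=1)
```

In a continuous scenario one would call `fine_tune` on each incoming batch and `add_class` whenever a new category appears, optionally replaying a few stored examples of old classes to avoid forgetting them.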

Efficient Incremental Learning with Fast Gaussian Processes and Histogram Intersection Kernels


[Freytag12:RUC]

Alexander Freytag and Erik Rodner and Paul Bodesheim and Joachim Denzler. Rapid Uncertainty Computation with Gaussian Processes and Histogram Intersection Kernels. Asian Conference on Computer Vision (ACCV). 2012. (Oral). [pdf] [bib] Best Paper Honorable Mention
Short summary: The main focus of this paper is to speed up the computation of the GP posterior variance as well as the model update when a new sample is included. Interestingly, these techniques can successfully be used to speed up the computation of active learning scores as well. Experimental results highlight that both steps, active selection and model update, benefit significantly from our techniques in terms of computation time, which enables life-long learning even in scenarios with more than a handful of classes and samples.
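The quantity being accelerated here is the GP predictive variance under the histogram intersection kernel, which also serves as an uncertainty score for active sample selection. The sketch below computes it the naive way, with an O(n³) Cholesky factorization; the paper's contribution is precisely avoiding this cost for the HIK, and that fast algorithm is not reproduced here.

```python
import numpy as np

def hik(A, B):
    """Histogram intersection kernel: k(a, b) = sum_d min(a_d, b_d).
    A is (n, d), B is (m, d); returns the (n, m) kernel matrix."""
    return np.minimum(A[:, None, :], B[None, :, :]).sum(axis=2)

def gp_variance(X_train, X_cand, noise=0.1):
    """GP predictive variance at candidate points (naive version):
    sigma^2(x) = k(x, x) - k_*^T (K + noise*I)^{-1} k_*."""
    K = hik(X_train, X_train) + noise * np.eye(len(X_train))
    L = np.linalg.cholesky(K)            # O(n^3) step the paper avoids
    Ks = hik(X_train, X_cand)            # (n, m) cross-kernel k_*
    v = np.linalg.solve(L, Ks)           # solve L v = k_* column-wise
    k_ss = hik(X_cand, X_cand).diagonal()
    return k_ss - (v ** 2).sum(axis=0)

def query_most_uncertain(X_train, X_cand, noise=0.1):
    """Uncertainty-based active learning: pick the candidate with the
    largest predictive variance."""
    return int(gp_variance(X_train, X_cand, noise).argmax())
```

Candidates close to the training histograms get a variance near the noise level, while dissimilar histograms keep a variance close to their prior variance, which is why the score favors querying unexplored regions.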

Incremental Learning with Gaussian Processes

(Figure: incremental learning scenario)

[Luetz13:IWT]

Alexander Lütz and Erik Rodner and Joachim Denzler. I Want To Know More - Efficient Multi-Class Incremental Learning Using Gaussian Processes. Pattern Recognition and Image Analysis. 2013. Vol. 23. No. 3. 402--407. [pdf] [bib]

[Luetz11:EIL]

Alexander Lütz and Erik Rodner and Joachim Denzler. Efficient Multi-Class Incremental Learning Using Gaussian Processes. Open German-Russian Workshop on Pattern Recognition and Image Understanding (OGRW). 2011. 182--185. (Oral).  [pdf] [bib]
Short summary: One of the main assumptions in machine learning is that sufficient training data is available in advance and batch learning can be applied. However, because of the dynamics of many applications, this assumption breaks down over time in almost all cases. Classifiers therefore have to be able to adapt when new training data from existing or new classes becomes available, existing training data is changed, or data should even be removed. In this paper, we present a method allowing efficient incremental learning of a Gaussian process classifier. Experimental results show the benefits in terms of required computation time compared to building the classifier from scratch.
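The kind of incremental update this line of work relies on can be sketched with the standard block-Cholesky extension: when a new sample arrives, the kernel matrix grows by one row and column, and the existing Cholesky factor is extended in O(n²) instead of being re-factorized in O(n³). This is the generic textbook update on a toy GP regressor with an RBF kernel, not necessarily the authors' exact multi-class algorithm.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """Squared-exponential kernel between row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

class IncrementalGP:
    """GP regressor whose Cholesky factor of K + noise*I is extended
    in O(n^2) per new sample instead of re-factorized in O(n^3)."""

    def __init__(self, noise=1e-2):
        self.noise = noise
        self.X = np.empty((0, 0))
        self.L = np.empty((0, 0))
        self.y = np.empty(0)

    def add(self, x, y):
        x = np.atleast_2d(x)
        if self.X.size == 0:
            self.X = x
            self.L = np.sqrt(rbf(x, x) + self.noise)  # 1x1 factor
            self.y = np.array([y], dtype=float)
            return
        # Block update: [[K, k], [k^T, kappa]] = L' L'^T with
        # L' = [[L, 0], [l^T, lam]], L l = k, lam^2 = kappa - l^T l.
        k = rbf(self.X, x)[:, 0]
        l = np.linalg.solve(self.L, k)
        lam = np.sqrt(rbf(x, x)[0, 0] + self.noise - l @ l)
        n = len(self.X)
        Lnew = np.zeros((n + 1, n + 1))
        Lnew[:n, :n] = self.L
        Lnew[n, :n] = l
        Lnew[n, n] = lam
        self.L = Lnew
        self.X = np.vstack([self.X, x])
        self.y = np.append(self.y, y)

    def predict(self, x):
        """Posterior mean k_*^T (K + noise*I)^{-1} y via two solves."""
        k = rbf(self.X, np.atleast_2d(x))[:, 0]
        alpha = np.linalg.solve(self.L.T, np.linalg.solve(self.L, self.y))
        return k @ alpha
```

Adding a sample thus costs one triangular solve and a vector append, which is what makes incremental GP learning practical compared with building the classifier from scratch after every new observation.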

Publications

[Kaeding16_FDN]

Christoph Käding and Erik Rodner and Alexander Freytag and Joachim Denzler. Fine-tuning Deep Neural Networks in Continuous Learning Scenario. ACCV Workshop on Interpretation and Visualization of Deep Neural Nets (ACCV-WS). 2016. (Oral). [pdf] [bib]

[Luetz13:IWT]

Alexander Lütz and Erik Rodner and Joachim Denzler. I Want To Know More - Efficient Multi-Class Incremental Learning Using Gaussian Processes. Pattern Recognition and Image Analysis. 2013. Vol. 23. No. 3. 402--407. [pdf] [bib]

[Freytag12:RUC]

Alexander Freytag and Erik Rodner and Paul Bodesheim and Joachim Denzler. Rapid Uncertainty Computation with Gaussian Processes and Histogram Intersection Kernels. Asian Conference on Computer Vision (ACCV). 2012. (Oral). [pdf] [bib] Best Paper Honorable Mention

[Luetz11:EIL]

Alexander Lütz and Erik Rodner and Joachim Denzler. Efficient Multi-Class Incremental Learning Using Gaussian Processes. Open German-Russian Workshop on Pattern Recognition and Image Understanding (OGRW). 2011. 182--185. (Oral). [pdf] [bib]