
Dipl.-Inf. Paul Bodesheim

Former Research Associate

Contact

Address:
Computer Vision Group
Institute for Computer Science
Department of Mathematics and Computer Science
Friedrich Schiller University Jena
Ernst-Abbe-Platz 2
07743 Jena
Germany
Room: 1219
Phone: +49 (0) 3641 9 46423
E-Mail: paul.bodesheim (at) uni-jena.de
XING: https://www.xing.com/profile/Paul_Bodesheim
ResearchGate: https://www.researchgate.net/profile/Paul_Bodesheim

(Click [here] to view this page in German!)

Research topics

 

  • Detecting novel object categories (Click [here] to go to the project page.)
  • Incremental learning of object categories (Click [here] to go to the project page.)
  • Efficient large-scale Gaussian processes (Click [here] to go to the project page.)

Interests

  • Pattern recognition and machine learning in computer vision
  • Object detection and object recognition
  • Object discovery, novelty detection, and one-class classification
  • Unsupervised learning, e.g., spectral clustering
  • Optical character recognition (OCR), e.g., license plate recognition

Studies

  • 2006-2011: Diploma in computer science at the Friedrich Schiller University of Jena
  • Focus: Digital image processing/computer vision
  • Student research project: Enhancements for the License Plate Recognition System LprJ - Confidence Measures for Plate Hypotheses and Preprocessing of Complex-Structured Plates
  • Diploma thesis: Object Discovery - Unsupervised Learning of Object Categories
 


Publications

(Please click [here] for a compact chronological overview.)

 

- Novelty detection and one-class classification



[Bodesheim15:LND]

Paul Bodesheim and Alexander Freytag and Erik Rodner and Joachim Denzler: Local Novelty Detection in Multi-class Recognition Problems. IEEE Winter Conference on Applications of Computer Vision (WACV). 2015. pages 813--820. [paper] [bib] [supplemental_material] [code] [github] [slides] [poster]
Abstract. In this paper, we propose using local learning for multi-class novelty detection, a framework that we call local novelty detection. Estimating the novelty of a new sample is an extremely challenging task due to the large variability of known object categories. The features used to judge novelty are often very specific to the object in the image, and therefore we argue that individual novelty models for each test sample are important. Similar to human experts, it seems intuitive to first look for the most related images, thus filtering out unrelated data. Afterwards, the system focuses on discovering similarities and differences to those images only. Therefore, we claim that it is beneficial to solely consider training images most similar to a test sample when deciding about its novelty. Following the principle of local learning, for each test sample a local novelty detection model is learned and evaluated. Our local novelty score turns out to be a valuable indicator for deciding whether the sample belongs to a known category from the training set or to a new, unseen one. With our local novelty detection approach, we achieve state-of-the-art performance in multi-class novelty detection on two popular visual object recognition datasets, Caltech-256 and ImageNet. We further show that our framework: (i) can be successfully applied to unknown face detection using the Labeled-Faces-in-the-Wild dataset and (ii) outperforms recent work on attribute-based unfamiliar class detection in fine-grained recognition of bird species on the challenging CUB-200-2011 dataset.
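The local-learning idea can be illustrated with a minimal numpy sketch. This is a deliberately simplified illustration, not the model from the paper: for each test sample, only its k nearest training samples are considered, and a simple distance-based score (my own choice here) is computed from that local neighborhood.

```python
import numpy as np

def local_novelty_score(X_train, x_test, k=10):
    """Score the novelty of x_test using only its k nearest training samples.

    Simplified sketch: the score is the mean distance to the local
    neighborhood, normalized by the neighborhood's own spread, so that
    dense and sparse regions of the training set are treated comparably.
    """
    d = np.linalg.norm(X_train - x_test, axis=1)   # distances to all training samples
    idx = np.argsort(d)[:k]                        # indices of the k nearest neighbors
    local = X_train[idx]
    # spread of the neighborhood: mean pairwise distance among the k neighbors
    diffs = local[:, None, :] - local[None, :, :]
    spread = np.mean(np.linalg.norm(diffs, axis=2)) + 1e-12
    return float(np.mean(d[idx]) / spread)         # large score => likely novel
```

A sample inside a dense training region receives a low score, a sample far from every neighborhood a high one; the paper replaces this simple distance score with a local novelty model learned per test sample.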




[Bodesheim13:KNS]

Paul Bodesheim and Alexander Freytag and Erik Rodner and Michael Kemmler and Joachim Denzler: Kernel Null Space Methods for Novelty Detection. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2013. pages 3374--3381. [paper] [bib] [project] [code] [github] [poster]

Abstract.
Detecting samples from previously unknown classes is a crucial task in object recognition, especially when dealing with real-world applications where the closed-world assumption does not hold. We present how to apply a null space method for novelty detection, which maps all training samples of one class to a single point. Besides the possibility of modeling a single class, we are able to treat multiple known classes jointly and to detect novelties for a set of classes with a single model. In contrast to modeling the support of each known class individually, our approach makes use of a projection into a joint subspace where training samples of all known classes have zero intra-class variance. This subspace is called the null space of the training data. To decide about the novelty of a test sample, our null space approach allows for solely relying on a distance measure instead of performing density estimation directly. Therefore, we derive a simple yet powerful method for multi-class novelty detection, an important problem not studied sufficiently so far. Our novelty detection approach is assessed in comprehensive multi-class experiments using the publicly available datasets Caltech-256 and ImageNet. The analysis reveals that our null space approach is perfectly suited for multi-class novelty detection since it outperforms all other approaches.
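A linear toy sketch of the null space idea (a simplification under my own assumptions; the paper kernelizes this, which corresponds to the same construction in a kernel-induced feature space): projecting onto the null space of the within-class scatter collapses each training class to a single point, and novelty is scored as the distance to the nearest class point. A non-trivial null space requires more feature dimensions than training samples.

```python
import numpy as np

def null_space_novelty(X, y, X_test):
    """Linear null space novelty detection (toy version).

    Projects onto directions with zero within-class scatter; each training
    class then collapses to one point, and novelty is the distance of a
    projected test sample to the nearest class point.
    """
    classes = np.unique(y)
    # within-class scatter matrix
    Sw = np.zeros((X.shape[1], X.shape[1]))
    for c in classes:
        Xc = X[y == c] - X[y == c].mean(axis=0)
        Sw += Xc.T @ Xc
    # null space of Sw: eigenvectors with (numerically) zero eigenvalue
    evals, evecs = np.linalg.eigh(Sw)
    P = evecs[:, evals < 1e-8]                     # projection basis, shape (d, m)
    # each class collapses to its projected mean ("target point")
    targets = np.array([X[y == c].mean(axis=0) @ P for c in classes])
    Z = X_test @ P
    # novelty score: distance to the nearest collapsed class point
    return np.min(np.linalg.norm(Z[:, None, :] - targets[None, :, :], axis=2), axis=1)
```

Training samples score (numerically) zero by construction, since every within-class deviation vanishes in the null space; samples from unseen classes generally do not.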




[Bodesheim13:AOG]

Paul Bodesheim and Alexander Freytag and Erik Rodner and Joachim Denzler: Approximations of Gaussian Process Uncertainties for Visual Recognition Problems. Scandinavian Conference on Image Analysis (SCIA). 2013. pages 182--194. [paper] [bib] [project] [code]

Abstract.
Gaussian processes offer the advantage of calculating the classification uncertainty in terms of predictive variance associated with the classification result. This is especially useful to select informative samples in active learning and to spot samples of previously unseen classes known as novelty detection. However, the Gaussian process framework suffers from high computational complexity leading to computation times too large for practical applications. Hence, we propose an approximation of the Gaussian process predictive variance leading to rigorous speedups. The complexity of both learning and testing the classification model regarding computational time and memory demand decreases by one order with respect to the number of training samples involved. The benefits of our approximations are verified in experimental evaluations for novelty detection and active learning of visual object categories on the datasets C-Pascal of Pascal VOC 2008, Caltech-256, and ImageNet.




[Krishna13:VSB]

Mahesh Venkata Krishna and Paul Bodesheim and Joachim Denzler: Video Segmentation by Event Detection: A Novel One-class Classification Approach. 4th International Workshop on Image Mining, Theory and Applications (IMTA-4). 2013. [paper] [bib]


[Krishna14:TVS]

Mahesh Venkata Krishna and Paul Bodesheim and Marco Körner and Joachim Denzler: Temporal Video Segmentation by Event Detection: A Novelty Detection Approach. Pattern Recognition and Image Analysis (PRIA). April 2014. vol. 24(2): 243--255. [paper] [bib]


Abstract.
Segmenting videos into meaningful image sequences of some particular activities is an interesting problem in computer vision. In this paper, a novel algorithm is presented to achieve this semantic video segmentation. The goal is to make the system work unsupervised and generic in terms of application scenarios. The segmentation task is accomplished through event detection in a frame-by-frame processing setup. For event detection, we use a one-class classification approach based on Gaussian processes, which has proven successful in object classification. The algorithm is tested on videos from a publicly available change detection database, and the results clearly show the suitability of our approach for the task of video segmentation.




[Bodesheim12:DOC]

Paul Bodesheim and Erik Rodner and Alexander Freytag and Joachim Denzler: Divergence-Based One-Class Classification Using Gaussian Processes. British Machine Vision Conference (BMVC). 2012. pages 50.1--50.11. [paper] [bib] [project] [poster] [extendedAbstract]

Abstract. We present an information theoretic framework for one-class classification, which allows for deriving several new novelty scores. With these scores, we are able to rank samples according to their novelty and to detect outliers not belonging to a learnt data distribution. The key idea of our approach is to measure the impact of a test sample on the previously learned model. This is carried out in a probabilistic manner using Jensen-Shannon divergence and reclassification results derived from the Gaussian process regression framework. Our method is evaluated using well-known machine learning datasets as well as large-scale image categorisation experiments showing its ability to achieve state-of-the-art performance.

- Large-scale Gaussian processes with histogram intersection kernels





[Rodner12:LGP]

Erik Rodner and Alexander Freytag and Paul Bodesheim and Joachim Denzler: Large-Scale Gaussian Process Classification with Flexible Adaptive Histogram Kernels. European Conference on Computer Vision (ECCV). 2012. pages 85--98. [paper] [bib] [project] [poster]

[Freytag12:RUC]

Alexander Freytag and Erik Rodner and Paul Bodesheim and Joachim Denzler: Rapid Uncertainty Computation with Gaussian Processes and Histogram Intersection Kernels. Asian Conference on Computer Vision (ACCV). 2012. pages 511--524. [paper] [bib] [project] [poster] Best Paper Honorable Mention

[Freytag12:BCL]

Alexander Freytag and Erik Rodner and Paul Bodesheim and Joachim Denzler: Beyond Classification - Large-scale Gaussian Process Inference and Uncertainty Prediction. Big Data Meets Computer Vision: First International Workshop on Large Scale Visual Recognition and Retrieval (NIPS Workshop). 2012. [paper] [bib] [project]

Abstract 1. We present how to perform exact large-scale multi-class Gaussian process classification with parameterized histogram intersection kernels. In contrast to previous approaches, we use a full Bayesian model without any sparse approximation techniques, which allows for learning in sub-quadratic and classification in constant time. To handle the additional model flexibility induced by parameterized kernels, our approach is able to optimize the parameters with large-scale training data. A key ingredient of this optimization is a new efficient upper bound of the negative Gaussian process log-likelihood. Furthermore, we highlight the theoretical connection between efficiency of histogram intersection kernels and properties of Wiener processes. Experiments with image categorization tasks exhibit high performance gains with flexible kernels as well as learning within a few minutes and classification in microseconds for databases, where exact Gaussian process classification was not possible before.
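The efficiency of the histogram intersection kernel (HIK) rests on a sorting and cumulative-sum trick. The numpy sketch below is an illustration under my own assumptions (non-negative histogram features; variable names are mine, not the papers'): it evaluates a kernel score sum_i alpha_i * K(x_i, x*) in O(D log N) per test sample instead of O(D N), by splitting min(x_id, x*_d) into the values below and above x*_d.

```python
import numpy as np

class FastHIKScorer:
    """Fast evaluation of s(x*) = sum_i alpha_i * K(x_i, x*) for the
    histogram intersection kernel K(x, y) = sum_d min(x_d, y_d).

    Per dimension, feature values are sorted once and cumulative sums are
    precomputed; a single score then costs O(D log N) instead of O(D N).
    """

    def __init__(self, X, alpha):
        order = np.argsort(X, axis=0)                   # per-dimension sort, (N, D)
        self.Xs = np.take_along_axis(X, order, axis=0)  # sorted feature values
        a = alpha[order]                                # alpha in sorted order, (N, D)
        self.A = np.cumsum(a * self.Xs, axis=0)         # prefix sums of alpha_i * x_id
        self.C = np.cumsum(a, axis=0)                   # prefix sums of alpha_i
        self.asum = a.sum(axis=0)                       # total alpha sum per dimension

    def score(self, x):
        s = 0.0
        for d in range(x.shape[0]):
            # r = number of training values in dimension d that are <= x[d]
            r = int(np.searchsorted(self.Xs[:, d], x[d], side='right'))
            below = self.A[r - 1, d] if r > 0 else 0.0  # sum of alpha_i * x_id where x_id <= x[d]
            above = self.asum[d] - (self.C[r - 1, d] if r > 0 else 0.0)
            s += below + x[d] * above                   # min() split into the two cases
        return float(s)
```

The score agrees with the naive double loop over samples and dimensions; the speed-up comes purely from replacing the inner sum over training samples by one binary search plus two table lookups.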

Abstract 2. An important advantage of Gaussian processes is the ability to directly estimate classification uncertainties in a Bayesian manner. In this paper, we develop techniques that allow for estimating these uncertainties with a runtime linear or even constant with respect to the number of training examples. Our approach makes use of all training data without any sparse approximation technique while needing only a linear amount of memory. To incorporate new information over time, we further derive online learning methods leading to significant speed-ups and allowing for hyperparameter optimization on-the-fly. We conduct several experiments on public image datasets for the tasks of one-class classification and active learning, where computing the uncertainty is an essential task. The experimental results highlight that we are able to compute classification uncertainties within microseconds even for large-scale datasets with tens of thousands of training examples.

- Active learning with Gaussian process model updates


[Freytag13:LET]

Alexander Freytag and Erik Rodner and Paul Bodesheim and Joachim Denzler: Labeling examples that matter: Relevance-Based Active Learning with Gaussian Processes. German Conference on Pattern Recognition (GCPR). 2013. pages 282--291. [paper] [bib] [supplementary-material]



Abstract.
Active learning is an essential tool to reduce manual annotation costs in the presence of large amounts of unsupervised data. In this paper, we introduce new active learning methods based on measuring the impact of a new example on the current model. This is done by deriving model changes of Gaussian process models in closed form. Furthermore, we study typical pitfalls in active learning and show that our methods automatically balance between the exploitation and the exploration trade-off. Experiments are performed with established benchmark datasets for visual object recognition and show that our new active learning techniques are able to outperform state-of-the-art methods.

- Visual feature analysis


[Freytag14:STB]

Alexander Freytag and Johannes Rühle and Paul Bodesheim and Erik Rodner and Joachim Denzler: Seeing through bag-of-visual-word glasses: towards understanding quantization effects in feature extraction methods. ICPR Workshop on Features and Structures (FEAST). 2014. [extendedAbstract] [bib] [techReport] [poster]
Best Poster Award

Abstract. Vector-quantized local features frequently used in bag-of-visual-words approaches are the backbone of popular visual recognition systems due to both their simplicity and their performance. Despite their success, standard bag-of-words histograms basically contain low-level image statistics (e.g., number of edges of different orientations). The question remains how much visual information is "lost in quantization" when mapping visual features to visual "words", i.e., elements of a codebook.
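The quantization step in question is easy to state concretely. Below is a minimal numpy sketch of standard bag-of-visual-words vector quantization (a generic illustration, not the specific pipeline evaluated in the paper):

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Vector quantization step of a bag-of-visual-words pipeline: assign
    each local descriptor to its nearest codebook entry (its "visual word")
    and count the assignments. Everything about a descriptor except the
    index of its nearest word is discarded -- the information "lost in
    quantization"."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                          # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()                           # normalized word histogram
```

Two descriptors assigned to the same word become indistinguishable in the histogram, which is exactly the quantization effect the paper investigates.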


- Spectral clustering of ROIs for object discovery


[Bodesheim11:SCO]

Paul Bodesheim: Spectral Clustering of ROIs for Object Discovery. 33rd Annual Symposium of the German Association for Pattern Recognition (DAGM). 2011. pages 450--455. [paper] [bib] [poster]


Abstract. Object discovery is one of the most important applications of unsupervised learning. This paper addresses several spectral clustering techniques to attain a categorization of objects in images without additional information such as class labels or scene descriptions. Due to the fact that background textures bias the performance of image categorization methods, a generic object detector based on some general requirements on objects is applied. The object detector provides rectangular regions of interest (ROIs) as object hypotheses independent of the underlying object class. Feature extraction is constrained to these bounding boxes to decrease the influence of background clutter. Another aspect of this work is the utilization of a Gaussian mixture model (GMM) instead of k-means, as usually used after feature transformation in spectral clustering. Several experiments have been conducted, and the combination of spectral clustering techniques with the object detector is compared to the standard approach of computing features on the whole image.
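A compact numpy sketch of the pipeline described above: generic spectral clustering with a small diagonal-covariance GMM in place of the usual k-means step. The affinity kernel, the initialization, and all parameters are my own assumptions, not those of the paper.

```python
import numpy as np

def spectral_embedding(X, k, gamma=1.0):
    """Rows of the top-k eigenvectors of the normalized affinity matrix,
    re-normalized to unit length (Ng/Jordan/Weiss-style embedding)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-gamma * d2)                        # Gaussian affinity
    np.fill_diagonal(W, 0.0)
    dinv = 1.0 / np.sqrt(W.sum(axis=1))
    M = dinv[:, None] * W * dinv[None, :]          # D^{-1/2} W D^{-1/2}
    evals, evecs = np.linalg.eigh(M)
    U = evecs[:, -k:]                              # top-k eigenvectors
    return U / np.linalg.norm(U, axis=1, keepdims=True)

def gmm_labels(Z, k, iters=50):
    """Tiny EM for a diagonal-covariance GMM on the embedded rows Z;
    this replaces the usual k-means step of spectral clustering."""
    n, d = Z.shape
    mu = Z[np.linspace(0, n - 1, k).astype(int)]   # deterministic init: spread-out rows
    var = np.ones((k, 1)) * (Z.var(axis=0) + 1e-6)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: (shifted) log responsibilities under each diagonal Gaussian
        logp = (-0.5 * (((Z[:, None, :] - mu[None]) ** 2 / var[None]).sum(-1)
                        + np.log(var).sum(-1)[None]) + np.log(pi)[None])
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and variances
        nk = r.sum(axis=0) + 1e-12
        mu = (r.T @ Z) / nk[:, None]
        var = (r.T @ Z ** 2) / nk[:, None] - mu ** 2 + 1e-6
        pi = nk / n
    return r.argmax(axis=1)
```

Compared to k-means, the GMM admits soft assignments and per-cluster covariances, which is the substitution investigated in the paper.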

- Technical report about an approximation for Gaussian process regression


[Bodesheim13:AEA]

Paul Bodesheim and Alexander Freytag and Erik Rodner and Joachim Denzler: An Efficient Approximation for Gaussian Process Regression. Technical Report TR-FSU-INF-CV-2013-01, Computer Vision Group, Friedrich Schiller University Jena, Germany. 2013. [paper] [bib] [project]


Abstract.
Gaussian processes are a powerful tool for regression problems. Beside computing regression curves using predictive mean values, the uncertainty of the estimations can be computed in terms of predictive variances. However, the complexity of learning and testing the model is often too large for practical use. We present an efficient approximation of the Gaussian process regression framework leading to reduced complexities of both runtime and memory. The idea is to approximate Gaussian process predictive mean and variance using a special diagonal matrix instead of the full kernel matrix. We show that this simple diagonal matrix approximation of the Gaussian process predictive variance is a true upper bound for the exact variance. Experimental results are presented for a standard regression task.

- Motivation for the importance of novelty detection and open set recognition


Motivation. Current work on visual object recognition focuses on object classification and is implicitly based on the closed-world assumption, i.e., a test sample is assigned to the most plausible class out of a fixed set of classes known during training. Knowledge about objects and classes is usually available in terms of representative training data and is used for model training. However, in real-world applications it is often not possible to obtain training data beforehand for all categories that can occur in the test phase. An example is quality control, where it is not only impossible to define all future defects - even worse, possible defects are in most cases not even known to the human expert who supervises the training step. In addition, even if one knew possible defects a priori, the small number of training images would lead to ill-posed problems. A second application is life-long learning, where a system needs to identify new, unknown object classes and has to incrementally add them to its knowledge base. Finally, complex event detection in videos is also impossible to tackle with a fixed set of classes. Although several solutions for the novelty detection problem have been proposed during the past years, they usually suffer from strong limitations (e.g., model complexity), necessary assumptions (e.g., Gaussian distribution), or heuristics (e.g., separation from artificial negative data). On top of that, it is so far unknown whether such methods can successfully be applied in an open set scenario.

[Denzler13:BTC]

Joachim Denzler and Erik Rodner and Paul Bodesheim and Alexander Freytag: Beyond the closed-world assumption: The importance of novelty detection and open set recognition. Unsolved Problems in Pattern Recognition and Computer Vision (GCPR Workshop). 2013. [extendedAbstract] [bib] [slides]


 


