Traffic Video Scene Understanding

How can we extract the activities happening in a scene?

Mahesh Venkata Krishna and Joachim Denzler

Project Description

The goal of the project is to analyze traffic videos in an unsupervised manner and extract the activities happening in them. To this end, we use Hierarchical Dirichlet Process (HDP) models to cluster optical flow features into scene activities. However, since the Bayesian inference required by the HDP model has high time complexity, we restrict this step to the training stage and use the extracted activities as labels to train a discriminative classifier, which is then used to analyze test videos.
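The two-stage pipeline can be sketched as follows. This is a minimal illustration on synthetic data, not the project's implementation: clips are represented as bag-of-words histograms over quantized optical flow (an assumed feature encoding), LDA with a fixed topic count stands in for HDP (which would infer the number of activities automatically), and a linear SVM plays the role of the fast discriminative classifier used at test time.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic stand-in for quantized optical flow: each video clip becomes a
# bag-of-words count vector over discretized (position x direction) flow bins.
n_clips, n_bins = 200, 50
counts = rng.poisson(lam=2.0, size=(n_clips, n_bins))

# Training stage: a topic model clusters flow words into "activities".
# (LDA with a fixed number of topics is a stand-in here for HDP,
# which infers the number of activities from the data.)
lda = LatentDirichletAllocation(n_components=5, random_state=0)
theta = lda.fit_transform(counts)   # per-clip activity mixture
labels = theta.argmax(axis=1)       # dominant activity used as a label

# Test stage: a fast discriminative classifier is trained on those labels,
# so no costly Bayesian inference is needed when analyzing test videos.
clf = LinearSVC().fit(counts, labels)
pred = clf.predict(counts)
```

At test time only `clf.predict` runs, which is the source of the speed-up claimed above: the expensive topic-model inference happens once, during training.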

Our framework can be seen in the figure below:

Extracted activities (The white arrows indicate the ground-truth as marked by a human expert):


Mahesh Venkata Krishna and Joachim Denzler. A Combination of Generative and Discriminative Models for Fast Unsupervised Activity Recognition from Traffic Scene Videos. Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), 2014. [bib]

Traffic Dataset

To download the dataset, click here.