We present a novel method for multiple people tracking that leverages a generalized model for capturing interactions among individuals.
At the core of our model lies a learned dictionary of interaction feature strings which capture relationships between the motions of targets.
These feature strings, created from low-level image features, yield a much richer representation of the physical interactions between targets than the hand-specified social force models introduced for tracking in previous work. One disadvantage of social forces is that all pedestrians must be detected before the forces can be applied; our method, in contrast, also encodes the effect of undetected targets, making the tracker more robust to partial occlusions.
These interaction feature strings are then used within a Random Forest framework to predict each target's motion from the features surrounding it.
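As an illustration of the prediction step, the following minimal sketch (not the authors' code; the descriptor layout, dimensions, and training data are hypothetical) trains a random forest that maps an interaction descriptor computed around a target to that target's 2-D displacement in the next frame:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: each row stands in for an interaction feature
# string (e.g., low-level motion features pooled in a grid around a target);
# each label is that target's displacement (dx, dy) in the next frame.
n_samples, n_features = 500, 32
X = rng.normal(size=(n_samples, n_features))
# Toy ground truth: displacement depends on two feature bins plus noise.
y = np.stack([X[:, 0] + 0.1 * rng.normal(size=n_samples),
              X[:, 1] + 0.1 * rng.normal(size=n_samples)], axis=1)

forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(X, y)

# At test time, predict a target's motion from the features surrounding it,
# whether or not the neighboring pedestrians were themselves detected.
descriptor = rng.normal(size=(1, n_features))
dx, dy = forest.predict(descriptor)[0]
```

Because the descriptor is built from image features rather than from other detections, the forest can still exploit the context of undetected neighbors, which is the robustness property claimed above.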
Project page / code:
L. Leal-Taixé, M. Fenzi, A. Kuznetsova, B. Rosenhahn, S. Savarese. Learning an image-based motion context for multiple people tracking. In CVPR, 2014.
April 07, 2015 at 21:02:03 CET
2.6 GHz, 1 core
| Benchmark | MOTA | IDF1 | MOTP | MT | ML | FP | FN | ID Sw. |
|---|---|---|---|---|---|---|---|---|
| 2D MOT 2015 | 23.1 | 29.4 | 70.9 | 34.0 | 375.0 | 10,404 | 35,844 | 1,018 |
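The MOTA column follows the standard CLEAR-MOT definition, which combines the three error counts in the table. A short sketch (the ground-truth box count of 61,440 for the 2D MOT 2015 test set is an assumption, not stated on this page):

```python
# CLEAR-MOT accuracy: MOTA = 1 - (FP + FN + IDSW) / GT,
# where GT is the total number of ground-truth boxes.
def mota(fp: int, fn: int, idsw: int, gt: int) -> float:
    """Multiple Object Tracking Accuracy as a fraction in (-inf, 1]."""
    return 1.0 - (fp + fn + idsw) / gt

# Plugging in the table's error counts with an assumed GT of 61,440 boxes
# reproduces the reported 23.1 MOTA.
score = mota(fp=10_404, fn=35_844, idsw=1_018, gt=61_440)
print(f"MOTA = {100 * score:.1f}%")  # → MOTA = 23.1%
```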