Robust online multi-person tracking requires correctly associating online detection responses with existing trajectories. We address this problem with a novel appearance modeling approach that provides accurate appearance affinities to guide data association. In contrast to most existing algorithms, which consider only the spatial structure of human appearances, we exploit the temporal dynamic characteristics within temporal appearance sequences to discriminate between different persons. The temporal dynamics effectively complement the spatial structure of varying appearances in the feature space, significantly improving the affinity measurement between trajectories and detections. We propose a feature selection algorithm that describes appearance variations with mid-level semantic features, and demonstrate its usefulness for temporal dynamic appearance modeling. Moreover, the appearance model is learned incrementally by alternately evaluating newly observed appearances and adjusting the model parameters, making it suitable for online tracking. Reliable tracking of multiple persons in complex scenes is achieved by incorporating the learned model into an online tracking-by-detection framework. Experiments on the challenging MOTChallenge 2015 benchmark demonstrate that our method outperforms state-of-the-art multi-person tracking algorithms.
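The association step described above can be sketched as follows. This is an illustrative toy example only: the feature vectors, cosine similarity, the `alpha` spatial/temporal weighting, and the greedy matcher are hypothetical stand-ins, not the paper's actual mid-level semantic features or incrementally learned model.

```python
# Toy sketch of affinity-guided detection-to-trajectory association.
# All components here are illustrative placeholders for the paper's method.
import math


def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0


def affinity(track_appearances, detection, alpha=0.5):
    """Blend a spatial term (similarity to the most recent appearance) with a
    temporal term (mean similarity over the track's appearance sequence)."""
    spatial = cosine(track_appearances[-1], detection)
    temporal = sum(cosine(a, detection) for a in track_appearances) / len(track_appearances)
    return alpha * spatial + (1.0 - alpha) * temporal


def associate(tracks, detections, threshold=0.5):
    """Greedily match detections to trajectories by descending affinity,
    returning (track_index, detection_index) pairs above the threshold."""
    pairs = sorted(
        ((affinity(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True,
    )
    used_t, used_d, matches = set(), set(), []
    for score, ti, di in pairs:
        if score >= threshold and ti not in used_t and di not in used_d:
            used_t.add(ti)
            used_d.add(di)
            matches.append((ti, di))
    return matches


# Usage: two tracks with short appearance histories, two new detections.
tracks = [[[1.0, 0.0], [0.9, 0.1]], [[0.0, 1.0], [0.1, 0.9]]]
detections = [[0.1, 0.95], [0.95, 0.05]]
print(sorted(associate(tracks, detections)))  # → [(0, 1), (1, 0)]
```

In a full tracker, unmatched detections would spawn new trajectories and unmatched trajectories would be propagated or terminated; the paper additionally updates the appearance model online after each association round.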
M. Yang, Y. Jia. Temporal dynamic appearance modeling for online multi-person tracking. In Computer Vision and Image Understanding, 2016.
August 26, 2015
April 21, 2015 at 11:05:50 CET
Project page / code:
3.4 GHz, 1 core
| Benchmark | MOTA | MOTAL | MOTP | MT (%) | ML (%) | FP | FN | Rcll | Prcn | FAF | ID Sw. | Frag. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2D MOT 2015 | 33.0 | 46.1 | 72.8 | 96 (13.3) | 282 (39.1) | 10,064 | 30,617 | 50.2 | 75.4 | 1.7 | 464 (9.2) | 1,506 (30.0) |