In online multiple pedestrian tracking, it is of great importance to construct a reliable cost matrix for assigning observations to tracks. Each element of the cost matrix is computed with a similarity measure. Many previous works have proposed similarity calculation methods that combine a geometric model (e.g. bounding box coordinates) and an appearance model. In particular, the appearance model carries higher-dimensional information than the geometric model. Thanks to the recent success of deep-learning-based methods, handling high-dimensional appearance information has become feasible. Among many deep networks, a Siamese network with triplet loss is popularly adopted as an appearance feature extractor. Since a Siamese network extracts features from each input independently, it allows adaptive track modeling (e.g. linear update). However, it is not well suited to the multi-object setting, which requires comparison against other inputs. In this paper, we propose a novel track appearance model based on a joint-inference network to address this issue. The proposed method enables the comparison of two inputs to be used for adaptive appearance modeling, which helps disambiguate target-observation matching and consolidate identity consistency. Extensive experimental results support the effectiveness of our method. Our tracker was ranked 3rd on MOTChallenge19, held at the 4th BMTT workshop.
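To make the cost-matrix idea concrete, here is a minimal sketch of the assignment step common to trackers of this kind: appearance features (here, hypothetical toy embeddings standing in for network outputs) are compared pairwise, the resulting dissimilarities fill the cost matrix, and the Hungarian algorithm produces the track-observation matching. This is a generic illustration, not the paper's joint-inference network.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def build_cost_matrix(track_feats, obs_feats):
    """Cost[i, j] = 1 - cosine similarity between track i and observation j."""
    t = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
    o = obs_feats / np.linalg.norm(obs_feats, axis=1, keepdims=True)
    return 1.0 - t @ o.T


# Toy 4-D appearance embeddings (placeholders for learned features).
tracks = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0, 0.0]])
obs = np.array([[0.0, 0.9, 0.1, 0.0],    # resembles track 1
                [0.95, 0.0, 0.0, 0.1],   # resembles track 0
                [0.0, 0.1, 0.9, 0.0]])   # resembles track 2

cost = build_cost_matrix(tracks, obs)
rows, cols = linear_sum_assignment(cost)  # minimum-cost matching
# rows[i] -> cols[i] gives each track's assigned observation
```

In a full tracker, pairs whose cost exceeds a gating threshold would be rejected after the assignment, and unmatched observations would spawn new tracks.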
Project page / code:
Y. Yoon, D. Kim, K. Yoon, Y. Song, M. Jeon. Online Multiple Pedestrian Tracking using Deep Temporal Appearance Matching Association. arXiv preprint arXiv:1907.00831, 2019.
February 10, 2019
April 28, 2019 at 13:42:34 CET
3.7 GHz, 1 core, no GPU
|Benchmark|MOTA|IDF1|MOTP|MT (%)|ML (%)|FP|FN|Rcll|Prcn|FAF|ID Sw. (rel.)|Frag (rel.)|
|MOT16|46.2|49.4|75.4|107 (14.1)|334 (44.0)|5,126|92,367|49.3|94.6|0.9|598 (12.1)|1,127 (22.8)|