1. Which detector did you use to generate the provided bounding boxes?
For the MOT16 Benchmark we used DPM v5 with a pre-trained model to obtain the bounding boxes. The cut-off threshold is set to -1, resulting in acceptable recall (~45%) but relatively low precision (~65%).
On previous benchmarks, we applied the Aggregate Channel Features (ACF) pedestrian detector to all sequences using the pre-trained INRIA model, scaled to 60% of its original size. The resulting precision / recall reached 60% / 52% for the training set and 60% / 46% for the test set, respectively.
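The interplay between the detector's score cut-off and the resulting precision/recall can be sketched as follows. This is an illustrative example only (not part of the benchmark code): the detection scores and counts are made up, and `precision_recall` is a hypothetical helper.

```python
# Sketch: filter raw detections by a score cut-off and compute
# precision/recall against ground truth, as one would when choosing
# a threshold such as the DPM cut-off of -1 mentioned above.

def precision_recall(detections, threshold, num_gt):
    """detections: list of (score, is_true_positive) pairs.
    num_gt: number of annotated ground-truth objects."""
    kept = [d for d in detections if d[0] >= threshold]
    tp = sum(1 for _, correct in kept if correct)
    precision = tp / len(kept) if kept else 0.0
    recall = tp / num_gt if num_gt else 0.0
    return precision, recall

# Toy data: 7 detections with scores, 8 annotated pedestrians.
dets = [(1.2, True), (0.8, True), (0.1, False), (-0.5, True),
        (-0.9, False), (-1.5, True), (-2.0, False)]
p, r = precision_recall(dets, threshold=-1, num_gt=8)
```

Lowering the threshold keeps more detections, which can only raise recall but typically lowers precision, which is the trade-off behind the figures quoted above.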
2. How do I cite the results generated by MOTChallenge in my publication?
Please cite the following paper(s) when using our benchmark.
For 2DMOT2015 and 3DMOT2015:
MOTChallenge 2015: Towards a Benchmark for Multi-Target Tracking.
Laura Leal-Taixé, Anton Milan, Ian Reid, Stefan Roth, Konrad Schindler. arXiv:1504.01942

@article{MOTChallenge2015,
  title = {{MOTC}hallenge 2015: {T}owards a Benchmark for Multi-Target Tracking},
  shorttitle = {MOTChallenge 2015},
  url = {http://arxiv.org/abs/1504.01942},
  journal = {arXiv:1504.01942 [cs]},
  author = {Leal-Taix\'{e}, L. and Milan, A. and Reid, I. and Roth, S. and Schindler, K.},
  month = apr,
  year = {2015},
  note = {arXiv: 1504.01942},
  keywords = {Computer Science - Computer Vision and Pattern Recognition}
}
MOT16: A Benchmark for Multi-Object Tracking.
Anton Milan, Laura Leal-Taixé, Ian Reid, Stefan Roth, Konrad Schindler. arXiv:1603.00831

@article{MOT16,
  title = {{MOT}16: {A} Benchmark for Multi-Object Tracking},
  shorttitle = {MOT16},
  url = {http://arxiv.org/abs/1603.00831},
  journal = {arXiv:1603.00831 [cs]},
  author = {Milan, A. and Leal-Taix\'{e}, L. and Reid, I. and Roth, S. and Schindler, K.},
  month = mar,
  year = {2016},
  note = {arXiv: 1603.00831},
  keywords = {Computer Science - Computer Vision and Pattern Recognition}
}
CVPR19 Tracking and Detection Challenge: How crowded can it get?
Patrick Dendorfer, Hamid Rezatofighi, Anton Milan, Javen Shi, Daniel Cremers, Ian Reid, Stefan Roth, Konrad Schindler, Laura Leal-Taixé. arXiv:1906.04567

@article{MOT19_CVPR,
  title = {{CVPR19} Tracking and Detection Challenge: {H}ow crowded can it get?},
  shorttitle = {MOT19},
  url = {http://arxiv.org/abs/1906.04567},
  journal = {arXiv:1906.04567 [cs]},
  author = {Dendorfer, P. and Rezatofighi, H. and Milan, A. and Shi, J. and Cremers, D. and Reid, I. and Roth, S. and Schindler, K. and Leal-Taix\'{e}, L.},
  month = jun,
  year = {2019},
  note = {arXiv: 1906.04567},
  keywords = {Computer Science - Computer Vision and Pattern Recognition}
}
MOT20: A benchmark for multi object tracking in crowded scenes.
Patrick Dendorfer, Hamid Rezatofighi, Anton Milan, Javen Shi, Daniel Cremers, Ian Reid, Stefan Roth, Konrad Schindler, Laura Leal-Taixé. arXiv:2003.09003

@article{MOTChallenge20,
  title = {MOT20: A benchmark for multi object tracking in crowded scenes},
  shorttitle = {MOT20},
  url = {http://arxiv.org/abs/2003.09003},
  journal = {arXiv:2003.09003 [cs]},
  author = {Dendorfer, P. and Rezatofighi, H. and Milan, A. and Shi, J. and Cremers, D. and Reid, I. and Roth, S. and Schindler, K. and Leal-Taix\'{e}, L.},
  month = mar,
  year = {2020},
  note = {arXiv: 2003.09003},
  keywords = {Computer Science - Computer Vision and Pattern Recognition}
}
MOTS: Multi-Object Tracking and Segmentation.
Paul Voigtlaender, Michael Krause, Aljosa Osep, Jonathon Luiten, Berin Balachandar Gnana Sekar, Andreas Geiger, Bastian Leibe. arXiv:1902.03604

@article{MOTS20,
  title = {MOTS: Multi-Object Tracking and Segmentation},
  shorttitle = {MOTS20},
  url = {http://arxiv.org/abs/1902.03604},
  journal = {arXiv:1902.03604 [cs]},
  author = {Paul Voigtlaender and Michael Krause and Aljosa Osep and Jonathon Luiten and Berin Balachandar Gnana Sekar and Andreas Geiger and Bastian Leibe},
  year = {2019},
  note = {arXiv: 1902.03604},
  keywords = {Computer Science - Computer Vision and Pattern Recognition}
}
3D-ZeF: A 3D Zebrafish Tracking Benchmark Dataset.
Malte Pedersen, Joakim Bruslund Haurum, Stefan Hein Bengtson, Thomas B. Moeslund. arXiv:2006.08466

@article{3DZeF20,
  title = {3D-ZeF: A 3D Zebrafish Tracking Benchmark Dataset},
  shorttitle = {3DZeF20},
  url = {https://arxiv.org/abs/2006.08466},
  journal = {arXiv:2006.08466 [cs]},
  author = {Malte Pedersen and Joakim Bruslund Haurum and Stefan Hein Bengtson and Thomas B. Moeslund},
  year = {2020},
  note = {arXiv: 2006.08466},
  keywords = {Computer Science - Computer Vision and Pattern Recognition}
}
TAO: A Large-Scale Benchmark for Tracking Any Object.
Achal Dave, Tarasha Khurana, Pavel Tokmakov, Cordelia Schmid, Deva Ramanan. arXiv:2005.10356

@inproceedings{Dave:2020:ECCV,
  title = {TAO: A Large-Scale Benchmark for Tracking Any Object},
  author = {Achal Dave and Tarasha Khurana and Pavel Tokmakov and Cordelia Schmid and Deva Ramanan},
  url = {https://arxiv.org/abs/2005.10356},
  booktitle = {European Conference on Computer Vision},
  year = {2020}
}
3. I would like to present different versions of my method. Can I submit results for each one?
No. If you want to present results of your method under various settings (e.g., different features or inference methods), please use the training set for this purpose and submit only one result to the test server. Only the latest submission will be considered.
4. I found an error in the provided ground truth.
Thank you! We are aware of some deficiencies in the existing annotations and are working on improving them. We appreciate all kinds of feedback, so please don't hesitate to contact us and report any findings.
5. Can I register multiple accounts?
No! Registering multiple times with different email addresses violates the benchmark policy and may lead to a permanent ban.
6. Where do I find the raw tracking data?
The submitted bounding boxes for all published (i.e., non-anonymous) trackers can be found at the bottom of the respective detail pages. You can navigate there by clicking on a tracker's name in the main results table.