Frequently Asked Questions

1. Which detector did you use to generate the provided bounding boxes?

For the MOT16 benchmark, we used DPM v5 [1] with a pre-trained model to obtain the bounding boxes. The cut-off threshold was set to -1, resulting in acceptable recall (~45%) but relatively low precision (~65%).

On previous benchmarks, we applied the Aggregate Channel Features (ACF) pedestrian detector [2] to all sequences using the pre-trained INRIA model, scaled to 60% of its original size. The resulting precision / recall reached 60% / 52% for the training set and 60% / 46% for the test set, respectively.
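If you want to double-check such detection figures on the training data yourself, the sketch below shows one possible way to do so in Python. It assumes the usual comma-separated layout of the detection and ground-truth files (frame, id, bb_left, bb_top, bb_width, bb_height, conf, ...), a 0.5 IoU overlap criterion and a simple greedy matching; it is only an illustration with placeholder file paths, not the official evaluation code, and it ignores the extra visibility/class fields of the ground-truth files.

	# Minimal sketch: apply the score cut-off to the provided detections and
	# estimate precision / recall against ground truth at IoU >= 0.5.
	import csv
	from collections import defaultdict

	def load_boxes(path, min_conf=None):
	    """Return {frame: [(left, top, width, height), ...]}."""
	    boxes = defaultdict(list)
	    with open(path, newline="") as f:
	        for row in csv.reader(f):
	            frame, conf = int(row[0]), float(row[6])
	            if min_conf is None or conf >= min_conf:
	                boxes[frame].append(tuple(float(v) for v in row[2:6]))
	    return boxes

	def iou(a, b):
	    ax, ay, aw, ah = a
	    bx, by, bw, bh = b
	    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
	    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
	    inter = ix * iy
	    union = aw * ah + bw * bh - inter
	    return inter / union if union > 0 else 0.0

	def precision_recall(det, gt, thr=0.5):
	    tp = fp = fn = 0
	    for frame, gt_boxes in gt.items():
	        unmatched = list(gt_boxes)
	        for d in det.get(frame, []):
	            # Greedy matching: take the best-overlapping unmatched GT box.
	            best = max(unmatched, key=lambda g: iou(d, g), default=None)
	            if best is not None and iou(d, best) >= thr:
	                unmatched.remove(best)
	                tp += 1
	            else:
	                fp += 1
	        fn += len(unmatched)
	    # Detections in frames without any annotation count as false positives.
	    for frame, d_boxes in det.items():
	        if frame not in gt:
	            fp += len(d_boxes)
	    return tp / max(tp + fp, 1), tp / max(tp + fn, 1)

	# Example (paths are placeholders):
	# det = load_boxes("MOT16-02/det/det.txt", min_conf=-1.0)
	# gt  = load_boxes("MOT16-02/gt/gt.txt")
	# print(precision_recall(det, gt))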

2. How do I cite the results generated by MOTChallenge in my publication?
Please cite the following paper(s) when using our benchmark.
For 2DMOT2015 and 3DMOT2015:

MOTChallenge 2015: Towards a Benchmark for Multi-Target Tracking.
Laura Leal-Taixé, Anton Milan, Ian Reid, Stefan Roth, Konrad Schindler. arXiv:1504.01942

@article{MOTChallenge2015,
	title = {{MOTC}hallenge 2015: {T}owards a Benchmark for Multi-Target Tracking},
	shorttitle = {MOTChallenge 2015},
	url = {http://arxiv.org/abs/1504.01942},
	journal = {arXiv:1504.01942 [cs]},
	author = {Leal-Taix\'{e}, L. and Milan, A. and Reid, I. and Roth, S. and Schindler, K.},
	month = apr,
	year = {2015},
	note = {arXiv: 1504.01942},
	keywords = {Computer Science - Computer Vision and Pattern Recognition}
}

For MOT16, MOT17Det, and MOT17:

MOT16: A Benchmark for Multi-Object Tracking.
Anton Milan, Laura Leal-Taixé, Ian Reid, Stefan Roth, Konrad Schindler. arXiv:1603.00831

@article{MOT16,
	title = {{MOT}16: {A} Benchmark for Multi-Object Tracking},
	shorttitle = {MOT16},
	url = {http://arxiv.org/abs/1603.00831},
	journal = {arXiv:1603.00831 [cs]},
	author = {Milan, A. and Leal-Taix\'{e}, L. and Reid, I. and Roth, S. and Schindler, K.},
	month = mar,
	year = {2016},
	note = {arXiv: 1603.00831},
	keywords = {Computer Science - Computer Vision and Pattern Recognition}
}

For MultiCam and PETS2017, please cite either of the above as well as the corresponding dataset publications.
Please note that you should also cite all papers that are listed as sources for the individual video sequences on the data page.

3. I would like to present different versions of my method. Can I submit results for each one?
No. If you want to present results of your method with various settings (e.g. different features or inference methods), please use the training set for this purpose and submit only one result to the test server!

4. I found an error in the provided ground truth.
Thank you! We are aware of some deficiencies in the existing annotations and are working on improving them. We appreciate all kinds of feedback, so please don't hesitate to contact us and report your findings.

5. Can I register multiple accounts?
No! Registering multiple times with different email addresses violates the benchmark policy and may lead to a permanent ban.

6. Where do I find the raw tracking data?
The submitted bounding boxes of all published (i.e. non-anonymous) trackers can be found at the bottom of the respective detail pages. You can navigate there by clicking on a tracker's name in the main results table.
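Each downloaded result is a plain text file with one box per line in the usual MOTChallenge layout (frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, z). The following Python sketch, with a placeholder file name, shows how such a file could be grouped into per-identity trajectories:

	# Minimal sketch: read a downloaded tracker result file into per-identity
	# trajectories, assuming the comma-separated MOTChallenge layout above.
	import csv
	from collections import defaultdict

	def load_tracks(path):
	    """Return {track_id: [(frame, left, top, width, height), ...]} sorted by frame."""
	    tracks = defaultdict(list)
	    with open(path, newline="") as f:
	        for row in csv.reader(f):
	            frame, track_id = int(row[0]), int(row[1])
	            left, top, width, height = (float(v) for v in row[2:6])
	            tracks[track_id].append((frame, left, top, width, height))
	    for boxes in tracks.values():
	        boxes.sort(key=lambda b: b[0])
	    return tracks

	# Example (file name is a placeholder):
	# tracks = load_tracks("MOT16-01.txt")
	# print(len(tracks), "trajectories")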

References:

[1] Felzenszwalb, P.F., Girshick, R.B., McAllester, D. & Ramanan, D. Object Detection with Discriminatively Trained Part-Based Models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(9):1627-1645, 2010.
[2] Dollár, P., Appel, R., Belongie, S. & Perona, P. Fast Feature Pyramids for Object Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(8):1532-1545, 2014.