Welcome to the Multiple Object Tracking Benchmark!


[Sequence previews: MOT16-02, MOT16-03, MOT16-04, MOT16-05, MOT16-06, MOT16-07, MOT16-08, MOT16-10, MOT16-11, MOT16-14]

In the recent past, the computer vision community has relied on several centralized benchmarks for performance evaluation of numerous tasks, including object detection, pedestrian detection, 3D reconstruction, optical flow, single-object short-term tracking, and stereo estimation. Despite the potential pitfalls of such benchmarks, they have proved extremely helpful for advancing the state of the art in their respective research fields. Interestingly, there has been rather limited work on standardizing the evaluation of multiple target tracking. One of the few exceptions is the well-known PETS dataset, targeted primarily at surveillance applications. Yet even on this widely used benchmark, tracking results have typically been reported on different subsets of the available data, with inconsistent model training and varying evaluation scripts.
With this benchmark we would like to pave the way for a unified framework towards more meaningful quantification of multi-target tracking.

What do we provide?


We have created a framework for the fair evaluation of multiple people tracking algorithms. In this framework we provide:

  • A large collection of datasets, some already in use and some new challenging sequences!
  • Detections for all the sequences.
  • A common evaluation tool providing several measures, ranging from recall and precision to running time.
  • An easy way to compare the performance of state-of-the-art tracking methods.
  • Several challenges with subsets of data for specific tasks such as 3D tracking, surveillance, sports analysis (updates coming soon).
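For illustration, the provided detection files use a simple comma-separated layout, one line per bounding box: frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, z (the id and world coordinates are -1 for raw detections). A minimal Python sketch for loading such a file might look like the following; the helper name and the sample values are made up for this example:

```python
import csv
import io

def load_detections(text):
    """Group MOT-style detections by frame number.

    Each entry is ((left, top, width, height), confidence).
    """
    per_frame = {}
    for row in csv.reader(io.StringIO(text)):
        frame = int(row[0])
        left, top, w, h = (float(v) for v in row[2:6])
        conf = float(row[6])
        per_frame.setdefault(frame, []).append(((left, top, w, h), conf))
    return per_frame

# Illustrative detection lines (values invented for the example)
sample = (
    "1,-1,794.2,47.5,71.2,174.8,67.5,-1,-1,-1\n"
    "1,-1,164.1,19.6,66.5,163.2,29.4,-1,-1,-1\n"
    "2,-1,875.4,39.9,25.3,145.0,19.6,-1,-1,-1\n"
)
dets = load_detections(sample)
print(len(dets[1]))  # two detections in frame 1
```

In practice one would read the file with `open(...)` instead of `io.StringIO`; the grouping by frame is convenient because trackers consume detections frame by frame.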

We rely on the spirit of crowdsourcing, and we encourage researchers to submit their own sequences to the benchmark, so that multiple object tracking systems can keep improving and tackling ever more challenging scenarios.

News


  • 15.06.2016: The ECCV Workshop Challenge is now open for submission.
  • 11.04.2016: We will organize a workshop at ECCV 2016. Hope to see you in Amsterdam!
  • 01.03.2016: MOT16: a new release of the benchmark is online.
  • 26.08.2015: MOTChallenge is now served entirely over HTTPS.
  • 24.07.2015: We have started releasing the raw tracking data for all published submissions.
  • 08.04.2015: The manuscript on the MOTChallenge benchmark is now public.

License


The datasets provided on this page are published under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. This means that you must attribute the work in the manner specified by the authors, you may not use this work for commercial purposes and if you alter, transform, or build upon this work, you may distribute the resulting work only under the same license. If you are interested in commercial usage you can contact us for further options.


Newsletter


Subscribe to our newsletter to receive updates and other important information about the benchmark.


Your email will never be used for other purposes. You can unsubscribe at any time.