Welcome to MOTChallenge: The Multiple Object Tracking Benchmark!


[Preview thumbnails of the MOT16 sequences: MOT16-01, MOT16-04, MOT16-06, MOT16-07, MOT16-08, MOT16-09, MOT16-10, MOT16-11, MOT16-12, MOT16-14]

In the recent past, the computer vision community has relied on several centralized benchmarks for the performance evaluation of numerous tasks, including object detection, pedestrian detection, 3D reconstruction, optical flow, single-object short-term tracking, and stereo estimation. Despite their potential pitfalls, such benchmarks have proved extremely helpful in advancing the state of the art in the respective research fields. Interestingly, there has been rather limited work on standardizing the evaluation of multiple target tracking. One of the few exceptions is the well-known PETS dataset, targeted primarily at surveillance applications. Yet even on this widely used benchmark, tracking results are commonly reported on different subsets of the available data, with inconsistent model training and varying evaluation scripts.
With this benchmark, we would like to pave the way towards a unified framework for a more meaningful quantification of multi-target tracking.

What do we provide?


We have created a framework for the fair evaluation of multiple people tracking algorithms. In this framework we provide:

  • A large collection of datasets, including sequences already in common use as well as new, more challenging ones!
  • Detections for all the sequences (the standard file format is sketched after this list).
  • A common evaluation tool that provides several measures, from recall and precision to running time.
  • An easy way to compare the performance of state-of-the-art tracking methods.
  • Several challenges with subsets of data for specific tasks such as 3D tracking, surveillance, sports analysis (updates coming soon).
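As a minimal, unofficial sketch (this is not the official development kit), the Python snippet below illustrates the widely documented comma-separated format used by detection and result files, <frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, <x>, <y>, <z>, together with the standard MOTA formula. The file path MOT16-02/det/det.txt is only an illustrative example.

```python
# Minimal sketch of working with MOTChallenge-style files (not the official devkit).
# Assumed per-line format: frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, z
# (raw detections use id = -1 and x, y, z = -1).

import csv
from collections import defaultdict


def load_mot_file(path):
    """Group boxes by frame: {frame: [(id, left, top, width, height, conf), ...]}."""
    boxes = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if not row:                       # skip blank lines
                continue
            frame = int(row[0])
            obj_id = int(float(row[1]))       # -1 for raw detections
            left, top, w, h, conf = map(float, row[2:7])
            boxes[frame].append((obj_id, left, top, w, h, conf))
    return boxes


def mota(num_gt, num_fn, num_fp, num_idsw):
    """CLEAR-MOT accuracy: MOTA = 1 - (FN + FP + IDSW) / GT, summed over all frames."""
    return 1.0 - (num_fn + num_fp + num_idsw) / num_gt


if __name__ == "__main__":
    dets = load_mot_file("MOT16-02/det/det.txt")   # hypothetical path
    print(f"{len(dets)} frames, {sum(len(v) for v in dets.values())} detections")
    # Example with made-up counts, just to show the formula:
    print(f"example MOTA: {mota(num_gt=1000, num_fn=120, num_fp=80, num_idsw=15):.3f}")
```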

We rely on the spirit of crowdsourcing and encourage researchers to submit their own sequences to the benchmark, so that multiple object tracking systems can keep improving and tackle ever more challenging scenarios.

News


  • Mar 20, 2023: We have opened two new challenges for video instance segmentation in closed-world and open-world settings! In addition, permanent benchmarks for synthetic MOT and MOTS are now available.
  • Aug 02, 2021: The MOTChallenge-STEP benchmark is now online.
  • Jun 22, 2021: Head Tracking 21 (HT21) is now online!
  • Mar 26, 2021: MOTChallenge is now reporting HOTA metrics.
  • News archive

License


The datasets provided on this page are published under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. This means that you must attribute the work in the manner specified by the authors, that you may not use this work for commercial purposes, and that if you alter, transform, or build upon this work, you may distribute the resulting work only under the same license.