MOT17Det Results

The table below lists the submitted detectors together with their scores on each evaluation measure; the measures themselves are described in more detail further down the page.



Benchmark Statistics

Rank | Tracker | AP | MODA | MODP | FAF | TP | FP | FN | Recall | Precision | F1 | Hz
1 | AInnoDetV2 | 0.89 ±0.05 | -58.4 ±126.7 | 79.1 | 29.5 | 107,733 | 174,608 | 6,831 | 94.0 | 38.2 | 54.3 | 296.0
2 | F_ViPeD_B | 0.89 ±0.06 | -14.4 ±115.1 | 77.4 | 20.8 | 106,698 | 123,194 | 7,831 | 93.2 | 46.4 | 62.0 | 14.8
3 | MHD | 0.49 ±0.20 | 11.2 ±42.9 | 69.9 | 8.8 | 64,637 | 51,801 | 49,927 | 56.4 | 55.5 | 56.0 | 3.0
4 | DPM | 0.61 ±0.14 | 31.2 ±10.8 | 75.8 | 7.1 | 78,007 | 42,308 | 36,557 | 68.1 | 64.8 | 66.4 | 19.7
5 | ISE_Detv2 | 0.88 ±0.05 | 67.4 ±12.8 | 79.9 | 4.9 | 106,094 | 28,865 | 8,470 | 92.6 | 78.6 | 85.0 | 3.2
6 | KDNT | 0.89 ±0.07 | 67.1 ±22.4 | 80.1 | 4.8 | 105,473 | 28,623 | 9,091 | 92.1 | 78.7 | 84.8 | 0.8
7 | MixNet | 0.90 ±0.07 | 74.3 ±15.0 | 78.6 | 3.8 | 107,764 | 22,661 | 6,800 | 94.1 | 82.6 | 88.0 | 197.3
8 | EDet | 0.88 ±0.09 | 72.3 ±23.2 | 78.1 | 3.6 | 103,940 | 21,057 | 10,624 | 90.7 | 83.2 | 86.8 | 197.3
9 | YTLAB | 0.89 ±0.07 | 76.7 ±13.1 | 80.2 | 2.8 | 104,555 | 16,685 | 10,009 | 91.3 | 86.2 | 88.7 | 22.3
10 | ACF | 0.32 ±0.00 | 18.1 ±0.0 | 72.1 | 2.8 | 37,312 | 16,539 | 77,252 | 32.6 | 69.3 | 44.3 | 74.0
11 | YLHDv2 | 0.46 ±0.11 | 56.9 ±12.5 | 73.2 | 2.5 | 80,093 | 14,938 | 34,471 | 69.9 | 84.3 | 76.4 | 11.8
12 | PA_Det_NJ | 0.89 ±0.06 | 82.1 ±20.1 | 79.3 | 2.4 | 108,480 | 14,417 | 6,084 | 94.7 | 88.3 | 91.4 | 59.2
13 | SeedDet | 0.90 ±0.07 | 81.8 ±12.1 | 82.8 | 2.3 | 107,291 | 13,631 | 7,273 | 93.7 | 88.7 | 91.1 | 8.2
14 | ZIZOM | 0.81 ±0.05 | 72.0 ±22.0 | 79.8 | 2.2 | 95,414 | 12,990 | 19,139 | 83.3 | 88.0 | 85.6 | 2.4
15 | UNV_Det | 0.89 ±0.05 | 81.9 ±16.2 | 79.0 | 2.1 | 106,478 | 12,704 | 8,086 | 92.9 | 89.3 | 91.1 | 12.8
16 | FRCNN | 0.72 ±0.13 | 68.5 ±0.0 | 78.0 | 1.7 | 88,601 | 10,081 | 25,963 | 77.3 | 89.8 | 83.1 | 5.1
17 | SDP | 0.81 ±0.12 | 76.9 ±16.2 | 78.0 | 1.3 | 95,699 | 7,599 | 18,865 | 83.5 | 92.6 | 87.9 | 0.6
18 | HDGP | 0.45 ±0.20 | 42.1 ±20.1 | 76.4 | 1.3 | 55,680 | 7,436 | 58,884 | 48.6 | 88.2 | 62.7 | 0.6
19 | VDet | 0.44 ±0.19 | 44.7 ±19.3 | 75.7 | 1.0 | 56,980 | 5,765 | 57,584 | 49.7 | 90.8 | 64.3 | 5.9

Tracker descriptions and references:

AInnoDetV2: AInnovation: PC Attention Net.
F_ViPeD_B: L. Ciampi, N. Messina, F. Falchi, C. Gennaro, G. Amato. Virtual to Real Adaptation of Pedestrian Detectors. In Sensors, 2020.
MHD: Mobilenet-based Human Detection.
DPM: P. Felzenszwalb, R. Girshick, D. McAllester, D. Ramanan. Object Detection with Discriminatively Trained Part Based Models. In TPAMI, 2010.
ISE_Detv2: MIFD.
KDNT: F. Yu, W. Li, Q. Li, Y. Liu, X. Shi, J. Yan. POI: Multiple Object Tracking with High Performance Detection and Appearance Feature. In BMTT, SenseTime Group Limited, 2016.
YTLAB: Z. Cai, Q. Fan, R. Feris, N. Vasconcelos. A Unified Multi-Scale Deep Convolutional Neural Network for Fast Object Detection. In ECCV, 2016.
ACF: P. Dollar, R. Appel, S. Belongie, P. Perona. Fast Feature Pyramids for Object Detection. In TPAMI, 2014.
YLHDv2: https://arxiv.org/abs/1612.08242
PA_Det_NJ: PA_TECH_NJ.
SeedDet: Seedland multi-target detection.
ZIZOM: C. Lin, J. Lu, G. Wang, J. Zhou. Graininess-Aware Deep Feature Learning for Pedestrian Detection. In ECCV, 2018.
UNV_Det: Anonymous submission.
FRCNN: S. Ren, K. He, R. Girshick, J. Sun. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. In NIPS, 2015.
SDP: F. Yang, W. Choi, Y. Lin. Exploit All the Layers: Fast and Accurate CNN Object Detector with Scale Dependent Pooling and Cascaded Rejection Classifiers. In CVPR, 2016.
HDGP: A. Garcia-Martin, R. Sanchez-Matilla, J. Martinez. Hierarchical Detection of Persons in Groups. In Signal, Image and Video Processing, 2017.
VDet: Vitrociset Detection Algorithm.

Dataset statistics:

Sequences | Frames | Trajectories | Boxes
7 | 5,919 | 785 | 188,076

Difficulty Analysis

Sequence difficulty (from easiest to hardest, measured by average AP)

MOT17-08 (0.83 AP)
MOT17-03 (0.80 AP)
MOT17-06 (0.75 AP)
...
MOT17-01 (0.66 AP)
MOT17-14 (0.55 AP)


Evaluation Measures

Measure | Better | Perfect | Description
AP | higher | 1 | Average Precision taken over a set of reference recall values (0:0.1:1).
MODA | higher | 100% | Multi-Object Detection Accuracy [1]. This measure combines false positives and missed targets.
MODP | higher | 100% | Multi-Object Detection Precision [1]. The misalignment between the annotated and the predicted bounding boxes.
FAF | lower | 0 | The average number of false alarms per frame.
TP | higher | #GT | The total number of true positives.
FP | lower | 0 | The total number of false positives.
FN | lower | 0 | The total number of false negatives (missed targets).
Recall | higher | 100% | Ratio of correct detections to total number of GT boxes.
Precision | higher | 100% | Ratio of TP / (TP+FP).
F1 | higher | 100% | Harmonic mean of precision and recall.
Hz | higher | Inf. | Processing speed (in frames per second, excluding the detector) on the benchmark. The frequency is provided by the authors and not officially evaluated by the MOTChallenge.
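
To relate the columns of the results table to each other, the sketch below recomputes the scalar measures from the raw TP/FP/FN counts. It is a minimal, unofficial illustration and not the MOTChallenge evaluation code: the matching of detections to ground truth (which produces TP, FP, FN and the box overlaps behind MODP) is assumed to have been done already, the names detection_measures and ap_11point are ours, and the AP helper assumes the usual 11-point reading of the (0:0.1:1) reference-recall definition.

```python
# Minimal, unofficial sketch of the scalar detection measures used above.
# Detection-to-GT matching is assumed done; only the final counts are used here.

def detection_measures(tp, fp, fn, num_frames):
    """Recompute MODA, FAF, Recall, Precision and F1 from raw counts."""
    num_gt = tp + fn                              # total ground-truth boxes
    moda = 100.0 * (1.0 - (fn + fp) / num_gt)     # combines misses and false positives
    faf = fp / num_frames                         # average false alarms per frame
    recall = 100.0 * tp / num_gt
    precision = 100.0 * tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return moda, faf, recall, precision, f1

def ap_11point(precisions, recalls):
    """11-point interpolated AP over reference recalls 0, 0.1, ..., 1
    (one common reading of the (0:0.1:1) definition); inputs are the
    precision/recall pairs of a detector's operating curve."""
    ap = 0.0
    for r in [i / 10 for i in range(11)]:
        # best precision achieved at recall >= r, or 0 if that recall is unreachable
        best = max((p for p, rec in zip(precisions, recalls) if rec >= r), default=0.0)
        ap += best / 11
    return ap

# Sanity check against the DPM row of the table
# (TP 78,007, FP 42,308, FN 36,557; the benchmark has 5,919 frames):
moda, faf, recall, precision, f1 = detection_measures(78_007, 42_308, 36_557, 5_919)
print(round(moda, 1), round(faf, 1), round(recall, 1),
      round(precision, 1), round(f1, 1))
# -> 31.2 7.1 68.1 64.8 66.4, matching the table.
```

Note that MODP cannot be recovered from these counts alone, since it measures the average overlap between matched ground-truth and predicted boxes.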

Legend

Symbol | Description
online method | This is an online (causal) method, i.e. the solution is immediately available with each incoming frame and cannot be changed at any later time.
using public detections | This method used the provided detection set as input.
using private detections | This method used a private detection set as input.
new | This entry has been submitted or updated less than a week ago.

References:


[1] Bernardin, K. & Stiefelhagen, R. Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. EURASIP Journal on Image and Video Processing, 2008(1):1-10, 2008.