CVPR 2020 MOTS Challenge Results

See the Evaluation Measures section below for a detailed description of each metric.



Abstract submission deadline: May 30, 2020

Each participant of the challenge should send an abstract (max. 1200 characters) to motsegmentation@motchallenge.net by May 30th in order to publish a short paper or present their method at the workshop.

Benchmark Statistics

| # | Tracker | Detections | sMOTSA | IDF1 | MOTSA | MOTSP | MODSA | MT | ML | TP | FP | FN | Recall | Precision | ID Sw. | Frag | Hz |
|---|---------|------------|--------|------|-------|-------|-------|----|----|----|----|----|--------|-----------|--------|------|----|
| 1 | TrackRCNN | public | 40.6 ±13.8 | 42.4 ±9.2 | 55.2 | 76.1 | 56.9 | 127 (38.7) | 71 (21.6) | 19,628 | 1,261 | 12,641 | 60.8 | 94.0 | 567 (932.2) | 868 (1,427.0) | 2.0 |
| 2 | Mcmots | undisclosed, online | 51.1 ±5.2 | 63.9 ±8.1 | 66.0 | 79.7 | 66.9 | 141 (43.0) | 53 (16.2) | 23,752 | 2,166 | 8,517 | 73.6 | 91.6 | 290 (394.0) | 764 (1,038.0) | 3.9 |
| 3 | GMPHD_SAF | public, online | 61.8 ±9.9 | 64.3 ±8.0 | 73.4 | 84.8 | 75.1 | 214 (65.2) | 28 (8.5) | 24,698 | 477 | 7,571 | 76.5 | 98.1 | 524 (684.6) | 770 (1,006.0) | 3.8 |
| 4 | DD_Vision | undisclosed | 61.8 ±8.2 | 69.8 ±7.8 | 76.8 | 81.6 | 78.0 | 225 (68.6) | 19 (5.8) | 26,385 | 1,231 | 5,884 | 81.8 | 95.5 | 361 (441.5) | 593 (725.2) | 1.6 |
| 5 | Lif_TS | public | 65.3 ±5.9 | 75.2 ±5.7 | 77.8 | 84.5 | 78.3 | 216 (65.9) | 43 (13.1) | 26,143 | 879 | 6,126 | 81.0 | 96.7 | 149 (183.9) | 457 (564.1) | 2.3 |
| 6 | USN | public | 63.7 ±5.6 | 62.8 ±7.0 | 76.3 | 84.6 | 78.7 | 226 (68.9) | 20 (6.1) | 26,430 | 1,038 | 5,839 | 81.9 | 96.2 | 764 (932.8) | 1,015 (1,239.2) | 1.0 |
| 7 | PA | public | 66.2 ±7.1 | 76.4 ±5.3 | 78.9 | 84.6 | 79.5 | 235 (71.6) | 21 (6.4) | 26,516 | 849 | 5,753 | 82.2 | 96.9 | 216 (262.9) | 449 (546.4) | 2.5 |
| 8 | COSTA | undisclosed | 64.1 ±4.9 | 65.7 ±8.0 | 77.5 | 84.0 | 80.8 | 218 (66.5) | 23 (7.0) | 27,069 | 1,003 | 5,200 | 83.9 | 96.4 | 1,054 (1,256.5) | 1,341 (1,598.6) | 26.6 |
| 9 | PT | public | 66.8 ±4.9 | 67.3 ±6.8 | 79.9 | 84.5 | 81.1 | 234 (71.3) | 20 (6.1) | 27,215 | 1,059 | 5,054 | 84.3 | 96.3 | 370 (438.7) | 629 (745.8) | 0.4 |
| 10 | PTPM | public | 68.8 ±3.5 | 68.5 ±6.2 | 82.6 | 84.1 | 83.7 | 244 (74.4) | 19 (5.8) | 28,108 | 1,084 | 4,161 | 87.1 | 96.3 | 368 (422.5) | 560 (642.9) | 10.1 |
| 11 | ReMOTS | undisclosed | 69.9 ±3.6 | 75.0 ±5.6 | 83.9 | 84.0 | 85.1 | 248 (75.6) | 12 (3.7) | 28,270 | 819 | 3,999 | 87.6 | 97.2 | 388 (442.9) | 621 (708.8) | 0.3 |

Score columns (sMOTSA, IDF1, MOTSA, MOTSP, MODSA, Recall, Precision) are given in percent; the ± values indicate the spread across the individual test sequences. For MT and ML, the number in parentheses is the percentage of the 328 ground-truth trajectories; for ID Sw. and Frag, it is the count divided by the recall. "Online" marks causal methods, and "public" / "undisclosed" indicates whether the provided detection set was used (see the legend below).

TrackRCNN: P. Voigtlaender, M. Krause, A. Ošep, J. Luiten, B. Sekar, A. Geiger, B. Leibe. MOTS: Multi-Object Tracking and Segmentation. In CVPR, 2019. All other entries are anonymous submissions.
| Sequences | Frames | Trajectories | Boxes |
|-----------|--------|--------------|-------|
| 4 | 3,044 | 328 | 32,269 |
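As a quick sanity check, these statistics tie directly into the results table: every tracker's TP + FN sums to the total number of annotated boxes, and the percentages printed next to MT and ML are taken over the 328 ground-truth trajectories. A minimal Python sketch (rows picked arbitrarily from the table above; not part of the official evaluation kit):

```python
# Consistency check between the benchmark statistics and the results table.
# TP + FN equals the total number of annotated boxes for every tracker, and the
# percentage shown next to MT is the count over the 328 ground-truth trajectories.

TOTAL_BOXES = 32_269
TOTAL_TRAJECTORIES = 328

rows = [  # (tracker, TP, FN, MT) copied from the results table
    ("TrackRCNN", 19_628, 12_641, 127),
    ("ReMOTS", 28_270, 3_999, 248),
]

for name, tp, fn, mt in rows:
    assert tp + fn == TOTAL_BOXES, f"{name}: TP + FN should equal the box count"
    print(f"{name}: MT {mt} -> {mt / TOTAL_TRAJECTORIES:.1%} of all trajectories")
```

Running this reproduces the parenthesized MT percentages of the corresponding rows (38.7% and 75.6%).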

Difficulty Analysis

Sequence difficulty (from easiest to hardest, measured by average sMOTSA)

MOTS20-06 (70.2 sMOTSA)
MOTS20-12 (65.2 sMOTSA)
MOTS20-01 (64.1 sMOTSA)
MOTS20-07 (53.2 sMOTSA)


Evaluation Measures

| Measure | Better | Perfect | Description |
|---------|--------|---------|-------------|
| MOTA | higher | 100 % | Multiple Object Tracking Accuracy [1]. This measure combines three error sources: false positives, missed targets and identity switches. |
| MOTP | higher | 100 % | Multiple Object Tracking Precision [1]. The misalignment between the annotated and the predicted bounding boxes. |
| IDF1 | higher | 100 % | ID F1 Score [2]. The ratio of correctly identified detections over the average number of ground-truth and computed detections. |
| FAF | lower | 0 | The average number of false alarms per frame. |
| MT | higher | 100 % | Mostly tracked targets. The ratio of ground-truth trajectories that are covered by a track hypothesis for at least 80% of their respective life span. |
| ML | lower | 0 % | Mostly lost targets. The ratio of ground-truth trajectories that are covered by a track hypothesis for at most 20% of their respective life span. |
| FP | lower | 0 | The total number of false positives. |
| FN | lower | 0 | The total number of false negatives (missed targets). |
| ID Sw. | lower | 0 | The total number of identity switches. Please note that we follow the stricter definition of identity switches as described in [3]. |
| Frag | lower | 0 | The total number of times a trajectory is fragmented (i.e. interrupted during tracking). |
| Hz | higher | Inf. | Processing speed (in frames per second excluding the detector) on the benchmark. The frequency is provided by the authors and not officially evaluated by the MOTChallenge. |
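The count-based columns of the results table are tied together by the usual CLEAR-MOT style formulas; MOTSA and MODSA follow the definitions of the MOTS paper (Voigtlaender et al., CVPR 2019), while sMOTSA additionally weights true positives by mask IoU and therefore cannot be recomputed from the counts alone. A minimal sketch (not the official evaluation kit) that reproduces the TrackRCNN row of the table above:

```python
# How the aggregate scores relate to the raw counts, using the TrackRCNN row.
# MOTSA = 1 - (FN + FP + IDS) / GT and MODSA = 1 - (FN + FP) / GT, where GT is
# the number of ground-truth boxes (TP + FN). The relative ID Sw. value shown
# in parentheses in the results table is the raw count divided by the recall.

tp, fp, fn, ids = 19_628, 1_261, 12_641, 567
gt = tp + fn                              # 32,269 ground-truth boxes

recall = tp / (tp + fn)                   # 60.8
precision = tp / (tp + fp)                # 94.0
modsa = 1 - (fp + fn) / gt                # 56.9 (detection errors only)
motsa = 1 - (fp + fn + ids) / gt          # 55.2 (identity switches also penalized)
ids_rel = ids / recall                    # 932.2

print(f"Recall {recall:.1%}  Precision {precision:.1%}  "
      f"MODSA {modsa:.1%}  MOTSA {motsa:.1%}  ID Sw. rel. {ids_rel:.1f}")
```

The same relations hold for every row; only sMOTSA, MOTSP, and IDF1 need the per-detection mask overlaps and identity assignments rather than the aggregate counts.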

Legend

| Symbol | Description |
|--------|-------------|
| online method | This is an online (causal) method, i.e. the solution is immediately available with each incoming frame and cannot be changed at any later time. |
| using public detections | This method used the provided detection set as input. |
| using undisclosed detections | This method used its own detections rather than the provided set as input. |
| new | This entry has been submitted or updated less than a week ago. |

References:


[1] Bernardin, K. & Stiefelhagen, R. Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. EURASIP Journal on Image and Video Processing, 2008(1):1-10, 2008.
[2] Ristani, E., Solera, F., Zou, R., Cucchiara, R. & Tomasi, C. Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking. In ECCV workshop on Benchmarking Multi-Target Tracking, 2016.
[3] Li, Y., Huang, C. & Nevatia, R. Learning to associate: HybridBoosted multi-target tracker for crowded scene. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2009.