Workshop

Synthetic data has the potential to enable the next generation of deep learning algorithms to thrive on unprecedented amounts of free labelled data while avoiding privacy and dataset bias concerns. As recently shown in our MOTSynth work, models trained on synthetic data can already achieve competitive performance when tested on real datasets.

At the 7th BMTT workshop we aim to bring the tracking community together to further explore the potential of synthetic data. We have an exciting line-up of speakers, and are organizing two challenges aiming to advance the state-of-the-art in synthetic-to-real tracking.

Info

Time Full day, June 20th, 2022
Venue CVPR 2022 (New Orleans, Louisiana)
Train data released February 21st, 2022
Test data released March 21st, 2022
Challenge submission deadline May 23rd, 2022
Technical report deadline May 30th, 2022
Recordings Will be available after the workshop!

Speakers

Schedule (EST)

TBD

Competitions

For this workshop edition we aim to shift the focus of the tracking community towards synthetic data and ask the following question: can we advance state-of-the-art methods in pedestrian tracking using only synthetic data?

To this end, we are organizing two challenges in which participants must develop their models using our recently proposed MOTSynth dataset as the only source of training data and evaluate them on real datasets.

MOTSynth2MOT17 track

For this track, participants can use all the annotation modalities in MOTSynth, and test their pedestrian bounding box tracking methods on the MOT17 test set, under the private detections setting.

Rules:
  • Models cannot be trained on any real data, except for ImageNet. Pretraining with COCO (or any other real dataset) is not allowed.
  • MOT17 public (or any other external) detections cannot be used. Detections need to be obtained from training on MOTSynth only.
  • MOT17 training data can only be used for validation and not for training or fine-tuning.
  • Participants will have to provide a technical report and code, showing that only the allowed training data was used.


Dataset: Train and test sequences are available at the MOTChallenge website.
Baselines: Our baselines, pre-trained models, and helper code for the dataset are available here.
Metric: HOTA will be used to rank participants.
Test server: The test server will be made available on the MOTChallenge website on March 21st.
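Submissions to the MOT17 test server follow the standard MOTChallenge text format: one comma-separated line per box, `frame, id, bb_left, bb_top, width, height, conf, x, y, z`, with the last three fields set to -1 for 2D tracking. As a minimal sketch (the `Track` tuple and `write_mot_results` helper are illustrative, not part of any official MOTChallenge toolkit):

```python
from collections import namedtuple

# Illustrative container for one tracked box in one frame.
Track = namedtuple("Track", "frame track_id left top width height conf")

def write_mot_results(tracks, path):
    """Write one MOTChallenge-format line per box; x, y, z are -1 in 2D."""
    with open(path, "w") as f:
        for t in tracks:
            f.write(f"{t.frame},{t.track_id},{t.left:.2f},{t.top:.2f},"
                    f"{t.width:.2f},{t.height:.2f},{t.conf:.2f},-1,-1,-1\n")

# Example: two frames of a single pedestrian track.
tracks = [Track(1, 1, 912.0, 484.0, 97.0, 109.0, 0.99),
          Track(2, 1, 910.5, 483.0, 98.0, 110.0, 0.98)]
write_mot_results(tracks, "MOT17-01.txt")
```

One such file is produced per test sequence, named after the sequence (e.g. MOT17-01.txt).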


MOTSynth2MOTS20 track

For this track, participants can use all the annotation modalities in MOTSynth, and test their tracking and segmentation methods on the MOTS20 (a.k.a. MOTSChallenge) test set.

Rules:
  • Models cannot be trained on any real data, except for ImageNet. Pretraining with COCO (or any other real dataset) is not allowed.
  • MOTS20 public (or any other external) detections/masks cannot be used. Detections and masks need to be obtained from training on MOTSynth only.
  • MOTS20 training data can only be used for validation and not for training or fine-tuning.
  • Participants will have to provide a technical report and code, showing that only the allowed training data was used.


Dataset: Train and test sequences are available at the MOTChallenge website.
Baselines: Our baselines, pre-trained models, and helper code for the dataset are available here.
Metric: HOTA will be used to rank participants.
Test server: The test server will be made available on the MOTChallenge website on March 21st.
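MOTS20 result files use a space-separated line per mask, `frame id class_id img_height img_width rle`, where the mask is a COCO-style run-length encoding and class id 2 denotes pedestrians. A small sketch of assembling such a line (the helper and the toy RLE string are illustrative, not official tooling; in practice the RLE would come from an encoder such as pycocotools):

```python
# In the MOTS result format, pedestrians carry class id 2.
PEDESTRIAN_CLASS_ID = 2

def format_mots_line(frame, track_id, img_h, img_w, rle):
    """Format one MOTS result line: frame id class_id height width rle."""
    return f"{frame} {track_id} {PEDESTRIAN_CLASS_ID} {img_h} {img_w} {rle}"

# Toy RLE string for illustration only; a real one is produced by a
# COCO-style mask encoder from the predicted binary mask.
line = format_mots_line(1, 2001, 480, 640, "WSV:2d;1O0")
print(line)
```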

For each challenge, we will award both the most innovative and the best performing submissions. Challenge winners will receive a prize (to be announced) and will be asked to give a short presentation describing their approach at the workshop.

Technical report format

Please follow a two-column layout for your submission. The technical report should contain at most 4 pages, including references; shorter reports of 2 pages are very welcome. Submissions are not blind, so please include all authors on the submission. Only participants with a submitted report will be considered for the award and invited to present at the workshop. Please make your challenge entry public once submitted and make clear which method the report belongs to. All reports should be sent to Guillem Brasó (guillem.braso [at] tum . de). The deadline is May 30th, 11:59 PST.
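A convenient way to obtain the required two-column layout is the standard CVPR article template (a sketch of its preamble, assuming the cvpr.sty file from the conference author kit is available alongside the document):

```latex
% Minimal two-column report preamble based on the CVPR author kit.
\documentclass[10pt,twocolumn,letterpaper]{article}
\usepackage{cvpr}      % assumes cvpr.sty from the CVPR author kit
\usepackage{graphicx}

\begin{document}
\title{Technical Report: Our MOTSynth2MOT17 Entry}  % illustrative title
\maketitle
% ... method description, experiments, references (4 pages max) ...
\end{document}
```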

Organizers

Matteo Fabbri (UNIMORE/GoatAI)

Aljoša Ošep (TUM)

Orçun Cetintas (TUM)

Patrick Dendorfer (TUM)

Mark Weber (TUM)

Simone Calderara (UNIMORE/GoatAI)

Rita Cucchiara (UNIMORE/GoatAI)

Laura Leal-Taixé (TUM/ArgoAI)