Workshop

Synthetic data has the potential to enable the next generation of deep learning algorithms to thrive on unprecedented amounts of free labelled data while avoiding privacy and dataset bias concerns. As recently shown in our MOTSynth work, models trained on synthetic data can already achieve competitive performance when tested on real datasets.

At the 7th BMTT workshop we aim to bring the tracking community together to further explore the potential of synthetic data. We have an exciting line-up of speakers, and are organizing two challenges aiming to advance the state-of-the-art in synthetic-to-real tracking.

Info

Time: Full day, June 20th, 2022
Venue: CVPR 2022 (New Orleans, Louisiana)
Train data released: February 21st, 2022
Test data released: March 21st, 2022
Challenge submission deadline: May 23rd, 2022
Technical report deadline: June 1st, 2022, 11:59 PST
Recordings: Will be available after the workshop!

Speakers

Schedule (EST)

Time Title Speaker
9:30-9:50 am Workshop introduction Organizers
9:50-10:20 am Talk 1. Fake It Till You Make It: Face analysis in the wild using synthetic data alone Tadas Baltrusaitis
10:20-10:50 am Talk 2. Hands-Up: Leveraging Synthetic Data for Hands-On-Wheel Detection Gil Elbaz
10:50-11:05 am Coffee break -
11:05-11:15 am Challenge Awards Michael Schoenberg
11:15-11:30 am Challenge Most Promising Direction Winner Talk tbd
11:30-11:45 am Challenge Winner Talk tbd
11:45-1:00 pm Lunch Break -
1:00-1:30 pm Talk 3 Jitendra Malik
1:30-2:00 pm Talk 4 Gül Varol
2:00-2:30 pm Talk 5. From Synthetic Data to Mixed Reality Aayush Prakash and Ido Gattegno
2:30-2:45 pm Coffee break -
2:45-3:15 pm Talk 6 Kate Saenko
3:15-3:45 pm Talk 7. Rethinking the role of tracking for embodied navigation Deva Ramanan
3:45-4:15 pm Talk 8. Capturing the invisible in Multi-Object Tracking Pavel Tokmakov
4:15-4:30 pm Coffee break -
4:30-5:00 pm Round table discussion All speakers

Competitions

For this workshop edition, we aim to shift the focus of the tracking community towards synthetic data and ask the following question: can we advance state-of-the-art methods in pedestrian tracking using only synthetic data?

To this end, we organize challenges in which participants must develop their models using our recently proposed MOTSynth dataset as the only source of training data and evaluate them on real datasets.

MOTSynth2MOT17 track

For this track, participants can use all the annotation modalities in MOTSynth, and test their pedestrian bounding box tracking methods on the MOT17 test set, under the private detections setting.
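
For reference, MOTChallenge expects one plain-text result file per sequence, with one comma-separated row per box: frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, z (the world coordinates x, y, z are set to -1 for 2D box tracking). Below is a minimal sketch of writing results in this format; the tracks variable and the file name are placeholders.

    # Minimal sketch: writing tracker output in the MOTChallenge result format.
    # "tracks" is a hypothetical list of (frame, track_id, x, y, w, h, score)
    # tuples produced by your tracker; replace it with your own data structure.
    def write_mot_results(path, tracks):
        with open(path, "w") as f:
            for frame, track_id, x, y, w, h, score in tracks:
                # frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, z
                # world coordinates are unused for 2D box tracking, hence -1.
                f.write(f"{frame},{track_id},{x:.2f},{y:.2f},{w:.2f},{h:.2f},"
                        f"{score:.2f},-1,-1,-1\n")

    # One file per sequence, named after the sequence (placeholder name here).
    write_mot_results("MOT17-01.txt", [(1, 1, 912.0, 484.0, 97.0, 109.0, 0.9)])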

Rules:
  • Models cannot be trained on any real data, except for ImageNet. Pretraining with COCO (or any other real dataset) is not allowed.
  • MOT17 public (or any other external) detections cannot be used. Detections must be obtained from models trained on MOTSynth only.
  • MOT17 training data can only be used for validation and not for training or fine-tuning.
  • Participants will have to provide a technical report and code, showing that only the allowed training data was used.


Dataset: Train and test sequences are available at the MOTChallenge website.
Baselines: Our baselines, pre-trained models, and helper code for the dataset are available here.
Metric: HOTA will be used to rank participants (see the evaluation sketch below).
Test server: The test server will be made available on the MOTChallenge website on March 21st.
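
Since MOT17 training sequences may be used for validation, HOTA can be computed locally before submitting. The sketch below uses the TrackEval library (the toolkit behind the official MOTChallenge HOTA evaluation); the folder layout and tracker name are placeholders that depend on your local setup.

    # Minimal sketch: local HOTA evaluation on the MOT17 train split with TrackEval.
    import trackeval

    eval_config = trackeval.Evaluator.get_default_eval_config()
    dataset_config = trackeval.datasets.MotChallenge2DBox.get_default_dataset_config()
    dataset_config.update({
        'GT_FOLDER': 'data/gt/mot_challenge/',              # MOT17 ground truth (placeholder path)
        'TRACKERS_FOLDER': 'data/trackers/mot_challenge/',  # your result files (placeholder path)
        'BENCHMARK': 'MOT17',
        'SPLIT_TO_EVAL': 'train',
        'TRACKERS_TO_EVAL': ['MyTracker'],                  # hypothetical tracker name
    })

    evaluator = trackeval.Evaluator(eval_config)
    datasets = [trackeval.datasets.MotChallenge2DBox(dataset_config)]
    metrics = [trackeval.metrics.HOTA()]
    # Prints per-sequence and combined HOTA scores and returns them as dictionaries.
    results, messages = evaluator.evaluate(datasets, metrics)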


Technical report format

Please follow a two-column layout for your submission. The technical report should be at most 4 pages long, including references; shorter reports of 2 pages are very welcome. Submissions are not blind, so please include all authors on the submission. Only participants who submit a report will be considered for the award and for presenting at the workshop. Please make your challenge entry public once submitted and make it clear which method the report belongs to. All reports should be sent to Guillem Brasó (guillem.braso [at] tum . de). The deadline is June 1st, 11:59 PST.

Organizers

Matteo Fabbri (UNIMORE/GoatAI)

Aljoša Ošep (TUM)

Orçun Cetintas (TUM)

Patrick Dendorfer (TUM)

Mark Weber (TUM)

Simone Calderara (UNIMORE/GoatAI)

Rita Cucchiara (UNIMORE/GoatAI)

Laura Leal-Taixé (TUM/ArgoAI)