BMTT-PETS 2017
July 26th 2017, Honolulu, Hawaii, USA
In Conjunction with the Conference on Computer Vision and Pattern Recognition, CVPR 2017
Tracking and Surveillance Challenges 2017

In this workshop, we want to bring together the BMTT and PETS communities by organizing the First Joint Workshop on Tracking and Surveillance.

The idea behind PETS 2017 is to continue the evaluation theme of on-board surveillance systems for the protection of mobile critical assets, as set out in PETS 2016. Such assets (including trucks, trains, and shipping vessels) can be targets for criminals, activists, or even terrorists. The sensors (visible and thermal cameras) are mounted on the asset itself, and surveillance is performed around the asset. Two datasets are provided in PETS 2017: (1) a multi-sensor dataset, as used from PETS 2014 to PETS 2016, which addresses the protection of trucks (the ARENA Dataset); and (2) an extended maritime dataset, the IPATCH Dataset, addressing the application of multi-sensor surveillance to protect a vessel at sea from piracy. The IPATCH Dataset is unique in that it comprises a suite of heterogeneous sensors (GPS, visual and thermal cameras) and fills a previous void of publicly available annotated datasets in the maritime domain. The extended dataset includes new videos with challenging boat movements to track.

From the BMTT 2017 side, we want to shift our attention to detections and their interaction with tracking. Several discussions during previous editions posed the question: how much does tracking really improve over detections? Since tracking accuracy clearly depends heavily on the detections, we previously encouraged the tracking community to use a fixed set of detections, in order to focus on the tracking aspects and to allow for a more direct comparison of tracking methods. At the same time, however, deep learning methods for the detection task have enjoyed enormous success and rapid advances. Nonetheless, pedestrians turn out to be a class on which off-the-shelf methods like the Deformable Part-Based Model (DPM) still perform quite competitively, especially when it comes to generalization across different scenarios. To balance the desire of many members of the community to use custom detectors with the need for comparability across trackers, MOTChallenge will change its format for this workshop: we will allow participants to submit their own sets of detections (which they must make publicly available), and require all tracking methods to be tested on 3 different sets of detections. With these changes, we hope to evaluate trackers more comprehensively, analyzing their behavior under different detection modalities. This should bring us closer to our goal of creating new evaluation metrics that measure only tracking accuracy and are as decoupled from detections as possible.
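To make the per-detection-set evaluation idea above concrete, here is a minimal illustrative sketch (not the official MOTChallenge devkit) of the widely used MOTA metric, which combines false negatives, false positives, and identity switches into a single accuracy score. The function and the example counts below are hypothetical, shown only to illustrate how the same tracker can score differently on different detection sets.

```python
# Illustrative sketch of the MOTA (Multiple Object Tracking Accuracy)
# metric; all names and numbers below are hypothetical examples.

def mota(false_negatives: int, false_positives: int,
         id_switches: int, num_gt: int) -> float:
    """MOTA = 1 - (FN + FP + IDSW) / GT."""
    return 1.0 - (false_negatives + false_positives + id_switches) / num_gt

# The same tracker run on three different detection sets can yield
# noticeably different scores: (FN, FP, IDSW, GT) per run.
runs = {
    "det_set_A": (400, 250, 30, 2000),
    "det_set_B": (300, 300, 25, 2000),
    "det_set_C": (250, 200, 20, 2000),
}
for name, (fn, fp, idsw, gt) in runs.items():
    print(f"{name}: MOTA = {mota(fn, fp, idsw, gt):.3f}")
```

Reporting the score on each detection set separately, rather than a single number, is precisely what lets the evaluation distinguish tracker quality from detector quality.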

Challenges

We have 5 exciting challenges for the first edition of the BMTT-PETS workshop:

  • Challenge 1: Detection and tracking in low-density scenarios
  • Challenge 2: Detection in crowded scenarios
  • Challenge 3: Tracking in crowded scenarios
  • Challenge 4: Atomic event detection
  • Challenge 5: Complex threat event detection
Call for Papers

This is the 1st Joint BMTT-PETS Workshop on Tracking and Surveillance, held in conjunction with the Conference on Computer Vision and Pattern Recognition (CVPR) 2017 in Honolulu, Hawaii, USA.

We are looking forward to welcoming researchers and industry affiliates in computer vision, machine learning, image analysis and related fields, to present and discuss their work. A single-track program with keynote talks, oral and poster presentations shall provide ample opportunities for scientific exchange and discussion.

BMTT-PETS 2017 invites submissions of high-quality research results as full papers.

Paper submission deadline: April 12th, 2017
Notification of acceptance: April 25th, 2017
Camera-ready: May 1st, 2017
Workshop date: July 26th, 2017

Full-paper submissions will undergo a selective double-blind peer-review process, normally by three members of the international reviewing committee. Submitted papers will be refereed on scientific originality and relevance, presentation, and empirical results. For details on formatting, submission, and paper policies, please see the instructions for authors.

There will be 5 challenges, 3 on pedestrian detection/tracking and 2 on surveillance/event detection. We encourage authors to submit their results to one or more of the challenges. For more details, please visit the website for each of the challenges.

Topics include, but are not limited to:

  • Detection for tracking
  • Multi-target tracking
  • Video segmentation
  • Visual surveillance and tracking in crowded scenes
  • Motion prediction and social models
  • Abnormal activity recognition
  • Multi-class tracking and holistic scene understanding
  • Evaluation criteria and metrics for multi-target tracking
  • Action/pose recognition
  • Motion trajectory analysis
  • Human walking behavior
  • Maritime abnormal event detection
  • Activity analysis and monitoring
  • Multi-camera analysis
  • Interaction/sequential analysis
  • Event detection
  • Indexing and retrieval of human behaviors in video sequences
  • Visual feature extraction
  • Behavior and ambient intelligence
  • Context analysis
  • Learning models and evaluation
  • Dataset proposals and bias analysis

As organizers of BMTT-PETS 2017 we are looking forward to your contributions and to welcoming you in Honolulu.

Laura Leal-Taixé
Luis Patino
Anton Milan
Tom Cane
Ian Reid
Daniel Cremers
James L. Crowley
Stefan Roth
Konrad Schindler
James Ferryman

Call for Participation

We are glad to present the

BMTT 2017 Challenge on Detection and Tracking in Crowded Scenarios
This year we have two new exciting challenges focused on the role of detections in multi-object tracking:

1. MOT17Det
For the first time, we welcome researchers to submit pedestrian detection results on the challenging MOTChallenge sequences.

2. MOT17
Following the success of MOT15 and MOT16, we are opening a new tracking challenge with a slightly different philosophy. For this year's challenge, we provide *THREE* sets of detections, and we ask participants to submit tracking results *FOR ALL 3 SETS*. The goal is not to treat detection and tracking as two separate processes, but to study the behavior of each tracker when given different detections as a starting point.

BMTT-PETS 2017 invites submissions of high-quality research results through the MOTChallenge website: https://motchallenge.net/
We welcome submissions of both original work and existing methods.

Important Dates

Challenge Deadline: July 1st, 2017
Workshop date: July 26th, 2017

Best performing methods will be awarded an oral presentation during the BMTT-PETS2017 workshop at CVPR.

Some notes related to previous editions:

  • Note that we require participants to submit results for both challenges on both the training and test sets.
  • Please indicate whether or not you plan to attend CVPR 2017 and the BMTT-PETS workshop.
  • Please download the latest devkit to get access to the evaluation code.
  • Note that the sequences for the new challenges are identical to those of MOT16. However, we have refined the ground truth to make it even more complete and accurate than before.

As organizers of BMTT-PETS 2017, we look forward to your contributions and to welcoming you in Honolulu.
