
Overview of the CVPR 2019 WAD Autonomous Driving Challenge and Participation Details

The CVPR 2019 WAD Autonomous Driving Challenge, hosted in Long Beach, introduces four new tasks, including object-detection and object-tracking transfer-learning tracks built on Didi's massive D²-City dataset and Berkeley's BDD100K dataset, plus a large-scale detection-interpolation track. The challenge aims to advance vision algorithms under diverse, difficult driving conditions; global teams are invited to register by May 31, with winners announced at the workshop on June 17.

Didi Tech

CVPR (Conference on Computer Vision and Pattern Recognition) is the premier global conference on computer vision and pattern recognition, scheduled for June 16‑20, 2019 in Long Beach, USA.

The CVPR 2019 WAD (Workshop on Autonomous Driving) Challenge is an internationally recognized top‑level evaluation competition focused on autonomous‑driving vision. It is known for its large data scale and high difficulty, attracting leading teams from both industry and academia each year.

This year’s challenge offers four brand-new tasks based on multiple driving datasets. Didi jointly proposes three tasks—object-detection transfer learning, object-tracking transfer learning, and large-scale detection interpolation—and provides a massive, high-quality real-world driving video dataset, D²-City (https://gaia.didichuxing.com/d2city), which contains annotations for 12 classes of traffic-related objects.

The two transfer‑learning tracks are built on Didi’s D²‑City dataset and the BDD100K dataset released by Berkeley DeepDrive. In the object‑detection transfer track, participants train a detection model on the US‑collected BDD100K data and apply it to the China‑collected D²‑City data. In the object‑tracking transfer track, participants train on D²‑City and evaluate on BDD100K.
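Cross-domain tracks like these are typically scored with mAP-style metrics built on intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal sketch of the IoU building block is below; the exact metric definitions and thresholds are an assumption here and are specified on the challenge site.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

For example, two 2×2 boxes offset by one pixel in each direction share a 1×1 overlap, giving an IoU of 1/7.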

The large‑scale detection interpolation track requires participants to complete frame‑wise detection results for entire videos that only provide key‑frame annotations, encouraging research that combines detection, interpolation, tracking, and domain adaptation. Participants may leverage BDD100K, other publicly available datasets, or partially manually corrected annotations to improve their results.
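A naive baseline for this track is to linearly interpolate each tracked object's bounding box between two annotated key frames; competitive entries would instead combine detection, tracking, and domain adaptation as the task description suggests. The box format (x1, y1, x2, y2) below is an assumption for illustration.

```python
def interpolate_boxes(frame_a, box_a, frame_b, box_b):
    """Linearly interpolate a box between two key frames.

    Returns {frame_index: box} for every frame from frame_a to frame_b,
    with each corner coordinate blended by the fractional position t.
    """
    assert frame_b > frame_a, "key frames must be ordered"
    span = frame_b - frame_a
    result = {}
    for f in range(frame_a, frame_b + 1):
        t = (f - frame_a) / span  # 0.0 at frame_a, 1.0 at frame_b
        result[f] = tuple((1 - t) * a + t * b for a, b in zip(box_a, box_b))
    return result
```

For an object moving from (0, 0, 10, 10) at frame 0 to (4, 4, 14, 14) at frame 4, the midpoint frame 2 gets the box (2, 2, 12, 12).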

Compared with existing autonomous-driving datasets, D²-City offers more challenging scenarios, such as low illumination, rain and fog, traffic congestion, and low image clarity, collected across various Chinese cities. Didi also plans to provide extensive, precise annotations for the dataset, including detection labels on thousands of video segments and hundreds of thousands of key frames, as well as tracking labels on nearly a thousand video segments.

Participation is open to global enterprises, research institutions, and universities. Teams can register on the challenge website (http://wad.vision). The registration deadline is May 31, 2019, and the winning teams will be announced at the CVPR 2019 Autonomous Driving Workshop on June 17.

Didi states that the competition aims to build an efficient, open, and sustainable future‑mobility ecosystem. By encouraging transfer‑learning approaches, the challenge seeks to accelerate the practical deployment of autonomous‑driving vision algorithms across diverse environments, improve annotation speed and quality, and reduce labeling costs. Didi welcomes top algorithm experts worldwide to join and drive further technological innovation.

For more details on the schedule, awards, and the latest updates, please visit the challenge homepage: https://z.didi.cn/WAD.

Tags: computer vision, AI, transfer learning, dataset, autonomous driving, Challenge
Written by

Didi Tech

Official Didi technology account
