
The Second DiDi Di-Tech Algorithm Competition

DiDi-Udacity Self-Driving Grand Challenge

DiDi-Udacity Self-Driving Car Challenge 2017

Welcome to the Self-Driving Car Challenge 2017


The Challenge

One of the most important aspects of operating an autonomous vehicle is understanding the surrounding environment in order to make safe decisions. Udacity and Didi Chuxing are partnering to give students an incentive to come up with the best way to detect obstacles using camera and LIDAR data. This challenge calls for pedestrian, vehicle, and general obstacle detection that is useful to both human drivers and self-driving car systems.

Competitors will need to process LIDAR and Camera frames to output a set of obstacles, removing noise and environmental returns. Participants will be able to build on the large body of work that has been put into the Kitti datasets and challenges, using existing techniques and their own novel approaches to improve the current state-of-the-art.
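As a toy illustration of the filtering step this describes, the sketch below drops LIDAR returns near an assumed, hard-coded ground height. The threshold values are hypothetical; a real pipeline would estimate the ground plane per frame (for example with RANSAC) rather than assuming it:

    import numpy as np

    def remove_ground_returns(points, ground_z=-1.7, margin=0.2):
        """Drop LIDAR returns at or below an assumed ground height.

        points is an (N, 4) array of x, y, z, reflectance. ground_z is a
        hypothetical sensor-relative ground height; a real pipeline would
        fit the plane per frame instead of hard-coding it.
        """
        return points[points[:, 2] > ground_z + margin]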

Specifically, students will be competing against each other on the Kitti Object Detection Evaluation Benchmark. While a leaderboard already exists for academic publications, Udacity and Didi will host our own leaderboard specifically for this challenge, and we will use the standard object detection development kit so that approaches are evaluated just as they are in academia and industry.

New datasets for both testing and training will be released in a format that adheres to the Kitti standard, and participants will be able to use all of the associated tools to process and evaluate their own approaches. While Udacity is currently producing datasets for the challenge, all participants can get started today using the existing Kitti data.

Round 1 - Vehicles

The first round will provide data collected from sensors on a moving car, and competitors must identify the position and dimensions of multiple stationary and moving obstacles.

Round 2 - Vehicles, Pedestrians

The second round adds cyclists and pedestrians, and also challenges participants to estimate each obstacle's orientation.

Data/Inputs

UPDATE AS OF MARCH 22

Datasets are in a similar format to what Udacity has released previously, with minor changes based on new sensors and a conscious effort to remain close to the Kitti Datasets. Camera, Radar, and LIDAR data are present in the released ROS bags, along with RTK GPS information for the capture vehicle (which carries the aforementioned sensors) and ground truth for each tracked obstacle. This data is not synchronized in any way, but we will soon release an open source tool that interpolates obstacle and capture-vehicle positions to the camera image frames in order to create XML tracklet files.
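Until that tool is released, here is a minimal sketch of the interpolation step it describes, assuming simple per-axis linear interpolation of RTK positions onto camera timestamps. The function name and array layout are illustrative, not the tool's actual API:

    import numpy as np

    def poses_at_camera_times(obs_times, obs_xyz, cam_times):
        """Linearly interpolate an obstacle's RTK positions onto camera
        frame timestamps.

        obs_times: (M,) ascending timestamps of the RTK fixes.
        obs_xyz:   (M, 3) positions recorded at those times.
        cam_times: (K,) camera frame timestamps to interpolate onto.
        Returns a (K, 3) array of interpolated positions.
        """
        return np.stack(
            [np.interp(cam_times, obs_times, obs_xyz[:, i]) for i in range(3)],
            axis=1,
        )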

Training data will mirror the Kitti Datasets, enabling all Kitti data to be used by competitors to train and refine their models. Below is a detailed description of the data available, retrieved from the Kitti Raw Data site. Please note that while the data released by Udacity and Didi for this challenge will be similar, there will be at least a few minor differences. Radar information will be present, and we will only be capturing imagery from a single, forward-facing monocular camera. Based on the requirements of this challenge, other changes may be made to the data format as required, so competitors should focus on designing a high-level solution to this problem rather than overfitting a solution to the Kitti datasets.

The dataset comprises the following information, captured and synchronized at 10 Hz:

  • Raw (unsynced+unrectified) and processed (synced+rectified) grayscale stereo sequences (0.5 Megapixels, stored in png format)
  • Raw (unsynced+unrectified) and processed (synced+rectified) color stereo sequences (0.5 Megapixels, stored in png format)
  • 3D Velodyne point clouds (100k points per frame, stored as binary float matrix)
  • 3D GPS/IMU data (location, speed, acceleration, meta information, stored as text file)
  • Calibration (Camera, Camera-to-GPS/IMU, Camera-to-Velodyne, stored as text file)
  • 3D object tracklet labels (cars, trucks, trams, pedestrians, cyclists, stored as xml file)

Here, "unsynced+unrectified" refers to the raw input frames where images are distorted and the frame indices do not correspond, while "synced+rectified" refers to the processed data where images have been rectified and undistorted and where the data frame numbers correspond across all sensor streams. For both settings, files with timestamps are provided. Most people require only the "synced+rectified" version of the files.

Requirements

Using the given data, competitors must produce tracklet XML files describing the obstacles they detect in the released test datasets; see the March 27 update under Evaluation and Judging Criteria below.

Evaluation and Judging Criteria

Student submissions will be automatically evaluated using the method put forth by Kitti in their CVPR 2012 publication, which uses the PASCAL criteria for object detection and orientation estimation performance. Specifically, we will be using an average of the “Moderate” evaluation parameters for both cars and pedestrians for ranking, which are specified on the Object Detection Evaluation Benchmark page. Extensive documentation on getting started with the dataset format and the evaluation procedure is available on the Kitti object detection evaluation page, and specifically within the Development Kit which is available here.
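Under the PASCAL criteria, a detection counts as a true positive when its bounding-box overlap (intersection over union) with a ground-truth box meets a class-specific threshold; on the Kitti benchmark that is 0.7 for cars and 0.5 for pedestrians and cyclists. A minimal sketch of the 2D overlap measure:

    def iou_2d(a, b):
        """Intersection over union of two axis-aligned boxes given as
        (x1, y1, x2, y2) corner coordinates."""
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    # On Kitti, a car detection is a true positive if iou_2d(det, gt) >= 0.7;
    # pedestrians and cyclists need >= 0.5.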

UPDATE AS OF MARCH 27

In order to participate in the competition (both Rounds 1 and 2), you must upload a tracklet XML file representing the results of your processing on the released test datasets, synchronized to camera frames. Once submissions open, this XML file will be the only metric used to quantify performance and placement on the leaderboard. The deliverable at the end of the competition, for the finals at the Udacity Mountain View HQ, will be a ROS package/node that takes camera, radar, and LIDAR data and processes them to achieve the same results demonstrated by your leaderboard submissions. Both Python and C/C++ ROS nodes are acceptable, but real-time performance is necessary, as defined in the official rules.
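To give a feel for the submission format, the sketch below writes tracklets in the shape used by the Kitti raw-data devkit (objectType, h/w/l dimensions, first_frame, and per-frame tx..rz poses). The serialization details here are simplified assumptions; validate your output against the official devkit before submitting:

    POSE_KEYS = ("tx", "ty", "tz", "rx", "ry", "rz")

    def write_tracklet_xml(path, tracklets):
        """Write a minimal Kitti-style tracklet XML file.

        tracklets: list of dicts with keys 'type', 'h', 'w', 'l',
        'first_frame', and 'poses' (a list of (tx, ty, tz, rx, ry, rz)
        tuples, one per camera frame the obstacle appears in).
        """
        items = []
        for t in tracklets:
            poses = "".join(
                "<item>" + "".join(
                    f"<{k}>{v:.6f}</{k}>" for k, v in zip(POSE_KEYS, pose)
                ) + "</item>"
                for pose in t["poses"]
            )
            items.append(
                f"<item><objectType>{t['type']}</objectType>"
                f"<h>{t['h']:.3f}</h><w>{t['w']:.3f}</w><l>{t['l']:.3f}</l>"
                f"<first_frame>{t['first_frame']}</first_frame>"
                f"<poses><count>{len(t['poses'])}</count>{poses}</poses>"
                "<finished>1</finished></item>"
            )
        with open(path, "w") as f:
            f.write('<?xml version="1.0" encoding="UTF-8"?>\n'
                    '<boost_serialization signature="serialization::archive" version="9">\n'
                    f"<tracklets><count>{len(tracklets)}</count>"
                    + "".join(items) + "</tracklets>\n</boost_serialization>\n")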

Prizes

First Place – US$100,000 Cash Prize

Second Place – US$3,000 Cash Prize

Third Place – US$1,500 Cash Prize

Top 5 Teams – Airfare from their place of residence and hotel accommodation for two representatives from each team to attend the award ceremony in Silicon Valley, plus the chance to run their code on the Udacity self-driving car.

Timeline

The submission deadlines are noted below. All periods begin at 12:00 AM PST and all deadlines end at 11:59 PM PST on the noted dates.

March 8 – March 21 — Competitor and Team Registration

Competition platform opens for account and team registration

Competitors can register with Udacity accounts or create a new account

Team leaders can recruit team members through the forum

March 22 – April 21 — Round 1

Data set for Round 1 is released

New user and team registration closes on April 21

Submission for Round 1 closes on April 21 at 11:59 PM PST

April 22 – April 30 — Round 1 Code Evaluation

Top 75 teams will be asked to submit runnable code

Code will be spot-checked to prevent fraudulent submissions

Of that group, the top 50 qualified teams will progress to the next round

May 1 – May 31 — Round 2

Data set for Round 2 is released

Teams will no longer be able to add or remove members after May 21

Jun 1 – Jun 14 — Finalist Evaluation

Top teams required to submit identity verification documents and runnable code

Code will be evaluated and output compared against scores on final leaderboard

Top 5 teams will be invited to attend final award ceremony at Udacity headquarters in Mountain View, California

Jun 15 – Jul 12 — Travel arrangements

A break for teams to arrange visas and travel; Udacity will assist with both

Jul 12 — Final Award Ceremony

Top 5 teams present their solutions to a panel of Udacity and DiDi executives and have the chance to run their code on Udacity's self-driving car

Rules

In addition to Contest Rules, the following administrative rules shall be applied to the Contest:

Eligibility

See the Contest Rules for all terms of eligibility and any restrictions.

Teams

Submissions

Prizes

Terms & Conditions

For the complete set of Terms & Conditions, see here.