The 2nd DiDi Di-Tech Algorithm Competition

DiDi-Udacity Self-Driving Car Challenge 2017

Welcome to the Self-Driving Car Challenge 2017

The Challenge

One of the most important aspects of operating an autonomous vehicle is understanding the surrounding environment in order to make safe decisions. Udacity and Didi Chuxing are partnering to give students an incentive to come up with the best way to detect obstacles using camera and LIDAR data. This challenge targets pedestrian, vehicle, and general obstacle detection that is useful to both human drivers and self-driving car systems.

Competitors will need to process LIDAR and camera frames to output a set of obstacles, removing noise and environmental returns. Participants will be able to build on the large body of work that has gone into the KITTI datasets and challenges, using existing techniques and their own novel approaches to improve on the current state of the art.

Students will compete against each other to achieve the highest score on Udacity's intersection-over-union evaluation metric. While a leaderboard already exists for academic publications, Udacity and DiDi will host our own leaderboard specifically for this challenge, and we will use the standard object detection development kit, which lets us evaluate approaches the same way they are evaluated in academia and industry.

Round 1 - Vehicles

The first round will provide data collected from sensors on a moving car, and competitors must identify the position of a single moving obstacle.

Round 2 - Vehicles, Pedestrians

The second round will additionally challenge participants to estimate obstacle dimensions and orientation, and will add pedestrians to the obstacle set.

Data/Inputs

Training data will be similar to the KITTI datasets. Below is a detailed description of the data available, retrieved from the KITTI raw data site. Please note that while the data released by Udacity and Didi for this challenge will be similar, it is guaranteed to have at least a few minor differences: radar information will be present, and we will only be capturing imagery from a single, forward-facing monocular camera. Based on the requirements of this challenge, other changes may be made to the data format as required, and competitors should focus on designing a high-level solution to this problem rather than overfitting a solution to the KITTI datasets.

The dataset comprises the following information in a ROS bag file:

  • Grayscale stereo sequences (0.5 Megapixels, stored in png format)
  • Color stereo sequences (0.5 Megapixels, stored in png format)
  • 3D Velodyne point clouds (100k points per frame, stored as binary float matrix)
  • 3D GPS/IMU data (location, speed, acceleration, meta information, stored as text file)
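
If you haven't worked with bag files before, the rosbag Python API is a convenient way to inspect them. The sketch below iterates over camera and LIDAR messages; the topic names are assumptions, so check the released bags with rosbag info first.

```python
# Minimal sketch: iterate over camera and LIDAR messages in a bag file.
# The topic names are assumptions -- verify them with `rosbag info <file>`.
import rosbag

bag = rosbag.Bag('dataset.bag')
for topic, msg, t in bag.read_messages(topics=['/image_raw',
                                               '/velodyne_points']):
    if topic == '/image_raw':
        # sensor_msgs/Image: raw pixel buffer in msg.data
        print('camera frame at %s (%dx%d)' % (t, msg.width, msg.height))
    else:
        # sensor_msgs/PointCloud2: decode with sensor_msgs.point_cloud2
        print('LIDAR sweep at %s' % t)
bag.close()
```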

Udacity provides a script for synchronizing the data so that it is more closely aligned with the KITTI format. It creates XML tracklet files by interpolating obstacle and capture-vehicle positions to the camera image frames.
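
As a rough illustration of that synchronization step (not the official script), obstacle positions can be resampled onto the camera frame timestamps with linear interpolation; all names below are illustrative.

```python
# Sketch: interpolate GPS-timestamped obstacle positions onto camera frames.
# `pose_times`, `poses`, and `frame_times` are illustrative inputs, not the
# official script's interface.
import numpy as np

def sync_to_frames(pose_times, poses, frame_times):
    """Linearly interpolate Nx3 obstacle positions onto camera timestamps."""
    poses = np.asarray(poses, dtype=float)
    return np.column_stack([np.interp(frame_times, pose_times, poses[:, i])
                            for i in range(3)])

# Example: 2 Hz GPS fixes resampled onto ~10 Hz camera frames.
pose_times = [0.0, 0.5, 1.0]
poses = [[0.0, 0.0, 0.0], [1.0, 0.2, 0.0], [2.0, 0.4, 0.0]]
print(sync_to_frames(pose_times, poses, np.arange(0.0, 1.0, 0.1)))
```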

Round 1 Data

Here are links to the datasets we've released for Round 1:

Notes

Since we are not evaluating orientation, we are using a sphere-to-bounding-box IoU to minimize the effect of incorrect orientation. This means that the overlap scores will be low, but uniformly so for all contestants.
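
The authoritative definition of the metric is the organizers' evaluation code; purely as an illustration of the idea, the overlap between a sphere and an axis-aligned box can be estimated by Monte Carlo sampling:

```python
# Rough Monte Carlo estimate of sphere-vs-axis-aligned-box IoU.
# An illustration of the idea only, not the official evaluation code.
import numpy as np

def sphere_box_iou(center, radius, box_min, box_max, n=200000):
    center, box_min, box_max = (np.asarray(a, dtype=float)
                                for a in (center, box_min, box_max))
    # Sample uniformly over a region that covers both shapes.
    lo = np.minimum(center - radius, box_min)
    hi = np.maximum(center + radius, box_max)
    pts = np.random.uniform(lo, hi, size=(n, 3))
    in_sphere = np.sum((pts - center) ** 2, axis=1) <= radius ** 2
    in_box = np.all((pts >= box_min) & (pts <= box_max), axis=1)
    union = np.count_nonzero(in_sphere | in_box)
    inter = np.count_nonzero(in_sphere & in_box)
    return inter / float(union) if union else 0.0

print(sphere_box_iou([0, 0, 0], 1.0, [-0.5, -0.5, -0.5], [0.5, 0.5, 0.5]))
```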

A calibration bag is provided. Contestants should generate calibrations between the sensors using this bag:

Requirements

Using the given data, competitors must meet the following requirements:

Submissions, Evaluation and Judging Criteria

Student submissions will be automatically evaluated using an intersection over union technique described in the competition documentation.

In order to participate in the competition (both Rounds 1 and 2), you must upload a tracklet XML file that represents the results of your processing on the released test datasets, synchronized to camera frames. This XML file will be the only artifact used to quantify performance and placement on the leaderboard once submissions are open.
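
The authoritative tracklet schema is defined by the tracklet code in the starter repo; purely as a sketch of the general KITTI-style tracklet shape, a submission file could be assembled with xml.etree as below. The field names follow the KITTI devkit and are assumptions to verify against the provided code.

```python
# Sketch: emit a KITTI-style tracklet XML file for one obstacle.
# Field names follow the KITTI devkit and are assumptions; check the
# tracklet code in the starter repo for the exact expected schema.
from xml.etree import ElementTree as ET

def write_tracklet(path, obj_type, h, w, l, first_frame, poses):
    """poses: one (tx, ty, tz, rz) tuple per camera frame."""
    root = ET.Element('tracklets')
    ET.SubElement(root, 'count').text = '1'
    item = ET.SubElement(root, 'item')
    ET.SubElement(item, 'objectType').text = obj_type
    for tag, val in (('h', h), ('w', w), ('l', l)):
        ET.SubElement(item, tag).text = str(val)
    ET.SubElement(item, 'first_frame').text = str(first_frame)
    poses_el = ET.SubElement(item, 'poses')
    ET.SubElement(poses_el, 'count').text = str(len(poses))
    for tx, ty, tz, rz in poses:
        p = ET.SubElement(poses_el, 'item')
        for tag, val in (('tx', tx), ('ty', ty), ('tz', tz),
                         ('rx', 0.0), ('ry', 0.0), ('rz', rz)):
            ET.SubElement(p, tag).text = str(val)
    ET.ElementTree(root).write(path, encoding='UTF-8', xml_declaration=True)

write_tracklet('tracklet_labels.xml', 'Car', 1.5, 1.8, 4.2, 0,
               [(10.0, 0.0, -0.8, 0.0), (10.5, 0.0, -0.8, 0.0)])
```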

To qualify for Round 2, teams must submit their prediction code and ROS node at the end of Round 1 to be checked by the Udacity and DiDi teams; however, it is not necessary to achieve real-time performance to qualify for Round 2.

The deliverable at the end of the competition, for the finals at the Udacity Mountain View HQ, will be a ROS package/node that takes camera, radar, and LIDAR data and processes them to achieve the same results demonstrated by your leaderboard submissions. Both Python and C/C++ ROS nodes are acceptable, but real-time performance is necessary, as defined in the official rules.
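
For teams new to ROS, the sketch below shows one possible shape for such a node in rospy; the topic names, message types, and the detect() placeholder are assumptions, not the official interface.

```python
#!/usr/bin/env python
# Skeleton of a detection node: subscribe to LIDAR sweeps, run a detector,
# publish the results. Topic names and message types are assumptions.
import rospy
from sensor_msgs.msg import PointCloud2
from visualization_msgs.msg import MarkerArray

def detect(cloud_msg):
    """Placeholder: run obstacle detection on one LIDAR sweep."""
    return MarkerArray()  # empty result in this sketch

if __name__ == '__main__':
    rospy.init_node('obstacle_detector')
    pub = rospy.Publisher('/detections', MarkerArray, queue_size=1)
    rospy.Subscriber('/velodyne_points', PointCloud2,
                     lambda msg: pub.publish(detect(msg)), queue_size=1)
    rospy.spin()  # process callbacks until shutdown
```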

This GitHub repository holds the data and code required for getting started with the Udacity/Didi self-driving car challenge. To generate tracklets (annotation data) from the released datasets, check out the Docker code in the /tracklet folder. For sensor transform information, check out the /mkz-description folder.

Please note that tracklets cannot be generated for Dataset 1 without modifying this code, as we added an additional RTK GPS receiver onto the capture vehicle in order to determine orientation. The orientation-determination code that enables world-to-capture-vehicle transformations on Dataset 2 is currently being written, with a release target of 4/4/2017 EOD.

Resources

Starting Guides:

Here's a list of the projects we've open-sourced already that may be helpful:

Prizes

First Place – US$100,000 Cash Prize (before tax)

Second Place – US$3,000 Cash Prize (before tax)

Third Place – US$1,500 Cash Prize (before tax)

Top 5 Teams – Airfare from their place of residence and hotel accommodation for two representatives from each team to attend the award ceremony in Silicon Valley, and a chance to run code on the Udacity self-driving car.

Timeline

The submission deadlines are as noted below. All start dates begin at 12:00 AM PDT, and all end dates/deadlines close at 11:59 PM PDT, on the noted dates.

March 8 – March 21 — Competitor and Team Registration

Competition platform opens for account and team registration

Competitors can register with Udacity accounts or create a new account

Team leaders can recruit team members through the forum

March 22 – May 31 — Round 1

Data set for Round 1 is released

New user and team registration closes on April 21

Submissions for Round 1 close on May 31 at 11:59 PM PDT

June 1 – June 4 — Round 1 Evaluation

Top 75 teams will be asked to submit runnable code

Code will be spot-checked to prevent fraudulent submissions

Of that group, the top 50 qualified teams will progress to the next round

June 5 – July 3 — Round 2

Data set for Round 2 is released

Teams will no longer be able to add or remove members after June 26

July 4 – July 11 — Finalist Evaluation

Top teams required to submit identity verification documents and runnable code

Code will be evaluated and its output compared against scores on the final leaderboard

Top 5 teams will be invited to attend final award ceremony at Udacity headquarters in Mountain View, California

After July 11 — Travel arrangements

Five-week break for teams to arrange visas and travel, with help from Udacity

TBD — Final Award Ceremony

Top 5 teams present their solutions to a panel of Udacity and DiDi executives and have the chance to run their code on Udacity’s self-driving car

Rules

In addition to the Contest Rules, the following administrative rules apply to the Contest:

Eligibility

See the Contest Rules for all terms of eligibility and any restrictions.

Teams

Prizes

Terms & Conditions

For the complete set of Terms & Conditions, see here.