In November 2016, Tesla released a video of a Model X with the second-generation hardware package driving the roads around company headquarters in Silicon Valley. The video offers insight into what the Autopilot system “sees” as it drives. The Model X can distinguish obstacles in its own travel lane from oncoming vehicles in other lanes, and can tell road signs apart from traffic lights.
Tesla CEO Elon Musk tweeted the demonstration, which shows a passenger sitting in the driver’s seat (required to be there by law) as the car navigates traffic, stop lights, and other road conditions on the way to its destination before parking itself. In addition to the view from the car’s interior, the video shows what the car’s left rear, medium range, and right rear cameras see.
According to Musk, “human driving is twice as dangerous as automated” driving, an opinion aligned with the UN, which estimates that a person dies somewhere in the world every four seconds because of a traffic accident, with the main causes being human error, speeding, distracted driving, and alcohol and drug use.
All forthcoming Tesla cars, including the Model 3 and of course the Model S, will be fitted with the hardware necessary to drive without human input. That means total autonomy, the so-called Level 5, which reduces human involvement to starting the car and setting the destination. In practice, however, they must stop at Level 4: the cars will be capable of driving without the driver, but legal requirements mean a human must remain behind the wheel.
When Tesla says its cars will incorporate Automation Level 5, it refers to the classification established by the Society of Automotive Engineers (SAE) in 2014: six levels that measure human involvement in driving, from simple assistance to full autonomy. The lowest level, zero, covers automatic systems that do not control the vehicle at all and only issue warnings. The remaining levels require progressively less driver attention, until Level 5 removes the human driver entirely. The levels are the following:
Level 0: Total control and command of all functions by the driver.
Level 1: Automation of some specific functions (e.g., wipers that activate when it rains, automatic braking in an emergency).
Level 2: Combined function automation (e.g., adaptive cruise control with lane keeping, where the car controls speed and steering at the same time).
Level 3: Limited automation of self-driving (the car has more control, but there must be a driver available).
Level 4: Total automation of self-driving (the car performs all essential functions for safety), but the driver must be present.
Level 5: Total automation of the driving, without the driver’s presence.
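Since the levels form an ordered scale, they map naturally onto an enumeration. The sketch below is purely illustrative (the class and function names are my own, not SAE terminology beyond the level names themselves); the key point is that only Level 5 drops the requirement for a human driver.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels as summarized above."""
    NO_AUTOMATION = 0          # driver controls everything
    DRIVER_ASSISTANCE = 1      # automation of specific functions
    PARTIAL_AUTOMATION = 2     # combined function automation
    CONDITIONAL_AUTOMATION = 3 # limited self-driving, driver on standby
    HIGH_AUTOMATION = 4        # full self-driving, driver present
    FULL_AUTOMATION = 5        # no driver needed

def driver_required(level: SAELevel) -> bool:
    """Below Level 5, a human driver must still be present."""
    return level < SAELevel.FULL_AUTOMATION
```

Because `IntEnum` values compare as integers, checking whether a driver is required is a single comparison against Level 5.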
WHAT IS “AUTOPILOT”?
The first phase, Tesla’s Enhanced Autopilot software, was expected to complete validation and be rolled out to cars via an over-the-air update in December 2016, subject to regulatory approval.
Enhanced Autopilot adds several new capabilities to the Tesla Autopilot driving experience. The car will match speed to traffic conditions, keep within a lane, automatically change lanes without driver input, transition from one freeway to another, exit the freeway when the destination is near, and self-park when you arrive at your destination.
FULL SELF-DRIVING CAPABILITY (HARDWARE 2)
The new hardware consists of eight surround cameras that provide 360 degrees of visibility around the car up to 250 meters of range. Twelve updated ultrasonic sensors complement this vision, allowing the detection of hard and soft objects at nearly twice the distance of the previous system. A forward-facing radar with enhanced processor provides additional data about the world on a redundant wavelength that can see through heavy rain, fog, dust and even the car ahead.
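For a rough sense of scale, the suite described above can be written down as data. The sketch below uses only the figures quoted in this article; the ultrasonic and radar ranges are my own placeholder assumptions, since the article does not give exact numbers for them.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Sensor:
    kind: str       # "camera", "ultrasonic", or "radar"
    range_m: float  # maximum detection range in meters

# Hypothetical Hardware 2 suite, using the counts quoted above.
HW2_SUITE = (
    [Sensor("camera", 250.0)] * 8        # 8 surround cameras, up to 250 m
    + [Sensor("ultrasonic", 8.0)] * 12   # 12 ultrasonic sensors (range assumed)
    + [Sensor("radar", 160.0)]           # forward radar (range assumed)
)
```

Twenty-one sensors in total, with the cameras providing the long-range 360-degree view and the ultrasonics covering the near field.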
All you will need to do is get in and tell your car where to go. If you don’t say anything, the car will consult your calendar and take you to your appointment as the assumed destination, or simply home if nothing is on the calendar. Your Tesla will figure out the optimal route, navigate urban streets (even without lane markings), manage complex intersections with traffic lights, stop signs and roundabouts, and handle densely packed freeways with cars moving at high speed. When you arrive at your destination, simply step out at the entrance and your car will enter park seek mode, automatically search for a spot and park itself.
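The destination fallback described above (an explicit request, else the next calendar appointment, else home) is essentially a priority chain. A minimal sketch, with a hypothetical function name of my own:

```python
from typing import Optional

def pick_destination(spoken: Optional[str],
                     calendar_event: Optional[str],
                     home: str = "home") -> str:
    """Choose a destination by priority: explicit request first,
    then the next calendar appointment, then home."""
    if spoken:
        return spoken
    if calendar_event:
        return calendar_event
    return home

# No spoken destination and an empty calendar: default to home.
assert pick_destination(None, None) == "home"
```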
Regulatory approval of autonomous driving is one to three years off and will probably vary by country. Meanwhile, Tesla is operating the new system in what it calls “shadow mode”: it collects data just as if the system were commanding the car. That data is shared with Tesla software engineers, who use it to compare how human drivers respond to real-world driving situations with how the computer would have responded if allowed to.
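In spirit, shadow mode means running the planner without letting it actuate anything, and logging wherever it disagrees with the human. A minimal sketch of that idea (the function and field names are hypothetical, not Tesla’s actual software):

```python
def shadow_compare(human_action: str, planned_action: str, log: list) -> str:
    """Shadow mode: the computer's plan is recorded but never executed.

    Only the human's action reaches the car; disagreements are logged
    for engineers to analyze later.
    """
    if planned_action != human_action:
        log.append({"human": human_action, "computer": planned_action})
    return human_action  # the human always drives

disagreements = []
executed = shadow_compare("brake", "steer_left", disagreements)
# The car does what the human did; the mismatch is logged for review.
```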
For Tesla, its customers and their partially autonomous cars are a widely distributed test fleet. The hardware required for true autonomy is already in place, so the transition can play out in software updates. Musk has said that could be technically feasible—if not legally so—within two years.
PROCESSING POWER INCREASED 40 TIMES BY NVIDIA
To make sense of all the data from the cars’ cameras and sensors, a new onboard computer developed by NVIDIA, with over 40 times the computing power of the previous generation, runs the new Tesla-developed neural net for vision, sonar and radar processing. Together, this system provides a view of the world that a driver alone cannot access: it sees in every direction simultaneously, and on wavelengths far beyond the human senses.
This new onboard computer consists of the NVIDIA DRIVE™ PX 2 (the world’s most powerful engine for in-car artificial intelligence) and the NVIDIA DriveWorks™ software, which helps car developers quickly implement deep learning techniques in autonomous vehicles.
Creating and training the AI neural network is one of the most important processes in building an autonomous vehicle. The neural network must be refined continuously and rapidly to learn the new driving scenarios that car manufacturers want to enable. DRIVE PX 2 is complemented by the NVIDIA DIGITS™ GPU Deep Learning Training System, which provides a state-of-the-art solution for training the networks behind autonomous vehicles.
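The continuous refine-as-data-arrives cycle described above can be caricatured as a standard gradient-descent training loop. The toy below fits a single linear unit to a stream of examples; it is purely illustrative (real driving networks are deep vision models trained on GPU systems like DIGITS), but it shows the shape of the process: each new “scenario” nudges the model’s parameters rather than retraining from scratch.

```python
def train_step(w: float, b: float, x: float, y: float, lr: float = 0.1):
    """One gradient-descent update of the model y ≈ w*x + b on a new example."""
    err = (w * x + b) - y
    # Gradients of the squared error 0.5*err**2 with respect to w and b.
    return w - lr * err * x, b - lr * err

# New "driving scenarios" arrive as (input, target) pairs drawn from y = 2x;
# the model is refined continuously as the stream repeats.
w, b = 0.0, 0.0
for x, y in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)] * 200:
    w, b = train_step(w, b, x, y)
# w and b converge toward 2.0 and 0.0 respectively.
```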
DRIVE PX 2 delivers unprecedented processing power in the size of a tablet. It incorporates two next-generation Tegra® processors and two next-generation GPUs based on the Pascal™ architecture. It can perform up to 24 trillion deep learning operations per second while processing neural networks, roughly 10 times the computational power of the previous-generation product.
Manufacturers of autonomous driving vehicles from around the world have praised NVIDIA for its contribution.
Written by Francisco Javier García