Self-driving cars, or autonomous vehicles (AVs), represent a grand challenge for the artificial intelligence (AI) community. Many types of computation must be integrated in order to produce a coherent plan of action in real time. The AI has to answer three types of questions:
- What happened, and why did it happen?
- What will happen in the future?
- What should I do?
Let’s take a look at the different computational methods AI uses to answer those questions.
For self-driving cars, multiple neural networks run in parallel, interpreting the images from each camera into a coherent perception of the physical space each second. For example, Tesla’s electric car system brings in multiple data streams and inserts them into queues for neural network processing on specialized chips optimized for fast, accurate calculations.
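A minimal sketch of such a queue-based perception pipeline is shown below, using Python threads. The frame format and the `run_network` stub are illustrative assumptions, not Tesla’s actual stack; real systems run dedicated hardware per stream.

```python
import queue
import threading

def run_network(frame):
    # Hypothetical stand-in for a neural network that labels one camera frame.
    return {"camera": frame["camera"], "objects": ["car", "pedestrian"]}

def camera_worker(in_queue, results):
    # Each camera stream gets its own worker, mimicking parallel inference.
    while True:
        frame = in_queue.get()
        if frame is None:          # sentinel: no more frames
            break
        results.append(run_network(frame))

frames = queue.Queue()
perceptions = []
worker = threading.Thread(target=camera_worker, args=(frames, perceptions))
worker.start()

# Enqueue one frame per camera, then signal shutdown.
for cam in ("front", "left", "right"):
    frames.put({"camera": cam})
frames.put(None)
worker.join()

print(len(perceptions))  # 3 frames processed into perceptions
```

In a real vehicle the producer side would be the camera hardware and the consumer side a fleet of workers feeding specialized inference chips; the queue decouples the two so neither blocks the other.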
Real-time AV operation requires managing the various tasks that must happen in parallel in order to issue commands to the car, such as speeding up, braking gently, or swerving slightly right to avoid a pedestrian. Some tasks involve perception, such as detecting the other cars on the road or reading traffic lights. Others involve planning, such as selecting the route or determining the best speed.
The AV system must also prioritize certain actions, identifying those that are safety-critical versus those that are not. This prioritization has to occur continuously, as road conditions change. As Dr. Kevin Zhou explains, convolutional neural networks allow the system to model “both time and space correlations” from multiple sensors.
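One simple way to express this kind of continuous prioritization is a priority queue in which safety-critical tasks always run before routine ones. The priority levels and task names below are illustrative assumptions, not any production AV scheduler:

```python
import heapq

# Assumed scale: lower number = more safety-critical.
SAFETY_CRITICAL, ROUTINE = 0, 1

tasks = []
heapq.heappush(tasks, (ROUTINE, "recalculate best cruising speed"))
heapq.heappush(tasks, (SAFETY_CRITICAL, "brake for pedestrian in crosswalk"))
heapq.heappush(tasks, (ROUTINE, "update route estimate"))

# Pop tasks in priority order: the safety-critical one comes out first,
# even though it was not enqueued first.
order = [heapq.heappop(tasks)[1] for _ in range(len(tasks))]
print(order[0])  # prints "brake for pedestrian in crosswalk"
```

Because road conditions change continuously, a real scheduler would re-enqueue and re-rank tasks many times per second rather than draining the queue once.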
This is particularly valuable because AVs must predict what will happen next and must take input from multiple sensors to do so. For example, Bell is using a combination of deep learning to identify potential landing sites for helicopters and a separate AI method that guides the helicopter to the selected spot.
For prescriptive learning, the kind used to play chess, AI relies on reinforcement learning, which is similar in concept to the type of learning children engage in when they play catch or tic-tac-toe.
Deep reinforcement learning is a method for starting with an objective or goal and then using a system of rewards and penalties to “teach” the network how to reach that goal efficiently. For games, an AI system plays against itself many times to learn winning techniques. AlphaZero’s chess AI, for example, was produced by 44 million games of self-play over the span of nine hours. Crucially, deep reinforcement learning systems can develop strategies for making near-term moves that pay off some time in the future. In the case of self-driving cars, this could mean avoiding a traffic jam ahead. But developing the proper “winning” criteria for driving is far more complicated than teaching a network to win a chess match.
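As an illustration of the reward-and-penalty idea, here is a toy tabular Q-learning sketch, an assumption-laden simplification rather than AlphaZero’s actual method (which combines self-play tree search with deep neural networks). An agent in a five-state corridor learns that stepping toward a distant goal pays off several moves later:

```python
import random

# Toy corridor: states 0..4; reaching state 4 earns reward +1 (the "win").
N_STATES, GOAL = 5, 4
ACTIONS = (1, -1)                      # step right or step left
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

random.seed(0)
for _ in range(500):                   # many "games" against the environment
    s = 0
    while s != GOAL:
        if random.random() < epsilon:  # occasionally explore a random move
            a = random.choice(ACTIONS)
        else:                          # otherwise exploit the best known move
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == GOAL else 0.0   # reward arrives only at the goal
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

# The learned policy prefers the near-term move (step right) whose payoff
# only arrives several steps in the future.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)
```

The discount factor `gamma` is what lets a reward at the end of the corridor influence the very first move, which is the same mechanism that lets a game-playing system value a quiet positional move now for a win much later.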
While AI models can eventually learn, through descriptive analytics, how to navigate and steer on myriad types of roadways in many weather conditions, they also have to anticipate the actions of pedestrians and drivers using prescriptive analytics.
This is very different from learning how to play chess. Games have rules and a limited number of opponents. However, cars will move through crowded city streets full of schoolchildren, scooters, bikes, and people with difficulty walking. Some drivers may be impaired, texting, or acting in unexpected ways. In order to interact effectively in such a world, we need to implement tools for “AI alignment” with humans.
Human collaboration is a very hard problem, and work in this area is still in its infancy. Unlike games, we have no comprehensive simulations for making real-time decisions in traffic. New methods are needed, and deep reinforcement learning and human-augmented prescriptive analytics have shown promise in some AV tests.
Hands on the Wheel
Of course, any discussion of AI raises the important question of ethics. In the Georgetown University Master’s in Technology Management program, we show students how they can apply these exciting new technologies in a responsible way. We want to look beyond the hype to identify technologies that can be ethically applied in the real world.
Bottom line for self-driving cars: We will still have to keep our hands on the wheel for a while longer—in case that human driver runs the red light.