Tesla Autopilot Illustrates the Urgent Need for More Data Efficient Artificial Intelligence

In April 2017, Tesla CEO Elon Musk predicted that within two years drivers would be able to sleep in their Teslas while the cars drive themselves https://www.ted.com/talks/elon_musk_the_future_we_re_building_and_boring. One of the key catalysts behind the rapid progress in autonomous driving is the growing amount of collected driving data. With every new mile, Tesla gathers more data that can help it improve its cars, and especially its semi-autonomous, and eventually fully autonomous, driving feature, Autopilot. JB Straubel showed in class just how rapidly the total mileage of the Tesla fleet is increasing; in March 2017 the fleet reached the milestone of 4 billion electric miles driven https://electrek.co/2017/03/20/tesla-global-fleet-reaches-4-billion-electric-miles-ahead-model-3/.

Worldwide, roughly 1.3 million people still die in car crashes each year, which amounts to a fatal crash on average every 60 million miles https://www.un.org/press/en/2013/sgsm15005.doc.htm. Because humans make mistakes, autonomous driving systems such as Autopilot could make driving safer and save millions of lives https://www.un.org/press/en/2013/sgsm15005.doc.htm. However, the performance of the deep learning models behind autonomous driving systems such as Autopilot is still contingent on massive datasets, and collecting these datasets takes time. This is not just a problem for autonomous driving; it slows down progress in almost every field that could benefit from better artificial intelligence. For example, automated diagnosis tools struggle to segment brain tumors because only small amounts of patient data are available, and cyber security software rarely makes use of deep learning models since there is insufficient data on new types of breaches http://cs231n.stanford.edu/reports/2016/pdfs/322_Report.pdf, https://arxiv.org/abs/1511.07528.

In other words, there is an urgent need for models that can learn from only a few training samples. Like humans, such models should learn causal relations, so that they can make inferences about unobserved cases. One of the main differences between human and artificial intelligence is that humans are able to generalize from just a few observed examples. By one estimate, a child has learned most of the roughly 30,000 common object categories by age six http://people.csail.mit.edu/torralba/courses/6.870/papers/Biederman_RBC_1987.pdf. In contrast to machines, humans also develop a form of common sense: they can reach conclusions by combining different types of available knowledge http://people.csail.mit.edu/torralba/courses/6.870/papers/Biederman_RBC_1987.pdf.
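
To make the idea of learning from only a few labeled examples more concrete, below is a minimal, purely illustrative sketch of one popular direction, prototype-based few-shot classification: each class is represented by the average embedding of its few labeled examples, and a new sample is assigned to the class with the nearest prototype. The embed function and the toy labels are placeholders chosen for illustration only; this is not Tesla's method or anything from the cited papers, and a real system would use a pretrained feature extractor.

    # Minimal sketch of prototype-based few-shot classification.
    # Purely illustrative: `embed` stands in for a pretrained feature extractor.
    import numpy as np

    def embed(x):
        # Placeholder embedding: in practice this would be a pretrained network
        # mapping raw inputs (camera frames, sensor readings, ...) to feature vectors.
        return np.asarray(x, dtype=float)

    def build_prototypes(support_x, support_y):
        # Average the embeddings of the few labeled examples of each class.
        prototypes = {}
        for label in set(support_y):
            examples = [embed(x) for x, y in zip(support_x, support_y) if y == label]
            prototypes[label] = np.mean(examples, axis=0)
        return prototypes

    def classify(query_x, prototypes):
        # Assign the query to the class whose prototype is nearest in feature space.
        query = embed(query_x)
        return min(prototypes, key=lambda label: np.linalg.norm(query - prototypes[label]))

    # Toy usage: two classes with only three labeled examples each ("few-shot").
    support_x = [[0.9, 1.1], [1.0, 0.9], [1.1, 1.0], [-1.0, -0.9], [-0.9, -1.1], [-1.1, -1.0]]
    support_y = ["pedestrian", "pedestrian", "pedestrian", "cyclist", "cyclist", "cyclist"]
    prototypes = build_prototypes(support_x, support_y)
    print(classify([0.8, 1.2], prototypes))  # -> pedestrian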

Consider, for example, an image discussed on the blog of Andrej Karpathy, the head of Artificial Intelligence at Tesla: Obama playfully pressing his foot down on a scale while a man in a suit weighs himself, with people laughing in the background http://karpathy.github.io/2012/10/22/state-of-computer-vision/. As Karpathy describes, even though we have never seen this exact situation before, we immediately understand why the image is funny by relating it to knowledge we already have http://karpathy.github.io/2012/10/22/state-of-computer-vision/. Current deep learning models fail to generalize in this way and to understand the relationships between the president of the United States, a man in a suit measuring his weight, how physics works, Obama's foot pushing down on the scale, the laughing people in the back, and the fact that the man is unaware of what Obama is doing http://web.mit.edu/cocosci/Papers/tkgg-science11-reprint.pdf, http://karpathy.github.io/2012/10/22/state-of-computer-vision/. Humans, on the other hand, can quickly figure out what is happening in this picture; unlike deep learning models, they do not need to train on massive datasets to do so. Hopefully, better artificial intelligence models will soon be developed that can reason based on prior knowledge, so that they do not need to be trained on massive datasets for each specific problem. There is a great need for such data-efficient deep learning models in a variety of sectors, including healthcare, autonomous driving, and cyber security, and in some cases, such as Tesla's Autopilot and autonomous driving more broadly, these models could save millions of lives.

Biederman, Irving. “Recognition-by-components: a theory of human image understanding.” Psychological review 94.2 (1987): 115. http://people.csail.mit.edu/torralba/courses/6.870/papers/Biederman_RBC_1987.pdf

Elamri, Christopher, and Teun de Planque. “A New Algorithm for Fully Automatic Brain Tumor Segmentation with 3-D Convolutional Neural Networks.” http://cs231n.stanford.edu/reports/2016/pdfs/322_Report.pdf

Tenenbaum, Joshua B., et al. “How to Grow a Mind: Statistics, Structure, and Abstraction.” Science 331.6022 (2011): 1279-1285. http://web.mit.edu/cocosci/Papers/tkgg-science11-reprint.pdf

Karpathy, Andrej. “The state of Computer Vision and AI: we are really, really far away.” N.p., 22 Oct. 2012. Web. 18 July 2017. http://karpathy.github.io/2012/10/22/state-of-computer-vision/

Lambert, Fred. “Tesla’s global fleet reaches 4 billion electric miles driven ahead of Model 3 launch.” Electrek. Electrek, 20 Mar. 2017. Web. 18 July 2017. https://electrek.co/2017/03/20/tesla-global-fleet-reaches-4-billion-electric-miles-ahead-model-3/

Musk, Elon. “The Future We’re Building.” TED. April 2017. Lecture. https://www.ted.com/talks/elon_musk_the_future_we_re_building_and_boring

Papernot, Nicolas, et al. “The limitations of deep learning in adversarial settings.” Security and Privacy (EuroS&P), 2016 IEEE European Symposium on. IEEE, 2016. https://arxiv.org/abs/1511.07528

“Traffic Accidents Kill 1.3 Million People Each Year, but with Commitment Roads Can Be Made Safer for All, Secretary-General Says in Video Message | Meetings Coverage and Press Releases.” United Nations. United Nations, 6 May 2013. Web. 18 July 2017. https://www.un.org/press/en/2013/sgsm15005.doc.htm


5 comments on “Tesla Autopilot Illustrates the Urgent Need for More Data Efficient Artificial Intelligence”

  1. Hi Teun,

    Great post; I like the references you used to build your point.

    It's still not legal for autonomous vehicles to drive in all US states (http://www.ncsl.org/research/transportation/autonomous-vehicles-self-driving-vehicles-enacted-legislation.aspx), and while regulatory changes are being made, they might not come fast enough. Do you think the government will be the main obstacle keeping this kind of innovation from mass adoption, or will internal factors (the AI software, etc.) or other external factors create the bigger challenges?

    Thanks!

    1. Yes, I expect that the government will be a major obstacle that slows down the release of autonomous cars. The government tends to be slow to catch up with the latest technology trends; I wish there were more people with technology backgrounds in government. You might find this paper on the topic interesting: https://arxiv.org/pdf/1510.03346.pdf

  2. Hi Teun,

    Great post illustrating the problems with the artificial intelligence behind Autopilot.

    I've learned that a significant problem with Autopilot is that it is not capable of detecting the intentions of other vehicles and pedestrians. As a driver, you can probably tell that the car in front of you is about to change lanes from a slight change in its speed or direction. It is also easy to see that a pedestrian on the side of the road wants to cross, simply by observing that he or she is watching the traffic and the light; and a pedestrian who sees a thumbs-up from the driver knows that the driver is letting him or her cross first. AI is not yet able to do this, and so far it has proven far too hard to make AI match the ability of the human brain. In my opinion, a more feasible approach is to establish a link between AI vehicles and, if possible, pedestrians. Communication between vehicles and pedestrians would let the AI understand the intentions of other road users. For example, a car that is changing lanes could send a message to nearby vehicles (a rough sketch of such a message follows below). Pedestrians are unlikely to be able to actively contact an Autopilot vehicle, but the reverse is absolutely possible: a car could tell a pedestrian to cross the street by shining a green light or showing a symbol on an LED panel at the front of the car.

    There is still a long way to go in making AI as capable as humans, but we should also try out other approaches to maximize its capability.
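
    To make the vehicle-to-vehicle idea a bit more concrete, here is a rough, hypothetical sketch of what a broadcast "lane change intent" message could look like. The field names are purely illustrative and not taken from any real V2V standard (production systems would use something like the SAE J2735 / DSRC message set).

        # Hypothetical "lane change intent" broadcast between vehicles.
        # Field names are illustrative only, not a real V2V message format.
        import json
        import time
        from dataclasses import dataclass, asdict

        @dataclass
        class LaneChangeIntent:
            vehicle_id: str      # anonymized identifier of the sending car
            timestamp: float     # seconds since the epoch
            current_lane: int
            target_lane: int
            speed_mps: float     # current speed in meters per second
            eta_seconds: float   # when the maneuver is expected to begin

            def to_broadcast(self) -> str:
                # Serialize to JSON so nearby vehicles can parse the announcement.
                return json.dumps(asdict(self))

        # A car announcing that it will merge one lane to the left in two seconds.
        message = LaneChangeIntent("veh-4821", time.time(), current_lane=2,
                                   target_lane=1, speed_mps=27.0, eta_seconds=2.0)
        print(message.to_broadcast())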

  3. Class 238A

    Hi Teun

    Great article and good references! We are still at the very beginning of AI and ML. If we expect machines to learn the human way of doing things, we need to look at the basic building blocks: machines are built from materials, while humans are built from chemistry. As such, there is still a lot of room for innovation before we get those perfect thinking machines. The car companies are also still in the process of learning, and since AI & ML can become a potential threat, especially in the connected space, governments are buying more time before they clear the regulatory environment for Level 3 & 4 autonomy.

    Imagine the dangers of teenage thinking in humans; knowledge and learning evolve with time. Even Tesla has only reached the stage of connected cars and data collection, and it will take a much larger library of data before we see perfectly safe autonomous self-driving cars on the road. But yes, we are definitely on the cusp of a big revolution in the automobile space, which could be very disruptive; the driving and car-ownership experience will completely change in the next five years.

  4. Great post. I think there is a definite need to improve the underlying AI. Last year, after a fatal Tesla crash, Tesla defended its position by quoting safety figures for its Autopilot mode: it was the first fatality in 130 million miles of Autopilot use, compared to one fatality every 94 million miles for regular cars. That comparison, however, does not account for the enormous difference in scale between Autopilot miles and the miles driven by traditional cars. In some ways that weakened my confidence in the car's AI, even though the raw mileage figures would seem to imply a higher level of confidence.
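
    To put a rough number on why a single event says so little: assuming fatalities follow a Poisson process and using only the figures quoted above, one fatality in 130 million miles is statistically consistent with a very wide range of underlying rates. A back-of-the-envelope sketch (illustrative only):

        # Back-of-the-envelope: how much does one fatality in 130 million Autopilot
        # miles constrain the underlying fatality rate? Assumes a Poisson process;
        # the figures are the ones quoted in this comment, used purely for illustration.
        from scipy.stats import chi2

        observed_fatalities = 1
        autopilot_miles = 130e6
        human_miles_per_fatality = 94e6  # quoted figure for regular cars

        # Exact (Garwood) 95% confidence interval for a Poisson count.
        lower_count = chi2.ppf(0.025, 2 * observed_fatalities) / 2
        upper_count = chi2.ppf(0.975, 2 * (observed_fatalities + 1)) / 2

        print(f"Autopilot, 95% CI: one fatality per "
              f"{autopilot_miles / upper_count:,.0f} to {autopilot_miles / lower_count:,.0f} miles")
        print(f"Regular cars (quoted): one fatality per {human_miles_per_fatality:,.0f} miles")
        # The interval spans roughly 23 million to 5 billion miles per fatality, so this
        # single observation cannot tell us whether Autopilot is safer than human drivers.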

