Tesla: Driving Into the AI Future?

This post examines the impact of machine learning and public perception on Tesla's autonomous vehicle product development process.

Last summer, I sped down Chicago’s crowded I-90 highway behind the wheel of a Tesla. A car swerved in front of me and slowed down, and without any effort on my part, the Tesla braked on its own. I was experiencing Tesla’s Autopilot, a driver-assistance feature that allows Tesla vehicles to function semi-autonomously. Though machine learning hastens product development at Tesla, the process seems equally driven by the company’s desire to manage public perception.

Tesla’s cars embody machine learning at a fundamental level: they are both vehicles for data collection and users of a machine-learning-driven operating system that allows them to function semi-autonomously. Andrej Karpathy, Tesla’s Director of Artificial Intelligence, describes the company’s recent shift toward more robust machine learning, which he dubs “Software 2.0.”[1] Rather than functioning on a highly engineered, hand-built architecture, Software 2.0 takes data collected by Tesla’s expansive fleet of vehicles and “search[es] for a program that satisfies, simultaneously, the labels for all the images in its training data set.”[2,3] Because driving is highly unpredictable, this approach proves superior to a simple “series of prewritten computer rules,” since it accounts for what would otherwise be exceptions to the rule (see the sketch below for a toy example).[4,5] In this way, machine learning drives product development at Tesla: as the fleet of sensor-equipped cars exporting data to the cloud proliferates, the machine-learning models grow more accurate, and Tesla dispatches the improved models to all existing cars via over-the-air software updates, so that cars can more accurately perceive and react to their surroundings.[6]
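To make the “Software 1.0 vs. 2.0” contrast concrete, here is a minimal, self-contained Python sketch. It is a toy illustration only, not Tesla’s code: the features (gap to the lead car, closing speed), the thresholds, and the labels are all invented, and a tiny logistic regression stands in for the large neural networks Karpathy describes.

```python
# Toy contrast between a hand-written rule ("Software 1.0") and a program
# learned from labeled data ("Software 2.0"). All features, thresholds, and
# labels here are invented for illustration; this is not Tesla's code.
import numpy as np

# --- Software 1.0: a prewritten rule ---------------------------------------
def should_brake_rule(gap_m: float, closing_mps: float) -> bool:
    """Brake if the lead car is close and we are closing on it quickly.
    Edge cases (e.g., a car cutting in and slowing, as in the opening
    anecdote) each demand another hand-tuned special case."""
    return gap_m < 20.0 and closing_mps > 2.0

# --- Software 2.0: search for a program that fits labeled data -------------
rng = np.random.default_rng(0)
X = rng.uniform(low=[0.0, -5.0], high=[100.0, 10.0], size=(1000, 2))
# Hypothetical labels a human reviewer might assign to fleet video clips:
y = ((X[:, 0] < 25.0) & (X[:, 1] > 1.5)) | (X[:, 0] < 8.0)

mu, sigma = X.mean(axis=0), X.std(axis=0)
Xn = (X - mu) / sigma                      # normalize for stable training
w, b = np.zeros(2), 0.0
for _ in range(2000):                      # logistic regression, plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(Xn @ w + b)))
    w -= 0.5 * Xn.T @ (p - y) / len(y)
    b -= 0.5 * (p - y).mean()

def should_brake_learned(gap_m: float, closing_mps: float) -> bool:
    """The 'program' found by training: its behavior comes from the labeled
    data, not from rules anyone wrote by hand."""
    z = ((np.array([gap_m, closing_mps]) - mu) / sigma) @ w + b
    return bool(1.0 / (1.0 + np.exp(-z)) > 0.5)

if __name__ == "__main__":
    # The cut-in scenario: a car 15 m ahead, closing at 3 m/s.
    print(should_brake_rule(15.0, 3.0), should_brake_learned(15.0, 3.0))
```

The point of the sketch is Karpathy’s: the learned function’s behavior is specified by the dataset, so improving the product means enlarging and relabeling the data rather than rewriting rules by hand.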

Public perception of safety also generates highly publicized product updates. For example, in March 2018, a driver who was not paying attention while using Tesla’s Autopilot crashed and died, an incident that wiped roughly $5 billion off Tesla’s market value.[7] After this fatality, Tesla responded by releasing enhanced features, such as Navigate on Autopilot and Autosteer, which aim to make the navigation system safer and its behavior more visible to the driver.[8] Machine learning, therefore, can only account for so much of product development, a process that is ultimately driven by subjective human input and a fiduciary responsibility to shareholders.

The 2018 fatality demonstrates a challenge that Tesla faces: where human variability interferes with artificial intelligence, costly accidents can happen. To address this in the short term, Elon Musk is approaching the problem through education: he wants consumers to know that Autopilot is much safer than human driving. An official company release stated, “if you are driving a Tesla equipped with Autopilot hardware, you are 3.7 times less likely to be involved in a fatal accident.”[9] The company also has a technical solution for the medium to long term: add more data and let the models learn. Karpathy explains that the Software 2.0 stack improves as it trains on additional data, so as the fleet grows, more data feeds the models and the product gets better, creating a virtuous cycle of more cars and a better product (sketched schematically below): “And if this can drive autonomously, then we will sell more of them, and if we sell more of them, then the future is going to be good.”[10]
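That loop can be sketched schematically. The following is a hypothetical simulation of the flywheel, not any real Tesla pipeline or API; the class, method names, and growth numbers are all invented for illustration.

```python
# A schematic sketch of the fleet "data flywheel": cars upload hard cases,
# the model retrains on the larger dataset, and an over-the-air update ships
# the result back to every car. All names and numbers here are invented.
from dataclasses import dataclass

@dataclass
class Fleet:
    cars: int                 # vehicles on the road collecting data
    clips: int = 0            # labeled training clips gathered so far
    model_version: int = 0    # model version deployed over the air

    def collect(self, clips_per_car: int = 10) -> None:
        """Each car flags and uploads a few hard driving clips (e.g., cut-ins)."""
        self.clips += self.cars * clips_per_car

    def retrain_and_deploy(self) -> None:
        """Retrain on the grown dataset, then push new weights to the fleet."""
        self.model_version += 1
        print(f"v{self.model_version}: trained on {self.clips:,} clips, "
              f"deployed to {self.cars:,} cars")

fleet = Fleet(cars=650_000)              # the 2018 fleet estimate cited below
for _ in range(3):                       # each turn of the virtuous cycle
    fleet.collect()
    fleet.retrain_and_deploy()
    fleet.cars = int(fleet.cars * 1.10)  # assumption: a better product sells more cars
```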

Tesla rightly focuses on the potential of its technology and machine learning when it comes to the future of product development within the company. However, Tesla seemingly downplays its technology’s role in accountability for these deaths. As is common in AI, “in systems where machines and humans interact, users often become the ‘moral crumple zone’ for technological failure.”[11] That said, I’d recommend that Tesla focus its attention on the ethical issues in AI so that it can control the future narrative around them. In doing so, Tesla can acknowledge that its cars must, at times, engage in life-or-death decision-making and address that issue head-on, which would increase consumers’ trust in the company and, in turn, hasten the cycle of data collection and product improvement. A paper on ethics in AI states: “The AI research community realizes that machine ethics is a determining factor to the extent autonomous systems are permitted to interact with humans.”[12] By controlling the narrative, Tesla can increase its market share.

Tesla uses its growing fleet, estimated to reach 650,000 cars by the end of 2018, as both a data-collection tool for machine learning and a user of its output.[13] While machine learning spurs constant, incremental product development, reactions to public perception also move the product forward. But in a world where AI applications are proliferating, will humans ever be comfortable ceding decision-making power to machines in matters of life and death? If not, will Tesla’s autonomous vehicles ever succeed?

(797 words)

Footnotes:

[1] Andrej Karpathy, Director of AI, Tesla, remarks made at Train AI Conference, San Francisco, May 9, 2018. From video provided by Figure Eight on Vimeo, https://vimeo.com/272696002, accessed November 2018.

[2] Ibid.

[3] Elon Musk, CEO, Tesla, transcript of investor call, September 2016. From transcript provided by Electrek, https://electrek.co/2016/09/11/transcript-elon-musks-press-conference-tesla-autopilot-under-v8-0-update-part-2/, accessed November 2018.

[4] Harry Surden and Mary-Anne Williams, “Technological Opacity, Predictability, and Self-Driving Cars,” Colorado Law Scholarly Commons (2016): 147.

[5] Karpathy, remarks made at Train AI Conference.

[6] Bernard Marr, “The Amazing Ways Tesla Is Using Artificial Intelligence and Big Data,” Forbes, January 8, 2018, https://www.forbes.com/sites/bernardmarr/2018/01/08/the-amazing-ways-tesla-is-using-artificial-intelligence-and-big-data/#ac07aaf42704, accessed November 2018.

[7] Dana Hull and Tim Smith, “Tesla Driver Died Using Autopilot, With Hands Off Steering Wheel,” Bloomberg, March 30, 2018, https://www.bloomberg.com/news/articles/2018-03-31/tesla-says-driver-s-hands-weren-t-on-wheel-at-time-of-accident, accessed November 2018.

[8] Tesla, “Discover Software Version 9.0,” https://www.tesla.com/support/software-v9, accessed November 2018.

[9] Tesla Team, “An Update on Last Week’s Accident,” Tesla Blog, March 30, 2018, https://www.tesla.com/blog/update-last-week%E2%80%99s-accident, accessed November 2018.

[10] Karpathy, remarks made at Train AI Conference.

[11] Jack Stilgoe, “Machine Learning, Social Learning and the Governance of Self-Driving Cars,” Social Studies of Science (2017): 5.

[12] Han Yu, Zhiqi Shen, Chunyan Miao, Cyril Leung, Victor R. Lesser, Qiang Yang, “Building Ethics into Artificial Intelligence,” LILY Research Center (2018): 1.

[13] Marr, “The Amazing Ways Tesla Is Using Artificial Intelligence and Big Data.”


Student comments on Tesla: Driving Into the AI Future?

  1. Self-driving cars will never eliminate all fatalities but will reduce the number by orders of magnitude. There will come a time when humans will be comfortable ceding decision-making power to the machine but we aren’t there yet.

  2. The decision-making power of autonomous vehicles opens up a whole new world of ethical considerations. Consider for example a case where a vehicle must decide between hitting and killing a person crossing the road or instead swerving and killing the person behind the wheel. This is a difficult decision and one that actually requires comparing human lives. Imagine that the person crossing the road is in fact a child and the person behind the wheel is a grown adult. Should the car in fact be programmed to avoid taking children’s lives at the cost of adult lives? And if this is the case, who is to say unethical organizations can’t take this one step further and begin to decide on types of people that should be saved over others (e.g., celebrities)?

  3. I really enjoyed learning about how Tesla is incorporating machine learning into its autonomous vehicles. I think you very cleverly laid out how all of Tesla’s fleet (autonomous and normal) can be thought of as both a source for data collection and a testing ground for machine learning. My perspective on your question is that humans will become more and more comfortable with self-driving cars, particularly because the first versions of most autonomous vehicles allow humans to override the car as needed. Additionally, although autonomous vehicle deaths are highly publicized, they are still much less frequent than deaths caused by human drivers, which I think is often left out of the conversation. As a last point, I’m curious how you see Tesla fitting into the conversation around ride sharing, particularly as more people move to urban areas and personal vehicles become less necessary.

  4. I’m all for autonomous vehicles, but I don’t think we’ll be around when the world is “fully autonomous,” for many of the safety reasons you’ve touched on in your note. For the foreseeable future, humans don’t seem comfortable ceding 100% control of their cars, and ultimately their lives, to software programs because there just isn’t enough data to support the success of these programs. Autonomous driving will take time because auto manufacturers need to amass a huge amount of data in order to create programs that can deal with any sort of driving situation. I commend Tesla for helping to lead the charge, but I just don’t see anything coming to fruition any time soon.
