Tuesday, July 5, 2016

Self Driving Cars - It's That Safety Mindset Again

There's been some attention to the safety aspects of technology in the last week, as reports emerged of the death of a driver who was using Tesla's 'Autopilot' feature in his Model S. The company reported this in a blog post, and made sure to highlight that Autopilot was still in beta and that this was the first known fatality in 130 million miles of driving, compared to an expected death toll of 1 per 96 million miles. They are at pains to point out that a customer's use of Autopilot "requires explicit acknowledgement that the system is new technology and still in a public beta phase before it can be enabled", and that neither the system nor the driver noticed the truck - so clearly it's not their fault, time to move on.

We've been hearing about self-driving cars for some years - Google have been at the forefront of this, with all the other car manufacturers trying to play catch-up, Tesla included. The promise of infallible computers taking control and driving us, eliminating human error, drunk driving, falling asleep behind the wheel, and other causes of injury and loss of life, is an enticing one. It's something we've seen in science fiction films for decades, and at some point in the future it will be a reality - but no company is yet selling a car that is fully autonomous. In fact, on the NHTSA scale of 0 to 4 (3 and 4 being what most people would consider autonomous), Tesla's system ranks at Level 2:

Combined Function Automation (Level 2): This level involves automation of at least two primary control functions designed to work in unison to relieve the driver of control of those functions. An example of combined functions enabling a Level 2 system is adaptive cruise control in combination with lane centering.

Google's self-driving car is listed as possibly a '3'. Google itself notes that:

There are test drivers aboard all vehicles for now. We look forward to learning how the community perceives and interacts with us, and uncovering situations that are unique to a fully self-driving vehicle.

So even though the NHTSA rates Google's car as more autonomous than Tesla's, Google do not allow the public to drive it; they make sure there is a test driver present at all times (presumably paying full attention, as part of their job) and are trying to encounter all the unique situations the car is likely to find itself in. The LA Times has an article going into more detail on this here, including why calling a feature 'Autopilot' may lull consumers into a false sense of security. I would place a bet that the engineering teams had a fit when marketing decided to use that term, but were not listened to.

So what's wrong with Tesla putting in these new advanced features? Absolutely nothing - in fact it's a great thing that they are looking to improve the technology in their cars. However, I feel they have made some major mistakes in the introduction and marketing of these features that fail to take consumer safety into account. I've blogged before about how the "disruptive" way of getting new products into the hands of consumers - having them 'beta test' for you - has moved from apps and games, where it's non-critical and so a reasonable approach, to healthcare and other critical infrastructure - such as Theranos and their blood testing - where it's not. There's a generation of tech business leaders and investors who have never dealt with safety-critical products, and they fail to approach them in the manner that medical and other safety-critical industries traditionally have - industries where the suggestion of 'beta testing' a feature with an impact on safety would be met with horror, and where, if a user died, there would be no hiding behind statements like "well, we did put a warning somewhere in the manual".

We accept that nothing is 100% safe: even walking down the stairs can result in a fatal fall, and eating your dinner can result in choking to death, but the chance is so low that we don't even think about it and act as if it's completely safe. In the USA in 2010, prior to any autonomous vehicles, there were 33,000 deaths and 1.5 million injuries from 5.5 million accidents, with around 1,500 car trips per person per year giving close to 500 billion journeys covering some 3 trillion miles. That means 99.9997% of all car journeys in 2010 resulted in no injury at all, and 99.99999% didn't result in a death - yet that's still 33,000 people taken from their families, and an estimated economic cost of $231 billion per year, even though we consider driving 'safe'. Autonomous vehicles could reduce that toll, but how to develop and introduce this capability is still in question.
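Just to sanity-check those percentages, here's a quick back-of-the-envelope calculation using the (themselves approximate) figures above:

```python
# Back-of-the-envelope check on the 2010 US figures quoted above.
journeys = 500e9       # ~1,500 trips/person/year across the population
injuries = 1.5e6
deaths = 33_000

injury_free = 1 - injuries / journeys
death_free = 1 - deaths / journeys

print(f"Journeys with no injury: {injury_free:.6%}")   # ~99.9997%
print(f"Journeys with no death:  {death_free:.8%}")    # ~99.99999%
```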

How safe should something be before it is released for use? There is no set number to work from; typically you start by considering the necessity of the product and who will be using it. For example, a new drug goes through many clinical trials and FDA approval, and still remains under the control of a qualified doctor before being prescribed to a patient, but a new wireless router undergoes basic FCC and UL testing before being released to consumers. If the feature being introduced can't cause harm, then a risk of failure is reasonable, but for a safety-critical feature it must exceed the performance of the existing system, and you must be able to prove that.

Tesla state that 130 million miles have been driven on Autopilot, and this is the first death compared to an expected one per 96 million miles - proof it is more than safe, it has already saved a life! Except it isn't proven. 130 million miles may seem like a lot, but compared to 3 trillion miles driven a year it's nothing - merely 0.004% of the yearly total. You would have to drive much more than that to get anything statistically significant, and that's what matters here: statistics. You need to prove that it's safe - that more than 99.9997% of journeys result in no injury, at a minimum - by gathering data in sufficient quantity that all possible conditions are covered, and by showing your system is better than the existing one before replacing it.
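To put a rough number on how little a single data point tells us - this is my own back-of-the-envelope sketch, not anything Tesla or the NHTSA has published - here is an exact Poisson confidence interval on one fatality in 130 million miles, plus the 'rule of three' estimate of how many fatality-free miles you would need just to match the human baseline:

```python
# A rough sketch (not an official analysis) of why one fatality in 130
# million miles proves very little, using an exact Poisson confidence
# interval (the chi-square / Garwood method).
from scipy.stats import chi2

observed_fatalities = 1
autopilot_miles = 130e6
baseline_miles = 96e6    # the "1 fatality per 96 million miles" figure above

# Exact 95% CI for the expected number of fatalities in 130M miles,
# given that exactly one was observed.
lower = chi2.ppf(0.025, 2 * observed_fatalities) / 2
upper = chi2.ppf(0.975, 2 * observed_fatalities + 2) / 2

print(f"95% CI on fatality rate: one per "
      f"{autopolot := autopilot_miles / upper / 1e6:.0f}M to "
      f"{autopilot_miles / lower / 1e6:.0f}M miles")
# -> roughly one per ~23M to ~5,100M miles: the human baseline of one per
#    96M miles sits comfortably inside that interval, so the data are
#    consistent with Autopilot being either safer or more dangerous.

# "Rule of three": even with ZERO fatalities, you need roughly
# 3 x 96M ~ 290 million miles before you can claim (at 95% confidence)
# that the rate is merely no worse than the human baseline.
print(f"Miles needed with zero deaths: ~{3 * baseline_miles / 1e6:.0f}M")
```

In other words, by my reckoning the data gathered so far are consistent with Autopilot being anywhere from several times safer to several times more dangerous than an average human driver - which is precisely why the "it already saved a life" framing doesn't hold up.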

That is an enormous task, requiring many times the data that Tesla has gathered so far, and it highlights the difference between Google and Tesla: one is taking their time, gathering information and data before taking steps forward; the other is working from the belief that their product is safer than before (which it may well be), but does not have the data to prove it. The data they need is currently being gathered, though in a somewhat uncontrolled manner, as Tesla owners drive their cars and use Autopilot. Imagine the following scenario:

A pharmaceutical company creates a new drug that they really believe reduces heart disease, after lab tests on 5 people. They immediately sell it over the counter to millions of people, with instructions in the packet about not taking it with antacids and a note that it's in 'beta', and monitor everyone taking it. After a few years they have the data and show everyone they were right - it saved more lives than were lost to incorrect use or side effects.

Was the hypothetical drug company vindicated and right to do that, or reckless in their approach and lucky it worked out well? Under our current laws, they'd be shut down and their executives in jail for doing that. We need to roll back this recent thinking of safety as something that is suitable for a beta test, and make sure that the critical things we rely on in our lives are proven safe and effective before they reach consumers.

6 comments:

  1. I'm not even sure this has validated the safety of the Tesla "Autopilot". It's certainly not enough miles driven to be statistically significant, and it's not comparing apples to apples. The 1 fatality in 96 million miles figure includes motorcycle fatalities, which are far more likely than automobile fatalities, as well as much older cars still on the road. Additionally, we don't know the breakdown of fatalities per mile driven of highway driving (where the autopilot operates) vs. regular road driving (which is too complicated for the autopilot).

  2. This comment has been removed by a blog administrator.

  3. This comment has been removed by a blog administrator.

  4. This comment has been removed by a blog administrator.

  5. I personally would never allow my car to have full control over what happens when we are on the road. In a controlled environment where there are rules and regulations put into place for the conduct of such vehicles, then maybe, but still unlikely!

  6. This comment has been removed by a blog administrator.
