Archive | July, 2016

“Accountability” and “Countability” – Misdirection in the “Autopilot” Safety Debate

7 Jul

To be clear, Tesla and Elon Musk were referring to “autopilot assisted”, and not “fully autonomous” driving when claiming that only one Tesla fatality has occurred in 130 million autopilot assisted miles driven. The same is true of Musk’s later (quite bizarre and scientifically unproven) statement that half a million lives would have been saved around the world had everyone been driving said Teslas. For this he presumably considered the 1 million worldwide auto fatalities per year, occurring at a rate of one for every 60 million miles driven.

Per Musk’s insistence, I thought I had better do the math for myself. Right off the bat, there is a major issue with his claim. Musk had at his disposal a Tesla sample size (everything to date) capable of producing just one death, and this was then compared to a yearly sample size large enough to have resulted in one million deaths (in the case of the worldwide figure). After allowing for Tesla’s relatively higher number of miles per death (130 million divided by 60 million, or about 2.16), and then dividing the total worldwide deaths by this amount (1 million/2.16), you end up with the fact that the worldwide totals arose from a sample size roughly 460,000 times larger than Tesla’s! You can think of it this way: if Tesla had had just one more fatal accident in which two passengers were killed, they would be down to one fatality per 43.3 million miles driven. Would Musk then be proclaiming that if everyone around the world had been driving autopilot-assisted Teslas there would have been an additional 384,000 people killed?
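For anyone who wants to reproduce the arithmetic, here is a quick sketch in Python using only the figures quoted above (all of them claims from the statements in question, none independently verified):

```python
# Sanity-checking the numbers quoted above. All inputs are the figures
# claimed in the article, not independently verified.

tesla_miles = 130e6           # autopilot-assisted miles driven to date
tesla_deaths = 1              # fatalities in that sample
world_deaths = 1e6            # worldwide auto fatalities per year (approx.)
miles_per_death_world = 60e6  # roughly one death per 60 million miles

world_miles = world_deaths * miles_per_death_world  # ~60 trillion miles/year

# The worldwide sample is vastly larger than Tesla's:
ratio = world_miles / tesla_miles
print(f"sample size ratio: ~{ratio:,.0f}")  # ~461,538

# Musk's "lives saved" figure, taken at face value:
deaths_at_tesla_rate = world_miles / (tesla_miles / tesla_deaths)
print(f"lives 'saved': ~{world_deaths - deaths_at_tesla_rate:,.0f}")  # ~538,462

# Fragility: one more crash killing two passengers flips the conclusion.
new_miles_per_death = tesla_miles / (tesla_deaths + 2)  # ~43.3M miles/death
extra_deaths = world_miles / new_miles_per_death - world_deaths
print(f"implied ADDITIONAL deaths: ~{extra_deaths:,.0f}")  # ~384,615
```

Note how fragile a conclusion drawn from a single-fatality sample is: adding just two deaths to Tesla’s sample swings the worldwide projection by roughly 900,000 lives, from half a million “saved” to nearly 400,000 “lost”.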

And, what exactly were the “controls” in place for this comparison to be valid? Are Teslas equally distributed among the countries included in the worldwide figure? Is Tesla’s autopilot currently capable of handling all of the locally specific traffic laws and infrastructure as they currently exist around the world? In the U.S., we have 50 separate states doing their own thing when it comes to passing legislation. What about these further distinctions around the world?

STOP !!! ……. (Note to Self) ……. Enough already with the careful and literal translation of Tesla and Musk’s statements concerning the safety level of their system! I need to now continue writing this under the assumption that what most people actually heard was “self-driving” car, not “autopilot assisted”. Why do I feel this way? Because I am a pretty smart guy with a background in quality assurance and a demonstrated interest in road safety – and even I made this mistake before I carefully reread the claims.

Most troubling is the ease with which these claims were made, and the lack of any proactive clarification on the part of Tesla. This is likely symptomatic of a greater problem, as put forth earlier in my essay “Testing ‘Self-Driving’ Cars – The Buck Stops Where?!”, also located here on this blog.

To a most useful end, I would now like to explain how far off, and how scientifically invalid, these safety claims would be if a person were to (incorrectly) apply them to the idea of “hands off”, “fully autonomous” operation.

To misapply the data would be to neglect all of those instances in which Tesla drivers quickly corrected a potentially fatal mistake made by autopilot. Some examples can be found on YouTube, and I suspect there have been more than have been reported publicly. Examples include the car suddenly attempting to exit the highway at the last second, or the car continuing to follow the car in front instead of staying in its lane. If just a few of these (otherwise fatal) events have occurred (to date), it would represent a massively lower “fully autonomous” safety level. What is so disturbing is that Tesla is not openly disclosing this type of data. Not collecting it would be even worse. Anyone out there know the answer? If this data has been neglected, it would fit expectations, considering that the Federal D.O.T. announced they would be relying on the self-reporting of auto manufacturers when verifying “driverless-car” safety. (Again – see my earlier essay).

In fact, even if the above (calculated) considerations were to be added in at this point, the adjusted safety level would still be massively underestimated. Currently, Tesla’s autopilot is only recommended for less complicated scenarios such as highway driving. Not included are the greater complexities encountered with city driving, pedestrian traffic, construction sites, police-directed situations, emergency maneuvers, and more. A person need only imagine the potential for these to change the overall safety rating.

So, as your average idiot can now see, there is absolutely no way to obtain an accurate guesstimate to the question “How safe (more specifically, ‘deadly’) does the ‘hands off’ operation of Tesla’s autopilot appear to be at this point in time?” when using only the data at hand. If Tesla in fact does have the answers to this more fully considered question, they should be willing to discuss them with any reporter who inquires. If they don’t, then we should all be pointing our fingers at the Federal D.O.T., asking “Just exactly what is going on with the testing of this ‘Self-Driving’ stuff?!” (Again – read my earlier essay) ……… So there you go ……. Reporters, get to work now!


Nutty Questions Concerning “Self-Driving” Cars

6 Jul

And now a few questions surrounding the nuttiness of the current (non) discussions that are (not) widely occurring yet regarding this “self-driving” car stuff.

One of these (non) discussions is failing to occur when experts suggest that one day we won’t own our own cars, but will instead summon vehicles to come pick us up for our trip down the block for milk.  Would these empty cars not be racking up additional environmentally unfriendly miles every time they come to get us?  Or, if the plan is for them to pick up other passengers along the way (imagine the wait times), could we not just slap some rails underneath these bad boys (for safety’s sake) and create a “smart phone connected” rail system?  By definition, these customers aren’t looking to “grab the wheel” themselves anyway.

And what of the widely unchallenged notion that these cars will be using their communication powers to link up with other cars on the road, thus travelling in tight formation just inches apart?  I wasn’t previously aware that “tailgating” is bad because it is “hard to do”.  I always thought the concern was “reaction time”.  Not just mine, but also Isaac Newton’s (think “Apple Car”).

The list goes on and on  ………. Who do we give “the wave” to when overly cautious (driverless) cars let us into the roadway out of turn?  Who do we give the “finger” to when stuck behind these same cars?

Will driverless cars be able to get out of the way of a honking ambulance (by driving over the curb onto someone’s lawn, etc.) under all of the potentially infinite scenarios? (Correct answer being “no”).

When a driverless car stalls in the middle of a fast moving road (not due to a computer problem of course because computers don’t typically malfunction) it will surely turn on its flashers, but who will walk back 100 feet to wave off unsuspecting drivers zooming around the bend?  Will these cars tip the tow truck driver (or non-driver)?

When one of these empty cars runs someone down, it will surely dial ‘911’, but what will it tell the operator in relation to the severity of the situation?

What happens in the D.O.T’s record books when two driverless cars flatten each other?  Is this recorded as a “deadly accident”?

We’ve all heard of “cow tipping”, but what about the newest craze – covering the sensors of rich people’s driverless cars so that their owners cannot “summon” them.  Oh, that’s right, we will all be on camera 24 hours a day with these cars around.

…. Snow, sand, ice, “continue on” waves from pedestrians, deer and dogs given same priority (or not?) as humans, police-directed emergencies, plastic bags floating across the street, a jumpy dog in the middle of the highway (I actually saw this once), flooded-out roads, missing or recently altered street signs, an approaching cop car with lights flashing (“C’est pour moi?” says Mr. Peugeot) ………. on and on and on and on.

Of course, regardless of the degree of immunity granted up front, what will actually happen most often is that these cars will be out there slowing everyone down due to their (quite necessarily) overly cautious programming and, as now required by the insurance companies, strict adherence to speed limits. I could have sworn that some road study scientists once discovered that if there exists a steady flow of highway traffic and just two drivers (side by side) slow down for just a few seconds, it creates a backward-moving ripple effect that slows down the trailing cars for miles, and lasts a very long time. Perhaps automated cars can handle this situation better than humans? It makes you wonder, will the D.O.T. now recommend installing the opposite of “HOV” lanes on the highways? Perhaps we will now have “LOVE” lanes? (Low Occupancy Vehicle – Electric)

Jay Leno once said (paraphrased) that in the end, we are not going to have a bunch of driverless cars roaming the streets.  Instead, this technology will be incorporated as safety additions similar to ‘ABS’ braking and so on.  So, if Mr. Leno (an expert in both “comedy” and “cars”) is in fact correct – why the big push (by “safety experts”) to get our hands off the wheel?