Jaywalking is, of course, very common; in many locations it is even the most common way to cross the street. Inevitably, many “self-driving car” accidents will involve jaywalking pedestrians. Given this reality, and measured against traditional standards of safety, there is no good reason to automatically classify every such future accident as caused by the jaywalking itself. Indeed, if developers had not already accounted for jaywalkers in their code, we would have seen many more incidents by now. Clearly, when jaywalkers are injured or killed, a hindsight investigation into the software and hardware performance of these vehicles should still be required.
In the immediate aftermath of the Arizona incident in which a jaywalking woman was hit and killed by a “self-driving car” as she pushed her bicycle across the road at night, the Tempe, Arizona, police chief said that vehicle footage suggested the victim herself may have been to blame: she appeared suddenly out of the shadows, and the accident would have been difficult to avoid under any mode of operation. My unofficial (and poorly researched) impression, after viewing a snippet of this footage online, is similar. Within just weeks, however, family members of the victim had reached a financial settlement with Uber (the operator of this vehicle test). Subsequently, the National Transportation Safety Board (NTSB) stated that the vehicle did, in fact, “see” the jaywalker in advance, but that the car’s emergency braking system had been intentionally disabled (in this human-monitored mode of testing) so as to avoid unwanted erratic behavior. And more recently still, the Tempe police have stated that the driver was streaming a program on Hulu around the time of the accident, and that in their opinion the crash would have been “entirely avoidable” had the human driver been paying attention. And so, at this preliminary stage, we appear to have the following (unofficial and possibly conflicting) evolving discussion as to who is to blame:
- The victim was at fault: the crash was unavoidable by either the human driver or the autonomous system.
- The car’s autonomous system saw the victim in advance but did nothing; no statements, however, suggested fault on the part of Uber’s test designers or the car manufacturer.
- The human “driver monitor” is now said (in the police investigation) to be at fault, in stark contrast to the first impression listed above.
It is interesting to note that the only “purely objective” information within the “evolution of blame” above is that the car’s autonomous system was aware of the jaywalker and did nothing. The NTSB’s investigation has not reached its final conclusions, but a few questions, seemingly missed by the press, come to mind:
- With the non-intuitive revelation that “automatic braking” was intentionally disabled during this testing, has anyone asked whether Uber and the car manufacturers intended to include these miles in any future “miles driven” claims supporting the “proven safety” of these cars?
- Is it possible (I ask in complete ignorance) that the Tempe police department has now concluded – as a matter of convenience – that this accident was the fault of the human monitor simply because she was doing something she shouldn’t have been doing at the time? (You know, in the same way a drunk driver may automatically be assumed “at fault” regardless of what actually caused a particular accident). Should we allow “Batman” to blame “Robin” in these cases?
- Has there been any immediate/emergency discussion between the NHTSA/NTSB and “self-driving car” testers concerning this decision to deactivate the automatic braking system?
- (As previously doubted) Is there actually any scientific evidence showing that a human driver monitor can rightfully be expected to correct for all, or even most, of the types of errors a “self-driving vehicle” might be expected to make? Keep in mind, of course, that any sudden and unexpected behavior by an autonomous vehicle will likely set in motion a time-consuming cognitive process within its human monitor (“Maybe there is a good reason why the car is swerving?”). Have testers and lawmakers even “thought” about, let alone “tested,” their assumptions?
- It appears that these “self-driving cars” conveniently morph in and out of “self-awareness” and feelings of guilt. With the continued use of the misnomers “self-driving” and “autonomous,” are manufacturers and testers of these vehicles dodging an otherwise proper degree of scrutiny in terms of “fault” through a simple mistake in “semantics”?
- Considering the length of time the NTSB intends to put into just this one investigation (I read “14 months”), has anyone asked what will happen in the (“inevitable”) future when many of these cars are out there and the accidents start rolling in? These investigations will resemble airplane crash investigations in a number of ways: they will involve black boxes and, at times, no surviving witnesses. In addition, if a software “decision-making” design flaw is to blame in a particular crash, then, unlike in the case of a dead human driver, the flaw will continue to put the general public (encountering or driving the same car) at risk until it is fixed. Are we to expect that the same relative degree of attention and resources will be applied to all such accidents in the very busy future?
It seems fairly inevitable that the NTSB (or whoever?) will be forced to put accidents into categories with differing priorities. I suspect that – as with drunk driving – convenient conclusions will be assigned in order to cut down on the case load. Even worse, I can imagine politicians enacting legislation which will become stricter in the automatic assigning of blame – but which will bring us further from the truth and cause added overall danger. I know personally how hard it is to get politicians’ attention concerning those specific laws and road designs installed in recent years which are (clearly) untested and adding new dangers for pedestrians.
Many absurd statements surrounding “driverless cars” go unchallenged by our officials and the press. I heard a “self-driving car” representative say, in relation to this story about the first pedestrian death, that his company’s cars have travelled over five million miles (“without killing anyone,” I guess, was his point). He seemed to think this was a big deal. However, this is the equivalent of only ten lifetimes’ worth of driving (assuming someone drove an average of 10,000 miles a year over a fifty-year career, stopping at age 67). So his statement would be like me standing in a room surrounded by ten retired people and saying, “Wow, isn’t it amazing that no one in this room ever killed someone with their car!” Add to this the likelihood that the testing he was referring to, as with the Uber testing, was likely not in any way all-encompassing.
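The back-of-the-envelope arithmetic above is easy to check. Here is a minimal sketch; the figures (five million fleet miles, 10,000 miles per driver per year, a driving career from roughly age 17 to 67) are the assumptions stated in the text, not verified data:

```python
# Back-of-the-envelope check of the "ten driving lifetimes" comparison.
# All numeric figures are assumptions taken from the surrounding text.
FLEET_MILES = 5_000_000      # miles the representative claimed his cars had driven
MILES_PER_YEAR = 10_000      # assumed average annual mileage for one driver
DRIVING_YEARS = 67 - 17      # assumed driving career: roughly age 17 until 67

lifetime_miles = MILES_PER_YEAR * DRIVING_YEARS      # miles in one driving "lifetime"
lifetime_equivalents = FLEET_MILES / lifetime_miles  # how many such lifetimes the fleet total represents

print(f"One driving lifetime is about {lifetime_miles:,} miles")
print(f"The fleet total equals about {lifetime_equivalents:.0f} driving lifetimes")
```

In other words, one driver’s lifetime comes to about 500,000 miles, so the fleet total amounts to roughly ten such lifetimes, which is the point of the “room of ten retirees” comparison.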
When I took “Driver’s Ed” in high school, I was taught that a moving car is “like a weapon,” and that it was illegal to leave a car running without a driver inside. We’ve come a long way, baby!