
These Road Designs and “Safety Improvements” Will Provide Some Hearty Laughs at Your Funeral

11 Jan

The following road design SNAFUs fall under various jurisdictions (local, county, and state) here in New Jersey.  In each case I did my best to bring these potentially deadly mistakes (photos and descriptions) to the attention of the correct parties, repeatedly, over recent years.  Apparently the brainstorming parties are still going on, as I have received no proper responses and these issues still exist.

Sidewalk “Bump Outs” (in front of a school in this example):

Children naturally corral at the sidewalk’s edge – now within inches of oncoming traffic, as the shoulder suddenly disappears at these locations.  In order for this design to “slow traffic”, students must effectively assume “human traffic cone” status.  One question for designers – why not simply place some flexible (non-human) objects out in the street, 100 feet or so before the point of crossing?  In addition, this is a multi-hazard for bicyclists, especially at night, who must either (#1) crash into the curb, (#2) stop suddenly (dangerous if among a trailing group of cyclists), (#3) hop the curb, or (#4) suddenly enter the direct lane of much faster moving car traffic while hoping to reach the far edge of the extension before the cars catch up.  In fact, just minutes prior to this writing, I drove through the location photographed and saw my neighbor on his bicycle teetering on the curb, twisting around and trying to decide when to make the mad dash past this obstacle.


Middle of the Road “Islands” Obstructing Crosswalk View:

This project (see photo) also somehow came to fruition without anyone stopping it (or at least “altering” it) along the way! Perhaps it is just another example of these modern times, in which the wheel is constantly reinvented and it has become acceptable to put every idea out there live for the public to beta test.  I’ll simply list the dangers here:

Uno – Crosswalk goes completely across the street without forcing pedestrians to stop at the island to reassess their situation.  (This becomes relevant below).

Dos – Fountain obstructs driver’s view.

Tres – Black beams obstruct driver’s view (and mimic pedestrians).

Cuatro – Vegetation completely blocks the driver’s view of small children and those in wheelchairs.  How quickly did you notice the woman posing in the middle of the street for my photo? This view represents the position and height of an approaching (small car) driver.

(Under the best of conditions)


After reporting the above dangers with this “mid-road island”, my town doubled down by adding a Christmas tree – thereby making pedestrians along the entire left side of this crosswalk completely invisible to all drivers for a continuous one and a half blocks of driver travel!

(Christmas Hazard)


Mother Nature then tripled down by adding this new ice hazard (from the island’s down-flow) which could not be reached by de-icing trucks. It was literally impossible for me to stand in the middle of this crosswalk without sliding.  An unsuspecting pedestrian crossing the street could very easily slip and fall forward, directly into the oncoming lane of traffic whose drivers could not see them up to that point!  This is dangerous beyond belief!

(Ice hazard)


“Left Corner Obstructions”:

The following “left corner obstructions” – applicable to “T” and “stop sign” locations – force drivers to inch forward, nearly in line with the direct flow of oncoming traffic, just in order to see what is coming. This puts them at risk of being broadsided on the driver’s side.  After many months of complaining, my town finally moved this planter, but then allowed the county to build the (similarly dangerous) county park wall seen in the follow-up photo.


On-scene observers (at the location below) will notice that every single driver is now required to pull a full car length ahead of the official “stop line” in order to even begin seeing what is coming at them along the main street. As I had anticipated, a car stopped here – well ahead of where it would otherwise have needed to stop – was struck as collateral damage in an accident involving two opposing cars on the main road.


Bizarre “Diagonal Crosswalks”:

This design would be comical if it were not so dangerous.  The first photo shows the exact location where I was slowly (very slowly) turning left when an absentminded jogger suddenly darted out directly in front of (inches from) my car, jogging right to left in the photo.  Leaving aside the fact that these crosswalks (in each case) terminate at a sidewalk-less patch of grass, here are a few observations:

1) These are completely unexpected by the driver, and sometimes faded (as with other crosswalks).

2) Because of their much greater length, pedestrians are exposed to traffic for a much longer period of time!

3) These increase the number of directions of simultaneous traffic the pedestrian is exposed to (due to the added risk of cars turning onto or off of this additional road).  Extra confusion all around!


Trick Question for Lawmakers (see photo below) –

If I were a handicapped person in a wheelchair, looking to cross to the far side as quickly as possible, and I decided not to travel within the crosswalk (left to right) towards a sidewalk-less patch of grass – but instead took the shorter, more direct route straight across to the handicap-accessible ramp and sidewalk – would I be guilty of “jaywalking”?


Exit Ramp “Pedestrian Crossing” Stop Signs:

You probably wouldn’t think of placing a “stop sign” at the beginning of a high speed exit ramp – so why did they place this “pedestrian crossing” stop sign here?! In fact, many drivers are actually looking in their rear-view mirrors as they quickly merge right at this location.  Even if they were able to stop suddenly in time for a pedestrian, the cars behind would likely knock them into the pedestrian anyway!


Fortunately, since few people use the lengthy “pedestrian crossing” bridge in the background, few also encounter this crossing.


“Faux Brick Sidewalk/Crosswalk” Dangers:

This “faux brick” sidewalk (leading away from my local train station) forces me out into this busy street with my rolling luggage every time I return from the airport. I have no other option!  The same would apply to children on rollerblades and perhaps (I assume) to those in wheelchairs.  In addition, this is a continuous “trip hazard” for seniors, stroke victims, etc.




Overly Bright (Modern) Emergency Lights:

It is often impossible for drivers to see police and emergency crews standing in the middle of the street as they arrive upon their overly bright (modern) emergency lights.  These lights also make it hard for drivers to see slow moving cars approaching from the opposite direction.  It’s as if the designers had never heard of the tactic whereby a person shines a bright light directly into his enemy’s eyes for the purpose of temporarily “blinding” him.  Or – if this design was in fact intentional, for the situational benefit of the police – the obvious questions then become: (#1) Why does this “overkill” brightness level exist on other emergency vehicles as well?  And (#2) Why do many officers seem so clueless (perhaps “untrained”?) when you try to politely inform them of the danger they’re in?

“TOO NUMEROUS AND INCONSISTENT CROSSWALK DESIGNS” – A driver should not need a degree in abstract art to interpret what “is” and “isn’t” a crosswalk!

“TOO MANY SIGNS IN GENERAL” – The proliferation of signs (if actually read by drivers) would represent one of the largest distractions out there.  In reality, of course, drivers treat these like “Terms of Agreement” contracts attached to new software (meaning that they “could not”, and therefore “do not” read them all).



Interesting Questions Surrounding the “Self-Driving Car” Pedestrian Death in Arizona

31 Mar

Jaywalking is, of course, very common.  It is even the most common way to cross the street in many locations.  Inevitably, many “self-driving car” accidents will involve jaywalking pedestrians.  Because of this reality, and in comparison to traditional levels of safety, there doesn’t seem to be a good reason for automatically classifying all such future accidents as caused by the “jaywalking” itself.  In fact, if computer code developers had not already conceded this on their own, we would have seen many more incidents by now.  Clearly, in the case of injuries and deaths to jaywalkers, hindsight investigation into the software and hardware performance of these vehicles should still be required.

In the immediate aftermath of the Arizona incident – in which a jaywalking woman was hit and killed by a “self-driving car” as she pushed her bicycle across the road at night – the Tempe, Arizona police chief said that vehicle footage suggested the victim herself may have been to blame, as she appeared suddenly out of the shadows, and that it would have been difficult to avoid this accident under any mode of operation.  My unofficial (and poorly researched) impression was similar after viewing a snippet of this footage online.  Within just weeks, however, family members of the victim had reached a financial settlement with Uber (the operator of this vehicle test).  Subsequently, the National Transportation Safety Board (NTSB) stated that the vehicle itself did – in fact – “see” the jaywalker in advance, but that the car’s emergency braking system had been intentionally disabled (in this humanly-monitored mode of testing) so as to avoid any unwanted erratic behavior.  And then, more recently, the Tempe police stated that the driver was in fact streaming a program on Hulu around the time of the accident, and that in their opinion the crash would have been “entirely avoidable” had the human driver been paying attention.  And so, at this preliminary stage, we appear to have the following (non-official and possibly conflicting) evolving discussion as to who is to blame:

  • The victim was at fault – crash was unavoidable by the human driver or autonomous system.
  • The car’s autonomous system saw the victim in advance but did nothing, however, no statements suggesting fault on the part of Uber test designers or the car manufacturer.
  • The human “driver monitor” is now said (in the police investigation) to be at fault – in stark contrast to the first impression listed above.

It is interesting to note that the only “purely objective” information within the “evolution of blame” above is that the car’s autonomous system was aware of the jaywalker and did nothing.  The NTSB’s investigation has not reached its final conclusions, but a few questions – seemingly missed by the press – come to mind:

  • With the non-intuitive revelation that “automatic braking” was intentionally disabled during this testing – has anyone asked whether Uber and the car manufacturers were intending to include these “miles driven” in any future “miles driven” claims supporting the “proven safety” of these cars?
  • Is it possible (I ask in complete ignorance) that the Tempe police department has now concluded – as a matter of convenience – that this accident was the fault of the human monitor simply because she was doing something she shouldn’t have been doing at the time?  (You know, in the same way a drunk driver may automatically be assumed “at fault” regardless of what actually caused a particular accident).  Should we allow “Batman” to blame “Robin” in these cases?
  • Has there been any immediate/emergency discussion between the NHTSA/NTSB and “self-driving car” testers concerning this decision to deactivate the automatic braking system?
  • (As previously doubted) Is there actually any scientific evidence showing that a human driver monitor can rightfully be expected to correct for all, or even “most”, of the types of errors a “self-driving vehicle” might be expected to make?  Keep in mind that any sudden and unexpected behavior by an autonomous vehicle will likely set in motion a time-consuming cognitive process (“Maybe there is a good reason why the car is swerving?”) within its human monitor.  Have testers and lawmakers even “thought” about – let alone “tested” – their assumptions?
  • It appears that these “self-driving cars” conveniently morph in and out of “self awareness” and feelings of guilt.  With the continued use of the misnomers “self-driving” and “autonomous” – are manufacturers and testers of these vehicles dodging an otherwise proper degree of scrutiny in terms of “fault” through a simple mistake in “semantics”?
  • Considering the length of time the NTSB intends to put into just this one investigation (I read “14 months”) – has anyone asked what will happen in the (“inevitable”) future when many of these cars are out there and the accidents start rolling in?  These investigations will be similar to airplane crash investigations in a number of ways.  They will involve black boxes, and at times no surviving witnesses.  In addition, if a software “decision making” design flaw is to blame in a particular crash, then – unlike in the case of a dead human driver – the issue will continue to put the general public (encountering or driving the same car) at risk until the problem is fixed.  Are we to expect that the same relative degree of attention and resources will be applied to all such accidents in the very busy future?

It seems fairly inevitable that the NTSB (or whoever?) will be forced to put accidents into categories with differing priorities.  I suspect that – as with drunk driving – convenient conclusions will be assigned in order to cut down on the case load.  Even worse, I can imagine politicians enacting legislation which will become stricter in the automatic assigning of blame – but which will bring us further from the truth and cause added overall danger.  I know personally how hard it is to get politicians’ attention concerning those specific laws and road designs installed in recent years which are (clearly) untested and adding new dangers for pedestrians.

Many absurd statements surrounding “driver-less cars” go unchallenged by our officials and the press.  I heard a “self-driving car” representative say – in relation to this story about the first pedestrian death – that his company’s cars have travelled over five million miles (“without killing anyone”, I guess, was his point).  He seemed to think this was a big deal.  However, this is the equivalent of only ten “lifetimes’ worth” of driving (assuming someone drove an average of 10,000 miles a year from age 17 until giving up driving at age 67 – about 500,000 miles per driving lifetime).  So his statement would be like me standing in a room surrounded by ten retired people and saying “Wow, isn’t it amazing that no one in this room ever killed someone with their car!”  Add to this the likelihood that the testing he was referring to – as with the Uber testing – was likely not, in any way, all-encompassing.
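The arithmetic behind that comparison can be sketched in a few lines (the 10,000 miles per year and the 17-to-67 driving span are my own illustrative assumptions, not official statistics):

```python
# Back-of-the-envelope check of the "five million miles" claim.
# The per-year mileage and driving ages are assumptions from the
# essay above, used purely for illustration.

MILES_PER_YEAR = 10_000
DRIVING_YEARS = 67 - 17          # one "driving lifetime"
FLEET_MILES = 5_000_000          # the representative's claimed total

lifetime_miles = MILES_PER_YEAR * DRIVING_YEARS     # 500,000 miles
equivalent_drivers = FLEET_MILES / lifetime_miles   # 10.0

print(f"One driving lifetime: {lifetime_miles:,} miles")
print(f"Five million fleet miles = about {equivalent_drivers:.0f} retired drivers")
```

Ten average driving careers is a tiny sample for making any statistical claim about fatality rates.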

When I took “Driver’s Ed” in high school, I was taught that a “moving car” is “like a weapon”, and that it was illegal to leave a car running without a driver inside.  We’ve come a long way baby!

Pedestrian Deaths Rising – The 3 Major Controllable (Yet Still Unaddressed!) Factors ……. Can You Hear Me Now?

2 Mar

I just heard another report saying that pedestrian deaths have increased disproportionately since 2010.  So – considering that cell phones had already been distracting us for years by that time – could there possibly be other factors more closely corresponding to the period in question? …. Hhhhmmm? …….. A recent incident, in which a young girl was hit by a car – no brakes applied – directly in front of my apartment, and (according to the driver) under one of the exact scenarios I have been warning about (“just after dark, children now expecting cars will stop for them, driver could not see her in the glare of opposing headlights”), seems to suggest so!  New readers should get up to speed by reading my earlier essay “‘Stop and Stay Stopped’ Crosswalk Law Has Created Many New Dangers”, about the law enacted (at least in New Jersey) in mid-2010.  Experienced visitors, and those on the go, can check out the (painfully condensed) summary below – my attempt to quickly describe the three sweatiest suspects in this unfortunate increase in deaths (and presumably injuries as well).

#1) The “Stop and Stay Stopped” Crosswalk Law (closely tied to the time period of increased pedestrian fatalities) compounds the “driver distraction” issue; misinterprets the only safe purpose of a crosswalk (which is to corral pedestrians to the safest crossing locations, where unambiguous stop lights and stop signs exist); dangerously changes pedestrians’ assumptions of safety; creates logistical impossibilities for drivers, who – unlike with stop signs and lights – now need to continually monitor (and interpret) the intentions of jittery pedestrians along the entire right side of the road, simultaneously do the same for the left, and keep their third eye “on the road ahead” as taught in driving class; and has created numerous other problems meticulously described in the essay referred to above.  Readers should note that this essay focuses on those new dangers occurring after implementation of this law.  No doubt “Stop and Stay Stopped” sounded great on paper, but it is clear (from my professional experience) that it was never actually tested, or was wholly inadequately tested, prior to implementation.  This is an emergency situation!  Resolution of this problem will require political and professional “fortitude” on the part of designers and politicians (who may have to admit they were wrong) in order to save lives. …….. I have noticed what appears to be a bit of “deflection” by officials when they are asked about the rise in pedestrian deaths (which at times is occurring in direct contradiction to concurrent drops in other types of traffic fatalities).  This failure to look honestly at what is actually going on out there is very concerning.

#2) Ill-conceived and Untested Road Safety “Improvement” Projects – For these I can provide photos.  Examples include “curb bump-outs”, such as the one in front of our middle school, which suddenly eliminate the shoulder, turn children at curb’s edge into “human traffic cones” as they now stand within inches of oncoming traffic, and create a crash risk for bicyclists.  These cyclists, in fact, are more often seen making the sudden forced dart into the direct lane of much faster moving car traffic as they attempt to make it to the far edge of the bump-out without being hit.  On one occasion I saw my handicapped neighbor teetering on the curb of one of these bump-outs, twisted around and trying to decide when to make a run for it!  ……. There are also “middle of the road islands” with beams, fountains, statues, and vegetation that completely block a driver’s view of those pedestrians crossing at the poorly considered far-end crosswalk!  Our local example even included black ice in the middle of the crosswalk – just off the end of the island – caused by the new inability of salt trucks to reach this spot.  In fact, I was nearly killed there as I tried to stay upright while taking photos.  Unsuspecting pedestrians would be at even greater risk were they (50% of the time) to fall forward into the fast moving, narrow lane of traffic whose drivers could not see them up to that point.  ……. And (a danger to drivers themselves) there are poorly conceived “left corner view obstructions” – planters, county park walls, and other beautification projects – that require stopped drivers to inch their way forward, nearly in line with an intersection’s oncoming traffic, just to see what is coming.  This puts them at grave risk of being broadsided on the driver’s side!  These obstructions may also lead to a driver being unnecessarily involved, as a third party, in someone else’s crash (as I also witnessed, a couple blocks from home).  Plus many more asinine and untested road projects.

#3) There is no mechanism set up by which citizens can easily report “Road Design/Law Dangers” such that this information is automatically routed to the correct jurisdictions for remedy, and the lessons learned documented and made viewable by all other road designers and legislators around the country, so as to avoid future proliferation of these mistakes.  This unchecked spreading of bad road designs and bad “safety” laws seems to be occurring now, as states and localities rush to “fix” safety issues.  Currently – if citizens reporting dangers are lucky enough to finally get their many hours of labor into the correct hands – they find they are speaking to the very people who have the most to fear (professionally) if knowledge of their “design mistake” is more widely disseminated!  In addition, I have found after questioning that police and fire crews are reluctant to report bad road design to their more politically connected higher-ups, so valuable “real world” feedback is being missed here as well.  This systemic failure to obtain citizen feedback is shockingly ironic, not only in terms of these new “safety” projects (“beta tested” at the public’s risk), but also in terms of the present push to test “autonomous vehicles” at the public’s risk.  Does not the lack of a citizen feedback mechanism totally defeat the purpose of a “beta test”?  Beta testing works by recruiting a massive number of additional “testers”.  Will the only feedback obtained from the public be their participation as non-communicative “death statistics”?!  Case in point – I was nearly run over in a parking lot recently as a driver backed up while relying only on his “rear camera” view as displayed on the panel in front of him.  This camera does not provide views to the sides.  Who do I report this major “real world” result to before people are killed or horribly injured?
(Note: Perhaps “rear camera” views should only display on screens located behind the driver, thus requiring that drivers actually turn around? ……. Just a thought.)  Anyway – I have an idea for a low-cost citizen’s “Road Design/Law Danger” reporting system that would not require individual states to drastically change the way they do things, or to pass new laws.  Officials should contact me if interested.
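To make the routing idea concrete, here is a minimal sketch of the kind of jurisdiction-based report routing described above.  This is purely my own illustration – all names, fields, and jurisdictions are hypothetical, and it is not the specific proposal I am withholding for officials:

```python
# Hypothetical sketch: a citizen "Road Design/Law Danger" report that is
# automatically routed to the jurisdiction able to remedy it, and tracked
# until closed so the lessons learned can be published.

from dataclasses import dataclass

@dataclass
class DangerReport:
    location: str
    description: str
    jurisdiction: str          # "local", "county", or "state"
    status: str = "open"       # tracked until remedied
    lessons_learned: str = ""  # published for other designers once closed

# One shared queue per jurisdiction, so a report lands with the
# correct party instead of bouncing between offices.
queues = {"local": [], "county": [], "state": []}

def file_report(report: DangerReport) -> None:
    """Route the report to the jurisdiction that owns the road."""
    queues[report.jurisdiction].append(report)

file_report(DangerReport("Middle school crossing",
                         "Bump-out eliminates shoulder; bicyclist hazard",
                         "local"))
print(len(queues["local"]))  # 1
```

Even a simple shared structure like this would let designers in other states search closed reports before repeating a mistake.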

The Federal Automated Vehicles Policy – A Software Tester’s Concerns

23 Nov

Like the Healthcare (or “Obamacare”) website before it, the adoption and testing of “autonomous vehicles” (I’m tempted to say the “Obama Car”) is overwhelmingly a “software testing” project. The analogy between the two – and the lack of solid “buck stops here” ownership – is very concerning as I review the “Federal Automated Vehicles Policy”.

In a nutshell, the Federal DOT and NHTSA are (kind of) claiming overall ownership of this project while also admitting they can only make “Best Practice” suggestions.  In fact, there appears to be even less direct control over this project than existed for the healthcare website.  The size and complexities of this new computer code will no doubt dwarf that created for the healthcare website, and the consequences here are much graver.

The existence of different proprietary systems, and questions as to how (or “if”) they will talk to each other, is one analogous issue.  Another is that this new “Highly Automated Vehicle” project (also) involves “retrofitting” software – in this case onto a massive and diverse infrastructure that was not designed with these vehicles in mind.  Add to this the fact that our roads, bridges, signs, and laws are closely guarded turf under the control of 50 different state jurisdictions.  In fact, the NHTSA and Federal DOT have emphatically quoted this jurisdictional issue to me when explaining why they could not address, or even comment on, a number of road safety dangers I brought to their attention.  So, has something suddenly and magically changed such that these discussions – “prohibited by law” – are now acceptable?  Those interested should read my earlier essays on this site.  (In the case of New Jersey’s “Stop and Stay Stopped” crosswalk law, subsequent rises in pedestrian deaths – contrary to concurrent drops in all other types of auto deaths during some of these same years – seemed to confirm my fears, but that is a whole other story!)

People should keep in mind here that the REAL safety testing of automated vehicles will only occur after they are set free on our public streets. This will be what software testers call a “beta test”.  There is no similarity whatsoever between the NHTSA’s oversight of straightforward, non-varying “crash” tests and the new responsibilities it has been assigned.  For auto manufacturers, let alone the NHTSA, it will be logistically, perhaps even “cognitively”, impossible to come even close to the level of testing that would be required (ahead of time) in order to verify safety under the endless number of real world scenarios these cars will encounter.  Not only would the costs be prohibitive, but the use of live, walking and talking test subjects (simulating pedestrians, for example) would be unethical at full speed operation.  So get ready, people – we are all the “stakeholders” here!

From what I can see, the NHTSA is not yet requiring the establishment of an easily accessible, always available system by which the general population can report their dangerous or questionable encounters with these vehicles.  I thought the whole purpose of a “beta test” was to provide a massive increase in “tester” manpower.  Not providing this avenue for feedback defeats the whole purpose of a “beta” test!

And, concerning the preliminary testing that will take place, a reliance on the “self-reporting” of results by manufacturers should keep all of our eyebrows raised. Are we really to believe that every time a Tesla driver needs to retake control of his car from the automated system (where a potentially fatal crash would otherwise have occurred), this is being tabulated as a “likely fatal incident” in terms of its theoretical “fully autonomous” operation?  Of course, Tesla points out here that their cars are not yet intended to be “fully autonomous”, but the illustration still applies.

I fully appreciate the awkward position the Federal DOT and NHTSA have been forced into by the President’s push to promote this automation. The NHTSA – traditionally the watchdog of physical and design defects after the fact – is now (kind of) claiming ownership of many aspects of the upfront planning phase of this massively impactful, hugely complicated, and loosely defined project.  The NHTSA is effectively establishing for itself a future “conflict of interest”.  Secretary Foxx even said, “What we’re doing here is building safety in at the ground floor”, when announcing the public release of the FAV Policy.  It should be noted that the NHTSA is (kind of) claiming this ownership at the same time it has not been given the resources, expertise, or even the mandate to take proper control.  Am I the only person to see an analogy between this current situation and the inadequate resources in the hands of FEMA prior to its (criticized) responses to a number of subsequent disasters?  When bad things happen – and they will – automakers will be able to point to the NHTSA’s (sort of) claims of ownership over the early phases of this project.  At the same time, manufacturers will also likely claim immunity due to the lack of specificity established upfront.  It is easy to imagine instances in which the NHTSA might be tempted to cover something up in order to avoid receiving flak from the public.  The NHTSA, of all organizations, should have a strong understanding of the environmental conditions leading to poor quality (and “recalls” involving negligence).  It is foolish to assume its own employees are somehow immune to these dynamics of self-preservation.

I have also noticed a failure to use unambiguous language, as would be required in the design stage of any software project. This is visible in the language used by the agency as it promotes “this technology”.  There is in fact no single or easily encapsulated “technology” here.  There are numerous physical technologies (that will no doubt change over time) and an even larger ongoing commitment to producing tons of “new and improved” computer code.  If the claim is that “computer code” itself is a “new” thing – this is news (or “olds”) to me.  Encompassing everything into one verbally convenient phrase such as “this technology” serves no real purpose.  Computer coders cannot code, fix, or be held accountable for “this technology”.

Another ambiguous reference occurs on Page #10 of the FAV Policy, where the text toggles “primary responsibility” between the “human operator” and the “automated system”.  “Automated systems” are not cognizant beings; they don’t “bleed”, and they do not pay with their lives when things go wrong.  This may sound academic, but confounding these concepts – even when primarily an issue of semantics – creates further “wiggle room” for car manufacturers (or computer programmers) when things go wrong in the future.  This becomes instantly obvious in a legal sense.  I am not a lawyer, but I am pretty sure the courts would actually hold the driver partially liable – despite the NHTSA’s claims that the “automated system” was responsible – should an accident or death occur where the driver had previous knowledge that the automated system was not performing up to expectations.  I am curious as to just how literally we are to take these descriptions.

We are already seeing auto manufacturers running wild with their proprietary claims surrounding the promise of their own future autonomous vehicles. No doubt much of this is due to their fear of seeming technologically inferior or “behind the curve”.  They apparently have no fear that the NHTSA will call them out when it comes to these statements.  Elon Musk recently claimed that “half a million lives” would have been saved worldwide had everyone been driving Teslas with the activated “autopilot” feature.  He then told people to “Just do the math!”  Well, I not only did the math, I also applied some basic scientific considerations such as “sample size”.  With this it becomes instantly apparent that his claim (at this point in time) is ludicrous!  Again, see my essay “‘Accountability’ and ‘Countability’ – Misdirection in the ‘Autopilot’ Safety Debate” for more on this.

There also seems to be a very important (likely high volume and deadly) mistake in the logic applied by the NHTSA when discussing the automation levels of these cars. There is no reason for the NHTSA, or anyone else for that matter, to assume that a human driver – even when fully attentive – will be able to react in time to every mistake made by an automated system!  One need only imagine the following situation: if a driver is concentrating intensely on the road ahead and a passenger (out of nowhere) suddenly jerks the steering wheel to the side for no reason, it will spark all sorts of reflexes and reactions within the driver’s mind as he or she attempts to make sense of what just happened.  The brain’s response might be, “Don’t adjust the wheel, because there must have been a good reason why my passenger did this.”  Or it could be the exact opposite reaction, thus creating an overcompensation in steering.  These episodes will always occur – by definition – as complete surprises.  There is absolutely no way for drivers to safely practice for, or anticipate, these realities ahead of time!  It is preposterous for the NHTSA to validate this “assumption of ultimate responsibility” (over mistakes made by the automated system) by applying it to this project!  This “clause” is being used as a “catch-all” by those involved, as a way of avoiding a more complex and realistic discussion of the true causal factors.

My overall recommendation to the Federal DOT and NHTSA is that – considering their very limited degree of true ownership over this project – they absolutely must wield every possible element of control they have at their disposal during these early stages. At the very least, the following actions should be taken:

#1) The NHTSA should mandate and monitor the use of a single (overall) “Final Stage Test Plan” (created and updated within a single software application) that is shared, viewed, accessed, and updated by all of the car manufacturers. This single (overall) “Final Stage Test Plan” will list all of the real-world scenarios (each one representing a single “test case”) that the cars of each particular “automation level” will need to navigate safely.  This particular stage of testing – by definition here – must be conducted using a completely assembled car at speed, with all systems activated (individual component testing to be handled separately).  These scenarios (“test cases”) should be reviewed ahead of time for completeness, shared among all manufacturers, and of course then tested by each manufacturer under their own proprietary systems.  As new “tricky and dangerous” scenarios are discovered, these new “test cases” must be added to the original test plan (instantly viewable by all – as before).  Each manufacturer must assign ownership of the testing of each individual test case to a single tester who will be responsible for literally “signing off” (as in actual “signature”!) when a vehicle passes a particular test.  Keep in mind that this “Final Stage Test Plan” only describes the real-world scenarios these cars must navigate safely (applicable to all manufacturers) and does not require the recording or revealing of any proprietary information. The potentially proprietary discussions related to the handling of problems (or “bugs”) will be controlled separately under each company’s individual “project tracking” system as described below.

#2) Though not established by the NHTSA, each manufacturer should of course have their own “project tracking” system to track problems and their resolutions as they occur.  This would follow standard software practices (assigning a unique identifier to each issue; stating whose hands a particular issue is in at any given moment in time; cross-referencing the applicable test case if relevant; and so on).

#3) As these vehicles “go live” on our roadways (the “beta test”), an easy-to-use method must be established by which the general population (big-time “stakeholders”) can report any and all dangerous encounters.  Of course this will lead to duplicate entries as some of the same problems repeat themselves.  Therefore, on the same webpage, the NHTSA needs to continually update a “known problems” section enabling citizens to quickly log a “this happened to me also” entry.  This will not only save everyone time and trouble, but it will also add important emphasis to particular dangers.

#4) The NHTSA must eliminate completely the notion that drivers can be held ultimately responsible (under any “level” of automation) for mistakes made by an automated system.
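For illustration only, here is a minimal sketch of the record-keeping described in items #1 through #3.  Every identifier, field name, and scenario below is invented for the example; nothing here reflects any actual NHTSA schema or system:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One entry in the shared "Final Stage Test Plan" (item #1)."""
    case_id: str                  # unique ID, visible to every manufacturer
    automation_level: int         # the SAE level the scenario applies to
    scenario: str                 # plain-language real-world situation
    signoffs: dict = field(default_factory=dict)  # manufacturer -> named tester

    def sign_off(self, manufacturer: str, tester: str) -> None:
        # One accountable, named tester literally signs off on each pass
        self.signoffs[manufacturer] = tester

# Item #3: duplicate citizen reports become "this happened to me also" tallies
known_problems = Counter()

def report(problem_id: str) -> int:
    """Log a citizen report; return the running count for that danger."""
    known_problems[problem_id] += 1
    return known_problems[problem_id]

# Hypothetical usage
plan = [TestCase("TC-0001", 3, "Pedestrian steps out from behind a parked van")]
plan[0].sign_off("Acme Motors", "J. Doe")
report("sudden-exit-at-ramp")
report("sudden-exit-at-ramp")     # second report just increments the tally
```

The key design point, as argued above, is that the shared plan carries no proprietary information – only scenarios and signatures – while bug details stay in each company’s own tracker.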

A few additional observations on the “FEDERAL AUTOMATED VEHICLES POLICY”

Page 9: SAE Levels “2” and “3” are poorly defined

Page 38: The DOT anticipates increased responsibilities similar to “licensing” of the non-human driver in the future. This is surprising as my local (state) inspection station doesn’t even have the resources to check my tire “tread wear” anymore.  Perhaps I am missing something here?

Page 44: The FAV Policy glosses over the issue of liability and insurance coverage as related to the complexities and differences that will occur between the states. This is no small issue.  Let’s not forget that in the case of accidents and deaths, all parties will have it in their interest to go after the same entity – that being the “auto manufacturers”.  Therefore, it is clear that the first order of business for the manufacturers will be the seeking of legal immunity.

Page 59: As already noted, these automated cars – required to make decisions under an endless array of real world scenarios – will really only be tested once they are released in the real world.  This fact greatly hinders the concept of granting “exemptions” based only on limited prior (non-real world) testing.  Something to think about!

Page 72: Referring again to the complexities involved in testing the endless scenarios that would need to be handled by an automated car – the NHTSA is kidding itself if it thinks it will have the manpower and resources to adequately test – by itself – even one such vehicle before release to the real world.

“Accountability” and “Countability” – Misdirection in the “Autopilot” Safety Debate

7 Jul

To be clear, Tesla and Elon Musk were referring to “autopilot assisted”, and not “fully autonomous” driving when claiming that only one Tesla fatality has occurred in 130 million autopilot assisted miles driven. The same is true of Musk’s later (quite bizarre and scientifically unproven) statement that half a million lives would have been saved around the world had everyone been driving said Teslas. For this he presumably considered the 1 million worldwide auto fatalities per year, occurring at a rate of one for every 60 million miles driven.

Per Musk’s insistence, I thought I had better do the math for myself. Right off the bat, there is a major issue with his claim. Musk had at his disposal a Tesla sample size (everything to date) capable of producing just one death, and this was then compared to a yearly sample size large enough to have resulted in one million deaths (in the case of the worldwide figure). After allowing for Tesla’s relatively higher number of miles per death (130 million divided by 60 million), and then dividing the total worldwide deaths by this amount (1 million/2.16), you end up with the fact that the worldwide totals arose from a sample size roughly 460,000 times larger than Tesla’s! You can think of it this way – if Tesla had had just one more fatal accident in which two passengers were killed – they would be down to one fatality per 43.3 million miles driven. Would Musk then be proclaiming that if everyone around the world had been driving autopilot assisted Teslas there would have been an additional 384,000 people killed?
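The back-of-the-envelope arithmetic above can be spelled out directly, using only the figures quoted in this post (130 million autopilot-assisted miles per fatality, one worldwide fatality per 60 million miles, one million worldwide fatalities per year):

```python
tesla_miles = 130e6          # autopilot-assisted miles to date, with 1 fatality
world_rate = 60e6            # worldwide: one fatality per 60 million miles
world_deaths = 1e6           # worldwide fatalities per year

# Tesla's apparent advantage, and the disparity in sample sizes
advantage = tesla_miles / world_rate         # ~2.17x more miles per death
sample_ratio = world_deaths / advantage      # ~460,000x larger worldwide sample

# Sensitivity check: one more crash killing two passengers flips the picture
new_rate = tesla_miles / 3                   # ~43.3 million miles per fatality
world_miles = world_deaths * world_rate      # ~60 trillion miles driven per year
implied_deaths = world_miles / new_rate      # ~1.38 million implied deaths
extra_deaths = implied_deaths - world_deaths # ~385,000 MORE deaths, not fewer

print(round(sample_ratio), round(extra_deaths))
```

A single additional data point swings the "lives saved" figure by hundreds of thousands – which is exactly what a tiny sample size means.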

And, what exactly were the “controls” in place for this comparison to be valid? Are Teslas equally distributed among the countries included in the worldwide figure? Is Tesla’s autopilot currently capable of handling all of the locally specific traffic laws and infrastructure as they currently exist around the world? In the U.S., we have 50 separate states doing their own thing when it comes to passing legislation. What about these further distinctions around the world?

STOP !!! ……. (Note to Self) ……. Enough already with the careful and literal translation of Tesla and Musk’s statements concerning the safety level of their system! I need to now continue writing this under the assumption that what most people actually heard was “self-driving” car, not “autopilot assisted”. Why do I feel this way? Because I am a pretty smart guy with a background in quality assurance and a demonstrated interest in road safety – and even I made this mistake before I carefully reread the claims.

Most troubling is the ease with which these claims were made, and the lack of any proactive clarification on the part of Tesla. This is likely symptomatic of a greater problem as put forth earlier in my essay “Testing ‘Self-Driving’ Cars – The Buck Stops Where?!”, also located here on this blog.

To a more useful end, I would now like to explain how far off, and how scientifically invalid, these safety claims would be if a person were to (incorrectly) apply them to the idea of “hands off”, “fully autonomous” operation.

To misapply the data would be to neglect all of those instances in which Tesla drivers quickly corrected a potentially fatal mistake made by autopilot. Some examples can be found on YouTube, and I suspect there have been more than reported publicly. Examples include the car suddenly attempting to exit the highway at the last second, or the car continuing to follow the car in front instead of staying in lane.  If just a few of these (otherwise fatal) events have occurred (to date), it would represent a massively lower “fully autonomous” safety level. What is so disturbing is that Tesla is not openly disclosing this type of data.  Not collecting it would be even worse.  Anyone out there know the answer?  If this data has been neglected, it would fit expectations considering that the Federal D.O.T. announced they would be relying on the self-reporting of auto manufacturers when verifying “driverless-car” safety. (Again – see my earlier essay).
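To see how quickly such neglected driver “saves” would erode the implied hands-off safety level, consider this hypothetical arithmetic.  The intervention counts below are invented purely for illustration – the whole point is that nobody outside Tesla knows the real number:

```python
miles = 130e6                     # autopilot-assisted miles, with 1 actual fatality

# Each hypothetical driver intervention that prevented a fatality counts as
# one more "would-have-been" death under fully hands-off operation
for saves in (0, 2, 5):
    per_fatality = miles / (1 + saves)
    print(f"{saves} saves -> one fatality per {per_fatality / 1e6:.1f} million miles")
```

Even a handful of uncounted interventions would cut the advertised “one fatality per 130 million miles” figure by a factor of several.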

In fact, even if the above (calculated) considerations were to be added in at this point, the adjusted safety level would still be massively underestimated.  Currently, Tesla’s autopilot is only recommended for less complicated scenarios such as highway driving.  Not included are the greater complexities encountered with city driving, pedestrian traffic, construction sites, police-directed situations, emergency maneuvers, and more.  A person need only imagine the potential for these to change the overall safety rating.

So, as your average idiot can now see – there is absolutely no way to obtain an accurate guesstimate to the question “How safe (more specifically ‘deadly’) does the ‘hands off’ operation of Tesla’s autopilot appear to be at this point in time?” when using only the data at hand.  If Tesla in fact does have the answers to this more fully considered question, they should be willing to discuss it with any reporter who inquires.  If they don’t, then we should all be pointing our fingers at the Federal D.O.T., asking “Just exactly what is going on with the testing of this ‘Self-Driving’ stuff?!”  (Again – read my earlier essay) ……… So there you go ……. Reporters, get to work now!

Nutty Questions Concerning “Self-Driving” Cars

6 Jul

And now a few questions surrounding the nuttiness of the current (non) discussions that are (not) widely occurring yet regarding this “self-driving” car stuff.

One of these (non) discussions is failing to occur when experts suggest that one day we won’t own our own cars, but will instead summon vehicles to come pick us up for our trip down the block for milk.  Would these empty cars not be racking up additional environmentally unfriendly miles every time they come to get us?  Or, if the plan is for them to pick up other passengers along the way (imagine the wait times), could we not just slap some rails underneath these bad boys (for safety’s sake) and create a “smart phone connected” rail system?  By definition, these customers aren’t looking to “grab the wheel” themselves anyway.

And what of the widely unchallenged notion that these cars will be using their communication powers to link up with other cars on the road, thus travelling in tight formation just inches apart?  I wasn’t previously aware that “tailgating” is bad because it is “hard to do”.  I always thought the concern was “reaction time”.  Not just mine, but also Isaac Newton’s (think “Apple Car”).

The list goes on and on  ………. Who do we give “the wave” to when overly cautious (driverless) cars let us into the roadway out of turn?  Who do we give the “finger” to when stuck behind these same cars?

Will driverless cars be able to get out of the way (by driving over the curb onto someone’s lawn, etc.) of a honking ambulance under all of the potentially infinite scenarios? (Correct answer being “no”).

When a driverless car stalls in the middle of a fast moving road (not due to a computer problem of course because computers don’t typically malfunction) it will surely turn on its flashers, but who will walk back 100 feet to wave off unsuspecting drivers zooming around the bend?  Will these cars tip the tow truck driver (or non-driver)?

When one of these empty cars runs someone down, it will surely dial ‘911’, but what will it tell the operator in relation to the severity of the situation?

What happens in the D.O.T’s record books when two driverless cars flatten each other?  Is this recorded as a “deadly accident”?

We’ve all heard of “cow tipping”, but what about the newest craze – covering the sensors of rich people’s driverless cars so that their owners cannot “summon” them.  Oh, that’s right, we will all be on camera 24 hours a day with these cars around.

…. Snow, sand, ice, “continue on” waves from pedestrians, deer and dogs given same priority (or not?) as humans, police-directed emergencies, plastic bags floating across the street, jumpy dog in the middle of the highway (I actually saw this once), flooded-out roads, missing or recently altered street signs, approaching cop car with lights flashing (C’est pour moi? says Mr. Peugeot) ………. on and on and on and on.

Of course, regardless of the degree of immunity granted up front, what will actually happen most often is that these cars will be out there slowing everyone down due to their (quite necessarily) overly cautious programming and – as now required by the insurance companies – strict adherence to speed limits.  I could have sworn that some road study scientists once discovered that if there exists a steady flow of highway traffic and just two drivers (side by side) slow down for just a few seconds, it creates a backward-moving ripple effect that slows down the trailing cars for miles, and lasts a very long time.  Perhaps automated cars can handle this situation better than humans?  It makes you wonder, will the D.O.T. now recommend installing the opposite of “HOV” lanes on the highways?  Perhaps we will now have “LOVE” lanes? (Low Occupancy Vehicle – Electric)

Jay Leno once said (paraphrased) that in the end, we are not going to have a bunch of driverless cars roaming the streets.  Instead, this technology will be incorporated as safety additions similar to ‘ABS’ braking and so on.  So, if Mr. Leno (an expert in both “comedy” and “cars”) is in fact correct – why the big push (by “safety experts”) to get our hands off the wheel?

Testing “Self-Driving” Cars – The Buck Stops Where?!

22 Jan

Hello D.O.T.,

I am very concerned regarding the U.S. Department of Transportation’s recently implied ownership of the “Self-Driving Car” testing process.

In truth, the testing of “self-driving” cars is another major software testing project (since cars do not, and never will, make decisions themselves).  This will necessitate the verification of acceptable vehicle behaviors under a huge number of real world scenarios.  All kinds of computer coding decisions will be made regarding the legal and safety “choices” exhibited by these cars. Perhaps these decisions will even be made (on occasion) by young urban computer whizzes who have not yet gotten around to obtaining their own driver’s licenses.  In addition, these scenarios, errors, and fixes will need to be documented, addressed and verified not just once, but for all of the competing proprietary systems utilized by the various car companies.  These separate “self-reported” real world “beta tests” will be occurring at the public’s risk.

Ideally, of course, this would all be combined and addressed by the D.O.T. as one overall beta test.  And by definition, this beta test would include the establishment of an appropriate project tracking system in which the public is encouraged to “report bugs”, in which issues are “logged” using a unique identifier, in which “accountability” or the “present ownership” of a particular bug is immediately identifiable at any given moment in time, and which would include the establishment of a clear decision making hierarchy regarding legalities, project priorities, and the postponing of less critical issues.

My concerns are as follows:

From my own experience – in which the D.O.T. has failed to assume ownership of any of the numerous road dangers I have documented over the past few years – it seems the D.O.T. does not currently have a public oriented, problem reporting system similar to the one described above.  Incidentally, there are certainly a number of “Best Practices” principles to be gleaned from my previous correspondences (more on that later).

On two separate occasions I was told by your employees that the US D.O.T. is prohibited from direct involvement in relation to state specific legislation.  I have also learned through experience that state legislators are not likely to admit that one of the safety “improvements” they previously voted for turned out to be dangerous.  On top of this, future dangers occurring as a result of interactions between “self-driving” cars and people will likely be a bit tricky to describe.  This is because many of society’s most logical thinkers (the aforementioned computer programmers) will have already taken into account the handling of many of the more commonly encountered driving scenarios.  These new “state specific dangers” will require much more involved, mind numbing explanations leading to even less likelihood that politicians will expend their political capital on such “I was wrong” campaigns.  And given the “recent trend” nature of the dangers I myself have been warning about, I am concerned that the D.O.T. does not understand the scope, magnitude, or irreversibility of the problems created should it let the genie out of the bottle prematurely!

One of the hardest-hitting tools the D.O.T. will be wielding, as overseers of this project, is the (largely non-binding) “Best Practices” report it will assemble in relation to self-driving cars.  I am not knowledgeable as to the inner workings of the D.O.T. or what it has in mind.  I will say, however, that nearly every one of my previous “dangerous scenarios” involved a tricky real-life situation that should prove even trickier for driverless cars – I mean “computer programmers”.  When the D.O.T. discusses plans to issue this report, I can only wonder nervously – “Based on what?” “Coming from whom?”  And prior to issuing these recommendations – should there not at least be a few tentative demands made?  Perhaps in relation to an end goal of “complete compatibility” between the now competing and proprietary computerized systems under development by the various car companies?  After all, aren’t these cars going to be “talking” to each other?  Wouldn’t it be nice if in the future, every time we need to correct their grammar, or slap them on the wrist, these adjustments could be made in just one location without the need to duplicate, triplicate, or quadruplicate our efforts?  Will the current climate surrounding these unseen computer codes (or “competitive advantages”) be addressed such that, if one particular company designs code that is far superior at handling a particular road danger, this life-saving knowledge will be immediately disseminated to all involved before my dog “Buffy” is run over by a competitor’s car?  Wouldn’t it be nice if the D.O.T. – before handing over the keys – laid down some ground rules for these young cars who have just received their learner’s permits?

Some things in life – like “blindingly bright” modern headlight technology (also not yet addressed nationally) – can be seen from miles away.  In the case of future accidents caused by these cars, I mean “computer programs”, there will be one thing we can say for sure – “victims” and “owners” alike will be looking to put the blame on the auto manufacturers.  And since we already know “there ain’t a snowball’s chance in global warming” that any of these cars are going to end up on the roads without some degree of formal or informal immunity granted upfront to these auto companies, the question then becomes – “Who will Kenneth Feinberg be working for in these quandaries?”  …..  GM?  Takata?  …..  Hakuna  ….. Matata?  Comprende?  …..(DeNada).  It seems a bizarre possibility that those working within the federal D.O.T. – who, remember, are not the actual programmers or project managers of this computer code, who are not directly performing the testing, who do not have a complete and comprehensive setup for overseeing this testing (my deduction), and who tell me that they are prohibited by law from getting involved with state legislative decisions – will have it in their self-interests (personally and organizationally) to maintain a certain distance during this testing process.  They may actually be headed towards a bizarre alliance with the D.O.T.’s usual arch nemesis – “Plausible Deniability”.  Wow!  What a web!  And not a simple “Charlotte’s Web”, but a “Jack Webb”! (Extra ‘b’ in there)

I seriously wonder if the buck is going to stop anywhere on this project.  If the buck ends up stopping in the middle of the road and is then hit by one of these “self-driving” cars, I hope the D.O.T. will at least properly record the “Cause of Accident” under the newly added description – “Computer Crash”.