The Federal Automated Vehicles Policy – A Software Tester’s Concerns

23 Nov

Like the rollout of the healthcare (or “Obamacare”) website, the adoption and testing of “autonomous vehicles” (I’m tempted to say “Obama Car”) is overwhelmingly a software testing project. The analogy between the two, and the shared lack of solid “buck stops here” ownership, is very concerning as I review the “Federal Automated Vehicles Policy”.

In a nutshell, the Federal DOT and NHTSA are (kind of) claiming overall ownership of this project while also admitting they can only make “Best Practice” suggestions.  In fact, there appears to be even less direct control over this project than existed for the healthcare website.  The size and complexities of this new computer code will no doubt dwarf that created for the healthcare website, and the consequences here are much graver.

The existence of different proprietary systems, and questions as to how (or “if”) they will talk to each other, is one analogous issue.  Another is that this new “Highly Automated Vehicle” project (also) involves “retrofitting” software – in this case, onto a massive and diverse infrastructure that was not designed with these vehicles in mind.  Add to this the fact that our roads, bridges, signs, and laws are closely guarded turf under the control of 50 different state jurisdictions.  In fact, the NHTSA and Federal DOT have emphatically cited this jurisdictional issue to me when explaining why they could not address, or even comment on, a number of road safety dangers I brought to their attention.  So has something suddenly and magically changed such that these discussions – “prohibited by law” – are now acceptable?  Those interested should read my essays on this subject.  (In the case of New Jersey’s “Stop and Stay Stopped” crosswalk law, subsequent rises in pedestrian deaths – contrary to concurrent drops in all other types of auto deaths during some of these same years – seemed to confirm my fears, but that is a whole other story!)

People should keep in mind here that the REAL safety testing of automated vehicles will only occur after they are set free on our public streets. This is what software testers call a “beta test”.   There is no similarity whatsoever between the NHTSA’s oversight of straightforward, non-varying “crash” tests and the new responsibilities it has been assigned.  For auto manufacturers, let alone the NHTSA, it will be logistically, perhaps even “cognitively”, impossible to come even close to the level of testing that would be required (ahead of time) in order to verify safety under the endless number of real-world scenarios these cars will encounter.  Not only would the costs be prohibitive, but the use of live, walking and talking test subjects (simulating pedestrians, for example) would be unethical at full-speed operation.  So get ready, people – we are all the “stakeholders” here!

From what I can see, the NHTSA is not yet requiring the establishment of an easily accessible, always-available system by which the general population can report their dangerous or questionable encounters with these vehicles.  I thought the whole purpose of a “beta test” was to provide a massive increase in “tester” manpower.  Failing to provide this avenue for feedback defeats the whole purpose of a “beta test”!

And, concerning the preliminary testing that will take place, a reliance on the “self-reporting” of results by manufacturers should keep all of our eyebrows raised. Are we really to believe that every time a Tesla driver needs to retake control of his car from the automated system (where a potentially fatal crash would otherwise have occurred), this is being tabulated as a “likely fatal incident” in terms of its theoretical “fully autonomous” operation?  Of course, Tesla points out here that its cars are not yet intended to be “fully autonomous”, but the illustration still applies.

I fully appreciate the awkward position the Federal DOT and NHTSA have been forced into by the President’s push to promote this automation. The NHTSA – traditionally the watchdog of physical and design defects after the fact – is now (kind of) claiming ownership of many aspects of the upfront planning phase of this massively impactful, hugely complicated, and loosely defined project.  The NHTSA is effectively establishing for itself a future “conflict of interest”.  Secretary Foxx even said, “What we’re doing here is building safety in at the ground floor”, when announcing the public release of the FAV Policy.  It should be noted that the NHTSA is (kind of) claiming this ownership at the same time it has not been given the resources, expertise, or even the mandate to take proper control.  Am I the only person to see an analogy here between this current situation and the inadequate resources in the hands of FEMA prior to its (criticized) responses to a number of subsequent disasters?  When bad things happen – and they will – automakers will be able to point to the NHTSA’s (sort of) claims of ownership over the early phases of this project.  At the same time, manufacturers will also likely claim immunity due to the lack of specificity established upfront.  It is easy to imagine instances in which the NHTSA might be tempted to cover something up in order to avoid receiving flak from the public.  The NHTSA, of all organizations, should have a strong understanding of the environmental conditions leading to poor quality (and “recalls” involving negligence).  It is foolish to assume its own employees are somehow immune to these dynamics of self-preservation.

I have also noticed a failure to use unambiguous language, as would be required in the design stage of any software project. This is visible in the language used by the agency as it promotes “this technology”.  There is in fact no single or easily encapsulated “technology” here.  There are numerous physical technologies (that will no doubt change over time) and an even larger ongoing commitment to producing tons of “new and improved” computer code.  If the claim is that “computer code” itself is a “new” thing – this is news (or “olds”) to me.  Encompassing everything into one verbally convenient phrase such as “this technology” serves no real purpose.  Computer coders cannot code, fix, or be held accountable for “this technology”.

Another ambiguous reference occurs on Page 10 of the FAV Policy. The text toggles “primary responsibility” between the “human operator” and the “automated system”.  “Automated systems” are not cognizant beings, don’t “bleed”, and do not pay with their lives when things go wrong.  This may sound academic, but confounding these concepts – even when primarily an issue of semantics – creates further “wiggle room” for car manufacturers (or computer programmers) when things go wrong in the future.  This becomes instantly obvious in a legal sense.  I am not a lawyer, but I am pretty sure the courts would actually hold the driver partially negligent – despite the NHTSA’s claims that the “automated system” was responsible – should an accident or death occur where the driver had prior knowledge that the automated system was not performing up to expectations.  I am curious as to just how literally we are to take these descriptions.

We are already seeing auto manufacturers running wild with their proprietary claims surrounding the promise of their own future autonomous vehicles. No doubt much of this is due to their fear of seeming technologically inferior or “behind the curve”.  They apparently have no fear that the NHTSA will call them out when it comes to these statements.  Elon Musk recently claimed that “half a million lives” would have been saved worldwide had everyone been driving Teslas with the activated “autopilot” feature.  He then told people to “Just do the math!”   Well, I not only did the math, I also applied some basic scientific considerations such as “sample size”.  With this it becomes instantly apparent that his claim (at this point in time) is ludicrous!  Again, see “‘Accountability’ and ‘Countability’ – Misdirection in the ‘Autopilot’ Safety Debate” below for more on this.

There also seems to be a very important (likely high-volume and deadly) mistake in the logic applied by the NHTSA when discussing the automation levels of these cars. There is no reason for the NHTSA, or anyone else for that matter, to assume that a human driver – even when fully attentive – will be able to react in time to every mistake made by an automated system!  One need only imagine oneself in the following situation.  If a driver is concentrating intensely on the road ahead and a passenger (out of nowhere) suddenly jerks the steering wheel to the side for no reason, it will spark all sorts of reflexes and reactions within the driver’s mind as he or she attempts to make sense of what just happened.  The brain’s response might be, “Don’t adjust the wheel, because there must have been a good reason why my passenger did this”.  Or it could be the exact opposite reaction, thus creating an overcompensation in steering.  These episodes will always occur – by definition – as complete surprises.  There is absolutely no way for drivers to safely practice for, or anticipate, these realities ahead of time!  It is preposterous for the NHTSA to be validating this “assumption of ultimate responsibility” (over mistakes made by the automated system) by applying it to this project!  This “clause” is being used as a “catch-all” by those involved as a way of avoiding a more complex and realistic discussion surrounding true causal factors.

My overall recommendation to the Federal DOT and NHTSA is that – considering their very limited degree of true ownership over this project – they absolutely must wield every possible element of control they have at their disposal during these early stages. At the very least, the following actions should be taken:

#1) The NHTSA should mandate and monitor the use of a single (overall) “Final Stage Test Plan” (created and updated within a single software application) that is shared, viewed, accessed, and updated by all of the car manufacturers. This single (overall) “Final Stage Test Plan” will list all of the real-world scenarios (each one representing a single “test case”) that the cars of each particular “automation level” will need to navigate safely.  This particular stage of testing – by definition here – must be conducted using a completely assembled car at speed, with all systems activated (individual component testing to be handled separately).  These scenarios (“test cases”) should be reviewed ahead of time for completeness, shared among all manufacturers, and of course then tested by each manufacturer under their own proprietary systems.  As new “tricky and dangerous” scenarios are discovered, these new “test cases” must be added to the original test plan (instantly viewable by all – as before).  Each manufacturer must assign ownership of the testing of each individual test case to a single tester who will be responsible for literally “signing off” (as in actual “signature”!) when a vehicle passes a particular test.  Keep in mind that this “Final Stage Test Plan” only describes the real-world scenarios these cars must navigate safely (applicable to all manufacturers) and does not require the recording or revealing of any proprietary information. The potentially proprietary discussions related to the handling of problems (or “bugs”) will be controlled separately under each company’s individual “project tracking” system, as described below.

#2) Though not established by the NHTSA, each manufacturer should of course have their own “project tracking” system to track problems and their resolutions as they occur.  This would of course follow standard software practices (assigning a unique identifier to each issue; stating whose hands a particular issue is in at any given moment in time; cross referencing the applicable test case if relevant; and so on).

#3) As these vehicles “go live” on our roadways (the “beta test”), an easy-to-use method must be established by which the general population (bigtime “stakeholders”) can report any and all dangerous encounters.  Of course this will lead to duplicate entries as some of the same problems repeat themselves.  Therefore, on the same webpage, the NHTSA needs to continually update a “known problems” section enabling citizens to quickly log a “this happened to me also” entry.  This will not only save everyone time and trouble, but it will also add important emphasis to particular dangers.

#4) The NHTSA must eliminate completely the notion that drivers can be held ultimately responsible (under any “level” of automation) for mistakes made by an automated system.
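To make recommendations #1 and #2 concrete, here is a minimal sketch of what a shared “Final Stage Test Plan” record might look like in code. Everything here – field names, case identifiers, the manufacturer and tester names – is a hypothetical illustration of the bookkeeping described above, not anything the FAV Policy actually specifies:

```python
# Illustrative sketch only: a minimal shared "Final Stage Test Plan"
# record of the kind proposed in #1. Field names, IDs, and statuses
# are hypothetical; nothing in the FAV Policy specifies this structure.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    case_id: str                 # unique ID, shared across all manufacturers
    scenario: str                # the real-world scenario, in plain language
    automation_level: int        # the "automation level" the scenario applies to
    results: dict = field(default_factory=dict)  # manufacturer -> signing tester

    def sign_off(self, manufacturer: str, tester: str) -> None:
        """A single named tester signs off that this manufacturer's
        fully assembled vehicle passed the scenario at speed."""
        self.results[manufacturer] = tester

# The shared plan: every entry is visible to every manufacturer.
plan = [
    TestCase("FS-0001", "Pedestrian steps out between parked cars", 4),
    TestCase("FS-0002", "Police officer waves traffic through a red light", 4),
]
plan[0].sign_off("AcmeMotors", "J. Tester")

# Newly discovered "tricky and dangerous" scenarios get appended,
# instantly viewable by all - as described above.
plan.append(TestCase("FS-0003", "Missing stop sign at rural intersection", 4))
```

The point of the sketch is the separation argued for above: the plan itself holds only scenarios and sign-offs (nothing proprietary), while each company’s bug-handling details would live in its own separate tracking system.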

A few additional observations on the “FEDERAL AUTOMATED VEHICLES POLICY”

Page 9: SAE Levels “2” and “3” are poorly defined

Page 38: The DOT anticipates increased responsibilities similar to “licensing” of the non-human driver in the future. This is surprising as my local (state) inspection station doesn’t even have the resources to check my tire “tread wear” anymore.  Perhaps I am missing something here?

Page 44: The FAV Policy glosses over the issue of liability and insurance coverage as related to the complexities and differences that will occur between the states. This is no small issue.  Let’s not forget that in the case of accidents and deaths, all parties will have it in their interest to go after the same entity – that being the “auto manufacturers”.  Therefore, it is clear that the first order of business for the manufacturers will be the seeking of legal immunity.

Page 59: As already noted, these automated cars – required to make decisions under an endless array of real world scenarios – will really only be tested once they are released in the real world.  This fact greatly hinders the concept of granting “exemptions” based only on limited prior (non-real world) testing.  Something to think about!

Page 72: Referring again to the complexities involved in testing the endless scenarios that would need to be handled by an automated car – the NHTSA is kidding itself if it thinks it will have the manpower and resources to adequately test – by itself – even one such vehicle before release to the real world.


“Accountability” and “Countability” – Misdirection in the “Autopilot” Safety Debate

7 Jul

To be clear, Tesla and Elon Musk were referring to “autopilot assisted”, and not “fully autonomous” driving when claiming that only one Tesla fatality has occurred in 130 million autopilot assisted miles driven. The same is true of Musk’s later (quite bizarre and scientifically unproven) statement that half a million lives would have been saved around the world had everyone been driving said Teslas. For this he presumably considered the 1 million worldwide auto fatalities per year, occurring at a rate of one for every 60 million miles driven.

Per Musk’s insistence, I thought I had better do the math for myself. Right off the bat, there is a major issue with his claim. Musk had at his disposal a Tesla sample size (everything to date) capable of producing just one death, and this was then compared to a yearly sample size large enough to have resulted in one million deaths (in the case of the worldwide figure). After allowing for Tesla’s relatively higher number of miles per death (130 million divided by 60 million), and then dividing the total worldwide deaths by this amount (1 million/2.16), you end up with the fact that the worldwide totals arose from a sample size roughly 460,000 times larger than Tesla’s! You can think of it this way – if Tesla had had just one more fatal accident in which two passengers were killed, it would be down to one fatality per 43.3 million miles driven. Would Musk then be proclaiming that if everyone around the world had been driving autopilot-assisted Teslas there would have been an additional 384,000 people killed?
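For anyone who wants to check the arithmetic above, here it is step by step, using the same round figures quoted (one Tesla fatality in 130 million autopilot-assisted miles, one worldwide fatality per 60 million miles, one million worldwide deaths per year):

```python
# Checking the sample-size argument with the article's round figures.
tesla_miles_per_death = 130e6   # 1 fatality in 130 million autopilot miles
world_miles_per_death = 60e6    # ~1 fatality per 60 million miles worldwide
world_deaths_per_year = 1e6     # ~1 million worldwide auto fatalities/year

# Tesla's miles-per-death advantage over the worldwide rate (~2.17)
advantage = tesla_miles_per_death / world_miles_per_death

# How many times larger the worldwide "sample" of deaths is than Tesla's:
# ~461,538 (the "roughly 460,000" above)
sample_ratio = world_deaths_per_year / advantage

# If one more crash killed two passengers (3 deaths total in 130M miles):
# ~43.3 million miles per fatality
miles_per_death_3 = tesla_miles_per_death / 3

# Worldwide miles driven per year implied by the quoted rates: 6e13
world_miles_per_year = world_deaths_per_year * world_miles_per_death

# Deaths per year if the whole world drove at that 3-death Tesla rate,
# minus the actual 1 million: ~384,615 (the "384,000" above)
extra_deaths = world_miles_per_year / miles_per_death_3 - world_deaths_per_year

print(round(sample_ratio), round(miles_per_death_3), round(extra_deaths))
```

Two more deaths in the same mileage swings the worldwide projection by nearly 400,000 lives in the other direction, which is exactly what a sample of one fatality cannot support.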

And, what exactly were the “controls” in place for this comparison to be valid? Are Teslas equally distributed among the countries included in the worldwide figure? Is Tesla’s autopilot currently capable of handling all of the locally specific traffic laws and infrastructure as they currently exist around the world? In the U.S., we have 50 separate states doing their own thing when it comes to passing legislation. What about these further distinctions around the world?

STOP !!! ……. (Note to Self) ……. Enough already with the careful and literal translation of Tesla and Musk’s statements concerning the safety level of their system! I need to now continue writing this under the assumption that what most people actually heard was “self-driving” car, not “autopilot assisted”. Why do I feel this way? Because I am a pretty smart guy with a background in quality assurance and a demonstrated interest in road safety – and even I made this mistake before I carefully reread the claims.

Most troubling is the ease with which these claims were made, and the lack of any proactive clarification on the part of Tesla. This is likely symptomatic of a greater problem, as put forth earlier in my essay “Testing ‘Self-Driving’ Cars – The Buck Stops Where?!”, also included below.

To a most useful end, I would like now to explain how far off, and scientifically invalid these safety claims would be if a person was to (incorrectly) apply them to the idea of “hands off”, “fully autonomous” operation.

To misapply the data would be to neglect all of those instances in which Tesla drivers quickly corrected a potentially fatal mistake made by autopilot. Some examples can be found on YouTube, and I suspect there have been more than reported publicly. Examples include the car suddenly attempting to exit the highway at the last second, or the car continuing to follow the car in front instead of staying in its lane.  If just a few of these (otherwise fatal) events have occurred (to date), it would represent a massively lower “fully autonomous” safety level. What is so disturbing is that Tesla is not openly disclosing this type of data.  Not collecting it would be even worse.  Anyone out there know the answer?  If this data has been neglected, it would fit expectations, considering that the Federal D.O.T. announced it would be relying on the self-reporting of auto manufacturers when verifying “driverless-car” safety. (Again – see my earlier essay.)

In fact, even if the above (calculated) considerations were to be added in at this point, the adjusted safety level would still be massively underestimated.  Currently, Tesla’s autopilot is only recommended for less complicated scenarios such as highway driving.  Not included are the greater complexities encountered with city driving, pedestrian traffic, construction sites, police-directed situations, emergency maneuvers, and more.  A person need only imagine the potential for these to change the overall safety rating.
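To see just how sensitive the quoted figure is to uncounted driver takeovers, here is a small what-if calculation. The takeover counts below are purely hypothetical illustrations (no such figures have been disclosed, which is the complaint above):

```python
# What-if: how "1 fatality per 130 million miles" changes if driver
# takeovers that prevented a fatal crash were counted as fatalities.
# The takeover counts below are HYPOTHETICAL illustrations, not data.

AUTOPILOT_MILES = 130e6
ACTUAL_FATALITIES = 1

for would_be_fatal_takeovers in (0, 5, 25, 100):
    implied_fatalities = ACTUAL_FATALITIES + would_be_fatal_takeovers
    miles_per_fatality = AUTOPILOT_MILES / implied_fatalities
    print(f"{would_be_fatal_takeovers:>3} takeovers -> "
          f"one fatality per {miles_per_fatality / 1e6:.1f} million miles")
```

Even a handful of uncounted near-fatal takeovers collapses the implied “fully autonomous” rate from 130 million miles per fatality to a figure well below the worldwide 60-million-mile baseline.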

So, as your average idiot can now see – there is absolutely no way to obtain an accurate guesstimate to the question “How safe (more specifically, ‘deadly’) does the ‘hands off’ operation of Tesla’s autopilot appear to be at this point in time?” when using only the data at hand.  If Tesla in fact does have the answers to this more fully considered question, it should be willing to discuss them with any reporter who inquires.  If it doesn’t, then we should all be pointing our fingers at the Federal D.O.T., asking “Just exactly what is going on with the testing of this ‘Self-Driving’ stuff?!”  (Again – read my earlier essay.) ……… So there you go ……. Reporters, get to work now!

Nutty Questions Concerning “Self-Driving” Cars

6 Jul

And now a few questions surrounding the nuttiness of the current (non) discussions that are (not) widely occurring yet regarding this “self-driving” car stuff.

One of these (non) discussions is failing to occur when experts suggest that one day we won’t own our own cars, but will instead summon vehicles to come pick us up for our trip down the block for milk.  Would these empty cars not be racking up additional environmentally unfriendly miles every time they come to get us?  Or, if the plan is for them to pick up other passengers along the way (imagine the wait times), could we not just slap some rails underneath these bad boys (for safety’s sake) and create a “smart phone connected” rail system?  By definition, these customers aren’t looking to “grab the wheel” themselves anyway.

And what of the widely unchallenged notion that these cars will be using their communication powers to link up with other cars on the road, thus travelling in tight formation just inches apart?  I wasn’t previously aware that “tailgating” is bad because it is “hard to do”.  I always thought the concern was “reaction time”.  Not just mine, but also Isaac Newton’s (think “Apple Car”).

The list goes on and on  ………. Who do we give “the wave” to when overly cautious (driverless) cars let us into the roadway out of turn?  Who do we give the “finger” to when stuck behind these same cars?

Will driverless cars be able to get out of the way (by driving over the curb onto someone’s lawn, etc.) of a honking ambulance under all of the potentially infinite scenarios? (Correct answer being “no”).

When a driverless car stalls in the middle of a fast moving road (not due to a computer problem of course because computers don’t typically malfunction) it will surely turn on its flashers, but who will walk back 100 feet to wave off unsuspecting drivers zooming around the bend?  Will these cars tip the tow truck driver (or non-driver)?

When one of these empty cars runs someone down, it will surely dial ‘911’, but what will it tell the operator in relation to the severity of the situation?

What happens in the D.O.T’s record books when two driverless cars flatten each other?  Is this recorded as a “deadly accident”?

We’ve all heard of “cow tipping”, but what about the newest craze – covering the sensors of rich people’s driverless cars so that their owners cannot “summon” them.  Oh, that’s right, we will all be on camera 24 hours a day with these cars around.

…. Snow, sand, ice, “continue on” waves from pedestrians, deer and dogs given same priority (or not?) as humans, police-directed emergencies, plastic bags floating across the street, a jumpy dog in the middle of the highway (I actually saw this once), flooded-out roads, missing or recently altered street signs, approaching cop car with lights flashing (“C’est pour moi?” says Mr. Peugeot) ………. on and on and on and on.

Of course, regardless of the degree of immunity granted up front, what will actually happen most often is that these cars will be out there slowing everyone down due to their (quite necessary) overly cautious programming and – as now required by the insurance companies – strict adherence to speed limits.  I could have sworn that some road-study scientists once discovered that if there exists a steady flow of highway traffic and just two drivers (side by side) slow down for just a few seconds, it creates a backward-moving ripple effect that slows down the trailing cars for miles and lasts a very long time.  Perhaps automated cars can handle this situation better than humans?  It makes you wonder – will the D.O.T. now recommend installing the opposite of “HOV” lanes on the highways?  Perhaps we will now have “LOVE” lanes? (Low Occupancy Vehicle – Electric)

Jay Leno once said (paraphrased) that in the end, we are not going to have a bunch of driverless cars roaming the streets.  Instead, this technology will be incorporated as safety additions similar to ‘ABS’ braking and so on.  So, if Mr. Leno (an expert in both “comedy” and “cars”) is in fact correct – why the big push (by “safety experts”) to get our hands off the wheel?

Testing “Self-Driving” Cars – The Buck Stops Where?!

22 Jan

Hello D.O.T.,

I am very concerned regarding the U.S. Department of Transportation’s recently implied ownership of the “Self-Driving Car” testing process.

In truth, the testing of “self-driving” cars is another major software testing project (since cars do not, and never will, make decisions themselves).  This will necessitate the verification of acceptable vehicle behaviors under a huge number of real world scenarios.  All kinds of computer coding decisions will be made regarding the legal and safety “choices” exhibited by these cars. Perhaps these decisions will even be made (on occasion) by young urban computer whizzes who have not yet gotten around to obtaining their own driver’s licenses.  In addition, these scenarios, errors, and fixes will need to be documented, addressed and verified not just once, but for all of the competing proprietary systems utilized by the various car companies.  These separate “self-reported” real world “beta tests” will be occurring at the public’s risk.

Ideally, of course, this would all be combined and addressed by the D.O.T. as one overall beta test.  And by definition, this beta test would include the establishment of an appropriate project tracking system in which the public is encouraged to “report bugs”, in which issues are “logged” using a unique identifier, in which “accountability” or the “present ownership” of a particular bug is immediately identifiable at any given moment in time, and which would include the establishment of a clear decision making hierarchy regarding legalities, project priorities, and the postponing of less critical issues.
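The tracking system described above is standard software-project bookkeeping, and it can be sketched in a few lines. All of the identifiers, field names, and statuses below are hypothetical illustrations of the idea, not any system the D.O.T. actually operates:

```python
# Minimal sketch of the public "beta test" bug tracker described above.
# Identifiers, statuses, and field names are hypothetical illustrations.
import itertools

class BugTracker:
    def __init__(self):
        self._ids = itertools.count(1)
        self.issues = {}

    def report(self, description, test_case=None):
        """Log a public report; returns the issue's unique identifier."""
        issue_id = f"BUG-{next(self._ids):05d}"
        self.issues[issue_id] = {
            "description": description,
            "owner": "triage",       # whose hands it is in at this moment
            "test_case": test_case,  # cross-reference to a test case, if any
            "me_too": 0,             # "this happened to me also" count
        }
        return issue_id

    def assign(self, issue_id, owner):
        self.issues[issue_id]["owner"] = owner

    def me_too(self, issue_id):
        self.issues[issue_id]["me_too"] += 1

tracker = BugTracker()
bug = tracker.report("Car froze at police-directed intersection", "FS-0002")
tracker.assign(bug, "vendor-triage")   # accountability is always visible
tracker.me_too(bug)                    # duplicates add emphasis, not noise
```

Nothing here is exotic; the point is precisely that the machinery for public bug reporting, unique identifiers, and visible ownership has existed for decades.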

My concerns are as follows:

From my own experience – in which the D.O.T. has failed to assume ownership of any of the numerous road dangers I have documented over the past few years – it seems the D.O.T. does not currently have a public oriented, problem reporting system similar to the one described above.  Incidentally, there are certainly a number of “Best Practices” principles to be gleaned from my previous correspondences (more on that later).

On two separate occasions I was told by your employees that the US D.O.T. is prohibited from direct involvement in relation to state specific legislation.  I have also learned through experience that state legislators are not likely to admit that one of the safety “improvements” they previously voted for turned out to be dangerous.  On top of this, future dangers occurring as a result of interactions between “self-driving” cars and people will likely be a bit tricky to describe.  This is because many of society’s most logical thinkers (the aforementioned computer programmers) will have already taken into account the handling of many of the more commonly encountered driving scenarios.  These new “state specific dangers” will require much more involved, mind numbing explanations leading to even less likelihood that politicians will expend their political capital on such “I was wrong” campaigns.  And given the “recent trend” nature of the dangers I myself have been warning about, I am concerned that the D.O.T. does not understand the scope, magnitude, or irreversibility of the problems created should it let the genie out of the bottle prematurely!

One of the hardest-hitting tools the D.O.T. will be wielding, as overseers of this project, is the (largely non-binding) “Best Practices” report it will assemble in relation to self-driving cars.  I am not knowledgeable as to the inner workings of the D.O.T. or what it has in mind.  I will say, however, that nearly every one of my previous “dangerous scenarios” involved a tricky real life situation that should prove even trickier for driverless cars – I mean “computer programmers”.  When the D.O.T. discusses plans to issue this report, I can only wonder nervously – “Based on what?” “Coming from whom?”  And prior to issuing these recommendations – should there not at least be a few tentative demands made?  Perhaps in relation to an end goal of “complete compatibility” between the now competing and proprietary computerized systems under development by the various car companies?  After all, aren’t these cars going to be “talking” to each other?  Wouldn’t it be nice if in the future, every time we need to correct their grammar, or slap them on the wrist, these adjustments could be made in just one location without the need to duplicate, triplicate, or quadruplicate our efforts?  Will the current climate surrounding these unseen computer codes (or “competitive advantages”) be addressed such that, if one particular company designs code that is far superior at handling a particular road danger, this life saving knowledge will be immediately disseminated to all involved before my dog “Buffy” is run over by a competitor’s car?  Wouldn’t it be nice if the D.O.T. – before handing over the keys – laid down some ground rules for these young cars who have just received their learner’s permits?

Some things in life – like “blindingly bright” modern headlight technology (also not yet addressed nationally) – can be seen from miles away.  In the case of future accidents caused by these cars, I mean “computer programs”, there will be one thing we can say for sure – “victims” and “owners” alike will be looking to put the blame on the auto manufacturers.  And since we already know “there ain’t a snowball’s chance in global warming” that any of these cars are going to end up on the roads without some degree of formal or informal immunity granted upfront to these auto companies, the question then becomes – “Who will Kenneth Feinberg be working for in these quandaries?”  …..  GM?  Takata?  …..  Hakuna  ….. Matata?  Comprende?  …..(DeNada).  It seems a bizarre possibility that those working within the federal D.O.T. – who, remember, are not the actual programmers or project managers of this computer code, who are not directly performing the testing, who do not have a complete and comprehensive setup for overseeing this testing (my deduction), and who tell me that they are prohibited by law from getting involved with state legislative decisions – will have it in their self-interests (personally and organizationally) to maintain a certain distance during this testing process.  They may actually be headed towards a bizarre alliance with the D.O.T.’s usual arch nemesis – “Plausible Deniability”.  Wow!  What a web!  And not a simple “Charlotte’s Web”, but a “Jack Webb”! (Extra ‘b’ in there)

I seriously wonder if the buck is going to stop anywhere on this project.  If the buck ends up stopping in the middle of the road and is then hit by one of these “self-driving” cars, I hope the D.O.T. will at least properly record the “Cause of Accident” under the newly added description – “Computer Crash”.

The Inquiry and Software Testing’s ‘Dirty Little Secrets’

31 Oct

As a former software tester, I see an inquiry into the healthcare website’s failures as the perfect forum by which the public could be brought up to speed regarding their assumptions that large institutions properly test their customer data and calculated results.  Of course, in reality, these complexities will be lost in the politics (and real concerns) surrounding the ‘Affordable Care Act’ itself, but I can dream, can’t I?

I’ve worked for a major life insurance company, a ‘dot com’, a well known payroll processing company, and one of the original ‘outsourcing’ companies handling ‘pension benefit’ data for client companies.  Though not claiming any specific knowledge surrounding the design, construction, or performance of the healthcare site, I suspect that the dynamics involved were not unlike my own experiences.

I’ve seen major differences in project quality, and accountability, depending on whether software was programmed and controlled entirely ‘in house’ or promised as an outsourced service.  I have also seen differences regarding expectations of quality when comparing the ‘old days’ (simpler in that ‘dumb terminals’ were used by clerks to communicate with company run mainframes) and the modern environment (in which customers interact directly through an endless variety of devices, operating systems, browsers, and internet connection technologies to databases that may be under the jurisdiction of more than one organization).

One of the ‘dirty little secrets’ in software creation is that – for certain segments of a project – ‘automated testing’ was a much more common mainstay in the pre-internet mainframe days.  Ideally, these automated testing tools (themselves ‘software’) are used to quickly and efficiently rerun large numbers of test cases, thus avoiding the need for a tester to sit down at a computer and perform these tasks manually.  In fact, following even a small change to programming code, no claim of quality can be made until all the old (unrelated) processing has been checked for inadvertent changes.  Unfortunately, the likelihood that these ‘regression’ tests have been automated is much lower these days due to the additional complexities mentioned above.
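To make the idea concrete, here is a bare-bones sketch of a regression suite.  The calculation and the expected values are entirely hypothetical, but the pattern – a fixed battery of input/output pairs rerun after every code change – is the point:

```python
# Hypothetical function under test: a simple premium calculation.
def monthly_premium(age, smoker):
    base = 100.0
    if age >= 50:
        base += 40.0
    if smoker:
        base *= 1.5
    return round(base, 2)

# Regression suite: known-good input/output pairs, rerun after EVERY
# change -- even changes that seem unrelated to this calculation.
REGRESSION_CASES = [
    ((30, False), 100.00),
    ((30, True), 150.00),
    ((55, False), 140.00),
    ((55, True), 210.00),
]

def run_regression():
    """Rerun every stored case; return the ones that no longer match."""
    failures = []
    for (age, smoker), expected in REGRESSION_CASES:
        actual = monthly_premium(age, smoker)
        if actual != expected:
            failures.append(((age, smoker), expected, actual))
    return failures

print(run_regression())  # an empty list means no regressions detected
```

In the mainframe days a suite like this might hold thousands of cases and run overnight; the value is not any single case but the ability to rerun them all, cheaply, after each change.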

In the old environment, my automated tests were quite easy to maintain.  I simply identified the row and column where data was to be entered, listed the data, and followed this with a symbol representing the pushing of the ‘Enter’ key.  Currently, however, newer automation products need to take into account all sorts of additional environmental factors.  For example – Does the user have other applications open on the ‘desktop’ in addition to the one being tested?  What unexpected (operating system) messages might pop up, and how will the testing tool deal with them?  And so on.  Any number of these surprise encounters – not under the control of the actual software being tested – could serve to derail the running of the automated test.  When they do, it is often unclear at first whether the test stopped running because of an error in the software being tested or in the testing tool itself!  To make things worse, in order to differentiate their products, companies selling these tools fall into the temptation of ‘feature creep’.  Often, the time spent learning new features defeats the tool’s original purpose as a time saver.  Unfortunately, purchasing decisions here are made by managers who may not understand the nuts and bolts of the testing environment and/or what makes for efficient testing.  But wait, there’s more!
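Those old mainframe-era scripts were essentially just data – a row, a column, the text to type, and an ‘Enter’.  A rough sketch of that style follows; the screen model and field positions are invented purely for illustration:

```python
# A toy 24x80 "dumb terminal" screen, and a replayable script of the
# form (row, column, text), terminated by an ENTER marker.
# The whole test is data -- trivial to maintain compared to GUI tools.
ENTER = "<ENTER>"

script = [
    (3, 10, "SMITH"),     # last name field
    (3, 40, "JOHN"),      # first name field
    (5, 10, "19620401"),  # date of birth field
    ENTER,                # submit the screen
]

def replay(steps):
    """Apply each keystroke step to a blank 24x80 screen buffer."""
    screen = [[" "] * 80 for _ in range(24)]
    for step in steps:
        if step == ENTER:
            break  # a real harness would submit and read the response here
        row, col, text = step
        for i, ch in enumerate(text):
            screen[row][col + i] = ch
    return screen

filled = replay(script)
print("".join(filled[3]).rstrip())
```

Contrast this with a modern GUI automation script, which must also cope with window focus, pop-up dialogs, and rendering timing before it can type a single character.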

These automated tests need to run on the actual computing devices they intend to test.  This means that in order to simultaneously test 50 different hardware/software configurations – PC, MAC, different operating systems, different versions of operating systems, browsers etc. – you need to actually have 50 different physical machines available and running at one time!  This, most assuredly, never happens in the actual development environment.  Acknowledgement of the limited nature of today’s software testing is visible as software companies make available free ‘beta’ versions of their latest software in the hopes that users will function as testers by reporting ‘bugs’ ahead of the main ‘release’.  Of course, the availability of this testing strategy should depend on the nature of the software being tested.  You wouldn’t want to hear your bank say ‘Try our latest online banking software and let us know if your checking account balances are being displayed correctly’!
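The scale of the configuration problem is easy to quantify.  Multiplying out even a few illustrative environment factors – the specific values below are my own invention – shows how quickly the number of required physical setups grows:

```python
from itertools import product

# Illustrative environment dimensions.  Each added factor MULTIPLIES
# the number of physical machine configurations a full test pass needs.
platforms = ["Windows", "Mac", "Linux"]
os_versions = ["current", "previous"]
browsers = ["Chrome", "Firefox", "Safari", "Edge"]
connections = ["broadband", "mobile"]

configs = list(product(platforms, os_versions, browsers, connections))
print(len(configs))  # 3 * 2 * 4 * 2 = 48 distinct setups
```

Forty-eight machines for four modest factors – and a real matrix has many more dimensions, which is why full simultaneous coverage “most assuredly, never happens.”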

In reality, the amount of time required to create and maintain these automated tests is so high that companies often don’t get around to it.  Especially in cases where projects have been outsourced and there is no expectation of revisiting this (new) software in the future, automation may not be used.  Also disconcerting is that even where a collection of test cases has been automated, the speed with which these tests run depends on the same network traffic issues (live) people encounter.  In the end, for documentation and liability purposes, testers are usually required to put down in writing a description of all the scenarios they want tested, but these may simply end up as ‘cue cards’ for future manual testing.  Late specification changes to the software (such as the government adding new legal requirements) will often mean there is literally no time to rerun these ‘regression’ test cases – either manually or through the use of these clunky automation tools!

It’s also important, in the crisis at hand, to realize that company ‘websites’ are usually not synonymous with the product itself.  In the case of ‘Craigslist’ it may be.  For health insurance it certainly isn’t.  Life decisions may depend on whether calculations occurring behind the scenes (premium quotes, or eligibility for example) were performed correctly.  Simply being able to ‘sign up’ or to see your name formatted correctly does not mean you are out of the woods!  On the other hand, problems appearing in this ‘cosmetic’ realm may be strong indicators as to the extent and quality of the testing performed prior to release.

On a case by case basis, the testing of calculations requires a much larger time commitment than testing simple screen flow.  The tester must manually step through the lengthy and complicated business rules (legal or otherwise) in an attempt to verify that calculated results were spit back as expected.  The endless combination of relationships between input data and rules, such as ‘if this is true, that cannot be true’, often means that testers are only able to test a small sample of the possible scenarios.  With the healthcare website I would be most concerned with errors in this realm – especially in regards to a law that no one seems to fully understand!
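The combinatorial problem is easy to demonstrate with a toy example.  The eligibility rule below is entirely hypothetical, but it shows how interdependent inputs (‘if this is true, that cannot be true’) carve the input space into valid and invalid scenarios – all of which a tester would ideally check:

```python
from itertools import product

# Hypothetical eligibility rule with interdependent inputs:
# if the applicant is already insured, a subsidy cannot be 'pending'.
def eligible(age_band, insured, income_band, subsidy_pending):
    if insured and subsidy_pending:
        # 'if this is true, that cannot be true'
        raise ValueError("invalid input combination")
    return age_band != "over_65" and income_band != "high"

age_bands = ["under_26", "26_to_64", "over_65"]
income_bands = ["low", "medium", "high"]

valid, invalid = 0, 0
for combo in product(age_bands, [True, False], income_bands, [True, False]):
    try:
        eligible(*combo)
        valid += 1
    except ValueError:
        invalid += 1

print(valid, invalid)  # 27 runnable scenarios, 9 forbidden combinations
```

Four small inputs already yield 36 combinations; add a dozen real-world fields and the space explodes into the millions, which is why testers end up verifying only a sample.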

Among my own customer experiences, I’ve been the victim (or beneficiary?) of a $600 computational error within my auto insurance bill.  This was apparently never noticed, one way or the other, by my insurer.  I received neither ‘Sorry, we previously overcharged you by $600’ nor ‘Sorry, we recently billed you $600 instead of $1200’.  In another example, Bank of America – after acquiring the company that previously owned my credit card account – failed to properly handle the cardholder rules as contracted.  Though I had not made a late payment (a prerequisite for any change to my ‘fixed’ rate), they improperly changed my rate to ‘variable’, nearly doubling my interest payment.  I have also heard secondhand accounts of inadequate testing performed prior to other large institution ‘takeovers’.  Consumers should really look out in these situations!

Of course, in addition to the issues discussed above, there are also your garden variety concerns relating to things like ‘project management’ methods and outsourcing pitfalls.  A few of these include:

#1) Current trends to be ‘flexible’ and to ‘cross train’ can wreak havoc in terms of learning curves and accountability.  ‘Bug Tracking’ must be handled correctly so that all outstanding issues are individually identified and assigned unambiguously to one, and only one, employee at any given moment in time.  Ideally, software testers should be given the power and responsibility of literally ‘signing out’ the end product.  This means that they put their signature down on paper and include notes as to the existence of any outstanding issues (to be fixed in the future).  This process is essentially defeated and made meaningless when others, such as project managers, are allowed to do this ‘signing out’ in place of those who did the actual testing.  In reality, only my first employer followed this most basic of practices.  Perhaps the practice creates too much fear these days, when deadlines and completion dates seem to take precedence over all else.

#2) Outsourcing providers may lack adequate understanding of the client company’s business requirements or existing source code prior to preparing their estimations of the time required for a project.

#3) Employees within the hiring company (like the government) may not understand the technical complications created by late requests for changes to the software.  The outsourcing company is unlikely to tell the client ‘That can’t be done properly within the current time frame’.

#4) In situations where cash penalties might be imposed for late project completion – and especially when coders and testers sense the unlikelihood of any future direct accountability – these employees often avoid being the ‘squeaky wheel’ in status meetings run by the very stressed, less closely involved, project managers.  I saw this become a group behavior on one of my projects in which everyone knew there were major problems, but only I spoke up.  The fear is that the project manager might assume that the person encountering issues is less competent than the others.  In my case, I had documented so many unique concerns along the way – all of which eventually rang true – that no one continued giving me a hard time.  Most employees in my situation would not have risked these waters.

I hope these notes on the true nature of today’s software testing environment prove useful to others diving into this topic.

“Stop and Stay Stopped” Crosswalk Law Has Created Many New Dangers

15 Feb

It is quite apparent that adequate scientific study was not conducted prior to implementation of the “Stop and Stay Stopped” crosswalk law in states such as New Jersey.  This retraining of driver and pedestrian expectations, and the associated logistical impossibilities, have resulted in so many new dangers that listing them is like shooting fish in a barrel.

This law effectively mandates two additional “Primary Tasks” for drivers as they are now required to continually take their eyes off the road while attempting to process the ever evolving intentions of pedestrians at the side (or both sides) of the street!  This is analogous to the young male driver who is distracted by an attractive woman – make that two attractive women – standing on opposite sides of the street!  In addition, drivers are also expected to scan the street (often obstructed by the rise of a hill, rain, or wear) for the existence, perhaps proliferation, of unexpected crosswalks.  For a more scientific understanding of all this see the National Safety Council’s entries concerning “Distracted Driving” and “Primary Task”.  Keep in mind that the above difficulties are in no way analogous to the effort needed to interpret “stop” signs, red lights or crossing guards – all of which are unambiguous and require only one quick glance for a complete understanding as to the actions required of the driver.

Some of the new “Stop and Stay Stopped” dangers are described below.

•           The law effectively mandates driver distraction and violates the driver’s golden rule – “Keep your eyes on the road!”  Drivers have much less time to observe developments on the road ahead as they are now required to continually interpret the intentions of pedestrians at the side of the road (both sides for that matter).  This requires high level mental processing and prolonged attention (unlike the simple one time glance required for a “stop” sign).  Think of the classic (and true) scenario in which a male driver is briefly distracted by an attractive female pedestrian.  He momentarily forgets about his obligation to keep his eyes on the road and “bam!”  Don’t forget to multiply this distraction by two (for both sides of the road).

•           The law violates the spirit of the pedestrian golden rule – “Look both ways before crossing!”  This of course always included the implicit “Don’t cross when cars are coming!”  This was another way of saying that – should you have an encounter with a car – you (the pedestrian) are going to pay the price!  The proper behavioral options have always been to: cross at intersections with the green light; cross where crossing guards or “stop” signs exist; cross at pedestrian bridges; or be fully aware of the danger and cross at other locations.  None of these prior options negated the responsibility of the driver to drive safely and within the speed limit.

•           The proliferation of new crosswalk lines – often mid block with no signage – also violates the pedestrian’s second golden rule – “Cross at the green, not in-between!”  It appears that towns are now taking advantage of the new law by painting crosswalks anywhere they want – often mid block – in order to avoid dealing with the real issue which is that their roads are not “pedestrian safe”.  In addition, even where crosswalks don’t exist, many pedestrians now seem to believe that cars are required to stop for them.  Not so bright, but do they deserve to die?

•           Crossing the street in the absence of “approaching cars” has been made much less feasible, and even impossible at times!  One of the most ironic, and unnecessarily dangerous “real world” results of this law (exacerbated by increased enforcement) has been the way it has taken away the pedestrian’s ability to cross the street at the time of her choosing in the absence of oncoming traffic.  This is even more prevalent on low traffic roads.  Previously, a person could stand at the edge of the curb, wait a few seconds, then cross without any approaching cars.  What happens now is that the few approaching cars tend to slow down greatly, without any legal obligation to do so.  The lead car may even stop.  A similar backlog may – or may not – exist from the opposite direction (far side) depending on whether those drivers decide to take the law literally.  Ironically, the “literal” interpretation would cause them to keep moving.  Our pedestrian now has a hazardous crossing with many stressors where none existed previously.  She knows that if she “waves” the (erroneously stopped) vehicle through, the delay could go on all night as the conditions repeat themselves with the trailing cars.  In addition, she may fear that pedestrians approaching from behind (unaware of this interaction) could be struck by the lead car.  So what we end up with is a situation in which children, the elderly, the handicapped, or simply the “shy” may feel pressured to not keep the lead driver waiting.  Many times, in fact, these pedestrians enter the roadway despite the fact that the far side traffic (obeying the law) is still moving across their intended path!  Again: “Old Scenario” = 0% risk ……. “New Scenario” = (you fill in the percent).

•           Pedestrians are not visible at night!  Often because the crosswalk is mid block with no lighting, often because they are in the street standing in front of a stopped car’s blinding headlights, and sometimes compounded by the pedestrians’ wearing of official NY/NJ colors (black) – people now routinely cross the road with no idea how invisible they actually are!  This is made much worse by the very high numbers of aged drivers suffering from early stage cataracts (“night blindness”).  We all know how difficult it is to see an approaching car whose driver forgot to turn his headlights on, or the jogger/bicyclist who is not wearing light colored or reflective clothing.  Pedestrians located to the side, out of the driver’s line of sight, are now suddenly planting themselves directly in front of approaching cars!  This is an unfortunate consequence of a new undue confidence on the part of pedestrians.  By the way – highly reflective “pedestrian crossing” signs don’t help here – unless those crossing the street are wearing them!

•           The fear of receiving a ticket is often a danger in and of itself.  Drivers now suddenly stop mid-block where a crosswalk does not exist.  This risks serious rear end collisions and puts the jaywalking pedestrian at risk of being run over by an unsuspecting car passing from behind.

•           The well known “back and forth” courtesy dance creates many new unforeseen dangers.  Pedestrians frequently wave cars through, unbeknownst to the pedestrians behind them!  In addition, stationary drivers who are focusing on the pedestrian located on one side of the road may – in a nervous and honest attempt to be polite – suddenly continue on after this wave having no idea that a third pedestrian just entered the roadway from the near side (right of driver).  This pedestrian (who had seen the car stop and who was now checking traffic from the opposite direction) could easily be run over with just one quick tap on the accelerator.  A number of other scenarios could be added to this category as well.

•           In real world terms – there are logistical impossibilities and dangers in this law.  Pedestrians are required to step into the line of traffic for the “Stop and Stay Stopped” requirement to take place.  If the goal of the law is to keep pedestrians safe, is this not counterproductive?  What about pedestrians entering the crosswalk from between parked cars?  Or problems originating from another questionable safety design issue in which “bump outs” have been installed in front of schools – effectively corralling students to the edge of traffic (like “human traffic cones”) thus eliminating the former shoulder?

And how should the very common situation in which pedestrians congregate on the street, just off the curb, with no intention of crossing be handled?  Perhaps drivers need to get out of their cars and ask the pedestrians to step back on the sidewalk?  It’s unclear under what conditions the driver would be allowed to proceed.

The online description of this law even (originally) stated the need for drivers and pedestrians to establish “eye contact” in some situations!  Here again – what about nighttime?  Tinted windows?  Sunglasses worn by one, or both, of the participants?  What if the participants do not have super vision and so on?

•           Pedestrians have developed a number of dangerous habits.  These include crossing high speed intersections against a red light.  And pedestrians visiting jurisdictions outside the effective area of the “Stop and Stay Stopped” law will presumably be at increased risk should they not remember to adjust their assumptions.  For example a person crossing against traffic in New York City.

•           There are a number of “passing” issues for drivers.  These include the situation in which a large vehicle in front slows to a near stop and activates its right blinker.  A smaller vehicle slowly passing from behind could easily run over an unsuspecting pedestrian.  In this scenario, the trailing driver is unable to see the pedestrian and has no idea that the driver of the larger vehicle (about to turn) is also actively encouraging the pedestrian to hurry up and cross the street.  This is another example of the “back and forth” dance in combination with the fear of receiving a ticket.  Both are results of the retraining of expectations.

•           Crosswalks – wherever they exist – now function as part time “Stop” signs with plenty of room for human error.  Police cars (often with sirens off) regularly speed down my road on their way to catch the bad guys.  I assume they do not recklessly run red lights and “stop” signs as the likelihood of deadly results is well known and predictable.  However, I am not sure the police treat these numerous new (and existing) crosswalks with the same concern.  These are of course the legal equivalent of part time “stop” signs.  For regular drivers as well, these crosswalks – unlike red lights and “stop” signs – require continual attention and mental processing in order to distinguish whether the “stop” law applies or not.

•           Sun glare conditions are more hazardous now at these crosswalks.  When unexpectedly hit by sun glare, drivers suddenly slow down, but not so much as to be rear ended by the driver behind.  They then quickly adjust their visors while maintaining a steady head position enabling them to see a narrow patch of road directly ahead of them.  They can no longer divert their attention back and forth to the sides of the road.  Unfortunately the driver’s action of slowing down may be interpreted by the pedestrian as an intention to stop.  The pedestrian – facing away from the sun and unaware of any issue – may then turn his head to check traffic from the other direction as he steps into the crosswalk.

•           Oddities in the implementation of the law.  These include inconsistencies such as unsigned crosswalks; faded and unsigned crosswalks; or excessive numbers of these “pedestrian crossing” signs located right next to each other.  In one case, near my home, one of these large yellow “pedestrian crossing” signs was planted right next to an existing “stop” sign, actually blocking the view of the “stop” sign from drivers!  This is nuts!  Exactly who have we delegated these life and death decisions to?!

On a personal note, I’m tired of nearly running over pedestrians through no fault of my own!  The often quoted assumption that drivers don’t care about the safety of these pedestrians contradicts simple logic, observation, and human nature.  Does anyone reading this know a friend or family member who would not be seriously upset by the thought they had just run over someone with their car?  If you do, you may be hanging out with the wrong crowd.  In fact, the proof of the law’s danger lies in the obvious comparison between how often drivers blow through “stop” signs, and how often they do the same at many of these crosswalks.  As already noted – “stop” signs require just one quick glance for compliance, are unambiguous, and easily seen and interpreted at night.  Virtually no one “blows through” these for fear of danger to themselves or others (regardless of the legalities).  Yet these exact same drivers – well aware of the new law by now – are often seen driving through crosswalks in which a pedestrian had just entered!  Hint, hint – read that last sentence again!

In addition, because complying with this law creates – by necessity – an ongoing distraction for drivers, it means that as general compliance increases, so do many of the dangers described above.  In this way, safety gains resulting from increased compliance may be self limiting.  This cannot be said for traditional “stop” signs and lights.

Considering the deadly nature of the resulting interactions (car on person), parents will likely come away from this essay with one overwhelming piece of advice for their children as they walk to school – “Do not trust that cars are going to stop for you when you enter the crosswalk!”  So, it seems that the best way to stay safe is to not trust the effectiveness of this law designed to keep us safe!  This brings us back to “Look both ways and try to cross when cars are not coming”.  And, if this is in fact the safer approach, it means that the law actually requires pedestrians to go against the safer approach before the law kicks in (by stepping out into the road at a crosswalk).  The law is in effect creating, and then promoting, a false sense of security.  On the plus side, let’s not forget that the law cuts down on litigation time and helps avoid the greater issue of building roads that are pedestrian friendly.

Incredibly, the organizations in New Jersey most responsible for ensuring the safety of pedestrians have failed to respond directly to any of the observations listed above.  I eventually received only a generic “we are interested in safety” letter from the NJDOT.  After many months of not hearing back concerning numerous mailings (email and snail mail) in which I requested info as to “who I should be contacting” with these observations – I finally spoke again to the head of the Rutgers University organization responsible for the state’s annual “Pedestrian Safety” report.  He acknowledged (as I had suspected) that his predecessor was involved in the original recommendations leading to this “Stop and Stay Stopped” legislation.  In addition, he was personally responsible for running the training sessions showing the police how to perform “sting” operations in relation to this law.

Hopefully this essay will convince a few scientifically minded individuals to take a serious look at this very serious issue affecting nearly everyone living in states where this law exists!