Monday, June 5, 2017

Drivers are prone to error, but so are coders.

I'm finding it difficult to understand why, in an era when so many people interact directly with software on a regular basis — and know first hand how temperamental and buggy it very often is — we are so uncritical of the idea of the driverless car.
People are fond of saying that most car accidents are the result of human error. Consider this: 100% of software defects are the result of human error. Untold thousands of person-hours have been spent building operating systems (both smartphone and desktop) and the programs that run on them: systems with a prescribed set of machine-readable inputs. And still we get bugs, software updates that render our computers unusable, viruses and other malware, the Blue Screen of Death, the Spinning Beachball of Death, core dumps, system panics, and the rest of it. No one has shown us why we should believe that the software running driverless cars — where the set of possible inputs is much larger and far less predictable — would be any better.
A driver-augmentation system that helps with lane-keeping in a prepared environment? Sure. Heads-up displays to conserve and focus driver attention? You bet. Awareness-management systems that alert drivers to potential trouble? Absolutely — as long as that much-maligned but still miraculous human situational intelligence is there to quarterback the whole thing.
Here's a funny thing about driverless cars and safety: people talk about humans being inherently dangerous behind the wheel, but we know for a fact that not all humans are equally dangerous behind the wheel. Even better, we know which humans tend to be most dangerous behind the wheel — because insurance companies know it, and they convey that data to us in the form of the premiums they charge. By a significant margin, the most dangerous drivers — that is, the most expensive to insure — are young people in their 20s. Want safer roads? We could start by raising the minimum driving age, improving driver training, and tightening testing.
Software is written by humans who work for corporations. Delphi, Intel, Volvo, Google: it doesn't matter. Reality check: corporations are driven by the bottom line. I don't want to be sitting in a car knowing that the corporation behind the software hit some quarterly deadline by taking shortcuts to get something out the door, ensuring the board got its compensation and the shareholders got their dividends. This happens every day. That's the reality of driverless cars.
If my smart home thermostat has a bug in it, the house might be too cold or too hot, but it has zero potential to kill me. That is simply not true of driverless cars.

1 comment:

Anonymous said...

The concerns make sense, but I don't think the bottom-line conclusion fits with reality. In my opinion, driverless cars are going to result in fewer accidents than traditional cars, for a number of reasons.

As far as bugs go, the software we interact with on a daily basis has bugs because it can have bugs. Your computer, cellphone, etc. have bugs precisely because they can't kill you. More critical systems, such as airplanes, spacecraft, and nuclear facilities, are much more reliable because the alternative wouldn't suffice. These systems have strict stability requirements, so they use mature, tested, and highly redundant components to keep everything working. Whizz-bang consumer software like computer OSes keeps adding new features to entice customers. And if it fails? Someone restarts their device and is a-ok.

Where I agree with the concern is with companies like Google (and possibly Apple) that thrive in the low-maturity, fast-paced world of consumer software and are now getting into autonomous-car software. I hope they set their typical development philosophies off to the side and adopt a more rigorous and stable approach.

The other issue is malware and viruses. I think it goes without saying that the driving logic and the "infotainment" software should be totally separate, and the driving logic should not be accessible wirelessly. Someone should need to attach physical hardware to an internal port on the car to update the driving logic: something that can't be done en masse.

But in the end, anything well-defined can be done better by a computer than by a person. It's a certainty that autonomous cars will have accidents and that people will die, but I see no reason to think they would be any less safe than people are currently, and no reason to think they wouldn't be significantly better. And even if they aren't significantly better at the start, like all technology they will only improve over time. Humans 10,000 years ago and humans today are the same.