I know I vented about this a couple of months ago, but now that BMW and a handful of other manufacturers are touting their latest self-driving cars, it's time for another perspective. Google have already had one of their vehicles crash, back in August last year, causing a five-car accident. That raises the spectre of who is ultimately responsible for such an accident. Google said it was the driver in their case, and there's the crux of the problem right away: if you weren't in control of the vehicle at the time, how can you be responsible for its actions?

How about the vehicle manufacturer? Certainly they're partly to blame, but they will deflect that blame onto the software authors and the sensor manufacturers. Did the car crash because of a bug in the software, or because one of the sensors it relies on failed? How do you prove what caused the failure? And what happens when someone in a regular car causes the accident? The manufacturers are going to great lengths to point out that their self-driving cars can adapt to and compensate for any eventuality. That's a rose-tinted view of the world: I guarantee any one of us could cause an automated car to hit ours with the minimum of effort, because just like human drivers, a computerised system wired up to sensors and hundreds of thousands of lines of code cannot compensate for human stupidity.
Now think about this: any vehicle with an outside connection (like OnStar) is insecure. OnStar in particular is so deeply integrated with the onboard computers that it opens the door to remote vehicle hacking. University researchers proved in 2010 how easy it is to hack into the CAN bus (via the OBD-II port) of any modern car (Original report (PDF)). Yes, they had physical access to the CAN bus, but their follow-on paper published in 2011 (Second report (PDF)) performed the same attacks remotely via the vehicle's telematics instead.
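Part of what makes those attacks work is that the CAN bus has no concept of sender authentication: any node that can write to the bus can impersonate any other. Here's a minimal sketch of injecting a frame, using the python-can library on a Linux SocketCAN interface; the interface name, arbitration ID and payload are hypothetical placeholders, not values taken from the papers above.

    # Minimal sketch: injecting a spoofed frame onto a CAN bus.
    # Assumes Linux SocketCAN with an interface named 'can0' and
    # the python-can package installed (pip install python-can).
    # The arbitration ID and payload are made-up placeholders.
    import can

    def spoof_frame() -> None:
        # Any process that can open the interface can transmit;
        # CAN itself has no notion of an authorised sender.
        bus = can.interface.Bus(channel="can0", bustype="socketcan")

        # Receiving ECUs trust the arbitration ID alone to decide
        # what a frame means and which node it "came from".
        msg = can.Message(
            arbitration_id=0x123,           # hypothetical message ID
            data=[0x00, 0x01, 0x02, 0x03],  # hypothetical payload
            is_extended_id=False,
        )
        bus.send(msg)
        bus.shutdown()

    if __name__ == "__main__":
        spoof_frame()

The point isn't this particular frame: nothing on the bus distinguishes it from one sent by a legitimate ECU, which is exactly the property the researchers exploited once they had a way in, whether through the OBD-II port or the telematics unit.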
Given that precedent (and how ridiculously easy it is to steal any OnStar-equipped car via social engineering), do you really want a self-driving vehicle that is open to software attacks like this?
Going back to the crash scenario: how do you now prove that it wasn't a malicious software attack?
The lawyers will have a field day with self-driving cars if they ever become mainstream. Look at the furore over Toyota floor mats trapping the accelerator pedals: that was a simple physical design problem and it nearly took Toyota down.
I truly hope they don't become mainstream. I can't imagine a bleaker future for drivers than cars that drive themselves. Not to mention the morally bankrupt idea that people want to abdicate all personal responsibility when driving; that alone, I think, is reason enough to question the entire development effort behind these vehicles. If people are really so desperate to have the "drudgery" of driving removed from their lives, perhaps they should reconsider public transport and taxis. Or give up their "horrible" cars altogether and just stay inside.