If you're in the "too long, didn't read" generation, I'll summarise: the average number of damage-only crashes in the US is 0.3 per 100,000 miles driven. For Google's self-driving cars, that number turns out to be 3 per 140,000 miles - about seven times the national average.
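A quick sanity check on that ratio - a minimal back-of-the-envelope sketch, taking both rates at face value from the figures quoted above:

```python
# Compare the two crash rates quoted above, normalised to per-mile.
# Assumption: both figures (0.3 per 100,000 miles nationally, 3 per
# 140,000 miles for Google's fleet) are as quoted in this post.

NATIONAL_RATE = 0.3 / 100_000   # damage-only crashes per mile, US average
GOOGLE_RATE = 3 / 140_000       # damage-only crashes per mile, Google fleet

print(f"Google rate per 100,000 miles: {GOOGLE_RATE * 100_000:.2f}")   # ~2.14
print(f"Ratio vs national average: {GOOGLE_RATE / NATIONAL_RATE:.1f}x")  # ~7.1x
```

That works out to roughly 2.1 crashes per 100,000 miles, or a bit over seven times the national figure.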
This report in the NY Times has more detail: Self-Driving Cars Getting Dinged in California.
What the NY Times article doesn't mention is that Google have only been required to report accidents under California law since September 2014. The program has been running for six years, so five and a half years' worth of accidents went unreported.
Chris Urmson, the director of Google's program, has admitted to 11 damage accidents and another 15 'minor' incidents in an article he wrote for Medium (The View from the Front Seat of the Google Self-Driving Car). So that's 26 fender benders in six months? That we know about?
Google also continue to tout their oversized mileage figures - they're now claiming 2 million miles - but again, they don't readily admit how many of those miles were driven in simulation rather than on real roads. Here's the skinny: Google's cars only drive on around 2,000 miles of actual real-world roads, because that's the only part of the nation's road network that has been digitised in enough detail for their vehicles to work. So ask the question: 26 accidents in six months, on roads the cars are intimately familiar with. How does that scale up? A rough sketch follows.
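To make the "how does that scale up?" question concrete, here's a crude extrapolation. The ~13,500 miles per year figure is my assumption (roughly the FHWA average for US drivers, not a number from this post), and it naively assumes Google's familiar-road crash rate would hold everywhere - the argument above suggests it would actually be worse on unmapped roads:

```python
# Crude extrapolation: what Google's observed crash rate would mean
# for a typical driver, versus the national average.
# Assumptions: ~13,500 miles/year per driver (approx. FHWA average,
# my assumption), and that the fleet's rate on familiar roads holds
# everywhere (the post argues it would be worse on unmapped roads).

MILES_PER_YEAR = 13_500
GOOGLE_RATE = 3 / 140_000       # incidents per mile, from the post
NATIONAL_RATE = 0.3 / 100_000   # incidents per mile, national average

def years_between_incidents(rate_per_mile: float) -> float:
    """Expected years between incidents at a given per-mile rate."""
    return 1 / (rate_per_mile * MILES_PER_YEAR)

print(f"At Google's rate: one incident every "
      f"{years_between_incidents(GOOGLE_RATE):.1f} years")    # ~3.5
print(f"At the national rate: one incident every "
      f"{years_between_incidents(NATIONAL_RATE):.1f} years")  # ~24.7
```

On those (admittedly rough) numbers, a typical driver would go from a fender bender every 25 years or so to one every three or four years.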
Google can mealy-mouth it however they like (someone was in control, it wasn't our fault), but that's the excuse for every car accident, isn't it? It was the other driver's fault. Google like to tell us that in every accident they've had, it was the human that caused the problem - but by California law, the only drivers allowed in Google's cars right now have to be highly trained and certified. Isn't that worrying too, that the 'highly trained' drivers are the ones causing the problems? It raises the question of what happens when mere mortals are allowed to use these cars.
If drone cars are supposed to reduce accident rates by removing reliance on the human factor, then the 'highly trained driver' argument, coupled with the following statement, would seem to fly in the face of that idea: "For a vehicle to suddenly swerve to the right, a human would have to grab the steering wheel ... training becomes even more important" (Bryant Walker Smith, assistant professor and fellow at the Center for Automotive Research at Stanford).
Driver training in the US is pretty pathetic as it is - is he suggesting that drivers need more training and more skill in order to use drone cars? I can certainly see that argument: if a drone car suddenly needs human intervention, an untrained driver is the last person you want grabbing the wheel.
Drone cars are an admirable project, for sure, but unless we replace every car on the road with one overnight, I don't see the accident rate changing. If current indications are anything to go by, mixing drone cars with human-driven cars actually increases the accident rate.
Food for thought: this is ONLY Google we're talking about here. Mercedes, BMW, Volvo and all the other manufacturers working on automation have not reported their accident stats (nor are they currently obliged to).
1 comment:
I am not that big a fan of self-driving cars. Google just released their crash data, and you can't expect 100% accuracy in real conditions. My cousin, who works with a DUI lawyer, told me that although drunk and distracted driving cases will decline, one shouldn't expect exceptional results.