
I see this sentiment often. But is this comparison fair? For example, what does the distribution of risk look like in each cohort (AI vs human drivers)?

Presumably the risk of an accident is relatively evenly distributed among all AI drivers (they are using the same AI, after all). But is the risk of a car accident evenly distributed among all people? Not even close. It’s perfectly possible to reduce the overall risk for everyone while simultaneously increasing the risk for a given individual by an order of magnitude.
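The arithmetic behind that claim is easy to sketch. The accident rates below are made up purely for illustration; the point is only that a heterogeneous human population lets a uniform AI rate beat the average while still being worse for the safest drivers:

```python
# Hypothetical rates (accidents per million miles), chosen only to
# illustrate the shape of the argument.
safe_rate, risky_rate = 0.1, 20.0

# Suppose 90% of human drivers are "safe" and 10% are "risky".
human_avg = 0.9 * safe_rate + 0.1 * risky_rate  # 0.09 + 2.0 = 2.09

# A uniform AI fleet rate of 1.0 cuts the population-wide risk in half...
ai_rate = 1.0
assert ai_rate < human_avg

# ...yet a previously safe driver now faces 10x their old risk.
assert ai_rate / safe_rate == 10.0
```

Both assertions hold: the fleet is safer on average, and the safe majority is individually worse off.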

Would you be willing to assume a greater risk of accidental death _personally_ to decrease overall risk of death? Not a question I imagine reaches broad consensus...

And what about the soft problems? Like responsibility. A self-driving car runs off the road and kills your daughter. Now what? Tesla is certainly not going to accept responsibility. So... you just chalk it up to bad luck? At least the current paradigm has the _ability_ to offer closure after a tragedy.

Reducing “self-driving cars” to a single metric is not only mathematically dubious, it’s ethically abhorrent and just plain stupid. I expect better.


