Hacker News

A computer error is going to be scarier by far.

When a human kills someone with a car it is almost always in a way we can empathize. When you’re around cars you have pretty good mental models for the humans that drive them and how the car will behave.

When you’re around human-piloted cars you can look at the driver and get a pretty good idea of intent. You can tell if the driver sees you, what their mental state is, whether they’re paying attention, and what they intend to do. You can sum up a person at a glance. This is the power of evolution: we’re really good at figuring things out about other living things.

Crossing the street in front of a car is a leap of faith, not troubling at all when there’s a human there, but a robot? There’s no body posture, no gestures, no facial expressions, nothing to go on. There’s a computer in control of a powerful heavy machine that you’re just expected to trust.

Robot cars don’t make human mistakes, they make alien mistakes like running down kids on the sidewalk in broad daylight, things which don’t make any sense at all that make people feel like they aren’t anywhere safe.

It won’t take but a couple of cute kids killed in a surprising manner to shut down the whole autonomous experiment.



From experience I can say that it's pretty disturbing when a human driver breaks your model, too. And this happens more often than I'd like. I was second in a line of pedestrians about to cross the street (a two-lane arterial with a 25 mph limit, heavily used by both cars and pedestrians, at a crosswalk I use several times a week). The guy in front of me and I both judged that the driver was looking in our direction (it was dark, admittedly, but the street is well-lit), and the car slowed in a fashion that led us to think it was going to stop.

The fellow in front of me steps in front of the car, which suddenly accelerates and hits him, knocking him down. Immediately before impact it decelerates and comes to a stop.

The fellow ended up more shaken than anything else, but it was a very near thing, and a reminder that it's precisely that assumption of the predictability of human drivers that fails in many crash situations.

In other words, your judgement that you understand what other drivers are going to do is more fragile than you imagine.


It seems that most of the radar systems today in cars (in the US) are pretty adept at detecting pedestrians reliably and engaging the automatic emergency braking system to at least minimize the potential for harm.

On my 2018 Lexus, AEB won't come to a complete stop, but it will definitely slow from, say, 40 mph to 20 mph, and hitting a pedestrian at 20 hopefully has a much better outcome than hitting them at 40+.
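As a rough illustration of why that slowdown matters (this is only the physics side; actual pedestrian-fatality rates come from empirical studies, and the car mass here is an assumed figure): kinetic energy grows with the square of speed, so halving the impact speed quarters the energy involved.

```python
# Kinetic energy scales with the square of speed, so an impact at
# 20 mph carries one quarter of the energy of an impact at 40 mph.

MPH_TO_MPS = 0.44704  # 1 mph in m/s

def kinetic_energy_joules(mass_kg: float, speed_mph: float) -> float:
    """Kinetic energy of a mass moving at the given speed."""
    v = speed_mph * MPH_TO_MPS
    return 0.5 * mass_kg * v ** 2

car_mass_kg = 2000.0  # rough figure for a mid-size SUV (assumption)
e40 = kinetic_energy_joules(car_mass_kg, 40.0)
e20 = kinetic_energy_joules(car_mass_kg, 20.0)
print(f"40 mph: {e40 / 1000:.0f} kJ, 20 mph: {e20 / 1000:.0f} kJ, ratio: {e40 / e20:.1f}x")
```

The 4x energy ratio holds regardless of the assumed mass, since mass cancels out of the ratio.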


> Crossing the street in front of a car is a leap of faith, not troubling at all when there’s a human there, but a robot? There’s no body posture, no gestures, no facial expressions, nothing to go on. There’s a computer in control of a powerful heavy machine that you’re just expected to trust.

This is something that's very solvable, though: robot cars should, and almost certainly will, have a way to communicate with pedestrians. I do agree with the general point about the greater possibility of very out-of-the-norm mistakes.


There's an opportunity for them to communicate better with pedestrians than the average human driver. Drivers tend to assume that their intent to stop or not to stop is obvious and don't bother with a clear signal like flashing their lights or waving visibly.

From the pedestrian's perspective, it can be hard to see the driver at all (small movements of the hand can be invisible in sun glare; direction of gaze likewise), and also hard to tell what they're doing. Just because they're slowing somewhat as they approach doesn't mean they see you or intend to stop.


One could imagine a standardized set of signals on the front of the car: red = stopped/stopping; yellow = about to start moving; green = moving and continuing. Something like that.


Drive.ai (now acquired by Apple, I believe) used to have LED matrix displays that communicated that way with other road users. I recall seeing them say things like "waiting for you to cross" or "driving autonomously" with an icon.


They really just needed a pair of animated eyes.


White parka crossing a snow covered road. Will the AI make the right decision?


When a human kills someone with a car because they're drunk, or texting, I don't have much empathy for them.

I read a statistic long ago - don't know how true it is, but it feels truthy - that half of all traffic fatalities happen between 9pm and 3am on Friday and Saturday nights. The fact that autonomous systems will never be intoxicated, distracted, or emotional makes me feel much safer.


A brick tied to the gas pedal will also never be intoxicated. It takes more than inability for intoxication to make a system that can drive a car safely.


That stat seems to be very untruthy. Fatal crashes seem to be distributed much more evenly than I would've guessed.

https://injuryfacts.nsc.org/motor-vehicle/overview/crashes-b...


Maybe not 50%, but there's certainly a strong bias in that data toward Friday/Saturday nights. Since the data resets at midnight rather than at bar hours, look at the difference in the midnight-4am data on Saturday and Sunday mornings vs. the rest of the week.


Those are also the only nights when people are out at all. People who have to be at work on weekday mornings aren't driving home from visiting their parents late on a Tuesday night, they're doing it late on a Saturday night. Yeah, it probably is alcohol but there's a lot of confounding factors.


Indeed. A drunk driver alone on the road isn't really a danger to anyone but himself.


It only makes me feel safer if those systems are substantially safer than humans.

If the systems are broadly as safe as humans _including_ a significant set who are drunk / high / distracted, that feels subjectively much less safe even though the statistical number of accidents is the same.


Oh, I concur. I want it measured against skilled, sober, attentive drivers, not "bad" drivers.


Everybody drives better than average, until they get distracted or tired. An AV driver that passes the standardised driving test 1M times in a row is good enough for me.


Well, now we've transferred the responsibility: Now the programmers of the car's AI must never be intoxicated, distracted, or emotional while creating it. Also, the business people managing the programmers must not be greedy, callous, or incompetent, which is a tough sell.


So we need AI driver certification then, just like we have with people. If the AI can pass the (perhaps 5x human capability) test, it's licensed to drive, just like the 16-year-old driving his new truck on the highway behind you with his four buddies swilling Coors.


The one time I was hit by a car as a pedestrian was a driver who wasn't paying attention. He was making a perfectly legal left turn at a green light, except for the pedestrians in the way (me and my girlfriend).

The danger with an autonomous vehicle is it not seeing you. The danger with a driver is not noticing you.


I'd think that with the radar systems in place today, vehicles would have far greater reliability in "seeing" you.


But that doesn’t require self-driving, just the automatic emergency braking already commonly implemented in ‘normal’ cars.


> Robot cars don’t make human mistakes, they make alien mistakes like running down kids on the sidewalk in broad daylight, things which don’t make any sense at all that make people feel like they aren’t anywhere safe.

This expresses well the perceived difference between catastrophic human and machine errors.

I have two cars, a ’99 4Runner and a Tesla Model S.

When driving my S, I’m at an involuntary elevated state of alertness compared to my 4Runner. I think it’s discomfort at driving a car that might suddenly kill me through failure modes that I can’t possibly anticipate.

While I think my S’s technology is much more likely to result in my safe travel than my dumb iron 4Runner, my subconscious maintains a different assessment.

I expect this dichotomy in perceived risk is generational and will disappear after a decade or two of lower accident rates in autonomous vehicles compared to human drivers.


I outfitted my car with Comma's openpilot and I can concur that my experience is similar, with the additional effect that, overall, my driving quality of life is significantly improved.

I can tell that I already have an inordinate amount of trust in the system several weeks in on boring highway driving.


We can eventually make AI do any of that better than a human by a long shot.

We tend to overestimate the power of the human brain. There is a lot we don’t know yet, but we shouldn’t treat it as magic and unsolvable by AI.


There is no evidence that AI can reach the level of skill and safety of a human driver. I’m not saying that it’s not possible, only that there is no reason to be sure of the contrary. IMHO we are extremely far away.


And IMHO we are extremely close. There is lots of evidence that AI can be much _better_ than a human driver, although currently on things like well-mapped highway driving with clear conditions. What's going on now is just making that general purpose for all the different types of environments.


Out of curiosity, do you have a specific source for,

"There is lots of evidence that AI can be much _better_ than a human driver, although currently on things like well mapped highway driving with clear conditions."

The only thing I know about is claims from Tesla, comparing Autopilot (in "well mapped highway driving with clear conditions") versus general human drivers (and which, as I recall, had some further sketchiness).


They can easily surpass humans on really specific problems, like looking at millions of cells and identifying which are likely cancerous. Today’s AIs are pretty bad at complex problems where each subproblem would require a specific AI, plus some logical thinking on top weighing the results against each other. And driving in non-trivial circumstances is definitely in the latter category.


Without special infrastructure, we are at least 50 years away.


> well mapped highway driving with clear conditions

and perfectly visible unambiguous markings. The bar is rather low.


Honda did an experiment where they added LCDs to the headlights to give the car expressive eyes to communicate with people outside the car.

Also, while it is scary to us and it will take a while, there are already self-driven vehicles we just take for granted, like elevators and driverless trams/trains. Sure, they are much easier to make, but they weren't trusted at first either.


This comparison is just not a good one.

Trains are on a track; the scenario never really changes, and the variables are few. Just like in an elevator, which is largely a mechanical device anyway. There is no real decision making.


and here I was about to make a joke about adding emoticons to the front of self driving cars


No need for anything so corny. Just need some standardized signaling. We already have it for brake lights and turn indicators.


> When a human kills someone with a car it is almost always in a way we can empathize

Nonsense. You can empathize with someone texting and killing someone?

This whole post reads like an attempt to appeal to people’s emotional attachment to human drivers coupled with fearmongering about robots.

You are placing far too much emphasis on our ability to “read” other drivers’ intent and the impact this has on automobile accident fatalities. Many accidents occur without any chance to see the offending driver, e.g. accidents at night, someone switching lanes when you are in their blind spot, a drunk driver suddenly doing something erratic, etc.

Moreover, this so-called advantage of human drivers is statistically meaningless unless you believe that the number of deaths due to automobile accidents is at an acceptable level and that it cannot be improved with technology, in this case AVs. I certainly don’t believe that. In the not too distant future, I believe this position will be laughable. Through adoption of autonomous vehicles, many predict we will drastically cut the number of fatalities. Will there be issues along the road? Most certainly. But as long as the overall number is falling by a significant amount, we simply cannot justify our love affair with humans “being in control”.

We’ve proven to be perennially distracted, we have terrible reaction times, we have extremely narrow vision, we panic in situations instead of remaining calm, etc., and yes, these faults do lead to the deaths of children. These are not theoretical deaths like the robot scare-tactic examples; these are actual deaths from human drivers.


>Through adoption of autonomous vehicles, many predict we will drastically cut the number of fatalities.

Who are these many people, and why should we believe their predictions?

> We’ve proven to be perennially distracted, we have terrible reaction times, we have extremely narrow vision, we panic in situations instead of remaining calm, etc. and yes, these faults do lead to the deaths of children.

We've also proven that all software has bugs, and that developers keep introducing new bugs in every single release. There is no reason to think that self-driving car software will be any different. What's worse is that when the software is updated, these bugs will be pushed out to tens of thousands of cars instantly.

Bit much to call someone's position nonsense when they're just skeptical of obvious stuff :)


I was referring to the absurdity of empathizing with drivers who kill people while texting, drunk, etc. (hence the quotation). What part of that statement do you agree with?

But I’ll go further and double down and say the entire post is nonsense. Why? Because the author’s skepticism doesn’t extend to the human factor. The position is not an accurate representation of the facts, i.e., what causes accidents (humans) and the known data around AVs today. If AV risk is so obvious as you claim, then why does the enormous amount of data show that AVs are involved in fewer accidents and lead to fewer fatalities than cars operated by humans on a per-mile basis? And how is the negligent human driver not obvious as a source of automobile fatalities?

The notion that we are safe because we can read humans is not substantiated by anything. Maybe you believe this number of fatalities is acceptable, or the best we can do, but I certainly don’t. There will be flaws in autonomous vehicles, no doubt. But will there be a net reduction in automobile-related fatalities as a result? Like anyone else, I can’t predict the future. But to paint a rosy picture about how our ability to read other drivers is somehow safer relative to AVs is nonsense. It just is. The data doesn’t support this argument.

And separately, if we’re talking about what will happen in the future, the notion that humans will ultimately prevail over AVs for safety reasons seems preposterous. We can debate the “when” in terms of AVs, but debating the “if” seems pretty out of touch with the way society has progressed with respect to our willingness to depend on technology.


>Because the author’s skepticism doesn’t extend to the human factor.

And your over-enthusiasm for AV doesn't extend to the human factor. We all have our own blinders ;)

>The notion that we are safe because we can read humans is not substantiated by anything.

That is your own misinterpretation. I did not read the comment that way.

>If AV risk is so obvious as you claim, then why does the enormous amount of data show that AVs are involved in fewer accidents and lead to fewer fatalities than cars operated by humans on a per-mile basis?

What you mean when you say AV, is actually "AV + Human". We're running controlled experiments, limiting the unknowns, and we're mandating a human be present - because the current AV technology sucks.

> We can debate the “when” in terms of AVs but debating the “if” seems pretty out of touch with the way society has progressed with respect to our willingness to depend on technology.

People used to say that about flying cars 40 years ago.


A future where AVs exist that can replace human drivers is a future where so many requirements and drivers for personal mobility have changed as well: due to the possibility of replacing a human performing a highly complex task in situ.

That future may not even want or need cars.

The other, more realistic future is one where human-level AVs are always just out of reach: causes of accidents are just as opaque as with human drivers, we're all a little bit safer, but a patch can cause catastrophically divergent behavior due to the innate non-linearity of the problem.

That future may not want or need cars either but it may not even be considered.


> The other, more realistic future is one where human-level AVs are always just out of reach.

That sounds pretty unlikely to me. "just out of reach" is a very narrow band, and to be stuck there despite decades of improvements would be pretty strange. I think there are only two likely outcomes for the next 50 to 100 years: either we don't even get particularly close, or we'll slowly but surely surpass the median human driver.


>That future may not even want or need cars.

Agreed, 100%. Instead of AV, we could focus on remote presence and other tech to make cars and/or daily commute unnecessary.


> Robot cars don’t make human mistakes, they make alien mistakes like running down kids on the sidewalk in broad daylight, things which don’t make any sense at all that make people feel like they aren’t anywhere safe.

This.

It doesn't matter if autonomous cars are objectively 5x, 10x, or 100x safer. They have to be subjectively safer, which is part PR problem, part UI problem, and part human-stubbornness problem.


> When you’re around human piloted cars you can look at the driver and get a pretty good idea of intent. You can tell if the driver sees you ...

This is a key area of driving that has been completely overlooked in AVs so far - giving feedback to other non-car road users.

Not hard on the face of it (excuse the pun).


> It won’t take but a couple of cute kids killed in a surprising manner to shut down the whole autonomous experiment.

That is sacrificing the counterfactual children that wouldn't have been killed if the bad human driver had been replaced by an average autonomous car.


Society isn't without emotion.

As much as we engineers want to look at the numbers and say "see, safer!", that does little to help people who have experienced a death, or to blunt the visceral impact that death can have.


You also end up reducing the number of people who are impacted by death or who have to feel those emotions. Yes, in the process of changing the system you end up shuffling around who gets affected, but in the end it's all part of affecting fewer people.


Once self-driving is better, what we need to do is create moral rules for the AI.

There is a car controlled by a computer. A pedestrian (a child) abruptly enters the road from behind cover. The computer knows that at its current speed it is impossible to stop. Its other choices are to drive onto the sidewalk, killing an old lady, or into the opposite lane, risking the life of the car's owner and of the people in another car.

A human driver can decide on instinct, usually protecting themselves. The computer needs an algorithm that decides who lives and who dies.


Why is it always about swerving onto the sidewalk or into the other lane? There's also the option of reducing speed as much as possible, i.e., braking hard, which a computer can do a) earlier than a human would and b) much harder than a human would. Yes, the people in the car might take a lot of negative Gs, but that's also an option. The car might still hit the kid, but the difference could be some broken bones, or with luck bruises, vs. death.


> There's also the option of reducing speed as much as possible, i.e., braking hard, which a computer can do a) earlier than a human would and b) much harder than a human would.

a) A computer can initiate braking a small fraction of a second faster than a human, which is great but not such a huge difference in braking distance.

b) A computer cannot brake any harder than a human, certainly not "much harder". The max deceleration rate is traction-limited, which any remotely modern car (last ~25 years) can easily sustain with an untrained driver thanks to modern ABS.

(As a side hobby I instruct in car control and accident avoidance clinics. Blindly braking hard is not often the best answer.)
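For what it's worth, the reaction-time point in a) can be put in rough numbers. A sketch under assumed values (the speed and the human-vs-computer reaction gaps below are illustrative; real figures vary widely): the distance covered before braking starts is simply speed times reaction time.

```python
# Reaction distance is linear in speed: d = v * t_react.
# Distance saved by reacting sooner, at a few assumed reaction-time gaps.

def reaction_distance_m(speed_kmh: float, t_react_s: float) -> float:
    """Distance covered at constant speed before braking begins."""
    return (speed_kmh / 3.6) * t_react_s

speed_kmh = 50.0  # illustrative urban speed (assumption)
for gap_s in (0.1, 0.3, 0.5):  # assumed human-vs-computer reaction gaps
    saved = reaction_distance_m(speed_kmh, gap_s)
    print(f"Reacting {gap_s:.1f} s sooner saves {saved:.1f} m at {speed_kmh:.0f} km/h")
```

Whether a few meters counts as "huge" depends on how close the pedestrian is; the point is only that the saving scales linearly with both speed and the reaction gap.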


Most people don't actually hit the brakes as hard as they could or should.


> The Computer knows that with current speed it is impossible to stop.

Then the car was going too fast. Full stop. The rest of your scenario is irrelevant.


>Then the car was going too fast. Full stop. The rest of your scenario is irrelevant.

How can you possibly say that with a straight face?

There will be situations where a car is going a perfectly fine speed for the situation and then the situation changes in a way the car cannot have seen, known about, or anticipated resulting in a crash. This happens all the time with human drivers. It will happen with AI drivers too.

Furthermore, we don't subject anything else that moves to this burden, why would we do so for cars (AI or otherwise)?


> There will be situations where a car is going a perfectly fine speed for the situation and then the situation changes in a way the car cannot have seen, known about, or anticipated resulting in a crash.

The parent's situation was not one of those. This was a situation where "Pedestrian (child) abruptly enter into road from behind cover." and there's an old lady on the sidewalk. In other words, there's a situation with limited visibility, limited room to maneuver (since the only other option is to go up on the sidewalk) and pedestrians present (the child may not be known but the old lady was). If you're in that situation and you're going too fast that you can't stop on a dime, you were not going a "perfectly fine speed for the situation".


In European cities you will often see pedestrians walking 1 m from cars driving 50-70 km/h. Human drivers can take this risk; to be useful, AI needs to handle it well too.


If 95% of human drivers do something, then you can't just call it "irrelevant" because you think it's wrong to do.

And humans don't go past parked cars on a street at 5mph.


20mph is slow enough to essentially ensure collisions aren't fatal.


But they still get hit. And the first paper on Google suggests that 5% are still going to die. That's pretty far from a sure thing.


My reference shows 1% at 30 km/h. Even 1% is too high, but luckily you also usually get a chance to scrub off some speed through braking. Braking follows a square law, so driving a little bit slower gains a massive difference in stopping distance.
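The square law can be made concrete with a back-of-envelope sketch (assuming constant deceleration at an illustrative friction coefficient; real stopping distances also depend on tires, surface, and reaction time):

```python
# Idealized braking distance at constant deceleration: d = v^2 / (2 * mu * g).
# Because of the v^2 term, small speed reductions shrink the distance a lot.

MU = 0.7   # assumed tire-road friction coefficient (dry pavement)
G = 9.81   # gravitational acceleration, m/s^2

def braking_distance_m(speed_kmh: float) -> float:
    """Distance to brake to a stop from speed_kmh at deceleration mu * g."""
    v = speed_kmh / 3.6  # km/h -> m/s
    return v ** 2 / (2 * MU * G)

for kmh in (50, 40, 30):
    print(f"{kmh} km/h -> {braking_distance_m(kmh):.1f} m to stop")
# Driving 30 km/h instead of 50 km/h cuts the braking distance
# to (30/50)^2 = 36% of the original, independent of the friction value.
```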


When you already believe that cars should not exist, it’s not much of a stretch to say that cars which do exist should be limited to less than a walking pace anywhere pedestrians might be present.

There’s a thing about the car companies having essentially stolen public space from everyone else when they made it incumbent on pedestrians to watch out for cars, and a desire to reverse this.

Of course this would pretty much invalidate them as a transportation mechanism, but that’s the point.


I think you could simultaneously have a better improvement for pedestrians and less impact on traffic by turning those parking spaces into sidewalks and greenery.


> Furthermore, we don't subject anything else that moves to this burden, why would we do so for cars (AI or otherwise)?

Yes we do. There are many jurisdictions where in a pedestrian-car collision, the car is presumed to be at fault unless the driver can prove otherwise.


Have you, as a human, ever been in this situation?



