Fatal Tesla Crash

It begins. Their response? "It's a beta."

electrek.co/2016/06/30/tesla-autopilot-fata-crash-nhtsa-investigation/

Other urls found in this thread:

news.mit.edu/2016/pinpointing-vehicles-with-high-precision-under-adverse-weather-conditions-0623
archive.is/2016.07.03-182321/http://electrek.co/2016/04/09/tesla-autopilot-avoid-collision-video/
reuters.com/article/us-tesla-autopilot-dvd-idUSKCN0ZH5BW
youtube.com/watch?v=7PKx3kS7f4A

people will still cry over self-driving car accidents even when they're statistically 1000x safer than humans driving them

I will never have respect for people who actually fucking want self-driving cars. Aside from the ethical and security controversies, it's just pure stupidity and laziness. There are automatics, and then there's fucking this. Shameful.

In the future all cars on the road will go 10 mph below the speed limit and sit at right turns for 5 hours before advancing, and there will STILL be accidents

How is this noteworthy in any way? The autopilot did no worse of a job than the human sitting in the driver's seat.

What divided highways have cross streets?
And how the fuck do you not notice a huge truck in front of you?

I'd rather not leave the decision of who dies in a car crash up to Google. They'd go full Affirmative Accident.

Because the driver's alertness would have been lessened once he deferred responsibility to the AI

Archive you faggot.

This. If I ever chose to let a computer drive me around, then at least I intend to occupy the time otherwise.

leld

Sounds like it must've decapped the victim.

Considering the title says fatal crash, I guess you're correct.

Except Tesla never marketed this feature as letting you sleep in your car or something. It's supposed to be a luxury, but they warned that you still have to stay alert for shortcomings like this one.


Archive my foot up your ass.

Only because it would look bad. Nobody wants to buy a driverless car if they have to sit and watch the road as if they're driving. Defeats the point.

Safer than the vast majority of incompetent fuckheads driving around in their metal death machines, yes.
It is better handled by a machine. Just because the machine is hackable doesn't mean it's a net safety loss.

Just like machine-generated C will be better than the vast majority of shit regular programmers churn out, when it happens.

This is what they get for sending out the driving AI as a fucking software update and debugging it on the fly. And it just runs off the car's camera, not LIDAR like Google's cars do.
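
For what it's worth, here's a toy sketch (hypothetical types and thresholds, nobody's actual stack) of why a second modality matters: a glare-blinded camera has nothing to overrule it, while a LIDAR return doesn't care about lighting.

[code]
# Toy fusion rule: brake if ANY sensor reports an obstacle with enough
# confidence. All types and numbers here are hypothetical.
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str          # "camera", "radar", "lidar", ...
    obstacle_seen: bool
    confidence: float    # 0.0 .. 1.0

def should_brake(detections, threshold=0.5):
    return any(d.obstacle_seen and d.confidence >= threshold
               for d in detections)

# Camera alone, washed out by a white trailer against a bright sky:
print(should_brake([Detection("camera", False, 0.9)]))    # False

# Add a lidar that ranges the trailer regardless of lighting:
print(should_brake([Detection("camera", False, 0.9),
                    Detection("lidar", True, 0.95)]))      # True
[/code]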

I get security, because it's easier to hack into a car than it is to hack into a human, but what ethical reasons are there against self-driving cars? Are they going to take people's jobs or something?

I'm not the guy you're responding to, but there is an ethical dilemma in placing someone in a high-risk mechanical system that they can completely relinquish control of. It may be a choice to do so, but what guarantees can be given for the user's safety? What cost should be devoted to ensuring such a system is safe, and is that cost proportional to the human lives that will be at its mercy?

As opposed to the human lives that are at their own, and others', mercy every day.
If you drive, you know that a large percentage of people on the road are incompetent faggots and the remaining percentage are not as foolproof as a system that's dedicated to total awareness of its surroundings, and potentially self-governing if the networking concepts work out.

The IoT is mostly insecure bullshit, but pretending that cars become a greater risk than human drivers once they're self-driving is total bullshit as well. It's a totally different set of risks.

I'm just pointing out the ethical concern my dude. Supplying a "self-driving" system in a vehicle opens up all sorts of potential legal drama around it.

Completely agree, which is why the question comes up in the first place. They don't know that they're incompetent faggots and would not quickly surrender their right to rampant roadfaggotry given such a premise, even if cleverly disguised. Even in pure logic, most normies think they know better than a machine.

kekekeke Americans confirmed for best drivers, shitskins shitty drivers.

There are legal concerns about who gets convicted of manslaughter or whatever in an accident, but not exactly profound ethical concerns. Statistically they will always be safer than humans.

The accident in the OP was a case of the truck crossing the highway and not yielding to traffic. The truck driver was obviously at fault here, more so than you could fault the machine for not reacting defensively enough.

The Tesla probably doesn't have the technology, like LIDAR, to really react well in many situations either.

When the tech is developed and they get past the stage of training them in California and move on to actually dangerous areas, the riskiest thing on the road will still be human drivers.

This thread was better on soylentnews.

They'd probably take truckers' jobs, for example.

Are you kidding me?
1.) If a self-driving car makes a mistake, will the driver be at fault or the company?

2.) How will the car handle a situation similar to the trolley problem? Should the self-driving software prioritize the lives of pedestrians or the driver in the event of extenuating circumstances that result in the vehicle partially losing control?

Those 2 are at the top of my head

Then stay there.


It will be the responsibility of the driver. Machines cannot be held accountable like humans, no matter how much more efficient or better they may be at driving.

>2.) How will the car handle a situation similar to the trolley problem? Should the self-driving software prioritize the lives of pedestrians or the driver in the event of extenuating circumstances that result in the vehicle partially losing control?

Isaac Asimov's laws of robotics were designed for exactly this. But his own stories showed those laws may not be sufficient for all situations the machines may come across.

Won't be long, really
news.mit.edu/2016/pinpointing-vehicles-with-high-precision-under-adverse-weather-conditions-0623


These people get it! It is frustrating to see how many people value convenience over freedom.

Don't worry goyim, self-driving cars are statistically safer than human driving. Sure, you may never crash because you drive carefully, and then die from your self-driving car driving off a bridge, but it's still statistically safer. Why don't you understand science, goyim?


Holy fuck, it's right there. They're literally this retarded. I will never drive one of these ever.

Look at all the other software that has ever existed: none of this fixing-shit-as-it-happens approach works. People actually need to have a clue about how to solve the problem from the start. In the case of self-driving cars, there's no known solution, so you can't possibly have a clue. As they admit, they already know it will break, and so they try to fit a statistic onto it (which misses so many points, but it's there). Computers are good for guided missiles and shit, not for driving your car. Any real software engineer, or anyone who knows shit about computer security, will laugh at this nonsense.

>archive.is/2016.07.03-182321/http://electrek.co/2016/04/09/tesla-autopilot-avoid-collision-video/
[OP's article, 3 months later]
[laughing_whores.jpg]


except you can be killed by glitches and hacking (which can already happen with things like uconnect and ECUs, but it will only get worse)
it won't. no networked computer system has ever worked well enough to bet my life on. I expect they will be so horribly broken I can find vulnerabilities and bugs in them by accident, which I do all the time

reuters.com/article/us-tesla-autopilot-dvd-idUSKCN0ZH5BW


The car might have done us all a favor.

Good thing you didn't even state whether you're talking about a population or a single competent individual. For a population, it's extremely easy to beat, even with the shittiest code. Most people drive while looking at their phones, while drunk, or they just ignore their surroundings while taking the work-home route for the 3000th time, etc. Most people don't even learn or practice driving; they just bruteforce their way past the driving test and get a car. If the slightest unexpected event happens, they will crash, because they don't actually know how to drive.

For a competent individual, I highly doubt a self-driving car is safer. AI is not a solved problem, and driving a car isn't even a computable problem. Anyone who has a clue about this stuff will not be surprised in the slightest when a self-driving car inexplicably decides to drive off the side of a bridge or into a pole on the side of an empty road. That's the nature of computing. If we were on reddit, 1000 neckbeards would pile onto me at this point with the "there's no difference between the human brain and an algorithm" meme.

I would be the first to accept it if self-driving cars were statistically safer, yet I wouldn't even board one that is running proprietary software.
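
To put rough numbers on the population case (all rates entirely made up, fatalities per million miles; the point is the ordering, not the values):

[code]
# Hypothetical rates, fatalities per million miles.
rate_population_average = 0.011  # mean over drunks, phone-starers, etc.
rate_careful_individual = 0.002  # attentive driver who actually practices
rate_self_driving       = 0.008  # the machine

print(rate_self_driving < rate_population_average)  # True: "statistically safer"
print(rate_self_driving < rate_careful_individual)  # False: a downgrade for him
[/code]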

easy peasy

software would have caused the mistake. who should pay for the accident is a matter of how responsibility is transferred in contracts from programmers to car company to customer. If you as a customer don't want to be held accountable, then don't fucking buy a self-driving car with nasty EULAs that place the responsibility on you.

how should a person handle the trolley dilemma? There's a reason why it's called a dilemma, although most people tend to pull the lever in the classical version of the trolley problem. Wanna kill pedestrians and save passengers? Wanna kill passengers and save pedestrians? Wanna run a random number generator in your code? Go for it. It's not like human drivers don't have to make these decisions. There are going to be deaths and trials either way.
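
to be concrete (names and casualty figures hypothetical), every one of those answers, including the RNG, is just a different policy function somebody has to ship:

[code]
# every choice, including the coin flip, is an explicit policy.
import random

OPTIONS = [
    {"name": "stay_in_lane",     "passenger_deaths": 0, "pedestrian_deaths": 2},
    {"name": "swerve_into_wall", "passenger_deaths": 1, "pedestrian_deaths": 0},
]

def choose(policy):
    if policy == "protect_passengers":
        return min(OPTIONS, key=lambda o: o["passenger_deaths"])
    if policy == "protect_pedestrians":
        return min(OPTIONS, key=lambda o: o["pedestrian_deaths"])
    if policy == "coin_flip":
        return random.choice(OPTIONS)
    raise ValueError(policy)

for p in ("protect_passengers", "protect_pedestrians", "coin_flip"):
    print(p, "->", choose(p)["name"])
[/code]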

fug

Why Asimov's Laws of Robotics Don't Work - Computerphile
youtube.com/watch?v=7PKx3kS7f4A

Isaac Asimov was a science fiction author; the laws were never meant to be taken seriously, and it would be practically impossible to implement those three laws even if you tried in the half-baked chinese bullshit being stuffed into those deathtraps.

I would like the freedom to have me or my children not be murdered by an incompetent drunk driver.
Sure, it's not free software, but people are simply saying that machines will always be safer.

>Good thing you didn't even state whether you're talking about a population or a single competent individual. For a population, it's extremely easy to beat, even with the shittiest code. Most people drive while looking at their phones, while drunk, or they just ignore their surroundings while taking the work-home route for the 3000th time, etc.
Yes, exactly.
No one is saying "replace all drivers with machines."
We're saying machines are definitely safer in the mix because retards will be retarded.

We already have the technology to determine where we are on a fully snow-covered road with LIDAR. It's getting to the point where we can say that, yes, automated driving is a net safety gain.

>There will always be bugs. But humans are buggier, even the more competent ones.
I'm not going to trust a machine. I keep my full attention on the road at all times, refuse to check my phone, etc., because anything else is fundamentally morally wrong. When I'm driving a metal death machine, my attention needs to be on the road at all times, and I need to know the rules of the road.

And I would like the freedom to not be murdered when the piece of shit has a hanging process, puts the accelerator to the max, and causes a pileup in the middle of the freeway.
And remember, it'll be the driver's fault that there was a machine malfunction, with how fucked corporate law is now.
I could understand reducing the likelihood of death by a drunk driver, but why on this godforsaken earth would you trust a massive corporation in today's political climate? And why take personal agency away and replace it with code pumped out by outsourced workers Pajeet and Ahmed, hired by these massive corporations?

safer than the incompetent*
I'd even take the bold step of saying that 20 years in the future a well-trained network with combined LIDAR and cameras will beat even a human at recognition and response times.
We all overestimate our competence.

I wouldn't take personal agency away, at all. That is always a risk, but the risk of undefined behavior will likely be smaller than the risk of incompetence on the road that we all experience every day.
That is, the known unknowns are less likely than the known knowns of biological imperfections, whether that's behavioral or a fundamental limit of a particular individual's biological machinery.

Fair enough. I'm sure you understand my concern though; it's a matter of ethics vs. statistics, and I'm not certain I can trust the statistics when there are examples like this Tesla crash, where it's simply brushed off as an anomaly instead of being addressed.
While the crash rate is considerably lower, it only pushes the human error element back one or two steps, onto the manufacturers and software developers.

Even the most expensive LIDAR, as well as cameras coupled with computer vision, have fundamental disadvantages. I'm not claiming they are perfect, at all. When combining both you have a lot of potential for undefined behavior, sure.
It's just that the technology is getting to a point where we can say that there's a net safety gain compared to biological machinery.

Also, even with a well-trained network, we don't know why it works. That's what most of these people are betting on.

Even considering all of that, again, there is still a net safety gain. No one's saying "take away personal agency, all humans are incompetent."
It's more "most humans are incompetent and fuck off with candy crush at the wheel, I'd rather drive alongside the machine with vast potential for undefined behavior than that sack of dogshit."

It _can't_ be addressed. They just add a new case to their code to try to handle this edge case. This is no different than Microsoft patching a 0day after they found 1 million machines were affected by it.

The only plausible argument I've heard so far is that self-driving cars, as shit as they are, will crash less overall than a large group of people measured over time. This has nothing to do with "biological machines" since the stats are probably just due to a bunch of retards being drunk / not giving a fuck.

average drivers are shit. the main reason we can't have flying cars is because the crashes would be awful. once we have driverless cars down and our measurements get precise enough, automated airplanes will be safer than human pilots and we'll be able to rely on them to fly through the air without crashing into the side of a building. electric car fuel can't melt steel beams


how about this one?

if the car is programmed to reduce fatalities, and it's safer to hit a big heavy truck or a volvo than a smaller car, is it unfair to prioritise hitting the more protected drivers?

or say you have to choose between hitting a cyclist with a helmet and one without. the cyclist with the helmet is more likely to survive, but is it fair to hit him if he's made more effort to be safe? if cars prioritise hitting those with helmets, there'd be an incentive not to wear a helmet so robot cars are less likely to kill you
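
toy numbers (entirely made up) for the helmet case; a fatality-minimizing cost function picks the protected guy every single time:

[code]
# hypothetical P(death if hit); minimizing expected fatalities always
# selects the better-protected target, which is exactly the bad incentive.
targets = {
    "cyclist_with_helmet":    0.3,
    "cyclist_without_helmet": 0.8,
}
print(min(targets, key=targets.get))  # cyclist_with_helmet
[/code]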

there's shitloads of ethical issues that come with this

there's also shitloads of non sequiturs and fedora tipping

And a temporary lapse of situational awareness? How about passing out or falling asleep? Poor peripheral vision, or vision in general, whether due to environmental or biological factors? Slow reflexes, which get even slower as you age? How about not knowing the fucking rules of the road in the first place?

Science fiction authors provide us with thought experiments that will have to be considered by future generations. Why would you be hostile to this kind of thing?

That kind of thing is rare

Same, and that's never happened to me. If you're talking about someone deciding to drive when he's tired, that still goes in the incompetence category, not biological limitation

It's illegal to drive if your vision doesn't pass a certain standard. But that doesn't matter. If you can't decide for yourself whether your vision is good enough to drive, that goes in the incompetence category, not biological limitation

Yes, there are rare cases where you will die because your reflexes aren't fast enough. But self-driving cars will randomly crash into some crap on the side of an empty road sometimes, because that's how heuristic algorithms fucking work. Which is safer? We need numbers. Look at the guy in OP's article. He posted to the internet boasting about how Tessy saved him from a collision, and then died 3 months later because of a defect in Tessy (>>622693). Sure, if you have 200 people in self-driving cars and 200 in normal cars, the former may have fewer incidents, but this guy still died in a case where he could have easily avoided it.

Not knowing the rules of the road goes into the incompetence category too, not biological limitation

call me a loony tinfoil-hat wearer but i don't want to drive a car that has a built-in tracking device and can be remotely controlled by the gov't

Pro tip: it's not

You sound like those liberal pieces of shit who want to ban guns. You probably suck at driving yourself and project this onto everyone else. We don't need self-driving cars, we just need to license better drivers. Anyone who says human drivers are more dangerous than computers is strawmanning at this point, since we have no solid evidence to come to that conclusion. Self-driving cars are restricted to specific roads and there are far fewer of them, and we still see them getting in accidents. Get fucked, you pieces of shit!

Not with future LIDAR systems.
news.mit.edu/2016/pinpointing-vehicles-with-high-precision-under-adverse-weather-conditions-0623

The statistics already show that Tesla's shitty autopilot is much safer than the worldwide and US averages.
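
For reference, Tesla's statement cited roughly one fatality in ~130 million Autopilot miles, against about one per 94 million miles in the US and one per 60 million worldwide. The arithmetic, for what a single data point is worth:

[code]
# Figures as cited in Tesla's statement; with exactly one fatality the
# error bars on the Autopilot number are enormous.
autopilot_miles_per_death = 130e6
us_miles_per_death        = 94e6
world_miles_per_death     = 60e6

print(autopilot_miles_per_death / us_miles_per_death)     # ~1.38x the US average
print(autopilot_miles_per_death / world_miles_per_death)  # ~2.17x the worldwide average
[/code]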

You sound particularly asshurt.

What could possibly go wrong.

I think any good programmer would be terrified of self driving cars

you have to be extremely ignorant of software to think this fad is a good idea

watch out for fanboys. the news article was full of them. they are insufferable.


This. Brain dead normalfags would probably be better off with an AI driver, because they're so fucking dumb, but a competent individual could almost certainly do better.

At any rate, I oppose driverless cars on grounds of not trusting the government/Google/etc. not to decide to accident me. I'm at the point where I'd be okay with sacrificing some potential safety for liberty. It's a bitter pill to swallow, given how driverless cars were for me a shining gem of a utopian future, and I used to really look forward to them. I came very close to actually working on them full-time, but Holla Forums and Holla Forums jolted me awake. You faggots just couldn't let me sleep.

If self-driving cars were nearly perfect, wouldn't it arouse suspicion if anyone had an accident in one?

No, due to muh statistics. They get one free kill every ~100 million miles.

why would you ever want a proprietary program to decide your safety, let alone run this world?

Maybe the victim was a high-value target.
I do think this is intentional. The fact that laws do not apply to robots makes it the perfect way to kill.