The only reason it's a thing is because some shitty fucks can make money off of it.
> The only reason it's a thing is because some shitty fucks can make money off of it.

Which is unfortunately true for the vast majority of all recent products and innovations reaching people's lives, technological or otherwise.
> I don't like the idea of autonomous machines like this mingling with the public at large. I'd be fine with it if the driver was legally responsible for any death / dismemberment / injury that resulted from the car's actions. Because you can't put cars in prison.

Putting people in prison after an accident happens doesn't actually do anything to stop the accident, since it already happened. Saying that you don't want the technology to exist because it's harder to punish someone if things go wrong, even if it will most likely lead to way fewer people dying overall, seems really petty and vindictive.
> Perhaps there could be an option: "my life before the life of other people, or after"? Also, in defense of Mercedes, just imagine if there are passengers; situations can be grey.

I'd also appreciate an option to allow the car to sacrifice my life if in so doing it can save the Protoss homeworld.
> Putting people in prison after an accident happens doesn't actually do anything to stop the accident, since it already happened. Saying that you don't want the technology to exist because it's harder to punish someone if things go wrong, even if it will most likely lead to way fewer people dying overall, seems really petty and vindictive.

I think it would make people think twice before buying one.
> I'd also appreciate an option to allow the car to sacrifice my life if in so doing it can save the Protoss homeworld.

En taro Tassadar.
I don't think anyone would go to jail if their self-driving car malfunctioned, unless they were modding the programming or something. Pretty clear that if anyone was held culpable, it'd be the company behind the program.
They said this years ago. I remember a thread with Mercedes stating this at the old place.
I see, OP is very late to the party
Haha. Good shit.
> Well this will definitely go over well for Mercedes. Good luck to the PR team on this if they want to try and defend it lmao.

This is a perfectly good decision. No way should my car flip my family off a bridge because some suicidal dude jumps in the way.
> This is a perfectly good decision. No way should my car flip my family off a bridge because some suicidal dude jumps in the way.

Intent doesn't need to exist. That's why it's an accident.
It's economics; people aren't going to buy a car that's programmed to kill them.
This is a perfectly good decision. No way should my car flip my family off a bridge because some suicidal dude jumps in the way.
The ethical considerations of autonomous vehicles cannot be overstated, especially when a situation involves choosing between two potential victims: a pedestrian or a passenger. And if a crash is unavoidable, does the car choose to hit a small vehicle or an SUV?
If algorithms are generally designed to do the former, not the latter, isn't this a form of discrimination against the owners of small vehicles who might not be able to afford larger ones?
Dieter Birnbacher, a professor of philosophy at the University of Duesseldorf, and Wolfgang Birnbacher, an FPGA system designer at IBEO Automotive Systems GmbH, concede that, as with any safety-critical technology, the general public is usually willing to accept a certain level of risk in exchange for the benefits of said technology.
"Despite all security measures, a residual risk is unavoidable, which raises questions: How safe is safe enough? How safe is too safe?" they wrote recently in the IEEE Intelligent Systems magazine.
Smart cars could reduce travel times, improve air quality, and eliminate virtually all accidents caused by human error. Ultimately, if public consensus determines an acceptable level of risk, the public becomes responsible for what happens in the face of that risk, the two experts said.
"It goes without saying that an egalitarian decision algorithm along these lines would lead to a radical shift of responsibility from the individual to the public. Neither the owner nor the passengers could be held responsible for the behavior of the vehicle any longer since risk preferences and conflict solving are determined in advance by societal consensus, leaving no room for individual intervention," the authors say.
The top three causes of car accidents are distracted driving, speeding, and drunk driving, according to the National Highway Traffic Safety Administration. Human error—which accounts for over 90% of accidents—would be largely eliminated with self-driving cars, leaving any remaining likelihood of accidents to autonomous vehicle design quality.
And there's the rub. How can car manufacturers make self-driving cars safer?
The answers are complex, but researchers from the University of Sao Paulo, Brazil propose one solution: an independent module, the Autonomous Vehicle Control (AVC).
The AVC is a safety system designed and built independently from the vehicle's uniquely manufactured system. The AVC can be installed into any vehicle and tested for industry safety standards across the board, no matter who the manufacturer is.
The idea is to implement an AVC that will both interact with the vehicle's systems and create a protection layer that is independent of the way the vehicle's system was developed, ensuring that, no matter how the car is designed by a manufacturer, it will meet all safety standards.
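A rough way to picture the AVC idea is as an independent wrapper that vets every command coming out of the manufacturer's own controller against externally certified safety limits. The sketch below is purely illustrative; all class names, commands, and limit values are my own assumptions, not details from the Sao Paulo researchers' design.

```python
# Hypothetical sketch of the AVC concept: an independent protection layer
# sitting between a vendor's driving logic and the actuators, enforcing
# manufacturer-agnostic safety limits. All names/values are illustrative.

from dataclasses import dataclass

@dataclass
class Command:
    speed_kph: float   # requested speed
    braking: bool      # whether braking is requested

class ManufacturerController:
    """Stands in for any vendor's proprietary driving logic."""
    def decide(self, obstacle_distance_m: float) -> Command:
        # Deliberately naive vendor logic: always cruises, never brakes.
        return Command(speed_kph=80.0, braking=False)

class AutonomousVehicleControl:
    """Independent safety layer: enforces limits no matter the vendor."""
    SPEED_LIMIT_KPH = 50.0
    MIN_SAFE_DISTANCE_M = 30.0

    def __init__(self, inner: ManufacturerController):
        self.inner = inner

    def decide(self, obstacle_distance_m: float) -> Command:
        cmd = self.inner.decide(obstacle_distance_m)
        # Clamp speed to the externally certified limit.
        cmd.speed_kph = min(cmd.speed_kph, self.SPEED_LIMIT_KPH)
        # Force braking when an obstacle is too close, overriding the vendor.
        if obstacle_distance_m < self.MIN_SAFE_DISTANCE_M:
            cmd.braking = True
        return cmd

avc = AutonomousVehicleControl(ManufacturerController())
print(avc.decide(obstacle_distance_m=10.0))  # braking forced on, speed clamped
```

The point of the design, as described, is that the wrapper never needs to know how the inner controller works, so the same safety layer can be tested and certified across manufacturers.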
What they found
The researchers identified three relatively universal preferences.
On average, people wanted to:
- spare human lives over animals
- save more lives over fewer
- prioritize the young over the old
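Read as a ranking, those three preferences amount to a lexicographic ordering over outcomes: the first criterion dominates, and later ones only break ties. A toy sketch, with entirely invented outcome data (none of this comes from the study itself):

```python
# Toy illustration of the three survey preferences as a lexicographic
# sort key: humans over animals first, then more lives, then the young.
# All outcome data is invented for the example.

def preference_key(outcome):
    """Higher tuples correspond to preferred outcomes."""
    return (
        outcome["humans_spared"] > 0,   # 1. spare humans over animals
        outcome["lives_spared"],        # 2. save more lives over fewer
        -outcome["avg_age_spared"],     # 3. prioritize the young
    )

outcomes = [
    {"name": "swerve",   "humans_spared": 2, "lives_spared": 2, "avg_age_spared": 70},
    {"name": "brake",    "humans_spared": 2, "lives_spared": 2, "avg_age_spared": 25},
    {"name": "continue", "humans_spared": 0, "lives_spared": 3, "avg_age_spared": 5},
]

best = max(outcomes, key=preference_key)
print(best["name"])  # "brake": humans beat animals, then youth breaks the tie
```

Because Python compares tuples element by element, the ordering is strictly lexicographic: "continue" loses despite sparing more lives, since it spares no humans.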
> This is almost a generic super villain statement. Mercedes is basically saying that the ends justify the means, and that regardless of what damage may come from this, it won't outweigh the damage prevented by trusting the average human driver.

That's not what's being said at all.
"Will be programmed to sacrifice pedestrians"
The article does a fair bit of fear-mongering, especially when it presents the Trolley problem as a real-world scenario, which most likely will never happen.
Here are some tidbits about the safety features and ethical concerns of self-driving cars.

What researchers found when they essentially quizzed people on the Trolley problem is, basically, that you save the many over the few.
> This is almost a generic super villain statement. Mercedes is basically saying that the ends justify the means, and that regardless of what damage may come from this, it won't outweigh the damage prevented by trusting the average human driver.

The average human driver is fucking awful and doesn't deserve anyone's trust.
Who is more likely to survive, the person with airbags or people being mowed down by a chunk of metal?
> I mean that would depend. If it was a freak accident, I see no reason for anyone to be punished. If it's found to be a frequent problem that is the result of a coding oversight, then the company should get in trouble. If the programming was hacked by the driver, the driver is responsible.

I'm not talking about an accident. I'm talking about a situation where, beyond reasonable doubt, the AI was negligent. Who goes to jail? If no one, then we're just disseminating responsibility to the point where no one is responsible for deaths.
> Some of you have some weird hope about machines and AI. I've witnessed plenty of tech go to shit for non-attributable reasons. Somehow coming to terms that a hairline fracture across the public space now defines life and death is a frightening jump for me.

I think you're missing the part where humans, as we already are, are terrible drivers as a whole, and while tech malfunctions may be a thing, this technology won't realistically be greenlit for public use until it's significantly safer than the typical human driver anyways.
> I'm not talking about an accident. I'm talking about a situation where, beyond reasonable doubt, the AI was negligent. Who goes to jail? If no one, then we're just disseminating responsibility to the point where no one is responsible for deaths.

If there was negligence in the coding of the AI, then any reasonable person would put the liability on the company responsible. It's not particularly hard to figure out.
> I think you're missing the part where humans, as we already are, are terrible drivers as a whole, and while tech malfunctions may be a thing, this technology won't realistically be greenlit for public use until it's significantly safer than the typical human driver anyways.

Regulation will be a human input. Bugtest all you want. One is negligence, one is design. One is a moment, the other a purpose. We're coming hot off the heels of the 737 MAX here.
> What if we just made a really efficient and convenient train system instead?

Uh, plenty of people have been killed by trains.
> Read more here:

As will all self-driving vehicles, unfortunately.
Autonomous murder machines, just rolling into the shop!
> If there was negligence in the coding of the AI, then any reasonable person would put the liability on the company responsible. It's not particularly hard to figure out.

WHO
Except it is; the article explains that the AI will choose to save the drivers in any scenario, but it's "ok" because it will pose significantly less danger than human drivers.
If there was negligence in the coding of the AI, then any reasonable person would put the liability on the company responsible. It's not particularly hard to figure out.
> Bugtest all you want. One is negligence, one is design. One is a moment, the other a purpose. We're coming hot off the heels of the 737 MAX here.

I'm finding it hilarious how you seem to think it's more ok for many people to die due to incompetence than for a few people to die due to a decision-making protocol, when those few would probably have died anyway with a human behind the wheel.
Maybe we should be more focused on how to prevent accidents from happening in the first place rather than being vindictive assholes who care more about punishing people than saving lives?