
People Become More Utilitarian When They Face Moral Dilemmas In Virtual Reality

Unknown Lamer posted about 7 months ago | from the morbid-virtual-reality dept.

Science 146

First time accepted submitter vrml writes "Critical situations in which participants' actions lead to the death of (virtual) humans have been employed in a study of moral dilemmas that just appeared in the journal Social Neuroscience. The experiment shows that participants' behavior becomes more utilitarian (that is, they tend to minimize the number of persons killed) when they have to make a decision in virtual reality rather than in the more traditional settings used in moral psychology, which ask participants to read text descriptions of the critical situations. A video with some of the VR moral dilemmas is available, as is the paper."


146 comments


Utilitarianism (1, Funny)

turkeydance (1266624) | about 7 months ago | (#45910285)

it's the new you!

Utilitarimeter (0)

Anonymous Coward | about 7 months ago | (#45911349)

= Fewer people murdered per second

Captcha: slaying

Spoiler Alert (1)

Cryacin (657549) | about 7 months ago | (#45912731)

This is exactly like Ender's Game. If you dissociate yourself from the consequences of reality and think things are just a game, or an exercise without consequences that you want to do your best at, you will achieve the same goals faster and better, with fewer losses, than if you are empathising about the consequences from the affected individuals' perspective.

However, in life, our goals and purposes are changed by empathy for others, which is what drove the evolution of a moral compass.

Measurement of utility (5, Interesting)

jeffb (2.718) (1189693) | about 7 months ago | (#45910293)

So, we're assuming that all participants considered the death of (virtual) humans to be a bad thing?

Re: Measurement of utility (1)

Anonymous Coward | about 7 months ago | (#45910519)

So that makes it moral to kill people?

Re: Measurement of utility (1)

icebike (68054) | about 7 months ago | (#45910807)

Maybe, I don't know, read the summary (since the article is paywalled)?

I know, right, read something more than the title on Slashdot? What was I thinking?!

Re: Measurement of utility (0)

Anonymous Coward | about 7 months ago | (#45912799)

"Killing" virtual people is a simulation of killing. A computer simulation is not alive, and therefore cannot be killed by definition [reference.com] . Nice strawman through.

Re:Measurement of utility (1)

Anonymous Coward | about 7 months ago | (#45910593)

It's pretty obvious that the participants thought they were playing a game, and they thought they were losing an equal number of virtual "points" for each virtual human that died.

(Most people are familiar with the concept of video games, the study took place in something that resembles a video game, and people like to "win" games by getting the highest possible score. What the F*** were you expecting, researchers?)

Re:Measurement of utility (0)

Anonymous Coward | about 7 months ago | (#45910713)

It doesn't appear that the *researchers* are the ones assigning 'points' to the various participants. Therefore, any assumptions the participants are making about the 'value' of the various virtual humans lie entirely within the participant, uncolored by said participant's knowledge of the affected individuals.

Re:Measurement of utility (0)

Anonymous Coward | about 7 months ago | (#45910809)

In the absence of feedback about how many points you're losing per death, the default assumption is that all deaths cost the same number of points (namely: -1 point per death).

Re:Measurement of utility (1)

NatasRevol (731260) | about 7 months ago | (#45911627)

I guess the bigger question is, if this isn't a game or there is no assigned value to virtual humans, why would a participant value them at all?

They're pixels, not people.

Re:Measurement of utility (0)

Anonymous Coward | about 7 months ago | (#45910773)

It depends. There are still people who haven't even tried games despite (ab)using computers for decades; I know a number of people like that. If the researchers had such people as test subjects, they would not have had the score-keeping mentality, though I also think such people perceive characters in games as less real than avid players do. So either way, I doubt the validity of the results. If I were to conduct this kind of utilitarianism experiment, I might have test subjects believe that they decide whether one person or several people receive an electric shock, where in the latter case the test subject must throw the switch. Obviously, I'd also use actors to be humane, but that's such a well-known trick by now that maybe it would make those results invalid as well. And the difference between suffering pain and dying is also pretty substantial.

Re:Measurement of utility (0)

Anonymous Coward | about 7 months ago | (#45911035)

When explaining the experiment to participants, they could just use an instructional video of the process, showing in-game people's deaths tied to puppies and kittens being electrocuted.
They could even feed some (fake) sounds back to them after their in-game decisions.

Re:Measurement of utility (0)

Anonymous Coward | about 7 months ago | (#45911105)

But in games you get points for killing people. I guess the participants weren't familiar with computer games or military service.

Re:Measurement of utility (1)

Koby77 (992785) | about 7 months ago | (#45912893)

I think that's the point of the experiment: to create more life-like situations in an attempt to find out how people would actually react in real life. A test that actually kills people would obviously be immoral. But would people react differently than they do to a text description of events? What happens if we someday create a simulation so life-like that human participants believe the virtual victims are real? We can't answer that yet, but we're inching closer and closer. And it appears that we are trending towards "save more people" rather than "protect those in the right-of-way" or "don't change events if it leads to someone else's death".

Re:Measurement of utility (1)

icebike (68054) | about 7 months ago | (#45910751)

So, we're assuming that all participants considered the death of (virtual) humans to be a bad thing?

Not only that, but we are assuming that how humans behave in obvious simulations of one sort or another has any bearing on situations in the real world.

As best I can fathom, the report suggests people are less bored and more willing to play along in a VR simulation than they are when reading (and trying to imagine) text based scenarios.

Probably explains (yet again) why text-based MUDs are going extinct while everyone and their brother is coming out with yet another online virtual reality game.

Re:Measurement of utility (2)

Rhacman (1528815) | about 7 months ago | (#45911695)

Personally, I'd be overwhelmed with curiosity about how the game physics would respond to situations the developers may not have considered. What happens if you rapidly cycle the train switch, or flip it right as the train is passing over it? Perhaps you could get the train to derail and accordion, thus clearing both sides of the tracks and destroying the train itself.

Utilitarianism is correct (1)

i kan reed (749298) | about 7 months ago | (#45910295)

Every other moral system makes claims it can't provide real justification for. Minimizing harm and maximizing benefit is the best you can manage (and sometimes you don't know enough to do even that).

Re:Utilitarianism is correct (3, Interesting)

tiberus (258517) | about 7 months ago | (#45910459)

In the various versions of the train dilemma, you have two options: 1) don't act, and five people will die; or 2) act, and only one person will die. While I see the logic of your argument, and tend to agree that it gives the best overall numerical result, it does seem a rather chilling choice. It sidesteps the premise that by taking action the actor becomes a murderer, having taken an action that directly resulted in the death of another. In the other case, the actor is only a witness to a tragic event.
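
For the record, the utilitarian arithmetic being weighed above is trivial to write down. A minimal sketch in Python (purely illustrative) that captures the body count and, tellingly, nothing about the agency/blame question:

    # The bare utilitarian calculus over the two options: pick whichever
    # minimizes deaths. Note it is completely silent on responsibility.
    deaths = {
        "don't act":        5,  # the trolley hits the group of five
        "throw the switch": 1,  # the trolley is diverted onto the lone person
    }

    choice = min(deaths, key=deaths.get)
    print(choice, deaths[choice])  # -> throw the switch 1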

Re:Utilitarianism is correct (4, Insightful)

Calydor (739835) | about 7 months ago | (#45910491)

He is not only a witness if he KNOWS that he had the power to prevent the five deaths at the cost of one other. Inaction is also an action by itself.

Re:Utilitarianism is correct (0)

Anonymous Coward | about 7 months ago | (#45910547)

Sure, the choice is one count of murder or five counts of negligent homicide.

Re:Utilitarianism is correct (3, Funny)

Anonymous Coward | about 7 months ago | (#45910675)

Maybe it's best to try and switch the lines but fail. Then you only have one count of attempted murder.

Re:Utilitarianism is correct (0)

Anonymous Coward | about 7 months ago | (#45911853)

You might still get hit with the five deaths, depends on your state's Good Samaritan laws.

Re: Utilitarianism is correct (0)

Anonymous Coward | about 7 months ago | (#45912961)

It is in fact best to throw the switch halfway, forcing the trolley to derail. Even with modern failsafe controls, this can be done on purpose with correct timing.

Re:Utilitarianism is correct (5, Interesting)

AthanasiusKircher (1333179) | about 7 months ago | (#45910625)

He is not only a witness if he KNOWS that he had the power to prevent the five deaths at the cost of one other. Inaction is also an action by itself.

Yes, and that ultimately leads to the "next level" of utilitarian dilemmas. What if you're a doctor with five terminal patients who all need different organs, and in walks a healthy person who is (miraculously) compatible with all of them?

Should you kill the healthy person, harvest the organs, and save the five terminal patients? (For the sake of argument, we assume that the procedures involved have a high chance of success, so you'll definitely save a number of people by killing one.)

Many people who say we should flip the switch in the trolley problem think it's wrong to murder someone to harvest their organs and ensure the same outcome. Why is "inaction" appropriate for the doctor, but not in the case of the trolley?

(I'm not saying I have the right answers -- but once you start down the philosophical path of utilitarian hypotheticals, there's a whole world of wacko and bizarre situations waiting to challenge just about anyone's moral principles. I can't wait until the "I was kidnapped and forced to keep a famous violinist alive" scenarios come up!)

Re:Utilitarianism is correct (1)

DeadDecoy (877617) | about 7 months ago | (#45910831)

I think the scenarios handle the worth of the sacrificed individual differently. In the clinical case, the individual is not just being killed; they are objectified as a set of resources to be exploited. In the train case, the individual is killed for being in the wrong place at the wrong time. I think most people would make this distinction out of empathy. That is, they may be more OK with dying due to an unfortunate set of circumstances, and not OK with dying to suit other people's functional needs.

Re:Utilitarianism is correct (4, Insightful)

Anonymous Coward | about 7 months ago | (#45910853)

The healthy person isn't part of a potentially doomed set unless you harvest his organs.
You cannot ethically *start* the process of saving lives by unnecessarily killing someone.

In the train scenario, either 5 people die, or 1 person dies. There is no other option, because there's no way to stop the train in time. Your choice is simply whether to:
a) minimize the deaths by action, or
b) maximize them by inaction.

In the organ harvest scenario, you have a potentially doomed set, and a non-doomed set. You also have numerous options beyond:
a) kill the healthy guy for his organs, or
b) don't kill healthy guy for his organs.

For example, you also have:
c) convince the healthy guy to donate a subset of his organs which can be spared in order to save some of the terminal patients.
d) continue looking for compatible harvested organs.
e) harvest organs from the first terminal patient to pass on in order to save some of the other terminal patients.

There's more, but I think you can see the difference between the two scenarios.

Re:Utilitarianism is correct (2)

SleazyRidr (1563649) | about 7 months ago | (#45911439)

The one person wasn't "potentially doomed" at the start either. He was just crossing a set of empty railroad tracks.

Re:Utilitarianism is correct (0)

Anonymous Coward | about 7 months ago | (#45911487)

You cannot ethically *start* the process of saving lives by unnecessarily killing someone.

So, no throwing the switch on the train tracks, right?

The key difference between the two scenarios is the level of risk undertaken by the "innocent" bystander. Walking on train tracks is known to be somewhat risky; walking into a doctor's office is an inherently low-risk activity (there are no zero-risk activities in life). Juxtapose the two, crank up the pressure from both sides of a moral argument (the doctor's patient is Hitler and the guy on the tracks is a pediatric brain surgeon!) until you set someone's moral compass spinning, and uncomfortable little facts about how we value things and people emerge.

It doesn't matter what philosophical framework you pick; someone has contrived scenarios that will make you squirm. That's the fun part of moral philosophy. The not-so-fun part is accepting the philosophical hill you choose to die on. (Or, absent picking a hill, watching yourself slowly become a moral nihilist.)

Re:Utilitarianism is correct (2)

AthanasiusKircher (1333179) | about 7 months ago | (#45912977)

There's more, but I think you can see the difference between the two scenarios.

Most of your argument makes the assumption that the patients are not close to death. What if they are? (Disaster scenario or something.) And what if the healthy guy says, "No!" even to the idea of donating some organs?

If you find it repugnant to kill him, do you still favor forced "donation" of his organs if it won't kill him, but will save the lives of other people in imminent danger of dying? After all, you seem in favor of killing a guy in one scenario to save five people; what's wrong with stealing a kidney?

(To be clear, I'm not arguing either side of anything in these debates. I'm just bringing in the kind of questions that moral philosophers do....)

Re:Utilitarianism is correct (1)

Lazere (2809091) | about 7 months ago | (#45910859)

I think it's the level of involvement in the death of the one. In the train example, you would be directly responsible for the person's death, but ultimately it's something else that actually does the killing. In the doctor example, not only would you be directly responsible for the person's death, but you'd be the one doing the killing and cutting. It's easier to distance yourself when somebody is being squashed by a train; not so easy if you're cutting them open and harvesting their organs. Additionally, it could be some strange sense of property that influences the decision. People may be less willing to take what they perceive as somebody's "property" to save the lives of others.

Re:Utilitarianism is correct (1)

icebike (68054) | about 7 months ago | (#45911031)

The inaction by the doctor is required by law. (More than a few doctors have or would have taken matters into their own hands).

So the situation is quite different.
The simulation presents false choices in unrealistic situations, and therefore you can't attribute much insight to the study; you would probably learn more watching a few rounds of team Capture the Flag.

Re:Utilitarianism is correct (0)

Anonymous Coward | about 7 months ago | (#45912853)

The doctor scenario is iterative -- it can happen again, and if we know this is a reasonable risk when you walk into a hospital, people won't walk into hospitals at all, nor do anything that could lead to their organ compatibility becoming known. The train scenario is a once-in-the-lifetime-of-civilizations scenario.

It also seems rather absurd that there's only one healthy person to harvest organs from (unless you introduce organ compatibility, but that gets back to the iterative dilemma of making it undesirable to take any test for organ compatibility). They can't drive outside and kidnap a random healthy hobo?

You have to set this on a desert island or something that mysteriously has medical equipment for it to parallel more closely.

It is a tough question, to be sure. I think an answer to the classic train problem that leads to 5 deaths instead of 1 is a wrong answer, and I think an answer that leads to people being kidnapped off the street and rendered into organs is also a wrong answer, so we have to navigate that.

The violinist can be resolved in a couple of ways: one, you have the absolute right to try to defend yourself as the kidnappee (granted, that applies to the violinist as well); and two, just because they have the right to live doesn't mean they have the right to live at the expense of another. In other words, you can't simply use utilitarianism to argue that, if somehow sacrificing the eyeballs of 1000 people would save a man, then the eyeballs must be sacrificed.

Re:Utilitarianism is correct (1)

AthanasiusKircher (1333179) | about 7 months ago | (#45910749)

He is not only a witness if he KNOWS that he had the power

By the way, perhaps you were making a reference with your KNOWS. If not, I'd be careful about emphasizing the word knows when talking about the trolley problem, unless you've dug around in the vast philosophical literature on it, where knows in italics has special meaning. If you're not careful, pretty soon you pile on philosophical nonsense conundrums and end up with something like this [mindspring.com].

Re:Utilitarianism is correct (1)

icebike (68054) | about 7 months ago | (#45910919)

He is not only a witness if he KNOWS that he had the power to prevent the five deaths at the cost of one other. Inaction is also an action by itself.

And thinking outside the box is an action as well. Like maybe shouting and blowing the train horn, or throwing rocks, depending on where said witness stands. Even deaf people react to being hit with a rock.

The dilemmas shown are false, and it's amazing that the participants would even take the simulation seriously enough to give meaningful results.

Re:Utilitarianism is correct (0)

Anonymous Coward | about 7 months ago | (#45912929)

The dilemmas are by definition true: they are posed *as* dilemmas where there is in fact no other option. What they might not be is realistic.

It's a fine distinction, but you can, in principle, make a rational decision with an unrealistic true dilemma whereas a realistic false dilemma is irrational.

In your case -- you can't hit any of them with a rock because they are very far away, and there's a strong, loud wind anyway, which also blocks the sound. Also, you have the same disease as Stephen Hawking and as such have almost no interaction with the outside world other than your eye movements, which can only control the train switch.

For as many "outside the box" solutions as you can come up with, somebody can find an outside-the-box answer for it and put you back in the box.

If you don't want to play, that's fine, but I find it a bit frustrating when everybody rains on the parade. "Stop thinking these things guys!".

Re:Utilitarianism is correct (3, Interesting)

blackraven14250 (902843) | about 7 months ago | (#45910633)

It's a chilling choice, but the train dilemma is flawed when you consider that it would never happen in real life anyway. I'm not saying that the 5-vs.-1 scenario wouldn't happen, but I highly doubt someone presented with it would even consider the second option at all. If the thought doesn't even cross the person's mind, there's no choice being made between the options. If no choice is being made in reality, the thought experiment is worthless as a way to explain human behavior. The whole concept of the thought experiment is undermined when you realize it's not something any person would ever end up doing, because of another variable the thought experiment does not consider.

Re:Utilitarianism is correct (1)

Livius (318358) | about 7 months ago | (#45910979)

The flaw is that people make decisions of that nature immediately and emotionally based on heuristics rooted in instinct. For tens or hundreds of thousands of years of natural selection, no-one has ever been presented with a situation that featured the ideal certainty that the train dilemma is based on.

Re:Utilitarianism is correct (1)

xevioso (598654) | about 7 months ago | (#45911465)

Well, that is clearly untrue; these sorts of examples happen all the time in war.

The Nazis were well known for presenting innocent people with these sorts of torturous dilemmas before committing atrocities.
E.g., "Choose which one of your 5 children will die, Jewess, or I will shoot all of them."
Multiple accounts exist of these sorts of evil choices having been foisted on people during WWII.

So while the Train dilemma does occur on occasion, I'd argue that in reality, there is no correct choice at all. They are all equally bad if you force someone to make a choice like that.

Re:Utilitarianism is correct (0)

Anonymous Coward | about 7 months ago | (#45911845)

So while the Train dilemma does occur on occasion, I'd argue that in reality, there is no correct choice at all. They are all equally bad if you force someone to make a choice like that.

In reality, there's always a third choice, even if the likelihood of success is low. In the above situation, I would assume that many would rather attempt to fight the Nazi than choose the lesser evil.

Re:Utilitarianism is correct (1)

xevioso (598654) | about 7 months ago | (#45911917)

What if the person is tied up? Remember, not choosing is a choice. You can attack the Nazi, and after he knocks you back down or ties you up, he can ask you to make the choice again. These things really happened; there are multiple accounts of sadistic people forcing others into morally repugnant situations throughout history.

Re:Utilitarianism is correct (5, Insightful)

Anonymous Coward | about 7 months ago | (#45910469)

Until you're faced with the choice of saving your sister versus five anonymous others.

Utilitarianism is false, because no human being can know how to globally maximize the good. They just believe they do, and then use "the end justifies the means" to commit atrocities.

Our quirky affective behavior is arguably an optimal heuristic in a world where you only have a peep-hole view of the global state of things. For example, in those trolley dilemmas you're _told_ that the trolley is random. But we're hard-wired to believe that nothing is random, which means you have to fight a belief that the trolley was purposefully sent to kill those five individuals. Maybe the lone individual would save the world. In any event, maintenance of the status quo (letting the five get killed) is, again, arguably optimal behavior when there is insufficient information to justify doing something else.

Re:Utilitarianism is correct (1)

elfprince13 (1521333) | about 7 months ago | (#45910515)

This comment should be at +5 Insightful. Mods, do your duty.

Re:Utilitarianism is correct (1)

dpidcoe (2606549) | about 7 months ago | (#45910705)

I'm out of mod points :(

Re:Utilitarianism is correct (0)

i kan reed (749298) | about 7 months ago | (#45910883)

Being unable to attain a moral behavior, due to one's own needs, doesn't make it less moral.

Re:Utilitarianism is correct (1)

xevioso (598654) | about 7 months ago | (#45911493)

It's possible that neither choice is morally correct, and that a person is placed into a situation where both choices are equally immoral.

Re:Utilitarianism is correct (0)

Anonymous Coward | about 7 months ago | (#45911543)

The real argument against utilitarianism is that it demands omniscience.

Re:Utilitarianism is correct (0)

Anonymous Coward | about 7 months ago | (#45911029)

Deontology is false, because no human being can follow the logical consequences of his or her actions if those actions were to become universal axioms.
Virtue ethics is false, because the mean between virtue and vice for one individual will be different for another, offering no meaningful morality.
So really... I guess normative positivism is correct, and we can define "good" and "bad/evil" as "things we like which make us feel warm and fuzzy" and "things we don't like which make us uncomfortable or sad".

Re:Utilitarianism is correct (0)

Anonymous Coward | about 7 months ago | (#45911939)

>Utilitarianism is false, because no human being can know how to globally maximize the good. They just believe they do, and then use "the end justifies the means" to commit atrocities.

All systems of morality are "false" because they are all subject to people's interpretations. What is the alternative? Do we presuppose some system of universal morality? It seems likely that people don't know how to correctly pick a system of universal morality either. They just think they do. Then they use it to commit atrocities. For example, Kant says if there is a murderer at the door and he asks you where his intended victim is that you should tell him since lying is wrong and all. Then we have things like Holy wars as well. In the end, people pick the course of action they want to take and then rationalize a moral justification for it whether or not they are utilitarians.

Re:Utilitarianism is correct (1)

eepok (545733) | about 7 months ago | (#45912053)

There are many flavors of utilitarianism, and as with all forms of ethics, philosophy, and science, the later versions tend to be the best.

Utilitarianism is a sub-category of consequentialist ethics within which are multiple versions of Utilitarianism. One of the first descriptions sought to maximize pleasure and minimize pain. That's old and busted. Another sought to maximize happiness (slightly different). These versions of Utilitarianism are fairly easily defeated by what I call the World Cup Conundrum: You are in the centralized satellite transmission station responsible for beaming the World Cup out to 75% of the world's population. The score is tied and there's 60 seconds left on the clock. Someone is charging the goalie with the ball when the power cuts out. A major power cable has snapped and there is no time to fix it properly. The only way to resume transmitting the game is to throw your co-worker on the cable gap and have his body act as a transmission line. Do you do it? Do you sacrifice the life of one man for the 45-second happiness of billions of people? According to these archaic versions of Utilitarianism, it may be your duty to do so.

But philosophies evolve.

One of the most modern and widely accepted versions of Utilitarianism holds that the utilitarian calculus should consider the *preferences* of all those affected by a decision and weigh those against the various potential actions. In the World Cup conundrum, instead of considering the happiness or pleasure of the audience, it's your duty to consider their preference to be entertained and weigh the value of that preference against your co-worker's preference to live (and his family's, etc., to keep him alive). Most people would agree that the preference for the ongoing life of your co-worker outweighs the entertainment preferences of even billions of people. Thus, the best choice is simply to do your best to fix the problem while not killing your co-worker. If everyone misses the last minute of the game, oh well. It was the best of all known options.

Evaluation:
Did you use the Utilitarian calculus to choose your course of action?
Did you use your best rational approximation of the preferences of those affected to choose your course of action?

If you answered yes to both, then your duty has been fulfilled. You've done the best thing you could do.
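
A minimal sketch of that preference-weighted calculus in Python. All the weights below are invented for illustration (assumptions, not from the post); the point is only that a preference to live can be weighted to dominate any number of entertainment preferences:

    # Preference utilitarianism, roughly: score each action by summing the
    # preference-weighted outcomes for everyone affected, then take the best.
    PREF_LIVE = 1e9          # assumed weight of the co-worker's preference to live
    PREF_ENTERTAINED = 1e-3  # assumed weight of one viewer's wish to see the goal
    VIEWERS = 5e9            # the ~75% of the world's population watching

    actions = {
        # satisfies the viewers but tramples the co-worker's preference to live
        "throw co-worker on the cable": VIEWERS * PREF_ENTERTAINED - PREF_LIVE,
        # fix it properly; worst case, every viewer misses the last minute
        "fix the cable, miss the goal": -(VIEWERS * PREF_ENTERTAINED),
    }

    best = max(actions, key=actions.get)
    print(best)  # -> "fix the cable, miss the goal", matching the post's conclusion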

Re:Utilitarianism is correct (2)

CTachyon (412849) | about 7 months ago | (#45912715)

Utilitarianism is false, because no human being can know how to globally maximize the good.

This is like saying "mathematics is false, because no human being can know if a statement should be an axiom or not". In both cases the subordinate "because" clause is trivially true, but not logically related to the independent clause it pretends to justify. Mathematics is a tool for generating models, some of which are useful for approximating how the real world behaves; utilitarianism is a subtool within mathematics that's appropriate for generating models of the part of reality we call "human morality".

They just believe they do, and then use "the end justifies the means" to commit atrocities.

Every proposed moral system has been used to justify at least an atrocity or two at some point: utilitarianism, deontology, moral relativism, moral absolutism, every goddamn religion you care to name — even Buddhism! (What the hell, right?) The truth is that people choose an action, then they justify their action by creating a post hoc story that rationalizes why the chosen action was Right, and it makes no sense to blame the justification instead of the choice.

Morality itself is a pattern in the brain that shapes what one chooses — how one resolves the balance between conflicting goals — and it's not actually an object-level belief that one can directly observe with conscious thought. If you give people books to read about object-level moral beliefs, the readers don't become more moral or less moral, they just get better at crafting post hoc justifications.

(Also, as it turns out, utilitarianism by itself is not a great model for human behavior, but it actually does pretty well if you extend it with uncertainty in the Bayesian sense. More so if you go the extra step and add causality to the model (fixing the edge cases that crop up in more naïve decision theories [wikipedia.org] that treat actions as evidence). If the space of possible futures is small enough, you can even wrestle the conditional probabilities into submission, e.g. using Judea Pearl's causal networks [wikipedia.org], and get concrete answers that take that uncertainty into account -- still a high bar, but more tractable than "noooo, it's not worth doing unless it's perfect". Many human behaviors that seem irrational in a Homo economicus utilitarian calculus suddenly look perfectly rational if you model the study participant as, say, a Pearlian estimator with a low computed probability for P(stranger will actually give $100 | stranger says they'll give $100 if I were to do X AND I counterfactual-do(X)).)
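
A toy illustration of that last point (the numbers are invented, not from the post): once payoffs are weighted by your credence that the stranger follows through, "irrationally" refusing the offer is just expected-utility maximization.

    # Expected utility under Bayesian uncertainty: weight each outcome's
    # payoff by its probability. Stranger offers $100 for doing X; X costs $10.
    def expected_value(outcomes):
        # outcomes: iterable of (probability, payoff) pairs
        return sum(p * v for p, v in outcomes)

    p_pays = 0.05  # assumed low credence that the stranger actually pays up

    do_x   = expected_value([(p_pays, 100 - 10), (1 - p_pays, -10)])
    refuse = 0.0

    print(do_x, refuse)  # -5.0 0.0: refusing is the rational move at this credence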

Re:Utilitarianism is correct (3, Insightful)

elfprince13 (1521333) | about 7 months ago | (#45910499)

Harm and benefit according to whose definition? Utilitarianism is incredibly subjective.

Re:Utilitarianism is correct (3, Insightful)

Rockoon (1252108) | about 7 months ago | (#45910905)

Harm and benefit according to whose definition? Utilitarianism is incredibly subjective.

Exactly. I recognize full well that killing 1 will save 5, and in general I do not have a moral problem with choosing to alter fate to change the outcome to favor the 5, but I do not view any of the participants in the video cases as being faultless.

You and others are walking down the train tracks, a train is coming, and none of you move. Why aren't you moving? Maybe that lone guy on the side track knows that the train isn't going to run down his track, which full well makes me a murderer if I divert the train to his track. The larger group has to take responsibility for their own damn actions.

That, my friend, is utilitarian in my eyes.

Re:Utilitarianism is correct (1)

xevioso (598654) | about 7 months ago | (#45911527)

That never exists in the real world, though.

A real situation would involve war: a Nazi in a ghetto telling a woman to choose which of her five children will die. He will hand the gun to her, and she must shoot the child; otherwise the Nazi will kill all five.

THAT is a real-world Train dilemma, and utilitarianism has no say here, because there is no good choice. Each choice is equally immoral. Probably the only thing the mother could do to absolve herself of the moral guilt she will surely feel is to tell the Nazi to flip a coin.

Re:Utilitarianism is correct (0)

Anonymous Coward | about 7 months ago | (#45911815)

In that situation, the optimal choice is to shoot herself or (attempt to) shoot the Nazi.

She's being toyed with. There's clearly no imperative for the Nazi to follow his own rules, so cooperating with him is the least moral choice. Diverting his attention away from the children and toward herself will either cause him to leave the children alive (what's the point in killing them when it doesn't serve the purpose of torturing the mother) or kill the children, too (which likely would have happened had she cooperated anyway).

Re:Utilitarianism is correct (1)

xevioso (598654) | about 7 months ago | (#45912901)

Well, my example was somewhat facetious; I doubt there's an example of a Nazi handing the woman the gun to do the job herself, rather than essentially demanding that she make the choice. In the few cases I've read about (it's been many years) where this happened, the woman placed herself in front of her kids, demanding the Nazi spare her kids' lives in exchange for her own. Of course, you can imagine how that turned out.

The point is there is no correct choice in the real world in a situation like this.

Re:Utilitarianism is correct (1)

Livius (318358) | about 7 months ago | (#45911013)

Something harming the gods would certainly be vastly worse than something that only harms mere mortals. That's the utilitarian calculus we've had for most of human history.

Re:Utilitarianism is correct (0)

Anonymous Coward | about 7 months ago | (#45910799)

Every other moral system makes claims it can't provide real justification for.

What you say is completely subjective.

Re:Utilitarianism is correct (1)

i kan reed (749298) | about 7 months ago | (#45910923)

Ah, you caught me in my little joke. I didn't affirmatively make the implied claim that utilitarianism doesn't also fall into that category.

Re:Utilitarianism is correct (0)

Anonymous Coward | about 7 months ago | (#45910945)

I believe that the question studied is how well people are really able to act like that in the absurdly unlikely event that they face a situation resembling these. It's sort of like trying to get an answer to "would you give a lot to charity if you won the lottery?" You might think you can answer it correctly and honestly, but in reality, until you're in that situation, you cannot know. It's such a life-altering event that you change as a person. (I personally know of a friend of a friend who used to be a generous, nice guy and became a paranoid, greedy man obsessed with not losing any of his money after indeed winning the lottery.) These scenarios - if they were real - are just as life-altering, and the question is about human nature.

Re:Utilitarianism is correct (0)

Anonymous Coward | about 7 months ago | (#45911055)

Every other moral system makes claims it can't provide real justification for. Minimizing harm and maximizing benefit is the best you can manage(and sometimes you don't know enough to do that either)

Not just sometimes. Always. After all, that child you are saving might be the next Hitler.

All you can do is make a guess, or at best estimate probabilities.

Also, harm and benefit do not even have a universal total order. Does it do more harm for someone to lose a few dollars, or for someone else to miss the bus?
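
A small sketch of the no-total-order point in Python (the weights are assumptions, which is exactly the problem): ranking harms across incommensurable dimensions forces you to pick arbitrary exchange rates, and the ranking flips with the rates.

    # Two harms along different dimensions. Comparing them requires an
    # exchange rate between dollars and minutes -- and any rate is arbitrary.
    harm_a = {"dollars_lost": 5, "minutes_lost": 0}   # loses a few dollars
    harm_b = {"dollars_lost": 0, "minutes_lost": 20}  # misses the bus

    def worse(x, y, per_dollar, per_minute):
        # Collapse each harm to one number using the chosen (arbitrary) weights.
        score = lambda h: h["dollars_lost"] * per_dollar + h["minutes_lost"] * per_minute
        return "a" if score(x) > score(y) else "b"

    print(worse(harm_a, harm_b, 1.0, 0.1))  # -> a (minutes valued cheaply)
    print(worse(harm_a, harm_b, 1.0, 1.0))  # -> b (minutes valued like dollars)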

Re:Utilitarianism is correct (2)

TapeCutter (624760) | about 7 months ago | (#45911097)

The first time I heard this dilemma it was posed by a bible basher attempting to recruit me into his church. In that version the individual is your own child. The point is that God would flip the switch and sacrifice his son to save everyone else whereas a mere human would normally save their child. Why an omnipotent God could not break the rules and save both the individual and the group was left unexplained.

Disclaimer: Grandad to three. The instinct to protect your child can overcome the instinct to defend yourself, sacrificing a bunch of strangers to save your own child is a no-brainer for most parents.

Re:Utilitarianism is correct (1)

i kan reed (749298) | about 7 months ago | (#45911277)

The first time I heard this dilemma it was posed by a bible basher attempting to recruit me into his church. In that version the individual is your own child. The point is that God would flip the switch and sacrifice his son to save everyone else whereas a mere human would normally save their child. Why an omnipotent God could not break the rules and save both the individual and the group was left unexplained.

Disclaimer: Grandad to three. The instinct to protect your child can overcome the instinct to defend yourself, sacrificing a bunch of strangers to save your own child is a no-brainer for most parents.

Which is the justification for all sorts of terrible things that are actually done. We hold it as high praise that one is self-sacrificing for their relatives, but evolutionarily, we also have to understand that as a self-serving position.

Re:Utilitarianism is correct (0)

Anonymous Coward | about 7 months ago | (#45911295)

The first time I heard this dilemma it was posed by a bible basher attempting to recruit me into his church. In that version the individual is your own child. The point is that God would flip the switch and sacrifice his son to save everyone else whereas a mere human would normally save their child. Why an omnipotent God could not break the rules and save both the individual and the group was left unexplained.

Disclaimer: Grandad to three. The instinct to protect your child can overcome the instinct to defend yourself, sacrificing a bunch of strangers to save your own child is a no-brainer for most parents.

That's a terrible analogy. I'd sacrifice one of my children to save EVERYONE else. Who wants their kid to grow up all alone on a lifeless planet? "I Am Legend" wasn't a happy existence after all, even if it would be cool for a few months.

Re:Utilitarianism is correct (0)

xevioso (598654) | about 7 months ago | (#45911567)

Except that the bullshit of this explanation is shown by the fact that "his son" came back three days later. There was no sacrifice.

Stephen, who died a martyr, gave more than Christ ever did, because he knew he would not come back. Jesus, being part of an infinite being, surely knew he would come back. He gave up nothing except a few days on earth.

And this is partly why I am not a Christian.

Re: Utilitarianism is correct (0)

Anonymous Coward | about 7 months ago | (#45913027)

But that torture. Seek out for there is a better answer yet to be had.

Re:Utilitarianism is correct (2)

hey! (33014) | about 7 months ago | (#45911805)

Right. So you're a billionaire and I work for you. My embezzling a hundred thousand dollars from you to send my kid through school is moral, because you won't really miss it and it does a great deal of good for my kid and no discernible harm to you. That's the *pure* utilitarian way of looking at it, although such purity of outlook is at the least very rare and very probably non-existent.

There's another way of looking at this problem that seems built into human beings, which philosophers call deontological ethics -- the ethics of rights and responsibilities. I have no right to your money; furthermore, in agreeing to work for you I have a responsibility to discharge my duties faithfully. Yet while most people would agree that embezzling money from you would be wrong, that doesn't mean they're pure rights-based thinkers. If we simply change what is stolen, their thinking is apt to shift.

Suppose instead of money, I steal a loaf of stale bread from you to feed my starving child. Normally that bread would go to feed a pig. Framed this way, I think a lot of people would consider it immoral for me NOT to steal from you, to let bread that could save a starving child go to a pig.

I should mention there's another important and overlooked style of ethical reasoning: aretaic, or "virtue," ethics. Some things we do because we want to be the kind of person who does such things, and some things we don't do because of what doing them would do to our character.

In practice everyone seems to mix these different styles. Those who claim to use only one system of ethics to guide their behavior inevitably do little tricks to import other outlooks into their "pure and simple" philosophical framework. Utilitarians for example may conceptualize a rights violation as a harm, thus allowing them to argue in deontological mode when it suits them. Self-described deontological thinkers are apt to invent responsibilities that allow them to argue in utilitarian mode or aretaic mode when it suits them.

I can't say I've ever met anyone who has managed to put all of their morality into a set of geometry-like postulates and who reasons morally exclusively from those postulates. I've met plenty who've deluded themselves into thinking they do exactly that, but somehow the claimed mathematical proof is never forthcoming. If you press people, they inevitably argue from rules of thumb, paradigms, analogies, and other convenient but not necessarily consistent means of getting to a workable answer. They never resort to pure reasoning from a simple set of moral postulates such as utilitarianism.

And I suspect that's the best we'll ever manage at justifying all the things we feel in our gut. If nature declines to provide us with any system of arithmetic that is both complete and consistent, why should we expect her to provide us with a morality that is? There may be some dilemmas that can't ever be solved, either because our postulates lead to contradictory answers, or because they don't lead to any answer at all.

Breaking News (2)

fazig (2909523) | about 7 months ago | (#45910297)

In games like Counter-Strike: Global Offense, I take hostages, set up bombs are willing to give up my virtual live to protect it from being defused, kill people.

While in reality I'm not a suicide bombing terrorist. Who would have guess?

Re:Breaking News (1)

war4peace (1628283) | about 7 months ago | (#45910635)

Even if CS went global, apparently I'm not offended. So, clearly, it failed.

Re:Breaking News (1)

fazig (2909523) | about 7 months ago | (#45910837)

Just a type, didn't mean to be offensive.

Re:Breaking News (1)

war4peace (1628283) | about 7 months ago | (#45911199)

You meant a typo? :)

Re:Breaking News (1)

fazig (2909523) | about 7 months ago | (#45911301)

I did. Alright, I'm just going to stop typing for today.

Yeah? (1)

Threni (635302) | about 7 months ago | (#45910319)

I get through a million virtual dollars in a single session of online poker. And you should have seen my driving in RollCage.

What's the point here?

Oh, come on. (4, Funny)

Anonymous Coward | about 7 months ago | (#45910353)

... (they tend to minimize the number of persons killed)

Anyone who's played Black & White knows that's not true. They don't even minimize the number of persons killed by poop.

Poorly-designed VR (3, Insightful)

Impy the Impiuos Imp (442658) | about 7 months ago | (#45910517)

"Become more utilitarian", i.e. they choose to save more lives, which is already at 88% in a non-VR, simple textual scenario like the trolly switch issue.

This is odd, because in most VR scenarios, people seem to want to throw a switch to deliberately divert a trolley from one person to kill 5 instead, as long as they have a chat line where they can type "lolf49z!"

Re:Poorly-designed VR (1)

Anonymous Coward | about 7 months ago | (#45910689)

So a more proper test would be one with two levers. Pull none, 5 people get killed. Pull one of them, only one person dies. Pull the other and the trolley goes on a merry bloodbath through a crowded mall (with yakety sax playing in the background).

Re:Poorly-designed VR (0)

Anonymous Coward | about 7 months ago | (#45911163)

So a more proper test would be one with two levers. Pull none, 5 people get killed. Pull one of them, only one person dies. Pull the other and the trolley goes on a merry bloodbath through a crowded mall (with yakety sax playing in the background).

In a win-win scenario such as the one you describe, the solution is to flip a coin and pull a lever at random.

Virtual People Aren't People (0)

Anonymous Coward | about 7 months ago | (#45910529)

Neither are zombies, orcs, demons, pokemon, night elves, trolls, ogres, dragons, sprites, dryads, naga, Protoss, Zerg, Skrull, ents, vampires, or werewolves.

I feel absolutely no guilt in making their little pixel bodies explode in geysers of blood and gore.

Responsibility (2)

Okian Warrior (537106) | about 7 months ago | (#45910703)

One issue that studies never seem to take into account is responsibility.

If a group of people will be killed but you could decide to kill a single person, there is a third option: you could choose not to decide.

When you switch the tracks you are taking responsibility for making the decision, and for all consequences thereof. There will be an inquest, you will be brought up on charges of manslaughter, your actions will be made public in the newspaper... all sorts of bad things will happen, and your life will be forever changed.

For a recent example, consider Asiana Airlines Flight 214 [wikipedia.org], where a woman was run over by a fire truck. The battalion chief responsible for directing operations was put through the wringer by over-zealous bureaucrats looking for someone to blame. His helmet cam [sfgate.com] footage was all that saved him; though blameless, he only narrowly escaped taking the blame.

If you simply walk away, then it's not your problem. The responsibility lies somewhere else; no one can blame you for not making the decision. You weren't expected to handle it; it's not your fault.

This makes perfect sense in the current study: there are no consequences for killing virtual people, so it's easy to make the moral choice.

Real morality takes courage, and the willingness to sacrifice.

Re:Responsibility (1)

xevioso (598654) | about 7 months ago | (#45911593)

Choosing not to decide is still a choice.
You could choose to flip a coin and let fate decide -- that is to say, allow the choice to be a random one -- but you are still choosing not to decide.

Re:Responsibility (1)

cmdr_klarg (629569) | about 7 months ago | (#45911673)

Choosing not to decide is still a choice.
You could choose to flip a coin and let fate decide -- that is to say, allow the choice to be a random one -- but you are still choosing not to decide.

I will choose Freewill!

Re:Responsibility (1)

xevioso (598654) | about 7 months ago | (#45911895)

Ok, so then what will you choose with your Freewill, Geddy Lee?

No duh.... (0)

Anonymous Coward | about 7 months ago | (#45910727)

At a low level... you know it's not real. You don't have any moral attachment to any of the people. It's like playing Risk versus leading real troops into battle.

Fucking trolley bullshit (4, Insightful)

TrumpetPower! (190615) | about 7 months ago | (#45910745)

I can't believe that people still think that these trolley car "thought experiments" are telling them anything novel about human moral instincts.

They are just less-visceral variations on Milgram's famous work. An authority figure tells you that you must kill either the hot chick on the left or the ugly fatty on the right, and that you mustn't sound the alarm or call 9-1-1 or anything else. And, just as Milgram found, virtually everybody goes ahead and does horrific things in such circumstances.

Just look at the videos in question. The laws and safety regulations violated, and the bad designs of the evil-mad-scientist variety, in each scenario are innumerable. They take it beyond Milgram's use of a white lab coat to establish authority and into psychotic-Nazi-commander territory. In the real world, the victims wouldn't be anywhere near where they are. If they were, there wouldn't be any operations in progress at the site. If there were, there would be competent operators at the controls, not the amateur being manipulated by the experimenter; and those operators would be well drilled in both standard and emergency procedures that would prevent the disaster or mitigate it if unavoidable -- for example, airline pilots trained to the point of instinct to avoid crashing a doomed plane into a crowded area.

The proper role of the experimenter's victims ("subjects") is to yell for help, to not fucking touch critical safety infrastructure in the event of a crisis unless instructed to by a competent professional, to render first aid to the best of their abilities once help is on the way, and to assist investigators however possible once the dust has settled.

Yet, of course, the experimenter is too wrapped up in the evil genius role to permit their victims to even consider anything like that, and instead convinces the victims that they're bad people who'll kill innocents when ordered to. Just as we already knew from Milgram.

How any of this bullshit makes it past ethics review boards is utterly beyond me.

Cheers,

b&

Re:Fucking trolley bullshit (1)

Oligonicella (659917) | about 7 months ago | (#45910959)

And, just as Milgram found out, virtually everybody goes ahead and does horrific things in such circumstances.

No. He found out that people would do the horrific thing in the abstract. This proves not one thing about how they would react in reality. Some will, some won't. We have historic examples to prove that. Anyone who thinks his subjects didn't know it was an experiment is a fool.

Re:Fucking trolley bullshit (1)

TrumpetPower! (190615) | about 7 months ago | (#45911269)

So? Everybody participating in the Stanford Prison Experiment knew it was an experiment, too.

That's the most important lesson learned from the famous psychology experiments of the '50s and '60s: that those sorts of experiments were important to do once, and they should never be done again except in extraordinary and the most carefully controlled of circumstances. The ethical review boards were brought into existence explicitly to ensure that those sorts of experiments were never performed again unless for truly justifiable reasons.

I fail to notice any overwhelming, transcendental purposes in these mockeries of psychological research that warrant their execution.

Cheers,

b&

Re:Fucking trolley bullshit (0)

Anonymous Coward | about 7 months ago | (#45911623)

They didn't, though. They were lied to and they weren't informed that they could be deceived in the process of the study. It is now required by ethics review boards that, if you plan on deceiving the subjects of a psychological study, you must warn them ahead of time that they may be deceived, and then after the study you have to tell them how they were deceived and why. Milgram didn't do ANY of that, and his study is part of the reason that we have these laws today. In the absence of any concrete evidence suggesting that the subjects actually knew that it was an experiment and they were being lied to, it is fair to assume that the study participants really believed they were shocking the other "subjects." If you have ever watched a video of the Milgram experiment, you can see that the subjects of the study really are in distress over what they believe they are doing: http://www.youtube.com/watch?v=fCVlI-_4GZQ

Re:Fucking trolley bullshit (0)

Anonymous Coward | about 7 months ago | (#45912047)

>you must warn them ahead of time that they may be deceived

Citation? That sounds like a good way to poison the results of your experiment. Surely it's too dumb to be true.

Re:Fucking trolley bullshit (0)

Anonymous Coward | about 7 months ago | (#45911157)

All they are are less-visceral variations on Milgram's famous work. An authority figure tells you you must kill either the hot chick on the left or the ugly fatty on the right and that you mustn't sound the alarm or call 9-1-1 or anything else. And, just as Milgram found out, virtually everybody goes ahead and does horrific things in such circumstances.

Or like the 'Friends like these...' quest in Skyrim. The one where the assassin kidnaps you and forces you to pick one of the three people in the room to kill.

How many people actually thought of the fourth solution, kill the assassin? I know I felt like an idiot for not even thinking of that solution.

Re:Fucking trolley bullshit (3, Interesting)

xevioso (598654) | about 7 months ago | (#45911645)

There ARE real-world versions of this. I pointed this out above, but the real-world versions tend to involve atrocities during wartime, something the armchair ethicists here don't seem to want to discuss much. A REAL scenario would involve a soldier telling a mother to shoot one of her children or he will shoot all of them himself. These things have happened, and will continue to happen, in real life on occasion.

What's the proper response here? Attack the soldier with the gun he gives you to shoot your kid? OK, what if he tells you to choose which child will die and he will do it himself while you are tied up? The point is, in the real world, it is the CHOICE ITSELF which is the atrocity, and there is NO correct decision. In the real world. Which is one of the many reasons why war is evil.

It's a game... (1)

CreatureComfort (741652) | about 7 months ago | (#45910775)

Like the Kobayashi Maru, tic-tac-toe, and thermonuclear war...

The only way to win is not to play.

Re:It's a game... (1)

Anonymous Coward | about 7 months ago | (#45910891)

tic-tac-toe

The only winning move is to play, perfectly [xkcd.com] , waiting for your opponent to make a mistake.

FTFY
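
For the curious, "playing perfectly" here just means minimax: tic-tac-toe is small enough to search completely, so a perfect player never does worse than a draw and converts any opponent mistake. A self-contained Python sketch (illustrative, not from the xkcd strip):

    # Minimax over tic-tac-toe. Board: list of 9 cells, each '' or 'X'/'O'.
    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(b):
        for i, j, k in LINES:
            if b[i] and b[i] == b[j] == b[k]:
                return b[i]
        return None

    def minimax(b, me, turn):
        w = winner(b)
        if w:
            return 1 if w == me else -1
        moves = [i for i, c in enumerate(b) if not c]
        if not moves:
            return 0  # draw
        nxt = 'O' if turn == 'X' else 'X'
        scores = [minimax(b[:i] + [turn] + b[i+1:], me, nxt) for i in moves]
        return max(scores) if turn == me else min(scores)

    def best_move(b, me):
        nxt = 'O' if me == 'X' else 'X'
        moves = [i for i, c in enumerate(b) if not c]
        return max(moves, key=lambda i: minimax(b[:i] + [me] + b[i+1:], me, nxt))

    # X must block O's column threat (cells 1, 4, 7) -- the search finds it:
    board = ['X', 'O', 'X',
             '',  'O', '',
             '',  '',  '']
    print(best_move(board, 'X'))  # -> 7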

this "vr" is really stupid (1)

Anonymous Coward | about 7 months ago | (#45910815)

This "VR" is really stupid. The best approximation of reality they can offer is a binary choice. That is unnatural and not predictive of people's behavior. In real situations, people think they have more courses of action, even when they don't: they would scream at the people to get out of the way in a (possibly futile) attempt to save everybody, in addition to playing with the switch and attempting other things as well.
Very rarely are we presented with dangerous situations in life where the choice is clearly between two, and only two, mutually exclusive, time-critical courses of action.

The difference between VR and life (1)

istartedi (132515) | about 7 months ago | (#45910971)

When the lifting magnet dropped the car on the guy's head, I laughed. It was funny because the door flung open, or maybe because it just had a certain cartoony look about it. It could have been an anvil or a safe, then it would have been even funnier. Then of course there's the whole premise of a bunch of guys sort of doing a slow dance in a salvage yard, and they don't even look like yard workers at all. It's just too surreal.

In real life, the car falls on the guy's head without any moral dilemma. It's just... bam! In the unlikely event that you have any time to react at all, there is no moral decision at all on the part of the operator. He'll just avoid hitting the first thing he sees, or fall back on training that's ingrained into his muscles. In real life, it's not funny.

Experiments with that kind of VR don't convince me of very much. OK, time to roll it again and see that guy get crushed, and then... maybe a classic Bugs Bunny cartoon.

Text vs VR vs Reality? (1)

MrLogic17 (233498) | about 7 months ago | (#45911023)

So... more realistic simulations yield more realistic results?

I suspect there's a film at 11.

Re:Text vs VR vs Reality? (0)

Anonymous Coward | about 7 months ago | (#45911597)

No, an interface closer to a game led slightly more people to choose to "win" by saving the many at the expense of the few.

Still nothing to be surprised by, but if you watch the video, it's hardly closer to "reality" than text--I'd argue that a text description would seem more realistic, since it's my mind interpreting the situation, rather than my eyes.

Concern. (0)

Anonymous Coward | about 7 months ago | (#45911337)

Does anyone else think this video is funny as shit?

How about... (1)

FuzzNugget (2840687) | about 7 months ago | (#45911891)

Yelling, "get the fuck off the train tracks, you fucking morons!"

Then let Darwin take care of the rest.

heh. (1)

Black Parrot (19622) | about 7 months ago | (#45912699)

I thought it said "Unitarian".
