Psychology's Replication Battle

Soulskill posted about 5 months ago | from the study-of-american-undergrads dept.

An anonymous reader sends this excerpt from Slate: Psychologists are up in arms over, of all things, the editorial process that led to the recent publication of a special issue of the journal Social Psychology. This may seem like a classic case of ivory tower navel gazing, but its impact extends far beyond academia. ... Those who oppose funding for behavioral science make a fundamental mistake: They assume that valuable science is limited to the "hard sciences." Social science can be just as valuable, but it's difficult to demonstrate that an experiment is valuable when you can't even demonstrate that it's replicable. ... Given the stakes involved and its centrality to the scientific method, it may seem perplexing that replication is the exception rather than the rule. The reasons why are varied, but most come down to the perverse incentives driving research. Scientific journals typically view "positive" findings that announce a novel relationship or support a theoretical claim as more interesting than "negative" findings that say that things are unrelated or that a theory is not supported. The more surprising the positive finding, the better, even though surprising findings are statistically less likely to be accurate.

Freud's problem too (1)

turkeydance (1266624) | about 5 months ago | (#47592577)

good luck with that.

Re:Freud's problem too (5, Insightful)

Anonymous Coward | about 5 months ago | (#47592665)

When psychologists stop producing so many studies with obvious bias, subjective terminology, and subjective conclusions, and stop arbitrarily coming to conclusions based on data flawed for those reasons, maybe the field can be taken seriously. Obviously, replication is needed, too.

But so many people are fooled by it. Want a study that says video games cause people to be aggressive? There's a psychology study for you, but there's also one for your opponents. And all of them are bad science.

Re:Freud's problem too (5, Insightful)

sjwt (161428) | about 5 months ago | (#47592761)

Yup, like the recent one about men not being able to 'be alone with their own thoughts' [washingtonpost.com]...

That same data can also be read as 'Men, more willing to put up with pain' or 'Men, more curious and want to know what they may experience'

Re:Freud's problem too (1, Offtopic)

AthanasiusKircher (1333179) | about 5 months ago | (#47592885)

Yup, like the recent one about men not being able to 'be alone with their own thoughts' [washingtonpost.com]...

Yeah, note it was already discussed here too [slashdot.org].

That same data can also be read as 'Men, more willing to put up with pain' or 'Men, more curious and want to know what they may experience'

Perhaps the one thing more common than flawed social science experiments is Slashdot commenters who think they can find flaws but haven't actually read the paper or thought about it.

This "same data" really CAN'T be "read" that way: the researchers specifically asked the subjects to experience the shock FIRST (so we can't assume they were just curious). And that stage of the study specifically excluded those who weren't seriously offended by the shock (they only let people continue if they said they'd actually pay money not to be shocked again), so it's difficult to conclude that these guys were simply "more likely to put up with pain" since they also were specifically chosen for disliking it.

Now, there were some serious flaws with this study, and perhaps the data could be interpreted differently. But if we're going to sit around and be "back-seat researchers" critiquing what others have done, let's at least pay attention to what they did, rather than immediately assuming they are idiots and didn't try to control for some of the issues we wonder about.

Re:Freud's problem too (1)

Lemmeoutada Collecti (588075) | about 5 months ago | (#47593281)

Of course, that selection bias could also be read as: those who are willing to pay to avoid something unpleasant have less patience than those who are not. Or those who have a lower tolerance for pain are also less likely to value quiet and solitude - e.g. they are more likely to be extroverted - than those who have a higher tolerance. The data is clear: based on the chosen subset of the male population, there is a correlation between membership in that subset and a dislike of and/or inability to endure solitude. That is pretty much all it clearly indicates. It is not generalizable to the population as a whole, nor even to the full subpopulation from which the subset was drawn.

Psychology already getting all the funding it need (0)

Anonymous Coward | about 5 months ago | (#47592671)

... the military has constantly funded large-scale psy-ops against the public, including information warfare, trend-setting, viewpoint-shifting, etc.

Easy to measure versus important (3, Insightful)

Tablizer (95088) | about 5 months ago | (#47592585)

Psychologists are up in arms

Perhaps they need some therapy :-)

a fundamental mistake: They assume that valuable science is limited to the "hard sciences."

Software engineering has a similar problem. Things that are objective to measure, such as code volume (lines of code), are often only part of the picture. The psychology of developers (perception, etc.), especially during maintenance, plays a big role, but is difficult and expensive to objectively measure.

Thus, arguments break out about whether to focus on parsimony or on "grokkability". Some will also argue that if your developers can't read parsimony-friendly code, they should be fired and replaced with those who can. This gets into tricky staffing issues as sometimes a developer is valued for their people skills or domain (industry) knowledge even if they are not so adept at "clever" code.

Thus, the "my code style can beat up your style" fights involve both easy-to-measure "solid" metrics and very difficult-to-measure factors about staffing, side knowledge, people skills, corporate politics, economics, etc.

Subjective (0)

Anonymous Coward | about 5 months ago | (#47592781)

Software engineering has a similar problem. Things that are objective to measure, ...

I see all too often opinion being expressed as fact - even by CS professors when I was in school.

Re:Easy to measure versus important (2)

Intrepid imaginaut (1970940) | about 5 months ago | (#47592925)

Completely different situation. In programming, the discussions are about how to optimise the processes involved; the problem with psychology is that they aren't sure whether they're working on computers or on breakfast cereal boxes with a few rectangles drawn on them. The main value that psychologists bring to the table today is to fulfill the role of that good friend who isn't afraid to lay out a few home truths. Of course, if you already have such a friend, the need to see a psychologist is naturally obviated...

So, I'm just going to leave this here [arachnoid.com] .

Re:Easy to measure versus important (1)

tomhath (637240) | about 5 months ago | (#47593149)

Completely different situation. In programming, the discussions are about how to optimise the processes involved

But the discussion is based on nonsense. "Separation of Concerns" is a Good Thing, right? Who says? Gang of Four patterns are the proper approach, right? Why?

Re:Easy to measure versus important (1)

Anonymous Coward | about 5 months ago | (#47593271)

A friend cannot really replace a counselor, because friends have their own interests in your life that aren't necessarily your own. Rely on friends for everything else, but not for helping you find out what to do with your life. They just have too much to lose. Some friends even depend on you being miserable and emotionally exploitable. And don't say "no true friend", because you can't tell "true" from "false" friends when you're in a needy situation.

Re:Easy to measure versus important (2)

phantomfive (622387) | about 5 months ago | (#47594399)

The fascinating thing to me is that sometimes programmers with drastically different coding styles (say, a Lisp macro/functional style compared to an object-oriented small-objects-everywhere style), who would argue vehemently about how the other side is wrong, can still both write incredibly good code. That is, the code will get the job done, be readable, and be flexible.

Because drastically different styles can end up with good code, I see that as a sign that we as programmers haven't figured out the elements that actually comprise good code. Some programmers produce it, but they aren't able to articulate how, and focus on syntax, etc.

because it's wishful thinking (1)

Anonymous Coward | about 5 months ago | (#47592593)

Scientific journals typically view "positive" findings that announce a novel relationship or support a theoretical claim as more interesting than "negative" findings that say that things are unrelated or that a theory is not supported. The more surprising the positive finding, the better, even though surprising findings are statistically less likely to be accurate.

Because it's always wishful thinking and the 'findings' are always BS. About time it's called out for the non-science nonsense that it is.

"less likely to be accurate" (3, Funny)

Vinegar Joe (998110) | about 5 months ago | (#47592599)

That's a surprise.

A bunch of fraudsters... (-1)

Anonymous Coward | about 5 months ago | (#47592613)

'Psychiatrists' and 'psychologists' are all a bunch of drug pushing fraudsters, so of course their 'research' is going to be fraudulent and impossible to replicate too. And these are the scumbags people turn to when they are most unhappy and even suicidal. It must be absolutely awful to be suicidal and then have to talk to some fraudster jerkoff who obviously doesn't give a shit about your problems, because he himself hasn't fixed his OWN problems, because his 'psychiatry' doesn't work. Physician heal thyself.

Re:A bunch of fraudsters... (1)

mark_reh (2015546) | about 5 months ago | (#47592919)

Tom? Is that you?
Do you know where I can get a copy of Dianetics? I've heard its da bomb!

WTF? (5, Insightful)

Oidhche (1244906) | about 5 months ago | (#47592619)

it's difficult to demonstrate that an experiment is valuable when you can't even demonstrate that it's replicable

Duh. That's because an experiment that is not replicable has *no* value.

Re:WTF? (0)

gTsiros (205624) | about 5 months ago | (#47593021)

physicist here

there *are* experiments that are non-replicable, but still valuable.

hell, we have thought experiments that are also valuable

Re:WTF? (0)

Anonymous Coward | about 5 months ago | (#47593107)

Shoot, Pons and Fleischmann's cold fusion experiments were non-replicable... lol

Seriously, if an experiment cannot be replicated, then there is something wrong with the experiment. Thought experiments are in a different category.

Re:WTF? (1)

Zero__Kelvin (151819) | about 5 months ago | (#47593181)

I hate to break it to you, but "thought experiment" is a misnomer.

Re:WTF? (0)

Anonymous Coward | about 5 months ago | (#47593539)

a thought experiment is a perfectly fine philosophical construct but you do indeed need to keep in mind that any answer you get through it will also be philosophical in nature

Re:WTF? (0)

Oligonicella (659917) | about 5 months ago | (#47593563)

A thought experiment is almost always constructed in a highly biased manner, excluding many, many alternative choices. Its primary concern is to give the questioner the moral high ground, allowing that questioner to point out the ethical failure of the respondent.

Re:WTF? (2)

justthinkit (954982) | about 5 months ago | (#47593211)

there *are* experiments that are non-replicable, but still valuable.

I missed your examples. Could you repeat them?

Re:WTF? (0)

Anonymous Coward | about 5 months ago | (#47593549)

Recording supernovae, each one happens only once and is unique in various ways. Dissecting passenger pigeons, because they're extinct. Studying the medical complications of Thalidomide babies, since the teratogenic drug is now off the market.

Any scientific analysis of an event which occurred once may not be directly replicable. There may be a great deal of the experiment that _can_ safely and economically be replicated, and should be, to ensure that the first experiment was not biased or a unique accident of probability. And the experimental uncertainties for many social experiments are quite overwhelming: the experiment in this article is a great example.

The outliers in the data, for example, need to be checked. What was with the guy who shocked himself 190 times? And where were his electrodes attached?

Re:WTF? (4, Informative)

Oligonicella (659917) | about 5 months ago | (#47593605)

Recording supernovae

Not an experiment.

Dissecting passenger pigeons

Not an experiment.

Studying the medical complications of Thalidomide babies

You got one.

Any scientific analysis of an event which occurred once may not be directly replicable.

Actually the analysis can be replicated ad nauseam.

Re:WTF? (0)

Anonymous Coward | about 5 months ago | (#47594379)

Any experiment whose results, once known, will profoundly change the experimental subjects.

For animals this is less of a problem, but it can still contribute to experimental failures to replicate.
For human psychology, effects such as this, as well as mental states, culture, and tons of other biases, make it actually *harder* than the "hard sciences" to test hypotheses and build theories. Theories may even need to change in response to those same theories being published!

This does not equate to psychology and minds not being "worth" studying. Heck, very few issues are more important in most people's lives!
People don't care about quantum effects or informational theories. It's typically not applicable.
Psychology, happiness and high ethics are applicable to EVERYBODY.

Re:WTF? (2, Interesting)

thesandtiger (819476) | about 5 months ago | (#47593257)

There are different levels of replication.

In physics, you can generally replicate an experiment very precisely if you've got a handle on the factors that went into that experiment - control the environment, etc. You can have an almost perfect replication. Yay, science!

In social psychology research you can't ever even approach that same level of control over the environment the experiment takes place in. The subject will be different - even if it's the same subject used in the first experiment, because people change over time/exposure. The interviewer will be different because people change over time. The dynamic between interviewer and subject will be different. The history of the subject will be different as will the history of the interviewer as will the place the interview is taking place, etc. etc. etc.

The best such research can do is find that there is a tendency for x to happen in y circumstances, though it might not always be the case.

And, actually, there is a fair amount of basic replication that goes on in many psychological studies; when I was in the field working on studies we would routinely include certain basic measures that had been used in tens of thousands of studies before and compare anticipated vs. actual outcomes.

But even if a study doesn't get replicated, it still has some value: a lack of easy replication under mostly similar circumstances indicates that whatever factor the original experiment identified probably isn't as strong as hypothesized, and it cuts off a (probably) blind alley.

Re:WTF? (1)

Oligonicella (659917) | about 5 months ago | (#47593439)

In social psychology research you can't ever even approach that same level of control over the environment the experiment takes place in.

Exactly. This alone negates any claim as to whatever result is "found". That basic replication you refer to later is like saying "I used copper wiring in all my experiments" and never giving the length or gauge.
 
 

a lack of easy replication under mostly similar circumstances indicates that whatever factor the original experiment identified probably isn't as strong as hypothesized, and it cuts off a (probably) blind alley.

Interesting you use probably twice in one sentence as you attempt to support those experiments as having value. Other than being your opinion, how is it you're sure of your conclusion? Provide some empirical confirmation please.

Re:WTF? (0)

Anonymous Coward | about 5 months ago | (#47594333)

... except that the lack of control is why every science replicates things. You can't control everything, and that's why you attempt to replicate.

Replication is demonstrating that an effect is generalizable across undocumented, uncontrolled effects. So you give the length or gauge, but not the pressure, or all of the fields impinging on it.

Psychology isn't any different; it's all relative. There are plenty of effects that replicate very well.

Lack of replication isn't unique to psychology. Google Ioannidis and replication, and you'll see what I mean.

I'm starting to wonder if the reason people are getting so sensitive about replication in psychology is that they feel threatened about their own pet field of science. E.g., "I don't have to worry about personal biases in [physics/chemistry/biology/pharmacology/ecology] because it doesn't happen. We're not like those trash sciences ... " It allows them to keep their heads in the sand and ignore their own problems with scientific politics.

String theory anyone?

Re:WTF? (0)

Anonymous Coward | about 5 months ago | (#47594463)

It would help you if you took off your own glasses.

Most important things in people's lives are those things that are not quantifiable.
Social experiments can never become empirical the way you try to impose, since the social structures, relationships, modes and processes are always changing. They can even change directly caused by publishing the results of said experiments!

It's a feedback-loop, so it's provably chaotic. It's actually harder to study than "hard" sciences. Yet, can have much more positive impact on society and individuals.

If only the reporting on experiments were done correctly, we wouldn't have to reject so many articles about "X causes Y". That's bad reporting, not necessarily bad experiments.

Lastly, a negative outcome is also valuable. Everything we do raises our awareness and deepens our understanding.

Pro tip: Start reading about "Emotional Intelligence". Very, very insightful research that is directly applicable to everybody, especially people in companies.

Re:WTF? (1)

fermion (181285) | about 5 months ago | (#47593537)

We also have to look at how repeatability works. One reads a paper, does one's best to follow the work, perhaps calls one of the researchers to get clarification, combines this with known methods, and at the end of the day maybe gets a similar result. If, as in the case of cold fusion, the result is not similar, then there is at least some carelessness if not fraud in the original result. Which is fine, because it is just one result, and no one should think one result is conclusive.

In social sciences reproducibility is possible. For instance, in epidemiology, databases are crunched using well-known statistical methods to determine correlations, then further science is applied to determine if these correlations might be causative. If a second party cannot do an equal statistical analysis and get similar results, then the results are not valid. If a second party can go through the process of collecting the data and find systematic errors, then the results are not valid. This is in fact a big problem with education research. When subjected to the process of real science, much if not most of the research has been shown not to meet those standards.

So social science research can be scientific, but there is a second issue. We expect research to be predictive. It is said that fields such as astronomy are as unscientific as social science. But in astronomy there is an element of application. The results are used to predict other findings which can then be confirmed. This is the element that makes fields such as psychology less scientific.

Not Just Psychology (3, Insightful)

jamesl (106902) | about 5 months ago | (#47592655)

The reasons why are varied, but most come down to the perverse incentives driving research. Scientific journals typically view "positive" findings that announce a novel relationship or support a theoretical claim as more interesting than "negative" findings ...

This applies to all science, not just psychology.

Old saying (1)

jklovanc (1603149) | about 5 months ago | (#47592663)

Once is an anomaly
Twice is a coincidence
Three times is a pattern

Re:Old saying (2)

Zero__Kelvin (151819) | about 5 months ago | (#47593199)

"Three times is a pattern"

I sampled a random bit sequence just the other day. I can now assure you that a random bit stream is all ones! all friggin' ones I tell you!

So, it's not a science, it's a religion (3, Insightful)

Anonymous Coward | about 5 months ago | (#47592667)

Falling into the 'cult' category

Wrong premice (1)

jklovanc (1603149) | about 5 months ago | (#47592673)

I think that too many "studies" set out to prove a hypothesis instead of test a hypothesis. The drive to prove something puts bias into the study and skews the outcome. No one wants to be proven wrong. This is especially important when the measurements are subjective as in many psychology studies.

Re:Wrong premice (2)

martin-boundary (547041) | about 5 months ago | (#47592693)

The other problem is sample size. Psychology sample sizes are *way* too small. In a world of 8 billion people today, anything you find out in a psychological experiment that involves at most a few hundred subjects, often fewer, cannot have anything universal to say. The samples are just too small.

Here's an analogy. You plant a dozen tulips in your garden and observe how well they grow when you do X. Now you claim all plants will grow like that when you do X. The claim is way too broad. Even if you had a dozen identical tulips and grew them in the Himalayas while doing X, you'd get different results.

Re:Wrong premice (1)

ShanghaiBill (739463) | about 5 months ago | (#47592779)

The other problem is sample size. Psychology sample sizes are *way* too small. In a world of 8 billion people today, anything you find out in a psychological experiment that involves at most a few hundred subjects, often less, cannot have anything universal to say. The samples are just too small.

Sample size is independent of population size, and sample sizes far smaller than "a few hundred" can be significant. In the social sciences, errors are far more likely to be caused by sample bias than by sample size. Most psychology experiments conducted on people use university undergraduates as subjects, who are more likely to be politically liberal, altruistic, trusting of others, etc. Increasing the sample size isn't going to fix that.
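
A minimal numpy sketch of the first point (the population sizes, the distribution, and the sample size of 500 are all invented for the demo): the error of a sample mean is governed by n, not by how big the population is.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500  # fixed sample size

    for pop_size in (10_000, 1_000_000, 10_000_000):
        population = rng.normal(loc=0.0, scale=1.0, size=pop_size)
        # Mean absolute error of the sample mean over 200 repeated draws
        errors = [abs(rng.choice(population, n, replace=False).mean()
                      - population.mean())
                  for _ in range(200)]
        print(pop_size, round(float(np.mean(errors)), 4))

    # Prints roughly the same error (~0.036) for every population size,
    # because the standard error is sigma/sqrt(n) regardless of population.

Increasing n shrinks that error, but it does nothing about who is in the sampling frame, which is the bias problem.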

Re:Wrong premice (1)

martin-boundary (547041) | about 5 months ago | (#47592799)

On the contrary, increasing the sample size to big data sizes of say 2 billion subjects would definitely fix that bias problem. [of course this is unrealistic].

Re:Wrong premice (2)

khallow (566160) | about 5 months ago | (#47592895)

On the contrary, increasing the sample size to big data sizes of say 2 billion subjects would definitely fix that bias problem.

Not at all. For example, try extrapolating behavior from 2 billion young men to older women. You can have huge sample sizes and yet still have sample bias simply because you've excluded an important category (such as the people you actually wanted to study).
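
A sketch of that failure mode (the subgroup means and sizes are made up): with millions of subjects the standard error is microscopic, but an estimate drawn from a frame that excludes a subgroup stays biased.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical trait that differs between two subgroups
    surveyed = rng.normal(loc=5.0, scale=2.0, size=2_000_000)  # e.g. young men
    excluded = rng.normal(loc=8.0, scale=2.0, size=2_000_000)  # e.g. older women

    estimate = surveyed.mean()  # huge n, tiny standard error...
    truth = np.concatenate([surveyed, excluded]).mean()
    print(round(float(estimate), 3), round(float(truth), 3))  # ~5.0 vs ~6.5

    # The gap between estimate and truth is bias; no sample size fixes it.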

Re:Wrong premice (0)

Anonymous Coward | about 5 months ago | (#47592909)

On the contrary, increasing the sample size to big data sizes of say 2 billion subjects would definitely fix that bias problem.

Not at all. For example, try extrapolating behavior from 2 billion young men to older women. You can have huge sample sizes and yet still have sample bias simply because you've excluded an important category (such as the people you actually wanted to study).

Do you know how much effort it would take to get 2 billion people on board and not have a single woman or older person? It could be done but you'd REALLY have to be doing that on purpose.

Re:Wrong premice (1)

khallow (566160) | about 5 months ago | (#47593059)

Do you know how much effort it would take to get 2 billion people on board and not have a single woman or older person? It could be done but you'd REALLY have to be doing that on purpose.

They could just be poring over health records for adults who serve or might serve in one of the world's militaries.

Even if they were doing that deliberately, what would your point be? Sample bias is bias whether it is accidental or not.

Re:Wrong premice (1)

mpe (36238) | about 5 months ago | (#47593887)

For example, try extrapolating behavior from 2 billion young men to older women. You can have huge sample sizes and yet still have sample bias simply because you've excluded an important category (such as the people you actually wanted to study).

Even if you try hard for a "representative sample" you can still have a problem where you lack a "box" to "tick" for something which turns out to be important.

Re:Wrong premice (0)

Anonymous Coward | about 5 months ago | (#47592801)

In social 'sciences,' sample sizes need to be higher simply because humans in different cultures or subcultures can have significantly different mentalities. This means you need enough representative samples from each culture, depending on the study.

Re:Wrong premice (0)

Anonymous Coward | about 5 months ago | (#47592809)

The samples are also almost always drawn from a single, usually Western, country, which makes it impossible to determine whether what is measured is a psychological universal or a culturally determined phenomenon. And even when they're not university students, the way samples are selected is often problematic-- for instance, many samples consist of patients in therapy, which is probably not representative of the human race at large.

Re:Wrong premice (1)

sandertje (1748324) | about 5 months ago | (#47592973)

Let alone the cultural environment. Behavioral psychology often attempts to extrapolate its findings to the whole Earth population, without taking into account that the cultural background of its subjects is (virtually) identical from subject to subject. The cultural background _most definitely_ influences behavior. Do the same study on Western Europeans, Arabs, and Japanese, and you'll likely get huge differences per group.

Re:Wrong premise (1)

jd (1658) | about 5 months ago | (#47593321)

Fixed typo.

Agreed on study size, which is why social scientists look at meta-studies of hundreds of studies performed over as much as a decade, to eliminate the noise and other transient junk.

What they really need to do, though, is examine more hypotheses. You need 7-10 additional hypotheses, not including the null hypothesis, that are orthogonal to each other and to the hypothesis being tested. This would allow you to binary subdivide the problem space, not only showing what something isn't but also showing if the models being examined are founded on sound principles.

Re:Wrong premice (1)

mark_reh (2015546) | about 5 months ago | (#47592931)

my psychological disorder compels me to point out the misspelling of "premise"

Re:Wrong premice (1)

mpe (36238) | about 5 months ago | (#47593869)

I think that too many "studies" set out to prove a hypothesis instead of test a hypothesis. The drive to prove something puts bias into the study and skews the outcome. No one wants to be proven wrong. This is especially important when the measurements are subjective as in many psychology studies.

But hardly confined to "psychology". Possibly not even confined to the "soft" sciences, since attempts at falsification can easily turn out to be very politically incorrect.

"Social science can be just as valuable" (2, Insightful)

Anonymous Coward | about 5 months ago | (#47592685)

No, and it shouldn't carry the same "science" label to start with. Make it "social studies" or whatever. Calling it science tries to put it on the same level as real science, when the processes are completely different on numerous levels. It's an insult to real science. For example, when a scientist builds a collider to find a particle, and he finds one, he puts up the results so they can be verified by peers, and if the collective brainpower finds an error and knocks it down, the process is considered a success. In the meantime, soft "scientists" will not be verified by peers; separate studies will have to point out that the results are not even replicable, and people will bitch about and defend their research and its funding.

Re:"Social science can be just as valuable" (1)

Anonymous Coward | about 5 months ago | (#47592697)

I once read a study that claimed that porn makes people have a callous attitude towards women. To 'prove' this, they asked college students how long rapists should be sent to prison. Then, they showed those students some porn videos. Afterwards, they asked the same question, and some of them supported reduced sentences for rapists. The arbitrary, subjective conclusion they came to in the face of the subjective data they gathered using biased methods was that porn makes people callous towards women. If you want rapists to be in prison for a million years, and then later say you want them in prison for 999,999 years, you're obviously callous towards women. Whatever.

High quality scientific research all around. Now, how about another 'great' study about the effects of video games on the human mind...

Re:"Social science can be just as valuable" (1)

mpe (36238) | about 5 months ago | (#47593931)

I once read a study that claimed that porn makes people have a callous attitude towards women. To 'prove' this, they asked college students how long rapists should be sent to prison. Then, they showed those students some porn videos. Afterwards, they asked the same question, and some of them supported reduced sentences for rapists. The arbitrary, subjective conclusion they came to in the face of the subjective data they gathered using biased methods was that porn makes people callous towards women.

Did they have a control group who were shown other videos for the same length of time? Were there any cases of subjects increasing their sentence length on the second round of questioning? Was it clear to everyone exactly what definition of "rape" was being used?

Re:"Social science can be just as valuable" (0)

Anonymous Coward | about 5 months ago | (#47594271)

I once read a cherry-picked example on a tech site, therefore fuck psychology.

Who writes this crap (5, Insightful)

awol (98751) | about 5 months ago | (#47592705)

"Those who oppose funding for behavioral science make a fundamental mistake: They assume that valuable science is limited to the "hard sciences." Social science can be just as valuable, but it's difficult to demonstrate that an experiment is valuable when you can't even demonstrate that it's replicable."

No, those of us who oppose the funding of this crap recognise that if you cannot replicate your "study" then it is not an experiment. If what you are doing cannot be proved (one way or the other) by experiment then IT IS NOT SCIENCE. I don't really care what it gets called, and some of it may even be valuable for some values of valuable, but the amount of dross that is produced by social researchers who try to call themselves scientists is truly extraordinary and a plague on our world.

Behavioral economics (0)

Anonymous Coward | about 5 months ago | (#47592823)

...the amount of dross that is produce by social researchers that try and call themselves scientists is truly extraordinary and a plague on our world.

Dan Ariely, Daniel Kahneman, and a few others have done extensive work that has shown the limitations of how we think and how we actually perform economic activity.

The failure of the rational-market economists is that they just study large, very well organized markets dominated by professionals and now mostly run by computers - the financial markets (because there's a shitload of publicly available data). So of course, the market looks completely rational. They then extrapolated their findings to everything else - even to people buying that new house they just "fell in love with".

Re:Behavioral economics (0)

khallow (566160) | about 5 months ago | (#47592879)

The failure of the rational-market economists is that they just study large, very well organized markets dominated by professionals and now mostly run by computers - the financial markets (because there's a shitload of publicly available data). So of course, the market looks completely rational.

In other words, their "failure" is that they use models which are descriptive of the markets that they study. That sounds more like science in action than failure to me.

They then extrapolated their findings to everything else - even to people buying that new house they just "fell in love with".

This is a strawman. They don't actually do this.

Re:Behavioral economics (0)

Anonymous Coward | about 5 months ago | (#47592917)

This is a strawman. They don't actually do this.

Yes, they do. They assume that ALL markets are rational because of their studies on one particular market; in their case the financial markets. And I've seen applications that are ridiculous. Like people who buy that house that "they are in love with" are acting rationally because they are incorporating their emotions into the purchase and thereby including the utility of their feelings.

Also, what's with the physics envy in economics?

That's why I think the groundbreaking stuff in econ is coming from the psychologists. They are coming from the point of view that economic behavior is an emotional one - and rightfully so.

Years ago, Henry Ford noticed that people buy emotionally, and failed to act on it.

He was an engineer/tinkerer. He built machines.

Sloan realized that cars are really a fashion statement (an emotional purchase) - not only a mode of transportation, as Ford thought of them.

Sloan/GM kicked Ford's ass for years until he (his son, actually) started making more models in different colors.

Re:Behavioral economics (1)

khallow (566160) | about 5 months ago | (#47592993)

And I've seen applications that are ridiculous.

Then give an example rather than just make empty allegations. Henry Ford wasn't an economist.

Re:Behavioral economics (1)

Oligonicella (659917) | about 5 months ago | (#47593517)

You misrepresent what happened. Ford realized that first and foremost, people needed to be able to *afford* cars, so he designed and produced the Model T. Only after the car was ubiquitous did fashion exert any greater influence.

Re:Behavioral economics (1)

SEE (7681) | about 5 months ago | (#47594421)

Yeah, you are going to be seriously confused if you think "rational actor" economics assumes a Straw Vulcan who won't buy the chocolate ice cream which he likes better if the vanilla is a cent cheaper. But the fault, dear AC, is not in the economics, but your own skull.

Re:Who writes this crap (1)

DNS-and-BIND (461968) | about 5 months ago | (#47592915)

Yes, but it is still useful to achieve positive outcomes. There are a lot of people in society with mistaken ideas and if science like this can be used to push their repugnant ideas out of the mainstream then all the better. We all need to support these scientists and not take such a narrow view.

Re:Who writes this crap (5, Insightful)

Intrepid imaginaut (1970940) | about 5 months ago | (#47592959)

The above comment is precisely why these "social sciences" need to be delegitimised and rubber-roomed until they can figure out the meaning of the phrase "scientific method". Grant them no authority in deciding government policy, massively defund them in academia, get them out of the courtrooms, and generally pillory them for the witchdoctors they are.

If you have to ask why, you're part of the problem.

Re:Who writes this crap (0)

Anonymous Coward | about 5 months ago | (#47593583)

Maybe it's this guy?

                        http://grokbase.com/t/subversi... [grokbase.com]

The paper was "Reporting Masters and Slaves, Binding them in Cages, and Making Them Report Names and Addresses".

Re:Who writes this crap (0)

Anonymous Coward | about 5 months ago | (#47593813)

The part you've missed is that far too often "hard science" has the same problems. What you've done is a basic human failure - to assume the alternative is not full of problems too.

There is no reason that psychology is any more or less prone to problems with the scientific method, because ultimately it is people performing the process, regardless of the area of science.

Re:Who writes this crap (2, Insightful)

Anonymous Coward | about 5 months ago | (#47594135)

Here's my challenge to individuals such as yourself who denigrate psychological science:

How would *you* study behavior?

It's very easy to dismiss behavioral sciences when you're not trying to study behavior. It's a very complex, difficult topic. E.g., how do you define depression? How do you define psychosis? How do you determine whether or not early childhood interventions actually have an effect on adult outcomes?

Maybe you would argue that behavior shouldn't be approached scientifically, but that's a cop-out that leaves human experience to the philosophers.

I'm sick of ignorant arm-chair narcissists denigrating psychology when they don't have the balls to admit they have no clue how to approach the subject because it's too hard for them to understand.

I'm sorry for sounding harsh, but then so are the critical comments here.

And no, neuroscience is not psychology. There's an extremely fuzzy boundary, and they overlap tremendously, but they're not the same. To find the neural substrates of depression, you have to be able to measure depression. So you either study behavior or you don't.

Yes, there's a replication crisis in psychology, but it's the same in all of science--it's everywhere in the biomedical sciences (e.g., everyone here knows of such studies, like the big scandal over stem cell research that was all fake). And you don't hear physics being called a sham because of all the kooks publishing their poorly thought-out theories and studies on arXiv.org.

Get over yourself and start trying to solve the problems you belittle.

Re:Who writes this crap (1)

Zero__Kelvin (151819) | about 5 months ago | (#47593263)

This from a guy whose primary interests are networking and BDSM.*

* See his Slashdot alias, then laugh. It's funny.

Re:Who writes this crap (1)

russotto (537200) | about 5 months ago | (#47593331)

This from a guy whose primary interests are networking and BDSM.*

Must be mixed carefully. It's OK to use network cables for bondage, just don't put them back in the network afterwards.

Re:Who writes this crap (1)

phantomfive (622387) | about 5 months ago | (#47594453)

There are a lot of people in society with mistaken ideas and if science like this can be used to push their repugnant ideas out of the mainstream then all the better.

If their ideas are truly repugnant, then science can do the job of showing why they are mistaken. You don't need to use fake science. Fake science used the way you describe is more succinctly known as lying.

replication = good (1)

Spazmania (174582) | about 5 months ago | (#47592731)

Replicating scientific results (or failing to) is a good thing.

Being rude about it, as was apparently the case here, is plain old asshattery.

Re:replication = good (2)

awol (98751) | about 5 months ago | (#47592829)

No, the asshat is not saying that if you cannot get the same results it's not science (in fact the exact opposite), but rather that if you cannot demonstrate that the experiment itself is replicable, then it is not science. The contention in the article that, in the social sciences, this lack of replication of experiments may just be a reality up with which we must put IS the reason why, whatever you want to call it, it is not science.

Does anyone care? (0)

Anonymous Coward | about 5 months ago | (#47592763)

Honestly, does anyone even care? I mean, we should care, but we don't.
There are criminals, violent ones, getting away with some "counseling", while a lot of people get to spend the better part of life institutionalized or so drugged up you could mistake them for brain-dead, because, you know, "treating" a drooling idiot is easier (cheaper) than someone needing constant attention.

The psychological climate (0)

smitty_one_each (243267) | about 5 months ago | (#47592807)

The psychological climate clearly calls for a shift to climate psychology.

You cannot replicate everything (0)

Anonymous Coward | about 5 months ago | (#47592825)

Sorry, real life is messy.

1 - Some replicable tests are a good idea
Some people see Aliens at Roswell when they are there at night and take drugs.
This is a replicable experiment - is it because they have taken drugs or because Aliens are sometimes there?

Generally (sadly), if you have a randomised double-blind controlled experiment that controls for the likely deciding factors, you can decide whether or not it is more likely because people take drugs (happily, you cannot be sure about the presence or absence of aliens)

2 - Some replicable tests are a bad idea
Do the really expensive cancer|baby-saving|Alzheimer's etc. drugs we use really help?
This is also a replicable experiment

Give some people the drug and some a placebo.
Not too ethical, even if you disclose that there might be a placebo

3 - Some things cannot be replicated

Was it right to have QE? Did we have the right amount of QE?
This is not replicable.

You don't get to re-run an economy for the last 6 years - all you can do is watch and measure and argue about causation afterwards.

In the scope of psychology, you get a mix of all 3 experiment types. All these questions are very good questions.
What troubles me is that there will be a growing tendency to not attempt to answer the hard ones.

Re:You cannot replicate everything (1)

sandertje (1748324) | about 5 months ago | (#47593007)

Sorry, real life is messy.

1 - Some replicable tests are a good idea ...

2 - Some replicable tests are a bad idea ...

3 - Some things cannot be replicated ...

1) Occam's razor already tells you it's the drugs. Unless aliens show up only when people take drugs, or drugs grant super-alien-viewing powers, the "aliens are there" model is (apart from being ridiculous) so much more complicated than the simple "your drugs give you hallucinations" model (which we even know is true) that Occam's razor can rule it out.

2) Erm.. you know that this is EXACTLY how drugs are tested every day? Not unethical. Extremely common.

3) You could run a simulation.

Re:You cannot replicate everything (0)

Anonymous Coward | about 5 months ago | (#47593179)

1) Occam's razor already tells you it's the drugs. Unless aliens show up only when people take drugs, or drugs grant super-alien-viewing powers, the "aliens are there" model is (apart from being ridiculous) so much more complicated than the simple "your drugs give you hallucinations" model (which we even know is true) that Occam's razor can rule it out.

Occam's Razor "tells" you no such thing. It suggests it is the drugs.

Occam's Razor is merely a pithy statement of the principle of parsimony. It is not a law in any sense, and it "rules out" nothing. It merely suggests that the simpler explanation is more likely to be correct.

Some of those papers haven't been discredited yet (0)

Anonymous Coward | about 5 months ago | (#47592827)

Only one or two though.

If you can't replicate it... (0)

PvtVoid (1252388) | about 5 months ago | (#47592835)

... then it ain't science. End of story.

Re:If you can't replicate it... (2)

Mister Liberty (769145) | about 5 months ago | (#47592853)

Define 'replicate'.

Re:If you can't replicate it... (0)

Anonymous Coward | about 5 months ago | (#47593091)

> Define 'replicate'.

Do the same thing the same way and see if the results are similar?

Re:Define 'replicate' (2)

petes_PoV (912422) | about 5 months ago | (#47593121)

To replicate an experiment, you take the description of the conditions, tasks, environment, fixed independent and dependent variables, analytical method and results provided by the original experimenter in the (peer-reviewed) paper they published.
If you can show the same results, with the same statistical significance, then it's reasonable to assume that the experiment shows a valid scientific phenomenon.

If you can't then one of the two experiments got it wrong and more work is needed.
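
As a toy version of that procedure (the effect size, group size, and noise level below are hypothetical, not taken from any real study), one can simulate an original experiment and an independent replication and compare effect sizes and significance:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    def run_experiment(true_effect, n=60):
        # Simulate one two-group experiment; return Cohen's d and p-value
        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(true_effect, 1.0, n)
        p = stats.ttest_ind(treated, control).pvalue
        pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
        d = (treated.mean() - control.mean()) / pooled_sd
        return d, p

    d1, p1 = run_experiment(0.6)  # original experiment
    d2, p2 = run_experiment(0.6)  # replication: same protocol, new subjects
    print(f"original:    d={d1:.2f}, p={p1:.4f}")
    print(f"replication: d={d2:.2f}, p={p2:.4f}")

    # If the effect sizes roughly agree and both are significant, it is
    # reasonable to assume a real phenomenon; if they disagree badly, one
    # of the two experiments got it wrong and more work is needed.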

The basic problem with social experiments is that anything based on the judgement, feelings, or anything else that the studied group merely says it would / would-not do, thinks, feels, or otherwise emotes is completely subjective. Asking people how sad, happy, or angry something makes them feel, and rating that feeling - or the difference from previous values - has no scientific merit, as none of the terms used have any hard, scientific definition and none of the participants have had their feelings "calibrated".

It's little different from a scientist (a proper one) measuring electric voltage by sticking their tongue across two electrodes, or measuring distance by eyeballing it. The level of accuracy and standardisation the social "sciences" have at present puts them on a par with 17th-century chemical research: phlogiston and "fixed air" (CO2).

As for being able to determine which variables are being measured - or even what all the variables are in their experiments, the social scientists have yet to discover their subject's version of fire.

Re:Define 'replicate' (1)

mpe (36238) | about 5 months ago | (#47593941)

To replicate an experiment, you take the description of the conditions, tasks, environment, fixed independent and dependent variables, analytical method and results provided by the original experimenter in the (peer-reviewed) paper they published. If you can show the same results, with the same statistical significance, then it's reasonable to assume that the experiment shows a valid scientific phenomenon.
If you can't then one of the two experiments got it wrong and more work is needed.


Actually it would be at least one of the two "got it wrong".

"Soft" Science... (0)

Anonymous Coward | about 5 months ago | (#47592947)

Social Psych has been called a "soft" science for a reason. Not because it is easy, but because it's not a "do this and this happens" discipline. When I was in college (back before the web), Social Psych was not even considered a "science" by the environmental science majors. It was considered a "rocks for jocks" type group of classes for people who didn't understand the scientific method. I have a cultural anthropology degree with a minor in linguistics, and even the forensic anthropology majors considered social psych to be a joke. Predominantly because there was no real analysis. FAs at least had historical trending, disease propagation models, statistical excavation, linguistic drift, shard analysis, and other "tools" to cross-check their work. True, someone with a limited knowledge of FA can still make just as many errors as a social psych PhD holder... Just read the first few pages of "Clan of the Cave Bear."

Statistical tools and significance (0)

Anonymous Coward | about 5 months ago | (#47592963)

Psychologists have largely relied on inferential statistics as tools for inference. Analysis of variance, t-tests, correlation, and regression are used to determine whether results are, or are not, "statistically significant." Too often the focus has been on the inference -- significant or not -- rather than on the descriptive data -- means, regression coefficients etc.

The problem is that tests of statistical significance can tell us only that the tested relationship is, or is not, plausibly due to random fluctuation or chance. For example, we can say that a correlation found to be significant is unlikely to be zero. In this usage, significant does not mean "important"; it means not random. Binary decisions that apparent relationships in my data are random or real do not provide much of a foundation for a developing science. Finding relationships that are not due to chance is a very small step toward real understanding.
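
A quick scipy illustration of "significant does not mean important" (the sample size and the correlation of 0.01 are invented): with enough data, a trivially small relationship is confidently non-zero.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n = 100_000
    x = rng.normal(size=n)
    y = 0.01 * x + rng.normal(size=n)  # true correlation ~ 0.01

    r, p = stats.pearsonr(x, y)
    print(f"r = {r:.4f}, p = {p:.4g}")

    # r is tiny (it explains about 0.01% of the variance), yet p will
    # typically land far below 0.05: statistically "significant" and
    # practically unimportant at the same time.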

Further, random-looking data can easily be produced by weak manipulations, poor measurement tools, and any number of experimental glitches. Therefore, without statistically significant results, publication in a good journal is unlikely. It is also easy to discount later failures to replicate -- studies obtaining non-significant findings -- as due to problems with the replication study. Therefore, the replication study doesn't get published.

An additional problem is the challenge of obtaining adequate sample sizes to ensure the statistical power needed to assess replicability -- the vast majority of published studies are not supported by grant funds. We've known for six decades that even studies published in top journals are chronically underpowered -- the probability of a perfectly executed replication study finding a key result to be significant is usually in the range of .5 (ouch!).
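
That "power around .5" figure is easy to check by simulation (the effect size d = 0.4 and the group size of 50 are meant as typical of the literature, not taken from any specific study):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)

    def simulated_power(d, n, alpha=0.05, trials=5_000):
        # Fraction of simulated experiments that detect a true effect d
        hits = 0
        for _ in range(trials):
            control = rng.normal(0.0, 1.0, n)
            treated = rng.normal(d, 1.0, n)
            if stats.ttest_ind(treated, control).pvalue < alpha:
                hits += 1
        return hits / trials

    print(simulated_power(d=0.4, n=50))   # ~0.5: a coin flip
    print(simulated_power(d=0.4, n=200))  # ~0.98: adequately powered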

I think that the attention these problems have gotten in many of the field's top journals may be embarrassing for the field, but it is a necessary and positive step toward a better science.

They need to review their Feynman. (1, Insightful)

hsthompson69 (1674722) | about 5 months ago | (#47593045)

http://neurotheory.columbia.ed... [columbia.edu]

"It's a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty--a kind of leaning over backwards. For example, if you're doing an experiment, you should report everything that you think might make it invalid--not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you've eliminated by some other experiment, and how they worked--to make sure the other fellow can tell they have been eliminated."

In the search for positive results, and the p-hacking to get there, they're failing to demonstrate scientific integrity.
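
To see why p-hacking works, here is a small simulation (the 20 outcome measures per study and the group size of 30 are arbitrary choices): testing many null effects and reporting only the best one inflates the false-positive rate far past the nominal 5%.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    studies, measures, n = 1_000, 20, 30
    lucky = 0

    for _ in range(studies):
        # Every measure is pure noise: there is no real effect anywhere
        pvals = [stats.ttest_ind(rng.normal(size=n),
                                 rng.normal(size=n)).pvalue
                 for _ in range(measures)]
        if min(pvals) < 0.05:  # report only the "best" outcome
            lucky += 1

    print(lucky / studies)  # ~0.64, not 0.05: most studies "find" something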

This is what we get... (0)

Anonymous Coward | about 5 months ago | (#47593057)

This is what we get for putting up with the global warming hysterics... Science, which is by definition "repeatable experiments", suddenly gets redefined so that repeatable experiments don't matter.

Ick!

Re:This is what we get... (0)

Anonymous Coward | about 5 months ago | (#47593139)

> Science, which is by definition "repeatable experiments"
    ^---- this is what we get for putting up with poor science education.

Simple solution (2)

jd (1658) | about 5 months ago | (#47593195)

Have a journal, call it Debunker's Weekly if you want, that is divided evenly between papers on replication and papers showing negative correlation at the start. Pay authors a nominal amount, according to the thoroughness of the work as judged by referees. Provide the journal free to University libraries. Submit summaries of major stories to Slashdot, The Guardian, various Skeptical societies and other places likely to raise the extreme ire of dodgy researchers. In fact, the more ire, the better.

The journal doesn't have to last long. Just long enough to force bad researchers to improve or quit, force regular journals to publish a wider range of findings to avoid humiliation, and to correct dangerously erroneous beliefs. Since there must be a stockpile of unpublished papers of this sort, you should probably be able to get six or seven bumper editions out before anyone notices the dates, and maybe another two before the journal is sued into oblivion for defamation.

That would be plenty to make some major course corrections and to "out" a few frauds.

Re:Simple solution (1)

petes_PoV (912422) | about 5 months ago | (#47593613)

Let's review:
"Pay authors" ... "Provide journal free ... "

The journal doesn't have to last long

Don't worry, it won't. I'd reckon on one edition.

Of course, what this whole field of study needs is a rich uncle (or sugar daddy) to provide funding for specific, basic, pieces of research. You'd think that for all the money they've made from social media, some of the FB/Twitter/others founders or major beneficiaries could put their hands in their pocket.

Or maybe they are the *last* people who want to make this subject rigorous and scientific?

Re:Simple solution (1)

jd (1658) | about 5 months ago | (#47594455)

It needs to be funded the same way as the British BBC, by license fee, or the same way as public utilities, by tax.

It needs to have a charter guaranteeing payment in advance for the requested service and guaranteeing immunity for any actions provided within the terms of the charter. (If it's not chartered, you'll have every drug company and its brother suing you for publishing the suppressed papers Ben Goldacre keeps talking about.)

If it's not free, it won't have readers. Negative results aren't as desirable and readers will spend their time at PlosONE unless you've something compelling. If it doesn't pay for submission, researchers have greater financial incentive to keep shtum. That narrows your list of options.

Re:Simple solution (0)

Anonymous Coward | about 5 months ago | (#47593951)

The journal doesn't have to last long. Just long enough to force bad researchers to improve or quit

Why would a researcher quit when he/she has tenure?

Re:Simple solution (1)

jd (1658) | about 5 months ago | (#47594367)

Because research is expensive and governments are cheap. If a researcher has been humiliated a couple of times, publicly, their papers become worthless to the big names financing the work. The corporations cut funding, so the universities cut funding. The researcher has a job, technically, but no office, no lab, no work. Further, the job isn't guaranteed. Tenure can be withdrawn for gross malpractice. Being exposed as a fraud probably qualifies. So, no job either.

Tenure is poorly understood. It does not mean a job for life, or even for a fixed period. Tenure merely means that you can't be fired for political reasons. That's all. It guarantees that producing results that conflict with the views of management cannot lead to you facing consequences. You actually have to do something genuinely wrong.

Besides, most of academia has disposed of tenure. Damn fools. If you want to reach new shores of discovery, you have to know that nobody with a vested interest in dragon beliefs can blow you out of the water. That guarantee no longer exists, which is why timidity and fraud have increased in recent years.

Economics (1)

PopeRatzo (965947) | about 5 months ago | (#47593361)

If you think Psychology has a replication problem, get a load of Economics.

When it comes to "hard" sciences, Economics is basically remote viewing with a political agenda.

Re:Economics (1)

iggymanz (596061) | about 5 months ago | (#47594509)

Nonsense, economics has quantities and flows that can be measured. Psychology has none of that.

Re:Economics (1)

PopeRatzo (965947) | about 5 months ago | (#47594601)

It's not the data that's the problem with Economics, it's the postulates that are formed from whole cloth, and "laws" that are similarly invented. In fact, even the data in a lot of Economics is just hokum, based upon opinions more than anything measurable. MMT is a good example.

I would suggest that Psychology's reputation has caused researchers to become a lot more rigorous. The opposite has happened in Economics.

Re: Economics (0)

Anonymous Coward | about 5 months ago | (#47594659)

Economics is to psychology as chemistry is to physics.

Freud not in good shape right now. (0)

Anonymous Coward | about 5 months ago | (#47594169)

Everybody wants to be a psychiatrist or psychologist. Fu#knutbook's social experiments are no exception. What a flawed science.
