
Independent Labs To Verify High-Profile Research Papers

Soulskill posted more than 2 years ago | from the oh-the-drama dept.

Medicine

ananyo writes "Scientific publishers are backing an initiative to encourage authors of high-profile research papers to get their results replicated by independent labs. Validation studies will earn authors a certificate and a second publication, and will save other researchers from basing their work on faulty results. The problem of irreproducible results has gained prominence in recent months. In March, a cancer researcher at the pharmaceutical company Amgen reported that its scientists had repeated experiments in 53 'landmark' papers, but managed to confirm findings from only six of the studies. And last year, an internal survey at Bayer HealthCare found that inconsistencies between published findings and the company's own results caused delays or cancellations in about two-thirds of projects. Now the 'Reproducibility Initiative,' a commercial online portal, is offering authors the chance to get their results validated (albeit for a price). Once the validation studies are complete, the original authors will have the option of publishing the results in the open access journal PLoS ONE, linked to the original publication."


74 comments


Bill O. (-1)

Anonymous Coward | more than 2 years ago | (#40998933)

OMG!!! LIBURULLLS!!!11!!1

cool! (0)

Anonymous Coward | more than 2 years ago | (#40998951)

Negative results are sometimes just as interesting as positive ones, as you usually learn something.

Re:cool! (3, Insightful)

Geoffrey.landis (926948) | more than 2 years ago | (#40999013)

Negative results are sometimes just as interesting as positive ones, as you usually learn something.

You would think.

In the ideal world, that would be true, but in the real world, what you most often learn is that there are many different ways to screw up a delicate measurement in ways from which you learn little or nothing.

Re:cool! (3, Informative)

donaggie03 (769758) | more than 2 years ago | (#40999843)

Negative results are sometimes just as interesting as positive ones, as you usually learn something.

You would think.

In the ideal world, that would be true, but in the real world, what you most often learn is that there are many different ways to screw up a delicate measurement in ways from which you learn little or nothing.

What are you talking about? Negative results don't mean someone screwed up a measurement. Negative results mean the experiment ran correctly but the results went counter to the hypothesis. Negative results are the fruit of good science just as much as positive results are. Screwing up the measurements in an experiment is simply bad science, or not science at all.

I want to live on your planet [Re:cool!] (3, Insightful)

Geoffrey.landis (926948) | more than 2 years ago | (#41000163)

In the ideal world, that would be true, but in the real world, what you most often learn is that there are many different ways to screw up a delicate measurement in ways from which you learn little or nothing.

What are you talking about? Negative results don't mean someone screwed up a measurement. Negative results mean the experiment ran correctly but the results went counter to the hypothesis.

That would be nice if things were that simple.

Negative results are the fruit of good science just as much as positive results are. Screwing up the measurements in an experiment is simply bad science, or not science at all.

What planet are you from? I want to move to your planet, where science is so easy, and stuff always works unless it's "bad science," which apparently comes with a label so anybody can tell which is which.

On my planet, stuff doesn't always work. When it doesn't work, it's not always easy to figure out why it doesn't work. When you don't get a result, it's hard to be confident that you didn't do something wrong -- and the people who are confident that they didn't do something wrong... are often wrong. It's not always trivial to say whether or not you did something wrong, or whether the experimental setup had a flaw, or whether there was something that turns out to be important that you didn't know was important, or whether the result you're trying to replicate was just wrong to start with.

Re:I want to live on your planet [Re:cool!] (1)

donaggie03 (769758) | more than 2 years ago | (#41000369)

In the ideal world, that would be true, but in the real world, what you most often learn is that there are many different ways to screw up a delicate measurement in ways from which you learn little or nothing.

What are you talking about? Negative results don't mean someone screwed up a measurement. Negative results mean the experiment ran correctly but the results went counter to the hypothesis.

That would be nice if things were that simple.

Negative results are the fruit of good science just as much as positive results are. Screwing up the measurements in an experiment is simply bad science, or not science at all.

What planet are you from? I want to move to your planet, where science is so easy, and stuff always works unless it's "bad science," which apparently comes with a label so anybody can tell which is which.

On my planet, stuff doesn't always work. When it doesn't work, it's not always easy to figure out why it doesn't work. When you don't get a result, it's hard to be confident that you didn't do something wrong -- and the people who are confident that they didn't do something wrong... are often wrong. It's not always trivial to say whether or not you did something wrong, or whether the experimental setup had a flaw, or whether there was something that turns out to be important that you didn't know was important, or whether the result you're trying to replicate was just wrong to start with.

On my planet, I never said or implied that science was easy or trivial. Science IS hard, and if you are submitting to a journal without taking the hard steps of ensuring your measurements are accurate and your methods are not flawed, then you are contributing to bad science. I was not saying that every submitted experiment should be guaranteed to be flawless and come with a "label." I was saying that a scientific error in the experiment is not the same thing as a negative result. You can have scientific errors in papers that claim positive results as well as in papers that claim negative results.

In short, the search for errors in an experiment is one of the primary reasons for the journal review process, and should not be confused with whether an experiment claims positive or negative results.

Finally, when you respond to this comment, geoffrey.landis, try to act like an adult. There is no need to be a smart-ass, or glib, or whatever it is you were trying to be.

Re:I want to live on your planet [Re:cool!] (1)

NeutronCowboy (896098) | more than 2 years ago | (#41001717)

Science IS hard, and if you are submitting to a journal without taking the hard steps of ensuring your measurements are accurate and your methods are not flawed, then you are contributing to bad science.

Spoken like someone who never did any research, where a flaky cable is responsible for that super-interesting trend you discovered.... until someone points out the flaky cable. Most people do exactly what you recommend, and are tripped up by flaws in their measurements that they did not foresee or find.

Finally, when you respond to this comment, geoffrey.landis, try to act like an adult. There is no need to be a smart-ass, or glib, or whatever it is you were trying to be.

If you want to be taken seriously when talking about how science works, what scientists should do to get results and how scientists should behave when working on their experiments, you might want to figure out what it is they're already doing. Otherwise, you just come across as naive and presumptuous.

Re:I want to live on your planet [Re:cool!] (1)

donaggie03 (769758) | more than 2 years ago | (#41003143)

Spoken like someone who never did any research, where a flaky cable is responsible for that super-interesting trend you discovered.... until someone points out the flaky cable. Most people do exactly what you recommend, and are tripped up by flaws in their measurements that they did not foresee or find.

To a large extent, I would say that a scientist has the responsibility to ensure his tools are working correctly before publishing findings; and yes, that includes verifying that a cable is not flaky. Besides that, I agree that not every error can be accounted for or foreseen, which is exactly why I said that sort of thing is what the journal review process is for. But all that is just distracting from my main point, which is that making errors in an experiment is not the same thing as getting negative results. The two are not the same. That is all I've been trying to say. Whether or not scientists make mistakes is a side issue.

If you want to be taken seriously when talking about how science works, what scientists should do to get results and how scientists should behave when working on their experiments, you might want to figure out what it is they're already doing. Otherwise, you just come across as naive and presumptuous.

I have an idea of what scientists are already doing. I know that scientists try really hard to find errors in their methodology and equipment before publication, so they don't end up looking like fools when it turns out a cable was bad. I also know that scientists don't forgo that step just because "science is hard". Finally, I know that making mistakes during an experiment is not the same thing as getting negative results in that experiment. If any of this is wrong, please share.

Did you actually read my responses? Because it sounds to me like you and I have pretty much the same idea of what a scientist does. This thread started simply because someone equated making mistakes with getting negative results. I'm not wrong for trying to correct this.

Do you have any idea what kind of work I do? Do you know how much research experience I have? Or perhaps are you being presumptuous?

Why are you so quick to defend someone being an ass? Even if I worked at McDonald's and had zero research or scientific experience, that would be no reason for the tone in some of these responses. Adults try to treat each other with respect, even if they think the other person is wrong. Teenagers treat each other with condescension and ridicule. It is not too much to ask for fellow slashdotters to treat each other with some civility. If you think I am wrong, please share.

Re:I want to live on your planet [Re:cool!] (0)

Anonymous Coward | more than 2 years ago | (#41004475)

You're getting unpleasant and argumentative responses because your rhetoric implies that you do not understand science as an institution, or your participation has been minimal at best, yet you still feel the urge to pontificate on scientific approach and methodology. By claiming that negative results are worth as much as positive results, you are ignoring that there is tremendous pressure for new, positive results. Your idealism, although presumably well-intended, is insulting.

People are going to be rude. You shouldn't be surprised; in my experience, researchers are some of the rudest people out there. Don't expect the world to conform to your milquetoast standards. Get over it and grow up.

Re:I want to live on your planet [Re:cool!] (1)

donaggie03 (769758) | more than 2 years ago | (#41005421)

By claiming that negative results are worth as much as positive results.

Please enlighten me on where I said or implied anything even close to that statement. My only point was that mistakes do not equate to negative results. Nowhere did I suggest that negative results and positive results had the same worth. You and GP seem determined to read more into my statements than what I am actually typing. You seem to be reading somewhere that I think science is somehow perfect and researchers are without flaw. I said no such thing. The closest I came to such a statement was when I implied that was a scientist's goal, "so they don't look like fools." Are you seriously suggesting that, since there are mistakes in science, we should just accept those mistakes because science is hard? If not, then surprise, we are in agreement.

Or perhaps you think I'm the original AC who started this thread, with the statement that we could learn as much from negative results as positive ones? Then I'd say the AC is correct that we could learn from negative results. I'd also say that political pressure for new, positive results has no bearing on the actual scientific value of those results. I find it laughable that you would insinuate that negative results have no value. It is a common viewpoint that the disregard for the intrinsic worth of negative results is a primary flaw in the status quo of prestigious journals and Big Pharma.

Somehow you think that my posts show a lack of participation in the scientific community, and then you go on and attribute a statement to me that 1) is hardly debatable and 2) wasn't even mine. So besides this statement that I never made (until this post, I suppose), show me what exactly I said anywhere in this thread that makes you believe I "do not understand science as an institution, or your participation has been minimal at best" because I'd like to know what stance I've taken that is so unpopular. Is it that sloppy mistakes make bad science? I'm sorry but they do. Is it that scientists try to minimize these mistakes? I would hope that is true. Is it that mistakes that do get through can be caught by the review process? That, too, is correct. Is it that mistakes are not the same as negative results? Also true. That pretty much sums up every position I've taken so far, so please, by all means help me out. Let me know what is so "insulting" about my views, because despite your handful of $5 words, your post lacks substance and specifics.

People are going to be rude so grow up? Give me a break. If I'm trying to have an intelligent discussion and someone starts acting like a child, I will call them on it.

Re:I want to live on your planet [Re:cool!] (1)

AmonRa1979 (797618) | more than 2 years ago | (#41016039)

Hey donaggie, don't take what they say to heart. Some people will twist and contort some minor detail in your comment and then run with it like it must be the only way to interpret what was typed. You (and the initial AC) are absolutely correct that much can be learned from a properly set up experiment that did not produce the result the scientist was hoping for. For instance, I work in a field where the chemical reactions aren't easily predictable. I mix two precursors because the evidence I have says it will produce the material I want. However, I end up getting no reaction at all. This "Negative Result" still provides important information that I wish I could publish, but unfortunately most journals wouldn't accept it. Instead I have to keep searching for precursors that will produce the desired result and then I can sometimes include my negative results in with that. It would save a lot of time for me if I could easily find the negative results of others so that I don't repeat their procedure.

To comment on another reply concerning the "faulty cable": control experiments should be designed to find these things. Not including a control or check against a known sample is pretty bad science in my opinion. If you get extraordinary results, then it is absolutely up to the scientist to make sure the equipment is functioning properly, well before even beginning to write a paper.

Finally, this constant pressure for new, positive results is as much a byproduct of political involvement as it is of scientists cutting corners to produce positive results faster for their own professional gain. I have seen in some aspects of my field that, in order to accomplish proper science in the time allotted by nearly every government grant initiative, much of the "proposed" work has to already be done before you even start applying for the grant. It's my opinion (based solely on my limited personal experience) that this is a direct result of a moderate number of groups pushing out papers as fast as they can without properly checking their results... without any intention of ever running the experiment a second time. I've come across several papers (and patents) which claim that a particular set of precursors will result in a very specific product. Yet, when I perform the experiment precisely under the reported conditions, the result is absolutely no reaction. There are of course some conditions which are difficult to quantify and report in the literature, but it seems hard to believe that I get absolutely no reaction rather than a slower reaction, a faster reaction, or a result that's not the same stoichiometry as reported. In this case, one can always write a rebuttal. However, in the case where you perform an experiment properly without faulty equipment and get a negative result, it's difficult to get it published.

Re:I want to live on your planet [Re:cool!] (1)

donaggie03 (769758) | more than 2 years ago | (#41017483)

Thank you. I wish I could be as eloquent as you are. You said everything I was trying to say but did a much better job at it!

Re:I want to live on your planet [Re:cool!] (1)

tburkhol (121842) | more than 2 years ago | (#41002073)

Negative results are the fruit of good science just as much as positive results are. Screwing up the measurements in an experiment is simply bad science, or not science at all.

What planet are you from? I want to move to your planet, where science is so easy, and stuff always works unless it's "bad science," which apparently comes with a label so anybody can tell which is which.

Good science is designed so that even 'negative' results are reportable and interesting. Within many fields (e.g., biology), few experiments are really designed that way. Many experimental designs are simple, two-factor designs such as [normal-population disease-population] X [placebo real-drug]. The outcome of this is: "Yes, the drug does something" or "We couldn't measure a significant effect." Failure to measure a significant effect is different from a negative result. Failure to measure a significant effect could be because the drug doesn't do anything, or it could be because there was more variability than expected, or the effect is smaller than you expected. It's generally harder to convince people of your failure-to-find, because ordinary designs are set up for a 5% chance of being wrong when you find an effect (p-value), but a 20% chance of being wrong when you fail to find one (power). And that's only when anyone bothers to do a statistical design to determine sample size at all.

Good science is hard. It seems to be especially hard in the biological sciences, where technical inability to control things like transgene dose or knockdown efficiency leads to two-factor, two-level designs that only offer binary interpretation of outcomes. Or ethical concerns limit the number of samples you can collect. It turns out a lot of scientists are just sort of muddling through: every 3 years or so, a high-profile journal publishes a statistical or validation retrospective, and they always turn up about the same result: more than half of published papers turn out to be wrong.
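To put numbers on the 5%-versus-20% asymmetry described above, here is a minimal Python sketch of the standard normal-approximation sample-size calculation for a two-group comparison (the 0.5 "medium" effect size is an assumed illustration, not a figure from the thread):

    from scipy.stats import norm

    def n_per_group(effect_size, alpha=0.05, power=0.80):
        # Approximate n per group for a two-sided, two-sample z-test.
        # alpha = 0.05 is the 5% false-positive rate; power = 0.80
        # leaves the 20% false-negative rate mentioned above.
        z_alpha = norm.ppf(1 - alpha / 2)
        z_beta = norm.ppf(power)
        return 2 * ((z_alpha + z_beta) / effect_size) ** 2

    print(n_per_group(0.5))  # ~62.8, i.e. about 63 subjects per group

A failure-to-find from a study much smaller than this says very little either way.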

Re:cool! (0)

Anonymous Coward | more than 2 years ago | (#41005283)

Screwing up the measurements in an experiment is simply bad science, or not science at all.

Clearly you're not a scientist... moving on...

Re:cool! (1)

donaggie03 (769758) | more than 2 years ago | (#41006365)

From context you should be able to see that I meant an egregious error that makes its way through to publication in a research journal. I didn't mean some obscure problem that the scientist didn't account for. I used the term "screwing up" for a reason, because it implies something about the magnitude of the mistake, or, at the very least, it implies the scientist is being somehow neglectful. So you think random screw-ups by incompetent scientists are A-OK to get published? That type of paper would only muddy the waters, confuse the issue at hand, and set the field back by however long it takes for the paper to get discredited. But surely I'M the one that's "clearly not a scientist." That is the best quote you could come up with, out of how many posts clarifying my position? Yeah, maybe we should move on, because if that's the best you can come up with, and if you so wholeheartedly disagree with that sentence, then if YOU are a scientist, you are one in name only. Perhaps you are so entrenched in the politics of your field that you forget what true science really is? Science is not about being sloppy, lazy, or imprecise, which is all you seem to be defending.

But let me make it easy on you. How about YOU tell ME how you think science REALLY is. Just tell us all. Since I've CLEARLY got it wrong, why don't you enlighten us on what it is you think is right? You surely haven't countered any of my previous assertions, so instead of telling me how wrong I am, tell me your god-given truth, and then maybe I'll "get it."

Re:cool! (1)

donaggie03 (769758) | more than 2 years ago | (#41006391)

BTW for other readers, I am assuming this AC is the same AC that I've had a back and forth with further along this thread, because I asked him/her to point to a specific comment I made. If I'm correct, the two posts above this one should be read after the rest of this thread. Thanks

Re:cool! (0)

Anonymous Coward | more than 2 years ago | (#41001513)

Check figshare

Re:cool! (2)

Hatta (162192) | more than 2 years ago | (#41000263)

This has nothing to do with negative results. This is to weed out false positives.

Always been a problem (4, Interesting)

Geoffrey.landis (926948) | more than 2 years ago | (#40998973)

That's always been a problem; the journals usually want to publish new work, and aren't interested in publishing work that just repeats something already done.

I'm puzzled by this sentence, though: "Once the validation studies are complete, the original authors will have the option of publishing the results in PLoS ONE, linked to the original publication. "

They're saying that the people who did the work replicating the experiment don't get the credit for their work, but instead the authors of the paper that they're trying to replicate do?

And, what if the new work doesn't replicate the results? Does it get published? Credited to whom?

Re:Always been a problem (1)

G3ckoG33k (647276) | more than 2 years ago | (#40999107)

"That's always been a problem; the journals usually want to publish new work, and aren't interested in publishing work that just repeats something already done."

That has been a problem, as you say.

"And, what if the new work doesn't replicate the results? Does it get published?"

Well, that is already part of the previous problem, and has been for a long time.

Very few journals have been publishing "contradictory results", unless "seriously warranted", or something along those lines.

"Credited to whom?"

Yes, this is curious. Why would the original authors be credited, again?

The entire circus makes me wonder if there are one or a few recent cases involving some high-ranked but financially and morally skewed authors.

Do we have any suggestions for any major bogus research producers? Nobel Prize winners, even?

Re:Always been a problem (1)

ceoyoyo (59147) | more than 2 years ago | (#40999271)

Well, that is already part of the previous problem, and has been for a long time.

Very few journals have been publishing "contradictory results", unless "seriously warranted", or something along those lines.

I've never run into that. What usually happens is that researchers don't produce contradictory results, they produce inconclusive ones. As it is, lots of inconclusive studies get published with discussions and conclusions that imply they are negative findings.

Yes, this is curious. Why would the original authors be credited, again?

They paid for it. Perhaps the confirmers will also get their names on the paper.

Do we have any suggestions for any major bogus research producers?

http://retractionwatch.wordpress.com/ [wordpress.com]

Re:Always been a problem (0)

Anonymous Coward | more than 2 years ago | (#40999365)

Well, that is already part of the previous problem, and has been for a long time.

Very few journals have been publishing "contradictory results", unless "seriously warranted", or something along those lines.

I've never run into that. What usually happens is that researchers don't produce contradictory results, they produce inconclusive ones. As it is, lots of inconclusive studies get published with discussions and conclusions that imply they are negative findings.

Do you have any fresh examples of when inconclusive studies trump decisive ones?

Yes, this is curious. Why would the original authors be credited, again?

They paid for it. Perhaps the confirmers will also get their names on the paper.

But if they don't confirm, will they get published?

Do we have any suggestions for any major bogus research producers?

http://retractionwatch.wordpress.com/ [wordpress.com]

Thanks I'll look into that.

Re:Always been a problem (3, Informative)

Naffer (720686) | more than 2 years ago | (#40999293)

It’s much worse than this. The burden of proof on people attempting to publish studies showing that work cannot be replicated is extremely high. Often many-fold more experiments and controls are required to show that it isn’t simply a failure on the part of the group attempting to repeat the experiment. Frequently these sorts of papers must offer an alternative hypothesis to explain both the original and new results as well. These sorts of studies are very difficult and time consuming, and can’t be given to junior graduate students who haven’t already proven themselves to be capable experimentalists. Thus to do something like this you need to assign one or more very capable senior students/postdoctoral workers, which costs money and time and takes away from original research.

Re:Always been a problem (1)

ancienthart (924862) | more than 2 years ago | (#41003823)

Thus to do something like this you need to assign one or more very capable senior students/postdoctoral workers, which costs money and time and takes away from original research.

As someone who gave up on research science in organic chemistry, due to the amount of faulty and incomplete synthesis procedures I encountered in the literature, not doing this costs money and time and takes away from original research. ("Oh yes. We've encountered difficulty getting a high yield in that step as well. We do this." "Um yeah, so why wasn't that in any of your published articles over the last four years?")

Re:Always been a problem (2)

fermion (181285) | more than 2 years ago | (#41000675)

Isn't this science? A paper is published and it is treated suspiciously until the work is repeated or applied to another result. An independent lab is not going to guarantee that a result is valid. In most fields, any lab that has the expertise to repeat a study is not going to be independent. All these scientists know each other. There will always be a risk of collusion. If scientific dishonesty is the issue, two dishonest scientists are not going to solve it, and if a random lab can't duplicate the work, the original scientists can just say the lab didn't have the expertise.

The reality is that any experiment is just a data point, with the conclusion not nearly as interesting as the process. For instance, the Millikan oil drop experiment likely had some level of fraud and probably would not be reproducible in the modern sense. However, the apparatus and process did give us a path to the electric charge that eventually led to something more defensible. Likewise, Einstein's photoelectric effect experiment did not prove that light was a particle. The apparatus and process were vital to experiments done half a century later that did prove the nature of light.

So the problem is not that results are invalid or scientifically dishonest. IMHO the problem comes from three other sources. First is the idea that results, not process, are the key part of a scientific study. This comes from the bad way that science is taught, and the fraudulent presentation of the scientific method. Second is the commercial journals and their desire for publicity and 'impact factor' to drive sales. This is where I think open journals might help. Third is the focus on soft sciences, like medicine, to represent all science. In medicine, the results are all that matter, which necessarily leads to the corruption and perversion of the entire process.

This is being discussed at ITP (1)

paiute (550198) | more than 2 years ago | (#40999039)

I'm Spike Lee (-1)

Anonymous Coward | more than 2 years ago | (#40999057)

I'm Spike Lee and I'm voting for Obama because he's black, and you should too.

http://www.breitbart.com/Breitbart-TV/2012/08/15/Spike-Lee-Im%20going-to-do-Whatever-Can-to-Get-Obama-a-second-term-So-he-can-finally-do-what-he-wants

Re:I'm Spike Lee (0)

Anonymous Coward | more than 2 years ago | (#40999179)

Sounds good. Anything that pisses off rednecks is good enough for me.

Re:I'm Spike Lee (0)

Anonymous Coward | more than 2 years ago | (#40999505)

Yes that's just what Spike Lee is saying too. Fuck the country, screw what's best for our children and our future. Government cheese and food stamps for everyone!

Vote Obama man 'cause he's gonna give it to the man.

The good news is that for the most part all of this is just talk; most of you idiots are too stupid to even be registered, and when it comes time to vote you are just as likely to be high and forget to actually do it.

Obama=Epic Fail

Re:I'm Spike Lee (0)

Anonymous Coward | more than 2 years ago | (#40999753)

Why don't you just be honest and say your problem with Obama is he's an 'uppity' black man?

Re:I'm Spike Lee (0)

Anonymous Coward | more than 2 years ago | (#40999859)

Because that is not my problem with Obama. I despise Marxists.

I wonder why you would say such a thing about me; I have never said anything even remotely close to attacking Obama's race.

You on the other hand opened right up with 'Anything that pisses off rednecks is good enough for me.'

Pot, meet kettle.

Re:I'm Spike Lee (1)

Nadaka (224565) | more than 2 years ago | (#41000269)

Obama isn't a Marxist. He isn't even a socialist.

He is a moderately conservative, pro-corporate-welfare capitalist.

Re:I'm Spike Lee (0)

Anonymous Coward | more than 2 years ago | (#41000701)

I continually am amazed at the lack of understanding of these things around here. You truly think of Obama as conservative?

First of all, let's get our terms straight: I am talking about the western definition of conservative and liberal, not the classical liberal.

Now I have two questions for you, you say Obama is 'moderately conservative', please name one of these conservative things he has done?

Secondly, you really have to explain what the hell you mean by 'pro corporate welfare capitalist'. Do you even understand what the term capitalist means? Are you actually thinking of crony capitalism, which is a real term and BTW has nothing whatsoever to do with capitalism (more accurately referred to as the free market)? Crony capitalism in reality is essentially a form of fascism in which the leaders of industry and those of the state work in collusion to manipulate the market; case in point would be Obama bailing out GM (and in doing so what he really was trying to do was bail out the unions - you understand this right? Unions are socialist organizations at their core - you understand this right?), then working in GM's favor with tax breaks, regulations targeted at their competitors, etc. All of these things are facts - you understand this right? These are not the actions of a conservative by any stretch of the imagination.

So do tell, what things has Obama done that qualify him as a conservative? And what the hell exactly is a 'pro corporate welfare capitalist'?

Re:I'm Spike Lee (1)

Nadaka (224565) | more than 2 years ago | (#41011615)

You really are not worth replying to, anonymous coward. But I will inform you anyway.

Yes. Obama really is conservative.

The "American" definition of conservative is not conservative. It is a radical regressive movement attempting to overturn the liberties that we have fought and died for since the American and French revolutions. It is attempting to return to a dark age where a select class of gentry wields absolute authority and power through economic, political and superstitious means. In the past, this was done through the concept of the divine right of kings and through the threat of supernatural punishment/reward by various religious institutions. Now they use a perversion of the free market ideal instead of the divine right of kings to wield power through corporate proxies.

Obama is conservative. He is attempting to maintain the status quo and protect the institutions that are currently in place. If you had actually read The Wealth of Nations by Adam Smith, you would see that he makes very strong warnings about the inherent problems with his abstract model that require regulation to manage. So yes, Obama is a capitalist. However, he also plays authoritarian games and transfers wealth from the people to select interests through corporate tax breaks, grants and other programs. It is welfare for corporations, which only serves to enrich the wealthy.

He is not progressive, or liberal. And neither "progressive," "liberal," nor even "socialist" indicates Marxism at all.

I am not the ignorant one here. Anyone calling Obama a Marxist is either stupid or ignorant.

Re:I'm Spike Lee (0)

Anonymous Coward | more than 2 years ago | (#41001965)

Where'd you go, drone? You make a statement; do you stand by what you have said or not? I think you have no idea what you are talking about.

Silence is acceptance.

Re:I'm Spike Lee (0)

Anonymous Coward | more than 2 years ago | (#41008691)

So, drone, where did you go? You cannot support your statement. You are a coward.

I have read some of your other posts; frankly, you are batshit crazy.

You are either very young or very stupid. Quite possibly both. Go ahead and try to support your contention that Obama is a conservative; you will only show everyone more fully how much I am right.

http://obamaism.blogspot.com/

Re:I'm Spike Lee (0)

Anonymous Coward | more than 2 years ago | (#41000419)

Obama is a Marxist? Since when? What 'Marxist' laws has he signed into law? The one that required people to buy private health insurance or face fines? The ones he tried to get passed to favor the RIAA/MPAA corporations? The ones that gave the corporations who helped spy on US citizens immunity from lawsuit? The same guy whose campaign was financed by multi-billion dollar corporations? Yeah, that totally sounds like what a 'Marxist' would be like. Only in conservaturd land could an obvious crony capitalist and corporatist be called a 'Marxist'.

Re:I'm Spike Lee (0)

Anonymous Coward | more than 2 years ago | (#41000529)

Stick to the subject, drone: you posit that I am a racist and I reject that. You are the one making racist statements.

As to Obama's Marxism, this is well known; the problem is you don't even know what the term means, so you wouldn't know one if you saw one.

Read and learn, drone.

http://www.amazon.com/exec/obidos/ASIN/1451698097/theamericansp-20/

Re:I'm Spike Lee (0)

Anonymous Coward | more than 2 years ago | (#41000803)

Address my points rather than trying to dodge them. How are any of the actions that I listed that Obama has done even remotely considered 'Marxist'? The vast majority of the laws he signs and the executive orders he issues are corporation-enriching and basically further the fascist police state set up by the Bush Administration. No Marxist would agree with the vast majority of the shit he's done.

Re:I'm Spike Lee (0)

Anonymous Coward | more than 2 years ago | (#41001097)

Stick to the subject, drone: you posit that I am a racist and I reject that. You are the one making racist statements.

As to the rest of your rant, look up two posts and research this:

http://obamaism.blogspot.com/

And then go fuck yourself.

How can we implement this in practice? (3, Insightful)

Vario (120611) | more than 2 years ago | (#40999091)

The idea of reproducing important results is good and is part of the scientific method. In practice this is much harder to accomplish due to several constraints. I can only speak for my field, but I think it applies to other fields as well: the reproduction itself is hard.

This leads us to a bunch of problems. If it takes a graduate student a year to collect a data set on a custom-made machine, which is expensive and time-consuming, who has the resources to reproduce the results? In most branches we are limited by the available personnel. It is hard to imagine giving someone the task of 'just' reproducing someone else's result, as this neither generates high-impact publications nor can be used for grant applications.

The thought behind this would benefit scientific progress, especially by weeding out questionable results that can lead you far off track, but someone needs to do it. And it better not be me, as I need time for my own research to publish new results. Every reviewer asks him/herself whether a submission is really an achievement worth publishing; which reviewer would accept a paper stating "We reproduced X and did not find any deviations from the already published results"?

Fail [Re:How can we implement this in practice?] (4, Insightful)

Geoffrey.landis (926948) | more than 2 years ago | (#40999259)

Yes, the more I look at this, the more I see little chance for it to work.

A graduate student will typically spend ~2 years of dedicated study in a narrowly specialized field learning enough lab technique to do a difficult experiment, often either building their own custom-made equipment or using one-of-a-kind equipment hand-built by a previous graduate student, and do so with absurdly low pay, in order to produce new results. You can't just buy that; they're working for peanuts only because they are looking for the credit for an original contribution to the field. And then you're going to say "oh, by the way, the original authors get the publication credit for your work if it reproduces their results, and we won't publish at all if you don't."

And, frankly, why would the original researchers go for this? You're really asking institutions to pay in order to set up a laboratory at a rival institution, and then spend time and effort painstakingly tutoring their rivals in technique so as to bring them up to the cutting edge of research? And even if you can bring a team from zero up to cutting-edge enough to duplicate your work, what you get out of it is a publication in a database saying you were right in the first place?

Re:How can we implement this in practice? (1)

ISoldat53 (977164) | more than 2 years ago | (#41000191)

How would an independent lab replicate the results from something like the LHC? There's only one.

Re:How can we implement this in practice? (1)

Electricity Likes Me (1098643) | more than 2 years ago | (#41002693)

This is why the LHC has ATLAS and CMS: different detectors looking at the same beamlines, with independent teams (at least as I understand it).

Dumb racket (3, Informative)

kencurry (471519) | more than 2 years ago | (#40999169)

There is simply no way this would be effective for major research topics. They can't be experts across all fields, e.g., they would not have regulatory clearance to do medical studies. They would not have equipment or experience to do esoteric materials or particle physics etc. So yeah, call me extremely skeptical.

Re:Dumb racket (1)

A beautiful mind (821714) | more than 2 years ago | (#40999281)

This is why we need mandatory trial registration, so that we have a paper trail for abandoned trials and trials which fail to confirm an effect.

Who will pay for this? (4, Insightful)

Joe Torres (939784) | more than 2 years ago | (#40999201)

The article says that the "authors will pay for validation studies themselves" at first. This is a nice idea, but it is not practical in an academic setting. Academic labs would rather spend money on more people or supplies instead of paying an independent lab to replicate data for them. New ideas are barely able to get funding these days, so why would extra grant money be spent to do the same exact studies over again? There could be a use for this in industry, but they would probably pay their own employees to do this instead if it is worth it.

Re:Who will pay for this? (0)

Anonymous Coward | more than 2 years ago | (#40999251)

This sounds more like some venture capital firm trying to make a quick buck. I know of no scientists who would use this, because it sounds like a scam. It doesn't address how science really works and is a solution to a problem that doesn't exist.

Reproducibility doesn't require a different lab. It just means you can show multiple data sets on the same apparatus.

Re:Who will pay for this? (2)

sexconker (1179573) | more than 2 years ago | (#40999299)

They're hoping to establish their certification as important, trustworthy, etc. It sounds nice on paper, but when you get down to it, it's still for-pay research seeking a previously determined outcome, and another in the long line of ultimately pointless certifications/accreditations.

If they're successful, then anyone at a public institution who wants to be published will strive for that certification, and will demand essentially double the grant money.
Universities will pay for it because they're mired in academic bullshit such as this. Private research institutions would simply fund their own, smaller study to verify, or publish without replicating the results, as usual, and tell the researcher that they should be glad they got the initial grant to study the effects of touching genitals with different temperature probes in the first place. (And no, that's not a made-up example.)

Re:Who will pay for this? (1)

ceoyoyo (59147) | more than 2 years ago | (#40999305)

It seems to me the right people are currently paying for replication. If a drug company wants to use some results, they replicate them first. The drug company SHOULD pay for that study. If someone else in academia is interested in using a result, they replicate it first.

The problem seems to be that people, including most researchers, put entirely too much faith in individual studies.

Well now (3, Insightful)

symes (835608) | more than 2 years ago | (#40999235)

I can see this happening for some fairly small studies, but many very big studies simply can't be replicated. For example, a big public health study will possibly change the sampling population. What about the LHC? How could anyone realistically replicate that work? The deal is that replication isn't really replication, as you can't always copy what someone has already done. This idea just seems more like profiteering than anything else. What we really need are options for research groups to publish studies that failed but say something interesting about why they failed. This is much more useful. This way we all learn. Plus, big labs aren't always free from suspicion themselves.

Re:Well now (1)

CCarrot (1562079) | more than 2 years ago | (#41000411)

I can see this happening for some fairly small studies, but many very big studies simply can't be replicated. For example, a big public health study will possibly change the sampling population.

Err...so? If the study results can't be supported across a different sampling population (that still meets the study's stated criteria), then the original study results should be revised or invalidated. In fact, it's a better acid test if they do change the population.

A fictitious example: if a study emerges that says eating X amount of tomatoes increases the chance of bearing twins in women ages 19-29, but changing the sampling population to people from a different country (still eating tomatoes, still 19-29, and still bearing some percentage of twins) reverses that finding or is inconclusive, then the original finding is either too broad or just plain wrong.

That being said, I agree with the rest of your post. Some experiments are just too big, take too long, and involve too much specialized equipment to be quickly replicated elsewhere. Trying to book time at the LHC for this would increase the costs far beyond reason; any research grants would basically have to be doubled to support it.

So much for "peer reviewed" papers from academia (0)

Anonymous Coward | more than 2 years ago | (#40999241)

When they're all in cahoots (for grant money) based on falsified research.

Re:So much for "peer reviewed" papers from academia (1)

Joe Torres (939784) | more than 2 years ago | (#40999323)

I think you are looking at this the wrong way. If anything it is more difficult to publish results that are consistent with other studies because there isn't much interest (unless it is controversial). Studies have a better chance of being published in high-impact (widely read) journals if they report something new that causes a change in the way the scientific field thinks.

Ratios (4, Interesting)

bmo (77928) | more than 2 years ago | (#40999263)

>53 'landmark' papers, but managed to confirm findings from only six of the studies.

That's 89 percent crap (47/53 ≈ 88.7%).

Sounds about right.

I repeat Sturgeon's Revelation, which was wrung out of me after twenty years of wearying defense of science fiction against attacks of people who used the worst examples of the field for ammunition, and whose conclusion was that ninety percent of SF is crud. Using the same standards that categorize 90% of science fiction as trash, crud, or crap, it can be argued that 90% of film, literature, consumer goods, etc. are crap. In other words, the claim (or fact) that 90% of science fiction is crap is ultimately uninformative, because science fiction conforms to the same trends of quality as all other artforms. -- Theodore Sturgeon

And as I get older, it seems that this observation holds true more every day, in everything.

--
BMO

Re:Ratios (1)

Joe Torres (939784) | more than 2 years ago | (#40999601)

I think the word "crap" is a little harsh. "It was acknowledged from the outset that some of the data might not hold up, because papers were deliberately selected that described something completely new, such as fresh approaches to targeting cancers or alternative clinical uses for existing therapeutics." These might have been "landmark" papers (whatever that means), but that doesn't mean that the conclusions will hold up in every model or in every application. A finding in one type of cancer cell (or inbred strain of worm, fly, mouse, rabbit, ape, etc.) will not necessarily directly lead to an effective therapy for humans. If scientists are afraid to publish risky results that have never been observed before, then scientific progress will slow down.

Re:Ratios (1)

bmo (77928) | more than 2 years ago | (#41000041)

> If scientists are afraid to publish risky results that have never been observed before, then scientific progress will slow down.

That's not what I'm advocating.

I'm just making the observation that Sturgeon's Law holds true here too. The overwhelming majority of stuff everywhere is mediocre at best. As for scientific studies, I think the problem is a result of publish-or-perish *and* a bias toward positive results, as opposed to negative results (we did this and it didn't work). As if coming up with a result that didn't match your hypothesis is bad science.

Here's the question: does doing science more carefully "slow down" science overall or does it increase the speed of scientific advance due to more reliable results? You may reference Voltaire's "perfect is the enemy of the good."

--
BMO

Re:Ratios (1)

Joe Torres (939784) | more than 2 years ago | (#41001775)

First, I'd like to clarify what I meant when I said "risky." I think "unprecedented" would have been a better word. Peer review does a pretty good job (depending on the journal) of making sure a paper is internally consistent and, as long as the data isn't faked, valid enough to base future hypotheses on. That being said, many papers will overstate their findings and make conclusions in their discussion section (where it is perfectly fine to put this stuff) that aren't entirely supported by their data. Scientists are expected to evaluate results critically and often don't agree with the conclusions of the papers, but the results (limited to the experimental system) are reliable for the most part. I would assume that most of the "landmark" papers would fit this description (results are reliable, but the conclusions could be crap).

As for the slowing-down-scientific-progress part: I think that if the standard of what is acceptable to publish (for disease-focused research) is that it has to work in human patients, then progress will be slowed down. I could be wrong, but how I see what the study concluded is like this: a "landmark" paper is published that identifies Compound X, which inhibits a certain signalling pathway in a particular type of tumor (derived from a human cancer cell line) and prevents an inbred strain of mice from dying (within a certain time-frame) after injection of a certain amount of the cancer cells in a particular place. The authors then conclude that the compound cures cancer. Compound X is then used in a clinical trial involving multiple human patients with tumors made up of a heterogeneous cell population (with a unique tumor micro-environment) and is found not to significantly alter the disease outcome (which could be tumor size and not survival). Compound X is considered a failure and the "landmark" paper is considered crap.

They need this for medical studies too (1)

Anonymous Coward | more than 2 years ago | (#40999403)

Multi-variable medical studies need something like this as well. They also need to have 'real world' results to see if their study findings scale to the millions of people in the general population.

College students should be the ones doing this type of stuff. Universities have a budget for this; it will teach the kids what is current in their field and give them exposure to the test equipment and process.

Also, isn't this what peer review is supposed to be, prior to getting published?

The Price Could Be Interesting... (1)

damn_registrars (1103043) | more than 2 years ago | (#40999467)

A lot of published work involves a very large number of experiments, sometimes done on very expensive instrumentation over a fairly long span of time. If the costs for reproducing the results are not scaled to the complexity of the work, this new lab won't be able to keep their lights on for long...

Easy for some fields (1)

lorinc (2470890) | more than 2 years ago | (#40999635)

In all computer-related fields, that's pretty easy: give the code. It's often a pain in the ass to reproduce the results (and I speak only for my field), but as soon as you get the code, you see what the tricky part is and what's left to improve.
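As a rough sketch of what "give the code" buys you (a hypothetical script, not anyone's actual research code), even just pinning the random seed and printing library versions lets a reader re-run the exact computation:

    import sys
    import numpy as np

    SEED = 42  # arbitrary, but published alongside the results

    def run_experiment(seed=SEED):
        rng = np.random.default_rng(seed)
        data = rng.normal(loc=0.1, scale=1.0, size=1000)  # stand-in "measurement"
        return data.mean()

    # Recording versions makes environment differences visible to replicators.
    print("python", sys.version.split()[0], "numpy", np.__version__)
    print("result:", run_experiment())  # same output on every re-run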

Looks like they get something either way... (1)

lazlo (15906) | more than 2 years ago | (#40999663)

If there's confirmation, they get to republish their results. If there isn't, they get to republish in the JIR [jir.com].

Statistical confirmation (2)

mutube (981006) | more than 2 years ago | (#40999781)

The costs involved in performing research would preclude this working in most fields. However, where there would be considerable value in this sort of 'out of house' service is in performing re-analysis of the raw data behind the publication. Stats is hard, and it's all too easy to make a bit of a hash of it. Unfortunately the current peer review process doesn't always address this adequately -- either because the reviewers aren't necessarily any better at statistics themselves, or because the data as presented has been stepped through processing that may add unexpected bias. Having a career statistician cast a wary eye over the analysis in the original Wakefield paper [doi.org] certainly wouldn't have hurt.

Re:Statistical confirmation (1)

Joe Torres (939784) | more than 2 years ago | (#40999895)

Wakefield's study wouldn't have been fixed with independent statistical analysis because the results were faked. I do agree that many scientists could use some help with statistics and it would probably be a good idea if certain journals had a statistician on staff that could re-analyze raw data as a part of the review process.

Re:Statistical confirmation (1)

mutube (981006) | more than 2 years ago | (#41000291)

True, the results were faked in some cases, but putting those aside (and going completely from memory), the groups were still poorly matched and the conclusions bore little relation to the analysis. That's more what I was getting at -- do the data/statistics support the claim of the paper? -- and I don't think they did.

For catching intentional faking I've always quite liked Benford's Law [wikipedia.org] (according to a very short section in the Wikipedia article, it's been successfully tested against scientific fraud). At the very least it would make it a lot harder to fake the demographics.
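A minimal sketch of such a Benford check (illustrative only; "values" is assumed to be a list of positive raw numbers pulled from a paper's data tables):

    from math import log10
    from collections import Counter
    from scipy.stats import chisquare

    def leading_digit(x):
        # First significant digit of a positive number.
        s = str(abs(x)).lstrip("0.")
        return int(s[0])

    def benford_test(values):
        counts = Counter(leading_digit(v) for v in values)
        observed = [counts.get(d, 0) for d in range(1, 10)]
        n = sum(observed)
        # Benford's law: P(first digit = d) = log10(1 + 1/d)
        expected = [n * log10(1 + 1 / d) for d in range(1, 10)]
        return chisquare(observed, f_exp=expected)

A very small p-value from the chi-square test flags digit frequencies that stray from Benford's distribution, though with small samples this is only a red flag, not proof of fraud.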

Very expensive (1)

Pigeon451 (958201) | more than 2 years ago | (#41000497)

There have been many landmark publications on cancer research, which usually involve at the very least extensive animal studies. Many involve human subjects. The cost to test drugs or theories on humans is enormous. Most scientists don't have the funds to redo these experiments, and wouldn't want to either -- the grant money they receive would go towards building on previous results. No funding agency would give money to re-verify a study already published.

At the very least, the authors could make ALL data available for someone to check. Data can be misrepresented very easily, especially in statistics, so having an independent group to verify results would be very welcome.

Re:Very expensive (1)

ralphbecket (225429) | more than 2 years ago | (#41003377)

The cost is irrelevant: if anyone is to build on your results, they should first look for confirmation.

Why?
- Publications suffer a huge selection bias -- it is nearly impossible to get a negative result published (even if it refutes prior publications!).
- Most statistical work (a) uses an absurdly generous 95% confidence interval and (b) hasn't even been vetted by a competent statistician.

Requiring data and code to be published would go a long way towards improving the situation, since there's no point in reproducing an experiment if you can show the original method or data was flawed. But good luck getting the world of climate science to adopt that practice :-)

Significance isn't necessarily significant (0)

Anonymous Coward | more than 2 years ago | (#41000513)

IMHO the problem here is over-reliance on statistical significance as a sign of the validity of a hypothesis. Statistical significance is simply an estimate of the probability of Type I error. Although it is necessary to find significance in order to provisionally reject the null hypothesis, significance by itself is not sufficient to support the experimental hypothesis. What is needed is some measure of effect size, or even better a measure of predictive power on a separate cross-validated data set. If all publications were required to report effect size as well as significance, problems with replication failure would be reduced substantially.
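To illustrate the point with a hypothetical worked example (the numbers are invented): with a large enough sample, even a negligible true effect comes out "significant," which is exactly why an effect-size measure belongs next to the p-value:

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    a = rng.normal(0.00, 1.0, 20000)  # control group
    b = rng.normal(0.03, 1.0, 20000)  # tiny true effect, huge n

    t, p = ttest_ind(a, b)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    cohens_d = (b.mean() - a.mean()) / pooled_sd

    print(f"p = {p:.3g}")         # quite likely below 0.05 at this n
    print(f"d = {cohens_d:.3f}")  # ~0.03: a negligible effect size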

Question about getting published (1)

ALeader71 (687693) | more than 2 years ago | (#41001241)

Isn't the purpose of publishing your research to open it up to scrutiny and, yes, validation of your results? In college I read a few dozen papers which merely validated the results of another paper.

Replicating results is nothing new (0)

Anonymous Coward | more than 2 years ago | (#41001345)

For example, the journal Organic Syntheses has been doing it for decades now, and for all submissions, not just "landmark" papers (how one knows a priori that a paper will be extremely important, in a reproducible and reliable fashion, is unclear to me).

See: http://www.orgsyn.org/submission.html for details

Another option is that papers that have been independently reproduced bear a tag or a distinctive category, unlike the ones that went *only* through the peer review process (which might not entail reproducing the experimental work).

Re:Replicating results is nothing new (1)

Lord Maud'Dib (611577) | more than 2 years ago | (#41002929)

Came here to say the exact same thing re orgsyn.org. Nothing new here, move along...

Great Idea! (1)

BitterOak (537666) | more than 2 years ago | (#41001811)

So all we need to do now is wait for someone to build another LHC, find the Higgs themselves, and confirm the CERN result. Then they get to publish!

As a scientist I have to wonder (1)

the plant doctor (842044) | more than 2 years ago | (#41007491)

I like the idea, but how do you fund this effort? I don't see the article making any mention of this.

We all spend our time writing grants now to support our own research and have little enough time to do it. Now we're expected to do someone else's research? I suppose it's a bit like reviewing articles: if you want to publish, you should review. However, this takes much more time, effort, and money to do.
