
Independent Labs To Verify High-Profile Research Papers

ananyo writes "Scientific publishers are backing an initiative to encourage authors of high-profile research papers to get their results replicated by independent labs. Validation studies will earn authors a certificate and a second publication, and will save other researchers from basing their work on faulty results. The problem of irreproducible results has gained prominence in recent months. In March, a cancer researcher at the pharmaceutical company Amgen reported that its scientists had repeated experiments in 53 'landmark' papers, but managed to confirm findings from only six of the studies. And last year, an internal survey at Bayer HealthCare found that inconsistencies between published findings and the company's own results caused delays or cancellations in about two-thirds of projects. Now the 'Reproducibility Initiative,' a commercial online portal, is offering authors the chance to get their results validated (albeit for a price). Once the validation studies are complete, the original authors will have the option of publishing the results in the open-access journal PLoS ONE, linked to the original publication."
This discussion has been archived. No new comments can be posted.

  • by Geoffrey.landis ( 926948 ) on Wednesday August 15, 2012 @01:32PM (#40998973) Homepage

    That's always been a problem; the journals usually want to publish new work, and aren't interested in publishing work that just repeats something already done.

    I'm puzzled by this sentence, though: "Once the validation studies are complete, the original authors will have the option of publishing the results in PLoS ONE, linked to the original publication. "

    They're saying that the people who did the work replicating the experiment don't get the credit for their work, but instead the authors of the paper that they're trying to replicate do?

    And, what if the new work doesn't replicate the results? Does it get published? Credited to whom?

    • "That's always been a problem; the journals usually want to publish new work, and aren't interested in publishing work that just repeats something already done."

      That has been a problem as you say.

      "And, what if the new work doesn't replicate the results? Does it get published?"

      Well, that has also long been part of the same problem.

      Very few journals have been publishing "contradictory results", unless "seriously warranted", or something along those lines.

      "Credited to whom?"

      Yes, this is curious. W

      • by ceoyoyo ( 59147 )

        Well, that has also long been part of the same problem.

        Very few journals have been publishing "contradictory results", unless "seriously warranted", or something along those lines.

        I've never run into that. What usually happens is that researchers don't produce contradictory results, they produce inconclusive ones. As it is, lots of inconclusive studies get published with discussions and conclusions that imply they are negative findings.

        Yes, this is curious. Why would the original authors be c

    • by Naffer ( 720686 ) on Wednesday August 15, 2012 @02:07PM (#40999293) Journal
      It’s much worse than this. The burden of proof on people attempting to publish studies showing that work cannot be replicated is extremely high. Often many-fold more experiments and controls are required to show that it isn’t simply a failure on the part of the group attempting to repeat the experiment. Frequently these sorts of papers must offer an alternative hypothesis to explain both the original and new results as well. These sorts of studies are very difficult and time consuming, and can’t be given to junior graduate students who haven’t already proven themselves to be capable experimentalists. Thus to do something like this you need to assign one or more very capable senior students/postdoctoral workers, which costs money and time and takes away from original research.
      • Thus to do something like this you need to assign one or more very capable senior students/postdoctoral workers, which costs money and time and takes away from original research.

        As someone who gave up on research science in organic chemistry, due to the amount of faulty and incomplete synthesis procedures I encountered in the literature, not doing this costs money and time and takes away from original research. ("Oh yes. We've encountered difficulty getting a high yield in that step as well. We do this." "Um yeah, so why wasn't that in any of your published articles over the last four years?")

    • by fermion ( 181285 )
      Isn't this science? A paper is published and it is treated with suspicion until the work is repeated or applied to another result. An independent lab is not going to guarantee that a result is valid. In most fields, any lab that has the expertise to repeat a study is not going to be independent. All these scientists know each other. There will always be a risk of collusion. If scientific dishonesty is the issue, two dishonest scientists are not going to solve the issue, and if a random lab can't duplicate,
  • by Vario ( 120611 ) on Wednesday August 15, 2012 @01:44PM (#40999091)

    The idea of reproducing important results is good and is part of the scientific method. In practice this is much harder to accomplish due to several constraints. I can only speak for my field, but I think it applies to other fields as well: reproduction is hard in itself.

    This leads us to a bunch of problems. If it takes a graduate student a year to collect a data set on a custom-made machine that is expensive and time-consuming, who has the resources to reproduce the results? In most branches we are limited by the available personnel. It is hard to imagine giving someone the task of 'just' reproducing someone else's result, as this does not generate high-impact publications and cannot be used for grant applications.

    The thought behind this would benefit scientific progress, especially by weeding out questionable results that can lead you far off track, but someone needs to do it. And it had better not be me, as I need time for my own research to publish new results. Every reviewer asks whether a paper is really an achievement worth publishing; which reviewer would accept a paper stating "We reproduced X and did not find any deviations from the already published results"?

    • by Geoffrey.landis ( 926948 ) on Wednesday August 15, 2012 @02:03PM (#40999259) Homepage

      Yes, the more I look at this, the more I see little chance for it to work.

      A graduate student will typically spend ~2 years of dedicated study in a narrowly specialized field learning enough lab technique to do a difficult experiment, often either building their own custom-made equipment, or using one-of-a-kind equipment hand-built by a previous graduate student, and do so with absurdly low pay, in order to produce new results. You can't just buy that; they're working for peanuts only because they are looking for the credit for an original contribution to the field. And then you're going to say "oh, by the way, the original authors get the publication credit for your work if it reproduces their results, and we won't publish at all if you don't."

      And, frankly, why would the original researchers go for this? You're really asking institutions to pay in order to set up a laboratory at a rival institution, and then spend time and effort painstakingly tutoring their rivals in technique so as to bring them up to the cutting edge of research? And even if you can bring a team from zero up to cutting-edge enough to duplicate your work, what you get out of it is a publication in a database saying you were right in the first place?

    • How would an independent lab replicate the results from something like the LHC? There's only one.
      • This is why the LHC has ATLAS and CMS: different detectors looking at the same beamlines, with independent teams (at least as I understand it).

  • Dumb racket (Score:4, Informative)

    by kencurry ( 471519 ) on Wednesday August 15, 2012 @01:52PM (#40999169)
    There is simply no way this would be effective for major research topics. They can't be experts across all fields, e.g., they would not have regulatory clearance to do medical studies. They would not have equipment or experience to do esoteric materials or particle physics etc. So yeah, call me extremely skeptical.
    • This is why we need mandatory trial registration, so that we have a paper trail for abandoned trials and trials which fail to confirm an effect.
  • by Joe Torres ( 939784 ) on Wednesday August 15, 2012 @01:55PM (#40999201)
    The article says that the "authors will pay for validation studies themselves" at first. This is a nice idea, but it is not practical in an academic setting. Academic labs would rather spend money on more people or supplies than pay an independent lab to replicate data for them. New ideas are barely able to get funding these days, so why would extra grant money be spent to do the same exact studies over again? There could be a use for this in industry, but they would probably pay their own employees to do this instead if it is worth it.
    • They're hoping to establish their certification as important, trustworthy, etc. It sounds nice on paper, but when you get down to it, it's still for-pay research seeking a previously determined outcome, and another in the long line of ultimately pointless certifications/accreditations.

      If they're successful, then anyone at a public institution who wants to be published will strive for that certification, and will demand essentially double the grant money.
      Universities will pay for it because they're mired in a

    • by ceoyoyo ( 59147 )

      It seems to me the right people are currently paying for replication. If a drug company wants to use some results, they replicate them first. The drug company SHOULD pay for that study. If someone else in academia is interested in using a result, they replicate it first.

      The problem seems to be that people, including most researchers, put entirely too much faith in individual studies.

  • Well now (Score:4, Insightful)

    by symes ( 835608 ) on Wednesday August 15, 2012 @02:00PM (#40999235) Journal

    I can see this happening for some fairly small studies, but many very big studies simply can't be replicated. For example, a big public health study will possibly change the sampling population. What about the LHC? How could anyone realistically replicate that work? The deal is that replication isn't really replication, as you can't always copy exactly what someone has already done. This idea just seems more like profiteering than anything else. What we really need are options for research groups to publish studies that failed but say something interesting about why they failed. This is much more useful. This way we all learn. Plus big labs aren't always free from suspicion themselves.

    • I can see this happening for some fairly small studies, but many very big studies simply can't be replicated. For example, a big public health study will possibly change the sampling population.

      Err...so? If the study results can't be supported across a different sampling population (that still meets the study's stated criteria), then the original study results should be revised or invalidated. In fact, it's a better acid test if they do change the population.

      fictitious e.g.: if a study emerges that says eating X amount of tomatoes increases the chance of bearing twins in women ages 19-29, but changing the sampling population to people from a different country (still eating tomatoes, still 19-29

  • Ratios (Score:5, Interesting)

    by bmo ( 77928 ) on Wednesday August 15, 2012 @02:04PM (#40999263)

    >53 'landmark' papers, but managed to confirm findings from only six of the studies.

    That's 89 percent crap (47 of 53, i.e. 88.7%).

    Sounds about right.

    I repeat Sturgeon's Revelation, which was wrung out of me after twenty years of wearying defense of science fiction against attacks of people who used the worst examples of the field for ammunition, and whose conclusion was that ninety percent of SF is crud. Using the same standards that categorize 90% of science fiction as trash, crud, or crap, it can be argued that 90% of film, literature, consumer goods, etc. are crap. In other words, the claim (or fact) that 90% of science fiction is crap is ultimately uninformative, because science fiction conforms to the same trends of quality as all other artforms. -- Theodore Sturgeon

    And as I get older, it seems that this observation holds true more every day, in everything.

    --
    BMO

    • I think the word "crap" is a little harsh. "It was acknowledged from the outset that some of the data might not hold up, because papers were deliberately selected that described something completely new, such as fresh approaches to targeting cancers or alternative clinical uses for existing therapeutics." These might have been "landmark" papers (whatever that means), but that doesn't mean that the conclusions will hold up in every model or in every application. A finding in one type of cancer cell (or inb
      • by bmo ( 77928 )

        > If scientists are afraid to publish risky results that have never been observed before, then scientific progress will slow down.

        That's not what I'm advocating.

        I'm just making the observation that Sturgeon's Law holds true here too. The overwhelming majority of stuff everywhere is mediocre at best. As for scientific studies, I think the problem is a result of publish-or-perish *and* a bias toward positive results, as opposed to negative results (we did this and it didn't work). As if coming up with

        • First, I'd like to clarify what I meant to say when I said risky. I think "unprecedented" would have been a better word. Peer review does a pretty good job (depending on the journal) of making sure a paper is internally consistent and, as long as the data isn't faked, valid enough to base future hypotheses on. That being said, many papers will overstate their findings and make conclusions in their discussion section (where it is perfectly fine to put this stuff) that aren't entirely supported by the
  • by Anonymous Coward

    Multi-variable medical studies need something like this as well. They also need to have 'real world' results to see if their study findings scale to the millions of people in the general population.

    College students should be the ones doing this type of stuff. Universities have a budget for this; it will teach the kids what is current in their field and get them exposure to the test equipment and process.

    Also, isn't this what peer review is supposed to be, prior to getting published?

  • A lot of published work involves a very large number of experiments, sometimes done on very expensive instrumentation over a fairly long span of time. If the costs for reproducing the results are not scaled to the complexity of the work, this new lab won't be able to keep their lights on for long...
  • In all computer-related fields, that's pretty easy: give the code. It's often a pain in the ass to reproduce the results (and I can only speak for my field), but as soon as you get the code, you see what the tricky part is and what's left to improve.
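To make the "just give the code" point concrete, here is a minimal sketch (purely illustrative; the toy Monte Carlo "experiment", the seed, and the output fields are all made up for this example): a seeded, self-contained script that also records its environment, so an independent group can simply re-run it and diff the published numbers.

```python
import json
import platform
import random

def run_experiment(seed: int = 42, n: int = 10_000) -> dict:
    """Toy Monte Carlo 'result': estimate pi from random points in the unit square."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(n)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return {"seed": seed, "n": n, "pi_estimate": 4.0 * inside / n}

if __name__ == "__main__":
    result = run_experiment()
    # Record the environment next to the numbers so a second lab can diff
    # its own re-run against the published output.
    result["python_version"] = platform.python_version()
    print(json.dumps(result, indent=2))
```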

  • If there's confirmation, they get to republish their results. If there isn't, they get to republish in the JIR [jir.com]

  • by mutube ( 981006 ) on Wednesday August 15, 2012 @02:43PM (#40999781) Homepage

    The costs involved in performing research would preclude this working in most fields. However, where there would be considerable value in this sort of 'out of house' service is in performing re-analysis of the raw data behind the publication. Stats is hard and unfortunately it's all too easy to make a bit of a hash out of it. The current peer review process doesn't always address this adequately, either because the reviewers aren't necessarily any better at statistics themselves or because the data as presented has been stepped through processing that may add unexpected bias. Having a career statistician run a leery eye over the analysis in the original Wakefield paper [doi.org] certainly wouldn't have hurt.
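As a rough illustration of what that kind of 'out of house' re-analysis of raw data might look like, here is a minimal sketch; the per-subject numbers below are invented placeholders, and the two SciPy tests (Student's t and Mann-Whitney U) simply stand in for whatever analysis the original paper actually reported.

```python
from scipy import stats

# Invented placeholder measurements standing in for a paper's released raw data.
control   = [4.1, 3.8, 5.0, 4.6, 4.2, 3.9, 4.4, 4.8]
treatment = [5.2, 4.9, 5.8, 5.1, 4.7, 5.5, 5.3, 5.0]

# Parametric comparison, roughly what the original analysis might have reported...
t_stat, p_t = stats.ttest_ind(treatment, control)
# ...plus a non-parametric check that does not lean on normality assumptions.
u_stat, p_u = stats.mannwhitneyu(treatment, control, alternative="two-sided")

print(f"t-test:         t = {t_stat:.2f}, p = {p_t:.4f}")
print(f"Mann-Whitney U: U = {u_stat:.1f}, p = {p_u:.4f}")
```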

    • Wakefield's study wouldn't have been fixed with independent statistical analysis because the results were faked. I do agree that many scientists could use some help with statistics and it would probably be a good idea if certain journals had a statistician on staff that could re-analyze raw data as a part of the review process.
      • by mutube ( 981006 )

        True, the results were faked in some cases, but putting those aside (and going completely from memory) the groups were still poorly matched and the conclusions bore little relation to the analysis. That's more what I was getting at: do the data/statistics support the claim of the paper? I don't think they did.

        For catching intentional faking I've always quite liked Benford's Law [wikipedia.org] (according to a very short section in the Wikipedia article, it's been successfully tested against scientific fraud). At the very le
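A rough sketch of that Benford's-law check, assuming the `values` list stands in for numbers lifted from a paper's tables (a poor fit to the expected log10(1 + 1/d) leading-digit distribution is only a flag for closer scrutiny, not proof of fraud):

```python
import math
from collections import Counter

from scipy.stats import chisquare

def leading_digit(x: float) -> int:
    # Scientific notation puts the leading digit first, e.g. 382.1 -> '3.821000e+02'.
    return int(f"{abs(x):e}"[0])

# Made-up stand-in for figures pulled out of a paper's tables.
values = [382.1, 17.4, 1.92, 264.0, 41.7, 1204.0, 93.2, 5.6, 712.3, 28.9,
          3.14, 160.2, 19.8, 2.7, 48.1, 1.05, 330.0, 14.6, 8.2, 101.5]

counts = Counter(leading_digit(v) for v in values)
observed = [counts.get(d, 0) for d in range(1, 10)]
# Benford's law: P(leading digit = d) = log10(1 + 1/d).
expected = [len(values) * math.log10(1 + 1 / d) for d in range(1, 10)]

chi2, p = chisquare(observed, f_exp=expected)
print("observed counts by leading digit 1-9:", observed)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```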

  • There have been many landmark publications in cancer research, which usually involve at the very least extensive animal studies. Many involve human subjects. The cost to test drugs or theories on humans is extensive. Most scientists don't have the funds to redo these experiments, and wouldn't want to either -- the grant money they receive would go towards building on previous results. No funding agency would give money to re-verify a study already published.

    At the very least, the authors could make ALL dat

    • The cost is irrelevant: if anyone is to build on your results, they should first look for confirmation.

      Why?
      - Publications suffer a huge selection bias -- it is nearly impossible to get a negative result published (even if it refutes prior publications!).
      - Most statistical work (a) uses an absurdly generous 95% confidence interval (see the simulation sketched after this comment) and (b) hasn't even been vetted by a competent statistician.

      Requiring data and code to be published would go a long way towards improving the situation, since there's no point in r
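A quick null simulation of the '95% confidence' point above (illustrative only; the group sizes, seed, and number of experiments are arbitrary): with no real effect at all, roughly 5% of comparisons still come out 'significant' at p < 0.05, which is exactly the sort of result a positive-results-only literature preferentially keeps.

```python
import random

from scipy import stats

random.seed(0)  # arbitrary seed so the illustration is itself reproducible
n_experiments, n_per_group = 2000, 30
false_positives = 0

for _ in range(n_experiments):
    # Both groups come from the SAME distribution, so any 'significant'
    # difference is pure noise.
    a = [random.gauss(0, 1) for _ in range(n_per_group)]
    b = [random.gauss(0, 1) for _ in range(n_per_group)]
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives}/{n_experiments} null experiments hit p < 0.05 "
      f"({100 * false_positives / n_experiments:.1f}%)")
```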

  • Isn't the purpose of publishing your research to open it up to scrutiny and, yes, validation of your results? In college I read a few dozen papers that merely validated the results of another paper.
  • So all we need to do now is wait for someone to build another LHC, find the Higgs themselves, and confirm the CERN result. Then they get to publish!
  • I like the idea, but how do you fund this effort? I don't see the article making any mention of this.

    We all spend our time writing grants now to support our own research and have little enough time to do it. Now we're expected to do someone else's research? I suppose it's a bit like reviewing articles: if you want to publish, you should review. However, this takes much more time, effort and money to do.

"If it ain't broke, don't fix it." - Bert Lantz

Working...