
Majority of Landmark Cancer Studies Cannot Be Replicated

Unknown Lamer posted more than 2 years ago | from the scientists-at-work dept.

Medicine

New submitter Beeftopia writes with perhaps distressing news about cancer research. From the article: "During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 'landmark' publications — papers in top journals, from reputable labs — for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development. Result: 47 of the 53 could not be replicated. He described his findings in a commentary piece published on Wednesday in the journal Nature (paywalled). ... But they and others fear the phenomenon is the product of a skewed system of incentives that has academics cutting corners to further their careers." As is the fashion at Nature, you can only read the actual article if you are a subscriber or want to fork over $32. Anyone with access care to provide more insight? Update: 04/06 14:00 GMT by U L: Naffer pointed us toward informative commentary at In the Pipeline. Thanks!


233 comments

Grants-whores and publicists in academia?!?!? (5, Insightful)

crazyjj (2598719) | more than 2 years ago | (#39596543)

But they and others fear the phenomenon is the product of a skewed system of incentives that has academics cutting corners to further their careers.

As I've said before [slashdot.org], back when I was in academia, there were always grant-whores around, and academics more interested in their own interests than in science. Too many people, however, have come to treat science with a reverence more appropriate to a religion than to a system of knowledge and discovery. And so when I point out that there are scientists out there willing to cook the numbers, exaggerate, play to politics and/or public opinion, etc., I inevitably run into those who say "Science wouldn't allow that" (like my friend who's still in the field). But science is only as good as the people practicing it. And, in any field, there are always those willing to put their own personal interests ahead of the greater good.

I just hope this doesn't cast a shadow over those out there who *are* doing good work and *are* trying to do honest work. Sadly, some of the best researchers out there are the ones who make the least noise, get the least attention, get the fewest grants, and are least likely to get tenure.

Re:Grants-whores and publicists in academia?!?!? (-1)

Anonymous Coward | more than 2 years ago | (#39596589)

You just proved the majority of frosty piss studies can be replicated.

Re:Grants-whores and publicists in academia?!?!? (5, Insightful)

Anonymous Coward | more than 2 years ago | (#39596625)

Those of us in other fields of science tend to hold up biomed as an example of how not to run a science. They tend to have a shoddy idea of experiment design and statistics. Same way when I was a student it was always the premeds who did all the cheating.

Re:Grants-whores and publicists in academia?!?!? (3, Funny)

crazyjj (2598719) | more than 2 years ago | (#39596683)

premeds

I'll thank you not to use that kind of language on a family forum, sir!

Re:Grants-whores and publicists in academia?!?!? (4, Interesting)

Black Parrot (19622) | more than 2 years ago | (#39596763)

Same way when I was a student it was always the premeds who did all the cheating.

During my orientation at a university, the Dean of my college said that's exactly what they found among the people who get busted.

Re:Grants-whores and publicists in academia?!?!? (5, Interesting)

hey! (33014) | more than 2 years ago | (#39596709)

I inevitably run into those who say "Science wouldn't allow that" (like my friend who's still in the field).

Well, science is rather like democracy in that regard. It doesn't prevent mistakes, but what makes it better than other things people have tried is that it has a mechanism for correcting them.

Re:Grants-whores and publicists in academia?!?!? (2)

tkrotchko (124118) | more than 2 years ago | (#39596853)

Like democracy's, the mechanisms take a long time, often decades or centuries.

Re:Grants-whores and publicists in academia?!?!? (-1)

Anonymous Coward | more than 2 years ago | (#39596943)

The "problem" is with incentives. People work to them even if they are not conscious of it. Government needs to butt out of science because the basis of government science is wishful thinking and poorly planned incentives. While private industry may not be perfect, it has a clear incentive to provide real and valuable products, treatments and services in the long run where government science is about researchers securing grants - not customers.

Re:Grants-whores and publicists in academia?!?!? (3, Insightful)

Anonymous Coward | more than 2 years ago | (#39597319)

Private industry has a strong incentive to downplay the risks of its products. Big tobacco and big oil are two examples.

Re:Grants-whores and publicists in academia?!?!? (3, Insightful)

crazyjj (2598719) | more than 2 years ago | (#39597495)

Yes, but unless that system is made as efficient as possible, it can take a very long time to correct itself. Eugenics [wikipedia.org] is the classic example. Sure, it was eventually shown to be so much junk science, but not before it contributed to millions being killed/lobotomized/institutionalized. Even though there were skeptics of it almost from the beginning, the biology and medical fields did a piss-poor job at self-correcting, and people suffered for decades after this should have been laughed away as humbug.

Simply saying "Well, it will eventually sort itself out" is not an excuse to avoid reform.

Re:Grants-whores and publicists in academia?!?!? (1)

Anonymous Coward | more than 2 years ago | (#39597727)

How is eugenics junk science? It may be unethical, but how is it unsound from a scientific point of view? We breed dogs and other animals to have the characteristics we want. So why wouldn't it work for humans?

There are some dog breeds which have lots of problems, but the breeds for "work" tend to be better. So it's more a matter of picking decent criteria.

Re:Grants-whores and publicists in academia?!?!? (1)

Anonymous Coward | more than 2 years ago | (#39597709)

No, it's not like a democracy.

Papers are published based on peer review from a select committee. Published works are judged by the "impact factor" of the venue. Willfully ignoring high-impact journals and conferences is career suicide. Given the high-risk/high-reward inherent to big-ticket research, it should come as no surprise that people are willing to fudge results.

Likewise, independent work outside the realm of an institution, especially in fields such as cancer research, is cost-prohibitive. By the time you reach a level of accessibility remotely resembling a democracy, you're at the consumer level, which is governed more by market forces than by the discovery process of the scientific method. As always, the truth will out.

Oh, and democracies aren't self-correcting since you can vote to abolish the democracy. Fortunately, you can't vote to abolish facts.

Re:Grants-whores and publicists in academia?!?!? (2, Informative)

Anonymous Coward | more than 2 years ago | (#39596719)

The incentive to fake results is always present in academia, as is the incentive to believe faked results. I recommend reading "Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World" by Eugenie Samuel Reich, which details the case of Jan Hendrik Schön. Reading this book, it isn't hard to see how so many people could fall into the trap of chasing the numbers you think you should see, as well as the academic prestige that comes with the cooked numbers.

Amazon link to the book [amazon.com]

Re:Grants-whores and publicists in academia?!?!? (0)

Anonymous Coward | more than 2 years ago | (#39596725)

Your friend is correct: science does not allow that. Genuine science would refute such things.

Unfortunately, true and honest scientists are not the only people in the world, and it is far easier to construct a conniving lie than it is to show the truth.

Like the tobacco-causes-cancer denialists, or the creationists, or the pro-industry groups. They have no shame. They will tell you true lies all day, while castigating others for minor flaws.

Re:Grants-whores and publicists in academia?!?!? (2)

ArhcAngel (247594) | more than 2 years ago | (#39596903)

Your friend is correct: science does not allow that. Genuine science would refute such things.

Unfortunately, true and honest scientists are not the only people in the world, and it is far easier to construct a conniving lie than it is to show the truth.

Like the tobacco-causes-cancer denialists, or the creationists, or the pro-industry groups. They have no shame. They will tell you true lies all day, while castigating others for minor flaws.

While few on /. care to make the distinction, I believe you are referring to IDists; creationists and IDists are not synonymous. This matters because there are creationists who believe fully in science, they just believe there is a creator. I know to most that sounds ludicrous, but no more so than people believing Kim Kardashian has talent.

Re:Grants-whores and publicists in academia?!?!? (1)

tbannist (230135) | more than 2 years ago | (#39596987)

The people who believe in creation, evolution and geology are not creationists. They are simply religious. You can believe there is a creator; however, a principal part of creationism is believing that humans and animals were created in their current form. Heck, even the deists believe the universe had a creator, and they're effectively atheists (they believe God is dead or gone). Practically by definition, if you believe in evolution, you're not a creationist.

Re:Grants-whores and publicists in academia?!?!? (2)

brokeninside (34168) | more than 2 years ago | (#39597377)

Words change over time.

A hot issue in both science and philosophy, one that dates back to when science and philosophy were considered to be the same thing, is whether the universe is created or self-existent. A related hot issue is whether the universe is eternal or temporal.

Generally speaking, creationists are those who believe that the world is created. Most of these also believe the world is temporal. Some of these, who should strictly be referred to as young earth creationists, hold that the world is only 5,000 (or 6,000 or 7,000 or 10,000) years old.

In the press, and in everyday conversation, most young earth creationists are simply referred to as 'creationists', with no distinction made between them and those who hold that the universe was created many millions of years ago, or those who think that the world was created eternally (i.e. not in time, because it has no temporal beginning).

Those creationists who aren't young earth creationists may very well believe in evolution.

Similar problems plague the English language in other contexts, e.g. the question of who is a true liberal by US standards, or whether Mormons can properly be considered Christians. Generally speaking, most of us pick whatever grouping best fits our preconceived notions or ideological agenda and stick with that one. Then we insist that anyone using the word with any other nuance is using it incorrectly.

Re:Grants-whores and publicists in academia?!?!? (1)

AvitarX (172628) | more than 2 years ago | (#39597283)

Creationism doesn't mean one thinks stuff was created by a deity; it means one believes literally in the seven-day creation story of Genesis.

Re:Grants-whores and publicists in academia?!?!? (1)

tgibbs (83782) | more than 2 years ago | (#39597631)

No, the Biblical literalists are generally referred to as "young-earth" creationists. But it is still considered creationism if you believe that species were created in something very close to their current forms, as opposed to evolving from a common ancestor.

Re:Grants-whores and publicists in academia?!?!? (4, Informative)

pepty (1976012) | more than 2 years ago | (#39597151)

Can't blame all of this on grants-whores. There are a lot of reasons for published results to "revert to the mean": honest mistakes like cell lines that drift from their normal phenotype, lack of budget necessary to run additional or larger experiments, and publication bias at the journals. One of the biggest problems is brought up in TFA: the observed effect was only seen under extremely specific conditions. Often that means that the company had to adapt the experiment to a model (say a different animal or cell line) that was relevant to their own work, and they couldn't reproduce the result in that model. In which case the result is still true, but much less likely to be useful for identifying a drug target.

On the other hand, this isn't really news, or limited to oncology. Bayer published a report last year covering its analysis of targets in cardiovascular disease, women's health, and oncology, and overall could only verify ~25% of targets in-house.
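To see how "reverting to the mean" falls out of publication bias alone, here's a toy simulation (my own sketch with made-up effect sizes and noise levels, not data from TFA or the Bayer report): journals "publish" only the most striking observed effects, and replications of those same studies shrink on average even though nobody cheated.

```python
# Toy simulation of regression to the mean under publication bias.
# All numbers are illustrative assumptions, not real oncology data.
import numpy as np

rng = np.random.default_rng(0)
n_studies = 100_000
true_effect = rng.normal(0.0, 0.5, n_studies)  # most true effects are small
noise_sd = 1.0                                  # per-study measurement noise
observed = true_effect + rng.normal(0.0, noise_sd, n_studies)

# Journals preferentially publish the most striking 5% of observed effects.
published = observed > np.quantile(observed, 0.95)

# An independent replication of each study draws fresh noise.
replication = true_effect + rng.normal(0.0, noise_sd, n_studies)

print(f"mean published effect: {observed[published].mean():.2f}")    # ~2.3
print(f"mean upon replication: {replication[published].mean():.2f}") # ~0.5
```

The replication average collapses because selecting the biggest observed effects selects for lucky noise as much as for real biology.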

Re:Grants-whores and publicists in academia?!?!? (5, Insightful)

Black Parrot (19622) | more than 2 years ago | (#39596749)

But they and others fear the phenomenon is the product of a skewed system of incentives that has academics cutting corners to further their careers.

A few years back, one of the USA's leading medical journals changed its rules to allow doctors who receive money from pharmaceutical companies to publish reviews of the drugs sold by those same companies. We may have a problem that runs deeper than "cutting corners".

Re:Grants-whores and publicists in academia?!?!? (1)

muon-catalyzed (2483394) | more than 2 years ago | (#39596897)

Still, a pharmaceutical company would have much deeper problems with a false drug than a scholar would with a false paper.

Re:Grants-whores and publicists in academia?!?!? (3, Interesting)

tbannist (230135) | more than 2 years ago | (#39597019)

Not exactly. A drug that works no better than the placebo could be used for years or decades before anyone figures out that it doesn't do anything but create side effects. As long as there is no evidence of intentional malfeasance and there isn't a bunch of corpses linked to the drug, it would probably have little impact on profits even if it was exposed as useless.

Re:Grants-whores and publicists in academia?!?!? (0)

Anonymous Coward | more than 2 years ago | (#39597291)

They could probably run afoul of fraud laws. Even if something doesn't cause harm, if it's being sold under false pretenses, there's going to be a problem. The people who get hurt by the fraudulent activity might not get much back, but some ambulance-chasing lawyer is going to take the company to the cleaners.

Re:Grants-whores and publicists in academia?!?!? (3, Insightful)

ATMAvatar (648864) | more than 2 years ago | (#39596753)

It is not that this cannot happen in science - more that bad science will always eventually be revealed. TFA only serves to reinforce this idea. Though it is a tragedy that these particular problem studies were so lacking in scientific rigor, it is reassuring that the peer review system ultimately brought them to light, even if it took some time to do so.

Re:Grants-whores and publicists in academia?!?!? (3, Insightful)

Nadaka (224565) | more than 2 years ago | (#39596917)

Failure to replicate an experiment is not a certain indication that the original experiment was flawed or manipulated. But it does wink suggestively in that direction.

Re:Grants-whores and publicists in academia?!?!? (5, Insightful)

Missing.Matter (1845576) | more than 2 years ago | (#39597009)

I've actually been on a ranting spree the past couple of days due to terrible journal manuscript submissions I've had to review recently. I can't tell you the number of times I've read a submission that had already been published outright in another periodical. Many foreign submitters don't understand the concept of plagiarism, let alone self-plagiarism. These "scientists" are ranked and compensated by the number of publications they produce, so they publish one piece of research and try to pass it off in as many periodicals as possible, essentially representing old research as brand new. This compensation system has obscured the true purpose of publication: what was once a means to disseminate your work to the general population is now a means to get you and your lab more money.

I take my responsibility as a reviewer very seriously; the job of a scientist is not only to create new research, but to critique and evaluate the research of others. But many academics who have been in the field longer than I have treat reviewing as a chore, and focus only on the interesting half of their job: the research. I don't know how many of these terrible publications slip through the cracks due to lazy reviewers, but I'm sure it's more than I'm comfortable admitting to myself.

Re:Grants-whores and publicists in academia?!?!? (5, Interesting)

Anonymous Coward | more than 2 years ago | (#39597467)

These "scientists" are ranked and compensated by the number of publications they produce, so they publish one piece of research and try to pass it off in as many periodicals as possible, essentially representing old research as brand new. This compensation system has obscured the true purpose of publication: what was once a means to disseminate your work to the general population is now a means to get you and your lab more money.

The incentives do make quantity more important than quality, but as a grad student who does understand the concept of self-plagiarism, I gotta tell you that every single academic (not other students, actual professors) I've encountered will try to walk that line. It's not that we republish the same paper; it's that when we get results we're encouraged to think, "how many papers can we divide this up into?" So you publish one part in a journal, then an incremental improvement that you already had in a second journal (citing the paper in the first one, though the two are still fairly similar work). Depending on the nature of those incremental improvements, I can see someone trying to publish a paper that crosses the line.

We need to change the culture to have journals publish papers that are just verification of data from other papers. That's the ideal work for grad students that are just starting to get in the field anyway, and it helps peer review immensely. Once you have a degree and a job doing research, you can start working on publishing new work, but then you'll be more worried about publishing half-baked ideas because you know there's an army of grad students just waiting to see if they can replicate your results.

Re:Grants-whores and publicists in academia?!?!? (4, Insightful)

pavon (30274) | more than 2 years ago | (#39597545)

These "scientists" are ranked and compensated by the number of publications they produce, so they publish one piece of research and try to pass it off in as many periodicals as possible, essentially representing old research as brand new.

This problem has also created a backlash that affects genuine researchers. My adviser had been working on some new research and was invited to present at a conference. So he wrote up his work in progress (limited to 4 pages) and presented it there. When the work was completed he tried to submit it to a journal, and one of the reviewers rejected it because it was "just a copy of this prior work". This is despite the fact that the 12-page journal paper went into far more detail, provided proofs for what were conjectures at the time of the conference, and corrected significant errors in that preliminary work. So now the only scientific record of this work is an incomplete, incorrect account.

Re:Grants-whores and publicists in academia?!?!? (4, Interesting)

crmarvin42 (652893) | more than 2 years ago | (#39597195)

Your post makes me think of two recent instances in my field (I am a non-ruminant nutritionist).

The 1st had to do with a professor down in Texas who is pushing the feeding of supplemental L-arginine to sows while also serving as a "consultant" for an arginine manufacturer. He's been pushing it based on frequently contradictory reports of improved litter sizes and reduced piglet mortalities. However, he's never had sufficient statistical power: you need at least 100 sows per treatment because of the high standard deviations involved, but he frequently uses fewer than 10 sows per treatment. At the Midwest American Society of Animal Science meeting in Des Moines, IA this year, there were two presentations from industry where they EACH used over 100 sows per treatment and found no positive effects of feeding supplemental L-arginine. They never mentioned the Texas professor directly, but you could tell that both studies were intended as a rebuttal of what they considered bad, self-serving science.
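To put rough numbers on that power argument, here is a quick sketch of a standard two-sample power calculation (my own illustration; the standard deviation and effect size below are assumed round figures, not the Texas data):

```python
# Hypothetical power calculation for a litter-size trial (illustrative only).
# Assume we want to detect a gain of 1 piglet/litter against a between-sow
# SD of 2.5 piglets: standardized effect size d = 1.0 / 2.5 = 0.4.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sows needed per treatment for 80% power at two-sided alpha = 0.05:
n_per_group = analysis.solve_power(effect_size=0.4, alpha=0.05, power=0.8)
print(f"sows needed per treatment: {n_per_group:.0f}")  # ~99

# Power of the same design with only 10 sows per treatment:
power_at_10 = analysis.solve_power(effect_size=0.4, nobs1=10, alpha=0.05)
print(f"power with 10 sows/treatment: {power_at_10:.2f}")  # ~0.13
```

Under those assumptions, a 10-sow trial detects a real effect barely one time in seven, which is exactly the regime where chance findings dominate a literature.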

The 2nd has to do with an article I read critiquing the use of what is called "Nutritional Epidemiology", which can be found here [garytaubes.com]. It is an incredibly long but very insightful critique of a field that is given far too much credence simply because of where the scientists work, and of how free they are with chicken-little-esque proclamations about how meat is going to increase your chances of dying by 30%! (Everyone has exactly a 100% chance of dying.)

Re:Grants-whores and publicists in academia?!?!? (0)

Anonymous Coward | more than 2 years ago | (#39597209)

I inevitably run into those who say "Science wouldn't allow that" (like my friend who's still in the field).

It sounds like your friend failed humanities, arts, history, current events, and even philosophy, and then went into the sciences. Sadly, your point remains. Science today is rife with corruption. To believe otherwise is to live in a world without humans.

Re:Grants-whores and publicists in academia?!?!? (1)

killmenow (184444) | more than 2 years ago | (#39597287)

Science is a slow play. For fifty years we may believe faked research, but just as this article points out, over time science sorts itself out.

The skeptic is always wise to question scientific studies and look for flaws in design, implementation, and interpretation of results.

There are always bad actors, and they may be rewarded in the near term while good scientists are not; but science is inherently self-correcting over time.

Re:Grants-whores and publicists in academia?!?!? (0)

Anonymous Coward | more than 2 years ago | (#39597525)

The skeptic is always wise to question scientific studies and look for flaws in design, implementation, and interpretation of results.

Unless you question scientific studies and look for flaws in design, implementation, and interpretation of results related to Global Warming...I mean Climate Change.

Re:Grants-whores and publicists in academia?!?!? (0)

Anonymous Coward | more than 2 years ago | (#39597691)

Science is a slow play. For fifty years we may believe faked research but just like this article points out: over time science sorts itself out.

And in the mean time, people die. You seem rather apathetic.

Re:Grants-whores and publicists in academia?!?!? (1)

fahrbot-bot (874524) | more than 2 years ago | (#39597687)

...when I point out that there are scientists out there willing to cook the numbers, exaggerate, play to politics and/or public opinion, etc. ...

Of course, it could be simple error or sloppiness on the part of the researcher(s) in the original experiment (e.g., they didn't document something), or an issue with the people trying to replicate the experiment (i.e., they're not that good). See Hanlon's Razor [wikipedia.org]: "Never ascribe to malice that which is adequately explained by incompetence." That said, 47 of 53 non-reproducible results seems suspicious.

Re:Grants-whores and publicists in academia?!?!? (2)

tibit (1762298) | more than 2 years ago | (#39597703)

Yeah -- just look at this little gem:

Part way through his project to reproduce promising studies, Begley met for breakfast at a cancer conference with the lead scientist of one of the problematic studies.
"We went through the paper line by line, figure by figure," said Begley. "I explained that we re-did their experiment 50 times and never got their result. He said they'd done it six times and got this result once, but put it in the paper because it made the best story. It's very disillusioning."

That's exactly what drove Feynman up the wall, what made him speak so loudly against pseudoscience. Sigh.
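For a back-of-the-envelope sense of how damning that exchange is (my own arithmetic, not from the article):

```python
# Quick sanity check on the anecdote above (my arithmetic, not TFA's).
# If the effect were real and appeared even at the original lab's observed
# rate of 1 in 6 runs, how likely is Amgen's streak of 0 hits in 50 runs?
p_success = 1 / 6
p_zero_in_50 = (1 - p_success) ** 50
print(f"P(0 hits in 50 runs | p = 1/6) = {p_zero_in_50:.1e}")  # ~1.1e-04

# Conversely, if the "result" is pure noise at a 5% false-positive rate,
# running the experiment six times gives a decent shot at one showable hit:
p_one_or_more = 1 - (1 - 0.05) ** 6
print(f"P(>=1 false positive in 6 runs) = {p_one_or_more:.2f}")  # ~0.26
```

In other words, the single positive run was almost certainly the noise, not the signal.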

HTFA (-1, Flamebait)

krakass (935403) | more than 2 years ago | (#39596601)

Efforts over the past decade to characterize the genetic alterations in human cancers have led to a better understanding of molecular drivers of this complex set of diseases. Although we in the cancer field hoped that this would lead to more effective drugs, historically, our ability to translate cancer research to clinical success has been remarkably low [1]. Sadly, clinical trials in oncology have the highest failure rate compared with other therapeutic areas. Given the high unmet need in oncology, it is understandable that barriers to clinical development may be lower than for other disease areas, and a larger number of drugs with suboptimal preclinical validation will enter oncology trials. However, this low success rate is not sustainable or acceptable, and investigators must reassess their approach to translating discovery research into greater clinical success and impact.

Many factors are responsible for the high failure rate, notwithstanding the inherently difficult nature of this disease. Certainly, the limitations of preclinical tools such as inadequate cancer-cell-line and mouse models [2] make it difficult for even the best scientists working in optimal conditions to make a discovery that will ultimately have an impact in the clinic. Issues related to clinical-trial design — such as uncontrolled phase II studies, a reliance on standard criteria for evaluating tumour response and the challenges of selecting patients prospectively — also play a significant part in the dismal success rate [3].

[Figure: Many landmark findings in preclinical oncology research are not reproducible, in part because of inadequate cell lines and animal models. Credit: S. GSCHMEISSNER/SPL]

Unquestionably, a significant contributor to failure in oncology trials is the quality of published preclinical data. Drug development relies heavily on the literature, especially with regards to new targets and biology. Moreover, clinical endpoints in cancer are defined mainly in terms of patient survival, rather than by the intermediate endpoints seen in other disciplines (for example, cholesterol levels for statins). Thus, it takes many years before the clinical applicability of initial preclinical observations is known. The results of preclinical studies must therefore be very robust to withstand the rigours and challenges of clinical trials, stemming from the heterogeneity of both tumours and patients.

Confirming research findings
The scientific community assumes that the claims in a preclinical study can be taken at face value — that although there might be some errors in detail, the main message of the paper can be relied on and the data will, for the most part, stand the test of time. Unfortunately, this is not always the case. Although the issue of irreproducible data has been discussed between scientists for decades, it has recently received greater attention (see go.nature.com/q7i2up) as the costs of drug development have increased along with the number of late-stage clinical-trial failures and the demand for more effective therapies.

Over the past decade, before pursuing a particular line of research, scientists (including C.G.B.) in the haematology and oncology department at the biotechnology firm Amgen in Thousand Oaks, California, tried to confirm published findings related to that work. Fifty-three papers were deemed 'landmark' studies (see 'Reproducibility of research findings'). It was acknowledged from the outset that some of the data might not hold up, because papers were deliberately selected that described something completely new, such as fresh approaches to targeting cancers or alternative clinical uses for existing therapeutics. Nevertheless, scientific findings were confirmed in only 6 (11%) cases. Even knowing the limitations of preclinical research, this was a shocking result.

[Table 1: Reproducibility of research findings. Preclinical research generates many secondary publications, even when results cannot be reproduced.]

Of course, the validation attempts may have failed because of technical differences or difficulties, despite efforts to ensure that this was not the case. Additional models were also used in the validation, because to drive a drug-development programme it is essential that findings are sufficiently robust and applicable beyond the one narrow experimental model that may have been enough for publication. To address these concerns, when findings could not be reproduced, an attempt was made to contact the original authors, discuss the discrepant findings, exchange reagents and repeat experiments under the authors' direction, occasionally even in the laboratory of the original investigator. These investigators were all competent, well-meaning scientists who truly wanted to make advances in cancer research.

In studies for which findings could be reproduced, authors had paid close attention to controls, reagents, investigator bias and describing the complete data set. For results that could not be reproduced, however, data were not routinely analysed by investigators blinded to the experimental versus control groups. Investigators frequently presented the results of one experiment, such as a single Western-blot analysis. They sometimes said they presented specific experiments that supported their underlying hypothesis, but that were not reflective of the entire data set. There are no guidelines that require all data sets to be reported in a paper; often, original data are removed during the peer review and publication process.

Unfortunately, Amgen's findings are consistent with those of others in industry. A team at Bayer HealthCare in Germany last year reported [4] that only about 25% of published preclinical studies could be validated to the point at which projects could continue. Notably, published cancer research represented 70% of the studies analysed in that report, some of which might overlap with the 53 papers examined at Amgen.

Some non-reproducible preclinical papers had spawned an entire field, with hundreds of secondary publications that expanded on elements of the original observation, but did not actually seek to confirm or falsify its fundamental basis. More troubling, some of the research has triggered a series of clinical studies — suggesting that many patients had subjected themselves to a trial of a regimen or agent that probably wouldn't work.

These results, although disturbing, do not mean that the entire system is flawed. There are many examples of outstanding research that has been rapidly and reliably translated into clinical benefit. In 2011, several new cancer drugs were approved, built on robust preclinical data. However, the inability of industry and clinical trials to validate results from the majority of publications on potential therapeutic targets suggests a general, systemic problem. On speaking with many investigators in academia and industry, we found widespread recognition of this issue.

Improving the preclinical environment
How can the robustness of published preclinical cancer research be increased? Clearly there are fundamental problems in both academia and industry in the way such research is conducted and reported. Addressing these systemic issues will require tremendous commitment and a desire to change the prevalent culture. Perhaps the most crucial element for change is to acknowledge that the bar for reproducibility in performing and presenting preclinical studies must be raised.

An enduring challenge in cancer-drug development lies in the erroneous use and misinterpretation of preclinical data from cell lines and animal models. The limitations of preclinical cancer models have been widely reviewed and are largely acknowledged by the field. They include the use of small numbers of poorly characterized tumour cell lines that inadequately recapitulate human disease, an inability to capture the human tumour environment, a poor appreciation of pharmacokinetics and pharmacodynamics, and the use of problematic endpoints and testing strategies. In addition, preclinical testing rarely includes predictive biomarkers that, when advanced to clinical trials, will help to distinguish those patients who are likely to benefit from a drug.

Wide recognition of the limitations in preclinical cancer studies means that business as usual is no longer an option. Cancer researchers must be more rigorous in their approach to preclinical studies. Given the inherent difficulties of mimicking the human micro-environment in preclinical research, reviewers and editors should demand greater thoroughness.

As with clinical studies, preclinical investigators should be blinded to the control and treatment arms, and use only rigorously validated reagents. All experiments should include and show appropriate positive and negative controls. Critical experiments should be repeated, preferably by different investigators in the same lab, and the entire data set must be represented in the final publication. For example, showing data from tumour models in which a drug is inactive, and may not completely fit an original hypothesis, is just as important as showing models in which the hypothesis was confirmed.

Studies should not be published using a single cell line or model, but should include a number of well-characterized cancer cell lines that are representative of the intended patient population. Cancer researchers must commit to making the difficult, time-consuming and costly transition towards new research tools, as well as adopting more robust, predictive tumour models and improved validation strategies. Similarly, efforts to identify patient-selection biomarkers should be mandatory at the outset of drug development.

Ultimately, however, the responsibility for design, analysis and presentation of data rests with investigators, the laboratory and the host institution. All are accountable for poor experimental design, a lack of robust supportive data or selective data presentation. The scientific process demands the highest standards of quality, ethics and rigour.

Building a stronger system
What reasons underlie the publication of erroneous, selective or irreproducible data? The academic system and peer-review process tolerates and perhaps even inadvertently encourages such conduct [5]. To obtain funding, a job, promotion or tenure, researchers need a strong publication record, often including a first-authored high-impact publication. Journal editors, reviewers and grant-review committees often look for a scientific finding that is simple, clear and complete — a 'perfect' story. It is therefore tempting for investigators to submit selected data sets for publication, or even to massage data to fit the underlying hypothesis.

But there are no perfect stories in biology. In fact, gaps in stories can provide opportunities for further research — for example, a treatment that may work in only some cell lines may allow elucidation of markers of sensitivity or resistance. Journals and grant reviewers must allow for the presentation of imperfect stories, and recognize and reward reproducible results, so that scientists feel less pressure to tell an impossibly perfect story to advance their careers.

Although reviewers, editors and grant-committee members share some responsibility for flaws in the system, investigators must be accountable for the data they generate, analyse and submit. We in the field must remain focused on the purpose of cancer research: to improve the lives of patients. Success in our own careers should be a consequence of outstanding research that has an impact on patients.

The lack of rigour that currently exists around generation and analysis of preclinical data is reminiscent of the situation in clinical research about 50 years ago. The changes that have taken place in clinical-trials processes over that time indicate that changes in prevailing attitudes and philosophies can occur (see 'Improving the reliability of preclinical cancer studies').

[Box 1: Recommendations: Improving the reliability of preclinical cancer studies]

Improving preclinical cancer research to the point at which it is reproducible and translatable to clinical-trial success will be an extraordinarily difficult challenge. However, it is important to remember that patients are at the centre of all these efforts. If we in the field forget this, it is easy to lose our sense of focus, transparency and urgency. Cancer researchers are funded by community taxes and by the hard work and philanthropic donations of advocates. More importantly, patients rely on us to embrace innovation, make advances and deliver new therapies that will improve their lives. Although hundreds of thousands of research papers are published annually, too few clinical successes have been produced given the public investment of significant financial resources. We need a system that will facilitate a transparent discovery process that frequently and consistently leads to significant patient benefit.

Heterogeneity (0)

Anonymous Coward | more than 2 years ago | (#39597245)

heterogeneity of both tumours and patients.

There is an enormous amount of pressure from desperate patients who are out of treatment options, resulting in cases where physicians have been caught faking records to try to find their patient a spot in someone's clinical trial. (I remember one post on Slashdot or some other forum that related a military physician's experience at a VA hospital, where a high-ranking officer pretty much pulled him aside and told him point-blank something like "You will find a spot, doctor" for his family member.)

While the research and regulatory sides of medicine tend to be pretty strict regarding the rules, there is some suspicion that the clinical side bends the rules and diagnoses much more often than the handful of cases that have been publicized would suggest.

For detail and commentary... (4, Informative)

Naffer (720686) | more than 2 years ago | (#39596611)

See this discussion of the same paper on In the Pipeline, a blog devoted to organic chemistry and drug discovery. http://pipeline.corante.com/archives/2012/03/29/sloppy_science.php [corante.com]

Re:For detail and commentary... (3, Informative)

Hatta (162192) | more than 2 years ago | (#39596747)

Full text available here [davidrasnick.com] .

No Surprise Here (0, Troll)

Anonymous Coward | more than 2 years ago | (#39596623)

My first thought is "man, what a bummer". My second thought is "Hmmm, academics/scientists skewing results for the sake of their own careers. Global warming, anyone?"

Re:No Surprise Here (4, Funny)

smg5266 (2440940) | more than 2 years ago | (#39596713)

Most people pursue careers in atmospheric science for the dollars

Re:No Surprise Here (1)

Black Parrot (19622) | more than 2 years ago | (#39596779)

Most people pursue careers in atmospheric science for the dollars

Yeah, you can make way more than you could at one of those poor energy companies.

Re:No Surprise Here (2)

tbannist (230135) | more than 2 years ago | (#39597051)

Yeah, I'm going to get me a job with Big Green. I hear Greenpeace is paying new grads six-figure starting salaries*.

* Energy companies in north-eastern Alberta are actually offering six-figure salaries to new-grad electricians, among other trades.

Re:No Surprise Here (0)

Anonymous Coward | more than 2 years ago | (#39597549)

Most people pursue careers in science for their ego.

I suspect most scientists would gladly live in a grass hut and eat bugs as long as they got the Nobel Prize and were known as the smartest person in the world.

Ego, Hubris, Arrogance.

Re:No Surprise Here (5, Insightful)

next_ghost (1868792) | more than 2 years ago | (#39596769)

My second thought is "Hmmm, academics/scientists skewing results for the sake of their own careers. Global warming, anyone?"

Your second thought is completely off because every single time someone actually tried to replicate global warming research, they DID get the same results. Unlike in the case of those medical papers TFA is about.

Re:No Surprise Here (0)

Anonymous Coward | more than 2 years ago | (#39597043)

Your second thought is completely off because every single time someone actually tried to replicate global warming research, they DID get the same results. Unlike in the case of those medical papers TFA is about.

Like that second network of thermometers they deployed! Or the independent ice core samples and historical sea level indicators!

TFA (0)

Anonymous Coward | more than 2 years ago | (#39596637)

PDF : www.anonstorage.net/PStorage/510.483531a.pdf

Part of the process (1)

hackula (2596247) | more than 2 years ago | (#39596667)

This is just part of the scientific process. Stuff usually does not stick. It is most often falsified. As more things are falsified, we end up with a better overall understanding of the processes we are trying to understand. Although it may be a bit disturbing if scientists are being dishonest, other researchers have a very strong incentive to go back and fact check.

Re:Part of the process (1)

bsane (148894) | more than 2 years ago | (#39597087)

This is just part of the scientific process. Stuff usually does not stick. It is most often falsified. As more things are falsified, we end up with a better overall understanding of the processes we are trying to understand. Although it may be a bit disturbing if scientists are being dishonest, other researchers have a very strong incentive to go back and fact check.

Agreed: finding lies and dishonesty is part of the scientific process. Eventually the truth and facts will prevail; it just may take decades or lifetimes, which can be extremely frustrating to watch. Of course, that doesn't excuse fraud or knowing manipulation. Ideally, each of these cases would be investigated by the researchers' peers, and those guilty of malfeasance ostracized.

Re:Part of the process (0)

Anonymous Coward | more than 2 years ago | (#39597653)

I don't actually believe that the bulk of this is malfeasance; most of it is more likely self-delusion. I'm sure it doesn't help that there are monetary and tenure-related pressures, but I think they're only part of the story. As easy as it is to fool someone else, it's easier still to fool oneself. Everyone wants to be the scientist who discovers the mechanism that goes on to be the basis for a miraculous new cancer treatment, so convincing oneself of a possible effect in what is really statistical noise is easy. Once you have accomplished that simple trick, you are extremely motivated to convince others, and you are quite believable.

I applaud the author who directed his research into replicating results. This is such an important step - this is where the "real science" gets done. Unfortunately, there is less allure in being the scientist that "dashes the hopes" (however false they may have been) of cancer sufferers. I just wish I could think of a way to incentivize this kind of work. How do we praise and recognize the nameless scientists who advance the knowledge of the world not through the generation of new theories, but by culling the paths that lead nowhere? We are lost in the maze without them.

Newsflash....... (0)

segedunum (883035) | more than 2 years ago | (#39596691)

Scientists are not always honest when it comes to furthering their careers and getting that lovely new research grant. Some scientists are even wondering why the public has little faith in what scientists say and in the so-called science going on now: http://www.bbc.co.uk/programmes/b00y4yql [bbc.co.uk]. It's a little worrying when a geneticist has seemingly little idea of the far-reaching future ethical questions posed in his own field.

That's mostly why I never donate to charities touting cancer research. It all goes into one black hole, never to be seen again. There are people who have cancer now who need help and I donate to charities that look after those people. Cynically, why would anyone try and 'cure' cancer when you can keep the gravy train of research papers and expensive drugs going?

Absolute power corrupts absolutely. (5, Insightful)

concealment (2447304) | more than 2 years ago | (#39596699)

There is no greater "absolute power" than knowing that if you say or write something that others will like, they will pay you lots of money and make you famous.

It's not that money corrupts. Money is not the root of all evil; the full quote is "the love of money is the root of all evil." When our society decided that money was more important than truth, we surrendered truth to the void.

A research scientist thinks about his day. He can slightly fudge his cancer study, make big headlines, get a ton of grant money and get appointed chair at the university. Or he can go down the long hall to his boss and say, "Nope, this one didn't work either, and while I'd like to start a religion based on false hope, this isn't the false hope you're looking for." If he does that, he can then watch one of his subordinates fudge the cancer study, make big headlines, and be his boss at the same time next year.

Which choice would you make?

Re:Absolute power corrupts absolutely. (5, Insightful)

WhiplashII (542766) | more than 2 years ago | (#39597471)

No, it's a little worse than that. The honest guy doesn't get tenure, and is eventually fired. The dishonest guy remains a "scientist" for life. So in the steady state, there will be many, many more dishonest "scientists" than honest ones.

Re:Absolute power corrupts absolutely. (4, Insightful)

crazyjj (2598719) | more than 2 years ago | (#39597605)

Or he can go down the long hall to his boss and say, "Nope, this one didn't work either, and while I'd like to start a religion based on false hope, this isn't the false hope you're looking for."

And that's what *really* pissed me off about academia. Guys like that never get tenure, never get thanked. With so many of the people I worked with, "hypothesis" was synonymous with "foregone conclusion." The standard practice was to come up with your hypothesis, cook up a bunch of data to support it (dismissing any evidence that contradicted it with a little intellectual sleight-of-hand), publish, and then get your promotions and tenure. The guys who treated their hypotheses as ACTUAL hypotheses (that they might actually find to be wrong) were treated like bad researchers, when in fact, they were the *good* researchers. With so many people cooking the numbers, it began to be assumed that if your hypotheses weren't always proven right, it meant you were somehow flawed.

Not cutting corners (0)

Anonymous Coward | more than 2 years ago | (#39596701)

Most academics I know live hand-to-mouth on short-term grants, below or close to the poverty line. And when your next grant depends only on whether you have produced a paper, you simply must put quantity over quality when publishing. The alternative is flipping burgers at the corner shop (which actually probably pays better).

But everyone in science knows this. And such papers rarely become "landmark". They are known to be toilet paper (worthless, unless you happen to need it), and treated as such. I'd wager there is something else behind this rather alarming finding.

Don't trust someone who needs money (2)

danbuter (2019760) | more than 2 years ago | (#39596717)

Many of these scientists are getting big grants to do their research. At least a few of them might be skewing their results, even just a little, to give the answers their backers want, in order to keep the money flow open. This goes for a lot of scientific research. (Not to mention the politics of getting published if your research contradicts a heavyweight in your field).

Re:Don't trust someone who needs money (1)

CanHasDIY (1672858) | more than 2 years ago | (#39596867)

So... we should only trust rich people?

Good call, I'm sure they are all fair, honest, and equitable in their decision making. After all, that's how they got to be rich, right?

Re:Don't trust someone who needs money (1)

Maximum Prophet (716608) | more than 2 years ago | (#39597101)

So... we should only trust rich people?

No, never trust the rich, the last thing they want is more company at the top. The rich cartoonist Scott Adams said so...

So why is it wrong (0)

Vinegar Joe (998110) | more than 2 years ago | (#39596751)

For "conservatives" to be skeptical of scientists?

Re:So why is it wrong (0)

Anonymous Coward | more than 2 years ago | (#39596859)

Because conservatives are skeptical of science that libruls want to believe.

Re:So why is it wrong (1)

jandar (304267) | more than 2 years ago | (#39596921)

If they applied the same scrutiny to all assertion-generating systems, there would be nothing wrong with being skeptical of science. They should be equally skeptical of theology.

Re:So why is it wrong (1)

Vinegar Joe (998110) | more than 2 years ago | (#39597479)

So, because someone is politically conservative, they must also be religious?

Re:So why is it wrong (2)

CanHasDIY (1672858) | more than 2 years ago | (#39596947)

It's their methodology; to wit -

Skeptic: I do not believe that your results accurately reflect reality, and therefore would like to see further experimentation.

Neocon "Skeptic:" Uh-huYUK, I no dat I ain't come frum no durn monkey, cuz da preacher-man done told me so!

Living in the midwest, I tend to see the latter far more than the former.

Re:So why is it wrong (1)

Vinegar Joe (998110) | more than 2 years ago | (#39597551)

Neocon "Skeptic:" Uh-huYUK, I no dat I ain't come frum no durn monkey, cuz da preacher-man done told me so!

You do realize that most of the people who are "neocons" are products of the New York intelligentsia and graduates of Columbia, Cornell, Harvard, etc........not too many are from south of the Mason-Dixon Line or call themselves Tarheels, Gamecocks or Volunteers.

Re:So why is it wrong (1)

archer, the (887288) | more than 2 years ago | (#39596957)

People should have *some* skepticism toward science. This does not mean people should blindly assume that all science is wrong. This article is a good example of science's built-in system of checks and balances. Unfortunately, it is also a good indication that more may be needed.

Because it's political and ignorant. (1)

Stem_Cell_Brad (1847248) | more than 2 years ago | (#39597047)

It's fine to be skeptical of new findings. In fact it is healthy, and most good scientists are skeptical about anything new. TFA is an example of healthy skepticism. I am curious about the findings that could not be reproduced by this group: how many of those had already been written off as weak within the field? This is what scientists do to arrive at consensus - continuous testing. The goal of scientists is to find what is real. Granted, this can be affected by human nature and desires, but the profession diligently seeks to limit these effects. The conservative anti-scientist campaign, as far as I can tell, takes aim at scientists when scientific findings do not favor the political will of the conservative. It ignores what is real in favor of what is desired.

Re:So why is it wrong (0)

Anonymous Coward | more than 2 years ago | (#39597637)

Because they are more concerned about who is doing the research than what they are researching.

That proves the cancer research business is a scam... (0)

harduser (1451499) | more than 2 years ago | (#39596759)

... set up to pull money from governments and health organizations. I got my bad karma points when I said that before.

"Science" said social science not replicated (3, Interesting)

peter303 (12292) | more than 2 years ago | (#39596767)

Two weeks ago, Science said that social science studies are usually not replicated, either because they are not true or because replication is too expensive. They were trying to explain the rise in psychology paper retractions and job firings as poor science.

I call bullshit (5, Interesting)

Black Parrot (19622) | more than 2 years ago | (#39596841)

Given the expense, I flat don't believe that a private company just decided to replicate 53 studies.

And he claims that authors "made" his team sign confidentiality agreements. How do authors force that?

So, he claims, he can't even tell us which studies failed.

Now he works at a different cancer research company. Conflict of interests?

I don't doubt that we've got problems in the "medical industry", but the linked article reeks of bullshit.

Has anyone looked at Nature?

Re:I call bullshit (1)

gl4ss (559668) | more than 2 years ago | (#39596971)

well, you need to sign confidentiality agreements to reproduce the studies..

what, you think the scientific papers have all that you need for duplication, implying real science? you think peer review is nowadays actually trying to reproduce any of the findings? fuck no. it's much easier to provide vague results from tests the reader can't verify - that way you're not a fraud but are still "working".

Re:I call bullshit (3, Informative)

Black Parrot (19622) | more than 2 years ago | (#39597217)

well, you need to sign confidentiality agreements to reproduce the studies..

No you don't. You just need to read the published paper and attempt to reproduce what the paper reports. (A good scientific paper includes enough information to make the work it reports on reproducible.)

*However*, I suspect my post was over-reactive in a couple of regards:

a) They might have asked the authors for their unpublished raw data, in which case a confidentiality agreement becomes plausible.

b) When I read "landmark studies", I think longitudinal studies or clinical trials. However, it appears that they were using their own notions of "landmark", and included things like the effect of a chemical on cell biology. That sort of thing can be reproduced at a cost a private company would undertake.

However, the "I can't tell you" criticism still stands. Among the posts at the on-line article in the Slashdot update, someone points out that the Nature article is a complaint about irreproducible results, but is not itself reproducible. Basically, from what I'm reading, Nature published an anecdote.

Maybe it was a letter rather than a paper?

Re:I call bullshit (1)

burningcpu (1234256) | more than 2 years ago | (#39597315)

A few things here:

His team replicated 53 studies? I haven't read through the details, but I would imagine these were long-term and difficult studies to complete. I smell BS.

His employees could very well be inept. This would explain why they were unable to replicate experiments. Science is hard.

Re:I call bullshit (5, Informative)

Anonymous Coward | more than 2 years ago | (#39597345)

And look at many of the complaints he has about academic research in this 'paper'.

"In addition, preclinical testing rarely includes predictive biomarkers that, when advanced to clinical trials, will help to distinguish those patients who are likely to benefit from a drug."

When I publish a paper, I'm trying to present the interesting findings in regards to the molecular system I'm studying. Finding a bunch of associated biomarkers is an entirely different study. If you expect me to write multiple studies/papers rolled into one and replicate the experiments over and over in dozens of cell lines and animal models, you better be paying me to do it, because I don't have the funding for that in the small budget I got for the proposed study funded by taxpayers through the NIH. When I wrote the grant to do the study, I didn't even know I would find anything for sure, so I didn't ask for a budget 20x the size so I could replicate it in every model out there to look for biomarkers as well, just in case I found something. Asking for that is a good way to have your grant rejected in the first place. My paper was to find the interesting thing. By publishing it I'm saying, 'we found something interesting, please fund us or other researchers to take the next step and replicate, test in other systems, etc, etc.' Publishing my results does not mean 'Big Pharma should go ahead and spend $10 million trying this in humans right now!'

"Given the inherent difficulties of mimicking the human micro-environment in preclinical research, reviewers and editors should demand greater thoroughness."

It's tough to do. That's why we typically do small bite-size chunks. That, and the size of our grants only allows for bite-size chunks. Want greater thoroughness? Increase our funding. Oh, but that's from taxpayers, and you want them to spend all the money doing research so you don't have to.

"Studies should not be published using a single cell line or model, but should include a number of well-characterized cancer cell lines that are representative of the intended patient population. Cancer researchers must commit to making the difficult, time-consuming and costly transition towards new research tools, as well as adopting more robust, predictive tumour models and improved validation strategies. "

Cancer researchers must commit to the costly transition? Yes, yes, research is being held up because all those academics, with all their mega-millions in earnings each year, just aren't willing to pony up the cash to do their experiments right. We live off grants. If there isn't funding to do a huge study, we can't. Simple. No 'not willing to commit' involved.

"Similarly, efforts to identify patient-selection biomarkers should be mandatory at the outset of drug development."

Once again, the budget for that wasn't in my grant because I didn't know I would even find anything, let alone need to find every associated biomarker. Want to know the biomarkers? Then pay for it, or wait for me to publish this first paper, then write another grant asking for funding to look for biomarkers now that I've got a very good reason to look for them and spend the money. In a rush? Either pony up the cash or stop whining about taxpayer-funded academics not providing everything to you on a silver platter in record time.

Re:I call bullshit (1)

protein folder (228881) | more than 2 years ago | (#39597679)

Amgen is a big company -- they had $15.6 billion in revenue last year and spent $3.2 billion on research in 2011, according to their fact sheet [amgen.com]. Presumably they have some money to spend on trying to replicate published studies, or at least their main findings. I would think that replicating results would be part of their due diligence; if they're going to invest time, money, and resources developing a product based on the results of a research paper, they need to have some confidence that that investment rests on solid footing.

Less evil, more science (4, Interesting)

AtomicDevice (926814) | more than 2 years ago | (#39596847)

While there are certainly those who will publish findings they know to be false, that's not really the big issue I see here. Good science demands that studies be replicated so they can be upheld or refuted. Sure, there's confirmation bias in science all over the place - the bigger problem I see is that there's very little incentive to publish a paper that simply refutes another. Busting existing studies should be a glorious field, but it's not. If big-name scientist A publishes a result in Nature, and no-name scientist B publishes a refutation in the journal-of-no-one-reads-it, everyone just assumes scientist B is a bad scientist (assuming he even managed to get published at all).

Another major issue is that the null hypothesis is a very un-enticing story. No one wants to publish the paper: "New Drug does nothing to cure cancer". If you spent a year and a ton of money researching New Drug, you're damn sure going to try and make it work. It's unfortunate, because often the null hypothesis is very informative, but it doesn't get you paid or published. Or how about the psychology paper: "Brain does not respond to stimulus A in any meaningful way", don't remember that paper? That's because it never got published.

I think this is less about malicious behavior, and more about a lack of interest (which can somewhat be blamed on the way universities/journals/grants handle funding, notoriety, etc) in replicating and refuting studies.

Do you want to be the guy who cured cancer, or the guy who disproved a bunch of studies?

Re:Less evil, more science (0)

Anonymous Coward | more than 2 years ago | (#39597225)

Do you want to be the guy who cured cancer, or the guy who disproved a bunch of studies?

I'd rather be the guy who outed the scam artists selling expensive eel-oil cancer treatments. So to put it in your context: disproving the false studies.

Re:replication == peer review (0)

Anonymous Coward | more than 2 years ago | (#39597707)

I think it's time that medical research adopts the notion that "peer review" requires replication. We need to junk the current system of publication and replace it with submission for third-party replication. We need a group of disinterested science staff (with no academic ax to grind) who repeat the experiment before it gets published. Only after confirmation does the original team get credit.

captcha: clarify

Full text (0)

Anonymous Coward | more than 2 years ago | (#39596851)

Here's the full text [scribd.com] , for anyone interested.

Medical scientists (2, Interesting)

nedlohs (1335013) | more than 2 years ago | (#39596863)

are famously bad at science and statistics.

And there's no benefit (to the researcher) in replicating a study that's already been done which makes for an obvious problem.

Medical science isn't alone in this of course, it just seems to be worse than most.

Conservatives, thinking ahead as always... (-1, Troll)

SuperKendall (25149) | more than 2 years ago | (#39596899)

Conservatives have been steadily distrusting "science" (whatever that broad label means) more over the last decade or so.

Many of you laughed at conservatives for doing so in a recent Slashdot story. Well I can't hear that laugh quite so well with your shoe shoved down your esophagus.

You see, we have been aware for a while that there is a growing problem with "science" and that there are more and more people just in it to game the system. Or perhaps they did not even start out that way, but they could not reach the conclusion they had publicly stated would probably be the end result, and who wants to look foolish?

Conservative can mean many things, but in the end one key value is caution, bordering on skepticism - an unwillingness to believe a claim just because someone says it is so. We demand real proof, which traditional science could provide but which more and more is being skimmed over.

So over the next several years, as more and more cases like this come to light, just remember it was Conservatives who had the intellect to see this coming while you were still blindly trusting a label that anyone could and did use - "science".

Re:Conservatives, thinking ahead as always... (1)

jandar (304267) | more than 2 years ago | (#39597003)

Conservative can mean many things but in the end one key value is caution, bordering on skepticism - unwilling to believe a claim just because someone says it is so.

Are conservatives so "unwilling to believe" the statements of religious authorities? I'm skeptical about that.

Re:Conservatives, thinking ahead as always... (1)

tbannist (230135) | more than 2 years ago | (#39597203)

No, you don't. You demand that scientific results match your preconceived notions. Conservatives are easily swayed by anything claiming to be science that matches what they want to be true. Just look at the people who listen to Christopher Monckton: he has a bachelor of arts and claims to have cured AIDS and cancer, yet conservatives love to listen to him tell them how global warming isn't occurring, how the earth is actually cooling rather than warming, and all sorts of other nonsense that matches how they want the world to be.

You don't trust science because you don't like the results, regardless of how many times the experiments have been replicated. This article is about an inordinately high failure rate in one particular area of research where not enough verification of results is being performed. The problem is easily fixed; the question is whether the corporations paying for the research will be willing to pay for the verification and release the results.

Re:Conservatives, thinking ahead as always... (0)

Anonymous Coward | more than 2 years ago | (#39597339)

We demand real proof

I've never seen this from any modern conservative. Which is why Fox News attracts most of them.

Most published research findings are false (4, Informative)

kahizonaki (1226692) | more than 2 years ago | (#39596909)

A recent PLOS article (free to view!) analyses modern research, coming to the conclusion that most research findings are false.
TLDR: Because of the nature of the statistics used and the fact that only positive results are reported.
http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124 [plosmedicine.org]
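
For those who don't want to wade through the paper: its core argument is just Bayes' rule applied to study outcomes. Here's a minimal Python sketch of the positive-predictive-value formula from that PLOS article; the alpha, power, prior-odds and bias numbers below are illustrative assumptions, not data from the paper.

    def ppv(alpha: float, power: float, R: float, u: float = 0.0) -> float:
        """Post-study probability that a 'significant' finding is true.

        alpha: significance threshold; power: 1 - beta; R: prior odds
        that a tested relationship is real; u: bias, the fraction of
        would-be null results that get reported as positive anyway.
        """
        beta = 1.0 - power
        true_positives = (power + u * beta) * R      # real effects flagged
        false_positives = alpha + u * (1.0 - alpha)  # nulls flagged anyway
        return true_positives / (true_positives + false_positives)

    # Illustrative numbers only (alpha = 0.05, power = 0.8):
    for R, u in [(1.0, 0.0), (0.1, 0.0), (0.1, 0.2)]:
        print(f"prior odds R={R}, bias u={u}: PPV = {ppv(0.05, 0.8, R, u):.2f}")
    # -> 0.94, 0.62, and 0.26: with long-shot hypotheses and modest bias,
    #    most "positive" findings are false.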

Re:Most published research findings are false (1)

Anonymous Coward | more than 2 years ago | (#39597175)

Why do you consider that article valid, and not all the others? Bias? I read it. It's crap. My bias.

Re:Most published research findings are false (0)

Anonymous Coward | more than 2 years ago | (#39597281)

Are you certain that this research stating most findings are false is in fact correct?

The article (0)

Anonymous Coward | more than 2 years ago | (#39596913)

Efforts over the past decade to characterize the genetic alterations in human cancers have led to a better understanding of molecular drivers of this complex set of diseases. Although we in the cancer field hoped that this would lead to more effective drugs, historically, our ability to translate cancer research to clinical success has been remarkably low [1]. Sadly, clinical trials in oncology have the highest failure rate compared with other therapeutic areas. Given the high unmet need in oncology, it is understandable that barriers to clinical development may be lower than for other disease areas, and a larger number of drugs with suboptimal preclinical validation will enter oncology trials. However, this low success rate is not sustainable or acceptable, and investigators must reassess their approach to translating discovery research into greater clinical success and impact.

Many factors are responsible for the high failure rate, notwithstanding the inherently difficult nature of this disease. Certainly, the limitations of preclinical tools such as inadequate cancer-cell-line and mouse models [2] make it difficult for even the best scientists working in optimal conditions to make a discovery that will ultimately have an impact in the clinic. Issues related to clinical-trial design — such as uncontrolled phase II studies, a reliance on standard criteria for evaluating tumour response and the challenges of selecting patients prospectively — also play a significant part in the dismal success rate [3].

Many landmark findings in preclinical oncology research are not reproducible, in part because of inadequate cell lines and animal models.

Unquestionably, a significant contributor to failure in oncology trials is the quality of published preclinical data. Drug development relies heavily on the literature, especially with regards to new targets and biology. Moreover, clinical endpoints in cancer are defined mainly in terms of patient survival, rather than by the intermediate endpoints seen in other disciplines (for example, cholesterol levels for statins). Thus, it takes many years before the clinical applicability of initial preclinical observations is known. The results of preclinical studies must therefore be very robust to withstand the rigours and challenges of clinical trials, stemming from the heterogeneity of both tumours and patients.
Confirming research findings

The scientific community assumes that the claims in a preclinical study can be taken at face value — that although there might be some errors in detail, the main message of the paper can be relied on and the data will, for the most part, stand the test of time. Unfortunately, this is not always the case. Although the issue of irreproducible data has been discussed between scientists for decades, it has recently received greater attention (see go.nature.com/q7i2up) as the costs of drug development have increased along with the number of late-stage clinical-trial failures and the demand for more effective therapies.

Over the past decade, before pursuing a particular line of research, scientists (including C.G.B.) in the haematology and oncology department at the biotechnology firm Amgen in Thousand Oaks, California, tried to confirm published findings related to that work. Fifty-three papers were deemed 'landmark' studies (see 'Reproducibility of research findings'). It was acknowledged from the outset that some of the data might not hold up, because papers were deliberately selected that described something completely new, such as fresh approaches to targeting cancers or alternative clinical uses for existing therapeutics. Nevertheless, scientific findings were confirmed in only 6 (11%) cases. Even knowing the limitations of preclinical research, this was a shocking result.
[Table 1 in the original, 'Reproducibility of research findings': preclinical research generates many secondary publications, even when results cannot be reproduced.]

Of course, the validation attempts may have failed because of technical differences or difficulties, despite efforts to ensure that this was not the case. Additional models were also used in the validation, because to drive a drug-development programme it is essential that findings are sufficiently robust and applicable beyond the one narrow experimental model that may have been enough for publication. To address these concerns, when findings could not be reproduced, an attempt was made to contact the original authors, discuss the discrepant findings, exchange reagents and repeat experiments under the authors' direction, occasionally even in the laboratory of the original investigator. These investigators were all competent, well-meaning scientists who truly wanted to make advances in cancer research.

In studies for which findings could be reproduced, authors had paid close attention to controls, reagents, investigator bias and describing the complete data set. For results that could not be reproduced, however, data were not routinely analysed by investigators blinded to the experimental versus control groups. Investigators frequently presented the results of one experiment, such as a single Western-blot analysis. They sometimes said they presented specific experiments that supported their underlying hypothesis, but that were not reflective of the entire data set. There are no guidelines that require all data sets to be reported in a paper; often, original data are removed during the peer review and publication process.

Unfortunately, Amgen's findings are consistent with those of others in industry. A team at Bayer HealthCare in Germany last year reported [4] that only about 25% of published preclinical studies could be validated to the point at which projects could continue. Notably, published cancer research represented 70% of the studies analysed in that report, some of which might overlap with the 53 papers examined at Amgen.

Some non-reproducible preclinical papers had spawned an entire field, with hundreds of secondary publications that expanded on elements of the original observation, but did not actually seek to confirm or falsify its fundamental basis. More troubling, some of the research has triggered a series of clinical studies — suggesting that many patients had subjected themselves to a trial of a regimen or agent that probably wouldn't work.

These results, although disturbing, do not mean that the entire system is flawed. There are many examples of outstanding research that has been rapidly and reliably translated into clinical benefit. In 2011, several new cancer drugs were approved, built on robust preclinical data. However, the inability of industry and clinical trials to validate results from the majority of publications on potential therapeutic targets suggests a general, systemic problem. On speaking with many investigators in academia and industry, we found widespread recognition of this issue.
Improving the preclinical environment

How can the robustness of published preclinical cancer research be increased? Clearly there are fundamental problems in both academia and industry in the way such research is conducted and reported. Addressing these systemic issues will require tremendous commitment and a desire to change the prevalent culture. Perhaps the most crucial element for change is to acknowledge that the bar for reproducibility in performing and presenting preclinical studies must be raised.

An enduring challenge in cancer-drug development lies in the erroneous use and misinterpretation of preclinical data from cell lines and animal models. The limitations of preclinical cancer models have been widely reviewed and are largely acknowledged by the field. They include the use of small numbers of poorly characterized tumour cell lines that inadequately recapitulate human disease, an inability to capture the human tumour environment, a poor appreciation of pharmacokinetics and pharmacodynamics, and the use of problematic endpoints and testing strategies. In addition, preclinical testing rarely includes predictive biomarkers that, when advanced to clinical trials, will help to distinguish those patients who are likely to benefit from a drug.

Wide recognition of the limitations in preclinical cancer studies means that business as usual is no longer an option. Cancer researchers must be more rigorous in their approach to preclinical studies. Given the inherent difficulties of mimicking the human micro-environment in preclinical research, reviewers and editors should demand greater thoroughness.

As with clinical studies, preclinical investigators should be blinded to the control and treatment arms, and use only rigorously validated reagents. All experiments should include and show appropriate positive and negative controls. Critical experiments should be repeated, preferably by different investigators in the same lab, and the entire data set must be represented in the final publication. For example, showing data from tumour models in which a drug is inactive, and may not completely fit an original hypothesis, is just as important as showing models in which the hypothesis was confirmed.

Studies should not be published using a single cell line or model, but should include a number of well-characterized cancer cell lines that are representative of the intended patient population. Cancer researchers must commit to making the difficult, time-consuming and costly transition towards new research tools, as well as adopting more robust, predictive tumour models and improved validation strategies. Similarly, efforts to identify patient-selection biomarkers should be mandatory at the outset of drug development.

        “The scientific process demands the highest standards of quality, ethics and rigour.”

Ultimately, however, the responsibility for design, analysis and presentation of data rests with investigators, the laboratory and the host institution. All are accountable for poor experimental design, a lack of robust supportive data or selective data presentation. The scientific process demands the highest standards of quality, ethics and rigour.
Building a stronger system

What reasons underlie the publication of erroneous, selective or irreproducible data? The academic system and peer-review process tolerates and perhaps even inadvertently encourages such conduct [5]. To obtain funding, a job, promotion or tenure, researchers need a strong publication record, often including a first-authored high-impact publication. Journal editors, reviewers and grant-review committees often look for a scientific finding that is simple, clear and complete — a 'perfect' story. It is therefore tempting for investigators to submit selected data sets for publication, or even to massage data to fit the underlying hypothesis.

But there are no perfect stories in biology. In fact, gaps in stories can provide opportunities for further research — for example, a treatment that may work in only some cell lines may allow elucidation of markers of sensitivity or resistance. Journals and grant reviewers must allow for the presentation of imperfect stories, and recognize and reward reproducible results, so that scientists feel less pressure to tell an impossibly perfect story to advance their careers.

Although reviewers, editors and grant-committee members share some responsibility for flaws in the system, investigators must be accountable for the data they generate, analyse and submit. We in the field must remain focused on the purpose of cancer research: to improve the lives of patients. Success in our own careers should be a consequence of outstanding research that has an impact on patients.

The lack of rigour that currently exists around generation and analysis of preclinical data is reminiscent of the situation in clinical research about 50 years ago. The changes that have taken place in clinical-trials processes over that time indicate that changes in prevailing attitudes and philosophies can occur (see 'Improving the reliability of preclinical cancer studies').
[Box 1 in the original: recommendations for improving the reliability of preclinical cancer studies.]

Improving preclinical cancer research to the point at which it is reproducible and translatable to clinical-trial success will be an extraordinarily difficult challenge. However, it is important to remember that patients are at the centre of all these efforts. If we in the field forget this, it is easy to lose our sense of focus, transparency and urgency. Cancer researchers are funded by community taxes and by the hard work and philanthropic donations of advocates. More importantly, patients rely on us to embrace innovation, make advances and deliver new therapies that will improve their lives. Although hundreds of thousands of research papers are published annually, too few clinical successes have been produced given the public investment of significant financial resources. We need a system that will facilitate a transparent discovery process that frequently and consistently leads to significant patient benefit.

This is why I don't trust newspaper health pages (1)

Anonymous Coward | more than 2 years ago | (#39596941)

Because they blindly and uncritically report everything they read, regardless of whether it can be replicated or not. Sometimes they even publish bare press releases as final truth. Remember people, unless it's been replicated it isn't science. No matter how nice the scientists look, no matter how scientific the journal looks, no matter how professional the equipment looks.
A lot of what's first published (in this case, almost everything) is crap. Also remember that one of the original reasons people published in journals was to ask their colleagues: "Is this actually true?" A journal article shouldn't be regarded as a statement of scientific fact, but as an invitation to replication and criticism.

As I say to the PhD students in my lab (2, Funny)

golden age villain (1607173) | more than 2 years ago | (#39596965)

Let's publish it quickly, then get a tenured job and change topic before others find out it's all wrong!

Should make you wonder (-1)

Anonymous Coward | more than 2 years ago | (#39597007)

Does the same thing happen in other academic fields, such as, say, climate science?

Ioannidis said as much for years (2, Informative)

Anonymous Coward | more than 2 years ago | (#39597069)

These findings are no surprise to those who have been following medical science and research for the past decades. See for example what Dr John Ioannidis has to say about the consistency, accuracy and honesty (or lack thereof) of medical science in general [theatlantic.com]: "as much as 90 percent of the published medical information that doctors rely on is flawed"; "There was plenty of published research, but much of it was remarkably unscientific, based largely on observations of a small number of cases"; "he was struck by how many findings of all types were refuted by later findings" - and not just in epidemiological (statistical) studies, but also in randomized, double-blind clinical trials: "Baffled, he started looking for the specific ways in which studies were going wrong. And before long he discovered that the range of errors being committed was astonishing: from what questions researchers posed, to how they set up the studies, to which patients they recruited for the studies, to which measurements they took, to how they analyzed the data, to how they presented their results, to how particular studies came to be published in medical journals ... 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials."

Gary Taubes too denounced this accumulation of bias back in 2007 [nytimes.com] : compliance bias, information bias, confirmation bias, etc. routinely introduce non-uniform effects that can be bigger than what you try to measure. And you cannot compensate for them because you cannot quantify them.

As Sander Greenland, one of the editor/authors of Modern Epidemiology, wrote in chapter 19 "Bias Analysis":

Conventional methods assume all errors are random and that any modeling assumptions (such as homogeneity) are correct. With these assumptions, all uncertainty about the impact of errors on estimates is subsumed within conventional standard deviations for the estimates (standard errors), such as those given in earlier chapters (which assume no measurement error), and any discrepancy between an observed association and the target effect may be attributed to chance alone. When the assumptions are incorrect, however, the logical foundation for conventional statistical methods is absent, and those methods may yield highly misleading inferences. Epidemiologists recognize the possibility of incorrect assumptions in conventional analyses when they talk of residual confounding (from nonrandom exposure assignment), selection bias (from nonrandom subject selection), and information bias (from imperfect measurement). These biases rarely receive quantitative analysis, a situation that is understandable given that the analysis requires specifying values (such as amount of selection bias) for which little or no data may be available. An unfortunate consequence of this lack of quantification is the switch in focus to those aspects of error that are more readily quantified, namely the random components.

Systematic errors can be and often are larger than random errors, and failure to appreciate their impact is potentially disastrous. The problem is magnified in large studies and pooling projects, for in those studies the large size reduces the amount of random error, and as a result the random error may be but a small component of total error. In such studies, a focus on “statistical significance” or even on confidence limits may amount to nothing more than a decision to focus on artifacts of systematic error as if they reflected a real causal effect.
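
Greenland's point is easy to see numerically. Below is a toy Python simulation (the bias magnitude and sample sizes are invented purely for illustration): a constant systematic error doesn't shrink as n grows, so a huge study simply puts a very tight confidence interval around the wrong answer.

    import random
    import statistics

    random.seed(0)
    TRUE_EFFECT = 0.0  # the real effect is exactly zero
    BIAS = 0.3         # constant systematic error (e.g. selection/measurement bias)

    for n in (100, 10_000, 1_000_000):
        sample = [random.gauss(TRUE_EFFECT + BIAS, 1.0) for _ in range(n)]
        mean = statistics.fmean(sample)
        half_ci = 1.96 * statistics.stdev(sample) / n ** 0.5
        print(f"n={n:>9,}: estimate = {mean:+.3f} +/- {half_ci:.3f}")

    # At n = 1,000,000 the output is roughly +0.300 +/- 0.002: a wildly
    # "significant" result that is nothing but the systematic error.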

It's called cosmic habituation (0)

Anonymous Coward | more than 2 years ago | (#39597093)

Here is an article on it
http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer [newyorker.com]

It's about scientific replication, and how initial results tend to shrink as the experiments are repeated.

Schooler says: "One of my mentors told me that my real mistake was trying to replicate my work. He told me doing that was just setting myself up for disappointment."
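
One mundane mechanism behind this decline effect is a winner's curse from significance filtering: the first studies to be published are the ones whose noisy estimates happened to clear the significance threshold, so they overshoot, and faithful replications regress back toward the smaller true effect. A toy Python simulation, with every number an invented assumption:

    import random
    import statistics

    random.seed(1)
    TRUE_EFFECT, SD, N = 0.2, 1.0, 50
    CRIT = 1.96 * SD / N ** 0.5  # estimate needed to reach p < 0.05

    def run_study() -> float:
        """Mean of N noisy observations of the effect."""
        return statistics.fmean(random.gauss(TRUE_EFFECT, SD) for _ in range(N))

    estimates = [run_study() for _ in range(5000)]
    published = [e for e in estimates if e > CRIT]  # only "significant" results
    print(f"true effect:              {TRUE_EFFECT:.2f}")
    print(f"mean over all studies:    {statistics.fmean(estimates):.2f}")
    print(f"mean over published only: {statistics.fmean(published):.2f}")

    # The published-only mean comes out around 0.37, nearly double the
    # true 0.2, so honest replications look like the effect is fading.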

Dr. John Ioannidis (2)

jamvger (2526832) | more than 2 years ago | (#39597127)

John Ioannidis, a medical statistics researcher on a small island in the Aegean, leads a group that has done significant work in this area. Here [theatlantic.com] is an article in The Atlantic about his work.

From the article: ". . . Ioannidis laid out a detailed mathematical proof that, assuming modest levels of researcher bias, typically imperfect research techniques, and the well-known tendency to focus on exciting rather than highly plausible theories, researchers will come up with wrong findings most of the time. Simply put, if you’re attracted to ideas that have a good chance of being wrong, and if you’re motivated to prove them right, and if you have a little wiggle room in how you assemble the evidence, you’ll probably succeed in proving wrong theories right."
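
The "wiggle room" part of that proof is easy to quantify. Under the null hypothesis a p-value is (for a well-behaved test) uniformly distributed, so a researcher who can try several analyses -- subgroups, alternative outcomes, different covariates -- and report the best one has a false-positive rate far above the nominal 5%. A quick Python check; the choice of ten analyses is an arbitrary assumption:

    import random

    random.seed(42)
    TRIALS, ANALYSES, ALPHA = 100_000, 10, 0.05

    # Under the null each analysis yields a p-value ~ Uniform(0, 1); the
    # researcher reports the smallest of the ANALYSES p-values tried.
    hits = sum(
        min(random.random() for _ in range(ANALYSES)) < ALPHA
        for _ in range(TRIALS)
    )
    print(f"chance of at least one 'significant' analysis: {hits / TRIALS:.1%}")

    # Analytic answer: 1 - (1 - 0.05)**10, which is about 40%, not 5%.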

Blame it on Quantum mechanics... (1)

3seas (184403) | more than 2 years ago | (#39597191)

...re: the double-slit experiment and what happens when you try to observe the experiment.

If they want the same results, then don't observe... as apparently they didn't the first time.

Not surprising? (0)

Anonymous Coward | more than 2 years ago | (#39597451)

And you wonder why conservatives are distrustful of science these days. These people are wasting everyone's time twice over: once by making people think invalid lines of research are worth exploring, and again by not publishing the conclusions that actually match their facts. Anyone responsible should be fired.

I can rest easy (0)

Anonymous Coward | more than 2 years ago | (#39597667)

"But they and others fear the phenomenon is the product of a skewed system of incentives that has academics cutting corners to further their careers"

I'm sure glad that this is just an isolated incident and there are no other politically charged areas of science suffering the same sort of bias.
