
Misconduct, Not Error, Is the Main Cause of Scientific Retractions

Soulskill posted about a year and a half ago | from the fake-data-looks-much-better-than-real-data dept.


ananyo writes "One of the largest-ever studies of retractions has found that two-thirds of retracted life-sciences papers were stricken from the scientific record because of misconduct such as fraud or suspected fraud — and that journals sometimes soft-pedal the reason. The study contradicts the conventional view that most retractions of papers in scientific journals are triggered by unintentional errors. The survey examined all 2,047 articles in the PubMed database that had been marked as retracted by 3 May this year. But rather than taking journals' retraction notices at face value, as previous analyses have done, the study used secondary sources to pin down the reasons for retraction if the notices were incomplete or vague. The analysis revealed that fraud or suspected fraud was responsible for 43% of the retractions. Other types of misconduct — duplicate publication and plagiarism — accounted for 14% and 10% of retractions, respectively. Only 21% of the papers were retracted because of error (abstract)."


Publish or perish (5, Interesting)

i kan reed (749298) | about a year and a half ago | (#41528503)

"Get only positive results or never get tenure" is a policy that dooms us to this exact course. Publishing is no longer a consequence of having a brilliant idea, but rather a means to an ends(keeping your job). The academic community needs to find another metric for researcher quality other than papers published. It's costing everyone the truth.

Re:Publish or perish (3, Insightful)

spikenerd (642677) | about a year and a half ago | (#41528731)

...The academic community needs to find another metric for researcher quality other than papers published...

Such as?

Number of citations? No, it would take a 30-year probationary period before the trend was reliable.
Have experts evaluate your efforts? No, that would require extra effort on the part of expensive tenured experts.
Roll some dice? Hmm, maybe that could work.

Re:Publish or perish (-1, Flamebait)

Kerstyun (832278) | about a year and a half ago | (#41528907)

> Such as?

Ask Jesus ask him from you're heart not your head check if the result's are in accordions with The Scripgure's

Re:Publish or perish (1)

Anonymous Coward | about a year and a half ago | (#41528961)

Roll some dice? Hmm, maybe that could work.

I have the sudden urge to make a D20 modern "Tenure of Educational Evil" (If you don't get it, look up Temple of Elemental Evil) and propose that any researcher must be able to take their level 1 researcher through the module to get any funding. (and funding based on how many objectives they complete along the way)

Bloodsport (1)

DarthVain (724186) | about a year and a half ago | (#41529597)

Have an annual fight to the death for tenure.

As an added byproduct I bet A) Your average tenured professor would start to look a bit different, and B) You gotta bet they would be taking way less shit from students and TA's...

Seriously though, I know in some circles it has been discussed that not every university should be structured in the same way. For the most part, many are more or less training centres rather than places of deep discovery.

Tenure and papers might make sense if your primary goal is the discovery of the universe. However, if your primary goal is moving another year of pukes out the door, perhaps you just need a system like the one they already have for high school teachers (not that it is all that great either).

Re:Publish or perish (1)

JWW (79176) | about a year and a half ago | (#41529683)

How about calculated gross earnings of the students you have taught?

Of course that puts a value on teaching, which is something being discouraged for tenured faculty (which I obviously don't agree with).

Or you could measure net income from licensing of IP for creation of new technology. Notice I said licensing and creation, I wouldn't want to encourage our universities to continue acting like asses pursuing IP lawsuits as a means to make money.

Re:Publish or perish (3, Insightful)

interkin3tic (1469267) | about a year and a half ago | (#41530729)

How about calculated gross earnings of the students you have taught? Of course that puts a value on teaching, which is something being discouraged for tenured faculty (which I obviously don't agree with).

That would also make it in new professors' best interests to not teach the intro level courses where much of the class will change majors and doesn't want to be there in the first place. They'll instead focus on the upper level courses where the weak links have been weeded out.

Guess which courses new faculty get stuck doing now? That would be rewarding the really weaselly ones who were able to skip the hard work.

Furthermore, the students don't care about quality teachers, or else they'd be going to smaller schools known more for teaching than for research grants. They're voting with their wallets for schools where research is valued more than instruction. So your solution is lacking a problem, at least according to the teachers and students of such schools.

Re:Publish or perish (1)

khallow (566160) | about a year and a half ago | (#41531335)

Furthermore, the students don't care about quality teachers, or else they'd be going to smaller schools known more for teaching than for research grants.

Why? The research school's selling point is that the teaching is better because the faculty is top notch.

Re:Publish or perish (1)

Obfuscant (592200) | about a year and a half ago | (#41531901)

Why? The research school's selling point is that the teaching is better because the faculty is top notch.

Huh? Where did you get the idea that a good research faculty means a good teaching faculty? There's too much pressure on research faculty to do research to expect them to spend a lot of their time concentrating on teaching. (Yeah, some research faculty are good teachers, but there is no causation.)

You go to a research school if you want to get involved in research, because that's where the student jobs in research are. You go to a teaching school if you want to learn, because as an undergraduate you aren't going to be concentrating on cutting-edge information anyway. Chem 101 is still Chem 101, whether it is taught by a Nobel laureate research prof or an instructor.

Re:Publish or perish (2)

khallow (566160) | about a year and a half ago | (#41532579)

Where did you get the idea that a good research faculty means a good teaching faculty?

Note the use of the phrase "selling point" in my original post. Where does anyone get the impression that research means better teaching and/or better education? From research schools marketing that angle.

Fundamentally, your assertion that "the students don't care about quality teachers" is based on flawed premises. Prospective college students aren't well known for understanding the nuances of a college and an education. So why expect them to "know" that "smaller schools known more for teaching" have better teachers (maybe)?

Even if the student doesn't care about the outcome of their education (which is a peculiar assertion to make, given that the average student puts something like five years of their life into such an education), there's no reason for the college to not care as well.

Re:Publish or perish (3, Interesting)

raydobbs (99133) | about a year and a half ago | (#41528741)

With the public retreat from education, universities have to take their funding from more private sources. As a result, there is outside pressure to do research to favor these outside sources of funding, and you get a recipe for fraud and misconduct. Of course, the universities won't admit that they have had to make a deal with the devil to keep the doors open - and a large part of our (United States) political system is dead-set on taking us backward in terms of scientific progress to appease their less-than-sophisticated backers; and the problem is set to only get worse unless we as a people do something to stop it.

Re:Publish or perish (1, Interesting)

hsthompson69 (1674722) | about a year and a half ago | (#41529351)

Well, you've got the devils on the corporate side, who may be trying to avoid bad press, say large organic potato farmers who don't want to see studies that show the deleterious effects of carbohydrate intake on obesity, diabetes, heart disease and other chronic diseases. Fewer carbs sold means less profit to the company.

But then, you've got the devils on the government side, who also may be trying to avoid bad press, say the USDA regulators who don't want to see studies that show the deleterious effects of carbohydrate intake on obesity, diabetes, heart disease and other chronic diseases. In this case, a refutation of government advice (four food groups, food pyramid, my plate), would mean less credibility for government advice in the future.

In either case, I think we need systems in place to combat fraud. Start off with a necessary and sufficient falsifiable hypothesis statement *before* the study. Publish the result data publicly no matter what the outcome (data retention and dissemination). A focus on double blind placebo controlled work instead of observational studies that can't show causality.

Trying to pose this as a "government schools are declining, therefore science is going wrong" is a misunderstanding of the fact that government can have non-scientific impulses.

Re:Publish or perish (0)

Anonymous Coward | about a year and a half ago | (#41529775)

are there many large organic potato farmers?

Re:Publish or perish (2, Interesting)

jamesmusik (2629975) | about a year and a half ago | (#41528871)

Journals don't only publish papers reporting "positive results," whatever that may be. Even if your study comes out a way you didn't expect, if you did it right, you should still be able to get it published. There's something beyond publish or perish that is at work here.

Re:Publish or perish (4, Insightful)

blueg3 (192743) | about a year and a half ago | (#41528911)

Journals don't only publish papers reporting "positive results," whatever that may be. Even if your study comes out a way you didn't expect, if you did it right, you should still be able to get it published. There's something beyond publish or perish that is at work here.

That's what you might think, but getting (most) journals to publish negative results is very difficult.

Re:Publish or perish (5, Insightful)

Anonymous Coward | about a year and a half ago | (#41528939)

Journals don't only publish papers reporting "positive results," whatever that may be

HAHAHAHAHAHAHAHAhahahaha... ha... oh, oh I'm sorry, but that's funny.

Yes. Journals have a very long history of not publishing 'negative results'. (id est: "We tested to see if X happened under situation Y, but no it doesn't.") Mostly because it's not 'interesting'.

If you want a good example of this, check out the medical field, where the studies which don't pan out aren't published and the ones which do are, leading to severely misleading clinical data [ted.com] and problematic results.

Re:Publish or perish (1, Interesting)

SomeKDEUser (1243392) | about a year and a half ago | (#41531275)

The problem with biomed research is that the field is rife with people who don't understand models. Biomed research is not really science in that we are not yet at the point where we can express mathematical models to make predictions which are then falsified or not.

All too often, it is a case of "I knock down/over-express a gene, find that it does something, and then make up some bullshit where I pretend it'll cure cancer". In many cases, articles get published because the reviewers don't say "this claim is not supported by your experiment (purely on the grounds of the claim being logically inconsistent)" or "you say this thing is happening, why the fuck did you not quantify it? (See, you claim this thing disappeared; what are the odds your method is just not sensitive enough?)".

This is peppered with idiots who put Gaussian error bars on numbers of cells, which ought to be grounds for immediate rejection. No, I am not bitter.

Re:Publish or perish (5, Interesting)

js33 (1077193) | about a year and a half ago | (#41529141)

A positive result is the rejection of a null hypothesis. In the frequentist statistical paradigm, a failure to reject the null hypothesis is simply not significant. Insignificant results are not usually considered worthy of publication. "If your study comes out a way you didn't expect," then the way you expected your study to come out is a null hypothesis which can supposedly be rejected with some measurable degree of significance. This way you can explain the significance of what you learned from the "failure" of your experiment, and there is no reason you should not be able to publish it.

That's the statistical paradigm. Results just aren't significant unless you can state them in a positive way.
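To make that concrete, here is a minimal sketch (in Python, with made-up numbers that are not from the paper) of the decision rule being described: the only "positive", readily publishable outcome is rejection of the null hypothesis.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    control = rng.normal(loc=10.0, scale=2.0, size=30)  # simulated control group
    treated = rng.normal(loc=10.2, scale=2.0, size=30)  # tiny, hard-to-detect true effect

    t_stat, p_value = stats.ttest_ind(treated, control)

    alpha = 0.05
    if p_value < alpha:
        print(f"p = {p_value:.3f}: reject H0 -- a 'positive' result")
    else:
        print(f"p = {p_value:.3f}: fail to reject H0 -- 'not significant', and much harder to publish")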

Re:Publish or perish (0)

Anonymous Coward | about a year and a half ago | (#41529577)

A positive result is the rejection of a null hypothesis. In the frequentist statistical paradigm, a failure to reject the null hypothesis is simply not significant. Insignificant results are not usually considered worthy of publication.

Albert Michelson and Edward Morley would disagree.

Re:Publish or perish (0)

SomeKDEUser (1243392) | about a year and a half ago | (#41531301)

In biomed, there are no models to invalidate. The Michelson-Morley experiment was significant because it proved that a model of the universe was wrong. To do the equivalent in biomed would require someone to have a model in the first place...

Re:Publish or perish (2)

Daniel Dvorkin (106857) | about a year and a half ago | (#41531933)

In biomed, there are no models to invalidate.

That statement is one of the stupidest things I've ever read on Slashdot ... which is pretty impressive.

Okay, I'm going back to building models of developmental gene regulation now. You're welcome.

Re:Publish or perish (1)

SomeKDEUser (1243392) | about a year and a half ago | (#41532249)

These are not models in the sense that the Ether Theory was a model. You build gene regulation networks by accumulating data on what gene acts as a promoter/repressor of another and what are the activation cascades.

No one will ever invalidate that work through an experiment -- some of the network might be revised, but it is not the case that someone will come up with some experimental proof that there is no such thing as a gene expression network, which would be the equivalent of the MM experiment.

You could say that you are building a model in the physical-science sense if you could look at your network and say: there must be a gene/gene group that acts here, here and here, because this is how the kinetics of the system are when stressed, and the network I built does not have the required dynamics. Now maybe this is what you are doing, in which case I apologise, but in my limited experience such papers are few and far between, and are certainly not bio_med_ papers (bioinformatics seems to be mostly about producing pretty pictures for most other biologists -- sad but true).

Re:Publish or perish (2)

FrangoAssado (561740) | about a year and a half ago | (#41529985)

then the way you expected your study to come out is a null hypothesis which can supposedly be rejected with some measurable degree of significance.

You have to be very careful here. In serious studies, you don't get to choose your null hypothesis or how you're going to analyse the data after collecting it. That's a textbook example of introducing confirmation bias.

Re:Publish or perish (2)

js33 (1077193) | about a year and a half ago | (#41530391)

You have to be very careful here.

Yes you do.

In serious studies, you don't get to choose your null hypothesis or how you're going to analyse the data after collecting it. That's a textbook example of introducing confirmation bias.

There is also the danger of making an unjustified assumption of objectivity. In preliminary studies, scientists will have gathered data, analyzed it, looked for patterns, and tried to come up with all kinds of hypotheses that could be tested. Even the most final, definitively objective experiment is not designed in a cleanroom with complete objective naivete, and often enough the scientist will have a pretty good idea of the expected nature of the data to be collected. So how significant is a failure to reject a null hypothesis? We cannot say without a full application of Bayes' theorem to interchange the roles of statistical power [wikipedia.org] and degree of confidence [wikipedia.org].
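For what it's worth, that interchange can be written down directly. A minimal sketch (with hypothetical prior and power values, chosen only for illustration) of Bayes' theorem applied to a non-significant result:

    def prob_null_given_nonsignificant(prior_null, alpha, power):
        """P(H0 true | we failed to reject H0), by Bayes' theorem."""
        p_fail_given_null = 1.0 - alpha   # probability of (correctly) not rejecting when H0 is true
        p_fail_given_alt = 1.0 - power    # probability of a miss (Type II error) when H0 is false
        numerator = p_fail_given_null * prior_null
        return numerator / (numerator + p_fail_given_alt * (1.0 - prior_null))

    # A well-powered study makes a null result informative...
    print(prob_null_given_nonsignificant(prior_null=0.5, alpha=0.05, power=0.95))  # ~0.95
    # ...an underpowered one barely moves the needle.
    print(prob_null_given_nonsignificant(prior_null=0.5, alpha=0.05, power=0.20))  # ~0.54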

Re:Publish or perish (1)

FrangoAssado (561740) | about a year and a half ago | (#41530885)

often enough the scientist will have a pretty good idea of the expected nature of the data to be collected.

Sure. I was clarifying that you can't just change to a different method of analysis according to the data you got, just because your original analysis didn't give you the result you expected.

In other words, you simply can't change your plan once you've seen the data. In this lecture [lhup.edu], Feynman gives a very clear and real example of what can go wrong even if you're trying to be completely honest (look for the part where he talks about Millikan). That's a neat example because it's a very controlled experiment measuring a completely objective thing. In other fields, like Medicine or Psychology, it's much harder to know if your own expectations are influencing the study, so it's very important that you follow a very clearly defined plan from the beginning.

Re:Publish or perish (0)

Anonymous Coward | about a year and a half ago | (#41531689)

This is exactly why good science is hard to do and bad science is easy to do. Good science requires that you very carefully design your experiments so that you have a meaningful result whether your conceptual model is correct or not. ie: you have to design a null hypothesis that is falsified if your conceptual model is correct, and balance that with a null hypothesis that is falsified if your conceptual model is wrong. Then you have to publish both experiments. At least in biology, few studies are designed this way. Instead, they depend on a single statistical hypothesis that will be falsified if the conceptual model is correct, and sometimes modify the conceptual model to fit the results. This, of course, inverts the whole scientific model of observe - hypothesize - test.

Re:Publish or perish (1)

crazyjj (2598719) | about a year and a half ago | (#41528971)

It's also a great way to keep the grant money flowing in even after you have tenure, particularly if you're publishing findings that are likely to get you grants. And no one gets paid a nice bonus for finding inconclusive or negative results.

Re:Publish or perish (-1)

Anonymous Coward | about a year and a half ago | (#41529167)

Tenure? How about you just get real jobs like the rest of us. We pay for your fucking tenure. Go complain to someone who gives a shit.

Re:Publish or perish (2, Interesting)

scamper_22 (1073470) | about a year and a half ago | (#41529209)

It's not the academic community that is at fault. It is our society.

I've long held the view that science only gained the credibility it has because it was free from politics and power.

But since science has gained such credibility, people think we should now *trust* it with power. Which of course destroys the very thing that gave it that trust. Ye old saying: 'power corrupts and absolute power corrupts absolutely'.

For one thing, we now have government funding for science. Sounds like a good idea... except, of course, that means funding for universities... which need to hire faculty. So whom do they hire, and how much do they pay them? Why did they hire Bob and not Alice? Alice would like a job too. The whole question of fairness comes up.

Then of course there's the issue of funding projects. Which projects get funded? Which lobbyists and politicians and special interest groups matter? What policies will be impacted?

It all sounds very neat to have a special scientific class able to deliver *the truth*. It's just completely unscientific and contrary to all empirical evidence in history to think it possible. There has never been a group of wise people in power outside of politics.

Plato envisioned the Philosopher Kings as a group of wise societal leaders. It is said that this was actually the foundation of the Islamic Republic in Iran... a group of wise religious people given power. Not unlike people who wish for rational administrators or scientific experts in Western society to make decisions outside of democracy. It's all too common to hear people wishing for transit policy to be decided by transit experts on 'independent panels'. Or healthcare policy...

It's a very dangerous road.

In short... despite all the technology, education, and the internet and accessibility to information... the *truth* remains as elusive as ever.

Re:Publish or perish (2)

frosty_tsm (933163) | about a year and a half ago | (#41529331)

"Get only positive results or never get tenure" is a policy that dooms us to this exact course. Publishing is no longer a consequence of having a brilliant idea, but rather a means to an ends(keeping your job). The academic community needs to find another metric for researcher quality other than papers published. It's costing everyone the truth.

I think the issue is not that they need a new metric for researcher quality but to realize that not every professor needs to be an active researcher their whole career.

Metrics are the problem, not the solution (0)

Anonymous Coward | about a year and a half ago | (#41529377)

This is the same problem as with the teachers' unions: arguing over what metric to use.

Both of these systems need to step outside the box and realize that metrics aren't the answer. Management is the answer (cringe). In private industry, managers decide who is good. They have the power to hire, promote and FIRE based on what they think is good. It's "manage effectively or perish", and it works pretty well.

There is no metric. There is an ever-evolving body of study on what constitutes good performance, but it's not "this employee has more megapixels than that employee". IT NEVER FUCKING WILL BE.

Damn, I wish they could get that through their heads.

Re:Metrics are the problem, not the solution (0)

Anonymous Coward | about a year and a half ago | (#41529515)

In every academic and research job I've applied for, that is at least how the hiring and promotion process works. It still ultimately comes down to the decision of whoever would be your boss (which at the higher levels might be a committee). It is not like they just crunch some numbers and hire whoever has the highest. There are still interviews, a demonstration talk, a demonstration lesson if it's a teaching position, and references. If anything, friends I have that work in design or professional art get judged far more on their portfolio of previous work than science researchers do.

And firing, or refusing to renew a position, can work like that for the majority of positions, which are not tenured. Even tenure is not perfect: if you do a crappy job, you will never get a raise. I've seen lazy professors with a couple dozen years of experience making less than what they offer to new postdocs.

Re:Publish or perish (2)

Whole Stuffed Camel (2030902) | about a year and a half ago | (#41529399)

It's not just tenure; even getting a good faculty position depends on publication of research in high-impact journals. I think that the major conflict of interest in my field, basic biomedical research, is not funding by pharma etc., but rather the necessity to generate data suitable for publishing in big journals to get/keep jobs. I'm pretty sure this is going to get very messy if something isn't done to address the problem.

Re:Publish or perish (1)

Gerzel (240421) | about a year and a half ago | (#41529563)

Aye. The result isn't surprising at all. In fact it is one of the big reasons why science demands that results be reproducible and methods be published.

Re:Publish or perish (0)

Anonymous Coward | about a year and a half ago | (#41530273)

I think the solution is simple, if nuanced. The profit motive is an interesting effect of human action, and understanding it is necessary if we aim to understand what 'rules' or more accurately 'system of organization' can achieve that end. My solution has always been to drive science via entrepreneurs and thus willing customers. The reason for this is that costs are a function of reality; that is, they are the sum of our collective subjective evaluation of the objective options before us. We are bound by reality in a very sobering way. When scientists are tasked with seeking results, if the results in question are desired by an agent for the purpose of knowing something true, then the agent will seek a methodology from the scientists that properly aims for truth. If someone wants to know how to build a new machine in the real world, he is not going to seek scientists who will tell him bullshit. He would put such fraudsters out on the street for lying to him. He requires true conclusions to create the objective good or service which will then be able to serve people's wants.

It is only when the conclusions are not then applied in any way back to something that is real that results can be fudged. When the conclusion is the end desired, rather than the means to use in some objective fashion, they can have no basis in truth. If I want to scare people or lure them, I could use the auspices of "SCIENCE!" to sell my snake-oil story to the fearful and the guilty. So, when the story itself is the end goal, the objective crucible of reality is avoided, and with it any incentive to achieve truth.

So, the question becomes: in what circumstances are the conclusions of scientific endeavor the end in and of themselves, and in what circumstances are they the means? This is where my claim about nuance comes in. It is not true that only the free market is interested in connecting conclusions back to reality in some solid manner to create something objective, while politicians are only interested in conclusions for their own sake to sell stories and bribe intellectuals. There are plenty of times when governments seek substance in scientific investigation. When the military wants to bomb a village or nuke a city, it is not interested in stories. It wants results. So when I argue that the mechanism to encourage integrity and adherence to the natural scientific methodology is entrepreneurship, I mean it in the sense that it must be driven by actors interested in producing objective goods or services.

Lastly, keep in mind that while I think this is the correct way to approach encouraging the proper application of the natural scientific method, it does not mean that this is the only aim we should attempt. There are other problems to consider besides achieving truth in natural scientific realms of investigation. A big one is already mentioned above when I wrote of nukes and the like. So, though for the sake of this topic I recommend supporting science through actors driven by goals of production, I certainly don't think that all production is desirable, and thus on that subject I would offer a more refined subset of my original solution which would promote peaceful and wealth creating/desire achieving ends rather than murderous ones. However, that is another topic. I just don't want to imply my solution to misconduct is a solution to other problems as well.

THE SECRET TO MY SUCCESS !! (0)

Anonymous Coward | about a year and a half ago | (#41528515)

Divide and Conquer !!

The numbers (2)

somaTh (1154199) | about a year and a half ago | (#41528535)

I've read the abstract and several stories that cite it, and I haven't seen some specific numbers that would make this story more relevant. They talk about the number of retractions being up sharply, and the number of those pulled for "misconduct" being up as well. The abstract and other sources have yet to put either number in relative terms. Of the number of papers published, is the percentage of those papers that are retracted up? Of those retracted, is the percentage of retractions due to misconduct up?

Re:The numbers (2)

ceoyoyo (59147) | about a year and a half ago | (#41528885)

That's not the point. The point is that journals need to be clearer about why a paper is retracted. Fraudsters shouldn't be able to hide behind the assumption that a vague retraction notice means someone made an honest error. The authors specifically state that they cannot make statements about the fraud rate because they don't have a good measure of the total number of papers published.

Re:The numbers (1)

maxwell demon (590494) | about a year and a half ago | (#41529383)

Before I judge this paper, I'll first wait some time to see whether it gets retracted because of misconduct. :-)

Re:The numbers (2)

Joe Torres (939784) | about a year and a half ago | (#41529537)

The first figure of the PNAS paper shows that less than 0.01% (maybe 0.008%) of all published papers are retracted for fraud or suspected fraud, and that the rate has been increasing since 1975 (when it was maybe around 0.001%). The authors state that the current number is probably an underestimate because not all fraud is detected and retracted. It is possible that the 1975 numbers are less representative, since fraud might have been harder to detect (at least for duplicate publication and plagiarism).

Re:The numbers (1)

JustinOpinion (1246824) | about a year and a half ago | (#41531017)

This article [guardian.co.uk] has the title "Tenfold increase in scientific research papers retracted for fraud", but at least mentions some actual numbers:

In addition, the long-term trend for misconduct was on the up: in 1976 there were only three retractions for misconduct out of 309,800 papers (0.00097%) whereas there were 83 retractions for misconduct out of 867,700 papers at a recent peak in 2007 (0.0096%).
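Those two rates, and the headline "tenfold" factor, are easy to check from the quoted counts alone (a quick back-of-the-envelope calculation, using nothing beyond the numbers above):

    retractions_1976, papers_1976 = 3, 309_800
    retractions_2007, papers_2007 = 83, 867_700

    rate_1976 = retractions_1976 / papers_1976
    rate_2007 = retractions_2007 / papers_2007

    print(f"1976: {rate_1976:.5%}")                    # ~0.00097%
    print(f"2007: {rate_2007:.5%}")                    # ~0.00957%
    print(f"increase: {rate_2007 / rate_1976:.1f}x")   # ~9.9x, i.e. the "tenfold" headline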

Percentage-wise, we're talking about a very small number of papers. They quote one of the authors:

"The better the counterfeit, the less likely you are to find it – whatever we show, it is an underestimate," said Arturo Casadevall, professor of microbiology, immunology and medicine at the Albert Einstein College of Medicine in New York and an author on the study.

While this is indeed true... even if the true number of misconduct cases is ten-fold what they measured, it's still a small fraction of the literature. Of course, any number of fraudulent papers is cause for concern (and we should work to remedy the situation); but these results should not cause us to call into question the majority of published science. In fact it points towards the vast majority of papers surviving scrutiny.

large "culture of cheating" in school now (2)

peter303 (12292) | about a year and a half ago | (#41528587)

I don't know if more students cheat now than when I attended grade school in pre-internet days, but the ease and temptation with the web are greater now. Surveys I've read suggest at least half of students cheat.
The mystery has been how one progresses from a cheating culture in grade school and then loses it by the time one reaches grad school and a professorship. Apparently some don't escape this culture. Significant science will eventually be subject to replication attempts, and the fraud discovered.

Just stupid (3, Insightful)

UnresolvedExternal (665288) | about a year and a half ago | (#41528617)

What surprises me is that these scientists actually weigh the risk/reward in favour of damned lies. Fifteen minutes of fame, then a dead career.

Re:Just stupid (0)

Anonymous Coward | about a year and a half ago | (#41528715)

Most papers are not read by anyone. Their main purpose is to be one more line on the CV of the author. Publish or perish.

Re:Just stupid (2)

UnresolvedExternal (665288) | about a year and a half ago | (#41528843)

Fair point - but where are these peer reviewers hiding?

Re:Just stupid (1)

Anonymous Coward | about a year and a half ago | (#41529239)

Fair point - but where are these peer reviewers hiding?

In tenure.

Re:Just stupid (1)

narcc (412956) | about a year and a half ago | (#41531871)

What do you think peer reviewers do? They sure as hell aren't replicating your work! The feedback you get also varies greatly in quality and importance.

From what I've seen, the average reviewer doesn't spend more than an hour or two of their time on you. Even if they set aside a whole week for you, they can't guard against a completely fabricated experiment. Short of some gross or obvious error, I can see how such a thing could easily slip past.

Conference papers are worse -- it doesn't surprise me in the least that a randomly generated paper could slip in and get accepted [slashdot.org].

Re:Just stupid (2)

Penguinisto (415985) | about a year and a half ago | (#41529221)

Well, FWIW, if Hendrik Schön [wikipedia.org] hadn't gotten stupid and made some pretty massive (and physics-defying) claims in his papers, and had instead stuck with semi-muddy results that looked pretty (as opposed to sexy) but were harder to replicate, his career would likely have lasted years, if not decades, before he got caught.

It all depends, from the fraudster's point-of-view, whether he wants rockstar status, or to make a comfortable living...

Re:Just stupid (1)

gotfork (1395155) | about a year and a half ago | (#41530493)

Eh, even if he had made up realistic-looking data, there were a lot of other red flags: not saving raw data or samples, no one else making measurements, all other groups unable to reproduce results, etc. In retrospect, it sounds like it only went on that long because he was at a private lab, but I see what you mean.

Re:Just stupid (0)

Anonymous Coward | about a year and a half ago | (#41531503)

As opposed to just a dead career if you don't get a paper published. Seems a reasonable decision....

Hoping to not see a retraction of this (2, Funny)

Anonymous Coward | about a year and a half ago | (#41528629)

That would be too ironic.

Misconduct! Fraud! Please ... (5, Informative)

jabberwock (10206) | about a year and a half ago | (#41528659)

I think that the article implicitly misrepresents the level of misconduct by leaving out some relevant statistics. ... More than 2,000 scientific articles, retracted! And ... fraud! ... plagiarism!

In context -- PubMed has more than 22 million documents and accepts 500,000 a year, according to Wikipedia.

So, to do the math: Number of fraudulent articles, total, = vanishingly small percentage of the total articles.

Re:Misconduct! Fraud! Please ... (2, Interesting)

Anonymous Coward | about a year and a half ago | (#41528879)

Yep. That's a very important, and very *missing* bit of information. Even if *ALL* of the retracted articles were for *blatant* and *intentional* misconduct (not duplicate publication), and all of them were published in the same year, and all of them were in PubMed, that would be a whopping 0.4% fraud rate.
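As a rough sanity check of those proportions (using the PubMed totals quoted a few comments up, which are themselves approximate):

    retracted = 2_047          # retracted articles examined by the study
    pubmed_per_year = 500_000  # approximate annual PubMed intake
    pubmed_total = 22_000_000  # approximate PubMed total

    print(f"{retracted / pubmed_per_year:.2%} of a single year's intake")  # ~0.41%
    print(f"{retracted / pubmed_total:.4%} of all PubMed documents")       # ~0.0093%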

It boggles my mind that this number wasn't asked for by the article's author.

Well, it *should*, but instead I'm just getting more cynical and assuming either incompetence (the author is writing about something he has absolutely no clue about, and therefore doesn't even know to ask for the information to put it into context), or malice (the author is trying to paint modern science as intentionally fraudulent).

Re:Misconduct! Fraud! Please ... (0)

Anonymous Coward | about a year and a half ago | (#41529343)

And these are the ones who were stupid enough to get caught. If you are a bit smart, there is no reason you will get caught unless someone tries to use/reproduce your results and finds out your forgery, and the chance of that happening is less than 0.4%.

So, if 0.4% got caught, take a wild guess at how many did not.

Re:Misconduct! Fraud! Please ... (1)

Anonymous Coward | about a year and a half ago | (#41529475)

It boggles my mind that this number wasn't asked for by the article's author.

Not me. There's a blatant and obvious movement going on to discredit science in general. No one mentions how much different medical science is from physical science when they talk about this either. Find one bad scientist and they think they've won. Guilt by association.

Re:Misconduct! Fraud! Please ... (2)

khallow (566160) | about a year and a half ago | (#41531459)

It boggles my mind that this number wasn't asked for by the article's author.

Not me. There's a blatant and obvious movement going on to discredit science in general. No one mentions how much different medical science is from physical science when they talk about this either. Find one bad scientist and they think they've won. Guilt by association.

Here's a case in point. Someone does research on fraud in science, and the first thing that you and the parent poster think is, "What is the ulterior motive?" That's just another anti-scientific attitude.

Re:Misconduct! Fraud! Please ... (0)

Anonymous Coward | about a year and a half ago | (#41529145)

So, we're talking about statistics that apply to approximately 0.4% of the total annual submissions, and 0.009% of the total articles to date? Oh noes! Giant scientific conspiracy! Let's make sure to get all of those emails and internal documents out into the public domain so we can label these con-men as the frauds they are!

Re:Misconduct! Fraud! Please ... (2, Informative)

Anonymous Coward | about a year and a half ago | (#41529233)

> So, to do the math: Number of fraudulent articles, total, = vanishingly small percentage of the total articles.

Those are only the ones that get discovered. I roll my eyes often when I read medical papers. The statistics are frequently hopelessly muddled (and occasionally just plain wrong on their face), the studies are set up poorly (as in, I would expect better study designs from first-year students), or they are obvious cases of plagiarism.

EX: Does early fertility predict future fertility? "We divide the population into two groups: women aged 20 to 30, and women aged 20 to 40. We find that fertility in the first group predicts fertility in the second group with R^2=.46" Well no shit, because the second group includes the first group, so of course they correlate. If you redo the stats correctly, you find that R=0.001. This paper still stands...

EX: "We found that eating walnuts increases male fertility." No shit. Walnuts are known to be high in arginine. Arginine is known to increase male fertility (multiple studies already on this). Next, the same group will publish a breakthrough study on male fertility and pumpkin seeds (hint: pumpkin seeds have twice the arginine concentration of walnuts). The study authors try hard to hide their plagiarism, though, not once mentioning arginine. They hypothesize that it is from the ALA in the walnuts... which is BS because they could have tested an ALA hypothesis using flax-seed oil. Oh, and I forgot to mention that this study was not even done single-blind. No placebos were used. One group was given walnuts (not in concealed form, just plain walnuts), the other was given nothing. This paper also still stands...

Re:Misconduct! Fraud! Please ... (0)

Anonymous Coward | about a year and a half ago | (#41529635)

The second example seems to be a rather different case than the first one, and rather different from complete fraud. The complaint seems to be that the results were not novel, but they would still be obviously correct. This might be important to people reviewing their grants or trying to hire those researchers in the future, but it is not really relevant in terms of trying to evaluate how often studies are accurate versus being wrong or lies.

Am I the only one bothered by the lumping together fraud of duplicate publication? Plagiarism has some serious issues, but the implications are far different from fraud related to changing the results.

Re:Misconduct! Fraud! Please ... (0)

Anonymous Coward | about a year and a half ago | (#41529859)

Am I the only one bothered by the lumping together fraud of duplicate publication?

Should be: Am I the only one bothered by the lumping together fraud and duplicate publication?

Re:Misconduct! Fraud! Please ... (1)

Hentes (2461350) | about a year and a half ago | (#41529301)

That assumes that all fraudsters get caught, which, knowing the situation of medical science, is very far from the truth. The paper doesn't talk about the number of erroneous articles, only the ratio between the number of frauds and the number of genuine mistakes.

Re:Misconduct! Fraud! Please ... (2)

jabberwock (10206) | about a year and a half ago | (#41529591)

Respectfully, you're compounding the error by referring to "all fraudsters" and the "situation of medical science," implying, by language, that this is a much larger problem than statistics show when considering the enormous volume of scientific articles. I'm not a scientist but I'm very good at interpreting numbers.

I didn't say that fraud does not exist, or that there isn't pressure to produce publishable results that might affect accuracy or ethics (on occasion.) I said that this is a much smaller problem than the article implies. Only the retractions were analyzed; the retractions are a vanishingly small percentage of the total. If you want to argue that the retractions are the tip of an iceberg of falsified scientific data, let me know.

Re:Misconduct! Fraud! Please ... (0)

Anonymous Coward | about a year and a half ago | (#41529611)

So? The set of papers TFA is interested in is the set of life science papers that have been retracted. The fact that this is a small subset of a larger set of all life science papers published in PubMed is UTTERLY IRRELEVANT, and comparing life science papers that have been retracted to life science papers that have not been retracted is not terribly useful unless you want to know what percentage of life science papers have been or have not been retracted.

Good grief, it's like complaining that a study that broke down deaths by type or location of cancer didn't include what percentage of all deaths they account for.

Re:Misconduct! Fraud! Please ... (1)

jabberwock (10206) | about a year and a half ago | (#41530073)

Right.

Except that my point about the article -- that it implies that there is lots of fraud in science -- has already been made by the fact that a fair number of commenters jumped right to that unproven implication.

And it would be quite reasonable to complain about such a study of cancer deaths if the article implied that the deaths were substantially greater than might be expected in the general population, without offering evidence.

And that's why.... (3, Insightful)

roc97007 (608802) | about a year and a half ago | (#41528775)

this [slashdot.org] is a bad idea.

Re:And that's why.... (1)

Burning1 (204959) | about a year and a half ago | (#41529699)

Why? Are you suggesting that the scientific studies are suddenly somehow not living up to standards of repeatability and peer-review?

We already have ways of dealing with these issues...

Re:And that's why.... (-1)

Anonymous Coward | about a year and a half ago | (#41531173)

Yeah, liberals call people making those claims bigots until they stop questioning. At least that's the way I've been treated every time I point out the misconduct at the CRU in England, since they deleted data, preventing peer review.

Are the retracted articles localized? (1)

Anonymous Coward | about a year and a half ago | (#41528795)

It would be fun to know how many of those were made in China (a country with a record of forged and fraudulent papers) and how many are from the US.

Re:Are the retracted articles localized? (0)

Anonymous Coward | about a year and a half ago | (#41529111)

Ditto on that. Someone should run the stats against the authors' countries of origin, and plot the percentage of fraud for each nation.

FTFY (2)

drerwk (695572) | about a year and a half ago | (#41528833)

Misconduct, Not Error, Is the Main Cause of Medical Scientific Retractions

Other than Hendrik Schön are there some in Math or Physics that are as likely to commit misconduct?
http://www.nature.com/nature/journal/v418/n6894/full/418120a.html [nature.com]

Re:FTFY (1)

ceoyoyo (59147) | about a year and a half ago | (#41529047)

Life sciences. Not medical. That was in the first sentence of the summary.

Yes, math and physics have issues with misconduct. The article you link to mentions several physical scientists who think it's a problem. You identified a famous one. Retraction Watch lists others, quite a few in chemistry for some reason. Complete fabrication might be a bit less common for the reasons mentioned in your article, but I have no doubt that there's data pruning, faking extra results because you don't have time to do the experiments, plagiarism, dodgy stats, etc. From the sounds of it, the problem might be worse because the physical sciences haven't faced as many high-profile scandals as the life sciences and don't have the same controls in place.

Re:FTFY (0)

Anonymous Coward | about a year and a half ago | (#41530369)

Other than Hendrik Schön are there some in Math or Physics that are as likely to commit misconduct?

Two words: cold fusion

Re:FTFY (1)

fermion (181285) | about a year and a half ago | (#41530657)

And this is more proof that life science (medical science; about half of the articles seem to be "medical research") is not science. It is based too much on what people want to believe, too much on making a profit off pushing drugs, and too little on rigorous science. We know that many articles are paid for, written by ghost writers. We know that drug dealers want the drugs to be safe for kids, but really don't know or won't pay to do the proper research. We know that cancer is a business, and the research is not pushing for a cure, but promoting high-cost treatments. We are told that we must be diagnosed with cancer early, but is that because early cancer is generally better for a cure, or because early diagnosis increases the years of survival, the statistic that is most used in cancer advertising? I still hear on commercials that early diagnosis of prostate cancer is critical, but that is not supported by real research. Or maybe it is. We won't know until biomedical researchers are paid at the level of physics researchers, and doctors are paid at the level of teachers.

So when this gets retracted...? (0)

Anonymous Coward | about a year and a half ago | (#41528925)

n/t

Obviously (-1)

Anonymous Coward | about a year and a half ago | (#41528957)

It's all those Indian "students" and "professors" plagiarizing everything under the sun. When I was in grad school, there were basically three ethnic groups: Caucasian, Asian, and Indian. The Caucasian folks were the ones either born and raised here or from Europe, but struggling to get by in any case due to the flood of non-native students grabbing up all course seats and not actually taking them. The Asian students (mostly from China) were arguably the smartest group of the bunch, collectively. The Indians just grouped together, stunk up the classes with B.O., and cheated their way through school. Whenever someone got caught doing something wrong, sure enough, it was an Indian. I wasn't surprised to discover that they got by with constantly publishing someone's ripped off research, and gangbanging 15 names on the paper like flies on a turd.

Show me an honest Indian and I will show you someone who was not raised in that country. So there's the reason why so many papers are disregarded due to misconduct.

Re:Obviously (2, Funny)

Sarten-X (1102295) | about a year and a half ago | (#41529201)

You've discovered representative traits of different societies. In many Asian societies, individual achievement is valued highly, so each individual must work the hardest to be outstanding. In many Indian societies, the collective effort is what's valued, so a team gathering bits and pieces from myriad sources and reassembling them into a new product is the respectable path to success. In many European and American societies, slacking off and blaming others for the consequences is a venerated tradition.

Re:Obviously (0)

Anonymous Coward | about a year and a half ago | (#41529213)

I know that it's politically incorrect, but I had the same experience back in school. Just about every Indian I knew was a blatant cheater. Not sure if it's a cultural thing, or whether it had to do with the fact that they were all from the Indian privileged class too.

Re:Obviously (0)

Anonymous Coward | about a year and a half ago | (#41530097)

Funny, the friends I had in grad school who were raised in India (from several different regions) were hard workers, definitely harder working than I was. There were at least two of them whom, in two different classes, all of the other students would go to for help when the professor was not around, as they got the assignments done way before everyone else and were sometimes the only ones to answer a question in the whole class. It's kind of difficult to be cheating when you're the first one to come up with an answer and can also explain and teach it to others. At least one of them, who stayed in a field close enough to mine for me to keep up with his research, went on to do some rather novel work in a subfield that was quite small... hard to just pass off someone else's work there.

You're probably just a troll, but if not, maybe you are just bitter from not being good at science, because you obviously have some issues with generalization and bias.

Re:Obviously (0)

Anonymous Coward | about a year and a half ago | (#41530499)

I had the same impression as the OP. Another Indian guy and I were in class one day. I caught him cheating off my test and reported it to the prof. Later on I heard from a friend that he and two other of his buddies got busted for copying each other. To this day I don't trust any Indian guy any farther than I can throw him.

Re:Obviously (0)

Anonymous Coward | about a year and a half ago | (#41530635)

I caught an anonymous coward lying once. To this day, I don't trust any AC further than I can throw them.

Re:Obviously (0)

Anonymous Coward | about a year and a half ago | (#41531081)

I agree with the parent. I have even once caught an AC replying to their own posts in agreement to make it look like a consensus. I would never trust an agreement between ACs again. Show me two ACs agreeing, and I'll show you a single poster trying to look like two people.

So what? (1)

Sarten-X (1102295) | about a year and a half ago | (#41529113)

If a researcher follows proper procedure but ends up with an incorrect result, it's still valid science. Perhaps it's the exception to some theory that will lead to later breakthroughs. Simply being incorrect is not a reason to retract. Rather, a retraction is wiping the slate clean, hoping to forget that the research was ever done. The only reason to do that is if the research itself was unethical.

Suspected fraud? (1)

damn_registrars (1103043) | about a year and a half ago | (#41529121)

That is like suspected murder. It needs to be clearly proven or the accused needs to admit to it. Just because there is a whisper campaign alleging fraud from someone doesn't mean it is automatically the case.

An honest journalist would have separated "demonstrated fraud" from "suspected fraud".

Study has since been retracted (1)

Anonymous Coward | about a year and a half ago | (#41529133)

Turns out they made up all the retraction numbers

Because ordinary errors don't lead to retractions (4, Informative)

Jimmy_B (129296) | about a year and a half ago | (#41529355)

You might be tempted to think that this means ordinary errors aren't as common as we thought. Lots of papers - actually most papers, at least in medicine - are wrong for reasons like the author being confused, doing the statistics wrong, or using a type of experiment that can't support the conclusions drawn. But merely publishing a paper that's bullshit? That usually isn't enough to trigger a retraction, because retracting papers looks bad for the journals. Only an accusation of Serious Willful Misconduct can reliably force a retraction.

Re:Because ordinary errors don't lead to retractio (1)

Colonel Korn (1258968) | about a year and a half ago | (#41529613)

You might be tempted to think that this means ordinary errors aren't as common as we thought. Lots of papers - actually most papers, at least in medicine - are wrong for reasons like the author being confused, doing the statistics wrong, or using a type of experiment that can't support the conclusions drawn. But merely publishing a paper that's bullshit? That usually isn't enough to trigger a retraction, because retracting papers looks bad for the journals. Only an accusation of Serious Willful Misconduct can reliably force a retraction.

Bingo. Mod to +5.

Re:Because ordinary errors don't lead to retractio (2)

goodmanj (234846) | about a year and a half ago | (#41531555)

Parent is right. Small errors which don't affect the outcome are published as short "correction" notes. Larger, more subtle errors are corrected by the author, and/or by whoever noticed the error, writing a new paper which critiques the old one. But the original paper remains, because it's a useful part of the dialogue.

(And *that* is why you should always do a reverse bibliography search on any paper you read.)

We apologize for the misconduct. (2)

Ichijo (607641) | about a year and a half ago | (#41529415)

The papers responsible have been retracted.

We apologize again for the misconduct. The papers responsible for retracting the papers that have been retracted, have been retracted.

The researchers hired to write papers after the other papers had been retracted, wish it to be known that their papers have just been retracted.

The new papers have been completed in an entirely different style at great expense and at the last minute.

---

Mynd you, m00se bites Kan be pretty nasti ...

PDP-8? (0)

Anonymous Coward | about a year and a half ago | (#41529419)

The survey examined all 2,047 articles

Observation: INT_MAX is 2047 on a PDP-8.
Inappropriate conclusion: The people who conducted the study are in desperate need of a hardware upgrade.

academic tenure is an elitist system (0)

one_who_uses_unix (68992) | about a year and a half ago | (#41530051)

The whole idea of an academic ecosystem distinct from the reality that the rest of the world operates in is an elitist adaptation of medieval socio-political structures. Granting someone an insulated job from which they cannot be removed is ridiculous under any conditions. Whether someone publishes a peer-reviewed article on something is irrelevant to whether they know what they are talking about in the current model.

The REAL peers are the folks doing work in the profession day in and day out. As a rule most peer reviews are conducted by people with a decidedly academic focus - the experts in the field are working day jobs that don't afford them time to participate in silly self congratulatory exercises.

The only non-academic institutions that have something like tenure are US federal government jobs - yours to lose. Neither one provides us an example of healthy thinking or efficient and innovative work products.

Re:academic tenure is an elitist system (3, Insightful)

Aardpig (622459) | about a year and a half ago | (#41531097)

The REAL peers are the folks doing work in the profession day in and day out.

As an astrophysicist in a research University, I'd like to know where these REAL peers are. I thought I was the expert, but now you tell me there's someone working hard at an astrophysics day job — so hard, in fact, they're too busy to review the papers I write while quaffing champagne by the bucket-load in the penthouse suite of my ivory tower.

I'm all ears.

Re:academic tenure is an elitist system (2)

Daniel Dvorkin (106857) | about a year and a half ago | (#41531199)

The REAL peers are the folks doing work in the profession day in and day out. As a rule most peer reviews are conducted by people with a decidedly academic focus - the experts in the field are working day jobs that don't afford them time to participate in silly self congratulatory exercises.

And in most scientific fields, those folks are overwhelmingly to be found at academic institutions, and most of those who aren't in academia are in government. Corporate R&D is almost all "D" these days. There used to be a lot more research and publication, and peer review, by people outside academia--in light of your username, you might want to consider the history of Bell Labs, and how sad that history's been in recent years.

Considering the previous article... (0)

Anonymous Coward | about a year and a half ago | (#41530081)

about scientists wanting to keep their emails private. It makes one wonder what is going on.

At least we can be sure... (0)

Anonymous Coward | about a year and a half ago | (#41531231)

...that there are no fraudulent papers in Climate Science.

(because we're never given enough of the raw data to replicate the findings...)

The way it should be. (1)

goodmanj (234846) | about a year and a half ago | (#41531659)

Getting it wrong is an important part of doing science. Papers with errors should be corrected by new publications, not retracted. The incorrect paper inspired the correct one, and so is a useful part of the dialogue. Also, anyone else who has the same wrong idea can follow the paper trail, see the correction, and avoid making the same mistake again.

Classic but extreme example: the Bohr model of the atom, with the electrons orbiting the nucleus like planets around a star. It's wrong. Very wrong. But we still teach it today, because only by studying it do you realize the flaws in the classical description of subatomic particles, and the need for quantum mechanics.
