
A New Record For Scientific Retractions?

timothy posted about 2 years ago | from the my-study-on-this-record-is-forthcoming dept.


sciencehabit writes "An investigating committee in Japan has concluded that a Japanese anesthesiologist, Yoshitaka Fujii, fabricated a whopping 172 papers over the past 19 years. Among other problems, the panel, set up by the Japanese Society of Anesthesiologists, could find no records of patients and no evidence medication was ever administered. 'It is as if someone sat at a desk and wrote a novel about a research idea,' the committee wrote in a 29 June summary report."


..wrote a novel about a research idea.. (1)

rbrausse (1319883) | about 2 years ago | (#40526991)

Is this Yoshitaka Fujii a worthy novelist? If his writing style is appealing, I see a second career chance :)

Uh Oh (0)

Anonymous Coward | about 2 years ago | (#40526995)

Those wacky scientists!

almost as many as (0)

Anonymous Coward | about 2 years ago | (#40527005)

anti-vaccination advocates.

Um (1)

crash123 (2523388) | about 2 years ago | (#40527009)

Why weren't these papers peer reviewed?

Re:Um (5, Insightful)

Anonymous Coward | about 2 years ago | (#40527147)

Peer review is not designed to catch fraud, although it sometimes can. Working on the assumption that every submission may be fraudulent would cost too much, and would not even be effective against cleverer frauds. The only way to catch a clever fraud is to try to replicate the work, and this can only happen after publication, usually when another researcher tries to build on the original result. If you do this, all fraud will eventually be caught; the best you can hope for in the long run, as a scientist committing fraud, is to be thought of as critically incompetent. For this reason fraud is rare among academic scientists, but is unfortunately more common among their commercial counterparts.

Re:Um (1)

Anonymous Coward | about 2 years ago | (#40527179)

Ugh, should have read TFA: "The blog notes that three recently retracted papers only garnered six, four, and three citations. Despite the low impact of the work, ..." So he was not caught because, despite the quantity, few people actually used his work.

Re:Um (2, Insightful)

J'raxis (248192) | about 2 years ago | (#40527417)

Unfortunately, it is upon publication that a study is picked up by the media and exposed to the general public. By the time other scientists try to replicate the experiments and find they're bullshit, it's "too late" in a sense.

Some sort of independent verification needs to be worked into the process before a new study is put out there for general consumption. That means either verification before "publication," or the media learning (hah) not to cite studies that haven't been independently verified, no matter how sensationalistically important they sound.

Re:Um (3, Insightful)

ceoyoyo (59147) | about 2 years ago | (#40527707)

"Some sort of independent verification needs to be worked into the process before a new study is put out there for general consumption."

The media, and the public, need to learn. Publishing and dissemination are a critical part of science and shouldn't be compromised to make some reporters' jobs easier. Fraud isn't even the big problem with jumping to conclusions based on unverified studies - FAR more studies will be incorrect simply due to honest false positives than to fraud.
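
To put rough numbers on that (a toy back-of-the-envelope of my own; the alpha, power, and base-rate values below are assumptions, not anything from TFA):

<ecode>
# If only a modest fraction of tested hypotheses are actually true,
# honest false positives dominate the pool of "positive" findings
# long before fraud enters the picture.
alpha = 0.05      # conventional significance threshold
power = 0.80      # chance a real effect is detected
true_rate = 0.10  # assumed fraction of tested hypotheses that are true

true_pos = true_rate * power           # 0.080
false_pos = (1 - true_rate) * alpha    # 0.045

ppv = true_pos / (true_pos + false_pos)
print("fraction of positive results that are real: %.2f" % ppv)
# -> 0.64: roughly a third of positive results here are wrong,
#    with every researcher being perfectly honest.
</ecode>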

Re:Um (1)

amRadioHed (463061) | about 2 years ago | (#40527909)

Obviously none of these papers sounded sensationalistically important, judging from the number of times they were cited. If the papers had attracted a lot of attention, no doubt they would have been discredited quickly by people trying to reproduce the results.

Fixing the wrong problem (3, Insightful)

DragonWriter (970822) | about 2 years ago | (#40528863)

Unfortunately, it is upon publication that a study is picked up by the media and exposed to the general public. By the time other scientists try to replicate the experiments and find they're bullshit, it's "too late" in a sense.

Some sort of independent verification needs to be worked into the process before a new study is put out there for general consumption.

The whole point of the scientific method is that putting work out for general consumption is the best avenue for independent verification (to adapt a phrase familiar to this audience, one might think of it as "with many eyes, all non-reproducible results are shallow".)

The fact that reporters covering science in the popular media lack a basic understanding of the scientific method is a reason to change something, but the thing that needs change isn't scientific publishing.

Re:Fixing the wrong problem (1)

wrook (134116) | about 2 years ago | (#40539429)

They also have no incentive to begin to understand the scientific method. Reports of amazing discoveries followed by scandals and retractions lead to more sales than waiting to see whether anyone can duplicate the results.

Having said that, there are occasional media figures who have a very solid grasp of science. It's unfair to paint everyone with the same brush. But the general state of affairs is somewhat grim and I don't see it getting much better.

Re:Um (0)

Anonymous Coward | about 2 years ago | (#40527997)

...the best you can hope for in the long run, as a scientist committing fraud, is to be thought of as critically incompetent. For this reason fraud is rare among academic scientists, but is unfortunately more common among their commercial counterparts.

Actually, the best you can hope for is that the fraud is not discovered for decades, allowing you to build your career on it. Eventually you will be found out, but if you've already built a career and benefited financially from the fraud, it matters little. Beyond personal profit, there are always the true believers who want to advance a specific view or agenda regardless of what the current science says. And by the way, these motives apply to both corporate and academic fraud, and there is plenty of both.

Re:Um (3, Interesting)

slew (2918) | about 2 years ago | (#40528101)

I'm not so sure that fraud is more or less prevalent among academic scientists than commercial ones. As you alluded to, peer review is not designed to catch fraud. A researcher publishing in an obscure backwater field that no one would likely try to replicate could go on for a long while without being caught, or even being considered incompetent. The same is true for a commercial scientist (witness the cosmetics and nutritional supplement industries).

On the other hand, "hot" fields have hordes of researchers, and if you work in an uninteresting niche of such a field you might also escape detection for a while (or not: as Bayer and Pfizer have recently let slip, most of the published research they internally attempted to replicate while looking for new drug avenues turned out to be non-reproducible).

People are people, whether in academia or industry...

Re:Um (2)

joe_frisch (1366229) | about 2 years ago | (#40533393)

I expect fraud will be most common where there is the most motivation and the most opportunity. Motivation can be a direct profit motive - so drug tests need special scrutiny. Motivation can also be for career / academic success - so universities might want to carefully consider policies that promote scientists based heavily on their number of publications.

Fields where small groups of researchers work on expensive-to-reproduce projects are more suspect. Medical tests requiring large numbers of subjects are an example. Research that requires unique equipment might also be suspect (say, experiments at the LHC), but fortunately most of those collaborations involve large groups of scientists, so fraud would be much more difficult to hide.

A particular problem with reports of this type of fraud is that it may encourage other researchers to do the same. If they see someone who had a successful career for 20 years, it might be tempting. I'm NOT suggesting that we cover up fraud, just that reporting may have an unfortunate side effect.

Re:Um (5, Interesting)

rgbatduke (1231380) | about 2 years ago | (#40528239)

I disagree about the rarity, on the basis of empirical evidence -- for example, the recent paper in Science (IIRC; sorry, I don't have the reference handy) in which a cancer researcher failed to replicate 46 out of 53 published papers -- all of them peer reviewed -- before embarking on new research in the field. Similar meta-studies have turned up astounding rates of non-reproducible results in other fields (some more than others; sociology and IIRC social psychology top the non-medical list).

One problem is that we have constructed a system that rewards the publication of positive results and punishes negative results, published or unpublished. Punishes as in makes or breaks the entire career of a young researcher, if the negative result arrives when they are up for tenure. Rewards as in ensures research funding and professional advancement as long as positive results keep flowing out.

Another fundamental problem that peer review has a terrible time with is confirmation bias. Science in general has a serious problem with confirmation bias. If one embarks on a study seeking evidence for some causal linkage associated with some phenomenon, in a general population where the phenomenon occurs, one can always find exemplars that support the hypothesis. Lacking actual work to replicate the results using sound methodology (e.g. double-blinded and/or conducted with competent statistical analysis, something still as rare as hen's teeth in science in general, because doing statistics correctly in a complex problem is difficult, not easy, and certainly not "easy" as in covered by the one or two undergrad stats courses the researcher has probably taken), confirmation bias can not only worm its way into the literature, it can come to dominate entire fields, since a significant fraction of the scientists who review both publications and grants are "descended" from one or two original researchers and their papers. It can take decades for this to be discovered and to work itself out in the wash.

Peer review works better in some disciplines than others. In math it works well, because there is literally nothing up a publisher's sleeve -- fraudulent publication is essentially impossible, and even mistaken publication is relatively rare, conditional on the math being so difficult that even the reviewers have a hard time following it. Physics and the very hard sciences are also fortunate in that it works decently (although less perfectly), at least where there is competition and a proper critical/skeptical eye applied to results new and old. There, a mix of laboratory replication and strong requirements of consistency usually keeps one out of the worst trouble.

A simple rule of thumb: the more a result relies on population studies, especially ones conducted with any kind of selection process (or worse, selection plus actual modification of the data according to some heuristic or correction process), where the study itself is conducted from the beginning to confirm some given hypothesis, the more likely it is that the published result is bullshit that will eventually, possibly decades later, turn out to be completely wrong. If there are enough places for a thumb to be subtly placed on the scales, and the owner of the thumb has any sort of vested or open interest in the outcome, it is even odds or better that a teensy bit of pressure will be applied, quite possibly without the researcher even intending it. Confirmation bias is not necessarily "fraud" -- it is just bad science, science poorly done.
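
To see how little thumb-pressure it takes, here's a toy simulation of that selection process (my own sketch; the subgroup count and sample sizes are invented):

<ecode>
# Pure noise plus a selection step: test 20 subgroups in which NO real
# effect exists, then report the most "significant" one.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
p_values = []
for subgroup in range(20):
    exposed = rng.normal(0.0, 1.0, 50)   # same distribution as control
    control = rng.normal(0.0, 1.0, 50)
    p_values.append(stats.ttest_ind(exposed, control).pvalue)

print("smallest of 20 null p-values: %.3f" % min(p_values))
# P(at least one p < 0.05 in 20 tries) = 1 - 0.95**20, about 0.64,
# so a study free to choose its exemplars will usually "confirm"
# its hypothesis from noise alone.
</ecode>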

There is a move afoot to do something about this. We know that it happens. We know why it happens. We know a number of things we can do to reduce the probability of it happening -- for example, requiring the open publication of all data and methods contemporaneously with any paper produced from them, permitting absolutely anybody to look at them and see whether they make sense or contain egregious errors, and recognizing the value of negative results when considering tenure, grants, and advancement.

In the meantime, does resveratrol really reduce the risk of cardiovascular disease? Does a diet high in oat bran? Do high voltage power lines cause cancer? Does your cell phone? Do carbon dioxide levels in the atmosphere lag or lead global temperatures throughout the cycle of interglacial/glacial periods in the current ice age? Are mercury levels in coal high enough to produce a detectable difference in environmental mercury downwind of coal-burning power plants? Are those differences large enough and chemically suited to be a detectable health risk (above the noise associated with natural and other anthropogenic sources)?

Peer review provided little protection from pernicious confirmation bias in all of these cases, because they all rely or relied on a data selection process, and (often) were published with marginal statistical significance, or with statistical "significance" that turned out later to be a side effect of poorly done statistical analysis. And these aren't even the tip of this particular iceberg -- they're just famous enough that one might know about them.

Note well, I'm not asserting that I know the answers! The point is that answers have been asserted that depend(ed) on just where and how you look and that might or might not hold up (or in some cases, did NOT hold up) when examined more carefully by someone who set out to replicate them.

rgb

Re:Um (4, Interesting)

Blahah (1444607) | about 2 years ago | (#40529017)

Great summary here. As a young scientist I see this as a serious problem for the credibility of science in general. The gross fraud cases seem to be mostly limited to a few fields -- anaesthesiology in particular has had a few major retraction bouts recently.

As you point out, medicine and other fields involving population studies are much more prone to confirmation bias. In a similar vein, any field where the cutting edge involves extremely expensive experiments is open to direct abuse, or to mistakes escaping scrutiny, because replicating experiments is prohibitively expensive. Open data is one part of the solution, but to understand the data you need a good, precise methodology published along with it, and often methods sections lack detail to the point where they could never be accurately replicated. Openness with data and openness with methods need to go hand in hand.

The major lesson I've taken from all this is not to allow myself space for confirmation bias. In my field that means always performing the complete set of experiments to confirm the causative link you are exploring, not just getting a fat load of correlations. That needs to go hand in hand with a thorough understanding of the relevant statistics, not just blindly working with standard confidence intervals.

Re:Um (2)

rgbatduke (1231380) | about 2 years ago | (#40530733)

Precisely. Indeed, a major part of the solution is to make scientists their own greatest skeptics: to mistrust our own pet ideas, and to hesitate, to a fault, to claim "proof" even when evidence or model computations seem to support it.

The latter are an entire category in and of themselves. I "do" predictive modelling and moderately advanced statistics on a professional basis, and even have a patent pending in the field. I've done Monte Carlo computations in physics for well over a decade, and know a lot about randomness and hypothesis testing compared to your average scientist in the street, so to speak. I am all too aware that model computations are among the least trustworthy kinds of evidence, and usually have far less predictive power than the modeler claims. The problem is subtle, and related to complexity and nonlinearity. A highly multivariate, semi-empirical, nonlinear theory is often implemented as a model by "fitting" some or all of its parameters to some (sub)set of data. In modelspeak this is often equivalent to using hill-climbing (gradient search) to find an optimum fit to the data relative to some selected parametric starting point (sometimes referred to as making a "Bayesian" choice of the parameters, based on some set of data used as "priors").

There are many problems with this. One is omitted variables. In many of the problems where this is done, the choice of parameters (dimensions in the parametric space) is highly model dependent. Heuristics are often used to limit the size of the parametric space, simply because doing anything in a really high-dimensional space is a lot of work and introduces a substantially higher (but honest) estimate of the errors in the final result. "Heuristics," of course, is code for "I don't think these variables will significantly contribute" -- an open opportunity to omit variables that you don't want to be significant because they confound your hypothesis.

A second problem is that many models are de facto parametric nonlinear function approximators. This means that -- especially if the data being fit is "simple," e.g. monotonic or otherwise simply nonlinear over the range of the fit -- it is often perfectly easy to fit the data with a set of parameters, have the fit be "optimal," have the fit produce a perfectly reasonable chisq, and have the parametric fit be perfectly meaningless. This is all elementary modeling theory 101, but somehow "hiding" the basis by turning it into the solution of a set of coupled ordinary differential equations with nearly sinusoidal, polynomial, or exponential behavior makes the problem disappear in the minds of the modellers.

A third problem is complexity: in many cases (especially for highly multivariate nonlinear models) there may be many local optima, and hill-climbing from a selected starting point can easily be yet another form of inadvertent confirmation bias. A global search might find a better optimum, or might reveal that several constellations of parameters (especially when omitted variables are included) can fit the empirical data within its precision, (properly) reducing confidence in the final prediction.

A fourth is that even a model built with the variables that were important in the past (where the data being fit resides), that is robust in any parametric/Bayesian search, and that uses any of several methods to "validate" itself (on past data) can easily fail in the future, because the model simply does not extrapolate. The real space of variables and data is much larger, the model is always built on some optimistic projection onto a manageable subspace, and ignored stuff eventually becomes important and causes a complete deviation from the model. Chaotic models, stiff differential models -- there is no lack of examples, but somehow this sort of thing doesn't get factored into claims of uncertainty or possible error, as long as the model works well enough and confirms a favored hypothesis.
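
The second and third problems are easy to demonstrate in a few lines. A minimal sketch (my own toy example with an invented model and fake data, not taken from any real study):

<ecode>
# Fit a 3-parameter exponential to data that are secretly just a noisy
# straight line over a short range. Different starting points yield
# different "optimal" parameters that fit the data equally well.
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    return a * np.exp(b * x) + c

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 30)
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.02, x.size)  # the "truth" is linear

for p0 in [(0.1, 1.0, 2.0), (5.0, 0.05, -3.0)]:    # two starting guesses
    p, _ = curve_fit(model, x, y, p0=p0, maxfev=20000)
    rms = np.sqrt(np.mean((model(x, *p) - y) ** 2))
    print("start %s -> a=%.2f b=%.2f c=%.2f (rms %.3f)" % (p0, *p, rms))
# Both fits match the data to within the noise, but their parameters,
# and their predictions beyond x = 1, disagree wildly.
</ecode>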

It is difficult to be as brutally honest with ourselves as required by the rigors of good science. I love to link Feynman's famous "Cargo Cult" address to a graduating class at Cal Tech:

http://www.lhup.edu/~DSIMANEK/cargocul.htm [lhup.edu]

into any discussion of good scientific discipline, as it is difficult to find a better set of advice for young scientists starting out, or a better statement of the rigorous ethical standard we should all strive for when we conduct research.

I'm a theorist, and hence a sucker for a good story (especially one that I myself have written). I've been convinced by some of my own stories -- they were just soooo seductive, so close to the numbers produced by good models -- and yet been proven wrong by my own, sufficiently carefully conducted work. We are all suckers for a seductive narrative, but this in and of itself is a kind of fallacy. If we were to be really, brutally honest, as honest as Feynman's Ghost would have us be, we would immediately discount our degree of belief in seductive narratives in general in any "complex" theory, on the simple grounds that we lack the information to exclude confounding explanations -- including even the simplest of information, e.g. where to look.

Sadly, this sort of humility is all too lacking in science these days. And paradoxically, it is most lacking in many fields where the most egregious of claims are made about the most complex and multivariate of phenomena. Nassim Nicholas Taleb comments on this in his marvelous book, The Black Swan -- the more certain an "expert" is of the truth of their beliefs in certain highly multivariate fields, the more certain it is that they are wrong, that they have homed in on a small heuristically selected subset of what might be correct and constructed an extremely fragile model that works to explain the data so far at the expense of badly failing to predict future data.

rgb

Re:Um (1)

Belial6 (794905) | about 2 years ago | (#40532381)

Nice link. It is kind of funny that while dismissing witch doctors as obviously fake, he discusses parapsychology as a legitimate field of study. I don't mean that in a bad way: the fact that it was considered a legitimate field of study at the time drives his point home that much better.

Re:Um (1)

rgbatduke (1231380) | about 2 years ago | (#40535979)

It is a legitimate field of study. One where negative results should have been (and eventually were) published. And also one where confirmation bias produced "surprising" results indeed -- until you looked for the man behind the curtain.

Re:Um (1)

Belial6 (794905) | about 2 years ago | (#40536509)

Yes, in the same way that studying the magic of witch doctors is also a legitimate field of study. They are equivalent, but at the time of the writing he clearly did not see this.

Re:Um (2)

Kergan (780543) | about 2 years ago | (#40531917)

A simple rule of thumb: the more a result relies on population studies, especially ones conducted with any kind of selection process (or worse, selection plus actual modification of the data according to some heuristic or correction process), where the study itself is conducted from the beginning to confirm some given hypothesis, the more likely it is that the published result is bullshit that will eventually, possibly decades later, turn out to be completely wrong. If there are enough places for a thumb to be subtly placed on the scales, and the owner of the thumb has any sort of vested or open interest in the outcome, it is even odds or better that a teensy bit of pressure will be applied, quite possibly without the researcher even intending it. Confirmation bias is not necessarily "fraud" -- it is just bad science, science poorly done.

The more interesting aspect of this is how economic facts [debunkingeconomics.com] are so rooted, mainstream and rehashed, using this very same process, that they become political ideology... Sad world...

Re:Um (1)

rgbatduke (1231380) | about 2 years ago | (#40535965)

Economics is one of the worst offenders, agreed, and the primary target of The Black Swan. But a number of other fields, including some supposedly in mainstream science, are almost as bad -- and not necessarily intentionally. The saddest thing is that the researchers themselves are often totally oblivious to just how biased and/or weakly founded their own results are, because they always get them "using statistics" in a traditional way, not realizing that they are using Statistics 101 (freshman intro stats) where what is required is Statistics 404 (a graduate course, not for the faint of heart, requiring competence in calculus, ODEs, combinatorics, and a certain amount of common sense and experience).

Re:Um (2)

ThreeKelvin (2024342) | about 2 years ago | (#40527467)

It seems they were, but catching fabricated results like these isn't exactly easy and it won't happen in the review process.

In order to catch fabricated results you'd either have to repeat the experiment, which nobody wanted to do since the research was low impact, or catch discrepancies in the data, which was how he was caught out.

study shows 99% people believe the word "science" (2, Insightful)

starworks5 (139327) | about 2 years ago | (#40527017)

news at 11pm

Re:study shows 99% people believe the word "scienc (2)

devitto (230479) | about 2 years ago | (#40527171)

Except Americans. Unless it matches their beliefs in religion, politics, nature and economics.

Re:study shows 99% people believe the word "scienc (0)

Anonymous Coward | about 2 years ago | (#40527391)

No worry. This Japanese anesthesiologist is guaranteed to get a job offer from the US Department of Homeland Security in their "terrorism threat assessment" division. I understand that they are always on the lookout for "creative writers" to help keep the fear alive. Job security, year-over-year bigger budgets, and the full implementation of the national security surveillance police state are the real DHS agendas, not the actual fighting of terrorism.

Failing that, for instance by not passing a security clearance vetting, he can always get a job as a groper for the TSA, presuming he has a demonstrated proclivity for such behavior from riding the Japanese rail system.

Re:study shows 99% people believe the word "scienc (0)

Anonymous Coward | about 2 years ago | (#40529185)

You mean right wing Americans.

Re:study shows 99% people believe the word "scienc (0)

Anonymous Coward | about 2 years ago | (#40530779)

No, pretty much most Americans. It's just that everything the right wing does is more noticeable because they're so loud about it.

Re:study shows 99% people believe the word "scienc (1)

Bob-taro (996889) | about 2 years ago | (#40533347)

Except Americans. Unless it matches their beliefs in religion, politics, nature and economics.

Given the context of this discussion, is it necessarily a bad thing not to automatically accept as fact anything called "science"?

Re:study shows 99% people believe the word "scienc (1)

Soulshift (1044432) | about 2 years ago | (#40527493)

Parent is modded +5 Insightful?! Since when did Slashdot moderators develop an anti-science bent?

Re:study shows 99% people believe the word "scienc (0)

Anonymous Coward | about 2 years ago | (#40527725)

Poorly worded, but too many people (including scientists) take things at face value without checking things like citations.

Re:study shows 99% people believe the word "scienc (0)

Anonymous Coward | about 2 years ago | (#40527971)

Since when did Slashdot moderators develop an anti-science bent?

Since Big Oil got involved. Who do you think is footing the bill for Slashdot TV? You think Timmy's funding that out of his own pocket? Nope, it's Big Oil money.

Re:study shows 99% people believe the word "scienc (0)

Anonymous Coward | about 2 years ago | (#40529863)

Since when did Slashdot moderators develop an anti-science bent?

Since Big Oil got involved. Who do you think is footing the bill for Slashdot TV? You think Timmy's funding that out of his own pocket? Nope, it's Big Oil money.

TIMMAY!!!!!!!!!!!

Re:study shows 99% people believe the word "scienc (1)

ceoyoyo (59147) | about 2 years ago | (#40528165)

You must be new here.

The average Slashdotter, or at least the ones who moderate and post, seems to have the "I know all about science/statistics/whatever" and "stupid scientists don't know as much as I do" attitudes. Although you can really replace "science" with just about anything and that statement would probably be true.

Re:study shows 99% people believe the word "scienc (1)

digitig (1056110) | about 2 years ago | (#40528873)

That wasn't "an anti-science bent". It was against using the word "science" as a magical invocation over anything at all, scientific or not, to make people suspend their critical judgement. Using the word "science" as an invocation in that way is all too effective in some circles, and I for one am against it.

Re:study shows 99% people believe the word "scienc (2, Insightful)

bacon.frankfurter (2584789) | about 2 years ago | (#40531653)

Technically, atheism is a "belief," since the absence of certain supernatural forces, and of parallel universes purported to be accessible upon death, isn't completely proven.

When people are prevented from carrying out experiments themselves (nuclear tests are banned by international treaty), cannot (because they lack the means, or large equipment like the LHC), or simply do not (out of sheer laziness, or dropping out of school), then they must take the word of those who actually DO carry out scientific experiments.

Scientists, then, take on the role of holy men, do they not? Isn't this where the fundamental conflict between science and religion emerges? Who are our social leaders, our bastions of sage advice? As a social problem, it's essentially the same conflict as capitalism vs. communism: who gets to be "The Leaders"? The government and/or monarchs, or wealthy corporate executives who are "free"? With science vs. religion, it instead becomes a choice between The Scientists and The Elder Shamans.

Re:study shows 99% people believe the word "scienc (1)

Belial6 (794905) | about 2 years ago | (#40533713)

Holy men don't claim to have performed experiments on your behalf. In religion, no one ever claims to have actually performed an experiment.

Re:study shows 99% people believe the word "scienc (1)

bacon.frankfurter (2584789) | about 2 years ago | (#40559743)

Right... right... but what about when Scientists indeed CLAIM to have performed an experiment, even though they never did? Then, instead of demanding belief in fiction with no supporting evidence, we have people demanding acknowledgement of fabricated evidence in support of "a more reasonable" fiction.

This article points out how a lack of integrity within the scientific community threatens to sabotage the very trust that the public and the 24-hour news cycle would like to place in Science (even though Science essentially depends on skepticism...).

I'm not saying Science is as imaginary as mythology. What I am saying is that much of the ordinary world out there extends to Science the same implicit trust it might place in Religion. Just as with plagiarism, falsified experiments damage that certain sort of trust everyday people bestow upon Science.

Re:study shows 99% people believe the word "scienc (1)

Soulshift (1044432) | about 2 years ago | (#40536515)

Sure, everything is a "belief" just like everything is a "theory," but that's just playing with semantics. I agree that you CAN'T verify ANY empirical fact beyond a shadow of a doubt, but that doesn't mean you imagine everything will lose its mass tomorrow and you'll just float off into space, right? (Among a multitude of other possible scenarios)

You're not making the distinction between reasonable belief and unfounded or poorly founded belief. Reasonable belief follows a chain of reasoning that is at least based on empirical observation and extends some level of trust to other people to make reports about empirical tests to us. I.e. we observe that people with such-and-such traits are generally not liars, and people who publish fake papers are found out at a rate of X%, therefore we have a certain level of confidence that what we read is actually what was observed by these people. A chain of trust, which is != to a chain of blind trust.

tl;dr It is reasonable to believe that on average, scientists, unlike shamans, perform empirical tests before making claims.

Re:study shows 99% people believe the word "scienc (1)

bacon.frankfurter (2584789) | about 2 years ago | (#40559891)

I'm not arguing in favor of equating Religion and Science. But there's an interesting side effect that retractions and fraud like this can have. It's not so much that "Science" is ruined by these incidents; rather, they sabotage the innate credibility of Scientists, and in the minds of some might reduce them to the same level as holy men by introducing that seed of doubt. When Scientists play the "let's not and say we did" game with experimentation, and get caught, the tangible evidence and the facts (the Science) remain the same as the universe ever has, but in the here and now, among living human beings, the lines separating those who support Science vs. Religion, and by how much, might be re-drawn slightly.

Not the First Time (3, Informative)

Quantum_Infinity (2038086) | about 2 years ago | (#40527037)

During World War II, Americans were very keen and excited to get their hands on scientific data from the Japanese after nuking them, especially all the data from human experiments that were not feasible in the US. When they got the data, they realized most of it was nonsense: the experiments had been run on humans more or less at random, without any clear hypothesis or theory, and most of the data did not make much sense.

Re:Not the First Time (4, Funny)

gl4ss (559668) | about 2 years ago | (#40527127)

.. and then they gave the data to the CIA to use as the basis for strategy.

Re:Not the First Time (0, Flamebait)

Anonymous Coward | about 2 years ago | (#40527281)

Indeed. America has done medical experiments on soldiers since World War 2... and don't forget the testing of Agent Orange on people living in ghettos, and the numerous medical experiments on black people in America...

frigging Nazi USA.

Re:Not the First Time (0, Troll)

gilboad (986599) | about 2 years ago | (#40528087)

... Because the Americans took millions of people and shoved them into 3 sq km ghettos to die, in the millions, from hunger and disease.
Those lucky enough to survive the ghettos (and God knows how many forms of random murder) were then taken to huge camps in the middle of NY state, where they were gassed.

Do yourself a favor and read a book (or two) about the Holocaust before you compare *ANYONE* to Nazi Germany. (And no, Mein Kampf is *not* what I meant, you illiterate pi^H^H^H^H^H^H.)

- Gilboa
P.S. I'm not even American. Not even close.

Re:Not the First Time (0)

Anonymous Coward | about 2 years ago | (#40528593)

hey now, I live in the middle of NY and I have yet to see any of these gassings, just don't look over there by the garden behind the sign that says bee farm.....

Re:Not the First Time (1)

gilboad (986599) | about 2 years ago | (#40538511)

Re-read my comment *SLOWLY*.

No idea how this:
"Do yourself a favor and read a book (or two) about the Holocaust before you compare *ANYONE* to Nazi Germany. (And no, Mein Kampf is *not* what I meant, you illiterate pi^H^H^H^H^H^H.)" could be understood as me comparing the U.S. to Nazi Germany.
(Quite the opposite; it was a sarcastic remark about the OP's "America is just as bad as the Nazis" post.)

English may not be my first language, but come on!

- Gilboa

Re:Not the First Time (0)

Anonymous Coward | about 2 years ago | (#40533037)

you illiterate pi^H^H^H^H^H^H

Hm... all I see is "you illiter".

Try setting your terminal to VT-100 and resend, please.

Re:Not the First Time (1)

gilboad (986599) | about 2 years ago | (#40538525)

*Come on*, people. English may not be my first language, but anyone with basic fourth-grade reading comprehension skills should have understood that "Do yourself a favor and read a book (or two) about the Holocaust before you compare *ANYONE* to Nazi Germany" means I was being sarcastic and that I **do not** believe the U.S. can be compared to Nazi Germany. This ain't rocket science.

Guess I should have marked the post with "Sarcasm" in huge, bold, capital letters...?

- Gilboa

Re:Not the First Time (-1)

Anonymous Coward | about 2 years ago | (#40527195)

Right, global warming is another fabrication which oddly continues today for and by the dumb.

Independently repeated trials (4, Insightful)

Bronster (13157) | about 2 years ago | (#40527055)

And this, ladies and gentlemen, is why real science is done not only by performing the experiment and recording the results, but by writing up your method with sufficient clarity that your results can be replicated by independent researchers.

Once that has been done enough times, if your method itself is sound, then the results are valid.

Re:Independently repeated trials (3, Insightful)

digitig (1056110) | about 2 years ago | (#40528953)

And this, ladies and gentlemen, is why real science is done not only by performing the experiment and recording the results, but by writing up your method with sufficient clarity that your results can be replicated by independent researchers.

Once that has been done enough times, if your method itself is sound, then the results are valid.

Nope, not good enough. It's not enough to write it up in such a way that it can be replicated by independent researchers. You should only start to trust the results when it has been replicated by independent researchers. Unfortunately that hardly ever happens, because it's all but impossible to get funding for replication of work that's already been done.

Re:Independently repeated trials (0)

Anonymous Coward | about 2 years ago | (#40537675)

Nope, not good enough. It's not enough to write it up in such a way that it can be replicated by independent researchers. You should only start to trust the results when it has been replicated by independent researchers. Unfortunately that hardly ever happens, because it's all but impossible to get funding for replication of work that's already been done.

While you're right that researchers basically never just attempt a full replication of every condition in a published study, partial replications of the primary claim happen frequently for influential claims. If you want to test a variant of somebody's claim ("X said Y, but we think Y holds only for materials of such-and-such type" or "X said Y is linearly related to Z, but we think they were only looking at the approximately linear portion of a logarithmic relation"), the first step is to make sure you can reproduce their basic result in your lab on your equipment. Then you test the variant of the claim with some distinguishing manipulation, and include the replication and your manipulation in your write-up so the original claim can be compared with your variant. As more people test different variants of the hypothesis, different conditions of the original study will be replicated as a baseline.

There's always a trade-off between exploration and replication, but as a claim becomes more influential, it becomes replicated more often. That is a good thing.

Re:Independently repeated trials (2)

blind biker (1066130) | about 2 years ago | (#40531655)

And this, ladies and gentlemen, is why real science is done not only by performing the experiment and recording the results, but by writing up your method with sufficient clarity that your results can be replicated by independent researchers.

I had a referee reject my paper because, I shit you not, my description of the experiment and apparatus was too detailed. But I had not engaged in any pedantry: I described the bare minimum necessary to reproduce the results, and didn't use or describe any exotic chemicals or mixtures, nothing that a person in the same research community wouldn't have found normal and necessary to repeat my experiment.

I thought of writing a strongly-worded e-mail to the editor, but my adviser... advised against it.

Not Copying (0)

Anonymous Coward | about 2 years ago | (#40527085)

Well, in this case at least, one can't claim that the Japanese are only good at copying :)

but but but he has degrees (0, Troll)

alen (225700) | about 2 years ago | (#40527105)

he has 50 years of education, so anything he writes is fact

20 years from now people will be saying the same thing about this supposed global warming. In the northeast it has actually been cooler than it was 30 years ago when I was a kid, and almost none of the ridiculous theories about super hurricanes destroying NYC by 2010 have come true.

Re:but but but he has degrees (3, Insightful)

Colonel Korn (1258968) | about 2 years ago | (#40527181)

he has 50 years of education, so anything he writes is fact

20 years from now people will be saying the same thing about this supposed global warming. In the northeast it has actually been cooler than it was 30 years ago when I was a kid, and almost none of the ridiculous theories about super hurricanes destroying NYC by 2010 have come true.

To summarize: straw man, nonsense, nonsensical anecdote that doesn't matter even if true, straw man.

Re:but but but he has degrees (0)

Anonymous Coward | about 2 years ago | (#40527959)

To summarize: wooosh!

Re:but but but he has degrees (0)

Anonymous Coward | about 2 years ago | (#40532013)

With the GP downmodded, you could have at least put in a *Spoiler Alert* warning. Now I'm not even going to bother reading it.

Re:but but but he has degrees (0)

Anonymous Coward | about 2 years ago | (#40527377)

Slashdot is about the last place I would expect to see a breathless anti-intellectual rant like this. But, by all means, carry on -- you've made it clear that evidence or reasoned arguments will do little to change your mind.

Hmm who is responsible for review? (4, Interesting)

OzPeter (195038) | about 2 years ago | (#40527117)

From TFA

German anesthesiologist Joachim Boldt is believed to hold the dubious distinction of having the most retractions—about 90. Boldt's scientific record also came under fire several years ago by some of the same journal editors questioning Fujii's work.

Is this coincidence or a pattern? I have no idea how journal publishing is supposed to work, but being the "victim" of the two most prolific forgers leaves me a little suspicious of the quality of the publishing in general.

Re:Hmm who is responsible for review? (3, Interesting)

MadKeithV (102058) | about 2 years ago | (#40527627)

Is this coincidence or a pattern? I have no idea how journal publishing is supposed to work, but being the "victim" of the two most prolific forgers leaves me a little suspicious of the quality of the publishing in general.

This could easily be a case of "fool me once, shame on you; fool me twice, shame on me". These editors seem to be taking themselves seriously (Slashdot editors, take note :) ). After being caught out by a cheater once, they are probably now going over their past record with a fine-toothed comb looking for more fakers.
Looks like their new review process is working better, as they've found another one.
Maybe other journals should now be giving their suspiciously prolific submitters a fresh look as well.

Re:Hmm who is responsible for review? (1)

snoop.daub (1093313) | about 2 years ago | (#40533467)

From TFA

German anesthesiologist Joachim Boldt is believed to hold the dubious distinction of having the most retractions—about 90. Boldt's scientific record also came under fire several years ago by some of the same journal editors questioning Fujii's work.

Is this coincidence or a pattern? I have no idea how journal publishing is supposed to work, but being the "victim" of the two most prolific forgers leaves me a little suspicious of the quality of the publishing in general.

Also, what is it about anesthesiology and its practitioners that makes them succumb to the lure of academic forgery? Something about people who enjoy putting other people at the edge of death being power-mad? Or it could be that these journal editors just decided to crack down, and if a bunch of editors of, say, cardiology journals did the same, they'd find just as much fakery in their field.

Re:Hmm who is responsible for review? (1)

godel_56 (1287256) | about 2 years ago | (#40535183)

Is this coincidence or a pattern? I have no idea how journal publishing is supposed to work, but being the "victim" of the two most prolific forgers leaves me a little suspicious of the quality of the publishing in general.

Also, what is it about anesthesiology and its practitioners that makes them succumb to the lure of academic forgery? Something about people who enjoy putting other people at the edge of death being power-mad?

Maybe some of their gases are leaking?

Does anyone actually read this stuff? (3, Interesting)

AB3A (192265) | about 2 years ago | (#40527177)

So this guy was writing, what, approximately nine or ten papers a year on average? Was anyone paying attention? Didn't anyone notice something strange about his "discoveries?"

What does that say about the field of academic medical research?

Re:Does anyone actually read this stuff? (1)

ceoyoyo (59147) | about 2 years ago | (#40527747)

Nine or ten papers a year isn't terribly unusual for someone with a decent sized lab.

About 30% of papers reproducible ... (4, Interesting)

Cassini2 (956052) | about 2 years ago | (#40528201)

A professor at my local university periodically gets undergraduate and starting graduate students to try reproducing the work of interesting research papers.

This engineering professor figures about one-third of the papers can be reproduced well enough to demonstrate the effect in question.

At most schools, graduate students are required to publish a paper on a "new" idea in an academic journal in order to receive their degree. As such, journals must exist to collect all the ideas students generate, and this is the driver behind the modern academic journal system. Huge pressure is put upon the students to describe "new" ideas, and so each paper must sell itself as being "new".

Complicating this effort is the reality that most students are working on student projects. These projects don't have the resources (time and money) to be developed into fully effective and reproducible ideas, so their results are fundamentally suspect. Also, the students working on a project may not fully understand all the relevant effects on their research (because they are students); in particular, many students do not understand statistics. As a result, students deliberately or inadvertently conduct biased experiments that show the desired effect, because the academic requirement is a "new" idea, not a "new and reproducible" one.

The result is a collection of papers that all describe themselves as having "new" and "brilliant" ideas on topics that cannot be easily reproduced. When the ideas are reproduced, practicing engineers quickly discover they are reproducing a marginal student project. It is actually really tough to find reproducible, inventive and commercializable research ideas in academic journals, because of all the noise.

Methinks the peer-review process needs reviewing (0)

crazyjj (2598719) | about 2 years ago | (#40527241)

So much for frauds being caught by peer-review, huh?

Re:Methinks the peer-review process needs reviewin (2)

Slippery_Hank (2035136) | about 2 years ago | (#40527595)

So much for frauds being caught by peer-review, huh?

That's an odd comment for an article about the peer-review system catching a fraudster. The system will find it eventually, though the time scale can be quite large, especially if you don't publish in a 'hot' field. You can't expect the peer review system to catch every bit of fraud as it comes in; fraud doesn't appear like a glowing fireball in the sky. This was likely a small amount of fabricated data about medical procedures that never happened. Moreover, the author is probably quite bright (just lazy) and had a good idea of what would be expected in those experiments, so it was easy to make up reasonable data.

Re:Methinks the peer-review process needs reviewin (2)

fatphil (181876) | about 2 years ago | (#40528413)

Faked studies are only detected if someone attempts to reproduce them, and people will only try to reproduce them if journals adopt a policy of publishing confirmations and refutations of prior studies. On the whole, they don't.

I'd like to know how many of the studies could have been detected as fake through thorough enough statistical analysis of the results -- humans are notoriously bad at faking data, even when they're trying their hardest to make it believable (they tend to make it too believable).
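
For what it's worth, one classic screen of that kind is a terminal-digit frequency test. A quick sketch (my own illustration with invented numbers; not the actual analysis applied to Fujii's papers):

<ecode>
# Honest measurements tend to have roughly uniform terminal digits;
# invented numbers over-use "comfortable" ones. Chi-square it.
from collections import Counter
from scipy.stats import chisquare

reported = [74.2, 71.5, 73.5, 70.2, 72.5, 75.2, 71.2, 74.5,
            70.5, 73.2, 72.2, 75.5, 71.5, 74.2, 73.5, 70.2]

last = Counter(int(round(v * 10)) % 10 for v in reported)
observed = [last.get(d, 0) for d in range(10)]

stat, p = chisquare(observed)   # H0: all ten final digits equally likely
print("chi2 = %.1f, p = %.2g" % (stat, p))
# Every value above ends in .2 or .5, so p comes out vanishingly small:
# a red flag worth investigating, though never proof of fraud by itself.
</ecode>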

Re:Methinks the peer-review process needs reviewin (1)

Sique (173459) | about 2 years ago | (#40528381)

He was caught in the end. Sometimes it takes a while, especially if no one really cares about the papers you publish.

Fukushima Scientists (0)

Anonymous Coward | about 2 years ago | (#40528283)

The article is a complete fucking joke without commenting on the current treatment of Japanese scientists and journalists. Go back to your fucking games, Blizzard and such; no need to pay attention to enenews.

So? (1)

NicknamesAreStupid (1040118) | about 2 years ago | (#40529119)

Science fiction, a popular genre, always needs an element to suspend disbelief. Lying that it is real science seems as good a device as any, literarily speaking.

Bring light - and context - to the situation (1)

damn_registrars (1103043) | about 2 years ago | (#40529225)

It is important to shine light on fraudulent work in science, for sure. As others have already pointed out in this discussion, some work is impractical to reproduce, and that is not the purpose of peer review anyway.

In science, just like every other occupation on earth, there are people doing shoddy and/or fraudulent work. It is a function of humanity in general, and no occupation is immune to it. The important thing is that this person has been exposed as a fake, and his identity and record are well known as such. While you cannot prevent every fraud and fake, every time, a thorough debunking and dismissal when one does surface helps discourage future abuses.

Probably more common than you think (5, Interesting)

Cute Fuzzy Bunny (2234232) | about 2 years ago | (#40529877)

I saw a study done recently whose author found that the results of many studies are quite difficult to reproduce, and that the more you tried to reproduce them, and the more you talked publicly about your results, the harder reproducing them became.

The problem is that researchers usually aren't approaching a study as "Let's do xxx and see what happens, then write about that". They've been funded by someone who has a particular result or proof point they'd like to see, or the study operator has a vested interest in the outcome -- at the least, an expectation of what they think they'll find.

Our happy little brains then lead us to that conclusion or desired outcome, and we'll gleefully ignore the things that detract from the results.

And yes, the guy who did this study of study results also found it more and more difficult to reproduce his own results as time went by.

For a couple of good examples of how this works, see the studies on salt and saturated fats in our diet. The Intersalt study folks threw out 40% of the data, which said that salt had no effect on health, suggesting that since it is "well known" that salt affects your health, the people who weren't affected must be lying about their salt consumption. So almost half the data suggested no result, but it was discarded because it didn't fit the desired determination.

The same thing happened with saturated fats. The original researcher took 21 countries' worth of data, but only 5 of the 21 showed health issues that were allegedly correlated with the consumption of saturated fats; the other 16 showed no correlation at all. In fact, the real correlation was between high-caloric, high-sugar/carb, highly processed foods and health issues, not anything to do with saturated fats. There are cultures that eat 50-70% of their food intake as fat, and they have little to no cancer, obesity or diabetes. Take one of those people, move them to the US or England, and put them on our diet? They get fat and sick.
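
You can watch that exclusion step manufacture a correlation from nothing. A crude simulation (mine, with invented numbers; not the actual Intersalt data or method):

<ecode>
# Start with NO relationship between salt intake and blood pressure,
# then discard the 40% of subjects who least fit the expected story.
import numpy as np

rng = np.random.default_rng(2)
salt = rng.normal(8.0, 2.0, 500)    # grams/day
bp = rng.normal(120.0, 15.0, 500)   # independent of salt by construction

r_all = np.corrcoef(salt, bp)[0, 1]

# "Conformity": how well each subject fits "more salt -> higher bp".
conformity = (salt - salt.mean()) * (bp - bp.mean())
keep = conformity > np.quantile(conformity, 0.40)

r_kept = np.corrcoef(salt[keep], bp[keep])[0, 1]
print("correlation, all data: %+.2f  after exclusions: %+.2f"
      % (r_all, r_kept))
# ~0.00 turns into a healthy-looking positive correlation.
</ecode>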

Of course, even when the study obviously sucks, the press can be counted on to come to conclusions that the study didn't even address.

Re:Probably more common than you think (1)

guises (2423402) | about 2 years ago | (#40541717)

The problem is that researchers usually aren't approaching a study as "Let's do xxx and see what happens, then write about that".

This is not a problem; this is part of the scientific method. You always start with a hypothesis, from that create a theory about how it would apply to the natural world, and design your experiment to disprove that theory in every way possible. Doing it the other way around leads to the correlation/causation problem that people like to throw around: given a data set, it's easy to draw tons of conclusions which may or may not be true but sound great when you come up with them.

Maintaining your objectivity is one of those things that they try to beat into you at school, and this is part of it.

Re:Probably more common than you think (1)

Cute Fuzzy Bunny (2234232) | about 2 years ago | (#40544067)

That was my point: the study I mentioned demonstrated that a lack of objectivity led to results that couldn't be reproduced as easily by someone without the same lack of objectivity. Even that guy's study turned out to lack some objectivity.

It's in the breed. School beatings can't and won't help that.

The problem here isn't only the flaky science. Part of the problem is that the media and the public think that if someone ran a study with vague correlation and very little causation, then the results are facts. Sadly, the results are usually an opinion piece with some questionable conclusions.

On the Media had an interesting report about this (0)

Anonymous Coward | about 2 years ago | (#40531303)

You can find it here: http://www.onthemedia.org/2012/jun/08/scientific-retractions-rise/ .... Most alarming: even after papers are retracted, they are still being cited.

Serious Question (0)

AmberBlackCat (829689) | about 2 years ago | (#40535237)

Are you pro-atheism guys going to use this example as a vector to bash all scientists, the same way these guys [slashdot.org] were used to bash all religious groups?

Change career (0)

Anonymous Coward | about 2 years ago | (#40540277)

Sat at a desk and wrote a novel about a research idea? He should switch to patents instead.
