Hoax-Proofing the Open Access Journals

timothy posted about 10 months ago | from the sokal-affairs-are-so-embarrassing dept.

The Media

Frequent contributor Bennett Haselton writes: "A Harvard biologist was able to get an intentionally flawed paper accepted for publication by a number of open-access academic journals, including some that had supposedly been vetted for quality by advocates of open access. It seems the problem could be mitigated by consolidating journals within a field, so that there are far fewer of them, each publishing many more articles -- the review processes take the same amount of labor, but you have fewer journals that have to be audited for procedural honesty." Read on for the rest, including his idea to solve the problem of fraudulent submissions (or even just sub-par science) through simplification.

Harvard biologist John Bohannon wrote about his experiment in an article published by Science Magazine. He submitted his deliberately bogus paper to 304 open-access publishers, including 183 that were listed in the Directory of Open Access Journals (DOAJ), which Bohannon calls the "Who's Who of credible open-access journals", and whose quality is supposedly vetted by the DOAJ staff.

Of the 304 open-access journals targeted by the sting, 60% published the paper. I think this mainly just shows that the average quality of open-access journals may always be low, but that's not surprising, since anyone in the world can set up an "open access journal". That shouldn't be relevant to the reputation of the best open-access journals. If the best open-access journals acquire a reputation for high standards and proper peer review, then that will attract high-quality papers, whose publication will reinforce the reputation of the journal, which enables it to confer prestige on the papers it publishes, which in turn will continue to attract high-quality papers. The existence of other open-access journals with crummy standards should be irrelevant.

What's more disturbing is that, of the 183 journals listed in the Directory of Open Access Journals, 45% published the paper -- which, according to Bohannon's article, surprised and disappointed the DOAJ founders. But perhaps if you're maintaining a database of thousands of allegedly reputable open-access journals, there's no way to make sure that they're all telling the truth about their standards and their practices. At a quick glance, all you can really say is that they would be good-quality journals if they're telling the truth about how they operate, but it's hard to tell from the outside whether they're being honest.

So perhaps a different solution is that we don't really need a huge number of good open-access journals. Rather, in each field, you could get by with a small number of "super-journals" which have a lot of reviewers on file, and which publish a high number of papers but apply uniformly high standards across all of them.

Consider: you have two journals, A and B. Each has its own non-overlapping database of 20 reviewers. When they receive a paper, the standard practice for each of them is to send the paper to 3 randomly chosen reviewers in their database. Each journal receives 10 submissions per month.

Now combine A and B to form one single journal which has 40 reviewers and gets 20 submissions per month, and still sends out each submitted paper to three randomly chosen reviewers. The total amount of work performed by the reviewers doesn't change. But now, if you're auditing the quality of a journal according to its adherence to its own practices, you only have to audit one journal instead of two. By the same logic there's no reason in principle that any number of journals in one field couldn't be subsumed into a few behemoths, which apply uniform standards across all their papers.
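
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch (Python, purely illustrative, using only the numbers from the example above) showing that merging the two journals leaves the total reviewing workload, and the per-reviewer load, unchanged:

    # Reviewer-assignments generated per month: papers received x reviewers per paper.
    def monthly_assignments(journals, submissions_per_journal, reviewers_per_paper=3):
        return journals * submissions_per_journal * reviewers_per_paper

    separate = monthly_assignments(journals=2, submissions_per_journal=10)  # A and B run apart
    merged = monthly_assignments(journals=1, submissions_per_journal=20)    # A and B combined

    print(separate, merged)            # 60 60 -> same total work
    print(separate / 40, merged / 40)  # 1.5 1.5 -> same load across the 40 reviewers

The only thing that changes is the number of distinct review processes an outside auditor has to check.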

You could do this without waiting for the traditional system to be dismantled. Somebody in the field just assembles a list of people to be peer reviewers for the "virtual super-journal". That list is public, so that anybody can audit it and see that it consists of people with a credible reputation in their field. Anyone who pays the (nominal) fee can submit a paper to the VSJ, which sends the paper to a random selection of n reviewers from that list. If the paper "passes" the test, then it gets the stamp of approval of the VSJ, which says, "This paper was judged to be good by a majority of a random sample of reviewers on our list, and you can see from this list that the quality of our reviewers is pretty good."
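
As a sketch of what that workflow might look like (the names and the stand-in "reviewers" below are hypothetical, not any real journal's software), the core of the VSJ is simply "pick n reviewers at random from the public list, and require a majority":

    import random

    def vsj_review(paper, reviewer_pool, n=3):
        """Send the paper to n reviewers drawn at random from the public list;
        grant the stamp of approval on a strict majority of positive votes."""
        panel = random.sample(reviewer_pool, n)
        votes = [review(paper) for review in panel]  # each reviewer returns True (accept) or False (reject)
        return sum(votes) > n / 2

    # Toy usage: the reviewers here are just functions with a placeholder criterion;
    # in reality they would be the people on the published list.
    reviewers = [lambda paper: "methods" in paper.lower() for _ in range(40)]
    print(vsj_review("Title, abstract, METHODS, results...", reviewers))  # True

Because both the reviewer list and the selection rule are public, anyone can check that the journal is following its own procedure.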

And suppose someone wants to publish their paper in some other journal XYZ, and they also want to publish it in the VSJ just to get a certification of its quality, but journal XYZ doesn't allow them to simultaneously submit it to another journal for publication? In that case, you can still submit your paper to get the stamp of approval from the VSJ -- just pay the normal reviewing fee, and if it passes the VSJ's review process, they can list the paper on their website, saying, "This paper was judged to be good by a majority of our reviewers. We can't actually publish the paper here, because some other journal XYZ has exclusive publication rights, but you can view the paper at this link in this other journal." You still have the self-reinforcing cycle where the VSJ's stamp of approval maintains high standards, which attracts high quality papers, which reinforce the reputation of the VSJ's stamp of approval. There's no part of that cycle that requires the VSJ to actually "publish" the paper itself.

And people could subscribe to the VSJ's "stamp of approval" feed the way they subscribe to any other publication -- the VSJ can send out the papers themselves that they have the right to publish, or links to papers in other journals, saying, "This paper got our stamp of approval, and follow the link to read it here."

You could even use this process to do a "hit job" on someone else's paper that got published in another journal, but which you think is too low-quality to have been published. You can submit it to the VSJ and if the VSJ rejects it, you can ask them to list it as a paper that failed their review process. (Whether or not the VSJ would give you the option of doing this may depend on their policies. It "sounds mean", yes, but academics are supposed to keep each other honest. I've never heard of a traditional journal doing that -- calling out a paper published somewhere else and saying, "This sucked, we never would have published it.")

There should probably be multiple open-access journals (or Virtual Super-Journals) within each field, so that the competition between them keeps them honest. But there's no reason to have such a huge number of them that the Directory of Open Access Journals can't keep track of what they're doing.

114 comments

This seems overly complex. (4, Informative)

Joining Yet Again (2992179) | about 10 months ago | (#45326637)

Surely the solution is to have people who understand the papers actually reading them. And if nobody among you understands them then you don't accept them.

And if you don't do that, you don't really have an academic "journal", just a blog.

Re:This seems overly complex. (0)

Anonymous Coward | about 10 months ago | (#45326905)

Surely the solution is to have people who understand the papers actually reading them. And if nobody among you understands them then you don't accept them.

That will require a full time staff member to coordinate that. How will we pay their salary?

I know we can charge for access to the articles.

Re:This seems overly complex. (0)

Anonymous Coward | about 10 months ago | (#45327479)

Thankfully, toll access journals don't have a full time staff member to coordinate peer review either; they just offload that job to unpaid academic editors (usually professors at academic institutions).

Re:This seems overly complex. (0)

Anonymous Coward | about 10 months ago | (#45328207)

Thankfully, toll access journals don't have a full time staff member to coordinate peer review either; they just offload that job to unpaid academic editors (usually professors at academic institutions).

Wrong

Re:This seems overly complex. (1)

TeXMaster (593524) | about 10 months ago | (#45327809)

How will we pay their salary?

Oh, I don't know, you could perhaps use the hundreds of euros you charge the article authors?

Re:This seems overly complex. (0)

Anonymous Coward | about 10 months ago | (#45326981)

You didn't understand the problem. It's not about how to run a journal, it's about how non-experts distinguish between properly run journals and shitty journals just trying to scam some money.

Re:This seems overly complex. (1)

ShanghaiBill (739463) | about 10 months ago | (#45329229)

It's not about how to run a journal, it's about how non-experts distinguish between properly run journals and shitty journals just trying to scam some money.

If I am buying a product, I distinguish the good products from the bad by reading the user reviews on Amazon. Why not just have user reviews for journals? Even better would be to have reviews and ratings for individual articles. A reader could select articles to read based on either all ratings, or only on ratings from reviewers using their real name and affiliation, to ensure the process isn't being gamed. Just like Amazon reviews, the reviews could point out flaws in the article, or provide useful feedback for the author.

Re:This seems overly complex. (1)

NoKaOi (1415755) | about 10 months ago | (#45329699)

It's not about how to run a journal, it's about how non-experts distinguish between properly run journals and shitty journals just trying to scam some money.

If I am buying a product, I distinguish the good products from the bad by reading the user reviews on Amazon. Why not just have user reviews for journals? Even better would be to have reviews and ratings for individual articles. A reader could select articles to read based on either all ratings, or only on ratings from reviewers using their real name and affiliation, to ensure the process isn't being gamed. Just like Amazon reviews, the reviews could point out flaws in the article, or provide useful feedback for the author.

While I see your point, that's pretty much exactly what the point of peer review is. The difference with peer review is that you're supposed to be an expert in order to review the paper. In your scenario, what you'll end up with is a group of people claiming to be experts doing 90% of the reviewing. The whole point of a peer reviewed journal is that somebody is verifying (to some degree at least) that the reviewer is actually an expert on the topic. This is very different from Amazon reviews, where the point is simply to judge whether or not other customers were satisfied with their purchase and why.

Just look at slashdot comments on nuclear power...everybody sounds like an expert in nuclear physics and in politics. Would you want those commenters reviewing a paper on the topic of nuclear physics, or would you want somebody actually verifying that the reviewers are nuclear physicists?

Re:This seems overly complex. (1)

ShanghaiBill (739463) | about 10 months ago | (#45330543)

While I see your point, that's pretty much exactly what the point of peer review is.

No it isn't. When I read a journal, I cannot see what the peer reviewers wrote. The peer reviews are not published with the article. If one of the peer reviewers expressed some doubts, or pointed out some problems with the methodology, that information is lost when the article is published. Furthermore, no one has the ability to add new reviews if an article is later shown to be irreproducible, or is even later confirmed. The information in the journal is fixed and static.

Just look at slashdot comments on nuclear power...everybody sounds like an expert in nuclear physics and in politics. Would you want those commenters reviewing a paper on the topic of nuclear physics, or would you want somebody actually verifying that the reviewers are nuclear physicists?

I have learned far more about nuclear power from Slashdot than from journals. If I am an expert, I can judge for myself whether a review makes valid points. If I am not an expert, I want to hear a variety of opinions, especially the unpopular and dissenting opinions, and then I will follow the links to people that are experts. Scientists don't need to be protected from debate.

Re:This seems overly complex. (1)

gerddie (173963) | about 10 months ago | (#45331195)

The information in the journal is fixed and static.

That depends on the journal, e.g. this journal Source Code for Biology and Medicine [scfbm.org] offers the option to comment on articles, just like all the other Biomed central journals [biomedcentral.com], or the PLoS [plos.org] journals.

Re:This seems overly complex. (1, Informative)

sycodon (149926) | about 10 months ago | (#45327003)

Lest we forget, a large number of submissions from the paid journals had data that was not reproducible [wsj.com]

Re:This seems overly complex. (0)

Anonymous Coward | about 10 months ago | (#45327079)

Lest we forget, a large number of submissions from the paid journals had data that was not reproducible [wsj.com]

That doesn't make it fraudulent.

Re:This seems overly complex. (1)

ShanghaiBill (739463) | about 10 months ago | (#45329301)

That doesn't make it fraudulent.

But it does make it wrong. The journals will usually only retract an article that is fraudulent, but not one that is wrong. So the article is still available to be read, but with no indication that it is wrong.

Re:This seems overly complex. (0)

Anonymous Coward | about 10 months ago | (#45329963)

There is no reason for any of this. No retracting "wrong" research, it simply gets labeled as "replication failed". How do you know which study was right? Or that there was not some specific aspect of the experiment that was important which did not translate to the second study? There is no reason for protecting against "fraudulent" papers either. Independent replication is at the core of science, if it has not been replicated it should not be trusted. You guys are replicating results right? Not saying it isn't worth doing because it is "not novel"? Right?

Re:This seems overly complex. (1)

ShanghaiBill (739463) | about 10 months ago | (#45330575)

No retracting "wrong" research, it simply gets labeled as "replication failed".

Seriously?? What journal does that? Can you show me a journal that has attached a "replication failed" label to an already published article?

Re:This seems overly complex. (0)

Anonymous Coward | about 10 months ago | (#45330793)

I should have said that is what should be done, in my opinion, instead of retracting. There are "no" (very very few) replications being done, so there is currently not a reason to. I have found a few, but it has almost always been by accident.

Re:This seems overly complex. (1)

mcmonkey (96054) | about 10 months ago | (#45327491)

Lest we forget, a large number of submissions from the paid journals had data that was not reproducible [wsj.com]

I admit my eyes started to glaze over and I didn't finish reading TFS because it seemed like a lot of hand waving and busy work to no real effect, but isn't this proposal, well, a lot of hand waving and busy work to no real effect?

The only way I see to "hoax-proof" a journal is to require reproduction of the results during peer review.

But don't all serious fields have that already? Getting through the review process for publication is just the first step--not all published results are inducted into the canon of accepted knowledge. Publication basically just puts the method and results out there to be examined by a larger audience.

Which is why publications aren't a good measuring stick--for an individual theory or personal success. The real measure is citations. If lots of other papers are citing a paper (and not just to say they couldn't reproduce the original results) you have a better chance that paper is not a hoax. If someone has a long list of publications with many citations, that person is likely trustworthy.

Re:This seems overly complex. (1)

ShanghaiBill (739463) | about 10 months ago | (#45329443)

The only way I see to "hoax-proof" a journal is to require reproduction of the results during peer review.

But don't all serious fields have that already?

No, they do not. I have never heard of a reviewer trying to reproduce the results. I have reviewed plenty of papers. I will spend about 2-4 hours reviewing something that took the author months of work. All I do is read the paper, make a recommendation, write a few paragraphs of feedback, and email it back to the journal. That's it. This is an unpaid process. There is no way I am going to put my own work on hold for several months to repeat the experiment.

Re:This seems overly complex. (0)

Anonymous Coward | about 10 months ago | (#45331057)

As little as 2 hours to review a paper for an academic journal? Fucking hell, what is your field?

Re:This seems overly complex. (2)

svyyn (530783) | about 10 months ago | (#45331183)

I'm an ecologist. 2-4 hours sounds about right. It takes longer if they're using some fancy new statistic, or if it's not really my subfield, or if it's a particularly long and complex paper. Many papers get a 'reject' with much less time.

Re:This seems overly complex. (1)

mcmonkey (96054) | about 10 months ago | (#45331627)

The only way I see to "hoax-proof" a journal is to require reproduction of the results during peer review.

But don't all serious fields have that already?

No, they do not. I have never heard of a reviewer trying to reproduce the results. I have reviewed plenty of papers. I will spend about 2-4 hours reviewing something that took the author months of work. All I do is read the paper, make a recommendation, write a few paragraphs of feedback, and email it back to the journal. That's it. This is an unpaid process. There is no way I am going to put my own work on hold for several months to repeat the experiment.

Well, I hope you do better with the papers you recommend than you did with my comment. If you continue to the sentence after the one you quote, you'll see I'm talking about what happens _after_ a paper is published--other researchers attempt to reproduce the results, either directly or indirectly.

Re:This seems overly complex. (1)

interkin3tic (1469267) | about 10 months ago | (#45328431)

I'd say it's actually overly simplistic. It's a complex problem. A simple solution is doomed to only make things worse. "There are a lot of open access journals, and there are some shitty ones, so we should make there be less" is the basic suggestion here. The factors that motivated the journal numbers aren't going to go away just by consolidating them. Publishers of the shitty open access journals who are simply looking for a profit will still simply want a profit. Researchers who just want to churn out crap still have the same incentives to churn out crap.

This seal of approval or "virtual super journal" wouldn't end it either. We know this because such things already exist [f1000.com] and yet the problem continues. You make this virtual journal to separate the good from the bad. The people who made the bad journals will come up with their own. In addition to a large number of shitty journals mucking things up, you'll have a large number of shitty VSJs.

There should probably be multiple open-access journals (or Virtual Super-Journals) within each field, so that the competition between them keeps them honest.

Why does competition = honesty not work now then?

For that matter, what's the problem? Shitty journal articles may be annoying, but researchers aren't exactly confused by them. "This article was published in the 'western romanian journal of blood borne pathogens in pigeons?' Hmm... better take it very seriously." A huge number of crap open access journals is only a problem for people who want to see research in extremely simplistic terms. People who just want to count publications and determine who to throw money at are the ones who see it as a problem. But such people are idiots and are going to waste money no matter how you try to prevent shitty publications. And researchers are going to be able to game such systems anyway.

Re:This seems overly complex. (1)

bennetthaselton (1016233) | about 10 months ago | (#45330709)

This is correct, but the question is how to make this scale to a large number of journals. The more journals you have, the more you run the risk that some are not following this procedure, and it's hard to audit them from the outside.

That's why I'm recommending having a smaller number of journals with a larger output and a larger number of reviewers on call, because then you have a centralized point where the procedures can be enforced.

Why the hell not? (3, Funny)

srussia (884021) | about 10 months ago | (#45326821)

It works for /.

Re:Why the hell not? (0)

Anonymous Coward | about 10 months ago | (#45327853)

Ah, like that magazine, Populist Science.

Re:Why the hell not? (1)

228e2 (934443) | about 10 months ago | (#45327939)

I am glad that someone else caught this glaring irony while reading the summary . . . .

Re:Why the hell not? (1)

Hognoxious (631665) | about 10 months ago | (#45328701)

You read the summary? My eyes went on strike a quarter of the way through.

Re:Why the hell not? (1)

icebike (68054) | about 10 months ago | (#45329047)

You read the summary? My eyes went on strike a quarter of the way through.

I wonder if this was one of those hoax submissions the author waves under our noses in his first paragraph.
"Lets see is we can hoax those /. readers by telling them a story about hoax submissions."

Harvard biologist commits Fraud better title. (0)

Anonymous Coward | about 10 months ago | (#45326841)

Perhaps the title of this article should be: Harvard biologist commits Fraud on Open Access Journals. The Journals all assume that the author is acting in good faith and believes his result. Not that he is trying to trick them into publishing an incorrect paper. They are 'peer' reviewers, not Police. Just because they got tricked by a con artist doesn't automatically lead to the conclusion that the system is broken.

Re:Harvard biologist commits Fraud better title. (5, Insightful)

RDW (41497) | about 10 months ago | (#45327123)

He didn't set subtle traps that somehow slipped past vigilant reviewers. He included deliberate, very basic, glaring errors that showed no meaningful peer review had ever taken place. It's more like being fleeced by a con artist with 'This is a Con!' tattooed on his forehead - if you're taken in, you really only have yourself to blame.

Re:Harvard biologist commits Fraud better title. (1)

Linzer (753270) | about 10 months ago | (#45327243)

Thank you for setting this straight. I wish I had mod points...

Re:Harvard biologist commits Fraud better title. (0)

Anonymous Coward | about 10 months ago | (#45330057)

His study was flawed. He did not submit to any "closed-access" journals. Attributing the problem to the property of "open-access" is totally unjustified.

Re:Harvard biologist commits Fraud better title. (0)

Anonymous Coward | about 10 months ago | (#45327227)

Yes, ethics is irrelevant. This is the same person who "created the Dance Your PhD competition", and "writes for Science, Discover Magazine, and Wired Magazine, and frequently reports on the intersections of science and war."
"...I have a PhD in molecular biology, I still barely understand what scientists are talking about..."

Maybe Mr. Bohannon can approach... Elsevier, Springer, Wiley and the others for a fund to further his research in how DOAJ is no good?

Three blind mice. (2)

westlake (615356) | about 10 months ago | (#45327419)

The Journals all assume that the author is acting in good faith and believes his result. They are 'peer' reviewers, not Police.

Meaningful peer review demands an intelligent evaluation of the author's arguments and evidence and the clarity with which they are presented. Good faith does not imply good science. Belief does not imply good science. Neither faith nor belief implies good writing and sound editing.

Economics (2)

mysterons (1472839) | about 10 months ago | (#45326961)

A major problem with open-access journals is that there is no motivation for them to reject submissions. If anything, the more they publish, the more money they make. Likewise, peer reviewers (at least in my field -- natural language processing and machine learning) are never paid to review them. This is not a good combination. I cannot see any reason for journals nowadays. Either publish in conferences (which in some fields are competitive and very tightly reviewed) or, better still, publish on arXiv and have some kind of citation/comment system as a way to crowd-source quality control.

Re:Economics (2)

TWiTfan (2887093) | about 10 months ago | (#45327099)

Yep, a lot of these "journals" are the academic journal equivalent of diploma mills. They accept any piece of garbage you submit to them, then you get to add said piece of garbage to your resume/c.v., postdoc/grant/tenure application, etc.

Re:Economics (1)

icebike (68054) | about 10 months ago | (#45329253)

Maybe these journals need to take a clue from your field.

Most colleges and even some high schools run student papers through software packages to detect plagiarism and structural problems. In fact most careful students do the same thing before submitting papers.

You would think there would be a first layer of software detection that could throw up enough red flags to send the papers back without any wasted effort. Not JUST on plagiarism (which might be harder to check, since quoting other studies in a paper is often necessary), but also for the detection of "crazy-talk".
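
A minimal sketch of that kind of first-pass filter, assuming nothing more than a plain-text manuscript (the specific checks and thresholds are illustrative placeholders, not a real plagiarism or "crazy-talk" detector):

    import re

    def screen_submission(text, min_references=5, max_duplicate_fraction=0.2):
        """Cheap automated red-flag checks, run before any reviewer time is spent."""
        flags = []
        if len(re.findall(r"\[\d+\]", text)) < min_references:  # crude count of bracketed citations
            flags.append("too few references")
        sentences = [s.strip().lower() for s in re.split(r"[.!?]", text) if s.strip()]
        if sentences:
            duplicate_fraction = 1 - len(set(sentences)) / len(sentences)
            if duplicate_fraction > max_duplicate_fraction:
                flags.append("large fraction of duplicated sentences")
        return flags  # an empty list means the paper passes the automated first pass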

Maybe, the "greedy" journals have a point (2)

mi (197448) | about 10 months ago | (#45326969)

The established publications — often denounced as "greedy" for having the audacity of wanting to get paid — do add value, after all?

Next in the news: a private farm's crop beats the yield of a communal field.

Re:Maybe, the "greedy" journals have a point (0)

Anonymous Coward | about 10 months ago | (#45327019)

The established publications — often denounced as "greedy" for having the audacity of wanting to get paid — do add value, after all?

Absolutely right. You get what you pay for.

Re:Maybe, the "greedy" journals have a point (3, Informative)

i kan reed (749298) | about 10 months ago | (#45327059)

Except those paid journals have also had serious hoaxes foisted on them. You have to go to really really really really big journals like science or nature before there's enough credibility to protect against fraud.

Re:Maybe, the "greedy" journals have a point (0)

Anonymous Coward | about 10 months ago | (#45327281)

Do those really really big journals have credibility? They seem to select for papers which will have a big impact in the general press rather than necessarily for good ones.

Re:Maybe, the "greedy" journals have a point (2)

i kan reed (749298) | about 10 months ago | (#45327501)

Well, credibility is probably the wrong word. It's Linus' "enough eyeballs" principle in paper form.

Re:Maybe, the "greedy" journals have a point (1)

icebike (68054) | about 10 months ago | (#45329441)

They seem to select for papers which will have a big impact in the general press rather than necessarily for good ones.

Well, at least Nature admits as much in their Mission Statement:

First, to serve scientists through prompt publication of significant advances in any branch of science, and to provide a forum for the reporting and discussion of news and issues concerning science. Second, to ensure that the results of science are rapidly disseminated to the public throughout the world, in a fashion that conveys their significance for knowledge, culture and daily life.

Even big discoveries in small fields with no significant tie-in to wider fields of science or daily life will not be accepted, which is fine. Their mission statement is fairly clear on that point. They are not focused, field-specific journals.

Re:Maybe, the "greedy" journals have a point (4, Interesting)

Linzer (753270) | about 10 months ago | (#45327335)

Except those paid journals have also had serious hoaxes foisted on them. You have to go to really really really really big journals like science or nature before there's enough credibility to protect against fraud.

Actually, no journal is fully immune, no matter how prestigious. Worse than that, top-notch journals like Science and Nature require sensational stories, which makes them more likely to publish skillfully hyped-up reports than honest ones that acknowledge their own limitations. The best science is often found in mid-range journals that accept longish and seemingly boring manuscripts, with nothing swept under the rug.

Re:Maybe, the "greedy" journals have a point (1)

mi (197448) | about 10 months ago | (#45328087)

Except those paid journals have also had serious hoaxes foisted on them.

Wouldn't this be the place to list good examples? I'm most curious... Thanks!

Re:Maybe, the "greedy" journals have a point (1)

icebike (68054) | about 10 months ago | (#45329471)

Except those paid journals have also had serious hoaxes foisted on them.

Wouldn't this be the place to list good examples? I'm most curious... Thanks!

Start here, http://en.wikipedia.org/wiki/Nature_(journal)#Controversies [wikipedia.org]

Re:Maybe, the "greedy" journals have a point (1)

mi (197448) | about 10 months ago | (#45329673)

Indeed... Well, I guess, the time (and the free market) shall tell. Hope, the market remains free...

Re:Maybe, the "greedy" journals have a point (0)

Anonymous Coward | about 10 months ago | (#45330113)

Google "effectiveness of peer review". There is no evidence it has ever worked in any field, it functions to enforce social norms. The study we are talking about with the fake submissions failed to submit to paywall journals. We can not say from this evidence whether open access has anything to do with the results. Based on the previous literature it is very plausible that all that has been detected is the background failure of peer review to be effective.

Re:Maybe, the "greedy" journals have a point (1)

icebike (68054) | about 10 months ago | (#45329357)

Except those paid journals have also had serious hoaxes foisted on them. You have to go to really really really really big journals like science or nature before there's enough credibility to protect against fraud.

Wait, you're saying Nature and Science don't get frauded?
Seems to me both of those Journals were taken in by Jan Hendrik Schön.
And paid journals also have turned down key papers, including Nature's total snub of Watson and Crick's 1953 paper on the structure of DNA.

Re:Maybe, the "greedy" journals have a point (0)

Anonymous Coward | about 10 months ago | (#45331039)

You mean the ones where they word their conclusions based on statistics in a manner that makes their claims appear to be amazing?

The glass is more than half full.... no it's barely less than half empty and it makes no significant difference.

Re:Maybe, the "greedy" journals have a point (2)

tinkerton (199273) | about 10 months ago | (#45327067)

AFAIK the protest against Elsevier was not about them wanting to get paid but about monopolies. Monopolies tend to want just a bit more than getting paid.

Re:Maybe, the "greedy" journals have a point (1)

mi (197448) | about 10 months ago | (#45327249)

AFAIK the protest against Elsevier was not about them wanting to get paid but about monopolies.

My recollection is, more than a single publisher was targeted by the "information wants to be free" folks...

Re:Maybe, the "greedy" journals have a point (1)

femtobyte (710429) | about 10 months ago | (#45331903)

Well, then, you don't recall very well some of the major issues for which Elsevier was targeted. Such as printing fake shill journals under the Elsevier label just for articles produced by Big Pharma PR, so that pharmaceutical companies could pass requirements for "peer reviewed" studies of their products with a citation to a "peer reviewed" study (in their privately purchased Elsevier journal). The Elsevier upper management are profiteering crooks, plain and simple, which is a pity, because they've bought up and influence a lot of good journals with established good reputations (so you're stuck supporting the crooks to get to important past research).

Re:Maybe, the "greedy" journals have a point (1)

mi (197448) | about 10 months ago | (#45332283)

Well, then, you don't recall very well some of the major issues for which Elsevier was targeted

The fraud you are referring to is, indeed, reprehensible, but the publisher (and I remain convinced, other publishers were blamed too) was targeted simply for wanting profit. Indeed, you are doing just that right now:

The Elsevier upper management are profiteering crooks

"Profit" seems to be a dirty word for you...

Re:Maybe, the "greedy" journals have a point (1)

Anonymous Coward | about 10 months ago | (#45327547)

You mean we should all publish in flagship journals such as "Chaos, Solitons and Fractals" or the "Australasian Journal of Bone and Joint Medicine"? The Elsevier brand guarantees they're good journals!

Re:Maybe, the "greedy" journals have a point (1)

Daniel Dvorkin (106857) | about 10 months ago | (#45329769)

The established publications -- often denounced as "greedy" for having the audacity of wanting to get paid -- do add value, after all?

We have no idea if they do or not, because the "study" deliberately omitted traditional journals. Congratulations, you got played for a sucker.

Re:Maybe, the "greedy" journals have a point (0)

Anonymous Coward | about 10 months ago | (#45331607)

The established publications — often denounced as "greedy" for having the audacity of wanting to get paid — do add value, after all?

That wasn't at all established by this experiment. Traditional journals were not "tested", although it is well known that they have been frequent publishers of fallacious and even fake research.

Discredit the scientists (2)

areusche (1297613) | about 10 months ago | (#45327045)

It doesn't matter if it is an "exclusive" journal or one that is open access. If a scientist submits fake data to a publication, shouldn't the scientific community take the time to verify his results? I'm pretty sure that had he just made up his findings, someone somewhere would have called him on it and his cred amongst his colleagues would have gone down the toilet. Isn't this how cold fusion was proven false?

Re:Discredit the scientists (0)

Anonymous Coward | about 10 months ago | (#45328335)

They should, but there is no incentive to do so. Besides, is it really a good idea as a PhD student to publish a "this paper is wrong" paper correcting an established scientist who will review his job application?

Re:Discredit the scientists (0)

Anonymous Coward | about 10 months ago | (#45328777)

Let's be reasonable here.
If someone working with the Large Hadron Collider publishes a paper summing up 10 years of measurements, then the repeatability of this experiment is highly questionable.
Thus, if someone tries to hoax a journal with fake images/data from some ridiculously expensive and complicated machine where only one or two exist in the entire world, no one is going to find out for a very very long time.

Verify the results (0)

Anonymous Coward | about 10 months ago | (#45328901)

How do you verify the results?

Reviewers just look at the data, look at the assumptions and statements and see if it "makes sense".

Generally they're not capable of rerunning the study.

Re:Discredit the scientists (0)

Anonymous Coward | about 10 months ago | (#45329001)

>If a scientist submits fake data to a publication, shouldn't the scientific community take the time to verify his results?

"The scientific community?" Who's going spend the time and the (very hard to get) funding to verify other people's results?

This is the reason that "Many eyes make all bugs shallow" is just a sad joke.

slashdot science journal (1)

penfern (760298) | about 10 months ago | (#45327077)

Slashdot engine should be used to maintain author and paper karma points, moderation and meta-moderation! Yeah?

Re:slashdot science journal (1)

Sarten-X (1102295) | about 10 months ago | (#45327145)

Oh yes... A hivemind with knee-jerk reactions is great for science!

Re:slashdot science journal (1)

somersault (912633) | about 10 months ago | (#45327423)

Okay. Look. We both said a lot of things that you're going to regret. But I think we can put our differences behind us. For science. You monster.

Re:slashdot science journal (1)

Hognoxious (631665) | about 10 months ago | (#45328605)

99% of those problems can be solved by blocking all IPs originating south of the Mason-Dixon line.

harvard Hack-ologist John Bohannon (4, Informative)

Uberbah (647458) | about 10 months ago | (#45327103)

Notice he only submitted his fake papers to open access journals. As a scientist, and especially as a biologist, he's perfectly aware of the importance of control groups. If he were honest, he would have submitted the same papers to closed, for-profit journals as well, even if it cost him money to do so.

Re:harvard Hack-ologist John Bohannon (1)

Linzer (753270) | about 10 months ago | (#45327383)

Good point. And it wouldn't even have cost money, because for the journals I know of, you don't pay anything until the paper is accepted.

Re:harvard Hack-ologist John Bohannon (2)

king neckbeard (1801738) | about 10 months ago | (#45327535)

Science published the article on his clearly unsound research, which provides some data on for-profit journals as well.

Re:harvard Hack-ologist John Bohannon (1)

interkin3tic (1469267) | about 10 months ago | (#45328551)

This wasn't an experiment, it was a stunt to highlight a problem he saw.

Re:harvard Hack-ologist John Bohannon (1)

westlake (615356) | about 10 months ago | (#45329303)

Notice he only submitted his fake papers to open access journals. As a scientist, and especially as a biologist, he's perfectly aware of the importance of control groups.

The "control group" isn't necessary if the only question being asked is whether the open access journal would publish a paper that is utterly ridiculous, absolute nonsense.

You can call the experiment unfair, if you like.

But that is cold comfort for supporters of the open source model.

Re:harvard Hack-ologist John Bohannon (0)

Anonymous Coward | about 10 months ago | (#45330979)

This is incorrect. Peer review simply is not effective. The only effective filter is independent replication. It has nothing to do with open source.

Re:harvard Hack-ologist John Bohannon (1)

NoKaOi (1415755) | about 10 months ago | (#45329805)

Notice he only submitted his fake papers to open access journals. As a scientist, and especially as a biologist, he's perfectly aware of the importance of control groups. If he were honest, he would have submitted the same papers to closed, for-profit journals as well, even if it cost him money to do so.

Yes, his study was not scientifically valid. Now his only option will be to publish his paper in an open access journal, instead of a closed, for-profit journal* that only accepts scientifically valid papers.

*The summary was based on an article in Science that simply described his experiment, not a peer reviewed scientific paper in Science.

Re:harvard Hack-ologist John Bohannon (0)

Anonymous Coward | about 10 months ago | (#45330225)

Exactly. There are good and bad journals. There are certainly more bad open access journals, because it is much easier to make money with a fake open access journal. But fundamentally, this has not much to do with open access at all. For example, there are fake conferences too.

I also have to point out that both open-access and closed journals are usually for-profit. For closed journals authors do not usually pay anything, while for open-access you usually pay on acceptance. So money is not an issue which could have prevented him from doing this. He specifically targeted open access.

Wrong from the start (4, Insightful)

Sarten-X (1102295) | about 10 months ago | (#45327119)

you have fewer journals that have to be audited for procedural honesty

Taking this to its logical conclusion, a monopoly is the most honest organization, right?

Once one of these "fewer" journals has an established reputation, it can obscure its procedures and refuse to be audited, while it turns corrupt for profit. Since it's still a well-known journal (because who really has time to monitor the procedural audits, anyway?) it will still get the submissions and readers, and it will stay relevant for many years after "everybody knows" its' corrupt.

Re:Wrong from the start (0)

Anonymous Coward | about 10 months ago | (#45327925)

No, that is not the logical conclusion. That is reductio ad absurdum, possibly qualifies as a slippery slope argument, and misrepresents what is being said.
 
He is saying that fewer journals with more reviewers each would result in easier and thus better oversight of the journals. He is arguing fewer journals with better oversight instead of many journals with little oversight.
 
Like many things, it is a trade off.

Traffic Light System (3, Interesting)

Anonymous Coward | about 10 months ago | (#45327139)

Modern journals are all online. So, it would be easy to provide a traffic light system that indicates if a reference is found to be fraudulent or incorrect. In a correct paper, all references would have a green background. If one of those papers was found to be incorrect by a 3rd party, it would be flagged and its colour changed to amber. This would propagate to all papers that make a reference to that paper and the entire chain would become amber. This would force all authors to update their papers, or the author of the original flagged paper to correct their work. If a paper is just flat out wrong, discovered to be a fake, or fails to be updated after a period of time in amber, it would become red, which again would propagate to every paper that uses it as a reference.

This will keep the chain of dependencies clean throughout the entire scientific world and minimise the impact of improperly peer-reviewed work.
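
One way to implement the propagation step is a simple pass over the citation graph, pushing the worst status of any cited paper up to every paper that cites it (a toy sketch; the paper IDs and the graph itself are made-up examples):

    GREEN, AMBER, RED = "green", "amber", "red"
    SEVERITY = {GREEN: 0, AMBER: 1, RED: 2}

    def propagate(status, cited_by):
        """status: paper id -> its own colour; cited_by: paper id -> papers citing it.
        Returns the effective colour of every paper once flags have propagated."""
        effective = dict(status)
        changed = True
        while changed:  # keep pushing colours up the chain until nothing changes
            changed = False
            for paper, citers in cited_by.items():
                for citer in citers:
                    if SEVERITY[effective[paper]] > SEVERITY[effective.get(citer, GREEN)]:
                        effective[citer] = effective[paper]
                        changed = True
        return effective

    # Toy example: C cites B, B cites A; flagging A as red taints the whole chain.
    status = {"A": RED, "B": GREEN, "C": GREEN}
    cited_by = {"A": ["B"], "B": ["C"], "C": []}
    print(propagate(status, cited_by))  # {'A': 'red', 'B': 'red', 'C': 'red'}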

Re:Traffic Light System (0)

Anonymous Coward | about 10 months ago | (#45327907)

Nine out of ten of those flagged papers' authors would admit not having read the "wrong" paper they cited, and having cited it just so they could have an impressive bibliography.

Re: Traffic Light System (0)

Anonymous Coward | about 10 months ago | (#45328315)

Everybody would resurrect their old "paper white" grayscale VGA monitors.

Re:Traffic Light System (3, Interesting)

femtobyte (710429) | about 10 months ago | (#45328561)

A little subtlety would be required. In a document I'm working on, a whole load of references are papers containing flat out wrong, erroneous information (honest scientific mistakes, not deliberate fraud, but disproven material nonetheless). In the text, I'm using these as examples of things that have gone wrong in the past history of the field. Would my paper, with its bibliography littered with "red" entries (that I put there because they were "red," in order to comment on the scientific process --- which involves making and correcting mistakes --- leading to the present-day state of the field) be tagged as "red"? What if I reference a paper in which some material has held up under later scrutiny, and other hasn't (basing my work on the more "correct" material)?

What about subscription journals? (2)

Jace Harker (814866) | about 10 months ago | (#45327157)

This paper has already been extensively critiqued. To me the biggest problem is that he didn't include any subscription journals.

Many intentionally flawed or nonsense papers have been submitted to -- and published by! -- reputable journals in the past.

This latest demonstration by Bohannon just shows that the peer review system needs improvement. It does not show whether Open Access journals are better or worse than subscription journals in terms of quality and reliability of content.

Re:What about subscription journals? (1)

Linzer (753270) | about 10 months ago | (#45327403)

Yup. You could rephrase the conclusion as: "there are an awful lot of crappy open access journals out there". It doesn't say anything about a difference in that respect between OA and non-OA journals.

Re:What about subscription journals? (0)

Anonymous Coward | about 10 months ago | (#45331109)

But this is begging the question. Why are we only talking about crappy open source journals?

Do paid journals never get hoaxed? (3, Insightful)

Ultra64 (318705) | about 10 months ago | (#45327251)

Of the 304 open-access journals targeted by the sting, 60% published the paper. I think this mainly just shows that the average quality of open-access journals may always be low, but that's not surprising since anyone in the world can set up an "open access journal".

Is this information really meaningful without a similar test on the paid journals?

Re:Do paid journals never get hoaxed? (1)

genegrey (3420409) | about 10 months ago | (#45331631)

There have been plenty of instances in which paywalled journals have also been shown to, um, let dodgy papers through. Although I agree, it would be interesting to see what happened with an identical test :) Also, here's another take on the entire thing - it's not open access vs paywalled, it's peer review (and shows an awesome opportunity to improve it!). http://www.theguardian.com/science/grrlscientist/2013/oct/08/1 [theguardian.com]

The overall reviewing load will decrease... (0)

Anonymous Coward | about 10 months ago | (#45327435)

As crappy researchers will not get their shitty papers accepted at open access journals and will stop publishing.

Workload! (1)

supercrisp (936036) | about 10 months ago | (#45327441)

If ten a month is the standard load for a reviewer, I think there's already a problem. Reading an article should probably be allotted at least an hour. Any fact-checking will take more. I read articles for the humanities, and that's pretty easy. You can spot a bullshitter pretty quickly, in a page or two. But I'd imagine science can be trickier. So, the half-hour or so it might take me to be sure that it's crap would probably double for science. At a minimum. I say minimum because reading a stack of a dozen poetry submissions can easily take me over an hour, and that's really not very much text. Then you have the separate but connected problem of being rushed or just feeling sick and tired of the stack and rushing through it. It seems to me like it's a recipe for rubber-stamping and carelessness. I know that a science journal my ex-wife worked for sent out far fewer articles a month. But it was a small journal on a narrow topic. I think this whole issue will boil down to the fact that you'll always be able to game the system. The process of peer-review doesn't end with publication. For good reasons.

Re:Workload! (0)

Anonymous Coward | about 10 months ago | (#45327615)

Well, ideally the managing editor actually reads the paper and decides if it's worth bothering 2 busy scientists. I say ideally, because I know this doesn't happen all the time.

It's what you publish that counts (0)

Anonymous Coward | about 10 months ago | (#45327503)

Peer review was a necessary filter when there were limited options for publishing to a wide community, mostly due to expense. It wasn't necessarily a golden age, and there were tradeoffs in that possibly many deserving papers weren't published for various reasons. Obviously it's different today, and quality of content, e.g. as measured by citations, is starting to replace quantity. An ignored paper today is the same as a rejected paper yesterday.

how about crap-proofing slahsdot? (0)

Anonymous Coward | about 10 months ago | (#45327837)

for some reason, shit like this makes it to the front page. I think that reason is: timothy.

Re:how about crap-proofing slahsdot? (1)

Brucelet (1857158) | about 10 months ago | (#45328637)

Complain about Timothy all you want, but Bennett Haselton is a completely different problem.

wrong premise (1)

stenvar (2789879) | about 10 months ago | (#45328005)

Anybody who expects peer review to let through only correct papers doesn't understand peer review. Science editors, of all people, should understand that, given the many bogus and fraudulent papers they have published over the years.

In different words, there's nothing to fix here.

Write Only Science (1)

SplawnDarts (1405209) | about 10 months ago | (#45328189)

The publication process has gone so far downhill it's basically not recognizable as science any more. This is driven by the university tenure process. A tenured professorship is a sweet job. The hours are short and flexible and the work is interesting and varied. Pay is less than industry, but once tenured the pay is guaranteed. Benefits are usually top-notch. That's an appealing package for anyone of reasonable intellect, middling ambition, and a desire for ironclad security. Not surprisingly, the supply of would-be professor labor greatly outstrips demand.

So who gets that cushy seat? Well, it's all based on publications and grant money. Grant money is based mostly on publications. So what you, would-be professor, need is a pile of publications. This is a huge change from the scientific publications of yore, which were by and large written for the benefit of the reader. These papers are written for the benefit of the WRITER, and that makes all the difference.

Most are on insanely obscure topics. The writer needs novelty (which is most easily achieved by obscurity) to get past peer review, and no one cares if anyone else actually wants to know about the topic. Organization and clarity are for the birds - as long as the reviewers can't prove you're wrong per se, it will get accepted somewhere eventually, especially at a pay journal. Reproducibility is actually undesirable - the last thing you want is scrutiny. It can't get you another publication, but it could force you to retract one. The problem being addressed by a given paper is typically very easy, but made to look very hard. Solve a hard problem, get one paper. Solve an easy problem that looks hard, get one paper. It's a no brainer.

Think these papers won't get past peer review? Think again. Mostly the journals don't REALLY read them. Just sort of skim. Think tenure committees will evaluate the papers on their merits? Think again. They don't have time, and in most universities the ultimate arbiter of tenure is the whole body of professors, most from different fields. Are they going to parse your obscure minutiae? Heck no. They weigh it.

This can't be changed by fixing the journals. The real problem is that many of those publishing are doing so in bad faith. Right now scientists have exactly the journals they deserve.

Rubbish! (0)

Anonymous Coward | about 10 months ago | (#45328719)

It's about time to abandon the concept of "journals" altogether. No one wants to read journals. They want to read individual papers that pertain to the subject of their interest. Post individual papers, open to review by others who are qualified to comment on the content. The papers and comments are open access worldwide, after the specified comment period.

At present, peer review for most journals is done by junior faculty who are inexperienced and eager to advance their career, not by the real experts. It is a broken system. And there is no justification for maintaining a publishing model that was designed to share knowledge among a limited number of people during the print-on-paper era. Now, journal publishing is merely a mechanism for furthering the economic interests of publishers.

We need a StackOverflow Open Access Journal (0)

Anonymous Coward | about 10 months ago | (#45328857)

If StackOverflow made open access journals, he would now have damaged his reputation, because other users of the site would vote his publication down. This would have a knock-on effect on 1) how his future publications are ranked, 2) how seriously the feedback he leaves for others is taken, and 3) ultimately the grant money he would receive. Basically, he would be shooting himself in the foot.

I think that the faster we move to open access journals with a single sign in, voting and comments - the sooner crap will be filtered out.

If only StackOverflow made an Open Access Journal (1)

randomhacks (3420197) | about 10 months ago | (#45328947)

... then you would be able to vote his paper down for being nonsense. This would then reduce his reputation and would: 1) Reduce the rating of future publications 2) Reduce the effect of feedback that he leaves others 3) Reduce his funding. I.e. Post bad work and reduce the chances of succeeding as an academic.

Most open access journals are frauds (1)

SoftwareArtist (1472499) | about 10 months ago | (#45331411)

Ok, maybe "fraud" is a slightly strong term, but it's pretty close. There are thousands of "open access journals" created only to make money by sounding as if they were legitimate journals, getting people to send them articles, and then charging to publish them. I get spam from them on a daily basis. They aren't legitimate, they have no interest in quality, and they have no reason ever to reject an article.

Please don't confuse legitimate open access journals like PLoS with these scams.

Make peer review better (1)

arhpreston (3420391) | about 10 months ago | (#45331477)

Disclosure: I'm the co-founder of Publons.com

Really good post. I think you've hit on the key issue, which is that peer review can be done better.

One problem is that peer review probably is done well in a lot of cases; we only hear about it when the system breaks down. From that perspective the most obvious solution is to start focusing on peer review and giving reviewers credit for the times where they do a good job. One way we're doing that is by assigning DOIs to post-publication reviews that the community decide are a valuable contribution to science. This turns reviews into citable, indexable publications in their own right. (See e.g., http://dx.doi.org/10.14322/publons.r38 [doi.org] )

Why? (0)

Anonymous Coward | about 10 months ago | (#45331483)

Isn't much of the supposedly good research seriously faulty and impossible to reproduce? Fix that first?

Re:Why? (0)

Anonymous Coward | about 10 months ago | (#45332167)

Yes, this is a distraction. http://nexus.od.nih.gov/all/2012/02/13/age-distribution-of-nih-principal-investigators-and-medical-school-faculty/
