Frequent contributor Bennett Haselton writes: "A Harvard biologist was able to get an intentionally flawed paper accepted for publication by a number of open-access academic journals, including some that had supposedly been vetted for quality by advocates of open access. It seems the problem could be mitigated by consolidating journals within a field, so that there are far fewer of them, each publishing many more articles -- the review process takes the same amount of labor, but you have fewer journals that have to be audited for procedural honesty." Read on for the rest, including his idea to solve the problem of fraudulent submissions (or even just sub-par science) through simplification.
Harvard biologist John Bohannon wrote about his experiment in an article published by Science Magazine. He submitted his deliberately bogus paper to 304 open-access publishers, including 183 that were listed in the Directory of Open Access Journals (DOAJ), which Bohannon calls the "Who's Who of credible open-access journals", and whose quality is supposedly vetted by the DOAJ staff.
Of the 304 open-access journals targeted by the sting, 60% published the paper. I think this mainly just shows that the average quality of open-access journals is likely to remain low, but that's not surprising, since anyone in the world can set up an "open-access journal". That shouldn't be relevant to the reputation of the best open-access journals. If the best open-access journals acquire a reputation for high standards and proper peer review, then that will attract high-quality papers, whose publication will reinforce the reputation of the journal, which enables it to confer prestige on the papers it publishes, which in turn will continue to attract high-quality papers. The existence of other open-access journals with crummy standards should be irrelevant.
What's more disturbing is that, of the 183 journals listed in the Directory of Open Access Journals, 45% published the paper -- which, according to Bohannon's article, surprised and disappointed the DOAJ founders. But perhaps if you're maintaining a database of thousands of allegedly reputable open-access journals, there's no way to make sure that they're all telling the truth about their standards and their practices. At a quick glance, all you can really say is that they would be good-quality journals if they're telling the truth about how they operate, but it's hard to tell from the outside whether they're being honest.
So perhaps a different solution is that we don't really need a huge number of good open-access journals. Rather, in each field, you could get by with a small number of "super-journals" which have a lot of reviewers on file, and which publish a high number of papers but apply uniformly high standards across all of them.
Consider: you have two journals, A and B. Each has its own non-overlapping database of 20 reviewers. When either journal receives a paper, its standard practice is to send the paper to 3 randomly chosen reviewers in its database. Each journal receives 10 submissions per month.
Now combine A and B to form one single journal which has 40 reviewers and gets 20 submissions per month, and still sends out each submitted paper to three randomly chosen reviewers. The total amount of work performed by the reviewers doesn't change: separately, each journal's 20 reviewers handled 10 papers x 3 reviews = 30 reviews per month, or 1.5 reviews per reviewer; combined, 40 reviewers handle 20 x 3 = 60 reviews, still 1.5 each. But now, if you're auditing the quality of a journal according to its adherence to its own practices, you only have to audit one journal instead of two. By the same logic there's no reason in principle that any number of journals in one field couldn't be subsumed into a few behemoths, which apply uniform standards across all their papers.
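The arithmetic behind this claim can be sketched in a few lines of Python (the numbers are just the ones from the example above; nothing here is specific to any real journal):

```python
def monthly_load(num_reviewers, submissions_per_month, reviewers_per_paper=3):
    """Average number of reviews each reviewer handles per month,
    assuming reviewers are chosen uniformly at random per submission."""
    total_reviews = submissions_per_month * reviewers_per_paper
    return total_reviews / num_reviewers

# Journal A (or B) on its own: 20 reviewers, 10 submissions/month.
load_separate = monthly_load(num_reviewers=20, submissions_per_month=10)

# The merged journal: pooled reviewers, pooled submissions.
load_merged = monthly_load(num_reviewers=40, submissions_per_month=20)

print(load_separate, load_merged)  # both come out to 1.5
```

The per-reviewer load is unchanged because both the reviewer pool and the submission volume scale by the same factor; only the number of institutions to audit shrinks.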
You could do this without waiting for the traditional system to be dismantled. Somebody in the field just assembles a list of people to be peer reviewers for the "virtual super-journal". That list is public, so that anybody can audit it and see that it consists of people with a credible reputation in their field. Anyone who pays the (nominal) fee can submit a paper to the VSJ, which sends the paper to a random selection of n reviewers from that list. If the paper "passes" the test, then it gets the stamp of approval of the VSJ, which says, "This paper was judged to be good by a majority of a random sample of reviewers on our list, and you can see from this list that the quality of our reviewers is pretty good."
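The VSJ's review procedure described above could be sketched as follows. This is a minimal illustration, not a proposed implementation; the function names, the panel size n=5, and the strict-majority rule are assumptions of mine (the original only specifies "a random selection of n reviewers" and a majority verdict):

```python
import random

def vsj_review(paper, reviewer_pool, n, judge):
    """Send `paper` to a random sample of n reviewers from the public
    pool; the paper passes if a strict majority of the panel approves.
    `judge(reviewer, paper)` stands in for a real review and returns
    True (accept) or False (reject)."""
    panel = random.sample(reviewer_pool, n)
    approvals = sum(1 for reviewer in panel if judge(reviewer, paper))
    return approvals > n // 2

# Hypothetical usage: a public list of 40 reviewers and a stand-in
# judge that always approves.
pool = [f"reviewer_{i}" for i in range(40)]
always_yes = lambda reviewer, paper: True
print(vsj_review("my_paper.pdf", pool, n=5, judge=always_yes))  # True
```

Because the pool is public, anyone can audit the list of reviewers, and the only procedural question left to verify is that panels really are drawn at random from it.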
And what if someone wants to publish their paper in some other journal XYZ, and also wants to submit it to the VSJ just to get a certification of its quality, but journal XYZ doesn't allow simultaneous submission to another journal for publication? In that case, you can still submit your paper to get the stamp of approval from the VSJ -- just pay the normal reviewing fee, and if it passes the VSJ's review process, they can list the paper on their website, saying, "This paper was judged to be good by a majority of our reviewers. We can't actually publish the paper here, because some other journal XYZ has exclusive publication rights, but you can view the paper at this link in this other journal." You still have the self-reinforcing cycle where the VSJ's stamp of approval maintains high standards, which attracts high-quality papers, which reinforce the reputation of the VSJ's stamp of approval. There's no part of that cycle that requires the VSJ to actually "publish" the paper itself.
And people could subscribe to the VSJ's "stamp of approval" feed the way they subscribe to any other publication -- the VSJ can send out the papers themselves that they have the right to publish, or links to papers in other journals, saying, "This paper got our stamp of approval, and follow the link to read it here."
You could even use this process to do a "hit job" on someone else's paper that got published in another journal, but which you think is too low-quality to have been published. You can submit it to the VSJ, and if the VSJ rejects it, you can ask them to list it as a paper that failed their review process. (Whether or not the VSJ would give you the option of doing this may depend on its policies. It "sounds mean", yes, but academics are supposed to keep each other honest. I've never heard of a traditional journal doing that -- calling out a paper published somewhere else and saying, "This sucked, we never would have published it.")
There should probably be multiple open-access journals (or Virtual Super-Journals) within each field, so that the competition between them keeps them honest. But there's no reason to have such a huge number of them that the Directory of Open Access Journals can't keep track of what they're doing.