
Open Research Computation Closes Before Opening

Soulskill posted more than 2 years ago | from the good-run-folks dept.

Programming

New submitter wagdav writes "Open Research Computation, a peer-reviewed journal on software designed for use by researchers, closes on 8 May 2012. It began accepting manuscripts sometime last year but never actually launched. The journal was to be open access and tried to set itself apart with very demanding pre-submission requirements: code availability, high-quality documentation and testing, the availability of test input and output data, and reproducibility. It is now planned to launch as an ongoing series in Source Code for Biology and Medicine."


22 comments


And that's why (0)

Anonymous Coward | more than 2 years ago | (#39864945)

That's why I bought a Saturn.

The other journal is open access also. (5, Interesting)

Mathinker (909784) | more than 2 years ago | (#39864985)

The summary fails to note that the other journal is also open access. If I were more cynical, I'd think that some scientific publishers want to give the impression that "open access" is failing before it even starts.

Re:The other journal is open access also. (1)

Taco Cowboy (5327) | more than 2 years ago | (#39865553)

I can't tell whether this is a case of "fails to start" or "starts to fail".

Re:The other journal is open access also. (2)

Mathinker (909784) | more than 2 years ago | (#39866161)

My guess is that this is simply a case of a prospective journal not getting as many submissions as expected. Why anyone thought this was particularly newsworthy is beyond me --- hence the rising cynicism / astroturf-sniffer.

Re:The other journal is open access also. (4, Interesting)

tibit (1762298) | more than 2 years ago | (#39867871)

The reason is probably obvious: they really wanted to publish good science. I wouldn't be surprised if a lot of computational results are obtained with software that's tweaked until it "works", held together with chewing gum and spit, and too fragile to survive a FORTRAN compiler upgrade. Nobody cared enough to comply with their high standards when the same old way of "doing it" will get you published elsewhere. Their failure is probably another contribution to the body of proof that a lot of published "science" out there has the pungent aroma of a freshly fertilized field on an organic farm. They did exactly what Feynman would have liked journals to do, and exactly what he'd have expected all scientists to do. It's sad in a way that it didn't work out.

You can't go wrong... (4, Funny)

Black Parrot (19622) | more than 2 years ago | (#39864987)

...with a journal named ORC.

Wouldn't it be more accurate... (1)

TWX (665546) | more than 2 years ago | (#39865059)

...to say that the project has been stopped before it opened? I don't see how it could close if it had never opened in the first place, and since this was to be a journal about computer science, and arguably about logic, this makes no sense as stated...

Re:Wouldn't it be more accurate... (2)

Smallpond (221300) | more than 2 years ago | (#39866261)

The open completed but no writes had taken place. A close was still necessary or you'd have an extra file handle.
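For the literal-minded, here's the same sequence sketched in shell (the file name is made up for illustration):

exec 3> orc_journal.txt    # open(2): create the file and hold descriptor 3
                           # ...no writes ever happen...
exec 3>&-                  # close(2): release the descriptor instead of leaking it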

Re:Wouldn't it be more accurate... (0)

Anonymous Coward | more than 2 years ago | (#39872497)

The open completed but no writes had taken place. A close was still necessary or you'd have an extra file handle.

Dude, in the 21st century we recycle.

Telepathy (1)

dimethylxanthine (946092) | more than 2 years ago | (#39866063)

I once had a girlfriend who was a telepath. She dumped me before we even met. -- I'll be here all week :)

Re:Telepathy (1)

X0563511 (793323) | more than 2 years ago | (#39870317)

That's not what telepathy means, you meat-head!


no wonder (1)

l3v1 (787564) | more than 2 years ago | (#39866747)

No wonder.

I mean, "users must have the right to examine, compile, run and modify the code for any purpose" (emphasis mine): really? I know it's "science" and "open", but come on, realism needs to kick in at some point.

Re:no wonder (1)

Fwipp (1473271) | more than 2 years ago | (#39867105)

What's unrealistic about letting them use the code however they want?

Re:no wonder (1)

tibit (1762298) | more than 2 years ago | (#39867895)

In a computational paper, software is part of the methodology. Not being able to use the code as the reader pleases is equivalent to not being able to reproduce the results. If a journal makes it purposefully hard to reproduce the results, it's not a scientific journal.

Re:no wonder (0)

Anonymous Coward | more than 2 years ago | (#39869639)

But this ignores the reality of developing _research_ software. Research software is developed to answer research questions; rarely is there incentive or funding to develop it beyond that purpose. Requiring extensive testing, documentation, and the ability to easily run it on any other system sets the bar beyond what most labs can justify. Most researchers work in environments that are very difficult to reproduce (e.g., if I'm running a simulation that requires grid resources such as a Blue Gene and maybe a few cycles on Jaguar, it would not be possible for someone else to replicate that environment, but the results could still be very useful).

Sure, some research projects reach critical mass and end up with good documentation and test suites. However, that usually happens after the novel work has already been published and a lab takes on the mission of maintaining their tools for others to use. At that point, it's simply software engineering and not research.

The additional engineering required is non-trivial and takes significant time and money commitments. Some grants specifically fund this activity, but in most cases, the time and money will not be available for a researcher to fully engineer robust, deployable software.

There's also the simple fact that most of the people working on this software have no experience developing user-facing software. A grad student or postdoc is at the beginning of their career and, while they may be a great coder, they most likely haven't developed the skill set to release and maintain full software systems. More importantly, they shouldn't need to: their job is research, not developing tools for everyone else to use.

We've made it over 50 years without this level of reproducibility in the literature. While a lot of what's been published may not be reproducible, that's true in _every_ discipline, not just computer science. A much better approach would be to have standards for describing methods and environments. Simple things like compiler flags and library versions are often omitted but go a long way towards helping people replicate work.

-Chris
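As a rough sketch of that last suggestion -- recording compiler flags and library versions alongside published results -- something as small as the following shell script would do. The binary name "simulate" and the flags are placeholders for a real project:

#!/bin/sh
# manifest.sh -- record the build environment next to the results.

CFLAGS="-O2 -ffast-math"    # the exact flags used for the paper's runs

{
    echo "date:     $(date -u)"
    echo "system:   $(uname -srm)"
    echo "compiler: $(cc --version | head -n 1)"
    echo "cflags:   $CFLAGS"
    echo "libraries:"
    ldd ./simulate           # shared-library versions the binary links against
} > ENVIRONMENT.txt

Shipping ENVIRONMENT.txt with a paper costs almost nothing and answers exactly the questions (which compiler? which flags? which library versions?) the parent says are usually omitted.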

Re:no wonder (0)

Anonymous Coward | more than 2 years ago | (#39872569)

Not being able to use the code as the reader pleases is equivalent to not being able to reproduce the results.

So my not being able to use code for Wolfram Alpha as I please is equivalent to my not being able to reproduce, say, 2^42-1 in Wolfram Alpha?

Re:no wonder (1)

l3v1 (787564) | more than 2 years ago | (#39877049)

Not being able to use the code as the reader pleases is equivalent to not being able to reproduce the results.

I'm sorry, but this is stupid. Research is not free-software charity work. Most software developed for research -- unless we're talking strictly about software research, of course -- is proof-of-concept code to underline an idea, and the results are what matter, not the implementation. Most researchers wouldn't even have time to create publishable-quality code, because the code isn't the goal; it's just a tool. But all is not lost: most sincere researchers will run their algorithms on your data if you ask, and many provide at least some libraries or binaries so you can test their stuff.

But saying that the results are irreproducible without the software is simply not true. Papers are all about describing the methods -- sometimes a bit vaguely, true, but then again, if you want to patent and protect something, you can't always be totally clear. It's just how this game is played.

Reproducing an algorithm from the descriptions in a paper is usually a student's job, sometimes used as an entry-level filter when they want to work on something more serious. If you fail to understand and implement algorithms, you haven't got much to look for in research; implementing algorithms is a basic thing you need to be able to do. Otherwise you're just a curious citizen.

Re:no wonder (1)

tibit (1762298) | more than 2 years ago | (#39886165)

if you want to patent and protect something, you can't always be totally clear. It's just how this game is played

I'd be seriously pissed if any of my taxes went to fund your research (if you do any). Not only are you a bad-science apologist (it's all a game to you), you're also wrong. If you want to patent something, you're free to be as clear as you wish; there's at least one HP patent that includes an entire instrument's firmware, and yes, it runs to hundreds of pages. You have a limited time to apply for a patent once you publish, but that's it. Go read up on IP law one day, please.

Reproducing an algorithm from descriptions in a paper is usually a student's job

And how on earth does that help with anything? It's completely backwards! If your lab setup is just a bunch of code, why not publish it along with your paper and let others simply re-run it on the same data and see the same output? Why task a student with reproducing something that's already done and that costs next to nothing to distribute? It's not about reproducing someone's handmade one-off lab setup; it's about compiling some code with the same tools and running it on the same files. Other than having the source, data and makefiles needed to re-run the code on the data, there's nothing to it.

And where the heck did I say one needs "publishable quality code"? I lamented that most research code is a dirty hack, but that doesn't mean it can't be published! It would be too long to include in a journal article anyway; it's all about putting up a zip file in the supplemental materials that the journal publisher distributes electronically (almost all do, these days).

What you're advocating is forcing everyone who wants to enter the field to waste time on busywork. Sure, they can catch up with you quicker if they can get your software, but so what? I don't see what's wrong with that. Heck, I think anything less is just making up excuses.

It doesn't take a year of work to drive the entirety of your code and data processing from Makefiles or a similar mechanism, and to make sure that whatever ends up in your paper is produced in the same make run. It takes just a tiny bit of discipline. I've helped a few Ph.D. students do it this way, under version control, all the way to the thesis PDF, all from one cmake file. At the very least it made them trust their results and embrace change without fear of breaking something irreversibly.

Sure, there will be times when you're running calculations on a cluster and such, but even then you should be able to go from source code to results in one repeatable, no-manual-tweaks-needed operation. How can you trust your own results if you can't simply re-run everything at will, or at least re-run smaller subproblems to save time?
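To make the parent's point concrete, here is a minimal sketch of such a pipeline as a single shell script; every file name in it is hypothetical:

#!/bin/sh
# rebuild.sh -- one repeatable, no-manual-tweaks run from source to paper.
set -e                                # abort on the first failure

cc -O2 -o simulate simulate.c         # build the analysis code
./simulate input.dat > results.csv    # regenerate every number from the raw data
gnuplot figures.gp                    # redraw the figures from results.csv
pdflatex paper.tex                    # rebuild the paper with the fresh figures

Whether it's a script, a Makefile or a cmake project matters less than the property being described: anyone with the source and the data can re-run the whole thing with one command.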

Re:no wonder (1)

complex_pi (2030154) | more than 2 years ago | (#39869895)

They don't require that you allow unlimited distribution of the software, nor that anyone be able to sell it. Examine, compile and run: it's not as scary as you make it sound :-)

first article (0)

Anonymous Coward | more than 2 years ago | (#39868175)

// announcement.c:

#include <stdio.h>

int main(void)
{
    printf( "goodbye, world,\n" );
    return 0;
}

# greeting_unit_test.sh:

#!/bin/sh
./announcement > announcement.txt
# Note the unescaped '.' before the '$': it matches the trailing comma
# just as happily as a period would, so this test passes either way.
grep "^goodbye, world.$" announcement.txt > /dev/null
if [ $? -eq 0 ]; then
    echo "Test passed. Thanks for 0 great years!"
else
    echo "Test failed."
fi
