
Bloggers Put Scientific Method To the Test

Unknown Lamer posted about a year and a half ago | from the like-mythbusters-but-with-science dept.

Science 154

ananyo writes "Scrounging chemicals and equipment in their spare time, a team of chemistry bloggers is trying to replicate published protocols for making molecules. The researchers want to check how easy it is to repeat the recipes that scientists report in papers — and are inviting fellow chemists to join them. Blogger See Arr Oh, chemistry graduate student Matt Katcher from Princeton, New Jersey, and two bloggers called Organometallica and BRSM, have together launched Blog Syn, in which they report their progress online. Among the frustrations that led the team to set up Blog Syn are claims that reactions yield products in greater amounts than seems reasonable, and scanty detail about specific conditions in which to run reactions. In some cases, reactions are reported which seem too good to be true — such as a 2009 paper which was corrected within 24 hours by web-savvy chemists live-blogging the experiment; an episode which partially inspired Blog Syn. According to chemist Peter Scott of the University of Warwick in Coventry, UK, synthetic chemists spend most of their time getting published reactions to work. 'That is the elephant in the room of synthetic chemistry.'"


154 comments


Terrible, Terrible, Headline (5, Insightful)

damn_registrars (1103043) | about a year and a half ago | (#42652853)

The bloggers are not testing the scientific method, they are testing methods that are scientific. Those are two vastly different concepts. Their work is important, but not epic.

Re:Terrible, Terrible, Headline (2, Interesting)

girlintraining (1395911) | about a year and a half ago | (#42652963)

The bloggers are not testing the scientific method, they are testing methods that are scientific

Putting your ignorance in boldface type is amusing. The most basic promise of the scientific method is that results can be replicated by anyone with the proper equipment repeatedly and reliably. This is accomplished by describing an experiment to the detail level necessary to reproduce the result. If the result cannot be reproduced from a description of the experiment, it has failed this test.

They are testing the scientific method insofar as asking whether professional and peer-reviewed scientific work actually meets this basic test. And in many cases it doesn't. Science only works if it is built on a firm foundation: Their work isn't just important, it's critical. It may not be fun, but explorations like this prevent us from assembling a body of knowledge and understanding based on flawed experiments... and that has happened many times in the history of science, especially in medicine.

Re:Terrible, Terrible, Headline (5, Informative)

gman003 (1693318) | about a year and a half ago | (#42653069)

They are testing whether scientific papers meet the scientific method (i.e., the results are reproducible). They are not testing the validity of the scientific method itself (myself, I cannot see how one could test the scientific method without using it, thus bringing the results into question).

That is the point GP was attempting to make.

Re:Terrible, Terrible, Headline (5, Funny)

c0lo (1497653) | about a year and a half ago | (#42653147)

myself, I cannot see how one could test the scientific method without using it, thus bringing the results into question

So little faith you have...

(large grin)

How about unpublished protocols ? (1)

Taco Cowboy (5327) | about a year and a half ago | (#42653661)

There are protocols which are published, and then there are protocols that remain unpublished.

How about those protocols which have remained unpublished?

Any way to test those?

Re:How about unpublished protocols ? (1)

alvinrod (889928) | about a year and a half ago | (#42653717)

Sure, there is. All you have to do is send me $500 and I will test them according to some protocols which I have developed. I'd like to share them with you, but unfortunately due to proprietary business secrets which must remain undisclosed, I will be unable to publish them.

Re:How about unpublished protocols ? (1)

Shavano (2541114) | about a year and a half ago | (#42654213)

There are protocols which are published, and then there are protocols that remain unpublished.

How about those protocols which have remained unpublished?

Any way to test those?

That's not science. That's grandma's secret recipe.

Re:How about unpublished protocols ? (1)

c0lo (1497653) | about a year and a half ago | (#42654279)

How about those protocols which have remained unpublished?

Ok... for you, let me be as explicit as possible: put your faith in God and pray!

Is it clearer now?
If not, do consider the context (or else [xkcd.com] ).

Still confused? I hope this will clear the matter: whoosh!

Re:Terrible, Terrible, Headline (0)

girlintraining (1395911) | about a year and a half ago | (#42653267)

They are testing whether scientific papers meet the scientific method (ie. the results are reproducible). They are not testing the validity of the scientific method itself (myself, I cannot see how one could test the scientific method without using it, thus bringing the results into question).

Epistemology [wikipedia.org] would have something to say about your sight. Yes, the scientific method can be evaluated and tested. Anyway, if that's what you say the GP is making as a point, then it's a really stupid and pedantic one. The scientific method isn't just an abstract concept, it's also a means to an end. If we find that many people are making the same mistakes in the process itself, then yes, it is a statement about the scientific method.

It's like saying a lake is a body of water and ignoring the fact that you also need something to hold it in.

Re:Terrible, Terrible, Headline (1)

Anonymous Coward | about a year and a half ago | (#42653497)

Oh good lord. Testing others' methods is not testing the scientific method. They are using the Scientific Method. What is wrong with Slashdot? The Scientific Method is not merely doing stuff in science. Part of the method is testing others' work. This is what they are doing. Showing that you can repeat an experiment is part of the Scientific Method. They are not testing the Scientific Method itself. Please do tell: how would you test the Scientific Method without using the Scientific Method?

Re:Terrible, Terrible, Headline (-1, Troll)

Your.Master (1088569) | about a year and a half ago | (#42653559)

What's wrong with Slashdot is pedants like you who not only aren't getting it, but refuse to consider the possibility that there's something you aren't getting.

Testing whether the previous tests have captured flaws that could easily be found by bloggers IS a test of the practical effectiveness of the scientific method, using the scientific method. Even if in principle the platonic ideal of the scientific method hasn't been tested, the actual reality is being tested.

More abstractly, there's no problem testing the scientific method with the scientific method. If the test using the scientific method says that the scientific method fails very often, then you definitely have a problem. If it says that you don't, then you may or may not have a problem, depending on whether there's a flaw in the scientific method that the scientific method cannot itself detect. Which is okay, because the scientific method itself is often about testing a hypothesis and believing it on the basis of how many times people fail to disprove it. Anything further than that is an epistemology discussion.

Re:Terrible, Terrible, Headline (2)

gandhi_2 (1108023) | about a year and a half ago | (#42653119)

This speaks to the failings of the participants' implementation of the scientific method, not to the failing of The Scientific Method.

We have discussed here before the problem of too much content being generated and not enough people to peer review it all. Still not a failing of The Scientific Method.

And I prefer my ignorance in courier font.

Re:Terrible, Terrible, Headline (3, Insightful)

Shavano (2541114) | about a year and a half ago | (#42654225)

No, it means that the original experimenters didn't describe their experiment correctly. Or worse, may have never done it at all...

Re:Terrible, Terrible, Headline (2)

mysidia (191772) | about a year and a half ago | (#42654433)

We have discussed here before the problem of too much content being generated and not enough people to peer review it all. Still not a failing of The Scientific Method.

The article's headline text is a demonstration of that issue of too much content and not enough peer review :)

Re:Terrible, Terrible, Headline (3, Informative)

damn_registrars (1103043) | about a year and a half ago | (#42653191)

The bloggers are not testing the scientific method, they are testing methods that are scientific

Putting your ignorance in boldface type is amusing. The most basic promise of the scientific method is that results can be replicated by anyone with the proper equipment repeatedly and reliably

And I am sorry that you struggle so greatly to understand what I have written.

They are testing the scientific method insofar as asking whether professional and peer-reviewed scientific work actually meets this basic test.

Do you not understand the purpose of peer review? If results that were peer-reviewed are not reproducible, that is not a failing of the scientific method itself, nor is it a failing of peer review. Peer review does not exist to validate methods; that would be a nearly impossible task for the majority of scientific papers published today, unless the journal sent an editor to the submitting lab to rerun the work themselves, which would be so absurdly expensive that nobody would ever pay to publish. Peer review is intended to make sure that published work is scientifically rigorous and well written.

Hell, if you go back and actually read my comment - I would say re-read but it does not appear you read it successfully for a first time yet - you will find that I did say this work is important. I also said that it is not testing the scientific method itself, which is correct.

Re:Terrible, Terrible, Headline (2)

fluffy99 (870997) | about a year and a half ago | (#42653367)

Do you not understand the purpose of peer-review? If results that were peer-reviewed are not reproducible, that is not a failing of the scientific method itself, nor is it a failing of peer review. Peer review does not exist to validate methods as that would be quite nearly an impossible task for the majority of all scientific papers that are published currently unless the journal sent an editor to the lab that submitted said paper to rerun the work themselves - which would be so absurdly expensive that nobody would ever pay to publish. Peer review is intended to make sure that work published is scientifically rigorous and well written.
 

Many of the published results and methods being verified are ones with questionable results, such as implausibly high yields, or reactions that don't look as if they should work at all. Those are papers for which peer review has failed to provide adequate scrutiny. If a paper had truly been read in depth by other equally qualified scientists, these issues would have been noticed and the paper (published or not) would have been called into question.

The caveat to this would be papers that are published with the sole purpose of seeking peer review and inviting others to validate the results, for example many of the cold fusion papers and the experiment which implied neutrinos were traveling faster than light.

I also recognize that peer review happens both before and after publication, and in fact the bloggers are part of the peer-review process.

Re:Terrible, Terrible, Headline (0)

Anonymous Coward | about a year and a half ago | (#42653355)

E-specially, forgive me I couldn't help myself. I am one of your devoted and humble fans girlintraining.

Re:Terrible, Terrible, Headline (0)

Anonymous Coward | about a year and a half ago | (#42653583)

Thank you for making your ignorance more verbose, and ignorant

They are not testing the method, they're testing the documentation and practices of those who claim to be using this method.

If an experiment is not repeatable given the information published by peers, due to vague or ambiguous documentation, then the method hasn't failed; those publishing and peer-reviewing said documentation have failed.

Re:Terrible, Terrible, Headline (1)

nedlohs (1335013) | about a year and a half ago | (#42654119)

That's not testing the scientific method. Testing the scientific method would involve some test as to whether the scientific method itself works. Determining if some published experiment is actually described in a way that is reproducible says nothing about whether the scientific method itself works or is useful.

Asking whether professional and peer-reviewed scientific work is actually using the scientific method is also not asking whether the scientific method itself works or is useful.

I agree it's a useful thing to do though. In fact it seems like something that should be a routine part of graduate studies, heck even honors level studies. Give them some experience with real world experiments and check something to boot.

Re:Terrible, Terrible, Headline (0)

Anonymous Coward | about a year and a half ago | (#42654299)

To find that a researcher falsified results, or can't write a paper if his life depended on it, does not reflect on the validity of the scientific method in the least. They are using the scientific method to test published works produced by others.

The subject of testing is the work, not the method. I don't know how many other ways I can put this to explain it to you, but the end result is that damn_registrars is correct in his statement.

Re:Terrible, Terrible, Headline (0)

Anonymous Coward | about a year and a half ago | (#42654335)

If the result cannot be reproduced from a description of the experiment, it has failed this test.

You might as well say that the laws of reality have failed this test. What actually fails, of course, is not the scientific method, but the scientists who published the irreproducible results in the first place, and the scientists who allowed them to pass peer review.

Re:Terrible, Terrible, Headline (1)

phantomfive (622387) | about a year and a half ago | (#42652985)

they are testing methods that are scientific.

If a lot of those experiments can't be reproduced, then those methods weren't scientific to begin with.

Re:Terrible, Terrible, Headline (-1, Offtopic)

Anonymous Coward | about a year and a half ago | (#42653085)

Just like the whole of "global warming science."

Re:Terrible, Terrible, Headline (0)

gandhi_2 (1108023) | about a year and a half ago | (#42653127)

Oh snap!

Re:Terrible, Terrible, Headline (5, Insightful)

flink (18449) | about a year and a half ago | (#42653097)

Right, but they are utilizing the scientific method to test the quality of published papers, not attempting to verify the utility of the scientific method itself.

The headline should read "Bloggers apply scientific method to validate published findings".

Re:Terrible, Terrible, Headline (4, Insightful)

phantomfive (622387) | about a year and a half ago | (#42653137)

"Bloggers apply scientific method to validate published findings".

A much better headline.

Re:Terrible, Terrible, Headline (0)

Anonymous Coward | about a year and a half ago | (#42654847)

Yeah, but he has the benefit of being interested in science and technology. The editors at Slashdot don't care about that nerdy shit; they're just nerd chic. (See Timothy's glasses.)

Re:Terrible, Terrible, Headline (1)

jedidiah (1196) | about a year and a half ago | (#42653397)

...or alternately: The Half Blood Prince tweaks the results.

Re:Terrible, Terrible, Headline (0)

mysidia (191772) | about a year and a half ago | (#42654447)

Right, but they are utilizing the scientific method to test the quality of published papers

This could turn into a study on... what is the quality of published papers? :)

If studies show that studies are meaningless, e.g. if it is shown that the scientific method has very often been ignored...

Then the scientific method is effectively shot, not because it's invalid, but because it's not being followed, and it becomes no longer reasonable to believe researchers are following it

Re:Terrible, Terrible, Headline (2)

RespekMyAthorati (798091) | about a year and a half ago | (#42654553)

Then the scientific method is effectively shot, not because it's invalid, but because it's not being followed, and it becomes no longer reasonable to believe researchers are following it

That is identical to:

People who don't do up their seatbelt buckles die.
Therefore, seat belts fail to protect people.

The fact that people are failing to apply the scientific method does not mean that the scientific method has failed, only that some people who call themselves "scientists" shouldn't.

Re:Terrible, Terrible, Headline (1)

damn_registrars (1103043) | about a year and a half ago | (#42653129)

they are testing methods that are scientific.

If a lot of those experiments can't be reproduced, then those methods weren't scientific to begin with.

Not necessarily. Sure, irreproducible results can be the result of shoddy, missing, or fabricated work, and unfortunately often are. But there are also times when such results come about through no fault of the experimenter. If a scientist was running an overnight synthesis and was not notified that the HVAC failed for two hours starting at 3am, is that his fault? Sure, he should check afterwards to make sure the environmental conditions were properly controlled, but that isn't always immediately apparent. Similarly with a pressure regulator, or any number of other laboratory instruments which should be reliable but have a nasty habit of failing in interesting ways at inopportune times.

There are times when good science is done, and bad (or badly reproducible) results come out.

Re:Terrible, Terrible, Headline (2)

RatherBeAnonymous (1812866) | about a year and a half ago | (#42653227)

A scientist should also run experiments multiple times to see if the results are repeatable before publishing those results. If you can't repeat your results you can't possibly give others instructions on how they can repeat them. Not knowing that the HVAC failed for a couple hours during one run out of a dozen should result in outlier results that can be investigated or discarded.

Re:Terrible, Terrible, Headline (1)

fluffy99 (870997) | about a year and a half ago | (#42653375)

A scientist should also run experiments multiple times to see if the results are repeatable before publishing those results. If you can't repeat your results you can't possibly give others instructions on how they can repeat them. Not knowing that the HVAC failed for a couple hours during one run out of a dozen should result in outlier results that can be investigated or discarded.

Unfortunately, sometimes the experiments do get run multiple times and the data that didn't fit the expected results was thrown out and not included in the final data. I've seen far too many scientists who automatically throw out the highest and lowest samples and only average the data that grouped nicely, without making much effort to find out why some samples deviated.
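To see what that practice does to the numbers, here is a minimal sketch with a hypothetical set of yields (not from any real paper): silently dropping the highest and lowest runs makes the data look far more repeatable than it actually is.

```python
import statistics

# Hypothetical yields (%) from ten runs of the same reaction; two runs deviated badly.
yields = [62, 64, 63, 61, 65, 63, 62, 38, 64, 89]

# The questionable practice: silently drop the highest and lowest samples,
# then report only the runs that "grouped nicely".
trimmed = sorted(yields)[1:-1]

print(statistics.mean(yields), statistics.stdev(yields))    # full data: wide spread
print(statistics.mean(trimmed), statistics.stdev(trimmed))  # trimmed: looks tight and clean
```

The trimmed numbers are not wrong as arithmetic; they are wrong as a summary, because the deviant runs were exactly the ones worth investigating.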

Re:Terrible, Terrible, Headline (0)

Anonymous Coward | about a year and a half ago | (#42653573)

Science seems much more brute force now rather than driven by curiosity. Curiosity might lead someone to make a new discovery by exploring those outliers. Both methods yield results though. Progress is made. One has the mindset of industry and the other of enlightenment.

Re:Terrible, Terrible, Headline (1)

paiute (550198) | about a year and a half ago | (#42653477)

A scientist should also run experiments multiple times to see if the results are repeatable before publishing those results.

No time. Many if not most synthetic methodology papers will test a new reaction on a range of homologous compounds, say 20. Few of those are repeated twice, let alone multiple times. A lot of this work comes out of graduate school groups where the emphasis is on publishing rapidly. There is one Nobel Prize winner in particular whose publications in Tetrahedron Letters (usually a two page paper) are notorious for sacrificing accuracy for rapidity.

Nobel prizes come from "Funny, that's odd" (0)

Anonymous Coward | about a year and a half ago | (#42653457)

And it's exactly those things you didn't realize were happening that lead to the discovery of new science.

Re:Terrible, Terrible, Headline (1)

Zontar The Mindless (9002) | about a year and a half ago | (#42653221)

Right, I tagged the story "crapheadline" as soon as I RTFS.

It might be epic (5, Interesting)

Okian Warrior (537106) | about a year and a half ago | (#42653301)

The bloggers are not testing the scientific method, they are testing methods that are scientific. Those are two vastly different concepts. Their work is important, but not epic.

I'm not so sure about that.

We believe in a scientific method founded on observation and reproducible results, but for a great number of papers the results are not reproduced.

Taking the soft sciences into consideration (psychology, social sciences, medicine), most papers hinge on a 95% confidence level [explainxkcd.com]. This means that roughly 1 out of every 20 "significant" results could arise from chance alone, and no one bothers to check.
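The 1-in-20 figure follows directly from the threshold itself: under a true null hypothesis a p-value is uniformly distributed on [0, 1], so testing pure noise at the 0.05 level still flags about 5% of studies. A minimal simulation (hypothetical numbers, Python standard library only):

```python
import random

random.seed(42)

# Under a true null hypothesis, a p-value is uniform on [0, 1].
# Simulate 100,000 studies of pure noise, each declared "significant"
# whenever its p-value falls below the conventional 0.05 threshold.
trials = 100_000
alpha = 0.05
false_positives = sum(1 for _ in range(trials) if random.random() < alpha)

rate = false_positives / trials
print(f"False-positive rate at alpha={alpha}: {rate:.3f}")  # close to 0.05, about 1 in 20
```

That is the best case; if effects are also small and studies underpowered, the fraction of published results that are spurious can be considerably worse.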

Recent reports tell us depression meds are no better than chance [www.cwcc.us] and scientists can only replicate 11% of cancer studies [nbcnews.com] , so perhaps the ratio is higher than 1 in 20. And no one bothers to check.

I've read many follow-on studies in behavioral psychology where the researchers didn't bother to check the original results, and it all seems 'kinda fishy to me. Perhaps wide swaths of behavioral psychology have no foundation; or not, we can't really tell because the studies haven't been reproduced.

And finally, each of us has an "ontology" (ie - a representation of knowledge) which is used to convey information. If I tell you a recipe, I'm actually calling out bits of your ontology by name: add 3 cups of flour, mix, bake at 400 degrees, &c.

This assumes that your ontology is the same as mine, or similar enough that the differences are not relevant. If I say "mix", I assume that your mental image of "mix" is the same as mine. But people screw up recipes, don't understand assembly instructions, and are confused by small nuanced differences in documentation.

Does this happen in chemistry?

(Ignoring the view that reactions can depend on aspects that the researchers were unaware of, or didn't think were relevant. One researcher told me that one of her assistants could always make the reaction work but no one else could. Turns out that the assistant didn't rinse the glassware very well after washing, leaving behind a tiny bit of soap.)

It's good that people are reproducing studies. Undergrads and post-grads should reproduce results as part of their training, and successful attempts should be published - if only as a footnote to the original paper ("this result was reproduced by the following 5 teams..."). It's good practice for them, it will hold the original research to a higher standard, and eliminate the 1 out of 20 irreproducible results.

Also, reproducing the results might add insight into descriptive weaknesses, and might inform better descriptions. Perhaps results should be kept "Wikipedia" style, where people can annotate and comment on the descriptions for better clarity.

But then again, that's a lot of work. What was the goal, again?

Re:It might be epic (2)

Areyoukiddingme (1289470) | about a year and a half ago | (#42653691)

I have mod points, but stupid Slashdot isn't showing me the moderation option on posts. So I will snark instead.

But then again, that's a lot of work. What was the goal, again?

Uhm. Publish or perish, I think it was...... Full speed ahead and damn the torpedoes. Verification studies don't count on your CV.

Good idea to harness the slave labor though. That's what grad students are for.

And how many grad students will actually be willing to do this verification work? None, who can think politically enough to stay in academia. What are you asking them to do? Verify published papers. Who published those papers? The people who will be sitting on their doctoral thesis board. The people who will be peer reviewing their papers. The people who are on the tenure committee. The people who are on the funding committee. I could go on.

The scientific method is a marvelous thing, but the way the system is rigged, verification is left to amateurs. All hail the internet and bored people, I guess. I also guess this is a measure of our wealth as a society: that there are people with the time and money to donate to verifying synthetic chemistry results. Expect a lot of shouting and angst when they can't verify results, though.

Re:Terrible, Terrible, Headline (1)

countach (534280) | about a year and a half ago | (#42653453)

Since testing IS the scientific method, if there is something wrong with the scientific method, this test will fail!

Not just synth chemistry (3, Informative)

Anonymous Coward | about a year and a half ago | (#42652883)

If you try to repeat an experiment and fail then it is almost impossible to get published. Failed experiments, though critical for advancing science, aren't sexy and editors prefer their journals to be full of positives. So scientists don't even bother trying anymore. This is a problem in medicine and probably all sciences. There is a movement in medicine to report all trials [alltrials.net] so they can be found by researchers doing meta-studies.

Re:Not just synth chemistry (1)

ChrisMaple (607946) | about a year and a half ago | (#42652915)

If you try to repeat an experiment and fail then it is almost impossible to get published

Tell that to Fleischmann and Pons.

Re:Not just synth chemistry (0)

DiamondGeezer (872237) | about a year and a half ago | (#42654867)

These bloggers are really denialists who refuse to accept published scientific findings in peer-reviewed journals by credible scientists. An overwhelming scientific consensus is against them.

Re:Not just synth chemistry (3, Interesting)

WrongMonkey (1027334) | about a year and a half ago | (#42652941)

As a research chemist, I've published a couple of papers that were motivated because I didn't believe a paper's results to be true. The trick to get it past reviewers is to not only prove that they are wrong, but to come up with an alternative that is demonstrably superior.

Re:Not just synth chemistry (1)

phantomfive (622387) | about a year and a half ago | (#42653001)

That doesn't sound like a very good system.

Watching scientists work should be like hearing one guy say, "Hey, check this out!" and another guy saying, "wow, that's cool, I can't get it." and another saying, "oh, you need to do it like this!"

If someone can't say, "that doesn't work like you said it did," then scientific progress is going to be hampered.

Re:Not just synth chemistry (1)

BrokenHalo (565198) | about a year and a half ago | (#42653265)

That doesn't sound like a very good system.

It's one that works, and is another part of the scientific method, albeit a step along from establishing reproducibility of an experiment. If prior research appears to set a hypothesis, then that should be tested - i.e. someone should make an attempt to knock it down by whatever means are available. You never really "prove" something to be the case, but if nothing can be done to disprove it, that's pretty nearly as good.

Re:Not just synth chemistry (2)

phantomfive (622387) | about a year and a half ago | (#42653465)

It's not a problem with the scientific method, it's a problem of communicating results. Clearly it isn't working optimally if this article is correct. If people can only tell each other when something works, but can't discuss when things don't work, then the communication channels are severely broken.

Mythbusters: Chem Edition (1)

gmuslera (3436) | about a year and a half ago | (#42652949)

They got explosions at least?

Seriously? IDIOTS (1)

BobGod8 (1123841) | about a year and a half ago | (#42652959)

Okay, how is this NOT common knowledge? Of course chemistry is hard, of course it's hard to replicate results; if it weren't, why in the hell do you think it took this long in the first place? If chemistry were easy, there'd be hundreds of reports of planes downed by terrorists, instead of the 3-4 ATTEMPTED explosions there have been. Chemistry is HARD; good luck doing it with limited ingredients and improvised equipment.

Re:Seriously? IDIOTS (0)

Anonymous Coward | about a year and a half ago | (#42653047)

Replicating chemistry isn't hard with good instructions. Having had to turn a simple experiment (more simple physics than anything else) from a doctoral thesis into a production process, I can tell you these things are sometimes very poorly documented. The basic assumption should be that they can be repeated from the instructions provided, and that simply isn't true. I'm not saying there's fraud involved (we were able to replicate the results, but even with phone access to the person who did it, it wasn't easy, as his notes were crap); I'm just saying these people sometimes get away with doing an extremely poor job of documentation. And that goes against one of the basic premises of the scientific method.

Re:Seriously? IDIOTS (1)

paiute (550198) | about a year and a half ago | (#42653521)

Replicating chemistry isn't hard with good instructions.

Organic chemists often lump reactions into two classes: peak reactions and plateau reactions. Peak reactions give a maximum yield of product only for a narrow range of conditions. Plateau reactions work over a broad range of conditions. If you go to run a plateau reaction, you don't need a very good SOP, but if you are running a peak reaction you need very specific conditions and detailed instructions for every step. And you generally don't know what type of reaction you are going to have on your hands just by looking at it on paper.
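The distinction can be pictured with two toy yield curves. These are entirely made-up functions, not real chemistry; they only show the shape of the problem: a peak reaction collapses a few degrees off its optimum, while a plateau reaction barely notices.

```python
import math

# Hypothetical yield models (fraction of theoretical) vs. temperature in deg C.
def peak_yield(temp_c):
    # Narrow optimum near 80 deg C: a few degrees off and the yield collapses.
    return math.exp(-((temp_c - 80.0) / 3.0) ** 2)

def plateau_yield(temp_c):
    # Broad optimum: essentially flat from roughly 60 to 100 deg C.
    return 1.0 / (1.0 + ((temp_c - 80.0) / 25.0) ** 8)

for t in (70, 80, 90):
    print(f"{t} C  peak={peak_yield(t):.3f}  plateau={plateau_yield(t):.3f}")
```

Run at exactly 80 deg C, both look perfect; at 90 deg C the peak reaction's yield is essentially zero while the plateau reaction still gives ~99%. That is why a sparse SOP is survivable for one and fatal for the other.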

Re:Seriously? IDIOTS (0)

Anonymous Coward | about a year and a half ago | (#42653649)

And you generally don't know what type of reaction you are going to have on your hands just by looking at it on paper.

If this distinction is so important and difficult to figure out, why is it almost never mentioned which class of reaction is going on in the paper that describes the synthesis? Why don't reviewers, and other chemists, demand that such information be included?

Re:Seriously? IDIOTS (1)

mysidia (191772) | about a year and a half ago | (#42654491)

I'm just saying these people sometimes get away with doing an extremely poor job at documentation.

They're not the only ones... it happens in IT a whole lot, as well.

Could this be why? (1)

EzInKy (115248) | about a year and a half ago | (#42653059)

Could this be why the general public doesn't trust science and instead relies on ancient mystical texts to make sense of the world they live in? Maybe a push to show that the "hoi polloi" are perfectly capable of replicating the results researchers have observed would advance the cause of science?

Re:Could this be why? (1)

BobGod8 (1123841) | about a year and a half ago | (#42653765)

Not really. A lot of science is easily accessible, but trying to replicate cutting-edge chemistry is not the place to start. Extracting aspirin from bark, hundred-year-old processes: these are the (still scientific) easy experiments. The public doesn't trust science because we don't even do those with kids; why, I have no idea.

Of course the other reason the public distrusts science is because they've been told to by people whom the science inconveniences, but that's not really the scope of this article.

Re:Could this be why? (1)

RespekMyAthorati (798091) | about a year and a half ago | (#42654577)

Maybe a push to show the "hoi polloi" are perfectly capable of replicating the results researchers have observed would advance the cause of science?

How many of the "hoi polloi" would have any idea what the paper is even trying to demonstrate? Or how to test it?

Re:Seriously? IDIOTS (0)

Anonymous Coward | about a year and a half ago | (#42653073)

They have access to GC/LC-MS and NMR equipment. They are not using improvised equipment.

IAAC (I am a chemist).

Re:Seriously? IDIOTS (0)

Anonymous Coward | about a year and a half ago | (#42653121)

Who's using "limited ingredients and improvised equipment"? These are trained chemists with access to fully equipped labs, high-purity materials, and the original authors' descriptions of their procedure. If they can't replicate the findings of the original chemists by following the exact same procedure, something is badly wrong.

Re:Seriously? IDIOTS (0)

isomer1 (749303) | about a year and a half ago | (#42653185)

Eh. These aren't random idiots. They are graduate students. Typically they know more about the nuts and bolts than the PIs. They are trying to show the absurdity of the current system. I for one heartily applaud their efforts. The frustrating issue with science in academia, and I say this from 3 years of graduate work and 5 years of post-graduate work at major research universities, is that the process has become just another lame industry. The purpose of modern publicly funded laboratories is to churn out papers. Yes, those papers are peer reviewed. But that doesn't make the science any more sound; it just ensures the arguments in the paper are internally consistent. I've seen people repeat an experiment 20+ times, then publish the results of the SINGLE experiment that happened to give the numbers that matched their model. That doesn't mean the experiment worked that time; it just randomly gave a value that a reviewer would accept. Frankly, Chemistry is among the easiest of the physical sciences. I say this as the physicist who was tasked by the chemists to fix their gear when it broke down. Their papers are almost universally set forth as an over-glorified recipe. If you look across the fields chem majors have a higher percentage of publishing as undergrads, because the papers are so damn easy. You mix your bits, filter and then quantify the results. Boom, paper. Yes I'm oversimplifying, but not by much. The joke goes that if you want to make a breakthrough in modern chemistry you'd better find a physicist. If you want a breakthrough in modern physics you'd better find a mathematician. The overarching problem is that the system pushes for papers, period. Not worthwhile science, not correct science, simply papers. This has been passed down by the NIH, to whom it was in turn handed by congress and the public at large. People demand some mechanism to quantify how the tax dollars are spent, and papers became the most convenient metric.
Thus the number of papers has ballooned astronomically, while their value has plummeted. People think they're so damn clever when they talk about how our pace of innovation accomplishes in months what used to take generations. They're just bullshitting themselves. Yes, there are *some* papers that are absolutely fantastic today, but the signal to noise ratio is far lower than it was even twenty years ago.

Re:Seriously? IDIOTS (1)

Zontar The Mindless (9002) | about a year and a half ago | (#42653289)

I can tell you're the real article. It's well known that physicists have no concept of paragraph breaks. ;)

More seriously, thank you for the excellent post. Which, BTW, backs up everything that my dad (retired physics/math prof) has been complaining about in this area for the last 20 years or so.

Re:Seriously? IDIOTS (3, Insightful)

paiute (550198) | about a year and a half ago | (#42653553)

Frankly, Chemistry is among the easiest of the physical sciences. I say this as the physicist who was tasked by the chemists to fix their gear when it broke down.

Organic chemistry is quite difficult. The purpose of synthesis is not, as you suppose, to just mix A and B, see what happens, and publish. Most organic chemists are trying to make specific transformations on certain parts of molecules in high conversion, while controlling the variables of time, temperature, concentration, reagent reactivity with substrate functional groups, etc.

Physics is just a block on an inclined plane and variations.

Re:Seriously? IDIOTS (1)

BobGod8 (1123841) | about a year and a half ago | (#42653787)

Not really germane to my point, but I know what you mean. I'm just frustrated that the article makes their work sound like exactly what you picture 5-year-olds doing with test tubes. I'm sure grad students will fare better, but it will still be difficult--I've done exactly what's being described, and it's anything but easy. There are simply too many variables in real-world conditions to even consider, let alone write about.

I once tried to replicate an experiment in which, it turned out, the silicate structure of the glass was acting as a catalyst. Change the glass type, change the reaction.

Re:Seriously? IDIOTS (1)

fearofcarpet (654438) | about a year and a half ago | (#42654397)

Frankly, Chemistry is among the easiest of the physical sciences. I say this as the physicist who was tasked by the chemists to fix their gear when it broke down. Their papers are almost universally set forth as an over-glorified recipe. If you look across the fields chem majors have a higher percentage of publishing as undergrads, because the papers are so damn easy. You mix your bits, filter and then quantify the results. Boom, paper. Yes I'm oversimplifying, but not by much. The joke goes that if you want to make a breakthrough in modern chemistry you'd better find a physicist. If you want a breakthrough in modern physics you'd better find a mathematician. The overarching problem is that the system pushes for papers, period. Not worthwhile science, not correct science, simply papers.

By your logic, Physics is even easier than Chemistry; just push "go" in LabView and publish the resulting graph. And Biology practically researches itself; just shake up a jar of bacteria and watch them grow.

The nuances of synthetic chemistry--a sub-discipline of Chemistry--are far more complex than mixing A and B, which is the core problem of reproducibility. If you can prove the structure, then you obviously made it, but conveying precisely what you did can be a challenge, as there are too many factors to control for. Perhaps you ran the reaction in a flask that was once used for Pd chemistry. That little bit of residual metal may unknowingly catalyze the reaction, which then fails when someone tries to reproduce it with a different flask. Scale is also a huge issue; reactions do not always scale up or down. Temperature gradients, rates of heating, rates of stirring, rates of addition, argon or nitrogen, oxygen concentration, methods for cleaning/drying glassware, etc.; there are too many parameters to include for each reaction. That is why there used to be a rigorous standard across synthetic journals for how to report a procedure. Unfortunately those standards have been lost, thanks in part to the one observation you made that isn't borne of myopic ignorance: that science--all science--is driven by papers, not science.

Re:Seriously? IDIOTS (0)

Anonymous Coward | about a year and a half ago | (#42653251)

You might find chemistry hard, but the Universe does it all the time without complaining.

Good luck with that (4, Insightful)

WrongMonkey (1027334) | about a year and a half ago | (#42652979)

It's not a secret that about half of published synthesis methods are garbage and yield values are wildly creative. Reviewers don't have the means to verify these, so anything that seems plausible gets published. Then researchers are left to sort out the best methods based on which ones get the most citations.

Re:Good luck with that (2)

TrekkieGod (627867) | about a year and a half ago | (#42653587)

It's not a secret that about half of published synthesis methods are garbage and yield values are wildly creative. Reviewers don't have the means to verify these, so anything that seems plausible gets published. Then researchers are left to sort out the best methods based on which ones get the most citations.

It's not just synthesis methods. I remember taking a graduate control theory class in which the final project was for the class to replicate the results of a paper with a particular control algorithm. It just...wouldn't...work. Not a single person managed to replicate the results, which simply led to the inevitable conclusion they were fudged.

I'm not defending, and would never defend, anyone who publishes data that has been tampered with, but I still find it annoying that we've set up a system where there's an incentive to do so. There's tremendous pressure to publish in academia, starting in grad school. Combine that with the difficulty of publishing papers that have negative results, and a lack of interest in replicating any experiments that are not groundbreaking, and you end up with the quality of papers we have here. I'd love for it to be standard for grad students' first papers, as they're learning to write them, to be just replicating results from other papers. And have journals actually recognize the importance of such work, and publish the results often. I think this would cut down the number of crappy papers, because first, you wouldn't want to publish something that's going to be shown to be bullshit in short order, and second, you'd be able to satisfy your publishing requirements by doing the important task of verifying other people's work.

Re:Good luck with that (1)

ygtai (1330807) | about a year and a half ago | (#42654177)

So true and well-said. Grad students' first papers should be reproducing results from other published papers and they should be accepted for publication if well-conducted and well-written. Wish I had mod points.

Re:Good luck with that (0)

Anonymous Coward | about a year and a half ago | (#42654399)

Meh... then you would just get cases where Prof X publishes his result, which is immediately followed by his 4 grad students validating the result... or perhaps students who want to be his grad students...

Re:Good luck with that (0)

Anonymous Coward | about a year and a half ago | (#42654711)

While what you posted is a result of perverse incentives, I kinda agree with the GP; I can recall a time I got downgraded for _not_ being able to reproduce the results of an experiment, and was later tempted to fudge the numbers so they "fit," but refused. I learned more that way than by just accepting... -T

Isn't there an easier fix? (1)

Anonymous Coward | about a year and a half ago | (#42654705)

If there were a significant reward (monetary or otherwise) for proving that the results of a paper have been fudged, or that adequate information for replicating them hasn't been given... and a penalty (monetary or otherwise) for having someone prove that your submitted paper was a piece of crap... we should see the number of such papers go down very fast, without having to depend on any kind of cultural change in the journals or communities where we already know the culture can't be trusted (or we wouldn't be in this situation in the first place).

It seems like a very Soviet approach to say "Well, we need to satisfy the publishing requirements but many papers are crap, so maybe we could demand that some people fill their publishing requirements by just testing which papers are crap, and kill two birds with one stone" instead of saying "Let's discourage submitting crappy papers, and if that causes the total number of papers to go down, the publishing requirements will be adjusted in the process".

live-blogging (0)

Anonymous Coward | about a year and a half ago | (#42653011)

Seriously?
It's pathetic enough when you have all these people "live-blogging" Apple's latest release. (Like I can't wait until their press conference is done to read the results.) Now they are live-blogging chemistry experiments? Better to do the experiment, take down the results, and then do a real analysis and write a proper paper about your findings.

Is anyone actually watching these things? I can see it now:
1. I'm adding 250ml of the calcium carbonate
2. Ok now I am adding the 35cc of Iodine now
3. Now we are turning on the Bunsen burner
4. And three drops of Nitric acid ...
3 hours later the comments start
AC: "Then?! what happened?"
AC2: "I guess it blew up since the posts stopped."

Re:live-blogging (0)

Anonymous Coward | about a year and a half ago | (#42653331)

Better to do the experiment, take down the results, and then do real analysis and write a proper paper about your findings.

Oh, yeah, that'll work real well when the point of the exercise is to show that the "proper paper" business is seriously fucked up right now.

Its called... (1)

Hangtime (19526) | about a year and a half ago | (#42653013)

calling out BS, exaggerated, wrongly calibrated, and/or embellished results. This makes perfect sense to me. If you're publishing a paper on a subject, then it should be a repeatable recipe.

Which scientific method are they testing? (2)

EmagGeek (574360) | about a year and a half ago | (#42653023)

Are they testing the tried and true scientific method that *real* scientists used for centuries to arrive at the cumulative knowledge of mankind, or are they testing the modern scientific method that involves drawing a conclusion and then trying to find data that fits your model, discarding any data that doesn't?

Re:Which scientific method are they testing? (0)

Anonymous Coward | about a year and a half ago | (#42653539)

But what, in reality, is the "scientific method"?

Each scientific question, field, or problem has its own way of being approached, but there are people who claim that the "scientific method" means one concrete procedure, ignoring all the other possible variants of the "scientific method", especially where cross-field knowledge is applied. Actually, anyone smart enough to have studied different science fields knows that there are problems which are really the same problem with the same solution, just presented with different names and actors.

Chemistry (1, Funny)

93 Escort Wagon (326346) | about a year and a half ago | (#42653067)

It's the major that people who can't handle physics switch to.

(I kid, I kid)

Re:Chemistry (1)

Spiridios (2406474) | about a year and a half ago | (#42653675)

It's the major that people who can't handle physics switch to.

Chemistry is just applied physics. [xkcd.com]

Re:Chemistry (0)

Anonymous Coward | about a year and a half ago | (#42653781)

I always disliked xkcd #435. They obviously missed logicians, who would be far, far to the right of mathematicians. Some say that logic is a subset of math, but logic is the study of reason; math is merely one application of it. I suspect this was omitted on purpose because, despite what they say, mathematicians believe deep down in their surly hearts that the most fundamental subject of study is not in the math department, but in philosophy.

Re:Chemistry (0)

WrongMonkey (1027334) | about a year and a half ago | (#42653817)

Complexity is more interesting than purity. Physics is just the branch of science that is simple enough to be modeled accurately with math.

LSD (0)

Anonymous Coward | about a year and a half ago | (#42653189)

And today, we synthesize LSD, again

Combine Forces (1)

englishknnigits (1568303) | about a year and a half ago | (#42653203)

http://openscienceframework.org/project/EZcUj/wiki/home [openscienceframework.org]

The Open Science Framework focuses on psychology instead of chemistry but I would imagine they could both utilize similar frameworks and methodologies. I'm not a chemist or a psychologist so there may be some major incompatibility I don't know about.

Re:Combine Forces (1)

englishknnigits (1568303) | about a year and a half ago | (#42653237)

I was actually describing the Reproducibility Project which uses the Open Science Framework. My bad.

A consequence of publish or perish (1)

g01d4 (888748) | about a year and a half ago | (#42653205)

I suppose reviewers are supposed to catch these things but they're probably too busy with their own 'discoveries' to give more than a cursory glance. Hopefully the bloggers will get enough recognition from their efforts to spur others; especially if the trend towards publishing direct online and away from peer reviewed journals continues.

good, but... (1)

stenvar (2789879) | about a year and a half ago | (#42653377)

Replicating published experiments is a worthwhile effort, and there should be more of it. However, it's already pretty well known that scientific papers have a high rate of irreproducible results, due to both fraud and error. If they fail to replicate particular experiments, it's not an indictment of the scientific method, but of the particular scientists.

(Also, some experiments in the natural sciences are tricky to do and require experience to work, but they can still be replicated.)

Incorrect headline (0)

Anonymous Coward | about a year and a half ago | (#42653479)

Good for them, great cause, and can only help their understanding.

How did a factually inaccurate headline make it to the front page? They are not testing the "scientific method". Nature.com gets it right: "Bloggers put chemical reactions through the replication mill".

Yeah, because thats exactly who I trust. (0)

Anonymous Coward | about a year and a half ago | (#42653567)

A bunch of bloggers? Yeah, like I actually believe any person who has a blog on the internet about anything. /rollseyes

It's bad enough that science is a problem because most of it can be made up. A lot of the time anyone can say anything they like and skew whatever they please. A lot of science is pure hard fact and data, but even that is laughable at times. Like how in the 70s scientists all over the world proved and knew nuclear energy was causing an ice age but of course we all know how good their factual evidence turned out to be.

So why exactly would I care or even believe some guys with blogs doing experiments? Because they say they are smart? Because they say they are qualified? Oh, and let's not forget, they have blogs... how amazing! We all know only the smartest, most intelligent, honest and qualified people on the planet are allowed to have a blog. Bloggers are part of a super secret and super strict club called "everyone".

Besides, really all they did was say "Hey, we proved what's already been proven. We did what others have already done! Aren't we so super awesome?!"... no, no you are not.

Hell, I could replicate someone else's theory on quantum mechanics, but that doesn't really mean I'm smart or know what I am doing. Or hell, I could do a shot-by-shot remake of Raiders of the Lost Ark, but it doesn't mean I'm a good film maker.

Re:Yeah, because thats exactly who I trust. (2)

docmordin (2654319) | about a year and a half ago | (#42653689)

Like how in the 70s scientists all over the world proved and knew nuclear energy was causing an ice age but of course we all know how good their factual evidence turned out to be.

Or like how you're wrong about that, since there was no scientific consensus in the 1970s that Earth was headed into an ice age:

T. C. Peterson, W. M. Connolley, and J. Feck, "The myth of the 1970s global cooling scientific consensus", Bulletin of the American Meteorological Society, 89: 1325-1337, 2008.

Re:Yeah, because thats exactly who I trust. (0)

Anonymous Coward | about a year and a half ago | (#42653701)

So why exactly would I care or even believe some guys with blogs doing experiments? Because they say they are smart? Because they say they are qualified? Oh, and let's not forget, they have blogs....

Have blog-envy because you don't have your own blog? Or just bitter because some blogger killed your dog? Why are you emphasizing the fact they have a blog when that isn't relevant? Maybe they should be trusted because they have experience working in a research lab, and they would have to justify their results just like anyone else. The fact they blog about it is irrelevant beyond it is just a communication medium.

Like how in the 70s scientists all over the world proved and knew nuclear energy was causing an ice age but of course we all know how good their factual evidence turned out to be.

Maybe before questioning your trust of bloggers, you should question your trust of your own memory...

The work of Harold Garfinkel (2)

aussersterne (212916) | about a year and a half ago | (#42653697)

is instructive (though still not very readable) here.

Garfinkel, the founding ethnomethodologist, argued that much of science depends on practical assumptions and habits of which researchers are only vaguely aware, leading to the "loss" of the phenomenon.

This is both good and bad. On the one hand, it means that a phenomenon is real, with real implications (useless theory and tautology are marked by the difficulty of losing the phenomenon). On the other hand, it means that what is said about the phenomenon often omits the most critical bits of information, unbeknownst to the PIs themselves, because they are unaware of assumptions and habits embedded in their practice. This makes it likely that many scientific truths are either solidified far later than they might otherwise have been, or incorrectly lost to falsification rather than pursued.

Things I Won't Work With (1)

Required Snark (1702878) | about a year and a half ago | (#42653883)

Another Fun Chemistry Blog: Things I Won't Work With http://pipeline.corante.com/archives/things_i_wont_work_with/ [corante.com]

It's by Derek Lowe, a pharmaceutical chemist who blogs about chemical compounds that are way too dangerous. His position is that the closest you want to get to any of these things is reading about them. The closest I want to get is reading what he has to say about them.

Take FOOF, aka F2O2, aka dioxygen difluoride. Lowe calls it "Satan's kimchi".

The latest addition to the long list of chemicals that I never hope to encounter takes us back to the wonderful world of fluorine chemistry. I'm always struck by how much work has taken place in that field, how long ago some of it was first done, and how many violently hideous compounds have been carefully studied. Here's how the experimental prep of today's fragrant breath of spring starts:

"The heater was warmed to approximately 700C. The heater block glowed a dull red color, observable with room lights turned off. The ballast tank was filled to 300 torr with oxygen, and fluorine was added until the total pressure was 901 torr. . ."

And yes, what happens next is just what you think happens: you run a mixture of oxygen and fluorine through a 700-degree-heating block. "Oh, no you don't," is the common reaction of most chemists to that proposal, ". . .not unless I'm at least a mile away, two miles if I'm downwind." This, folks, is the bracingly direct route to preparing dioxygen difluoride, often referred to in the literature by its evocative formula of FOOF.

His latest posting is about the compound C2N14, which is two carbon atoms with 14 nitrogen atoms.

The compound exploded in solution, it exploded on any attempts to touch or move the solid, and (most interestingly) it exploded when they were trying to get an infrared spectrum of it. The papers mention several detonations inside the Raman spectrometer as soon as the laser source was turned on, which must have helped the time pass more quickly.

He doesn't just blog about things that go bang, he also covers things that smell really really bad. It's good he doesn't get stuck in a rut.

He has a fine turn with a descriptive phrase and a dry sense of humor. Check out his blog.

Re:Things I Won't Work With (1)

jfengel (409917) | about a year and a half ago | (#42654065)

Thanks for that. That's hilarious.

A very old elephant in the room (2)

decora (1710862) | about a year and a half ago | (#42653987)

Many, many decades ago I went to a "top 5 STEM school" and enrolled in Chemistry. While all of the students had Very Expensive Machines to play with, they also had No Time To Do Careful Work.

"Flubbing" of results was considered kind of, you know, ordinary, and if you didn't get what was expected, well, you were expected to just kind of ignore it.

People like to harsh on the 'soft' subjects like history ... but I can assure you that many a history teacher would chew you up one side and down the other for saying things that are patently false and blatantly contradict the evidence, while many a Chemistry teacher would simply tell you "well, it should have worked, and you understand the basic ideas, so let's move on."

I'm not sure what the issue is: whether it's just too expensive to do experiments, or whether the point is not to learn 'experimenting' but rather to learn 'theories, with some hands-on experimenting to give you a flavor of it'?

Re:A very old elephant in the room (1)

mysidia (191772) | about a year and a half ago | (#42654549)

"Flubbing" of results was considered kind of, you know, ordinary, and if you didn't get what was expected, well, you were expected to just kind of ignore it.

See... research teams should have separation of duties, to avoid the temptation to cheat. The scientist does the experiment, someone else takes the measurements, someone else records the results and enters them into digital format, which quickly becomes immutable, digitally signed, and timestamped; with no one taking or recording measurements allowed to be someone who knows exactly what that measurement "is supposed to be". Finally, an independent auditor monitors and signs off on the results.

The peer-review journals would then require the scratch sheets for each trial of the experiment, as signed off by the auditors and measurement takers :)
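The tamper-evident record idea sketched above can be illustrated with a minimal hash chain; this is a simplified stand-in for real digital signatures and trusted timestamping, and the field names and numbers are all hypothetical:

```python
import hashlib
import json
import time

def append_entry(chain, recorder, measurement):
    """Append a measurement to a hash chain; any later edit to an
    earlier entry invalidates its hash and is detectable."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {
        "recorder": recorder,
        "measurement": measurement,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    # Canonical serialization so the hash is reproducible on verification.
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(entry)
    return entry

def verify(chain):
    """Recompute every hash; returns False if any entry was altered."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

chain = []
append_entry(chain, "tech_A", {"yield_pct": 62.4})
append_entry(chain, "tech_B", {"yield_pct": 61.9})
assert verify(chain)

chain[0]["measurement"]["yield_pct"] = 95.0  # "improve" the first result
assert not verify(chain)                     # ...and the audit catches it
```

A real system would also sign each entry with per-person keys and anchor the chain with an external timestamping service, but the detection principle is the same.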

Re:A very old elephant in the room (1)

TheRedSeven (1234758) | about a year and a half ago | (#42654605)

That reminds me of (aeons ago) my 8th grade science project. While most of the other kids were testing "Which battery lasts the longest?" I decided to test the effect of humidity on the speed of sound. Seemed relevant to my 13 year old mind, I couldn't find a lot of information on it, and I had 3 possible outcomes: H0 was that higher relative humidity has no effect; H1 was that higher relative humidity made the speed of sound faster; H2 was that higher relative humidity made the speed of sound slower.

The experiment involved a trip to a nearby college's physics lab, a big old aquarium, some PVC pipe, a humidifier, a microphone attached to an old Apple computer of some sort running audio software, and a Wimshurst generator (the only thing that could produce a brief enough noise for the echo to be differentiated from the continuing reverberations). The result was that H0 was disproved and the evidence pointed toward a HIGHER speed of sound in higher relative humidity.

I loved the whole thing. Physics! And testing! And math!

And then the judging came. Most of them loved my experiment and gave the whole thing high marks. But one happened to be a college physics professor who walked up, took a look at my results, and said, "You did a lot of fine work, but your results are wrong." And despite my protests of "But those are the results I got," proceeded to give me essentially a 0, making my experiment one of the few that failed.

I still hold a grudge against that physics prof. Not for crushing all the fun out of experiments, but for trusting the 'right' answer over the experimental one. If that's the kind of scientists we're pushing out these days, we've got some serious issues to deal with.
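For what it's worth, the grade-school result agrees with textbook acoustics: water vapor lowers the mean molar mass of air, so sound travels slightly faster in humid air. A rough ideal-gas-mixture estimate (the molar heat capacities and the 25 C saturation pressure below are approximate assumed values):

```python
import math

R = 8.314          # J/(mol K), gas constant
M_AIR = 0.028964   # kg/mol, dry air
M_H2O = 0.018015   # kg/mol, water vapor
CP_AIR = 29.1      # J/(mol K), approximate molar heat capacity of dry air
CP_H2O = 33.6      # J/(mol K), approximate, water vapor

def speed_of_sound(temp_k=298.15, rel_humidity=0.0,
                   pressure=101325.0, p_sat=3169.0):
    """Ideal-gas mixture estimate; p_sat is the assumed saturation
    vapor pressure of water at 25 C."""
    x = rel_humidity * p_sat / pressure          # mole fraction of water
    molar_mass = x * M_H2O + (1 - x) * M_AIR     # lighter when humid
    cp = x * CP_H2O + (1 - x) * CP_AIR
    gamma = cp / (cp - R)
    return math.sqrt(gamma * R * temp_k / molar_mass)

dry = speed_of_sound(rel_humidity=0.0)
humid = speed_of_sound(rel_humidity=1.0)
print(round(dry, 1), round(humid, 1))  # humid air carries sound slightly faster
assert humid > dry
```

The effect is only a couple of metres per second at room temperature, so detecting it with PVC pipe and an old Apple was genuinely good experimental work.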

Human Error (1)

Anonymous Coward | about a year and a half ago | (#42654103)

When a new student tried to pick up where I left off on my PhD, she wasn't able to reproduce any of my results. Doubt was cast and, under suspicion of fabrication, I was forced to come back to the lab to duplicate my results. I did. And so did my PhD committee when I showed them how to do it, exactly as I'd explained in the methodology portion of my thesis.

The problem wasn't with the process, but with the scientist, and it's hard to control for ineptitude. I mention this because the inability of a blogger (even one with a scientific background) to reproduce peer-reviewed results is not necessarily an indictment of the results. It could just as well be a problem with the scientist.

If Michael Jordan published a paper about how to slam dunk a basketball, it wouldn't disprove his paper just because I can't jump high enough to touch the rim.

Not just chemistry (1)

crepe-boy (950569) | about a year and a half ago | (#42654155)

Drug companies often find that biological science from academia cannot be reproduced (or is much less robust than indicated). Amgen and Bayer have both published on this topic, and when they called the researchers to ask why, they were told that the better results had been picked for the publication. Reuters article [reuters.com]

computer science too (3, Interesting)

Anonymous Coward | about a year and a half ago | (#42654607)

The sad truth is that this happens everywhere, computer science included. I once implemented a software pipelining algorithm based on a scientific paper, but once implemented, the algorithm appeared to be broken and had to be fixed first. In this case, it probably wasn't malice either. But the requirements to get something published are different from the requirements to get something to actually run. Brevity and readability may get in the way of correctness, which may easily go unnoted.
