Ooo! First post? (-1)

Anonymous Coward | about 2 months ago | (#46232681)

Never before :)

Re:Ooo! First post? (2, Funny)

Anonymous Coward | about 2 months ago | (#46232763)

Don't worry, with the way beta is going you'll soon have first post on -every- post :)

Beta sucks (1)

Anonymous Coward | about 2 months ago | (#46234553)

It really, really sucks.

I don't get the hate for the new style (0)

Anonymous Coward | about 2 months ago | (#46236143)

It's not like the comments on Slashdot are worth much: Ars has far more mature and on-the-mark comments. It's like only kids are commenting on Slashdot.

Re:I don't get the hate for the new style (1)

udippel (562132) | about 2 months ago | (#46238099)

It's like only kids are commenting on Slashdot.

It takes a thief to catch a thief ...

Oblig XKCD (5, Informative)

c++0xFF (1758032) | about 2 months ago | (#46232767)

http://xkcd.com/882/ [xkcd.com]

Even the example of p=0.01 from the article is subject to the same problem. That's why the LHC worked for something like 6 sigma before declaring the higgs boson to be discovered. Even then, there's always the chance, however remote, that statistics fooled them.
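A quick way to see the xkcd problem is to simulate it: test 20 "jelly bean colors" against pure noise at p < 0.05 and, on average, about one of them comes up "significant". A minimal sketch in Python (assuming numpy and scipy are available; the 20-group setup mirrors the comic, not any real study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_colors, n_per_group = 20, 50

# Null world: acne rates are identical whether or not you eat any color of jelly bean.
control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)

false_alarms = 0
for color in range(n_colors):
    treated = rng.normal(loc=0.0, scale=1.0, size=n_per_group)  # same distribution as control
    _, p = stats.ttest_ind(treated, control)
    if p < 0.05:
        false_alarms += 1

print(f"{false_alarms} of {n_colors} null comparisons came out 'significant' at p < 0.05")
# Expected count is about 20 * 0.05 = 1 -- the comic's green jelly bean.
```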

Re:Oblig XKCD (0)

Anonymous Coward | about 2 months ago | (#46232871)

Yeah, but multiple comparisons (which is what is going on in the xkcd) is hardly the major problem.

Re:Oblig XKCD (2)

HiThere (15173) | about 2 months ago | (#46234977)

A lot of times it is. Remember, you don't only need to count the comparisons that you do, but also those of everyone else studying the same problem. ONE of you is likely to find a result by pure coincidence.

Re:Oblig XKCD (1)

sFurbo (1361249) | about 2 months ago | (#46236171)

It is quite often one of the problems with medical papers. People are different, and you can always find another way to split up the groups. What if you only look at males? Aged 20-30? Who eat a moderate amount of broccoli? Furthermore, there are a lot of diseases that people can have, so if you are doing an observational study and eating multivitamins doesn't change the incidence of cancer in 20-30 year old males who eat a moderate amount of broccoli, what about the incidence of ear cancer? In the right ear?

Re:Oblig XKCD (0)

Anonymous Coward | about 2 months ago | (#46236505)

Or like studies showing that taking fish oil supplements doesn't help much because many researchers are using rancid fish oil.

Or troll articles linking high fat diets to depression in mice when it's actually a "high fat and sugar" diet. http://blogs.scientificamerican.com/scicurious-brain/2012/05/02/high-fat-diets-and-depression-a-look-in-mice/

Too much crappy research around. Part of it is incompetence, but I think a lot of it is just so they can get $$$$. Too much crappy "journalism" around. Part of it is incompetence, but I think a lot of it is trolling just so they can get $$$$.

It's probably too hard nowadays to do an expensive 10 year high quality study. Much easier to do five 2 year crappy studies.

P-hacking. (3, Insightful)

khasim (1285) | about 2 months ago | (#46232981)

From TFA:

Perhaps the worst fallacy is the kind of self-deception for which psychologist Uri Simonsohn of the University of Pennsylvania and his colleagues have popularized the term P-hacking; it is also known as data-dredging, snooping, fishing, significance-chasing and double-dipping. "P-hacking," says Simonsohn, "is trying multiple things until you get the desired result" - even unconsciously.
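One concrete form of "trying multiple things until you get the desired result" is optional stopping: peek at the p-value after every few subjects and stop as soon as it dips under 0.05. A hedged simulation sketch (pure noise, so any "significant" stop is a false positive; the sample sizes and batch size are illustrative, not from TFA) shows the printed rate coming out well above the nominal 5%:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def p_hack_by_peeking(max_n=200, batch=10, alpha=0.05):
    """Collect noise in batches, test after each batch, stop at the first p < alpha."""
    a, b = [], []
    while len(a) < max_n:
        a.extend(rng.normal(size=batch))
        b.extend(rng.normal(size=batch))
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            return True  # "found" an effect that does not exist
    return False

runs = 2000
hits = sum(p_hack_by_peeking() for _ in range(runs))
print(f"False-positive rate with peeking: {hits / runs:.2%} (nominal alpha was 5%)")
```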

Re:Oblig XKCD (5, Informative)

xQx (5744) | about 2 months ago | (#46233537)

While I agree with the article's headline/conclusion - they aren't innocent of playing games themselves:

Take their sentence: "meeting online nudged the divorce rate from 7.67% down to 5.96%, and barely budged happiness from 5.48 to 5.64 on a 7-point scale" ... Isn't that intentionally misleading? Sure, 0.16 points doesn't sound like much... but it's on a seven point scale. If we change that to a 3 point scale it's only about 0.07 points! Amazingly small! ... but wait, if I change that to a 900,000 point scale, well, then that's a whole 20,571 points difference. HUGE NUMBERS!

But I think they missed a really important point - SPSS (one of the very popular data analysis packages) offers you a huge range of correlation tests, and you are _supposed_ to choose the one that best matches the data. Each has its own assumptions, and will only provide the correct 'p' value if the data matches those assumptions.

For example, many of the tests require that the data follow a bell-shaped curve, and you are supposed to first test your data to ensure that it is normally distributed before using any of the correlation tests that assume normally distributed data. If you don't, you risk over-stating the correlation.

If you have data from a Likert scale, you should treat it as ordinal (ranked) data, not numerical (i.e. the difference between "totally disagree" and "somewhat disagree" should not be assumed to be the same as the difference between "somewhat agree" and "totally agree") - however, if you aren't getting to the magic p < 0.05 treating it as ordinal data, you can usually get it over the line by treating it as numerical data and running a different correlation test.
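To make that concrete: treating ordered categories as if they were evenly spaced numbers and running Pearson's r can give a different p-value than the rank-based Spearman test that respects the ordinal scale. A small illustration with made-up 5-point responses (the data are invented solely to show that the two tests can disagree, not to reproduce any study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Fake 5-point Likert responses: y loosely tracks x, plus noise, then clipped to 1..5.
x = rng.integers(1, 6, size=60)
y = np.clip(x + rng.integers(-2, 3, size=60), 1, 5)

r_pearson, p_pearson = stats.pearsonr(x, y)     # treats 1..5 as equally spaced numbers
r_spearman, p_spearman = stats.spearmanr(x, y)  # uses ranks only (ordinal-friendly)

print(f"Pearson  r={r_pearson:.2f}  p={p_pearson:.4f}")
print(f"Spearman rho={r_spearman:.2f}  p={p_spearman:.4f}")
# The point is not which p is smaller here, but that the choice of test is a
# researcher degree of freedom -- and only one of them matches the data's assumptions.
```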

Lecturers are measured on how many papers they publish, and most peer reviewers don't know the subtle differences between these tests, so as long as they see 'SPSS said p < 0.05' and they don't disagree with any of the content of your paper, yay, you get published.

Finally, many of the tests have a minimum sample size below which they should not be used. If you only have a study of 300 people, there's a whole range of popular correlation tests that you are not supposed to use. But you do, because SPSS makes it easy, because it gets better results, and because you forgot what the minimum size was and can't be arsed looking it up (if it's a real problem the reviewers will point it out).

(Evidence to support these statements can be found in the "Survey Researcher's SPSS Cookbook" by Mark Manning and Don Munro. Obviously, it doesn't go into how you can choose an incorrect test to 'hack the p value'; to prove that, I recommend you download a copy of SPSS and take a short-term position as a lecturer's assistant.)

Re:Oblig XKCD (1)

stenvar (2789879) | about 2 months ago | (#46235429)

Lecturers are measured on how many papers they publish, most peer reviewers don't know the subtle differences between these tests

Nobody really "knows the subtle difference between the tests" because nobody really knows what the actual distribution of the data is. In addition, the same way people shop for statistical tests, they shop for experimental procedures, samples, and all other aspects of an experiment, until they get the result they want.

Re:Oblig XKCD (1)

ceoyoyo (59147) | about 2 months ago | (#46234557)

That's a different problem. Like this one, it's not a problem with p-values, it's a problem with people who don't know what a p-value is. The examples in the comic are NOT p-values for the experiment that was done. Properly calculated p-values do not have this problem because they are corrected for multiple comparisons.
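For reference, the simplest such corrections can be done by hand. A sketch of the Bonferroni and Holm family-wise adjustments on some raw p-values (the input numbers are arbitrary placeholders):

```python
# Sketch of two standard family-wise corrections; input p-values are invented.
raw_p = [0.003, 0.012, 0.021, 0.040, 0.049]
m = len(raw_p)

# Bonferroni: multiply each p by the number of comparisons (cap at 1).
bonferroni = [min(p * m, 1.0) for p in raw_p]

# Holm: sort ascending, multiply the i-th smallest by (m - i), enforce monotonicity.
order = sorted(range(m), key=lambda i: raw_p[i])
holm = [0.0] * m
running_max = 0.0
for rank, i in enumerate(order):
    adj = min(raw_p[i] * (m - rank), 1.0)
    running_max = max(running_max, adj)
    holm[i] = running_max

for p, b, h in zip(raw_p, bonferroni, holm):
    print(f"raw={p:.3f}  bonferroni={b:.3f}  holm={h:.3f}")
```

Note how results that individually squeaked under 0.05 no longer do once the whole family of comparisons is accounted for.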

Re:Oblig XKCD (2)

Rich0 (548339) | about 2 months ago | (#46235239)

That's a different problem. Like this one, it's not a problem with p-values, it's a problem with people who don't know what a p-value is. The examples in the comic are NOT p-values for the experiment that was done. Properly calculated p-values do not have this problem because they are corrected for multiple comparisons.

Agree completely, but the problem is that to an outside observer it is impossible to know how many comparisons were actually done.

If you design an experiment to handle 20 comparisons and perform 20 comparisons you'll get meaningful data. However, that design will probably tell you that you need to collect 50x as many data points as you have money to pay for. So, instead you design an experiment that can handle one comparison, then you still perform 20 comparisons, and then you publish the one that showed something interesting.

That was why there was a big push a bunch of years ago to have clinical trial designs (including endpoints) published before the start of trials - it gets rid of the ability to cherry-pick the hypothesis after the fact. From what I've read that experiment hasn't really been a smashing success.

Re:Oblig XKCD (1)

ceoyoyo (59147) | about 2 months ago | (#46235609)

It's also impossible to tell if the other guy made the whole thing up. Fraud is detected via replication. Generally though, it's pretty easy to detect people doing it inadvertently - they publish all their "p-values." I suppose if people actually get better at doing stats some of the inadvertent stuff will turn into harder to detect fraud.

Clinical trials are actually required to be pre-registered with one of a few tracking agencies if they are to be accepted by the FDA and other similar agencies. There are a few problems, but it's much better than it used to be.

Re:Oblig XKCD (0)

Anonymous Coward | about 2 months ago | (#46237225)

'There are a few problems, but it's much better than it used to be.'
Considering how many problems there have been, that's a bit of damning with faint praise. ;)

Re:Oblig XKCD (1)

Rich0 (548339) | about 2 months ago | (#46238321)

Clinical trials are actually required to be pre-registered with one of a few tracking agencies if they are to be accepted by the FDA and other similar agencies. There are a few problems, but it's much better than it used to be.

My concern isn't with the trials that are pre-submitted, and then the results are submitted to the FDA. My concern is with the trials that are pre-submitted and then the results are never published.

If you can do that, then there really is no benefit of pre-submission. Just pre-submit 100 trials, then take the 5 good ones and publish them.

Granted, I don't know how many of those trials that don't get published actually pertain to drugs that get marketed. If a company abandons a drug entirely during R&D I'm not sure if there is any real public harm if they don't publish the gory details, as long as they don't try to turn around and use it for something else later (without then publishing the prior results).

Re:Oblig XKCD (1)

ceoyoyo (59147) | about 2 months ago | (#46238417)

Human trials are incredibly expensive. You don't just do 100 and take the best one. Also, various people, including regulatory agencies, are wise to that. If you have a bunch of registered trials without results it's going to be looked at with suspicion.

There have been cases of abuse, but pharma generally doesn't want to outright fake results. Developing drugs is expensive, but getting sued into oblivion is too.

Re:Oblig XKCD (1)

petermgreen (876956) | about 2 months ago | (#46237099)

Agree completely, but the problem is that to an outside observer it is impossible to know how many comparisons were actually done.

And even if the researchers are being honest about the number of comparisons THEY did it is very likely that multiple people will be independently working on the same problem. If all of those people individually only publish positive results then you get the same problem.

Re:Oblig XKCD (1)

stenvar (2789879) | about 2 months ago | (#46235481)

Properly calculated p-values do not have this problem because they are corrected for multiple comparisons.

You can't correct for multiple comparisons because you don't know about all the experiments other people have been doing; you'd have to know that in order to do that correction properly. It's very hard even to count the number of comparisons you have been doing yourself, because it's not just the number of times you've run the test.

Re:Oblig XKCD (1)

ceoyoyo (59147) | about 2 months ago | (#46235665)

You don't need to correct for experiments done on other datasets. If it's multiple experiments on the same dataset, whoever is in charge of that dataset should be keeping track. Even then, you only need to correct for multiple tests of the same, or similar, hypothesis. That's mostly a problem when your hypothesis is "something happened," which it should never be, but it is all too often.

Re:Oblig XKCD (1)

Sique (173459) | about 2 months ago | (#46237029)

But let's say you are a statistician working with hundreds of datasets and mining them for some interesting data. If you are looking for p < 0.05, about every 20th attempt will yield something "significant", even if all datasets are white noise and you don't use any dataset twice.

Re:Oblig XKCD (1)

stenvar (2789879) | about 2 months ago | (#46235355)

It's not a matter of being "fooled by statistics". If they applied the statistics wrong or made some other error in the underlying assumptions, six sigma is no better than two sigma. (Not saying that there is anything wrong with the Higgs experiment.)

The only real protection against this sort of thing is to have many different research groups repeat an experiment independently and analyze it many different ways.

And this is why (3, Insightful)

geekoid (135745) | about 2 months ago | (#46232771)

it takes more than one study.
There is a push to have studies include Bayesian probability.

IMHO all papers should be read by statisticians just to be sure the calculation are correct.

Re:And this is why (1)

Wootery (1087023) | about 2 months ago | (#46233153)

IMHO all papers should be read by statisticians just to be sure the calculation are correct.

Slashdot doesn't offer a 'too much confidence' rating.

Also your 'calculation' is wrong: should be 'calculations'.

Re:And this is why (1)

serviscope_minor (664417) | about 2 months ago | (#46233429)

I'm a big fan of Bayesian techniques. However, statistical tests still have their place if they're not misused. The trouble is that they are deeply misunderstood and terribly badly used.

All they do is tell you if a hypothesis is probably incorrect. You can use them to refute a hypothesis, but not support one.

Re:And this is why (3, Informative)

ceoyoyo (59147) | about 2 months ago | (#46234641)

Other way around, and not quite true for a properly formulated hypothesis.

Frequentist statistics involves making a statistical hypothesis, choosing a level of evidence that you find acceptable (usually alpha=0.05) and using that to accept or reject it. The statistical hypothesis is tied to your scientific hypothesis (or it should be). If the standard of evidence is met or exceeded, the results support the hypothesis. If not, they don't mean anything.

HOWEVER, if you specify your hypothesis well, you include a minimum difference that you consider meaningful. You then calculate error bars for your result and, if they show that your measured value is less than the minimum you hypothesized, that's evidence supporting the negative (not the null) hypothesis: any difference is so small as to be meaningless.
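A rough sketch of that reasoning in code: compute a 95% confidence interval for a measured difference and check it against a pre-specified minimum meaningful difference (all numbers below are made up for illustration; this is the confidence-interval flavour of equivalence reasoning, not anyone's published method):

```python
import numpy as np

rng = np.random.default_rng(3)

MIN_MEANINGFUL_DIFF = 0.5   # pre-specified before looking at the data (illustrative value)

a = rng.normal(loc=0.1, scale=1.0, size=80)
b = rng.normal(loc=0.0, scale=1.0, size=80)

diff = a.mean() - b.mean()
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se   # approximate 95% CI

print(f"difference = {diff:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
if ci_high < MIN_MEANINGFUL_DIFF and ci_low > -MIN_MEANINGFUL_DIFF:
    print("Whole CI sits inside the 'too small to matter' zone: evidence for the negative hypothesis.")
elif ci_low > MIN_MEANINGFUL_DIFF or ci_high < -MIN_MEANINGFUL_DIFF:
    print("Whole CI sits beyond the minimum meaningful difference: evidence for a meaningful effect.")
else:
    print("CI straddles the boundary: the data do not settle the question either way.")
```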

I am not a fan of everyone using Bayesian techniques. A Bayesian analysis of a single experiment that gives a normally distributed measurement (which is most of them) with a non-informative prior is generally equivalent to a frequentist analysis. Since scientists already have trouble doing simple frequentist tests correctly, they do not need to be doing needless Bayesian analyses.

As for informative priors, I don't think they should ever be used in the report of a single experiment. Report the p-value and the Bayes factor, or the equivalent information needed to calculate them. Since an informative prior is inherently subjective, the reader should be left to make up his own mind what it is. Reporting the Bayes factor makes meta-analyses, where Bayesian stats SHOULD be mandatory, easier.

Re:And this is why (2)

serviscope_minor (664417) | about 2 months ago | (#46236695)

If the standard of evidence is met or exceeded, the results support the hypothesis. If not, they don't mean anything.

No: I disagree slightly. If you get a bad P value, then it means that the data is unlikely to have come from that hypothesis. If the data is sufficiently unlikely given the hypothesis, then this is generally read as meaning that the hypothesis is unlikely. That's often applied to the null hypothesis, e.g. there is no difference between X and Y, but some numbers computed show that it's unlikely that undifferentiated data would lead to those results.

E.g. you could compute the mean and variance of two datasets, compute the variance on the mean and find the means are very far apart given their variances. This would indicate strongly that the data have different means.
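That example is essentially a two-sample t-test. A minimal sketch working straight from summary statistics (the means, standard deviations, and sample sizes are invented):

```python
from scipy import stats

# Invented summary statistics for two groups.
mean1, sd1, n1 = 10.3, 2.1, 40
mean2, sd2, n2 = 11.4, 2.3, 40

# Welch's t-test from the summaries (does not assume equal variances).
t, p = stats.ttest_ind_from_stats(mean1, sd1, n1, mean2, sd2, n2, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
# A small p here says only that sample means this far apart would be unlikely
# if both groups shared one mean -- exactly the "refute the null" reading above.
```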

The thing is, a good P value gives not support but the weaker statement that the data is not inconsistent with the hypothesis.

You could for example claim that the data in the above example was actually drawn from some rather exotic distribution. The statistical test might indicate it's not inconsistent with your hypothesis. However, it doesn't say more than that.

The hypothesis not being inconsistent with the data is the absolute minimum standard for proposing a model.

Re:And this is why (2)

ColdWetDog (752185) | about 2 months ago | (#46235189)

No, all researchers should be able to pass graduate-level statistics courses.

Yes, I realize that most of us would be back at flipping hamburgers or worse, end up going to law school. But to understand what you're doing, you really need to understand statistics.

Einstein was basically a very good statistician.

Re:And this is why (1)

Rich0 (548339) | about 2 months ago | (#46235255)

it takes more than one study.

That only works if ALL the studies actually get published. If the labs only write up the studies with "interesting" results then there really is no difference between cherry-picking 1 trial out of 100 and cherry-picking 10 trials out of 1000.

Misleading statistics (3, Insightful)

cold fjord (826450) | about 2 months ago | (#46232789)

There is no shortage of misleading statistics out there. It can be a discipline fraught with peril for the uninformed, and there are lots of statistics packages out there that reduce advanced tests to a "point and shoot" level of difficulty that produces results that may not mean what the user thinks they mean. I've read some articles showing no lack of problems in the social sciences, but the problem is bigger than that.

I can't help wondering how much that plays into the oscillating recommendations that you see for various foods. Both coffee and eggs have gone through repeated cycles of, "it's bad," "no, it's good," "no, it's bad," "no, it's good." I understand that at least some of it comes down to the aspect they choose to measure, but I can't help but wonder how much bad statistics is playing into it.

Re:Misleading statistics (-1)

Anonymous Coward | about 2 months ago | (#46232833)

You're a vampire. Oregon.

Re:Misleading statistics (4, Insightful)

geekoid (135745) | about 2 months ago | (#46233041)

Not a lot.

Eggs is a good example.
They where 'bad' becasue they had high cholesterol.
Science move on, and it turns out there are different kind of cholesterol, some 'good' some 'bad' so now eggs aren't as unhealthy as was thought.

Same with many things.

The media s the issue. It's can report science worth a damn.

Re:Misleading statistics (4, Insightful)

tlhIngan (30335) | about 2 months ago | (#46233231)

Eggs is a good example.
They where 'bad' becasue they had high cholesterol.
Science move on, and it turns out there are different kind of cholesterol, some 'good' some 'bad' so now eggs aren't as unhealthy as was thought.

Fats, too. It was deemed that fats were bad for you, so instead of butter, use margarine. Better yet, skip the fats period. Bad for you.

Of course, it was also discovered that hydrogenation had a nasty habit of turning unsaturated fats into different geometric isomers - "cis" and "trans". And guess what? The "trans" form of the fat is really, really, really bad for you (yes, that's the same "trans" as in trans fats). Suddenly butter wasn't such an unreasonable option compared to margarine, since margarine had to undergo hydrogenation.

Not to mention the effort to go "low fat" has had nasty side effects of its own - the overuse of sugar and salt to replace the taste that fats had, resulting in even worse health problems (obesity, heart disease) than just having the fat to begin with.

(And no, banning trans fats doesn't mean they ban "yummy stuff" - there's plenty of fats you can cook with to still get the "yummy" without all the trans fats.)

Re:Misleading statistics (2)

cold fjord (826450) | about 2 months ago | (#46233425)

Not to mention the effort to go "low fat" has had nasty side effects of its own - the overuse of sugar and salt to replace the taste that fats had, resulting in even worse health problems (obesity, heart disease) than just having the fat to begin with.

To that you can add lower bioavailability of various nutrients that are fat soluble.

Re:Misleading statistics (1)

ebno-10db (1459097) | about 2 months ago | (#46233751)

Eggs is a good example. They where 'bad' becasue they had high cholesterol.

That was even worse than bad statistics. There were no statistics, because there was no data. Once it was figured out that high cholesterol levels were bad, somebody just assumed that the cholesterol content of the foods you ate had a significant effect on your cholesterol levels. It doesn't. A bad guess became gospel for years. So much for scientific medicine.

Re:Misleading statistics (1)

Capsaicin (412918) | about 2 months ago | (#46235613)

So much for scientific medicine.

You don't mean that we ... gulp ... know more about medicine now than we did 10 years ago? And that we ... teeth chatter ... might know more in another 10 than we do today?! That's it, no more Western medicine for me -- it's sweat tents all the way.

Seriously though, we make guesses based on current knowledge ... some turn out to be bad. Folks do some empirical work, stats show the guess was wrong, we move on. Or we use invalid stats, statisticians complain, we clean up our act. We move on. That is scientific medicine.

Grammar Narcissim (0)

Anonymous Coward | about 2 months ago | (#46235589)

Eggs is a good example. They where 'bad' becasue they had high cholesterol.... The media s the issue. It's can report science worth a damn.

Now they are bad because they impair language skills.

Only joking, this is obviously the product of caffeine deprivation! Nothing to do with eggs at all ...

Re:Misleading statistics (0)

Anonymous Coward | about 2 months ago | (#46233383)

Both coffee and eggs have gone through repeated cycles of, "it's bad," "no, it's good," "no, it's bad," "no, it's good."

Once you get to the point of "it's bad" or "it's good", chances are you're not dealing with the original science but some writer's personal opinion. Journal articles on food don't say things like "It's bad" or "It's bad with p-value...", they say things more like, "X causes Y with correlation Z" or "X is a factor in Y." Most researchers are pretty aware that foods have both good and bad effects, and it is difficult to pigeon-hole each thing into a particular category, short of saying "In moderate amounts, it won't do much," for various definitions of moderate.

Re:Misleading statistics (1)

floobedy (3470583) | about 2 months ago | (#46234305)

Unfortunately, scientists studying nutrition face an ethical conundrum. They feel they must publish (and publicize) preliminary results because it might save lives. Suppose there's fairly good (but not extremely strong) reason to think that eggs are bad for you. Shouldn't you publicize that result? If you don't, millions of people could die needlessly. If you wait until the results are really certain (or at least more certain), then you have denied people the benefit of preliminary information.

Bear in mind that diseases like atherosclerosis develop over decades. It would take decades (and it would be unethical besides) to assign people to different dietary groups, control everything perfectly, and see who drops dead of heart disease. Since those studies can't be done, the results we do have are frequently preliminary or merely suggestive.

Eggs were bad for you because they contain cholesterol, and some peoples' arteries are clogged with exactly that substance. A few scientists made a leap--let's not consume a lot of exactly the substance which is clogging your arteries.

Unfortunately, that was wrong.

These days more publicizers provide tentative wording to suggest that a result is preliminary. For example, there is a campaign in California to get people to eat more nuts. There are signs paid for by the state which say "research suggests but does not yet prove that eating nuts can reduce your chance of a heart attack" and so on. At least that's a step in the right direction, IMO.

Re:Misleading statistics (1)

ceoyoyo (59147) | about 2 months ago | (#46234695)

The biggest problem with the way most people do statistics is that they don't have adequate statistical reasoning skills. The problem is in the design of experiments and analyses, before you ever get to punching the buttons in your stats package of choice. The differences you get from punching the wrong button are really very minor compared to things that happen all the time, like drawing conclusions based on tests you didn't do (the difference of differences error is an excellent example: half of all high impact neuroscience papers that can make this mistake do so).

The wild world of nutrition recommendations is mostly the way it is because it's all made up. The scientific evidence amounts to "get the basic amounts of macro and micro-nutrients to avoid disease", "eat a variety of foods", "eat vegetables" and a bunch of very basic, and specific things. Those basic and specific results are then wildly extrapolated, mostly by talk show hosts, celebrities, and people looking to cash in on gullible dieters.

Reminds me of the Bible Code controversy (4, Interesting)

sideslash (1865434) | about 2 months ago | (#46232797)

The world is full of coincidental correlations waiting to be rationalized into causality relationships.

Re:Reminds me of the Bible Code controversy (1)

ceoyoyo (59147) | about 2 months ago | (#46234727)

There's no such thing. A correlation test always comes with a p-value that gives you an idea of how likely your observations are to be a coincidence rather than a correlation.

Re:Reminds me of the Bible Code controversy (1)

colinrichardday (768814) | about 2 months ago | (#46234823)

But the standard tests for correlation only supply the correct P-value if the data satisfy certain conditions, such as normally distributed errors, no errors in the independent variable, constant standard deviation of errors, and so on.

Re:Reminds me of the Bible Code controversy (1)

ceoyoyo (59147) | about 2 months ago | (#46234843)

Any statistical test requires that you apply it appropriately. If you don't do so the result is called a "mistake," not a "coincidence" OR a "correlation".

Re:Reminds me of the Bible Code controversy (0)

Anonymous Coward | about 2 months ago | (#46235353)

A p-value is not a measure of how likely your observation is to be a coincidence.

It is the probability of observing your data or other data more extreme and not observed, given the modeling assumption that there is no relationship.

The inference is typically then, when a very small p-value is found, that either something very unusual happened (you measured extreme data), or the assumption of no relationship is a poor assumption.

There are a number of problems with that inference. In particular, it is a conclusion about the data sample, not about the hypothesis.

Neyman and Pearson wrote that we won't be too often wrong if in the long run we act as if we believe there is a relationship when the p-value is less than the pre-chosen threshold alpha.

Gold Standard? (4, Interesting)

radtea (464814) | about 2 months ago | (#46232819)

That means "outmoded and archaic", right?

I realize I have a p-value in my .sig line and have for a decade, but p-values were a mediocre way to communicate the plausibility of a claim even in 2003. They are still used simply because the scientific community--and even more so the research communities in some areas of the social sciences--are incredibly conservative and unwilling to update their standards of practice long after the rest of the world has passed them by.

Everyone who cares about epistemology has known for decades that p-values are a lousy way to communicate (im)plausibility. This is part and parcel of the Bayesian revolution. It's good that Nature is finally noticing, but it's not as if papers haven't been published in ApJ and similar journals since the '90s with curves showing the plausibility of hypotheses as positive statements.

A p-value is the probability of the data occurring given that the null hypothesis is true, and in the strictest sense it says nothing about the hypothesis under test, only the null. This is why the value cited in my .sig line is relevant: people who are innocent are not guilty. This rare case, where there is an interesting binary opposition between competing hypotheses, is the only one where p-values are modestly useful.

In the general case there are multiple competing hypotheses, and Bayesian analysis is well-suited to updating their plausibilities given some new evidence (I'm personally in favour of biased priors as well). The result of such an analysis is the plausibility of each hypothesis given everything we know, which is the most anyone can ever reasonably hope for in our quest to know the world.
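For readers who want the mechanics, a toy sketch of that update rule: posterior plausibility is proportional to prior times likelihood, normalized over the competing hypotheses (the three hypotheses, priors, and likelihoods below are invented purely for illustration):

```python
# Toy Bayesian update over competing hypotheses; every number here is illustrative.
priors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}          # plausibilities before the new evidence
likelihoods = {"H1": 0.10, "H2": 0.40, "H3": 0.05}  # P(new evidence | hypothesis)

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: v / total for h, v in unnormalized.items()}

for h in priors:
    print(f"{h}: prior {priors[h]:.2f} -> posterior {posteriors[h]:.2f}")
# Evidence that is much more probable under H2 than under its rivals shifts
# plausibility toward H2, even though H2 started behind H1.
```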

[Note on language: I distinguish between "plausibility"--which is the degree of belief we have in something--and "probability"--which I'm comfortable taking on a more-or-less frequentist basis. Many Bayesians use "probability" for both of these related but distinct concepts, which I believe is a source of a great deal of confusion, particularly around the question of subjectivity. Plausibilities are subjective, probabilities are objective.]

Re:Gold Standard? (1)

geekoid (135745) | about 2 months ago | (#46233091)

But publishing studies with Bayesian probability hasn't been the norm.
It will be, and there is a move to do so.
It's not fast, and it shouldn't be.

"Many Bayesians use "probability" for both of these related by distinct concepts, which I believe is a source of a great deal of confusion"
Sing it, brother!

Re:Gold Standard? (1)

ceoyoyo (59147) | about 2 months ago | (#46234739)

It shouldn't be, and I hope it never will be. If you use a non-informative prior for your Bayesian analysis in most cases you're just doing extra work to get the same result. If you use an informative prior you're colouring your results with your preconceptions. The reader, or the author of a meta-analysis, is the one who should be doing the Bayesian analysis, using their own priors. By all means, report a Bayes factor to make the meta-analysis easier, but also report a p-value, which is a good metric for a single experiment.

Re:Gold Standard? (0)

Anonymous Coward | about 2 months ago | (#46233331)

Thank you for a wonderful and educational post!

Re:Gold Standard? (0)

Anonymous Coward | about 2 months ago | (#46235205)

You need to read ET Jaynes. A sound axiomatization for "subjective plausibility" yields a probability measure.

Re:Gold Standard? (0)

Anonymous Coward | about 2 months ago | (#46236291)

Real-world probabilities are usually defined by statistics, and thus have issues because statistics are based on "finite sampling". In other words, some scientist can determine the "probability" of 3 cars turning right at some light by sitting there for a while and gathering statistics, but there is nothing that links that "statistical probability" to the real world, so statistics and probability are just guesses.
I would add "possibility" to your list of concepts because many people confuse "not likely" with "not possible", "possible" with "likely", etc.

Re:Gold Standard? (0)

Anonymous Coward | about 2 months ago | (#46236301)

The people in Fukushima probably confused "not likely" with "not possible" and thought they were safe...but anything that is possible, can (and eventually will) happen.

Real world implications (1)

martinux (1742570) | about 2 months ago | (#46232825)

Any researcher worth their salt states a p-value with enough additional information to understand if the p-value is actually meaningful. Anyone who looks at a paper and makes a conclusion based solely (or largely) off a p-value without thinking about how meaningful the results are from a clinical or real-world perspective is being lazy or reckless.

I guess there are quite a few insightful XKCD strips but this one seems most apt, here: http://xkcd.com/552/ [xkcd.com]

Re:Real world implications (1)

tgv (254536) | about 2 months ago | (#46236237)

Then there are very, very few researchers worth their salt. Even then, it has been shown that a .05 significance under ideal conditions has about a 1 in 3 chance of being a coincidence. If we add to that the number of errors in the assumptions, the experiments, the unpublished studies, etc., .05 means nothing. I found the work by Jim Berger et al. interesting: http://www.stat.duke.edu/~berg... [duke.edu]

Sudoku teaches all (2)

evanh (627108) | about 2 months ago | (#46232903)

I learnt the uselessness of statistics as a guide to correctness when trying to reduce the effort required at Sudoku. I've since discovered the best way to win is not to play. Doesn't stop me trying though!

Simple -- Correlation is NOT causality (1)

redelm (54142) | about 2 months ago | (#46232923)

The p-value is just the probability that the data/observations were the result of a random process. So a great p value like 0.01 says the results were not random. It does not confirm what made them non-random (i.e. the theory).

Epistemology is elementary, and often skipped by those who wish to persuade. "Figures do not lie, but liars figure." [Clemens]

Re:Simple -- Correlation is NOT causality (1)

guacamole (24270) | about 2 months ago | (#46236073)

You're using quite confusing/inaccurate terminology here. The p-value is the probability of observing a statistic that's at least as large (or extreme) as what has been computed from the sample, under the assumption of the "null" or default hypothesis. p=0.01 means that if the null hypothesis is correct, then the probability of observing what you just observed "or worse" is just 1%. A low p-value does not mean that under the _alternative_ hypothesis your results are necessarily "non-random". Normally, the alternative still specifies some kind of probability model. This depends entirely on what your alternative is.
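That definition is easy to check numerically: simulate the test statistic under the null many times and count how often it is at least as extreme as the observed one; the count agrees with the analytic p-value. A sketch with a one-sample z-type setup (the sample size and observed mean are invented, and the population sd is assumed known for simplicity):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

n, observed_mean = 25, 0.45       # invented: 25 observations with sample mean 0.45
sigma = 1.0                       # assume the population sd is known, for simplicity
z_obs = observed_mean / (sigma / np.sqrt(n))

# Analytic two-sided p-value under H0: true mean = 0.
p_analytic = 2 * (1 - stats.norm.cdf(abs(z_obs)))

# Monte Carlo version of the same definition: how often does a null world
# produce a sample mean at least this extreme?
null_means = rng.normal(loc=0.0, scale=sigma / np.sqrt(n), size=200_000)
p_simulated = np.mean(np.abs(null_means) >= abs(observed_mean))

print(f"z = {z_obs:.2f}, analytic p = {p_analytic:.4f}, simulated p = {p_simulated:.4f}")
```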

Covered before (0)

Anonymous Coward | about 2 months ago | (#46233075)

Dance of the p-values youtu.be/ez4DgdurRPg

Misconceptions (2)

vdorie (1106873) | about 2 months ago | (#46233089)

A few folk here have commented using incomplete or inaccurate definitions of p-values. A p-value is the probability of finding new data as extreme as or more extreme than the data you observed, assuming a null hypothesis is true. A couple of salient criticisms not mentioned in the article are a) why should more extreme data be lumped in with what was observed and b) what if "new" data can't sensibly be obtained.

In a less technical sense, what the article didn't get into so much is that there is a strong publication bias towards results that are significant (i.e. small p-values), to the point where you need <0.05 to even consider submitting. Some key reading: http://www.stanford.edu/~neilm/qjps.pdf [stanford.edu]. The short version is to not believe it when the news says that "recent research shows...".

Personally, I wait for evidence to accumulate before, say, changing my diet. And if you really want to get it right, dig through the literature yourself. Some of my saddest moments have come from statistics consulting, where mostly people come to you looking for permission to run an inappropriate analysis, not to understand their data or fit the "right" model. They want to get published, and that's just how things are done.

Re:Misconceptions (0)

Anonymous Coward | about 2 months ago | (#46233873)

Instead of "finding new data," i would say "hypothetically replicating the study and obtaining data", but this is minor.

The reason for lumping in more extreme data is exactly because that's what you're interested in; you're looking for the best evidence possible to disprove the null hypothesis. What is more important, that group A has a much higher IQ than group B, or that the estimated difference of the two groups' IQs is about 8.4? If a measured difference of 8.4 is enough to convince you, then (assuming constant sigma) 12.4 should be too, and 18.7 is even better, etc., and that is what the p-value measures: the probability of obtaining evidence at least as good as what you observed, since that evidence would (presumably) be just as powerful or better. In other words, no one sets out to do an experiment hoping that the measured difference will be exactly, say, 8.4; they just want it to be positive and far enough away from 0 to be interesting.

One is typically interested in showing an effect while bounding type I error, not underestimating it.

Re:Misconceptions (1)

ceoyoyo (59147) | about 2 months ago | (#46234773)

Um, no. Your criticisms don't make sense. You're falling into a misunderstanding that is perpetuated because theoretical statisticians are so careful about how they define things, particularly when Bayesians might be looking over their shoulders.

A p-value is the probability that accepting your statistical hypothesis (rejecting the null hypothesis) would be an error. This is equivalent to saying that the p-value is the probability that, picking a random run out of many runs of your experiment, you'd expect to get your result or something more extreme, purely by chance.

Re:Misconceptions (1)

Paradigma11 (645246) | about 2 months ago | (#46236165)

No, you are wrong, the op is correct. Especially: "A p-value is the probability that accepting your statistical hypothesis (rejecting the null hypothesis) would be an error." is wrong and "A p-value is the probability of finding new data as or more extreme as data you observed assuming a null hypothesis is true." is correct.

The Earth Is Round (p < 0.05) (4, Insightful)

DVega (211997) | about 2 months ago | (#46233093)

There is a classic article by Jacob Cohen [uci.edu] on this subject.

Also there is a simpler analysis of the above article [reid.name]

To quote one of my professors... (1)

margeman2k3 (1933034) | about 2 months ago | (#46233207)

"A p-value of 0.05 means there's a 5% chance that your paper is wrong. In other words, 1 in 20 papers is bullshit."

Re:To quote one of my professors... (0)

Anonymous Coward | about 2 months ago | (#46233813)

Your professor is bad and he should feel bad.

Re:To quote one of my professors... (0)

Anonymous Coward | about 2 months ago | (#46234433)

A p-value of 0.05 means there's a 5% chance that your paper is wrong. In other words, 1 in 20 papers is bullshit.

RTFA, it's much worse than your professor thinks. Specifically, in their "long-shot" scenario a p-value of 0.05 means there is an 89% chance that your paper is wrong! Even in their "toss-up" scenario, a p-value of 0.05 means there is a 29% chance that it is wrong.

In fact the point of TFA is to address the very error your professor is promulgating.
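For the curious, numbers in that ballpark fall out of a simple prior-odds calculation: start from prior odds that the hypothesis is real, multiply by the evidential strength (Bayes factor) that a p-value near 0.05 roughly corresponds to, and read off the posterior chance the finding is false. The sketch below assumes a Bayes factor of about 2.5, a commonly cited ballpark for p ≈ 0.05; treat the output as illustrative rather than a reproduction of TFA's exact calculation:

```python
# Rough sketch: how "p = 0.05" can translate into a high chance of being wrong.
# ASSUMPTION: a p-value near 0.05 corresponds to a Bayes factor of about 2.5 in
# favour of the effect -- a ballpark figure used here purely for illustration.
BAYES_FACTOR = 2.5

def prob_finding_is_false(prior_prob_effect, bayes_factor=BAYES_FACTOR):
    prior_odds = prior_prob_effect / (1 - prior_prob_effect)
    posterior_odds = prior_odds * bayes_factor
    posterior_prob_effect = posterior_odds / (1 + posterior_odds)
    return 1 - posterior_prob_effect

for label, prior in [("long shot (5% prior)", 0.05), ("toss-up (50% prior)", 0.50)]:
    print(f"{label}: P(finding is false | p ~ 0.05) ~ {prob_finding_is_false(prior):.0%}")
```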

Re:To quote one of my professors... (1)

ceoyoyo (59147) | about 2 months ago | (#46234797)

No. If someone writes a paper and claims that "this is true because p < 0.05" then they are wrong regardless. The correct conclusion is "this result supports our hypothesis because p is below the threshold we set for minimal evidence." The point the article makes is correct, but it's not a problem with p-values, it's a problem with the conclusions people draw from them. His professor is absolutely correct, supposing that the papers he's talking about aren't meaningless bullshit to start with.

Re:To quote one of my professors... (1)

guacamole (24270) | about 2 months ago | (#46236083)

That's exactly what it means. This is why for the idea to be accepted or consensus to be reached, you need a lot more than one study.

Re:To quote one of my professors... (1)

Paradigma11 (645246) | about 2 months ago | (#46236175)

"A p-value of 0.05 means there's a 5% chance that your paper is wrong. In other words, 1 in 20 papers is bullshit."

This is complete bullshit. If you study something where H1 is true, then there is a 0% probability of being wrong if you report significant findings.

Re:To quote one of my professors... (1)

zachie (2491880) | about 2 months ago | (#46236391)

Wrong, it is much, much worse than that.

Imagine a body of scholars continuously producing wrong hypotheses. They test all of them. Your teacher correctly pointed out that one in twenty will have a p-value < 0.05 purely by chance. But they write papers only off these! In such a scenario, 100% of the papers are wrong.

In other words, this 5% chance of a paper being bullshit is only a lower bound.
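That scenario is easy to simulate: generate studies of pure noise, "publish" only those that clear p < 0.05, and look at the published record. A minimal sketch (study counts and sizes are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

n_studies, n_per_arm = 1000, 30
published = 0

for _ in range(n_studies):
    treatment = rng.normal(size=n_per_arm)   # no real effect anywhere
    control = rng.normal(size=n_per_arm)
    _, p = stats.ttest_ind(treatment, control)
    if p < 0.05:
        published += 1                        # only "significant" results get written up

print(f"{published} of {n_studies} null studies were 'published'")
print("Every one of those published findings is wrong, even though each cleared p < 0.05.")
```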

Re:To quote one of my professors... (1)

LateArthurDent (1403947) | about 2 months ago | (#46237057)

Exactly. However, that's not a difficult problem to solve. What the Nature article fails to address is the real problem: it's not easy to publish papers that do nothing but confirm the findings of another paper.

The article talks about how a researcher had his dreams of being published dashed once he failed to achieve a similar p-value upon attempting to reproduce his own research. This is bullshit. Journals should be selective, yes. They should be selective in terms of whether experiments have been run with proper methodology, and whether the study supports the conclusions made by the author. They shouldn't be selective based on p-value. For proper science to occur, not only should said researcher have been able to publish his first low p-value study, he should have been able to publish his second high p-value study. Other researchers at different institutions should attempt to reproduce the work and publish their positive or negative findings. Only once dozens of said studies are performed can we actually start to draw a conclusion: if only 1 in 20 experiments show a p-value below 0.05, well that doesn't actually disprove the null-hypothesis, it's evidence for it.

Even if the 5% chance of a paper being bullshit was an upper bound, that would still be a really plausible scenario. Replicating experiments are a fundamental part of science, but journals are only interested in unique experiments yielding positive results with low p-values. Either that or negative results replicating a particularly important paper that everyone takes for granted. Ideally, while grad students are still new and learning the ropes, their research should consist of replicating others' research and publishing the result, whatever it may be. It's the perfect job to get them started before they've had the chance to do significant research of their own, and it's incredibly valuable to the community at large.

Obvious fact is obvious (0)

Anonymous Coward | about 2 months ago | (#46233369)

Of course p-values don't tell you that your hypothesis is correct.

A p-value is the likelihood of getting a result as extreme as (or more extreme than) the one observed, assuming the null hypothesis is true. It has nothing to do with the probability of the truth of the alternative hypothesis.

95% CI (0)

Anonymous Coward | about 2 months ago | (#46233517)

is used by people who are aware of this well-known problem.

Re:95% CI (1)

ceoyoyo (59147) | about 2 months ago | (#46234807)

A confidence interval is completely equivalent to a statement of p-value, mean and type of distribution. In fact, CIs are almost always calculated from that trio. It's just another way of showing the same information.

Most commonly (1)

msobkow (48369) | about 2 months ago | (#46234085)

Correlation != Causation

Re:Most commonly (0)

Anonymous Coward | about 2 months ago | (#46234337)

Irrelevant. The discussion is about whether or not there's a correlation in the first place.

Lies! (0)

Anonymous Coward | about 2 months ago | (#46234167)

To rephrase a famous quote, "There are lies, damned lies, and then there are statistics - who ever heard of a statistician fudging the numbers?"

Q values (1)

drooling-dog (189103) | about 2 months ago | (#46234357)

At least in bioinformatics, the correction of p-values for multiple comparisons ("q-values") has been standard practice for quite a while now.
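For reference, the most common flavour of this is the Benjamini-Hochberg false-discovery-rate adjustment (q-values in the strict Storey sense involve an extra estimation step, so the sketch below is the BH variant only; the raw p-values are invented placeholders):

```python
# Benjamini-Hochberg FDR adjustment -- a sketch; the raw p-values are placeholders.
def bh_adjust(pvals):
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity.
    for rank in range(m - 1, -1, -1):
        i = order[rank]
        adj = min(pvals[i] * m / (rank + 1), 1.0)
        running_min = min(running_min, adj)
        adjusted[i] = running_min
    return adjusted

raw = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.216]
for p, q in zip(raw, bh_adjust(raw)):
    print(f"raw p = {p:.3f}  ->  BH-adjusted = {q:.3f}")
```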

Re:Q values (1)

ceoyoyo (59147) | about 2 months ago | (#46234829)

Please tell me they don't really call them 'q-values'?

A p-value IS corrected for multiple comparisons. If you did multiple comparisons and you didn't correct it, it ain't a p-value. A good term for those would be "the result of my fishing expedition."

Re:Q values (1)

tgv (254536) | about 2 months ago | (#46236247)

But with what correction? There isn't one correction for multiple comparisons, and they all have their problems. Just go Bayesian instead.

Re:Q values (1)

Paradigma11 (645246) | about 2 months ago | (#46236183)

At least in bioinformatics, the correction of p-values for multiple comparisons ("q-values") has been standard practice for quite a while now.

But then your beta error goes through the roof and you won't find anything. Wouldn't it be far more efficient to repeat the significant experiments?

Torturing the data (4, Informative)

floobedy (3470583) | about 2 months ago | (#46234589)

One variant of "p-hacking" is "torturing the data": performing the same statistical test over and over again, on slightly different data sets, until you get the result that you want. You will eventually get the result you want, regardless of the underlying reality, because on average there is about 1 spurious result for every 20 statistical tests you perform at p=0.05.
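A sketch of what that looks like in practice: take a dataset with no real effect, slice it into enough subgroups, and report the smallest p-value you find (the group labels here are arbitrary stand-ins, not from any study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

n = 2000
outcome = rng.normal(size=n)                       # no real effect anywhere
treated = rng.integers(0, 2, size=n).astype(bool)  # random "treatment" labels

# Arbitrary attributes to slice on -- stand-ins for age band, region, diet, etc.
attributes = {name: rng.integers(0, 4, size=n) for name in ["age_band", "region", "diet", "ethnicity"]}

best_p, best_label, tests = 1.0, None, 0
for name, values in attributes.items():
    for level in range(4):
        mask = values == level
        _, p = stats.ttest_ind(outcome[mask & treated], outcome[mask & ~treated])
        tests += 1
        if p < best_p:
            best_p, best_label = p, f"{name}={level}"

print(f"Ran {tests} subgroup tests on pure noise; best p = {best_p:.4f} in subgroup {best_label}")
# If the 16 tests were independent, the chance of at least one p < 0.05
# would be about 1 - 0.95**16, i.e. better than even.
```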

I remember one amusing example, which involved a researcher who claimed that a positive mental outlook increases cancer survival times. He had a poorly-controlled study demonstrating that people who keep their "mood up" are more likely to survive longer if they have cancer. When other researchers designed a larger, high-quality study to examine this phenomenon, it found no effect. Mood made no difference to survival time.

Then something interesting happened. The original researcher responded by looking for subsets of the data from the large study, to find any sub-groups where his hypothesis would be confirmed. He ended up retorting that "keeping a positive mental outlook DID work, according to your own data, for 35-45 year-old east asian females (p<0.05)"... How many statistical tests did he perform to reach that conclusion, while trying to rescue his hypothesis? If he ran more than 20 tests, then you would expect one spurious positive result just from random error, even though his p value was less than 0.05.

This kind of thing crops up all the time.

Re:Torturing the data (1)

floobedy (3470583) | about 2 months ago | (#46234675)

Whoops. The post got garbled because slashdot wrongly interpreted the less-than sign as an html tag opening, and I didn't escape it. (Which seems like a bug to me. The text "p<0.05" is obviously not the beginning of an html tag, because no html tags accepted by slashdot begin with 0.05). Anyway, the offending paragraph should say:

Then something interesting happened. The original researcher responded by looking for subsets of the data from the large study, to find any sub-groups where his hypothesis would be confirmed. He ended up retorting that "keeping a positive mental outlook DID work, according to your own data, for 35-45 year-old east asian females (p<0.05)"... How many statistical tests did he perform to reach that conclusion, while trying to rescue his hypothesis? If he ran more than 20 tests, then you would expect one spurious positive result just from random error, even though his p value was less than 0.05.

Statistics (1)

stevez67 (2374822) | about 2 months ago | (#46234705)

Lies ... damned lies ... and statistics. The P value only tells you if there is statistical significance in the data, not whether your hypothesis is correct or incorrect.

Nothing wrong with P values if they are applied co (0)

Anonymous Coward | about 2 months ago | (#46235195)

I think the article missed that major point. I think P values are fine when applied to the normal distribution; however, apparently some people are using the "empirical rule" to apply them to any distribution. If people configured their data collection to allow the central limit theorem to be used, then the data would be normally distributed and P value limits would be fine.

Nowhere does the article mention the normal distribution or the central limit theorem.

Re: Nothing wrong with P values if they are applie (1)

brickh0us3 (3510891) | about 2 months ago | (#46235775)

This was an interesting read on the subject, published by a medical doctor who apparently had a good stat background. Dr. Ioannidis has kind of been on a crusade to look at design issues through meta analysis of old med studies that were suspect. http://www.ncbi.nlm.nih.gov/pm... [nih.gov]. There has been some more recent work of his in the news as of late, I believe. I love the 1001 varying non-mathematical definitions of p values in this thread; it was cute.

So much confusion, so little understanding (0)

Anonymous Coward | about 2 months ago | (#46236251)

The article is little more than an appeal to undefined conceptions of "plausibility" that presumably would be little more than generalized weighted probability functions. However, the article seems to rely more heavily upon being reasonably sure the reader is as confused as the author with respect to the more subtle technical details.

P values reflect, given certain assumptions about how the data are sampled and the statistical distribution from which they are drawn, the probability of committing a type I statistical error (rejecting a true null hypothesis). Consequently, a P value of 0.05 suggests that such a result will manifest itself about 1 time in 20 by chance alone, assuming sampling independence and the assumed probability distribution, usually a Gaussian one, given that in most situations the sampling distribution of the mean of an unknown distribution approaches a Gaussian as the sample size tends to infinity. For most types of data, such as the agronomic data evaluated by Fisher, a p value of 0.05 was and remains a useful choice, since it has been found generally useful or meaningful in the context within which it was used (to discuss the nature and existence of differences among plants and their growth rates). In other fields of study more stringent critical values are the norm, given the nature and frequency of the expected outcomes of interest. Obviously, type I statistical errors are not the only type of statistical error, since it is also possible to accept the null hypothesis when it is in fact false (a type II error). However, given the law of the excluded middle, these two notions are not independent of one another.

Much of the confusion in the statistical literature, and clearly on display on Slashdot (no surprise there), stems from being unable to appropriately recognize what constitutes the null hypothesis and what constitutes a reasonable framework from which to decide the nature of the underlying distribution being tested, as well as the extent to which certain assumptions, such as the independence of samples and variates, are met and how they may affect p values. In some tests it is the measure of central tendency that is tested, whereas in others it is the homogeneity of variances. In more complicated designs more care must be afforded, just as in analyses of correlation or covariance, where it is the independence or lack thereof between variables that is being tested. In situations with multiple covariates, as in their block-design analogs, interaction effects must be controlled. Likewise, in cases of multiple comparisons one must account for the family-wide or experiment-wide error, since the more tests one conducts, the greater the chance of encountering an unlikely result by chance alone. Such considerations give rise to adjustments of critical p values, as in the case of the widely known Bonferroni adjustments. Likewise, the discriminatory power of different tests for a given sample size can also be an issue requiring consideration, not to mention specificity and sensitivity, which may be more important in certain settings, such as epidemiology.

It should also be kept in mind that most statistical procedures assume variates to be either continuous or discrete in their distribution, and rely on some idea of the underlying topology of the space being sampled, typically a metric one, and often a Hilbert space. Should the phenomena of interest be pseudometric rather than metric in nature, or live within more general topological spaces, then standard probability theory as derived from measure theory would have to be more delicately applied, if not abandoned entirely for non-parametric techniques. P-values, like everything else in science, must be appropriately interpreted within the context in which they are being discussed. However, once such considerations are appropriately made, a lower p-value will, as noted by Fisher, provide more confidence in ruling out chance as an explanation of a particular statistical outcome than a higher one. Alternatives, such as "plausibility", will need to prove themselves viable.
