ShakaUVM's Journal: Epistemology and the Scientific Method

There are three points I'd like to make in concluding this thread (which I've enjoyed, by the way -- the Philosophy of Science is a fun topic for me). It's obviously a huge topic, so I'd like to summarize what I'm trying to say:

Point One: There are many paths to learning facts, not just the scientific method.
Point Two: Science as practiced (as opposed to an ideal practice of science) is flawed, but it works well enough that we use it.
Point Three: The Scientific Method even in its ideal form could be better, and doesn't deserve the sort of religious fervor associated with it by many people these days.

Point One:
"I think you're lumping a whole bunch of things together there."

Quite true. The field of epistemology -- how do I know something to be true? -- encompasses a great many ways and means.

Different kinds of questions have different methods for appropriately answering them:
Question 1: Did Sally kiss Harry? Answered by an observation or self-reporting followed by a chain of word-of-mouth.
Question 2: Are Scrub Jays blue? Answered by an ornithologist going out and studying a number of Scrub Jays.
Question 3: Are men taller than women? Answered by statistical methods.
Question 4: Is 5 greater than 4? Answered by a rational claim without any empirical observations.
Question 5: Does ice cream cause polio? Answered incorrectly by establishing mere correlation; answered correctly by establishing causation.
Question 6: Should people wear hats in church? Answered by religious debate from authority.
Question 7: Is murder wrong? Answered by a variety of ethical or religious arguments. Some people claim that "murder is wrong" is not a fact at all, but an opinion. Others claim it is a fact.
Question 8: Does adding fertilizer cause tomato plants to grow faster? Answered by every 8th grade science fair, using the traditional scientific method of hypothesis testing.

My point is that the scientific method, while indeed a powerful tool for arriving at truth, and useful in many situations, is not the only means of learning truth. Different questions have different ways that are appropriate for answering them. The scientific method is not the complete answer to epistemology. It has made *huge* advances possible, and was right to excise argument from authority from questions that can be tested, but it is not the answer to epistemology that Logical Positivists make it out to be.

"In other words, the only reason to believe something is based on evidence, and things about which evidence can fundamentally not be collected are not worth thinking about."
Logical Positivism has gone through various incarnations over the last century, but the most strident version says that whatever cannot be scientifically proven (via verification or falsifiability) is not worth considering. Other versions accept different sets of facts. "My fiancée loves me" is resistant to being put in a test tube, but we could perhaps look at evidence for it, like taking her word for it, or perhaps looking at a cake she baked for me.

The trouble, of course, is that it really is a slippery slope from there to reports of people seeing ghosts (the original point of this Slashdot article). How can we accept my fiancée's word that she loves me, but not accept her friend's claim that she saw a ghost when she was young? We can't start from the assumption that ghosts don't exist, since then we're just assuming our conclusion. Hence the strident version of Logical Positivism -- that which cannot be shown scientifically is not of interest.

Point Two: The practice of Science will never match the ideal.
"What I'm saying is that the belief that every published result is even supposed to be correct is, in itself, a fundamental misunderstanding of the nature of the scientific method on your part."

Like I said, bad science exists, and even good science can reach incorrect conclusions. The claim advocates of the scientific method would make is that incorrect results get rooted out over time, not that all published results are intrinsically correct the very first time.

I've rambled enough on this, I think. Any discussion of the flaws of the scientific method must necessarily include how it works out in practice. Even if we assume perfect scientists, the results of the scientific process (in which I include publication and review) will necessarily include errors, distorted claims, overblown reportage in the press, misunderstandings by the public, outright fraud, and slipshod reproduction of results. And yes, I think we have to say this is a problem with the scientific method at some level. We're not trying to make ourselves feel good when we do science -- we're trying to find the truth. And a process for discovering truth which produces all these problems is not perfect and can be improved. For example, if we simply had a systematized way of following up on studies to test their accuracy, I'd say the scientific method had indeed been improved.

Point Three: The Scientific Method even in its ideal form could be better, and doesn't deserve the sort of religious fervor associated with it by many people these days.

As you can probably guess, I don't hold with Logical Positivism. The scientific method has developed a sort of cult following among scientists, by which I mean that some of the other methods for determining truth (as outlined above) get sneered at, and anything impugning the sanctity of the scientific method is met with what one might call dogmatic reactionism.

Let's set aside the issues of publication bias, treatment of heretics, etc. There are three major problems with the scientific method, even in its ideal form:
1) Generation of the hypothesis.
2) Probabilistic Results.
3) Singular Results.

Hypothesis Generation
The first issue is one Kuhn wrote about extensively. Essentially, he said, the hypothesis is often generated *after* the testing phase of the scientific method. While this must be done to a certain degree, he believed it compromises the scientific process.

I have a different issue with it -- essentially, hypothesis creation is educated guessing. Even if a hypothesis can be tested and found to be supported by the facts, that doesn't actually mean it is the *right guess*. I could hypothesize that "a lack of insulin is the cause of diabetes". I could run an experimental/control test, injecting the experimental group with insulin and the control group with a placebo, and find that the insulin group has its diabetes symptoms go away. By the rules of the scientific method, I have confirmed that hypothesis.

*Alternatively* I could hypothesize that "diabetes is caused by an inflammation process out of control", inject an experimental group with strong anti-inflammatories and the control group with placebo, and *also* find that I halted the diabetes. Or I could hypothesize that "eating fast food causes diabetes". I'd feed the experimental group nothing but Big Macs for 10 years and see that they, indeed, have higher rates of diabetes than the control group. A hypothesis is an educated guess, and multiple educated guesses, all orthogonal to each other, can be found and proven to be "true".

I understand you are very big on the models developed by the scientific method, but (assuming we know nothing about diabetes) it is unclear which of those three models is best, or most useful. Should we focus on the lack of insulin? The underlying inflammation? Diet? This is a somewhat simplistic example, but the point in a nutshell is this: a hypothesis is a guess, and it may be shown to be "true" by the scientific method without actually being true, or without actually identifying the underlying cause.

The scientific method is the search for truth after all, and it's not clear how a guess is supposed to perfectly match the underlying reality.

Probabilistic Results.
This is, as I've mentioned before, a very serious problem with the scientific method. Especially in the life sciences, we don't get clear-cut answers to questions. We get probabilistic answers. While a 95% confidence level sounds quite high, anyone with an understanding of Bayes' Law can immediately see the problem -- I could conduct 20 or so studies asking the most ridiculous questions ("Does listening to DVDs instead of CDs cause heart attacks?" "Does drinking Sprite instead of 7-UP cause cancer?" etc.), and odds are one of them -- a complete fabrication, remember -- will be "scientifically" shown to be "true".
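To make that concrete, here's a minimal simulation sketch (assuming standard numpy/scipy; the batch size, study count, and group sizes are arbitrary illustrative choices, not from any real study). Every study below compares two groups drawn from the *same* distribution, so any "significant" result is, by construction, a false positive:

```python
# Sketch of the multiple-comparisons problem: run batches of 20 null studies
# at the conventional p < 0.05 threshold and count how often a batch yields
# at least one spurious "finding".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_batches, studies_per_batch, n_subjects = 10_000, 20, 50

batches_with_false_positive = 0
for _ in range(n_batches):
    for _ in range(studies_per_batch):
        # Both groups drawn from the SAME distribution: any "effect" is noise.
        a = rng.normal(0, 1, n_subjects)
        b = rng.normal(0, 1, n_subjects)
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:
            batches_with_false_positive += 1
            break

print(f"Batches with at least one false positive: "
      f"{batches_with_false_positive / n_batches:.1%}")
# Expected: 1 - 0.95**20, about 64% of batches contain a spurious "result".
```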

It's similar to a landmark case in DNA testing. Early DNA testing was very accurate -- say, a one-in-a-million chance of providing a wrong answer. There was a murder in LA, with a prosecutor looking for a conviction based solely on this scientific and accurate DNA evidence, with only million-to-one odds of being wrong. The defendant, on the other hand, could point out that there's only a one-in-thirteen chance he's the murderer: 13 million people (living in the LA region) * 10^-6 = 13 people matching the DNA, of whom he happened to get accidentally tagged. If the defendant hadn't been aware of stats, he could easily have gone to the electric chair.
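The base-rate arithmetic is worth spelling out. A sketch using the story's numbers (which are illustrative, not the actual case figures):

```python
# Back-of-the-envelope version of the DNA argument above.
population = 13_000_000      # people in the LA region (story's figure)
match_rate = 1e-6            # chance a random innocent person matches

expected_matches = population * match_rate
print(f"Expected people matching the DNA: {expected_matches:.0f}")  # ~13

# If exactly one of those matching people is the true culprit, the chance
# that any given matching person is guilty is only:
p_guilty_given_match = 1 / expected_matches
print(f"P(guilty | match) = {p_guilty_given_match:.3f}")            # ~1/13
```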

It's an analogy for the scientific process in general. When we combine large numbers of studies with relatively high p-value thresholds, we are pretty much guaranteed that our process for arriving at truth will sometimes arrive at falsehood.

Singular Results
We create scientific models based on large numbers of observations. From the formation of stars to the effects of pituitary growth hormone supplements on adolescent males to string theory (ok, maybe not string theory), we make empirical observations and then create a model which fits the data. But the process simply doesn't work when we've only met a single Martian and want to guess the height distribution of Martians -- after all, the guy visiting Earth might be an especially small one, in order to fit inside his spaceship.
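A quick sketch of why a single data point gets us nowhere (the Martian's height is made up for illustration): the sample mean is just the observation itself, and the sample standard deviation is undefined, so the data say nothing at all about the spread of the distribution.

```python
# One observed Martian tells us nothing about the distribution of Martians.
import numpy as np

martians = np.array([1.2])       # a single (hypothetical) Martian, 1.2 m tall
print(np.mean(martians))         # 1.2 -- the "mean" is just the observation
print(np.std(martians, ddof=1))  # nan -- the sample standard deviation
                                 # divides by n - 1 = 0: no spread estimate
```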

And data models are all well and good -- until we get a data point which doesn't fit the model. Taleb wrote a book on the topic, called The Black Swan, which argues that history is shaped by events which don't fit our pretty models. In other words, the most interesting things in life don't get predicted well, and don't get handled well by our models of the world.

Taleb is something of an idiot, for all that he's a smart guy, but the book is worth reading if you can find a copy without paying for it. He makes the very important point that life isn't usually Gaussian. We just assume things are Gaussian, because we're taught to, even when the model has no right to be applied. He goes a bit too far, calling Gaussians a fraud (on the contrary, they have every right to be used when we're summing large numbers of independent events, by the Central Limit Theorem), but he's right that their use is dogmatic -- and wrong in many, many instances.

He's fascinated by fractal models. I personally think that Cauchy distributions better model a lot of things in real life (they have thick tails, meaning they allow exceptional events to occur much more often than a Gaussian would), but the point remains: we can't make models (or at least not good ones) from singular observations, and extraordinary events are hard to predict and model.
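Here's a quick sketch of that tail-weight difference, comparing a standard normal to a standard Cauchy (both centered at 0 with unit scale; the cutoffs are arbitrary):

```python
# Tail probabilities P(X > k) for a standard normal vs. a standard Cauchy.
from scipy import stats

for k in (3, 5, 10):
    gauss_tail = stats.norm.sf(k)     # survival function of standard normal
    cauchy_tail = stats.cauchy.sf(k)  # survival function of standard Cauchy
    print(f"k={k:2d}  normal: {gauss_tail:.2e}  cauchy: {cauchy_tail:.2e}")

# At k=5 the Cauchy puts roughly 200,000x more mass in the tail than the
# normal does -- "exceptional" events are far less exceptional under it.
```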

This is not a theoretical argument. Events like Katrina, Black Monday, a hypothetical Big One in California, etc., are all very important topics both to science and to the general public, but our current methods are inadequate for dealing with them.
