ShakaUVM (157947) writes "Without any fanfare or notice in the media, President Obama has granted INTERPOL diplomatic immunity while conducting investigations on American soil. While INTERPOL has been allowed to operate in the US in the past under an executive order by President Reagan, it has had to follow the same rules as the FBI, CIA, etc., while on American soil. Under the new order, among other things, INTERPOL is immune to Freedom of Information Act requests, and INTERPOL agents cannot be punished for most crimes they may commit. Hopefully the worst we'll see from this is INTERPOL agents ignoring their speeding tickets."
ShakaUVM writes "People who give up a little bit of liberty for a little bit of security deserve neither, the saying goes. But what happens when people give up so much liberty that their entire country resembles an Orwellian dystopia, yet the pervasive monitoring doesn't help to solve any crimes? That's what is happening in the United Kingdom today, the Guardian is reporting. While the Guardian tries to put a good spin on the whole fiasco, the fact remains that CCTV footage helps with only 3% of street robberies, the very crimes it was supposed to be best at preventing.
Should England finally move to eliminate its troubling state surveillance program?"
ShakaUVM writes "Blizzard is suing Michael Donnelly, author of the WoW Glider program, which automates the repetitive gameplay of World of Warcraft. Blizzard claims the tool violates copyright law because "it copies the game into RAM." Regardless of what one thinks of macroing programs, the implications for copyright law if "copying into RAM" becomes a violation are profound, and Blizzard has the legal funding to push such a warped interpretation of the law through. Hopefully this case will go better than when Blizzard used its well-funded legal team to bully and then crush the UCSD students who developed the BNETD battle.net emulator."
Anonymous Howard (157947) writes "As the Bali climate change conference gets underway, a group of global warming dissenters has descended on Bali, arguing for governments to have the courage to "do nothing" in the face of what they call a non-existent problem. While they claim to have papers showing that the IPCC has been wrong about nearly everything, their more insightful claim concerns how politics can fracture the scientific community, to the point where subscription to a political cause matters more than real scientific debate."
That's the point of regulations. Or rather, that's why large businesses oftentimes support gov't regulation: it squeezes small businesses out, while large corporations can hire people to take care of tedious things like HIPAA, Sarbanes-Oxley, monthly W-2 payroll withholdings, etc. I went cross-eyed trying to handle all the payroll and taxes for a small corporation, and even with professionals doing it for us (costing $500/year for payroll, $1000/year for taxes), it's *still* a massive hassle. Guess who's responsible when Paychex, instead of sending out 1099s, majorly messes up and issues checks for the amount that should be on the 1099? Hint: not Paychex.
Due to friction and barriers to entry, it's very difficult to profitably run a small S Corp on less than six figures a year.
While you always hear some left-wing people complaining about large corporations, the highly regulated environments found in socialist countries are very favorable to large companies at the expense of small ones. Compare the top-100 corporations of 1950 in France and the US with those still around decades later: the majority of the US firms had vanished, but the majority of the French ones were still around.
(In regards to the ratemycop story.) The police internal review system is totally incestuous. Policemen will never give each other a fair review unless there's a tremendous amount of outside pressure applied to the department.
One of my dad's friends was a guy named Sam Knott. On my 9th birthday, his 20-year-old daughter was pulled over by an unstable CHP officer named Craig Peyer, who killed her by a bridge overpass.
The real sticking point for Sam was that the CHP had received a large number of complaints about the guy's aggressive and threatening personality. Not only did they not bother to investigate any of them, they didn't even have a system for tracking them; they all went into a filing cabinet and were ignored. Sam investigated the black hole of police accountability, really didn't like what he found, and crusaded tirelessly for the next 20 years to reform the system. He showed up at city hall meetings, befriended politicians, antagonized police chiefs who were desperate to preserve their above-the-law status, and got the bridge where she was killed renamed after her (it's a couple miles from my house). He got the laws changed, too.
He died from a heart attack in 2000 while cleaning up the bridge where his daughter was killed.
There are countless other examples of police never being held accountable: you can watch videos on YouTube of black guys trying to file complaint reports and being dismissed or turned away. Hell, my dad was held at gunpoint by a Texas Ranger because he didn't think he should have to fill out his SSN on the speeding ticket he got (for doing 70 on the highway). When we called to complain, they said, yeah, he's been having some psychological issues. Kind of an understatement: the guy turned purple with rage when my dad merely questioned whether he could be asked for his SSN, then drew his gun and threatened to throw him in jail for the night.
But Texas still let him go on patrol -- he was just having "some issues".
So yeah, sites like ratemycop, which provide even a totally unofficial level of police accountability, should be strongly encouraged. In fact, something like this should be mandated as part of every department's internal affairs office.
It's sad that the epic story of Sam Knott's crusade for reform doesn't get a Wikipedia article, and there's just a brief stub for Craig Peyer (who once claimed, "There are two people you don't piss off in this world: God and a Highway Patrolman, and not necessarily in that order."), even though Sam was tremendously influential and the story got a lot of press. I guess having one's daughter murdered by an evil cop, and having the father campaign for and win systemic change, isn't as notable as a Pokemon character.
Here are some classical rational arguments for God by Aquinas, Anselm, Descartes, Pascal, Lewis, and James, in my own paraphrasing to keep them short. I think Pascal's and James' are probably the ones that will interest atheists the most: the others, while interesting, are rational arguments *for* the existence of God, but those two are pragmatic arguments that it is rational *to believe* in God. An important difference. (Most people take Pascal's wager as an argument of the first kind, not the second, as it was intended -- he wrote it for Christians, not to convince atheists.)
1) Aquinas: Everything in science has a cause. What caused the big bang? If you say nothing, that is a less scientific statement than saying something. Therefore, rationally, something caused the universe. Since that something must stand outside of time, the only thing which fits our concept of a powerful entity sitting outside of time is God. Though you could posit something else that fits those shoes, like an omnipotent 8th grader in a higher dimension creating our universe as a science fair project, whatever it is will resemble God to some degree. Note: The universe cannot be infinitely old. If the universe started an infinite amount of time ago, we could not get to the present one second at a time.
2) Anselm: Unlike with unicorns and fairies, we know that God has to exist simply from the definition of him as the most perfect being, as existence is one of the required attributes for perfection. Certainly a god that exists is more perfect than a god that doesn't exist.
3) Descartes (heavily adapted): God or evolution made us (or maybe space aliens). Therefore, we were either made with a purpose, or survived by being more fit than other species, with useful traits retained and harmful ones pruned. All humans have a yearning for God -- hence atheists' greater belief in the supernatural than theists, as they attempt to fill that need another way. But this need makes no sense under any creation method (unless we were made by aliens, I guess, who wanted religious slaves to tend their stargates...) unless there was a God: a creature with blinded or deceptive senses is evolutionarily useless, and wouldn't be made by a kind and loving God. Therefore, since in both cases we are given faculties we should be able to trust, the yearning for God should be seen as actual evidence that God does exist.
4) Pascal: We don't know if God exists or not. However, we *do* know what the consequences of belief and nonbelief are. When dealing with uncertainty, the rule is to ignore the non-quantifiable probabilities and focus on the consequences in order to make a rational decision. In this case, it is a very simple decision, as with even a small (but non-zero!) chance of God's existence, the rational decision is to believe.
Note: this means that if you think there's a 0% chance that God exists, you shouldn't believe in him. In any event, trying to believe in something that you think is completely false is stupid, and probably impossible to boot.
However, it does mean that if someone thinks there is a chance that God exists, you shouldn't criticize them for being irrational, either.
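Pascal's decision rule can be sketched as an expected-value calculation. The payoff numbers below are purely illustrative placeholders; the only structural assumptions are an infinite payoff if God exists and small finite payoffs otherwise:

```python
def expected_value(p_god, payoff_if_god, payoff_if_not):
    """Expected payoff of a choice, given a credence that God exists."""
    if p_god == 0:
        # 0 * infinity is mathematically undefined; with zero credence
        # only the this-world payoff matters (the point made above).
        return payoff_if_not
    return p_god * payoff_if_god + (1 - p_god) * payoff_if_not

# Illustrative payoffs: belief costs a little in this life (-1) but
# pays infinitely if God exists; nonbelief gains a little (+1).
p = 0.001  # any nonzero credence, however small
ev_believe = expected_value(p, float('inf'), -1)
ev_doubt = expected_value(p, 0, +1)
print(ev_believe > ev_doubt)  # True for every p > 0
```

The infinite payoff swamps any finite p > 0, which is why the wager only fails for someone whose credence is exactly zero, exactly as noted above.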
5) Lewis: The historical record, unlike with Mormonism and some other traditions, shows that there probably was a guy named Jesus who ran around on earth and did stuff in front of a bunch of people. There are several possibilities: 1) Jesus was merely a man, but a great moral teacher (Christian "modernism"), 2) Jesus thought he was the son of God but was just kind of crazy, 3) Jesus was a sort of crazy evil cult leader, like David Koresh, or 4) he was the son of God. Lewis eliminates the first three possibilities for various reasons, like his disciples almost universally dying for him (sorry, I'm running out of time here, I have to take off for Jiu-Jitsu), and so concludes that Jesus must have been the son of God.
6) James (The Will to Believe, one of the greatest works of philosophy: http://falcon.jmu.edu/~omearawm/ph101willtobelieve.html): We don't know if God is real or not. However, we must choose -- we cannot put it off. As long as the option is a live option (in other words, an option a specific person could actually believe in, as opposed to "the world was created by My Little Ponies") which is rational and not self-contradictory, then let him believe it without shame. The person who tediously insists on 100% proof of anything will be sorely disappointed in life (and probably a bore, to boot), since even science doesn't give us things that are 100% true. A tedious skeptic is just as bad as, if not worse than, a fatuous believer.
Imagine a person puts a gun to your head (i.e., it's a forced decision) and says that you have to decide, right now, whether P=NP or not, and how you pick will govern the rest of your life somehow. It's a momentous, forced, live decision (as either P=NP or P!=NP could be true), but you *must* pick, without firm proof either way. James' point is that in such a situation we can freely choose either, without shame and without being called irrational.
So there you have it -- four rational arguments for the existence of God, and two that the belief in God is rational.
>>God, I hate philosophy. The problem with so many of you people is that you talk yourself out of knowledge, and into ignorance.
Philosophy and logic are interesting in that you can study what *must* be true, regardless of, well, anything. Even in a whacky world with 23 dimensions and no light, a circle is still a circle.
>>The "realness" of reality is irrelevant; regardless of the reality of Reality, the Reality that we perceive is the "only context in which anything is meaningful."
Except our *understanding* of reality is critically important. The basic questions about life (you know, stuff like What is the Meaning of Life) have different answers for different people, and essentially shape the entire direction of a person's life. A person who says that the Meaning is to spread the Word of God will live a very different life than the person who says that it is to enjoy life as much as we can, and then get out while the getting is good. Philosophy can help differentiate these stances, and reveal problems and contradictions in them.
>>The problem with religion is that it gets people to believe things that are demonstratively false - "abstinence-only education prevents pregnancy and STD's" - on the basis of no good evidence.
Atheism gets people to believe things that are patently false, like Hitchens saying that religion is irrational and bad (for some definition of bad in a world without a moral law, naturally), or Dawkins claiming that religion doesn't change how people act, or, hey, your statements implying that religious people must be blind to the real world in order to believe in God. Which is not at all the case. There is no contradiction in saying, "I am a scientist and a man of God."
Gould's NOMA model is primitive. While there is a very large lesson people should learn -- that religion teaches religious matters and science teaches scientific matters -- there is a non-negligible area of overlap between the two. Buddhism claims that the world is without beginning, and has lessons which rely on this key point (be nice to everyone you meet, because since the world is infinitely old, everyone has probably been your mother and your child at some point). Christianity claims that the world was created. If science could conclusively rule one way or the other, that would make a tremendous difference for religion. Christians don't claim primacy of religion over scientific matters -- if the Bible says pi is 3, well, that's because it was rounded off to one digit, not because it was 3. When a Christian scientist learns more about God's creation, that is an act of worship, not an attack on religion.
Islam has a different approach to science, though. The great Islamic scholar Averroes pointed out in the 1100s that there is only one truth -- there cannot be a contradiction between religious truth and scientific truth (such as it is). In later years, though, Sufi mysticism permeated Islamic culture, claiming that every electron only moves because the Will of God commands it to be so, and thus that it is pointless to study things scientifically, since it is impossible (and incredibly hubristic) to predict the Will of God. In my opinion, that mysticism was one of the major reasons Islamic scientific progress stalled after the 1100s or so, though more recently there has been a resurgence of scientific thought in the Islamic world.
On the other hand, religion can and should inform scientific decision-making in certain areas. This probably horrifies you to no end, but science in the absence of all morality and ethics leads to much worse tragedies and horrors than religion ever created.
>>Which is a pretty good indication that those books have little that is useful to a rational society.
Whether or not you believe in God -- and it's pretty (adamantly) clear that you don't -- it should be rather obvious that religion is the only thing that lets mankind transcend the world as we know it and achieve Herculean heights of greatness that would never happen if we were all just concerned with the real world. Where is the Mother Teresa of the atheists?
As far as the law goes, Jesus said that he wasn't there to change the law, but to fulfill it. While he never broke the law himself (the Pharisees who criticized him for breaking the law were actually in error), his point was this: "The law was given to man for man's benefit. Man was not given to the law. God wants a merciful heart, not blind obedience to the law." Does that make sense to you? It does to me.
The question after Jesus was whether non-Jews would have to follow the law, as the covenant was with the Jewish people. Peter and the others held it did (i.e., you'd have to be circumcised to become a Christian). But Paul's viewpoint won, which is that the law doesn't apply to Gentiles, but the greater moral law does.
>>And of course, there's zero evidence to support any of these religious claims anyway.
Is there? Certainly people act differently when highly religious, and that is measurable. If a religion's claim is that it makes you happy (or not care about being happy, like with Buddhism), you can test and measure that.
>>As an engineer, if I doubt something, I can set up an experiment and determine if it works or not.
You can test electrons. It's rather different trying to test God. More importantly, most statements of this sort are dishonest -- even if an experiment of some sort showed that God might have intervened, the doubter would doubt it anyway. You know, like the religious friend of mine whose terminal brain tumor vanished between two visits. There are an infinite number of explanations for this, but a doubtful person will always select the one that doesn't involve religion, making the test fundamentally dishonest.
Seems to me that they solved some of the problem, but not the problem they were looking for. The F5 neurons in question appear to be a sort of task visualization center. That is, when you're operating a tool, from the remote crane on the Space Shuttle to playing Super Mario Brothers, you imagine the task happening. If you're opening a ziploc bag, the opening task will be the same regardless of whether you're using your hand, pliers, or reverse pliers (which close when they open and open when they close, according to the article) -- you imagine the ziploc getting prised apart. Since these neurons fire exactly the same way whenever they do their task, this is probably what the researchers found.
The more important part -- how the brain can sublimate the operation of complex machinery so that it doesn't require conscious thought -- isn't explained here. Shuttle operators are actually trained to treat the crane like an extension of their arm, video game players eventually move past the controls to directly control the character on screen, experienced skiers just imagine themselves turning without consciously having to weight their skis or set an edge, and so on. All of these tasks originally required a lot of conscious control and expenditure of brain power (and in the case of skiing, a lot of bruises), and as long as they stay at that level, they remain awkward and stilted. It is at the point at which you transcend the raw mechanics and can control the task at a higher level (which is what this study found) that the skier becomes graceful, the video game player can race through flaming rotating death traps in Super Mario Brothers, and the shuttle operator can quickly and adroitly manipulate the crane.
The human brain really is a fascinating thing, and capable of really amazing feats, if you think about it.
There are three points I'd like to make in conclusion of this thread (which I've enjoyed, by the way -- the philosophy of science is a fun topic for me). It's obviously a huge topic, so I'd like to summarize what I'm trying to say:
Point One: There are many paths to learning facts, not just the scientific method.
Point Two: Science as practiced (as opposed to an ideal practice of science) is flawed, but it works well enough that we use it.
Point Three: The scientific method, even in its ideal form, could be better, and doesn't deserve the sort of religious fervor associated with it by many people these days.
Point One:
>>I think you're lumping a whole bunch of things together there.
Quite true. The field of epistemology -- how do I know something to be true? -- encompasses a great many ways and means.
Different kinds of questions have different methods for appropriately answering them:
Question 1: Did Sally kiss Harry? Answered by an observation or self-report, followed by a chain of word-of-mouth.
Question 2: Are Scrub Jays blue? Answered by an ornithologist going out and studying a number of Scrub Jays.
Question 3: Are men taller than women? Answered by statistical methods.
Question 4: Is 5 greater than 4? Answered by a rational claim without any empirical observations.
Question 5: Does ice cream cause polio? Answered incorrectly by establishing correlation; answered correctly by establishing causation.
Question 6: Should people wear hats in church? Answered by religious debate from authority.
Question 7: Is murder wrong? Answered by a variety of ethical or religious arguments. Some people claim that "murder is wrong" is not a fact at all, but an opinion; others claim it is a fact.
Question 8: Does adding fertilizer cause tomato plants to grow faster? Answered by every 8th grade science fair, using the traditional scientific method of hypothesis testing.
My point is that the scientific method, while indeed a powerful tool for arriving at truth, and useful in many situations, is not the only means of learning truth. Different questions have different ways that are appropriate for answering them. The scientific method is not the complete answer to epistemology. It has made *huge* advances possible, and was right to excise argument from authority from questions that can be tested, but it is not the answer to epistemology that Logical Positivists make it out to be.
Logical Positivism holds, in other words, that the only reason to believe something is evidence, and that things about which evidence fundamentally cannot be collected are not worth thinking about. It has gone through various incarnations over the last century; the most strident version says that that which cannot be scientifically proven (via verification or falsifiability) is not worth considering, while other versions accept different sets of facts. "My fiancee loves me" is resistant to being put in a test tube, but we could perhaps look at evidence for it, like taking her word for it, or perhaps looking at a cake she baked for me.
The trouble, of course, is that it really is a slippery slope from there to reports of people seeing ghosts (the original point of this Slashdot article). How can we accept my fiancee's word that she loves me, but not accept her friend's word that she saw a ghost when she was young? We can't start from the assumption that ghosts don't exist, since then we're just assuming our conclusion. Hence the strident version of Logical Positivism -- that which cannot be shown scientifically is not of interest.
Point Two: The practice of Science will never match the ideal. What I'm saying is that the belief that every published result is even supposed to be correct is, in itself, a fundamental misunderstanding of the nature of the scientific method on your part. Like I said, bad science exists, and even good science can reach incorrect conclusions. The claim advocates of the scientific method would make is that incorrect results get rooted out over time, not that all published results are intrinsically correct the very first time.
I've rambled enough on this, I think. Any discussion of the flaws of the scientific method must necessarily include how it works out in practice. Even if we assume perfect scientists, the results of the science process (by which I include publication and review) will necessarily produce errors, distorted claims, overblown reportage in the press, misunderstandings by the public, outright fraud, and slipshod reproduction of results. And yes, I think we have to say this is a problem with the scientific method at some level. We're not trying to make ourselves feel good when we do science -- we're trying to find the truth. And a process for discovering truth which produces all these problems is not perfect and can be improved. For example, if we simply had a systematized way of following up studies to test their accuracy, I'd say the scientific method had indeed been improved.
Point Three: The Scientific Method even in its ideal form could be better, and doesn't deserve the sort of religious fervor associated with it by many people these days.
As you can probably guess, I don't hold with Logical Positivism. The scientific method has developed a sort of cult following among scientists, by which I mean that some of the other methods for determining truth (as outlined above) get sort of sneered at, and anything impugning the sanctity of the scientific method is met with what one might call a sort of dogmatic reactionism.
Let's discard the issues of publication bias, treatment of heretics, etc. There are three major problems with the scientific method, even in its ideal form. 1) Generation of the hypothesis. 2) Probabilistic Results. 3) Singular Results.
1) Hypothesis Generation. The first issue is one Kuhn wrote about extensively. Essentially, he said, the hypothesis is often generated *after* the testing phase of the scientific method. While this must be done to a certain degree, he believed it compromises the scientific process.
I have a different issue with it: essentially, hypothesis creation is educated guessing. Even if a hypothesis can be tested and found to be supported by the facts, that doesn't mean it was the *right guess*. I could hypothesize that "a lack of insulin is the cause of diabetes." I could run an experimental/control test, injecting the experimental group with insulin and the control group with a placebo, and find that the insulin group's diabetes symptoms go away. According to science, I have confirmed that hypothesis.
*Alternatively*, I could hypothesize that "diabetes is caused by an inflammation process out of control," inject an experimental group with strong anti-inflammatories and the control group with placebo, and *also* find that I halted the diabetes. Or I could hypothesize that "eating fast food causes diabetes," feed the experimental group nothing but Big Macs for 10 years, and see that they do indeed have higher rates of diabetes than the control group. A hypothesis is an educated guess, and multiple educated guesses, all orthogonal to each other, can be made and "proven" true.
I understand you are very big on the models developed by the scientific method, but (assuming we know nothing about diabetes) it is unclear which of those three models is best, or the most useful. Should we focus on the lack of insulin? The underlying inflammation? Diet? This is a somewhat simplistic example, but the point is this in a nutshell: a hypothesis is a guess, and may be shown to be "true" by the scientific method without actually being true, or without actually being the underlying cause.
The scientific method is the search for truth after all, and it's not clear how a guess is supposed to perfectly match the underlying reality.
2) Probabilistic Results. This is, as I've mentioned before, a very serious problem with the scientific method. Especially in the life sciences, we don't get clear-cut answers to questions; we get probabilistic answers. While 95% confidence sounds quite high, anyone with an understanding of Bayes' Law can immediately see the problem: I could conduct 20 or so studies asking the most ridiculous questions -- "Does listening to DVDs instead of CDs cause heart attacks?" "Does drinking Sprite instead of 7-UP cause cancer?" -- and odds are one of them, a complete fabrication, remember, will be "scientifically" shown to be "true".
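That 20-studies arithmetic is easy to check directly: with a 5% false-positive rate per study, the chance that at least one of 20 studies of a pure fabrication comes back "significant" is 1 - 0.95^20, about 64%. A quick stdlib-only simulation (the trial counts are arbitrary, just for illustration):

```python
import random

random.seed(42)

ALPHA = 0.05    # conventional significance threshold
N_STUDIES = 20  # studies of questions with no real effect

# Analytically: P(at least one false positive) = 1 - (1 - alpha)^n
analytic = 1 - (1 - ALPHA) ** N_STUDIES
print(f"analytic:  {analytic:.3f}")  # ~0.642

# Simulate many batches of 20 null studies; under the null
# hypothesis a p-value is uniformly distributed on [0, 1].
trials = 100_000
hits = sum(
    any(random.random() < ALPHA for _ in range(N_STUDIES))
    for _ in range(trials)
)
print(f"simulated: {hits / trials:.3f}")  # close to the analytic value
```

So even with nothing real to find, roughly two batches out of three hand you a publishable "result".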
It's similar to a landmark issue in DNA testing. Early DNA testing was very accurate -- say, a million-to-one chance of giving a wrong answer. There was a murder in LA, with a prosecutor seeking a conviction based solely on this scientific and accurate DNA evidence with only million-to-one odds of being wrong. The defendant, on the other hand, points out that there's only a one-in-thirteen chance he's the murderer: 13 million people (living in the LA region) x 10^-6 = 13 people matching the DNA, of which he happened to be the one accidentally tagged. If the defendant hadn't been aware of statistics, he could easily have gone to the electric chair.
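The base-rate arithmetic in that case can be laid out explicitly (the population figure is the rough one used above, purely for illustration):

```python
population = 13_000_000  # rough LA-region population
false_match_rate = 1e-6  # one-in-a-million chance of a wrong match

# Expected number of people in the region whose DNA matches by chance
expected_matches = population * false_match_rate
print(f"{expected_matches:.0f} expected matches")  # ~13

# Knowing only "his DNA matched", the chance this particular match
# is the actual murderer is one out of those ~13 people:
p_guilty_given_match = 1 / expected_matches
print(f"{p_guilty_given_match:.3f}")  # ~0.077, i.e. about 1 in 13
```

The test's accuracy never changes; only conditioning on the size of the candidate pool turns "million to one" into "one in thirteen".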
It's an analogy for the scientific process in general. When we combine large numbers of studies with relatively high p-values, we are pretty much guaranteed that our process for arriving at truth will sometimes arrive at falsehood.
3) Singular Results. We create scientific models based on large numbers of observations. From the formation of stars, to the effects of pituitary growth hormone supplements on adolescent males, to string theory (OK, maybe not string theory), we make empirical observations and then create a model which fits the data. But the process simply doesn't work when we've only met a single Martian and want to guess the height distribution of Martians -- after all, the guy visiting Earth might be an especially small one, chosen to fit inside his spaceship.
And data models are all well and good -- until we get a data point which doesn't fit the model. Taleb wrote a book on the topic, called The Black Swan, which argues that history is shaped by events which don't fit our pretty models. In other words, the most interesting things in life don't get predicted well, and don't get handled well by our models of the world.
Taleb is something of an idiot, for all that he's a smart guy, but the book is worth reading if you can find a copy without paying for it. He makes the very important point that life isn't usually Gaussian. We just assume things are Gaussian, because we're taught to, even when the model has no right to be used. He goes a bit too far in calling Gaussians a fraud (on the contrary, they have every right to be used when we're summing large numbers of independent events, by the Central Limit Theorem), but he's right that their use is dogmatic -- and wrong in many, many instances.
He's fascinated by fractal models. I personally think that Cauchy curves better model a lot of things in real life (they have fat tails, meaning they allow exceptional events to occur much more often than a Gaussian does), but the point remains: we can't make models (or not very good ones) from singular observations, and extraordinary events are hard to predict and model.
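The difference in tail weight is easy to quantify with stdlib math alone. For a standard normal, P(X > x) = erfc(x/sqrt(2))/2; for a standard Cauchy, P(X > x) = 1/2 - arctan(x)/pi. (Standard textbook parameterizations, used here just for illustration.)

```python
import math

def normal_tail(x):
    """P(X > x) for a standard normal distribution."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def cauchy_tail(x):
    """P(X > x) for a standard Cauchy distribution."""
    return 0.5 - math.atan(x) / math.pi

# Compare how often "extreme" events occur under each model.
for x in (2, 4, 6):
    print(f"x={x}: normal {normal_tail(x):.2e}  cauchy {cauchy_tail(x):.2e}")
```

A six-sigma event is roughly a one-in-a-billion affair under the normal model but still happens about 5% of the time under the Cauchy: heavy tails make "exceptional" events routine, which is exactly Taleb's complaint about Gaussian assumptions.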
This is not a theoretical argument. Events like Katrina, Black Monday, a hypothetical Big One in California, etc., are all very important topics both to science and to the general public, but our current methods are inadequate for dealing with them.
Ugh, I wrote a long response and lost it. Essentially, as I said before, I agree the scientific method is very useful, but I disagree when you say that "I believe that science (in its ideal form) is not only the best method we've found so far, but the best method there could possibly be."
My criticisms of the scientific method are: 1) Unique events can and do occur all the time (it's a consequence of the probabilistic nature of the universe). A unique event is simply one that we haven't observed before or since. Science can't deal with unique events, which are often the most interesting things to us. What is the standard deviation of the heights of Martians, if we have only met one Martian? Science can't answer that question -- but you could certainly ask the Martian to gain this knowledge.
2) The scientific method's reaction to inexplicable events is to reject the event, and the person reporting it. When Roentgen was working with these crazy new X-ray things, he didn't publish his observations until he had established cause and effect, so as not to risk his professional reputation. A method which can take an observation, find no explanation, and as a result reject the *observation* has a fundamental flaw.
3) Science should eliminate bias and politics. Studies performed by people who have a financial stake in the result must always be suspect, as stats can usually be massaged to show whatever it is you want them to show. A better model would be a sort of escrowing process where Intel or Merck or GM hands money to an escrow dealer (possibly the government, possibly a private entity) who then presents an unbiased question to a scientist in the field: "Are Intel CPUs faster at 3DMARK07 than AMD's?" "Does Vioxx reduce or induce heart attacks?" "What is the 0 to 60 time on the following cars...?" With the proviso that the results will be published regardless of whether they are favorable to the original sponsor of the study. This would go a HUGE way toward fixing the problems of the science of today.
4) You said: "The level of certainty science can provide is sufficient." Hormone Replacement Therapy was "scientifically" shown to reduce breast cancer risk. As a result, some 10,000 women have died of breast cancer from HRT. All scientific studies are uncertain to different degrees; as you stated, studies in physics are probably pretty reliable. But studies in medicine are overturned constantly. The level of certainty in medicine is really quite low.
5) This is also the problem of trust in science. The problem is not malice or fraud (though the case of the South Korean cloning guy shows this can happen) but that whereas you or I can understand that studies inherently have uncertainty in them, people go out there and make life or death decisions based on studies, thinking that something which is scientifically shown to be true means the same thing as something mathematically shown to be true.
6) The combination of (1-out-of-20 (p-value less than 0.05) or 1-out-of-100 (p-value less than 0.01) studies being due to random chance) x (a large number of studies per year) results in a huge number of conclusions being published *due solely to chance*. This should be caught (eventually) by reproduction of results, but...
7) Reproducing results (whether building on previous research as you say, or simply doing a new study on the same topic) is done haphazardly, and the result of a follow-up study that contradicts a previous one does *not* actually overturn it... if you have one study showing that eyeblink therapy stops PTSD, then a follow-up shows it doesn't, the literature will conclude "Studies are conflicted". Only topics which interest someone at NSF get the kind of treatment needed to conclusively establish something as true. Many interesting questions linger in the "studies conflict" category for years without any systematic approach to resolving the conflict. Reproduction of results is the cornerstone of the scientific method, but studies are expensive, so follow-up studies are often neglected.
8) People from all times and places always think that whatever we-know-now is truth, whatever people-used-to-know is helplessly ignorant, and there will probably not be a lot new in whatever people-will-know in the future. We're hopelessly arrogant in our present-ness. Hence the resistance to science's heretics. The Kuhnian Crisis model for science is not a very efficient one for finding truth.
In conclusion: As a result of all this, especially in fields like medicine where it is very difficult to isolate cause and effect (was the heart attack due to his diabetes or because of his Vioxx?), you will have hundreds of wrong results added to the literature every year. A press and public that treat whatever we currently know as perfect truth will jump on whatever study captures their attention; combine that with a very haphazard follow-up process that doesn't efficiently weed out results published due to random chance, and it's clear the scientific method could indeed be much better than it is today.
Science seeks not to collect random facts, but to discover the general underlying principles of reality (which you refer to as "the natural world," as if to imply there is another). If there is another, science can't tell us about it, which is precisely my point. =)
>>Science is a method, it requires no faith. In fact it is a method through which provides it's own falsifiable test of itself.
Slow down there, cowboy. Nothing proves itself -- you always start with a certain set of axioms.
While the Scientific Method is indeed one of the great tools for knowing things that we have, it is certainly not the only way things become known. We can learn certain things through reason alone (such as math), and many things can only be learned through word of mouth (Sally said that Harry said that...). Statistics is one of the fundamental answers to epistemology (how can I know something), but ultimately we only can learn things at certain (not very high) confidence levels. While a p-value of 0.05 or 0.01 might sound pretty impressive (and are the standard rules of thumb for statistical 'proofs'), they represent 1-out-of-20 and 1-out-of-100 studies' results being nothing more than the result of random chance. If you have, say, 10,000 papers published a year, 500 or 100 of them will be wrong.
Given how often scientific answers have indeed been found to be wrong, especially in epidemiological studies (which is a sort of scientific wishful thinking), it hardly proves itself to be true (which can't be done anyway). A better way of putting it is, "It's the best method we have of figuring out empirical truths about nature."
There are very major limits on science and the scientific method. Notably: 1) Singular events. Science can't handle singular events very well, if at all. For example, suppose the people that claimed they had seen cold fusion back in '89 really did see cold fusion. Perhaps a gamma ray hit something at just the right time, or maybe it required high altitude, or something. But when researchers tried to duplicate it, they couldn't, and so the guys were branded as frauds. Maybe they were, maybe they weren't... but they could actually have made an honest empirical observation, and then been branded as frauds as a result of it.
2) Trust. The motto of the Royal Society is "Nullius in verba" ("On the word of no one"). In other words, don't believe what people say, but only trust in reproducible experiments. The trouble with this is, of course, that no one can come close to reproducing all of the empirical experiments needed for a full understanding of modern science, and so it always boils down to trusting what other people say. If a car full of scientists drove through a mountain pass and saw a white substance outside, they could send one of their members out to report whether it was sand or snow... without accomplishing anything. The friend could be playing a practical joke on them, after all. All of them would need to go outside and make an empirical observation of the substance themselves in order to be satisfied. This is a very fundamental flaw in the system, which only works because malicious papers (as far as I know) are not inserted into the literature like viruses.
3) The old induction problem / uncertainty. Science is based on inductive reasoning, and inductive reasoning from empirical events can't actually prove anything. We can make certain claims, but not proofs in the sense that logical or mathematical statements can be proven true. "The sun will rise tomorrow" is a scientific claim, but it cannot be proven to be true. The fundamental problem is that what is true in the past might not be true in the future. Since certain things like universal constants are likely to stay the same (though some have theorized they have not in the past!), it can be answered by simply stipulating "If things stay like they are now..." but this is still not the same level of proof as people deal with in logic and math. All scientific knowledge, ultimately, is uncertain.
4) Heretics. The heretics of science have always received rough treatment. Most of the time it is deserved (there are a lot of nutcases out there), but sometimes people have followed the scientific method and still had their papers rejected because the reviewers assumed their own preconceived conclusions. The guys who discovered that stomach ulcers were caused by Helicobacter pylori had their landmark paper rejected because it went against what was commonly known to be true -- when overturning exactly that sort of "common knowledge" is supposed to be the entire point of scientific inquiry.
However, for all its flaws the scientific method is, as I said, the best method we've found so far of determining empirical truths about nature. It is certainly not the be-all and end-all, as by its very nature it will reject all unique events. It is fundamentally useless for knowledge about anything which cannot be reproduced, and what it does know it only knows with varying degrees of certainty. It cannot say anything about anything outside of its sphere of influence (empirical observations of the natural world), and so stands mute on being anything but a fact-provider to fields like philosophy, religion, logic, math, ethics, et cetera.
The scientific method certainly doesn't deserve the religion-like attitude of worship Popper and many people on here seem to give it.
Sigh, burning two mod points on this (both +funny, whatever), but it's an issue that comes up whenever I talk with Europeans about mass transit, and how they can't understand why we don't have a rail system.
The fundamental problem is that Europeans cannot fully grasp the difference in scale involved in America, especially in the American West. (It's big. It is really really big. You just won't believe how vastly hugely mind-bogglingly big it is. You may think it is a long way down the road to the chemist, but that's just peanuts to Texas.) I travel rather often from San Diego, through Los Angeles, and to the Bay Area / San Francisco (these are the three major cities in California, incidentally). The trip takes 8-10 hours to complete, depending on traffic passing through Los Angeles. There is a single rail line that runs down the coast. Once per day it travels between SF and SD, and you have to get up at 5AM to catch it. It takes 11 hours.
San Francisco and San Diego are 500 miles apart.
By comparison, Amsterdam and Paris are 500 *km* apart. The distance from San Diego to San Francisco would span the length of Great Britain (London to Inverness was 8 hours by train, and is about 550 miles, as is Paris to Nice). When I was in Europe, I was constantly surprised at how little time it took to travel from one city to the next while I was on a train. When you live in the American West, you get used to 6 hour drives at 75-80 miles per hour where you literally see no living human beings outside of the gas stations and rest stops. And maybe some farms.
Europe is very heavily built up. It's dense. Rail networks make a lot more sense in dense regions than in sparse ones. That same rail line that runs to Oxford (60 miles from London) can be used to connect to Warwick, or Stratford-upon-Avon (if my memory serves). The rail network in California is essentially a 3-node graph with a line between SF, LA, and SD. With two mountain ranges in between, to boot. The train company loses money on the line pretty consistently. There's literally nothing in between to make the run profitable. San Luis Obispo and Santa Cruz are nice places, don't get me wrong, but they simply aren't volume destinations. And because it's not profitable, there won't be any more private infrastructure development. The State of California has been toying with the notion of building a high speed line from SF to SD for a while now, but, hell, I ran the numbers myself. Japan wouldn't have built a high speed rail line if their cities were all 500 miles apart. It's too costly. The main island of Japan is about 600 miles long, total.
It's not a better-than or worse-than comparison, I'm simply stating the facts. You have to have a certain critical mass of density to make rail networks worth your while. An analogy that works well with Europeans I've met: Imagine France. Now imagine there is nothing in the country but Paris, Lyon, and Marseille. None of the little villages, towns, and cities. Nothing but desert. Now consider the practicality of a rail network in the country. This is Texas.
This isn't an America-is-bigger-is-better argument. In fact, I can pretty firmly say that I would greatly prefer being able to travel to another city in an hour or two. I lose an entire day whenever I make the trip. A drive to Phoenix, first major city east of San Diego (Yuma doesn't count) is 6 hours (@75 MPH) through almost nothing but desert. To the average San Diegan or San Franciscan, the other city is akin to a vacation destination. Road Trips are boring as hell unless you find a way to entertain yourself -- I personally go through audiobooks like water.
Rail Networks simply don't work when the graphs are so sparse. Out in the middle of the desert, a car moves faster than a train, and costs less, so why bother going to the hassle of parking your car in long term parking (unless you have a garage of your own), and paying more money to travel slower? I'd do it just for the scenic-ness of it, except you have to board at 5AM to get into the other town by 6PM (and then have a friend or family pick you up from downtown, which is another hassle). By and large, airplanes simply seem to be the mass transit of the future. $35 one-way between SF and SD (cheaper than gas for driving), and the trip takes 45 minutes instead of 11 hours. The same train ticket is around $40-$60.
I'd agree that America could use better mass transit systems, especially between transit hubs like airports and downtowns, but most American cities are so geographically dispersed now that they almost require a car to get around in anyway (for example, if you were to take a bus to downtown San Diego, I doubt you could even find a hotel room there -- the hotel district is 10 minutes away by car). You can't even walk from Point A to Point B in San Diego. It's just too far, and the sidewalk will just vanish at some point. The geographical area that the greater LA area encompasses is an oval about 130 miles across. By comparison, this would be almost the entire southern half of the UK (not counting Wales): from Leicester east to the coast, and south to the coast, that whole area would be covered by the city of LA. All American cities have picked up such sprawl to a certain degree, and, as you said, tend to build their airports outside of the city for sound and traffic reasons.
When I travel in Europe, I take a train. In America, I rent a car.
You can't say that America is bad for not having a train network. Trains don't make economic sense in a sparse network. France and Texas are roughly the same size and shape, but driving across Texas on I-10 means 10 hours of nothing but desert and homicidal cops (a long story for another time).
Or to my Chinese roommate, who lacks aldehyde dehydrogenase enzymes in his liver and so has one drink and turns bright red. Embarrassing for a guy who was in a frat that prized heavy drinking skills very highly. The enzyme deficiency is hugely prevalent in Asia -- something like up to 70% in some countries, a couple percent in Germany, 0% in Ireland. Go figure.
Or the Jewish student organization that sponsored a free screening day for Tay-Sachs.
The concept that race is solely a cultural construct is mere wishful thinking: "I wish there were no genetic differences in people, because then there'd be no racism, and we'd all live in a world filled with flowers and ponies." No, as we discover more about genetic diversity we learn which genes have greater tendencies in certain ethnic groups. This is NOT an excuse for racism -- the concept that one person can be somehow metaphysically superior than another due to skin pigmentation is absurd -- but denying uncontroversial science for political reasons is troubling as well.
This is in reference to the following post: http://slashdot.org/comments.pl?sid=178269&threshold=5&commentsort=0&mode=nested&cid=14782984
A critical part of the full understanding of a concept is the ability to express it simply.
As soon as you use jargon, you lose your audience unless they also understand the specialized vocabulary of your field. With such a simple question they are blatantly not physicists, so you're just (perhaps intentionally) talking over their heads, making yourself sound smart. /shrug
Not a big deal, most professors do it, but it's a bad habit, and one I hope academia will someday break. My fiancee is at UCSF, so I constantly have to tell her to use plain English. Skin works just as well as epidermis, honey. She works with the public, so she has a more critical need to break that habit than most academics.
Just recalling my high school physics: if I had written it (and you were responding to a troll, IMO), I'd just have said that all electrical waves are electromagnetic in nature. Maybe talked about the history of the ether, and how the electrical and magnetic components provide the medium for each other.
Perhaps the worst field is Philosophy, though. Philosophers are especially bad (or especially good, depending on your perspective) at taking something that can be stated clearly and simply and loading it up with propositions and jargon until only they know what they're arguing about. And sometimes not even then. Kant took an idea that could be stated in a sentence, expanded it out to hundreds of pages, and ended with something not even his contemporaries could understand. Ditto Wittgenstein (whose thesis Bertrand Russell couldn't understand), ditto Hegel, etc.
Not a criticism of you, but a criticism of your defense of your post.
--------------------- 2nd post on the subject in response to http://slashdot.org/comments.pl?sid=178269&threshold=-1&commentsort=0&mode=nested&pid=14783558 ---------------------
>>In general most modern applications are not expressed clearly or simply to >>normal every day people and programmers or advanced/expert users (i.e. much of >>the Slashdot crowd) simply make fun of them. Why should we expect any better >>when we don't understand something? Pot, meet kettle.
I majored in computer science with a minor in writing. I hold nothing but contempt for people who take a simple idea and make it complicated. And for people who 'understand' a complex concept but cannot express it any more simply, I likewise hold their intellect in low regard.
I flip it around and see it as a challenge. If a person asks me, say, what the advantages are of a hyperthreading chip, and I found myself unable to reduce the issue to a correct, meaningful, and jargon-free response, I'd see that as a sign that I didn't understand it well enough, and would hit the books again. Defending a baroque explanation of divs, grads, and undefined variables to someone who obviously hasn't taken basic physics with this:
"I can only report the truth. I do not have the ability to explain it better than Einstein, Bergmann and Wheeler themselves (Covariant 4-vector formalism is their work mainly)... This person is clearly ignorant about many things, and too arrogant to admit it and try to learn something interesting and relevant to the discussion. (American? Nah! That's just a stereotype)."
is indefensible. Again, giving an explanation that only someone who already knows the answer can understand is no explanation at all. Perhaps you would call it a proof, or some such, but certainly not an explanation.
>>My Jargon vocabulary was highly limited until I entered the business world. >> At University we used the specific, word that concisely and precisely >>describes what we are talking about. The Epidermis -> skin point you use is >>actually a good example as skin generally covers multiple layers as aspects, >>while epidermis is precise.
Do you really think that you should tell people to apply a cream to their 'epidermis' instead of their 'skin'? Is Mrs. Jones at risk of pulling back her epidermis and dermis so she can apply her antifungal cream to her hypodermis? Hey, you said "skin", after all.
No. Use terminology only when the difference is important, and if you must, define your terms before unleashing them. Call it a bruise instead of a hematoma, so they don't have a heart attack thinking they have cancer.
Even when my fiancee is explaining things like pharmacokinetics to me, as long as I stop her from using terms like AUC, MEC, MTC, etc., I can understand her lectures.
The key defining factor, of course, is your audience. If I am conducting technology training with K-12 teachers, which I do on a fairly regular basis, I have two options: 1) Use big fancy words to make myself look and feel smart, or 2) Express the exact same concept in words they understand and can use. There's a reason why I get universally good reviews on my workshops: I don't have an ego to get in the way of a clear explanation.
It doesn't just apply to the general public either. At my university, people applying for professorial positions and people giving defenses of their theses would generally talk for an hour or so on their respective areas of expertise, and then would be asked questions. Almost always, the first question was: explain your thesis in three sentences or less. If the person stumbled, or even worse, couldn't answer, they were almost sure to not get the position (graduating students were obviously given a little more leniency, but at the same time the dept chair would tell them they'd better damn well have an answer for that question ready when asked).
This is a fairly common trend from what I've seen. Some say, for example, the genius of Einstein was not in his theory of relativity, but in his E=mc^2.
It's a movement I wholeheartedly support.
>>Is that the simplest explanation? Could I take that to an average 5 year old >>and explain electrical and magnetic energy fields are related and have them >>understand? I'd guess not. Which of course is the problem, simplicity is in >>the eye of the beholder.
Simplicity is entirely dependent on the audience. The very worst workshops to give are to audiences of mixed backgrounds. I once gave a talk on "Tips and Tricks in UNIX Shell Scripting". It was advertised around the department and held in a department lecture area. So when I got into the talk and started explaining how to set up neat little autocompletion tricks in TCSH, I noticed only blank stares coming back. I trailed off and asked the audience about their background: 25% of the audience were UNIX experts, 75% had never used UNIX before. How do you lecture to that? How can you possibly satisfy all of your customers, so to speak? You can't. So I simply gave an hour-long introduction to UNIX, and by the time the hour was over, all the UNIX hackers had long since walked out.
>>I'm sure his answer is a pretty simplistic one for someone in his position. I >> bet post-grad physicists call people losers when they don't understand basic >> physics.
I agree, if he was addressing an audience of physicists, he could give divs, grads, curls and super duper 4-dimensional tensor flux capacitors all day long while he realigns the main deflector dish to catch hypertachyons in his n-dimensional subspace receiver.
His audience was one person, and one who obviously hadn't even taken an introductory physics class. Hence his mistake.
But to come back to my original point, my main criticism of him is not that he gave an incomprehensible, jargon-loaded response, but that he defended it by saying that the other person was stupid and too lazy to learn something new. As I said, the fault in this case lies entirely in the bad and lazy explanation of the physicist -- a trend altogether too common in academia, and one which reveals the academic's own lack of understanding of the subject.
I am deeply troubled by the ongoing and perplexing trend I have seen recently of people denying the holocaust online. It is not the Nazi holocaust I am referring to, but rather those of the even greater mass murderers, Stalin and Mao.
I've long become accustomed to the media focusing mainly on the Nazi genocide in WWII (Nazis are such easy targets), while mainly ignoring the order of magnitude greater murder of people in communist countries. But denying that the communist holocausts occurred... that just ain't right.
This post was in response to: http://slashdot.org/comments.pl?sid=177016&threshold=5&commentsort=0&mode=nested&cid=14693711 (But this is certainly not the only such post I've seen recently, claiming the same thing)
No, communism is the single greatest murderer of innocents the world has ever seen.
You said... "Murdered few million under misguided regime of Stalin."
WTF? Please tell me this was said in some sort of morbid tongue-in-cheek manner. Or do you actually believe this?? Are you like a holocaust denier, but for communists? Try 20,000,000 innocents killed by Stalin, many at his own order. He beat Hitler hands down. The Communist Revolution in China has killed 65,000,000 since the revolution started. It depends on whether you count starving your own people to death because you believe in a retarded philosophy like communism as better than, or worse than, sending them to death camps in Siberia to be executed.
Educate yourself... please. Learn to think for yourself. For the sake of everyone on this planet. People willing to lightly dismiss the largest mass murderers the world has ever seen are not just deluded, but scarily so.
I think the discoveries in the latter part of the 20th Century were quite exciting from a philosophical point of view. Watson or Crick (I can't remember which) became a strong determinist after his ongoing research in DNA led him to a mechanistic viewpoint -- essentially saying that we're slaves to our DNA, that we have no free will.
Since then, various advances showing the essential unpredictability of the world at the atomic and subatomic levels makes me hopeful that it will be actually impossible for, say, a perfectly oracular computer to ever be built, regardless of how much processing power it has. I.e., it will actually be impossible for a computer to state that two years from now you'll be working as a tuna packer in Alaska, on the lam from the law for murdering the Future Crimes policeman...
Recently, I have been posting on the Slashdot article relating to the Kansas Board of Education voting to approve teaching Intelligent Design (ID) in schools. The Slashdot editor seemed to claim that intelligent, scientifically minded people have no business coming near Intelligent Design with a 10-foot pole.
I feel that ID is an interesting theory, whether or not it is actually true. I think that ID can be stated in such a way that it is a scientific theory, using the definition that in order for something to be scientific, it must be falsifiable. I make no claims about whether or not it is true.
First, I will address some of the commonly held notions about ID, many of which are wrong, then I will state ID in such a way that it is a scientific theory.
ID is a term covering a wide collection of individual beliefs, the nexus of which is that a designer influenced evolution. This is the definition I will use, and I will ignore the plethora of muddle-headed arguments for ID, most of which are arguments from example ("Look at the pretty rose! Tell me it isn't the product of an intelligent creature!") or appeals to ignorance ("We don't know how evolution happened, so it couldn't have happened!"), neither of which is a valid form of argumentation. One can just as easily claim that clouds that look like ships are designed, or that one day, as the literature grows, we will have a complete understanding of evolutionary mechanisms.
I rely heavily on statistics, and recommend anyone who hasn't gotten past the "mean, median, mode" bit to go out there and read up on t-tests, analysis of variance, and similar topics. I consider statistics to be the solution to the age-old question of epistemology: how do we know when we know anything? Answer -- when our confidence level in a statement exceeds 95% or 99%.
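For anyone who hasn't seen one, here's a minimal sketch of the t-test idea: compute Welch's t statistic for two samples and compare it against the rough 95% critical value. The reaction-time data and the ~2.1 threshold are hypothetical illustrations, not from any real study:

```python
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / ((var_a / len(a) + var_b / len(b)) ** 0.5)

# Hypothetical reaction-time samples (ms) for two groups of 10 subjects
control = [512, 498, 530, 505, 521, 509, 515, 502, 528, 517]
treated = [471, 462, 488, 455, 479, 468, 481, 459, 474, 466]

t = welch_t(control, treated)
# |t| far above ~2.1 (the 95% two-sided critical value near 18 degrees
# of freedom), so the difference is significant at p < 0.05
print(f"t = {t:.2f}")
```

The point of the exercise is the comparison step: the statistic alone means nothing until it's set against a critical value at a chosen confidence level.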
Commonly held false idea #1: ID is Creationism. This myth is repeated over and over. *ID is NOT Creationism.* Creationism (to use the common definition) is the belief in the literal word of the Bible found in the first creation story in Genesis (and, incidentally, not the second creation story). ID is contradictory with this belief as ID says that evolution happened, but was influenced by a designer.
Commonly held false idea #2: The designer is God. The Theory of ID is religion-neutral. *ID at its core does not care one whit who the designer is or why he influenced evolution.* These are all realms for speculation, but they have nothing to do with ID. It is a common counter-argument to ask "Why did God do [X]?" and then look smug. However, this argument carries no weight at all since ID is not based on the Who or Why of the development of life, only the How (which, incidentally, is the same question Evolution answers). Additionally, it is common for atheists to attack ID because they assume it is a Christian idea. On the contrary, not only is ID religion-neutral, it is surprisingly common for atheists to believe in ID (though they often do not call it that). Theories regarding aliens influencing evolution, or even DNA blown in from outer space (Francis Crick's idea of panspermia), are also possible 'designers'. But as I said before, Who the designer is or Why he did it is completely immaterial. ID can be proven or disproven solely through math, without resorting to metaphysics.
Commonly held false idea #3: ID conflicts with evolution. This is an interesting debate to watch because ID states that evolution is true -- it simply was influenced by a designer. Stating things like proofs of genetic drift or even examples of speciation doesn't disprove ID because they are tenets of ID, as odd as that is to consider. When one does something like point to the famous moths of industrial England and state that it disproves ID, this is just another case of the "ID is Creationism" fallacy that so many people subscribe to. ID is Evolution, not Creationism.
Commonly held false idea #4: ID is not a "scientific" theory because it is not falsifiable. (Many people have the wrong idea on what falsifiability means. I recommend you read Karl Popper, or the wiki article on the subject.) Essentially, in order for a theory to be scientific, it must be possible via one means or other to show it is false. It is true that many of the arguments and theories made by IDers amount to nothing more than hand-waving, but it IS possible to state ID in such a way that it is falsifiable, and hence a scientific theory. I shall do so now.
The Scientific Theory of Intelligent Design by Bill Kerney
Summary of Evolution: New species emerge from the process of random mutations and selective pressures.
Summary of Intelligent Design: Bias existed in the random mutation component of Evolution.
All the hot air, court trials, ridiculous statements, and emotional debates aside, this is the sole point of difference between the theories of Evolution and Intelligent Design.
When restated this way, it becomes obvious that Intelligent Design can be proven true or false simply by showing whether or not there was statistical bias in evolution. Intelligent Design claims that a designer "leaned on" evolution to make the world turn out the way it did. And any time something influences a population on a large scale, statistics can easily, easily distill the truth out of the white noise. If I had a team of people pay everyone in New York making a call at a public phone booth to either talk longer, talk at a different time, or even just call a different person than they were intending to, stats could pluck the truth of it out of the phone company's call records.
All arguments along the lines of specific examples (eyes, knees, etc.) are useless simply because you can't really argue from example. Just as I can't claim there was a widespread effort to disrupt the phone network in New York just because someone paid me $20 to dial the operator, proof or disproof of a single example is, interestingly enough, almost entirely irrelevant to the debate. Hence you won't find any discussion of irreducible complexity or arguments from ignorance here -- ID simply claims that a designer introduced a bias in evolution, and that is the *only thing* that needs to be considered to show ID true or false.
(Commonly held false idea #5: You can never absolutely show one side or the other to be right. But statistics are a wonderful thing. With enough data, you gain enough confidence in your conclusions to state whether something is true or not. Generally speaking, a statement is accepted as "true" in science when it is 95%+ or 99%+ likely to be correct (corresponding to the commonly used p-value thresholds of 0.05 and 0.01). It's always amusing when supporters of 'science' claim they would need ID to be 100% proven correct when science itself doesn't work that way.)
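To make the "95% confident" idea concrete, here's a minimal sketch of how a hypothesis test works in practice: flip a coin that is secretly biased toward heads, then compute the exact probability that a *fair* coin would have produced a result at least that extreme. The 60% bias and 1000 flips are invented numbers for illustration.

```python
# Sketch: detecting a biased coin at the p < 0.05 threshold.
# The coin's true heads rate (0.60) is hidden from the test;
# the test only sees the flip count and the heads count.
import math
import random

def binomial_p_value(heads, flips, p_fair=0.5):
    """One-sided exact p-value: probability that a fair coin
    produces at least `heads` heads in `flips` flips."""
    return sum(
        math.comb(flips, k) * p_fair**k * (1 - p_fair)**(flips - k)
        for k in range(heads, flips + 1)
    )

random.seed(42)
flips = 1000
heads = sum(random.random() < 0.60 for _ in range(flips))  # secretly biased coin

p = binomial_p_value(heads, flips)
print(f"{heads} heads in {flips} flips, p = {p:.2e}")
print("biased at 95% confidence" if p < 0.05 else "cannot tell")
```

Note that the test never "100% proves" the coin is rigged -- it only says a fair coin would almost never behave this way, which is exactly the sense in which scientific claims are "true."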
The Casino Analogy: I think the best statistical analogy for Intelligent Design is that of a person in a shady casino playing a game. He is winning (or losing) a lot more than he thinks he really should be. How does that player know whether the game is rigged? In a nutshell, he usually can't know. He doesn't have enough information; his sample size is too small. At one casino I was at, after a hot winning streak, they switched dealers and I lost 17 straight hands of $5 blackjack before I finally got aggravated and left. Was the casino cheating? While it's fun to contemplate, again, you just can't really know.
However, this is where most people's thought processes stop when it comes to statistical analysis.
The fact is, it's possible to detect bias in casino games -- it's actually an easy problem that readily succumbs to math. You simply have to make enough observations to gain confidence that your statement "the game is rigged" is correct. The smaller the bias, the more observations you have to make in order to reach a 95% or 99% confidence level. Contrariwise, the greater the bias (say, a slot machine that pays a one-in-a-million jackpot with every pull), the fewer observations you need. But for every bias, there is a number of observations that will reveal it, at any confidence level you choose. Bias simply cannot hide from statistical analysis.
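The trade-off between bias size and observation count can be sketched with a standard sample-size formula (normal approximation to the binomial, one-sided test at 95% confidence with 80% power). The specific win rates below are invented for illustration; 0.5 plays the role of the "fair game" rate.

```python
# Sketch: how many observations are needed to expose a bias of a
# given size? z_alpha = 1.645 and z_beta = 0.84 are the standard
# normal quantiles for alpha = 0.05 and 80% power.
import math

def trials_needed(p_true, p_null=0.5, z_alpha=1.645, z_beta=0.84):
    """Approximate number of observations needed to distinguish a
    true rate p_true from the null rate p_null."""
    num = (z_alpha * math.sqrt(p_null * (1 - p_null))
           + z_beta * math.sqrt(p_true * (1 - p_true)))
    return math.ceil((num / (p_true - p_null)) ** 2)

# A blatant bias falls out of a few dozen observations;
# a subtle one takes many thousands -- but never infinitely many.
for bias in (0.7, 0.55, 0.51, 0.501):
    print(f"true rate {bias}: ~{trials_needed(bias):>9} observations")
```

The point of the sketch is the shape of the curve, not the exact numbers: halving the bias roughly quadruples the observations required, but the required count is always finite.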
Coming back to the Intelligent Design issue. ID claims that the random mutations in evolution had someone "rigging the dice," making life turn out in a fashion unaccountably complicated for something purely the result of random processes and selection. (Again, ID claims this, not me. I'm tired of being flamed for stating other peoples' beliefs.) The surprising thing is, this is now a scientific theory. It may be wrong -- but it is a falsifiable claim. I actually consider it likely that it will be conclusively proven or disproven one day (and again, to all of you pedants, 'conclusively' means within a given level of confidence).
Join me in a thought experiment to show how easy it is to prove or disprove.
Thought experiment: Scientists don't have anywhere close to an exact number, but there are somewhere between 1 million and 100 million species on the planet, the majority of which are insects. Approximately 300,000 of them are the so-called "interesting species" (pandas, marmots, supermodels) that we care about and wish to observe. Let's assume the British model of security has been adopted worldwide and we have cameras blanketing every inch of the world and crack teams of scientists standing by, who repeatedly sample the DNA of every creature and embryo on the planet.
All that needs to be done to prove or disprove ID is: 1) Quantify random mutation types and rates (before selection pressures). 2) Observe the random mutations in each population (before selection pressures). 3) Expect new "interesting species" to emerge at a rate of between one per three years and 30 per year (the expected value changes depending on how long you consider the average time for a new species to evolve, from 1 million years down to 10,000 years). 4) See if a bias exists/existed.
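Step 4 is just the casino test again at a larger scale. Here's a minimal sketch of the comparison, assuming step 1 gave us a baseline mutation rate and step 2 gave us an observed mutation count; every number below (the site count, the rate, the observed count) is invented purely for illustration.

```python
# Hypothetical sketch of step 4: do the observed mutations deviate
# from the rate expected under unbiased (null) mutation?
# Uses the normal approximation to the binomial.
import math

def bias_z_score(observed, n_sites, baseline_rate):
    """z-score of the observed mutation count against the count
    expected under unbiased mutation."""
    expected = n_sites * baseline_rate
    sd = math.sqrt(n_sites * baseline_rate * (1 - baseline_rate))
    return (observed - expected) / sd

n_sites = 3_000_000_000   # genome sites surveyed (invented)
baseline = 1e-8           # mutations per site per generation, from step 1 (invented)
observed = 45             # mutations actually seen, from step 2 (invented)

z = bias_z_score(observed, n_sites, baseline)
print(f"expected {n_sites * baseline:.0f}, observed {observed}, z = {z:.2f}")
print("bias detected at 95% confidence" if abs(z) > 1.96 else "consistent with no bias")
```

A real study would need the per-generation bookkeeping the thought experiment's cameras provide, but the math at the end is no deeper than this.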
That's it. ID can be shown to be true or false. The same experiment could be done on evolution in the past with ancient DNA if (a la Jurassic Park) we could recover a DNA record with the requisite detail. On the other hand, if no new species emerge, even with global observation, then the confidence level that Evolution is false will rise over time until the theory is discarded.
Of course, any given designer might have given up designing and taken a day off, but if species emerge through statistically normal events, then most reasonable people would assume that the rest of evolution could have happened through similarly unshocking means.
Simply stated, ID claims that the dice were rigged in evolution. Math provides us with a powerful tool to discover bias. Hence, ID is a falsifiable claim. Hence, ID is a scientific claim -- though the strong possibility exists it might be completely wrong.