
Why the Cloud Cannot Obscure the Scientific Method

CmdrTaco posted more than 6 years ago | from the because-of-science-dude dept.

Science 137

aproposofwhat noted Ars Technica's rebuttal to yesterday's story about "The End of Theory: The Data Deluge Makes the Scientific Method Obsolete." The response is titled "Why the cloud cannot obscure the Scientific Method," and is a good follow-up to the discussion.


FYI (0, Offtopic)

imstanny (722685) | more than 6 years ago | (#23947483)

Link error in story.

Re:FYI (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#23947529)

Why, every time there is a mistake in a summary, does it have to be Taco?

Re:FYI (1)

NoTheory (580275) | more than 6 years ago | (#23947553)

The original story made me cringe. I would have imagined that it would have made Norvig do the same. Hope the rebuttal hits the right points.

Here's the link: []

Re:FYI (1)

Wolfbone (668810) | more than 6 years ago | (#23947895)

The original story made me cringe
Hehe! Yes indeed - it reminded me of this: []

Re:FYI (1)

Thiez (1281866) | more than 6 years ago | (#23948967)

That video is HORRIBLE. That thing will give me nightmares for days.

Please someone put that woman out of her misery.

Re:FYI (1)

mckorr (1274964) | more than 6 years ago | (#23949137)

A shame we can't drop that idiot on a neutron star, then let her tell us that mass isn't important...

Re:FYI (1, Insightful)

Anonymous Coward | more than 6 years ago | (#23949875)

The worst part is her claim that we can "leave out mass" in the E=mc^2 formula because the total amount of mass in the universe is so tiny (it isn't?).
So let's assume mass = 0 for all things (even though that makes no sense at all, she thinks it does). That means E = 0*c^2 = 0 -> there is no energy. Since she claims homeopathy works by changing your energy, AND that people have no energy (because they have no mass (?!) and Einstein's formula E=mc^2 applies), homeopathy cannot work.

Re:FYI (2, Funny)

Sethus (609631) | more than 6 years ago | (#23947569)

The author's head is completely up in the clouds...

Re:FYI (1)

CaptDeuce (84529) | more than 6 years ago | (#23949337)

The author's head is completely up in the clouds...

Rather than a meteorological reference as to the location of his head, may I make a suggestion that is biological -- or more specifically, anatomical.

datasource != process (5, Insightful)

Bandman (86149) | more than 6 years ago | (#23947485)

Because a datasource isn't a process?

Re:datasource != process (0)

Anonymous Coward | more than 6 years ago | (#23947929)

Additionally ice cream != Paris Hilton's cleavage.

(Though, I'd like to lick that instead..)

Re:datasource != process (1)

jeiler (1106393) | more than 6 years ago | (#23948167)

Paris Hilton has cleavage? A couple of bandaids and some Clearasil would fix that!

Where's the link? (0)

Anonymous Coward | more than 6 years ago | (#23947491)

Where's the link to the Ars Technica story?

Re:Where's the link? (1)

Breakfast Cereal (27298) | more than 6 years ago | (#23947551)

The Cloud was supposed to take care of details like that, but

Re:Where's the link? (1)

aproposofwhat (1019098) | more than 6 years ago | (#23947573)

My bad - should have checked the links better :o(

Re:Where's the link? (1)

sveard (1076275) | more than 6 years ago | (#23948453)

Nah that's why we have editors

oh wait... I guess not

Link (0)

Anonymous Coward | more than 6 years ago | (#23947535)

missing link (4, Insightful)

lhorn (528432) | more than 6 years ago | (#23947555)

[]

I like the fact that the web and search/aggregate engines may combine vast amounts of data in ways we now cannot imagine - it expands the field for new scientific research enormously. Replace science? No.

Re:missing link (3, Funny)

kalirion (728907) | more than 6 years ago | (#23948471)

What, you mean I can't just google for "unified field theory" and get the right answer? Why does the universe have to be so hard?????

silly labels (1)

illlfates (917771) | more than 6 years ago | (#23949253)

I believe science is a direct descendant of the capacity for deduction as granted by our model-making brains. In our internal symbol sense, we often use the subjunctive tense, if when why hypothetically depicting, don't call it science, fine, it's still predicting what will happen from what has happened. -- that's what i'm rappin'

Why the Cloud CAN Obscure the Scientific Method (1)

sm62704 (957197) | more than 6 years ago | (#23947593)

Crack cocaine makes you stupid.

Oh, you were talking about the "information cloud" the crackheads at Wired always talk about. Never mind.

Re:Why the Cloud CAN Obscure the Scientific Method (1)

ceoyoyo (59147) | more than 6 years ago | (#23949631)

I think I figured out why they use "the cloud." Obviously all the good patents for "... on the Internet" have been taken, so they're just making possible a new round of frivolous patents with the phrase "... in the cloud."

Bullshit bingo (5, Funny)

Anonymous Coward | more than 6 years ago | (#23947621)

Latest addition to bullshit bingo cards:


It's a good rebuttal (5, Insightful)

Hoplite3 (671379) | more than 6 years ago | (#23947653)

I'd say that the models are the science. They're how you explain your data. They provide evidence that the experiments make sense, and they guide you by making predictions you can test.

Moreover, SIMPLIFIED MODELS are good science. Understanding which details can be omitted without impacting the predictive ability of your model shows you know which effects are important and which aren't.

I agree, but... (3, Insightful)

wfolta (603698) | more than 6 years ago | (#23948129)

What you say is true, Hoplite3. The big issue I see is how people define "model". My guess is that quite a few unfortunately define it as "I got 3 asterisks in the significance test", whether the "model" (say, linear regression) makes sense or not.

I forget where I read it, but I've been studying linear regression, and there was a fascinating example where, if they had used linear regression techniques on the early "drop the cannonball and time its fall" data, they would have come up with a nice, highly significant linear regression for gravity.
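That cannonball point is easy to reproduce. A sketch (my own illustration, not the example wfolta read): fit a straight line to noise-free free-fall data, d = ½gt², and the fit comes out looking highly "significant" even though it is the wrong model.

```python
import numpy as np

# Noise-free free-fall data: distance fallen grows quadratically with time
g = 9.81
t = np.linspace(0.1, 2.0, 20)   # drop times in seconds
d = 0.5 * g * t**2              # d = (1/2) g t^2

# Fit the wrong model anyway: a straight line d ~ a*t + b
a, b = np.polyfit(t, d, 1)
pred = a * t + b

# Coefficient of determination of the linear fit
ss_res = np.sum((d - pred) ** 2)
ss_tot = np.sum((d - d.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"R^2 of a straight line through quadratic data: {r2:.3f}")
```

The R² lands well above 0.9, exactly the kind of "three asterisks" result that looks like a linear law for gravity.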

Then there is the whole issue of explanation versus prediction. Something can be predictive while providing no explanation, and perhaps that's where the petabyte idea is going: who cares about explanation if prediction is accurate enough? (Not my philosophy, BTW.)

Re:I agree, but... (4, Interesting)

Hoplite3 (671379) | more than 6 years ago | (#23948415)

Yes, I think that prediction without explanation is fascinating, but I don't know if it's what I like about science :) Have you ever heard Leonard Smith speak? I saw him at SAMSI, but his MSRI talk is online and is roughly the same. He's a statistician who works on exactly this.

Some fancy-pants technique he has is better at predicting the future behavior of chaotic systems (like van der Pol circuits or the weather) than physical models. But he also points out that these predictions don't tell you what type of data to collect to make better predictions, and that they don't generalize. One nice "model" he has can predict the weather at Heathrow better than physical weather models (from the same inputs: wind speed, temperature, pressure, etc), but it's useless for predicting the weather in Kinshasa until the model is re-trained.

I think these types of data analysis tools will be very important in the future, but they won't replace the explanatory power of models. Just like how scientific computing is useful, but never replaced actual experiments.

Re:I agree, but... (4, Insightful)

aurispector (530273) | more than 6 years ago | (#23948449)

Thank you. Sure, there's a ton of data out there, but how was it collected? What statistical methods were used to analyze the data? How did you select the data set you're analyzing? Nothing I understand about science really applies to data mining a so-called "cloud". Prediction without explanation is just observation. Observation in and of itself is not science. You might have data, but is it the right data?

I see all this petabyte stuff as interesting and even as a valuable adjunct to real science, but a basic requirement of science is reproducibility and you can't reproduce the data collection.

Re:I agree, but... (1)

mckorr (1274964) | more than 6 years ago | (#23949421)

Until they shot said cannonball out of a cannon and noticed that it doesn't follow a straight line. For that you need a quadratic regression.

Linear regression is good for making predictions given a strong correlation between items in a data set, but the linear equation you get is a statistical estimate, not an exact solution to the data. To see this, plug in the values for any given data point and check whether the equation reproduces them exactly.

Granted, at the quantum level we are dealing in probabilities, but for a satellite traveling in a relativistic framework we want an exact orbit, not one that will "probably" keep it moving around the planet. The same goes for other areas of scientific endeavor. Our computers can run every regression we can think of until they pick one that fits the data set perfectly, but to really understand what that data means requires hard science, not more data. And that doesn't include making intuitive connections between apparently disparate data sets, which currently only a human can do.
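The "plug the values back in" check is easy to run. A sketch (noise-free projectile data, numbers my own): a straight-line fit leaves large residuals, while a quadratic fit reproduces every data point to machine precision.

```python
import numpy as np

# Noise-free projectile height: h = v0*t - (1/2)*g*t^2
g, v0 = 9.81, 30.0
t = np.linspace(0.0, 5.0, 25)
h = v0 * t - 0.5 * g * t**2

lin  = np.polyval(np.polyfit(t, h, 1), t)   # straight-line fit
quad = np.polyval(np.polyfit(t, h, 2), t)   # quadratic fit

lin_worst  = np.max(np.abs(h - lin))        # fails the plug-back-in test
quad_worst = np.max(np.abs(h - quad))       # passes it (up to rounding)
print(f"worst linear residual:    {lin_worst:.3f}")
print(f"worst quadratic residual: {quad_worst:.2e}")
```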

Re:I agree, but... (1)

ceoyoyo (59147) | more than 6 years ago | (#23950111)

That's kind of a bad example. Galileo basically did just that: rolling marbles down inclined planes and looking for a simple relationship that fit the data. Correcting for the inclination of the plane, he found one.

I don't remember how far Galileo got in explaining what the various terms in the relationship were, but Newton certainly finished the job. Only when that experimental relationship was explained did we get the theory of gravity and kinematics.

Re:I agree, but... (1)

mysticgoat (582871) | more than 6 years ago | (#23950825)

The discussion needs to bring in some other terminology.

  • Mapping: Google creates the largest and most detailed mappings of some subjects that we have ever seen. Further, it provides a number of map manipulation tools that are incredibly fast and easy to work with.
  • Territory: The map is not the territory; what Google delivers is always suggestive of the way the world actually is, but should never be mistaken as reality. The data Google draws on is abstracted from reality, and there may be several metadata processes in between the Real World and what is provided to Google users. This becomes more of an issue as one approaches the frontiers of human endeavor. And of course science is often done on those very frontiers.
  • Algorithm: In the context of this discussion, the highly specific, technical definition needs to be used, rather than the way the term is bandied about in casual conversations. See this definition [] , whose first paragraph should be sufficient for this thread while remaining accessible to all of slashdot's readership.
  • Heuristic: Basically, any problem solving strategy that might provide an adequate solution to a class of problems. See this description [] , whose first section should be good enough for this discussion. All algorithms are heuristics, but not all heuristics are algorithmic. A heuristic may lead to a wrong answer and still be considered good, if the cost of working with a wrong answer is low compared to other benefits, like speed or ease of use.

For the most part, Google relies on non-algorithmic heuristics to generate its results.

The scientific method can be described as a set of algorithms designed to select among all possible hypotheses the few that seem to best model real world events. These models are in turn used to suggest new hypotheses that can be tested with the scientific method; it is iterative. The (possibly unreachable) goal is to eventually find connections that tie all the separate models into one universal supermodel; a strong secondary desire is to simplify each model as much as can be done while preserving its ability to predict real world events.

Note that implicit in the above is the core of the Copenhagen interpretation: science is all about our intellectual models of reality, and is not about reality itself, which might or might not be humanly understandable. We stay within the scope of what we know we can comprehend, which comprises the models that our minds have built. Reality is separate from that: we test our models against reality, but reality is external to our modeling space.

In these terms, what Google presents us with is another view of reality that may or may not be distorted by the viewing process, but which is very easy to manipulate. However Google is not part of the scientific modeling process, and cannot replace those activities.

Google is however very good for engineering things, where the elegance of scientific models is often trumped by the pragmatics of Getting Things Done.

God bless wireless internet... (-1, Flamebait)

Anonymous Coward | more than 6 years ago | (#23947727)

The advent of the ability to take a shit (a.k.a. an Obama) while posting on Slashdot has got to be one of the greatest achievements of humanity.

More Google marketing (0, Offtopic)

12357bd (686909) | more than 6 years ago | (#23947741)

another obvious story.

I am sorry, Google, but your ad business model will be terminated by random page requests. It is already happening; no 'pseudo' articles will help.

Data Deluge Since Davinci (0, Redundant)

Doc Ruby (173196) | more than 6 years ago | (#23947791)

Leonardo da Vinci is reputed to be the last person who "knew everything" there was to know during his lifetime. Even that wasn't true. But the scientific method has been the key to both creating and coping with a "data deluge".

Science suffers when there's too little data: scientists then must generate more by observation, or do something else that isn't science (and doesn't work nearly as well). Too much data is only a problem if you're willing to settle for imprecise/inaccurate results. I'm sure there are a lot more lazy scientists now than in Leonardo's time, if only from the inflation of the scientist population, but that doesn't mean we should dumb science down for scientists who just want to own a computer that spits out answers to whatever data they put in.

Correlation is not causation (4, Informative)

tist (1086039) | more than 6 years ago | (#23947809)

A large source of data that exhibits a correlation does not somehow imply causation, even if the correlation holds under some conditions (or even all conditions). The science happens when the causation is determined and then applied.

Re:Correlation is not causation (1)

damburger (981828) | more than 6 years ago | (#23947835)

Yup. Mathematicians gushing about clouds and implying they have made science obsolete need to have that branded on their butts and then be sent back to the mathematics department. They've already done quite enough by giving us string theory (look! it's internally consistent! it sounds cool! ergo it's real!)

Re:Correlation is not causation (2, Interesting)

mckorr (1274964) | more than 6 years ago | (#23949629)

I'm a mathematician, and I have never heard a colleague make the claim that science is obsolete.

Mathematics is the language of science, and there has never been an advancement in either one without an accompanying advance in the other.

A mathematician might "gush" about clouds of data, and work on the mathematics of it, but if he insisted it made science obsolete he'd be tossed out on his ear.

Oh, and string theory? That was the physicists. The mathematicians were pissed off that someone found a use for topology, which we considered pure mathematics for its own sake and unconnected to the real world. Damned physicists ruined our fun.

Re:Correlation is not causation (1)

maxume (22995) | more than 6 years ago | (#23947925)

Of course correlation implies causation. When things are correlated, that is often a good place to look for causation. That's exactly what "imply" means.

Correlation doesn't *prove* causation.

There is a difference.

Re:Correlation is not causation (3, Informative)

damburger (981828) | more than 6 years ago | (#23948141)

Wrong - imply has a very specific meaning to mathematicians and scientists. 'A implies B' means that if A is true, B MUST be true also.

Re:Correlation is not causation (2, Interesting)

maxume (22995) | more than 6 years ago | (#23948395)

Fine. I'll try to restate my point using more specific language.

The fact that correlation does not imply causation isn't nearly as troublesome as the volume of "Remember folks, correlation != causation" posts would have us believe; lacking other evidence, it is a reasonable assumption to start with.

Re:Correlation is not causation (2, Interesting)

damburger (981828) | more than 6 years ago | (#23948423)

But nobody said that here, so your whole point is a strawman. I think it's safe to assume that nobody on /. thinks correlation can never indicate causation, because that would make all science impossible.

Re:Correlation is not causation (1)

maxume (22995) | more than 6 years ago | (#23948751)

People say it all the time.

Re:Correlation is not causation (1)

damburger (981828) | more than 6 years ago | (#23948789)

Whatever. You waded in saying

"Of course correlation implies causation"

and then when I pointed out the flaw in your argument you backtracked. Nobody said correlation != causation; now saying

"People say it all the time."

just makes you sound like you won't admit you are wrong.

Re:Correlation is not causation (1)

maxume (22995) | more than 6 years ago | (#23948933)

I admitted that I wasn't using precise language. As the AC that also replied to me pointed out, imply does happen to mean suggest in normal usage.

"All the time" was apparently an overstatement, but look at the tone surrounding that exact phrase: []

and the words: []

Re:Correlation is not causation (1)

mckorr (1274964) | more than 6 years ago | (#23949825)

Correlation does imply causation, but it does not prove it. It is possible for items of data to correlate, but have unrelated causes. There really are coincidences.

Let's see if I can put this in symbolic logic for you. Two data sets, A and B, each correlated with a third factor C:

A --> C (if A then C)

B --> C (if B then C)

Therefore A --> B (error)

This is a logical error. Both A and B correlate with C, but insisting that this means A and B correlate with each other does not pass rigor. Science is needed to prove that the two data sets have the same cause.
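That common-cause pattern is easy to simulate. A sketch with invented numbers: A and B each depend on a hidden C but never on each other, yet they correlate strongly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

c = rng.normal(size=n)             # hidden common cause C
a = c + 0.5 * rng.normal(size=n)   # A responds to C, plus its own noise
b = c + 0.5 * rng.normal(size=n)   # B responds to C, plus independent noise

r_ab = np.corrcoef(a, b)[0, 1]     # expected near 1 / (1 + 0.25) = 0.8
print(f"corr(A, B) = {r_ab:.2f}")
```

No amount of extra data on A and B alone distinguishes this from a direct A-causes-B relationship; that takes an experiment.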

Re:Correlation is not causation (1)

nodrogluap (165820) | more than 6 years ago | (#23949043)

The correlation != causation tag is usually applied because either:

  1. There are obvious confounding factors the article fails to mention
  2. There's a good chance the direction of the arrow of causation is incorrect. E.g., just because firemen tend to be where you see big fires doesn't mean they cause them. Or perhaps less obviously: aluminum doesn't cause Alzheimer's; it builds up in the brain as a consequence of Alzheimer's. Statistical inferences are only as good as the data available to you, and you need theories to drive the data collection... that's where the original article's logic fails.

A valid point the article could have made about the biological sciences is that we are returning (my research group included) to an observation-driven rather than a theory-driven approach to initial experimental design. What do I mean? For probably the last 50 years, you needed a specific target (e.g. a given protein, mRNA, etc.) in order to test for its presence or concentration. So you came up with a theory for the condition of interest, and tested for the target it implied. With new techniques, you can test an experimental condition for many thousands of targets at once, allowing you to build theories from the observations; then you design more experiments to confirm the theories you developed. Science is not dead, it just has a better leg up now.

Re:Correlation is not causation (1)

maxume (22995) | more than 6 years ago | (#23949095)

The article makes the mistake of assuming that new methods that can be used when you have bigger piles of information will make the old methods less powerful. As you say, it is often the case that they can be used together, resulting in faster/better/cheaper results.

Re:Correlation is not causation (0)

Anonymous Coward | more than 6 years ago | (#23949543)

Better to say "causation implies correlation". That is, correlation is necessary for causation, but not sufficient for it.

Re:Correlation is not causation (1)

zeromorph (1009305) | more than 6 years ago | (#23948915)

... and correct if you ask some logicians and linguists. For us, imply [] means "something meant although not said but (through different mechanisms) conveyed" and entail [] means "if A is true, B MUST be true also".

Re:Correlation is not causation (0)

Anonymous Coward | more than 6 years ago | (#23948157)

'Imply' can mean either entail or suggest. In the statement 'correlation does not imply causation', it's used to mean entail. Correlation suggests causation, but does not entail causation.

Dictionaries are your friends. []

Re:Correlation is not causation (2, Insightful)

99BottlesOfBeerInMyF (813746) | more than 6 years ago | (#23949527)

In science, the phrase usually used is "correlation does not imply a specific causation." It does, of course, imply some causation, and most of modern science is noticing correlations and testing for causation.

Re:Correlation is not causation (1)

OeLeWaPpErKe (412765) | more than 6 years ago | (#23948327)

Actually there is a statistical concept "causation" as well.

So yes, correlation does not imply causation. The reverse is true, though: causation implies correlation. There is only one mathematical relation between "things that correlate" and "causes" that supports this outcome: intersection. All causes correlate.

So you only need another mathematical property of causation, take the intersection of the concepts and there you'll have a much more precise source for causation.

You could also simply take the temporal aspect in time series. Increases and decreases of solar output can be shown to correlate with temperatures a few months later (and the sun has entered a low-activity cycle, so temp is going to drop, no matter what the goracle (I want what he's smoking) says). This means that solar output causes temperature (doesn't mean it's the sole cause obviously, let's not get into it, it does explain over 98% of temp. variation though). If correlation occurs with a temporal shift, it is trivially simple to separate cause and effect.

There are other properties that imply causation as opposed to correlation: you can already see the concept in Bayesian theory. Every Bayesian spam filter "knows" that an occurrence of "viagra" causes spam.
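Strictly, what the filter "knows" is a conditional probability, not a cause. A toy Bayes-rule calculation with invented counts:

```python
# Hypothetical training corpus (counts invented for illustration)
n_spam, n_ham = 400, 600
spam_with_word, ham_with_word = 200, 3   # messages containing "viagra"

# Bayes' rule: P(spam | word) = P(word | spam) * P(spam) / P(word)
p_spam = n_spam / (n_spam + n_ham)
p_word_given_spam = spam_with_word / n_spam
p_word = (spam_with_word + ham_with_word) / (n_spam + n_ham)

p_spam_given_word = p_word_given_spam * p_spam / p_word
print(f"P(spam | 'viagra') = {p_spam_given_word:.3f}")   # 200/203, about 0.985
```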

The problem is much simpler: everybody (well, for the moment especially the UN, and specifically the goracle) wants to politicize science.

But the scientific attitude of "we doubt everything" (meaning that even if the earth's surface temperature rose 5000 degrees after we doubled CO2 output, the scientific response to "does CO2 cause temperature rises?" would remain "we doubt it") is the very antithesis of policy. We don't know how the climate responds to CO2. For the moment we don't know at all.

This is, to say the least, not what Obamatons want to hear.

Re:Correlation is not causation (1)

zacronos (937891) | more than 6 years ago | (#23949129)

If correlation occurs with a temporal shift, it is trivially simple to separate cause and effect.

I have to disagree with that -- it's kinda correct, but I think it oversimplifies and misses some situations. (Note that I'm talking about the general case, not your solar output example in particular.)

As one example, imagine someone without an understanding of the physics of weather discovered that, at least 10 minutes prior to the arrival of any major thunderstorm, all birds in a particular forest stopped chirping and sought shelter. And in fact, every observed time that the birds stopped chirping and sought shelter, a major thunderstorm occurred. A naive application of your statement implies that the storm could not have caused the birds to seek shelter, since they happened in the wrong order. In fact, might it be possible that birds are the cause of thunderstorms? Perhaps the immediate cessation of all flying by the birds in the forest somehow triggers the thunderstorm by changing the flow of air? This is the sort of mistake the ancients would make -- assuming that because the observed phenomena happen in a certain order, the earlier observed event is the cause. The problem here, of course, is that the pressure drop preceding a major thunderstorm happens before the birds seek shelter, but if that isn't observed, the order seems backwards.

Another place this breaks down is where 2 events are correlated, but neither causes the other; instead, it is possible both have a common cause. Imagine this (very contrived) example: every time Bob presses a certain button, a bell rings in Rover the dog's doghouse. Rover has been trained to go fetch the newspaper and bring it to you whenever that bell rings. However, whenever Bob presses his button, 5 minutes later there is another effect -- a light comes on in your room. If it takes Rover no more than 2-3 minutes to bring you the newspaper, you will observe that, 2-3 minutes after Rover brings you the newspaper every morning, a light comes on in your room. Does Rover bringing you the newspaper trigger the light? No, not at all.

Correlation plus temporal shift does not equal causation.
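The doghouse example simulates directly. A sketch with made-up events: a hidden button press drives Rover at lag 1 and the light at lag 2, so Rover's fetch always precedes the light and correlates with it, yet causes nothing.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

button = (rng.random(n) < 0.1).astype(float)  # Bob's presses (hidden cause)
rover  = np.roll(button, 1)                   # Rover fetches one step later
light  = np.roll(button, 2)                   # the light follows one step after that

# Rover's fetch at time t lines up with the light at time t+1...
r = np.corrcoef(rover[:-1], light[1:])[0, 1]
print(f"corr(fetch_t, light_t+1) = {r:.2f}")
# ...but keeping Rover indoors would change nothing: both events are
# driven by the button, not by each other.
```

Only an intervention (disabling Rover's bell while still pressing the button) separates the lagged correlation from causation.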

Re:Correlation is not causation (1)

OeLeWaPpErKe (412765) | more than 6 years ago | (#23949569)

Yes, but those birds and the thunderstorm do have a very important connection:

these events SHARE CAUSES. This is true for your second example as well. They would never satisfy the second part of the causation demand: A correlates with B (with a timeshift), but B never decorrelates with A (with or without a timeshift).

In other words: it is a specific type of deviation in correlation that implies causation in statistical data.

Re:Correlation is not causation (3, Interesting)

eli pabst (948845) | more than 6 years ago | (#23948661)

You're exactly right. In fact if anything, science has started moving *away* from the kind of purely computational and statistical correlations that you get through data mining. Granted they are extremely important for generating hypotheses, but journals are much less likely to accept a paper without some kind of experimental validation.

The large-scale genetic association studies are a great example. There was a time when you could publish a paper solely describing a correlation between a variant in gene X and its association with disease Y. However, because of the way we do statistics in science, sooner or later you'll find a statistically significant correlation due to chance alone. In fact the epidemiologist John Ioannidis wrote an article [] about this (that I believe appeared on Slashdot as well). Now you're often required to show some kind of experimental validation that there is a biological basis behind the statistical correlation. The scientific method is not going away anytime soon.
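That "chance alone" effect is easy to demonstrate with pure noise. A sketch (numbers invented; uses the rough normal-approximation cutoff |r| > 1.96/sqrt(n) for two-sided p < 0.05): test a thousand random "genes" against a random phenotype and dozens come out "significant".

```python
import numpy as np

rng = np.random.default_rng(42)
n_subjects, n_genes = 100, 1000

phenotype = rng.normal(size=n_subjects)
genes = rng.normal(size=(n_genes, n_subjects))  # pure noise: no real associations

# Rough two-sided p < 0.05 cutoff for a correlation with n = 100 samples
threshold = 1.96 / np.sqrt(n_subjects)
r = np.array([np.corrcoef(g, phenotype)[0, 1] for g in genes])
false_hits = int(np.sum(np.abs(r) > threshold))
print(f"{false_hits} of {n_genes} null genes look 'significant'")  # roughly 5%
```

Roughly 5% of the null genes clear the bar, which is exactly why experimental validation is demanded on top of the statistics.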

Re:Correlation is not causation (1)

mckorr (1274964) | more than 6 years ago | (#23949893)

And that is exactly what you want. Data mining can show a correlation between genes X and Y, but that doesn't tell you how to fix it. For that you need the scientific method.

Re:Correlation is not causation (1)

ceoyoyo (59147) | more than 6 years ago | (#23950157)

Yes, it does imply causation, just not necessarily the obvious one. The correlation != causation meme is technically accurate, but the writer of the previous article, like so many people here, managed to screw it up completely by assuming that when a correlation between two factors is not a causal relationship between those factors, it must be coincidence. It isn't. A sufficiently strong correlation implies a causal relationship between those two factors and some third factor.

All models are wrong, but .... (4, Insightful)

gopla (597381) | more than 6 years ago | (#23947817)

All models are wrong, but some are useful.

We still need scientific methods to develop useful models and to understand and refine the existing ones. When Newton defined his mechanics, that was the state of the art of his era; now we have progressed to quantum mechanics, which might itself be refined tomorrow.

But mere observation of some phenomenon is not sufficient to postulate its behaviour under changed conditions. A scientific model and its rigorous application are required for that. Correlations drawn from the cloud cannot substitute for it.


Re:All models are wrong, but .... (4, Insightful)

99BottlesOfBeerInMyF (813746) | more than 6 years ago | (#23949581)

All models are wrong, but some are useful.

All models are wrong, to some degree. A better way to put it is all models are imprecise, but some are precise enough to be useful. 'Wrong' is a very flexible word and can easily lead to a misunderstanding in this context.

Don't blame the author's incompetence (2, Interesting)

ruin20 (1242396) | more than 6 years ago | (#23947851)

The point of the last story was horribly miscommunicated. There were two main points: first, that data is expanding in such scope that hierarchical organization systems don't work; second, that we're approaching a time when analysis of data can establish causation from correlation, because all the variables have been accounted for and every variance can be determined. Look at the human genome project or Folding@home.

I don't think this is completely true, but let's not bash the idea or miss the point just because the original author is a complete bumbling moron.

Re:Don't blame the author's incompetence (3, Insightful)

phobos13013 (813040) | more than 6 years ago | (#23948161)

You seem to be missing a fundamental flaw in the argument. No matter how many parameters you account for: a) you can never account for ALL parameters of this system we call life (if for no other reason than that there may well be some we don't know about yet!); and b) most importantly, even if you DO have all the parameters and the results show a correlation, there is no logical jump you can make that says it is the cause of the observed behavior.

Truly, what yesterday's article was saying is that causation or correlation is meaningless if you have a mimic of the real world in the form of a collection of data. You don't need a model that is accurate or valid or anything; you just run the data in the exact replica of reality. This is the simulacrum. The first problem is that data does not just run itself. At the least it needs an algorithm in order to be processed to a result. That's the model; without it, it's just useless data, which was already mentioned in yesterday's comments. But second, the problem with even ATTEMPTING such an idea is that you lead yourself into a situation where you "predict" the future and then operate to become that future, thus destroying the creative nature of humanity and becoming the self-fulfilling prophecy of machine code!

Keep in mind I speak mostly of social sciences that try to pattern human behavior. For the hard sciences, all you have done is create a simulation of reality, but it tells you nothing about the reality; it merely mimics it. There is no insight in creating a map the size of the United States; at best it is a work of art.

Re:Don't blame the author's incompetence (0, Flamebait)

oh_my_080980980 (773867) | more than 6 years ago | (#23948193)

"but lets not bash the idea or miss the point just cause the original author's a complete bumbling moron."

No, but you are if you think that was the point of the article. First, nowhere did the author speak of "hierarchical organization systems." In fact, what are "hierarchical organization systems"? Are you speaking of XML? Object-oriented databases? But regardless, that was not his point.

Second, "...analysis of data to show causation will come from correlation" is gibberish. It means nothing. It underscores your profound lack of knowledge of how scientific experiments are conducted and how data from experiments are collected and analyzed. In fact you are very much like the Wired author, who speaks without knowing.

You might want to read books on Statistics.

mod 3o3n (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#23947861)

Nice rebuttal, bad example. (4, Informative)

Angostura (703910) | more than 6 years ago | (#23947863)

In general I'm right behind the rebuttal. However John Timmer chooses a very bad real-life example as his rebuttal champion.

He asks: ...would Anderson be willing to help test a drug that was based on a poorly understood correlation pulled out of a datamine? These days, we like our drugs to have known targets and mechanisms of action and, to get there, we need standard science.

These days we may like our drugs to have these attributes, but very often they don't. There are still quite a few medicines around that clearly work and are prescribed on that basis, but for which there is only the haziest evidence as to how exactly they work.

The good thing about the scientific method, however, is that it gives us a framework to investigate these drugs' actions - even if the explanation is still currently beyond us.

Amen (1)

wfolta (603698) | more than 6 years ago | (#23948251)

You're right about the medicine example. It's odd that medicine has an incredibly rigorous statistical process before approval, yet many medicines are basically black boxes.

Look at statins (cholesterol medication), which are among the most widely prescribed medicines in the world -- and which I take. There's a legitimate question as to whether their main effect is to reduce cholesterol levels, or whether they're actually a specific kind of anti-inflammatory which happens to reduce cholesterol levels.

Or how about ulcers, which were chalked up to personality and stomach acid, and treated as such, until a "crank" pushed the medical community for decades and they finally realized that a bacterium was behind most of them. The medicines were (and are) effective, but no amount of modeling along those lines could find the actual, root cause of most ulcers.

(I also take medicine for stomach acid, and interestingly I am one of the 10% whose ulcer was not caused by bacteria.)

Number one pet peeve with my doctor (2, Interesting)

bamwham (1211702) | more than 6 years ago | (#23948841)

He makes statements about treatments, causes, and outcomes as if they were God-given truths proven to the world beyond all doubt. In truth, medicine seems to this mathematician a field governed solely by statistical correlation, with next to no concern over (a) what the actual cause is, or (b) testing the hypothesized cause in any meaningful way. I've read study after study that goes through a wonderfully presented statistical analysis to conclude that such-and-such drug works well at treating such-and-such symptom; they then close with a couple of paragraphs as to why (they think) the drug is working, often without any qualifiers such as "we don't know, but our guess is..." or "it would be nice to find out if it is..."

To the vast majority of practicing physicians I've met, "cause" just doesn't seem to be the important question. Which I think is why things happen like my pharmacist declaring that two drugs prescribed by my doctor are going to cancel each other's effects, or why I take a drug to treat a painful toenail and end up with bleeding in my stomach.

Re:Number one pet peeve with my doctor (2, Interesting)

ColdWetDog (752185) | more than 6 years ago | (#23950337)

In truth medicine seems to this mathematician as a field governed sooley [sic] by statistical correlation with next to no concern over (a) what is the actual cause is, (b) testing the hypothesized cause in any meaningful way. I've read study after study that goes through a wonderful presented statistical analysis to conclude that such and such drug works well at treating such and such symptom; they then close with a couple of paragraphs as to why (they think) the drug is working often not using an qualifiers such as "we don't know but our guess is..." or "it would be nice to find out if it is ...."

You are unfortunately quite correct, and it's very frustrating. I speak as a physician with a strong background in experimental biology. MOST medical research is complete and utter garbage. Statistically correct garbage, but crap nonetheless. However, in defense of my current field - it's awfully hard to do "experiments" in human research. Hell, it was hard enough to do on eukaryotic culture cells. Which is why much of the underpinning of modern biological science was done on "simple" organisms like bacteria and phages.

Another, more empirical way of looking at what most of medical science is doing comes from the realization that if you "cure" or "improve" a disease process, at some level it makes no difference whether you understood what you were doing or just managed to get a valid correlation between treatment and effect. To use a previous example, when you take a statin to reduce cholesterol, you (as the patient) don't do this to "lower your cholesterol" - you do it so you live longer / healthier / disease-free. The statin -> reduced cholesterol correlation may have led researchers to the treatment regimen in the first place, but the end point is staying alive longer. Thus, if the actual mechanism for that is channeling his noodliness, the treatment still works.

Of course, that's not science (or at least not very good science). But it IS the state of medical therapy.

Biology is fiendishly complex and we, as usual, make lots of baby steps and stutters. However, anybody who thinks a doctor in the latter part of this century is going to look back at 2010 medical practice and decide it's "butchery [] " is smoking some good stuff.

Re:Nice rebuttal, bad example. (1)

DrJay (102053) | more than 6 years ago | (#23949309)

Well, what I was trying to say is that no drug company pursues anything without knowing the molecules it targets, the role they play in the cell, etc. It's doubtful that the FDA would approve the testing of a drug if all the company came up with were "we dump it on cells, and it does X, but we have no idea why."

You're absolutely correct that this sort of knowledge isn't often that deep - we know what serotonin reuptake inhibitors do on the biochemical level, but what that means for the brain is pretty hazy. But there's still a large gap between this sort of shallow knowledge and the "well, it came out of a datamining session" level of understanding.

Re:Nice rebuttal, bad example. (0)

Anonymous Coward | more than 6 years ago | (#23949451)

My thoughts exactly; he has no idea what a screening data set looks like (often 10^7 - 10^8 data points in size). However, new medicines usually have at least some model implicit in the design of the high-throughput assays used to find leads. For example, the model that a derived cancer cell line can tell us anything about actual cancer - vague, but still a model at some level. About half the time the model was wrong and something different and interesting is happening, but there was a model in place at some point (otherwise they wouldn't have looked for that particular trait). In fact, these surprising results often start someone looking for a mechanism, which has launched, and continues to launch, people's careers. Also, the FDA tends to approve drugs only when you have some clue how they work - not necessarily a predictive model, but at the "it interacts with BLAH BLAH BLAH in XYZ tissues" sort of level.

A related way to state it, don't confuse medicine with the science of medicine, VERY different.

The other thing at least partially missed by both articles is that real science can be done quite differently with the cloud of data out there. I can go from idea, to model, to prediction, to a rough test using existing data VERY quickly (1-4 days), then refine the model, improve predictions, and design a good experiment that specifically tests the idea with a comparatively small data set. The cloud isn't the science, but it leads to the science. What we really need is a good database of published results that can parse the information like a trained scientist (in particular, one able to detect incorrect conclusions and alternative explanations not written into the text). When that happens maybe the cloud will replace science and we'll all become librarians, but until then actual scientists will have to do the work.


Marketing is not a Science (4, Insightful)

phobos13013 (813040) | more than 6 years ago | (#23947865)

Truly, the whole reason someone like Mr. Anderson could claim the end of science because of data is that he is a writer, a thinker, and in large part a businessman. Businessmen do not think about Science and how to use it to come up with a method that produces a conclusion. He uses information to come up with ways to illicit a reaction in people. So to him data is more important than science, because he uses it for his purposes. That is marketing, and the "science" of marketing has almost always been that way.

Mr. Anderson was not prescient in any way; he was just speaking from his perspective. The only thing is that we must be careful about even considering his proposition a valid reality worth pursuing - not for true scientists, but from a social perspective - or it will truly be the end of science. There are some in power already attempting to make this happen.

That said, I almost consider responding to yesterday's article to be falling for the argument. But since it hit /., this article is as cogent a rebuttal as one can make.

Re:Marketing is not a Science (1)

Red Flayer (890720) | more than 6 years ago | (#23948771)

to come up with ways to illicit a reaction in people
elicit == v. evoke; illicit == adj. illegal

BTW, it seemed obvious to me that he equated data discovery with scientific discovery, which is a big mistake. Adding to the sum of human knowledge is not the same as adding to the sum of human understanding, and using datamining and other automated tools for correlation determination does not in any way increase understanding.

Data discovery is about increasing knowledge. Scientific discovery is about increasing understanding.

Re:Marketing is not a Science (1)

ceoyoyo (59147) | more than 6 years ago | (#23950319)

Even from a social perspective I don't think his argument holds water. It's akin to the origin of superstition: when I make a sacrifice to the rain gods, in my experience it tends to rain. Therefore, I should believe in the rain gods.

His central example, Google, doesn't actually support his argument. Google uses an implicit model (which they carefully protect) to rank the likely relevance of search results. Then they give you a giant pile, in order of ranking, and let you sort through it. So not only does Google use a very sophisticated model but they let the searcher, and whatever model is implicit in the searcher, perform the final selection.

Re:Marketing is not a Science (0)

Anonymous Coward | more than 6 years ago | (#23950873)

One point perhaps worth bringing up is that Anderson has a degree in physics and served as an editor at the preeminent science journals Nature and Science, so his perspective may not be as skewed as most journalists'. That being said, I agree with your points.

fris7 sTop (-1, Troll)

Anonymous Coward | more than 6 years ago | (#23947917)

1. Ther3fore It's Baby take my

I'm moonlighting in bioinformatics (5, Interesting)

damburger (981828) | more than 6 years ago | (#23947919)

And I can back up this rebuttal with a practical example. I am a physicist; I know sod all about blood samples, or proteins, or cancer. I get a pile of mass spec data (about a billion data points or so on some days) and, through binning, background subtraction, and a string of other statistical witchcraft, I produce a set of peaks labeled according to intensity and significance.

This does not make me a cancer researcher. The data has to go back to the cancer guys, who pick out the biomarkers and thus develop new diagnostic tests, based on principles that I don't understand. I am master of the information but entirely blind as far as the science is concerned. The same goes for Google.
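For readers curious what that "statistical witchcraft" might look like in miniature, here is a deliberately naive sketch. The bin width, the median-based background estimate, and the absolute threshold are my own illustrative assumptions, not the poster's actual pipeline:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Toy mass-spec reduction: bin raw (m/z, intensity) points, estimate a
// crude flat background from the median bin, and keep bins that rise
// more than `threshold` above that background.
struct Peak { std::size_t bin; double intensity; };

std::vector<Peak> findPeaks(const std::vector<double>& mz,
                            const std::vector<double>& intensity,
                            double mzMin, double binWidth, std::size_t nBins,
                            double threshold) {
    // 1. Binning: sum intensities that fall into each m/z bin.
    std::vector<double> binned(nBins, 0.0);
    for (std::size_t i = 0; i < mz.size(); ++i) {
        double pos = (mz[i] - mzMin) / binWidth;
        if (pos >= 0.0 && pos < static_cast<double>(nBins))
            binned[static_cast<std::size_t>(pos)] += intensity[i];
    }
    // 2. Background subtraction: take the median bin as a flat background.
    std::vector<double> sorted = binned;
    std::nth_element(sorted.begin(), sorted.begin() + nBins / 2, sorted.end());
    double background = sorted[nBins / 2];
    // 3. Peak picking: keep bins exceeding the background by the threshold.
    std::vector<Peak> peaks;
    for (std::size_t b = 0; b < nBins; ++b)
        if (binned[b] > background + threshold)
            peaks.push_back({b, binned[b]});
    return peaks;
}
```

Even if this ran on real spectra, every surviving bin would still have to go back to the biologists to be mapped onto a candidate biomarker - exactly the division of labor the comment describes.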

we are merely neurons (1)

sneakyimp (1161443) | more than 6 years ago | (#23948709)

I would agree that the scientific method is not dead, but I like this rebuttal. The scientific method as I understand it is
1) Observe
2) Form a hypothesis or create a model to explain some phenomenon
3) Experiment and gather empirical data to support or refute the hypothesis/model

We still do all that but the emphasis does seem to be shifting away from traditional models that are sweeping generalizations (e.g., "An atom has a nucleus of protons and neutrons surrounded by moving electrons") to more nuanced, numerous, highly specific, and esoteric observations which are cobbled together into a patchwork of quasi-models that collectively define a distributed understanding of the real underlying concept. No single person understands the big picture in its entirety and no single model dominates scientific disciplines. Nay! Controversy is rampant.

These quasi-models manifest themselves as scientific papers, correspondence between academics, and flame wars on vBulletin or phpBB sites, and in practice people subscribe to them a la carte, like they were ordering at McDonald's or something. They stitch together their own stylized scientific philosophy from a vast menu of options.

In my opinion, all these claims that "we scientists are still doing science and we do understand the universe" are actually kind of pathetic. To call your data on the propagation of a particular gene variant in D. melanogaster a 'model' is hubris. You are a technician, not a scientist. You are a cog in the machine. We are all just neurons in the collective brain.

Re:I'm moonlighting in bioinformatics (1)

ceoyoyo (59147) | more than 6 years ago | (#23950379)

I'm a computer scientist who has morphed over the last six years into a biomedical researcher. As a computer scientist I can do all kinds of things to an image, including a bunch of statistical magic to tease out any patterns in the database. As a biomedical researcher I know that many of those associations are going to be due to the way the image was collected, or to otherwise irrelevant features of the patient. Some may even be introduced by my processing and statistical methods.

Duh! (5, Insightful)

es330td (964170) | more than 6 years ago | (#23947941)

When I read the original article my thought was that someone was just trying to write something to get noticed. The scientific method, IMHO, is all about a person or group of persons using a logical process to determine the validity of an idea. Observing massive amounts of data can reveal relationships that may not have been noticed in other ways, but at the end of the day the process of "I think X; I wonder if it is true" - the heart of the scientific method - can no sooner become obsolete than we can stop being human. The questions of What, Why, and How are so fundamental to humans as humans that nothing short of total omniscience will ever replace the logical process represented by the scientific method.

maybe... (-1, Flamebait)

Anonymous Coward | more than 6 years ago | (#23948053)

because it was a shitty idea in the first place?

exploratory experimentation (1)

johnrpenner (40054) | more than 6 years ago | (#23948101)

traditionally, science forms its hypothesis, and performs an experimentum crucis to test the hypothesis; rinse & repeat. it seems to me that 'the cloud' refers to a hitherto statistically huge number of samples of data points from which to extract our knowledge of the world -- a sort of broad collection of facts derived from constantly and systematically varying the experimental conditions -- an exploratory experimentation. goethe outlines a method of Exploratory Experimentation in the essay The experiment as mediator between subject and object [] .

"Theory-oriented and exploratory experimentation are not exclusive categories, but rather members of a spectrum of experimental research strategies. Which is more productive in a given context depends on many factors, including a field's state of development, the sort of knowledge (for example, underlying mechanisms versus phenomenal regularities) sought by the physicist, and the complexity of the system being studied. Our aim in emphasizing the exploratory path has been to bring to light an experimental style that has played an important, but hitherto underrecognized, role in the history of physics."

Physics Today Article []

Rise of Engineering over Science? (5, Interesting)

starfire-1 (159960) | more than 6 years ago | (#23948261)

I have always viewed this debate in the context of scientist vs. engineer - that is, one who views data as "good and true" vs. "good enough". That's not a slam on engineers (I am one), but a reflection of the balance between the two. A scientist who never applies theory sits in an empty room. An engineer who builds things without science sits in a cluttered room surrounded by useless objects.

I do find it interesting, though, that the advent of "google data" may indicate a flip in the order of the two disciplines. Historically (IMHO) science has led engineering. A theoretical breakthrough, provable by the scientific method, may take years to give birth to a practical application. Now, with enormous piles of data and the knowledge that "good enough" is often good enough, we may be creating useful objects that will take science many years to explain and model.

The biggest issue, and an omission in both of these pieces, is that this "cloud" of data does not represent "truth" (as the scientist may seek), but rather a summation or averaging of the "perception of truth" as seen by the individual authors. The cloud, therefore, is only as useful as humans' ability to divine truth without the scientific method.

My two cents. :)

Re:Rise of Engineering over Science? (3, Insightful)

maxume (22995) | more than 6 years ago | (#23948821)

I have a theory that some of the best engineers are scientists, and some of the best scientists are engineers.

Scientists often need to build crazy stuff to figure things out, and engineers often need to figure things out to build crazy stuff. Because they are each results-oriented, they don't get hung up on the things that someone in the field would.

Re:Rise of Engineering over Science? (1)

ceoyoyo (59147) | more than 6 years ago | (#23950473)

I think it's the other way. Engineering got a head start on science. When we pile up rocks just so, they tend to stay where we put them, even if you walk across them. Voila, a bridge. Science came along later and explained why those particular arrangements are stable. That explanation lets the engineer investigate other bridge designs that he might not have seen before.

There are perhaps a few areas in which the availability of massive amounts of data may let the engineer go back to his "I've seen it, therefore I'll try it" methodology, but I think in the vast majority of cases he'll wait until science figures out WHY it works. Engineering by trial and error is simply too expensive when you have a viable alternative.

knowledge != understanding (3, Insightful)

mlwmohawk (801821) | more than 6 years ago | (#23948355)

I have a problem with the Google generation: sure, they can parrot facts and find things in an instant, as can any slashdotter I'm sure, but knowing something is not the same thing as understanding something.

A coworker asked me yesterday, "How do you call a C++ class member function from C [or Java]?" The question is an example of pure ignorance.

If they "understood" computer science as a profession, this would be a trivial question, like "How do I (or can I) declare a C function in C++?" The second question is one Google can help you with, while having to ask the first means you are screwed and need to ask someone who understands what you do not. Not understanding what you do for a living is a problem.

How programs get linked, how environments function, virtual machines vs. pure binaries, etc. - these are as much important parts of computer science as algorithms and data structures. You have to have a WORKING knowledge of things, i.e. an understanding.

Google's ease of discovery eliminates a lot of the understanding learned from research. Now we can get the information we want, easily, without actually understanding it. IMHO this is a very dangerous thing.
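For what it's worth, the first question does have a standard answer, and it is a nice example of the working knowledge being argued for: C cannot see name-mangled class members, so you expose a flat extern "C" wrapper API that passes the object around as an opaque pointer. The Widget class and wrapper function names below are invented for illustration:

```cpp
#include <cassert>

// Hypothetical C++ class we want to make callable from C.
class Widget {
public:
    explicit Widget(int v) : value_(v) {}
    int value() const { return value_; }
private:
    int value_;
};

// The extern "C" wrappers are what C code links against. A shared C header
// would declare these same three functions with void* parameters, and the
// C side would never see the class definition at all.
extern "C" {
    void* widget_create(int v) { return new Widget(v); }
    int widget_value(const void* w) {
        return static_cast<const Widget*>(w)->value();
    }
    void widget_destroy(void* w) { delete static_cast<Widget*>(w); }
}
```

The same pattern (opaque handle plus free functions) is how most C libraries with C++ internals, and JNI-style bridges, are structured.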

Re:knowledge != understanding (1)

zeromorph (1009305) | more than 6 years ago | (#23949273)

Wow, one of the best postings I have read for months.

Although I wouldn't call it "very dangerous," you are so right about the difference between what you call knowing and understanding. Raw data and number crunching are only one step towards understanding. Interpreting the data and, in the end, really grasping the problem and hopefully a solution are something different.

Theories may have gone wild in some sciences, in the sense that theorizing is overvalued compared to data munching, but theories and models will remain an integral part of any sane science.

Re:knowledge != understanding (1)

ceoyoyo (59147) | more than 6 years ago | (#23950501)

Mr. Miyagi?

new adds to old, but doesnt end it (1)

peter303 (12292) | more than 6 years ago | (#23948555)

Petabyte technology suggests new avenues of scientific investigation, but doesn't end science or older, alternative ways of doing things. The clever thing is to be first to discover the new possibilities.

What? What in the world are they talking about? (1)

yoinkityboinkity (957937) | more than 6 years ago | (#23949083)

This whole thing makes no sense. It's all ambiguous concepts. What? Lots of data means we don't need theories? Lots of data != omniscience. In fact, lots of data is not even information yet. You still need to find how it applies. It's the people at Wired making a religion out of new technology that causes them to say crazy things like this.

science-open , clouds-? (2, Insightful)

GodWasAnAlien (206300) | more than 6 years ago | (#23949107)

Science and openness go together.
Without openness, we are all reinventing private wheels, the plans for which we destroy when there is no profit.
If you work in software, consider for a moment how scientific your work is, given the work of other companies doing similar work.

This Clouds thing is the "billion monkeys/humans typing on keyboards" model.
Yes, it really can work (with humans).
But, as with science, the chaos development model only works with openness.

Of course, organized science along with a little chaotic development works even better.

There are forces in our society that do not like any open model: the Microsofts, the MPAA, the RIAA. These kinds of organizations thrive on closed models - more copyright controls, more DRM, longer copyright and patent terms.
These forces would prefer to own, control, and close both science and clouds of data. They are unaware of the inevitable impact of such actions.

In a free capitalist society, we are naturally driven by contrary forces:
A desire to hide discoveries, to maximize profits, even at the expense of innovation.
A desire to share discoveries, to contribute to society and for credit.

While it is possible to profit when ideas are shared,
it is more difficult to contribute to society by hiding information indefinitely.

Because (1)

Thelasko (1196535) | more than 6 years ago | (#23949195)

There are coefficients we use in models that we don't fully understand in the physical world. We obtain those coefficients through empirical data. To rely solely on those models for design ignores the fact that those coefficients may change for any reason in the real world, because we don't fully understand what factors influence them.

In my experience this only applies to certain sciences. Most of my experience with such systems is in the area of fluid mechanics, and thermochemistry. Models can save years of lab work, but in the end, the model still needs to be verified.
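As a minimal sketch of the empirical-coefficient point (the quadratic model and the numbers are invented for illustration): a least-squares fit recovers the coefficient from data, but nothing in the fit explains what physically sets it or when it might drift.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// One-parameter least-squares fit for the model y = c * x^2 (think of a
// drag-style law, force proportional to velocity squared). Minimizing
// sum_i (y_i - c * x_i^2)^2 over c gives the closed form
//     c = sum(x^2 * y) / sum(x^4).
// The data pin down c, but say nothing about the physics behind it.
double fitQuadraticCoefficient(const std::vector<double>& x,
                               const std::vector<double>& y) {
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        double x2 = x[i] * x[i];
        num += x2 * y[i];
        den += x2 * x2;
    }
    return num / den;
}
```

This is why the verification step matters: if the conditions that set c change, only fresh measurements, not the fitted model, will reveal it.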

Missing the point (1)

dylanr (455383) | more than 6 years ago | (#23949423)

The Wired post was a bit over-reaching, sure... but that's Wired for you.

The bigger point is that science is about testability, not story-telling. There may soon come a day when our analysis can prove that something is true without our being able to explain why it is true.

We are already there in many respects, but will be much further along when the current crop of Bayesian diagnostics hits the market. Combine those with the flood of information that personal genomics companies hope to make available and you might see an explosion of insight into diagnosing disease states.

Does that mean we're all done with lab science? Of course not. But our research may come to focus more on understanding what our diagnostics have already proven, rather than on charting new frontiers of knowledge.

Call it what you will, that's a pretty big change in how people organize and gather knowledge.

If Google ads can be so bad... (1)

linhares (1241614) | more than 6 years ago | (#23949605)

...then WTF??

Some time ago some researchers came out with a book that was supposed to be called "The End of Intuition." The name of the book actually became "Supercrunchers," because people would click more on that ad than on "the end of intuition." I wondered why the final name shouldn't be "hot college lesbians."

The Eliza effect is so huge that any nice trick machines do seems to give us the immediate feeling that "It's alive!", and it has deep meaning.


As a researcher of psychologically plausible AI models [] , I found the whole idea disgusting, and submitted a paper to a journal explaining why the whole thing is bogus. []

Expect to see more of this overexcited nonsense in the future.

Say... (1)

denzacar (181829) | more than 6 years ago | (#23950533)

I wondered why the final name shouldn't be "hot college lesbians".
Have you ever worked in marketing? You might want to think about giving it a shot if you haven't already.

I have a feeling you could have a brilliant career in that field.

Re:Say... (1)

linhares (1241614) | more than 6 years ago | (#23950589)

well, if selecting a book's title is all about clicks...

Too much information can be a bad thing too (1, Insightful)

Anonymous Coward | more than 6 years ago | (#23949683)

Another point missed here is that background noise can obscure real results. Much of the data cloud is utter garbage. Picking out the useful information is often a complicated and difficult process; in some cases it's easier to just go and do the measurement yourself. I've heard "a few days in the library can save you weeks at the bench" about as often as the reverse. I think they're both true.


Hmmm... this is how our ancestors did things (1)

foniksonik (573572) | more than 6 years ago | (#23949687)

Ever wonder how early humans discovered the medicinal qualities of plants? They didn't use models and the scientific method... they used vast amounts of trial-and-error results. Then they used prediction based on what they had learned to narrow down what kinds of plants to try next. They didn't understand the underlying mechanisms or test new findings against that type of model... they used cheap and dirty statistics and record keeping.

This is just an extension of what humans have been doing to discover new correlations, for our entire history... just faster.

I come up with theories all the time based on cross-referenced science articles. Unfortunately I'm not in a position to test any of them, so the best I could do is blog about it - but then I'd join the ranks of the armchair scientists out there and that just seems lame, for now.

One day I'll come across a community that accepts crackpot ideas as the basis for experimentation... lets the community vote on which ones to carry out and takes small donations to fund the projects... then I'll submit my ideas. Hmmm... sounds like a fun community; off to Google to see if one exists already.

WTF?? comment on QM in article (1)

Prune (557140) | more than 6 years ago | (#23949711)

The article states that "we know quantum mechanics is wrong on some level". Oh really? That's news to me. Any serious proposed theories of everything have been quantum in nature. It's amusingly hypocritical that the Arstechnica article refers to the Wired author as unscientific, yet makes such a claim itself.

The only thing "wrong" with quantum theory is that it doesn't fit human intuitions. But this is only because people ignore the psychology of perception and are not careful about interpretations; it's easy to create a very reasonable interpretation of QM that doesn't invoke weird stuff like saying QM must have something wrong with it, or strange consciousness stuff, etc. An example is Mohrhoff's [] (also check Marchildon's review of this class of interpretations; it's on the arXiv somewhere, linked to this).

systems thinking / 5 categories (1)

sidething (1138899) | more than 6 years ago | (#23950461)

Thoroughly fed up with trying to register on Wired to argue, as the article seemed seriously wrong.
What I was intending to post there, but can't - finally somewhere to post it beyond the g/f's email!

Copied from [] as I was looking for a reference to the information but think that this page sums it up at least as well as I could:

"The content of the human mind can be classified into five categories (Russell Ackoff):

Data: symbols

Information: data that are processed to be useful; provides answers to "who", "what", "where", and "when" questions

Knowledge: application of data and information; answers "how" questions

Understanding: appreciation of "why"

Wisdom: evaluated understanding."

The interpretation at the given URL sees understanding as a process that represents the transition between each stage rather than as a stage in itself - information is the understanding of the relationships between data, knowledge is the understanding of patterns of information, and wisdom is the understanding of the principles that underpin knowledge and hence make extrapolation to the future possible.

Whichever way it is looked at, the first categories relate to the past, with wisdom (the ability to extrapolate) being the only one which relates to the future.

Applying this to your example of J. Craig Venter, it can be seen (from my viewpoint) that his research has expanded the amount of data available to us, and possibly even the amount of information.
However, it provides no answers that I can see - from what is provided in your article - to the questions of "How?" or "Why?", and therefore no increase in knowledge, understanding, or especially wisdom.

I would argue that it is the scientific method of hypothesize, model, test that provides the answers to the how and why questions and therefore increases Knowledge and Wisdom.

To me what you are arguing is not that the data deluge makes the scientific method obsolete, but rather that it provides a new basis for experimentation through the analysis of statistics - a new medium for testing - while providing no ability to hypothesize, and hence no increase in knowledge, understanding or wisdom.

As such it is a very beneficial development, but the results must be treated with the same caution as - indeed more than - any experimental results gained by more traditional means. To blindly accept the findings without factoring in all the parameters and testing against a hypothesis (indeed, unless you hypothesize, how do you determine what to vary and what to test?) seems to me very dangerous, and indeed a step backwards in thinking.

Human Comprehension is Limited (0)

Anonymous Coward | more than 6 years ago | (#23950529)

Chris Anderson []
has foreseen the most profound change since the age of reason. Man has
reached the point where his "understanding" can impede evolution. It is
time to concede that some processes may be beyond our comprehension.
Research in the area of Artificial General Intelligence provides a
crystal clear demonstration of the problem. A half century of research
has led to "intelligent" data mining and voice response systems and
very little else.

However, Koza [] , Fogel []
and many others have observed evolutionary computation machines
creating solutions to real world problems. In some cases these
are patentable solutions beyond previous human achievement, and some of
them defy understanding.
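The sort of evolutionary computation Koza and Fogel describe can be sketched in miniature. This is a deliberately simple illustration of the idea (a genetic algorithm on bit strings, nothing like Koza's full genetic programming): the machine improves candidate solutions by selection, crossover, and mutation, with no understanding of *why* the winners work.

```python
import random

TARGET_LEN = 20      # length of the bit-string "genome"
POP_SIZE = 30        # candidates per generation
MUTATION_RATE = 0.05 # per-bit flip probability

def fitness(individual):
    # Toy objective ("OneMax"): count of 1-bits; 20 is a perfect score.
    return sum(individual)

def mutate(individual):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in individual]

def crossover(a, b):
    point = random.randrange(1, TARGET_LEN)  # single-point crossover
    return a[:point] + b[point:]

def evolve(generations=200, seed=0):
    random.seed(seed)
    population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == TARGET_LEN:
            break
        parents = population[:POP_SIZE // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children       # survivors keep the best found
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # typically converges to the maximum of 20
```

Here the objective is trivial, so the result is easy to interpret; the point of the Koza and Fogel examples is that the same blind process, applied to circuits or controllers, can yield solutions whose workings the experimenters themselves cannot fully explain.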

Unless you have unlimited funding and lots of time, it's not necessary
to understand why every complex solution works. It may not even be
possible.
A million MRIs of functioning brains are not likely to result in any
Lisp program for AGI, so the search for AGI seems to be coming full
circle back to the "baby bootstrap [] ". Even Ben Goertzel []
is looking to virtual babies to mine the clouds.

Like others who have managed to see beyond the horizon, Anderson will
be widely misunderstood. He is not rejecting scientific method, he is
simply showing us its limitations.
