Math Technology

Finding a Needle in a Haystack of Data 173

Roland Piquepaille writes "Finding useful information in oceans of data is an increasingly complex problem in many scientific areas. That is why researchers from Case Western Reserve University (CWRU) have created new statistical techniques to isolate useful signals buried in large datasets coming from particle physics experiments, such as the ones run in a particle collider. But their method could also be applied to a broad range of problems, like discovering a new galaxy, monitoring transactions for fraud, or identifying the carrier of a virulent disease among millions of people." Case Western has also provided a link to the original paper. [PDF Warning]
  • Google (Score:4, Interesting)

    by biocute ( 936687 ) on Wednesday December 07, 2005 @04:45PM (#14204954)
    Does Google have the technology to do this kind of scientific search yet?

    If it does, it sure can save these researchers a lot of time; if it doesn't, I'm sure Google will be keen to get involved, especially on the "isolate useful signals buried in large datasets" part.
    • But can it find potential girlfriends for Slashdotters? Now that's what I would call isolating a useful (and rare) signal buried in a large dataset. When I see THOSE results, I will be impressed.
      And if so, I've got a useful signal that could use some burying...
      • Re:Google (Score:3, Funny)

        by sapped ( 208174 )
        But can it find potential girlfriends for Slashdotters?

        Wow. There really aren't any out there. Check it out on google [google.com] yourselves.

        The same results come back in images, groups, news, etc. Man. What a sad bunch.
      • It's not that hard. Go outside. Talk to people. Listen (that's the important part).

        Give it a few months and you will be surprised!

        (Of course, get yourself in shape - not too hard. An hour a day, five days a week, on a cycle or treadmill will do the trick. Lay off the sugary and fatty snacks.)

        I've even had propositions! THEY came to ME!
    • Re:Google (Score:3, Funny)

      by garcia ( 6573 )
      Does Google have the technology to do this kind of scientific searches yet?

      It's only in Beta thus it's not useful ;-)
  • It just refused to load for me.
  • All you have to do is index it properly, and lots of data can be searched really fast.
    • Re:Indexes (Score:2, Informative)

      by Husgaard ( 858362 )
      They are trying to efficiently find a signal in random and chaotic data. Random and chaotic data isn't easy to index.
      • But that's the trick. Finding a good way to index the data.
        • I don't see that there would be any point in indexing it... In an index you're atomizing it down to its individual, meaningless parts. Each part is therefore solitary in the index and cannot be related to any other part in a meaningful way, because all the other parts are equally unrelated and meaningless as well.

          It would be more useful to transform the apparently random data in some way so as to make signals or discrepancies buried in it obvious. There are all kinds of fun
    • If we had ham, we could have ham and eggs. If we had eggs.
  • by Billosaur ( 927319 ) * <wgrotherNO@SPAMoptonline.net> on Wednesday December 07, 2005 @04:48PM (#14204983) Journal

    I see this as being a boon to SETI [berkeley.edu]. If there was ever a needle in a haystack, it's trying to tease a possible intelligent signal out of the cosmic background noise. If you have an idea what the background is like in general, then it's far easier to detect an abnormality in that background noise. The question will end up being, are we simply detecting more false positives or are these real signals?

    • Also the first "useful" application for this kind of technique that popped up in my head. Actually, the process in my head that made this one item pop up is maybe useful too (-: lots of random data, and this one item gets associated with the article.
    • it's been a while since I last did much Perl, but shouldn't the last line of your sig be:

      ($world = $world) =~ s/bad/good/g;

      otherwise you're making your world better but not ever doing anything with it...
    • I see this as being a boon to SETI.

      I'll read the article tonight and find out if it's applicable and whether it's better than what we are using. In the SETI@home client processing we already take into account the anticipated form of the signal, so I'm not sure this buys us anything. In fact, other than the exact mathematical description as a multidimensional manifold the text makes it appear that we're already using this technique in our searches for repeated pulses and signals matching the Gaussian pr

  • Ya' know... (Score:3, Funny)

    by jacobcaz ( 91509 ) on Wednesday December 07, 2005 @04:49PM (#14204988) Homepage
    82.67% of all statistics are made up anyway...
    • "82.67% of all statistics are made up anyway..."

      Well yeah, 50% of all statisticians finished in the bottom half of their class.
      • Not necessarily... that only works if there is an even number of statisticians, and if nobody scored the median score.

        eg. if there are 100 statisticians, the median score is 37, and 10 statisticians scored exactly that, then only 45% of statisticians are technically in the bottom half (and 45% in the top half). 10% are exactly in the middle.

        You could say that the 10% are in both the bottom and top half... in which case 55% are in the bottom half and 55% are in the top half!!
    • Very true. Also interesting is that 95% of men like to use statistics to seem more intelligent...
    • from the moment you posted that comment, the value you gave increased just a little bit more ...
  • by RandoX ( 828285 )
    I can't even find my keys some days.
  • by AthenianGadfly ( 798721 ) on Wednesday December 07, 2005 @04:50PM (#14205000)

    "But their method could also be applied to a broad range of applications, like discovering a new galaxy, monitoring transactions for fraud or identifying the carrier of a virulent disease among millions of people."

    When asked about more advanced applications for the technology, researchers replied it will probably be "quite a while" before the technology could be used for extremely high-noise environments. Said one, "I mean, it's going to be a long time before we're up to finding useful comments on Slashdot or something."

  • Numb3rs (Score:2, Funny)

    by vanyel ( 28049 ) *
    Sounds like they've been watching Numb3rs ;-)
  • The Case team discovered a technique that is built on the principle of comparing a set of summary characteristics for any sub region of the observations with the background variation. From these characteristics, attempts are made to find small regions that appear significantly different from the background--a difference that cannot simply be attributed to random chance

    So, basically it's the one search engine that can only find the words "horny teen nekkid" if it is NOT on a pr0n page. I can see uses for that.
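The summary-vs-background idea quoted above is easy to sketch numerically. Below is a rough toy version (all names are mine, and this is not the Case group's actual algorithm): scan a histogram with a sliding window and flag any window whose mean sits implausibly far above the global background.

```python
import math
import random

def scan_for_bump(counts, window, z_cut=4.0):
    """Toy scan statistic: flag windows whose mean exceeds the global
    background by more than z_cut standard errors of a window mean."""
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    se = math.sqrt(var / window)  # std. error of a window-sized mean
    hits = []
    for start in range(len(counts) - window + 1):
        w_mean = sum(counts[start:start + window]) / window
        z = (w_mean - mean) / se
        if z > z_cut:
            hits.append((start, round(z, 1)))
    return hits

random.seed(0)
background = [random.gauss(100, 10) for _ in range(500)]
for i in range(240, 260):          # inject a small "resonance" bump
    background[i] += 25
hits = scan_for_bump(background, window=20)
print(hits)  # windows overlapping the injected bump near index 240
```

A real analysis would also have to correct for the number of windows scanned (the "look-elsewhere effect"), which is exactly the multiple-testing issue raised further down in this thread.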
  • by Anonymous Coward
    Why do we need to be warned that it's a PDF? I can understand an "MS Word Warning" but PDF is platform independent. What's wrong with PDF?
    • My problem with them is that one of my work PCs is very old but still fine for browsing the internet. Clicking on a link on this machine that I did not realize was a PDF sets off a long and tedious series of about 3 minutes where FF locks up until the Acrobat Reader plugin loads, then it downloads and displays the PDF, then scrolling through the file itself is really jumpy, then I have to close it which is slow and sometimes crashes FF.

      Even on my faster PCs, reading a large PDF feels slower than it shoul

      • Why don't you uninstall the plugin on that machine then? It seems to me the plugin is useless, since you're never going to want to use it anyway. Keep the Reader as a standalone app, so you can still view the PDFs if you want to.
  • "...a difference that cannot simply be attributed to random chance..." If it's random, how do you know?
      Random data has NO pattern whatsoever. Detecting a pattern, however small, implies non-random data. QED

        -Charles
      • by Stonehand ( 71085 )
        Not really.

        The more you constrain your allegedly random process, such as by insisting that it produce output without "patterns" -- whatever those are -- the less random it actually is.

        To put it in more concrete terms, which is more random -- a coin which flips 50-50 heads/tails with no other constraints whatsoever, or a coin which flips 50-50 but will never, say, flip 100 heads in a row, and will never exactly alternate, and will never produce the bit sequence corresponding to the ASCII encoding of the text
      • If you have an infinite amount of random data, every pattern will be in there somewhere. At least, that's what I was led to believe.
        • If you have an infinite amount of random data, every pattern will be in there somewhere. At least, that's what I was led to believe.

          Yes, but only if you look at smaller segments, which changes your dataset. For example, if you spot the first 30 digits of Pi in an infinitely random set, the question becomes: is your random set Pi? If not, the pattern only applies to those 30 digits, and thus your set changes and is no longer the infinite set of random data.

          And they aren't dealing with an "infinite" set, but
    • by flynt ( 248848 ) on Wednesday December 07, 2005 @05:04PM (#14205117)
      Whether you "know" or not is always up for debate, but that's usually for epistemology class. In classical hypothesis testing in statistics, you make a distributional assumption about your data, and then calculate a probability from the data you observed (the p-value) given your initial assumption. If this probability is very low (also an interpretation), you conclude your initial distributional assumption was probably incorrect. There are finer points to it of course, but classical hypothesis testing in statistics is pretty much a reductio ad absurdum in logic.
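To make that concrete with a toy example (function name mine; the normal approximation to the binomial is assumed): test whether a coin is fair from the number of heads seen in 10,000 flips.

```python
import math

def z_test_p_value(heads: int, flips: int) -> float:
    """Two-sided p-value for H0 "the coin is fair", using the
    normal approximation to the binomial distribution."""
    mean = flips * 0.5                  # expected heads under H0
    sd = math.sqrt(flips * 0.25)        # binomial std. dev. under H0
    z = abs(heads - mean) / sd
    return math.erfc(z / math.sqrt(2))  # P(|Z| >= z), Z standard normal

print(z_test_p_value(5050, 10_000))  # z = 1.0 -> p ~ 0.32: consistent with fair
print(z_test_p_value(5300, 10_000))  # z = 6.0 -> p ~ 2e-9: reject "fair"
```

The second result is the reductio: if the coin really were fair, data this extreme would almost never happen, so the fairness assumption goes.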
  • by tomzyk ( 158497 ) on Wednesday December 07, 2005 @04:57PM (#14205063) Journal
    FYI: Its abbreviation is not "CWRU" anymore. As of about 2 years ago, they changed it to simply "Case" and gave it the silly new logo of two paperclips stuck together.

    Why? I have no idea. Some "university branding" thing that some people thought was important to the growth of the campus or something. Apparently it ticked off a bunch of alumni (from the original Western Reserve University) too.

    Knowing is half the battle.
    • The name of the school is still Case Western Reserve University.

      Despite the fact that it's OK to officially call it 'Case' now (it wasn't OK to do so in '97), CWRU is still a valid abbreviation. Plus I paid so much money to that place that I'll call it whatever I damn well please.

      - '02
    • by Anonymous Coward
      Actually, it's not two paper clips together. It's a fat man holding a surf board. Look for yourself [case.edu]
    • I have to say, I'm glad that my alma mater (Case School of Engineering, 2000) is actually still doing real science. I'm kind of disappointed at all the folks above who posted about "finding useful information in the noise of internet information" though; that type of information gathering is not quite the same as discerning between special-cause and random-cause fluctuations in a signal (mostly because the Internet consists mostly of special-cause variation: i.e., things people have written or created). Dis
    • More on the logo. [case.edu]

      The true offense in the OP was calling it "Case Western". It's not a "reserve university", whatever that means.

      I've always just called it "Case" since I started there as an undergraduate in 1994, while my e-mail address still contains cwru.edu. Both of those are used now - "Case" just validates the fact that most people really get tired of saying the whole name over and over again.

  • by airrage ( 514164 ) on Wednesday December 07, 2005 @04:59PM (#14205081) Homepage Journal
    Someone asked me to give ten different ways to find a needle in a haystack, these are my thoughts:

    1) INDUSTRIAL MAGNET
    2) BLIND LUCK
    3) BURN THE HAY, PICK UP THE NEEDLE
    4) STATISTICAL ANALYSIS (SINCE NEEDLES IN HAYSTACKS ARE NOT PLACED AT RANDOM, THEY ARE SUBJECT TO REGRESSION ANALYSIS)
    5) OFFSHORE TO CHINA WHERE LABOR IS CHEAPER, SEARCH THE HAY WITH 10,000 WORKERS.
    6) WAIT YEARS UNTIL THE HAY DECAYS, PICK UP THE NEEDLE
    7) SPREAD OUT THE HAY, HIRE BAREFOOT HAY WALKERS
    8) TAKE ALL THE HAY, PUT IN A POOL OF WATER - HAY WILL FLOAT, AND NEEDLE WILL SINK
    9) LET COWS EAT THE HAY, X-RAY ALL THE COWS!
    10) TRIAL AND ERROR - ONE PERSON

    • Mythbusters (Score:3, Informative)

      by everphilski ( 877346 )
      Mythbusters actually did an ep where they built two different needle-in-haystack finding machines, one actually did quite well...

      -everphilski-
      • Their solutions were kinda destructive though.

        I'd like to see a way of finding a needle in a haystack that left you with a (largely) intact haystack afterwards, not a pile of ash or a wet sludge.

        Huge inductive coils would be a good start... probably wouldn't find the bone one though - maybe some kind of MRI?

      • Was that the one where Kari(sp?) mugged and did things for the camera in the cutesy, girly way? Stupid me, that pretty much describes every episode she's been in.

        Scotty was a no-bullshit welder (and very attractive, to boot). Bring her back, she's a *real* babe.
    • 1) INDUSTRIAL MAGNET

      DBAs everywhere are cringing and covering their data.
    • I've got another one..

      11) LET COWS EAT THE HAY, DISSECT DEAD COW

      lameness filter blah
      • Ok, time for another one of iamlucky13's little-known redneck nerd facts

        Category: Cattle
        Entry: 1097
        Ranchers will commonly intentionally force-feed a smooth magnet [magnetsource.com] to calves. Because of its weight, it will remain in the rumen or reticulum (the 1st and 2nd stomach compartments, respectively) for the life of the cow. Fields often have stray bits of metal small enough to be accidentally ingested while grazing, such as barbed-wire bits, fence staples, screws, etc. When stuck on the magnet, the pieces are eff
  • Perhaps this technology can make Usenet useful once again.
  • Would this be useful to reduce the computations needed for the SETI@Home folks too? Seems they have a bit of data to sort through... Hell, genetic engineering too. Look for useful patterns in hundreds of DNA strands.
  • Mythbusters did this one already. They built two machines/processes to find needles in haystacks. One used a process to burn away the hay leaving the needles and the other used magnets and gravity to separate the needles from the hay.

    Oh, wait. They're talking about data. Never mind.
  • THE SINGULARITY

    Throughout history, we championed the content creator. Only a tiny fraction of the population could write or understand math or science. Only a tiny fraction could dedicate themselves to the arts.

    Most individuals' time was consumed by being agrarian generalists: they owned a farm, and they were constantly occupied by all the repairs and maintenance of their property. It wasn't a job, it was a way of life. But now, more and more, our economy makes us all incredible specialists. We're conf
  • by G4from128k ( 686170 ) on Wednesday December 07, 2005 @05:32PM (#14205350)
    Looking for possible patterns in large volumes of data is dangerous because of the high chance that random data will fit some of the myriad patterns tried. If you test data against a thousand possible patterns, then about 50 of them will be found to be present at a statistical significance level of 5% (even if the data is 100% random). "Cancer clusters" are an excellent example of this -- if you slice and dice a population enough different ways you are bound to find some geographic/demographic/ethnographic subgroup with a very high chance of some cancer.


    It's better to have an a priori hypothesis and look for one specific, pre-defined pattern in a mound of data than to see whether any pattern at all is in the data. Or, if one insists on looking for many patterns, then the standards for statistical significance must be correspondingly higher.
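The "about 50 out of 1000" arithmetic is easy to simulate (plain Python; uniform p-values stand in for tests run against pure noise):

```python
import random

random.seed(42)

# Under the null hypothesis a p-value is uniform on [0, 1], so testing
# 1000 "patterns" against pure noise at alpha = 0.05 still clears the
# significance bar roughly 5% of the time purely by luck.
p_values = [random.random() for _ in range(1000)]
false_positives = sum(p < 0.05 for p in p_values)
print(false_positives)  # about 50 spurious "discoveries"
```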

    • by zex ( 214881 ) on Wednesday December 07, 2005 @05:54PM (#14205567) Homepage
      If you test data against a thousand possible patterns, then about 50 of them will be found to be present at a statistical level of 5% (even if the data is 100% random).


      If you're not correcting for multiple hypothesis testing, you are correct. If you do have 100% random data that holds to perfect randomness at all scales (which I'm not sure is even possible) and correct for multiple hypothesis testing, then you'll find exactly what you "should" find: no significant pattern.

      You mention "Cancer clusters" as an example of attribution of significance to insignificant findings. However, these clusters are often found (at least in the genetics research realm) by hierarchical clustering, which is self-correcting for multiple hypothesis testing. If you're speaking of demographic surveys which find that (e.g.) "black females in Tahiti who were exposed to .... are more susceptible to brain cancer", then you're probably right. I too see those as examples of restricting the domain of samples until you find a pattern - but the pattern nonetheless exists.
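For readers unfamiliar with the correction being discussed: the simplest version (Bonferroni) just divides the significance threshold by the number of tests run. A minimal sketch, with a helper name of my own invention:

```python
def bonferroni_significant(p_values, alpha=0.05):
    """Indices of tests still significant after the Bonferroni correction:
    each p-value must beat alpha divided by the number of tests run."""
    threshold = alpha / len(p_values)
    return [i for i, p in enumerate(p_values) if p < threshold]

# Four tests: a naive alpha = 0.05 cutoff would flag tests 0 and 2, but
# with the corrected threshold 0.05 / 4 = 0.0125 only test 2 survives.
print(bonferroni_significant([0.03, 0.40, 1e-6, 0.75]))  # [2]
```

Bonferroni is deliberately conservative; methods like the hierarchical clustering mentioned above trade some of that strictness for power.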
    • Looking for possible patterns in large volumes of data is dangerous because of the high chance that random data will fit some of the myriad patterns tried.

      No, God put the figure of Jesus in the sky, but made it not look too much like Jesus just to test the difference between the believers and non-believers. Trust me, it was not easy to do all that with nobody looking.

  • Current fraud detection systems in use in the financial industry are based on two primary knowledge bases:

    1. A knowledge of your purchasing pattern as a consumer. To wit, having a statistically significant sample of what are valid transactions as well as knowing your credit score and income.

    Do you shop at high-end stores? Do you use your card for primarily travel and entertainment? Do you use your card for everyday purchases? How much of your line-of-credit do you tend to use?

    2. A comparison of recent
  • by $RANDOMLUSER ( 804576 ) on Wednesday December 07, 2005 @05:40PM (#14205434)
    An article posted by Roland Piquepaille with no links back to his site???
    WTF? Roland? You feeling OK?
  • Sort of a dilettante question, but I've been researching using entropy and information gain here at work, and some of what they're talking about in the article and the paper seems familiar, though I'm not skilled enough in stats yet to make much out of it. It seems fairly similar to how you get an information gain score. If you can classify the background as such, you should be able to sift through data with however many parameters you want and find the parameters that cause the greatest diffe
  • by Lord Byron II ( 671689 ) on Wednesday December 07, 2005 @06:48PM (#14205962)
    As a particle physicist I know exactly the kind of challenge that this is. The SNR is horrible, you've got tons of data, and the data may be distorted by all sorts of sources (background, misalignment, the wrong reaction, etc).

    I also know that these sorts of algorithms are created all the time. In fact, someone in my lab got his Ph.D. for applying a neural network to this problem. Furthermore, these algorithms are not "plug-n-play": they must be manually tuned by a team with in-depth knowledge of the system in order to be useful.

    So trust me when I say that Roland has blown this out of proportion. Congratulations to the CWRU team for getting the PRL paper published, but this is hardly the kind of ground-breaking news that deserves to be on Slashdot.

  • I often have the same feeling about Slashdot. It's like a big haystack, but the needles are larger and easier to find. I have noticed that the Roland Piquepaille needles happen to be the most worthless. The obvious solution for finding the proverbial needle in the haystack of data is to make it up. It's not like there are any real-world [blueyonder.co.uk] examples. [thenation.com]
  • From the title of TFA, "Case researchers discover methods to find 'needles in haystack' in data". Pet peeve of mine: new techniques are not "discovered", they are "developed" (or something similar). Henry Ford did not discover the Model T by peering through a microscope, and CowboyNeal did not discover SlashCode by analyzing reams of code observations. It may be semantic nit-picking, but I think saying that the researchers just discovered this (surely insanely complex) bit of mathematical analysis takes away
  • is the overwhelming size of the literature. It is getting harder and harder to find the information that you need among a sea of near misses. Even to stay on top of one's subfield would require reading at least five journal papers a day, which is a significant undertaking even before you have to spend large amounts of time hunting for papers. For example, I am a chemist. It is generally not too difficult to find papers about a specific molecule - each molecule is assigned a specific ID number, which can
  • by martin-boundary ( 547041 ) on Wednesday December 07, 2005 @09:27PM (#14206837)
    I don't want to rain on the parade, but the result is quite possibly wrong.

    If you download the linked paper, on the second page they talk about the Breit-Wigner (Cauchy) density psi, and later they claim that their score process has zero expectation. Now, everyone knows that the Breit-Wigner does not *have* an expectation, and it's often used as an example where the asymptotic normal (Gaussian) distribution approximation doesn't hold. But still, they derive all sorts of distribution formulas involving a chi squared and a Gaussian process, as if there was no problem at all with the Breit-Wigner tails.

    I think their derivation is quite possibly wrong.
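The pathology being pointed out is easy to see numerically. A standard Cauchy draw is the tangent of a uniform angle, and the sample mean of n Cauchy draws is itself standard Cauchy, so averaging never stabilizes no matter how large n gets. A quick sketch ("Breit-Wigner" here means the standard Cauchy, ignoring location and width parameters):

```python
import math
import random

def cauchy():
    """One standard Cauchy (Breit-Wigner) draw: tan of a uniform angle."""
    return math.tan(math.pi * (random.random() - 0.5))

random.seed(7)
# Unlike a Gaussian, averaging more draws does NOT settle toward a limit:
# the mean of n Cauchy draws has the same distribution as a single draw,
# so the usual law-of-large-numbers intuition fails here.
for n in (100, 10_000, 1_000_000):
    print(n, sum(cauchy() for _ in range(n)) / n)
```

The median, by contrast, is perfectly well behaved for the Cauchy, which is why heavy-tailed analyses lean on order statistics or truncation rather than raw means.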

    • "Now, everyone knows that..."

      You keep using that word. I do not think it means what you think it means.

    • martin-boundary wrote:

      If you download the linked paper, on the second page they talk about the Breit-Wigner (Cauchy) density psi, and later they claim that their score process has zero expectation. Now, everyone knows that the Breit-Wigner does not *have* an expectation, and it's often used as an example where the asymptotic normal (Gaussian) distribution approximation doesn't hold. But still, they derive all sorts of distribution formulas involving a chi squared and a Gaussian process, as if there was no

  • Oh come on!  It's not that hard!

    public static Object find(Object needle, Object[] haystack) {
      for (int i = 0; i < haystack.length; i++)
        if (haystack[i].equals(needle))
           return needle;
      return null;
    }
  • There are disputed reports that this sort of data mining was used to identify the terrorists who attacked the USS Cole and flew airplanes into the World Trade Center (the official 9/11 commission's findings notwithstanding). The project is well documented on the right side of the web and was called "Able Danger." According to rumor, the project was shut down after identifying Mohammed Atta, but it also pointed to Condoleezza Rice and Hillary Clinton as potential foreign spies.

    This raises the issue of false al
