
IBM Claims Breakthrough Energy-Efficient Algorithm

jitendraharlalka sends news of a claimed algorithmic breakthrough by IBM, though from the scant technical detail provided it's hard to tell exactly how important the development might be. IBM apparently presented its results yesterday at the Society for Industrial and Applied Mathematics conference in Seattle. The breathless press release begins: "IBM Research today unveiled a breakthrough method based on a mathematical algorithm that reduces the computational complexity, costs, and energy usage for analyzing the quality of massive amounts of data by two orders of magnitude. This new method will greatly help enterprises extract and use the data more quickly and efficiently to develop more accurate and predictive models. In a record-breaking experiment, IBM researchers used the fourth most powerful supercomputer in the world... to validate nine terabytes of data... in less than 20 minutes, without compromising accuracy. Ordinarily, using the same system, this would take more than a day. Additionally, the process used just one percent of the energy that would typically be required."
  • Wat (Score:4, Funny)

    by Anonymous Coward on Friday February 26, 2010 @09:58AM (#31284354)

    reduces the computational complexity, costs, and energy usage for analyzing the quality of massive amounts of data by two orders of magnitude.

    I guess they stopped using Windows Vista?

    • by bsDaemon ( 87307 )
      They had to. It won't run on some lame-ass 4th most powerful supercomputer in the world. It requires at least the third most powerful.
      • Re: (Score:3, Funny)

        by biryokumaru ( 822262 ) *
        I heard a rumor that someone got it running on a VM under Linux on a Beowulf cluster of the 6th, 7th and 12th fastest, but the UI was really laggy.
    • I guess they stopped using Windows Vista?

      2006 called and wants its jokes back.

      • I guess they stopped using Windows Vista?

        2006 called and wants its jokes back.

        *ahem* I guess they stopped using Longhorn?

  • by pydev ( 1683904 ) on Friday February 26, 2010 @09:59AM (#31284372)

    Sounds like someone found a faster algorithm (maybe just constants), and since energy efficiency is the hot new thing, "faster" is now translated into "saves energy".

    • by Dunbal ( 464142 ) * on Friday February 26, 2010 @10:08AM (#31284494)

      I find it interesting on a philosophical level to think about what computing is doing to us. CPUs require energy to perform calculations. Then there's the system overhead: a fixed energy cost that includes assembly and setup, plus running and maintenance/replacement costs. Now, obviously, humans have been almost taken out of the equation. Before, you had thousands of workers who all needed to be fed, who all required furniture and space and light and reasonably cool/warm air, who all needed transport, and who were all victims of entropy and therefore needed accident and health insurance, took sick days, etc. We've come a long way.

      Now you just need the brains. Brains to design the system, brains to drive the investigation, and brains to try to improve the algorithms the system uses. To save even more energy. Of course, eventually physical limits will be reached. There's no escaping the fundamental laws of our universe. But the energy "savings" over doing it the "old way" are translated into the ability to essentially brute-force the universe with raw computing power. Er, but what are we going to do with all the people who just don't "have" the brains? Do they get a free ride?

      Sorry, I'm waxing philosophical today.

      • by WrongSizeGlass ( 838941 ) on Friday February 26, 2010 @10:17AM (#31284578)

        Now you just need the brains. Brains to design the system, brains to drive the investigation, and brains to try to improve the algorithms the system uses. ... Er, but what are we going to do with all the people who just don't "have" the brains?

        Mmmm, brains ...

      • by biryokumaru ( 822262 ) * <biryokumaru@gmail.com> on Friday February 26, 2010 @10:33AM (#31284776)

        Er, but what are we going to do with all the people who just don't "have" the brains? They get a free ride?

        That [welfareinfo.org] seems [stopthehou...ailout.com] to [propublica.org] be [bnet.com] the [securitygu...sguide.com] way [adelphi.edu] it's [vatican.va] been [funlol.com] working [bloomberg.com] so [thecarpetb...report.com] far [wikipedia.org].

      • by zippthorne ( 748122 ) on Friday February 26, 2010 @11:03AM (#31285090) Journal

        In a post scarcity economy? Yeah, everyone gets a free ride. Everything changes if you can get to that.

        • A "post scarcity" economy is a physical impossibility. The universe doesn't have unlimited resources.

          HOWEVER, it's at least plausible that an economy could exist whereby the essentials to support billions of human lives in decent conditions could be generated with almost no input of human labor. All living humans could get all of their needs, and most of their wants, taken care of with little effort on their part. With virtual reality, those humans who wanted things the economy couldn't provide (their own

    • Implemented for Linux, but analogously applicable to other systems. Running this once should reduce your PC's energy consumption to near zero:


      #!/bin/bash
      #
      # save-energy.sh
      #
      # Save enormous amounts of energy, irrespective of the cost in lost
      # computation. Must be run as root.
      /sbin/shutdown -h now

      Of course, efficiency will be lost if you do anything else with your PC (like turn it back on), but hey, no algorithm is perfect for all use cases.

    • by tomhath ( 637240 )
      Exactly. I once fixed a report that took 8 hours to run and cut it down to 90 seconds. But no one in their right mind would suggest that saved 99.7% of any energy, because the server was still running when the report was done. Now the other report, the one that crashed the server after I fixed it, is a different story...
      • by bws111 ( 1216812 )
        But certainly you did save 99.7% of the energy required to run that report (90 seconds out of 28,800 is about 0.3%, so 99.7% saved). Whether or not your workload management allows you to realize those savings is a separate issue.
  • by QX-Mat ( 460729 ) on Friday February 26, 2010 @10:00AM (#31284378)

    Can it organise my porn?

    • Can it organise my porn?

      Nope. In the early 20th century, Turing proved that no logically consistent porn library can contain both "donkey tennis" and "David Hasselhoff". Sorry.

  • I'm all for energy-efficient algorithms and datacenters, but this PR is nothing but green-washing. IBM's algorithm is just faster, so it uses less energy. Duh.

    Automatically spreading loads across datacenters in multiple locations to take advantage of local environmental conditions so you don't have to use chillers, now that's something.

    • Re:Green-washing (Score:5, Insightful)

      by gandhi_2 ( 1108023 ) on Friday February 26, 2010 @10:14AM (#31284542) Homepage

      With faster algorithms, the machine can just get more jobs done in the same amount of time. But the jobs will just keep coming, so the energy use never changes.

      Or are the new algorithms SO fast that all processing needs of humanity will be done in a week, thereby allowing us to turn off all supercomputers? Now that would save energy.

      • The fans on servers have variable speed. Case closed.

      • With faster algorithms, the machine can just get more jobs done in the same amount of time. But the jobs will just keep coming, so the energy use never changes.

        This is true, but the energy cost per job is still 1/100th of what it was, and if you're paying to run the job, that could be a significant savings.

        • by c_sd_m ( 995261 )
          Maybe the right approach is to ask whether the job should be done?
            • Sounds like you've got a book brewing. I am sure every Starbucks will be clamoring to sell it.
            • by c_sd_m ( 995261 )
              To be followed by the sequel "Should you be buying that overpriced trendy crap?" But perhaps that would be too close to Starbucks' business model or be too close to actionable to make the consumers comfortably smug. (Yes, I'm sure I'm as bad as the rest of them. Pot, kettle, etc.)
              • No, I am sure it would sell well along with the final book of the trilogy "No I am not a hypocrite, just look at the book I am reading".
      • Re: (Score:2, Funny)

        by arkenian ( 1560563 )

        With faster algorithms, the machine can just get more jobs done in the same amount of time. But the jobs will just keep coming, so the energy use never changes.

        Or are the new algorithms SO fast that all processing needs of humanity will be done in a week, thereby allowing us to turn off all supercomputers? Now that would save energy.

        Hey now! Everyone knows that 5 IBM mainframes cover the entire world market for computers.

    • Yeah, but it's two orders of magnitude. That's significant. I'm all with you on "fuck the new red revolution", but this is real savings to people who actually do stuff. It isn't like they converted the savings to tons of carbon dioxide, dihydrogen monoxide, or polar bears.
  • Awesome! (Score:4, Insightful)

    by daceaser ( 239083 ) on Friday February 26, 2010 @10:03AM (#31284424) Homepage

    I'll buy three!

    What do they do exactly?

  • Clarification? (Score:5, Insightful)

    by Twinbee ( 767046 ) on Friday February 26, 2010 @10:03AM (#31284426)

    Can someone please clarify exactly what they've achieved here? All I hear is that they can somehow sift through large quantities of data much quicker. What kind of data? What are they trying to extract? And for what end?

    • by Xest ( 935314 ) on Friday February 26, 2010 @10:06AM (#31284474)

      "What kind of data? What are they trying to extract? And for what end?"

      The web. Porn. Fun.

      In that order.

    • Can someone please clarify exactly what they've achieved here? All I hear is that they can somehow sift through large quantities of data much quicker. What kind of data? What are they trying to extract? And for what end?

      I don't know about the rest of it, but I can answer the last question: To boost IBM's stock price! And as the holder of a number of IBM stock options, I must say I think that's a wonderful goal. :-)

    • Re:Clarification? (Score:5, Informative)

      by godrik ( 1287354 ) on Friday February 26, 2010 @11:18AM (#31285234)

      The conference proceedings are not online yet, so I am not sure. I could not even find the title of the talk on the conference web page.

      I know people who are at SIAM PP, and they are all asking: "Why are they talking about PP on Slashdot?" There was no major announcement. I'll check the proceedings again next week, but I believe there is no major improvement. IBM is probably just trying to get some more exposure.

      The following IBM talk can be found on yesterday's program:
      http://meetings.siam.org/sess/dsp_programsess.cfm?SESSIONCODE=9507 [siam.org]
      The talk has the same authors and title as this paper published last year:
      http://portal.acm.org/citation.cfm?id=1645413.1645421 [acm.org]

      So they are probably publishing an improvement on their 2009 work.

  • TFA is worthless (Score:5, Insightful)

    by Anonymous Coward on Friday February 26, 2010 @10:07AM (#31284476)

    This would be a real story if it gave implementation details, but it doesn't even tell us what the algorithm does; therefore it's totally worthless. Get this crap off the front page.

    • I have discovered a truly remarkable explanation which this internet is too small to contain.

    • Did you RTFA? They said right there that it verifies data. Picture it: you have 400TB of LHC particle data that you want to reduce, but somebody stuffs some porn, pictures of their trip to Aruba, and a load of warez right smack in the middle of it. What are you going to do?

          Well, IBM can help: they can verify that data and, within mere hours, remove that weird 399TB of noise, leaving you with pure signal.

      • I don't mean to offend, but... are you even a programmer? There is no magic "verify data" algorithm. Supposing your scenario were to occur, you'd have to produce an algorithm that was specifically designed to either a) detect the specific form of noise you want to remove, or b) detect the signal you want to extract.

        • Was my post that confusing? ... Removing 399TB of noise from your 400TB of LHC data leaves the porn and warez... hint hint.

          Maybe I should have put /snark at the end of the post or something. I was agreeing with the OP that the article was entirely short on content.

          • Maybe I should have put /snark at the end of the post or something.

            You could've, but it's early, I haven't had any coffee yet, and I'm a bit of a dumbass at the best of times, so there's no guarantee that would've made any difference at all.

          • Yes it was. I think we'd all assumed that you'd meant that the porn was occupying the middle 399TB of the drive...

        • by Richy_T ( 111409 )

          Pfft, next you'll be telling us that there's no "Override all security" command.

    • Right, we can have a post saying the new Apple tablet device might be called the iSlate because it "just makes sense", but a story that appears to be true isn't interesting just because it doesn't give implementation details?
  • by somersault ( 912633 ) on Friday February 26, 2010 @10:09AM (#31284508) Homepage Journal

    Without more information, this really sounds like they just had a horribly-slow-but-at-least-it-works algorithm in the first place and have now done some work on making it more efficient. They don't even say what type of processing was being done on the data.

  • by qmaqdk ( 522323 ) on Friday February 26, 2010 @10:13AM (#31284538)

    ...for analyzing the quality of massive amounts of data...

    I have an algorithm that does that in O(1):

    return "Not the best quality, but pretty good.";

  • and Business Intelligence software. Things that large corps use to help make decisions (Goldman Sachs?) and manipulate the banks/markets even faster today, so yea! This is a big deal to corps. Not so big a deal to individuals, other than that the damn corps can make idiot decisions even faster now.

  • Is there hope for a merge of all fragmented virtual universes into a single universe? Where we can explore strange new worlds, seek out new life and new civilizations, boldly go where our avatar hasn't gone before?
  • by AlgorithMan ( 937244 ) on Friday February 26, 2010 @10:30AM (#31284738) Homepage
    ZOMFG!!!! PNOIES!!!!
    • Kernelization + Memoization
    • Branching-Vector minimization

    regularly produce this magnitude of algorithm speedup... (a minimal memoization sketch follows below)
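
    For anyone who hasn't met memoization, here is a generic textbook sketch of the kind of speedup it alone can buy. To be clear, this is plain illustrative Java with made-up names, and has nothing to do with whatever IBM actually did:

    import java.util.HashMap;
    import java.util.Map;

    /** Memoization demo: exponential-time recursion becomes linear-time. */
    public class MemoFib {
        private static final Map<Integer, Long> cache = new HashMap<>();

        /** Naive recursion: roughly phi^n calls -- hopeless for large n. */
        static long fibSlow(int n) {
            return n < 2 ? n : fibSlow(n - 1) + fibSlow(n - 2);
        }

        /** Memoized recursion: each subproblem is solved once, O(n) total. */
        static long fibMemo(int n) {
            if (n < 2) return n;
            Long cached = cache.get(n);
            if (cached != null) return cached;
            long result = fibMemo(n - 1) + fibMemo(n - 2);
            cache.put(n, result);
            return result;
        }

        public static void main(String[] args) {
            // fibMemo(90) returns instantly; fibSlow(90) would never finish.
            System.out.println(fibMemo(90));
        }
    }

    Same answer, many orders of magnitude less work: an algorithmic saving rather than a hardware one.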

  • Mixed emotions... (Score:3, Interesting)

    by bwcbwc ( 601780 ) on Friday February 26, 2010 @10:32AM (#31284768)
    As a computer engineer, I'm fascinated by the potential improvements in performance.

    As a wired citizen, I'm terrified of the additional data-mining capabilities this will provide to our corporate overlords.
    • Re: (Score:3, Insightful)

      by magsol ( 1406749 )
      And as an interested academic, I'm disappointed that this (so far) appears to be nothing more than a marketing ploy.
    • by c_sd_m ( 995261 )
      I think the time is ripe for a Society for the Responsible Use of Information and Computational Power. Or we could convert the world to geekdom so we'll all be too busy drooling over this stuff to use it for evil.
  • What was the algorithm? For all I know (having not read TFA), it could be that they replaced bubble sort with quicksort.
    • What was the algorithm? For all I know (having not read TFA), it could be that they replaced bubble sort with quicksort.

      Given that this is from the very well-respected IBM research labs, I really doubt it's anything trivial or obvious.

      Given that the press release came through IBM's PR machine, I'm sure that the announcement overstates the applicability of the result.

    • by Richy_T ( 111409 )

      They pushed the turbo button. Freaking interns.

  • by jbuhler ( 489 ) on Friday February 26, 2010 @10:44AM (#31284876) Homepage

    Here's a link with actual content on what the algorithm does:

    http://www.hpcwire.com/features/IBM-Invents-Short-Cut-to-Assessing-Data-Quality-85427987.html

  • by mattdm ( 1931 ) on Friday February 26, 2010 @11:28AM (#31285372) Homepage

    "Low cost high performance uncertainty quantification", full text available in PDF.

    http://portal.acm.org/citation.cfm?id=1645421&coll=GUIDE&dl=GUIDE&CFID=77531079&CFTOKEN=42017699&ret=1#Fulltext [acm.org]

    And, here's the abstract:

    Uncertainty quantification in risk analysis has become a key application. In this context, computing the diagonal of inverse covariance matrices is of paramount importance. Standard techniques, that employ matrix factorizations, incur a cubic cost which quickly becomes intractable with the current explosion of data sizes. In this work we reduce this complexity to quadratic with the synergy of two algorithms that gracefully complement each other and lead to a radically different approach. First, we turned to stochastic estimation of the diagonal. This allowed us to cast the problem as a linear system with a relatively small number of multiple right hand sides. Second, for this linear system we developed a novel, mixed precision, iterative refinement scheme, which uses iterative solvers instead of matrix factorizations. We demonstrate that the new framework not only achieves the much needed quadratic cost but in addition offers excellent opportunities for scaling at massively parallel environments. We based our implementation on BLAS 3 kernels that ensure very high processor performance. We achieved a peak performance of 730 TFlops on 72 BG/P racks, with a sustained performance 73% of theoretical peak. We stress that the techniques presented in this work are quite general and applicable to several other important applications.
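
    To make that concrete, here is a rough, self-contained Java sketch of the two ingredients the abstract names: stochastic estimation of diag(A^-1) using random +/-1 probe vectors (each probe becomes one right-hand side), with an iterative solver (plain conjugate gradient) standing in for a matrix factorization. This is emphatically not IBM's implementation -- their mixed-precision iterative refinement and BLAS 3 kernels are omitted, and all class/method names are invented for illustration:

    import java.util.Random;

    /** Sketch: estimate diag(A^-1) for a dense SPD matrix A without factorizing it. */
    public class DiagOfInverseSketch {

        /** Dense matrix-vector product y = A * x. */
        static double[] matVec(double[][] a, double[] x) {
            int n = x.length;
            double[] y = new double[n];
            for (int i = 0; i < n; i++) {
                double s = 0.0;
                for (int j = 0; j < n; j++) s += a[i][j] * x[j];
                y[i] = s;
            }
            return y;
        }

        static double dot(double[] u, double[] v) {
            double s = 0.0;
            for (int i = 0; i < u.length; i++) s += u[i] * v[i];
            return s;
        }

        /** Conjugate gradient: iteratively solves A x = b for SPD A, no factorization. */
        static double[] cgSolve(double[][] a, double[] b, int maxIter, double tol) {
            int n = b.length;
            double[] x = new double[n];      // initial guess x = 0
            double[] r = b.clone();          // residual r = b - A x
            double[] p = r.clone();          // search direction
            double rr = dot(r, r);
            for (int it = 0; it < maxIter && Math.sqrt(rr) > tol; it++) {
                double[] ap = matVec(a, p);
                double alpha = rr / dot(p, ap);
                for (int i = 0; i < n; i++) { x[i] += alpha * p[i]; r[i] -= alpha * ap[i]; }
                double rrNew = dot(r, r);
                double beta = rrNew / rr;
                for (int i = 0; i < n; i++) p[i] = r[i] + beta * p[i];
                rr = rrNew;
            }
            return x;
        }

        /** Hutchinson-style estimator: average v .* (A^-1 v) over random +/-1 probes v. */
        static double[] estimateDiagInverse(double[][] a, int numProbes, long seed) {
            int n = a.length;
            Random rng = new Random(seed);
            double[] est = new double[n];
            for (int k = 0; k < numProbes; k++) {
                double[] v = new double[n];
                for (int i = 0; i < n; i++) v[i] = rng.nextBoolean() ? 1.0 : -1.0;
                double[] x = cgSolve(a, v, 1000, 1e-10);   // x = A^-1 v
                for (int i = 0; i < n; i++) est[i] += v[i] * x[i];
            }
            for (int i = 0; i < n; i++) est[i] /= numProbes;
            return est;
        }

        public static void main(String[] args) {
            // Tiny SPD test case: inverse of [[4,1],[1,3]] is [[3,-1],[-1,4]]/11,
            // so the exact diagonal is [0.2727..., 0.3636...].
            double[][] a = { { 4.0, 1.0 }, { 1.0, 3.0 } };
            double[] d = estimateDiagInverse(a, 200, 42L);
            System.out.printf("estimated diag(A^-1) = [%.4f, %.4f]%n", d[0], d[1]);
        }
    }

    Each probe costs one iterative solve, so the total work scales like (number of probes) x (cost of one solve) rather than the cubic cost of a factorization, and the estimator's noise shrinks like 1/sqrt(probes).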

  • Obviously they are not releasing details until the patent application goes through and the patent-troll company is set up. They certainly would not release the information so other people could just steal their idea. Maybe they will package it in a sealed application and rent it out. Hmmm, anyone remember the chess-playing Mechanical Turk (1770)?

  • The description of this "new algorithm" is pretty sparse.
    Any word on whether it allows faster solutions to encryption problems, so that we all now need longer passwords?

  • I shouldn't be telling anyone this, but I found the algorithm. Here it is in Java; please convert it to your preferred language.

    public int computeData(Data data) { return 42; }

    Oh crap, I forgot to post this anonymously. Why are there mice all over the pl@#M *^&I RCS$WE^%

    [CARRIER LOST]
