Linux Clusters Finally Break the TeraFLOP barrier

cworley submitted - several times - this well-linked submission about a slightly boring topic - fast computers. "Top500.org has just released its latest list of the world's fastest supercomputers (updated twice yearly). For the first time, Linux Beowulf clusters have joined the teraFLOP club, with six new clusters breaking the teraFLOP barrier. Two Linux clusters now rank in the Top 10: Lawrence Livermore's "MCR" (built by Linux NetworX) ranks #5, achieving 5.694 teraFLOP/s, and Forecast Systems Laboratory's "Jet" (built by HPTi) ranks #8, reaching 3.337 teraFLOP/s. Other Linux clusters surpassing the teraFLOP/s barrier include: LSU's "SuperMike" at #17 (from Atipa), the University at Buffalo at #22 and Sandia National Lab at #32 (both from Dell), an Itanium cluster for British Petroleum Houston at #42 (from HP), and Argonne National Labs at #46 (from Linux NetworX), which reached just over the one teraFLOP/s mark with 361 processors. In the previous Top500 list, compiled last June, the fastest Intel-based Netfinity 1024-processor clusters from IBM were sub-teraFLOP/s, and the University of Heidelberg's AMD-based "HELICS" cluster (built by Megware) held the top tux rank at #35 with 825 GFLOP/s."
This discussion has been archived. No new comments can be posted.

  • by rob-fu ( 564277 ) on Sunday November 17, 2002 @04:40PM (#4692109)
    It's going to take me 4 hours to read all of this.
    • by Anonymous Coward
      What he said! Holy crap! This is the main thing I don't like about slashdot, I can hardly ever tell what the main point of the post is if I have to figure out what link to click first.
      • I would tell you to try just reading the title of the post, but half the time that's even more misleading than the post itself.
      • by Jugalator ( 259273 ) on Sunday November 17, 2002 @05:48PM (#4692493) Journal
        What he said! Holy crap! This is the main thing I don't like about slashdot, I can hardly ever tell what the main point of the post is if I have to figure out what link to click first.

        Just act like the average Slashdot member. Never click any links to read the articles and just post your thoughts regarding the subject. :-)

        Everything gets so much easier that way!
      • What he said! Holy crap! This is the main thing I don't like about slashdot, I can hardly ever tell what the main point of the post is if I have to figure out what link to click first.

        I hope you don't ever read the quickies [slashdot.org]...

    • by handsomepete ( 561396 ) on Sunday November 17, 2002 @04:47PM (#4692150) Journal
      Imagine a Beowulf cluster of all those links!

      It would probably end up linking to the greatest pr0n site of all time...
    • by jdkincad ( 576359 ) <insane.cellist@gmail.com> on Sunday November 17, 2002 @04:56PM (#4692197)
      Look on the bright side, it might just spread out the /. effect enough to keep all the linked sites on line.
    • sheesh, how'd you get past the slashdotting? It'll take me four hours just to download the first page - to say nothing of how long it'll take to read the rest of 'em.

    • Of course it is. That's the whole point. Or points. Bear with me.
      Scenario 1: He's got two accounts. One devoted to karma whoring, one devoted to FP's. You've guessed it. All conscientious /.'ers will visit the links, read the articles and post an informed view (pun fully intended).
      Scenario 2: Revenge is a dish best served hot. What better way to get back at his enemies, than to slashdot their machines to melting point?

  • Question? (Score:4, Funny)

    by beldraen ( 94534 ) <chad...montplaisir@@@gmail...com> on Sunday November 17, 2002 @04:42PM (#4692117)
    How long until computing is powerful enough to render the probabilistic thought patterns of a manager? That's what I want to know...
    • The computer would probably melt before it could figure out what goes on in pointy-haired heads.
    • by Guppy06 ( 410832 ) on Sunday November 17, 2002 @04:55PM (#4692193)
      "How long until computing powerful enough to render the probability thought patterns of a manager? That's what I want to know.."

      Good luck. Last I checked, that one falls under Heisenberg's Uncertainty Theorem.
      • Re:Question? (Score:1, Insightful)

        by Anonymous Coward
        First of all, it's a principle, not a theorem. Second, the joke doesn't make any sense! It's all fine and good if you're making such a joke to a group of grade schoolers, but if your audience knows better, you come off as a fucking boob.
      • What?
        We don't know where they are when the network's working at its "usual high speed"?
        Or is it that when the network is not working at its "usual high speed", you find them waiting for you in your office?
    • by Masa ( 74401 ) on Sunday November 17, 2002 @05:28PM (#4692390) Journal
      How long until computing is powerful enough to render the probabilistic thought patterns of a manager?

      That shouldn't be too hard... I bet that my Palm Pilot has enough power to predict exactly what my boss is going to say in the next meeting tomorrow.

      If it's about schedules, he'll say:

      Work...

      1. harder
      2. smarter
      3. cheaper
      4. faster
      In that order.

      If it's about project goals, he'll ask me to:

      Make...

      1. miracles

      If it's about specifications, he'll say: "I have no idea. You find out yourself." And for anything else it would be just blank. All blank.

      On the other hand... if a manager actually has any real thoughts... Well, that would be as easy as predicting patterns in pure chaos.

    • It can already be done by taking a Tamagotchi [mimitchi.com] and relabeling the buttons: Feed = Overtime, Attention = Ego Stroke. Cleaning up the poop remains the same.

      If it dies, you're fired.

  • FLOPs (Score:1, Offtopic)

    Could anyone point me to a Windows-based utility that allows me to see how many FLOPs my home computer is doing?
    • Re:FLOPs (Score:1, Informative)

      by Anonymous Coward
      it's FLOPS not FLOPs!
      FLOPS=Floating Point Operations per Second
    • First you find out how many FLOPS your computer is capable of, then multiply by the % of CPU load (over 100) and the number of seconds (a rough timing sketch appears at the end of this thread).

      Why don't they write it: FLOP/s?
      • Because then it would make no sense. It would be FLO/s if you wanted to get technical, and if you really wanted to get technical, it would be F/s where F = # of floating point operations.
        I like FLOPS better.
      • Re:FLOPs (Score:3, Insightful)

        by alannon ( 54117 )
        Why don't they write it: FLOP/s?

        Because FLOPS means FLoating point Operations Per Second

        '/' means 'per'.

        FLOP/s would mean FLoating point Operations Per Per Second

        FLO/s doesn't seem like a very good idea, except for cleaning your teeth.
    • Re:FLOPs (Score:5, Interesting)

      by jelle ( 14827 ) on Sunday November 17, 2002 @05:17PM (#4692331) Homepage
      Since nobody is answering your question: The Top500 supercomputers are ranked [netlib.org] by the results [top500.org] of the LinPack [netlib.org] benchmark.
    • I'm pretty sure that SiSoft Sandra [sisoftware.co.uk] can do it. Get the Standard version or pay for the Pro. Last time I checked the "Advanced" version was adware.

      Russ
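
    A rough do-it-yourself alternative to the benchmarks mentioned in this thread, for anyone who just wants a ballpark number: time a long chain of multiply-adds and divide the operation count by the elapsed CPU time. This is only a sketch - it measures one scalar dependency chain, nothing like Linpack - and the file name, iteration count and loop body are illustrative assumptions, not from any of the posters.

      /* flops_sketch.c - illustrative ballpark estimate, not Linpack.
         Build: gcc -O2 flops_sketch.c -o flops_sketch */
      #include <stdio.h>
      #include <time.h>

      int main(void) {
          const long n = 100000000L;        /* 1e8 iterations, 2 FLOPs each */
          double x = 0.0, y = 1.000000001;
          clock_t t0 = clock();
          for (long i = 0; i < n; i++)
              x = x * y + 1.0;              /* one multiply + one add */
          double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
          printf("~%.0f MFLOPS (x = %g)\n", 2.0 * n / secs / 1e6, x);
          return 0;
      }

      Printing x at the end keeps the compiler from optimizing the whole loop away.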
  • Ah ha! (Score:2, Offtopic)

    by coryboehne ( 244614 )
    From the first line: cworley submitted - several times

    So, is THAT how you get something accepted? Really, I don't know if posting that story with that attached to the front of it was such a great idea...

    Now everyone who submits a story that they think is good, should it get rejected, will simply submit like twenty copies of it...

    What a pain for the poor editors... Really, I question the wisdom of telling us this works...
    • Re:Ah ha! (Score:1, Offtopic)

      by isorox ( 205688 )
      My 1 (one) story that got accepted to Slashdot was accepted on my second posting - taco might ignore it, then timothy comes along and posts it. Or something. Most of the time it doesn't work though.

    • Well, the fact is I've been trying to submit the story about MCR (which was hoped to make #4, but Los Alamos submitted two halves of the same computer as two identical computers, bumping MCR to #5) for several months. Obviously some of us do not find cluster news boring.
  • by Anonymous Coward on Sunday November 17, 2002 @04:45PM (#4692135)
    a single node from one of these clusters?

    (hey what else can I say, it's already a cluster)
  • by hopbine ( 618442 ) on Sunday November 17, 2002 @04:48PM (#4692156)
    I have often wondered how long it takes to boot one of these things. In the HP-UX world I know how long it takes for a K class (sometimes more than 20 minutes). Superdomes are sometimes faster, but not by much.
    • by jelle ( 14827 ) on Sunday November 17, 2002 @05:10PM (#4692278) Homepage
      It's a cluster, so I can imagine the nodes can boot individually, in parallel. Plus I can imagine the system never goes down as a whole, just some nodes may go down when parts break or other maintenance. 1 bootup per lifetime...

      Perhaps the boot speed is limited by the ramp-up speed of the local power plant.
    • I had a K570 at a previous job that took literally 45 minutes to boot from power-on to login prompt.

      Turning off the extended mem-check reduced this to 25 mins.

      I once had a SCSI cable go bad, and I had to boot that darn thing up about a dozen times, swapping out cables, to find the bad cable. What a bad night that was! Swap cable, take 25-min break, watch SCSI errors from kernel. lather, rinse, repeat. 3 hours to find one bent pin on a scsi cable. yuck.
      • Next time this happens, turn the key fully clockwise and on the console do "CTL B". This means it will stop at the BCH prompt - this ONLY works on the K boxes unfortunately; on all others, watch the screen for the "hit any key or wait 10 seconds" message. At the BCH prompt, type SER. This will do a search of all potentially bootable devices and should include all disc drives. No drive = bad drive, SCSI cable or SCSI controller. By the way, on any HP computer, do not disable the memory check unless you're troubleshooting; the extra few minutes might save you problems later on.
    • LinuxBIOS (Score:5, Interesting)

      by bstadil ( 7110 ) on Sunday November 17, 2002 @05:20PM (#4692346) Homepage
      This is not such a dumb question. The LinuxBIOS project [lanl.gov] was started by and for the Los Alamos National Lab [lanl.gov]. One of the nifty things this allows them to do is change kernels without taking the machines down. You can then switch to a kernel compiled for different purposes.
  • Wow! (Score:4, Interesting)

    by miffo.swe ( 547642 ) <daniel@hedblom.gmail@com> on Sunday November 17, 2002 @04:50PM (#4692165) Homepage Journal
    1 NEC Earth-Simulator 35860.00
    2 Hewlett-Packard 7727.00 Los Alamos
    (Rmax in GFLOP/s)

    The gap between first and second place is pretty impressive. What on earth did NEC really do over there?

    • by Anonymous Coward
      Read it again. What does it say? EARTH-SIMULATOR

      It's gonna take some CPU power to simulate earth, don't you think??
    • Re:Wow! (Score:2, Funny)

      by xenode ( 570497 )
      If we told you, we'd have to kill you.
    • Re:Wow! (Score:3, Informative)

      by girouette ( 309616 )
      The Earth Simulator is a 640-node computer, housed in a building the size of a stadium. Each node is a 64 GFLOP/s NEC SX-6 supercomputer (5,120 CPUs total).

      It has its own dedicated, custom-built power station. 'nuff said.

      Google is your friend, but for starters:

      http://www.sw.nec.co.jp/hpc/sx-e/sx6/index.html

      http://www.nec.co.jp/press/en/0203/0801.html
    • Re:Wow! (Score:3, Informative)

      by CBNobi ( 141146 )
      "Simulating the Planet Earth" [nec-global.com], an article about the Earth-Simulator, has some good information about the system.

      One big item to note is that many of the supercomputers built in the US are for weapons research; the NEC supercomputer, by contrast, obviously deals with changes of the earth.

      More links:
      Press release for the Earth Simulator [nec.co.jp], dated March 8, 2002
      General system information on the cluster [nec.co.jp]
  • How many FLOPS (Score:2, Interesting)

    Is there a way to tell how many FLOPS my Linux machine gets? I always wondered.
    • pfft, FLOPS are for weenies - real men use bogomips [clifton.nl]. ;)

      $ grep bogomips /proc/cpuinfo
      bogomips : 2962.22
      • You probably mean that as a joke, but just in case....

        Bogomips are not a measure of performance by any stretch of the imagination. bogus + mips = bogomips.

        Of course I'm stating the obvious.

          • Bogomips are not a measure of performance by any stretch of the imagination. bogus + mips = bogomips.

          What's the matter jericho? 2962.22 too racy for you? ;)

          (yes that is a joke :)
        • Re:How many FLOPS (Score:2, Insightful)

          by lvd ( 72565 )
          Bogomips are not a measure of performance by any stretch of the imagination. bogus + mips = bogomips.

          Actually, neither are FLOPS. It wildly depends on what you do in your program, and no benchmark is representative.

          As an instructor for the course 'Optimizing for the CRAY J932' told my class: the 'Theoretical Peak Performance' is the performance the manufacturer guarantees you won't exceed.

    • It's interesting. Someone above posted "How many FLOPS does my windows box get?" They get modded down. Parent comment gets modded up for asking how many FLOPS his Linux box gets.

      I'm not trying to start a war or anything. It's just an amusing observation.

  • Re: (Score:2, Interesting)

    Comment removed based on user account deletion
    • now why not try using Macs for your supercomputers?

      I know that they aren't as scalable

      I think you answered your own question there.
      • Comment removed based on user account deletion
      • That's not an answer at all, it's a tautology. What does 'scalable' mean in this context? That you can climb to the top of it? To say that you can't build a cluster of Macs because they're 'not scalable' is the same as saying 'because you can't build a cluster of them'. The answer is probably that you get more performance for less cost from Intel or AMD setups, rather than technical issues.
    • Last time I checked, a dual G4 1.25 GHz system was below a P4 3.06 + Hyper-Threading in graphics benchmarks (Adobe After Effects + Photoshop). (The dual G4 system also cost $1k more.)

      It may still be ahead in GFLOPS... I'm not into CPUs enough to answer that, but I do doubt it. In any case, Macs are for graphics people, so that should be a real blow.

      But I'll bet dual 2.4 GHz Xeons will kick the 1.25 GHz system's ass in terms of GFLOPS. Plus they're only like $650 each, so the mobo + processors won't cost more than $1400.
    • by ikekrull ( 59661 ) on Sunday November 17, 2002 @05:37PM (#4692438) Homepage
      Ah, that would be because Apple's 'supercomputer on the desktop' marketing drivel was just that.

      Hell, the Sony PlayStation 2 was subject to export restrictions because it was 'too powerful', which was driven by/followed with the requisite marketing drivel, but you don't see any PS2 clusters in the 'World's fastest supercomputer' list either.

      It has been a long time since Apple PPC was competitive in terms of price/performance with x86s. Of course that's not the only reason to buy a computer; I don't want to get the Apple zealots' panties in a bunch.

      It's just that Intel/AMD didn't make a song and dance about breaking the GFLOP barrier, since that happened way back with the P3/Athlon 600-800, hardly cutting edge chips.

      Hell, a 600 MHz Alpha had GFLOP performance years before either the G4 or the x86s.

      The PPC has a nice vector processing unit (Altivec), which could make it a good choice in some situations, but given the premium you pay for Beowulf nodes (Xserves?) from Apple, you will, in general, get a lot more bang for the buck from x86.

      • Actually, Macs are used in supercomputer clusters. JPL has an interesting benchmark [daugerresearch.com] of 33 Xserves. They get 1/5th of a teraFLOP of performance. Not bad, considering how cheap they are.
        • These are single-precision FLOPS on some Apple fractal program optimized for AltiVec, and undoubtedly embarrassingly parallel.

          The Top500 list is based on double-precision Linpack scores. This cluster would not score anywhere near that level on the Top500 test because AltiVec doesn't do double precision, so you have to use the regular scalar FPU. Furthermore, you need a fairly fast interconnect to get a good fraction of theoretical peak on Linpack, so I would estimate that this cluster wouldn't get more than 40 GFLOPS or so on the Top500 test.

          P4s can do double precision in vector form, and as a result they get much better Linpack scores in a similarly equipped cluster, and for far less money. This is why you don't see big clusters being built out of Macs.
      • "It's just that Intel/AMD didn't make a song and dance about breaking the GFLOP barrier..."

        I don't know 'bout AMD, but Intel has these funny BunnyPeople to promote anything from breaking speed limits to new processors, as shown here [intel.com]. So contrary to what you believe, yes, Intel does make a song and dance (plus commercial) about [insert_marketing_gibberish_here]!
    • They don't have the kind of memory bandwidth these systems need. With AltiVec, a G4 can indeed get a huge gigaflop number, but SIMD floating point takes up a lot of data (with 128-bit SIMD, 20 bytes per 4 operations) and the G4's memory bus runs at a paltry 1.3 GB/sec (compared to 4.2 GB/sec for a P4). Feeding the G4's AltiVec units at full speed requires 20 GB/sec of bandwidth, so once your dataset falls out of the 256K of L2 cache (which these scientific computing datasets surely do) the G4 chokes. Besides, AltiVec doesn't do double-precision floating point, which is necessary for this sort of thing.
    • They did. And it seems to be missing from the Top 500 list. According to this [slashdot.org], 33 Xserves reached 217 GFLOPS. Now, according to Apple, they should be able to reach a much higher speed than this (roughly twice the performance they actually got), but part of the reason might be that they used 100BaseT instead of Gigabit, and theoretical != real world anyway. This earlier cluster [daugerresearch.com] of 76 G4's achieved even higher results. JPL found Macs to be "capable of excellent scalability in performance."
  • by caluml ( 551744 ) <slashdot@spamgoe ... minus herbivore> on Sunday November 17, 2002 @04:53PM (#4692186) Homepage
    I built a small Beowulf cluster. It was actually very easy, apart from writing the MPI-enabled code.

    Step 1: Install the lam packages on all the nodes
    Step 2: Create an account on all nodes, and use a passphrase-less ssh key to avoid prompting.
    Step 3: Compile your code with mpicc (rather than gcc)
    Step 4: Copy to all nodes.
    Step 5: mpirun C ./your-prog

    Admittedly it was only a 4 node cluster, but hey ;)

    Please, someone break it to me gently if this wasn't actually a Beowulf cluster ;))
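
    For the curious, a toy MPI program in the spirit of steps 3-5 might look like the sketch below. This is an illustration, not the poster's actual code, and the file name is an assumption; each rank just reports where it is running.

      /* mpi_hello.c - minimal MPI example (illustrative).
         Build: mpicc mpi_hello.c -o mpi_hello
         Run:   mpirun C ./mpi_hello    (LAM/MPI syntax, as in step 5) */
      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char *argv[]) {
          int rank, size, len;
          char name[MPI_MAX_PROCESSOR_NAME];

          MPI_Init(&argc, &argv);                /* join the MPI job */
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
          MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
          MPI_Get_processor_name(name, &len);    /* hostname of this node */

          printf("Hello from rank %d of %d on %s\n", rank, size, name);
          MPI_Finalize();
          return 0;
      }

      Real work would then divide a problem across the ranks with MPI_Send/MPI_Recv or the collective operations.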
    • I have some students from Boeing who are just in love with Linux. The engineering department there just set up a Linux cluster with 120 nodes for around 100,000 US dollars. They were running tests on it and found it was much faster than the Cray they had previously been using for the same things.

      The main comment that struck me was how easy it was to set up. The engineering IT department is mostly Unix (they're all in retraining because, believe it or not, they're dumping Sun workstations for Intel-based systems running XP: Intel chips are so much faster, machines running XP are much cheaper than Sun SPARCs, and the software they want runs on XP), so it was of course easy for them to set up.

      Next they'll be setting up another Linux cluster with maxed-out dual or quad processor machines with more RAM. They're really excited.
    • by dsfd ( 622555 ) on Sunday November 17, 2002 @05:55PM (#4692535)
      We built and maintain a Beowulf with about 70 nodes. We use Debian GNU/Linux.

      I agree with you; in principle, it *is* easy to do, but the problems increase with the number of nodes. IMHO, the main problems are:

      - Administration effort per node has to be almost zero. Beyond a certain number of nodes you definitely need things like fully automatic installation, automatic power control, automatic diagnostic tools, a batch system, etc. All these tools already exist, but you need some know-how to put them all together.

      - You need a large enough room with a cooling system that can remove at least 100 W per node (7 kW in our case). Room temperature has to be about 20°C.

      - Low-cost PC hardware is not always reliable enough for this application. If you have codes that run 24x7 for months on a large number of processors, the probability of having a hardware problem is very high.

      We have found that our hardware suppliers do not carry out extensive tests on the systems they sell. This is because "normal" users run low-quality OSs and assume that it is normal for computers to just hang from time to time. Therefore, the suppliers do not always detect failures in critical components such as RAM.

      - Of course, your application has to be suitable for parallel computing, especially if your cluster uses a low-cost 100 Mb/s network. In this case, compared to a "conventional" parallel computer (e.g. a Cray T3E), the processors are roughly equivalent but the network is about 10 times slower and is easily the bottleneck of the system.

      Having said that, despite all the problems, I love Beowulfs. They have totally changed high performance computing, and they are definitely here to stay.

      All this has been possible thanks to free software, so thanks Mr. Stallman/Torvalds and many others...
  • image from first link [llnl.gov]

    I love those giant black racks, even if it's not the fastest cluster in the world the Space Odyssey nostalgia is still there.

    "My God, it's full of stars!"

    -Matt
  • by Anonymous Coward
    I think it would be interesting to look not just at the processing capacity, but also at the costs associated with building and maintaining each machine.
  • Impressive numbers (Score:3, Informative)

    by DrunkenPenguin ( 553473 ) on Sunday November 17, 2002 @05:07PM (#4692266) Homepage
    Impressive numbers. I suggest you go take a look at the hardware that runs the Earth Simulator [jamstec.go.jp] (#1 on the Top 500 list). That flash movie is impressive... But don't forget that you've got a helluva lot faster CPU inside your head - your brain beats all that expensive hardware all the way.
  • While most people seem to be complaining about the number of links in the story, if history is any indicator, 90% of people won't click on a single one of those links, let alone all of them.

  • I was going to suggest creating a cluster from the top 10 there. Would that be possible? A Beowulf cluster of Beowulf clusters?
  • "cworley submitted - several times - this well-linked submission"

    He probably went all crazy because Linux stories tend to get ignored here at Slashdot.

  • by Alu3205 ( 615870 ) on Sunday November 17, 2002 @05:33PM (#4692419)
    I hope none of those supercomputers was the webserver, or else it's just the Top 499 now. :p
  • by Bender_ ( 179208 ) on Sunday November 17, 2002 @05:47PM (#4692484) Journal
    Measuring MFLOPS does not mean a lot - even if it is from a "real life" benchmark. The TOP500 might look much worse for Linux clusters if more communication-latency-dependent benchmarks were used. Linpack, which works mainly on very large matrices, shifts the benchmark results a lot towards Linux-cluster solutions.

    A real supercomputer supports much faster I/O, higher interconnect bandwidth and lower interconnect latency.

    And btw, the new Cray X1 [cray.com] delivers the performance of all but the largest Linux clusters in a single cabinet (820 GFLOPS peak, that is). In terms of computing efficiency it makes even the Earth Simulator look pale. I am really looking forward to the next iteration of the TOP500, when the first X1 machines are included.

    • by Flat5 ( 207129 ) on Sunday November 17, 2002 @06:57PM (#4692853)
      It depends on what you're trying to do. An awful lot of supercomputer sites *are* solving, more or less, very large matrices. In that case it means everything.

      Some applications scale on these kinds of clusters and some don't. But to say that "MFlops does not mean a lot" is just as silly a blanket statement as pretending that the Linpack benchmark is "the speed" of the computer.

      That Cray does look pretty awesome, btw.

      Flat5
  • by SEWilco ( 27983 ) on Sunday November 17, 2002 @05:52PM (#4692513) Journal
    "Linux Clusters Finally Break the TeraFLOP barrier"

    As when other barriers are broken, a bit of a shock wave was created.
    Windows machines for miles around were rattled.

  • did it take?

    And the big question was: did they top it all off with a 15" monitor they had lying around?
  • CNN and MS Bias (Score:2, Interesting)

    by Anonymous Coward
    I noticed that the CNN article, which I read before my daily scan of /., did not bother to mention that the clusters were using Linux. If there is something that non-MS software can do that MS garbage can't do well, you can bet that mainstream news will not report it, even if it is relevant.

    As a side note, I find it rather funny that, aside from technical issues, one cannot legally cluster Macrohard systems because the EULAs get in the way!
  • How many TeraFLOPs are actually needed to negate the Slashdot effect?
  • by e_n_d_o ( 150968 ) on Sunday November 17, 2002 @06:20PM (#4692675)
    Are there any Microsoft Windows-based systems that qualify as supercomputers?

    (This is a serious question, I have no idea if they do or do not.)
  • Oh [slashdot.org] my [slashdot.org] God! [slashdot.org] how [slashdot.org] many [slashdot.org] freaking [slashdot.org] links [slashdot.org] does [slashdot.org] one [slashdot.org] story [slashdot.org] need? [slashdot.org]
  • This collection of links failed to mention that the #1 computer is an "Earth Simulator." How kewl is that! Reminds me of the book _Earth_ by David Brin.

    M@
  • "Slightly Boring" (Score:3, Insightful)

    by msheppard ( 150231 ) on Sunday November 17, 2002 @07:45PM (#4693092) Homepage Journal
    Did I miss the sarcasm tags on the "slightly boring" comment or something? I think there's a large audience on Slashdot who are all very excited about high-speed computing. Overclockers aside, I know I hate waiting for a compile.

    Lately though, I feel the things I'm waiting on my computer for are not a function of how fast the CPU can run, but of how poorly the software is written. Can someone tell me why my windoze machines sometimes block for up to a minute when I try to click the "Location" box on the top of the file browser common dialog control? Or the oft-complained-about boot time for most everything? Or the time it takes almost any program to load up the first time you load it?

    Anyone else think it's time to start over, and not just assume that faster and faster machines can deal with the laziness we program into the systems we build?

    M@
  • Imagine.... (Score:5, Funny)

    by DarkHelmet ( 120004 ) <mark&seventhcycle,net> on Sunday November 17, 2002 @07:46PM (#4693098) Homepage
    All those computers that meet Doom3's system requirements...

    ... And they're used for trivial things like finding aliens, weather prediction and unified theory.

    1. The weatherman is usually wrong.
    2. Aliens are abducting us. We need to send radio signals to Fife, Alabama, not out into space.
    3. Unified Theory is based on Heisenberg's stuff... You can have relativity and quantum mechanics... but not both at the same time. Damn, that guy was a genius. By the way, the unified theory is:

      e = 42; // always 42.

    Of course, I'm sure Doom3 has this somewhere in its source code, so ummm... go crunch 40 TFLOPS on that ;)

    </humor>

  • check out the report on our NetBSD cluster [feyrer.de], which would easily scale to many nodes.

    It's just a question of proper application software; the OS doesn't really matter - I can't understand all this fuss about Linux. *shrug*
  • I really thought there would be more Microsoft on the Top 500 Super Computer list, just as a matter of honor and homage to the Chief Software Architect.

    Looking at the list, we can see that supercomputers prefer ANYTHING BUT Microsoft, 499 to 1. I tried to find out more about the "1", but it has been encrypted by Seoul National University using the character set "charset=euc-kr". If anyone has more info on it, please post it in English.

    I wonder when Steve Jobs will get a Mac cluster on this list :) What a lot of information, thanks for the great article!
  • According to the SETI@HOME stats page [berkeley.edu], SETI is running at about 45 TFLOPS, which is slightly ahead of the Earth Simulator's 40 TFLOPS or the LANL 10 TFLOPS machines. [top500.org] This isn't very precise - Top500 uses Linpack as its benchmark, which is a lot more realistic and controlled than SETI, so your mileage may vary. And of course that's today's measurement from SETI, which is fairly variable in its CPU speed.
