Hardware

Inside The World's Most Advanced Computer

Junky191 writes: "Just came across an informational page for the Earth Simulator computer, which provides nice graphics of the layout of the machine and its support structure, as well as details about exactly what types of problems it solves. Fascinating for the engineering problems tackled -- how would you organize a 5,120-processor system capable of 40 Tflops? And of course don't forget about the 10 TB of shared memory." Take note -- donour writes: "Well, the new list of supercomputer rankings is up today. I have to say that the Earth Simulator is quite impressive, from both a performance and architectural standpoint."
This discussion has been archived. No new comments can be posted.

  • by Artifice_Eternity ( 306661 ) on Friday June 21, 2002 @01:24AM (#3741822) Homepage
    Didn't the mice in H2G2 already build such a computer? I think it was called... the Earth.

    Will the Earth Simulator have the nice fjords by Slartibartfast? :)
    • Seems kinda silly if we already know what the answer is. Hmm... double checking, perhaps?
      • Seems kinda silly if we already know what the answer is. Hmm... double checking, perhaps?

        Ahh, but you're forgetting that the Earth wasn't designed to calculate the answer (Deep Thought had, as you rightly note, already told us what that is); it was designed to calculate the question.

        Would've worked, too if the pesky Golgafrinchans hadn't turned up and perturbed the calculations. By the time the Vogons demolished it, the algorithms were way out of whack anyway.

        (Yeah, I know: -1, Offtopic)

  • by Anonymous Coward
    Imagine a beo*SLAP*
  • bold name (Score:1, Interesting)

    by SlugLord ( 130081 )
    "Earth Simulator" is a rather bold name for a supercomputer, especially when you consider it probably can't even simulate the global weather fast enough to predict it (or even tell you what the weather is in real time). The computer looks impressive, but I think they should have stuck to a more abstract name rather than what I see as false advertisement.
  • by Anonymous Coward on Friday June 21, 2002 @01:27AM (#3741834)
    So that in 15 years I'll already know how to code for the PlayStation 6.
  • I guess this is a *little* off-topic, but this really bugs me. They're building this really cool supercomputer, and they list the memory with base-10 prefixes instead of the standard base-2. I mean I can almost understand when Dell does that with hard drives (it pumps up the number for advertising purposes), but it's just silly in a scientific arena.
  • Hmmm.. (Score:4, Funny)

    by Peridriga ( 308995 ) on Friday June 21, 2002 @01:31AM (#3741853)
    Could you imagine a Beowulf..... Ahh fuck it..
  • by dnaumov ( 453672 ) on Friday June 21, 2002 @01:33AM (#3741861)
    I am not going to ask "Does this run Linux?" because it obviously does not, but can anyone point to some good resources on what kind of operating systems these monster machines run? Are they some kind of UNIX? Or are they some elite breed of OS that mortal humans have no chance of understanding? Linkage appreciated.
    • Here [llnl.gov] is a link for ASCI Red @ Sandia National Labs.

      From the article:
      The system uses two operating systems to make the computer both familiar to the user (UNIX) and non-intrusive for the scalable application (Cougar). And it makes use of Commercial Commodity Off The Shelf (CCOTS) technology to maintain affordability.

      Hmm, I see one familiar OS in there...
      • The IBM units run AIX, the DEC^H^H^HCompaq^H^H^H^H^H^HHP systems run Tru64 Unix, which I believe is derived from (or simply a renamed) Ultrix. Don't know what ES runs though, maybe extended mode DOS 6.2?
      • You know, that "^H" thing was funny the first quadrillion times I saw it, but since I saw your post, it has suddenly stopped being amusing. Might I suggest ^? for your next post? I'm quite sure I'll find that amusing for the next quadrillion times I see it.
          • Sorry, but believe it or not, I've yet to see anyone do it. Didn't realize it was already hackneyed, though I guess I should have known since it was kind of an obvious thing and all. How about Digital Compackard Corp?
            • Could someone explain the ^H thing? I'm not getting it. thanx
            • Ahhh young man, sit down and let me explain to you how when I was your age I had to walk 20 miles through the snow uphill to be able to use a pc (my apologies if you're not a young man :)

              Whenever you see the caret (^) followed by a letter, it typically means CTRL+letter. So ^H means holding CTRL while pressing H. These are ASCII codes that used to do various things on terminals. In the case of CTRL-H, it would be interpreted as Backspace. ^G is another common one and causes your BELL to ring. Try it from your command line (even works in Windoze). Look up an ASCII chart for the values less than 32 to see what they do.
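              For the curious, here's a minimal Python sketch of those two control codes in action. It assumes a terminal that actually honours BS and BEL; behaviour varies between terminals.

                import sys

                sys.stdout.write("Earth Simulatorr")   # oops, one 'r' too many
                sys.stdout.write("\x08 \x08")          # ^H backs up, space erases, ^H again
                sys.stdout.write("\x07\n")             # ^G rings the terminal bell
                sys.stdout.flush()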
              • Back in the BBS days you could enter [Control-H]'s into a message and they would become part of the actual text. That way the reader could actually see the words appear and be back-spaced over and re-written. It was a cool effect. Of course it worked better at 300-2400 baud, where you could actually see the characters being drawn.

      • Don't know what ES runs though, maybe extended mode DOS 6.2?

        "Non--system disk or disk error. Replace disk and
        "Strike any key to continue . . ."

        Shit! Where's the floppy drive on this thing! :)

      • Ultrix was a BSD-derived Unix.

        Tru64 is née Digital Unix, née OSF/1 -- a project that came about when DEC, HP and IBM came together to found the Open Software Foundation (OSF) to develop an "open" Unix (they felt threatened by Sun, I think). The OSF released OSF/1, a System V R2-based Unix, which was adopted by DEC as its new Unix to replace RISC Ultrix.
    • NEC machines like the Earth Simulator run a version of UNIX called SUPER-UX or S-UX, for short.

      NEC Press release [nec.co.jp] mentions SUPER-UX.
      NEC SX-6 page [nec.co.jp] has lots of info.

      • by hey ( 83763 )
        So they are running SUX -- has it occurred to them that it sounds like SUCKS? Maybe they should take a lead from the former Canadian "Conservative-Reform Alliance Party" ... yes, C.R.A.P. ... and change their name [netfunny.com].
        • Naming their OS S-UX would be pretty much par for the course for a Japanese company. A few years ago, Sony took some of their videotape library technology and applied it to data storage. The video version was called "TeleFile," I believe. They decided, since the library could hold as much as a petabyte, to call the data version "PetaFile."

          Shortly thereafter, Sony started referring to the libraries as "PetaSite" systems instead. Say "PetaFile" out loud, and you'll understand why.

          I'd provide a link, but Sony's web site works properly in, like, no known browsers. Pfeh.
    • by mt-biker ( 514724 ) on Friday June 21, 2002 @07:14AM (#3742527)
      These machines tend to be clusters of smaller machines. IBM's SP architecture, for example, runs AIX which doesn't need to scale particularly well.

      The magic in SP is partly hardware (high-speed interconnect between nodes), partly the admin software which allows admin tasks to be run simultaneously on many nodes (a non-negligible task); the rest is left up to the application programmers, who use MPI or similar to get the application to run over the cluster (a minimal MPI sketch follows at the end of this comment).

      Single system images typically don't scale this large. Cray's UNICOS/mk (a Unix variant) is a microkernel version of the UNICOS OS, used on the T3E and its predecessors, where a microkernel runs on each node, obviously incurring some overhead, but avoiding bottlenecks that otherwise occur as you scale. Here's [cray.com] some info. Last time I checked, T3E scaled to 2048 processors.

      Out of the box, SGI's IRIX scales very nicely up to 128-256 processors. Beyond that "IRIX XXL" is used (up to 1024 processors, to date). This is no longer considered to be a general purpose OS!

      IRIX replicates kernel text across nodes for speed, and kernel structures are allocated locally wherever possible. But getting write access to global kernel structures (some performance counters, for example) becomes a bottle-neck as the system scales.

      IRIX XXL works around these bottle-necks, presumably sacrificing some features in the process. Sorry, I can't find a good link on IRIX scalability.
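      As promised above, here's a minimal MPI sketch of that "left up to the application programmer" model. It uses Python's mpi4py purely for brevity (real codes on these machines are mostly Fortran/C with MPI), and the file name and toy problem are made up.

        # partial_sum.py -- run with e.g.: mpirun -np 8 python partial_sum.py
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()        # this process's id within the job
        size = comm.Get_size()        # total number of processes

        # Each rank works on its own slice of the problem...
        local = sum(range(rank, 1_000_000, size))

        # ...and combining the results is the programmer's job, not the OS's.
        total = comm.reduce(local, op=MPI.SUM, root=0)
        if rank == 0:
            print("total =", total)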
      • Out of the box, SGI's IRIX scales very nicely up to 128-256 processors. Beyond that "IRIX XXL" is used (up to 1024 processors, to date).

        Do you have any information to back this up? I don't work for SGI, but I work closely with them, and I've never heard the term "IRIX XXL." I've worked on the 768-processor O3000 system in Eagan, and as far as I noticed it was just running stock IRIX 6.5.14 (at that time).

        Then again, I've never used a 1-kiloprocessor system, either. So maybe we're both right.
        • The SGI Origin2000 scales up to 512 processors. XXL refers to the product ID for IRIX for the larger versions of the O2k. SGI Origin3000 scales up to 1024 processors but is only sold as an actual product up to 512 processors. SGI has sold O3k's with over 512 cpus in a single system image to some customers, such as NASA, but it's not treated as an actual product.
          Standard IRIX therefore scales up to 256 processors on O2k and 512 processors with the XXL version. The only difference between the two is that drivers for the former might not work with the latter because a few kernel structures changed. The same is true for the Origin3000 versions of IRIX.
  • What does "It is now under development aiming to start from FY2001." mean? Am I missing something here...
  • Eniac... (Score:3, Interesting)

    by chuckw ( 15728 ) on Friday June 21, 2002 @01:43AM (#3741894) Homepage Journal
    What is really amazing is that in 50-60 years, this amount of computing power will easily fit within the confines of the standard PC case (assuming such a thing even exists 50-60 years from now). Remember ENIAC...
    • Re:Eniac... (Score:2, Interesting)

      by Anonymous Coward
      Where did you get that 50-60 years figure from? Today's PCs can do about 0.5 Gflop/s. Assuming an 18-month Moore's Law, we'll get 40 Tflop/s in our PC case in...

      1.5*log(40000/0.5)/log(2) = 24 years.
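      The same back-of-the-envelope estimate as a tiny Python sketch (the 0.5 Gflop/s desktop figure and the 18-month doubling time are, of course, just the assumptions above):

        import math

        pc_gflops = 0.5           # assumed desktop performance, Gflop/s
        target_gflops = 40_000.0  # 40 Tflop/s
        doubling_years = 1.5      # 18-month "Moore's Law"

        doublings = math.log2(target_gflops / pc_gflops)
        print("%.1f years" % (doubling_years * doublings))   # ~24.4 years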
    • That's probably true, since the sum of all processing power in Silicon Valley from about 25 years ago is now wrapped up in a single personal computer.
      weird huh?!
  • by doooras ( 543177 ) on Friday June 21, 2002 @01:44AM (#3741897)
    how many FPS does this bitch get in Quake III?
    • Probably not many. Computers of this size generally don't have exceptionally fast processors, just shitloads of them. Since Quake is a (mostly) single-threaded app, a standard PC would run it faster.

      Then again, you could write a massively parallel i686 emulator (precalculating 5,120 instructions simultaneously) and run the world's fastest (and most expensive) PC.
    • Probably not as many as you'd think. It doesn't have any inherent 3D capabilities, so you'd have to write a software-mode OpenGL renderer. OK, not a real problem as there is plenty of power to go around, HOWEVER the DSPs that we use to do graphics these days are really fast. You'd probably need a computer in the multi-teraflop range just to emulate a GeForce 4. This is also assuming that graphics calculations like that can be done in a massively parallel way, which I don't know whether they can or not.

      A much more impressive result would be obtained by talking to www.quantum3d.com and getting them to build you a system based on their ObsidianNV system.
  • by FrenZon ( 65408 ) on Friday June 21, 2002 @01:47AM (#3741912) Homepage
    This is a question, and not a statement

    While this does a nice job of crunching numbers, how do they know that their algorithms are any good at doing what they do? Or are they trying to simulate things that aren't continuously kicked around by chaos theory?

    I ask because I've been looking at dynamics in my spare time, and simulating something as small as cigarette smoke accurately seems impossible (although I must say Jos Stam and Co did a nice job of making it look real). So it seems a bit bewildering to see something trying to simulate the earth, even if only at a macro level.
    • simulating something as small as cigarette smoke

      Ah .... but they're simulating big things. Big things are easier to simulate than little things, not harder.
    • They validate the algorithms using experimental data.

      The ASCI computers for instance are undoubtedly being validated against the last few nuclear tests.

      This is harder than it might sound.

      BTW "big things are easier to simulate than small things" (in another reply) is utter crap.

      Goes to show you though that using "off-the-shelf" hardware is not necessarily the best thing to do.
    • I also believe that they don't know if their algorithms are good, just as you said.

      One of the purposes they listed was "Establishment of simulation technology with 1km resolution."

      Probably they'll use some old data and see if they can predict a little bit into the future with reasonable accuracy.

      As a side note: on the "problems it solves" page, notice how many seismicity-related items there are (percentage-wise)! I think the true reason they want this thing is to predict earthquakes. I don't blame them; Tokyo is expecting a "big hit", and considering almost half of the Japanese population lives there -- probably even more than that in terms of $$ in Japan -- it's a good idea they are trying to predict these things.

      P.S. Japan experiences small quakes almost daily -- most of them cannot be felt; but it means they have tons of data to verify their simulations against.
    • by lowLark ( 71034 ) on Friday June 21, 2002 @02:21AM (#3742004)
      The easiest way to validate these types of prediction mechanisms is to feed them only part of your data set and see how well they predict the remainder. For example, if you have an ocean temperature data set from 1920 to the present, you might start by feeding it 1920-1992 and seeing how well its predictions for the past ten years hold up against your actual data. You may think that the known data set is too small for accurate predictions, but there are some fascinating methods (like ice core sampling and tree growth sampling) that seem to allow pretty good deductions as to past climate conditions over a very long period of time.
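      A minimal hold-out sketch of that idea, with a synthetic temperature series standing in for real data and a plain linear trend standing in for the model (nothing here is the Earth Simulator's actual validation code):

        import numpy as np

        rng = np.random.default_rng(0)
        years = np.arange(1920, 2003)
        temps = 0.01 * (years - 1920) + rng.normal(0.0, 0.1, years.size)  # fake series

        known = years <= 1992                                   # feed the "model" 1920-1992
        trend = np.polyfit(years[known], temps[known], deg=1)   # stand-in model: a linear fit
        predicted = np.polyval(trend, years[~known])            # predict the held-back decade

        rms = np.sqrt(np.mean((predicted - temps[~known]) ** 2))
        print("RMS error on held-out years:", round(rms, 3))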

    • While this does a nice job of crunching numbers, how do they know that their algorithms are any good at doing what they do? Or are they trying to simulate things that aren't continuously kicked around by chaos theory?


      This is an extremely insightful comment.

      What's being suggested here is akin to this - sure, they've got the most powerful car in the world, but to get from LA to New York, you've got to head east. Heading North won't help much, no matter how powerful your car is.

      This is what gets me about all these global warming "earth is going to heat up and cool down and rain and drought and..." predictions. How can they be sure they're even in the ballpark?

      One variable out and they could throw their predictions off by a massive amount. Their simplifications to allow the computer to do predictions may not take into account the nuances and subtleties of the real world.

      That's why, in many instances, I look at these computers with perhaps more cynicism than most other people. They're great for testing theories, and for allowing scientists to run computations that they possibly otherwise wouldn't be able to do. But just because it's come out of a billion-$$$ computer doesn't mean it's a golden egg.

      It's like that old saying that came out when word processors were first invented - shit in, shit out. Just because it's been through a fancy (or expensive) machine it doesn't make the outcome any more valid.

      -- james
      • While this does a nice job of crunching numbers, how do they know that their algorithms are any good at doing what they do? Or are they trying to simulate things that aren't continuously kicked around by chaos theory?

        Just because it's been through a fancy (or expensive) machine it doesn't make the outcome any more valid.

        Modelling real processes is a science which has been around for as long as computation. The simulations I used to run with Dynamo (discrete simulation of general PDEs) on a minicomputer were in some ways the coolest. They were also the slowest: a 10-state thermal transfer model could take an hour on a $200k processor.

        It is quite possible to look at fine-grained results using finite element or finite-difference methods in mechanical and fluid dynamics problems. For instance, looking at vortex shedding is within the realm of the possible for a current-model PC or workstation (a minimal finite-difference sketch follows at the end of this comment).

        Verification is done against known data sets, and most simulation work involves checks on accuracy.

        Yes, problems which are really in the 'butterfly effect' region are very difficult; interesting (useful) work has been done taking such phenomena to the molecular level. For something like crack propagation, finite element methods have to be very detailed indeed to be predictive, and while you can use these for useful results, the 'interesting' part needs to be calculated at the atomic level. That, however, I have only seen done in simulations of highly regular materials.

        Many of the chaotic results happen where there is a delicate balance in total energy, e.g. the dynamics of cigarette smoke. 'Useful' problems, however, usually involve substantial energy transfers, and at some computational scale these are not chaotic.

        Solar and geothermal energy input into global weather patterns involves a LOT of energy, and modelling is generally easier where you are looking at such problems.

        Computational weather prediction has made impressive strides. Ten years ago the ability to predict weather in New England was dismal; today, between better sensors and better models, the 5-day forecast is more often correct than not.
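        Here's the promised sketch: an explicit finite-difference step for 1-D heat conduction, just to show the shape of the time-stepping these codes do. Real CFD and climate models do the same kind of thing in 3-D with far nastier physics, which is where the teraflops go.

          import numpy as np

          nx, alpha, dx, dt = 100, 1.0, 0.01, 2.5e-5   # dt keeps alpha*dt/dx^2 = 0.25 (stable)
          u = np.zeros(nx)
          u[nx // 2] = 1.0                             # a hot spot in the middle of the bar

          for _ in range(1000):
              # u_t = alpha * u_xx, central differences in space, forward Euler in time
              u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

          print("peak temperature after diffusion:", round(u.max(), 4))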

    • by Anonymous Coward
      Nowadays, when simulating complex environments such as weather systems, the main inaccuracy comes from not knowing the "starting state" with enough precision. Obtaining wind, temperature and pressure information is easy for the data points that lie conveniently on the surface of a landmass, but data points way up in the air or out to sea are mostly calculated by interpolating known points. I know that in 1992, the Cray YMP12/128 was doing 10-day predictions of weather with a 320x640x31 matrix of data points and a 15-minute time-step -- there's no way it would have had accurate data for many of those points at all. The simulation took 6-8 hours: roughly a third of that was pre-processing to compute starting conditions, another third the time-stepping simulation, and the final third post-processing to derive qualitative conditions.

      The accuracy of the simulation can be measured in terms of the length of time that the predictions remained within a given error of the actual weather.

      To overcome the problem of inaccurate starting states, high performance computing is now used to run many simulations of the same thing in parallel, each with a slightly different starting state. The hope is to identify many of the "exceptional" outcomes, and assign a probability to each (see the sketch at the end of this comment).

      A good example of this is the October 1987 storm in the UK, which the Met Office didn't see coming at all. It is believed that had they been able to run many simulations with different starting states, they would have seen that starting conditions slightly different from those used in their simulation would have led to the craziness [bbc.co.uk] that ensued.

      More information about the storm and its cause can be found here [stvincent.ac.uk] or in the Google cache [216.239.39.100].
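      Here's the sketch mentioned above: an ensemble of runs of the Lorenz-63 toy system from slightly perturbed starting states. It's nothing like a real NWP model, but it shows why the spread of an ensemble tells you when a single deterministic run can't be trusted.

        import numpy as np

        def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            x, y, z = s
            return s + dt * np.array([sigma * (y - x),
                                      x * (rho - z) - y,
                                      x * y - beta * z])

        rng = np.random.default_rng(1)
        measured = np.array([1.0, 1.0, 1.0])                       # the "observed" starting state
        ensemble = measured + rng.normal(0, 1e-3, size=(50, 3))    # 50 slightly perturbed copies

        for _ in range(2000):                                      # march every member forward
            ensemble = np.array([lorenz_step(m) for m in ensemble])

        # A large spread means the outcome is sensitive to the starting state --
        # exactly the case where one deterministic forecast can miss the storm.
        print("ensemble spread (std of x):", round(float(ensemble[:, 0].std()), 3))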
  • Nothing New (Score:2, Funny)

    by RAMMS+EIN ( 578166 )
    ``The Earth Simulator Project will create a "virtual earth" on a supercomputer to show what the world will look like in the future by means of advanced numerical simulation technology.''
    We already have that: http://whatisthematrix.warnerbros.com/
  • by ErikTheRed ( 162431 ) on Friday June 21, 2002 @01:58AM (#3741944) Homepage
    They don't want to admit it, but the real reason for building this thing is so that they can predict appearances of Godzilla....
  • "Advantages" of ES (Score:5, Interesting)

    by binaryDigit ( 557647 ) on Friday June 21, 2002 @02:04AM (#3741965)
    Seems to me that though ES takes the overall performance crown, the IBM and HP (man, that sounds strange) units have some definite advantages over it. Primary among these is the fact that they DO use "off the shelf" parts. ASCI White uses 375 MHz Power3 chips, which are comparatively low-performance compared to what IBM is shipping now (1.3 GHz Power4). I don't know what the technical details are behind ASCI White, but it seems that IBM could instantly get a doubling of performance by using new CPU modules. With the "specialized processor" approach that NEC uses, this would seem to be prohibitively expensive. IBM has already amortized most of the cost of the development of new processors through their normal business units.

    Another advantage would be that since ASCI White is a hyper RS6K, you could use a lower end model (and IBM could rather inexpensively offer a lower end model) to develop your models on before using the relatively expensive big boy to do the actual simulations. I have to admit that this point is moot if they don't keep the utilization of the thing up pretty high most of the time.

    Though they mention that ES "only needs 5104" processors vs 8192 for AW, it looks like ES still takes up massive amounts of space. Now, ES' storage is significantly larger than AW's, so maybe that's where all the space is being eaten, but it would be interesting to see what the actual cabinet space/power requirements for the two machines sans storage are (assuming they are both using standard stuff for storage).

    Other things include: since AW is based on OTS parts, is it easier to get parts for when processing units konk out? Is it simpler for a tech to work on the unit? Since Linux is already running on RS6K, theoretically with a few device drivers you could run Linux on that bad boy.

    Of course all this is moot in the non-real-world of supercomputers. With seemingly infinite budgets, the only _real_ measure is absolute performance, and ES obviously has the edge here. But if I were the IBM sales rep for supercomputing, I'd sure be hyping the fact that when it's not simulating nuclear explosions, you can run Gimp and Mozilla.
    • ...compared to what IBM is shipping now (1.3 Ghz Power4)

      Wow... -- somebody else who knows about the Power4...

      I am curious, do you know how they USE the darn things? I mean, the sucker has over 5,000 pins (!!), and I suppose the thermal requirements are tremendous too. Any info would be appreciated.

      But if I were the IBM sales rep for supercomputing, I'd sure be hyping the fact that when it's not simulating nuclear explosions, you can run Gimp and Mozilla.

      don't forget to mention the terrific pr0n potential.
      • by peatbakke ( 52079 ) <peat AT peat DOT org> on Friday June 21, 2002 @04:31AM (#3742220) Homepage

        I'm not sure how much you've looked up, so some of this information may be redundant, but here's what I've been able to dig up:

        • The Power4 chip has two processor cores and a shared 1.44MB on-chip L2 cache .. which in turn appears to be implemented as three separate cache controllers. The cache lines are hashed across the controllers, which is a pretty neat trick IMHO.
        • It weighs in at about 170 million transistors
        • This PDF [spscicomp.org] mentions that there are over 5500 total I/Os (including > 2200 signal I/Os) that give the chip a raw bandwidth of over 1 Tb/s.
        • According to this page [ibm.com], the chip simulations show the core temperature peaking around 82C (~180F) in certain regions of the chip, and consuming 115 - 140 watts.

        That's a beast of a chip! The packaging looks pretty substantial as well. I don't doubt the cooling systems are fairly remarkable, although I can't find any specific information about 'em.

        cheers!

    • ES has 700TB storage, ASCI W 160TB;
      a small difference... heh
    • by alfadir ( 142096 )

      You can get an SX-6i [supercomputingonline.com]. The processor in ES is not made only for ES. And I don't think you would sell many supercomputers for IBM if you were advocating Gimp and Mozilla as applications...

      • You can get an SX-6i

        True, but if you look at the Top500 list [top500.org], you'll see significantly more IBM machines across the board than NEC, including a large number of "standard" units (sold as kick-ass RS/6000s vs "supercomputers", e.g. the P690 [ibm.com]).

        I would think that this gives them a significant edge in development costs as well as giving their customers more flexibility.

        And I don't think you would sell many supercomputers for IBM if you were advocating Gimp and Mozilla as applications

        Oh come on, nuclear physicists like to clean up photos of their dogs (probably don't have girlfriends) and surf the web just like anyone else ;) Imagine the speed with which those nerdy scientists can apply those Gimp filters to all that pr0n they download.
        • by fb ( 10330 )
          > you'll see significantly more IBM machines
          > across the board than NEC

          As is widely known and publicly acknowledged, also by one of the authors of that list (Prof. Meuer), the Linpack benchmark used to build the Top500 list is biased against vector supercomputers, like NEC's.

          Supercomputer performance cannot be measured by a single number, really.
    • by nr ( 27070 )
      The ES is running a standard UNIX called Super UX, which I guess is fully POSIX compliant and has the normal ANSI/ISO C compiler. You should be able to compile and run most Open Source programs, including Gimp and Mozilla. I have compiled and used many GNU and Open Source programs like Emacs on the university's Cray running UNICOS, which also is a UNIX derivative designed for the Cray vector supers.
    • But if I were the IBM sales rep for supercomputing, I'd sure be hyping the fact that when it's not simulating nuclear explosions, you can run Gimp and Mozilla

      People, people, this was a joke. You know, not intended to be taken seriously. Of course if someone is going to spend 10 figures on a computer, they don't give a flip about Gimp, etc. Chill, it's ok, put the Pepsi down.
  • by bakes ( 87194 ) on Friday June 21, 2002 @02:11AM (#3741983) Journal
    ..a single-cpu one of these!!!
  • ...but will it run pong? :)
  • but I couldn't find anything about the PowerMac G4?

    -- james
    ps humour, not troll/flamebait :)
  • "I predict (Score:3, Funny)

    by zephc ( 225327 ) on Friday June 21, 2002 @02:34AM (#3742029)
    that within 100 years computers will be twice as powerful, 10,000 times larger, and so expensive that only the five richest kings of Europe will own them." - Prof. Frink
  • The Earth Simulator will be destroyed to make way for a hyperspatial bypass...
    • The Earth Simulator will be destroyed to make way for a hyperspatial bypass...

      Well, it would be kinda ironic if it got knocked out by an earthquake. Especially if it didn't predict it.

      Regards, Ralph.

  • alright now. (Score:2, Insightful)

    by domtropen ( 549086 )
    I'll freely admit that I'm a little freaked out by this (so much so that I'm delurking). Directed at the discussion about whether it can be a practical machine: does it really matter? It seems as if it was built for one sole purpose, and it appears that it will do it well. Can we just give them that?
  • SETI? (Score:2, Interesting)

    Are there any estimates of the processing power of all the worldwide computers participating in the SETI [seti.org] project?
    • The setiathome website claims there are three million users. Guessing that each user has about 10 MFlops, this is a total of 30 TFlops -- about the same as the Earth Simulator.

      However the SETI network could never do what the ES does because although it is compute-distributed, the data is centralized, so the actual compute rate is limited by communication speed. And over the Internet that's really slow compared to real interconnect architectures for these sorts of applications. At least, until the Internet can compete with a multi-gigabyte-per-second local interconnect. Of course by then, the processors will still be outstripping the network, so you probably still wouldn't be able to do it.

  • Topic (Score:2, Funny)

    by sheepab ( 461960 )
    Inside The World's Most Advanced Computer

    How did they get inside my brain???
  • by alfadir ( 142096 ) on Friday June 21, 2002 @02:51AM (#3742063)

    The Earth Simulator is running Super UX. The same operating system as the rest of the NEC supercomputers [nec.co.jp]

    The German-language TV channel 3sat [3sat.com] will broadcast a 30-minute film on the Earth Simulator on Monday, the 24th of June, at 21:30 and on Tuesday, the 25th of June, at 14:30.

  • by SJ ( 13711 )
    ES seems very interesting and all, but I would like to see how it compares to what the NSA has parked under Washington for Echelon and its successors.

    You can bet your butt that if information about the "World's Fastest Supercomputer" is available to the general public, the NSA has got something bigger and better.
    • I don't mean to nitpick, but the general public doesn't get access to this. Researchers will, but you can't just walk up and start playing Doom on this thing.
  • by Anonymous Coward
    Oh well, looks like my university is never going to make it into the Top500 list of supercomputers (not to speak of any other German university).
    Although they are setting up a quite cool Sun Fire UltraSPARC cluster [rwth-aachen.de] running Solaris.

    The setup will consist of 16 Sun Fire 6800 SMP nodes (1500 MHz, each node is a 24-processor SMP system with 24 GB of shared main memory) and 4 Sun Fire 15K SMP nodes (1500 MHz, each having 72 processors and 144 GB of main memory), giving a max arithmetic performance of 4 TFlop/s.
    Check the link to see for yourself (like you don't have anything better to do, right?).

    Sad/funny part of the story: the cluster is going to be finished in 2003 ...
    I should check Moore's law on the Top500 supercomputers...

    At least now the world knows we do cool stuff too ...

  • Almost 'nuff said. But I guess for the number-crunching work that it's designed to do, FORTRAN must be the way to go. Not to mention that it was probably built by engineers. Nya! ^_^


  • Wonder if the Earth Simulator could run a version of SimEarth, down to every country, state, city, person, etc. Doesn't the Sim series also throw in weather events? Tornadoes, etc.? The Earth Simulator should be able to crank out a few of those...

  • /.'ed (Score:2, Funny)

    by cibrPLUR ( 176588 )
    the new list of supercomputer rankings is up today.

    I guess top500.org isn't running on one of them.
  • by Anonymous Coward
    I spilled a cup of coffee into it.

    It didn't fry.

    Beat that, Earth Simulator! Beat that!

    (I still use the now-slightly sticky soundcard from it :P)
  • I wonder just how well it can simulate Acts of God(tm). Or maybe they can tap into the Global Consciousness Project [princeton.edu]?
  • by DanThe1Man ( 46872 ) on Friday June 21, 2002 @05:44AM (#3742318)
    The Earth Simulator Project will create a "virtual earth" on a supercomputer...

    Hmmm, now where have I heard of an idea like that? [whatisthematrix.com]
  • by erlando ( 88533 ) on Friday June 21, 2002 @05:44AM (#3742320) Homepage
    There are some nice pictures [jamstec.go.jp] on the ES site as well. I wonder if the colouration of the cabinets is there to prevent the engineers from getting lost..? :o)
  • I was looking at the list of supercomputer rankings [top500.org], and I couldn't help thinking - yeah, but what about all the CLASSIFIED computers? I bet the US gov has secret computers that would blow that list away.

    -
  • Wow (Score:2, Funny)

    by Ilgaz ( 86384 )
    What I read is, it's based on the NEC SX arch; now imagine if it was a DX...

    Some nostalgia ;)
  • Boy would I like to buy some render time on THAT O_O

    (The article has some nice graphics too!)

    And because I KNOW there's probably 2000 "A beowulf cluster of those..." posts that are below my threshold: I would fear for the safety of mankind if someone made a cluster of /those/.
  • Now that there are 23 installations at least this large, that should be the new threshold: the largest machines that can be bought at any given time.
  • Hans Moravec [transhumanist.com] estimates it would take about 100 trillion instructions per second to emulate the human brain. At 38 Tflops, the Earth Simulator is in the ballpark. Maybe they should have called it Human Simulator, or just "Sim".
  • by FearUncertaintyDoubt ( 578295 ) on Friday June 21, 2002 @10:09AM (#3743290)
    ...is build a computer capable of withstanding a full slashdotting.

  • SimEarth XP
    System Requirements:

    40Tflop 5120 Processor Cluster

    10TB of System Memory

    256 Color Display

    4X Cdrom Drive

    Arctic Rated Parka

    "Sales thus far have been slow..." confessed Wright, "...however we're expecting at least one large customer in the coming months."

    -Chris

  • You mean the model H3760?

    --pi

    ... Erm, sorry. That's 3760000000. They release too many of these things. And it only costs 1/8 of its model number, like the rest do!
