First Petaflop Supercomputer To Shut Down

samzenpus posted about a year ago | from the so-long-farewell-auf-wiedersehen-goodbye dept.

An anonymous reader writes "In 2008 Roadrunner was the world's fastest supercomputer. Now that the first system to break the petaflop barrier has lost a step on today's leaders, it will be shut down and dismantled. In its five years of operation, the Roadrunner was the 'workhorse' behind the National Nuclear Security Administration's Advanced Simulation and Computing program, providing key computer simulations for the Stockpile Stewardship Program."

84 comments

The Coyote finally won (3, Funny)

Anonymous Coward | about a year ago | (#43328177)

RR must've spent too much time pecking on that Acme birdseed.

Re:The Coyote finally won (5, Informative)

metlin (258108) | about a year ago | (#43328455)

Yeah, just like the OP who was too busy "pecking" and forgot to include the link to the actual article on the decommissioning: http://www.pcmag.com/article2/0,2817,2417271,00.asp [pcmag.com]

On April Fools' Day _anything_ is possible ... (2)

Taco Cowboy (5327) | about a year ago | (#43328707)

The Coyote finally won

Yeah, that too is possible!

so when's the auction? (2)

Tastecicles (1153671) | about a year ago | (#43328199)

It'd be interesting to see if this thing goes for scrap value, or if someone else'll pick it up for service elsewhere...

Re:so when's the auction? (1)

Dishwasha (125561) | about a year ago | (#43328223)

Keep an eye on Ebay for parts.

Re:so when's the auction? (5, Interesting)

hairyfeet (841228) | about a year ago | (#43328385)

It used a combo of Cell CPUs and AMD Opterons [cnet.com], so if they want to recoup some of the cost I doubt selling those chips would be hard.

Of course this is one more reason I don't like the "game console" way the industry is being pushed, with Intel talking about soldering chips to boards and companies pushing more "black box" computing, because if it were not for bog-standard yet powerful COTS parts, things like Roadrunner would be either impossible or insanely expensive. Yet to hear the industry pundits tell it, all we need is a tablet and an iPhone...sheesh. Give me a system I can upgrade any day of the week; laptops and tablets are fine for service calls or as PMPs, but they will always be more about style and battery life than performance.

Which OS? (1)

unixisc (2429386) | about a year ago | (#43328835)

Which OS was it using? Linux? AIX? Something else?

Re:Which OS? (0)

Anonymous Coward | about a year ago | (#43329747)

It runs Linux on both the Cell and Opteron blades.

Re:Which OS? (0)

Anonymous Coward | about a year ago | (#43330849)

Which OS was it using? Linux? AIX? Something else?

Fedora 9

How often do you upgrade really? (1)

swb (14022) | about a year ago | (#43331607)

I was relatively late to the build-your-own-PC craze; I built my first one in 1995 after about 8 years of being a Mac owner.

But since that time I have found relatively little worthwhile "upgradability" in my systems. I do remember adding a 3DFX card to my Pentium-166 system and replacing a couple of video cards (in the last 2-3 years) whose fans have quit.

When I built systems I tried to get the best bang for my buck out of my CPU, buying just high enough in the product lineup that my parts were "better" but below the point where they wanted $800 for a CPU because it was in the absolute top tier (e.g., the P180s and P200s were much more expensive than the performance gain justified, and the P133 savings didn't justify the performance loss from the P166). Eventually the P200 became cheap enough that I *could* have upgraded, but the cost wasn't worth the nominal performance increase, especially when I was 3-6 months away from a whole new architecture with all kinds of performance improvements (PCI, PII/PIII, etc.).

Although that example seems dated, it's always been like that -- there are either limited upgrades (dead end of a chipset/CPU) or upgrades that hardly seem worth the hassle when a new architecture is available with 10x the performance.

I get it that there are guys that chase the latest & greatest video card or who start with the lowest end CPU for a chipset and then serially upgrade via eBay or other bargain hunting, but my interests are more aligned with what's ON the computer rather than upgrading for the sake of a 5% clock rate jump.

Re:How often do you upgrade really? (1)

hairyfeet (841228) | about a year ago | (#43332739)

Then no offense but "Ur doing it wrong", because I have been able to upgrade every system I have built and extend its useful lifespan. For example, I stayed with LGA775 when it was obvious it was gonna have a long life, then I heard about Intel rigging the market so I switched to the AM sockets, which turned out to have an even longer life than LGA775. The machine I'm typing on went from an Athlon dual to a Phenom II quad to now an AMD hexacore, triple the power of the original chip; went from 2GB to 8GB of RAM, quadruple the memory; and from 400GB to 3TB in space, more than seven times the space.

It's really not hard: simply look at a company's roadmap and choose wisely and you can extend their lives easily. But even barring that, you are STILL able to take advantage of COTS; after all, you can buy memory upgrades from dozens of vendors, which will be a thing of the past if Apple and Intel get their way, as Intel is talking about soldering RAM to the board. Your PC came with 2GB and you need more? Tough shit, buy a new computer. We have ALREADY seen Intel pull this shit: the Intel Atom was a 32-bit processor, yet Intel made damned sure their chips couldn't take more than 2GB of RAM. Why? To force you to buy more expensive chips, silly! This is why I NEVER used Atom for low-power office boxes or HTPCs; with AMD Bobcat I can put in up to 8GB of RAM, which with shared memory for video makes a hell of a buffer for HTPCs and makes videos smooth as butter.

Even if you don't build your own, surely you can see the danger in black-box computing: it will mean millions of systems sent to the dump NOT because they can't cut it but because some corp no longer supports them, so the software won't run, and thanks to how nasty the DMCA is you won't even be able to unlock them and reuse them yourself. For example, see how to this very day a site will get a DMCA notice if it dares host files that let you easily crack an Xbox-1, a system abandoned by MSFT half a decade ago.

This push to phones and tablets is NOT being done for YOUR benefit; if that were true they'd all have MiniSD card slots and several would have mini slots for popping in more RAM. Nope, it's all about the corps and pleasing Wall Street. If they can just abandon a system whenever they need higher profits (see WinPhone 7 not being able to run WinPhone 8 apps), it takes planned obsolescence to a whole new level, as they won't even have to wait until the flimsy plastic breaks anymore; they can just pull the plug from corp HQ. And that is a truly scary thought.

Re:How often do you upgrade really? (1)

swb (14022) | about a year ago | (#43333527)

I'm not disputing the dangers of planned-obsolescence/DRM/black-box computing at all (despite the two iPads and 3 iPhones we have here).

I still build my own PCs, but for a whole bunch of reasons I kind of leave them as built these days.

I'm fine with my existing two year old Core i5 system -- but I kind of threw some money at it when I built it and it's paid off -- SSD boot disk, 2 TB Raid1 storage, 16 GB RAM.

When I'm done, my young son gets them and by then he's thrilled. He's just about to get a slightly overhauled Q6600 system to replace the Pentium 4 box he has now.

The last couple of systems I've built, I get a half-assed urge to just buy one and be done with it -- I price out what I want as parts, then compare the same specs to a Dell, and keep on building...

Re:How often do you upgrade really? (1)

hairyfeet (841228) | about a year ago | (#43334393)

But that's not the fault of having the ABILITY to upgrade, it's simply that software hasn't kept up with hardware, that's all. My youngest is gaming on an Athlon triple that was originally built as a spare, just a system to have in the closet if one of the family suffered a broken system, because I got tired of hearing "How long is it gonna take?" But when my hand-me-down Phenom II quad had a memory stick go tits up (and naturally the Phenom II had DDR2 and the Athlon DDR3, so I couldn't just snatch the stick), I handed the spare to him and said it'd be a week... it's been over a year and now the Phenom II is in the closet. Why did he stick with the weaker triple over a quad? Because it's more power than his games use, and by the time the part was shipped he had already moved his software over and was happy.

So it's not the ability to upgrade that is the problem, it's just that for some people that need never arises. If your kid only surfs and plays basic games? That C2Q will be extreme overkill and will probably last many years as is. I have a 2003 Sempron 2GHz in the shop I use as a nettop... why? Because for the jobs I have that machine for -- downloading drivers, looking up solutions to problems, and basic web surfing -- it works just fine. In fact I'm gonna have to gut it this summer NOT because it can't do the job but because XP is EOL next year and there simply aren't drivers for its hardware on Windows 7; planned obsolescence bit me in the ass. Oh, and don't say Linux, as I have too damned much Windows-only software for Linux to cut the mustard. But thanks to COTS I can just slap a Bobcat board in it for like $100 counting RAM and tada! Another 7 years of use out of the box.

So what you are talking about isn't really connected to losing the ability to upgrade; you simply haven't had any jobs that make it worth the trouble. Now imagine if the RAM was soldered to the board so that C2Q only had 1GB of memory... you wouldn't be giving it to your kid, would you? Now THAT is the problem, because even simple upgrades like adding memory are being threatened by those who make more money by forcing folks to throw hardware away before it's needed. As I tell folks, "Imagine that your fridge had a 5-year clock, and at the end of that five years the compressor locked solid and you had to throw the whole thing away... would you buy it? Then why would you pay the same money for a cellphone with a timer on it?" Hell, they don't even give you the fucking courtesy of the timer, because at LEAST you could then make choices based on the life of the product. Just ask those poor dumb bastards stuck with WinPhone 7 and 2-year contracts how they feel. That thing didn't have many apps to start with, but they sold it on "more are coming!" Yet now that they have killed WinPhone 7, those people paid for nothing; they have nothing better than a dumbphone at a smartphone price.

Re:How often do you upgrade really? (1)

swb (14022) | about a year ago | (#43337081)

I think the environmental thing is going to bite the planned obsolescence business strategy in the ass ultimately.

I think environmental regulations on hazardous materials, manufacturers being forced to take back and recycle old products, and possibly even cost of materials will make it harder and harder to release intentionally obsoleted gadgets.

Some of this cost can be passed on to end users, but much of it can't be and I've read editorial content from environmental advocates that even suggests that device makers be forced to support devices with software updates and technical support for longer periods of time in a bid to make it unprofitable to obsolete them so quickly.

The other side of this coin, though, is that while there are jobs that a 5-10 year old device can still do, and manufacturers do push planned obsolescence, the primary reason these things have gone obsolete is that their newer versions are just so much better in terms of performance. My kid could keep using his P4 for a long time, but even basic stuff gets sucky -- web sites are SO JavaScript-dependent these days that unless you can throw 4 GB RAM and a couple of cores at web pages, you do get shitty performance.

Re:How often do you upgrade really? (1)

hairyfeet (841228) | about a year ago | (#43342367)

I think we'll just keep on poisoning the third world, sadly. Ever see a video of the place where most of our PCs end up? It's this little cesspool in Bumfuck Africa where the sky is black from the burning motherboards as the peasants try to get enough metals out of them to survive another week. It's really sad and pathetic, and the ones "working" there will probably be dead of cancer or heavy metal poisoning by the time they are 30.

We could go into the "why" it has happened -- I would argue what we are seeing is the fruits of nearly 40 years of Reaganomics and insider dealing -- but the why really doesn't matter. All that matters is that thanks to laws like the DMCA they'll make keeping that older system illegal for pretty much anything but using it as a doorstop. How could they do that? Simple: just look at how any site that shows an easy way to unlock an Xbox-1, even though those have been abandoned by MSFT for half a decade, gets a C&D and sued if it doesn't comply. What you'll have will be just like game consoles: locked-down hardware with locked-down OSes, where all it takes is the next version using a different set of APIs and suddenly you can't get any software for it and the latest website features won't work. You can use it for a doorstop but that will be it; hardware without software isn't worth shit.

And do NOT get me started on JavaScript unless you want to hear a rant; I think JS is the biggest clusterfuck ever heaped upon computing. It was a weak and pisspoor language that used the name of a more popular product just to get its smelly foot in the door, the guys making it never had the word security even cross their minds, much less take it into consideration, and now we just keep bolting more and more shit on top of a crapfuck language and wonder why we have so many exploits and why it sucks resources like Charlie Sheen sucks up coke. It's a bad idea; it SHOULD have been replaced 10 years ago with something designed for the job and with security features baked in, but until some big corp like Google says "Okay, enough is enough" we'll just keep putting bandaids on bullet wounds. Hell, have you tried HTML5 video yet? Frankly you have to have dedicated hardware just to run the damned thing or it's a slideshow; I've seen 2.5GHz C2D chips struggle their asses off just to render a slideshow out of that mess, when the same chip can do 1080P Flash. It's just another clusterfuck which, surprise, is being brought to you by the very same corps that want you to throw everything away every 3 years. Why am I not surprised?

Re:so when's the auction? (1)

DigiShaman (671371) | about a year ago | (#43328467)

Sold to China would be my guess.

Re:so when's the auction? (1)

ThatsMyNick (2004126) | about a year ago | (#43328869)

Sold to Indonesia would be my guess.

/sarcasm

They could set it to fix the budget. (0, Funny)

Anonymous Coward | about a year ago | (#43328203)

Oh wait, that's PEBCAP.

Stop writing "barrier". (5, Insightful)

Anonymous Coward | about a year ago | (#43328205)

"Sound barrier" was and remains OK because there is a physical difference between flying slower than and faster than the speed of sound. But the word "barrier" is now (over)used to make things sound more dramatic. Raising a number from below to above some arbitrary (usually number base-dependent) threshold does not imply crossing a barrier, unless by barrier is meant "barrier to entry of another over-hyped tech piece".

Re:Stop writing "barrier". (1)

Anonymous Coward | about a year ago | (#43328217)

I just hope nobody breaks my ass-cherry barrier!

Re:Stop writing "barrier". (0)

Anonymous Coward | about a year ago | (#43328233)

barrier to april fool's

Milestone (2, Insightful)

Anonymous Coward | about a year ago | (#43328443)

Also, they should use a proper car analogy and use "Milestone".

Stop complaining about the word "barrier" (1)

gentryx (759438) | about a year ago | (#43332705)

Whenever there is a story on supercomputers on /., there will be a comment stating that there was no barrier whatsoever. But that's not quite true.

The truth is that the performance of supercomputers grows that fast because engineers continuously solve problems, which were deemed intractable before (e.g. power consumption, reliability, network performance). The research may not be groundbreaking in the sense of earth-shattering, but definitely in the sense of "wow, I didn't think one could do that!"

Re:Stop complaining about the word "barrier" (0)

Anonymous Coward | about a year ago | (#43333371)

I agree with your second part. That is why "milestone", rather than "barrier", is the better metaphor.

Aw, crap! (4, Funny)

theoriginalturtle (248717) | about a year ago | (#43328219)

I think that thing could host a kick-ass DOOM session!

Whiners (-1, Troll)

methano (519830) | about a year ago | (#43328225)

I bought my aluminum iMac back in 2007 and it works just fine, though I wouldn't mind an upgrade. What happened to that damn sequester?

On another note, can you really use a computer to accurately calculate how fast a nuclear arsenal will deteriorate? I didn't think so.

Re:Whiners (1)

ceoyoyo (59147) | about a year ago | (#43328597)

They wouldn't mind an upgrade either, which is why they're doing one.

Re:Whiners (5, Informative)

Macman408 (1308925) | about a year ago | (#43328807)

The problem is energy efficiency. In the 5 years since it was first built, supercomputers have become far more energy-efficient. Roadrunner comes in at 444 MFLOPS per watt, while the current fastest supercomputer (and also a DOE project), Titan, is at 2,143 MFLOPS per watt. Roadrunner uses 2,345 kW, and supporting equipment (cooling, backup power) adds (on average) 80% more. Assume they get relatively cheap electricity (the Internets tell me the average price charged to industrial customers is 7¢/kWh), and that means their electric bill is at least $295.50 PER HOUR. A computer with the same performance but Titan's efficiency would cost $61 per hour. That's the difference between your electric bill being $2.6 million per year and $500,000.

Assuming Titan's cost also scales ($60 million for 17 Petaflops -> ~$3.5 million for 1 Petaflop), then the payback for scrapping it and building a new computer is under 2 years. So yes, it IS saving money to scrap this one. They're not even replacing it with a new one (yet, anyway) - they're using one that was built in 2010.
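
As a sanity check on those figures, here is a small Python sketch using only the numbers quoted above (the dollar amounts inherit the assumed 80% facility overhead and 7¢/kWh rate):

    # Rough check of the figures above: 444 and 2143 MFLOPS/W, 2345 kW draw,
    # ~80% overhead for cooling/backup power, 7 cents per kWh.
    RR_KW = 2345.0
    RR_EFF = 444.0          # MFLOPS per watt, Roadrunner
    TITAN_EFF = 2143.0      # MFLOPS per watt, Titan
    OVERHEAD = 1.8
    PRICE_PER_KWH = 0.07

    rr_cost_per_hour = RR_KW * OVERHEAD * PRICE_PER_KWH
    # Same throughput at Titan's efficiency draws proportionally less power.
    titan_equiv_kw = RR_KW * RR_EFF / TITAN_EFF
    titan_cost_per_hour = titan_equiv_kw * OVERHEAD * PRICE_PER_KWH

    print(round(rr_cost_per_hour, 2))     # ~295.47 dollars per hour
    print(round(titan_cost_per_hour, 2))  # ~61.22 dollars per hour
    print(round((rr_cost_per_hour - titan_cost_per_hour) * 24 * 365))  # ~2.05M/yr saved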

And also, yes, you CAN use a computer to calculate how your nuclear arsenal is deteriorating. What makes you think they can't?

Re:Whiners (1)

tgd (2822) | about a year ago | (#43329977)

And also, yes, you CAN use a computer to calculate how your nuclear arsenal is deteriorating. What makes you think they can't?

Welcome to the New America -- where two decades of coddling have left the general population with the belief that their opinions are as valid as the knowledge of experts.

Top supercomputer is Google? (0)

Anonymous Coward | about a year ago | (#43328253)

"In total, Roadrunner takes up 278 refrigerator-size server racks, and connects 6,562 dual-core AMD Opteron and 12,240 Cell chips. "

So way less than Google's Map Reduce computer systems. Why not just buy time on map reduce?

Also, these supercomputers are not nuclear stockpile inventory systems (or simulation systems as they put it), and certainly weren't used to help find a cure for AIDS. They're for SHOW, not for use. The stockpiles have not changed, yet the computing power needed to track them is 200 times more? BS. And you need to keep re-running the same simulation for a static system? BS.

And how come IBM's *super* computers are benchmarked (well, Linpack anyway), yet they use legal contracts to prevent anyone benchmarking their overpriced mainframes?

Instead of spending silly money on a supercomputer for a lab that doesn't use it, why not build a big distributed computing center for universities to use as they wish?

Here, it's even in their mission statement (1)

Anonymous Coward | about a year ago | (#43328303)

http://nnsa.energy.gov/ourmission/managingthestockpile

"Most nuclear weapons in the U.S. stockpile were produced anywhere from 30 to 40 years ago, and no new nuclear weapons have been produced since the end of the Cold War."

And yet you need an ever-increasing supercomputer to track and simulate them???

Then we get to the unpleasant truth:
http://nnsa.energy.gov/ourmission/recapitalizingourinfrastructure

"Recapitalizing Our Infrastructure The FY2011 Budget Request increase represents the investment needed to transform a Cold War nuclear weapons complex into a 21st century Nuclear Security Enterprise. "

It's a boondoggle, a broken window, a way to turn printed cash from the Federal Reserve into a thing that pumps money into the US economy. An agency with the role of spending budget on a vague list of tasks.

The reality is very dull: the USA doesn't build nuclear power stations, the nukes haven't been expanded since the Cold War, and 'terrorists-with-nukes' is security theatre. They get a huge budget, they don't have stuff to spend it on, so they buy a supercomputer from a US vendor, which is only IBM these days. They run benchmarks on it, take pride in it, and then do nothing with it because it has no purpose.

They could instead build a really useful large distributed computing system for Universities across the USA to use, that would be a productive use of Fed created magic money.

Re:Here, it's even in their mission statement (1)

Jeremy Erwin (2054) | about a year ago | (#43332833)

And yet you need an ever increasing super computer to track and simulate them???

In the old days, we just detonated a bomb or two if we needed experimental data. Since the test-ban treaty, however, numerical simulations have been used in lieu of physical experiments. I suspect that the accuracy of numerical simulations is a closely guarded secret, and the DOE hasn't yet decided that the present state of the art can't be improved upon.

There is a rival benchmark, Graph 500 [graph500.org]. Roadrunner isn't on it. Neither is Cielo.

But, it's intended to simulate a different sort of problem set.

And yet another perspective comes from Intel’s John Gustafson, a Director at Intel Labs in Santa Clara, CA, “The answer is simple: Graph 500 stresses the performance bottleneck for modern supercomputers. The Top 500 stresses double precision floating-point, which vendors have made so fast that it has become almost completely irrelevant at predicting performance for the full range of applications. Graph 500 is communication-intensive, which is exactly what we need to improve the most. Make it a benchmark to win, and vendors will work harder at relieving the bottleneck of communication.”

The Case for the Graph 500 – Really Fast or Really Productive? Pick One [insidehpc.com]

Re:Top supercomputer is Google? (5, Informative)

friedmud (512466) | about a year ago | (#43328451)

I've worked for the DOE for quite a few years now writing software for these supercomputers... and I can guarantee you that we use the hell out of them. There is usually quite a wait to just run a job on them.

They are used for national security, energy, environment, biology and a lot more.

If you want to see some of what we do with them see this video (it's me talking):

http://www.youtube.com/watch?v=V-2VfET8SNw [youtube.com]

A pellet stress simulation? (-1)

Anonymous Coward | about a year ago | (#43328537)

And Matlab does very similar simulations every day on humble PCs:
http://www.youtube.com/watch?v=TvlIfSlLB0c

I myself do a mass of finite element simulations on hardware that isn't even good enough to run old versions of Crysis.

Re:A pellet stress simulation? (5, Informative)

friedmud (512466) | about a year ago | (#43328565)

I don't get it -- are you looking for a Funny mod? You linked to a 2D heat transfer simulation done in Matlab. Did you even watch the video?

The second simulation (of a full nuclear fuel rod in 3D) was nearly 300 million degrees of freedom and the output alone was nearly 400GB to postprocess. It involves around 15 fully coupled, nonlinear PDEs all being solved simultaneously and fully implicitly (to model multiple years of a complex process you have to be able to take big timesteps) on ~12,000 processors.

Matlab isn't even close.
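
For readers wondering what "fully coupled, fully implicit" means in practice, here is a toy Python/NumPy sketch of the idea (a hypothetical two-equation model problem, not the production fuel code): backward Euler with a Newton solve at each step, which is what lets implicit codes take the big timesteps mentioned above.

    import numpy as np

    # Toy "fully coupled, fully implicit" solve: backward Euler on two made-up
    # coupled nonlinear ODEs, with a Newton iteration at every timestep.
    def residual(u_new, u_old, dt):
        x, y = u_new
        # dx/dt = -x*y,  dy/dt = x - y**2  (discretized with backward Euler)
        return np.array([(x - u_old[0]) / dt + x * y,
                         (y - u_old[1]) / dt - x + y ** 2])

    def jacobian(u_new, dt):
        x, y = u_new
        return np.array([[1.0 / dt + y, x],
                         [-1.0, 1.0 / dt + 2.0 * y]])

    u = np.array([1.0, 0.5])
    dt = 1.0                      # implicit methods stay stable with big steps
    for step in range(5):
        u_new = u.copy()
        for _ in range(20):       # Newton: solve J * du = -R, then update
            r = residual(u_new, u, dt)
            if np.linalg.norm(r) < 1e-10:
                break
            u_new -= np.linalg.solve(jacobian(u_new, dt), r)
        u = u_new
        print(step, u)

Production multiphysics codes follow the same pattern, except the unknown vector has hundreds of millions of entries and the linear solve inside each Newton iteration is itself distributed across thousands of processors.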

And it didn't need to be (-1)

Anonymous Coward | about a year ago | (#43328643)

"The second simulation (of a full nuclear fuel rod in 3D) was nearly 300 million degrees of freedom and the output alone was nearly 400GB to postprocess. It involves around 15 fully coupled, nonlinear PDEs all being solved simultaneously and fully implicitly (to model multiple years of a complex process you have to be able to take big timesteps) on ~12,000 processors."

And it didn't need to be. I can use 1000 times more nodes in an FE analysis and soak up power too. Why would I? That would be dumb! You're simulating simple heat transfer and simple expansion, NOTHING MORE, no different than any other chemical process simulation in any other factory. Just with a lot more nodes.

It's also an arbitrary simulation serving no purpose. You said "what if that panel is broken right there" and then ran a simulation with a stupid number of nodes to soak up a computer. But the pellet was made, it exists, it didn't need your simulation to be made, and the simulation makes zippo difference. You can run any number of similar simulations with the damage in an infinite number of places or combinations of places, and it makes zip difference to the world because you don't know where each pellet is damaged. So NONE of your simulations apply to the actual pellet.

Their mission statement is absolutely clear. Turn Cold War spending into security theatre spending, and that's your job.

Re:And it didn't need to be (2)

friedmud (512466) | about a year ago | (#43330761)

I know I shouldn't respond to AC's but I'm going to anyway:

And it didn't need to be.

As far as geometry goes, it did need to be that detailed. Firstly, the pellets are round and to get the power and heat transfer correct you have to get the geometry correct. Also, pellets have small features on them (dishes on top and chamfers around the edges) that are put there on purpose and make a big difference in the overall response of the system (the dishes, in particular, reduce the axial expansion by a lot). So the detailed geometry is a very important part of this simulation. But that's not the only reason why it's large.

Your simulating a simple heat transfer and simple expansion, NOTHING MORE, no different that any other chemical process simulation in any other factory. Just with a lot more nodes.

I already explained how that is not the case. These are fully-coupled, fully-implicit multiphysics calculations. It is _not_ just heat conduction going on. Very complicated processes like fission gas creation, migration and release, fission-induced and thermal creep, and fission product swelling are all involved. Plus the heat conduction and solid mechanics and thermal contact and mechanical contact and fluid flow model (on the outside of the pin) and conjugate heat transfer. All of these processes feed and are impacted by each other. These are NOT simple calculations.

It's also an arbitrary simulation serving no purpose. You said "what is that panel is broken right there' then ran a simulation with a stupid number of nodes to soak up a computer. But the pellet was made, it exists, it didn't need your simulation to be made and the simulation make zippo difference. You can run any number of similar simulations with the damage in an infinite number of places or combination of places, and it makes zip difference to the world because you don't know where each pellet is damaged. So NONE of your simulations apply to the actual pellet.

Actually, you are very wrong. Firstly, the Missing Pellet Surface problem is a huge problem in industry. What we can do with simulation is explore boundaries of how much tolerance there can be for such missing surfaces. We can vary the missing surface size and run thousands of calculations to determine the sizes that operators need to worry about. They can then adjust their QA practices to take this information into account. We can also run simulations of full reactors and stochastically sprinkle in defect pellets and show the overall response of the system which can help in understanding how to bring a reactor back up to full power in a safe way after refueling.

As for "that pellet exists"... firstly, that's not true... but even if it did, doing experiments with nuclear fuel is _very_ costly and takes years (that is something else we do at INL). In order to better target our experimental money, we do simulation to guide the experiments.

Their mission statement is absolutely clear. Turn cold war spending into security theatre spending and that's your job.

I don't work in security.... there are many national labs, all with different missions, but they _all_ do non-security work. They all work with US industry to solve some of the toughest problems on the planet. They are all full of extremely smart people and they are all working to add to the competitive advantage of the US. I'm sorry that you feel that way, but if you are interested in learning more about the national labs you should get a hold of me.

Re:A pellet stress simulation? (1)

martin-boundary (547041) | about a year ago | (#43328671)

Nevermind the ACs, that's pretty impressive. Although my first inclination would be to ask how sensitive the sim results are to the amount of detail being modeled. Mathematicians and physicists come up with a lot of approximations to simplify computations and reduce complexity; do those kinds of fully detailed simulations confirm the approximate answers?

Re:A pellet stress simulation? (0)

Anonymous Coward | about a year ago | (#43328723)

Indeed it is impressive. Do old pellets have smooth walls like the simulation? Can you measure the internal structures and check they match the simulation? Do old pellets have nice filleted sealed ends like that? If their walls are so smooth and perfect, why do they fail? I guess with that depth of modelling, you can calculate the failure to the 8th decimal place. Do you have the input data to the 8th decimal place? When a pellet is damaged, does its behavior change at the 8th decimal? Do they stop to run a simulation on the actual damaged pellet, with the actual measurements, before sending it off to be stripped and replaced?

Re:A pellet stress simulation? (0)

Anonymous Coward | about a year ago | (#43329869)

8th decimal place

Indeed, that sounds like a terrible idea... with so many calculations you're begging for rounding errors if you work in single-precision. Which is why no one does it.

By the way, if you're the same AC as above? Matlab does everything double-precision by default too!

Re:A pellet stress simulation? (1)

friedmud (512466) | about a year ago | (#43330989)

Pellets, as manufactured, are _very_ smooth. This is a decent overview I just found from Google: http://www.world-nuclear.org/info/Nuclear-Fuel-Cycle/Conversion-Enrichment-and-Fabrication/Fuel-Fabrication/#.UVmkjas5yZc [world-nuclear.org]

They start life as powder and then are packed in a way that makes them smooth.

However, just as in any kind of manufacturing: defects happen. A working reactor will have over a million pellets in it. Somewhere in there one is going to be misshapen.

Some of what we can do is run a ton of statistically guided calculations to understand what kind of safety and design margins need to be in place to keep problems from occurring. We can also look at modifying the design of the pellets to ensure safer operation. Both of these things are very difficult (and costly) to do experimentally.

My lab (INL) does a lot of experimental fuel work... but we use these detailed simulations to guide the experiments so we can use our money more wisely. It literally takes years to develop a new fuel form, manufacture it, cook it in an experimental reactor, let it cool down, slice it open and see what happened. Using these detailed simulations we can do a lot of that "virtually" to help them decide on experimental parameters so that at the end of that whole sequence they have a bunch of _very_ good experimental results instead of half of them just being failures...

Also, we do actually have a bunch of detailed experimental results to compare our simulations to. Even with this fidelity of modeling we are still not able to perfectly capture what happens in all of those experiments. Even more detailed models (like the multiscale one in the video) need to be developed to be able to truly predict all the complex phenomena that go on in nuclear fuel.

There is still a LOT more work to do...

Re:A pellet stress simulation? (1)

friedmud (512466) | about a year ago | (#43330913)

Thanks!

Certainly the nuclear reactor industry has done "just fine" without these detailed calculations for the last 60 years. Where "just fine" is: "We've seen stuff fail over the years and learned from it and kept tweaking our design and margins to take it into account". They have used simplified models to get an idea of the behavior and it has worked for them (as far as the reactors run safely and reliably).

However, the "margins" are the name of the game here. If you can do more detailed calculations that take into account more physics and geometry you can reduce the margins and provide a platform for creating the next reactor that is both more economical and safer. If you can increase the operating efficiency of a nuclear reactor by even 1% that is millions of dollars. If you can keep something like Fukushima from happening that is even more money (some would say "priceless").

The approximate answers (using simplified models) are good - they are in the ballpark. But if you compare their output to experimental output (which we have a LOT of... and it is VERY detailed) the simplified models get the trends right... but miss a lot of the outlier data. That outlier data is important... that's where failure happens. With these detailed models we get _much_ closer to the experimental data.

To get even closer to the experimental data we have to get even more detailed. The movie showed some of our early work in multi-scale simulation: where we were doing coupled microstructure simulation along with the engineering scale simulation. That work is necessary to get the material response correct to get even closer to the experimental data.

Ultimately, if we can do numerical experiments that we have a great amount of faith in, it will allow us to better retrofit existing reactors to make them more economical and safe and design the next set of reactors.

Re:A pellet stress simulation? (0)

Anonymous Coward | about a year ago | (#43329583)

"The second simulation (of a full nuclear fuel rod in 3D) "

i.e. a pipe with sealed ends.

"was nearly 300 million degrees of freedom"
You chose the number of nodes to max out your supercomputer; you don't know the actual measurements of any actual fuel rod to even a fraction of those. Adding extra nodes like that does not give you more data, it gives you more numbers. Your end output is a graph of temperature and pressure over time, which is the same as if you ran it with 10,000 degrees of freedom, and indeed the same calculations are performed over and over again in every factory handling any reagent in any pipe.

" on ~12,000 processors"
And when you have more processors you'll calculate it with 600 million degrees of freedom and still display it on a graph with a 200-pixel Y axis, and it will be the same graph, because you don't have more data; you're just calculating a finer mesh from the limited data you have.

Just because you can use a supercomputer to calculate pressure/temperature in a simple tube-shaped reaction vessel doesn't mean you needed a supercomputer. And thank heavens for that! If you needed a supercomputer for that, factories wouldn't be able to calculate their reactions!

Re:A pellet stress simulation? (2)

onyxruby (118189) | about a year ago | (#43330603)

The fact that the result was displayed on a graph of 200 pixels as a summary for the public has jack to do with the production use of the data. Do you think businesses only produce reports for shareholder meetings, and banks only look at pie charts when making decisions on billions of dollars of assets? Your criticism is disingenuous at best and has nothing to do with the working product of the supercomputer.

As for the degrees of freedom, you have to recall that their working needs are different from yours. They require greater accuracy and the ability to work within a given time frame in a logistically workable manner. They took advantage of the resources they had and got the greatest level of accuracy they could by using all of those resources. In other words they wrote their program to take full advantage of the supercomputer that they had at their disposal.

You're also assuming that the single given job you have chosen to criticize is the only job that the supercomputer runs, which is a foolish assumption when you should know that the supercomputer runs many types of jobs. In this case the job represented is one that can take advantage of the DOE's available resources for a given problem, and be safely declassified for public consumption. Do you think the people working on this are going to throw away their careers and go to prison to make a point on Slashdot?

I get the impression you have never worked with large scale computing needs and have only ever worked in a math lab in a University somewhere.

Re:A pellet stress simulation? (1)

friedmud (512466) | about a year ago | (#43330789)

See my response above about the fidelity of the calculation.

Industry has been chomping at the bit for decades to get to detailed calculations like this. If you can save a nuclear reactor 1% of its operating cost... that is millions of dollars. Higher fidelity = more money in our economy.

Dear Amazon/Microsoft/Mathworks (0)

Anonymous Coward | about a year ago | (#43330179)

Reading down there is a paper running Linpack on Hadoop and showing it scales excellently.
http://www.st.ewi.tudelft.nl/~iosup/ec2perf-sci-comp09cloudcomp.pdf

And you can see from the pictures that IBM's supercomputer is really no more computing than a datacenter floor, of which Amazon, Microsoft (and even Facebook) have many much larger examples in their EC2 and Hadoop-compatible clouds. They also have far newer kit than the 5-year-old processors. Cell processors are not magic; IBM used them because IBM makes them for Sony.

And MS/Amazon are marketing cloud computing.

So wouldn't it be amazing marketing if they cleared enough users off their EC2 cloud, ran Linpack on it and declared their cloud the worlds fastest supercomputer?

I bet Microsoft would love bragging rights over IBM, and even Amazon wants to market EC2. It's not a difficult target to beat; there's really nothing special here, it's all just parallel linear algebra and Hama has done the legwork.

Also the radioactivity mesh calculation can certainly be done in Matlab. Wouldn't it be a nice promotion to show the very same simulation done on a humble PC in Matlab, calculating the same graphs as his simulation run on the supercomputer? You won't run 300 million degrees of freedom, but then it doesn't need it; that was just busy work. The output graph will be the same. Sort of "Matlab, bringing the power of a supercomputer to your PC"... or "PCs are modern-day supercomputers", a real marketing win.

Re:Top supercomputer is Google? (4, Insightful)

ceoyoyo (59147) | about a year ago | (#43328611)

Are you somehow under the impression that these supercomputers are used to count nukes and keep track of their addresses?

Nuclear weapons have things like plutonium and uranium in them. The essential part of those is that they're radioactive. That means they decay. So yes, they do change over time. Since the US has agreed not to go firing the things off to see if they still work, the supercomputers are used to simulate the decay process and firing to see if they still work, what the yield is, and how long they're likely to keep working.

It's kind of embarrassing when the president says "turn them into a radioactive parking lot!" after North Korea nukes San Francisco, and the retaliatory strike is a bunch of duds.

Like Clockwork (-1)

Anonymous Coward | about a year ago | (#43328663)

". The essential part of those is that they're radioactive. That means they decay. So yes, they do change over time.... the supercomputers are used to simulate the decay process and firing to see if they still work "

Indeed, so predictable we make atomic clocks out of them. Of course it doesn't take a supercomputer to make an atomic clock because radioactive decay is simple math.

"Are you somehow under the impression that these supercomputers are used to count nukes and keep track of their addresses?"
Keep in mind you're paraphrasing *their* mission statement in a comic way. That is what *they* say their job is.

"Atomic" clocks don't use radioactive decay.... (3, Informative)

Ellis D. Tripp (755736) | about a year ago | (#43329645)

They rely on the resonant frequency of atoms in metal vapors (Cesium or Rubidium), or the output of a hydrogen maser.

http://en.wikipedia.org/wiki/Atomic_clock [wikipedia.org]

Radioactive decay is a chaotic process. So chaotic that it can be used as the basis for a random number generator. Just what you DON'T want in a precise time/frequency reference.

http://www.fourmilab.ch/hotbits/ [fourmilab.ch]

Fair but still predictable as clockwork (0)

Anonymous Coward | about a year ago | (#43329795)

Fair point, and if you take a million atoms of a radioactive element, is the half life chaotic? What about a billion? Still random and chaotic? And a trillion? Random and chaotic?

So is your weapons-grade nuclear fuel degrading in a chaotic manner or a predictable manner? Of course it's a predictable manner; the half-life is known and unchanged by pressure, temperature, etc. You can say it's chaotic at the atom level, but you don't have the data at the atom level, and the effect is macro, well defined, and stated as a half-life.

You see, the 'like clockwork' rhetorical point is conceded, but actually, the real point stands. It's a predictable system, which is why they can predict it now, just as they could predict it with 1990s supercomputers. Moore's law is brutal: the problems stay the same size, but the computers needed to solve them become cheaper and cheaper at an insane rate.
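
To put a number on "predictable": bulk decay follows the standard exponential law, so, taking Pu-239's published half-life of roughly 24,100 years as an illustrative assumption, only a tiny and entirely deterministic fraction decays over a stockpile's lifetime:

    import math

    # N(t) = N0 * 2**(-t / t_half): bulk decay is smooth and deterministic
    # even though individual decays are random.
    t_half = 24_100.0      # years, approximate Pu-239 half-life (assumed)
    t = 50.0               # years in the stockpile
    remaining = 2 ** (-t / t_half)
    print(f"{(1 - remaining) * 100:.3f}% decayed after {t:.0f} years")  # ~0.144%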

It isn't that the problems magically scale up to the largest supercomputer you can make, and thus you get the budget for the largest supercomputer and get Linpack bragging rights!

Mod suppression aside, the point is clear (-1)

Anonymous Coward | about a year ago | (#43329407)

Mod point suppression aside, the decay of a fissionable material is not a chaotic system that needs constant simulation. It's a predictable, timely thing, and that's why we can use it for atomic clocks.

Let me ask you, does your simulated warhead work now? Does your simulation say it will work next year? Does your simulation say it will work in 10 years time? Are you claiming that a new computer running the same simulation faster in 10 years will generate a different result?

If you can run the simulation today on a computer of that power, can you run it on a computer of that power in ten years time? Of course you can.

And likewise if you can run that simulation on a supercomputer from the 90's, then it can run on a computer of comparable power in 2013. The problem hasn't gotten 1000 times more complex, it's the SAME problem. So of course you can.

OK, the last of the Crays from the 90s was the T3E-1350, which could reach 3 teraflops using 2176 processors and had 544 GB of memory.
http://www.cs.rit.edu/usr/local/pub/wrc/courses/arch/machines/cray/T3E-1350.pdf

A Core i7 will do about 68 GFLOPS, so about 50 Core i7 boxes; give them 16GB each = 800GB of RAM.

Or we could go the GPU CUDA route with GeForce GTX 500-series cards; they are about 1.5 teraflops per unit, so a few of those in a single PC are comparable.

Then why are you buying a supercomputer? Why aren't you saving the taxpayers' money instead? Their mission statement makes the reason clear. It's about spending budget, not physics.
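
For reference, the peak-flops arithmetic above works out roughly as claimed, with the caveat that it compares peak flops only and ignores memory bandwidth and interconnect (much of what a supercomputer actually buys); the GFLOPS figures are the rough ones quoted above:

    # Peak-flops-only comparison using the rough numbers from the post.
    T3E_GFLOPS = 3000.0        # Cray T3E-1350 peak (~3 TFLOPS, 2176 CPUs)
    CORE_I7_GFLOPS = 68.0      # a single desktop Core i7
    GTX_GFLOPS = 1500.0        # a single GeForce-class GPU (single precision)

    print(T3E_GFLOPS / CORE_I7_GFLOPS)   # ~44 desktop CPUs to match peak flops
    print(T3E_GFLOPS / GTX_GFLOPS)       # 2 such GPUs to match peak flops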

Re:Mod suppression aside, the point is clear (4, Informative)

cnaumann (466328) | about a year ago | (#43329783)

Atomic clocks have absolutely nothing to do with radioactive decay. http://en.wikipedia.org/wiki/Atomic_clock [wikipedia.org]

Re:Top supercomputer is Google? (1)

PhamNguyen (2695929) | about a year ago | (#43328773)

These systems do different things.

The MapReduce framework cannot do every possible algorithm efficiently. It can only do a certain subset of problems. Supercomputers are designed for problems that require "tight coupling" between processors. A typical problem is multiplying two large matrices together. MapReduce cannot do this kind of problem efficiently.
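
To see where the inefficiency comes from, here is a minimal in-memory Python sketch of the naive one-pass MapReduce matrix multiply (just the mapper/reducer logic, not actual Hadoop code): every element of A is emitted once per column of B and every element of B once per row of A, so the shuffle traffic blows up -- exactly the communication that a tightly coupled MPI code avoids.

    from collections import defaultdict

    def mapreduce_matmul(A, B):
        n, m, p = len(A), len(A[0]), len(B[0])
        shuffle = defaultdict(list)              # key (i, j) -> joined records
        # "map" phase: note the replication factor
        for i in range(n):
            for k in range(m):
                for j in range(p):               # A[i][k] emitted p times
                    shuffle[(i, j)].append(('A', k, A[i][k]))
        for k in range(m):
            for j in range(p):
                for i in range(n):               # B[k][j] emitted n times
                    shuffle[(i, j)].append(('B', k, B[k][j]))
        # "reduce" phase: join on k and sum the products
        C = [[0.0] * p for _ in range(n)]
        for (i, j), records in shuffle.items():
            a = {k: v for tag, k, v in records if tag == 'A'}
            b = {k: v for tag, k, v in records if tag == 'B'}
            C[i][j] = sum(a[k] * b[k] for k in a)
        return C

    print(mapreduce_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
    # [[19.0, 22.0], [43.0, 50.0]]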

Google is your friend (0)

Anonymous Coward | about a year ago | (#43328791)

http://www.norstad.org/matrix-multiply/

You can map-reduce matrix multiply across many nodes just as you can split it across many cores in a supercomputer.

Re:Google is your friend (1)

PhamNguyen (2695929) | about a year ago | (#43328933)

No need to be so rude. I happen to have worked with Hadoop-like architectures, so I know what I'm talking about. The site says

This algorithm was developed as an exercise while the author was learning MapReduce.

It doesn't give any big-O running time bounds. Even if the algorithm could achieve the standard big-O bounds for matrix multiplication, the overhead for Hadoop is still much higher than the overhead for CPUs in a supercomputer to talk to each other.

To put this in context (0)

Anonymous Coward | about a year ago | (#43329021)

"No need to be so rude"
Google *is* your friend. You can parallelize matrix multiply, and indeed you need to; since modern supercomputers are parallel beasts, they need to run parallel code. If your algo doesn't scale, then your supercomputer doesn't scale.

"the overhead for Hadoop is still much higher than the overhead for cpu's in a supercomputer to talk to each other"
Let's put this in context. The supercomputers used to design and simulate Cold War missiles have less processing power than a modern dumbphone. Even the Crays of the '80s, as the Cold War was ending, were 8 cores x 330 MFLOPS = 2.6 gigaflops, or about 1/30th the power of a modern desktop PC.

Each *NODE* on a MapReduce network is 30 times more powerful than the WHOLE supercomputer used in the original design could EVER have been, and internally each node has far faster (x10,000) interconnect between the cores. Obviously I'm talking about CPU nodes; if the nodes had a GPU, we'd be talking 1000 times more power still.

GP's simulation was a simple finite element model; it's far simpler than most I've seen. Basically a simple tube, without even the pipes and so on that you normally see in such analysis:
http://www.youtube.com/watch?v=T56-cQ5ZDzc

He doesn't even need to map-reduce that; he gains nothing by increasing the number of nodes to a silly number other than to soak up processing power. His data doesn't support that number of nodes, and it's the sort of calculation a desktop PC running cheapo LISA will even do.

And if he wants to run silly numbers of nodes, well, no problem, because finite element analysis scales nicely and even has MapReduce implementations (Google is your friend, remember).

Re:To put this in context (1)

PhamNguyen (2695929) | about a year ago | (#43329069)

My original point was that supercomputers do things that MapReduce architectures are not designed to do. I'm not sure how half of what you just wrote relates to that point. You seem to be saying that computers are so fast now that we only need one machine. In that case, MapReduce vs supercomputers is irrelevant anyway. Instead of putting what I'm saying in some "context" that is completely irrelevant, why don't you try to understand the original point: MapReduce is not designed for tight coupling. You use MPI or related technologies for that.

You say that finite element has MapReduce implementations. I googled finite element hadoop [google.com] and the first result was "What is Hadoop not good for". Why don't you link to the MapReduce implementation of finite element instead of being so insufferably smug?

You might also try googling linpack hadoop [google.com] . The first result explains why there is no Hadoop implementation of linpack.

Re:To put this in context (0)

Anonymous Coward | about a year ago | (#43329259)

"My original point was that supercomputers do things that MapReduce architectures are not designed to do."
Specifically, you listed matrix multiply of large matrices, I did a Google and found someone showing you how.

"I'm not sure how half of what you just wrote relates to that point."
They're both parallel computers, if your algo doesn't scale it won't scale as your supercomputer gets more cores because interconnect times also increase.

"I googled finite element hadoop and the first result was "What is Hadoop not good for"",
I googled "finite element analysis map reduce" and got an IEEE paper on it.

"Why don't you link to the map reduce implementation of finite element instead of being so insufferably smug"
'Google is your friend' is a phrase simply to point you to the wealth of research out there, it's not intended to be smug, simply to point you to the wealth of info available.

"be saying that computers are so fast now that we only need one machine"
Well yes, the CPU alone is 30 times more powerful than the supercomputer used to design/simulate the Cold War weapons originally, so a PC can simulate/design 30 such devices simultaneously in parallel. The GPU is insanely more powerful. Google is your friend there, but I'll also point to Nvidia's own CUDA core paper for its older (2010) kit:

http://www.nvidia.com/content/GTC-2010/pdfs/2137_GTC2010.pdf

"You might also try googling linpack hadoop [google.com]. The first result explains why there is no Hadoop implementation of linpack."

OK, lets do that:
" There is a project called Hama (I think) which supports some linear algebra operations on Hadoop. However, it will only be effective for a subset of linear algebra operations."

That comment is from 2008, so let's see what happened to HAMA... it got implemented in 2010:
http://csl.skku.edu/papers/CS-TR-2010-330.pdf

Linpack is a 100x100 linear system calc that can be done in Java, so you can run Linpack on Hadoop. However, if it's only a 100x100 matrix, it's way too small to be worth it (it was designed for 1979 computers with very limited memory and processors). However, the Top 500 uses *parallel* Linpack, which needs a BLAS library, and that will run on HAMA:
http://www.netlib.org/utk/people/JackDongarra/PAPERS/hpl.pdf

So you should be able to run Linpack on these clusters, and indeed I can find EC2 benchmarks but not Google ones:
http://www.st.ewi.tudelft.nl/~iosup/ec2perf-sci-comp09cloudcomp.pdf

So from the graph, with a small cluster of 16 cores, Linpack starts to reach 80% efficiency and rising, indicating it scales well.

Re:To put this in context (1)

Jeremy Erwin (2054) | about a year ago | (#43333717)

So from the graph with a mall cluster of 16 core, Linpack starts to reach 80% efficiency and rising. Indicating it scales well.

What? Looks as if the efficiency is falling. Collapsing, even.

Titan, by the way, has an RMax of 17,590,000 GFlops, and a RPeak of 27,112,500 GFlops (64 percent). Your little 16 core EC2 Cluster has a peak performance of 88 GFlops.

The EC2 16xc1.xlarge configurations are about 30 percent of peak, if I'm reading table 7 correctly. It may be cost effective for small scale simulations, but some projects demand petascale resources, and EC2 can't be expected to scale up that far. The Top500 list is designed to encourage computer designers to scale up to very large configurations efficiently. And the warehouse computing vendors, while they may have many more nodes, can't connect them efficiently enough to attract the interest of those who actually buy time on "Blue Waters" or other exclusively civilian supercomputers.

Re:To put this in context (1)

PhamNguyen (2695929) | about a year ago | (#43344099)

On multiplication, the issue is efficiency, not whether it can be done. Recall my original claim was

A typical problem is multiplying two large matrices together. MapReduce cannot do this kind of problem efficiently.

You seem to have ignored the questions I raised about the efficiency (in terms of big O running time, and overhead) of the MapReduce implementation.

I saw the IEEE paper on finite element analysis and MapReduce, it was completely contentless (maybe Google wasn't being such a good friend to you as you thought).

HAMA is interesting. Technically it isn't a MapReduce framework, which confirms my point about the limitations of MapReduce (in fact it was designed to get around these limitations). While I was correct in what I said about MapReduce, HAMA does show that it is possible to do supercomputer-like calculations on commodity clusters, but only by dumping the restrictions of MapReduce.

Everything you said about linpack and hadoop is nonsense.

Roadrunner (1)

Anonymous Coward | about a year ago | (#43328279)

I worked on Roadrunner. It was a pain to program, but I'm sorry to see it go. The Cell processor was ahead of its time...

No need to shut it down (3, Insightful)

kurt555gs (309278) | about a year ago | (#43328323)

At today's prices, I'd have it farming Bitcoins.

Re:No need to shut it down (4, Insightful)

thygate (1590197) | about a year ago | (#43328603)

At 2 megawatts of power you'll probably be broke before you mine your first coin.

Do the math! (1)

gentryx (759438) | about a year ago | (#43332825)

Roadrunner consists of 6480 [wikipedia.org] QS22 Blades. Using Cellminer [github.com] each will yield approx. 56 MHash/s, or 363 GHash/s in total. Using the Bitcoin profitability calculator [bitcoinx.com] we can then estimate that one will gain ~27 BTC/day (ATM $2667) while paying $3360/day for power (assuming cheap $0.07/kWh). So yes: mining on Roadrunner would not be cost-effective.
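
Re-deriving those figures takes only a few lines; the hash rate and power cost fall out directly, while the ~27 BTC/day and its dollar value come from the linked calculator (they depend on the 2013 difficulty and exchange rate) and are taken as given:

    blades = 6480
    mhash_per_blade = 56
    total_ghash = blades * mhash_per_blade / 1000.0      # ~363 GHash/s

    power_kw = 2000.0            # ~2 MW draw, as mentioned upthread
    price_per_kwh = 0.07
    power_cost_per_day = power_kw * 24 * price_per_kwh   # $3,360/day

    revenue_per_day = 2667.0     # dollar value of ~27 BTC (from the post)
    print(total_ghash, power_cost_per_day, revenue_per_day - power_cost_per_day)
    # ~362.9 GHash/s, $3360/day in power, about a $693/day loss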

Re:Do the math! (0)

Anonymous Coward | about a year ago | (#43333101)

Mining bitcoins is pretty much only cost-effective if you live in student housing and have free electricity, or if you are using the electricity for heating anyway.

Re:Do the math! (0)

Anonymous Coward | about a year ago | (#43333611)

Or you are a government agency that gets the taxpayers to pay for the electricity, but passes regulations to privatize the bitcoins so mined.

Whaddya expect? (1, Funny)

Tablizer (95088) | about a year ago | (#43328377)

you have "flop" in the name

Exotic Architecture Failure (0)

Anonymous Coward | about a year ago | (#43328433)

Posting Anonymously on purpose...

Roadrunner was a failure from the beginning. We learned quite a while ago that there is more to a supercomputer than just hardware... you have to have software that uses that hardware. Buying into exotic architectures without thinking about the consequences that's going to have for the people creating the software that needs a supercomputer is a terrible idea.

Roadrunner was as bad as it gets in this regard... proprietary accelerators that caused you to hardcode your software directly for running on that machine and none other at all.

But, it appears that we haven't learned our lesson: http://hothardware.com/News/Solder-Problems-Trip-Up-Titan-Supercomputer-Delay-Final-Certification/ [hothardware.com]

At least Titan's Nvidia cards can be addressed using OpenCL and/or CUDA... so there is some hope that if they ever get it to work properly they might have some software to run on it...

Re:Exotic Architecture Failure (1)

Meeni (1815694) | about a year ago | (#43330859)

I hesitated between modding you up and answering. What you say is somewhat true, but wrong nonetheless.

It is true that Roadrunner was very difficult to use. The consequence is that it was used to run the nuclear stockpile application it was designed for, and that's it. Nothing else, so to speak. And even running that single application proved difficult.

Now, the machine pioneered the accelerator field. It was the testbed for the new generation of computers that are arriving now. In some sense, its failure enabled current and future success, so it was far from useless, even if its production record is not excellent.

As for the Titan issues, the machine is being remanufactured at no cost by the vendor, so it should be a short-lived problem.

Re:Exotic Architecture Failure (0)

Anonymous Coward | about a year ago | (#43332059)

I understand what you're saying, but I still believe that Roadrunner was unnecessary in this progression.

GPU incorporation in supercomputers had already started when Roadrunner was built. Intel was already demonstrating processors with hundreds of cores in them (later to become MIC... and finally launched as Xeon Phi). I believe all of that would have happened without the need to buy Roadrunner. Nvidia CUDA already had a lot of traction, and OpenCL was already on the horizon.

If anything, I think the best lesson we learned (re-learned!) from Roadrunner was that buying into proprietary accelerators without a decent programming model is a bad idea. Nvidia CUDA walks this line well... the programming model still isn't the best, but at least there is one! Intel MIC is _much_ better in this regard... allowing "normal" programming models to be accelerated.

Hopefully, we've finally learned this lesson and won't make this mistake in the future.

Here's a pretty good writeup on a similar situation in Japan (obviously a little one-sided because of the source):

http://blogs.nvidia.com/2012/02/a-japanese-tale-of-two-supercomputers/ [nvidia.com]

throw away mentality (actual article link) (2, Informative)

Nyder (754090) | about a year ago | (#43328439)

I guess it's not new and shiny anymore, so we can just throw it away.

I did want to read the actual article, but the only link is to a 2008 article.

Fail or what?

http://news.sky.com/story/1071902/supercomputer-pioneer-roadrunner-to-shut-down [sky.com]

That is the article. And I can see why they are getting rid of it: it's not as power-efficient as newer computers.

Re:throw away mentality (actual article link) (1)

Tastecicles (1153671) | about a year ago | (#43328491)

What I've found odd about power/cycle efficiency is that it doesn't seem to be improving anywhere except in portable gear.

Example: I'm typing on a dual-core AMD laptop (E-350 die) which is entirely powered by a 50 watt power brick. That covers the processor, board, optical drive, two hard drives, a bank of 7 flash drives on a bus-powered hub, and a 15.3" widescreen panel. THE SAME HARDWARE in a mini-ITX form factor *requires* a 200W PSU just for the board.

How does that work??

Re:throw away mentality (actual article link) (4, Informative)

Nemyst (1383049) | about a year ago | (#43328567)

The manufacturers give themselves a lot of headroom. The last thing they want is for you to start whining at them because the PSU you've bought isn't powerful enough.

Keep in mind that unlike with laptops, the motherboard manufacturer has no idea what you'll be pairing the board with. A low-end, cheap PSU with terrible efficiency may be "rated" at 200W but only put out 100W before crapping out. They also give themselves headroom for people who think the motherboard's rated power requirement also covers everything else (i.e. RAM, CPU, hard drives, etc.).

Actual power usage is far, far below the recommended power output. My computer's sitting idle at a little above 200W, and that's an i5-2500K overclocked with 16GB of RAM, two Radeon HD6950 2GB GPUs, two 7500RPM 3.5" HDDs plus a Vertex 3 SSD, an optical drive, a mouse, a mechanical keyboard requiring double USB ports, a phone recharging, an external eSATA HDD, all running on a full ATX motherboard geared towards power and not efficiency. Oh, and the reading includes two 23" IPS screens with non-LED backlighting (so much more power hungry).

If I remember correctly, full load (Prime95 torture test and FurMark running at the same time) topped out at around 550W, again with a bunch of peripherals plugged in, a 1GHz overclock above stock, and the two screens counted in the total. I'd say that kind of power draw is very much in line with your laptop, considering just how ridiculously more powerful the machine is.

Re:throw away mentality (actual article link) (4, Interesting)

friedmud (512466) | about a year ago | (#43328493)

It costs a _lot_ to keep these computers running (read: millions, with a really big M). The power bill alone is an enormous amount of money.

It literally gets to the point where it is cheaper to tear the machine down and build a new one with better flops per watt than to keep the current one running.
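
For a rough sense of the scale, here is a small back-of-the-envelope sketch; the petaflop and megawatt figures are approximate numbers from the public TOP500/Green500 lists and the power price is an assumption, so treat the output as illustrative only:

# Ballpark figures from the public TOP500/Green500 lists; treat them as
# approximate, illustrative numbers rather than authoritative data.
machines = {
    "Roadrunner (2008)": {"pflops": 1.04,  "megawatts": 2.35},
    "Titan (2012)":      {"pflops": 17.59, "megawatts": 8.21},
}
usd_per_kwh = 0.07   # assumed cheap industrial rate

for name, m in machines.items():
    mflops_per_watt = m["pflops"] * 1e9 / (m["megawatts"] * 1e6)
    yearly_power    = m["megawatts"] * 1000 * 8760 * usd_per_kwh
    print(f"{name:18s} {mflops_per_watt:6.0f} MFLOPS/W, "
          f"~${yearly_power / 1e6:.1f}M/year in electricity")

At roughly five times the performance per watt, a 2012-class machine delivers the same petaflop for a fraction of the electricity, which is the economics behind tearing old systems down.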

Re:throw away mentality (actual article link) (0)

Anonymous Coward | about a year ago | (#43331883)

How about we get them to use it as a full-time bitcoin miner?
If I recall the network spec, it has 10 (?) 10G (?) fibre connections, so it should be able to transfer data fast enough to keep up with the hashes it runs.

I know they would never do it, but I wonder if running it for that could break even on the cost of keeping it going?

Anyone know if anyone is using something of this capacity to mine for bitcoin?

MPK

Wargames (0)

mongrol (200050) | about a year ago | (#43328539)

So essentially this thing was a real-life WOPR.

The server got too old so it wasn't good for... (0)

Anonymous Coward | about a year ago | (#43328685)

The server got too old, so it was no longer good for petafiles.

ummmm guys.... (1)

trum4n (982031) | about a year ago | (#43329727)

How do I buy the now "surplus" parts?

What is a flop? (0)

Anonymous Coward | about a year ago | (#43330049)

Floating-point operations per?

Re:What is a flop? (1)

MXPS (1091249) | about a year ago | (#43330143)

It's per second

Re:What is a flop? (1)

MuH4hA (1579647) | about a year ago | (#43330629)

>> Since the final S stands for "second", conservative speakers consider "FLOPS" as both the singular and plural of the term, although the singular "FLOP" is frequently encountered. Alternatively, the singular FLOP (or flop) is used as an abbreviation for "FLoating-point OPeration", and a flop count is a count of these operations (e.g., required by a given algorithm or computer program). In this context, "flops" is simply the plural rather than a rate, which would then be "flop/s".

Imagine (0)

Anonymous Coward | about a year ago | (#43330423)

a beowulf cluster of these!

Obvious retirement function (1)

tedgyz (515156) | about a year ago | (#43332065)

Bitcoin server

So, I just wanna know... (1)

vandamme (1893204) | about a year ago | (#43344257)

How many FLOPs has it flipped over its career?
