
Cray, Intel To Partner On Hybrid Supercomputer

kdawson posted more than 6 years ago | from the you-can-pet-a-dog-and-you-can-pet-a-cat dept.

Supercomputing 106

An anonymous reader writes "Intel convinced Cray to collaborate on what many believe will be the next generation of supercomputers — CPUs complemented by floating-point acceleration units. NVIDIA successfully placed its Tesla cards in an upcoming Bull supercomputer, and today we learn that Cray will be using Intel's x86 Larrabee accelerators in a supercomputer that is expected to be unveiled by 2011. It's a new chapter in the Intel-NVIDIA battle and a glimpse at the future of supercomputers operating in the petaflop range. The deal has also got to be a blow to AMD, which has been Cray's main chip supplier."

And although it could be used for medical research (0, Troll)

QuantumG (50515) | more than 6 years ago | (#23235976)

it will most likely just be used for more nuclear weapons simulations.

Re:And although it could be used for medical resea (0)

Anonymous Coward | more than 6 years ago | (#23236000)

Actually, high-performance supercomputers are rarely used for nuclear weapons simulations. Yes, that's what it says on the line item, because it's easier to get national defense stuff through the budget than, say, climate change research; but these machines are immensely powerful, and every proposal includes plenty of alternate applications beyond running bomb simulations.

Re:And although it could be used for medical resea (1)

QuantumG (50515) | more than 6 years ago | (#23236010)

Yes, because clearly this whole nuclear weapons research thing is a smoke screen for studying the weather.

Only on Slashdot.

Re:And although it could be used for medical resea (1)

cyphercell (843398) | more than 6 years ago | (#23236044)

puts tinfoil hat on...

Re:And although it could be used for medical resea (2, Interesting)

CRCulver (715279) | more than 6 years ago | (#23236052)

Yes, because clearly this whole nuclear weapons research thing is a smoke screen for studying the weather.

The OP's point is valid: people requesting funding have better success if they can tie their research to defense, even if only in some vague way. As a linguist, I've seen this in my own field. For decades, a world centre for the study of the minority languages of the USSR was Indiana University in Bloomington. The U.S. government gave enormous amounts of funding to the scholars there, who in return just had to write a few pages in their language textbooks about Russian areal studies (local economy of a region, political organization) before proceeding to discussions of grammar, lexicon, and indigenous literature.

Depending too much on defense funding, however, can result in much disappointment when the government changes its priorities. Once the Cold War ended, most of the funding for the study of the former USSR dried up. One might imagine that it has been reassigned these days to study of the Middle East, but seeing how badly the wars in Iraq and Afghanistan are being managed, I somehow doubt the U.S. is investing in areal studies as much as it could.

Re:And although it could be used for medical resea (1)

Chrisq (894406) | more than 6 years ago | (#23236140)

Yes, because clearly this whole nuclear weapons research thing is a smoke screen for studying the weather.
Maybe, if we make that:

Yes, because clearly this whole nuclear weapons research thing is a smoke screen for studying weather control.

Re:And although it could be used for medical resea (1)

F34nor (321515) | more than 6 years ago | (#23236458)

Wasn't 'Coast to Coast' talking about HAARP just the other night?

Re:And although it could be used for medical resea (1)

mako1138 (837520) | more than 6 years ago | (#23236300)

If by weather you mean climate, sure. Don't forget protein folding, physical chemistry, lattice QCD, and materials science. "Stockpile stewardship" is definitely there in the list of supercomputer applications, but there's lots of unclassified work that gets done to improve the world.

Re:And although it could be used for medical resea (1)

maxume (22995) | more than 6 years ago | (#23236574)

The military already has weapons that could completely annihilate civilization several times over. If it takes 2,000 bombs right now, they may well be motivated to get that down to something more manageable, like 100, but at some point even crazy destruction-bent madmen would be satisfied (like if they had a globebuster).

I'm pretty sure that 'useful' amounts of supercomputer capacity aren't taken offline as new capacity comes online; they are simply repurposed to lower-priority topics. So maybe the new one only gets used to make weapons more reliable, but it still frees up the old one to calculate the weather.

ReiserFS (0)

Anonymous Coward | more than 6 years ago | (#23236112)

For when you need to partition your wife.

Most likely? (4, Informative)

Whiney Mac Fanboy (963289) | more than 6 years ago | (#23236158)

it will most likely just be used for more nuclear weapons simulations [emph mine]

The majority (but not all) of the supercomputers on the top 500 supercomputer list [top500.org] are related not to nuclear weapons research, but to meteorological/oceanographic and other scientific uses.

Re:Most likely? (1)

pipatron (966506) | more than 6 years ago | (#23236308)

The majority (but not all) of the supercomputers on the top 500 supercomputer list are related not to nuclear weapons research

Yeah, but what about those NOT on the list...

Re:Most likely? (1)

Whiney Mac Fanboy (963289) | more than 6 years ago | (#23236334)

Yeah, but what about those NOT on the list...

I'd speculate that most of them would be doing crypto-breaking rather than nuclear weapons simulations.

Re:Most likely? (4, Funny)

encoderer (1060616) | more than 6 years ago | (#23236892)

Like W.O.P.R.

Do you think WOPR is studying the climate?

No way.

It spends its spare cycles playing a special version of The Sims where all human life is annihilated and WOPR is the supreme ruler.

Oh, and searching for WOPETTE porn.

Re:Most likely? (1)

smallfries (601545) | more than 6 years ago | (#23237392)

Are you sure you wouldn't prefer a nice game of chess?

Re:Most likely? (1)

segin (883667) | more than 6 years ago | (#23238986)

"Skynet online, processing at 60 teraflops..."

Re:Most likely? (0)

Anonymous Coward | more than 6 years ago | (#23240858)

...per second

Re:Most likely? (2, Insightful)

dreamchaser (49529) | more than 6 years ago | (#23236396)

Sure, but posting actual facts doesn't give the same cheap karma boost as posting something anti-war or anti-nuclear.

Then again, I'm sure people would rather see us blowing up actual bombs as tests rather than simulating them (sarcasm).

Re:Most likely? (1)

TheThiefMaster (992038) | more than 6 years ago | (#23236456)

Surely the November 2007 [top500.org] top500 list would be a better link than the June 2003 one? The computer at the top of the list you link to is only 30th on the most recent one.

Especially since the #1 system has the following in its description:

The upgrading of BGL, notably through the addition of nodes with twice the memory, allows scientists from the three nuclear weapons labs to develop and explore a broader set of applications than the single package weapons science oriented work that has been the mainstay of the machine in the past.

Department of Energy (2, Informative)

flaming-opus (8186) | more than 6 years ago | (#23238026)

DOE, which does the US nuclear weapons simulations, is probably the largest single buyer of capability-class supercomputers, but still a small fraction of the total. Even within DOE, only a minority of systems, albeit a large one, are dedicated to nuke simulation. Sandia, Livermore, and Los Alamos each have 2-3 large nuclear simulation machines (or will admit to that many publicly). Large systems at Pacific Northwest, Oak Ridge, Lawrence Berkeley, and Argonne are used for open science research.

High-end supercomputers are used, in significant ways, for climate research, short-term weather forecasts, seismic modeling, cosmology, fusion research, protein folding, predicting the size of petroleum deposits, automotive and aircraft design, and a host of other engineering codes. Even with all that stated, the piece of the pie chart labelled "other" is 35% of the total.

On the other hand, nuclear weapons simulation is a difficult enough problem, and requires a powerful enough machine, that it subsidizes the design of super-scalable machines that are then sold to other customers for other tasks.

More likely NSA (1)

Foerstner (931398) | more than 6 years ago | (#23240038)

DOE, which does the US nuclear weapons simulations, is probably the largest single buyer of capability-class supercomputers, but still a small fraction of the total. Even within DOE, only a minority of systems, albeit a large one, are dedicated to nuke simulation. Sandia, Livermore, and Los Alamos each have 2-3 large nuclear simulation machines (or will admit to that many publicly). Large systems at Pacific Northwest, Oak Ridge, Lawrence Berkeley, and Argonne are used for open science research.


I suspect that the NSA buys more supercomputing iron than the DOE, but it's impossible to prove that, of course.

Re:Most likely? (1)

lightversusdark (922292) | more than 6 years ago | (#23242964)

The majority of them are simply crunching through the transport equation [wikipedia.org].

You might say that they have general applicability when modelling particular behaviour.
Every Earth Science research institution I have worked with has been largely funded from defence budgets.

Re:And although it could be used for medical resea (1)

azaris (699901) | more than 6 years ago | (#23236296)

it will most likely just be used for more nuclear weapons simulations.

s/nuclear weapons simulations/homeland security boondoggles/

Re:And although it could be used for medical resea (1)

Narpak (961733) | more than 6 years ago | (#23236408)

That, and filtering through all the p0rn (seriously, we need some sort of superduper computer to organize it all for easy access).

Re:And although it could be used for medical resea (0)

Anonymous Coward | more than 6 years ago | (#23236480)

Simulation is generally preferable to live tests.

Can it play global thermonuclear war? (1)

Joe The Dragon (967727) | more than 6 years ago | (#23236998)

Can it play global thermonuclear war?
And what is the backdoor login?

Re:And although it could be used for medical resea (1)

homebrewmike (709361) | more than 6 years ago | (#23237396)

The alternative would be to simply detonate them. Besides being illegal, it's a bit messy.

I'd rather see the physics done in silicon.

AMD worried? (1)

clickclickdrone (964164) | more than 6 years ago | (#23236030)

I'm sure the volume of chips they sell in Crays is a drop in the ocean compared to other channels. It's not like supercomputers are a big seller...

Re:AMD worried? (1)

Aranykai (1053846) | more than 6 years ago | (#23236036)

After their acquisition of VIA and then later ATI, they have established themselves in a larger market than simply performance graphics chips for end users. Heck, every Nintendo product since the GameCube has used ATI hardware.

The last line of that summary is clearly flamebait.

Re:AMD worried? (2, Informative)

pipatron (966506) | more than 6 years ago | (#23236096)

every Nintendo product since the GameCube has used ATI hardware

I'll list them for you:

  1. Gamecube*
  2. Wii

*The company that made the Gamecube hardware was later bought by ATI, so ATI didn't have much to do with that.

Re:AMD worried? (1)

master5o1 (1068594) | more than 6 years ago | (#23236166)

Are you telling me that all these chip company mergers are there to get on Nintendo's good side and start making chips for Nintendo instead of their competitors?

I mean, Company Y makes GC chip, but gets bought by ATI, ATI gets branded on GC.
Nintendo likes the chip, gets ATI to make it for Wii. ATI gets branded on Wii.
AMD buys ATI-- ATI stays as a brand name.

SHIT!! This is off topic... ah well.. who cares.

Aquisition of VIA? (1)

jsoderba (105512) | more than 6 years ago | (#23236110)

VIA Technologies is an independent company and I don't recall any significant talk of a merger with AMD. Since AMD acquired ATI they have little to gain from buying VIA anyway.

Re:AMD worried? (2, Insightful)

dreamchaser (49529) | more than 6 years ago | (#23236104)

It's more about bragging rights and PR/marketing than about volume of chips sold. I doubt AMD is terribly worried as they have much bigger concerns right now.

Re:AMD worried? (2, Insightful)

lakeland (218447) | more than 6 years ago | (#23236124)

AMD might be worried. Cray and similar deals are all about bragging rights, not about sales.

Like that Fujitsu supercomputer... it makes you think 'hey, maybe there is more to Fujitsu than photocopiers...'

I don't know what influences normal customers' perception of a company like AMD. I don't even know who AMD's main customers are: white-box manufacturers? Enthusiasts? So while industry analysts put a lot of weight on these high-profile shifts... well, it might sway public opinion.

Re:AMD worried? (1)

clickclickdrone (964164) | more than 6 years ago | (#23236222)

>it makes you think 'hey, maybe there is more to Fujitsu than photocopiers...'
Interesting to see how different territories have different takes on this. I've never seen or heard of Fujitsu making photocopiers. When I think of them I think of laptops/desktops & hard drives.

Re:AMD worried? (1)

monsted (6709) | more than 6 years ago | (#23236858)

And when I think of them I think of pain and suffering, mostly for people who are unfortunate enough to have bought their laptops/desktops.

Re:AMD worried? (1)

networkBoy (774728) | more than 6 years ago | (#23239414)

I have an old P133 Fujitsu and it is a tank. I dropped it 3 ft onto concrete and all that broke was the status LCD. I also have a pair of Stylistic pen tablets (486 DX4-100) and they rock for what they are.
-nB

Re:AMD worried? (0)

Anonymous Coward | more than 6 years ago | (#23237384)

You have few options if you want to build a supercomputer in your home:

1. Alpha: 64-bit RISC, essentially bug-free.
2. MIPS64: RISC and bug-free, but lacking in hardware capabilities.
3. SPARC64: RISC but buggy, because its instruction set is too complex and overfeatured.
4. PowerPC64: RISC but buggy, because its hardware capabilities are complex and overfeatured.
5. x86-64: not recommended, because it's buggy CISC dragging along compatibility with the deprecated old 8086 architecture and the 32-bit 8087 float unit.

What would the next ideal processor be?

I think a processor with 64-bit and 128-bit floating point and 64-bit integers (32 or 64 registers plus an extra zero-valued register), which no currently known processor provides.

Pipelined, superscalar, out-of-order, speculative.

--- it's the idea of a Spanish boy ---

Re:AMD worried? (0)

Anonymous Coward | more than 6 years ago | (#23239008)

When a project is CPU-hungry, with billions of floating-point operations, 32-bit floats are useless because the small mantissa propagates rounding error.

64-bit and 128-bit floats (a.k.a. float64 and float128) are sufficient for billions of operations: their mantissas are big enough that the rounding error propagates very little, enough for better accuracy.
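
To make the rounding-error point concrete, here is a minimal Python sketch (my illustrative example, not the poster's; the series and iteration count are arbitrary choices):

import numpy as np

# Accumulate 0.1 one million times with a naive running sum.
# Exact answer: 100,000. float32's 24-bit mantissa drifts far
# sooner than float64's 53-bit mantissa.
n = 1_000_000
s32 = np.float32(0.0)
s64 = np.float64(0.0)
for _ in range(n):
    s32 += np.float32(0.1)
    s64 += np.float64(0.1)

print(f"float32: {float(s32):.2f} (error {abs(float(s32) - 1e5):.2f})")
print(f"float64: {float(s64):.6f} (error {abs(float(s64) - 1e5):.2e})")

On a typical run the float32 sum lands hundreds of units off the exact 100,000 (roughly 100,958), while the float64 error stays down around 1e-6.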

Forget the 32-bit and 64-bit coprocessors in current microprocessors from Intel or AMD. Forget them!

Build your own silicon masks using the leading-edge process technologies on the market, after simulating the netlists on accelerated FPGAs.

Having built the chips by the thousands, you can build a supercomputer of thousands of cores and run Linux, Minix, or *BSD on it, or whatever you like. You will have ECC, atomic operations, and reliable exceptions: ideal for parallel programming.

Never plug your supercomputer into crappier chips from Intel or AMD, because they are buggy and can give you wrong or crashed, undesired results.

The latest undesired things in their hardware were the TLB bug, the poor and unextensible accuracy of their coprocessors, the poor performance, the bloated hardware design, and the bad memory space.

FTAN
FWAIT # an instruction that is a bad idea without the out-of-order machinery of modern processors.

--- it's the idea of a Spanish boy ---

Re:AMD worried? (3, Insightful)

Kjella (173770) | more than 6 years ago | (#23236228)

Please make sure to post a "supercomputers are an irrelevant little niche" comment in a thread about Linux in supercomputers. Let me know how the charred remains of your karma are doing afterwards. It's all about bragging rights, in particular the "world's most powerful supercomputer" title. Most of these are trying to run some O(ugly) problem, and improving the model or algorithms probably means a lot more than just adding 10x more power.

Re:AMD worried? (1)

earthforce_1 (454968) | more than 6 years ago | (#23236274)

Isn't a petaflop Cray the minimum hardware required to run Duke Nukem Forever?

Re:AMD worried? (1)

bvimo (780026) | more than 6 years ago | (#23236374)

Has DNF been released?

A better victim for your humour would have been M$ Vista.

Mega-petaflops for people (1)

moteyalpha (1228680) | more than 6 years ago | (#23236114)

When I see this stuff I wonder whether everybody who wanted this kind of computing power for themselves could agree on a simple project, like F@H or Electric Sheep, and connect everybody on the planet to compute as a single unit for a certain portion of the day. The connectivity and power would be awesome, and I don't think it is -that- weird an idea. I would rather schedule a super-task that could, say, work out the methods to perform a hyperspace transfer from planet to planet. I would contribute the eight machines I keep online if some common, realistic plan to resolve hyperspace for interstellar travel could be devised as an OSS project and not a closed benefit of a few, or of one government over another. I'm not an anarchist, just a person who has more respect for people than nations.

Re:Mega-petaflops for people (2, Informative)

chuckymonkey (1059244) | more than 6 years ago | (#23236172)

It's not always about just how much data they can process. It's more about being able to do it quickly and in parallel. Say, for instance, you want to simulate a black hole. You have so much raw math that needs to be handled all at the same time that there's no way you can do this with current internet technology. Another example is a weather simulation: you have to take so many things into account all at once. That's why the compute nodes in supercomputers are connected by extremely high-speed interconnects; they want all the CPUs in these things to have the latency of a local bus. Now if all they need to do is crunch raw data with no emphasis on parallel processes, then yes, things like Folding@home are grand for that purpose.

Re:Mega-petaflops for people (1)

moteyalpha (1228680) | more than 6 years ago | (#23236232)

I am not quite sure that you understand systems computing well (no offense intended). It is the program itself, and how its structure works with the hardware, that gets results out of the equipment. I have worked on the design of many -real- supercomputers, so I guess I have a little practical experience in that area, and I say that this concept would kick that hybrid computer's ass by a country light year :) I have access to a 'Blue Gene' at school in genetics, so I am not saying this just because I have experience with the Z80 or 8086. :)

Re:Mega-petaflops for people (5, Informative)

chuckymonkey (1059244) | more than 6 years ago | (#23236330)

Your smug is showing. I work with one on a daily basis for the government in the missile defense arena. Hell, in two months I'm going to be building one of those new IBM machines; we just signed the purchase with IBM. Yes, I said that I'm going to be building one: IBM is not allowed in our building. I don't even have to rent nodes of it; we have it all to ourselves.

It's not the applications or the hardware that is the problem, it's the latency. I don't care how fast your internet connection is, you cannot match the interconnect fabric of these machines. If you want to parse out little bits of data to a vast number of computers, using the spare cycles of home computers is great; I'm not trying to downplay that. You just cannot run them in parallel and do real-time simulations on them. That is why we have these huge monolithic computers.

Let me give you two examples. Protein folding: not parallel in that tightly coupled sense, and also not time-sensitive; more of a "when you finish, I'll give you a new problem to chew on." Tracking millions of orbits of shit in space: very parallel, and it requires correctly timed, low-latency transactions between CPU nodes. It also needs results as events occur; there's no room for "when you're done I'll give you a new one." Working out the problems of star travel, as the original parent said, is a grand use of a distributed system; running the simulations in real time to actually know whether those solutions will work is where computers such as the ones I work with come in.

Re:Mega-petaflops for people (1)

moteyalpha (1228680) | more than 6 years ago | (#23236414)

Cool. I love intelligent conversation. I worked on the 'neither confirm nor deny' stuff myself many years ago at DARPA; however, that alone does not qualify me to be the final word on what is possible. I probably should not make any specific quotes from people or situations, but I can say that I was not impressed with the dimensional reasoning of the systems themselves. I do wonder about new methods, and I can say from my experience that this is a valid concept. By your logic, MMORPGs cannot exist, since they require everything to be done in 'real' time? :) The trick is in the software. Your response is vague; I was interested in the specifics of the process, and in considering a measure of the results using techniques I might propose. The proof of the pudding is in the eating. As for you 'building them': I also work in semiconductor wafer processing and I see things in development, so I am required to be a bit more forward-thinking.

Re:Mega-petaflops for people (4, Informative)

chuckymonkey (1059244) | more than 6 years ago | (#23236542)

An MMORPG is real time as far as the human mind is concerned. If you look at all of them, they have a latency counter too, and they sometimes suffer badly from that problem. Hell, the new supercomputer systems are not even truly real time; they have problems with latency as well. Latency is usually the limiting factor on the number of compute nodes: the farther you space nodes out, or the more hops they take over the fabric, the more latency you have. For instance, one of our old SGI machines is limited to 2048 processors (SGI claims 512) because the NUMA link interface is too spread out beyond that. Of course, that's running over copper with electrical signalling; newer systems use fiber, which is very fast over the line, but the bottleneck is in the connections. So yet again we run into latency as the limiting factor. They even have specialized routers in them that are designed to be transparent to the overall machine, but beyond a certain number of hops you still have latency. I wish I could post diagrams and say a little more, but I'm already treading into "trade secrets" ground.

The difference between real-time simulation and an MMORPG, though, is a stickier problem. Think of it like this: the MMORPG client connects to a main server; that server has the world running on it and keeps track of all the other players in the game. The client computer merely syncs with that server. It doesn't do anything other than present the world to the end user, take the data from the server, and display it on the screen. There really isn't a strong emphasis on real time compared to a weather simulation. When you're running these huge simulations you have multiple independent processes and threads all going through the machine at the same time, all to achieve one single end result. I'm sorry if I'm not doing too well at making sense; I have a little trouble explaining it because I'm more of a visual person. The best I can really say is that the comparative complexity of the two problems is vast. Someone out there who's a little better with words, feel free to step in and help me out here.

Now, when we all have fiber running to every computer connected to the internet, maybe then distributed systems become more of a reality. Another problem that I see with distributed systems, though, is the variation in hardware. When the programs get written for the supercomputing platforms there is an expectation of sameness in the hardware: all the processors, all the memory, all the fabric links, all the buses, all the ASICs, everything is the same from one point to another. Intelligently identifying hardware differences and exploiting them for real-time simulation would be a real trick if someone could pull it off. Hmmm, my Firefox spell check seems to think I'm British.
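
To put rough numbers on the latency argument, here is a minimal Python sketch (my own toy model with made-up figures, not anything from the post): each simulation step gets an ideal compute speedup from more nodes, but pays a fixed latency charge for every boundary exchange.

# Toy strong-scaling model: per-step time = compute/nodes + exchange cost.
# All constants are illustrative assumptions, not measurements.
WORK = 1.0            # seconds of compute per step on a single node
NET_LATENCY = 50e-3   # ~50 ms per exchange over the public internet
HPC_LATENCY = 1e-6    # ~1 us per exchange over a supercomputer fabric
EXCHANGES = 10        # boundary-data exchanges per simulation step

def step_time(nodes: int, latency: float) -> float:
    """Ideal parallel compute plus a fixed latency charge per exchange."""
    return WORK / nodes + EXCHANGES * latency

for n in (1, 64, 1024, 16384):
    print(f"{n:6d} nodes: internet {step_time(n, NET_LATENCY):8.4f} s, "
          f"fabric {step_time(n, HPC_LATENCY):10.6f} s")

Over the internet the step time flattens out near half a second no matter how many nodes join, while over a low-latency fabric it keeps shrinking; that is the gap being described here.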

Re:Mega-petaflops for people (1)

chuckymonkey (1059244) | more than 6 years ago | (#23236596)

I know it's bad form to reply to my own post, but as to the MMORPG problem, I had another epiphany. The main difference is that in those games you aren't trying to send all the data from a maxed-out processor over the internet; you're just sending a lot of little bits of data: your position in the world and your actions within it. If you were doing it at the supercomputer level you would be sending not only that, but all the weather you generated around you, the wind speed of your legs moving through the air, the amount of pressure your foot is exerting on the ground, and all the other minutiae of the world around you. That would make the MMO server effectively a router. However, they are not; all of that is pre-programmed. You're not generating every little detail of the world around you every instant that you play. With a supercomputer you're basically sending all the data from the processor from one node to the next; in other words, each processor is generating its little bit of the world and telling the rest of the simulation about it in real time.

Re:Mega-petaflops for people (2, Funny)

Zebra_X (13249) | more than 6 years ago | (#23236932)

"I know it's bad form to reply to my own post, but as to the MMORPG problem I had another epiphany."

Indeed. I am not sure we really need you to spend time writing any of this down.

Nothing to see here. Move along.

Re:Mega-petaflops for people (1)

moteyalpha (1228680) | more than 6 years ago | (#23237356)

Great stuff, and I know how you feel about trying to say something and then realizing you are not allowed to because of non-disclosure or other issues. I do some MMORPG software, and apparently you have too. I grasp that part of what you are saying; however, I see any computing problem as NANDs, because that is how we always designed: a sea of gates.

If I had unlimited money to throw at a solution, I could make a CAM (content-addressable memory) array that responds in a single cycle time to any set of information. I can see the design framework of how it would be done, as with any system I create. I have done this for manufacturing process control, and for control in wafer fabrication. It is a similar problem to using multiple CPUs: the amount of independence between data sets determines whether you can run without having to share common data. It is like CPU design, with branch prediction, look-ahead execution, cache, and many other techniques used to speed processing. It is the design of the software that makes all the difference.

If I were making a Blender movie and each person in a group of 100,000 people was responsible for a single frame, we could render a 1-hour movie in 3 seconds :) and send our frames to a central computer to be viewed, for example. I see the greatest problem with a distributed solution as the isolation of order-dependent data sets. I will spend some time and come up with a real, measurable proof-of-concept that I can give out as open source. I guess the only way for me to be sure is to write the code that I feel can be written. ---- I know the moderators aren't very bright, because I have been one.
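
The render-farm example works because it is embarrassingly parallel: no frame depends on another. A quick Python check of the arithmetic (the 24 fps frame rate and 3-second render per frame are my assumptions, since the post gives neither):

import math

# One-hour movie, one frame per volunteer, embarrassingly parallel.
fps, movie_seconds = 24, 3600
frames = fps * movie_seconds          # 86,400 frames to render
workers = 100_000                     # volunteers, one frame each
render_per_frame = 3.0                # assumed seconds per frame

rounds = math.ceil(frames / workers)  # 1: fewer frames than workers
wall = rounds * render_per_frame
print(f"{frames} frames across {workers} workers: ~{wall:.0f} s of rendering")

Wall-clock time is one frame's render time, plus however long it takes to ship the frames back, which is exactly why this kind of job suits a loose distributed grid while the tightly coupled simulations above do not.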

Re:Mega-petaflops for people (2, Informative)

encoderer (1060616) | more than 6 years ago | (#23237076)

The MMORPG argument is a bit like comparing a VNC session to a cluster.

In both cases you're harnessing the power of at least 2 CPU cores over the internet to accomplish a computing task.

But the capacity of the two is separated by multiple orders of magnitude.

And, really, a 10 second delay is hardly even an annoyance for a human as we swap between our IM, Email, iTunes and the game we're playing. But that same 10 seconds in a parallel computing environment where X nodes are idled waiting for a result from Y?

Also, you seem a bit like a douche bag. No offense. But emoticons? Seriously?

Re:Mega-petaflops for people (1)

moteyalpha (1228680) | more than 6 years ago | (#23237506)

There still remains the fact that your brain can comprehend what I am saying, and it only runs in the millisecond range. I'm guessing, just as a simple example: if each computer effectively simulated a single neuron, and they were connected by IP addresses, then they would in fact simulate a brain of x neurons, where x is the number of computers participating. And I do not think I have any usefulness in feminine hygiene.

Re:Mega-petaflops for people (1)

frogzilla (1229188) | more than 6 years ago | (#23239578)

You're absolutely right that latency matters. However, for problems that don't parallelise well, single-processor computation rates (FLOPS) are still important too. Many important problems require a lot of calculation and a lot of communication between subdomains. This means they can be parallelised, but with diminishing returns as the number of CPUs increases; climate models are a good example. Running on fewer, faster CPUs may be better than simply throwing more, slower CPUs at the problem. Take a look at Amdahl's Law [wikipedia.org] for more on subdividing problems into ever smaller pieces.

Having a fast, high-bandwidth switch fabric is great as long as the CPUs are also as fast as possible. Also, on cheap multicore machines, the available memory bandwidth is not sufficient to feed all cores at the maximum rate. This means that computational processes running on the separate CPUs or cores compete with each other for access to memory, slowing down the execution time of all the processes on the machine. It is often useless to run multiple instances of computationally limited processes on a cheap multi-CPU Linux computer: you get an overall increase in wall-clock time. Internode latency and intranode memory access are certainly two reasons to spend more on real supercomputers.
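
Amdahl's Law itself is a one-liner; here is a minimal sketch of it in Python (the 95% parallel fraction below is just an illustrative choice):

# Amdahl's Law: speedup on n CPUs when a fraction p of the runtime
# parallelises perfectly and the rest (1 - p) stays serial.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even a 95%-parallel code can never beat 20x, however many CPUs:
for n in (8, 64, 512, 4096):
    print(f"{n:5d} CPUs: {amdahl_speedup(0.95, n):6.2f}x speedup")

The serial fraction puts a hard ceiling on speedup (1/(1-p), here 20x), which is exactly the diminishing-returns behaviour described above.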

Re:Mega-petaflops for people (1)

Ceriel Nosforit (682174) | more than 6 years ago | (#23240054)

Will you be using it to promote war, or will you be using it to promote peace?

Re:Mega-petaflops for people (1)

tgd (2822) | more than 6 years ago | (#23236298)

Yeah, that's all we need to break the laws of physics, a billion PCs all working together!

Computers can't consider anything. They can't contemplate, they can't theorize.

They pretty much do math.

Of course as I read your post, I realize you're probably joking. Oh well, my statement stands.

Re:Mega-petaflops for people (1)

moteyalpha (1228680) | more than 6 years ago | (#23236356)

I wasn't joking. I work in AI, I have designed supercomputers, I worked with some of the early designs at Cray and IBM, and I do work with RISC. This is the site where I usually find the most intelligent and funny people in the world. I always like to be proved wrong in my assumptions, since it allows me to plug a leak in my mental process; however, I see no real reason that this could not work. As far as intelligence and the simulation of it go, connectivity is the greatest key. IMHO. And as far as computers go, they pretty much do NAND.

Re:Mega-petaflops for people (1)

tgd (2822) | more than 6 years ago | (#23236602)

I don't know if you have worked on the things you claim and are just confused now, or if you worked for companies that did those things and are overstating your involvement, or what... but I do find your reply funny, in a way. If you think your understanding of modern supercomputing architectures and cognitive science is up to the task, I'm sure you can find someone to back the prototyping of such a system.

However, this sort of reminds me of the guy who inspired this video: http://www.youtube.com/watch?v=E87WAAJt6ZI [youtube.com]

Don't be that guy.

Re:Mega-petaflops for people (1)

moteyalpha (1228680) | more than 6 years ago | (#23237006)

LOL, that was great. Yes, I have worked on those things. I didn't design Crays, but I did design supercomputers. It is my real name, and the people I have worked with will probably eventually see this post. This is the first subject on Slashdot that I have had an active interest in the development of. I have seen so much vaporware and announceware that I really appreciate the YouTube link. If I had any mod points left, I would mod you up for funny. I hate that guy too; there are sooo many people looking for money for perpetual motion machines, and I have seen too many. As for 'funding': I was just presenting it as an open-source concept.

The more things change, the more they stay the sam (0)

Anonymous Coward | more than 6 years ago | (#23236128)

Weren't we using math co-processors to accelerate our main CPUs about two decades ago? I love how cyclical history is.

hmm. (1)

apodyopsis (1048476) | more than 6 years ago | (#23236148)

Always makes me wonder why they need all this power; after all, anybody can build a very impressive home cluster these days that would have been classed as a supercomputer a few years ago. I guess computing requirements rise to meet available systems, thus fueling demand.

I support AMD right now, and if they got bigger than Intel then I would support Intel.

My belief is that any firm needs adequate competition to keep it innovative, competitive, and customer-focused. When one of them has a monopoly, then we should be concerned.

Re:hmm. (0)

Anonymous Coward | more than 6 years ago | (#23236278)

They usually use this kind of stuff for rendering massive fractals for cellular research. Astrophysics also uses a lot of power. Several years ago the idea of being able to simulate this kind of stuff was unthinkable. So I guess you could say that the demand goes up with the supply.

I completely agree with this. No company can go unchecked by another; otherwise there is no reason to make updates, fixes, or advances in technology, because they own everything and it won't make them any more money.

Re:hmm. (1)

LightWing (1131011) | more than 6 years ago | (#23236386)

About the only thing I can see this being used for is Pixar's ever-increasing demand for more computing power (with the least possible power consumption). I guess that depends, though. Could supercomputers be used that way? I'm sure Pixar would have considered all viable alternatives. I wonder what you would get if you combined a supercomputer and a mainframe (a waste of space?). Sadly, the more informed will have to answer that :(

Re:hmm. (1)

mikael (484) | more than 6 years ago | (#23237462)

The problem with high-end animation is that you need to load in many different textures and geometry models before you can render the final image and write out a single frame. Most supercomputer work, by contrast, seems to keep everything in CPU node memory at the same time and just run iteration after iteration in place (say, a 2048^3 grid of CFD cells for simulating a supernova).
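
For a sense of scale (my arithmetic, not the poster's): even a single double-precision value per cell on a 2048^3 grid is far more than one node holds, which is why the grid has to live spread across node memory.

# Memory for one double-precision field on a 2048^3 CFD grid.
cells = 2048 ** 3                    # 2^33, about 8.6 billion cells
bytes_per_cell = 8                   # one float64 value per cell
gib = cells * bytes_per_cell / 2**30
print(f"{gib:.0f} GiB per variable") # 64 GiB, and a real CFD code
                                     # carries several variables per cell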

Previous research in parallel processing tried allocating processing nodes to different locations in the scene or different geometric models, or just using a whoever-is-available-at-the-time algorithm. Pixel-planes [unc.edu] tried allocating one processor per pixel.

A relevant article at Outlook Business [outlookbusiness.com]

Re:hmm. (1)

backwardMechanic (959818) | more than 6 years ago | (#23236504)

I run full-wave electromagnetic simulations to investigate fields generated inside the human body. My runtime is reasonable if I just pick some parameters, but running an automated optimizer could easily take weeks on a 30-node Opteron cluster. If you give me more cycles, I can think of stuff to keep them busy. But if you want to see a really power-hungry project, talk to the protein folders: the guys who model chemical interactions starting from quantum mechanics and try to find out how the shapes of protein molecules change as the reaction progresses. That's neat. Or the cosmologists, simulating the formation of the universe, but they're just crazy...

Re:hmm. (1)

SuiteSisterMary (123932) | more than 6 years ago | (#23237966)

"And are you not," said Fook leaning anxiously forward, "a greater analyst than the Googleplex Star Thinker in the Seventh Galaxy of Light and Ingenuity which can calculate the trajectory of every single dust particle throughout a five-week Dangrabad Beta sand blizzard?"

"A five-week sand blizzard?" said Deep Thought haughtily. "You ask this of me who have contemplated the very vectors of the atoms in the Big Bang itself? Molest me not with this pocket calculator stuff."

Re:hmm. (1)

jimmypw (895344) | more than 6 years ago | (#23236608)

"I support AMD right now, and if they got bigger then Intel then I would support Intel."

Although i dont disagree where your coming from, why would i buy something when i can get something better for the same price from another company.

Advance "Ask Hans Reiser" question for slashdot... (-1, Troll)

Anonymous Coward | more than 6 years ago | (#23236780)

Hans, what is your anal stretching coefficient (1 minus the diameter of your anal sphincter before prison divided by the diameter of your anal sphincter after prison)?

Re:hmm. (1)

dave420 (699308) | more than 6 years ago | (#23237188)

You support the little guy solely because he's the little guy? That's pretty silly, surely. Size doesn't mean they're doing the right thing. What if AMD started to throw babies off mountains tomorrow - would you still support them? Your post seems to suggest you will.

Re:hmm. (1)

geekoid (135745) | more than 6 years ago | (#23240774)

"
I support AMD right now, and if they got bigger then Intel then I would support Intel."

that's just stupid.
Why not support the better chip? that's the market, not supporting inferior products because the company is smaller. A lot of RnD came out of Intel before AMD arrived, and will after AMD leaves. In this specific case, I believe the competition held up RnD and advancement. Intel was moving towards dual chips and cores years ago. Most of that focus and money was diverted to makes faster clocks.

While competition is good, some of the great things we use to day came from a very large monopoly.

How many inventions cam,e out of Ma Bell's RnD?
cordless phone, clear global communications, 24/7 phone service, Operating systems... on and on.

supercomputer = top order of magnitude (1)

peter303 (12292) | more than 6 years ago | (#23241580)

My cell phone has more memory and is faster than the original Cray supercomputer.
The fastest computer today is about half a petaflops, so a supercomputer is anything above 50 teraflops.

Petaflop range? (1)

rastan (43536) | more than 6 years ago | (#23236152)

Sorry, but I have to correct that: petaflops range. Floating-Point Operations Per Second. It is a "unit" without singular or plural forms. Picky me.

Re:Petaflop range? (1)

Dersaidin (954402) | more than 6 years ago | (#23236418)

One meter, two meters...
One petaflop, two petaflops
Two petaflops up to an exaflop would be the petaflops range.

Re:Petaflop range? (1)

rastan (43536) | more than 6 years ago | (#23236822)

Nope, sorry. "flops" (or "flop/s") is the unit, meaning Floating-Point Operations Per Second. See http://en.wikipedia.org/wiki/Flops [wikipedia.org]

Ergo "petaflops range".

Re:Petaflop range? (0)

Anonymous Coward | more than 6 years ago | (#23236958)

One FLOPS, several FLOPS
FLoating-Point Operations Per Second.

Re:Petaflop range? (1)

CoderDevo (30602) | more than 6 years ago | (#23237166)

One meter, two meters...
One petaflop, two petaflops
One mph, two mph
One flops, two flops (not two flopss)
One petaflops, two petaflops

The single trailing 's' cannot be dropped, since it stands for the seconds over which the work is performed.

I'm not learning much about a computer that is capable of performing a quadrillion floating-point operations; my laptop can do that in 90 minutes. Doing it in a second? Now that's something!
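
A quick check of the sustained rate that 90-minute claim implies (my arithmetic; whether a laptop of the day really sustained it is another question):

# Sustained rate implied by "a quadrillion operations in 90 minutes".
ops = 1e15                # floating-point operations
seconds = 90 * 60
gflops = ops / seconds / 1e9
print(f"~{gflops:.0f} GFLOPS sustained")  # ~185 GFLOPS
# Doing the same work in one second would be 1 PFLOPS: 5,400x faster.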

Petaflops? (1)

ThirdPrize (938147) | more than 6 years ago | (#23236186)

at least no animals will be harmed in the process.

Floating-point acceleration unit, sounds familiar (4, Funny)

noidentity (188756) | more than 6 years ago | (#23236214)

Intel convinced Cray to collaborate on what many believe will be the next generation of supercomputers -- CPUs complemented by floating-point acceleration units.

Let me guess, it's going to be called the 8087 [wikipedia.org] .

Re:Floating-point acceleration unit, sounds famili (0)

Anonymous Coward | more than 6 years ago | (#23236270)

No, it'll just be built into your CPU and taken for granted for the next twenty years ;)

Re:Floating-point acceleration unit, sounds famili (0)

Anonymous Coward | more than 6 years ago | (#23236974)

No, it'll just be built into your CPU and taken for granted for the next twenty years ;)
Remember that Intel never shipped a 4.0 GHz processor, even though one was promised five years ago.

The current processors run at less than 3.0 GHz; clock frequencies have gone backwards over those five years.

Moore's Law is useless now.

Re:Floating-point acceleration unit, sounds famili (2, Interesting)

TheThiefMaster (992038) | more than 6 years ago | (#23238384)

Or possibly the ludicrously powerful floating point processors known as GPUs?

Perhaps now that Intel and nVidia have commercial "floating-point acceleration units" for supercomputers, AMD/ATI will come up with something too? The HyperTransport bus is already pretty popular with supercomputers for plugging an interconnect into (InfiniBand/InfiniPath, as well as Cray's own), so a GPU (sorry, "floating-point accelerator") that plugs directly into that bus and has direct communication with the system's CPU(s) should be pretty nice.

I know I wouldn't mind going from a dedicated graphics card to a motherboard with two processor sockets with independent RAM, a CPU in one and a GPU in the other. PCIe is just an unnecessary layer when the GPU could be plugged directly into the CPU's main bus.

Re:Floating-point acceleration unit, sounds famili (1)

ari_j (90255) | more than 6 years ago | (#23237414)

8087-64. :P

And the cycle continues... (4, Insightful)

Jesus_666 (702802) | more than 6 years ago | (#23236230)

A few years from now Intel will unveil their shocking new technology: they will build the floating-point accelerator right into the CPU! For massive performance gains! And then a few years later they will move it out of the CPU for better performance. And so on, and so forth, etc. etc. etc.

Re:And the cycle continues... (1)

RightSaidFred99 (874576) | more than 6 years ago | (#23239896)

Cute and all, but... no. The "floating-point accelerator" is a massively parallel CPU. It would either become the CPU itself or remain an add-in. You don't add a "massively parallel CPU" to a "CPU".

Re:And the cycle continues... (1)

Jesus_666 (702802) | more than 6 years ago | (#23240022)

Make it a meta-core. Intel wants to go 80 cores, so why not have 40 of them be the FPU?

Everything cool is named Tesla (1)

GPS Pilot (3683) | more than 6 years ago | (#23242802)

The new Nvidia cards... the new electric car... the unit of magnetic flux density... the high-voltage coil...

bla bla bla Skynet (2, Funny)

Narpak (961733) | more than 6 years ago | (#23236376)

Since no one else seems to have mentioned it yet: blah blah blah, it's the birth of Skynet (this time with an improved graphical interface).

Performance increasing FASTER than Moore's Law (1)

Lord Byron II (671689) | more than 6 years ago | (#23236482)

The data at Top500 show a linear increase (on a semi-log plot) for the entire period from 1993 to today. Every seven years the performance increases by a factor of 100, but Moore's Law predicts an increase of 2^(7/1.5) ≈ 25, meaning that the supercomputer market is besting Moore's Law by a factor of 4.
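
The comparison, spelled out in a couple of lines of Python (using the common doubling-every-18-months reading of Moore's Law):

# Moore's Law growth over 7 years vs. the observed Top500 trend.
moore = 2 ** (7 / 1.5)     # doubling every 18 months: ~25.4x in 7 years
observed = 100.0           # Top500: ~100x every 7 years per the list data
print(f"Moore: {moore:.1f}x, observed: {observed:.0f}x, "
      f"ratio: {observed / moore:.1f}x")   # ~3.9x, i.e. about a factor of 4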

Moore's law is about transistor density (1)

Xocet_00 (635069) | more than 6 years ago | (#23236926)

NOT performance. It's simply a statement about the number of transistors we can cram onto a chip.

That said, you make a very interesting observation, since performance in the desktop PC market has scaled pretty well with transistor density (and therefore Moore's Law). Given what you're saying, is the ratio of performance in supercomputers to regular PCs increasing?

A Hybrid! (1)

GoCanes (953477) | more than 6 years ago | (#23236774)

A Hybrid? All of this has happened before, and it will all happen again.

Re:A Hybrid! (0)

Anonymous Coward | more than 6 years ago | (#23241440)

I keep hearing this song in my head.
Does the computer have a plan?

Crysis (1)

AioKits (1235070) | more than 6 years ago | (#23237026)

Surprised no one has asked if this thing can play Crysis in all its bloomed-particle splendor...

A hybrid computer? (1)

Artuir (1226648) | more than 6 years ago | (#23237144)

Isn't that a step backwards for computing??1 I don't think running a gas/electricity powered system is a good idea outside of generators for power outages..

Anonymous Coward (0)

Anonymous Coward | more than 6 years ago | (#23238420)

This is not new; AMD-based supercomputer systems have been doing this for years. FPGA co-processors can be plugged right into an Opteron socket.

For example:
(http://www.xtremedatainc.com/index.php?option=com_content&view=article&id=106&Itemid=60)

Sun built a system for the GSIC center in Tokyo with this same capability (TSUBAME Grid Cluster: Sun Fire X4600 cluster, Opteron 2.4/2.6 GHz with ClearSpeed accelerators, InfiniBand, NEC/Sun).

Isn't Cray using Itanium? (1)

bsharma (577257) | more than 6 years ago | (#23240284)

I thought Itanium was the ingredient for High Performance Computing.