
IBM's Eight-Core, 4-GHz Power7 Chip

kdawson posted more than 6 years ago | from the 100-racks-of-goodness dept.

Supercomputing 425

pacopico writes "The first details on IBM's upcoming Power7 chip have emerged. The Register is reporting that IBM will ship an eight-core chip running at 4.0 GHz. The chip will support four threads per core and fit into some huge systems. For example, the University of Illinois is going to house a 300,000-core machine that can hit 10 petaflops. It'll have 620 TB of memory and support 5 PB/s of memory bandwidth. Optical interconnects, anyone?"

First!!!!11 (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#24190581)

But will it run Linux?

Re:First!!!!11 (0, Offtopic)

Iceykitsune (1059892) | more than 6 years ago | (#24190593)

As soon as u g1v3 m3 t3h c0d3s!!!111eleven11

Yes it will run GNU/Linux. (1)

inTheLoo (1255256) | more than 6 years ago | (#24190779)

What I'd like to know is how long it will be before I get a laptop or PDA that uses this or one that lasts 8 times longer than current versions.

Finally (4, Funny)

xpro42 (1234496) | more than 6 years ago | (#24190595)

Some Vista-capable hardware.

Re:Finally (5, Funny)

QuantumRiff (120817) | more than 6 years ago | (#24190719)

Yeah, right, it has optical connections. They will have to be disabled to play videos; otherwise you might copy them!

Re:Finally (0)

Anonymous Coward | more than 6 years ago | (#24190977)

"Optical interconnects anyone?"

Oh, snap!

Re:Finally (0)

Anonymous Coward | more than 6 years ago | (#24190993)

Still can't run Crysis, though.

Re:Finally (0)

Anonymous Coward | more than 6 years ago | (#24191173)

A beowulf cluster of those should be able to...

Re:Finally (-1, Redundant)

Anonymous Coward | more than 6 years ago | (#24191073)

But it still can't play Crisis.

Re:Finally (0)

Matey-O (518004) | more than 6 years ago | (#24191089)

Naw man, my laptop has a sticker that says it's 'Vista Capable'.

Re:Finally (1)

lastomega7 (1060398) | more than 6 years ago | (#24191179)

But it turns out it will only actually run the home edition smoothly.

Sorry Mac fans . . . (0)

Anonymous Coward | more than 6 years ago | (#24190601)

No AltiVec for you!

Re:Sorry Mac fans . . . (2, Funny)

imamac (1083405) | more than 6 years ago | (#24190839)

So still no G5 laptop???? AHHHHHH!!!!!!

Re:Sorry Mac fans . . . (0)

Anonymous Coward | more than 6 years ago | (#24191103)

It (POWER7) probably only has VMX [wikipedia.org].

Toasty. (5, Funny)

Anonymous Coward | more than 6 years ago | (#24190609)

In other news, temperatures on the University of Illinois campus have mysteriously risen ten degrees. Scientists are still examining possible causes.

Re:Toasty. (5, Funny)

Yvan256 (722131) | more than 6 years ago | (#24190843)

Good thing they have a brand-new supercomputer; analyzing this temperature anomaly will be much faster!

Re:Toasty. (1)

NICK SPACEE (1326009) | more than 6 years ago | (#24191169)

SUPERCOMPUTERS! YES!

PowerPC is not Intel. (1)

inTheLoo (1255256) | more than 6 years ago | (#24190967)

These typically give much better computing efficiency than Intel-based processors. That gives you two options: install more cores, or pay less for HVAC and electricity.

Re:Toasty. (5, Funny)

jmorris42 (1458) | more than 6 years ago | (#24191045)

> Scientists are still examining possible causes.

Nah. If something gets warmer it is caused by Global Warming and the solution is to eliminate Western industrial civilization.

If something gets colder it is Global Climate Change and the solution is to eliminate Western industrial civilization.

If we have more hurricanes it is Global Warming. Fewer and it is Climate Change. More tornadoes? Global Warming. Floods caused by increased snowfall? Somehow that was also Global Warming; I'd have thought they would have gone with Global Climate Change, but every rule seems to need an exception.

Vista capable? (-1, Redundant)

Anonymous Coward | more than 6 years ago | (#24190613)

So will it be fast enough to run Vista, then?

Re:Vista capable? (2, Funny)

clang_jangle (975789) | more than 6 years ago | (#24190633)

Yes, but I think you'll still have to disable the aero if you want to get DNF to work right.

Re:Vista capable? (0)

Anonymous Coward | more than 6 years ago | (#24190673)

Y'know, it's kinda lame to make Vista vaporware jokes now that it's been released.

Re:Vista capable? (1)

KillerBob (217953) | more than 6 years ago | (#24190983)

*whooosh*

DNF = Duke Nukem Forever

Re:Vista capable? (1, Insightful)

Anonymous Coward | more than 6 years ago | (#24191035)

Yes, thank you, I'm well aware. DNF on Vista is an old joke about how Duke Nukem is taking so long, it will require Vista. Now that Vista is out and there have been games that require it, the joke is outdated. So go whoosh yourself.

I feel kinda lonely... (0)

Anonymous Coward | more than 6 years ago | (#24190617)

...in here all by myself.

Anyway...that's some killer processing power. When do I get one on my desktop?

Re:I feel kinda lonely... (1)

von_rick (944421) | more than 6 years ago | (#24190695)

The processor is gonna make humans obsolete. Resistance is futile, you will be assimilated.

Core pron (5, Funny)

Anonymous Coward | more than 6 years ago | (#24190631)

"For example, University of Illinois is going to house a 300,000-core machine that can hit 10 petaflops. It'll have 620 TB of memory and support 5 PB/s of memory bandwidth."

I came.

Re:Core pron (4, Funny)

mcpkaaos (449561) | more than 6 years ago | (#24190709)

I came.

I saw.

Re:Core pron (0, Offtopic)

vbraga (228124) | more than 6 years ago | (#24190735)

I won.

Re:Core pron (2, Informative)

McGiraf (196030) | more than 6 years ago | (#24190925)

Veni, vini, vomi.

Re:Core pron (0)

EkriirkE (1075937) | more than 6 years ago | (#24190757)

That's dirty.

Re:Core pron (1)

EkriirkE (1075937) | more than 6 years ago | (#24190771)

Err, I conquered.

iPhone (2, Funny)

giorgist (1208992) | more than 6 years ago | (#24190645)

When can I get an iPhone with it?

G

PPC Linux (3, Insightful)

Doc Ruby (173196) | more than 6 years ago | (#24190663)

I'd be a lot more excited about these PPC lines if Ubuntu 8.04 would install and run properly on the PS3, whose PPC+6xDSP architecture would be a great entry level platform for coming up with parallel techniques for the bigger and more parallel PPC chips.

Re:PPC Linux (2, Interesting)

rbanffy (584143) | more than 6 years ago | (#24191175)

The problem is Sony crippled the environment in ways that make it very hard to use a PS3 as a computer.

I still think one could build a cheap computer with a Cell processor and make a decent profit. Those über-DSPs could do a whole lot to help the relatively puny PPC cores, and having such a box readily available would foster a lot of research with asymmetric multi-processors. It's really sad to see future compsci graduates who never really used anything not descended from an IBM 5150.

That said, I think there is some interesting stuff coming to the x86 world. That Larrabee x86 thing Intel is readying could be very interesting in itself and even generate some more interesting spin-offs. Imagine having a couple of cores that could, in an emergency, run the same binaries but that were tuned for different applications. One out-of-order core plus 4 in-order multi-threading cores would make a very interesting desktop processor.

I have a serious question: (1, Interesting)

inject_hotmail.com (843637) | more than 6 years ago | (#24190667)

Seriously:

I have a couple dual-core PCs. I notice that some won't ever use 100% CPU even though they easily could. I check "set affinity" in task manager, which says the process should use both cores...but it only ever hits 50% of total CPU. Looking at the CPU graph, it shows that as usage goes up on one core, usage goes down on the other.

Is there any way to force a process to run over 2 cores at 100%?

If not...how would 300,000 cores help unless you are running 300,000 processes, or an app that you know will scale over that many cores?

The preceding was in fact a serious question.

Re:I have a serious question: (5, Insightful)

Anonymous Coward | more than 6 years ago | (#24190699)

The applications that are going to be run on this type of machine are designed to be run on this kind of machine.

Re:I have a serious question: (1)

inject_hotmail.com (843637) | more than 6 years ago | (#24190753)

The applications that are going to be run on this type of machine are designed to be run on this kind of machine.

Thanks for the reply...

I posted so quickly, I should have written "...some processes won't ever..."

Anyway...yeah, it's hit and miss on the processes that do this. I guess it's up to the author of the app to make it multi-core friendly.

Re:I have a serious question: (5, Informative)

badboy_tw2002 (524611) | more than 6 years ago | (#24190701)

If your process looks like this:

int main()
{
    while (something)
    {
        doSomething();
    }
}

It will hit 100% on one core and that's it. It's not multithreaded - one CPU will churn on it forever and the others will sit around waiting for a task from the OS. 2 cores, 200,000 cores, the results will be the same. These machines are made for tasks that are broken up into lots of smaller jobs and processed individually. It's not magic - more cores won't get a single-threaded process done faster.

Seriously.

Re:I have a serious question: (1)

inject_hotmail.com (843637) | more than 6 years ago | (#24190795)

If your process looks like this:

int main()
{
    while (something)
    {
        doSomething();
    }
}

It will hit 100% on one core and that's it. It's not multithreaded - one CPU will churn on it forever and the others will sit around waiting for a task from the OS. 2 cores, 200,000 cores, the results will be the same. These machines are made for tasks that are broken up into lots of smaller jobs and processed individually. It's not magic - more cores won't get a single-threaded process done faster.

Seriously.

Thanks for the quick reply.

I posted so quickly, I should have written "...some processes won't ever..."

Aren't a lot of games and apps single-threaded? Hmmm. I figured that dual/quad-core wasn't all it's cracked up to be. So, essentially, if I have a single-threaded app on a quad core, it'll perform at 1/4th the potential speed.

That doesn't leave me with a warm and fuzzy feeling inside.

The funny thing is that it teeter-totters back and forth from one core to the other. I wish I knew what made it do that.

Re:I have a serious question: (5, Interesting)

jcarkeys (925469) | more than 6 years ago | (#24190907)

You, sir, are correct. Most things aren't written to run in parallel and thus don't get the gains. Gains come from optimized code (obviously), but also from doing multiple tasks: leave Photoshop rendering an HDR image and play a game, if you want. Though to be fair, it's not "just 1/4th" performance; the other cores are able to handle some of the other CPU tasks, such as running hardware controllers.

And the reason it kind of oscillates between cores is that "Set Affinity" tells the process that it's allowed to use that core, not that it has to or even should. If you want something to use both cores, open up two processes; set the first to core 1 and the second to core 2. Most of the time you can't split a job up like that, but I recently transcoded my entire music library and set one process to do songs from A-M and the other from N-Z. It really helped.

Re:I have a serious question: (2, Insightful)

Firehed (942385) | more than 6 years ago | (#24190933)

It's not that simple. While one single task generally is not coded to take advantage of the entire system (single-threaded on a dual system, dual-threaded on a quad system, whatever), you are able to actually use your computer while said task is underway. Ever encoded a DVD on a single-core machine? Not so fun - half the time, you can't even use your mouse. Slap the same task on a dual-core box, and suddenly you can continue to work (or play) while that goes on in the background. Alternately, you can encode two DVDs simultaneously and be done in the time it would normally take to finish one. Parallelism in its most literal sense.

Of course, many video-related apps these days are multi-threaded, but you get the general idea.

Re:I have a serious question: (2, Informative)

Zeussy (868062) | more than 6 years ago | (#24191147)

The other thing I like about dual-core and up boxes is that they seem more stable. Back on a single-core machine, when a process really wanted to lock up in some mysterious while(1); loop, it could be a real test of patience to kill the app. On a dual-core machine, no worries - you've still got the other core to save yourself with :)

Re:I have a serious question: (5, Informative)

tchuladdiass (174342) | more than 6 years ago | (#24190971)

The funny thing is that it teeter-totters back and forth from one core to the other. I wish I knew what made it do that.

The OS runs the process a few milliseconds at a time, then kicks the process off the CPU for another process to run (if there is one, including OS tasks such as I/O routines). When the OS starts the process up again for a few more milliseconds, it may start it on a different core. That is why both cores will show 50% average utilization.

Now if you set CPU affinity for that process to be on one core, then it will max that core out at 100% and the other core will be idle. This may result in better performance, because you get better cache utilization if the process stays on the same core.

On a related topic, this can also be the case if the app is multithreaded -- sometimes it is more efficient to run multiple threads on the same CPU instead of across CPUs, if each thread is accessing the same region of memory. Otherwise, if the threads are on different CPUs or cores, then the threads are constantly invalidating the cache on the other core, causing more (expensive) reads/writes to main memory.
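
For anyone who wants to do from code what Task Manager's "set affinity" does, here's a minimal Win32 sketch (my own illustration, not from the parent poster): hand the scheduler a one-core mask so the process stops migrating between cores and keeps its working set warm in that core's cache.

/* Minimal sketch: pin the current process to core 0. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Bit 0 set = core 0 only; a mask of 0x3 would allow cores 0 and 1. */
    if (!SetProcessAffinityMask(GetCurrentProcess(), 0x1)) {
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    /* ...do the CPU-heavy work here; it now stays on core 0. */
    return 0;
}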

Re:I have a serious question: (1)

inject_hotmail.com (843637) | more than 6 years ago | (#24191219)

The OS runs the process a few milliseconds at a time, then kicks the process off the CPU for another process to run (if there is one, including OS tasks such as I/O routines). When the OS starts the process up again for a few more milliseconds, it may start it on a different core. That is why both cores will show 50% average utilization.

Now if you set CPU affinity for that process to be on one core, then it will max that core out at 100% and the other core will be idle. This may result in better performance, because you get better cache utilization if the process stays on the same core.

On a related topic, this can also be the case if the app is multithreaded -- sometimes it is more efficient to run multiple threads on the same CPU instead of across CPUs, if each thread is accessing the same region of memory. Otherwise, if the threads are on different CPUs or cores, then the threads are constantly invalidating the cache on the other core, causing more (expensive) reads/writes to main memory.

Quite informative -- thanks for the insight. I thought it was the OS, but didn't know those specifics. I had no idea that maxing out one core would be more efficient though.

Re:I have a serious question: (5, Informative)

vux984 (928602) | more than 6 years ago | (#24191033)

Aren't a lot of games and apps single-threaded? Hmmm. I figured that dual/quad-core wasn't all it's cracked up to be. So, essentially, if I have a single-threaded app on a quad core, it'll perform at 1/4th the potential speed.

Yes, although most high-end games and game engines actually are multi-threaded. Few are designed to take advantage of more than 2 cores though, and none that I know of will use 8 or 300,000...

So, essentially, if I have a single-threaded app on a quad core, it'll perform at 1/4th the potential speed.

Not necessarily. If you have 3 women can you make a baby in 3 months instead of 9? Given that it still takes 9 months and 2 of the women are idle, would you say that these women are performing at 1/3rd the potential speed? Same sort of logic applies here. If the task is inherently sequential, having more cores (or ladies) won't make it any faster.

Some things -are- highly parallelizable, like ray-tracing or cutting down all the trees in a forest... and other things are only partly parallelizable, like changing tires (a pit crew can change 4 tires at once, but adding more staff so you could change 5 tires at once doesn't make your team any faster...)

That doesn't leave me with a warm and fuzzy feeling inside.

Yes, in general computing applications, an 8 GHz CPU would be faster than a quad-core 2 GHz. (Even under optimally parallelizable conditions, the 2 GHz quad-core would only just surpass the 8 GHz CPU, due to its lower task-switching overhead.) So the faster single CPU is almost always better. The reason we have quad-core 2 GHz CPUs is that they are much, much more practical to actually make, and a lot of the stuff that takes a long time (rendering 3D, encoding movies, etc.) is actually highly parallelizable, so we do see a benefit. And much of the single-threaded, sequential stuff we see is waiting on hard drive performance, network bandwidth, or user input... so the CPU isn't the bottleneck there anyway.

The funny thing is that it teeter-totters back and forth from one core to the other. I wish I knew what made it do that.

If you look at Task Manager, there are what, some 40+ processes running? The OS rotates them onto and off of the 2 cores based on what they all need in terms of CPU time. So your 'cpu heavy task' gets pulled off a core to give another task a timeslice, and once it's off, it can be scheduled back onto either core. Ideally it should stay on one core to maximize level-one cache hits, etc., but if it's been off the core long enough for the other processes to cache all new memory, it doesn't really matter which one it gets assigned to, and in any case flipping from one to the other every now and then makes an almost immeasurably small performance difference.

BTW - the 'set processor affinity' feature tells the OS that you really want this process to run on a given CPU/core, instead of hopping around. But in most cases it's not something one needs to do (or gains any benefit from).

Re:I have a serious question: (1)

friedman101 (618627) | more than 6 years ago | (#24190813)

You're assuming that doSomething() is entirely unparallelizable and that every line depends on the previous line... not the case for generic code. A smart compiler or informed programmer can make even the most trivial programs work on multiple cores.
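
To make that concrete, here's a minimal sketch (mine, not the parent's; assumes GCC with OpenMP enabled via -fopenmp) of how an informed programmer can push a trivial loop onto every core with one pragma:

/* Build with: gcc -fopenmp sum.c -o sum */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    enum { N = 100000000 };
    double sum = 0.0;
    int i;

    /* Each iteration is independent; reduction(+:sum) gives every thread a
       private partial sum and combines them at the end, avoiding a race. */
    #pragma omp parallel for reduction(+:sum)
    for (i = 0; i < N; i++) {
        sum += (double)i * 0.5;
    }

    printf("sum = %f (max threads: %d)\n", sum, omp_get_max_threads());
    return 0;
}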

Re:I have a serious question: (1)

KillerBob (217953) | more than 6 years ago | (#24191017)

On the other hand... if your process looks like this:


/* The classic fork bomb: every process keeps spawning children until the
   system runs out of resources. */
#include <unistd.h>

int main(void) {
    while (1) {
        fork();
    }
}

It might do a pretty good job of tying up resources. :)

Re:I have a serious question: (1)

modmans2ndcoming (929661) | more than 6 years ago | (#24191145)

It will get multiple single-threaded processes done faster, though.

Re:I have a serious question: (0)

Tom9729 (1134127) | more than 6 years ago | (#24190717)

Seriously:

I have a couple dual-core PCs. I notice that some won't ever use 100% CPU even though they easily could. I check "set affinity" in task manager, which says the process should use both cores...but it only ever hits 50% of total CPU. Looking at the CPU graph, it shows that as usage goes up on one core, usage goes down on the other.

Is there any way to force a process to run over 2 cores at 100%?

If not...how would 300,000 cores help unless you are running 300,000 processes, or an app that you know will scale over that many cores?

The preceding was in fact a serious question.

My guess is that the Windows scheduler is doing its job and preventing your processes from eating the CPU.

Have you tried playing with process priorities?

Re:I have a serious question: (1)

inject_hotmail.com (843637) | more than 6 years ago | (#24190833)

My guess is that the Windows scheduler is doing its job and preventing your processes from eating the CPU.

Have you tried playing with process priorities?

I posted so quickly, I should have written "some processes won't ever..."

Yep, that's the "set affinity" part. And that's what doesn't make sense. If I 'set affinity' to both CPUs on a dual-core system, I'd say it should max 'em both out... but it never does.

Um, nope, his apps stink (1)

tjstork (137384) | more than 6 years ago | (#24190893)

No, what this means is that his applications really aren't burdening the CPU. If you build an MT Windows app that genuinely scales, it will most certainly max out all the CPUs you give it. What's happening in his case, most likely, is that he's I/O bound and his CPUs are doing nothing. Or his applications aren't even multithreaded. Or both. I've written some MT C++ apps for Windows for crunching insurance prices and, yep, they'll peg all the CPU that you can give them. Also, I wouldn't even recommend setting the task affinity in Task Manager.

Re:I have a serious question: (1)

encoderer (1060616) | more than 6 years ago | (#24190733)

You figured it out: it's all in the programming.

Multi-threading is difficult, and until just recently, it was a niche market. Most people just didn't have dual processors.

That'll change. Until then, you're still benefiting by running OS and background processes on one core and the foreground process on another.

Re:I have a serious question: (0)

Anonymous Coward | more than 6 years ago | (#24190741)

Try here. [wikipedia.org] And here. [eskimo.com]

Re:I have a serious question: (1)

QuantumRiff (120817) | more than 6 years ago | (#24190743)

Yes, run a multithreaded app on the system, or run two single-threaded apps. (Photoshop is multithreaded, as is a lot of rendering software, etc.) Not much software is multi-threaded yet, at least not on Windows.

Re:I have a serious question: (1)

WilliamBaughman (1312511) | more than 6 years ago | (#24190789)

If not...how would 300,000 cores help unless you are running 300,000 processes, or an app that you know will scale over that many cores?

The preceding was in fact a serious question.

Having 300,000 cores wouldn't help if you didn't have enough cores. However, University of Illinois probably won't be using it to run one instance of McAfee and one instance of Word. Chances are, they'll be using it for meteorological simulations.

Re:I have a serious question: (1)

WilliamBaughman (1312511) | more than 6 years ago | (#24190829)

Having 300,000 cores wouldn't help if you didn't have enough cores. However, University of Illinois probably won't be using it to run one instance of McAfee and one instance of Word. Chances are, they'll be using it for meteorological simulations.

Sorry, that should read 'Having 300,000 cores wouldn't help if you weren't running enough processes.'

Re:I have a serious question: (1)

inject_hotmail.com (843637) | more than 6 years ago | (#24190943)

Having 300,000 cores wouldn't help if you didn't have enough cores. However, University of Illinois probably won't be using it to run one instance of McAfee and one instance of Word. Chances are, they'll be using it for meteorological simulations.

Sorry, that should read 'Having 300,000 cores wouldn't help if you weren't running enough processes.'

So if this thing is 4GHz, 8 cores would mean 500MHz per? Honestly, it doesn't sound so appealing to me knowing that some apps won't ever use 4GHz. Scaling it up, I hear that Intel plans to have 1000s of cores on a CPU...does this mean ultra slow apps?

Re:I have a serious question: (5, Informative)

Annymouse Cowherd (1037080) | more than 6 years ago | (#24191025)

No, each core is running at 4 GHz. That does not total up to 16 GHz of processing power, though, because only multithreaded programs can take advantage of more than one core at once, and they still have to wait if they're sharing data.

Nah, they are just building a new kind of bomb (0, Offtopic)

tjstork (137384) | more than 6 years ago | (#24190913)

They only -say- they are using all this computer power for weather forecasting. In reality, they are devising a new kind of anti-matter bomb, and they'll just use that sucker to take over the world. Pretty soon, the capital of Earth will be the University of Illinois, so, if you stole any books from the campus library, you'd better pony up, or you'll be on your way to the Gulag they plan on constructing in North Dakota.

Re:I have a serious question: (1)

tfranzese (869766) | more than 6 years ago | (#24190889)

The problem is that software is ultimately limited by how much of the algorithm is sequential. You cannot parallelize (I'll make that word up if I have to) every portion of software; there is always some sequential portion of code that cannot be split up. Certain software is limited more than others. It is up to the software developer to design the algorithm to exploit as much parallelism as possible, for instance by running routines that can be done concurrently in separate threads up to the point where they must communicate. The applications typical of a desktop environment have not always been designed with this in mind, and at times it can be a challenge.

Now, there are other classes of problems, unlike those typically seen on the desktop, called embarrassingly parallel. A lot of scientific problems fall under this category and can reap huge rewards from a cluster. If the software is largely sequential (either by design or by necessity), the process is unlikely to see any benefit from multiple cores and is at the mercy of the operating system's scheduler, as far as I know.

I think this gives some idea, but I've only had one course in parallel computing and it's been a while. It's pretty interesting stuff, and hopefully I didn't butcher too much.
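
What the parent describes is essentially Amdahl's law: if a fraction p of the work can be parallelized, the speedup on n cores is at best 1 / ((1 - p) + p/n). A quick back-of-the-envelope sketch in C (my own illustration, not from the poster):

#include <stdio.h>

/* Amdahl's law: upper bound on speedup when a fraction p of the work
   can run in parallel on n cores. */
static double amdahl(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    printf("p=0.95, n=8      -> %.1fx\n", amdahl(0.95, 8));      /* ~5.9x */
    printf("p=0.95, n=300000 -> %.1fx\n", amdahl(0.95, 300000)); /* ~20x  */
    printf("p=0.50, n=8      -> %.1fx\n", amdahl(0.50, 8));      /* ~1.8x */
    return 0;
}

Even with 95% of the code parallel, eight cores buy you less than a 6x speedup, and 300,000 cores top out around 20x - which is why the embarrassingly parallel scientific problems the parent mentions are the ones that actually justify machines like this.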

Re:I have a serious question: (1)

tfranzese (869766) | more than 6 years ago | (#24190963)

I should add/correct myself: the designer should not limit themselves to a particular number of threads. If the algorithm can only be made concurrent with two threads, so be it, but generally, if the problem can utilize all 99 available cores to get done in reasonably less time, then it should utilize them.

That said, and this is somewhat off topic, I think it would be good (if it hasn't already been done) to let the user limit the number of cores the system uses, or customize the power options so they can choose when to bring certain cores out of sleep for the OS to utilize. I hate wasting power when all I want to do is browse the web and I'm not busy with more demanding tasks.

Re:I have a serious question: (1)

prod-you (940679) | more than 6 years ago | (#24191041)

AFAIK, set affinity only means that it'll use only that one CPU. Theoretically, with an intensive program, that CPU should get to 100% and leave the other one available for other things.

In Windows, though, the workload is split across cores by thread, not by process, so even a single program can take advantage of multiple cores, especially if multiple threads do the heavy lifting.

Re:I have a serious question: (0)

Anonymous Coward | more than 6 years ago | (#24191131)

If you're looking for 'common user' cases of multithreaded applications, I'd say take a look at video conversion. I don't believe that editing videos is very commonplace, but with all the iPod videos and other portable video players around these days, people are certainly going to look for a way to put that XviD or H.264 video they got off a torrent onto their iPod. This usually entails re-encoding, and there are various programs, free and non-free, that combine all the necessities into an easy-to-use GUI that churns out an .mp4 or .m4v file for the iPod or PSP. Some use the x264 codec, which is multithreaded, so before anyone discounts multithreading for lack of 'common user' uses, think of all the iPods and PSPs out there.

Re:I have a serious question: (0)

mikael (484) | more than 6 years ago | (#24191163)

You would have to have all your programs take advantage of multi-threading, e.g. via pthreads.

With multi-threading, there are different ways of improving the performance of applications. The simple way is to have separate threads for user input, background tasks, and display.

The more complex way is to make each single function that would run on one CPU (e.g. color-adjusting an image) run on multiple cores/threads. For any such function, the application creates a whole series of tasks which are retrieved and processed by separate threads. For image processing, this would involve splitting the image into many subimages - each thread processes a particular chunk of the main image.

For these kinds of supercomputers, the users are going to have massive 3D grid volumes (e.g. 2048^3 or greater), with each core/thread processing a particular sub-volume. Calculations like CFD can require a good many passes to run a single time-step of the simulation, and both the current-generation grid and the next-generation grid need to be stored. All this information needs to be processed so that it can be rendered in real time.
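
Here's a minimal sketch of that split-into-chunks approach (my example, plain pthreads, with a toy 1D buffer standing in for an image; a real app would obviously pass actual pixel data and pick the thread count at runtime):

/* Build with: gcc -pthread bands.c -o bands */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define NPIXELS  (1 << 20)

static unsigned char image[NPIXELS];

struct band { long start, end; };

/* Each thread adjusts only its own band, so no locking is needed. */
static void *brighten(void *arg)
{
    struct band *b = arg;
    long i;
    for (i = b->start; i < b->end; i++)
        image[i] = (unsigned char)(image[i] / 2 + 128);  /* toy "color adjust" */
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    struct band bands[NTHREADS];
    long chunk = NPIXELS / NTHREADS;
    int t;

    for (t = 0; t < NTHREADS; t++) {
        bands[t].start = t * chunk;
        bands[t].end   = (t == NTHREADS - 1) ? NPIXELS : (t + 1) * chunk;
        pthread_create(&tid[t], NULL, brighten, &bands[t]);
    }
    for (t = 0; t < NTHREADS; t++)
        pthread_join(tid[t], NULL);

    printf("%d threads processed %d pixels\n", NTHREADS, NPIXELS);
    return 0;
}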

Re:I have a serious question: (1)

ZuggZugg (817322) | more than 6 years ago | (#24191229)

The "problem" you are noticing is that most software is not programmed to take advantage of multiple execution cores.

The problem in a nutshell is that writing parallel execution routines in software is not trivial.

What you point out is exactly the problem that many have been "freaking out" about for a while: multi-core is all fine and dandy for workloads that can leverage parallelism, but for a lot of applications that is very difficult to accomplish.

In the case of this "computer" at this university, it's likely a number-crunching supercomputer - very likely just a gang of machines networked together to process ridiculously parallel problems.

Not something you'll ever boot Vista on and expect to run Half-Life any faster...

Re:I have a serious question: (1)

NerveGas (168686) | more than 6 years ago | (#24191237)

Only if it's written for it.

I have a Q6600 (quad core), and regularly play a video game on one monitor and an H.264-encoded movie on the other. Between the two, it will keep all four cores at 100%.

Tick...tick...tick... (0, Offtopic)

HaeMaker (221642) | more than 6 years ago | (#24190677)

...tick...tick...tick... [singularity.com]

Steve Jobs is crying in his pillow tonight. (0)

pecosdave (536896) | more than 6 years ago | (#24190679)

If I would have held out just a little bit longer!

Re:Steve Jobs is crying in his pillow tonight. (0)

Anonymous Coward | more than 6 years ago | (#24190745)

Yeah, sure. Because THAT would fit perfectly in a laptop.

Apple jumped ship because IBM was unwilling to commit to a laptop-worthy chip.

And judging by the number of laptops Apple sells, the decision was wise.

Re:Steve Jobs is crying in his pillow tonight. (1)

pecosdave (536896) | more than 6 years ago | (#24190931)

I've actually been wondering if it would be possible to put the PS3's Cell processor into a laptop. I would pay reasonable money for that.

Re:Steve Jobs is crying in his pillow tonight. (3, Insightful)

Wesley Felter (138342) | more than 6 years ago | (#24190987)

Look at the heatsink in a PS3 and you have your answer.

Re:Steve Jobs is crying in his pillow tonight. (1, Informative)

Anonymous Coward | more than 6 years ago | (#24191051)

A geek's wet dream if there ever was one.

Toshiba actually announced [cnet.co.uk] such a beast (albeit with fewer SPEs, which might be a way to use slightly defective Cell chips to increase yields).
The only problem is that if this is only used in one machine, no one is going to bother writing applications for it.

Re:Steve Jobs is crying in his pillow tonight. (2, Interesting)

Orion Blastar (457579) | more than 6 years ago | (#24190765)

Chances are IBM will still have a problem supplying them, plus the new game consoles will get priority in shipping in 2010, when that Xbox 720 or PlayStation 4 comes out.

It is also possible that the eight-core chip will be really expensive, and in order to keep up with it a PowerMac would cost $4000 or more, just to eliminate bottlenecks and use optical technology like supercomputers do so the chip could be used properly. That's not to say anything stops Apple from bringing out PowerPC-based Macs in 2010, as Mac OS X already runs on PowerPC code and would just have to be modified to run on the Power7 instruction set, which is very doable. Apple could have Intel Macs as low-cost systems for homes and small businesses, and Power7 Macs as high-end workstations and servers for middle to large businesses. I don't see why Apple couldn't bring back PowerMacs and sell them next to Intel Macs, unless IBM starts to have production problems again and can't supply Apple the number of PowerPC chips it needs.

Re:Steve Jobs is crying in his pillow tonight. (0)

Anonymous Coward | more than 6 years ago | (#24190975)

That and the fact that this isn't a PowerPC processor, anyways.

Re:Steve Jobs is crying in his pillow tonight. (0)

Anonymous Coward | more than 6 years ago | (#24191101)

IBM didn't have production problems; they just didn't think it was all that important to cater to what a small customer like Apple wanted.

Apple Got Dumped By IBM. (0)

Anonymous Coward | more than 6 years ago | (#24190957)

When IBM secured the contracts for all three console companies, they no longer had any reason to bother with Apple. Apple and Jobs were a nightmare for IBM to work with compared to other companies.

IBM told Apple to fuck off and said they had no intention of bothering with a mobile G5-class chip. Jobs, in a panic, turned to PA Semi, who didn't have anything ready at the time.

So Jobs finally had to turn to Intel as his 'first choice'.

Yeah, high five, Apple and Jobs, you really 'made a smart move' there. You effectively got forced into a position where the main selling point of your new computers is that they are actually useful to the mass market because they can run your dominant competitor's OS.

Too bad for Jobs being an overpriced niche x86 OEM box manufacturer isn't viable long term.

Re:Apple Got Dumped By IBM. (1)

pecosdave (536896) | more than 6 years ago | (#24191027)

I'm still a bit disappointed they didn't go with AMD. Seriously, Intel seems too "normal". At least AMD would be cheaper, performed better at the time, has an awesome mobile chip, and was doing 64-bit better before Intel. Though I'm not a big ATI fan, considering AMD and ATI have merged, and AMD is really trying to do the right thing with ATI, I can see loads of benefit for Apple in the AMD camp.

Re:Apple Got Dumped By IBM. (4, Insightful)

wish bot (265150) | more than 6 years ago | (#24191189)

I remember something from that time that suggested it was simply a supply issue - AMD weren't big enough to guarantee supply. I remember looking at the figures and being surprised (about the capacity of AMD).

I also remember Jobs saying Intel had shown him _very_ exciting things, hint hint. And they were too.

Re:Apple Got Dumped By IBM. (3, Insightful)

ya really (1257084) | more than 6 years ago | (#24191119)

IBM told Apple to fuck off and that they had no intention of bothering with a mobile G5 class chip.

Actually, it was the other way around.

Jobs stated that Apple's primary motivation for the transition was their disappointment with the progress of IBM's development of PowerPC technology, and their greater faith in Intel to meet Apple's needs. In particular, he cited the performance per watt (that is, the speed per unit of electrical power) projections in the roadmap provided by Intel. This is an especially important consideration in laptop design affecting hours of use per battery charge.

In June 2003, Jobs had introduced Macs based on the PowerPC G5 processor and promised that within a year the clock speed of the part would be up to 3 GHz. Two years later, 3 GHz G5s were still not available, and rumors continued that IBM's low yields on the POWER4-derived chip were to blame. Further, the heat produced by the chip proved an obstacle to deploying it in a laptop computer, which had become the fastest growing segment of the personal computer industry. wikipedia.org [wikipedia.org]

Intel chips outperform PowerPC CPUs without a doubt; the PowerPC CPUs were horrible. The first MacBook Pros with Intel chips were 2-3 times faster than the ones before with PowerPC chips. If anything, it was a good move for Apple to start using Intel. I'm not a huge Mac fan; I own one Apple product, a Nano with Rockbox currently on it. However, I do hate it when people don't do their fact-checking and simply want to troll about a company they hate without justification.

Re:Apple Got Dumped By IBM. (0)

Anonymous Coward | more than 6 years ago | (#24191129)

Yeah, high five Apple and Jobs you really 'made a smart move' there. Effectively being forced into a position where the main selling point of your new computers is they are actually useful to the mass market due to them being able to run your dominant competitor's OS.

Great anecdote. Here's another (and this is from a guy running Debian on the desktop): Apple machines, especially the laptops, are selling like hotcakes, and I support about a hundred of them alongside a few hundred Windows and UNIX boxes. I've seen no more than ten of them using Boot Camp or a VM like Parallels. Nobody wants Vista, and after discovering OS X very few have a use for XP outside of an app or two that doesn't yet have a Mac equivalent. Even if you need to run one of those apps, you can run them on your Mac desktop via Parallels' Coherence feature.

Say what you will, but it will all happen again. Last time it was DEC. This time it will be MS. Sorry Microsoft, people aren't going to wait until 2014 for you to get your shit together.

Can I imagine a ... (1)

krkhan (1071096) | more than 6 years ago | (#24190729)

... beowulf cluster of these 300,000 core machines? I want to be able to play high-definition video without any lag on Windows 7.

Release set for 2010 (1)

EkriirkE (1075937) | more than 6 years ago | (#24190747)

WTF? By that date, using the planned arch, it will be obsolete if not already scrapped before then.

2010 eh? (1)

Yvan256 (722131) | more than 6 years ago | (#24190869)

"My god, it's full of cores!"

Hmmm (0)

Anonymous Coward | more than 6 years ago | (#24190755)

Isn't there still no consumer-grade CPU marketed at 4 GHz? Last I heard, that clock-speed race ended at 3.8 GHz due to heat issues.

Can these be used in PCs? (1)

the0 (1035328) | more than 6 years ago | (#24190809)

Can they? Probably not... but why? If IBM has the capacity to manufacture these (presumably) awesome chips, why doesn't it develop for the general PC market?

  Hint to the mods: These are insightful/interesting questions.

Re:Can these be used in PCs? (1)

Yvan256 (722131) | more than 6 years ago | (#24190885)

I'd guess that the general PC market = low margins, not to mention that this new processor will probably require a 5 kW power supply.

Re:Can these be used in PCs? (1)

Wesley Felter (138342) | more than 6 years ago | (#24191013)

Putting a server processor in PCs was tried; it was called the G5. It turned out OK, although having no laptop version was annoying.

Great (4, Interesting)

afidel (530433) | more than 6 years ago | (#24190827)

So you can get 16 cores in a low-end box, but it still won't have enough I/O slots, so you will have to buy an expansion shelf at $obscene_amount. Seriously, why does IBM put so few I/O slots in the lower-end P series boxes?

how much closer are we now to ... (1)

bizitch (546406) | more than 6 years ago | (#24190859)

"daisy, daisy ....." or "stop that dave ...."

4 Threads per core? (4, Interesting)

jdb2 (800046) | more than 6 years ago | (#24190875)

It should be noted that previous POWER architectures had 2 threads per core. They also had SMT (Simultaneous Multi-Threading [wikipedia.org]) support, which gave them an "effective" 4 threads per core. I wonder: are all the threads on the POWER7 "true" threads (i.e. 4 execution units - 1 per thread), or is it a 2-thread setup with SMT? On the other hand, if the POWER7 really does have 4 "true" threads, then with SMT you'd get an "effective" *8* threads per core.

jdb2

Re:4 Threads per core? (1, Informative)

Anonymous Coward | more than 6 years ago | (#24191007)

Actually, POWER6 has an effective two threads per core. If I boot a system with 16 cores and SMT enabled, AIX sees 32 "processors." With SMT disabled, I see 16 "processors."
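
If you want to check what your own OS exposes, here's a quick sketch (mine; note that _SC_NPROCESSORS_ONLN is a common extension on AIX, Linux and friends rather than strict POSIX):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Logical processors currently online: with SMT enabled this counts
       hardware threads, not physical cores. */
    long n = sysconf(_SC_NPROCESSORS_ONLN);
    printf("OS sees %ld logical processors\n", n);
    return 0;
}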

Imagine a Beowulf cluster... (0, Redundant)

ducomputergeek (595742) | more than 6 years ago | (#24190927)

Errr... wait a minute, that sounds exactly like what U of I is building!

Memory Bandwidth (5, Interesting)

Brad1138 (590148) | more than 6 years ago | (#24190989)

It'll have 620 TB of memory and support 5 PB/s

Is that kind of memory bandwidth possible? You could access the entire 620 TB in ~120 milliseconds. I guess nothing is ever too fast; it just seems unrealistically fast.

Re:Memory Bandwidth (2, Interesting)

irtza (893217) | more than 6 years ago | (#24191209)

I believe the memory is aggregate and so is the bandwidth... so per-core memory bandwidth is only 5 PB/s split across 300K cores (roughly 17 GB/s per core).

The real question is how memory is allocated per core - does each core have unrestricted access to the full 620 TB, or is it a cluster with each machine having unrestricted access to a subset and a software interface to move data to other nodes?

If anyone here has insight on this, please fill in the giant blank.

I just Want to Cry (0)

Louis Savain (65843) | more than 6 years ago | (#24191061)

Every time I read about the latest multicore processor that uses a parallel programming model based on multithreading, I just want to cry. How many times must it be repeated to the industry that multithreading is not part of the future of parallel computing? Multithreading simply sucks. Concurrent threads are non-deterministic, error-prone, and hard to program. There is an infinitely better way to design and program parallel computers that does not involve threads at all. To find out why multithreading's days are numbered, read Parallel Computing: Why the Future Is Non-Algorithmic [blogspot.com].

Re:I just Want to Cry (1, Insightful)

Anonymous Coward | more than 6 years ago | (#24191091)

You've got a guy on Blogspot going up against the large number of researchers at Intel who actually designed the chips, as well as researchers who can design and assemble supercomputers and are doing so with the belief that these chips are suitable. I wonder who wins.

Re:I just Want to Cry (1)

Louis Savain (65843) | more than 6 years ago | (#24191167)

You've got a guy on Blogspot going up against the large number of researchers at Intel who actually designed the chips, as well as researchers who can design and assemble supercomputers and are doing so with the belief that these chips are suitable.

Not true. IBM, Intel, AMD, Nvidia, Microsoft and the others do not believe that they have the answer. In fact, judging by the amount of money (hundreds of millions) that they are currently spending in research labs around the world trying to find a solution, it's obvious that the industry is in a state of panic.

I wonder who wins.

It's not over until it's over.

5.0 GHz Power6 (0)

Anonymous Coward | more than 6 years ago | (#24191137)

is already available.

In other news... (1, Redundant)

billybob_jcv (967047) | more than 6 years ago | (#24191143)

...IBM also announced a performance upgrade kit for the Power7 to enable it to meet the minimum HW requirements for Vista.