
Intel Announces Xeon E5 and Knights Corner HPC Chip

Unknown Lamer posted more than 2 years ago | from the one-treeleeon-flops dept.

MojoKid writes "At the supercomputing conference SC2011 yesterday, Intel announced its new Xeon E5 processors and demoed its new Knights Corner Many Integrated Core (MIC) solution. The new Xeons won't be broadly available until the first half of 2012, but Intel has been shipping the new chips to a small number of cloud and HPC customers since September. The new E5 family is based on the same core as the Core i7-3960X Intel launched Monday. The E5, while important to Intel's overall server lineup, isn't as interesting as the public debut of Knights Corner. Recall that Intel's canceled GPU (codenamed Larrabee) found new life as the prototype device for future HPC accelerators and complementary products. According to Intel, Knights Corner packs 50 x86 processor cores into a single die built on 22nm technology. The chip is capable of delivering up to 1TFlop of sustained performance in double-precision floating point code and operates at 1 to 1.2GHz. NVIDIA's current high-end M2090 Tesla GPU, in contrast, is capable of just 665 DP GFlops."


Pfffft! (-1, Troll)

Anonymous Coward | more than 2 years ago | (#38075754)

Pfffft! My prosthetic horse cock penis can deliver 500 OPH (orgasms per hour) to even the most blown out vags. Plus it's so long and thick that even Mandingo's penis looks like the size of CmdrTaco's micropeen in comparison.

Re:Pfffft! (2)

ColdWetDog (752185) | more than 2 years ago | (#38075782)

4chan is still down? Maybe we should lend them a hand.

Re:Pfffft! (-1)

Anonymous Coward | more than 2 years ago | (#38075880)

No one cares you /b/tard loser.

Re:Pfffft! (1)

fa2k (881632) | more than 2 years ago | (#38076418)

Pfffft! My prosthetic horse cock penis can deliver 500 OPH (orgasms per hour)

"DP" is double precision in this case, not the other one;)

Mac Pro Now $3000 with 4gb ram! (-1)

Anonymous Coward | more than 2 years ago | (#38075778)

Mac Pro Now $3000 with 4GB RAM!
1TB HDD + low-end video card as well. Dual CPU system with 8GB at $4500.

may I say (0)

Anonymous Coward | more than 2 years ago | (#38075794)

time to do some bitcoin mining?

Huh? (2)

PowerCyclist (2058868) | more than 2 years ago | (#38075812)

I mostly understand the figures this post states, but it sounds like engineering dialogue from 'Star Trek: Voyager'. All this really means to me is that last year's chips are now cheaper because they've been outclassed.

Re:Huh? (1)

Anonymous Coward | more than 2 years ago | (#38075858)

Summary: Faster chips out. You can't get them. Also a 50 core chip was released.

Re:Huh? (2)

SuricouRaven (1897204) | more than 2 years ago | (#38077266)

More importantly, an x86 chip. Not a GPU. Which means anyone who knows even the fundamentals of programming can use one with minimal additional training. No screwing around with the inability of GPUs to do recursion or deep nesting, no trying to deal with your data as if it were a texture. Just code and go.

Re:Huh? (1)

gstrickler (920733) | more than 2 years ago | (#38078180)

You mean the fundamentals of parallel processor programming. It's not exactly a widely held skill yet.

Re:Huh? (0)

Anonymous Coward | more than 2 years ago | (#38078324)

But you still need a good parallel algorithm -- a much bigger and more fundamental issue for many problem areas than learning some oddball programming skills. Needing that parallel algorithm to also be non-recursive only makes things somewhat worse.

Re:Huh? (1)

SuricouRaven (1897204) | more than 2 years ago | (#38079608)

KC will have no problems with recursion; it's x86. It's GPUs that don't do recursion -- at least not current ones. Remember, they were made for processing graphics, not something where recursion is of much use.

Re:Huh? (1)

GigaplexNZ (1233886) | more than 2 years ago | (#38079818)

Whoosh. The parent was suggesting that GPUs' lack of recursion is not the big hurdle; knowing how to design parallel algorithms in the first place is the biggest issue. x86 doesn't solve that.

Re:Huh? (1)

HappyPsycho (1724746) | more than 2 years ago | (#38082006)

Just to clarify, the 50-core beast hasn't been released as yet.

They gave a demo of what is most likely a prototype chip.

Re:Huh? (1)

Surt (22457) | more than 2 years ago | (#38076080)

If you could make your question clearer, you'll probably get a more effective answer.

Re:Huh? (1)

gstoddart (321705) | more than 2 years ago | (#38076712)

If you could make your question clearer, you'll probably get a more effective answer.

An interjection followed by two statements does not a question make. ;-)

Re:Huh? (1)

Anonymous Coward | more than 2 years ago | (#38076832)

huh?

Re:Huh? (1)

Surt (22457) | more than 2 years ago | (#38077744)

Huh?

A question asked directly does, though.

Re:Huh? (1)

gstoddart (321705) | more than 2 years ago | (#38078604)

Huh?

A question asked directly does, though.

Well, to continue the pedantry ... in and of itself, "Huh?" is merely an interjection [englishclub.com] .

Interjection is a big name for a little word. Interjections are short exclamations like Oh!, Um or Ah! They have no real grammatical value but we use them quite often, usually more in speaking than in writing. When interjections are inserted into a sentence, they have no grammatical connection to the sentence. An interjection is sometimes followed by an exclamation mark (!) when written.

It's a grammatical equivalent to a grunt.

Ergo, no question was ever posed. :-P

Re:Huh? (0)

Anonymous Coward | more than 2 years ago | (#38081214)

IBM's Cell processor reached 1 TFlop per processor 7 years ago, and that's using a 90nm process.

Dead (0)

Anonymous Coward | more than 2 years ago | (#38075844)

Xeon is dead remember

-L. Ellison

Little Intel has growed up (0)

ackthpt (218170) | more than 2 years ago | (#38075876)

When they said nobody needed multicore processors I heard the echoes of "640K should be enough for anyone" and "There is no reason for any individual to have a computer in his home." Now they're trying to see how many they can jam on one die. 50 is a pretty odd number, though. You usually see things in powers of 2 (2, 4, 8, 16). Perhaps they needed space on the die for Mickey or an etched portrait of Jobs.

Re:Little Intel has growed up (1)

Anonymous Coward | more than 2 years ago | (#38075922)

When they said nobody needed multicore processors

[citation needed]

Re:Little Intel has growed up (0)

Anonymous Coward | more than 2 years ago | (#38076264)

When they said nobody needed multicore processors

[citation needed]

[{Fallacy:Appeal to authority}] [nizkor.org]
(Protip: a claim is no more or less true when person X quotes person Y saying it, with "Intel said" wrapped around it, than when person X states it directly. Maybe you hung around the Wikipedia mailing lists for too long.)

Re:Little Intel has growed up (0)

Anonymous Coward | more than 2 years ago | (#38077654)

Then who the fuck is "they"? "They" seem to always say things, but nobody can ever identify who "they" are. It's almost as though "they" don't exist at all and are merely a convenient fabrication for the sake of being able to argue one's own opinion or set up a strawman.

Re:Little Intel has growed up (1)

Maximum Prophet (716608) | more than 2 years ago | (#38075968)

More than likely, there are 64 cores but only 50 are activated because they can't get a decent yield of perfect chips. That also means you might be able to get samples of 25-core chips that didn't even make the 50-core cutoff. (One core might also be dedicated to bookkeeping purposes.)

Re:Little Intel has growed up (0)

Anonymous Coward | more than 2 years ago | (#38076260)

I believe I read somewhere (maybe HPCWire) that there are inactive cores on the chip, and the reason is a combination of yield and heat management.

Re:Little Intel has growed up (1)

RightSaidFred99 (874576) | more than 2 years ago | (#38076424)

What natural phenomenon would require that the number of course on a chip be a power of 2? I can't think of any.

Re:Little Intel has growed up (4, Informative)

gstoddart (321705) | more than 2 years ago | (#38076872)

What natural phenomenon would require that the number of course on a chip be a power of 2? I can't think of any.

Because computers count in binary, which is powers of two. And, I'll assume you meant cores.

Historically such things have been powers of two to make the addressing simpler without having extra magic or control lines left over. So, 1, 2, 4, 8, 16, 32 and 64 all make sense in terms of being expressible in a fixed number of bits ... 50, to some of us, seems like a fairly arbitrary choice. Since you'd need an unusual combination of wiring anyway, it might as well be 37 or 51, since it's not a number that 'naturally' lends itself to computers. The device is likely wired in such a way that it could count to 64 ... or they're doing things in a slightly odd way.

Anyway, that's why some of us find it to be a little odd. And it's also why the hard-drive makers deciding "1 GIG" is "1,000,000,000 bytes" is irksome ... with all of those extra powers of two, it should be "1 073 741 824 bytes". Which means you lose about 72MB/GIG ... so my 2TB drive isn't.
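
For anyone who wants the exact numbers behind the "gig" complaint above, here is a quick back-of-the-envelope check (Python; purely illustrative, using the decimal and binary definitions discussed in the comment -- the exact shortfall works out to just under 74MB per "gig"):

    # Decimal vs. binary gigabytes, as discussed above.
    GB  = 1000**3            # what the drive makers sell: 1,000,000,000 bytes
    GiB = 1024**3            # the power-of-two "gig":     1,073,741,824 bytes
    print(GiB - GB)          # 73,741,824 bytes (~74 MB) difference per "gig"
    drive = 2 * 1000**4      # a "2TB" drive as labeled
    print(drive / 1024**4)   # ~1.82 TiB as the OS reports it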

Re:Little Intel has growed up (-1, Flamebait)

Dog-Cow (21281) | more than 2 years ago | (#38077056)

Which means you lose about 72MB/GIG ... so my 2TB drive isn't.

No, it means you're an idiot who cannot deal with the fact that the prefixes have a specific meaning unless one is talking about computer memory (RAM/ROM). The storage vendors are using the terms correctly.

Re:Little Intel has growed up (3, Insightful)

gstoddart (321705) | more than 2 years ago | (#38077144)

No, it means you're an idiot who cannot deal with the fact that the prefixes have a specific meaning unless one is talking about computer

So, are you always an asshole, or just on Slashdot?

Re:Little Intel has growed up (0)

Anomalyst (742352) | more than 2 years ago | (#38077254)

I'm guessing it's an incurable condition brought on by exposure to Apple iDevices

Re:Little Intel has growed up (2)

gstoddart (321705) | more than 2 years ago | (#38077378)

I'm guessing it's an incurable condition brought on by exposure to Apple iDevices

Well, since I own 3 iPods and an iPad ... you'd think I'd be the one being accused of being an asshole by that logic.

I'm going to go with self-righteous prick who feels entitled to be an ass on the internet because he's got a 5-digit Slashdot ID and therefore considers himself to be l337.

Re:Little Intel has growed up (1)

RightSaidFred99 (874576) | more than 2 years ago | (#38077156)

Indeed, cores. And I still don't see any reason, and AMD has 3 core processors. I can have 3G of memory. I can have 9G of memory. Binary numbers are not pervasive by mandate in all areas of computing.

Though I do agree base-10 usage for hard drives is ridiculous.

Re:Little Intel has growed up (2)

gstoddart (321705) | more than 2 years ago | (#38077828)

Indeed, cores. And I still don't see any reason, and AMD has 3 core processors. I can have 3G of memory. I can have 9G of memory.

Well, in fairness, on the memory side, you do that with some combination of memory modules which are addressable by powers of two. (eg. 2GB + 1GB, or 4GB + 4GB + 1GB), each of which is discrete from the others. I don't believe you can buy a 3GB or 9GB memory module.

Binary numbers are not pervasive by mandate in all areas of computing.

Nope, absolutely not. Not saying that ... just saying that traditionally such things have been architected to use powers of two because it was most efficient.

Obviously, for other reasons, Intel decided to go with 50 cores.

Re:Little Intel has growed up (1)

Score Whore (32328) | more than 2 years ago | (#38078588)

Well, in fairness, on the memory side, you do that with some combination of memory modules which are addressable by powers of two. (eg. 2GB + 1GB, or 4GB + 4GB + 1GB), each of which is discrete from the others. I don't believe you can buy a 3GB or 9GB memory module.

Certain models of Xeon processor have three memory controllers, which, when configured for maximum memory bandwidth, leads to memory sizes that are three times a power of two (e.g. 3 x 2^30).

Re:Little Intel has growed up (1)

petermgreen (876956) | more than 2 years ago | (#38081724)

Well, in fairness, on the memory side, you do that with some combination of memory modules which are addressable by powers of two. (eg. 2GB + 1GB, or 4GB + 4GB + 1GB), each of which is discrete from the others. I don't believe you can buy a 3GB or 9GB memory module.

However, certain Intel processors do use interleaved triple-channel memory, so there must be a division by 3 going on in the memory addressing system somewhere.

Re:Little Intel has growed up (0)

Anonymous Coward | more than 2 years ago | (#38079422)

AMD has 3 core processors

Technically, AMD has 4 core processors that have had a core disabled for various reasons.

Re:Little Intel has growed up (1)

c (8461) | more than 2 years ago | (#38080186)

The device is likely wired in such a way that it could count to 64 ... or they're doing things in a slightly odd way.

Or it's 64 cores with an average usable yield of 50 "good" ones.

Re:Little Intel has growed up (1)

Maximum Prophet (716608) | more than 2 years ago | (#38076930)

Addressing.

Let's say you've set aside 6 bits in every data structure that deals with core administration. You can grow to 2^6, or 64 cores without re-architecting your data structures.

As long as we are using binary in computers, making everything 2^N will make the most efficient use of space.

Of course, space isn't always the limiting factor, so sometimes for cost or speed reasons, we see objects that number 2^N-M.
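
A tiny illustration of the bit-budget argument above (Python; the 50 and 64 figures come from the discussion, and id_bits is just an illustrative helper):

    import math

    def id_bits(cores):
        # Bits needed to give every core a unique ID.
        return math.ceil(math.log2(cores))

    print(id_bits(50))       # 6 bits -- the same as you'd need for 64 cores
    print(2 ** id_bits(50))  # 64: the headroom before the data structures change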

Re:Little Intel has growed up (1)

RightSaidFred99 (874576) | more than 2 years ago | (#38077180)

Nah. That extra bit you lose isn't going to cause anyone any heartburn. And nobody's using 6 bits for anything. It's a specious argument.

Re:Little Intel has growed up (1)

Score Whore (32328) | more than 2 years ago | (#38078690)

It's not about storing 6 bits in a data structure. It's about running traces (if that's even what they're called in IC design) throughout the die connecting these things together. At that level, adding two extra traces to carry those two bits is an expense you might want to forgo. However, once you've got six wires/bits out there, the only reasons I can think of not to use 64 whatevers are the previously mentioned heat management and die yield issues.

Re:Little Intel has growed up (0)

Anonymous Coward | more than 2 years ago | (#38077016)

It would be, if it were a square with a side that's a power of two.

Admit that you would expect it to at least be a square.

50^0.5 = 7.07; the next whole number is 8.

8^2 = 64, which happens to be a power of two.

Re:Little Intel has growed up (3, Interesting)

mlts (1038732) | more than 2 years ago | (#38076492)

I wonder if Intel is taking a page from IBM's playbook.

Upper-end POWER7 CPUs have the ability to have half their cores turned off. The cores that remain on can then use the disabled neighbors' caches and run at a higher clock speed. This switch actually speeds up some tasks that can't be evenly broken into balanced threads.

I can see Intel doing this where some cores are disabled due to manufacturing defects (which happen to all dies), and having the operable cores use nearby caching which would otherwise go to waste.

Re:Little Intel has growed up (0)

Anonymous Coward | more than 2 years ago | (#38076620)

Intel's been doing that since the Core i5 line, I believe; they call it "turbo mode" or some other such insipid marketing name. The idea is sound, though: disable one core and run the remaining one at a higher speed; as long as it stays within the same thermal envelope it should be fine.

Re:Little Intel has growed up (0)

Anonymous Coward | more than 2 years ago | (#38076660)

It's Larrabee reborn! Whatever.

Re:Little Intel has growed up (3, Insightful)

Anonymous Coward | more than 2 years ago | (#38075972)

Odds are... they have it lined up such that the cores are in a 5x10 grid. Or a 5x5 grid, front/back.

Just because it's a computer doesn't mean it's bound by powers of two. Boards are rectangular. Chips aren't necessarily laid out in a binary distribution.

Re:Little Intel has growed up (1)

Azmodan (572615) | more than 2 years ago | (#38076026)

This would be a +1 if you weren't posting as AC.

Re:Little Intel has growed up (1)

jessehager (713802) | more than 2 years ago | (#38076992)

Tilera's 100-core processor is built like this. It's a 10x10 grid of cores.

Re:Little Intel has growed up (1)

Kjella (173770) | more than 2 years ago | (#38078624)

I'm guessing 5x10; if you look at their Intel Core i7 3960X [anandtech.com], the cores are about twice as wide as they are high.

Re:Little Intel has growed up (1)

UnknowingFool (672806) | more than 2 years ago | (#38076044)

Your average consumer doesn't need 50 cores. For HPC, which this was designed for, multiple cores are essential. As for the number of cores, I would guess die size was a factor. There also might be redundancy. It's a RAID 50 CPU. ;)

Re:Little Intel has growed up (4, Informative)

David Greene (463) | more than 2 years ago | (#38076288)

Your average consumer doesn't need 50 cores.

Sure they do. What do you think a GPU is? History has shown over and over that we can never have enough computing power. Now that we're at the physical limits of clock speeds, parallelism is going mainstream.

Re:Little Intel has growed up (1)

Desler (1608317) | more than 2 years ago | (#38076600)

Now that we're at the physical limits of clock speeds,

Since when? You can easily overclock most modern chips to 4GHz, and with enough cooling to 5 or 6+ GHz. The Sandy Bridge i7 chips, for example, have been overclocked past 6GHz. So exactly what supposed "physical limit" do you mean?

Re:Little Intel has growed up (3, Interesting)

zpiro (525660) | more than 2 years ago | (#38076834)

At 6Ghz, you are very close to the speed of light in copper, so unless you can break the speed of light... its a "physics limit".
Below this point you have the problem of energy efficiency, i.e. what's the point of spending more energy on cooling than on actually powering the thing?
Intel's 3D transistors are HUGE because of this; they can push higher clock speeds more easily.

Re:Little Intel has growed up (0)

Anonymous Coward | more than 2 years ago | (#38077876)

At 6Ghz, you are very close to the speed of light in copper, so unless you can break the speed of light... its a "physics limit".

During one 6 GHz clock cycle, light travels 50 cm, or something like 50 times the width of a CPU die. I'd say there's easily room to go up to 60 GHz before the speed of light becomes a serious limitation. Ought even to be able to push off-die busses towards 10 GHz before running into that particular physics limit.

Re:Little Intel has growed up (2)

magnusk (569300) | more than 2 years ago | (#38080128)

No, light travels 5cm in one 6 GHz clock cycle, in a vacuum. Speed of light limitations have been a consideration for years. The Cray1 was designed in the early 70s and its physical design allowed for the propagation speed of electricity in copper. It only ran at 80MHz. It's not just about cycle time - what's the duration of your edges? What other latencies are there in the electronics? In 2004, IBM's POWER5 MCM was 9.5cm wide and the CPUs ran at ~2GHz. Not sure what speed the interconnect ran at.
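
The figure above is easy to verify (Python; uses the vacuum speed of light, so this is an upper bound -- signals in actual copper traces propagate slower still):

    c = 299_792_458          # m/s, speed of light in a vacuum
    for f_ghz in (6, 60):
        cm_per_cycle = c / (f_ghz * 1e9) * 100
        print(f"{f_ghz} GHz: light travels {cm_per_cycle:.1f} cm per clock cycle")
    # 6 GHz -> ~5.0 cm, 60 GHz -> ~0.5 cm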

Re:Little Intel has growed up (1)

hechacker1 (1358761) | more than 2 years ago | (#38078044)

I agree generally -- look at AMD's Bulldozer hitting 8GHz on a single core before running into the limits of physics (even with extreme cooling). I'm assuming nobody will ever be able to get more than 1 or 2 cores (out of 8) active while getting to 8GHz on that architecture.

But these days, the chips run in multiple clock domains. I believe the Intel chips are split into a base clock, L3 clock, core clocks, RAM clocks, and bus clocks. The architectures are moving ever toward asynchronous operation in order to pack billion upon billion of transistors on a package without having to synchronize them all the time.

Re:Little Intel has growed up (0)

Anonymous Coward | more than 2 years ago | (#38079960)

The architectures are moving ever toward asynchronous operation in order to pack billion upon billion of transistors on a package without having to synchronize them all the time.

If by this you mean they're going for a true asynchronous (no-clock) design, then no, I can't agree. There's been research into async circuit design over the years, but it's never achieved any real commercial success. The latest try I know of is Achronix, an FPGA startup which uses async techniques in its FPGA fabric, with tools that are supposed to automate the job of translating your clocked logic design to async. It remains to be seen if the technology will be commercially viable.

If you mean they're using more clock domains and that some domains may be asynchronous WRT other clock domains, then yeah, that's true. It's a bear to synchronize things across the whole die at multiple GHz, and you don't tend to need the whole chip under one clock tree in multi-core processors anyways.

Re:Little Intel has growed up (0)

RocketRabbit (830691) | more than 2 years ago | (#38079268)

Overclockers are up to 8.4 GHz now, with AMD chips.

Re:Little Intel has growed up (1)

David Greene (463) | more than 2 years ago | (#38079450)

Since when?

Since the point where handling the power and heat dissipation requirements stopped being economical. Engineering is about tradeoffs. Until we get better materials, multicore is more cost-effective than pushing the clock beyond a reasonable cost envelope.

Re:Little Intel has growed up (1)

Bengie (1121981) | more than 2 years ago | (#38076378)

"Your average consumer doesn't need 50 cores [yet]"

Games are getting pretty good at using my 1536 core GPU, which is just a co-processor

Re:Little Intel has growed up (0)

Anonymous Coward | more than 2 years ago | (#38077054)

And if you count cores in such a way as to get 1536 off a GPU, there are many more than 50 cores on MIC. Quite right, though, with the right application the cores are easily used.

Re:Little Intel has growed up (1)

Sloppy (14984) | more than 2 years ago | (#38076688)

Your average consumer doesn't need the 80386. There's hardly any software compiled to take advantage of its features anyway. I can see maybe someone using them for servers, but that's a pretty small niche.

Re:Little Intel has growed up (1)

gstoddart (321705) | more than 2 years ago | (#38076944)

Your average consumer doesn't need the 80386. There's hardly any software compiled to take advantage of its features anyway. I can see maybe someone using them for servers, but that's a pretty small niche.

I remember almost exactly that quote in PC Magazine back in the day. I think at the time it was the 80486, but same thing. They probably said the same thing about the '386 too.

Of course, I have a quad-core machine sitting on my desk at home with 8GB of RAM, and running at a clock speed two orders of magnitude higher than my '486 did. :-P

So, obviously, consumer demand for CPU speed has grown far faster than anyone would have predicted in the late '80s/early '90s. I still remember the first time I saw a PC with a 1GB hard drive ... a bunch of us stood around it thinking "WTF will we ever do with that much disk space?"

Re:Little Intel has growed up (1)

yuhong (1378501) | more than 2 years ago | (#38078676)

Ah, the disaster that is the move from real to protected mode.
Summary: The first fiasco was that in 1982 MS ignored the announcement of the 286 and proceeded to develop a real-mode multitasking version of DOS; only around 1985, when IBM refused to license it, was the mistake realized. The resulting OS/2 1.x sucked and lost out to Windows 3.x (which was incompatible, even though both were designed for 16-bit protected mode). The second fiasco was when MS broke the JDA with IBM in 1991, before the 32-bit OS/2 2.0 (in development since 1989) was even given a chance. Then later on MS attacked OS/2, particularly in the Warp era, when MS resorted to tactics like astroturfing (look up "OS/2 Microsoft Munchkins" for example). Imagine if MS had embraced OS/2 instead. Both fiascos delayed the move to protected mode by years, not to mention MS's attacks on DR-DOS, given that OS/2 did not depend on DOS.

Re:Little Intel has growed up (1)

Sloppy (14984) | more than 2 years ago | (#38079120)

Still bitter?

Re:Little Intel has growed up (2)

yuhong (1378501) | more than 2 years ago | (#38079596)

Yea, I know it is too late. The good news is that the x64 transition went much better.

Re:Little Intel has growed up (1)

mikael (484) | more than 2 years ago | (#38076066)

I'm guessing there would have to be glue logic to get all these processors to share the memory space as well as read/write access. From the promotional pictures of other multi-core chip dies, each core is usually surrounded by a band of interface logic as well as a hefty block of cache memory. That seems to be the biggest change in the evolution of CPUs: it seems easier to just create larger caches or more cores than to change anything low level.

Maybe they accept one or more non-functional cores in exchange for increased yields. The cores that don't function correctly could simply be disabled.

Re:Little Intel has growed up (0)

Anonymous Coward | more than 2 years ago | (#38076078)

Always scaling by powers of 2 only really makes sense when the only resource impacted by the increase in size is the register that stores the value. Computers store numbers in binary, so yes, as you add bits to a register, the number of cases it supports doubles for each bit added; hence color depth over the history of gaming went from 8-bit to 16-bit to 32-bit, etc. But nothing about the process by which you physically etch a copy of a CPU layout onto a wafer of silicon scales naturally in powers of two. The only thing that does scale that way is the variable in which you store information like the unique identifier for each core on the die, and the number of bits required to assign each core an ID number is not the limiting factor. The limiting factors are the physical size and shape of each core and the number of times the individual core area divides into the manufacturing constraints of the equipment that does the etching.

Re:Little Intel has growed up (1)

SuricouRaven (1897204) | more than 2 years ago | (#38079158)

In 32-bit color, only 24 bits are actually used. It's just more efficient to process one pixel in a 32-bit register than have to screw around with ANDs and shifts to get the data you want. The leftover eight bits are usually zeroed, sometimes used to store alpha or depth information.
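
A small sketch of the 32-bit pixel layout described above (Python; the 0xAARRGGBB channel ordering is just one common convention, not the only one):

    def pack_argb(a, r, g, b):
        # Four 8-bit channels in one 32-bit word: 24 bits of color plus
        # 8 bits that can hold alpha or simply stay zero.
        return (a << 24) | (r << 16) | (g << 8) | b

    def unpack_argb(p):
        return (p >> 24) & 0xFF, (p >> 16) & 0xFF, (p >> 8) & 0xFF, p & 0xFF

    p = pack_argb(0x00, 0x12, 0x34, 0x56)
    print(hex(p))            # 0x123456 -- the "unused" top byte is zero
    print(unpack_argb(p))    # (0, 18, 52, 86)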

Re:Little Intel has growed up (1)

Sebastopol (189276) | more than 2 years ago | (#38079502)

except for that pesky 8-bit alpha channel, which clearly isn't used.

Re:Little Intel has growed up (3, Informative)

fuzzyfuzzyfungus (1223518) | more than 2 years ago | (#38076088)

Intel's period of dismissive attitude toward advanced features (multiple cores, 64-bit support on x86, something that sucked less than the FSB) was never really serious. Back when they still thought that they had a chance of making IA64 the 'serious' platform and gradually letting x86 (and AMD) sink into the bargain bin, they did some tactical rubbishing of what "normal users" needed in order to justify restricting those features to the high-end SKUs; but they worked on them.

Once it became clear that that particular plan wasn't a happening thing, and that AMD was delivering serious server parts at knockdown prices, and Nvidia was doing interesting things with GPUs, and ARM licensees were pumping out increasingly zippy low-end chips, they stopped fucking around. These days they'll still charge as hard as they can for the features provided, but their hopes of sandbagging x86s in order to sell IA64s are dead.

Re:Little Intel has growed up (1)

Surt (22457) | more than 2 years ago | (#38076272)

Who said nobody needed multicore processors? That seems like a pretty unlikely claim, particularly from Intel, who were very much into selling multi-CPU systems to the high end long before multicore became the norm. I had a dual-socket Pentium II consumer-grade system ages ago. That we were headed to multicore was obvious even then.

Re:Little Intel has growed up (1)

Kristian T. (3958) | more than 2 years ago | (#38076628)

Reminds me that I have a dual Deschutes 350 in the attic somewhere. It served me faithfully from 1998 to 2004. If it weren't for the 128MB of memory and the price of electricity, I might still have it do... uhm, something. Trouble is, it's still hard to do multithreading, and our programming languages are still inherently single-threaded, maybe with some thread primitives glued on.

RON PAUL (-1, Offtopic)

Kraftwerk (629978) | more than 2 years ago | (#38075898)

Ron Paul Moves Into Second Place in Iowa AND New Hampshire. While Newt Gingrich attracts media attention as he moves up in national polls, Ron Paul is enjoying a surge of his own. The Texas congressman has moved into second place in Iowa and New Hampshire, according to two Bloomberg News polls released this week. [HERE and HERE] In Iowa, he sits one point behind Herman Cain with 19 percent support, according to a poll released Tuesday. And a poll released Wednesday shows him at 17 percent in New Hampshire, far behind Mitt Romney (who has 40 percent support) but six points ahead of the third-place finisher, Gingrich. Paul's campaign has demonstrated organizational strength in his numerous straw poll victories. That ability to mobilize supporters could come into play on Jan. 3 in Iowa's caucuses.

Re:RON PAUL (-1)

Anonymous Coward | more than 2 years ago | (#38076876)

The only GOP candidate that really has a chance is Huntsman. Well he doesn't stand a chance really of getting the GOP nomination, but if he did he would have a better chance at knocking off Obama than any of the other GOP candidates.

Romney may be a safe bet in some ways for the GOP, but really he is just another John Kerry. A bland candidate that really doesn't connect with anyone, let alone the opposition.

Remember anyone who gets the GOP nomination also has to gather Democratic votes to actually win the race.

Still Nvidia wins (-1)

Anonymous Coward | more than 2 years ago | (#38076382)

And still I will buy Nvidia every time. It is just a pity that AMD bought ATI instead of Nvidia. Intel and ATI graphics suck.

Re:Still Nvidia wins (0)

Anonymous Coward | more than 2 years ago | (#38077092)

ATI graphics suck

But they don't.

How can that be? (3, Insightful)

gr8_phk (621180) | more than 2 years ago | (#38076580)

A 50 core chip at 1GHz is going to need to perform 20 double precision floating point ops per cycle per core to achieve 1Tflop performance. OK, so 1.2GHz cuts that down to 16flops/clock. Since when can anything Intel Architecture achieve that many flops per cycle? Two 4-element dot products are only 14 flops. I suppose if they did two vector-scalar multiply-adds that would get 16 flops per cycle. So I just answered my own question. But can they really keep the FP unit running continuously at that rate? On all 50 cores?
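
A quick check of the arithmetic above (Python; the 50 cores and 1.0-1.2 GHz figures come from the summary, the per-core throughput is what's being solved for):

    target = 1e12            # 1 TFlop/s sustained, double precision
    cores = 50
    for ghz in (1.0, 1.2):
        per_core_per_cycle = target / (cores * ghz * 1e9)
        print(f"{ghz} GHz: {per_core_per_cycle:.1f} DP flops per core per cycle")
    # 1.0 GHz -> 20.0, 1.2 GHz -> ~16.7, matching the 20 and ~16 worked out above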

Re:How can that be? (2)

parlancex (1322105) | more than 2 years ago | (#38076696)

Maybe, but probably not. The key to high performance computing with parallel workloads like this is not just raw processing power but memory bandwidth. The Nvidia Tesla M2090 mentioned in TFS has a peak memory bandwidth of 177GB/s, with specially designed memory and controllers built for raw throughput. Conventional CPUs with the fastest DDR3 memory available can manage only a small fraction of that. A teraflop of sustained DP performance is going to be completely useless without the memory bandwidth to back it up.
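
To put the bandwidth point in rough numbers (Python; the 177 GB/s figure is from the comment above, the rest is illustrative arithmetic):

    bandwidth = 177e9            # bytes/s, quoted M2090 peak memory bandwidth
    flops = 1e12                 # 1 TFlop/s sustained DP target
    print(bandwidth / flops)     # ~0.18 bytes of memory traffic per flop
    print(flops * 8 / bandwidth) # ~45 flops per 8-byte double streamed from memory
    # A kernel that reuses each operand fewer than ~45 times is bandwidth-bound,
    # which is why the memory system matters as much as the raw flop count.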

Re:How can that be? (1)

drewm1980 (902779) | more than 2 years ago | (#38081106)

Depends on how much cache is on the chip and how big the problem being solved is. GPUs have a lot of FP units, but they have such a tiny amount of cache that they basically have to transfer ~everything they operate on over the memory bus. On a CPU, your dataset can be several MB and still fit on-chip, but of course you have fewer FP units. The algorithm I designed for my Ph.D. operates on the same few megabytes of data many times, and it ended up being about equally fast on both architectures, so I'm hoping KC will bridge the cache size chasm that exists between CPUs and GPUs.

Re:How can that be? (1)

Nom du Keyboard (633989) | more than 2 years ago | (#38076806)

A 50 core chip at 1GHz is going to need to perform 20 double precision floating point ops per cycle per core to achieve 1Tflop performance. OK, so 1.2GHz cuts that down to 16flops/clock.

By your math it means that each core has a 1024-bit wide vector unit. And that means 64-bit FP, not 80-bit. Not impossible, but perhaps unlikely to ever run at theoretical max across all cores in anything but the most carefully crafted case.

Re:How can that be? (1)

MacGyver2210 (1053110) | more than 2 years ago | (#38076840)

You seem to be forgetting about SIMD and vectorization. If you pack more instructions into the bits available for one, it can do much more than your typical 32- or 64-bit core. That is often how early benchmarks are tested to give the highest results possible for the data throughput.

Re:How can that be? (1)

the linux geek (799780) | more than 2 years ago | (#38076950)

Intel claims each core can perform 16 FLOPS per cycle, at least at SP. Each core has a 512-bit wide vector unit. I'm not sure where their DP claims are coming from, though.

Re:How can that be? (2)

six (1673) | more than 2 years ago | (#38077466)

The vector unit must be FMA capable just like Larrabee, hence the doubling of FLOPS/cycle.

Re:How can that be? (1)

markhahn (122033) | more than 2 years ago | (#38079126)

There are lots of useful computations that are more flops-intensive (relative to memory footprint) than dot products: matmul, FFT, almost anything Monte Carlo, etc.

Re:How can that be? (2)

David Greene (463) | more than 2 years ago | (#38079526)

OK, so 1.2GHz cuts that down to 16flops/clock. Since when can anything Intel Architecture achieve that many flops per cycle?

Since LRBni and its 512-bit vectors. A double-precision FMA gets you 16 ops in a clock.

But can they really keep the FP unit running continuously at that rate? On all 50 cores?

Easily. HPC codes regularly keep thousands of cores busy.
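
The arithmetic behind that figure, for the curious (Python; the 512-bit vector width comes from the comments above, and an FMA is counted as 2 flops per the usual convention):

    vector_bits = 512
    dp_bits = 64
    lanes = vector_bits // dp_bits       # 8 doubles per vector register
    flops_per_fma = 2                    # fused multiply-add counts as 2 flops
    print(lanes * flops_per_fma)         # 16 DP flops per core per cycle
    print(50 * 1.2e9 * lanes * flops_per_fma / 1e12)  # ~0.96 TFlop/s peak at 1.2 GHz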

Not all that exciting (1)

mrjimorg (557309) | more than 2 years ago | (#38076734)

Today I can go to the store and buy the Nvidia board that they mention. When can I buy a system with a Knights Corner chip? What about a PCI-E board? The answer is never. It will only be sold to Intel's partners in labs and research environments for special projects. It means very little to most of us.

Re:Not all that exciting (2)

the linux geek (799780) | more than 2 years ago | (#38076960)

Intel claims it will be released as a commercial product in the near future.

How about a consumer version? (1)

Jeng (926980) | more than 2 years ago | (#38077432)

Wonder if they'll produce a consumer version.

I use an ATI card as my main video card and wouldn't mind sticking a physics card in the other PCI-E slot. The thing is, if I put in an Nvidia card it won't work as a physics card, since Nvidia has written its drivers so that if you have a non-Nvidia video card as your primary video card, Nvidia will not allow you to use their cards just for physics.

So my hope is that if Intel puts out a consumer version then either I'll be able to buy an Intel board just for physics or Nvidia will drop their stupid restriction.

Either way if Intel puts out a consumer physics card I win.

Re:Not all that exciting (1)

Sebastopol (189276) | more than 2 years ago | (#38079538)

"It means very little to most of us."

Just like your comment.

Intel's side entry into the GPU market (1)

JDG1980 (2438906) | more than 2 years ago | (#38077608)

We may yet see high-end Intel discrete graphics cards in the future.

Knights Corner sounds like it is basically a high-end GPU without the actual graphics output. This lets Intel position it as a professional product for HPC and supercomputing, and squeeze out as much profit as possible from the early models. Then, once the R&D cost has been amortized and the fab technology has advanced further, they can add an HDMI output, dedicated RAM, and glue logic, and write appropriate drivers to make it a full-fledged graphics card. Of course this may lack some features of the professional Knights Corner (ECC support?) so it won't cannibalize the high-end market. But it has the potential to be much more power-efficient than AMD and nVidia enthusiast products.

Re:Intel's side entry into the GPU market (1)

the linux geek (799780) | more than 2 years ago | (#38078280)

It originally was a video card (Larrabee project), but things didn't look good for consumer performance and they repositioned it.

Re:Intel's side entry into the GPU market (1)

timeOday (582209) | more than 2 years ago | (#38078340)

Knights Corner sounds like it is basically a high-end GPU without the actual graphics output.

To me it sounds like much more. The "cores" on a GPU are not equivalent to CPU cores [langly.org], whereas on Knights Corner you get 50 actual x86 cores. It is sure to be much more general purpose. From the article: "Unlike other co-processors, the MIC is fully accessible and programmable as though it were a fully functional HPC node." It sounds like a cluster on a chip. I am curious about the memory model.

Re:Intel's side entry into the GPU market (1)

drewm1980 (902779) | more than 2 years ago | (#38081378)

I'm curious about the memory model too. I'm pretty certain that bit about "cluster on a chip" is just marketing hyperbole, and it's actually still a shared memory system running one instance of the Linux kernel. They're not going to make you run 50 Linux kernel instances and communicate between them using network sockets.

MIC presentations at SC11 (3, Informative)

Nite_Hawk (1304) | more than 2 years ago | (#38078446)

I'm at SC11 right now and just attended NIC's MIC presentation. The scaling looks fantastic according to various codes that they compiled to run on it, but what was notably absent was performance relative to traditional x86 chips. The final presenter even said that now that the technology has been demonstrated to work (with minimal porting effort required) the next step will be to optimize and improve performance. The take away is that relative to Intel's other chips, MIC performance wasn't impressive enough to include in the presentation. That's fine in my book because it's an ambitious project, but it sounds like there is still some work to do.

Remember ASCI Red?? (1)

GrandTeddyBearOfDoom (1483117) | more than 2 years ago | (#38078976)

Just shows you the progress in CPU power: ASCI Red was the first supercomputer to go over 1TFlop, and it was massive. Now we have this on just one chip!

Re:Remember ASCI Red?? (1)

blair1q (305137) | more than 2 years ago | (#38079816)

And the massive computers are heading for 100 Petaflops [theregister.co.uk] (that's 100,000 Teraflops).

Imagine... (1)

Iamthecheese (1264298) | more than 2 years ago | (#38080634)

A Beowulf cluster of these! But seriously, even one wouldn't be efficient enough to be worth it yet, even in top-of-the-line OSes. We need a whole new paradigm of algorithms and maybe even a new language to do this right.