
Intel Next-Gen CPU Has Memory Controller and GPU

kdawson posted more than 7 years ago | from the mmmm-threads dept.

Intel 307

Many readers wrote in with news of Intel's revelations yesterday about its upcoming Penryn and Nehalem cores. Information has been trickling out about Penryn, but the big news concerns Nehalem — the "tock" to Penryn's "tick." Nehalem will be a scalable architecture with some products having on-board memory controller, "on-package" GPU, and up to 16 threads per chip. From Ars Technica's coverage: "...Intel's Pat Gelsinger also made a number of high-level disclosures about the successor to Penryn, the 45nm Nehalem core. Unlike Penryn, which is a shrink/derivative of Core 2 Duo (Merom), Nehalem is architected from the ground up for 45nm. This is a major new design, and Gelsinger revealed some truly tantalizing details about it. Nehalem has its roots in the four-issue Core 2 Duo architecture, but the direction that it will take Intel is apparent in Gelsinger's insistence that, 'we view Nehalem as the first true dynamically scalable microarchitecture.' What Gelsinger means by this is that Nehalem is not only designed to take Intel up to eight cores on a single die, but those cores are meant to be mixed and matched with varied amounts of cache and different features in order to produce processors that are tailored to specific market segments." More details, including Intel's slideware, appear at PC Perspectives and HotHardware.


What is this? (-1, Offtopic)

Goaway (82658) | more than 7 years ago | (#18527127)

Opinion Center: Intel?

*snore* (-1, Redundant)

Fordiman (689627) | more than 7 years ago | (#18527149)

Hey, wake me when they've got 512M of base RAM directly on the chip, will you?

Is AMD beaten? (4, Interesting)

Anonymous Coward | more than 7 years ago | (#18527167)

It seems that AMD has lost, and I'm not trying to troll. It just seems that fortunes have truly reversed and that AMD is being beaten by 5 steps everywhere by AMD. Anybody have an opposing viewpoint? (Being an AMD fan, I am depressed.)

Re:Is AMD beaten? (5, Funny)

Fordiman (689627) | more than 7 years ago | (#18527191)

I do. I feel that AMD should stop beating itself and get back to beating Intel!

No, seriously though. I'm holding out hope that AMD's licensing of Z-RAM will be able to keep them in the game.

Re:Is AMD beaten? (3, Insightful)

Applekid (993327) | more than 7 years ago | (#18527245)

"Anybody have an opposing viewpoint?"

I think "AMD fan" or "Intel fan" is a bad attitude. When technology does its thing (progress), it's a good thing, regardless of who spearheaded it.

That said, if AMD becomes an obviously bad choice, Intel, in the lead, will continue to push the envelope, just not as fast, since they won't have anything to catch up to. That will give AMD the opportunity to blow ahead, as it did time and time again in the past.

The pendulum swings both ways. The only constant is that competition brings out the best and it's definitely good for us, the consumer.

I'm a "Competition fan."

Re:Is AMD beaten? (1)

BlueTrin (683373) | more than 7 years ago | (#18527597)

Yeah, we can see a parallel at the moment with the overhyped GeForce 8 series. Although the product is good, it suffered from bad drivers. Still, because it is the only DX10-compatible card and its performance is quite good, most enthusiasts go for it. And there is no incentive to push prices down for now, since the R600 from ATI has been delayed.

Monopoly is bad for the users. The only advantage brought by competition is that you don't have any compatibility issues on your overpriced unfixed ... operating syst... *coughs* ... I mean software/hardware !!!

I am no AMD/Intel/Nvidia/ATI/MacOSX fan or MS-hater, but it would be nice if Intel and Nvidia got challenged by their competitors a little bit more ...

Re:Is AMD beaten? (3, Interesting)

Vulva R. Thompson, P (1060828) | more than 7 years ago | (#18528139)

That will give AMD the opportunity to blow ahead as it did time and time again in the past.

That's assuming they'll have the cash and/or debt availability to do so; a large chunk went into the ATI acquisition. Their balance sheet reads worse now than any time in the past (imho) and the safety net of a private equity buyout is weak at best. Now that ATI is in the mix, it seems that competition in two segments is now at risk.

Point being that the underdog in a two horse race is always skating on thin ice. Let's hope that he doesn't hit a spot that's too thin this time.

Re:Is AMD beaten? (3, Insightful)

Anonymous Coward | more than 7 years ago | (#18528201)


#define Competition > 2

What you have here is a duopoly, which is apparently what we in the US prefer, as all our major industries eventually devolve into 2-3 huge companies controlling an entire market. That ain't competition, and it ain't good for any of us.

Captcha = hourly. Why, yes, yes I am.

Re:Is AMD beaten? (1)

abshnasko (981657) | more than 7 years ago | (#18528293)

I think "AMD fan" or "Intel fan" is a bad attitude. When technology does its thing (progress), it's a good thing, regardless of who spearheaded it
That doesn't mean you can't pick sides...

Re:Is AMD beaten? (4, Interesting)

LWATCDR (28044) | more than 7 years ago | (#18527327)

Simple: nothing has shipped yet.
So we will see. Intel's GPUs are fine for home use but not in the same category as ATI or NVidia. The company that might really lose big in all this is NVidia. If Intel and AMD start integrating good GPU cores on the same die as the CPU, where will that leave NVidia?
It could be left in the dust.

Re:Is AMD beaten? (2, Interesting)

eddy (18759) | more than 7 years ago | (#18527473)

Just you wait for the Ray Tracing Wars of 2011. Then the shit will really hit the fan for the graphics board companies.

Re:Is AMD beaten? (0)

Fordiman (689627) | more than 7 years ago | (#18527769)


Re:Is AMD beaten? (-1, Troll)

GundamFan (848341) | more than 7 years ago | (#18527961)


Re:Is AMD beaten? (1, Troll)

treeves (963993) | more than 7 years ago | (#18528089)

No, spelling correction. Almost as bad.

Re:Is AMD beaten? (1)

mr_mischief (456295) | more than 7 years ago | (#18527865)

Hopefully to license GPU technology to Intel as an outside think tank, while making motherboard chipsets and high-speed PCI Express add-in cards for things that still aren't integrated onto the CPU. They have experience in making some pretty nice chipsets, after all, and more experience than most in making high-performance PCI Express peripherals.

PCI Express is 2.5Gbps per lane each way, so x16 means 40Gbps full duplex. I haven't seen x32 anywhere, but there's supposed to be a spec for it. That's 80Gbps full duplex on one interface. The companies that are still patting their own backs about the jump from ISA to PCI are not likely to be the leaders in putting peripherals other than video on x8, x16, and x32 connections. NVidia could be, if it positions itself well.
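(A quick sanity check of that arithmetic, not from the post itself: 2.5Gbps is the raw first-generation signalling rate per lane per direction; usable throughput is lower once 8b/10b encoding takes its ~20% cut.)

```shell
# PCIe 1.x raw rate: 2.5 Gbps per lane, each direction.
# Integer math in tenths of a Gbps: lanes * 25 / 10.
for lanes in 8 16 32; do
    echo "x${lanes}: $((lanes * 25 / 10)) Gbps each way"
done
```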

Imagine a server with a 16x PCIe four port 10Gbps Ethernet card. (Sure, there's overhead, but the chances of sustaining the maximum on all four ports simultaneously for very long is small, and large FIFOs could make the issues pretty much moot. Make it three ports if that's such a concern.) As the processors scale, memory limits soar, and flash memory (maybe even holographic in a few years) replaces slower hard drives, machines will be able to satisfy more network requests as long as there is bandwidth into and out of the machine that can keep the data flowing.

Also, it'd make sense for NVidia to do physics, encryption, and other things which could use some acceleration outside the CPU.

Don't rule out a merger with the perpetual back runner, either. Via/S3/NVidia might just have enough know-how together to mount a cost/performance attack for low-end desktops and the entire mobile space.

Re:Is AMD beaten? (1)

LWATCDR (28044) | more than 7 years ago | (#18528301)

I so hope that physics engines don't go mainstream. I fear what eye-candy might end up on my GUI. The wobble windows are bad enough.

Re:Is AMD beaten? (0)

Anonymous Coward | more than 7 years ago | (#18528247)

Don't count NVidia out.

The way this new technology seems to point is to make it possible to have drop-in cores, currently so that you can add cores with more cache or optimized for certain functions. If the architecture provides a standard "pinout" and geometry for the inner core, Intel could very easily license the standard to NVidia so they can design a core to be included in a laptop version of the processor.

This would give NVidia access to Intel's fabs, and give Intel an industrial-strength graphics coprocessor and access to NVidia's know-how. All without a messy acquisition/merger.

NVidia could have some fun with it, designing cores to be used in parallel with a video card (Say, geometry shader on CPU, rasterizer on card).

Intel could have some fun with it, making a gamer processor with tri-procs, NVidia GPU, PhysX engine and optimized memory architecture (intercore communication via cache?). For home users, an MPEG decoder, hardware Blu-ray decryption, an Aero Glass GPU. Who-knows-what for DB servers. Etc.

Heck, you could even throw in some Cell PPUs if you want to get silly :)

Re:Is AMD beaten? (4, Interesting)

Gr8Apes (679165) | more than 7 years ago | (#18527355)

It seems that AMD has lost, and I'm not trying to troll. It just seems that fortunes have truly reversed and that AMD is being beaten by 5 steps everywhere by AMD. Anybody have an opposing viewpoint? (Being an AMD fan, I am depressed.)
Oh, good lord. Intel announces the "new" technology for something that's not due for years (most likely 2) which happens, just happens, to be tech you can already buy from AMD today (or with their next CPU release in the next few months) and you're running around "the sky is falling, the sky is falling".

This reminds me of MS during the OS/2 days, when they first announced Cairo with its DB file system and OO interface (sound familiar? It should: features of Longhorn, then moved to Blackcomb, and now off the map as a major release). Unlike MS, I don't doubt Intel will eventually release most of what they've announced, but to think that they're "ahead" is ludicrous. At this moment, their new architecture barely beats AMD's 3+ year old architecture (see Anandtech or Tom's, I forget which, but there was a head-to-head comparison of AMD's 4x4 platform with Intel's latest and greatest quad CPU, and AMD's platform kept pace). That should scare the bejeebers out of Intel, and apparently it has, because they're now following the architectural trail blazed by AMD, or announced previously, like multi-core chips with specialty cores.

In other words, not much to see here, wake me when the chips come out. Until Barcelona ships, Intel holds the 1-2 CPU crown. When it ships, we'll finally be able to compare CPUs. AMD still holds the 4-way and up market, hence its stranglehold in the enterprise. Intel's announcement of an onboard memory controller in Nehalem indicates that they're finally maybe going to try to tackle the multi-CPU market again, depending upon how well architected that solution is.

Re:Is AMD beaten? (5, Informative)

jonesy16 (595988) | more than 7 years ago | (#18528067)

I'm not sure what reviews you've been looking at, but AMD is not nearly "keeping pace" with Intel, not for the last year anyway. The linked review clearly shows the Intel architecture shining, with many benchmarks having the slowest Intel core beating the fastest AMD. At the same time, Intel is achieving twice the performance per watt, and these are cores, some of which have been on the market for 6-12 months. Intel has also already released their dual-chip, eight-core server line, which is slated to make its way into a Mac Pro within 3 weeks. AMD's "hold" on the 4-way market exists because of the conditions 2 years ago when those servers were built. If you want a true comparison (as you claim to be striving for), then you need to look at what new servers are being sold and what the sales numbers are like (I don't have that information). But since the 8-core Intel is again using less than half the thermal power of an 8-core AMD offering, I would wager that an informed IT department wouldn't be choosing the Opteron route.

AMD is capable of great things, but Intel has set their minds on dominating the processor world for at least the next 5 years, and it will take nothing short of a major evolutionary step from AMD to bring things back into equilibrium. Whilst AMD struggles to get their full line onto the 65nm production scheme, Intel has already started ramping up 45nm, and that's something that AMD won't quickly be able to compete with.

Intel's latest announcements of modular chip designs and further chipset integration are interesting, but I'll reserve judgement until some engineering samples have been evaluated. I'm not ready to say that an on-board memory controller is hands-down the best solution, but I do agree that this is a great step towards mobile hardware (think smart phones / PDAs / tablets) using less energy and having more processing power while fitting in a smaller form factor.

Re:Is AMD beaten? (2, Interesting)

Gr8Apes (679165) | more than 7 years ago | (#18528369)

I think you missed the point. The AMD 4X4 solution kept pace with Intel's best under the types of loads where multiple cores are actually loaded. From your link:

When only running one or two CPU intensive threads, Quad FX ends up being slower than an identically clocked dual core system, and when running more threads it's no faster than Intel's Core 2 Extreme QX6700. But it's more expensive than the alternatives and consumes as much power as both, combined.
My point was that 3-year-old tech could keep pace with Intel's newest. The 4x4 system is effectively nothing more than a 2-way Opteron system. With an identical number of cores, AMD keeps pace with Intel's top-of-the-line quad. That would concern me if I were Intel, especially with AMD coming out with a quad on a smaller die than those running in the 4x4 system within the next couple of months. You can expect at least equivalent performance to the 4x4 with the new quad (they just co-located the two CPUs together), and because of the new additional shared L3 cache with individual L2 caches per core (Intel has two L2 caches in its quad, each L2 shared between 2 cores), things should be much better for the Barcelona chip. Now imagine if you plug two Barcelonas into that 4x4 system....

In any case, I'm waiting for Barcelona to come out to see what the effect of that release is on the market in general, including expected price cuts on the QX. Price vs. performance is what it's all about, after all.

Re:Is AMD beaten? (2, Informative)

berwiki (989827) | more than 7 years ago | (#18527357)

AMD will catch back up. Intel is a monster, much like Microsoft. Sure, they gain a step here and there, but because they are so large and slow, I'm sure AMD will catch up.

In fact, Intel's current quad processors are two dual-core dies mashed together, whereas AMD is coming out with a true quad-core solution. It wouldn't be surprising to see them gain a temporary advantage. (The back and forth is amazing for consumers.)

Re:Is AMD beaten? (5, Funny)

vivaoporto (1064484) | more than 7 years ago | (#18527363)

I agree. Despite of the fact of AMD market share growing in the past 3 years, the most recent products coming from AMD are headed to beat the AMD ones, unless AMD takes a shift in the current direction and starts to follow AMD example. Nowadays, when I order my processors from my retailer, I always ask for AMD first, and only if the AMD price is significantly lower, I order AMD. I remember back in the days when you could only buy AMD processors, while now you can choose between AMD and AMD (and some other minor producers), isn't competition marvelous?

From yours truly,


Re:Is AMD beaten? (0, Offtopic)

foniksonik (573572) | more than 7 years ago | (#18528181)

Marklar is that you? Maarrrrkllllaaarrrr!!!!! Marklarr mmarkklar marrkllar marklar Marklar. MMMMarrklarr maaarklarr marklar marklar.

Good to see you up and about again Marklar ;-p-klar

Best marklars,


Intellectually, Intel is playing catchup here. (4, Insightful)

mosel-saar-ruwer (732341) | more than 7 years ago | (#18527563)

It seems that AMD has lost, and I'm not trying to troll. It just seems that fortunes have truly reversed and that AMD is being beaten by 5 steps everywhere by AMD. Anybody have an opposing viewpoint? (Being an AMD fan, I am depressed.)

Look at the title of this thread: Intel Next-Gen CPU Has Memory Controller and GPU.

The on-board memory controller was pretty much the defining architectural feature of the Opteron family of CPUs, especially as Opteron interacted with the HyperTransport bus. The Opteron architecture was introduced in April of 2003, and the HyperTransport architecture was introduced way back in April of 2001!!! As for the GPU, AMD purchased ATI in July of 2006 precisely so that they could integrate a GPU into their Opteron/HyperTransport package.

So from an intellectual property point of view, it's Intel that's furiously trying to claw their way back into the game.

But ultimately all of this will be decided by implementation - if AMD releases a first-rate implementation of their intellectual property, at a competitive price, then they'll be fine.

Re:Is AMD beaten? (1)

afidel (530433) | more than 7 years ago | (#18527585)

I wouldn't say they are beaten, at least for what I'm using them for. Here's a little spreadsheet I created to do a cost/benefit analysis for VMware ESX. There are some assumptions built in, and it's not yet a full ROI calculator, but it gets most of the big costs. Cell A1 is the number of our "standard" systems to be compared (4GB dual-CPU 2003 machines). The DL580 is 4x Xeon 7120 with 32GB of RAM, local RAID1 on 15k disks, dual HBAs and a dual-port add-on NIC. The DL585 is 2x Opteron 8220HE with 32 or 64GB of RAM (the 580 with 64GB was more expensive than buying two with 32GB!) and the same equipment. The 360 is our standard build currently: dual 5110s with 4GB RAM, local RAID1 and an HBA. After about 17 "systems" (the point where 3 Intels are needed due to memory constraints), the AMD comes out cheaper, and keeps that lead. Quad-core Intels aren't even an option, because 32GB of memory is insanely expensive for the DL380.

Re:Is AMD beaten? (1)

Gr8Apes (679165) | more than 7 years ago | (#18527711)

AMD rules the server market, especially once you go beyond 2 processors.

Re:Is AMD beaten? (1)

afidel (530433) | more than 7 years ago | (#18527863)

Damn it, the 585 is four dual cores, not two.

Re:Is AMD beaten? (1)

Penguinisto (415985) | more than 7 years ago | (#18528109)

Actually no... the DL-580 is the Intel variant, and the DL-585 is the Opteron variant (I used to work w/ the 585's @ my previous employer, and the way HP did NUMA on the thing made it an occasional ECC Chipkill nightmare).


oops - corrections: (1)

Penguinisto (415985) | more than 7 years ago | (#18528145)

My bad... I missed the whole context (thought you were saying the 585 had Intels in it). OTOH, you can get a 585 with two or four chips, IIRC.


So, basically... (2, Interesting)

GotenXiao (863190) | more than 7 years ago | (#18527273)

...they're taking AMD's on-die memory controller, AMD/ATi's on-die GPU and Sun's multi-thread handling and putting them on one chip?

Have Intel come up with anything genuinely new recently?

Re:So, basically... (2, Insightful)

TheSunborn (68004) | more than 7 years ago | (#18527315)

If they manage to combine all these features in a single chip, they really will have made a genuinely new chip production process :}

Re:So, basically... (1)

ari_j (90255) | more than 7 years ago | (#18527319)

If you do the same thing as everyone else but do it better, you don't have to come up with anything new. What new things do you really want in a CPU?

Re:So, basically... (5, Funny)

maxwell demon (590494) | more than 7 years ago | (#18527541)

If you do the same thing as everyone else but do it better, you don't have to come up with anything new. What new things do you really want in a CPU?
The dwim instruction.

Stable connectors... (1)

DrYak (748999) | more than 7 years ago | (#18528375)

What new things do you really want in a CPU?

One stable and open socket technology. So you can pop custom hardware accelerators or FPGA chips into the additional sockets on a multi-CPU motherboard.

Like AMD's AM2/AM2+/AM3 and the HyperTransport bus, with partners currently developing FPGA chips.

Not like Intel, who change the socket with each chip generation, at least twice just to screw the customers (423 vs. 478). The Slot 1 used during the Pentium II / III / Coppermine / Tualatin era was a good solution to keep one interface for the whole range.

Currently, Intel is in a situation where they lost a lot of valuable time and resources on the Pentium IV NetBurst dead end.
This left time for AMD to catch up with nice technology, while Intel had to reboot from old technology (the P3-derived Pentium M).
Now that Intel has slowly caught up (AMD64 instruction set, on-die memory controller, on-die specialized accelerators), AMD won't be able to count on those to attract customers.

What's left for AMD is 3rd-party developers, through their opening of the socket/bus standard.
This is something that the Intel team won't be able to match so easily just by throwing money at it.

Re:So, basically... (1)

sarathmenon (751376) | more than 7 years ago | (#18527341)

It really doesn't matter. These are basic computing concepts, and anyone can draw up such an architecture. What's amazing about Intel is that they did it, and it looks like they have a killer chip in the making. Being an AMD guy, I hate to say that Intel is making me convert - and I'm not ready to forgive them for the P4 pipeline design.

But all in all, it's good news - now let's see what the other camp comes up with that will be 45 nm ready.

That wasn't the impression I got (0)

Anonymous Coward | more than 7 years ago | (#18527747)

I read the article as saying that the die is going to be modular, so that a GPU or other type of unit (or two) could be added in place of one (or two) of the cores. This will give Intel the flexibility to use the same design across many different market segments. I expect the memory controller to be similarly flexible. If so, this is a pretty innovative design.

Re:So, basically... (2, Insightful)

GundamFan (848341) | more than 7 years ago | (#18527753)

Is it really fair to attribute the GPU-CPU combo to AMD/ATi if Intel gets to market first? As far as I know neither of them have produced anything "consumer ready" yet.

One of those new computers? (2, Funny)

Afecks (899057) | more than 7 years ago | (#18527993)

Looks like you've got one of those new computers that runs faster based on originality. I bet those Lian Li cases really make it scream then!

Re:One of those new computers? (1)

GundamFan (848341) | more than 7 years ago | (#18528153)

I'd say that making a chip that is hands down better than everything on the consumer market after years of being behind the eight ball (being "beaten" by a smaller competitor using dubious shenanigans no less) is pretty darn original.

Re:So, basically... (1)

trimbo (127919) | more than 7 years ago | (#18528279)

Have Intel come up with anything genuinely new recently?

Yes, they can actually make the stuff with extremely high yields. That's Intel's contribution to the innovation. Maybe the ideas came from elsewhere, but Intel are the world's best chip fabricators.

Sure thats nice but... (1)

1_brown_mouse (160511) | more than 7 years ago | (#18527305)

What do the names mean? What is Intel's naming scheme? Why do they select them that way?

And the next one will be faster, stronger and able to leap gigaflops in a single bound.

Re:Sure thats nice but... (1)

BigBuckHunter (722855) | more than 7 years ago | (#18527421)

What do the names mean? What is Intel's naming scheme? Why do they select them that way?

All Intel x86 code names are derived from the names of rivers in the (northwest?) USA.


Re:Sure thats nice but... (4, Interesting)

MrFlibbs (945469) | more than 7 years ago | (#18527965)

Not quite. Intel projects are usually named after local geographical features, not all of them rivers. For example, Banias, Dothan, Yonah, and Merom (Centrino/core2 duo project names) are not rivers in Israel. Also, the first PIII project was done in Folsom and named "Katmai" -- again, there is no Katmai river in Northern California.

It's quite common in the industry to give projects names that don't mean anything, and each company uses a different scheme for generating the monikers. One interesting story is what happened when Apple used an internal project name of "Sagan". Carl Sagan took exception to this use of his name and threatened a lawsuit. Apple responded by changing the project name to "BHA", a TLA for "Butt-Head Astronomer". Sagan filed a lawsuit over this, but it was thrown out of court when the judge ruled the new name was a generic one, since Sagan was probably not the world's only butthead astronomer. (At least that's what I recall of it. Perhaps someone who worked at Apple during this time can add more detail?)

Re:Sure thats nice but... (0)

Anonymous Coward | more than 7 years ago | (#18527507)

Who knows, but the next 10 designs will be called Marklar.

Re:Sure thats nice but... (1)

ihatewinXP (638000) | more than 7 years ago | (#18527517)

Well, with Nehalem I think this has something to do with the team that designed it. From what I hear, they have been working with a number of Israeli design firms and engineers (who are apparently really top-notch and forward-thinking in advanced chip design) to produce, or at least influence, the next-next generation of Intel chips.

Based on the very Hebrew sounding name I would think this is some of the fruition of that partnership....

Just my conjecture though....

Re:Sure thats nice but... (0)

Anonymous Coward | more than 7 years ago | (#18527971)

A lot of their chips (Merom, Banias, Dothan) have Israeli names but Nehalem is one of the Oregon-derived ones.

Re:Sure thats nice but... (1)

QuantumRiff (120817) | more than 7 years ago | (#18528135)

Actually, they stick with their naming convention of NW rivers, namely the Nehalem River in Oregon. Right smack in the middle (north-south) of the state, on the coast. Very near Tillamook, where they make awesome cheese.

Re:Sure thats nice but... (0)

Anonymous Coward | more than 7 years ago | (#18527895)

Don't know about the other one, but Nehalem means "rivers" in Hebrew, while Merom means "high place".

And this is no wonder, since the core technology for these chips has been designed, and some of it manufactured, in Intel's development center in Israel.


Re:Sure thats nice but... (1)

treeves (963993) | more than 7 years ago | (#18528173)

It may mean that, but more specifically, Nehalem is the name of a river in Oregon, where the new chip is designed and built. (D1D fab - Hillsboro, OR)

/me drools. (1)

ThinkingInBinary (899485) | more than 7 years ago | (#18527313)

This is awesome. I'm just sitting here, waiting for more and more cores. While all the Windows/Office users whine that "it's dual-core, but there's only a 20% performance increase", I just set MAKEOPTS="-j3" (on my dual-core box) and watch things compile twice as fast. Add in the 6-12 MB of L2 cache these will have, and it's gonna rock. (my current laptop has 4 MB--that's as much RAM as my old 486 came with. (There. I got the irrelevant "when I was your age" comparison out of the way. (Yes, I know one of you had a computer whose RAM was as small as the L1 cache. Good for you.)))

Re:/me drools. (0)

Anonymous Coward | more than 7 years ago | (#18527685)

Watching stuff compile! Yay, that's fun!

Bah! (0)

Anonymous Coward | more than 7 years ago | (#18527707)

The first work machine I was on had a disk drive that held less than that!

By the time I started we had 10MB RL02 diskpacks, so it wasn't all that bad.

And a Whopping 64K extra memory on a 12" square board...

Re:/me drools. (1)

asc99c (938635) | more than 7 years ago | (#18527979)

Damn. (You just made me check whether the bracketing was right (this isn't code! (so no need to bracket so much)) (it was (mine isn't)))).

Here it goes- (0)

Recovering Hater (833107) | more than 7 years ago | (#18527335)

Yeah, but does it run linux?

Take your mod points, strike me down with all of your hatred...


Re:Here it goes- (1, Interesting)

Anonymous Coward | more than 7 years ago | (#18527551)

In this case that's actually a relevant question. Will the full GPU specs and/or open source drivers be available?

Re:Here it goes- (2, Interesting)

maxwell demon (590494) | more than 7 years ago | (#18527617)

Yeah, but does it run linux?
Imagine an on-board beowulf cluster ...

Image quality? (1)

rsilvergun (571051) | more than 7 years ago | (#18527367)

Doesn't the quality of onboard graphics suffer from being directly on the mobo? I know there's a thriving market for sub-$70 graphics cards that replace onboard graphics for the sake of better image quality. Wouldn't having it on-chip make this worse? I'd love to have onboard graphics (especially if I could get good TV-out with it) to save on heat/noise, but the stuff I've seen has been pretty lame.

Re:Image quality? (1)

crow (16139) | more than 7 years ago | (#18527553)

No, the quality of onboard graphics does not suffer from being directly on the motherboard. The quality suffers because they typically use the cheapest solution for the onboard graphics, because they're targeting the business market--the onboard graphics are good enough for Office, so there's no need to buy a separate card.

It probably doesn't make sense to put high-end graphics on the chip, because people in that market want to upgrade graphics more often than CPUs (not to mention that they probably want nVidia). What does make sense is something more in the mid-range; something good enough for all the fancy Vista eye candy, and something good enough for HDTV playback.

Re:Image quality? (1)

Joe The Dragon (967727) | more than 7 years ago | (#18527555)

Onboard video suffers because it needs to use chipset I/O and CPU power to run, and it needs to use slower RAM than you find on real video cards, as well as having to share that RAM with the rest of the system. Putting it in the CPU may help, but if Intel is still using the FSB, it may choke it up.

Re:Image quality? (1)

maxwell demon (590494) | more than 7 years ago | (#18527739)

I guess the quality problems come from the analog part of the graphics hardware. I think the processor-integrated graphics would only cover the digital parts (i.e. doing 3D calculations etc.) while the analog parts (creating the actual VGA or TV signals) would still be handled by separate hardware.

This "analog problem hypothesis" should be quite simple to test: Does onboard graphics image quality also suck when using a digital connector (e.g. DVI-D)? If I'm right, then it shouldn't, because in this case all the analog hardware is in the screen, not on the mainboard.

Re:Image quality? (1)

jhfry (829244) | more than 7 years ago | (#18528189)

doesn't the quality of onboard graphics suffer from being directly on the mobo?

Uhh... think about what you're asking. Does placing the graphics processor closer to the source of its information (RAM) and on a much faster bus (the CPU's internal bus) make it slower?

The reason onboard graphics suck on most machines is not because they are integrated; it's because the mobo manufacturers have no interest in integrating the latest and greatest video processors and massive quantities of RAM into a motherboard.

Most onboard video is there simply so that system builders can build an average desktop machine for a minimal cost. The reason that AGP and now PCIe slots are there is so that those who need graphics performance can upgrade.

I have seen some onboard graphics that do great TV out... TV out is relatively unaffected by the power of the graphics chip, though a low-budget implementation is not likely to rival a high-dollar one. Of course, if you want high FPS out of the latest 3D title, you will need a high-end graphics card with TV out on it. But I watch all my TV through MythTV and an old NVIDIA FX5200, as it had one of the best TV outs of the low-budget cards I could get at the time.

Suffice to say, by embedding the GPU in the main processor the system will benefit because the two cores can work together to perform tasks. The graphics can use system memory without the performance penalty currently associated with sharing system ram for graphics. And the best part is that applications that are not being used to generate images for display can still harness the power of the GPU.

For example, many GPU's can do MPEG2 playback without using the main processor, however when you want to re-encode an mpeg2 to mpeg4, the main processor must read the mpeg2 stream itself because its target is not the display; with the GPU integrated into the CPU and connecting to main memory, it is possible to harness its power for any purpose, not just driving the display.

Please don't equate your bad experiences with embedded graphics with what is to come... what we are going to see is a departure from what you know now... a shift to embedding for increased performance rather than embedding to reduce cost. I suspect that the high-end processors with embedded GPUs will just slaughter the power of current add-on cards once software is properly written for them. Not to mention that sharing your system RAM can keep costs a bit lower and make things even faster, because multiple cores could manipulate the same memory locations.

And best of all... power requirements drop, cooling requirements drop, and a tiny machine could pump out FPS like nothing you have seen before. So we might actually see the size of high end gaming rigs come way way down. Perhaps you will carry a tiny MacMini sized machine to your next LAN party frag fest.
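The non-display angle above is easy to picture in code. A GPU-friendly workload is just an independent per-element kernel mapped over a big array; here is a toy sketch in plain Python (the function names are made up, and a list comprehension stands in for a real GPU API):

```python
# Sketch of the "GPU as general coprocessor" idea: the same data-parallel
# kernel shape works whether the result goes to a display or back to memory.
# Pure Python stands in for a real GPU runtime here; names are hypothetical.

def saturate_add(pixel, bias):
    """Per-element kernel: independent, branch-light work a GPU eats up."""
    return min(pixel + bias, 255)

def run_kernel(kernel, data, *args):
    """A GPU would run these element-wise in parallel; here we just map."""
    return [kernel(x, *args) for x in data]

frame = [10, 200, 250, 30]          # a few 8-bit samples
print(run_kernel(saturate_add, frame, 20))  # [30, 220, 255, 50]
```

Because every element is processed independently, the same kernel could brighten a frame headed for the screen or a frame headed straight back into an encoder, which is the point of putting the GPU on the memory bus.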

Imitation is the highest form of flattery (5, Interesting)

Zebra_X (13249) | more than 7 years ago | (#18527377)

Intel has a lot of cash, and the ability to invest in expensive processes earlier than most. Certainly, earlier than AMD.

However, it's worth noting that these are clearly AMD ideas.
* On die memory controller - AMD's idea - and it's been in use for quite a while now
* Embedded GPU - a rip-off of the AMD Fusion idea, announced shortly after AMD's acquisition of ATI.

Intel is no longer leading as they have in years past - they are copying and looting their competition shamelessly. It appears that they are "leading" when in point of fact it's simply not the case - had AMD not released the Athlon64, we would all still be using single-processor NetBurst chips.

Re:Imitation is the highest form of flattery (2, Insightful)

madhatter256 (443326) | more than 7 years ago | (#18527489)

Intel is no longer leading as they have in years past - they are copying and looting their competition shamelessly. It appears that they are "leading" when in point of fact it's simply not the case - had AMD not released the Athlon64, we would all still be using single-processor NetBurst chips.
Actually, Intel is leading on something very important: mobility and power consumption. Take a look at the Pentium M series. Laptops with the Pentium M always outpaced the AMD Turion series in both battery life and speed in most applications. Now we see Intel integrating that technology into its desktop CPU series.

Re:Imitation is the highest form of flattery (1)

Zebra_X (13249) | more than 7 years ago | (#18527631)

I would say that AMD has let Intel lead in that segment. There are very few SKUs associated with AMD's mobile segment. Being a smaller company, AMD chose to attack the server market first with the Opteron, and the high-end PC market with the FX line. Both of those lines are driving innovation at AMD. The 3XXX, 4XXX, 5XXX, and 6XXX lines, as well as the Turions, are all reduced implementations of their server chips.

Re:Imitation is the highest form of flattery (1)

trigeek (662294) | more than 7 years ago | (#18527599)

Just because Intel is announcing it now doesn't mean that Intel wasn't planning it before AMD announced. As feature size shrinks and the GHz war is over, you gotta use the real estate for something. It's kind of a no-brainer to integrate a GPU.

AMD has a history of announcing very early. Intel, on the other hand, has a history of announcing late.

Re:Imitation is the highest form of flattery (1)

jimicus (737525) | more than 7 years ago | (#18528251)

Intel is no longer leading as they have in yeas past

Did they ever? Maybe for desktop PCs, but not for chips in general. The DEC Alpha chip was way ahead of anything Intel had at the time.

Integrated Graphics? Uh-oh! (1)

madhatter256 (443326) | more than 7 years ago | (#18527391)

So it took Intel almost 9 years to integrate the Intel i740 GPU onto a CPU? I always wanted native DirectX 6.1 support right from the get-go!

Re:Integrated Graphics? Uh-oh! (0)

Anonymous Coward | more than 7 years ago | (#18527715)

DirectX is totally irrelevant to me; all I want is good 2D, adequate OpenGL performance, and open source drivers. The majority of business users don't care about D3D support either, so I don't see what you're complaining about.

Re:Integrated Graphics? Uh-oh! (0)

Anonymous Coward | more than 7 years ago | (#18528129)

Right, because business users are the only people in the world.

Tick tock? Where's the clock? (-1)

Anonymous Coward | more than 7 years ago | (#18527443)

Enough with the "tick" "tock" crap.

GPU (1)

FunkyELF (609131) | more than 7 years ago | (#18527475)

What will this mean if the GPU is integrated with the CPU?

Will we still need drivers? If we do, hopefully there will be open source versions since it is Intel and all.

Re:GPU (1)

FunkyELF (609131) | more than 7 years ago | (#18527513)

(replying to self)

...Does this mean that the firmware of the GPU will not be able to be updated?

Re:GPU (1)

terraformer (617565) | more than 7 years ago | (#18527633)

Will we still need drivers?

Yes, how else will the *OS* access and interface with the GPU?

For servers and settops and businesses... (1)

argent (18001) | more than 7 years ago | (#18527737)

What will this mean if the GPU is integrated with the CPU?

It'll mean that if you want graphics performance that doesn't suck, you'll still need an external video card with dedicated VRAM, but for embedded systems, servers, and business laptops and desktops where Intel's ghastly GPUs are acceptable it'll be OK.

This will also probably make Microsoftwood happy, since it'll guarantee there are no open traces on the video card for you to use to pirate your HD movies on Vista.

Re:For servers and settops and businesses... (1)

drinkypoo (153816) | more than 7 years ago | (#18528351)

I'd imagine that the bus speeds will be very high on the chip, so for anything short of gaming it should be perfectly adequate. Accelerated desktop, previews for CAD or 3D modeling... it should do all of that without much issue. Also, because it's on the CPU and will likely have a very fast connection, if you're not using its (admittedly limited) power for graphics, you might be able to use it as a coprocessor more easily and efficiently.

Penryn and Nehalem? (5, Funny)

j00r0m4nc3r (959816) | more than 7 years ago | (#18527499)

I can't wait for the Frodo and Samwise chips

Re:Penryn and Nehalem? (2, Funny)

WeblionX (675030) | more than 7 years ago | (#18527813)

Small and annoying, but somehow they still manage to get the job done, and only with the accidental help of some other, even smaller one? Oh, and lots of big old ones helping support them too. They don't sound particularly good to me...

Re:Penryn and Nehalem? (1)

Hoi Polloi (522990) | more than 7 years ago | (#18528285)

One chipset to control them all?

Price still factors, though, and AMD competes. (4, Interesting)

sjwaste (780063) | more than 7 years ago | (#18527527)

In the meantime, you can get an AMD X2 3600 (65nm Brisbane core) for around $85 now, and probably in the $60 range well before these new products hit. The high end is one thing, but who actually buys it? Very few. I don't know anyone that bought the latest FX when it came out, or an Opteron 185 when they hit, or even a Core2Duo Extreme. All this does is push the mid- to low-end products down, and a ~$65 dual core that overclocks like crazy (some are getting 3 GHz on stock volts on the 3600) would seem like the best price/performance to me.

AMD's not out because they don't control the high end. Remember, you can get the X2 3600 w/ a Biostar TForce 550 motherboard at Newegg for the same price as an E4300 CPU (no mobo), and that's the board folks are using to get it up to crazy clock speeds.

Re:Price still factors, though, and AMD competes. (1)

Jackie_Chan_Fan (730745) | more than 7 years ago | (#18527703)

Too bad Intel's GPUs all suck, huh? Gee... when's the last time I bought an Intel video card... um... never.

AMD and ATI have a better partnership. I'm still waiting for Intel to try and buy Nvidia. With Nvidia's latest disaster, known as the GeForce 8800 GTX, I'm curious if they're ready to sell out to Intel.

The 8800 GTX performs like shit in OpenGL apps. An $80 AGP ATI card outperforms the GeForce 8800 GTX ($600) in OpenGL applications in XP.

Nvidia has released a single driver for the GeForce 8800s since January. It still doesn't function correctly in Vista, and it has major issues with XP, including many games rendering very fucked up, overlay issues with Adobe applications that just stop redrawing any Adobe windows, vsync issues that make the card run like absolute shit, and of course OpenGL... it's a wonder why people keep buying this damn card, because I regret it so much.

Re:Price still factors, though, and AMD competes. (0)

Anonymous Coward | more than 7 years ago | (#18527911)

But don't most people use DirectX anyway?

Re:Price still factors, though, and AMD competes. (1)

Joe The Dragon (967727) | more than 7 years ago | (#18528099)

Apple uses OpenGL.

Re:Price still factors, though, and AMD competes. (0)

Anonymous Coward | more than 7 years ago | (#18528283)

Most people don't use Apple...

Two problems (3, Insightful)

tomstdenis (446163) | more than 7 years ago | (#18527721)

1. Putting a GPU on the processor immediately divides the market for it. Unless this is only going to be a laptop processor, it probably won't sell well on desktops.

2. Hyperthreading only works well in an idle pipeline. The Core 2 Duo (like the AMD64) has fairly high IPC, and hence few bubbles (as compared to, say, the P4). And even on the P4 the benefit is marginal at best, and in some cases it hurts performance.

The memory controller makes sense as it lowers the latency to memory.

If Intel wants to spend gates, why not put in more accelerators for things like the variants of the DCT used by MPEG, JPEG, and MPEG audio? Or how about crypto accelerators for things like AES and bignum math?
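For a sense of what such a DCT accelerator would be speeding up, here is the DCT-II kernel that JPEG and MPEG apply along the rows and columns of 8x8 blocks, in naive Python (illustration only; real codecs and hardware use fast factored forms, not this O(N^2) loop):

```python
import math

def dct_1d(x):
    """Naive orthonormal 1-D DCT-II: the transform JPEG/MPEG apply per row/column."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def idct_1d(X):
    """Inverse (DCT-III), to check that the pair round-trips."""
    n = len(X)
    out = []
    for i in range(n):
        s = X[0] * math.sqrt(1 / n)
        s += sum(X[k] * math.sqrt(2 / n) *
                 math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                 for k in range(1, n))
        out.append(s)
    return out

block = [52, 55, 61, 66, 70, 61, 64, 73]   # one row of pixel samples
coeffs = dct_1d(block)                      # energy piles into low frequencies
restored = idct_1d(coeffs)                  # matches the input to rounding error
```

The kernel is a small, fixed pile of multiply-accumulates with no branches, which is exactly why it is attractive to bake into silicon.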


Re:Two problems (2, Insightful)

jonesy16 (595988) | more than 7 years ago | (#18527839)

The point of this processor is that it will be modular. Your points are valid but I think you're missing Intel's greater plan. The GPU on core is not a requirement of the processor line, merely a feature that they can choose to include or not, all on the same assembly line. The bigger picture here is that if the processor is as modular as they are claiming, then they can mix and match different co-processors on the die to meet different market requirements, so the same processor can be bought in XY configuration with an on-board GPU, or in AB configuration with on-board physics engine, etc.

Re:Two problems (0)

Anonymous Coward | more than 7 years ago | (#18527859)

1. Putting a GPU on the processor immediately divides the market for it.

Why? My servers have Intel onboard graphics but have only ever displayed VGA, and I no longer purchase nvidious graphics cards (because I hate tainting my kernels with blobus proprietarious). Finding a mobo with Intel graphics and the features I want is hard, so if this CPU does good 2D and basic OGL, I'm sold.

2. Hyperthreading only works well in an idle pipeline.

With 8 cores in the package, I'd suggest that pipelines may tend towards idleness?

Re:Two problems, integrated sells very well. (2, Insightful)

guidryp (702488) | more than 7 years ago | (#18527909)

1: Integrated sells very well on the desktop; almost every single machine in your big-box shops has integrated graphics. I am sure it outsells machines with separate graphics cards on the desktop. Gamers are not the market.

2: I am skeptical about hyperthreading, but it all depends on the implementation. I don't think this is something they are pursuing just for marketing. They must have found a way to eke out even better loading of all the execution units by doing this. I can't imagine this being done if it actually performs worse than hyperthreading did in the P4. We'll have to wait and see.

Re:Two problems, integrated sells very well. (1)

tomstdenis (446163) | more than 7 years ago | (#18528297)

1. Yes, there are many integrated graphics chips out there, but most gamers won't use them. There is a huge market for gamer PCs (hint: who do you think those FX processors are made for?).

2. Don't give Intel that much credit. The P4 *was* a gimmick. And don't think that adding HTT is "free". It takes resources to manage the extra thread (e.g. stealing memory-port access to fetch/execute its opcodes).

In the case of the P4 it made a little sense, because the pipeline was mostly empty (read: it was a shitty design). In the Core 2 Duo's case, the pipeline is less empty and there just aren't useful bubbles to fill up with another thread.


Re:Two problems (1)

Joe The Dragon (967727) | more than 7 years ago | (#18528031)

Well, maybe there can be some kind of SLI/CrossFire setup between the CPU-based GPU and the video card the display is plugged into, but this is more likely on an AMD system: the upcoming AMD desktop chipsets are listed as supporting HTX slots, so you could run this over the HT bus.

Re:Two problems (1)

rikkus-x (526844) | more than 7 years ago | (#18528259)

Won't sell well on desktops? What about office users? What about people who don't care about gaming? I'm sure it'll be enough to run Aero Glass, which is probably enough for most people.

Re:Two problems (1)

tomstdenis (446163) | more than 7 years ago | (#18528347)

Even my box at the office has a PCIe add-on GFX card. The onboard Nvidia was just too buggy (cursor didn't work, lines would show up, etc.), even with the tainted Nvidia drivers. I bought a low-end 7300 PCIe card and problem solved.

What happens when you hit a limitation/bug in the Intel GPU?

Also, don't misunderestimate :-) the revenue from the home/hobby/gamer market. The R&D cost of most processors is paid for by gamer/server sales. AMD, for instance, doesn't pay off the R&D/fab costs by selling $50 Semprons. It's by selling $2000 Opterons (which are more or less the same design).

Adding the GPU isn't "free" as others suggested. A lot of testing has to go into it, and every configuration you support makes verification that much harder. You also have to gamble: do I make 50,000 plain cores this month, or the ones with a GPU?


Re:Two problems (1)

Ant P. (974313) | more than 7 years ago | (#18528345)

Doing common operations in hardware sounds like a much better plan than just throwing more general-purpose processing at it.

I really wish they'd add hardware acceleration for text rendering, considering it's something everything would benefit from (using a terminal with antialiased TTFs is painfully slow). There are supposedly graphics cards that do this, but I've never come across one.

Re:Two problems (1)

tomstdenis (446163) | more than 7 years ago | (#18528381)

Text rendering sounds like a job for the GPU, not the CPU. Things like DCTs, for instance, could be done in the GPU, but they're more universal: you can do MPEG encoding without a monitor, so why throw a GPU in the box?

Crypto is another big thing. It isn't even that you have to be faster, but safer. Doing AES in hardware, for instance, could trivially kill any cache/timing attacks that are out there.
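The cache/timing point deserves a concrete picture. Software AES does secret-dependent S-box table lookups, so which cache lines get touched (and therefore execution time) depends on the key; a hardware instruction avoids the tables entirely. The same class of leak shows up in something as simple as an early-exit comparison, sketched here in Python (a toy illustration of the leak pattern, not AES itself):

```python
def leaky_compare(secret, guess):
    """Early-exit compare: runtime depends on how many leading bytes match,
    which is exactly the data-dependent behavior timing attacks exploit."""
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:          # bail out at the first mismatch -> timing leak
            return False
    return True

def constant_time_compare(secret, guess):
    """Touches every byte regardless; the accumulator hides where mismatches are."""
    if len(secret) != len(guess):
        return False
    diff = 0
    for a, b in zip(secret, guess):
        diff |= a ^ b       # fold mismatches in without branching on data
    return diff == 0
```

Both functions return the same answers; the difference is only in how their running time varies with the secret, which is the property hardware crypto gives you for free.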


Bursts of CPU (2, Interesting)

suv4x4 (956391) | more than 7 years ago | (#18527853)

I can see these being quite hot for servers, where running "many small" tasks is the name of the game.

On a desktop PC you often need the focused application (say, some sort of graphical/audio editor, game, or just a very fancy flash web site even) to get most of the power of the CPU to render well.

If you split the speed potential 16 ways, would desktop users see an actual speed benefit? They'll see increased responsiveness from smoother multitasking of the more and more background tasks running on our everyday OSes, but can mostly single-task desktop usage really benefit?

Now, of course, we're witnessing ways to split the concerns of a single-task application into multiple threads: the new interface of Windows runs in a separate CPU thread and on the GPU, never mind whether the app itself is single-threaded or not. That's helping.

Still, serial programming is, and is going to be, prevalent for many, many years to come, as most tasks a casual/consumer application performs are inherently serial and not "parallelizable", or whatever that would be called.

My point being, I hope we'll still be getting *faster* threads, not just *more* threads. The situation now is that it's harder and harder to communicate "hey, we have only 1,000 threads/cores, unlike the competition which has 1 million, but we're faster!". It's just like AMD's tough position in the past, explaining that their chips are faster despite having a slower clock rate.
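The ceiling on "more threads" is just Amdahl's law: if part of the work is inherently serial, extra cores stop paying off. A quick sanity check in Python:

```python
def amdahl_speedup(parallel_fraction, n_threads):
    """Overall speedup when only parallel_fraction of the work scales with threads."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_threads)

# Even a 50%-parallel workload barely doubles on 16 threads...
print(round(amdahl_speedup(0.5, 16), 2))      # ~1.88
# ...and its ceiling as thread count goes to infinity is just 2x.
print(round(amdahl_speedup(0.5, 10**9), 2))   # ~2.0
```

Which is exactly why a mostly-serial desktop workload wants faster threads rather than sixteen slower ones.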

Desktop CPU/GPU (0)

Anonymous Coward | more than 7 years ago | (#18527937)

I sure hope they keep this crap out of the high-end desktop market CPUs. I would hate having to pay extra for a GPU I'm not going to use. The idea of having built in GPUs for gaming PCs is a bad one. Flexibility is key. I'm sure AMD and Intel know this, though.

Where's the Software? (2, Interesting)

Doc Ruby (173196) | more than 7 years ago | (#18528007)

OK, these new parallel chips aren't even out yet, and software has to get the hardware before SW can improve to exploit the HW. But the HW has all the momentum, as usual. SW for parallel computing is as rudimentary as a 16-bit microprocessor.

What we need is new models of computing that programmers can use, not just new tools. Languages that specify purely sequential operations on specific virtual hardware (like scalar variables that merely represent specific allocated memory hardware), or metaphors for info management that computing killed in the last century ("file cabinets", trashcans of unique items and universal "documents" are going extinct) are like speaking Latin about quantum physics.

There's already a way forward. Compiler geeks should be incorporating features of VHDL and Verilog, inherently parallel languages, into gcc. And better "languages", like flowchart diagrams and other modes of expressing info flow, that aren't constrained by the procedural roots of that HW-synthesis old guard, should spring up on these new chips like mushrooms on a dewy morning lawn.

The hardware is always ahead of the software - as instructions for hardware to do what it does, software cannot do more. But now the HW is growing capacity literally geometrically, even arguably exponentially, in power and complexity beyond our ability to even articulate what it should do within what it can. Let's see some better ways to talk the walk.

Re:Where's the Software? (1)

stardash (988251) | more than 7 years ago | (#18528059)

I totally agree; software needs a serious overhaul to keep up with hardware. While this is a cool innovation, I am going to hold off for a while. Why rush into a hardware purchase that will cost you an arm and a leg and not even give you any real advantages? When the software is there, I will take this purchase into consideration.


Von Neuman bottleneck (2, Insightful)

gillbates (106458) | more than 7 years ago | (#18528363)

It is interesting to note that Intel has now decided to put the memory controller on the die, after AMD showed the advantages of doing so.

However, I'm a little dismayed that Intel hasn't yet addressed the number one bottleneck for system throughput: the (shared) memory bus itself.

In the 90s, researchers at MIT were putting memory on the same die as the processor. These processors had unrestricted access to their own internal RAM. There was no waiting on a relatively slow IDE drive or Ethernet card to complete a DMA transaction, no stalls during memory access, etc.

What is really needed is a redesign of the basic PC memory architecture. We really need dual-ported RAM, so that a memory transfer to or from a peripheral doesn't take over the memory bus used by the processor. Having an onboard memory controller helps, but it doesn't address the fundamental issue that a 10 ms IDE DMA transfer can tie up the memory bus, and with it the CPU, for those 10 milliseconds. In this regard, the PC of today is no more efficient than the PC of 20 years ago.
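Whatever fraction of that window the CPU actually loses to bus contention, the cycle budget at stake is enormous. A quick back-of-the-envelope (the 3 GHz clock is a hypothetical round number, not a specific part):

```python
# Cycles at stake while a DMA transfer hogs the shared memory bus.
clock_hz = 3_000_000_000          # hypothetical 3 GHz core
transfer_ms = 10                  # the 10 ms DMA window from the comment above
cycles_in_window = clock_hz * transfer_ms // 1000
print(f"{cycles_in_window:,} cycles")   # 30,000,000 cycles
```

Thirty million cycles per transfer is why dual-ported RAM, or anything that keeps the CPU's path to memory clear, would matter.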
