
Intel Launches Power-Efficient Penryn Processors

CmdrTaco posted more than 6 years ago | from the moores-law-i-battle-thee dept.


Bergkamp10 writes "Over the weekend Intel launched its long-awaited new 'Penryn' line of power-efficient microprocessors, designed to deliver better graphics and application performance as well as virtualization capabilities. The processors are the first to use high-k metal-gate transistors, which make them faster and less leaky than earlier processors with silicon gates. The processors are lead-free, and by next year Intel plans to produce halogen-free chips, making them more environmentally friendly. Penryn processors jump to higher clock rates and feature cache and design improvements that boost performance compared with earlier 65-nm processors, which should attract the interest of business workstation users and gamers looking for improved system and media performance."


revolutionary? no, but still noteworthy (3, Informative)

Anonymous Coward | more than 6 years ago | (#21323931)

While Penryn brings a modest performance increase, it is not a big change in the architecture. Rather than upgrading to Penryn, customers may want to wait for Nehalem, the next major revision of the Intel architecture, which is slated for release in 2008.

At the Intel Developer Forum in San Francisco in September, Intel demonstrated Nehalem and said it would deliver better performance per watt and better system performance through its QuickPath Interconnect system architecture. Nehalem chips will also provide an integrated memory controller and improved communication between system components.

Re:revolutionary? no, but still noteworthy (1)

cayenne8 (626475) | more than 6 years ago | (#21323985)

I'm wondering when the new chips will show up in the macbook pro.

I was about to buy one, but, if this is coming up soon, I may wait...

Re:revolutionary? no, but still noteworthy (1)

stoolpigeon (454276) | more than 6 years ago | (#21324079)

When you get one, be sure to buy the shirt also. []

Names of Rivers? (3, Interesting)

spineboy (22918) | more than 6 years ago | (#21326415)

I'm just wondering which will end first: Moore's law, or the number of river names left in Washington. For those of you who don't know, all of Intel's chips are named after rivers in Washington state.

Re:revolutionary? no, but still noteworthy (1)

Parag2k3 (1136791) | more than 6 years ago | (#21323997)

Penryn offers a few extra features over the existing Conroe/Kentsfield design. Mainly lower power requirements, higher clock speeds as well as SSE4 which is useful for video encoding. This is just the tock in Intel's tick-tock strategy. Nehalem should be much more exciting.
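The video-encoding workload SSE4 targets (notably SSE4.1's MPSADBW instruction) is block matching for motion estimation. As a rough illustration only, in plain scalar Python rather than SIMD, that inner loop looks like a sum-of-absolute-differences search:

```python
def sad(block_a, block_b):
    """Sum of absolute differences: the block-matching metric that
    SSE4.1's MPSADBW instruction computes for several offsets at once."""
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def best_match(reference, candidates):
    """Pick the candidate block closest to the reference block."""
    return min(candidates, key=lambda c: sad(reference, c))

# Tiny made-up 4-pixel "blocks" purely for illustration.
ref = [10, 12, 11, 9]
cands = [[9, 12, 10, 9], [50, 0, 0, 0]]
print(best_match(ref, cands))  # the first candidate, SAD = 2
```

An encoder runs millions of these comparisons per frame, which is why a dedicated SAD instruction pays off.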

Re:revolutionary? no, but still noteworthy (1)

ircmaxell (1117387) | more than 6 years ago | (#21324003)

Well, Nehalem will kill AMD's last big advantage in CPU design (the integrated memory controller)... For now, AMD still has that advantage (not to say that Intel doesn't have its own advantages)...

I'm curious to see how Penryn will stack up against existing architectures (including AMD's offerings), and whether it will be worth the cost to upgrade (seeing as there is another major upgrade on the way from Intel in the next year). Benchmarkers, have at it...

Re:revolutionary? no, but still noteworthy (5, Insightful)

Azuma Hazuki (955769) | more than 6 years ago | (#21324085)

I am a dedicated AMD fangirl...every computer I've ever built had an AMD chip in it. But Intel really hit it on the head with the Core 2 arch and I see no sign of them slackening. I am actually looking forward to Nehalem and its shrink (which is probably the next time I'll have the money to spend on anything not college or food/supply-related).

If this is how it ends for AMD, this is how it goes. I'll be sad, and may buy AMD anyway for some other reason (even if it's just stubborn fangirlism) but I respect Intel's design team. Their ethics, no, but their design is top notch this time around.

Re:revolutionary? no, but still noteworthy (4, Insightful)

AvitarX (172628) | more than 6 years ago | (#21324311)

One reason to buy AMD is that if they go out of business Intel may stop innovating.

Even if you are getting a worse deal in the short run, an upgrade cycle or two in the future may be much worse (comparatively) if everyone goes Intel.

Re:revolutionary? no, but still noteworthy (4, Informative)

Pojut (1027544) | more than 6 years ago | (#21324413)

Another good reason is that it is far cheaper (at least last time I checked prices) to go with AMD...especially if you aren't doing any gaming or audio/video work. While Core 2 blasts AMD out of the water, the price difference makes AMD a very smart buy for every-day use. For gaming, AMD's offerings still work great, and the money you save on the processor can instead be used towards a more powerful video card.

Re:revolutionary? no, but still noteworthy (3, Interesting)

dreamchaser (49529) | more than 6 years ago | (#21325505)

You should probably check the prices again with an eye towards price/performance ratios. AMD hasn't been cheaper for a long time. You can save a few bucks by settling for lower performance, but not enough to upgrade that video card or any other significant components.

Re:revolutionary? no, but still noteworthy (0)

Anonymous Coward | more than 6 years ago | (#21325809)

Intel has the best bang4$ down to the e2140 Core2 based CPU which is around $70. The Celeron-L is also Core2 based, but only uses 1 core, and will kill any single-core AMD CPU when overclocked.

AMD wins in the super-budget category ($40), but Intel has the market covered at every other price-point.

Re:revolutionary? no, but still noteworthy (5, Informative)

ircmaxell (1117387) | more than 6 years ago | (#21326229)

Ummmm.... Check this out... []

This chart shows that in terms of price/performance for the average user, Intel has only two CPUs that can compete with AMD's leading X2 (non-FX) processor (the 6000+, which is the highest AMD chip they have benchmarked). The first is the E2160, and the second is the P4E 613.

The field is LARGELY dominated (at the best scores, that is) by AMD... Intel has 5 in the top 20, 1 in the top 10, and 0 in the top 5. AMD, conversely, has 2 X2s in the top 5...

Price/Performance? Who shops that way? (0)

DanLake (543142) | more than 6 years ago | (#21326719)

If some manufacturer could sell a 1GHz CPU for $5, it would blow away everything else on that price/performance chart but would not run most modern applications. There are only a half-dozen of the high-end desktop processors anyone should even consider purchasing for a new PC. Intel and AMD both have processors in that category, and apparently AMD is ahead in the price/performance metric. In all of the purely performance-based reviews however, Intel has held most of the top spots.
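The parent's point can be made concrete with a toy price/performance ranking; the chip names, scores, and prices below are invented for illustration, not real 2007 parts:

```python
# Hypothetical CPUs: (name, benchmark score, price in USD).
cpus = [
    ("BudgetChip", 1000, 5),    # slow but nearly free
    ("MidRange", 8000, 180),
    ("HighEnd", 12000, 999),
]

def perf_per_dollar(cpu):
    """Benchmark score per dollar spent."""
    name, score, price = cpu
    return score / price

ranked = sorted(cpus, key=perf_per_dollar, reverse=True)
print([name for name, _, _ in ranked])
# The cheap, slow chip tops the chart even though it is the worst performer.
```

This is exactly the distortion the parent describes: a pure price/performance metric rewards chips too slow to be worth considering.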

AMD Cannot Compete Unless... (2, Insightful)

MOBE2001 (263700) | more than 6 years ago | (#21325925)

If this is how it ends for AMD, this is how it goes.

AMD is fighting a losing battle. Intel defined the current market and AMD cannot beat them at their own game. They are condemned to always play second fiddle unless they can find a way to redefine the market. They can only do so by reassessing the current state of the art in multicore CPU architecture and computer programming and correcting what is wrong with it. And there is a lot that is wrong with it. I call it The Age of Crappy Concurrency [] . Check it out.

Now that the industry is transitioning to massive parallelism, AMD has the chance of a lifetime to change the computing landscape in its favor and leave Intel and everybody else in the dust.

Re:revolutionary? no, but still noteworthy (1)

somasynth (1088691) | more than 6 years ago | (#21325137)

From what I remember, only 'extreme' and server models of the architecture will have integrated controllers.

Re:revolutionary? no, but still noteworthy (3, Informative)

necro81 (917438) | more than 6 years ago | (#21325853)

The biggest thing about Penryn is the move to 45-nm fabrication, and the technological advances that were required to pull it off. IEEE Spectrum has a nice, in-depth (but accessible) article on those advances [] . High-k dielectrics and new metal gate configurations will be how advanced ICs are produced from now on. It is as large a shift for the fabs as a new chip architecture is for designers.

Still sticking (1, Interesting)

guruevi (827432) | more than 6 years ago | (#21323947)

It's sad that the industry is still sticking to the x86 instruction set. It should've been replaced a long time ago with a pure RISC instruction set, especially now with the quest for less power-hungry chips. The Power/PowerPC architecture was good, but because there wasn't enough demand, the price was high and development low. A few failures (comparable to NetBurst) and their customers (among them Apple) went running to the competitors.

We're still running PowerPC here because they're low-power and do certain mathematics very well (I'm not the science guy). Hopefully Apple will switch back to PowerPC or so now that they are fully "Universal" and IBM has some promising chips lined up.

Re:Still sticking (1)

Peter Cooper (660482) | more than 6 years ago | (#21324127)

What you say is directly comparable to the internal combustion engine, say. It makes a lot of sense (and has done so for a lonnnnnng time now) not to use gasoline and to instead work on alternative engine technologies, compressed air, hydrogen, ethanol, and so forth.. but these things are still sideline projects. The engine / automotive industry is far more fragmented (in terms of suppliers and target markets) than the PC industry and a lot older.. and if they haven't learned the lessons, I can't see alternative instruction set technologies taking off until a transition becomes entirely seamless and transparent to the average user (Apple made a great step in this direction with the PPC/x86 "Universal" stuff).

Re:Still sticking (0, Offtopic)

j-pimp (177072) | more than 6 years ago | (#21325845)

What you say is directly comparable to the internal combustion engine, say. It makes a lot of sense (and has done so for a lonnnnnng time now) not to use gasoline and to instead work on alternative engine technologies, compressed air, hydrogen, ethanol, and so forth.. but these things are still sideline projects.

Putting aside wars, peak oil, and the environment, gas is currently the best way to get an internal combustion engine from point A to point B, except perhaps for diesel. Diesel trades acceleration for efficiency. Ethanol has shorter hydrocarbon chains, so gas will always outperform it unless we find a different exothermic reaction for it.

Hydrogen holds promise, but that research is probably going to come out of an independent party. I'd much rather see a non-car-manufacturer create a hydrogen-powered V8 (if it's a renewable, clean fuel source, there is no reason I can't be wasteful and drive a car that's faster than I need it to be), and sell it to GM, Ford, Chrysler, Mercedes, BMW, Honda, Toyota, and everyone else.

Re:Still sticking (2, Insightful)

OwnedByTwoCats (124103) | more than 6 years ago | (#21326433)

Rather than old-fashioned reciprocating engines, how about outside-of-the-box thinking? A small gas-turbine, powering a generator, battery packs, and then electric motors driving all 4 wheels and offering regenerative braking as well?

Hydrogen power is best when it doesn't suffer the 40% losses of combustion, i.e. when it goes through a fuel cell and is converted to electricity with 85% efficiency.

Re:Still sticking (4, Informative)

Waffle Iron (339739) | more than 6 years ago | (#21324155)

It should've been replaced a long time ago with a pure RISC instruction set

It was, when the Pentium Pro was introduced circa 1995. The instruction set the programmer "sees" is not the instruction set that the chip actually runs.

CISC to RISC runtime translation (3, Interesting)

Z-MaxX (712880) | more than 6 years ago | (#21325219)

An often overlooked benefit of the way that modern IA32 processors achieve high performance through translating the CISC x86 instructions into microcode instructions is that the chip designers are free to change the internal microcode architecture for every CPU in order to implement new optimizations or to tune the microcode language for the particular chip's strengths. If we were all coding (or if our compilers were coding for us) in this RISCy microcode, then we, or the compiler, would have to do the optimizations that the CPU can do in its translation to microcode. I agree that the Power architecture is pretty cool, but I'm tired of hearing people bash the Intel x86 architecture for its "obsolete" nature. As long as it is the fastest and best thing I can buy for a reasonable amount of money, it's my top choice.

Re:CISC to RISC runtime translation (1)

OwnedByTwoCats (124103) | more than 6 years ago | (#21326385)

IA32 is going the way of the passenger pigeon. There may be a few rapidly diminishing flocks left in the wild, but they'll be gone in a blink of the (metaphorical) eye.

AMD-64 for evah! (or at least, the next decade). Oh, that's also spelled "Core 2"...

Oblig. Hackers quote (1)

athdemo (1153305) | more than 6 years ago | (#21325955)

RISC architecture is going to change everything!

Re:Still sticking (1, Interesting)

Anonymous Coward | more than 6 years ago | (#21326271)

The instruction set the programmer "sees" is not the instruction set that the chip actually runs.

Huh. That's a strange definition of "replaced" you've got.

This is like having ATMs that only gave out dimes, complaining about the dimes, and being told "no, we do all transactions in units of $10; the dimes you 'see' are not the same monies that we actually transfer".

As a user, I don't care what the processor does internally -- could use black magic for all I care. I've written PPC compilers before, but I can't wrap my brain around x86. Could this be why so few new (non-byte)compiled languages exist -- because nobody can figure out how to write a code-emitter for the monstrosities that pass as recent CPUs?

Re:Still sticking (1)

Jah-Wren Ryel (80510) | more than 6 years ago | (#21326807)

Huh. That's a strange definition of "replaced" you've got.
No it's not. The context of the statement was someone declaring that x86 should have been replaced with RISC by now. RISC was not developed to improve the lives of programmers; it was developed to improve the lives of CPU designers. So, in that context, no one cares whether you can grok x86 or not; what matters is whether the design principles of RISC have been implemented in these CPUs, and they have.

x86 already has elements of RISC & PowerPC is (4, Insightful)

Blahbooboo3 (874492) | more than 6 years ago | (#21324193)

I believe that x86 already has many of the benefits of RISC chips incorporated into it. Way back in 1995 [] Intel added a RISC core to the Pentium Pro. From the Wiki article, "During execution, current x86 processors employ a few extra decoding steps to split most instructions into smaller pieces, micro-ops, which are readily executed by a micro-architecture that could be (simplistically) described as a RISC-machine without the usual load/store limitations."
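A toy sketch of the decode step the quote describes; the mnemonics and micro-ops here are invented for illustration, since real Intel micro-ops are undocumented and far more involved:

```python
def decode(insn):
    """Split a CISC instruction with a memory operand into load/execute
    micro-ops, mimicking (very loosely) the x86 decode step: the
    back-end then only ever sees simple, register-based operations."""
    op, dst, src = insn
    if src.startswith("["):  # memory source operand -> needs a load first
        return [("load", "tmp", src), (op, dst, "tmp")]
    return [(op, dst, src)]  # register-register: one micro-op

print(decode(("add", "eax", "[mem]")))
# -> [('load', 'tmp', '[mem]'), ('add', 'eax', 'tmp')]
```

One visible CISC instruction becomes a short RISC-like sequence, which is why the back-end can look like "a RISC machine without the usual load/store limitations."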

As for PowerPC Macs, I doubt it. The switch to Intel is what made most new Mac users switch because there was no longer a risk of not being able to run the one Windoze program they might need. If Mac ever went to a non-mainstream CPU again it would be a big big mistake.

Re:x86 already has elements of RISC & PowerPC (1)

Serge_Tomiko (1178965) | more than 6 years ago | (#21324507)

The NexGen Nx586 was actually the first x86 chip to have a risc core... It came out in 1994.

Re:x86 already has elements of RISC & PowerPC (1)

porpnorber (851345) | more than 6 years ago | (#21325011)

This is a bit like saying that a truck with a rocket plane inside has 'many of the features of a rocket plane.' The point of RISC is to manage the complexity of the processor, minimise the amount of unnecessary work, and shift load onto software wherever that has zero or negative performance impact. By, effectively, adding an on-the-fly compiler in hardware, the Intel engineers have not done this, even if they have streamlined the back-end execution engine using tricks published in the RISC literature.

But Intel's traditional expertise is in memory and process—and since caches now dwarf execution units, well, there's no need to worry about doing it 'right' anymore! And sadly, I almost mean that.

The situation is common in computing. The engineering design of familiar systems such as C++ or the Web itself is nothing short of incoherent: layer upon layer of patches and transformative interfaces where a little planning and a more minimalist approach would reduce both resource consumption and programmer effort all around. But performance and efficiency are nowhere near as important to industry as back-compatibility and, well, marketing; and the overheads are concealed by providing capacity that honestly grows much faster than the task at hand.

Re:x86 already has elements of RISC & PowerPC (1)

nanoflower (1077145) | more than 6 years ago | (#21326305)

You are forgetting one of the most important factors in business: time to market. A fantastic product that's been optimized as much as possible is wonderful, but it won't matter if someone else already has a similar product and controls the marketplace (see how hard AMD has to work to take market share from Intel, or Apple from Microsoft). Once someone has the market (and the mind share), it's very hard to win it back, so businesses concentrate on getting their product to market as quickly as possible. Yes, that means products like Penryn may not be as efficient as possible, but if they are good enough and in the marketplace soon enough, that is enough to make the company money. Also keep in mind that, as I understand it, the back end of Intel's processors may change greatly every few years if the engineers find a better way to speed them up. So the underlying micro-code may only have a few years to be worked on before it is replaced with something new, which greatly limits the amount of time the engineers have to optimize it, and the amount of time and money the company wants to put into such efforts.

Re:x86 already has elements of RISC & PowerPC (2, Informative)

homer_ca (144738) | more than 6 years ago | (#21326851)

You're correct that the x86 instruction set still carries cruft, and a pure RISC CPU is theoretically more efficient. However, the real-world disadvantage of x86 support is minimal. With each die shrink, the x86-to-micro-op translator occupies proportionally less die space, and the advantages of the installed hardware and software base give x86 CPUs a huge lead in economies of scale.

I know we're both just putting different spins on the same facts, but in the end, practical considerations outweigh engineering purity. x86 is even competing against ARM in the embedded space now, not just in higher-powered UMPCs but also in routers like this one [] with a 486-class CPU.

Re:Still sticking (5, Informative)

jonesy16 (595988) | more than 6 years ago | (#21324301)

Actually, one of the reasons that Apple jumped off the PowerPC platform was BECAUSE of its power inefficiency. The G5 processors were incredibly power hungry, enough so that Apple could never get one cool enough to run in a laptop, and actually offered the Power Mac G5 line with liquid cooling. Compare that to the new quad-core and eight-core Mac Pros and dual-core laptops that run very effectively with very minimal air cooling.

RISC vs. CISC (4, Informative)

vlad_petric (94134) | more than 6 years ago | (#21324309)

That's a debate that happened more than 20 years ago, at a time when all processors were in-order and could barely fit their L1 on chip, and there were a lot of platforms.

These days:

  • The transistor budgets are so high that the space taken by instruction decoders isn't an issue anymore (L1, L2 and sometimes even an L3 is on chip).
  • Execution is out-of-order, and the pipeline stalls are greatly reduced. The out-of-order execution engine runs a RISC-like instruction set to begin with (micro-ops or r-ops).
  • There is one dominant platform (Wintel) and software costs dominate (compatibility is essential).

One of the real problems with x86-32 was the low number of registers, which resulted in many stack spills. x86-64 added 8 more general purpose registers, and the situation is much better (that's why most people see a 10-20% speedup when migrating to x86-64 - more registers). Sure, it'd be better if we had 32 registers ... but again, with 16 registers life is decent.

Re:RISC vs. CISC (3, Interesting)

TheRaven64 (641858) | more than 6 years ago | (#21324845)

The transistor budgets are so high that the space taken by instruction decoders isn't an issue anymore (L1, L2 and sometimes even an L3 is on chip).
Transistor space, no. Debugging time? Hell yes. Whenever I talk to people who design x86 chips, their main complaint is that the complex side effects an x86 chip must implement (or people complain that their legacy code breaks) make debugging a nightmare.

Execution is out-of-order, and the pipeline stalls are greatly reduced. The out-of-order execution engine runs a RISC-like instruction set to begin with (micro-ops or r-ops).
Most non-x86 architectures are moving back to in-order execution. Compilers are good enough that they put instructions far enough away to avoid dependencies (something much easier to do when you have lots of registers) and the die space savings from using an in-order core allows them to put more cores on each chip.

There is one dominant platform (Wintel) and software costs dominate (compatibility is essential).
Emulation has come a long way in the last few years. With dynamic recompilation you can get code running very fast (see Rosetta, the emulator Apple licensed from a startup in Manchester). More importantly, a lot of CPU-limited software is now open source and can be recompiled for a new architecture.

x86-64 added 8 more general purpose registers, and the situation is much better (that's why most people see a 10-20% speedup when migrating to x86-64 - more registers)
Unfortunately, you can only use 16 GPRs (and, finally, they are more or less real GPRs) when you are in 64-bit mode. That means every pointer has to be 64-bit, which causes a performance hit. Most 64-bit workstations spend a lot of their time in 32-bit mode, because the lower memory usage (capacity and bandwidth) and reduced cache churn give a performance boost. They only run programs that need more than 4GB of address space in 64-bit mode. Embedded chips like ARM often do the same thing with 32/16-bit modes. If x86-64 let you have the extra registers with the smaller pointers, you would probably see another performance gain.
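The cache-footprint cost of 64-bit pointers is just arithmetic; a rough sketch, assuming a pointer-heavy structure (a binary-tree node) with an 8-byte payload:

```python
def node_bytes(num_pointers, pointer_size, payload=8):
    """Size of a linked-structure node: its payload plus pointer fields.
    Ignores alignment padding for simplicity."""
    return payload + num_pointers * pointer_size

# A binary-tree node (two child pointers) in 32- vs 64-bit mode:
small = node_bytes(2, 4)   # 16 bytes with 32-bit pointers
large = node_bytes(2, 8)   # 24 bytes with 64-bit pointers
print(large / small)       # 1.5x the memory and cache traffic per node
```

For pointer-dense workloads, that 1.5x footprint is the hit the extra registers have to win back, which is why mixed 32/64-bit deployments were common.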

Re:RISC vs. CISC (4, Interesting)

vlad_petric (94134) | more than 6 years ago | (#21325689)

High-performance computing isn't moving away from out-of-order execution any time soon. Itanic was a failure. The current generation of consoles are in-order, indeed, but keep in mind that they serve a workload niche (rather large niche in terms of deployment, sure, but still a workload niche).

The argument that the compiler can do a reasonable job at scheduling instructions... well, is simply false. Reason #1: most applications have rather small basic blocks (SPEC 2000 integer, for instance, has basic blocks in the 6-10 instruction range). You can do slightly better with hyperblocks, but for that you need rather heavy profiling to figure out which paths are frequently taken. Reason #2: the compiler operates on static instructions; the dynamic scheduler, on the dynamic stream. The compiler can't differentiate between instances of an instruction that hit in the cache (with a latency of 3-4 cycles) and those that miss all the way to memory (200+ cycles). The dynamic scheduler can. Why do you think Itanium has such large caches? Because it doesn't have out-of-order execution, it is slowed down by cache misses to a much larger extent than out-of-order processors.
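The latency gap the dynamic scheduler has to hide can be quantified with the standard average-latency formula, using the hit/miss cycle counts quoted above; the 95% hit rate is an assumed figure for illustration:

```python
def avg_latency(hit_rate, hit_cycles, miss_cycles):
    """Average memory access latency for a given cache hit rate."""
    return hit_rate * hit_cycles + (1 - hit_rate) * miss_cycles

# 4-cycle hits, 200-cycle misses (as in the comment), assumed 95% hit rate:
print(round(avg_latency(0.95, 4, 200), 1))  # 13.8
```

Even a 5% miss rate more than triples the average latency over the hit cost, and a static compile-time schedule cannot know which individual loads will be the slow ones.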

I agree that there are always ways to statically improve the code to behave better on in-order machines (hoist loads and make them speculative, add prefetches, etc), but for the vast majority of applications none are as robust as out-of-order execution.

Re:RISC vs. CISC (1)

petermgreen (876956) | more than 6 years ago | (#21326201)

Most non-x86 architectures are moving back to in-order execution. Compilers are good enough that they put instructions far enough away to avoid dependencies (something much easier to do when you have lots of registers) and the die space savings from using an in-order core allows them to put more cores on each chip.
OTOH, most non-x86 architectures are used in environments where it is feasible to compile for the specific chip.

To win in the PC market, chips must perform reasonably well on code compiled by compilers targeting older chips, as that represents the code most people will be running.

Re:Still sticking (2, Insightful)

pla (258480) | more than 6 years ago | (#21324369)

It's sad that the industry is still sticking to the x86 instruction set.

Why? Once upon a time, the x86 ISA had too few registers. Today, that problem has vanished (simply by throwing more GP registers at the problem) - And even then, so few people actually see the problem (and I say that as one of the increasingly rare guys who still codes in ASM on occasion) as to make it a non-issue, more a matter of trivia than actual import.

The Power/PowerPC architecture was good

I know I risk a holy-war here, but: No, not really. PPC didn't suck, and held its own for its era. But it didn't scale well, it always cost significantly more for a given level of performance, and even its biggest advantage, "Vector" processing (aka SIMD), vanished with the introduction of the original MMX into the x86 line. After that point, only clock speed and number of execution units mattered (and of course price, never forget price), and the PPC simply fell further and further behind. Apple "switched" for a damned good reason, and "Intel Inside" doesn't describe it.

It should've been replaced a long time ago with a pure RISC instruction set especially now with the quest for less power-hungry chips

First of all, all modern chips have a native RISC-like core with an x86 frontend implemented entirely in microcode - So if the world still wanted PPC, Intel could release a C2D tomorrow that exported that as the visible interface. Arguing CISC vs RISC in today's world has as much meaning as arguing over case colors.

Second, the CPU's ISA has no (direct) effect on power consumption. RISC processors traditionally drew less power because they simply had fewer transistors (and a painfully small instruction set to show for it). A "modern" RISC processor, with multiple cores, multiple deep pipelined execution units, a variety of FP and SIMD units, and multiple levels of fairly large cache, would draw power comparably to anything currently available from AMD or Intel.

Finally, this battle died with DEC and SGI and MIPS. Let it rest in peace.

Re:Still sticking (1)

Pope (17780) | more than 6 years ago | (#21324969)

The G4 (PPC 74xx) line with AltiVec came out in 1999, two years after MMX debuted in the Pentium. The x86 family still doesn't come close to the PPC 970 line when it comes to SIMD execution.

It's not really true (2, Informative)

Moraelin (679338) | more than 6 years ago | (#21325191)

Well, bear some things in mind:

1. At one point in time there was a substantial difference between RISC and CISC architectures. CPUs had tiny budgets of transistors (almost homeopathic, by today's standards), and there was a real design decision where you put those transistors. You could have more registers (RISC) or a more complex decoder (CISC), but not both. (And that already gives you an idea about the kinds of transistor budgets I'm talking about, if having 16 or 32 registers instead of 1 to 8 actually made a difference.)

Both sides had their advantages, btw. If it were that bleeding obvious that RISC = teh winner and CISC = teh loser, a lot of history would be different.

The difference narrowed a lot over time, though, so neither is purely CISC or RISC any more (except in marketing bullshit or fanboy wars). Neither the original RISC idea nor the CISC one scaled past a point, so now we have largely the same weird hybrid in both camps.

E.g., the Altivec instruction set on PowerPC is the exact opposite of what the original RISC idea was. The very idea of RISC was never to implement in hardware what a compiler would do for you in software. So the very idea of having whole procedures and loops coded in the CPU instead of in software would have seemed the bloody opposite of all that RISC is about, back in the day.

At any rate, what both are today is what previously used to be called a microcoded architecture. It's sorta like having a CPU inside a CPU. The smaller one inside works on much simpler operations, but an instruction of the "outer" CPU translates into several of those micro-operations. Which in turn are pipelined, reordered in flight, etc, to have them execute faster.

What both sides are doing nowadays for marketing reasons is basically calling the inner architecture "RISC", because marketing really likes that term, and the lemmings have already been conditioned to get excited when they hear "RISC". Really, PowerPC's architecture is only "RISC" on account of basically "yeah, but deep down inside it's still sorta RISC-like"... and ironically the x86s can make the exact same claim too.

At any rate, whether you want to call that RISC or not, once you look inside, both the PowerPC and the Pentiums/Athlons have nearly identical architectures and modules. Sure, the implementation details differ, and some have advantages over other implementations (the NetBurst parts had overly long pipelines, while the G4 had a short one, so the G4 did have better IPC), but essentially they are both based on the exact same architecture. Neither is more RISC than the other. We can lay that RISC-vs-CISC war to rest.

2. That said, the x86 still was somewhat hampered by the lack of more general purpose registers. Although the compilers and the CPU itself did optimize heavily around the problem, they didn't always do the optimal job.

That has changed in the 64 bit version, though. AMD went and doubled the number of registers for programs running in 64 bit mode, and Intel had to use the same set of instructions so they have that too nowadays.

The performance penalty of that architecture basically became a lot lower than it was in the days of G4 vs Pentium 4 flame wars.

Re:Still sticking (1)

doublefrost (1042496) | more than 6 years ago | (#21326621)

I don't know about that. The G5's we have seem to produce insane amounts of heat. One day the air conditioning went out, and the room with the G5s got real hot.

Story ? (0)

Anonymous Coward | more than 6 years ago | (#21323955)

Don't know whether it's my adblock or what, but I don't see any story at that link. Here's an alternative link & story: []

Intel's 45nm Penryn desktop expected to pack a big wallop
Sharon Gaudin

November 12, 2007 (Computerworld) Intel Corp.'s new 45-nanometer chip for the desktop, part of the newly released Penryn family, should give gamers, researchers and serious multitaskers a significant performance boost, according to analysts.

And that is not good news for rival Advanced Micro Devices Inc., which recently started shipping its quad-core Barcelona processor -- built using a 65nm manufacturing process. AMD isn't expected to move to 45nm technology until the second half of 2008.

The release of Intel's Core 2 Extreme quad-core processor came as part of a larger release of Penryn processors, including 15 server dual-core and quad-core 45nm Hi-k Intel Xeon processors. To make the move from 65nm to 45nm processors, Intel designed a new transistor, stemming leakage and improving energy efficiency. With 820 million of these newly designed transistors in just one chip, Intel is calling it one of its biggest advancements.

On the desktop side, all of this should add up to a major performance boost.

Dean Freeman, an analyst at Gartner Inc., said he expects Penryn will be 20% to 50% faster than Intel's previous chip releases in general purpose applications and 10% to 40% faster in technical applications, multimedia and games. For example, someone using Microsoft Excel or PowerPoint should see a 20% to 50% boost, while an Adobe Photoshop user should see a 10% to 40% increase.

"It's going to mean a faster desktop. It's a more powerful tool, operating applications faster," said Freeman. "Basically, it means that for those of us who are concerned about the speed at which applications work on our desktop, the good news is that it will work faster."

Boyd Davis, a general manager at Intel, said a larger L2 cache and support for new SSE4 media instructions are part of the chip's performance boost.

And while no one will be expectantly lining up around the block for the new chips, Charles King, an analyst at Pund-IT Inc. in Hayward, Calif., said that Penryn is a "step up" from previous Intel designs and should appeal to the high-end gamers and workstation customers.

"The Penryn architecture blends notably high performance with significant steps forward in power efficiency," he added. "It's a bit like a new sports car that hits a higher top speed than previous models, while simultaneously delivering better gas mileage."

Dan Olds, an analyst at Gabriel Consulting Group Inc., said the Penryn desktop won't just appeal to the gaming community. Power users with more than 10 applications open at once, video editors and researchers are going to be eager for a performance boost.

Olds added that with this "big step forward" for desktop performance, he's not sure what AMD has to respond with.

"AMD has their work cut out for them," he said. "[Penryn] will be hands-down the fastest desktop chips in existence ... And it's not just this generation. Intel will just crank this thing faster and faster, and it will be a challenge for AMD to respond."

Intel last month opened a new $3 billion manufacturing facility in Chandler, Ariz., kicking off mass production of its new 45nm microprocessors. Freeman has previously noted that the opening of the new Arizona facility, named Fab 32, is expected to boost production of 45nm wafers from 5,000 a month in the pilot program at an Oregon facility to 25,000 to 30,000 wafers a month. Davis added that two other new 45nm fabrication sites -- one in Israel and one in New Mexico -- are expected to go online, boosting 45nm production over Intel's 65nm production by the second half of 2008.

Today, Intel is coming out with one Core 2 Extreme processor, which is geared to the high-end gaming and research market. The company is slated to unveil Core 2 Quad processors and Core 2 Duo processors in the first quarter of 2008, pushing the new chips from the high-end desktop market out to the general market.

Faster and less leaky? (0, Funny)

Anonymous Coward | more than 6 years ago | (#21323973)

...faster and less leaky...
What a coincidence! Precisely the traits that I look for when switching condom brands!

Re:Faster and less leaky? (1)

julesh (229690) | more than 6 years ago | (#21324475)

...faster and less leaky...
What a coincidence! Precisely the traits that I look for when switching condom brands!

In that case, may I suggest you try some high-K metal-gate condoms?

i pooped my pants (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#21324005)

it smells awful.

Re:i pooped my pants (2, Funny)

tttonyyy (726776) | more than 6 years ago | (#21324463)

it smells awful.
Clearly you're like earlier 65-nm processors - slow and leaky.

Halogen free (2, Informative)

jbeaupre (752124) | more than 6 years ago | (#21324021)

I'm sure they mean eliminating halogenated organic compounds or something similar. Otherwise, eliminating halogens from the chips themselves is just a drop in the ocean. A deep, halogen-salt-enriched ocean.

Re:Halogen free (1)

julesh (229690) | more than 6 years ago | (#21324393)

I'm sure they mean eliminating halogenated organic compounds or something similar. Otherwise, eliminating halogens from the chips themselves is just a drop in the ocean. A deep, halogen-salt-enriched ocean.

Halogens are elements. Halogenated organic compounds are compounds that contain halogens. In order to eliminate halogens from the chip, they'll have to eliminate all compounds of halogens. I'd have thought that was fairly obvious...?

Re:Halogen free (1)

jbeaupre (752124) | more than 6 years ago | (#21324755)

Seems you missed my joke. Which was based on what you describe as obvious. So let me go very slowly. The ocean contains trillions of tons of halogen salts (and by extension, trillions of tons of halogens, all just lying around). So claiming you are getting rid of halogens from computer chips is either silly or they mean something else. I'm guessing here, but hey, let's just assume they aren't being silly. Maybe they mean the halogenated organic compounds that are used to clean and process chips and that are linked with cancer, ozone depletion, and global warming. Not halogen salts on the chip itself, salts that likely exist in the ocean.

So let's reiterate:
1) Not all halogens bad
2) Oceans contain halogens
3) Intel must be eliminating bad halogen compounds
4) or they are eliminating halogens that would be "a drop in the ocean." Get it yet?

Re:Halogen free (1)

ajlitt (19055) | more than 6 years ago | (#21324535)

Good point. I was pretty sure that Intel would have a hard time manufacturing chips without HF.

Re:Halogen free (1)

cyfer2000 (548592) | more than 6 years ago | (#21325145)

1, I think the GP means organic halogenated flame retardant in the epoxy and PCB used to package the chip.

2, I am not sure about Intel, but I know many fabs have stopped HF wet etching and use dry etching instead. Because dry etching is actually cheaper and faster.

Dungeons & Dragons (1)

cthulu_mt (1124113) | more than 6 years ago | (#21324065)

Intel engineers should not be allowed to name the new chip lines after their D&D characters.

Re:Dungeons & Dragons (1)

julesh (229690) | more than 6 years ago | (#21324435)

Intel engineers should not be allowed to name the new chip lines after their D&D characters.

Err... like most recent Intel chip codenames, Penryn is a place.

Can somebody explain (2, Informative)

sayfawa (1099071) | more than 6 years ago | (#21324099)

Why is there so much emphasis on size (as in 45nm) for these things? Does making it smaller make it inherently faster or more efficient? Why? I've looked around (well, I looked at wikipedia anyway) and it's still not clear what advantage the smaller size has.

Re:Can somebody explain (1)

Prof.Phreak (584152) | more than 6 years ago | (#21324177)

You can fit more of them on a die, making it cheaper. A die defect kills fewer CPUs.

Or to make chips more complicated (by using more gates in the same space)---do more with 1 clock cycle.

Or some combination of both.

Also, smaller usually means more energy efficient.

Re:Can somebody explain (2, Informative)

Chabil Ha' (875116) | more than 6 years ago | (#21324211)

Think of it in these terms. Electricity is being used to transmit 1s and 0s inside a circuit. We can only do so much to make the conductors less resistive, so we need to shorten the distance between gates. The less distance an electrical signal has to travel, the more operations you can perform in the same amount of time.

Re:Can somebody explain (1)

OwnedByTwoCats (124103) | more than 6 years ago | (#21326551)

Ahh, but going to smaller features, and shorter distances between gates, also means that the lines become narrower. Resistance is proportional to length, and inversely proportional to cross-sectional area. So if you halve the length and halve the area, total resistance stays the same. Basic EE, folks.

Smaller might not mean less resistance, unless the lines get shorter faster than they get narrower.
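The scaling argument in this comment is easy to check numerically. A minimal sketch, using the standard wire-resistance formula R = ρL/A with an assumed copper resistivity and made-up wire dimensions:

```python
# Sketch of the comment's point: wire resistance R = rho * L / A.
# If a shrink halves both the length and the cross-sectional area,
# R is unchanged; if all linear dimensions halve (so the area drops
# 4x while the length only halves), R actually doubles.
RHO_CU = 1.68e-8  # resistivity of copper, ohm*m (assumed bulk value)

def wire_resistance(length_m, area_m2, rho=RHO_CU):
    return rho * length_m / area_m2

r_old   = wire_resistance(1e-3, 1e-13)       # a 1 mm line, ~100nm x 1um cross-section
r_same  = wire_resistance(0.5e-3, 0.5e-13)   # halve length AND area: R unchanged
r_worse = wire_resistance(0.5e-3, 0.25e-13)  # halve every linear dim: R doubles

print(r_old, r_same, r_worse)
```

The numbers themselves are illustrative; only the ratios matter to the argument.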

Re:Can somebody explain (5, Informative)

compumike (454538) | more than 6 years ago | (#21324213)

The energy required to switch a capacitor from zero to Vdd volts is 1/2*C*Vdd^2.

Smaller logic sizes can operate faster because the physical gate area of the transistor is that much smaller, so there's less capacitance loading down the piece of logic before it (proportional to the square of the scaling, of course). However, it also tends to be the case that the operating voltages scale down too (because they adjust the semiconductor doping and the gate oxide thickness to match), so you get an even better effect on energy required. Thus, scaling helps both with speed and operating power.

The problem they're running into now is that at these smaller sizes, the off-state leakage currents are getting to be of the same magnitude as the actual switching (operating logic) currents! This happens because of the reduced threshold voltage when they scale down, so the transistor isn't as "off" as it used to be.

That's why Intel has to work extra hard to get the power consumption down as the sizes scale down.

NerdKits: electronics kits for the digital generation.

Re:Can somebody explain (1)

matt_martin (159394) | more than 6 years ago | (#21325017)

Keep in mind too, that the gate dielectric is usually thinned with each generation, increasing the capacitance per area. A typical corresponding reduction of operating voltage (so-called constant field scaling) with each generation contributes to the CV^2 reduction when going to smaller dimensions.

Of course, the new high-K dielectrics may shift the curve as they give even more capacitance per unit area for a given thickness while possibly allowing higher voltage.

And, all of the modern dynamic VDD-scaling features blur things even more, but the basic concepts still hold.

Re:Can somebody explain (1)

tehcrazybob (850194) | more than 6 years ago | (#21324221)

Just as you guess, making the parts smaller drops their heat output and power consumption considerably for a given speed. It's also necessary to advance the technology further, because it allows them to create new, faster parts without raising the power consumption.

Re:Can somebody explain (1)

Tim C (15259) | more than 6 years ago | (#21324243)

Does making it smaller make it inherently faster or more efficient?
Yes, basically. For one thing, a smaller chip size means that you can get more of them out of a silicon wafer, and wafer defects kill fewer chips. As for efficiency, that should be obvious - smaller chips mean shorter electrical pathways means less distance for the electrons to travel means less energy required to move them about and less heat generated means higher efficiency.

Re:Can somebody explain (5, Informative)

Rhys (96510) | more than 6 years ago | (#21324259)

Smaller size means signals can propagate around the chip faster. It also means you need less signal-fixing/synchronization hardware, since it is simpler to get a signal synced up at a given clock rate. Smaller size generally means less power dissipated. Smaller feature sizes means the CPU is physically smaller (generally), so more CPUs fit on a silicon wafer. For each wafer they produce (a high but relatively fixed cost vs the number of CPUs on the wafer) they get more CPUs out (= cheaper). If a CPU is bad, that is a smaller percent of the wafer that was "wasted" on that CPU.

Re:Can somebody explain (4, Interesting)

enc0der (907267) | more than 6 years ago | (#21325241)

Smaller size means faster, but at the expense of more power. As a chip designer I can tell you that the smaller you go, the more leakage you have to deal with in the gates, and it goes up FAST. Now, with the new Intel chips, they are employing some new techniques to limit the leakiness of the gates; these techniques are not standard across the industry, so it will be interesting to see how they hold up.

I do not understand what you mean by signal-fixing/synchronization hardware. Design-specific signal synchronization doesn't change over the different gate sizes. What changes is the techniques that are used, as people find better ways to do these things. However, these are not technology specific and tend to find their way back into older technologies to improve performance there as well.

In addition, cost is NOT always cheaper, because die yield is generally MUCH LESS at newer technologies, for those on the bleeding edge. Development costs also go up, because design-specific limitations, process variance, and physical limitations make designs MUCH HARDER to physically implement than at larger sizes. Things like electromigration, leakage power, ESD, OPC, DRC, and foundry design rules are MUCH worse.

What is true is that these people want faster chips, and you can get that, as I said. Although the speed differences are not that amazing. Personally, I don't think the cost justifies the improvement in what I have worked on, especially on power. Now, going out a few years from now, as they solve these problems at these specific gate geometries, THEN we will start to see the benefits of the size overall.

Re:Can somebody explain (1)

Waffle Iron (339739) | more than 6 years ago | (#21324289)

Does making it smaller make it inherently faster

Generally, yes, mostly because the capacitance and inductance of electrical components usually scale with size. The logic speed is often limited by things like R*C time constants. At high enough speeds, the speed of signal transmission across the chip comes into play as well.

Another factor is with smaller parts, more can be packed onto a die. The more parts you have, the more caching and concurrency tricks you can implement to increase speed.

more efficient?

Up to a point, but they seem to have hit a wall. Smaller inductance and capacitance means less power dissipated repeatedly charging and discharging tiny parts of the chip. But now they've made things so small that electrical current is starting to leak through the transistors even when they're "off"; this was a big problem with the hot Pentium 4s. To address that problem, they're switching to strange materials like hafnium. That seems to fix the problem for now, but we'll see how much further they can push it.

Re:Can somebody explain (1)

liquiddark (719647) | more than 6 years ago | (#21324341)

Electromagnetic signal is a function of the charge and the inverse of the square of the distance. The distance between gates is smaller, meaning a smaller amount of charge is required to stabilize the electrical state of the system.

In addition, the smaller size of gates means that more gates fit in the same-sized die. This effect goes as the square of the change in linear dimension, so a reduction of 33% (~60->40nm) means a net twofold increase in the number of transistors available per unit area. This allows, as the other poster suggested, shrinking of processors OR enrichment of feature set. At certain critical points, there are actually more spaces on the wafer, as well, since wafers are not rectangular in shape (or at least, weren't the last time I researched this topic, which was a few years back). Since more chips can be produced from the same raw materials, the cost of production drops, which typically means the cost of the chip drops.
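The density arithmetic in that paragraph works out as follows. A trivial sketch; the node sizes are just the comment's own round numbers:

```python
# Transistor density goes as the square of the linear shrink, so a
# ~33% reduction in feature size (roughly 60nm -> 40nm) gives about
# 2.25x the transistors per unit area -- "a net twofold increase"
# in round numbers.
def density_gain(old_nm, new_nm):
    return (old_nm / new_nm) ** 2

print(density_gain(60, 40))  # 2.25, the comment's round numbers
print(density_gain(65, 45))  # ~2.09, the actual 65nm -> 45nm step
```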

There are also opportunities to take advantage of the inherently more "exact" nature of a finer lithographic resolution, but I'm not really familiar with them, so I'll leave that to someone else to discuss.

Re:Can somebody explain (1)

darkmeridian (119044) | more than 6 years ago | (#21324491)

Each silicon wafer that goes through processing to become a batch of microprocessors costs pretty much the same to make. Having a smaller die makes each chip on the wafer smaller, so you get more chips out of each wafer that you process. This also increases your yield, so you can sell more chips. Furthermore, the electrons on the chip have to take a shorter path, so less heat is generated when the processor runs. Thus, the chip can be run at a higher clock frequency before heat becomes a problem. In conclusion, moving to a smaller process increases the yield, lowers the cost, and increases the performance. However, moving to 45nm and smaller processes requires updated fabrication plants, and is very hard to do and design for because quantum issues become significant. (For instance, quantum tunneling becomes non-negligible.)

Re:Can somebody explain (1)

JoeMerchant (803320) | more than 6 years ago | (#21324519)

Short explanation: size matters.

It's a better measure than clock speed these days; size has a fairly good correspondence to power efficiency, and power efficiency is the main thing holding back higher clock speeds. It's also nice to be able to fit more cores on a single chip, and size helps with that.

Re:Can somebody explain (1)

spirit of reason (989882) | more than 6 years ago | (#21324677)

Smaller feature size definitely can make the chip faster. All the signals in a processor are collections of electromagnetic waves (where we're concerned with the voltages), so data cannot travel any faster than the speed of light. By making features smaller, we decrease the distance that the signals must travel and we can raise the clock speed.
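As a back-of-the-envelope check on the speed-of-light limit (real on-chip signals propagate well below c, so this is only an upper bound):

```python
# How far can a signal possibly travel in one clock cycle?
C = 299_792_458  # speed of light in vacuum, m/s

def max_distance_per_cycle(clock_hz):
    return C / clock_hz

d = max_distance_per_cycle(3e9)  # a 3 GHz clock
print(d * 100)                   # ~10 cm per cycle, at absolute best
```

At 3 GHz the limit is about 10 cm per cycle, which is why signal propagation across even a small die starts to matter.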

Re:Can somebody explain (1)

cyfer2000 (548592) | more than 6 years ago | (#21324803)

It depends. As many people have explained to you, a smaller gate opens/closes faster. But thinner interconnect has higher resistance, and closer interconnect has higher capacitance. At current gate sizes, the speed of the CPU is dominated by RC delay. So copper has been used to lower the resistance, and low-k materials have been used to lower the capacitance. Also, as the gate becomes smaller, the leakage becomes bigger due to the tunneling effect, which hurts efficiency, so high-k materials have been used to insulate the gate from the silicon.

As you may have figured out by now, shrinking the line width doesn't automatically mean higher speed and higher efficiency. But one thing is for sure: they can stuff more SRAM-based cache into the CPU, which helps the "real life" speed a lot.

Re:Can somebody explain (1)

owlstead (636356) | more than 6 years ago | (#21325855)

Well, you can make the die smaller, as others have pointed out, or you can add cache, which *can* also make things faster. Look at the 12 MB caches of the Xeons mentioned in the article. That's quite a number of MB's, that won't take too much space (to keep the costs down). Actually, many of my - smaller - applications could fit easily within the cache alone. Of course, with multiple cores, virtualization and the bottleneck of the main memory, having a big cache *can* really help.

Note: *can* because it rather depends on the applications used

Where's the article? (1)

markswims2 (1187967) | more than 6 years ago | (#21324151)

The link to Computerworld is just a title and no article. Penryn must have left the writers speechless...

Re:Where's the article? (0)

Anonymous Coward | more than 6 years ago | (#21324513)

The link has been fixed.

Very good for Intel (1)

Trollvalds (1187979) | more than 6 years ago | (#21324215)

but not so good for Transmeta, who had power-saving chips long before Intel invented them.

Re:Very good for Intel (1)

Sunar (1100779) | more than 6 years ago | (#21326157)

Unless I'm mistaken, they pretty much sucked. These are supposed to be a speed improvement and a power savings. ~Sun

Re:Very good for Intel (1)

treeves (963993) | more than 6 years ago | (#21326451)

It's not a matter of "power-saving chips" vs. "non-power-saving chips". It's a matter of degree, and Intel has increased the degree, "performance per watt" as they like to call it, by going to a 45nm process.

Kind of funny, and kind of obnoxious (1)

HellYeahAutomaton (815542) | more than 6 years ago | (#21324277)

I've followed the stories about these machines since the hype about their "V8" setups, and even now they miss the important info: Who is going to ship systems with these in them, how much and how soon? Oh, and Intel should be bitchslapped for not making a multiprocessor motherboard that takes the Socket T (LGA775).

WTF? (1)

thatskinnyguy (1129515) | more than 6 years ago | (#21324349)

When TFA is not informative, seek the source. Enjoy.

Energy efficiency with next-GPUs? (1)

failedlogic (627314) | more than 6 years ago | (#21324471)

While I realize that GPUs may be doing more calculations than CPUs (I'm not a programmer), the power consumption of many graphics cards/GPUs at idle is getting ridiculous (some are 100 to 200 watts), never mind what is needed during gaming. On the one hand, I would buy an on-board accelerator or a cheap PCI-x card knowing it won't need additional power to the board, but for the odd games that I play, I need more GPU power. Game consoles as a whole, like the X360, consume about 200 watts at max draw. This is for the whole system; many PC video cards draw this much for the video card alone!

On the subject of these new chips, I'm quite interested in building a new desktop with a Penryn - been a long time since I upgraded. I'm particularly interested in the Xeon chips because previous designs from Intel included fanless/passive coolers. If this continues on the Penryn, I'll definitely buy one. I'm all for a quieter desktop.

Power efficient??? (1)

pla (258480) | more than 6 years ago | (#21324605)

Intel's own spec sheet shows the best of these (and only a single one at that) with a TDP of 65W.

Call me a pessimist, but my two main systems peak at less than that at the wall, and I have yet to find them too slow for any given task (though I admittedly don't do much "twitch" gaming).

Re:Power efficient??? (1)

cyberjock1980 (1131059) | more than 6 years ago | (#21325331)

No way your computer draws only 65W, unless you have a VERY old computer or a shuttle that can barely do anything. Laptops at the store that are 'power efficient' use 90W power supplies. My system which is in no way a power house, draws 98W idle. Not to mention your power supply is at max 85% efficient.

Re:Power efficient??? (2, Informative)

pla (258480) | more than 6 years ago | (#21326185)

No way your computer draws only 65W, unless you have a VERY old computer or a shuttle that can barely do anything.

Provide an email address and I'll send you a picture of a Kill-A-Watt reading in the high-50W range with the CPU pegged (and in the low 40s idle). I respect your pessimism, but really do run two such systems; One even has something vaguely resembling a decent GPU, though no doubt the hardcore gamers would sneer heartily at it (not that I care, as I said, as I mostly prefer RPG and RTS over FPS).

As for "older", AMD has two entire lines of modern, dual-core chips running between 31W (Turion) and 45W ("BE" parts). While true that dual 2.3Ghz cores don't rock the world anymore, as I said, they perform so much more than "okay" that I don't see myself upgrading for at least another two years (barring any revolutionary advances in CPU technology before then, which looks exceedingly unlikely IMO).

Not to mention your power supply is at max 85% efficient.

I've had enough crappy low-end PSUs take out systems in the past that I buy only the best now - And as a side effect of "quality", you tend to get "efficiency". I personally favor SeaSonic's hardware, of which the newer ones push 88% efficient; Though yes, the ones I have now only claim 85%.

Regardless, keep in mind that the PSU loss applies multiplicatively to whatever your CPU and GPU (and the negligible rest) draw... at 85% efficiency, a 51W load (35W + 16W) pulls 60W from the wall and wastes only 9W, while a 227W load (120W + 107W) wastes over 40W. Just think about that for a sec: a carelessly designed midrange PC can easily waste, just in PSU losses, my total light-use draw.
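That PSU arithmetic can be spelled out. A small sketch assuming the 85% efficiency figure used in this thread:

```python
# At a given efficiency, wall draw = DC load / efficiency, and the
# difference is heat wasted inside the supply.  The loads below are
# the comment's own example numbers.
def psu_waste(load_w, efficiency=0.85):
    wall = load_w / efficiency
    return wall - load_w

print(psu_waste(35 + 16))    # low-power CPU+GPU: ~9 W lost in the PSU
print(psu_waste(120 + 107))  # midrange CPU+GPU: ~40 W lost
```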

or a shuttle that can barely do anything.

I run one of those (well, a home-built EPIA system) as my home file server. 22W at-the-wall (not counting the bank of HDDs except the boot drive), and it can perform its one and only real "task" (saturating a gigabit network connection) juuuuuust fine.

Re:Power efficient??? (1)

OwnedByTwoCats (124103) | more than 6 years ago | (#21326669)

I want to build a home file server that I can run and not feel guilty about wasting power. Can you share details?

Re:Power efficient??? (1)

flokemon (578389) | more than 6 years ago | (#21325449)

Some LV versions will probably come later. The same happened with Clovertown.
Standard Xeon 5300s are rated at 80W too, the X53xx at 120W. The L53xx Clovertown: 50W. Dual-core Xeon 5138 and 5148: 35W and 40W.

Re:Power efficient??? (1)

SrJsignal (753163) | more than 6 years ago | (#21325943)

Well, first off, it's not possible that your machine is less than 65W at the wall unless it's a laptop, which is hardly a fair comparison.

Also, TDP is not really a good measure of power efficiency. TDP has to do with how systems need to be designed to get the most out of a chip. This basically means under ~95% load the chip is going to use 65W so the system needs to be designed to handle that. At idle it will use considerably less, so chips with the same TDP can have substantially different real world power usage (especially at idle).

Re:Power efficient??? (1)

pla (258480) | more than 6 years ago | (#21326337)

it's not possible that your machine is less than 65W at the wall unless it's a laptop, which is hardly a fair comparison

See my other response on this topic. Not just possible, really pretty easy, with some care.

Also, TDP is not really a good measure of power efficiency.

Agreed, if for no other reason than because it means different things to different companies. But I did say "at the wall", and I meant it.

Don't get me wrong, I truly applaud Intel's attampts to reduce power consumption. But for me personally, they have a looooong way further to go.

Price drop imminent? (1)

goldspider (445116) | more than 6 years ago | (#21324765)

Might this be followed by a price drop in their current offerings? I'm about to buy a new C2D, so I'd wait if it meant a significant savings.

Re:Price drop imminent? (0)

Anonymous Coward | more than 6 years ago | (#21325735)

Well, the other reason to wait at least a month is AMD will be releasing their new stuff at the end of the month. I think the press release is November 17th or 16th, something like that. It's not going to be faster than C2D, but it will definitely force Intel to lower its prices :)

How Much Hafnium? (1)

Nom du Keyboard (633989) | more than 6 years ago | (#21324837)

Just how much hafnium is there in the world, and has Intel cornered the supply before AMD could get their hands on any of it?

Re:How Much Hafnium? (1)

cyberjock1980 (1131059) | more than 6 years ago | (#21325389)

I may be wrong, but I remember reading somewhere that the US Navy owned something around 80% of the Hafnium known to exist for their nuclear reactors. Sure they have more than they can use, but its better to have too much than run out. Hafnium is very rare and extremely important in the design of nuclear reactors for many countries.

Re:How Much Hafnium? (0)

Anonymous Coward | more than 6 years ago | (#21326417)

They have prepurchased all of the Hafnium still in the ground?

Halogen-free (1)

Iowan41 (1139959) | more than 6 years ago | (#21324879)

What I want to know is, when are they going to finally make the oceans halogen-free?

Forget laptops (1)

TheDrewbert (914334) | more than 6 years ago | (#21324889)

I'm more interested in what high efficiency chips like this could do for my server room. I have two huge air conditioners to cool a 10'x20' data center. They simply can no longer keep up with the heat coming from the servers. I could install newer, better, bigger air conditioners but that seems to be attacking the wrong end of the problem. VM, 2.5" sata raid, and SAN have all helped somewhat, but the biggest heat problem is still the processors.

Re:Forget laptops (1)

Hoi Polloi (522990) | more than 6 years ago | (#21326469)

Simple, get rid of your heating systems and reconnect them to your server room. Problem solved!

And if they get hotter you can reconnect your hot water system to your servers too. Just think, a CPU that doubles as a coffee maker. If you want a fresh cup just set the scheduler to run a job to search for prime numbers.

Silicon gates? (0)

Anonymous Coward | more than 6 years ago | (#21324909)

What the FUCK is a silicon gate, people? This type of tech journalism really grinds my gears. Should we go back to basics? A standard CMOS transistor is basically a stack comprised of a silicon substrate, an insulator, and a polysilicon gate. The new thing that Intel is doing is replacing the insulator with a high-k hafnium-based material instead of the silicon dioxide that's been used for the past 40 years. And the polysilicon is being replaced by metal. The metal is needed due to the work-function discrepancy between polysilicon and the high-k insulator. But cheese and rice, people, calling the old-style gate a silicon gate does a great disservice to the people who have spent the better part of their lives perfecting silicon dioxide and polysilicon.

No More Pentium? (1)

KaoticEvil (91813) | more than 6 years ago | (#21324995)

Is Penryn the core name or the CPU series name? Does this mean the end of the Pentium brand that we have all come to know and love and hate and love again?

Re:No More Pentium? (1)

stevel (64802) | more than 6 years ago | (#21325205)

Penryn is the code name for the CPU series, not a brand name. Specifically, it's the code name for the not-yet-released mobile processor, but as with the previous generation Merom, it has been used to apply to all three processor types (mobile, desktop, server) built from that technology generation. Intel is not introducing any new processor brand names for this as far as I know.

The processors will be generally sold as "Intel Core 2" or "Intel Xeon". The Pentium and Celeron brand names may also be applied to low-end models.

Since no new brand names are being used, I expect people to continue to use the code names to try to distinguish these new processors from their predecessors (code named Merom, Conroe, Woodcrest, Clovertown).

lead and halogen free (1)

cinnamon colbert (732724) | more than 6 years ago | (#21325203)

anyone care to calculate the ratio of lead and halogens in Intel's world output to that in lead-acid car batteries or Americans' swimming pools?
talk about BS
(swimming pools use large amounts of that well-known halogen, chlorine)

it's not the halogen atoms themselves (2, Informative)

Quadraginta (902985) | more than 6 years ago | (#21325887)

The problem is not the halogen atoms themselves, but the chemical reactivity a carbon atom gets when it's bonded to a halogen atom. That is, an organic compound that contains carbon-chlorine bonds is obnoxious not because of the chlorine atoms, but because the chlorine atoms "activate" the carbon atoms to which they're bonded (more precisely they make it far easier for nucleophilic and radical reactions to happen at the carbon atom) so that the carbon atom can do chemistry inside you (or inside some other animal) that you really don't want to happen, e.g. mutating your DNA. This is why chlorinated organic compounds (e.g. PCBs, perc, carbon tet) tend to be tightly regulated.

The halogens themselves (Cl_2 et cetera) and the halogen-oxygen compounds you find in swimming pools (e.g. hypochlorite anions) are merely noxiously caustic, like acid. At high enough concentrations they might scar your lungs and skin, or kill you, but they won't seep into your tissues and do insidious chemistry that gives you cancer or lupus, and they're quite harmless at low concentrations (e.g. what you find in your pool, or in seawater).

High-K eh? (1)

steveo777 (183629) | more than 6 years ago | (#21326053)

Didn't know Intel was into the dietary supplement business. Anyone know where I can pick up a bag of these?

Come Full Circle (2, Interesting)

IorDMUX (870522) | more than 6 years ago | (#21326195)

Once upon a time (1970s), everybody used metal for their FET gates. Those aluminum gates are where we got the names MOSFET (Metal-Oxide-Semiconductor Field-Effect Transistor) and CMOS (Complementary MOS). In the 1980s, pretty much every fab gave up metal gates for the polysilicon that has been used since, amidst various enhancements in polysilicon deposition technology, self-aligned gates, etc.

Now, the trend seems to be to return to the metal gates of yesteryear and ditch the oxide (the 'O' in MOSFET) for high-k dielectrics (not high-k metals, as the summary seems to say)...

That's all well and good, but I have one question... when will we get around to updating the term "CMOS"?

Yes, is a decent processor... (1)

asm2750 (1124425) | more than 6 years ago | (#21326565)

....but every time I look at a motherboard for an Intel processor I think of this quote.

"People can have the Model T in any color so long as it's black." -- Henry Ford