
Details of Intel 45nm Processors Leaked

ScuttleMonkey posted more than 6 years ago | from the magic-eight-ball-says dept.


DCC writes "TechARP has gotten some juicy news from Intel. This time, it's the top secret details of the Intel 45nm desktop processors, both Yorkfield and Wolfdale with benchmarks and pricing included! 'As promised earlier, Intel will launch their 45 nm processors by the end of this year. In fact, we have been told that the launch date had already been set at November 11, 2007, so mark your calendars. [...] Code-named Yorkfield XE, the Intel Core 2 Extreme QX9650 will be a quad-core processor built from two 45 nm Wolfdale processor dies. It will displace the Core 2 Extreme QX6850 (Kentsfield) processor as the top desktop processor model until Q3, 2008'"

104 comments

more like (-1, Troll)

Anonymous Coward | more than 6 years ago | (#20813371)

Yankfield and Wolfdong!!

Meh. (-1, Troll)

Anonymous Coward | more than 6 years ago | (#20813391)

Marketing > Engineering, again.

Speed bump, ooh shiny! ONLY $xxxx.99!?

Unforkable!

Sure, but can it run linux? (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#20813481)

Oh, it can? Ok.

So shall we expect... (0)

Anonymous Coward | more than 6 years ago | (#20813489)

9.65GHz equivalent speed on this beast, or have the marketing folks - unshackled from any comparison to old-school processors - gone fully overboard?

Not all that new (3, Informative)

CajunArson (465943) | more than 6 years ago | (#20813491)

Anandtech had a preview of Wolfdale including benchmarks back in August (here [anandtech.com] ). The ironic thing is that with the limited availability of the K10 and its late arrival at most review sites, I've seen about as much real benchmarking of the unreleased Intel parts as I have of the supposedly widely-released AMD parts.

Re:Not all that new (1)

Sycraft-fu (314770) | more than 6 years ago | (#20813761)

I wasn't even aware the K10 was out; thus far I've seen nothing about it on any of the sites I normally look at. Very strange.

Re:Not all that new (1)

FuturePastNow (836765) | more than 6 years ago | (#20814933)

I've yet to see a single Barcelona-core processor for sale at retail, and AMD only sent samples to one or two review sites. K10 was essentially a paper launch.

Re:Not all that new (2, Informative)

mdm-adph (1030332) | more than 6 years ago | (#20814967)

I wasn't even aware the K10 was out; thus far I've seen nothing about it on any of the sites I normally look at. Very strange.

I know -- it's been a weird release, to say the least. I haven't really heard very much about them at all, and for a chip this neat that's kind of surprising.

You can buy them now on Newegg here [newegg.com] -- they were up a few weeks ago for about $800, but then they were taken down, and now they're back up. Who knows, eh?

Re:Not all that new (1)

pete314 (877866) | more than 6 years ago | (#20814995)

Intel confirmed the November 12 launch date in September at IDF; November 11 was an August rumor. At IDF Intel also said that it would ship 20 processor models on November 12, and another 20 in the first quarter of 2008. Also at IDF, Intel released a projection for 45nm performance on the SPECfp benchmark. Looks like TechARP forgot to do its fact checking.

Time for a new naming convention? (3, Insightful)

Psychor (603391) | more than 6 years ago | (#20813567)

Maybe it's time to come up with processor names that actually mean something again instead of confusing and usually meaningless numbers? This is especially true for AMD, whose numbers seem to be based around the clock speed an equivalent Intel chip might have run at many years ago when they invented the convention, but Intel's new "random model numbers" naming doesn't seem much better.

Of course, old-style megahertz numbering doesn't make much sense these days either, with the proliferation of multi-core processors. I think it would be nice if the chip makers could agree on some kind of general performance benchmark number that could be used in names to make processors more easily comparable. Even some kind of very basic number relating to cores/speed, like 4x2200 for a 4-core, 2.2GHz chip, would be better than the current mess, in my opinion.

Re:Time for a new naming convention? (1)

jimstapleton (999106) | more than 6 years ago | (#20813699)

Maybe it's time to come up with processor names that actually mean something again instead of confusing and usually meaningless numbers? This is especially true for AMD, whose numbers seem to be based around the clock speed an equivalent Intel chip might have run at many years ago when they invented the convention, but Intel's new "random model numbers" naming doesn't seem much better.


So... Let me get this straight... You are complaining about meaningless numbers, and then stating that a number that actually has a solid basis in performance (if an old one) is worse than one that doesn't really give you a comparison at all?

It's not "not much better" (that would indicate it's actually somewhat better); it's just plain worse. AMD's "fake MHz" is at least useful in that it gives you a fairly useful metric within its own sphere, as well as a rough metric against another sphere.

But I skip the whole thing anyway. I look at the benchmarks to get a rough comparison between architectures, and then compare MHz and cache within an architecture. If I can find a set of benchmarks between the exact processors of interest, I use those.

Re:Time for a new naming convention? (1)

torkus (1133985) | more than 6 years ago | (#20813727)

> I think it would be nice if the chip makers could agree on some kind of general performance benchmark number that could be used in names to make processors more easily comparable

An admirable idea, but then CPU makers will write micro-code or silicon to enhance the individual benchmark result(s).

You're right though, the numbers are slightly better than nonsense. The bigger issue is that there are SO many different processors. The manufacturers are aiming to have the perfect fit for every segment and every pricing tier and every... everything.

Re:Time for a new naming convention? (1)

default luser (529332) | more than 6 years ago | (#20825889)

Yes, in the past both compiler developers and chip designers have been known to optimize for certain aspects of benchmarks. Thus, to make numbers at all useful, you would have to have multiple benchmarks stressing different aspects of computing. This would prevent optimizing for benchmarks while hurting performance in other areas.

In addition to that, you have the problem that computer usage needs are constantly changing. This is the main reason why benchmarks change from year-to-year. How are you supposed to compare processor models when they advertise benchmark numbers from different versions of a given test suite?

In the end, marketing people from both AMD and Intel have discovered one ultimate truth: in a world where anyone and their dog can sell you an x86 computer, price is one of the easiest ways to gauge performance. People who care about performance will simply pay more and expect it. The rest of the world simply doesn't care, because they assume any processor is fast enough for their needs (and are probably right).

Re:Time for a new naming convention? (1)

gEvil (beta) (945888) | more than 6 years ago | (#20813763)

I think it would be nice if the chip makers could agree on some kind of general performance benchmark number that could be used in names to make processors more easily comparable.

You mean like BogoMIPS?

Re:Time for a new naming convention? (2, Funny)

dextromulous (627459) | more than 6 years ago | (#20817231)

I think it would be nice if the chip makers could agree on some kind of general performance benchmark number that could be used in names to make processors more easily comparable.

You mean like BogoMIPS?

That reminds me of my favorite lines from the Linux 2.4 source code, in arch/i386/kernel/smpboot.c:

    /*
     * Allow the user to impress friends.
     */

After which is the calculation for BogoMIPS.
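
For the curious, the calibration can be mimicked (very loosely) in a few lines of Python. This is an illustrative toy, not the kernel's actual algorithm, which times a busy loop against the timer interrupt:

```python
import time

def bogomips_estimate(loops: int = 10_000_000) -> float:
    """Toy analogue of the kernel's BogoMIPS calibration: time an empty
    loop and report millions of iterations per second. Being interpreted
    Python, the result is even more bogus than the original."""
    start = time.perf_counter()
    for _ in range(loops):
        pass
    elapsed = time.perf_counter() - start
    return loops / elapsed / 1e6  # "BogoMIPS", suitable for impressing friends
```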

Re:Time for a new naming convention? (3, Informative)

EvanED (569694) | more than 6 years ago | (#20813849)

Even some kind of very basic number relating to cores/speed like the 4x2200 for a 4 core, 2.2Ghz chip

Okay, now how do you mark different versions of that? Ones with different sized caches? Different FSB speeds?

I'm not claiming that the Intel numbers make all that much sense, but they still manage to convey a fair bit of information. Higher "hundreds" digits are faster clocks. (The Q6600 and E6600 both have the same clock speed.) Numbers with the same leading digits, e.g. the E6700 vs. the E6750, are different revisions. (The E6700 has a 1066 MHz FSB, and the E6750 1333 MHz.) The E prefix says that it is dual core; quad cores have Q. If the thousands digit is 2, then things shift around a bit, but that information alone tells you that you're working with one of the budget chips. (Slower clock, no VT, smaller cache.)
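
The decoding rules above can be sketched as a small Python function. The field meanings follow the post, not any official Intel documentation, so treat the mapping as illustrative:

```python
def decode_core2(model: str) -> dict:
    """Rough decode of a Core 2 era model string (e.g. 'E6750', 'Q6600')
    using the rules described above."""
    prefix, digits = model[0].upper(), model[1:]
    return {
        "cores": {"E": 2, "Q": 4}.get(prefix),  # E = dual core, Q = quad core
        "budget": digits[0] == "2",   # E2xxx: budget line (slower clock, no VT, smaller cache)
        "clock_tier": int(digits[1]), # higher "hundreds" digit = faster clock
        "revision": digits[2:],       # e.g. E6700 vs. E6750: different FSB
    }
```

By these rules the Q6600 and E6600 land in the same clock tier, matching the note above that they share a clock speed.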

Re:Time for a new naming convention? (1)

FuturePastNow (836765) | more than 6 years ago | (#20815003)

Exactly. I don't see the problem. If a model number doesn't make sense, just look it up (there's this thing called the internet now), and you'll have all the information you desire.

Re:Time for a new naming convention? (1)

InvalidError (771317) | more than 6 years ago | (#20817399)

I liked how things worked in the P3's days... the brand gives you the architecture (P3), the MHz gives you a general performance figure and suffixes tell you what extras the chip supports... I liked my P3-650E and my P3-1066EB, though I kind of wish I had held off a year longer to get a P3-1200T or P3-1200S instead.

Cryptic numeric product codes do pretty much the same thing in a slightly more compact format. What is really annoying is when more than one feature affects a given digit in the product code, or when features are dropped rather than added... you end up with xxx5 models that have extra features over the xxx0 models and xxx2 models that lack some of the xxx5/xxx0 features, making the whole CPU selection process needlessly unintuitive.

With the suffix/prefix system, longer model numbers meant more features; you just need to make sure the ones you want are in there.

Re:Time for a new naming convention? (2, Informative)

qortra (591818) | more than 6 years ago | (#20813949)

some kind of very basic number relating to cores/speed like the 4x2200 for a 4 core, 2.2Ghz chip

Of course, that would be insufficient; you would need some other indicator to mention the fact that it is, say, a Wolfdale instead of a Conroe (Wolfdales being, say, 10% faster). Also imagine that another axis has to be considered: power-efficient or non-power-efficient. That would make your model name even more complicated: "Conroe4x2200PE". That's quite a mouthful. This is only an example to indicate that specification-based model numbers have a tendency to get prohibitively complex. The spec-based model number becomes more and more complicated as each axis of variability is added. So most companies settle for more arbitrary model numbers (NVidia, ATI, BMW, Lexus, etc.).

Consider BMW; for many years, their model numbers were usually of the form *model**displacement**feature* (an incomplete schema would be [3,5,7][15,20,25,30,35,40,50][i,ci]). However, at some point they realized that there might be several different models with the same engine size, and rather than making the feature indicator more complex, they made the displacement inaccurate (sometimes pro-rating it to indicate a higher-performance version of an identical-displacement engine).

All that to say, I don't think that having arbitrary model numbers is such a bad state of affairs; just have a publicly accessible lookup table that relates a single model number to all the relevant specs. Saying "NVidia 7600GT" is a heck of a lot easier than saying "NVidia 256-128-22.4-6.7-700" (for memory, memory interface, memory bandwidth, fill rate, and vertices per second).
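
The combinatorial blow-up described above is easy to see if you try to encode even two axes. A hypothetical name builder (using the "Conroe4x2200PE" example from the post):

```python
def spec_name(arch: str, cores: int, mhz: int, power_efficient: bool = False) -> str:
    """Build a spec-encoded model name of the kind the parent post warns
    about. Every new axis of variability bolts another suffix onto the name."""
    name = f"{arch}{cores}x{mhz}"
    if power_efficient:
        name += "PE"  # one extra axis already makes the name unwieldy
    return name
```

One more axis (cache size, FSB, stepping) and the name stops fitting on a shelf tag, which is presumably why vendors retreat to arbitrary numbers plus a lookup table.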

Re:Time for a new naming convention? (1)

evilviper (135110) | more than 6 years ago | (#20814393)

Hmm... So... Did you actively plagiarize your comment from B5_geek and my own comments from THIS THREAD, 2 days ago, [slashdot.org] or was it entirely subconscious?

This is especially true for AMD, whose numbers seem to be based around the clock speed an equivalent Intel chip might have run at many years ago when they invented the convention,

AMD's numbers are consistent... You can be pretty sure a 4000+ will be 2X as fast as a 2000+. What system could possibly be better? There is no inherent natural metric for computing power, so you have to start with an arbitrary number somewhere, and AMD's PR number is as good a starting point as any.

Intel's new "random model numbers" naming doesn't seem much better.

In fact I'd say (and DID say) it's infinitely worse, since the model numbers they choose are ENTIRELY arbitrary. If people start using model numbers as a relative performance metric, they are left wide-open to manipulation (model-number inflation) when Intel wants to increase sales.

Even some kind of very basic number relating to cores/speed like the 4x2200 for a 4 core, 2.2Ghz chip would be better than the current mess in my opinion though.

"No, what it'll do is bring-back the MHz myth, in full-force.

Gee, Intel has this 4Ghz CPU, and AMD has this 3GHz CPU for a bit less, and VIA has this 2.5GHz CPU for half the price...

Buy VIA CPU... Go home and spend the next two years wondering why the hell your computer is ridiculously slow, and pondering the meaning of MHz myth."
--Evilviper, slashdot.org, September 29, 2007


Only VIA (and ARM/MIPS/etc. suppliers) want to keep the MHz myth going.

Re:Time for a new naming convention? (1)

Psychor (603391) | more than 6 years ago | (#20815253)

I hadn't even seen that thread; perhaps you're not quite the visionary and Slashdot celebrity that you thought. I've read it now, and while your general sentiment is the same (that chip model numbers aren't very meaningful), the posts in question are hardly identical. I find it strange that you'd be so hostile as to make a post here accusing me of copying, especially since you've then decided to point out why you think I'm wrong, points which I would surely have taken on board already had I read your thread.

On a technical news site where articles are read by many thousands of people, is it really inconceivable to you that multiple people could have views on chip naming conventions and express them in articles related to that subject? Independent development of an idea may not carry much weight in the patent world at the moment, but it's very much a likelihood in real life.

Re:Time for a new naming convention? (0)

Anonymous Coward | more than 6 years ago | (#20816241)

The posts are completely different. The sentence structure is completely different. Perhaps you should reread his post?

Re:Time for a new naming convention? (0)

Anonymous Coward | more than 6 years ago | (#20818259)

Hi, I'm a lawyer. Call me, 1-800-IPprotect, we can recover your IP.

Re:Time for a new naming convention? (4, Informative)

TheThiefMaster (992038) | more than 6 years ago | (#20815495)

For the AMD Athlon 64 X2 processors, the number does actually mean something.

For a 1MB cache (per core) cpu, it's exactly 2x the clock speed in megahertz. The X2 4000+ is 2000MHz. This continues every 200MHz all the way up to the top cpu, the 6400+ (3200MHz, 1MB cache).

For a 512kB cache (per core) cpu, it's 200 lower than that. The X2 3800+ is 2000MHz as well, but 512kB cache. This continues every 100MHz all the way up the line to the 5400+ (2800MHz, 512kB cache).

For a 256kB cache (per core) cpu, it's 200 lower again. The X2 3600+ is ALSO 2000MHz, but has 256kB cache. There is only one 256kB-cache X2 cpu. There is also an X2 3600+ that is 1900MHz and 512kB cache, which still fits the pattern.

The single core Athlon 64s seem to have a similar numbering scheme, but with more factors affecting it, including hypertransport speed (800MHz/1000MHz), and socket (754/939). Some of the cpus were numbered slightly differently, but this is 99% accurate:
The base is a 512kB cache socket 754 hypertransport 800MHz 2000MHz cpu, which is rated at 3000.
Socket 754 cpus were rated 200 higher for every 200MHz higher cpu speed.
1MB cache versions were mostly 200 higher (one was 300), and 256kB cache versions were 100 lower.

Socket 939 cpus were rated 200 higher than socket 754. (Due to the support for dual-channel ddr, they were better).
1000MHz HT cpus were rated an additional 100 higher for every 200MHz higher cpu speed than the base (2000MHz). The cpus that were 200MHz slower than the base didn't get an additional 100 points deducted though.
Again, 1MB cache versions were 200 higher.

This doesn't cover the 1500+, which was only used in a HP Blade PC.

The AM2 cpus were mostly the same as the 1000MHz HT S939s, except for the 4000+, which was a 2600MHz/512kB cache instead of 2400MHz/1MB, and the details above would have scored it at 4100+.

As you can see, the numbers are formulaic rather than arbitrary, derived mostly from the features of the cpus instead of a comparison against Intel.
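
The X2 rules above reduce to a one-line formula. A quick Python check (a sketch of the pattern the parent describes, not an official AMD formula):

```python
def x2_rating(mhz: int, cache_kb_per_core: int) -> int:
    """Athlon 64 X2 model rating per the pattern above: 2x the clock in
    MHz, minus 200 for each halving of L2 cache below 1MB per core."""
    penalty = {1024: 0, 512: 200, 256: 400}[cache_kb_per_core]
    return 2 * mhz - penalty
```

It reproduces every X2 in the post: the 4000+ (2000MHz/1MB), the 3800+ (2000MHz/512kB), both 3600+ variants, and on up to the 6400+.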

Re:Time for a new naming convention? (1)

LWATCDR (28044) | more than 6 years ago | (#20815575)

How about this: just about every CPU available now is bloody fast!
If you don't transcode HD video and you don't play super-high-end games, then just about any modern CPU is going to be fast enough.
The performance of the CPU is such a small part of PC performance. For most people, more RAM will matter more than a faster CPU, then maybe a faster HD or video card!
I develop software for a living, and an Athlon X2 3800 is pretty fast with enough RAM.

Re:Time for a new naming convention? (1)

edwdig (47888) | more than 6 years ago | (#20816789)

Maybe it's time to come up with processor names that actually mean something again instead of confusing and usually meaningless numbers? This is especially true for AMD, whose numbers seem to be based around the clock speed an equivalent Intel chip might have run at many years ago when they invented the convention, but Intel's new "random model numbers" naming doesn't seem much better.

That's what AMD at least used to do. Contrary to popular belief, their numbers had nothing to do with Intel's speeds.

When AMD went from the original Athlon to the Athlon XP, performance per MHz went up. Rather than try to explain why a 1.4 GHz new chip was faster than a 1.5 GHz old chip, they rated the new chips relative to the old ones. An Athlon XP 2000 was equivalent to an original Athlon chip clocked at 2 GHz, despite the clock speed being lower than that. Coincidentally, the chip also happened to perform at roughly the same speed as Intel's 2 GHz chips of the time.

As the Athlon 64 came along, AMD kept the same rating scale as performance per GHz went up further. This created confusion for two reasons. First, the performance number no longer matched up well with Intel's chips, annoying the people who believed that was what the number was supposed to mean. Second, other people got confused when an equivalent speed bump in the chips made the performance number go up by different amounts. This made perfect sense if you worked out the performance ratios, but annoyed the people who didn't really understand the method.

Once AMD went dual core, they basically just started making up the numbers, as they felt doubling the single core rating would create unrealistic expectations.

Re:Time for a new naming convention? (0)

Anonymous Coward | more than 6 years ago | (#20818285)

It would be best to go back to the old 8086 naming convention with small alterations.

For example, an Intel Core 2 Duo could be a 986DX2-3, which means it's a 9th-generation x86 CPU that is not an economy model (Celeron), has 2 cores, and runs at 3 GHz.

A Core 2 based Celeron could be a 986SX-2 while a Core 2 Quad could be 986DX4-2.5.

The only things I've changed from the old convention are that the number following the DX or SX CPU class now represents the number of cores instead of the clock multiplier, and the clock speed has been upscaled from MHz to GHz. It's easy to read and easy to understand just what each CPU is.
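
The proposed scheme is regular enough to parse mechanically. A sketch (assuming, as the examples imply, that SX models with no core digit are single-core):

```python
import re

def parse_retro_name(name: str):
    """Parse names like '986DX2-3' into (generation, class, cores, GHz)
    per the commenter's proposed convention."""
    m = re.fullmatch(r"(\d)86(DX|SX)(\d*)-(\d+(?:\.\d+)?)", name)
    if not m:
        raise ValueError(f"not a valid retro name: {name!r}")
    gen, cls, cores, ghz = m.groups()
    return int(gen), cls, int(cores) if cores else 1, float(ghz)
```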

Re:Time for a new naming convention? (1)

mdwh2 (535323) | more than 6 years ago | (#20818771)

For example an Intel Core 2 Duo could be a 986DX2-3 which means it's a 9th generation x86 CPU that is not an economy model (Celeron), it has 2 cores and runs at 3 GHz.

A Core 2 based Celeron could be a 986SX-2 while a Core 2 Quad could be 986DX4-2.5.


Note, though, that there are a lot more "levels" than just two. The Core 2 Duo has at least two lines (the E4xxx and E6xxx), and there's also the new Pentium Dual-Core chips (E2xxx), which sit between the Core 2 Duo and the Celeron.

Re:Time for a new naming convention? (1)

chromozone (847904) | more than 6 years ago | (#20819703)

I think the manufacturers like names that don't signify much. They know the knowledgeable enthusiast will be able to tell the quality cpus apart no matter what they're called. The general public, on the other hand, can be allowed to think they are buying the best when they are not. A lot of people bought computers they thought were Core 2 Duos when they were actually "Pentium Dual Core" machines, an inferior cpu despite the similar name.

Vista (0)

Anonymous Coward | more than 6 years ago | (#20820017)

Call me crazy, but isn't the Windows Experience Index [wikipedia.org] widely available? It will be in just a year, and then it won't matter whether you're a Linux-only guy. Like it or not, people will refer to it the way we ask about the number of USB ports or cache size.

Bogus or not, it's still an ingenious metric for common folk to check. I went to a computer store before buying my current laptop. Checking the score section of the control panel, I noticed someone had beaten me to the punch in checking the scores on about a third of the sub-$1.5k machines. It is true that the more expensive the machine got, the less likely I was to find that someone else had checked the Vista score, but if other geeks are doing it at three different stores, then we must be on to something.

Vista does the CPU score assessment on its first user boot. It is as transparent to the market as Windows Product Activation has been. Nobody talks about it any more, and yet everyone manually buying a retail Windows OS deals with it.

Still FSB and dual dual-core (3, Informative)

Joe The Dragon (967727) | more than 6 years ago | (#20813573)

The true AMD quad-cores may blow Intel away; the desktop ones will use faster, lower-latency desktop RAM than the server ones that are out now.

And the AMD 4x4 system, with two AMD quad-cores and desktop RAM, will be a lot better than Intel's Skulltrail with its FB-DIMMs and poor chipset I/O: a full server chipset plus two NVIDIA chipsets, linked by a PCI-E 1.1 x16 bus from the Intel chipset to the first NVIDIA chip and by HT from that one to the other NVIDIA chipset, which provides the two x16 PCI-E 1.1 SLI slots. The AMD system will cost less, with cheaper RAM and a less costly motherboard.

The AMD system will likely have the choice of an NVIDIA-based board, with two full x16 PCI-E 2.0 SLI slots plus other PCI-E 2.0 slots and HT links from the CPUs to the NVIDIA chipset, or an ATI one with:
        * Codenamed RD790
        * Dual or single AMD CPU configuration
        * Supports socket AM2+ and socket F CPU
        * Allowing maximum of four physical PCI-E x16 slots at x8 lanes bandwidth or 2 PCI-E x16 slots at maximum bandwidth (16x-16x or 8x-8x-8x-8x CrossFire)
        * Discrete PCI-E x4 slot
        * Providing a total of 52 PCI-E lanes [4], 41 lanes in Northbridge
        * Two to four cards CrossFire, with reported 2.6 times of performance than single card
        * Support of HyperTransport 3.0
        * Support for HTX slots
        * Support of PCI-E 2.0
        * Supports Dual Gigabit Ethernet, and teaming option
        * Discrete chipset cache memory of at least 16 KB to reduce the latencies and increase the bandwidth
        * Reference board codenamed "Wahoo" for dual-processor (Quad FX) reference design board with three physical PCI-E x16 slots, and "HammerHead" for single socket reference design board with four physical PCI-E x16 slots, also notable was the reference boards includes two ATA ports and only four SATA 3.0 Gbit/s ports (as being paired with SB600 southbridge), but the final product with SB700 southbridge (see below) should support up to six.
        * Northbridge runs at 3 W when idle, and maximum 10 W under load

http://en.wikipedia.org/wiki/AMD_700_chipset_series [wikipedia.org]

Re:Still FSB and dual dual-core (4, Informative)

CajunArson (465943) | more than 6 years ago | (#20813685)

Every single review I saw of the 4x4 had it losing to Intel quad-cores using the "crippled" FSB. HyperTransport is great for 4-socket+ systems, which is why Intel is going to a point-to-point interconnect next year. However, on smaller systems like desktops and up to 2-socket servers, HyperTransport's benefits are much less clear. For example, when Anandtech did the initial K10 benchmarks, it turned out that it took the K10 about 76ns to transfer data between any 2 cores on the chip using its shared level 3 cache (which is faster than the HyperTransport link used in the 4x4).

However, the more interesting number was that it took Intel's FSB 77ns to transfer data between the two dies, and if the data were only going between cores on the same die, that time was only 26ns. So the upshot was that the worst-case penalty for Intel's data latency was less than 2%, while the better case (which is not too hard to achieve) gave Intel a better than 50% reduction in data latency. If you want to talk about 4-socket+ systems then HyperTransport is a winner, but on a desktop I wouldn't obsess over it too much.
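
Plugging in the numbers quoted above (all taken from the post, not fresh measurements) shows the worst case is about a 1.3% penalty and the same-die case is roughly a 66% reduction, so "50%" is if anything conservative:

```python
k10_l3_ns = 76       # K10 core-to-core via shared L3 (as quoted)
intel_cross_ns = 77  # Intel FSB, core on one die to core on the other
intel_same_ns = 26   # Intel, cores on the same die

worst_case_penalty = (intel_cross_ns - k10_l3_ns) / k10_l3_ns  # vs. K10
same_die_reduction = 1 - intel_same_ns / k10_l3_ns             # vs. K10

print(f"worst case: +{worst_case_penalty:.1%}, same die: -{same_die_reduction:.1%}")
# prints: worst case: +1.3%, same die: -65.8%
```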

Re:Still FSB and dual dual-core (1)

Joe The Dragon (967727) | more than 6 years ago | (#20813911)

But Intel's Skulltrail is likely to be a poor gaming system: FB-DIMMs and a server chipset, plus an NVIDIA chipset running over a PCI-E 1.1 x16 bus split into two x16 (1.1) slots.

Re:Still FSB and dual dual-core (2, Interesting)

RightSaidFred99 (874576) | more than 6 years ago | (#20814045)

Skulltrail and 4x4 are for bragging rights only - pretty much nobody will buy either except a few uber-rich people who don't care about money. That said - the numbers will belie your impression. Skulltrail will score almost identically to 4x4 because it's all about SLI/Crossfire. In terms of raw computing power, Skulltrail will be superior to Barcelona simply because Barcelona, which is a good core, will only ship at 2.0GHz or _maybe_ 2.5GHz this year.

This really isn't a good time to be an AMD fanboy, I'm afraid - not like a few years ago when their products were better in pretty much every way than Intel's.

Re:Still FSB and dual dual-core (0)

Anonymous Coward | more than 6 years ago | (#20816971)

This really isn't a good time to be an AMD fanboy, I'm afraid

Especially if you are as illiterate as "Joe The Dragon". Seriously, don't his posts look like they were typed by a 6-year-old throwing a tantrum? I'm amazed that you responded so rationally.

Re:Still FSB and dual dual-core (1)

ZachPruckowski (918562) | more than 6 years ago | (#20816501)

Skulltrail isn't a gaming system, it's a dual-socket graphics workstation. Intel would love to up-sell some idiot on it as a gaming station, but barring exotic set-ups (a personal Counter-Strike server in your basement), a dual-socket system is a workstation system. 2x quad-core processors with 2-4 graphics cards and 4GB+ of RAM is hitting $2500-3000.

Most of the work on a modern video game is in the video card. If you have a quad-core processor at >2GHz, you have the processor requirements for games handled for the next 2-3 years. And as you point out elsewhere, quad-SLI or quad-Crossfire is basically useless, as I'm paying 4x the price to get 2.6x the performance. So really, the only thing that's a "good" gaming platform is a single-socket SLI or Crossfire platform with DDR2, because that's the only way to build a performance system for under $2500, and in that field, Intel's Wolfdale and new high-end Motherboard are pretty solid.

Re:Still FSB and dual dual-core (1)

Junks Jerzey (54586) | more than 6 years ago | (#20822433)

Most of the work on a modern video game is in the video card.

What are you basing that on? In my experience (writing commercial video games) it isn't true.

Re:Still FSB and dual dual-core (1)

ZachPruckowski (918562) | more than 6 years ago | (#20822735)

What I meant to say was that the bottleneck is in the video card (I wrote that kind of fast). Even a sub-$150 processor in a new system will not be the bottleneck, whereas increasing video-card power is very helpful.

Yep (1)

Sycraft-fu (314770) | more than 6 years ago | (#20814003)

All the time on Slashdot I see people bag on Intel's "double double" design, as I like to call it. Ok, it's not 4 cores on one die, but why should I care? I've got a quad core at work and the thing is smoking fast. Works great: I can run two VMs at the same time, have another intensive process running on the host, and still have a responsive system. The processor gets the job done, despite its theoretical inferiority.

What Intel seems to think, and what my admittedly limited testing seems to bear out, is that you can double up on your cores and it works fine for normal usage. They did it with the Pentium D (2 single-core dies) and now with the Core 2s. Perhaps we'll see more of it: two 4-core sets to make an 8-core. It seems to work well in offering more cores per package sooner and at a lower cost, while still giving good performance.

I'm sure there are cases where it doesn't work so well, but unless those are the majority, I don't see the problem.

Re:Yep (2, Informative)

Chirs (87576) | more than 6 years ago | (#20815905)

I suspect it's just the principle of the thing. Intel is calling it a quad-core, but us techno-types know that it isn't. AMD's really is quad-core.

There is a perception that AMD's solution is more elegant and, regardless of benchmarks, is somehow "better". Intel's design is a "throw cache at the problem" sort of solution -- but it works for most normal usage.

I suspect that many people would like to see what Intel could do if they got off their seats and really did something original... like, if they can do this well with half-assed lash-ups, how good could their cpus be if they actually did some novel *design*!?!

Re:Yep (1)

Sycraft-fu (314770) | more than 6 years ago | (#20816275)

Ummm, you are talking, but I don't think you understand what you are talking about. Intel isn't doing novel designs? Really? You mean like the Core 2? The same Core 2 that is blazing fast, very energy-efficient, cheap, and so on?

Ya, about that.

I think what it really is, is that AMD zealots are pissed off. Intel has been really putting the screws to AMD lately. For most people, this is nothing but good. We've got no stake in who makes our hardware, and it's great to see companies doing everything they can to out-awesome each other. However, for AMD zealots it is horrible, as their chosen company isn't doing well. There's more than a few of those on /., since AMD is "the little guy" and there's a bunch of contrarian types that hang around here.

What Intel is doing is perfectly sensible. It allows them to produce processors for a better price. I'm sure they could do a quad core on one die, I'm sure they could do more than that. However being able to do it, and being able to mass produce it cheaply are two different things.

The Core 2 is an excellent processor because it is fast, cheap, readily available and so on. Nobody outside of pedants and zealots cares if it is done "right" they care if it works well.

Re:Yep (1)

dfghjk (711126) | more than 6 years ago | (#20819287)

BS. 4 cores on a processor is quad-core. You "techno-types" think you can define terms to suit your prejudices. It doesn't matter at all how many dies are inside the part.

There may be a "perception" that AMD's solutions are elegant and Intel's are not, but that's just retarded thinking among those who don't know any better. If AMD's engineering is so much better then why can't they keep up? What matters is what can be provided at what cost and in what timeframe. Intel has been innovative because it isn't tied down to what armchair, self-important geeks think is the right thing to do. Intel has real engineers working on its parts.

Since when hasn't Intel done something original? If anything, Intel has suffered from too much original thinking.

Re:Yep (2, Interesting)

therufus (677843) | more than 6 years ago | (#20819423)

LOL, mod parent up funny!!!

Wait... you mean that wasn't sarcasm?

Look, to start with I'm an AMD fanboy (I guess), mainly due to the fact that they call a spade a spade and don't lie to their (marketing zombie) customers about what their chips actually are. Intel are a marketing company first, a CPU manufacturer second. If you want to believe that the Q-series CPUs from Intel are actually quad core, you can take your ill-informed self to your nearest retailer and buy your double-dual-core CPU with your hard-earned cash. That's what your master... I mean... Intel want you to do.

But 4 cores on the one chip, a quad core does not make!

As long as data has to jump on the bus and take a trip down pipeline lane in order to see the neighbour next door, it's NOT A QUAD CORE! If you were to classify it, it's a 4-core CPU. With AMD's design, all 4 cores have equal access to data and can split and share properly.

There are even some applications out there that run SLOWER on an Intel 'quad core' CPU due to the constant bus hopping going on.

Why can't AMD 'keep up' as you say? Well, it's simple really. Because Intel market the crap out of an average product, they sell more units based on lying to and deceiving customers. Not to mention the kickbacks and underhand tactics with the tier-one OEMs (Dell, HP, etc). Sell more CPUs and you'll get more money. AMD have taken the noble road. They're suffering for it now, but they'll be on top again, just as they were with the first 1000MHz CPU (beat Intel to that cherry) and the first 64-bit desktop CPU (oops, Intel lucked out again). Come to think of it, the Athlon X2 was out before the Pentium D if I recall correctly (that seems to be the trifecta).

(sarcasm)Oh yea, AMD sure can't think things up before Intel.(/sarcasm)

Re:Yep (0)

Anonymous Coward | more than 6 years ago | (#20820341)

Wikipedia lists the following release dates:

Pentium D: May 25th, 2005
Athlon 64 X2: May 31st, 2005

The difference was that AMD had been working on the X2 for a long time, while Intel did a last-minute rush job once they found out about the X2 coming out, but Intel WAS FIRST.

Now, I was a pretty big fan of AMD back in the AMD64 heyday, but then Intel came back on top, and guess what I'm running in my main rig?

Back in the Pentium D days I was the same as you, criticizing Intel for its FSB glue-job, crappy design, etc. However, if I actually used my PC to some degree today, you bet I would have a Q6600 in there. Truth is that Conroe (and its derivatives) handles the extra latency a lot better than the piece of crap that NetBurst was. IIRC memory latency on the C2D is lower than on an AMD 64 X2, mainly due to very clever prefetching and memory handling.

As far as AMD calling a spade a spade, I'd like to see what they have to say for themselves in the whole current Radeon debacle, where the only way their cards can even dent a stick of butter (a Dutch proverb -- "being able to dent a stick of butter" -- that doesn't translate very well, now that I think about it) is to MELT their way through it with 200+ watts of heat.

Compared to NetBurst, AMD did wonderfully with the K8 and the X2; it was an elegant architecture and performed well. But the C2D performs better and has a wonderful CPU architecture (the platform architecture is a bit more meh) with loads of neat little features optimizing performance way past the K8.

You are probably just disappointed with Barcelona and can't face that yet; instead you blame Intel and refuse to see that that is where the performance is at.

It's okay, you don't have to get rid of your AMD CPUs. Intel doesn't require your house to be AMD-free before you can have one of their CPUs. My laptop and several secondary machines are still AMD, but my main rig, where the performance counts, is running Intel.

Re:Yep (1)

MojoStan (776183) | more than 6 years ago | (#20819639)

What Intel seems to think, and what my admittedly limited testing seems to bear out, is that you can double up on your cores and it works fine for normal usage. They did it with the Pentium D (two single cores) and now with the Core 2s. Perhaps we'll see more of it: two 4-core sets to make an 8-core. It seems to work well as a way to offer more cores on a package sooner and at a lower cost, and still give good performance.
FYI, Intel will break this "two dies on one package" pattern with Nehalem, the 45nm successor to the current Core 2 architecture. Intel's first 8-core CPUs will actually have all 8 cores on one die. Also, Nehalem will have an on-die memory controller and QuickPath Interconnect (a HyperTransport-like system interconnect).

Anandtech has a nice write-up of Intel's Nehalem presentation at IDF: "Nehalem: Single die, 8-cores, 731M transistors, 16 threads, memory controller, graphics, amazing." [anandtech.com]

I agree that Intel's "two dies on one package" strategy worked great for the Core 2 architecture, but it seemed pretty lousy when they did this for the Netburst architecture (Pentium D). I guess the superior per-core performance of Core 2 overcomes the "messy hack" of putting two dies on one package.

Re:Still FSB and dual dual-core (0)

Anonymous Coward | more than 6 years ago | (#20815935)

Maybe this is why Intel's evangelist was recently saying that Linux should give more control over which processes run on which CPU cores. As it is now, where cores are assigned on a 'fair' basis, AMD chips will look really good; if you could tweak the OS to run threads from the same process on the same core, or on the other on-die core, then Intel chips would look better.

It's annoying when 'defective' processor designs are covered up in software with hacky schedulers and magic tricks. Whoever wins in the market must have the best technology I guess...
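For reference, Linux does already expose manual control over core placement from userspace via the sched_setaffinity syscall (which is what `taskset` wraps); what the comment is really asking for is a smarter default scheduler policy, not a new knob. A minimal, Linux-only sketch using Python's standard-library wrapper:

```python
import os

# Query the set of CPUs this process is currently allowed to run on.
# pid 0 means "the calling process".
allowed = os.sched_getaffinity(0)
print("allowed CPUs:", sorted(allowed))

# Pin the process to a single CPU from that set, so the scheduler
# keeps it (and any threads it spawns) on one core.
one_cpu = min(allowed)
os.sched_setaffinity(0, {one_cpu})
print("now pinned to:", sorted(os.sched_getaffinity(0)))

# Restore the original mask.
os.sched_setaffinity(0, allowed)
```

The same effect from a shell is `taskset -c 0 ./myprog`; neither changes the scheduler's default 'fair' spreading for any other process.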

Re:Still FSB and dual dual-core (1, Insightful)

Anonymous Coward | more than 6 years ago | (#20813735)

Kind of funny you mention that, considering that Intel's "archaic" FSB and "glued together" quad core greatly outperforms AMD's Barcelona in virtually every meaningful benchmark (and no, FP_Rate is not a meaningful benchmark).

Whether or not you like it (I don't), AMD dropped the ball with K10/Barcelona.

Re:Still FSB and dual dual-core (0)

Anonymous Coward | more than 6 years ago | (#20814525)

Kinda funny to see posts like this--let's call it--"dreaming." Until AMD actually releases something for the desktop/gamer market that actually competes with Intel in the high-end real-world benchmarks, posts that include phrases like "The true AMD quad-cores may blow intel away the desktop ones" should be modded either Funny or Troll. Or if the mods have any pity for fanboys, they could use "Overrated" I suppose.

Re:Still FSB and dual dual-core (2, Interesting)

TeknoHog (164938) | more than 6 years ago | (#20816543)

I don't get this alleged superiority of "true" quad-cores vs. dual dual. I used to think that two discrete CPUs should outperform a dual-core, because the latter must share the external communication channels, other things being equal. Now I'm not so sure of it, because of things like shared cache and hypertransport, which may improve performance in certain situations, but it's not obvious. The difference between a dual dual and a quad is much more subtle. Would you prefer a "true" octal-core to your dual-quad contraption?

Re:Still FSB and dual dual-core (1)

Agripa (139780) | more than 6 years ago | (#20823547)

The devil is in the details.

Integration allows lower latencies and higher bandwidths at the expense of die size and perhaps packaging cost. The advantages of Intel's dual die quad core include improved yields because of smaller dies and larger cache at the cost of splitting the last level of cache between each pair of cores and increased packaging costs. AMD's quad core shares the entire last level of cache between the cores and has lower latency because of the on die memory controller but the cache is smaller.

Two discrete CPUs probably would outperform a dual core if all other things were equal, but that is unlikely to be the case. The former could have twice the total die size and 3 or 4 times as much cache to make up for the lack of integration, but die area is expensive, never mind the dual-socket motherboard.

Re:Still FSB and dual dual-core (1)

StikyPad (445176) | more than 6 years ago | (#20817109)

Shh... you had me at "Skulltrail."

Honestly, who mods this stuff up? Run-on sentences, parroted market-speak, and theoretical performance figures... the only thing missing is a bad car analogy.

Re:Still FSB and dual dual-core (1)

Paperweight (865007) | more than 6 years ago | (#20819367)

I swear that was some kind of computer-generated reply.

Re:Still FSB and dual dual-core (1)

smash (1351) | more than 6 years ago | (#20820381)

Blah blah blah... all I heard was "AMD isn't shipping anything decent yet".

I thought it was the internal chip specs leaked.. (3, Insightful)

Marcion (876801) | more than 6 years ago | (#20813579)

... but it turns out to be some pricing details.

Nothing to see here, move right along.

six month head start over AMD? (0)

Anonymous Coward | more than 6 years ago | (#20813597)

So this makes them about six months ahead of AMD on 45nm?

Or do you think this will make AMD push its commercial ship date to very early Q1 2008?

Just trying to decide if I should break away from AMD for the holidays.

My goal is not really more speed but to get my power bill under $1/day (for just my computer, currently overclocked on 90nm and sucking tons of power).

(and I wonder if 32nm is going to be a huge wall for both of them)

failzords!! (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#20813633)

One or the other

Was This An Accident? (2, Interesting)

Nom du Keyboard (633989) | more than 6 years ago | (#20813707)

Was this an accident, or FUD to put the brakes on AMD sales prior to the official release of the Intel processors? This way Intel get the news out without collecting the grief that comes from pre-announcing their next moves themselves.

Mod Parent Informative (At least) (2, Insightful)

mpapet (761907) | more than 6 years ago | (#20813939)

This kind of summary is a pet peeve of mine. "Top Secret Whatever is Leaked!" headlines like this are advertising disguised as news.

Given the end-of-year release of the product, it's in sales, marketing and mass production hands now so there's nothing secret about it.

As a general rule, if something is "leaked" 3 months out, then it's advertising disguised as news, because the product is ready for market and sales reps are out placing & promoting it.

Parent is right on.

Re:Was This An Accident? (1)

hchaos (683337) | more than 6 years ago | (#20814113)

No, it's a press release.

here we go again (2, Funny)

Connie_Lingus (317691) | more than 6 years ago | (#20813713)

Jimbob's Corollary to Moore's Law...

Every 18 months I will become ever more numbed by the announcement of denser and denser chips.

Re:here we go again (3, Informative)

644bd346996 (1012333) | more than 6 years ago | (#20814193)

Not likely. Intel is currently developing their 32nm technology, and IBM has tested 29.9nm lithography. That's only around 600 times the Bohr radius (radius of a hydrogen atom). Within the next 10 years or so, we will have reached the fundamental limits on the size of a silicon transistor, and once those chips are brought to market, that's it. If Moore's law continues at all, it will be applied to something like quantum computers, not semiconductors.

Of course, there are many parts of a CPU that traditionally don't scale as well as the basic transistor, so with continued work, we can probably keep shrinking CPUs. But we'll be doing it in small increments with increasing marginal costs, not by the factors of 2 we've been seeing for the past 20 years.
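The "around 600 times the Bohr radius" figure above is easy to sanity-check, taking the Bohr radius as roughly 0.0529 nm:

```python
# Rough sanity check of the "around 600 times the Bohr radius" claim.
bohr_radius_nm = 0.0529   # Bohr radius, ~0.0529 nm
feature_nm = 29.9         # IBM's demonstrated lithography feature size

ratio = feature_nm / bohr_radius_nm
print(round(ratio))       # ~565, i.e. "around 600" as the comment says
```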

Re:here we go again (1)

Jimmy_B (129296) | more than 6 years ago | (#20816959)

Not likely. Intel is currently developing their 32nm technology, and IBM has tested 29.9nm lithography. That's only around 600 times the Bohr radius (radius of a hydrogen atom). Within the next 10 years or so, we will have reached the fundamental limits on the size of a silicon transistor, and once those chips are brought to market, that's it. If Moore's law continues at all, it will be applied to something like quantum computers, not semiconductors.
Actually, Moore's Law doesn't care about fundamental physical limits, because Moore's law is a prediction of *price* per transistor, not density. If density stops increasing, then Moore's Law predicts that either the chips will get exponentially bigger for the same price, or prices will fall exponentially. Either one is good for us consumers.

Re:here we go again (1)

Kjella (173770) | more than 6 years ago | (#20820071)

Or more specialized CPUs. For a great many years, general-purpose CPUs improved so fast that specialization seemed pointless -- any gains would be eaten up by the progress in general CPUs. I think there's still a lot of room for improvement once it's clear die shrinks won't get us further.

Top Secret Information (1)

ScubaS (600042) | more than 6 years ago | (#20813757)

lol @ "top secret"

Re:Top Secret Information (0)

Anonymous Coward | more than 6 years ago | (#20814801)

gotta love em intel slashvertisements...

I sort of don't care (1, Interesting)

Anonymous Coward | more than 6 years ago | (#20813827)

Most of the fossil fuel/greenhouse warming impact of a computer comes from the manufacturing process. http://ieeexplore.ieee.org/Xplore/login.jsp?url=/iel5/9100/28876/01299692.pdf [ieee.org] That becomes more dramatic when one takes into account that the internet uses something like 6% of our electricity. I don't know the exact figure, but if computers use that much electricity in operation, manufacturing them must spew a whole pile of greenhouse gases too.

My current motherboard is more than five years old. It runs Ubuntu Feisty Fawn fast enough to keep me from grumbling (I don't play games).

I realize that Vista needs some serious horsepower but I'm avoiding it. Lots of people and businesses are doing the same. Have we reached the place where most people and businesses don't have to upgrade every couple of years? Will environmental concerns put a brake on new computer sales?

Re:I sort of don't care (1)

Charcharodon (611187) | more than 6 years ago | (#20815867)

Most people never upgraded every couple of years anyway, but we are rapidly getting to the point where typical PC uses (web/music/video/email/images) can be run well on the most modest of machines, whether they have Linux, Windows, or OSX on them. That will have a bigger effect on sales than any environmental concerns.

That link you posted makes some flawed assumptions anyway. You buy a PC to do something, and that is part of the cost of it. The tree huggers pick something new to pull their hair out over every couple of years, and the PC is their current target. Think about what a PC replaces: a multitude of things that took energy and created pollution to make. Here is the short list of what I no longer have in my home or no longer go out in public to do. I bet you a beer that a single PC is cheaper to make and operate than all these things, and I'm sure there are many more that I missed.

Things I no longer have in my home:

Photography chemical dark room
CD player
DVD player
telephone
TV
Stereo
newspapers
magazines
encyclopedias

I no longer make trips to...
the store (online purchasing)
the bank
the post office
government offices for info/paper work
school (online college education)
places I'd otherwise get lost along the way (map directions)
distant locations when the weather there is bad

...and so on.

There is one other thing I no longer pay much for any more, and that is a winter heating bill. The combination of really good insulation and two PCs running 24/7 doing cancer research for Rosetta@home (BOINC) keeps my house nice and warm and hopefully helps find cures for some really horrible diseases. Not too shabby for something that uses half as much electricity as a fridge -- not that a fridge even uses all that much in the first place.

It's not what things cost, but what you get in return.

Would you care to answer the questions? (0)

Anonymous Coward | more than 6 years ago | (#20817263)

I agree with what you say. The question isn't what the computer does better than many other things. The question is whether computers have begun to over-serve the market. In other words, are the computers now on our desks so good that we will not need to replace them for a very long time?

Developing new chips is very expensive. If the market for the fastest chips isn't there, Intel has a problem. Other chips are becoming more able. Other devices are doing what only computers used to be able to do. For instance, my wife's Blackberry can do many of the tasks you list.

I wonder what the market will do and I wonder how long Intel can keep up their pace of development.

Re:Would you care to answer the questions? (1)

Charcharodon (611187) | more than 6 years ago | (#20826283)

For the average user, I'd say yes, the market is and has been over-served for more than a couple of years now. Most of the big PC builders are shrinking in size. I agree PCs are going the way of the standard appliance: something to be replaced when it breaks, but not before.

Trend-wise, for the average user, things will slow to a crawl once the PC-on-a-chip stage of development is reached, and we are getting close. But for the enthusiast it'll be some time before enough is enough. I doubt gamers will stop running out every year to get the latest and greatest until something along the lines of the Matrix or a holodeck is available (or girls become a lot easier to talk to/date).

Re:I sort of don't care (1)

petermgreen (876956) | more than 6 years ago | (#20822567)

Most people never upgraded every couple of years anyway, but we are rapidly getting to the point where typical PC uses (web/music/video/email/images) can be run well on the most modest of machines, whether they have Linux, Windows, or OSX on them. That will have a bigger effect on sales than any environmental concerns.
People have been saying this for years, but in my experience it hasn't held up, for a few reasons.

1: Software bloat is ever increasing and shows no real sign of stopping. Sure, you can stick with old software to a point, but there are issues with security patches and with remote content getting heavier (the AJAX revolution), or forced upgrades (quite common with network-dependent software), or people get their machine infested with spyware and have lost the original disks/keys for all their suitable software (especially an issue now that product activation has become so common) and decide to buy a new machine that can handle modern software.
2: It's not in computer stores' interest to actually sell their bargain-basement machines; they are likely to be the lowest-margin machines in the shop, so they tend to push people to buy more than they actually need.
3: New apps come along; a decade ago people would have laughed if you had mentioned video editing on a home PC.
4: Since we have become CPU-rich and bandwidth-poor, lots of CPU power is invested in compression, the latest example being video (compare, say, DivX to MPEG-2). Online HDTV will probably take this to new heights.

Re:I sort of don't care (1)

Charcharodon (611187) | more than 6 years ago | (#20825721)

I specifically remember a friend bitching and moaning over needing 1286mb of RAM for Win95 to run well, and the fact that he couldn't find a replacement copy when he lost one of the 15 floppies needed to install it. The only versions available were on CD-ROM, which he thought was a pointless device to buy.

People have been crying over software bloat for the last decade. I for one welcome all the bloat; I'll take that over command-line, monochrome-screened computing any day. Frankly, until I can walk through a door and say to no one in particular, "Computer, begin program and have a holodeck simulate my entire house," I don't think bloat will ever slow down, nor should it.

Re:I sort of don't care (1)

ianare (1132971) | more than 6 years ago | (#20817167)

Have we reached the place where most people and businesses don't have to upgrade every couple of years?
It depends. If you don't want to upgrade software, then we reached that point long ago (minus games, of course). However, each new version of most software makes it more bloated, and the hardware requirements keep going up. If you upgrade software, then you need to upgrade hardware every couple of years.

Will environmental concerns put a brake on new computer sales?
They might if regulation is passed to increase the cost of computer parts to reflect their carbon output during manufacture. Given that they are manufactured in China and the companies are US based, both of which are not very eager for this type of regulation (to say the least), I doubt this will happen anytime soon.

Re:I sort of don't care (1)

drsmithy (35869) | more than 6 years ago | (#20818803)

I realize that Vista needs some serious horsepower but I'm avoiding it.

No, it doesn't. For most people's needs, Vista runs quite adequately on modestly upgraded 6-7 year old hardware. If you've got a 1+ Ghz CPU (really dictated by your applications (or games)), a gig of RAM (more helps, but is not necessary) and a video card less than 3 years old (for Aero/video acceleration), Vista will run fine.

Heck, even for an "optimal Vista experience", the hardware required hasn't been "serious horsepower" for years.

Lots of people and businesses are doing the same. Have we reached the place where most people and businesses don't have to upgrade every couple of years? Will environmental concerns put a brake on new computer sales?

We reached that point ca. 2000 (even earlier for people whose web-browsing doesn't involve Flash). Most people don't do anything that needs large amounts of CPU power (more than a ca. 1Ghz P3). They benefit most from RAM and (to a lesser extent) video card upgrades.

PC gamers are a niche market. Heavy multitaskers are a niche market. Business users doing anything more demanding than email, simple web browsing, word processing, spreadsheets and presentations are a niche market. Probably the most CPU intensive task the average PC does today is watch Youtube videos and play Flash games.

Re:I sort of don't care (1)

WuphonsReach (684551) | more than 6 years ago | (#20824397)

We reached that point ca. 2000 (even earlier for people whose web-browsing doesn't involve Flash). Most people don't do anything that needs large amounts of CPU power (more than a ca. 1Ghz P3). They benefit most from RAM and (to a lesser extent) video card upgrades.

Yes, thereabouts (I'd personally peg it at 2001-2002). Computers stopped getting twice as fast every 12-15 months right around that time. We stopped needing to replace machines every 3 years (a 386 was a lot faster than a 286, and the 486 was a big step up as well) because a 3-year-old machine was no longer 1/4 to 1/8 the speed of a new one.

The machines that we've been ordering for the office over the past year are dual-core, 2GB RAM, RAID1 hard drives running WinXP. We fully expect them to last at least 7 years (and possibly 10 years) before they need to be replaced. If we do upgrade them it will be to add one more gigabyte of RAM (to get to 3GB) and to add 22" 1680x1050 LCD displays.

Of course, that assumes that the motherboards don't die, or capacitors leak... Everything else can be easily replaced.

(We're buying whatever dual-core CPU we can get for $100-$120. Maybe spending $200 for a power-user workstation. So I only have moderate interest in the latest and flashiest. Mostly it keeps driving more and more powerful chips down into our price range.)

Re:I sort of don't care (1)

drsmithy (35869) | more than 6 years ago | (#20831507)

Yes, thereabouts (I'd personally peg it at 2001-2002). Computers stopped getting twice as fast every 12-15 months right around that time. We stopped needing to replace machines every 3 years (a 386 was a lot faster than a 286, and the 486 was a big step up as well) because a 3-year-old machine was no longer 1/4 to 1/8 the speed of a new one.

Personally, I think it has more to do with software maturity. Until ca. 2000, software - particularly Windows - was increasing in capability (and subsequently hardware requirements) in significant increments. That basically stopped in 2000/2001 - even Vista's only real hardware requirement above and beyond what was available in 2000/2001 is a $30 video card.

The story on Macintosh is a bit different, of course, because of relatively less powerful/more expensive hardware (for most of the last decade or so) and with OS X, a relatively more resource-intensive OS (at least until the release of Vista). Apple were ~5 years late to their "next generation OS" after a few false starts and subsequently leapfrogged Windows in a few areas, but both Vista and OS X have ended up in basically the same spot as of 2007.

The machines that we've been ordering for the office over the past year are dual-core, 2GB RAM, RAID1 hard drives running WinXP. We fully expect them to last at least 7 years (and possibly 10 years) before they need to be replaced. If we do upgrade them it will be to add one more gigabyte of RAM (to get to 3GB) and to add 22" 1680x1050 LCD displays.

Sounds quite reasonable. It's likely hardware failures will kill these machines long before lack of usefulness does.

WWW.SAVELOUIS.COM (0, Offtopic)

Porn Perez (1146003) | more than 6 years ago | (#20813953)

YOOR VIEWS: WWW.SAVELOUIS.COM

Leopard? (2, Insightful)

Anonymous Coward | more than 6 years ago | (#20814259)

I wonder if the processors will be announced in the form of spanking new Mac Pro towers to coincide with the release of Apple's Leopard? It's the kind of big glitzy media event that Apple love and Intel would love to be included in.

Re:Leopard? (0)

Anonymous Coward | more than 6 years ago | (#20816291)

If there's a "Xeon" variant of this chip, then I wouldn't be too surprised at that.

Flawed Analysis (2, Informative)

paulsnx2 (453081) | more than 6 years ago | (#20814379)

In the article, the author scaled the performance based on the clock speed each time a comparison was made between chips with different clock speeds. This was mostly done in favor of the new Intel chips.

The problem is right there in the Author's analysis. For example:

"If you extrapolate the data, then the Yorkfield processor is really about 12-21% faster than the Kentsfield at the same clock speed. This is almost entirely due to the 50% larger cache in the Yorkfield processor. The very large 81% boost in DivX 6.6.1 is again mostly due to SSE4-optimized code in DivX."

But But But!!!! Changing the clock speed doesn't make the cache any bigger! You can't then assume a linear relationship between performance and clockspeed if the difference is primarily how long you are going to have to wait to fill the cache!

The article isn't too flawed. They give actual results. But do yourself a favor as you read the article and completely discount any "extrapolation" done by the author to get "really" numbers. When comparing processors, the "really" numbers are always the cold hard facts, not the "I wish" numbers generated by speculating about what would happen if you changed the processors in some way.
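To make the objection concrete: the author's "really" numbers come from simple linear clock scaling, i.e. scaled_score = measured_score * (target_clock / measured_clock). A toy sketch of that extrapolation (the benchmark numbers here are hypothetical, purely for illustration):

```python
def clock_scaled(score, measured_ghz, target_ghz):
    """Naive linear extrapolation: assumes benchmark performance is
    directly proportional to core clock, ignoring fixed costs such as
    memory latency and cache-fill time that do NOT scale with clock."""
    return score * (target_ghz / measured_ghz)

# Hypothetical benchmark score of 100 measured at 3.0 GHz,
# extrapolated to 3.33 GHz:
print(round(clock_scaled(100.0, 3.0, 3.33), 1))  # predicts ~111
```

The formula credits memory latency and cache behavior with speeding up alongside the core clock, which they don't, so it misattributes differences whenever cache size (not clock) is what actually changed between chips.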

mobile processors? (1)

datapharmer (1099455) | more than 6 years ago | (#20814763)

Okay, 45nm desktop processors, great. Any idea when the mobile version will be available? That is what I'm really interested in.

WTF with the names already? (1)

pseudorand (603231) | more than 6 years ago | (#20814883)

Is anyone else simply baffled by the names Intel chooses for its processors? Back in the day, they were criticized for ?86 being confusing to non-nerds. Calling the 586 "Pentium" was a step in the right direction, but now they've completely hosed things again. Core 2? Is that a dual core? If so, then what the heck is a Core 2 Duo? Clearly it's not a quad-core, but the CPU from TFA IS a quad-core, even though it's still called Core "2". Is this some sort of a sick joke by the marketing department?

Re:WTF with the names already? (1)

Chlorus (1146335) | more than 6 years ago | (#20816141)

You also get such wonderful combinations as "Core 2 Solo", and bizarre words as "Tigerton". I heard there was an attempt by Intel to simplify their nomenclatures, but that got canceled. I found an article on Arstechnica concerning it: http://arstechnica.com/news.ars/post/20070808-intel-announces-plan-to-unify-product-naming-scheme.html [arstechnica.com] Of course, that proposed naming system made even less sense...

Re:WTF with the names already? (1)

Emetophobe (878584) | more than 6 years ago | (#20821997)

Intel originally made the Pentium, followed by the Pentium 2, Pentium 3 and then the Pentium 4. Later Intel made a whole new architecture and called it "Core". Intel then made a second version of the Core and called it the Core 2, just like how the successor to the original Pentium was named the Pentium 2...

I will agree that Intel's naming convention is pretty confusing for people that haven't read up on all their past chips.

Re:WTF with the names already? (1)

EvanED (569694) | more than 6 years ago | (#20816511)

Core 2? Is that a dual core? If so, than what the heck is a Core 2 Duo? Clearly it's not a quad-core, but the CPU from TFA, IS a quad-core, even though it's still called Core "2".

Hey, I'm SSHed into a dual Core 2 Quad at the moment.

How's that for a description of a machine? ;-)

Re:WTF with the names already? (1)

mdwh2 (535323) | more than 6 years ago | (#20818821)

If so, than what the heck is a Core 2 Duo? Clearly it's not a quad-core, but the CPU from TFA, IS a quad-core, even though it's still called Core "2".

"2" is the version, not the number of cores (like Pentium 2 vs Pentium).

Core 2 Duo is dual core - I can't see where the article talks about a Duo Quad core?

Re:WTF with the names already? (1)

bronney (638318) | more than 6 years ago | (#20819109)

Imagine you're just hired by Intel and your department is responsible for coming up with the processors' names. Imagine you took your desk when the Pentium Pro launched. How are you going to name the Slot 1 ones? Hexium... mm, doesn't sound good. They came out as Pentium 2s.

A few years later, you continue this trend: Pentium 3, 4. Now a new dude comes in and takes your job. His boss tells him to be "creative" and "please don't give me any Pentium 5s (or Vista 2s)". He HAS TO deliver something. What the hell, let's call it Core.

My point is, there's absolutely nothing personal about naming the chips. Even in naming operating systems, it's a matter of roles changing and the never-ending process of us dying. What name do you think the Intel chip will have in 50 years? It could be Intel "Elvis". \o/ Creative(TM)

64nm is all anyone should ever need (1)

Luxifer (725957) | more than 6 years ago | (#20815681)

and you can quote me on that.

Must be trying for raw speed (1)

asm2750 (1124425) | more than 6 years ago | (#20818759)

Wow, still using two dual-core dies to make a quad. I guess they are just using their raw speed and fab abilities to fight AMD. Crude, but it can be effective. Still, I don't use Intel chips often unless I'm buying a mobile device; something about the desktop motherboards for Intel products keeps turning me off them.

Re:Must be trying for raw speed (1)

smash (1351) | more than 6 years ago | (#20820369)

Dunno if I'd call it crude; I'd call it an effective way of reducing costs and increasing yields.

Being able to churn out Core 2 Duo dies for everything and just glue some together to make quads saves on fab costs, and hence they can provide quad core at a much lower price point.

This isn't exactly a "leak" (1)

Super Jamie (779597) | more than 6 years ago | (#20818781)

How is Intel following their roadmap for process downsizing a "leak" worthy of news? I'll leak you some more things right now - they're looking to go to 32nm in another 2 years, and further to ~20nm 2 years after that.

Re:This isn't exactly a "leak" (1)

babyshiori (1091815) | more than 6 years ago | (#20821509)

Leaked as in most people don't even know the pricing or the benchmarks of Yorkfield and Wolfdale yet.

Leak is defined as "to become known unintentionally (usually fol. by out): The news leaked out"

I bet Intel doesn't want the details of these processors known so early; this was probably leaked from a roadmap presentation of some kind.

Better value than a moderately OCed Q6600 SLACR? (2, Interesting)

funkdancer (582069) | more than 6 years ago | (#20818985)

Just wondering if anyone here thinks the new CPUs will deliver better value than the Q6600 SLACR, which only costs around A$350 (US$315) and will easily reach 3GHz with virtually nil effort. Put a little bit of work into it and it will reach 3.4GHz, or even 3.6GHz with just a little more, on a good air cooler.

Whilst the running costs would be lower due to the lower energy usage, I'm just wondering if any of the new CPUs will come anywhere close to the absolutely fantastic performance/value currently represented by the SLACR.

I'm looking to buy a new CPU & motherboard for my Zalman HD160XT HTPC case in the next month or so. I already have a Q6600@3GHz in my self-built desktop (based on an Asus Blitz Formula in an Antec Nine Hundred w/2GB of RAM) and it is supreme in desktop usage with lots of apps running in Vista, totally outclassing the Core 2 Duo 2.67GHz WinXP desktop (IBM IntelliStation M Pro 9229, also 2GB of RAM) which I have at work.

Re:Better value than a moderately OCed Q6600 SLACR? (1)

smash (1351) | more than 6 years ago | (#20820277)

If you're overclocking the Q6600, then take into account the possibility of overclocking the new CPU... or the fact that when overclocking, any guarantee of stability is gone.

Re:Better value than a moderately OCed Q6600 SLACR? (1)

funkdancer (582069) | more than 6 years ago | (#20831175)

Stability testing is par for the course with any proper overclock...
I'm asking whether even better price/performance will be attained, as it looks like the new CPUs will be priced higher.
And yes, temperature-wise you'd think they'd be able to reach much higher speeds with the smaller die.

Re:Better value than a moderately OCed Q6600 SLACR? (1)

smash (1351) | more than 6 years ago | (#20833413)

You can do all the testing you like. If you run into a miscalculation or crash, and you're not running at the factory clock-rate, all bets are off.

Don't get me wrong, if you want the speed for gaming, go ahead. If the CPU malfunctions, who cares? If it's for business use, though, get the company to fork out the $$ for a vendor-supported solution. The risk of malfunction (not necessarily a crash, but perhaps miscalculated financial/scientific data, etc.) is just not worth it in a production environment, imho. Sure, that could happen at the stock clock-rate, but the risk will be far less.

Leaked? (1)

Godji (957148) | more than 6 years ago | (#20819319)

Leaked... yeah, right ;) Someone wants to taunt AMD, methinks.

The article is updated! (1)

lifehack (1165785) | more than 6 years ago | (#20822379)

Seems like Tech ARP just corrected the launch date from November 11th to November 12th. Their source said the 11th will be the date Intel sets the prices for the processors to be launched. :D

In addition, the author talked to Pat Gelsinger a few hours ago and got some confirmation and additional info.

"The November 12 launch will include server-grade processors like the quad-core Xeon code-named Harpertown (12 MB L2 cache, TDPs of 50W, 80W and 120W) and a dual-core Xeon code-named Wolfdale-DP (6 MB L2 cache, TDP of 40W, 55W and 80W)."

Wait new processors. (1)

Artemon12 (1166035) | more than 6 years ago | (#20828303)

Anxiously awaiting the new 45nm processors. I'll get one for myself right away.