
Intel Core I7 Launched, Nehalem and X58 Tested

CmdrTaco posted more than 5 years ago | from the time-to-go-shopping-again dept.


MojoKid writes "Today marks the official launch of Intel's new Core i7 processor, the most significant overhaul of Intel's core processor architecture since the release of their Core 2 design. As has been reported, the Core i7 is a major departure from Intel's aging Front Side Bus architecture, now replaced by Intel's QPI (QuickPath Interconnect) serial links. This bi-directional point-to-point connection (20 lanes per direction, 40 lanes total) provides a 6.4 GT/s transfer rate and scalability for future multi-socket designs as well. In addition, the Core i7 now has an integrated triple-channel memory controller offering over 3X the bandwidth of the previous Core 2 architecture with DDR3 system memory. Though the product is set to ship in volume later this month, the early benchmark numbers show Intel's new chip is markedly faster clock-for-clock than their previous-generation CPU and much faster than anything AMD currently has out."
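For a rough sense of what 6.4 GT/s means in bytes, here is a back-of-envelope sketch; the assumption that 16 of the 20 lanes carry 2 bytes of payload per transfer is mine, not from the submission:

# Back-of-envelope QPI bandwidth from the quoted transfer rate (assumptions noted inline)
transfers_per_s = 6.4e9        # 6.4 GT/s per QPI link
payload_bytes   = 2            # assumed: 16 data bits (2 bytes) per transfer, per direction
per_direction   = transfers_per_s * payload_bytes / 1e9
print(f"{per_direction:.1f} GB/s per direction, {2 * per_direction:.1f} GB/s combined")
# -> 12.8 GB/s per direction, 25.6 GB/s combined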


194 comments


Not out... (5, Insightful)

GenP (686381) | more than 5 years ago | (#25611821)

It's not out until I can buy one from Newegg.

Re:Not out... (1)

thornomad (1095985) | more than 5 years ago | (#25613051)

Exactly. Wasn't it in May [slashdot.org] that VIA announced the Nano ("Isaiah") processor?

I still don't see them on Newegg.

Re:Not out... (2, Funny)

Anonymous Coward | more than 5 years ago | (#25613077)

You didn't see the bundle offer for one of these with a copy of Duke Nukem Forever?

Re:Not out... (1)

MikeDirnt69 (1105185) | more than 5 years ago | (#25615541)

Yeah, I've heard that the soundtrack comes from G'n'R's Chinese Democracy.

Re:Not out... (4, Insightful)

betterunixthanunix (980855) | more than 5 years ago | (#25613633)

I would wait several months before buying from Newegg. This CPU will undoubtedly have some major errata, and you'll probably want to know about it before you go ahead and throw down hundreds of dollars. Personally, I'll be waiting until at least April before I even consider it to be a viable option.

Re:Not out... (1)

blair1q (305137) | more than 5 years ago | (#25614915)

I got my Yorkfield from a kit-builder long before Newegg actually had them in stock. Call around. Someone out there ordered 5 or 6 more than he had plans for.

Sweet! (4, Interesting)

symbolset (646467) | more than 5 years ago | (#25611841)

A little hot, but on time, in time for Christmas, and slamming the benchmarks. Hey, finally there's a system that can run Crysis with all the features turned on!

Maybe a price break on the LGA775 quad lineup now please?

Re:Sweet! (5, Informative)

Wintergr33n (1369379) | more than 5 years ago | (#25612361)

Funnily enough, a gaming performance review found not much difference running Crysis on the i7 (http://www.bit-tech.net/hardware/2008/11/03/intel-core-i7-920-945-965-review/4) and in fact worse performance for the brand-new Far Cry 2 (http://www.bit-tech.net/hardware/2008/11/03/intel-core-i7-920-945-965-review/5). It remains to be seen whether other new games show a similar effect...

Re:Sweet! (0, Redundant)

ThePhilips (752041) | more than 5 years ago | (#25612853)

Real workloads should never be confused with benchmarks.

Especially since it is well known that benchmarks are often optimized specifically for Intel CPUs.

I expect that games like Crysis and Far Cry stress RAM heavily, meaning they hit the memory bandwidth limit (e.g. physics data, textures, etc.) before they actually hit peak CPU performance (i.e. raw calculations).

Re:Sweet! (1)

ip_fired (730445) | more than 5 years ago | (#25613119)

Yes, but among the nifty new features of the Core i7 are the new interconnect and triple-channel RAM. It should have *a lot* of memory bandwidth.

Re:Sweet! (4, Interesting)

ThePhilips (752041) | more than 5 years ago | (#25613431)

But it doesn't magically increase RAM bandwidth.

The i7's memory interconnect would help applications that are not hand-crafted to maximize performance. And I expect that games like Crysis are already optimized through the nose to use every bit of available bandwidth.

Or to put it another way: unoptimized code would gain more from the i7 than highly optimized code, since in the former case the CPU has more opportunities to optimize memory accesses on its own and better fill the data bus.

But I could also be wrong, and the hand-crafted code of Crysis etc. simply cannot take advantage of the i7's features.

Re:Sweet! (4, Funny)

Missing_dc (1074809) | more than 5 years ago | (#25613883)

Or to put it another way: unoptimized code would gain more from the i7 than highly optimized code, since in the former case the CPU has more opportunities to optimize memory accesses on its own and better fill the data bus.

I see!! You mean Vista might actually run well on this processor??!!

Re:Sweet! (5, Funny)

MikeDirnt69 (1105185) | more than 5 years ago | (#25615589)

Run is too much for Vista... maybe a walk.

Re:Sweet! (0)

Anonymous Coward | more than 5 years ago | (#25616069)

But it doesn't magically increase RAM bandwidth.

The high-end i7 has almost triple the memory bandwidth of a Core 2 (48 GB/s vs. 12.8 GB/s) with DDR3 running at 2GHz.

The Tech Report has a pretty good description of the i7's new features.
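Roughly how those two figures fall out, as a sketch only; the 1600 MT/s front side bus used for the Core 2 side of the comparison is my assumption, not something stated above:

# Peak theoretical memory bandwidth behind the 48 GB/s vs. 12.8 GB/s figures above
ddr3_transfers = 2000e6                 # "DDR3 running at 2GHz" = 2000 MT/s
channel_bytes  = 8                      # each memory channel is 64 bits wide
i7_bw    = ddr3_transfers * 3 * channel_bytes / 1e9   # triple channel -> 48.0 GB/s
core2_bw = 1600e6 * channel_bytes / 1e9               # assumed 1600 MT/s FSB -> 12.8 GB/s
print(i7_bw, core2_bw)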

Re:Sweet! (1)

ThatDamnMurphyGuy (109869) | more than 5 years ago | (#25614075)

This isn't too shocking. With the move of the memory controller into the i7, there's extra latency that has to be programmed for/around in video drivers/cards. I suspect that once updated drivers/cards start flowing, the performance of these games on the i7 will climb back to the top of the pile.

Re:Sweet! (0)

Anonymous Coward | more than 5 years ago | (#25612631)

So now people finally get to find out how much this game really sucks?
Great! Maybe then those jerks at Crytek will stop turning out the same junk over and over..

Re:Sweet! (0)

Anonymous Coward | more than 5 years ago | (#25612849)

"over and over"? They only have 2 full length games.

Re:Sweet! (3, Informative)

MadnessASAP (1052274) | more than 5 years ago | (#25613485)

Three, if you count Far Cry, which was developed by Crytek but published by Ubisoft rather than EA. It's also worth pointing out that IMHO Far Cry was the better game.

Re:Sweet! (0)

Anonymous Coward | more than 5 years ago | (#25613599)

I was counting Far Cry.
Their only games are Far Cry, Crysis, and Crysis: Warhead (a short expansion). Far Cry 2 was done by Ubisoft.

Re:Sweet! (1)

TheSambassador (1134253) | more than 5 years ago | (#25613303)

Far Cry 2 isn't even related to Far Cry (1) or Crysis. It's made by different people, with a different storyline, running on a different engine. Even the gameplay is extremely different, and the setting is no longer a jungle but the plains of Africa.

Good job showing us that you've never actually played the games.

units (1)

dmbasso (1052166) | more than 5 years ago | (#25611859)

wow! 6.4 * 10^9 (giga) * 10^12 (tera), or 6.4 * 10^21 bytes per second!!! awesome!!!~

Re:units (2, Funny)

Anonymous Coward | more than 5 years ago | (#25611943)

It's GT/s, not GTB/s; there are no bytes. Just 6.4 * 10^21 per second. Or maybe 6.4 * 10^9 Teslas per second. Or 6.4 cars per second.

Re:units (0)

Anonymous Coward | more than 5 years ago | (#25612493)

that's no toon?

Re:units (1)

TheThiefMaster (992038) | more than 5 years ago | (#25612073)

T=Transfers maybe?

Please stop using the GT/s performance indicator. (5, Insightful)

ciderVisor (1318765) | more than 5 years ago | (#25611897)

It's not big and it's not clever. I like my bytes and bits, thank you very much.

Re:Please stop using the GT/s performance indicato (4, Insightful)

mdmkolbe (944892) | more than 5 years ago | (#25611941)

What is a GT/s? (Honest question, looking for an honest answer.)

Re:Please stop using the GT/s performance indicato (5, Informative)

dkf (304284) | more than 5 years ago | (#25611995)

What is a GT/s? (Honest question, looking for an honest answer.)

Giga-Transfers per second [tmworld.com] (or at least that's what google found).

Re:Please stop using the GT/s performance indicato (3, Funny)

Loibisch (964797) | more than 5 years ago | (#25612059)

What is a GT/s? (Honest question, looking for an honest answer.)

Damn, if you had been looking for a biased answer I'd have linked you to Wikipedia...

Re:Please stop using the GT/s performance indicato (5, Funny)

ciderVisor (1318765) | more than 5 years ago | (#25612107)

GoaT/se ?

Re:Please stop using the GT/s performance indicato (0)

Anonymous Coward | more than 5 years ago | (#25616195)

That's a shitload of bandwidth.

Re:Please stop using the GT/s performance indicato (2, Interesting)

Anpheus (908711) | more than 5 years ago | (#25614297)

But what the bus transmits isn't always measured in bits or bytes. Different buses send different quantities per transfer, and the Core i7 can feed those available today (PCI, PCI Express, etc.) with 6.4 billion transfers per second.

No bits or bytes anywhere to be seen.

Re:Please stop using the GT/s performance indicato (1)

thelexx (237096) | more than 5 years ago | (#25616337)

WTF?

Quantities of what then? Digital data breaks down to bits, regardless of how it's transported.

If what you say is literally true, then they have to be converting the digital to an analog signal or there's no data either.

Better explanation required.

Re:Please stop using the GT/s performance indicato (1)

Anpheus (908711) | more than 5 years ago | (#25616475)

In the case of PCI Express, Serial ATA, and a number of other technologies, it's an 8b/10b encoding: 10 bits are sent on the wire for every 8 bits of data, mainly to keep the link DC-balanced and the receiver's clock locked (it also catches some errors). 10 gigabit Ethernet uses a different type of encoding (64b/66b), so the transfer size over that bus may be different (and perhaps slower per transfer).

I think the goal is to make the transfer mechanism not care what data is sent over the line, much like the physical layer of the OSI model, and to let the CPU or other handling mechanisms decode it.

Anyway, think of gigatransfers per second as gigapackets per second. The number of bytes per packet can vary depending on the type of packet and the data encoding.
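For the 8b/10b case, the step from raw transfers to usable bytes is simple arithmetic; here is a sketch, using PCI Express 1.x's 2.5 GT/s per lane as the worked example (the helper name is mine):

def usable_gbytes_per_s(gtransfers, bits_per_transfer=1, data_bits=8, line_bits=10):
    """Data rate left after line coding such as 8b/10b (10 line bits carry 8 data bits)."""
    raw_bits_per_s = gtransfers * 1e9 * bits_per_transfer
    return raw_bits_per_s * data_bits / line_bits / 8 / 1e9

# PCIe 1.x lane: 2.5 GT/s, one bit per transfer, 8b/10b -> 0.25 GB/s of payload per lane
print(usable_gbytes_per_s(2.5))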

i7? (4, Funny)

hcdejong (561314) | more than 5 years ago | (#25612239)

Of course, "Core 3" was what everyone expected them to do, so Intel couldn't possibly use that. Using imaginary numbers is much more logical.

Re:i7? (1)

Thundersnatch (671481) | more than 5 years ago | (#25613457)

Wouldn't that make it "Core 7i" instead of "Core i7"?

Re:i7? (2, Funny)

Anonymous Coward | more than 5 years ago | (#25613723)

Fortunately, the imaginary unit and real numbers commute.

Re:i7? (1, Funny)

Thundersnatch (671481) | more than 5 years ago | (#25614947)

But it defies the common notation. I think "Core 0+7i" would be unambiguous. What the hell is wrong with those marketing types?

Re:i7? (1)

mobby_6kl (668092) | more than 5 years ago | (#25615253)

But 7i makes it sound like the processor is fuel-injected, which is also not the case.

Re:i7? (1)

Thundersnatch (671481) | more than 5 years ago | (#25615401)

Are you sure? I haven't looked at the block diagrams in detail, but apparently this thing is the bee's knees.

Re:i7? (1)

Amazing Quantum Man (458715) | more than 5 years ago | (#25613559)

So a Beowulf cluster would have negative 49 cores?

Re:i7? (1)

jebrew (1101907) | more than 5 years ago | (#25613757)

sqrt(-49) cores to be pedantic.

Re:i7? (1)

Amazing Quantum Man (458715) | more than 5 years ago | (#25614345)

I was assuming each core was i7.

Re:i7? (1)

VisceralLogic (911294) | more than 5 years ago | (#25615507)

Only if your cluster is comprised of 7 imaginary nodes!

Re:i7? (4, Funny)

Amazing Quantum Man (458715) | more than 5 years ago | (#25615555)

Well... I was trying to *IMAGINE* a Beowulf cluster....

Why would you expect Core 3? (5, Funny)

Phat_Tony (661117) | more than 5 years ago | (#25614303)

Why on earth would you be expecting the Core 3 to follow the progression of:

Core
Core Duo
Core2 Duo

The correct answer should be the 2Core2 Duo, or the Core2 Duo Dos, or the BiCore2Duo. Maybe the DuoCore2 Duo? Anyway, follow the pattern: keep adding things that mean "2." In several years, we'd have had the BiDuo2Core2DoubleDuo Dos MarkII.

Instead, it looks like we're heading for the e8, or the pi9, or the ln10, or maybe the 11!. Except that they'll change the pattern again, because now everyone's expecting math terms.

Re:Why would you expect Core 3? (1)

DragonWriter (970822) | more than 5 years ago | (#25614569)

Why on earth would you be expecting the Core 3 to follow the progression of:

Core
Core Duo
Core2 Duo

Because that's not the progression.

The technology is Core -> Core 2.

The "Duo" indicates that there are two cores of the appropriate type (Core or Core 2). (And the alternative "Quad" indicates four cores, and "Extreme", oddly enough, is used for 2 or 4 cores, but indicates better support for overclocking.)

So, in terms of the part of the branding used to indicate the core technology, Core 3 would be a not unreasonable expectation as a successor to the prior Core and then Core 2.

OTOH, it also makes sense that they're going with a new style of moniker. This is a pretty significant change, and all the new processors are planned to be quad-core, hyperthreaded parts with overclocking support, so the old Duo/Quad/Extreme distinction won't make sense. There are also good reasons to portray this not as a simple step up from the Core 2 but as a bigger break.

Re:Why would you expect Core 3? (1)

bhtooefr (649901) | more than 5 years ago | (#25614885)

And, there's also a Solo moniker.

But, Core 2 (or, rather, the Core microarchitecture that Core 2 is based on) is as big of a leap over the ancient P6 (from 1995) that the original Core Duo and Core Solo were based on. (Core Duo essentially being two Dothan Pentium Ms sharing a cache, with better SSE support, and a die shrink, and Core Solo being the single-core version.)

Re:Why would you expect Core 3? (1)

bhtooefr (649901) | more than 5 years ago | (#25614983)

Er, I screwed up. The Core microarchitecture was as big of a leap over P6 (or bigger) as Nehalem is over Core.

Re:Why would you expect Core 3? (1)

strong_epoxy (413429) | more than 5 years ago | (#25616565)

The next name in the progression would actually be:

Core2 Duo Turbo Pro Gold Elite

Re:i7? (1)

krakelohm (830589) | more than 5 years ago | (#25614535)

I know for a fact that 7 is a real valid number, sorry to burst your bubble.

Re:i7? (0)

Anonymous Coward | more than 5 years ago | (#25615147)

Of course, my first read of the topic title ("Intel Core I7" in sans-serif) made it look like they had skipped cores 3-16. I can't wait for my Core 17 Quad to come out.

Re:i7? (1)

Goalie_Ca (584234) | more than 5 years ago | (#25615279)

Well this early release was about 7 days out of phase or so.

But... (2, Funny)

Computershack (1143409) | more than 5 years ago | (#25612307)

Will it still play solitaire?

imagine (1)

XLR8DST8 (994744) | more than 5 years ago | (#25612359)

a beowulf cluster of these!

We're all serialists now? (4, Interesting)

jcr (53032) | more than 5 years ago | (#25612401)

This trend towards serial links reminds me of the INMOS Transputer [wikipedia.org]. Of course, those links were a hell of a lot slower than modern LVDS communications, but it's funny to see these ideas come back around.

-jcr

Re:We're all serialists now? (4, Informative)

frieko (855745) | more than 5 years ago | (#25612643)

Crosstalk and synchronization issues make parallel links impractical in the GHz range. There's a reason USB, PCI Express, HT/QPI, Ethernet are all serial and packet-based. The only major holdout is RAM, but I see it going serial eventually.

Re:We're all serialists now? (4, Informative)

jcr (53032) | more than 5 years ago | (#25612891)

The only major holdout is RAM, but I see it going serial eventually.

Well, depending on how you look at it, it sort of has already. FB-DIMM does parallel-to-serial conversion right on the DIMM. The DRAM chips themselves still have a parallel bus, but that bus doesn't even make it to the socket anymore.

-jcr

Re:We're all serialists now? (0)

Anonymous Coward | more than 5 years ago | (#25613913)

Remember Rambus? And all the rigamarole that surrounded it? Faster but more expensive didn't work out in that case.

Re:We're all serialists now? (4, Insightful)

Jerrry (43027) | more than 5 years ago | (#25615369)

"Remember Rambus? And all the rigamarole that surrounded it? Faster but more expensive didn't work out in that case."

There was nothing wrong with Rambus technology that caused it to ultimately fail. It was the lawsuit happy tactics of Rambus Inc. that caused the problems. The technology was sound, but the owner of the patents went out of their way to repeatedly shoot themselves in the foot.

Re:We're all serialists now? (2, Insightful)

Vanders (110092) | more than 5 years ago | (#25615597)

There was nothing wrong with Rambus technology that caused it to ultimately fail.

I don't think the crappy Rambus controller on the Intel i820 chipset helped its technical reputation too much, but you're right that the legal shenanigans probably damaged them the most.

Re:We're all serialists now? (0)

Anonymous Coward | more than 5 years ago | (#25614633)

Yes, I am serial.

Being an innovator not always smart? (5, Insightful)

wikinerd (809585) | more than 5 years ago | (#25612407)

AMD was brave enough to quit using FSBs in PC CPUs and replaced them with HyperTransport. Years later, Intel also says goodbye to FSBs and uses a similar technology. The innovator took all the costs, and now someone with more resources gets the market share. After all, consumers only want a speedy CPU; they don't care who the innovator was, and speedy CPUs are more readily available from whoever has the most resources to build them. It seems, therefore, that being the innovator is not always a smart move on the business chessboard, at least not if you cannot build your innovation in sufficient quantity. That said, I congratulate Intel for finally bringing the cores closer to the RAM, which is a much better technical solution than using an FSB. They should, perhaps, have done that much earlier.

Re:Being an innovator not always smart? (5, Interesting)

jcr (53032) | more than 5 years ago | (#25612481)

The innovator took all the costs,

Not hardly. There were a lot of other companies [hypertransport.org] involved in developing Hypertransport, and Intel spent their own money to develop their alternative.

-jcr

Re:Being an innovator not always smart? (5, Insightful)

Enderandrew (866215) | more than 5 years ago | (#25612521)

I thought HyperTransport was developed as open technology, allowing anyone to use it. I thought it was one of AMD's advantages, and I can't believe it took Intel so long to ditch the traditional FSB. What hurts AMD is pushing release dates back over and over again. What hurts AMD is not being able to keep up with Intel's fab processes. What hurts AMD is Intel using illegal tactics to bump AMD out of the market. AMD decides the only way to stay in the market is to sell their procs super-cheap, but then they don't make any money doing so.

It didn't help that when AMD was kicking Intel's butt in performance (Athlon 64 vs P4) AMD didn't gain much in market share because guys like Michael Dell said he'd never ship an AMD processor in one of his desktops, regardless of price and performance. Now that Intel is kicking AMD to the curb on high-end performance, all AMD has going for it is the low-cost market.

Re:Being an innovator not always smart? (1)

jcr (53032) | more than 5 years ago | (#25612613)

It didn't help that when AMD was kicking Intel's butt in performance (Athlon 64 vs P4) AMD didn't gain much in market share because guys like Michael Dell said he'd never ship an AMD processor in one of his desktops, regardless of price and performance.

Well, going for higher quality in the Windows/PC world was a sucker bet from the day that Dell opened for business.

-jcr

Re:Being an innovator not always smart? (1)

eabrek (880144) | more than 5 years ago | (#25613859)

Moving from an FSB to a point-to-point link isn't as straightforward as it looks.

The move usually includes integrating the memory controller, which then ties your processor design to DRAM standards (RDRAM vs. DDR2 vs. DDR3, etc.). There are also different voltages involved.

That said, Intel should have had some point-to-point solution sooner... even if it wasn't an across-the-board switch.

Re:Being an innovator not always smart? (1)

Enderandrew (866215) | more than 5 years ago | (#25614055)

Yes, but because AMD integrates memory controllers and such directly into their processors, AMD motherboards are cheaper than comparable Intel motherboards. This really is a win-win.

Re:Being an innovator not always smart? (1)

eabrek (880144) | more than 5 years ago | (#25614447)

Yes, but because AMD integrates memory controllers and such directly into their processors, AMD motherboards are cheaper than comparable Intel motherboards. This really is a win-win.

It's a short-term win for the customer. But it is killing AMD.

Instead of designing new microarchitectures, their processor designers were forced to integrate all sorts of different memory controllers into their processor designs...

And their fabs were forced to run different part mixes (mixing one-channel parts with two-channel parts, etc.). That reduces yield and complicates inventory management...

Re:Being an innovator not always smart? (1, Interesting)

pseudorand (603231) | more than 5 years ago | (#25614071)

Where are you getting the idea that AMD's technology is so much faster? Crappy benchmarks that try to sum up a complex problem into a single number?

I do IT support for scientific computing and I just don't see it. In fact, I spent all last week benchmarking some of my users' programs on an AMD 2212 vs. an Intel E8400, and the Intel system is wiping the floor with the AMD system (20% faster) for this particular program. And I'm not an Intel fanboy. I used to be an AMD fanboy, but then I got a whole mess of various models of Tyan AMD motherboards that consistently got MCEs and kernel panics under load (yes, I'm using both memory and CPUs from Tyan's list of supported parts for that specific board). It could be Tyan and it could be AMD, but since I switched to Intel, I can't get a machine check to save my life.

My point is that the very small subset of us that actually run CPU-bound programs for any significant length of time know that the best performance is:
a) Very application specific
b) Switches back and forth all the time
c) Represents differences of well under a factor of two, so if your code is actually too slow to run, you'll have to resort to tuning your software rather than buying better hardware (unless, of course, your hardware is many years old).
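If you want to sanity-check this for your own workload, a minimal timing harness is enough; a sketch, with my_workload standing in for whatever CPU-bound program you actually care about (the function is hypothetical):

import time

def my_workload():
    # hypothetical stand-in for your own CPU-bound code
    return sum(i * i for i in range(10_000_000))

start = time.perf_counter()
my_workload()
print(f"{time.perf_counter() - start:.2f} s on this box")   # compare the same run across machines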

Re:Being an innovator not always smart? (4, Insightful)

Enderandrew (866215) | more than 5 years ago | (#25614175)

You didn't read my post. I never said AMD was faster now. I said that AMD *WAS FASTER* at one point, and these days all AMD has is the low price point.

For instance, the last time I built a computer for myself (a little over a year ago), AMD offered a dual-core processor for $35. The Intel equivalent it was compared to in benchmarks cost $150. In the price-performance comparison, AMD came out way ahead at the low price point. At the very high end, AMD didn't have anything that could match Intel's performance.

Not to mention that scientific computing is vastly different from general processing.

For a scientist, you sure don't seem to understand what I wrote. Go back and reread it.

Re:Being an innovator not always smart? (1)

grotgrot (451123) | more than 5 years ago | (#25616083)

You also forgot why people like me stopped using AMD (after using them exclusively from 1995 till 2005). They kept changing the sockets which meant I couldn't upgrade my system one piece at a time any more.

Re:Being an innovator not always smart? (3, Informative)

Enderandrew (866215) | more than 5 years ago | (#25616469)

Intel didn't change sockets? How many sockets have they launched in the past six years? AMD has launched 3 main sockets in that time (754, 939 and AM2). Anyone remember Intel ditching Socket 423 after less than a year?

And AMD would release one proc on different sockets so you could still upgrade with your old mobo. For instance, when they came out with Socket 939, they were still releasing new procs under Socket 754. Even though they have Socket AM2/AM2+, you can still get Socket 939 procs.

AM2 came out in early 2006, and when I build my next rig in the spring, I'll still likely be building an AM2 rig. That being said, I'll probably go with a new motherboard for a faster bus, and faster memory support.

I could keep my existing mobo which will support quad-core AM2+ processors with a BIOS update, but to get the full potential, I need a new motherboard for the bus speed and memory improvements.

Intel is in the same boat. Chipsets and cores change often enough that you need to replace everything to get the best possible results.

Your logic was that you didn't want to change sockets and replace your entire system (AMD provided you that option to stay on the same socket) so you replaced your whole system and changed sockets to go to Intel.

How does that make sense?

Re:Being an innovator not always smart? (4, Informative)

TheGratefulNet (143330) | more than 5 years ago | (#25612529)

DEC invented HyperTransport for the DEC Alpha. AMD liked the idea and adopted it. It was not AMD's idea.

Re:Being an innovator not always smart? (2, Informative)

644bd346996 (1012333) | more than 5 years ago | (#25613013)

You seem to be thinking about the Alpha EV6 front-side bus architecture that AMD used on the original Athlon. It's very different from the HyperTransport bus, and predates it by several years.

Re:Being an innovator not always smart? (2, Interesting)

TheGratefulNet (143330) | more than 5 years ago | (#25613407)

Yes, I was thinking of that. But how radical is the new AMD design vs. that older EV6 stuff?

The whole idea is that it's NOT a front side bus and it's point-to-point from every node to every node.

Intel still has this FSB notion, and AMD dropped that years ago (?)

Re:Being an innovator not always smart? (3, Insightful)

illumin8 (148082) | more than 5 years ago | (#25616359)

AMD was brave enough to quit using FSBs in PC CPUs and replaced them with HyperTransport. Years later, Intel also says goodbye to FSBs and uses a similar technology. The innovator took all the costs, and now someone with more resources gets the market share. After all, consumers only want a speedy CPU; they don't care who the innovator was, and speedy CPUs are more readily available from whoever has the most resources to build them. It seems, therefore, that being the innovator is not always a smart move on the business chessboard, at least not if you cannot build your innovation in sufficient quantity. That said, I congratulate Intel for finally bringing the cores closer to the RAM, which is a much better technical solution than using an FSB. They should, perhaps, have done that much earlier.

Amen. I'm tired of explaining to my colleagues why AMD Opteron servers outperform Intel for use in database servers because of memory bandwidth and the ccNUMA architecture. It's nice that Intel has finally realized that they can't keep designing processors for desktop PCs and not care about I/O bandwidth. This means I can finally be confident that when I buy a new 8-CPU, 8-core (64 cores total) database server from Intel, I don't have to worry about my poor MCH (memory controller hub) choking access to that nice 512GB of RAM I have hanging off of it.

Those of us building database servers, VMware clusters, and other high memory bandwidth applications can rejoice because the Nehalem architecture is finally almost here.

moD up (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#25612545)

to tHe original

When are the 2 way ones that will be in the mac pr (1)

Joe The Dragon (967727) | more than 5 years ago | (#25612595)

When are the 2-way ones that will be in the next Mac Pro coming out?

For the desktop, where are the Nvidia boards and the lower-end motherboards? We need more than just the high-cost X58 boards.

Also, Apple should offer a single-CPU Core i7 system as well.

Re:When are the 2 way ones that will be in the mac (1)

rogermcdodger (1113203) | more than 5 years ago | (#25614951)

Intel's CEO Paul Otellini said production would start in January so expect Apple to ship sometime in Q1.

Re:When are the 2 way ones that will be in the mac (-1, Troll)

Anonymous Coward | more than 5 years ago | (#25616573)

Who gives a crap? Apple sucks anyway.

Expen$ive (2, Interesting)

Eddy Luten (1166889) | more than 5 years ago | (#25612601)

Looks great and everything but who has money for such toys? Core i7 965 Extreme, 6GB DDR3, NVIDIA GTX 280, X58 Mobo + other junk = easily $1,600 - $2,000.

Re:Expen$ive (2, Insightful)

vegiVamp (518171) | more than 5 years ago | (#25613429)

Sure, but I don't buy a new PC whenever I get a haircut.

I got my first PC, a 386, around 1992. Next came a Pentium 1. Then it was up to a P4, which died on me some two months ago. I still haven't bought a new one, but when I do, I expect it to last me another five years at least.

$2,000 over 5 years makes for $400 per year. That's a lot less of an investment than what a lot of people spend on their PC.

That being said, I have no burning desire to play every new game at the top of its pixel range, either. The PS3 does a fine job of that for me.

Re:Expen$ive (1, Interesting)

Anonymous Coward | more than 5 years ago | (#25613901)

$2,000 is cheap for a new professional workstation. I can pay for this with a couple or three wedding shoots. It makes running noise reduction and all the other 'automagic make it cool!' Photoshop plugins a lot more economical.

Re:Expen$ive (1)

afidel (530433) | more than 5 years ago | (#25614283)

Companies. Specifically, companies needing CAD workstations (though they'd use a graphics card that costs almost as much as your estimate). Also, I can't wait for the Core i7 to come to the HP DL line; I expect I can finally use Intel for database work, because it's their very poor multicore memory bandwidth that has kept AMD in the lead up till now.

Re:Expen$ive (1)

DragonWriter (970822) | more than 5 years ago | (#25614769)

Looks great and everything but who has money for such toys? Core i7 965 Extreme, 6GB DDR3, NVIDIA GTX 280, X58 Mobo + other junk = easily $1,600 - $2,000.

More than that, likely; $1,600 - $2,000 sounds right for just the processor (~$1,000 by itself) and the RAM in that setup. But you're maxing out a system with the best processor available, and that's expensive. The first PC my family got had an MSRP of approximately $4,500 (and the employee purchase price that we actually paid was ~$2,500) -- in 1984.

By comparison, $2,000 in 2008 dollars is cheap. People who need top of the line will get it; people who don't will settle for a mere 4GB of less impressive RAM and an i7 920, and shave $1,000 or more off the total cost even if they don't change anything else.

Another great /. post. (4, Interesting)

Ecuador (740021) | more than 5 years ago | (#25612699)

Link to the middle of an ad-laden article and to the Cinebench of all pages - because, you know, that is what the average /. reader is running...

Also, add a nice touch: forget to mention that while the i7 is faster clock-for-clock than the Core 2, it currently tops out at 3.2GHz and has some sort of overclock protection (it lowers the clock when it goes over 110A or 130W).

My cheap Core 2 is running at 4GHz on just the stock fan; I don't see myself upgrading to the i7 anytime soon.

What did you say? ... What do you mean Cinebench would still run faster?

Re:Another great /. post. (1)

Predius (560344) | more than 5 years ago | (#25613137)

The linked article showed a Core i7 running at 4.15GHz with stock air cooling.

Re:Another great /. post. (3, Funny)

Ecuador (740021) | more than 5 years ago | (#25613483)

You made me RTFA. The same ad-laden FA I was complaining about. Thanks. So, from the article:

Because the Core i7 Extreme 965 has its overspeed protection removed--i.e. its multipliers are unlocked--we overclocked the processor by raising its multiplier to 25 and also experimented with an increased QPI speed.

My 4GHz Core 2 is not a $1000 *Extreme* part. Humanly priced i7s will have overspeed protection.

I have the feeling you knew this was the case anyway, but had me read TFA just for kicks... shame on you!

Re:Another great /. post. (1)

MrNaz (730548) | more than 5 years ago | (#25614355)

I should bloody well hope it doesn't allow 110A to go through the CPU.

Re:Another great /. post. (1)

Ecuador (740021) | more than 5 years ago | (#25615813)

You seem to know what 110A means (huge current indeed), yet you haven't put two and two together and realized how you get a 130W TDP (pretty average for a modern CPU) out of 1.16V ;)
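The arithmetic being hinted at, spelled out as a quick sketch; the 1.16V core voltage and 110A limit are the figures quoted in this thread:

vcore = 1.16    # volts, the core voltage mentioned above
icc   = 110     # amps, the current limit the protection trips at
print(f"P = V * I = {vcore * icc:.1f} W")   # ~127.6 W, i.e. right at the 130W TDP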

Lot of reviews out, but there is one with 64 bit (5, Informative)

WittyName (615844) | more than 5 years ago | (#25613009)

http://www.planetx64.com/index.php?option=com_content&task=view&id=1435&Itemid=14 [planetx64.com]

1) 64-bit macro-op fusion is new. See it tested here.

2) Virtualisation is more efficient with nested page tables.

3) Gaming should benefit, since all X58 mobos support CrossFire and Nvidia SLI.

4) 12GB of RAM is supported with 2GB DIMMs - this is rare for desktop boards.

Numerous other minor tweaks, but read it for yourself..

Have fun with your upgrade dollars!

Real world performance??? (3, Interesting)

Ritz_Just_Ritz (883997) | more than 5 years ago | (#25613067)

And I was *just* about to retire my "old" Socket 940 dual-core Opteron box for a quad-core Intel system. I think I'll just wait another month or two and jump to the i7 platform instead. 8-)

Would be nice to see some video and audio encoding benchmarks and some real-world application performance numbers instead of teenmarks (gaming performance).

Cheers,

Re:Real world performance??? (1)

Redvision_500 (1399369) | more than 5 years ago | (#25613969)

Or you could still go for a Core 2 setup for a lower price once the i7 is out. An i7 based machine sounds expensive.

Servers? (2, Interesting)

slashkitty (21637) | more than 5 years ago | (#25613231)

Is there a comparable Intel chip for servers coming out? It's been over a year and still nothing can beat the price/performance of the Xeon 3220.

More reviews (3, Informative)

Vigile (99919) | more than 5 years ago | (#25613651)

Another review with some more data, including memory channel performance testing, a good explanation of the overclocking process, etc.

http://www.pcper.com/article.php?aid=634 [pcper.com]

Love the heat sink!!!! (2, Interesting)

gsgriffin (1195771) | more than 5 years ago | (#25614471)

Go follow the link to the hothardware site. Please don't tell me they are still going to ship their latest CPU ovens with a dorky heat sink that won't allow you to run the CPU beyond 40% sustained usage. I'll buy it after there are at least 50 comments on Newegg saying it works.

...and Intel and AMD, please blast through 3.2GHz per CPU so all programs work faster all the time.

What good is it? (2, Insightful)

raijinsetsu (1148625) | more than 5 years ago | (#25615161)

Can we actually get any more performance out of our computers with faster CPUs and faster RAM-to-CPU transfers? I've had processors running at 2.2GHz per core for some time now (years), and I always find that the only time I really get a slowdown is when accessing the hard disk, not when working in memory. Jumping from a 2.2GHz quad-core to a 3.2GHz quad-core is not going to bring you to a new utopia in desktop performance (like upgrading from a P3 to an AMD64 was).

For CPUs and memory, the market needs to focus on reducing power usage and fabrication cost, thereby decreasing the cost to all end users. I think they've brainwashed everyone into thinking that more processor power equates to a better PC experience.

Until storage devices can operate at near bus speeds, the average consumer (and even you uber-gamers) does not need these kinds of numbers for CPU performance. One caveat: there will always be someone who needs the processing speed, but they are not typical of the audience these chips are marketed to.

Re:What good is it? (1)

NevDull (170554) | more than 5 years ago | (#25616199)

I'm certainly interested in performance for reasons other than games, and for home use as well. I might not be able to give an exhaustive list, but transcoding is one area where a huge boost in compute performance will substantially change overall throughput. I've been playing with Elemental Technologies' BadaBoom, which uses CUDA to encode H.264 on the GPU, but it'll only use the first GPU it finds, and at this point it's still limited in input and output formats. As the PC becomes more of a digital media hub, I definitely see encoding/transcoding being an area that will drive home CPU consumption for the time being.