Intel's Haswell-E Desktop CPU Debuts With Eight Cores, DDR4 Memory

Soulskill posted about 1 month ago | from the onward-and-upward dept.

crookedvulture writes: Intel has updated its high-end desktop platform with a new CPU-and-chipset combo. The Haswell-E processor has up to eight cores, 20MB of cache, and 40 lanes of PCI Express 3.0. It also sports a quad-channel memory controller primed for next-gen DDR4 modules. The companion X99 chipset adds a boatload of I/O, including 10 SATA ports, native USB 3.0 support, and provisions for M.2 and SATA Express storage devices. Thanks to the extra CPU cores, performance is much improved in multithreaded applications. Legacy comparisons, which include dozens of CPUs dating back to 2011, provide some interesting context for just how fast the new Core i7-5960X really is. Intel had to dial back the chip's clock speeds to accommodate the extra cores, though, and that concession can translate to slower gaming performance than Haswell CPUs with fewer, faster cores. Haswell-E looks like a clear win for applications that can exploit its prodigious CPU horsepower and I/O bandwidth, but it's clearly not the best CPU for everything. Reviews also available from Hot Hardware, PC Perspective, AnandTech, Tom's Hardware, and HardOCP.

Broadwell (-1)

Anonymous Coward | about 1 month ago | (#47786161)

Just wait a while. Haswell is soon to be known as has-been.

Re:Broadwell (4, Informative)

zlives (2009072) | about 1 month ago | (#47786499)

if you can wait then you should always wait for new tech

Re:Broadwell (0)

Anonymous Coward | about 1 month ago | (#47786543)

This is the extreme X*9 chipset, where costs are multiplied by 225% but the performance gain is in the single digits. Waiting for Broadwell is prudent. If you are a gamer, you wait - what you have now is FASTER! If you are a rich mofo, you don't use Intel at all! If you are anybody else, you wait. Everyone lie down and wait!

Re:Broadwell (1)

Anonymous Coward | about 1 month ago | (#47786665)

If you are a rich mofo, you don't use Intel at all!

Oh, what are the rich folk buying instead?

Re:Broadwell (1)

mestar (121800) | about 1 month ago | (#47786873)

They buy TWO Intels.

Re:Broadwell (1)

CODiNE (27417) | about 1 month ago | (#47788411)

Overclocked POWER chips in liquid nitrogen.

Re:Broadwell (0)

Anonymous Coward | about 1 month ago | (#47788681)

Overclocked POWER chips in liquid nitrogen.

The REAL rich kids use vacuum chambers.

Re:Broadwell (1)

PopeRatzo (965947) | about 1 month ago | (#47787017)

But if I'm trying to game on an old i5-750, wouldn't this be a good time to upgrade to one of the cheaper 4-core Haswells that run at 3.8GHz instead of 2.7? Maybe a Haswell i5 (I guess I'd need a new mobo then, right?) and the latest PCI-E for a new graphics card.

I don't like to buy the newest and best; I buy when the second newest becomes cheap. I've got a really nice case, but I'm not sure if I could put a new processor into my old motherboard, or if it would even be worth it.

I'd like to do something before the fall games come out. Would I be better off just upgrading my old Radeon HD6850 to a nvidia 760 or a Radeon R9 285 or something?

And did I fall through a wormhole and end up at Tom's Hardware?

Re:Broadwell (0)

Anonymous Coward | about 1 month ago | (#47787423)

From an i5? Really? I laugh at people that waste money constantly upgrading their rigs. I usually keep mine for 10 yrs. before upgrading. Before my recently built i7 machine, I was gaming on my previous build: a Pentium 4. Sure, it wasn't the fastest, but I was still playing modern games on it just fine (with only a video card upgrade). I shake my head when I read comments in forums saying you MUST upgrade every 3 years. 5 or 7 is probably much more sane (unless you are like me, and let them get a little long in the tooth first).

Image processing (4, Interesting)

fyngyrz (762201) | about 1 month ago | (#47787737)

I use -- and write -- image processing software. Correct use of multiple cores results in *significant* increases in performance, far more than single digits. I have a dual 4-core, 3 GHz Mac Pro, and I can control the threading of my algorithms on a per-core basis; every core adds more speed when the algorithms are designed such that a region stays with one core and so remains in-cache for the duration of the hard work.

The key there is to keep main memory from becoming the bottleneck, which it immediately will do if you just sweep along through your data top to bottom (presuming your data is bigger than the cache, which is typically the case with DSLRs today). Now, if they ever get main memory to us that runs as fast as the actual CPU, that'll be a different matter, but we're not even close at this point in time.

So it really depends on what you're doing, and how *well* you're doing it. Understanding the limitations of memory and cache is critical to effective use of multicore resources. You're not going to find a lot of code that does that sort of thing outside of very large data processing, and many individuals don't do that kind of data processing at all, or only do it so rarely that speed is not the key issue, only results matter. But there are certainly common use cases where keeping a machine for ten years would use up valuable time in an unacceptable manner. As a user, I am constantly editing my own images with global effects, and so multiple fast cores make a real difference for me. A single core machine is crippled by comparison.
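To make the in-cache banding idea concrete, here is a minimal Python sketch (my own toy illustration, not the parent's code; the filter is trivial, and the halo/boundary handling a real convolution would need between bands is omitted):

from concurrent.futures import ProcessPoolExecutor
import numpy as np

def brighten_band(band):
    # Toy per-pixel "hard work"; a real pipeline would chain several passes
    # over the same band while it is still cache-resident.
    return np.clip(band * 1.2 + 10.0, 0.0, 255.0)

def process_image(img, workers=8):
    # One contiguous horizontal band per worker, so each core sweeps its own
    # region instead of every core thrashing main memory together.
    bands = np.array_split(img, workers, axis=0)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return np.vstack(list(pool.map(brighten_band, bands)))

if __name__ == "__main__":
    image = np.random.rand(6000, 4000) * 255.0  # DSLR-sized, far bigger than cache
    print(process_image(image).shape)           # (6000, 4000)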

*drool* (4, Funny)

msobkow (48369) | about 1 month ago | (#47786205)

*drool*

'nuff said.

I'm still clunking along on a P4 3.8 GHz. I'd love a new box that fast!

Re:*drool* (1)

Anonymous Coward | about 1 month ago | (#47786245)

...to do what? I've long stopped caring about this stuff. It seems to solve no real problems out there.

Re:*drool* (2)

dugancent (2616577) | about 1 month ago | (#47786305)

You're not the only one. I'm chugging along with a C2D from 2008. I can get at least another three years out of this, if not more. Speed brings nothing to the table in personal computing anymore (outside of gaming, and I'm not and never have been a gamer).

Re:*drool* (4, Informative)

Bob the Super Hamste (1152367) | about 1 month ago | (#47786477)

We have probably passed the point where, for most applications, more speed, memory, cores, etc. does anything for users, but I welcome the latest and greatest. I don't do much gaming, and that which I do is mostly old games that would run fine on an old Pentium 166 MMX. There are other resource-intensive computations for which this is useful. My personal example: I do some amateur cartography and GIS stuff, and doing what I wanted with my last machine (Athlon 64 X2) was painful; it would sometimes take days to complete a single operation, mostly due to being stuck at 4GB of physical RAM. That machine got replaced by an i7 3770K with 32GB of RAM, and what used to take almost a week could be done in about 10 minutes. Granted, a use case like this is rare, but there are probably others like it; not everyone is doing dick measuring based off of frames per second.

Re:*drool* (1)

Feces's Edge (3801473) | about 1 month ago | (#47787479)

We have probably passed the point where for most applications more speed, memory, cores, etc does anything for users

Awful and mediocre programmers (the majority) are trying their hardest to make their software as inefficient as possible so as to completely or mostly eliminate any advantages we get from the latest and greatest technologies.

Re:*drool* (1)

SirMasterboy (872152) | about 1 month ago | (#47786481)

Speed brings a lot more to personal computing than gaming.

It makes my Photoshop/C4D/After Effects renders faster. It makes my audio and video encodes faster. And it makes my code compile faster.

Re:*drool* (2)

Jane Q. Public (1010737) | about 1 month ago | (#47786533)

Speed brings nothing to the table in personal computing anymore (outside of gaming, and I'm not and never have been a gamer).

There are LOTS of applications outside of gaming where more speed is appreciated. Especially if you're a professional. (Of course, it's arguable you didn't mean that when you said "personal" computing, but I'm not working in an office, and my work machine is my "personal" machine.)

I was chugging along with a c2d for a long time too. But there came a time when it was long past due for replacement.

Re:*drool* (0)

Anonymous Coward | about 1 month ago | (#47786599)

A Core 2 Duo is a ton faster than a shitty P4 3.8 GHz, though. You'll have trouble playing HD video with a P4.

Re:*drool* (3, Insightful)

schlachter (862210) | about 1 month ago | (#47786703)

There was a time, back in the '90s (the rapid progression of 286/386/486/Pentium), where you needed to upgrade your computer every 2-3 yrs or you couldn't even run the latest software... and I'm not talking hardcore games... even simple stuff like word processing or the newest version of Windows.

Seems like now you can get by with 5-6 yr cycles, especially with the introduction of an SSD and more RAM.

Re:*drool* (2)

tysonedwards (969693) | about 1 month ago | (#47786919)

A large part of that is because recent improvements in computing have come in terms of efficiency rather than raw number-crunching ability. Being able to have a Xeon machine with dual GPUs run well with a 450W power supply versus a 1500W power supply is a prime example. Desktops that run at 25W versus 450W are another. Yes, there certainly have been GREAT advancements over the past few years, and those shouldn't be overlooked, but the emphasis has been on smaller, lighter, and more efficient, with a 5% YoY gain in performance while you're at it.

Gone are the days of a computer being twice as fast every 18 months.
Instead we have the days of a computer with a battery that runs twice as long, that boots in half the time, and faster wireless connections (some that even outperform their wired counterparts).

The reason why people *needed* to upgrade historically on such a rapid cadence was because technology was evolving at such a rapid pace. Those who would build the tools that everyone else wanted to use were geeks themselves and wanted to be on the latest and greatest, exploiting the advantages that the rapidly advancing technology would provide for them. Advances like MMX or SSE, or for that matter the move from 16-bit to 32-bit instruction sets gave some excellent benefits to those early developers as it allowed for programmers to design complex operations more easily as well as simply do certain things faster, letting applications like Excel deal with much larger data sets and perform comparisons instantaneously instead of the previous "Calculating, please wait." prompts that users would experience. Then, somewhere along the way these hardware architecture improvements no longer were a requirement for the vast majority of applications to run effectively, or even for developers to specifically target applications against. It became more of a minimum being "on this hardware, this runs 'well enough'" as opposed to "it just won't run".

At present, GPGPU acceleration does much the same thing for us as those architectural changes did during the late '90s and early 2000s. When someone says "I need more raw power", that's usually where they turn in the computing space any more. There is certainly a case for x86, PowerPC, ARM and other conventional architectures, and they remain at the core of every computer, but the large-scale deployments that need massive number-crunching capabilities are moving to GPGPU. (See scientific computing, clustering, high-performance computing, ...)

Re:*drool* (1)

toejam13 (958243) | about 1 month ago | (#47787095)

I second this. It isn't always about performance.

I recently replaced my Core i7-930 (Nehalem) system with a Core i7-4790S (Haswell) system. The new system is modestly faster. But mostly, it requires significantly less power, resulting in a cooler and quieter system. My case fans now only run when gaming for long periods.

If I lived in a colder part of the country, I probably would have kept my i7-930 for another couple of years. But I live in an area where A/C is a must. So I expect the upgrade to eventually pay for itself in the form of reduced electrical bills.

Re:*drool* (0)

Anonymous Coward | about 1 month ago | (#47787567)

Every time I run the numbers, I can't justify a hardware upgrade based on electricity savings. The payback time is significantly longer than the life of the new machine. (Especially in the winter here, where the old machine reduces my heating costs by some minuscule amount.) But you need to run the numbers for your situation. For me, a Core 2 Quad in an old Dell Optiplex I found in the dumpster at work does everything I need, reliably, quietly, quickly (enough), for free. Slap a decent Linux distro on there (Debian Jessie with XFCE is nice, as is Linux Mint 17 with Mate), and for my admittedly basic use scenarios, I'm set for the next 3-5 years.

Re:*drool* (4, Informative)

mestar (121800) | about 1 month ago | (#47786835)

Single-thread performance from a Core 2 Duo from 2008 to this year's i7-4770 improved just 90%, so not even a doubling in speed.

Re:*drool* (1)

zugedneb (601299) | about 1 month ago | (#47786951)

Picked up a Core 2 Quad Q6600 with mobo and 2GB RAM from the recycle bin, here in Sweden...
what people throw away...

I have to recap the mobo, then I will send it to some fellows in Hungary =)

Re:*drool* (1)

zlives (2009072) | about 1 month ago | (#47787925)

Haha, I am still using mine, though soon I will succumb to Star Citizen and an upgrade.

Re:*drool* (1)

Shinobi (19308) | about 1 month ago | (#47788483)

"Speed brings nothing to table in personal computing anymore (outside of gaming and i'm not and have been a gamer)."

That is just so stupid. Personal computing is not just about gaming or browsing or a bit of coding.

Many non-geeks do things that require way more computer horsepower than geeks do. Like 3D, video/movie work, heavy graphics editing, music, and the list just goes on.

Re:*drool* (1)

lgw (121541) | about 1 month ago | (#47786451)

..to do what? I've long stopped caring about this stuff. It seems to solve no real problems out there.

Well, I use my CPU to transcode media files, so I might get one. But for gaming? When will the CPU ever matter for gaming, unless you're running some terribly-written Java game?

Re:*drool* (0)

Anonymous Coward | about 1 month ago | (#47786713)

When you need to run a simulator and the CPU is used for physics.
Then a C2D will not cut it.

Re: *drool* (1)

Redbehrend (3654433) | about 1 month ago | (#47786789)

CPU matters for next-gen games more than you think. A good example is the VoxelFarm engine.

Re:*drool* (0)

Anonymous Coward | about 1 month ago | (#47787067)

It makes sense to offload processing from some of the GPU cores onto CPU cores, since they are abundant AND unused.

Re:*drool* (1)

0123456 (636235) | about 1 month ago | (#47787155)

When will the CPU ever matter for gaming, unless you're running some terribly-written Java game?

When consoles stop shipping with such crappy CPUs.

Re:*drool* (1)

UnknownSoldier (67820) | about 1 month ago | (#47788123)

PS4 and XBone are _already_ x86 so not sure if you are being serious, sarcastic, or cynical.

Are you referring to their abysmal 1.6 GHz clock speed [wikipedia.org] ?

Re:*drool* (1)

Dutch Gun (899105) | about 1 month ago | (#47788299)

Game developer here. A lot of stuff still happens on the CPU, especially when you're talking about large-scale AAA 3D games. Note that some of these items may make use of additional GPU or specialized hardware, but that's still somewhat rare.

* Model animation is performed on the CPU. This is probably the biggest CPU hit in most AAA games today.
* Audio engines are all in software now, and they're applying a lot of real-time effects, in addition to the costs of real-time decompression and mixing overhead.
* Physics and collision detection are performed on the CPU.
* Pathfinding can be very CPU-intensive.
* Particle effects are sometimes performed on the CPU, especially if they need to interact with the world in any way or have complex behaviors.
* AI and any sort of scripting is, of course, performed on the CPU.

Obviously, some games push the CPU a lot harder than others, but it's still important to have a reasonable CPU/GPU balance if you want to be able to play a wide variety of games.

That being said, of course it's pointless to upgrade your CPU if you're already GPU-bound, and that still tends to happen faster, because it's easy to crank up visual complexity until your video card chokes and sputters under the load.

Re:*drool* (1)

Shinobi (19308) | about 1 month ago | (#47788571)

"When will CPU ever matter for gaming, unless your running some terribly-written Java game?"

When you have a game that does a lot of AI stuff? Sins of a Solar Empire and the Total War series both tend to hit the CPU quite hard when your fleets/armies become large....

To the point of "Don't zoom in, just let the fleet autoattack...." yet you zoom in anyway, and get 120 FPS thanks to the GPU, but no units doing anything except in a slideshow, due to the CPU being hogged... Of course, I probably shouldn't have enough carriers/drone hosts for 300+ fighter and bomber squadrons..... Nor should my opponent....

Re:*drool* (1)

msobkow (48369) | about 1 month ago | (#47787427)

Working on my pet project. Having Eclipse start in under 10 minutes. Being able to run *all* my code manufacturing jobs all at once, instead of having to run three at a time on my laptop (the longest job takes 20 hours to run.)

Believe me, I could use the CPU power. I'm not an "average" user, just a broke one. :P

Re:*drool* (1)

msobkow (48369) | about 1 month ago | (#47787429)

BTW, that 20 hour job is running on a Core i7 mobile/laptop chip, not my P4. I shudder to think how long the P4 would take...

Re:*drool* (1)

DoomSprinkles (1933266) | about 1 month ago | (#47788745)

To be fair... in my job, it does me very well to have 10+ VMs running on my desktop machine 24/7. Sandy Bridge-E (3930K, hex-core) was a godsend for this. The 64 GB of RAM plays no small part as well, of course. I believe I left behind an E8600 Core 2 Duo and 4 GB of RAM for this particular upgrade. Needless to say, for this workload, it was a fantastic upgrade. Obviously, there's been no value in leaving SB-E for IVB-E or now Haswell-E, as the performance jump is just so minimal. Some of the things they've put on the silicon in these last two gens are enticing, but just not enough to leave for. I'd say the coolest thing about Haswell-E is the X99 chipset. That chipset is drool-worthy, with 10 6Gb/s SATA ports, buttloads of PCIe lanes, and DDR4 support.

Re:*drool* (1)

mr_mischief (456295) | about 1 month ago | (#47786771)

Why spend $2000 to upgrade from a P4, though? For $350 or $400, a system can show your P4 to be a waste of electricity.

Re:*drool* (0)

Anonymous Coward | about 1 month ago | (#47786793)

AMD Athlon64 5600+ with 4GB RAM right here. I am running an SSD with an AMD 6800, and since modern games are made for consoles, I can run them just fine.
Next in line will be a triple display upgrade... iRacing :)

Re:*drool* (0)

Anonymous Coward | about 1 month ago | (#47787283)

A $50 Celeron G3220 runs circles around your P4. It's about 6x faster already! A $150-ish i3 would be about 10x faster, and a $250-ish i5 about 15x. Hell, even an old Core 2 Duo slaughters this PC by a huge margin because it's a dual core, with both cores being quite a bit faster too. And the P4 is quite power-hungry.

Time to retire that wasteful slow dinosaur! You're probably already paying the price of an upgrade on your power bill anyway!

Re:*drool* (0)

Anonymous Coward | about 1 month ago | (#47787607)

Right after they came out, I built the box I have now, with the new schmantzy Core i7-920. It has 12GB of RAM and has held up very well, though it's getting a bit long in the tooth. I've replaced all of the Seagate OEM drives (they said OEM even though I bought them individually). They were supposed to have an MTBF of 100,000 hours (11 years), yet I replaced all of them in less than 4 (and before anyone says 11 years / 4 is the new MTBF: yes it is... but it's the average worst case on a single drive, not the lot!). Everything else has run very well. Maybe in the new year I'll upgrade.

Price (1)

wisnoskij (1206448) | about 1 month ago | (#47786237)

At least the one review I looked at said it was only $1K.

Re:Price (5, Interesting)

SirMasterboy (872152) | about 1 month ago | (#47786497)

Though the lower-end model is only $300 for a 6-core, 12-thread part!

http://www.microcenter.com/pro... [microcenter.com]

but when... (-1)

Anonymous Coward | about 1 month ago | (#47786265)

can i has cheezburger?

Nice (1)

MyLongNickName (822545) | about 1 month ago | (#47786273)

I recently (less than a year ago) bought an i7 four-core, 8-thread machine. I use it a lot for chess analysis, and it is amazing how quickly it can get to a 24-ply-deep analysis. Even with a slightly slower clock, 8 cores would be so much quicker.
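As a rough, hedged illustration of why cores map to plies (my arithmetic, assuming an effective branching factor near 2 for a well-pruned engine): search time grows exponentially with depth, so even perfect scaling across cores buys only about one extra ply per doubling.

ebf = 2.0            # assumed effective branching factor after alpha-beta pruning
cost_24 = ebf ** 24  # relative time to complete a 24-ply search
cost_25 = ebf ** 25
print(cost_25 / cost_24)  # 2.0: each extra ply ~doubles the work, so doubling
                          # the cores buys roughly one ply at best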

Re:Nice (1)

DiamondGeezer (872237) | about 1 month ago | (#47786587)

I have a similar spec in a laptop workstation where I run cloud software in VMware Workstation. For most people, 24GB of RAM and a quad-core i7 is not going to make word processing or browsing any better, so tablets are more convenient and useful for them.

Cores matter in virtualization of course, but at the moment, the slowest component is the hard disk.

***Big intake of breath*** (1)

DiamondGeezer (872237) | about 1 month ago | (#47786275)

But does it run Linux?

Re:***Big intake of breath*** (2)

vivek7006 (585218) | about 1 month ago | (#47786371)

But does it run Linux?

No, but it runs NetBSD!

Re:***Big intake of breath*** (1)

DiamondGeezer (872237) | about 1 month ago | (#47786593)

I'm almost tempted to buy one. But only if I can get better performance that makes the extra money worth it.

At the moment - naaaah!

just wait (5, Interesting)

hypergreatthing (254983) | about 1 month ago | (#47786281)

until next year. The 14nm shrink should be a huge boost in both efficiency and performance.
The X99 is an "enthusiast" platform and has pricing along those lines.
DDR4 is also extremely new. Expect it to get faster/better timing specs as time progresses.

Re:just wait (0)

Anonymous Coward | about 1 month ago | (#47786385)

this.

DDR4 is like $350 for 4x4GB. Too expensive still. This time next year we should see prices closer to what we are paying for DDR3 today.

Re:just wait (0)

Anonymous Coward | about 1 month ago | (#47787629)

...said every analysis of every new computer generation, ever!

Taco sucks a cock (-1)

Anonymous Coward | about 1 month ago | (#47786283)

Sometimes I miss him.

5820K is an extremely nice part (5, Interesting)

CajunArson (465943) | about 1 month ago | (#47786293)

The 5820K is packing 6 cores and an unlocked multiplier for less than $400. If you don't absolutely need the full 8-core 5960X, then the 5820K is going to be a very powerful part at a reasonable price for the level of performance it delivers.

Re:5820K is an extremely nice part (1)

Anonymous Coward | about 1 month ago | (#47786473)

Good eye, I saw that huge difference also. 2x price for 2 more cores? Also does the new architecture provide any benefit outside of clock-related speed?

Any chance of unlocking cores I wonder?

Re: 5820K is an extremely nice part (2)

Kjella (173770) | about 1 month ago | (#47787515)

Yes, but X99 and DDR4 blow any chance of doing Haswell-E on a budget. I need a new PC and am considering either the 4790K or the 5960X; the former is fine now, while the latter is going all out on new tech which I hope will last longer. Eight cores crushes the mainstream chip in multithreading. Eight RAM slots in case I want to double up, of a type that will be around for a long time and improve a lot. Plenty of PCIe lanes. Slightly weak single-threaded performance at stock, but considerable overclocking potential. With 10% performance improvement per generation, it'll take ages until I need an upgrade again. On the other hand, a 4790K might last me long too.

Re: 5820K is an extremely nice part (1)

godefroi (52421) | about 1 month ago | (#47788703)

In my experience, I'm seldom if ever CPU-capped, and if I am, what I'm doing is the sort of thing that 10% won't make a difference on. My advice: save your money. Buy all the RAM you'll want now (16GB; 32GB would be extravagant) before it becomes expensive.

The few extra months you buy with the 5960X isn't going to make a difference in the long run.

Of course, I don't know your particular application, nor do I know your particular financial situation. YMMV.

8 core (-1)

Anonymous Coward | about 1 month ago | (#47786297)

Owning an 8-core $1000 CPU will definitely make people think you have a big dick. Sweet!

DDR2/3/4 (0)

Anonymous Coward | about 1 month ago | (#47786303)

Correct me if I'm wrong, but it seems that while DDR memory bandwidth increases, the CAS latency always remains roughly the same (between 10 and 20ns), so will there really be that much of an improvement?

Re:DDR2/3/4 (2)

tralfaz2001 (652552) | about 1 month ago | (#47786365)

Let's hope so. DDR3 has always been a joke, since it only gained speed over DDR2 when configured in 3-channel banks. Except it is almost never configured that way, and thus amounted to faster-clocked DDR2. Hopefully DDR4 works appropriately when configured in a 4-DIMM bank.

Re:DDR2/3/4 (4, Informative)

danbob999 (2490674) | about 1 month ago | (#47786765)

DDR is not about the number of channels. You could design a system with 8 channels of DDR1 or single-channel DDR4 if you wanted to. Each new generation of DDR RAM is about lower voltage and higher clock speed, usually at the cost of higher latency (800 MHz DDR3 is a bit slower than DDR2).

Re:DDR2/3/4 (0)

Anonymous Coward | about 1 month ago | (#47788261)

Memory latency has always been about the same. The CAS latency cycles have increased, but so have bus speeds. The absolute latency in ns has remained roughly the same since DDR1 days. That's why 2133MHz RAM has a higher CAS than 1600MHz, but in the end it's the same. Only if you get the enthusiast memory do they lower the CAS latency significantly.

Re:DDR2/3/4 (5, Informative)

mr_mischief (456295) | about 1 month ago | (#47786749)

CAS latency hasn't been measured directly in nanoseconds for some time now. It is now measured in clock cycles. The shorter your clock cycles (the higher your frequency) the shorter in absolute time your CAS latency is for the same number. CAS 10 at 2133 is about the same as CAS 5 on 1066.

CAS latency on Wikipedia [wikipedia.org]
Memory timing on Hardware Secrets [hardwaresecrets.com]
FAQ on RAM timings from Kingston [kingston.com]
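A quick sanity check of that claim in Python (DDR transfers twice per I/O clock, so the clock runs at half the quoted transfer rate):

def cas_ns(cas_cycles, transfer_rate_mts):
    clock_mhz = transfer_rate_mts / 2.0     # DDR: two transfers per clock
    return cas_cycles / clock_mhz * 1000.0  # cycles * nanoseconds per cycle

print(round(cas_ns(10, 2133), 2))  # 9.38 ns: DDR3-2133 at CL10
print(round(cas_ns(5, 1066), 2))   # 9.38 ns: DDR2-1066 at CL5

Same absolute latency, double the bandwidth.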

Re:DDR2/3/4 (3, Interesting)

pjrc (134994) | about 1 month ago | (#47786861)

Just to put the "some time now" time frame into perspective: the last mainstream PC memory form factor to use asynchronous DRAM was the 72-pin SIMM.

When PCs went from 72 pin SIMMs to the first 168 pin DIMMs, in the mid-1990s, the interface changed to (non-DDR) synchronous clocking.

Intel had to (1)

sseymour1978 (939809) | about 1 month ago | (#47786389)

> how fast the new Core i7-5960X really is. Intel had to dial back the chip's clock speeds to sell you the same processor twice
I fixed that for you (probably).

No TSX? (0)

Anonymous Coward | about 1 month ago | (#47786431)

No TSX on this one either?

Obligatory (1)

gmhowell (26755) | about 1 month ago | (#47786433)

But will it run Crysis?

Re:Obligatory (1)

DiamondGeezer (872237) | about 1 month ago | (#47786555)

No, you'll have to wait for the upgrade with the industrial cooling unit bolted on. Then you might get it usable.

YMMV

And the Mac Pro is now obsolete or soon will be. (1)

LWATCDR (28044) | about 1 month ago | (#47786465)

That is the problem with Apple's obsession with small and sexy. Of course, if Apple updates the Pro this year, all is good, but given their history I would not bet on it.

Re:And the Mac Pro is now obsolete or soon will be (1)

DiamondGeezer (872237) | about 1 month ago | (#47786613)

That's just nonsense. Just because there are taller buildings doesn't make the Empire State Building any smaller.

Re:And the Mac Pro is now obsolete or soon will be (1)

Bing Tsher E (943915) | about 1 month ago | (#47787123)

That's right. The Empire State Building is Majestic, and Quaint and stuff. Like the Mac Pro.

Re: And the Mac Pro is now obsolete or soon will b (1)

cerberusss (660701) | about 1 month ago | (#47787053)

Come on. The Mac Pro requires you to spend big bucks. It's not too much to ask Apple to follow Intel's roadmap with the Mac Pro.

Re:And the Mac Pro is now obsolete or soon will be (1)

John Bokma (834313) | about 1 month ago | (#47787063)

Mac Pro uses Xeon E5 v2. It's much more likely that a (small) upgrade is going to use Xeon E5 v3. But hey, don't let me stop you from celebrating the upcoming death of Apple. Parties that go on for decades must be great ;-)

Elephant in the room (3, Informative)

cowwoc2001 (976892) | about 1 month ago | (#47786659)

No one is talking about the elephant in the room: RAM prices are so high that you'd have to spend $700 to hit 64GB RAM (the max the board supports). That is just outrageous.

These prices are going to lead to a severe drop in demand.

Re:Elephant in the room (4)

umafuckit (2980809) | about 1 month ago | (#47786777)

Why is that the elephant in the room? How many people need 64 gigs of RAM? 8 to 16 gigs is currently plenty for most applications. Yes, there are instances where more is needed, but those instances are rare. Usually people who need more than 16 gigs require it for work-related reasons, where the $700 takes on a different perspective.

Re:Elephant in the room (0)

Anonymous Coward | about 1 month ago | (#47786813)

In my day 1MB memory sticks went for $75 each and we STILL had demand for them. NOW GET OFF MY LAWN!

Re:Elephant in the room (1)

zlives (2009072) | about 1 month ago | (#47787947)

Your lawn is too new; I paid close to $200/MB.

Re:Elephant in the room (1)

CityZen (464761) | about 1 month ago | (#47788245)

I once paid $100 for 16KB ($6400/MB). Of course, Apple was charging $400 for the same amount ($25,600/MB).

Re:Elephant in the room (2)

mestar (121800) | about 1 month ago | (#47786859)

OMG, 64 GB of RAM for only $700. That is simply amazing, how cheap it is.

Re:Elephant in the room (4, Funny)

cowwoc2001 (976892) | about 1 month ago | (#47787073)

Two years ago it was half that price.

Electronics prices are supposed to drop over time. When you compare current prices to 5 years ago there isn't much of a difference.

Re:Elephant in the room (0)

Anonymous Coward | about 1 month ago | (#47787857)

Two years ago it was half that price.

Electronics prices are supposed to drop over time. When you compare current prices to 5 years ago there isn't much of a difference.

Drop over time why? Why do you think this is automatic?
Pricing of anything changes with supply and demand, production volumes, process efficiency, uhh.. stuff like that.

The price of WHAT hasn't changed in five years, DRAM chips or DIMM packages... of what capacity?
If you're looking at bleeding-edge PC desktop DIMMs, a lot has changed in regards to volume in the past five or ten years.

Re:Elephant in the room (0)

Anonymous Coward | about 1 month ago | (#47787031)

http://blog.modernmechanix.com/mags/InterfaceAge/4-1978/thinker_toys.jpg

Re:Elephant in the room (1)

Demonantis (1340557) | about 1 month ago | (#47787099)

The price will probably go down once manufacturing spins up and yields improve. I remember reading on /. that manufacturers overprovisioned manufacturing for DDR3. They will probably be more conservative with DDR4, at least in the beginning. Wait a couple of months until the market stabilizes.

Re:Elephant in the room (1)

cowwoc2001 (976892) | about 1 month ago | (#47787133)

Actually, the prices I was quoting were for DDR3, not DDR4, so the problem is worse than you think.

Re:Elephant in the room (0)

Anonymous Coward | about 1 month ago | (#47787255)

There might be some collusion. It's hard to tell; it would be the first time though.

When will the newer gen of intel chips go DDR4? (1)

Joe_Dragon (2206452) | about 1 month ago | (#47788071)

When will the newer gen of Intel chips go DDR4? Also, what about AMD?

Broadwell is not going to have DDR4.

The I-Word (1)

Mister Liberty (769145) | about 1 month ago | (#47786697)

For how many lanes of 'NSA Bulldozer 7.0' (is that the latest?), we'll just have to wait for the next Vanunu or Snowden.

Why the need to slow down the CPU ? (0)

Anonymous Coward | about 1 month ago | (#47787311)

"... Intel had to dial back the chip's clock speeds to accommodate the extra cores ..."

Can anyone please tell us why there is a need to slow down the CPU speed in order to put in more cores?

I am no EE, and would very much appreciate any information on this matter!

Many, many thanks in advance!!

Re:Why the need to slow down the CPU ? (1)

slew (2918) | about 1 month ago | (#47787489)

Can anyone please tell us why is there a need to slow down the CPU speed in order to put in more cores?

Thermals. More cores generate more heat, and more heat within the same thermal envelope means you can't run each core as fast. Of course, in a different environment (say, a liquid nitrogen cooling rig vs. an air-cooled rig), you could probably clock those cores higher.

Just because you can put in more cores doesn't mean you should. It used to be that the limiting engineering factors were die area vs. chip yield. Nowadays thermals are arguably the most important consideration, because you are often limited thermally (and sometimes even electrically) in how much power you can deliver to a square millimeter of a computer chip.
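A crude sketch of that power budget, using only the classic dynamic-power relation P ~ cores * C * V^2 * f (numbers invented for illustration):

def dynamic_power(cores, volts, ghz, cap=1.0):
    # Relative dynamic power in arbitrary units
    return cores * cap * volts ** 2 * ghz

budget = dynamic_power(4, 1.2, 4.0)  # four cores at 4.0 GHz set the TDP budget
f_eight = budget / (8 * 1.2 ** 2)    # same budget over eight cores, same voltage
print(f_eight)                       # 2.0 GHz: why extra cores cost clock speed

In practice voltage drops along with frequency, which softens the penalty; that is roughly why the 8-core 5960X ships at a 3.0 GHz base clock rather than half the quad-core 4790K's 4.0 GHz.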

Can I get one without the NSA hardware backdoor? (1)

RocketRabbit (830691) | about 1 month ago | (#47787365)

Can I get one without the NSA hardware backdoor? I'm talking about iAMT. It would be nice; a guy can dream.

The 3 year cycle... (1)

sdguero (1112795) | about 1 month ago | (#47787403)

As a gamer, I have been on a 3-year PC build cycle since 1992. Every three years (more or less), I build a new PC. Since 2008, I've only upgraded once, and my current build (now 2 years old, a 2500K overclocked to 4.4 GHz) still feels pretty darn fast. It's weird, because 15 years ago I'd be itching for new hardware with a 2-year-old system. Since my Core 2 Duo build in 2008, I haven't really seen any noticeable performance jumps other than the move to SSDs and the bigger IPS panels. My old Core 2 system, which is now 6+ years old and on my workbench in the garage, can still hold its own with new games without a problem.

Obviously there is a lot more to PC/game performance than just the CPU. But what I'm getting at is that ever since the move to multi-core platforms in the mid-2000s and the offloading of more work to the GPU(s), hardware development for PCs seems to have slowed down quite a bit. Maybe a combination of newer hardware not being utilized as well as the stuff back in the early 2000s, and smaller bumps in the various chip/processor designs? Perhaps the greater focus on power savings and mobile development? The trend towards netbooks, and now tablets? Maybe the fact that most kids these days only play on consoles?

I dunno. It just seems like the rate of change has slowed down a lot over the last decade. Or maybe I'm just getting old.

Re:The 3 year cycle... (1)

zlives (2009072) | about 1 month ago | (#47787969)

Please check out Star Citizen. If it's to your taste... you will upgrade.

Re:The 3 year cycle... (1)

InvalidError (771317) | about 1 month ago | (#47787987)

I dunno. It just seems like the rate of change has slowed down a lot over the last decade. Or maybe I'm just getting old.

10+ years ago, performance was more than doubling every two years through a combination of higher clocks, die shrinks, extra transistors, fundamental breakthroughs in logic circuit design, etc. Right now, mainstream CPUs are only ~60% faster than mainstream CPUs from four years ago: clocks are stuck near the 4GHz mark, die shrinks are slower in coming, and nearly all the fundamental breakthroughs have been discovered. To make things worse, modern hardware is already more powerful than what most people can be bothered with, so there is a general lack of demand for significantly faster low-to-mid-range CPUs.
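Compounding those figures makes the slowdown stark (my arithmetic, assuming steady yearly gains):

yearly_now = 1.60 ** (1 / 4) - 1   # ~60% total over four years
yearly_then = 2.00 ** (1 / 2) - 1  # doubling every two years
print(f"{yearly_now:.1%}")         # 12.5% per year today
print(f"{yearly_then:.1%}")        # 41.4% per year back then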

Progress is slowing down and I can only imagine it getting worse in the future.

Slower games (0)

Anonymous Coward | about 1 month ago | (#47787623)

Demanding games have been made concurrent for the most part, and these will certainly benefit more from 8 cores and huge caches than they will lose from lower frequency. Dual and quad+ cores are not new. Failure to fully exploit SMP in 2014 is a fine reason to avoid a given game as far as I'm concerned.

Re:Slower games (1)

UnknownSoldier (67820) | about 1 month ago | (#47788029)

> Failure to fully exploit SMP in 2014 is a fine reason to avoid a given game as far as I'm concerned.

That's a crappy reason. You'll miss out on Path of Exile, Minecraft, and Terraria, all of which are excellent games.

Boring...oddly (2)

Sir_Sri (199544) | about 1 month ago | (#47787745)

What's interesting is essentially how little benefit they get.

The X99 mobo and platform is nice, I like a lot of what they're doing there, and all of the system components matter a lot to user experience. But unless you have a very specific requirement, any user would be just as well served by the quad-core as the octa-core, if not better served by the Devil's Canyon quad-core given its single-threaded performance. That's probably a bad place for Intel to be positioning these, as the target audience for these processors is looking for blazing fast AND lots of cores, and it only delivers one of the two.

I think if I were buying a system this week or next (which... I am), I'd be a bit disappointed that I can't put a Devil's Canyon quad-core on an X99 mobo and then upgrade the CPU later if they manage to refresh the E series into something more attractive.

Also why can't the DMI link be better in other cpu (1)

Joe_Dragon (2206452) | about 1 month ago | (#47788127)

Also, why can't the DMI link be better on other CPUs?

Why do you now have to get a 6-core Haswell-E to get more than 16 PCIe 3.0 lanes plus the x4 PCIe 2.0 (DMI) link?

Most people may only need one video card, but with PCIe SSDs coming out, more PCIe lanes are needed.

Re:Boring...oddly (0)

Anonymous Coward | about 1 month ago | (#47788327)

What's also a bit disappointing: we have a completely new platform with incompatible everything, and the shiny new X99 PCH, with its 10 SATA 6Gb/s ports, M.2, USB3, and so on, is on the same old 20Gbit/s DMI2 as the first Z68.

"...can translate to slower gaming performance" (1)

Snufu (1049644) | about 1 month ago | (#47787883)

for non-multithreaded games?

How do these compare to AMD (1)

nzs1 (1371539) | about 1 month ago | (#47788167)

8-core processors?