
Intel Announces 8-Core CPUs And Iris Pro Graphics for Desktop Chips

Unknown Lamer posted about 8 months ago | from the face-meltingly-fast dept.


MojoKid (1002251) writes "Intel used the backdrop of the Game Developers Conference in San Francisco to make a handful of interesting announcements that run the gamut from low-power technologies to ultra-high-end desktop chips. In addition to outing a number of upcoming processors—from an Anniversary Edition Pentium to a monster 8-core Haswell-E—Intel also announced a new technology dubbed Ready Mode. Intel's Ready Mode essentially allows a 4th Gen Core processor to enter a low C7 power state, while the OS and other system components remain connected and ready for action. Intel demoed the technology and, along with compatible third-party applications and utilities, showed how Ready Mode can allow a mobile device to automatically sync to a PC to download and store photos. The PC could also remain in a low power state and stream media, serve up files remotely, or receive VOIP calls. Also, in a move that's sure to get enthusiasts excited, Intel revealed details regarding Haswell-E. Similar to Ivy Bridge-E and Sandy Bridge-E, Haswell-E is the 'extreme' variant of the company's Haswell microarchitecture. Haswell-E Core i7-based processors will be outfitted with up to eight processor cores, which will remain largely unchanged from current Haswell-based chips. However, the new CPU will connect to high-speed DDR4 memory and will be paired with the upcoming Intel X99 chipset. Other details were scarce, but you can bet that Haswell-E will be Intel's fastest desktop processor to date when it arrives sometime in the second half of 2014. Intel also gave a quick nod to their upcoming 14nm Broadwell CPU architecture, a follow-on to Haswell. Broadwell will be the first Intel desktop processor to feature integrated Iris Pro Graphics and will also be compatible with Intel Series 9 chipsets."


8 cores? (3, Insightful)

chris200x9 (2591231) | about 8 months ago | (#46538615)

So they finally caught up to AMD.

Re:8 cores? (2, Insightful)

Anonymous Coward | about 8 months ago | (#46538653)

No, they're well ahead of AMD in this regard. AMD's 8 "core" CPUs are actually 4 core CPUs that can process 2 integer instructions at the same time on one core. Much like Intel's current i7s are 4 core CPUs that can process an integer and a floating point instruction at the same time on one core. Basically, AMD is marketing hyper threading as being more cores.

They Both Fudge (1)

Anonymous Coward | about 8 months ago | (#46538745)

AMD's 8 "core" CPUs are actually 4 core CPUs that can process 2 integer instructions at the same time on one core.

Intel calls EMT64 64 bits when it is just 32 bits on each 1/2 of the clock cycle.

The CPU is dead in the long run. Long live the GPU/APU. Now if we could only code for parallel.

Re:They Both Fudge (0)

Anonymous Coward | about 8 months ago | (#46538787)

Ehh? Are you implying that intel's desktop chips are not APUs? Last I checked all of them had GPUs built in, and those GPUs were actually pretty competitive with both AMD and nVidia in the power constraints they're operating in.

Re:They Both Fudge (1)

K. S. Kyosuke (729550) | about 8 months ago | (#46538887)

Uh, no shared paged memory. No built-in hardware queuing support ("fast-path function calls to GPU"), as far as I'm aware. Perhaps questionable IEEE 754 compliance (certainly in the case of nVidia). And Intel isn't really interested in competing with AMD's APUs because they're busy trying to sell Xeon Phi for similar workloads.

Re:They Both Fudge (1)

Anonymous Coward | about 8 months ago | (#46539479)

AMD HSA:
Integration of CPU and GPU in silicon...
GPU can access CPU memory...
Unified memory for CPU and GPU...
GPU context switching...

Re:They Both Fudge (0)

Anonymous Coward | about 8 months ago | (#46539153)

AMD's 8 "core" CPUs are actually 4 core CPUs that can process 2 integer instructions at the same time on one core.

Intel calls EMT64 64 bits when it is just 32 bits on each 1/2 of the clock cycle.

The CPU is dead in the long run. Long live the GPU/APU. Now if we could only code for parallel.

Uhhhh... what? It addresses more than 32 bits of memory.
It's clock-for-clock faster than anything AMD has in pretty much any benchmark.

This sounds like the fan bois whining about how the Core 2 Quad wasn't a "real" quad core due to having two chips on a single package, but then the "real quad core" Barcelona was quite the disappointment.

It's a 64-bit chip because it executes a 64-bit instruction set (quite quickly). Maybe in your fantasy land some minute technical detail makes your inferior AMD chip somehow "better", but not in any way that is objectively measurable ("reality").

Re:They Both Fudge (1)

Guy Harris (3803) | about 8 months ago | (#46539175)

Intel calls EMT64 64 bits

Intel hasn't called it EM64T in years. It's now "Intel 64".

when it is just 32 bits on each 1/2 of the clock cycle.

Please provide a reliable source for your assertion that all Intel 64 processors have 32-bit data paths internally.

Re:They Both Fudge (1)

ppanon (16583) | about 8 months ago | (#46539967)

It was true for the first generation or two of Intel chips that supported AMD's 64-bit extensions. It hasn't been true for quite a while though.

Re:They Both Fudge (1)

Guy Harris (3803) | about 8 months ago | (#46540031)

It was true for the first generation or two of Intel chips that supported AMD's 64-bit extensions. It hasn't been true for quite a while though.

So that'd be the 64-bit Pentium 4s (perhaps not surprising, as it was initially a 32-bit microarchitecture, and fully widening it to do 64 bits of arithmetic at the time might've been more work than they wanted to do) and the Core 2 (more surprising, as that microarchitecture was released in 64-bit chips from Day One, but maybe the design work started with a 32-bit chip and the 64-bitness was added at the last minute).

So I can believe it for the 64-bit Pentium 4s; is there any solid information indicating that it was true of the Core 2 processors?

Re:8 cores? (4, Informative)

Travis Mansbridge (830557) | about 8 months ago | (#46538799)

While each pair of cores on the AMDs shares resources, this is different from hyperthreading, and there are indeed 8 cores. http://www.reddit.com/r/builda... [reddit.com]

Re:8 cores? (1)

Anonymous Coward | about 8 months ago | (#46538823)

They share everything except for the ALU, which is duplicated out. This allows two instructions to get through two different ALUs at once. Similarly, Intel's ALU and FPU share everything except for the actual units, which are separate. This allows two instructions to get through two different units at once. This is exactly the same technology, implemented slightly differently, and it has similar performance effects. Hence why AMD's CPUs end up being slower than Intel's despite having "twice as many cores".

Re:8 cores? (4, Informative)

K. S. Kyosuke (729550) | about 8 months ago | (#46539049)

They share everything except for the ALU, which is duplicated out.

Actually, in addition to integer execution units, they also don't share instruction decoders, L1 data caches, and integer op schedulers. ;-) They do share L1 instruction cache, L2 cache and the FPU pipeline (which is supplemented by the GPU units in ALUs anyway for FP-heavy applications, though).

Re:8 cores? (1)

K. S. Kyosuke (729550) | about 8 months ago | (#46539083)

Oh, I meant "by the GPU units in APUs", of course...

Re:8 cores? (1)

Blaskowicz (634489) | about 8 months ago | (#46540209)

Steamroller (used only in Kaveri) now uses two decoders per module, along with assorted little changes (a good review on a decent tech site will explain them)

Re:8 cores? (1)

gnasher719 (869701) | about 8 months ago | (#46539823)

I had a look at www.cpubenchmark.net, and the highest AMD processor with 8 cores is rated about 1 percent higher than the 2.6 GHz quad core processor in a Retina MacBook Pro.

Re:8 cores? (2)

K. S. Kyosuke (729550) | about 8 months ago | (#46538859)

Uhhh...no. About the only thing not shared these days in (BD-derived) AMD cores is the FPU.

Re:8 cores? (3, Informative)

guacamole (24270) | about 8 months ago | (#46538935)

I believe cache is shared, and is believed to be one of the bottlenecks of the current AMD CPUs.

Re:8 cores? (1)

cheesybagel (670288) | about 8 months ago | (#46539547)

The data caches are not shared. Each core has a separate data cache. The decoder is the same so they share the instruction cache. But AMD's instruction caches are 64 KB while Intel uses 32 KB sized instruction caches.

Re:8 cores? (1)

Kjella (173770) | about 8 months ago | (#46539981)

I believe cache is shared, and is believed to be one of the bottlenecks of the current AMD CPUs.

By far not the most significant one, though. In single-threaded tests the i7-4770K beats the FX-8350 by 62% in Cinebench R11.5, 73% in Cinebench R10 and 47% in POV-Ray 3.7RC6, and that's when the AMD core is not competing for resources with its sibling. With turbo the picture is a bit more complex, but 4 Intel cores already equal 6-7 AMD cores. Then you add in cache contention, the shared FPU, and the overhead of more threads for the last 1-2 cores of difference, so in the most ideal benchmarks for AMD they're roughly equal. Except the AMD processor is 125W and using every drop of it, while the Intel is 84W; clearly AMD has some very basic performance issues not due to the core layout. They just run slow and hot no matter how you look at them, and it's not getting better. In the words of AMD itself:

As we move forward, we will continue to strategically transform AMD as we diversify our portfolio and drive a larger percentage of our revenue from the semi custom, ultra low power client, embedded, dense server and professional graphics high growth markets.

Their CPU/APU division, "Computing Solutions", is already less than half the revenue and none of the operating income of AMD in Q4. That division's 2013 revenue was down 25% compared to 2012, which was a bad year itself, and even their Christmas-quarter sales showed a strong downward trend (722M in Q4 vs 790M in Q3). Maybe AMD will survive as a company, but the part of AMD that competes with Intel is clearly shrinking, and shrinking fast. There's no significant refresh of the FX or Opteron line in sight, and since they're actively diversifying instead of investing I expect the same will happen to their other chips as well.
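The per-core arithmetic behind those margins is easy to check. A back-of-the-envelope sketch (the percentages are the ones quoted above; the derived ratios are just implied, not measured):

```python
# If the i7-4770K leads by these single-thread margins, each FX core delivers
# roughly 1/(1+margin) of an Intel core's throughput. Margins as quoted above.
margins = {"Cinebench R11.5": 0.62, "Cinebench R10": 0.73, "POV-Ray 3.7RC6": 0.47}
relative = {name: round(1 / (1 + m), 2) for name, m in margins.items()}
print(relative)  # each FX core lands at roughly 0.58-0.68 of an Intel core

# So 4 Intel cores are worth about 4 / 0.62 of these AMD cores, matching
# the "4 Intel cores already equal 6-7 AMD cores" figure above.
print(round(4 / (1 / (1 + 0.62)), 1))  # -> 6.5
```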

Re:8 cores? (1)

Anonymous Coward | about 8 months ago | (#46540201)

Restricting multi-threaded apps on multi-core processors to a single thread is one of the stupidest tests I keep seeing. Start doing benchmarks with real-world single-threaded programs that people use, not this bogus synthetic test that doesn't represent reality. Same goes for that obsolete benchmark of MP3 encoding. You've got 4 or 8 cores, Einstein. Run 4 or 8 encodes in parallel. And if you're going to claim that you have to pull data from the CD-ROM sequentially, then include the ripping time in the benchmark, because that's reality. If the end result is that CPU time is negligible compared to ripping time, then so be it. Be honest about it instead of misleading people.

If I'm interested in how a CPU performs in Cinebench, it's because I'm using that app. And there's no way in hell I'm going to kick the app in the groin and limit it to a single core. I want to know how fast that CPU renders a scene, and how much it costs. That's it. I can then work within my budget to determine what I'm going to choose and how much of a time tradeoff I'm willing to take.

Bull (1)

Blaskowicz (634489) | about 8 months ago | (#46540197)

No, they're well ahead of AMD in this regard. AMD's 8 "core" CPUs are actually 4 core CPUs that can process 2 integer instructions at the same time on one core. Much like Intel's current i7s are 4 core CPUs that can process an integer and a floating point instruction at the same time on one core. Basically, AMD is marketing hyper threading as being more cores.

What you describe is superscalar execution, which was the point of the original Pentium. That's instruction-level parallelism, not thread-level parallelism. Also, the Pentium Pro/Pentium II had three FPUs.

It's lame that this comment is modded insightful, you're making shit up.

So much wrong in this thread... (5, Insightful)

thesandbender (911391) | about 8 months ago | (#46539207)

AMD's Bulldozer cores have a Clustered Integer Core [wikipedia.org] design, which has two true ALU "cores" and one shared FPU. For integer instructions this is two true cores and not "hyper-threading". For FP instructions this is "hyper-threading", which is why Intel has been regularly handing AMD its arse in all benchmarks that aren't strictly ALU-dependent (gaming, rendering, etc). AMD's FPU implementation, clock for clock, is a bit weaker on most instructions as well. And yes, the FPU _is_ shared on AMD processors.
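The shared-FPU effect can be sketched with a toy throughput model (the unit counts and op counts here are simplified stand-ins, not the real microarchitecture):

```python
# Toy model: two threads issuing work to a module's execution pipes.
# Each pipe retires one op per cycle; work divides evenly across pipes.
def module_cycles(int_ops, fp_ops, int_pipes, fp_pipes):
    int_cycles = -(-int_ops // int_pipes)   # ceiling division
    fp_cycles = -(-fp_ops // fp_pipes)
    return max(int_cycles, fp_cycles)

# Two threads, each with 100 integer and 100 FP ops (hypothetical mix).
# CMT-style module: two private integer clusters, one shared FPU.
cmt = module_cycles(int_ops=200, fp_ops=200, int_pipes=2, fp_pipes=1)
# Fully duplicated pair of cores: two of everything.
dual = module_cycles(int_ops=200, fp_ops=200, int_pipes=2, fp_pipes=2)

print(cmt, dual)  # -> 200 100: FP-heavy work stalls on the shared FPU
```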

EMT64 is not "32 bits on each 1/2 of the clock cycle". That doesn't even make any sense. EMT64 is true 64-bit. x86-64 does have 32-bit addressing modes when running on non-64-bit operating systems. This is part of the x86-64 standard and applies to AMD, Intel and VIA.

Hardware Queuing Support is part of the Heterogeneous System Architecture [wikipedia.org] open standard and won't even be supported in hardware until the Carrizo APU in 2015. Since this is an open standard, Intel can choose to use it.

Both architectures have shared caches.

WTF does nVidia's IEEE 754 compliance have to do with Intel vs AMD?

I'm not an Intel or AMD fanboy; I try to use the right one for the job. I prefer AMD for certain workloads like web servers, file servers, etc., because they have the most integer bang for the buck. If I'm doing anything that involves FP, I'm going to use an Intel chip. Best graphics solution? ...yeah, I'm not even going to go down that hole.

Re:So much wrong in this thread... (1)

K. S. Kyosuke (729550) | about 8 months ago | (#46539299)

Hardware Queuing Support is part of the Heterogeneous System Architecture [wikipedia.org] open standard and won't even be supported in hardware until the Carrizo APU in 2015. Since this is an open standard, Intel can choose to use it.

The first is not a correction of something that was "wrong in this thread" (if I was even wrong in the first place - there *is* already HW for it in Kaveri, even though the implementation may change in the future), and the second is an opinion (I really don't think that Intel will follow suit any time soon on that).

WTF does nVidia's IEEE 754 compliance have to do with Intel vs AMD?

Well, AMD apparently takes care for the execution units to be completely interchangeable, so that code can be executed on one core or the other as necessary with identical results, which is one of the points of the APUs ("use the right core for the job"). From my "perhaps" you can probably infer that I posited that Intel may not take that care because they simply don't have the motivation.

Re:So much wrong in this thread... (1)

Wizel603 (1367631) | about 8 months ago | (#46539989)

I may be wrong, but I've been led to understand that EMT64 is a typo of EM64T. How true is this?

NO. (0)

Anonymous Coward | about 8 months ago | (#46539303)

A single Intel core is more powerful than 2 AMD cores.

So in other words, they were already ahead, and now they are literally over twice as fast.

Caught up? (0)

Anonymous Coward | about 8 months ago | (#46540243)

How long has it been since Intel produced a desktop CPU on a process above 22nm? Meanwhile, AMD has only managed to get down to 28nm. Pretty soon it will be Intel at 14nm vs AMD at 28nm. And just in case you need someone to do the math for you, that means Intel will be able to fit twice as many transistors in half as much space. It's not like Intel can't put more cores in; their E5 V2 series Xeons have 12 cores plus hyperthreading. AMD chips need all 8 cores just to keep up with 4 in a Haswell i7.
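That "twice as many transistors in half as much space" follows from the usual rough rule that transistor area scales with the square of the linear feature size (a simplification; real node names don't map perfectly to feature sizes):

```python
# Density scaling from a full node shrink, 28 nm -> 14 nm.
old_node, new_node = 28, 14
linear_shrink = old_node / new_node   # 2.0x smaller linear features
density_gain = linear_shrink ** 2     # area shrinks as the square
print(linear_shrink, density_gain)    # -> 2.0 4.0
# i.e. 2x the transistors in 0.5x the area = 4x the density.
```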


Mac mini (0)

ArcadeMan (2766669) | about 8 months ago | (#46538669)

Is Apple waiting for these new CPUs to release an updated Mac mini? It's been 513 days [macrumors.com] since the last update.

Re:Mac mini (1)

ChunderDownunder (709234) | about 8 months ago | (#46539741)

I'll wait for the mac nano.

They could halve the price if they abandoned Intel for their own A7 chip. i.e. iPad internals with 8GB RAM running OS X.

Needs a better DMI link / more PCI-e lanes (1)

Joe_Dragon (2206452) | about 8 months ago | (#46538675)

The non-extreme / server ones are very limited on PCI-e lanes, and even in systems like the Mac Pro the PCI-e limits / DMI hold it back.

The Mac Pro should have had 2 SSDs, but due to those limits it only has one.

Re:Needs a better DMI link / more PCI-e lanes (2)

Billly Gates (198444) | about 8 months ago | (#46539535)

It has a PCI Express SSD, which is several times faster since it's directly on the PCI-e bus. It's rated for over 700 MB/s.

Re:Needs a better DMI link / more PCI-e lanes (1)

Joe_Dragon (2206452) | about 8 months ago | (#46540089)

The number of lanes is too low.

Only compatible ... (0)

Anonymous Coward | about 8 months ago | (#46538699)

... with thick wallets.

How to cripple good hardware (-1)

Anonymous Coward | about 8 months ago | (#46538767)

How to cripple otherwise good hardware?

Intel graphics.

They're crap; always have been, and there's no reason to expect this to be any different.

The only reason for an Intel gfx chip is that they're cheap.

Re:How to cripple good hardware (0)

Anonymous Coward | about 8 months ago | (#46538909)

Not true at all. The current generation is excellent, to the point that only nVidia's highest-end 750M and above mobile graphics chips are actually faster than it. Each of the last 3 iterations has brought them step by step closer. It's entirely reasonable to expect that Broadwell will basically be on par with the current best mobile chips from AMD and nVidia.

heard them duck fart underwater before (0)

Anonymous Coward | about 8 months ago | (#46538983)

Heard those claims for years.

Then every time I end up with a system with intel gfx they have never delivered.

Re:How to cripple good hardware (0)

Anonymous Coward | about 8 months ago | (#46539079)

Look up Iris Pro on YouTube. There are a few demo videos of systems with an Iris Pro gfx chip. I think it would work very well for most modern games. There's potential for a Steam box, and it can run full HD video.

This definitely isn't the Intel on-board graphics of old. Hopefully Iris Pro will get good 'Nix drivers.

Re:How to cripple good hardware (1)

Rockoon (1252108) | about 8 months ago | (#46539557)

Look up Iris Pro on Youtube.

Look up the price difference between a chip with Iris Pro and a similarly spec'd chip without. How does the Iris Pro compare with a $200+ standalone GPU?

ding ding ding... now you get it... the Iris Pro is crap, not because it doesn't perform, but because it costs many times what it's actually worth.

Re:How to cripple good hardware (1)

RightSaidFred99 (874576) | about 8 months ago | (#46539651)

Lol, 2001 called, it wants its information back.

Weird Business Strategy (2)

Stormy Dragon (800799) | about 8 months ago | (#46538775)

Other details were scarce, but you can bet that Haswell-E will be Intel's fastest desktop processor to date when it arrives sometime in the second half of 2014. Intel also gave a quick nod to their upcoming 14nm Broadwell CPU architecture, a follow-on to Haswell.

Does anyone else find it kind of weird that Intel seems to have gotten into a pattern where their supposed top of the line CPUs are perpetually a generation behind their supposed commodity CPUs in terms of technology?

Re:Nerdly Business Strategy (0)

Anonymous Coward | about 8 months ago | (#46538947)

News for the newly-hatched nerd: E = M x server x server. Ergo, must be short on addendium. I suggest you use http://beta.slashdot.com/ [slashdot.com] .

server cpus are more complicated (1)

Chirs (87576) | about 8 months ago | (#46539125)

The desktop/laptop processors are easy...single socket, relatively small number of cores.

It takes effort to add the bits to allow the processors to scale to 10/12 cores, huge caches, and multiple sockets. They also use more complicated memory modules, different motherboards, etc.

Also, large companies are able to get their hands on limited quantities of these cpus well before they're generally available for large-scale ordering to allow their engineers to build products on them and test how they'll behave.

Re:Weird Business Strategy (3, Insightful)

Amtrak (2430376) | about 8 months ago | (#46539143)

This is because these chips are meant for the server and workstation market, where stability and longevity are more important than bleeding-edge tech. As long as they stay the fastest chips you can buy, who cares if they are a process node behind? Not the businesses actually buying them. If you want a "kickass" gaming machine, save your money and don't buy an E-series Intel.

Re:Weird Business Strategy (1)

rsborg (111459) | about 8 months ago | (#46539219)

Other details were scarce, but you can bet that Haswell-E will be Intel's fastest desktop processor to date when it arrives sometime in the second half of 2014. Intel also gave a quick nod to their upcoming 14nm Broadwell CPU architecture, a follow-on to Haswell.

Does anyone else find it kind of weird that Intel seems to have gotten into a pattern where their supposed top of the line CPUs are perpetually a generation behind their supposed commodity CPUs in terms of technology?

Not at all - the commodity CPU customers can do beta test for the more risk-averse enterprise server CPU customers.

Re:Weird Business Strategy (1)

Kjella (173770) | about 8 months ago | (#46539297)

Does anyone else find it kind of weird that Intel seems to have gotten into a pattern where their supposed top of the line CPUs are perpetually a generation behind their supposed commodity CPUs in terms of technology?

They're not really consumer CPUs; they're a spin-off of Intel's server/workstation CPUs for the enterprise. That market requires a lot of validation and is generally very conservative, preferring tried and true technology, so it's not unnatural for server chips to lag behind consumer chips by a generation, and so the "enthusiast" processors aren't ready until the Xeons are. My guess is that most of them are "damaged goods": server CPUs with ECC, QPI, vPro, TXT or other essential server features broken, but if you pair one with a high-end motherboard you can sell it for $1000 to the "money is no object" segment.

There's no business reason for Intel to make a CPU just for serving the high-end desktop market; sure, each chip is very profitable, but they don't sell in big volume. If you look at the benchmarks, the two extra cores don't help you in games at all; even with dual high-end video cards you're still GPU-limited. Sure, if you're doing video encoding, 3D rendering or any other task that'll load all six cores fully, it's faster. If 64GB (8x8GB) vs 32GB (4x8GB) RAM matters to you, then sure. But we're talking very narrow use cases here; even if AMD were able to give them competition I'm quite sure they'd follow the Xeon roadmap anyway. Oh, and they are first to adopt DDR4 (since the Xeons are), but I'm not sure that's actually an advantage; my guess is the premiums on non-ECC DDR4 modules will be huge like everything else.

Re:Weird Business Strategy (1)

petermgreen (876956) | about 8 months ago | (#46539311)

It makes sense for a couple of reasons

1: Intel desperately want to stop the portable computing market moving away from laptops and laptop-like tablets towards smartphone-like tablets. To do that they need to get the most power efficient technology possible into ultrabooks and ultrabook-like tablets.
2: Making a design work properly with 2-4 cores on one chip for laptops and mainstream desktops is a lot simpler than making it work properly with 8+ cores and inter-chip links for a server part (and the high end desktop parts are basically server parts with the inter-chip links disabled and overclocking enabled).

It is a pain for the high-end desktop users who have to choose between a low-end platform and a core design that is a generation behind, and as such it probably cuts into Intel's high-end desktop sales, but ultimately those high-end desktop users are a small part of the market.

Re:Weird Business Strategy (1)

triffid_98 (899609) | about 8 months ago | (#46539449)

Because

#1. For CPU heavy loads you probably have more than one CPU per board.

#2. Most people don't use their 1U Rack-Mount Servers to play Crysis and TitanFall, they just need to handle a crap-ton of threads/ram/drives. Therefore having the latest built-in GPU features does nothing useful.

#3. Stability > Core Speed

Re:Weird Business Strategy (2)

radarskiy (2874255) | about 8 months ago | (#46540087)

The design-side motivation is to alternate architectural changes with process shrinks so that you're not trying to debug both at the same time. Prescott tried that, and look how that turned out.

The marketing motivation is that the buyer of the commodity part is more price sensitive and the buyer of the performance part is more feature sensitive. You use the shrunk process for commodity parts first due to the increased die per wafer, which give you both greater volume and lower cost per die so that you can still maintain margins at lower price points.

Re:Weird Business Strategy (1)

bloodhawk (813939) | about 8 months ago | (#46540195)

Nothing weird here at all: commodity vs. reliability. Stable, tested, proven chips generally stay one step behind. Consumer commodity chips give them a chance to weed out any problems without placing risks on chip lines that simply MUST work.

Bout time... (4, Funny)

LoRdTAW (99712) | about 8 months ago | (#46538797)

Finally! I have been waiting for next gen Iris graphics [computerhistory.org] since like forever!

Why? (0)

Anonymous Coward | about 8 months ago | (#46539431)

Integrated Intel graphics are the worst in the industry in terms of performance. This has been the case for many, many years. I simply don't trust Intel on this score after being repeatedly promised "game changing" designs and the result is cr*p, quite frankly.

If you must have an integrated graphics solution, go with AMD. Their implementation is much superior. If you want the best CPU and the best graphics (again, performance-wise), then get an Intel CPU and pair it with a separate graphics card.

The Intel graphics solution is adequate for many business PC's, media PC setups, servers and otherwise relatively undemanding graphics computation loads. But I don't believe the Iris hype because, well, 50 times burnt, 51 times shy. Or something like that.

Re:Why? (0)

RightSaidFred99 (874576) | about 8 months ago | (#46539667)

You are full of outdated rubbish. You don't have to "believe" anything, there are benchmarks out there. You won't beat Iris in terms of power/performance.

Re:Why? (1)

Anonymous Coward | about 8 months ago | (#46539745)

But Iris is easily smoked in terms of price/performance.

Pointless (1)

Dan Askme (2895283) | about 8 months ago | (#46538821)

8 cores won't magically make code multi-threaded.
We still live in an era of single-threaded applications and games; it drives me up the wall.

64-bit applications are still mostly 32-bit. It took 10 years, but at least the 4GB memory limit turned some heads.

Re:Pointless (2, Informative)

Anonymous Coward | about 8 months ago | (#46538867)

There's no reason for most programs to be 64-bit. Most programs don't need to address that much RAM, nor do they need the additional registers that you get with 64-bit processors.

Now for programs that use massive amounts of RAM or need the additional registers, going 64-bit makes sense, but it's silly to suggest that there's something wrong with 32-bit programs in general that would be fixed by moving to 64-bit.
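The "that much RAM" ceiling is easy to put numbers on. A quick sketch (the `struct` format `'P'` reports the native pointer size of whatever interpreter runs it):

```python
import struct

# The 32-bit address-space ceiling vs. the 64-bit one.
addr_32 = 2 ** 32          # 4 GiB of addressable memory
addr_64 = 2 ** 64          # 16 EiB, far beyond any real workload
print(addr_32 // 2**30)    # -> 4 (GiB)

# Native pointer width on the running interpreter:
ptr_bytes = struct.calcsize("P")
print(ptr_bytes * 8)       # 32 or 64, depending on the build
```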

Re: Pointless (0)

Anonymous Coward | about 8 months ago | (#46539245)

ASLR

Re:Pointless (1)

Salgat (1098063) | about 8 months ago | (#46539341)

64-bit is advantageous since you're no longer running under the WOW64 compatibility layer (all 32-bit Windows programs run under this layer on 64-bit Windows). The overhead isn't large, but it does exist. There are few reasons not to just run native.

Re:Pointless (1)

viperidaenz (2515578) | about 8 months ago | (#46539565)

But your memory usage will be lower if all your pointers are 4 bytes instead of 8.
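Rough numbers on that pointer overhead (the node count and pointers-per-node here are hypothetical, just to show the doubling):

```python
# Cost of 8-byte vs 4-byte pointers in a pointer-heavy structure:
# say, a tree of one million nodes with three pointers per node.
nodes, ptrs_per_node = 1_000_000, 3
size_32 = nodes * ptrs_per_node * 4   # bytes with 32-bit pointers
size_64 = nodes * ptrs_per_node * 8   # bytes with 64-bit pointers
print(size_32 // 2**20, size_64 // 2**20)  # -> 11 22 (MiB, truncated)
```

The doubled pointer footprint also eats cache, which is part of why some workloads actually run slower as 64-bit builds.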

Re:Pointless (3, Insightful)

Anonymous Coward | about 8 months ago | (#46539221)

I just did a ps -e | wc -l and got 245. Maybe most of my processes are only single threaded but since there's 245 of them I'm glad my processor has 8 hardware threads to handle them.

Re:Pointless (1)

Blaskowicz (634489) | about 8 months ago | (#46540027)

A dual core solves that already. That allows your most CPU hungry process to use 100% of one core (when it does) while your 244 other processes use about 10 to 20% of the other core.

Re:Pointless (2, Insightful)

lgw (121541) | about 8 months ago | (#46539463)

The few times I'm ever waiting on CPU, it's multi-threaded. Video transcoding, occasionally compiling. I can't remember the last time I heard of a game being CPU bound - that's always GPU-bound these days.

Re:Pointless (1)

Blaskowicz (634489) | about 8 months ago | (#46540049)

Games are often CPU-bound, or rather have some significant CPU requirements; it's just that new graphics cards are always benched on fast CPUs, and "gamers" tend to keep their hardware up to date. If you put a good graphics card on an old, unspectacular CPU, your games may run like crap.

Re:Pointless (1)

UnknownSoldier (67820) | about 8 months ago | (#46539657)

> We still live in this era of Single threaded games,

That hasn't been true since the PS3 and Xbox 360 days.

Yes, a lot of (PC) indie games are single-threaded, but any game that ships on consoles is multi-threaded.

Re:Pointless (1)

Blaskowicz (634489) | about 8 months ago | (#46540075)

A point that I read somewhere is that even though they're multithreaded, they largely have "the rendering thread", "the audio thread", "the physics thread", etc.
Few games are really well multi-threaded. On the other hand, this puts a cap on runaway CPU requirements.

How about 2 fast cores instead of 8 slow ones? (1)

JoeyRox (2711699) | about 8 months ago | (#46538833)

We've been limping along with ~10% performance increases per chip generation since forever.
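For what it's worth, ~10% per generation compounds slower than the old days but isn't nothing; a quick sketch of the arithmetic:

```python
import math

# How many ~10%-per-generation steps does it take to double performance?
per_gen = 1.10
gens_to_double = math.log(2) / math.log(per_gen)
print(round(gens_to_double, 1))    # -> 7.3 generations

# Five generations of 10% gains compound to about 61%, not 50%:
print(round(per_gen ** 5 - 1, 2))  # -> 0.61
```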

Re:How about 2 fast cores instead of 8 slow ones? (2)

Ken_g6 (775014) | about 8 months ago | (#46539059)

You asked for it, you got it! [anandtech.com] Though the downside is these two fast cores don't include AVX, AVX2, or a few other instruction sets.

Re: How about 2 fast cores instead of 8 slow ones? (0)

Anonymous Coward | about 8 months ago | (#46539191)

How are increases in speed, "limping?" This year's machine is the fastest you have ever had. Next year is even faster. Limp?

Re: How about 2 fast cores instead of 8 slow ones? (0)

Anonymous Coward | about 8 months ago | (#46539277)

Limping means that the speed upgrades in new processor generations aren't what they used to be. Is that so hard to understand?

Re: How about 2 fast cores instead of 8 slow ones? (2)

UnknownSoldier (67820) | about 8 months ago | (#46539673)

Silicon tops out at ~5 GHz.
Germanium tops out at ~500 GHz.

The average consumer doesn't give a rat's ass about GHz, which means you won't see cheap 10 GHz CPUs anytime soon.

Hell, we're STILL waiting for the Knights Corner / Knights Landing 48+ core CPUs to ship to the general public.

Re:How about 2 fast cores instead of 8 slow ones? (1)

Salgat (1098063) | about 8 months ago | (#46539363)

The only reason you saw phenomenal speed increases on single cores in the past was that we were nowhere near the frequency barrier. Going from 200 to 400 MHz was extremely easy compared to going from 4 GHz to 8 GHz, which isn't even possible except under exotic conditions.

Re:How about 2 fast cores instead of 8 slow ones? (1)

Hamsterdan (815291) | about 8 months ago | (#46539839)

This, but AMD with the Athlon helped a lot (at 33 MHz a year, we might have 2 GHz CPUs now); otherwise Intel wouldn't have had any incentive to push clock speeds that fast. If AMD were kicking them again, I'm pretty sure those exotic conditions wouldn't be such a barrier anymore.

AMD posts go here (-1, Troll)

dave562 (969951) | about 8 months ago | (#46538871)

Feel free to consolidate all of the anti-Intel, pro-AMD posts here.

I will get it started to help out.

My AMD chip runs twice as fast at half the power, overclocked to 5 GHz on air. It's totally stable. Only idiots buy Intel chips.

Re:AMD posts go here (1)

cheesybagel (670288) | about 8 months ago | (#46539589)

Actually, I run an AMD processor. So what if it has half the FP power? Most FP-intensive applications I use have GPU acceleration. Oh, and it was cheaper than an Intel processor with the same integer performance. Heck, it was cheaper than an Intel processor with the same FP performance. That's how expensive Intel processors are these days.

If you got yourself a PS4 or an Xbox One, you are using an AMD processor.

Re:AMD posts go here (1)

Blaskowicz (634489) | about 8 months ago | (#46540117)

But with AMD you have a higher power bill, need to buy a bigger heatsink, and have to steer clear of the lowest-end motherboards. It ain't exactly cheaper.

So they finally caught up to Parallax Inc.? (1)

spiritplumber (1944222) | about 8 months ago | (#46538915)

I've been using 8-core chips since 2006... and so have most people who use Parallax microcontrollers. I still wonder why the Arduino made such a splash, since it came out a couple of years after the Prop.

Re:So they finally caught up to Parallax Inc.? (1)

Salgat (1098063) | about 8 months ago | (#46539415)

You have to remember that Intel and AMD work inside a very limited silicon die area. It would be trivial for them to make a 100-core CPU if they wanted, but per-core performance would go down drastically. It all comes down to core quantity versus core performance (to put things in perspective, even an ancient single-core Celeron from the early 2000s would outperform a Parallax 8-core CPU). As for why the Arduino made such a splash, that's because Atmel microcontrollers come with heavy peripheral support, heavy documentation, and fantastic support for hobbyists through their free IDEs. Atmel is a big corporation that still pushes hard for hobbyist adoption.
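
The core-quantity-versus-core-performance tradeoff above is the classic Amdahl's law argument: once any of the workload is serial, piling on cores stops paying off. A quick sketch (the 80% parallel fraction is an arbitrary illustration, not a measured figure):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: speedup over one core when only parallel_fraction
    of the workload can be spread across the cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# With 80% of the work parallelizable, the serial 20% caps the
# speedup at 5x no matter how many cores you add.
few_fast  = amdahl_speedup(0.8, 8)    # 8 cores:   ~3.33x
many_slow = amdahl_speedup(0.8, 100)  # 100 cores: ~4.81x
```

Going from 8 to 100 cores here buys less than 1.5x, which is why a 100-core desktop chip with correspondingly weaker cores would be a net loss on most software.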

What's with the past tense in the headline? (1)

wonkey_monkey (2592601) | about 8 months ago | (#46538929)

Intel Announced 8-Core CPUs And Iris Pro Graphics for Desktop Chips

Okay, I know that strictly speaking it did happen in the past, but that's not how headlines are usually written.

DDR4? (1)

sshir (623215) | about 8 months ago | (#46538969)

Am I the only one concerned about the amount of RAM?

I mean, everybody is so excited about DDR4... But do people understand that instead of 8 DIMM slots we'll get only 4 (1 DIMM per channel instead of 2-3)? So while keeping costs on this side of reasonable, we're getting only half the memory capacity?

WTF?!!

Re:DDR4? (0)

Anonymous Coward | about 8 months ago | (#46539113)

Enthusiast mobos mostly have only 4 slots anyway. And Intel showed a Haswell-EP system with 3 DIMM slots per channel while they keep saying it's 1 per channel; clearly we haven't gotten the full story.

Re:DDR4? (0)

Anonymous Coward | about 8 months ago | (#46539259)

Full story: DDR4-LRDIMM

Re:DDR4? (1)

petermgreen (876956) | about 8 months ago | (#46539367)

Enthusiast mobos mostly only have 4 slots anyway.

Define "Enthusiast mobos", there are plenty of LGA2011 desktop boards with 8 dimm slots.

And Intel showed a Haswell-EP system with 3 DIMM slots per channel while they keep saying it's 1 per channel; clearly we haven't gotten the full story.

That's EP, not E. It wouldn't surprise me if DDR4 desktop memory only supports 1 DIMM per channel while registered ECC DDR4 server memory supports more, just as with DDR3, where the desktop stuff maxed out at two DIMMs per channel while the server stuff went up to three.

Re:DDR4? (1)

dugancent (2616577) | about 8 months ago | (#46540009)

"Enthusiast mobos" - anything on the front page of Newegg.

Re:DDR4? (1)

triffid_98 (899609) | about 8 months ago | (#46539211)

I mean, everybody is so excited about DDR4... But do people understand that instead of 8 dimm slots we'll get only 4

No... not everyone. Going from DDR2 to DDR3 netted fractional gains in real-world applications, and indications are that the same will be true going from DDR3 to DDR4.

Also, plenty of consumer-level boards only have 4 DIMM slots now, which has always been plenty for most people, ever since we moved up from DDR1 boards and their crappy 2GB-per-stick limit.

Re:DDR4? (1)

Rockoon (1252108) | about 8 months ago | (#46539625)

No...not everyone. Going from DDR2 to DDR3 netted fractional gains in real world applications and indications are that the same will be true going from DDR3 to DDR4.

To put a bullseye on this: it's because latencies haven't really changed. It's a rare workload that isn't either CPU-limited or RAM-latency-limited, rather than RAM-bandwidth-limited. DDR4 isn't going to change that.
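
A rough way to see the latency-versus-bandwidth distinction is to compare a sequential scan (prefetchable, bandwidth-bound) with a pointer chase, where each load depends on the previous one and so pays the full memory latency every step. Python is a blunt instrument for this, so treat it as a sketch of the two access patterns rather than a rigorous benchmark:

```python
import random
import time

def sattolo_cycle(n):
    """Random single-cycle permutation (Sattolo's algorithm): following
    p[i] repeatedly visits every element exactly once before returning."""
    p = list(range(n))
    for i in range(n - 1, 0, -1):
        j = random.randrange(i)   # j < i, so no element maps to itself
        p[i], p[j] = p[j], p[i]
    return p

N = 200_000

# Bandwidth-friendly: sequential scan. The next address is predictable,
# so hardware prefetch can hide the memory latency.
data = list(range(N))
t0 = time.perf_counter()
total = sum(data)
seq_time = time.perf_counter() - t0

# Latency-bound: pointer chase. Each load depends on the previous one,
# so every step waits out a full round trip to memory.
perm = sattolo_cycle(N)
t0 = time.perf_counter()
i = 0
for _ in range(N):
    i = perm[i]
chase_time = time.perf_counter() - t0
```

Faster DRAM transfer rates speed up the first pattern; only lower latency speeds up the second, which is why DDR3-to-DDR4 gains are fractional on most real workloads.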

Re:DDR4? (2)

rogermcdodger (1113203) | about 8 months ago | (#46539271)

You'll have 16GB unbuffered DIMMs, so you aren't losing anything. With Haswell-EP, using LR-DIMMs allows 3 per channel, for 768GB per CPU.

Re:DDR4? (1)

petermgreen (876956) | about 8 months ago | (#46539395)

So while keeping costs on this side of reasonable, we're getting only half the amount of memory?

I suspect it will be a pain when the platform first comes out, but in time 16GB desktop DDR4 modules will become affordable, while I doubt 16GB desktop DDR3 modules ever will (if the boards even support them).

Re:DDR4? (1)

Blaskowicz (634489) | about 8 months ago | (#46540163)

Fun story: AMD supports 16GB unregistered DDR3 DIMMs, but Intel CPUs don't, except the 8-core Atom and presumably Broadwell. If those 16GB DIMMs ever get affordable and readily available, it will probably be in 2015, when there are Broadwell desktops/laptops around.

Iris Pro is a white elephant (4, Interesting)

edxwelch (600979) | about 8 months ago | (#46539075)

The eDRAM simply makes the chip way too expensive.
Look at the price of the Core i7-4770R: $358. It's an i7, but it only has 6 MB of cache (compared to the 8 MB of the regular i7-4770). So basically, it's about the same value as an i5-4670K, which costs $243. With the price difference you could buy a Radeon R7 260X, which will trash Iris Pro in performance.

Re:Iris Pro is a white elephant (2)

Blaskowicz (634489) | about 8 months ago | (#46540169)

6 MB of L3 plus 128 MB of L4 actually gets you a faster CPU than 8 MB of L3 alone.

Only took seven years and 3 process nodes... (0)

Anonymous Coward | about 8 months ago | (#46539187)

The first quad core desktop CPU from Intel was the 65nm Core 2 Q6600 [wikipedia.org] released more than seven years ago. Now that it is possible to fit more than eight times the number of transistors into the same area, Intel throws enthusiasts an astronomically priced bone? How generous.

Re:Only took seven years and 3 process nodes... (1)

viperidaenz (2515578) | about 8 months ago | (#46539715)

They used those 8x more transistors to increase performance per clock.
I couldn't find any current quad-core Haswell CPUs with a 2.4 GHz clock like the Q6600, but an i5-4430 is twice as fast, despite having less cache.
The Q6600's multi-threaded performance is on par with a dual-core G1820 Celeron, and the Celeron is much better at single-threaded performance and uses half the power, despite having integrated graphics in there too.

They also moved the memory controller into the CPU. That takes up space.

Three Article links are all the same? (3)

itsybitsy (149808) | about 8 months ago | (#46539295)

At the time I posted this comment, the three links in the article all go to one page, http://hothardware.com/News/In... [hothardware.com]. Oops. Could /. or the author correct the links (assuming two are missing) or remove two of them? We really don't need three links all going to the same page. Thanks a bunch.

What's a "desktop"? (1)

grumpyman (849537) | about 8 months ago | (#46539319)

Really.

What's a "grumpyman"? (3, Funny)

viperidaenz (2515578) | about 8 months ago | (#46539719)

Really.

And the Mac Pro is now lagging again. (1)

LWATCDR (28044) | about 8 months ago | (#46539381)

Unless they refresh the Pro when the chip launches, or soon after, the Pro is back to being too expensive for the performance.

Re:And the Mac Pro is now lagging again. (2)

Billly Gates (198444) | about 8 months ago | (#46539519)

The Pro has a 10+ core option if you max it out.

99% of the population has no need for it! (0)

Anonymous Coward | about 8 months ago | (#46539603)

Modern boxes are so damn fast already that the software is 20-30 years behind!
First we need a new, inherently multithreaded parallel language, then we need to get the programming community up to speed with it. Another 8-10 years to get an OS that supports it...

timing is everything (0)

Anonymous Coward | about 8 months ago | (#46539921)

Heh, not even a week after building myself a new system....

Re:timing is everything (1)

mister_playboy (1474163) | about 8 months ago | (#46540077)

Our 4770Ks will still have much better performance per dollar than these E-chips.

I'm not happy to find that we got gimped out of VT-d by buying the current top chip, however. Being able to (possibly) run Windows only in a VM for gaming, while using Linux as the host, would be awesome.
