
The Future of Intel Processors

Zonk posted more than 7 years ago | from the more-core-lads-throw-more-cores-on-there dept.

Intel 164

madison writes to mention coverage at ZDNet on the future of Intel technology. Multicore chips are their focus for the future, and researchers at the company are working on methods to adapt them for specific uses. The article cites an example where the majority of the cores are x86, with some accelerators and embedded graphics cores added on for extra functionality. "Intel is also tinkering with ways to let multicore chips share caches, pools of memory embedded in processors for rapid data access. Cores on many dual- and quad-core chips on the market today share caches, but it's a somewhat manageable problem. 'When you get to eight and 16 cores, it can get pretty complicated,' Bautista said. The technology would prioritize operations. Early indications show that improved cache management could improve overall chip performance by 10 percent to 20 percent, according to Intel." madison also writes, "In other news, Intel has updated its Itanium roadmap to include a new chip dubbed 'Kittson' to follow the release of Poulson. That chip will be based on a new microarchitecture that provides higher levels of parallelism."


Interesting! Cell is making waves after all... (4, Funny)

seebs (15766) | more than 7 years ago | (#19520839)

I think Cell's taught us two important things about heterogeneous multicore:
1. It's fairly hard to develop for.
2. It's bloody fast.

Looks like Intel's gonna be running with it some; that's good news for anyone making a living selling compilers! :) Buy stock in gcc...

gcc? (2, Insightful)

everphilski (877346) | more than 7 years ago | (#19521005)

Buy stock in gcc..

Yeah, 'cause, you know, Intel doesn't make their own compiler [intel.com] (http://www.intel.com/cd/software/products/asmo-na/eng/compilers/284132.htm)...

Re:gcc? (2, Informative)

walt-sjc (145127) | more than 7 years ago | (#19521245)

It's a joke, son. GCC is GPLed.

Re:gcc? (1)

seebs (15766) | more than 7 years ago | (#19521825)

You know, come to think of it, IBM has a compiler too.

Maybe, uhm... A joke?

(That said, stuff like this IS good news for anyone working on gcc professionally, potentially, although it does have the short-term impact of creating a class of apps where gcc isn't going to be as good as the industrial and research compilers for a while.)

The future of Intel, with AMD following (0)

Anonymous Coward | more than 7 years ago | (#19521815)

Cool, I guess whatever we read about the future of Intel chips is the same as reading about the future of AMD's chips, since AMD always follows Intel by at least a year, cloning whatever they do.

Instead of more power (1, Interesting)

nurb432 (527695) | more than 7 years ago | (#19520863)

How about more code efficiency? That would improve overall security too.

If people coded properly, we wouldn't need this 'speed race' just to watch our word processors and browsers get slower and slower each release...

Re:Instead of more power (5, Insightful)

CajunArson (465943) | more than 7 years ago | (#19521059)

That would also improve overall security too.

I hate to break it to ya, but in a low-level language like C, doing proper bounds checks and the data sanitization required for security does not help performance (although it doesn't harm it much either, and should of course always be done).

There is a lot of bloated code out there, but the bad news for people who always post "just write better code!" is that the truly processor-intensive stuff (like image processing and 3D games) is already pretty well optimized to take advantage of modern hardware.

There's also the question of what "good code" actually is. I could write a parallelized sort algorithm that would be nowhere near as fast as a decent quicksort on modern hardware. However, on hardware 10 years from now with a large number of cores, the parallelized algorithm would end up being faster. So which one is the 'good' code?

As usual, real programming problems in the real world are too complex to be solved by 1-line Slashdot memes.
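
A minimal sketch of that trade-off, assuming C++11's std::async is available (the 10,000-element cutoff and the task depth are made up for illustration, not tuned):

    // Task-parallel merge sort: on a machine with one or two cores the task
    // overhead and the final merges make it slower than a tuned std::sort;
    // with many cores the sub-sorts run concurrently and it can pull ahead.
    #include <algorithm>
    #include <cstddef>
    #include <future>
    #include <vector>

    void parallel_merge_sort(std::vector<int>& v, std::size_t lo, std::size_t hi,
                             int depth) {
        if (hi - lo < 10000 || depth <= 0) {   // small or deep enough: sort serially
            std::sort(v.begin() + lo, v.begin() + hi);
            return;
        }
        std::size_t mid = lo + (hi - lo) / 2;
        // Sort the left half on another core while this thread sorts the right half.
        auto left = std::async(std::launch::async, parallel_merge_sort,
                               std::ref(v), lo, mid, depth - 1);
        parallel_merge_sort(v, mid, hi, depth - 1);
        left.get();
        std::inplace_merge(v.begin() + lo, v.begin() + mid, v.begin() + hi);
    }

Called as parallel_merge_sort(data, 0, data.size(), 3), it fans out into at most eight concurrent sorts; which version counts as the "good" code really does depend on how many cores the machine of the day has.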

Re:Instead of more power (-1, Troll)

nurb432 (527695) | more than 7 years ago | (#19521325)

Better code = less bloat = better performance and security.

Sure, you can cite specific examples of what may currently be 'good, but useless code' ( your 3D games are an example of *totally* worthless code, regardless of the 'quality' ) but the above statement holds true. ( and it wasn't to create a discussion, it was a simple statement of fact. Anything different is incorrect. )

Re:Instead of more power (2, Insightful)

fitten (521191) | more than 7 years ago | (#19521493)

Define "bloat". For example, do you classify 'features', as in adding more of them, as bloat? I think the word "bloat" is thrown around so much that few people have a good definition of it anymore. For example, features (what lots of people call 'bloat') that aren't used *shouldn't* cause performance issues as the code for them isn't executed.

Besides, if we stopped adding features, we'd still be using things like ed for editing (and 'word processing'), our games would still be like Pong, and our remote access would still be VT52 terminals.

Still using pong and VT52 terminals (-1)

nurb432 (527695) | more than 7 years ago | (#19521677)

And your point?

Re:Still using pong and VT52 terminals (0)

Anonymous Coward | more than 7 years ago | (#19521787)

Wow... you must be great fun at parties.

Re:Still using pong and VT52 terminals (-1, Troll)

nurb432 (527695) | more than 7 years ago | (#19521923)

At the parties i go to, we don't waste our time playing video games, or editing documents.

Re:Still using pong and VT52 terminals (1)

fitten (521191) | more than 7 years ago | (#19522357)

At the parties i go to, we don't waste our time playing video games, or editing documents.
...or having fun, I'd wager ;)

Re:Still using pong and VT52 terminals (1, Funny)

Anonymous Coward | more than 7 years ago | (#19522407)

At the parties i go to, we don't waste our time playing video games, or editing documents.

Or talking to girls.

Re:Instead of more power (2, Insightful)

morgan_greywolf (835522) | more than 7 years ago | (#19521513)

Better code = less bloat = better performance and security.


The parent's point is that in code where it makes a difference, the code is already thoroughly optimized, in general. Slimming down the code for Microsoft Word or XEmacs or Firefox or Nautilus or iTunes (there, now we've slaughtered everyone's sacred cow!) isn't likely to make much of a difference because apps like these already run plenty fast on modern hardware. Sure, bloat is bad, but it's a lot harder to remove bloat from existing code without removing features than it sounds. If bloat is an issue, use an equivalent app with fewer features -- nano instead of XEmacs, for instance.

Re:Instead of more power (1)

bberens (965711) | more than 7 years ago | (#19522401)

I can't speak for the rest of those apps because I don't use them, but I can assure you that the javascript engine in firefox is pretty slow.

Re:Instead of more power (4, Funny)

ichigo 2.0 (900288) | more than 7 years ago | (#19523119)

For some reason javascript is slow on all browsers. I believe there is a W3C spec that mandates it.

Re:Instead of more power (1)

Vellmont (569020) | more than 7 years ago | (#19521565)


Better code = less bloat = better performance and security.

The thing you've failed to realize is that "bloat" is relative. One man's bloat is another man's "gotta-have-it" feature. Also, the point of the poster was that "better performance" is a moving target.

Programmers don't design software for one guy, with one computer, to be run only next week. They design software for a hundred/thousand/million guys that runs on 200 different computers of different speeds, and for the next several years.

The basic take-home message here is that the computing world changes fast and has a wide diversity of environments. "Better" changes.

Re:Instead of more power (0, Troll)

nurb432 (527695) | more than 7 years ago | (#19521597)

The 'other man's' (as you put it) needs are not relevant.

Re:Instead of more power (1)

drinkypoo (153816) | more than 7 years ago | (#19523191)

The 'other man's' (as you put it) needs are not relevant.

Well, as you are the only important person on the planet, I would like to know what you're planning to do about climate change.

Re:Instead of more power (0)

Anonymous Coward | more than 7 years ago | (#19521915)

your 3D games is an example of *totally* worthless code, regardless of the 'quality' )
Can I please attend your tea party? Or are you busy finding a boyfriend?

Re:Instead of more power (1)

drinkypoo (153816) | more than 7 years ago | (#19523087)

your 3D games is an example of *totally* worthless code, regardless of the 'quality'

What does this mean? I like playing games, and entertainment is not worthless. I can only conclude (from reading and rereading your comment at least six times) that you disagree.

Re:Instead of more power (1)

Nikker (749551) | more than 7 years ago | (#19521767)

I think with all of these cores and such an increase in on-die cache, we should be asking what we can accomplish by staying on-die. As the number of cores increases, so will on-die cache; when we get into the 10MB+ area we could likely do some pretty fancy stuff, and treating the registers of idle cores as memory would add to this. With all this micro-logic, maybe even simple add and move operations will be pushed out to the off-die RAM as a type of pre-processing.

The more cores they add, the more the system will seem to converge into the CPU; as this happens, devices will become very simple, since most of the system will be able to operate in a smaller package. As the system makes more money it will become more and more closed; curiosity will lead to hacks, hacks will lead to other uses, which will give us an interface that makes the whole thing balloon up again....

What a tangled web we weave eh?

Re:Instead of more power (1)

Vexorian (959249) | more than 7 years ago | (#19521957)

Hmmm, how about:

Optimization = more specialized code = less maintainability = bugs are worse = adding features adds bloat = security issues

More powerful processors = less need for optimization

More powerful processors = Compilers take less time to do their job and developers get more time to work on their applications efficiently

Multicore vs. implicit parallelism (4, Interesting)

BritneySP2 (870776) | more than 7 years ago | (#19520873)

While multicores, obviously, have their use, the future belongs to CPUs with massive internal implicit parallelism, IMHO.

Re:Multicore vs. implicit parallelism (1)

twitchingbug (701187) | more than 7 years ago | (#19521907)

What do you mean by implicit parallelism? I wiki'd it [wikipedia.org], but that's at the software layer. Are you saying you should move that into the processor? What level of parallelism are we talking about here?

Re:Multicore vs. implicit parallelism (2, Interesting)

BritneySP2 (870776) | more than 7 years ago | (#19522075)

move that into the processor

In a manner of speaking, yes. For a compiler of a programming language to be able to implement the language's constructs efficiently, there must be an adequate support of those constructs by the target hardware.

On a more general note, the boundaries between hardware and software are always blurred, in that you cannot completely abstract one from another without hurting the performance of the system.

Re:Multicore vs. implicit parallelism (1)

MikShapi (681808) | more than 7 years ago | (#19521965)

While CPUs with massive internal implicit parallelism, obviously, have their use, the future belongs to electric cars, IMHO.

Re:Multicore vs. implicit parallelism (1)

BritneySP2 (870776) | more than 7 years ago | (#19522255)

Yours is a good point, if a bit obvious. Mine was to draw attention to the tendency to talk more about multi-threading than about making individual cores provide radically better support for implicit parallelism.

Re:Multicore vs. implicit parallelism (1)

MikShapi (681808) | more than 7 years ago | (#19522473)

My point had to do with how silly making it look like an either-or scenario is.
Intel is very likely doing both with equal zeal, and the market is at a point where it will pay for useful advances in either.

Re:Multicore vs. implicit parallelism (1)

BritneySP2 (870776) | more than 7 years ago | (#19522843)

There is some asymmetry to this. Speaking of cars, adding cores is, in a sense, like adding more wheels to a car. Simple; but there is an overhead; the performance increase is not proportional to the number of cores, etc. On the other hand, a car that is designed better may not need that many "wheels" anyway.

Re:Multicore vs. implicit parallelism (0)

Anonymous Coward | more than 7 years ago | (#19522663)

Yea, but can you actually say something that makes grammatical sense?

Let's see where this takes us (2)

keithjr (1091829) | more than 7 years ago | (#19520881)

With process sizes getting smaller and smaller, it is interesting to watch new ideas about what to do with that newfound area. The elementary choice always seemed to be "throw on more cores," but accelerators and bridges moving into systems-on-chip look like they might have much nicer prospects.

The average parallelism factor for most programs tends to hover around four. I think Intel might have figured out that this is a decent stopping point for hardware parallelism as well.

Re:Let's see where this takes us (1)

f00man (1056198) | more than 7 years ago | (#19521045)

In the early 1980's I was sure that Y2K would bring desktop machines with >10,000 (neural net) processors and paperless offices. I blame MS, Intel and HP.

I never really expected a flying car though.

But gee (3, Funny)

MrNonchalant (767683) | more than 7 years ago | (#19520917)

What I really want is a dialogue with Intel engineers about this piece of Intel-themed news. Why can't you add something like that to the site? You could call it something like Opinions With Intel or Intel And Opinions or Center for Intel. No that's not quite right.

This story should be posted 8 times (3, Funny)

Timesprout (579035) | more than 7 years ago | (#19520931)

So we can can have comments in parallel.

Re:This story should be posted 8 times (4, Funny)

MyLongNickName (822545) | more than 7 years ago | (#19521117)

Be patient. The other five articles should appear soon.

Re:This story should be posted 8 times (0)

Anonymous Coward | more than 7 years ago | (#19521389)

Before anyone asks, the missing two were disabled to increase comment yield on the others.

Re:This story should be posted 8 times (1)

MonoSynth (323007) | more than 7 years ago | (#19522947)

...but subscribers can beat the rush!

Re:This story should be posted 8 times (2, Funny)

wbren (682133) | more than 7 years ago | (#19521709)

So we can can have comments in parallel.
I think you ramped up your clock frequency too much. Your instructions are overlapping, causing data corruption in the pipeline and grammar mistakes. :-)

Oblig. (1)

techpawn (969834) | more than 7 years ago | (#19520977)

Who's going to need 80 Cores? *ducks*

Re:Oblig. (1)

WrongSizeGlass (838941) | more than 7 years ago | (#19521257)

Who's going to need 80 Cores? *ducks*
Anyone wanting to run Aero on Vista Ultra Optimum Utmost Paramount Ultimate Quintessential Home Edition?

I, for one, am betting Intel loses its shirt on this 80-core hodgepodge. That's why I'm investing my entire retirement savings in Transmeta's Crusoe line.

Re:Oblig. (2, Funny)

walt-sjc (145127) | more than 7 years ago | (#19521307)

What would a duck do with 80 cores? Quack in harmony?

Cell and parallel processing. Answer this for me. (1)

zymano (581466) | more than 7 years ago | (#19520989)

Why isn't parallel processing used more, since more of us will need graphics/math-intensive processors? We don't need faster word processors. The threading direction seems misguided to me. Is the state of parallel processing compilers not workable? I don't want to hear about the stupid '4 diggers to dig 1 ditch' analogy. Cliché.

Re:Cell and parallel processing. Answer this for m (2, Insightful)

keithjr (1091829) | more than 7 years ago | (#19521077)

Well, the analogy I've always heard was "1 woman can have 1 baby in 9 months, but 9 women can't have 1 baby in 1 month." Lesson here: not everything is as "parallelizable" as digging a ditch. Data dependency in single execution threads means there often simply isn't enough independent work that can be done at once. Moreover, it is often left up to the user (or third-party vendors) to create the application library to take advantage of parallel processing. Almost all code being run at this moment was written in a serial, higher-level language (such as C++) for serial execution (even if it utilizes threading in the OS). The Cell didn't provide a very good API, and even trivially parallelizable algorithms often have to be rewritten in assembly code to take full advantage of the available hardware. And that just plain sucks.
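
A tiny illustration of that data-dependency point in C++ (both loops are hypothetical stand-ins, not code from any real application):

    #include <cstddef>
    #include <vector>

    // Embarrassingly parallel: each element depends only on the input, so the
    // iterations could be split across any number of cores.
    void brighten(std::vector<float>& pixels, float gain) {
        for (std::size_t i = 0; i < pixels.size(); ++i)
            pixels[i] *= gain;
    }

    // Serial by nature: each element depends on the one just written, so this
    // loop cannot simply be handed to more cores. A parallel prefix-scan exists,
    // but that means rewriting the algorithm, which is exactly the problem.
    void running_total(std::vector<float>& v) {
        for (std::size_t i = 1; i < v.size(); ++i)
            v[i] += v[i - 1];
    }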

Re:Cell and parallel processing. Answer this for m (1)

LWATCDR (28044) | more than 7 years ago | (#19521179)

Okay, how is threading not parallel processing?
One of the great difficulties of the Cell is its asymmetrical nature. With a Cell you have to do a lot more resource management than with a symmetrical multiprocessor system. I have not worked with the Cell, but one of the issues I could see cropping up is that it may be a little light on non-floating-point resources. With only one PPC core there may be issues with keeping all the SPEs busy.
The 360 is no slouch when it comes to floating point but has a lot more general-purpose CPU power than the PS3. The PS3 will kill the 360 in things like transcoding video, but the 360 may be a better mix of capabilities than the PS3.

Begins At Microsoft With (0)

Anonymous Coward | more than 7 years ago | (#19521459)


    Data Parallel Haskell [haskell.org]

Re:Cell and parallel processing. Answer this for m (0)

Anonymous Coward | more than 7 years ago | (#19521745)

"We don't need faster word processors."

No, but "we" seem to need to run ever larger and faster databases.

intel chips (-1)

Anonymous Coward | more than 7 years ago | (#19521049)

Intel chips make me very sad in the pants.

Go AMD!

News from the future (0)

Anonymous Coward | more than 7 years ago | (#19521509)

The once el-cheapo knockoff of Intel/nVidia known as Awful Micro Devices/Awful Technologies INC has recently announced they have liquidated all of their assets to pay off all of their massive debts, which include all of their personal debts. Word has it all of the execs and stockholders are still in major debt and have committed suicide. There are even rumours of people at Microsoft committing suicide from massive debt, including Steve Ballmer and his recent fiancé Bill Gates. No word from Linus Torvalds, now the richest person in the world.

If you may recall, Linus recently won a 150 billion dollar lawsuit against Microsoft for using code that was under the GPL in Windows kernels ranging from Windows NT 4.0 to Windows Vista. The Supreme Court judges found that Microsoft violated the GPL and ordered Microsoft to pay Linus the 150 billion dollars. Very desperate, Microsoft then invested in AMD/ATI in an attempt to keep their monopoly. They instead lost the rest of their money when most people went to Intel/NVidia for their support of GNU/Linux. GNU/Linux now has far better support for Windows software since the judges ordered Microsoft to place all of their code under the GPL. The support for all Windows software far exceeds that of even Windows itself.

In other news, the economies of all countries around the world are now in a state of improvement.

shitdot sheeple should slit their fucking wrists (0)

Anonymous Coward | more than 7 years ago | (#19522687)

G0 & SLI7 Y0UR FUCKING WRIS7S FUCK7ARDED LINSUX FUCK7ARD!

For the long term (2, Insightful)

ClosedSource (238333) | more than 7 years ago | (#19521103)

Intel needs to develop new processor technologies to significantly increase native performance rather than just adding more cores. Whether multi-core processors can significantly increase performance for standard applications hasn't yet been proven and even if possible, will depend on the willingness of developers to do the extra work to make it happen.

If software developers can't or won't take advantage of the potential benefits of multi-core, Intel and AMD may have to significantly cut the price of their processors because upgrading won't add much value.

Re:For the long term (3, Insightful)

timeOday (582209) | more than 7 years ago | (#19521947)

Intel needs to develop new processor technologies to significantly increase native performance rather than just adding more cores.
Figure out how to do that and you will be a rich man. The move to multi-core is a white flag of surrender in the battle against the laws of physics to make a faster processor, no doubt about it. The industry did not bite the bullet of parallelism by choice.

Re:For the long term (2, Informative)

0xABADC0DA (867955) | more than 7 years ago | (#19522025)

That sounds like what they are doing, improving performance by making more things native.

For example, they could put a Java bytecode interpreter "cpu" into the system. Java CPUs didn't take off because a mainstream processor would always have better process and funding, and you had to totally switch to Java. But if everybody had a Java "cpu" that only cost $0.25 extra to put in the chip and got faster as the main CPU got faster, then it might actually be useful (incidentally .NET bytecode is too complicated to run directly in a cpu).

Alternatively, they could put in generic garbage collection as a separate processor that runs all the time. This could accelerate Python, Java, .NET, Perl, Ruby, Smalltalk, and any number of other 'slow' languages that people are using anyway. They can add in a Cell-like CPU whose only purpose is LZW-style compression or hashes, or these could be just *really* slow uninterruptible instructions only available on some cores... leaving others to handle interrupts and whatnot.

I don't think multi-threaded code is necessarily the only way to take advantage of multiple cores.

Re:For the long term (1)

DNeoMatrix (1098085) | more than 7 years ago | (#19522515)

I think what we need are (STANDARD) commands to say: how many cores? Okay - throw this thread on core A, this thread on core B, and let me handle the interlinks, from the programming end. I know you can probably do this in ASM or with interrupts, and probably from C, but it really has to get a lot more straightforward. Once it's as easy as saying it, it will be more likely that an average programmer will use that mechanism, and thus programs as a whole will begin to pick up enormous pace, and they will be able to adapt, at RUN TIME, to the running conditions.
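
There still isn't one standard command for this in C or C++; a rough sketch of how it can be done with a modern compiler on Linux, assuming g++ with -pthread (std::thread::hardware_concurrency answers "how many cores?", and pthread_setaffinity_np, a non-portable GNU extension, does the pinning; the core numbers are assumptions):

    #include <pthread.h>
    #include <sched.h>
    #include <cstdio>
    #include <thread>

    // Pin the calling thread to one CPU core (Linux-specific).
    static void pin_to_core(unsigned core) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    }

    int main() {
        unsigned cores = std::thread::hardware_concurrency();  // "how many cores?"
        std::printf("%u hardware threads available\n", cores);

        std::thread worker_a([] { pin_to_core(0); /* work for core A */ });
        std::thread worker_b([] { pin_to_core(1); /* work for core B */ });
        worker_a.join();
        worker_b.join();
    }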

Clock Speed? (3, Interesting)

tji (74570) | more than 7 years ago | (#19521173)

It seems that Intel very rarely mentions clock speed in any of their roadmap briefings. The clock speed increases over the last five years or so have been pretty minimal. Moore's law talks about the rate transistor density increases. But, clock speed has followed a similar curve until recently. The last 4-5 years has to be the longest plateau in the history of the industry.

Yes, I know they changed to a new architecture that put less emphasis on raw clock speed. But, given that more efficient architecture, clock speed increases are still going to be a major benefit.

So, what's the story? Has the industry hit a wall? How long will it take to get back to above 3GHz for a mainstream processor, or even to the 4GHz levels that the old Pentium IVs were pushing?

Don't get me wrong, I am a huge fan of the power efficiencies of the new chips. For my primary purposes (laptop, HTPC) the new chips are a godsend. And, the thought of specialized "accelerator" cores is fantastic (a video decoder core for MPEG2 & H.264, please). But, doing that same thing at 4GHz is even more compelling (of course, with the speedstep++ stuff to shut down cores when not needed, and throttle back to low GHz to save power).

Re:Clock Speed? (4, Informative)

ZachPruckowski (918562) | more than 7 years ago | (#19521609)

Penryn (a die shrink of the Core2 Duo/Quad plus some SSE4) should have 3 GHz+ models. The real performance issue isn't clockspeed, it's instructions per second. When you make 128-bit SSE take fewer cycles, and you add execution units, improve scheduling logic, and reduce access latencies (through pre-fetching or larger caches, or faster buses), you make processors faster. A processor that runs at 2 GHz with 3 Instructions per clock is just as fast as one that runs at 4 GHz with 1.5 IPC. The reason clockspeed hasn't been increasing is because performance gains have been coming from other areas. Intel could probably sell a juiced-up 3.6 GHz Core 2 Extreme, but it'd run at 180 Watts or something, and cost like $1500.
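
The arithmetic behind that equivalence, spelled out with the comment's own numbers (a trivial check, nothing more):

    #include <cstdio>

    int main() {
        // instructions per second = clock rate * instructions per clock
        double a = 2.0e9 * 3.0;   // 2 GHz at 3.0 IPC
        double b = 4.0e9 * 1.5;   // 4 GHz at 1.5 IPC
        std::printf("%.1e vs %.1e instructions per second\n", a, b);  // 6.0e+09 both
    }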

Re:Clock Speed? (1)

timeOday (582209) | more than 7 years ago | (#19522083)

The real performance issue isn't clockspeed, it's instructions per second.
Bull. The fact is, the MHz "myth" is mostly true. The vast majority of the improvement in processor speed over the past 30 years is due to clock rate, not IPC. The performance gains from other areas over the last 5 years have not kept pace with the rate of progress for the preceding 25 years, not even close.

Re:Clock Speed? (3, Informative)

644bd346996 (1012333) | more than 7 years ago | (#19522347)

Sure, for most of the past 25 years, it has been the clock speed that's been improving. But that's changed in recent years. When Intel switched from Prescott to Core, they pretty much cut the clock speed in half without really sacrificing performance. That's because they increased the IPC a lot in Core, so that it had comparable IPS.

When comparing different processors with the same ISA (ie x86), IPS is the best measure of CPU performance, not clock speed.

Re:Clock Speed? (2, Informative)

Vancorps (746090) | more than 7 years ago | (#19522477)

Tell that to the Amiga guys and to AMD when they chose IPC over clock while the P4 was around. Both are very important. The industry spent years ramping up the clock and now they're spending a few years working on IPC. It makes perfect sense to me. Moore's law also doesn't refer to the frequency of a chip but to the number of transistors which has kept pace especially now with the 45nm processes.

Personally I think for the moment IPC is far more important than frequency given computers are doing more and more these days not just doing one thing faster.

Yes. (0)

Anonymous Coward | more than 7 years ago | (#19521755)

Are there any more questions from captain obvious?

New term war. (3, Insightful)

jshriverWVU (810740) | more than 7 years ago | (#19521213)

I was just checking out this page here [azulsystems.com], which discusses a machine with 768 cores. Since I do a good amount of parallel programming, this is good news to me. But it seems that for the average person this is turning into another MHz/GHz war, this time over cores.

What we really need is for software to catch up. Luckily, some programs like Premiere and Photoshop have supported multiple CPUs for a while now. But games, etc. can really benefit from this. Just stick AI on 1 core, terrain on another, etc., etc.
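
A bare-bones version of that split, assuming C++11 threads (the subsystem functions are placeholders, and in practice the OS scheduler spreads the threads across cores on its own):

    #include <atomic>
    #include <thread>

    std::atomic<bool> running{true};

    void ai_loop()      { while (running) { /* update AI */ } }
    void terrain_loop() { while (running) { /* stream and update terrain */ } }

    int main() {
        std::thread ai(ai_loop);
        std::thread terrain(terrain_loop);
        /* ... main/render loop here ... */
        running = false;
        ai.join();
        terrain.join();
    }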

Re:New term war. (1)

MikShapi (681808) | more than 7 years ago | (#19522111)

"Software" (as in all software ever written) is not a monolithic thing. The vast majority of software in use today is not CPU-restricted by modern (and even 5-year-old) commodity hardware.

Of the little bit that does need oomph, where SMP can be taken advantage of, people have largely been working on doing so for a while now.

Only the little fraction that remains - projects that CAN USE the extra oomph and haven't been developed in that direction yet - needs to catch up.

Your statement hardly applies to most software out there.

Re:New term war. (1)

suggsjc (726146) | more than 7 years ago | (#19522141)

First, I'm not saying you're wrong. But the (processor) world doesn't revolve around /. comments/criticisms. Meaning, it's all too easy to look at companies (esp. big companies) and say that they just get going in one direction and don't stray from the course until it hits a dead end.

Do you really think companies will intentionally go in the wrong direction (more GHz, more cores, etc.) just because? Possibly for marketing reasons, but outside that I would think that with their massive R&D budgets they would be exploring other ideas to give them the edge over the competition. Yes, sometimes it takes a newcomer to shake things up, but at the same time the big companies are pushing as hard as they can to either get an edge or narrow the gap... so give credit where credit is due and stop complaining (not that you were necessarily complaining, but almost any tech war, cores or GHz, is going to result in better tech for the consumer).

Re:New term war. (1, Interesting)

Anonymous Coward | more than 7 years ago | (#19522745)

What people often fail to understand about that "GHz war" is that the problem is not that Intel and AMD pursued high clock speeds, but that they were sacrificing other factors. The Pentium 4's high clock speeds were attained by using a very long pipeline, resulting in various drawbacks like a long flush / warm-up phase.

I don't think that's the case now - I'm sure there have been some small sacrifices to accommodate the large number of cores, but nothing that great. Furthermore, unlike the GHz war, the focus is on scalability, so the overhead for operating 4 cores would not be very large compared with 8 cores.

I think the industry is going in a very good direction, especially with the concept of specialized cores.

Re:New term war. (1)

vecctor (935163) | more than 7 years ago | (#19523147)

But games, etc can really benefit from this. Just stick AI on 1 core, terrain on another, etc etc.
Indeed. I know Supreme Commander actually does this - and they recommend multi-core processors. I believe they said it uses up to 4 cores.

Improved cash management (4, Funny)

gEvil (beta) (945888) | more than 7 years ago | (#19521237)

I've found that improved cash management does wonders for me, like allowing me to buy things like new processors.

Remaining Interchangable (1)

Nom du Keyboard (633989) | more than 7 years ago | (#19521293)

My thought is: How long can Intel and AMD remain interchangeable? For that matter, how interchangeable will Intel be in the same socket, if processors are going to vary this widely? Is this a good thing?

Re:Remaining Interchangable (2, Informative)

drinkypoo (153816) | more than 7 years ago | (#19523149)

For that matter, how interchangeable will Intel be in the same socket, if processors are going to vary this widely? Is this a good thing?

If Intel used just one socket, then you would have portions of the socket unused on some systems, but it would cost less to do the design, because there would be only one design. They don't do this because a socket with fewer pins costs less.

I don't know if that's what you wanted to know...

Intel and AMD could ostensibly remain eternally interchangeable; they are not, and long have not been, socket-level compatible anyway. And they're not 100% interchangeable: if you fritter around at low levels you will find things that must be done differently on each processor, which is why [for example] the Linux kernel is configured differently for each.

The last time intel and AMD were socket-compatible was Socket Super 7.

how about not screwing the consumer oveR? (0)

Anonymous Coward | more than 7 years ago | (#19521309)

How about not screwing the consumer over every 2 to 3 years and making some damn boards that will follow your processor roadmap? Too much to ask?

Re:how about not screwing the consumer oveR? (0)

berwiki (989827) | more than 7 years ago | (#19521603)

I agree. Is there any real reason to change from IDE to SATA connectors? Or from AGP to PCI-Express?
(I'm talking about the actual physical layout change from AGP to PCIe.)

Why is backwards compatibility so critical when it comes to software, but hardware manufacturers just decide to chuck it out the window?

You are a minuscule fraction of consumers. (1)

santiago (42242) | more than 7 years ago | (#19522217)

New hardware is adopted because it's faster and/or cheaper. These days, the processor is only sometimes the critical component when it comes to speed. Slapping a new processor into an old system doesn't make that much sense, and the development cost of backwards compatibility with old hardware architectures to keep a tiny fraction of the Slashdot crowd happy simply isn't worth it. Computers have become commodities. When they break or get old, you throw them out and get a new one. No amount of whining will change this, because economics is against you.

Where all the CPU time will go (5, Insightful)

Animats (122034) | more than 7 years ago | (#19521337)

Where will all the CPU time go on desktops with these highly parallel processors?

  • Virus scanning. Multiple objects can be virus scanned in parallel.
  • Adware/spyware. The user impact from adware and spyware will be reduced since attacks will be able to use their own processor. Adware will be scanning all your files and running classifiers to figure out what to sell you.
  • Ad display. Run all those Flash ads simultaneously. Ads can get more CPU-intensive. Next frontier: automatic image editing that puts you in the ad.
  • Indexing. You'll have local search systems indexing your stuff, probably at least one from Microsoft and one from Google.
  • Spam. One CPU for filtering the spam coming in, one CPU for the bot sending it out.
  • DRM. One CPU for the RIAA's piracy searcher, one for the MPAA, one for Homeland Security...
  • Interpreters. Visualize a Microsoft Office emulator written in JavaScript. Oh, wait [google.com].

Re:Where all the CPU time will go (2, Insightful)

walt-sjc (145127) | more than 7 years ago | (#19521423)

Keep in mind that many of those tasks are also very I/O intensive, and our disk speed has not kept up with processor speed. With more cores doing more things, we are going to need a HELL of a lot more bandwidth on the bus for network, memory, disk, graphics, etc. PCI SuperDuper Express anyone?

Re:Where all the CPU time will go (1)

spirit of reason (989882) | more than 7 years ago | (#19521645)

Or a massive cache with extreme block sizes! Mwahahaha

Re:Where all the CPU time will go (1)

twitchingbug (701187) | more than 7 years ago | (#19521977)

Well, I think the internal CPU memory bus is shared, for Intel anyway. Other I/O stuff (SATA, PCIe) is all point-to-point protocols, no? So there's no I/O contention there. Of course, that doesn't help if you're all trying to access 1 disk, but then, yeah, I agree with you.

Re:Where all the CPU time will go (1)

Joe The Dragon (967727) | more than 7 years ago | (#19522523)

Will Intel's newer CPUs have something like AMD's Direct Connect Architecture?
Will CPUs be able to talk to each other without needing to use the chipset?
Will they be able to have more than one northbridge-like chip, as there is in high-end AMD systems?
Will they have cache coherency?
Will you be able to have add-on cards on the CPU bus like you can with HyperTransport?
Having only one chipset link for the PCIe slots, I/O, network, etc. can be a big choke point in 2-4+ CPU systems, even more so when each CPU has 4+ cores.

Re:Where all the CPU time will go (1)

Vo1t (1079521) | more than 7 years ago | (#19521475)

Most of the features you mentioned require disk access. So when I try to open a file that is really important to me, it will be slower on a multicore with those things (antivirus, etc.) running in parallel than it would be without them.
What I would really like to see is improvement in real-time raytracing and radiosity. Something more like http://www.youtube.com/watch?v=oLte5f34ya8 [youtube.com] .

Size doesnt matter to me. (1)

jshriverWVU (810740) | more than 7 years ago | (#19521363)

I for one do a lot of CPU-intensive coding, so I *would* use a 1 THz processor. One thing I don't understand: they kept wanting to get more GHz at the same size and eventually hit a barrier. So why are we stuck on having a processor so small? I recently bought a 3 GHz CPU and it was about the size of a 50-cent piece, and the actual core was smaller than a dime! 3 GHz in less space than a dime! Cool, but why can't they just extend outwards?

I wouldn't mind going back to the days when computers were bigger if it meant I could have a 10 GHz or 1 THz computer. Let the computing begin.

Re:Size doesnt matter to me. (1)

RevHawk (855772) | more than 7 years ago | (#19521435)

IANAS (I am not a scientist), but I thought I remembered hearing that the size limitation has to do with the speed of light only being so fast - so if you make a CPU too large, you run into a delay issue because data can only move so fast. But this might all be total BS. I did read it on Slashdot, after all...

Re:Size doesnt matter to me. (1)

bcmm (768152) | more than 7 years ago | (#19521471)

I don't think size is an issue really. Faster cycling doesn't come from adding transistors, it comes from making things happen faster. If anything, putting things closer together helps.

Re:Size doesnt matter to me. (1)

spirit of reason (989882) | more than 7 years ago | (#19521877)

I've got it: I'll just redo the stages so that there are 334 times as many. Then we can clock your processor at 1 THz! And getting bigger will almost certainly not allow you to go faster, btw. If your goal is to improve the latency of the result for your instruction, you have to reduce the delay of the components (which smaller gates help) or do less in your instruction (which I was jokingly suggesting above) so you can turn up the clock speed (if heat weren't a problem). Adding more chip real estate will not make your add instruction go any faster. What you could do with it, though, is add more functionality by providing more instructions in the ISA, which the processor could accelerate with ever more transistors, at a higher price.

Re:Size doesnt matter to me. (1)

smoker2 (750216) | more than 7 years ago | (#19521921)

3ghz in less space than a dime! Cool, but why can't they just extend outwards?
Three words :
Speed Of Light
The clock speed (of a cpu) is limited by the speed of light, and the bigger the chip, the further stuff has to travel. Even at light speed, you can only go so far and get back again in a certain time.
I'm not brilliant at explaining this, but I'm sure someone else will pick this up.
In the meantime, have a look at this interesting paper [www.gotw.ca] from 2005.

Re:Size doesnt matter to me. (0)

Anonymous Coward | more than 7 years ago | (#19522315)

>Three words :
>Speed Of Light

That is a crock. This is not a factor yet for microprocessor speed.

Re:Size doesnt matter to me. (1)

gertam (1019200) | more than 7 years ago | (#19522181)

For one thing, size matters for the manufacturing process. The larger the chip of silicon, the more likely there is a flaw in it. If you increase the size of the silicon chip, you are likely to throw away many more flawed processors, wasting your time and money.

Re:Size doesnt matter to me. (0)

Anonymous Coward | more than 7 years ago | (#19522223)

you are likely to throw away many more flawed processors

or rename them Celeron, and sell at a discount.

Re:Size doesnt matter to me. (1)

timeOday (582209) | more than 7 years ago | (#19522209)

3ghz in less space than a dime! Cool, but why can't they just extend outwards?
Because the speed of light is too slow. No, seriously. You wanna run at 3 GHz? Light only travels about 4 inches in a clock cycle. Of course, you also need to allow time for switching - a processor is mostly a big bunch of switches, and they take a little time to respond to turn on and off.
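
A quick back-of-the-envelope check of that figure (using the vacuum speed of light; signals in copper are slower still):

    #include <cstdio>
    #include <initializer_list>

    int main() {
        const double c = 2.998e8;                // speed of light, m/s
        for (double ghz : {1.0, 3.0, 10.0}) {
            double metres = c / (ghz * 1e9);     // distance covered in one clock cycle
            std::printf("%4.1f GHz: %.1f cm per cycle\n", ghz, metres * 100.0);
        }
    }

At 3 GHz that comes out to about 10 cm, roughly the 4 inches the parent mentions.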

Re:Size doesnt matter to me. (1)

dgatwood (11270) | more than 7 years ago | (#19522851)

And the speed of electrical propagation is even slower. In modern, copper-based chips, it's about 2/3rds the speed of light, IIRC. In the old aluminum-trace chips, I believe electrical propagation was even slower. The next gen will probably use carbon nanotubes, which reportedly provide faster propagation.

That said, your point still holds that you are constrained by the speed of electrical signal propagation in the trace medium (currently copper), and that short of changing that medium (and thus, the speed of propagation), the only way to increase speed beyond a certain point is to make the die smaller.

Re:Size doesnt matter to me. (1)

Tim_Enchanter (958469) | more than 7 years ago | (#19522903)

Actually, the speed of light has very little to do with it. It is true that the individual electrons travel near the speed of light, but the actual signal is limited by the RC delay. The speed of light is just the theoretical maximum, but on silicon it is never truly approached. However, wire delay, while notable, isn't the bottleneck. Just like others stated, the real bottleneck is the speed of the components. Making more transistors isn't going to speed up the chip; it will just add more functionality. Making FASTER transistors that drive smaller loads is what speeds things up.

Intel's future can be summed up in just 3 letters. (0)

Anonymous Coward | more than 7 years ago | (#19521365)

AMD

Programmable Cache/Storage (1)

Doc Ruby (173196) | more than 7 years ago | (#19521403)

Caches are cool, because they're automated to solve a common chip problem, faster access to more frequently used data, without any extra programming. But they're a pain, because they're a blob that extra programming can't do anything else with. If Intel could just add some programmatic access to core caches (including flushing and swapping in/out to main or other-core memory) that could otherwise serve higher performance in some cycles, they'd solve a lot of these problems with little investment.

Conversely, chips like the Cell could include HW that makes their cores' local storage into caches.

Re:Programmable Cache/Storage (1)

serviscope_minor (664417) | more than 7 years ago | (#19521623)

Intel have added some programmer control over the cache. Look at the prefetch, movnt and sfence instructions. They're only really hints, but they do help.

Time to dig out your instruction set manual... :-)
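
Those hints are exposed as compiler intrinsics, so no hand-written assembly is needed; a minimal sketch (the copy loop and prefetch distance are illustrative only, and whether this helps depends entirely on the access pattern):

    #include <emmintrin.h>   // SSE2: _mm_stream_si128
    #include <xmmintrin.h>   // SSE:  _mm_prefetch, _mm_sfence
    #include <cstddef>

    // Copy 16-byte-aligned blocks while hinting the cache: prefetch the data we
    // are about to read, and use non-temporal ("movnt") stores so the destination
    // does not evict lines the rest of the program still needs.
    void stream_copy(__m128i* dst, const __m128i* src, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) {
            _mm_prefetch(reinterpret_cast<const char*>(src + i + 8), _MM_HINT_T0);
            _mm_stream_si128(dst + i, src[i]);
        }
        _mm_sfence();   // make the streamed stores globally visible before returning
    }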

About time for some competition (0)

Anonymous Coward | more than 7 years ago | (#19521463)

Not so long ago it seemed like there were more processors. Now it seems to be coming down to Intel, AMD and Power. The thing is that a lot of chips in the embedded world are as powerful as old Intel chips. Arm chips are able to run a minimal computer for instance. The joy of Linux is that it can be used on quite restricted chips. I wonder when we'll be able to buy consumer grade PCs with lower performance chips that Intel doesn't want to build anymore.

Some of the GPUs available now are a lot more powerful than even recent Intel chips. To heck with the computer, maybe we can run everything off the video card. :-)

More energy efficient chips... (3, Insightful)

Nim82 (838705) | more than 7 years ago | (#19521523)

I'd much rather they focussed on making chips more energy efficient than faster. At the moment, barring a few high-end applications, most of the CPU power on the majority of current processors is largely unused.

I dream of the day when my gaming computer doesn't need any active cooling, or heat sinks the size of houses. Focussing on efficiency would also force developers to write better code; honestly, it's unbelievable how badly some programs run and how resource-intensive they are for what they do.

Re:More energy efficient chips... (1)

MikShapi (681808) | more than 7 years ago | (#19522391)

I second that.

I've just finished pulling apart my E6X00-based gaming box, in favor of a C2D T5500 mobile-on-desktop rig, replacing a fast FSB with a fanless(BIG-heatsink)-CPU and cutting CPU power consumption to almost 1/3. (Yes, I know an 8800 eats 250 Watts on idle. I'm still looking for a way to depower it and use alternative low-power VGA-out when not in use. Mention'em if you can think of'em)

The L7200 and L7400 are soon to hit the mobile-478-socket CPU market (the ThinkPad X60t already ships with one), giving the same dual-core mid-range desktop performance for yet another 50% cut in power consumption - ~15 watts in place of ~30W - and knocking another 5W off for losing a fan or two.

Speaking of, any 478-mobile boards out there except for Gigabyte's GA-8I945GMMFY-RH that do both C2D (bumps the Asus N4L-VM) and PCIex16 (bumps the Abit IL-90)?

Additionally (0)

Anonymous Coward | more than 7 years ago | (#19522079)

Additionally, value-added features will lead to added improvements in additional quarters

Cells taught about multicore huh (-1, Troll)

Anonymous Coward | more than 7 years ago | (#19522089)

It's more like the evolution of processors demanded multi-core designs because of the limitations of lithography and other techniques to cram more crap into one die. Why do you think GHz stopped going up? These multicore CPUs are not faster than single-core CPUs, there are simply more of them. While it's a boon for consumers, developers are more or less suffering the extra work to speed up applications (something they should be forced to do all the time). It's important to realize the limitation of dual core vs single core. In the end they will likely go back to upping GHz. The only reason dual-core CPUs have been released is because they are currently cost effective. This doesn't mean that we will forever use dual-core style computers. We probably won't. I think going dual core may be a sign that we've pushed this simple design to its max. We need a new logic gate that's faster, because dual core is not a solution for the future. Dual core isn't going to scale up well enough in power consumption or performance that it's really the technology of the future. It's a stopgap solution because higher GHz isn't paying off and it's expensive to produce.

This means it's time for a new way of rendering binary logic, a new logic gate, a quantum computer or any host of new ways that will offer us a path to the next generation of computing. It won't take long to max out the benefits of multi-core, but it does mean a burst of performance for consumers. Why? Simple, because there is little that's actually all that new about dual-core systems. It's not some amazing technology. The fact is that it just got to the point where dual core cost-effectively made enough sense to become a mainstream technology. They've been saying it would happen for probably 10 years or more and talking about wafer technology and such. It was always just a matter of when the MHz or GHz race would end and the manufacturers would have to either go dual core or make major architecture changes to increase performance. Until now they made architecture tweaks and increased MHz; now that MHz is a dead end they went multi-core and put the pressure on developers to write better multithreading code, which as I said should have been happening a while ago, though perhaps we can blame XP's rather sucky threading performance on that one.

I'm glad to live in the age of dual core, but on the other hand it's kind of sad also that the technology is not advancing as fast as it used to. I mean, if we were in a CPU race with China you know we would have had multi-die CPUs years ago. It's just not a new technology; rather, it's a solution to their inability to scale performance with GHz. Maybe that's a good thing, but I can't see multi-core CPUs ever being an efficient or graceful solution to our need for more computing power. What's next, clustering for the masses? That's great, but you're wasting a lot of power for that extra processing, and when you consider how computers are selling and that many people have several computers and that developing countries still have a lot of people who want PCs, the cost of inefficiency will add up. AND we risk the potential of other countries developing alternative logic gates and then not sharing them with us, because in the end dual core just isn't going to scale. The biggest performance increase will be with 2 CPUs and after that it just scales down. Good for consolidated servers and all, but that's not the type of high-performance situation I'm thinking of, such as number crunching, 3D rendering, and overall ability to accomplish more calculations per clock cycle. Servers are much easier to get the performance out of since they run so many background threads. Workstations and desktops don't tend to get nearly the boost in performance from, say, a quad-core system, and they drive the technology market far more than servers in overall profit and name recognition. What average person has heard of an Alpha chip? Shit, most people don't even know what Linux is still.

So, yeah, get a dual core while they are cheap, but don't expect the Cell processor or multi-core to really redefine the entire industry, because in the end they are inefficient solutions and don't scale as well as a single core with better architecture. I don't think piling on CPUs will always be the solution of the future, as many people seem to think. That's just the solution for now; when faster logic gates or a significant advance in cutting CPU dies comes out, we will have faster and more power-efficient single-core processors. Slapping on a second core only makes sense when the technology has gotten so cheap and more or less outdated that you can afford to put 2, 4 or 8 CPUs into a machine. This could have been done a decade ago, it just wasn't cost effective, and at some point it won't be again, because in the end the CPU market is defined by the more hardcore science breakthroughs in physics and chemistry. As we ignore those fields, a lag in our ability to produce higher-end CPUs has come to pass.

Perhaps if we had been pushing science instead of theology for the last 12 years we wouldn't be using stopgap solutions to increase our CPU performance.

New slashdork slogan: (0)

Anonymous Coward | more than 7 years ago | (#19522799)

Slashdork - Ads for people, intel stuff

I just love corporate whores.

Energy Efficiency (2, Interesting)

zentec (204030) | more than 7 years ago | (#19522911)

The thing that is the future for Intel is not only the bazillion cores and cheaper/faster chips, but doing all that with outstanding energy efficiency. This is obviously important for portable computing, but it's also important for reducing heat load and power consumption in large data centers. Cost-of-ownership comparisons have yet to include power consumption, but as greenhouse-gas taxes start making their way onto electric bills, it's likely to be a selling point.

More and more there's a need for extremely energy-efficient, low-footprint devices for special-purpose applications. It just doesn't make a lot of sense to have a PC sucking 60 watts when all you need is something to run Minicom on a simple 15" LCD screen.