
Intel's Dual-core strategy, 75% by end 2006

CmdrTaco posted more than 9 years ago | from the thats-a-lotta-core-ap dept.


DigitumDei writes "Intel is moving ahead rapidly with their dual core chips, anticipating 75% of their chip sales to be dual core chips by the end of 2006. With AMD also starting to push their dual core solutions, how long until applications make full use of this? Some applications already make good use of multiple CPUs, and of course multiple applications running at the same time benefit instantly. Yet the most CPU-intensive applications for the average home machine, games, still mostly do not take advantage of this. When game manufacturers start to release games designed to take advantage of this, are we going to see a huge increase in game complexity/detail, or is this benefit going to be less than Intel and AMD would have you believe?"


fp (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#11822494)

fp!

Re:fp (4, Funny)

Anonymous Coward | more than 9 years ago | (#11822639)

If you had a dual-core system you would have gotten second post too.

couldnt resist (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#11822495)

Bulakasha!

Solitaire??? (5, Funny)

zzmejce (756372) | more than 9 years ago | (#11822497)

I hope sol.exe will become dual-core aware soon.

Me too (5, Funny)

imrec (461877) | more than 9 years ago | (#11822606)

Those bouncing cards STILL leave trails at the end of a game! REFRESH! GAWDAMNIT! REFRESH!!

Re:Solitaire??? (1)

Leroy_Brown242 (683141) | more than 9 years ago | (#11822728)

Yeah, I have to admit that there is a super geeky little kid somewhere inside me that thinks a multithreaded version of solitaire, or freecell, or hearts, would be REALLY cool.

Dual Core Gaming (3, Interesting)

carninja (792514) | more than 9 years ago | (#11822499)

One has to wonder if this is going to provide Intel with a competitive edge against Sony's Cell processor in the gaming front...

Re:Dual Core Gaming (1)

mcc (14761) | more than 9 years ago | (#11822526)

Since no plans have yet been announced to use the Cell in PCs-- so far it seems only PS3 game systems and very high-end IBM POWER business workstations will be taking advantage of it-- that wouldn't seem to make a whole lot of sense.

Re:Dual Core Gaming (1)

carninja (792514) | more than 9 years ago | (#11822540)

I was thinking on the other end of the spectrum - instead of putting the Cell in PCs, putting the Intel chips in other consoles (i.e., Xbox 2, etc.)

All the consoles will use IBM (2, Insightful)

g2racer (258096) | more than 9 years ago | (#11822591)

A little off topic, but anybody find it interesting that all the next generation consoles will use IBM processing power? Considering the number of consoles sold compared to PCs, this has got to piss both Intel and AMD off...

Re:All the consoles will use IBM (2, Insightful)

Ironsides (739422) | more than 9 years ago | (#11822848)

Not really. Intel and AMD have never had the console gaming market. Also, consoles really do require either an embedded microprocessor or one that is customized. The Game Boy series uses ARM7 and ARM9 processors. The recent consoles themselves have used customized ones. The Xbox is the only exception in that it used a general purpose Pentium 3.

I can see that Intel and AMD might want to break into that market, but they would have to create a custom chip (as a general-purpose one will either use too much power or won't cut it) just for that. Something I am not sure they want to do.

Re:Dual Core Gaming (3, Informative)

mcc (14761) | more than 9 years ago | (#11822592)

The Xbox 2 and GameCube are both already known to be using POWER/PowerPC derivatives. Besides which, chip contracts for new consoles are the sort of thing that get worked out years in advance, and they're usually not bought from quite the same stock that PC OEMs are buying from. Intel's plans for their mass market "by late 2006" lineup really couldn't have any impact on the console world at all at this moment.

dual cores (3, Insightful)

lkcl (517947) | more than 9 years ago | (#11822506)

less heat generated. more bang per watt.

Re:dual cores (2, Informative)

gl4ss (559668) | more than 9 years ago | (#11822514)

not automatically.

all else equal.. two cores, two times the power, two times the heat..

Re:dual cores (4, Informative)

sbryant (93075) | more than 9 years ago | (#11822691)

all else equal.. two cores, two times the power, two times the heat..

You haven't been paying attention! Go back and read this article [informationweek.com] again (about AMD's demo of their dual core processor). While you're at it, read the related /. article [slashdot.org] .

The dual core processors use nowhere near double the power and produce nowhere near double the heat.

-- Steve

Re:dual cores (2, Insightful)

PureCreditor (300490) | more than 9 years ago | (#11822766)

If the 2 cores can share L1 and L2, then it's less than "twice the power"... and given the close distance between the 2, it's not hard to create a high-speed interconnect that will bring shared-L1 access close to local-L1 speeds.

Re:dual cores (1)

theVP (835556) | more than 9 years ago | (#11822756)

but when you say "bang", what are you referring to?? I just really don't see the point unless a game attempts to use more than one process to function. Without knowing much about video game builds, would it be possible to run your graphics engine as one process, and your physics and game engine as another process? I think that if this was possible, then you would probably see a huge leap in performance. Otherwise, I don't see much need for this technology other than multitasking like a madman.

Games do take advantage of having a second cpu (4, Funny)

nounderscores (246517) | more than 9 years ago | (#11822507)

It's just that it's called a GPU, sits on a special card, on a special slot and is sold to you regularly about once every six months for an ungodly amount of money.

It would be interesting if games were rewritten to run with the game logic on one core, the graphics on another core and the networking code on a third core of a multicore chip...

Hey. You could even have a mega-multicore chip and do first person shooters with realtime raytracing... each core would be responsible for raytracing a small area of the screen. I'm sure there's a company working on this. I saw a demo video in a computer graphics lecture. I'll have to check my notes.
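
As a rough illustration of that split (purely a sketch -- the Update* functions below are hypothetical placeholders, not from any real engine), each subsystem could simply run its own loop on its own thread and let the OS spread them across cores:

    #include <atomic>
    #include <chrono>
    #include <thread>

    // Hypothetical per-subsystem work; a real engine would do far more in each.
    void UpdateGameLogic()  { /* AI, physics, game state */ }
    void UpdateGraphics()   { /* build and submit draw calls */ }
    void UpdateNetworking() { /* send and receive packets */ }

    int main() {
        std::atomic<bool> running{true};

        // One loop per subsystem; the OS scheduler is free to place each thread on its own core.
        std::thread logic([&]    { while (running) UpdateGameLogic(); });
        std::thread graphics([&] { while (running) UpdateGraphics(); });
        std::thread network([&]  { while (running) UpdateNetworking(); });

        std::this_thread::sleep_for(std::chrono::seconds(1)); // pretend the game runs for a while
        running = false;

        logic.join();
        graphics.join();
        network.join();
    }

In practice the threads would still have to exchange state (input, positions, packets) through queues or locks, which is where the real complexity lives.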

Re:Games do take advantage of having a second cpu (1)

notamac (750472) | more than 9 years ago | (#11822530)

Maybe split AI and Physics into separate threads... networking really doesn't need it yet though :)

Re:Games do take advantage of having a second cpu (1)

xtracto (837672) | more than 9 years ago | (#11822695)

Mod me offtopic if you like, but the parent gave me a cool idea.

I am in the AI/game research field, and well, every new AI book I read says that the next groundbreaking technology for games is AI, because graphics are now really advanced etc etc etc.

Now, I am wondering, maybe in a few years there will be some sort of "AI" card. I am thinking of a PCI card which will provide games with some sort of capability to execute AI routines (outside the CPU, of course).

Now, being paranoid, I should not have posted this here if I wanted to make some 1, 2, 3... profit with this.

What a game using this card will need is to separate the "normal" AI and the "improved" AI, and of course this special card would need an appropriate API (maybe the guys at OpenAI would get things going again...).

Is there anything like this now, at least for testing?

Re:Games do take advantage of having a second cpu (2, Funny)

nounderscores (246517) | more than 9 years ago | (#11822820)

I can see it now. Sony PlayStation X being bought by enemy nations to harvest their AI cards and install them in a new autonomous guidance module for SWORDs [technovelgy.com].

At present, the SWORDs robot is operated with a thirty-pound control unit with two joysticks, buttons and a video screen. I wonder how much the current control module looks like a PlayStation Portable?

Re:Games do take advantage of having a second cpu (1)

harrkev (623093) | more than 9 years ago | (#11822562)

Hey. You could even have a mega-multicore chip and do first person shooters with realtime raytracing... each core is responsible for raytracing a small area of the screen. I'm sure that there's a company working on this. I saw a demo video in a computer graphics lecture. I'll have to check my notes.

You will see this when the processing power of a current A64 or P4 goes for around $2! There is a reason that current GPUs look the way that they do -- it is a LOT more efficient than ray-tracing.

What you speak of is certainly a neat concept, but it will not happen in my home for a LOOOOONG time, because a "mega-multicore" chip would be insanely expensive.

Ask again in 10 years.

Re:Games do take advantage of having a second cpu (1)

LiquidCoooled (634315) | more than 9 years ago | (#11822736)

Nahhhhh.
People pay hundreds of dollars for additional graphics processing units for their machines at present.

If there are real world advantages to installing additional CPUs then people would do it almost regardless of cost.

However, at present, for home users nothing screams out at me. Upgrades come from a perceived need, whether that comes from Half-Life 2 or Winblows Foghorn is irrelevant. It just isn't here yet.

Re:Games do take advantage of having a second cpu (2, Interesting)

svanstrom (734343) | more than 9 years ago | (#11822575)

It isn't really the game itself that needs to be written to take advantage of a second CPU (or whatever), it's the code that's always being reused (either something inhouse, or the engine that they're using).

People are lazy, and when things work today as they are, most companies would rather focus on releasing the game ASAP than spend a lot of time recoding what they've already got...

It comes down to how much money they can make in as little time as possible.

But, of course, once a company starts pushing their better performance/more features/more "beautiful" games, then everyone else has to catch up instantly; or else their profit goes down.

It's a "all of us or none of us"-kind of a game, with really really high stakes.

Re:Games do take advantage of having a second cpu (2, Interesting)

TheRaven64 (641858) | more than 9 years ago | (#11822648)

The problem is that x86 is a horrible architecture for multithreading. On a standard x86 chip, a context switch is about 100 times more expensive than a function call (on an architecture like PowerPC or SPARC function calls and context switches have similar overheads). Designing a multithreaded game is relatively easy, but would give a huge performance penalty on uni-processor machines.

If that was significant.. (1)

Kjella (173770) | more than 9 years ago | (#11822843)

...shouldn't there be a big advantage to running a uniprocessor game on a dual-core CPU? Send all other threads to the other core (I assume each one must be woken up quite regularly to see if it wants to do anything). I seem to remember the Linux kernel does context switches 1000 times a second or thereabouts. Say you have 3-4 threads (AI, network, graphics, sound); that should still leave you able to check those at 250 fps...

Kjella

Re:Games do take advantage of having a second cpu (1)

Deliveranc3 (629997) | more than 9 years ago | (#11822807)

Carmack, OpenGL.

Carmack will do it, if a directX game doesn't hit first MS might be in trouble.

Here's hoping.

Re:Games do take advantage of having a second cpu (1)

Peldor (639336) | more than 9 years ago | (#11822598)

A general purpose CPU has no chance against a dedicated GPU in graphics processing. Using a second core isn't going to be 'interesting' except as an exercise in futility.

Re:Games do take advantage of having a second cpu (2, Interesting)

TomorrowPlusX (571956) | more than 9 years ago | (#11822656)

Actually, for what it's worth, I'm writing a game in my free time which already splits rendering and (physics/game logic) into two threads. The idea is that the physics runs while the rendering thread is blocking on an OpenGL vsync. While the behavior is synchronous, it runs beautifully on both single and dual processor machines.

In principle this could have detrimental effects on single processor machines, but my relatively meager 1.3 GHz PowerBook plays beautifully at 30 fps and 60 physics frames per second.

Anyway, this isn't *really* what's being discussed since the behavior is in fact synchronous, but I'm just saying it ain't hard. I'm surprised more games aren't multithreaded. It's not as if MT is hard. You just have to be careful.
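
A minimal sketch of that two-thread arrangement, assuming hypothetical RenderFrame/StepPhysics placeholders and a simple mutex around the shared world state (real engines use more refined synchronization):

    #include <atomic>
    #include <chrono>
    #include <mutex>
    #include <thread>

    struct World { /* positions, orientations, ... */ };

    std::mutex worldMutex;
    World world;
    std::atomic<bool> running{true};

    void RenderFrame(const World&) { /* draw; in a real game this blocks on vsync */ }
    void StepPhysics(World&)       { /* advance the simulation one fixed step */ }

    void RenderThread() {
        while (running) {
            World snapshot;
            {   // copy the shared state under the lock, then render without holding it
                std::lock_guard<std::mutex> lock(worldMutex);
                snapshot = world;
            }
            RenderFrame(snapshot); // the vsync wait happens in here
        }
    }

    void PhysicsThread() {
        while (running) {
            std::lock_guard<std::mutex> lock(worldMutex);
            StepPhysics(world);    // runs while the renderer is blocked on vsync
        }
    }

    int main() {
        std::thread render(RenderThread), physics(PhysicsThread);
        std::this_thread::sleep_for(std::chrono::seconds(1)); // stand-in for the game session
        running = false;
        render.join();
        physics.join();
    }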

One possible multi-threaded benefit (4, Interesting)

JSBiff (87824) | more than 9 years ago | (#11822710)

I would like to see a more multi-threaded approach to game programming in general, and not all the benefits would necessarily be about performance.

One thing that has bugged me for a long time about a lot of games (this has particular relevance to multi-player games, but also single player games to some extent) is the 'game loading' screen. Or rather, the fact that during the 'loading' screen I lose all control of, and ability to interact with, the program.

It has always seemed to me, that it should be possible, with a sufficiently clever multi-threaded approach, to create a game engine where I could, for example, keep chatting with other players while the level/zone/map that I'm transitioning to is being loaded.

Or maybe I really want to just abort the level load and quit the game, because something important in Real Life has just started occurring and I want to just kill the game and move on. With most games, you have to wait until it is done loading before you can quit out of the game.

In other words, even ignoring performance benefits for a moment, if a game engine is correctly multi-threaded, I could continue to have 'command and control' and chat functionality while the game engine, in another thread, is loading models and textures.
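
A sketch of that idea, with a hypothetical LoadAsset standing in for the real model/texture loader: the load runs on its own thread, the main thread keeps pumping chat and input, and a cancel flag lets the player bail out mid-load.

    #include <atomic>
    #include <string>
    #include <thread>
    #include <vector>

    // Hypothetical loader; a real engine would stream models and textures from disk here.
    void LoadAsset(const std::string& name) { /* read file, decompress, upload, ... */ }

    int main() {
        std::vector<std::string> assets = {"level1.map", "player.mdl", "walls.tex"};
        std::atomic<bool> cancelled{false};
        std::atomic<std::size_t> loaded{0};

        // Loading runs on its own thread; the main thread stays free for chat and input.
        std::thread loader([&] {
            for (const auto& a : assets) {
                if (cancelled) return;   // the player asked to quit: abort mid-load
                LoadAsset(a);
                ++loaded;
            }
        });

        while (loaded < assets.size() && !cancelled) {
            // pump the UI here: draw the progress bar, handle chat, check for a quit request
        }

        loader.join();
    }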

Re:Games do take advantage of having a second cpu (1)

diegocgteleline.es (653730) | more than 9 years ago | (#11822727)

I don't know how valuable that would be. For networking, I think it's much better to do everything on one CPU, much better than using two CPUs. With 2 CPUs, if you're doing different tasks on the same data, the cache coherency mechanism will update CPU caches as needed, and you might be _losing_ performance.

(I don't know if it's exactly like that; it's one of the reasons why SMP is bad if you want to route traffic, unless you "attach" the IRQ the network card is using to a single CPU.)

Re:Games do take advantage of having a second cpu (3, Interesting)

Rhys (96510) | more than 9 years ago | (#11822773)

Beyond the GPU, any intensive computation application gets benefits from the second CPU.

Our local (to UIUC) parallel software master, working on the Turing Xserve cluster, is pulling about 95% (I think, don't quote me) of theoretical peak performance in Linpack running on 1 CPU on 1 Xserve. Bring that up to both CPUs in one and he said it dropped to around 50%.

Why? The OS has to run somewhere. When it's running, that processor is stuck with it. The other processor is stuck waiting for the OS, and then things can pick up again.

Now, we haven't yet finished tuning the systems to make the OS do as little as possible. (They're still running GUIs, so we can remote desktop into them among other things.) But still, that's quite a performance hit!

He said two machines running 1 CPU each over Myrinet were still in the 90%-ish range of theoretical peak.

So can we quit rehashing this stupid topic every time dual core CPUs come up? Yes, it'll help. No, it won't double your game performance (unless it's written for a dual-core CPU), and it probably won't even double it then, because there's still TeamSpeak/Windows/AIM/virus scan/etc. running that need CPU time.

cue the spell-checker jokes (3, Funny)

Anonymous Coward | more than 9 years ago | (#11822509)

or is this benifit going to be less

how long will it be before dual core CPUs boost slashdot editor's ability to spell-check?

Re:cue the spell-checker jokes (0)

Anonymous Coward | more than 9 years ago | (#11822595)

how long will it be before dual core CPUs boost slashdot editor's ability to spell-check?
When they can walk and chew gum at the same time?

Quake 3? (1)

Siva (6132) | more than 9 years ago | (#11822511)

Wasn't Quake 3 supposed to be able to take advantage of SMP?

Re:Quake 3? (2, Informative)

Anonymous Coward | more than 9 years ago | (#11822615)


It did. It was dropped in Doom 3, as it really wasn't that much of a win for the effort.

Modern games are really limited by bandwidth or by GPU power. CPU power is only really used for game logic, which isn't terribly complex compared to the other parts of a game.

Re:Quake 3? (1)

Leroy_Brown242 (683141) | more than 9 years ago | (#11822757)

One has to wonder then, why hasn't anyone introduced a video card that uses SMP?

It makes sense to me.

If they can't release a GPU that's fast enough, use more GPUs.

Re:Quake 3? (1)

meat.curtains (197789) | more than 9 years ago | (#11822841)

It was dropped in Doom 3, as it really wasn't that much of a win for the effort.

While it didn't really increase the maximum framerate, it did increase the minimum framerate during the busiest moments. And those are the times when you actually benefit from a better framerate IME.

3dfx beat Intel to it (-1, Redundant)

Anonymous Coward | more than 9 years ago | (#11822512)

Most games are already running on multiple processors, the CPU and the GPU.

Well... (3, Insightful)

Kn0xy (792482) | more than 9 years ago | (#11822521)

If their going to be that ambitious with their sales, I hope they are considering pricing the chips in a range that anyone could afford and is willing to pay.

Re:Well... (1)

LiquidCoooled (634315) | more than 9 years ago | (#11822647)

If it's cheap you're after, I've got a tray of 386s out the back.
Intel says they are state of the art, and they have won numerous awards for advancing technology.
I'll let you have the lot for a dollar a piece.

Re:Well... (1)

Leroy_Brown242 (683141) | more than 9 years ago | (#11822774)

As always, I'm looking forward to dropping prices on last week's tech. Getting a nice single core CPU on the cheap, as they fall out of fashion.

Re:Well... (1)

mario_grgic (515333) | more than 9 years ago | (#11822789)

There is a difference between "their" and "they're". They, their, they're and there are all different words, would you believe. Just because they sound very similar, does not mean they are spelled the same.

Q's (1)

cablepokerface (718716) | more than 9 years ago | (#11822522)

are we going to see a huge increase in game complexity/detail

complexity? Isn't multi-core programming pretty much the same as multi-processor programming?

detail? If this helps greatly for performance and it's relatively cheap (it's on affordable consumer chips) then why not a processor with 3 cores, or 5?

Re:Q's (1)

leonmergen (807379) | more than 9 years ago | (#11822547)

complexity? Isn't multi-core programming pretty much the same as multi-processor programming?

... which usually adds a hell of a lot of complexity to a software project...

Re:Q's (1)

dhbiker (863466) | more than 9 years ago | (#11822770)

But surely with dual core becoming the norm we'll be seeing a wave of next generation compilers that automatically optimize (as much as is possible) for dual core?

Then you would have any normal application taking advantage of the dual core with no extra coding hit, but if you really wanted to take full advantage of the dual core then I guess you'd still have to dig into the extra complexity.

Re:Q's (0)

Anonymous Coward | more than 9 years ago | (#11822564)

5 is right out!

Memory latency is the limiting factor (3, Informative)

rastan (43536) | more than 9 years ago | (#11822524)

AFAIK memory latency/bandwidth is currently the limiting factor in computation speed. Dual core processors will not change this, but will make the gap even bigger.

Re:Memory latency is the limiting factor (1)

campaign_bug (815908) | more than 9 years ago | (#11822556)

Word has it that Rambus' new technologies (Yellowstone and Redwood IIRC) will show vast improvements in this area, and hence are used in the Cell design.

Re:Memory latency is the limiting factor (3, Interesting)

MindStalker (22827) | more than 9 years ago | (#11822671)

Not necessarily: as both cores share the same memory controller and registered memory, latency from core to core is essentially zero. I wonder if someone could write some really smart code that has one core doing all memory prefetching and the second core doing the actual computations. Could be interesting.

Re:Memory latency is the limiting factor (1)

MindStalker (22827) | more than 9 years ago | (#11822706)

Sorry, I was speaking of the AMD design of course; Intel's memory controller is off chip (and I believe not shared as well).

Re:Memory latency is the limiting factor (2, Insightful)

Deliveranc3 (629997) | more than 9 years ago | (#11822858)

Buddy, some of us have AMD.

Come on in the HyperTransport is fine. Care for a pinya-Onchipmemorycontroller?

Huge increase in game complexity? In short: No (5, Insightful)

Reverant (581129) | more than 9 years ago | (#11822531)

When game manufacturers start to release games designed to take advantage of this, are we going to see a huge increase in game complexity/detail
No, because most games depend more on the GPU than the CPU. The CPU is left to do tasks such as opponent AI, physics, etc. -- stuff that the dedicated hardware on the graphics card can't do.

That Is The Change In Software (4, Insightful)

EngineeringMarvel (783720) | more than 9 years ago | (#11822602)

Your statement is true, but I think you missed the point the article poster was trying to get across. Currently games are written to use computer resources that way. If the code were written differently, games could allocate some of the graphics responsibilities to the 2nd CPU instead of all of it going to the GPU. The 2nd CPU could be used to help the GPU. Allocating more of the now-available (2nd CPU) resources to graphics allows more potential in graphics. That's what the article poster wants to see: that game resource allocation in the game's code be changed to use the 2nd CPU to help enhance graphics.

Re:That Is The Change In Software (1)

TheRaven64 (641858) | more than 9 years ago | (#11822674)

GPUs are at least one order of magnitude faster at doing the kind of operations required for surface-based rendering than current CPUs. Adding a CPU to the mix will make very little difference.

So Intel's going to be a year late ?. (3, Interesting)

Gopal.V (532678) | more than 9 years ago | (#11822541)

AMD demo'd [amd.com] their dual core x86 a year ago. Also, from what I read, the Pentium Extreme is NOT going to share the memory controller - which means that unlike with AMD, we might need a new motherboard for the dual core ones (well, AMD promised that we wouldn't). So this is costlier, uglier and more power hungry.

All in all I see Intel going down unless they do something quick. And remember, competition is good for the customer.

Re:So Intel's going to be a year late ?. (5, Insightful)

unother (712929) | more than 9 years ago | (#11822628)

Yes, but since the core of Intel's marketplace consists of people who see a monitor and think it is the computer, this is a barrier that Intel can easily hurdle.

Re:So Intel's going to be a year late ?. (0)

Anonymous Coward | more than 9 years ago | (#11822665)

If only a demo were the same thing as a shipping product...

Re:So Intel's going to be a year late ?. (1)

diegocgteleline.es (653730) | more than 9 years ago | (#11822744)

How would having two memory controllers be bad? Actually, it may be great - you won't have a central bus to saturate, you'll have two.

Re:So Intel's going to be a year late ?. (2, Interesting)

jest3r (458429) | more than 9 years ago | (#11822755)

AMD will be releasing Quad Core chips as early as 2007 according to Arstechnica. Where does that leave Dual Core?

http://arstechnica.com/news.ars/post/20040813-4099.html

Pretty soon (3, Insightful)

PeteDotNu (689884) | more than 9 years ago | (#11822546)

Once multi-core chips start getting into home computers, the game developers will have a good justification for writing thread-awesome programs.

So I guess the answer to the question is, "pretty soon."

Gamers (2)

WoodieR (860635) | more than 9 years ago | (#11822549)

Gamers and gaming companies have been a massive driving force in the use of any such new technology... I expect that gaming outfits are already salivating over the possibility of next year's VPUs and dual core CPUs... but remember there is a delay gap in the uptake of the new tech by the commoners (for want of a better term), so it will be a fine line for the companies to invest time and resources into programming for a new boxen, if the vast majority are going to be on their vanilla boxen through 2007...

Re:Gamers (0)

Anonymous Coward | more than 9 years ago | (#11822852)

so it will be a fine line for the companies to invest time and resources into programming for a new boxen
For a new boxen? Goddamnit, it's bad enough as the fake plural of "box", but you're not even using it as a plural.

Look, it's "box" and "boxes". No "boxen". Boxen is not a word. You know who called boxes "boxen"? The Nazis, that's who. You're not a Nazi are you? No? So stop using it.

Meanwhile back in PPC land (5, Interesting)

Anonymous Coward | more than 9 years ago | (#11822563)

I find this interesting; every machine Apple sells except at the definite low end is dual-CPU SMP now, and it's been this way for a while. Now Intel/AMD seem to be realizing "oh yeah, dual CPUs, maybe that's something we should start targeting for the mass market instead of just the high end" (though AMD seems to be pretty comfy with the idea already). I wonder why Apple doesn't seem interested in dual cores, though. Intel/AMD seem to be treating multicore tech as their way of getting SMP out of the power-user range; Apple doesn't seem to want to have anything to do with it even though POWER has had multicore ability for a really long time. What's up with this, is there something I'm missing?

Re:Meanwhile back in PPC land (1)

TheRaven64 (641858) | more than 9 years ago | (#11822700)

What makes you think Apple isn't interested in dual core? They haven't released any machines with dual core CPUs, because none are available. I don't know what IBM's plans are with dual core PPC970 derivatives, but FreeScale are expected to launch a dual-core G4-class CPU Real Soon Now(TM), and I wouldn't be at all surprised to find it appearing in the next PowerBook revision.

Re:Meanwhile back in PPC land (1, Interesting)

Anonymous Coward | more than 9 years ago | (#11822722)

They haven't released any machines with dual core CPUs, because none are available.

None are available?

IBM's had multicore POWERs forever, at least since like 2002 [top500.org] , I think before. The G4 and G5 have both had the technical capacity to be made in a multiple core configuration. I think Apple isn't interested in dual core because getting dual core PPCs would have been as simple as just asking IBM "hey, could you start making us some dual core PPCs", but they haven't bit.

Re:Meanwhile back in PPC land (2, Insightful)

chrishillman (852550) | more than 9 years ago | (#11822745)

Apple has offered dual-CPU systems for a long time, but they are more than just a company for teachers to buy computers from. They also sell systems to graphic artists, publishing houses and many other places that benefit from dual-CPU systems. It's just the Apple shotgun approach: they are aiming at their market, which includes many levels of users. It's not their intention that Grandma should have a dual CPU 64-bit system (unless she is a Lightwave user looking to decrease render times). Multiple core CPUs have been AMD's dream for a long time now. This is just Intel not wanting to look stupid on the 64-bit front any longer. They are making sloppy decisions to try to beat AMD to market so Dell can use such "innovations" in their ads.

It'll be less than you think in gamin... (1, Insightful)

Total_Wimp (564548) | more than 9 years ago | (#11822570)

... and more everywhere else. Games continue to get most of their good stuff from the GPU, not the CPU. It ain't that the CPU isn't important, but it's not going to make a huge difference all by itself.

What I hope to see, but don't expect, is better prioritization of CPU requests. If you have something high-priority going on, like a full screen video game, recording a movie or ripping a CD, I'd like to see the antivirus and other maintenance tasks handled by the other core, or even put on hold. My personal experience is that this stuff can sometimes be set up to some extent, but it's overall kind of crappy and labor intensive.

But this really isn't Intel's fault. MS and the app vendors need to take the blame. So, the question is: do other OSs handle this better for consumer products?

TW
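
As an aside, some of this can already be approximated by hand on Windows (a sketch, not something the poster mentioned; error handling omitted) by pinning a maintenance process to the second core and lowering its priority with the Win32 affinity/priority calls:

    #include <windows.h>

    int main() {
        // Restrict this (hypothetical maintenance) process to the second core,
        // leaving the first core free for the game, recording or ripping task.
        SetProcessAffinityMask(GetCurrentProcess(), 0x2); // bit mask: core 1 only

        // Tell the scheduler this work should yield to more important tasks.
        SetPriorityClass(GetCurrentProcess(), BELOW_NORMAL_PRIORITY_CLASS);

        // DoMaintenanceWork();  // e.g. the virus scan or indexing job would run here
        return 0;
    }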

Hmm? (4, Insightful)

Erwos (553607) | more than 9 years ago | (#11822573)

"how long until applications make full use of this"

Full use? Probably never. There are always improvements to be made, and multi-threaded programs are a bitch and a half to debug, at least in Linux. Making "full use" of SMP would _generally_ decrease program reliability due to complexity, I would imagine.

But, with an SMP-aware OS (Win2k, WinXP Pro, Linux, etc.), you'll definitely see some multi-tasking benefits immediately. I think the real question is, how will Microsoft adjust their licensing with this new paradigm? Will it be per-core, or per socket/slot?

I'm going to go out on a limb and predict that Longhorn will support 2-way SMP even for the "Home" version.

-Erwos

Re:Hmm? (1)

Rura Penthe (154319) | more than 9 years ago | (#11822594)

Not much of a limb, considering they've already announced that they are considering counting a CPU package as 1 processor for licensing even if it has dual cores. So yes, "Home" versions will assuredly support 2 processors.

Re:Hmm? (0)

Anonymous Coward | more than 9 years ago | (#11822690)

Erwos writes:
"There's always improvements to be made, and multi-threaded programs are a bitch and a half to debug . . ."

Choose the right tool, Ada for example. [adapower.com] Ada was designed from the start to support multi-threaded programming. It isn't some sort of add-on library. Multi-threading is intrinsic to the language. Of course you get the bonus that your program will probably run correctly the first time, without the need for a debugger.

Games? Pr0n? (1)

garompa (714684) | more than 9 years ago | (#11822574)

Intel and AMD are aiming for databases.

Re:Games? Pr0n? (1)

nietsch (112711) | more than 9 years ago | (#11822827)

Database latency does not depend on raw computational power, but it does depend on data throughput, i.e. disk seek times (and good indexes!).

It's called Multi-threading (2, Interesting)

TimeTraveler1884 (832874) | more than 9 years ago | (#11822609)

For example on the Intel HT processors, all I have to do is write my applications to use multiple threads for operations that are CPU intensive and voila! I have almost doubled the speed of my app. Otherwise, a single thread app will only use one of the cores.

Often, it's almost trivial to write an app as a multi-threaded app. The only difficult part is when the problem your application is solving does not lend itself well to parallelization. Purely sequential problems don't really benefit from it.

However, there is almost always something that can be done in parallel. Even if the problem the app is solving is highly sequential, if you need to read the disk or anything, you can always implement look-ahead and caching code that runs in a different thread. Or whatever. Because it's rare that you will just crunch numbers and not display them, require data, or send them across a network. Usually, the GUI itself will have its own thread and benefit from a dual-core processor.
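
For instance (an assumed workload, not taken from the parent), a CPU-bound loop over a big array can often be split in half across two threads with only a few extra lines:

    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        std::vector<double> data(10000000, 1.0);
        double sumLo = 0.0, sumHi = 0.0;
        const std::size_t mid = data.size() / 2;

        // Each thread crunches half of the data; on a dual-core CPU the halves run concurrently.
        std::thread lo([&] { sumLo = std::accumulate(data.begin(), data.begin() + mid, 0.0); });
        std::thread hi([&] { sumHi = std::accumulate(data.begin() + mid, data.end(), 0.0); });
        lo.join();
        hi.join();

        double total = sumLo + sumHi; // combine the partial results on the main thread
        (void)total;
    }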

Pshaw! (1)

basilisk12 (622742) | more than 9 years ago | (#11822612)

Intel is pushing dual cores? Sony's Cell has nine cores!

It's just _Dual_ (2, Insightful)

infofarmer (835780) | more than 9 years ago | (#11822613)

Oh, come on, it's just dual, it's just a marketing trick. Speed has been increasing in a logarithmic manner for years on end, and now we're gonna stand still at the word "Dual"? If intel/amd devise a way within reason to logarithmically increase the number of cores in a CPU (which I strongly doubt), that'll be a breakthrough. But for now - it's just a way to keep prices high without inventing anything at all. WOW!

Re:It's just _Dual_ (1)

disposable60 (735022) | more than 9 years ago | (#11822687)

You actually mean exponential. The sequence 1 -> 2 -> 4 -> 8 -> 16 IS exponential.

Complexity/detail (3, Insightful)

Glock27 (446276) | more than 9 years ago | (#11822620)

"are we going to see a huge increase in game complexity/detail?"

If you consider a factor of about 1.8 (tops) "huge".

Make sure you first don't pay double (4, Interesting)

bigtallmofo (695287) | more than 9 years ago | (#11822621)

Check your licensing agreements before you buy one of these dual-core processors. Make sure that your software vendor isn't going to double the price on you.

Oracle and others [com.com] have announced plans to increase their revenue by charging people for multiple cores in their single processor.

mod Du4 (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#11822623)

Theo de RRadt, one

dualcore (1)

Varg Vikernes (859668) | more than 9 years ago | (#11822627)

1. Game complexity has hardly anything to do with the CPU. That is, unless you're John Carmack and let the lighting be done on the CPU (not a wise thing to do).

2. More "irrelevant" stuff will be pushed onto the CPU from the GPU, thus making better use of the GPU => higher detail/complexity.

3. More realistic physics (ragdoll), better AI, more complex sound effects (5.1?).

4. [OT?] Microsoft will only charge one Windows license for the actual CPU => 1 CPU with n cores == 1 license. IBM and others (I think also Sun) will charge per core => 1 AMD with 2 cores == 2 OS licenses.

OpenGL Performer (2, Interesting)

Anonymous Coward | more than 9 years ago | (#11822629)

This problem has already been solved by OpenGL Performer [sgi.com]

Applications, even 'games', written using Performer, will immediately benefit from multiple CPUs.

Fairly simple... (3, Insightful)

Gadgetfreak (97865) | more than 9 years ago | (#11822643)

I think as long as the hardware becomes established, people will write software for it. From time to time, hardware manufacturers have to push the market in order to get the established standard to jump to the next step.

It's like what Subaru did when they decided to make all their vehicles All Wheel Drive. It was a great technology, but most people at the time just didn't care to pay extra for it. By making it a standard feature, the cost increase is significantly reduced, and provided that the technology is actually something functional, the market should grow to accept it.

i thought software pushed hardware (1)

doorbender (146144) | more than 9 years ago | (#11822769)

My software is always asking for more memory.

Web developers design web pages that no one on dialup can view, pushing the perceived need for broadband.

What a dual core CPU needs is a magic converter piece that enables 32-bit code to fully utilize 64-bit processing. THEN everyone will want one, and software can creep into 64-bit computing at its own pace.

I remember needing a 16-bit FAT partition for running 16-bit programs under 98, but that was only 5 years ago.

Hey AMD, e-mail me for some architecture ideas.

Games and multi core (5, Interesting)

Anonymous Coward | more than 9 years ago | (#11822645)

As already mentioned, games already make use of the GPU and the CPU, so we're fairly used to some multiprocessor concerns.

To say that most PC games are GPU bound, however, is a mistake - most games I've come across (and worked on as a games core technology/graphics programmer) are CPU bound - often in the rendering pipeline trying to feed that GPU.

Anyhow, games are already becoming dual-core aware. Most if not all multiplayer games make use of threads for their network code - go dual core (or hyperthreading) and you get a performance win. Again, most sound systems are multi threaded, often with a streaming/decompression thread; again a win on multi core. These days streaming of all manner of data is becoming more important (our game worlds are getting huge) and so again we will be (are) making use of dual core there too.

I personally have spent a fair amount of time performance enhancing our last couple of games (mostly for HT, but the same applies to true dual core) to make sure we get the best win we can. For example, on dual core machines our games do procedural texture effects on the second core that you just don't get on a single core machine, and still get a 20%-odd win over single core. I'm sure most software houses take this as seriously as us and do the same. It's very prudent for us to do so - the writing's been on the wall about multi processors being the future of top end performance for a while now.

At the end of the day, though, us games developers have little choice but to embrace multi core architectures and get the best performance we can. We always build software that pushes the hardware to the full extent of its known limits because that's the nature of the competition.

Just think what the next generation of consoles is going to do for the games programmers general knowledge of concurrent programming techniques. If we're not using all of the cores on our next gen XBox or PS3 then our competition will be and our games will suck in comparison.

Do they share the cache? (3, Interesting)

jbb999 (758019) | more than 9 years ago | (#11822666)

Do these new chips share the highest speed cache? I can think of several ways to make use of them without using traditional threads. For example: set up a pool of threads, each of which just reads a function address from a queue of work and then calls that function, waiting when there is no work. The main program can then just push function pointers onto the queue knowing that a thread will pick up the work.

I'm thinking that instead of writing something like

    for (int i = 0; i < NumberOfModels; i++) {
        UpdateModelAnimation(i);
    }

you could write

    ThreadPool* pool = new ThreadPool();
    for (int i = 0; i < NumberOfModels; i++) {
        pool->QueueAsyncCall(UpdateModelAnimation, i);
    }
    pool->WaitForAllToFinish();

The queueing of work could be made pretty low overhead, so if there were only a few thousand CPU instructions in the call you'd get a big speed up, but only if each processor already had the data it was working on in cache. If each core has a separate cache this would be a lot less efficient. Does anyone know?
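
The ThreadPool above is hypothetical, but the idea is straightforward to realize: a fixed set of worker threads pulling function objects off a locked queue. A bare-bones sketch (jobs are passed as ready-made closures, so the call above would become pool.QueueAsyncCall([i]{ UpdateModelAnimation(i); }); a WaitForAllToFinish would need an extra counter and is omitted here):

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    class ThreadPool {
    public:
        explicit ThreadPool(unsigned n = std::thread::hardware_concurrency()) {
            if (n == 0) n = 2;
            for (unsigned i = 0; i < n; ++i)
                workers.emplace_back([this] { WorkerLoop(); });
        }
        ~ThreadPool() {
            {
                std::lock_guard<std::mutex> lock(m);
                stopping = true;
            }
            cv.notify_all();
            for (auto& w : workers) w.join();   // drains remaining jobs before exiting
        }
        void QueueAsyncCall(std::function<void()> job) {
            {
                std::lock_guard<std::mutex> lock(m);
                jobs.push(std::move(job));
            }
            cv.notify_one();
        }
    private:
        void WorkerLoop() {
            for (;;) {
                std::function<void()> job;
                {
                    std::unique_lock<std::mutex> lock(m);
                    cv.wait(lock, [this] { return stopping || !jobs.empty(); });
                    if (stopping && jobs.empty()) return;
                    job = std::move(jobs.front());
                    jobs.pop();
                }
                job();   // run outside the lock so other workers can keep dequeuing
            }
        }
        std::vector<std::thread> workers;
        std::queue<std::function<void()>> jobs;
        std::mutex m;
        std::condition_variable cv;
        bool stopping = false;
    };

Whether the second core sees the job's data already warm depends on the cache design being asked about; on parts with private per-core caches each job would indeed start cold.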

If this were coupled with... (1)

Tavor (845700) | more than 9 years ago | (#11822668)

If this were coupled with a multi-cored GPU, think of the benefits Gamers would reap. Of course, Longhorn might require you to have both those processor cores, and one more video card, SLI'ed in to handle the vole-bloat...

Not convinced (1)

PhotoBoy (684898) | more than 9 years ago | (#11822673)

I'm not convinced that dual-cores are the answer to the problem both Intel and AMD are now having scaling up CPU performance.

Using dual-core for games, for example, will certainly allow developers to make some enhancements to their games by parallelising non-dependent parts of their engine, e.g. splitting A.I. and physics up, but at the end of the day once you've broken the game down into these parts you're going to be limited by processor speed again. Things can only be sub-divided into smaller tasks so much; once that limit is hit you are again reliant on faster clock speeds to do more.

I believe dual-cores are a distraction while the fundamental problem of reducing transistor leakage is addressed.

Dual core is soooo last-year (2, Insightful)

frogmonkey (777477) | more than 9 years ago | (#11822677)

I am going to wait for at least quad core 64bit processors ;)

what about chess? (1)

johansalk (818687) | more than 9 years ago | (#11822681)

The only 'games' I play on my computer are chess engines. What about those? Will they benefit from dual-core?

Re:what about chess? (1)

Ulric (531205) | more than 9 years ago | (#11822778)

Certainly. Any problem that can be parallelized benefits. Chess or pretty much any game engine that searches a game state tree can be parallelized.
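
One crude way to see why, as a sketch with placeholder move-generation/search functions (real engines use far more sophisticated parallel schemes, such as shared transposition tables), is to hand each root move to its own thread:

    #include <algorithm>
    #include <thread>
    #include <vector>

    struct Position { /* board state */ };
    struct Move     { /* one candidate move */ };

    // Placeholders for a real engine's move generator and search.
    std::vector<Move> GenerateMoves(const Position&)    { return {}; }
    Position ApplyMove(const Position& p, const Move&)  { return p; }
    int SearchBestScore(const Position&, int /*depth*/) { return 0; }

    int main() {
        Position root;
        std::vector<Move> moves = GenerateMoves(root);
        std::vector<int> scores(moves.size());
        std::vector<std::thread> threads;

        // Each root move is searched independently; a dual-core machine runs two at a time.
        for (std::size_t i = 0; i < moves.size(); ++i)
            threads.emplace_back([&, i] {
                scores[i] = SearchBestScore(ApplyMove(root, moves[i]), /*depth=*/6);
            });
        for (auto& t : threads) t.join();

        // Index of the root move whose subtree scored best for the side to move.
        std::size_t best = scores.empty() ? 0
            : std::size_t(std::max_element(scores.begin(), scores.end()) - scores.begin());
        (void)best;
    }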

Already done, apparently. (2, Informative)

raygundan (16760) | more than 9 years ago | (#11822826)

I would think so! All the "big" chess computers (Deep Blue, etc...) have just been massively parallel systems, and chess is one of those things that people have been coding and refining for years. I'm not much of a chess player myself-- computers have been kicking my ass since the 1MHz era, but it appears that multiprocessor chess software is already available for end-users:

Deep Junior 9 and Deep Shredder 9 [chessbase.com] support multiple processors, and should have no trouble on a multicore system.

Each core doubles how many moves it can evaluate in a given time-- and searching possible moves is primarily how chess algorithms work.

Plus... Shredder renders a fancy 3D glass chess set for you, making sure your GPU doesn't get lonely with nothing to do.

Chicken and Egg Question (1, Interesting)

Anonymous Coward | more than 9 years ago | (#11822692)

A lot of pieces have to be in place first. Multi-core CPUs have to exist first. That's just starting to happen. You have to have decent OS and API support for multiprocessing that exploits it, rather than putting in locks to make it seem single threaded, which slows things down considerably. Then you get the apps to start using it. Big learning curve on that last bit. Pretty spectacular program crashes when it's done wrong. Lots of gibbage, which makes debugging from a core dump challenging.

Boon for Game AI (2, Insightful)

fygment (444210) | more than 9 years ago | (#11822698)

A lot of posts have quite rightly pointed out that the GPU is currently how games use a "pseudo" dual core. But it seems that what games could be doing now is harnessing the potential of dual core not for graphics, but for game enhancement i.e. better physics and true AI implementations. Realism in games has to go beyond tarting up the graphics.

Already can take advantage it (2, Insightful)

mcbevin (450303) | more than 9 years ago | (#11822740)

The average system is already running a number of different processes at once. Even if most individual applications aren't multithreaded, a dual-core will not only make the system technically faster but also help hugely with the responsiveness of the system (which is often a far more important factor in how fast a system 'feels' to the user) whenever processes are running in the background.

While one might ask whether it makes much useful difference to the 'average' home user, one might ask the same about, say, 4 GHz vs 2 GHz - for most Microsoft Word users this makes little difference in any case. However, for most users who really make use of CPU power in whatever form, the dual-core will indeed make a difference even without multi-threaded applications, and it won't take long for most applications where it matters to become multi-threaded, as it's really not that hard to make most CPU-intensive tasks multi-threaded and thus further improve things.

I for one am looking forward to buying my first dual-CPU, dual-core system (i.e. 4x the power) once the chips have arrived and reached reasonable price levels, and I'm sure that power won't be going to waste.

Performance plateau and functional programming (5, Interesting)

barrkel (806779) | more than 9 years ago | (#11822749)

I believe that we're going to see a performance plateau with processors and raw CPU power for the next 5 years or so.

The only way CPU manufacturers are going to get more *OPS in the future is with many cores, and that's going to require either slower or the same kind of speeds (GHz-wise) as things are today. To get programs to run faster under these circumstances you need some kind of explicitly parallel programming.

We haven't seen the right level of parallelism yet, IMHO. Unix started out with process-level parallelism, but it looks like thread-level parallelism has beaten it, even though it is much more prone to programmer errors.

On the other end of the scale, EPIC architectures like Itanium haven't been able to outcompete older architectures like x86, because the explicitly parallel can be made implicit with clever run-time analysis of code. Intel (and, of course, AMD) are their own worst enemy on the Itanium front. All the CPU h/w prediction etc. removes the benefit of the clever compiler needed for EPIC.

Maybe some kind of middle ground can be reached between the two. Itanium instructions work in triples, and you can effectively view the instruction set as programming three processors working in parallel but with the same register set. This is close (but not quite the same) to what's going to be required to efficiently program multi-core CPUs, beyond simple SMP-style thread-level parallelism. Maybe we need some kind of language which has its concurrency built in (something sort of akin to Concurrent Pascal, but much more up to date), or has no data to share and can be decomposed and analyzed with complete information via lambda calculus. I'm thinking of the functional languages, like ML (consider F#, which MS Research is working on), or Haskell.

With a functional language, different cores can work on different branches of the overall graph, and resolve them independently, before they're tied together later on.

It's hard to see the kind of mindset changes required for this kind of thinking in software development happening very quickly, though.

We'll see. Interesting times.
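
The "independent branches" point can be sketched even in a conventional language: with pure, side-effect-free functions, the two sides of an expression can be evaluated on separate cores and only joined at the end. Here std::async is a stand-in for what a functional runtime could do automatically:

    #include <future>

    // A pure function: no shared state, so separate calls are independent of each other.
    long Fib(int n) { return n < 2 ? n : Fib(n - 1) + Fib(n - 2); }

    int main() {
        // Each branch of the expression tree can run on its own core,
        // because neither depends on the other's result until the final '+'.
        auto left  = std::async(std::launch::async, Fib, 38);
        auto right = std::async(std::launch::async, Fib, 39);
        long combined = left.get() + right.get();
        (void)combined;
    }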

in other news... (2, Funny)

SpongeBobLinuxPants (840979) | more than 9 years ago | (#11822771)

anticipating 75% of their chip sales to be dual core chips by the end of 2006

global warming expected to increase by 75% by the end of 2006

Good Thing For Number Crunching (1, Funny)

ObsessiveMathsFreak (773371) | more than 9 years ago | (#11822796)

Dual core is a godsend!
As anyone who works with number crunching apps will tell you, having two cores seriously improves your work quality.
Not because number crunching apps are taking advantage of dual cores.

It's because now I can set one core to work on those wicked hard numerical calculations while I kick back and watch movies and play music for a few hours. Bliss!

Nevertheless, it would be nice if there were an easier way to make apps use multiple cores. I'd love to be able to speed up my crunching by getting a program to use both cores, intuitively, but I don't expect this to happen any time soon. Surely there have to be easier ways of making apps thread compliant?