
Valve's New Direction On Multicore Processors

Zonk posted more than 7 years ago | from the hard-not-to-enjoy-this dept.


illeism writes "Ars Technica has a good piece on Valve's about-face on multithreaded and multicore programming. From the article: '...we were treated to an unveiling of the company's new programming strategy, which has been completely realigned around supporting multiple CPU cores. Valve is planning on more than just supporting them. It wants to make the absolute maximum use of the extra power to deliver more than just extra frames per second, but also a more immersive gaming experience.'"


80 comments


So have the Win multicore bugs been worked out? (3, Interesting)

kalirion (728907) | more than 7 years ago | (#16738901)

I remember reading of all kinds of bugs in games running on dual-core processors in Windows. Something to do with the OS providing different amounts of power to the two cores. Has that been sorted out, or will Valve be compensating in the game engine code?

Re:So have the Win multicore bugs been worked out? (2, Interesting)

oggiejnr (999258) | more than 7 years ago | (#16739021)

A lot of problems can be caused by the lack of a coherent timing signal across even logical, let alone physical, processors. This is the main reason behind bugs like cut-off lines and out-of-sync video, which affect a lot of older games designed for only one processor.
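
To make that concrete: on early dual-core systems the per-core time-stamp counters could drift apart, so a game reading the timer from whichever core it happened to be scheduled on could see time jump around or even run backwards. A minimal Windows sketch of the symptom (illustrative only, assuming MSVC's __rdtsc intrinsic):

<ecode>
// Hypothetical demo: sample the time-stamp counter from two cores.
// On early dual-core systems the TSCs were not synchronized, so the
// delta below could come out wildly wrong or even negative.
#include <windows.h>
#include <intrin.h>
#include <cstdio>

unsigned long long ReadTscOnCore(DWORD core) {
    // Pin the current thread to one core before sampling the TSC.
    SetThreadAffinityMask(GetCurrentThread(), 1ULL << core);
    Sleep(0);  // yield so the scheduler can migrate us
    return __rdtsc();
}

int main() {
    unsigned long long t0 = ReadTscOnCore(0);
    unsigned long long t1 = ReadTscOnCore(1);
    // If the counters were synchronized, t1 - t0 would be small and
    // positive; a negative value is what breaks naive game timers.
    printf("delta = %lld\n", (long long)(t1 - t0));
    return 0;
}
</ecode>

The AMD CPU driver mentioned elsewhere in this thread reportedly works around exactly this class of problem.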

Re:So have the Win multicore bugs been worked out? (1)

Enry (630) | more than 7 years ago | (#16739023)

Huh. Flatout 2 (great game!) had a problem on my dual core Athlon 64. When I downloaded and installed the AMD drivers for the CPU, the problems went away. I haven't tried HL2/Episode 1 yet on that CPU, but everything else I have works fine.

Re:So have the Win multicore bugs been worked out? (1)

ThosLives (686517) | more than 7 years ago | (#16739901)

Anyone else disturbed by the fact that even processors now require drivers?

Re:So have the Win multicore bugs been worked out? (0)

Anonymous Coward | more than 7 years ago | (#16740057)

Compared to the alternative (different versions of every single program, each working only with one specific processor)... no, not really.

Re:So have the Win multicore bugs been worked out? (1)

mlheur (212082) | more than 7 years ago | (#16740073)

Not particularly. Processors already had a driver: the OS. So, who do you trust more to write for that processor, your OS provider or your processor provider?

Re:So have the Win multicore bugs been worked out? (2, Interesting)

ThosLives (686517) | more than 7 years ago | (#16741963)

I've never considered an OS as a 'driver' for a processor; I've always looked at a processor as a fixed-ability piece of hardware, and for good reason: reliability. The more knobs you have on something, the more likely it is to be broken.

Not really (1)

phorm (591458) | more than 7 years ago | (#16740299)

Your system cannot be aware of the advanced functionality of any device, including CPUs, unless:
a) It has drivers
b) It's programmed into the software
c) It conforms to an existing standard in a more efficient way (i.e. faster at operation X internally, without any different I/O communication to the machine)

I'd rather have drivers at the OS level than code bloat in every app for hundreds of hardware combinations. Besides, I've been compiling Linux kernels specific to my hardware for years, so Windows CPU drivers aren't such a stretch.

Re:Not really (1)

ThosLives (686517) | more than 7 years ago | (#16742101)

My take is that this 'advanced' functionality doesn't belong in the CPU as such. A CPU should be a wholly transparent entity that causes information to flow around, not something that I can poke and prod at (yes, I know how much fun that is). I think perhaps the problem here is that by 'CPU' I really do mean a 'CPU' - a driver for the interlink between CPUs, even if they're on the same die, is really something different. However, now that I think of it that way, I don't know that I have such a problem with things.

Re:Not really (1)

dknj (441802) | more than 7 years ago | (#16747791)

What, do you not remember the Pentium F00F bug? This is why microcode exists: to fix problems in the chip design after the chip has been released. It's why you don't see recalls on processors. I'm willing to bet this "processor driver" was nothing more than a microcode update, but I haven't researched it, so take this line with a grain of salt.

Re:So have the Win multicore bugs been worked out? (1)

Creepy (93888) | more than 7 years ago | (#16742857)

That's not necessarily a multicore problem. With most games, it's more likely a problem with the 64-bit architecture running 32-bit code.

The exception would be if the code is, in fact, multithreaded, or if it runs a server and clients as separate processes. Most games don't, or at best use a thread to dynamically load files in the background (and most games that do that are RPGs like Diablo 2 and Gothic 1-3). Even then, assuming that threads are stable, it's highly likely that the problem would crash 32-bit machines as well (with the possible exception of race conditions).

The major problems with threads are deadlock/starvation (see the dining philosophers problem [mtu.edu]) and race conditions (you have A and B in separate concurrent threads, but A needs to finish before B). Both of these problems are usually caused by coding errors.
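
To make the lock side of that concrete, here is a minimal C++ sketch (the lock names are made up, not from any real engine) of the classic lock-ordering deadlock and the standard way out - acquiring both locks together:

<ecode>
// Illustrative only - made-up lock names, not from any real engine.
#include <mutex>

std::mutex physics_lock, ai_lock;

void deadlock_prone() {
    // Thread 1 takes physics_lock then ai_lock; if thread 2 takes
    // them in the opposite order, each ends up holding the lock the
    // other wants, and both wait forever: deadlock.
    std::lock_guard<std::mutex> a(physics_lock);
    std::lock_guard<std::mutex> b(ai_lock);
    // ... touch shared state ...
}

void deadlock_free() {
    // std::scoped_lock (C++17) locks both mutexes with a built-in
    // deadlock-avoidance algorithm, so acquisition order no longer
    // matters anywhere in the codebase.
    std::scoped_lock both(physics_lock, ai_lock);
    // ... touch shared state ...
}
</ecode>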

Re:So have the Win multicore bugs been worked out? (1)

Enry (630) | more than 7 years ago | (#16745383)

Err... It's a 32-bit OS and a 32-bit application. The fact that the CPU supports 64-bit has nothing to do with it (or else the single-core chip I had in previously would have had problems as well).

Re:So have the Win multicore bugs been worked out? (1)

Creepy (93888) | more than 7 years ago | (#16751867)

Yeah, but it's still an underlying 64-bit architecture that's running in a 32-bit mode. Fixes for bugs in that would be provided by AMD.

    Since you tried a single core, though, it makes me suspect the real problem is with the shared cache between the cores. I can't think of anything else that AMD would patch, at least, that would fix application problems, especially if the cores themselves are the same architecture as the single-core chip you had in first.

Re:So have the Win multicore bugs been worked out? (1)

SnowZero (92219) | more than 7 years ago | (#16746583)

The major problems with threads are deadlock/starvation (see the dining philosophers problem) and race conditions (you have A and B in separate concurrent threads, but A needs to finish before B). Both of these problems are usually caused by coding errors.

When it first came up that game programmers were mystified about how they were going to use multi-core processors, I didn't understand why it would be as hard as they claimed. I've been writing graphics and multi-threaded software for years in support of my robotics work. The graphics and visibility algorithms that games employ can be quite complex, and these programmers eat those for breakfast. In comparison, threading doesn't really seem that bad; if you modularize your code and data, getting correct locking isn't that hard. Then I realized that these guys fell asleep in their Operating Systems class (after staying up working on Advanced Graphics), or their school was myopic like mine and required you to take Graphics or AI or Operating Systems. A game programmer should take all three, but will pretty much always just take the first one. Hopefully that will change soon.

Re:So have the Win multicore bugs been worked out? (1)

Creepy (93888) | more than 7 years ago | (#16752587)

Most game programmers have none of the above. I've known three, and the best educated of them has between two and three years of college, which got him through C++; then he got hired by Volition (and has moved around since). The other two started working straight out of high school for MECC (the Oregon Trail people) and Wizard Works (or WizWorks? - low-budget games), both of which were absorbed into larger companies. I lost touch with all three of them in college, but I do still chat on IRC with some other game industry people (and lots of wannabes) due to some open-source game engine work I do.

Incidentally, I wanted to get into games once upon a time (I'm currently employed by a CAD company), and I had Graphics and AI, but not Operating Systems. I did have hardware courses, because I took CE for two years before switching to CSCI, but no fundamental OS courses. Quite honestly, I don't see the point of AI - A* is probably the most important part of game programming, and it was less than one day of my coursework. Most of the other stuff is not really suited to game programming, though some of that class was redundant with my hardware classes (doing AND, OR, XOR, etc. in software). Heck, we wasted more time learning the differences between Lisp and Scheme (the "starter" language taught at the college I transferred into) than on A*. The only thing I've used Lisp for since is customizing emacs. I've used A* a LOT.

You missed Networking, though I'd stress finding a network-coding class as opposed to a survey of networking (my first networking class was more of a survey class - it was interesting, but had almost no programming).

Re:So have the Win multicore bugs been worked out? (1)

the real darkskye (723822) | more than 7 years ago | (#16745591)


I haven't tried HL2/Episode 1 yet on that CPU, but everything else I have works fine.

When you do, let me know if you get the famous audio stuttering bug that my rig can't beat.

gpu: nVidia GeForce 7100
cpu: AMD Athlon 64 3500+
snd: Creative Audigy 2 ZS
ram: 1.5GB DDR


Re:So have the Win multicore bugs been worked out? (1)

default luser (529332) | more than 7 years ago | (#16756261)

Couldn't have ANYTHING to do with the fact that you have a POS video card, could it?

Nvidia pulled a fast one on you. The 7100 GS series is a rebranding of the old 6200 TC series [theinquirer.net], and thus has pathetic performance. The old 6200 does not have advanced compression technology (unlike every other chip available today), which means that with a pathetic 64-bit bus it runs like a dog.

I've noticed with Source that the lower your framerate and the less on-card memory you have, the more likely you are to encounter sound stuttering (due to texture loads and the like). With the extra load TurboCache is putting on your memory subsystem, I wouldn't be surprised at all if that is the source of the problem.

Try running it in DirectX 7 mode and see if that doesn't help things. But my best suggestion would be to get a real video card (i.e. no TurboCache, at least a 128-bit bus, and more than two ROPs). You can pick up a card about four times as powerful as your 7100 for under $100 - the 7600 GS.

Some other questions:

Your amount of RAM makes me think you have a Socket 754 system. Do you have 3x512MB RAM sticks? That hurts performance.

Also, if you have a Socket 754 system, realize that memory bandwidth is precious on such systems and is being eaten up by your 7100 card. You can't expect your sound card to stream audio without stuttering if the video card is sapping your limited memory bandwidth.

Re:So have the Win multicore bugs been worked out? (-1, Offtopic)

Anonymous Coward | more than 7 years ago | (#16739143)

You forgot to say "FP bitches". fix plz k thx

Re:So have the Win multicore bugs been worked out? (2, Informative)

Ford Prefect (8777) | more than 7 years ago | (#16739323)

I've had no problems at all with the original Half-Life, and its sequel, on a dual-core machine [hylobatidae.org] .

From a modding point of view, the Source map compilation tools are fully SMP-aware - so I guess someone at Valve knows about multithreaded programming. Seeing both processors pegged at 100% is great, as is hearing the whooshing noise from my laptop's fans. No belching of flames quite yet, fortunately.

(Actually, the compilation tools will scale up to running in a distributed manner - apparently at Valve, even the receptionist's PC contributes processor time. But the necessary glue code isn't available for us modders, alas.)

Re:So have the Win multicore bugs been worked out? (1)

cnettel (836611) | more than 7 years ago | (#16751455)

One significant problem when writing game code is that you have to keep that >50 FPS frame rate. In such an environment, context switches can be a killer on their own. If you just go ahead and write a schoolbook threaded design, it might work on a quad-core chip, but it will perform far worse on a single-core chip than the single-threaded design would. You can naturally avoid the context switches by serializing the calls again, but then you realize that you don't really need to copy around all that data to synchronize a problem that isn't there.

I would hazard a guess that the transition would be faster if no one felt the need to target single-core chips. Synchronization is not simple, but it's not hard. Good response times without jitter are not simple, but not hard. Combining the two over different processing-power layouts - that's hard. Or at least harder.
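
A hedged sketch of the usual compromise (function names are hypothetical): size the parallelism to the machine, and fall back to plain serial execution on a single core so you never pay for context switches that can't buy you anything:

<ecode>
// Sketch: go wide only when the hardware can actually overlap work.
#include <functional>
#include <thread>
#include <vector>

void run_tasks(const std::vector<std::function<void()>>& tasks) {
    if (std::thread::hardware_concurrency() <= 1) {
        for (const auto& t : tasks) t();  // single core: stay serial,
        return;                           // no context-switch tax
    }
    std::vector<std::thread> pool;
    for (const auto& t : tasks) pool.emplace_back(t);
    for (auto& th : pool) th.join();      // multi-core: run concurrently
}
</ecode>

A real engine would keep a persistent thread pool instead of spawning per batch, but the point stands: the single-core path and the multi-core path want different code.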

Re:So have the Win multicore bugs been worked out? (3, Informative)

Apocalypse111 (597674) | more than 7 years ago | (#16739579)

For a while, if you played Planetside on a dual-core machine, it essentially gave you a speedhack. It didn't affect your rate of fire, but it did affect your rate of movement and how quickly your COF bloom came back down. While in the lightest armor available, with the speed implant installed and enabled, it was possible to run almost as fast as a Mosquito (the fastest aircraft available) on afterburners. In a tank you were almost untouchable, and a Galaxy (a large air transport craft capable of carrying 12 people including the pilot) could get you and your squad to your target faster than the enemy could respond. It was nuts, but fortunately not much abused, as those caught doing it were frequently reported.

Re:So have the Win multicore bugs been worked out? (1)

irc.goatse.cx troll (593289) | more than 7 years ago | (#16740043)

That's how it was in Half-Life (1) under certain conditions - likely the same ones.

Except it did affect your rate of fire in addition to movement. It was full-on speedhacking, no different than the downloadable program. Luckily Valve's anticheat doesn't detect any form of speedhacking, so nobody got globally banned for it, but I'm sure a lot of people got banned from servers for it without knowing how to fix it or even why it was happening.

Got an odd result on Linux, too. (1)

Dr. Manhattan (29720) | more than 7 years ago | (#16739595)

I'm currently working on icculus' port of the older game "Aliens versus Predator". I noticed that the darn thing would run fine, ~30fps, on a PIII-700MHz with i810 graphics, but it could also run like a dog. On what I thought was an unrelated note, I added the ability to play the game music from Ogg files instead of off the CD. For some weird reason, when the game is playing the music from files, the framerate is at least 50fps. If it can't find the files, the framerate drops again. Totally bizarre, but totally repeatable.

My current theory is that OpenAL's AL_EXT_VORBIS extension uses a hidden thread and some interaction there lets the game proceed. But that's just a guess. Anyone ever run into something like that?

Re:Got an odd result on Linux, too. (0)

Anonymous Coward | more than 7 years ago | (#16740037)

Hmm, my personal uneducated guess is that if you were using music tracks on the CD, rather than playing the music using the CD drive (straight to the sound card via the tiny little cable), it's ripping the music in realtime and playing that. On an IDE drive that's pretty much suicide, but when you're dealing with users who have no audio cable, or have their mixers set at funky levels or whatever, it's pretty much necessary.

Re:Got an odd result on Linux, too. (1)

Dr. Manhattan (29720) | more than 7 years ago | (#16742075)

Hmm, my personal uneducated guess is that if you were using music tracks on the CD, rather than playing the music using the CD drive (straight to the sound card via the tiny little cable), it's ripping the music in realtime and playing that.

Nah, when it plays the music via CD it's just using the SDL calls to start the CD playing analog, not via the digital interface. And in the one case, I don't even have the CD in at all and the framerate's still slow. But if I have the game load up an Ogg version of the song and play it in the background, something that should take more CPU, the framerate goes up. It's really weird. I'm not even sure how to debug it.

So, no CD - slow. CD playing - slow. Ogg music - fast. WTF?

Re:Got an odd result on Linux, too. (0)

Anonymous Coward | more than 7 years ago | (#16744015)

Most likely the threads are blocking on CD access, due to delays in the drive controller when you access a CD drive and an IDE hard drive on the same channel. That would be my guess. If you play the Ogg files from memory or the hard drive, you are only accessing the hard drive, and in memory the access is obviously much faster. At some level the calls are blocking, causing a drop in the frame rate in seemingly unrelated threads.

Re:So have the Win multicore bugs been worked out? (0)

Anonymous Coward | more than 7 years ago | (#16740913)

For Athlon processors:

http://www.hardforum.com/showthread.php?t=983781 [hardforum.com]

Apply the 3 mentioned fixes, in order. Your dual-core Athlon system will run without glitches.

Re:So have the Win multicore bugs been worked out? (1)

Emetophobe (878584) | more than 7 years ago | (#16741361)

I remember reading of all kinds of bugs in games running on dual-core processors in Windows. Something to do with the OS providing different amounts of power to the two cores. Has that been sorted out, or will Valve be compensating in the game engine code?

It's mainly been sorted out. Of all the games I own, the only one with dual-core issues is Need for Speed Most Wanted.

List of my games that work fine with dual core:
1) Warcraft 3
2) UT2004
3) Need for Speed Underground 2
4) Call of Duty 2
5) Oblivion
6) Call of Juarez
7) San Andreas
8) Doom 3
9) FarCry
10) Settlers 2 - 10th Anniversary Edition (loved the original!)
11) Sid Meier's Railroads
12) Halflife 2

List of my games that don't work right (the fix is to set CPU affinity manually; see the sketch below):

1) Need for Speed Most Wanted
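
For reference, the manual-affinity fix can also be done in code rather than through Task Manager. A tiny illustrative Windows sketch - pin the process to the first core before the game's own code runs (one way to use it is a small launcher wrapper):

<ecode>
// Tiny illustrative wrapper: pin the process to the first core so a
// dual-core-unaware game behaves as on a single-core machine.
#include <windows.h>

int main() {
    SetProcessAffinityMask(GetCurrentProcess(), 0x1);  // bit 0 = CPU 0
    // ... launch or continue into the game from here ...
    return 0;
}
</ecode>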

Wow! (0)

Anonymous Coward | more than 7 years ago | (#16738907)

So their "new direction" is "we're going to use them?"

Amazing.

Re:Wow! (2, Funny)

SnowZero (92219) | more than 7 years ago | (#16746603)

Yep. Valve finally hired a systems programmer, and now they can do threading. This is almost as revolutionary as hiring someone with a background in AI to work on AI, rather than hiring a graphics programmer to do AI.

5 cores! (0)

Linkiroth (952123) | more than 7 years ago | (#16738977)

For the closest shave! Er... fastest app!

Re:5 cores! (1)

dextromulous (627459) | more than 7 years ago | (#16739079)

Don't forget that one extra core for compatibility with single-core-only apps!

Re:5 cores! (0)

Anonymous Coward | more than 7 years ago | (#16739493)

Well, if we're making Gillette Fusion comparisons, I suppose that one extra core will probably be a 386 SX.

A good supplementary article (2, Informative)

Dr. Eggman (932300) | more than 7 years ago | (#16739093)

With Videos! [bit-tech.net] (on the 4th page.)

GPU Death (2, Insightful)

Thakandar2 (260848) | more than 7 years ago | (#16739211)

I realize this was brought up in the story, but I really do think that when AMD bought ATI, they were looking ahead to stuff like this. If AMD started adding ATI GPU instructions to their cores, and you get 4 of them on one slot along with the memory controller, what kind of frame rates and graphical shenanigans will happen then?

Of course, the problem is that my AMD 64 3200 will do DirectX 8 with my old GeForce, and will do DirectX 10 if I buy the new ATI card after Vista ships, since it's the video card handling that. But if I buy a new AMD multicore CPU with GPU instructions, will I have to upgrade my processor rather than my video card to get new features? And if I do, what will the processor price points be, since new Intel/AMD extreme chips cost $1000 at launch and bleeding-edge graphics cards cost $500?

As of right now, I get a big boost in game performance if I just upgrade my old video card and buy a new one.

Re:GPU Death (1)

Aadain2001 (684036) | more than 7 years ago | (#16739347)

Ah, you have run into the classic problem of integrating too many features into one piece of hardware. Now, instead of just upgrading your video card, you have to upgrade your entire processor and possibly your entire system (a new mobo for the new CPU pin configuration, and new memory due to a different memory controller in the new CPU/GPU). I've always felt that the components you are most likely to upgrade (CPU, GPU, and memory) should always be separate so you are not forced to upgrade all at once. But of course that means less money for AMD/ATI, so who cares what the consumer wants ;)

Re:GPU Death (1)

frosty_tsm (933163) | more than 7 years ago | (#16740361)

I am sure that if AMD could make it so you could just upgrade the CPU and get value for that upgrade, they would. They don't sell memory or motherboards, and only recently have they gotten a vested interest in GPUs.

Re:GPU Death (1)

Bert64 (520050) | more than 7 years ago | (#16751649)

Often you find that your new GPU requires a relatively fast processor to feed it data quickly enough...
Not to mention newer versions of AGP/PCIe, etc...
I bought an Athlon64 3200 a while back, and pretty soon thereafter upgraded my video card too, since the old one had poor to non-existent 64-bit drivers. And when I bought that video card originally, for a K6-2/400, it provided virtually no performance improvement over the previous card I had, because the system couldn't keep up with it.

Re:GPU Death (1)

webrunner (108849) | more than 7 years ago | (#16752207)

Of course that problem exists even with completely discrete components when new pipelines come out.

To update my home machine's processor or video card, I have to replace the motherboard:
- All faster processors worth the money to upgrade to use a different socket
- All faster video cards worth the money to upgrade to are PCI Express

If I get a new video card, I have to get a new motherboard to support it, and then I have to get a new processor to fit in the new slot.

If I get a new CPU, I have to get a new motherboard to support that, and unless I waste the money and go for an AGP motherboard, or unless I find one of the extremely rare AGP+PCI Express dual motherboards, I have to buy a new video card for the new slot.

At this point I might as well buy a new machine and use the old one as a server.

Re:GPU Death (1)

ClamIAm (926466) | more than 7 years ago | (#16742005)

If AMD started adding ATI GPU instructions to their cores, and you get 4 of them on one slot along with the memory controller, what kind of frame rates and graphical shenanigans will happen then?

Well, AMD Fusion [tgdaily.com] looks interesting. Of course, I also hope that we'll still be able to buy the parts separately, because it's cheaper to upgrade parts as you go rather than all at the same time.

But the possibilities of a hybrid processor are pretty cool. For example, this type of CPU would be awesome in a home theater PC, cutting in half the number of super-hot, power-hungry chips you have to deal with. And if they scale it down far enough, portable media players and handheld game consoles (e.g. Game Boys) could benefit from this as well.

Re:GPU Death (1)

cnettel (836611) | more than 7 years ago | (#16751549)

On the other hand, we already know that GPUs and CPUs separate pretty well. The APIs are rather batch-based, right now out of necessity, so you can separate them. Even when we already have some airflow within the HTPC case, it's not at all obvious that it's a good thing to integrate even more processing power onto the same die: cooling that one part might actually be harder than cooling the discrete components. At the very least, it makes sense to keep integrated graphics in the chipset rather than in the CPU.

I guess someone will start shouting about the glorious benefits of sharing the cache. I don't think that's too relevant, since a CPU cache is designed to be great at associativity and random accesses (and at accepting writes to just about any data, the instruction cache aside). GPU cache design is a closely held secret, but while such caches exist, they are rather small relative to the enormous bandwidth requirements of refreshing a frame buffer umpteen times per second, and there is probably quite a strict distinction between read-only and read-write data. It's not just a matter of adding "GPU instructions" to the existing instruction set - we already have SIMD. GPUs are extremely scalable SIMD machines, with a very different design tradition.

It's The Execution Units That Count (0)

Anonymous Coward | more than 7 years ago | (#16753129)

If AMD started adding ATI GPU instructions to their cores, and you get 4 of them on one slot along with the memory controller, what kind of frame rates and graphical shenanigans will happen then?

Remember that today's GPUs have way more transistors for logic than today's CPUs (because they can easily do so much more in parallel). If AMD started supporting GPU instructions (the low-level stuff beneath 3D API calls -- mostly FP vector ops), it wouldn't mean squat unless they managed to add all those dozens of execution units for the ops as well. ATI and Nvidia are constantly pushing (TSMC/UMC) manufacturing to the limits to have even more shader engines in their GPU chips -- AMD hardly has the transistor budget to add even one of those cores into their single-core Athlon chips.

And your solution would instantly make most games bandwidth-limited -- today's video cards offer faster memory on a twice wider bus, dedicated to the GPU only.

Sheesh, the whole point of 3D accelerator chips from the friggin' ViRGE onwards was to add more horsepower to augment the CPU. Why the heck try to squeeze all that back into the CPU? Add more plain jane CPU cores or L2 cache instead, and keep graphics where it belongs...

That said, I happily agree with the rest of your post :)

Big deal... (1, Insightful)

HawkingMattress (588824) | more than 7 years ago | (#16739461)

Sounds like pure hype. Basically they're saying they have a two-year technical advantage because they've been working on better multithreading schemes and avoiding deadlocks to put multiple cores to better use. How groundbreaking... Never mind that most game companies have obviously been working on that too but simply don't talk about it. They also make it sound like they invented distcc... which reinforces the impression that they're just trying to impress people and show how high-tech they are (not). Almost sounds like a stock-increase scheme or something along those lines, if you ask me.

Re:Big deal... (0)

Anonymous Coward | more than 7 years ago | (#16739783)

Yes, everything is obvious and plain, and you could do it all in your sleep twenty times better, if only you didn't need all your time to post on Slashdot bitching about it. We all know.

Re:Big deal... (0)

Anonymous Coward | more than 7 years ago | (#16740583)

lol that's a pwn

Re:Big deal... (0)

Anonymous Coward | more than 7 years ago | (#16740703)


Sounds like pure hype. Basically they're saying they have a two-year technical advantage because they've been working on better multithreading schemes and avoiding deadlocks to put multiple cores to better use. How groundbreaking... Never mind that most game companies have obviously been working on that too but simply don't talk about it. They also make it sound like they invented distcc... which reinforces the impression that they're just trying to impress people and show how high-tech they are (not). Almost sounds like a stock-increase scheme or something along those lines, if you ask me.

Why is this modded up?

1. It's likely they are talking about map compiling instead of code (it's not made very clear in the article).
2. Valve is an LLC, not a publicly traded company, so no, this isn't a stock scam.

Your post reinforces that *you're* "just trying to impress people and show how high tech they are (not)."

Debugging multithreaded code (4, Informative)

Coryoth (254751) | more than 7 years ago | (#16739463)

Debugging multithreaded code can be relatively easy; you just have to start off on the right foot. The best way to do that is to leave behind older concurrency models like monitors with mutexes - a model its own inventor rejected back in the 80s - and go with more recent concurrency models like CSP [usingcsp.com] (the newer way to do concurrency from the man who brought you monitors). From a perspective like CSP's, reasoning about concurrency is a lot easier, and hence debugging becomes much simpler. In fact, there are model-checking tools that can verify the absence of deadlocks, etc. The downside is that it's much easier if you have a language that supports the model, or an add-on library that does it for you. You can get CSP add-ons for Java: JCSP [kent.ac.uk], and for C++: C++CSP [twistedsquare.com]. Alternatively, languages like Eiffel, Erlang, Occam, and Oz offer more of what you need out of the box - concurrent programming in those languages is easy to get right. Changing languages is, of course, not an option for most people.
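
To give a flavour of the CSP style in C++ terms, here's a hand-rolled sketch (not the actual C++CSP API, and buffered where real CSP channels rendezvous synchronously): threads share nothing and communicate only over channels, so there are no user-visible locks to get wrong.

<ecode>
// Hand-rolled CSP-flavoured channel; illustrative, not the C++CSP API.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

template <typename T>
class Channel {
    std::queue<T> q;
    std::mutex m;
    std::condition_variable cv;
public:
    void send(T v) {
        { std::lock_guard<std::mutex> lk(m); q.push(std::move(v)); }
        cv.notify_one();
    }
    T receive() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return !q.empty(); });
        T v = std::move(q.front()); q.pop();
        return v;
    }
};

int main() {
    Channel<int> ch;
    std::thread producer([&] { for (int i = 0; i < 5; ++i) ch.send(i); });
    for (int i = 0; i < 5; ++i)
        std::cout << ch.receive() << '\n';  // processes meet only at the channel
    producer.join();
}
</ecode>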

Re:Debugging multithreaded code (2, Interesting)

I Like Pudding (323363) | more than 7 years ago | (#16739665)

Don't forget about software transactional memory [wikipedia.org] . Haskell already has it, and I'm sure there are more implementations to come (Perl 6, for instance).
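
C++ has no STM, but the optimistic "try, detect conflict, retry" idea at its heart can be loosely illustrated with a compare-and-swap loop (a toy only - real STM tracks whole read/write sets, not a single word):

<ecode>
// Loose illustration of optimistic concurrency: read, compute,
// commit only if nobody else changed the value in the meantime.
#include <atomic>

std::atomic<int> balance{100};

void deposit(int amount) {
    int old = balance.load();
    // On failure, compare_exchange_weak refreshes 'old' with the
    // current value, so the loop recomputes and retries - the same
    // retry-on-conflict shape an STM commit has.
    while (!balance.compare_exchange_weak(old, old + amount)) {
        // conflict detected: try again
    }
}
</ecode>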

Re:Debugging multithreaded code (1)

LordMyren (15499) | more than 7 years ago | (#16740933)

speaking as a n00b (1)

east coast (590680) | more than 7 years ago | (#16739725)

IANAD... but I do dabble a bit and want to head in that direction professionally (not as a game developer, but more towards general applications).

Should I start cutting my teeth on the concepts of multicore programming? Is there enough of an advantage for small, generalized apps? How does software written with multicore in mind suffer on single-core systems?

I've been thinking about this more, but I currently don't have the proficiency to take it as seriously as the general studies I'm doing. Or am I wrong, and is this the best time to get into the habit for what seems to be the future of desktop PCs?

Re:speaking as a n00b (1)

Emetophobe (878584) | more than 7 years ago | (#16741525)

I would recommend learning how to write multi-threaded programs now. The way CPUs are heading, pretty much everyone will have a dual or quad core in 3-5 years, so I think it would be a wise decision to learn now rather than later.
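
A first program along those lines doesn't need much. A toy sketch (using modern C++ threads for brevity): split independent work, join, and note that the same code still runs correctly - just without speedup - on a single core:

<ecode>
// Toy first multithreaded program: sum two halves of an array in
// parallel. On a single core it still works, just with no speedup.
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<int> data(1000000, 1);
    long long lo = 0, hi = 0;
    auto mid = data.begin() + data.size() / 2;

    std::thread t([&] { lo = std::accumulate(data.begin(), mid, 0LL); });
    hi = std::accumulate(mid, data.end(), 0LL);  // main thread does half
    t.join();

    std::cout << lo + hi << '\n';  // prints 1000000
}
</ecode>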

So what are the tools? (1)

adz (630844) | more than 7 years ago | (#16739877)

The whole article talks about how Valve created tools to ease multithreaded programming, but it never actually says what they are. There's very little to the article beyond the obvious statement that multithreaded code can run faster.

And the technical content is questionable - for example, the definitions of the multithreading problems are somewhat inaccurate.

Re:So what are the tools? (1)

dknj (441802) | more than 7 years ago | (#16747861)

They spread tasks across all the cores. Valve has maybe a one-month head start, but let's look at the bigger picture: the PS3 was designed with this in mind, and development kits have been out for a while. Of course, you have to design your game differently for a PS3 compared to, say, an Xbox 360 or a PC, since you don't have to traverse a series of CPUs before getting to the one you want.

This is Valve doing smoke and mirrors to hype up the Steam platform; in reality we'll see games designed in a similar manner in a few months.

Re:So what are the tools? (1)

adz (630844) | more than 7 years ago | (#16749327)

That's not a tool though, that's simply a decomposition strategy!

OS X Leopard and OpenGL (1)

99BottlesOfBeerInMyF (813746) | more than 7 years ago | (#16739917)

It is nice when software is rewritten to take advantage of multiple cores, and I imagine that most new games will be designed to do this. For older games, however, Apple has announced that programs using the OpenGL APIs will automatically spawn a process that feeds the GPU, using a second thread. This means, theoretically, that some programs could see up to 2x the performance when run on OS X 10.5 and a dual-core system, without any changes from the developers.

It's a nice little optimization, and hopefully Vista will have something similar (although I haven't heard of such a thing yet).

Re:OS X Leopard and OpenGL (0)

Anonymous Coward | more than 7 years ago | (#16741743)

ATI's and Nvidia's drivers are already multithreaded on Windows, but there is only a 10% improvement at best.
See http://www.behardware.com/news/7949/driver-ati-nvidia-multithread.html [behardware.com] for one benchmark.

Rendering frames to the screen is inherently serial, so you can't make it much faster with more cores.

Re:OS X Leopard and OpenGL (3, Informative)

99BottlesOfBeerInMyF (813746) | more than 7 years ago | (#16741975)

ATI's and Nvidia's drivers are already multithreaded on Windows, but there is only a 10% improvement at best... Rendering frames to the screen is inherently serial, so you can't make it much faster with more cores.

We're not talking about the drivers, per se. Many of the libraries used by OpenGL programs, and some of the OS interactions, will be spawned as a second "feeder" process that does nothing but send data to the graphics card/drivers. This means programs that are CPU-bound and single-threaded can offload one big task to the second processor without any work from the developers, or even a recompile. Theoretically, the perfect storm would be a process where half the work is feeding the GPU and the bottleneck to the GPU is at least half as wide as the CPU bottleneck... resulting in twice the performance. That will never happen, of course, and I don't expect much benefit from this optimization in general, but it is still kinda neat and might be useful in some instances.
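
The shape of such a feeder is a single-producer/single-consumer command queue: the app thread records commands, and one dedicated thread - the only one that touches the GL context, since contexts are bound to a thread - drains them to the driver. A hedged sketch (made-up names, not Apple's code):

<ecode>
// Sketch of a single-producer/single-consumer command ring: the game
// thread writes commands, the feeder thread (which owns the GL
// context) drains them. Illustrative only.
#include <array>
#include <atomic>
#include <functional>

class CommandRing {
    static const size_t N = 1024;               // power of two
    std::array<std::function<void()>, N> ring;
    std::atomic<size_t> head{0}, tail{0};       // monotonic counters
public:
    bool push(std::function<void()> cmd) {      // game thread only
        size_t t = tail.load(std::memory_order_relaxed);
        if (t - head.load(std::memory_order_acquire) == N) return false; // full
        ring[t % N] = std::move(cmd);
        tail.store(t + 1, std::memory_order_release);
        return true;
    }
    bool pop(std::function<void()>& cmd) {      // feeder thread only
        size_t h = head.load(std::memory_order_relaxed);
        if (h == tail.load(std::memory_order_acquire)) return false;     // empty
        cmd = std::move(ring[h % N]);
        head.store(h + 1, std::memory_order_release);
        return true;
    }
};
</ecode>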

Re:OS X Leopard and OpenGL (1)

ClamIAm (926466) | more than 7 years ago | (#16742171)

You probably meant to say this, so I'll add it for clarification: Leopard has code that splits up the OpenGL load between all usable processors. This means not only the GPU but the second CPU core as well.

Re:OS X Leopard and OpenGL (1)

abdulla (523920) | more than 7 years ago | (#16746561)

I thought they stated that games had to be written to take advantage of this - it wasn't so much an automatic boost as one that developers need to adopt to take full advantage of it.

Re:OS X Leopard and OpenGL (1)

99BottlesOfBeerInMyF (813746) | more than 7 years ago | (#16751467)

I thought they stated that games had to be written to take advantage of this - it wasn't so much an automatic boost as one that developers need to adopt to take full advantage of it.

Their literature (sparse as it is) strongly implies otherwise, but I don't have any of the NDA stuff so I could be wrong.

multi-core and GPU/CPU integration (4, Informative)

EricBoyd (532608) | more than 7 years ago | (#16739961)

I found this paragraph from the conclusion really interesting:

"Newell even talked about a trend he sees happening in the future that he calls the "Post-GPU Era." He predicts that as more and more cores appear on single chip dies, companies like Intel and AMD will add more CPU instructions that perform tasks normally handled by the GPU. This could lead to a point where coders and gamers no longer have to worry if a certain game is "CPU-bound" or "GPU-bound," only that the more cores they have available the better the game will perform. Newell says that if it does, his company is in an even better position to take advantage of it."

This is almost certainly why AMD bought out ATI - they see that the future is about integrating everything on the motherboard into one IC, and AMD wants the CPU to be that point of integration. For more, see:

Computers in 2020
http://digitalcrusader.ca/archives/2006/02/computers_in_20.html [digitalcrusader.ca], which is my prediction for how the whole field is going to evolve over the next 14 years.

Re:multi-core and GPU/CPU integration (1)

bluefoxlucid (723572) | more than 7 years ago | (#16741259)

That won't work. You can crack DES using dozens of high-end 64-bit PCs in several years; using a $300 DES-cracking card, you can do it in 4 days. The DES-cracking card has 76,000 transistors; the CPU has hundreds of millions. We're talking about dedicated hardware versus something general-purpose here; squashing more crap onto the same die generates more heat and creates longer logic paths to accomplish the same tasks, leading to slower operation. It also decreases modularity (you can't get a "high end" or "low end" graphics card depending on need, and manufacturing both combinations on one chip is expensive) and the robustness of the system (if your GPU fails, you don't buy another video card; you buy a new computer).

Re:multi-core and GPU/CPU integration (1)

StikyPad (445176) | more than 7 years ago | (#16745023)

they see that the future is about integrating everything on the motherboard into one IC, and AMD wants the CPU to be that point of integration

So everything old is new again, eh?

I don't see the GPU disappearing anytime soon. It's much easier to create an IC that is good at one specific task than a multi-use IC that is just as good at that particular task. As long as there are companies that can outshine the CPU manufacturers' performance in graphics, there will be a market for GPUs. The only exception is if/when we reach a point where the limitation on graphic quality is purely the artists' skills/time, but that's a ways off, especially since art design is relatively easy to parallelize.

I just did this -.- (1)

bluefoxlucid (723572) | more than 7 years ago | (#16740805)

This is lame; I have been making the argument that FPS != game benchmark. Everyone always says "look, dual-core FPS is the same as single-core, maybe 1 or 2 faster," and they don't even understand why it's 1 or 2 faster (because the few cycles used for AI and physics are out of the way...). It takes VALVE to come out and say it?

Yesterday I went ahead and made the same statements on kerneltrap [kerneltrap.org]. A little unscientific, but it should cover the issue nicely.

Too Bad... (2, Insightful)

ArcadeNut (85398) | more than 7 years ago | (#16742863)

Too bad as I'll never buy another Valve game because of Steam.

Re:Too Bad... (0)

Anonymous Coward | more than 7 years ago | (#16743987)

It wouldn't be half bad if it weren't a resource hog... I'd rather buy games off Steam than get them retail with StarForce (e.g. X3).

Re:Too Bad... (1)

The MAZZTer (911996) | more than 7 years ago | (#16745057)

Have it your way.

Myself, I'm off to enjoy the excellent games made by the excellent people at Valve, automatically updated by the excellent automatic patching system Steam provides. Even if you ignore the convenience of the patches, the built-in IM and server browser capabilities, and the media and game tool content, the fact that similar services are springing up here and there has to mean Valve did SOMETHING right.

You go shuffle your CDs around and Google for patches, finding 5 different ones and, in some cases, wondering which patch is the latest and the appropriate one for your PC.

Myself, I'll just take my game collection (did I mention how excellent these games are?) from PC to PC using only my Steam username and password.

But, you hang on to your CDs and patches, and your closed-minded hatred of Valve. I'm going to go have some fun in City 17, or Office, or Venice, or somewhere else fun. Ta ta.

Re:Too Bad... (1)

Bert64 (520050) | more than 7 years ago | (#16751581)

What happens when/if Steam is shut down? Valve won't run it forever, at which point you'll no longer be able to download games from it, and may no longer be able to run the ones you already have.
I prefer console games: due to the lack of ability to patch them, they have to be written properly in the first place. And since the hardware is always the same, I too can take them anywhere, needing only the appropriate console instead of an x86-compatible computer that meets a huge array of other requirements.

Re:Too Bad... (0)

Anonymous Coward | more than 7 years ago | (#16752979)

Except for the fact that consoles now have the ability to be patched, were never written properly in the first place, and don't always have the same hardware.

Re:Too Bad... (1)

AlexMax2742 (602517) | more than 7 years ago | (#16763891)

What happens when/if Steam is shut down? Valve won't run it forever, at which point you'll no longer be able to download games from it, and may no longer be able to run the ones you already have.

Steam includes an option for backing up a hard copy of your games to convenient CD- or DVD-sized files. Then, if Steam ever vanishes, unpack the backed-up .gcf files, use a no-authentication crack on it, and you're good to go.

Re:Too Bad... (0)

Anonymous Coward | more than 7 years ago | (#16769617)

use a no-authentication crack on it

Therein lies the problem I have with that scheme. Why the hell should I have to use a crack to play a game I bought?

More of the treating your customers as criminals attitude we see so often.

Re:Too Bad... (1)

AlexMax2742 (602517) | more than 7 years ago | (#16774267)

Because we don't live in a perfect world where customers aren't treated like criminals. We live in the real world, a world with Walgreens, StarForce, and 4X games that appeal to 1% of the game-playing population making a stand for no CD protection.

Half-Life 2 took the gamble on online distribution, and as it turns out, people are willing to "put up" with Steam to play their favorite AAA games. Then the authors of Rag Doll Kung Fu and Darwinia took the gamble as well, and as it turns out, people are more willing to impulse-buy indie games that are 20 bucks and that they can download and have on their computer within minutes or hours. These are the same indie developers who used to live in the margins, at the whims of publishers, before Steam; now they can market their games directly to you, without necessarily having to worry about the expense of boxed copies. I support their endeavours: I bought Rag Doll Kung Fu on principle, and then Red Orchestra and Defcon. I also plan on paying for NS2 when it comes out. And I would have bought Civilization IV on Steam if I hadn't already bought a boxed copy.

And why should you feel guilty about using a crack on a game you legally paid for anyway?

Re:Too Bad... (1)

markulu (819053) | more than 7 years ago | (#16776659)

And why should you feel guilty about using a crack on a game you legally paid for anyway?

We shouldn't, but then again, we shouldn't have to use the crack in the first place. That's my point.

Don't get me wrong, I love the idea of Steam, but the practical implementation of it leaves a lot to be desired.

Re:Too Bad... (1)

chrisxkelley (879631) | more than 7 years ago | (#16751395)

You know, I've heard all of this shit about Steam and how gamers hate it and all that, but when I finally decided to install XP on my MacBook Pro so I could play some games, I installed Steam. Using it to get games, install them, update them, run them - it's probably the easiest software I've used in a while, and it's clean and fast. I don't see what the deal is with people. Everyone bitches and moans about it because it makes it (near) impossible to steal the games.

Re:Too Bad... (1)

Reapy (688651) | more than 7 years ago | (#16753171)

I will grudgingly admit Steam is OK. I think my initial problem with it was being required to verify my CD key online in order to play.

My biggest problem is with Source, which makes me want to throw up all over the place! I never made it through HL2, and I really wanted to. :((

Re:Too Bad... (0)

Anonymous Coward | more than 7 years ago | (#16890802)

When did you last try Steam? I know a lot of people who tried it in the early days, vowed never to use it again, and are now missing out on the improvements.

Why is this news? (0)

Anonymous Coward | more than 7 years ago | (#16743547)

Why is this Valve advertisement posted on Slashdot?

They say nothing revolutionary here; it's just "we did the most obvious thing we could think of, btw you should buy engine licenses from us."

Yes, this method is an obvious way to get some benefit from a small number of extra hardware threads. But it is *not* future-proof. This approach may give good CPU utilization up to perhaps 10-20 threads, but after that it will start to take a big dive. Games are not easily parallelisable. For now, it's sufficient to split off some low-hanging fruit onto extra cores and get decent CPU utilization, but what happens in a few years? Intel claims that they will have 80-core CPUs by then, so it's probably not unreasonable to think that the next "next gen" consoles will have at least 50 hardware threads or so.

So what the hell do we do? Are our current tools (C/C++) really sufficient for programming non-easily-parallelisable applications like game engines when we have to scale to the high tens and low hundreds of threads? I don't think so. Just as using assembly exclusively to program a next-gen console engine is theoretically possible but practically impossible, using C++ to program massively multithreaded game engines will simply not be feasible. So what the hell do we do?

Personally, my bet is on languages like Haskell. Purely functional programming makes multithreading easy, and for the imperative bits it has transactional memory (no locks, no deadlocks, and finally composability even with multithreading). Haskell itself may be too slow, so we may need a non-lazy version (laziness is basically the biggest performance problem Haskell has) for it to be practical. But then again, the highly conservative estimate of 2x worse performance compared to C++ (it's usually better) still means that it'll win out with enough threads (a hypothetical C++ program which simply isn't feasible to write is slower than an actual and correct Haskell program).

Even so, we're pretty much screwed. The industry seems to be moving far too slowly on this. There is a coming crisis here, and we don't really have a solution (except higher development costs, more developers, more bugs, and less profit).

Re:Why is this news? (3, Insightful)

SnowZero (92219) | more than 7 years ago | (#16747049)

Yes, this method is an obvious way to get some benefit from a small number of extra hardware threads. But it is *not* future-proof. This approach may give good CPU utilization up to perhaps 10-20 threads, but after that it will start to take a big dive.

While they are obviously not doing anything that supercomputer programmers didn't invent 30 years ago, they are leading their industry into the future - err, present. Though the article's details are pretty weak, it's clear that they've already gone beyond module-level threading (sound, AI, graphics) to something that sounds more like work queues. If done right, those can get you to hundreds of threads, as seen on early supercomputers, although it doesn't sound like Valve is dealing with cache-sharing problems yet, which could cause trouble far sooner. I'm hoping hardware + language extensions will help mitigate that somewhat, at least on the read-sharing side.
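
For readers who haven't met them: a work queue decouples "what needs doing" from "which core does it," so the same engine code scales with the core count. A bare-bones illustrative sketch (no work stealing or cache-aware scheduling, and certainly not Valve's implementation):

<ecode>
// Bare-bones work queue: N workers pull tasks until shutdown.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class WorkQueue {
    std::queue<std::function<void()>> tasks;
    std::mutex m;
    std::condition_variable cv;
    bool stopping = false;
    std::vector<std::thread> workers;
public:
    explicit WorkQueue(unsigned n) {
        for (unsigned i = 0; i < n; ++i)
            workers.emplace_back([this] {
                for (;;) {
                    std::unique_lock<std::mutex> lk(m);
                    cv.wait(lk, [this] { return stopping || !tasks.empty(); });
                    if (stopping && tasks.empty()) return;
                    auto task = std::move(tasks.front()); tasks.pop();
                    lk.unlock();
                    task();   // run outside the lock so workers overlap
                }
            });
    }
    void push(std::function<void()> t) {
        { std::lock_guard<std::mutex> lk(m); tasks.push(std::move(t)); }
        cv.notify_one();
    }
    ~WorkQueue() {
        { std::lock_guard<std::mutex> lk(m); stopping = true; }
        cv.notify_all();
        for (auto& w : workers) w.join();
    }
};
</ecode>

Pushing a frame's independent jobs (visibility, particles, AI ticks) into such a queue is roughly what "work queues" buys you; the hard part mentioned above - cache sharing between workers - is invisible in this sketch.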

Personally, my bet is on languages like Haskell. Purely functional programming makes multithreading easy, and for the imperative bits it has transactional memory (no locks, no deadlocks, and finally composability even with multithreading). Haskell itself may be too slow, so we may need a non-lazy version (laziness is basically the biggest performance problem Haskell has) for it to be practical.

I think OCaml [inria.fr] has a lot better chance of becoming mainstream than Haskell. For one thing, I know of programs written in OCaml that aren't written by members of the PL community. For Haskell, outside of compilers and libraries, the only thing I can think of is Darcs [darcs.net], and that project is having all sorts of issues with finding and fixing performance problems (I love darcs, though!). I'm not sure a pure-lazy language will ever map well to a soft-realtime media app such as a game. Also, when you really need delayed evaluation (i.e. laziness), ML derivatives have closures and higher-order functions which let you implement it easily enough, as sketched below.
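
For what it's worth, that closure trick is easy to show even in C++ (the thread's lingua franca): a thunk is just a stored function plus a cached result. A toy sketch, illustrative only:

<ecode>
// A thunk: delayed evaluation via a closure, computed at most once.
#include <functional>
#include <iostream>
#include <optional>

template <typename T>
class Lazy {
    std::function<T()> compute;
    std::optional<T> cached;
public:
    explicit Lazy(std::function<T()> f) : compute(std::move(f)) {}
    const T& get() {
        if (!cached) cached = compute();   // evaluate on first use only
        return *cached;
    }
};

int main() {
    Lazy<int> answer([] { std::cout << "computing...\n"; return 42; });
    std::cout << answer.get() << '\n';  // prints "computing..." then 42
    std::cout << answer.get() << '\n';  // cached: just 42
}
</ecode>

Same delay/force shape the functional languages give you, just spelled out by hand.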

Re:Why is this news? (0)

Anonymous Coward | more than 7 years ago | (#16748865)

The idea of splitting the engine up into major modules (e.g. sound) and then also parallelising some low-hanging fruit here and there where it's easy is hardly worth getting excited about. It's about the most obvious way of doing it. The problem is that these easily parallelisable fruits are few and far between, and with a few more threads there won't be any left to take advantage of more power. We need to parallelise *everything*.

The problem with OCaml is that it isn't pure. I'd love a pure version of ML (and rumor has it the next version will be).

That's sort of the deal breaker.

I mean, OCaml will get you loads of benefits over C++, but the huge, major issue that will *force* developers to switch is that for a multithreaded program, you really do want pure functions. So even if OCaml (ML) is just "slightly impure," it's just as impure as C++. There are no gray scales here: a language which isn't pure can't reap the benefits of purity, even if it is a lot better than C++ in a whole host of ways.

Oh, and as far as Darcs is concerned, I don't think their performance problems have that much to do with the language. A naive implementation of their algorithms would have been slow in C too.

dual vs. single + gpu (3, Informative)

GoatVomit (885506) | more than 7 years ago | (#16743605)

If you are on a budget and want to play games, you'll probably get more bang for the buck with a single-core proc and a better GPU than with the same amount of cash spent on a dual core plus a slower GPU. I've been waiting for this to change for a while, but so far it's been more marketing than anything else. I ended up moving the dual-core proc to a Linux box and the single core to Windows after a few weeks of testing. Naturally Windows chugs more after a game and isn't as responsive, but while playing it was hard to notice any tangible difference. With all the talk about the future, the present seems forgotten.