
Nvidia's Chief Scientist on the Future of the GPU

CmdrTaco posted more than 5 years ago | from the well-isn't-that-special dept.

teh bigz writes "There's been a lot of talk about integrating the GPU into the CPU, but David Kirk believes that the two will continue to co-exist. Bit-tech got to sit down with Nvidia's Chief Scientist for an interview that discusses the changing roles of CPUs and GPUs, GPU computing (CUDA), Larrabee, and what he thinks about Intel's and AMD's futures. From the article: 'What would happen, though, if multi-core processors increase core counts further? Does David believe that this will give consumers enough power to deliver what most of them need and, as a result, erode Nvidia's consumer installed base? "No, that's ridiculous — it would be at least a thousand times too slow [for graphics]," he said. "Adding four more cores, for example, is not going anywhere near close to what is required."'"


Nah... (0, Troll)

Anonymous Coward | more than 5 years ago | (#23253088)

It won't work, since Linux will never get the drivers for it going.

Re:Nah... (1)

pato101 (851725) | more than 5 years ago | (#23253962)

It won't work, since Linux will never get the drivers for it going.
Stop trolling! NVidia's drivers for Linux do the CUDA stuff!

Re:Nah... (1)

witherstaff (713820) | more than 5 years ago | (#23255152)

On the nVidia driver bashing, I'm still waiting for a driver for FreeBSD on amd64. It's a back and forth blame game between Nvidia and FreeBSD devs and it has been going on for years.

Re:Nah... (0)

Anonymous Coward | more than 5 years ago | (#23255444)

Dude, if you are willing to come down here into my parents' basement and get it working, I'll pay you a consulting fee.

NV on the war path? (3, Interesting)

Vigile (99919) | more than 5 years ago | (#23253102)

Pretty good read; interesting that this guy is talking to press a lot more:

http://www.pcper.com/article.php?aid=530 [pcper.com]

Must be part of the "attack Intel" strategy?

VIA (2, Interesting)

StreetStealth (980200) | more than 5 years ago | (#23253326)

The more Nvidia gets sassy with Intel, the closer they seem to inch toward VIA.

This has been in the back of my mind for awhile... Could NV be looking at the integrated roadmap of ATI/AMD and thinking, long term, that perhaps they should consider more than a simple business relationship with VIA?

Re:VIA (2, Informative)

Retric (704075) | more than 5 years ago | (#23253782)

The real limitation on a CPU/GPU hybrid is memory bandwidth. A GPU is happy with 0.5 to 1 GB of FAST RAM, but a CPU running Vista works best with 4-8GB of CHEAP RAM and a large L2 cache. Think of it this way: a GPU needs to access every bit of its RAM 60+ times per second, but a CPU tends to work with a small section of a much larger pool of RAM, which is why L2 cache size/speed is so important.

Now at the low end there is little need for a GPU, but as soon as you want to start 3D gaming and working with Photoshop on the same system, you are going to want both video and normal RAM.

PS: This is also why people don't use DDR3 memory for system RAM; it's just not worth the cost for a 1-2% increase over cheap DDR2 RAM.
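
A back-of-the-envelope sketch of that bandwidth argument, treating the figures in the comment above as assumptions (roughly 1 GB of video RAM swept 60+ times per second, versus a CPU hot set that mostly lives in cache):

#include <cstdio>

int main() {
    // Assumptions taken from the parent comment, not measured numbers.
    const double gpu_mem_gb   = 1.0;   // "0.5 to 1 GB of FAST RAM"
    const double sweeps_per_s = 60.0;  // touch every bit of video RAM 60+ times per second
    const double gpu_bw_gbs   = gpu_mem_gb * sweeps_per_s;

    const double cpu_hot_set_mb = 4.0; // hypothetical working set that fits in a large L2

    printf("GPU streaming demand: ~%.0f GB/s of raw memory bandwidth\n", gpu_bw_gbs);
    printf("CPU hot working set : ~%.0f MB, served mostly from cache, not main RAM\n",
           cpu_hot_set_mb);
    return 0;
}

Sixty full sweeps of 1 GB is on the order of 60 GB/s, which is why the GPU wants its own pool of fast memory rather than sharing cheap system RAM.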

Re:VIA (2, Interesting)

JoshHeitzman (1122379) | more than 5 years ago | (#23255004)

I don't see why a hybrid couldn't have two memory controllers included right on the chip; mobos could then have slot(s) for the fast RAM nearest the CPU socket and slots for the slower RAM further away.

Re:VIA (1)

m50d (797211) | more than 5 years ago | (#23256634)

Modern systems are already having trouble fitting everything that has to go near the CPU near the CPU, and ordinary system RAM still wants latency as low as you can get it. I don't see this happening until after we start putting main memory on the same chip as the CPU.

Re:VIA (1)

smallfries (601545) | more than 5 years ago | (#23256674)

The simple answer is that you use a memory hierarchy, the same as people do now. The L2 cache on a CPU is large enough to contain the working set for most problems. The working set for GPU-type problems tends to be accessed differently: you need some sort of caching for data, but much of the memory you access will be a really large, fairly sequential stream. The memory locking in CUDA reflects this.

So going back to your comment about memory mismatch: some of your cores in a hybrid would have large L2 caches like a conventional CPU. Some of your cores would have almost no L2 cache but would share a really large pool of L3 (probably the same 1/2 GB of DDR3), and the rest would be system memory. If the large L3 pool is in use then the CPU-type cores wouldn't see any benefit from this layer, but when the GPU parts are idle it would be a large speed boost for the CPU-type parts.
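
A minimal CUDA sketch of the streaming pattern described above: page-locked ("pinned") host memory plus an asynchronous copy feeds a large, mostly sequential data set to the GPU while the CPU keeps its own cache-friendly working set. The sizes, kernel and names are illustrative only:

#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;   // one element per thread: a pure streaming access
}

int main() {
    const int n = 1 << 22;          // ~4M floats, far larger than any L2 cache
    float *host = NULL, *dev = NULL;

    cudaHostAlloc((void **)&host, n * sizeof(float), cudaHostAllocDefault); // pinned
    cudaMalloc((void **)&dev, n * sizeof(float));
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    cudaStream_t stream;
    cudaStreamCreate(&stream);
    cudaMemcpyAsync(dev, host, n * sizeof(float), cudaMemcpyHostToDevice, stream);
    scale<<<(n + 255) / 256, 256, 0, stream>>>(dev, 2.0f, n);
    cudaMemcpyAsync(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);

    printf("host[0] = %f\n", host[0]);  // expect 2.0
    cudaFree(dev);
    cudaFreeHost(host);
    cudaStreamDestroy(stream);
    return 0;
}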

Re:NV on the war path? (1)

pato101 (851725) | more than 5 years ago | (#23253950)

interesting that this guy is talking to press a lot more:
I was at a conference he gave a couple of weeks ago in Barcelona, Spain. He is a nice speaker. He seems to be on a round of conferences at universities around the world showing off the CUDA technology. By the way, CUDA seems to be an interesting thing.

Very surprising (1)

athdemo (1153305) | more than 5 years ago | (#23253114)

I would never have expected nVidia's chief scientist to say that nVidia's products would not soon be obsolete.

Re:Very surprising (2, Interesting)

AKAImBatman (238306) | more than 5 years ago | (#23253492)

I would never have expected nVidia's chief scientist to say that nVidia's products would not soon be obsolete.

Moving to a combined CPU/GPU wouldn't obsolete NVidia's product-line. Quite the opposite, in fact. NVidia would get to become something called a Fabless semiconductor company [wikipedia.org]. Basically, companies like Intel could license the GPU designs from NVidia and integrate them into their own CPU dies. This means that Intel would handle the manufacturing and NVidia would see larger profit margins. NVidia (IIRC) already does this with their 3D chips targeted at ARM chips and cell phones.

The problem is that the GPU chipset looks nothing like the CPU chipset. The GPU is designed for massive parallelism, while CPUs have traditionally been designed for single-threaded operation. While CPUs are definitely moving in the multithreaded direction and GPUs are moving in the general-purpose direction, it's still too early to combine them. Attempting to do so would get you the worst of both worlds rather than the best. (i.e. Shared Memory Architecture [wikipedia.org] )

So I don't think NVidia's chief scientist is off on this. (If he were, we'd probably already see GPU integration in the current generation of game consoles, all of which use custom chips.) The time will come, but it's not here yet. :-)
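
A minimal sketch of the contrast being described, assuming nothing beyond standard CUDA: the same vector update written once as a single-threaded CPU loop and once as a kernel where one lightweight GPU thread handles each element. Names and sizes are illustrative:

#include <cuda_runtime.h>
#include <cstdio>

// CPU style: one fat core steps through the data sequentially.
void saxpy_cpu(int n, float a, const float *x, float *y) {
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// GPU style: thousands of threads, each doing one tiny piece of the work.
__global__ void saxpy_gpu(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged((void **)&x, n * sizeof(float));  // unified memory keeps the demo short
    cudaMallocManaged((void **)&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy_cpu(n, 2.0f, x, y);                            // sequential pass: y becomes 4.0
    saxpy_gpu<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // massively parallel pass: y becomes 6.0
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}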

And as we all knew (2, Insightful)

aliquis (678370) | more than 5 years ago | (#23253572)

Only Amiga made it possible! (Thanks to custom chips, not in spite of them.)

It doesn't seem likely that one generic item would be better at something than many specific ones. Sure, CPU+GPU would just be all in one chip, but why would that be better than many chips? Maybe if it had RAM inside as well and that enabled a faster FSB.

Re:And as we all knew (1)

AKAImBatman (238306) | more than 5 years ago | (#23254026)

It doesn't seem likely that one generic item would be better at something than many specific ones.

Combined items rarely are. However, they do provide a great deal of convenience as well as cost savings. If the difference between dedicated items and combined items is negligible, then the combined item is a better deal. The problem is, you can't shortcut the economic process by which two items become similar enough to combine.

e.g.
Combining VCR and Cassette Tape Player: Not very effective
Combining DVD Player and CD Player: Very effective

CPUs and GPUs are moving in the right direction to eventually merge (in much the same way as FPUs and SSE units merged with CPUs), but they simply aren't there yet. :-)

Re:Very surprising (1)

Hatta (162192) | more than 5 years ago | (#23256772)

I would. After all, why buy nVidia's next product if their current product isn't obsolete?

CPU based GPU will not work as good as long as the (1)

Joe The Dragon (967727) | more than 5 years ago | (#23253128)

A CPU-based GPU will not work as well as long as it has to use the main system RAM; heat will also limit its power. Nvidia should start working on an HTX video card, so the video card sits on the CPU bus but is still on a card, letting you put RAM and big heat sinks on it.

Re:CPU based GPU will not work as good as long as (1, Interesting)

Anonymous Coward | more than 5 years ago | (#23253260)

Right, come back in 5 years when we have multi-core processors with integrated SPE-style cores, a GPU, and multiple memory controllers.

NVidia are putting a brave face on it but they're not fooling anybody.

Re:CPU based GPU will not work as good as long as (2, Insightful)

arbiter1 (1204146) | more than 5 years ago | (#23253304)

Truthfully, the only real application for a GPU/CPU hybrid would be in laptops, where they can get away with using lower-end GPU chips.

Re:CPU based GPU will not work as good as long as (1)

maxume (22995) | more than 5 years ago | (#23253432)

How many more pixels do you think you need? I'm glad they are looking ahead to the point when graphics is sitting on chip.

(current high end boards will push an awful lot of pixels. Intel is a generation or two away from single chip solutions that will push an awful lot of pixels. Shiny only needs to progress to the point where it is better than our eyes, and it isn't a factor of 100 away, it is closer to a factor of 20 away, less on smaller screens)

Re:CPU based GPU will not work as good as long as (1)

billcopc (196330) | more than 5 years ago | (#23256406)

As many pixels as they can possibly throw at me, that's how many.

There are people who are perfectly happy with resolutions like 1024x768, good for them! Me, I was running that rez in the 486 days, and gaming it in the late 90's with Voodoo2 and the first GeForce.

The fact that GPUs have scaled faster and larger than CPUs is proof to me that GPGPU is a good idea. I have a beefy PC, and the bulk of what I do involves image processing. If the GPU can do it 10 times faster for less money, that's an epic win and I say bring it on!

Re:CPU based GPU will not work as good as long as (1)

maxume (22995) | more than 5 years ago | (#23257066)

I'm not saying that they should stick everything on one chip anytime soon, just that there is actually a limit somewhere in the medium-term future where you start spending improvements somewhere other than raw performance. For casual users, that's really soon (because dpi isn't that important 3 or 4 feet away from your face; most people's eyes don't have the resolution for it to matter).

Re:CPU based GPU will not work as good as long as (1)

Culture20 (968837) | more than 5 years ago | (#23253612)

If you have eight or X cores, couldn't one or two (or X-1) be dedicated to running Mesa (or a newer, better software GL implementation)? IIRC, SGI's Linux/NT workstation 350s had their graphics tied into system RAM (to which you could dedicate huge amounts), and they worked fine.

Re:CPU based GPU will not work as good as long as (0)

Anonymous Coward | more than 5 years ago | (#23255438)

Mesa is slow, and main RAM access is slow. More general-purpose cores aren't the solution. That said, you could build one or more GPU cores onto the same silicon with access to on-chip graphics RAM (which could double as extended cache, or RAM specifically for double/triple buffering).

Ugh. (2, Insightful)

Anonymous Coward | more than 5 years ago | (#23253130)

From TFA> The ability to do one thing really quickly doesn't help you that much when you have a lot of things, but the ability to do a lot of things doesn't help that much when you have just one thing to do. However, if you modify the CPU so that it's doing multiple things, then when you're only doing one thing it's not going to be any faster.

David Kirk takes 2 minutes to get ready for work every morning because he can shit, shower and shave at the same time.

FOR NOW (2, Interesting)

Relic of the Future (118669) | more than 5 years ago | (#23253146)

There wasn't a horizon given on his predictions. What he said about the important numbers being "1" and "12,000" means consumer CPUs have about, what, 9 to 12 years to go before we get there? At which point it'd be foolish /not/ to have the GPU be part of the CPU. Personally, I think it'll be a bit sooner than that. Not next year, or the year after; but soon.

Re:FOR NOW (1)

Bastard of Subhumani (827601) | more than 5 years ago | (#23253274)

Personally, I think it'll be a bit sooner than that. Not next year, or the year after; but soon.
You mean it'll coincide with the release of Duke Nukem Forever?

Re:FOR NOW (1)

Keith Russell (4440) | more than 5 years ago | (#23253568)

You mean it'll coincide with the release of Duke Nukem Forever?

Nope. Duke Nukem Forever will be delayed so the engine can maximize the potential of the new combined GPU/CPU tech.

Re:FOR NOW (2, Insightful)

Dolda2000 (759023) | more than 5 years ago | (#23256142)

Why would one even want to have a GPU on the same die as the CPU? Maybe I'm just being dense here, but I don't see the advantage.

On the other hand, I certainly do see possible disadvantages with it. For one thing, they would reasonably be sharing one bus interface in that case, which could lead to less parallelism in the system.

I absolutely love your sig, though. :)

Re:FOR NOW (2, Interesting)

renoX (11677) | more than 5 years ago | (#23256554)

>Why would one even want to have a GPU on the same die as the CPU?

Think about low-end computers: IMHO, putting the GPU on the same die as the CPU will provide better performance/cost than embedding it in the motherboard.

And a huge number of computers have integrated video, so this is an important market too.

Re:FOR NOW (1)

Dolda2000 (759023) | more than 5 years ago | (#23256614)

Think about low-end computers: IMHO, putting the GPU on the same die as the CPU will provide better performance/cost than embedding it in the motherboard.
Oh? I thought I always heard about this CPU/GPU combo chip in the context of high-performance graphics, but I may just have mistaken the intent, then. If it's about economics, I can understand it. Thanks for the explanation!

Re:FOR NOW (1)

bluefoxlucid (723572) | more than 5 years ago | (#23256982)

The GPU doesn't care about CPU cache, the CPU doesn't care about VRAM. You'll create a heat problem and need an extended memory bus to access video memory. Graphics without dedicated VRAM causes a huge CPU performance hit due to rapid and repeated north bridge/memory bus access.

Re:FOR NOW (1)

smallfries (601545) | more than 5 years ago | (#23256600)

It can do vast amounts of linear algebra really quickly. That makes it useful for a lot of applications if you decrease the latency between the processor and the vector pipelines.

Sharing one bus would hamper bandwidth per core (or parallelism, as you've phrased it) - but look at the memory interface designs in mini-computers/mainframes over the past ten years for some guesses on how that will end up. Probably splitting the single bus into many point-to-point links, or at least that is where AMD's money was.

Consider the source (1)

dj245 (732906) | more than 5 years ago | (#23253172)

Graphics card man says that CPUs are not a threat to his business. I'm shocked!

Re:Consider the source (1)

Ngarrang (1023425) | more than 5 years ago | (#23253348)

This just in, Slashdotters think slashdot is the best web site!

Re:Consider the source (0)

Anonymous Coward | more than 5 years ago | (#23253792)

This just in, Slashdotters think slashdot is the best web site!
You must be new here.

On the high-end... (1)

Kjella (173770) | more than 5 years ago | (#23253204)

...there are discrete chips, but on the low end there are already integrated chipsets, and I think the future is heading towards systems on a chip. A basic desktop with hardware HD decoding and enough 3D to run Aero (but not games) can be made in one package by Intel.

Re:On the high-end... (1)

Telvin_3d (855514) | more than 5 years ago | (#23253344)

Aero takes more graphics support than some games. Even some new games if you look at some smaller niche titles.

Re:On the high-end... (0)

Anonymous Coward | more than 5 years ago | (#23256104)

And that is why it is failing

Re:On the high-end... (1)

billcopc (196330) | more than 5 years ago | (#23256518)

Minesweeper is a game. It's about as hard on a GPU as most "niche" titles you speak of, because the great majority of low-budget titles are built by glorified VB coders.

I'm not saying a game needs to pound the crap out of my machine to be considered entertaining. What I'm saying is the game industry is full of junk in all segments. Great ideas with crap developers, and crap ideas with great devs. Once in a blue moon, both kinds of geniuses meet up and produce gaming nirvana, the other 99% isn't even worth a screenshot.

Re:On the high-end... (1)

bluefoxlucid (723572) | more than 5 years ago | (#23257002)

John Carmack usually goes, "Your ideas are stupid. We're going to make something cool now unless you fire all of us."

And later.... (1)

Chyeld (713439) | more than 5 years ago | (#23253220)

"No, that's ridiculous -- it would be at least a thousand times too slow [for graphics]," he said. "Adding four more cores, for example, is not going anywhere near close to what is required."

He then quipped, "Go away kid, ya bother me!" [dontquoteme.com]

More interested in open drivers (1)

pembo13 (770295) | more than 5 years ago | (#23253250)

So I am going with whichever manufacturer has the best drivers for my platform of choice, Linux. If the future doesn't hold this for Nvidia, it doesn't really interest me.

Re:More interested in open drivers (1)

Telvin_3d (855514) | more than 5 years ago | (#23253402)

And if your platform of choice doesn't hold much future/value for Nvidia, you will continue to not really interest them.

The only people who run Linux without access to a Windows/OSX box tend to be the ones who are only willing to run/support Open Source/Free software. This is also the group least likely to buy commercial games, even if they were released for Linux.

No games -> No market share for high end graphics cards with big margin -> The graphics cards companies don't care

Re:More interested in open drivers (1)

pembo13 (770295) | more than 5 years ago | (#23253500)

Does Nvidia make commercial games? I thought they made hardware. I can't (yet) download hardware for free.

Re:More interested in open drivers (1)

IKnwThePiecesFt (693955) | more than 5 years ago | (#23255556)

He never implied they did. He was saying though that NVidia doesn't care about everyday productivity users, they care about gamers since gamers are the ones spending $500 for the top video cards. Since games are typically Windows exclusive (aside from less-than-perfect emulation) gamers tend to be Windows users. Thus, Linux is not their market and they don't care.

Re:More interested in open drivers (0)

Anonymous Coward | more than 5 years ago | (#23253600)

>And if your platform of choice doesn't hold much future/value for Nvidia, you will continue to not really interest them.

Yet ATI is very interested...

>The only people who run Linux without access to a Windows/OSX box tend to be the ones who are only willing to run/support Open Source/Free software. This is also the group least likely to buy commercial games, even if they were released for Linux.

That would explain why we buy the Windows games and then run them in WINE.

You can't broadly generalize FOSS users. Not all of us are Stallman clones.

As an earlier story today pointed out, if nVidia wants to keep the interest of OEMs like Dell, Lenovo, Asus, HP etc., they'll provide redistributable Linux drivers, or else the OEMs will give preference to ATI, which does provide them.

As of now, the only place to legitimately get the Linux drivers for nVidia chips is nVidia. If you want to bundle them in your distro, you taint your license and incur legal risk.

Say "There aren't enough linux users to interest nVidia and game companies". Don't say the FOSS users that exist won't buy games, because it's completely wrong.

I buy an average of 6 games a year. Sometimes more.

-AC

Re:More interested in open drivers (0)

Anonymous Coward | more than 5 years ago | (#23253610)

To be fair to NVidia, they do have reasonable binary drivers. But yeah, I toy with 3D very little these days and buy Intel purely because of the driver situation.

Give me either company's products over what AMD/ATI are churning out, though... pfft!

Why not.... (0)

Anonymous Coward | more than 5 years ago | (#23253254)

Instead of 4 CPU cores on a quad-core chip, why not put 2xCPU cores and 2xGPU cores?

Re:Why not.... (1)

sexconker (1179573) | more than 5 years ago | (#23253756)

Because the design of a "CPU" core is vastly different than that of a "GPU" core.

The whole "OMG let's integrate everything!" routine is old. It is quickly followed by the realization (due to programmers getting frustrated with stupid quirks/implementation requirements, and hitting the always annoying performance wall) that things work better when they're designed for a specific purpose, and then we work to separate them out again, creating new buses and slots and form factors and power connectors.

A while later, CPU people begin to envy the raw performance of the dedicated hardware, while the GPU (and other dedicated hardware) people see the untapped potential of a CPU sitting mostly idle, with a giant instruction set.

They both then work on gearing their hardware to the other side of things (designing memory access and pipelines and extra instruction sets and registers to allow for more specialized tasks, for the CPU folks, and making everything programmable and more generic for the dedicated hardware folks).

We eventually get to the point where people realize "Hey, they're both just processors doing some basic logic and math. Why do we need multiple things again?" and "Hey, if we got rid of this stupid bus, and put everything on one chip, we could save on latency, heat, power, and cost!"

Then they merge everything, and the cycle repeats.

Re:Why not.... (2, Insightful)

nick_davison (217681) | more than 5 years ago | (#23253918)

Instead of 4 CPU cores on a quad-core chip, why not put 2xCPU cores and 2xGPU cores?
Because now they have to make [number of CPU options] x [number of GPU options] variants rather than [number of CPU options] + [number of GPU options].

Even taking a small subset of the market:
8600GT, 8800GT, 8800GTS, 6600, 6700, 6800

Six products sit on shelves. Users buy what they want. As a competitor to, say, the 8600GT comes out, Best Buy has to discount one product line.

To give users the same choices as an integrated solution, that'd be 9 variants:

8600GT/6600 - Budget
8600GT/6700 - Typical desktop user
8600GT/6800 - Photoshop user/media encoder
8800GT/6600 - Poor gamer
8800GT/6700 - Mid range gamer
8800GT/6800 - Serious desktop user who likes to game
8800GTS/6600 - Exclusive but somewhat poor gamer
8800GTS/6700 - Gaming enthusiast
8800GTS/6800 - Hardcore power gamer/3D Modeller

Most users are now left scratching their heads as to whether the similarly priced 8600GT/6800 or the 8800GTS/6600 is better or worse for them than the also similarly priced 8800GT/6700.

Plus, every time one part of the market is perceived as less valuable, the stores have to reprice many different SKUs.

Now add in the gamer who bought a $200 GPU and a $300 CPU a little while before a great new mid-range GPU option turns up. They can toss their $200 investment, which sucks, but that's probably it when it comes to upgrading. Or the guy who bought the $450 (we'll grant a small discount for single purchases) combined unit now has to toss both. Plus he most likely has to buy a new motherboard and memory, because memory speed requirements and processor sockets change faster than Britney Spears' moods can swing.

Why wouldn't you have a gpu core in a multiple ... (3, Interesting)

Cedric Tsui (890887) | more than 5 years ago | (#23253298)

... core processor? I don't understand the author's logic. Now, suppose it's 2012 or so and multiple core processors have gotten past their initial growing pains and computers are finally able to use any number of cores each to their maximum potential at the same time.

A logical improvement at this point would be to start specializing cores to specific types of jobs. As the processor assigns jobs to particular cores, it would preferentially assign tasks to the cores best suited for that type of processing.

Re:Why wouldn't you have a gpu core in a multiple (1)

Narpak (961733) | more than 5 years ago | (#23253384)

Because if processing power goes up way past what you generally need for even heavy apps, Nvidia still want you to believe that you need a separate graphics card. If that model were to change at some point it would be death for graphics card manufacturers. Of course, they could very well be right. What the hell do I know :P

Re:Why wouldn't you have a gpu core in a multiple (1, Interesting)

Anonymous Coward | more than 5 years ago | (#23253562)

I don't think you understand the difference between GPUs and CPUs. The number of parallel processes that a modern GPU can run is massively more than what a modern multi-core CPU can handle. What you're talking about sounds like just mashing a CPU core and GPU core together on the same die. Which would be horrible for all kinds of reasons (heat, bus bottlenecks and yields!).

Intel has already figured out that the vast majority of home users have finally caught on that they don't NEED more processing power. Intel knows they have to find some other way to keep people buying more in the future. How many home users need more than a C2D E4500? Will MS Word, a web browser and an email client change so much in the next 3-5 years that they will demand more horsepower than is available today?

Then again, you might need 32 CPU cores on a single die if you want to run that AT&T browser ;)

Re:Why wouldn't you have a gpu core in a multiple (1)

Cedric Tsui (890887) | more than 5 years ago | (#23254730)

Hmmm. That's interesting.
You're right. Perhaps the CPU and the GPU are too different to play nicely on the same die.

A little simpler, then. If CPU processing power does continue to increase exponentially (regardless of need), then one clever way to speed up a processor may be to introduce specialized processing cores. The differences might be small at first. Maybe some cores could be optimized for 64-bit applications while others stay backwards compatible with 32-bit. (No, I have no idea what sort of logistical nightmare this would be.)

Re:Why wouldn't you have a gpu core in a multiple (1)

blahplusplus (757119) | more than 5 years ago | (#23255892)

"Intel has already figured out that for the vast majority of home users have finally caught on that they don't NEED more processing power."

I think the real big issue is that there are no killer apps yet (apps so convenient to ones life that they require more processing power).

I think there are a lot of killer apps out there simply waiting for processing power to make its move, the next big move IMHO is in AUTOMATING the OS, automating programming, and the creation of AI's that do what people can't.

I've been experimenting with automatic content generation for games and whatnot, and over time these same principles will spread into other areas, I doubt I'll see it in my lifetime but smart-systems are coming.

Re:Why wouldn't you have a gpu core in a multiple (1)

holophrastic (221104) | more than 5 years ago | (#23253574)

Well, yeah, for sure. But I see that as only the first step. It's like the math-coprocessor step. My 32-core CPU has six graphics cores, four math cores, two HD video cores, an audio core, 3 physics, ten AI, and 6 general cores. But even that only lasts long enough to reach the point where mass-production benefits exceed the specialized-production benefits.

It'll also be the case that development will start to adjust back towards the CPU. Keep in mind, I don't think even one game exists now that is actually built to be dependent on even two cores. We're still dropping video frames as a preference. I await the day when other things get dropped. Imagine where AI gets dumber on a slower machine. Or sound FX are reduced. Or any number of other code paths are eliminated. Hey, no one's even reducing the video quality for 10 of 20 frames. All things that become possible with poly-core machines. Obviously raytracing takes that concept even further.

The trickle-down of core programming for many cores -- heh -- is the leaps-and-bounds concept that moves industries.

Either way, I'm back to my tried-and-true statement that what brought about the computer world in the first place was the concept of shared resources -- which includes the CPU. The same thing will happen again, because the alternative is ridiculous. Do you want to play a game on your GPU for graphics, your Xonar for sound, your PhysX for physics, your AIntelligence for AI (I made that up), and have your CPU do nothing but handle keyboard input? That's just silly. And it reminds me of the days when music was produced differently than sound FX. A computer is not a whole bunch of individual components in one box. It's about the box that can do anything. And when it can't, it gets a little expansion card. That expansion card usually handles some external component, or does something particularly unusual.

A GPU doesn't handle anything external, and certainly not something unusual. Every machine, always, at every moment, produces high-quality display elements. It's silly to make that a separate component.

Also, look at the prices. A decent CPU is $100 - $300, and a decent GPU is $150 - $400. There's a lot of money there when combined into a $250 - $700 device. It'll also be great to spend more on my CPU than on my hard drives. What a concept.

Re:Why wouldn't you have a gpu core in a multiple (1)

gbjbaanb (229885) | more than 5 years ago | (#23254064)

Supreme Commander is a game that requires 2 cores (well, OK, you can of course drop the frame rate, polygon levels and other fidelity settings; nobody would ever release a game that couldn't be played on a single-core machine -- not yet, at least).

I think, considering the diminishing returns from adding cores, that adding specialised units on die would make sense. Look at how good the GPU version of Folding@home is, and think how that kind of specialised processing could be farmed off to a specialised core. Not necessarily for graphics, as I think Nvidia will continue to sell better and better graphics cards.

If the die had the co-processor on it, and CPU extensions to support it, then compiler writers would use it and some processing tasks could fly along.

And the reason they wouldn't put these things into the existing CPU cores is probably complexity. A dedicated core must be easier to design and develop than bloating existing ones with added features and extensions.

Re:Why wouldn't you have a gpu core in a multiple (1)

iabervon (1971) | more than 5 years ago | (#23253624)

I think the interviewer wasn't asking the right questions. His answer was for why you can't replace a GPU with an N-core CPU, not why you wouldn't put a GPU on the same die with your CPUs. I think his answers in general imply that it's more likely that people will want GPU cores that aren't attached to graphics output at all in the future, in addition to the usual hardware that connects to a monitor. I wouldn't be surprised if it became common to have a processor chip with 4 CPU cores and 2 GPU cores, and also have a graphics card with another GPU or 2 in addition to video output.

He is right that having a 16-core CPU won't do a number of common tasks efficiently, compared to a single massively-SIMD core.

Re:Why wouldn't you have a gpu core in a multiple (1)

drinkypoo (153816) | more than 5 years ago | (#23254280)

I think it's fairly clear that GPUs will stick around until we either have so much processing power and bandwidth we can't figure out what to do with it all, at which point it makes more sense to use the CPU(s) for everything, or until we have three-dimensional reconfigurable logic (optical?) that we can make into big grids of whatever we want. A computer that was just one big giant FPGA with some voltage converters and switches on it would make whatever kind of cores (and buses!) it needed on demand. Since we're not Buck Rogers and this ain't the 24th century, GPUs will probably be here for a while.

The real question becomes how all these cores will be connected to one another. The processes are getting finer all the time but clock rates are rising only gradually as are word lengths, and it seems highly likely that basically all computers will go multicore before we experience another quantum leap in performance that makes uniprocessor systems powerful again. So then, why not have two CPU cores and a GPU core on the same die?

I would assume that he's guessing that the integrated systems will continue to be mid-range at best, and that those systems will continue to have only one core. I disagree on both counts, especially since mid-range systems with crappy onboard graphics are here around $500-600 today.

Re:Why wouldn't you have a gpu core in a multiple (1)

bhima (46039) | more than 5 years ago | (#23253674)

I was under the impression that optimal bus design used to be different, but that this was sort of going away with the move to multi-core designs.

Re:Why wouldn't you have a gpu core in a multiple (1)

svnt (697929) | more than 5 years ago | (#23253800)

The five year window might not be in the cards, but I've got two words for you: ray tracing.

Pretty much the only way to continue Moore's Law that I can see is via additional cores. If you had 128 cores, you would no longer care about polygons. Polygons = approximations for ray tracing. Nvidia = polygons.

Re:Why wouldn't you have a gpu core in a multiple (2, Insightful)

Anonymous Coward | more than 5 years ago | (#23254228)

I think a better question is "Why wouldn't we have a separate multi-core GPU along with the multi-core CPU?" While I agree that nVidia is obviously going to protect its own best interests, I don't see the GPU/CPU separation going away completely. Obviously there will be combination-core boards in the future for lower-end graphics, but the demand on GPU cycles is only going to increase as desktops/games/apps get better. However, one of the huge reasons that video cards are a productive industry is that there are plenty of high-end graphical demands out there, from hardcore gamers to AutoCAD applications. Ever seen the number of cycles/graphical processing power it takes to run a digital 911 map? Unbelievable!

Seriously, if there is anything that history has taught us, it's that there's room for integrated (low-end) and dedicated (high-end) graphics at the same time, as they serve different niches.

Oh, and never get involved in a land war in Asia ;-)

Qualified... (1)

fitten (521191) | more than 5 years ago | (#23253368)

From TFA:

"Sure," acknowledged David. "I think that if you look at any kind of computational problem that has a lot of parallelism and a lot of data, the GPU is an architecture that is better suited than that. It's possible that you could make the CPUs more GPU-like, but then you run the risk of them being less good at what they're good at now - that's one of the challenges ahead [for the CPU guys].


Yeah... so all you have to do is turn every problem into one that GPUs are good at... lots of parallelism and lots of data... but not all problems are like that (heck, the majority of problems aren't like that). GPU stream processors do fairly simple jobs compared to what a (general purpose) CPU does *and* what they do is extremely parallel (embarrassingly parallel). All that OOOE, branch prediction, memory management, and all those other features take silicon to make fast. That's the reason general purpose CPUs have few cores per die.

Stream processors are very simple in comparison and don't require nearly as much silicon to implement, which is why we have over 100 of them on some chips. When you add the complexity that the general purpose CPU has to deal with to the GPU processors, you will eventually be in the same boat.

Or perhaps NVIDIA has been showing their graphics cards running a database engine? Or even an OS as we are used to using it (memory protection, etc.)? What about compiling source code?

The future is asymmetric cores on a single die. The DSPs and Cell are early forms of this but still too hard to deal with. OS kernels and compilers have to become smarter: the OS knows which cores can do what, the compiler can tell what kinds of things a program expects to do and puts that into the executable, and the OS matches the executable with the cores that best satisfy what the program needs (closest minimal match), perhaps even dynamically as different sections of a program are 'marked' by the compiler to let the OS know when to schedule the process on a different type of core.

Today, they are explicitly programmed... the 'main' CPU makes library calls, basically, that use the other cores to do stuff, more like coprocessors. All this stuff will eventually need to be done automatically.
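
A small sketch of that explicitly-programmed, coprocessor-style model: the host CPU hands one piece of work to the GPU through a vendor library call (cuBLAS here) instead of the OS or compiler deciding where the work should run. Everything here is illustrative, not a claim about how such scheduling will eventually be automated:

#include <cuda_runtime.h>
#include <cublas_v2.h>
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 20;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    float *dx = NULL, *dy = NULL;
    cudaMalloc((void **)&dx, n * sizeof(float));
    cudaMalloc((void **)&dy, n * sizeof(float));
    cudaMemcpy(dx, hx.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);

    // The "library call" step: the CPU explicitly farms this dot product out
    // to the GPU, exactly the coprocessor pattern described above.
    float result = 0.0f;
    cublasSdot(handle, n, dx, 1, dy, 1, &result);
    printf("dot product = %f (expected %f)\n", result, 2.0f * n);

    cublasDestroy(handle);
    cudaFree(dx);
    cudaFree(dy);
    return 0;
}

(Built with nvcc and linked against cuBLAS, e.g. -lcublas.)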

Every time I walk out to my car I see raytracing. (2, Interesting)

argent (18001) | more than 5 years ago | (#23253382)

There's the sun reflecting off the cars, there's the cars reflecting off each other, there's me reflecting off the cars. There's the whole parking lot reflecting off the building. Inside, there's this long covered walkway, and the reflections of the cars on one side and the trees on the other and the multiple internal reflections between the two banks of windows is part of what makes reality look real. AND it also tells me that there's someone running down the hall just around the corner inside the building, so I can move out of the way before I see them directly.

You can't do that without raytracing, you just can't, and if you don't do it it looks fake. You get "shiny effect" windows with scenery painted on them, and that tells you "that's a window" but it doesn't make it look like one. It's like putting stick figures in and saying that's how you model humans.

And if Professor Slusallek could do that in realtime with a hardwired raytracer... in 2005, I don't see how nVidia's going to do it with even 100,000 GPU cores in a cost-effective fashion. Raytracing is something that hardware does very well, and that's highly parallelizable, but both Intel and nVidia are attacking it in far too brute-force a fashion using the wrong kinds of tools.
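
A toy, CPU-side sketch of the reflection step being described: intersect a ray with a sphere and bounce it with r = d - 2(d.n)n, the operation a recursive tracer repeats to pick up reflections of reflections. Purely illustrative; it is not anyone's production ray tracer:

#include <cmath>
#include <cstdio>

struct Vec { float x, y, z; };
static Vec sub(Vec a, Vec b)     { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec scale(Vec a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec a, Vec b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns the nearest positive hit distance along a normalized ray, or -1 on a miss.
static float hitSphere(Vec orig, Vec dir, Vec center, float radius) {
    Vec oc = sub(orig, center);
    float b = dot(oc, dir);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - c;
    if (disc < 0.0f) return -1.0f;
    float t = -b - std::sqrt(disc);
    return (t > 0.0f) ? t : -1.0f;
}

int main() {
    Vec orig = {0, 0, 0}, dir = {0, 0, 1};         // a camera ray looking straight ahead
    Vec center = {0, 0, 5}; float radius = 1.0f;   // a stand-in for a shiny car body
    float t = hitSphere(orig, dir, center, radius);
    if (t > 0.0f) {
        Vec p = scale(dir, t);                           // hit point (camera sits at the origin)
        Vec n = scale(sub(p, center), 1.0f / radius);    // surface normal
        Vec r = sub(dir, scale(n, 2.0f * dot(dir, n)));  // mirror-bounce direction
        printf("hit at t=%.2f, reflected dir = (%.2f, %.2f, %.2f)\n", t, r.x, r.y, r.z);
        // A real tracer would now recurse along r to pick up what the surface reflects.
    }
    return 0;
}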

overestimating the cost of ray tracing (2, Informative)

j1m+5n0w (749199) | more than 5 years ago | (#23254362)

During the Analyst's Day, Jen-Hsun showed a rendering of an Audi R8 that used a hybrid rasterisation and ray tracing renderer. Jen-Hsun said that it ran at 15 frames per second, which isn't all that far away from being real-time. So I asked David when we're likely to see ray tracing appearing in 3D graphics engines where it can actually be real-time?

"15 frames per second was with our professional cards I think. That would have been with 16 GPUs and at least that many multi-core CPUs â" that's what that is. Just vaguely extrapolating that into our progress, it'll be some number of years before you'll see that in real-time," explained Kirk. "If you take a 2x generational increase in performance, you're looking at least four or five years for the GPU part to have enough power to render that scene in real-time.

Modern real-time ray tracers can get respectable performance without doing any sort of GPU-hybrid trickery, or requiring any hardware other than a fast CPU. For instance, try out the Arauna [igad.nhtv.nl] demo. (Dedicated ray-tracing hardware would be nice, but I'm not aware of any hardware implementation that has significantly outperformed a well-optimized CPU ray tracer. With the resources of a major chip manufacturer I don't doubt it could be done, though.) Arauna and OpenRT and the like might still be a little too slow to run a modern game at high resolution, but they're getting there fast.

"People use ray tracing for real effects as well though. Things like shiny chains and for ambient occlusion (global illumination), which is an offline rendering process that is many thousands of times too slow for real-time," said Kirk. "Using ray tracing to calculate the light going from every surface to every other surface is a process that takes hundreds of hours."

This is just plain ignorant. Naive, O(n^2) radiosity may take that long, or path tracing with a lot of samples per pixel, but a decent photon mapping algorithm shouldn't be anywhere near that slow to produce a rendering quality acceptable for games. "Hundreds of seconds" might be a more plausible number. (Or less, if you're willing to accept a less accurate approximation.) Metropolis Light Transport is another algorithm, but I don't have a good notion of how fast it is.

Re:overestimating the cost of ray tracing (1)

argent (18001) | more than 5 years ago | (#23255064)

Not to mention that Philipp Slusallek was getting 15 FPS in 2005, with an FPGA that had maybe 1% of the gates of a modern GPU and ran at 1/8th the clock rate. It might not have been beating the best conventional raytracers in 2005, but it was doing it with a chip that had the clock rate and gate count of a processor from 1995.

Re:Every time I walk out to my car I see raytracin (1)

synth7 (311220) | more than 5 years ago | (#23254486)

Every time I walk out to my car I see raytracing.

Actually, you don't. Raytracing is a mathematical model that attempts to simulate light behavior in reality. And, as is true for most simulations, it is a gross simplification of reality. The mathematical model used for approaching realism is irrelevant, just so long as the result is closer to the perceived goal.

And, of course, we are assuming that modeling visual reality is the perceived goal, which it is not in many cases.

Re:Every time I walk out to my car I see raytracin (1)

argent (18001) | more than 5 years ago | (#23254936)

Actually, you don't.

What, you're one of these heretics who doesn't realize that we're in an elaborate computer simulation?

Future is set (3, Insightful)

Archangel Michael (180766) | more than 5 years ago | (#23253390)

The pattern set by the whole CPU / Math Co-Processor integration showed the way. For those old enough to remember, once upon a time the CPU and Math Co-Processor were separate socketed chips. Specifically you had to add the chip to the MOBO to get math functions integrated.

The argument back then was eerily similar to the one proposed by NV's chief, namely that the average user wouldn't "need" a math co-processor. Then along came the spreadsheet, and suddenly that point was moot.

Fast forward to today: if we had a dedicated GPU integrated with the CPU, it would eventually simplify things so that the next "killer app" could make use of a commonly available GPU.

Sorry, NV, but AMD and INTEL will be integrating the GPU into the chip, bypassing bus issues and streamlining the timing. I suspect that VIDEO processing will be the next "Killer App". YouTube is just a precursor of what is to come shortly.

Re:Future is set (1)

TopSpin (753) | more than 5 years ago | (#23253840)

CPUs, GPUs... in the end they're all ICs [wikipedia.org]. Bets against integration inevitably lose. The history of computation is marked by integration.

NVidia already makes good GPUs and tolerable chipsets. They should expand to make CPUs and build their own integrated platform. AMD has already proven there is room in the market for entirely non-Intel platforms.

It's that or wait till the competition puts out cheap, low power integrated equivalents that annihilate NVidia's market share. I think they have the credibility and could leverage the necessary capital. The question is whether NVidia has the vision to act. Probably not; they've been very successful for a long period and may have weeded out any risky leadership.

They'll probably just get bought by HP, long after their relevance has faded.

Whole System Design (1)

Archangel Michael (180766) | more than 5 years ago | (#23254346)

I've actually been suggesting to my friends for a while, that you'll end up with about four or five different major vendors of computers, each similar to what Apple is today, selling whole systems.

Imagine Microsoft buying Intel, AMD buying Red Hat, NVidia using Ubuntu (or whatever), and IBM launching OS/3 on POWER chips, and Apple.

If the Document formats are set (ISO) then why not?

There will be those few that continue to mod their cars, but for the most part, things will be mostly sealed and only a qualified mechanic, er, technician would ever need to crack the case.

I suspect that in the next 15 years or so, this is what you're gonna end up seeing.

Re:Future is set (1)

TheRaven64 (641858) | more than 5 years ago | (#23255726)

nVidia would be foolish to think that the desktop graphics market won't follow the same trends as the workstation graphics market, since their founders were at SGI when that trend started and were the ones that noticed it. I suspect this is why nVidia have licensed the ARM11 MPCore from ARM. They are using it in the APX 2500, which has one to four ARM CPU cores at up to 750MHz, and an nVidia-developed GPU, which supports OpenGL 2.0 ES and 720p encoding and decoding of H.264, in a small enough (and low enough power) package for handheld devices.

Re:Future is set (0)

Anonymous Coward | more than 5 years ago | (#23254080)

It's preposterous to say that spreadsheets drove adoption of FPUs.
Even for a huge spreadsheet at the time (say 10,000 cells), with each cell containing a complex formula (say 5 multiplications, 5 divisions and 10 additions/subtractions), a recalculation would only take approximately 0.1 seconds on a 25MHz 386SX.

Re:Future is set (1)

blahplusplus (757119) | more than 5 years ago | (#23256034)

"The pattern set by the whole CPU / Math Co-Processor integration showed the way. For those old enough to remember, once upon a time the CPU and Math Co-Processor were separate socketed chips"

Math co-processors did not have the massive bandwidth requirements that modern GPUs need in order to pump out frames. Everyone in this discussion seeing the merging of CPU and GPU hasn't been around long enough; I remember many times back in the '80s and '90s the same people predicting the 'end of the graphics card'. It NEVER HAPPENED, even with 2D cards, and this was well before the advent of 3D accelerators.

SMP not the answer (0)

Anonymous Coward | more than 5 years ago | (#23253454)

We already have multi-core systems and they rarely improve gaming. Why? Because almost all games are not coded to take advantage of an SMP environment.

Even Carmack himself will tell you that it is very challenging to develop a truly multi-threaded app, especially when a real-time component exists.

Most games operate in a synchronous state machine type fashion with rendering just another step in the cycle. This does not lend itself to parallelism. In order to truly take advantage of SMP, most of the big game engines in use would have to be re-written from the ground up.

Unless we move the engine to the GPU; then Intel would really be in trouble. We're already moving in that direction piecemeal... shaders (well, sort of; dynamic code on the GPU counts in my book), Ageia PhysX anyone?

Realtime Ray Tracing and Multicore CPU's (4, Interesting)

nherc (530930) | more than 5 years ago | (#23253460)

Despite what some major 3D game engine creators have to say [slashdot.org], if real-time ray tracing comes sooner rather than later, at about the time an eight-core CPU is common, I think we might be able to do away with the graphics card, especially considering the improved floating point units going into next-gen cores. Consider Intel's Quake IV Raytraced running at 99fps at 720P on a dual quad-core Intel rig at IDF 2007 [pcper.com]. This set-up did not use any graphics card processing power and scales up and down. So, if you think 1280x720 is a decent resolution AND 50fps is fine, you can play this now with a single quad-core processor. Now imagine it with dual octo-cores, which should be available when? Next year? I hazard 120fps at 1080P on your (granted) above-average rig doing real-time ray tracing some time next year IF developers went that route, AND still playable resolutions and decent fps with "old" (by that time) quad-cores.
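
A rough sketch of the extrapolation being made here, under the simplifying assumptions that frame rate scales linearly with core count and inversely with pixel count, starting from the quoted 99 fps at 1280x720 on 8 cores; any per-core gains in newer chips would push the result higher:

#include <cstdio>

int main() {
    const double base_fps    = 99.0;            // quoted IDF 2007 result
    const double base_cores  = 8.0;             // dual quad-core rig
    const double base_pixels = 1280.0 * 720.0;

    const double cores  = 16.0;                 // hypothetical dual octo-core
    const double pixels = 1920.0 * 1080.0;      // 1080p

    const double fps = base_fps * (cores / base_cores) * (base_pixels / pixels);
    printf("~%.0f fps at 1080p on %.0f cores, from core-count scaling alone\n", fps, cores);
    return 0;
}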

Re:Realtime Ray Tracing and Multicore CPU's (1)

Sancho (17056) | more than 5 years ago | (#23253806)

It seems like you could still have specialized ray-tracing hardware. Whether that's integrated into the main CPU as a specialized core, or as an expansion card really isn't relevant, though.

I think the best thing about heading in this direction is that "accelerated" graphics no longer becomes limited by your OS--assuming your OS supports the full instruction set of the CPU. No more whining that Mac Minis have crappy graphics cards, no more whining that Linux has crappy GPU driver support....

The downside is that an easy upgrade path gets lost. Right now, you can breathe new life into your aging system by upgrading the graphics card (if you're wanting to play newer games, of course.) Upgrading the CPU is a little more intimidating.

Re:Realtime Ray Tracing and Multicore CPU's (1)

G00F (241765) | more than 5 years ago | (#23255724)

You can rarely upgrade the CPU nowadays unless you bought what is considered a low-end CPU. And even then you don't see the kind of performance jump you'd get from going from, say, a GeForce 440MX to a GeForce 8500 (easily under $100).

Most people would spend ~$100+ to upgrade a CPU for small increases, and then their mobo is locked to PCI or AGP? Just spend ~$150-200 for a new CPU/RAM/mobo and upgrade the video card later. (I've been upgrading people to AMD 690V chipset mobos, and it has given them a large enough increase that they didn't need the new card, and for the ones that did it just cost another $50-150.)

So what I am saying is that upgrading the CPU is worthless unless you're upgrading the whole system. Upgrading the video card is great, but at a certain point you have to make the leap, because why continue to upgrade an aging and out-of-date system?

Re:Realtime Ray Tracing and Multicore CPU's (1)

Sancho (17056) | more than 5 years ago | (#23255886)

Well, I was talking about a world where we've moved on from off-CPU GPUs. Right now, yes, it's rare for people to upgrade the CPU without also upgrading many other components--but it's not always as dark an outlook as you suggest. The Core Duo, for example, is pin-compatible with the Core2Duo, and the performance difference is noticeable (at least on Macs.)

Re:Realtime Ray Tracing and Multicore CPU's (1)

nabasu (771183) | more than 5 years ago | (#23254426)

Problem is just that ray tracing isn't all that. You want control when you design a game environment: control over how everything looks and all the lights. I think that there will be lighting solutions to take advantage of more CPU horsepower, like Geomerics' Enlighten. But a full raytracing solution would just plain suck. If we look at the film business, you will see that the most commonly used rendering solution is Pixar's RenderMan. It's a mix between raytracing and more traditional methods. I think that's where gaming is headed now: a mix between traditional graphics and a touch of raytracing.

Re:Realtime Ray Tracing and Multicore CPU's (1)

nabasu (771183) | more than 5 years ago | (#23254454)

Woops. Pardon for using the "bold"-tag...After ten years of using html, I'm still a newbie...

Re:Realtime Ray Tracing and Multicore CPU's (0)

Anonymous Coward | more than 5 years ago | (#23254826)

Consider that Quake IV uses at least two orders of magnitude less complex scene geometry than current-gen games, lower-resolution textures, and almost no shader effects...

Re:Realtime Ray Tracing and Multicore CPU's (0)

Anonymous Coward | more than 5 years ago | (#23255676)

Good. I'm sick of lousy support and unstable drivers for high end graphics cards. I would like to see all the complexity of graphics cards moved into standardized software which can run on a variety of multi-core general purpose processors. Then we can actually get some better stability.

Drivers? (0)

Anonymous Coward | more than 5 years ago | (#23253468)

I'm more interested in the future of open GPU drivers... that fancy hardware is next to useless without them, unless you are pretty technical and don't mind poisoning your system, or use that other OS.

ATI will win if nVidia doesn't follow suit. My next card will be ATI on OSS drivers unless nVidia does something similar to keep me interested.

-AC

How bout this (1)

kurt555gs (309278) | more than 5 years ago | (#23253486)

Why not make one of the multiple cores a GPU? Then the speed at which it communicates with the CPU will be at clock speed.

Problem solved.

Of course Nvidia will need to come up with a CPU.

Cheers
 

Algorithms for graphics don't need Pentium cores (1)

Donkey Kong Cluster (1261480) | more than 5 years ago | (#23253598)

It is very ridiculous, because if you can put 8 cores on a single die, then you can put in a lot more multiprocessors than a current GPU already has. And these GPUs are very scalable, and the software that runs on them is very simple, so you need simpler threads.
And this is what happens: current GPUs can run 512 threads in parallel. Suppose you have 8 cores with Hyper-Threading; you could run, squeezing everything, 16 threads tops. And there isn't any 8-core chip for sale yet, is there?

SIMD vs. MIMD (1)

MOBE2001 (263700) | more than 5 years ago | (#23253620)

Nvidia makes SIMD (single instruction, multiple data) multicore processors while Intel, AMD and the other players make MIMD (multiple instructions, multiple data) multicore processors. These two architectures are incompatible, requiring different programming models. The former uses a fine-grain approach to parallelism while the latter is coarse-grained. This makes for an extremely complex programming environment, something that is sure to negatively affect productivity. The idea that the industry must somehow resign itself to an uneasy marriage between the two approaches is nonsense. Logic dictates that universality should be the main goal of multicore research. The market is crying for a super fast, fine-grain and easy to program MIMD multicore architecture that can handle any kind of parallel computing task. Neither Nvidia, Intel, AMD nor the others even come close to delivering what the market wants. And as we all know, what the market wants, the market will get. So my point is that Nvidia should not rest on its laurels, because their technology is bound to become obsolete as soon as someone figures out how to make the right multicore processor and kicks everybody's ass in the process. Read Nightmare on Core Street [blogspot.com] for a good analysis of where the changing multicore landscape is going. In the meantime, I advise everybody in the multicore business to thread carefully. Big money is in the balance. And I mean, BIG MONEY.

Re:SIMD vs. MIMD (2, Informative)

hackstraw (262471) | more than 5 years ago | (#23254048)

Nvidia makes SIMD (single instruction, multiple data) multicore processors...

That is untrue. The Nvidia CUDA environment can do MIMD. I don't know the granularity, or much about it, but you don't have to run in complete SIMD mode.
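
A small CUDA sketch of that point: threads in the same launch can take different code paths, since the SIMT model tolerates divergence (divergent threads within a warp are serialized, so it is not free-running MIMD at the hardware level). The kernel and sizes are illustrative:

#include <cuda_runtime.h>
#include <cstdio>

__global__ void divergent(int *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    if (i % 2 == 0)
        out[i] = i * i;   // even-numbered threads do one thing...
    else
        out[i] = -i;      // ...odd-numbered threads do something else
}

int main() {
    const int n = 8;
    int *out = NULL;
    cudaMallocManaged((void **)&out, n * sizeof(int));

    divergent<<<1, n>>>(out, n);
    cudaDeviceSynchronize();

    for (int i = 0; i < n; ++i) printf("%d ", out[i]);
    printf("\n");   // prints: 0 -1 4 -3 16 -5 36 -7

    cudaFree(out);
    return 0;
}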

Re:SIMD vs. MIMD (0)

Anonymous Coward | more than 5 years ago | (#23254468)

For anybody that followed the link to the COSA bullshit, the following quote from the same author should put it in perspective...

My goal is to use my understanding of the metaphorical texts to design and build a true artificial intelligence. The Christian AI! It is only a matter of time. When that happens, the Darwinian walls will come crumbling down like the old walls of Jericho. Sweet revenge.


HTH.

Comments on AMD technology misleading (0)

Anonymous Coward | more than 5 years ago | (#23254542)

He said that current ATI hardware cannot run C code, so the question is: has Nvidia talked with competitors (like ATI) about running C on their hardware?
You can indeed run C on AMD's FireStream processor using the Brook+ compiler http://ati.amd.com/products/streamprocessor/specs.html [amd.com]. There are issues, however. For instance, the FireStream 9170 appears dedicated to computing. I'm not sure they have a processor that can do both graphics and general-purpose computing with C. Makes me wonder about his other comments.

It would be sad if his comments about AMD folding did pan out. It would have been wonderful to have a CPU and GPU chip communicating by HT.

SoC is the future (1)

davido42 (956948) | more than 5 years ago | (#23255272)

I think you will see nVidia licensing their IP to other chip companies, because like it or not, the push is always going to be toward cost reduction, power reduction, smaller form factors, and so on. This is true at all performance levels. For low-end systems, the multi-core CPUs may eat their lunch. The only thing that saves them is that graphics and video are data pigs, so the issue is more managing high bandwidth data flow than overall horsepower. They will still be around, but they may go the way of SGI.

Eventually it will be all one processor (1)

fluxburn (1278932) | more than 5 years ago | (#23256912)

Eventually processors will have hundreds of cores, and will be the polar opposite of today's world of processor design.