
Nvidia Working on a CPU+GPU Combo

Zonk posted more than 7 years ago | from the that-will-keep-them-out-of-trouble-for-a-while dept.


Max Romantschuk writes "Nvidia is apparently working on an x86 CPU with integrated graphics. The target market seems to be OEMs, but what other prospects could a solution like this have? Given recent developments with projects like Folding@Home's GPU client, you can't help but wonder about the possibilities of a CPU with an integrated GPU. Things like video encoding and decoding, audio processing and other applications could benefit a lot from a low-latency CPU+GPU combo. What if you could put multiple chips like these in one machine? With AMD+ATI and Intel's own integrated graphics, will basic GPU functionality be integrated in all CPUs eventually? Will dedicated graphics cards become a niche product for enthusiasts and pros, like audio cards already largely have?" The article is from the Inquirer, so a dash of salt might make this more palatable.


178 comments

Heard This One Before (4, Interesting)

eldavojohn (898314) | more than 7 years ago | (#16517577)

Sounds like Nvidia is just firing back at the ATI-AMD claim from two months ago [theinquirer.net] . Oh, you say that you're integrating GPUs and CPUs? "Well, we can say that too!"

What I don't understand is that I thought GPUs were made to offload a lot of graphics computations from the CPU. So why are we merging them again? Isn't a GPU supposed to be an auxiliary CPU only for graphics? I'm so confused.

What I'm not confused about is the sentence from the above article:
DAAMIT engineers will be looking to shift to 65 nanometre if not even to 45 nanometre to make such a complex chip as a CPU/GPU possible.
Oh, I've worked with my fair share of DAAMIT engineers. They're the ones that go, "Yeah, it's pretty good but ... DAAMIT, we just need more power!"

Re:Heard This One Before (0)

Anonymous Coward | more than 7 years ago | (#16517625)

I think it is because of the distance and to speed up shared memory or cache or something.

Re:Heard This One Before (0)

Anonymous Coward | more than 7 years ago | (#16519053)

Remember to add a Physics engine, too! And an AI engine! And maybe even a kitchen sink.

Re:Heard This One Before (4, Interesting)

everphilski (877346) | more than 7 years ago | (#16517669)

What I don't understand is that I thought GPUs were made to offload a lot of graphics computations from the CPU. So why are we merging them again?

A really, really fast pipe. It is a lot quicker to push stuff from the CPU to the GPU when they are on the same piece of silicon than over the PCIe or AGP bus. Speed is what matters; it doesn't look like they are moving the load one way or the other (although moving some load from the CPU to the GPU for vector-based stuff would be cool if they had a general-purpose toolkit, which I'd imagine one of these three companies will think about).
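As a rough back-of-the-envelope illustration of that point, the small C sketch below estimates how long pushing a frame's worth of data takes over different links. The AGP 8x and PCIe x16 numbers are the usual nominal peak rates; the on-die bandwidth and the 64 MB payload are assumptions made purely for illustration.

    #include <stdio.h>

    /* Rough transfer-time estimate for pushing data from CPU to GPU.
     * AGP 8x ~2.1 GB/s and PCIe 1.0 x16 ~4 GB/s are nominal peak rates;
     * the "on-die" figure is only an assumption for comparison.
     * (1 GB treated as 1000 MB for simplicity.) */
    int main(void) {
        const double megabytes = 64.0;  /* hypothetical payload per frame */
        const struct { const char *link; double gb_per_s; } links[] = {
            { "AGP 8x",            2.1 },
            { "PCIe x16 (1.0)",    4.0 },
            { "on-die (assumed)", 40.0 },
        };
        for (int i = 0; i < 3; i++) {
            double ms = megabytes / (links[i].gb_per_s * 1000.0) * 1000.0;
            printf("%-18s %6.2f ms per %.0f MB\n", links[i].link, ms, megabytes);
        }
        return 0;
    }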

Re:Heard This One Before (2, Insightful)

Ironsides (739422) | more than 7 years ago | (#16517839)

Then why not just have some connections that come straight out of the CPU and go directly to a graphics card, bypassing any bus entirely?

Re:Heard This One Before (3, Interesting)

drinkypoo (153816) | more than 7 years ago | (#16517871)

ATI/AMD is working on that right now. I think it comes after the next rev of hypertransport.

Re:Heard This One Before (1)

Neil Hodges (960909) | more than 7 years ago | (#16518363)

Wasn't that what the old VL Bus was for? It connected the i486 to the video card, ne?

Re:Heard This One Before (1)

ultranova (717540) | more than 7 years ago | (#16518461)

Then why not just have some connections that come straight out of the CPU and go directly to a graphics card, bypassing any bus entirely?

That's the whole point here: put the CPU and GPU right next to each other and wire them together. You see, the nearer they are to each other, the less time it takes for electric impulses to travel from one to the other, and the faster the communication is.

And, of course, the reason number one: you get a guaranteed GPU sale for each CPU sale - goodbye pesky competition ;).

Re:Heard This One Before (4, Insightful)

Ironsides (739422) | more than 7 years ago | (#16518679)

And, of course, the reason number one: you get a guaranteed GPU sale for each CPU sale - goodbye pesky competition ;).

And vice versa. This might work where someone wants an embedded GPU for low- to medium-end graphics. However, I doubt gamers would like the idea of having to purchase a new CPU every time a new GPU comes out, and vice versa.

There's something to be said for physically discrete components.

Re:Heard This One Before (0)

Anonymous Coward | more than 7 years ago | (#16519361)


And vice versa. This might work where someone wants an embedded GPU for low- to medium-end graphics. However, I doubt gamers would like the idea of having to purchase a new CPU every time a new GPU comes out, and vice versa.

There's something to be said for physically discrete components.


I'm sure the company is smart enough to do some simple marketing and pair up the right CPU with the right GPU for a particular buyer/budget.

A cyclic process? (4, Insightful)

Kadin2048 (468275) | more than 7 years ago | (#16517867)

A while ago -- and maybe it was in the Slashdot discussion about ATI, I'm not sure -- somebody described a cycle in computer design, where various components are built-in monolithically, then broken out as separate components, and then swallowed back up into monolithic designs again.

Graphics chips seem to have done this cycle at least once; perhaps now we're just looking at the next stage in the cycle? We've had graphics as a separate component from the processor for a while, perhaps the next stage in the cycle is for them to combine together into a G/CPU, to take advantage of the design gains in general-purpose processors.

Then at some point down the road, the GPU (or more likely, various GPU-like functional units) might get separated back out onto their own silicon, as more application-specific processors become advantageous once again.

Re:A cyclic process? (5, Informative)

shizzle (686334) | more than 7 years ago | (#16517943)

Yup, the idea is pushing 40 years old now, and came out of the earliest work on graphics processors. The term "wheel of reincarnation" came from "On the Design of Display Processors", T.H. Myer and I. E. Sutherland, Communications of the ACM, Vol 11, No. 6, June 1968.

http://www.cap-lore.com/Hardware/Wheel.html [cap-lore.com]

Re:A cyclic process? (1)

gstoddart (321705) | more than 7 years ago | (#16518563)

The term "wheel of reincarnation" came from "On the Design of Display Processors"

I won't dispute that the term in a technical usage was coined by them. But it's basically a term borrowed from Hindu/Buddhist tradition, which has held beliefs about reincarnation and the wheel of life for a few thousand years.

Cheers

Re:A cyclic process? (1)

shizzle (686334) | more than 7 years ago | (#16518807)

True! Didn't mean to imply otherwise. I should have been clearer:

The use of the term "wheel of reincarnation" to refer to this phenomenon came from [...]

And of course the main contribution of this paper was the recognition of that phenomenon, not just the appropriation of a catchy phrase to describe it.

Re:A cyclic process? (1)

gstoddart (321705) | more than 7 years ago | (#16518933)

True! Didn't mean to imply otherwise. I should have been clearer

*laugh* That, or I should be less of a pedant. ;-)

Cheers

Re:A cyclic process? (4, Informative)

levork (160540) | more than 7 years ago | (#16517981)

This is known as the wheel of reincarnation [catb.org] , and has come up several times in the last forty years of graphics hardware.

Re:Heard This One Before (4, Interesting)

purpledinoz (573045) | more than 7 years ago | (#16517889)

It seems like this type of product would be marketed towards the budget segment, which really doesn't care about graphics performance. However, the huge advantage of having a GPU on the same silicon as the CPU would be a big boost in performance. The low cost advantage has already been attained with the integrated graphics chipsets (like nForce). So that would mean this might be marketed towards the high-performance crowd.

But I highly doubt that nVidia will be able to get a CPU out that out-performs an Intel or AMD, which the high-performance junkies would want. Intel and AMD put a HUGE amount of money into research, development, and fabrication to attain their performance. This is going to be interesting to watch. Hopefully nVidia doesn't dig themselves into a hole with this attempt.

Re:Heard This One Before (4, Interesting)

NerveGas (168686) | more than 7 years ago | (#16518455)

I don't think that the CPU->GPU pipe is any limitation. Going from AGP 4x->8X gave very little speed benefit, and on PCI-E connections, you have to go from the normal 16x down to a 4x before you see any slowdown.

Memory size and bandwidth are the usual limitations. Remember that if you want 2x AA, you double your memory usage, and if you want 4x AA, you quadruple it. So, that game that needed 128 megs on the video card, with 4x AA, can suddenly need 512.

steve

Sounds like the CELL (0)

Anonymous Coward | more than 7 years ago | (#16518749)

Maybe it is also a move to compete with the Cell Broadband Engine, where the SPU's can act similar to a GPU.

Re:Heard This One Before (3, Insightful)

LWATCDR (28044) | more than 7 years ago | (#16517749)

Well think of it like floating point.
At one time floating point was done in software; it still is on some CPUs.
Then floating point co-processors became available. For some applications you really needed to speed up floating point, so it was worth shelling out the big bucks for a chip to speed it up. This is very similar to what we have now with graphics cards.
Finally, CPUs had floating point units put right on the die. Later, DSP-like instructions were added to CPUs.

We are getting to the point where 3d graphics are mandatory. Tying it closer to the CPU is now a logical choice.

Re:Heard This One Before (5, Informative)

TheRaven64 (641858) | more than 7 years ago | (#16517931)

It's not just floating point. Originally, CPUs did integer ops and comparisons/branches. Some of the things that were external chips and are now found on (some) CPU dies include:
  1. Memory Management Units. Even in microcomputers there are some (old m68k machines) that have an off-chip MMU (and some, like the 8086 that just don't have one).
  2. Floating Point Units. The 80486 was the first x86 chip to put one of these on-die.
  3. SIMD units. Formerly only found in high-end machines as dedicated chips, now on a lot of CPUs.
  4. DSPs. Again, formerly dedicated hardware, now found on-die in a few of TI's ARM-based cores.
A GPU these days is very programmable. It's basically a highly parallel stream processor. Integrating it onto the CPU makes a lot of sense.

Re:Heard This One Before (4, Interesting)

LWATCDR (28044) | more than 7 years ago | (#16518555)

Exactly.
I was using floating point as an example.
I don't know if Nvidia can pull this off without a partner. To build a really good x86 core isn't easy. I wonder if they might do a PPC or ARM core instead. That could make nVidia a big player in the cell phone and mobile video market. At some point there will be portable HD-DVD players.

My crystal ball says:
AMD will produce these products.
1. A low-end CPU with integrated GPU for the Vista market. This will be a nice inexpensive option for home and corporate users. It might also end up in some set-top boxes. This will be the next-generation Geode.
2. A family of medium- and high-end video products that use HyperTransport to interface with the Opteron and Athlon64 lines.

Intel will:
Adopt HyperTransport or reinvent it. Once we hit four cores Intel will hit a front-side bus wall.
Intel will produce a replacement for the Celeron that is a Core 2 Duo with integrated graphics on one die. This is to compete with AMD's new integrated solution.
Intel will not go into the high-end graphics line.

nVidia will fail to produce an x86+GPU to compete with AMD and Intel.
nVidia produces an integrated ARM+GPU and dominates the embedded market. Soon every cellphone and media player has an nVidia chipset at its heart. ARM and nVidia merge.

Of course I am just making all this up but so what, electrons are cheap.

Re:Heard This One Before (1)

NerveGas (168686) | more than 7 years ago | (#16519135)

I'm not sure if it does. Modern GPUs occupy a large die, run at slow clock speeds, and have an enormous transistor count. CPUs, on the other hand, have fewer transistors, smaller dies, and higher clock speeds.

Your CPU isn't going to work well at the 200-400 MHz of a GPU, and you're not going to make a huge GPU die run at 2 GHz to get your CPU to work well. I think that their CPUs will be closer to Via's C3 than a P4/Athlon 64.

Re:Heard This One Before (1)

dargaud (518470) | more than 7 years ago | (#16519311)

I have a general question relating to this. How can you compile a program that stays compatible with all those kinds of processor 'options'? It's been a while since I last did some compiler work (okay, 15 years), but how can you have a program that uses FPU instructions if there's an FPU (co-processor or on-die) available and still works if not, and so on for GPU, DSP, SIMD, etc.? Do you have tests and branches each time one of those instructions would be used, falling back to a library if it's not available? In that case it makes a lot of sense to use a distro like Gentoo and compile specifically for your processor (it saves a lot of tests/branches during program execution and a lot of space in the executable). Or am I missing something?
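In practice, both approaches guessed at here get used: compile for a specific target (the Gentoo route), or test for the feature once at startup and call through a function pointer so there is no per-instruction branching. Below is a minimal C sketch of the runtime-dispatch idea; it uses GCC's __builtin_cpu_supports for brevity (a much later compiler convenience -- older code read CPUID directly), and the SSE path is only a stub.

    #include <stdio.h>

    /* Two implementations of the same routine: one plain scalar,
     * one assumed to use SSE instructions (stubbed here for brevity). */
    static void scale_scalar(float *v, int n, float k) {
        for (int i = 0; i < n; i++) v[i] *= k;
    }
    static void scale_sse(float *v, int n, float k) {
        /* real code would use SSE intrinsics or assembly here */
        for (int i = 0; i < n; i++) v[i] *= k;
    }

    /* Chosen once at startup; every later call goes through the pointer,
     * so there are no per-use tests and branches. */
    static void (*scale)(float *, int, float);

    int main(void) {
        scale = __builtin_cpu_supports("sse") ? scale_sse : scale_scalar;
        float data[4] = { 1, 2, 3, 4 };
        scale(data, 4, 2.0f);
        printf("%g %g %g %g\n", data[0], data[1], data[2], data[3]);
        return 0;
    }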

Re:Heard This One Before (4, Interesting)

arth1 (260657) | more than 7 years ago | (#16518355)

At one time floating point was done in software; it still is on some CPUs.
Then floating point co-processors became available. For some applications you really needed to speed up floating point, so it was worth shelling out the big bucks for a chip to speed it up.

Then people started using floats for the convenience, not because the accuracy was needed, and performance suffered greatly as a result. Granted, there are a lot of situations where accuracy is needed in 3D, but many of the calculations that are done could be better done with integer math and table lookups.
Does it often matter whether a pixel has position (542,396) or (542.0518434,395.97862456)?
Using a lookup table of twice the resolution (or two tables where there are non-square pixels) will give you enough precision for pixel-perfect placement, and can quite often speed things up remarkably. Alas, this and many other techniques have been mostly forgotten, and it's easier to leave it to the MMU or graphics card, even if you compute the same unnecessary calculations and conversions a million times.
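For readers who haven't seen the trick, here is a minimal C sketch of the integer-plus-lookup-table approach being described: a fixed-point sine table is built once, and after that, point placement needs no floating point at all. The table size and fixed-point scale are arbitrary choices for illustration.

    #include <math.h>
    #include <stdio.h>

    #define TABLE_SIZE 2048   /* entries per full circle, arbitrary choice */
    #define FIXED_ONE  1024   /* 10-bit fixed-point scale, arbitrary choice */

    static int sin_table[TABLE_SIZE];

    /* Built once (with floats) at startup; everything afterwards is integer math. */
    static void build_table(void) {
        const double pi = 3.14159265358979;
        for (int i = 0; i < TABLE_SIZE; i++)
            sin_table[i] = (int)lround(sin(2.0 * pi * i / TABLE_SIZE) * FIXED_ONE);
    }

    /* Place a point at integer radius r and table angle a -- no floats involved. */
    static void rotate_point(int r, int a, int *x, int *y) {
        *x = (r * sin_table[(a + TABLE_SIZE / 4) % TABLE_SIZE]) / FIXED_ONE; /* cosine */
        *y = (r * sin_table[a % TABLE_SIZE]) / FIXED_ONE;                    /* sine */
    }

    int main(void) {
        build_table();
        int x, y;
        rotate_point(100, 256, &x, &y);   /* 256/2048 of a circle = 45 degrees */
        printf("(%d, %d)\n", x, y);       /* roughly (70, 70) */
        return 0;
    }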

Fast MMUs, CPU extensions and 3D graphics routines are good, but I'm not too sure they're always used correctly. Does a new game that's fifty times as graphically advanced as a game from six years ago really need a thousand times the processing power, or is it just easier to throw hardware at a problem?

Re:Heard This One Before (5, Informative)

LWATCDR (28044) | more than 7 years ago | (#16518629)

I am an old-school programmer, so I tend to use ints a lot. The sad truth is that floats using SSE are as fast as, and sometimes faster than, the old tricks we used to avoid floats!
Yes, we live in an upside-down world where floats are sometimes faster than ints.
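As an illustration of the kind of SSE float code being referred to, here is a minimal sketch using compiler intrinsics (it assumes an SSE-capable x86 compiler; the data values are made up):

    #include <stdio.h>
    #include <xmmintrin.h>   /* SSE intrinsics */

    int main(void) {
        float a[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
        float b[4] = { 10.0f, 10.0f, 10.0f, 10.0f };
        float r[4];

        /* One SSE multiply handles four floats at a time. */
        __m128 va = _mm_loadu_ps(a);
        __m128 vb = _mm_loadu_ps(b);
        _mm_storeu_ps(r, _mm_mul_ps(va, vb));

        printf("%g %g %g %g\n", r[0], r[1], r[2], r[3]);
        return 0;
    }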
 

Re:Heard This One Before (2, Informative)

daVinci1980 (73174) | more than 7 years ago | (#16518675)

Does it often matter whether a pixel has position (542,396) or (542.0518434,395.97862456)?

Yes. It absolutely matters. It makes a huge difference in image quality.

It matters when we go to sample textures, it matters when we enable AA, it matters.

Re:Heard This One Before (2, Informative)

arth1 (260657) | more than 7 years ago | (#16518995)

Yes. It absolutely matters. It makes a huge difference in image quality.

No, it doesn't. Note that I said pixel, not coordinate.
The coordinates should be as accurate as possible, but having a pixel more accurate than twice the resolution of the display serves very little purpose.

Re:Heard This One Before (2, Interesting)

Ryan Amos (16972) | more than 7 years ago | (#16517769)

It is; but if you combine them on the same die with a large shared cache and the on-chip memory controller... you can see where I'm going with this. Think of it as a separate CPU, just printed on the same silicon wafer. That means you only need one fan to cool it, and you can lose a lot of the heat-producing power-management circuitry on the video card.

Obviously this is not going to be ideal for high end gaming rigs; but it will improve the quality of integrated video chipsets on lower end and even mid range PCs.

Re:Heard This One Before (1)

novus ordo (843883) | more than 7 years ago | (#16517789)

They are merging them, but I doubt one CPU will do both. This will be more like two cores, where one core does sequential instruction processing and the other is more of a vector processor, where you do the same operation over a lot of data. Sounds pretty interesting, since you will be able to use the vector processor as a more general-purpose CPU to do other things. The challenge will be solving the problem of cores competing for bus bandwidth.

Re:Heard This One Before (1)

mikael (484) | more than 7 years ago | (#16517903)

What I don't understand is that I thought GPUs were made to offload a lot of graphics computations from the CPU. So why are we merging them again? Isn't a GPU supposed to be an auxiliary CPU only for graphics? I'm so confused.

GPUs are so powerful now that some of the latest high-end scientific visualisation applications will actually do calculations on a supercomputer, transfer the data across to a PC, and then use the CPU to process the data so it can be visualised on the GPU in real time. Similarly for game software (the physics engine will run on the CPU or physics card, then send the data over to the GPU). Engineers will always try to remove the performance bottleneck, whether it's in the network, CPU, data bus, or GPU.

Re:Heard This One Before (3, Informative)

Do You Smell That (932346) | more than 7 years ago | (#16518179)

What I don't understand is that I thought GPUs were made to offload a lot of graphics computations from the CPU. So why are we merging them again? Isn't a GPU supposed to be an auxiliary CPU only for graphics? I'm so confused.
You're partially right. GPUs were made to execute the algorithms developed for graphically-intensive programs directly in silicon... thus avoiding the need to run compiled code within an operating system, which entails LOTS of overhead. Being able to do this directly on dedicated hardware (with entirely different processor designs optimized for graphical computations) makes it possible to execute a LOT more calculations per second. You can really see the difference if you, for instance, use DirectX on two nearly identical video cards: one with hardware-based DirectX, the other with it running in software.

Moving it right up next to the CPU will allow the data to flow between the two a lot faster than it does currently, where it has to go over a bus... they can finally get rid of the bottlenecks that have been around since the two were separated.

Re:Heard This One Before (1, Interesting)

Anonymous Coward | more than 7 years ago | (#16518569)

Except for the bottleneck of memory bandwidth. External video cards have tremendous memory bandwidth, several times that of main memory in a computer. Putting them next to CPUs and having to share main memory bandwidth may not be that great for certain workloads.

Re:Heard This One Before (1)

*weasel (174362) | more than 7 years ago | (#16518243)

Having a specialized GPU made sense when processors were single-core.

Now that processors have multiple cores, many of which are left looking for a job to do - it makes sense to bring the GPU back to the main die.

The result will be an immediate performance boost for Joe Sixpack, at lower manufacturing cost.

Re:Heard This One Before (2, Interesting)

nine-times (778537) | more than 7 years ago | (#16518771)

What I don't understand is that I thought GPUs were made to offload a lot of graphics computations from the CPU. So why are we merging them again? Isn't a GPU supposed to be an auxiliary CPU only for graphics? I'm so confused.

You've already gotten some good answers here, but I'll throw in something that I haven't seen anyone else mention explicitly: GPUs aren't only being used for 3D animation anymore. GPUs started because, in order to make graphics for games better, you needed a specialized processor to handle the 3D calculations. However, GPUs have become, in some ways, more complex and powerful than the CPU, and as that has happened, other uses have been found for all that power. It turns out that there are lots of mathematical transformations that are more efficient on the specialized graphics processors, including audio/video processing and some data analysis. Some clever programmers have already started offloading some of their complex calculations from the CPU to the GPU.

This has led many people to wonder: why don't we bring some of the GPU advancements back to the CPU somehow, so that we aren't swapping data back and forth between the CPU and GPU, and between system RAM and video RAM? Apparently, it's not a stupid question.

Re:Heard This One Before (2, Insightful)

FlyingGuy (989135) | more than 7 years ago | (#16518925)

As others have replied, it's all about bus speed. The amount of time it takes to move data from chip to chip can insert enormous overhead.

Think back a little to the DEC Alpha. Now the ALPHA chip in and of itself was not really that remarkable. What was so VERY remarkable about the Alpha system was its bus switching. It was blazingly fast and could handle monster amounts of data from main memory to CPU, GPU, etc. The reason (mostly) that it's now sitting in HP's vault is that the bus switch was/is really expensive to manufacture.

So the way you do this without having to build that very expensive bus architecture is to just put the GPU on the die with the CPU. Everything runs at the internal speed of the processor and it's fairly inexpensive, comparatively, to build.

Should Slashdot really insult other news outlets? (0, Offtopic)

Anonymous Coward | more than 7 years ago | (#16517583)

Because they're never wrong and never sensationalize a story for a few clicks.

Re:Should Slashdot really insult other news outlet (4, Insightful)

vondo (303621) | more than 7 years ago | (#16518175)

The Inquirer is more of a rumor site than a news site. They have a pretty good track record for their rumors, but they don't have people on record backing this one up.

What NVidia eventually does may not bear much resemblance to the story.


My outlandish idea for this (0)

Anonymous Coward | more than 7 years ago | (#16517607)

Perhaps something like this could be used in a general-purpose computer. Many technical hurdles will have to be overcome, but it may be possible after several decades of research.

Re:My outlandish idea for this (1)

Teilo (91279) | more than 7 years ago | (#16518087)

I am hoping this is a joke.

"Decades" ago (assuming 20 years to match your plural), most computing was still 8-bit and most personal computers had either monochrome text displays or EGA (16-color 640x350) graphics at best. Make it 30 years and there were no personal computers of note at all.

nVidia don't have a good chance with this. (2, Interesting)

purpledinoz (573045) | more than 7 years ago | (#16517637)

AMD and Intel have their own fabs that are at the leading edge of semiconductor technology. I highly doubt that nVidia will open up a fab for their chips. But who knows, IBM may produce their chips for them.

I think the better option would be to have a graphics chip fit into a Socket 939 on a dual socket motherboard, with an AMD chip. It will have a high-speed link through hyper-transport, and would act just like a co-processor. I'm no chip designer, so I have no idea what the pros/cons of this are, or if it's even possible.

Re:nVidia don't have a good chance with this. (1)

drinkypoo (153816) | more than 7 years ago | (#16517897)

You'd also need more hardware on the board, like the video memory and the RAMDAC, not to mention the ports to plug in your display. Those could be on a riser, though, and expressed like any slot card, including the DAC.

Re:nVidia don't have a good chance with this. (1)

Joe The Dragon (967727) | more than 7 years ago | (#16518977)

No, you put it on a HyperTransport HTX card.

One Question: (1)

TubeSteak (669689) | more than 7 years ago | (#16517641)

How much heat do integrated graphics solutions put out?

I can't imagine it is that much
(since they mostly suck)

With integration.. (3, Interesting)

Hangin10 (704729) | more than 7 years ago | (#16517643)

With this integration, does that mean a standard for 3D? No more Nvidia/ATI drivers. The OSDEV guys would love this if it came to that. But how would this integration work? A co-processor space like MIPS? If so, does that mean that graphics calculations have somewhat been moved back to the CPU? And what about the actual workings themselves? I'm guessing the registers would still be memory-mapped in some way (or I/O ports for x86, whatever).

I'm thinking way too much. It did alleviate boredom for about a minute though...

Re:With integration.. (0, Flamebait)

ozamosi (615254) | more than 7 years ago | (#16518411)

You DID see that Nvidia was behind this, right? Which means you'll have to run some broken, bloated and insecure binary blob just to get the bloody CPU working.

Out of their league? (4, Insightful)

Salvance (1014001) | more than 7 years ago | (#16517661)

Unless Nvidia is partnering with Intel to release this chip, I think they're getting too far out of their comfort zone to be successful. Sure, a dual- or even quad-core chip with half of the cores handling graphics would be great, but can Nvidia deliver? I doubt it... look how many companies have gone down the tubes after spending millions or billions trying to make an x86-compatible chip, let alone trying to integrate top-end graphics as well.

Nvidia is a fantastic graphics card company - they should continue to innovate and focus on what they're good at rather than try to play follow-the-leader in an arena they know nothing about.

Re:Out of their league? (4, Insightful)

TheRaven64 (641858) | more than 7 years ago | (#16518113)

The thing is, it doesn't need to be a very good x86 chip. Something like a VIA C7 is enough for most uses, if coupled with a reasonable GPU. I can imagine something like this being very popular in the sub-notebook (which used to mean smaller-than-letter-paper but now means not-as-fat-as-our-other-products) arena where power usage is king. If the CPU and GPU are on the same die then this gives some serious potential for good power management, especially if the memory controller is also on-die. This makes for very simple motherboard designs (and simple = cheap), so it could be very popular.

Re:Out of their league? (1)

Doctor Faustus (127273) | more than 7 years ago | (#16518827)

The thing is, it doesn't need to be a very good x86 chip. Something like a VIA C7 is enough for most uses, if coupled with a reasonable GPU.
Especially if they just shove several of them together onto the die. Everyone is going to be focusing more on software that can take advantage of multiple CPUs for the next couple of years, and nVidia can ride the coattails of that with a nice, simple in-order-execution design. Put 16 or so of those onto your chip with a good GPU, and you might get pretty good performance without a whole lot of design work (relatively speaking, of course). The power management system could shut down entire CPUs, too.

Re:Out of their league? (0)

BecomingLumberg (949374) | more than 7 years ago | (#16518259)

Did they go down the wrong tube? If they did, maybe that's why my internets are not getting through...

I smell a pattern (2, Interesting)

doti (966971) | more than 7 years ago | (#16517673)

There seems to be a cycle of integrating and decoupling things.
We had separate math co-processors that were later integrated into the CPU.
Then came the separate GPU, which will soon be integrated back in too.

Re:I smell a pattern (1)

katz (36161) | more than 7 years ago | (#16518477)

The pattern you describe has a name, it's called the "Wheel of Reincarnation". The Jargon File specifically mentions graphics equipment, even:

"[coined in a paper by T.H. Myer and I.E. Sutherland On the Design of Display Processors, Comm. ACM, Vol. 11, no. 6, June 1968)] Term used to refer to a well-known effect whereby function in a computing system family is migrated out to special-purpose peripheral hardware for speed, then the peripheral evolves toward more computing power as it does its job, then somebody notices that it is inefficient to support two asymmetrical processors in the architecture and folds the function back into the main CPU, at which point the cycle begins again.
Several iterations of this cycle have been observed in graphics-processor design, and at least one or two in communications and floating-point processors. Also known as the Wheel of Life, the Wheel of Samsara, and other variations of the basic Hindu/Buddhist theological idea. See also blitter."

see http://catb.org/jargon/html/W/wheel-of-reincarnation.html [catb.org] .

- Roey

Re:I smell a pattern (1)

Doctor Faustus (127273) | more than 7 years ago | (#16518979)

My understanding of The Wheel of Reincarnation is that separate hardware is eliminated entirely, and the main CPU just has more processes running. If I've got that right, floating point doesn't really fit, because it wasn't really integrated into the main CPU, just moved onto the same die.

When you look at the 80387, it was a lot harder to get enough onto a single chip, and the bus speed between chips was a lot faster, relative to the CPU speed, than it is now (you might have a couple cycles of latency, but the CPU and the bus were both 33 MHz and 32-bit). This is back when the CPU cache was on separate chips. Even a generation later, the 486 had an 8 KB cache built in, and motherboard manufacturers were adding external L2 caches of 32-128 KB.

Math co-processors, anyone? (4, Insightful)

cplusplus (782679) | more than 7 years ago | (#16517709)

GPUs are going the way of the math co-processor. I think it's inevitable.

Re:Math co-processors, anyone? (1)

SirKron (112214) | more than 7 years ago | (#16518123)

Exactly. Why else do you feel we need an 8-core processor? The GPU will just be extra clock cycles on one of the cores.

patents (2, Insightful)

chipace (671930) | more than 7 years ago | (#16517725)

It's quite clear that the AMD-ATI merger was to acquire the IP and expertise necessary to integrate GPU core(s) on the same die as CPU core(s). Nvidia does not have to actually market a design, but rather patent some key concepts, and this could provide revenue or protection.

I very much doubt that they could compete with AMD and Intel, who have already patented many x86 CPU concepts.

It's a shame that Intel has decided not to buy Nvidia, and to go it alone with its own design staff.

Nvidia (1, Funny)

Anonymous Coward | more than 7 years ago | (#16517805)

Why doesn't google buy Nvidia?

Re:Nvidia (1)

SevenHands (984677) | more than 7 years ago | (#16518435)

Google is just software, not hardware. Or am I wrong on this one?

Re:Nvidia (1, Funny)

Anonymous Coward | more than 7 years ago | (#16518645)

maybe for the same reason ford doesn't.

Thank MicroSoft (5, Interesting)

powerlord (28156) | more than 7 years ago | (#16517811)

Okay, I admit, I haven't RTFA yet, but if GPUs do get folded back into CPUs, I think we need to thank MS.

No. ... Seriously. Think for a minute.

The major driving force right now in GPU development and purchase are games.

The major factor that they have to contend with is DirectX.

As of DirectX 10, a card either IS or IS NOT compliant. None of this "We are 67.3% compliant".

This provides a known target that can be reached. I wouldn't be surprised if the DirectX10 (video) featureset becomes synonymous with 'VGA Graphics' given enough time.

Yeah, sure, MS will come out with DX11, and those CPUs won't be compatible, but so what? If you upgrade your CPU and GPU regularly anyway to maintain the 'killer rig', why not just upgrade them together? :)

Re:Thank MicroSoft (0)

Anonymous Coward | more than 7 years ago | (#16517951)

It will take a long time for DX 10 to be fully accepted if it's going to require vista, as rumoured...

Re:Thank MicroSoft (1)

powerlord (28156) | more than 7 years ago | (#16518063)

I wasn't thinking so much DX10 being accepted, as the featureset that it requires to be there.

Re:Thank MicroSoft (0)

Anonymous Coward | more than 7 years ago | (#16518159)

I dunno, maybe due to cost/system instability issues? You want to try swapping out the CPU every 4 months, along with all the headaches that can cause (like optimizing the kernel)? I sure as hell don't. Also, what about OpenGL? As a CPU manufacturer, I'd prefer to keep the number of standards I have to comply with and get certified for to a minimum, and as a Linux user, I don't like the idea of my CPU of all things being beholden to Microsoft.

Re:Thank MicroSoft (1)

powerlord (28156) | more than 7 years ago | (#16518409)

OpenGL should be able to take advantage of the same features a chip can provide, as long as there are drivers to support it.

My comment about DirectX 10 was just that it is being presented as an absolute standard to adhere to, unlike its predecessors: if you want to get certified, you have to have certain features. This provides a much more concrete standard that people can look at and ask 'can that graphics card/chip do X?'.

I'm confused how this would make the CPU beholden to MS. Somehow the CPUs currently in our computers, primarily sold to run DOS and Windows, have managed to run SCO (whether you like it or not), Novell, Minix, Linux, BSD, Solaris, and even (shock) OS X. A remarkable number of those computers even contain graphics cards that are certified for DirectX (in one flavor or another).

I do have one other question though: As a linux user, do you usually find the need to swap out your GPU every 4 months?

Mod parent up +1 Sad (1)

Tei (520358) | more than 7 years ago | (#16518503)

Yes, indeed, but it's a sad state of the world. I hope OpenGL comes back and forces the Neverwinter Nights folks to develop NWN 3 with an OpenGL path again :I

a dash? (1)

racebit (959234) | more than 7 years ago | (#16517823)

The article is from the Inquirer, so a dash of salt might make this more palatable.
A dash? Hell, better use the whole damn shaker!

we need more than that (0)

Anonymous Coward | more than 7 years ago | (#16518177)

I don't think it's gonna happen (1)

tehSpork (1000190) | more than 7 years ago | (#16517905)

With nVidia making CPUs, and of course Intel and AMD/ATI (DAAMIT) making CPUs, how could nVidia expect to grab any market share? No offence to the nVidian engineers, but their product would have to be miles above the Intel/DAAMIT offerings in order to make most people even consider a system with an nVidia CPU. I think they would be better off if they attempted to enter into a contract with Intel for their CPU/GPU combo ideas, maybe Intel could get a few nice server chipsets out of the deal? :)

Pointless without documentation. (3, Insightful)

sudog (101964) | more than 7 years ago | (#16517939)

Why is everyone getting excited about this? Now we're going to have a CPU that's only partially documented, and we lose even more to closed-source blobs.

This isn't a good thing unless they also release documentation for it!

Just my preference . . . (3, Funny)

Orange Crush (934731) | more than 7 years ago | (#16518065)

The article is from the Inquirer, so a dash of salt might make this more palatable.

I prefer my articles with a dash of accuracy.

Niche market? (3, Insightful)

gstoddart (321705) | more than 7 years ago | (#16518117)

Will dedicated graphics cards become a niche product for enthusiasts and pros, like audio cards already largely have?

Haven't they already???

The vast majority of machines (at least, from my experience, which could not be broad enough) seem to be shipping with integrated graphics on the motherboard. Certainly, my last 3 computers have had this.

Granted, I buy on the trailing edge since I don't need gamer performance, but I kind of thought most consumer machines were no longer using a separate graphics card.

Anyone have any meaningful statistics as opposed to my purely anecdotal observations?

Re:Niche market? (2, Insightful)

gbjbaanb (229885) | more than 7 years ago | (#16518265)

It's hardly a niche market - every server wants onboard graphics, mainly because they don't need to be super powerful. I imagine this is similar - a low-powered CPU on the same chipset as the graphics chip (and no doubt the network chip) would make motherboards a bit cheaper, or give them capabilities that currently have to be managed with software installed as a driver.

I really doubt the CPU part is going to compete with the latest super-quadcore chips from AMD or Intel, so no-one will use it for a mainstream computer. Possibly it'd have a market for embedded products but I thought they were already well catered for.

Re:Niche market? (1)

gstoddart (321705) | more than 7 years ago | (#16518437)

I really doubt the CPU part is going to compete with the latest super-quadcore chips from AMD or Intel, so no-one will use it for a mainstream computer.

What about the latest quad-core chips is mainstream??? Those are specialty products if ever there was a specialty product. Except for high-end gamers and people doing really specialized tasks, who actually needs one of them? I bet I couldn't tax one with anything I do.

Nowadays, so many motherboards have video, lan, sound, IDE controller, possibly RAID, and USB all on them that I would think for most people that would suffice. Your average home user isn't going to overwork that.

A separate video card might not yet be niche, but I can foresee it becoming so.

Cheers

It's a logical extension of the NVidia NForce line (4, Interesting)

Animats (122034) | more than 7 years ago | (#16518147)

I've been expecting this for a while, ever since the transistor count of the GPU passed that of the CPU. Actually, I thought it would happen sooner. It's certainly time. Putting more transistors into a single CPU doesn't help any more, which is why we now have "multicore" machines. So it's time to put more of the computer into a single part.

NVidia already makes the nForce line, the "everything but the CPU" part, with graphics, Ethernet, disk interface, etc. If they stick a CPU in there, they have a whole computer.

Chip designers can license x86 implementations; they don't have to be redesigned from scratch. This isn't going to be a tough job for NVidia.

What we're headed for is the one-chip "value PC", the one that sits on every corporate desk. That's where the best price/performance is.

Re:It's a logical extension of the NVidia NForce l (1)

LesPaul75 (571752) | more than 7 years ago | (#16518479)

Well, the thing about a high-end CPU is that it's something like 80% custom logic, where a GPU is much more "standard cell" design. So the fact that NVIDIA is good at GPUs with lots of transistors doesn't mean that it will be easy for them to build a CPU. It will be very difficult to build something competitive with Intel and AMD. But if anyone out there right now has a shot at it, it's NVIDIA. Licensing of the x86 architecture is going to be a sticky issue.

Something that's interesting about this, if true, is that Intel might be the one playing catch-up. AMD will have ATI graphics, NVIDIA will have NVIDIA graphics, and Intel will have Intel graphics, which have always been pretty horrible.

Re:It's a logical extension of the NVidia NForce l (0)

Anonymous Coward | more than 7 years ago | (#16518751)

Wow, a system on a chip, never heard of those before...

Re:It's a logical extension of the NVidia NForce l (1)

ceoyoyo (59147) | more than 7 years ago | (#16518957)

The problem with gluing the GPU and CPU together is that it'll be a humongous chip, with low yields and therefore very expensive -- more expensive than a comparable CPU and GPU separately.

Re:It's a logical extension of the NVidia NForce l (1)

Joe The Dragon (967727) | more than 7 years ago | (#16519055)

And do you really want to take away system RAM for video RAM?
With Vista's high RAM use, taking 128+ MB just for video would make it even worse.

Re:It's a logical extension of the NVidia NForce l (1)

ceoyoyo (59147) | more than 7 years ago | (#16519179)

Presumably you'd still need separate graphics memory. We all know how well integrated memory works now.

Re:It's a logical extension of the NVidia NForce l (2, Interesting)

Doctor Faustus (127273) | more than 7 years ago | (#16519181)

do you really want to take away system ram for video ram?
If using larger chips means I can get 2GB combined RAM for the price of 1GB system RAM and 256MB video RAM? Absolutely.

Why multiprocessor units suddenly most efficient? (0)

Anonymous Coward | more than 7 years ago | (#16518277)

From someone who has mainly been involved in computers as a semi-competent user:

Why are multiprocessor units suddenly so popular, when, e.g., the Voodoo graphics cards failed? I remember them being ridiculed and ending up in the performance backwaters with their 2-4-8(-16) multiprocessor cards, but it seems that there are engineering reasons why multiple processors are now suddenly coming into favour, or?

Re:Why multiprocessor units suddenly most efficien (2, Informative)

SillyNickName4me (760022) | more than 7 years ago | (#16519207)

Why are multiprocessor units suddenly so popular, when, e.g., the Voodoo graphics cards failed? I remember them being ridiculed and ending up in the performance backwaters with their 2-4-8(-16) multiprocessor cards, but it seems that there are engineering reasons why multiple processors are now suddenly coming into favour, or?

Multiple processors (CPU, GPU or otherwise) are a way to add more 'cycles' based on current technology. This has the advantage of getting more out of your current designs and manufacturing technology, but comes at the cost of increased complexity in both the supporting hardware and the software.

Making a single-core implementation faster is always the more efficient way to add processing capacity, but it becomes very impractical beyond a certain point due to power and heat considerations (where exactly that point is depends on the state of technology at any given moment, but in the end it is limited by the physical size of molecules, at least as far as current technology goes).

So, multiple processors are not directly better from an engineering point of view; rather, they are a way to overcome the speed limits of current technology, provided you can deal with the extra complexity (moving much of the hardware complexity into the chip itself, as AMD and Intel are doing now, removes the burden from system-board designers, but the complexity itself is still there, especially on the software side of the picture).
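To put a rough number on that trade-off, here is a small C sketch of Amdahl's law; the 90% parallelizable fraction is an assumption chosen purely for illustration, but it shows how quickly multi-processor scaling falls away from linear.

    #include <stdio.h>

    /* Amdahl's law: speedup = 1 / ((1 - p) + p / n)
     * where p is the parallelizable fraction and n the number of processors. */
    int main(void) {
        const double p = 0.90;                 /* illustrative parallel fraction */
        const int cores[] = { 1, 2, 4, 8, 16 };
        for (int i = 0; i < 5; i++) {
            double speedup = 1.0 / ((1.0 - p) + p / cores[i]);
            printf("%2d cores -> %.2fx speedup\n", cores[i], speedup);
        }
        return 0;
    }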

With regards to 3dfx, it seems to me that:
1. They failed to manage the additional complexity
2. As their competition showed, the limits of technology at that time were much higher than what 3dfx managed, which indicates there were problems with either their design or manufacturing technology, or more likely, with both.

Re:Why multiprocessor units suddenly most efficien (1)

Doctor Faustus (127273) | more than 7 years ago | (#16519285)

it seems that there are engineering reasons why multiple processors are now suddenly coming into favour

Traditionally, when you move to a more compact production process, your parts are closer together, so it takes less time for the electric signals to move through them (propagate), so you can get faster clock speeds without really changing the design much. When Intel reached the 90nm process (or maybe the one before -- 130nm?), they were startled to discover that that effect just didn't work anymore. The chips were smaller, and thus cheaper to make, but they didn't work any faster than the old ones. AMD apparently ran into the same thing when they reached that size.

With their easy gains in clock speeds at a dead end, they went looking for something else to improve, and more CPUs was it.

Re:Why multiprocessor units suddenly most efficien (1)

mrkh (38362) | more than 7 years ago | (#16519293)

Mostly because it's a different point in the life cycle. Many of the easy performance gains in CPU's have now been taken - it wasn't so long ago that a 100MHz clock increase was sensational, but now it's expected. It's getting tough to eke out big performance wins in each generation now, so it's easier to move sideways to multiple cores. Parallelism is the future (witness the supercomputers).

I don't know the 3dfx history all that well, but I'd *guess* that their cards were getting hard to dig more performance from, and that they went two ways - one to work on a new core, one for multiprocessor. They could at least keep some market share if they got something out quickly. Unfortunately, the new core probably took too long, and the MP wasn't enough; there were still new things to try for single GPUs, and they ended up with an expensive, slow card.

CELL (0)

Anonymous Coward | more than 7 years ago | (#16518279)

This is happening because all the big vendors know that once consumers really start using stuff with the Cell processor, they won't need all the other stuff for most purposes. Not all, but most normal consumer desktops. It's sneaking in everywhere because it was a good idea and it works. You have multiple cores that do different stuff and talk to each other really, really fast, which makes building a computer a lot simpler and in theory a lot smaller. I know which one I would pick at the low end: a 300-buck computer with a normal, even dual-core, processor, a cheap amount of RAM and cheap video and sound, or something for the same price with the Cell processor where different cores do different stuff and four times the RAM. No contest. There will always be a market for higher-end and specialised cards and chips, but once you can whip out one thing that fits in one slot and takes the place of 3/4ths of the current mobo, you will be making some cash on that badboy. Pretty soon you'll be able to just plop your cellphone (which will act as your main unit) down on your desk in front of a wireless-enabled screen, keyboard and mouse and do basically most stuff you do today with a normal big machine, and it is all because of the multiple-core chips; they will make it possible and cheap.

Jen-Hsun Huang: A True Asskicker (2, Informative)

DeathPenguin (449875) | more than 7 years ago | (#16518345)

When I saw this headline I immediately thought of this article [wired.com] , an interview with Jen-Hsun Huang (CEO: nVidia) by Wired dated July '02. In it, the intention of overthrowing Intel is made quite clear, and ironically enough they even mention the speculation from a time when it was rumored that nVidia and AMD would merge.

It's actually a very good article for those interested in nVidia's history and Huang's mentality. Paul Otellini [intel.com] ought to be afraid. Very afraid.

Niche (1)

Etyenne (4915) | more than 7 years ago | (#16518347)

Will dedicated graphics cards become a niche product for enthusiasts and pros, like audio cards already largely have?

It already is.

Other uses? (1)

NerveGas (168686) | more than 7 years ago | (#16518423)

"what other prospects could a solution like this have?"

Duh. Gaming consoles. Add memory, a sound controller, and some sort of storage, and you're in business.

best of both worlds..? (1)

discojohnson (930368) | more than 7 years ago | (#16518429)

"back in the day" when my 80387 (7?..coprocessor) was sitting to the side with it's own instructions to complete, the commands never had to traverse up and down a much slower ISA bus. why not apply the same idea to upgrading your CPU and GPU separately through a slot/socket design? they can still share memory (not necessarily cache--sorry), but the agp/pcie bus is removed. communication between a single die g/cpu will still have some sort of bus, albeit a very tiny bus with tiny pathways; why not just make the pathways a bit bigger and build them into the mobo?

Here's the bottom line. (1)

smug_lisp_weenie (824771) | more than 7 years ago | (#16518441)

There are certain advantages to having (A) the GPU functionality integrated into the CPU (cost, certain performance aspects, others) and some to having it (B) in a separate GPU (more easily upgraded, more real estate, fewer heat problems)...

Every once in a while an unrelated tech innovation comes around that benefits (A) or (B) in some indirect fashion. It could be faster bus speeds, more sophisticated GPU instruction sets, etc etc etc. Doesn't really matter what they are, but they happen all the time and each may benefit (A) or (B) more than the other.

So currently we are at a time when (B) is the standard design - GPU and CPU remain separate. Whenever an innovation has come along that benefits (B) more, it has been integrated into the latest NVIDIA or ATI design. Whenever something that benefits (A) has come along, it has been ignored in the last few years... this probably includes things such as the ballooning costs of the GPUs, difficulty in getting GPUs into now-popular laptop form factors, texture latency, etc. etc. ...so finally the innovations for design (A) are reaching a critical mass... all those innovations that recently couldn't be pounced upon because of the separation of GPU/CPU are now making (A) look pretty damn nice again.

This CPU/GPU cycle happens every few years... We've been in an (A) cycle many times before- Remember what MMX was originally for? It was because we didn't have GPUs and couldn't do efficient block operations for video at the time... Remember the IBM PCjr? That was arguably another (A) cycle because IBM wanted to save money on circuitry/memory for the video subsystem, which is arguably just a primitive form of a GPU.

This GPU->CPU oscillation has been, and will continue to be, going on forever.

Nothing New In The World (1)

NekoXP (67564) | more than 7 years ago | (#16518491)

Integration is the key to cost reduction, performance improvement and power efficiency.

L2 cache used to be external. Then they integrated it when technology and performance allowed. L3 cache then became external while L2 was integrated; now you can buy processors with all this inside. Put the memory controller inside the CPU and you no longer need to spread out higher-than-core-voltage IO lines with nasty length requirements between the Northbridge and the CPU, and you can clock the bus faster. Put the Ethernet and so on inside the Northbridge and you no longer need discrete chips and buses for them, and can run them faster with tighter integration to a DMA controller and embedded RAM.

Integrate the graphics hardware into the CPU and you can have most of the high-bandwidth devices on the fastest possible bus.

Take Freescale's nearly-done 8641D Power Architecture processor. It's two G4s, four gigabit Ethernet controllers, USB 2, PCI Express and RapidIO, DMA, interrupt and memory controllers, I2C, serial. This chip is priced LOWER than equivalently specced Core 2 Duo combinations (CPU plus i975 MCH/ICH), and the performance... is about the same. However, board implementation will be much easier, and lower power. All you need for a system is to add your peripherals; a SATA chip, perhaps. I can't think of anything else that is missing besides graphics.

http://www.freescale.com/files/32bit/doc/fact_sheet/MPC8641DDLCRFS.pdf [freescale.com]

Eventually SATA will go in there, you can bet on that. Then graphics. Then one chip per board is a possibility. You thought NanoITX was small..?

Despite doubters, this seems a good idea to me (1)

zappepcs (820751) | more than 7 years ago | (#16518567)

If you could preload half the code you run on the CPU over to a CPU/GPU chip, and cut down bus use by >50% by utilizing a 'smart' GPU chip, this should enhance overall system performance by tons in graphics-intensive applications. Not to mention that it simplifies things for smaller systems (say, embedded or wireless) that require less space but still need the required functionality with high graphics performance. Just my thought.

Just what we needed (0)

Anonymous Coward | more than 7 years ago | (#16518631)

A proprietary CPU that you can't use with your free operating system.

Considering the recent security problems with their proprietary piece of shit driver I'd rather stay away from anything manufactured by NVIDIA.

What if you could put multiple chips like these in (1)

stunt_penguin (906223) | more than 7 years ago | (#16518733)

What if you could put multiple chips like these in one machine?

They'd probably be obsolete in three months, as opposed to one month ;)

Thin Clients (1)

Beefslaya (832030) | more than 7 years ago | (#16518735)

Thinner thin clients perhaps?

Nvidia is the odd-man out (1)

WoTG (610710) | more than 7 years ago | (#16518911)

Nvidia doesn't really have a choice in this matter. They need to at least explore the option of creating a CPU or be relegated to niche status.

It's pretty clear that both Intel and AMD are intent on swallowing up the lower 3/4 (hand-waving guess) of the GPU market over the next few years. And I believe that ATI will still be fighting it out at the top end over that remaining 25%.

That would leave Nvidia as a niche player in the uber high end, making GPUs strictly for graphics professionals and gamers with too much money. You can survive as a niche player in technology... but it's a tenuous life. Just ask Cray, SGI, Transmeta, Sun, Cisco, and hundreds of others*. Only the latter two still resemble themselves in their primes -- but even then, they've lost influence.

* Whatever happened to Gravis and the Gravis Ultrasound?

Cyrix Media GX, anyone? (1)

spedrosa (44674) | more than 7 years ago | (#16519047)

This integration is nothing new. "Cyrix did it!"

First, processors now have integrated memory controllers. Now this.

Seems like Cyrix was way ahead of its time.

Looking at this from the wrong perspective (0)

Anonymous Coward | more than 7 years ago | (#16519149)

What folding@home has shown us is that specialized hardware can be put to other novel uses, in some cases yielding huge performance gains. In this case, using graphics circuitry for the protein folding computations. All Nvidia proposes is to reduce latency by moving the GPU on-chip (and perhaps have the added benefit of locking you into a GPU of Nvidia's manufacture...).

However, there are many algorithms that are not optimally solved by throwing EITHER a CPU or a GPU at them. A more interesting idea would be on-chip programmable hardware a la FPGAs, in conjunction with a nice low-latency on-chip setup. Then, developers could push more interesting and advanced circuits onto CPUs, such as crypto engines or crackers. This would allow for readily incorporating new high-performance codecs for crypto, graphics, audio, protein folding, etc.

Intel is interested in something similar (1, Informative)

Anonymous Coward | more than 7 years ago | (#16519169)

A guy from Intel recently presented at a seminar at my university. He is working with a group that is pushing for a CPU architecture that looks kind of like a GPU, when you look at it at a very high level (and perhaps your eyes squinted just a bit).

The unofficial title of his talk was 'the war going on inside your PC'. He argued that the design of future CPUs and GPUs will eventually converge, with future architectures being comprised of a sea of small and efficient but tightly interconnected processors (no superscalar), and that it is basically a race to see who will get there first - the CPU manufacturers or the GPU manufacturers.

One of his main points was that with increased compiler effort, potentially many computational workloads can be made to run on the tiled architecture of simple processors, much in the way that the process of graphic rendering has been able to be shifted into the type of workload that can leverage the 'tiles of simple processors' found in a graphics card today, even though the nature of graphic rendering was originally better suited for execution in a typical CPU, where control dependent loads run efficiently. When the workload cannot be mapped to the 'tiles of simple processors' architecture, just slap a superscalar processor in the corner of your die (like nvidia seems to be doing) to take care of those small corner cases.

So, we will likely be seeing a lot more of this in the future. Especially now that AMD and ATI are together.

(More details on the abstract of the presentation I mentioned can be found here [utexas.edu] )
