
Next-Gen GPU Progress Slowing As It Aims for 20 nm and Beyond

Soulskill posted 1 year,2 days | from the have-you-tried-pinch-to-zoom dept.


JoshMST writes "Why are we in the middle of GPU-renaming hell? AMD may be releasing a new 28-nm Hawaii chip in the next few days, but it is still based on the same 28-nm process that the original HD 7970 debuted on nearly two years ago. Quick and easy (relative terms) process node transitions are probably a thing of the past. 20-nm lines applicable to large ASICs are not being opened until mid-2014. 'AMD and NVIDIA will have to do a lot of work to implement next generation features without breaking transistor budgets. They will have to do more with less, essentially. Either that or we will just have to deal with a much slower introduction of next generation parts.' It's amazing how far the graphics industry has come in the past 18 years, but the challenges ahead are greater than ever."


This is good news for me (4, Interesting)

All_One_Mind (945389) | 1 year,2 days | (#45218109)

As a Radeon 7970 early adopter, I am completely fine with this. It still more than kicks butt at any game I throw at it, and hopefully this slow pace will mean that I'll get another couple of good years out of my expensive purchase.

Re:This is good news for me (3, Informative)

Rockoon (1252108) | 1 year,2 days | (#45218781)

Hell, I recently picked up an A10-6800K APU and the integrated graphics are more than acceptable for the gaming I do at 1920x1080 (Team Fortress 2, Kerbal Space Program, Planet Explorers, Skyrim, ...), and it's not even with the fastest DDR3 the mobo supports.

Re:This is good news for me (0)

Anonymous Coward | 1 year,2 days | (#45220201)

Hell, I recently picked up an A10-6800K APU and the integrated graphics are more than acceptable for the gaming I do at 1920x1080 (Team Fortress 2, Kerbal Space Program, Planet Explorers, Skyrim, ...), and it's not even with the fastest DDR3 the mobo supports.

I was really really pleased with my old A8 once I started playing games with it.

Re:This is good news for me (1)

Ash Vince (602485) | 1 year,1 day | (#45221787)

Hell, I recently picked up an A10-6800K APU and the integrated graphics are more than acceptable for the gaming I do at 1920x1080 (Team Fortress 2, Kerbal Space Program, Planet Explorers, Skyrim, ...), and it's not even with the fastest DDR3 the mobo supports.

Can you turn on 16x Anisotropic Filtering and 8x Multi-Sample Anti-Aliasing in Skyrim? In my opinion it just looks a little ugly without it.

Re:This is good news for me (1)

readacc (3401189) | 1 year,1 day | (#45221813)

I would argue that on a typical laptop screen at 1920x1080, the limited physical dimensions make things like anti-aliasing less important than on, say, a proper desktop monitor, where the pixels are bigger and hence the effect of a lack of AA is more pronounced. As for anisotropic filtering, that's basically free for any graphics chipset these days, so I'd be surprised if there's any performance hit from it.

Having said that, Skyrim's one of those games which everyone seems to love and I dislike (then again I disliked Oblivion as well due to its paper-thin characters, pathetic story, lack of any actual time pressure or concern about the world and overall design), so maybe the Elder Scrolls series is just not my type anyway.

Re:This is good news for me (0)

Anonymous Coward | 1 year,1 day | (#45222033)

You're definitely not alone in your assessment of (recent--haven't tried anything before Morrowind) Bethesda work. I never could get into any of them; they just feel like huge environments full of mindless monsters and a couple of lifeless cardboard cutouts in the shape of people. Awfully dull.

Re:This is good news for me (1)

Rockoon (1252108) | 1 year,1 day | (#45228771)

Using the FRAPS 60-second benchmark, running the same area each time (so what's displayed isn't 100% consistent):

Ultra then switching AA/AF:
0AA/0AF - 23 / 63 / 32.497 (min / max / ave)
8AA/16AF - 16 / 32 / 19.283

High then switching AA/AF:
0AA/0AF - 27 / 57 / 40.867
8AA/16AF - 19 / 36 / 23.833

Medium then switching AA/AF:
0AA/0AF - 32 / 59 / 42.167
8AA/16AF - 20 / 34 / 26.933

Low then switching AA/AF:
0AA/16AF - 35 / 62 / 49.733
8AA/0AF - 22 / 41 / 30.417
8AA/16AF - 22 / 40 / 31.350

The other poster seems to be right that AF is virtually free. I could do a more detailed analysis of AA options in the Control Center, but meh... I don't play with AA anyway.
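
For anyone who wants to produce the same kind of min / max / ave summary from their own captures, here's a minimal sketch. It assumes a hypothetical log file with one frame time in milliseconds per line (FRAPS's real CSV layout differs, and FRAPS computes its min/max over one-second buckets rather than per frame, so the numbers won't line up exactly); adjust the parsing to whatever your tool writes out.

```python
import sys

def summarize(frame_times_ms):
    """Convert per-frame times (ms) to instantaneous FPS and report min/max/average."""
    fps = [1000.0 / ft for ft in frame_times_ms if ft > 0]
    return min(fps), max(fps), sum(fps) / len(fps)

if __name__ == "__main__":
    # Usage: python fps_summary.py frametimes.txt
    with open(sys.argv[1]) as f:
        times = [float(line) for line in f if line.strip()]
    lo, hi, avg = summarize(times)
    print(f"{lo:.0f} / {hi:.0f} / {avg:.3f}  (min / max / ave)")
```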

Re:This is good news for me (1)

Ash Vince (602485) | 1 year,21 hours | (#45232609)

Ultra then switching AA/AF:
0AA/0AF - 23 / 63 / 32.497 (min / max / ave)
8AA/16AF - 16 / 32 / 19.283

Wow, seems better than I expected, but ~19 fps is going to be noticeably ugly, especially if you have a huge fight going on with a few enemies, such as the Stormcloak / Empire battles.

It would be good if someone started throwing these things into laptops though; I just checked quickly and can't see any yet. Hopefully that will change soon.

Re:This is good news for me (1)

Rockoon (1252108) | 1 year,20 hours | (#45232959)

It's a 100W APU. It's the king of the current batch of CPUs with on-die GPUs with regard to GPU performance (Intel's best is around 50% of the performance), so you're unlikely to see it in a laptop anytime soon.

As for the 19 FPS on Ultra with 8x MSAA: AMD Catalyst can do "Morphological AA," which works on top of other AA methods (for example, 2x MSAA) and is supposed to be very efficient and frequently comparable to 8x SSAA all by itself... but meh, it's hard to quantify subjective stuff like "AA quality."

After all these tests, I have settled on 1920x1080, 0xAA / 16xAF with High settings (before, I was running 0xAF). I haven't benchmarked that same area I was running for the tests, but I always have FRAPS running and it's nearly always 30+ FPS regardless of what I am doing. It seems like the shadows are the performance enemy with these settings (indoor areas always seem to be 40+).

Re:This is good news for me (1)

Ash Vince (602485) | 1 year,14 hours | (#45238177)

It's a 100W APU. It's the king of the current batch of CPUs with on-die GPUs with regard to GPU performance (Intel's best is around 50% of the performance), so you're unlikely to see it in a laptop anytime soon.

I believe you, but it's a pity. In the past I have bought laptops with discrete graphics, even though they got crappy battery life, just because I wanted to game on them on trains while travelling around here in the UK, where you get a power socket at most of the seats. I only skimped on that with my current laptop because I do far less travelling now.

Re:This is good news for me (1)

dstyle5 (702493) | 1 year,1 day | (#45224057)

It will be interesting to see what becomes of Mantle; it does have the potential to greatly increase the performance of games that support it and extend the life of ATI cards. As the owner of a 7950, I'm looking forward to seeing how much of a performance boost Mantle gives vs. DirectX. I believe Battlefield 4, which I plan to purchase, will be one of the first Mantle-supported games when they patch in Mantle support in December; I can't wait to try it out.

http://www.anandtech.com/show/7371/understanding-amds-mantle-a-lowlevel-graphics-api-for-gcn [anandtech.com]

Intel (3, Informative)

Anonymous Coward | 1 year,2 days | (#45218187)

Meanwhile, Intel is about to give us 15-core Ivy Bridge Xeons [wikipedia.org]. A year from now we'll have at least that many Haswell-core Xeons, given that they have the same 22nm feature size.

How many cores will 14nm [slashdot.org] Broadwell parts give us (once they sort out the yield problems)? You can expect to see 4-5 billion transistor CPUs in the next few years.

Yay for Moore's law.

Re:Intel (3, Insightful)

laffer1 (701823) | 1 year,2 days | (#45218313)

I actually wonder if Intel could catch up a bit on GPU performance with the integrated graphics on the newer chips. They're not process blocked. Same thing with AMD on the A series chips.

Re:Intel (2)

Luckyo (1726890) | 1 year,2 days | (#45220873)

Unlikely, simply due to memory bus limitations alone. Then there's the whole driver elephant in the room.

Frankly, it's unlikely that Intel even wants in on that market. The costs of entering the GPU market to the point where they could threaten mid-range and above discrete GPUs are astronomical, and it might require cross-licensing with Nvidia to the point where Nvidia would want x86 cross-licensing - something Intel will never do.

What is likely is that Intel and AMD will completely demolish the current low-end GPU market with their embedded solutions, while Nvidia/AMD discrete solutions are pushed up in speed and quality.

Re:Intel (1)

Narishma (822073) | 1 year,1 day | (#45222319)

They've already solved the memory bandwidth issue with the eDRAM in the Iris Pro Haswell parts.

Re:Intel (0)

Anonymous Coward | 1 year,1 day | (#45224083)

At the current feature size, that eDRAM is a bit smallish. At the next feature size, they can likely increase the size of that cache by 4x and, with some minor driver work, really do a number on even mid-level GPUs.

Re:Intel (1)

robthebloke (1308483) | 1 year,2 days | (#45221043)

They're not process blocked.

I imagine they're blocked by patents.

Re:Intel (0)

Osgeld (1900440) | 1 year,2 days | (#45218919)

and yet they still make a shitty gpu

Re:Intel (1)

Anonymous Coward | 1 year,2 days | (#45219645)

and yet they still make a shitty gpu

Well, let me be the first to welcome you to 2013. Intel no longer makes, ahem, shitty GPUs. They now make decent GPUs that play most games at low-medium quality (at high resolution).

You want to talk about shitty GPUs, look to the mobile space.

Re:Intel (3, Informative)

wmac1 (2478314) | 1 year,2 days | (#45220319)

Exactly. The new Intel GPUs (those on Haswell) do better than entry-level NVidia and ATI dedicated graphics.

Re:Intel (1)

game kid (805301) | 1 year,2 days | (#45219523)

I'd happily throw money at Intel graphics once (a) they actually do catch up to the Big Two (they are currently not even anywhere near anything close to something that so much as resembles half their performance or graphical feature set), (b) I can afford them, and (c) they stop considering soldering their new chips to the board or making more chips with "Windows 8-only" features.

Buying someone's chip is taken by that someone as support for their policies. Those in (c) are two that I hope I never have to support, and I don't want to pay a "competitor" that's just aiming to out-monopolize the monopoly.

Re:Intel (1)

Aereus (1042228) | 1 year,2 days | (#45221325)

I realize this is /. so many may actually be doing workloads that require that sort of multi-threading. But the whole "more cores!" thing is completely lost on gaming and general computing. Games still primarily do their workload on 1-2 cores.

Re:Intel (1)

jones_supa (887896) | 1 year,1 day | (#45221977)

These days games primarily do their workload on 2-4 cores.

There go my dreams of a 2000 fps game (1, Troll)

JoeyRox (2711699) | 1 year,2 days | (#45218199)

Played across 24 monitors. Who really needs this crap?

Re:There go my dreams of a 2000 fps game (2)

Jeremy Erwin (2054) | 1 year,2 days | (#45218247)

There have been some graphics advances since the days of Quake 3.

Re:There go my dreams of a 2000 fps game (1)

jones_supa (887896) | 1 year,1 day | (#45222059)

Q3A (which still gives an excellent deathmatch experience) had some ahead-of-its-time features, such as multi-core support and ATI TruForm tessellation. :)

Geometry, shaders, physics (3, Informative)

tepples (727027) | 1 year,2 days | (#45218291)

More GPU power translates into more detailed geometry and shaders as well as more GPGPU power to calculate more detailed physics.

Re:Geometry, shaders, physics (1, Insightful)

Dunbal (464142) | 1 year,2 days | (#45219707)

At some point the limiting factor becomes the ability of the software designers to create such a complex graphics engine rather than the video card itself.

After realism comes stylized graphics (3, Insightful)

tepples (727027) | 1 year,2 days | (#45219733)

True. And once graphical realism in a human-created game universe reaches its practical limit, game developers will have to once again experiment with stylized graphics. This parallels painting, which progressed to impressionism, cubism, and abstract expressionism.

Re:After realism comes stylized graphics (1)

RivenAleem (1590553) | 1 year,2 days | (#45221051)

Or hell, perhaps they'll make games not propped up with graphics!

Re:After realism comes stylized graphics (0)

Anonymous Coward | 1 year,2 days | (#45221261)

And we're very nearly there already, to judge by the quality that Crysis 3/BF4 are putting out. "Practical" realism is already here.

Re:Geometry, shaders, physics (1)

Jeremy Erwin (2054) | 1 year,2 days | (#45219951)

But once the engine has been created, the artists still have to take advantage of it.

Re:Geometry, shaders, physics (1)

jones_supa (887896) | 1 year,1 day | (#45222109)

At some point the limiting factor becomes the ability of the software designers to create such a complex graphics engine rather than the video card itself.

I think managing the complexity still goes a long way. You just break the engine into subproblems and assign them to different teams and people. The real caveat is that you will require more and more programmers to put it all together. Making something like the Source 2 engine already involves planning out huge frameworks and foundations, and it looks a bit like building a ship in a shipyard, at least when looking at the magnitude of the project.

Re:Geometry, shaders, physics (1)

tepples (727027) | 1 year,1 day | (#45222253)

You just break the engine into subproblems and assign them to different teams and people.

At some point, hiring enough people to solve all the subproblems costs more than your investors are willing to pay. Not all games can sell a billion dollars like Grand Theft Auto V. Indies have had to deal with this for decades.

Artificial Intelligence (1)

mangu (126918) | 1 year,1 day | (#45221851)

The most powerful chips out there are still far below the capacity of a human brain.

I don't just want to play games; I want to retire and leave my computer to do my work for me.

At this point, we already have better software models [deeplearning.net] for the brain than hardware to run them.

the point of diminishing returns? (1)

SpaceManFlip (2720507) | 1 year,2 days | (#45218219)

Having struggled through years of gaming on rigs with various GPUs, I have to wonder where it will hit the point that nobody needs any faster cards.
I started out gaming on computers with no GPU, and when I got one with a Rage Pro 4MB it was awesome. Then I got a Voodoo card from 3DFX with a whopping 8MB and it was more awesomer. Now you can get whatever that will do whatever for however many dollars.

I really don't see game programming keeping up with GPU power. I'm at least two GeForce generations behind the latest series (560 Ti) and I can play any game at 1200p resolution with a very decent framerate. Yes, I beta-tested Battlefield 4. How much more is enough? I don't want them to stop trying, but somebody needs to ask where it reaches the point of diminishing returns. They could focus on streamlining and cheapening the "good enough" lines...

Re:the point of diminishing returns? (1)

Anonymous Coward | 1 year,2 days | (#45218287)

I think diminishing returns are still a ways off. Even if they can't crank out faster frame rates, they can still continue the quest for smaller packages, if anything for efficiency and power savings. Heck, when some video cards require their own separate power supply, there is definitely room for improvement.

Re:the point of diminishing returns? (3, Insightful)

kesuki (321456) | 1 year,2 days | (#45218337)

The real killer app I've heard of for gaming rigs is making real-time special effects for movies and TV. Other than that, there are news departments where thin clients can take advantage of a GPU-assisted server to run as many displays as the hardware can handle.

Then there is Wall Street, where a CUDA-assisted computer can model market dynamics in real time; there are a lot of superfast computers on stock exchanges. So there you go: three reasons for GPUs to go as far as technology will allow.

Re:the point of diminishing returns? (1)

g00ey (1494205) | 1 year,1 day | (#45221749)

Not to mention all of the different research projects undertaken by students. I myself have indulged in more complex computer simulations using software such as MATLAB, simulations that took a few days of computing to complete on each run. If I had better hardware I would definitely use even more advanced models and conduct more simulations. So, there you have a fourth reason :)

Re:the point of diminishing returns? (1)

khellendros1984 (792761) | 1 year,2 days | (#45218457)

I'm at least two GeForce generations behind the latest series (560 Ti) and I can play any game at 1200p resolution with a very decent framerate.

I'm further behind than that (far enough to have to bump the resolution down to get anything playable on a game from the last couple years). In the past, the big benefit would've been higher API support for more effects, along with a general performance boost. GF 5xx and 7xx cards seem to support the same APIs, so I'd guess that with the current high-end cards, you've got gamers trying to match their monitor refresh rate while using higher-res monitors in a multi-monitor configuration. If you really want to find something to throw more horsepower at, you can find it.

Re:the point of diminishing returns? (0)

Anonymous Coward | 1 year,2 days | (#45218665)

How much more is enough? I don't want them to stop trying, but somebody needs to ask where it reaches the point of diminishing returns. They could focus on streamlining and cheapening the "good enough" lines...

Cheapening? Let's try and apply that theory elsewhere. BMW labeled themselves as the ultimate driving machine decades ago. When the hell are they going to focus on "streamlining" their absurd price tag?

Yeah, for the same damn reason ATI and Nvidia wouldn't do it either.

Re:the point of diminishing returns? (2)

UnknownSoldier (67820) | 1 year,2 days | (#45219173)

> How much more is enough?

Uhm, never.

I have a GTX Titan and it is still TOO SLOW: I want to run my games at 100 fps @ 2560 x 1440. I prefer 120 Hz on a single monitor using LightBoost. Tomb Raider 2013 dips down below 60 fps which makes me mad.

And before you say "What?", I started out with the Apple ][ with 280x192; even ran glQuake at 512x384 res on a CRT to guarantee 60 fps, so I am very thankful for the progress of GPUs. ;-)

But yeah, my other 560 Ti w/ 448 cores running 1080p @ 60 Hz is certainly "good enough", but we are QUITE a ways away from the end of GPUs for the foreseeable future. There is still a demand for real-time CG photorealism.

Re:the point of diminishing returns? (1)

g00ey (1494205) | 1 year,1 day | (#45221769)

But is using such a high resolution really necessary? I've looked at those 4K BF4 video clips and, to be honest, it looks pretty terrible. I could barely see the city and the buildings in the game level; it looked more like a bunch of square boxes with textures painted on top of them. When using a lower resolution I could more easily suspend my disbelief: the coarseness of the pixels makes the primitive polygons look less... boxy. Perhaps GPU hardware a few orders of magnitude faster is required so that there are enough hardware resources to render the extra detail needed to make 4K-rendered 3D environments in real time look fairly realistic again.

Re:the point of diminishing returns? (1)

UnknownSoldier (67820) | 1 year,15 hours | (#45237435)

At 4K you might be running into the "Uncanny Valley" symptom.
http://en.wikipedia.org/wiki/Uncanny_valley [wikipedia.org]

I concur 100% that 4K doesn't make ANY sense due to SMALL screen sizes. In order to have ~300 dpi @ 4K (3840 x 2160) your screen size would have to be 14.66 inches.
http://isthisretina.com/ [isthisretina.com]

I want 300 dpi at 60 inches.

4K only starts to make sense when you want to scale up the picture to be wall size. Let's take an average (diagonal) 60" plasma at 1080p, and the viewer sits at the recommended THX viewing angle of 36 degrees. (The recommended THX viewing distance is 6.7 feet)

  You would need to sit 6'8" (or 80 inches or 6.7 feet) to have a 36 degree viewing angle; the 36 DPI becomes retina at 94 inches, or 7'10". Most people don't sit that close, usually 8+ feet for something that size -- nowhere anything close to the recommended 6.7 feet.

With a 4K TV, while it has doubled the DPI at 73 dpi, it becomes retina at 47 inches. All 4K means is that you can sit closer and still see the same detail.

So if you sit closer then 7'10" then 4K would be an improvement; if your sit at the recommended THX viewing distance then it would be too; I seriously doubt most people pay THAT much attention to a "proper" visual setup.

References:
http://myhometheater.homestead.com/viewingdistancecalculator.html [homestead.com]
http://www.cultofmac.com/173702/why-retina-isnt-enough-feature/ [cultofmac.com]
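
If you want to sanity-check those figures, here's a minimal sketch of the arithmetic. It assumes the usual 1-arcminute-per-pixel "retina" criterion that the isthisretina-style calculators use, so expect small rounding differences from the numbers quoted above.

```python
import math

def screen_ppi(h_px, v_px, diagonal_in):
    """Pixels per inch for a given resolution and diagonal size."""
    return math.hypot(h_px, v_px) / diagonal_in

def retina_distance_in(ppi):
    """Viewing distance (inches) beyond which one pixel subtends less than 1 arcminute."""
    return (1.0 / ppi) / math.tan(math.radians(1.0 / 60.0))

for name, w, h in [("1080p", 1920, 1080), ("4K", 3840, 2160)]:
    ppi = screen_ppi(w, h, 60)   # 60" diagonal TV
    print(f'60" {name}: {ppi:.0f} ppi, becomes "retina" at ~{retina_distance_in(ppi):.0f} inches')

# Diagonal needed to hit ~300 ppi at 3840x2160:
print(f'4K at 300 ppi needs a ~{math.hypot(3840, 2160) / 300:.1f}" diagonal')
```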

Re:the point of diminishing returns? (1)

g00ey (1494205) | about a year ago | (#45244241)

I'm not arguing against 4K resolution per se. Personally, I would really like to have 4K, 8K or even higher resolution. For tasks such as word processing (or any task that involves working with text or letters) and getting desktop real estate, it's a case of the more the merrier, at least at the screen resolutions available for current desktop or laptop PCs. I totally agree with what Linus Torvalds said about this [slashdot.org] a while ago.

For FPS gaming, on the other hand, I agree that 4K is overkill, at least with the polygon capability of current-gen GPUs. I think that when dealing with photographic content, a resolution beyond 1080p (and perhaps 720p) is probably not very beneficial to the sense of immersion. But then again, I have yet to see a truly highly detailed video clip at 4K; perhaps that would be a mind-blowing experience. When watching IMAX in theatres it is indeed a more captivating experience than regular 35mm footage. But the experience will be greater when it comes from, say, outdoor shots with a nice view and a lot of detail from trees and foliage than from camera shots taken inside a room with much less detail.

I find the "Uncanny Valley analogy" to be very inappropriate here because firstly "uncanny valley" applies to human-like robots vs humans which is a very different story, some aspects of why this is different is discussed e.g. here [youtube.com] , and secondly, the higher resolution makes the fps games look less realistic than at lower resolution. The high resolution reveals how "empty" the artificial world really is, something that could be concealed behind a blur or a coarse matrix of pixels which is now floating up to the surface.

Re:the point of diminishing returns? (2)

wmac1 (2478314) | 1 year,2 days | (#45220399)

The more-DPI craze has just started. We are going to have higher-DPI monitors, and graphics cards will compete to bring them the same speed we get at lower resolutions. The unfortunate thing is that Moore's law is at its practical limits (for now), so more capable CPUs might become more expensive and consume more power.

I personally hate the noise of those fans and the heat coming from under my table. I don't do games but I use the GTS-450 (joke? ha?) for scientific computing.

Re:the point of diminishing returns? (1)

pepty (1976012) | 1 year,2 days | (#45220487)

How much more is enough? I don't want them to stop trying, but somebody needs to ask where it reaches the point of diminishing returns.

Right after they hand me a 2k^3-resolution holographic (360-degree viewing angle) display and a GPU that can power it at 60 frames, er, cubes per second.

Then they can have the weekend off.

Re:the point of diminishing returns? (1)

Luckyo (1726890) | 1 year,2 days | (#45220887)

Not any time soon. We're still massively constrained on the GPU front even with current graphics. And people making games want things like ray tracing, which means that GPUs will have to make an order-of-magnitude jump before they even begin thinking about saturation.

The reason the 560 Ti (the card I'm using as well at the moment) is still functional is that most games are made for consoles, or at least with consoles in mind. And consoles are ancient.
The requirements of PC-only / PC-optimized / next-gen console games pretty much kick the card's ass at the moment. The same goes for trying to play at a stable 60fps or at higher resolutions.

Re:the point of diminishing returns? (1)

ciderbrew (1860166) | 1 year,2 days | (#45221369)

We are a long way off from real-time photorealistic graphics. Think holodeck level, or The Matrix. Add in the extra work to create the fantasy worlds and you've got the maximum level.
But I think most people want the gameplay and story (pant-shittingly scary System Shock 2) that have nothing to do with the level of graphics we can achieve.

Re:the point of diminishing returns? (1)

Ash Vince (602485) | 1 year,1 day | (#45221815)

Having struggled through years of gaming on rigs with various GPUs, I have to wonder where it will hit the point that nobody needs any faster cards.

I started out gaming on computers with no GPU, and when I got one with a Rage Pro 4MB it was awesome. Then I got a Voodoo card from 3DFX with a whopping 8MB and it was more awesomer. Now you can get whatever that will do whatever for however many dollars.

I really don't see game programming keeping up with GPU power. I'm at least two GeForce generations behind the latest series (560 Ti) and I can play any game at 1200p resolution with a very decent framerate. Yes, I beta-tested Battlefield 4.
How much more is enough? I don't want them to stop trying, but somebody needs to ask where it reaches the point of diminishing returns. They could focus on streamlining and cheapening the "good enough" lines...

I just upgraded from a GTX 480 to a GTX 780, and the big difference it made for me is the ability to turn on proper 8x anti-aliasing and 16x anisotropic filtering at 1920x1200. It did not make a huge difference, but it made enough to be noticeable. You might be able to make do with a cheaper card and still play most modern games, but getting something decent does give you nicer graphics for your money at the resolution you quoted.

Re:the point of diminishing returns? (1)

GauteL (29207) | 1 year,1 day | (#45222179)

When they can make graphics which are indistinguishable from "real life", at a resolution where you can no longer see the pixels, and which behave with physics resembling real life, then we can start talking.

Currently we are way off with respect to shapes, lighting, textures, resolution, physics, animation... basically all of it.

Shapes never look fully "curved" due to insufficient polygon counts; when polygons start becoming smaller than pixels, maybe this will be OK. We use rasterisation for lighting, which is far from sufficiently realistic, and global illumination and shadows are still a problem. Textures are still too small and, more importantly, not "3D" enough; ideally we would model light reflecting off and passing through actual material structures. Resolution is still an issue: we're not doing real-time 3D rendering on "retina" displays of decent sizes. We're waaaaay off with regard to physics; water, smoke, fire and explosions are never realistic enough (ideally we'd use CFD for this). We are also terribly far away when it comes to destructible environments. We are many orders of magnitude off "good enough" on almost all counts, and just solving one of these points would require an order of magnitude better hardware.

The reason you can play any game at 1200p resolution is a stagnation in graphics development in games, not least because consoles, updated every 8 years, dominate.

Last 18 years? (1)

RightSaidFred99 (874576) | 1 year,2 days | (#45218251)

Of course it's come far in the last 18 years. The last 2 years? Not so much. In fact, GPU advancement has been _pathetically_ slow.

The Xbox One and PS4, for example, will be good at 1080p but ultimately only a few times faster than the previous generation consoles. Same thing with PC graphics cards. Good luck gaming on a high resolution monitor spending less than $500. Even Titan and SLI are barely sufficient for good 4K gaming.

$4K monitor (2)

tepples (727027) | 1 year,2 days | (#45218373)

The Xbox One and PS4, for example, will be good at 1080p but ultimately only a few times faster than the previous generation consoles.

I believe the same sort of slowdown happened at the end of the fourth generation. The big improvements of the Genesis over the Master System were the second background layer, somewhat larger sprites, and the 68000 CPU.

Good luck gaming on a high resolution monitor spending less than $500.

Good luck buying a good 4K monitor for substantially less than $4K.

Re:$4K monitor (1)

RightSaidFred99 (874576) | 1 year,2 days | (#45218687)

Nah. You could game "OK" on the 39" Seiki which is pretty cheap. Once they start making HDMI 2.0 models you'll see 30" 4K monitors for $700 by early next year.

Re:$4K monitor (-1)

Anonymous Coward | 1 year,2 days | (#45220171)

Way to trivialize the difference between the systems.... Fuck! The Genesis was FAR superior to the SMS in the GFX and sound departments. Nothing to do with a fucking background layer. You're quite the dumbass trying to sound like you know about stuff you obviously know nothing about. :S Fuck off

Re:Last 18 years? (2)

Anaerin (905998) | 1 year,2 days | (#45218591)

The thing with graphics improvements is that GPUs are getting better on a linear scale, but perceived quality scales logarithmically: going from 100 polys to 200 polys looks like a huge leap, but going from 10,000 polys to 10,100 polys doesn't. I personally think the next big thing will be on-card ray tracing (NVidia has already demonstrated some). Massively parallel ray-tracing tasks are like candy for GPGPUs, but there is a lot of investment in rasterising at the moment, so that remains the go-to method.
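
To see why ray tracing is "like candy for GPGPUs": every pixel can be computed independently of every other pixel. Here's a minimal sketch in plain Python (the camera, sphere and image size are arbitrary choices of mine); on a real GPU the per-pixel function would simply become one thread per pixel.

```python
import math

WIDTH, HEIGHT = 48, 16
SPHERE_CENTER, SPHERE_RADIUS = (0.0, 0.0, 3.0), 1.0

def trace(px, py):
    """Shade one pixel: cast a ray from the origin through an image plane at
    z = 1 and intersect it with a single sphere. No other pixel is involved."""
    dx = (px + 0.5) / WIDTH * 2.0 - 1.0
    dy = 1.0 - (py + 0.5) / HEIGHT * 2.0
    dz = 1.0
    inv = 1.0 / math.sqrt(dx * dx + dy * dy + dz * dz)
    d = (dx * inv, dy * inv, dz * inv)              # unit ray direction

    # Ray/sphere intersection: solve t^2 - 2*t*(d.c) + |c|^2 - r^2 = 0 for t.
    cx, cy, cz = SPHERE_CENTER
    b = -2.0 * (d[0] * cx + d[1] * cy + d[2] * cz)
    c0 = cx * cx + cy * cy + cz * cz - SPHERE_RADIUS ** 2
    disc = b * b - 4.0 * c0
    if disc < 0:
        return " "                                  # miss: background
    t = (-b - math.sqrt(disc)) / 2.0
    return "#" if t > 0 else " "                    # hit: crude flat "shading"

# Each (px, py) is independent; on a GPU every call would be its own thread.
for py in range(HEIGHT):
    print("".join(trace(px, py) for px in range(WIDTH)))
```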

Holographic Display (1)

dbarron (286) | 1 year,2 days | (#45218283)

I'm ready for the next (or next-next) gen display: a holographic display hovering in midair, preferably with sensors that can detect my interactions with it. I wonder how far out THAT is now?

I want better 2D performance (4, Insightful)

Overzeetop (214511) | 1 year,2 days | (#45218311)

No, seriously. I have yet to find a graphics card that will accelerate 2D line or bitmapped drawings, such as are found in PDF containers. It isn't memory-bound, as you can easily throw enough RAM to hold the base file, and it shouldn't be buffer-bound. And yet it still takes seconds per page to render an architectural print on screen. That may seem trivial, but to render real-time thumbnails of a 200 page 30x42" set of drawings becomes non-trivial.

If you can render an entire screen in 30ms, why does it take 6000ms to render a simple bitmap at the same resolution?

(the answer is, of course, because almost nobody buys a card for 2D rendering speed - but that makes it no less frustrating)

Re:I want better 2D performance (1)

SpaceManFlip (2720507) | 1 year,2 days | (#45218377)

That stuff is built-in on OSX, check it out.

Re:I want better 2D performance (1)

Jeremy Erwin (2054) | 1 year,2 days | (#45219015)

Does OSX 10.9 still choke on PDFs with embedded JPEG2000 graphics?

Re:I want better 2D performance (1)

Overzeetop (214511) | 1 year,1 day | (#45221889)

I hope it's not the same engine that is used in iOS, because the decoding in the iDevices makes windows software decoding look like greased lightning. The only thing I fear more than opening a complex 200 sheet PDF on my desktop is having to open just one of those sheets on my iPad. At least on the desktop the machine can hold a sheet in memory. The iDevices have to redraw the entire page every time I pan. I even have two versions of all my bitmap/scanned sheet music - one for quality, and one that looks like it was faxed so that the page-turn time on my iPad is under 3 seconds and I don't end up missing 1-2 measures at every page turn.

Re:I want better 2D performance (4, Informative)

PhrostyMcByte (589271) | 1 year,2 days | (#45218381)

All of the 3D rendering APIs are capable of proper, full-featured 2D rendering. The same hardware accelerates both just as well. The problem is that most apps are just not using it and/or that they are CPU bound for other reasons. PDFs, for instance, are rather complex to decode.

Not true (2)

slew (2918) | 1 year,2 days | (#45219287)

All of the 3D rendering APIs are capable of proper, full-featured 2D rendering. The same hardware accelerates both just as well. The problem is that most apps are just not using it and/or that they are CPU bound for other reasons. PDFs, for instance, are rather complex to decode.

Not totally true. Stroke/path/fill rasterization work is not supported by current 3D rendering APIs (and thus not accelerated by 3d hardware). Right now the stroke/path/fill rasterization is done on the CPU and merely 2D-blitted to the frame buffer by the GPU. The CPU could of course attempt to convert the stroke/path into triangles and then use the GPU to rasterize those triangles (with some level of efficiency), but that's a far cry from "proper, full-featured 2D".

Fonts are special-cased in that glyphs are cached, but small-font rasterization isn't generally possible to do with triangle rasterization (because of the glyph hints).

Since SW doesn't even attempt to use HW for modern 2D operations, it will likely be a long time before HW will support this kind of stuff...

Re:Not true (4, Informative)

forkazoo (138186) | 1 year,2 days | (#45219481)

Not totally true. Stroke/path/fill rasterization work is not supported by current 3D rendering APIs (and thus not accelerated by 3d hardware). Right now the stroke/path/fill rasterization is done on the CPU and merely 2D-blitted to the frame buffer by the GPU. The CPU could of course attempt to convert the stroke/path into triangles and then use the GPU to rasterize those triangles (with some level of efficiency), but that's a far cry from "proper, full-featured 2D".

Fonts are special-cased in that glyphs are cached, but small-font rasterization isn't generally possible to do with triangle rasterization (because of the glyph hints).

Since SW doesn't even attempt to use HW for modern 2D operations, it will likely be a long time before HW will support this kind of stuff...

A - anything that you can't do by tessellating to triangles could be done with OpenCL or CUDA. You could, for example, launch OpenCL kernels where each instance rasterizes one stroke and composite the results, or something similar, and exploit the parallelism of the GPU. But it would be inconvenient to write, especially since most PDF viewers don't even bother with effective parallelism in their software rasterizers.

B - you can do anything by tessellating to triangles.
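
As a concrete illustration of point B, here's a minimal sketch (plain Python, no GPU API; the curve, stroke width and sample count are arbitrary choices of mine) that turns a stroked quadratic Bezier into a triangle strip that any 3D API could then rasterize.

```python
import math

def quad_bezier(p0, p1, p2, t):
    """Point on a quadratic Bezier curve at parameter t."""
    u = 1.0 - t
    return (u*u*p0[0] + 2*u*t*p1[0] + t*t*p2[0],
            u*u*p0[1] + 2*u*t*p1[1] + t*t*p2[1])

def stroke_to_triangle_strip(p0, p1, p2, width, segments=32):
    """Sample the curve, then offset each sample by +/- half the stroke width
    along the local normal, producing triangle-strip vertices."""
    half = width / 2.0
    pts = [quad_bezier(p0, p1, p2, i / segments) for i in range(segments + 1)]
    strip = []
    for i, (x, y) in enumerate(pts):
        # Tangent estimated from neighbouring samples (one-sided at the ends).
        ax, ay = pts[max(i - 1, 0)]
        bx, by = pts[min(i + 1, segments)]
        tx, ty = bx - ax, by - ay
        length = math.hypot(tx, ty) or 1.0
        nx, ny = -ty / length, tx / length          # unit normal
        strip.append((x + nx * half, y + ny * half))
        strip.append((x - nx * half, y - ny * half))
    return strip  # every consecutive triple of vertices forms one triangle

# Example: a 4-unit-wide stroke along a simple arc.
verts = stroke_to_triangle_strip((0, 0), (50, 80), (100, 0), width=4)
print(len(verts), "vertices ->", len(verts) - 2, "triangles")
```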

Re:Not true (1)

slew (2918) | 1 year,2 days | (#45220539)

B. You can't run the font hinting virtual machine by tessellating triangles.

A. Running the font hinting engine in CUDA or OpenCL would likely be an exercise in deceleration, not acceleration.

Re:Not true (1)

PhrostyMcByte (589271) | 1 year,2 days | (#45220457)

All the tessellation needed is possible in shaders. The GPU's job is to provide primitives; I don't see any reason to expect it to dedicate hardware to every little step -- we got rid of fixed-function ages ago (mostly). Windows' Direct2D and DirectWrite are examples of high-quality 2D and font rendering done on the GPU -- they're just wrappers for Direct3D. There is also OpenVG, but as far as I know there are no desktop drivers available for it.

Re:Not true (0)

Anonymous Coward | 1 year,1 day | (#45221419)

Just one problem with D2D and DW - they're not very fast (compared to old-fashioned hardware-accelerated font rendering). Sure, you can show half a page of text in say 20 ms, but it's nowhere near 'fast'.

Re:Not true (1)

Gibgezr (2025238) | 1 year,1 day | (#45222715)

>Not totally true. Stroke/path/fill rasterization work is not supported by current 3D rendering APIs (and thus not accelerated by 3d hardware).

Incorrect. It's there, developers just aren't using it for some reason.
https://developer.nvidia.com/nv-path-rendering [nvidia.com]

Re:Not true (1)

tlhIngan (30335) | 1 year,1 day | (#45223947)

Since SW doesn't even attempt to use HW for modern 2D operations, it will likely be a long time before HW will support this kind of stuff...

Most graphics cards have supported 2D acceleration since the 90s - it's something that's been built into Windows for ages. Though given how fast it is to draw primitives like lines and such, it's typically far faster to do it in software than to go through the graphics card to render it to a framebuffer.

For PDFs, though, what happens is that Adobe Reader generally uses its own software-based rendering engine rather than the OS-accelerated GDI primitives. This is because the OS versions are dependent on Windows and window handles, and a complex drawing consumes those in a hurry. (Using something like Foxit Reader, which does use GDI, can result in resource exhaustion, which means you end up with missing lines and other objects - not fun. Though the ones you do see are drawn very quickly.)

Of course, Microsoft deprecated DirectDraw (a way to accomplish 2D accelerated drawing) sometime in the DirectX 6 era or so...

Re:Not true (0)

Anonymous Coward | 1 year,1 day | (#45229195)

What you really want is hardware support for OpenVG.
AFAIK, there is no direct support for it on desktops - only on embedded platforms.
For 2D you'd need a 3rd-party library like AmanithVG was (is?), and software that uses it.

It's a shame - I'd also like to see a little more support for vector graphics. It's unlikely to happen, though, as the conversion from drawing primitives to basic raster operations can be quite tricky in some cases. The industry is loath to spend silicon on it when they could use it to pad their 3DMark score.

Re:I want better 2D performance (0)

Anonymous Coward | 1 year,2 days | (#45218673)

You are blaming the graphics card for a feature the software doesn't support.
I mean, the tools are there in the graphics APIs (DirectX, OpenGL); the question is why software doesn't take advantage of them or support them.
Usually it's just down to decades of piling stuff on top of a badly implemented core which has become too problematic to change, so you are stuck with whatever implementation they have now.

Likely whatever is stored in the PDF or suchlike isn't general-purpose mesh-like information, but rather the same archaic draw/putpixel commands, which would require a complete redesign of the storage format as well.

If the data is unsuitable and the software can't use the hardware capabilities, it's very hard to use any kind of drawing acceleration.

Re:I want better 2D performance (1)

Overzeetop (214511) | 1 year,1 day | (#45221865)

Data is never unsuitable for hardware acceleration, merely non-optimized. If you can do it in software (which it currently is), you can do it in dedicated hardware. The software doesn't take advantage of the hardware because the calls are all poorly suited to the format. It's not as if PDF is some orphan container, or EPS is some long-lost and rarely used language.

If there is demand for an efficient call mechanism, there will be a better hardware system. Apple gets around decoding audio and video in software by simply forbidding anything that doesn't conform to the narrow codec range that they have chosen to hardware decode. And that's just fine for a limited-purpose toy, but not for a fully functioning general purpose machine.

Re:I want better 2D performance (1)

Redmancometh (2676319) | 1 year,2 days | (#45221227)

Opening a massive set of drawings while SolidWorks has an exploded 22,000-part die open... even my work machine has issues.

What you just mentioned legitimately damages my productivity, severely.

Re:I want better 2D performance (0)

Anonymous Coward | 1 year,1 day | (#45222413)

The APIs for accelerating "2D" path oriented graphics are available in current drivers (nVidia has supported it for some time). See the docs/demos/etc:

https://developer.nvidia.com/nv-path-rendering

Basically SVG rasterized natively by the GPU. Honestly, the issue here is that the developers of these applications are not using the latest APIs/techniques.

FWIW.

Re:I want better 2D performance (1)

Gibgezr (2025238) | 1 year,1 day | (#45222697)

Nvidia has been doing that for a while. They hired several vector-graphics programmers a few years ago and had them add that functionality to their cards. The problem is, no developers use this stuff.
https://developer.nvidia.com/nv-path-rendering [nvidia.com]

Re:I want better 2D performance (0)

Anonymous Coward | 1 year,1 day | (#45222699)

No, this is a premium feature on their workstation-class cards (FirePro, and I forget nVidia's name). It's for AutoCAD and similar software and special-FX acceleration, and they LITERALLY charge more for enabling line drawing.

Why dribble about GPUs? (5, Insightful)

Anonymous Coward | 1 year,2 days | (#45218369)

The 'point' of this very crappy article is that each process node shrink is taking longer and longer. Why bother connecting this to GPUs, when self-evidently ANY type of chip relying on regular process shrinks will be affected?

The real story is EITHER the future of GPUs in a time of decreasing PC sales, rapidly improving FIXED-function consoles, and the need to keep high-end GPUs within a sane power budget ***OR*** what is the likely future of general chip fabrication?

Take the latter. Each new process node costs vastly larger amounts of money to implement than the last. Nvidia put out a paper last year (about GPUs, but their point was general) arguing that the cost of shrinking a chip may become so high that it will ALWAYS be more profitable to keep making chips on the older process instead. This is the nightmare scenario, NOT bumping into the limits of physics.

Today, we have a good view of the problem. TSMC took a long time to get to 28nm, and is taking much longer to get off it. 20nm and smaller aren't even real process node shrinks. What Intel dishonestly calls 22nm and 14nm is actually 28nm with some elements only on a finer geometry. Because of this, AMD is due to catch up with Intel in the first half of 2014, with its FinFET transistors also at 20nm and 14nm.

Some nerdy sheeple won't believe what I've just said about Intel's lies. Well, Intel gets 10 million transistors per mm2 on its 22nm process, and AMD, via TSMC, gets 14+ million on the larger 28nm process. It defies all concept of maths when Intel CLAIMS a smaller process but gets far fewer transistors per area than a larger process.

It gets more complicated. The rush to SHRINK has caused the industry to overlook the possibilities of optimising a given process with new materials, geometry, and transistor designs. FD-SOI is proving to be insanely better than FinFET on any current process, but is being IGNORED as much as possible by most of the bigger players, because they've already invested tens of billions of dollars in prepping for FinFET. Intel has had two disastrous rounds of new CPUs (Ivy Bridge and Haswell), because FinFET failed to deliver any of its theoretical gains on the process they 'call' 22nm.

Intel has one very significant TRUE lead, though: power consumption in its mains-powered CPU family. Although no one gives a damn about mains-powered CPU power usage, Intel is more than twice as efficient as AMD here. Sadly, that power advantage largely vanishes with mobile, battery-powered parts.

Anyway, to flip back to GPUs: AMD is about to announce the 290X, the fastest GPU, but with a VERY high power usage. Both AMD and Nvidia need to work seriously on getting power consumption down as low as possible, and this means 'sweet spot' GPU parts which will NOT be the fastest possible, but will have sane compromise characteristics. Because 20nm from TSMC is almost here (in 12 months max), AMD and Nvidia are focused firstly on the new shrink and FinFETs, BUT moving below 20nm (in a TRUE shrink, not simply measuring the 2D profile of FinFET transistors) is going to take a very, very, very long time, so all companies have an incentive to explore ways of improving chip design on a GIVEN process, not simply lazily waiting for a shrink.

Who knows? FD-SOI offers (for some chips) more improvement than a single shrink using conventional methods. It is more than possible that by exploring materials science and the physics of semiconductor design, we could get the equivalent of the advantages of multiple generations of shrink without changing process.

Re:Why dribble about GPUs? (4, Interesting)

JoshMST (1221846) | 1 year,2 days | (#45218545)

Because GPUs are the high-visibility product that most people get excited about? How much excitement was there for Haswell? Not nearly as much as we are seeing for Hawaii. Did you actually read the entire article? It actually addresses things like the efficacy of Intel's 3D Tri-Gate, as well as alternatives such as planar FD-SOI-based products. The conclusion there is that gate-last planar FD-SOI is as good as, if not better than, Intel's 22 nm Tri-Gate. I believe I also covered more in the comments section about how certain geometries on Intel's 22 nm are actually at 26 nm, so AMD at 28 nm is not as far behind in terms of density as some thought.

Re:Why dribble about GPUs? (0)

Anonymous Coward | 1 year,1 day | (#45221461)

The real reason there wasn't much excitement about Haswell is that Haswell turned out not to be that exciting.

It would have been very exciting if Haswell had somehow been, say, 2x faster than its predecessor in common single-threaded tasks but with the same TDP. Everyone would want to know how the fuck Intel did it, and many would be willing to pay a significant premium for such CPUs.

GPUs are easier to make faster since a lot of what they do is "embarrassingly parallel".

Re:Why dribble about GPUs? (1)

drhank1980 (1225872) | 1 year,2 days | (#45220379)

In the discussions I have had with other people I work with in the semiconductor industry, the primary case against FD-SOI was business, not technical. FD-SOI is very expensive as a starting material, and sourcing of the stuff was iffy at best when Intel decided to go FinFET. It was also questionable whether it would scale well to 450mm wafers, something that TSMC and Intel really want.

Re:Why dribble about GPUs? (5, Informative)

Kjella (173770) | 1 year,2 days | (#45220547)

Some nerdy sheeple won't believe what I've just said about Intel's lies. Well, Intel gets 10 million transistors per mm2 on its 22nm process, and AMD, via TSMC, gets 14+ million on the larger 28nm process. It defies all concept of maths when Intel CLAIMS a smaller process but gets far fewer transistors per area than a larger process.

Comparing apples and oranges, are we? Yes, AMD gets 12.2 million/mm^2 on a GPU (the 7970 is 4.313 billion transistors on 352 mm^2), but CPU transistor density is a lot lower for everybody. The latest Haswell (22nm) has 1.4B transistors in 177 mm^2, or about 7.9 million/mm^2, while AMD's Richland (32nm) has only 1.3B transistors in 246 mm^2, or 5.3 million/mm^2. Their 28nm CPUs aren't out yet, but they'll still have lower transistor density than Intel's 22nm, and at this rate they'll be competing against the even smaller Broadwell, though I agree it's probably not true 14nm. A very well formulated post, though, that appears plausible and is posted as AC: paid AMD shill or desperate investor?

Re:Why dribble about GPUs? (2)

Rockoon (1252108) | 1 year,2 days | (#45220845)

Continuing on your informative post:

The current best 4-thread chip that Intel makes is the 22nm i5-4670K and the current best 4-thread chip that AMD makes is the 32nm A10-6800K.

Ignoring the integrated graphics, their PassMark benchmarks are 7538 and 5104 respectively, so Intel's performance is about 1.5x the AMD chip's.

Intel's process advantage between these two parts is also 1.5x transistors per mm^2.

So performance is still apparently scaling quite well with process size.
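
A quick back-of-the-envelope check of the ratios in the last two comments, taking the transistor counts, die areas and PassMark scores quoted above at face value:

```python
# Figures quoted in the two comments above, taken at face value.
chips = {
    "Haswell (22nm, i5-4670K)":   {"mtrans": 1400, "area_mm2": 177, "passmark": 7538},
    "Richland (32nm, A10-6800K)": {"mtrans": 1300, "area_mm2": 246, "passmark": 5104},
}

for name, c in chips.items():
    c["density"] = c["mtrans"] / c["area_mm2"]   # million transistors per mm^2
    print(f'{name}: {c["density"]:.1f} Mtransistors/mm^2')

intel = chips["Haswell (22nm, i5-4670K)"]
amd = chips["Richland (32nm, A10-6800K)"]
print(f'Density ratio (Intel/AMD):  {intel["density"] / amd["density"]:.2f}x')
print(f'PassMark ratio (Intel/AMD): {intel["passmark"] / amd["passmark"]:.2f}x')
```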

Re:Why dribble about GPUs? (0)

Anonymous Coward | 1 year,2 days | (#45221193)

Correlation does not imply causation.

"slowing"? More like stagnate for the last 2 years (1)

Neuroelectronic (643221) | 1 year,2 days | (#45218383)

How strange that it just happens that WiFi encryption standards fall with the power of last generation cards... Just a coincidence, I'm sure.

Re:"slowing"? More like stagnate for the last 2 ye (1)

Neuroelectronic (643221) | 1 year,2 days | (#45218425)

Sure is strange though how both NVidia and AMD announced their cards shortly after the adoption of TLS 1.2 by major American e-commerce websites and web browsers.

Maybe skipping 20nm node? (2)

edxwelch (600979) | 1 year,2 days | (#45218717)

During TSMC's earnings call, the CEO mentioned that there are tape-outs for GPUs on 16nm FinFET, but not on 20nm, hinting that Nvidia and AMD will skip that node altogether.

http://seekingalpha.com/article/1750792-taiwan-semiconductor-manufacturing-limited-management-discusses-q3-2013-results-earnings-call-transcript?page=3 [seekingalpha.com]

"Specifically on 20-nanometers, we have received 5 product tape-outs, and scheduled more than 30 tape-outs in this year and next year from mobile computing CPU and PLD segments"

"On 16-FinFET. Technological development is progressing well, risk production is on schedule by the end of this year. More than 25 customer product tape-outs are planned in 2014, including mobile computing, CPU, GPU, PLD and networking applications. "

Re:Maybe skipping 20nm node? (0)

Anonymous Coward | 1 year,2 days | (#45218873)

That could be the case. 20 nm bulk/planar does not look fantastic from an electrical standpoint.

Graphene chips... (1)

Alejux (2800513) | 1 year,2 days | (#45219739)

...can't come soon enough. We desperately need a change in paradigm. Hopefully we'll have something around 2020. *crossing fingers*

Time to add features (2)

dutchwhizzman (817898) | 1 year,2 days | (#45220669)

If you're out of silicon to work with, you can't just keep throwing transistors at a performance problem; you will have to get smarter with what you do with the transistors you have. If the GFX card makers add innovative features to the on-board chips, they could solve many bottlenecks we still face in utilizing the massive parallel performance we have on these cards. Both for science and for GFX, I'm sure there is a list of "most wanted features" or "biggest hotspots" they could work on. For example, the speed at which you can calculate hashes with oclHashcat differs dramatically between NVidia and AMD graphics. NVidia is clearly missing something they don't need a smaller silicon process for. There must be plenty of improvements of this sort both AMD and NVidia can make.

Re:Time to add features (1)

ausekilis (1513635) | 1 year,1 day | (#45222547)

The feature that comes to mind, which some companies have been hammering at for years, is ray tracing. I remember a project Intel was doing some years ago with the Return to Castle Wolfenstein source (way back when) to make the engine completely ray traced. I also remember it took a good $10k+ computer to render in less than real time (the numbers escape me, and the site appears to be down now). While it did create some pretty unbelievable graphics for the time, with true reflections on solid surfaces such as glass and metal, it was completely unapproachable for consumers due to cost, and unusable for gamers because it wasn't real-time yet. Intel does seem to be continuing the work [intel.com], though.
There will always be folks using the cards for password cracking and other "simple" massively parallel tasks. My vote goes to increasing the speed at which they can generate more realistic imagery. Then that same 100k (guesstimating) render farm over at Pixar or DreamWorks will give us movies that further blur the line between real and fantasy.