
AMD Catalyst Driver To Enable Mantle, Fix Frame Pacing, Support HSA For Kaveri

timothy posted about 9 months ago | from the next-step-is-the-optic-nerve dept.

Upgrades 71

MojoKid writes "AMD has a new set of drivers coming in a couple of days that are poised to resolve a number of longstanding issues and enable a handful of new features as well, most notably support for Mantle. AMD's new Catalyst 14.1 beta driver is going to be the first publicly available driver from AMD that will support Mantle, AMD's "close to the metal" API that will let developers wring additional performance from GCN-based GPUs. The new drivers will also add support for the HSA-related features introduced with the recently released Kaveri APU, and will reportedly fix the frame pacing issues associated with Radeon HD 7000 series CrossFire configurations. A patch for Battlefield 4 is due to arrive soon as well; AMD is claiming performance gains in excess of 40 percent in CPU-limited scenarios but smaller gains in GPU-limited conditions, with average gains of 11-13 percent overall." First-time accepted submitter Spottywot adds some details about the Battlefield 4 improvements, writing that Johan Andersson, one of the Technical Directors on the Frostbite team, says that the best performance gains are observed when a game is bottlenecked by the CPU, "which can be quite common even on high-end machines." "With an AMD A10-7850K 'Kaveri' APU Mantle provides a 14 per cent improvement; on a system with an AMD FX-8350 and Radeon 7970 Mantle provides a 25 per cent boost; while on an Intel Core i7-3970x Extreme system with 2x AMD Radeon R9 290x cards a huge 58 per cent performance increase was observed."


71 comments


HOWZ IT GO ?? (-1)

Anonymous Coward | about 9 months ago | (#46111737)

Too little !!
Too late !!
WAY !!

High end cpu's get little to no boost (3, Interesting)

Billly Gates (198444) | about 9 months ago | (#46111797)

MaximumPC paints this a little differently [maximumpc.com]: only lower-end CPUs get a big boost, in conjunction with higher-end AMD cards.

I guess we will wait and see with benchmarks later today when 14.1 is released.

This is great news for those like me on older Phenom II 2.6 GHz systems who can afford to upgrade the RAM, the video card, and to an SSD, but not the CPU without a whole damn new system. I use VMware, and this obsolete system has a 6-core CPU and hardware virtualization support. Otherwise I would upgrade, but only a Core i7 or a higher-end AMD FX-8350 has the same features for non-gaming tasks. Being able to play Battlefield 4 on this soon with high settings at 1080p would be great!

Re:High end cpu's get little to no boost (1)

0123456 (636235) | about 9 months ago | (#46111807)

MaximumPC paints this a little differently [maximumpc.com]: only lower-end CPUs get a big boost, in conjunction with higher-end AMD cards.

I was wondering how that made any sense, because I've never seen my i7 more than 20% used in any game where I've monitored CPU usage. However, I haven't played the Battlefield games in years.

Re:High end cpu's get little to no boost (0)

Billly Gates (198444) | about 9 months ago | (#46111853)

Some games which were really slow on my system, like Star Wars: The Old Republic, have improved in later patches as they now spread the tasks across all 6 CPU cores.

However, there is sometimes lag when the CPU usage is at only 40%. This is because of synchronization, with all the cores waiting on each other to finish something. That is one of the drawbacks of parallelization and why Intel's Itanium failed. Great for servers, but anything where data needs to be exchanged between the different parts of the program via threads hits bottlenecks.

So the icore7 uses crazy mathematical algorithms to execute data before it even arrives to save bandwidth and get insane IPC, which is why AMD can't compete. But if you have a heavily threaded app that is latency-intensive, like a game, it can be choppy even with low CPU utilization.

Re:High end cpu's get little to no boost (5, Informative)

Anonymous Coward | about 9 months ago | (#46112143)

Some games which were really slow on my system, like Star Wars: The Old Republic, have improved in later patches as they now spread the tasks across all 6 CPU cores.

However, there is sometimes lag when the CPU usage is at only 40%. This is because of synchronization, with all the cores waiting on each other to finish something. That is one of the drawbacks of parallelization and why Intel's Itanium failed. Great for servers, but anything where data needs to be exchanged between the different parts of the program via threads hits bottlenecks.

So the icore7 uses crazy mathematical algorithms to execute data before it even arrives to save bandwidth and get insane IPC, which is why AMD can't compete. But if you have a heavily threaded app that is latency-intensive, like a game, it can be choppy even with low CPU utilization.

There is so much wrong with this post.

First, Itanium didn't fail due to difficulties in parallelizing things. Software never ramped up due to low market penetration and the fact that they shoved instruction scheduling back onto the compiler writers; it had poor performance for x86 code, and it was never targeted at anything but big server iron. It was never intended to be a consumer-level chip. Another big reason Itanium failed was the introduction of AMD64.

Secondly, the anecdotal latency that you experience in SWTOR even though CPU utilization is only 40% is unlikely to be due to "cores waiting on each other to finish something", and I challenge you to present a thread dump illustrating such a block correlated with your metric for latency. If you have not done an in-depth analysis you'd have no way to know, but if you did I'd be curious as to your findings.

Finally, I have no idea why you would think that the i7 (I assume that's what you meant by icore7) "execute[s] data before it arrives." That doesn't even make sense. What you are most likely referring to is out-of-order execution, or possibly branch prediction - both features that are also present in AMD chips and in earlier Intel chips going back to the Pentium Pro. The better IPC of the i7 certainly has nothing to do with magical future-seeing math and more to do with better execution units, more out-of-order execution resources, and superior FPU hardware.

It is true that, in general, games have not been able to scale to use 100% of your CPU 100% of the time, but it's not for the reason you have stated, and I'm quite doubtful that threading has introduced the kind of latency a human would notice in the way you describe. There is a latency/throughput trade-off, but it is quite possible to achieve better frame latencies with multiple cores than with a single core.

Re:High end cpu's get little to no boost (3, Informative)

0123456 (636235) | about 9 months ago | (#46112171)

It was never intended to be a consumer-level chip.

I take it you weren't around at the time? I remember many magazine articles about how Itanium was going to replace x86 everywhere, before it turned out to suck so bad at running x86 code.

Re:High end cpu's get little to no boost (2, Informative)

Anonymous Coward | about 9 months ago | (#46112311)

I was around at the time, and before. Itanium was touted during the hype phase as eventually replacing x86 years down the road, but it was only ever released as a server chip (with an order of magnitude more transistors and cost than any consumer chip), and any notion that it would ever be for end users was dropped. I'm not sure whether the early hype had anything to do with Intel's intentions or was simply everyone else's daydreams, but Itanium was designed specifically to compete with RISC on big iron. Intel was not even considering 64-bit consumer chips at that time.

While they may have had notions that, decades from then, consumers would run Itanium after a bunch of process node shrinks, that has more to do with the EPIC ISA and a lot less to do with Itanium the chip.

Since the x86 performance was basically a P4 built alongside the EPIC core, they certainly could have devoted more transistor budget to it if that was critical for them, but EPIC was designed to replace x86 ultimately, not be a high speed x86 execution engine.

Don't get me wrong, Itanium was a disaster and I'm not defending Intel on it. If it weren't for AMD, we wouldn't have seen an update to 32-bit x86 or gotten consumer-level 64-bit for a long, long time... But no version of Itanium or Itanium 2 was targeted for you to run in a Dell.

Re:High end cpu's get little to no boost (0)

Anonymous Coward | about 9 months ago | (#46115367)

The Itanic was the PS/2 of Intel. In all respects. And it failed for pretty much the same reasons.

You can blame compilers, developers, x86 code, ISAs and transistor counts all day long if you like, but if you've been around for as long as you claim, you know that none of that really matters when it comes to dominating a market. If you look around, there are lots of really bad products that at one point or another totally dominated their respective markets, e.g. Windows and VHS.

What did the PS/2 and the Itanic have in common? They were closed, proprietary hardware, designed to make oodles of money for IBM and Intel respectively, and to kill off the competition by using market share as the enabler. Good for them, not so good for everyone else. Ironically, but not very surprisingly, they were announced the same way (as the new thing that was going to take over the world), lived the same way (mostly used by businesses with more money than sense), and ended the same way (an ignominious death, disowned by their own creators).

Re:High end cpu's get little to no boost (1)

K. S. Kyosuke (729550) | about 9 months ago | (#46128385)

The Itanic was the PS/2 of Intel.

Could have been the iAPX 432 of Intel. That would have been even worse for Intel. Oh, wait...

Re:High end cpu's get little to no boost (1)

dave562 (969951) | about 9 months ago | (#46111899)

I have an i7-960 (3.2GHz) and the ESO beta was pushing it pretty hard, averaging 30-40%.

Idle is not the only measure... (2)

Junta (36770) | about 9 months ago | (#46111981)

We are talking about a real-time application, so even without 100% load over a relatively large sampling interval, performance can be degraded.

Let's assume that you have two sequential things that cannot be overlapped: CPU setup and GPU processing. You cannot begin CPU setup of the next frame until the GPU is done with the current frame (a gross oversimplification, but there are sequences that bear some resemblance to this).

So a hypothetical CPU takes 1 ms to set up a frame, and the hypothetical GPU then takes 4 ms to render it. That's 20% CPU usage reported on average, even over a sampling interval as small as 5 ms, and a throughput of 200 frames per second.

Now let's say that the CPU takes a tenth of the time, 100 us, to set up a frame, while the GPU still takes 4 ms to render it. You have gotten about a 20% speedup. Even though the CPU was never fully utilized, it did represent a throttling factor in FPS.
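A quick sketch of that arithmetic, using the same hypothetical 1 ms / 4 ms numbers (illustrative only):

#include <cstdio>

int main() {
    // Hypothetical numbers from the comment above: CPU setup and GPU render
    // are strictly sequential, so frame time is the sum of the two.
    double cpu_ms = 1.0, gpu_ms = 4.0;
    double frame_ms = cpu_ms + gpu_ms;                 // 5 ms -> 200 fps
    printf("before: %.0f fps, CPU busy %.0f%%\n",
           1000.0 / frame_ms, 100.0 * cpu_ms / frame_ms);

    // Make the CPU portion 10x faster (100 us); the GPU time is unchanged.
    cpu_ms = 0.1;
    frame_ms = cpu_ms + gpu_ms;                        // 4.1 ms -> ~244 fps
    printf("after:  %.0f fps (~%.0f%% more frames)\n",
           1000.0 / frame_ms, 100.0 * (5.0 / frame_ms - 1.0));
    return 0;
}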

Re:High end cpu's get little to no boost (0)

Anonymous Coward | about 9 months ago | (#46112213)

MaximumPC paints this a little differently [maximumpc.com]: only lower-end CPUs get a big boost, in conjunction with higher-end AMD cards.

I was wondering how that made any sense, because I've never seen my i7 more than 20% used in any game where I've monitored CPU usage. However, I haven't played the Battlefield games in years.

Haven't played in years, eh? You might as well have said decades then. From a technological standpoint, that much has changed.

Re:High end cpu's get little to no boost (1)

0123456 (636235) | about 9 months ago | (#46112581)

Haven't played in years, eh? You might as well have said decades then. From a technological standpoint, that much has changed.

I said 'I haven't played the Battlefield games in years,' not 'I haven't played in years.'

Note 'the Battlefield games'.

If you understood English, you'd realize that refers to a specific set of games that I haven't played in years, the latest of which happens to be the one they're benchmarking, so it might be much more CPU-intensive than the many games I have played, most of which were designed for consoles with a tiny fraction of the i7's processing power.

Re:High end cpu's get little to no boost (1)

Bacon Bits (926911) | about 9 months ago | (#46112509)

You also have to consider how the game was programmed and compiled. Many games are not able to support multiple cores, or may not support as many cores as you have. If you've got a 6-core CPU and your game is only designed and optimized for 2 cores, your CPU can bottleneck at 33% utilization, and a single thread can bottleneck at 17% utilization. Even if the game does support all 6 cores, you can still get threads that hit capacity on a single core and won't be split across multiple cores.
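Those ceilings are just thread count divided by core count; a minimal sketch (assuming the OS reports utilization averaged across all cores):

#include <cstdio>

int main() {
    // Aggregate utilization ceiling when a game can only keep N threads busy
    // on a C-core CPU: N fully loaded cores read as N/C overall.
    int cores = 6;
    for (int busy_threads : {1, 2, 6})
        printf("%d of %d cores saturated -> %.0f%% reported utilization\n",
               busy_threads, cores, 100.0 * busy_threads / cores);
    return 0;
}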

Re:High end cpu's get little to no boost (1)

GigaplexNZ (1233886) | about 9 months ago | (#46115691)

MaximumPC paints this a little differently [maximumpc.com]: only lower-end CPUs get a big boost, in conjunction with higher-end AMD cards.

I was wondering how that made any sense, because I've never seen my i7 more than 20% used in any game where I've monitored CPU usage. However, I haven't played the Battlefield games in years.

CPUs can bottleneck even at 20% utilisation. The task manager will show 20% average utilisation, but that could mean that it sat at 100% utilisation for 20% of the time, rather than 20% utilisation for 100% of the time (or some mix in between).

Re:High end cpu's get little to no boost (1)

Luckyo (1726890) | about 9 months ago | (#46116715)

Load up WoW and try raiding with a reasonable number of add-ons enabled. Watch your CPU choke like a baby who swallowed a Lego brick.

BF also tended to use a lot of CPU, but WoW is just unrivalled in eating your CPU alive.

Re:High end cpu's get little to no boost (1)

Bengie (1121981) | about 9 months ago | (#46117219)

WoW chokes, but it only uses about 30% of my CPU. My CPU is mostly idle.

Re:High end cpu's get little to no boost (1)

Luckyo (1726890) | about 9 months ago | (#46117369)

To be specific - it chokes on its main thread. Badly. Highly overclocked i5 wipes the floor with i7s because of it.

Re:High end cpu's get little to no boost (1)

Baloroth (2370816) | about 9 months ago | (#46111929)

MaximumPC paints this a little differently [maximumpc.com]: only lower-end CPUs get a big boost, in conjunction with higher-end AMD cards.

I'm not sure exactly what you're referring to in the link. The only comparisons are low(er)-end CPUs with high-end cards or high-end CPUs with low(er)-range cards. You don't get a boost if you've got a high-end CPU and a lower-end card, but that should be completely obvious: the point of Mantle is to shift load off the CPU. If the GPU is already maxed out, you won't see much (or any) gain at all.

Re:High end cpu's get little to no boost (2)

edxwelch (600979) | about 9 months ago | (#46112061)

So MaximumPC must not consider the i7-3970x Extreme mentioned a high-end CPU, because that gets a 58% boost.

Re:High end cpu's get little to no boost (0)

Anonymous Coward | about 9 months ago | (#46112085)

I think you are missing a decimal point in that. I see single digit percentage increases.

Re:High end cpu's get little to no boost (0)

Anonymous Coward | about 9 months ago | (#46113741)

... for an i7-4960X ($1059) with an R7 260X ($139) running BF4 at ultra settings at 1080p.
In other news, an API designed to reduce CPU overhead doesn't help much if you are completely GPU-limited.

Re:High end cpu's get little to no boost (1)

sexconker (1179573) | about 9 months ago | (#46115253)

I think you are missing a decimal point in that. I see single digit percentage increases.

http://battlelog.battlefield.c... [battlefield.com]

Test case 3: High-end single-player with multiple GPUs
CPU: Intel Core i7-3970x Extreme, 12 logical cores @ 3.5 GHz
GPU: 2x AMD Radeon R9 290x 4 GB
Settings: 1080p ULTRA 4x MSAA
OS: Windows 8 64-bit
Level: South China Sea “Broken Flight Deck”
This single-player scene is heavy on both the CPU and GPU, with lots of action going on. The test was done on the highest-end Intel CPU on Windows 8, which is the fastest option before Mantle thanks to DirectX 11.1. Still, this CPU is not fast enough to keep the two 290x GPUs fed at 1080p on Ultra settings, so we get a significant CPU performance bottleneck, which results in a major performance improvement when enabling Mantle.
Result: 13.24 ms/f -> 8.38 ms/f = 58% faster
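The 58% figure follows directly from those frame times; a tiny sketch of the conversion (numbers copied from the quote above):

#include <cstdio>

int main() {
    // Frame times quoted for this test case: before (Direct3D) vs after (Mantle).
    double before_ms = 13.24, after_ms = 8.38;
    double fps_before = 1000.0 / before_ms;   // ~75.5 fps
    double fps_after  = 1000.0 / after_ms;    // ~119.3 fps
    // "58% faster" is the increase in frame rate, i.e. before/after - 1.
    printf("%.1f fps -> %.1f fps = %.0f%% faster\n",
           fps_before, fps_after, 100.0 * (before_ms / after_ms - 1.0));
    return 0;
}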

Re:High end cpu's get little to no boost (0)

Anonymous Coward | about 9 months ago | (#46113559)

A 6-core Phenom II would be a Thuban processor, which means the AM3 platform.

If you haven't already, check your motherboard manufacturer's support page - some AM3 motherboards can be updated to support AM3+ processors like the 8350 with a simple BIOS update.

LibreOffice support too in release notes (0)

Anonymous Coward | about 9 months ago | (#46111889)

I find it interesting that some of the LibreOffice 4.2 code uses hardware acceleration [maximumpc.com], and that this driver comes with Mantle. I think LibreOffice 4.2 is also supposed to come out today.

Great day to upgrade your software.

Well (1)

ADRA (37398) | about 9 months ago | (#46111895)

"which can be quite common even on high-end machine"

Sure, when the games are coded to use 100% of 1 thread while ignoring (most likely) 3-7 threads just screaming to be utilized, then CPU's are surely a contentious bottleneck.

Re:Well (0)

Anonymous Coward | about 9 months ago | (#46111999)

Hey! I'm sure they could use some Frostbite engine programmers! Go hit em up! I'm sure they could use your engine programming expertise!

Re:Well (0)

Anonymous Coward | about 9 months ago | (#46112263)

"which can be quite common even on high-end machine"

Sure, when the games are coded to use 100% of 1 thread while ignoring (most likely) 3-7 threads just screaming to be utilized, then CPU's are surely a contentious bottleneck.

Uh, we've been in the land of 64-bit computing, multi-core, and multi-thread capability for quite a long time now. If you have programs still being written for single-threaded operations, I'd say your bottleneck is in the programming department.

And it's a problem you better learn to get fixed and fixed quickly before someone makes an example out of your shitty code.

Re:Well (1)

kllrnohj (2626947) | about 9 months ago | (#46112971)

Games are still largely gated by a single thread because graphics APIs still don't allow multiple threads to share a context[1]. When you're only allowed to talk to the GPU from a single thread, it's no surprise that thread is going to be CPU bound.

1. Yes I know you can do limited resource sharing between contexts and such for background uploading of resources like textures, but the actual act of issuing all the drawing commands to produce the frame has to happen from a single thread.
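An illustrative (and much simplified) sketch of the usual workaround: worker threads record work into a shared queue, but only one render thread actually talks to the graphics API. All names here are hypothetical, not any real engine's or API's:

#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Hypothetical stand-in for "a recorded draw call".
using DrawCommand = std::function<void()>;

std::mutex mtx;
std::queue<DrawCommand> commands;   // workers push, the render thread pops

void worker(int id) {
    // Workers can do culling, animation, sorting, etc. in parallel,
    // but they only *record* draw commands; they never issue them.
    std::lock_guard<std::mutex> lock(mtx);
    commands.push([id] { /* e.g. issue glDrawElements(...) for object 'id' */ });
}

int main() {
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i) workers.emplace_back(worker, i);
    for (auto& t : workers) t.join();

    // Only this thread owns the context and touches the API,
    // which is why it tends to be the CPU-bound one.
    while (!commands.empty()) { commands.front()(); commands.pop(); }
    return 0;
}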

Re:Well (2)

Pino Grigio (2232472) | about 9 months ago | (#46113239)

It's not just that, it's the locking behaviour needed for correct concurrency. There's no point in multi-threading across a lot of cores that are going to spend most of their time waiting on another thread to release a lock. Even Mantle is single-threaded, i.e. there's a queue for the GPU, one for compute and one for DMA, on different threads. There aren't going to be 8 threads all using the GPU at once. It'll still be serialised by the driver, just more efficiently than you can do it with existing D3D or GL drivers.

Re:Well (2)

Bengie (1121981) | about 9 months ago | (#46114655)

Even Mantle is single-threaded

Single-threaded per queue, and you can create as many queues as you want per application. During setup, you create all of the queues you want to use, then register them. Once registered, the GPU can read directly from a queue instead of the application making a system call, which pretty much removes those pesky 30,000-clock-cycle system calls. Kaveri goes a step further and registers queues directly with the hardware: because each queue item is exactly one cache line in size and the APU shares cache coherency and protected memory, the latency between the CPU and GPU is about the latency of the cache, which is around 10 ns.

Because system calls were used prior to Mantle, a 10 ns communication latency was completely dwarfed by the 30,000-cycle system call. Now we're getting something along the lines of zero call overhead and 10 ns latency. Obviously the GPU must have free resources to start working, and some code may need to run in order for the GPU to schedule, but the main point still remains.
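A very rough illustration of the idea - not the actual Mantle or HSA API, just a hypothetical user-mode ring buffer with cache-line-sized entries that a coherent device could poll instead of the application making a syscall:

#include <atomic>
#include <cstdint>

// Hypothetical packet format: one 64-byte cache line per queue entry,
// as described for HSA-style user-mode queues.
struct alignas(64) Packet {
    uint16_t opcode;
    uint16_t flags;
    uint32_t payload[14];   // pads the struct out to exactly 64 bytes
};
static_assert(sizeof(Packet) == 64, "one cache line per entry");

struct UserQueue {
    static constexpr uint32_t N = 256;
    Packet ring[N];
    std::atomic<uint32_t> write_index{0};   // bumped by the CPU
    std::atomic<uint32_t> read_index{0};    // advanced by the consumer (the GPU side)

    // Enqueue without any system call: write the packet, then publish it by
    // bumping the write index, which the device can observe via cache coherency.
    bool submit(const Packet& p) {
        uint32_t w = write_index.load(std::memory_order_relaxed);
        if (w - read_index.load(std::memory_order_acquire) == N) return false;  // full
        ring[w % N] = p;
        write_index.store(w + 1, std::memory_order_release);
        return true;
    }
};

int main() {
    UserQueue q;
    Packet p{};
    p.opcode = 1;                 // hypothetical "dispatch" opcode
    return q.submit(p) ? 0 : 1;
}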

Re:Well (1)

Pino Grigio (2232472) | about 9 months ago | (#46115087)

Yes, that was my point - the GPU has to schedule. It multi-threads at the warp level on the actual metal, but it'll execute one "command" at a time. The benefit here is as you say - lock free queuing of commands and highly optimised state management.

Re:Well (1)

Bengie (1121981) | about 9 months ago | (#46117237)

AMD GPUs can execute several kernels at a time, and each Mantle task is a kernel. GPUs are meant to be high-throughput and will keep themselves busy as long as you can feed them work fast enough.

Re:Well (1)

Pino Grigio (2232472) | about 9 months ago | (#46113187)

Two words, "false sharing". Just throwing cores at a problem doesn't necessarily improve the performance. You have to caress your cache lines pretty gently to get real improvements.

Sounds like a bit of a bust. (1)

Anonymous Coward | about 9 months ago | (#46111897)

The grand predictions they made for the performance increase now have a _huge_ asterisk (* when CPU limited).

What serious gamer is CPU bound these days? Most people have giant CPUs and GPUs that struggle with 1080p and especially higher resolutions.

Now, at first blush this doesn't matter - a 10% improvement is a 10% improvement, great. The problem is that the _cost_ is lock-in to this weirdo AMD Mantle stuff instead of DirectX or OpenGL.

Posted anonymously because with one or two bad moderations, due to the cunt policies implemented at Slashdot, you can go from Positive to Bad karma and get limited to some ridiculous small number of posts in 24 hours. What a bunch of shitheads the people who run this place are.

HOWZ IT GO !! (0)

Anonymous Coward | about 9 months ago | (#46112051)

Conform - or be cast out !!

And stop calling me a cunt head !! (There is no such thing besides !!)

AMD strategic... (3, Interesting)

Junta (36770) | about 9 months ago | (#46112069)

So gamers get a small boost to their gaming rigs, but that's not *really* the goal for AMD.

The real motivation is that AMD demonstrably lags Intel in *CPU* performance, but not GPU performance. OpenGL/Direct3D implementations cause that to matter, meaning AMD's CPU business gets dinged as a valid component in a configuration that will do some gaming. Mantle diminishes the importance of the CPU to most gaming, so their weak CPU offering becomes workable and they can sell their APU-based systems cheaper than an Intel CPU plus a discrete GPU while still reaping a tidy profit.

Re:AMD strategic... (1)

K. S. Kyosuke (729550) | about 9 months ago | (#46128311)

The real motivation is that AMD demonstrably lags Intel in *CPU* performance, but not GPU performance.

Well it's a stupid idea to use all that spiffy CPU performance for syscalls anyway, regardless of whether the CPU is an Intel chip or an AMD chip...or not?

Re:AMD strategic... (1)

Junta (36770) | about 9 months ago | (#46133933)

That's not to say it's a bad thing to reduce needless CPU activity; I was just explaining why this in particular was a strategic move for AMD. The GP was saying 'it's pointless because gamers buy powerful CPUs'; I was simply pointing out that's not what AMD would currently want, since a beefy CPU means Intel at this particular point (and really at most points since Nehalem came out).

I will say it *could* be a bad thing if it leads to Mantle-exclusive games that are unable to support other GPUs at all (Intel, nVidia, etc).

Re:Sounds like a bit of a bust. (5, Insightful)

Anonymous Coward | about 9 months ago | (#46112551)

I think that misses the point. CPUs aren't the limiting factor in part because game devs limit the number of draw calls they issue precisely to avoid the CPU becoming a limiting factor (because not everybody has a high-end CPU). Mantle may not offer vastly more performance in the short term, but it will enable more in game engines in the long term if the claims DICE and AMD make are accurate. That doesn't get away from the cost of lock-in, and like any new release of this sort Mantle may never catch on, but it may push DX and GL to change in a Mantle-like direction, which would then benefit all developers.

Re:Sounds like a bit of a bust. (1)

Pino Grigio (2232472) | about 9 months ago | (#46113323)

Precisely. Mod this dude up.

Re:Sounds like a bit of a bust. (1)

mikael (484) | about 9 months ago | (#46114323)

Weirdo AMD stuff? The descriptors that DirectX and Mantle use are basically the registers that control your GPU. Traditional desktop GL has a large amount of GL state (textures + framebuffer objects + blending + vertex buffers + transform feedback), which is what the driver maintains and attempts to sanity-check across all those GL calls (500+ of them) to make sure nothing inconsistent or fatal is allowed to run on any of the GPU cores. Setting a single parameter can involve cross-referencing dozens of other variables. If you look at some of the GL commands with the most parameters, such as the texture image uploads, the driver has to sanity-check ten or more parameters; e.g. the texture image functions have to check texture width, height, type, internal format, mip-map level, pixel data source type, and pixel data pointer.

DirectX 11 has the debug-mode interface, which does a similar thing but can be bypassed once everything is working correctly. Those drivers don't just issue graphics commands; they must also compile and link shaders, run OpenCL and CUDA, and interoperate with all the other APIs that you can find at Khronos (www.khronos.org).
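For a sense of what that sanity checking covers, here is a plain desktop-GL texture upload; every argument below has to be cross-checked against current state and hardware limits before the driver touches the GPU (a sketch only, with error handling omitted):

#include <GL/gl.h>

void upload_texture(const unsigned char* pixels, int w, int h) {
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // The driver must validate: target, mip level, internal format,
    // width/height against GL_MAX_TEXTURE_SIZE, border (must be 0),
    // the pixel format/type combination, and the unpack state that
    // controls how 'pixels' is read.
    glTexImage2D(GL_TEXTURE_2D,       // target
                 0,                   // mip-map level
                 GL_RGBA8,            // internal format
                 w, h,                // width, height
                 0,                   // border
                 GL_RGBA,             // pixel data format
                 GL_UNSIGNED_BYTE,    // pixel data type
                 pixels);             // pixel data pointer
}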

Re:Sounds like a bit of a bust. (1)

Bengie (1121981) | about 9 months ago | (#46114669)

What serious gamer is CPU bound these days?

Most of my games are thread bound. 80% idle CPU with a 90% idle GPU and getting 20fps.

Acronyms (0)

Anonymous Coward | about 9 months ago | (#46112015)

Still wondering here what AMD or GPUs have to do with Health Savings Accounts.

Re:Acronyms (1)

mounthood (993037) | about 9 months ago | (#46116767)

Still wondering here what AMD or GPUs have to do with Health Savings Accounts.

Heterogeneous System Architecture (HSA) http://developer.amd.com/resou... [amd.com]

This complaint might be annoying when you see it on every thread, but that doesn't mean it's wrong.

Direct3D "light weight" runtime (2)

edxwelch (600979) | about 9 months ago | (#46112137)

On a related note, Microsoft is working on an update to Direct3D to provide a "lightweight" runtime similar to the Xbox One's. Presumably, this will solve the same draw call issues that Mantle deals with.
Unfortunately, it doesn't sound like the update will happen anytime soon - maybe for Windows 9?
Also, it's unclear whether they will back-port the update to Windows 7.

https://blogs.windows.com/wind... [windows.com]

Re:Direct3D "light weight" runtime (1)

FreonTrip (694097) | about 9 months ago | (#46112215)

Willing to bet the answer to that is "no." Meaningful features are a cudgel used to bludgeon customers into buying new Windows licenses, after all!

Re:Direct3D "light weight" runtime (0)

Anonymous Coward | about 9 months ago | (#46125591)

No... lightweight in this case doesn't refer to the same idea as Mantle (low-level access).

DirectX (and OpenGL) were designed to abstract away a lot of the details of graphics cards. It worked... well... but we are in a situation now where that low-level access needs restoring. The abstraction is preventing these massive processing engines (the GPUs) from being used to their full potential.

Lightweight DirectX is just a DirectX install without a lot of legacy bloat. It doesn't change the way DirectX works... and that's where this problem lies.

Windows hardware (0)

Anonymous Coward | about 9 months ago | (#46112221)

Yay for AMD's Windows only features.

Re:Windows hardware (1)

Bengie (1121981) | about 9 months ago | (#46117249)

AMD has been making commits to FreeBSD and Linux for Mantle.

Will it work with IE10 or IE11? (0)

Anonymous Coward | about 9 months ago | (#46112239)

I can't even upgrade my browser because my fxxking AMD video card is incompatible. WTF? Though, I blame Microsoft more than anybody else. Firefox and Chrome work just fine.

Re:Will it work with IE10 or IE11? (1)

higuita (129722) | about 9 months ago | (#46118593)

1 - I can't imagine how the video card drivers can affect a browser; can't MS blacklist that driver? Does everyone have this problem?
2 - Why use IE?
3 - Why even use Windows!?

R9 290X vs 650 Ti Boost (1, Interesting)

Cruciform (42896) | about 9 months ago | (#46112287)

I was really disappointed by the comparative performance of the AMD 290x 4GB vs my nVidia 650 Ti Boost 2GB.

The nVidia lets me run games like Borderlands 2 and Skyrim at max settings on my old Core 2 Duo smoothly, yet the 290X hitches and drags, almost as if it were streaming the gameplay from the hard drive. I expected a card with 2000+ shaders to be faster than that.

If my processor isn't bottlenecking the 650's performance too badly, at least the 290X should be able to cap out at something reasonable.

Re:R9 290X vs 650 Ti Boost (0)

Anonymous Coward | about 9 months ago | (#46112609)

NVidia driver quality vs AMD driver quality...

Re:R9 290X vs 650 Ti Boost (1)

Anonymous Coward | about 9 months ago | (#46112757)

I would say ATI has the better drivers these days. The latest Nvidia drivers have caused quite a few BSODs and even bricked a few cards!

It is not 2004 anymore. ATI has consistently improved while Nvidia has moved in the other direction. I have only had one driver issue with them in 3 years.

Re:R9 290X vs 650 Ti Boost (2)

Mashiki (184564) | about 9 months ago | (#46112879)

NVidia driver quality vs AMD driver quality...

Chances are that in the GP's case there's something else going on, since I've got a 7950 and an FX-6300 and can run Skyrim on max settings with no frame hitching. Then again, it could just be the 290, since there were TDP issues on some batch runs from what I've heard.

But driver quality? Nvidia's drivers are the reason I went to AMD, after nearly a year of them blaming the end user for constant TDR crashes, then deciding to man up and pay to have rigs in the US shipped to California for TDR testing, then releasing a driver which mostly fixed the TDR issues, while staying very quiet about why it was crashing (all they said was "we fixed it in most cases"). Though a few intrepid people found it had to do with the drivers dropping the core and RAM voltages so low that the cards became unresponsive and unstable. Then there were the 5-7 months where the 290-3xx series drivers were causing hardlocks across the board for 400, 500, and 600 series owners.

I do remember when ATI's drivers were shit; I owned a couple of Radeon cards during that time. But since AMD bought them out, their driver quality has been improving quite a bit.

Re:R9 290X vs 650 Ti Boost (2)

TheRealMindChild (743925) | about 9 months ago | (#46112953)

Hell, I'm stuck on Nvidia's 314.22 drivers because every driver from the 32x and 33x series causes my machine to freeze or restart the driver in a "safe mode". You can read the many links to this horror via https://www.google.com/search?q=nvidia+video5 [google.com]

Re:R9 290X vs 650 Ti Boost (0)

Anonymous Coward | about 9 months ago | (#46113719)

This update is almost tailor-made for people like me with a 7870 GHz Edition paired with an Athlon II X3 @ 3.2 GHz. I'm looking forward to seeing modest gains in... Oh wait, I don't own any games that I can't already run at either high or ultra-high settings...

Maybe I'll be able to really max out the hair effects in Tomb Raider, or the cobblestone tessellation in BioShock Infinite or something. Or maybe move those two games from high graphics up to maximum. Either way, my rig already plays Crysis 3 across 3 1080p monitors at 30 fps; maybe it'll get me boosted up to 40 fps?

It's a good time to be a PC gamer.

Re:R9 290X vs 650 Ti Boost (1)

A Friendly Troll (1017492) | about 9 months ago | (#46117815)

Nvidia's drivers are the reason I went to AMD, after nearly a year of them blaming the end user for constant TDR crashes, then deciding to man up and pay to have rigs in the US shipped to California for TDR testing, then releasing a driver which mostly fixed the TDR issues, while staying very quiet about why it was crashing (all they said was "we fixed it in most cases"). Though a few intrepid people found it had to do with the drivers dropping the core and RAM voltages so low that the cards became unresponsive and unstable.

Y'know, that's quite funny, because I had TDR crashes with my AMD 7790. The crashes were such that I didn't even get a BSOD, and they were seemingly random (although never occurring under 3D load, only under 2D).

Since I had built a brand new computer, I had no idea it was the graphics card. I couldn't reproduce the crash, because sometimes it happened three times per hour, and sometimes a week passed by without issues. It could have been anything in the computer, including the SATA cables.

It took a month of system instability until I figured out that Windows did create minidumps (just not in the standard location), which pointed me to the GPU being the problem. After that, it took a week of fiddling around with various drivers and even TDR registry settings until I realized that fixing the GDDR speed at maximum, and not letting it "save power", fixed the crashes.

AMD tech support was clueless the entire time, and so was the internet, actually, because I couldn't find anyone with the same freezing problem and the same solution.

Re:R9 290X vs 650 Ti Boost (1)

rwise2112 (648849) | about 9 months ago | (#46113925)

I was really disappointed by the comparative performance of the AMD 290x 4GB vs my nVidia 650 Ti Boost 2GB.

The nVidia lets me run games like Borderlands 2 and Skyrim at max settings on my old Core 2 Duo smoothly, yet the 290X hitches and drags, almost as if it were streaming the gameplay from the hard drive. I expected a card with 2000+ shaders to be faster than that.

If my processor isn't bottlenecking the 650's performance too badly, at least the 290X should be able to cap out at something reasonable.

That doesn't make any sense, as I've got a 7850 that runs Skyrim at max settings with no problem, but I do have an i7 processor.

Re:R9 290X vs 650 Ti Boost (0)

casualgeek (851422) | about 9 months ago | (#46117109)

You have an R9 290X and you waste it on games? Ever heard of scrypt coin mining (Litecoin, Dogecoin, etc.)? This card could pay for itself in a few weeks or months mining 24/7.

Re:R9 290X vs 650 Ti Boost (1)

Cruciform (42896) | about 9 months ago | (#46117461)

It mines Gridcoin. I just don't do it 24/7.
Anyway, it's still marketed as a gaming card, so the lack of performance for the specs is pretty disheartening.
At least the word is that new drivers should resolve a lot of issues with performance.

Here is a more important question (2)

DarkAce911 (245282) | about 9 months ago | (#46113577)

Does it increase my mining hash speed? If you are buying AMD cards for gaming at today's prices, you are an idiot.

Seems like it'll screw them in the long run (3)

Sycraft-fu (314770) | about 9 months ago | (#46113749)

Well, assuming it takes off, which I don't think it will. If this stuff is truly "close to the metal" as the Mantle name and marketing hype claim, then it'll only work so long as they stick with the GCN architecture. It won't survive any large architecture changes. So that means they either have to stick with GCN forever, which would probably cripple their ability to make competitive cards in the future as things change, or they'd have to abandon support for Mantle in newer cards, which wouldn't be that popular with the developers and users that had bought in. I suppose they could also provide some kind of abstraction/emulation layer, but that rather defeats the purpose of a "bare metal" kind of API.

I just can't see this being a good thing for AMD in the long run, presuming Mantle truly is what they claim. The whole reason for things like DirectX and OpenGL is to abstract the hardware so that you don't have to write a renderer for each and every kind of card architecture, which does change a lot. If Mantle is tightly tied to GCN, then that screws all of that up.

So either this is a rather bad desperation move from AMD to try and make up for the fact that their CPUs have been sucking lately, or this is a bunch of marketing BS and really Mantle is a high level API, but just a proprietary one to try and screw over nVidia.

Re:Seems like it'll screw them in the long run (0)

Anonymous Coward | about 9 months ago | (#46113871)

Or it could be something they developed for the consoles and are opening up to developers on the PC.

Re:Seems like it'll screw them in the long run (1)

Anonymous Coward | about 9 months ago | (#46115133)

It is a high level API, and not particularly tied to GCN.

The major difference from DirectX and OpenGL: the application, the library, mapped GPU memory and the GPU message buffer all live in the same user address space, meaning zero context-switching overhead for a lot of GPU calls.
On top of that, there's not all that much error checking.
They can do that because the GPU has pre-emptive task switching and an MMU with protected memory.
So at worst a misbehaving application causes the GPU equivalent of a GPF, terminating that application without affecting the rest of the system.

And no, I don't see why Nvidia couldn't create a competing API.
Or why MS couldn't create a new DirectX version that does the same thing, for that matter.
Both Kepler and GCN support preemption and exceptions, and have an MMU.
For older GPU architectures it would have to fall back to software emulation/error checking.
So yeah, neat idea... but not as ground-breaking as AMD makes it out to be.

Re:Seems like it'll screw them in the long run (1)

Sycraft-fu (314770) | about 9 months ago | (#46127045)

If it is just support for preemption and an MMU, MS already created an API for that; it is called "DirectX 11.2", or more properly WDDM 1.3. 11.1 (WDDM 1.2) supported full preemption; 11.2 supports page-based virtual memory.

I dunno, I guess we'll see how performance in real games actually shakes out, but if this is nothing more than an API with a couple of newish features (features that DX already supports), I'm not really sure that the "giveashit" factor for devs will be very high.

I also wonder how they'll handle GPU memory mapping in user space. With large-memory cards, that is going to mean no 32-bit versions, which at this point may still be problematic for game developers.

Re:Seems like it'll screw them in the long run (2)

higuita (129722) | about 9 months ago | (#46118609)

Don't forget that all the new consoles today are using AMD hardware, so the game developers are using Mantle for those; they could also do that on the PC with little change.

This might very well become AMD's tombstone (1)

mic0e (2740501) | about 9 months ago | (#46118487)

AMD, with their decreasing market share, should use open, or at least widespread, standards such as OpenGL or DirectX.
Remember AMD's alternative to NVidia's CUDA? I think it was called Fire- something or something-stream... that really went well.
If AMD opens up a new front here, and Mantle has any success at all, NVidia will retaliate by creating their own API, and guess how many people will use Mantle then.

Re:This might very well become AMD's tombstone (1)

higuita (129722) | about 9 months ago | (#46118695)

Again, Nvidia doesn't have the console market today; AMD has it. So if game developers use Mantle on the consoles, the PC market should follow.
If the performance is as claimed, game developers will for sure try to use it, especially if it's supported the same on all systems (all consoles, Windows, Mac, Linux).

Re:This might very well become AMD's tombstone (1)

DarkAce911 (245282) | about 9 months ago | (#46120103)

Decreasing market share? AMD has fully sold out its entire retail inventory, to the point that they are talking about increasing production to meet demand. AMD's numbers are going to be huge.
