
AMD, NVIDIA, and Developers Weigh In On GameWorks Controversy

Soulskill posted about 6 months ago | from the there-can-be-only-one-(or-more) dept.

AMD 80

Dputiger writes: "Since NVIDIA debuted its GameWorks libraries there have been allegations that they unfairly disadvantaged AMD users or prevented developers from optimizing code. We've taken these questions to developers themselves and asked them to weigh in on how games get optimized, why NVIDIA built this program, and whether it's an attempt to harm AMD customers. 'The first thing to understand about [developer/GPU manufacturer] relations is that the process of game optimization is nuanced and complex. The reason AMD and NVIDIA are taking different positions on this topic isn't because one of them is lying; it's because AMD genuinely tends to focus more on helping developers optimize their own engines, while NVIDIA puts more effort into performing tasks in-driver. This is a difference of degree — AMD absolutely can perform its own driver-side optimization, and NVIDIA's Tony Tamasi acknowledged on the phone that there are some bugs that can only be fixed by looking at the source. ... Some of this difference in approach is cultural, but much of it is driven by necessity. In 2012 (the last year before AMD's graphics revenue was rolled into the console business), AMD made about $1.4 billion off the Radeon division. For the same period, NVIDIA made more than $4.2 billion. Some of that was Tegra-related and it's a testament to AMD's hardware engineering that it competes effectively with Nvidia with a much smaller revenue share, but it also means that Team Green has far more money to spend on optimizing every aspect of the driver stack.'"


Optimizing the driver stack... (0)

Anonymous Coward | about 6 months ago | (#47161861)

means having source code access and knowingly obfuscating code to impede your competitor?

Re:Optimizing the driver stack... (0)

Luckyo (1726890) | about 6 months ago | (#47161941)

Yes, but not in the way you mean it. AMD execs would likely kill to get their hands on information about how nvidia optimizes its drivers. It's a well-known reality that ATI had severe problems with driver quality from its early stages, and these problems persist today, long after AMD bought the company.

So yes, it makes perfect sense not to give the software part of the technology stack away to a competitor that has emphasized hardware at the expense of driver quality. Hardware-wise AMD currently has better bang for a euro than nvidia, but nvidia offers far better drivers and some extra features like physx.

I chose my last gaming PC to be nvidia-based after sitting on a 4870 for several years. In my opinion that card was excellent and performed well above the price category it was sold in when everything worked, but drivers and driver-related issues made me want to punch a kitten at times.

Hilariously, my laptop with integrated graphics is AMD for the exact same reason - I only play some rather old games on that machine, so I need performance per euro (it's a cheap laptop) rather than great support for the latest games, driver stability and features.

So it's absolutely understandable that nvidia chooses not to open the driver code. That is one of their key competitive advantages and something that AMD really wants to have.

Re:Optimizing the driver stack... (2)

by (1706743) (1706744) | about 6 months ago | (#47161959)

So it's absolutely understandable that nvidia chooses not to open the driver code.

...try telling that to Stallman et al!

Seriously though, maybe it makes me a Bad Linux User, but I'm absolutely ok with the state of nVidia drivers: installation is a piece of cake, 2D and 3D performance is great (I think 3D performance is on-par with Windows [OpenGL, obviously]).

I don't have any experience with new ATI cards under Linux, but I've had hit-or-miss luck the times I've used slightly older cards (interestingly, I've had much better luck with 3D performance than 2D...horrible tearing/update problems in 2D, but Nexuiz/OpenArena work fine...).

Re:Optimizing the driver stack... (1)

exomondo (1725132) | about 6 months ago | (#47161979)

...try telling that to Stallman et al!

I'm quite sure he would understand it; I'm also quite sure he wouldn't find it acceptable, though, and that's his prerogative.

Re:Optimizing the driver stack... (1)

Anonymous Coward | about 6 months ago | (#47162161)

Stallman would prefer if AMD and nVidia would only compete to make better hardware. Which may or may not be realistic. I'm not sure what such a world would look like.

Re:Optimizing the driver stack... (2, Informative)

Charliemopps (1157495) | about 6 months ago | (#47162039)

...but nvidia offers far better drivers and some extra features like physx

It's more than that. NVIDIA's drivers aren't even that good. It's just that ATI's (AMD's) are so terrible that they look good in comparison. Who the hell decided the Catalyst Control Center was a good idea? It reminds me of some glitchy 1990s spam-laden chat program. What a joke. The drivers are so sketchy that almost every game I'd play would have "STICKY: For ATI users check here first!" at the top of its support forums. Trying to get hardware acceleration to work on my Linux media PC was almost impossible until I switched to NVIDIA. Stop creating new cards I can cook an egg on and fix your damned drivers. I have enough fried eggs; I just want to watch a movie without spending 30 minutes dinking around with arcane driver settings while my wife keeps asking me why we canceled cable.

Likely a faulty chair-to-keyboard interface. (1, Funny)

TapeCutter (624760) | about 6 months ago | (#47162365)

It reminds me of some glitchy 1990s spam-laden chat program.

Sounds to me like you are using a 1990s card too; AFAIK "Catalyst" is no longer supported and it's certainly not bundled with recent cards. I updated my NVIDIA driver just the other day; sure, the driver is enormous (250MB), but it installed flawlessly in the background without a reboot. I play WoT regularly at maximum detail on an i7 and have no issues other than the 200ms round trip from Oz to the US, and it stays playable until that hits ~350ms. I've also been mucking around with CUDA for a few months; the developer resources are excellent and free for non-commercial use. If you really want to squeeze every last flop out of the card, NVIDIA provides free resources such as the online book "GPU Gems" and the white papers that accompany some of the demo source, such as the optimised n-body example.

Stop creating new cards I can cook an egg on

My $150 GeForce 750 maxes out at just over a teraflop, significantly faster than ANY supercomputer that existed pre-Y2K. It uses less wattage than an old-fashioned light bulb. Sure, it can fry an egg, even scramble it with the on-card fan, but why is that a problem if it remains within its operating specs? Is it that difficult to keep your eggs away from the video card? If your chickens are attracted to the heat, then move the chickens outside where they belong.

In my professional opinion, the problem on your system is the chair-to-keyboard interface; it has nothing to do with NVIDIA's or AMD's truly amazing technology, since there is absolutely no need to fiddle with default driver settings just to watch a movie. Listen to your wife and save yourself two headaches: forget about the PC drivers and just pay for the damned cable.

Re:Likely a faulty chair-to-keyboard interface. (1)

Anonymous Coward | about 6 months ago | (#47162615)

Psst. Catalyst is indeed still shipped with ATI drivers, especially the proprietary ones on Linux, and I think you misunderstood: the GP was talking about how he's had shit luck with ATI, while his nVidia drivers were fine.

Re:Likely a faulty chair-to-keyboard interface. (1)

Charliemopps (1157495) | about 6 months ago | (#47163203)

Psst. Catalyst is indeed still shipped with ATI drivers, especially the proprietary ones on Linux, and I think you misunderstood: the GP was talking about how he's had shit luck with ATI, while his nVidia drivers were fine.

Thanks! That's exactly what I was saying. :-)

Re:Likely a faulty chair-to-keyboard interface. (3, Informative)

drinkypoo (153816) | about 6 months ago | (#47163721)

Sounds to me like you are using a 1990s card too; AFAIK "Catalyst" is no longer supported and it's certainly not bundled with recent cards.

Not only is CCC still a thing, a bug-ridden piece-of-shit thing which can cause systems to crater and which amounts to 150MB for a preferences GUI, but ATI abandons cards much, much more quickly than nVidia does. Indeed, when I bought my last ATI-graphics product new in the store (so old it's an R690M-based subnotebook), it was all of the following things:

  • Never getting another/newer official graphics driver for any OS
  • Unsupported by fglrx as being "too old" and unsupported by the OSS ati driver as being "too new"

That's right: it was not just obsoleted but abandoned while it was still being sold.

The nvidia driver is enormous because one download supports cards practically back to the time of Methuselah. It hasn't really been that long since they finally abandoned support for literally their oldest cards. AMD abandons them while they're still on store shelves. I don't care if it's because they're spread too thin, or just because they're assholes, or because the heavens conspire against them. It just doesn't make sense to use their graphics cards. You seem to have noticed this, as you have an nVidia card.

Re:Likely a faulty chair-to-keyboard interface. (0)

Anonymous Coward | about 6 months ago | (#47171209)

I've had something similar happen, although my card was pretty well supported by the OSS driver; some things even worked better. But the 3D performance wasn't as good. I was a little upset, as the laptop was only about 3 years old.

Re:Optimizing the driver stack... (1, Funny)

fey000 (1374173) | about 6 months ago | (#47162829)

...Stop creating new cards I can cook an egg on...

I think I've found your problem. What you are looking for is called a skillet, and it does not go in the computer.

Re:Optimizing the driver stack... (1)

Anonymous Coward | about 6 months ago | (#47162189)

Heh... You'd like to think that, but you'd be mostly wrong.

The problem with "broken" drivers and "quality" is less an optimization and overall fit-and-finish deal with AMD versus NVidia- and more of something most wouldn't get unless they'd been digging in either company's proprietary codebases at some point combined with being IN the Games Industry and really understand the story properly.

AMD's drivers tend to explicitly follow the OpenGL standards. To a fault.

NVidia's driver compensates for many inappropriate mis-uses thereof (but not all, mind...) and oftentimes doesn't get it fully "right" either - but that is what the game studios end up coding towards: the noncompliant driver behavior.

Take a long, wild guess what devs at the studios TYPICALLY do. They implement code that's "clever" and omits a shader parameter here, an alpha there, and it largely works with NVidia's drivers at speed because NVidia's code makes assumptions when you're missing stuff. They implement code that does things like recycling VBOs intraframe instead of the sensible interframe use (because if you'd READ the spec, you'd know there's a good chance you'll stall the pipeline doing this stupid thing...), dragging a top-end card from a hundred or so FPS down to seconds per frame.
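To make the VBO point concrete, here is a minimal OpenGL/C sketch of the two patterns. It is illustrative only: the names, buffer size and usage hints are made up, and it assumes an existing context plus a loader (e.g. GLEW) that exposes GL 1.5+ entry points.

    /* Hedged sketch, not from any real engine. */
    #include <GL/glew.h>
    #include <stddef.h>

    #define BUF_SIZE (64 * 1024)   /* illustrative size */

    /* Risky: re-filling the same storage several times within one frame.
       The GPU may still be reading the previous contents for an earlier
       draw call, so a strict driver has to stall until that draw finishes. */
    static void upload_intraframe(GLuint vbo, const void *data, size_t n)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferSubData(GL_ARRAY_BUFFER, 0, (GLsizeiptr)n, data); /* may stall */
    }

    /* Friendlier: "orphan" the old storage first. glBufferData with NULL
       hands the driver fresh memory, so pending draws keep reading the old
       copy and no synchronization is needed. */
    static void upload_orphaned(GLuint vbo, const void *data, size_t n)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, BUF_SIZE, NULL, GL_STREAM_DRAW);
        glBufferSubData(GL_ARRAY_BUFFER, 0, (GLsizeiptr)n, data);
    }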

Correct the engine code doing the rendering so it's not doing stupid crap, and the game FLIES on both companies' products, maxed out at 100+ fps, and does it stably...on both GPU targets.

AMD would love to get in the loop with the studios like NVidia is right now, so they can work with them and help them make performant code that doesn't take shortcuts. In many cases, their games don't run as well as they could on both NVidia and AMD because of the stuff that NVidia's propping them up on. If you did it "right", the games would actually tend to run faster - and it's not even remotely hard to get it "right". Hell, they don't even get it "right" on D3D in the same sense, so it's not that OpenGL's difficult to code for. It isn't. It's just that nobody ever stops to check whether they've pooched themselves on an API edge by...funny, this...reading the damn standards and documentation for a change. AMD doesn't want "NVidia's 'optimizations'", because if they did, they'd have been reverse engineering them all along and getting damned close to them. Thing is, it's mostly a waste of time trying to play "catch-up" there - so they work on the fast "correct" path per the specs and try for a larger market share to get where they want to be.

(And a hint for you: The reason NVidia's not opened up is more of a brown-paper-bag-over-the-head reason than the one you allude to- much of their stuff isn't all that great...)

Re:Optimizing the driver stack... (0)

Anonymous Coward | about 6 months ago | (#47162257)

They implement code that's "clever" and omits a shader parameter here, an alpha there, and it largely works with NVidia's drivers at speed because NVidia's code makes assumptions when you're missing stuff.

And AMD's perf tools identify that if there is an issue.

They implement code that does things like recycling VBOs intraframe instead of the sensible interframe use (because if you'd READ the spec, you'd know there's a good chance you'll stall the pipeline doing this stupid thing...), dragging a top-end card from a hundred or so FPS down to seconds per frame.

Which is equally bad for nvidia and amd.

Correct the engine code doing the rendering so it's not doing stupid crap, and the game FLIES on both companies' products, maxed out at 100+ fps, and does it stably...on both GPU targets.

What kind of nonsense is that? These coding speed hacks are what prevent games from running maxed out at 100fps? I suppose the reason they don't code properly is all some big conspiracy to drive hardware sales now, is it?

AMD would love to get in the loop with the studios like NVidia is right now, so they can work with them and help them make performant code that doesn't take shortcuts.

They can and do do that with many studios; the results aren't some phenomenal "maxed out at 100fps".

If you did it "right", the games would actually tend to run faster- and it's not even remotely hard to get it "right".

So nobody does it right but that is beneficial to nvidia ... right.

(And a hint for you: The reason NVidia's not opened up is more of a brown-paper-bag-over-the-head reason than the one you allude to- much of their stuff isn't all that great...)

But apparently they don't need to; all the game developers have to do is do it right.

"Quirks mode" all over again? (3, Insightful)

DrYak (748999) | about 6 months ago | (#47162401)

To me it sounds again like the beginning of the Internet Explorer vs. Firefox fight over compliance with HTML standards.

Down to the details of how it pans out:
- One company is the popular one (Microsoft, Nvidia), so everybody codes to its platform (IE, the drivers) and ends up unknowingly producing bad code that happens to rely on the peculiarities of that platform (the non-standard assumptions of Nvidia's drivers, the weird re-interpretation of HTML done by IE's engine). When there are problems, they tend to hack their own code.
- The other company is the underdog (Mozilla, AMD), making a platform (Firefox, Catalyst) that tries to follow the open standard to the letter (HTML5, OpenGL), but in the end other people's code (websites, game code) behaves poorly, because it breaks the standard and relies on quirks that aren't present on that platform. The users complain about the problems (broken HTML rendering worse under Firefox than IE, non-compliant OpenGL code degrading more on AMD than on Nvidia hardware).

Funnily, if past history is any indicator, in the long run AMD's approach is better, and either they or one of their successors is bound to manage to make OpenGL compliance more important than driver tricks.
(The fact that AMD is dominating the current generation of consoles might help bring more power to them.)
Interestingly, the embedded world might also end up helping, just like it did in the browser wars (Internet Explorer was far less prevalent on embedded machines like PDAs/smartphones/tablets than on desktops, the problems with broken HTML became much more apparent there, and compliance with HTML5 [sure to run on as many platforms as possible] became the deciding factor; the embedded ecosystem also mostly centered around compliant engines like WebKit), due to the same factors: an extremely heterogeneous hardware ecosystem, where Nvidia is just one player among tons of others with its Tegra platform, and where compliance with OpenGL ES is going to be the deciding factor, as embedded platforms need a lingua franca to ensure that porting an engine is as smooth as possible and works easily on all smartphones/tablets, no matter whether they boast PowerVR, Vivante, Lima, Adreno, etc.

Maybe we need something along the lines of the Acid test and the W3C conformance tests to exercise drivers and test game code for standard non-compliance.
(That partly exists as "piglit" - the test suite that freedesktop.org uses to test the open-source Mesa and Gallium drivers.)
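Short of a full "Acid test for drivers", one partial stopgap already in the API is debug output (KHR_debug, core in OpenGL 4.3). A hedged sketch, assuming a debug-enabled 4.3 context and a loader such as GLEW; the callback body is purely illustrative:

    #include <GL/glew.h>
    #include <stdio.h>

    /* On Windows this callback also needs the APIENTRY calling convention. */
    static void gl_debug_cb(GLenum source, GLenum type, GLuint id,
                            GLenum severity, GLsizei length,
                            const GLchar *message, const void *user)
    {
        (void)source; (void)id; (void)length; (void)user;
        /* Drivers route API errors, undefined/deprecated usage and
           performance warnings through this channel. */
        fprintf(stderr, "GL debug [type 0x%x, severity 0x%x]: %s\n",
                type, severity, message);
    }

    static void enable_gl_debug(void)
    {
        glEnable(GL_DEBUG_OUTPUT);
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS); /* report at the offending call */
        glDebugMessageCallback(gl_debug_cb, NULL);
    }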

Re:"Quirks mode" all over again? (0)

Anonymous Coward | about 6 months ago | (#47162795)

- One company is the popular one (Microsoft, Nvidia), so everybody codes to its platform (IE, the drivers) and ends up unknowingly producing bad code that happens to rely on the peculiarities of that platform (the non-standard assumptions of Nvidia's drivers, the weird re-interpretation of HTML done by IE's engine). When there are problems, they tend to hack their own code.
- The other company is the underdog (Mozilla, AMD), making a platform (Firefox, Catalyst) that tries to follow the open standard to the letter (HTML5, OpenGL), but in the end other people's code (websites, game code) behaves poorly, because it breaks the standard and relies on quirks that aren't present on that platform. The users complain about the problems (broken HTML rendering worse under Firefox than IE, non-compliant OpenGL code degrading more on AMD than on Nvidia hardware).

Not really comparable, since IE not only had non-standard "features", the use of which broke other browsers, but it also had poor support for pages that were fully compliant with up to date HTML and CSS standards (obviously, this encouraged web developers to use the - actually working - IE-only alternatives, and thus get locked into a Microsoft environment). The same attitude could be seen from Microsoft towards new C and C++ standards (like C99).

On the other hand, it is not necessary to use any proprietary extensions or depend on implementation defined behavior to make OpenGL code work on the Nvidia driver, but fail on anything else. As the tests at http://www.g-truc.net/post-0655.html#menu show, the Nvidia driver has better (or equal when both scored 100%) API compatibility than Catalyst or Mesa at all tested OpenGL versions. Unlike in the case of IE, "coding to the standard" could very well end up working better (or at all) on Nvidia anyway.

Re:Optimizing the driver stack... (2, Informative)

Anonymous Coward | about 6 months ago | (#47162715)

AMD's drivers tend to explicitly follow the OpenGL standards. To a fault.

That is a popular excuse, especially for the open source drivers that frequently have problems with newer commercial games, but having more complete support for what is in the standard and being more permissive to what is not are not mutually exclusive. For example, see this page for some actual conformance testing results: http://www.g-truc.net/post-0655.html#menu As you can see, the Nvidia binary driver clearly passes a higher percentage of the tests than any of the others, and it is the only driver to pass all samples from OpenGL 3.3 to 4.4.

From a consumer's point of view, it is also a poor attitude for Mesa developers to interpret "implementation-defined behavior" as "license to break anything as we see fit" (GCC developers tend to do the same, by the way, even though the compiler has its own set of non-standard extensions as well). They are free to add a configuration option that lets the user choose between strict conformance (mainly for developers testing their code) and maximum compatibility, but casual consumers will not care why the game they paid for fails to work; if it keeps happening, they will ignore the excuses and just delete Linux and go back to Windows/Direct3D.

Re:Optimizing the driver stack... (1)

mikael (484) | about 6 months ago | (#47164055)

I'd say the fundamental problem is that the specifications themselves are a patchwork of changes written in natural language.
The original specification is written before the original driver code is modified, or it is derived from an existing driver for one hardware system and then recoded into a new driver for another hardware system. With other device drivers (networking), each extension specification is actually written in a high-level language which can be processed straight into device driver code.

Direct3D has the advantage that the hardware must match the software specification, while OpenGL is more extension layered on extension across different hardware. Since each vendor has different hardware and supported extensions, the implementation of one extension may or may not affect other extensions. For example, you could support FBOs (framebuffer objects) using textures as a destination, but if you then implement compressed textures, those textures can't be used with FBOs, and additional code has to be added to prevent that use. Usually the reason you can't use a particular combination of extensions is simply that the hardware logic hasn't been implemented yet.
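One saving grace is that OpenGL at least lets an application ask whether a given attachment combination is actually implemented, rather than assuming it. A minimal sketch; the texture format and size are just for illustration, and a GL 3.0+ context plus a loader (e.g. GLEW) is assumed:

    #include <GL/glew.h>

    /* Returns 1 if this particular FBO/texture combination is usable here. */
    static int fbo_combo_supported(void)
    {
        GLuint fbo, tex;
        GLenum status;

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);

        /* GL_FRAMEBUFFER_UNSUPPORTED means the usage is legal but this
           hardware/driver combination doesn't implement it: fall back. */
        status = glCheckFramebufferStatus(GL_FRAMEBUFFER);

        glDeleteTextures(1, &tex);
        glDeleteFramebuffers(1, &fbo);
        return status == GL_FRAMEBUFFER_COMPLETE;
    }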

Re:Optimizing the driver stack... (1)

spire3661 (1038968) | about 6 months ago | (#47162253)

"Hardware-wise AMD currently has better bang for a euro than nvidia,"

I would say this is a matter of opinion. The 750 Ti is a MONSTER at its price point, especially with ShadowPlay.

Re:Optimizing the driver stack... (1)

GigaplexNZ (1233886) | about 6 months ago | (#47162329)

Last I checked, the 750 Ti had amazing performance per watt, but performance per dollar it fell behind AMD cards such as the 260X and 270X.

Re:Optimizing the driver stack... (1)

spire3661 (1038968) | about 6 months ago | (#47162623)

Its feature set, in my opinion, blows away those cards. Better drivers, cheaper to run, cooler, no power connections needed. Cheapest doesn't mean 'best value'.

Re:Optimizing the driver stack... (0)

Anonymous Coward | about 6 months ago | (#47162617)

"Bang for a euro"? Really? The reason the phrase is "Bang for the buck" is because it's an alliteration. I guess you failed to notice that.

Re:Optimizing the driver stack... (1)

Technomancer (51963) | about 6 months ago | (#47162691)

First, I don't think AMD (or any other company) execs would recognize a driver optimization if it hit them in the face.
Second, do you think nVidia is hiring from a different talent pool than AMD? Neither company has any secret magic-sauce driver optimizations that a well-trained monkey at the other company couldn't come up with. If you look into the nouveau and radeon open-source kernel and Mesa drivers you will be able to see how much easier nVidia hardware is to work with; that may be one of the reasons nVidia drivers are less of a mess. But not by much. I've had my fair share of driver hangs and memory corruption on my laptops with nVidia cards (the GTX 560M in an Asus G53SX in particular).

Re:Optimizing the driver stack... (1)

oji-sama (1151023) | about 6 months ago | (#47163231)

In my experience the AMD drivers & software have been more stable. Probably I just have bad luck, but just today I got a pop-up from Nvidia Experience saying there's an update. I clicked it and got 'cannot connect to Nvidia servers' or something similar. The last driver update failed to update one of the components. And the display adapter has crashed once (it managed to recover, though).

I'm currently using a GTX 670 (I got the Windforce model and it is really quiet) and I'm reasonably happy with it, but I had none of these update problems with my previous HD6850. Of course, my thinking may be coloured by the horrible nvidia chipset I once had, or by the lack of support for my previous laptop GPU. Compared to those, my current GPU problems are minimal at worst.

Re:Optimizing the driver stack... (1)

Luckyo (1726890) | about 6 months ago | (#47166309)

I don't use nvidia experience as I prefer to have manual control over most of the card's features. That said, I've had the update problems as well, and most of them were firewall-related.

Re: Optimizing the driver stack... (1)

Redbehrend (3654433) | about 6 months ago | (#47167155)

Not sure why the AMD hate. I gave my friend my three-year-old AMD 5770, which I bought for 150 a couple of years ago, and he plays everything on high. I feel they have better Linux drivers by far, and they have attempted to keep old hardware supported longer. In my experience AMD has won on bang for the buck and lasts longer; just stay away from the cheap hardware brands. NVIDIA is trying to push them out: they are already trying to force NVIDIA-only hardware on monitors, and the monitor companies are saying no, a universal standard is the key to getting a slot.

Re:Optimizing the driver stack... (1)

plonk420 (750939) | about 6 months ago | (#47163315)

What driver issues? I have yet to see gaming-related driver issues...

Launch-day games I've tried with no issues on my 5870 include BF3 (including the "paid beta"), Sup Com 2, Starcraft 2, Portal 2, the Crysis 2 trial, NFS: Shift, Metal Gear Rising, Borderlands 2, Skyrim*, Train Simulator 2014* (* = launch + 1 month)

The only issues I've had were with a few demoscene nvidia-favoring demos and some bitcoin-related OpenCL driver combinations.

Re:Optimizing the driver stack... (1)

Luckyo (1726890) | about 6 months ago | (#47166325)

Good for you. I've had more than my share of games that worked atrociously on release day back when I was still playing on the 4870, and driver settings that just plain refused to work.

I'm not alone in that experience either, unfortunately. It's not like it has scared me off AMD; as I said, I still use it. Just not for performance stuff, where I need reliability and stability more than a few extra FPS.

Re:Optimizing the driver stack... (1)

GiganticLyingMouth (1691940) | about 6 months ago | (#47167867)

Their OpenCL drivers are pretty bad. Completely arbitrary changes can have huge impacts on things like VGPR usage. Optimizing an OpenCL kernel on an AMD card is like black magic. Not that I'm praising NVIDIA here, they're still on OpenCL 1.1...

Re:Optimizing the driver stack... (1)

drinkypoo (153816) | about 6 months ago | (#47163513)

It's a well known reality that ATI had severe problems with driver quality from early stages, and these problems persist today long after AMD bought the company.

And by "from early stages" you mean from the beginning, I hope. I've been having ATI graphics blow up windows since 3.1 with the Mach32. Even RADIUS made more reliable video cards. I wish they'd stuck around and ATI was gone now.

GameWorks is an arcade (1)

Anonymous Coward | about 6 months ago | (#47161871)

This article is very confusing to me

You don't need 3 BILLION to optimize software (0)

Anonymous Coward | about 6 months ago | (#47161879)

2 BILLION will do fine.

Sometimes things aren't done for evil. (2)

mindmaster064 (690036) | about 6 months ago | (#47161907)

Bottom line: if a game runs poorly on a graphics device, AMD and NVIDIA get blamed directly. This program is merely NVIDIA's tack toward improving user perception. They know that if you have a problem running software on one of their cards, you will probably go buy a Radeon. The computing hardware in each card is far beyond the capacity of any single developer to fully understand at this point. You need a glue layer and technical resources to properly expose the interfaces. The problem is when one vendor is specifically excluded from the glue layer. Both of these vendors have been cheating benchmarks by analyzing which game is attempting to access the features and then dumbing them down selectively in barely perceptible ways to artificially pump benchmark results. The problem I have with NVIDIA doing this is mostly that they typically have their own black-box code (that is closed) and you have no idea how it is interacting. If it interacts poorly with your application you are just screwed. There is nothing to fix; you must patch around it. Ergo, the state of the current NVIDIA drivers on Linux. =)

maybe it's time for a new graphics api standard? (1)

epyT-R (613989) | about 6 months ago | (#47161925)

When OpenGL 1.0-1.3 (and DX5/6/7) was king, GPUs were fixed-function rasterizers with a short list of toggleable features. These days the pixel and vertex shader extensions have become the default way to program GPUs, making the rest of the API obsolete. It's time for the principal vendors to rebuild the list of assumptions about what GPUs can and should be doing, design an API around that, and build hardware-specific drivers accordingly.

The last thing I want is another glide vs speedy3D...err I mean amd mantle vs nvidia gameworks.

Re:maybe it's time for a new graphics api standard (1)

PhrostyMcByte (589271) | about 6 months ago | (#47162159)

It's time for the principal vendors to rebuild the list of assumptions about what GPUs can and should be doing, design an API around that, and build hardware-specific drivers accordingly.

For the most part, they've done that. In OpenGL 3.0, all the fixed-function stuff was deprecated. In 3.1, it was removed. That was a long, long time ago.

In recent times, while AMD has introduced the Mantle API and Microsoft has announced vague plans for DX12, both with the goal of reducing CPU overhead as much as possible, OpenGL already has significant low-overhead support [gdcvault.com].
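As a concrete illustration (GLFW is used here only because it makes context attributes easy to show; any windowing layer works the same way), requesting a core profile gives you a context in which the removed fixed-function entry points such as glBegin and glMatrixMode simply are not available:

    #include <GLFW/glfw3.h>

    /* Sketch: create a window with a 3.3 core profile context. */
    static GLFWwindow *create_core_window(void)
    {
        if (!glfwInit())
            return NULL;
        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
        glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
        /* In this context the pre-3.1 fixed-function API (glBegin, the
           matrix stack, built-in lighting, ...) is gone; everything goes
           through shaders and buffer objects. */
        return glfwCreateWindow(1280, 720, "core profile", NULL, NULL);
    }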

Re:maybe it's time for a new graphics api standard (1)

exomondo (1725132) | about 6 months ago | (#47162227)

And Apple has followed that with its own "Metal".

Re:maybe it's time for a new graphics api standard (0)

Anonymous Coward | about 6 months ago | (#47165349)

Wasn't MeTal a rendering API on S3 cards?

Re:maybe it's time for a new graphics api standard (1)

mikael (484) | about 6 months ago | (#47164153)

You should look at the latest OpenGL ES specification. This is OpenGL optimized for mobile devices and gets rid of most of the old API bits while still supporting vertex, fragment and compute shaders. Anything else is just implemented using shaders.

But mantle gives you access to the hardware registers (those descriptors) while avoiding the overhead of updating the OpenGL state, then determining what has changed and hasn't, then writing those values out to hardware.

I don't get the outrage (0)

Anonymous Coward | about 6 months ago | (#47161945)

Mind you, I dislike lack of choice just as much as the next guy. But I'm not sure what exactly Nvidia is being accused of here. They make graphics cards and they make middleware that works better with their own graphics cards than their competitors'. What's the problem?

Wrong target of blame. (1, Insightful)

DMJC (682799) | about 6 months ago | (#47161963)

Frankly, it's time to stop blaming NVIDIA and start blaming ATi. Yes, everyone likes the underdog, but in this case, seriously? They had 20 years to get OpenGL correct. No one has been blocking them from writing their own drivers for Linux/Mac/Windows. Frankly I think that ATi has made a huge engineering mistake by only focusing on Win32 and by not supporting Unix from day one as a first class citizen. They've shot themselves in the foot, and now they expect the industry to clean up the mess by conforming to ATi. I don't recall NVIDIA anywhere holding a shotgun to our heads and demanding we use OpenGL or else. They just made OpenGL available and, importantly, WORKING. OpenGL wasn't even NVIDIA's project originally; it's inherited from SGI. They've had approximately 20-25 years to implement an open spec, and they've failed to do so at every step. I've watched over the last 13 years as NVIDIA grew from a buggy, hard-to-compile mess on Linux to the stable, fully featured driver it is now. ATi has never pulled off a competent GL implementation in all those years. Now people want to bring in conspiracy theories about NVIDIA blocking ATi from developing software? What a joke.

Re:Wrong target of blame. (1)

viperidaenz (2515578) | about 6 months ago | (#47162007)

Which nvidia drivers do you compile?
geforce is a binary driver, and nv doesn't support 3D and is no longer maintained.
nouveau isn't developed by nvidia.

Re:Wrong target of blame. (3, Funny)

Arkh89 (2870391) | about 6 months ago | (#47162027)

The part of the driver which is compiled as a kernel module to serve as an adapter to the binary blob?
Did you think it wanted the linux-headers package just for the fun of reading it in its own time?

Re:Wrong target of blame. (2)

DMJC (682799) | about 6 months ago | (#47162031)

You've always had to compile the interface layer between NVIDIA's blob and XFree86. Don't even make me go into how complicated and messy that process was. The installer script is so much easier than the shit we used to deal with back in the TNT2/GeForce 2 days.

Re:Wrong target of blame. (1)

DMJC (682799) | about 6 months ago | (#47162041)

ftp://download.nvidia.com/XFre... [nvidia.com] This is an example; notice where it says to run MAKE and MAKE INSTALL. And you had to do it multiple times, in different folders... and half the time it broke. And you didn't know why. Absolute madness.

Re:Wrong target of blame. (1)

exomondo (1725132) | about 6 months ago | (#47162043)

geforce is a binary driver

And how do you go about having it support a kernel with an unstable ABI?

AMD supports openGL just fine (1)

dutchwhizzman (817898) | about 6 months ago | (#47162179)

AMD supports OpenGL just fine, but they don't fail sloppy programming gracefully. The Nvidia driver tends to try to make "something you probably sort of meant anyway" out of your illegal OpenGL call, while AMD fails you hard with an error message. That's no reason to blame the manufacturer. The game developers deserve the blame for sloppy coding and sloppy testing.
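A commonly cited example of such an "illegal" call (hedged: exact behaviour differs between drivers and versions) is drawing in a core profile with no vertex array object bound, which the spec defines as an error but which some drivers have historically tolerated. A minimal C sketch, with buffer and shader setup omitted and the names made up:

    #include <GL/glew.h>

    /* Spec-incorrect core-profile code that a lenient driver may accept;
       `vbo` is assumed to already contain vertex data. */
    static void draw_without_vao(GLuint vbo)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void *)0);
        /* No glBindVertexArray() was ever issued: in a core profile this is
           GL_INVALID_OPERATION, but a permissive driver may draw anyway. */
        glDrawArrays(GL_TRIANGLES, 0, 3);

        if (glGetError() != GL_NO_ERROR) {
            /* The strict driver is telling us something: create and bind a
               VAO first, and keep checking glGetError() during development. */
        }
    }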

Re:AMD supports openGL just fine (2)

blackpaw (240313) | about 6 months ago | (#47162239)

AMD supports OpenGL just fine, but they don't fail sloppy programming gracefully. The Nvidia driver tends to try to make "something you probably sort of meant anyway" out of your illegal OpenGL call, while AMD fails you hard with an error message. That's no reason to blame the manufacturer.

Nvidia is hewing to the following:

Robustness principle [wikipedia.org]

In computing, the robustness principle is a general design guideline for software:

        Be conservative in what you do, be liberal in what you accept from others (often reworded as "Be conservative in what you send, be liberal in what you accept").

The principle is also known as Postel's law, after Internet pioneer Jon Postel, who wrote in an early specification of the Transmission Control Protocol

It's generally a good idea when implementing a standard, if you want people to use it. Slavish perfectionism is the bane of developers and has killed many a project.

Re: AMD supports openGL just fine (0)

Anonymous Coward | about 6 months ago | (#47162523)

Didn't the same Postel later say that robustness was his worst mistake?

Re:AMD supports openGL just fine (1)

ameen.ross (2498000) | about 6 months ago | (#47162589)

That's interesting. Coding as a kid, I more or less came up with the same principle for my little programs. I also later figured that it was misguided to leave robustness up to the implementation, instead of the specification (or in my case the function definition).

API functions that have any reasonable expectation of default values should just define those defaults, not silently default to something seemingly random and completely undocumented.

Re:AMD supports openGL just fine (0)

Anonymous Coward | about 6 months ago | (#47162695)

Slavish perfectionism is the bane of developers and has killed many a project.

And yet riddle me this, Batman: how is Linux kernel development run? What percentage of the world's fundamental computer infrastructure runs on a Debian or Theo & co. derived system?

The ability to pursue slavish perfectionism without fear of going out of business, or slavish perfectionism AS the motive instead of profit, is what has given us the tools that literally drive modern civilization.

It makes us great.

Trying the same thing in a place where you need to turn a profit next quarter or be out of a job, yeah, dumb mistake. But in the long run what are those tools but ephemeral trinkets? Either go big or go open source. There is no other long-term solution.

Re:AMD supports openGL just fine (1)

Raenex (947668) | about 6 months ago | (#47162835)

It's a garbage principle that makes a mess of the ecosystem, because then you have each implementation making different decisions on just how much slop you allow, resulting in programs that work differently on different systems. It's better to have hard errors.

Re:AMD supports openGL just fine (1)

JesseMcDonald (536341) | about 6 months ago | (#47164313)

While I agree that the principle can result in a mess if misapplied, my interpretation has always been that "be liberal in what you accept" only means that you should avoid defining rigid input formats full of arbitrary rules. If there are a number of different ways to say the same thing, such that it's still clear what was meant, accept them all as equivalent. Allow independent modifiers to be written in any order; don't put artificial restrictions on which characters can be used in a name, or the specific type or amount of whitespace needed to separate symbols. That sort of thing.

This is a principle that should be applied when defining the format, not just when implementing the parser. Once you have the format defined, input which is not in compliance should trigger a warning at the minimum. It remains much better to reject ambiguous input with an error rather than silently guessing at the intent and getting it wrong.

One area where "be liberal in what you accept" does affect the implementation is that a program should accept every valid input defined by the standard, not just the most common patterns. Accepting only a subset of the valid inputs is an easy way to introduce incompatibility.

Re:AMD supports openGL just fine (1)

Raenex (947668) | about 6 months ago | (#47164801)

While I agree that the principle can result in a mess if misapplied, my interpretation has always been that "be liberal in what you accept" only means that you should avoid defining rigid input formats full of arbitrary rules.

If you read the Wikipedia article [wikipedia.org] , you'll see that it came about as advice for implementing the TCP protocol.

Re:AMD supports openGL just fine (1)

JesseMcDonald (536341) | about 6 months ago | (#47165529)

If you read the Wikipedia article, you'll see that it came about as advice for implementing the TCP protocol.

Yes, and I did say that it can result in a mess if misapplied. The right time to consider what to accept and how to interpret it would have been when writing the TCP standard, not at the implementation stage, but we can't always have what we want. It's still a good idea to be liberal in what you accept, perhaps especially so when the standard is lacking in detail, since you never know just how the sender might interpret a particular section. You need to make your software work with all other reasonable implementations of the standard, not only those who interpret it the precise way you do. Just don't stretch it to the point of making up arbitrary data where the input is unclear.

Re:AMD supports openGL just fine (1)

Raenex (947668) | about 6 months ago | (#47167731)

Yes, and I did say that it can result in a mess if misapplied.

You can't tell me it's being misapplied when the origin applies it in exactly that manner. You misunderstood, that is all.

Re:AMD supports openGL just fine (0)

Anonymous Coward | about 6 months ago | (#47163295)

In computing, the robustness principle is a general design guideline for software:

        Be conservative in what you do, be liberal in what you accept from others (often reworded as "Be conservative in what you send, be liberal in what you accept").

The principle is also known as Postel's law, after Internet pioneer Jon Postel, who wrote in an early specification of the Transmission Control Protocol

It's generally a good idea when implementing a standard, if you want people to use it. Slavish perfectionism is the bane of developers and has killed many a project.

No it's not. Not always, at least.
"Be liberal in what you accept" is what made HTML the mess it is, with parsers now having to accept any malformed input without really being more robust as a result. There are examples in other domains.

Re:AMD supports openGL just fine (2, Insightful)

AndOne (815855) | about 6 months ago | (#47162601)

AMD supports OpenGL just fine? That's gotta be the quote of the day. If AMD ever supports OpenGL just fine I'll throw them a fucking parade. Kernel panicking and bringing the entire system to its knees because AMD doesn't check whether a target is attached to a shader output? Sloppy coding, yes, but not an acceptable response from the driver, as you have no idea where the crash is coming from. No OpenGL error message, just a crashing system. GPU-to-GPU buffer copies don't work on AMD unless you first do some other memory operation on the target buffer. Numerical precision is often dodgy on AMD cards. Phantom shader errors on certain generations of cards with no error message string... just failure to compile. Hell, there was a series of crashes with AMD cards just trying to bind and clear framebuffers that only ended up being fixed by a driver update. Allocating too many VBO IDs causes cascading memory consumption and kills your application. And those are just the bugs I remember offhand today (and these were in their supposedly "good" Windows drivers; I shudder to think how bad their Linux stuff is).

If the spec is even remotely vague about something, it seems like AMD consistently chooses the least robust and most slapdash way to resolve the ambiguity.

As a side note, I'm fairly certain AMD only opened up their drivers because they're so terrible at writing them that they're hoping the community will do it for them. It's not altruism; the bottom line is they don't have the resources to do it well and decided to try to look good. It's just PR.

Re:Wrong target of blame. (0)

Anonymous Coward | about 6 months ago | (#47162221)

How about you read the rant above from the guy replying to the FIRST post? You've got it pretty much all dead wrong on many counts, even on the Linux side of things. This is from someone in the know who has built a good part of this stuff from the ground up.

That took talent, sir.

Why am I posting Anon? So they don't cut me off, obviously... >;-D

Re:Wrong target of blame. (2)

stox (131684) | about 6 months ago | (#47162277)

Just by coincidence, a lot of Nvidia engineers were "inherited" from SGI.

Re:Wrong target of blame. (0)

Anonymous Coward | about 6 months ago | (#47162825)

"Frankly I think that ATi has made a huge engineering mistake by only focusing on Win32 and by not supporting Unix from day one as a first class citizen,"

Well, i dunno. The ps4 and xbone are larger markets than the unix crowd. Makes perfect sense to me.
Maybe not that pc gaming is slooooowly creeping towards linux that they will change their focus. But their strategy made perfect sense in the past.
Remember 3Dfx? They had a nifty opengl implementation. Where are they now?
A gpu's manufacturers success is obviously not measured in the success they have in implementing opengl.

Re:Wrong target of blame. (1)

uncomformistsheep (2950041) | about 6 months ago | (#47162927)

The PS4 and Xbone are larger markets than the Unix crowd.

The PS4 also uses OpenGL.

Re:Wrong target of blame. (2)

Narishma (822073) | about 6 months ago | (#47162949)

No, it doesn't. Can we stop with this myth? The only main console to have supported OpenGL to some degree was the PS3 with the very slow PSGL (OpenGL ES 1.0 + Nvidia Cg shaders + proprietary extensions) that only a handful of indie PSN titles ever bothered to use for easy porting.

Re:Wrong target of blame. (2)

Yunzil (181064) | about 6 months ago | (#47164499)

Frankly I think that ATi has made a huge engineering mistake by only focusing on Win32 and by not supporting Unix from day one as a first class citizen,

Yeah, how stupid of them to focus on a platform that has 90%+ of the market. Clearly it would have been a better decision to dump all their resources into a niche platform.

And looking in the mirror.. (0, Flamebait)

thesupraman (179040) | about 6 months ago | (#47162123)

"Since AMD debuted its Mantle libraries there's been allegations that they unfairly disadvantaged NVIDIA users or prevented developers from optimizing code."

Get the idea?

Re:And looking in the mirror.. (0)

arbiter1 (1204146) | about 6 months ago | (#47162213)

Yeah, but mostly no one cares because it's AMD; they get a free pass when it happens, but when it's against nVidia it's huge news everywhere.

Different perspective (4, Insightful)

DrYak (748999) | about 6 months ago | (#47162439)

AMD's perspective is that Mantle is less problematic:
- Mantle's specs are open.
- It's also just a very thin layer above the bare hardware. Actual problems will mostly be confined to the actual game engine.
- Game engine code is still completely in the hands of the developer, and any bug or shortcoming is fixable.
Whereas, regarding GameWorks:
- It's a closed-source black box.
- It's a huge middleware, i.e. part of the engine itself.
- The part of the engine that is GameWorks is closed, and if there are any problems (like not following the standard and stalling the pipeline) there is no way a developer will notice them and be able to fix them, even if AMD is willing to help. Nvidia, on the other hand, could fix this by patching around the problem in the driver (as usual), because they control the stack.

So from their point of view and given their philosophies, GameWorks is really destructive, both to them and to the whole market in general (GameWorks is as much of a problem for ATI as it is for Intel [even if Intel is a smaller player] and for the huge, diverse ecosystem of 3D chips in smartphones and tablets).

Now, shift the perspective to Nvidia.
First, they are the dominant player (AMD is much smaller, even if it is the only other player worth considering).
So most people are going to heavily optimise their game for Nvidia hardware, and then maybe provide an alternate "also-ran" back-end for Mantle (just like in the old days of Glide / OpenGL / DX backends).
What does Mantle bring to the table? Better driver performance? Well... Nvidia has been into the driver optimisation business *FOR AGES*, and they are already very good at it. Which is more likely: that, in case of performance problems, developers will jump en masse to a newer API that is only available from one non-dominant PC player and a few consoles, and is completely missing on every other platform? Or that Nvidia will patch around the problem by hacking their own platform, and devs will continue to use the APIs they already do?
In Nvidia's perspective and way of working, Mantle is completely irrelevant, barely registering a "blip" on the marketing radar.

That's why there's some outcry against GameWorks, whereas the most Mantle has managed to attract is a "meh" (and it will mostly be considered yet another wannabe API that's going to die in the mid to long term).

Re:Different perspective (0)

Anonymous Coward | about 6 months ago | (#47162747)

- Mantle's specs are open.

How exactly are they open? http://developer.amd.com/mantle/ requires a password to enter.
When searching for "more technical details" the FAQ points to the ... whitepaper ( http://support.amd.com/en-us/search/faq/200 )

Not strictly the open/non-open spec problem, but still slightly relevant:
There's no word on OS support beyond Windows ( http://support.amd.com/en-us/search/faq/187 )

Re:Different perspective (1)

yenic (2679649) | about 6 months ago | (#47164765)

Nvidia has been into the driver optimisation business *FOR AGES*, and they are already very good at it.

So good they've been killing their own cards for years now.

2010 http://www.zdnet.com/blog/hard... [zdnet.com]
2011 http://forums.guru3d.com/showt... [guru3d.com]
2013 http://modcrash.com/nvidia-dis... [modcrash.com]

This has never happened once to AMD cards, because they're more conservative with their optimizations. NV isn't even the price/performance leader and rarely is. So you get to spend more, and they optimize the crap out of your drivers and card until they break it.
They're averaging almost one card-killing driver a year. No thanks. While both have bugs, I prefer AMD's superior driver support that doesn't kill your card.

Why AMD doesn't deserve sympathy (1)

DMJC (682799) | about 6 months ago | (#47162281)

I'd also add that considering the NVIDIA binary blob works on FreeBSD, Solaris, Mac OS X, and Linux as well as Windows, it is well engineered. The AMD/ATi driver doesn't even work correctly on Linux, and Apple had to write their own driver for Mac OS X. There is an officially available (from nvidia.com) driver for Mac OS X for their Quadro cards. It is pretty obvious that AMD/ATi has always favored Windows/Microsoft and has put minimal effort into supporting Unix based platforms. Now they're reaping what they've sown, and everyone's trying to defend them? No. You don't get to abandon a platform for years and then claim victimhood status for your own poor business choices. That doesn't fly. Intel is having their first crack at this graphics stuff with the current i915 drivers. They're solid and reliable, if a little lacking in features, but overall it works. My code written for Linux works on both nvidia and Intel. Not sure why the fact that it breaks on AMD doesn't shout to people that AMD is the problem, but there you go.

Re:Why AMD doesn't deserve sympathy (1, Interesting)

drinkypoo (153816) | about 6 months ago | (#47163499)

It is pretty obvious that AMD/ATi has always favored Windows/Microsoft and has put minimal effort into supporting Unix based platforms.

The same is true of nVidia; the definition of "minimal" over there is simply greater than it is at AMD. nVidia is well known to have aimed their cards directly at D3D support and filled in the gaps with [slower] software for OpenGL in the past. The difference is either in where they threw up their hands and said fuck it, or simply in the area of competence. They, too, put more of their effort into development for Windows. But they also manage to put together a working Linux driver. As you say, ATI can't even put together working Windows drivers. Anecdote of the moment: my [now ancient] Gateway LT3103u with the R690M/X1250 chipset works with official drivers under Vista, causes hangs on shutdown, reboot, hibernate, etc. under Windows 7, or works perfectly with the hacked "DNA" drivers for my system. That's right, I have to run hacked drivers under Windows to keep my ATI card from cratering my system. Meanwhile, under Linux, fglrx doesn't support it and radeon has massive display trashing which has only gotten worse over the years as ATI has continued to fail to provide the specifications needed to support it, in spite of their fake-ass bullshit lying claim to support the OSS driver.

nVidia ain't perfect, they have had numerous long-standing bugs, but they're still worlds ahead of ATI. I shudder to think of what would happen without ATI for competition, though, so I hope they continue to exist and I hope morons continue to buy their cards.

NVIDIA has better experience (1)

zisel (3561213) | about 6 months ago | (#47162613)

Developers see that NVIDIA has more experience in creating great hardware than AMD, I assume.

Avoiding answering (5, Informative)

citizenr (871508) | about 6 months ago | (#47163041)

Nvidia PAYS for removal of features that work better on AMD

http://www.bit-tech.net/news/h... [bit-tech.net]

Nvidia pays for insertion of USELESS features that work faster on their hardware

http://techreport.com/review/2... [techreport.com]

Nvidia cripples their own middleware to disadvantage competitors

http://arstechnica.com/gaming/... [arstechnica.com]

Intel did the same, but FTC put a stop to it
http://www.osnews.com/story/22... [osnews.com]

So how exactly is that not Nvidia's doing??

Nvidia is evil and plays dirty. They don't want your games to be good; they want them to be fast on Nvidia, by any means necessary. They use the "The Way It's Meant to be Played" program to lure developers in, pay them off, and hijack their games to further Nvidia's goals.

For example, how come Watch Dogs, a console title built from the ground up with AMD GPU/CPU optimizations to run well on both current-gen consoles, is crippled on PC when played on AMD hardware? How does this shit happen?

This is something the FTC should weigh in on, just like in Intel's case.

Re:Avoiding answering (2)

nhat11 (1608159) | about 6 months ago | (#47163131)

http://www.bit-tech.net/news/h... [bit-tech.net] - What? That's just speculation in the story.

http://techreport.com/review/2... [techreport.com] - The article doesn't state that Nvidia pays anyone; that's a statement you made up yourself.

At this point I decided not to waste any more of my time, after looking at the first 2 links.

Re:Avoiding answering (1)

citizenr (871508) | about 6 months ago | (#47165139)

http://la.nvidia.com/object/nz... [nvidia.com]

"The Way It's Meant to be Played"
Nvidia pays you a shitload of money for participating in this program, and can additionally guarantee certain sales goals (by bundling your product with their GPUs).
In order to participate you only have to do two things: insert an nvidia ad clip at the start of the game, and let nvidia rape your codebase.

On paper Nvidia pays you for a joint marketing campaign, but deep down in the paperwork you are letting them decide what your codebase will look like.

Oh look, what a coincidence: Watch Dogs, a game that is crippled on AMD, is a participant
http://www.geforce.com/getwatc... [geforce.com]

Re:Avoiding answering (2, Informative)

Ash Vince (602485) | about 6 months ago | (#47163373)

Nvidia PAYS for removal of features that work better on AMD

http://www.bit-tech.net/news/h... [bit-tech.net]

Reading the link you posted above, it seems like a bit of a non-factual load of waffle. Nvidia denies paying, Ubisoft denies being paid, and the only sources mentioned are anonymous speculators who, for all we know, are just a few paid ATI shills.

Nvidia pays for insertion of USELESS features that work faster on their hardware

http://techreport.com/review/2... [techreport.com]

Wow, another example of amazing journalism here.

Some guy moaning about Crysis having loads of detailing that is only used in the DirectX 11 version. He gives loads of examples of this, then posts a summary page of wild speculation with no sources quoted other than his own imagination. He never asks any of the companies involved; he just posts a bunch of stuff about why this might be the case.

I have another possible suggestion as to why this was the case: Crytek like making stuff look overly detailed and including graphics detailing that means their games continue to max out graphics cards long after they are released. They always make their games playable on budget cards if you crank the detail down, but they also like catering to people who buy a new graphics card and then go back and play a few oldies that they previously had to crank the detail down on. Crytek also probably quite like their games being used in hardware reviews, because their games hammer the hardware.

Nvidia cripples their own middleware to disadvantage competitors

http://arstechnica.com/gaming/... [arstechnica.com]

Ok, congratulations on actually posting an article that was real journalism, with quote sources and not just made up of the authors own conjecture.

The issue here, though, seems to be that there was an optimisation, moving from x87 to SSE, that they did not apply to a bunch of legacy code. Instead they rewrote it from scratch to use SSE, which took somewhat longer.

This was not them intentionally doing something to hobble a competitor; this was them not doing anything to help a competitor quickly. That is very different.

They did however ultimately fix it:

"PhysX SDK 3.0 was released in May 2011 and represented a significant rewrite of the SDK, bringing improvements such as more efficient multithreading and a unified code base for all supported platforms"

Intel did the same, but FTC put a stop to it
http://www.osnews.com/story/22... [osnews.com]

There is a massive difference here: Intel was intentionally hobbling the code its compiler created based on finding a competing vendor's name in the product string. They did not say "wait for version 3" like in the PhysX case; they just did it and then sat there tight-lipped until it went to court and they were forced to change it.

This is something the FTC should weigh in on, just like in Intel's case.

As I said earlier, Nvidia made the all-important change to use SSE when running PhysX on the CPU without the FTC being involved.

Misconception in the OP (1)

yenic (2679649) | about 6 months ago | (#47164815)

AMD made about $1.4 billion off the Radeon division. For the same period, NVIDIA made more than $4.2 billion. Some of that was Tegra-related and it's a testament to AMD's hardware engineering that it competes effectively with Nvidia with a much smaller revenue share, but it also means that Team Green has far more money to spend on optimizing every aspect of the driver stack.

While that's true for revenue, the profits of AMD and NV are very close.

Re:Misconception in the OP (0)

Anonymous Coward | about 6 months ago | (#47166179)

In this day and age of sister companies, offshore Cayman Islands accounts, and Irish tax havens, I do not trust any company's reported profits, only revenue. Revenue speaks volumes. If their profits really are "very close", that seems rather embarrassing for Nvidia, since there would be something tremendously inefficient about the way they operate, but I doubt this is the case.

Re:Misconception in the OP (1)

Xest (935314) | about 6 months ago | (#47172353)

How is -$83 million (AMD) close to $581 million (nVidia)?

allegations that they unfairly disadvantaged AMD (0)

Anonymous Coward | about 6 months ago | (#47166023)

Anytime I see the word "unfair" used to describe a business practice, I laugh out loud. And then I mumble "idiots" in my best Jeremy Clarkson as I walk away.

sparta38 (0)

Anonymous Coward | about 6 months ago | (#47170143)

When it comes to gaming, the one brand that comes to mind is AMD; I don't recognize anything above it.

http://www.akcayaktifemlak.com/
