
AMD Preps For Server Graphics Push

Soulskill posted about a year and a half ago | from the servers-need-to-be-able-to-play-quake-too dept.


Nerval's Lobster writes "AMD has named John Gustafson senior fellow and chief product architect of its Graphics Business Unit, the former ATI graphics business unit. Gustafson, known for developing a key axiom governing parallel processing (Gustafson's Law), will apply that knowledge to AMD's more traditional graphics units and to GPGPUs, co-processors that have begun appearing in high-performance computing (HPC) systems to add more computational oomph via parallel processing. At the Hot Chips conference, AMD's chief technology officer, Mark Papermaster, also provided a more comprehensive look at AMD's future in the data center, claiming that APUs are the keystone of the 'surround computing era,' in which a wealth of data — through sensors, gestures, voice, augmented reality, metadata, and HD video and graphics — will need to be contextualized, analyzed, and either encrypted or assigned privacy policies. That, of course, means the cloud must shoulder the computational burden."
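For reference, the "key axiom" is presumably Gustafson's Law: with N processors and a serial fraction \alpha of the work, the scaled speedup is

    S(N) = N - \alpha (N - 1)

so a job that is 1% serial still sees roughly a 99x speedup on 100 processors, which is the reasoning behind throughput-oriented hardware like GPGPUs.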


41 comments

Good (2)

Anonymous Coward | about a year and a half ago | (#41172485)

Nvidia could really use some competition in the server space. Render farms and most GPGPU work (i.e., CUDA) are pretty much completely dependent on Nvidia.

A Wealth of Data, Huh? (1, Informative)

jazman_777 (44742) | about a year and a half ago | (#41172503)

You mean, all that stuff from surveillance cameras, etc? Yeah, we're gonna need LOTS of processing power for the Total Surveillance State!

Re:A Wealth of Data, Huh? (0)

Anonymous Coward | about a year and a half ago | (#41174105)

This reminds me of the TV series Person of Interest.

So it has nothing to do with graphics then... (1)

bugs2squash (1132591) | about a year and a half ago | (#41172583)

They want a co-processor, so build one, or add extensive FPGA capabilities. Don't just put in a GPU and disconnect the monitor, make something more specifically applicable to the task at hand.

Re:So it has nothing to do with graphics then... (5, Informative)

gman003 (1693318) | about a year and a half ago | (#41172925)

Except a modern GPU is basically a coprocessor that, 99% of the time, is used to run a library that primarily does graphics. Rendering, shading, transformation: those are now all done "in software". The only things still done "in hardware" are texture lookups and video output (turning an int[4][1080][1920] into a DVI or HDMI or VGA or whatever signal).

They're also a pretty high-volume market, so you get them much cheaper than you would a custom-built coprocessor or even FPGA, and they're *probably* better-designed than the one you would make, as they have entire teams of professionals working on them.

Also, both nVidia and AMD already make "compute-only" cards - nVidia under the brand "Tesla", AMD under the brand "FireStream".
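To make the "coprocessor that happens to do graphics" point concrete, here is a minimal OpenCL kernel of the sort a compute-only card executes; SAXPY (y = a*x + y) is a stock illustration chosen for brevity, not anything tied to these particular products:

    /* Scale-and-add over a vector, one work-item per element. */
    __kernel void saxpy(const float a,
                        __global const float *x,
                        __global float *y)
    {
        size_t i = get_global_id(0);
        y[i] = a * x[i] + y[i];
    }

Nothing in that path touches fixed-function graphics hardware; it runs on the same shader cores that execute fragment shaders.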

Re:So it has nothing to do with graphics then... (2)

thexile (1058552) | about a year and a half ago | (#41175495)

FireStream is no more. It has been folded into FirePro, which now covers both workstation graphics (FirePro W) and server graphics/compute (FirePro S).

Re:So it has nothing to do with graphics then... (2)

Creepy (93888) | about a year and a half ago | (#41180741)

You'd be surprised at how useful servers with GPUs are these days. When you're talking about clients like iPads and Android devices, the rendering is often done server-side and then sent to the client. A (CAD-related) product I work on renders thumbnails using a server GPU. There is also a game service that does all rendering server-side and sends the output to a display (often a TV).

That oughta work (3, Insightful)

Cute Fuzzy Bunny (2234232) | about a year and a half ago | (#41172587)

Geez, didn't we have this stuff years ago, only it was called mainframes and minicomputers?

Someone refresh my memory as to why we fled those for PCs? Oh yeah: it cost too much to centralize, the 'one size fits all' solutions actually fit no one, and it took too long to wait for someone to fix things or come up with new tools.

Same problem with "the cloud". Good luck with it.

Re:That oughta work (0)

Anonymous Coward | about a year and a half ago | (#41172717)

Agreed. Absolutely nothing has changed since the 70s. Nope. Nothing.

Re:That oughta work (1)

Cute Fuzzy Bunny (2234232) | about a year and a half ago | (#41172807)

Agreed. Absolutely nothing has changed since the 70s. Nope. Nothing.

I wish it had anything at all to do with technology or things that have improved over time, but it doesn't. Centralizing resources and putting them under one person's charge, while many others want to use them (or use them in currently unsupported ways), costs more and gives less flexibility. That human issue has been the case for a lot longer than the '70s.

Re:That oughta work (1)

Anonymous Coward | about a year and a half ago | (#41173027)

Centralizing resources and having those under one persons charge

Soooooo... you want to fire your CTO? Not sure how cloud computing is somehow run by one person. Maybe you just have no idea what you are talking about.

Re:That oughta work (0)

Anonymous Coward | about a year and a half ago | (#41175961)

I fired my CFO; we're all doing our own books on napkins now... I'm calling it BYOC: bring your own calculator.

Re:That oughta work (3, Informative)

PopeRatzo (965947) | about a year and a half ago | (#41173155)

Agreed. Absolutely nothing has changed since the 70s. Nope. Nothing.

Having watched half an hour of the Republican Convention last night, I'll have to agree.

Re:That oughta work (5, Insightful)

Anonymous Coward | about a year and a half ago | (#41172779)

We have things we didn't have last time:
          Massive central storage
          Enormous bandwidth
          Excellent frameworks for distributed processing (no, RPC does not count)

Long ago, your cloud had to be custom-built for the app. EC2 doesn't have that restriction. I know the people who developed S3; at design time they had no idea they'd be hosting their killer app (Netflix). It's that flexible.

Plus, we now have PCs. No one is saying we have to go back to thin clients: you can keep your PC and use the cloud where it excels. Gmail and Netflix streaming are both things I've built equivalents of on home servers, and they don't hold a candle to the cloud versions.

Re:That oughta work (1)

arkane1234 (457605) | about a year and a half ago | (#41172867)

Yeah, the virtualization instruction set in Intel/AMD processors, invented only a couple of years ago, has nothing to do with any of this.
(/sarcasm)

Re:That oughta work (0)

Anonymous Coward | about a year and a half ago | (#41174657)

"Enormous bandwidth"

I'm pretty sure we had that at the time, for what was in use...

I'm also pretty sure I had it until last year, when most carriers started capping bandwidth; currently I'm sitting on 150GB a month where before I was unlimited. Call me crazy, but that's definitely NOT "enormous bandwidth"...

Pass the pipe! (It is, however, your choice of pipe: crack pipe, weed pipe, internet pipe, a literal pipe I can use to hit you upside the head...)

Who said mainframes ever went away? (0)

Anonymous Coward | about a year and a half ago | (#41174021)

What do you think Slashdot runs on? A PC sitting on someone's desk? Large, centralized computing never stopped being the solution for applications that need more power than one PC can provide.

Nice if AMD incorporated an FPGA in the APU (0)

Anonymous Coward | about a year and a half ago | (#41172663)

If they really want to go after the HPC market, an FPGA on the APU would be really nice, provided the gate count were high enough.

Of course, for normal users it may not be too useful unless AMD shipped some cores with it that could put it to use when not running custom stuff.

Can they just not work on their device drivers? (1)

cyberspittle (519754) | about a year and a half ago | (#41172939)

I have had no fun with their software on Linux or Windows. Then again, Nvidia is not much better.

Re:Can they just not work on their device drivers? (0)

Anonymous Coward | about a year and a half ago | (#41173243)

I don't like many NVIDIA policies, but their Linux device drivers (tainting the kernel aside) are not one of them.

They work flawlessly for most purposes, for all devices, even on just-released mainline kernels (i.e. straight from kernel.org), and have for many years.

Re:Can they just not work on their device drivers? (1, Interesting)

sl3xd (111641) | about a year and a half ago | (#41173267)

Didn't you know? They're open source now. Fix the problem yourself!

Sarcasm aside, I feel AMD open-sourcing the drivers was more because they're throwing up their hands in surrender: they can't manage it themselves, so they're asking for outside help.

AMD also provides a library that makes it easy to write a userspace program to disable all fans and thermal throttling on the GPU - melt the thing; maybe even start a fire... useful feature, that.

The beauty is that if a user can run a GL program (or even a GPU compute job), they can fry the GPU.

Good stuff, those AMD drivers...

Re:Can they just not work on their device drivers? (0)

Anonymous Coward | about a year and a half ago | (#41173769)

Last I checked, they're still two generations behind on their documentation releases. I haven't seen any docs released past the R700 series or Evergreen.

Has anyone else?

I'm damn sure the 6xxx series is still not publicly documented. Given that the 7xxx parts have been out for six months now, you'd expect them to have docs for those too. (Especially given the subpar reviews I've seen of the fglrx drivers.)

And to really drive this home: the R600/R700 cards have been dropped as of Catalyst 12.6 (on Linux; 12.6 non-legacy on Windows), despite R600-based AM3 motherboards still being in retail channels. (Never mind the lack of OpenCL support for the R700 and up, and of OpenGL 3.3/4.2 support for all compatible hardware since the R600.)

Where's our support, AMD? Where's our docs? Where's our *OPEN SOURCE*?

Re:Can they just not work on their device drivers? (0)

Anonymous Coward | about a year and a half ago | (#41175261)

Please, stop spreading misinformation. There are no open hardware specs for 3D graphics of the latest AMD cards. If you claim otherwise, please provide a link to the hardware programmer manual of any card among their latest 2 generations.

Privacy policy in the cloud? (0)

Anonymous Coward | about a year and a half ago | (#41173075)

"... through sensors, gestures, voice, augmented reality, metadata, and HD video and graphics — will need to be contextualized, analyzed, and either encrypted or assigned privacy policies. That, of course, means the cloud must shoulder the computational burden".

Great idea: capture raw and potentially sensitive information, then send it elsewhere for stripping/classification. Whose privacy policy was that again? And which entity might be required to log all the data coming and going?

It makes about as much sense as having a remote (I'm sorry, I mean "cloud-based") plaintext-to-encryption gateway.

Not for the foreseeable future (5, Interesting)

sl3xd (111641) | about a year and a half ago | (#41173145)

There's a big problem, however: http://developer.amd.com/sdks/AMDAPPSDK/assets/App_Note-Running_AMD_APP_Apps_Remotely.pdf [amd.com]

To run apps that use AMD's GPUs remotely (i.e. not from a local X11 session, and I do mean local), you have to open a security hole so big you can fit Rush Limbaugh's ego through it:

* Log into the system as root.
* Add "xhost +" to your X11 startup config (so every X session allows anybody to access it... with root permissions).
* chmod ugo+rw /dev/ati/card* (so any user can read and write the GPU device nodes directly).

I asked a group of devs from X.org how stupid it was... the short answer is "how stupid is giving root access to everybody?"

So, I asked AMD when they were planning on fixing the problem.

Short answer: Not for the foreseeable future.

I seem to recall a similar issue where CERT told users not to use AMD's Windows drivers, because they force Windows to disable many of its security features.

I'm sensing a trend...

Do you want this kind of irresponsibility in the datacenter? EVER?

Re:Not for the foreseeable future (0)

Anonymous Coward | about a year and a half ago | (#41173711)

I think the point is more along the lines of utilising OpenCL to offload work from the CPU, which should hopefully close a perceived floating-point performance gap between AMD and Intel. We're at a point where onboard GPUs (HD4000, AMD Fusion) are probably more capable than the on-die instruction sets, so it makes sense for AMD to push their advantage in this area, as it's pretty much the only place they're leading the pack.

A side effect of this would be the ability to get decent graphics performance from a server, but I doubt anyone really wants it for this.
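For the curious, here is roughly what that offload looks like on the host side, as a minimal C sketch against the OpenCL API (error handling omitted, the first platform and first GPU are assumed, and the kernel is the SAXPY example shown earlier in the thread):

    #include <stdio.h>
    #include <CL/cl.h>

    /* Device code: same SAXPY kernel as above, embedded as a string. */
    static const char *src =
        "__kernel void saxpy(const float a, __global const float *x,\n"
        "                    __global float *y) {\n"
        "    size_t i = get_global_id(0);\n"
        "    y[i] = a * x[i] + y[i];\n"
        "}\n";

    int main(void)
    {
        enum { N = 1 << 20 };
        static float x[N], y[N];
        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        cl_platform_id plat; cl_device_id dev; cl_int err;
        clGetPlatformIDs(1, &plat, NULL);                        /* first platform (assumed) */
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL); /* first GPU (assumed) */

        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);

        /* Ship the inputs across the PCIe bus into device memory. */
        cl_mem bx = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   sizeof x, x, &err);
        cl_mem by = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                   sizeof y, y, &err);

        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "saxpy", &err);

        float a = 3.0f;
        clSetKernelArg(k, 0, sizeof a, &a);
        clSetKernelArg(k, 1, sizeof bx, &bx);
        clSetKernelArg(k, 2, sizeof by, &by);

        /* Run one work-item per element, then read the result back. */
        size_t global = N;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, by, CL_TRUE, 0, sizeof y, y, 0, NULL, NULL);

        printf("y[0] = %f (expect 5.0)\n", y[0]); /* 3*1 + 2 */
        return 0;
    }

The round trips in the middle (clCreateBuffer with CL_MEM_COPY_HOST_PTR, clEnqueueReadBuffer) are exactly the PCIe transfers discussed further down the thread.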

Re:Not for the foreseeable future (1)

sl3xd (111641) | about a year and a half ago | (#41183705)

The problem is that to use OpenCL (or ATI's Stream SDK) to offload work from the CPU, you have to do the "xhost +" breakage, which is a serious problem for anybody who actually cares about security.

Re:Not for the foreseeable future (3, Interesting)

antdude (79039) | about a year and a half ago | (#41173839)

Which Windows security features? I wasn't aware of this. :(

Re:Not for the foreseeable future (1)

Zeromous (668365) | about a year and a half ago | (#41174523)

IIRC it's DEP. My Wintel box crashed for ages until I disabled it. I had to be uber-vigilant, but I don't think it's an issue now.

Re:Not for the foreseeable future (1)

antdude (79039) | about a year and a half ago | (#41174913)

DEP for all programs or just essential Windows programs and services?

Re:Not for the foreseeable future (1)

drinkypoo (153816) | about a year and a half ago | (#41176835)

You have to disable DEP entirely or the drivers will fail. At least at one point you couldn't even open CCC.

Re:Not for the foreseeable future (1)

antdude (79039) | about a year and a half ago | (#41177365)

Wow. I only have DEP on for essential programs and services on my old Windows XP Pro SP3 machine. No problems.

Re:Not for the foreseeable future (0)

Anonymous Coward | about a year and a half ago | (#41173841)

Inevitable result:

All your GPU's are belong to us!!

Re:Not for the foreseeable future (1)

Charliemopps (1157495) | about a year and a half ago | (#41173859)

And what does this have to do with entirely new, not-yet-in-production coprocessors that don't have drivers for Windows, much less Linux? They're talking about an entirely new chip geared towards handling large amounts of video in a server environment, NOT your $100 graphics card. I'd imagine what they produce will go into server farms running custom-made software.

Re:Not for the foreseeable future (0)

Anonymous Coward | about a year and a half ago | (#41174267)

Hope so!

AMD gave us 70-85% of Intel's performance, but four times the cores, at 12% of Intel's cost.

If AMD can let me build something similar to the Tesla stations at 12% of the cost, I'm in.

Re:Not for the foreseeable future (3, Interesting)

sl3xd (111641) | about a year and a half ago | (#41174775)

I think AMD is jumping into the arena because they feel they have to:

- NVIDIA is already making quite a splash in big data processing with their many-core GPGPU offerings
- AMD already offers their FirePro line to compete with NVIDIA's Tesla and Quadro
- Intel is entering the arena with their MIC/Xeon Phi product line (http://en.wikipedia.org/wiki/Intel_MIC)

AMD apparently feels they have to go down a similar path. Hopefully they will do it in a way their competition's offerings can't match: NVIDIA doesn't build a full CPU on-die with its GPU, and Intel appears to have chosen not to either.

Additionally, NVIDIA's (and presumably Intel's) many-core offerings can easily swamp the latest PCIe Gen3 bus with the number of cores they have. The total memory per core on the GPU or Phi device isn't that high, so it's very easy to become bound by PCIe's I/O bandwidth; they have to transfer boatloads of data over the PCIe bus.

For some workloads you can get great performance gains, but it's important to remember that while NVIDIA (for one) likes to trumpet a 20-30x performance increase, they're cherry-picking workloads well-suited to their product. In my experience it's 3-5x in the general case, because of the fundamental limitation in memory bandwidth between the PCIe card and other sources of memory, be it "main" system memory, RDMA via InfiniBand, etc.
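A back-of-the-envelope model shows why the transfers cap the end-to-end gain. In this sketch only the ~15.75 GB/s PCIe Gen3 x16 peak is a spec figure; the baseline time, kernel speedup, and data volume are assumptions picked for illustration:

    #include <stdio.h>

    int main(void)
    {
        /* Illustrative numbers, not measurements. */
        double t_cpu   = 1.0;    /* baseline CPU time for the job, seconds (assumed) */
        double speedup = 30.0;   /* vendor-style kernel-only speedup (assumed) */
        double gbytes  = 4.0;    /* data shipped over PCIe, input + output (assumed) */
        double pcie    = 15.75;  /* PCIe Gen3 x16 peak bandwidth, GB/s */

        /* End-to-end time = GPU compute time + time on the bus. */
        double t_gpu = t_cpu / speedup + gbytes / pcie;
        printf("end-to-end speedup: %.1fx\n", t_cpu / t_gpu); /* prints ~3.5x */
        return 0;
    }

Even with a 30x kernel, a quarter of a second on the bus drags the whole job down to roughly 3.5x, right in the 3-5x range described above.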

I'm confident AMD will design decent hardware - they might even turn a corner and make great hardware again. Without the software to drive it, however, it's a lost cause - and I have zero confidence in AMD's ability to develop that software.

Re:Not for the foreseeable future (0)

Anonymous Coward | about a year and a half ago | (#41177181)

Maybe I'm getting to be an old geezer, but back in my day we'd kill for a 0.25x speedup. 3-5x? Unheard of!

Re:Not for the foreseeable future (0)

sl3xd (111641) | about a year and a half ago | (#41174635)

NOT your $100 graphics card. I'd imagine what they are going to produce will go into server farms with custom made software.

I couldn't care less about desktop graphics; it just isn't interesting to me.

My original post was about my experiences with their $1-2k FirePro boards, which compete with nVIDIA's Tesla and Quadro, slotted into several hundred nodes of a supercomputing cluster. If that isn't a server environment, then what is?

I hate to break it to you, but AMD's attention to software, even for a high-profile, multi-million-dollar order, is laughable. Ever wonder why the Top500 has plenty of NVIDIA-based supercomputers, but only one of the top 100 has AMD GPUs?

And it's not that I hate AMD; five years ago they were fantastic. But you know what? They've had miss after miss for over half a decade, their software is horrible, and they haven't met their own performance expectations with any product in that entire time. I really hope they can turn around, but I haven't seen any signs that they're doing it.

I have waited patiently (1)

Anonymous Coward | about a year and a half ago | (#41175347)

I have waited patiently for Intel's offering, 'Knights Corner', now rebranded as 'Xeon Phi'. We have been tempted with 64+ cores and 4 threads per core. We have been tempted with 'runs existing software'. But I don't see anything available in stores.

That they aren't pushing product into stores means the thing will be gawd-awful expensive, production is limited, and they don't want people to spend a grand or two and create amazing software around the hardware. Instead, the word 'Xeon' means, in general, 'not for you'. I wanted this to be widely available, but Intel is going the wrong way.

I suspect it will be much cheaper and easier for most to recompile and add CUDA to what's already available. Intel does not seem interested in bringing high-performance computing to the masses: cost, availability, attitude. Get an Nvidia card *today*, add software today, and use it today. The cost per performance is likely better, and you don't need to join Intel's billionaires club.
