
AMD FirePro W9100 16GB Workstation GPU Put To the Test

Unknown Lamer posted about 4 months ago | from the more-power dept.

Graphics 42

Dputiger (561114) writes "It has been almost two years since AMD launched the FirePro W9000 and kicked off a heated battle in the workstation GPU wars with NVIDIA. AMD recently released the powerful FirePro W9100, a new card based on the same Hawaii-class GPU as the desktop R9 290X but aimed at the professional workstation market. The W9100's GPU features 2,816 stream processors, and the card boasts 320GB/s of memory bandwidth and six mini-DisplayPorts, all of which support DP 1.2 and 4K output. The W9100 also carries more RAM than any other AMD GPU: a whopping 16GB of GDDR5 on a single card. Even NVIDIA's top-end Quadro K6000 tops out at 12GB, which means AMD sits in a class by itself in this area. In terms of performance, this review shows that the FirePro W9100 doesn't always outshine its competition, but its price/performance ratio keeps it firmly in the running. If AMD continues to improve its product mix and overall software support, it should close the gap even more in the pro GPU market over the next 18-24 months."


Litecoins (1)

blackt0wer (2714221) | about 4 months ago | (#47452613)

But how many KH/s does it do?
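
For the curious: Litecoin uses scrypt with N=1024, r=1, p=1, and you can put a rough CPU-only number on "KH/s" with Python's hashlib.scrypt (requires Python 3.6+ built against an OpenSSL with scrypt support). The zeroed header is a stand-in, and a real miner would of course run an OpenCL kernel on the card itself; this just shows the arithmetic.

    # Rough CPU-only scrypt hash-rate check, using Litecoin's parameters
    # (N=1024, r=1, p=1). A real miner would run an OpenCL kernel on the
    # GPU; this just puts a number on "KH/s".
    import hashlib
    import time

    header = b"\x00" * 80                    # stand-in 80-byte block header
    trials = 2000

    start = time.time()
    for nonce in range(trials):
        data = header[:76] + nonce.to_bytes(4, "little")  # nonce in last 4 bytes
        hashlib.scrypt(data, salt=data, n=1024, r=1, p=1, dklen=32)
    elapsed = time.time() - start

    print(f"~{trials / elapsed / 1000:.2f} KH/s on this CPU")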

any way to use some of that ram as a ram disk? (1)

Joe_Dragon (2206452) | about 4 months ago | (#47452643)

any way to use some of that ram as a ram disk?

Re:any way to use some of that ram as a ram disk? (0)

Anonymous Coward | about 4 months ago | (#47452717)

Probably, but why? Is it cost-effective in some situation?

Re:any way to use some of that ram as a ram disk? (0)

Anonymous Coward | about 4 months ago | (#47452785)

No, none of the most kickass computers are. We add another $500 GPU just to get a few more FPS; we build machines that could eat the average computer as a tea-time snack, then use them for Minecraft servers or molecular biology. We buy I-7's and, to overclock them to the max, turn them into expensive Pentiums.
You ask for a cost-benefit analysis, and you get laughed at.

Re:any way to use some of that ram as a ram disk? (1)

K. S. Kyosuke (729550) | about 4 months ago | (#47453147)

I-7s [wikipedia.org]? That's pretty ancient stuff. But making them compute anything must be a pretty mean feat!

Re:any way to use some of that ram as a ram disk? (0)

Anonymous Coward | about 4 months ago | (#47452863)

It could sit nicely between ARC and L2ARC for ZFS based file servers that are maxed out on memory...
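
Conceptually that's just one more tier in the read-cache hierarchy. A toy sketch of the lookup order follows; the gpu_vram_cache tier is purely hypothetical (no such ZFS vdev type exists), this is only the idea.

    # Toy sketch of a tiered read cache: ARC (system RAM) -> hypothetical
    # GPU-VRAM tier -> L2ARC (SSD) -> pool disks. Not a real ZFS interface.
    class TieredCache:
        def __init__(self, disk):
            self.arc = {}              # hot data in system RAM
            self.gpu_vram_cache = {}   # hypothetical 16GB tier on the card
            self.l2arc = {}            # SSD-backed second-level cache
            self.disk = disk           # backing pool

        def get(self, block):
            for tier in (self.arc, self.gpu_vram_cache, self.l2arc):
                if block in tier:
                    return tier[block]
            data = self.disk[block]    # miss everywhere: hit the disks
            self.arc[block] = data     # promote into RAM on the way back
            return data

    cache = TieredCache(disk={"blk0": b"data"})
    print(cache.get("blk0"))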

Re:any way to use some of that ram as a ram disk? (0)

Anonymous Coward | about 4 months ago | (#47452759)

Sounds like a good idea.
Let's increase bus contention even more.

Re:any way to use some of that ram as a ram disk? (0)

Anonymous Coward | about 4 months ago | (#47453831)

PCIe is point-to-point... there's no "bus" to have contention...

Re:any way to use some of that ram as a ram disk? (1)

asmkm22 (1902712) | about 4 months ago | (#47453023)

Trust me, the people using these cards already have workstations with a shit-ton of RAM anyway, not to mention that SSDs are becoming more common for those use cases.

Never mind ram disk (0)

Anonymous Coward | about 4 months ago | (#47453075)

Never mind ram disk, imagine how big a print spooler you could have!

Re:any way to use some of that ram as a ram disk? (1)

Anubis350 (772791) | about 4 months ago | (#47453151)

RAM is cheap; most people who are planning to toss this card in their workstation can also max out the system memory, to the point where 16GB wouldn't make much of a difference if they needed a ramdisk. For the few remaining people, there are better solutions.

Add to that that you'd then be competing for PCIe bandwidth with whatever you're actually using the card for, and it makes no sense to bother.

Re:any way to use some of that ram as a ram disk? (1)

K. S. Kyosuke (729550) | about 4 months ago | (#47453161)

Those people have enough money for DIMMs. Here you pay a premium for GPU-to-VRAM bandwidth, and you certainly wouldn't be able to use that over PCIe properly. So you pay much more money, but at least it's way slower... is that what you wanted? ;-)
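
Back-of-the-envelope, assuming the card sits in a PCIe 3.0 x16 slot (the link figure is approximate):

    # Host access to the card's 16GB is limited by the PCIe link,
    # not by the 320GB/s memory bus the GPU itself enjoys.
    vram_bw = 320.0        # GB/s, GPU-to-VRAM, from the spec sheet
    pcie3_x16_bw = 15.75   # GB/s per direction, ~0.985 GB/s/lane x 16 lanes

    print(f"GPU sees VRAM at {vram_bw:.0f} GB/s")
    print(f"host sees it at ~{pcie3_x16_bw:.2f} GB/s, "
          f"roughly {vram_bw / pcie3_x16_bw:.0f}x slower")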

50% discount (0)

Anonymous Coward | about 4 months ago | (#47452689)

And right out of the gate, they are offering a 50% discount. Pre-approval required: http://www.fireprographics.com/experience/us/

I'm confused (1)

Anonymous Coward | about 4 months ago | (#47452757)

Is this a new Slashdot advertising site? Or is it the Slashdot review site?

what? (0, Troll)

Charliemopps (1157495) | about 4 months ago | (#47452763)

A $3500 video card is news? wtf...

Other than some crazy, very specific applications, this card's worthless.

Re:what? (1)

asmkm22 (1902712) | about 4 months ago | (#47452943)

This isn't "Slashdot: News for Charliemopps, Stuff that Matters"

Re:what? (3, Informative)

arielCo (995647) | about 4 months ago | (#47452953)

Understanding the Workstation Market:

The first thing we need to talk about is the difference between workstation and consumer GPUs. The GPUs themselves are essentially identical -- NVIDIA's Quadro K6000 is based on GK104 (Kepler), the older Quadro 6000 is a GF100 (Fermi)-based chip, the W9000 uses the same GCN core that powers the HD 7970/R9 280X, and today's W9100 is essentially identical to the Hawaii XT core inside the R9 290X. What sets these workstation cards apart is the amount of RAM they carry (typically 2-3x as much as a consumer card), their validation cycles (workstation GPU cores are hammered on far more than the consumer equivalents), and the amount of backend vendor support and optimization that AMD and NVIDIA both perform.

This optimization process and long-term vendor partnership is what distinguishes the workstation market from the consumer space, and the need to pay for some of those development costs is part of why workstation cards tend to cost so much more than their consumer equivalents.

From TFA [hothardware.com].

Re:what? (1)

K. S. Kyosuke (729550) | about 4 months ago | (#47453165)

I think they also have ECC.

Re:what? (2)

ArchieBunker (132337) | about 4 months ago | (#47453685)

Exactly. Also, double precision is a requirement for scientific modeling and calculations. Do you really want a 1-bit error in a simulation that took a month to process?
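
A quick numpy illustration of the point: accumulate 0.1 a million times and single precision drifts visibly, because each increment sits near the growing accumulator's precision floor (exact outputs will vary slightly).

    # Naive running sum of 0.1, a million times over: float32 drifts once the
    # accumulator grows large relative to each increment; float64 stays close.
    import numpy as np

    acc32 = np.float32(0.0)
    acc64 = np.float64(0.0)
    for _ in range(1_000_000):
        acc32 += np.float32(0.1)
        acc64 += np.float64(0.1)

    print(f"float32: {acc32:.2f}")   # visibly off from the true 100000
    print(f"float64: {acc64:.2f}")   # ~100000.00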

Re:what? (1)

K. S. Kyosuke (729550) | about 4 months ago | (#47453729)

Even with 16 gigs of state, one should still be able to do some checkpointing. But it is a nuisance anyway, and ECC at least gives warnings that your RAM may be well on its way to giving way for good. Without it, one is almost clueless.
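
For what it's worth, the checkpointing mentioned above can be as simple as periodically serializing state and resuming from the last snapshot. A minimal sketch; the file name and the trivial "simulation" state are made up for illustration:

    # Minimal checkpointing: snapshot state periodically so a crash (or a
    # detected memory error) costs hours, not the whole month-long run.
    import os
    import pickle

    CHECKPOINT = "sim.ckpt"

    def run(total_steps, checkpoint_every=1000):
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT, "rb") as f:
                step_no, state = pickle.load(f)   # resume where we left off
        else:
            step_no, state = 0, {"x": 0.0}        # stand-in initial state

        while step_no < total_steps:
            state["x"] += 1.0                     # stand-in for the real step
            step_no += 1
            if step_no % checkpoint_every == 0:
                tmp = CHECKPOINT + ".tmp"         # write-then-rename: atomic
                with open(tmp, "wb") as f:
                    pickle.dump((step_no, state), f)
                os.replace(tmp, CHECKPOINT)
        return state

    print(run(10_000))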

Re:what? (1)

SJester (1676058) | about 4 months ago | (#47456237)

This. I'm (90%) a scientist and I model some pretty nifty stuff. Our lab desktops have consumer GPUs. We write our code, run it for a bit, and if it looks good we send it over to the supercomputing center where it's run on Tesla systems. Beats the hell out of the days (before my time) when you'd have to stick a Post-It on your monitor that says "Do Not Turn Off - Working" and then come back three weeks later to find that it's crashed.

Re:what? (0)

Anonymous Coward | about 4 months ago | (#47453859)

There are certainly ways of doing ECC for the DRAM (somewhat similar to doing "software ECC" in hardware, with the same 32b/64b data bus and a bandwidth/latency penalty, rather than the usual 64b+8b seen on memory interfaces). But I wouldn't assume that the internal SRAMs have ECC... some may, and some may have it instead of repair (obviously it would have to be SECDED or similar). ECC is expensive area-wise, and not really needed on consumer GPUs.

And if anyone thinks the "workstation" ASIC parts undergo more "validation cycles", well, I've got a bridge to sell. Parts are tested, repaired, retested, binned... one or more of those bins become the premium parts, but they weren't tested any more than any other parts (except the ones that failed early in testing and were tossed).

The DPFP is the main HW feature of these premium GPUs. The rest is all software magic.
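
For reference, the SECDED mentioned above is just an extended Hamming code: parity bits for single-error correction, plus one overall parity bit so double errors are detected rather than miscorrected. A toy 4-data-bit version (real 64-bit buses use Hamming(72,64), hence the usual 64b+8b):

    # Toy SECDED: Hamming(7,4) plus an overall parity bit. Same idea as the
    # Hamming(72,64) used on 64b+8b memory interfaces, just smaller.
    def encode(d):                      # d: list of 4 data bits
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4               # covers positions 1,3,5,7
        p2 = d1 ^ d3 ^ d4               # covers positions 2,3,6,7
        p3 = d2 ^ d3 ^ d4               # covers positions 4,5,6,7
        code = [p1, p2, d1, p3, d2, d3, d4]
        overall = 0
        for bit in code:
            overall ^= bit              # extra bit enables double-detect
        return code + [overall]

    def check(code):
        c = code[:7]
        syndrome = 0
        for pos, bit in enumerate(c, start=1):
            if bit:
                syndrome ^= pos         # XOR of set-bit positions
        parity_ok = (sum(code) % 2) == 0
        if syndrome == 0 and parity_ok:
            return "clean"
        if not parity_ok:
            return f"single-bit error at position {syndrome or 8}, correctable"
        return "double-bit error detected, uncorrectable"

    word = encode([1, 0, 1, 1])
    word[2] ^= 1                        # flip one bit in transit
    print(check(word))                  # reports a correctable single-bit error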

Re:what? (2)

Kjella (173770) | about 4 months ago | (#47453271)

Except that HotHardware, being tech geeks, confuse cause and effect. The estimated cost of the 8GB of GDDR5 in the PS4 is ~$100, the hardware costs are almost the same, and the "extra validation" mainly involves staying a little conservative with clock speeds and code optimizations. The real reason is market differentiation: if there is none, you create one, like student and senior-citizen discounts, even though everyone takes up one seat. That lets you set entirely different prices based on willingness to pay in the workstation market as opposed to the gamer market.

Of course, to make it work, you must make sure that the workstation market won't use the gaming card, so you make sure those features are absent, not working, or not tested/validated/supported on the consumer cards. It's the same reason Intel won't give you ECC on a consumer motherboard/CPU like AMD does: it would be beneficial and cost next to nothing, but it would encourage penny-pinchers to use them as poor man's servers. It's all about steering customers to the "right" product; the rest is implementation details.

Re:what? (0)

Anonymous Coward | about 4 months ago | (#47453667)

There is a lot more difference than just the amount of RAM; it is also ECC RAM, which has been tested to a higher certification. It certainly carries a massive premium, which is to be expected considering the relatively small market segment these sorts of cards sell to, but to suggest that they are using the equivalent of the crap they shove in consoles is naïve at best. Try to find even 4GB of that for $100, let alone 16GB. It is a low-volume market, and hence the memory chip manufacturers will also be charging their own premium on top of the premium AMD charges; everything gets significantly more expensive in low volumes.

Re:what? (0)

Anonymous Coward | about 4 months ago | (#47453875)

it's not "ECC ram"... the ECC function is basically software-ECC, but with the function implented in hardware. there are no extra DRAMs on the board for ECC, and there are not "ECC certified" DRAMs. suggesting that these cards are using something that is other than the same parts (ignoring speed binning maybe) in the consoles, that's being naive.

Re:what? (0)

Anonymous Coward | about 4 months ago | (#47453021)

And those "crazy" apps are WHY cards like this exist.

You haven't been around long, have you? How old are you?

Re:what? (0)

Anonymous Coward | about 4 months ago | (#47458017)

And those "crazy" apps are WHY cards like this exist.

You haven't been around long, have you? How old are you?

Old enough to remember when porn wasn't served up in 3D HD.

Let's not try and be trite with justifications, shall we? My example tends to show that plenty of shit has come around with little or no demand for it.

Re:what? (1)

Anubis350 (772791) | about 4 months ago | (#47453155)

GPGPU is an important segment of the high end market these days...

/The machine I'm typing this on right now has an nvidia Tesla in it

Re:what? (1)

mr_mischief (456295) | about 4 months ago | (#47458455)

Crazy and very specific applications like CAD, video editing, video transcoding, and stuff like that, you mean? Yeah, that's what they benchmarked.

In before "$3,300 WTF?!" (3, Informative)

arielCo (995647) | about 4 months ago | (#47452979)

From TFA:

Understanding the Workstation Market:

The first thing we need to talk about is the difference between workstation and consumer GPUs. The GPUs themselves are essentially identical -- NVIDIA's Quadro K6000 is based on GK104 (Kepler), the older Quadro 6000 is a GF100 (Fermi)-based chip, the W9000 uses the same GCN core that powers the HD 7970/R9 280X, and today's W9100 is essentially identical to the Hawaii XT core inside the R9 290X. What sets these workstation cards apart is the amount of RAM they carry (typically 2-3x as much as a consumer card), their validation cycles (workstation GPU cores are hammered on far more than the consumer equivalents), and the amount of backend vendor support and optimization that AMD and NVIDIA both perform.

This optimization process and long-term vendor partnership is what distinguishes the workstation market from the consumer space, and the need to pay for some of those development costs is part of why workstation cards tend to cost so much more than their consumer equivalents.

Re:In before "$3,300 WTF?!" (1)

gmhowell (26755) | about 4 months ago | (#47455929)

Too late [slashdot.org].

Too bad they forgot how to make a cpu (1)

Osgeld (1900440) | about 4 months ago | (#47453315)

As I sit here at my power-hungry yet fairly slow FX-4170, budgeting out a new i7 machine, I have to wonder if they've noticed the very large pile of old video chipset makers sitting out in the street.

The R series is not that impressive; hell, a few of them are outrun by their own cheaper, older cards. Workstation cards are a niche market; even most 3D CAD tools like SolidWorks run reasonably on integrated Intel graphics.

It makes me wonder what they are thinking. There was a time when AMD made a huge variety of integrated chips; not anymore. There was a time when AMD was locked in mortal combat with Intel; not anymore. But hey, check out this $3,300 video card we might sell a dozen of!!!

Good luck, buddies.

Re:Too bad they forgot how to make a cpu (1)

Anonymous Coward | about 4 months ago | (#47453411)

What they 'forgot' how to do was something that is arguably untenable given the transistor complexity of modern architectures: designing a high-end CPU with a hand-tuned layout.

They are instead saddled with a synthesized architecture that works logically but never quite delivered in the real world, and they are pretty much stuck with it until their next design is ready for release (in something like 12 to 30 months, if I recall the speculation correctly).

Re:Too bad they forgot how to make a cpu (1)

TheGoodNamesWereGone (1844118) | about 4 months ago | (#47456965)

Things started going downhill when they went fabless.

Re:Too bad they forgot how to make a cpu (0)

Anonymous Coward | about 4 months ago | (#47454113)

These things sell. People don't use $100 cards for GPGPU; they use the expensive ones. The only reason it may not sell is that OpenCL doesn't have the reach CUDA does, despite being open and not tied to a single manufacturer.

Interesting Card, the graphics in the Article sux (4, Informative)

Virtucon (127420) | about 4 months ago | (#47453329)

Nice write-up, but I wish the folks writing these kinds of articles would at least take a look at the graphics they're putting out there. The graphs are senseless drivel, and you can't tell whether the new card is better or worse. It's still a nice card, but here's a better write-up at Tom's Hardware. [is.gd]

Plagiarize FTW (1)

steelfood (895457) | about 4 months ago | (#47453369)

Wow, TFS is copied almost verbatim from the first two paragraphs of TFA. The only difference is the awkward cutting and editing that ultimately contorts and tortures the prose of TFA.

Way to add value, editors!

How well did they do? You decide! From TFA:

It has been almost two years since AMD launched the FirePro W9000 and kicked off a new battle in the workstation GPU wars. Today, we're reviewing the company's FirePro W9100 -- a new card based on the same Hawaii-class GPU as the desktop R9 290 and R9 290X, but aimed at the workstation market and professional consumers. Does AMD's new card have what it takes to seize the professional performance crown?

The W9100 is a full Hawaii GPU with 2,816 stream processors, 320GB/s of memory bandwidth, and six mini-DisplayPorts, all of which support DP1.2 and 4K output. It carries more RAM than any other AMD GPU -- a whopping 16GB of GDDR5 on a single card. Even NVIDIA's top-end Quadro K6000 tops out at 12GB, which means AMD sits in a class by itself in this area. The W9000 and W9100 have one other major point of differentiation -- each offers support for up to six 4K displays using DisplayPort 1.2. NVIDIA's top-end Quadro K6000 tops out at just two DP 1.2 ports. You can still theoretically run more DisplayPort 1.2 displays if you use a hub, but if you want to hook everything up through the video card, AMD has a distinct advantage here.

So when will we see a new Mac Pro version? (1)

King_TJ (85913) | about 4 months ago | (#47453683)

Seriously, the release of the W9100 means the 2013 Mac Pro no longer has the latest FirePro chipset in it ... making it a great opportunity to see if Apple will actually release video card upgrades for this machine, or if it's true that owners will just be stuck with whatever was ordered with it.

Re:So when will we see a new Mac Pro version? (1)

RyuuzakiTetsuya (195424) | about 4 months ago | (#47453793)

They might do a refresh with the new part, but an upgrade kit? Not likely. Not first-party, anyway. I wouldn't be shocked if OWC figures out the secret sauce to make it work.

Impressive (1)

Marquis231 (3115633) | about 4 months ago | (#47454405)

I'm always so impressed when AMD manages to compete against NVIDIA while being only one-tenth the size. They have far fewer resources to invest back into R&D, and yet every generation they show up to the ring with a worthy entry, ready for another fight. Both my GPU and CPU are AMD; I love supporting the underdog.

Not quite (0)

Anonymous Coward | about 4 months ago | (#47454659)

From each company profile on their respective web sites:

AMD = 10,700 people
NVIDIA = 8,800 people

Re:Not quite (1)

mr_mischief (456295) | about 4 months ago | (#47458705)

Intel has 107,600 [intel.com].

How many people do you think work in the graphics division of AMD? How many at NVidia?
