
And the crowd, didn't care.. (1)

FredFredrickson (1177871) | more than 2 years ago | (#36227456)

I guess I mostly don't care, since neither major GPU manufacturer wants to come up with a naming structure that makes a lick of sense. Honestly, you could use a random number generator; I always just end up looking up benchmarks to figure out which cards are better than others. There's nothing in the names.

Oh, and that's why this press release is so pointless.

Also, in other news, processor manufacturers are in the same boat. You know why nobody (read: avg consumers) buys top end processors? Because from a PR naming standpoint, they all seem confusing and .. do stuff. Seems like this here emachines from walmart has the intel core ix blah blah blah, it must be good. Only $299! What a steal!

I know that in a post somewhere below me, somebody's going to point out the financial advantage to this, but I feel like it just doesn't make sense to purposefully confuse the public by not coming up with reasonable names.

Re:And the crowd, didn't care.. (1)

Yvan256 (722131) | more than 2 years ago | (#36227558)

I think you're right. People are lazy, so they won't do any research apart from (maybe) reading what's indicated on the labels below the computers. And when they can't make an informed decision, they go with the cheapest choice.

Re:And the crowd, didn't care.. (3, Interesting)

b4dc0d3r (1268512) | about 2 years ago | (#36229552)

Lazy? I asked slashdot, and read up for a good two months, and the best advice was to compare performance using something like Tom's Hardware. Not knowing what the apps I wanted to use would actually use, the results were largely meaningless.

I was not about to look up the GPU for every card listed on every computer I might buy, along with the upgrades available for each, so I could look those up on a chart to see their performance. I did piles of research and still did not have enough to make an informed decision, short of making a huge database of everything I came across. I've done that before, but this amount of data quickly became ridiculous, and by the time I decided on one model it was no longer available. I gave up then.

I ultimately looked at the specs of something in my price range, and since it had HDMI, Intel onboard video got my business. This part of the crowd does not care. For nearly everyone, my advice has been and will continue to be: buy the cheapest thing you can find, it will do what you want.

Re:And the crowd, didn't care.. (1)

karnal (22275) | about 2 years ago | (#36230038)

Let's face it - you weren't lazy, but you also didn't come to the table with expectations. Not knowing what applications you want to use would probably be akin to going to a car/truck dealership and not knowing what you'd like your new vehicle to be able to do for you. And at that point - you could walk off the lot with an SUV when a compact car would have done the trick.

Re: Didn't read, did you? (1)

b4dc0d3r (1268512) | more than 2 years ago | (#36243730)

I specifically said I wanted 1080 output to HDTV, and to be able to play Oblivion. I thought I was very clear.

http://ask.slashdot.org/story/10/03/09/0134223/Making-Sense-of-CPU-and-GPU-Model-Numbers [slashdot.org]

Re: Didn't read, did you? (1)

karnal (22275) | more than 2 years ago | (#36249568)

Sorry; didn't realize you actually were the author of the main question. Was just going off of the comment thread - you stated "Not knowing what the apps I wanted to use would actually use, the results were largely meaningless."

Re:And the crowd, didn't care.. (1)

sjames (1099) | about 2 years ago | (#36232594)

If by lazy, you mean too busy living to learn an entirely new language that bears only passing resemblance to anything spoken by non marketing droids, then yes.

Re:And the crowd, didn't care.. (1)

marnues (906739) | about 2 years ago | (#36233688)

There's nothing lazy about it. It's a confusing topic and the manufacturers and reviewers both seem to be on board with aiding the confusion. No, most people with a real budget that I know really do research. It may not be good research, but knowing where good research comes from is in itself a complex task.

Re:And the crowd, didn't care.. (0)

Anonymous Coward | more than 2 years ago | (#36227598)

Congrats for being totally off topic and confusing consumer-marketed crap with business hardware.

Re:And the crowd, didn't care.. (0)

jawtheshark (198669) | more than 2 years ago | (#36227786)

You know why nobody (read: avg consumers) buys top end processors?

There is another reason... Given enough RAM, any x86 computer built in the last 7 years has enough oomph for those "average" users. Heck, if it hadn't blown its caps, I'd still be using my wife's P-IV 2.6GHz HT (originally 512MB RAM, but upgraded to 2GB). That's a 2003 computer.

Heck, my AMD Athlon 2400+ MP is gathering dust in the basement (also a 2003 computer), because I got an AMD Athlon 64 4300+ from someone who didn't need it anymore. It's my main computer and it's perfectly fine for daily use. I'd still like to find a use for my MP, but it's so loud, I can't keep it on 24/7 so a server is out.

Re:And the crowd, didn't care.. (4, Informative)

gman003 (1693318) | more than 2 years ago | (#36227852)

The consumer cards actually do make sense. For nVidia, the first number is the generation and the second is the part within that generation - a 580 is better than a 570, but worse than a 590. Likewise, a 480 is newer than a 280, but not as new as a 580. You can also generally assert that cards with the same ending numbers, but different generations, will fill the same role (and same rough price), but the newer one will be slightly better.

AMD/ATI uses four numbers, but the last is always a 0 and can be ignored. They follow a similar pattern - first number is the generation, middle two are the part within that generation, and the last is a zero (to make it look cooler). So a 5870 is better than a 5770, but not as good as a 5970. And a 5970 is older than a 6990, but newer than a 4970. AMD recently changed how their within-generation numbers go, so you can't just assume that, say, a 6970 will outperform a 5970 (it won't, actually), but comparisons within a generation are still good.

And these are hardly new - ATI/AMD has used that pattern since 2006, while nVidia has been using theirs since 2008 (prior to that, they had a 4-digit number (really two digits with a 00 at the end) and a few letters).
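To put those rules in code form (a toy sketch - the split into a generation digit and a tier number is just my reading of the scheme, not anything official from either vendor):

```python
# Toy decoder for the consumer naming schemes described above.
# The generation/tier split is my reading, not an official definition.
def parse_geforce(model):
    """e.g. 580 -> (generation 5, tier 80)."""
    s = str(model)
    return int(s[0]), int(s[1:])

def parse_radeon(model):
    """e.g. 5870 -> (generation 5, tier 87); the trailing 0 is ignored."""
    s = str(model)
    return int(s[0]), int(s[1:3])

# Within one generation, a higher tier number means a faster card:
assert parse_geforce(570)[1] < parse_geforce(580)[1]
# A 480 is a newer generation than a 280, but older than a 580:
assert parse_geforce(280)[0] < parse_geforce(480)[0] < parse_geforce(580)[0]
print(parse_radeon(5870))  # (5, 87)
```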

The workstation cards, though, are an absolute mess. About the only claim you can even generally defend is that "bigger numbers are better". And even that is rather iffy. And trying to figure out which consumer card a workstation card was based on requires an encyclopedia of them.

While I imagine workstation cards can get away with having non-linear names like that (since anyone buying a $3,500 graphics card will do their research), I suspect even professionals easily get confused by it all.

Re:And the crowd, didn't care.. (1)

Metabolife (961249) | about 2 years ago | (#36228338)

They really should stick to a more streamlined naming scheme.

I got it:
Slow but Cheap
Fast Enough for Youtube HD
Faster for Most Games
Fastest for epeen
Rip Off for anything other than 3 monitors

Then tack on '10, '11, etc for the year and you're golden.

Re:And the crowd, didn't care.. (1, Informative)

sexconker (1179573) | about 2 years ago | (#36230062)

You're just oh so wrong.
Nvidia used to use 4 digits (FX 5xxx, 6xxx, 7xxx, 8xxx, 9xxx), then they went to 2xx.

After 2xx they went to 4xx. Along the way they peppered in a few 1xx and 3xx parts that nobody bought (they were all rebadges of the defective G92 chips; the 9xxx and early 2xx were also defective, as were the revamps in the 8xxx line - the 8800 GT, and the second revision of the 8800 GTS). Then they went to 5xx.

The last number hasn't always been 0, either. There's the GTX 285, for example. And of course, OEMs can add whatever bullshit they want at the end of it, such as OC, SE, SSE. And of course they have to include the Nvidia shitfest of GT, GTX, Ultra, M, whatever.

In order of performance (best to worst) it goes Ultra, GTX, GT, GTS, (nothing), GS, then M, LE, and other shit. You can compare within a single model number, but not across generations. These monikers didn't exist until the 6800 family came out. We had the 6800, the 6800 GT, and the 6800 Ultra. The 7000 series was just a rehash of the 6000 series. The 8000 series was indeed a new GPU, and introduced the shitfuck of GTS (which seemed to be tacked on to the cards that would otherwise NOT have a GT/Ultra/whatever shit added to them). Now we start adding GX or GX 2 to shit to indicate it's a dual-gpu card. The next family of chips came with the 2xx series. Not the first few out the door, mind you, but the GT200-based 280s.

The only thing consistent about Nvidia for the last decade is that if the second number is an 8, you have the flagship part. You always want the flagship part, because it is the only one that actually receives proper engineering and testing. You can choose whatever binning (ultra gtx gt gts) or overclocked horseshit you want from msi/asus/whoever.
6800 Ultra/GT/vanilla.
8800 Ultra/GTX/GTS.
280, 480, 580, etc into the future maybe.

When they start futzing around with 8600s or 7950s, or GX2s, or the 8800 GT or 8800 GTS v2 (aka 8800 GTS 512), what's happening is they're hastily tweaking shit to adapt to the market sectors and reduce cost (or slap as much shit as they can fit in an ATX case and go for the stupid performance crown for bragging rights). These are always sloppy jobs. With Nvidia, we had bumpgate. But in general, you get less reliable shit, at a later date, for a bit less money.

With ATi/AMD, you've got a whole different can of worms.
They were doing 8xxx and 9xxx a decade ago, then went to x1xxx (the first x is a literal x) and x2xxx.
Then they went to HD 3xxx, HD 4xxx, HD 5xxx, and HD 6xxx.

They've used monikers such as XT, Pro, and LE.
They've stopped using the 8 as the flagship indicator (from the old 9800 pro to the 5800 series) and now use 9 for the flagship.
As such, a 5850 is better than/the same as a 6850, and a 5870 is better than/the same as a 6870.
The generational bump this round added a 1 to the second number. The 6970 is the big brother of the 5870.
And if you want to compare the 5970 to something, you'll be looking at the 6990.

So no. In short, it makes zero sense. You can't look at the name and discern anything when they add changing and intentionally confusing XT Pro GT GTX GTS GTS v2 GS LE M GX GX2 etc, along with model numbers that can't be directly compared unless the first digit is the same. Add in the OC tweaks and branding, ePeen gun-style cases, and CG girls and orcs, and no one can tell you the difference between the Gigabyte Radeon HD 5870 SOC and the MSI Radeon HD 6870 HAWK without looking at benchmarks.

Amen to that (1)

Lonewolf666 (259450) | about 2 years ago | (#36232048)

The numbers are confusing indeed, and sometimes a new card is only a re-branded model of yesteryear. For a quick overview, I recommend the comparison tables at Wikipedia.
AMD: http://en.wikipedia.org/wiki/Comparison_of_ATI_Graphics_Processing_Units [wikipedia.org]
Nvidia: http://en.wikipedia.org/wiki/Comparison_of_Nvidia_graphics_processing_units [wikipedia.org]

The numbers for transistor count, theoretical GFlops and so on should at least give you a rough idea if you are looking at a high end, mid range or low end card. The comparison to the last generation can be interesting too. Hint:
AMD made relatively little progress between the HD 5xxx and the HD 6xxx; Nvidia made just as little, or even less, between the 400 and 500 series. In both cases, a bargain offer for a card from the previous generation may beat the latest generation in bang for the buck ;-)
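The bang-for-the-buck comparison is just one division per card - pull theoretical GFLOPS and a street price from those tables and rank (the figures below are placeholders for illustration, not real card data):

```python
# Rank cards by theoretical GFLOPS per dollar, using numbers taken from
# a spec comparison table. The figures here are placeholders, not specs
# of any real card.
def best_value(cards):
    """cards maps name -> (theoretical GFLOPS, street price in dollars)."""
    return max(cards, key=lambda name: cards[name][0] / cards[name][1])

cards = {
    "previous-gen bargain": (2000.0, 150.0),  # placeholder numbers
    "latest-gen midrange": (2700.0, 330.0),   # placeholder numbers
}
print(best_value(cards))  # the discounted previous-gen card wins here
```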

Re:And the crowd, didn't care.. (1)

JoeMerchant (803320) | about 2 years ago | (#36228614)

I feel like it just doesn't make sense to purposefully confuse the public by not coming up with reasonable names.

I agree, but... if they were to try to create a naming scheme that could be understood by the average Wal-Mart shopper, they would still fail.

I would want a GPU name to convey:

  - Number of pipelines
  - Speed/Capability of pipelines
  - Amount of memory
  - Speed of memory
  - Software compliance (itself a multidimensional variable)

So, when I compare a VP350UG266-G2X11 to a VP350SG233-G2X11, I know that what I'm getting is more video memory, but at a lower speed, and the same pipeline capabilities.

Yeah, that'll help the Wal-Mart shoppers.
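Something like this toy decoder is what I have in mind - every field boundary here is made up purely for illustration, of course:

```python
import re

# Decoder for the hypothetical naming scheme sketched above. The field
# layout (pipelines, memory size letter, memory speed, compliance tag)
# is an invention for illustration, not any real vendor's format.
NAME = re.compile(r"VP(\d{3})([A-Z])G(\d{3})-(\w+)")

def decode(name):
    pipes, mem_size, mem_speed, compliance = NAME.match(name).groups()
    return {
        "pipelines": int(pipes),          # number/speed of pipelines
        "memory_size_code": mem_size,     # letter code for memory amount
        "memory_speed": int(mem_speed),
        "software_compliance": compliance,
    }

a = decode("VP350UG266-G2X11")
b = decode("VP350SG233-G2X11")
# Same pipelines and compliance tag; different memory size code and speed:
print(a["memory_size_code"], b["memory_size_code"])
```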

Re:And the crowd, didn't care.. (0)

Anonymous Coward | about 2 years ago | (#36229182)

You don't put the RAM size in the GPU name, they're two different things.

Re:And the crowd, didn't care.. (1)

Lennie (16154) | about 2 years ago | (#36232646)

Actually this is true in any field. PR is just PR and a way to confuse consumers to sell them stuff they don't need.

Whooo whooo! (1)

L4t3r4lu5 (1216702) | more than 2 years ago | (#36227474)

Here comes the Press Release train! Next stop; Slashdot! Any details to disembark? Any at all? Nope, ok then, onwards to the next destination!

Seriously? One sentence? GFY submitter and editor both.

Re:Whooo whooo! (1)

Ant P. (974313) | about 2 years ago | (#36228422)

People are going to keep submitting blogspam until it starts hurting them in the wallet, so I just added *.icrontic.com to my DNS blacklist. No ad revenue or eyeballs for them from my network now and never.

So someone please explain the difference (1)

EmagGeek (574360) | more than 2 years ago | (#36227538)

between a 300W $500 high-end gaming video card and a $500 "workstation" card that consumes half the power? What is missing from the workstation card?

they are build for pro work / open cl vs games / (0)

Joe The Dragon (967727) | more than 2 years ago | (#36227606)

They are built for pro work / OpenCL, vs. games / DirectX.

Re:they are build for pro work / open cl vs games (1)

Osgeld (1900440) | about 2 years ago | (#36230040)

I understood this when looking at a Riva TNT vs. an OpenGL processor with removable RAM, but oftentimes now these are the same card with different settings.

So in order for me to understand the difference between gaming and workstation cards in 2011, you're going to have to do better than that one-liner; it's really up there with "Macs are better at audio".

Re:So someone please explain the difference (1)

UnknowingFool (672806) | more than 2 years ago | (#36227646)

Not sure myself, but I assume the consumer model is all about the flashy numbers game with 12 centillion polygons per second, while the workstation model is about reliability and stability. Most often the workstation model has better drivers and has gone through OpenGL testing and certification.

Re:So someone please explain the difference (3, Informative)

fuzzyfuzzyfungus (1223518) | more than 2 years ago | (#36227710)

Typically, the "workstation" card makes you pay through the nose per unit of silicon (though, at the same time, the top of the "workstation" range is going to be the only place to find the maximum RAM available in that generation, along with genlock and similar); but the "gamer" card will probably skimp on things like double-precision math and drivers that don't suck for anything other than playing Metal of Duty Crysis Evolved.

Re:So someone please explain the difference (1)

Anonymous Coward | more than 2 years ago | (#36227868)

The main difference is driver support. The workstation cards are designed for rendering CAD/3D visualizations/physics, where they must not drop frames. A gaming card can drop a frame here and there because it doesn't really matter; drop a frame in a physics simulation and you have to start over.

Re:So someone please explain the difference (1)

edxwelch (600979) | more than 2 years ago | (#36227878)

3D content creation apps typically use OpenGL commands that aren't used by games, and these cards are optimised to make those commands run faster.
(The article mentions a feature called GeometryBoost.)

Re:So someone please explain the difference (1)

EmagGeek (574360) | about 2 years ago | (#36228538)

That's important. If that's the case - if I have X-Plane (a flight sim that uses OpenGL and not DirectX) - it would be better to use a workstation card than a typical DirectX gaming card, I assume?


Re:So someone please explain the difference (1)

AJH16 (940784) | about 2 years ago | (#36229130)

Probably not worth it, as X-Plane would still be using optimized geometries. In workstation graphics, you encounter far weirder, less optimal situations while working on modeling and such, before optimizations are applied. Both NVidia and ATI have gotten better at having solid native OpenGL support in their gamer cards. As a general rule of thumb, if you don't know why you need a workstation graphics card, you don't need a workstation graphics card.

Re:So someone please explain the difference (1)

AJH16 (940784) | about 2 years ago | (#36229154)

Put another way, if you don't know why you need a workstation graphics card, you probably don't need one. Particularly since the OpenGL support has gotten much better in both ATI and Nvidia cards as of late.

Re:So someone please explain the difference (1)

StikyPad (445176) | about 2 years ago | (#36232610)

And yet, if the price is the same (as it now appears to be), wouldn't it be better to get the workstation card and have all the features of the consumer card and then some?

Re:So someone please explain the difference (0)

Anonymous Coward | more than 2 years ago | (#36235948)

It doesn't have all the same features. DirectX and shader support is different on the gamer cards, and the equivalent price points are not the same. The $999 card is probably most similar to a $350 or so gaming card in gaming performance. It is, however, kind of hard to explain the differences to someone who doesn't work in an industry that needs workstation graphics cards. The general idea would be comparing a racecar tuned for a racetrack and trying to drive it down a country dirt road. One is designed for high-precision, raw geometry. The other is optimized for high-quality, pre-processed and optimized geometry and effects. This is why you have to update your graphics drivers whenever a new game comes out, and you get weird issues if you don't have the optimizations; the workstation card wouldn't even be able to use those optimizations, as it just pushes raw data.

Re:So someone please explain the difference (1)

makomk (752139) | about 2 years ago | (#36231420)

Or in the case of NVidia's current generation of cards, the performance of 3D content creation apps depends on OpenGL commands that aren't performance-critical for games, so NVidia's consumer cards are pessimised to make those commands artificially slow. (Specifically, texture upload and readback is artificially restricted to be slower than on the previous generation of NVidia cards, which weren't crippled in this way. This makes them useless for running stuff like Maya.)

Re:So someone please explain the difference (1)

gman003 (1693318) | about 2 years ago | (#36227988)

Well, that $500 workstation card is a lot less powerful. Workstation cards are priced MUCH higher than their gamer equivalents - a FirePro V7800 is essentially a Radeon HD 5850 (clocked 50 MHz lower, actually). The only real difference is the drivers they use. And the price tag - the FirePro costs $649, while the Radeon costs $349. And some of the higher-end ones get ridiculous - the current going price for a V9800 is upwards of three grand.

As for how they can get away with that, I have no idea. Same way Microsoft can charge $300 for an Ultimate Edition OS, or how Apple can charge a fortune for a 133 MHz increase in clock speed.

Re:So someone please explain the difference (1)

UnknowingFool (672806) | about 2 years ago | (#36228348)

As for how they can get away with that, I have no idea. Same way Microsoft can charge $300 for an Ultimate Edition OS, or how Apple can charge a fortune for a 133 MHz increase in clock speed.

They can get away with it because workstation cards are not intended for consumers in general and gamers in particular. Workstation cards are normally tested and certified for OpenGL/OpenCL rather than DirectX, which makes them cost more. They are also built with reliability and stability in mind rather than raw numbers like polygons per second. Workstation cards generally handle things like Maya better.

To use an analogy, digital cameras run the gamut from $100 to $10,000. You might wonder how Canon can get away with charging customers several thousand dollars for a model that doesn't do video or come with a zoom lens, while Canon will sell you one that has those features. Of course, that would only mean you don't understand the difference between a professional DSLR and a point-and-shoot camera system.

Re:So someone please explain the difference (1)

gman003 (1693318) | about 2 years ago | (#36228926)

Except for one thing: the cards are essentially identical. The processor, the memory, all that is essentially identical. You can, with some difficulty (there's heavy DRM on this part) reflash a gamer card to function as a workstation card or vice versa. The difference is essentially all in software - the workstation drivers have been optimized for accuracy, producing subpixel-perfect images, and are mainly tested (as you said) for OpenGL. The gamer drivers have been incredibly optimized for speed - trading off perfect rendering for vastly increased framerates. The only actual hardware difference I've seen is on many Quadro models, where you can plug in an external clock generator to run several displays on the same clock phase (useful when filming CRTs, as you can run the camera on the same clock and thus eliminate the problems inherent with that).

This, to extend your analogy, would be like Canon selling the exact same camera to hobbyists as professionals (same high-end lenses, same powerful image sensors, all the bells, whistles and gongs), but the hobbyist one saves to an SD card while the pro-grade saves to some proprietary super-high-density card, then selling the latter for five times the price.

Re:So someone please explain the difference (0)

Anonymous Coward | about 2 years ago | (#36230324)

Bad analogies are bad.

It costs Nvidia/ATI a lot of development time to make these cards work well for workstation applications. Development time for a small (in volume) market. It's only worth their time to develop the drivers to work well with these apps if they get more money per unit for them.

While to a large extent they charge more because they can, they also charge more because it costs them money to build the software environment around this low volume market. Does it make sense to charge consumers slightly more, or charge workstation users a more appropriate cost for the extra work that is only required for them?

Nvidia charging 3 grand for their workstation cards is much like what Intel used to charge for x86 server processors, until AMD released the K8/Opteron. Competition in these markets helps make the "markup" for specialized needs that requires specialized development more reasonable. ATI entering this market seriously should do the same to Nvidia as what AMD did to Intel in the server space.

Re:So someone please explain the difference (1)

UnknowingFool (672806) | about 2 years ago | (#36231240)

The cards are identical only if you choose to ignore details that are not important to you. In the example you provided, an HD 5850 might have the same amount of GDDR5 RAM as the FirePro 3D V7800, but you didn't see that the FirePro RAM is 256 bits wide while the 5850 is only 128, for twice the bandwidth. Also, the V7800 supports single and double precision, while the 5850 does not mention it. It can probably do it, but that aspect is something they are not going to test on a consumer card. Lastly, the V7800 supports and has been tested for OpenCL; this is not supported on the 5850. These little details don't matter much to a gamer; they matter to a professional.

To extend my analogy, you're saying two cameras are the same because the sensors capture the same number of megapixels and there is no difference between full-frame, APS-H, APS-C, and other sensors. You're also ignoring that the professional camera is more rugged and can shoot 5x faster; the only thing you're concerned about is the speed of the media. See, the reason the media is faster is that the camera can shoot faster.

A hobbyist isn't concerned with 1/8000s shutter speed or full size sensors or noise reduction software or that the camera has been tested to 300,000 cycles; a professional cares about these things and will pay top dollar.

Re:So someone please explain the difference (1)

compro01 (777531) | about 2 years ago | (#36228434)

Different processors. The $500 workstation card is more similar to a $250 gaming card, only modified for real work (3D modeling, GPGPU number crunching, etc.) at high precision, with drivers to match and certification from the major names (AutoCAD, etc.). That's what you're paying the premium for.

This V7900 is between a 6870 and a 6950 in terms of hardware and the v5900 is between a 6670 and 6750.

Re:So someone please explain the difference (3, Informative)

Skynyrd (25155) | about 2 years ago | (#36229722)

between a 300W $500 high-end gaming video card and a $500 "workstation" card that consumes half the power? What is missing from the workstation card?

What's missing from the card? Certification for SolidWorks, Inventor, etc is missing from the consumer card.

is it good for bitcoin? (0)

Anonymous Coward | more than 2 years ago | (#36227602)

How many MHash/s can one get with it for bitcoin mining?

Re:is it good for bitcoin? (1)

compro01 (777531) | about 2 years ago | (#36228494)

The V7900 would probably net somewhere around 200-250 Mhash/s. In terms of clock speed and processor count, it's slightly inferior to a 6870.

The V5900 would probably get 100 Mhash/s or so.
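Those figures line up with a simple back-of-the-envelope estimate. A Bitcoin hash is a double SHA-256, which costs very roughly 4000 ALU operations per stream processor (an assumed figure, not a measurement; the V7900 specs below are the commonly quoted ones):

```python
# Back-of-the-envelope Bitcoin mining throughput. CYCLES_PER_HASH is a
# rough assumption for a double SHA-256 on VLIW-era AMD shaders, not a
# measured number.
CYCLES_PER_HASH = 4000.0

def mhash_per_s(stream_processors, core_clock_mhz):
    # stream_processors * MHz = millions of shader-cycles per second,
    # so dividing by cycles-per-hash yields MHash/s directly.
    return stream_processors * core_clock_mhz / CYCLES_PER_HASH

# Commonly quoted V7900 specs: 1280 stream processors at 725 MHz.
print(round(mhash_per_s(1280, 725)))  # ~232, inside the 200-250 range above
```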

GPU x86 mapping feasible? (1)

doubleyou (89602) | more than 2 years ago | (#36227716)

How soon until you can emulate an x86 instruction set on one of these? Sure, architectural differences make it an apples and oranges comparison, but I wonder how far such a project could go...

Re:GPU x86 mapping feasible? (1)

Macman408 (1308925) | about 2 years ago | (#36228636)

I'm pretty sure GPUs are Turing-complete, which means they can implement any algorithm, just like any CPU. However, they'd be dog slow - just because they can crunch lots of data in parallel doesn't mean they'd be able to do the same if the instructions were in x86 format. Common things like branches aren't handled well on the GPU - and some studies have shown that about one in every four instructions is a branch.

Also, there's lots of very specific hardware beyond the more general-purpose math-type stuff, like support for virtual memory, interrupts and things like that. Emulating all this would take huge amounts of software work. Add to this that any particular math-type operations that aren't supported would also have to be completely computed in software (as an example, imagine you had to implement square root in software - which you might, since the GPU's square root might be a different precision from the CPU's), and you might start to see why this isn't a good idea. Even when it's done with hardware that is a better match for an x86 CPU (like what Transmeta did), it's a hard problem, and not necessarily beneficial.
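The branch problem fits in a few lines: a SIMD group of lanes runs in lockstep, so it has to execute both sides of every branch and mask out the lanes that didn't take it (a toy model of divergence, not any real GPU's execution rules):

```python
# Toy model of SIMD branch divergence: all lanes execute BOTH sides of
# a branch, and per-lane masks pick which result each lane keeps. A CPU
# would run only one side per element.
def simd_collatz_step(lanes):
    even = [v % 2 == 0 for v in lanes]
    then_side = [v // 2 for v in lanes]     # pass 1: every lane runs this
    else_side = [3 * v + 1 for v in lanes]  # pass 2: every lane runs this too
    # The mask selects the kept result, so one branch costs two passes:
    return [t if m else e for m, t, e in zip(even, then_side, else_side)]

print(simd_collatz_step([1, 2, 3, 4]))  # [4, 1, 10, 2]
```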

An Actual Summary. (4, Informative)

YojimboJango (978350) | more than 2 years ago | (#36227782)

A summary since we don't seem to have a good one here:
AMD releases two new video cards targeted at the CAD-type audience, competing with the Quadro line from Nvidia. The hardware itself isn't anything you couldn't find in your average high-end gaming card, but they've done a stupid amount of driver optimisation for design work, which is why these cards cost more. More interesting, though, is how (comparatively) low AMD has priced these models ($599 and $999).

From the Article:
"We’ll do a follow-up article with the charts and graphs that the more pedantic among you expect, along with some interesting comparisons to other products, but in the meantime, I will summarize it with this: In SpecViewperf 11, the V7900 is about neck-and-neck with the $4000 NVIDIA Quadro 6000, and in some tests exceeded the legendary Q6000."

Re:An Actual Summary. (1)

JoeMerchant (803320) | about 2 years ago | (#36228642)

Yeah, I bought one of those "stupidly optimized" workstation cards to go with AutoDesk Inventor - as recommended by the AutoDesk certified training center professionals.

Damn card would power-spike the system bus and cause a power-fail reboot every time certain rotate operations were performed - real helpful it was.

Re:An Actual Summary. (0)

Anonymous Coward | about 2 years ago | (#36231736)

That's a good reason for not using unsupported OpCodes such as HCF [wikipedia.org] .
