
Building A High End Quadro FX Workstation

Hemos posted more than 11 years ago | from the making-it-faster-and-better dept.

The Almighty Buck 89

An anonymous reader writes "FiringSquad has an article detailing some of the differences between building a high-end workstation and a high-end gaming system. They go into things like ECC memory, and the difference between professional and gaming 3D cards. The Quadro FX 2000 coverage is particularly interesting -- the system with the Quadro FX 2000 was never louder than 55 dB!"


89 comments


Never louder (0)

Anonymous Coward | more than 11 years ago | (#5214668)

55 dB, is that supposed to be quiet?

interesting (2, Interesting)

Sh0t (607838) | more than 11 years ago | (#5214676)

Compared to the 75 dB GFFX, that's a whisper

Re:interesting (0)

Anonymous Coward | more than 11 years ago | (#5220728)

Especially given the logarithmic nature of the decibel scale.
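For anyone who wants the arithmetic: decibels are logarithmic, so the 20 dB gap between the two cards mentioned above is a hundredfold difference in sound power. A quick sketch (the 55 dB and 75 dB figures are the ones quoted in this thread):

```python
# Decibels are logarithmic: every 10 dB step is a 10x change in sound power.
quadro_db = 55   # figure from the article summary
gffx_db = 75     # figure quoted for the GeForce FX in the parent comment

power_ratio = 10 ** ((gffx_db - quadro_db) / 10)
print(power_ratio)  # 100.0 -- the 75 dB card emits ~100x the sound power
```

(Perceived loudness only roughly doubles per 10 dB, so to the ear the GeForce FX would sound around four times as loud, not a hundred.)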

You should all be watching the Snooker! (-1, Flamebait)

Anonymous Coward | more than 11 years ago | (#5214678)

You should all be watching the Snooker on BBC TV! Oh wait, most of you live in that shitty, fat, arrogant and imperialistic country called "The USA".

Have fun killing innocent people.

Re:You should all be watching the Snooker! (0)

Anonymous Coward | more than 11 years ago | (#5215428)

When You hear gunfire, you know you're there!

There goes my comment ! (0)

Anonymous Coward | more than 11 years ago | (#5214706)

I was going to read the article before posting, but this is what I got:
Access to firingsquad.gamers.com

The site you have requested is currently blocked... blah blah blah

I guess I have to go back to work.

Re:There goes my comment ! (0)

Anonymous Coward | more than 11 years ago | (#5215949)

Yeah, same here. Fucking corporate filter policy.

Quiet (0, Redundant)

koh (124962) | more than 11 years ago | (#5214708)

-the system with the Quadro FX 2000 was never louder than 55 dB

Which is, mind you, 10 times quieter than the Xbox!!

Seriously, who can afford to put money into something like that? You're better off starting to build another STEP system in your garage...

ECC Memory? (3, Insightful)

Proc6 (518858) | more than 11 years ago | (#5214712)

Can someone tell me why ECC memory is a good idea? I don't think I can remember, in all my years of computing, a machine crashing due to a memory error, or a machine not crashing because ECC memory saved it. Maybe I wouldn't know if it did, but I've always felt like ECC memory was slower, more expensive, and about as necessary as UFO insurance. Personally I'd rather have regular memory that tacos the machine completely when there's a problem, so I know there's a problem, than have ECC constantly correcting memory errors without my knowing, until I go to leave on vacation and then the whole DIMM gives out.

I Am Not A Memory Expert though.

Re:ECC Memory? (2, Interesting)

evilviper (135110) | more than 11 years ago | (#5214730)

I don't think I can remember in all my years of computing a machine crashing due to a memory error

Either you just haven't recognized when it happened, you don't work with any significant number of computers, or you've been INCREDIBLY lucky.

Memory isn't perfect. If your uptime is important, you need ECC.

Re:ECC Memory? (1)

Alan Partridge (516639) | more than 11 years ago | (#5214740)

you know, I always specify ECC for our Win2K workstations, but it rather strikes me that Winders will let the side down LONG before memory errors stop the show. Am I right or what?

Re:ECC Memory? (4, Informative)

larien (5608) | more than 11 years ago | (#5214883)

OK, two points:
  1. If you're aiming for stability, you try to remove all such possible causes; even if Windows will crash once a week, there's no point making it worse by risking memory failure.
  2. Even if your machine doesn't crash, a flipped memory bit could invalidate your data results by altering a crucial figure. In some cases it's not important, but a flipped high-order bit could alter a conclusion significantly and you wouldn't notice.
Depending on your target audience, the latter may be more important than the former.

Re:ECC Memory? (0)

Anonymous Coward | more than 11 years ago | (#5220733)

Even if your machine doesn't crash, a flipped memory bit could invalidate your data results by altering a crucial figure.

...parity, anyone?

Re:ECC Memory? (0)

Anonymous Coward | more than 11 years ago | (#5221386)

You ever seen an ECC error on Windows?

Remember, even though it says "Memory Parity Error", it still means that M13r0$03T SUX0RS.

Re:ECC Memory? (2, Insightful)

rtaylor (70602) | more than 11 years ago | (#5215196)

It's subtle corruptions most people worry about. If you're doing financial transactions, you do everything you can to ensure that 4 doesn't turn into an 8 accidentally.

Re:ECC Memory? (1)

evilviper (135110) | more than 11 years ago | (#5216955)

Yes, subtle problems are bad, but it's just a matter of percentages. It is far more likely that an instruction, or binary data will be corrupted, causing a crash or corruption. I've seen it happen with memory, I've seen it happen with CPUs, and I've even seen it happen with the system bus.

If someone is dealing with critical numbers, I would hope that they have a lot more redundancy and comparison/verification in place, than just trusting the hardware of a single machine.

Re:ECC Memory? (5, Informative)

e8johan (605347) | more than 11 years ago | (#5214766)

RTFA Read The F**king Article!

"Two to twelve times each year, a bit in memory gets inappropriately flipped. This can be caused by cosmic rays flying through your RAM or a decay of the minute radioactive isotopes found in your RAM - the impurity need only be a single atom. Most of the time, this flipped bit is unimportant. Maybe it's a flipped bit in unallocated memory, or maybe it just altered the position of a pixel for a fraction of a second. If you're unlucky though, this flipped bit can alter critical data and cause your system to crash. In our situation, a flipped bit could potentially alter our results significantly."

Quoted from the second paragraph of the fourth page.

Re:ECC Memory? (1)

inbox (310337) | more than 11 years ago | (#5215203)

Wouldn't running the tests twice be a better way to ensure this kind of thing doesn't happen?

With ECC RAM, you're just eliminating that *known* unlikely event. What about other *unknown* unlikely events? Those may have just as high a likelihood as a flipped memory bit.

Running the tests twice should eliminate the vast majority of these kinds of known and unknown rare glitches, no?

Re:ECC Memory? (4, Insightful)

e8johan (605347) | more than 11 years ago | (#5215263)

Large simulations (such as this, or car crash simulations, etc.) take days, if not weeks, to run. Since ECC RAM isn't 100% slower (i.e. running twice on fast memory takes longer than running once on ECC memory), there is no need to run it twice.

Anyhow, if the two simulations differ, you'll have to do it a third time to check whether you get a match, and even then you only know that you are *likely* to have gotten it right. With ECC the chance of getting it right increases.
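The parent's economics can be put in rough numbers. The 5% ECC penalty and 100-hour run below are assumed, illustrative figures, not ones from the article:

```python
# Compare one ECC-protected run against running twice on non-ECC memory.
# Both numbers here are assumptions for illustration only.
t_run = 100.0          # hours for one simulation on non-ECC memory (assumed)
ecc_overhead = 0.05    # assumed ~5% slowdown from ECC

t_ecc_once = t_run * (1 + ecc_overhead)  # one trustworthy run: 105 hours
t_plain_twice = t_run * 2                # two runs, before any tie-breaking third
print(t_ecc_once, t_plain_twice)         # 105.0 200.0
```

As long as the ECC penalty stays under 100%, the single ECC run wins, and unlike the run-twice scheme it never needs a third run to break a tie.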

Re:ECC Memory? (2, Interesting)

still_nfi (197732) | more than 11 years ago | (#5216237)

Um... since a large number of memory accesses come from cache, wouldn't it be more important to have an ECC cache than ECC main memory? Certainly, that is where a flipped bit is most likely to cause a problem. I have doubts that any of the processors use ECC in the L1 or L2 caches.

Also, it's been a while, but doesn't most non-ECC memory use parity bits? So that a single flipped bit will be noticed... hence the isolated blue screens of death / kernel panics on very rare occasions. Or is a parity bit what passes for ECC these days?
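To make the parity question concrete: plain even parity adds one bit per word, which detects any odd number of flipped bits but can neither locate nor fix them, and misses an even number of flips entirely. A toy sketch:

```python
# Simple even parity over a byte: detects any odd number of flipped bits,
# but cannot say WHICH bit flipped (so the machine can only halt, e.g. a
# "Memory Parity Error" blue screen), and misses two flips entirely.

def parity(bits):
    p = 0
    for b in bits:
        p ^= b
    return p

byte = [1, 0, 1, 1, 0, 0, 1, 0]
stored_parity = parity(byte)

# One bit flips in storage:
corrupted = list(byte)
corrupted[3] ^= 1
print(parity(corrupted) != stored_parity)   # True -- error detected, location unknown

# A second bit flips: parity matches again, corruption goes unnoticed.
corrupted[6] ^= 1
print(parity(corrupted) != stored_parity)   # False -- silently wrong
```

ECC goes further by spending extra check bits so the error position can be computed, not just noticed.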

Re:ECC Memory? (1, Informative)

Anonymous Coward | more than 11 years ago | (#5216487)

since a large number of memory accesses come from cache, wouldn't it be more important to have an ECC cache than main memory? Certainly, that is where it is most likely that a flipped bit is going to cause a problem. I have doubts that any of the processors use ECC code in the L1 or L2 caches?

I believe SRAM cells are less likely to have bits flip than DRAM cells (but don't take my word for it). That said, AMD's Hammers will have extensive error checking for cache. The L1 data cache is ECC protected; the L1 instruction cache is parity protected. The unified L2 cache is fully ECC protected, including separate ECC for the L2 tags. The integrated memory controller supports Chipkill ECC RAM.

Re:ECC Memory? (2, Insightful)

kperrier (115199) | more than 11 years ago | (#5215359)

Wouldn't running the tests twice be a better way to ensure this kind of thing doesn't happen?

So, if it takes 4-6 DAYS for a test to run, you want to run it again to verify the results? They don't have the time to do it again. Take this from someone who manages a 190-node Linux cluster. We use it for seismic data processing. Our processing run times are 3 to 4 days each, and there are multiple runs for each job. We have project schedules that we need to meet, and running each step in the processing schedule twice is not an option.

Depending on what you are doing, the money is better spent up front on quality hardware than on doubling the time it takes a project to process the data. You could double the initial hardware cost, have two clusters, run the tests in parallel, and compare the results. That may be the best thing to do, depending on what you are modeling/processing, but it's much cheaper to invest in quality hardware up front.

Kent

Re:ECC Memory? (1)

Wolfrider (856) | more than 11 years ago | (#5216614)

>> Wouldn't running the tests twice be a better way to ensure this kind of thing doesn't happen?

--This is pretty much what Mainframes do... Only they do it WHILE the test / application is running.

fourth page!?!? (1)

sirshannon (616247) | more than 11 years ago | (#5215562)

Can I get this in "Articles on Tape" format?

can someone read it aloud and email me the mp3?

Re:fourth page!?!? (0)

Anonymous Coward | more than 11 years ago | (#5220748)

...why can't you just click the link and read it yourself?

Re:ECC Memory? (1)

Animats (122034) | more than 11 years ago | (#5216496)

Of course you get ECC memory. I've had ECC memory on all my machines for years. The price differential at the manufacturer level is small (go to Crucial and check).

Why have crashes? Even my Win2K machine stays up for months at a time.

Re:ECC Memory? (1)

mosch (204) | more than 11 years ago | (#5217073)

I Am Not A Memory Expert though.
That's pretty damned obvious, but since your post was moderated as insightful, I'll reply.

ECC is unnecessary if you use your computer to listen to mp3s, download porn and play Counter-Strike. If you're using your computer for important tasks, however, ECC corrects single-bit errors, which occur more often than you realize (most of your bits aren't very important, so you don't usually notice), and it also detects multi-bit errors, thus preventing data corruption that could otherwise go unnoticed.

Multi-bit errors with ECC will generate a non-maskable interrupt, which will purposefully take your machine down, rather than allowing you to continue with unreliable memory.

On a high-end server, every single data path runs ECC, so no data can be accidentally modified, ever. On PCs, it's generally considered acceptable just to ECC the memory, since PCs rarely are engaged in ultra-critical applications.

P.S. You're a fucking retard.
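The correct-single/detect-double behaviour described above is a Hamming-style SECDED (single-error-correct, double-error-detect) code. Below is a toy sketch over a 4-bit nibble; real ECC DIMMs use a much wider (72,64) code, but the mechanism is the same:

```python
# Toy SECDED: Hamming(7,4) plus one overall parity bit over a 4-bit nibble.

def encode(nibble):
    """Encode data bits [d1,d2,d3,d4] into 8 bits: 3 Hamming parity bits,
    4 data bits, and 1 overall parity bit."""
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4                     # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4                     # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4                     # covers positions 4,5,6,7
    word = [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7
    overall = 0
    for b in word:
        overall ^= b
    return word + [overall]

def decode(word):
    """Return (status, data): 'ok', 'corrected', or 'uncorrectable'
    (the case a server would turn into an NMI)."""
    w = word[:7]
    overall = word[7]
    # The syndrome bits spell out the 1-based position of a single flip.
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    total_parity = overall
    for b in w:
        total_parity ^= b
    if syndrome == 0 and total_parity == 0:
        status = 'ok'
    elif total_parity == 1:
        # Odd number of flips: assume one, and correct it in place.
        if syndrome != 0:
            w[syndrome - 1] ^= 1
        status = 'corrected'
    else:
        # Parity looks even but the syndrome is nonzero: two bits flipped.
        status = 'uncorrectable'
    return status, [w[2], w[4], w[5], w[6]]

data = [1, 0, 1, 1]
word = encode(data)

# Single flipped bit: corrected transparently, like ECC RAM.
flipped = list(word); flipped[4] ^= 1
print(decode(flipped))      # ('corrected', [1, 0, 1, 1])

# Two flipped bits: detected but not correctable.
flipped2 = list(word); flipped2[1] ^= 1; flipped2[5] ^= 1
print(decode(flipped2)[0])  # uncorrectable
```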

Re:ECC Memory? (0)

Anonymous Coward | more than 11 years ago | (#5218596)

A few years back I had a huge Excel data-quality project that used a macro to run through tens of thousands of data points. Once I had the program running, I came to realize that the machine I was using would give me a different set of results every time I ran the same data set. After three days of troubleshooting and pulling my hair out, I switched to another machine and was finally getting the same result no matter how many times I ran it. I found out later that bad memory could have been the culprit. You wouldn't have known it, though, because the rest of the computer appeared to run just fine.

Easy (1, Interesting)

unterderbrucke (628741) | more than 11 years ago | (#5214739)

"differences between building a high-end workstation and a high-end gaming system."

1. workstation == better processors
2. gaming system == better graphic cards

Re:Easy (2, Interesting)

Molt (116343) | more than 11 years ago | (#5214796)

You may like to read the article. This is a scientific visualization workstation being built with a seriously nice Quadro FX graphics card.


The author even benchmarks UT2k3 on it, and the scores are.. umm.. impressive.

Re:Easy (1)

troc (3606) | more than 11 years ago | (#5214813)

Define *better*.

Better processors how? Faster? Better multiprocessing? Vector?

Better graphics cards how? Cunning filtering? Double-whammy pipelines with 8x anti-aliasing? Fast 2D? Accurate 3D?

Certain workstations require graphics cards which would make your Nvidia blahblah cry for mercy *in specific operations*. This makes it a BETTER graphics card - FOR ITS INTENDED USE. Yes, it'll be a crap gaming card. Likewise, it's possible a workstation will have a processor that's useless when it comes to running Windows - and that therefore people would say their gaming machine had a *better* processor. But for the specific applications of that workstation it would be fine.

So there are lots of workstations with better graphics cards and worse processors and vice versa.

hohum

Troc

Re:Easy (3, Insightful)

sql*kitten (1359) | more than 11 years ago | (#5215019)

1. workstation == better processors
2. gaming system == better graphic cards


Not as simple as that. A games card will trade precision for speed, because precision is less important if you are updating the scene dozens of times a second anyway. If two walls don't meet perfectly for 1/60th of a second, who will even notice? A workstation card will trade speed for precision - you cannot risk a mechanical engineer missing an improperly aligned assembly because of an artifact created by the graphics card, or worse, breaking an existing design because an artifact shows a problem that doesn't exist in the underlying model.

Re:Easy (2, Informative)

clarkc3 (574410) | more than 11 years ago | (#5215262)

2. gaming system == better graphic cards

I just can't agree with that statement - it's more 'drivers written to function better in games' than a better graphics card. The one in the article uses a Quadro FX, and I know lots of other people who use the 3Dlabs Wildcat series - both of those cards wipe the floor with 'gaming' cards in 3D rendering for things like CAD/3D Studio/Maya.

Not entirely surprising (5, Informative)

kruetz (642175) | more than 11 years ago | (#5214775)

Let's face it - the main focus in a games PC is a blindingly fast GPU that can do umpteen hundred frames/sec at 1600x1200x32 or whatever, so you also need your system to be able to give the data to your video card as fast as possible. (Sound is another consideration, but not quite so major).

But "honest-to-goodness computation" (numerical analysis, ...) doesn't use a GPU too intensively, except for displaying graphical data, for which the high-end OpenGL cards are ideal. The main focus here is the CPU's performance on complex numerical tasks, not just passing data to the AGP slot. And let's face it, multiple-CPU PCs don't necessarily do anything for gaming, but they're great for this sort of stuff.

However, most if not all of the points in this article are quite informative - did YOU know the difference between Athlon XP and MP? I thought I mostly did.

And his choice of ECC RAM - Two to twelve times each year, a bit in memory gets inappropriately flipped ... If you're unlucky though, this flipped bit can alter critical data and cause your system to crash. In our situation, a flipped bit could potentially alter our results significantly. Geez.

We come to the video card - a hacked GeForce isn't the same thing as a Quadro - bet some of the FPS freaks might be a little surprised, but GeForces and Radeons aren't made for this sort of stuff. No real surprise, if you think about it. But, as he says, why not a FireGL? Everything comes back to the lesson of the day: know your task. And boy, he certainly does.

Anyway, enough of regurgitating some of the finer points of this great article. Read it for yourself. And don't post comments about how 1337 your Radeon 9700 Pro or Ti4800 is. Know your task.

Re:Not entirely surprising (-1, Troll)

Alan Partridge (516639) | more than 11 years ago | (#5214980)

great article? it's a shit article written by a pants creaming box nerd trying to sound well informed with some regurgitated marketing hype. Why the fuck else would he do game benchmarks? What about the gfx cards he didn't mention, like those from 3Dlabs?

totally boring, totally uninformative - nothing you couldn't surmise yourself after a quick perusal of THG and a couple of price lists.

Re:Not entirely surprising (1)

override11 (516715) | more than 11 years ago | (#5215227)

Hehehe, actually, multiple CPUs in XP or 2K Pro help in gaming quite a bit. When you don't wanna shell out 500 bucks for a top-of-the-line CPU, a dual Athlon system is smokin' fast for gaming, and lots of games like Unreal Tournament and Quake 3 are either already set up to detect and use multiple CPUs, or you can tweak a config file to use them.

Uh. (1)

Scott Francis[Mecham (336) | more than 11 years ago | (#5216519)

No, they don't.
The Unreal engine has never been multi-threaded (there was a RUMOR that a future build of UT2k3 would have it in for laughs; this has not happened yet). For Quake 3, you can use the "r_smp" variable in a Q3 engine game, but this is more of a testament to Carmack than anything else (stability problems, here we come).
Speaking as an owner of a dual-Athlon system, buying an SMP machine entirely for gaming is a shootable offense - there's no viable reason. Most games really aren't bound by the CPU; they're very fill-rate and TnL dependent, and more likely to run into your video card, RAM or bus speed barriers first. More CPU helps if you're running a server or for some reason want to play with a ridiculous number of bots, but a bus speedup or a better video card will aid the client much more.
Where it DOES come in handy is if you do development work; you can launch the client without having to quit out of your editing environment, compile a level in the background, or encode MP3s without a single loss of frames...

Re:Uh. (0)

Anonymous Coward | more than 11 years ago | (#5218132)

or encode MP3s without a single loss of frames...

If your single CPU system is losing frames ENcoding MP3's, you have more serious problems than needing an extra CPU. Not even a 486 needs to lose frames while encoding.

Re:Not entirely surprising (1)

saintlupus (227599) | more than 11 years ago | (#5216103)

And don't post comments about how 1337 your Radeon 9700 Pro or Ti4800 is. Know your task.

My task: Running a console on the rare occasion that a monitor is plugged into my server at home.

My card: An S3 Trio32

Ph33r my 1337ness.

--saint

Re:Not entirely surprising (1)

gl4ss (559668) | more than 11 years ago | (#5217364)

You should be running the Trident SVGA card we used to have... it was sure groovy to get snow on the screen every time the palette was altered!

Oh wait, it blew its gain circuitry or something to bits........

Re:Not entirely surprising (1)

wrenkin (71468) | more than 11 years ago | (#5218137)

Personally, I have great affection for my onboard S3 Trio 64V+. Sure, it isn't accelerated, and with only 2MB of VRAM you're not gonna get much performance... and there was that whole issue of pre-4.1.x X corrupting the display if you switched back and forth from a virtual console, not to mention that XFree86 didn't even support the card until around 4.0.2...

wait a minute...

SGI should be put out of its misery (4, Funny)

Proc6 (518858) | more than 11 years ago | (#5214776)

Not intending to start a Holy War, I realize the 64 CPU monsters have their place but their workstations are just ignorant (this is coming from a previous SGI only owner)...

"These systems were around $40,000 when first released. Each R12000 400MHz has a SpecFP2000 of around 350-360 and so it's approximately equal to an Athlon 1.2GHz. The caveat is that the SpecFP2000 benchmark is actually made up of a bunch of other, smaller, tests. For computational fluid dynamics or neural network image recognition, the 400MHz SGI CPU is 2.5 to 5 times faster than the Athlon!"

WOW! 2.5 times faster than a 1.2Ghz Athlon!? Man, you'd almost need a $168 2.4 Ghz Athlon [pricewatch.com] to keep up! I wish they made them!

P.S. The 3.06 Ghz P4 is just under 1000 on the SpecFP benchmark [specbench.org] .

Re:SGI should be put out of its misery (0)

Anonymous Coward | more than 11 years ago | (#5214906)


WOW! 2.5 times faster than a 1.2Ghz Athlon!? Man, you'd almost need a $168 2.4 Ghz Athlon [pricewatch.com] to keep up!

double MHz != double Performance. MP 2800-3000 should do the trick.

cu

Re:SGI should be put out of its misery (1)

orangesquid (79734) | more than 11 years ago | (#5214921)

(1) Economies of scale (esp. with chip manufacture!)
(2) Spreading the overhead and costs of R+D (which can be *huge*)
If everybody went with SGI instead of IBM, we'd all be buying R12K boxes (from clone manufacturers, no less :) for $1500 apiece now.
Shop eBay... best UNIX for your dollar.

Re:SGI should be put out of its misery (-1, Flamebait)

Alan Partridge (516639) | more than 11 years ago | (#5214927)

oh , fuck off

what you don't know about sgi system architecture can just about fit into the Grand Canyon

Re:SGI should be put out of its misery (2, Interesting)

lweinmunson (91267) | more than 11 years ago | (#5215321)

Not intending to start a Holy War, I realize the 64 CPU monsters have their place but their workstations are just ignorant (this is coming from a previous SGI only owner)...
"These systems were around $40,000 when first released. Each R12000 400MHz has a SpecFP2000 of around 350-360 and so it's approximately equal to an Athlon 1.2GHz. The caveat is that the SpecFP2000 benchmark is actually made up of a bunch of other, smaller, tests. For computational fluid dynamics or neural network image recognition, the 400MHz SGI CPU is 2.5 to 5 times faster than the Athlon!"

WOW! 2.5 times faster than a 1.2Ghz Athlon!? Man, you'd almost need a $168 2.4 Ghz Athlon [pricewatch.com] to keep up! I wish they made them!

P.S. The 3.06 Ghz P4 is just under 1000 on the SpecFP benchmark [specbench.org].


Let's see: the last generation we have SPEC numbers for from SGI is the 600MHz R14K. It clocks in at 529 peak FP, compared to 656 peak FP for the 2.4 GHz MP that was used in the benchmark. That's about a 20% difference in speed. The original CPUs he was dealing with, the R12K 400 and the 1.2 GHz K7, score 407 and 352 respectively. That actually gives the SGI a lead of about 15%. Now, if the 2.5x increase in an application holds true, I'd say the SGI is still a good deal if you can afford it.

Now granted, I don't have $40,000 to spend on a workstation, but there are plenty of companies willing to spend the extra $30,000 once to get double the performance out of their $60,000-a-year engineers for the next two or three years. Also, as is pointed out in the article, the P4 is insanely optimized for SPEC; its numbers have no real meaning for most real-world applications. If you want to get right down to it, SGI can give you 512 CPUs run through a single InfiniteReality module. No one would actually do this, but it's nice to dream about it once in a while :)
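The ratios in the parent are easy to recompute from the SpecFP figures as quoted there:

```python
# SpecFP peak figures as quoted in the parent comment.
r14k_600, athlon_mp_2400 = 529, 656
r12k_400, athlon_1200 = 407, 352

athlon_lead = (athlon_mp_2400 / r14k_600 - 1) * 100   # Athlon MP over R14K 600
sgi_lead = (r12k_400 / athlon_1200 - 1) * 100         # R12K 400 over 1.2 GHz K7
print(round(athlon_lead), round(sgi_lead))            # 24 16
```

So on the quoted numbers the gaps work out to roughly 24% and 16% — close to, if slightly larger than, the "about 20%" and "about 15%" figures in the parent.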

Re:SGI should be put out of its misery (1, Informative)

Anonymous Coward | more than 11 years ago | (#5215363)

Your pricewatch link points to an Athlon 2400+, which does not run at 2.4 GHz, but at 1.93 GHz.

Biased? (4, Interesting)

Gheesh (191858) | more than 11 years ago | (#5214778)

The article carefully explains the choices made. However, we find the following line at the end of it:

Special thanks to AMD, NVIDIA, TYAN, and Ryan Ku at Rage3D.com for helping me with this project.

Well, maybe they had no influence at all, but then how come most of the chosen products match this 'special thanks' line?

Re:Biased? (3, Insightful)

sweede (563231) | more than 11 years ago | (#5214843)

Perhaps the author of the article did his research and picked out the components of the system BEFORE contacting vendors and buying them.

You don't order food or car parts without knowing what is there and what you want/need, do you??

Oh, and if you notice that the rest of the site is based on new-hardware reviews and performance testing, you'd think they would have a good sense of what works and what doesn't.

If you went out and researched companies or people for a project you were doing, would you not include them in a `special thanks to' section of the paper?

Re:Biased? (3, Insightful)

vivIsel (450550) | more than 11 years ago | (#5214931)

Welcome to the world of "hardware review" sites. Bias is their collective middle name.

Two of Hearts (-1, Offtopic)

Anonymous Coward | more than 11 years ago | (#5214792)

I-I-I-I-I-I need, I need you
two of hearts
two hearts that beat as one
I need you, I need you
come on, come on

I never said I wasn't gonna tell nobody
no baby
but this good lovin' I can't keep it to myself
Oh no
When we're together it's like hot coals in a fire
hot baby
my body's burning so come on hear my desire
come on, come on

I've got this feeling that you're going to stay
I never knew that it could happen this way
before I met you I was falling apart
but now at last I really know that we are

two of hearts
two hearts that beat as one
I need you, I need you
come on, come on

People get jealous 'cause we always stay together
yeah baby
I guess they really want a love like yours and mine
together forever
I never thought that I could ever be this happy
yeah baby
my prayers were answered boy you came in the nick of time
woah woah woah woah

news flash -- for those who can't read articles (0)

Anonymous Coward | more than 11 years ago | (#5214976)

News flash: performance per Hz on x86 CPUs sucks; the Athlon and P4 Xeon both suck, but for different reasons. If you are interested in performance per $, then it's a different story.

ISV Certification (5, Informative)

vasqzr (619165) | more than 11 years ago | (#5214985)


If it's not ISV certified it doesn't do you much good, as far as a workstation goes.

From Ace's Hardware:

When you look at the typical price ($4000-$6000) of a workstation built by one the big OEM's you might ask yourself why you or anyone would pay such a premium for a workstation.

In fact if you take a sneak peek at the benchmarks further you will see that a high-end PC, based upon a 1400MHz Athlon, can beat these expensive beasts in several very popular workstation applications like AutoCAD (2D), Microstation.

Yes, it is possible that you are better served by a high-end PC, assembled by a good local reseller. Still, there are good reasons to consider an OEM workstation.

Most of the time, a workstation is purchased for one particular task, and sometimes to run one particular application. Compaq, Dell and Fujitsu Siemens have special partnerships with the ISVs (Independent Software Vendors) who develop the most important workstation applications. In close co-operation with these ISVs, they verify that the workstation is capable of running each application stably and fast. In other words, you can ask the OEM whether or not he and the ISV can guarantee that your favorite application runs perfectly on the OEM's workstation. ISV certification is indeed one of the most critical factors that distinguishes a workstation from a high-end desktop.

Secondly, it is harder to assemble a good workstation than a high-end PC. Typically, a PC is built for the highest price/performance. A lot of hardware with an excellent price/performance ratio comes with drivers which do not adhere strictly to certain standards such as the PCI and AGP standards. Even if this kind of hardware might compromise stability only in very rare cases, that is unacceptable for a workstation.

Last but not least, workstations come with high-end SCSI harddisks and OpenGL videocards which are seldom found in high-end PC's. Workstations are shipped with ECC (Error Checking and Correction code) memory and can contain 2GB to 4GB memory. High-end PC's typically ship with non-ECC memory and are - in practice - limited to 512MB (i815 chipset) - 2GB (AMD760).

Re:ISV Certification (0)

Anonymous Coward | more than 11 years ago | (#5215401)

Quadros are ISV certified.

Re:ISV Certification (1)

UberLame (249268) | more than 11 years ago | (#5216490)

While ISV certification is often very important, in the case of this article, where they were talking about a machine for one specific task, I don't believe certification is too important. MATLAB doesn't require certification (I don't think they even give certification to anyone), and I'm pretty sure the Quadro viewer program mentioned also doesn't require anything other than that the machine have a proper Quadro.

Xeons have more L2 cache? (1)

_|()|\| (159991) | more than 11 years ago | (#5214995)

In the discussion of AMD vs. Intel, I was surprised to read the following:
While both the P4 and XEON are based upon a similar cores, the XEON offers multiprocessor support and larger L2 caches.
The Pentium III Xeon had a larger L2 cache, but not the Pentium 4 Xeon. I just checked intel.com [intel.com], and there is a Xeon MP with a large L3 cache, but that only goes to 2 GHz, so I doubt it was under consideration.

Perhaps the author felt that it goes without saying, but I'll say it. Regardless of theory, the choice of CPU would ideally be left until after some domain-specific benchmarks.

Only 55db (1)

scotay (195240) | more than 11 years ago | (#5215096)

The GeForce is clocked @ 500MHz. The Quadro is clocked @ 400MHz and doesn't need the hoover for cooling.

Re:Only 55db RTA? (1)

insanecarbonbasedlif (623558) | more than 11 years ago | (#5216476)

Only 55db (Score:1) by scotay (195240) on Monday February 03, @06:20AM (#5215096)
The GeForce is clocked @ 500MHz. The Quadro is clocked @ 400MHz and doesn't need the hoover for cooling.


Didja RT*A? From the horse's mouth:
I've run benchmarks at high resolutions when possible to minimize the influence of the CPU. By default the Quadro FX 2000 operates at 300/600MHz in 2D mode, and 400/800MHz in 3D performance mode. The new Detonators allow "auto-detection" of the optimal overclocking speed. This was determined to be 468/937. The GeForce FX 5800 Ultra runs at 500/1000. Here are the results we obtained with the card overclocked to 468/937:

I'm getting solid performance with a GPU that never runs past 63C and enters into the "high fan speed mode."


Hmmm. So. You were... wrong. OK. Bye,

MP3 Playback? (0)

Anonymous Coward | more than 11 years ago | (#5215111)

With the increased popularity of Linux among gamers, we certainly hope that Philips will at least produce basic drivers that support MP3 playback.


What exactly does this mean? You just need drivers that work (i.e. can make the soundcard produce sound)! Is there something magical about MP3s?

Re:MP3 Playback? (1, Interesting)

tvadakia (314991) | more than 11 years ago | (#5215289)

He's saying that you need drivers that work, and work without flaw, especially in a dual-processor setup. Any driver flaw at all could compromise the workflow (as well as unsaved work), and with the high-end work done on a workstation like this, that's ever so important. Look at drivers put out by Creative Labs... they're reputed to be really buggy in a dual-processor setup.

And as far as "Is there something magical about MP3s?" goes, I think he's talking about standard wave output support in Linux instead of enabling 5.1 surround, MIDI, game port, etc.: a minimal make-the-Linux-user-happy level of driver support.

Re:MP3 Playback? (2, Interesting)

mccalli (323026) | more than 11 years ago | (#5216217)

Look at drivers put out by Creative Labs... they're reputed to be really buggy in a dual-processor setup.

Confirmed in my experience with an AWE64 and a dual 533MHz Celeron setup. I moved to a Turtle Beach Santa Cruz: no problems.

And as far as "Is there something magical about MP3s?," I think he's talking about standard wave output support...

Many card/driver combinations are supposed to be able to recognise the kind of data put through them. The Santa Cruz, for example, had a 'Hardware MP3 accelerator' option in the control panel. I really don't know how they recognise it, though; instinctively I'd agree that surely the waveform has been decoded by the main CPU anyway? I'd be interested to hear from anyone who knows more about this point.

Cheers,
Ian
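For what it's worth, an MP3 stream is cheap to recognise without decoding it: MPEG-1/2 audio frames begin with an 11-bit sync pattern, so software could flag a byte stream as "probably MP3" just by scanning for it. A minimal sketch (the function name and sample bytes are made up for illustration; whether any given card's driver actually works this way is an open question):

```python
def find_mp3_frame_syncs(data: bytes):
    """Return offsets where an MPEG audio frame header could start.

    An MPEG-1/2 audio frame header begins with 11 set bits: 0xFF
    followed by a byte whose top three bits are set. Scanning for
    that sync pattern is one cheap way to flag a stream as likely
    MP3 data without decoding it.
    """
    return [
        i
        for i in range(len(data) - 1)
        if data[i] == 0xFF and (data[i + 1] & 0xE0) == 0xE0
    ]

# Two plausible frame headers embedded in otherwise arbitrary bytes.
stream = bytes([0x00, 0xFF, 0xFB, 0x90, 0x00, 0xFF, 0xF3, 0x10])
print(find_mp3_frame_syncs(stream))  # -> [1, 5]
```

A real detector would also validate the layer/bitrate fields and check that consecutive frames line up, since 0xFF bytes occur by chance in arbitrary data.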

Re:MP3 Playback? (1)

Gothmolly (148874) | more than 11 years ago | (#5217020)

FYI, if you use the Win2K drivers for your SB-series card on that BP6, you'll be fine. Don't use the Creative Crapware package.
(A fellow BP6er)

Re:MP3 Playback? (1)

unitron (5733) | more than 11 years ago | (#5219663)

If you're using a BP6 I'd suspect capacitor disease as soon as anything else.

Re:MP3 Playback? (1)

mccalli (323026) | more than 11 years ago | (#5220111)

If you're using a BP6 I'd suspect capacitor disease as soon as anything else.

Yep - my first BP6 board died due to that. The replacement was always fine however.

Cheers,
Ian

Re:MP3 Playback? (1)

mccalli (323026) | more than 11 years ago | (#5220090)

(A fellow BP6er)

Sadly no longer, as I've moved over to a Shuttle SB51G. But yes, it was the BP6 I ran - best value board for a long time. Excellent device in my opinion.

Cheers,
Ian

Re:MP3 Accelerator (1)

bobbozzo (622815) | more than 11 years ago | (#5237282)

Creative used to claim "MP3 acceleration" for encoding and decoding. They didn't give much info, but I got the impression it would only work if you used their software for playing and encoding.

What Changes for a Linux Math Machine? (2, Interesting)

Confessed Geek (514779) | more than 11 years ago | (#5215206)

Quite a nice article, and useful to me since I'm consistently building workstations for use in physics research, but what changes would be made for a Linux-based system?

The information on GPUs was great if you're running Windows and doing visualizations, but most of science doesn't use Windows. They started their projects on Big Iron Unix and are now moving to Linux.

Our current spec out looks like this:
2 Athlon MP 2400
Tyan Tiger MPX
We were using the Thunder, but found we didn't need the onboard SCSI, so we moved to the Tiger. After the fits I've been having with Gigabit cards and the AMD MP chipset, though, I'm considering going back to the Thunder for its built-in Gigabit.
2GB Kingston ValueRam ECC RAM (it's what Tyan suggests)
120GB WD Spc. Ed. 8M cache HD
Additional Promise IDE controllers for new HD's when needed.
Generic TNT2 or Gforce2 Video. (they are just math boxes)
Plextor ide CDRW
Still looking for the perfect tower.
Extra case fans.

The CPUs have been changing over the last year or so as the MPs get faster, and we have moved from 1 to 2GB of RAM.

Biggest problem I'm still having is that the system sounds like a 747 taking off, and I've had official AMD CPU fans burn out on me. I would still love to get a bit more oomph out of this, though, if there are any suggestions.

Re:What Changes for a Linux Math Machine? (1, Interesting)

Anonymous Coward | more than 11 years ago | (#5215364)

One thing you will need to look at if you are doing physics research is the availability of compilers and optimisation routines.

Fortran 90 is still the main scientific programming language (along with C and MATLAB). Intel make a very good P4/Xeon compiler. It would be interesting to compare it to, say, NAG's compiler or the Portland Group one that runs on my office machine, on both Intel and AMD.

With MATLAB it depends on what you are doing (FP vs. INT, and memory bandwidth again).

Re:What Changes for a Linux Math Machine? (1)

Confessed Geek (514779) | more than 11 years ago | (#5215434)

Xeon is something else I've been considering, but the serious price jump per workstation has rather curtailed my chances to experiment.

You're right, most of the apps my users run/write are Fortran, but they generally use GCC for compiling, or precompiled binaries from Fermilab. The experiments they are working on have some pre-prepared software sets used throughout that they are loath to change or recompile for fear of adding any additional factors (or so it was explained to me; I'm not a researcher, just the sysadmin).

I guess I should find out what Compiler they are using "upstream".

Re:What Changes for a Linux Math Machine? (2, Informative)

Glock27 (446276) | more than 11 years ago | (#5215528)

(they are just math boxes)

If they had higher-end NVIDIA graphics cards, they could also be very good OpenGL development/visualization stations, using Linux. Port all that SGI code with very little effort...

Biggest problem I'm still having is the system sounds like a 747 taking off and I've had official AMD CPU fans burn out on me. I would still love to get a bit more oomph out of this though if there are any suggestions.

I'd use aftermarket fans, I thought AMD's fans were cheesy (to use a technical term;). If you want a good product, I recommend the PC Power and Cooling [pcpowerandcooling.com] Athlon CPU cooler. PCP&C generally has top-quality products (great choice for power supplies as well).

You should probably start going for DVD-RAM drives also; lots more capacity for backups...

One final thought on numerics - you might want to compare some of the commercial compilers with gcc. For instance, Microway resells [microway.com] a strong line of commercial compilers. The Portland Group compilers, in particular, look promising.

Re:What Changes for a Linux Math Machine? (2, Informative)

Brian Stretch (5304) | more than 11 years ago | (#5215619)

Replace the AMD heatsink/fan kits with Thermalright SLK800s, YS Tech 80mm adjustable fans, and Arctic Silver 3 thermal compound. The catch is that the pink pad AMD uses instead of proper thermal compound may be permanently attached at this point, though the right chemicals (Goof-Off cleaner followed by rubbing alcohol) can probably remove it. I'm using SLK800s on my dual 2400+ ASUS A7M266-D board, and with the fans adjusted to 2000RPM the system is very quiet; the most annoying noise is from the fan on the Ti4200 card, and there's no room for one of those neat Zalman heatpipe GPU coolers. With this setup I'm getting lower CPU temps than I was with 1800+ chips and the retail-box heatsink/fan kits (using AS3, with the pink stuff scraped off).

See 2CoolTek [2cooltek.com] for this gear. I've been buying from them for years and highly recommend them.

You could go with one of those Vantec fan speed adjusters (handles 4 fans) instead of variable-speed fans... might be a better choice in your case.

Perfect tower: one of the Lian-Li aluminum cases, probably an extended-length model (an extra 10cm of space). See NewEgg [newegg.com], etc. Actually, they've got the cooling gear too.

"two sticks of RAM instead of one for Redundancy" (2, Insightful)

Zak3056 (69287) | more than 11 years ago | (#5215682)

Did anyone else see a logical disconnect between his assertion that two sticks of RAM were better than one (because if one failed, the machine could still operate while they waited for a replacement stick) and his choice NOT to use RAID?

Even worse, his choice of drive was a single WD 80GB IDE drive? WTF? There's a reason the warranties on those things just dropped to a year!

Re:"two sticks of RAM instead of one for Redundanc (0)

Anonymous Coward | more than 11 years ago | (#5215983)

I was pretty amazed he didn't go with a SCSI solution for stability's sake if nothing else. Sure it's more expensive, but so is a QuadroFX for graphics.

Re:"two sticks of RAM instead of one for Redundanc (2, Interesting)

Billly Gates (198444) | more than 11 years ago | (#5216116)

Actually the warranty dropped on their SCSI units as well. Something tells me a defect might be in there, especially with larger-capacity drives.

Also, many SCSI drives are less reliable than IDEs. Huh? This is because SCSI drives typically spin at higher revolutions, so they tend to fail more. Higher-capacity drives are more prone to defects and data corruption; the lower capacities are typically more reliable. Ask any admin how often they replace SCSI drives in various RAIDs. The fastest and biggest ones, from what I read here on Slashdot, fail every 2-6 months! Quantums, I heard, fail on a weekly basis in some of the more questionable units. The newer ones seem to be the worst.

I have been doing computers since 1991 and I have never seen a hard drive fail. I only use IDE. I believe part of the reason is that I used to upgrade my drives every 2 years, and until recently I did not run my systems 24x7 like servers do. For the last 2 years I have been running 24x7 without any problems. Like you, I would still select SCSI assuming it's for critical-level work and money isn't an issue; I would pick IDE if RAID was not needed, since SCSI is not more reliable unless it's in a RAID-5 configuration. Most workstations use a lot of graphics and CPU power, while server applications tend to bottleneck at the hard drive, so hard disk performance is not really a factor unless the application runs out of memory and swaps to the drive. SCSI vs. IDE benchmarks show they are almost identical in speed unless lots of I/O requests go to the drive in parallel. Most CAD apps today easily stay within 2 gigs of RAM; I know exceptions exist, but they are rare.

However, I would try to stay at 7200 RPM and not go above 10,000 for the drive. You're asking for trouble with the higher speeds, which in a lot of benchmarks don't provide more than single-percentage-point gains anyway. Another benefit of going with slower-RPM drives is that they are a lot quieter.

SCSI is nice because it offloads a lot of I/O processing to the SCSI card. For any database or critical application where RAID is needed, it's the only way. For a graphical workstation in non-critical use (an artist or grunt-level engineer), price and huge storage might be bigger factors, and SCSI without RAID is not more reliable anyway. I know a few RAID workstations exist, but RAID is almost exclusively used in servers and is expensive for a desktop. Most engineers save their work on a network share, so I guess you have to weigh the cost of a hard drive failure. Yes, engineers are sometimes expensive, but not more than any guy in sales or marketing in a big corporation; you might as well give everyone RAID.

Re:"two sticks of RAM instead of one for Redundanc (0)

Anonymous Coward | more than 11 years ago | (#5216454)

First - SCSI and EIDE drives quite often use the exact same mechanical hardware and platters - the only difference exists on the controller board. It is possible, on some lines, to actually swap out the controller boards, thus making one version of a drive a completely different one. Almost invariably, the physical format is identical, so the drive can be used as though it had always been that model.

For example, I have a bunch of older Seagate ST32550W(D) drives. Some are HVD, some are SE. I can swap the controller boards at will, and they essentially become the "other" model.

Second, it is actually better to leave a disk running 24x7 than to constantly restart it. Sure, if you are concerned about power, then by all means spin down the drives - but keep in mind that spinning up and down is where most of the wear occurs. I've seen drives fail suddenly that had been in operation for years. They just failed to restart. It comes down to inertia.

The warranty issue is most likely not related to actual increases in failures, but instead, related to support costs. They are narrowing the window of "lifetime" to eliminate drives that fail further out. SCSI drives are a premium product, and buyers expect more service - which is why they still enjoy (largely) the traditional three and five year warranty.

IOW - the warranty issue is a marketing thing, not a reliability thing.

Re:"two sticks of RAM instead of one for Redundanc (0)

Anonymous Coward | more than 11 years ago | (#5217014)

He went for the 8MB-cache WD drive; the guarantee on those is still 3 years, FYI.

55 dB not loud? (0, Troll)

hcdejong (561314) | more than 11 years ago | (#5215770)

Computers should be silent. Any noise at all is too much, and 55 dB is way too much.

Re:55 dB not loud? (0)

Anonymous Coward | more than 11 years ago | (#5216480)

That's no damn troll! It's the truth.

Re:55 dB not loud? (1)

thx2001r (635969) | more than 11 years ago | (#5218318)

At the rate we're going, this is the type of hardware [usgr.com] we'll need to dissipate heat in 5 years!

You'd think that at the rate the latest and greatest silicon is being churned out, running hotter and hotter, one of the brilliant minds of today could figure out a way to make quiet "stealth" cooling fans. Yep, I know there's liquid cooling for PCs, but even though it's "safe", the idea of liquid and 500 watts flowing side by side is not appealing to me! Not to mention, are you gonna liquid-cool your power supply too?

It's incredible that at all the web sites you see "ultra quiet" CPU cooling fans for sale... their decibel ratings are starting at 30! Of course, lots of them drop down to 30 only when their speed limiting systems kick in with the system idling. There is nothing quiet about that!!! You'd think there'd be some scientific solution to move air with a fan and not make such a racket!!! (If so, someone PLEASE point me in the direction of the CPU and Case fans that do this!!!)

Price? (1)

venomkid (624425) | more than 11 years ago | (#5215954)

Maybe I missed it, but I didn't see a final price. For all his talk about cost vs. performance in the beginning, you'd think we'd see a final overall price for this thing...

high end workstation? (2, Interesting)

asv108 (141455) | more than 11 years ago | (#5216007)

Hard Drive:
Western Digital 80GB Caviar with 8MB Cache

Why would you use a single IDE HD when you have SCSI built into the motherboard? In my experience storage upgrades have always provided tremendous speed improvements; disk access is always a big bottleneck. If you're going to have a "high-end" workstation, you need at least SCSI, preferably SCSI RAID. If you want to go barebones, at least have IDE RAID with a really good backup plan.

And WTF do Quake 3 benchmarks have to do with a workstation?

Re:high end workstation? (2, Insightful)

Billly Gates (198444) | more than 11 years ago | (#5216203)

He mentioned his budget in the article. Have you looked at the price of SCSI drives? Tiny 20-gig units at $400 each! Ouch.

SCSI is not faster or more reliable than IDE unless it's in RAID. So if you're going to do SCSI, you might as well buy not 1 but 4 drives for RAID, and that adds up. If you're doing a lot of I/O requests in parallel, then SCSI is faster because it can offload the tasks and queue them on the controller, but a single app will not do this unless it's a database or other server-oriented application. I notice a bigger increase in performance from a faster processor, but that's because I do not run a server. A workstation with lots of RAM has its bottlenecks in memory, CPU, and graphics card; a server, on the other hand, is different.

More emphasis should be on the processor and video card for any workstation purchase.

I agree with IDE RAID if the job cannot be interrupted by a failed drive; 4 drives are expensive, but still a lot cheaper than SCSI. Also worth mentioning: the money saved can go toward bigger IDE storage capacities. Keeping critical jobs locally is not as important as it used to be, because engineers, like their other white-collar associates, never store finished jobs on their own drives; they'd rather use a network share when they are done. You would be a fool to store your work on your own drive, since the file server backs it up to tape. Workstations typically run Win2K today rather than Unix, so they can use NT and Novell file servers.

Re:high end workstation? (0)

Anonymous Coward | more than 11 years ago | (#5216486)

Incorrect. SCSI is now capable of running at 320MBytes/sec, while the fastest ATA interfaces are at 133MBytes/sec. Factor in that SCSI still has a lot of features in its command set that make it *VASTLY* superior to ATA, especially in RAIDs. The ability to disconnect while awaiting data from the platters is probably the biggest, though with 8MB+ caches it's less and less of a factor.

The problem is that the high end SCSI drives cost a lot more per byte than ATA. Also, I would imagine that for this particular workstation, RAM and CPU will be the drivers, not disk IO. This isn't a server, it's a number cruncher - and he mentioned that they were loading 800MBytes into RAM per CPU. IOW, the data sets get loaded into RAM first, then crunched.

In this case, the ATA disk is being used simply for non-volatile storage, and performance isn't a driver.

Re:high end workstation? (2, Informative)

Billly Gates (198444) | more than 11 years ago | (#5216627)

But 320MB/sec is the theoretical limit. Last time I looked, a typical hard drive sustains only about 30MB/sec. That was over a year ago, so it may be higher now; ATA can easily handle the fastest hard drives.

A RAID with 4 drives might be more useful: 4*30 = 120MB/sec, which begins to approach the ATA limit in EIDE. Newer drives coming out will probably hit the ATA limit soon in RAID, and only SCSI can keep up. For a single drive, SCSI is not worth it.

Its strength will not show unless you run very heavily I/O-bound applications. I agree that SCSI is superior, but I can't picture an engineer hot-swapping his hard drive while rendering a scene, so swapping support matters only in the server arena.

Your post just repeated mine in saying that the emphasis on a workstation is not I/O, and that SCSI is not worth it unless it's in RAID. Price is important in this day and age of shrinking IT budgets, and the SCSI myth is being exposed: a single SCSI drive is not that much faster or more reliable than an IDE.
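The RAID throughput arithmetic in this thread (4 drives at roughly 30MB/sec each, against the ATA/133 and Ultra320 ceilings) can be sanity-checked in a few lines. The per-drive and bus figures below are the rough numbers quoted above, not measurements:

```python
# Rough figures as quoted in the thread (MB/sec); illustrative
# assumptions, not benchmarks.
per_drive = 30        # sustained transfer of one drive
drives = 4            # RAID 0 stripe members
ata133_limit = 133    # ATA/133 interface ceiling
u320_limit = 320      # Ultra320 SCSI ceiling

aggregate = per_drive * drives
print(f"aggregate stripe throughput: {aggregate} MB/sec")  # -> 120
print(f"fraction of ATA/133 ceiling: {aggregate / ata133_limit:.2f}")
print(f"fraction of Ultra320 ceiling: {aggregate / u320_limit:.2f}")
```

On these numbers a 4-drive stripe sits just under the ATA/133 bus limit while leaving most of an Ultra320 bus idle, which is the point both posters are circling.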

Well written, but weak article (3, Interesting)

zaqattack911 (532040) | more than 11 years ago | (#5216895)

I think he starts off well, talking about the decision-making process, the move to x86, and what ECC means.

However, he pretty much dumps his chosen hardware in our laps by the end of the article without much explanation. It almost feels rushed.

There is way more out there than Tyan; who cares what Google uses? What about dual-channel DDR? What about the fact that Xeons and newer P4s have Hyper-Threading?

He starts slow, then in a few paragraphs blurts out some mystery hardware he decided to go with, then babbles about GeForce vs. Quadro for the rest of the article.

Oh well, he's a good writer. Better luck next time.

Re:Well written, but weak article (1)

WarSpiteX (98591) | more than 11 years ago | (#5218608)

If you'd read more closely you'd realize that he specialized his system for raw FPU performance, which means Athlon, so Hyper-Threading is totally not an issue. He was constrained by a budget, and high-speed ECC DDR and a SCSI hard drive were both cut.

Re:Well written, but weak article (1)

zaqattack911 (532040) | more than 11 years ago | (#5232059)

Uuuh, of course Hyper-Threading is an issue.

His using multiple CPUs implies multi-threaded applications, so more (logical) CPUs are better than fewer. He might have found at LEAST a 20% increase in performance simply by using a P4 with HT.
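As a rough check on that "at least 20%" figure, Amdahl's law gives a ballpark. The parallel fraction and per-chip throughput factor below are illustrative assumptions, not measurements of any real workload:

```python
def amdahl_speedup(parallel_fraction, factor):
    """Amdahl's law: overall speedup when only parallel_fraction
    of the work is sped up by `factor`."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / factor)

# Assumptions: a well-threaded workload that is 95% parallel, with
# Hyper-Threading worth roughly 1.25x throughput per physical chip.
s = amdahl_speedup(0.95, 1.25)
print(f"estimated speedup: {s:.2f}x")  # -> about 1.23x
```

Under those assumptions HT lands in the low-20s percent range, so the claim is plausible; a mostly serial workload (say 50% parallel) would see far less.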

Where are the unices? (1)

Schugy (556670) | more than 11 years ago | (#5218977)

For x86, ATI and NVIDIA have Linux drivers. How about an RS/6000 with a GXT3000 GPU? (Or Sun & SGI?) Where can I find more info about production workstations running in environments with thousands of clients (automotive engineering, e.g.)?

Last Post! (0)

alpg (613466) | more than 11 years ago | (#5323985)

"In this replacement Earth we're building they've given me Africa
to do and of course I'm doing it with all fjords again because I happen to
like them, and I'm old-fashioned enough to think that they give a lovely
baroque feel to a continent. And they tell me it's not equatorial enough.
Equatorial!" He gave a hollow laugh. "What does it matter? Science has
achieved some wonderful things, of course, but I'd far rather be happy than
right any day."
"And are you?"
"No. That's where it all falls down, of course."
"Pity," said Arthur with sympathy. "It sounded like quite a good
life-style otherwise."
-- Douglas Adams, "The Hitchhiker's Guide to the Galaxy"

- this post brought to you by the Automated Last Post Generator...