SGI launches R16000 352
nkrgovic writes " SGI has just launched a new CPU - the long-expected R16000. The new CPU runs at 700MHz, has 4MB of secondary cache, and more goodies.
For now the new CPU is only used in SGI's Fuel workstations, but we should expect to see it pretty soon in SGI's Origin servers as well. With new high-density compute nodes this should make the Origins the fastest supercomputing servers per square foot."
I'm running one with IRIX! (Score:3, Funny)
That's only because your last first post (Score:2)
It runs IRIX? (Score:3, Interesting)
Re:It runs IRIX? (Score:1)
Re:It runs IRIX? (Score:4, Informative)
SGI will continue to make investments in IRIX and MIPS until it makes sense to move all of their products and customers to Linux on IA64, and that may not happen until there's something better than Linux+IA64 out there.
Linux isn't there yet for the bread-and-butter SGI customers. Neither is IA64.
People still buy IRIX boxes. (Score:2)
- A.P.
Re:It runs IRIX? (Score:3, Informative)
You can do it with OS X [uiuc.edu]...but in a swimming pool.
blakespot
"fastest supercomputing server per square foot" (Score:2, Funny)
Not a general purpose processor (Score:4, Interesting)
purpose processor like the P4 or Athlon. For specific floating-point-intensive problems, they can be quite effective. What is annoying is that they are usually two or more generations behind in manufacturing process capability. So the line widths and heat dissipation in the 3GHz P4 are much more advanced than in the R16000.
Also, SGI has an annoying tendency to use proprietary ASICs in their memory subsystem, which makes their entire system much more expensive than it needs to be. Some of this is because their design cycle is so long that when SGI committed to an architecture, the performance just wasn't there.
Given these constraints, it is hard to see how SGI could market "cost-sensitive" systems.
Re:Not a general purpose processor (Score:3, Interesting)
proprietary ASICs in their memory
If you're referring to the ccNUMA-style systems, it's not just an MMU - it's a whole different architecture for the system. They don't have a bus - they have a switch between core components as the central feature of the system.
Re:Not a general purpose processor (Score:2)
Re:Not a general purpose processor (Score:2)
Only in the same way that other manufacturers have an annoying tendency not to use Crossbow, which makes their entire systems much slower than they need to be.
Given these constraints, it is hard to see how SGI could market "cost-sensitive" systems.
It's all relative to what you want to do. Sure, SGI won't make a general-purpose desktop (although they once tried to, with the Indy). But they are competitive in the markets they sell into, which require extremely high memory bandwidth and fast, precise rendering. Sacrificing precision for rendering speed like a gamer's PC does isn't an option for CAD or medical imaging.
At the high end, things like CPU power/square foot really do matter, and SGI are competitive there too. Hopefully, the company will be able to recover from the Belluzo regime.
R16000 uses 0.13 micron (Score:3, Interesting)
This was true in the days of the R10000 and R12000. However, things began to change with the R12000A.
The R16000 uses a 0.13 micron process utilizing copper interconnects. It is indeed buzzword compliant.
SGI's R1x000 series is designed in-house these days and is fabbed for them by NEC.
err... 0.11 micron (Score:2)
Scratch that. As someone else already pointed out, the R16000 uses a 0.11 micron process. I was thinking of the R14000A.
Re:Not a general purpose processor (Score:2)
> SGI has an annoying tendency to use proprietary ASICs in their memory
Little Linux Boy, can you say bandwidth? Check out the memory bandwidth specs on a years-old O2 vs. a brand-new Intel/AMD POS. Then look at the bandwidth of a new SGI workstation vs. the promised Intel/AMD POSes.
Computer power per watt? (Score:3, Interesting)
Re:Computer power per watt? (Score:2)
The real issue is both. SGI provides 16 CPUs with a nutty high performance interconnect in just 4U of rack space. Each of the current CPUs consumes something along the lines of 12 watts. There are also the various chipset ASICs, the RAM, drives etc...
It does add up, so don't stick it in your bedroom closet... but it beats the pants off a P4 (or even P3) blade system for performance-per-watt. Especially when you factor in I/O.
Mhz Muppets (Score:4, Informative)
Disclaimer: I know I've ignored how much work can be done in an instruction, pipelining, and other features, but I'm sick of all these idiotic posts that treat MHz as anything but a meaningless indication of processor speed, like a BogoMIPS.
Re:Mhz Muppets (Score:2)
No modern processor operates without a pipeline. It may take 20 cycles for an instruction to complete on a P4, but the P4 will issue another 20 instructions (more, in fact, because the P4 is superscalar) in the meantime.
Feel free to discuss actual relevant points, like issue width, number of functional units, instruction latency and dependencies, cache misses, branch mis-predictions, bus bandwidth, cache and memory latency...
Re:Mhz Muppets (Score:2)
Also, the cache size on the R16000 is over 10 times greater than that of the newest Athlon XP, the memory access speed is faster, and IIRC, the bus bandwidth is greater. Kudos on dispelling bullshit on the pro-MIPS side, though there's a lot more in favor of P4s and such.
One thing I did find out from the AMD website (yes, I *do* do my homework before arguing) is that the Athlon XPs have some kind of instruction pre-fetch "lookahead buffer". That sounds pretty cool, and would probably see the Athlon XP to victory in terms of instruction latency and dependencies. Or at least the former; the buffer *is* just a buffer, not a dependency checker.
Re:Mhz Muppets (Score:2)
When we say the P4 has a 15-stage pipeline, that means it takes 15 clock cycles for an instruction to go from start to finish.
This does not mean we complete an instruction every 15 cycles. It's like an assembly line with 15 stations. When instruction #1 reaches station number 2, station number 1 is free to start on the next instruction.
So, for a single pipeline, we complete an instruction every cycle. The P4 can issue up to 6 internal operations, so the peak rate of a 2GHz P4 is 12 billion operations per second.
In practice, however, we rarely see average performance anywhere near the peak. This is because not all instructions take the same amount of time (divisions, for example), we can't always find enough work to fill the pipelines, and sometimes we wait on memory to fill the cache.
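The assembly-line picture above can be sketched as a toy model. This is a hypothetical single-issue 15-stage pipeline with no stalls; the numbers are illustrative, not actual P4 behavior:

```python
# Toy model of the point above: a D-stage pipeline has D cycles of
# latency per instruction, but retires one instruction per cycle
# once the pipeline is full (single issue, no stalls assumed).
def cycles_to_retire(n_instructions, depth=15):
    # The first instruction takes `depth` cycles; each later one
    # retires one cycle after its predecessor.
    return depth + (n_instructions - 1)

assert cycles_to_retire(1) == 15        # latency: 15 cycles
assert cycles_to_retire(1000) == 1014   # throughput: ~1 per cycle
# Average cycles per instruction approaches 1 as the pipeline stays full:
print(cycles_to_retire(1_000_000) / 1_000_000)  # ~1.000014
```

The same arithmetic is why pipeline depth affects latency but not steady-state throughput, which is the poster's point.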
In any case, the R16000 was originally going to be a dual core chip multiprocessor, comparable to the Power4. It seems SGI abandoned this and instead did a shrink and cache enlargement on the R14k.
The MIPS line is well designed for SGI's economic situation and workload. Any statement along the lines of "my athlon roxor's it" or "my risc ownzers your lame intel" is overly simplistic. If you know how to use a computer, one would hope you could think a bit better than that.
It's sad that many people in this thread have no idea what they're talking about. It's frightening that they still feel led to post this misinformation.
Re:Mhz Muppets (Score:2, Insightful)
For only 4-5 times the price, using a proprietary architecture...
Impressive.
Don't forget, as long as you're comparing the latest and greatest, that the P4 3.6GHz (unlike the 2.8GHz you are comparing the MIPS to) is HT. Oops, there goes the performance crown.
I wonder how many GHz Intel will be turning before MIPS manages to achieve 1. Face it, the future is in mass produced chips spinning really fast over sub-optimal instruction sets. Don't like it? Take up music or something. The R16000 is probably the last release MIPS will ever have.
SGI is dying (Score:5, Interesting)
On the other end, their HPC business (supercomputers) is being attacked from above. In that sector, price is not really a problem; it's just pure performance. And there too they are being beaten: SGI just does not have the research power that NEC or IBM have. So they are falling pretty far behind, becoming not only more expensive (which does not really matter), but more importantly much slower...
Also, on the workstation market, their desktop SUCKS; it's just a pain to use. They are still stuck in the pre-Win95 era... It might have been good compared to Win3.1 or twm, but it just is not in the same world as GNOME, KDE, WinXP or MacOSX.
Also, their other strength was their graphics boards; they invented modern 3D hardware. And for a long time the roadmap for PC 3D hardware was simple: they just had to do what SGI already had. But we have now passed the point where PC hardware actually has more features than the SGI stuff. The only difference now between the pro and game markets is the amount of RAM/cache, and those "pro" cards exist on PCs. They do cost $2000-3000, but that is nowhere near the cost of the SGI workstation that includes them...
SGI has no future. They have been losing money for years. I thought for quite a while that they were a good target for an acquisition, but now that MSFT has bought much of their patents, it might be cheaper to wait for them to go bankrupt and pick up the pieces. They were in a fast-playing game and they have gotten slow...
Re:SGI is dying (Score:2, Interesting)
riiiight, unless you think E&S and Quantum3D are selling regular pc's, enlighten me on:
- memory bandwidth
- dynamic resolution
- genlocking
- multi channel displays
- hard real-time update rates
- calligraphic lights
Of course I won't choose SGI every time I need some graphics horse-power. But if you need to get a really big job done in real time, PCs don't cut it yet.
Re:SGI is dying (Score:2)
I got most of that, but could you please explain 'calligraphic lights'?
Re:SGI is dying (Score:2)
Thanks for the impressive comment, as usual, comparing $3000 Wintel PCs with SGI, etc.
Hey, look, they don't have SSE/MMX either! 700MHz even!
argh...
Re:SGI is dying (Score:2, Insightful)
Then you're running small tasks that require little memory, little I/O, don't use much cache, and use a substandard compiler. I've got a particle simulation going right now: the Origin 300 I have access to, with 2 R14000As at 600MHz, 2MB of L2 cache, and 4GB of RAM, using the MIPSpro compiler, outperforms the dual Xeon 1.9GHz with 512kB of L2 cache using both VS and Intel's own compiler. The difference in runtime is measured in days. It's the same thing with a cluster of Athlons. (And if you run a task that isn't easily parallelized and needs to keep in sync with the others, a node crash might ruin a lot of work and force you to start over.)
here's my perspective (Score:2)
Five Nines? I don't think so... (Score:2, Insightful)
> and even do it better and especially do it much
> cheaper.
I don't buy it. With ix86 PCs, it's not just the software that's crap compared to legitimate enterprise solutions, but the hardware too. Linux is nifty and all, but it only improves the software side. The hardware is still shit.
I've used ix86 boxes from most every builder... from solidly well-built IBM machines, to crap boxes built by dell from commodity parts. Not a one of them has achieved five nines. Remember, that's only five and a quarter minutes of downtime PER YEAR. With most OSs, if you reboot two or three times, that eats up all of your downtime right there, assuming NO other problems.
ix86 boxes just are NOT up to the "five nines" standard. OTOH, I've seen more than a few Sparc, SGI, and RS6000s that can do it.
Remember... just because you CAN do something on the cheap with crap hardware doesn't mean that you should. And it doesn't mean that enterprise hardware doesn't have its place.
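The "five and a quarter minutes" figure above is straightforward to check. A quick back-of-the-envelope sketch of allowed downtime per year at a few availability levels:

```python
# Allowed downtime per year for a given availability ("N nines").
MINUTES_PER_YEAR = 365.25 * 24 * 60  # 525,960 minutes

def downtime_minutes(availability):
    # Fraction of the year the system may be down, in minutes.
    return MINUTES_PER_YEAR * (1 - availability)

for nines, avail in [(3, 0.999), (4, 0.9999), (5, 0.99999)]:
    print(f"{nines} nines: {downtime_minutes(avail):.2f} min/year")
# 3 nines allows ~526 minutes (almost 9 hours) a year;
# 5 nines allows roughly 5.26 minutes - two reboots and you're done.
```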
cya,
john
Re:Five Nines? I don't think so... (Score:2)
First, five 9s does not make the mistakes that three 9s makes.
Second, five 9s recovers smoothly from the mistakes that three 9s makes.
Third, five 9s does not let errors go by unnoticed like three 9s does.
Fourth, five 9s has a much more critical sense of what constitutes an error.
Fifth, when something does break, it has to be fixed. That counts.
Misquote from Dijkstra: "A baby crawling and a jet plane from JFK to LAX are both means of transportation".
Probably stays up 100%. That's what? About -1 or -2 9s, methinks.
Re:I/O used to be decent (Score:2)
Re:SGI is dying (Score:2, Informative)
Here are the figures you can't imagine:
32% Government & Defense
28% Science
21% Manufacturing
12% Media
7% Energy
Or looking at it another way
Servers accounted for 38% of fiscal 2001 revenues; Global services, 37%; Visual Workstations, 19% and Other, 6%
See:
http://www.sgi.com/newsroom/press_releases/2002
http://www.sgi.com/company_info/investors/prese
http://www.sgi.com/newsroom/factsheet.html
So what are the benchmarks? (Score:5, Insightful)
In my experience SGIs are slow but extremely scalable. With IA32-based machines you'd be lucky to get 4 CPUs sharing memory, unlike the 64+ you get from SGI. Very good for scientific codes, but not so hot for applications that are either not parallelizable at all, or embarrassingly parallelizable such as Seti@Home or ray-tracing a feature film.
SPEC (Score:4, Informative)
OK, there are no numbers for the 16K, but here are the numbers for the 600MHz 14K:
SPECint2000 500
SPECfp2000 529
For comparison
UltraSPARC III Cu 1.015GHz
SPECint2000 576
SPECfp2000 775
AMD XP 2800
SPECint2000 913
SPECfp2000 843
INTEL P4 2.8
SPECint2000 1040
SPECfp2000 1048
Re:SPEC (Score:2)
You can't build large node MP AMD or Intel machines, period. So it's something of a moot point.
-psy
SGI uses and other tidbits (Score:3, Informative)
But... keep in mind that it consumes far less than 20 watts of energy (and thus gives off little heat) and will eventually find itself packed in with other CPUs into Origin servers/supercomputers. The CPU bricks for the O3900, for example, have 16 CPUs in just 4U of rack space.
SGI's ccNUMA MIPS/IRIX machines are typically used for tasks that are severely I/O-bound; that is, their strong point is chugging through massive amounts of data, where raw per-node CPU power is important but not the largest factor. Somewhat like a mainframe, but with less redundancy and more CPU power.
Re:SPEC (Score:2)
competition (Score:2, Funny)
Re:competition (Score:2)
That's so very different from the capitalist companies like the US airline industry and software companies such as Mandrake, where they make money based solely on their commercial success... *cough*
SGI's reality distortion field: fully operational (Score:3, Interesting)
Oh? Quick, everyone with Radeon 9700 PRO graphics boards in your PCs, make sure you have them in tower cases, or something!
For reference, the ATI specs page states:
I guess SGI might refer to actual output precision, i.e. the RAMDAC D/A-converters... In that case, it seems they still have the edge, since the ATI boards only have 10 bits per component. Still, I think that's of lesser value than the actual precision image operations are performed at.
Re:SGI's reality distortion field: fully operation (Score:2)
Not the width of the GPU!!!!
You're confusing apples and oranges.
-psy
Re:SGI's reality distortion field: fully operation (Score:3, Interesting)
The Matrox card that does 48 (or was it 42?) is actually using a dirty hack of some sort to get that depth on Windows.
Re:32-bit color (Score:2)
Re:32-bit color (Score:2)
I have never encountered such a mode in actual use.
The 8 bits got wasted in every mode available on any card (that made any sense to use)...
Re:SGI's reality distortion field: fully operation (Score:2)
"...output precision... of lesser value than the actual precision image operations are performed at."
Not true if you're doing real imaging work. How about that fancy LCD monitor you've been eyeballing (or just picked up)? Noticed any of the color problems, especially with dark shades? No?
Then you aren't doing graphics work that needs the display accuracy of an SGI or equivalent.
Re:SGI's reality distortion field: fully operation (Score:2, Informative)
Bottom line: if you need high-precision integer colours, you still need an SGI. Of course, there aren't many people who do, and someone will probably be doing it on a PC in a couple of years, so it's looking pretty grim for SGI, as that's one of their few remaining technical advantages in the graphics workstation market.
Re:SGI's reality distortion field: fully operation (Score:2)
Re:SGI's reality distortion field: fully operation (Score:2)
Finally, someone compared a desktop GFX card optimized for gaming (not pro; I know ATI has pro-level cards, as does Matrox)... with... SGI...
Yes, folks this is history.
processor features (Score:5, Funny)
1.???
2.Profit!
3.Build new processor.
Re:processor features (Score:2)
So did the R14000. And the R12000. And the R10000...
Re:processor features (Score:2)
BUT (Score:3, Informative)
How big is the target marked? (Score:2)
It's not the highly parallel computing, like movie rendering. Clusters, usually Linux clusters, do much better on cost.
It's not most kinds of servers, which are usually I/O-bound; it's the disks, controllers, NIC and mobo (backplane) that make the server. Few of those need more than dual MP CPUs to do well.
I know roughly where the SGIs still shine. But how many really have those specific needs? Not many that I can think of.
Kjella
Re:How big is the target marked? (Score:2)
No, but those that do, have SGI.
-Brent
Whose chip is it anyway? (Score:2)
Isn't it MIPS that makes the CPUs?
(This is not sarcasm, I really wanna know.)
die size (Score:3, Informative)
Wait a minute... (Score:2)
Hey, according to intel, this processor, at 700MHz, is about 4 years old, and has no hope of competing with intel's True MHz processors!
Re:Wait a minute... (Score:2)
Re:Behind the times. (Score:4, Insightful)
I thought enough material had finally invaded the net for people to realize Mhz means nothing... I guess I was wrong.
Let's play "what if", because I don't have any facts on this processor: what if the mov operation of said processor takes 1 cycle, whereas mov on a Pentium takes 7?
Where does that put you?
Books are written on CPUs. pick one up, and you'll understand Mhz means nothing.
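The what-if above is easy to quantify: effective instruction throughput is clock rate divided by cycles per instruction, so a slower clock with a lower CPI can win. The numbers here are the post's hypotheticals, not measured figures for any real chip:

```python
# Effective throughput = clock rate / cycles-per-instruction (CPI),
# for a (made-up) workload dominated by one instruction type.
def throughput_mips(clock_mhz, cpi):
    # Millions of instructions retired per second.
    return clock_mhz / cpi

hypothetical_risc = throughput_mips(700, 1)   # 700 MHz, 1-cycle mov
hypothetical_p4   = throughput_mips(3000, 7)  # 3 GHz, 7-cycle mov
print(hypothetical_risc, hypothetical_p4)     # 700.0 vs ~428.6
# By this deliberately cherry-picked measure, the 700MHz part wins -
# which is exactly why clock rate alone tells you nothing.
```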
Re:Behind the times. (Score:2)
You mean Apple ads?
Seriously, what 'material' are you talking about? I know about SPEC [specbench.org], according to which the currently fastest CPU is the Itanium 2 1000 MHz, followed closely by the PIV 3.06 GHz. From that I would deduce that even if you've got a relatively slow CPU (in terms of computations per clock cycle), if you manage to run it at very high frequencies, you'll still have one of the fastest CPUs out there.
Re:Behind the times. (Score:2)
I thought enough material had finally invaded the net for people to recognize sarcasm.
Note the original poster's helpful use of an emoticon at the end of his post. Emoticons can signal many tones in situations where the text would otherwise be unclear.
Where does that put you?
Re:Behind the times. (Score:4, Informative)
SGI's workstation line is largely unimpressive, especially for the 99% case of computer users; hell, even engineers.
The problem is, for a small set of jobs, for a small set of people, nothing else is sufficient - at any price. You're either using an SGI, or the work isn't taking place.
That market is continuing to erode, but I don't think it will ever dissolve completely. I think eventually SGI will effectively become a US-government-subsidized entity. SGI continues to build the systems that only governments need and only government agencies can afford.
Clustering has nothing to do with the markets SGI sells in. Please don't mention it, it makes me think you don't know what you're talking about.
Do you ?
Re:Behind the times. (Score:5, Informative)
While it was the first implementation of MIPS4, and it was an FP monster, and had a huge TLB for the time, it really wasn't so hot as a general purpose CPU.
As far as "true 64 bit" in the R4000: which version of IRIX ran on the R4k with 64-bit pointers? 6.2 and 6.5 certainly don't on my IP22.
When the R3k came out, it was the first real example of a commercially FAST and successful RISC design. It was used in multiple machines from multiple companies. SGI didn't "really" up the ante again until the R10k, which was their first offering that was superpipelined and superscalar.
Finally, regarding SGI and clustering:
SGI is not price-competitive with shared-nothing clusters of PCs or Alphas. Nor is it trying to be. You probably know what the O2k/O3k systems are good at and how they differ from any other system being sold today, otherwise you wouldn't have responded to me. I think my statement is valid --- the SGI big iron solves problems that shared-nothing clusters CAN'T. Furthermore, they're so much more expensive than shared-nothing clusters that if you need shared-nothing and buy an Origin, you're silly.
64 bit support... (Score:3, Informative)
64-bit pointers were first supported with IRIX 6.0 running atop the R4000 and up.
However, certain platforms do not support 64-bit pointers. IP12/IP20 (Indigo), IP22 (Indy/Indigo2), and IP32 (O2) are among those that don't. This is due to memory constraints and other assorted issues.
Most, if not all, Onyx and Challenge (L and XL only) machines support 64-bit pointers with IRIX 6.0 and up.
Onyx2, Origin, Octane, and Fuel certainly do.
Re:Behind the times. (Score:5, Funny)
True, but your architecture still sux
Re:Behind the times. (Score:5, Funny)
I guess that makes it faster than my car. :)
Re:Behind the times. (Score:2)
Re:Behind the times. (Score:4, Informative)
You must remember, the R16000 is 64-bit, not 32-bit.
Also, it has 4000k of L2 cache, not 256k or 512k.
Also, out-of-order instruction execution, x86 chips can't do this.
you are trying to compare two things that are completely different.
Re:Behind the times. (Score:5, Informative)
For the record, the R10000 series can run either 32-bit or 64-bit code. All other things being equal, the 32-bit version of a program will run faster than the 64-bit version; you can fit more 32-bit ints into cache at once than 64-bit ints, so the 64-bit version of a program generally suffers more cache misses than its 32-bit counterpart.
On an SGI box, you don't compile for 64-bit unless you absolutely have to address more than 2 GB of virtual memory.
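The cache argument two paragraphs up is just arithmetic: pointers twice as wide mean half as many fit in each cache line, so the same working set touches twice as many lines. The 128-byte line size below is an assumption for illustration (R10000-family L2 lines are configurable; check your exact part):

```python
# How many values fit in one cache line, 32-bit vs 64-bit data.
LINE_BYTES = 128  # assumed L2 line size; varies by configuration

def values_per_line(value_bytes, line_bytes=LINE_BYTES):
    return line_bytes // value_bytes

assert values_per_line(4) == 32   # 32-bit ints/pointers per line
assert values_per_line(8) == 16   # 64-bit ints/pointers per line
# The same pointer-heavy working set in 64-bit mode occupies twice
# the cache, hence more capacity misses -- the reason to stay 32-bit
# unless you need the >2GB address space.
```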
Also, it has 4000k of L2 cache, not 256k or 512k.
That's pretty puny for an SGI. The processors they use in the Origin servers have typically been equipped with 8 MB of secondary cache; the 4 MB version must be just for the workstations, to keep costs manageable.
you are trying to compare two things that are completely different.
On this point, however, you're 100% correct.
Re:Behind the times. (Score:2)
Of course, you're right. I don't know what I was thinking. I mean to say "pointer" but typed "int" anyway. Oops.
Re:Behind the times. (Score:2, Informative)
"Also, out-of-order instruction execution, x86 chips can't do this."
Bull.
x86 has done this since the introduction of the Pentium Pro.
Re:Behind the times. (Score:4, Informative)
You, sir, are almost completely uninformed. The R16000 is an R10000 variant, just like the R12000 and R14000 before it. It is not a vector processor, and has no vector units. The R16000 is, furthermore, a desktop processor in its own right, because it's currently being used in the Fuel workstation.
Incidentally, SGI divested itself of Cray some time ago. Cray was bought by a company called Tera Computing, which then changed its name to Cray. They're building the SV2 vector supercomputer now, using their own processors, and they also have an arrangement with NEC to market the SX-6 in the United States with a Cray logo, but that's strictly a resale agreement.
Re:Behind the times. (Score:2)
erm. Slight update on the Cray thing.
They also sell the MTA [cray.com], a hardware threaded architecture - from the Tera days - and the SV2 is now called the X1 [cray.com]. They are also doing an AMD Opteron derived one-off system [cray.com] for Sandia National Laboratories [sandia.gov]. Though, from what I am hearing, it might not be a one-off system - they're considering productizing it.
Re:too little too late (Score:2, Informative)
Re:too little too late (Score:4, Insightful)
Come on, people. You all root for the Athlon when it is clocked well under the P4, yet you believe that SGI's MIPS line is crap because it tops out at 700MHz???
Sun's UltraSPARC III Cu tops out at 1.05Ghz last I checked. Does that mean that the P4 at 3Ghz stomps the hell out of it? If you said yes, you are a fucking idiot.
People, the Unix world is far, far different from what you are used to in PC land. High-speed backplanes, dedicated busses, huge amounts of L1 cache, insane L2 cache, incredibly efficient CPU designs (where 1 clock per instruction is pretty much the norm and cache misses don't occur every 3 operations), hot-swap damn-near-everything, upwards of 72 processors and 288 GB of RAM...
It all adds up to a fucking badass machine that smacks the piss out of any PC on the planet when it comes to getting its job done. Don't compare apples to oranges. The applications these machines are designed for do not include Quake 3. The benchmarks you have memorized don't mean a damn thing in this realm, so go back home.
Getting back to the article, I'm glad to see SGI coming out with a new CPU. I still see a few SGIs in the wild now and again. If they lock down IRIX a bit more security-wise and expand their target market, they might be a decent competitor for Sun within the next 10 years. I don't see them winning any shining-star awards right off the bat, but if they are persistent they'll do alright in the long run.
Re:too little too late (Score:5, Insightful)
Re:too little too late (Score:2)
SGI, with its ccNUMA architecture, excels at handling problems that need the performance boost of multiple processors but require a single system image because they are inherently only poorly parallelizable. These are usually problems like 3D seismic modelling, crash-test simulations for vehicle manufacturers, and CAT-scan visualization in medical computing. This is the sort of computing that would slag a cluster of Athlons into a puddle of glowing silicon if the Myrinet cables didn't melt first. Origins handle them with ease.
Re:too little too late (Score:2)
changes afoot (Score:2)
Hear Hear. I'm also happy to see SGI pushing some new kit. It sounds like they've been quite busy lately. Rumor has it there are even some revolutionary (not simply evolutionary) MIPS cpu changes due soon.
IRIX security isn't too bad, it's certainly way better than it was just a couple years ago. If you dig around the software section of their website you'll see that they've even been working with the IPFilter author on some pretty serious IRIX packet filtering.
A lot of us out in academia/research hope SGI decides to drop their per-CPU price soon. Their individual CPU performance is still pretty decent, but certainly not cutting-edge. It's their I/O and throughput that's amazing... and we'd like to make better use of that. Shucks, the IRIX kernel can easily support 512 CPUs in a single machine (1024 if you use the IRIX XXL kernel). It's been tweaked every which way. But as it stands, we can't afford more than a 64-CPU machine. Still pretty nice, though. Even when working on a 6-CPU job, our (already somewhat old) Origin 3000 stomps all over our Myrinet-based cluster for anything that uses a significant amount of I/O. When shared memory is involved, the differences are even greater! (To compare, the newest Myrinet interconnect is 4 Gbit/sec full duplex... SGI's NUMAlink3 is 25.6 Gbit/sec [3.2 GByte/sec].)
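The interconnect figures quoted above are raw link rates; converting gigabits to gigabytes makes the gap easy to compare (this ignores protocol overhead, which differs between the two):

```python
# Convert raw link rates from gigabits/sec to gigabytes/sec.
def gbit_to_gbyte(gbit_per_sec):
    return gbit_per_sec / 8

myrinet   = gbit_to_gbyte(4.0)    # 0.5 GB/s per direction
numalink3 = gbit_to_gbyte(25.6)   # 3.2 GB/s, matching the bracketed figure
print(numalink3 / myrinet)        # NUMAlink3 has 6.4x the raw link rate
```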
I'm looking forward to working with the new MIPSpro compilers too. Our SGI sales rep is supposedly going to bring the newest version and some demo licenses soon.
Re:too little too late (Score:4, Informative)
Two way systems are not data center solutions that IBM, Sun, and SGI are competing for with this kind of hardware.
Even if they were, you're ignoring the fact that you cannot physically pack as many CPUs with Intel or AMD as with MIPS, Power4, or Sparc into a chassis. Part of the reason they are clocked slower is because you need to balance heat management with performance density when you're dealing with the big servers.
These boxes are about aggregate compute and storage power per dollar, not about whether the individual CPU cores smoke. The only place you see these cores as singletons is workstations (Single-cored "servers" are usually just the same or similar motherboards as a workstation, but in a case that has a beefier power supply and room for a useful number of hot-swap cages.)
You try and pack 32 Intel cores at 3GHz into a chassis that will handle the same number of MIPS cores, and the only thing you're going to get is voltage underflow from an overloaded power supply. Beef up the power supply, and within minutes you're going to be getting that wonderful whiff of frying, overheated electronics.
Raw performance of a core is only one factor in engineering a complete server. Anyone who claims otherwise has clearly not been involved with the hardware end of this industry.
Re:too little too late (Score:2)
Blades are clusters-in-a-box, not integrated SMP systems. Like clusters, they hide the fact that you're dealing with multiple boxen, and don't have the shared system image and devices of a large system chassis.
Assuming those are 2-CPU blades, you'd need 16 of them to equal a 32-core system chassis, and you would still need to add RAID arrays to the rack (unless a virtual SAN will do for your application -- it won't for large database servers).
Bottom line is you need to know what the system is going to be used for, and compare the features that support those needs. Even identically-cored systems based on SMP vs. cluster vs. blade are going to have radically different performance characteristics and benefits for different uses.
Blades are great for things like web server hosting, where you want a lot of isolated processes. Clusters are good when you need shared storage, but don't need shared memory. SMP chassis can handle all of the above, plus deal with the large IO caches and shared memory that database and application services require, but at a higher dollar/benchmark cost than the first two. (Not surprising -- SMP backplanes require far more complex engineering than clusters or blades of off-the-shelf SMP systems relying on GNet or other non-backplane interconnects.)
Re:too little too late (Score:2, Interesting)
As you can guess from the above post, I don't like the x86 architecture, ugly_hack(){ ugly_hack(); }. There's something to be said about elegance in the design of a processor.
Re:too little too late (Score:4, Interesting)
The N64 did well as a system, and had far more power than the PlayStation. The PlayStation just did incredibly well.
Hollywood is a city, not a company. I am assuming you are talking about 3D and compositing visual-effects studios, of which a few are near Hollywood, California. They aren't going to BSD; they are going to Linux, not just for rendering, but for workstations. IRIX is Unix, and that makes it a very flexible choice for an OS. Because Linux is so similar, it is also flexible and powerful.
The N64 was what? (Score:2)
On another note, I'm not even going to begin to comment on your thoughts on clock speed etc. I'm sure everyone else will flame you over the whole Megahertz Myth®.
hard to pirate cartridges (Score:2)
This is not because people are inherently criminal [although the something for nothing element can't be denied] but because for most casual gamers £40 for a game they may only play once is just too much.
Here we have a folder with literally hundreds of copied ps2/xbox titles. 99% of them don't get played for more than a couple of hours on the day they got downloaded. [the pile of non-pirate games is larger than most people's collection too]
Re:The N64 was what? (Score:2)
But I notice that pre-rendered video is going away in video games. I think Nintendo was right on the pre-rendered video thing. It pulls you out of the game instead of keeping you immersed in it.
I don't think that the capacity of the media had anything to do with how long games were back then. FF7 could have easily fit onto one disc, or hell, even an N64 cartridge, if it weren't for all those cut scenes. But they just had to put 10 minutes of uncompressed video onto every disc.
Oh well, I'm on a rant. I think the N64 was a great system, and the choice not to move from a cartridge system was well founded at the time. Load times on a 2x CD-ROM sucked. (-1 Offtopic, here I come.)
Re:too little too late (Score:2)
Are you kidding? The reason that the PS seemed 'smoother' than the N64 was that it had almost no graphic features turned on. Don't believe me? Fire up Ridge Racer and watch the road beneath you. Notice that it turns all zig-zaggey when it gets close enough to you? That was one of the limitations of the PS hardware. It wasn't doing anywhere close to the number of calculations per pixel that the N64 was doing.
The real reason that the PS appeared 'smoother' was that it used minimal graphics tricks and pumped around 300k triangles on the screen. The N64 had all the features turned on and was getting around 100k triangles. So the result was that the N64 had fewer triangles to work with, but much MUCH better texture quality.
As for being a nail in SGI's coffin, I agree with you, but not for the reason you suggest. The N64 was both quite powerful and quite popular. The PS may have done better, but that doesn't mean the N64 didn't do well. The SGI processor did just fine, but they pretty much designed themselves out of business. Why are they charging a premium for their hardware when a slimmed-down version can be crammed into a $150 box? In reality, that may or may not have directly affected their credibility. But it did significantly lower the value of the effects they were able to accomplish. Suddenly consumer hardware can do what SGI does. Hrmm. Why do I want this expensive box again?
"PC's running bsd are still a far greater value than expensive sgi hardware."
No argument here. Though I believe SGIs have their place, I think your comment's right on the ball. SGI isn't doing enough to wow customers. Let me give you an example: when they launched the Intel-based NT line, the only real major difference between that machine and any other PC on the market was that it had a much faster bus between the RAM and the graphics chip. The problem is, what do you do with that when everything's designed around a 1x AGP bus? (This was 2-3 years ago.)
My company has a particular application today (but not back when we had the machines) that needs to get ludicrous amounts of data to the graphics processor, but that's a very specific need. Not something you can build a whole company around.
They should have done more than just the fast graphics bus if they were going to cater to the Wintel crowd.
So yeah, I basically agree with what you said, but your details about the N64 were significantly wrong.
Re:too little too late (Score:2)
$10,000 and up, but who's counting?
Re:faster than anything you have used. (Score:2, Insightful)
So wrong (Score:2)
-psy
Re:Why only 700Mhz? (Score:3, Interesting)
Have you read the SPECint and SPECfp results posted above? The Pentium 4 runs at more than four times this CPU's clock speed, yet scores only about twice as high. Talk about good CPU design.
You should also keep in mind that SGI has some ass-kicking technology when it comes to CPU and memory interconnect. NUMAflex makes it possible to have a penalty as little as 1.5 vs. 1 for memory accesses outside the local RAM banks. Now try doing that with commodity x86 hardware. For problems that aren't easily broken down into small parts and that have huge datasets, nothing touches SGI.
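To get a feel for what a 1.5:1 remote-access penalty means in practice, here's a back-of-the-envelope sketch. The 1.5 factor is from the post above; the helper function, the 100 ns local latency, and the 20% remote-access mix are my own hypothetical numbers, not SGI specs:

```python
def effective_latency(local_ns, remote_penalty, remote_fraction):
    """Average memory latency on a NUMA machine, given the local
    latency, the remote-access penalty factor, and the fraction of
    accesses that land outside the local RAM banks."""
    remote_ns = local_ns * remote_penalty
    return (1 - remote_fraction) * local_ns + remote_fraction * remote_ns

# With a 1.5x remote penalty and 20% of accesses going remote,
# average latency rises only 10% over a purely local workload.
print(effective_latency(100, 1.5, 0.2))  # -> 110.0
```

The point: with a penalty that small, even a workload that misses its local banks a fifth of the time barely notices, which is why huge shared-memory datasets stay practical.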
Kudos to the SGI engineers for their great job.
A long time SGI fan :)
Re:Why only 700Mhz? (Score:2)
CISC is even more of an MHz increaser, because the decoding of the instruction gets chopped up into many pipeline stages too...
Re:IN SOVIET RUSSIA (Score:2, Interesting)
I've seen two jokes with Soviet Russia now. And I'm not laughing. Can someone let me in on the inside joke here?
Re:In Soviet Russia, mackstann teaches you (Score:2)
typical headline reads:
Noun Verb Noun
aka subject and predicate
i.e.:
IBM announces some spiffy thing
Alan Cox washes his nuts
SO, the whole soviet russia thing, wherever the hell it came from, goes like this. they exchange the first noun with "you", flop the nouns around, and prepend the headline with "in soviet russia".
so, here we go:
SGI launches R16000
- In Soviet Russia, R16000 launches you!!!!
cant believe i wrote this post.
Another hot SGI box... (Score:2)
Octane workstation
24" HD monitor, 21" monitor
dual R12000 @ 400 MHz
two internal scsi drives
internal DDS4 tape drive
two XIO gfx cards
fibrechannel XIO gfx card w/ external ciprico fibre raid
video capture XIO card
scsi pci card w/ assorted external drives
two weirdo data capture pci cards
Oops, now that I think of it, he does have sort of a mod... he bought an LED lightbar from reputable.com to replace the incandescent bar after it burnt out.
The machine is used pretty heavily to analyze video signals from various bits of broadcast and closed-circuit sources.
Another odd tidbit... he runs a much older version of IRIX 6.5.x, not the more recent 6.5.17 or 6.5.18. (IRIX and its applications and freeware CD sets are updated quarterly). Does the job, I guess, so no major reason to upgrade.
One more time... (Score:2)
As time went on, SGI noticed that the MIPS market was fragmenting... high end R1x000 series CPUs for workstations and supercomputers and low end embedded cpus for the consumer market. So SGI spun off MIPS, Inc but kept the R1x000 for itself.
These days MIPS Inc has nothing to do with SGI. And SGI's R16000 etc have nothing to do with MIPS Inc. I believe NEC fabs the R1x000 series for SGI.
Re:Too little too late?? (Score:2)
It lost to Windows, believe it or not. Lots of FX houses have sprung up in recent years running Lightwave on Wintel platforms. Are they as fast? Nope, but they're much cheaper. And that's what counts in the TV FX industry.
I'm not saying you're wrong, though, just adding a little more info to what you said. Today Linux is getting widespread use all over the graphics world. Maya runs well on it and is the program of choice. Want to render faster? Throw on another render node.
I doubt SGI's even on the radar with 3D people anymore. If they want to cater to that market, what they'll have to do is create hardware that's really really really tuned to rendering and gets the job done much faster. I doubt they're really going that way though. If Nvidia keeps up with their support of the Cg programming language, we may find ourselves upgrading video cards instead of processors. That will be an interesting day.
Re:Here's my take (Score:2)
You've got me considering doing the same again--every year or so I think of grabbing an old SGI. I used to lust after the Indy for home in the early '90s.
Would add it to the collection [blakespot.com].
blakespot
Re:Here's my take -- SOLD (Score:2)
After doing some digging, several things became clear:
- For a little more $$ than you'd need to spend for a used Indy (maybe $150 more on eBay), you can get a used O2 or Octane which are both much more powerful and viable today.
- The Octane is notably more powerful than the O2, but the market is flooded with them so the Octane is oddly cheaper than the O2. It's also much louder and larger than the O2--less of a "personal workstation" (see O2 and Octane photo here [post-logic.com] -- O2 on left)
So I have just grabbed an O2 with:
- R10000 CPU @ 175MHz, 1MB L2 cache
- 256MB RAM (unified memory architecture)
- 4GB HD
- A/V module (audio & video in & out)
- O2 cam
- keyboard / mouse
So I shall add to the list [blakespot.com] my first IRIX machine. Hope my OS X box does not get jealous...
blakespot
Re:what about? (Score:2)
I guess that makes moderators insane...
(or too lazy to understand the thread)