The Story Behind a Failed HPC Startup

kdawson posted more than 4 years ago | from the build-it-and-they-will-come-if-you-don't-run-out-of-money-first dept.

Supercomputing

jbrodkin writes "SiCortex had an idea that it thought would take the supercomputing world by storm — build the most energy-efficient HPC clusters on the planet. But the recession, and the difficulties of penetrating a market dominated by Intel-based machines, proved to be too much for the company to handle. SiCortex ended up folding earlier this year, and its story may be a cautionary tale for startups trying to bring innovation to the supercomputing industry."

Lesson learned (1, Insightful)

Anonymous Coward | more than 4 years ago | (#29970680)

Don't try anything new.

Re:Lesson learned (2, Interesting)

khallow (566160) | more than 4 years ago | (#29970792)

Don't be unlucky. At least, that's what the story is about.

More seriously, it looks like they were trying for high end supercomputing. There's probably a lot more money in smaller supercomputing clusters, but then they'd get hit hard by the proprietary structure of their hardware.

Re:Lesson learned (0)

Anonymous Coward | more than 4 years ago | (#29971098)

The lesson is to find VCs who understand you may just be tilting at windmills, and who have the guts to ride it out.

"For a company like that entering an established market, it takes time to get a good footprint and the right kind of profitability," Conway says. In this case, "investors were not willing to wait the normal amount of time that in a healthier economy a company like SiCortex would have had."

Re:Lesson learned (1)

khallow (566160) | more than 4 years ago | (#29971438)

VCs often fail when economies go bad. The same thing happened in 2000-2001. To be honest, I'd have expected the VCs to fail sooner, say in winter 2008 or spring 2009. SiCortex actually got some good mileage out of the VCs.

Re:Lesson learned (3, Insightful)

jd (1658) | more than 4 years ago | (#29972552)

Having worked in one HPC startup (Lightfleet), I can say that one of the biggest dangers any startup faces is its own management. Good ideas don't make themselves into good products, or turn themselves into good profits through sales. Good ideas don't even make it easier - you only have to look at how many products are both defective by design AND sell astronomically well to see that.

I can't speak for SiCortex's case, but it looks to me like they had a great idea but lacked the support system needed to get very far in the market. It's not a unique story - Inmos didn't fail on technological grounds. Transmeta probably didn't, either.

Really, it would be great if there could be some effort into examining the inventions of the past to see what ideas are worth trying to recreate. For example, would there be any value in Content Addressable Memory? Cray got an MPI stack into RAM, but could some sort of hardware message-passing be useful in general? Although SCI and Infiniband are not extinct, they're not prospering too well either - could they be redone in a way that didn't hurt performance but did bring them into the mass market?

Then, there's all sorts of ideas that have died (or are dying - Netcraft confirms it) that probably should be dead. Bulk Synchronous Processing is fading, distributed shared memory is now only available in spiritualist workshops, CORBA was mortally wounded by its own specification committee and parallel languages like PARLOG and UPC are not running rampant even though there are huge problems with getting programs to run well on SMP and/or multicore systems.

Re:Lesson learned (1)

cheesybagel (670288) | more than 4 years ago | (#29975514)

Dunno about the other ones, but distributed shared memory and parallel languages are being funded in the IBM PERCS DARPA program, while CORBA was basically replaced by web services.

Re:Lesson learned (1)

onionman (975962) | more than 4 years ago | (#29978660)

I would argue that a big "lesson learned" for hardware innovators is that your awesome new hardware needs a total ecosystem it can be easily used in. This means a good, cheap development tool chain, and preferably a port of Linux and some major applications.

For example, let's consider Itanium. Despite all the naysayers, the Itanium architecture is beautiful and has tremendous potential, but there isn't any good compiler support for Itanium. (Some would argue that good compilers for Itanium can't even be written, but I think that point is contentious.) So, ignoring the cost overruns and project delays, Intel still ended up with a mighty processor that almost no one can use effectively.

On the other side of the spectrum, look at the AVR. This is a tiny little 8-bit controller, but it was released with a good free compiler, a great simulator, and other development tools. The whole development setup was cheap enough to play with at home, and ended up being used in many academic settings. The result is a wildly popular (and profitable) product.

I'm sad to see SiCortex fold, because I think their MIPS based approach was really cool, but it looks like they just didn't have a broad enough ecosystem to have their equipment widely deployed. Too bad.

Re:Lesson learned (1)

flaming-opus (8186) | more than 4 years ago | (#29977740)

I'd like to agree with this one. The bulk of the market is in the low end, but the low end is going to be reluctant to embrace anything unusual. SiCortex uses MIPS processors, which means you can't use your off-the-shelf applications. Even if the rack of Intel blades uses more power and takes up more space, a low-end cluster still isn't that large, or that power-hungry. You're not talking about building a whole new building to house the machine.

The high end, where custom applications rule, is more likely to embrace a custom architecture; Cray vector, IBM Power, and Itanium still play in this arena. However, the largest SiCortex machine really can't play in the big leagues. 5000 low-power MIPS processors is a pretty modest cluster, even if the network is good. The big leagues also mean you're dealing with the established HPC customers, who are very demanding on the software and service front.

The low end has a lot of market, but the competition is fierce and the margins small. The high end requires a lot more infrastructure than an 80-person company can provide. In all cases, developing a new processor is very expensive. Intel and AMD spend billions of dollars designing each generation of chips, and have the tools to build them with full custom logic instead of ASIC designs. Once SiCortex invests all that money in designing the processors, they still have to build a machine around them. Then you have to build a software stack and service organization. Then you have to sell the thing into a competitive marketplace.

Tough row to hoe.

The low end is a larger market.

Re:Lesson learned (2, Insightful)

sopssa (1498795) | more than 4 years ago | (#29970840)

The thing is, industries like these are already really, really dominated by single players and everyone uses them. It's the same with Windows too - its own market share keeps it holding that market share. In the airplane industry, all the European companies had to merge so that they could compete with Boeing.

When something becomes like a standard, it's really hard to break in.

Re:Lesson learned (2, Insightful)

serviscope_minor (664417) | more than 4 years ago | (#29971254)

Single player? Have you looked at the top 100? It's equal parts Intel (x86), AMD (x86) and IBM (Power-related), with a smattering of others: Cell (mostly SPU), Itanium, SPARC, and NEC.

There's certainly no dominant player, and not even much of a dominant instruction set. The thing is, supercomputers are so expensive and unique that porting to a different instruction set is usually the least of the work, except for Roadrunner, which is fast but rather hard to use.

Re:Lesson learned (1)

flaming-opus (8186) | more than 4 years ago | (#29977846)

Looking at the top 100 is pretty misleading, however. The TAM for a low-end cluster is still several times larger than the market for massive supers. A very small number of customers can still adapt to weird architectures; everyone else uses x86 + Linux. Also, just about everything non-x86 has failed to gain much market, apart from IBM. IBM manages to keep this going by sharing their development costs with non-HPC products. Cell is a video game processor; Power6 is the foundation of their database servers; Blue Gene is a close derivative of their embedded-systems IP.

I'd call the high end of the market a duopoly of IBM and x86 (mostly Intel; AMD mostly because of Cray). The mid-range and low end: all x86.

Re:Lesson learned (0)

Anonymous Coward | more than 4 years ago | (#29971416)

In the airplane industry, all the European companies had to merge so that they could compete with Boeing.

In Socialist Europe, airline industry companies merge, get government subsidies, and move to China.

Re:Lesson learned (1)

cheesybagel (670288) | more than 4 years ago | (#29975646)

How is that any different from what happened in the US? I still remember how the US government pushed consolidation in procurement to allegedly reduce costs (yeah, right). Remember Vought? Douglas? McDonnell? Rockwell? Grumman? Fairchild? Republic? Boeing gets subsidies to open plants and move their HQ, and outsourced parts for the 787 from basically everywhere. When they needed to make the Dreamlifter cargo plane they went to a Taiwanese company...

Re:Lesson learned (4, Insightful)

Jacques Chester (151652) | more than 4 years ago | (#29973702)

They didn't die because their customers abandoned them for something cheaper. They died because they had a cashflow crisis due to investors pulling out of a planned round of fundraising. They had millions of dollars of sales in the pipeline.

The lesson isn't "Don't compete with Intel", it's "When you run out of money, you're out of business". Or perhaps, "The financial crisis killed lots of otherwise sound businesses". Luck, as the OP pointed out, played a large part.

THE MESSAGE IS CLEAR (0)

Anonymous Coward | more than 4 years ago | (#29970986)

SAVING ENERGY HAS FAILED!

Re:Lesson learned (2, Insightful)

tphb (181551) | more than 4 years ago | (#29971466)

Lesson learned: there is no market for proprietary CPUs on MPP supercomputers. It's gone. If Cray and SGI couldn't do it, how are a couple guys from DEC and Novell going to pull it off?
It's always sad when someone's dream fails, but come on, guys. You're pursuing a 15-years-ago market, just like DEC and Novell did when they died (okay, Novell exists, but it is irrelevant).

Supercomputers are increasingly commodity processors in commodity boxes running commodity open-source software. A supercomputer running slower processors is not going to cut it.

Re:Lesson learned (1)

proxy318 (944196) | more than 4 years ago | (#29972670)

"You tried your best and you failed miserably. The lesson is 'never try'. " - Homer

Re:Lesson learned (0)

Anonymous Coward | more than 4 years ago | (#29974468)

No one can stop the x86 train! not even Intel!!

1 down (2, Informative)

Locke2005 (849178) | more than 4 years ago | (#29970688)

Lightfleet [lightfleet.com] soon to follow. How is the company that was using Transmeta chips doing?

Re:1 down (2, Insightful)

fm6 (162816) | more than 4 years ago | (#29971278)

Orion? Long gone.

http://www.theregister.co.uk/2006/02/14/orion_shuts_down/ [theregister.co.uk]

The weird thing here is that the Register quotes Bill Gates as calling Orion's deskside supercomputers part of a "key trend". Now, I've always thought Bill's understanding of the marketplace was overrated. But you'd think that somebody whose immense fortune comes almost entirely from the triumph of commodity processors would know that this kind of effort is doomed.

Some people are just in love with these fancy RISC architectures and stick with them in the face of their total failure in the marketplace. When I was at Sun, the Sparcophiles would quote impressive raw numbers for SPARC architectures, even trying to sell them to people who already had a solid commitment to commodity systems. And yet every single Sun product in the HPC Top 500 runs Intel or AMD!

Re:1 down (1)

Locke2005 (849178) | more than 4 years ago | (#29971820)

Yes, Orion, thanks, I couldn't remember the name. Insisting on basing an HPC on a chip that achieves low power by throttling back performance under high demand probably wasn't a smart choice either. Although at least with an Orion, you could develop your app on your desktop PC.

Re:1 down (1)

fm6 (162816) | more than 4 years ago | (#29973424)

You're thinking of Transmeta's LongRun technology, which reduces clock speed (and thus power consumption) when the system isn't working hard. Similar features are actually quite standard in the current generation of CPUs. There's no impact on performance, because when the system's busy, it's always running at maximum speed. There is some hassle with system software that gets confused when the CPU it's running on goes into idle mode.

Most of the supposed power savings for a Transmeta CPU comes from the fact that its fundamental instruction set is not x86-compatible. Instead, it's a VLIW [wikipedia.org] architecture that greatly reduces the transistor count. The instruction set is optimized to support software emulation of other instruction sets. The x86 emulation is best known, of course, but you can also get Java byte code support.

My first thought about Orion was that it used the native Transmeta instruction set. But no, all my sources say it's x86-compatible. It might seem strange to do HPC on an x86 emulator, but the emulator includes "code morphing" technology which optimizes the use of native code at run time. (Most Java virtual machines now do something similar.) In theory, you can get pretty good performance that way. In practice, it seems to have been a little disappointing.
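
For the curious, the modern descendant of LongRun-style scaling is exposed through the Linux cpufreq interface. Here is a minimal sketch, assuming a Linux box with cpufreq enabled; the sysfs path is the standard one, but it may be absent on some kernels, so treat this as illustrative:

    /* Read cpu0's current clock via the Linux cpufreq sysfs interface. */
    #include <stdio.h>

    int main(void)
    {
        const char *path =
            "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq";
        FILE *f = fopen(path, "r");
        unsigned long khz;

        if (!f) {
            perror("cpufreq not available");
            return 1;
        }
        if (fscanf(f, "%lu", &khz) == 1)
            printf("cpu0 is currently at %.2f MHz\n", khz / 1000.0);
        fclose(f);
        return 0;
    }

Watch the reported frequency drop when the machine goes idle - that is essentially the trick LongRun performed, now done by the OS and CPU together.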

Re:1 down (1)

jd (1658) | more than 4 years ago | (#29972636)

I intensely dislike saying bad things about companies I've worked for, but it's fair to simply say outright that Lightfleet is (for all practical purposes) clinically dead. It is possible that it could be revived, I suppose. Some of the early design work was ingenious and has a lot of merit. At this time, though, Count Dracula has better vital signs.

Low Power Supercomputer (0)

jameskojiro (705701) | more than 4 years ago | (#29970810)

Why not use something based on the Atom chip, but massively parallel?

You can create something that is unique but you reduce the buying base of your systems....

I would like to see a supercomputer based on laptop-type low-power components.

Re:Low Power Supercomputer (3, Insightful)

Wrath0fb0b (302444) | more than 4 years ago | (#29971196)

Why not use something based on the Atom chip, but massively parallel?

You are probably one of those guys that thinks that if you can get 36 women working together on making a baby, it will be ready in 1 week.

Not all problems can scale out to many CPUs (or wombs, for that matter). Threading overhead, network latency/bandwidth, and mutual exclusion (or the overhead of atomic data types) all conspire to defeat attempts to scale. This is, of course, if your problem is even amenable to straightforward parallelization in the first place -- many problems (for instance, Monte Carlo lattice simulations) are excruciatingly hard to scale to even 2 CPUs.
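
The standard way to quantify that intuition is Amdahl's law: with a parallel fraction p and n workers, the best possible speedup is 1 / ((1 - p) + p/n). A toy calculation, where the 95% parallel fraction is an assumption picked for illustration, not a measurement:

    /* Amdahl's law: a 95%-parallel code tops out at 20x speedup,
     * no matter how many cores (or wombs) you throw at it. */
    #include <stdio.h>

    int main(void)
    {
        double p = 0.95;                    /* assumed parallel fraction */
        int workers[] = { 1, 8, 64, 512, 4096 };

        for (int i = 0; i < 5; i++) {
            double s = 1.0 / ((1.0 - p) + p / workers[i]);
            printf("%4d workers -> %5.1fx speedup\n", workers[i], s);
        }
        return 0;
    }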

In my own (informal) tests on our HPC cluster (x64, Linux, see my post above for details), I concluded that you need to be able to discretize your work into independent (and NONBLOCKING) chunks of ~5ms in order to make spawning a pthread worth it. Of course, "worth it" is a relative term -- some people would be glad to double the CPU time required for a 25% reduction in wall-clock time while others might not, so I'll concede that my measurement is biased. IIRC, I required a net efficiency (versus the single-core version) of no worse than 85% -- e.g. spend less than 15% of your CPU time dealing with thread overhead or waiting for a mutex. This was for 8 cores on the same motherboard, by the way; if you are spawning MPI jobs over a network socket, expect much, much worse.
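
For anyone who wants to reproduce that kind of informal measurement, here is a minimal sketch (POSIX threads, compile with -pthread). The ~5ms threshold above is the poster's figure; this only shows how to estimate the spawn-and-join side of the trade-off:

    /* Time the cost of creating and joining a pthread that does no
     * work, to estimate the minimum chunk size worth offloading. */
    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>

    #define ITERS 1000

    static void *noop(void *arg) { return arg; }

    int main(void)
    {
        struct timespec t0, t1;
        pthread_t tid;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ITERS; i++) {
            pthread_create(&tid, NULL, noop, NULL);
            pthread_join(tid, NULL);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double us = ((t1.tv_sec - t0.tv_sec) * 1e9 +
                     (t1.tv_nsec - t0.tv_nsec)) / 1e3 / ITERS;
        printf("create+join overhead: %.1f us per thread\n", us);
        return 0;
    }

A work chunk should dwarf that per-thread number (plus any lock waits) before a spawn pays off.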

Re:Low Power Supercomputer (1)

Cryacin (657549) | more than 4 years ago | (#29971702)

You are probably one of those guys that thinks that if you can get 36 women working together on making a baby, it will be ready in 1 week.

And I would have gotten away with it if it weren't for those meddling kids!

Re:Low Power Supercomputer (2, Interesting)

The Archon V2.0 (782634) | more than 4 years ago | (#29972338)

You are probably one of those guys that thinks that if you can get 36 women working together on making a baby, it will be ready in 1 week.

It'd certainly be fun trying, though. ;)

Not all problems can scale out to many CPUs (or wombs, for that matter). Threading overhead, network latency/bandwidth, and mutual exclusion (or the overhead of atomic data types) all conspire to defeat attempts to scale.

It's not my skill set, but I remember years ago seeing a fascinating show on how blindly adding more resources can make something SLOWER. To translate the case study (editing individual segments for a news show on limited editing equipment) into geek speak: they demonstrated that unless you do things right, you might wind up with cores 2-8 zipping through their parts just to wait for core 1, which has unceremoniously had all the long tasks scheduled onto it, because the scheduling algorithm was designed with only a dual- or quad-core system in mind and gets stupid when handed more.

Really, I got the feeling from that show that trying to make multiple interrelated units work together on a single task without bottlenecks or downtime is a logistical nightmare no matter if it's people in a company, robots in a factory, or cores in a computer.

Re:Low Power Supercomputer (1, Funny)

Anonymous Coward | more than 4 years ago | (#29972576)

You are probably one of those guys that thinks that if you can get 36 women working together on making a baby, it will be ready in 1 week.

I think I'd be sore for a week, and have 36 kids 9 months later.

Re:Low Power Supercomputer (1)

jd (1658) | more than 4 years ago | (#29972776)

Low-power CPUs could be useful if you've got the bandwidth. Ultimately, though, you're limited by how parallelizable the problem is - a fundamentally sequential problem will remain a fundamentally sequential problem, for example.

If the problem can be parallelized, you're then limited by the nature of the CPU vs. the nature of the problem. A problem that is essentially SIMD is going to do great on a cluster of identical processors. A problem that is essentially MIMD is not. MIMD problems will always do better on heterogeneous systems, where each CPU is best suited for the work it is being asked to do.

(Ideally, you'd have extremely low-level hardware that ran an emulated instruction set, essentially what Transmeta did. The difference would be that instead of compiling programs for the CPU, you'd compile the CPU for the programs.)

Even that is still too much of a simplification. Some Intel chips parallelize at the CPU element level. The University of Manchester, in 1978, had a compiler that parallelized at the single instruction level. Most modern parallel languages parallelize at the code block level. Most modern message-passing libraries (like MPI) parallelize at the thread level. Some cluster patches for Linux will parallelize at process level or even coarser-grain.

I honestly think all of these different levels of granularity have value and that the "ideal" parallel environment would be one in which you could mix-and-match freely, depending on the nature of the problem.

(Actually, the "ultimate" would be to have a language and compiler such that the compiler analyzed the cost/benefit of each method on each parallelizable unit and generated all necessary CPU microcode and program object files to run optimally.)
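
To make the thread-level (MPI-style) granularity above concrete, here is a minimal message-passing sketch; it assumes an MPI implementation with mpicc/mpirun, and the work loop is a stand-in rather than any particular HPC kernel. Each rank sums its own slice of the range, and a reduction combines the partial results:

    /* Coarse-grained parallelism at the message-passing level:
     * split a loop across MPI ranks, then reduce the partial sums. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        long n = 100000000L, local = 0, total = 0;
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank takes every size-th element: a cyclic split. */
        for (long i = rank; i < n; i += size)
            local += i % 7;               /* stand-in for real work */

        MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("total = %ld (over %d ranks)\n", total, size);

        MPI_Finalize();
        return 0;
    }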

Re:Low Power Supercomputer (1)

Jacques Chester (151652) | more than 4 years ago | (#29973720)

>This was for 8 cores on the same motherboard, by the way; if you are spawning MPI jobs over a network socket, expect much, much worse.

They thought of that in the SiCortex design. They use a custom internal fabric to reduce the maximum hop distance to 3 between any pair of chips, and they included on-die MPI-handling logic to take that overhead out of the CPUs.

Re:Low Power Supercomputer (1)

flaming-opus (8186) | more than 4 years ago | (#29983740)

That said, there are a lot of tasks that do parallelize well. There's a large market for machines with >5k cores, often with a significant share of the jobs running on >1k cores. The big HPC sites (weather, energy, defense, research, seismic) have invested the last 3 decades in making parallel algorithms to solve their problems; first with shared-memory parallelism, but massively parallel has been the name of the game for at least 15 years.

Just because your algorithm doesn't scale does not mean that there is no market for parallel machines. Cray, HP, and IBM seem to be making a lot of money selling parallel machines. SiCortex just couldn't make their architecture awesome enough to take sales away from the entrenched players.

SiCortex isn't the only vendor to fail in the HPC space. With or without a low-power architecture, it's a hard market to make a lot of money in. It's an easy market to get into, so a lot of people try, but it's not easy to stay profitable, and the investors wanted to lower their risks.

Re:Low Power Supercomputer (0)

Anonymous Coward | more than 4 years ago | (#29972978)

I'm sure it could be made to work pretty well, but it's not unprecedented at all.

IBM have already been building low-power supercomputers for years. The whole Blue Gene series is based around the idea of using lower-end, low-power processors: it even says in the article that the Blue Gene/P has similar power usage. They use custom-built low-power 800MHz PowerPC chips.

Theoretically at least, x86 chips are going to have some additional overhead compared to RISC architectures due to the extra instruction-decoding hardware. The economies of scale were enough to overcome this when it comes to compute power; I'm not sure whether that is going to be the case for power consumption as well.

Too Early (1)

Dripdry (1062282) | more than 4 years ago | (#29970838)

If energy efficiency was their main pitch, perhaps starting when they did was just bad luck, or being too far ahead of the curve. Wait 5, 10, or 20 years. As people use less energy and the power companies raise their rates to compensate, perhaps these types of solutions will become more appetizing.

OTOH I don't have any numbers to back this up, such as the cost of a new HPC system from a different vendor versus Intel, and all the energy, support, and training costs that go with each.

Fool's errand (3, Insightful)

Locke2005 (849178) | more than 4 years ago | (#29970864)

In a blog post after SiCortex shut down, Reilly says he believes there is still room for non-x86 machines in the HPC market. He is wrong. Much more money is being spent every year on improving x86 chips than all the competitors combined. Basing a supercomputer on MIPS was short-sighted; even if it offers a price/performance or power/performance advantage now, in a couple years it won't, because x86 is being improved at a much faster rate. Where is Sequent now? The only way to build a successful desktop HPC company is to be able to do system design turns as fast as new x86 generations come out and ship soon after the new CPUs become widely available, e.g. a complete new product every 6 months. That requires partnership with either Intel or AMD, not use of a MIPS chip that no one is spending R&D resources on anymore.

Re:Fool's errand (1)

convolvatron (176505) | more than 4 years ago | (#29971130)

Unfortunately that's not sufficient. You also need the US government to throw you tens of millions in 'research contracts' that don't amount to anything, and have them agree to buy your overpriced machines even though they don't really do anything useful.

Even then it's a pretty difficult market.

Re:Fool's errand (1)

Jah-Wren Ryel (80510) | more than 4 years ago | (#29971210)

Basing a supercomputer on MIPS was short-sighted; even if it offers a price/performance or power/performance advantage now, in a couple years it won't, because x86 is being improved at a much faster rate. Where is Sequent now?

Uh, Sequent never used MIPS chips. In fact, the vast majority of the systems they sold were Intel-based.

Maybe you mean SGI? Their problems seemed to coincide with their moves to Intel chips (SGI PCs that flopped, and then later the wholesale move to IA-64). Not to say that the problems were caused by those moves - maybe they were, maybe they weren't - but it certainly didn't make the problems go away.

Re:Fool's errand (1)

Locke2005 (849178) | more than 4 years ago | (#29971886)

Sorry about the confusing jump there. I was referring to SiCortex's use of MIPS, not Sequent. Sequent was always x86-based; their weakness was lagging the release of the latest x86 chips by about a year in their products. Lightfleet also tried and then discarded the Broadcom/SiByte processor, because its floating-point unit quite simply did not perform as advertised.

Re:Fool's errand (1)

hemp (36945) | more than 4 years ago | (#29972398)

I think IBM's entry into multi-processor Unix machines with the RS/6000 was the death knell for Sequent and Pyramid. They both had their day in the Sun. Now Sun is gone too.

Re:Fool's errand (1)

serviscope_minor (664417) | more than 4 years ago | (#29971268)

He is wrong.

Have you looked at the top 100 recently? x86 is certainly not the only game in town.

Re:Fool's errand (1)

Fulcrum of Evil (560260) | more than 4 years ago | (#29971650)

Yeah, it's only 95%...

Re:Fool's errand (1)

serviscope_minor (664417) | more than 4 years ago | (#29975396)

Yeah, it's only 95%...

Seriously, what is wrong with you? Read the list and understand that PPC is not x86.

Re:Fool's errand (1)

Fulcrum of Evil (560260) | more than 4 years ago | (#29981148)

I read the list - almost everything in the top 100 is Xeon and Opteron, with one IBM machine.

Re:Fool's errand (1)

serviscope_minor (664417) | more than 4 years ago | (#29987510)

Might I suggest you learn to read? It's hard to claim that there is only one IBM machine in the top 100 when there are 5 IBM machines in the top 10. That's 50%, not 95%, by the way.

GPU's? (2, Interesting)

toastar (573882) | more than 4 years ago | (#29972084)

My next cluster is going to be based around Teslas. GPUs are the future. It takes 100,000 x86 cores to get a petaflop. You can get there with 25,000 processors if you use Cells (5k x86, 20k Cells), and you can do the same thing with 10k if you use GPUs (5k x86, 5k Teslas). Guess which is the cheapest option? They might not be the most energy-efficient, but haven't we learned the problem with custom chips in the HPC market? That's why we went to clusters in the first place.
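
Working the parent's round numbers backwards (they are the poster's assumptions, not vendor specs), a trivial sketch of the per-device throughput each option implies, ignoring the host CPUs' contribution in the Cell and Tesla configurations:

    /* Back-of-envelope check of the node counts quoted above. */
    #include <stdio.h>

    int main(void)
    {
        double target = 1e15;               /* one petaflop */

        printf("implied x86 core: %.0f GFLOPS\n", target / 100000 / 1e9);
        printf("implied Cell:     %.0f GFLOPS\n", target / 20000 / 1e9);
        printf("implied Tesla:    %.0f GFLOPS\n", target / 5000 / 1e9);
        return 0;
    }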

Re:GPU's? (3, Informative)

peawee03 (714493) | more than 4 years ago | (#29974010)

Currently, Teslas are the single-precision future. All my work is in double precision (64-bit), which is where most GPUs are much, much slower. IIRC, the next generation of GPUs is going to have respectable double-precision performance, but they're way down the road - hopefully I'll have moved on to a job where it doesn't matter by then. Hell, I consider it a victory when I've gotten a code translated from FORTRAN 77 to Fortran 95. GPUs? I'll wait until next decade. More normal cores are low-hanging fruit I can use with any MPI code *now*.

Re:Fool's errand (2, Insightful)

BikeHelmet (1437881) | more than 4 years ago | (#29971918)

Right now x86 has only two viable competitors.

-ARM
-Whatever IBM can design. (but IBM's stuff is expensive)

ARM CPUs tend to be cheap, power efficient, and pack a ton of performance for the price - and the company has enough cash to keep developing for years and years. Other companies fab, so that lets them keep focused on what they're good at. It's a relationship that mirrors GPU makers - ATI/nVidia/TSMC. However, ARM has a very low performance cap compared to x86, so that limits usage scenarios. Good for low power servers, but not so great for scientific computing or anything that hits the CPU hard. ARM hopes to release dual-core Cortex-A9 chips in 2010, so maybe they'll catch on - only time will tell.

IBM has always been the leader in performance, but the price would knock you flat. 5GHz Power6, anyone? It still beats everything Intel puts out, and it's years old - assuming you can foot the bill and deal with the different architecture. And look at the Cell - upon release, it was something like 30x more efficient in supercomputers than Intel's highest-end CPUs (because Intel's CPUs of the time failed completely at scaling past a few cores). Also, it was cheaper - but the architecture isn't exactly friendly, and most companies prefer to toss a few dozen extra $2000 servers at a problem rather than deal with training/hiring employees who can work with a new architecture.

And that's the problem - everyone knows x86, and even if a server costs 5 times as much, it comes out more economical.

But luckily for ARM, lots of people are getting more familiar with their instruction sets. These days just about every tiny device has an ARM CPU powering it... finding developers will not be a problem.

Re:Fool's errand (1)

Locke2005 (849178) | more than 4 years ago | (#29972012)

Cell architecture is good for specific problem sets, ones that look a lot like video streaming. It is by no means a general-purpose HPC architecture.

Everybody knows x86 is a poor foundation for high-speed computing, but the software, tools, and R&D budget to keep improving it are there. Specialty processors can offer temporary advantages, but within a few years Intel engineers will do their best to integrate any new ideas into their latest designs. Since recovering from the Whitehall fiasco and getting the performance-per-watt religion, Intel has actually been doing quite well.

Re:Fool's errand (1)

maitas (98290) | more than 4 years ago | (#29977042)

Sorry, but Nehalem has more than twice the performance per socket of Power6, and it is much cheaper.

http://tiny.cc/nuabU [tiny.cc]

http://tiny.cc/sQ1fT [tiny.cc]

When building processors, you tend to be around 300mm2 in size, no matter what fab you are in, because that is the size that gets you the best relationship between yield on a 300mm wafer and cost to package. Depending on the fab you get more or fewer transistors. It is up to each vendor to create as many or as few cores, threads, etc. as he wants.

An extreme version of this is Sun's T2 processor, with 8 cores, which is faster than Power6 per socket but slower than Nehalem.

http://tiny.cc/Tm9r2 [tiny.cc]

Re:Fool's errand (1)

BikeHelmet (1437881) | more than 4 years ago | (#29984752)

Sorry, I wasn't too clear. As a gamer, my first measure of performance is how fast a single core is, and my second is how many cores it scales well to.

You are correct that for multi-core multi-socket systems running server tasks, Nehalem is likely the best.

I'm glad that Nehalem beats an old architecture like Power6.

They didn't use MIPS (1)

PCM2 (4486) | more than 4 years ago | (#29972304)

Basing a supercomputer on MIPS was short-sighted; even if it offers a price/performance or power/performance advantage now, in a couple years it won't, because x86 is being improved at a much faster rate.

It wasn't even MIPS. From TFA:

But SiCortex went against conventional wisdom by building its own processors and this decision limited the company's market to early adopters, Conway says. In building its chips, SiCortex obtained intellectual property from several vendors, including MIPS Technologies, and tweaked the design to meet its own needs.

An HPC start-up going into the microprocessor design business now? That really is a fool's errand. Mind you, that's sort of how the ARM processor came to be, but that was a loooonnng time ago.

Re:They didn't use MIPS (1)

Jacques Chester (151652) | more than 4 years ago | (#29973740)

It *was* MIPS. They licensed core designs and used the MIPS ISA. But they had a custom 6-to-a-die design with some specialist MPI and fabric circuitry of their own design.

Re:They didn't use MIPS (1)

cheesybagel (670288) | more than 4 years ago | (#29975712)

My guess is they hoped to be bought out by someone. Hey, it happened to NexGen, P.A. Semi, and IDT Centaur.

Re:Fool's errand (1)

f16c (13581) | more than 4 years ago | (#29973104)

In a lot of ways x86 processors suck. Newer processors make great PCs, and parallel processing can work wonders for some tasks, but there are some things that require a whole lot more power on a clock-by-clock basis. There are lots of problems where parallel efforts are a waste of time. Some problems are sequential and can only be solved one node, tile or element at a time. Those are the problems for HPC rigs. Some architectures scale a lot better than Intel for similar problems, and MIPS is one of the first to come to mind.

Re:Fool's errand (2, Insightful)

Jacques Chester (151652) | more than 4 years ago | (#29973750)

> Much more money is being spent every year on improving x86 chips than all the competitors combined.

By your logic, General Motors should be crushing Ferrari in the supercar market. After all, GM spends much more on their car development than Ferrari does.

Re:Fool's errand (1)

cheesybagel (670288) | more than 4 years ago | (#29975716)

Ferrari is owned by Fiat.

Re:Fool's errand (0)

Anonymous Coward | more than 4 years ago | (#29974124)

Where is Sequent? Hell, where are Symbolics and the other LISP chip makers? (The warfare over this hardware was one of the founding events of the Free Software Foundation.)

Three letters... (0)

Anonymous Coward | more than 4 years ago | (#29974556)

GPU

but, even with the GPU you have to:

1) make a serious effort to train programmers
2) recognize it will penetrate new markets faster than old markets and
3) offer a factor of 10 (or more) improvement

The SiCortex people really only competed in existing markets. Nvidia is *developing* new markets, like embedded and deskside HPC. Cheap 3D CAT scan, anyone?

Re:Three letters... (0)

Anonymous Coward | more than 4 years ago | (#29975280)

I'd like to point out that most of the cost of a 3D CAT scan is not actually in computing, not by far. First you have to have a decent point-source X-ray tube ($$$$$$), a power supply for it ($$$$$), then a whole row of detectors+preamps+ADCs ($$$$$$$) and something to mechanically scan them with ($$$$$). The compute node needed, at $$$$$, is not even a major component.

Re:Fool's errand (1)

Have Brain Will Rent (1031664) | more than 4 years ago | (#29974992)

Of course, there are all those GPUs offering orders of magnitude more GFLOPS than any x86, and being improved at a much faster rate than x86.

The most energy-efficient HPC clusters (0, Offtopic)

Icegryphon (715550) | more than 4 years ago | (#29971004)

in ZA WARUDO!
MUDA DA!

The fanciest-sounding solution ... (5, Interesting)

Wrath0fb0b (302444) | more than 4 years ago | (#29971020)

... is almost always wrong. As one of the principals on a large-ish (not large by world standards, 1000 cores, mainly Nehalem so approximately 100 GFLOPS) cluster, I've been very pleased that we've done things as simply as possible. Sun Grid Engine and ROCKS running on commodity 1Us delivers an economical and effective solution (no, I don't work for Sun).

Most importantly, the environment does not unduly restrict what kind of compute jobs can be run. If it can be compiled on *nix, we can probably run it. We lose to specialized hardware (GPU-based, Cell-based, ... ) in raw throughput but we make up for it in both initial price and ease of deployment. We don't even have a dedicated admin for the cluster -- we had one to set it up and he did such a good job we haven't needed to hire a replacement!

Ultimately, I feel like it's not worth paying extra in hardware and software-dev costs to save a few dollars on cooling and power. Sure, you get the credibility of running a "green" cluster (never mind that you have to pay to feed and house those extra developers, which should legitimately come out of your carbon budget), but you end up with a far less useful product.

Long Live X86(_64)!

Re:The fanciest-sounding solution ... (-1, Flamebait)

Anonymous Coward | more than 4 years ago | (#29971084)

Long Live X86(_64)!
Long Live Viruses!
Long Live Worms!
Long Live Windows!

Re:The fanciest-sounding solution ... (-1, Troll)

Anonymous Coward | more than 4 years ago | (#29982942)

The parent is flamebait? x86 has made some very bad choices as far as security is concerned. No architecture is perfectly secure, but x86 is awful - though incredibly fast, for what it's worth.

Notice how the number of Mac bugs/security issues went up when they moved to Intel?

http://linux.slashdot.org/story/09/11/04/0320254/Bug-In-Most-Linuxes-Can-Give-Untrusted-Users-Root
"We are not super proud of the solution, but it is what seems best faced with a stupid Intel architectural choice."

So what you're saying is... (0)

Anonymous Coward | more than 4 years ago | (#29971752)

The easiest product to use should be the best product. OSX running on PPC chips was great, but not as good as x86? I suppose it's true, but why wasn't it true when all the fanboys were on PPCs?

Re:The fanciest-sounding solution ... (1)

jmknsd (1184359) | more than 4 years ago | (#29971960)

What are you using to benchmark your cluster? I benchmarked a pair of Nehalem Xeons (the 2.0GHz ones, i.e. the cheapest quad-cores) at 90 GFLOPS with Linpack. Individually, ~57 GFLOPS.

Re:The fanciest-sounding solution ... (3, Interesting)

Gorobei (127755) | more than 4 years ago | (#29972734)

Exactly right. I've got >10K cores and >10M LOC. "Hardware fault" typically means a datacenter caught fire, or was flooded, or an undersea cable got cut.

If someone pitches a cheaper solution (e.g. power savings), I'm happy to listen for 10 minutes. Then I just want to know how fast I can see results: a dev costs $50K/month here, so I'll give it a week or two. If you don't have a test farm ready to go with full compilers, a data security plan, etc., I'm going to just reject. If you can get traction with universities, great - come back and pitch again in a year.

Re:The fanciest-sounding solution ... (2, Interesting)

Jacques Chester (151652) | more than 4 years ago | (#29973612)

That's what they designed: it's basically a bog-ordinary Linux-with-MPI cluster in a box. They had a custom internal fabric that was far more efficient than ordinary switches and even had on-die MPI accelerators. They also shipped with compilers for C, C++ and Fortran.

It was meant to be a drop-in replacement for room-sized clusters for a fraction of the space and heat. Basically what killed them was cashflow.

Re:The fanciest-sounding solution ... (2, Interesting)

Gorobei (127755) | more than 4 years ago | (#29973692)

Yep, cashflow is a bitch: if I need to spend $25K to even look at the product, and they need $20M to run a demo datacenter, they need something like $100M in capital to avoid dying on the vine :(

Re:The fanciest-sounding solution ... (1)

Have Brain Will Rent (1031664) | more than 4 years ago | (#29975024)

Just out of curiosity how much of that $50K/month is salary? And how does the rest break down?

Re:The fanciest-sounding solution ... (1)

Jacques Chester (151652) | more than 4 years ago | (#29973626)

By your logic, General Motors should be crushing Ferrari. After all, GM spends much more on their car development than Ferrari does.

Re:The fanciest-sounding solution ... (1)

Jacques Chester (151652) | more than 4 years ago | (#29973730)

Apologies, the above comment seems to have been assigned to the wrong OP.

Re:The fanciest-sounding solution ... (0)

Anonymous Coward | more than 4 years ago | (#29974640)

The SiCortex folk were not originally interested in *green* computing. They only hyped this aspect when they realized they had screwed up the memory architecture.

I don't know how Intel could run screaming into the power wall -- and then throw away a whole generation of bad Nehalem designs, plus the Indian design team -- and yet people claim they are unbeatable.

People always blame their failure on... (0)

Anonymous Coward | more than 4 years ago | (#29971068)

...market conditions, didn't have the right people, time just wasn't right... But people don't buy what you sell. They buy why you sell it.

These guys failed at that, and got unlucky.

End the FED (0, Offtopic)

benjamindees (441808) | more than 4 years ago | (#29971136)

And this is basically a perfect example of how central bank meddling makes us all worse off. Small firms responding to the market and engaging in actual innovation threaten large, established corporations. Stock indexes fall. The "economy" collapses. The FED goes into damage control mode and starts printing money to hand out to their friends: the large, established corporations. Small firms and start-ups don't receive any of this free money. Large firms use this taxpayer money and the inflationary power of the FED to catch up to their smaller competitors by making incremental changes to existing production lines. The small firms go belly-up. Oligopoly is maintained. The newly unemployed die from lack of healthcare or are sent to get shot at in some unnecessary foreign war funded by their taxes and the same banks that put them out of business. Everything goes smoothly until a new generation or flood of immigrants precipitates resource shortages which incentivize the rise of new, innovative start-ups and begins the "business cycle" all over again.

Dang (1)

maoinhibitor (1670588) | more than 4 years ago | (#29971190)

I heard about this company in Mass High Tech, started checking them out as a potential employer, and then heard they went out of business. It's unfortunate; they had an interesting product. This also means I won't be applying for jobs at startups until the economy is much stronger.

Wile E Coyote (1, Insightful)

chill (34294) | more than 4 years ago | (#29971198)

Whenever I hear a story about some new type of "super" computer, I think of an old Road Runner cartoon. Wile E Coyote, Genius, is mixing chemical explosives in his little shack, which he doesn't know was moved onto the train tracks.

He says to himself, "Wile E. Coyote SUPER genius. I like the sound of that." He then gets hit by the train.

Some of these companies remind me a LOT of good, old Wile E. Coyote. The one in this article just found the train.

Entrenched? (1)

serviscope_minor (664417) | more than 4 years ago | (#29971202)

x86 is certainly entrenched on the desktop, but in supercomputing? In the top 10, it's maybe half x86. There's a strong showing from Power (BlueGene), and of course the #1 spot is held by an x86/Cell hybrid (which gets most of the FLOPS from the SPUs, not the PPC or x86).

Hardly entrenched.

Looking down further, there is mainly x86, but still a strong showing from Power (IBM), plus SPARC, NEC's vector processor (kind of PPC), Itanium and a few randoms.

So, the top 100 is dominated by AMD, Intel and IBM in roughly equal parts, but there is still room for other vendors.

Still sad to see an innovative computer go to the wall :(

Re:Entrenched? (2, Informative)

jd (1658) | more than 4 years ago | (#29973136)

First, the HPC world has a lot of commodity computers, but it also has a lot of very special-purpose computers.

Second, the odds of someone buying an HPC machine and then running pre-compiled, generically-optimized code on it are virtually zero.

Third, HPC computers (as compared to loosely-clustered "pile-of-PCs" systems) are expensive and almost invariably use components that aren't "run-of-the-mill" (such as Infiniband or SCI for the interconnect).

In consequence, not only is the ix86 not "entrenched", it can't be "entrenched". It can only be popular in specific segments of the HPC market and even then only until something better comes along.

If HPC was tied that firmly to Intel, they'd all be using Windows Cluster Edition rather than Beowulf or MOSIX. Why? Because Beowulf and MOSIX require engineers who think, Windows does not. If thinking was superfluous to requirements, they'd be using an OS to suit. They aren't.

Now, will MIPS/MIPS64 ever do well in HPC as a whole? Probably not. MIPS is great for the embedded market, which means most MIPS engineers understand the embedded terrain. That's not a skill you can readily migrate to other areas. I do expect, however, MIPS/MIPS64 to do extremely well in some HPC domains. It's low-power (which is why it's popular in embedded systems) which is great when you can't cart around huge generators, can't dispose of the heat easily or have to minimize the radio noise. Plenty of markets there.

The Cell processor is an interesting design and seems to do great, but problems tend not to split 6-ways very often. I'd have preferred them to have a 4-way grouping of number-crunchers and have the other 2 cores really good at something else entirely. Perhaps as the manufacturing scale gets smaller, they'll be able to increase the variety of cores.

But sooner or later, someone is going to build a chip that is absolutely just what the HPC world needs. The GNU C compiler is easily enough extended, and although it's not quite as good as Green Hills or some of the other high-end compilers, the gap isn't so great that HPCers won't use it.

My guess is that such a chip will be very easily reconfigured through microcode and that it'll really be not much more than a bunch of core operations on silicon, a block of largely unsegmented memory and enough networking logic to allow the operator to fully exploit what's there. Oh, and a hell of a lot of internal bandwidth. To pull this off, you'd need to do for CPU internal buses what Infiniband and SCI have done for machine-level networking. That's the only truly hard part.

Such designs have been attempted before, where CPUs have no registers but just a block of memory you can use as you will. This idea goes a little further, since it replaces both Intel's notion of hyperthreading and the modern idea of multiple cores with the idea that hyperthreads x cores would be fixed with the microcode deciding the values. The compiler for the program can then section the CPU according to the needs of the program, rather than sectioning the program according to the needs of the CPU.

Could Intel borrow this? No. The above has no architecture per se, and no real instruction set - just processor elements. There's nothing to copy, nothing to patent, and with no fixed instruction set, nothing to lock customers in with. The only thing they could really steal would be the faster internal bus. Which would keep them on desktops for decades to come, but because general-purpose is ALWAYS slower than special-purpose, it wouldn't keep them in the HPC market.

We've seen the same with other components of computers, of course. Long-gone are the days of proprietary floppy drives (yes, some companies really tried to tie customers to their brand of floppy disk), proprietary printers, proprietary tape drives, proprietary hard disk interfaces, even proprietary RAM (Got RAMBUS?).

Transmeta came close, but didn't go all the way (their CPU had an architecture of some sort) and were far too interested in the secrets business. Understandable - money is money and what I'm envisaging isn't going to make anyone rich. Quite the opposite. If it actually works the way I think it should, it'll reduce the value of the CPU market the way Linux and the *BSDs are reducing the value of the OS market.

Commodity (2, Interesting)

oldhack (1037484) | more than 4 years ago | (#29971330)

FTFA:

"It is possible for a small company to compete in the computer systems business," Reilly wrote. "There are some who will say that nobody can compete against 'commodity manufacturers.' Ignore them. ... There are only two true commodities in the computer business: DRAMs and wafer area. Everybody pretty much pays the same price for DRAMs. Wafer area is what you make of it. If you insist on building giant 100W chips, life will be tough. But if you use the silicon wafer area for something new, different and efficient, a market will open up to you."

Many years ago, I wrote a paper for my business class that using DRAM industry as a commodity industry. The ignint professor gave me a C for that cuz he insisted DRAM is not a commodity. That dude at the time was a young one, too.

Lesson? Don't waste your time and money at b-school - it may damage your brains.

Re:Commodity (0)

Anonymous Coward | more than 4 years ago | (#29971542)

Other Lesson: pay attention in english class.

Re:Commodity (0)

Anonymous Coward | more than 4 years ago | (#29971686)

Other lesson: pay attention in English class.

Re:Commodity (0, Troll)

coaxial (28297) | more than 4 years ago | (#29972566)

Many years ago, I wrote a paper for my business class that using DRAM industry as a commodity industry. The ignint professor gave me a C for that cuz he insisted DRAM is not a commodity. That dude at the time was a young one, too.

You're right that DRAM is a commodity. You clearly got the C because the prof was thinking of traditional commodities and you didn't support your premise.

Re:Commodity (1)

Have Brain Will Rent (1031664) | more than 4 years ago | (#29975072)

it may damage your brains.

Apparently to the point where grammar and spelling skills are lost and delusions of having multiple brains set in!

classic business fail (2, Insightful)

timmarhy (659436) | more than 4 years ago | (#29971760)

These guys failed in a very typical geeky fashion: they understood the technology but not the business, and at the end of the day your customers need a business case to use your services. It's the tail attempting to wag the dog.

Re:classic business fail (0)

Anonymous Coward | more than 4 years ago | (#29982828)

it's the tail attempting to wag the dog.

Apparently you've never seen an excited dog before :) It's not just the tail that goes, it's their whole body!

Interesting architecture (1)

belthize (990217) | more than 4 years ago | (#29971864)

    Points (made above) about non-x86 processors being doomed aside, SiCortex had an interesting interconnect design. Their Kautz-graph-based interconnect was fairly innovative (at least to me).

    Personally, I'm sorry to see them go. We never had a chance to benchmark our software on their system, but I suspected it might have performed very well per dollar. Even if the underlying system disappears, their interconnect ideas may survive.
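
    For anyone wondering what the Kautz topology buys you, here's a minimal Python sketch; the degree-3, diameter-6 parameters are my reading of public descriptions of the SC5832, not something from the parent post.

from collections import deque
from itertools import product

def kautz_nodes(d, n):
    # Kautz graph K(d, n): length-n words over an alphabet of d+1
    # symbols in which no two consecutive symbols are equal.
    return [w for w in product(range(d + 1), repeat=n)
            if all(a != b for a, b in zip(w, w[1:]))]

def kautz_neighbors(w, d):
    # Directed edges: shift left, append any symbol != the new last one.
    return [w[1:] + (x,) for x in range(d + 1) if x != w[-1]]

nodes = kautz_nodes(3, 6)
print(len(nodes))          # 972 vertices, each with out-degree 3

# BFS from one vertex: every other vertex is within 6 hops (the diameter),
# which is why so few links per node give such short worst-case routes.
dist, q = {nodes[0]: 0}, deque([nodes[0]])
while q:
    v = q.popleft()
    for u in kautz_neighbors(v, 3):
        if u not in dist:
            dist[u] = dist[v] + 1
            q.append(u)
print(max(dist.values()))  # 6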

SiCortex, WAY too expensive. (0)

Anonymous Coward | more than 4 years ago | (#29971930)

I looked at the SiCortex machine (and its price) online about 2 years ago. They were charging ~$30/core, and their cores were simple ~500 MHz MIPS processors. Considering that Tilera and nVidia have actual customers, this could just be company-specific.

The future is low power. (1)

NCamero (35481) | more than 4 years ago | (#29972196)

FLOPS/Watt is the future. No doubt. Tesla (NVIDIA) has the edge on the low end now. Low power per cycle will win in the long term. Multi-core 64-bit x86 for now. But to the powers that be, 32/64/128 bits per watt per cycle will rule someday.

Re:The future is low power. (1)

afidel (530433) | more than 4 years ago | (#29972402)

(FLOPS/W)/latency
For non-embarrassingly-parallel jobs, it won't matter how efficiently you can compute if you can't communicate the results between nodes.
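
As a toy comparison of that metric (every figure invented for illustration):

# GFLOPS, watts, interconnect latency in microseconds -- made-up numbers.
machines = {
    "low-power, slow interconnect": (5000, 5000, 5.0),
    "power-hungry, fast interconnect": (8000, 10000, 1.0),
}
for name, (gflops, watts, latency_us) in machines.items():
    print(name, (gflops / watts) / latency_us)
# 0.2 vs 0.8: the hungrier machine wins once latency is in the denominator.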

There's No Hope . . . (2, Insightful)

MarkvW (1037596) | more than 4 years ago | (#29972274)

Somebody is going to crack the market--and it won't be one of the people who sit at home and cry in their beer about how Intel rules the world and nobody has any hope of success!!

Thank goodness for the entrepreneurs who spit on lassitude and take their shot! Those Wozniaks are the people who end up delivering really cool stuff for the rest of humanity, and leave the conventional-wisdom people in the dust.

Re:There's No Hope . . . (0)

Anonymous Coward | more than 4 years ago | (#29974382)

I once took a course in entrepreneurship. One of the important lessons was "don't try to enter somebody else's market unless you have a huge advantage over them."

The Woz entered an empty market and everybody who tried to follow him failed, despite their much cheaper prices.
The C64 had sprites, it had sound, it could have had a better disk drive. But the Apple II got there first.

SiCortex didn't have that large an advantage.

....but (1)

trum4n (982031) | more than 4 years ago | (#29972698)

Can I have their inventory?

Did you nerds read the article or the links? (4, Insightful)

labradore (26729) | more than 4 years ago | (#29973158)

They were ahead of schedule to profitability. They lost funding for next-gen equipment development because one of their VCs was overextended (read: losing too much money on other risky ventures) and decided to pull out. The risk with a company like that may be high, but once you reach profitability, you can fund further product development internally. They had sold about twenty $1.5M machines in about a year on the market. They said they were about 1.5 years from profitability, so I'm guessing they expected to sell another 75 or 100 top-end machines to break even. At that rate, they were probably spending less than $20M a year on development. I'm guessing they burned up $100M to get where they got. In the overall scheme of things, that's not a big bet. If they had managed to build 20- to 50-thousand-node machines and increase the output per core within 3 years, that would have done more than fill a niche. They probably would have developed some game-changing technology in the bargain. Stuff that Intel and Google might just be interested in.
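
Restating those guesses as arithmetic (sales figures from the post above; the gross margin is my own assumption, just to see whether the numbers hang together):

sold, price = 20, 1.5e6
print(sold * price)               # ~$30M of sales in roughly the first year

remaining_burn = 20e6 * 1.5       # ~$20M/yr development for another 1.5 years
margin = 0.30                     # assumed gross margin per machine
for n in (75, 100):
    print(n, n * price * margin)  # ~$34M-$45M gross profit, enough to cover it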

To be clear: this was not a failure due to the economics of competing against Intel/x86. This was a failure due to not being lucky. It takes sustained funding to make your way from start-up to profit in most technical businesses. HPC is more technical and thus more expensive than most.

Re:Did you nerds read the article or the links? (1)

Jacques Chester (151652) | more than 4 years ago | (#29973884)

Yep, purely a business failure due to the crappy timing of the GFC, rather than market trends per se.

What really pisses me off is that Sun bought MySQL for a billion dollars when they probably could have gotten SiCortex for a fraction of that.

x86 vs. non-x86 from a practical point of view (2, Informative)

Iphtashu Fitz (263795) | more than 4 years ago | (#29973392)

I work as a sysadmin at a Boston-based university, and one of my jobs is managing an HPC cluster. We actually had SiCortex come give us a demo of one of their systems a little over a year ago, and we were rather impressed from a basic technology standpoint. However, the biggest drawback we saw, and it was a significant one, was that their cluster wasn't x86-based. We run a number of well-known commercial apps on our cluster, like Matlab, Mathematica, Fluent, Abaqus, and many others. Without those vendors all actively supporting MIPS, SiCortex was simply a non-starter for us when we were researching our next-generation cluster. And by actively I mean rolling out MIPS versions of their products on a schedule comparable to their x86 releases; having to wait 6 months or more for MIPS versions simply isn't acceptable. If they could have gotten firm commitments from those commercial vendors, then we might have pursued SiCortex, but that simply wasn't the case. Even the inability to run a standard commercial Linux distro was a huge drawback, since many commercial software vendors specifically require a commercial distro like Red Hat or SUSE if you're trying to get support from them.

Re:x86 vs. non-x86 from a practical point of view (1)

Skapare (16644) | more than 4 years ago | (#29976194)

So all these companies like Matlab, Mathematica, Fluent, and Abaqus would rather have you waste more power per CPU cycle, and help harm the environment, by limiting you to less-efficient machines, just because they have no idea how to compile their applications on another architecture, and no idea how to make them portable across even different distributions of Linux? It's the app vendors that belong in the hall of shame.

Re:x86 vs. non-x86 from a practical point of view (0)

Anonymous Coward | more than 4 years ago | (#29977984)

Way to really twist things for your own personal agenda.

There's no market for MIPS versions of their apps. It's a classic chicken-and-egg problem: unless/until there's a lot of MIPS hardware out there, they, and most other vendors, won't support it; and until there's software for MIPS, most people won't invest in the hardware.

Well thank goodness (1)

Jacques Chester (151652) | more than 4 years ago | (#29973538)

That Sun pissed away one billion dollars on MySQL instead of buying out SiCortex. Smart move, Sun!

SiCortex's failure (2, Interesting)

RzUpAnmsCwrds (262647) | more than 4 years ago | (#29975982)

Having actually used a SiCortex machine, I can tell you that the problem wasn't the VC, or the compilers, or even really the hardware.

The problem was the market.

There are two types of x86-based small clusters (the market SiCortex was aiming for): clusters with Gigabit Ethernet and clusters with expensive interconnects (Myrinet, InfiniBand, or 10G Ethernet).

Gigabit Ethernet clusters do a good job with problems that are embarrassingly parallel (or at least have minimal communication demands). $150k gets you 300 Nehalem cores and a lot of memory. SiCortex fails here because its competing machine (the SC1458) is much more expensive and much slower. The fact that the SC1458 uses less power (around 5kW instead of 10kW) is impressive, but unless you're very power- or cooling-constrained, it's simply more cost-effective to deal with the extra power and cooling.
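
A rough back-of-the-envelope on the power savings (the 5kW and 10kW figures are from above; the all-in energy rate is an assumption):

# Assumed $0.10/kWh for power plus cooling; cluster figures from above.
kw_saved = 10 - 5                  # ~10kW GigE cluster vs ~5kW SC1458
savings = kw_saved * 0.10 * 24 * 365
print(savings)                     # ~$4,380/year -- small next to the price gap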

SiCortex hardware was more cost-effective against clusters with expensive interconnects. The problem is, the people who buy clusters with expensive interconnects do so because their problem is interconnect-heavy. Unfortunately, despite all the cool CS behind SiCortex's interconnect, it just didn't do that well against InfiniBand. That's partly because the SiCortex system has more nodes, which means more messages have to cross the interconnect; partly because, for very small clusters, a single IB switch can connect every node to every other node; and partly because SiCortex didn't have the kind of mature hardware/software stack that someone like Mellanox has.
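
One way to see the overhead of "more, smaller nodes": slower cores mean more MPI ranks for the same delivered throughput, and all-to-all traffic scales with the square of the rank count. A toy count (the 800-core figure is an invented stand-in for an x86 cluster of similar throughput):

def all_to_all_messages(ranks):
    # An MPI all-to-all among N ranks moves N * (N - 1) point-to-point messages.
    return ranks * (ranks - 1)

print(all_to_all_messages(5832))   # SC5832-style rank count: ~34.0M messages
print(all_to_all_messages(800))    # hypothetical fast x86 cluster: ~0.64M messages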

So, there you have it. For the problems that ran well on SiCortex hardware, you could get the same performance at dramatically lower cost using Gigabit Ethernet. For the problems that required an expensive interconnect, the SiCortex approach of "more, smaller nodes" meant dramatically more overhead than the "fewer, faster nodes" strategy.
