
NCSA and IBM Part Ways Over Blue Waters

timothy posted more than 2 years ago | from the back-to-the-amd-k6-2-for-now dept.


An anonymous reader writes "IBM has terminated its contract with NCSA for the petascale Blue Waters system that was expected to go online in the next year. The reason stated was that NCSA found IBM's technology 'was more complex and required significantly increased financial and technical support by IBM beyond its original expectations.' The IT community is now wondering if NCSA will be renting out space in the new data center that is being built to house Blue Waters or if they will go with another vendor."


76 comments

Translation (2, Insightful)

andydread (758754) | more than 2 years ago | (#37028326)

The reason stated was that NCSA found IBM's technology 'was more complex and required significantly increased financial and technical support by IBM beyond its original expectations.'

Translation: NCSA found that IBM was trying to lock them in with ultra proprietary technology that would have required IBM's expensive services for the life of the installation.

Re:Translation (1)

Chris Burke (6130) | more than 2 years ago | (#37028664)

Translation: NCSA found that IBM was trying to lock them in with ultra proprietary technology that would have required IBM's expensive services for the life of the installation.

They only just found out about IBM's business model?!

Re:Translation (0)

Anonymous Coward | more than 2 years ago | (#37028814)

I'm still waiting for them to find out about Microsoft's.

Re:Translation (1)

petermgreen (876956) | more than 2 years ago | (#37031192)

My experience with Microsoft in academia is that they like to get us on programs that are a lot cheaper than paying for the software normally, but that are based on a subscription priced by the size of the whole institution rather than per machine.

The result is that there is no incentive to gradually migrate away from MS software, since the only way to reduce the amount paid to MS would be to virtually eliminate MS software from the institution (which is not realistically going to happen).
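The economics the parent describes are easy to sketch with made-up numbers (the per-seat price, subscription rate, and machine counts below are all hypothetical, just to show why partial migration saves nothing under a site-wide subscription):

```python
# Toy comparison of per-machine licensing vs. an institution-wide
# subscription. All prices and counts are invented for illustration.

def per_machine_cost(machines_running_ms, price_per_seat=150):
    # Pay only for the machines that actually run the software.
    return machines_running_ms * price_per_seat

def site_subscription_cost(total_machines, rate_per_machine=40):
    # Pay a flat rate scaled to the *whole* institution,
    # regardless of how many machines actually run the software.
    return total_machines * rate_per_machine

total = 10_000       # machines in the institution
running_ms = 8_000   # machines actually running MS software

before = site_subscription_cost(total)
# Migrate half the MS machines away: the subscription bill is keyed
# to institution size, so it does not change at all.
after = site_subscription_cost(total)

print(before, after, per_machine_cost(running_ms))
```

The subscription comes out far cheaper up front, which is exactly the hook: the only way the bill ever shrinks is eliminating MS software institution-wide.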

Re:Translation (1)

Old Sparky (675061) | more than 2 years ago | (#37032014)

Once again a Microsoft shill masquerading as a Slashdot moderator mods down a comment that is both insightful AND funny!

Re:Translation (1)

jc42 (318812) | more than 2 years ago | (#37034688)

I'm still waiting for them to find out about Microsoft's [business model].

It might be interesting to look through the flock of Microsoft patents (thousands? millions?) with the idea of listing the patents for things published by NCSA people. More generally, how many patent violations there will be in the new super-computer, and how much will NCSA have to pay for licenses to use the things discovered/invented by their own researchers?

And how many companies in addition to Microsoft will be filing infringement suits against the NCSA? Yeah, we know that IV will be there, but how many others will file in their own names?

Re:Translation (1)

rgviza (1303161) | more than 2 years ago | (#37032212)

Yeah, you don't buy a mainframe from IBM; you rent it and never stop paying them monthly until the project is terminated. It's not exactly small potatoes, either.

Re:Translation (0)

Anonymous Coward | more than 2 years ago | (#37097388)

A couple of comments on this post.
1. This system is not based on IBM mainframe technology.
2. Many customers buy IBM mainframes.

Re:Translation (0)

Anonymous Coward | more than 2 years ago | (#37028846)

What it really means is that IBM was contractually locked in to a specified price, but couldn't deliver the necessary system at that price.

Re:Translation (2, Informative)

Anonymous Coward | more than 2 years ago | (#37029058)

The reason stated was that NCSA found IBM's technology 'was more complex and required significantly increased financial and technical support by IBM beyond its original expectations.'

As usual the /. summary is misleading at best. The actual language used was:

The innovative technology that IBM ultimately developed was more complex and required significantly increased financial and technical support by IBM beyond its original expectations. NCSA and IBM worked closely on various proposals to retain IBM's participation in the project but could not come to a mutually agreed-on plan concerning the path forward.

Other tidbits from the real press release are that IBM terminated the contract, not NCSA, IBM is refunding the money paid to date, and NCSA is giving back the hardware delivered to date.

Translation:
NCSA found that IBM was trying to lock them in with ultra proprietary technology that would have required IBM's expensive services for the life of the installation.

That's a really dumb translation. Nobody expects a supercomputer to be commodity hardware. Just the opposite, as there is no such thing as a commodity supercomputer. Especially this kind of supercomputer, built in part to attain new performance records. When you buy something like that, you thoroughly expect vendor lock-in, expensive services, etc. There are only two or three vendors you can buy it from, and they're all going to be doing a lot of custom engineering for you, so proprietary is by definition what you're buying.

The real translation here is: IBM realized there was no way to deliver on the original contract without taking a huge loss, and tried to negotiate with NCSA for more budget, or maybe reduced system capability, but NCSA couldn't or wouldn't do that. (Probably couldn't, I doubt they can just scare up more money at the drop of a hat. As for backing off, when your project was funded to build a "petascale" computer, you're pretty committed to delivering a petaflop, so scaling back capabilities was probably not an option.)

Since the sides couldn't come to terms, IBM took a huge hit by terminating the contract. Yeah, they get their hardware back, but it's probably not very easy to sell to anybody other than NCSA. And they have to return all the money, which means they did a lot of engineering work for $0, once again with few prospects of monetizing the work in a future deal.

As for NCSA, even though they get the money back they still lost a lot too. Years of development down the tubes, and they have to start over (if at all) with a new supercomputer capable vendor. From scratch. At 2011 prices instead of 2007 prices. Which might well be a disaster for them if they couldn't afford to give IBM enough money to finish the original system.

Re:Translation (1)

That Guy From Mrktng (2274712) | more than 2 years ago | (#37029484)

Stop trying to plant common sense and facts in our IBM bashing!

That's why you get modded 0! Now, to make things even, let me say one thing: "IBM helped the Nazis." There... now everything is normal again, just like we want it.

Re:Translation (1)

imsabbel (611519) | more than 2 years ago | (#37030452)

If you read between the lines, we are of course back to the point of the headline:

IBM wanted to significantly increase the price ("required significantly increased financial support...", which they would of course have passed on) and could not get that through. So they declined to deliver under the initially contracted conditions.

Re:Translation (0)

Anonymous Coward | more than 2 years ago | (#37030968)

There is no "vendor lock-in" anyway. OP obviously doesn't have a clue about the supercomputer industry.

These groups require that the system run their jobs, written in C and Fortran with MPI and OpenMP. Linux is almost always used, of course, but they're known to happily take another OS if it can do a better job (which it usually can't -- hence Linux's dominance).

The actual systems themselves are very often quite unusual, particularly in their interconnect and low level chipset. It wouldn't be unusual to have seen a lab go from using a cluster of Alpha machines, to an Itanium SSI machine, to an x86 cluster, each from a different vendor. They don't care what it is, if it can deliver the FLOPS for the best price.

"Vendor lock-in" extends only as far as the prime contractor typically being selected for design, installation, support, and maintenance for the life of the system. But that is not "vendor lock-in". Supercomputer/HPC customers tend to be by far the most mobile and least locked-in customers in the entire computing space.
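The portability being described comes from coding to standards (MPI, OpenMP) rather than to a vendor: the per-rank kernel moves unchanged from an Alpha cluster to an Itanium SSI box to an x86 cluster. A real job would launch one copy of the kernel per rank under MPI (via C/Fortran or mpi4py); the sketch below loops over the ranks serially just so the decomposition is visible, and all names and sizes are illustrative:

```python
# Rank-local kernel in the style of an MPI domain decomposition:
# each rank integrates its own slice of [a, b], and the partial
# results are then combined (in a real code, with MPI_Allreduce).

def local_trapezoid(f, a, b, n, rank, nranks):
    # Split [a, b] into nranks contiguous slices; integrate slice
    # number `rank` with the trapezoid rule over n intervals.
    width = (b - a) / nranks
    lo = a + rank * width
    hi = lo + width
    h = (hi - lo) / n
    total = 0.5 * (f(lo) + f(hi))
    for i in range(1, n):
        total += f(lo + i * h)
    return total * h

def run(f, a, b, n_per_rank, nranks):
    # Stand-in for mpiexec + Allreduce: sum the per-rank partials.
    return sum(local_trapezoid(f, a, b, n_per_rank, r, nranks)
               for r in range(nranks))

integral = run(lambda x: x * x, 0.0, 1.0, 1000, nranks=8)
print(integral)  # close to 1/3
```

Nothing in `local_trapezoid` knows or cares what interconnect or CPU it runs on, which is why HPC shops can switch vendors so freely.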

Re:Translation (1)

dbo42 (2433404) | more than 2 years ago | (#37031008)

Since the sides couldn't come to terms, IBM took a huge hit by terminating the contract. Yeah, they get their hardware back, but it's probably not very easy to sell to anybody other than NCSA. And they have to return all the money, which means they did a lot engineering work for $0, once again with few prospects of monetizing the work in a future deal.

Not to mention the marketing disaster.

Re:Translation (1)

flaming-opus (8186) | more than 2 years ago | (#37032552)

One of the big problems here is that this system was a one-off that was never meant to be one. IBM developed the system under the DARPA HPCS contract. They made a very capable system that is also very expensive. They hoped to sell a bunch of them; it looks like they sold just one. As such, all of the engineering costs are being amortized across a single machine. They couldn't leverage a bunch of smaller systems at other customer sites to stabilize the technology before deploying the monster at NCSA. Some of this is due to the success of their iDataPlex offerings, which have stolen the smaller sites away from Power7 machines.

I agree, though, that vendor lock-in is the name of the game in these sorts of systems. However, vendors do care about competing for the next contract, and try to keep engineering costs down. One of the ways you do that, of course, is to not make one-off systems.

Re:Translation (0)

Anonymous Coward | more than 2 years ago | (#37032784)

I don't think they planned on selling a bunch of them. I think they just wanted to make the one to keep their names high on the "Top 10 SuperComputers of the World" list. The size and price-tag of this monster pretty much made it so that very few other customers would want (or could afford!) one.

Re:Translation (1)

flaming-opus (8186) | more than 2 years ago | (#37035798)

Not sell a system as big as Blue Waters, but ones using the same technology. The Power 755, of which Blue Waters was supposed to be the prime example, is very powerful per node, has a lot of bandwidth within and between nodes, and could be quite useful in much smaller configurations. Tim Morgan at The Register indicates that IBM will still be selling smaller configurations of this machine. It's just hard to keep up that level of per-node performance across so large a machine for the agreed-upon cost.

Re:Translation (0)

Anonymous Coward | more than 2 years ago | (#37034620)

Nicely put. Well done.

Why did IBM do this, and what next for NCSA? (5, Interesting)

bridges (101722) | more than 2 years ago | (#37028406)

Pretty surprising development, given the length of time that IBM and NCSA had been working on this. Dropping a contract like this essentially calls into question IBM's costing on future contract bids, so it's not something they'd do lightly. It'll be interesting to see the scuttlebutt that comes out afterward, to learn how much of this was technical shortcomings and how much pure financial considerations on IBM's part. Maybe, since IBM already got its big publicity for Power7 from Watson, they're being more profit-conscious on future Power systems so they don't tie themselves to margins that are too low.

From the NCSA side, there will certainly be a fallback of some sort - NSF and NCSA are already working out those details according to recent reports. I'd guess that they go with a large Cray XE6 system, given that a pretty sizeable version of that system is already being stood up and ironed out (the Sandia/Los Alamos Cielo system), and Cray has a lot of history successfully standing up big systems (e.g. ORNL Jaguar, Sandia Red Storm, etc.). SGI Altix is the other alternative, I guess, and there's a pretty big one up at NASA now, though that'd probably be a riskier proposition than Cray IMO, and I expect that NCSA and NSF are going to be pretty risk averse on following up on this.

Re:Why did IBM do this, and what next for NCSA? (-1)

Anonymous Coward | more than 2 years ago | (#37028798)

Because IBM is no longer what it was. IBMers have been losing their edge, dropping people who know how things really work and replacing them with people who can read from a script and perform a specific function. The current IBM doesn't even understand its own core technologies anymore, since all the focus has shifted toward "services". Oh, there are still little pockets of knowledge, but they're fading faster every year.

I suspect that the Watsons are spinning in their graves.

Re:Why did IBM do this, and what next for NCSA? (0)

Anonymous Coward | more than 2 years ago | (#37030202)

Surprising? BAH! This is nonsense.

  IBM has been looking for a way out of this contract for a hell of a long time now. Anyone that says otherwise is an NSF or NCSA PR person trying to look good. Everyone in the industry knows and has for a long time.

No doubt NSF will find a way to spend the track 1 money on another system, but now everything will be much later than expected. The bigger question is, how does this affect the track 2E proposals that are still outstanding?

And the track 1 money? Back to NSF? Continue to keep the nonsense at NCSA funded for uh.. whatever?

New boxes? It can't be an SGI after the PSC debacle last year. Cray would love the business, but probably won't be able to fulfill the order in a short time-line due to all the upgrades they already have in the pipeline. Another crappy GPGPU hybrid? Intel MIC/knights bridge|corner|whatever-they-call-it-this-month? (rumors are that most of the track 2e finalists all bid MIC)

This will all come down to NSF growing a backbone. Which, given their inability to even decide on simple things like the software architecture to follow TeraGrid (seriously.. do both?!? idiots. Could be worse, I guess. Could be Voyager 2), we will see...

Re:Why did IBM do this, and what next for NCSA? (0)

Anonymous Coward | more than 2 years ago | (#37031614)

Cray seems to be the only option for a realistic sustained-1PF system if you rule out IBM (e.g. a BG/Q system), but you're right that their production capacity is likely an issue. I can't imagine NSF taking back the track 1 money, regardless of whether or not it would be the right thing to do from a technical or financial point of view for this one contract. The political consequences to NSF, when people are already looking to axe their budget, would be catastrophic. They (and NCSA) need to be able to claim a track 1 success for political reasons, and soon.

Re:Why did IBM do this, and what next for NCSA? (1)

flaming-opus (8186) | more than 2 years ago | (#37032364)

I'm sure Cray can get up to speed in this time frame. They've done it before, for the Jaguar deployment. However, if they go with Cray, why install it at NCSA? The NSF already has a big Cray running at the University of Tennessee (Kraken). Why not just upgrade the existing Cray? They already have the bugs worked out; they would just have to add more cabinets and probably upgrade the processors.

Re:Why did IBM do this, and what next for NCSA? (1)

bridges (101722) | more than 2 years ago | (#37033420)

Why build it at NCSA instead of just upgrading Kraken? Because:

1) Kraken is an XT5, not an XE system - the associated changes of an upgrade from XT to XE would be very large.
2) NCSA already has a big machine room (that they just built) to support a system of that scale. Does ORNL have enough additional power and cooling capacity to support Keeneland, Jaguar, and growing Kraken by an order of magnitude in size?
3) ORNL is already installing Keeneland, an NSF track 2 system, this coming year.
4) The larger political implications to NSF of failing on the $200M track 1 grant that was awarded to NCSA would probably be catastrophic.

Re:Why did IBM do this, and what next for NCSA? (1)

Durinia (72612) | more than 2 years ago | (#37034766)

5) ...and Cray is already installing a 20-ish PF machine at ORNL in the next year named "Titan".

Re:Why did IBM do this, and what next for NCSA? (0)

Anonymous Coward | more than 2 years ago | (#37059184)

Regarding objection (2): ORNL doesn't really have any power problems (that is one of the reasons they have so many large machines). They have lots and lots of hydroelectric power from the Tennessee River. Objection (4), however, is the clincher.

Re:Why did IBM do this, and what next for NCSA? (1)

Dop (123) | more than 2 years ago | (#37032518)

Anonymous Coward, eh? You must still work there.

Re:Why did IBM do this, and what next for NCSA? (1)

flaming-opus (8186) | more than 2 years ago | (#37032476)

NSF already has a big Cray XT5, Kraken, at UofTenn. So the risk-averse would probably say get a next-generation XE6. Cray has announced an integrated GPGPU option, so NCSA could get a few cabinets of GPUs to play with, integrated into a more traditional x86 super. The fact that NSF is already familiar with the machine could make this less risky.

However, this machine is not run by NSF; it's run by NCSA, who have no recent experience with Crays. Mostly they've been running whitebox clusters. They had SGI gear half a decade ago, but nothing on the scale of what we're talking about here. I'd rule out SGI Altix, because it is not built to compete on price/performance and not designed to scale this large as a single system. If SGI is in the running, it's probably an ICE cluster that would be used. If the problem with the IBM was cost, I don't think Altix is going to fix that problem.

Re:Why did IBM do this, and what next for NCSA? (1)

Bill Barth (49178) | more than 2 years ago | (#37032652)

Not that it changes your argument, but you should know that NCSA has a brand new Altix [illinois.edu].

Re:Why did IBM do this, and what next for NCSA? (1)

flaming-opus (8186) | more than 2 years ago | (#37035676)

Yes, good find. However, that sort of system speaks to the Altix's strengths: you program it like an SMP, with one coherent memory space and several hundred processor cores. This is the perfect use of an Altix. Of course, SGI would rather you use your pre/post-processing Altix next to a big ICE cluster, rather than a big IBM.

Re:Why did IBM do this, and what next for NCSA? (0)

Anonymous Coward | more than 2 years ago | (#37032800)

The flaw in your logic about Cray: they're very expensive to maintain, requiring dedicated Cray support personnel, on-site and at great cost. It's also usually very proprietary hardware.

Re:Why did IBM do this, and what next for NCSA? (1)

bridges (101722) | more than 2 years ago | (#37033324)

As opposed to inexpensive IBM maintenance contracts? All of the big HPC machines are expensive to run and maintain, and NCSA/NSF would be incredibly foolish if they haven't already budgeted for this.

I.B.M. (1)

Anonymous Coward | more than 2 years ago | (#37028460)

I've Been Mugged

NNSA and IBM Blue Gene (1)

1729 (581437) | more than 2 years ago | (#37028514)

Good for NCSA! I just wish that the NNSA had the guts to do the same with the Blue Gene/Q.

Re:NNSA and IBM Blue Gene (1)

Anonymous Coward | more than 2 years ago | (#37028848)

Absolutely. RIKEN in Japan got torn a new one when Fujitsu blew out the schedule of the "K computer" by a couple of years (thereby jacking up the price), but being the ever-trusting society Japan is, nobody made a fuss. Even talking about cancelling such a project would have been considered the height of rudeness, not to mention an admission of incompetence.

It's great to see academic institutions stand up for a change instead of just bending over and taking it.

Re:NNSA and IBM Blue Gene (2)

halfdan the black (638018) | more than 2 years ago | (#37028936)

IBM does need to drop the price of Blue Gene, BUT Blue Gene is absolutely awesome to work on (I use Intrepid). Almost all the rest of the "supercomputers" out there, like Cray, are basically just PC clusters.

Re:NNSA and IBM Blue Gene (0)

Anonymous Coward | more than 2 years ago | (#37028966)

And IBM machines are proprietary machines for the same set of applications. What does it matter whether they are commodity parts or not? Which is more cost-effective?

Re:NNSA and IBM Blue Gene (3, Interesting)

1729 (581437) | more than 2 years ago | (#37029098)

Blue Gene is absolutely awesome to work on (I use Intrepid).

Seriously? That's the first time I've heard that. What do you like about it? The buggy toolchain and CNK? The joys of (sort-of) cross-compiling? The I/O bottlenecks? The blazing-fast (for 1999) CPUs?

The only way I can see BG/P being a useful machine is if either:
1) All you need to do is run LINPACK, or
2) You're booting Linux on the compute nodes (in which case a commodity Linux cluster would probably be a lot cheaper).

Re:NNSA and IBM Blue Gene (1)

dbo42 (2433404) | more than 2 years ago | (#37030984)

What were you trying to run on there, a web server?
One of the advantages of Blue Gene is precisely that its compute nodes do not run some full-featured OS that gets in your way. As an HPC platform, the Blue Gene line is pretty much unrivaled in terms of energy efficiency and reliability.

Re:NNSA and IBM Blue Gene (0)

Anonymous Coward | more than 2 years ago | (#37032140)

If you run a stable Linux release and clock your x86 workstations back to 50%, they're pretty energy-efficient too. But those are not generally the top goals of a high-performance compute cluster; performance is a tad more important.

Re:NNSA and IBM Blue Gene (1)

erikscott (1360245) | more than 2 years ago | (#37032460)

If your code is pure MPI C or Fortran, then the BG is a decent idea. Remember, the original name of the machine was "QCDOC", or "QCD On a Chip" - if you're running QCD, it rocks. Other things, not so good.

Let's say you have a big code in Java and you want to run it on your Blue Gene. Well, you're screwed - there's no JVM for the worker nodes. Let's say you have a big code in Perl (and don't laugh - Perl is what about half of computational biology gets done in). That's a problem, because there's no OS on the nodes, so there's no way to run Perl. Couple that with the bugginess of the software, the brittleness of the hardware, and desktop-class I/O, and you have a machine that is basically just good for QCD and LINPACK.

So, yeah, running a real OS on the nodes isn't all that bad an idea. Which is probably why Slashdot reported on the port of Plan 9 for Blue Gene [slashdot.org] back in 2007. There are links to the "official" IBM site in there, which is now throwing Lotus Notes' version of a 404 - and did we expect anything else from IBM?

Re:NNSA and IBM Blue Gene (0)

Anonymous Coward | more than 2 years ago | (#37029606)

That same sentiment was popular among non-x86 workstation users in the late '90s. Where's your SparcStation now?

Where's your SparcStation now? (1)

Douglas Goodall (992917) | more than 2 years ago | (#37030206)

For forty years I dreamed passionately of having my ultimate computer at home. The Apple ][ was my first "workstation," and I invested heavily and actually had two floppy drives. Then I wanted to wire-wrap myself an 8086 multitasking computer. Then I had to have an IBM PC/AT. But I knew in my heart that there were these special "expensive" machines called "workstations" that ran on some strange OS called UNIX. I discovered the RISC philosophy and began dreaming of owning a RISC workstation. I found out about Sun Microsystems and SunOS 3.1, and began to dream of having one of those 68xxx-based Sun workstations someday.

Intel-based PCs continued evolving, and every time Intel coughed, machines sped up considerably. But each time the architecture sped up, Microsoft released another version of their OS (if you can call it that) that took most of the RAM and ate most of the cycles. So no matter how fast the Intel boxes were, Windows-based machines were not showing the performance I expected out of a "workstation."

In the mid-nineties, I took a contract to set up a demonstration of an application running on a contemporary Intel Windows box and a contemporary Sun SparcStation. I had to open the Sun box to add memory, and I was stunned by how little electronics was on the circuit board for the ten thousand dollars they wanted for this "workstation." I just couldn't mortgage the house to buy something with such trivial hardware. Besides, I just hated the look and feel of OpenLook. I worked on contract at Autodesk briefly and was exposed to a number of contemporary workstations: HP, SGI, MIPS... But as cool as the X Window System was, it still seemed somewhat raw, even with Motif.

At the commodity level, I began to see computers with multiple CPUs, and operating system support in Windows NT and 386BSD as well as Linux. Then came the day I heard about Apple bringing out a new operating system based on the Mach kernel, with 386BSD on top, and their GUI layer on top of that. I was intrigued, and it didn't take me long to realize I was getting old and grey trying to compute with Microsoft software.

Eventually I invested in the workstation I had been waiting for all those years. I bought a Mac Pro 8-core 3.0GHz 16GB-RAM machine, and almost four years later it is still kicking ass; I haven't seriously considered the need to upgrade to a newer Mac Pro, as my current one still has computing capacity to spare and plenty of memory for what I do. Sure, I paid a little more for an Apple-branded Intel box, but almost four years later, processors are not significantly faster (clock-rate wise). The newer processors are said to be more efficient internally, but as I said, I haven't found the need. My entire suite of software compiles in 58 milliseconds. What more can I say?

So it never turned out to be a Sparc, or some HP CPU, or an IBM Power. I fell in love with an x86 workstation. To me, a supercomputer. To me, a cluster (8 cores).

Re:NNSA and IBM Blue Gene (0)

Anonymous Coward | more than 2 years ago | (#37034636)

Almost all the rest of the rest of the "supercomputers" out there like Cray are basically just PC clusters.

Cray's XT/XE line uses x86 processors, but everything else about them is almost completely custom, both hardware and software. For people who are looking for peak application performance at this kind of scale, the processor turns out to be one of the least important components.

Indeed, it might have been the DARPA-sponsored fully custom network for Blue Waters that sank IBM. Last year they made a business commitment to only pursue HPC projects that turn a profit [hpcwire.com] (not just revenue), and this appears to be the first major casualty of that decision.

Cue the PERL / Beowulf cluster posts! (2)

FlyingGuy (989135) | more than 2 years ago | (#37028620)

As in I could do what they do with a few lines of PERL and a Beowulf Cluster!!

Re:Cue the PERL / Beowulf cluster posts! (2)

bigtrike (904535) | more than 2 years ago | (#37028636)

How many gigabytes long would each line of perl be?

Ever heard the saying: (1)

Anonymous Coward | more than 2 years ago | (#37028764)

Go away or I will replace you with a very small shell script!

I think 0.0000001 GB (about 100 bytes) would do it.

Re:Cue the PERL / Beowulf cluster posts! (0)

Anonymous Coward | more than 2 years ago | (#37030910)

If you had seen the performance specs for Blue Waters, you would know you can't.

Blue Waters is as much about massive I/O performance as anything else, so any substitute is going to have serious problems providing an equivalent.

Penalties (0)

Anonymous Coward | more than 2 years ago | (#37028728)

My guess is IBM looked at what they could reliably deliver on time and get accepted, and decided that the penalties on a $200 million order were going to cost them more than they bargained for...

Not really shocking news. (3, Interesting)

Zero1za (325740) | more than 2 years ago | (#37028760)

'was more complex and required significantly increased financial and technical support by IBM beyond its original expectations.'

Sounds about normal for an IBM gig then...

Job application (2)

wirelesslayers (2014486) | more than 2 years ago | (#37028780)

Now I know why they canceled the job exams (C, Perl and Linux Admin) I was about to do this week for a position at IBM-LTC. =(

Mosaic and Netscape redux (1)

Anonymous Coward | more than 2 years ago | (#37028840)

I hope we'll have a thread here rehashing how the Mosaic browser was developed at NCSA in the early '90s by a group of grad students informally led by Marc Andreessen, and how the university sued after Andreessen and most of the original team took off for Silicon Valley to form Netscape.

Re:Mosaic and Netscape redux (1)

lucm (889690) | more than 2 years ago | (#37029112)

Netscape was a crime against the internet, and especially against the web developers of the late '90s and early 2000s. If you ever had to design a form in Netscape 4.7, you know what I mean - having textboxes that can only be sized in characters is significantly painful. And I won't even talk about layers, because my blood pressure is already getting too high.

Re:Mosaic and Netscape redux (1)

Bing Tsher E (943915) | more than 2 years ago | (#37029284)

Netscape the company was a crime against the Internet. Their aim was to introduce proprietary tags into Navigator and serve up those proprietary tags with their server technology. They were a genuine threat to Microsoft. That doesn't absolve Microsoft of crushing them, but it explains it. And things wouldn't automatically be 'better' if Netscape had won 'the browser war.' We wouldn't have Mozilla in its present state. And I would really miss my SeaMonkey.

Re:Mosaic and Netscape redux (1)

TWX (665546) | more than 2 years ago | (#37030092)

Heh. I downloaded and installed NCSA Mosaic about twenty minutes ago, and unfortunately it no longer appears to work on Windows 7. I don't know if there's something missing in the TCP/IP stack, something in the Windows Socket Services implementation, or what, but it crashes on trying to load URLs. And yes, I did add the "http://" to the front of the URL like you used to have to do.

Re:Mosaic and Netscape redux (1)

petermgreen (876956) | more than 2 years ago | (#37031302)

IIRC, when I tried it on XP it ran OK, but you couldn't get very far on the modern web because it doesn't work with servers that use name-based virtual hosting.
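For anyone wondering why: name-based virtual hosting dispatches on the HTTP Host header, which postdates Mosaic, so a server hosting many sites on one IP can't tell which site an old browser wants. A rough sketch of the difference (the hostname and dispatch logic are placeholders, not any real server's code):

```python
# A pre-Host-header request, roughly what an early browser sends.
# The server sees only the path; with many sites on one IP, it
# cannot tell which one is meant, so it serves a default (or errors).
old_request = "GET /index.html HTTP/1.0\r\n\r\n"

# A request from any later client: the Host header names the site,
# which is exactly what name-based virtual hosting dispatches on.
new_request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.org\r\n"
    "Connection: close\r\n\r\n"
)

def vhost_for(request, default="fallback-site"):
    # Minimal dispatch: find a Host header, else fall back.
    for line in request.split("\r\n"):
        if line.lower().startswith("host:"):
            return line.split(":", 1)[1].strip()
    return default

print(vhost_for(old_request))  # fallback-site
print(vhost_for(new_request))  # www.example.org
```

So Mosaic isn't broken per se; the web simply moved to a convention it never knew about.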

What CPU? (1)

Arakageeta (671142) | more than 2 years ago | (#37029082)

It took a custom CPU to knock out the Tianhe (GPU-based) supercomputer. Did IBM plan to use an existing POWER chip, or were they trying to develop a new Cell-like (or other boutique) processor? IBM keeps saying that the future of Cell isn't dead. I wonder if NCSA thought they'd get more bang for their buck with a GPU-based solution?

Re:What CPU? (0)

Anonymous Coward | more than 2 years ago | (#37029352)

It was based on their POWER7 CPUs, I think, same as Watson.

Typical (3, Interesting)

lucm (889690) | more than 2 years ago | (#37029096)

My experience with IBM is that every new software or equipment setup is painful, complicated and goes over-budget, but once things are up and running, it is rock-solid, so in the long run it is still the vendor I would trust the most for enterprise projects. Knowing them, I always take into account the extra oil and time that will be needed to make things go smoothly at first.

This is very different from a vendor like Dell, who takes good care of its new customers (especially the ones with deep pockets) and makes sure that the delivery is on time and on budget, but after a while problems start to appear (wrong firmware, obsolete drivers, etc.), and pretty soon they tend to ignore you if they feel you won't bring new business in the next quarter.

In this case with the NCSA thing, it's a typical situation where budgets have no room for the fudge factor because the organization has a price-driven selection process, which is wrong.

Re:Typical (0)

Anonymous Coward | more than 2 years ago | (#37029242)

As an IBM employee, I completely agree with the first paragraph :-)

Re:Typical (1)

bill_mcgonigle (4333) | more than 2 years ago | (#37031882)

In this case with the NCSA thing, it's a typical situation where budgets have no room for the fudge factor because the organization has a price-driven selection process, which is wrong.

As in they don't have an infinite slush fund to tap into? That would be most organizations.

You'd think by now IBM would know how to develop a specification, price it, and honor the contract price. I have to and I've only been in business 7 years. Yeah, once in a while I take a haircut, but that's called honoring your contracts.

Re:Typical (1)

lucm (889690) | more than 2 years ago | (#37032206)

A price-driven selection is an incentive for bidders to lowball, and that only leads to nightmares for both parties. It's a silly practice rooted in obsolete purchasing habits (such as requiring three quotes for any major purchase, which over the long run drives off the vendors who usually don't win; those vendors could be a very good match in a specific situation, but after a while they won't even bother trying to win the business because they know that most of the time they are contacted just to fill the quota).

This is why more and more organizations run a selection process where the price is sealed at first, so they can identify which bids match the requirements and score them accordingly. Then, when the prices are revealed, a simple dollars-per-point formula shows which bids are off, and the ratio helps the selection committee justify not taking the lowest bid if they feel it does not offer good value. In such a situation, IBM will shine because they can offer a lot of value without cutting prices like a flea-market operator.
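The dollars-per-point formula the parent describes is simple arithmetic; a toy illustration (vendor names and figures are invented):

```python
def rank_bids(bids):
    """Rank sealed-price bids by cost per technical point (lower is better).

    `bids` is a list of (name, price, technical_score) tuples, where the
    technical scores were assigned before the sealed prices were opened.
    """
    return sorted(((name, price / score) for name, price, score in bids),
                  key=lambda bid: bid[1])

bids = [("Vendor A", 900_000, 60),    # cheap, mediocre technical score
        ("Vendor B", 1_200_000, 95)]  # pricier, but scores much higher
ranked = rank_bids(bids)
# Vendor B wins on value: roughly $12,632 per point vs. $15,000 for Vendor A.
```

This is the ratio that lets a committee defend passing over the lowest bid: the more expensive proposal can still be the cheaper one per unit of value delivered.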

Re:Typical (1)

bill_mcgonigle (4333) | more than 2 years ago | (#37032368)

That sounds like a wise way to score proposals, but it still sounds like one of the following is true:

1) the spec was insufficient
2) IBM isn't honoring its contract

I'm assuming here that IBM's contract said they'd complete the spec for a fixed price.

Somehow these government contracts seem to allow for a fixed price bid that doesn't actually work, and then more money appears out of nowhere to make the contractor happy.

Re:Typical (1)

Bill Barth (49178) | more than 2 years ago | (#37032530)

It appears to be the latter. The spec is available here [nsf.gov]. NCSA negotiated a system with IBM, proposed it to NSF under the above-linked RFP, went through a peer-reviewed awards process, negotiated an award with NSF, and started working on the delivery and other aspects with IBM and NCSA's other partners. Something went wrong in the last several months, and IBM's pull-out was the result. I doubt that there is any more money to be found, and all parties knew what was asked of them in order for the project to be successful.

Re:Typical (0)

Anonymous Coward | more than 2 years ago | (#37032616)

Given the actions in Washington during the last few weeks, there may be a third option:

3) The funds for future years are no longer available.

While they may have the money to pay for IBM to do the initial setup, their budget might have gotten cut (along with Medicare, etc.), and they may no longer be able to keep up with the terms of the contract in the future.

Just a thought, keeping the broader picture in mind.

Re:Typical (1)

rgviza (1303161) | more than 2 years ago | (#37032252)

To be fair, Dell's driver support is limited by what its component vendors provide. You can reasonably expect your hardware to be supported until the next version of Windows is released; at that point, if the drivers aren't compatible with the new version, you will be upgrading your hardware.

Pretty much an x86/x86_64 given.

Hardware companies don't make any money maintaining drivers for four-year-old hardware they will never see revenue from again. Their margins are so thin there's no way they could afford to.

Re:Typical (1)

lucm (889690) | more than 2 years ago | (#37036240)

> To be fair, Dell is limited with driver support by what their vendors provide.

When you have equipment installed and "certified" by Dell, you don't expect them to use obsolete drivers while there are three or four more recent versions on their own website. This happened to me twice, and almost a third time, but by then I knew the drill: when the setup was completed I asked for a complete driver inventory and compared it against the available versions myself. Thankfully I caught them before they left. If this were for a video card or a USB port it would not be so bad, but for a storage adapter it is another ballgame.

This does not happen with IBM. Before they certify a setup they do extensive tests and validation. What sucks is that when you have a problem with new equipment at IBM you end up in the support queue with everyone else, while with Dell having a technician on-site usually speeds things up.

Resale (0)

Anonymous Coward | more than 2 years ago | (#37029232)

Personally, I'm hoping IBM will try to cut their losses and part out the system on ebay. I wouldn't mind a few "lightly used" compute nodes. Or hell, gimme one of those storage subsystem cabinets.

Re:Resale (0)

Anonymous Coward | more than 2 years ago | (#37030442)

Personally, I'm hoping IBM will try to cut their losses and part out the system on ebay. I wouldn't mind a few "lightly used" compute nodes. Or hell, gimme one of those storage subsystem cabinets.

Having seen the sweet-looking internals of a compute node at the open house, I will tell you that these are very compact, water-cooled, trays. A little DIY and you're set.

hehehe (0)

Anonymous Coward | more than 2 years ago | (#37029260)

Rent it out to one of the Chinese government outfits that steal Western tech, then watch IBM realize they have created their own nightmare.

The Clash of the Humans [was] NCSA and IBM Part Wa (0)

Anonymous Coward | more than 2 years ago | (#37029646)

Seems some "Testosterone Battles" are afoot.

Difficult to judge at this point ... no blood on the floor ... no teeth on the floor ... no broken bodies on the floor.

We must wait. An opening might appear ... and we kill them both! ... and enjoy the spoils.

--//++
