
Oracle To Halve Core Count In Next Sparc Processor

timothy posted more than 3 years ago | from the reverse-course-full-steam dept.


angry tapir writes "Oracle will halve the number of cores in its next Sparc processor and instead improve its single-thread performance, a weak area for the chip but one that's important for running large databases and back-end applications. The next Sparc chip on Oracle's roadmap, the T4, will have eight cores on each chip, down from 16 in the current Sparc T3."


ORE (1)

Anonymous Coward | more than 3 years ago | (#34481570)

Oracle Ruins Everything

Re:ORE (-1)

Anonymous Coward | more than 3 years ago | (#34481918)

If only you had used the word NIGGER you'd have been modded down to -1 faster. There's a lot of white guilt out there and mods will fall all over themselves to prove how incredibly, really really not-racist they are. Course the real not-racist would understand it's just a word that we need to quit makin such a big deal about but childish people are not capable of understanding that. They need their wordy-dirties so they can have a clear orthodoxy to follow.

and? (1, Interesting)

Pharmboy (216950) | more than 3 years ago | (#34481592)

Not trying to be a smartass, but does it really even matter? Hasn't almost everyone already decided to move away from Sun/Oracle, excepting those with a tremendous investment in that area? Can their sales really do anything except go down on the hardware side? And reducing the number of cores can't help, as "cores" is now the buzzword, just like megahertz was back in the Pentium days. Even AMD had to fudge the model names back then to get people to buy the processors, which admittedly were faster per MHz than Intel, but customers looked at raw numbers. I would think that cores would be the same, even with a more sophisticated buyer.

Re:and? (4, Insightful)

MightyMartian (840721) | more than 3 years ago | (#34481622)

I don't think losing some grumpy OpenOffice and OpenSolaris users qualifies as "everyone has already decided to move away from Oracle". Java will be used for a long time to come and has big-time penetration in the enterprise world, as do Oracle's database offerings. And while I agree that "cores" is a buzzword, I'm not so sure at that level it's all down to the quality of marketing. We're talking very big customers who in a lot of cases have very specific needs, and tailoring your hardware to fit the market you're serving isn't a dumb thing to do.

Re:and? (4, Interesting)

Pharmboy (216950) | more than 3 years ago | (#34481748)

At the very least, Oracle has introduced a great deal of uncertainty into Sun products, so you have to ask "What does Sun hardware offer that other hardware doesn't?". With all the bad press, they have an uphill battle converting people to Sun from other platforms, and for those who have a choice, what *exactly* is the big benefit that can't be purchased from someone else for less? Obviously they will sell some product (and yes, there are obviously some benefits for some customers, but not all), but I don't see how they are going to grow any significant new market share. There are a lot of options out there, and it isn't that expensive to throw a lot of cores at a problem. Any purchaser has to be wary and consider other options with a more open mind.

The problem is that Oracle is *perceived* to not be that concerned about the Sparc platform. If the public (or at least the people making the buying decisions) thinks that Oracle will just be phasing it out or letting it die on the vine, it doesn't matter whether that is true or not. I just think Oracle has done a terrible PR job during the whole Sun transition and it will bite them in the ass over the next few years. They certainly haven't made ANY new friends.

Re:and? (1)

guruevi (827432) | more than 3 years ago | (#34482190)

Sun's Sparc processors have a lot of cores, which are great for large numbers of concurrent connections to an (open source) database, file, or web server (as most of the open source designs spawn processes, each handling a limited number of connections).

I think Oracle is trying to compete with Sparc processors in an area Sparc processors were never designed for -- low-end server systems.

Sparc is great with a well-designed system and application underneath it, and on those specific applications it will beat the crap out of a 48U rack of x86 machines in only 6U worth of space. The initial cost, however, is heavy, with the cheapest machinery coming in at ~$10k+ and easily going into $100k+ for a full set.

Re:and? (1)

whoever57 (658626) | more than 3 years ago | (#34482268)

so you have to ask "What does Sun hardware offer that other hardware doesn't?"

More cores. Oh, wait....

Re:and? (1)

carnalforge (1207648) | more than 3 years ago | (#34482290)

Speaking as someone who worked indirectly for Sun for years, I feel sad. A lot of professional people with fucked-up management. After the Oracle deal, the news was (as far as I know) that Solaris 10 and other software products like Sun's directory server and Sun Cluster, which before were paid for only through support contracts usually included in the hardware price, will have to be licensed for a fee from 2011/01/01.
And a lot of hardware lines are getting cut off; see the Sun Fire series. Pretty nice machines. Ah, and for now nothing new about their storage products.
For about a year now I've been noticing that when customers have to upgrade, they move to VMware with Red Hat.
Is it only me?

Re:and? (1)

dogsbreath (730413) | more than 3 years ago | (#34482584)

Solaris is a great o/s. Sun went from one weird CEO to another with no hope for redemption.

When Adrian Cockcroft jumped to eBay, you knew there was trouble in the henhouse. A high-tech company that undermined its core technology brain trust.

Sad sad sad.

Re:and? (1)

carnalforge (1207648) | more than 3 years ago | (#34483006)

Solaris is a great OS and you're right on the CEO part.
Too bad they didn't GPL their OS before getting sold.
I'll miss their HW/SW stack. Sad about that.

Though Linux is getting comfortable now.

crypto (4, Informative)

Anonymous Coward | more than 3 years ago | (#34482364)

At the very least, Oracle has introduced a great deal of uncertainty into Sun products, so you have to ask "What does Sun hardware offer that other hardware doesn't?". With all the bad press, they have an uphill battle converting people to Sun from other platforms, and for those who have a choice, what *exactly* is the big benefit that can't be purchased from someone else for less?

Do you care about crypto at all? If so, the T-series CPUs have on-die MD5, SHA-1, SHA-2 family, DES, 3DES, AES (multiple modes of operation), RC4, RSA (keys up to 2048 bits), and ECC acceleration, as well as an RNG. The T3s can do almost 80 Kop/sec of RSA 1024. All you have to do is link against the Solaris-provided OpenSSL library and call the appropriate "engine" APIs to activate things (this is built in to a lot of FLOSS software already, e.g. Apache).
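For the curious, the engine activation boils down to a few calls against OpenSSL's ENGINE API. A minimal C sketch, assuming the "pkcs11" engine id that the Solaris-bundled OpenSSL uses to reach the crypto framework (verify with "openssl engine" on your box); error handling is kept to a minimum:

<ecode>
#include <stdio.h>
#include <openssl/engine.h>

int main(void)
{
    ENGINE *e;

    /* Load the engine implementations this OpenSSL build knows about. */
    ENGINE_load_builtin_engines();

    /* "pkcs11" is assumed here; it is the id the Solaris-shipped
     * OpenSSL uses to reach the hardware crypto framework. */
    e = ENGINE_by_id("pkcs11");
    if (e == NULL) {
        fprintf(stderr, "pkcs11 engine not available\n");
        return 1;
    }

    if (!ENGINE_init(e)) {
        fprintf(stderr, "failed to initialise engine\n");
        ENGINE_free(e);
        return 1;
    }

    /* Route every algorithm the engine supports (AES, RSA, digests,
     * and so on) through it for the rest of the process. */
    if (!ENGINE_set_default(e, ENGINE_METHOD_ALL))
        fprintf(stderr, "could not set default engine\n");

    /* ... normal EVP and RSA calls now go through the crypto units ... */

    ENGINE_finish(e);
    ENGINE_free(e);
    return 0;
}
</ecode>

If memory serves, this is the same switch that knobs like Apache mod_ssl's SSLCryptoDevice directive flip for you, which is why so much FLOSS software picks the hardware up without code changes.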

The T5220 (T2 processor; the T3 just came out) has been benchmarked at 44 Gb/s of AES128, and that's on the crypto co-processors, so the "real" processors are free to do "actual" work, like serving HTTP requests. At the same time, the T2 can also do 38 Kop/sec of RSA 1024. When that benchmark was published, a quad-core 3 GHz Xeon could do about 8 Gb/s of AES128 and 9 Kop/s of RSA 1024 signing, with little to nothing left over to do anything else.

So you ask, "what can these systems do?" Well, how about: instead of paying for a bunch layer of load balancers to do SSL and RSA, and a whole bunch more machines to do actual web requests, why not just buy a lot fewer T2s (now T3s), and save power, cooling, and rack space?

The T-series is not good at everything, but for the multi-threaded, multi-client workloads it was designed for, it works very well.

Re:and? (2)

PORNorART (1949708) | more than 3 years ago | (#34482462)

"The problem is that Oracle is "perceived to not be that concerned about the Sparc platform"

I don't know where people get that impression. Hasn't Oracle always been saying that their customers use Solaris/SPARC more than any other platform to deploy Oracle products on? This move makes sense in that regard as fewer faster cores are better to run Oracle's database on.

Oracle bought Sun to be able to deliver an end-to-end solution to its customers and extract more revenue from them. A recent interview with McNealy indicated that Sun's lack of a DB solution allowed Oracle to get more revenue from Sun customers that Sun was hoping to retain. Combining two companies that were already selling to the same customers reduces overhead and should increase profits.

I just hope that Oracle doesn't try to limit their product range to only appeal to their base customers, and instead tries to expand that base. Though most of Oracle's customers will need both fast cores for the back-end DBs and multi-threaded, multi-core systems for the front-end application/web servers.

Re:and? (1)

jedidiah (1196) | more than 3 years ago | (#34481752)

It's not that everyone has moved away from Oracle. They've moved away from "Sun/Oracle".

You left a very important bit out.

Even Oracle moved away from "Sun/Oracle".

So this isn't just about disgruntled Star Office users.

It's also about Oracle's core paying customers.

Re:and? (1)

Lord Byron II (671689) | more than 3 years ago | (#34482032)

I think the OP was referring to Oracle's hardware offerings. Yes, Java, MySQL, and OpenOffice will be around for a long time.

Re:and? (3, Interesting)

dogsbreath (730413) | more than 3 years ago | (#34482434)

I don't think losing some grumpy OpenOffice and OpenSolaris users qualifies as "everyone has already decided to move away from Oracle".

The original statement was "Sun/Oracle" not "Oracle" and was referencing h/w sales.

Four years ago, we (a network/connectivity company) spent over $50 million annually on Sun servers (h/w only; support was on top of that). That is now almost zero. We still buy lots of servers, but they are almost all x86 blades. Sun h/w just can't compete in any of the important aspects that affect h/w purchase decisions (performance, power consumption, stability, reliability, capital cost, support cost, TCO, lifetime cost, transition costs). Java is a non-issue and has nothing to do with server purchasing decisions. I know we are not alone in dropping Sun as a vendor.

Note that we were a dyed-in-the-wool Sun/Solaris shop with a terrific core of dedicated Sun/Solaris admins. The nice thing about all that expertise is that, technically speaking, they had little trouble transferring their skill sets to other h/w and o/s platforms. Hardware and o/s vendors were happy to provide transition training. The cost of transition was a blip in our annual spend. Almost no one wants to go back, even though Solaris is a superior o/s in many ways (I/O performance, network stack, scheduler, SMP).

It will be interesting to see what Oracle reports on Sun h/w sales.

Re:and? (1)

Kjella (173770) | more than 3 years ago | (#34481724)

If everyone did as slashdot wanted, you wouldn't see much of .NET around either, but somehow I doubt slashdot commends the IT industry. The reality is that all the biggest software houses (Microsoft, IBM, Oracle, SAP, CA, etc.) are an oligopoly; sure, you may shuffle the users around a little as they move from one uncooperative, money-hungry giant to the other, but they don't leave. While PostgreSQL might be an okay alternative to just SQL Server or Oracle the database, they just don't deliver the whole range of tools and services. I know Oracle is now everyone's favorite hate object as they kill off the open source projects, but I doubt they're going away any time soon.

Re:and? (1, Insightful)

causality (777677) | more than 3 years ago | (#34481958)

The reality is that all the biggest software houses (Microsoft, IBM, Oracle, SAP, CA, etc.) are an oligopoly; sure, you may shuffle the users around a little as they move from one uncooperative, money-hungry giant to the other, but they don't leave.

That's because the industry is financially dominated by clueless customers who purchase what they do not really understand and do not wish to learn. This is true both in the case of corporate management (the techies don't usually make the purchasing decisions) and in the case of the "average consumer".

but somehow I doubt slashdot commends the IT industry.

Why not? It could use a little praise from time to time...

Re:and? (1)

MyLongNickName (822545) | more than 3 years ago | (#34482448)

If I moved to MySQL, what tool do they have to replace SQL Profiler?

Re:and? (0)

Anonymous Coward | more than 3 years ago | (#34482544)

The usual answer to that is as follows: now that you aren't paying for Oracle licenses, you can afford to throw hardware at all your performance problems and still save money.

Of course, in practice you will still be paying for an expensive support contract, because big organisations literally can't use anything for free (which is on balance a good thing; e.g. it's why Red Hat is able to stay in business).

Re:and? (1)

DiegoBravo (324012) | more than 3 years ago | (#34483012)

Ok, just googled "mysql profile" and got:

http://dev.mysql.com/tech-resources/articles/using-new-query-profiler.html [mysql.com]

Re:and? (0)

MyLongNickName (822545) | more than 3 years ago | (#34483042)

That tool is nothing like SQL Profiler. I have used it and the query profiler is sorely lacking and does not (as far as I could find) give you the ability to see queries hitting your db in real time.

Have you ever used SQL Profiler?

Re:and? (2)

MyLongNickName (822545) | more than 3 years ago | (#34483088)

Actually, this isn't as far off as I first thought. It lacks a lot of the bells and whistles I am used to... I don't see filtering capabilities and I don't see real time monitoring, but it is a lot better than what I was using a couple years ago. I might have to give this another look.

Thanks.
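For anyone else poking at this, the server-side piece is just MySQL's per-session profiler (SET profiling / SHOW PROFILES), which you can drive from any client. A rough C sketch against libmysqlclient; the connection details and the sample query are placeholders, not anything from a real setup:

<ecode>
#include <stdio.h>
#include <mysql.h>

int main(void)
{
    MYSQL *conn = mysql_init(NULL);
    MYSQL_RES *res;
    MYSQL_ROW row;

    /* Placeholder credentials/database -- substitute your own. */
    if (!mysql_real_connect(conn, "localhost", "user", "password",
                            "testdb", 0, NULL, 0)) {
        fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
        return 1;
    }

    /* Turn on per-session profiling, then run the statement to inspect. */
    mysql_query(conn, "SET profiling = 1");
    mysql_query(conn, "SELECT COUNT(*) FROM orders");  /* placeholder query */

    /* SHOW PROFILES lists recent statements with their durations;
     * SHOW PROFILE FOR QUERY <n> breaks one down by execution stage. */
    if (mysql_query(conn, "SHOW PROFILES") == 0 &&
        (res = mysql_store_result(conn)) != NULL) {
        while ((row = mysql_fetch_row(res)) != NULL)
            printf("query %s: %s s  %s\n", row[0], row[1], row[2]);
        mysql_free_result(res);
    }

    mysql_close(conn);
    return 0;
}
</ecode>

It still only shows your own session, so it's not the fire-hose view of every query hitting the server that SQL Profiler gives you; the closest equivalents on that front are the general and slow query logs.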

Re:and? (1)

causality (777677) | more than 3 years ago | (#34483096)

If I moved to MySQL, what tool do they have to replace SQL Profiler?

I believe you missed my point. The bigger question is: why should it be difficult to find a truly good replacement when there is such demand for this kind of useful tool?

Why, it's almost as though a few major players have extreme dominance of this market and can get away with that because many of their customers are not tech-savvy.

That's what my previous post was addressing.

Re:and? (1)

Haeleth (414428) | more than 3 years ago | (#34482560)

While PostgreSQL might be an okay alternative to just SQL Server or Oracle the database, they just don't deliver the whole range of tools and services.

And there's one thing in particular they don't deliver: cost.

Sure, if you switch to open source you might save your company a million dollars, but you can only do that once. Stick with Oracle and you can negotiate a million-dollar enterprise database contract every year! Much more impressive.

Sadly I am not entirely sure I'm joking.

Re:and? (0)

Anonymous Coward | more than 3 years ago | (#34481754)

There are two different markets: one that prefers fewer cores (i.e. fewer threads) but very good single-threaded performance, and one that prefers more cores (more threads) and does not care much about single-threaded performance. "Cores" is not *the* buzzword that you make it out to be. There are two different markets and hence two different products, although both of them share the design. That is, it's essentially the same Sparc core, with modifications to the SoC to get two different products.

Re:and? (5, Informative)

Doc Ruby (173196) | more than 3 years ago | (#34481774)

No. Nobody's moving away from Oracle - that rhetorical question doesn't make you sound like a smartass, but rather its less intelligent opposite.

What matters to Oracle's customers who buy Sun hardware is that their databases run as fast as possible, as that's the limiting factor on those customers' businesses. That's why Oracle bought Sun: to compete with IBM, which runs DB2 on IBM CPUs at the high end, the HW and SW tweaked to work best together for that operation.

Reducing the number of cores isn't designed to help. It's designed to leave those transistors on the CPU available for making Oracle DBs run as fast as possible in the few simultaneous threads that Oracle needs for DB performance.

Oracle is not selling CPUs to the mass market that can't tell the difference among products, mostly because they don't have a benchmark that describes their use profile specifically. Oracle is selling to customers who pitch $:TPM to their bosses. And the $:TPM buzzword is not only not going out of style, it's what continues to drive $ to Oracle.

Re:and? (0)

Anonymous Coward | more than 3 years ago | (#34482112)

Other than big installs, which means "already heavily invested in", Sun boxen have been trounced by generic Intel boxen. When Sun machines are due to be replaced, they're not replaced with like. Oracle runs on Linux; they're heavily involved in the kernel and sell their own distribution. Why? Sun boxes are dead in the long term, just like IBM's mainframes and i-Series. Over time, the applications are redone on much faster and significantly cheaper hardware that's not tied to massive licenses.

Re:and? (2)

Doc Ruby (173196) | more than 3 years ago | (#34482586)

And that's why IBM is raking in ever more $BILLIONS in mainframe sales.

You and the post you're defending are like a press release from 1989.

Re:and? (0)

Anonymous Coward | more than 3 years ago | (#34482816)

Why do so many retarden like to use "-en" to pluralise their worden? "-(e)s" all the way!

Boxes boxes boxes.

Re:and? (1)

bill_mcgonigle (4333) | more than 3 years ago | (#34482710)

Gosh, somebody who really gets it. Sorry, no mod points today.

Re:and? (1)

LWATCDR (28044) | more than 3 years ago | (#34481786)

They are selling to people that make choices based on hard numbers and not buzzwords. Hopefully, that is.
There is still a market for big iron like IBM's Power and zSystems as well as Sparc.

Re:and? (1)

Pharmboy (216950) | more than 3 years ago | (#34482240)

I understand what you are saying, but the question is why would someone invest in Sparc unless they are already committed to the platform? It would seem that IBM would be a safer choice, and very possibly more cost effective, if not clusters of more generic boxen. Google has the largest platform that I am aware of, and they have shown you can do it with clusters of otherwise commodity grade equipment. I don't think Facebook is running on Sparc either. Obviously enough are to keep them stamping out chips, but they weren't gaining market share even before Oracle bought them out.

Again, those invested heavily in Sparc will continue for at least the short term, but that has to be a tough sell to a new startup building a system, when there are ample choices, all of which run the same core OS, Linux.

Re:and? (1)

afidel (530433) | more than 3 years ago | (#34482678)

Because DB2 is less universally supported by LOB apps than Oracle. You are right that Linux on x86 has been eating a big part of all the traditional Unix vendors' lunch. It might not be good enough for the top 1% of shops, but for the other 99% that used to make up a large market for those vendors, they've priced themselves out of the market, and their corporate culture is a turnoff to a lot of folks. Heck, even 4 years ago when we were looking at platforms for our ERP system, they weren't really competitive (IBM was almost 3x more expensive than the next most expensive system when we asked for a box with more than a couple of PCI-X slots), and Sun's 5-year cost was almost 2x that of the HP AMD system, due to support contracts costing over 20% of purchase price per year vs 15% total for the HP open solution. I have no reason to think that Sun solutions have gotten any cheaper under Oracle.

Re:and? (0)

Anonymous Coward | more than 3 years ago | (#34481870)

Look kid, there's two things I love: Sparc and shaved pussy. So go back to playing with your pud.

Re:and? (1)

Tenser234 (973242) | more than 3 years ago | (#34481910)

The US Government is actually investing more in Oracle now than before. It's become a "one-stop shop": servers, Linux, and DB all in one place.

Re:and? (4, Interesting)

dogsbreath (730413) | more than 3 years ago | (#34482146)

Not trying to be a smartass, but does it really even matter? Hasn't almost everyone already decided to move away from Sun/Oracle, excepting those with a tremendous investment in that area?

I agree. That boat sailed about two years ago for us and we were a major Sun shop ( > 10,000 servers four years ago ). We are now almost exclusively VMware on Intel blades, mostly from IBM, or IBM P systems with IBM o/s. Vendors that were Solaris have moved to Linux. We briefly considered x86 Solaris but there was too much uncertainty with the on-again/off-again support for that platform.

Oracle DB is still at the core of our internal corporate computing because of an excellent licensing deal but we use alternatives for consumer facing services.

IMHO, the Sparc64 is hellishly expensive for the performance provided and the iron in the rack is heavy and power hungry. Nobody likes the M series servers. We don't like buying it, we don't like racking it, and we don't like what it does to our data center power distribution configuration.

The T series are not badly priced and are excellent low power consumption web servers but suck at anything that is single threaded. Almost all application software is effectively single threaded: either there is an explicit single execution path or the app has attempted threading but the threads depend on a core path that is single threaded. Usually I can get a brand name Intel multicore box that provides 4x the execution performance at a lower cost, ... and with 3 yr onsite h/w service thrown in.

Everything about Sun h/w is out of sync with what customers want.

Oracle is almost clueless when it comes to hardware sales and development. Try "www.sun.com"... you get a redirect to the Oracle home page and then you have to search for a link to the server product lineup. It's almost as if they are hiding the fact that they have a hardware product to sell. I don't think the Oracle brain trust knows what to do with Sun h/w and the Solaris o/s.

Oracle is a software shop built around a single core product. That's their whole corporate culture, and they don't really do other things well. What were they thinking when they bought Sun's h/w division? Possibly they could have just bought the rights to Solaris, developed it for x86 h/w, and made something of it. An argument could be made for the similarity between db and o/s development. But h/w? It's a black hole for Oracle. SPARC is dead. Write it off.

Now if IBM had bought Sun and turned their R&D folk loose... there would have been hope for Solaris. Too bad so sad.

Re:and? (2, Informative)

Anonymous Coward | more than 3 years ago | (#34482552)

When I was on the job hunt, I saw exactly this. People took two paths:

1: An exodus from SPARC hardware to x86 servers or blades, and a software exodus from Solaris to RedHat Linux or even Windows.

2: A retooling and a move to IBM POWER6/POWER7 hardware. This hardware has VM support built in from the hardware up. In fact, dedicating a whole hardware box to one machine is passé, as opposed to having two VIO servers and an LPAR. (LPARs reboot extremely fast, in under a minute, because they don't have to configure real hardware devices, while a hardware IPL can take half an hour.) Oracle works decently in this environment, and DB2 can work with some SQL+ commands.

What happened to the Sun that was awe-inspiring in colleges? Sun made a lot of groundbreaking items, from NFS, to NIS (not used these days, but a directory server is better than none), ZFS, zones, LDoms, etc. Now, it seems that Sun isn't a torchbearer when it comes to enterprise innovation, but just trying to market stuff.

Software that allows machines to share RAM so box "A" can fetch something in box "B"'s RAM? That's pointless. Sun's enterprise solutions are starting to just not be competitive compared to what IBM can do at the midrange (Power Systems) or high end (zSeries), and what Intel/AMD can do at the low end.

Re:and? (1)

dogsbreath (730413) | more than 3 years ago | (#34482616)

Yup and Yup. You got it exactly. That's what we did and that's what our competition has done.

Spot on.

Re:and? (1)

afidel (530433) | more than 3 years ago | (#34482850)

Heck, look at what open systems can do: an HP DL980 can scale to the same CPU performance and half the RAM of the M9000, and a fully decked-out DL980 (64 cores, 128 threads, and 2TB of RAM, with 4x 8Gb FC ports, 2x QDR InfiniBand, and 4x 10GbE ports) costs $233k with 5 years of 6-hour call-to-repair support, where the M9000 starts at twice that for a very underpowered config and doesn't include the ~20% per year maintenance.

Re:and? (1)

dogsbreath (730413) | more than 3 years ago | (#34483082)

M servers are ugly no matter how you look at them. We had a network h/w vendor whose management app only runs on Solaris and is only certified for the M series. We bought the freakin' h/w but everyone hates it. The worst thing is if you need support you get an idiot from a contracted office IT support company (name left out to protect the incompetent). These guys are used to office PCs and printers and have no clue about 7x24 services and have little experience with the h/w they support. They are always on the phone to Sun for advice.

Re:and? (1)

rubies (962985) | more than 3 years ago | (#34482632)

Alternative timeline: Sun should have seen the writing on the wall when NetBSD and Linux started to get popular in the early nineties. Why? Because a lot of us ex-Sun jockeys really, really wanted a Sun at home but just couldn't afford to run even a second-hand IPX workstation when a PC was so cheap. Sure, the PC was a piece of junk, but loaded up with X Windows and all the GNU tools, you could get most of your support scripts working from home and started not worrying so much about having a Sun.

If they'd looked downward to their primary users instead of trying to capture the enterprise market, and seen what was happening, they'd have shipped a free Solaris x86 that worked with commodity hardware rather than a narrow (expensive, hard-to-find) subset, and Linux wouldn't have gone anywhere.

Re:and? (1)

dogsbreath (730413) | more than 3 years ago | (#34482838)

Sun needed heavy investment and lots of R&D to continue to compete in the h/w arena. They didn't have it and Fujitsu was no help. IBM, Intel and AMD poured tons of $$$ into chip development while Sun kept missing release dates. While Intel and IBM developed high-density fabrication methods with remarkable decreases in execution cycle times, Sun was stuck with technology that was rapidly being left behind.

On the o/s side, Solaris is a great stand-alone o/s but Sun almost totally missed the transition to virtualization. Their moves in that area have been too little and too late, just like the h/w development side. IBM's P systems and VMware on Intel are so advanced in terms of flexibility and functionality that it isn't funny.

A lot of Sun customers stuck around when they should have been moving on to competing products. Sun had lots of opportunity to fix things but they never delivered.

Sun's executives just milked their cash cows (large Unix/Solaris shops) until one day the herd dried up. They were lucky to find a buyer for the company.

Re:and? (1)

ToasterMonkey (467067) | more than 3 years ago | (#34482706)

Oracle DB is still at the core of our internal corporate computing because of an excellent licensing deal but we use alternatives for consumer facing services. ...
Everything about Sun h/w is out of sync with what customers want.

Oracle is almost clueless when it comes to hardware sales and development. Try "www.sun.com"... you get a redirect to the Oracle home page and then you have to search for a link to the server product lineup. It's almost as if they are hiding the fact that they have a hardware product to sell. I don't think the Oracle brain trust knows what to do with Sun h/w and the Solaris o/s.

That's unfair; go to any gigantic company's site: EMC, IBM, HP, GE, Hitachi. These are mega-corporations now, not "the storage company", "the hardware company", "the computer company", "the jet company", or whatever else you'd guess.

Re:and? (0)

Anonymous Coward | more than 3 years ago | (#34482204)

Not trying to be a smartass, but does it really even matter? Hasn't almost everyone already decided to move away from Sun/Oracle, excepting those with a tremendous investment in that area? Can their sales really do anything except go down on the hardware side?

We're moving away from Solaris (and SPARC) mostly because of support costs: there's only "premium" support, which is a four-hour response. This is great for PRD, but we have many DEV, QA, and STG environments that don't need this level, and it's costly when you add it all up. If there were a 12x5 or 8x5 option (or even a warranty-only level), it wouldn't be a problem; it's just that 24x7 for everything is too pricey for us.

I've been using Linux since '96, BSD since about '98, and Solaris since about '00 (and Mac OS X since '03) and currently Solaris 10 is my favourite OS for servers (previously FreeBSD).

As it stands you can spin up multiple zones on a system with 1% overhead and no hit on IOPS. Until recently this was almost impossible with any other system (VMware certainly has made great strides lately). Add in ZFS, DTrace, and Live Upgrade and you have systems that generally Just Work. I've never had to worry about livelock or the OOM killer with Solaris, where those things crop up from time to time on Linux in my travels. And given the number of threads available on T-series CPUs, you can consolidate hosts 10:1 on a single piece of hardware without running into the licensing fees for "virtual guests" that most other operating systems impose (especially handy for DEV and QA stuff).

If Oracle simply changed some of the support options we'd be back on it in a second.

Re:and? (2)

drolli (522659) | more than 3 years ago | (#34482272)

I rather think that if they optimize the Sparc hardware for databases, it may be a chance for the architecture to survive in the long term. And no, nobody is going to switch because of ideology. They switch because of the cost of running their applications. And no, such decisions are not made on a scale of 1-2 years, but on a longer timescale.

Re:and? (0)

Anonymous Coward | more than 3 years ago | (#34482350)

You smart ass.

Re:and? (1)

ToasterMonkey (467067) | more than 3 years ago | (#34482668)

Even AMD had to fudge the model names back then to get people to buy the processors, which admittedly were faster per MHz than Intel, but customers looked at raw numbers. I would think that cores would be the same, even with a more sophisticated buyer.

Really? cores == cores? You say this on an article about a line of processors with a bazillion hardware threads. No doubt computer buyers often don't understand what they're getting, but to a person that naive, a T3 has 128 processors. Have fun debating processor vs. core with a core == core luddite, not to mention this processor line is an SoC with four-way SMP, four memory channels, PCIe, and dual 10Gbit NICs. /sigh.. it's just one chip though, and my gaming rig has two, so it's twice as fast. Hopefully those people don't have jobs requiring them to make complex decisions.

Re:and? (0)

Anonymous Coward | more than 3 years ago | (#34482854)

No, stupid. Those of us who are doing heavy-duty numerical work still prefer the faster UltraSPARC to x86. Using Solaris 10u8, the Sun Studio 12 compilers, and WRF V3.1, one UltraSPARC-IV+ at 1.35 GHz outperforms a 2.4 GHz quad-core Intel.

blindly pushing marketable limits... (0)

MichaelKristopeit211 (1946194) | more than 3 years ago | (#34481610)

i remember in 1997 Alpha processors ran at over 1 GHz when no other processors did, but they were useless on the desktop.

the problem is obviously a lack of attention to detail... engineers who do not take pride in the functional optimization of their products.

Re:blindly pushing marketable limits... (4, Insightful)

MightyMartian (840721) | more than 3 years ago | (#34481646)

I wasn't aware the Alpha was that bad. I thought it was simply that the benefit of the processors wasn't great enough to convince companies to move from the much cheaper x86 platform. I saw a couple of Alpha desktops and they were pretty impressive.

Re:blindly pushing marketable limits... (0)

MichaelKristopeit211 (1946194) | more than 3 years ago | (#34481688)

they weren't bad... but they were less responsive than much cheaper computers running at a third of the clock speed. that is probably due to the OS or application-layer UI being implemented poorly, but i was never impressed with them.

the Sun servers and SGI servers i worked on during the same time period, which were used in radiation imagery research, could handle a much greater load.

Re:blindly pushing marketable limits... (1)

jedidiah (1196) | more than 3 years ago | (#34481788)

um no...

Alpha systems were well worth 4x the price of their contemporaries.

They made things possible that were simply unable to scale on Solaris kit.

This is why they were also thrown into some of the early computing clusters and render farms.

OTOH: SGI are the poster boys for overpriced gear with lackluster performance.

Re:blindly pushing marketable limits... (0)

MichaelKristopeit211 (1946194) | more than 3 years ago | (#34481948)

ummm... no...

my first hand experience as funded by the national science foundation says otherwise.

do you have any insight of your own to provide? why are we all not using alpha chips if they were so affordable and powerful?

Re:blindly pushing marketable limits... (1)

Moridineas (213502) | more than 3 years ago | (#34482064)

my first hand experience as funded by the national science foundation says otherwise.

Well let's see some numbers then. My firsthand experience (admittedly not funded by NSF!!!) says you're dead wrong. Secondly, who exactly claimed that Alphas were cheap? They WERE more expensive, but also more powerful.

do you have any insight of your own to provide? why are we all not using alpha chips if they were so affordable and powerful?

Perhaps you've heard the term "Wintel" before?

Re:blindly pushing marketable limits... (0)

MichaelKristopeit212 (1946196) | more than 3 years ago | (#34482094)

alphas weren't much more expensive... if i remember right, $800 got a 1.2 GHz bare-bones system... about the same as a 433 MHz system with Pentiums.

got it... so the ignorant public just never UNDERSTOOD how good the alpha product was and that is the reason they didn't buy it.

you're an idiot.

if only microsoft had dedicated itself to a symbiotic relationship with alpha to cross optimize... wahhhhh wahhhhh.

perhaps one day you'll not be an idiot?

Re:blindly pushing marketable limits... (1)

luis_a_espinal (1810296) | more than 3 years ago | (#34482412)

alphas weren't much more expensive... if i remember right, $800 got a 1.2 GHz bare-bones system... about the same as a 433 MHz system with Pentiums.

got it... so the ignorant public just never UNDERSTOOD how good the alpha product was and that is the reason they didn't buy it.

you're an idiot.

if only microsoft had dedicated itself to a symbiotic relationship with alpha to cross optimize... wahhhhh wahhhhh.

perhaps one day you'll not be an idiot?

Apparently, gratuitous insulting has become the cornerstone of logical arguments. Bravo.

Re:blindly pushing marketable limits... (0)

Anonymous Coward | more than 3 years ago | (#34482486)

In case you haven't noticed, MichaelKristopeit* is a notorious sockpuppeting troll. The amazing thing here is that he's stayed somewhat on topic for so long.

Re:blindly pushing marketable limits... (0)

MichaelKristopeit212 (1946196) | more than 3 years ago | (#34482608)

no, actually the norm is tongue in retarded cheek backhanded rhetorical implications like "Perhaps you've heard the term "Wintel" before?".

you're all ignorant hypocrites.

slashdot = stagnated

Re:blindly pushing marketable limits... (1)

Moridineas (213502) | more than 3 years ago | (#34482716)

no, actually the norm is tongue in retarded cheek backhanded rhetorical implications like "Perhaps you've heard the term "Wintel" before?".

Yes, one can easily get into a pissing match over rhetorical techniques, like your "funded by the NSF!!!" appeal to authority. Big whoop. My point was perfectly valid, and your inability to respond to anything I said lends credence to the AC who claimed you were a sockpuppeteer.

slashdot = stagnated

Says the guy with the high 7-digit UID?

Re:blindly pushing marketable limits... (1)

MichaelKristopeit213 (1947002) | more than 3 years ago | (#34483068)

you think it's a big whoop that i've been paid for research?

i've used all the systems first hand... the alphas ran the worst... obviously because of non-optimized software on every layer of the application stack. your inability to understand that the "AC" may very well be yourself as NO ONE has taken responsibility for the claims is hypocritically ignorant.

this account is NEW. you're the same old idiot.

bringing up such trivially irrelevant topics is again hypocritically ignorant.

would you rather i post with my 5 digit UUACCOUNTUSERID? you're an idiot.

if ANY user account CAN be a "sockpuppet" then ALL user accounts are "sockpuppets".

cower some more, feeb.

you're completely pathetic.

Re:blindly pushing marketable limits... (1)

Moridineas (213502) | more than 3 years ago | (#34482696)

alphas weren't much more expensive... if i remember right, $800 got a 1.2 GHz bare-bones system... about the same as a 433 MHz system with Pentiums.
got it... so the ignorant public just never UNDERSTOOD how good the alpha product was and that is the reason they didn't buy it.

So what you're saying is that no, you don't have any performance numbers that can remotely back up anything you've claimed?

you're an idiot.

if only microsoft had dedicated itself to a symbiotic relationship with alpha to cross optimize... wahhhhh wahhhhh.

perhaps one day you'll not be an idiot?

Classy.

"Wintel" won as a platform because it was common, cheap, fast (enough) and yes, because Microsoft was very important to the PC landscape! Look at other chips like PowerPCs and so on that had some great performance and energy qualities but were dominated by Intel chips and Microsoft OS/software.

Is it true that you're a notorious sockpuppeting troll? I don't think I've seen your name before.

Re:blindly pushing marketable limits... (1)

MichaelKristopeit213 (1947002) | more than 3 years ago | (#34483026)

wintel won because of a symbiotic practice of cross optimization while patenting the use of provided optimizations on either end.

you're an idiot.

Re:blindly pushing marketable limits... (1)

mr_mischief (456295) | more than 3 years ago | (#34482472)

Because Intel sued them over patents and buried the tech?

Re:blindly pushing marketable limits... (4, Informative)

bhtooefr (649901) | more than 3 years ago | (#34481782)

ISTR benchmark after benchmark saying that they performed about as well as a Pentium Pro/II of the same clock speed, when running native code. Except they were doing 533 MHz when Pentium Pros were doing 200. Oh, and the benchmarks I remember showed that the Alpha could emulate x86 code as fast as the Pentium Pro 200 could run it natively, after DEC's emulation software had profiled the code.

The problem is this... they were also, IIRC, more EXPENSIVE than said Pentium Pro machines, and they could (for the Windows market) only run NT, when everyone targeted 95. And the performance advantage was completely wasted if your code wasn't written for Alpha. (So you could run Office 95 and such on them, but because Microsoft only compiled the OS and maybe some server software for Alpha, for general desktop AND workstation duty a PPro box was cheaper and may have been able to do the same job if your business needed Windows.)

(Keep in mind that back then, Microsoft was ambivalent about x86, at least in the workstation and server market. Windows NT was written to run on quite a few popular processor families - MIPS, PPC, and Alpha, in addition to x86. And, Microsoft made what was essentially an AT Architecture MIPS system specification for running NT on MIPS.)

Re:blindly pushing marketable limits... (1)

MightyMartian (840721) | more than 3 years ago | (#34482070)

That's rather my point. The Alpha was anything but a bad chip; it's just that, on a cost-benefit scale for what they were being marketed for, it made no sense. The problem, at the server and high-end workstation end, was that while the Alpha could certainly outperform Pentiums, the price made them very unattractive compared to what Intel was throwing out there.

Re:blindly pushing marketable limits... (0)

MichaelKristopeit212 (1946196) | more than 3 years ago | (#34482128)

well... obviously a higher clock speed means the same "native" NOOP commands would run faster... the problem is most certainly lack of optimized software written for the architecture. i'd very much like a highly optimized single core machine that didn't attempt tricks like "hyper-threading" that destroy any attempt to engineer real time systems... but operating systems have grown fond of the tweaked chips and there is seemingly no looking back.

Re:blindly pushing marketable limits... (1)

ducomputergeek (595742) | more than 3 years ago | (#34482324)

It really depended on what you were doing too. Alphas were built for raw speed and were good for certain tasks. The company I worked for back then used them for Lightwave rendering circa 1996/1997. But I believe the average box was somewhere in the neighborhood of $35,000 - $50,000 a pop for dual 500 MHz and 2GB of RAM, and they were pretty much all render nodes. Most of the workstations were SGI/IRIX, and in the early days there were even quite a few Amigas around.

Re:blindly pushing marketable limits... (2, Insightful)

Anonymous Coward | more than 3 years ago | (#34482506)

Yeah I had a 486-DX100 and a 233 MHz Alpha 21064, both running Red Hat Linux.

The Alpha was so much faster for native compiled stuff, but I couldn't get Netscape for Alpha, and running Netscape for x86 under the EM86 emulator was as slow as browsing the web with a Python-based browser at the time. They were both too slow to keep up with "fast" downloads like a 28k modem... So I wound up using the 486 machine as my graphics console and running all of my batch stuff on the Alpha, with them sitting next to each other connected by cheap coax Ethernet.

What amazes me is that I now have a quad-core 2.4GHz Intel i7 Xeon with 12 GB of triple-channel RAM and a gigabit connection to the internet here at work at a university, and I still get uncomfortable lags while browsing. Compared to my 486DX-100 with 20 MB of RAM, I am not sure I see that much more value in today's web to warrant this level of resource overhead. I expected us to be in the sci-fi future by the time we had this kind of equipment...

Re:blindly pushing marketable limits... (1)

Rudeboy777 (214749) | more than 3 years ago | (#34482926)

If you think the web is a bloated mess now, just wait until you see the hardware the sci-fi-future web will bring to its knees!

Re:blindly pushing marketable limits... (0)

Anonymous Coward | more than 3 years ago | (#34481862)

useless on the desktop

I had Alphas on the desktop in '97. They were excellent, and you are the very first person I've ever heard characterize those machines as 'useless.' That's just silly. People still trade those things on eBay and they get good prices. I watched tens of millions of dollars of revenue engineered on those machines.

Re:blindly pushing marketable limits... (1)

MichaelKristopeit212 (1946196) | more than 3 years ago | (#34481996)

i watched 10 times as much engineered on sun and SGI machines... my COMPAQ PRESARIO outperformed my alpha on some tasks running at less than half the clock speed.

they were RELATIVELY useless.

i'm a fan of the MIPS architecture and loathe things like hyper-threading, but throughput doesn't lie. there was obviously a lot of bad programming at the OS layer, as doing anything in the file browser would cause the window to reload every single element and redraw it... super lag... hard to blame the chip for stuff like that, but there simply wasn't enough software optimized for the architecture.

Re:blindly pushing marketable limits... (3, Informative)

matfud (464184) | more than 3 years ago | (#34481970)

It really depended on what you wanted to do. Sparc machines were great at IO and memory access. Alphas just had the sheer grunt to do work (and yes, they were running at over 1GHz when most processors were running at half that). SGI were crap, but if you wanted to visualise your data they could not be beat (huge amounts of custom graphics hardware).

Re:blindly pushing marketable limits... (1)

MichaelKristopeit212 (1946196) | more than 3 years ago | (#34482044)

yeah, that's what we were doing... lots of imaging on gigantic data sets where latency was crucial (off site medical diagnosis). the SGIs outperformed everything

the SGIs also outperformed the alphas for server daemons and we switched the math and computer science department web servers off the alpha.

that is when i first realized the MHz race was useless marketeering... what good is horsepower without proper gearing? the alphas seemed to lag behind everything else i had access to despite having almost triple the clock speed.

Re:blindly pushing marketable limits... (1)

matfud (464184) | more than 3 years ago | (#34482322)

I have had various experiences. For some of my work the Alphas trounced everything. They were very fast processors. With larger data sets, though, I found that SunOS on Fujitsu (Sparc) machines worked better. Mind you, I may be biased, as the 16-proc machines I had access to were not quite comparable to the Alphas (I think the Alphas still outperformed them in floating point, though).
But if you wanted to see your data then you had to have something from SGI. SGI really had impressive 3D hardware. Most low-end 3D graphics cards can probably outdo SGI now, but having 8 or 16 full-length cards at a few grand a piece was fun. That was the first time I used 3D graphics.

Re:blindly pushing marketable limits... (1)

matfud (464184) | more than 3 years ago | (#34482642)

"That was the first time I used 3D graphics."
Perhaps I should say this was the first time I had used 3D graphics in anger. No textures...just trying to push polygons at the screen. trying to render voxels using GLUT/GL. That is when I started to apprecieate the SGI machine I had access to. It was too slow to compute the scene but nothing else could display it fast enough to get an idea what was going on.

Re:blindly pushing marketable limits... (1)

matfud (464184) | more than 3 years ago | (#34482690)

Oh, and this was using the shutter glasses (LCD) that give you a headache after ten minutes. They claimed 30 Hz but that did not really work. The pressure-ball control did work well.

Re:blindly pushing marketable limits... (1)

KonoWatakushi (910213) | more than 3 years ago | (#34482140)

The DEC Alpha was (and still is) a brilliant architecture. The designers took great care from the start to make sure that it would scale, both in clock and core count. It was simple, elegant and fast.

IIRC, the early chips were fast enough to emulate x86 code at a reasonable speed. If all you wanted to do was run emulated x86 code though, then maybe they were "useless". This was especially true before the BWX extension, which introduced a number of byte oriented instructions.

Native code, on the other hand, would leave you with no misconceptions about the speed of the Alpha; it was truly impressive. DEC's compilers and math libraries were also excellent.

Re:blindly pushing marketable limits... (1)

Johnno74 (252399) | more than 3 years ago | (#34482280)

IIRC the original Athlon actually licensed some features of the Alpha from DEC too...

Re:blindly pushing marketable limits... (1)

afidel (530433) | more than 3 years ago | (#34482994)

The Athlon used the EV6 bus from the Alpha, in fact the AMD 751 and 761 northbridge chipsets designed for the Athlon were used by Samsung for Alpha 21264 based systems!

Re:blindly pushing marketable limits... (1)

MichaelKristopeit212 (1946196) | more than 3 years ago | (#34482390)

the available native code was relatively non-existent.

Re:blindly pushing marketable limits... (1)

KonoWatakushi (910213) | more than 3 years ago | (#34482676)

That is what compilers are for. It is silly to suggest that there was no software for a Unix based system, especially at that time.

Availability of Windows based software would have helped, but was hardly necessary. The market didn't kill the Alpha, it was pure stupidity on the part of management.

Re:blindly pushing marketable limits... (1)

MichaelKristopeit162 (1934888) | more than 3 years ago | (#34483084)

most of the compilers were never optimized for the alpha... so you either have to recode large portions to work with the alpha compilers or run through an emulation layer.

the operating systems available were all junk as they were sloppy ports and not optimized for the alpha.

I know single thread performance is important... (-1)

Anonymous Coward | more than 3 years ago | (#34481618)

but the real question is how will this chip improve Oracle's ability to rape its customers? When I'm deciding between Oracle and another platform, the magnitude and quality of Oracle's raping of me is a key deciding factor.

halve? (0)

Anonymous Coward | more than 3 years ago | (#34481660)

Is this another veiled Sun downsize?

Bill Gates was quoted... (0)

Anonymous Coward | more than 3 years ago | (#34481674)

Bill Gates was quoted as saying "320 cores should be enough for anybody".

Concentrate on ST perf? What does this mean? (3, Interesting)

multipartmixed (163409) | more than 3 years ago | (#34481692)

Does it maybe mean more register windows?

Because that would certainly help things like Java, and presumably Oracle.

Anybody know how often a large query spills registers?

Re:Concentrate on ST perf? What does this mean? (2)

Surt (22457) | more than 3 years ago | (#34482530)

I assume they're talking about improving their multiple dispatch, so that they can go from 3-4 (or is it 4-5?) ops in parallel on a single core. And probably bring up the clock speed. 8 cores at 2 GHz beats 16 cores at 1.5 GHz for a lot of applications.

Re:Concentrate on ST perf? What does this mean? (0)

Anonymous Coward | more than 3 years ago | (#34482606)

that would certainly help things like Java

God yes. I can not believe how badly Java runs on SPARC. I mean, okay, logically it probably did make sense to concentrate on Windows performance first and general x86 performance second, but it just always seemed crazy that Sun's hardware was the absolute worst possible place to use Sun's programming language.

Same as always (1)

matfud (464184) | more than 3 years ago | (#34481704)

I'm pretty sure this was on Sun's roadmap: higher throughput per thread, higher clock speeds. So has Oracle deviated from the plan Sun had?

incoherent (1)

Chaostrophy (925) | more than 3 years ago | (#34481718)

I don't think the author had any understanding of the history of SPARC or of Oracle's (Sun's) product lineup. Here is an informative interview on the subject from the useful Sun-hardware-oriented blog http://www.c0t0d0s0.org/ [c0t0d0s0.org]: http://www.oracle.com/us/corporate/innovation/innovator-hetherington-191304.html [oracle.com]

Sparc (5, Informative)

TopSpin (753) | more than 3 years ago | (#34481742)

The reduction in cores from 16 to 8 was part of the Sparc road-map [channelregister.co.uk] before Sun was acquired by Oracle. Despite a lot of speculation it appears Oracle is following through with the plans they bought from Sun.

... Sun was going to cut back the number of cores to eight and crank the clocks to 2.5 GHz ...

Keep the Cores; Make Them Faster (2)

Doc Ruby (173196) | more than 3 years ago | (#34481796)

Reducing the core count lets Oracle make each core bigger, to add features making each faster. But can't Oracle keep the same core count, and instead of increasing the core count in the next generation the way most other CPU makers will, just add circuits to each existing core? Is it really necessary to reduce the count? Process size will probably also be shrinking in that generation, and new tricks developed, as usual. Can't Oracle just make a bigger chip, and also keep the benefits of the high core count Sun already achieved?

Re:Keep the Cores; Make Them Faster (1)

GWBasic (900357) | more than 3 years ago | (#34482432)

Reducing the core count lets Oracle make each core bigger, to add features making each faster. But can't Oracle keep the same core count, and instead of increasing the core count in the next generation the way most other CPU makers will, just add circuits to each existing core? Is it really necessary to reduce the count? Process size will probably also be shrinking in that generation, and new tricks developed, as usual. Can't Oracle just make a bigger chip, and also keep the benefits of the high core count Sun already achieved?

From what it sounds like, Oracle could be devoting the extra space to cache. A large cache can go a long way in CPU-bound operations, or help make a very fast database.

Making a bigger chip isn't as easy as it sounds. As the die size increases, the probability of a defect within the die increases. (Imagine that you have 5 specks of dust on a wafer; if the die size is larger, the ratio of good dies to bad is worse.) Large die sizes will also have problems with heat distribution, or could limit total clock speed due to irregularities that small die sizes mitigate.
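To put rough numbers on that, the usual back-of-the-envelope estimate is the Poisson yield model Y = exp(-D*A); the defect density and die areas in this sketch are invented round numbers just to show the shape of the trade-off, not Sparc figures:

<ecode>
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Assumed, illustrative numbers only. */
    double defects_per_cm2 = 0.5;   /* defect density D */
    double small_die_cm2   = 2.0;   /* e.g. a hypothetical 8-core die  */
    double big_die_cm2     = 4.0;   /* e.g. a hypothetical 16-core die */

    /* Poisson yield model: fraction of dies with zero defects. */
    double y_small = exp(-defects_per_cm2 * small_die_cm2);
    double y_big   = exp(-defects_per_cm2 * big_die_cm2);

    printf("small die yield: %.0f%%\n", y_small * 100.0);  /* about 37% */
    printf("big die yield:   %.0f%%\n", y_big * 100.0);    /* about 14% */
    return 0;
}
</ecode>

In this model, doubling the die area more than halves the yield, which is why "just make it bigger" gets expensive fast.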

Re:Keep the Cores; Make Them Faster (1)

dgatwood (11270) | more than 3 years ago | (#34482590)

This is why you build each group of cores and the corresponding cache on a separate die, test and bin each die independently, then wire them together [wikipedia.org] inside the package. Sure, there's the added potential for interconnect failure, but so long as you test the integrated module before you epoxy the lid on, you should be able to salvage those parts.

Re:Keep the Cores; Make Them Faster (1)

mlts (1038732) | more than 3 years ago | (#34482612)

Oracle could have always gone the route IBM did with the POWER7 chips and have the best of both worlds. With Power7, you can turn half the cores off. The remaining cores will use the cache on the counterparts that are off, and the clock speed gets a decent bump.

This is what Oracle should have done -- if someone is doing a task that is easily split up into parallel parts, or using a lot of domains/VMs, allow for this. If they need more oomph per core, have half the cores flip off, and the others use their cache.

Re:Keep the Cores; Make Them Faster (1)

Surt (22457) | more than 3 years ago | (#34482550)

An increasing number of cores tends to be a challenge to keep clocked at a high speed. Every CPU developer struggles with this, and they all market higher-clocked, lower-core-count parts. Choosing a lower core count in your design phase makes a lot of sense if you are single-thread bound.

I'll see your 16 cores and raise you 1024... (1)

metalmaster (1005171) | more than 3 years ago | (#34482004)

but that doesn't really matter now, does it? We know your application only supports 2, and scalability isn't an option.

Sh17... (-1)

Anonymous Coward | more than 3 years ago | (#34482158)

Sanity, at last! (3, Insightful)

kanto (1851816) | more than 3 years ago | (#34482306)

When will people realize that not everything runs better on more cores, especially stuff that's highly dynamic, say a database query, which is effectively a long sequence of conditionals? You talk to people and the first thing they ask is "yeah, but how many cores does it have?"... it's like multithreading didn't exist until dual-core CPUs.

A CPU has a limited amount of processing power; some things you can only do in sequence, ergo you can't do them in parallel, ergo you're limited by the core speed, ergo you're fucked with a 16-core 1GHz machine up against a 1-core 2GHz machine.
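A quick worked version of that point, in the style of Amdahl's law; the 60% serial fraction below is an arbitrary illustration, not a measured database workload:

<ecode>
#include <stdio.h>

int main(void)
{
    /* Assume 60% of the work is inherently sequential (illustrative). */
    double serial = 0.6, parallel = 0.4;

    /* Runtime relative to a 1-core 1GHz baseline. */
    double t_16x1 = (serial + parallel / 16.0) / 1.0;  /* 16 cores, 1 GHz */
    double t_1x2  = (serial + parallel / 1.0)  / 2.0;  /*  1 core,  2 GHz */

    printf("16 x 1GHz relative runtime: %.3f\n", t_16x1);  /* 0.625 */
    printf(" 1 x 2GHz relative runtime: %.3f\n", t_1x2);   /* 0.500 */
    return 0;
}
</ecode>

With that much serial work the single fast core wins; the many-core box only pulls ahead once the serial fraction drops below roughly half.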

Re:Sanity, at last! (1)

Surt (22457) | more than 3 years ago | (#34482564)

Not everything runs better on more cores, but so many things scale close to linearly with cores that it has become what the majority rightly want. Basically every business function that has to support N users can be partitioned over up to N cores; the more cores you pack per chip, the fewer chips, sockets, and boxes you have to buy.
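As a toy illustration of that partitioning (nothing Sparc-specific; the user count, worker count, and per-user "work" are all made up), here is a C sketch using POSIX threads:

<ecode>
#include <pthread.h>
#include <stdio.h>

#define NUM_USERS   1000
#define NUM_WORKERS 8          /* e.g. one worker per core */

struct slice { int first, last; long handled; };

/* Each worker independently serves its own contiguous slice of users. */
static void *serve_slice(void *arg)
{
    struct slice *s = (struct slice *)arg;
    for (int u = s->first; u < s->last; u++)
        s->handled++;          /* stand-in for real per-user request work */
    return NULL;
}

int main(void)
{
    pthread_t workers[NUM_WORKERS];
    struct slice slices[NUM_WORKERS];
    int per = NUM_USERS / NUM_WORKERS;

    for (int i = 0; i < NUM_WORKERS; i++) {
        slices[i].first   = i * per;
        slices[i].last    = (i == NUM_WORKERS - 1) ? NUM_USERS : (i + 1) * per;
        slices[i].handled = 0;
        pthread_create(&workers[i], NULL, serve_slice, &slices[i]);
    }

    long total = 0;
    for (int i = 0; i < NUM_WORKERS; i++) {
        pthread_join(workers[i], NULL);
        total += slices[i].handled;
    }
    printf("served %ld users across %d workers\n", total, NUM_WORKERS);
    return 0;
}
</ecode>

Since the slices share no state, adding workers (and cores) scales throughput almost linearly, which is exactly the kind of workload the many-thread T-series was aimed at.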
