
IBM Mainframe Running World's Fastest Commercial Processor

timothy posted more than 2 years ago | from the overclocked-with-pencil-lead-of-course dept.


dcblogs writes "IBM's new mainframe includes a 5.5-GHz processor, which may be the world's fastest commercial processor, say analysts. This new system, the zEnterprise EC12, can also support more than 6-TB of flash memory to help speed data processing. The latest chip has six cores, up from four in the prior generation two years ago. But Jeff Frey, the CTO of the System Z platform, says they aren't trading off single-thread performance in the mainframe with the additional cores. There are still many customers who have applications that execute processes serially, such as batch applications, he said. This latest chip was produced at 32 nanometers, versus 45 nanometers in the earlier system. This smaller size allows more cache on the chip, in this case 33% more Level-2 cache. The system has doubled the L3 and L4 cache over the prior generation."
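The single-thread point in the summary is essentially Amdahl's law: extra cores only speed up the parallel part of a job, while a faster clock speeds up everything, serial batch work included. A back-of-the-envelope sketch in Python, with purely illustrative serial fractions rather than IBM figures:

    # Amdahl's law: speedup from N cores when a fraction of the work is serial.
    def amdahl_speedup(serial_fraction, cores):
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

    # Illustrative serial fractions -- not measurements of any real batch job.
    for s in (0.05, 0.50, 0.90):
        print(f"serial portion {s:.0%}: 6 cores give {amdahl_speedup(s, 6):.2f}x")
    # A 10% clock-speed bump, by contrast, speeds up serial and parallel work alike.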


CPU (0)

Lord Lode (1290856) | more than 2 years ago | (#41149365)

Why does the article not mention the name of the CPU? Is only its clock speed faster, or its per-clock execution as well? And can we use this CPU in consumer computers, or is it just for IBM mainframes?

Re:CPU (5, Informative)

betterunixthanunix (980855) | more than 2 years ago | (#41149383)

Re:CPU 6 core and clock frequency (-1)

Anonymous Coward | more than 2 years ago | (#41150541)

Hmmm, six cores each running at 1 GHz equals 6 GHz, and with a 5% overhead that makes it 5.7 GHz maximum... IBM Marketing!!!!

Re:CPU 6 core and clock frequency (2)

Guy Harris (3803) | more than 2 years ago | (#41152415)

Hmmm, six cores each running at 1 GHz equals 6 GHz, and with a 5% overhead that makes it 5.7 GHz maximum... IBM Marketing!!!!

And the published information supporting your assumption -- that the cores only run at 1 GHz and that the headline number comes from multiplying the clock rate by the number of cores and subtracting 5% as overhead, rather than each core actually running at that speed -- is where, exactly?

Re:CPU (1)

Guy Harris (3803) | more than 2 years ago | (#41152561)

Actually, the z196 is the microprocessor in the previous generation. An IBM paper on the zEC12 [ibm.com] refers to the new microprocessor as the "zEC12 processor chip" or just "the zEC12 chip". As they're not selling it on the open market, there's not much reason to give the processor chip its own name, independent of the name of the systems in which it's being used.

Re:CPU (1)

vlm (69642) | more than 2 years ago | (#41149437)

Why does the article not mention the name of the CPU?

You're probably not buying one at tigerdirect anytime soon, so it doesn't really matter.

It does run linux, which is kinda cool.

http://www.debian.org/ports/s390/ [debian.org]

Re:CPU (0)

Anonymous Coward | more than 2 years ago | (#41149531)

While IBM doesn't disclose prices (you have to talk to their sales), I found a source that said the predecessor (the z196) ran into the hundreds of thousands of dollars. I suspect this processor costs in the same ballpark ($200-300k per unit).

I'm certain IBM would be happy to sell you one, however...

Re:CPU (1)

Pf0tzenpfritz (1402005) | more than 2 years ago | (#41150225)

Why does the article not mention the name of the CPU? Is only its clock speed faster, or also its execution? Can we also use this CPU in consumer computers or is this for IBM Mainframes?

No. They obviously want to profit from high speed trading.

Re:CPU (3, Informative)

jthill (303417) | more than 2 years ago | (#41150737)

They make only a few thousand of the really high-end machines like this. You can bet every dollar you have that these will execute faster. Multinational corporations don't shell out $20M for a mainframe upgrade without knowing exactly what they're getting. L3 cache is 48GB. <== not a typo. There's an outboard L4 cache that's much larger. They've got bandwidth that can feed that beast: they were built to handle TB/s of just I/O bandwidth, not counting CPU access to the data, something like a decade ago.

Re:CPU (0)

Anonymous Coward | more than 2 years ago | (#41150991)

L3 cache is 48GB. <== not a typo.

This is insane: you could run a full OS install just from the L3 cache!!!
After ramdisk, welcome to cachedisk...

Re:CPU (4, Interesting)

BBCWatcher (900486) | more than 2 years ago | (#41151293)

Yes, you could do that. Multiple images, actually. And that's basically what these servers do automatically. There are 4 levels of cache, main memory (which is RAID-protected actually, called RAIM -- only IBM does that), and there's another optional level of directly processor-addressable memory called Flash Express which is nonvolatile -- that's new, too. It works particularly well for fast paging, in-memory databases, memory dumps, etc. Then you go into fiber-attached and heavily cached solid state disk, fast disk, nearline disk, tape libraries. There are a lot of storage layers, and they're all very big.
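A rough sketch of those tiers in order, for anyone keeping score; the latencies are generic order-of-magnitude guesses for illustration, not zEC12 specifications:

    # Storage tiers named above, fastest to slowest.
    # Latencies are rough orders of magnitude, for illustration only.
    tiers = [
        ("processor caches (L1-L4)",   "about 1 ns to 100 ns"),
        ("main memory (RAIM)",         "on the order of 100 ns"),
        ("Flash Express",              "tens to hundreds of microseconds"),
        ("cached SSD / fast disk",     "a fraction of a millisecond to ~10 ms"),
        ("nearline disk",              "around 10 ms or more"),
        ("tape library",               "seconds to minutes"),
    ]
    for name, latency in tiers:
        print(f"{name:<28} {latency}")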

Re:CPU (1)

Anonymous Coward | more than 2 years ago | (#41151255)

"L3 cache is 48GB. == not a typo"
Not a typo, still wrong. Those L3 caches tend to be in the 12-24MB range and even that is usually shared by the cores. The off-chip L4 cache is in the dozens to hundreds of MB.

Re:CPU (5, Informative)

jthill (303417) | more than 2 years ago | (#41151983)

L3 is 48MB (see p. 43) [ibm.com], not GB as The Register had it. Thanks for noticing that.

Re:CPU (0)

Anonymous Coward | more than 2 years ago | (#41151825)

Yes, it is indeed a typo. The L3 cache is on the die, and the die has a few billion transistors, so there's no way you could have 48GB of L3 cache. It's actually 48MB of L3 cache, with 192MB of off-chip shared L4 cache.

dom

Re:CPU (1)

BBCWatcher (900486) | more than 2 years ago | (#41151883)

No, your correction is partially incorrect. It's 384MB of L4 cache minimum up to 1.5GB maximum per zEC12.

Re:CPU (1)

gl4ss (559668) | more than 2 years ago | (#41153539)

Multinational corporations don't shell out $20M for a mainframe upgrade without knowing exactly what they're getting.

Uh, they tend to do exactly that: shell out the money without knowing exactly what they're getting, since they're contracting the decision out anyhow.

Sure, it would be nice if the whole DB fit in the cache on the CPU... but uh, you're not getting 48GB of on-die cache, of course. You're not going to get that for 20 mil.

bogus claims (0)

Anonymous Coward | more than 2 years ago | (#41149435)

They seem to be claiming "faster" solely on the basis of clock frequency. In actuality, your normal-ass laptop chips probably have higher performance, except on the specific server-type workloads this was made for.

Re:bogus claims (3, Informative)

Tx (96709) | more than 2 years ago | (#41149847)

They claimed "faster", not "more powerful"; clock frequency is the only thing they need to reference for that claim.

Re:bogus claims (5, Informative)

BBCWatcher (900486) | more than 2 years ago | (#41150139)

No, they aren't claiming that. Clock speed is still extremely important, though, and nobody except IBM has figured out how to hit these high gigahertz numbers, much less within power and cooling constraints. What's even more impressive is that IBM does it at mainframe service qualities, i.e. this machine runs continuously at 5.5 GHz without shutting off cores, without a "burst" mode, and without weird/exotic stuff like cryogenics that might keep a chip running just long enough for a screenshot. It's just balls-out performance on every thread -- and there's definitely a market for that. Nobody else is left doing this kind of computer engineering, bless them. Also check their cache sizes (obscenely huge), out-of-order execution, pipelining, crypto and decimal floating point in every core, extremely complex instructions like transactional execution.... This z CPU is a gorgeous piece of engineering in every way. And no, you can't run an entire large bank (for example) on your laptop.

Re:bogus claims (4, Interesting)

Anonymous Coward | more than 2 years ago | (#41150365)

There are some engineering tricks I've seen IBM use which are pretty cool. Take the POWER7 CPU line, for example. You can disable every other core, allowing the cores that remain operational to use the cache of the cores that are off. This not only gives them more cache but also allows a higher clock speed. Of course, this feature is mainly used to deal with applications which are licensed by the number of hardware cores present.

Mainframes are probably one of the most underutilized tools out there. However, for performance per square foot in the data center, they are hard to beat these days.

Of course, the biggest advantage: it isn't x86. With virtually everything running on the x86 or amd64 platform, all it would take is an undocumented instruction similar to the F0 0F bug that happens to give ring 0 access, and virtually the whole world would be vulnerable, with absolutely zero way of protecting against it short of reaching for the network cable or power switch.

Re:bogus claims (1)

Anonymous Coward | more than 2 years ago | (#41150827)

Finally, someone who gets it: the x86 monoculture is the single most dangerous thing in the computing landscape today.
Monoculture is bad (remember potatoes in Ireland); it has always been bad and will always be bad.
And no, it won't be better if the x86 monoculture is replaced by an ARM monoculture. Well, there would at least be more choice of foundries, not just Intel and AMD (which Intel could kill but doesn't, for fear of being scrutinized even more by antitrust authorities).

Re:bogus claims (1)

Anonymous Coward | more than 2 years ago | (#41150975)

You make it seem like IBM is unique. Yes, they are, but not in the way you implied. Mainframes are expensive beasts with essentially no limits on cooling, running (in many cases) legacy code stacks that can't be (or at least will not be) updated. They use expensive packaging like ceramic MCMs (multi-chip modules), each ceramic substrate likely costing more than even a top-of-the-line Intel processor. They use a huge number of connectors, on the order of 5x more than the aforementioned top-of-the-line Intel processor, in order to feed the processors with power and data.
The closest relative to this level of effort is the supercomputers of yesterday. Today's supercomputers don't even compare...

Intel could very well have had their 8-10GHz Pentium 4 (5?) by now if they had continued on that path. I for one like their current processor line better.

Re:bogus claims (1)

BBCWatcher (900486) | more than 2 years ago | (#41152511)

What if Intel had continued boosting clock speed (within power and cooling constraints) and employed other improvements? IBM has done both, and I applaud that. It's important to them (and to many of their customers) that they keep working hard to improve the performance of each thread, and, golly, they keep pulling rabbits out of the hat.

Re:bogus claims (0)

Anonymous Coward | more than 2 years ago | (#41151119)

Impressive numbers, but a 2.4 GHz Xeon still owns a 5.2 GHz z196 CPU in UnixBench :)

Re:bogus claims (1)

BBCWatcher (900486) | more than 2 years ago | (#41151847)

Well, if so -- no idea, really -- then run UnixBench on an Intel Xeon. I see that IBM sells those, too, as it happens. Now how does UnixBench help me run my business better, more securely, more reliably, etc? I've never worked for a business (or government) that runs UnixBench to solve any real business problem(s).

Re:bogus claims (1)

uncqual (836337) | more than 2 years ago | (#41153347)

I've never worked for a business (or government) that runs UnixBench to solve any real business problem(s)

But such businesses likely exist -- those businesses whose real business problem is selling processors and whose processors run UnixBench very well.

Re:bogus claims (2)

Jeremy Erwin (2054) | more than 2 years ago | (#41150401)

Laptop chips? Please. We're moving away from that. The tradition these days is to compare everything to your cell phone -- your cell phone beats the pants off a Cray, and so on.

horray! (0)

Anonymous Coward | more than 2 years ago | (#41149461)

for IBM Slash-vertizing!!!!!!!!!!

Reading the words "new mainframe" (1)

JCCyC (179760) | more than 2 years ago | (#41149463)

...gives me a bit of a cognitive dissonance sensation. It shouldn't, really, but it does. Is it just me?

Re:Reading the words "new mainframe" (4, Informative)

Anonymous Coward | more than 2 years ago | (#41149795)

Mainframes run a surprising number of critical workloads in the real world. They're vastly different from open systems, but they can be kept running through almost anything, if you're willing to spend enough money.

Re:Reading the words "new mainframe" (1)

davester666 (731373) | more than 2 years ago | (#41151765)

Well, pretty much anything can be kept running through almost anything, if you're willing to spend enough money...

Re:Reading the words "new mainframe" (3, Insightful)

BBCWatcher (900486) | more than 2 years ago | (#41151939)

Well, no. Right tool for the right job and all. You can buy the world's most expensive Olympic racing bicycle, but it won't haul an Airbus fuselage to its factory. There are many problems that cannot be solved with infinite amounts of money wrongly applied.

Re:Reading the words "new mainframe" (1)

Anonymous Coward | more than 2 years ago | (#41149797)

it's just you

Re:Reading the words "new mainframe" (3, Informative)

gstoddart (321705) | more than 2 years ago | (#41151895)

...gives me a bit of a cognitive dissonance sensation. It shouldn't, really, but it does. Is it just me?

It may not be just you. But I think a lot of people really have no idea of just how many mainframes are still chugging away doing what they've always done.

My wife does outsourced SAN storage, and they still have a couple of clients with big iron running.

Every couple of years when everybody has forgotten about the machines, an IBM tech will call up and say that the machine has phoned home and has a part that needs to be swapped out and that he needs to go onsite. Which usually leads to several hours of people trying to remember what it is and where it is (except the guys who work in the data center, who can't miss it).

I've worked in several places that have had mainframes for literally decades. And I've even worked on a project or two which tried to replace ancient, purpose-built software with some shiny new stuff. In the cases I've seen, after spending a few years and a few million dollars ... they still can't replace the mainframe, and the project gets scrapped.

I knew someone in the early 2000s who had retired from his job with a full pension and was back as a consultant making at least 3x his old salary, because they could no longer find someone who knew the machines and the software like he did.

Mainframes haven't gone away. Not by any stretch. And I bet this one still runs the stuff from the IBM 360 days quite nicely.

Re:Reading the words "new mainframe" (1)

mlts (1038732) | more than 2 years ago | (#41153175)

Mainframes do a bunch of tasks extremely well. The problem is that there is a "cheapest at any cost" mentality in IT, which is why this type of technology seems to be outmoded.

If businesses looked at the TCO of a mainframe, they would oftentimes be better off, especially on CPU power per square foot of server room space, where a mainframe excels. This is also true, to a lesser extent, of the higher-end Oracle SPARC and IBM POWER7 machines.

The one advantage of mainframes is that, once set up and configured, they pretty much sit there; other than phoning home to the IBM guy if some hardware breaks, they can essentially be forgotten about. No reboots every two weeks or other routine fiddling unless there is a major security issue, and those tend to be very rare.

Mainframes are also good if one wants to build reliability from the bottom of the stack up, so the need for custom fault-tolerance code is minimized.

L4 cache (2)

afidel (530433) | more than 2 years ago | (#41149487)

How does the L4 cache in these processors work? Generally going to anything off die is going to induce a major latency penalty due to the need to go through a driver stage which can handle outside interference. How can they make the L4 cache fast enough that its small size doesn't make it basically pointless versus just going to main memory?

Re:L4 cache (1)

MozeeToby (1163751) | more than 2 years ago | (#41150099)

Small is a relative term. The L4 cache is almost 200 MB on these. Of course, it all depends on how the math works out. As long as it's faster than going to RAM, there will be plenty of situations where it pays off.

Re:L4 cache (4, Informative)

BBCWatcher (900486) | more than 2 years ago | (#41150227)

Actually, L4 cache on this new IBM zEC12 is a minimum of 384MB, up to 1.5GB per server, in increments of 384MB. As you add cores, the L4 is bumped up. IBM doubled the cache in only a 25-month product cycle. Bravo.

Re:L4 cache (0)

Anonymous Coward | more than 2 years ago | (#41150189)

Without reading the article at all, there are some easy ways they could implement cache off-die and still have it be faster than system RAM. Firstly, much faster RAM (GDDR5 vs DDR3, etc.) would help; also, make the cache logically part of the CPU, with the CPU being the only thing that can talk to that RAM at all. Kinda breaks our concepts of northbridge/southbridge... but ehh, that isn't necessarily a _bad_ thing.

PLUS... perhaps they have a better caching scheme this way. Having built a cache simulator in undergrad, I can say that small changes in a single variable (line width, # of rows, # of rows to grab at once, etc.) can make a huge impact on even simple application execution. So I'm going to make the not-far-fetched assumption that they used simulators and optimized their caching scheme (or hell, it might even be variable in some respects based on past processes) to make use of the available L4 cache.

Plus I'm sure there are other ways... I got out of the comp. arch field after undergrad...
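For what it's worth, here is the kind of toy experiment the parent is describing: a minimal direct-mapped cache simulator in Python (nothing like the real zEC12 cache organization, and the access pattern is invented), just to show how much the hit rate moves when you nudge the line size or the number of sets:

    # Toy direct-mapped cache: hit rate vs. line size and number of sets.
    def hit_rate(addresses, line_size, num_sets):
        lines = {}                    # set index -> tag of the block stored there
        hits = 0
        for addr in addresses:
            block = addr // line_size
            index, tag = block % num_sets, block // num_sets
            if lines.get(index) == tag:
                hits += 1
            else:
                lines[index] = tag    # miss: fill the line
        return hits / len(addresses)

    # Repeatedly sweep a 16 KB working set with a 48-byte stride.
    trace = [(i * 48) % 16384 for i in range(50_000)]
    for line_size in (32, 64, 128):
        for num_sets in (64, 256):
            r = hit_rate(trace, line_size, num_sets)
            print(f"line={line_size:>3} B  sets={num_sets:>3}  hit rate={r:6.1%}")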

Re:L4 cache (1)

inode_buddha (576844) | more than 2 years ago | (#41150623)

FWIW they have a 2 gig page frame now. IBM shrank the process size and crammed cache like crazy on these. Along with some interconnects that make normal computers look lame...

Re:L4 cache (0)

Anonymous Coward | more than 2 years ago | (#41153123)

All you need to do is cover the latency of the next level of the storage hierarchy. There are apparently 30 cores in a single MCM, so the L4 cache is likely very necessary.

Ming Mecca (5, Funny)

unixhero (1276774) | more than 2 years ago | (#41149541)

That's a Ming Mecca chip. Those aren't even declassified yet!

Re:Ming Mecca (1)

Sponge Bath (413667) | more than 2 years ago | (#41151055)

I'm not sure the mainframe crowd will know this pop culture reference, and may end up thinking of the guy with a pointy beard from Flash Gordon.

Re:Ming Mecca (1)

Jeng (926980) | more than 2 years ago | (#41153749)

Or they will be like me, be interested, look it up, and then laugh.

I've since added the movie to the queue.

Then again I'm not part of the mainframe crowd. Damn cool kids with their expensive toys.

Memory performance? (1)

Urza9814 (883915) | more than 2 years ago | (#41149563)

So it was my understanding that part of the reason consumer CPUs didn't tend to go above 3-4GHz was that, at those speeds, electrical signals can't actually propagate through the wires fast enough. Specifically for memory reads: at 5.5GHz I'm calculating about 4cm per clock cycle, which may be less than the physical distance to the memory on a normal desktop PC. Meaning it would take not just two, but possibly three or four clock cycles to read a value from main memory.

Granted, on a server, main memory may be closer to the CPU, and the added cache will help as well. But I'm also mostly a software guy -- anyone with more computer engineering knowledge have any information about this? Is the memory closer? Are they just taking longer to read? And if so, is that likely to impact performance significantly (such that going from 3GHz to 5.5GHz wouldn't be as big a gain as, say, going from 1GHz to 2GHz)?
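For what it's worth, here is that back-of-the-envelope calculation; the propagation speed is the assumption that moves the answer around (signals on copper traces travel at a large fraction of the speed of light, commonly quoted around 0.5-0.7c):

    # Distance a signal can cover in one clock cycle at 5.5 GHz.
    C = 3.0e8            # speed of light in vacuum, m/s
    CLOCK_HZ = 5.5e9
    for frac in (1.0, 0.7, 0.5):      # assumed propagation speed as a fraction of c
        cm_per_cycle = C * frac / CLOCK_HZ * 100
        print(f"at {frac:.0%} of c: {cm_per_cycle:.1f} cm per clock cycle")
    # Roughly 3-5 cm either way -- which is exactly why caches and asynchronous
    # DRAM access exist, as the replies below explain.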

Re:Memory performance? (5, Informative)

Anonymous Coward | more than 2 years ago | (#41149769)

CPUs have not accessed main memory synchronously in decades. There are many hundreds of cycles lost if the processor stalls on a RAM access, not just from the length of the wiring but the addressing logic too. In fact, modern CPUs don't do word-level access to RAM, but rather pull in whole cache lines in a more packetized memory access protocol. Even in a multi-CPU SMP system, they don't actually communicate through system RAM anymore, but rather communicate CPU-to-CPU with a cache coherency protocol that provides the illusion of a shared system RAM. Each CPU really has its own set of local RAM behind its own cache and on-chip memory controller.

Even the L2 or L3 caches are unable to keep up with the CPU, but they are still significantly faster than system RAM, so they still help when the working set can fit there.

Re:Memory performance? (3, Informative)

Rockoon (1252108) | more than 2 years ago | (#41150461)

To add to this, Sandy Bridge has an L1 latency of 4 or 5 cycles (depending on access mode), an L2 latency of 12 cycles, and an L3 latency of 46 cycles; a trip all the way to the memory chips adds their response time on top of that (typically 60ns to 70ns).

These chips make up for the high latencies by having many instructions in flight simultaneously: if one dependency chain stalls completely on a cache miss, other dependency chains can still fill the execution units, keeping the processor just as busy as if there were no stall at all -- until everything left in the pipeline depends on the result of the stalled operation.
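To put rough numbers on it, here is a naive average-memory-access-time calculation built from the latencies quoted above; the clock speed and hit rates are illustrative assumptions, not measurements, and real out-of-order execution hides much of this anyway:

    # Naive average memory access time (AMAT) from the parent's latency figures.
    CLOCK_GHZ = 3.4                  # assumed clock, for converting ns to cycles
    L1, L2, L3 = 4, 12, 46           # latencies in cycles (from the parent post)
    DRAM_NS = 65                     # DRAM response time in ns (from the parent post)
    dram_cycles = L3 + DRAM_NS * CLOCK_GHZ   # a full miss goes through the L3 first

    h1, h2, h3 = 0.90, 0.70, 0.80    # invented hit rates, for illustration only
    amat = (h1 * L1
            + (1 - h1) * (h2 * L2
                          + (1 - h2) * (h3 * L3
                                        + (1 - h3) * dram_cycles)))
    print(f"AMAT ~= {amat:.1f} cycles, before out-of-order execution hides any of it")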

Re:Memory performance? (1)

BBCWatcher (900486) | more than 2 years ago | (#41150747)

Yes, everybody does that (out-of-order execution, pipelining, etc., etc.). And then... you still need to keep the CPU well fed to boost performance. Enormous 4-level caches help do that. Having a continuous 5.5 GHz clock speed is also quite helpful. So is having 101+ cores that can access the same cache rather than, say, 8 such cores. And there are a couple hundred (at least) other IBM performance tricks, many of which cost money to deliver and thus probably won't find their way into the save-a-nickel parts of the market any time soon. It also very, very seriously helps when you design the hardware and software together, as the late great Steve Jobs (among others) reminded us all.

Re:Memory performance? (0)

Anonymous Coward | more than 2 years ago | (#41150165)

Wow.... I don't mean any offense to you personally, because I know most programmers are in a similar boat these days, but it astonishes me that so many people claim to be "software guys" these days with so little understanding of how computers work. Again, don't take this as a personal attack, because I know it's the most common case for almost all programmers in the last few decades who have never really learned how computers work. I've seen it most often in people cut from the Java programmer mold: they (most often) use the machine in tragically inefficient ways, sometimes leaving orders of magnitude on the table.

There is a significant penalty in both latency and throughput to access memory, and this has been true for a long time (meaning, for many, many generations of CPUs). Covering these latency and throughput limitations is a major design goal of a modern CPU. Of course you hope to hit some level of cache, but if you miss every level of cache and must fetch from memory, the latency can be in the range of many dozens up through many hundreds of cycles, depending on the architecture in question, so there is no issue with having to see the result in a single cycle - it's bound by other factors. Even L2 cache accesses have a latency penalty, although not that severe - it can be several up through a dozen or two cycles, again depending on the machine architecture. This latency can be covered by various techniques such as out-of-order execution, or hardware threads which use the same execution units during the latency period.

Re:Memory performance? (0)

Anonymous Coward | more than 2 years ago | (#41150587)

Wow.... I don't mean any offense to you personally, because I know most programmers are in a similar boat these days, but it astonishes me that so many people claim to be "software guys" these days with so little understanding of how computers work.

These are the same people that believe the claims that C is a low level language.

it is..relatively speaking (1)

Chirs (87576) | more than 2 years ago | (#41153529)

I would argue that the jump from machine code to assembly to C is much smaller than the jump from C to lisp/bash/perl/prolog

Article says 'may be', i.e. no (-1)

Anonymous Coward | more than 2 years ago | (#41149577)

If IBM had the world's fastest processor, don't you think they'd be trumpeting it?!

So what you have is an article saying 'may be', for a processor whose only distinguishing feature is its clock.

Take a Core i7, stick it under a nitrogen cooler, turn the clock multiplier up to its maximum (57 x 100MHz for the model I have, i.e. 5.7 GHz), then compare the two and see which is the faster processor.

The Core i7 will smoke this chip by a factor of 3-5.

Re:Article says 'may be', i.e. no (1)

ciderbrew (1860166) | more than 2 years ago | (#41149703)

But their chip isn't overclocked to near death and using a nitrogen cooler, so I've no idea how your comparison works. The headline may as well read "Ford makes fastest road car" so you can say it isn't as fast as a car you've converted to use a jet engine and drag-racing tires.

Re:Article says 'may be', i.e. no (3, Insightful)

bws111 (1216812) | more than 2 years ago | (#41149765)

So, you're comparing a ridiculous configuration of a nitrogen-cooled, over-clocked processor that will maybe run long enough to get a screen shot of it running, to a commercial processor that is designed to run at that speed non-stop for years and years? Yeah, that makes sense.

No, the basic Core i7 Extreme will smoke it too (0)

Anonymous Coward | more than 2 years ago | (#41150395)

No, the basic Core i7 Extreme will smoke it too, running at normal clock speeds. However, if you're foolish enough to judge a processor by its clock speed, the Core i7 can win that race too, simply by turning up the cooling and the clock multiplier to its maximum supported value (which would be 5.7GHz on my older i7). IBM's chip is water cooled, BTW.

IBM avoids benchmarking its mainframes for good reason.

Re:No, the basic Core i7 Extreme will smoke it (4, Informative)

BBCWatcher (900486) | more than 2 years ago | (#41150829)

OK, here's a benchmark. You're welcome to try running an entire large bank (for example) on one server -- your choice. OK, two servers: I'll allow you one additional for off-site disaster recovery of all development, test, and production workloads, including concurrent batch and online, for all the bank's security zones. Choose wisely, Grasshopper.

Do you believe in fairies too? (-1)

Anonymous Coward | more than 2 years ago | (#41151023)

Before you start talking up vague claims, realize that I do telecoms billing, and that's far more transactions than bank billing. I wouldn't do telecoms bill calculations on a mainframe because a) they can't handle the load -- people use the phone thousands of times for each time they make a bank transaction, b) they cost too much, without proper benchmarks to show any value, and c) I'm not gullible enough to listen to your FUD.

Now you might argue that it CAN handle the load. Well fine, let me benchmark it, and we'll see if it's worth the money.

There's a reason IBM doesn't benchmark its kit, because the benchmarks ain't pretty.

Re:Do you believe in fairies too? (2)

BBCWatcher (900486) | more than 2 years ago | (#41151481)

So you don't like my benchmark then and want another benchmark? OK. I chose a perfectly reasonable benchmark: the number of servers (X) needed to deliver a particular real-world business outcome, where smaller X is better. A benchmark is simply a measurement to assess particular criteria (such as X) against a particular outcome (such as running a bank). I can agree that an IBM zEnterprise EC12 server is not the answer to every IT problem. It is, however, the answer to many. And if you can't agree to that, then you simply have more to learn. (How exciting!)

Thanks, I've already found some Benchmarks (0)

Anonymous Coward | more than 2 years ago | (#41152115)

It seems the z114 class at least is benchmarked: their entry-level $75k server is 26 MIPS, and that money would buy you a big rack of Core i7 servers (each with 177,730 MIPS, i.e. 4500 times faster)

http://www.tech-news.com/publib/pl2818.html

So as my own experience of IBM mainframes tells me, they're just too slow. You can claim some magic security gain, but the reality is they don't have enough processing power to do any extra security checks. You can claim extra reliability, but then for the same money I can buy 10 servers and have 10 mirrors running. You can make some vague 'particular real-world business outcome' claim, but I have to prepare real world bills in real world time, and sales talk doesn't crunch numbers.

Re:Thanks, I've already found some Benchmarks (4, Informative)

bws111 (1216812) | more than 2 years ago | (#41153181)

OK, let's put some of this stupidity to rest.

First, nobody who knows anything uses MIPS to compare performance between two different architectures. MIPS is only marginally useful in the best of conditions, and even then is only useful as a relative measure between two machines of the same architecture running the same workload.

Second, Core i7 servers execute 178 BILLION instructions every second, on average? Seriously? 80 instructions per clock cycle, sustained? Bullshit.

Third, your nice shiny rack of Core i7 servers doesn't mean anything if it can't run your software.

Fourth, the actual performance of a Z114 processor is around 780 MIPS, not 26. So why do they have that 26 MIPS 'dialed down' model? Because some customer asked for it. Why would a customer pay $800K for a 780 MIPS machine when he only has 26 MIPS of workload? Why would the customer pay software licensing fees for a 780 MIPS machine when he only has 26 MIPS of workload?

Fifth, 'your experience' with IBM mainframes is non-existent, or you wouldn't be making these stupid mistakes and claims.

The Emperor has no clothes (1)

Anonymous Coward | more than 2 years ago | (#41153449)

Points 1, 2, and 3 apply to this chip too. At the end of the day, it's a chip running Linux timeslices and Java. It can be benchmarked and it can be compared, even if IBM runs away from comparisons.

Point 4: the table lists an entry server of 26 MIPS for $75,000, which will buy you a big rack of Core i7s. You mention 780 MIPS; The Register article (mentioned in a comment lower down) estimates 1,600 for the top-of-the-range chip, i.e. about 1% of the processing power of the i7.

Presumably that's at a top-of-the-range price in the millions, but let's ignore that for a second.

If I switched from the current rack of 4 Core i7s to this IBM mainframe, I would have 0.25% of the processing power. I would probably first ditch the integrity checks and security checks; they're expensive to calculate, and I wouldn't have the processing power. If the floating point is as bad as the MIPS, then I would probably have to switch some of the calculations from float to integer or fixed-point math and round, again with lots of problems and contractual headaches. I would calculate the bill less often and not be able to update the live online bill, again a result of the lack of processing power. We would raise prices, due to the high cost of the mainframe.

The consequence of vague claims not backed up by hard reality would be devastating.

After a lot of vague talk from you, when forced to, you finally make a benchmark claim: 780 MIPS, and it's pitiful. Even if it were a $1,000 computer it would be pitiful.

I'm now wondering if my tablet (an Asus TF700, quad-core 1.6GHz) is faster than your mainframe, because these numbers from you and others who've benchmarked IBM kit are sooo low.
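For what it's worth, the percentages in this post do follow from the figures being quoted, if (and it's the big "if" other replies dispute) you treat a Dhrystone MIPS and a mainframe MIPS as comparable:

    # Ratios implied by the numbers quoted in this subthread; whether the two
    # kinds of MIPS are comparable at all is exactly what is in dispute.
    I7_MIPS = 177_730            # Dhrystone figure quoted for a Core i7
    ZEC12_CORE_MIPS = 1_600      # The Register's estimate for a top-end zEC12 core
    RACK_OF_FOUR = 4 * I7_MIPS

    print(f"one zEC12 core vs one i7:      {ZEC12_CORE_MIPS / I7_MIPS:.1%}")       # ~0.9%
    print(f"one zEC12 core vs 4-i7 rack:   {ZEC12_CORE_MIPS / RACK_OF_FOUR:.2%}")  # ~0.23%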

Re:Thanks, I've already found some Benchmarks (1)

Jeremy Erwin (2054) | more than 2 years ago | (#41153239)

Your 177,730 MIPS figure is mirrored by this Wikipedia page [wikipedia.org]. Using the same criteria, 30 MIPS is around a 33 MHz 80486, or perhaps even a 68040.

Unless you have an irrational suspicion of IBM, it's fairly reasonable to assume that a mainframe MIPS is not a Dhrystone MIPS.

Re:Thanks, I've already found some Benchmarks (3, Informative)

mlts (1038732) | more than 2 years ago | (#41153293)

The CPU isn't the only component that matters in a mainframe. Mainframes tend to have large I/O buses, and that is something that tends to be forgotten when people talk about CPU power.

Mainframes are designed to do business tasks, be they CICS operations, DB2 transactions, or other integer-based operations that require tons of data going in and tons of data going out at a time. This is why IBM has such a good caching design: the ability to get the numbers into and out of the CPUs is what mainframes are built for.

If someone expects top-notch floating-point performance, expect to be disappointed. MIPS and sheer bus bandwidth rule the roost in this section of computing.

Re:Do you believe in fairies too? (0)

Anonymous Coward | more than 2 years ago | (#41151937)

If you are trying to claim that you do telecoms billing on a single x86 server, I am going to claim that you are either at a very small telecom, or are lying.

You claim that mainframes 'can't handle the load', then you claim 'there are no benchmarks'. Well, which is it?

Anyone who decides what machines to use for a major installation such as a telecom based on 'benchmarks' is a complete moron. Unless the benchmark happens to match your workload exactly (and they never do) the benchmark is worthless. The only proper way to benchmark is to try YOUR workload on multiple machines, then make a decision. I am guessing that you are not even aware that you can take your workload to IBM to benchmark it, are you?

Telecom Billing is a Miserable Failure (0)

Anonymous Coward | more than 2 years ago | (#41152361)

Given the faults I've seen with telecom billing at every level (POTS, cell, trunk, global), I'm going to go with "your example is best used as a counter-example". By the way, how do you know they can't handle the load when you don't have any benchmarks? They're all bullshit, but there's a benchmark relevant to most business problems.

Re:Do you believe in fairies too? (1)

Nerdfest (867930) | more than 2 years ago | (#41152857)

This is what I've seen working in a mainframe shop as well. The performance is not great, and the OS and tools were horrid (this is z/OS, not Linux). The costs were astronomical for the performance as well. The only thing they can really claim is very good reliability, but in the end, it's human error that gets you every time. We had well-administered Windows servers running databases, etc., that were kicking the mainframe's ass in both performance and uptime (systems, not hardware). If you never change *anything* that might cause problems, you'll have great uptime, but so would a cluster of Linux boxes, with better price and performance. There are very few workloads where a mainframe is a benefit, and the only thing keeping most people there is the difficulty of leaving, which requires rewriting software and tools.

Re:No, the basic Core i7 Extreme will smoke it (2)

bws111 (1216812) | more than 2 years ago | (#41151467)

What exactly are you basing your claims on? Just pulling things out of thin air?

Here are some things that IBM's customers care about; where are the Core i7 Extreme numbers for these?

How many CICS transactions can I process per second? How many IMS updates? How about DB2 transactions? How many SSL transactions? What differences are there in performance for on-line vs batch processing? Can I tune the system to maximize performance for my particular workload?

This is great news, however... (0)

Anonymous Coward | more than 2 years ago | (#41150009)

...those speeds are only achieved with the Turbo button depressed,
otherwise it's 8MHz for compatibility with legacy software.

His name. (0)

Anonymous Coward | more than 2 years ago | (#41150037)

[...] But Jeff Frey

That's when I stopped reading and a train of thought departed the station.

Except.... (1, Informative)

Ancient_Hacker (751168) | more than 2 years ago | (#41150275)

Except the 5.5GHz may not be all that fast, as the Z line of CPUs implements the old IBM 360 instruction set, which is so large, complex, and baroque that it is usually implemented through a thick layer of microcode.

So 5.5GHz may be the speed of the microcode level; the actual "machine instructions" may run at a considerable sub-multiple of that.

Re:Except.... (0)

Anonymous Coward | more than 2 years ago | (#41150517)

Actually, they are supersets of the original S/360. There are now over 1,000 different instructions (I don't remember the exact count). Many of these have a somewhat "RISC"-like flavor. I like the new "compare and branch": a single instruction which compares two registers, or a register and an immediate value, and branches if the result matches the comparison mask (which can cover >, >=, ==, <=, <, and !=). Previous generations did a compare, setting the condition code register, followed by a separate branch. Also, these new instructions do not set the condition code.

Also, some of the instructions are "hard-wired" -- basically the "simple" ones. The "complicated" ones are not microcoded; IBM calls them "millicoded", because the implementation is actually kept in main memory and is written in a superset of the "hardwired" instructions plus some other "special" instructions which are not accessible except when the processor is in "millicode" mode. I.e., the "complicated" instructions are more like subroutine calls to special hardware subroutines.
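A tiny Python model of the compare-and-branch semantics being described, with an invented mask encoding purely for illustration (the real instruction encodes the allowed compare outcomes as a bit mask in the instruction):

    # Sketch of a fused compare-and-branch: one operation replaces the old
    # compare (set condition code) + conditional branch pair, and it leaves
    # the condition code alone. The mask encoding here is illustrative only.
    def compare_and_branch(a, b, mask, target, fall_through):
        outcome = "lt" if a < b else "gt" if a > b else "eq"
        return target if outcome in mask else fall_through

    # Branch to 0x4000 if r1 <= r2 (mask allows "less than" and "equal").
    print(hex(compare_and_branch(5, 9, {"lt", "eq"}, 0x4000, 0x1008)))   # 0x4000
    print(hex(compare_and_branch(9, 5, {"lt", "eq"}, 0x4000, 0x1008)))   # 0x1008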

Re:Except.... (3, Interesting)

BBCWatcher (900486) | more than 2 years ago | (#41150543)

No, that's not a correct supposition -- quite the opposite, actually. All processors, including Intel X86, use microcode (or what IBM calls millicode) to a degree. IBM knows it well. After all, they invented microcode/millicode in the System/360 in 1965. But IBM uses microcode comparatively less nowadays than other processor architectures. The vast majority of zEC12 instructions are implemented entirely in hardware, including IEEE-754-2008 decimal floating point as an example. There's some really, really interesting new stuff in the instruction set, like the first transactional memory ("transaction execution facility") instructions in a commercial server, and some "feedback" instructions that can tell Java applications/the JVM how to dynamically tune itself in a live running environment. Very cutting edge -- so cutting edge I've got to crack open some engineering manuals to try to figure out what they've done, although they probably need to write those manuals.

Re:Except.... (4, Informative)

Guy Harris (3803) | more than 2 years ago | (#41151709)

No, that's not a correct supposition -- quite the opposite, actually. All processors, including Intel X86, use microcode (or what IBM calls millicode) to a degree.

At least from what I've read about the past few generations of S/3x0 chips, millicode is more like PALcode on the Alpha processor than like traditional microcode, i.e. it's a combination of regular machine code and processor-specific instructions that access specialized registers etc., running in a special processor mode with (presumably) fast entry and exit, support for said processor-specific instructions (which presumably trap in either both "problem state", i.e. user mode, and "supervisor state", i.e. kernel mode), and its own bank of general-purpose registers (part of the "fast entry and exit"). Instructions implemented in millicode trap to millicode routines that implement them.

What IBM called "microcode" rather than "millicode" was implemented using processor-specific instructions completely different from the machine's instruction set (instructions often having fields that directly controlled gates).

(And then there's System/38 and the pre-PowerPC AS/400, where the processor instruction set was a CISC instruction set implemented using microcode, and where the compilers available to customers generated code in an extremely CISCy instruction set [ibm.com] that the low levels of the OS translated into machine code and ran. For legal reasons - they didn't want to have to be required to make the low-level OS code available to "plug-compatible manufacturers", i.e. cloners - they not only called the microcode that implemented the processor instruction set "microcode" ("horizontal microcode", as it probably was "fields directly control gates"-style horizontal microcode), they also called the aforementioned low level OS code "microcode" as well, even though it ran from main memory and its instruction set was the instruction set that was actually executed in application code ("vertical microcode"), and had the group working on that code report to a manager in the hardware group. See Frank Soltis's Inside the AS/400.)

IBM knows it well. After all, they invented microcode/millicode in the System/360 in 1965.

"Invented", no; the paper generally considered to have introduced the concept was "Microprogramming and the Design of the Control Circuits in an Electronic Digital Computer" [microsoft.com] , by Maurice Wilkes and J. B. Stringer, from 1953. S/360 may have been the first line of computers to use microcode in most of the processors (S/360 Model 75 was, I think, implemented completely in hardwired logic).

Very cutting edge -- so cutting edge I've got to crack open some engineering manuals to try to figure out what they've done, although they probably need to write those manuals.

Well, for the previous generation, Volume 56, Issue 1.2 of the IBM Journal of Research and Development [ieee.org] has some papers on the z196, but, alas, not for free online. They may publish an issue on the zEC12 at some point.

Re:Except.... (0)

Anonymous Coward | more than 2 years ago | (#41151253)

Excuse me but that's bull. The base instruction set is relatively clean and most code doesn't use microcode execution.

http://www.realworldtech.com/z196-mainframe/

Re:Except.... (0)

Anonymous Coward | more than 2 years ago | (#41151573)

You said so many wrong things it's hard to know where to begin.

z/Arch is an evolution of S/360. S/360 had very few instructions, probably fewer than an Intel 8080. Today's z is documented in about 1,300 pages; today's Intel x86_64 takes over 4,000 pages and has more instructions than z.

The instruction set now is large in comparison with S/360, but it is not large compared to Intel's, and it is certainly not complex or baroque compared to Intel's.

IBM's hardware and OS can run mixed 24-, 31-, and 64-bit code in one load module (executable), and 64-bit doesn't automatically take 2x the footprint, unlike Intel, who still can't get it right.

Intel's stuff is also microcoded and implemented as RISC under the covers. You're as ignorant as you are stupid and biased. Have a nice day, luser!

Re:Except.... (1)

Guy Harris (3803) | more than 2 years ago | (#41153403)

and 64 bit doesn't automatically take 2x the footprint

Are you referring to instruction density, or to data structure density with 64-bit pointers?

unlike Intel, who still can't get it right.

Actually, that's AMD's fault, if you're complaining about the x86-64 architecture (unless you blame Intel for not having done a better job than AMD).

Intel's stuff is also microcoded and implemented as RISC under the covers.

Yup, the whole "translate the "complicated" instructions to microops and schedule and run the microops independently" stuff is also being done in the z196 and, presumably, its zEC12 successors. As IBM zEnterprise 196 microprocessor and cache subsystem [ieee.org] (not available for free) says:

In the z196 microprocessor, the traditional System z CISC (RX) pipelines are split into multiple shorter latency reduced-instruction-set computing (RISC)-like execution units, and the complex z/Architecture* instructions are cracked into RISC-like microoperations.

("RX" means "memory-to-register or register-to-memory instructions" - they include loads and stores, but also include memory-to-register arithmetic instructions).

In modern x86 processors (dating back at least as far as the Pentium Pro), most instructions are, as far as I know, directly implemented in hardwired logic (or translated into microops that are directly implemented in hardwired logic).

So, at the fetch/decode/execute level, a Shiny New Core i{3,5,7} and a Shiny New zEC12 are rather similar - directly executing register-to-register ops in one clock tick, carving register-to-memory/memory-to-register instructions into microops and directly executing each microop in, I suspect, one clock tick (modulo waiting for memory in the load or store microops), and executing multiple microops in parallel, out of order, register renaming, blah blah blah. The more complicated instructions might be implemented in Shiny New x86 processors by jumping to microcode (which I suspect is made out of microops in Pentium Pro and later) and in Shiny New z/Architecture processors by a fast trap to millicode (which is z/Architecture code + some special millcode-mode-only instructions - think "PALcode"), but even that's not a huge difference.

I.e., yes, the person to whom you're replying is very mistaken. Current z/Architecture machines (and at least the CMOS S/390's) don't interpret the instruction set in microcode, with the clock rate being the speed at which microinstructions are run, with several microinstructions being required for every instruction; I'd say "the clock rate is the instruction rate", except that the processors are superscalar, so more than one instruction could be processed in a single clock tick.

don't blame intel for size of 64-bit binaries (1)

Chirs (87576) | more than 2 years ago | (#41153643)

The x86_64 CPUs support a mode called x32, where they use the 64-bit CPU mode but with 32-bit pointers. The hardware supports it, but there is relatively little software support; a Linux port is currently in progress.

Damn marketdroids (1)

Anomalyst (742352) | more than 2 years ago | (#41150303)

How about a price list in TFS for budget planning?

Overpriced crap (1, Interesting)

bored (40072) | more than 2 years ago | (#41150581)

5.5GHz probably makes it about as fast as a two-year-old Intel machine. I should know; I have a z114 (the previous generation, at 3.8GHz) that I've done extensive benchmarks on. The fact that IBM refuses to publish standard benchmark numbers (specCPU, specVM, etc.) should be sufficient proof that they are not pretty.

I can say that the people buying these things are pretty much smoking some fine IBM drugs. Sure, they are actually fairly competitive (but still not class-leading) at the high end, but the low end, which starts at ~$200k after disks and licenses for 26 MIPS, is abysmal. At that price/performance, Hercules [hercules-390.org] on a midrange desktop PC doing software emulation (and it's not even JITed) runs somewhere between 5-15x as fast.

A 26 MIPS mainframe is roughly equal to a Pentium 90. A full-blown 3.8GHz z114 is roughly equal to a 5-year-old x86 server.

Worse yet is FICON, which generally is just a giant layer of inefficiency sitting in front of standard SCSI/SAS disks. So the I/O numbers are pretty abysmal too.

Basically, you have to spend >$400k before the mainframe catches up to what you can do on your desktop with a free emulator.

If you're running Linux on z, then you're really deluded. In fact, you're probably better off taking the HMCs, SEs, and CUs that it comes with and running Linux on them directly. The only minor saving grace is that IBM doesn't rape people for the unlocked processors that run Linux (IFLs).

Further, IBM's claims of easier manageability are a joke. I can install ESXi and a half-dozen Linux machines in the time it takes an expert systems programmer to set up the HCD, install z/VM, and start configuring a Linux machine. Oh, and I can migrate the image with a couple of mouse clicks. Plus, I don't have to manage my data stores as a bunch of tiny disk images because z/OS still prefers to deal with mod-3 (~3GB) and mod-9 (~9GB) disk partitions. I literally have a few hundred partitions on a machine with just a couple of TB of storage. If you think managing a few dozen VMware disks is a problem, multiply it by 3-8x on z to run Linux.
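A quick sanity check on the "few hundred partitions" figure, using nominal mod-3 and mod-9 capacities (roughly 2.8 GB and 8.5 GB; exact sizes depend on track geometry, so treat these as approximations):

    # How many 3390 mod-3 / mod-9 volumes does a couple of TB turn into?
    TOTAL_GB = 2 * 1024                       # "a couple TB" of storage
    for name, gb in (("mod-3", 2.8), ("mod-9", 8.5)):
        print(f"{TOTAL_GB} GB as {name} volumes: about {TOTAL_GB / gb:.0f}")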

Frankly, if you have COBOL, JCL, whatever running on these things and you're not desperately trying to migrate to another platform, then you must either be extremely rich or really stupid. The maintenance savings alone over ten years will run to seven-figure sums, which should be more than enough to hire a couple of programmers and a system administrator to port and maintain the apps on a machine that costs $20k every 5 years.

Re:Overpriced crap (4, Insightful)

Wovel (964431) | more than 2 years ago | (#41151223)

You have a point, but you missed it. At least talk in terms of modern workloads: these machines are running over 1,500 MIPS. Your talk of systems running 25-30 MIPS is silly. If your z114 is running at 25 MIPS, it is broken. Really, really broken.

No single-processor desktop CPU can handle that. Not even dual processors. Hercules is nowhere near the performance of a modern Z series mainframe.

Can you build a server complex with more MIPS for less money? Absolutely. The question becomes what is the cost and risk of migrating that legacy application.

z114, 26 MIPS, Corei7 177730 MIPS (1)

Anonymous Coward | more than 2 years ago | (#41151437)

The entry-level z114 is indeed 26 MIPS for $75,000.

http://www.tech-news.com/publib/pl2818.html

Which is a joke, surely? $400,000 will get you 330 MIPS, which is, erm, surely a mistake???!! It's way, way too low:

http://en.wikipedia.org/wiki/Instructions_per_second

A Core i7 is 177,730 MIPS at 3.3 GHz.

Re:Overpriced crap (0)

Anonymous Coward | more than 2 years ago | (#41152485)

Maybe a z196 is in the 1,500-15k MIPS performance region, but the z114 really is a 25 MIPS machine at the low end. If you want a mainframe for under $1M, though, the z114 is probably what you'd get. The z196 is more like $1M-$30M! Heck, your monthly maintenance fee is probably going to be more than an entry-level z114.

dom.

Re:Overpriced crap (1)

Anonymous Coward | more than 2 years ago | (#41151659)

I'm a Linux admin working on two z196 machines, and every day I want to make my job obsolete, since these overpriced hunks of IBM crap make no sense at all. They fail against a mid-range x86 Xeon rackmount server in every non-I/O benchmark. Additionally, not a single business application or piece of middleware (including pretty much everything from IBM, including Tivoli* and DB2) is optimized for s390 or z/VM virtualization.

Re:Overpriced crap (3, Interesting)

gstoddart (321705) | more than 2 years ago | (#41152577)

5.5Ghz probably makes it about as fast as a 2 year old intel machine. I should know, I have a z114 (previous generation at 3.8Ghz) that i've done extensive benchmarks on. The fact that IBM refuses to publish standard benchmark numbers (specCPU, specVM, etc) should be sufficient proof that they are not pretty.

I can say that the people buying these things are pretty much smoking some fine IBM drugs

I'm quite sure that for the applications people actually use mainframes for, you're utterly wrong.

Not only do they scale massively higher in terms of throughput, they also manage to do it with obscene uptimes (measured in years) and reliability nothing can compare to.

For certain kinds of applications, what you say is largely true. But at the huge end for things like banking, financial transactions, and airline reservations ... there's really no comparison.

The maintenance savings alone over ten years will run to seven-figure sums, which should be more than enough to hire a couple of programmers and a system administrator to port and maintain the apps on a machine that costs $20k every 5 years.

I've worked on projects trying to do exactly this. And I've seen a couple of them fail.

Trying to map out all of the use cases for software which is mission-critical and has been around since the '60s can prove to be exceedingly challenging, if not impossible.

I'm just not convinced that, for the kinds of applications and environments where people run mainframes, what you suggest would give the same performance or scalability as a big giant mainframe. There just seems to be something missing from that picture, and to me it's the sheer volume of stuff these things handle. It's certainly not even in the same category as what you call a midrange desktop.

Re:Overpriced crap (1)

Lawrence_Bird (67278) | more than 2 years ago | (#41153341)

You shouldn't be using a mainframe if you are making these types of arguments. Mainframes were never meant to be the fastest at single tasks where a MIPS rating might be relevant. They are meant for processing enormous amounts of data/transactions with high throughput and reliability (including fault tolerance). People aren't spending this kind of dough on something they don't need or think they could get cheaper/better elsewhere. Wasn't the mainframe declared dead in the '80s? The '90s? 2000? Where are the competitive systems you seem to think are possible? Quit your job and go make some. Just don't collect unemployment when you fail.

Too bad TurboHercules collapsed (1)

Jeremy Erwin (2054) | more than 2 years ago | (#41150817)

We could have gotten some meaningful benchmarks. According to this Register article [theregister.co.uk]:

When you add it all up, the single-engine performance of a z12 chip is about 25 per cent higher than on the z11 that preceded it. IBM has not released official MIPS ratings (a throwback to the days when IBM actually counted the millions of instructions per second that a machine could process) for the z12 engines, but given that the top-end core in a z11 processor delivered 1,200 MIPS, that puts the z12 core at around 1,600 MIPS.

Back when TurboHercules was still around, in 2009, Tom Lehmann claimed [oracle.com]

By the by, we can run a reasonably sized load (800MIPS with our standard package). If the machine in question is larger than that, we can scale to 1600MIPS with our quad Nehalem based package and we have been promised an 8 way Nehalem EX based machine early next year that should take us to the 3200MIPS mark. Anything bigger than that is replicated by a collection of systems.

On the other hand, if your old and creaky code can't be divvied up among a multiplicity of cores, the existence of a far cheaper 64 core, 8 way Nehelem EX machine (or its current equivalent) that's almost as fast as a single zEC12 core doesn't much matter.

Re:Too bad TurboHercules collapsed (1)

Jeremy Erwin (2054) | more than 2 years ago | (#41150971)

Arrgh
  That last sentence should be "On the other hand, if your old and creaky code can't be divvied up among a multiplicity of cores, the existence of a far cheaper 64 core, 8 way Nehelem EX machine (or its current equivalent) that's only twice as fast as a single zEC12 core shouldn't much matter."

Re:Too bad TurboHercules collapsed (1)

BBCWatcher (900486) | more than 2 years ago | (#41151037)

OK, now go license 64 cores of Oracle DB (for example) and get less performance than one core on a zEC12, as you say. I'll help you out: you'd probably pay about $1.5M in database software licensing plus $300K+ in annual maintenance for your 64 x86 cores, versus $47K and $9.4K for a zEC12 core. And that's one cost factor among many, not the only one. So which server is "cheaper"? Is a bicycle cheaper than a truck? (Not an Olympic racing bicycle, probably.) It depends on what you're trying to do. Though I've noticed that the average Slashdot poster hasn't a freakin' clue about IT economics, sadly.
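Taking the parent's licensing figures at face value (they are the poster's numbers, not list prices verified here), the arithmetic over a five-year window looks like this:

    # Database licensing comparison using the figures quoted in the parent post.
    X86_CORES = 64
    X86_LICENSE = 1_500_000          # licensing for 64 x86 cores (as quoted)
    X86_MAINT_PER_YEAR = 300_000     # annual maintenance (as quoted)
    Z_LICENSE = 47_000               # one zEC12 core (as quoted)
    Z_MAINT_PER_YEAR = 9_400

    YEARS = 5
    x86_total = X86_LICENSE + X86_MAINT_PER_YEAR * YEARS
    z_total = Z_LICENSE + Z_MAINT_PER_YEAR * YEARS
    print(f"x86, 64 cores, {YEARS} yr:  ${x86_total:,}")   # $3,000,000
    print(f"zEC12, 1 core, {YEARS} yr:  ${z_total:,}")     # $94,000
    print(f"ratio: about {x86_total / z_total:.0f}x")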

Re:Too bad TurboHercules collapsed (1)

bored (40072) | more than 2 years ago | (#41151167)

Those are emulated cores under Hercules. If you were running Oracle, you would run it natively on the cores, and in that case it's closer to 1:1 with the mainframe.

Re:Too bad TurboHercules collapsed (1)

Wovel (964431) | more than 2 years ago | (#41151397)

Huh? I think his Oracle example misses the point, but your statement is a bit off. In any case, I doubt you see a lot of people going the Oracle-on-z route.

Re:Too bad TurboHercules collapsed (1)

bws111 (1216812) | more than 2 years ago | (#41153469)

The numbers provided by TurboHercules are most certainly complete bullshit. Actual MIPS are determined by running standard benchmarks against a simulated workload. A 1,200 MIPS machine is going to be driving a whole lot of I/O in those benchmarks, and there is no way that Hercules' emulated processor and emulated I/O are going to be able to pull that off.

If they didn't test with IBM's standard LSPR tests, their numbers are useless.

Benchmarks? (2)

noname444 (1182107) | more than 2 years ago | (#41150989)

I'll believe their claims when I see some test results to back them up.

Re:Benchmarks? (1)

BBCWatcher (900486) | more than 2 years ago | (#41151635)

Only test results? (Yes, 5.5 GHz is fast. A test -- or even a spec sheet -- will tell you that.) But aren't real world results more useful? Go visit any large bank's (for example) data center if they'll let you. How many transactions, how much batch, etc. (and concurrently) do they push through their (one or two) IBM mainframe(s)? And has it ever quit? Is it secure? Does it...work?

Re:Benchmarks? (0)

Anonymous Coward | more than 2 years ago | (#41151977)

I have actual benchmarks at work, and a 2.4 GHz Xeon beats a 5.2 GHz z196 CPU by far in pretty much every non-I/O benchmark.
And from my experience I would say the mainframe isn't more secure than an x86 box.

cat TFA | sed -e 's/Flash Express/Cache Express/g' (1)

hAckz0r (989977) | more than 2 years ago | (#41151785)

The article seems to have typo-ed in the editing phase. The technology is "Cache Express" not Flash Express. Flash memory is SLOOOOOwwww memory. Do a Google for "IBM L2 Cache Express" if you are interested.

With flash memory you read a block, flip some bits, and write it back to modify that block. Not only that, but flash memory will wear out after so many reads and writes. That would be devastating to a CPU.
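Whatever the product is actually called (see the reply below), the block-rewrite behavior described here is real for NAND flash. A simplified model, for illustration only -- real controllers add wear-leveling, page-level programming, and spare blocks, none of which is modeled here:

    # Simplified model of why in-place updates are awkward on NAND flash:
    # a block must be erased before any byte in it can be rewritten, and each
    # erase consumes part of the block's limited endurance.
    class FlashBlock:
        def __init__(self, size=4096, endurance=100_000):
            self.data = bytearray(size)
            self.erases_left = endurance

        def update(self, offset, new_bytes):
            snapshot = bytes(self.data)           # read the whole block out
            self.erases_left -= 1                 # erase it (costs endurance)
            self.data = bytearray(snapshot)       # write it back...
            self.data[offset:offset + len(new_bytes)] = new_bytes  # ...with the change

    block = FlashBlock()
    block.update(10, b"\x42")      # flipping one byte still costs a block erase
    print(block.erases_left)       # 99999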

Re:cat TFA | sed -e 's/Flash Express/Cache Express (1)

BBCWatcher (900486) | more than 2 years ago | (#41152375)

No, no typo. There's indeed Flash Express -- and yes, IBM's engineers have figured out a way to add yet another memory tier using (very high quality) flash memory. The processor can directly address it -- it's all mapped within the 64-bit virtual address space from what I've read. Yes, it's slower than DRAM but it's faster than storage-attached SSD (which at least has a longer distance to travel). Flash Express is great for things like paging, memory dumps, gigantic in-memory databases, and certain things that Java wants, so that's how operating systems and databases will use it. IBM even encrypts everything that lands on this memory-addressable flash, just in case someone tries to physically rip it out of the server. (Yes, they thought of that.)


CICS (1)

big dumb dog (876383) | more than 2 years ago | (#41153209)

Long live CICS!

BBCWatcher == IBM Employee (0)

Anonymous Coward | more than 2 years ago | (#41153619)

Seems someone is astroturfing quite a lot. Some of the Anonymous Cowards sound suspiciously like him as well. And the lingo is exactly the lingo you hear from IBM sales people.

While I do think that the mainframe is underappreciated and am happy about the attention on Slashdot, I'm not very fond of such shady tactics... Either state that you're an IBM employee trying to market your product, or stop pretending to be just a Slashdotter without an agenda and shut up. Thanks.
