Managing Linux and Virtual Machines?

deijmaster asks: "For a couple of months we (a major consulting firm) have been hearing IBM people push the possibility of installing a Z/Linux VM setup at one of our biggest clients (a financial firm). To a Linux user such as myself this sounds great, at first. Now, I am a bit reluctant when it comes to managing this kind of infrastructure with little or no local expertise available from IBM. Has anyone gone through a Z/Linux VM corporate installation and lived through the management of such a solution?"
  • Linux/390 resources (Score:5, Informative)

    by Dammital ( 220641 ) on Wednesday September 03, 2003 @08:24PM (#6864666)
    Check out Mark Post's Linux for S/390 [linuxvm.org] site. He collects SHARE papers, distribution info, and pointers to other resources. Lots of good stuff.

    Oh, and the Marist linux-390 listserver [marist.edu] is well worth subscribing to.

  • You WILL need help (Score:4, Insightful)

    by salty_oz ( 457779 ) on Wednesday September 03, 2003 @08:25PM (#6864674)
    If you have never touched VM, then you will be well and truly out of your depth. It's a whole different world from Unix/Linux.

    So you will have to get a VM person in, probably only on a part-time contract, and IBM can provide that person for an additional fee.

    In time you may learn enough to support your very limited VM environment.
    • by Anonymous Coward on Wednesday September 03, 2003 @08:51PM (#6864854)

      Actually, VM itself is fairly straightforward to administer.

      The biggest hurdle will be:

      • terminology -- for example, DASD instead of disk drives
      • sysprog (Systems Programmer) instead of system administrator
      • A userid is just another name for a virtual machine. In other words, if you have 10 Linux systems, each is represented by its own userid (LINUX1, LINUX2, etc.) -- see the sketch after this list.
      • Within VM itself, the concept of daemons (services) is a bit different. Each daemon is installed in its own virtual machine (a Service Virtual Machine, or SVM). For example, the TCP/IP stack is its own virtual machine, the FTP service is another virtual machine, etc. VM provides a very efficient inter-virtual-machine communication system.

        This is also where security comes in. Each SVM is genuinely isolated from the others.

      • There is no concept of a root user within VM. Instead, individual virtual machines have privileges that restrict what each can do. In addition, authorizations for services are handled by the service itself (for example, being able to control the TCP/IP stack requires that the controlling user be authorized by the TCP/IP stack itself).
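
      To make the userid idea concrete, here is a rough sketch of what a minimal directory entry for one of those Linux guests might look like. This is purely illustrative -- the names, device numbers and sizes are made up, and exact statements vary by VM level, so check the directory documentation:

          USER LINUX1 XXXXXXXX 512M 1G G
            IPL 0200
            MACHINE ESA
            CONSOLE 0009 3215
            SPOOL 000C 2540 READER *
            SPOOL 000D 2540 PUNCH A
            MDISK 0200 3390 0001 3000 LNX001 MR

      The first line names the virtual machine (the userid) and gives its default and maximum storage and its privilege class (G, a general user); the MDISK statement carves a minidisk out of a real DASD volume for the guest to IPL Linux from.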

      One thing to remember too is that VM was (and still is) used by many Universities and colleges -- not as much as it was back in the 70's and 80's, but it still has a presence.

      Anyway... just some comments from an old-timer VM sysprog.

    • by Anonymous Coward on Wednesday September 03, 2003 @09:07PM (#6864959)

      So you will have to get a VM person in


      Yes, this is true, but if you are going to run Linux, you only need one VM person. The rest of your admins should be Linux admins.

      Don't imagine that the VM person will understand (or even like) Linux and don't expect your Linux admins to understand (or even like) the Mainframe.
      • by Anonymous Coward on Thursday September 04, 2003 @01:44AM (#6866258)

        Actually, that's not quite true (that the VM person won't understand or like Linux).

        Linux has actually given VM a shot in the arm. VMers are well aware of that fact. I support a couple of VM systems during the day and play around with Linux at home at night.

        Also, in terms of culture, I think you would be very surprised by the VM culture. VM spent many years as the unwanted child; VMers had to rely on each other in order to be heard above the MVS roar.

        If you take a look at some of the history of the internet you will find VM sitting there (BITNET was basically a collection of VM systems). The listserv concept was originally from VM (CERN was -- might still be -- a big VM site).

        If you want to see some of the history of VM you can start here: http://pucc.princeton.edu/~melinda [princeton.edu]

      • Don't imagine that the VM person will understand (or even like) Linux and don't expect your Linux admins to understand (or even like) the Mainframe.

        AC forgot to mention... Don't expect your Linux admins to understand (or even like) the VM person. ;)

  • by mao che minh ( 611166 ) * on Wednesday September 03, 2003 @08:28PM (#6864692) Journal
    I don't have a lot of experience with these things, but I am positive that there are plenty of "pure Linux" solutions that will be far more flexible -- even when using IBM middleware.

    1. What exactly demands this solution?
    2. Can a pure Linux box, with mild tweaking, still not be more useful and create less overhead than this?

    Someone in this thread mentioned IBM implementing wildly complex systems in order to push consulting, and on some levels it's true. PeopleSoft does it also. In some cases, Oracle will have a go at this tactic. My advice is to do some searching first, without the input of IBM, and see if you can't find a better solution to whatever problem you're trying to remedy.

    • What other solutions are "pure Linux"? An IBM mainframe has virtualization built in natively, hence it shouldn't take a performance loss. This means I highly doubt that an all-Linux solution could approach its power (considering that you can run something like 200 instances of Linux on one without any hassle).
      • Yes, IBM Z-series mainframes running Linux are some glorious beasts, but the same level of performance can be reached with a finely tuned cluster -- and for far cheaper, with considerably less overhead (overhead that costs a lot of money as well).

        Again, we need to know more about his institution's needs before we can confidently declare which solution (Z/Linux or a cluster) is the best fit.

        • Re:Clusters (Score:4, Insightful)

          by DA-MAN ( 17442 ) on Wednesday September 03, 2003 @09:07PM (#6864958) Homepage
          Have you ever dealt with a cluster? Large clusters are fucking expensive to run 24x7x365. They require a lot of air conditioning (we spend over $1,000 a month on AC alone; that's an expense that is never going away), electrical power, and a shitload of space.

          I know this is Slashdot, but a beowulf is not always the best choice!!!
          • by Glasswire ( 302197 ) on Wednesday September 03, 2003 @09:24PM (#6865068) Homepage
            To paraphrase YOU...

            Have you ever dealt with a MAINFRAME? Large MAINFRAMES are fucking expensive to run 24x7x365. They require a lot of air conditioning (many people spend over $1,000 a month on AC alone; that's an expense that is never going away), electrical power, and a shitload of space.

            And the difference is what? For most applications, clusters, for all their faults, are faster and cheaper than mainframes.
            • Re:Mainframes (Score:5, Interesting)

              by lrichardson ( 220639 ) on Wednesday September 03, 2003 @10:41PM (#6865505) Homepage
              Not so much anymore.

              They're a heck of a lot less wasteful (electrickery into heat) than they used to be, and require a lot less space (again, compared to the past).

              Clusters ... I don't know where you get the 'faster and cheaper' line, unless you're talking about applications specifically designed for clusters. When you start writing apps designed for a few thousand simultaneous users, the benefits of the mainframe become apparent. Stability. Speed. The ability to hold gobs of info in RAM. Which, BTW, makes them the nearly ideal web server. Security (hey, it's not M$!). Mainframes are a mature technology ... meaning lots of the annoying things (both hardware and software) still plaguing the small boxes have been fixed. (Admittedly, 'mature' often translates into 'f$cking obsolete pos' (e.g. Panvalet).)

              I don't worry about backups conflicting with apps on the mainframe. I don't worry about the details of storing things redundantly (although that's quickly getting solved on the smaller boxes). For those things written on WinWhatever, the programmers need to worry about every little upgrade/patch from M$.

              Now, most places still give mainframes a room of their own ... and it tends to be a bigger room than servers get. And, if you're happy with something a little slower and a little less reliable, a good farm runs less than a mainframe.

              But, to put things in perspective, one of my databases (non-mainframe) is moving to a USD 2.1 million machine. That's a fraction -- as in, 1/4 to 1/20 (depending on options) -- of the cost of a mainframe.

              I'm working in both worlds. I like the cost benefits of the smaller boxes. But it still freaks me out when users punch in a query and it takes several seconds (to minutes) for a response, when on the mainframe the delay is over by the time the Enter key pops back up.

              • IANAC (I am not a compiler) but as I understand it CPU and RAM are not what make mainframes so much faster for large scale transaction loads than desktop machines ... the I/O throughput of big iron is what makes them able to handle the bigger loads.

                A box with a few 3GHz CPUs in it isn't CPU bound anymore - it is I/O bound (back and forth to the memory, hard drives, and users). If a desktop box can get a 40% boost in performance by doubling the amount of on-CPU cache - that means it is outrunning the I/O o
            • I thought I remembered that the early Crays used water cooling. So why not use a water cooler and dump the heat outside the building, instead of inside where you have to air-condition the room?

              I guess the tubing logistics would be a pain in the *, as would the PSUs generating heat.

              Since AC costs so much, I can't imagine that people haven't thought of newer or different ideas to save costs in this area.

              Any innovative solutions out there? This sounds like it could be a good idea for a /. story.

              • Just a bit of trivia:

                Later Crays were cooled by immersion in Fluorinert, a man-made liquid that has also been used as a blood substitute in transfusions.
    • I think you have absolutely no idea of what kind of cluster you can build with such a solution. Just think about the channel transfer rate on a mainframe and imagine you can virtualize the network and benefit from this transfer rate.

      And that's just part of the story...

  • No fear... (Score:5, Interesting)

    by (H)elix1 ( 231155 ) <slashdot.helix@nOSPaM.gmail.com> on Wednesday September 03, 2003 @08:30PM (#6864708) Homepage Journal
    Just consider it VMware for the big boys. I'm doing a wee bit of development for Linux on the zSeries, and most things just work once you get it installed. Lots of options, depending on how you carve up the system. Anyhow, for the most part it is all about fast I/O rather than monster processing power.

    Picked up Linux on the Mainframe [barnesandnoble.com] over the weekend, but plan to read it on a (very long) plane ride next week - looked like it focused on care and feeding, however.
  • not cost effective (Score:5, Informative)

    by Anonymous Coward on Wednesday September 03, 2003 @08:35PM (#6864739)
    Well, I used to work at a similar financial company where IBM was pushing something similar as well. What it boiled down to were the following issues.

    1. For the equivalent number of VMs, it was more cost effective to buy new Intel hardware. The annual maintenance cost for the IBM more than paid for all-new hardware.
    2. Software availability. The only things you could run on it were home-grown apps or existing open source apps. No commercial software was available. This company was an all-Oracle shop, no DB2. Their primary open-systems backup solution was NetBackup, which at the time (a year ago) had no client for Linux on Z.
    3. In-house expertise. They had no Linux expertise and very little Unix (Solaris & HP) expertise -- junior admins at best -- let alone for running Linux on a Z.

    So to sum it up: it's a very expensive, somewhat proprietary and inflexible environment. If you have a specialized use for it and can justify the cost, go for it. Otherwise stick with commodity Intel/AMD hardware. It'll be cheaper and easier in the long run.
    • by bunyip ( 17018 ) on Wednesday September 03, 2003 @08:59PM (#6864912)
      Yes, you're right on the "not cost effective".

      BTW - I've ported a number of programs to Linux/390 (an IBM G6 mainframe) and compared them to Linux on my 1 GHz Athlon cobbled together from left over parts and a motherboard from Fry's. The net result is that the Athlon is about twice as fast as the G6 mainframe.

      The latest and greatest mainframes are about twice as fast as a G6, but PCs have come a long way since 1 GHz. Currently, 1 CPU on a mainframe running Linux costs about $100K, you can buy a pretty impressive Intel server for that price.

      So, Linux on S/390 is only effective when you have a bunch of machines with utilization close to zero - let's call it "epsilon", which is what we mathematicians say when we really want to say zero but still need to divide by it. You buy the box for VM, which can run hundreds or even thousands of instances, securely and stably, so long as most of them are doing nothing.

      Linux/390 is great for experimental servers, test systems, etc. OTOH - if you have any significant workload, buy a rack-mount PC.

      Alan.
      • by Covener ( 32114 ) on Wednesday September 03, 2003 @09:08PM (#6864970)
        Disk IO, reliability, workload management and power consumption are also probably relevant in that equation (and on the side of z/linux)


        Linux/390 is great for experimental servers, test systems, etc. OTOH - if you have any significant workload, buy a rack-mount PC.
        • Exactly!

          Management costs are high for dedicated servers that are almost idle but, for many reasons, still required as dedicated servers. Also, reliability is an issue when you suddenly multiply low-cost servers, which in turn reflects on the management costs, hardware cost and downtime cost.

        • by bunyip ( 17018 )
          Perhaps I should have said a significant single workload. Workload management and power consumption are critical, and definitely in favor of VM and z/Linux, when you have many, many underutilized servers.

          It all comes down to crunching the numbers. I think IBM is actually pretty honest about z/Linux; they're not trying to sell it as a supercomputer but rather as a consolidation solution.

          FWIW - I work for a very large company with thousands of servers. We have dozens of them with utilization of approximate
      • The net result is that the Athlon is about twice as fast as the G6 mainframe.

        That depends on your definition of speed.

        Mainframes aren't bought for raw MIPS.

      • by sql*kitten ( 1359 ) * on Thursday September 04, 2003 @04:37AM (#6866837)
        The latest and greatest mainframes are about twice as fast as a G6, but PCs have come a long way since 1 GHz. Currently, 1 CPU on a mainframe running Linux costs about $100K, you can buy a pretty impressive Intel server for that price.

        A mainframe is not a supercomputer - the former is built for I/O, the latter for processing speed. There's a little overlap, but they address different markets - commercial data processing (for example, updating the accounts of your 10M customers with 100M transactions a day) versus scientific number crunching (say, modelling a weather front). An IBM mainframe is designed from the ground up for I/O speed, and has dedicated processors that do nothing but shuffle data from storage to memory and back again. You cannot buy an Intel-based system that even comes close to a Z series for I/O at any price. Even clustering Intel boxes won't help, because at some point you have to have something to manage and consolidate all that data so you can do useful things with it. If you need it, there simply is no substitute for a mainframe. But if you don't, I agree that a mainframe is overpriced, just like a tank is overpriced if you just want to drive to the supermarket.
    • by LinuxHam ( 52232 ) on Wednesday September 03, 2003 @09:00PM (#6864920) Homepage Journal
      Mod AC down. He's working with **very** old data, and is generalizing about the industry from one customer experience. And yes, one year is a helluva long time in Linux on Z. Just about every IBM app for Linux on Intel has been ported to Linux on Z by now. It is by no means limited to open source apps anymore (yes, a year ago, it was).
    • I'm not doubting anything in your post, but do you have published paperwork regarding this cost and benefit analysis? It will help me give a better overall picture of this issue because I anticipate that IBM will "recommend" something similar for some of my existing clients.
    • by hackus ( 159037 )
      I would have to agree.

      It is always a better choice to use clusters of hardware than a single box.

      You have a variety of tools available in the open source world now to monitor and automagically maintain your cluster, depending on what you choose. The most popular is PVM, and it comes with a ton of very nice management utilities you can get off the net, to manage hundreds of machines in the blink of an eye. This is a very configurable cluster architecture.

      There is also MOSIX, or open Mosix. A very nice co
      • That is a very simplistic view, and it blatantly ignores that one of the points of the article you responded to was the lack of proprietary software -- software that is often just as unlikely to support cluster configurations using PVM or MOSIX.

        It also blatantly ignores that on a cluster, YOUR APPLICATION needs to explicitly manage availability and reliability, or a single node can potentially take down everything if it fails, whereas a mainframe has extensive availability features built in. If you have a 16 node cluster,

  • by rimu guy ( 665008 ) on Wednesday September 03, 2003 @08:36PM (#6864745) Homepage

    I haven't worked with a Z/Linux VM before. However, I have used User Mode Linux [sf.net] to create a dozen or so virtual servers per host server. And I'd imagine that the benefits offered by UML would also apply to Z/Linux VMs.

    For example, with UML you're able to get much better resource utilisation, e.g. most of the time the machine is idle. When one of the UML servers needs the host server's resources, they're there (CPU, network, disk I/O, etc). That means you can have multiple UML servers bursting up to the performance potential of the host server. Certainly better resource utilisation than having several host servers running mostly idle.

    Another benefit of virtual machines is their logical separation from the host server. Each virtual server has its own users (including root), applications, file systems, IP address, etc. That means that if security is compromised on one, the others are unaffected. Ditto, resources can be allocated to each virtual server according to need, and any misconfiguration on one doesn't affect the others. Compare this to running multiple applications on the same server for different purposes (e.g. running HR and Accounting systems on one server: if email goes down, both systems are affected, whereas in a virtual server setup only one or the other would be affected).
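
    To make that concrete, a UML guest is just an ordinary process you start on the host, pointed at a root filesystem image and given its own memory and network settings. An illustrative invocation only -- the file names are made up, and the option syntax has shifted between UML releases:

      host$ ./linux ubd0=linux1_rootfs.img mem=64M \
            eth0=tuntap,,,192.168.0.254 umid=linux1

    Kill the process (or shut the guest down) and that "server" is gone; copy the image file and you've cloned it.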

    So... Thumbs up to server virtualization software in general. Particular kudos to UML. And good luck finding out about Z/Linux!

    - P
    RimuHosting.com - Linux VPS Hosting [rimuhosting.com]

    • by SuperBanana ( 662181 ) on Wednesday September 03, 2003 @09:24PM (#6865065)
      Another benefit of virtual machines is their logical separation from the host server. Each virtual server has its own users (including root), applications, file systems, IP address, etc. That means that if security is compromised on one, the others are unaffected. Ditto, resources can be allocated to each virtual server according to need, and any misconfiguration on one doesn't affect the others. Compare this to running multiple applications on the same server for different purposes (e.g. running HR and Accounting systems on one server: if email goes down, both systems are affected, whereas in a virtual server setup only one or the other would be affected).

      Ahh yes, grasshopper, but when that one uber-box dies (hard disk, fan, power supply, whatever), gets powered off by accident, has its network cable unplugged, yadda yadda -- it affects ALL the virtual machines.

      Granted, with the Big Iron you've got lovely hot-swap capabilities and such (processors, memory, etc)... but nothing is foolproof or 100% reliable. It's the old joke among pilots about twin-engine airplanes; the door swings both ways and there's no such thing as a free lunch. On one hand, you've got a spare engine if one dies; on the other, you're twice as likely to have a failure, you've got a lot of added complexity, and sometimes it still won't save your bacon (twin-engine planes have an abysmal survival rate for engine failure, in part because of the really shitty way they fly with one engine down). This is VERY applicable, because managing this big IBM server is much more complex (the whole point of this article) than managing separate hardware.

      The best example I can think of where hot-swap still doesn't save the bacon is the Cisco PIX 5-something (the 1U pizza-box one). It has FULL failover: if you've got two, and one shits the bed COMPLETELY, the other one takes over absolutely everything, including active connections; they share ALL state information for what's called stateful failover. Aside from a momentary blip where things stop for a sec... nobody's the wiser that a piece of very expensive hardware just let the Magic Smoke out. The problem is that the PIX OS version we had was buggy and would crash randomly -- and because they were sharing connection tables and everything, they'd BOTH die, which was REALLY bad since the boxes didn't have hardware watchdogs (!). We turned off fully-stateful failover, and the problem went away; we'd notice they'd ping-ponged (there's an 'ACTIVE' LED to show you which is live) and we'd power-cycle the other.

      So ask the tough questions; instead of asking what's N+1, ask what's NOT N+1, and do a very careful breakdown of what exactly it will cost to run this big huge box, and figure out what the 'per [virtual] machine' costs are...

      • by afidel ( 530433 ) on Wednesday September 03, 2003 @09:46PM (#6865187)
        IBM mainframe complexes basically never go down. There are installations that have been running 24x7x365 for decades. That's the whole point of owning one.
      • I think you have your airplane analogy the wrong way around. The alternative to lots of VMs on a big mainframe is lots of smaller real machines. Which is more likely to crash, your IBM-Boeing 747 with four engines and four hundred passengers, or one of four hundred single engined light aircraft each carrying one passenger?

    • There's another option, too:

      Bytemark Hosting [bytemark-hosting.co.uk] offers Linux virtual machines via User-mode Linux.

      Bytemark supports Open Source with contributions to Debian and discounts for Open Source developers.

      Debian is one of the distro options. Primary DNS on Bytemark's DNS servers is included (running djbdns [tinydns.org], win win).
  • by edwardd ( 127355 ) on Wednesday September 03, 2003 @08:39PM (#6864776) Journal
    I work for a big financial firm in NYC that is using Z/Linux pretty heavily. I have to say that while we are very happy with the results, it is VERY important to have VM people on staff who are also Linux savvy. IBM has been great in getting us set up, but they don't live with the systems. We do. You'll need to be very careful about what you're using the Linux instances for, and take a look at how they'll use hardware resources, like the OSA cards.

    With careful planning, and the expectation that it will be a bumpy start, you'll find that it's a very rewarding experience, both personally and professionally.
    • by Alizarin Erythrosin ( 457981 ) on Wednesday September 03, 2003 @09:58PM (#6865256)
      "IBM has been great in getting us set up, but they don't live with the systems."

      This is a very, very, very important point to consider. If you let IBM run the whole shootin' match from a distance, and something goes wrong, expect downtime.

      This isn't related to Z/Linux, but it is related to IBM and their systems management. At the business of my employment we outsourced all our network/systems administration to IBM. In the past two months (July and August) we have had not 1, not 2, but 4 very major worm/virus infections that shut the entire network (as well as the business) down. IBM didn't keep any systems up to date on patches (and the corporate security department didn't help either... they approved Win2k SP4 in an awful hurry after they found out it contained the Blaster worm fix) and told us to leave our unprotected computers on 24/7 so they could update them "in the next few days." I leave the determination of what happened after that as an exercise for the reader.

      But hopefully IBM won't do that to your Z/Linux VM... hopefully you'll have someone on site who knows their stuff, even if it has to be you (hey, then you can ask for a raise!).
  • by Anonymous Coward on Wednesday September 03, 2003 @08:46PM (#6864821)
    I wouldn't get hung up on the whole "local" thing. You just have to understand how IBM works. There's no concept of "local" at IBM. At any one point in time, 50% of IBM employees aren't in a traditional work place.

    If you have problems, contact IBM and they will get their best people on it. IBM is all about customer service. You never get fired for buying IBM. From an engineer's perspective, it's a PITA: the best people in a department end up spending most of their time working on customer problems.

    Hell, IBM still supports OS/2. If a Z-Series seems to solve your problem, go for it. IBM will take care of you.
  • Linuxcare? (Score:5, Informative)

    by chrisd ( 1457 ) * <chrisd@dibona.com> on Wednesday September 03, 2003 @08:48PM (#6864838) Homepage
    I understand that Linuxcare has a program specifically for managing Linux VMs on z series mainframes... I'd call 'em and see what they've got.

    http://www.linuxcare.com/ [linuxcare.com]

    Chrisd

  • Ahhh, IBM... (Score:2, Flamebait)

    by bob670 ( 645306 )
    finding ways to fish money out of your pocket with every solution. I would love to see how many hours of consulting this migration (and the ongoing support) will eat up. IBM, single-handedly making sure Linux isn't "free, as in beer".
    • IBM is a business; it's their job to sell stuff. The stuff may be hardware or it may be consulting, but they are going to do their best to sell it.

      Most people here have probably put together a cheap Linux box out of spare parts and the like. Such a box is great for a lot of uses, but do you want to trust your corporate servers to it? I don't. There are different types of things for different reasons. IBM makes it their goal to sell the high-end, high-bandwidth, high-reliability stuff, and yes its high price but
  • It works well (Score:5, Interesting)

    by dalslad ( 648100 ) * on Wednesday September 03, 2003 @08:56PM (#6864887) Journal

    I sold and installed the very first Linux application on the S/390 -- a Multiprise running VM -- and it worked great. We used the TurboLinux port and then finally wound up with SuSE.

    We compiled the source code and it ran just like it did on a big Intel box. IBM helped with hardware issues related to load balancing among the VM instances. One of their business partners supported the customer, Winnebago Industries, with regard to Linux and OS/390.

    IBM wasn't much of a factor as far as needing support goes. They supported the mainframe, the OS and VM just fine. SuSE installed without a single issue.

    Some other issues arose in getting the user to learn IBM mainframe lingo, such as IPL instead of boot, and DASD. But that didn't require much effort. The IBM Redbook on running Linux on the S/390 was all we needed to transfer knowledge. We downloaded it for free in PDF format.

    The main benefit I discovered was the ability to consolidate servers. We replaced a bunch of M$ Exchange servers and ran a suite of Open Source apps such as Cyrus IMAP, Open LDAP, Exim, Apache, etc. We were able to get rid of a bunch of distributed servers and put them on one instance.

    I suggest that IBM can help, but I don't think you'll be dependent on them. They're very expensive. With Linux on the zSeries or S/390 you can do everything yourself. -- That might not be what IBM wanted, but then they championed Linux, didn't they!

    • Re:It works well (Score:5, Interesting)

      by swillden ( 191260 ) * <shawn-ds@willden.org> on Wednesday September 03, 2003 @11:13PM (#6865699) Journal

      With Linux on the zSeries or S/390 you can do everything yourself. -- That might not be what IBM wanted, but then they championed Linux, didn't they!

      Oh, I think IBM will be just fine with that. Sure, they want to sell services, but selling mainframes is also a lucrative business -- more lucrative than services, in terms of gross margin. Whether you need their services or not, they're getting paid. Even if you already have the mainframe and don't need to buy one for the project, eventually you'll fill it up and need more power. They'll wait.

      IBM understands the value of long-term client relationships; they'll ultimately get a lot more of your money if they don't try to take it all now. What's really interesting about IBM is that they've now learned that they'll ultimately get a lot more of your money if they make it easy for you to buy elsewhere.

      See, if customers feel trapped, they'll often willingly suffer short term pain so that they can avoid being caged forever. IBM has realized that if they give the customer open solutions, (a) they gain a short term competitive advantage over closed solutions and (b) they make customers comfortable, happy and trusting. Then, as long as all of IBM's offerings are reasonably competitive, the simplicity and convenience of having a single IT vendor will allow them to paint the customer's entire infrastructure Blue, while the customer sits serene in the knowledge that if needed they can always go elsewhere. IBM, of course, will try to ensure that the customer never feels that need.

      IBM's ultimate goal in every client relationship is simple: They want to get to the point where whenever anything is needed, the CIO's first move is to call his IBM client rep. Or, even better, for the client rep to pick up on the CIO's frustrations and point out the need, and IBM's solution, during their twice-weekly lunches, or when they're golfing, or having a barbecue at one or the other's home. To achieve that, it's not necessary that buying IBM always be the best idea, only that it's the easiest idea and that it's *never* an egregiously bad idea.

      This works, and it's a strategy that's completely out of reach of every IBM competitor because none of them have the size to pull it off. Well, HP might have the size, but lacks the breadth and the customer focus, at least so far.

      So, in your case, IBM is just fine with it, because even if you didn't buy the services this time, you're an "IBM Shop", and you're happy. Some day when a new project comes up and your staff is just tapped out, or when a new technology comes along and you don't have time to understand or deal with it, the happy Blue glow that emanates from the smoothly-running z-Series will convince you to call IBM and "just let them handle it". And handle it they will, with a 30-40% profit margin.

      • Re:It works well (Score:2, Informative)

        by wagemonkey ( 595840 )
        Which is a great way to run a company. "Keep the customer satisfied" works for me. They know you can go elsewhere, so they try very hard to make you not need to, nor want to.

        My favourite analogy is that of European and American Roulette wheels, US wheels have a double zero and European ones don't. They'll likely both get the same money but the US ones want it quicker. (Of course all analogies are flawed, and this one ignores effectively free food and drink at US casinos...)

  • by Anonymous Coward on Wednesday September 03, 2003 @09:04PM (#6864939)
    I'm posting this as an AC because I'm an IBMer.

    Familiarity with Linux will not help you in setting up the zLinux environment. It works like this: you dedicate a few processors of your mainframe to Linux. These processors will run VM, which has:

    • a command-line environment
    • the ability to run scripts written in REXX
    • the ability to virtualize resources and give them to virtual machines defined as "users".

    The users are defined in a "user directory". There you can specify how much memory, disk and CPU share you want to give to each user. These users, remember, are in fact virtual machines that will boot an image of Linux compiled for the zSeries processor architecture.

    If you want to create and take down Linux images frequently, you'll have to install and customize some VM scripts that will do the job for you. Once the scripts are installed, you can set up a new Linux image (complete with its own disks, IP address, etc.) with a single operator command.
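
    In practice those scripts live on the VM side (typically in REXX) and simply stamp out a directory entry from a template, allocate minidisks, and autolog the new guest. Purely as a sketch of the idea -- every name and number below is made up, and this is not IBM's actual tooling:

      # Hypothetical sketch: render a VM user directory entry for a new
      # Linux guest from a template. Real shops do this in REXX, usually
      # through whatever directory manager they run.
      TEMPLATE = """\
      USER {userid} XXXXXXXX {mem} {maxmem} G
          IPL 0200
          MDISK 0200 3390 {start} 3000 {volser} MR
      """

      def directory_entry(userid, mem="256M", maxmem="1G",
                          start=1, volser="LNX001"):
          return TEMPLATE.format(userid=userid, mem=mem, maxmem=maxmem,
                                 start=start, volser=volser)

      print(directory_entry("LINUX42"))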

    Most sysadmins of a zLinux machine spend a lot of time in VM. So learning VM is essential if you are going to do this job. VM was created 30 years ago and is somewhat primitive in places, but the resource virtualization mechanism is incredibly powerful and makes up for it.

    Finally, make sure that people understand that there might be dozens of virtual CPUs defined under VM but only a few real CPUs. If you have 4 CPUs, a Linux user with an absolute CPU share of 25% will have the equivalent of one CPU. If the Linux image is used for recompiling its kernel, it might be a tad slow. The mainframe has great I/O performance but only run-of-the-mill raw CPU speed.
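
    (If I remember the CP syntax correctly -- treat this as illustrative -- an operator would set that 25% share with something like

      SET SHARE LINUX1 ABSOLUTE 25%

    and there is an equivalent SHARE statement for the user directory.)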

    Good luck.

  • by LinuxHam ( 52232 ) on Wednesday September 03, 2003 @09:22PM (#6865053) Homepage Journal
    I'm an IBM'er currently on assignment at the world's largest insurance company. I was brought in because they wanted to consolidate servers to a mostly-Linux solution. After piloting Samba 2 beta on zLinux last summer, they balked at the heavy reliance on Z.

    The key is for people to realize that the type of workload is critical when deciding to try zLinux, and any barking about Athlon vs. G6 is useless. Also, vendors need to realize that once you compile an app on Linux on any one platform, you're usually a recompile away from running it under Linux on any other platform. Hence my reasoning that complaints about software availability from a year ago are also useless. More apps are being ported to zLinux every day.

    Linux on Z has a role, it just needs to be explored by more brave souls. Besides, I've always said that if I leave the company, I'd like to create an "ISP in a box" using a z800 and some ESS disk to host a few thousand virtual web servers. I implore people to please visit Linux@IBM [ibm.com] for more information.
  • by smoon ( 16873 ) on Wednesday September 03, 2003 @09:25PM (#6865071) Homepage
    We've got a production Linux instance running under VM alongside our production VSE system. Since the box is fairly underpowered, we get a minimal slice of the CPU. This makes the system respond like a 286 with the 'turbo' button turned off.

    When the VSE instance bombs out for some reason and we get effectively 100% of the CPU, it responds like a Pentium... maybe. Think P166.

    Unfortunately, in our circumstance we can't 'turn on' more MIPS, because then our VSE instance is running on a 'bigger' machine and we end up doubling our licensing costs. The other alternative is to turn on the IFL (Integrated Facility for Linux), which dedicates 120 MIPS to Linux only without affecting other licensing, but that costs $150k. You can buy a lot of 2-way or 4-way Pentium boxes with decent RAID arrays, and get much better performance, for that kind of money.

    So if your shop is run by some sort of morons and you've got hundreds of spare MIPS to burn, then Linux on the mainframe probably makes some sense. Otherwise, just get some Intel boxes. Any savings the mainframe provides in terms of power, cooling, and lighter administration are going to be offset by massive complexity, poor performance, and a lack of easy support for a bizarre platform that few developers have access to.
  • by Sabalon ( 1684 ) on Wednesday September 03, 2003 @09:25PM (#6865073)
    I've not played with Linux on VM. However, from what I understand it is a sweet thing.

    I have played with other OS's running under VM. IBM knows what they are doing in that field.

    Combine the two, and I think you have something that should work well. However, I'd weigh the costs. I would think it would be a good thing to do if you already have a z-box lying around that has some spare cycles. However, I would think that a stack of Dells or something would be cheaper than buying the IBM equipment.
  • by knodi ( 93913 ) <softwaredeveloper.gmail@com> on Wednesday September 03, 2003 @09:33PM (#6865111) Homepage
    At my workplace, we run about two hundred corporate websites. The majority of those are on three boxes from Penguin Computing, and the bare minimum required by our contract with IBM are on the z-series. At first we thought it would be a great deal, and looked forward to moving all of our sites over to the high-performance IBM machine. But it failed EVERY SINGLE test we could think to throw at it, except trying to brute-force an RSA key.

    They're great number crunchers, but they don't hold up under any kind of pressure as a web server. We had the z-series with no sites on it run benchmarks and compare to our development box with 20 sites hosted, and the development box (Penguin Computing) kicked its ASS.

    Every time one of our developers has to ssh into the IBM machine, they yell "Cover me, I'm going in". Our running gag is, if they're not done editing the apache config or whatever in ten minutes, we'll have to send in a rescue team.

    My rational, scientific, carefully measured opinion is that the IBM z-series SUCKS. HARD.

    Gee, I sure wish I wouldn't get in trouble for sharing our benchmark data with you. Oh well, you'll have to take my word for it and hope the majority agrees.
    • They're great number crunchers, but they don't hold up under any kind of pressure as a web server. We had the z-series with no sites on it run benchmarks and compare to our development box with 20 sites hosted, and the development box (Penguin Computing) kicked its ASS.

      You clearly have no idea what you're talking about. Great number crunchers? I can't even imagine what your testing was.
      • by SysKoll ( 48967 ) on Thursday September 04, 2003 @12:21AM (#6866022)

        Covener, you're right. zSeries suck as number crunchers. They are great at intensive I/O jobs. They are great at consolidating servers that aren't all busy at the same time. But "brute-force an RSA key" is exactly what you don't want to spend your expensive MIPS for.

        BTW, I found that on a web server mettle test, large file transfer performance was better on zSeries than on RISC boxes. The larger the files, the more advantage to the mainframe. This is an interesting side-effect of having processors dedicated to I/O and freaking huge I/O bandwidth.

  • by TheLink ( 130905 ) on Wednesday September 03, 2003 @09:50PM (#6865210) Journal
    Unless you're in it for the mainframe-class hardware (and possibly the support).

    Coz for x86 servers, you can always use VMware, e.g. VMware ESX.

    Not sure if VMware has anything lined up for Opteron, but if that goes fine then it'll be cool.
  • Sticker Shock (Score:5, Interesting)

    by delcielo ( 217760 ) on Wednesday September 03, 2003 @09:58PM (#6865255) Journal
    We're fortunate to have a good solid VM guy, so implementation was no big deal on our dev box. But we've noticed a few things along the way...

    VM is expensive. Engines on the mainframe are expensive, and are the weak point in Z/Linux. Mainframes normally run batch types of workloads, and have great big fat I/O. They're not necessarily great processing powerhouses.

    You can download Linux and install it on the mainframe, but you get zero support. If you want support, open up that big old budget again. When we looked at it, SuSE wanted about $20k per year, and Red Hat wanted $24k. We flew solo instead. So far it's been fine; but be prepared to pay if you want support (which, by the way, is something the PHBs and mainframe systems programmers are used to having).

    As for operational considerations, I haven't really had any problems with it at all. There aren't many RPMs out there for zSeries Linux, but you can compile almost anything and use it.

    Installation is kind of cheesy, but not horrible. You basically set up your VM guest, log in to it, and FTP the Linux kernel, ramdisk and parmfile to the guest DASD, giving them a fixed record length of 80 bytes. You then feed these into a virtual card punch (that's right, a virtual Hollerith punch card reader - 80 columns = 80 bytes), then into a virtual card reader, and IPL the reader.

    This gives you a running instance of Linux that you can use to do a net install of the full distribution.
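
    From memory, the punch-and-IPL dance looks something like this from the guest's CMS session (the file names are made up, and the Redbooks have the exact incantation, so treat this as a sketch):

      spool punch * rdr
      punch sles kernel a (noheader
      punch sles parmfile a (noheader
      punch sles initrd a (noheader
      change rdr all keep nohold
      ipl 00c clear

    00C is the virtual reader, and CLEAR wipes guest storage before the kernel is loaded.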

    In the implementation class I took, I was partnered with a mainframe guy who was complaining about how archaic vi was. It made me laugh.

    "Dude. We just chopped my kernel into 80 byte blocks and fed it into a card reader. Don't talk to me about archaic."
  • by nomad_monster ( 703212 ) on Wednesday September 03, 2003 @10:03PM (#6865287)
    One of my clients, a large insurance firm in the New England area, is in the process of consolidating their NT environment onto VMware ESX Server, which is Linux-based. This is an IBM x440 running about 30 consolidated NT VMs. Since it's VMware, it can also run Linux VMs. They are saving about 500k annually on this setup in associated costs for hardware/support/environmentals. This was a pilot, and they are going to be moving forward with more consolidation based on it.

    This really isn't a new concept; most of us know of the IBM pSeries, Sun E10Ks and 15Ks, and the HP Superdome. All use virtualization in one form or another to provide this kind of setup. zSeries is kind of novel because... hey... it's a Mainframe.
  • an idea... (Score:3, Interesting)

    by alienhazard ( 660628 ) on Wednesday September 03, 2003 @10:09PM (#6865314)
    Would it be possible to use UML on top of OpenMosix? Theoretically this should allow you to have several cheap Intel/AMD boxes acting as one (with shared resources), and then running multiple Linuxes in UML would allow for an efficient use of those resources. In the end, wouldn't this be close to the Z series, just cheaper? I imagine it might be a bit trickier to admin, but it would be interesting.
    • It's unclear to me what you want. If you want to make many machines look like one, then what you're looking for is Single System Image clustering - HP/Compaq/whatever-their-name-is-today is working on providing that for Linux. Putting UML on OpenMosix simply doesn't make sense, as UML would need to virtualize a hell of a lot more than it does today, and make multiple UML instances cooperate over OpenMosix, just to run ONE instance of Linux spread over multiple machines. To run multiple instances of Linux, you would h
  • by Anonymous Coward on Wednesday September 03, 2003 @10:19PM (#6865376)
    Grain of salt, yada yada...

    I second the idea that it is very important to have VM skills on site for a customer looking at this. Presumably the customer is already a z/Series account, so they probably already know a thing or two, but they may have bought into the "VM is going away" speech and gotten rid of their VM stuff years ago and gone to z/OS.

    Even if they have VM skills from 5-7 years ago, that will still do. VM hasn't changed all that much; it just has some more bells and whistles. So one or two refresher courses for whoever is still around in their shop will get them up to speed on z/VM 4.4 if they knew it back in the day.

    And yes, Linux on VM is still young. Most shops appear to do a lot of 'roll your own' solutions to the administrative problems. Get hooked into the Marist linux-390 mailing list; there are a lot of smart folks there who have at least thought about any problem you're likely to have.

    I've run/tested every one of SLES 7 and 8, Red Hat 7.1 and 7.2, RHEL 3 beta, TurboLinux (old and crusty now) and Debian, with pretty much any IBM middleware you could think of. From the Linux side, it doesn't know anything about VM, or care. So you, as the administrator, must make sure it plays politely with the others it lives with. You probably should not just throw 2 gigabytes of storage at it just because WebSphere says it needs it. Running Linux under VM does require some understanding of how to make the most of shared resources. Check out this redbook:

    http://publib-b.boulder.ibm.com/Redbooks.nsf/RedbookAbstracts/sg246824.html?Open

    It makes a lot of these points better than I can.

    --Anonymous Coward cause I forgot my password :(
  • by adamthornton ( 101636 ) on Wednesday September 03, 2003 @10:31PM (#6865451) Homepage
    I've done plenty of these. I'm sure a little Googling will reveal who I work for and that I'm probably not lying. I'm also not an IBMer.

    As with anything, "it depends." In my experience, Linux/390 under z/VM works best in I/O-intensive, heavy-throughput roles. Do not throw CPU-intensive work at it. If you need CPU, either build an Intel farm or use an architecture that's designed for serious computing, like a pSeries.

    From a manageability standpoint, you will be flabbergasted at how much easier it is to manage a z/VM box with 100 Linux instances on it than it is to maintain 100 rack-mounted x86 boxes. And once you get your legs under you with VM, it's amazing how tunable the system parameters are. FCON/ESA (now Performance Toolkit, in z/VM 4.4) is really, really your friend in terms of determining where the system hotspots are. And once you've tasted deploying additional servers in two minutes without leaving your chair, it's really hard to go back to old-skool provisioning.

    Adam
    • A genuine inquiry: just why is it easier to manage a single z/VM box with 100 Linux instances on it than it is to maintain 100 rack-mounted x86 boxes?

      I can see that installation of a virtual machine sure is a lot easier than plugging in a physical one.

      But presumably each of these 100 Linux instances has to be upgraded, managed, patched, etc. separately? So why is management easier under VM? Either way, I'm going to be sitting at my desk logging into 100 Linux instances.

      • Take the lowest MTBF for equipment in a server. Now divide by 100. With z/VM that MTBF is multiplied by the number of layers of redundancy, and is a rather large positive integer to begin with.

        There's also the fact that with some basic scripting and use of some nifty open source tools (and the insanely fast inter-VM "networking"), you can maintain one box and all the rest just fall into line according to the frequency of your cron jobs. If you need to reboot, well, that's darn quick too, and requires zero r

      • by Anonymous Coward on Wednesday September 03, 2003 @11:43PM (#6865855)
        With VM you can have all 100 instances of Linux share the same system disks read-only, install code on one, and then each can pick up the updated code with an /etc/init.d/blah restart command.

        And that restart command can be issued from a VM service machine (PROP, the programmable operator) whose sole function is to issue commands to all the Linux machines and make sure they do it.

        So basically it's rpm -Fvh foo.rpm on the master disk image, followed by a RESTART FOO message to PROP and you're done.

        (Note - I'm not Adam - but I can vouch that he does know what he's talking about and this is my guess at what he'd say)
  • by HoldmyCauls ( 239328 ) on Wednesday September 03, 2003 @10:53PM (#6865586) Journal
    RMS: Who the hell are these "Z" people, and why are they stealing my thunder???
  • One thing.. (Score:2, Interesting)

    by tuomoks ( 246421 )
    Good, bad and decent experiences -- but what do they have in common? I didn't see one comment where the installation put any money into hiring and/or educating good support. Compared to the number of people needed to support these server farms, it's always less expensive to have some good people around for the mainframe and (IMHO) Linux. Sorry -- I'm old, 30+ years with VM (yes), but from MFT/MVT/DOS to MVS and Unix, Tandem, Windows, and the only thing that will make a difference is one or two good, knowledgeable persons (IMHO). I love
  • I know IBM (Score:3, Interesting)

    by digidave ( 259925 ) on Wednesday September 03, 2003 @11:41PM (#6865843)
    I have dealt very closely with IBM engineers for several years. I pushed Linux on them years ago and they pushed back with AIX (we settled on NT due to costs); then they pushed Linux the next time around. I actually ran one of their first production WebSphere-on-Linux web sites, back when their very beta WebSphere for Linux was released as a final version.

    I still won't claim to be an expert, but given my background with IBM I would have to say that if they are recommending it, then it's probably the worst thing you can possibly do. IMO, they experiment on their customers. At least they did on me. The worst part is that my experience shows they are not very adept at getting people in to help you through problems. They'll send somebody who's read the manual, and he'll hack around for a few days before calling in the real guy.
  • by buddha42 ( 539539 ) on Thursday September 04, 2003 @12:22AM (#6866031)
    If you think the dissenting opinions in this thread are bad, read the Linux-390 mailing list. One thing just about everyone on the list agrees with is this: do not buy a mainframe just to run Linux. If you already have a mainframe that has some spare CPU time, look into consolidating simple services onto Linux VMs. Generally speaking, Linux on the mainframe relies on "well, in this case" situations that make it cheaper. For instance, you can use Samba on a Linux VM to get a very reliable file server, but DASD and Sharks are bloody effing expensive compared to pretty much any other system. However, if you already have a well-engineered backup system and all the necessary licensing, perhaps that tips the costs back in favor (or at least to break-even). There are a great many who see Linux/390 as half "geeks looking to do something nifty with Linux" and half "IBM looking to show off its Linux commitment and get some free press about its mainframes", because when you really learn about all the options, benefits, and limitations, there are surprisingly few situations where it's worth it.
  • Kinda surprised (Score:3, Interesting)

    by Serapth ( 643581 ) on Thursday September 04, 2003 @12:36AM (#6866078)
    we have been hearing (as a major consulting firm) IBM people pushing the possibility of installing a Z/Linux VM setup at one of our biggest clients (financial)

    Reading this sort of shocked me... in the past I worked for a major Canadian trust company (hint hint) and contracted to a different major Canadian bank, and both were in bed with IBM. In all honesty, I'm a bit shocked you have any say in the matter at all! From what I saw of the IT departments at both banks, if IBM said it was right... it was right. Hell, I was hired to port a bunch of Visual Age C++ framework code (I forget the name now, but it was IBM's equivalent to MFC, on OS/2 and Windows) to a Java-compatible object model, so that eventually all their systems could be ported to Java. If you remember, a few years back (perhaps 5) IBM was the biggest supporter of Java outside of Sun. Before that it was OS/2, and for a while there I believe it was Smalltalk (before my time...). Now IBM has attached itself to Linux, and will consult all of their major customers to do this migration as well. The thing is, both the companies I worked with did what IBM said almost blindly... hell, as far as I understand it, they are still porting away from OS/2 to this date... poor bastards. I'm glad I left that world behind.

    I guess the old adage is true: you never get fired for choosing IBM. You get a good look at the politics within a bank, though... and you will see that's where most managers' interests lie... self-preservation... not doing what's right. What's the moral? Hmmm... I suppose it's just that you should consider yourself lucky that the financial institution you are working with even questions IBM's judgement.

    As to the VM solution itself, I have to admit that particular technology is one I have had no direct experience with. However, unless you have the budget for a complete backup server (as in standby, not as in storage...), I don't like the concept in general. If you can't hot-swap a server in place of the Z, you are playing with fire putting all your eggs in one basket. I don't care how many redundancies are built in... you are still running multiple important tasks on one box. If you do have a hot-swappable backup... obviously your budget is bigger than mine :). Personally I would stick with rack mounts... or use a data center (offsite) if the opportunity presents itself.
    • Re:Kinda suprised (Score:3, Insightful)

      by vidarh ( 309115 )
      Of course you need redundancy, but instead of having umpteen different servers you need backups for, you only need two Z-series servers -- or you can quite likely achieve the redundancy you need by outfitting a single Z-series machine properly.

      The Z-series supports taking CPUs out of commission for replacement without downtime. Same for RAM. Multiple hot-swappable SCSI controllers connected to a fully redundant storage system such as the ESS/Shark (where you can connect to two separate banks of controllers, so tha

  • by darnok ( 650458 ) on Thursday September 04, 2003 @02:21AM (#6866409)
    Imagine a Beowulf cluster of ...
  • as always, don't use a hammer to turn a screw, etcetera, etcetera.

    If you have a solution that requires heavy compute, low-to-medium I/O, and no large shared memory (unless you cough up the money for Myrinet, SCI, or Quadrics), go with a Linux cluster using x86, x86-64, or (shudder) Itanium 2 CPUs.

    If you need a high-throughput environment with fairly good compute, shared memory or not, go with a large Unix machine, like an SGI Altix or Origin 3000, an IBM POWER4+ box, or an HP Superdome (or whatever they're calling them
  • by oPless ( 63249 ) on Thursday September 04, 2003 @09:25AM (#6868024) Journal
    In soviet Russia you don't run anything on VM, VM runs YOU
  • ... using VM. Not everything can be measured in pure dollars and cents. Consider: all the stuff written about "what if this or that fails because I have only one box" can largely be ignored. All that fail-over stuff is built in under the skin of the box. Just because you don't see it as multiple distinct boxes doesn't mean it's not there under the covers (multiple power supplies, CPUs, busses, etc.). When something goes wrong in an app, you can generally cross hardware problems off right away. That's because, i