How Far Can Large Commercial Applications Scale?

clusteroid81 asks: "I've been working with customers who run large commercial applications on big iron (16-32 processor symmetric multiprocessing systems with 64 GB or more of memory). There are always numerous other front-end servers involved, but the application on the back-end server is often difficult to spread across multiple systems or clusters due to the application architecture. Scaling is done by increasing memory and processor counts. As things progress, the bottleneck is usually contention within the application or operating system. Are there folks here on Slashdot who work with large single-system commercial applications? What kind of processor counts and memory do the applications have, and how well do they scale?"
  • Unfortunately, once you use up the "local" processors you're forced to branch off to other attached machines. If the app is embarrassingly parallel, you only need 2 tin cans and a string. However, if these applications rely on low-latency, high-bandwidth connections between processors, you're going to get greatly diminished returns by using clusters connected by 10G Ethernet or Myrinet.
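
    A rough back-of-the-envelope cost model makes the latency point concrete. The sketch below is illustrative only; the latency and bandwidth figures are hypothetical stand-ins, not measurements of any particular interconnect:

      # Rough cost of moving work off-box: time = latency + bytes / bandwidth.
      # All figures are hypothetical, chosen only to show the shape of the curve.

      def transfer_time(msg_bytes, latency_s, bandwidth_bytes_per_s):
          # Time to push one message across the interconnect.
          return latency_s + msg_bytes / bandwidth_bytes_per_s

      # (one-way latency in seconds, usable bandwidth in bytes/s) -- hypothetical
      links = {
          "10GigE (kernel TCP)": (50e-6, 1.0e9),
          "low-latency fabric":  (5e-6, 1.0e9),
      }

      for name, (lat, bw) in links.items():
          small = transfer_time(1_000, lat, bw)       # chatty 1 KB message
          large = transfer_time(10_000_000, lat, bw)  # bulk 10 MB transfer
          print(f"{name}: 1 KB -> {small * 1e6:.1f} us, 10 MB -> {large * 1e3:.2f} ms")

    For small, chatty messages the fixed per-message latency dominates completely, which is why latency-sensitive apps fall apart on clusters even when the pipes are fat.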
    • I'm gonna go ahead and disagree with you there. The network alone is not to blame. Also, keep in mind that the latencies of most 10GigE implementations and Myrinet are radically different, especially once you get above the hardware and protocol levels. They are getting better, Force10's new 10GigE switches being good examples, but they're not that close when you put something like MPI, and then a poorly implemented (algorithm-wise) application, on top of that. Another thing to keep in mind is
  • by georgewilliamherbert ( 211790 ) on Tuesday April 18, 2006 @08:57PM (#15154074)
    I've run Oracle on 32-processor Sun E10Ks with reasonably linear speedup from few-processor performance, back in the Solaris 7 days.

    I've run (now obsolete) ATG Dynamo on the same, with similar results.

    I've run Apache (1.3.x) on the same, with similar results.

    I've seen applications which stopped scaling well at much less than that.

    "Large business applications" isn't specific enough.
  • Enterprise (Score:4, Funny)

    by Procyon101 ( 61366 ) on Tuesday April 18, 2006 @09:00PM (#15154086) Journal
    It depends on how much enterprise you have in them. Enterprise is expensive, but when added liberally you can scale to huge amounts.

    I like to add a couple hundred enterprise myself.
  • scale by hashing (Score:3, Insightful)

    by pikine ( 771084 ) on Tuesday April 18, 2006 @09:00PM (#15154090) Journal
    I'm still studying computer science with little practical experience, but you can divide certain aspects of your application by hashing---you hash datasets or queries. This distributes the workload across a cluster of computers. However, implementing hashing requires you to make intrusive changes to your code, and maybe most companies aren't willing to do so. Hashing generally has to be implemented from the very beginning, which requires foresight. Google is the one company that does it well.
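
    As a rough illustration of what "hashing datasets or queries" looks like in practice, here is a minimal sketch of hash-based sharding in Python (the node names and key scheme are made up for the example):

      import hashlib

      SHARDS = ["db-node-0", "db-node-1", "db-node-2", "db-node-3"]

      def shard_for(key: str) -> str:
          # Map a record key (e.g. a user id) to the node that owns it.
          digest = hashlib.md5(key.encode()).hexdigest()
          return SHARDS[int(digest, 16) % len(SHARDS)]

      print(shard_for("user:alice"))  # every query for alice lands on the same node
      print(shard_for("user:bob"))

    One caveat: naive modulo hashing reshuffles almost every key when the shard count changes, which is part of why this has to be designed in from the start; consistent hashing is the usual refinement.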
    • by Anonymous Coward
      Keep studying.
    • Re:scale by hashing (Score:4, Informative)

      by dgatwood ( 11270 ) on Tuesday April 18, 2006 @09:50PM (#15154350) Homepage Journal
      There are many ways to divide up a set of queries. It all depends on what the application is, how much data sharing is needed, etc.

      One way to divide the data is per-user or per-group. Divide data according to its owner so that each user account is hosted on a given machine and has first-class access to his/her own data and his/her group's data, but second-class (network-based) access to everyone else's data.

      Another way, as you mention, is to do hashing based on some well-defined key, but for this to be useful the front end has to be thoroughly abstracted from the back end, so that multiple front ends share multiple back-end stores. Otherwise, you are probably just moving the bottleneck around. It also requires that this key be known in advance, which means that it doesn't generally work well if, for example, you need to do a join on two tables and one of those tables is scattered across multiple machines. The only way it would work for such use would be if either the key being used for the join is the hashed key, or each machine has a table index that spans multiple machines' content, at which point you are going to have cache coherency problems.

      Which brings us to a fairly nice compromise solution: a replicated database with each of the outer-ring database servers being read-only caches with some sort of built-in cache consistency protocol, and the central database accepting write queries from clients, but with all the read queries directed to the outer ring. Makes for seriously scalable database access.

      This, of course, assumes that the app in question is a front-end for a database. If you're doing some other sort of application, then all bets are off. Give us more information.
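
      To make the "central writer plus read-only outer ring" idea concrete, here is a toy query router in that spirit (the host names and the crude write detection are illustrative assumptions, not any product's API):

        import itertools

        PRIMARY = "db-primary:5432"                                  # accepts all writes
        REPLICAS = ["db-ro-1:5432", "db-ro-2:5432", "db-ro-3:5432"]  # read-only caches
        _round_robin = itertools.cycle(REPLICAS)

        def route(sql: str) -> str:
            # Anything that mutates data goes to the primary; reads are spread over replicas.
            verb = sql.lstrip().split(None, 1)[0].upper()
            if verb in ("INSERT", "UPDATE", "DELETE"):
                return PRIMARY
            return next(_round_robin)

        print(route("SELECT * FROM orders WHERE id = 42"))            # -> a replica
        print(route("UPDATE orders SET status = 'x' WHERE id = 42"))  # -> the primary

      Everything interesting (the cache-consistency protocol between the outer ring and the writer) is deliberately left out here; that is where the real engineering effort goes.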

    • Hashing is a good way to split data up, but now that it's spread across N nodes, you will have a hell of a time joining it back up again. Your reporting queries are now a nightmare of joins and unions.

      Say you have a bunch of products, each sold by a different department of your business. So split your data based on a hash on the department, to keep all of a department's information together. Later, a business decision consolidates two departments, or splits a department, or moves a whole slew of produc

  • by riprjak ( 158717 ) on Tuesday April 18, 2006 @09:05PM (#15154121)
    ...but have you considered trying to contact the EVE-Online developers at CCP?

    Their game is little more than a MASSIVE database application supporting tens of thousands of simultaneous users... They have lag issues but, on the whole, seem to be scaling bloody well.

  • yes (Score:2, Interesting)

    I did some freelance work a few years back for a client. They were converting some custom in-house applications from a 64-processor Cray Superserver 6400 to a cluster-based approach. I can't comment on what they were doing, but they needed all the RAM and cycles they could get ahold of.

    Anyhow, they started out on a 4-way machine and had scaled up to the 64-way without many code changes. If it had been cost effective, they would have kept on scaling upwards.

  • by subreality ( 157447 ) on Tuesday April 18, 2006 @09:21PM (#15154197)
    Different problems in computer science scale differently. You haven't given us enough data to really know what problem you're solving, so you're really not going to get a reasonable answer.

    I work for a company that has a large commercial application. We knew we needed to scale our data set and processing power to be huge, so we made sure from the start that the heavy lifting could be divided into little chunks, and thrown to the cluster. For our purposes, back end scalability is basically linear. When we need more, we just bring another rack of little 1U critters online. There are a few theoretical bottlenecks, but we'll never see them before we have our own nuclear power plant to run the data centers.

    For other applications we use, there is *no* scalability. The algorithm has to be single threaded. It doesn't matter if I run it on a cluster, or a machine bristling with CPUs. So we basically buy the data center equivalent of a gaming PC: The fastest processor and memory that fits our budget.

    So there are the ends of the spectrum. Your scalability will be somewhere between zero and infinity, depending on the problem at hand.
    • Some problems are like the "baby" problem. It takes nine months to make a baby, no matter how many couples are assigned to the problem. BUT, if the task is to make 1000 babies, you can still do that in 9 months—if you can find 1000 couples. But, if you only need one, you're stuck. It's a parallelism granularity problem.

      Other times you get stymied by serial bottlenecks in an application. Sometimes you can gain fractional benefit from additional compute resource by allowing various CPUs in the
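
      For a back-of-the-envelope feel for those serial bottlenecks, Amdahl's law is the usual yardstick. A quick sketch, using a hypothetical workload that is 95% parallelisable:

        def amdahl_speedup(parallel_fraction: float, n_cpus: int) -> float:
            # Speedup = 1 / (serial part + parallel part spread over the CPUs)
            return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cpus)

        for n in (2, 8, 32, 128, 1024):
            print(f"{n:5d} CPUs -> {amdahl_speedup(0.95, n):5.1f}x speedup")
        # Tops out near 1 / 0.05 = 20x no matter how many CPUs you add.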

      • In fact, in this particular problem, I was converting the algorithmic description of the edge connections into an explicit description.

        Poorly worded. I meant: "In fact, the entire goal of this exercise was to convert the algorithmic description of the edge connections into an explicit form." (The end goal was to generate a database that expanded the compact-but-slow form into a fast-but-eats-my-hard-disk form.)

        --Joe
      • 1000 babies in 9 months does not require 2000 mouths

        technically you need 1001 people to produce 1000 babies in 9 months, not subtracting for multiple births.

        it's all in how you look at it.
        • Tweaking this pedantic thread slightly longer ;) , technically you only need 1000 people to produce 1000 babies. If I recall correctly, a few months ago a baby was born of two mothers, that is to say, the boffins had created a viable embryo from the DNA of two women.

          -Jar.
        • It's a good point that you can do it with only 1001 people but it would take longer than 9 months. Probably more like 15.

        • FWIW, I said "couples." Couples, in the specific context of making babies, tend to be pairings of males and females. The same male could be paired with two different females, giving two couples, but 3 people. As someone else pointed out, though, you do need enough sperm to go around, and the time to make all these couplings. :-)

          Those pedantics aside, I wanted to avoid the sometimes-called-misogynistic formulation that's somewhat more common. And, well, multiple births in this analogy are like data de

  • by Anonymous Coward
    I can certainly say that applications I have personally seen scale nicely to at least 128 processors. That is, if the application is designed and implemented to allow that.

    That is just the database server which handles approx 40,000+ user sessions at one time.

    Of course, in front of that you have your liberal sprinkling of app servers and database proxy servers and whatnot, amounting to about 100 other separate systems.

    As others have noted, you need lots of enterprise, which costs money.

    The fli

  • An application that was written in a "serial" way will not scale by throwing more CPUs at it after the first few. Those applications are better served by one very fast CPU rather than several CPUs. If you are trying to scale an application that much, the application itself must be built with scaling in mind to allow parallelism. In that case, how much you can scale it depends on how much parallelism exists in the nature of the problem you are solving. Typically you stop getting good speedup after adding lots of CP
  • I work in a department that creates provisioning software for one of the large telcos... As you said, the problem is usually the application... OS, DB and such are usually no issue anymore, but unfortunately "Enterprise" in the US seems to mean a disorganized mess with completely incompetent management - management that would rather hit pointless dates that have no meaning in the real world than do things right.

    We have a few well-designed apps, and there the answer is pretty much "How big a machine can you buy?"
  • by kbahey ( 102895 ) on Tuesday April 18, 2006 @11:05PM (#15154635) Homepage
    Your description gives us very little to go on in suggesting solutions ...

    You have to tell us many specific things before we can suggest specific solutions. All we know is that the application runs on a 32 CPU system and has 64 GB of memory. This is all about the hardware. The application is a "large commercial application", and there is "contention within the application or the operating system". We do not even know what the hardware is, nor what operating system it runs.

    Anyway, here are some generic suggestions from past experience, most of it on UNIX systems, many with Oracle, and most with commercial non-web systems.

    - Is the application CPU bound, memory bound, or I/O bound? If you do not know then you have to find out first, then attack the area of

    - Is the application transactional in nature or batch? Is it an operational system, or a decision support type of application?

    - Does the application use a database (it probably does)? Is the database on the same box that runs the application? If so, moving the database to a separate box with a fast connection (FDDI or Gigabit Ethernet) may help things.

    - Does the application use queues or message passing? Do these queues fill up at certain peak hours, causing slowdowns?

    - Can you benchmark/load test the application on a similar box? If you have transaction generation/injection tools, then you can simulate the real load and run tools for profiling, performance and the like in real time (e.g. sar, vmstat, top, etc. if you are on a *NIX type of system). A toy injection harness is sketched below.

    Performance tuning is an iterative process that is more of an art than a science. Start with the 80/20 rule and get the low-hanging fruit (attack the easiest and most obvious area that would gain you some performance, then move to the next area, etc., until you hit the diminishing-returns areas).
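
    For the benchmark/load-test bullet above, a toy injection harness along these lines is often enough to see where throughput flattens out. run_transaction() is a placeholder for whatever the real system call is, and every number here is made up:

      import time
      import statistics
      from concurrent.futures import ThreadPoolExecutor

      def run_transaction() -> float:
          # Stand-in for one real transaction (a query, an RPC, a queue put, ...).
          start = time.perf_counter()
          time.sleep(0.01)
          return time.perf_counter() - start

      def load_test(concurrency: int, total: int = 500):
          with ThreadPoolExecutor(max_workers=concurrency) as pool:
              t0 = time.perf_counter()
              latencies = list(pool.map(lambda _: run_transaction(), range(total)))
              wall = time.perf_counter() - t0
          p95 = statistics.quantiles(latencies, n=100)[94]  # 95th percentile latency
          return total / wall, p95

      for c in (1, 4, 16, 64):
          tps, p95 = load_test(c)
          print(f"concurrency {c:3d}: {tps:7.1f} tx/s, p95 {p95 * 1000:.1f} ms")

    Watch sar/vmstat/top on the box while something like this runs; the point where transactions per second stops climbing tells you which resource to attack first.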
  • by brokeninside ( 34168 ) on Tuesday April 18, 2006 @11:10PM (#15154656)
    One place I used to work had a system that scaled up to well over 20 Sun boxes, each with 10 or more CPUs. It all depends on having the design right. For example, if you have a batch job, you architect the job to follow a master/worker paradigm where a master process doles out chunks of work to worker processes that may or may not be running on the same machine (think SETI@Home). Not every job can be redesigned to do this, but it's a fairly easy way to do a large number of different tasks. Further, there's no reason that this design couldn't be used by Linux/PostgreSQL or some other Free Software stack rather than Solaris/Oracle. There are also other paradigms. Perhaps you should do a search on scholarly comp sci papers instead of asking /. The problem of scaling is not exactly new. Quite a few papers have been written on various ways to solve the problem, depending on what sort of computational tasks you have to accomplish.
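
    A minimal sketch of that master/worker shape, using only the Python standard library (in a real deployment like the one described above the workers would sit on other machines; a local process pool just shows the structure):

      from multiprocessing import Pool

      def work(chunk):
          # Worker: process one chunk of the batch and return a partial result.
          return sum(x * x for x in chunk)   # stand-in for the real per-chunk job

      def main():
          batch = list(range(1_000_000))
          chunks = [batch[i:i + 10_000] for i in range(0, len(batch), 10_000)]
          with Pool(processes=8) as pool:          # the "workers"
              partials = pool.map(work, chunks)    # the master doles out chunks of work
          print(sum(partials))

      if __name__ == "__main__":
          main()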
  • Not far enough. (Score:3, Interesting)

    by Onan ( 25162 ) on Tuesday April 18, 2006 @11:11PM (#15154658)
    Do you mean to ask how far things can scale "vertically", by buying progressively bigger individual machines? That's an easy one: never far enough.

    Even if you can magically get a single system that's big enough for your needs forever, you'll still pay orders of magnitude too much money for it, and get no added reliability through redundancy.

    Any application that requires a solitary, unique, big server is just definitionally broken. It needs to be redesigned to allow it to be spread over an arbitrary number of small systems in geographically diverse locations. For reliability, your serving infrastructure needs to be at least n+1 at every layer to allow for planned maintenance, unexpected failures, and site-destroying disasters. And for scale, it needs to allow you to continue to plug in more batches of cheap little machines and get more throughput.

    • Any application that requires a solitary, unique, big server is just definitionally broken.

      Why? Centralization is often the best solution for many reasons (performance, security, legal issues, recoverability, reliability can all be factors depending on the nature of the system).

      Only an extremist advocates one type of computing solution for all problems. :-)

      Disclaimer: my background is medium-scale airline online-transaction applications where monolithic systems (read: mainframes) still tend to work very we

  • You don't give anywhere near enough information.

    I do SUN PS gigs, so if it's SUN hardware, I can help out (just contact SUN). Ask for "PACP" (Performance Analysis and Capacity Planning). I helped design the service. Also, google "Adrian Cockcroft". Or http://www.cs.washington.edu/homes/lazowska/qsp/ [washington.edu]

    Or IBM or HP: they have equivalent services.

    You can also get any number of other people to help: try datacenterworks.com, or treklogic.com (off the top of my head).

    Yes, the problem falls directly into my domain,
  • Yup, I currently develop software at that scale... I am doing "volume testing" right now, so I have two "sandboxes" to work in: one 16x dual-core Solaris machine with the Oracle database shared on the same hardware, and one 48-core IBM p595 (SMT cores - like Pentium HT - so it looks like 96 CPUs), which is partitioned .... I have 8 cores for "my" DB2, 16 cores for "me", and someone else plays with the rest....

    For our application these machines are over-spec'd. While our app has many components in many languages (COBOL, C, Java, P
  • Problems scale too (Score:1, Interesting)

    by Hyperhaplo ( 575219 )
    We had a small (OS/390) dev box that was upgraded recently. One thing we noticed was that one application (SAS Websrv) was at times taking 30% CPU. When we upgraded the box (and moved to z/OS) this was much more noticeable. (Please don't ask why it wasn't noticed before the upgrade.) Funnily enough, no one really noticed on the older machine, but we noticed pretty quickly on the new one.

    The moral of the story is:
    You're not just scaling up your efficiency / workload. You are also scaling up the other va
  • My company has developed a large software project on a server cluster for the backend. Our server-side architecture is (in theory) scalable as large as we want to go. We use BEA Tuxedo to assign different applications to different servers, and all the databases are available via a SAN. The Unix servers we use are currently configured with 4 to 8 CPUs each, and 8 to 16 GB memory. The server cluster is currently configured between 2 and 10 servers for our current deployments, though we could scale larger sim
  • by CarrotLord ( 161788 ) <don.richarde@NOSpAm.gmail.com> on Wednesday April 19, 2006 @11:06AM (#15157260) Journal
    In my experience, the custom applications I deal with seem to be built with not just incorrect assumptions regarding load, but *no* assumptions regarding load. When I first fired up one particular application in a production environment, we were seeing 6000 incoming messages per second. I asked the lead developer what we should be expecting to see. He had no idea.

    This is caused by short-sighted project management, which translates into short-sighted programming. The necessary questions about throughput aren't asked, because it all works fine on the developers' PC with a test load. In our case, we eventually got the application running OK, but changes that have been made since have not taken into account anything to do with I/O, so the fact that our CPU usage is not maxing out seems to indicate to the development team that we are not bound by the server performance, and hence have not reached any scalability thresholds.

    Obviously this is madness. If one were to investigate the scalability of this application properly, one should be looking at where I/O happens, where interprocess communication happens, where object creation and destruction happens, and so on... There is no other way to scale an application -- you have to define what the "load" is, find what happens when you increase it, work out where any bottleneck is, and how parallelisable this bottleneck is. Anything less is no more than buzzwords.
  • At my current position we deal in heavy datasets, full virtual-reality simulations, and a multiple shard world comprising up to 8,000,000 simultaneous online players. We're running into some scalability issues as well...our subscriber base is outgrowing our ability to correctly run the FIVR system.

    Sometimes, it's just time to look for another job, because you're way out of your league when people ask vague questions!