Databases IT

Scalability In the Cloud Era Isn't What You Think

Esther Schindler writes "'Scalability' isn't a checkbox on a vendor's feature chart — though plenty of them speak of it that way. In this IT Expert Voice article, Scott Fulton examines how we define 'scalability,' why it's data that has to scale more than servers, and how old architectural models don't always apply. He writes, 'If you believe that a scalable architecture for an information system, by definition, gives you more output in proportion to the resources you throw at it, then you may be thinking a cloud-based deployment could give your existing system "infinite scalability." Companies that are trying out that theory for the first time are discovering not just that the theory is flawed, but that their systems are flawed and now they're calling out for help.'"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • I read the article (Score:5, Interesting)

    by Saint Stephen ( 19450 ) on Tuesday May 11, 2010 @03:04PM (#32173082) Homepage Journal

    and learned not a damned thing. Classic marketecture speak.

    • Damn, I was hoping for some technical discussion on moving from small databases of a few hundred mb to largish ones of a few petabytes while maintaining some kind of low level latency. (side note, Eve online's server model is an interesting example of this).
    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Given "marketecture" speak is what got us into this cloud mess in the first place, perhaps fighting back with "marketecture" is appropriate.

      • by timeOday ( 582209 ) on Tuesday May 11, 2010 @04:32PM (#32174346)
        I'll bite, what's the "cloud mess"? In the olden days, we mocked slashdot story submitters who linked to videos because their ISP account, or university account, could never handle it. There wasn't really a way for an individual to share a video with thousands of people. Now we just upload to youtube, and viola, it works. Scalability issue solved. How many computers does it take to accomplish that? Where are they? Are they all in one place? It's a cloud, most of us don't know and don't care. It's good.
        • Re: (Score:1, Insightful)

          by Anonymous Coward

          So... you think that Youtube just popped into existence one day, perfectly scalable? People had to design a horizontally scalable video distribution platform. True, it works very well.

          But that's irrelevant. Companies are coming online with products and thinking "I'll just host it in The Cloud(tm)!" Then they start looking at "Cloud services". And they think that their application will Just Work(tm) in The Cloud(tm).

          Technology people know it doesn't work like this. Products, applications, and architect

        • Now we just upload to youtube, and viola, it works.

          I don't understand, what does this string instrument [wikipedia.org] have to do with it?

    • Yeah, FTA: "“Take the time to sit down up front and ask, ‘What would we look like if we got really busy?’ and then plan to that." I remember yawning through micro economics in college. Then it was “Take the time to sit down up front and ask, ‘What would we look like if we got really *expensive*?’ and then plan to that." Same problem, different charlatan with a marketing budget.
    • Re: (Score:1, Redundant)

      by Meshach ( 578918 )

      and learned not a damned thing. Classic marketecture speak.

      You must be new here.

      • Re: (Score:3, Informative)

        by c0d3g33k ( 102699 )
        You must be out of good ideas to add to the discussion.
        • Awesome comeback. Just because our article selection process is susceptible to social engineering doesn't mean we shouldn't do anything about it.
        • Re: (Score:1, Funny)

          by Anonymous Coward

          OK, I'll bite.

          No, really, get the fuck out of here or I will fucking bite you.

    • Re: (Score:3, Interesting)

      by gstoddart ( 321705 )

      and learned not a damned thing. Classic marketecture speak.

      I don't think it's marketecture -- I think it's trying to point out some issues which most of us have never really thought about in terms of cloud computing.

      Admittedly, I couldn't read through the entire article in one go, but I am going to go back and try to finish it.

      The thesis seems to be something along the lines of: everyone thinks that with cloud computing if you keep throwing resources at the problem, scalability is something which sorts itse

      • by c0d3g33k ( 102699 ) on Tuesday May 11, 2010 @04:17PM (#32174118)
        Both. (Marketecture and not grokking with fullness, that is.)

        Marketecture part: The delusional fantasy that because one is able to talk about things in a new way, old problems affecting scalability no longer apply. Very true. The marketers believe it. The foolish customers believe it. Anyone who has a clue runs for the hills.

        Not grokking with fullness part: You've accurately grokked the "every (idiot) thinks that if ..." part. What you haven't grokked is the details. In place of your speculation, just substitute that those who do not learn from history are doomed to repeat it.

        The fantasy I see over and over again whenever a "new" paradigm changing technology comes along is that problems which were hard using the 'old' approach are suddenly eliminated merely by virtue of doing things in a "new" way. The fantasy is that having the 'insight' to recognize the awesome potential of the magical new approach is somehow superior to having the discipline to *fully* understand the problem and solving it decisively and intelligently. The latter is often viewed as not worth the effort or offering a "poor return on investment". The delusion is that effort is better spent on looking for a loophole that doesn't require any understanding because the new approach will magically make the hard problem go away so nobody has to expend any real effort. Doing things 'in the cloud' is one of those magic new approaches that substitutes for actually engineering a solution in an informed way.

        Even if a new approach reduces the effort previously required for certain tasks, it invariably brings with it new problems that have to be understood in order to avoid being bogged down.

        History shows that folks who solve the hard problem wipe the floor with those who are looking for shortcuts. FedEx (solved the logistics problems associated with rapid delivery to anywhere), Southwest Airlines (solved the logistics problems associated with low cost regional air travel), Walmart (developed a satellite network to track inventory and sales chain-wide). Google (a better algorithm for search). Etc.

    • by raddan ( 519638 ) *
      Scalability is a real property, though. But hardware resources are only a single aspect of scalability. Take the IPv4 address depletion problem. You can throw all the hardware you want to at the problem, but it's not going to budge. That's a problem with the addressing architecture. When IPv6 happens, we still have the router-table growth problem, and that you can throw hardware at, to a degree, although for how much longer, nobody really knows. Moore's Law has kept us ahead of that particular issue.
  • Nice URL (Score:3, Insightful)

    by Anonymous Coward on Tuesday May 11, 2010 @03:06PM (#32173100)

    It says ad right there so there isn't any question.

  • by Locke2005 ( 849178 ) on Tuesday May 11, 2010 @03:10PM (#32173144)
    Unlike stupidity, computing resources are inherently limited. Which is a good thing... imagine, if it were really unlimited, the huge bill you would get at the end of the month for a runaway task attempting to use every node?
    • by jd ( 1658 )

      I dunno - I remember a Director of Architecture who could produce infinite fluff. From this, one can extrapolate that you could build a machine that did an infinite amount of nothing useful. It would need to be a quantum computer that existed in every possible state simultaneously, much like said Director in fact.

      • Nice to hear from you jd. I think I know whom you are talking about. (Isn't he now listed as "Senior Technologist"?) I apologize for getting you thinking about old jobs when we both should just be focusing on moving on. Hope you're doing well and have found a much easier commute.
        • Re: (Score:3, Interesting)

          by jd ( 1658 )

          Sometimes an old thought can trigger a new line of thinking. For example, it would be difficult to make a 3-CCD camera that's as flat as a modern digital camera, because a decent-sized CCD placed sideways will widen the camera by that amount. The prism would normally be bulky, too. Far as I know, that's the main reason you see this sort of camera on high-end video equipment, not cheap digital cameras. However, I don't see anything there that can't be solved by using a few lenses and mirrors. Since CCDs can

          • The only devices I've seen using 3 CCDs are $4000 Sony videocameras. High-quality optics make each lens and mirror very expensive, so most high-end cameras have very simple optical paths (old Hasselblads are now being repurposed as digital cameras [google.com]). Dealing with RGB is different from directing a laser beam, which has a single frequency. For a laser, I suspect even a hologram could be used as a lensing system. Not so for multi-megapixel RGB cameras.
            • by jd ( 1658 )

              You're right about the optics being the challenging part. It would depend on whether it's cheaper to make errors smaller or lenses/mirrors larger (either will let you reduce the visibility of defects, up to a point), and on how large an angle any defect can be allowed to cover when the image reaches the CCD. To be honest, I haven't the foggiest. And, yes, the very earliest (late 1800s, early 1900s) "colour" photography was done by photographing through three distinct filters and you can therefore do the sam

    • Re: (Score:3, Informative)

      Infinite scalability isn't the only snake oil in the cloud. Other cloud computing myths [toolbox.com] include "all you need is a credit card" and "cloud is cheaper."

  • tl:dr (Score:3, Funny)

    by adeft ( 1805910 ) on Tuesday May 11, 2010 @03:12PM (#32173168)
    can someone scale that article down a bit?
  • by roman_mir ( 125474 ) on Tuesday May 11, 2010 @03:15PM (#32173190) Homepage Journal

    Scalability is a buzzword that equipment, databases and servers (hardware/software) are sold on. It is as if by adding more weblogic servers to a cluster really makes your application scalable, as if throwing more processors onto a RAID system gives you more parallel ways to read / write the same data etc.

    It is all true to an extent and it is all false where it really matters. Applications need to be designed to be scalable and if I learned anything over the past 16 years is that people do not even begin to understand what it means.

    The managers and even many 'architects' really think that by throwing some stupid app on a cluster will really solve the scalability issues and so on. But the problem is that it is a very specific problem that can be solved by simply adding cluster nodes without actually properly designing the app. I blame various silver bullets like EJBs, CORBA, RMI, JNDI, BEA, Oracle, IBM and such for promoting this view among the top brass and pulling attention away from working out correct architecture to solve the specific problems that appear in building truly scalable applications.

    Application servers and databases are the worst at this, they certainly provide some specific type of scalability solution but because of that, it is almost expected that it does not matter how an app is designed to interact with these, and the design is really on the distant third, fourth, fifth or further place, way behind the deadlines, the politics, the hiring practices etc.

    Scalability is like security, it is not a one specific thing it is a way to approach many different issues and problems and even when you think your app is secure in 5 different ways, there is a sixth way in which it is not. Same with scalability: it is not only about multi-threading requests, it is not only about multiple processors for a RAID system, it is about total understanding of how the application is and will be used and adjusting it for various types of usage. Proper design for scalability mixes various approaches, there could be intermediate steps added, back-ground processing added, intermediary storage, separate storage for reading than for saving, various caching mechanisms and synchronization between nodes in a cluster for different caching questions. This could be redefining an algorithm to be less dependent on reading data from slow media. Some things are not supposed to be done in parallel, so certain bottlenecks due to synchronization need to be looked at and solved early on, because these become the Achilles heel - synchronizing on anything at all can defeat a super-fast cluster and make it no better than as a single laptop.

    It is a design issue.
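
    The point above about synchronization defeating a whole cluster is worth a concrete sketch. One classic workaround is the sharded counter: rather than every writer contending on a single value (the lock or hot row that makes the cluster "no better than a single laptop"), increments are spread over N independent shards and only reads pay the cost of summing them. This is an illustrative sketch, not any particular product's API; the shard count is arbitrary.

    ```python
    import random

    class ShardedCounter:
        """Spread increments across N independent shards so concurrent
        writers never contend on one shared value; reads sum the shards."""
        def __init__(self, n_shards=16):
            self.shards = [0] * n_shards

        def increment(self):
            # Each writer picks a random shard: no single hot spot,
            # so writers on different nodes never block each other.
            self.shards[random.randrange(len(self.shards))] += 1

        def value(self):
            # Reads aggregate all shards; in this pattern you deliberately
            # make the rare read more expensive to keep the hot write cheap.
            return sum(self.shards)

    c = ShardedCounter()
    for _ in range(1000):
        c.increment()
    print(c.value())  # 1000
    ```

    The same trade-off shows up in the other techniques listed above (separate read and write stores, intermediary caching): you restructure the design so nothing global is serialized on the hot path.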

    • by cybrthng ( 22291 )

      My experience with Oracle Grid Computing tells me you don't quite understand the capabilities of their RDBMS/Grid Platforms.

      • My experience with Oracle shills is that they tout Oracle as the only true way. Luckily I am not susceptible to advertising, I look at facts. Fact is that Oracle's grid computing will add no more scalability to any particular application than their earlier clustering approach, though it may help with cutting some costs on probably some hardware and energy, good, that should help to offset the crazy licensing costs. I am setting up PostgreSQL everywhere I can, and I use more of an app design approach to solve

  • I dunno... (Score:5, Funny)

    by thewils ( 463314 ) on Tuesday May 11, 2010 @03:16PM (#32173204) Journal

    That ash cloud from Eyjafjallajokull seems to be scaling pretty good.

  • Reading TFA, the author kept making references to scaling using the "cloud" without mentioning any particular vendor. I'm thinking Microsoft's Sharepoint was alluded to, but as for FlightCaster - what are they using? How would they use Sharepoint for that? Or is there a Hardware as a Service company they're using?
    • by RingDev ( 879105 )

      Sharepoint isn't a cloud, it's a CMS with a whole lot of crap mixed in.

      Microsoft's cloud service is called Azure. One of my coworkers was looking at it to host his company's web site and services. The scalability there was actually quite impressive for simple hosting and heavy loads. I don't know the details, but he seemed pretty impressed by it, just not by the cost. It was right on par cost wise as having a dedicated VM with decent resources. The only real difference he was looking at going from a dedicat

      • If you cloudsource everything, you can lay off all your datacenter operations staff. You still need sysadmins, security guys, and coders; but the people who run wires, rack servers, replace faulty disks, manage the SAN, etc. etc. are no longer relevant. You must factor in the cost of this staff when comparing TCO.

        • What's the cost of having your entire physical infrastructure under someone else's control?
          • Nothing.

            Well, nothing until some Chinese ISP screws up a BGP setting for a netblock they have nothing to do with.

          • Re: (Score:1, Insightful)

            by Anonymous Coward

            A cost that isn't visible in the short term. Thus, it's invisible to the people making the decisions.

      • Until they can get the cost to be lower than the TCO of a cheap server, UPS, and business cable line though, I can't see making the jump for small businesses.

        Remember that TCO isn't only hardware (server, UPS, cable). You also have to factor in software licenses, physical building, physical building security, network security, HVAC costs, etc. And these are just the easy to calculate costs.

        You also have to think about other costs such as procurement (someone has to order the hardware from Dell, receive it at the shipping dock, unbox it, install the server OS on it, handle warranty repairs, etc), network administration, management overhead, load balancing, etc.

        • I agree completely. I should have spoken more clearly.

          My co-worker's personal small business has 1 employee: him. He is a skilled tech guy who can handle most of the work himself. He had 3 options:

          1) Move to the cloud for ~$150/month. supreme uptime, no hassle scaling, everything is managed for him
          2) Move to a more robust dedicated virtual machine for ~$150/month. solid up time, scaling available, all network stuff is managed for him
          3) Buy a server and business cable line to his residence for ~$750 one time
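
          The break-even arithmetic on those (round, illustrative) numbers is trivial: the one-time server pays for itself against either $150/month option in five months, ignoring the hidden costs the grandparent listed (power, repairs, the owner's time).

          ```python
          one_time_server = 750   # option 3: buy a server outright
          monthly_hosted = 150    # options 1 and 2: cloud or dedicated VM

          # Months until the hosted bill exceeds the one-time purchase.
          breakeven_months = one_time_server / monthly_hosted
          print(breakeven_months)  # 5.0
          ```

          After that point the real comparison is exactly the soft costs above, which is why the numbers alone don't settle it.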

    • by spatley ( 191233 )
      That is because the essence of his article is that it does not matter what segment of cloud computing you use: if your application is not *designed* to scale, it will not scale, no matter whether it was sold to do so or not.

      This is that same idea that if you take a single threaded app and put it on an 8 core proc, you will not get any performance boost from the single core. If your data set has to join a trillion rows to a billion rows, you can throw all the parallelism you want at it and you will just have a t
      • If your data set has to join a trillion rows to a billion rows, you can throw all the parallelism you want at it and you will just have a thousand boxen trying to perform the same join a thousand times and performance will not improve.

        No, you split your billion row table into a thousand pieces so each piece fits in memory on one of your thousand machines, and then multicast your trillion row table to all thousand machines and have them match the stream against the million rows they have in memory.
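
        The strategy in the parent - partition the smaller table so each piece fits in one node's memory, then stream the big table past every node - can be sketched in a few lines. This is a toy single-process simulation (the function names and the three-"node" setup are invented for illustration); real MPP systems also handle spill, skew, and network fan-out.

        ```python
        from collections import defaultdict

        def partition(rows, key, n_nodes):
            """Hash-partition the smaller (build) table so each node
            holds only the keys hashed to it, small enough for memory."""
            parts = [defaultdict(list) for _ in range(n_nodes)]
            for row in rows:
                k = row[key]
                parts[hash(k) % n_nodes][k].append(row)
            return parts

        def broadcast_join(big_rows, small_parts, key):
            """Multicast the big table's stream to every node; each node
            matches only against its in-memory slice, so the big table is
            never materialized anywhere."""
            for row in big_rows:             # the stream every node sees
                for node_dict in small_parts:  # simulate each node's local probe
                    for match in node_dict.get(row[key], []):
                        yield {**row, **match}

        # Tiny demonstration with 3 "nodes"
        small = [{"id": i, "name": f"item{i}"} for i in range(10)]
        big = [{"id": i % 10, "qty": i} for i in range(100)]
        parts = partition(small, "id", 3)
        joined = list(broadcast_join(big, parts, "id"))
        print(len(joined))  # 100: every big row matched exactly one small row
        ```

        The key difference from the "thousand boxen doing the same join" failure mode is that each node probes only its own slice, so the work genuinely divides instead of duplicating.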

  • by Lord Ender ( 156273 ) on Tuesday May 11, 2010 @03:27PM (#32173330) Homepage

    The Google App Engine cloud computing offering plans to (eventually) automatically scale your application as much as you need. But that scalability comes at a cost: only key-value stores may be used. Sorry, no relational databases available. JOINs just don't scale. You can distribute data across any number of nodes, but JOINing data which lives on separate computers is not gonna happen.

    If you need JOIN-like behavior, your app has to request all the data, then compute the result itself. Trying to write an app for such a system means rearchitecting the data in ways to minimize the need for such operations, even if that means having duplicate data.

    It's quite an exercise to unlearn what you have learned about SQL and relational databases, but the use of object mappers can help a lot.
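
    The denormalization the parent describes can be made concrete with a sketch. Plain dicts stand in for a key-value store here, and the entity names are invented for illustration: instead of JOINing orders to users at read time, the user's display name is duplicated into each order record at write time.

    ```python
    # Two "key-value stores" (standing in for a no-JOIN datastore).
    users = {}    # user_id -> user record
    orders = {}   # order_id -> order record

    def create_user(user_id, name):
        users[user_id] = {"name": name}

    def create_order(order_id, user_id, item):
        # Denormalize: copy the user's name into the order so reading
        # an order later needs one cheap lookup and no join.
        orders[order_id] = {"user_id": user_id,
                            "user_name": users[user_id]["name"],
                            "item": item}

    create_user("u1", "Ada")
    create_order("o1", "u1", "widget")
    print(orders["o1"]["user_name"])  # Ada -- no join at read time
    ```

    The cost, as the parent notes, is duplicate data: if the user renames themselves, every copied `user_name` must be updated, which is why this rearchitecting is an exercise in unlearning relational habits.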

    • I've seen joins scale decently with Teradata. Might not be the best OLTP oriented database, but a great analytical database when you need to do very complex BI Logic searches across large datasets.

    • Re: (Score:3, Insightful)

      by LWATCDR ( 28044 )

      So one is going to have to learn a totally different way to do everything and then deal with a new set of problems.
      Which is why IBM is still selling ZSystems running DB2 :)
      That being said I have not used much in the way of key-value database in a complex application. Frankly it sounds like a real pain.

    • J2EE folks should definitely check out JDO as a better way to develop for the cloud [dynamicalsoftware.com]. With JDO, you can stay relational or move to EC2 or GAE without making a big code commitment.

    • by lennier ( 44736 )

      JOINs just don't scale. You can distribute data across any number of nodes, but JOINing data which lives on separate computers is not gonna happen.

      If that's the case, then surely we're Doing Something Really Wrong with our implementation of relational theory. Should we perhaps be looking at things like Extended Set Theory [xprogramming.com] instead?

      Relational - (and more specifically, SQL, which as Chris Date is at pains to tell everyone is NOT even a correct let alone good implementation of the relational model - but even Codd's original paper shows signs of this) - came out of a timesharing environment, where it was just assumed as a matter of course that you'd have very

    • I wasn't thinking that ease of provisioning more resources was the same as scalability.

      If every additional 10 tasks a system is required to do take an additional 10 units of computing resources, that is not "scalability," regardless of how easy it is to procure those additional resources.

      Or perhaps that is an example of an app that scales linearly, and what people really want when they want scalability is a system that scales geometrically?
      • Growing your cluster to handle more traffic is certainly considered "scaling" by most, and this is the way most cloud-computing services do things. I refer to this sort of scaling as horizontal, whereas adding RAM or CPU power to a single machine would be vertical scaling. Please correct me if you know of a better term...

        • OK Sure. I guess my point was that you may be able to get more output from a crappy system simply by throwing more resources at it. But I wouldn't call that a system that was "designed to scale". There are so many things a developer can do to identify key bottlenecks so that only portions of the app need to scale linearly, but the overall app would scale non-linearly.

          I am in the process of migrating my servers to EC2 so I am a big fan of cloud computing. I am intrigued by app engine but it would mean
    • A minor niggle to a correct thesis: clouds are indeed horizontal creatures, like lichens (:-)) Joins, however, can be decomposed into a horizontally scalable component that runs on many nodes to return a small candidate set and a vertical component that puts together the candidates and returns the valid ones as a join. This is what the Oracle Teradata (sp?) machine does, making TP substantially more scalable. The bottleneck in this scheme is the backplane: it requires Linux hyperchannel to achieve the e
  • Wow, they discovered the order of functions. (See http://en.wikipedia.org/wiki/Big_O_notation [wikipedia.org])
  • Hand wave (Score:5, Funny)

    by Itninja ( 937614 ) on Tuesday May 11, 2010 @03:44PM (#32173560) Homepage
    I find that when I speak in my "IT Expert Voice" I get all kinds of things. Even if I am saying gibberish:

    "Linda. The malware infecting your CRT is several beta tests behind the best practice of current IPv6 drives. I will need your password to defrag the driver and upload the taskbar to your certification path...Thank you Linda."
    • Re: (Score:1, Funny)

      by Anonymous Coward

      God, only a true nerd would say that. Here, let me show you how this is done.

      "Linda. ... I will need you to drink this bottle of Scotch and hop in the hot tub while I defrag the driver and upload the taskbar to your certification path. I will come there when it is done...Thank you Linda."

      • Have you seen Linda? I think you'd be better off drinking the bottle of Scotch yourself if you plan on joining her in the hot tub.

    • by Nagrom ( 1233532 )
      Is that a quote from 24?
  • Well, expecting to get more output from the same input is of course illogical and impossible, but if a company puts in the planning, development and engineering resources to make it happen up front, then the scalability claims in the marketing copy can be delivered to some extent.

    But the way some (most?) deployments seem to go make it cost prohibitive to put the distributed database / distributed applications and fault tolerant components in in the first place.

  • By the time you need to expand, a complete and less expensive system has already superseded the iron you were originally running on.

    In many cases it is cheaper to replace the hardware than to add more "modules" to your scalable hardware.

    • by raddan ( 519638 ) *
      You're kidding, right? If not, your applications aren't big enough. Do you think Google runs on a computer under somebody's desk?
      • How many businesses are there which have Google's needs? Ten? Twenty?

        • by raddan ( 519638 ) *
          Kind of a lame argument. You don't have to be Google to require scalability. All you need is a workload that may grow or diminish faster than you can throw a fast computer at it. Don't know if you've ever heard of a website... turns out these things often run on load-balanced clusters nowadays...

          The point is not how many (and the answer to your question is actually "thousands"), but that there is a legitimate need for scalable computer architectures. Scientists need them, design firms need them, vide
  • You may now all skip even pretending to read the article and do what you do best: use a car analogy to explain why duplicating kiddie porn isn't theft, unless the Government does it.
  • Yes, indeed! I can run copies on as many desktops as I care to. Just add monkeys and ta dah - Shakespeare!
    • Yes, indeed! I can run copies on as many desktops as I care to. Just add monkeys and ta dah - Shakespeare!

      So far, mostly just Slashdot. Shakespeare seems to be in the offing yet.
