
Amazon EC2 May Be Experiencing Growing Pains

kdawson posted more than 4 years ago | from the hey-you-get-offa-my-cloud dept.

The Internet 93

1sockchuck writes "Some developers using Amazon EC2 are wondering aloud whether the popularity of the cloud computing service is beginning to affect its performance. Amazon this week denied speculation that it was experiencing capacity problems after a veteran developer reported performance issues and suggested that EC2 might be oversubscribed. Meanwhile, a cloud monitoring service published charts showing increased latency on EC2 in recent weeks. The reports follow an incident over the holidays in which a DDoS on a DNS provider slowed Amazon's retail and cloud operations."


Missed Opportunity (3, Insightful)

cgenman (325138) | more than 4 years ago | (#30778682)

Why not say "Yes, we're way too popular. We're adding capacity as quickly as we can, but people are just lapping up our service!"

This seems like a missed marketing opportunity.

Re:Missed Opportunity (0)

Anonymous Coward | more than 4 years ago | (#30778742)

People don't want to pay for "as quickly as we can."

Re:Missed Opportunity (1)

jopsen (885607) | more than 4 years ago | (#30779034)

Because it may not be true...
- Just a thought :)

Re:Missed Opportunity (3, Insightful)

SatanicPuppy (611928) | more than 4 years ago | (#30779056)

It misses the point of the magical cloud! If the PHBs learn that the magical cloud can run out of capacity, then they might have to start planning again.

If they do that, then EC2 and similar services, which sell the same capacity to 100 different customers on the principle that they won't all be taxed at the same time, are going to have some explaining to do.

Re:Missed Opportunity (3, Insightful)

lorenzo.boccaccia (1263310) | more than 4 years ago | (#30779228)

Awwww, we techies have been saying this since day one of the cloud computing spree: there is no such thing as linear scalability, and infinite scale-out is a myth.

For one, replication and startup of machines takes at best O(log n) time; forget about it being O(1). Data doesn't transfer over the wire magically at infinite speed.

Second, CPU as a service is bound to involve oversubscription, as it wouldn't make business sense any other way.

Third, datacenters are bound to have an upper limit. A sudden spike which exceeds the maximum capacity of all the combined datacenters has to be met with the purchase of more datacenters, and that doesn't happen overnight. (If your web application requires less than a rack in a datacenter, there is actually no sense in having it clouded.)
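The log(n) claim can be illustrated with a toy model (a sketch with made-up numbers, not a description of how EC2 actually replicates images): if each machine that already holds a copy of an image can seed exactly one new machine per round, the number of copies doubles each round, so reaching n machines takes about log2(n) rounds rather than constant time.

```python
def replication_rounds(n):
    """Rounds needed to copy an image to n machines when each machine that
    already has the data seeds exactly one new copy per round (so the number
    of copies doubles every round)."""
    have, rounds = 1, 0
    while have < n:
        have *= 2
        rounds += 1
    return rounds

# Doubling fan-out: rounds grow like log2(n), never O(1).
for n in (8, 64, 512):
    print(n, "machines ->", replication_rounds(n), "rounds")
```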

Re:Missed Opportunity (3, Insightful)

segedunum (883035) | more than 4 years ago | (#30779864)

For one, replication and startup of machines takes at best O(log n) time

Errrrrrrrrrr, yer. However, simply restarting a server image is infinitely preferable to hanging around for a few hours, biting your nails, waiting for a hosting provider to fix something. Been through it. Not going back.

(If your web application requires less than a rack in a datacenter, there is actually no sense in having it clouded.)

So you're saying that if anything takes less than a rack to host, then there is no point in having it hosted for you... anywhere?!

I have no idea why people think that 'cloud' computing is any different from traditional hosting. You have all the same considerations on EC2, Joyent or anywhere else as you do when getting a hosting company to specifically buy hardware and set things up for you - except that many things on EC2 or a similar platform are standardised, and you can manage a great deal through software without hanging around for someone to schedule a time to do 'something'. Maybe that's what some people don't like? ;-)

Re:Missed Opportunity (4, Insightful)

lorenzo.boccaccia (1263310) | more than 4 years ago | (#30780086)

> So you're saying that if anything takes less than a rack to host, then there is no point in having it hosted for you... anywhere?!

No, I'm saying that if it's smaller than that, other hosting solutions may be better suited for it. Cloud computing is all about scaling and is priced accordingly; for small-scale operations, standard hosting, virtual hosting, colocation or rental may be a better solution.

Re:Missed Opportunity (1, Informative)

sycorob (180615) | more than 4 years ago | (#30780510)

Caveat, I've never actually used cloud computing, just talked to people and seen presentations.

The story I got was from a guy who does IT consulting, and does a lot of prototyping for new/potential clients. He would use the Google or Amazon clouds to spin up an application, play with it, demo it to the clients, and then spin it down. If it went live, they could either leave it in the cloud, or capitalize a "real" hosting solution. He claimed that his bill some months for the cloud was less than $1.

And that's the point that I think people miss. If you're messing around with The Next Big Web 2.0 thing, but don't expect a lot of traffic to begin with, why go through the hassle of setting up a traditional hosting solution? How many racks will you need? How much bandwidth? How much memory and CPU? And if it gets suddenly more popular than you expect, how long does it take to get new servers online?

With cloud hosting, you can say "I want to pay, at most, $2000 a month." The service can then dynamically scale you up to your limit if you get on Slashdot or something. And if not, you just pay for what you're using, not for a rack of servers that are sitting 99% idle.
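That budget-capped scaling could be sketched as a toy decision rule; the request rates, the $0.40 hourly rate and the $2000 cap below are hypothetical illustrations, not real EC2 pricing:

```python
def instances_within_budget(demand_rps, rps_per_instance, hourly_rate,
                            monthly_cap, hours_per_month=720):
    """Instances to run right now: enough for the demand, but never more
    than the monthly budget cap could pay for if they ran all month."""
    needed = -(-demand_rps // rps_per_instance)            # ceiling division
    affordable = int(monthly_cap / (hourly_rate * hours_per_month))
    return min(needed, max(affordable, 1))

# A quiet month costs one instance; a Slashdotting scales up only to the cap.
print(instances_within_budget(80, 100, 0.40, 2000))        # light traffic
print(instances_within_budget(5000, 100, 0.40, 2000))      # capped, not 50
```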

Re:Missed Opportunity (1, Informative)

Anonymous Coward | more than 4 years ago | (#30781254)

Exactly. I work at a popular video hosting website, and we do just that. Our encoding mechanism scales up and down depending on the number of videos requiring encoding. The fewer videos we get, the less we pay, as less equipment is being used. If we had our own racks, we'd be shelling out thousands to begin with, and then hundreds more each month. With cloud computing, our bills can be fantastically low, but are never fantastically high.
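That kind of queue-driven scaling can be sketched in a few lines (the throughput and worker limits are invented for illustration; this is not the poster's actual system):

```python
import math

def encoder_workers(queue_depth, videos_per_worker_hour=12,
                    target_hours=1.0, min_workers=1, max_workers=40):
    """Workers needed to drain the encoding backlog within target_hours,
    clamped to a floor (keep one warm) and a ceiling (budget limit)."""
    needed = math.ceil(queue_depth / (videos_per_worker_hour * target_hours))
    return max(min_workers, min(needed, max_workers))

print(encoder_workers(3))     # slow night: one worker idles along
print(encoder_workers(600))   # viral day: scale out, but only to the ceiling
```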

Re:Missed Opportunity (1)

lonecrow (931585) | more than 4 years ago | (#30787368)

Hi. Allow me to disagree. I run a small IT shop and host a couple of dozen websites for my clients and myself. I currently have a dedicated server at ThePlanet, which I have been very happy with.

However, my disaster recovery plans always hit a snag when I imagine the server falling off the shelf or getting rooted. Sure, I have all the software and license info required, and yes, I have excellent off-site backups of the websites and their databases. But if I had to restore that server from scratch it would still take me at least a day, and probably two or three.

OTOH, if I have an EC2 instance, I just re-launch from my custom AMI and I am back in business in minutes with little effort.

Add to that the fact that I can make snapshot backups of 30 GB EBS volumes in less than a minute, and I get very excited.

Re:Missed Opportunity (0)

Anonymous Coward | more than 4 years ago | (#30784176)

The thing people don't like is the word. An illness among IT workers causes them to exhibit knee-jerk aggressiveness towards hyperbolic terms such as "cloud computing", whether they have technical merit or not. It's a street-cred thing.

Re:Missed Opportunity (1)

slim (1652) | more than 4 years ago | (#30780950)

(If your web application requires less than a rack in a datacenter, there is actually no sense in having it clouded.)

Maybe not EC2, I'll grant, because of their pricing model.

But there's no reason why a "traditional" shared web serving service couldn't be hosted on a cloud. Indeed Google App Engine fits the bill.

Re:Missed Opportunity (1)

Lord Ender (156273) | more than 4 years ago | (#30779300)

Some part of "cloud computing" does involve over-selling of server resources. Google's App Engine is so over-sold it can take 20 seconds for a page to load. However, Amazon's service allows you to reserve dedicated hosts for a premium price.

And, really, being over-sold isn't a problem so long as things are managed right. A project on the scale of Amazon's should be able to afford the best engineers so that such things are managed properly.

Re:Missed Opportunity (1)

maxume (22995) | more than 4 years ago | (#30779582)

How do you oversell something that is sold in units of usage? If Amazon fails to provide a unit of CPU, they can't really charge the person they didn't provide it to.

Re:Missed Opportunity (1)

Lord Ender (156273) | more than 4 years ago | (#30779708)

An over-sold server cluster becomes slow, just like an over-sold network connection. Peak demand outstrips the "expected" number of apps running at any particular moment.
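A toy M/M/1-style queueing estimate shows why: response time stays flat while few tenants are active, then blows up as simultaneous demand approaches physical capacity (the CPU counts and 10 ms base time are invented for illustration):

```python
def response_ms(active_tenants, cpu_per_tenant=1.0, physical_cpus=8,
                base_ms=10.0):
    """Classic 1/(1 - utilization) slowdown as demand nears capacity."""
    utilization = active_tenants * cpu_per_tenant / physical_cpus
    if utilization >= 1.0:
        return float("inf")   # saturated: requests queue without bound
    return base_ms / (1.0 - utilization)

# 8 CPUs sold to 16 tenants is fine while only a few are busy at once...
print(response_ms(4))   # half loaded
print(response_ms(7))   # nearly full: 8x slower
print(response_ms(8))   # peak demand outstrips capacity
```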

Re:Missed Opportunity (1)

maxume (22995) | more than 4 years ago | (#30780014)

What does slow mean? Are instances running slower than the advertised physical equivalent?

Re:Missed Opportunity (1)

SatanicPuppy (611928) | more than 4 years ago | (#30780388)

The problem is scalability. It's not that you're not getting what you're paying for, it's that it's not scaling as it should be, as they tell you it will.

So something happens, and their system gets taxed and your hosted apps get choked for resources and look like crap. Sure, they're not billing you for what you're not using, but you're not getting a good product either.

Re:Missed Opportunity (1)

MarkWatson (189759) | more than 4 years ago | (#30779916)

I think what you are seeing with App Engine (and the same effect with Heroku, which is EC2-based) is this: if your web application has not processed any requests for several seconds (or longer?), it needs to be loaded back online.

Try an experiment: assuming that you have a private (non-advertised) AppEngine app, time the first request with ab (Apache benchmark tool). Then time requests that are sent every second. I bet that you see the 20 second page load time vanish if you are making frequent requests.
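The cold-start effect being described can be mimicked in a few lines (the 0.5 s idle window and 0.2 s load penalty are stand-ins, not App Engine's real numbers):

```python
import time

IDLE_UNLOAD_SECS = 0.5   # stand-in for the platform's idle timeout
COLD_LOAD_SECS = 0.2     # stand-in for the slow app load
_loaded_at = None

def handle_request():
    """Serve one request; pay the load penalty if the app was unloaded."""
    global _loaded_at
    now = time.monotonic()
    cold = _loaded_at is None or now - _loaded_at > IDLE_UNLOAD_SECS
    if cold:
        time.sleep(COLD_LOAD_SECS)   # simulate loading the app
    _loaded_at = time.monotonic()
    return cold

first_was_cold = handle_request()    # nothing loaded yet
second_was_cold = handle_request()   # arrives inside the idle window
print(first_was_cold, second_was_cold)
```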

App Engine Blazes if Your Code is Good! (1)

TheTyrannyOfForcedRe (1186313) | more than 4 years ago | (#30782728)

Google's App Engine is so over-sold it can take 20 seconds for a page to load.

If your app takes 20 seconds to deliver a page, that's your fault. My app consistently delivers dynamic, multi-hundred-kilobyte pages in 1-2 seconds anywhere in the US. See for yourself! www.TwitGrids.com [twitgrids.com]

If you code for App Engine like it's Rails or Django your app will be a dog.

Re:App Engine Blazes if Your Code is Good! (1)

Lord Ender (156273) | more than 4 years ago | (#30782860)

The slowness in App Engine comes from when they load and unload your app. This takes a long time no matter how your code is written. If your app is already loaded because someone else has accessed it within the past 60 seconds, it will respond in under two seconds, as you say.

If AppEngine were not oversold they would not have to load/unload apps constantly.

Re:Missed Opportunity (1)

slim (1652) | more than 4 years ago | (#30779336)

It misses the point of the magical cloud! If the PHBs learn that the magical cloud can run out of capacity, then they might have to start planning again.

The whole point is that Amazon (or, insert service vendor of choice) does the planning for you. I don't have to order extra heating gas for winter - I expect the gas company to anticipate the grid's needs. If I heated my house with gas from canisters I had to order a month in advance, I'd have to do my own planning. That would be analogous to a traditional in-house datacentre.

It's possible that Amazon failed to provide sufficient capacity for a period. Or it's possible it did just fine - I'm not reading about any SLAs being breached. But if one service provider fails to provide sufficient capacity, that doesn't invalidate the whole business model. It just means that the vendor has to do better.

Re:Missed Opportunity (1)

SatanicPuppy (611928) | more than 4 years ago | (#30780328)

For me, for my critical services, double redundancy is doable. I can even do double, and then have a third cluster which can take over for any one of the others in a pinch.

I've got redundant data lines from different providers, I've got battery and generator backups, and I've got multiple physical locations. If I'm asked, I can say, without any doubts, that we have exercised diligence, and that we're prepared for any rational situation.

With the "cloud" you have to trust that some third party, whose business is making money, is going to spend for the capacity to cover those contingencies. I flat do not trust them.

Re:Missed Opportunity (2, Insightful)

slim (1652) | more than 4 years ago | (#30780620)

With the "cloud" you have to trust that some third party, whose business is making money, is going to spend for the capacity to cover those contingencies. I flat do not trust them.

This argument could be used against any and all outsourcing.

With "motorcar servicing" you have to trust that some third party, whose business is making money, is going to perform due diligence when servicing my car.

With "banking" you have to trust that some third party, whose business is making money, is going to keep my money in a secure manner.

With "office cleaners" you have to trust that some third party, whose business is making money, is going to come in on a regular basis and clean the office.

It's a non-issue. Have your contract specify what you require from the service. If the vendor doesn't fulfil the terms of the contract, sue them.

If Amazon's SLA doesn't meet your requirements, then sure, don't buy their service. Find a provider who does, or, yes, roll your own.

But if their SLA does meet your requirements, why the hell would you "flat do not trust them" to fulfil that?

Re:Missed Opportunity (1)

SatanicPuppy (611928) | more than 4 years ago | (#30781026)

Yes, but no.

The whole cloud concept has been defined so poorly that you're not given any sort of benchmarks for performance or scalability.

I'd require guarantees (as I do with my outsourced resources), and until they provide them, I'll keep doing it myself, or outsource to someone who is willing to detail their services in more concrete terms.

Re:Missed Opportunity (1)

Synn (6288) | more than 4 years ago | (#30782264)

Yes, but no.

The whole cloud concept has been defined so poorly that you're not given any sort of benchmarks for performance or scalability.

I think the issue is that it's still a fairly immature business model. At some point the market will mature and you'll have "cloud" vendors that specialize in specific contract levels of service for highly demanding customers. Just as today you can buy a Ford or a Mercedes and expect different levels of quality and support from each.

But the concept itself is sound. It's really about specialization and volume. A service like EC2 can specialize in that one specific area, managing the hardware and networks, and just provide those resources in a general way to the end customer, your local office IT. Throw in the massive volume they'll do and at some point it even starts to become a game where they'll be able to provide those CPU and disk cycles at a price cheaper than you can do it yourself.

Re:Missed Opportunity (0)

Anonymous Coward | more than 4 years ago | (#30781614)

Forget the SLAs; that's not what this is about. The acceptable-use terms say they can shut down your instances at any point, for any reason, for any period of time. So if we go by the SLA, then yes, they have done well so far.

With banking there is a common understanding of the services provided, but with the cloud there is not. So they can always fall back to the lowest common denominator and tell me: read the user agreement, etc.

On the marketing side, however, it's presented as a solution to all problems.

The problem is that the service is broken, and we all know why. The only solution will be to raise prices and dedicate more hardware to the "more reliable" instances.

Re:Missed Opportunity (1)

segedunum (883035) | more than 4 years ago | (#30780098)

It misses the point of the magical cloud! If the PHBs learn that the magical cloud can run out of capacity, then they might have to start planning again.

People have actually planned deployment? Not on the planet Earth that I'm living on they haven't.

Re:Missed Opportunity (1)

bingoUV (1066850) | more than 4 years ago | (#30779120)

They are supposed to do basic admission control if they want to be viewed as a professional service provider. Read http://en.wikipedia.org/wiki/Admission_control [wikipedia.org]

Re:Missed Opportunity (1)

teknopurge (199509) | more than 4 years ago | (#30779214)

Because it's more than that. "Cloud Computing" is a marketing architecture, not a technical one. Incidents like this demonstrate poor planning and design, nothing more.

Re:Missed Opportunity (1)

slim (1652) | more than 4 years ago | (#30779254)

"Cloud Computing" is a marketing architecture, not a technical one.

The fact that real programming goes into projects like Hadoop and CouchDB refutes this.

I'll accept that "Cloud Computing" can be used to refer to both a marketing architecture and a technical architecture.

Re:Missed Opportunity (4, Informative)

Anonymous Coward | more than 4 years ago | (#30779606)

I think the CouchDB devs would actually disagree: I've heard a few of them (janl and jchris) refer to what they do as "ground computing" rather than "cloud computing." I can't speak for them, but I think their goal is a peer-to-peer architecture rather than a client-server one (wherein the server is a proprietary cloud).

The goals of the CouchDB project sometimes seem to extend further than just a RESTful database system...

Re:Missed Opportunity (1)

teknopurge (199509) | more than 4 years ago | (#30779720)

please mod up the AC.

Re:Missed Opportunity (2, Interesting)

happy_place (632005) | more than 4 years ago | (#30781660)

I know an IT guy that works for them, and that's exactly what he says. (He says they can't install enough servers fast enough.)

Re:Missed Opportunity (0)

Anonymous Coward | more than 4 years ago | (#30793370)

You haven't worked on a cloud. It's extremely expensive to set up; the higher-ups are interested in ROI, and cloud computing does NOT equal ROI, despite what you may think. It just opens the doors for other products/services and gets the corporate name out there. More users = slower, and it's a pain in the ass to bring on additional hardware while staying within budget. Sure, they could add more expensive hardware, but if they sold on that hardware they'd be right back to step 1.

Staged DDoS? (4, Interesting)

Stan Vassilev (939229) | more than 4 years ago | (#30778878)

When the news of EC2's DDoS came around Christmas, I remembered reading how Amazon began offering their services to third parties in the first place. It turns out Amazon has a sudden peak of traffic around shopping holidays, particularly Christmas.

To prepare for that, they added enough hardware to handle the peak, but that hardware went unused the rest of the year. So they started leasing it to third parties in the form of their web services.

This immediately makes you wonder: what happens to their ability to handle third-party apps around Christmas, when they need a lot more hardware for Amazon.com's own traffic? And then this DDoS happened, which, importantly, overloaded not the actual app servers but the DNS servers pointing to them. As a result, the app servers experienced lower traffic for third-party sites than they would have otherwise.

It makes me think, and this is of course just speculation, that this may not have been a genuine attack so much as a stunt to lessen the overload of their cloud services they knew they'd experience around Christmas, while having a plausible explanation for the downtime that blames a malicious third party.

Reading that they have indeed had (and still have) performance issues supports that speculation.

Re:Staged DDoS? (0)

Anonymous Coward | more than 4 years ago | (#30778936)

Yeah, except that their retail operations were effected as well. Kinda kills your argument.

Re:Staged DDoS? (1)

Anonymous Coward | more than 4 years ago | (#30778986)

It sounds like you're trying to disagree, maybe you meant their retail operations were *affected*?

Re:Staged DDoS? (0)

Anonymous Coward | more than 4 years ago | (#30779062)

Not quite sure it kills the argument... if the DNS servers got overloaded, the harm is minimal: DNS servers process no data of importance, they just serve read-only DNS queries, or they don't.

On the other hand, if the EC2 farm got overloaded, this could cause crashes, data losses, and a lot more lawsuits than if there was a handy "terrorist act" killing half the traffic in order to keep the machines up and take the blame...

Even if Amazon's retail ops were affected, if I had to pick between requests simply not served versus data damage and lawsuits, I'd pick requests not served any day.

Re:Staged DDoS? (1)

hesaigo999ca (786966) | more than 4 years ago | (#30779870)

You bring up a very good point about how to keep looking like a real solution while explaining your downfalls; a fake DDoS attack would be a great way to do that. However, if I'm hearing that Amazon is subject to DDoS and doesn't know how to circumvent it, then I have to say that sounds just as bad to me. Then again, I am very pessimistic when it comes to security: on the whole, no one knows what they are talking about and everybody is hacked.

Re:Staged DDoS? (1)

Dalroth (85450) | more than 4 years ago | (#30783250)

What happens is eventually they add enough capacity and enough customers that the data centers themselves have no yearly peak. Amazon may peak in December, but NBC may peak in August during the Olympics. Over the course of the year, everybody's peaks average out to no peak.
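With two hypothetical monthly load profiles, the effect is easy to see: the peak of the combined load is well below the sum of the individual peaks, so a shared datacenter needs less capacity than two dedicated ones (all numbers invented):

```python
retail = [40, 40, 45, 50, 55, 55, 50, 55, 60, 70, 85, 100]  # peaks in December
media  = [55, 50, 45, 45, 50, 60, 80, 100, 70, 55, 50, 45]  # peaks in August

combined = [r + m for r, m in zip(retail, media)]
separate_capacity = max(retail) + max(media)   # each runs its own datacenter
shared_capacity = max(combined)                # one datacenter serves both

print(separate_capacity, shared_capacity)      # shared needs less hardware
```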

Re:Staged DDoS? (0)

Anonymous Coward | more than 4 years ago | (#30839976)

No, you have swallowed the BS PR story. EC2 was always conceived and planned as a completely separate infrastructure. Amazon runs exactly ZERO of its e-commerce site on EC2 infrastructure.

Amazon can't eat its own dog food (0)

Anonymous Coward | more than 4 years ago | (#30778894)

Just the other day I came across Amazon's marketing materials explaining the benefits of EC2, in which they show a pretty graph of your datacenter capacity vs. demand over time. EC2 is supposed to scale up right alongside the demand for services. But Amazon has to use the traditional datacenter model to support EC2; it doesn't have the luxury of its datacenter automatically scaling up with demand. It is inexpensive for us customers to scale up our EC2 services, but relatively expensive for Amazon to decide to add a bunch of new servers or upgrade a bunch of existing servers in their datacenter, or maybe even add a new datacenter.

If I were Amazon and started noticing increased latency, after checking to make sure everything was in fact functioning properly, I would probably wait to see if the spike in usage is just temporary or if it will be sustained enough to warrant an increase in datacenter capacity.

Re:Amazon can't eat its own dog food (1)

sdiz (224607) | more than 4 years ago | (#30778972)

You mean, virtualize their EC2 servers using EC2?

That's virtualization all the way down.

Re:Amazon can't eat its own dog food (0)

Anonymous Coward | more than 4 years ago | (#30779026)

I wouldn't be completely shocked if we end up with a model like power companies, where capacity can be borrowed at a premium price to meet peak demand spikes. Amazon just needs some people to borrow from.

Re:Amazon can't eat its own dog food (1)

slim (1652) | more than 4 years ago | (#30779406)

Or like the insurance industry, where the insurance companies take out insurance with re-insurance companies, against getting too many claims.

Or like the mortgage industry...

Seriously, I think Amazon and Google intend to be the end of the chain. They don't want to buy computing services from a third party. It seems like they need to invest in affordable "idle" capacity to deal with peaks. That spare capacity needs to be economical when it's not needed. Either it can be cheap to keep mothballed (whole powered-down data centres on cheap land?) or it can be working on profitable batch computing tasks, that customers don't mind having paused when the capacity is needed for real-time work.

Re:Amazon can't eat its own dog food (2, Interesting)

bingoUV (1066850) | more than 4 years ago | (#30779556)

Seriously, I think Amazon and Google intend to be the end of the chain. They don't want to buy computing services from a third party.

They may want to, and it might be reasonable. One reason is that they are so far the top dogs in the fight, and buying from smaller players would not make economic sense. They cannot buy from each other because they have very different models; Google's "cloud" services are much more restricted than Amazon's.

But if/when more players come into this field, it might make sense for them to buy computing resources from each other. Both buyer and seller would gain: the seller gets to earn from otherwise idle resources (non-zero earnings, though less than selling to an end customer), and the buyer avoids disappointing his customers and saves face.

There might always be some cloud service providers who will not buy or sell, but that does not mean there is no value in cloud providers trading with each other.

Re:Amazon can't eat its own dog food (1)

alen (225700) | more than 4 years ago | (#30780210)

In the end it comes down to having a lot of expensive hardware sitting around not doing anything 90% of the time, and crazy support costs being paid. The only question with the cloud nonsense is: who holds this hardware? The big companies like EMC, HP, Dell, Seagate and Cisco want Amazon to have it, since they know Amazon will probably pay the support costs and always buy extra for growth, whereas a smaller shop will buy what it needs and upgrade later, and not pay the insane precious-metals support costs the sales people like to push, with contract clauses that say "we don't really guarantee this level of service if the parts aren't in stock."

Re:Amazon can't eat its own dog food (1)

slim (1652) | more than 4 years ago | (#30780472)

having a lot of expensive hardware sitting around not doing anything 90% of the time

The solution to this is finding saleable ways to use this spare capacity. That's what Amazon's Spot Instances plan addresses. Essentially, you set up an image, and ask Amazon to run it at a given price. During off-peak hours, when the capacity is available, your image comes up. If the capacity is unavailable, and someone's outbid you, your image comes down again.

(Remember, web hosting is not the only thing computers can do)
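The spot mechanics can be sketched as a toy simulation (the prices and bid are invented; real spot price history looked different):

```python
spot_prices = [0.04, 0.05, 0.09, 0.12, 0.07, 0.04, 0.03]  # $/hour, hypothetical
my_bid = 0.08

# The instance runs only in hours where the spot price is at or below the bid.
running = [price <= my_bid for price in spot_prices]
cost = sum(p for p, up in zip(spot_prices, running) if up)

print(running)            # comes up off-peak, gets outbid during the peak
print(round(cost, 2))     # you pay only for the hours it actually ran
```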

Re:Amazon can't eat its own dog food (1)

bingoUV (1066850) | more than 4 years ago | (#30782430)

But if individual (small) business owners own all the hardware, more hardware gets sold overall. So Intel, AMD, Samsung, Kingston and other companies that earn more from hardware than from support don't want Amazon to own all the hardware.

Also, Amazon knows which support services are worthless. They may not be using expensive EMC storage solutions, instead going with consumer-grade hard disks like Google does. So EMC may not be happy with Amazon owning lots of hardware.

Amazon also does not pay Microsoft for virtualization solutions (it uses modified Xen virtualization, last I heard). Small businesses are much more likely to buy everything from Microsoft, including virtualization solutions, which Microsoft is not the best at. So Microsoft may also be unhappy with Amazon owning the hardware.

Easy solution (3, Funny)

Anonymous Coward | more than 4 years ago | (#30778950)

Amazon needs to move their cloud into space. Yes, space! It's the next big frontier beyond clouds, and you heard it here first.

The culprit (0)

Anonymous Coward | more than 4 years ago | (#30778974)

They found the person responsible: a stuffed bear covered in soot that had hovered under the honey tree via a balloon while singing.

Cloud computing reliability! (1)

DaemonKnightVS (1422157) | more than 4 years ago | (#30779024)

Wouldn't want any of my important data stored on a system with performance issues...

Or to wait significantly longer than I would storing my data locally!

Re:Cloud computing reliability! (2, Interesting)

alen (225700) | more than 4 years ago | (#30779340)

From what I read, a lot of people like to use it for testing. You can "create" a server with a loaded OS in seconds, test it, and "destroy" it by lunch. I can do this on the free version of VMware ESX, but I don't know if I can copy a bare instance I set up to another instance. Otherwise we have a sort-of-old ProLiant G5 server with the free version of ESX that we use for testing different things. In the past we used the crappiest server we had; if we needed multiple machines, we were screwed. With VMware and Hyper-V you can even create virtual Windows MSCS clusters easily.

It's aimed at smaller shops with less cash on hand. For larger organizations it doesn't make sense.

For testing, use Trinity Rescue Kit (1)

nweaver (113078) | more than 4 years ago | (#30779766)

Trinity Rescue Kit is a network-boot/CD-boot Linux that reads and writes NTFS, etc.

We use it here to image and de-image Windows systems; it takes ~10 minutes boot-to-boot to bring up a raw Windows system in a known state.

Re:For testing, use Trinity Rescue Kit (1)

slim (1652) | more than 4 years ago | (#30780076)

it takes ~10 minutes boot-to-boot to bring up a raw Windows system in a known state.

So 60 times longer than bringing up a new EC2 instance from an image, and when you're not testing, you're still paying for the hardware.

Re:For testing, use Trinity Rescue Kit (0, Flamebait)

nweaver (113078) | more than 4 years ago | (#30780152)

The poster specifically asked for RAW IRON testing: no VM, no nothing.

And it works just fine on little $400 fanless Intel Atom systems; that's what we use.

Re:For testing, use Trinity Rescue Kit (1)

slim (1652) | more than 4 years ago | (#30780362)

The poster specifically asked for RAW IRON testing: no VM, no nothing.

I've re-read it about 5 times, and I can't see where he asks for that.

Re:For testing, use Trinity Rescue Kit (1)

nweaver (113078) | more than 4 years ago | (#30780422)

Sorry, got slightly confused. Oops.

But testing on VMs has its limits: they do introduce aberrations, so you should test on real systems too.

Re:For testing, use Trinity Rescue Kit (1)

lawaetf1 (613291) | more than 4 years ago | (#30781864)

When did you last spin up an EC2 instance? I wait at least 5 minutes, sometimes closer to 10, for an image to launch.

Granted, the EBS-backed AMIs will boot faster, since the image is already boot-ready.

still too expensive (2, Interesting)

alen (225700) | more than 4 years ago | (#30779108)

i priced out a high memory config and it's like $6000 per year or more for 32GB RAM of memory and 8 CPU cores. In a few months Intel will ship server CPU's with 12 logical cores per socket. RAM prices are dirt cheap and at current prices a 36GB RAM HP Proliant DL 380 G6 will run around $13,000 and 72GB of RAM another $2000. and that includes 5 year 4 hour response time support, some of the other extras like advanced ilo, and i forgot what else i added since it's so cheap.

add in the increased bandwidth costs and the supposed cost savings vanish. it's like people who lease a Lexus or a Benz because they can't afford to buy or they like the lower monthly payments. it's like 2000 all over again: hardware is expensive, so ASPs set up shop; hardware prices drop for the power you get and the ASPs go out of business.

and i think this is a scam by the hardware companies. when i buy an HP server i buy one machine and a few hard drives. to support me Amazon needs to buy a few servers and 5 times the raw space for DR purposes.
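To make the parent's comparison concrete, here is a rough TCO sketch using the commenter's own figures (~$6,000/yr for a 32GB/8-core EC2 instance vs. a ~$15,000 server amortized over its 5-year support contract). The $3,000/yr overhead figure is an invented placeholder for power, cooling, bandwidth and admin time, not a real quote:

```python
# Rough TCO sketch using the (hypothetical) prices from the comment above.

def yearly_cost_owned(purchase_price, lifetime_years, overhead_per_year=0.0):
    """Straight-line amortization plus any hosting overhead."""
    return purchase_price / lifetime_years + overhead_per_year

ec2_per_year = 6000.0
server_per_year = yearly_cost_owned(13000 + 2000, 5)   # hardware only

print(f"EC2:   ${ec2_per_year:,.0f}/yr")
print(f"Owned: ${server_per_year:,.0f}/yr (hardware only)")

# The gap closes once overhead is added (the $3k/yr here is made up):
with_overhead = yearly_cost_owned(15000, 5, overhead_per_year=3000)
print(f"Owned + $3k/yr overhead: ${with_overhead:,.0f}/yr")
```

On these numbers the raw hardware looks half the price, and the whole thread below is essentially an argument about how big that overhead term really is.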

Re:still too expensive (1)

Lord Ender (156273) | more than 4 years ago | (#30779348)

That's a bad analogy. A better one would be: Hosting in "the cloud" rather than in your own datacenter is like taking a taxi instead of buying a car and hiring a full-time driver.

Re:still too expensive (2, Interesting)

alen (225700) | more than 4 years ago | (#30779412)

the per hour plans are cheap. but the 24x7 hosting EC2 plans are a lot more expensive than physical hardware. and we're in a cycle where the hardware power is increasing at a very fast pace again. few years ago 4GB RAM was expensive on a server. today when we buy RAM for a newish server we just buy 32GB of RAM. the price difference is so small it doesn't make sense to buy less. $1200 or so for HP branded 32GB RAM. a few hundred $$$ less for 16GB or 8GB.

CPU power is increasing and next year with Sandy Bridge the I/O rate, which has almost always been the big bottleneck, will make another huge leap, and EC2 won't be able to match the performance increase since they spent obscene millions of $$$ on what is soon going to be obsolete hardware that you can barely sell on ebay.

Re:still too expensive (2, Insightful)

slim (1652) | more than 4 years ago | (#30779510)

the per hour plans are cheap. but the 24x7 hosting EC2 plans are a lot more expensive than physical hardware.

Which makes the GP's taxi analogy perfect. If you want to host something with reasonably static storage needs, that's getting hit consistently all year round, EC2's going to be more expensive than the alternatives.

If you've got something like SmugMug (image hosting) where your storage needs grow forever, at an unpredictable rate, S3 might be cheaper than managing the storage yourself.

If you get massive surges in demand, a few times a year (for example, you sell tickets for in-demand events), the ability to add a few hundred EC2 instances just at the times you need them, might be cheaper than having that spare capacity all year round.
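The surge scenario above can be put in back-of-envelope numbers. Every figure here is an invented assumption (server counts, $3,000/yr per owned server, $0.40/instance-hour), purely to show the shape of the trade-off:

```python
# Is it cheaper to own peak capacity year-round, or burst into EC2?
# All prices and counts below are illustrative assumptions.

def owned_cost(peak_servers, per_server_per_year):
    """Own enough hardware to cover the worst spike, all year."""
    return peak_servers * per_server_per_year

def burst_cost(base_servers, per_server_per_year,
               surge_servers, surge_hours, ec2_hourly):
    """Own only the baseline; rent the spike by the hour."""
    return (base_servers * per_server_per_year
            + surge_servers * surge_hours * ec2_hourly)

# e.g. baseline load needs 5 servers; ticket on-sale spikes need 200
# extra for ~40 hours a year, at an assumed $0.40/instance-hour:
always_on = owned_cost(205, 3000)
bursting = burst_cost(5, 3000, surge_servers=200, surge_hours=40,
                      ec2_hourly=0.40)
print(always_on, bursting)
```

With spiky enough demand the rented option wins by more than an order of magnitude, which is the case the parent is describing.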

Re:still too expensive (1)

bingoUV (1066850) | more than 4 years ago | (#30779652)

But a small mom-n-pop shop doesn't want your 32 GB, or Sandy Bridge. They want less than 100 billion CPU cycles per day, less than 1 GB data transfer per day. But they want an always on server with redundant cooling & power supply. They don't have the expertise for this, they cannot employ a geek because good geeks come expensive. For them it is multiple orders of magnitude cheaper to go cloud.

Re:still too expensive (1)

mrrudge (1120279) | more than 4 years ago | (#30780268)

24x7 is also just one usage scenario. For intensive rendering being able to hire a small army of processors for just long enough is infinitely preferable to waiting for months for a few local machines to complete, and buying the same capacity and having it idle a lot just isn't possible for me.

Re:still too expensive (1)

alen (225700) | more than 4 years ago | (#30780364)

one time i saw a stack of Nvidia brand servers delivered around here. how much data are you sending over the internet and back, which is still slower than sending it to the next rack over? does EC2 support CUDA and other technologies to render via the "graphics" card, which is faster than x86? i've noticed financial companies are using nvidia servers with CUDA. and these are small shops.

Re:still too expensive (5, Insightful)

shayne321 (106803) | more than 4 years ago | (#30779470)

Apples and tomatoes.. Unless your company already owns a fully equipped data center with excess capacity you have to factor in colocation space, power, cooling, backups, network infrastructure, and security. And if you're not colocating in a space where you can purchase bandwidth you have to factor in the cost of the physical circuit(s) (T1/T3/Metro-E, whatever).

We haven't even begun to consider availability. What if your app can't tolerate 4 hours of downtime (for the HP monkey to come swap out your motherboard)? Now we need redundant servers, redundant connectivity, generator and ups capacity, highly-available network infrastructure, load balancers, etc. Let's not forget the highly paid staff/consultants to implement and maintain all of this.

What happens when your app takes off and you need to scale rapidly? Now you have to procure and install servers, keeping up with the infrastructure required every step of the way.

Also, don't forget in 5 years that $13,000 server you just bought will be a boat anchor. Time to purchase a whole new round of hardware.

I'm not claiming cloud computing is the end all solution for everything, there are certainly drawbacks.. But you cannot compare the cost of a $13,000 server to a $6,000/year instance lease as apples to apples.

Re:still too expensive (1)

alen (225700) | more than 4 years ago | (#30780054)

and EC2 is a good solution for mom and pop. i even recommend it. but what if EC2 takes off and they have to add resources? what if PHB in charge at Amazon says no way because buying all this stuff and paying the ridiculous titanium level support costs will kill the ROI/ROE and every other financial acronym and wall street will hate us? Wall Street hates it when your Return on Assets ratio is too low and buying more assets to support EC2 is not good.

and we still have 10 year old servers doing low end things. unlike Amazon there isn't a monthly payment to keep them around. if you look at Amazon's pricing they charge you for backups, data transferred in and out, etc. they nickel and dime you and once you add it all up buying physical hardware isn't that bad. and the reason is all the "enterprise" level hardware companies are charging Amazon a fortune where in the real world you can get away with less hardware and still have a lot of room to grow without paying Cisco $20,000 a year in support for a WAN card.

Re:still too expensive (0)

Anonymous Coward | more than 4 years ago | (#30783118)

You're still paying for power and cooling for those low-end servers. At some point it becomes more cost-effective to buy a new server if you can consolidate a bunch of older servers as virtual instances on it. If they're 10 years old, then they're what, 500MHz P3s with 256MB RAM? How many of those could you run on a single server with a pair of 6-core 3GHz Xeons costing no more than a few thousand dollars?
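The consolidation claim above is easy to sanity-check with pure clock-cycle arithmetic. This deliberately ignores I/O, RAM and the fact that old boxes mostly idle, so treat it as a rough bound rather than a sizing guide:

```python
# How many ~500 MHz P3-class servers could a single dual-socket,
# 6-core, 3 GHz box absorb, counting raw cycles only?

old_capacity = 0.5e9            # cycles/sec per legacy server
new_capacity = 2 * 6 * 3.0e9    # two sockets x six cores x 3 GHz

equivalents = int(new_capacity // old_capacity)
print(equivalents)  # legacy-server equivalents by clock alone
```

By cycles alone one modern box stands in for dozens of the old ones, and since the old machines are mostly idle the practical consolidation ratio is usually even better.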

Re:still too expensive (0)

Anonymous Coward | more than 4 years ago | (#30779496)

i priced out a high memory config and it's like $6000 per year [...] add in the increased bandwidth costs and the supposed cost savings vanish. it's like the ghetto people that lease a lexus or a Benz because they can't afford to buy or they like the lower monthly payments.

Well, Amazon had some unused hardware so they let people use it a bit at high prices. Their side of the deal is pretty good, I think. The real question is, what are *you* doing paying 6k/year for 8 cores.

Not about the price. Planning is the keyword (0)

Anonymous Coward | more than 4 years ago | (#30779502)

Okay, you spend well over $10,000 to get a great machine. Then you need to pay for maintenance. Train or hire someone to maintain it or buy the service from a third party... That costs extra. But even ignoring that... What if your company is planning a TV advertisement campaign and expects triple the strain on all public systems for a few months? Buy more hardware? Lease, perhaps? And what if there is some sort of an accident (be it fire, flooding, anything)? You need to suddenly spend over $10,000 there again in addition to everything else you need to fix. (Alternatively, the server room needs to be made safe from all such hazards, which might cost extra.) And what if there is a sudden large peak in traffic? Or alternatively, your business dwindles and then you have spent unnecessarily much on the hardware...

Trade that all to about flat rate service that is easy to scale up or down as needed. Despite all the problems that cloud computing has, I can't say that I don't understand why many executives choose it, despite the risk that they might end up spending more in the long run.

Re:Not about the price. Planning is the keyword (1)

alen (225700) | more than 4 years ago | (#30780114)

G6 Proliant servers start at $2000 for the low end model and scale up to 144GB RAM. the hardware is so scalable today it's insane. we just buy a lot of RAM because the difference is only a few hundred $$$ and it saves me from a late night hardware upgrade. in fact what i do is max out using the least dense RAM we can afford. RAM is dropping in price by 50% a year so if we need more RAM we buy the more dense RAM next year at the same price and use the existing RAM in another server where we were really financially strapped to buy it as cheap as possible.

we even have a "crap" box of RAM lying around with 30GB or more in there now at any one time that we go to for a quick upgrade of a server from a few years ago when RAM was a lot more. i just upgraded a server from 8GB to 16GB RAM that was bought 2 years ago with a tight budget. RAM is cheap and there is a ton of extra always around.

Re:still too expensive (5, Insightful)

segedunum (883035) | more than 4 years ago | (#30779732)

You can always tell someone doesn't know what they are talking about when they price up hardware and say memory is 'dirt cheap' and then say that something like EC2 is too expensive. I see it a lot in those scrawny developers around the web who don't want to do any deployment (and want to pretend it doesn't even exist) and want something like EC2 and Engine Yard but for the cost that they were paying for shared hosting - where they complained that they were running out of resources!

Purchasing hardware implies a lot of other costs - where you will host it, how you will connect it, how you will back it up... Going a traditional hosting route for this is ridiculously expensive. You need to rent the hardware, you need to communicate with the hosting company about setting up, you don't know how it will be set up (at least things are standardised with EC2), how you will handle failover (buy more hardware!) and how you will back it up (buy more hardware and storage!). Can you snapshot your data easily? Can you simply fire up a copy of your server to get running again or do testing? How will you recover from a hardware failure or a disaster where you don't hear from your hosting company for several hours while everyone bites their fingernails? It's why every other hosting company is either denying that EC2 is happening, trying to trash-talk it, or trying to come up with their own 'cloud' virtualised, decentralised storage platform with some kind of software management tool... and generally failing at it. They will either respond to it or they will die.

RAM prices are dirt cheap and at current prices a 36GB RAM HP Proliant DL 380 G6 will run around $13,000 and 72GB of RAM another $2000. and that includes 5 year 4 hour response time support

Excuse me while I get up off the floor from laughing. What kind of 'support' do you think you get for that and how useful do you think it is? That support is for ASPs and hosters. For the rest of us, deploying something means several layers of support on top of that for the hardware. Trust me - every other hosting company has scaling, infrastructure and bandwidth issues. I've been through it. My experience with EC2 in my somewhat limited comparative forays thus far has been infinitely preferable.

i buy an HP server i buy one machine and a few hard drives. to support me Amazon needs to buy a few servers and 5 times the raw space for DR purposes.

Yer, probably because you don't back anything up and you haven't had to handle recovery from a disaster. Pffffffffffffffff............... We can see who the average Slashdot reader is when this gets modded up with this level of grammar.

Re:still too expensive (1)

alen (225700) | more than 4 years ago | (#30779932)

we have our own datacenter with a DR recovery site in another state via EMC SRDF and some other technologies. EC2 might be good for small companies and we rent out space in our datacenter to smaller companies but for larger ones it doesn't make sense. i work for a hosting company and i always see customers in their hardware cages. what happens if there is a problem with EC2? in typical new economy fashion do they tell you to go away and live with it? how long will it take them to add new resources if these latency issues are true? what do you tell customers in the meantime? our customers will just add a larger pipe. doesn't take that long.

i deal with EMC and other technologies. adding redundancy like EC2 has to have isn't cheap. EMC likes to charge $800 for a 500GB "cheap" hard drive for lower tier storage. of course you need at least 2 for RAID 1 and if you do DR and BCV's then the raw to usable storage ratio is 5 to 1. Of course Seagate execs are loving this along with EMC who jacks up their support costs once you go past a licensed storage amount in your SAN.

backup is more than just snapshots and we use a mix of disk and LTO-4. this year LTO-5 is going to ship. LTO-4 we get 800GB/1.6TB of data on a $50 tape. in reality i have a lot of tapes with 3TB of data on them. LTO-5 will double that. and we store backups for years since sometimes you get a lawsuit or some other financial dispute and need data from 5 years ago. one time we needed data from almost 10 years ago from another company we bought out a while ago. can EC2 do this level of backup? how much will it cost?

Intel and everyone is hyping SSDs and this year when i price out a few new servers i'm going to look at them since the power consumption is so low. at least for the servers where most of the storage backend is EMC SAN.

Re:still too expensive (1)

Bengie (1121981) | more than 4 years ago | (#30781428)

Similar thing I heard. I've seen people talk up HD storage like it's dirt cheap too. It might be cheap for consumers, but not for a decent hosted setup.

My company recently bought an effective 16TB of SAN storage; it cost $120k. You ask someone how much they think 16TB costs, and they'll be like "hmmm.. $200 for 2TB, so $1600 for 16TB."

Some people also say memory is cheap. It's cheap if you buy small amounts of it. A 2GB stick is a lot cheaper than an 8GB stick of ECC server-grade memory. One of the server guys told me about a recent computer they built. A standard dual-socket i7 with 32GB of RAM and all the fixings, so ~$10k. They wanted to see the costs for upgrading to 192GB since it was an option. An extra $50k. Not only did it require much denser sticks, but it also required CPU and socket upgrades since the lower-end CPUs and mobo didn't support those densities. You can't just go "hmmm... 6GB for $100, so $3200 for 192GB".

Now you need a failover computer, so instantly double your costs.

Re:still too expensive (1)

dsouza42 (1151071) | more than 4 years ago | (#30779902)

It certainly costs more than owning your own hardware, but you're paying for convenience. If you have your own hardware, the actual cost is not only of the hardware itself. You can go ahead and double that cost to have redundancy (while in EC2 you don't give a crap when a server fails. Just restart the instance... if that). When it does fail you have to send someone out to the server's location to maintain it. You also have to worry about hardware upgrades and internet link upgrades. And that's if you're in the US. I live in Brazil and my company is switching to a cloud-based provider. Since server hardware and internet links are considerably more expensive here than in the US we'll be saving a ton of money every month. Our estimate is that our monthly costs with server maintenance and internet bandwidth will go from today's $2000/month down to around $500/month (not counting the reduction in downtime we'll have). If we needed a huge infrastructure with hundreds or thousands of servers we'd probably be better off having our own stuff, but in our case (and in the case of many others too) it's worth it.

Re:still too expensive (1)

HeronBlademaster (1079477) | more than 4 years ago | (#30782826)

i priced out a high memory config and it's like $6000 per year or more for 32GB RAM of memory and 8 CPU cores. In a few months Intel will ship server CPU's with 12 logical cores per socket. RAM prices are dirt cheap and at current prices a 36GB RAM HP Proliant DL 380 G6 will run around $13,000 and 72GB of RAM another $2000.

How much does the power drain of that 12-core HP Proliant DL with 72GB of RAM add up to every month?

How much time will you lose when $HARDWARE fails and your server is offline? Even with 4-hour response time support, depending on what you're using the server for you could lose tons of money while it's offline - in the meantime, you can have a new EC2 instance spun up in seconds, if you even notice it die in the first place.

You can also save a lot by reserving an EC2 instance for 1 or 3 years for a one-time fee, and then during that reserved time you pay a much lower hourly rate; if your use-case is such that you can turn it off when it's not needed, you can save a boatload of money.

Point being, raw hardware cost is not a sufficiently complete comparison.

The real value of EC2 becomes apparent when it's not your sole host. Say you run a website that sees occasional spikes; rather than keep enough hardware on hand to deal with the highest spike, which means a lot of your hardware is idle most of the time, you can simply automatically scale into EC2 when the load starts rising - just spin up an EC2 instance and add it to your own load balancer; you'll magically handle the spikes without a problem, and you can turn them off when demand goes away. Instead of paying for a couple of $12000 servers to handle your occasional spikes, you can just pay for *one* $12000 server, then pay for a day or two of EC2 time per month (which, even with the high memory instances, will only cost a few hundred a year).
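The reserved-instance break-even mentioned above can be sketched in a few lines. The upfront fee and hourly rates here are invented placeholders (Amazon's real numbers vary by instance type and term); the point is only the shape of the calculation:

```python
# Reserved vs. on-demand: where does the one-time fee pay for itself?
# Rates below are made-up placeholders, not Amazon's pricing.

def on_demand_cost(hours, rate=0.40):
    return hours * rate

def reserved_cost(hours, upfront=1200.0, rate=0.14):
    return upfront + hours * rate

# Utilization at which reserving starts to win:
breakeven_hours = 1200.0 / (0.40 - 0.14)
print(round(breakeven_hours), "of 8760 hours/year")

print(on_demand_cost(8760), reserved_cost(8760))  # 24x7: reserve wins
print(on_demand_cost(1000), reserved_cost(1000))  # occasional: on-demand wins
```

Which side of the break-even you land on is exactly the "sole host vs. burst capacity" question the thread keeps circling.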

Re:still too expensive (1)

bingoUV (1066850) | more than 4 years ago | (#30783090)

The real value of EC2 becomes apparent when it's not your sole host ... simply automatically scale into EC2 when the load starts rising

Most servers would have a data store (database, filesystem etc.). If you want your own server and EC2 to aid each other in times of adversity, they will have to share the data store. How do you achieve this? Wouldn't common storage between different data centres (one is your own and another is EC2's) be very slow such that it would impact the performance of server processes running in both the data centres?

So I would think it makes sense to either completely go to EC2, or completely host all your own servers. What am I missing?

Re:still too expensive (1)

HeronBlademaster (1079477) | more than 4 years ago | (#30783192)

So I would think it makes sense to either completely go to EC2, or completely host all your own servers. What am I missing?

Presumably you have some way of deploying code updates to your own, non-dynamic hosts; you could use the same mechanism to bootstrap the EC2 instance.

We did this in a distributed computing class at my university; we made an EC2 image stored on S3 (which costs pennies per month) with the right software preinstalled, and set it up to know just enough to download the latest version of the website code once it started up.

If your website is PHP-based or something, the trivial solution is to have it run "svn checkout" in /var/www or wherever, assuming your svn server is publicly accessible.
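A bootstrap like the one described can be handed to EC2 as a user-data script that runs at first boot. This is a hypothetical sketch: the repo URL, package names and paths are placeholders, and the package manager depends on the AMI:

```shell
#!/bin/bash
# Hypothetical EC2 user-data bootstrap: the AMI carries only the
# stack; the instance pulls the latest site code when it starts.
set -e
yum -y install httpd subversion   # apt-get on Debian-style AMIs
svn checkout http://svn.example.com/site/trunk /var/www/html
service httpd start
```

With something like this in place, "adding capacity" is just launching another instance from the same image; nothing on the AMI itself ever goes stale except the base OS.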

Re:still too expensive (1)

bingoUV (1066850) | more than 4 years ago | (#30787400)

Presumably you have some way of deploying code updates to your own, non-dynamic hosts; you could use the same mechanism to bootstrap the EC2 instance.

Bootstrapping is not a problem. But once both servers are running, any action by the user and any system event would make a change in the datastore. This must be replicated in real time across the storage of both servers. The only proper solution for this that I can think of is: mount the same storage at both servers. This might increase the latency of data access/commit for both servers. Even if one of the data centres has low-latency access to the data store, locking etc. would make sure that this one also gets its performance affected.

Re:still too expensive (1)

HeronBlademaster (1079477) | more than 4 years ago | (#30787624)

Which is better: somewhat higher database access latency, or unusably overloaded servers, or idle-most-of-the-time hardware?

There's not always one right answer, but "somewhat higher database access latency" is the right answer at least some of the time. It really just depends on your use case.

Re:still too expensive (1)

bingoUV (1066850) | more than 4 years ago | (#30787804)

Which is why I said it makes sense to either completely go the EC2 way, or completely roll your own. You don't get to say

The real value of EC2 becomes apparent when it's not your sole host .... you'll magically handle the spikes without a problem ...

(Italics yours, bold mine)

when it works only by increased database latency.

Re:still too expensive (1)

HeronBlademaster (1079477) | more than 4 years ago | (#30787906)

All I'm saying is that sometimes increased database latency is better in some way than having idle hardware some large percentage of the time, and better than running entirely on EC2 all of the time (which can get expensive).

For example, let's say your page render time is in the 500ms range (perhaps it's a complex retail website). When the load goes up, is it more preferable to have some of your users get a 750ms-rendered page, or to have all of your users get a 1000ms-rendered page because you don't want to expand into EC2? Which costs more money, running a bunch of hardware that might sit idle most of the time, or occasionally losing a sale from some of your customers irritated by slightly longer page loads? Those are questions that only actual numbers can answer, and they vary on a case-by-case basis.

I have a concrete example of a good use case, but my friend wants to build a business out of it with me, so I'm not going to share it ;)

You're right in that it's not always acceptable - but don't pretend it's never acceptable. Sometimes, increased database latency isn't a problem (or it's more acceptable than the alternatives for whatever reason), and in such cases my original statement stands. So... I do get to say it, but I should have qualified it with a "sometimes".
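The latency trade-off argued over in this subthread reduces to a small expected-value calculation. The render times are the ones used earlier in the discussion (500ms/750ms/1000ms); the spill fraction is an invented assumption:

```python
# Toy model: under a spike, do all users wait longer, or do some
# requests burst into EC2 at a modest DB-latency penalty?

base_ms = 500          # normal render time on local hardware
overloaded_ms = 1000   # everyone suffers when local capacity saturates
burst_ms = 750         # EC2-served requests pay extra data-store latency

spill_fraction = 0.5   # assumed share of spike traffic sent to EC2

no_burst_avg = overloaded_ms
with_burst_avg = (1 - spill_fraction) * base_ms + spill_fraction * burst_ms
print(no_burst_avg, with_burst_avg)
```

Whether the improved average justifies the replication complexity is the case-by-case judgment both posters end up agreeing on.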

Re:still too expensive (1)

bingoUV (1066850) | more than 4 years ago | (#30788022)

You're right in that it's not always acceptable - but don't pretend it's never acceptable

When did I pretend that it's never acceptable? The only problem I pointed out is data store latency; and if that is not a problem, it sure makes sense. Heck, some servers may not even have a data store that needs updating. On the other hand, you pretended that it is always acceptable by, as you say, not using "sometimes".

Seems like we essentially agree, with different use cases in mind.

Eh? EC2 might be oversubscribed? (1)

Colin Smith (2679) | more than 4 years ago | (#30779330)

Uh... How else do you think they make money?

 

Re: Eh? EC2 might be oversubscribed? (1)

Rogerborg (306625) | more than 4 years ago | (#30779486)

Are you insane? It's The Cloud. You don't get to question problems with their business strategy, or the consequences for their customers. What are you, some sort of Cloud Denier?

No glitches for us at least (at Netalyzr) (1)

nweaver (113078) | more than 4 years ago | (#30779368)

We use EC2 as the back-end for Netalyzr [berkeley.edu] (our free, applet-based network testing and debugging service), and right now are in the middle of a minor flashcrowd with our big updated release. No recent glitches we've noticed, with long running small instances.

Is Spot Instances for unused capacity at fault? (1)

ogiller (3107) | more than 4 years ago | (#30779576)

I seem to recall a post on slashdot about Amazon Introduces Bidding For EC2 Compute Time [slashdot.org]. This announcement took place on 12/14/2009, which coincides with the increase in average ping latency as illustrated in cloudkick's chart [cloudkick.com]. Was Amazon unprepared for the increase in demand created as a result of bidding off the unused EC2 capacity?

I am sure that people came up with some pretty creative things to do with low-priced EC2 capacity.

I am a happy customer (1)

MarkWatson (189759) | more than 4 years ago | (#30780068)

I keep a small reserved instance running 24x7 and the cost is very low. I also have an EBS-bootable large instance that I run for a few hours at a time as needed. It has been a while since I used it, but Elastic MapReduce also works well and is fairly inexpensive for what you get.

About half of my customers also use EC2s.

(Note: Amazon gave me a large grant to use EC2 for free for work on my last book, but my comments are my honest opinions.)

Startups are not building data centers (1)

cryfreedomlove (929828) | more than 4 years ago | (#30780644)

The cloud providers will have growing pains for years to come. However, cloud is a much better choice than the overhead of building and running your own data center.

IMHO the issue is number of cycles/sec (1)

bensch128 (563853) | more than 4 years ago | (#30813070)

Amazon's instance types (http://aws.amazon.com/ec2/instance-types/) don't seem to indicate the number of cycles/sec you are guaranteed per type.

They sell instance types based on physical hardware specs, which is worthless in a cloud architecture.
What they should really be doing is indicating the number of cycles/sec an instance type is GUARANTEED, and then enforcing it.
If the customer doesn't use that number of cycles/sec, then fine, put the idle cycles up for bidding.

Just my $0.02
Ben
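The guarantee-plus-bidding scheme proposed above can be sketched as a tiny allocator: each tenant holds a reserved share of the host's cycles/sec, and whatever they leave unused falls into a spot pool. This is entirely hypothetical; EC2 exposes nothing like it (its "Compute Unit" is only an equivalence claim, not an enforced floor):

```python
# Hypothetical "guaranteed cycles" allocator, per the comment above.

HOST_CYCLES = 24e9   # e.g. 8 cores x 3 GHz

def allocate(guarantees, demands):
    """guarantees/demands map tenant -> cycles/sec.
    Returns (granted cycles per tenant, cycles left for the spot pool)."""
    assert sum(guarantees.values()) <= HOST_CYCLES, "host oversubscribed"
    # Each tenant gets what it asks for, capped at its guarantee:
    grants = {t: min(demands.get(t, 0.0), g) for t, g in guarantees.items()}
    spot_pool = HOST_CYCLES - sum(grants.values())
    return grants, spot_pool

grants, spot = allocate(
    guarantees={"a": 8e9, "b": 8e9},
    demands={"a": 8e9, "b": 2e9},   # tenant b is mostly idle
)
print(grants, spot)  # b's unused cycles plus the free capacity go to spot
```

The invariant the commenter wants is visible in the assert: guarantees can never exceed the host, so an honest provider physically cannot oversubscribe the floors it sold.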

Re:IMHO the issue is number of cycles/sec (1)

bensch128 (563853) | more than 4 years ago | (#30813188)

Just as a followup.
After reading more about EC2 instance types, the Amazon term is "compute unit". However, they don't give any hard numbers for the Hz of the machine (just "One EC2 Compute Unit provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor.") and they don't give any GUARANTEEs that the compute unit won't be diluted over time.

How do you know there is a performance problem? (0)

Anonymous Coward | more than 4 years ago | (#30836710)

Does the system just feel slow or has it been measured as such? Which resources are being starved? CPU? Disk? Network? Memory? Has anyone done any benchmarking to see what the actual vs. theoretical numbers are? What tools are being used? Collectl provides a pretty good high-level summary.
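One cheap way to get past "feels slow" is a fixed-workload probe run repeatedly over time (or against a known-good host) so drift shows up as numbers. This is a toy sketch, not a substitute for collectl or a real benchmark suite:

```python
# Fixed CPU and disk workloads: compare elapsed times across runs.
import os
import tempfile
import time

def cpu_probe(n=200_000):
    """Time a fixed arithmetic workload."""
    t0 = time.perf_counter()
    sum(i * i for i in range(n))
    return time.perf_counter() - t0

def disk_probe(mb=8):
    """Time writing and fsyncing a fixed number of megabytes."""
    buf = b"\0" * (1 << 20)
    t0 = time.perf_counter()
    with tempfile.NamedTemporaryFile() as f:
        for _ in range(mb):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    return time.perf_counter() - t0

print(f"cpu: {cpu_probe():.3f}s  disk(8MB): {disk_probe():.3f}s")
```

On a shared virtualized host the interesting signal is the variance between runs, not any single number; a widening spread is what "noisy neighbor" looks like from inside an instance.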
