
Web Server Stress Testing: Tutorial/Review

Hemos posted more than 10 years ago | from the building-the-machine dept.

The Internet

darthcamaro writes "I found an interesting article on builder.com that suggests laying 'Siege' to your server to help you set up a site to withstand /. "One of the great fears for many Web developers, and even more so for Web server admins, is watching their sites brought down by an avalanche of traffic. The traffic may be from a DoS attack, or maybe the site just got slash-dotted. Bottom line: It just isn't available.""


28 comments


I will find it immensely humorous if that site dies (1, Funny)

Anonymous Coward | more than 10 years ago | (#8644814)

Forward /. hordes! You have a mission now!

Re:I will find immensely humorous if that site die (0)

Anonymous Coward | more than 10 years ago | (#8644857)

Just to clarify, I'm not encouraging anyone to DoS the site or anything; just do what you would normally do without the parent post.

Re:I will find immensely humorous if that site die (1)

nathanhart (754532) | more than 10 years ago | (#8644908)

It would be rather comical for about 5 minutes if C|Net Networks' sites were brought down due to being /.'ed.

Siege Post (0)

Anonymous Coward | more than 10 years ago | (#8644821)

No, really, I don't think someone would try "frist pst0ing" this one...

Sweet.. (1)

hookedup (630460) | more than 10 years ago | (#8644822)

No more trial and error for this guy. What I would do is purchase large amounts of traffic, tweak, then gradually work my way up to larger and larger numbers.

Not only does this save time, but hey, it saves money.

Re:Sweet.. (0)

Anonymous Coward | more than 10 years ago | (#8644874)

that's a fascinating point. boy, you should have your own tv show or at least have some sort of book deal with a big name publisher. honestly, you're amazing. how did you ever get so intelligent?

hey, i pick my nose. now i'm on the same level as you.

FUCK YOU YOU STUPID FUCK!!!

DDoS (3, Funny)

Dreadlord (671979) | more than 10 years ago | (#8644912)

$ siege -c25 -t1M www.mydomain.com
** Siege 2.59
** Preparing 25 concurrent users for battle.
The server is now under siege...


In other news, domain/hosting company mydomain.com [mydomain.com] was under a heavy DDoS attack; it's believed the attacks were carried out by members of a geek news website called Slashdot.

sco'd (1)

An Onimous Cow Herd (8409) | more than 10 years ago | (#8645062)

1. download
2. gunzip and untar
3. ./configure
4. make, make install
5. $ siege -c200 -t720M www.thescogroup.com
** Siege 2.59
** Preparing 200 concurrent users for battle.
The server is now under siege...

Re:sco'd (1)

zangdesign (462534) | more than 10 years ago | (#8645142)

200 users? Is that it? I'd think you'd configure it for something like 10K simultaneous users.

Re:sco'd (1)

Rick the Red (307103) | more than 10 years ago | (#8646469)

200 is overkill. It only takes 128 users to max out SCO OpenServer.

Re:sco'd (0)

Anonymous Coward | more than 10 years ago | (#8670248)

Who says they are running OpenServer?

I still like The Grinder better (4, Informative)

monkeyserver.com (311067) | more than 10 years ago | (#8645117)

I looked through the article; it doesn't look like much more than a slightly sophisticated wget for-loop :). Seriously though, this seems similar to a few other basic stress testers out there. For the projects I've worked on you need session management, interactive processes, and so on; basically, hitting 5 URLs isn't gonna stress test anything of value.
The Grinder [sourceforge.net], on the other hand, allows for distributed workers following the same or different 'scripts', all controlled from a single console. It provides you with a slew of configuration options and all sorts of data at your fingertips. The scripts are Jython [jython.org], which is easy to learn and very flexible. If you want to stress test a complex app, especially something interactive or requiring sessions, check out The Grinder; it's a godsend.
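
For readers who haven't seen The Grinder, a test script is just a small Jython class that the worker processes run. Below is a rough sketch of what one looks like, assuming the Grinder 3 net.grinder.script / HTTPPlugin API; the URL and test name are placeholders, not anything from the article or the comment above.

from net.grinder.script.Grinder import grinder
from net.grinder.script import Test
from net.grinder.plugin.http import HTTPRequest

# Wrap an HTTPRequest in a Test so the console collects timing statistics for it
frontPageTest = Test(1, "Front page")
frontPage = frontPageTest.wrap(HTTPRequest())

class TestRunner:
    # Each simulated user (worker thread) calls this repeatedly
    def __call__(self):
        frontPage.GET("http://www.example.com/")
        grinder.sleep(1000)  # think time, in milliseconds

A grinder.properties file then points the worker processes at the script and says how many threads each should run, and the console aggregates the timings from all of the distributed workers.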

stress testing tools (4, Informative)

HaiLHaiL (250648) | more than 10 years ago | (#8645283)

Another great tool for stress testing your site is Jakarta JMeter [apache.org]. Gives you a nice GUI for watching your response times plummet as your site is pummeled.

From the article:
Siege now supports a new proxy tool called Sproxy, which harvests URLs for testing. The premise behind Sproxy doesn't make much sense to me... Personally, I prefer Scout for getting my URLs, since it just goes through a site's links and adds them that way.

The advantage of using a browser to set up your test plan is that it better simulates real traffic patterns on your site. Microsoft's Application Test Center [c-sharpcorner.com] does this, and JMeter has a proxy server [apache.org] similar to Sproxy.

When you're trying to replicate problems with a live site, however, it would seem more appropriate to me if you could base your test on real traffic to the site. I wrote a load testing tool once that used web logs to simulate the actual traffic patterns, but it was incomplete, mostly because web logs don't record POST data. A good stress tool could come with an Apache/IIS/Tomcat plugin that recorded traffic for use in stress testing.
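
Since no such plugin exists here, a rough standalone sketch of the log-replay idea follows (in Python; the host name and log path are placeholders). As noted above, POST requests have to be skipped because the request body never makes it into the access log.

import time
import urllib.request

HOST = "http://www.example.com"   # placeholder target
LOG = "access_log"                # Apache common/combined format log

with open(LOG) as fh:
    for line in fh:
        parts = line.split('"')
        if len(parts) < 2:
            continue
        request = parts[1].split()   # e.g. ['GET', '/index.html', 'HTTP/1.0']
        if len(request) < 2 or request[0] != "GET":
            continue                 # POST bodies aren't logged, so skip them
        path = request[1]
        start = time.time()
        try:
            urllib.request.urlopen(HOST + path).read()
        except Exception as err:
            print(path, "failed:", err)
            continue
        print("%.3fs %s" % (time.time() - start, path))

Replaying the log serially like this preserves the mix of URLs but not the original concurrency; a real tool would also honour the timestamps or fan the requests out over worker threads.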

Re:stress testing tools (1)

Kalak (260968) | more than 10 years ago | (#8657570)

When you're trying to replicate problems with a live site, however, it would seem more appropriate to me if you could base your test on real traffic to the site.

Assuming a standard apache common log format, you can just install siege, then run:

awk '{print "http://www.iddl.vt.edu"$7}' access_log>/usr/local/etc/urls.txt
Run siege using this file, and you have it running based on actual traffic on your site. I just threw this together, and initial testing shows that it can work this way.
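
If you want to be slightly pickier than the one-liner, here is a rough Python equivalent that also drops non-GET requests and error responses, so Siege only replays pages that actually existed. The site name is the one from the awk example above; the log and output paths are assumptions.

SITE = "http://www.iddl.vt.edu"   # same site as in the awk example

with open("access_log") as log, open("/usr/local/etc/urls.txt", "w") as out:
    for line in log:
        fields = line.split()
        if len(fields) < 9:
            continue
        method = fields[5].lstrip('"')   # field 6 is '"GET' in common log format
        path = fields[6]
        status = fields[8]
        if method == "GET" and status == "200":
            out.write(SITE + path + "\n")

Then run siege against the resulting urls.txt as before.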

'Fess up (0)

Anonymous Coward | more than 10 years ago | (#8645390)

Ok guys, time to confess. Who is sieging the RIAA website? It's been down for 5 days [com.com].

I don't know what's happening... (2, Funny)

Kopretinka (97408) | more than 10 years ago | (#8645739)

Why can't I load the article?

siege is only the first layer (2, Insightful)

drfrog (145882) | more than 10 years ago | (#8645882)

Stress testing based on concurrent users hitting a script, etc., is fine, but there are other things to make sure of, like the information being returned and the like.

Check out the Perl module called webtest, or something like that.
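
The point about checking the returned information is easy to illustrate: a functional check verifies the body of the response, not just that the server answered. A minimal sketch follows; the URL and expected string are placeholders.

import urllib.request

URL = "http://www.example.com/"   # placeholder
EXPECTED = "Welcome"              # text a healthy response must contain

body = urllib.request.urlopen(URL).read().decode("utf-8", "replace")
if EXPECTED in body:
    print("OK: expected content came back")
else:
    print("FAIL: the server answered, but the content is wrong")

Running checks like this while Siege is hammering the box tells you whether the application is still returning correct pages under load, not just 200s.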

Yeesh (1)

Jahf (21968) | more than 10 years ago | (#8646487)

Any author with even remotely geek-interesting material on their site, who isn't on a dedicated box with a full T3, and who doesn't reject any request referer'ed by /., is behind the times anyway.

Re:Yeesh (1)

thesaur (681425) | more than 10 years ago | (#8655929)

However, that cannot prevent an attack by Google. You wouldn't want to block requests referred by google.com, because you do want people to find your site, right?

As reported in a previous story [slashdot.org], Google linked their main logo graphic to an informational academic site and brought it down [swin.edu.au]. Subsequently, Slashdot hit it [swin.edu.au] too, but it didn't hold a candle to Google. Fortunately, such attacks by Google are rare. Of course, there is no way to determine your risk of a Google attack, unlike Slashdot attacks.

The best idea is to always keep your server ready to handle any load.

I'll probably get modded down for this, but so be it.

Re:Yeesh (1)

Jahf (21968) | more than 10 years ago | (#8657449)

Depends on whether you care if people see your site ... I know one guy who sends all traffic referred by Google, /., and a couple of other sites (he occasionally publishes tech goodness) to the Google-cached version of his page.
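
For the curious, the trick described above can be done with a rewrite rule keyed on the referer, or with something as small as this CGI-style sketch. The referrer list and cache URL are assumptions for illustration, not the actual setup being described.

import os

HEAVY_REFERRERS = ["slashdot.org", "google."]                          # assumed list
CACHE_URL = "http://www.google.com/search?q=cache:www.example.com/"   # assumed cache link

referer = os.environ.get("HTTP_REFERER", "").lower()
bounce = False
for host in HEAVY_REFERRERS:
    if host in referer:
        bounce = True
        break

if bounce:
    # Send the visitor to the cached copy instead of serving the page locally
    print("Status: 302 Found")
    print("Location: " + CACHE_URL)
    print()
else:
    print("Content-Type: text/html")
    print()
    print("<html><body>Normal page, served locally.</body></html>")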

Most people can't afford to keep their personal servers ready to handle 1% of the load that Google's image fiasco or 10% of a popular article on /. can throw at them.

Should those people be penalized by not being able to have their own site (rather than surrendering control to a bunch of web farm monkeys)? No; sites like /. and Google, which espouse the idea of being good net citizens on principle, should realize that they are often some of the worst net citizens out there.

Wow ... wasn't expecting it to turn into a rant, oh well :)

And who cares about being modded down? *laugh*

Re:Yeesh (2, Interesting)

Jahf (21968) | more than 10 years ago | (#8657551)

BTW, I doubt even the referer->GoogleCache mechanism would save most sites from the inadvertent DDoS that Google provided via that image link. Just more argument for Google and /. to be better citizens.

Perhaps /. could wait to publish a story until Google had it cached and then give the -option- in a user pref to allow links to be rewritten to the Google cache ...

Perhaps Google could add a new piece to the stale robots.txt standard, like "cache-link-only", so that Google would know the author was only interested in being in the Google engine if Google directed all links to its own cache for that particular site.

Both are opt-in programs that allow the rest of us to have good conscience when viewing tiny sites via links from beasts like Google and /.

BTW, I don't want people to get me wrong ... I might not have a -job- without /. or Google, since I use them for research and learning every day along with a host of other sites. I don't want them -gone-, I just want them to be a bit more responsible for their actions. To paraphrase J. Depp, they're "something like big dumb puppies" ... in this case we like to pet them and they're usually sweet, but sometimes they can bite the hand that pets them when they get overzealous.

brute force doesn't work well (0)

Anonymous Coward | more than 10 years ago | (#8647162)

Read the article. What is lacking from the article is consideration of business requirements and needs. Performance is always mediated by business requirements. Laying siege to a webserver tells you the breaking point, but more important than the breaking point is using stress testing as part of a deployment process and maintenance plan. Once you know the breaking point, you still need to measure the optimal performance and have a plan for acquiring new hardware. Sudden spikes are unpredictable, but you can configure modern webservers to deny connections temporarily until the load returns to normal. People serious about maintaining 99.99% uptime run regular stress tests and have well-documented plans for acquiring new hardware.

Web Testing should Include External Traffic (3, Insightful)

stecoop (759508) | more than 10 years ago | (#8648860)

One item this article didn't explicitly look at was the network saturation percentage.

Most servers can handle a greater load than the network can carry. To run a proper test, you would need to test from outside your routers and firewall. This means the test machines should be located outside of your local area network while testing, or at least a certain quantitative percentage of them should be, for statistical purposes.

Odds are most people are going to work within the LAN and lay Siege to their machines but forget that there is an outside world.
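
A crude way to see the difference is to time the same fetch from a machine inside the LAN and from one outside; the gap is roughly what the uplink, routers, and firewall cost you. A rough sketch, with a placeholder URL:

import time
import urllib.request

URL = "http://www.example.com/somepage.html"   # placeholder

start = time.time()
data = urllib.request.urlopen(URL).read()
elapsed = time.time() - start
print("%d bytes in %.2fs (%.1f KB/s)" % (len(data), elapsed, len(data) / 1024.0 / elapsed))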

hmmm... not what I expected (1)

SlowMovingTarget (550823) | more than 10 years ago | (#8649845)

I was expecting an article about stressing out a server by putting MS Exchange on it... But this is good too.

used to be a great tool for this (2, Interesting)

sempf (214908) | more than 10 years ago | (#8650143)

The best stress tester was from a company called Envive, which had a distributed-attack sort of focus, with server time and space all over the world. You write a script, and then you can watch the attack from a web browser. Proof positive that Siege is more popular, though - they went out of business.

ftp.heanet.ie (1)

bbrazil (729534) | more than 10 years ago | (#8654432)

http://www.linux.ie/pipermail/ilug/2004-January/009863.html [linux.ie]

Takes a bit to get into the discussion though

The relevant system has 2TB of data (6TB of space); max recorded throughput is 550Mb/s, with over 20,000 concurrent HTTP requests.

Mercury Interactive tools (1)

R33MSpec (631206) | more than 10 years ago | (#8663325)

As a test consultant working in all areas of automated testing (as opposed to manual 'tick the box' testing), I do most of my load or stress testing using the industry-standard tool LoadRunner [mercuryinteractive.com]. I've used all the other load testing tools and this is by far the best (albeit pretty expensive); for large-scale commercial projects, nothing even comes close.

Something good from Microsoft ... (1)

freddoM (765207) | more than 10 years ago | (#8664871)

... but they couldn't sell it, so they stopped development. The MS Web Application Stress Tool ("webtool") is worth a look. It's free and does a lot: http://www.microsoft.com/technet/itsolutions/intranet/downloads/webstres.mspx