
Are Long URLs Wasting Bandwidth?

ScuttleMonkey posted about 5 years ago | from the waste-away-and-build-a-bigger-pipe dept.

Networking 379

Ryan McAdams writes "Popular websites, such as Facebook, are wasting as much as 75 Mbit/sec of bandwidth due to excessively long URLs. A recent article over at O3 Magazine took a typical Facebook home page, looked at the traffic statistics from compete.com, and figured out the bandwidth savings if Facebook switched from URL paths that, in some cases, run over 150 characters in length to shorter ones. It looks at the impact on service providers of the wasted bandwidth from the subsequent GET requests for these excessively long URLs. Facebook is just one example; many other sites have similar problems, as do CMS products such as WordPress. It's an interesting approach to web optimization for high-traffic sites."

379 comments

Can they not use... (5, Insightful)

teeloo (766817) | about 5 years ago | (#27364267)

compression to shorten the URLs?

Re:Can they not use... (0, Informative)

Anonymous Coward | about 5 years ago | (#27364311)

Sure they can

TinyURL [tinyurl.com]

Re:Can they not use... (5, Funny)

dotgain (630123) | about 5 years ago | (#27364423)

No, they cannot use TinyURL (read: goatse, tubgirl et al.), thank you very much.

Re:Can they not use... (2, Informative)

Anonymous Coward | about 5 years ago | (#27364609)

That's not compression, that's hashing.
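Right; a quick Python sketch (the long URL below is made up) illustrates the difference: deflate barely shrinks one lone URL, while a TinyURL-style shortener just files the long URL under a short key and looks it up later.

```python
import hashlib
import zlib

long_url = ("http://www.facebook.com/home.php?ref=logo&filter=app_2309869772"
            "&fbid=1234567890&set=a.10150123456789012.345678.901234567")

# Compression: deflate has fixed overhead, so a single URL hardly shrinks.
compressed = zlib.compress(long_url.encode())
print(len(long_url), len(compressed))

# "Hashing": a shortener keeps a table mapping a short key to the original
# URL; nothing is compressed, the bytes just move to a server-side lookup.
table = {}
key = hashlib.sha1(long_url.encode()).hexdigest()[:6]
table[key] = long_url
short_url = "http://tiny.example/" + key
print(short_url)
```

The short URL only works because the service still stores (and must serve) every byte of the original.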

Re:Can they not use... (1, Interesting)

MoFoQ (584566) | about 5 years ago | (#27364823)

Because they get more requests than the number of unique URLs TinyURL (or whatever) can handle.

Better would be a different way of doing AJAX than GET: they could use POST and make sure gzip is on.

I think if they really put their minds to it, they could also implement client-side JSON compression using some of the JavaScript compression libraries that are out there (or use a simple Flash wrapper to do the dirty work).

Just throw a bunch of kiddies (or 21-year-olds) in a room and offer them free pizza/beer/whatever... it'll get done.

Re:Can they not use... (0)

Anonymous Coward | about 5 years ago | (#27364337)

compression to shorten the URLs?

Maybe, but removing redundant data will certainly help.

Re:Can they not use... (2)

Skal Tura (595728) | about 5 years ago | (#27364363)

The handshake for the compression and the packet headers would probably amount to more than the potential benefits; not worth the effort.

Re:Can they not use... (1)

corsec67 (627446) | about 5 years ago | (#27364431)

You mean something like mod_gzip?

That leaves only the URL in the request header; the rest should (already) be compressed by mod_gzip.

Re:Can they not use... (1)

tepples (727027) | about 5 years ago | (#27364451)

Take a page full of short URLs and a page full of long URLs. Run them both through mod_gzip. The page with short URLs will still probably come out smaller.
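That's easy to sanity-check with a sketch; the markup and URLs below are invented, and gzip stands in for mod_gzip:

```python
import gzip

def page(url: str, links: int = 50) -> bytes:
    # A toy page: the same anchor repeated, as in a feed full of links.
    body = "".join(f'<a href="{url}?item={i}">item {i}</a>\n' for i in range(links))
    return f"<html><body>{body}</body></html>".encode()

long_page = page("http://example.com/profile.php?id=1234567890&ref=nf&v=wall&viewas=0")
short_page = page("http://example.com/p/1a2b3c")

# The repeated long URL compresses well, but the short-URL page still wins:
# the first occurrence costs full literal bytes either way.
print(len(gzip.compress(long_page)), len(gzip.compress(short_page)))
```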

Re:Can they not use... (4, Informative)

jd (1658) | about 5 years ago | (#27364911)

Most of the time, yes, but then there's a question of trade-off. Small URLs are generally hashes and are hard to type accurately and hard to remember. On the other hand, if you took ALL of the sources of wastage in bandwidth, what percentage would you save by compressing pages vs. compressing pages + URLs or just compressing URLs?

It might well be the case that these big web services are so inefficient with bandwidth that there are many things they could do to improve matters. In fact, I consider that quite likely. Those times I've done web admin stuff, I've rarely come across servers that have compression enabled.

Re:Can they not use... (1)

unlametheweak (1102159) | about 5 years ago | (#27364561)

compression to shorten the URLs?

No, throttle these Web sites. Throttling is a more traditional approach to bandwidth management.

Re:Can they not use... (4, Funny)

truthsearch (249536) | about 5 years ago | (#27364577)

They should just move all the GET parameters to POST. Problem solved. ;)

Re:Can they not use... (1)

slummy (887268) | about 5 years ago | (#27364671)

That wouldn't be very UX-centric.

If pages continually POST to each other, hitting the browser's back button will display the annoying alert asking you to "Resend POST data".

Re:Can they not use... (1)

jd (1658) | about 5 years ago | (#27364949)

Then dump CGI-like syntax completely and use applets that send back data via sockets.

Wordpress has the option (5, Informative)

slummy (887268) | about 5 years ago | (#27364295)

WordPress by default allows you to configure URL rewriting. The default is set to something like: http://www.mysite.com/?p=1 [mysite.com].

For SEO purposes it's always handy to switch to the more popular format: http://www.mysite.com/2009/03/my-title-of-my-post.html [mysite.com].

Suggesting that we cut URLs that help Google rank our pages higher is preposterous.

Re:Wordpress has the option (0, Insightful)

Anonymous Coward | about 5 years ago | (#27364575)

Yeah, exactly.
And since I've read somewhere that Wordpress isn't the best CMS for a high-traffic site, it doesn't really matter too much.

Re:Wordpress has the option (1)

blanks (108019) | about 5 years ago | (#27364769)

The query strings/all the GET params are still being passed; they're just passed in a "visually" pleasant way for the user.

All the data is still there, meaning mod_rewrite doesn't help with the "bandwidth" issue at all. It just looks pretty.

Re:Wordpress has the option (0, Offtopic)

macraig (621737) | about 5 years ago | (#27364773)

Unless your blog endures tens of millions of page hits every day, TFA authors weren't even talking to you. Can you say n-o-n s-e-q-u-i-t-u-r?

Re:Wordpress has the option (-1, Offtopic)

slummy (887268) | about 5 years ago | (#27364961)

Yours obviously doesn't:

From your blog:

If this blog seems to be less than cohesive and entertaining, I have a confession to make: I tend to dump my rhetorical leftovers here. ... The primary reason is simple: why devote effort here when I have so few visitors and my writing at the other sites has a far larger guaranteed audience?

Try to be a bit less "reactive"; people have feelings.

Re:Wordpress has the option (1)

Kugrian (886993) | about 5 years ago | (#27364871)

Maybe one day soon Google will have some way to expand mysite.com/5sfg to mysite.com/my_title_of_my_post.html. Having said that, how much of the importance of pagerank (and similar techs) is based on the url rather than title tags or links to it?

WTF (-1, Flamebait)

Trahloc (842734) | about 5 years ago | (#27364301)

What, did the same people who want to ban black cars write this one up, too?

Re:WTF (0, Offtopic)

Skal Tura (595728) | about 5 years ago | (#27364395)

No, those guys wanting to ban black cars are saner people than the writers of this article ...

The black car thing at least is somewhat significant! For example, see when MythBusters tested white vs. black cars.

Who knows? (4, Funny)

esocid (946821) | about 5 years ago | (#27364313)

Are forums (fora?) like these wasting bandwidth as well by allowing nerds, like myself, to banter about minutiae (not implying this topic)? Discuss amongst yourselves.




Re:Who knows? (4, Insightful)

phantomfive (622387) | about 5 years ago | (#27364535)

Seriously. No one better tell him about the padding in the IP packet header. A whole four bits is wasted in every packet that gets sent. More if it's fragmented. Or what about the fact that HTTP headers are in PLAIN TEXT? Talk about a waste of bandwidth.

In reality, I think by watching one YouTube movie you've used more bandwidth than you will on Facebook URLs in a year.

Re:Who knows? (1, Funny)

PolygamousRanchKid (1290638) | about 5 years ago | (#27364579)

One man's waste is another man's treasure. Some say, "The world is my oyster." I say, "The world is my dumpster."

Wasted bandwidth, indeed.

Re:Who knows? (1)

jd (1658) | about 5 years ago | (#27364967)

I discussed it with myselves, but there was no agreement. Well, other than the world should use IPv6 or TUBA and enable multicasting by default.

Better way of doing it (4, Informative)

Foofoobar (318279) | about 5 years ago | (#27364315)

The PHPulse framework [phpulse.com] is a great example of a better way to do it. It uses a single variable sent for all pages, which it then sends to a database (rather than an XML page) where it stores the metadata on how all the pages interrelate. As such, it doesn't need to parse strings, it is easier to build SEO-optimized pages, and it can improve page load times tenfold over other MVC frameworks.

Depending on your viewpoint (5, Insightful)

markov_chain (202465) | about 5 years ago | (#27364331)

The short Facebook URLs waste bandwidth too ;)

Re:Depending on your viewpoint (1)

Dreen (1349993) | about 5 years ago | (#27364409)

I very much like this being Insightful =)

Re:Depending on your viewpoint (1, Funny)

Anonymous Coward | about 5 years ago | (#27364627)

I also like his username.

Re:Depending on your viewpoint (3, Informative)

FooBarWidget (556006) | about 5 years ago | (#27364587)

I've always found stories along the lines of "$ENTITY wastes $X amount of $RESOURCE per year" dubious. Given enough users who each use a piece of $RESOURCE, the total amount of used resources will always be large no matter how little each individual user uses. There's no way to win.

Re:Depending on your viewpoint (1)

jd (1658) | about 5 years ago | (#27364985)

For most users, anything they can access on Facebook is already present on 127.0.0.1.

Waste of effort (4, Interesting)

El_Muerte_TDS (592157) | about 5 years ago | (#27364339)

Of all the things that could be optimized, URLs shouldn't have a high priority (unless you want people to enter them manually).
I'm pretty sure their HTML, CSS, and JavaScript could be optimized way more than just their URLs.
But rather than simple sites, people often want them filled with crap (which nobody but themselves cares about).

P.S. That doesn't mean you shouldn't try to create "nice" URLs instead of incomprehensible ones like article.pl?sid=09/03/27/2017250

Re:Waste of effort (5, Insightful)

JCY2K (852841) | about 5 years ago | (#27364413)

Of all the things that could be optimized, URLs shouldn't have a high priority (unless you want people to enter them manually). I'm pretty sure their HTML, CSS, and JavaScript could be optimized way more than just their URLs. But rather than simple sites, people often want them filled with crap (which nobody but themselves cares about).

P.S. That doesn't mean you shouldn't try to create "nice" URLs instead of incomprehensible ones like article.pl?sid=09/03/27/2017250

To your P.S.: most of that is easily comprehensible. It was an article that ran today; only the 2017250 is meaningless in itself. Perhaps article.pl?sid=09/03/27/Muerte/WasteOfEffort would be better, but we're trying to shorten things up.

Re:Waste of effort (5, Interesting)

krou (1027572) | about 5 years ago | (#27364521)

Exactly. If they wanted to try to optimize the site, they could start by looking at the number of JavaScript files they include (8 on the homepage alone) and the number of HTTP requests each page requires. My Facebook page has *20* files getting included alone.

From what I can judge, a lot of their Javascript and CSS files don't seem to be getting cached on the client's machine either. They could also take a look at using CSS sprites to reduce the number of HTTP requests required by their images.

I mean, clicking on the home button is a whopping 726KB in size (with only 145 KB coming from cache), and 167 HTTP requests! Sure, a lot seem to be getting pulled from a content delivery network, but come on, that's a bit crazy.

Short URIs are the least of their worries.

Irrelevant (5, Insightful)

Skal Tura (595728) | about 5 years ago | (#27364345)

It's an irrelevantly small portion of the traffic. At the scale of Facebook it could save some traffic, but it doesn't make any impact on the bottom line worth the effort!

A 150-character URL = 150 bytes, vs. 50 KILObytes + images for the rest of the pageview ...

I'm estimating that 50 kilobytes for the full page text, but a pageview often runs at over 100 KB.

So it's totally irrelevant if they can shave a whopping 150 bytes off the 100 KB.

Re:Irrelevant (1)

CannonballHead (842625) | about 5 years ago | (#27364551)

ya. i hav better idea. ppl shuld just talk in txt format. saves b/w. and whales. l8r

Seriously, though, I don't exactly get how a shorter URL is going to Save Our Bandwidth. Seems like making CNET articles that make you click "Next" 20 times into one page would be even more effective. ;)

The math, for those interested:

So to calculate the bandwidth utilization we took the visits per month (1,273,004,274) and divided it by 31, giving us 41,064,654. We then multiplied that by 20, to give us the transfer in kilobytes per day of downstream waste, based on 20k of waste per visit. This gave us 821,293,080, which we then divided by 86,400, the number of seconds in a day. This gives us 9,505 kilobytes per second, but we want it in kilobits, so we multiply it by 8, giving us 76,040; finally we divide that by 1,024 to give us the value in Mbit/sec: 74 Mbit/sec. One caveat with these calculations is that we do not factor in gzip compression. Using gzip compression, we could safely reduce the bandwidth-wasting figures by about 50%. Browser caching does not factor into the downstream values, as we are calculating the waste just on the HTML file. It could impact the upstream usage, as not all objects may be requested with every HTML request.
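Taking the monthly visits figure as 1,273,004,274 (the number implied by 41,064,654 per day times 31), the quoted arithmetic replays like this:

```python
# Reproduce the article's back-of-the-envelope numbers.
visits_per_month = 1_273_004_274
visits_per_day = visits_per_month // 31      # 41,064,654
kb_wasted_per_day = visits_per_day * 20      # their assumption: 20 KB of waste per visit
kb_per_sec = kb_wasted_per_day / 86_400      # seconds in a day
kbit_per_sec = kb_per_sec * 8
mbit_per_sec = kbit_per_sec / 1024
print(round(mbit_per_sec, 1))                # ~74.3 Mbit/sec
```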

Re:Irrelevant (3, Interesting)

Skal Tura (595728) | about 5 years ago | (#27364749)

So to calculate the bandwidth utilization we took the visits per month (1,273,004,274) and divided it by 31, giving us 41,064,654. We then multiplied that by 20, to give us the transfer in kilobytes per day of downstream waste, based on 20k of waste per visit. This gave us 821,293,080, which we then divided by 86,400, the number of seconds in a day. This gives us 9,505 kilobytes per second, but we want it in kilobits, so we multiply it by 8, giving us 76,040; finally we divide that by 1,024 to give us the value in Mbit/sec: 74 Mbit/sec. One caveat with these calculations is that we do not factor in gzip compression. Using gzip compression, we could safely reduce the bandwidth-wasting figures by about 50%. Browser caching does not factor into the downstream values, as we are calculating the waste just on the HTML file. It could impact the upstream usage, as not all objects may be requested with every HTML request.

roflmao! I should've RTFA!

This is INSULTING! Who could eat this kind of total crap?

Where the F are the Slashdot editors?

Those guys just decided the per-visit waste is 20 KB? No reasoning, no nothing? Plus, they only looked at visits, not pageviews ... Uh, 1 visit = many pageviews.

So let's do the right math:
41,064,654 visits.
A site like Facebook probably has around 30 or more pageviews per visit; let's settle for 30.

1,231,939,620 pageviews per day.

Average URL length of 150, which could be shortened to 50: 100 bytes saved per pageview.

123,193,962,000 bytes of waste, or 120,306,603 KB per day, or 1,392 KB per sec.

In other words:
1,392 * 8 = 11,136 Kbps = 10.875 Mbps.

100 Mbps guaranteed costs $1,300 a month ... they are wasting a whopping $130 a month on long URLs ...
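The corrected version replays the same way; the 30 pageviews per visit and 100 bytes saved per pageview are the poster's own assumptions:

```python
visits_per_day = 41_064_654
pageviews_per_day = visits_per_day * 30        # 1,231,939,620 (assumed 30 views/visit)
bytes_saved_per_day = pageviews_per_day * 100  # 150-char URL shortened to 50 chars
kb_saved_per_sec = bytes_saved_per_day / 1024 / 86_400
mbps = kb_saved_per_sec * 8 / 1024
print(round(mbps, 2))                          # roughly 10.9 Mbps, not 74
```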

So, TFA is total bullshit.

Re:Irrelevant (0, Troll)

Chyeld (713439) | about 5 years ago | (#27364811)

ya. i hav better idea. ppl shuld just talk in txt format. saves b/w. and whales. l8r

times 17.3.84 bb speech malreported africa rectify

times 19.12.83 forecasts 3 yp 4th quarter 83 misprints verify current issue

times 14.2.84 miniplenty malquoted chocolate rectify

times 3.12.83 reporting bb dayorder doubleplusungood refs unpersons rewrite
fullwise upsub antefiling

Re:Irrelevant (0)

Anonymous Coward | about 5 years ago | (#27364861)

Yes, you stole my idea.

Seriously, this article is ridiculous. Premature optimization is the root of all evil.

Re:Irrelevant (0)

Anonymous Coward | about 5 years ago | (#27364887)

Consider caching. Your browser is going to ask Facebook whether or not it has a newer version of an image file every time it loads a page with that file, even when the file is not out of date and all the Facebook server sends back is the last modification date. In that scenario a very long URL could actually be the largest part of the HTTP transaction.
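That's easy to quantify; a sketch counting the bytes of a revalidation request for a made-up Facebook-style URL:

```python
# A hypothetical long photo URL, in the style TFA complains about.
path = ("/photo.php?fbid=10150123456789012&set=a.10150123456789012.345678"
        ".901234567&type=1&ref=nf&theater")

# A minimal conditional GET, as sent when revalidating a cached object.
request = (
    f"GET {path} HTTP/1.1\r\n"
    "Host: www.facebook.com\r\n"
    "If-Modified-Since: Fri, 27 Mar 2009 20:17:25 GMT\r\n"
    "\r\n"
)
url_share = len(path) / len(request)
# The URL is roughly half of the whole request in this case.
print(len(request), round(url_share, 2))
```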

Facebook? Go after Twitter. (2, Interesting)

Anaplexian (101542) | about 5 years ago | (#27364353)

Twitter clients (including the default web interface) auto-TinyURL every URL put into them. Clicking on a link involves not one but *2* HTTP GETs and one extra round trip.

How long before tinyurl (and bit.ly, ti.ny, wht.evr...) are cached across the internet, just like DNS?

Most likely insignificant (3, Informative)

nysus (162232) | about 5 years ago | (#27364367)

This is ridiculous. If I have a billion dollars, I'm not going to worry about saving 50 cents on a cup of coffee. The bandwidth used by these urls is probably completely insignificant.

Re:Most likely insignificant (1)

Psychotria (953670) | about 5 years ago | (#27364515)

That's a funny way to look at it. If I save 50 cents a day on my cup of coffee, I will have another billion dollars in just 5,479,452 years (roughly). And that's excluding compound interest!

Re:Most likely insignificant (1, Offtopic)

jd (1658) | about 5 years ago | (#27365001)

Just how interesting are the compounds in coffee, anyway?

Re:Most likely insignificant (4, Interesting)

scdeimos (632778) | about 5 years ago | (#27364565)

I think the O3 article and the parent have missed the real point. It's not the length of the URLs that's wasting bandwidth, it's how they're being used.

A lot of services append useless query parameter information (like "ref=logo" etc. in the Facebook example) to the end of every hyperlink instead of using built-in HTTP functionality like the HTTP-Referer client request headers to do the same job.

This causes proxy servers to retrieve multiple copies of the same pages unnecessarily, such as http://www.facebook.com/home.php [facebook.com] and http://www.facebook.com/home.php?ref=logo [facebook.com], wasting internet bandwidth and disk space at the same time.
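A proxy (or the site itself) could avoid those duplicates by normalizing cache keys; a sketch, where the set of decorative parameters to strip is an assumption:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

TRACKING_PARAMS = {"ref"}  # assumed decorative-only parameters, like ref=logo

def cache_key(url: str) -> str:
    """Canonicalize a URL by dropping tracking-only query parameters."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))

a = cache_key("http://www.facebook.com/home.php")
b = cache_key("http://www.facebook.com/home.php?ref=logo")
print(a == b)  # both variants collapse to one cache entry
```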

Re:Most likely insignificant (2, Insightful)

XanC (644172) | about 5 years ago | (#27364999)

You can't ever rely on the HTTP-Referer header to be there. Much of the time, it isn't; either the user has disabled it in his browser, or some Internet security suite strips it, or something. I'm amazed at the number of sites that use it for _authentication_!

Re:Most likely insignificant (0)

Anonymous Coward | about 5 years ago | (#27364635)

Yeah, it'd be far more worthwhile to optimize the webpage as a whole. Why try to optimize the 1% of the data transfer when they can optimize the part that really matters?

This is of course assuming the bandwidth cost savings would be worth the cost of assigning people to optimize and test it, which I doubt.

But your analogy is pretty bad, since saving $0.50 can mean a lot when you're selling in the millions. One million sales of coffee (for a large chain) would mean a difference of $500,000.

Bandwidth is a bit different, where you have to optimize quite a bit to recoup the cost of the assigned man-hours, due to low bandwidth costs.

Really? (1)

kenh (9056) | about 5 years ago | (#27364375)

How many times are the original pages called? Is this really the resource hog?

What about compressing images, trimming them to their final display resolution?

How about banishing the refresh tags that cause pages to refresh while otherwise inactive? Drudgereport.com is but one example where the page refreshes unless you browse away from it...

If you really want to cut down on bandwidth usage, eliminate political commenting and there will never be a need for Internet2!

Wow. Just wow. (3, Informative)

NerveGas (168686) | about 5 years ago | (#27364379)

75 whole freaking megabits? WOWSERS!!!!

They must be doing gigabits for images, then. Complaining about the URLs is like complaining about the 2 watts your wall-wart uses when idle, all the while running a 2 kW air conditioner.

Re:Wow. Just wow. (1)

Guysmiley777 (880063) | about 5 years ago | (#27364585)

Typical half-assed slack-alism. HEY! If I take a really small number and multiply it by a REALLY HUGE number, I get a REALLY BIG NUMBER! The end is nigh! Panic and chaos!!!

Mental Masturbation (5, Insightful)

JWSmythe (446288) | about 5 years ago | (#27364383)

    This is a stupid exercise. Oh my gosh, there's an extra few characters wasted. They're talking about 150 characters, which would be 150 bytes, or (gasp) 0.150KB.

    10 times the bandwidth could be saved by removing a 1.5KB image from the destination page, or doing a little added compression to the rest of the images. The same can be said for sending out the page itself gzipped.

    We did this exercise at my old work. We had relatively small pages. 10 pictures per page, roughly 300x300, a logo, and a very few layout images. We saved a fortune in bandwidth by compressing the pictures just a very little bit more. Not a lot. Just enough to make a difference.

    Consider taking 100,000,000 hits in a day. Bringing a 15KB image to 14KB would be .... wait for it .... 100GB per day saved in transfers.

    The same can be said for conserving the size of the page itself. Badly written pages (and oh are there a lot of them out there) not only take up more bandwidth because they have a lot of crap code in them, but they also tend to take longer to render.

    I took one huge badly written page, stripped out the crap content (like, do you need a font tag on every word?), cleaned up the table structure (this was pre-CSS), and the page loaded much faster. That wasn't just the bandwidth savings, that was a lot of overhead on the browser where it didn't have to parse all the extra crap in it.

    I know they're talking about the inbound bandwidth (relative to the server), which is usually less than 10% of the traffic. Most of the bandwidth is wasted in the outbound bandwidth. That's all anyone really cares about. Server farms only look at outbound bandwidth, because that's always the higher number, and the driving factor of their 95th percentile. Home users all care about their download bandwidth, because that's what sucks up the most for them. Well, unless they're running P2P software. I know I was a rare (but not unique) exception, where I was frequently sending original graphics in huge formats, and ISO's to and from work.

Re:Mental Masturbation (2, Informative)

Skal Tura (595728) | about 5 years ago | (#27364487)

It's actually not even 0.15 KB, it's 0.146 KB >;)

And 100 mil hits, 1 KB saved = 95.36 GB saved.

You mixed up marketing kilos and in-use computer kilos, gigas, etc. 1 KB !== 1000 bytes; 1 KB === 1024 bytes :)
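Both corrections check out numerically:

```python
# 150 bytes expressed in binary kilobytes.
print(round(150 / 1024, 3))     # 0.146, not 0.150

# 100 million hits, 1 KB (1024 bytes) saved per hit, in binary gigabytes.
kb_saved = 100_000_000
gb_saved = kb_saved * 1024 / 1024**3
print(round(gb_saved, 2))       # 95.37, not 100
```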

Re:Mental Masturbation (1)

JWSmythe (446288) | about 5 years ago | (#27364763)

    Nah, I just never converted the KB (Bytes) of file size and string size (8 bit characters are 1 byte), so I never converted it down to the Kb/s (kilobits per second) for bandwidth measurement. :)

Re:Mental Masturbation (0)

Anonymous Coward | about 5 years ago | (#27364601)

"So on a single day, the folks over at facebook have wasted roughly 783GB downstream and 469GB upstream."

Re:Mental Masturbation (1)

drinkypoo (153816) | about 5 years ago | (#27364649)

While you have a good point, your argument can be summed up as "I've already been shot, so it's okay to stab me."

Re:Mental Masturbation (1)

JWSmythe (446288) | about 5 years ago | (#27364789)

    Naw, it's more like, I'd rather be poked with that blunt stick than shot with a cannon. :)

Re:Mental Masturbation Try the new ebay (2, Insightful)

thoglette (74419) | about 5 years ago | (#27364921)

Badly written pages (and oh are there a lot of them out there) not only take up more bandwidth because they have a lot of crap code in them, but they also tend to take longer to render.

ebay has "upgraded" their local site http://my.ebay.com.au/ and "my ebay" is now a 1-megabyte download. That's ONE MILLION BYTES to show about 7 KB of text and about 20 2-KB thumbnails.

The best bit is that the HTML file itself is over half a megabyte. Then there are two 150 KB+ JS files and a 150 KB+ CSS file.

Web "designers" should be forced to develop on a 128 MB P3 machine with a VGA screen and a dial-up modem.

Re:Mental Masturbation (1)

value_added (719364) | about 5 years ago | (#27364953)

This is a stupid exercise. Oh my gosh, there's an extra few characters wasted. They're talking about 150 characters, which would be 150 bytes, or (gasp) 0.150KB.

Perhaps, but I'm reminded of the time when I started getting into the habit of stripping Unsubscribe footers (and unnecessarily quoted Unsubscribe footers) from all the mailing lists (many high-volume) that I subscribed to. During testing, I found the average mbox was reduced in size by between 20 and 30%.

If you accept the premise that waste is waste, then it's only a matter of perspective: just because something doesn't affect you personally doesn't mean it doesn't affect someone else.

Overlong URLs may not waste bandwidth to any great degree, but they sure as hell are wasteful in other respects, if not outright idiotic. Or has no one here ever had to copy/paste a URL?

what is that as a proportion? (1)

wjh31 (1372867) | about 5 years ago | (#27364387)

But how much is that as a proportion of their total bandwidth usage? If they were worried about bandwidth, I'm sure they could just compress the images a little more and save much more.

tag: dropinthebucket (4, Insightful)

RobertB-DC (622190) | about 5 years ago | (#27364407)

Seriously. Long URLs as wasters of bandwidth? There's a Flash animation ad running at the moment (unless you're an ad-blocking anti-capitalist), and I would expect it uses as much bandwidth when I move my mouse past it as a hundred long URLs.

I'm not apologizing for bandwidth hogs... back in the dialup days (which are still in effect in many situations), I was a proud "member" of the Bandwidth Conservation Society [blackpearlcomputing.com], dutifully reducing my .jpgs instead of just changing the Height/Width tags. My "Wallpaper Heaven" website (RIP) pushed small tiling backgrounds over massive multi-megabyte images. But even then, I don't think a 150-character URL would have appeared on their threat radar.

It's a drop in the bucket. There are plenty of things wrong with 150-character URLs, but bandwidth usage isn't one of them.

Re:tag: dropinthebucket (1)

Skal Tura (595728) | about 5 years ago | (#27364533)

lol, I used to run Wallpaper Haven :)

People came to me complaining "it's not Haven, it's Heaven!" Ugh ... they didn't know what "haven" means :D

Re:tag: dropinthebucket... Hmmm, i was thinking (1)

davidsyes (765062) | about 5 years ago | (#27364853)

...

I am wondering if this is more about exploiting the fact that such long and exacting URLs might serve as a form of security through obscurity...

5kb per typed page (1)

TinBromide (921574) | about 5 years ago | (#27364417)

If you type a full page (no carriage returns) into Notepad and save it, you end up with about 5 KB per printed page at the default font/print settings. When was the last time a web page designer cared about 5 KB? If 150 bytes (yes, 150 chars) is a concern, trim back on the dancing babies and MP3 backgrounds before you get rid of the ugly URLs.

Besides, if not for those incredibly long and in-need-of-shortening URLs, how else would we be able to feed Rick Astley's music video YouTube link into TinyURL and expect people to click it, thinking it's a real URL?

Re:5kb per typed page (2, Interesting)

Overzeetop (214511) | about 5 years ago | (#27364867)

Actually, when I had my web page designed (going on 4 years ago), I specifically asked that all of the pages load in less than 10 seconds on a 56k dialup connection. That was a pretty tall order back then, but it's a good standard to try to hit. It's somewhat more critical now that there are more mobile devices accessing the web, and the vast majority of the country won't even get a sniff of 3G speeds for more than a decade. There is very little that all the fancy programming we can put into pages really adds. Mostly, I (and my clients who need to find me) want information, and one of the best ways is simply readable text with static pictures. For the web, you can really compress the heck out of an image and still have it look crisp on a monitor.

more like CSS, Javascript, and (0)

Anonymous Coward | about 5 years ago | (#27364439)

umpteen ad links, crapromedia, and god knows what else. Seriously, complaining about URL size is like urinating into the sea and then saying it's affecting the tide level.

Compared to what? (1)

dmomo (256005) | about 5 years ago | (#27364441)

What's the percentage savings? Is it enough to care about, or is it just another fun fact?

Simplifying / minifying / consolidating JavaScript and reducing the number of sockets required to load a page would probably be more bang for the buck. Is it worth worrying about?

absolute number and 'wasted' (1)

fermion (181285) | about 5 years ago | (#27364445)

First, absolute numbers mean nothing. It's like comparing the $200 million for some wasted federal program to the $20 I waste on coffee over an equal period, without knowing the percent of total, or how much it would really save, or even if the problem can be fixed. As it is, this is just random showboating; perhaps interesting in a theoretical sense if the math is correct, but given the absolutes I doubt the author can really do the math correctly.

Second, define 'waste'. Most rational people would argue that Facebook is itself a waste of bandwidth, and that getting rid of it would leave more bandwidth for what people really want, which is p0rn, unless the rumors reported in the previous article are true, which is that Facebook is really about such amateur barely-legal material.

But even if we assume that Facebook is waste, the percentage of bandwidth used is probably not excessive given its entertainment value. I mean, it would be like getting rid of the Department of Homeland Security. Sure, it would lower the taxes we pay by 2%, but don't we already have enough unemployed executives complaining about how hard it is to live on a $1 million severance package?

Long urls good for search rankings (0)

Anonymous Coward | about 5 years ago | (#27364473)

The reason many sites have long URLs like that is so they can be explicit and do better in Google search rankings. As long as Google values the actual URL for search rankings, URLs will remain long.

Its all about the evolution of "it" (1)

Rooked_One (591287) | about 5 years ago | (#27364519)

it goes in cycles... you get better hardware, then you saturate it with software. Then you get better software and you saturate it with hardware.

Currently, we can apply said metaphor to internet connections. We started with JPEGs over low-baud modems. We then moved on to video we needed to download, and they upped it to cable. Now we're at the point where fiber to the house is going to be needed in most situations.

Think how we've moved from dumb terminals to workstations, and now we're using more dumb terminals (i.e., VMs), and it will just keep cycling.

At this point... (1)

MatthewAnderson (1005607) | about 5 years ago | (#27364531)

At this point we may as well start harping on engineers about TCP/IP packet overhead if we're concerning ourselves with this water under the bridge...

I can top that. Try the Globe and Mail! (5, Interesting)

Anonymous Coward | about 5 years ago | (#27364553)

For an even more egregious example of web design / CMS fail, take a look at the HTML on this page [theglobeandmail.com].

$ wc wtf.html
12480 9590 166629 wtf.html

I'm not puzzled by the fact that it took 166 kilobytes of HTML to write 50 kilobytes of text. That's actually not too bad. What takes it from bloated into WTF-land is the fact that that page is 12,480 lines long. Moreover...

$ vi wtf.html

...the first 1831 lines (!) of the page are blank. That's right, the <!DOCTYPE... declaration is on line 1832, following 12 kilobytes of 0x20, 0x09, and 0x0a characters: spaces, tabs, and linefeeds. Then there's some content, and then another 500 lines of tabs and spaces between each chunk of text. WTF? (Whitespace, Then Failure?)

Attention Globe and Mail web designers: When your idiot print newspaper editor tells you to make liberal use of whitespace, this is not what he had in mind!
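If you want to reproduce the arithmetic, here's a quick sketch in Python. The padding figures mirror the description above; the HTML itself is a stand-in, not the actual Globe and Mail page:

```python
# Recreate the described pattern: 1831 whitespace-only lines of padding
# before the DOCTYPE, then some token content. The figures mirror the
# numbers quoted above; the markup is a stand-in.
html = " \t\n" * 1831 + "<!DOCTYPE html>\n" + "<p>content</p>\n"

# Count the leading whitespace-only lines and the total whitespace bytes.
leading_blank = 0
for line in html.splitlines():
    if line.strip():
        break
    leading_blank += 1
ws_bytes = sum(c in " \t\n" for c in html)
print(leading_blank, ws_bytes)  # 1831 blank lines, ~5.4 KB of pure padding
```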

ffs (1)

stonedcat (80201) | about 5 years ago | (#27364589)

Yeah, that's it... URLs are wasting bandwidth... never mind the massive amounts of useless garbage on the Internet, no, it's definitely long URLs.

Yes, of course it's waste. (0)

Anonymous Coward | about 5 years ago | (#27364653)

If they were using that space for descriptive purposes (like long file names) there might be an arguable tradeoff, but most URLs are full of illegible encodings that mean nothing to anyone except the people managing the service. That's all fine, but why not encode all the info and send it in one fat lump? Most users, perhaps with the exception of some nerds who hang out here at /., don't navigate by editing the URL directly. They press the big shiny buttons like normal primates.

Customer bulletin (4, Funny)

kheldan (1460303) | about 5 years ago | (#27364675)

Dear Customer,
In order to maximize the web experience for all customers, effective immediately all websites with URLs in excess of 16 characters will be bandwidth throttled.

Sincerely,
Comcast

High Hanging Fruit? (0)

Anonymous Coward | about 5 years ago | (#27364681)

For laughs, run Yahoo's YSlow on the article. The site makes a stupid number of requests for CSS and JS, doesn't GZIP, doesn't use ETags, etc. They ignore almost every other bandwidth-saving technique, but at least their URLs are short!
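For what it's worth, gzip alone would paper over most of the long-URL "waste," since repetitive markup is exactly what DEFLATE handles well. A quick illustrative sketch (the URL pattern here is made up, not any real site's markup):

```python
# Illustrative only: a page full of long, repetitive URLs (a made-up
# pattern, not an actual site's markup) compresses extremely well with
# gzip, the same algorithm used for HTTP Content-Encoding.
import gzip

page = b"".join(
    b'<a href="http://www.example.com/profile.php?id=%d&ref=home&sk=wall">x</a>\n' % i
    for i in range(150)
)
packed = gzip.compress(page)
print(len(page), len(packed))  # the gzipped page is a small fraction of the original
```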

An elliptic assault on Net Neutrality. (2, Interesting)

tjstork (137384) | about 5 years ago | (#27364739)

Has anyone here even looked at what the real motivation behind this study is? It's to create this idea that web hosts are, surprisingly, wasting the valuable bandwidth provided by your friendly ISPs. Run a few stories like this over a few years, and suddenly having Comcast charge Google for the right to appear on Comcast somehow seems fair. The bottom line is, as a consumer, it's my bandwidth and I can do whatever I want with it. If I want to go to a web site that has 20,000-character URLs, then that's where I'm headed.

more bandwidth wasted by this thread! (0)

Anonymous Coward | about 5 years ago | (#27364759)

honestly, is this really an issue when people are streaming entire movies?

It's not the URL in the GET, it's URLs in the HTML (2, Insightful)

rbrome (175029) | about 5 years ago | (#27364841)

I hope this is obvious to most people here, but reading some comments, I'm not sure, so...

The issue is that a typical Facebook page has 150 links on it. If you can shorten *each* of those URLs in the HTML by 100 characters, that's almost 15KB you knocked off the size of that one page. Not huge, but add that up over a visit, and for each visit, and it really does add up.

I've been paying very close attention to URL length on all of my sites for years, for just this reason.
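The arithmetic is easy to check. A rough back-of-envelope in Python, using the figures above as illustrative assumptions rather than measured numbers:

```python
# Back-of-envelope using the parent's assumptions: 150 links per page,
# each URL trimmed by 100 characters. All figures are illustrative.
links_per_page = 150
chars_saved_per_link = 100

bytes_per_page = links_per_page * chars_saved_per_link
print(bytes_per_page / 1024)  # ~14.6 KB saved per page view

# Page views per second needed to account for the summary's 75 Mbit/s
# figure (before gzip, which would shrink the real number considerably).
pages_per_sec = (75e6 / 8) / bytes_per_page
print(pages_per_sec)  # 625.0
```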

Better idea (5, Funny)

Anonymous Coward | about 5 years ago | (#27364843)

Just use a smaller font for the URL!

Focusing on the wrong problem... (3, Insightful)

hrbrmstr (324215) | about 5 years ago | (#27364981)

Isn't Facebook itself the huge waste of bandwidth as opposed to just the verbose URLs it generates?

It's on static pages too... (0)

Anonymous Coward | about 5 years ago | (#27365003)

I used to be the sysadmin for a public high school. The school's website was 100% static pages, and the Webmaster/Web design teacher was thoroughly incompetent. She pretty much read "Web Design for Dummies" and used Macromedia Suite MX to design the worst and slowest possible Java or Flash crap. Poor layout too--it looks like MySpace was rewritten by teenagers.

The school website URL was kind of long to begin with: http://www.school.county.k12.fl.us/ [k12.fl.us]

Here's where it got fun. First, she could not comprehend the concept of relative paths, so every single link was an absolute path.

For the school calendar, I wanted to use http://www.school.county.k12.fl.us/calendar [k12.fl.us]. She could not have any of that, and insisted on http://www.school.county.k12.fl.us/DailyUpdates/calendar/calendar/calendar.htm [k12.fl.us]. Her argument was that users should not memorize addresses of things they go to frequently--they should go to the main page and link through.

My personal favorite URL of hers? http://www.school.county.k12.fl.us/StudentParentInfo/PhoneList/PhoneList08-09.htm [k12.fl.us].

Every attempt I made to organize the webspace was met with her hysterically screaming and making it a mess again. She also insisted on uploading .psd files along with their resultant .jpg files--her "new and improved" website started at 900 MB and grew to 40 GB in three years.
