New Breed Of Web Accelerators Actually Work
axlrosen writes "Web accelerators first came around years ago, and they didn't live up to the hype. Now TV commercials are advertising accelerators that speed up your dial-up connection by up to 5 times, they say. AOL and EarthLink throw them in for free; some ISPs charge a monthly fee. Tests by
PC World, PC Magazine and CNET show that they do speed up your surfing quite a bit. They work by using improved compression and caching. The downside is they don't help streaming video or audio." And they require non-Free software on the client's end, too.
Faster porn? (Score:5, Funny)
Re:Faster porn? (Score:3, Funny)
Re:Faster porn? (Score:3, Insightful)
Re:Faster porn? (Score:4, Informative)
2) If you're already using compression (stac, predictor, MPPC, etc.), this will make ZERO difference. The cache has to be on the near side of the slowest link -- which is the dialup user's modem. Now, in the instances where the ISP disables software compression -- like, for instance, the "idiots" at Bellsouth.Net who disable CCP to "speed up connection times" [exact words of the Cisco engineer who helped them set things up], even though the time it takes to connect and pass traffic is 100% modem training (for us ISDN users, all 3 of the 3 seconds it takes to connect are IPCP; I'll accept that, as they do tend to return the same IP most of the time) -- it'll help some.
3) A lot of what's moving around the internet isn't measurably compressible... GIFs, JPGs, MPEGs, zip files, etc. (I shall have to perform an analysis.)
You mean... (Score:3, Insightful)
Re:You mean... (Score:5, Informative)
Sounds pretty much like that... Which Apache already supports, and the major browsers already support, making something like this redundant.
Moreover, dialup modems already use a fairly high level of compression at the hardware layer. While not exactly "gzip -9" quality, you can only realistically squeeze another 10% out of those streams no matter how much CPU power you devote to the task.
Others have mentioned image recompression, which has traditionally used VERY poor implementations, nothing more than converting everything to a low-quality JPEG. I would point out that a more intelligent approach to image compression could yield a 2-3x savings without noticeable loss of quality (smoothing undifferentiated backgrounds, stripping headers, dropping the quality a tad (i.e., to 75-85%, not the 20-40% AOL tried to pass off as acceptable), downgrading the subsampling on anything better than 2:2, etc). But no, not a 5x savings.
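For what it's worth, here's roughly what that smarter recompression could look like, as a quick Python sketch using the Pillow imaging library (the file names, quality value, and subsampling choice are purely illustrative, not anything these products actually ship):

    # Hedged sketch: re-encode an image as JPEG at moderate quality with 4:2:0 subsampling.
    from PIL import Image

    def recompress_jpeg(src_path, dst_path, quality=80):
        im = Image.open(src_path).convert("RGB")   # JPEG has no alpha channel or palette
        # Saving without carrying the original EXIF over also strips the metadata.
        im.save(dst_path, "JPEG", quality=quality, optimize=True, subsampling=2)  # 2 = 4:2:0

    recompress_jpeg("banner.jpg", "banner_small.jpg")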
Re:You mean... (Score:3, Informative)
In fact, using mod_gzip I've experienced a very discernible speed-up in access and display over DSL!
RabbIT Proxy (Score:3, Interesting)
Speeds things up so much, it's not even funny. Although it does require that you have a machine on a decent, faster than dialup connection to make it work well.
Re:You mean... (Score:5, Informative)
Because v.42bis has a maximum compression ratio of 4:1 (MNP5 only does 2:1).
Now, for a file of all zeros, hey, I agree, you can do a lot better. So, how often do you download files containing nothing but zeros? For a typical text file, you might get better than 90% with gzip (while still only 75% from v.42bis). But from binary content? Very rarely better than 50%.
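If you want to see that difference yourself, a few lines of Python make the point (the sample inputs are arbitrary stand-ins: any large plain-text file versus any random or already-compressed blob will do):

    import os, zlib

    text = open("/etc/services", "rb").read()    # stand-in for a plain text file
    binary = os.urandom(len(text))                # stand-in for already-compressed data

    for name, data in (("text", text), ("binary", binary)):
        packed = zlib.compress(data, 9)
        print("%-6s %6d -> %6d bytes (%.1f%% saved)"
              % (name, len(data), len(packed), 100.0 * (1 - len(packed) / len(data))))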
In any case, web content consists of five basic types of information - Text, graphics, sound, multimedia (flash, MPEGs, AVIs, etc), and already-Zipped packages.
Of those, only the first benefits from any lossless method, and only the second really leaves any room for saving bits via lossy compression without horrible loss of quality at the same time. (Some of the fourth type could also possibly endure lossy compression, but it takes far too long to recompress on the fly.)
Unfortunately, text comprises the least bothersome (in terms of relative size) of all of those major types of web content.
Don't get me wrong, I fully encourage people to turn mod_gzip on in their Apache installs. But when a company hawks its product with claims that simply cannot occur in a normal web-browsing situation, I have to call foul.
I see only two situations where they could claim 5:1 compression - either VERY text-heavy material, such as reading something from Project Gutenberg, or stripping every possible non-critical image from a page. I already do the latter via my hosts file and a paranoid userContent.css, so what does that leave?
Hope you only like reading text, in which case, have you ever heard of "Gopher"?
Re:You mean... (Score:3, Funny)
>can do a lot better. So, how often do you download
>files containing nothing but zero?
Er, every time I visit Slashdot?
Re:You mean... (Score:2)
Now if they could just speed up online gaming (i.e. RTCW Enemy Territory) so I can keep up with the broadband users.....
Re:You mean... (Score:2)
Awwww boo hoo (Score:2, Insightful)
Well, why don't you go ahead and write some Free software to accomplish the same thing?
My GameCube requires non-Free software too.
Wahhhh
Re:Awwww boo hoo (Score:2, Insightful)
Re:Awwww boo hoo (Score:5, Insightful)
Different algorithms lend themselves better to different applications, so it seems to me a good accelerator would use a mix of algorithms based on MIME type.
I.e., is the source data formatted in 24-bit words? 16-bit words? 8-bit words? If you have 8-bit data you don't want to look at 16-bit chunks, because then the string "abacadaeafag" doesn't compress for you. Dictionary sizes and so on... Even format conversion - turn all those BMPs that dingbats put on their pages into PNGs or lossless JPEGs.
And as for the caching, it seems more like a prefetch than a squid-type cache. I.e., you request a page, the proxy at the ISP gets the page, compresses it on the fly, then sends it. Caching it locally is more of an advantage WRT latency, not throughput.
There are a lot of common-sense tricks you could use. And according to these articles, they work.
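A toy version of that pick-the-trick-by-content-type idea, in Python (the table entries and function are invented for illustration; a real accelerator proxy would key off the actual Content-Type header it sees):

    import zlib

    # Hypothetical strategy table: which transform suits which MIME type.
    STRATEGY = {
        "text/html":       "gzip",         # lossless; markup is highly redundant
        "text/css":        "gzip",
        "image/bmp":       "convert-png",  # lossless format conversion
        "image/jpeg":      "requantize",   # lossy: lower quality / coarser subsampling
        "application/zip": "pass",         # already compressed, leave it alone
    }

    def transform(mime_type, body):
        action = STRATEGY.get(mime_type,
                              "gzip" if mime_type.startswith("text/") else "pass")
        if action == "gzip":
            return zlib.compress(body, 9)
        # "convert-png" and "requantize" would hand off to an image library here.
        return body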
Mod this down overrated. (Score:3, Insightful)
The types of data that can be specialized are much fewer than you propose.
For example, bzip2: better suited for text, as text has a lot of localized second-order trends. However, it is computationally intensive and may not do well on a server appliance handling multiple connections.
PNG is better on (many) r
Re:Awwww boo hoo (Score:2)
Re: Awwww boo hoo (Score:5, Informative)
Running Squid [squid-cache.org] with a 256 MB RAM disk cache is all the speedup we need, and it does so without altering the data being fed from upstream.
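For anyone wanting to reproduce that, a minimal squid.conf along these lines should be close (the paths and sizes are examples and assume /mnt/ramdisk is a 256 MB tmpfs mount; check the Squid docs for your version):

    http_port 3128
    cache_mem 64 MB
    # Put the "disk" cache on a RAM disk so hits never touch a spindle.
    cache_dir ufs /mnt/ramdisk 200 16 256
    maximum_object_size 4096 KB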
For a few dollars more . . . (Score:2, Insightful)
Another class of people who can't get broadband (Score:3, Insightful)
You admit that a $200,000 setup fee [pineight.com] isn't "a few bucks more." Thank you; most people miss this.
But what about people who are so mobile that they need to be able to jack in and access the Internet from any of several locations, and they can't afford the price of a broadband subscription for each location? I was in just that situation for four years. Dial-up has the advantage of a last mile in almost every home in the States, brought to you by the Universal Service Tax, meaning that no matter whose house
Re:For a few dollars more . . . (Score:2)
Re:For a few dollars more . . . (Score:2)
I still don't understand (Score:5, Interesting)
because IIS's is garbage (Score:5, Interesting)
I turned mine off by accident once and got a phone call from the co-lo wanting to know why I was suddenly maxing out.
gotta love that 70% saving.
Re:because IIS's is garbage (Score:3, Informative)
I used to host Slashcode based sites. The default home page was about 50k. With mod_gzip, it literally got down to about 6k. Really sweet!
Re:because IIS's is garbage (Score:5, Insightful)
Re:because IIS's is garbage (Score:5, Interesting)
Actually, try downloading your page, copying it, gzipping the original, cleaning up the copy to your specs, gzipping that, and comparing the two file sizes. While you may kill a lot of text in the uncompressed version, I would strongly suspect you'll find that the gzipped version saved much less than you think.
Those "spacer gifs" that take up perhaps 100 bytes apiece in the original file (perhaps a bit generous) will compress away down to very little (if there are several near each other, they may literally compress down to a handful of bits after the first one), whereas the story text compresses much less well.
If you're compressing things, XML, CSS, and a lot of other things that look awfully redundant in plain text are suddenly downright bandwidth-efficient technologies, being dwarfed in their compressed representations by the plain-text payloads. This is one of the reasons that XML is fundamentally so cool: you get human readability, but for the very small effort of invoking gzip or similar compression technology, you also get something that is very nearly as bandwidth-efficient as possible, because compression technologies dynamically determine the best binary encodings for such messages (including their plain-text payloads), whereas supposedly "efficient" binary protocols may actually waste a lot of space. (Compressing both of them may equalize them, but the binary file, perversely, will still be "harder" to compress, even with nearly the same information in both files.)
How compression behaves is not necessarily intuitive.
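The effect is easy to see with a few lines of Python (the system word list is just a convenient stand-in for non-repetitive story text):

    import zlib

    spacers = b'<img src="spacer.gif" width="1" height="1" alt="">' * 40
    prose = open("/usr/share/dict/words", "rb").read()[:len(spacers)]

    print("repeated markup:", len(spacers), "->", len(zlib.compress(spacers, 9)), "bytes")
    print("ordinary text  :", len(prose), "->", len(zlib.compress(prose, 9)), "bytes")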
compression with PHP (Score:5, Informative)
If you have a recent version of PHP, you don't even need mod_gzip. Just put the following lines in your .htaccess file:
php_flag zlib.output_compression on
Does everything on the fly. I once had a shell script that would wget a URL with the Accept-Encoding: gzip header, and then wget it again without it and show the percent savings. It was fairly interesting to see which sites were using compression, and what the sites that weren't could have saved in bandwidth by using it.
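Something like that script is easy to redo in a few lines of Python, for anyone who wants to check their own sites (example.com is a placeholder; urllib won't decompress the gzip body on its own, so len() here reflects bytes on the wire):

    import urllib.request

    url = "http://example.com/"   # placeholder

    def wire_size(ask_for_gzip):
        req = urllib.request.Request(url)
        if ask_for_gzip:
            req.add_header("Accept-Encoding", "gzip")
        with urllib.request.urlopen(req) as resp:
            return len(resp.read())   # raw bytes as transferred

    plain, gz = wire_size(False), wire_size(True)
    print("plain: %d  gzip: %d  saving: %.0f%%" % (plain, gz, 100.0 * (1 - gz / plain)))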
R T F M (Score:3)
http://sourceforge.net/projects/mod-gzip/
Re:I still don't understand (Score:3, Informative)
--jeff++
Yeah, right! (Score:5, Insightful)
And I'll just bet that none of that software includes any popups, spyware or intrusive monitoring!
But really, why? (Score:5, Interesting)
Something like $10-20 monthly for "speedy" Earthlink dial-up, or an extra $10-20 slapped on my monthly cable bill for broadband? (Charter Communications, they suck anyways)
I guess if you need to read
Re:But really, why? (Score:2)
Not a niche market at all. Fairplay Communications [fpcc.net] offers this right now, and it's only an extra $5.00 a month.
Re:$10-20/mo marginal cost (Score:2)
Next thing... (Score:5, Funny)
Next thing they find out is the new generation of penis enlargement devices actually work, too...
Re:Next thing... (Score:2)
Re:Next thing... (Score:2)
Non-Free software? (Score:3, Insightful)
tradeoff (Score:4, Insightful)
And graphic compression's been done before too, since around AOL 3.0 or so. Most people turn it off because it makes pages look like crap.
Re:tradeoff (Score:3, Informative)
If you choose the highest speed (and hence the greatest compression), the image quality is downright poor. On PCMag
Doesn't sound lossless at all to me. If admins knew what they were doing, the HTML, CSS, etc. would already be compressed with mod_gzip or the little compression checkbox in IIS.
What the hell? (Score:5, Funny)
They aren't really that great. (Score:5, Insightful)
My former company was checking out NetAccelerator [netaccelerator.net] recently to resell to our clients.
These things are a joke. The primary performance increase comes from recompressing images into really nasty JPEGs. AOL was doing this years ago (and getting blasted for it). If you turn that off, the performance improvement is not even measurable.
Furthermore, you tend to get a lot of stale caches on your machine. Most browsers don't even get this right, so they add yet another layer of potentially buggy cache abstraction.
No, these things are junk. They act as proxy servers and their source is closed. How can you trust them to handle your data? Even with all their compression features turned on, the performance improvement is seriously overrated. Don't bother. You simply cannot get something for nothing in cases like these.
Now, what would improve the download speed of the web is if web designers would start building standards compliant markup. Many web sites have as much as 700kb overhead in markup from tools that create loads of font tags and their ilk. Pure XHTML + CSS layout would do a hell of a lot more to speed up the web than these scams. Of course, don't take my word for it--read Zeldman [zeldman.com].
Nasty JPEGs? (Score:5, Funny)
Re:They aren't really that great. (Score:2)
I am a big CSS fan (having recently finally relented and converted my websites to use it), but the implementations are still varied.
What's sad is that SO MANY pages have this extra WYSIWYG garbage code in 'em. I was recently looking at job postings (JSP/ASP/CGI/Perl/Java/Hire Me!) and damned near every one had 'must be able to hand code'. Interestingly, the higher end jobs (like more
Re:They aren't really that great. (Score:2)
What do I use? Ancient AOLpress and Visual Page. With some hand-tweaking for stuff neither is new enough to grok.
Re:They aren't really that great. (Score:2)
I once had to (per instructions from the boss) get a 20-page Word document (almost entirely a table) onto our website within an hour, in HTML format. Ashamedly, I used 'Save as HTML', but inserted a comment saying 'my boss made me do it' just in case anyone looked.
Re:They aren't really that great. (Score:4, Interesting)
Save As HTML woulda been my first thought too.. except knowing the kark that Word thinks is HTML, I'd probably do this instead: save as WordPerfect 5.1, then (assuming I didn't have WP available to cut the middleman) I'd run it thru one of the WP-to-HTML tools, which usually do pretty well on tables, then load and save in AOLpress to clean up artifacts and mismatched tags. And when it magically appears on the website well before the deadline, the boss thinks you're a genius.
Re:They aren't really that great. (Score:2)
No, we need web designers who actually have skills instead of being FrontPage operators.
Some of the absolute best websites on the net are written by hand by skilled designers that know what they are doing... and their sites are fast and clean (Html wise) while pushing the en
Re:They aren't really that great. (Score:2)
In my experience, standards-compliant HTML is *less* space-efficient than ad-hoc HTML. Compare
font size="+1"
vs.
font size=+1
or the extra trailing slash on atomic tags, or
I predict that at some point in the future it will become a crime, or a
Can't compress twice (Score:4, Interesting)
(I should check this out by timing various downloads, but I'm too lazy. Somebody else can prove me wrong!)
So why do JPEG files with "more" compression download faster? Because JPEG is a lossy format: when you increase the "compression" you're not encoding data more efficiently, you're throwing data away. Depending on the image, you can do this and still end up with something that looks the same. But push it far enough and you end up with crap.
Re:Can't compress twice (Score:4, Interesting)
Modem compressors work very, very poorly. This isn't just because the people who come up with them suck; there are fundamental problems with doing compression in the modem. In order to avoid introducing really horrible latency, you have to compress the data in fairly small chunks. You can't wait for 50k of data to arrive from the computer and compress it all at once. Yet any decent compression scheme will achieve better compression ratios on longer chunks of data than shorter ones in non-pathological cases. So you have a modem which is stuck trying to compress a hundred bytes at a time, and a web server which can compress all 100k of a page at once, and you have a significant difference in size. Also, gzip runs on a computer with a truly mind-boggling amount of number-crunching power available compared to a modem, which has a CPU just powerful enough to handle complicated commands like "ATH". With more CPU power, you can achieve better compression ratios.
In the end, modem hardware compression is basically a hack, and mostly a worthless one. There's a reason why everybody who distributes a file for download compresses it first, and it's not because it makes the file look prettier.
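The chunk-size point is easy to demonstrate with Python's zlib (the system word list is just a handy blob of semi-redundant text standing in for a 100k page):

    import zlib

    data = open("/usr/share/dict/words", "rb").read()[:100000]   # stand-in for a 100k page

    # Compress the whole thing in one pass, the way a server running mod_gzip can.
    whole = len(zlib.compress(data, 9))

    # Compress 100-byte chunks independently -- roughly the modem's situation.
    chunked = sum(len(zlib.compress(data[i:i + 100], 9))
                  for i in range(0, len(data), 100))

    print("one pass       :", whole, "bytes")
    print("100-byte chunks:", chunked, "bytes")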
Congratulations: you're clueless. (Score:3, Informative)
Wow. I've never encountered anyone who has spoken so confidently on a topic without knowing a damn thing about it. You are so absolutely wrong that the example you gave is the exact opposite of what you think it is. You've demonstrated precisely what
Compressing the already compressed? (Score:4, Interesting)
Re:Compressing the already compressed? (Score:2)
Re:Compressing the already compressed? (Score:2)
rproxy -- also actually works, and open source (Score:5, Informative)
Squid & mod_gzip (Score:4, Interesting)
Instead of having to traverse the Internet, with all the associated latency, pages are pulled locally - 1 hop away. Pages are also compressed.
A better way would be to figure out how to transfer pages via CVS, so only
Re:Squid & mod_gzip (Score:2, Interesting)
There is a mod_gzip for Apache...
Re:Squid & mod_gzip (Score:3, Interesting)
Re:Squid & mod_gzip (Score:3, Interesting)
Doh!
I meant rsync. I use it to sync websites on multiple servers.
Thanks.
Just remember (Score:5, Informative)
Re:Just remember (Score:2)
Re:Just remember (Score:2)
mod_gzip does not translate images into JPEGs and recompress them with a quality setting of 5-10%.
But true, for the most part, mod_gzip takes care of any plain-text compression, and most data on the web is already going to be compressed anyways if the author of the website is smart.
Re:Just remember (Score:5, Informative)
For a given representation these are all compressed. However, in all cases these use lossy compression, where you can degrade the quality of the final output and send a smaller bitrate over the wire. Want me to prove my point? Take your favorite CD-quality MP3 - let's say the track is 100 K. Now convert it to minimum quality - the file will be something like 20 K (if even that much)... you can still hear what is going on, but the quality will suck. You can do the same thing with the rest of the compressed formats as well.
Re:Just remember (Score:2)
Re:Just remember (Score:4, Insightful)
In many cases CPU power on the internet is free, while bandwidth is expensive and worth spending those free CPU cycles to save... Oh - how do YOU know that you are getting a degraded image anyway? The average idiot going through an ISP that does this only ever sees the internet this way.
Hooray... (Score:2, Interesting)
Caching and compression will only get you so far before lossiness (sp?) kicks in and you start getting garbage, or caching works so well you get the same page every time you load it.
Get on the bandwagon and chip in the money for broadband if you're looking to boost your speeds. If you can't get a/v any faster, really, what's the point?
Low-bandwidth main pages are becoming less and less prevalent, so it's not going to do much for you anyway, plus
Semi-real, maybe (Score:3, Interesting)
[Puts cynic's hat on]
The vendors mentioned in the PCWorld article seem to be treading dangerously close to copyright infringement by compressing other people's content on their servers to be pulled through their browser proxy.
NetZero and Earthlink apparently force you to use their proprietary internet-access layer, so how are we sure their extra-cost "Super" speed isn't just normal internet speed, and their "Base" speed isn't just slowed down by the interface layer?
[Takes cynic's hat off]
The only thing here that seems like it would be genuinely useful is HTML compression... surely there is/will-be an Open Source solution for this. Maybe a new MIME type, e.g. text/html.compressed? Then it could be implemented on both the browser and server side, and this would have far greater impact. This could be implemented either in the browser itself or in a lightweight proxy like Proxomitron. Anyone? Anyone?
Re:Semi-real, maybe (Score:2)
Re:Semi-real, maybe (Score:2)
Strange that it is used so little...
There are several different modules for compression with Apache (well, maybe *that* is the reason...). Some can compress everything, some can compress static content only, some store the compressed data so it does not need to be compressed every time, etc.
In PHP there is also an option to compress all PHP output, very useful when you run an application that outputs large tables.
Worth the effort (Score:2)
Re:Worth the effort (Score:2)
Even so... I'm not sure I believe these accelerators will do enough to be worthwhile, especially when my dialup tops out at 26k and I don't load images in the first place. And I don't like the idea of yet another layer of Stuff that can go wrong.
Re:Worth the effort (Score:2)
1: Dialup is available everywhere there is a phone line and is not dependent on a phys
Methods (Score:3, Informative)
Compression.. Now there's something! I have in the past used an ssh tunnel (with compression switched on) to my university's web proxy, and that sped up things quite a bit! Why isn't this switched on by default on my PPPoA connection? Doesn't apache handle gzip'ing these days? Doesn't seem to be used much, though.. This speed up might be less pronounced on dial-up links though, because POTS modems usually switch on compression anyway (again YMMV).
Some download accelerators simply download different chunks of the same file in multiple sessions from either one server (shouldn't matter - unless with roundrobin DNS) or even from mirrors (better!). That's quite effective as well, but we know this, and that's why we use bittorrent for big files, don't we?
But it has to be said.. Most download accelerators are just bloaty spyware and don't do *zilch* to help your download speed.. Feh!
Didn't AOL use to convert GIF graphics to their own, lossy,
Re:Methods (Score:3, Informative)
Wrong. Most of the larger residential ISPs probably do, but mine certainly doesn't, and of the last four ISPs I worked for, only two did any caching at all, and one of those only did caching in certain limited situations.
The downside is that whenever I've used an ISP's squid proxy, it slowed things down! Turning proxies off almost invariably helps speeds instead of hurting them.
Hogwash. I've been using a caching proxy server on my LAN for the pas
Xwebs (Score:2)
Re:Xwebs (Score:2)
Wow, what an amazing concept! (Score:5, Funny)
My browser and my modem with
Cache the Suckage (Score:3, Informative)
Basically, it proxied all requests through that ISP on port 80. If it found a request to an IP or sitename it had visited before, it tried to serve it out of cache. If it didn't, it proxied the result through and returned the results from the requested IP or sitename.
The problems:
The server had a difficult time with virtual hosting of any kind. About 4 out of 5 requests to a virtual host would go through. About 20% of the time, there was some critical piece of information that the cache server would mangle so that the vhost mechanism would be unable to serve the right data. This was a couple years ago, so bugfixes might have happened. Maybe.
The server definitely had a hard time with dynamic content that wasn't built with a GET url (thus triggering the pass-thru proxy). If the request was posted, encrypted, hashed, or referenced a server-side directive of some kind (server-side redirects were a nasty one), the cache would fail. A server-side link equating something like "http://www.server.net/redirect/" to a generated URL or dynamic content of some kind was the most frequent case we ran into. The server simply couldn't parse each and every HTTP request of every variety and try to decide whether it should pass-thru or not. I can't think of a logical way around this that wouldn't break any given implementation. Can you?
We used dynamically assigned IPs at the time, so proxy requests made from one PC were often returned erroneously to another if the IP changed between uses - say, after a modem hangup. This was a rare event, but I listened to at least one person complaining that he was getting someone else's Hotmail. The fix is either to blacklist sites from being cached - infeasible for every site that could possibly be requested - or to assign static IPs. DHCP broadband users may have similar problems, especially those who get new IPs every so often.
Finally, if something got corrupted on the cache server due to a disk error, stalled transfer, or some other reason, the server had little or no way to throw out the bad data. It would throw out data that it *knew* was corrupt due to unfinished downloads, etc., but oftentimes this check failed or data was assumed to be correct even when it wasn't. Everyone who requested the same piece of corrupt data got it. I had to answer this statement a few times: "I downloaded it on one computer connected to your ISP and got a bad download. I downloaded it on my other computer from the same ISP and got the same bad download. Then I connected to another ISP from the first computer and got a complete download. What's up wit' dat, yo?"
Cache servers are a bad idea. The very premise is to try to be an end-all be-all to everyone who uses them. There are bug-fixes for some of the problems, but no way to solve the essential problem: MOST data on the web is dynamic now. Using cache servers with dynamic data is inviting difficulty.
Re:Cache the Suckage (Score:2)
What? Client A establishes a TCP connection to the proxy server, then disconnects. Client B connects with client A's old IP, happens to initiate a connection to the proxy server with the exact same source port, and ignores the fact that the proxy server didn't successfully complete the build-up. The server doesn't seem to notice that it's gett
Caches are great. (Score:2)
Cache servers are NOT a bad idea, they are a GREAT idea, and for this reason they are in wide use. I don't know what cache engine you were using, but it sure sounds like it sucked. Cache eng
Re:Cache the Suckage (Score:3, Informative)
I worked at a local ISP who managed to get a demo for a cache server a while back. (I don't anymore.) The machine arrived. We plugged it in, and started to take tech calls.
Sounds like this was your mistake. You "managed to get a demo" speaks volumes. Sounds like an expensive proprietary product from a small company. If you had just downloaded Sq
Free Web Accelerators (Score:3, Informative)
Oh yeah? (Score:5, Funny)
A little bit of insight.. (Score:5, Funny)
Use faster web servers (Score:2, Interesting)
dumb, really dumb... (Score:2)
Re:dumb, really dumb... (Score:2)
http://news.com.com/2100-1038-5060321.html
http://groups.google.com/groups?q=%2Bcomcast+%2Bspeed+%2Bincrease&hl=en&lr=&ie=UTF-8&oe=UTF-8&selm=1JK3b.226372%24It4.108600%40rwcrnsc51.ops.asp.att.net&rnum=5
I believe that this is in part due to the pressure of Earthlink DSL service, which offers 1.5meg/128k (384k in some regions) for roughly $50 a month.
Both Comcast and Earthlink WILL lower their monthly rates
Tests by PC World, PC Magazine and CNET show ... (Score:5, Insightful)
Tests by PC World, PC Magazine and CNET show
These are the same magazines with full color, multi-page reviews of the new 0.025% faster hardware. They are the same magazines that review each micro$oft product and say that the TCO is lower than ever before. Take one look at any of their websites, and you will see:
These magazines are Advertisements
Taking anything from them seriously is like taking a presidential speech to be a serious economic discussion, or taking a realtor's web-site as gospel in the market.
Funny - just went to CNET.com to research my post, and guess what? Over 50% of the page is advertising. The rest is 'reviews' of which 100% have links to affiliate programs to purchase said hard/software and give a kickback to CNET.
They will try hard to sell anything, and get their commission. It's like they are the used car salesman of the internet - only everything is new and they don't look you in the eyes when lying to you.
Didn't modems already do this (Score:2)
Squid (Score:3, Interesting)
Of course when it's a new site chock full of graphics, or I'm doing binary downloads, I'm painfully aware of my modem's limitations. But for general surfing, sometimes it seems almost as good as the friends' broadband.
Speed up your site and cut bandwidth use right now (Score:3, Interesting)
One thing most of these proxies do is compress HTML files. HTML is highly redundant, so compression can save a lot of space. However, it's silly for the proxy to do the compressing. Instead, web site owners can do the compressing themselves! Transferring pages gzip-compressed is part of the standard. No special software is needed by end users. A 3:1 reduction in bytes transferred for your web pages (the HTML itself) is a reasonable minimum. The result is that you use less bandwidth and end users get a faster web site! Every mainstream browser supports this, and those browsers that don't support it will automatically get the uncompressed version. If you're using Apache, you'll want mod_gzip [schroepl.net] to automatically compress transfers. (You can fake the effect with MultiViews [apache.org], but it's a hassle to maintain two copies of every HTML file.)
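Once the module is loaded, the setup is typically just a handful of directives; something along these lines (directive names from memory, so double-check them against the mod_gzip docs linked above):

    mod_gzip_on Yes
    mod_gzip_dechunk Yes
    mod_gzip_minimum_file_size 500
    mod_gzip_item_include mime ^text/.*
    mod_gzip_item_exclude mime ^image/.*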
(Yes, I know I don't practice what I preach. I'm working on it.)
Why is this limited to dial-up? (Score:3, Insightful)
Free software for faster browsing (Score:3, Informative)
If you have a shell account on another machine, and that machine has access to a proxy server, then you can tunnel port 3128 or 8080 (common http proxy ports) through ssh. This makes browsing a lot quicker because there is only a single TCP/IP connection going over the modem link - you don't have to connect separately for each page downloaded. Unfortunately I found that while this gave very fast browsing for half an hour or so, eventually it would freeze up and the ssh connection would have to be killed and restarted. Perhaps this has been fixed with newer OpenSSH releases.
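For the record, the tunnel itself is a one-liner (the hostnames are placeholders; -C turns on ssh's own compression and -L forwards the local port):

    ssh -C -L 3128:proxy.example.edu:3128 you@shell.example.edu

Then point the browser's HTTP proxy at localhost:3128 and everything goes through the compressed tunnel.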
RabbIT [sourceforge.net] is a proxy server you can run on the upstream host which compresses text and images (lossily).
The author of rsync mentioned something about an rsync-based web proxy where only differences in pages would be sent, but I don't know if this program was ever released.
Evil... (Score:3, Interesting)
Results for you: You click on a link and you have it immediately - from harddisk cache.
Result for others: A major part of the bandwidth is wasted, everyone's connection gets slower.
Effects: Everyone installs accelerators to have the net working faster. Bandwidth usage jumps 10 or so times, prices rise, connection speed drops far below what was before the accelerators. Nobody gives up the evil accelerators because without them it goes even slower.
It's called a "social trap".
Re:Does anybody really care? (Score:5, Funny)
on
dialup.... ...
you....
insensitive
clod!
And my connection is wheezing just trying to post this!
Hey weren't you in my class? (Score:5, Funny)
went.....
to the.....
William.....
Shatner....
school of.....
acting.....
didn't you?
Useful for portable connections unlike DSL (Score:3, Informative)
Likewise I often find myself in some crappy hotel where the connection is so noisy I can barely squeeze a 14.4K connection out of it. I just want to check my web-based e-mail, not download the Encyclopedia Britannica,
so anything that can make a dialup work painlessly on common web pages is a good thing.
Re:Does anybody really care? (Score:2)
Then again, I do see AOL boards during Bundesliga games, which is another issue entirely : why would Germans want to use America On-Line?
Re:Get Broadband (Score:3, Funny)
Um, because I don't live where you live?
- MugginsM