
New Breed Of Web Accelerators Actually Work

axlrosen writes "Web accelerators first came around years ago, and they didn't live up to the hype. Now TV commercials are advertising accelerators that speed up your dial-up connection by up to 5 times, they say. AOL and EarthLink throw them in for free; some ISPs charge a monthly fee. Tests by PC World, PC Magazine and CNET show that they do speed up your surfing quite a bit. They work by using improved compression and caching. The downside is they don't help streaming video or audio." And they require non-Free software on the client's end, too.
This discussion has been archived. No new comments can be posted.


  • by DigitalNinja7 ( 684261 ) * on Wednesday September 10, 2003 @05:38PM (#6925297) Homepage
    Unfortunately, these caches store only the most accessed pages, so anything of any value to the Slashdot audience will be as slow as ever. But you can be sure your porn will be delivered at 5x the speed of your normal dial-up! (yawn)
    • by Anonymous Coward
      Are you saying Slashdotters don't value pron?
    • Re:Faster porn? (Score:3, Insightful)

      by onecrazyfoo ( 183660 )
      Actually, the way it was described to me is that the requested page is retrieved by the ISP's server, cached and compressed, then sent along to the client. With the compression they are able to get, that is much faster for the dial-up user. At least that is how it is supposed to work with Slipstream's product (which is what NetZero uses).
  • You mean... (Score:3, Insightful)

    by HungWeiLo ( 250320 ) on Wednesday September 10, 2003 @05:40PM (#6925320)
    "Web accelerators"...You mean highly-advanced technology like mod-gzip?
    • Re:You mean... (Score:5, Informative)

      by pla ( 258480 ) on Wednesday September 10, 2003 @06:12PM (#6925653) Journal
      "Web accelerators"...You mean highly-advanced technology like mod-gzip?

      Sounds pretty much like that... Which Apache already supports, and the major browsers already support, making something like this redundant.

      Moreover, dialup modems already use a fairly high level of compression at the hardware layer. While not exactly "gzip -9" quality, you can only realistically squeeze another 10% out of those streams no matter how much CPU power you devote to the task.


      Others have mentioned image recompression, which has traditionally used VERY poor implementations, nothing more than converting everything to a low-quality JPEG. I would point out that a more intelligent approach to image compression could yield a 2-3x savings without noticeable loss of quality (smoothing undifferentiated backgrounds, stripping headers, dropping the quality a tad (i.e., to 75-85%, not the 20-40% AOL tried to pass off as acceptable), downgrading the subsampling on anything better than 2:2, etc). But no, not a 5x savings.
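      A minimal sketch of the kind of image recompression described above, assuming the Pillow/PIL library is available; the filenames, quality value, and subsampling choice are illustrative only, not anything a specific accelerator is known to use:

      # Sketch: re-save an image as a moderately compressed JPEG instead of
      # the very low quality the accelerators use. Assumes Pillow is installed;
      # filenames and settings are illustrative.
      import os
      from PIL import Image

      def recompress_jpeg(src_path, dst_path, quality=80):
          img = Image.open(src_path).convert("RGB")   # JPEG has no alpha channel
          # quality ~75-85 and 4:2:0 chroma subsampling, per the suggestion above
          img.save(dst_path, "JPEG", quality=quality, optimize=True, subsampling=2)
          return os.path.getsize(src_path), os.path.getsize(dst_path)

      before, after = recompress_jpeg("original.jpg", "recompressed.jpg")
      print("saved %.0f%%" % (100.0 * (before - after) / before))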
      • Re:You mean... (Score:3, Informative)

        by Anonymous Coward
        Uh, no. mod_gzip will compress _much_ better than your modem. Your modem can only compress in small chunks, and so there is less redundancy to remove. Even small(ish) web pages can be compressed, as a whole, to a much, much smaller size than what your modem can.

        In fact, using mod_gzip I've experienced a very discernible speed-up in access and display over DSL!
      • RabbIT Proxy (Score:3, Interesting)

        by DA-MAN ( 17442 )
        Ah, you mean like the RabbIT proxy [khelekore.org]. Personally I run this on my box on a T1 and use it whenever I am stuck with nothing but dialup.

        Speeds things up so much, it's not even funny. Although it does require that you have a machine on a decent, faster than dialup connection to make it work well.
    • Yeah exactly. They don't speed up anyone's Internet 'connection'. They just compress text and some images. Faster downloads for web browsers is about the size of it.

      Now if they could just speed up online gaming (i.e. RTCW Enemy Territory) so I can keep up with the broadband users.....
    • Some of these do a little more than just gzip things coming down. One of the linked articles mentions that the server-side software recompresses images to minimize their size, giving you images that are smaller but lower quality. The article said you could right click to see the unmodified image.

  • Awwww boo hoo (Score:2, Insightful)

    by stratjakt ( 596332 )
    They require non-Free software?

    Well, why don't you go ahead and write some Free software to accomplish the same thing?

    My GameCube requires non-Free software too.

    Wahhhh

    • Re:Awwww boo hoo (Score:2, Insightful)

      by Anonymous Coward
      Apache/mod_gzip + Mozilla = free accelerated web content. Oh, and you can throw in squid if you want caching.
      • Re:Awwww boo hoo (Score:5, Insightful)

        by stratjakt ( 596332 ) on Wednesday September 10, 2003 @05:55PM (#6925478) Journal
        Nah, this is different altogether. Gzip is not the alpha and omega of compression.

        Different algorithms lend themselves better to different applications, so it seems to me a good accelerator would use a mix of algorithms based on MIME type.

        I.e., is the source data formatted in 24-bit words? 16-bit words? 8-bit words? If you have 8-bit data you don't want to look at 16-bit chunks, because then the string "abacadaeafag" doesn't compress for you. Dictionary sizes and blah blah blah... Even format conversion - turn all those BMPs that dingbats put on their pages into PNGs or lossless JPEGs.

        And as for caching, it seems to me like more of a prefetch than a Squid-type cache. I.e., you request a page, the proxy at the ISP gets the page, compresses it on the fly, then sends it. Caching it locally is more of an advantage WRT latency, not throughput.

        There are a lot of common-sense tricks you could use. And according to these articles, they work.
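        A minimal sketch of the "pick a compression strategy per MIME type" idea, in Python; the type sets and the fallback rule are illustrative, not drawn from any of the products discussed:

        # Sketch: choose how to compress a proxied response based on MIME type.
        # The type sets are illustrative; a real proxy would be far more thorough.
        import gzip

        ALREADY_COMPRESSED = {"image/jpeg", "image/gif", "image/png",
                              "video/mpeg", "audio/mpeg", "application/zip"}
        TEXT_LIKE = {"text/html", "text/css", "text/plain", "application/xml"}

        def compress_for_transfer(body, mime_type):
            """Return (payload, content_encoding) for a proxied response body."""
            if mime_type in ALREADY_COMPRESSED:
                return body, "identity"          # recompressing gains next to nothing
            if mime_type in TEXT_LIKE:
                return gzip.compress(body, 9), "gzip"
            # Unknown types: try gzip, but keep it only if it actually helps.
            candidate = gzip.compress(body, 9)
            return (candidate, "gzip") if len(candidate) < len(body) else (body, "identity")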

          • For the most part, gzip IS, in fact, the alpha and omega. gzip is king on byte-oriented data (it doesn't matter what size the words are, it's an nth-order entropy encoder, so they all turn into pseudo-symbols).

          The types of data that can be specialized are much fewer than you propose.
          For example, bzip: better suited for text as text has a lot of localized second order trends. However it is computationally intensive and may not do well on a server appliance over multiple connections.
          PNG is better on (many) r
      • If you have squid on a fast connection, you can have it do gzipping. That would give you the same benefits as these web accelerators unless I'm mistaken in my understanding of how they work---and it would be all free, on the servers and clients.
    • Re: Awwww boo hoo (Score:5, Informative)

      by Master Bait ( 115103 ) on Wednesday September 10, 2003 @07:24PM (#6926261) Homepage Journal
      Well, why don't you go ahead and write some Free software to accomplish the same thing?

      Running Squid [squid-cache.org] with a 256mb ram disk cache is all the speedup we need, and it does so without altering the data being fed from upstream.

  • Get broadband. This will definitely help places that still don't have broadband. But, if broadband is available, it's a no-brainer. I'd rather spend a few bucks more and get broadband, rather than be stuck with some kind of software that may or may not speed up the access depending on what it is.
    • You admit that a $200,000 setup fee [pineight.com] isn't "a few bucks more." Thank you; most people miss this.

      But what about people who are so mobile that they need to be able to jack in and access the Internet from any of several locations, and they can't afford the price of a broadband subscription for each location? I was in just that situation for four years. Dial-up has the advantage of a last mile in almost every home in the States, brought to you by the Universal Service Tax, meaning that no matter whose house

    • And for those of you who have broadband that you use to run an HTTP server on---have you considered using mod_gzip [schroepl.net]? It will let you serve more with less, which is a very good thing if you ever get linked to from slashdot.
    • Will a web accelerator make my broadband connection five times faster?
  • by Anonymous Coward on Wednesday September 10, 2003 @05:42PM (#6925337)
    Why aren't more content providers using gzip compression? The CPU time required is MUCH cheaper than the bandwidth, AND it makes users happier because they get pages faster. Oh, and it's free (for Apache anyway) and easy to set up. It even works with 99% of browsers these days.
    • by DrSkwid ( 118965 ) on Wednesday September 10, 2003 @05:47PM (#6925391) Journal
      mod_gzip is manna from heaven

      I turned mine off by accident once and got a phone call from the co-lo wanting to know why I was suddenly maxing out.

      gotta love that 70% saving.

      • by Micah ( 278 )
        I agree.

        I used to host Slashcode based sites. The default home page was about 50k. With mod_gzip, it literally got down to about 6k. Really sweet!
        • by Micah ( 278 ) on Wednesday September 10, 2003 @06:44PM (#6925907) Homepage Journal
          Of course, I should also add that both numbers would be a lot lower if the Slashcode theme remotely resembled web standards instead of horrendous amounts of nested tables and "spacer" graphics, but that's getting off-topic.....
          • by Jerf ( 17166 ) on Thursday September 11, 2003 @12:00AM (#6927734) Journal
            Of course, I should also add that both numbers would be a lot lower if the Slashcode theme remotely resembled web standards instead of horrendous amounts of nested tables and "spacer" graphics, but that's getting off-topic.....

            Actually, try downloading your page, copying it, gzipping the original, cleaning up the copy to your specs, gzipping that, and comparing the two file sizes. While you may kill a lot of text in the uncompressed version, I would strongly suspect you'll find that the gzip'ped version saved much less than you think.

            Those "spacer gifs" that take up perhaps 100bytes apiece in the original file (perhaps a bit generous) will compress away down to very little (if there are several near each other, they may literally compress down to a handful of bits after the first one), whereas the story text compresses much less well.

            If you're compressing things, XML, CSS, and a lot of other things that look awfully redundant in plain-text are suddenly downright bandwidth-efficient technologies, being dwarfed in their compressed representations by the plain-text payloads. This is one of the reasons XML is fundamentally so cool; you get human readability, but for the very small effort of invoking gzip or similar compression technology, you also get something that is very nearly as bandwidth-efficient as possible, because compression technologies dynamically determine the best binary encodings for such messages (including their plain-text payloads), whereas supposed "efficient" binary protocols may actually waste a lot of space. (Compressing both of them may equalize them, but the binary file, perversely, will still be "harder" to compress, even with nearly the same information in both files.)

            How compression behaves is not necessarily intuitive.
      • compression with PHP (Score:5, Informative)

        by mboedick ( 543717 ) on Wednesday September 10, 2003 @09:23PM (#6926921)

        If you have a recent version of PHP, you don't even need mod_gzip. Just put the following lines in your .htaccess file:

        php_flag zlib.output_compression on

        Does everything on the fly. I once had a shell script that would wget a URL with the Accept-Encoding: gzip header, then wget it again without it and show the percent savings. It was fairly interesting to see which sites were using compression, and what the sites that weren't could have saved in bandwidth by using it.
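        A rough Python equivalent of the comparison script described above, using only the standard library; the URL is a placeholder, and the numbers are transfer sizes, so the saving only shows up if the server honors Accept-Encoding:

        # Sketch: fetch a page with and without "Accept-Encoding: gzip" and
        # compare how many bytes actually cross the wire.
        import urllib.request

        def fetched_size(url, accept_gzip):
            req = urllib.request.Request(url)
            if accept_gzip:
                req.add_header("Accept-Encoding", "gzip")
            with urllib.request.urlopen(req) as resp:
                return len(resp.read())      # still compressed if the server gzipped it

        url = "http://example.com/"          # placeholder
        plain = fetched_size(url, accept_gzip=False)
        gzipped = fetched_size(url, accept_gzip=True)
        print("plain: %d bytes, gzip: %d bytes, saving: %.0f%%"
              % (plain, gzipped, 100.0 * (plain - gzipped) / plain))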

  • Yeah, right! (Score:5, Insightful)

    by Anonymous Coward on Wednesday September 10, 2003 @05:42PM (#6925345)
    And they require non-Free software on the client's end, too.

    And I'll just bet that none of that software includes any popups, spyware or intrusive monitoring!
  • But really, why? (Score:5, Interesting)

    by agent dero ( 680753 ) on Wednesday September 10, 2003 @05:43PM (#6925347) Homepage
    This seems like a really niche market nowadays. Not _too_ many people who need fast Internet, and who could use this, don't have broadband available. The one key thing is price, which is even starting to get iffy.

    Something like $10-20 monthly for "speedy" EarthLink dial-up, or an extra $10-20 slapped on my monthly cable bill for broadband? (Charter Communications, they suck anyways)

    I guess if you need to read /. or pr0n that much faster, it works. Tell me if I am wrong, but I am seeing a small market for this much hype.
  • by koi88 ( 640490 ) on Wednesday September 10, 2003 @05:43PM (#6925351)
    They actually work?
    Next thing they find out is the new generation of penis enlargement devices actually work, too...
  • Non-Free software? (Score:3, Insightful)

    by divisionbyzero ( 300681 ) on Wednesday September 10, 2003 @05:43PM (#6925354)
    OMG, not that! I know this won't get much play here, but I don't care if it's free or not as long as it works. I use the Free software that I do because it is better than Fee software, not because it is free. Shame on me for not being an ideologue.
  • tradeoff (Score:4, Insightful)

    by nstrom ( 152310 ) on Wednesday September 10, 2003 @05:44PM (#6925364)
    It's just the old tradeoff between CPU power (decompression time) and bandwidth usage (download time). Much easier (and more smartly) implemented on the server side with something like mod_gzip, like HungWeiLo said.

    And graphic compression's been done before too, since around AOL 3.0 or so. Most people turn it off because it makes pages look like crap.
  • by elwoodblues16 ( 666185 ) on Wednesday September 10, 2003 @05:45PM (#6925366)
    Snake oil that works? What do you even call something like that?
  • by Jerk City Troll ( 661616 ) on Wednesday September 10, 2003 @05:45PM (#6925368) Homepage

    My former company was checking out NetAccelerator [netaccelerator.net] recently to resell to our clients.

    These things are a joke. The primary performance increase comes from recompressing images into really nasty JPEGs. AOL was doing this years ago (and getting blasted for it). If you turn that off, the performance improvement is not even measurable.

    Furthermore, you tend to get a lot of stale caches on your machine. Most browsers don't even get this right, so they add yet another layer of potentially buggy cache abstraction.

    No, these things are junk. They act as proxy servers and their source is closed. How can you trust them to handle your data? Even with all their compression features turned on, the performance improvement is seriously overrated. Don't bother. You simply cannot get something for nothing in cases like these.

    Now, what would improve the download speed of the web is if web designers would start building standards compliant markup. Many web sites have as much as 700kb overhead in markup from tools that create loads of font tags and their ilk. Pure XHTML + CSS layout would do a hell of a lot more to speed up the web than these scams. Of course, don't take my word for it--read Zeldman [zeldman.com].

    • by Hayzeus ( 596826 ) on Wednesday September 10, 2003 @05:50PM (#6925426) Homepage
      OK -- now you've got my attention. Like, um... just how nasty?
    • Unfortunately, you run a risk of leaving some folks with an unreadable page if their browser doesn't support CSS correctly.

      I am a big CSS fan (having recently finally relented and converted my websites to use it), but the implementations are still varied.

      What's sad is that SO MANY pages have this extra WYSIWYG garbage code in 'em. I was recently looking at job postings (JSP/ASP/CGI/Perl/Java/Hire Me!) and damned near every one had 'must be able to hand code'. Interestingly, the higher end jobs (like more
      • Not every WYSIWYG editor makes bloated code. Happens that my fave, and my next-best-thing, both make very clean code with no needless crap. But neither is Frontpage or worse yet, Dreamweaver. :)

        What do I use? Ancient AOLpress and Visual Page. With some hand-tweaking for stuff neither is new enough to grok.

        • An AOL tool makes clean code... I am shocked and awed...

          I one time had to (per instructions from the boss) get a 20-page Word document (almost entirely a table) on our website within an hour, in HTML format. Ashamedly, I used 'save as HTML', but inserted a comment saying 'my boss made me do it' just in case anyone looked :)
          • by Reziac ( 43301 ) on Wednesday September 10, 2003 @06:39PM (#6925861) Homepage Journal
            AOLpress makes the cleanest, most legible HTML you'll ever see (better than most people bother with their hand-crafted HTML, in fact). It's also utterly anal about correct tags. I use it as a validator and code beautifier even when I've built the page in something else. Between that and its ability to work as a browser (a huge timesaver when a site is all unique pages and you need to follow links back and forth between several of 'em as you edit), it's completely ruined me -- now I expect *every* editor to do as well :)

            Save As HTML woulda been my first thought too.. except knowing the kark that Word thinks is HTML, I'd probably do this instead: save as WordPerfect 5.1, then (assuming I didn't have WP available to cut the middleman) I'd run it thru one of the WP-to-HTML tools, which usually do pretty well on tables, then load and save in AOLpress to clean up artifacts and mismatched tags. And when it magically appears on the website well before the deadline, the boss thinks you're a genius. :)

    • Now, what would improve the download speed of the web is if web designers would start building standards compliant markup. Many web sites have as much as 700kb overhead in markup from tools that create loads of font tags and their ilk

      No, we need web designers who actually have skills instead of being FrontPage operators.

      Some of the absolute best websites on the net are written by hand by skilled designers that know what they are doing... and their sites are fast and clean (Html wise) while pushing the en
    • I would prefer more browsers to use gzip for retrieving web pages. Also, web servers could optimize HTML by stripping out all unnecessary whitespace and redundant tags before transmission.

      In my experience, standards-compliant HTML is *less* space-efficient than ad-hoc HTML. Compare
      <font size="+1">
      vs.
      <font size=+1>
      or the extra trailing slash on atomic tags, or </p> tags (which you can pretty much omit entirely without confusing any browsers).

      I predict that at some point in the future it will become a crime, or a
      • Can't compress twice (Score:4, Interesting)

        by fm6 ( 162816 ) on Wednesday September 10, 2003 @07:00PM (#6926035) Homepage Journal
        I would prefer more browsers to use gzip for retrieving web pages.
        I might be wrong, but I'm pretty sure this would not help dialup users at all. They're already using hardware data compression in the modem. When you're using lossless compression, there's an absolute limit [data-compression.com] as to how much compression you can get -- and you can't get around that limit by running your data through multiple compressors.

        (I should check this out by timing various downloads, but I'm too lazy. Somebody else can prove me wrong!)

        So why do JPEG files with "more" compression download faster? Because JPEG is a lossy format: when you increase the "compression" you're not encoding data more efficiently, you're throwing data away. Depending on the image, you can do this and still end up with something that looks the same. But push it far enough and you end up with crap.
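        The "can't compress twice" point is easy to check with a quick sketch (Python standard library; the sample input is just arbitrary redundant markup):

        # Sketch: lossless compression hits its floor quickly; running the
        # output through gzip again does not shrink it any further.
        import gzip

        data = b"<tr><td>row</td></tr>\n" * 5000   # redundant, HTML-ish input
        once = gzip.compress(data, 9)
        twice = gzip.compress(once, 9)

        print("original:     ", len(data))
        print("gzipped once: ", len(once))
        print("gzipped twice:", len(twice))        # typically slightly *larger*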

        • by HeghmoH ( 13204 ) on Thursday September 11, 2003 @12:17AM (#6927801) Homepage Journal
          You're making the incorrect assumption that all lossless compression is equal.

          Modem compressors work very, very poorly. This isn't just because the people who come up with them suck, there are fundamental problems with doing compression in the modem. In order to avoid introducing really horrible latency, you have to compress the data in fairly small chunks. You can't wait for 50k of data to arrive from the computer and compress it all at once. Yet any decent compression scheme will achieve better compression ratios on longer chunks of data than shorter ones in non-pathological cases. So you have a modem which is stuck trying to compress a hundred bytes at a time, and a web server which can compress all 100k of page at once, and you have a significant difference in size. Also, gzip runs on a computer with a truly mind-boggling amount of number-crunching power available compared to a modem, which has a CPU just powerful enough to handle complicated commands like "ATH". With more CPU power, you can achieve better compression ratios.

          In the end, modem hardware compression is basically a hack, and mostly a worthless one. There's a reason why everybody who distributes a file for download compresses it first, and it's not because it makes the file look prettier.
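          The chunk-size point is easy to demonstrate with a sketch (Python standard library; compressing each 100-byte slice with a fresh compressor is only a crude stand-in for the modem's limited view of the stream):

          # Sketch: compress a page in small modem-sized chunks vs. all at once.
          import zlib

          page = b"<div class=comment><p>yet another comment</p></div>\n" * 2000

          whole = len(zlib.compress(page, 9))
          chunked = sum(len(zlib.compress(page[i:i + 100], 9))
                        for i in range(0, len(page), 100))

          print("whole page:      %d -> %d bytes" % (len(page), whole))
          print("100-byte chunks: %d -> %d bytes" % (len(page), chunked))  # far bigger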
      • In my experience, standards-compliant HTML is *less* space-efficient than ad-hoc HTML. Compare

        <font size="+1">
        vs.
        <font size=+1>
        or the extra trailing slash on atomic tags, or </p> tags (which you can pretty much omit entirely without confusing any browsers).

        Wow. I've never encountered anyone who has spoken so confidently on a topic without knowing a damn thing about it. You are so absolutely wrong that the example you gave is the exact opposite of what you think it is. You've demonstrated precisely what

  • by insecuritiez ( 606865 ) on Wednesday September 10, 2003 @05:46PM (#6925382)
    They compress the packets of data. Where will this help? In compressible places that aren't already compressed, such as the HTML markup for webpages. This won't help already compressed JPEGs, already compressed MP3s, already compressed ZIP/GZIP files, or already compressed videos (MPG/AVI/ASF). So is this really going to help much? Sure, there is always going to be a small percentage of space (and therefore time) saved even transferring these formats. Is it going to make a 5X difference? No. Is it going to make a noticeable difference? It's unlikely but possible. The only way this "new technology" is going to help is if you are a dialup user without broadband options.
    • Oh, and one more thing. If they do like AOL did and use LOSSY compression and re-compress the JPGs and other images on a website, forget it. Not only does that not save much space, it makes the images look like shit.
  • by mattbee ( 17533 ) <matthew@bytemark.co.uk> on Wednesday September 10, 2003 @05:46PM (#6925384) Homepage
    rproxy [samba.org] is a really interesting project, and back when I tried it over a 56K dial-up connection, it did actually work to speed things up. You sit an rproxy web cache at each end of the dial-up connection (so you need somewhere to deploy your custom proxy to make it work, but bear with me...) and then request web pages as usual. Each end caches the pages that pass through it, but the clever part is that when you re-request a page, the proxy at the far end (on the fast connection) can fetch the page and compare it with the last copy in the cache. Then it transmits only the differences using the rsync algorithm [samba.org]. Unfortunately it's not being actively developed any more given the increasing availability of high-bandwidth connections, and the decreasing fraction of web traffic that is suitable for delta-compression. Shame, since it did seem to be a real "web accelerator" without any of the illusory techniques used by the garish banner-ad accelerators.
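    A crude illustration of the delta idea, using Python's difflib (a line-level diff, not the block-level rsync algorithm rproxy uses); the filenames are placeholders for the cached and freshly fetched copies of the same page:

    # Sketch: send only a diff between the copy both ends already have and the
    # freshly fetched page. The other end applies the patch to its cached copy
    # (e.g. with the ordinary "patch" tool); a real accelerator would use
    # rsync's block-level algorithm instead of a line diff.
    import difflib

    def lines(path):
        with open(path, encoding="utf-8", errors="replace") as f:
            return f.readlines()

    cached_page = lines("yesterday.html")   # placeholder filenames
    fresh_page = lines("today.html")

    delta = list(difflib.unified_diff(cached_page, fresh_page))

    full_size = sum(len(line) for line in fresh_page)
    delta_size = sum(len(line) for line in delta)
    print("full page: %d bytes, delta: %d bytes" % (full_size, delta_size))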
  • Squid & mod_gzip (Score:4, Interesting)

    by chill ( 34294 ) on Wednesday September 10, 2003 @05:47PM (#6925396) Journal
    ISPs could simply put some squid caches between the net and their dial-up banks. Turn mod_gzip on and you'll accomplish a lot of the same thing.

    Instead of having to traverse the Internet, with all the associated latency, pages are pulled locally - 1 hop away. Pages are also compressed.

    A better way would be to figure out how to transfer pages via CVS, so only .diffs came across. :-)
    • Re:Squid & mod_gzip (Score:2, Interesting)

      by Anonymous Coward
      Does that actually exist, a mod_gzip for Squid?
      There is a mod_gzip for Apache...
    • Re:Squid & mod_gzip (Score:3, Interesting)

      by burns210 ( 572621 )
      On your CVS comment, it sounds a lot like the rsync solution... rsync copies files over the net, but if the file already exists, it will only copy the sections of the file that have changed (10% maybe) and keep the other 90%... That sounds like a great feature to have for the cacheable proxies.
      • Re:Squid & mod_gzip (Score:3, Interesting)

        by chill ( 34294 )
        On your CVS comment, it sounds a lot like the rsync solution...

        Doh!

        I meant rsync. I use it to sync websites on multiple servers.

        Thanks.
  • Just remember (Score:5, Informative)

    by El ( 94934 ) on Wednesday September 10, 2003 @05:47PM (#6925398)
    GIFs, JPEGs, MPEGs, and MP3s are already compressed, so compression doesn't make them any smaller. That really leaves only HTTP, HTML and CSS to benefit from compression. And caching only helps if you're in the habit of looking at the same pages multiple times... so where's the benefit for the average porn-downloading, RIAA-infringing geek? Does it speculatively preread links before I click on them?
    • Am I missing something? I thought mod_gzip or similar took care of this at the application level, so with a compliant browser (and most are) and server, it's possible for even HTML and CSS to be compressed.

      • mod_gzip does not translate images into jpg's and recompress them with a quality setting of 5-10%
        But true, for the most part, mod_gzip takes care of any plain-text compression, and most data on the web is already going to be compressed anyways if the author of the website is smart.
    • Re:Just remember (Score:5, Informative)

      by MerlynEmrys67 ( 583469 ) on Wednesday September 10, 2003 @05:57PM (#6925495)
      GIFs, JPEGs, MPEGs, and MP3s are already compressed

      For a given representation these are all compressed. However, in all cases these use lossy compression, where you can degrade the quality of the final output and send a smaller bitrate over the wire. Want me to prove my point... Take your favorite CD-quality MP3 - let's say the track is 100 K. Now take it and convert the quality to minimum quality - the file will be like 20 K now (if even that much)... you can still hear what is going on... but the quality will suck. You can do the same thing with the rest of the compressed formats as well.

      • So say you make a request for an image from a site. The ISP has to go, retrieve the ENTIRE image, de-compress, recompress at a higher level, and then begin transfer. Talk about latency issues. The only way they could do this is with caching. But then it would only be good for the most popular sites. And even then it would highly degrade the image quality.
        • Re:Just remember (Score:4, Insightful)

          by MerlynEmrys67 ( 583469 ) on Wednesday September 10, 2003 @06:11PM (#6925638)
          Yup... this is exactly what they are doing... Remember I have a local proxy cache - and multiple T-3 links to the internet - you have a 33kbit connection to this. If I can get a 100K file - spend time compressing it by 5x and get it to you in less time than it would take you to get the 100K file (24 seconds, right?) I have won. And guess what - the next sucker that asks for it, I get to give the recompressed data too for free.

          In many cases CPU power on the internet is free, bandwidth is expensive and worth spending free CPU cycles dealing with... Oh - how do YOU know that you are getting a degraded image anyway? The average idiot going through an ISP that would do this only sees the internet this way.

  • Hooray... (Score:2, Interesting)

    by -Grover ( 105474 )
    You can get the same thing you looked at yesterday 5x faster!!

    Caching and compression will only get you so far before lossiness (sp?) kicks in and you start getting garbage, or caching works so well you get the same page every time you load it.

    Get on the bandwagon and chip in the money for broadband if you're looking to boost your speeds. If you can't get a/v any faster, really, what's the point?

    Low-bandwidth main pages are becoming less and less prevalent, so it's not going to do you much good anyway, plus
  • Semi-real, maybe (Score:3, Interesting)

    by Empiric ( 675968 ) * on Wednesday September 10, 2003 @05:50PM (#6925418)
    Okay, so GIFs, JPGs, streaming video, ZIPs, and compressed .EXE installers are all already compressed near to their theoretical limits.

    [Puts cynic's hat on]

    The vendors mentioned in the PCWorld article seem to be treading dangerously close to copyright infringement by compressing other people's content on their servers to be pulled through their browser proxy.

    NetZero and Earthlink apparently force you to use their proprietary internet-access layer, so how are we sure their extra-cost "Super" speed isn't just normal internet speed, and their "Base" speed isn't just slowed down by the interface layer?

    [Takes cynic's hat off]

    The only thing here that seems like it would be genuinely useful is HTML compression... surely there is/will-be an Open Source solution for this. Maybe a new MIME type, e.g. text/html.compressed? Then it could be implemented on both the browser and server side, and this would have far greater impact. This could be implemented either in the browser itself or in a lightweight proxy like Proxomitron. Anyone? Anyone?
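    For what it's worth, the standards-based version of this is the Accept-Encoding / Content-Encoding negotiation that the mod_gzip comments elsewhere in this discussion describe, rather than a new MIME type. A toy illustration in Python (the page body and port are made up for the example):

    # Toy gzip-encoding web server: sends the page gzip-compressed whenever
    # the browser advertises "Accept-Encoding: gzip". This is the mechanism
    # mod_gzip automates for Apache; the page and port here are made up.
    import gzip
    from http.server import BaseHTTPRequestHandler, HTTPServer

    PAGE = b"<html><body>" + b"<p>hello, dial-up world</p>" * 500 + b"</body></html>"

    class GzipHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = PAGE
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            if "gzip" in self.headers.get("Accept-Encoding", ""):
                body = gzip.compress(PAGE, 9)
                self.send_header("Content-Encoding", "gzip")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), GzipHandler).serve_forever()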
  • I wonder if all this is worth the effort. For instance, Bell canada offers low speed dsl which is capped around 25 KB/s ~= 5x dial-up for only a couple dollars more than regular dial up. When you add in the fact that you don't have to tie up the phone line, and the other advantages of DSL such as high speed on all pages, not just frequently visited ones, you really have to wonder why anyone uses dial up at all anymore.
    • Why does anyone use dialup anymore? Because some of us CAN'T get broadband. I'm only 50 miles from Los Angeles, yet the nearest broadband is 15 miles away. (About 3/4ths of this valley, population 350,000, is out of range of broadband.)

      Even so... I'm not sure I believe these accelerators will do enough to be worthwhile, especially when my dialup tops out at 26k and I don't load images in the first place. And I don't like the idea of yet another layer of Stuff that can go wrong.

    • I wonder if all this is worth the effort. For instance, Bell canada offers low speed dsl which is capped around 25 KB/s ~= 5x dial-up for only a couple dollars more than regular dial up. When you add in the fact that you don't have to tie up the phone line, and the other advantages of DSL such as high speed on all pages, not just frequently visited ones, you really have to wonder why anyone uses dial up at all anymore.

      1: Dialup is available everywhere there is a phone line and is not dependent on a phys
  • Methods (Score:3, Informative)

    by wfberg ( 24378 ) on Wednesday September 10, 2003 @05:51PM (#6925431)
    Caching webpages in a proxy is something all ISPs do. The downside is that whenever I've used an ISP's Squid proxy, it slowed things down! Turning proxies off almost invariably helps speeds, instead of hurting them. Plus, if the proxy goes down, you can still use the web. I have no idea why ISPs' proxies are so craptastic (YMMV), but in my experience, they are. (BTW, it would help if windowsupdate was cacheable..)

    Compression.. Now there's something! I have in the past used an ssh tunnel (with compression switched on) to my university's web proxy, and that sped things up quite a bit! Why isn't this switched on by default on my PPPoA connection? Doesn't Apache handle gzip'ing these days? It doesn't seem to be used much, though.. This speed-up might be less pronounced on dial-up links though, because POTS modems usually switch on compression anyway (again YMMV).

    Some download accelerators simply download different chunks of the same file in multiple sessions from either one server (shouldn't matter - unless with round-robin DNS) or even from mirrors (better!). That's quite effective as well, but we know this, and that's why we use BitTorrent for big files, don't we? ;-) Not such a good approach for web browsing, btw.

    But it has to be said.. Most download accelerators are just bloaty spyware and don't do *zilch* to help your download speed.. Feh!

    Didn't AOL used to convert GIF graphics to their own, lossy, .ART format when you used their client? Do they still?
    • Re:Methods (Score:3, Informative)

      by Phroggy ( 441 ) *
      Caching webpages in a proxy is something all ISPs do.

      Wrong. Most of the larger residential ISPs probably do, but mine certainly doesn't, and of the last four ISPs I worked for, only two did any caching at all, and one of those only did caching in certain limited situations.

      The downside is that whenever I've used an ISP's Squid proxy, it slowed things down! Turning proxies off almost invariably helps speeds, instead of hurting them.

      Hogwash. I've been using a caching proxy server on my LAN for the pas
  • Whatever happened to that Xwebs browser written by some 16-year-old kid that was a total scam? I mean, it supposedly sped up web surfing lots?
  • by default luser ( 529332 ) on Wednesday September 10, 2003 @05:55PM (#6925473) Journal
    WOW, a webcache and real-time compression!

    My browser and my modem with V.42bis compression have only been able to do that for, what, nearly a decade?
  • Cache the Suckage (Score:3, Informative)

    by Bonker ( 243350 ) on Wednesday September 10, 2003 @05:58PM (#6925504)
    I worked at a local ISP who managed to get a demo for a cache server a while back. (I don't anymore.) The machine arrived. We plugged it in, and started to take tech calls.

    Basically, it proxied all requests through that ISP on port 80. If it found a request to an IP or sitename it had visited before, it tried to serve it out of cache. If it didn't, it proxied the result through and returned the results from the requested IP or sitename.

    The problems:

    The server had a difficult time with virtual hosting of any kind. About 4 out of 5 requests to a virtual host would go through. About 20% of the time, there was some critical piece of information that the cache server would mangle so that the vhost mechanism would be unable to serve the right data. This was a couple years ago, so bugfixes might have happened. Maybe.

    The server definitely had a hard time with dynamic content that wasn't built with a GET URL (thus triggering the pass-thru proxy). If the request was posted, encrypted, hashed, or referenced a server-side directive of some kind (server-side redirects were a nasty one), the cache would fail. A server-side link equating something like "http://www.server.net/redirect/" to a generated URL or dynamic content of some kind was the most frequent case we ran into with this. The server simply couldn't parse each and every HTTP request of every variety and try to decide if it should pass-thru or not. I can't think of a logical way around this that wouldn't break any given implementation. Can you?

    We used dynamically assigned IPs at the time, so proxy requests made from one PC were often returned erroneously to another if the IP changed between uses -- say, after a modem hangup, etc. This was a rare event, but I listened to at least one person complaining that he was getting someone else's Hotmail. The fix to this is either to blacklist sites from being cached -- infeasible for every site that could possibly be requested -- or assign static IPs. DHCP broadband users may have similar problems, especially those who get new IPs every so often.

    Finally, if something got corrupted on the cache server due to disk error, stalled transfer, or some other reason, the server had little or no way to throw out the bad data. It would throw out data that it *knew* was corrupt due to unfinished downloads, etc..., but oftentimes this check failed or data was assumed to be correct even when it wasn't. Everyone who requested the same piece of corrupt data got it. I had to answer this statement a few times. "I downloaded it on one computer connected to your ISP and got a bad download. I downloaded it on my other computer from the same ISP and got the same bad download. Then I connected to another ISP from the first computer and got a complete download. What's up wit' dat, yo?"

    Cache servers are a bad idea. The very idea is to try to be an end-all be-all to everyone who uses them. There are bug fixes for some of the problems, but no way to solve the essential problem that MOST data on the web is dynamic now. Using cache servers with dynamic data is inviting difficulty and problems.
    • We used dynamically assigned IPs at the time, so proxy requests made from one PC were often returned erroneously to another assuming the IP changed between usage.

      What? Client A establishes a TCP connection to the proxy server, then disconnects. Client B connects with client A's old IP, happens to initiate a connection to the proxy server with the exact same source port, and ignores the fact that the proxy server didn't successfully complete the build-up. The server doesn't seem to notice that it's gett

    • Cache servers are a bad idea. The very idea is to try to be an end-all be-all to everyone who uses them. There are bug fixes for some of the problems, but no way to solve the essential problem that MOST data on the web is dynamic now. Using cache servers with dynamic data is inviting difficulty and problems.

      Cache servers are NOT a bad idea, they are a GREAT idea, and for this reason they are in wide use. I don't know what cache engine you were using, but it sure sounds like it sucked. Cache eng
    • Re:Cache the Suckage (Score:3, Informative)

      by slamb ( 119285 )
      Sounds like you had a horrible experience with one. But the problems you saw were bugs in the software, not fundamental problems with the concept. One by one...

      I worked at a local ISP who managed to get a demo for a cache server a while back. (I don't anymore.) The machine arrived. We plugged it in, and started to take tech calls.

      Sounds like this was your mistake. You "managed to get a demo" speaks volumes. Sounds like an expensive proprietary product from a small company. If you had just downloaded Sq

  • by wang232 ( 601242 ) on Wednesday September 10, 2003 @06:04PM (#6925564)
    There are two free software projects building web accelerator proxies. One is RabbIT [sourceforge.net]. The other is ziproxy [sourceforge.net]. They are both web proxies which do not require any special software on the client side. They both compress HTML with gzip, and compress images into lower-quality JPEGs. RabbIT is written in Java whereas ziproxy is written in C. RabbIT has more features than ziproxy, such as caching and removing ads. Give them a try if you're using a slow line! Disclaimer: I'm a ziproxy user and developer.
  • Oh yeah? (Score:5, Funny)

    by identity0 ( 77976 ) on Wednesday September 10, 2003 @06:09PM (#6925615) Journal
    If they're so good, why isn't this first post?!
  • by ewhenn ( 647989 ) on Wednesday September 10, 2003 @06:13PM (#6925666)
    NetZero offered this a while ago (maybe they still do). Basically it does speed up loading of pages greatly; however, there is a drawback, and a big one at that: the pictures look like crap. The GIFs/JPEGs/etc. are compressed. Compressed so much that they look like, for lack of a better term, crap. A rule of thumb applies here: if it sounds too good to be true, it probably is. You really can't expect broadband speed for the cost of dial-up, and if you do I have some lovely penis pills to sell you for the low, low price of 69.95.
  • The web could be faster if web server admins used faster web servers. Zeus Web Server instead of Apache, for example. The Holy Grail of web serving seems to be "good enough is good enough, and performance is someone else's problem".
  • The way to wean the American public off dial-up is to offer tiered cable modem services. Say Comcast offers AOL for the same price as dial-up service, with guaranteed 128kbps downstream/56kbps upstream. Or barebones ISP (sans AOL or any other *content-enhancements*) for $19.99 per month. The cable companies would even steal customers away from Netzero and the like by appealing to the fact that there's no need for a dedicated telephone line anymore for internet access. But nope, the cable companies would
    • Actually... Comcast did raise their speeds I believe in Oregon and Atlanta
      http://news.com.com/2100-1038-5060321.html
      http://groups.google.com/groups?q=%2Bcomcast+%2Bspeed+%2Bincrease&hl=en&lr=&ie=UTF-8&oe=UTF-8&selm=1JK3b.226372%24It4.108600%40rwcrnsc51.ops.asp.att.net&rnum=5

      I believe that this is in part due to the pressure of EarthLink DSL service, which offers 1.5meg/128k (384k in some regions) for roughly $50 a month.

      Both Comcast and EarthLink WILL lower their monthly rate
  • by noahbagels ( 177540 ) on Wednesday September 10, 2003 @06:20PM (#6925717)
    That's about all the article had to say:

    Tests by PC World, PC Magazine and CNET show

    These are the same magazines with full color, multi-page reviews of the new 0.025% faster hardware. They are the same magazines that review each micro$oft product and say that the TCO is lower than ever before. Take one look at any of their websites, and you will see:

    These magazines are Advertisements

    Taking anything from them seriously is like taking a presidential speech to be a serious economic discussion, or taking a realtor's web-site as gospel in the market.

    Funny - just went to CNET.com to research my post, and guess what? Over 50% of the page is advertising. The rest is 'reviews' of which 100% have links to affiliate programs to purchase said hard/software and give a kickback to CNET.

    They will try hard to sell anything, and get their commission. It's like they are the used car salesman of the internet - only everything is new and they don't look you in the eyes when lying to you.
  • Remember the old 24/96 modems? V.42bis compression? MNP 5-10? Yeah, and like they said, mod_gzip. My Motorola phone has "compression" software so the measly 9.6kbps connection isn't so bad when trying to get e-mail to my laptop. (TDMA phone)
  • Squid (Score:3, Interesting)

    by Helmholtz ( 2715 ) on Wednesday September 10, 2003 @06:38PM (#6925842) Homepage
    Personally I've found putting my modem on a box with a large amount of disk space and running Squid to be extremely useful. Between the aggressive caching and the banner-ad blocking, most of my web browsing doesn't seem very slow at all.

    Of course when it's a new site chock full of graphics, or I'm doing binary downloads, I'm painfully aware of my modem's limitations. But for general surfing, sometimes it seems almost as good as the friends' broadband.
  • by ChaosDiscord ( 4913 ) on Thursday September 11, 2003 @01:17AM (#6928082) Homepage Journal

    One thing most of these proxies do is compress HTML files. HTML is highly redundant, so compression can save a lot of space. However, it's silly for the proxy to do the compressing. Instead, web site owners can do the compressing! Transferring pages gzip-compressed is part of the standard. No special software is needed by end users. A 3:1 reduction in bytes transferred for your web pages (the HTML itself) is a reasonable minimum. The result is that you use less bandwidth and end users get a faster web site! Every mainstream browser supports this, and those browsers that don't support it will automatically get the uncompressed version. If you're using Apache, you'll want mod_gzip [schroepl.net] to automatically compress transfers. (You can fake the effect with MultiViews [apache.org], but it's a hassle to maintain two copies of every HTML file.)

    (Yes, I know I don't practice what I preach. I'm working on it.)

  • by kinema ( 630983 ) on Thursday September 11, 2003 @02:58AM (#6928496)
    People keep saying that this technology is pretty much moot as more and more people are getting broadband connections. Why should compression and caching technology only be applied to slower connections? Why waste any amount of bandwidth even when you have "tons" of it?
  • by Ed Avis ( 5917 ) <ed@membled.com> on Thursday September 11, 2003 @04:42AM (#6928777) Homepage
    I use a modem for web browsing. I've found that wwwoffle [demon.co.uk] is a good proxy server, because it can operate in both online and offline modes - when offline, it serves the most recent version of each page, and if you try to view a non-cached page, it's marked to be downloaded next time you connect. If you want to speed up your browsing some more at the expense of having to hit 'reload' occasionally, you can configure wwwoffle to always use an available cached version even when online.

    If you have a shell account on another machine, and that machine has access to a proxy server, then you can tunnel port 3128 or 8080 (common http proxy ports) through ssh. This makes browsing a lot quicker because there is only a single TCP/IP connection going over the modem link - you don't have to connect separately for each page downloaded. Unfortunately I found that while this gave very fast browsing for half an hour or so, eventually it would freeze up and the ssh connection would have to be killed and restarted. Perhaps this has been fixed with newer OpenSSH releases.

    RabbIT [sourceforge.net] is a proxy server you can run on the upstream host which compresses text and images (lossily).

    The author of rsync mentioned something about an rsync-based web proxy where only differences in pages would be sent, but I don't know if this program was ever released.
  • Evil... (Score:3, Interesting)

    by SharpFang ( 651121 ) on Thursday September 11, 2003 @05:56AM (#6928973) Homepage Journal
    Harvest all the links on a webpage and cache them on disk while you view the page content; if you still don't go anywhere, harvest links on the cached pages, and so on. If you type some URL and go elsewhere, discard everything.
    Results for you: You click on a link and you have it immediately - from harddisk cache.
    Result for others: A major part of the bandwidth is wasted, everyone's connection gets slower.
    Effects: Everyone installs accelerators to have the net working faster. Bandwidth usage jumps 10 or so times, prices rise, connection speed drops far below what was before the accelerators. Nobody gives up the evil accelerators because without them it goes even slower.

    It's called "social trap".
