
How Not To Design a Protocol

timothy posted more than 3 years ago | from the sweet-morsels-of-logged-in-ness dept.

The Internet

An anonymous reader writes "Google security researcher Michal Zalewski posted a cautionary tale for software engineers: an amusing historical overview of all the security problems with HTTP cookies, including an impressive collection of issues we won't be able to fix. Pretty amazing that modern web commerce uses a mechanism so hacky that it does not even have a proper specification."

186 comments

Does it work ? (0)

unity100 (970058) | more than 3 years ago | (#34072172)

Yes. And that's what's important.

Re:Does it work ? (3, Insightful)

Anonymous Coward | more than 3 years ago | (#34072186)

RTFA. That's exactly what happened with HTTP. "It works". In the world of 1990. And then they started to "fix" it to keep up.

Re:Does it work ? (4, Insightful)

OG (15008) | more than 3 years ago | (#34072310)

When I'm designing a solution, I don't ask if it works, I ask if it works well. Is it secure? Is it scalable? What are the risks associated with it? Is it full of kludges that make bad implementations easy? What do I do if a user decides she doesn't trust that functionality and turns it off? And the point of the article wasn't to say that people shouldn't use cookies when developing web sites or applications. Rather, it's an examination of how a sub-optimal solution came to be, so that perhaps other people can avoid similar pitfalls in the future.

Re:Does it work ? (2, Insightful)

Anonymous Coward | more than 3 years ago | (#34072498)

Web developers would have the time to think about important things like that if they weren't spending all of their time trying to prevent data loss caused by MySQL or the NoSQL database du jour, horrible server-side performance due to PHP, horrible client-side performance due to JavaScript, all while trying to avoid the numerous browser incompatibilities.

Although the tools and technologies they're using are complete shit, it sure doesn't help that they generally don't understand even basic software development and programming theories very well. See their bastardization of the MVC pattern, for instance.

Re:Does it work ? (3, Insightful)

ralfmuschall (1782380) | more than 3 years ago | (#34072828)

Your way of thinking is nice, but it is exactly this attitude that gets developers fired for thinking too much instead of getting the product out (or gets their bosses bankrupted if they share that attitude and don't fire you, in which case an inferior, insecure competing product will dominate). That's why we are up to our necks in inferior goods, protocols being just one example. Not even the death penalty (e.g. for melamine in Chinese milk) seems to stop this.

Re:Does it work ? (4, Insightful)

icebike (68054) | more than 3 years ago | (#34073770)

So in other words, you never bring anything into production status.

Look, it's really quite simple.

HTTP was a presentation mechanism, designed to deliver content, dependent on non-persistent connections, where each initial and each subsequent request had to supply all the information necessary to fulfill said request. Even if you "log in" to your account, every request stands alone.

There is no persistent connection. There is no reliable persistent knowledge on the server side that can be positively attributed to any given client. Clients are like motorists at the drive-up window of a burger stand, not well-known patrons at a restaurant.

Given that scenario, it was inevitable that cookies would be developed, and employed.

So unless you were willing to hold off deployment of e-commerce until you had totally rewritten HTTP into a persistent-connection-based protocol and totally replaced the browser as the client-side tool, any grandstanding about how carefully and methodically you work is just grandiose bravado.

The only tools at hand were HTTP, web servers, and browsers. It's still largely the same today. There was no other way besides cookies of some sort. You may argue about their structure, their content, or whatever, but cookies are all that is on the menu.
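To make that concrete, here is roughly what the wire traffic looks like (the hostname and cookie value are made up): every request carries everything the server needs, and the cookie header is the only thread tying two requests together.

    GET /cart HTTP/1.1
    Host: shop.example

    HTTP/1.1 200 OK
    Set-Cookie: session=abc123; Path=/
    ...

    GET /checkout HTTP/1.1
    Host: shop.example
    Cookie: session=abc123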

Re:Does it work ? (2, Funny)

Saint Stephen (19450) | more than 3 years ago | (#34073882)

Thank you, Captain Hindsight! What a complete failure the designers of HTTP were. They should've done it so much differently! :-)

Re:Does it work ? (1, Informative)

Anonymous Coward | more than 3 years ago | (#34072334)

I wonder how many code snippets of yours have appeared on The Daily WTF. Just because something works doesn't mean it's good.

I knew a pilot who flew with duct tape holding down the fuel cap on his wing. That worked too, but it's hardly ideal, is it?

Here in Australia a few years back, a major power substation was "working" only because someone rigged up a hose to constantly drip water on an overheating thingamajig. Sure, it works, and props to the hard hack, but it's a piece of shit that can easily stop working.

You see, some of us prefer things not to be a piece of shit.

Re:Does it work ? (0)

Anonymous Coward | more than 3 years ago | (#34072394)

And our browsers wouldn't really be the same without all those tasty cookies!

Re:Does it work ? (0)

Anonymous Coward | more than 3 years ago | (#34072402)

Yes. And that's what's important.

No standards and no security.

Your definition of "works" is weak.

I wouldn't be so quick to advertise such low standards if I were you.

Re:Does it work ? (2, Insightful)

Bacon Bits (926911) | more than 3 years ago | (#34072446)

It is this type of thinking that separates a carpenter from an engineer.

Re:Does it work ? (0)

Anonymous Coward | more than 3 years ago | (#34072474)

Um, it's this kind of thinking that separates a bad carpenter from a good carpenter, or a bad engineer from a good one.

Re:Does it work ? (0)

Anonymous Coward | more than 3 years ago | (#34073302)

When I give food to the poor, I feel charitable. When I am forced to give food to the poor, I feel abused.

"Working" is different from "working well". (5, Insightful)

Anonymous Coward | more than 3 years ago | (#34072470)

"Working" is measured over a very wide spectrum. On one hand, we have "broken", and on the other we have "working perfectly". The web is far, far closer to the "broken" side of the spectrum than it ever has been to the "working perfectly" side.

Put simply, almost everything about the web is one filthy hack upon another. It's a huge stack of shitty "extensions" that were often made with little thought, so it's no wonder web development is so horrible today.

HTTP has been repurposed far more than it should have been. Its lack of statefulness has resulted in horrible hacks like cookies and AJAX. HTTP makes caching far harder than it should be. SSL and TLS are mighty awful hacks. And those are just a few of its problems!

HTML is a mess, and HTML5 is just going to make the situation worse. Even after 20 years, layout is still a huge hassle. CSS tries to bring in concepts from the publishing world, but they're not at all what we need for web layout, and thus everyone is unhappy.

A lot of people will claim otherwise, and they're wrong, but JavaScript is a fucking horrible scripting language. It's even worse for writing anything significant. And no, it's absolutely nothing like Scheme (some JavaScript advocate always makes this stupid claim whenever the topic of JavaScript's horrid nature comes up).

PHP is one of the few popular languages that can rival JavaScript in terms of being absolutely shitty. Then there are other server-side shenanigans like the NoSQL movement, which arose solely because there are a lot of web "developers" who don't know how to use relational databases properly. I've seriously dealt with such "developers" and many of them didn't even know what indexes are!

Most web browsers themselves are quite shitty. It has gotten better recently, but they still use huge amounts of RAM for the relatively simple services they provide.

The only people involved with some sort of web-related software development who aren't absolute fuck-ups are those working on HTTP servers like Apache HTTPd, nginx, and lighttpd. But now we're seeing crap like Mongrel and Mongrel2 arising in this area, so maybe it's only a matter of time before the sensible developers here move on.

So just because the web is "sort of broken", rather than "completely fucking broken", it doesn't mean that it's "working".

-1 Profanity (-1, Offtopic)

jabberw0k (62554) | more than 3 years ago | (#34072890)

You had some excellent points until you started swearing. Clean up your act; I wanted to hear what you had to say.

Re:-1 Profanity (1, Insightful)

Anonymous Coward | more than 3 years ago | (#34073020)

Oh for fucks sake, stop being a fucking puritan, you fucktard!

Re:"Working" is different from "working well". (4, Insightful)

BlueStraggler (765543) | more than 3 years ago | (#34073166)

HTML is a mess

Unquestionably, yes. And yet it has nevertheless become the most pervasive, flexible, universal communication medium in the history of the world, so it's a glorious mess. It is questionable whether a better-specified system would have succeeded in this, because it would have been too locked down into its designer's original intent. It is precisely the hackability of HTML/http that makes it both fucking awful and fucking brilliant.

Re:"Working" is different from "working well". (3, Insightful)

quacking duck (607555) | more than 3 years ago | (#34073630)

I've been noticing technology trending towards biological models, either intentionally or otherwise. Genetic algorithms. Adaptable AIs. Computer viruses, even.

The rise of the internet and the web models this, too. Much like our own DNA, there's a lot of redundancy, legacy functionality that borders on harmful, and amazing features that are the result of (tech/biological) hacks upon hacks. They survived not because they were necessarily the best, but because they allowed earlier iterations (ancestors/early web) to be more flexible and adaptable, so they flourished.

Re:"Working" is different from "working well". (1)

DavidTC (10147) | more than 3 years ago | (#34073380)

PHP isn't as shitty as people want to make it out to be.

It's certainly an inconsistent language, but arguments being in weird orders and some functions having _ and some not doesn't really make a language 'shitty'. Especially now that it has real OOP: if you actually use that part, it's pretty consistent.

And thanks to HTTP's shittiness and web servers being bitches, PHP often ends up not being stateful either, but that's not really PHP's fault. None of the 'cgi' languages are stateful, and even if the language is, like Perl, you're not using that statefulness in web-based programs.

Remind me (1)

entotre (1929174) | more than 3 years ago | (#34072184)

Are slashdot accounts with auto-login also vulnerable?

Re:Remind me (1)

bunratty (545641) | more than 3 years ago | (#34073270)

Vulnerable to what?

Re:Remind me (1)

entotre (1929174) | more than 3 years ago | (#34073522)

Having the "Herding Firesheep" story fresh in my mind, I meant the wifi vulnerability. :)
After reading the blog post I would guess /. uses (insecure) HttpOnly cookies, but that the cookie settings of each individual account are what determine whether the cookie that wifi spy tools can obtain will be useful.

Re:Remind me (1)

bunratty (545641) | more than 3 years ago | (#34073708)

It's not a WiFi vulnerability. The vulnerability is that without HTTPS, passwords and cookies are sent in the clear, so that anyone who can see your Internet traffic can impersonate you on sites you log into. This could happen on a WiFi network or on a wired network. Slashdot does not support HTTPS at all as far as I can tell.
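For what it's worth, the cookie mechanism does let a site opt out of the cleartext behaviour described above; a header along these lines (the cookie name and value are invented for illustration) keeps the cookie off plain HTTP and away from scripts, though it only helps if the site serves HTTPS in the first place:

    Set-Cookie: session=8f2c1a77; Secure; HttpOnly; Path=/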

Analogy (1)

s1lverl0rd (1382241) | more than 3 years ago | (#34072226)

HTTP is like a manual lawn mower. It's not flawless, pretty, blazingly fast, or elegant, but it's usable enough to do the job, and you get used to the quirks.

Re:Analogy (5, Funny)

John Hasler (414242) | more than 3 years ago | (#34072298)

> HTTP is like a manual lawn mower.

No it isn't. A manual lawnmower is well-designed. The Web is like a lawnmower built by Rube Goldberg out of dozens of pairs of scissors, lots of string, some boards and a child's wagon, propelled by a large dog and powered by the wagging of his tail (the cookies are to get him to wag it). It's now had a clippings bag and a fertilizer cart added following the same design principles. An automatic dandelion remover, a dethatcher, and an aerator are coming soon (and several more dogs).

Re:Analogy (4, Funny)

phillips321 (955784) | more than 3 years ago | (#34072328)

You forgot to mention that the dog taking a shit is an extra add-on........Flash!

Re:Analogy (1, Funny)

Anonymous Coward | more than 3 years ago | (#34072438)

...which smells so bad because the dog has been fed the worst dogfood, called PHP

Re:Analogy (4, Funny)

peragrin (659227) | more than 3 years ago | (#34072344)

am I the only one who now wants to see that built/build it myself?

Re:Analogy (0)

Anonymous Coward | more than 3 years ago | (#34072518)

Me too... it sounds like the most awesome piece of performance art ever. I don't think I'd mow my lawn with it though.

Re:Analogy (2, Insightful)

arth1 (260657) | more than 3 years ago | (#34072584)

Rube Goldberg? Quite the opposite. The HTTP protocol is very simple, eminently debuggable, plus extensible both ways.
It's the implementations in browsers and servers that suck.

Now *SOAP*, layered on top of HTTP, is truly a Rube Goldberg invention with no redeeming qualities whatsoever.

Re:Analogy (1)

grumbel (592662) | more than 3 years ago | (#34073288)

The HTTP protocol is very simple, eminently debuggable, plus extensible both ways.

Simple, yes, but I'd say it's a little too simple for its own good. For example, I find it rather ridiculous that in 2010 I still can't reliably continue an interrupted download, as without any form of checksum the browser might just append new data to a file containing garbage and not even know it.
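To be fair, HTTP/1.1 does offer a partial answer: a client can resume with a Range request and tie it to the original entity with If-Range, so a changed file yields a fresh full download instead of appended garbage. What it still lacks is any checksum of the bytes already on disk. A hypothetical resumed request might look like:

    GET /big.iso HTTP/1.1
    Host: mirror.example
    Range: bytes=1048576-
    If-Range: "etag-of-the-partially-downloaded-version"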

Re:Analogy (1)

mikael_j (106439) | more than 3 years ago | (#34073452)

Now *SOAP*, layered on top of HTTP, is truly a Rube Goldberg invention with no redeeming qualities whatsoever.

Yet a lot of the time it's the only thing that makes sense from a business perspective; more elegant solutions often require a lot more work, while the majority of your systems can somewhat easily be made to work with SOAP. I'm not trying to defend it, it's still pretty ugly, but connecting different systems using SOAP is often faster than using something elegant, and the boss doesn't care about "elegant" (I'm sure there are exceptions and I'd love to work for someone like that, but most don't).

Re:Analogy (4, Insightful)

Bigjeff5 (1143585) | more than 3 years ago | (#34073476)

The only reason the implementations in browsers suck is because HTTP is such a hack-job of a protocol (it wasn't originally, but then it was not originally designed to do what it does today). The browsers are left dealing with issues which the HTTP "specification" (which isn't even fully documented, btw) either completely ignores or recommends practices that are completely unrealistic.

One example from the article: the HTTP spec recommends a minimum of 80 KB for request headers (20 cookies per user, 4 KB per cookie). However, most web servers limit request headers to 8 KB (Apache) or 16 KB (IIS) in order to prevent denial-of-service attacks. It is very important that they limit the headers - not doing so leaves them wide open to attack. The HTTP recommendations are completely unreasonable in this regard and fly in the face of good security practice. They are also completely ignored in this and many other cases, because they are so unreasonable.

If the protocol were simple, clear, well designed, and well defined then the browser implementations wouldn't have to suck. It's HTTP that has caused this problem, not the other way around.

It was a very limited protocol that became way too popular, and now we're stuck with a bunch of hacks to get it to work with modern web technology.
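To put numbers on the mismatch described above: on the Apache side these limits are ordinary config directives. A sketch of the relevant httpd.conf lines (the values shown are Apache's documented defaults, not a recommendation):

    # Cap the size of any single request header line (default 8190 bytes)
    LimitRequestFieldSize 8190
    # Cap the number of header fields a request may carry (default 100)
    LimitRequestFields 100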

Re:Analogy (2, Interesting)

postbigbang (761081) | more than 3 years ago | (#34073054)

Part of the problem is historical. Tim B-L wanted to make a WYSIWYG viewer system. Back in the day when it was invented, it was dangerous. Dangerous because it was an independent, open API set that worked wherever a browser worked. That flew in the face of tons of proprietary software. It was a transport-irrelevant protocol set that took the best of different coding schemes and made it work. Like most things invented by a single (or very few) person(s), it was a work of art. But it was state of the art nearly two decades ago, and we've come a lonnnnnnng way.

When http and W3C were hatching, there were still battles about ARCNet, Token Ring, Ethernet, and something called ATM. Now most of the world uses Ethernet and Ethernet-like communications using TCP/IP-- which back then, was barely running across the aforementioned networking protocols.

Lawn mowers, by contrast, were a 2-stroke, then 4-stroke engine with a blade and housing. The need, whacking grass, hasn't changed. By contrast, we now make browsers do all sorts of things never envisioned in the early 1990s. And we're planning stuff not really imagined in 2000. In 2020, browsers may be gone, or they may be *completely* different tools than they are now. Lawnmowers will still only whack grass.

Re:Analogy (1)

Joce640k (829181) | more than 3 years ago | (#34073184)

I would have said it more like a baby stroller which later had to do duty as a lawnmower and a vacuum cleaner while still maintaining full backwards compatibility and increasing capacity up to 200 babies.

Aww shoot... (1)

MacGyver2210 (1053110) | more than 3 years ago | (#34072252)

Darn...and here I thought this was going to be an article on the OSI Network model...

http://en.wikipedia.org/wiki/OSI_model [wikipedia.org]

Re:Aww shoot... (5, Insightful)

timeOday (582209) | more than 3 years ago | (#34072516)

Ah, the OSI model (circa 1978), the polar opposite of cookies - a spec so glorious it's still commonly cited, yet so useless it's a 30-year-old virgin, having never been implemented!

Re:Aww shoot... (1)

Bigjeff5 (1143585) | more than 3 years ago | (#34073690)

That's because it's just a description of the network structure, not a protocol in itself. It's only a specification in the sense that it accurately describes how networks must be laid out. It is in fact implemented everywhere. It has to be, or a network connection does not exist. The specific protocols don't matter; the OSI model doesn't care about them beyond describing which layer they fall into.

Layer 1 is your physical connection - any medium over which data is transmitted (coax, microwave, fiber, radio, etc) falls under this layer.

Layer 2 is the data link layer - your MAC address is part of this layer, along with the switch/router your machine connects to. Also here is PPP, SNAP, ethernet DLC, etc.

Layer 3 is the network layer - ARP, ICMP, IPX, IP, etc all fall under this layer.

Layer 4 is the transport layer - TCP, UDP, SPX, NSPDNA, ADSP, etc all fall under this layer

Layer 5 is the session layer - DAP, NetBEUI, RPC, etc all fall under this layer

Layer 6 is the presentation layer - LPP, XDR, NetBIOS, etc fall under this layer

Layer 7 is the application layer - DHCP, HTTP, NFT, RFA, X Windows, FTP, NTP, NFS, etc all fall under this layer

A few protocols span multiple layers (not many), and some layers are skipped (anything that is sessionless and presentationless doesn't need the fifth and sixth layer, for example), but everything needs up to at least the 4th layer and anything in user land must have a protocol in the 7th layer in order to communicate.

It's a description (like all specs), and it is well used today in networks everywhere.

One of the main problems with HTTP is that it is sessionless - it really needs something between TCP and HTTP to handle sessions, but instead cookies were hacked on by browsers (thank you, Netscape) to give some semblance of sessions to a sessionless protocol. Cookies have since been expanded, further band-aided, and completely mismanaged by the HTTP protocol, leaving us with piss-poor implementations of cookies some 15 years after their creation.
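As a minimal sketch of what "bolting sessions onto a sessionless protocol" amounts to in practice, here is a toy server using only Python's standard library; the cookie name and the in-memory session table are assumptions for illustration, not how any real framework does it:

    import uuid
    from http.cookies import SimpleCookie
    from http.server import BaseHTTPRequestHandler, HTTPServer

    SESSIONS = {}  # session id -> per-visitor state, held in server memory

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            cookie = SimpleCookie(self.headers.get("Cookie", ""))
            sid = cookie["sid"].value if "sid" in cookie else None

            if sid not in SESSIONS:
                # First visit: mint an identifier and hand it to the client.
                sid = uuid.uuid4().hex
                SESSIONS[sid] = {"hits": 0}

            SESSIONS[sid]["hits"] += 1

            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            # The only "state" HTTP itself carries is this header, echoed back
            # by the browser on every later request.
            self.send_header("Set-Cookie", f"sid={sid}; HttpOnly")
            self.end_headers()
            self.wfile.write(f"hits this session: {SESSIONS[sid]['hits']}\n".encode())

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), Handler).serve_forever()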

Cookies should be replaced (1, Interesting)

Anonymous Coward | more than 3 years ago | (#34072270)

The whole cookie system should be replaced by a system based on public key cryptography. Replace domain scope by associating sessions with the public keys of the client and the server. Authenticate each chunk of exchanged data by signing a hash value. Browsers could offer throwaway key pairs for temporary sessions and persistent key pairs for preferences and permanent logins.
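A rough sketch of that idea, using the third-party "cryptography" package (an assumption; any Ed25519 library would do): the browser holds a throwaway key pair, signs each request, and the server treats the public key itself as the session identity.

    # pip install cryptography  (assumed dependency)
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Browser side: a throwaway key pair generated for this session.
    client_key = Ed25519PrivateKey.generate()
    client_pub = client_key.public_key()

    # Every request is signed instead of carrying a bearer cookie.
    request = b"GET /cart HTTP/1.1\r\nHost: shop.example\r\n\r\n"
    signature = client_key.sign(request)

    # Server side: verify the signature; the public key *is* the session.
    try:
        client_pub.verify(signature, request)
    except InvalidSignature:
        print("tampered request")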

Re:Cookies should be replaced (1)

Sique (173459) | more than 3 years ago | (#34072484)

But then you run into problems if sessions are to be detached to different servers, because not a single computer answers your requests, but a large server farm, maybe geographically distributed worldwide.

Re:Cookies should be replaced (2, Insightful)

ultranova (717540) | more than 3 years ago | (#34072854)

But then you run into problems if sessions are to be detached to different servers, because not a single computer answers your requests, but a large server farm, maybe geographically distributed worldwide.

But these servers need to communicate anyway to maintain a "session" in any meaningful sense, so they can as well send the associated crypt key with the rest of the session information.

More restrictive spec could have averted this (5, Interesting)

thasmudyan (460603) | more than 3 years ago | (#34072284)

I still think allowing cookies to span more than one distinct domain was a mistake. If we had avoided that in the beginning, cookie scope implementations would be dead simple and not much functionality would be lost on the server side. Also, JavaScript cookie manipulation is something we could easily lose for the benefit of every user, web developer and server admin. I postulate there are very few legitimate uses for document.cookie

Re:More restrictive spec could have averted this (2, Interesting)

Sique (173459) | more than 3 years ago | (#34072490)

It was created to allow a site to dispatch some functionality within a session to dedicated computers, let's say a catalog server, a shopping cart server and a cashier server.

Re:More restrictive spec could have averted this (1)

thasmudyan (460603) | more than 3 years ago | (#34072560)

It's clear why it was created. I would argue, however, that the same effect can be achieved by other means on the server side and at the same time it would have made client implementations much much easier. And safer.

Re:More restrictive spec could have averted this (1)

Sique (173459) | more than 3 years ago | (#34072576)

Then describe those "other means".

Re:More restrictive spec could have averted this (4, Insightful)

thasmudyan (460603) | more than 3 years ago | (#34072614)

Then describe those "other means".

First, this happens only rarely in practice. Most of the time these types of ID handovers are done by huge commercial sites such as eBay, and even they have cleaned up their URL mess considerably in recent years. Nowadays, big sites tend to have multiple transparent front-end servers that handle incoming connections to a single domain. Using subdomains as a means of differentiating separate machines is not all that common anymore, especially when they exchange lots of data.

But if you really need this functionality, you can just as easily pass a one-time auth token by URL and create another cookie on the second server. There is really no trickery involved here. And if you need to make it very very secure, you can use OAuth, but that would be overkill for the scenarios we're talking about here.
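A sketch of that "one-time auth token by URL" handover, under some stated assumptions: the two servers share a secret, the token is HMAC-signed and short-lived, and (not shown) a real implementation would also record the token server-side so it can only be redeemed once.

    import hashlib, hmac, time

    SHARED_SECRET = b"replace-with-a-real-shared-secret"  # assumption: provisioned on both servers

    def make_handover_token(user_id: str) -> str:
        # Server A: mint a signed, timestamped token to append to the redirect URL.
        ts = str(int(time.time()))
        sig = hmac.new(SHARED_SECRET, f"{user_id}|{ts}".encode(), hashlib.sha256).hexdigest()
        return f"{user_id}|{ts}|{sig}"

    def redeem_handover_token(token: str, max_age: int = 60):
        # Server B: check signature and freshness, then issue its own cookie for its own domain.
        user_id, ts, sig = token.split("|")
        expected = hmac.new(SHARED_SECRET, f"{user_id}|{ts}".encode(), hashlib.sha256).hexdigest()
        if hmac.compare_digest(sig, expected) and time.time() - int(ts) <= max_age:
            return user_id
        return None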

Re:More restrictive spec could have averted this (1)

Skapare (16644) | more than 3 years ago | (#34072668)

This functionality could be achieved with a very simple rule: for a given hostname, the cookie can be accessed by any hostname that is a longer name ending in the hostname it was set for. So if "example.co.uk" sets a cookie, "foobar.example.co.uk" can access it. A website can simply make use of this by directing people to the core web site. Note that even this can be abused. A registrar might set up "co.uk" and set a cookie that every domain in "co.uk" can access.
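The proposed rule boils down to something like the following sketch (the suffix check is one interpretation of "longer than the hostname it was set for"; as noted, it would still let a registrar-level name like co.uk cast a wide net):

    def may_read_cookie(setting_host: str, requesting_host: str) -> bool:
        # A host may read the cookie if it is the setting host itself,
        # or a longer name ending in ".<setting host>".
        return (requesting_host == setting_host
                or requesting_host.endswith("." + setting_host))

    # may_read_cookie("example.co.uk", "foobar.example.co.uk")  -> True
    # may_read_cookie("co.uk", "anything.co.uk")                -> True (the abuse case)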

Re:More restrictive spec could have averted this (1)

TheRaven64 (641858) | more than 3 years ago | (#34072538)

With that restriction, you'd have had to log in to tech.slashdot.org, linux.slashdot.org, slashdot.org, and so on all separately. As it is, you have to log into slashdot.org and {some subdomain}.slashdot.org separately.

A better solution might be to put cookie policies in either a well-known location on the web server (as with robots.txt) or in DNS records (as with SPF). That way, domains like slashdot.org could say 'cookies are shared between all subdomains' while domains like .com would have no entry and so cookies would be on a per-subdomain basis.
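Purely as an illustration of that idea, a hypothetical /cookie-policy.txt (the file name and every directive here are invented, not any real standard) might read:

    # https://slashdot.org/cookie-policy.txt  (hypothetical)
    Share-Subdomains: yes
    Max-Cookies: 20
    Require-Secure: no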

Hubbub [hubbub.at]: privacy-oriented, distributed, open source social network

The world doesn't need more incompatible social networking platforms, it needs one well-defined, well-designed, social networking protocol.

Re:More restrictive spec could have averted this (1)

thasmudyan (460603) | more than 3 years ago | (#34072600)

With that restriction, you'd have had to log in to tech.slashdot.org, linux.slashdot.org, slashdot.org, and so on all separately.

Yeah, there is no technical reason to have those subdomains anyway. (Other than that it looks cool.)

As it is, you have to log into slashdot.org and {some subdomain}.slashdot.org separately.

If you really needed to pass auth tokens around through subdomains, there are other more secure schemes available to do exactly that.

But even if you're a total fan of semantic subdomains, there is a real argument to be made that you should first have to prove to the browser that you actually own the root domain and the subdomain before being allowed to set cookies for them. Though such an extra step would have added complexity, it wouldn't have been anywhere near as ugly as the wildcard/TLD/heuristics mess we have today.

The world doesn't need more incompatible social networking platforms, it needs one well-defined, well-designed, social networking protocol.

I waste my time on what I feel like, thank you. What the world needs is more people who actually do things instead of sniping cheap shots from the sidelines. And my sig is completely irrelevant to this discussion. Feel free to diss me in a private message anytime.

Re:More restrictive spec could have averted this (0)

Anonymous Coward | more than 3 years ago | (#34072626)

I hope this post removes the accidental moderation.

Re:More restrictive spec could have averted this (0)

Anonymous Coward | more than 3 years ago | (#34072640)

I guess it did?

Re:More restrictive spec could have averted this (1)

istartedi (132515) | more than 3 years ago | (#34073760)

What the world needs is more people who actually do things instead of sniping cheap shots from the sidelines

And, if I may add: how do you know that software won't form the base for an open standard some day?

Documents take time and cost money. Free reference implementations are priceless.

Not planned (2, Insightful)

Thyamine (531612) | more than 3 years ago | (#34072330)

I think it can be hard to plan for this far into the future. Look how much the web has changed, and the things we do now with even just HTML and CSS that people back in the beginning probably would never have even considered doing. You build something for your needs and if it works then you are good. Sometimes you don't want to spend time planning it out for the next 5, 10, 20 years because you assume (usually correctly) that what you are writing will be updated long before then and replaced with something else.

Re:Not planned (0)

Anonymous Coward | more than 3 years ago | (#34072428)

Welcome to IE6 corporate webapp land. Because planning for the future is hard.

Sometimes you don't want to spend time planning it out for the next 5, 10, 20 years because you assume (usually correctly) that what you are writing is not documented, and hell if anyone can figure out what the last programmer was thinking; better gtfo long before anyone notices and I'm replaced by someone else.

I'd be happy if we could declare valid cookies (1)

Vekseid (1528215) | more than 3 years ago | (#34072342)

On a domain.

Like the crossdomain.xml or robots.txt files. "Cookies on this site must follow this pattern." Or somesuch.

Most of the rest, I can cope with. Cookie pollution from various forms of injection, not so much.

Re:I'd be happy if we could declare valid cookies (1)

Sique (173459) | more than 3 years ago | (#34072494)

You could actually implement that in your server. Throw away any cookies you are not interested in.

When it comes to cookies - block them all (0)

Anonymous Coward | more than 3 years ago | (#34072356)

When it comes to cookies - block them all just like we all block Flash and JavaScript for security reasons.

Inside a VM running a non-Microsoft OS, I have a browser configured to allow session cookies and JavaScript, but that VM is a LiveCD boot - no hard drive. I use it for very specific reasons like banking. If I want to visit the less-safe parts of the internet, I reboot the VM and have at it, but turn off JavaScript so those scripts don't attempt to hack other machines on my internal network. I really need to set up another network for that type of use.

Thank you ... (0)

Anonymous Coward | more than 3 years ago | (#34072378)

Yep, thank you Captain Hindsight.

Replace, rather than repair (1)

AlecC (512609) | more than 3 years ago | (#34072392)

TFA makes it clear that it is impossible to repair the current cookie system: it is really badly broken, and several previous attempts have failed.

Could we therefore design a complete new replacement system, to be implemented in parallel, and added as part of the HTML5 standard? If it were well specified, so that all implementations were consistent, and had all the features that TFA shows are needed, it should be both easy to use and have serious benefits for the site designer as well as the user. In which case, designers might be inclined to do: if the new system is available, use it; else fall back to cookies.

The important thing is that it must be easy to use the replacement (e.g. no inter-browser weirdness) and the designer must get some payoff in terms of a better site. Of course, the user will also get a payoff - probably bigger - in terms of better security, amongst other things. But, realistically, it is the designer's convenience which will win the day. Once you get the big four (or so) browsers implementing the same standard, and designers regarding that as the preferred option, it has a chance of taking over.

Who can design such a system? Assuming a perfect "supercookie" system is designed, how do we get it into the standard? And what is the game-changing power feature that will bribe site designers to use the supercookie?

Re:Replace, rather than repair (1)

tepples (727027) | more than 3 years ago | (#34072718)

Is HTML5 localStorage anything like what you want?

Re:Replace, rather than repair (1)

AlecC (512609) | more than 3 years ago | (#34073138)

You need more than that, as the comments on TFA explain. You need a limitation on space. You need expiry. You need very carefully defined sharing, so sites can federate. You probably need enforcement of HTTPS. On the other hand, you need very little storage: rigorously controlled UUIDs seem to me to provide all that is needed, i.e. a record of your previous visit to this or a federated site.

Why the hate.... (5, Informative)

Ancient_Hacker (751168) | more than 3 years ago | (#34072476)

Why go hatin' on this particular protocol?

Most of them are just nuckin futs:

* FTP: needs two connections. Commands and responses and data are not synced in any way. No way to get a reliable list of files. No standard file listing format. No way to tell what files need ASCII and which need BIN mode. And probably more fubarskis.

* Telnet: The original handshake protocol is basically foobar-- the handshakes can go on forever. Several RFC patches did not help much. Basically the clients have to kinda cut off negotiations at some point and just guess what the other end can and will do.

* SMTP: You can't send a line with the word "From" as the first word? I'm not a typewriter? WTF?

 

Re:Why the hate.... (0)

Anonymous Coward | more than 3 years ago | (#34072492)

You forgot to add WEP.

Re:Why the hate.... (0)

Anonymous Coward | more than 3 years ago | (#34073490)

"Jesus WEPt".

Re:Why the hate.... (4, Insightful)

Anonymous Coward | more than 3 years ago | (#34072564)

Telnet dates to 1969. FTP dates to 1971. SMTP dates to 1982. HTTP dates to 1991, with the current state of affairs mostly dictated during the late 1990s.

It's excusable that Telnet, FTP and even SMTP have their issues. They were among the very first attempts ever at implementing networking protocols. Of course mistakes were going to be made. That's expected when doing highly complex stuff that has absolutely never been done before.

HTTP has no such excuse. It was initially developed two to three decades after Telnet and FTP. That's 20 to 30 years of mistakes, accumulated knowledge and research that its designers and implementors could have learned from.

And it did learn... (2, Interesting)

Junta (36770) | more than 3 years ago | (#34072922)

It didn't make mistakes that closely resemble those in Telnet, tftp, ftp, or smtp; it made what may be considered completely distinct 'mistakes' in retrospect.

However, if you confine the scope of HTTP use to what it was intended for, it holds up pretty well. It was intended to serve up material that would ultimately manifest on an endpoint as a static document. Considerations for some server-side programmatic content tweaking based on client-given cues were baked in, to give better coordination between client and server and some other flexibility, but it was not intended to be the engine behind highly interactive applications 'rendered' by the server. HTTP was founded at a time when the internet at large wasn't particularly shy about developing new protocols running over TCP or UDP, and I'm sure the architects of HTTP would've presumed such a usage model would have induced a new protocol rather than a mutation of HTTP over time.

Part of the whole 'REST' philosophy is to get back to the vision that HTTP targets. Strictly speaking, a RESTful implementation is supposed to eschew cookies and server-maintained user sessions entirely. Every currently applicable embodiment of data is supposed to have its own *U*RL, and authentication, when required, is HTTP auth. Thanks to JavaScript, a web application can still avoid popping up the inadequate browser-provided login dialog, as well as assemble disparate data on the client side rather than the server side. It doesn't work everywhere, and often even when it does it's kinda mind-warping to get used to, but it does try to use HTTP more in the manner it was architected to be used.

Re:Why the hate.... (2, Insightful)

ultranova (717540) | more than 3 years ago | (#34073250)

HTTP has no such excuse. It was initially developed two to three decades after Telnet and FTP. That's 20 to 30 years of mistakes, accumulated knowledge and research that its designers and implementors could have learned from.

HTTP works perfectly fine for the purpose for which it was made: downloading a text file from a server. How were the developers supposed to know that someone was going to run a shop over it?

HTTP and the Web grew organically. That evolution has given it its own version of wisdom teeth. Unfortunate, but hardly the fault of either Berners-Lee or the microbes in the primordial soup.

Re:Why the hate.... (1)

Carl Drougge (222479) | more than 3 years ago | (#34072594)

SMTP has no such restriction. (Not saying it's good exactly, but it doesn't have that particular problem.)

The unix mbox format has that problem though, but there are plenty of better options for mail storage. And there are no interoperability problems with switching, except with local software.

Re:Why the hate.... (2, Interesting)

panda (10044) | more than 3 years ago | (#34072728)

Interestingly, "mbox" format is another one of those standards without a standard, just like cookies.

It started basically as a storage convention for the mail command. Then other programs started using it. Some of those programs were written to depend on certain information appearing after the "From " on that line, and others didn't.

When I contributed to KMail 2 back in the day, one of my patches was to change what KMail put into the "From " lines of mailbox files, because mutt or pine users (forget which) were complaining that KMail was broken because it wrote "From aaa@aaa" followed by the date with the hour set to midnight. This broke one of the other readers, which expected the sender's email address and an actual timestamp.

Anyway, long story short, the mbox format is plagued by problems similar to, though less serious than, those of cookies. The biggest is that it is actually not a standard, but a convention.

Re:Why the hate.... (1)

Bookwyrm (3535) | more than 3 years ago | (#34072726)

Take a look at Session Initiation Protocol (SIP) RFC 3261 if you really want to see crazy.

Re:Why the hate.... (3, Informative)

hedrick (701605) | more than 3 years ago | (#34072732)

These protocols were designed for a different world:

1) They were experiments with new technology. They had lots of options because no one was sure what would be useful. Newer protocols are simpler because we now know what turned out to be the most useful combination. And the ssh startup isn't that much better than telnet. Do a verbose connection sometime.

2) In those days the world was pretty evenly split between 7-bit ASCII, 8-bit ASCII and EBCDIC, with some even odder stuff thrown in. They naturally wanted to exchange data. These days protocols can assume that the world is all ASCII (or Unicode embedded in ASCII, more or less), full duplex. It's up to the system to convert if it has to. They also didn't have to worry about NAT or firewalls. Everyone sane believed that security was the responsibility of end systems and that firewalls provide only the illusion of security (something that is still true), and that address space issues would be fixed by revving the underlying protocol to have large addresses (which should have been finished 10 years ago).

3) A combination of patents and US export controls prevented using encryption and encryption-based signing right at the point where the key protocols were being designed. The US has ultimately paid a very high price for its patent and export control policies. When you're designing an international network, you can't use protocols that depend upon technologies with the restrictions we had on encryption at that time. It's not like protocol designers didn't realize the problem. There were requirements that all protocols had to implement encryption. But none of them actually did, because no one could come up with approaches that would work in the open-source, international environment of the Internet design process. So the base protocols don't include any authentication. That is bolted on at the application layer, and to this day the only really interoperable approach is passwords in the clear. The one major exception is SSL, and the SSL certificate process is broken*. Fortunately, these days passwords in the clear are normally on top of either SSL or SSH. We're only now starting to secure DNS, and we haven't even started SMTP.

---------------

*How is it broken? Let me count the ways. To start, there are enough sleazy certificate vendors that you don't get any real trust from the scheme. But setting up enterprise cert management is clumsy enough that few people really do it, hence client certs aren't used very often. And because of the combination of cost and clumsiness of issuing real certs, there are so many self-signed certs around that users are used to clicking through cert warnings anyway. Yuck.

Re:Why the hate.... (1)

metamatic (202216) | more than 3 years ago | (#34072756)

Don't forget the horrible hacks on SMTP for lines that consist of just a period "."

Also, if you want to see a brand new bad protocol, look at XMPP.

I think the all-time worst protocol I've seen is SyncML: vCards wrapped in XML [sun.com], with embedded plaintext passwords.
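The "just a period" hack mentioned above is SMTP's end-of-data marker, so clients have to dot-stuff message bodies; a rough transcript of the transparency rule looks like this (addresses and payload invented):

    C: DATA
    S: 354 End data with <CRLF>.<CRLF>
    C: Subject: dot-stuffing sketch
    C:
    C: ..this body line really starts with one dot, so it is sent doubled
    C: .
    S: 250 OK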

Re:Why the hate.... (2, Insightful)

arth1 (260657) | more than 3 years ago | (#34072856)

* SMTP: You can't send a line with the word "From" as the first word? I'm not a typewriter? WTF?

There's nothing in the SMTP protocol stopping you from using 'From ' at the start of a line. The flaw is with the mbox storage format, in improper implementations[*], and in mail clients that compensate for that without even giving the user a choice. Blaming that on SMTP is plain wrong.

[*]: RFC4155 gives some advice on this, and calls the culprits "overly liberal parsers".
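For readers who haven't hit it: the mbox convention uses a bare "From " at the start of a line as the message separator, so body lines that begin with "From " get escaped by the delivery agent, roughly like this (a sketch of the common mboxo-style quoting, not a formal standard; the address and date are made up):

    From alice@example.com Mon Nov  1 12:00:00 2010
    Subject: quoting sketch

    >From here on, any body line that began with "From " has had a ">"
    prepended, so the next real "From " line still marks a new message.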

Re:Why the hate.... (1)

Blakey Rat (99501) | more than 3 years ago | (#34073594)

There's also OAuth and OpenID, which are particularly egregious because they're so new. Who designs a protocol that *requires* Internet access *and* a web browser to work? WTF.

The way the web works in general is bizarre (4, Insightful)

vadim_t (324782) | more than 3 years ago | (#34072586)

Let's see:

1. IP is a stateless protocol, that's inconvenient for some things, so
2. We build TCP on it to make it stateful and bidirectional.
3. On top of TCP, we build HTTP, which is stateless and unidirectional.
4. But whoops, that's inconvenient. We graft state back into it with cookies. Still unidirectional though.
5. The unidirectional part sucks, so various hacks are added to make it sorta bidirectional like autorefresh, culminating with AJAX.

Who knows what else we'll end up adding to this pile.

Re:The way the web works in general is bizarre (1, Informative)

Anonymous Coward | more than 3 years ago | (#34072710)

culminating with AJAX.

Oh no, not at all. There's WebSockets and Server-Sent Events in the pipeline now.
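For the curious, Server-Sent Events is about as simple as the rest of the pile: the server keeps one response open and streams "data:" lines over it, roughly like this (the payloads are hypothetical):

    HTTP/1.1 200 OK
    Content-Type: text/event-stream

    data: first message

    data: a later message, pushed on the same open response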

Not completely nonsensical... (5, Informative)

Junta (36770) | more than 3 years ago | (#34072818)

1. Sure
2. stateful, stream-oriented, *and* reliable
3. HTTP was designed on a stateless datagram model, but wanted reliability, so TCP got chosen for lack of a better option. SCTP, had it existed, might have been a better model, but at the time the stateful stream aspect of TCP was forgiven since it could largely be ignored, while reliability over UDP was not so trivial.
4. More critically, the cookie mechanism strives to add stateful aspects that cross connections. This is something infeasible with TCP. Simplest example, HTTP 'state' strives to survive events like client IP changes, server failover, client sleeping for a few hours, or just generally allowing the client to disconnect and reduce server load. TCP state can survive none of those.
5. Indeed, at least AJAX enables somewhat sane masking of this, but the only-one-request-per-response character of the protocol means a lot of things cannot be done efficiently. If HTTP had allowed arbitrary server-initiated responses for the duration of a persistent HTTP connection, that would have greatly alleviated the inefficiencies that AJAX methods strive to mask.

Re:Not completely nonsensical... (0)

Anonymous Coward | more than 3 years ago | (#34073838)

HTTP: Client-server X11 sessions done ... right ...

At least text reflows on the client side!

Re:The way the web works in general is bizarre (1)

Sarten-X (1102295) | more than 3 years ago | (#34073456)

And that's why I don't do web development. Almost everybody's got a back end, and that's where I stay.

Let me get this straight... (-1, Offtopic)

froggymana (1896008) | more than 3 years ago | (#34072660)

So when it's up against Flash, HTML5 is the best thing in the world, but when it's just HTML by itself, it's a terrible mess of kludges that doesn't work very well?

Why can't we just start over with an entirely new web standard designed in a more efficient manner? HTML5 is going to take a lot of work to fully implement and to get rid of Flash. Or why don't they do a serious overhaul of HTML, removing a lot of the security risks to make it as safe as it could be while still keeping most of the same syntax?

Re:Let me get this straight... (1)

John Hasler (414242) | more than 3 years ago | (#34072738)

Why can't we just start over with an entirely new web standard that would be designed in a more efficient manner?

And let's replace IPv4 while we're at it!

Re:Let me get this straight... (1)

frank378 (736832) | more than 3 years ago | (#34072870)

Why can't we just start over with an entirely new web standard that would be designed in a more efficient manner?

Yes, why don't we? The layered nature of the protocol stack is meant to allow for multiple versions and revisions of various and sundry functionality and interaction between layers. All the bright outspoken /.'ers here can go off and build some newer, better layers, or even a whole new stack! No more cookies needed, huzzah!

Re:Let me get this straight... (1)

am 2k (217885) | more than 3 years ago | (#34073262)

Huh? The article is talking about HTTP, not HTML. Those two are not related in any way; Flash is also sent via HTTP.

Re:Let me get this straight... (1)

Bigjeff5 (1143585) | more than 3 years ago | (#34073776)

We're talking about HTTP, not HTML. Just because they are often used together doesn't mean they are the same thing. In fact, they couldn't be more different; one is a communications protocol, the other is a markup language - I hope to god you can figure out which is which from that much.

But HTML is a terrible mess of kludges that doesn't work very well, too. It's just that most people on Slashdot consider it to be superior to Flash, even though it lacks a lot of Flash's basic functionality, and lacks all of the nice development tools that Flash has. Most of this stems from security paranoia (legitimate, but overblown in 99% of cases) and its tendency to crash (more significant issue, IMO, and also legitimate - also the cause of much of the security paranoia).

ohhh yeah (1, Funny)

Anonymous Coward | more than 3 years ago | (#34072696)

A session is forever

i love your design

alternatives ? (1)

Tom (822) | more than 3 years ago | (#34072752)

Most of the crap we surround ourselves with (cookies, MIME, Windows and Office, etc.) is still there because it is there and the alternatives aren't.

What is the alternative to using cookies, really? Almost every framework for web-based development has session support that largely relies on cookies. Give me something more secure that works as easily and I will be using it right away.

You think HTTP is bad? (1)

mveloso (325617) | more than 3 years ago | (#34073436)

SNMP is a nightmare. There was a doc out there that used SNMP as an exemplar of "how not to write a protocol."

It's easy to forget, but these protocols were designed back in the day when there wasn't a lot of RAM, bandwidth, or CPU.

Most of the problems with everything have been well-discussed. You can dig into the past to see, but interoperability with existing implementations is always the blocking factor.

Heck, everyone knew the problems with ActiveX when it was announced...but that didn't stop MS. Same with cookies. If you want to see excitement, you can mine all the old protocol-level vulnerabilities just by plowing through usenet archives.
