
PHK: HTTP 2.0 Should Be Scrapped

Unknown Lamer posted about 3 months ago | from the just-give-up dept.

Networking

Via the HTTP working group list comes a post from Poul-Henning Kamp proposing that HTTP 2.0 (as it exists now) never be released, because the plan of adopting Google's SPDY protocol with minor changes has revealed flaws that SPDY/HTTP 2.0 will not address. Quoting: "The WG took the prototype SPDY was, before even completing its previous assignment, and wasted a lot of time and effort trying to goldplate over the warts and mistakes in it. And rather than 'ohh, we get HTTP/2.0 almost for free', we found out that there are numerous hard problems that SPDY doesn't even get close to solving, and that we will need to make some simplifications in the evolved HTTP concept if we ever want to solve them. ... Wouldn't we get a better result from taking a much deeper look at the current cryptographic and privacy situation, rather than publish a protocol with a cryptographic band-aid which doesn't solve the problems and gets in the way in many applications? ... Isn't publishing HTTP/2.0 as a 'place-holder' just a waste of everybody's time, and a needless code churn, leading to increased risk of security exposures and failure for no significant gains?"



Encryption (4, Insightful)

neokushan (932374) | about 3 months ago | (#47096009)

I hope that whatever HTTP2.0 ends up being enforces encryption by default.

Re:Encryption (5, Interesting)

Anonymous Coward | about 3 months ago | (#47096035)

No, you really don't. Encryption is good for Facebook, but enforcing it for your Internet-of-Everything lightbulb or temperature probe in the basement gains nothing other than more complex bugs and lower battery life.

Re:Encryption (5, Funny)

Anonymous Coward | about 3 months ago | (#47096097)

Nice try NSA.

Re:Encryption (3, Informative)

gweihir (88907) | about 3 months ago | (#47096201)

Nonsense. Enforcing encryption does not make things more secure unless that encryption, and the authentication going with it, are flawless. That is very unlikely to be the case against an attacker like the NSA.

Re:Encryption (4, Insightful)

AuMatar (183847) | about 3 months ago | (#47096383)

It doesn't need to be perfect. If cracking it still takes some time, it lowers their resources. And it can still be unbreakable for attackers with fewer resources at their disposal.

Re:Encryption (-1)

Anonymous Coward | about 3 months ago | (#47096441)

It doesn't need to be perfect. If cracking it still takes some time, it lowers their resources. And it can still be unbreakable for attackers with fewer resources at their disposal.

You have NO IDEA what you are talking about. Please, stop posting.

Re:Encryption (4, Insightful)

gweihir (88907) | about 3 months ago | (#47096499)

Unfortunately, breaking the crypto directly is _not_ what they are going to do. Protocol flaws usually allow very low-cost attacks; it just takes some brain-power to figure them out. The NSA has a lot of that available.

Re:Encryption (1)

Anonymous Coward | about 3 months ago | (#47096901)

They are absolutely breaking crypto directly. Snowden's leaks evidenced a strongly compartmentalized and highly secret department in the NSA which breaks public key cryptography. By compartmentalized, I don't mean the software ACL crapware that they use for most stuff, and by secret I mean almost nobody knows precisely what their capabilities are. It still sounds immensely costly for them, and they only use it on high-value targets. (Which could still include, however, large companies which don't rotate keys often enough.)

The NSA uses all the tools at its disposal. And, frankly, it's the low-tech stuff that permits wide-field surveillance that is troublesome to me. If the NSA can crack 1024-bit RSA keys then good for them, as long as it's costly enough that they can't be indiscriminate and lazy about it, and it isn't secretly used to prosecute people.

Re:Encryption (3, Insightful)

fractoid (1076465) | about 3 months ago | (#47097047)

Even lower cost is simply subpoenaing one end of the transaction. There's no point bothering with a cryptographic or man-in-the-middle attack when you control the man-at-the-other-end.

Re:Encryption (1)

Anonymous Coward | about 3 months ago | (#47096617)

Are you going to pay for SSL certificates and apply them to every appliance and light bulb in your house? Even if you get free one-year certificates through somewhere like StartSSL, you still have to issue and install all of those certificates on all of those devices.

Run your own CA (1)

tepples (727027) | about 3 months ago | (#47097129)

Or just run your own CA and install its root certificate on the devices on your private network.
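
A minimal sketch of the client side of that setup, assuming you have already generated a root CA certificate and signed each device's certificate with it (the CA file name and the device hostname below are placeholders, not anything from this thread):

    import ssl
    import urllib.request

    # Trust only the private root CA that signed the devices' certificates.
    ctx = ssl.create_default_context(cafile="home-root-ca.pem")

    # Hypothetical device on the local network presenting a cert signed by that CA.
    with urllib.request.urlopen("https://lightbulb.lan/status", context=ctx) as resp:
        print(resp.status, resp.read())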

Re:Run your own CA (2, Interesting)

gmhowell (26755) | about 3 months ago | (#47097299)

Reasonable idea, but I suspect GE, Samsung, Whirlpool, and all the other manufacturers of Internet connected widgets will force you to buy a certificate from their app store. Hacking your light bulb to install your own certificate will be a federal crime, punishable by PMITA prison or worse.

Re:Encryption (1)

complete loony (663508) | about 3 months ago | (#47096931)

Sure, you need to actively modify network packets rather than just monitor them. But without some form of authentication, man-in-the-middle attacks are trivial.

Re:Encryption (1)

msauve (701917) | about 3 months ago | (#47096385)

"more secure" != "perfectly secure." I don't think the NSA is interested in screwing with your mood lighting, but script kiddies might be.

Re:Encryption (1)

gweihir (88907) | about 3 months ago | (#47096509)

On the minus side, your mood lighting may be a lot more expensive if it suddenly needs a significant CPU power upgrade and far more complex software.

Re:Encryption (5, Insightful)

jmv (93421) | about 3 months ago | (#47096403)

Nothing is NSA-proof, therefore we should just scrap TLS and transmit everything in plaintext, right? The whole point here is not to make the system undefeatable, just to increase the cost of breaking it, just like your door lock isn't perfect, but still useful. If HTTP was always encrypted, even with no authentication, it would require the NSA to man-in-the-middle every single connection if it wants to keep its pervasive monitoring. This would not only make the cost skyrocket, but also make it trivial to detect.

Re:Encryption (1, Insightful)

gweihir (88907) | about 3 months ago | (#47096529)

You are confused. The modern crypto we have _is_ NSA-proof (the NSA made sure of that). The protocols using it are a very different matter. These have the unfortunate property that they are either secure or cheap to attack (protocols do not have a lot of state and hence cannot put up a lot of resistance to brute-forcing). Hence getting the protocols right, and more importantly designing them so that they have several effective layers of security and can be fixed if something is wrong, is critical. Unfortunately, that involves making very conservative choices, while most IT folks are starry-eyed suckers for the new and flashy.

Re:Encryption (0)

Anonymous Coward | about 3 months ago | (#47096883)

Sounds like a plan: cost-based encryption. Encryption that is not particularly state of the art, but just so multi-layered and verbose as to be a real pain to decipher.

Or better yet, a protocol that is polymorphic by nature, so they wouldn't even get to focus on 1 protocol, but there would be many.

Re:Encryption (1, Interesting)

WaffleMonster (969671) | about 3 months ago | (#47097147)

Nothing is NSA-proof,

NSA proof is possible unless NSA includes goons armed with $5 wrenches.

The whole point here is not to make the system undefeatable, just to increase the cost of breaking it, just like your door lock isn't perfect, but still useful.

If you can't view traffic, then the traffic is safe from you, and therefore it is not necessary to encrypt it.

If you can view traffic, then you have everything necessary to own that traffic... a TCP initial sequence number and a fast pipe are all you need... nobody is doing the filtering necessary to prevent source address spoofing, so these attacks are trivial.

If your data is going through a "great firewall", CGN (everyone using a cellular network) or other bump in the wire, there is no reason not to expect opportunistic encryption to be MITMed in real time and in bulk.

it would require the NSA to man-in-the-middle every single connection if it wants to keep its pervasive monitoring.

So everyone in the US is safe from NSA bulk collection of the websites they visit, except for bulk collection of IP-layer headers, certificate identities sent in the clear during the TLS handshake, and the zillions of US corporations engaged in cross-site stalking compelled to hand over "any tangible thing".

What is the opportunity cost of an encryption solution which solves nothing? What resources and demand are no longer available to be applied to a solution with teeth?

How do you explain to the user that their data might be encrypted yet still not protected, because the connection is not authenticated? I can see the eyes rolling and the roar of millions of swooshes... All people know is "encrypted", and this means "safe"... I see nothing good coming from the introduction of this technical doublespeak.

Does HTTP 2.0 implement any latching or fingerprinting that could be useful to retroactively detect compromise of security? Do they even try?

Re:Encryption (1)

jmv (93421) | about 3 months ago | (#47097483)

How do you explain to the user that their data might be encrypted yet still not protected, because the connection is not authenticated?

I'm talking about http here, not https. The idea is that even with http -- where you don't pretend that anything is secure -- you still encrypt everything. It's far from perfect, but it beats plaintext because the attacker can't hide anymore -- it has to be an active attack. I don't pretend to know all about the pros and cons of http 2, but plaintext has to die.

Re:Encryption (1)

Anonymous Coward | about 3 months ago | (#47096663)

> Enforcing encryption does not make things more secure unless that encryption, and the authentication going with it, are flawless

What a load of bullshit. Even without strong authentication encryption with ephemeral keying _absolutely prevents_ passive attacks. This means that there can be no undetectable pervasive surveillance. What would motivate you to suggest that we shouldn't take that improvement?

Of course, without strong auth it's not "secure", but an encrypt only link shouldn't and wouldn't be advertised to the user as secure. It would just look like http. To get the "secure" behavior you need to use https, which has mandatory authentication.

Re:Encryption (0)

TechyImmigrant (175943) | about 3 months ago | (#47097061)

>To get the "secure" behavior you need to use https, which has mandatory authentication.

You are wrong on many levels. Please understand cryptography and secure protocol design before you post.

Re:Encryption (1)

tepples (727027) | about 3 months ago | (#47097135)

I think what the AC was trying to say is that the "http" scheme would use TLS encryption with no authentication of the other party, while the "https" scheme would use TLS encryption with the present PKI for authentication. What makes encryption with no authentication worse than plaintext?
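
As a rough illustration of the distinction (this is plain TLS in Python, not the actual HTTP/2 negotiation; the host is a placeholder): an "encrypt but don't authenticate" client context next to the default authenticated one.

    import socket
    import ssl

    host = "example.com"

    # Opportunistic-style context: the channel is encrypted, but the peer's
    # identity is never checked, so passive sniffing fails but an active
    # man in the middle does not.
    unauthenticated = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    unauthenticated.check_hostname = False
    unauthenticated.verify_mode = ssl.CERT_NONE

    # "https"-style context: encryption plus PKI authentication of the server.
    authenticated = ssl.create_default_context()

    for name, ctx in (("unauthenticated", unauthenticated),
                      ("authenticated", authenticated)):
        with ctx.wrap_socket(socket.create_connection((host, 443)),
                             server_hostname=host) as tls:
            print(name, tls.version())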

Re:Encryption (1)

Anonymous Coward | about 3 months ago | (#47097079)

Enforcing encryption does not make things more secure unless that encryption, and the authentication going with it, are flawless.

Wrong. Trivial thought experiment: currently a very small percentage of internet traffic is encrypted. If suddenly all internet traffic were encrypted, the CPU time to decrypt it all would skyrocket. In fact, given an encryption scheme that kept pace with client/server capability and Moore's law, one can imagine keys and iteration counts growing further over time. Even presuming the keys were obtained, the actual CPU/GPU cost to decrypt it all would tend to grow on top of the already significant growth in traffic.

Of course all the above is fairly moot, given that the NSA can't programmatically process even the very limited set of data that relates to suspected terrorists (presuming they could even reduce their mountain of data to just that) because there's too much noise relative to signal. Worse, odds are good that the signal-to-noise ratio is so bad that even expert analysts can't do the job. That's the real lie of the intelligence community: that there is anything approaching the clarity needed to actually catch terrorist suspects pre-attack. Hell, they do a pretty horrible job of finding people post-attack -- unless you think that the long delay in bin Laden's "capture" was some sort of clever strategy.

Now, if you want to argue that enforcing encryption doesn't make people safer, that's another point. But going that route would have to begin with people feeling real outrage at what the NSA, CIA, FBI, Congress, and the President have done. It makes one really consider who the enemy is.

Re:Encryption (1)

AK Marc (707885) | about 3 months ago | (#47096451)

I control my home with a phone app. No encryption needed, I'm either wired or going through an encrypted wireless (yeah, not NSA secure, but more than "good enough"). And none of my home stuff talks HTTP. It's all proprietary. If you are worried about bugs and battery life, you wouldn't use HTTP either. HTTPS is not any different.

Re:Encryption (0)

Anonymous Coward | about 3 months ago | (#47096505)

I control my home with a phone app. No encryption needed, I'm either wired or going through an encrypted wireless

So Genius ... how's that phone app transmit commands to the system controlling your home? Oh yeah. Through the internet. It's as if security still matters there huh?

Re:Encryption (1, Troll)

AK Marc (707885) | about 3 months ago | (#47096763)

So Genius ... how's that phone app transmit commands to the system controlling your home? Oh yeah. Through the internet. It's as if security still matters there huh?

Not through HTTP. Oh, never mind. You are too dumb to know the difference between HTTP and The Internet.

HTTP-only proxy (0)

tepples (727027) | about 3 months ago | (#47097143)

If you're behind a firewall that shunts all traffic to a proxy that allows only outgoing HTTP connections, then HTTP is The Internet.

Re:HTTP-only proxy (1)

AK Marc (707885) | about 3 months ago | (#47097179)

I'm not. Are you?

Re:HTTP-only proxy (1)

tepples (727027) | about 3 months ago | (#47097229)

If you're behind a firewall that shunts all traffic to a proxy that allows only outgoing HTTP connections

I'm not. Are you?

At one time, I was. I imagine many still are, especially in workplaces and on IPv4 address-poor continents.

Re:HTTP-only proxy (0)

AK Marc (707885) | about 3 months ago | (#47097355)

So you aren't on one. I'm not on one. But you find it relevant? NAT doesn't require a proxy, and I've only worked at one place that used a proxy. Aside from that one place, I've only seen them commonly used at schools, as it's common for people who misunderstand CIPA to think one is required. CIPA allows all other protocols, though, so some places run an HTTP proxy with all other ports wide open: all HTTP goes through the proxy, but nothing else is filtered.

But the real reason the false equivalence is made is that it's anti-American. The Americans invented the Internet and TCP/IP, but CERN invented the World Wide Web, which gets treated as if it *IS* the Internet (when it's really just one of many protocols that run over the Internet). Thus, the more "important" HTTP is, the less important the USA is.

Re:Encryption (0)

Anonymous Coward | about 3 months ago | (#47097117)

Actually, you're wrong. By making everything encrypted, no one can simply focus on the encrypted parts that you're sending.

Re:Encryption (0)

Anonymous Coward | about 3 months ago | (#47097205)

> gains nothing
It ensures nobody except the recipient of my communication can listen to it, which is far from nothing.

> more complex bugs
To a programmer, this statement sounds like "reversing the polarity" to a physicist. Namely, nonsensical.

> and lower battery life
Encrypting all your regular web traffic would have negligible effects on power drain. Modern computers are so fast that the additional computing power would barely register for a few hundred MB of data per day. Large volume sources such as video streaming can disable encryption where it's useful to do so.

Re:Encryption (0)

Anonymous Coward | about 3 months ago | (#47097373)

Fuck you're an idiot.

Why would my lightbulb or temperature probe have HTTP?

No, it wouldn't, that's fucking stupid.

And it DOES need encryption, particularly if it operates wirelessly which presumably it would. Otherwise some prankster could mess with my lighting or fuck up my homebrew.

And encryption results in virtually no added power consumption as there are literally hundreds of embedded uCs with hardware encryption.

Re:Encryption (4, Informative)

jmv (93421) | about 3 months ago | (#47096119)

Last I heard, it still supports unencrypted connections, but only if both the client and server ask for it. If either one asks for encryption, then the connection is encrypted, even if there's no authentication (i.e. no certificate). With no certificate it's still possible to pull off an active (MitM) attack, but that is much harder to do at a large scale without anyone noticing (unlike passive collection, where you can just record all the data you see).

Re:Encryption (4, Informative)

abhi_beckert (785219) | about 3 months ago | (#47096275)

Last I heard, it still supports unencrypted connections, but only if both the client and server ask for it. If either one asks for encryption, then the connection is encrypted, even if there's no authentication (i.e. no certificate). With no certificate it's still possible to pull off an active (MitM) attack, but that is much harder to do at a large scale without anyone noticing (unlike passive collection, where you can just record all the data you see).

A server cannot ask for encryption.

Unless the client establishes a secure connection in the first place, the server has no way of knowing if the client is actually who they claim to be. If the client attempts to establish a secure connection and the server responds with "I can't give you a secure connection" then the client needs to assume there is a man in the middle attack going on and refuse to communicate with the server.

There is no way around it, security needs to be initiated on the client and the server cannot be allowed to refuse a secure connection.

HSTS is a partial solution for this problem (http://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security)
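
A small illustration of what HSTS looks like in practice (the URL is a placeholder): the site advertises the policy in a response header over HTTPS, and a conforming browser then refuses plaintext connections to that host for max-age seconds. You can inspect the header with nothing but the standard library.

    from urllib.request import urlopen

    # Read the Strict-Transport-Security header, if the site sends one.
    with urlopen("https://www.example.com/") as resp:
        policy = resp.headers.get("Strict-Transport-Security")
        print(policy or "no HSTS policy advertised")
        # A typical value looks like: max-age=31536000; includeSubDomains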

Re:Encryption (5, Informative)

jmv (93421) | about 3 months ago | (#47096331)

A server cannot ask for encryption.

AFAIK, HTTP2 allows the server to encrypt even if the client didn't want to.

Unless the client establishes a secure connection in the first place, the server has no way of knowing if the client is actually who they claim to be. If the client attempts to establish a secure connection and the server responds with "I can't give you a secure connection" then the client needs to assume there is a man in the middle attack going on and refuse to communicate with the server.

If you're able to modify packets in transit (i.e. Man in the Middle), then you can also just decrypt with your key and re-encrypt with the client key. Without authentication, there's just nothing that's going to prevent a MitM attack. Despite that, being vulnerable to MitM is much better than being vulnerable to any sort of passive listening.

Re:Encryption (1)

viperidaenz (2515578) | about 3 months ago | (#47096349)

You're confusing encryption with authentication.

Re:Encryption (1)

AK Marc (707885) | about 3 months ago | (#47096587)

A server cannot ask for encryption.

So a request to HTTP://www.example.com can't be redirected to HTTPS://www.example.com? Because I would consider that the server asking for encryption.

There is no way around it, security needs to be initiated on the client and the server cannot be allowed to refuse a secure connection.

Great, so when we go to all secure, we don't need any more 500 series error messages, as the servers aren't "allowed" to refuse connections.
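
For what it's worth, the redirect described above is easy to sketch: a plaintext listener whose only job is to bounce every request to the HTTPS origin (the hostname and port are placeholders). Note that the redirect itself still travels over plaintext, so it can only invite encryption; it cannot enforce it against an active attacker, which is the gap HSTS tries to close.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedirectToHTTPS(BaseHTTPRequestHandler):
        def do_GET(self):
            # Permanent redirect to the same path on the HTTPS origin.
            self.send_response(301)
            self.send_header("Location", "https://www.example.com" + self.path)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), RedirectToHTTPS).serve_forever()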

Re:Encryption (2)

Blaskowicz (634489) | about 3 months ago | (#47096187)

I fear that would train users to mass click through certificate warnings, or even to install shady "helpful" software that will "manage" the problem for them.

Re:Encryption (1)

fuzzyfuzzyfungus (1223518) | about 3 months ago | (#47096431)

They do. Unfortunately, the 'shady software' is "basically all client software that does SSL/TLS and is designed with end users in mind". Because nobody really wants to bite the bullet and scare the users, all major OSes and browsers 'trust' a horrifying number of dubious CAs unless manually configured otherwise.

Given the (lack of) alternatives, it's hard to blame them for doing that rather than being abandoned by users; but it's pretty much the state of play right now.

Re:Encryption (3, Insightful)

gweihir (88907) | about 3 months ago | (#47096193)

That is stupid. First, encryption essentially belongs on a different layer, which means combining it with HTTP is always going to be a compromise that will not work out well in quite a number of situations. Hence at the very least you should be able to leave it out, and either do without or use a different mechanism on a different layer. SSL (well, actually TLS) would have worked if it had solved the trust-in-certificates problem, which it spectacularly failed at due to naive trust models, that I now suspect were actively encouraged by various Three Letter Agencies at that time. In fact, if you control the certificates on both ends, TLS works pretty well and does not actually need a replacement.

That said, putting in security for specific, limited (but widely used) scenarios can be beneficial, but always remember that it makes the protocol less flexible, as it needs to do more things right. And there has to be a supporting infrastructure that actually works in establishing identity and trust. In order for the security to have any benefit at all, it must be done right, must be free from fundamental flaws and must give the assurances it was designed to give. That is exceedingly hard to do and very unlikely to be done right on the first attempt.

Re:Encryption (5, Insightful)

swillden (191260) | about 3 months ago | (#47096551)

In order for the security to have any benefit at all, it must be done right, must be free from fundamental flaws and must give the assurances it was designed to give. That is exceedingly hard to do and very unlikely to be done right on the first attempt.

SPDY's security component is TLS. SPDY is essentially just some minor restrictions (not changes) in the TLS negotiation protocol, plus a sophisticated HTTP acceleration protocol tunneled inside. So this really isn't a "first attempt", by any means. Not to mention the fact that Google has been using SPDY extensively for years now and has a great deal of real-world experience with it. Your argument might hold water when applied to QUIC, but not so much to SPDY.

It really helps to read the thread and get a sense of what the actual dispute is about. In a nutshell, Kamp is bothered less that HTTP/2 is complex than that it doesn't achieve enough. In particular, it doesn't address problems that HTTP/1.1 has with being used as a large file (multi-GB) transfer protocol, and it doesn't eliminate cookies. Not many committee members seem to agree that these are important problems for HTTP/2, though most do agree that it would be nice some day to address those issues, in some standard.

What many do agree on is that there is some dangerous complexity in one part of the proposal, a header compression algorithm called HPACK. The reason for using HPACK is the CRIME attack, which exploits Gzip compression of headers to deduce cookies and other sensitive header values. It does this even though the compressed data is encrypted. HPACK is designed to be resistant to this sort of attack, but it's complex. Several committee members are arguing that it would be reasonable to proceed without header compression at all, thus greatly simplifying the proposal. Others are arguing that they can specify HPACK, but make header compression negotiable and allow clients and servers to choose to use nothing rather than HPACK, if they prefer (or something better when it comes along).
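
For readers unfamiliar with CRIME, here is a rough sketch of the length side channel it exploits (the cookie value is invented): when attacker-influenced text is compressed in the same context as a secret header, guesses that match the secret tend to compress slightly better, and comparing many such lengths leaks the value a piece at a time. HPACK exists to avoid this property.

    import zlib

    secret_headers = b"cookie: session=4f3c9a7d21\r\n"

    def observed_length(guess: bytes) -> int:
        # Attacker-controlled text (e.g. a reflected URL) compressed together
        # with the victim's headers, as naive Gzip header compression would.
        attacker_part = b"cookie: session=" + guess + b"\r\n"
        return len(zlib.compress(secret_headers + attacker_part, 9))

    # A matching prefix usually yields a slightly shorter compressed size than
    # a wrong one; real attacks average many samples since the gap can be tiny.
    for guess in (b"4f3c9a", b"zzzzzz"):
        print(guess, observed_length(guess))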

Bottom line: What we have here is one committee member who has been annoyed that his wishes to deeply rethink HTTP have been ignored. He is therefore latching onto a real issue that the rest of the committee is grappling with and using it to argue that they should just throw the whole thing out and get to work on what he wanted to do. And he made his arguments with enough flair and eloquence to get attention beyond the committee. All in all, just normal standards committee politics which has (abnormally) escaped the committee mailing list and made it to the front page of /.

Re:Encryption (1)

complete loony (663508) | about 3 months ago | (#47096977)

There's a simpler solution. Keep using GZip compression, but expose the sensitivity of strings to the compression layer. You could ensure that sensitive strings are transmitted as separate deflate blocks without any compression at all, and ignored for duplicate string elimination. All HTTP/2 would need to specify is the ordering of these values so that the compression can still be reasonably efficient for everything else.

Re:Encryption (0)

Splab (574204) | about 3 months ago | (#47097195)

Lately PHK has become more and more derailed. If you ever follow his local Danish blogs, he has gone totally off the rails, claiming all sorts of weird things, from major conspiracies to some far-fetched claims against the EU.

Re:Encryption (0)

Anonymous Coward | about 3 months ago | (#47096221)

I regret IPv6 isn't built around the Tor concept.

Re:Encryption (1)

Aethedor (973725) | about 3 months ago | (#47097425)

http://regmedia.co.uk/2014/05/... [regmedia.co.uk] says it all. Thinking that encrypting everything makes the internet more secure is really naive.

Re:Encryption (0)

Anonymous Coward | about 3 months ago | (#47097449)

Why would you expect useful encryption from a committee? Remember how well it worked with IPv6?

His previous comments are much better (4, Interesting)

Anonymous Coward | about 3 months ago | (#47096015)

If you are a programmer and have a Wikipedia page (-1, Troll)

omtinez (3343547) | about 3 months ago | (#47096031)

...then you are already too self-important to be taken seriously. Please keep this rant from a nobody away from the front page. All he did was cry about how SPDY sucks and ask everyone to admit defeat without proposing a good alternative, basically telling everyone involved to throw away all the work done up to this point and start from scratch (again, without really proposing anything specific, just "throw away the prototype!"). Thanks, but no thanks.

Re:If you are a programmer and have a Wikipedia pa (3, Insightful)

Anonymous Coward | about 3 months ago | (#47096107)

Dang, I'm sad Linus Torvalds, John Carmack, et al. are "too self-important" because someone else made a Wikipedia page about them. Or maybe programming, especially concerning the next standard that most of the internet would ideally run on, is too important for fucking hipsters to get involved in.

Re:If you are a programmer and have a Wikipedia pa (3, Insightful)

l0ungeb0y (442022) | about 3 months ago | (#47096165)

And why shouldn't we have a moratorium and review, ESPECIALLY in regard to what has come to light about how fucked the internet is in just the last year?

Why proceed blindly with a protocol that comes from Google, who gladly works hand in hand with the NSA and is a corporation whose core focus is to track and monitor every single person and thing online?

What? Just proceed with something that addresses NONE of the present mass surveillance issues, and possibly could make us less secure than we are now just because we don't have a fall back lined up?

Or how about we take this time to step back and reevaluate what HTTP2.0 needs to be -- such as changing to a focus on security and privacy.

Re:If you are a programmer and have a Wikipedia pa (0)

Anonymous Coward | about 3 months ago | (#47097031)

How "fucked" is the internet, exactly?

It's acquiring new users at an astonishing rate - on average, over ten million people per day. Number of searches, online spending, advertising profits - these metrics are increasing at a compound rate of double-digit percent. More industries should be so fucked.

There's a problem with spying and privacy? Duh, and how much is that worth at the bottom line? Hasn't stopped the growth, hasn't even dented it so's you'd notice. Ask Google.

The internet isn't fucked. Its users may be - particularly those who wanted it to remain a free, anarchic playground - but the internet itself is thriving beyond any reasonable expectation.

Re:If you are a programmer and have a Wikipedia pa (1)

YA_Python_dev (885173) | about 3 months ago | (#47097501)

Ah, the irony of using "security and privacy" to argue in favour of old unencrypted HTTP/1.1 and against always-encrypted SPDY (which HTTP/2.0 is based on).

Re:If you are a programmer and have a Wikipedia pa (1)

Anonymous Coward | about 3 months ago | (#47096459)

Yeah, what DOES he know? He's just the author of Varnish and a core FreeBSD developer, I bet you understand HTTP a lot more than he does.

Good point, except... (3, Interesting)

Anonymous Coward | about 3 months ago | (#47096061)

...the entire idea is to cripple security and the ability to provide for privacy. In the end, national security agencies take the view that digital networks are a primary source of intelligence. Thus, being able to bug and break into systems is a national security priority. The group is dominated by companies that rely on government contracts, so they do the agencies' bidding and weaken the specs.

Ultimately, you live in an Oligarchy, not a democracy, so no one cares about your opinion or that of anyone else, unless you happen to have lots of cash. If you did have lots of cash, then you too would be trying to undermine security and privacy to ensure no one takes it from you.

Deal with it.

Re:Good point, except... (2)

jackspenn (682188) | about 3 months ago | (#47096177)

From reading the HTTP/2.0 thread, it seems like some of us "normal" users should respond to the working group last comment call and point out that encryption alone is not enough. That privacy and anonymity are at least equal to encryption, if not more important.

Was tempted to post as an Anonymous Coward for effect.

Re:Good point, except... (4, Interesting)

MassacrE (763) | about 3 months ago | (#47096419)

It is a technical discussion. Unless you are prepared to provide feedback on how to make a more private/anonymous protocol which can serve as a drop-in replacement for HTTP 1.1, "normal users" will just serve as background noise.

PHK's biggest issue IMHO is that HTTP/2 will break his software (Varnish), by requiring things his internal architecture can't really deal with (TLS).

Re:Good point, except... (4, Informative)

philip.paradis (2580427) | about 3 months ago | (#47096701)

PHK's biggest issue IMHO is that HTTP/2 will break his software (Varnish), by requiring things his internal architecture can't really deal with (TLS).

Varnish was never intended to support TLS, nor do the majority of Varnish users (myself included) want it to. The core issues being discussed have little to do with Varnish, aside from the fact that PHK has an excellent understanding of HTTP and high performance content delivery. Having written an HTTP proxy of my own to perform certain other tasks, I understand and largely agree with his sentiments.

That said, it should be noted that many people who need to support TLS connections already use separate software in front of Varnish for cases where high performance intermediate HTTP caching is desirable. This is really a separate topic from discussion of HTTP/2 and/or SPDY, but implementation of a SPDY to HTTP proxy could handle cases where an administrator wishes to run software that only speaks HTTP, albeit with the drawback that SPDY-specific features would be unavailable.

For many use cases, the ability to support 30,000 concurrent HTTP connections with a single VM outweighs the value proposition of encrypting the content in transit, especially for cases where the content in transit isn't remotely sensitive in nature. While "encryption doesn't add much overhead, Google said so" is a commonly parroted idea these days, if you take the opportunity to test various deployment scenarios you'll quickly find that assertion is false for many of those use cases.

Mixing insensitive and sensitive content (2)

tepples (727027) | about 3 months ago | (#47097175)

For many use cases, the ability to support 30,000 concurrent HTTP connections with a single VM outweighs the value proposition of encrypting the content in transit, especially for cases where the content in transit isn't remotely sensitive in nature.

It isn't necessarily that the work that you're trying to serve is "remotely sensitive in nature". It's that other parts of the same page may be "sensitive in nature", and browsers throw up pop-up windows about "mixed content" when a secure document transcludes an insecure resource. For example, the logged-in user's session cookie is "sensitive in nature" because an attacker can view it and replay it to impersonate the user. But because ad networks have a history of not supporting HTTPS, many sites have had to remain vulnerable to Firesheep in order to serve ads to logged-in users. (Only in September of last year did Google's AdSense add HTTPS support.)
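
Independent of the ad-network problem, the session-cookie half of this has a well-known partial mitigation; a sketch with placeholder names (not Slashdot's actual cookies): mark the cookie Secure so it is never sent over plaintext HTTP, which is exactly the replay opportunity Firesheep-style sniffing exploits, plus HttpOnly so page scripts cannot read it.

    from http.cookies import SimpleCookie

    cookie = SimpleCookie()
    cookie["session"] = "opaquerandomtoken"   # placeholder value
    cookie["session"]["secure"] = True        # only sent over TLS connections
    cookie["session"]["httponly"] = True      # hidden from page JavaScript
    print(cookie.output())
    # -> Set-Cookie: session=opaquerandomtoken; HttpOnly; Secure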

Re:Mixing insensitive and sensitive content (1)

philip.paradis (2580427) | about 3 months ago | (#47097301)

I'm keenly aware of the many issues surrounding mixed content. I'm not referencing any use cases where that would be an issue; far from it, I'm referencing cases where a single entity controls the serving of non-sensitive content, and I'm certainly not suggesting serving session cookies over plaintext under any circumstances. You might be interested to learn that I spend a considerable amount of time every week educating people on issues far more complex than the limited fundamentals you've referenced here, and in any event we appear to be discussing completely different use cases.

Re:Mixing insensitive and sensitive content (1)

tepples (727027) | about 3 months ago | (#47097351)

I'm referring to Slashdot itself, which sends session IDs in plaintext unless you're a subscriber.

Re:Mixing insensitive and sensitive content (1)

philip.paradis (2580427) | about 3 months ago | (#47097363)

I'm explicitly not referring to Slashdot. As much as I may disagree with said practices (and Slashdot has a seemingly ever-increasing pile of bad practices following the last buyout), poor practices on the part of a particular site operator bear no relation to responsible use cases.

Re:Mixing insensitive and sensitive content (1)

philip.paradis (2580427) | about 3 months ago | (#47097377)

To add to the heap of bad practices in relation to the current conversation, I'll repeat my earlier comment that Slashdot has still not renewed their certificate [cubeupload.com] , and thus even subscribers are in a bit of a pickle with regard to TLS unless they happen to have previously noted the key fingerprint of the now-expired certificate and place rather high trust in the internal integrity of Slashdot operations. Given the circumstances, I could not in good professional conscience recommend such trust.

Re:Mixing insensitive and sensitive content (1)

philip.paradis (2580427) | about 3 months ago | (#47097401)

I'll add one more bit of fuel to the fire of Slashdot irresponsibility. It appears slashdot.org uses Apache 2.2.3 on CentOS to serve its content, and while this could be due to an obfuscated host response header, the issuance date for their wildcard SSL certificate (which really shouldn't be used in this case anyhow) was April 20, 2013. Unless Slashdot went to the trouble of avoiding OpenSSL all this time, this means the private key for their wildcard certificate was vulnerable to the Heartbleed vulnerability widely reported just a short while back (heavily discussed on Slashdot, ironically enough), but their management couldn't be bothered to revoke and reissue the certificate. I'm sure I don't need to elaborate on the information security implications in this case.

Re:Good point, except... (1)

phantomfive (622387) | about 3 months ago | (#47097111)

That privacy and anonymity are at least equal to encryption, if not more important.

I'm really interested in your plan to enforce privacy and anonymity in the HTTP protocol. Because I don't see how you can do that...

Re:Good point, except... (0)

Anonymous Coward | about 3 months ago | (#47096301)

On paper we technically live in a Republic.

http://www.wimp.com/thegovernm... [wimp.com]

Re:Good point, except... (0)

Anonymous Coward | about 3 months ago | (#47096381)

Sort of the same Republic as presented in Star Wars...

Convincing (5, Interesting)

gweihir (88907) | about 3 months ago | (#47096137)

There is also the fact that there is no urgent need to replace HTTP/1.1, despite what people claim. Sure, it has problems, but the applications it does not support so well are things there is no urgent need for, hence there is no urgent need for a protocol replacement. It would be far better to carefully consider what to put into the successor and what not. And KISS should be the overriding concern; anything else causes a lot more problems and wastes a lot more resources than getting the successor a few years later.

Re:Convincing (0)

Anonymous Coward | about 3 months ago | (#47096199)

Agreed. Additionally, SPDY already exists as an unofficial stopgap for those that want it. Most will stick to HTTP/1.1, but Google is using SPDY because it suits their needs and that's just fine.

Re:Convincing (1)

gweihir (88907) | about 3 months ago | (#47096485)

Indeed. And getting a bit more experience with it to be able to judge its merits and problems (unfortunately Google has been both short-sighted and too specific for their own needs in the past) would be a good thing too, as it provides the opportunity to learn some lessons before setting anything in stone.

next up ipv6 (0)

Anonymous Coward | about 3 months ago | (#47096197)

can we scrap ipv6 yet ? the most unintuitive address system invented so far

Re:next up ipv6 (1)

am 2k (217885) | about 3 months ago | (#47096287)

Globally unique addresses are really unintuitive. What is this, the phone system? /s

Google wants control of the world. (0)

Anonymous Coward | about 3 months ago | (#47096225)

Isn't publishing HTTP/2.0 as a 'place-holder' is just a waste of everybody's time, and a needless code churn, leading to increased risk of security exposures and failure for no significant gains ?

Not to mention, who gets control of it and passes out all the back doors.

Idealism in computing? (1)

Anonymous Coward | about 3 months ago | (#47096233)

This guy has never heard of version numbers or something? He thinks it's a better idea to design something that is perfect instead of using something that is useful for what it's designed for but less than ideal for a small percentage of use cases.

Moving goal posts (4, Insightful)

abhi_beckert (785219) | about 3 months ago | (#47096243)

I don't think HTTP has any problems with security. All the real world problems with HTTP security are caused by:

  * the dismally slow rollout of DNSSEC. It should have been finished years ago, but it has barely even started.
  * the high price of acquiring an SSL certificate (it's just bits!).
  * the slow rollout of IPv6 (SSL certificates generally require a unique IP and we don't have enough to give every domain name a unique IP).
  * arguments in the industry about how to revoke a compromised SSL certificate, which has led to revocation being almost useless.
  * SSL doesn't really work when there are thousands of certificate authorities, so some changes are needed to cope with the current situation (e.g. DNSSEC could be used to prevent two certificate authorities from signing the same domain name).

Re:Moving goal posts (0)

Anonymous Coward | about 3 months ago | (#47096295)

So HTTP doesn't have any problems with security... it just has numerous problems with security that aren't adequately covered by the myriad of fixes thrown at it?

Re:Moving goal posts (2)

fuzzyfuzzyfungus (1223518) | about 3 months ago | (#47096545)

The trouble is that SSL is really playing two roles that aren't trivial to separate, because of the 'well-just-MiTM-with-a-self-signed-cert' problem; but which are substantially at odds with one another in other respects.

You've got identification, which you only really want in a subset of cases (your bank, say); but which is actually slightly expensive to do properly and then you've got encryption, which you want in basically all cases (would you ever not at least want it?) which is cheap; but requires the certificate. Arguably, what we really need is some mechanism for measuring (presumably by consulting some number of 3rd parties, ideally widely distributed) certificate stability and certificate duration.

There are some certificates where you would actually want somebody to have done some thorough checking that the entity on the cert is who they say they are. However, much of the time your main concern is that the certificate isn't an MiTM, and that you are talking to the same person or entity you were talking to previously. You don't actually need to know their Real Official Name or anything; but you want to be sure that you aren't suddenly being fed a different certificate than people in other parts of the world, or on different ISPs, and you want to be sure that the certificate didn't change suddenly for mysterious reasons.

MITM from day one breaks key continuity management (2)

tepples (727027) | about 3 months ago | (#47097103)

However, much of the time your main concern is that the certificate isn't an MiTM, and that you are talking to the same person or entity you were talking to previously.

That's called the "key continuity management" paradigm. But KCM breaks down if the first time you talk to someone happens to be through a man in the middle. If your Internet connection is through a MITM proxy, as seen in bug 460374 [mozilla.org] and in many corporate networks, then "the same person or entity you were talking to previously" would be the MITM. For this reason, even though SSH is most often used in KCM mode, the "Please answer yes or no" prompt urges the user to confirm the server's key fingerprint out of band.
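
A toy trust-on-first-use sketch of the KCM idea (the pin-store file name and host are placeholders), which also shows the weakness described above: whatever answers the very first connection is what gets pinned.

    import hashlib
    import json
    import os
    import ssl

    PIN_FILE = "known_hosts.json"   # hypothetical local pin store

    def fingerprint(host: str, port: int = 443) -> str:
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        return hashlib.sha256(der).hexdigest()

    def check(host: str) -> str:
        pins = json.load(open(PIN_FILE)) if os.path.exists(PIN_FILE) else {}
        fp = fingerprint(host)
        if host not in pins:
            pins[host] = fp                     # first use: trust and record
            with open(PIN_FILE, "w") as f:
                json.dump(pins, f)
            return "pinned on first use"
        return "match" if pins[host] == fp else "certificate changed: possible MITM"

    print(check("example.com"))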

Re:Moving goal posts (1)

Kjella (173770) | about 3 months ago | (#47097519)

Well, you can always do it the Tor way: basically the onion address is a fingerprint of the public key. You'd still need DNS to tell you that "435143a1b5fc8bb70a3aa9b10f6673a8.pubkey" can be found at IP 1.2.3.4 (or IPv6 abcd:abcd:abcd:abcd:abcd:abcd:abcd:abcd), so you could still suffer denial of service, but the "pubkey" DNS server should refuse redirect requests that aren't signed by that public key, and nobody else would have the correct key for a MITM. The obvious downsides:

  1. The domain would never be easy to read or have meaning, it's for linking or copy-pasting or QR codes. But you could have an "easy" URL that redirects to your secure site for bookmarking, that way returning visitors coming from bookmarks aren't vulnerable.
  2. If the server is compromised, you lose the "domain". This can somewhat be mitigated by having it signed by a "root" key (self-signed!) kept offline and safe, using the root key to revoke it and once revoked DNS servers will never let it be unrevoked. Possibly also a revoke/redirect to say the new server is now at 70a3aa9b10f6673a8435143a1b5fc8bb.pubkey. That does somewhat rely on the DNS system, but if the server is compromised all bets are off and this only lets you recover once (and if...) you find out it's compromised.
  3. If you fuck it up and lose the key and all backups, there's no recovery and the domain is forever lost.

Anything more complicated than that will probably die the way PGP's web of trust did (99.999% don't use it). The "sorta, maybe we're changing keys for some good reason, or maybe we're really compromised" approach doesn't work; people want a straight answer: is this the same site or not?
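
To make the fingerprint-as-domain idea above concrete, a tiny sketch (the key file name is a placeholder, and real onion addresses use a different key type and encoding):

    import hashlib

    # Hash a PEM-encoded public key and use a truncated digest as the label.
    with open("server_public_key.pem", "rb") as f:
        public_key = f.read()

    label = hashlib.sha256(public_key).hexdigest()[:32]
    print(label + ".pubkey")   # e.g. 435143a1b5fc8bb70a3aa9b10f6673a8.pubkey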

cost of SSL certificates (1)

SuperBanana (662181) | about 3 months ago | (#47096681)

The cost of SSL certificates is not in the bits. It's in the security of the private key, some validation in extended verification certs, and the administration work involved in signing your key with the CA's key. "It's just bits" is like looking at a computer chip and saying "it's just sand."

Re:cost of SSL certificates (2)

WaffleMonster (969671) | about 3 months ago | (#47096879)

The cost of SSL certificates is not in the bits.

Back in the day you actually had to pick up the phone, speak with someone and provide corporate documentation. Now you purchase certs from a computer in a 100% automated process. Completely "just bits" worthless.

It's in the security of the private key, some validation in extended verification certs

Extended verification is a foolish scam to enrich CAs. Users hardly understand what the padlock icon in the URL bar means after being intentionally inundated with fake padlock GIFs and "we're secure, believe what we say" assertions littering every online commerce and banking site on the planet.

Re:Moving goal posts (0)

Anonymous Coward | about 3 months ago | (#47096951)

DNSSEC is an abomination. One of the primary reasons it's rolling out slow is because a lot of smart (but perhaps less vocal and financially-interested than their opponents) people really don't *want* DNSSEC to be widely adopted. DNSSEC raises all sorts of new, difficult security issues. It's a great example of total process failure: a committee protocol over a decade in the making that makes security worse in the net.

The unique IPs problem isn't really relevant, it's not a practical barrier.

All the rest of your stuff boils down to the problems of trust that are exhibited by any public key infrastructure and how they were solved (or not!) by browser vendors. This problem is somewhat orthogonal to everything else about the security and privacy of the protocol itself.

Re:Moving goal posts (1)

ras (84108) | about 3 months ago | (#47097387)

I don't think HTTP has any problems with security.

I disagree. We live in a world where phishing attacks are common, and the PKI system is fragile. Fragile as in when Iran compromised DigiNotar and people most likely died as a result.

The root cause of both problems is that the current implementation of the web insists we use the PKI infrastructure every time we visit the bank, the store or whatever. It's a fundamental flaw. You should never rely on Trent (the trusted third party, the CAs in this case) when you don't have to. Any security implementation that does insist you do when you don't have to is broken. Ergo HTTP is broken.

It's not like it isn't fixable. You could insist that on the first visit the site sends you a cert which is used to secure all future connections, and that the cert is used only when the user clicks on a bookmark created when the cert was sent. That would fix the "Iran" problem, and it would also allow web sites to train users to use the bookmark instead of clicking on random URLs.

So given that HTTP security has caused deaths and that it is fixable, I'd say HTTP has huge problems with security. Given that, HTTP/2.0 not attempting to fix it is a major fail IMHO.

Death by Committee (5, Insightful)

bill_mcgonigle (4333) | about 3 months ago | (#47096283)

HTTP/1.1 is roughly seventeen years old now - technically HTTP/1.0 came out seven years before that, but in terms of mass adoption, NSFNet fizzled in '94 and then people really started to pay attention to the web - I had my first webpage about six months before that (at College) and there were maybe a dozen in the whole school who had heard of it previously. Argue for seven years if you'd like, but I'll say that HTTP/1.0 got seriously revised after three years of significant broad usage. SSLv3, still considered almost usable today, was released the year before. TLSv1.2, considered good, has been a standard for over five years and still it's poorly supported though now critically necessary for some security surfaces.

After this burst of innovation, somebody dreamt up the W3C and we got various levels of baroque standards, all while everybody else solved the same problems over and over again. IETF used to be pretty efficient, but it seems like they're at the same point now.

I won't argue for SPDY becoming HTTP/2.0 but I will admire it as an effort to freaking do something. Some guys at Google said, "look, screw you guys, we're going to try to fix this mess," and they did something. While imperfect, they still did enough that the HTTP/2.0 committee looked at it and said (paraphrasing), "hrm, since we haven't done anything useful for 15 years, let's take SPDY and tweak it and call it a day's work well done."

The part Google got most right was the "screw you guys" part - central-planning the web is not working. I'm not positive what the right organization structure looks like, but it's not W3C and IETF. We need to figure out what went right back in the mid '90s and do that again, but now with more experience under our belts. This talk of "one protocol to rule them all for 20 years" is undeniably a toxic approach. HTTP/1.1 should have been deprecated by 2005 and we should be on to the third iteration beyond it by now. Yeah, more core stuff for the devs to do - used to be we had people who could start a company and write a whole new web browser in a year, half the time it takes to change the color of tabs these days.

And don't start with this "but this old browser on ... " crap either - we rapidly iterated before and can do it again. Are there people who fear change? Sure - and nobody is going to stop HTTP/1.1 from working 50 years from now, but by golly nobody should want to use it by then either.

Too much. (2)

bussdriver (620565) | about 3 months ago | (#47096599)

It's better they chuck some of it and stick with a few good bits. The encryption can be trashed as far as I care; that can be another group's problem. We need proxy caching and you can't do it with encryption and be secure.

The reason we can't move like before isn't the committee, it's that we now have a global system built around it and a great deal of investment in it. In the 90s it was all new; low risk, low impact. Today, there is a vast territory claimed and set; when you make new things you can't destroy all you've gained and unless you have a killer app (like the web was) people will not be motivated to make drastic changes.

DNSSEC is a great example: there isn't much motivation to go through the pain in the ass it creates, and it doesn't completely solve a problem we're all that worried about. They may have made it quickly, but people are not using it. IPv6 is long past due and here we sit... (at least we don't have a huge movement of IPv4 deniers saying it's not full, and if it is, it wasn't our fault.)

Re:Death by Committee (2)

fuzzyfuzzyfungus (1223518) | about 3 months ago | (#47096685)

I'd be inclined to suspect that the stagnation of the W3C and IETF is more a symptom than a cause: they might be 'central planners' in the sense that they get a bunch of technocrats together and try to hammer out the Glorious 3rd Five Year Plan; but they lack essentially all power that a real central planner would have(either to expropriate anyone who sneaks a patent into an IETF standard, or to crush somebody who just wanders off and does his own thing).

Trouble is, if you just wander off and do your own thing, you swiftly learn that the internet of today, unlike the early internet, is huge, heavily populated by people and companies who see it as a necessary evil rather than having any active desire to deal with it, and a certain amount of perverse cat-and-mouse caused by 'security'(why does so much stuff use, or look like, HTTP over port 80, even if it probably isn't the best solution? Because anything that doesn't won't be visible to cube drones slacking off or people on really shit ISPs...)

Given that it takes time for hardware and software to age out and be replaced, it's obviously a bad thing that the people who should be working on future standards, to guide the implementation of what replaces the stuff aging out, have absorbed the inertia of the larger internet (Even if it won't actually be in the field until everybody's old shit drops dead, it'll never reach production if it isn't hammered out and ready to be baked in to the replacements when that does happen); but I'd be very, very, surprised if the standards bodies actually manage to impose stagnation, rather than drawing their membership heavily(but somewhat unavoidably) from entities in the grips of stagnation themselves.

It's probably not for nothing that it was Google, rather than somebody smaller, who came up with that proposal: in terms of engineering resources a substantially smaller company could have done it; but they wouldn't be able to say "Yeah, here's our HTTP replacement. We think it's pretty neat, and will probably use it when the millions of users of our browser and/or operating system contact our tens to hundreds of thousands of servers, which is often. If you guys like it, you can use it too; but even if you don't, we don't really care."

If you are very big, or a lot of your product line is very insular, there is still nothing preventing you from Just Doing It and assuming that things will work out(because, in your case, they very well might). If you are not one of those things, the pond in which you are a small fish is vastly larger than it used to be...

Re:Death by Committee (3, Insightful)

jandrese (485) | about 3 months ago | (#47096877)

My impression is that the IETF was doing a pretty good job until the businesses started taking the internet seriously and instead of being a group of engineers trying to make the best protocols it became a bunch of business interests trying to push their preferred solution because it gives them an advantage in the market. Get a few of those in the same room and deadlock is the result.

Re:Death by Committee (3, Insightful)

MatthiasF (1853064) | about 3 months ago | (#47097151)

Bullshit. BULLSHIT!

Google has derailed so much of the web's evolution in an attempt to control it that neither they nor any Google lover has the right to suggest they get to take the web's standards away from the committees. From the "development" trees in Chrome, to WebRT and WebM, they have splintered the internet numerous times with no advantage to the greater good.

The committee was strong armed into considering SPDY simply because they knew Google could force it down everyone's throats with their monopoly powers across numerous industries (search, advertising, email, hosting, android, etc.). HTTP/1.1 has worked well for the web. The internet has not had any issues in the last 22 years except when assholes like Google and Microsoft decided to deviate from a centralized standard.

There is no way we should let Google set ANY standard after the numerous abuses they have done over the last 8 years, nor should any shills like you be allowed to suggest they should be the one calling the shots.

So, kindly go to hell.

How does WebM splinter the Internet? (2)

tepples (727027) | about 3 months ago | (#47097253)

From the "development" trees in Chrome, to WebRT and WebM, they have splintered the internet numerous times with no advantage to the greater good.

VP8 is a royalty-free video codec whose rate/distortion performance is in the same league as the royalty-bearing MPEG-4 AVC. WebM is VP8 video and Vorbis audio in a Matroska container. Did Xiph likewise "splinter[] the Internet" by introducing Vorbis as a royalty-free competitor to the royalty-bearing MP3 and AAC audio codecs? If so, how? If not, then how did Google's On2 division "splinter[] the Internet" by introducing WebM as a competitor to MPEG-4?

Re:Death by Committee (1)

thsths (31372) | about 3 months ago | (#47097471)

I completely agree. The W3C always seems to be behind reality, trying to describe it but not define it. The IETF did a lot of very useful work, but it has been branching out into rather obscure protocols recently. Where is HTTP/1.2? Surely HTTP/1.1 is not perfect?

And Google did what Google does: they threw together a prototype and checked how it worked. It seems to be working very well for them, but maybe not so much for others.

I would also advocate separating some of the concerns. Transmitting huge amounts of bulk data is a problem that is (mostly) solved with HTTP/1.1. Encryption less so; session tracking is a bit of a pain, and server push is really ugly in HTTP/1.1.

PS: Concerning the original submission, there is nothing wrong with encrypting cookies. On the contrary, it is the proper thing to do if you do not trust the client, and you should never trust the client.
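To make that concrete, here is a minimal server-side sketch of an encrypted, tamper-proof cookie value, assuming the third-party Python cryptography package and a hypothetical session dict; the function names are illustrative, not from any particular framework:

from cryptography.fernet import Fernet  # third-party: pip install cryptography
import json

SECRET_KEY = Fernet.generate_key()   # generated once, kept only on the server
fernet = Fernet(SECRET_KEY)

def seal_cookie(session: dict) -> str:
    # Encrypt and authenticate the session data before handing it to the client.
    return fernet.encrypt(json.dumps(session).encode()).decode()

def open_cookie(value: str) -> dict:
    # Anything the client tampered with fails authentication and raises InvalidToken.
    return json.loads(fernet.decrypt(value.encode()))

cookie = seal_cookie({"user_id": 42, "role": "reader"})  # goes into Set-Cookie
print(open_cookie(cookie))                               # {'user_id': 42, 'role': 'reader'}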

Summary (0)

Anonymous Coward | about 3 months ago | (#47096323)

Let's keep the crappy thing we have now and not deploy the less crappy thing we've already implemented so we can focus our effort on building something perfect from scratch. Good luck with that.

Arguing about other peoples arguments (3, Insightful)

WaffleMonster (969671) | about 3 months ago | (#47096711)

I think the following demonstrates the reality that participants in standards organizations are constrained by the market, and that while they do wield some power, it must be exercised with extreme care and creativity to have any effect past L7.

As much as many people would like to get rid of Cookies -- something
you've proposed many times -- doing it in this effort would be counter-productive.

Counter-productive for *who* Mark ?

Counter-productive for FaceBook, Google, Microsoft, NSA and the other mastodons who use cookies and other mistakes in HTTP
(ie: user-agent) to deconstruct our personal identities, across the entire web ?

Even with "SSL/TLS everywhere", all those small blue 'f' icons will still tell FaceBook all about what websites you have visited.

The "don't track" fiasco has shown conclusively, that there is never going to be a good-faith attempt by these mastodons to improve personal privacy: It's against their business model.

And because this WG is 100% beholden to the privacy abusers and gives not a single shit for the privacy abused, fixing the problems would be "counter-productive".

If we cared about human rights, and privacy, being "counter-productive" for the privacy-abusing mastodons would be one of our primary goals.

It is impossible for me to disagree with this. I have several dozen tracking/market-intelligence/stat-gathering firms blackholed in DNS, so creative uses of DNS to implement tracking cookies do not work. I count on the fact that they are all much too lazy to care about the few people screwing with DNS or running browser privacy plugins.
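For illustration, that kind of blackholing can be as simple as a hosts file or dnsmasq entry pointing tracker names at an unroutable address; the domains below are placeholders, not a vetted blocklist:

# /etc/hosts -- individual tracker hostnames
0.0.0.0  tracker.example.com
0.0.0.0  stats.example.net

# dnsmasq.conf -- blackhole a whole domain and all of its subdomains
address=/adnetwork.example/0.0.0.0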

I'm personally creeped out by hordes of stalkers following me everywhere I go... yet I see the same mistakes play out again and again: people looking to solve problems without considering the second-order effects of their solutions.

You could technically do something about that army of stalker creeps... yet this may just force them underground, pulling the same data through backchannels established directly with the site. Rather than a cut-and-paste JavaScript job, it would likely turn into a module loaded into the backend stack, with no visibility to the end user and no way for them to control it.

While this would certainly work wonders for site performance and bandwidth usage, the limited feedback channels we did have for the stalked to watch the stalker would be lost. On the flip side of the ledger, not collecting direct proof of access could disrupt some stalker creeps' business models.

I think an emotional, half-assed reaction to an NSA with the established ability to "QUANTUM INSERT" ultimately encourages a locally optimal solution that affords no actual safety or privacy to anyone.

Not only does opportunistic encryption provide a false sense of security to the vast majority of people who simply do not understand the relationship between encryption and trust; such deceptions effectively relieve the pressure for a real solution... which I assume looks more like DANE and the associated implosion of the SSL CA market.
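For context, DANE (RFC 6698) publishes the pin for a server's certificate or public key in DNSSEC-signed DNS instead of relying on a commercial CA. A TLSA record for a hypothetical host looks roughly like this (the digest is a placeholder):

; usage 3 = DANE-EE (pin the end-entity certificate itself)
; selector 1 = SubjectPublicKeyInfo, matching type 1 = SHA-256
_443._tcp.www.example.com. IN TLSA 3 1 1 0123456789abcdef...placeholder-sha256-digest...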

My own opinion is that HTTP/2.0 is only a marginal improvement with no particular pressing need behind it... I think they should think hard and add something cool to it, make me want to care... as it is, I'm not impressed.

Re:Arguing about other peoples arguments (0)

TechyImmigrant (175943) | about 3 months ago | (#47097261)

>My own opinion is that HTTP/2.0 is only a marginal improvement with no particular pressing need behind it... I think they should think hard and add something cool to it, make me want to care... as it is, I'm not impressed.

Alright, let's propose some stuff:

(1) HTTP runs over TCP. This is stupid. An unbracketed stream protocol for serving blobs of data and protocol messages is wrong and has promoted abhorrent things like REST. Let's use a reliable datagram transport with a persistent session that can be bound 1:1 to the security session. Then both ends can keep per-session state and the lives of programmers get easier.
(2) HTTP over TLS is stupid. TLS is a machine-to-machine protocol, not a web-client-to-web-server protocol. Let's use a security protocol that is designed to secure the datagram HTTP mentioned in (1). Then, perhaps, the server would know which cert to cough up for which website, which today it does not, because TLS only authenticates the server, not the (virtual) website.
(3) MITM is a primary problem. Authenticate both ends. If x509+PKI+TLS isn't working (it isn't), let's do something else. How about everything defaults to self-signed and we build a certification/attestation layer on top to declare which of the self-signed certs correspond to which entities (a rough sketch of that idea follows below)? You could even do it web-of-trust style to limit the single-point-of-failure issues with conventional PKI.
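As a rough illustration of the "default to self-signed, layer trust on top" idea in (3), a client could record a certificate fingerprint out of band (or via whatever attestation layer exists) and refuse to talk if it ever changes. This is only a trust-on-first-use sketch under those assumptions, not anything any working group has adopted; the host name and pinned digest are placeholders:

import hashlib, socket, ssl

PINNED_HOST = "example.org"   # placeholder host
PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"  # placeholder pin

def fetch_cert_fingerprint(host: str, port: int = 443) -> str:
    # Accept any certificate at the TLS layer; trust is decided by the pin check below.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)   # raw DER certificate bytes
    return hashlib.sha256(der).hexdigest()

if fetch_cert_fingerprint(PINNED_HOST) != PINNED_SHA256:
    raise SystemExit("certificate does not match the pinned fingerprint: refusing to talk")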

Re:Arguing about other peoples arguments (0)

snadrus (930168) | about 3 months ago | (#47097365)

I agree completely: you've hit the nail on the head with exactly what needs to be looked at.
As for (1), I believe QUIC is the protocol being investigated now (you can enable it in Chrome).
(2) Anyone who sets up a virtual website is shocked to learn that one. But an SSL extension where the client indicates which name it seeks should do it (see the sketch below).
(3) This. Exactly this is needed, and I don't know that anyone's trying. Signed certs are dead simple to make, and that's how Git is done nowadays, so extending that model to HTTP just makes sense. Since this one PC has signed in to all my regular services, they would all be able to vouch for that cert being me (and should ask challenge questions at sign-in for any machine that doesn't correspond).
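That extension already exists: Server Name Indication (SNI, RFC 6066), where the client names the site inside the TLS handshake and the server picks the matching certificate; much of the historical pain came from older clients that never sent it. A minimal Python sketch, with host names and certificate paths as placeholders:

import socket, ssl

# Client side: name the site you want inside the TLS handshake.
client_ctx = ssl.create_default_context()
with socket.create_connection(("www.example.com", 443)) as sock:
    with client_ctx.wrap_socket(sock, server_hostname="www.example.com") as tls:  # SNI
        tls.sendall(b"GET / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n")

# Server side: swap in the right certificate based on the name the client sent.
default_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
default_ctx.load_cert_chain("default-site.pem")          # placeholder cert+key bundle
other_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
other_ctx.load_cert_chain("other-site.pem")

def pick_cert(tls_sock, server_name, original_ctx):
    if server_name == "other-site.example":
        tls_sock.context = other_ctx                     # serve the matching certificate
    # returning None lets the handshake continue

default_ctx.sni_callback = pick_cert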

Thanks.

SPDY doesn't solve the real issues (3, Insightful)

Aethedor (973725) | about 3 months ago | (#47097495)

The biggest problem with SPDY is that it's a protocol by Google, for Google. Unless you are doing the same things as Google, you won't benefit from it. In my free time, I'm writing an open source webserver [hiawatha-webserver.org] and by doing so I've encountered several bad things in the HTTP and CGI standards. Things can be made much simpler, and thus faster, if we, for example, agree to let go of this ridiculous pathinfo (illustrated below), agree that requests within the same connection are always for the same host, and make the CGI headers work better with HTTP.
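For readers who have not run into it, PATH_INFO is the leftover path segment a server has to split off the URL and hand to a CGI script alongside SCRIPT_NAME and QUERY_STRING (RFC 3875). A hypothetical request and the environment a Python CGI script would see:

# Hypothetical request: GET /cgi-bin/app.cgi/extra/path?x=1
import os

print(os.environ.get("SCRIPT_NAME"))   # /cgi-bin/app.cgi  -> which script the server ran
print(os.environ.get("PATH_INFO"))     # /extra/path       -> the leftover "pathinfo"
print(os.environ.get("QUERY_STRING"))  # x=1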

You want things to be faster? Start by making things simpler. Just take a look at all the modules for Apache. The amount of crap many web developers want to put into their websites can't be fixed by a new HTTP protocol.

We don't need HTTP/2.0. An HTTP/1.3 with some things removed or fixed, or at least with some of the vague things specified more clearly, would be more than enough for 95% of all websites.

Get back to HTML 1.0 (0)

Anonymous Coward | about 3 months ago | (#47097527)

What I would like to see is to scrap everything HTML has become since HTML 1.0 and get back to mostly text-only sites, without most of the fancy CSS/JS/PHP scripting.

Add just the basic features: the font and its style (bold, underline, color), basic formatting (alignment, width, spacing between lines and paragraphs), and then a way to put a few pictures between or beside the text.

And every URL would need to be in the text itself; no image or specific area could be used to trigger anything or serve as a URL.

Basically more like what Wikipedia is today, only simpler still.

Then commenting should be moved back to NNTP.

There is nothing wrong with HTTP itself, but page sizes, elements, etc. have been blown out to proportions where sites no longer serve the original idea of the WWW and the purpose of the Internet.
