Varnish Author Suggests SPDY Should Be Viewed As a Prototype

Soulskill posted more than 2 years ago | from the final-version-needs-lasers-don't-ask-why dept.

An anonymous reader writes "The author of Varnish, Poul-Henning Kamp, has written an interesting critique of SPDY and the other draft protocols trying to become HTTP 2.0. He suggests none of the candidates make the cut. Quoting: 'Overall, I find the design approach taken in SPDY deeply flawed. For instance identifying the standardized HTTP headers, by a 4-byte length and textual name, and then applying a deflate compressor to save bandwidth is totally at odds with the job of HTTP routers which need to quickly extract the Host: header in order to route the traffic, preferably without committing extensive resources to each request. ... It is still unclear for me if or how SPDY can be used on TCP port 80 or if it will need a WKS allocation of its own, which would open a ton of issues with firewalling, filtering and proxying during deployment. (This is one of the things which makes it hard to avoid the feeling that SPDY really wants to do away with all the "middle-men") With my security-analyst hat on, I see a lot of DoS potential in the SPDY protocol, many ways in which the client can make the server expend resources, and foresee a lot of complexity in implementing the server side to mitigate and deflect malicious traffic.'"
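
To make the routing concern concrete: with plain HTTP/1.1 a proxy can pull out the Host: header with a cheap byte scan, while a SPDY-style deflate-compressed header block has to be inflated before any field can be inspected. A rough Python sketch of the difference (illustrative only, not SPDY's actual framing):

    import zlib

    RAW_REQUEST = (b"GET /index.html HTTP/1.1\r\n"
                   b"Host: example.org\r\n"
                   b"User-Agent: demo\r\n\r\n")

    def host_from_plain(request):
        # Cheap: scan the raw bytes for the Host: line.
        for line in request.split(b"\r\n"):
            if line.lower().startswith(b"host:"):
                return line.split(b":", 1)[1].strip()
        return None

    def host_from_compressed(block):
        # Expensive: the whole header block must be decompressed before any
        # single field can be read, which is the cost PHK objects to.
        return host_from_plain(zlib.decompress(block))

    compressed = zlib.compress(RAW_REQUEST)
    assert host_from_plain(RAW_REQUEST) == host_from_compressed(compressed) == b"example.org"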

Reminds me of IPP (0)

Viol8 (599362) | more than 2 years ago | (#40638213)

The Internet Printing Protocol is a weird mash-up of HTTP and a proprietary binary format. God knows what they were smoking when they dreamt it up.

Re:Reminds me of IPP (2)

Skapare (16644) | more than 2 years ago | (#40638877)

That's what you get when someone designs a protocol and then someone ELSE decides to change it with a different way of thinking. If the first one was well designed, changes should end up looking like they were part of the original. If the first one was poorly designed, make a whole new one.

Re:Reminds me of IPP (0)

OakDragon (885217) | more than 2 years ago | (#40639739)

The Internet Printing Protocol is a weird mash-up of HTTP and a proprietary binary format. God knows what they were smoking when they dreamt it up.

They were eating rotisserie chicken.

Re:Reminds me of IPP (2, Informative)

Anonymous Coward | more than 2 years ago | (#40639789)

I was a member of the IETF committee that proposed the standard (while working for Microsoft), and I agree it's not very good, but I can tell you that getting standards through various bodies is more politics than technology. Late in the cycle we tried to change it to XML, but people thought we (MS) were playing mind games with the committee, so the idea was abandoned.

Google only recommends SPDY with SSL/443 (0)

Anonymous Coward | more than 2 years ago | (#40638265)

And most SSL implementations (not including newer TLS) can only handle one certificate and usually one host (not counting multi-host/wildcard certs), which pretty much negates his host comment.

Re:Google only recommends SPDY with SSL/443 (1)

Short Circuit (52384) | more than 2 years ago | (#40638433)

That's been fixed with TLS+SNI, which has broad support [wikipedia.org] . SSL (as opposed to TLS) should be effectively dead as a support requirement by now.
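
For reference, server-side SNI is exposed by common TLS stacks these days; a minimal Python sketch of picking a certificate per requested hostname (requires Python 3.7+; the hostnames and certificate paths are placeholders):

    import ssl

    # One SSLContext per virtual host; the file paths are hypothetical.
    contexts = {}
    for name in ("www.example.org", "blog.example.org"):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(certfile="/etc/tls/%s.pem" % name)
        contexts[name] = ctx

    default_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    default_ctx.load_cert_chain(certfile="/etc/tls/default.pem")

    def pick_certificate(tls_socket, server_name, _ctx):
        # Called during the handshake with the SNI name the client sent;
        # swapping the socket's context selects the matching certificate.
        tls_socket.context = contexts.get(server_name, default_ctx)

    default_ctx.sni_callback = pick_certificate
    # default_ctx would then be used to wrap the listening socket.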

Internet Explorer on Windows XP (1)

sgent (874402) | more than 2 years ago | (#40638559)

According to the link, IE on Windows XP does not support TLS+SNI -- including IE 8.

Until this is fixed or a sufficient number of people migrate to a newer OS, TLS+SNI is still not viable for most websites.

Re:Internet Explorer on Windows XP (4, Insightful)

Short Circuit (52384) | more than 2 years ago | (#40638681)

By the time an HTTP 2.0 replacement is standardized, XP will be fully out of support. I get flamed whenever I say this, but it will be time to let XP die. I'm considering replacing my grandmother's box with an ASUS Transformer, as that'll handle all of her needs. (*And* the rest of my family won't say 'we don't know how to reboot the router because we don't know how to use the Linux netbook you set her up with.') QuickBooks runs on Vista and Win7. Tools and other things which require Windows XP are becoming scarcer, and workarounds and alternatives are becoming cheaper.

Eventually, XP will be like that DOS box that sits in some shops...used only for some specific, very limited purposes. Any shop cheaping out and still using it in lab environments (such as call centers) can work around it by installing a global self-signed cert and using a proxy server to rewrap SSL and TLS connections. Yes, this is bad behavior. So is continuing to use XP. At some point, the rest of the Internet needs to move on.

IE on XP, and Android 2.x too (1)

tepples (727027) | more than 2 years ago | (#40638871)

So what should operators of small web sites do in the 21 months between now and when Microsoft agrees to let XP die? Now that IPv4 addresses have become scarce, and home ISPs still haven't been pushing IPv6, shared hosting companies are able to charge double for the dedicated IP addresses needed to run SSL 3.0. Besides, Android 2.x devices are still being sold, and their SSL stack doesn't support SNI either.

Re:IE on XP, and Android 2.x too (2)

Short Circuit (52384) | more than 2 years ago | (#40639059)

If you think home ISPs haven't been scrambling to catch up on IPv6, you haven't been paying attention! Comcast is rolling it out right now. DSL providers are deploying 6rd. Mobile providers are deploying. Within a year, most end-users (in the US) will have access to IPv6 from their ISP. Within two years, most end-users will have replaced their non-IPv6 CPEs with ones which support IPv6. But IPv6 isn't the only solution to the problem, either.

Right now, most small website operators should avoid TLS if they only have static content. Otherwise, they need to make a decision between supporting XP and shelling out for a dedicated IP. Me, I'd probably drop support for XP, and let the end-user click through a cert warning if that's what they're inclined to do.

How much more per month are we talking about for a dedicated IP, anyway? I know how you'd set up joe random guy with a dedicated IPv4 address using a proxy server on a $5-7/mo VPS. Seems cheap to me, especially compared to what joe already spent to get a valid SSL cert.

As far as Android...a number of websites are pushing their users to use simple apps instead of the Android browser. As a user, this annoys me, as my LG-509 doesn't have much space unless I root it and clean it...but I can see how it offers a better interface to the server, and how it changes authentication and connectivity concerns.

Re:IE on XP, and Android 2.x too (1)

jandrese (485) | more than 2 years ago | (#40639149)

Within a year, most end-users (in the US) will have access to IPv6 from their ISP.

You sir, are an optimist. I applaud Comcast's deployment of IPv6, but the rest of the industry is still dragging their heels quite badly.

Re:IE on XP, and Android 2.x too (1)

Short Circuit (52384) | more than 2 years ago | (#40639293)

Comcast is deploying native. AT&T is deploying 6rd. I hear TWC is also deploying native. Also, someone on the east side of Michigan went live with IPv6 a couple months ago and asked some questions in one of the mailing lists I'm on. I can't find the message now, though.

Re:IE on XP, and Android 2.x too (2)

allo (1728082) | more than 2 years ago | (#40639959)

When your page is important, the user will use a browser which is supported.
Imagine Google only working with SNI. How long would it take until no one uses IE8 anymore? Only a few days, and even the dumbest user will have found a friend who can download them a real browser.

Re:IE on XP, and Android 2.x too (0)

Anonymous Coward | more than 2 years ago | (#40640829)

When your page is important, the user will use a browser which is supported.
Imagine Google only working with SNI. How long would it take until no one uses IE8 anymore? Only a few days, and even the dumbest user will have found a friend who can download them a real browser.

What data do you have to back this up? I have worked on two sites in the Alexa 100 that actually measured it on a small sample of users. The users they lost did not come back with a different browser. Most did not even come back when the experiment ended! Given that most people don't know what a browser is*, how on earth would they ask their friend to fix one?

*: http://www.youtube.com/watch?v=o4MwTvtyrUQ

Google broken? Switch to Bing. (1)

tepples (727027) | more than 2 years ago | (#40641631)

the user will use a browser which is supported. Imagine Google only working with SNI.

What is the sound of millions of users abandoning Google? "Bing."

Re:Internet Explorer on Windows XP (0)

Anonymous Coward | more than 2 years ago | (#40640761)

By the time an HTTP 2.0 replacement is standardized, XP will be fully out of support. I get flamed whenever I say this, but it will be time to let XP die.

What is your plan to make it happen? Will you be breaking into people's homes and replacing their PCs?

When you are done, you should make everyone stop smoking and end poverty.

Re:Internet Explorer on Windows XP (1)

Short Circuit (52384) | more than 2 years ago | (#40641461)

What is your plan to make it happen? Will you be breaking into people's homes and replacing their PCs?

Nobody has to make anything happen that isn't already either planned (Microsoft will stop supporting it) or physically inevitable.

Hardware will die. Software will get screwed up. Installation media will be missing. It will become cheaper for the 'family tech guy' to get his parents something newer or different as a replacement. There will be die-hards who want to stick with Windows XP and refuse to change. Those die-hards are outside the demographic that the vast majority of website maintainers care about.

So it went with the Amiga, the Commodore 64, DOS, Win3.1, OS/2, Win95, Win98, IPX, token ring, Linux ipchains, VAX, DEC Alpha. So it goes. So it shall go.

When you are done, you should make everyone stop smoking and end poverty.

Heh.

Firefox or chrome (1)

higuita (129722) | more than 2 years ago | (#40640415)

Just use Firefox or Chrome on XP; problem solved.

Re:Google only recommends SPDY with SSL/443 (1)

bluefoxlucid (723572) | more than 2 years ago | (#40640163)

Why are we discussing SSL? The guy who develops Varnish has said that SSL is a mess, OpenSSL is a confusing and terrifying mess, and SSL is bad because he doesn't understand the code.

https://www.varnish-cache.org/docs/trunk/phk/ssl.html [varnish-cache.org]

First, I have yet to see a SSL library where the source code is not a nightmare.

As I am writing this, the varnish source-code tree contains 82,595 lines of .c and .h files, including JEmalloc (12,236 lines) and Zlib (12,344 lines).

OpenSSL, as imported into FreeBSD, is 340,722 lines of code, nine times larger than the Varnish source code, 27 times larger than each of Zlib or JEmalloc.

This should give you some indication of how insanely complex the canonical implementation of SSL is.

Second, it is not exactly the best source code in the world. Even if I have no idea what it does, there are many aspects of it that scare me.

Translation: SSL libraries are big and scary, SSL is big and confusing and I have no idea what the hell it does so it's bad.

Re:Google only recommends SPDY with SSL/443 (2)

Short Circuit (52384) | more than 2 years ago | (#40640553)

Translation: SSL libraries are big and scary, SSL is big and confusing and I have no idea what the hell it does so it's bad.

Actually, the better argument I've heard is that OpenSSL is very poorly documented. And I've heard this complaint from numerous people...to the point where some even started looking into fresh implementations.

Re:Google only recommends SPDY with SSL/443 (2)

LordLimecat (1103839) | more than 2 years ago | (#40640917)

I have heard the complaint from numerous folks that SSL libraries really are a mess, which is why periodically we get nasty vulnerabilities in them; supposedly, auditing the code is an exercise in futility.

Re:Google only recommends SPDY with SSL/443 (2)

Score Whore (32328) | more than 2 years ago | (#40640343)

Conspiracy minded folks would think that SPDY is mainly about Google being able to ensure that advertisements are served before the content. Putting it inside of SSL also ensures that any intermediate carriers won't be stripping Google's adverts.

Re:Google only recommends SPDY with SSL/443 (2, Insightful)

Anonymous Coward | more than 2 years ago | (#40641059)

Conspiracy minded folks would think that SPDY is mainly about Google being able to ensure that advertisements are served before the content. Putting it inside of SSL also ensures that any intermediate carriers won't be stripping Google's adverts.

It also improves users' privacy by preventing personal content from being read by ISPs, proxies, and other men in the middle. If any other web site turned on SSL, we would thank them for choosing to improve users' privacy. But this is Google, so it must be a bad thing.

Google turned on SSL for search a month before they launched personalized search, where the search results can include things only the logged-in user has permission to see (if the user logs in and enables it). If they had not enabled SSL, people would (rightly) be upset that any man in the middle could see photos, documents, and G+ posts shared only with you.

If you punish companies for doing the right thing, expect them to stop. Every company has people for and against any idea. When you punish good behavior, the people who fight for it will not win the argument next time.

While I hate the transfer syntaxes we have (3, Interesting)

scorp1us (235526) | more than 2 years ago | (#40638307)

Parsing an HTTP session with multi-part MIME attachments using chunked encoding is murderous. Now, true, many people don't have to worry about this, but the fact is the protocol leaks like a sieve. For instance, you can't send a header after you've entered the body of the HTTP session. You can't mix chunked-encoded elements with fixed Content-Length elements in HTTP/1.1. Once you've sent your headers and encoding, you're screwed. The web has a solution - AJAX - but then you need JavaScript.

I'd be all for something new. I'd suggest basing it on XML, with a header section and header element to get the transfer started, then accepting any kind of structured data, including additional header elements. With this, you can still use HTTP headers for backwards compatibility, but once recognized as "HTTP 2.0" the structured XML can be used to set additional headers, etc. With the right rules, you can send chunks of files or headers in any arbitrary order and have them reconstructed.
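
For anyone who has been spared implementing it, the chunked framing complained about above looks roughly like this on the wire (minimal Python sketch, error handling omitted):

    def chunk_encode(parts):
        # HTTP/1.1 chunked body: "<hex length>\r\n<data>\r\n" per chunk,
        # terminated by a zero-length chunk. Lengths are not known up front,
        # which is what makes mixing this with Content-Length bodies awkward.
        out = b""
        for part in parts:
            out += b"%x\r\n" % len(part) + part + b"\r\n"
        return out + b"0\r\n\r\n"

    body = chunk_encode([b"Hello, ", b"world!"])
    # -> b'7\r\nHello, \r\n6\r\nworld!\r\n0\r\n\r\n'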

Re:While I hate the transfer syntaxes we have (3, Insightful)

Skapare (16644) | more than 2 years ago | (#40638383)

If you substitute JSON (or something like it with equal or better simplicity) for XML, then I might go along with it.

Re:While I hate the transfer syntaxes we have (3, Insightful)

spike2131 (468840) | more than 2 years ago | (#40638549)

I love JSON, but XML has the advantage of being something you can validate against a defined schema.

Re:While I hate the transfer syntaxes we have (2)

Skapare (16644) | more than 2 years ago | (#40638693)

And what do you do when something does not validate? Kick the guy who typed it in manually? Oh wait, what if it was generated by a program?

The whole schema thing in XML is one of the things that makes it suck. Just write the data correctly in the first place and discard anything that doesn't make sense to the application.

Re:While I hate the transfer syntaxes we have (5, Insightful)

jimmifett (2434568) | more than 2 years ago | (#40639055)

Ideally, you give the schema to the other side and they can validate the message before sending it to you, catching possible errors there. You validate against the same schema on your side as a safety net to weed out junk data and messages from users that don't validate. It also allows you to enforce types and limitations on values in a consistent manner.

JSON is good for quick and dirty communications when you are both the sender and the consumer of messages and can be lazy and not care too much about junk data.

Both have their uses, but you have to know when to use which.

Re:While I hate the transfer syntaxes we have (1)

sydneyfong (410107) | more than 2 years ago | (#40641541)

Except that it is impossible to design a validation scheme that covers all useful cases without resorting to designing a programming language.

And when you get to that point, why not just write the application code to validate in the first place? Why is it so hard to write a "schema validation" for JSON data? The fact that the designers of JSON didn't overengineer the feature into the spec doesn't mean it's hard to do....

Re:While I hate the transfer syntaxes we have (2)

mangobrain (877223) | more than 2 years ago | (#40641107)

When it doesn't validate, you reject it. Or, in the case of a replacement for an "extensible" protocol, you do something more subtle - such as accepting something which is well-formed XML but contains unrecognised tags, by skipping over the unrecognised tags. Much as is done in HTML itself.

Once you've written a few programs which accept data from the public internet, you come to greatly appreciate the value of protocols whose syntax is easy to parse, and whose semantics are simple to understand. The simpler the parsing code, the less likely it is to contain exploitable bugs; the simpler the semantics, the easier it is to write standards-compliant logic, and have a wide range of client & server implementations which interoperate happily. HTTP in its modern form - i.e. augmented over the years with such features as persistent connections, pipelining, chunked encoding, multi-part bodies, compression, cookies & caching - is neither easy to parse, nor easy to understand once parsed. It needs to be taken out and shot, along with this whole laissez-faire attitude to standards compliance & well-formedness which permeates web standards.

JavaScript schema (1)

tepples (727027) | more than 2 years ago | (#40638891)

JSON can also be validated against a schema, where the schema consists of a JavaScript file implementing isValid(parsed_object).

Re:JavaScript schema (1)

FrangoAssado (561740) | more than 2 years ago | (#40640397)

That seems great, but then to validate against the schema you must have a full JavaScript interpreter (or almost full, depending on how much you're willing to restrict what can be used in the isValid function). Not to mention that, this being JavaScript, a lot of schemas would end up being a mess, which would defeat half of the purpose of a schema -- being a human-readable documentation of the data format.

Schema validation is a very clear example of a situation where it's not good to have a Turing-complete language. There is a proposal for a JSON schema language [ietf.org] (where schemas are themselves JSON documents). Apparently there's not much interest, though.
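
As a concrete illustration of the declarative approach, a schema document plus a generic validator covers the common cases; a sketch using the third-party jsonschema package (assumed to be installed; the schema itself is made up):

    from jsonschema import validate, ValidationError  # third-party package

    schema = {
        "type": "object",
        "properties": {
            "user": {"type": "string"},
            "age": {"type": "integer", "minimum": 0},
        },
        "required": ["user"],
    }

    try:
        validate(instance={"user": "alice", "age": 30}, schema=schema)
    except ValidationError as err:
        print("rejected:", err.message)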

XSLT is Turing complete (1)

tepples (727027) | more than 2 years ago | (#40641601)

half of the purpose of a schema -- being a human-readable documentation of the data format

That purpose can be achieved with English.

Schema validation is a very clear example of a situation where it's not good to have a Turing-complete language.

If you specifically don't want something Turing complete when processing XML, then why do XML fans use XSLT despite its being Turing complete [unidex.com] ?

Re:XSLT is Turing complete (1)

FrangoAssado (561740) | more than 2 years ago | (#40641873)

half of the purpose of a schema -- being a human-readable documentation of the data format

That purpose can be achieved with English.

Sure, but then you have to write the schema AND document it. This can (and does) lead to documentation being out of sync with the code.

If you specifically don't want something Turing complete when processing XML, then why do XML fans use XSLT despite its being Turing complete [unidex.com] ?

XSLT is a completely different story; it's used to transform XML, not to validate it (which is what XML Schema does). For that, having the flexibility of a Turing-complete language is a good thing (that said, XSLT is still a pain in the ass to use, regardless of being Turing-complete).

so write a json schema validator (0)

Anonymous Coward | more than 2 years ago | (#40640443)

How hard can it be to write a JSON schema validator?

There is no reason why the same data in an XML schema can't be represented as a JSON structure, and no reason why you couldn't use that data to validate another arbitrary JSON data structure. In fact it should be easier, since we'd be missing the useless and often abused distinction between attributes and tags that XML has.

In reality, almost no one uses XML schemas any more (even HTML has gone the HTML5 way from XHTML 1) because DTDs and their ilk are such a verbose PITA that alternative validation languages sprang up to try and simplify them. But the main reason seems to be that no one really reads them, and that they often stop representing the actual data sent over the wire because no one bothers maintaining them after the dedicated developer who carefully crafted the schema for the first release moves on.

Everyone should use a schema (0)

Anonymous Coward | more than 2 years ago | (#40640955)

Everyone should use a schema; but hardly anybody does. You only need one case of "just skip validation, it'll work but we haven't updated the schema yet". Then the schema slips further and further out of maintenance...

I'm not saying this is how it should be. I'm just saying that a lot of times, that's how it is. Also, you have to validate in the code on some level anyway. You'd be foolish to rely on the schema as your single point of failure. Either you put the schema on a pedestal and make it capable of validating everything, and allow coders to not validate in code, or you lean towards validation everywhere and let the schema slip. Since people are often writing code that doesn't have anything to do with XML, you don't want to throw input validation out as a best practice.

So... the schema is a nice redundant check, or first pass; but it really just isn't that important.

Re:While I hate the transfer syntaxes we have (2)

tigre (178245) | more than 2 years ago | (#40638425)

Did you really just say XML?

Re:While I hate the transfer syntaxes we have (1)

dave420 (699308) | more than 2 years ago | (#40638455)

XML? Cute.

Re:While I hate the transfer syntaxes we have (4, Informative)

Skapare (16644) | more than 2 years ago | (#40638759)

s/Cute/Ugly/

XML is for marking up documents, not serializing data structures.

Now suppose we make HTTP based on XML. During the HTTP header parse, we need the schema. Fetch it. With what? With HTTP. Now we need to parse more XML and need another schema we have to get with HTTP which then has to be parsed ...

XML is not for protocols. JSON is at least more usable. Some simpler formats exist, too.

Re:While I hate the transfer syntaxes we have (1)

i_ate_god (899684) | more than 2 years ago | (#40639025)

And what's wrong with Key: Value; Value; Value anyway?

Re:While I hate the transfer syntaxes we have (1)

Carewolf (581105) | more than 2 years ago | (#40641173)

Nothing, what is wrong with the MIME syntax used in HTTP?

The actual implementation and use may suck, but it could be cleaned up into something more consistent without throwing everything else away.

Btw, you get almost all the speedup SPDY provides just by using HTTP 1.1 and pipelining. The only reason it is not done more is that it is hard to predict whether it will be supported properly, but you could make that a requirement for HTTP 1.2, for instance, solving the problem.
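
A bare-bones illustration of the pipelining mentioned above: two requests written back-to-back on one connection before any response is read (Python sketch against a placeholder host; a real client has to cope with servers and proxies that don't handle this properly, which is exactly the prediction problem described):

    import socket

    host = "www.example.org"   # placeholder host
    sock = socket.create_connection((host, 80))

    # Both requests go out before any reply is read; "Connection: close" on
    # the last one lets the simple read loop below terminate.
    request = ("GET /a.css HTTP/1.1\r\nHost: %s\r\n\r\n"
               "GET /b.js HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n") % (host, host)
    sock.sendall(request.encode("ascii"))

    # Responses come back in order; a real client must parse Content-Length
    # or chunked framing to know where one response ends and the next begins.
    reply = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        reply += chunk
    sock.close()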

Re:While I hate the transfer syntaxes we have (0)

Anonymous Coward | more than 2 years ago | (#40640927)

XML is for 'presentation', meaning it is meant to be parsed by machines. The 'conversation' itself (such as HTTP), or in this case which XML tags to use, is the next layer up.

http://en.wikipedia.org/wiki/OSI_model#Layer_6:_presentation_layer

SPDY has an issue that large organizations are not going to like: they cannot filter or cache it. Until those two things are addressed it will be a 'neat to have' thing (but only if you do not care about your network bill). They could have trimmed out some of the longer codes and replaced them with built-ins. They could have given clients hints, like "machines 1-50 all produce the same results, but you may get markup that says machine 3"... Instead it is mostly about speeding up one thing (which it does very well): connection setup/teardown. Then, for icing on the cake, it runs everything through SSL, meaning caching is out the window. This means that instead of running, say, a caching proxy that filters out known virus sites, I have to do it at each client now instead of in one place.

Also, this XML version of HTTP already exists, but no one uses it: http://www.xhttp.org/

Re:While I hate the transfer syntaxes we have (0)

Darinbob (1142669) | more than 2 years ago | (#40641169)

I honestly think that XML was created as a prank to see how gullible people were, but then it backfired when the religious cult formed around it.

Re:While I hate the transfer syntaxes we have (1)

Viol8 (599362) | more than 2 years ago | (#40641427)

Wish I had points to mod you up. So true!

Re:While I hate the transfer syntaxes we have (2, Informative)

laffer1 (701823) | more than 2 years ago | (#40638461)

XML is too big. If anything, we need to compress the response, not make it ten times larger. The header thing can be annoying at times, but it's important to know what you're going to send the client anyway. You must figure it out by the end of the document, why not at the beginning? Many files have a header, including shell scripts, image files, the BOM on XML documents or even the XML declaration. It's common in the industry.

AJAX doesn't solve the real problem. If anything, it necessitates making responses smaller and faster. We have to make many connections and deal with the overhead of that. Pipelining can help some, but if we continue down this road, we must make the protocol more efficient. XML is the opposite of that goal. I don't agree with anything less compact than what we have now, but you could at least argue for JSON, as it's already supported by browsers and much faster to parse.

As there are vastly different goals for the next generation of HTTP, I think it's best not to rush into anything. We'll be stuck with this new protocol. If it doesn't take off, it's just a hassle, and if it does, it could be devastating to the internet if it's bloated or doesn't solve any real problems. I don't always agree with PHK, but he has a point that the current proposals do not solve all current or future issues. HTTP must be extensible, backward compatible, work with proxy servers, and allow for the continued growth of the internet. HTTP's lack of state is a problem for many of us now, but it was a feature in the early days. It made the protocol lightweight and fast at a time when internet connections were slow. Cookies are abused. Many are created. I don't think adding state to the protocol is going to solve the underlying problem that developers store too much crap in it. Only a session id is necessary. Everything else should be stored server side or in a host page. A nice addition might be to limit where cookies are sent/received beyond the same domain. That would take away the overhead of sending cookies for every image file, AJAX request, etc. They're not always necessary. This can be worked around with a separate domain for images, but it's a hassle to set up.

I think some people have forgotten KISS. Keep it simple, stupid. Seems like everything is getting more complex only to force us back into what we were trying to get away from to begin with. Take NoSQL. Most people are still going strong with map reduce, yet Google has been moving away from it. They're now trying to store indexes and incrementally update them. Gee, what does a relational database server do... it has indexes that get UPDATED. They're trying to reinvent SQL and they don't know it. Similarly, a bunch of cruft is getting added to the HTTP protocol and that will stay with us for a long time. Get it wrong and we end up with NoSQL all over again. NoSQL solves a few problems and creates others. It has use cases. HTTP, on the other hand, has to work for everything. It's critical that it's done right.
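
To put the cookie-overhead point in rough numbers (the figures are made up but plausible):

    # Hypothetical page: 60 subresources (images, CSS, JS) on the same domain,
    # each request carrying the same Cookie: headers.
    cookie_bytes = 800          # assumed size of the cookie headers per request
    subresources = 60
    print(cookie_bytes * subresources, "bytes of repeated cookie data per page view")  # 48000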

Re:While I hate the transfer syntaxes we have (1)

DamonHD (794830) | more than 2 years ago | (#40638545)

"You must figure it out by the end of the document, why not at the beginning?"

Because in many reasonable cases you don't know the final outcome when you've produced the first byte of the response, for example streamed on-the-fly-generated pages possibly with on-the-fly gzip encoding. The user gets to see useful output sooner, and the server can more easily cap peak resources, by streaming/pipelining and lazy eval. Like SAX rather than DOM.

Rgds

Damon

Re:While I hate the transfer syntaxes we have (1)

Anonymous Coward | more than 2 years ago | (#40638491)

XML (or the proposed JSON) isn't a good way to encode computer communications. Why not use a simple extensible binary encoding instead? If a human wants to read the communication for debugging or anything else, it would be easy to translate it to a human-readable format at human speeds. The computer would crunch through the binary-coded format several times faster than optimized text-parsing code (and most such code isn't optimized).
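
One way to picture a "simple extensible binary encoding": length-prefixed key/value fields packed with fixed-width integers, so parsing is a bounds check plus a slice and unknown keys can simply be skipped. An illustrative Python sketch, not any real protocol:

    import struct

    def pack_fields(fields):
        # Each field: 2-byte key length, 2-byte value length, then the raw bytes.
        out = b""
        for key, value in fields.items():
            out += struct.pack("!HH", len(key), len(value)) + key + value
        return out

    def unpack_fields(buf):
        fields, offset = {}, 0
        while offset < len(buf):
            klen, vlen = struct.unpack_from("!HH", buf, offset)
            offset += 4
            fields[buf[offset:offset + klen]] = buf[offset + klen:offset + klen + vlen]
            offset += klen + vlen
        return fields

    wire = pack_fields({b"host": b"example.org", b"method": b"GET"})
    assert unpack_fields(wire) == {b"host": b"example.org", b"method": b"GET"}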

Re:While I hate the transfer syntaxes we have (0)

Anonymous Coward | more than 2 years ago | (#40638529)

Agreed, ancient broken cruft like XML should be limited to use in legacy systems. The sheer space waste and parsing overhead alone would bring the web to its knees, and there's no reason to use full overkill like XML when all you need is a few simple fields. If anything something like protobuf would be the way to go here.

Re:While I hate the transfer syntaxes we have (2)

LO0G (606364) | more than 2 years ago | (#40639205)

Yeah, maybe something like ASN.1.

Oh wait....[1]

[1] If you don't get this, you've never actually dealt with ASN.1.

Re:While I hate the transfer syntaxes we have (0)

Anonymous Coward | more than 2 years ago | (#40640575)

Big Endian or Little Endian? /sarc

Oh yes XML, that efficiently parsable mess (4, Insightful)

Viol8 (599362) | more than 2 years ago | (#40638777)

As a static data format it's just about passable, but as a low-overhead network protocol??

Wtf have you been smoking??

Re:Oh yes XML, that efficiently parsable mess (1)

Skapare (16644) | more than 2 years ago | (#40638925)

XML is for marking up documents. Our problem with HTTP is that it is stuck in the legacy document model. Today we need streams, and optimization of sessions. XML would just be the markup of documents we might want to choose to fetch over those streams. Notice that audio/video/media containers are not based on XML, and never should be.

Re:Oh yes XML, that efficiently parsable mess (1)

Rob Riggs (6418) | more than 2 years ago | (#40640215)

Wtf have you been smoking??

My guess: java beans.

XML? In the name of ${DEITY:-XENU}, Why? (3, Insightful)

luis_a_espinal (1810296) | more than 2 years ago | (#40639079)

I'd suggest basing it on XML, with a header section and header element to get the transfer started, then accepting any kind of structured data, including additional header elements.

Haven't we learned enough already from industrial pain to stay away from XML? JSON, BSON, YAML, compact RELAX NG, ASN.1, extended Backus-Naur Form. Any one of them, or something inspired by any (or all) of them, that is compact, unambiguous (there should be only one canonical form to encode a type), not necessarily readable, possibly binary, but efficiently easy to dump into an equally compact readable form. Compact and easy to parse/encode, with the lowest overhead possible. That's what one should look for.

But XML, no, no, no, for Christ's sake, no. XML was cool when we didn't know any better and we wanted to express everything as a document... oh, and the more verbose and readable, the better!!(10+1). We really didn't think it through that much back then. Let's not commit the same folly again, please.

Re:XML? In the name of ${DEITY:-XENU}, Why? (1)

SuricouRaven (1897204) | more than 2 years ago | (#40639631)

XML has many good uses.

This is not one of them.

Re:XML? In the name of ${DEITY:-XENU}, Why? (2, Funny)

Anonymous Coward | more than 2 years ago | (#40639737)

XML has many good uses.

It's just that none of them involve computers.

So I'm getting a lot of flack for mentioning XML.. (1)

scorp1us (235526) | more than 2 years ago | (#40640841)

But really, any format that can express structured data is endorsed by me. I do not have a problem with JSON; in fact it is my 2nd favorite. My first favorite is Python's style, which is very, very close to JSON. But JSON has the advantage that web people already know it.

Please don't get bogged down with XML. I wrote XML into my post because, despite what you all think, it's not that bad to parse, provided that you use a stream-reader style rather than SAX or DOM. The other reason why I wrote XML is that it does not presuppose any kind of scripting engine, so people would not be tempted to use code which would require a JavaScript interpreter, which would end up being a really bad idea.

XML, JSON, and Python syntax can all express structured data. They are all equally valid, and anything expressed in one can be converted between them all.

Re:While I hate the transfer syntaxes we have (1)

Darinbob (1142669) | more than 2 years ago | (#40641139)

XML is a ridiculous format for this. It is bulky. It is intended to be human-readable, which means it is much longer than protocols intended for machine readability. I can't figure out why everyone seems to think XML is the magic bullet to use everywhere. You don't need schemas for this, and if you did, XML's method is really rotten anyway.

Re:While I hate the transfer syntaxes we have (1)

A bsd fool (2667567) | more than 2 years ago | (#40641479)

I agree with the parsing nightmare, though XML is not the right answer either.

Transport needs encryption, authentication, and compression. Internally, the data can be handled by something similar to inetd+tcpmux.

  1. SCTP w/ default-persistent connections for the transport.
  2. PK signing of data to verify authorship, replacing SSL
  3. Mandatory compression
  4. A generic, extensible, data envelope used to hold the actual goodies, with a few well defined but generic header fields.

This gets rid of most of the problems that currently exist with TCP, allows efficient proxying and reverse proxying, and wastes fewer resources. Encryption at the transport layer means webservers in a virtual hosting environment no longer have to figure out what key to use before they know the target host. Signing data means the stream can be authenticated with existing SSL certificates and the CA infrastructure.

The data envelope would have a minimum number of headers. The envelope types, provided by client and server plugins, could be as simple as a datagram transport for "text/plain" to handle the bulk of existing websites, or as complex as an "application/webdav" handler for remote publishing, or "application/vcr" for interactive a/v media. If the overall protocol is thought of similar to TCPMUX where channel IDs are used instead of tcp/udp ports, we get everything we need, "forever".

The single pre-defined envelope type is simply used to exchange a list of supported envelope types by both ends. This is the only part of the protocol that would continue to be "string based", so no central authority is needed to assign and manage a mapping between envelope types and IDs.
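
Purely to illustrate the envelope idea (every field and name here is hypothetical, sketched from the list above, not an existing protocol):

    import struct, zlib

    def make_envelope(channel_id, envelope_type, payload, signature=b""):
        # Hypothetical layout: channel id, numeric envelope type, detached
        # signature, then the payload; compression is unconditional, matching
        # point 3 in the list above.
        body = zlib.compress(payload)
        header = struct.pack("!IHHI", channel_id, envelope_type,
                             len(signature), len(body))
        return header + signature + body

    frame = make_envelope(channel_id=1, envelope_type=7, payload=b"hello world")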

his criticism is not true in practice (1)

Chrisq (894406) | more than 2 years ago | (#40638323)

He says:

For instance identifying the standardized HTTP headers, by a 4-byte length and textual name, and then applying a deflate compressor to save bandwidth is totally at odds with the job of HTTP routers which need to quickly extract the Host: header in order to route the traffic, preferably without committing extensive resources to each request. ...

It seems to me that routing based on a header is doing entirely the wrong thing. In any case, according to Wikipedia [wikipedia.org]:

TLS encryption is nearly ubiquitous in SPDY implementations

Which rather makes routing on content infeasible (OK you can forward route behind the SSL endpoint, but this doesn't seem to be what he's talking about)

Re:his criticism is not true in practice (5, Informative)

Mad Merlin (837387) | more than 2 years ago | (#40638427)

TFA is talking about reverse proxies (of which Varnish is one of many), which are very commonplace. In fact, you're seeing this page through (at least) one, as Slashdot uses Varnish.

Re:his criticism is not true in practice (2)

Chrisq (894406) | more than 2 years ago | (#40638483)

TFA is talking about reverse proxies (of which Varnish is one of many), which are very commonplace. In fact, you're seeing this page through (at least) one, as Slashdot uses Varnish.

Publicly cached data is outside SPDY's use-case. It is aimed at reducing latency [chromium.org] , and its main target is rich "web application" pages. Now it may well be possible to design a protocol that supports caching as well as reduced latency, but this is not what SPDY was designed to do.

Delenda est. (3, Insightful)

Anonymous Coward | more than 2 years ago | (#40638569)

Then it cannot replace HTTP and should be withdrawn, or it's been wrongfully sorted in under "HTTP/2.0 Proposals [ietf.org] "

The IETF HTTPbis Working Group has been chartered to consider new work around HTTP; specifically, a new wire-level protocol for the semantics of HTTP (i.e., what will become HTTP/2.0), and new HTTP authentication schemes.

Re:Delenda est. (1)

Chrisq (894406) | more than 2 years ago | (#40638731)

Then it cannot replace HTTP and should be withdrawn, or it's been wrongfully sorted in under "HTTP/2.0 Proposals [ietf.org] "

The IETF HTTPbis Working Group has been chartered to consider new work around HTTP; specifically, a new wire-level protocol for the semantics of HTTP (i.e., what will become HTTP/2.0), and new HTTP authentication schemes.

Good point - unless there are particular reasons that a "niche protocol" for highly interactive sites is better than a general-purpose one, a replacement that covers all uses should be preferred. In fact, I have come round to agreeing with TFA: "SPDY Should Be Viewed As a Prototype"

Re:Delenda est. (1)

tibman (623933) | more than 2 years ago | (#40639473)

Isn't it a superset?

Re:his criticism is not true in practice (1)

Skapare (16644) | more than 2 years ago | (#40639015)

It's more than just caching, these days. It's also about sending the requests to the appropriate server. For example, if you can send the requests of a logged-in user to the same server or group of servers, it's easier to manage session state (each of 10,000 servers holding 400 session states, instead of 10,000 servers having to access a centralized store of 4,000,000 session states).

One thing a new protocol could do to better manage that is, after session authentication, tell the client another IP address and/or port to use, so that subsequent requests go to a session group partition. That better distributes the load without needing so large a gauntlet of layer 4+ smart session routers.

Or better yet, just hold the session open for as long as the user is present and using the service. First, think VNC. Then think of replacing VNC with something smart.
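
The "session group partition" idea can be pictured as a stable hash of the session id onto a small group of servers (sketch only; the pool names and group size are assumptions, and the server count matches the example above):

    import hashlib

    SERVERS = ["app%04d.example.internal" % n for n in range(10000)]  # hypothetical pool
    GROUP_SIZE = 25   # assumed partition size

    def server_group_for(session_id):
        # A stable hash keeps one user's requests on the same small group of
        # servers, so session state only has to be shared within that group.
        h = int(hashlib.sha1(session_id.encode()).hexdigest(), 16)
        start = (h % (len(SERVERS) // GROUP_SIZE)) * GROUP_SIZE
        return SERVERS[start:start + GROUP_SIZE]

    print(server_group_for("user-42-session")[0])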

Re:his criticism is not true in practice (1)

Short Circuit (52384) | more than 2 years ago | (#40638465)

Routing based on header is the kind of thing you'd do in an accelerator proxy. You receive the request, look at the headers and perform actions based on those headers. Forwarding the request on to another host is an example of routing.

Re:his criticism is not true in practice (2)

kasperd (592156) | more than 2 years ago | (#40638821)

It seems to me that routing based on header is doing entirely the wrong thing.

But that is something you need to support as long as multiple domains are hosted on the same IP address. Lots of things get easier if you can have a separate IP address for each domain you want to host. But there has been a shortage of IP addresses.

However, there is a solution. You just have to move to IPv6; then you will no longer have a shortage of IP addresses. So what if some people find themselves in a situation where they cannot deploy SPDY on IPv4 (because of limitations in their proxies)? I don't see how that is a bad thing. They can keep using plain HTTP for IPv4 users and SPDY for IPv6 users, where there is no need to host multiple domains on a single IP address.

You might think it is a problem to have this difference between the IPv4 deployment and the IPv6 deployment. It is not a problem; it is a little bit of extra work, but any transition is a little bit of extra work. For any domain where you want to have dual-stack support, you have to host the IPv4 and IPv6 versions of the site on different IP addresses (that's sort of obvious, but needs to be pointed out to make the rest of the argument clearer). You can make your two (or more) domains resolve to the same IPv4 address, which is your HTTP proxy; additionally, you can make the domains resolve to different IPv6 addresses, which are routed either directly to webservers or through some load balancer. You never need to route the IPv6 addresses to the HTTP proxy; they can be routed to a load balancer that only knows TCP and none of the higher levels.

IPv4 support doesn't have to be a design goal in a new protocol designed today. Before a new standard can be agreed upon, another continent or two will have run out of IPv4 addresses. There may still be other arguments against SPDY.

Re:his criticism is not true in practice (1)

jbolden (176878) | more than 2 years ago | (#40639187)

That's a really good idea! Make the HTTP shift coincide with the IPv4/IPv6 shift, and then we can assume one domain per IP. I'm having a tough time seeing how that breaks down. Any mods out there should mod you up for best idea of the day.

Re:his criticism is not true in practice (1)

toejam13 (958243) | more than 2 years ago | (#40641303)

It seems to me that routing based on header is doing entirely the wrong thing.

But that is something you need to support as long as multiple domains are hosted on the same IP address.

In the load-balancing world, this is known as "Layer 7 routing" and it is quite a handy feature. It also goes well beyond just the HTTP Host header. The User-Agent header is probably the most useful, as you can route clients based on browser type or version, operating system, and language. I use this one a lot for forwarding clients to a web_css_pool or web_nocss_pool (looking at you, IE6).
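
A minimal sketch of that kind of Layer 7 decision in Python (the pool names follow the parent comment; the matching rule is just an assumption for illustration):

    def choose_pool(headers):
        # Route ancient IE to the no-CSS pool; everyone else gets the normal one.
        ua = headers.get("User-Agent", "")
        return "web_nocss_pool" if "MSIE 6.0" in ua else "web_css_pool"

    print(choose_pool({"User-Agent": "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"}))
    # -> web_nocss_pool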

interesting but flawed (1)

certain death (947081) | more than 2 years ago | (#40638443)

So, because you would have to design new security tools and think in a different way in order to make it secure, does that make it flawed? Does this mean we are no longer free to innovate unless it fits into some mold? That is just stupid. If someone comes up with a new way of doing things, put on your REAL security hat and come up with a way to secure it; don't just spread FUD about how it is BAD!!

Re:interesting but flawed (1)

fast turtle (1118037) | more than 2 years ago | (#40639563)

So you have the trillion pounds of latinum to upgrade all of the routers on the internet? Good, then give me 5000 pounds' worth so I can finally get the damn router/file/printer server for my household, and provide the 100 million pounds for my ISP to get off their asses and upgrade to DOCSIS 3 and IPv6 tomorrow. Or STFU and get off my lawn.

Rethink HTTP with something else (5, Interesting)

Skapare (16644) | more than 2 years ago | (#40638547)

Much of what the web has become no longer fits the "fetch a document" model that HTTP (and Gopher before it) was designed for. This is why we have hacks like cookie-managed sessions. We are effectively treating the document as a fat UDP datagram. The replacement ... and I do mean replacement ... for HTTP should integrate session management, among other things. The replacement needs to hold the TCP connection (or better, the SCTP session) in place as a matter of course, integrated into the design, instead of patched around as HTTP does now. With SCTP, each stream can manage its own start and end, with a simpler encryption startup based on encrypted session management on stream 0. Then you can have multiple streams for a variety of functions, from nailed-up streams for continuous audio/video to streams used on the fly for document fetches. No chunking is needed since it's all done in SCTP.

Re:Rethink HTTP with something else (0)

Anonymous Coward | more than 2 years ago | (#40638921)

We just need to be able to open a freakin' TCP connection from within the client.

Re:Rethink HTTP with something else (1)

broken_chaos (1188549) | more than 2 years ago | (#40640739)

That would be great, ideally (aside from maybe problems with it being absurdly over-engineered).

But it would be really hard to make it catch on. You'd need to manage support for 'traditional' HTTP and the new protocol in all clients, servers, *and* web applications. Because do you really think Microsoft would backport support into old versions of Internet Explorer that people are still using for some god-unknown reason?

generally agree (0)

Anonymous Coward | more than 2 years ago | (#40638571)

I have to agree with TFA. Even though the guy isn't always clear enough about the issues with HTTP 1.1, it seems like SPDY is more of a fix/speedup for HTTP 1.1 than a true, forward-looking web protocol.

Of course, though, industry favors incremental changes.

HTTP wouldn't pass muster (1)

khipu (2511498) | more than 2 years ago | (#40638615)

If someone proposed HTTP today, it wouldn't pass muster with these experts either. And I doubt that any of these new protocols would really make much of a difference anyway. The infrastructure has been built around HTTP; everybody knows how to compress it and everybody knows how to deal with the kind of multiple connections that it requires. If anything additional is really needed, it could be expressed as hints to the server and the intermediate infrastructure without starting from scratch.

Re:HTTP wouldn't pass muster (1)

Skapare (16644) | more than 2 years ago | (#40638815)

SCTP sessions give you multiple streams to do anything you want in them. And once you have encryption established on stream 0, a simple key exchange is all that is needed to encrypt the other streams. You can do fetches in some streams while others are doing interactive audio/video streaming. And that's all done within one session as the network stack, and session routers, see it.

Re:HTTP wouldn't pass muster (2)

Viol8 (599362) | more than 2 years ago | (#40638845)

"If someone proposed HTTP today, it wouldn't pass muster by these experts either."

And with good reason. Berners-Lee might have invented the web as we know it, but like all first attempts (yes, I know about HyperCard and all the rest; they weren't networked!) it could really do with some serious improvement. Unfortunately the best solution would be to bin it and start again, but it's way too late for that, so it's make do and mend, which almost always ends up in a total mess. Which is what we have today.

Re:HTTP wouldn't pass muster (4, Interesting)

jandrese (485) | more than 2 years ago | (#40639221)

The flipside of this is that a lot of the proposals to replace HTTP suffer badly from the second system effect, where the protocol designer decides to add proper support for all of the edge cases and ends up with a protocol that is gigantic and difficult to implement.

Re:HTTP wouldn't pass muster (1)

jbolden (176878) | more than 2 years ago | (#40639285)

New protocols don't go through committees; they just happen. That's the great thing about using a generic TCP/IP or UDP/IP base. New protocols prove themselves by finding a market; protocol revisions prove themselves by finding a consensus.

Re:HTTP wouldn't pass muster (0)

Anonymous Coward | more than 2 years ago | (#40639873)

Not so sure about that -- HTTP is great because it's so fucking simple (even when compared to other inet protocols like FTP). Sure, everything grown up around it is kludgy, but that nice simple clean core sure beats a web based on DCE RPC or something.

Re:HTTP wouldn't pass muster (0)

Anonymous Coward | more than 2 years ago | (#40640665)

everybody knows how to compress it and everybody knows how to deal with the kind of multiple connections that it requires

That doesn't solve the latency problem that is becoming worse and worse. Bandwidth is increasing really fast, but the speed of light is fairly static and is becoming a huge problem. We need to reduce round trips.

Obligatory (1)

Meneth (872868) | more than 2 years ago | (#40638863)

This is one of the things which makes it hard to avoid the feeling that SPDY really wants to do away with all the "middle-men"

Half the human race is middle-men, and they don't take kindly to being eliminated.

Stupid names (0)

ChrisMaple (607946) | more than 2 years ago | (#40638887)

Is this an author from the planet Varn? Or does he claim to have invented a yellowish coating?

zip file support (1)

Twillerror (536681) | more than 2 years ago | (#40639049)

Wouldn't it be better to have the browser support a zip/tarball path?

Now

  would look through the zip file.

I suppose there could be some security issues here, but it seems like it would be easier than chunking protocols, if not much faster.

Further ...

Now we've got cached apps as well.

Re:zip file support (1)

raynet (51803) | more than 2 years ago | (#40639609)

Nah, it would be much better if we could use rsync:// instead of http:// [http]; it would nicely handle partial downloads, compression, slightly changed files, etc.

You insensitive clod! (-1)

Anonymous Coward | more than 2 years ago | (#40639767)

Why aren't people using SPDY, anyway? (1)

edxwelch (600979) | more than 2 years ago | (#40640043)

HTTP 2.0 is not going to happen for a long time. SPDY is here now, and both Firefox and Chrome support it.
But the only companies using it are Google and Twitter. I'd like to see web hosting companies offer it as a service.

Re:Why aren't people using SPDY, anyway? (1)

alphred (1920232) | more than 2 years ago | (#40640151)

I thought that SPDY's only reason for existence is to push more ads on people. Why else would anyone want to use it?

Re:Why aren't people using SPDY, anyway? (0)

Anonymous Coward | more than 2 years ago | (#40640759)

I thought that SPDY's only reason for existence is to push more ads on people. Why else would anyone want to use it?

Is that a joke or do you read the tech equivalent of FOX news?

SPDY was designed to drastically speed up page loads and reduce bandwidth when loading Google services on mobile devices (phones). It provides the same benefits to desktop systems as well, obviously. It reduces page load times (round-trip latency). Features are:
1) A single server connection instead of opening and closing and opening and closing 100 times (once for each CSS file, script and image)
2) Mandatory compression
3) Mandatory encryption

Er... huh? Article full of nonsense (2)

brunes69 (86786) | more than 2 years ago | (#40640169)

SPDY is encrypted by design. There is no option for middle-men, and frankly, that is the way I like it myself, as would most people, I assume. I don't like it when devices mess with my traffic.

As for most of the other complaints - given that Google is running SPDY just fine on all of its servers, and they're basically one of the largest (if not the largest) hosts on the internet, I think they are all strawmen. If it is working for Google then it will work for others.

My experience using SPDY, as a user, is nothing short of spectacular. The performance gains on Google properties with SPDY are incredible and very noticeable.

Re:Er... huh? Article full of nonsense (1)

Anonymous Coward | more than 2 years ago | (#40641033)

What if you have an organization where you want to cache data (not look at it)? You pay per megabyte for data sent to/from the web. Wouldn't you want caching? Hell, on your box I bet there are no fewer than two 50-megabyte caches (unless you have tweaked the size or installed other browsers).

Oh, and remember all the ISPs out there are drooling to put you on usage-based browsing... What if you have a family of 4? Wouldn't you want to have them caching like mad? I myself installed a Squid proxy. In the past 6 months I have saved downloading nearly 20 gigs of data because of the cache. Average perf boost: 10-15%.

Local caching is awesome. It's local and 5-100 times faster than the internet...

The idea is cool, but it needs to let you cache. In fact, they could go as far as making caching easier and better. If someone can fetch something locally, that is one less file you have to serve up to them and one more you can serve to someone else. Meaning less hardware to buy and more you can pay someone to build you better services...

SPDY is cool for what it does. But it needs work, and it needs to involve guys like this one from Varnish, and Squid.

If you think I am full of it with this caching thing, disable it on your local box. Set your local disk cache to 1 byte and your memory cache to 1 byte. You will feel the pain quickly.

HTTP needs to be replaced altogether (1)

Riskable (19437) | more than 2 years ago | (#40640421)

The problem all of these HTTP 2.0 proposals are trying to work around is the fact that each resource fetched by the web browser is handled via a separate connection. By combining these elements into a single (compressed) stream you can save a TON of overhead. This is why sites that use nothing but data: URI images load faster even than sites using the fastest CDNs. These "solutions" are just workarounds for the crap that is HTTP 1.1.

Of course, the problem with data: URIs is that they can't be cached if the page's content is dynamic. However, the fact that you don't have to open a hundred additional HTTP connections just to load the cached content (have to check if something changed!) more than makes up for the lack of caching.

The real solution here is to just ditch HTTP and replace it with something like SCTP, which can keep the connection to the server open and maintain the session in a secure fashion (negating the need for session-tracking cookies, hurray!). Having said that, such a change to the web would completely break the popular N-servers-behind-a-load-balancer architecture. It would also negate the need for CDNs (for the most part)... which is probably why many of the big-name vendors are proposing solutions that maintain the status quo.
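
For anyone who hasn't inlined resources this way, a data: URI is just the resource's bytes base64-encoded into the page itself, saving a request at the cost of separate caching (Python sketch; the file name is a placeholder):

    import base64

    def to_data_uri(path, mime="image/png"):
        # Embeds the file directly in the HTML/CSS, saving one HTTP request
        # at the cost of per-resource caching.
        with open(path, "rb") as f:
            payload = base64.b64encode(f.read()).decode("ascii")
        return "data:%s;base64,%s" % (mime, payload)

    # img_tag = '<img src="%s">' % to_data_uri("logo.png")   # hypothetical file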

Re:HTTP needs to be replaced altogether (0)

Anonymous Coward | more than 2 years ago | (#40640751)

HTTP and SCTP are two different layers. HTTP is an application protocol; SCTP is a transport protocol. HTTP could run on top of SCTP and gain lots of benefits with relatively little change.

Re:HTTP needs to be replaced altogether (1)

Carewolf (581105) | more than 2 years ago | (#40641207)

What overhead, where? You are confusing several issues. One of the reasons SPDY sucks is that it still uses TCP, like HTTP does. Using HTTP over SCTP would be a great improvement.

The problem with not using TCP, though, is that you no longer get the well-supported encryption from TLS for free.

Re:HTTP needs to be replaced altogether (0)

Anonymous Coward | more than 2 years ago | (#40641747)

I think HTTP needs to be deprecated. Twenty years back, RFCs were deprecated all the time. Gopher had a good 3-year lifetime before being deprecated in favor of HTTP. Anyone remember Archie or Veronica?
The HTTP replacement should be stateful, secure, and able to multicast. With the "cloud" we should have a stateful session protocol and get away from the 20 years of hackery and kludgy development HTTP gave us when we hacked a document retrieval protocol and morphed it for application development. Applications should have a separate protocol. Keep HTTP for its original design of document retrieval.

The hackery of HTTP reaffirms that our profession is still in its infancy.

Re:HTTP needs to be replaced altogether (1)

DamonHD (794830) | more than 2 years ago | (#40642013)

CDNs will still exist to be (a) high-bandwidth and (b) low-latency close-to-the-user commodity servers of large data volumes.

A change of protocol won't eliminate the limitation of light speed and long-distance comms networks.

Rgds

Damon

Good slide show (1)

jbolden (176878) | more than 2 years ago | (#40641851)

For those who did read the article but didn't understand what the debate was about, here is a good slide show from Google about the advantages of SPDY, which also explains the "HTTP router" issues in the article: http://www.slideshare.net/bjarlestam/spdy-11723049 [slideshare.net]
