The Internet

HTTP's Days Numbered (603 comments)

dlek writes: "ZDNet is running an article in which a Microsoft .Net engineer declares HTTP's days are numbered. (For those of you just tuning in, HTTP is the primary protocol for the world-wide web.) Among the tidbits in this manifesto is the inference that HTTP is problematic primarily because it's asymmetric--it's not peer-to-peer, therefore it's obsolete. Hey everybody, P2P was around long before Napster, and was rejected when client-server architecture was more appropriate!"
This discussion has been archived. No new comments can be posted.

HTTP's Days Numbered

Comments Filter:
  • Yeah, but (Score:2, Offtopic)

    by pete-classic ( 75983 )
    don't most P2P systems use . . . HTTP as a transport?

    -Peter
    • Re:Yeah, but (Score:2, Interesting)

      by kooshball ( 25032 )
      The article is especially funny since .NET is built upon SOAP which uses HTTP as its primary transport. Maybe MS realizes that .NET is a stupid idea and they want to set up HTTP as the scapegoat when it fails miserably
      • Re:Yeah, but (Score:2, Informative)

        by osbornk ( 458218 )
        Well, not to comment on .NET, but SOAP can really be used over almost any transport, HTTP just being the most popular.
  • Hopefully... (Score:2, Insightful)

    by gowen ( 141411 )
    HTTP will go back to being the protocol for the transfer of hypertext. It's being forced to do so many things for which it is clearly unsuitable that it's no surprise it's starting to show the strain.
    • Re:Hopefully... (Score:2, Interesting)

      by pla ( 258480 )
      Why? The existence of a standard matters more than its suitability.

      Why do we still have TCP? Because of its ubiquity... A company would have to have even more arrogance than M$ to try to push a non-TCP-friendly network product on the market.

      So why do we have HTTP, along with a million higher layer protocols riding on top of it, many of which might work "better" without relying on HTTP? Because every machine has either a web server or a web browser. Almost 100% supported, in some form or another. Even cell phones have a web browser.

      Throwing away that instant compatibility would seem like insanity to anyone trying to push a new product.

      Or, to put it another way, when did you last write an app that works directly on top of IP? Or even lower? Why not? All that "overhead" of TCP, or the unreliability of UDP, *certainly* leaves room for everyone's personal view of a "good" compromise...

      (Of course, in this forum, I see a good chance of someone responding that they *have* done just that...)
    • by Zeinfeld ( 263942 ) on Tuesday February 26, 2002 @02:44PM (#3071807) Homepage
      HTTP will go back to being the protocol for the transfer of hypertext. It's being forced to do so many things for which it is clearly unsuitable that it's no surprise it's starting to show the strain.

      Nah, the problem with HTTP is that it escaped from the lab a little early. As far as optimised hypertext transport goes we could do a heck of a lot better.

      Don Box is one of the main authors of SOAP. Another author of SOAP is Henryk Frystyk Nielsen who was also a principal contributor to HTTP and spent several years working on HTTP-NG with Jim Gettys.

      The problem HTTP-NG faced is that HTTP did the job too well; there was simply not enough incentive to change. And the folks at Netscape had zero interest in any idea that was NIH; in fact, they would work to kill such ideas, because they saw anything that implied that others contributed to the Web as a personal threat.

      But in any case there was a deployment problem, how does a browser know to upgrade to NG when it makes the request? The principal optimizations in NG were in compressing the initial request to one packet.

      Web services answers this problem. All services are going to be supported on HTTP, the legacy infrastructure is too great. Some services will also offer alternative transports, possibly BEEP, possibly something like NG.

      What I don't agree with at all is the 'peer to peer' confusion. At the protocol level all protocols have an initiator and a responder (possibly multiple responders in multicast). There can only be one first mover however. All peer to peer means is that any device may act as an initiator or a responder. That was actually the original HTTP model, everyone running a Web Browser/Editor would also have a Web server. The protocol describes only one half of the loop, but that does not mean it cannot be there.

      HTTP will always be a client/server protocol but that does not mean that it can only serve the serf/master business model. We designed HTTP to democratise publishing, anyone could publish. In the early days of the Web we had more people publishing on the Web than using it to access stuff. P2P is intrinsic to the Web philosophy, it is not intrinsic to the protocols because that makes no sense. You can only have one initiator in a transaction, at the time we wrote HTTP we used to call the initiator a 'client'. Since then the nomenclature has shifted and SOAP is written somewhat differently, it was not possible to use the term initiator in 1992 even though we understood the issue.

  • In other news... (Score:2, Insightful)

    by Bob McCown ( 8411 )
    ...advances in electronics will create the paperless office!

    Sigh...why do we bother to listen to this kind of prediction, particularly from a source that is trying to control everything...

  • Read the article? (Score:4, Informative)

    by SteveX ( 5640 ) on Tuesday February 26, 2002 @02:03PM (#3071434) Homepage
    The article is stating that SOAP over HTTP should go away... not that the HTTP protocol should go away as a protocol used for delivering web pages...
    • by fajoli ( 181454 )
      Which article did you read? I didn't see any mention of SOAP.
    • by Jordy ( 440 )
      Of course, the second SOAP over HTTP goes away... the only good reason for SOAP to exist goes away as well. The only good argument I've ever seen against protocols such as RPC and IIOP (CORBA) is that they don't play well with firewalls that filter everything but HTTP.

      Of course there are lots of other (bad) arguments for SOAP. The idea that XML is somehow superior to XDR, even though XDR is significantly more efficient, just as easy to write for, and is already a widespread standard. (Hey kids... it even has an 'X' in it.. you know you like X's.)

      There is the argument that SOAP is stateless and statelessness is better than stateful when it comes to RPC requests. Of course, statefulness will have to creep into SOAP sooner or later when someone decides they need ACID transactions.

      There is of course the argument that it's easier to debug text protocols. This one I particularly love, as writing a binary-protocol-to-text converter isn't exactly the most difficult thing in the world to do. In fact, IIOP and RPC protocol debuggers already exist.. and they aren't all that big. Plus there is the fact that the amount of time spent developing the protocol is insignificant compared to the amount of time the protocol is used... therefore it makes perfect sense to put a little more effort into making it efficient.

      Of course, there are the non-RPC-related uses of XML as a protocol too. Jabber seems to use it in a sort of odd, maligned way. Maybe the idea of documenting your protocol using... well, a document language such as XML (IDL in the case of CORBA, UML, etc.) and generating code to do protocol marshalling never occurred to people. It's not like extensible binary formats don't exist (the simplest being key/value pairs, which you can use to represent nearly anything).

      History just repeats itself over and over. The web is not the internet. HTTP sits on top of general-purpose protocols. XML is a document definition language. Java is not as powerful and flexible as C++, and when it finally becomes as powerful and flexible, it will be just as complex. Not everyone has the newest machine; you are not free to waste resources just because it makes your life a bit easier.

      Of course, that's just my opinion, I could be wrong.
  • Well Duh... (Score:4, Insightful)

    by pridkett ( 2666 ) on Tuesday February 26, 2002 @02:05PM (#3071451) Homepage Journal
    Anyone who has tried to understand the various "standards" for web services and their associated train wreck (I think I'm being gracious here) would realize that most of them are bolted on to a protocol that was never meant to serve them in such a way. HTTP is meant for quick requests, not monolithic requests that take a long time.

    Before you rush to say Mickeysoft is destroying the web, please realize that he's referring to web services, not your personal home page (although I'd imagine they'd like to make that proprietary too).
    • by Narcocide ( 102829 ) on Tuesday February 26, 2002 @02:16PM (#3071568) Homepage
      while it's certainly true that http was never originally ENVISIONED as a protocol to serve shoutcast/icecast streams, for example, its usefulness to that purpose is a tribute to how well the spec was thought out. the simple fact remains that it's an incredibly versatile protocol which can be (and is) used for nearly every data/media transport/request over the internet. microsoft is going to have to do something FAR more impressive to convince me they have a good reason to scuttle the most re-purposeable protocol on the internet.

      ever wonder why 99% of ANY urls you see start with an http? ever wonder why flash webpages don't start with something like mmfttp and shoutcast streams don't start with plsttp?

      wonder.
      • by dkemist ( 199970 ) on Tuesday February 26, 2002 @07:38PM (#3074522)
        You still have to concede the point that http evolved to where it is today, and if one had to design something from scratch, it would likely be far different.

        I mean, let's take a connection oriented protocol like TCP and add a text based stateless protocol on top of it. OK, that makes sense so far.... but wait, we want to be able to maintain state, so let's introduce this new concept called "cookies" and we'll use ASCII strings to identify things. And, it would be nice to be able to make multiple requests per TCP session, so let's put together some keep-alive mechanism. Ohh, and I want to be able to talk to multiple servers on a given IP, so let's add a Host header field.... But wait, all of this is transmitted in clear text! Let's engineer a set of encryption protocols to stick between our HTTP layer and our TCP layer. Here we'll solve some of the same engineering problems, like adding an SSL session ID to maintain state. Now, instead of requesting simple documents, how about we design an extensible markup spec to request "web services?" Yeah, that should work.

        It is a testament to the design of the protocol that it's still ticking with all these enhancements (aka hacks.) But, all the layers add bits of overhead that could likely be engineered out if one had the luxury of starting from scratch.
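        For illustration, a minimal sketch in Python of one such layered-up request on the wire, showing the Host header, a cookie, and keep-alive; the host name, cookie value, and port are placeholders, not anything from the article:

        import socket

        HOST = "example.com"   # placeholder server name

        request = (
            f"GET / HTTP/1.1\r\n"
            f"Host: {HOST}\r\n"            # Host header: many named sites behind one IP
            f"Cookie: session=abc123\r\n"  # cookie: state bolted onto a stateless protocol
            f"Connection: keep-alive\r\n"  # keep-alive: reuse the TCP connection for more requests
            f"\r\n"
        )

        with socket.create_connection((HOST, 80)) as sock:
            sock.sendall(request.encode("ascii"))
            reply = sock.recv(4096)                   # just the first chunk of the reply
            print(reply.split(b"\r\n")[0].decode())   # status line, e.g. 'HTTP/1.1 200 OK'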
    • Capabilities like chunked encoding allow HTTP to begin transmitting a response before its total length is known. Explicit instructions for whether the connection is to remain open or closed after the request is complete. Support for acknowledgment before large data chunks are sent to the server. Mandatory backward compatibility certainly hasn't taken anything away from the party. All of this is perfect for client-server, yes, even "monolithic" actions.
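      A tiny, purely illustrative sketch (Python, not from the original comment) of how chunked framing lets a response start before its total length is known:

      def chunk(data: bytes) -> bytes:
          # one chunk: size in hex, CRLF, the bytes themselves, CRLF
          return f"{len(data):X}".encode("ascii") + b"\r\n" + data + b"\r\n"

      # the sender can emit chunks as they become available...
      body = chunk(b"first part, sent immediately ") + chunk(b"second part, computed later")
      body += b"0\r\n\r\n"   # ...and a zero-length chunk marks the end of the response
      print(body.decode("ascii"))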

      Now if you want to use HTTP to do P2P, then most likely you're only doing it so corporate users can flow out from behind their various firewalls without getting special permission from the IS department. Perhaps HTTP isn't right in those situations, but that doesn't mean something is wrong with HTTP. Put the blame where it's due.
  • I don't think a P2P protocol would make more sense for me reading /. or ESPN websites. I agree that P2P could be more useful than HTTP for applications, but I don't see why they wouldn't keep HTTP around for simple browsing.

    But... maybe I just answered my own question. Is this a thinly disguised attempt to revive the much-touted-a-few-years-back "push" technology for the web?
  • by Ed Avis ( 5917 ) <ed@membled.com> on Tuesday February 26, 2002 @02:06PM (#3071455) Homepage
    From the article:
    Box likes to think of HTTP as the "cockroach of the Internet" because "after the holocaust it will be the only protocol left standing."
    Also known as 'moronic firewall policies'.
  • http over used (Score:2, Interesting)

    by bobaferret ( 513897 )
    This sounds like wishful thinking. HTTP is overused. Every time someone wants to implement a new protocol lately, it seems that they do it over HTTP, as opposed to implementing their own, especially if it can be async.

    -jj-

    • With good reason -- many 'security' and firewall administrators will not let other protocols through proxy servers/firewalls.

      Of course if they had a clue they would know that you can encapsulate just about anything in http.
  • by Ars-Fartsica ( 166957 ) on Tuesday February 26, 2002 @02:07PM (#3071471)
    HTTP-NG tried to get off of the ground but ultimately it was too early and too complicated.

    There are obvious reasons to replace HTTP - the most obvious being the creation of true stateful transactions. That said, there will be support for HTTP until 2025 at least, and ultimately legacy support for HTTP will be painful and necessary for coders.

  • He's got a point (Score:2, Insightful)

    by wiredog ( 43288 )
    I've been doing web app development for a few months, and the stateless nature of http is a royal pain. Pretty much the only reliable way to maintain state information, in HTTP, is through cookies.
    • I think that is the point of Hyper TEXT Transport Protocol.
    • by Anonymous Coward
      Put the data you need to track in a database table or two and use hidden form vars to build the query to grab what you want.... Instant serverside cookies... No muss... Just another fscking query....
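      A minimal sketch of that "server-side cookie" idea, with a plain dict standing in for the database table and the hidden form var carrying only the session key; all names here are made up for illustration:

      import uuid

      sessions = {}   # stands in for the database table: session id -> state

      def start_session(initial_state):
          sid = uuid.uuid4().hex
          sessions[sid] = initial_state
          return sid          # emit this as the hidden form var, e.g. <input type="hidden" name="sid" ...>

      def handle_post(form):
          # every later request carries only the key; the real state never leaves the server
          state = sessions[form["sid"]]
          state["hits"] = state.get("hits", 0) + 1
          return state

      sid = start_session({"user": "anonymous"})
      print(handle_post({"sid": sid}))    # {'user': 'anonymous', 'hits': 1}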
  • by PureFiction ( 10256 ) on Tuesday February 26, 2002 @02:08PM (#3071478)
    "HTTP's days as an RPC transport are numbered"

    HTTP works great for a large number of purposes. It will continue to work great for a large number of purposes. However, it is not so great when you are trying to build powerful RPC mechanisms like SOAP on top of it. It's the latter where HTTP will slowly lose favor.

    Your web browser will still be making HTTP requests for HTML documents many years into the future...
  • Does anybody else find it hilarious that this guy from Microsoft says "It's all hackery, it's all ad-hoc and none of it is interoperable." I mean, this is the same Microsoft that goes out of its way to prevent MSN.com from being viewed with other browsers than IE, completely ignores HTML standards and tries to make its own proprietary HTML tags, tries to prevent Java from being interoperable, and uses closed APIs for Windows programming and proprietary formats for DOC files.
    • But he's right. If you want to do anything beyond streaming static text and binary files, you need an add-on; something that forces state information and persistence onto a stateless and transient protocol. ASP, PHP, Cold Fusion, Tango, whatever.
  • I don't know about anyone else, but I'm not going anywhere near .NET. Applications installed on my hard drive, including opera, work just fine for me.

    This is another example of M$'s ego. They keep trying to change the direction of personal computing and the internet, and seem to never have any thoughts about "well, what if this doesn't catch on..." What if they move all of their applications over to .NET, and no one uses it? My dad and brother were perfectly happy with Office 97 until I gave them StarOffice, and they both run Windows 98SE. How soon until people get tired of having the way they compute changed and just stick with what works?
  • NAT & Firewalls (Score:5, Insightful)

    by mikeee ( 137160 ) on Tuesday February 26, 2002 @02:08PM (#3071483)
    The problem is, most machines aren't even really *on* the internet anymore, just on the Web. Which is not as powerful, so you end up with these godawful kludges trying to run applications over HTTP.

    The Right Thing would be to get IPv6 out, make local client firewalls and sandboxing standard, and ditch NAT and central firewalls.

    Yeah, right.

    Instead we have SOAP, an RPC-over-HTTP kludge. We may as well run PPP-over-HTTP and have done with it...
    • by NetJunkie ( 56134 )
      Who in the world would want to administer separate client firewalls? Sure, it sounds great at home with 5 machines; now do it with 5000. No way that will ever happen.

      A good centrally administered firewall won't stop you from doing what you need to do. If a protocol is well thought out NAT will not cause a problem.

    • Client firewalls are total bunk. Let me rephrase. Nearly every client firewall does the exact opposite of what it is supposed to do.

      To secure a machine you shut down services. This closes open ports, and prevents untrusted user input. They hit a closed port, and they get dropped by the OS proper.

      Client firewalls do the same thing, only they are not part of the OS proper, and they usually do packet inspection as well, thus spending much more CPU power to do something simple like dropping packets. What would I, as an attacker, do then? Right. I'd send odd packets at the firewall, trying to peg your CPU.

      I may not control your machine, but I can probably make sure you can't use it.
    • Re:NAT & Firewalls (Score:3, Interesting)

      by dublin ( 31215 )
      In a course I took years ago (when Interop was a small show for net-heads that fit easily in the San Jose convention center), Marshall Rose was talking about security and transports and showed us how some really twisted grad student had implemented PPP over a DNS transport just to prove it could be done.

      Sadly, pretty much everyone agrees that HTTP isn't suited for the things it's being used for today, but everything gets built on top of it because port 80 is pretty much the only thing that's guaranteed to get through firewalls, most of which are stupidly configured and require the corporate equivalent of an act of Congress to get opened up.

      By the way, Marshall Rose is relevant here for another reason, too: his proposed BEEP protocol is (IMO) a far better way to deal with providing a multipurpose transport suitable for a wide variety of things. There's a good BEEP Q&A document [beepcore.org] by Rose on the beepcore.org site. We should be using things like BEEP to avoid having the same arguments and reinventing the same wheels over again every few years/months/weeks. BEEP seems to be gaining traction: The IESG recently approved APEX, the application datagram protocol over BEEP, as a proposed standard, and SOAP over BEEP was similarly approved last summer. Let's hope these get through the grinder in time to do some good, and that the true Internet standard of "Rough Consensus and Running Code" prevails over corporate landgrabs by Microsoft, et al.
  • HTTP workarounds (Score:2, Insightful)

    by ciole ( 211179 )
    There is work going on to address the shortcomings of HTTP...

    i thought basically all web development fell into this category :)

    But seriously, i've been involved in projects requiring using HTTP for purposes which it was not well-suited for - workaround is the name of the game. Old problems, lots of old solutions.

    So while this looks like more .NET propaganda (the buzzword that will not die), i've been imagining a world without HTTP for a while. It's pretty much a given that i'll be able to brag to my grandkids about having used HTTP. So what will the world's computing resources look like then? The "internet" was not neighborhood-pervasive until HTTP - where and when will the next major transition occur? And (god forbid) will it be ".NET"?
  • ... the fact that Web Services don't rely on HTTP whatsoever. Sure, loading the client UI that goes and talks to these Web Services relies on HTTP, as well it should, as it's all go-fetch-me-a-web-page stuff.

    However, the web based UI could easily be implemented such that the actual communication between web services is done through any IP based protocol. Right now HTTP is the one that jumps to most developers' minds, but by no means is it the one that's expected to be used for longer-running services. Personally, I would expect that the web based UI would interact with some running process that would dispatch and receive Web Service data through a message queueing system that provides some form of transactional validity and security. If it's a really long-running service, then this intermediary process could exist much like a state machine, and the web UI could get status updates by hitting that state machine and getting the appropriate response (ie: "Still waiting to hear back from Microsoft's UDDI server!" or "Still waiting for that order to go through!")
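    One way to picture that intermediary, sketched here illustratively in Python with a thread standing in for the dispatching process and a dict standing in for the state machine the UI polls; the status strings and sleep times are invented:

    import threading, time, uuid

    jobs = {}   # the "state machine" the web UI polls: job id -> status text

    def long_running_service(job_id):
        jobs[job_id] = "Still waiting for that order to go through!"
        time.sleep(2)                    # stands in for hours or days of back-end work
        jobs[job_id] = "done"

    def submit():
        job_id = uuid.uuid4().hex
        threading.Thread(target=long_running_service, args=(job_id,), daemon=True).start()
        return job_id                    # the UI holds only this ticket

    job = submit()
    while jobs.get(job) != "done":       # each poll is an ordinary, short HTTP-style request
        print(jobs.get(job, "queued"))
        time.sleep(0.5)
    print("result ready")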
  • by afniv ( 10789 ) on Tuesday February 26, 2002 @02:10PM (#3071493) Homepage
    So, will the new protocol help serve web pages from Mars, where the time delay is quite long?
  • from the article: "I need a way to send a request to a server and not get the result for five days"

    Why does it need to be done at a protocol layer? If I need to submit something to a server that is going to take 5 days to get back to me, I should probably have an account with that server, and when I log back in, can get that information.

    It sounds to me like he's fishing for an excuse to design a new protocol for a need no one really has.
  • The article just points out that using HTTP as a basic transport for more higher level concepts such as a Microsoft Web Service is problematic because:

    a) The way most of the Internet's IP infrastructure treats port 80 traffic will not allow long-duration web service transactions to work reliably (presumably because things like NAT mapping tables will get cleaned up before the transaction finishes)

    b) Because the server can't initiate a connection to a client.

    It's NOT talking about HTTP being unsuitable for pushing web server content around the Internet.
  • by Glock27 ( 446276 ) on Tuesday February 26, 2002 @02:11PM (#3071507)
    Microsoft declares all languages and runtimes besides C# are obsolete, and all office suites besides OfficeXP obsolete.

    Oh yeah, and all operating systems besides Windows XP are obsolete.

    ROFL.

    By the way, the funniest quote in the article was:

    Microsoft has some ideas (on how to break the independence on HTTP)

    Now that was a Freudian slip... ;-)

    299,792,458 m/s...not just a good idea, it's the law!

  • by jeremie ( 257 ) on Tuesday February 26, 2002 @02:12PM (#3071521) Homepage
    HTTP does have its problems, and that's one of the reasons that Jabber [jabber.org] has its own internal transport protocol to accomplish IM.

    I've seen other proposals for HTTP replacements and have been less-than-pleased by their complexity and design. Based on what I've learned from Jabber, and great feedback from many in the open source and standards communities, XATP was born:

    http://xatp.org/ [xatp.org]

    XATP, the eXtensible Application Transport Protocol, is very simplistic and geared to operate at a layer below content, identity, framing, and other application-level issues. Check it out and offer feedback or participate if you're interested.

    Jer

  • by drew_kime ( 303965 ) on Tuesday February 26, 2002 @02:12PM (#3071528) Journal

    Didn't Cringely claim several months ago that they were going to try to do this? Well, not quite, but back in August he wrote [pbs.org]:

    According to these programmers, Microsoft wants to replace TCP/IP with a proprietary protocol -- a protocol owned by Microsoft -- that it will tout as being more secure.

    So they decided to go up one level of abstraction. Hell why not, that way they break even more competing products.

    • that may or may not exist.

      Just because I get an email from some machine, doesn't mean that it really originated there or that it wasn't maliciously crafted or altered by some sleeper virus.

      You know why M$ wants to get rid of the TCP/IP stack, don't you? They didn't write it, and it works. It replaced their own, which didn't work.

      They want to stamp out any trace of non M$ code in their OS.

      Maybe Bellcore or the Multics organization, or even IBM, should sue them for copyright or patent violation on the use of recursive structures like sub-directories.

      If they rip out the stack, I predict a wave of new virus exploits the likes of which hasn't been seen yet.
  • Good points (Score:2, Insightful)

    by hondo77 ( 324058 )

    Seemed to me like Mr. Box raised some good points. Unfortunately, he works for Microsoft, which means that your first impression is "Oh my gosh, Microsoft wants to stamp out HTTP and replace it with some evil, proprietary protocol" (it was my first impression, anyway). Looks like it just means that we'll also be making requests like "newtp://blah.blah.blah" someday.

    I'm in the middle of a project where the one-way nature of HTTP is a bit inconvenient at times so I can see where he's coming from.

  • Another problem with HTTP, said Box, is that it is asymmetric. "Only one entity can initiate an exchange over HTTP, the other entity is passive, and can only respond. For peer-to-peer applications this is not really suitable,"

    Yet this is the whole idea behind Web services, the banner MS insists it waves and waved first. I should be able to issue a request into the Ether and there should be a server sitting there waiting to handle my request. I realize that he's talking about P2P apps, but is one arm of Microsoft not paying attention to what the other is doing? I agree that HTTP probably isn't the best way to send P2P messages, but it's not going away, at least if the Web Services division of MS has anything to do about it.

    psxndc

  • by mfarah ( 231411 ) <`miguel' `at' `farah.cl'> on Tuesday February 26, 2002 @02:16PM (#3071563) Homepage
    Box likes to think of HTTP as the "cockroach of the Internet" because "after the holocaust it will be the only protocol left standing."

    Gee, I wonder WHAT shape that holocaust will take. Maybe it'll be a killer protocol that pursues and assassinates other protocols? Damn, Mr. Box, use the proper words, will you?



    This works for small transactions asking for Web pages, but when Web services start running transactions that take some time to complete over the protocol, the model fails. "If it takes three minutes for a response, it is not really HTTP any more," Box said.

    Well, of course it isn't. Is it, then, HTTP's fault that it doesn't work perfectly when used for stuff it wasn't designed to do? Hell, I'd love to see telnet-over-HTTP done while we're at this.



    "We have to do something to make it (HTTP) less important," said Box. "If we rely on HTTP we will melt the Internet. We at least have to raise the level of abstraction, so that we have an industry-wide way to do long-running requests--I need a way to send a request to a server and not the get result for five days."

    Maybe if we got back to using the proper protocols (say, why don't we rely on ftp for transferring files, for example?), we wouldn't have the current "problem".



    Another problem with HTTP, said Box, is that it is asymmetric. "Only one entity can initiate an exchange over HTTP, the other entity is passive, and can only respond. For peer-to-peer applications this is not really suitable," he said.

    Of course it isn't, HTTP is designed with a client-server model in mind.



    In my humble opinion, this is just the first step from Microsoft in a new FUD campaign against HTTP: "First, we show everyone how HTTP isn't any good, then we roll out our brand new protocol that supports all of HTTP's capabilities, and lacks its limitations. Buy it from us, your beloved Microsoft!".



    "Microsoft has some ideas (on how to break the independence on HTTP), IBM has some ideas, and others have ideas. We'll see," he said. But, he added, "if one vendor does it on their own, it will simply not be worth the trouble."

    This, of course, implies that Microsoft won't control the new protocol on its own... not at first. They'll just "embrace and extend" it later.

    • by jon_c ( 100593 ) on Tuesday February 26, 2002 @03:07PM (#3072038) Homepage
      Mr. Box was not saying that HTTP is not good as a Hypertext Transfer Protocol; he was stating that it's being manipulated to perform RPC, which is true. The theme of the article was how HTTP is bad for RPC, which you seem to also agree with.

      Simply because this guy now works at Microsoft does not mean he has an agenda for evil. As a matter of fact, before working for Microsoft Mr. Box started a little company called DevelopMentor [develop.com]. He's also written a few books [amazon.com], one of which is considered "the" book on COM, Essential COM [amazon.com]; ask any COM developer worth their salt if they own a copy, they do.

      I've known of Mr. Box for years now and truly respect him as a technical writer and developer, and I honestly don't think that he would shill for Microsoft.

      -Jon
  • I share the growing concern that wrapping web services in HTTP is akin to putting a ladder outside your house next to an open window. SOAP is being touted for its ability to get through corporate firewalls via HTTP. I can't believe someone would consider this a feature rather than a bug in the spec.

    We need a wide range of new protocols for web services with security and scalability in mind while they are being developed. We don't want to use HTTP for more than HTML. We want to be able to control who does what, where and to whom.

    I hope the .NET team at Microsoft is listening to this guy.
  • There is a good point that HTTP over TCP doesn't work nicely when there is lag between request and response. It keeps that nasty TCP connection going the whole time, tying up resources. For higher volume, and a more flexible scheme, go to an ACK-ed UDP transport.

    Client: Hey! I want X!
    Server: ok...
    [time passes]
    Server: (X)
    Client: got it!

    In TCP, the "ok..." and "got it!" phases are implicit in that TCP will tell you your message got through. Lots of overhead that the protocol doesn't really need though. In my Networks class we hear the End-to-End argument, that end to end the protocol should be designed to exibit only the state and information transfer it actually needs. Using TCP is a shortcut, and lazy. Good for getting things working fast, not optimal in the long run. Just like the STL, but that's another rant.
    • doh! (Score:3, Insightful)

      by mikeee ( 137160 )
      Using TCP is a shortcut, and lazy.

      Until you end up reimplementing half of it on top of UDP. Badly. And yes, I've seen this multiple times.

      Enough with the NIH, please? There are many years of effort in the common TCP stacks, and many subtle things they do right that you'll miss the first dozen implementations.

      For the love of god, if you need a substantial subset of TCP's features, and can live with the overhead, use TCP!
  • bxxp, I mean beep (Score:2, Informative)

    by Pauly ( 382 )
    Sorry Microsoft, as usual smarter people already knew this and have been working on it:

    www.beepcore.org [beepcore.org]

  • Hmm... An intelligent statement coming from Redmond?

    While I'm certain a lot of this is about leveraging Microsoft to control the 'next' major form of web transport, the engineer in question is right about one thing... HTTP is overused.

    A lot of P2P stuff could be a lot more efficient and resource-considerate if it were to use UDP-style transmission, like email and some online games, rather than 'Virtual Circuit' style TCP connections. Another sweetener to add to the pot is to use Parchive (PAR) style error correction on your datagram packets in order to be more tolerant of faults, etc...

    sender transmits udp0-6, udp7 is lost by the receiver, receiver requests par(0,6), sender transmits it, receiver self-generates udp7, sender transmits udp8-999 with no further par requests, without ever trying to figure out if the receiver got all those packets.

    It's async. It's resource considerate, and it could do a great deal to ease download over p2p architecture.
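    As a rough illustration of the parity idea (a single XOR parity block over a group of equal-sized datagrams, the simplest PAR-style case; the packet contents below are made up):

    def xor_blocks(blocks):
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                out[i] ^= b
        return bytes(out)

    packets = [bytes([n]) * 8 for n in range(8)]      # udp0..udp7, equal-sized payloads
    parity = xor_blocks(packets)                      # one extra datagram the sender can supply

    survivors = packets[:7]                           # udp7 never arrives
    rebuilt = xor_blocks(survivors + [parity])        # XOR the survivors with the parity block
    assert rebuilt == packets[7]                      # the lost datagram is regenerated locally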
  • I wouldn't be surprised if this is preparatory FUD laying the groundwork for MS to introduce some kind of DRM-laden, proprietary, charge-per-use, license-to-implement, no-OSS-allowed, caters-completely-to-huge-business protocol that will warp the web into some kind of horrific, ad-laden, taxed, corporate space where all fun and creativity will be sucked out leaving a soulless morass of stuff-we-must-buy.

    Or maybe not - I suppose one way to look at it is if big biz, sucking from the MS teat, all herd off onto MS's "better" protocol, the rest of us can continue to use HTTP without the control freaks (read: corporations) trying to own it.

  • "Guru" my ass (Score:2, Interesting)

    by z84976 ( 64186 )
    If he was such a guru maybe he'd have realized that http is "hypertext transport protocol" or something like that, not "heavenly total treatment protocol"

    It was designed to --- get this, kids --- deliver WEB PAGES!!! I once heard rumors of this other evil, called ftp, that was once used as a "file transfer protocol!!" The nerve of some of those early networking types!

    Really, though... there are just about as many potential protocols as there are potential uses. So http doesn't lend itself to your insecure privacy invading microsoft-enriching wet dream of global domination. GET OVER IT or LEARN TO USE AN APPROPRIATE PROTOCOL. Really!

    I don't see my linux box making http requests every time I want to mount an NFS share, so why should microsoft's next weapon use it to rob me of my money? HAH! Guru indeed...
  • COBOL is still around. FORTRAN is still around. The reverberations of 80-column cards can still be heard. Screen-scraping is alive and well. HTTP will be with us when Bill Gates' ghostly presence is roaming Internet2.
  • Usual MS FUD (Score:5, Interesting)

    by gweihir ( 88907 ) on Tuesday February 26, 2002 @02:21PM (#3071608)
    I seem to remember that some time in the past MS claimed that the Internet was obsolete and the MicroSoft Network was the future. I think Unix is also obsolete according to MS.

    And of course I am obsolete, since I refuse to view MS products as anything other than toys. Admittedly by now they are toys that actually have some level of stability and can be used for some (limited) tasks without too much hassle. But as long as they insist on sitting on their island (admittedly a large one, but unstable and plagued by document-rot), I will not consider their products "professional" in any sense.

    Incidentally, the only argument in the article (aside from the "argument" that P2P is better than client-server, given as dogma) is that there are problems with transactions that have several minutes of connection time. I am sorry, but I don't see how that makes http obsolete. First, these long transactions are not that common, and second, they work fine. Or are we going towards an Internet where a telnet/ssh connection will be terminated after 3 minutes because the backbone cannot cope?

    Pure FUD, as far as I can tell.
    • Oops, I overlooked the part where he was actually talking about RPC. (Talk about misleading headlines...)

      I still don't quite get the argument. Of course HTTP is not really suitable for RPC. That much can be deduced from its name. Is anybody except MS using it for RPC?

      And I still don't see the problem with the long connections.
  • Excuse me, but the article says that HTTP will still be around after the world ends, so that hardly implies it's going away. The implication is (correctly) that HTTP should not be used for everything.

    Inflammatory headlines and spinning of stories really do no one any good, and we have to slog through dozens of responses by people frothing at the mouth because they only read the intro paragraph.
  • HTTP Info (Score:2, Insightful)

    by Joe U ( 443617 )
    HTTP,

    Was designed to transfer hypertext, not be the end-all-be-all RPC transport of the Internet.

    Microsoft and MANY others made a big mistake of using it as their protocol of choice for everything Internet related.

    Using HTTP as a catch-all protocol defeats the whole purpose of having different ports if everything is on 80. It makes administration a headache, and it lulls people into a false sense of security.

    (Oh, it's only HTTP, we can leave that open...what did you say about a SQL Server HTTP interface? And the SA password is blank on your local development system?)

    HTTP, The HyperText Transfer Protocol; use it for what it was designed for.
  • I make many Web Apps, and I know from first-hand experience that HTTP is not the best protocol for making apps. Keeping state is annoying: either you use cookies or you resort to more hacking tricks, and all the browsers seem to handle the more advanced HTML commands differently. HTTP is great for message boards like Slashdot, simple online ordering, and of course content information, where you basically place your data and get a response. But if you start working on more advanced features, HTTP is annoying, because your program spends a lot of code getting your data and memory into the right spot, then it does its output and it is gone. A different protocol for the more advanced features would be handy, so we are not hacking a way to keep state, writing JavaScript to do real-time input checking, or having an entire page reload to update some data (remember, I am trying to use HTML that is as simple as possible for browser compatibility). The Web is now being used way beyond its intent and does need a new protocol for more advanced documents.
  • Don Box, an architect for Microsoft's .NET Developer Platform team, said HTTP presents a major challenge for Web services, for peer-to-peer applications and even for security.

    I guess Microsoft needs to think outside the box.

  • Did anyone else notice that Mr. Box used the word hack properly? ("It's all hackery")
  • The "Cockroach" (Score:2, Insightful)

    by gcondon ( 45047 )
    To be fair, the engineer interviewed acknowledged that HTTP is the "cockroach of the internet ... after the holocaust it will be the only protocol left standing."

    Of course, that is as it should be. Even bad standards have a tendency to live much longer than anticipated and good standards are rarer than hen's teeth. As a good standard, HTTP rightly deserves a long and fruitful life.

    The nefarious implication is that Microsoft is pushing their own proprietary replacement for HTTP in order to lay down their infamous hammerlock on the 'net just as they have on so many other sectors of the industry.

    While the engineer raises some fairly valid points regarding the applicability of HTTP to alternative networking models such as P2P, I'm sure that most people will read these comments as a thinly veiled plot to extend Redmond's Global Dominance (TM) - and I'm not sure that they would be mistaken.

    Certainly, the issues mentioned regarding high-latency network operations smack of the distributed applications model of .Net, and strike me as more of a macguffin than a critical limitation of the existing infrastructure. This is just the sort of strawman Gates & co. love to use to insert new technologies whose only true purpose is to increase the public's dependence on the Microsoft MotherShip (TM).

    While few would (should) argue that HTTP has room to grow, and may ultimately be supplemented or even supplanted by other standards, I am very leery of such spin coming from such a notoriously anti-standards organization.

    Be afraid. Be very afraid.
  • by Salamander ( 33735 ) <jeff AT pl DOT atyp DOT us> on Tuesday February 26, 2002 @02:29PM (#3071682) Homepage Journal

    The problem with HTTP, as with any stateless protocol, is that there often are (or should be) relationships between requests. Ordering relationships are common, for example, as are authentication states. Stateless protocols are easier to implement, and thus should be preferred when such "implicit state" is not an issue, but in many other situations a protocol that knew something about state could be more efficient. All of this session-related cookie and URL-munging BS could just go away if the RPC-like parts of HTTP were changed to run on top of a generic session protocol.

    Another error embodied in HTTP - and it's one of my pet peeves - is that it fails to separate heartbeat/liveness checking from the operational aspects of the protocol. Failure detection and recovery gets so much easier when any two communicating nodes track their connectedness using one protocol and every other protocol can adopt a simple approach of "just keep trying until we're notified [from the liveness protocol] that our peer has died". This is especially true when there are multiple top-level protocols each concerned with peer liveness, or when a request gets forwarded through multiple proxies. As before, having the RPC-like parts of HTTP run on top of a generic failure detection/recovery layer would give us a web that's much more robust and also (icing on the cake) easier to program for.
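    As a very rough sketch of that separation (illustrative Python, not anything Box or the poster proposed): one background monitor owns peer liveness, and the request path just consults it; the ping callable and retry intervals here are stand-ins:

    import threading, time

    class LivenessMonitor:
        # one background loop owns the question "is the peer alive?"
        def __init__(self, ping, interval=1.0):
            self.alive = True
            self._ping = ping                          # callable returning True/False
            threading.Thread(target=self._run, args=(interval,), daemon=True).start()

        def _run(self, interval):
            while True:
                self.alive = self._ping()
                time.sleep(interval)

    def send_with_retry(monitor, do_send):
        # the operational protocol just keeps trying until the monitor declares the peer dead
        while monitor.alive:
            try:
                return do_send()
            except OSError:
                time.sleep(0.5)
        raise RuntimeError("peer declared dead by liveness monitor")

    monitor = LivenessMonitor(ping=lambda: True)       # a real ping would hit a heartbeat port
    print(send_with_retry(monitor, lambda: "reply"))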

    I don't know if any of this is what Don Box was getting at, but in very abstract terms he's right about HTTP being a lame protocol.

  • I know I'll get moderated down for saying this, but this seems to be a problem lately. Mis-information.

    The article does not say that HTTP's days are numbered. It simply says that HTTP does not work for RPC, and it is completely correct. If I may quote Don Box, the .Net engineer:

    "However, there is nothing wrong with HTTP per se, as its ubiquity and high dependability means it is the only way to get an a reliable end-to-end connection over the Internet"

    That doesn't sound to me like he's trying to get rid of HTTP, or that it's going away.

    HTTP is perfect for what it is used for, but when you get into things where you need real-time processing of data both ways, HTTP simply does not work well. This is all that Don Box is saying.

    Remember folks, just because it's Microsoft does not mean they are always wrong or evil.
  • Seems that this article reveals many of the fundamental flaws in Microsoft's view of HTTP. The idea that HTTP is fundamentally a RPC protocol is somewhat out of line. Of course, that view is precisely why .NET services run on port 80 -- most firewalls don't block it so they can get around security.

    In a very abstract view, HTTP could be an RPC protocol, but it isn't the same kind of RPC that Sun RPC or even Java's RMI (Remote Method Invocation) covers. Sure you can send data back and forth and even cause the server to do some action, but that isn't the design of the protocol. Unlike RPC, HTTP provides no inherent mechanism for passing arbitrary objects -- only text. There is no marshaling of data types at the protocol level. The protocol isn't designed to be used by an application to do anything but retrieve data.

    With XML there is some standard mechanism for packaging arbitrary data types to be sent over HTTP, but this isn't an inherent part of the protocol. The unpacking and reconstructing of these is still at the application level (at best the interpreter of the call will do it so the programmer doesn't have to think about it), but the web server won't have its primary purpose be marshaling of datatypes -- just executing the requested file (assuming it's a CGI type object) or returning the contents of the file for a normal web page.

    There's more to RPC than just a request and a reply -- generally more than just a few functions are made available; HTTP only really has GET, POST, HEAD, and maybe CONNECT for proxy servers. How these are handled is up to the server author -- in the case of Microsoft, they want to think of it as RPC; are we surprised that they have so many security flaws in IIS?
  • Duh. HTTP is not good at things that it was never designed for. Imagine that. This is the reason that it is called Hyper Text Transfer Protocol and not Remote Procedure Call Protocol, or Peer To Peer Networking Protocol or anything else for that matter.

    I guess you need to be a Microsoft employee to have an article written about stating the obvious. It's like saying Radio is not good for sending Television broadcasts.

  • by gnovos ( 447128 ) <gnovos@ c h i p p e d . net> on Tuesday February 26, 2002 @02:33PM (#3071720) Homepage Journal
    HTTP is problematic primarily because it's asymmetric

    It is symmetric, though. If you stick a server on both end points, then you just send a request down one side and get a response down the other. I think the *real* thing people are complaining about is that it is not *stateful*. If I send you 10 requests, when those connections are closed, there is no way to determine what order or when the responses will come back, and there is no inherent state tracking in the HTTP protocol...

    BUT, to the half clueful application developer, there IS extensibility in the protocol, which means you could just use your own header, "X-REQUESTID: 214132dbbcdee43221c", or whatever and track in that way.
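    A small sketch of that trick; note the header name is the poster's own invention, not part of the HTTP spec, and the rest of the names here are hypothetical:

    import uuid

    pending = {}    # request id -> what we asked for

    def build_request(resource):
        request_id = uuid.uuid4().hex
        pending[request_id] = resource
        # the extension header rides along with an otherwise ordinary HTTP request
        return {"X-REQUESTID": request_id}, resource

    def on_response(headers, body):
        # whenever a response (or callback) arrives, match it up by the echoed header
        resource = pending.pop(headers.get("X-REQUESTID"), "<unknown request>")
        print(f"response for {resource}: {body!r}")

    headers, _ = build_request("/orders/42")
    on_response({"X-REQUESTID": headers["X-REQUESTID"]}, b"OK")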

    That being said, however, I think this article is not so much about application developers using HTTP, but people who have some other agenda attempting to pull the wool over the eyes of the reporter. Quotes like "If people can't search the Web they call the IT department, so the IT department makes sure HTTP is always working." just sound stupid. People don't take calls because their protocol has gone down, they take calls because their servers have gone down.

    Right after that comes the quote, "We have engineered the hell out of it". WTF? HTTP is one of the most simplistic protocols I have ever seen. You can implement the entire RFC in an afternoon. (Maybe, however, MICROSOFT has spent a lot of time "engineering the hell out of it", in its attempts to twist it into something less usable)

    What this article looks like, to me, is just a way of explaining to the non-programmer that M$ is smart and forward thinking because they have seen the "melting of the internet" by the backward and old-tech protocols. Any *real* application programmer knows if you want to use HTTP as a transport, you don't just sit there and leave the connection open for a week while the server finishes processing whatever you were doing; instead you send a message that says "do X", the server responds *instantly* with "okey dokey, I will", and then it calls you back later with the results. This is just common sense.

  • It has begun (Score:3, Insightful)

    by pergamon ( 4359 ) on Tuesday February 26, 2002 @02:40PM (#3071782) Homepage
    I knew it was only a matter of time before Microsoft realized it didn't need HTTP anymore. Granted, that isn't *exactly* what this article is talking about, but I think they're just warming up. If you read carefully, they're not just attacking HTTP as an RPC transport, but HTTP because it is an RPC protocol.

    Why bother with HTTP, FTP, SMTP, POP, IMAP, etc when they control most of the clients and almost half the servers on the Internet. They could replace all those with their own set of protocols or, more likely, a single MS-specific protocol. They say they're already working on some new RPC solution right here in this article. It isn't too hard to imagine them introducing this WindowsProtocol on the server and in some beta of MSIE. Then MSIE starts to try to use WindowsProtocol for any network communications before falling back to the standard protocols. In 3-5 years when they're up to 60% or 70% of the server market, server side Windows has an option that is default "on" that disables non-WindowsProtocol connections and client-side Windows starts asking the user if they want to enable connections to "legacy" services, while warning them that it isn't Microsoft so it can't be good. After that, who would run a server that can't accept connections from 90% of consumer computers?

    Of course I don't want this to happen, but what's to stop them? I doubt the <5% of us that realize it's wrong will be able to.
  • by gamgee5273 ( 410326 ) on Tuesday February 26, 2002 @02:46PM (#3071824) Journal
    As are Apple's, and Sun's, and Oracle's. Let's not forget FTP, oh and throw SMTP in there, too.

    Maybe I'm just getting a little George Carlin-grumpy lately, maybe it's because I'm writing a eulogy for a friend's funeral, maybe it's because I'm sick of people at MS attempting to form competent sentences (please, stick with those inspired dance routines!), but please tell me: whose days aren't numbered?

    HTTP has its issues, but referring to it as "the cockroach of the internet" and saying its days are numbered, and then saying that MS has a P2P solution!, just goes to show that not only are they power hungry in Redmond but seriously power-tripping...

    Arrgghhhh....

  • 1. As everyone knows, the WWW is the Internet.

    2. Since the web runs using HTTP, http runs the Internet.

    3. HTTP can't do everything the Internet can offer.

    4. While there are other protocols out there (like ftp, p2p, telnet), only hackers and pirates use them, so they must be insecure.

    5. Therefore, we must change http or the Internet is doomed.
  • duh (Score:5, Funny)

    by cweiblen ( 465407 ) on Tuesday February 26, 2002 @02:55PM (#3071879) Homepage
    I need a way to send a request to a server and not get the result for five days

    Try sending an email to MS customer support

  • by Refrag ( 145266 ) on Tuesday February 26, 2002 @02:59PM (#3071926) Homepage
    MSTP .Net

    Microsoft will be announcing Microsoft Transfer Protocol .Net, which will be used by the WWN (World Wide .Net) for anything from ms-mail (sending electronic messages to friends and family) to paying your ms-mortgage.
  • exaggeration (Score:3, Interesting)

    by f00zbll ( 526151 ) on Tuesday February 26, 2002 @03:01PM (#3071953)
    After reading the article I couldn't help but wonder what Don Box is thinking when he says:

    "If we rely on HTTP we will melt the Internet. We at least have to raise the level of abstraction, so that we have an industry-wide way to do long-running requests--I need a way to send a request to a server and not the get result for five days."

    If I am reading his statement correctly, Box feels HTTP is not suitable for processes that take a long period to get a response. Even if you remove the HTTP layer from SOAP, you would still have a problem. Say someone decides to bypass HTTP, use raw sockets, and establish persistent connections. This means a stateful application has to be built on top of SOAP. I'm just guessing, but if Box is saying RPC has to have sessions and be stateful, that isn't a full solution. For a process like "place a buy order for MSFT when the price is less than 50.00," a stateful application may not be the best solution. It might take 1 day or 2 months for the price to drop below $50.00.

    Microsoft is a supporter of XLang [coverpages.org], which tries to address the problem of stateful transactions. One of the problems of this approach that I can see is that it is limited in scalability and timeout. Once you say all transactions need to be stateful, what is an acceptable timeout? Do all transactions require the same timeout period? What are the rules governing timeout, persistence, and garbage collection of invalid/expired states?

    Why not use an event-based protocol with rules to provide a higher level of abstraction than XLang? The way XLang treats transactions is with case statements. On the surface that sounds fine, until you realize that for every company you do business with, you will have to add cases to handle new situations, which rapidly makes the system harder to maintain. ebXML in my mind uses a better approach, which divides business and functional logic and suggests rules as a possible mechanism. HTTP isn't really the problem for long processes (as in weeks and months). A better solution is an event-based protocol, so using HTTP isn't a big deal. That's not to say there aren't cases where HTTP is really bad for transactions. In cases where response time is a huge factor in processing a transaction, a persistent connection would be more suitable. For things like day-trading applications, where response time affects the price, you would be better off using persistent connections for RPC. It would suck for a day-trading application to lose a buy order because there was a sudden spike in requests and the system couldn't reconnect to send confirmation. Having a persistent connection in this case is the best solution, because response time has to be rapid.

  • http == qwerty (Score:3, Insightful)

    by peter303 ( 12292 ) on Tuesday February 26, 2002 @03:14PM (#3072125)
    Sure, both have flaws. But they are so entrenched, they'll never be dislodged.
  • by BitHerder ( 180499 ) <crroot@worldnet. ... Eet minus distro> on Tuesday February 26, 2002 @03:45PM (#3072391) Homepage
    "But, he said, we can't stay on HTTP forever, despite all the investment and engineering that have gone into it. Among the problems with HTTP, said Box, is the fact that it is a [Not Owned By Microsoft] Remote Procedure Call (RPC) protocol; something that one [Non-MS] program (such as a [Netscape] browser) uses to request a [Potentially Unlicensed] service from another program located in another [NOT WINDOWS] computer (the server) in a network without having to understand [Proprietary] network details.

    "We have to do something to make it (HTTP) less important," said Box. "If we rely on HTTP we will [Never Own The Internet Right Down To The Roots]. We at least have to raise the level of abstraction, so that we have an industry-wide [Monopoly] way to do long-running requests--I need a way to [Make Money Writing Books On How To Use Our Protocol].
  • Eh? (Score:3, Funny)

    by The Cat ( 19816 ) on Tuesday February 26, 2002 @04:16PM (#3072641)
    "I need a way to send a request to a server and not the get result for five days."

    No doubt so the server can be rebooted three or four times...

  • solution (Score:3, Funny)

    by y2dt ( 184562 ) on Tuesday February 26, 2002 @04:20PM (#3072671)
    i know! how about letting microsoft embrace and extend HTTP and make a new protocol that they control.

    they can start implementing it in IE and Windows first, then over time completely remove support for things like HTTP.

    i think i just threw up in my mouth.
  • by metoc ( 224422 ) on Tuesday February 26, 2002 @04:24PM (#3072703)
    HTTP is a protocol that was developed as a solution to a problem. That didn't mean we stopped using POP, FTP, Gopher, Telnet, KERMIT, etc., as they were developed to solve different problems. Now the new problem is Web Services, and the solution should not mean that we will stop using HTTP to deliver web pages, or FTP to move files. We should not fear a new protocol (assuming it is good & worthy). As long as the solution has an IETF RFC number, with all the consultation and work required, it can be implemented by anyone. Remember HTTP wasn't invented by Microsoft, Netscape or even Linus. If you don't want Microsoft, AOL, Oracle or the MPAA developing the next solution, then come up with a great idea and start submitting RFCs.
  • ADMISSION: This post is the result of original ideas added to shameless [plagiarism | merciful summation] of other posts on this topic.

    From the article:

    I need a way to send a request to a server and not get the result for five days.

    How about email?

    As so many people have said, the whole problem comes from an over-reliance of HTTP. If you need the request in 5 days, you probably need some other kind of service.

    However, his complaint about the time-frame of HTTP requests has deeper implications than he perhaps realizes. For example, if your request takes 5 days, you'd better be ready to compensate your content provider for machine usage, because it must be extremely resource intensive. (Maybe MS passport could help out. .)

    If I'm requesting a reply over a 5-day time frame, ideally I would not need to have my machine powered up to receive the reply, as most machines are turned off daily. So, some kind of asynchronous protocol with intermediate storage -- like email -- would be required.

    So, we need a service that checks for the latest server responses whenever you start it up, and automatically keeps track of how much you should be charged for each transaction. Actually, I think an HTTP/SMTP implementation would not be poorly suited, at least with a Free Software application server doing the heavy lifting. (See another posting on MS and intellectual "property" sharing.)

    A new PHP function:

    function do_5_day_request_and_charge_for_it($user, $args)
    {
        do_lots_of_stuff($args);
        charge_lots_of_money($user);
    }

    I'll write that function if you promise royalties off each function call :)

    Of course, if you wanted a seriously secure system, you would either require credit card info beforehand or require payment before issuing the response (at least for new users) to discourage fraud.
  • by MousePotato ( 124958 ) on Tuesday February 26, 2002 @06:58PM (#3074187) Homepage Journal
    I hate stories like this. The title would lead you to believe that http:// is on its way to being dead. While that may or may not be true in regard to http, the same could be said for anything and everything (face it, we are all a little closer to the day WE are going to die with every tick of the clock).

    What I want to know is this; what is going to replace http? The article really doesn't say other than alluding to p2p as the way of the future.

    Now, I may agree that p2p will be way cool as its uses are just barely beginning to be explored but I don't think we will see http disappear any time soon. I wouldn't be surprised if five years from now things are essentially the same as they are now in this respect and http is still a staple of many things web related.

    And this;
    "We have to do something to make it (HTTP) less important," said Box. "If we rely on HTTP we will melt the Internet. We at least have to raise the level of abstraction, so that we have an industry-wide way to do long-running requests--I need a way to send a request to a server and not the get result for five days."


    Why? If you make a request and it takes that long to get a response... your clients will search for a different source for the data. How many customers do websites lose just for loading slowly in the first place? You might as well use snailmail for that kind of stuff.

    So... mod me down if you must but somebody please explain what Box is talking about here.
  • Pfeh (Score:3, Interesting)

    by mkb ( 88436 ) on Tuesday February 26, 2002 @07:13PM (#3074313)
    As Bruce Schneier says, the cutting edge is always moving, but the low-end is here to stay.

    Old standards have a way of hanging on, even when there are superior replacements. Look at all the strange, vestigial crap in PC hardware. Look at NTSC video, 8.3 filenames. You'd be amazed how many large companies keep important data in VSAM files instead of real databases. People might start using new standards for new applications, but the old standards will still cling on in the old applications, or even in new applications that must interact with old ones.

    Are there exceptions where old technology was phased out quickly? Sure. But the cost of change can be high, and business people generally want technology that is Good Enough rather than following the latest and greatest trends just for their own sake.

    You should see the old tech that is still used in the financial sector. In these parts, the rule of thumb is "If it ain't broke, don't fix it." When you are dealing with people's money (and government regulations thereof), the cost of botching a change is very high.

    --mkb
  • by Chris Johnson ( 580 ) on Tuesday February 26, 2002 @07:23PM (#3074405) Homepage Journal
    "I need a way to send a request to a server and not get the result for five days."

    ...Windows XP Datacenter :D

  • *chuckle* (Score:5, Interesting)

    by sheldon ( 2322 ) on Tuesday February 26, 2002 @08:30PM (#3074874)
    I've never heard Don Box described as just a .Net engineer. That'd be like calling Richard W. Stevens just a "C programmer."

    Thanks for the laugh. It's always good to be reminded just how out of touch /. is with the Windows world.
