Features of a post-HTTP Internet?

Ars-Fartsica asks: "We've been living with HTTP/HTML ("the web") for quite a while now, long enough to understand its limits for content distribution, data indexing, and link integrity. Automatic indexing, statefulness, whole-network views (flyovers), smart caching (P2P), rich metadata (XML), built-in encryption, etc. are all fresh new directions that could yield incredible experiences. Any ideas on how you would develop a post-HTTP/HTML internet?"

  • Let's all just capitulate and make the official format a Microsoft Word document.

    Michael. [michael-forman.com]
    • Dude, that wouldn't waste enough bandwidth - I say we make them all PDFs with embedded fonts (FULL fonts), and lots of graphics.
      • whoops (Score:3, Funny)

        by Tumbleweed ( 3706 ) *
        That topic should've been changed to 'PDF To Yo Momma'

        Sorry, my bad.
      • Embedded fonts? What are you smoking? The proper way to do it is to just have graphics! That is, if it's a scanned-in document, no OCR, or have the authoring tool render into the big, fat-ass PDF-TIFFs. High color and really high DPI too- that's important in this hi-tek age, eh! Nothing better than a 10 MB PDF for one page of black and white text!
        • No, really, think about it - you could use a different font for each word in the document, then embed each full font in there. That'd totally rawk!

          I agree, though - lots of uncompressed TIFFs are a good thing, too.

          And then people can quote the existing PDF stuff, and add one line (with an entirely new font) that just says, 'I Agree.'

We'll make good use of that new Verizon fiber to the premises bandwidth, no problemo!

          I want FTMF - Fiber To My Fingers!
          • Use of such a mechanism is not compliant with the Americans with Disabilities Act. The proper, legal approach is to embed WAVE files of the text being read, and verbal descriptions of all graphics.

            Furthermore, citation is a significant problem on the Internet (for example, used resources can go away if cited by URLs). We need to solve the citation problem -- the appropriate approach is to embed all files used as sources of content for the existing file (which would, in turn, contain copies of all *their*
            • Actually, to further ADA compliance, it should be full video (with captioning), that way not only do the blind get access, but the deaf as well. Yeah, this is sounding good.

              Time to apply for ISO?
              • Time to apply for ISO?

Oh, surely not quite yet. The ISO committees are good at being certain not to avoid including anything that someone might want, but they aren't perfect, and we need to be sure to avoid missing crucial features.

                Actually, to further ADA compliance, it should be full video (with captioning), that way not only do the blind get access, but the deaf as well.

                This is a good example. We were ready to go to ISO with this. But there are more -- what about dual-language law in states borde
                • "Consider the problem of dealing with revisions -- only a few file formats allow revision-tracking. Clearly, we should not exclude such a useful feature. Each file should also contain deltas from all previous revisions of that file -- much like a file in a CVS repository."

What about data corruption? That's a huge problem these days. We'll have 3 copies of the document embedded inside itself, so that if one is corrupted, you have 2 more chances. Also, each version will also have 3 copies. What's the point o
Well, if you're going to do it right, you should go mil-spec and have it contain a whole set of manuals explaining how the thing works, all the way down to an entire printout of the source code with full, sensible comments.

Also, since they might have a machine that's not completely compatible, it should include schematics and instructions for building their own machine, Altair-style.
But at the same time you're enlarging your docs, you have to spend more time, especially precious time, collecting related graphics. Well, let's just use an automatic script to collect unrelated spam and stuff it into the docs in a form the viewers will never see, in order to waste enough bandwidth, alright?
  • Wrong question. (Score:5, Interesting)

    by daeley ( 126313 ) * on Thursday July 29, 2004 @01:51PM (#9833986) Homepage
    Any ideas on how you would develop a post-HTTP/HTML internet?

    First identify the problem, then you can start devising solutions.

    So what's the problem? You mention certain limits of HTTP/HTML. Would these be overcome with better applications rather than throwing everything out?
    • Re:Wrong question. (Score:5, Interesting)

      by aklix ( 801048 ) <aklixpro&gmail,com> on Thursday July 29, 2004 @04:39PM (#9836453) Homepage Journal
HTTP is a transfer protocol that does everything I need it to do. As for HTML, we practically have a post-HTML internet already: DHTML, Javascript, CSS, and pretty soon Apple's Canvas. It all works nice and pretty. So why would we need a post-HTTP internet, especially when we have other protocols to do other things?
HTTP's fundamental flaw for web applications is that it doesn't have concurrent sessions like telnet. That means no matter what you do, in the end all the things like Javascript, XML, sessions and such will never be more than ugly hacks around the basic problem that HTTP doesn't keep track of the state of what's going on between machines.

For a web-applications solution [HTTP is a great protocol that shouldn't go anywhere!], I'd propose a new protocol much like IBM's 3730 or 5250 terminal sessions...
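
        A minimal Python sketch of the kind of state shoehorning described above: a cookie carrying a server-side session id, layered on top of stateless HTTP. The cookie name, the in-memory store, and the port are made up for illustration.

        # Bolting session state onto stateless HTTP with a cookie (illustrative only).
        import uuid
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from http.cookies import SimpleCookie

        SESSIONS = {}  # session id -> per-client state; lives only as long as the process

        class Handler(BaseHTTPRequestHandler):
            def do_GET(self):
                cookie = SimpleCookie(self.headers.get("Cookie", ""))
                sid = cookie["sid"].value if "sid" in cookie else None
                if sid not in SESSIONS:                 # no state yet: invent an id
                    sid = uuid.uuid4().hex
                    SESSIONS[sid] = {"hits": 0}
                SESSIONS[sid]["hits"] += 1              # the "state" HTTP itself never tracks

                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.send_header("Set-Cookie", f"sid={sid}")  # client echoes this back next time
                self.end_headers()
                self.wfile.write(f"hits this session: {SESSIONS[sid]['hits']}\n".encode())

        if __name__ == "__main__":
            HTTPServer(("localhost", 8080), Handler).serve_forever()

        Every request still arrives as an independent transaction; the continuity lives entirely in the application's table, which is exactly the "ugly hack" being described.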

People accept the limitations of HTML and HTTP because it's currently the best thing out there. It does have problems, though:

Scalability. A server that isn't well provisioned can easily be slashdotted or DDoSed into oblivion. Not everyone can afford a DS3 or Akamai. This problem could be solved through replication.

Document identity. A document's location is a permanent part of its name. If a document moves, its contents are the same, yet its name changes. Sometimes, it's nice to be able to

  • Why? (Score:5, Insightful)

    by MaxwellStreet ( 148915 ) on Thursday July 29, 2004 @01:52PM (#9834002)
    Given that all the technologies you mention work just fine across the internet as we know it....

    Why think about getting rid of html/http?

    The pure simplicity of developing and publishing content is what made the WWW take off the way that it did. Anyone could (and generally did!) build a site. It was an information revolution.

    The other technologies will handle the more demanding apps out there. But HTML/HTTP is why the web (and in a larger sense) the internet is what it is today.
  • by dougmc ( 70836 ) <dougmc+slashdot@frenzied.us> on Thursday July 29, 2004 @01:53PM (#9834005) Homepage
    So, HTTP (and HTML, though the two really have nothing to do with each other, beyond the fact that HTTP is the primary way of delivering HTML) can't do everything. We know this. We have always known this, for as long as we've had HTTP.

    Has something changed that I'm not aware of here?

    HTTP may be the most popular protocol out there, but it's hardly the only one. SMTP is really popular, FTP, NNTP, IRC, whatever all the IM systems use, UDP protocols used by games, DNS ... many of these may be showing their age, but they're not showing any signs of going away any time soon.

    • Hyper-Text Transport Protocol
      Hyper-Text Markup Language

      Have nothing to do with each other?
      • Hyper-Text Transport Protocol

        Hyper-Text Markup Language

        Have nothing to do with each other?
        That is correct. In spite of the similar names, they have almost nothing to do with each other, beyond the fact that html is often delivered via http.
      • Hyper-Text Transport Protocol
        Hyper-Text Markup Language

        Have nothing to do with each other?

        Yup, that's correct... (-: They are completely "orthogonal" to each other.

The notion of HTTP being a "hypertext"-related technology is more of a historical accident than anything. (Hypertext was a buzzword of the '90s; everybody laid claim to the word.) The developers of HTML wanted a more elegant way of serving web pages than older protocols like FTP and Gopher, so they contributed to HTTP's development. Ho

The stateless nature of HTTP is exactly the problem right now, because everybody is trying to either shoe-horn state in [i.e. PHP] or rebuild their own state-keeping through some other port [i.e. J2EE & .NET]. What's needed is an open alternative that bundles all those specs up and makes it easy. I brought up IBM 3730 & 5250 in another post because they are oldie-but-goodie specs that do exactly what everybody is looking for... server-client & client-server interactions, data integrity, ability to
The stateless nature of HTTP is exactly the problem right now, because everybody is trying to either shoe-horn state in

Yup. But if HTTP hadn't been stateless to begin with, it would not have been adopted widely in the first place.

            [BEGIN RANT MODE]

A lot of government and managerial people seem to forget that freedom from restrictions and overhead is what makes technologies and social processes popular. The current rush to reassert "control" over the Web and the Internet is going to drive away the very pe

you miss the point... HTTP has gone as far as is reasonable... and that's OK!

Why can't anybody see that? It's time for something new to be developed that meets the needs of the NEXT 10 years. It's not about "control"; rather, my point was that OSS should grab the reins FIRST. HTTP was mostly an OSS-type project and that made it very successful. It's time to put all the ducks in a row and lay out a new course... before THEY do it for us!

well, DNS still pwns HTTP, since almost any HTTP request requires DNS, while DNS is used for many non-HTTP purposes


      DNS IS TEH R0X0R
      HTTP SUX P3N0R
  • LaTeX (Score:1, Interesting)

    by Anonymous Coward
    I've been saying for years that if we had only adopted LaTeX as the primary means of displaying Web documents, we'd have a considerably more wonderful content delivery system.

    (LaTeX, being a programming language, is quite adept at laying things out, and accepting new sorts of extensions. It would be ideal for this kind of display ...)
    • I love the concept behind LaTeX. I love the quality of the output from existing LaTeX implementations.

      That being said, the syntax of LaTeX is a pain to learn, a pain to code in, and just not all that great.

      Now, I deal with the syntax because the approach of higher-level formatting is so good, and because the implementation is so good, but boy I wish that it was better.

      Oh, and LaTeX doesn't have the excellent error detection and reporting of, say, perl.
  • by Tumbleweed ( 3706 ) *
    ...would be to finally switch to IPv6; that would solve a lot more problems than mucking about with HTTP. Oh yeah, that and banninating IE from the Computosphere.
Actually, IPv6 has nothing really to do with HTTP or HTML. The web (as many slashdotters can explain even better than I) is built upon many layers:

Digital or analog
physical transmission type (ethernet, optical cable, phone line, radio waves)
addressing (IPv4, IPv6, probably more that I am not aware of)
transport protocol (TCP, UDP, etc...)
application protocol (HTTP, FTP, Gopher, SMTP, etc...)

      That is why you can change one of the layers and none of the others have to know about it. In other words, you can serve up so
    • Oh yeah, that and banninating IE from the Computosphere.

      Banning IE would be easy if you could get a following. Just put JavaScript code in the 'onload' parameter of the 'body' tag that detects browser and if it detects IE, redirects to the Firefox download page. (I don't remember the code to do this off the top of my head, but it is very feasible) No one could use IE on your site. Get enough people to do this, and you've effectively 'banned' IE from every site except one. Then, people will start to us

  • Forget HTTP. (Score:5, Interesting)

    by Spudley ( 171066 ) on Thursday July 29, 2004 @01:57PM (#9834081) Homepage Journal
    Forget about replacing HTTP - let's deal with the real problem protocol first: SMTP.

    Please! Someone give us a secure email protocol that doesn't allow address spoofing.
    • Re:Forget HTTP. (Score:5, Insightful)

      by ADRA ( 37398 ) on Thursday July 29, 2004 @02:15PM (#9834374)
      Spoof integrity will always come down to two factors:

1. Verification of Sender - This will never happen unless systems like cacert.org start to take off. Basically 99% of the internet doesn't give a damn about certificates, and the ability for anonymity becomes more limited. A debate about privacy/spam could go on for years if given the chance.

2. SPF-like protocols - This is the ability to discriminate who is and who isn't allowed to send email from a given domain. This will cause a few things (a toy sketch of such a check follows below):
      - a. Every mail sender must be from a domain
      - b. Every mail sender has to route through an institutional server (the road warrior problem)
      - c. Every institutional mail server must deny relaying from anyone non-authenticated. (Should be done already)
- d. The institution must be regarded positively by the community at large. If it isn't, it's completely eliminated from sending email.
      - e. You have to get DNS servers that you can update.
      - f. You must lock down the DNS server from attacks (Have you done this lately?)

Anyways, both solutions are possible, but neither is ideal for everyone. SPF has a real chance of shutting down spammers, but I imagine the wild-west internet we know is pretty much over.
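
      A toy Python sketch of the SPF-style check from point 2 above. The policy table here stands in for DNS TXT records, and real SPF records ("v=spf1 ...") have many more mechanisms than plain CIDR ranges; this only illustrates the basic "is this IP allowed to send for this domain" test.

      # Toy SPF-like check: is this connecting IP allowed to send mail for this domain?
      import ipaddress

      PUBLISHED_POLICY = {            # domain -> networks its mail may originate from
          "example.org": ["192.0.2.0/24", "198.51.100.25/32"],
      }

      def spf_like_check(mail_from_domain: str, sender_ip: str) -> str:
          networks = PUBLISHED_POLICY.get(mail_from_domain)
          if networks is None:
              return "none"           # domain publishes no policy: can't say either way
          ip = ipaddress.ip_address(sender_ip)
          if any(ip in ipaddress.ip_network(net) for net in networks):
              return "pass"
          return "fail"               # receiving server may reject or score the message

      print(spf_like_check("example.org", "192.0.2.77"))    # pass
      print(spf_like_check("example.org", "203.0.113.9"))   # fail
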
      • Re:Forget HTTP. (Score:3, Informative)

        by 0x0d0a ( 568518 )
        This will never happen unless systems like cacert.org start to take off.

        Or decentralized trust systems, but yes.

Basically 99% of the internet doesn't give a damn about certificates, and the ability for anonymity becomes more limited.

        Not really. I can create multiple electronic personas, unless you're trying to enforce a 1:1 id:person ratio.

        2. SPF-like protocols - This is the ability to discriminate who is and who isn't allowed to send email from a given domain. This will cause a few things:

        Where "SPF-li
I think trust systems are the best option. Non-technical users can have blobs of trust created by their ISP: e.g. MSN trusts that an MSN user is trustworthy, that AOL users are trustworthy, and that other trustworthy providers are too. Technical users can trust friends, trust major service providers, trust friends of friends, and revoke trust as abuses occur.

          So AOL or MSN or whatever can establish the one account to one owner relationship. Randomly generated emails, even from valid addresses would be ignored since t

          • How are the licenses for PGP and GPG? I have to wonder why web mail environments haven't tried stuff like this. You could completely hide all the complexity within the service and between known PGP/GPG-happy services.

            The problem is that the trust system bundled with GPG (not that you couldn't build something on top of GPG's trust system) is binary -- you trust someone or you don't. There's no concept of "sorta trusting persona A, and therefore trusting persona B, which persona A trusts, somewhat less".
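
            A sketch, in Python, of the graded trust described above: trust is a weight between 0 and 1, and trust that reaches persona B only through persona A is discounted, so B ends up trusted somewhat less than A. The graph and the multiplicative rule are one simple choice for illustration, not GPG's actual web-of-trust model.

            # Non-binary trust: weights in [0, 1], discounted through intermediaries.
            DIRECT_TRUST = {                      # truster -> {trustee: weight}
                "me":        {"persona_A": 0.9},
                "persona_A": {"persona_B": 0.7},
                "persona_B": {"persona_C": 0.8},
            }

            def trust(source: str, target: str, visited=frozenset()) -> float:
                if source == target:
                    return 1.0
                best = 0.0
                for nxt, weight in DIRECT_TRUST.get(source, {}).items():
                    if nxt not in visited:
                        best = max(best, weight * trust(nxt, target, visited | {source}))
                return best

            print(trust("me", "persona_A"))   # 0.9   (direct)
            print(trust("me", "persona_B"))   # 0.63  (0.9 * 0.7: trusted "somewhat less")
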
      • Spoof integrity will always come down to two factors:


        I think as long as there is a valid from address I'd be happy. As long as I can send back a 5 gig /dev/random attachment to you, you can send me spam until you are blue in the face.

        Verifying this is as simple as having a two way handshake protocol before delivering mail.
    • Forget about replacing HTTP - let's deal with the real problem protocol first: SMTP.

      What, work on SMTP, while there are children starving somewhere in the world?

If we listened to people like you, nothing would ever get done. Well, perhaps some starving people would be saved. But that's beside the point, sheesh.

    • by Anonymous Coward
      If you think there's a problem with SMTP, then you don't understand what it's doing.

      Claiming that there's a 'spoofing' problem with SMTP is like saying there's a 'spoofing' problem with HTTP, because *anyone* can put up a website claiming to be anyone else.

      It's *NOT* a problem with the delivery protocol.

      There already is a way of preventing address spoofing with email - it's called PGP, and using it doesn't require any change of SMTP.
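
      The principle is easy to sketch. The example below uses an Ed25519 keypair from the third-party Python `cryptography` package rather than OpenPGP itself, so it only illustrates the mechanism the parent describes: authentication rides on top of the message, and the delivery protocol (SMTP) never has to change.

      # Signature-based sender authentication layered above the transport.
      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
      from cryptography.exceptions import InvalidSignature

      sender_key = Ed25519PrivateKey.generate()          # stays with the sender
      sender_pub = sender_key.public_key()               # published, like a PGP public key

      message = b"From: alice@example.org\n\nLunch at noon?"
      signature = sender_key.sign(message)               # shipped alongside the mail

      try:
          sender_pub.verify(signature, message)          # recipient checks authorship
          print("signature good: really from the key holder")
      except InvalidSignature:
          print("spoofed or tampered")
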
  • Rewriting? (Score:5, Insightful)

    by Ianoo ( 711633 ) on Thursday July 29, 2004 @02:06PM (#9834222) Journal
    Why is it that developers feel the need to periodically scrap everything they've been working on, then reimplement it, usually in a more half-assed way than the original? (I'm talking to you, Apache programmers! ;)

But seriously, where's the need to dump HTTP? It's not exactly a complicated protocol, and it can be adapted to do many different things. Pretty much any protocol can be tunneled over HTTP, even those you'd normally consider to be connection-oriented socket protocols.

    As for HTML, again - why the need? By using object tags and plug-ins, the browser is almost infinitely extensible. Flash and Java bring more interactive content, streaming brings sound and video, PDF brings exact display of a document to any platform, and people are using all sorts of different XML-type markups every day now, such as RSS, XML-RPC, SOAP, and so on to do all kinds of interesting things like Web Services and RPC.

Microsoft and the open source community are both working on markup-like things that will enable applications to operate over the web (all via HTTP). XAML and XUL's descendants might well have a big future, especially if the way documents should be displayed is more rigorously specified than in HTML.
    • Re:Rewriting? (Score:1, Insightful)

      by Anonymous Coward
      Why is it that developers feel the need to periodically scrap everything they've been working on, then reimplement it [...]?

      Are you a developer? There are lots of reasons, but they are not very good ones. It sounds like it might be discouraging, but it's really quite fun. You know the basic idea of how to do it, because you've done it once already, so you get to think about how to do it better. On a small scale, it is called refactoring. On a large scale it is probably a waste of time. But a lot of people
    • Re:Rewriting? (Score:3, Insightful)

      by miyako ( 632510 )
      Why is it that developers feel the need to periodically scrap everything they've been working on
The reason is that oftentimes the original design of something does not facilitate the structured addition of newer features, mainly because when $foo is first developed nobody has any idea that people will want to be doing $bar 10 years down the road. Finally someone finds a way to allow $bar by tacking a few things onto $foo with superglue and duct tape. At first this is no big deal; $bar is just a small
    • Job security.

      Now ask me a hard one.

      -Peter
  • by self assembled struc ( 62483 ) on Thursday July 29, 2004 @02:08PM (#9834256) Homepage
The fact that HTTP is stateless is one of the reasons that Apache and its kin scale so effectively. The instant they're done dealing with a request, they can do something else without thinking about the consequences. Why do I need state on my personal home site? I don't. Let your application logic deal with state. Let the protocol deal with data transmission, period.
If all those things in the title were used to develop a website, I think the things one could accomplish are amazing. As it stands, you can already use XHTML and XMLHttpRequest to do highly dynamic websites. Sometimes I wish so much emphasis wasn't put on backwards compatibility in the web. I wish browsers could automatically detect what version of HTML the webpage requires, and generate warnings if your browser's too old to properly render it, with a handy "update here" link.

    PS: Canvas is a new tag from ap
  • by km790816 ( 78280 )
    I heard a bunch of stuff about XTP a while ago, but not much recently.

    Here's what Google finds:

    http://www2.ics.hawaii.edu/~blanca/nets/xtp.html [hawaii.edu]
    http://www.cs.columbia.edu/~hgs/internet/xtp.html [columbia.edu]
  • The Internet is not just HTTP.

    Please study TCP/IP better before you ask such a question again.

    • Or maybe just reading too quickly. The question posed is "...how you would develop a post-HTTP/HTML internet?". There's nothing at all wrong with the question, and in fact it most certainly does indicate that the author is distinguishing between the "Internet" and HTTP (HTTP being 1 protocol which happens to run over the Internet).

      So instead of trying to prove that you're smarter than the average \.er by playing with semantics, how 'bout putting that noggin to better use and answering the question. Clearl

      • So instead of trying to prove that you're smarter than the average \.er by playing with semantics,

        Actually, I think it's a fair comment. The question becomes somewhat ambiguous when the line between the World Wide Web (which is ostensibly what the article poster meant) and the Internet as a whole is blurred. Is the intention to redevelop IP and/or TCP/UDP to be better suited for the distribution of web content, to the possible detriment of other forms of Internet content? Or is the question what it app
    • Don't be nasty (Score:4, Insightful)

      by 0x0d0a ( 568518 ) on Thursday July 29, 2004 @04:51PM (#9836626) Journal
      You know exactly what he meant, and simply couldn't pass up the opportunity to bash him to demonstrate your maximum geekiness.

      Please study TCP/IP better before you ask such a question again.

You know what I've found? Professors and people who genuinely understand a subject are generally not assholes towards people who make an error in it (maybe if they're frustrated) -- they try to correct errors. It's the kind of people who just got their MCSE who feel the need to demonstrate how badass they are by insulting others.

The question was not unreasonably formatted. The most frequently used application-level protocol on the Internet is HTTP. The only other protocols directly used much by almost all Internet users are the mail-related ones. The main way that people retrieve data and interact with servers on the 'Net is HTTP. Often, the HTTP-associated well-known ports 80 and 443 are the only non-firewalled outbound ports allowed to Internet-connected desktop machines. You're using a Web browser to read this at the moment. Other protocols are increasingly tunneled over HTTP. Saying that we have an "HTTP Internet" is entirely reasonable.
  • Unification (Score:5, Interesting)

    by Cranx ( 456394 ) on Thursday July 29, 2004 @02:25PM (#9834514)
First, I would re-design IP to take variable-length addresses, so IPv4, IPv6 and everything else to come are all compatible and interchangeable.

    Then I would re-design DNS so that you have to provide not just a domain name to resolve to an IP number, but a "resource type" such as SMTP, HTTP, etc. (similar to MX records, but generic). Each resource type would have its own associated IP number and port.

    I would unify all the protocols under a single HTTP-like protocol and make everything, FTP, SMTP, NNTP, etc. a direct extension of it.

    I would merge CGI and SMTP DATA into a single "data" mechanism that could be used with any of the protocols uniformly.

    I would clean up the protocol so it's possible to concatenate multiple lines together without ambiguity, and uniformly, so the method for multiple line headers is the same as multiple lines of data.

    I would also move SSL authentication into that protocol, rather than having it at the TCP level. This would make shared hosting simpler and would save us a LOT of IPv4 numbers.

    I would peel the skin off of anyone who suggests that XML become an integral part of that protocol. XML is wordy, wasteful, hard to read and should be a high-level choice, not a low-level foundation.

    That's not all I can think of, but that's all I'm going to bother with right now. =)
    • by avalys ( 221114 ) *
      Why don't you cure cancer, solve the world's energy problems, and establish world peace while you're at it?
    • "First, I would re-design IP to take variable-length addresses, so IPv4, IPv6 and everything else to come are all compatible and interchangable."

      That's been considered before, and was rejected because handling variable length addresses would place an enormous strain on routers and DNS servers.
I disagree with whoever feels that way, then.

        If you kept the same model of bit-patterning the numbers (network bits high, host bits low), a single byte (or smaller bit pattern) could be added to the packet to represent the number length (00000100 for IPv4 and 00000110 for IPv6).

Lookups could be sped up because you could pre-hash the router lookup table by separating IP networks by the length of their number. If a packet came in with an IP number length of 7, you could search for a routing solution straig
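
        A toy Python encoding of the idea: one length byte in front of the address, so 4-byte and 16-byte addresses share a header layout. This is purely illustrative and is not how any real IP header is laid out.

        # Length-prefixed, variable-length addresses (illustrative layout only).
        import ipaddress
        import struct

        def pack_addr(addr: str) -> bytes:
            raw = ipaddress.ip_address(addr).packed        # 4 or 16 bytes
            return struct.pack("!B", len(raw)) + raw       # length prefix + address

        def unpack_addr(buf: bytes):
            (length,) = struct.unpack_from("!B", buf)
            addr = ipaddress.ip_address(buf[1:1 + length])
            return addr, buf[1 + length:]                  # address + remaining payload

        wire = pack_addr("192.0.2.1") + b"payload"
        print(unpack_addr(wire))       # (IPv4Address('192.0.2.1'), b'payload')
        wire6 = pack_addr("2001:db8::1") + b"payload"
        print(unpack_addr(wire6))      # (IPv6Address('2001:db8::1'), b'payload')
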
        • I see no fundamental barrier to something like this.

          No, but it is easier for a chip engineer to make optimizations with constant-length addresses.

          And, honestly, as long as we're using IPv6 addresses as actual addresses, as they're intended to be used, I just cannot see length being an issue again. (Problems will come up if some idiot tries ramming additional data into the thing, like a MAC address.)
          • I just cannot see length being an issue again

I guess this is at the core of the issue for me. I can imagine not being able to imagine, right now, needing any more than what IPv6 allows.

It's not just a matter of having enough addresses for all the hosts we may have; there's an allocation issue. Every network is going to want giant swaths of address space, so the ceiling that IPv6 provides is much lower than the sum of hosts that can fit in it. Lots of addresses are going to go to waste in one network while oth
First, I would re-design IP to take variable-length addresses, so IPv4, IPv6 and everything else to come are all compatible and interchangeable.

As I go into detail in my post further down in this thread, I don't think that this is a good idea. It makes optimizations harder, and IPv6 should never need to be extended as long as it is properly used. Furthermore, unless a new protocol uses the *exact same* routing mechanisms and *only* changes the address length, compatibility gets broken anyway. I think the gain
      • SMTP/HTTP are not resource types. They are protocols.
        You could have a "WWW" resource type, I guess.
        This is already done, with well-known ports -- the advantage of using well-known ports is that the additional network traffic and latency is avoided.

I think you misread what the original poster meant. He wanted a given DNS name to resolve to completely different IPs depending on intended use. For example, "tempuri.org" could resolve to one IP if being accessed in the "Web" domain, while the DNS server would

        • You understood my DNS idea, I just wanted to add to your comments.

Relying on a port number would require either server (or, most likely, a third server) to dispatch requests to a single IP, then route traffic to other IPs based on intended use. He wanted to shift the burden of traffic differentiation up a level.

          This isn't how it would work. When a client resolves a domain name, it would provide a domain name and "use ID" and would get, in return, an IP address and port and would go directly to the IP
          • This isn't how it would work. When a client resolves a domain name, it would provide a domain name and "use ID" and would get, in return, an IP address and port and would go directly to the IP/port.

            We've already got one of those: rfc2782 [roxen.com]... It's in use already, but mainly in-site as part of DNS service discovery (rendezvous/zeroconf) and ActiveDirectory - it's not supported by e.g. standard web browsers, email clients etc.

            There are problems with using site-variable port numbers: it makes identifying t
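
             A rough example of the RFC 2782 lookup being described, using the third-party dnspython package. The service label and the fallback port are placeholders, and most public web sites do not actually publish _http SRV records today.

             # SRV-style service resolution with a well-known-port fallback.
             import dns.resolver   # pip install dnspython

             def resolve_service(domain: str, service: str = "_http._tcp", fallback_port: int = 80):
                 try:
                     answers = dns.resolver.resolve(f"{service}.{domain}", "SRV")
                 except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
                     return domain, fallback_port          # no SRV: fall back to the usual port
                 best = min(answers, key=lambda r: (r.priority, -r.weight))
                 return str(best.target).rstrip("."), best.port

             print(resolve_service("example.com"))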

            • There are problems with using site-variable port numbers: it makes identifying traffic types a little tricky

              This IS a real problem with the idea, but I think it could be worked out with some creative thinking.

Someone types in example.com - what do you need to look up? www.example.com A, example.com A, example.com SRV? What about sites where these are different - which address do you connect to? Then, do you send them off all at once (reduces delays in the common instance but has a tendency to increase de
      • SMTP/HTTP are not resource types. They are protocols.

        You could have a "WWW" resource type, I guess.


        My idea was to change DNS to allow IP numbers to be returned for arbitrary identifiers the way MX works, but more generically; not "resource types" per se. You can store numbers for HTTP, WWW, MAIL, TELNET, PORN, whatever you want.

        This is already done, with well-known ports -- the advantage of using well-known ports is that the additional network traffic and latency is avoided.

        Well-known ports are very
Well-known ports are very problematic in that they assume there is a fixed number of protocols to assign standard ports to, and that everyone is cooperating. By allowing the arbitrary identifiers to determine the port, you can drop "well-known ports" altogether.

          No, you don't. You simply move the problem from "well-known ports" to "well-known labels".

          Lots of reasons, but the two main ones I can think of are: combined code base (as quality of code increases, it increases for all protocols) and i

Well-known ports are very problematic in that they assume there is a fixed number of protocols to assign standard ports to, and that everyone is cooperating. By allowing the arbitrary identifiers to determine the port, you can drop "well-known ports" altogether.

            No, you don't. You simply move the problem from "well-known ports" to "well-known labels".

            If they move to "well-known labels", isn't that "dropping well-known ports?" Ports and labels are two different animals. Well-known ports are a

  • REST (Score:2, Insightful)

    Forget ditching HTTP, it's good even with its quirks. It's easy to use... And it's near perfect for applications designed with the REST philosophy in mind.

    Instead of ditching HTTP, let's ditch SOAP-RPC.
  • Flash (Score:1, Interesting)

    by beholder77 ( 89716 )
Macromedia did a great presentation to my org on the idea of turning websites into live applications with Flash. As a web developer I found the whole idea to be quite cool. Flash seems to give a heck of a lot more flexibility and control than any HTML/Javascript hackery I've seen. The apps I saw demoed were even communicating with a DB server using web services.

Flash has its drawbacks of course (proprietary and non-indexable being pretty critical), but if opened up to a standards body, it could very we
    • I don't like Flash (Score:4, Insightful)

      by 0x0d0a ( 568518 ) on Thursday July 29, 2004 @04:28PM (#9836281) Journal
      I really hate Flash.

      I hate Flash for a lot of reasons.

*) Lots of web designers think animation is a good idea. They tend to use it more than a user would like (especially since the "is it cool" metric, where users are asked for initial impressions of a site rather than to use the thing for a month and report their feelings on usability, is wildly tilted toward novelty). Animation is almost never a good idea from a usability standpoint on a website.

      *) Lots of people doing Flash try to do lots of interface design, going so far as to bypass existing, well-tested and mature interface work with their own pseudo-widgets. They usually don't know what they're doing.

      *) Flash is slow to render.

      *) Flash is complex, and it's hard to secure the client-side Flash implementation compared to, say, a client-side HTML rendering engine.

      *) The existing Flash implementation chews up as much CPU time as it can get.

*) Flash does not allow user resizing of font sizes.

*) Flash does not allow for meta-level control over some things, like "music playing in the background". Some websites provide a button for this. I don't want to have control only when the designer chooses to give me control -- I never want that software playing music if I choose not to have it do so.

*) Flash does not allow user-configurable font colors (and for some reason, too many Flash designers seem to think that because ten-pixel-high light blue text on dark blue looks great to them, everyone else can read their site just as easily).

      *) Because Flash maintains internal state that is not exposed via URL, it's not possible to link to a particular state of a Flash program -- this means that you can only link to a Flash program, not a particular section of one. This is very annoying -- I can link to any webpage on a site, but sites that are simply one Flash program disallow deep linking. (I'm sure that concept gets a number of designers up somewhere near orgasm, but it drives users bananas.)

      *) The existing Flash implementation is not nearly as stable as the other code in my web browser, and takes down the web browser when it goes down.

      *) As you pointed out, I can't search for a "page" in a Flash program.

Really, the main benefit of avoiding Flash, to me, is that it keeps web designers from doing a lot of things that seem appealing to them but are actually Really Bad Ideas from a user standpoint. Almost without exception, Flash has made sites I've used worse (the only positive example I can think of was either a Javascript or Flash demo in which the manufacturer of a hardware MP3 player showed their interface to website users).

      I *have* seen Flash used effectively as a "vector data movie format", for which it is an admirable format -- I suspect most Slashdotters have seen the Strong Bad cartoons at some point or another. But I simply do not like it as an HTML replacement.
Absolutely ~ all those points hit the nail on the head. However, not all websites are designed with the intention to communicate information, but rather to create some sort of environment, an experience; but this really borders on an interactive movie of sorts.

But if the designer gets the tickle to make your browsing experience something of a movie and not provide a (point for point) site map alternative ~ you're screwed and they've screwed themselves.

        I browse with plug-ins off personally, flash ads are a pet

What makes a good site on the internet successful? Its content. IMO, Flash limits content capabilities. Javascript has its uses... you don't overuse it and you don't underuse it. Javascript and Flash can be effective tools for improving usability, but that is about all they are good for.
  • by Gothmolly ( 148874 ) on Thursday July 29, 2004 @03:36PM (#9835581)
    You gloss over, with a sweep of your clueless wand, the rest of us who rely on the Internet for things like SMTP, SSH, Muds, Usenet, IM and VPNs.
    Please don't assume that my Internet is the same as your Intarweb.
Do you remember the Net prior to HTTP and the web? If so, you can easily see why "the web" took off like it did.

    You need to develop a new protocol/app that provides something people actually want without added complexity and you'll replace the web as quickly as the web replaced usenet/gopher/ftp/irc (I know it didn't replace all of those things but for the majority of uses and people it did to some degree render them obsolete)

    Of course if your new system was really a wanted thing and was open enough to b
  • Oh, yeah (Score:5, Insightful)

    by 0x0d0a ( 568518 ) on Thursday July 29, 2004 @04:05PM (#9835995) Journal
    Let's see:

* The primary addressing mechanism would be content-based addressing (like SHA1 hashes of the content being addressed). We have problems with giving reliable references for things like bibliographies. We are gradually moving in this direction: P2P networks are now largely content-addressed, and bitzi.com provides one of the early centralized databases for content-based addressing. (A minimal sketch of the idea follows after this list.)

    * We would have a global trust mechanism, where people can evaluate things and based on how well other people trust their evaluations, those people can take advantage of their evaluations. Right now, web sites have very minimal trust mechanisms (lifetime of domain, short domain names, and the generally-ignored x.509 certs). This would apply not just to domains, but be more finely-grained and apply to content beneath it.

* The concept of creatable personas would exist. Possibly data privacy laws would end up requiring companies not to associate personas, or perhaps we would just make it extremely difficult to associate such personas. You would maintain different personas which may, if so desired, be kept separate. Such personas would be persistent, and could be used to evaluate how trustworthy people are -- e.g. if Mr. Torvalds joins a coding forum and makes some comments about OS design, he can simply and securely export his persona (a pubkey and some other data) from the other locations where he has been using that persona (like LKML, etc.) and benefit from the reputation that has accrued to that persona. This would eliminate impersonation ("this is the *real* Linus Torvalds website", etc.).

    * Such trustable, persistent personas would allow for the creation of systems to allow persistent contact information to be provided ('snot that hard). This means no more dead "email addresses".

* Domain names would not be used as the primary interface mechanism for finding and identifying data providers. This is halfway handled already -- most people Google for things like "black sabbath" instead of looking for the official Black Sabbath website by typing out a single term. It's still possible for people to "choose their visual appearance", though, and "visa-checking.com" can look very much like Visa, as long as end users have control over how domains are presented to them.

* P2P becomes a primary transport mechanism for data -- from an economic standpoint, this means that consumers of data are responsible for subsidizing continued distribution of that content, which shifts the burden away from the publisher of the content -- one step removed from consumers funding the production of their content. It solves many of the economic issues associated with data distribution. For this to happen, P2P protocols will have to be strongly abuse-resistant, even if that means a lesser degree of performance or efficiency. Many existing systems have severe flaws -- Kazaa, for instance, allows corrupted data to be spread to users, and conventional eDonkey (sans eMule extensions) does not provide any mechanism to avoid leeching, which destroys the economic benefits. Sadly, one of the few serious attempts to address the stability of the system -- Mojo Nation, also from Bram Cohen of BitTorrent -- was abandoned; it used a free-market economic system to determine resource allocation and was fairly abuse-resistant. I have some efforts in this direction, but don't use a free-market model.

* Email and instant messaging will merge to a good degree (or perhaps one will largely "take over"). Up until now, it has mostly been technical limitations in existing software that have kept one from supplanting the other -- email provides poor delivery-time guarantees, instant messaging imposes message size limitations. Email uses a strictly thread-based model; instant messaging uses a strictly linear model. Probably someone will coin a new, stupid term for the mix of the twain (like "instant mail").

    * Personas and global trust networks (not extremely limiting binary-style trust, a la PGP/GPG), as mention
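
    A minimal Python sketch of the content-based addressing mentioned in the first bullet: the name of a document is a hash of its bytes, so any copy fetched from anywhere can be checked against the name. SHA1 appears here only because the comment mentions it; a real design would pick the hash to taste.

    # Content addressing: the document's hash is its name.
    import hashlib

    def content_address(data: bytes) -> str:
        return "urn:sha1:" + hashlib.sha1(data).hexdigest()

    document = b"<html><body>some page worth citing</body></html>"
    name = content_address(document)
    print(name)

    # A bibliography can cite `name`; whoever later obtains *any* copy re-hashes it:
    fetched_copy = document
    assert content_address(fetched_copy) == name, "copy is corrupt or not the cited work"
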
I have to add a few things to your comments about RPC.

      SOAP is a hack to ram things through HTTP

      I completely agree. It was borne out of the need to tunnel RPC through HTTP due to misguided and zealous firewall administrators, added with the then-current hype: XML. The result is a bloated protocol.

      sunrpc complicated and ugly

      It isn't. The interface specification is close to C:

      /* rpcgen interface definition (.x file) */
      struct Foo {
          int x;
          string s<>;
      };

      program BOO_PROG {
          version BOO_VERS {
              void Bar(Foo) = 1;
          } = 1;
      } = 0x20000001;

      and after the stubs ha

    • HTML will finally fail.....

If MS Office/OpenOffice would output CSS+XHTML+SVG, then it would be useful. Right now you have to learn a third-party client (Nvu, Moz Composer, Dreamweaver) that outputs anything from decent (Nvu) to horrible (FrontPage 2003/FPExpress) code. I have yet to find a WYSIWYG editor that isn't brain-dead; the W3C Amaya browser/editor is decent, but slower than molasses.
  • by jesboat ( 64736 )
All documents should be XML (or some other data description language). CSS's successor should be used to assign presentation to elements, possibly by converting them to other element trees.

The XML should be pseudo-standardized, so browsers would be able to recognize TV-Listing-ML/Search-Result-ML and present it in an alternate form if you wanted, with headers and footers added (to make advertisers happy; unfortunately necessary for a new Web protocol to succeed).
    • I agree with you, but on this point:

CSS's successor should be used to assign presentation to elements, possibly by converting them to other element trees.

I would like to point out that XSLT already exists, and it is not a replacement for CSS but a complement: XSLT = data format, CSS = style.
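
      A small demonstration of that split, using the third-party lxml package: XSLT reshapes the made-up TVListing markup into list structure, and CSS (not shown) would then style the resulting class.

      # XSLT transforms structure; CSS would handle the styling of the output.
      from lxml import etree   # pip install lxml

      source = etree.XML("""
      <TVListing>
        <show time="20:00" title="Nova"/>
        <show time="21:00" title="Frontline"/>
      </TVListing>""")

      stylesheet = etree.XML("""
      <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <xsl:template match="/TVListing">
          <ul class="listings">
            <xsl:for-each select="show">
              <li><xsl:value-of select="@time"/> - <xsl:value-of select="@title"/></li>
            </xsl:for-each>
          </ul>
        </xsl:template>
      </xsl:stylesheet>""")

      transform = etree.XSLT(stylesheet)
      print(str(transform(source)))   # <ul class="listings"><li>20:00 - Nova</li>...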

  • HTTP is fine (Score:2, Interesting)

    by billcopc ( 196330 )
    HTTP is fine, a stream-transfer protocol can only do so much.

HTML, however, feels rather clunky now with all these bloated, half-supported standards tacked onto it. We still don't have consistent rendering across the board, and it's still a pain in the posterior to publish anything. CSS, that wretched hammer of aborted salvation, is yet another limited hack.

    We used to have HTML glitches and workarounds, now we have CSS glitches and workarounds; design compromises in a system that was supposed to break the
  • It's been tried but it requires Windows 2000 Professional or better, Microsoft Internet Explorer 6, IIS 6, and SQL Server 2000 or better. Version 2 requires Microsoft Longhorn.
  • I think the nicest thing I can say about XML is that sometimes it isn't blatantly inferior to any other solution. Sometimes.

    Tim
I don't really see a need to wholly move away from HTTP and HTML. They're both extremely flexible and will likely be very useful for many years to come. They're both relatively basic systems that aren't terribly difficult to implement with a modicum of programming talent. They're also extremely lightweight, which makes it much easier to use them on equipment with very little power or memory.

Because HTML is fairly verbose, and well-formed HTML is regularly laid out, it isn't terribly difficult to parse. Comput
I think the future of the internet will be built on technology such as the BitTorrent idea, using the high client-to-server ratio to distribute content more effectively. This would truly make the web (the World Wide Web, that is) interconnected.
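
    A sketch in Python of the property that makes BitTorrent-style distribution workable: the publisher ships a list of per-piece hashes, and every piece fetched from an arbitrary peer is checked against that list before being accepted. The piece size and data here are arbitrary.

    # Per-piece hash verification: trust the hashes, not the peers.
    import hashlib

    PIECE_SIZE = 16 * 1024

    def piece_hashes(data: bytes) -> list[str]:
        return [hashlib.sha1(data[i:i + PIECE_SIZE]).hexdigest()
                for i in range(0, len(data), PIECE_SIZE)]

    def accept_piece(index: int, piece: bytes, published: list[str]) -> bool:
        return hashlib.sha1(piece).hexdigest() == published[index]

    original = b"x" * 40_000                      # what the publisher has
    published = piece_hashes(original)            # shipped in the "torrent" metadata

    good = original[0:PIECE_SIZE]                 # piece 0 fetched from some random peer
    bad = b"y" * PIECE_SIZE                       # corrupted/poisoned piece
    print(accept_piece(0, good, published))       # True
    print(accept_piece(0, bad, published))        # False
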
  • Doesn't sound like much, but the biggest improvement over HTTP is merely a persistent connection.

    My ideal vision of the future of the internet is basically a version of Apache that supports persistent connections, so I can go back to the days of BBS, only with graphics and streaming video added. Or would we call them MMOBBSes now?

"What man has done, man can aspire to do." -- Jerry Pournelle, about space flight

Working...