Features of a post-HTTP Internet?

Cliff posted about 10 years ago | from the thinking-ahead dept.

The Internet

Ars-Fartsica asks: "We've been living with HTTP/HTML ("the web") for quite a while now, long enough to understand its limits for content distribution, data indexing, and link integrity. Automatic indexing, statefulness, whole-network views (flyovers), smart caching (P2P), rich metadata (XML), built-in encryption, etc. are all fresh new directions that could yield incredible experiences. Any ideas on how you would develop a post-HTTP/HTML internet?"

122 comments

Word to Your Mother (2, Funny)

Michael.Forman (169981) | about 10 years ago | (#9833942)


Let's all just capitulate and make the official format a Microsoft Word document.

Michael. [michael-forman.com]

Re:Word to Your Mother (2, Funny)

Tumbleweed (3706) | about 10 years ago | (#9834023)

Dude, that wouldn't waste enough bandwidth - I say we make them all PDFs with embedded fonts (FULL fonts), and lots of graphics.

whoops (2, Funny)

Tumbleweed (3706) | about 10 years ago | (#9834130)

That topic should've been changed to 'PDF To Yo Momma'

Sorry, my bad.

Re:Word to Your Mother (1)

RevAaron (125240) | about 10 years ago | (#9834450)

Embedded fonts? What are you smoking? The proper way to do it is to just have graphics! That is, if it's a scanned-in document, no OCR, or have the authoring tool render into the big, fat-ass PDF-TIFFs. High color and really high DPI too- that's important in this hi-tek age, eh! Nothing better than a 10 MB PDF for one page of black and white text!

Re:Word to Your Mother (1)

Tumbleweed (3706) | about 10 years ago | (#9834714)

No, really, think about it - you could use a different font for each word in the document, then embed each full font in there. That'd totally rawk!

I agree, though - lots of uncompressed TIFFs are a good thing, too.

And then people can quote the existing PDF stuff, and add one line (with an entirely new font) that just says, 'I Agree.'

We'll make good use of that new Verizon fiber-to-the-premises bandwidth, no problemo!

I want FTMF - Fiber To My Fingers!

ADA and citation issues (1)

0x0d0a (568518) | about 10 years ago | (#9837021)

Use of such a mechanism is not compliant with the Americans with Disabilities Act. The proper, legal approach is to embed WAVE files of the text being read, and verbal descriptions of all graphics.

Furthermore, citation is a significant problem on the Internet (for example, used resources can go away if cited by URLs). We need to solve the citation problem -- the appropriate approach is to embed all files used as sources of content for the existing file (which would, in turn, contain copies of all *their* sources, etc).

Re:ADA and citation issues (1)

Tumbleweed (3706) | about 10 years ago | (#9837244)

Actually, to further ADA compliance, it should be full video (with captioning), that way not only do the blind get access, but the deaf as well. Yeah, this is sounding good.

Time to apply for ISO?

Re:ADA and citation issues (1)

0x0d0a (568518) | about 10 years ago | (#9837411)

Time to apply for ISO?

Oh, surely not quite yet. The ISO committees are good at being certain not to avoid including anything that someone might want, but they aren't perfect, and we need to be sure to avoid missing crucial features.

Actually, to further ADA compliance, it should be full video (with captioning), that way not only do the blind get access, but the deaf as well.

This is a good example. We were ready to go to ISO with this. But there are more -- what about dual-language law in states bordering Mexico? We should also include Spanish subtitles. And to only include Spanish and English would not fully serve the needs of, for example, Swahili speakers -- surely they deserve their own subtitles.

Consider the problem of dealing with revisions -- only a few file formats allow revision-tracking. Clearly, we should not exclude such a useful feature. Each file should also contain deltas from all previous revisions of that file -- much like a file in a CVS repository.

Re:ADA and citation issues (1)

SirTalon42 (751509) | about 10 years ago | (#9846561)

"Consider the problem of dealing with revisions -- only a few file formats allow revision-tracking. Clearly, we should not exclude such a useful feature. Each file should also contain deltas from all previous revisions of that file -- much like a file in a CVS repository."

What about data corruption? That's a huge problem these days. We should keep 3 copies of the document embedded inside itself; that way, if one is corrupted, you have 2 more chances. Also, each version will have 3 copies. What's the point of a revision control system if all the copies have been corrupted?

Another major threat is misdirection. I propose all documents be signed, using keys equal to the square of the size of the document (with a minimum of 1099511627776k keys). Also, every access attempt should be forced to factor a large prime number to prevent mass flooding, AND every access attempt should be given one of those pictures with text in it to prevent bots from just swarming over sites downloading the content. The minimum security should have at least 50 characters in the picture to authenticate.

Of course this will only work for 56k users; people with broadband and up will require much higher protection.

Re:Word to Your Mother (1)

afa (801481) | about 10 years ago | (#9840149)

But at the same time you're enlarging your docs, you have to spend more time, especially precious time, collecting related graphics. Well, let's just use an automatic script to collect unrelated spam and fill the docs with it in a form the viewers won't see, in order to waste enough bandwidth, alright?

Re:Word to Your Mother (-1, Offtopic)

Anonymous Coward | about 10 years ago | (#9834145)

GASP!

Modded off topic! Looks like we have some MS fans without a sense of humor trolling around. :)

Wrong question. (4, Interesting)

daeley (126313) | about 10 years ago | (#9833986)

Any ideas on how you would develop a post-HTTP/HTML internet?

First identify the problem, then you can start devising solutions.

So what's the problem? You mention certain limits of HTTP/HTML. Would these be overcome with better applications rather than throwing everything out?

Re:Wrong question. (5, Interesting)

aklix (801048) | about 10 years ago | (#9836453)

HTTP is a transfer protocol that does everything I need it to do. As for HTML, we practically have a post-HTML internet already: DHTML, JavaScript, CSS, and pretty soon Apple's Canvas. It all works nice and pretty. So why would we need a post-HTTP internet, especially when we have other protocols to do other things?

Why? (4, Insightful)

MaxwellStreet (148915) | about 10 years ago | (#9834002)

Given that all the technologies you mention work just fine across the internet as we know it....

Why think about getting rid of html/http?

The pure simplicity of developing and publishing content is what made the WWW take off the way that it did. Anyone could (and generally did!) build a site. It was an information revolution.

The other technologies will handle the more demanding apps out there. But HTML/HTTP is why the web (and, in a larger sense, the internet) is what it is today.

Why does HTTP have to go away? (3, Interesting)

dougmc (70836) | about 10 years ago | (#9834005)

So, HTTP (and HTML, though the two really have nothing to do with each other, beyond the fact that HTTP is the primary way of delivering HTML) can't do everything. We know this. We have always known this, for as long as we've had HTTP.

Has something changed that I'm not aware of here?

HTTP may be the most popular protocol out there, but it's hardly the only one. SMTP is really popular, FTP, NNTP, IRC, whatever all the IM systems use, UDP protocols used by games, DNS ... many of these may be showing their age, but they're not showing any signs of going away any time soon.

Re:Why does HTTP have to go away? (1)

Drakon (414580) | about 10 years ago | (#9835924)

Hyper-Text Transport Protocol
Hyper-Text Markup Language

Have nothing to do with each other?

Re:Why does HTTP have to go away? (1)

dougmc (70836) | about 10 years ago | (#9836594)

Hyper-Text Transport Protocol

Hyper-Text Markup Language

Have nothing to do with each other?
That is correct. In spite of the similar names, they have almost nothing to do with each other, beyond the fact that html is often delivered via http.

So why criticize the article author? (1)

0x0d0a (568518) | about 10 years ago | (#9837272)

That is correct. In spite of the similar names, they have almost nothing to do with each other, beyond the fact that html is often delivered via http.

What you are saying is not contrary to what the article author said in the first place.

Re:So why criticize the article author? (1)

dougmc (70836) | about 10 years ago | (#9837536)

I was not criticizing the article (or question) author for saying http/html. I was asking why he felt they needed to go away at all.

Re:Why does HTTP have to go away? (1)

Chess_the_cat (653159) | about 10 years ago | (#9843082)

In spite of the similar names, they have almost nothing to do with each other, beyond the fact that html is often delivered via http.

Read that again. Maybe out loud. Then you can hear what an ass you're being.

Re:Why does HTTP have to go away? (1)

DLWormwood (154934) | about 10 years ago | (#9836667)

Hyper-Text Transport Protocol
Hyper-Text Markup Language

Have nothing to do with each other?

Yup, that's correct... (-: They are completely "orthogonal" to each other.

The notion of HTTP being a "hypertext" related technology is more of a historical accident than anything. (Hypertext was a buzzword of the 90's, everybody made claim to the word.) The developers of HTML wanted a more elegant way of serving web pages than the older protocols like FTP and Gopher, so they contributed to HTTP's development. However, HTTP's stateless nature and generic utility ended up being more useful than just for serving hypertext.

Re:Why does HTTP have to go away? (0, Flamebait)

Lehk228 (705449) | about 10 years ago | (#9839120)

well DNS still pwns HTTP since almost any HTTP request requires DNS while DNS is used for many non-HTTP purposes


DNS IS TEH R0X0R
HTTP SUX P3N0R

LaTeX (1, Interesting)

Anonymous Coward | about 10 years ago | (#9834017)

I've been saying for years that if we had only adopted LaTeX as the primary means of displaying Web documents, we'd have a considerably more wonderful content delivery system.

(LaTeX, being a programming language, is quite adept at laying things out, and accepting new sorts of extensions. It would be ideal for this kind of display ...)

Re:LaTeX (0)

Anonymous Coward | about 10 years ago | (#9834234)

why not postscript?

AC

Re:LaTeX (1)

0x0d0a (568518) | about 10 years ago | (#9836958)

I love the concept behind LaTeX. I love the quality of the output from existing LaTeX implementations.

That being said, the syntax of LaTeX is a pain to learn, a pain to code in, and just not all that great.

Now, I deal with the syntax because the approach of higher-level formatting is so good, and because the implementation is so good, but boy I wish that it was better.

Oh, and LaTeX doesn't have the excellent error detection and reporting of, say, perl.

Re:LaTeX (0)

Anonymous Coward | about 10 years ago | (#9839105)

Try this [lyx.org] - it works great. All the simplicity you could ever want; plus, you can throw your own raw LaTeX code in wherever you like.

Cheers.

Re:LaTeX (1)

0x0d0a (568518) | about 10 years ago | (#9840906)

I've tried lyx. I need xemacs to be happy, though.

I don't have a problem with text markup -- I just don't like LaTeX's particular syntax. It makes a lot of characters metacharacters (which makes it a pain to paste text in). A lot of characters that I think should be "regular" characters are only valid in math mode. I hate the way LaTeX deals with wrapping (I never want text going off the page, really). I hate trying to deal with cell-spanning in tables, which should really be part of the basic tabular environment -- it's easy in HTML, and a pain in LaTeX. I wish LaTeX could let me just say "I want this floating element embedded in the text as close as possible to my text without ruining the look of the document". I wish that it was easier to use LaTeX as a full-blown programming language.

I've tried other free layout systems, and never liked anything as much as LaTeX, but LaTeX is an awfully long way from perfect.

a better plan... (0, Offtopic)

Tumbleweed (3706) | about 10 years ago | (#9834073)

...would be to finally switch to IPv6; that would solve a lot more problems than mucking about with HTTP. Oh yeah, that and banninating IE from the Computosphere.

Re:a better plan... (0)

Anonymous Coward | about 10 years ago | (#9836740)

a better plan would be to finally switch to IPv6; that would solve a lot more problems than mucking about with HTTP.
Isn't that kind of like saying: a better plan would be to finally switch to everyone driving SUVs; that would solve a lot more problems than mucking about with new car stereos.

In other words: WTF does one thing have to do with the other?

Re:a better plan... (1)

Tumbleweed (3706) | about 10 years ago | (#9837680)

> In other words: WTF does one thing have to do with the other?

Nothing; I just meant that if someone wants to go around fixing something, how about fixing problems that are already known, with known solutions, rather than simply changing HTTP just because one can.

Forget HTTP. (5, Interesting)

Spudley (171066) | about 10 years ago | (#9834081)

Forget about replacing HTTP - let's deal with the real problem protocol first: SMTP.

Please! Someone give us a secure email protocol that doesn't allow address spoofing.

Re:Forget HTTP. (4, Insightful)

ADRA (37398) | about 10 years ago | (#9834374)

Spoof integrity will always come down to two factors:

1. Verification of Sender - This will never happen unless systems like cacert.org start to take off. Basically 99% of the internet don't give a damn about certificates, and the ability for anonymity is more limited. A debate about privacy/spam could go on for years if given the chance.

2. SPF-like protocols - This is the ability to discriminate who is and who isn't allowed to send email from a given domain. This will cause a few things:
- a. Every mail sender must be from a domain
- b. Every mail sender has to route through an institutional server (the road warrior problem)
- c. Every institutional mail server must deny relaying from anyone non-authenticated. (Should be done already)
- d. Institution must be regarded positively by the community at large. If they aren't, they're completely eliminated from sending emails.
- e. You have to get DNS servers that you can update.
- f. You must lock down the DNS server from attacks (Have you done this lately?)

Anyway, both solutions are possible, but neither is ideal for everyone. SPF has a real chance of shutting down spammers, but I imagine the wild-west internet we know is pretty much over. (A rough sketch of an SPF-style check follows below.)
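To make the SPF idea concrete, here is a minimal, hand-rolled sketch in Python of what checking a published policy amounts to. The record, the addresses, and the helper name are invented for illustration; a real mail server would fetch the TXT record over DNS and handle many more mechanisms (include:, a, redirect=, and so on).

import ipaddress

# Hypothetical policy, as it would be published in a DNS TXT record.
SPF_RECORD = "v=spf1 ip4:192.0.2.0/24 -all"

def spf_allows(record, sender_ip):
    """Return True if sender_ip matches an ip4: mechanism in the record."""
    for mechanism in record.split():
        if mechanism.startswith("ip4:"):
            if ipaddress.ip_address(sender_ip) in ipaddress.ip_network(mechanism[4:]):
                return True
        elif mechanism == "-all":
            return False  # anything not matched above gets rejected
    return False

print(spf_allows(SPF_RECORD, "192.0.2.17"))   # True  -> accept as coming from the domain
print(spf_allows(SPF_RECORD, "203.0.113.5"))  # False -> treat as spoofed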

Re:Forget HTTP. (2, Informative)

0x0d0a (568518) | about 10 years ago | (#9836941)

This will never happen unless systems like cacert.org start to take off.

Or decentralized trust systems, but yes.

Basically 99% of the internet don't give a damn about certificates, and the ability for anonymity is more limited.

Not really. I can create multiple electronic personas, unless you're trying to enforce a 1:1 id:person ratio.

2. SPF-like protocols - This is the ability to discriminate who is and who isn't allowed to send email from a given domain. This will cause a few things:

Where "SPF-like protocols" means "authorization protocols", yes. The problem is really nothing more than an authorization protocol, and not a very good one at that.

I disagree with you that authorization should take place on a domain level (unfortunately, this is the approach that the SPF people use). With domain-level granularity, if even a single account at ford.com is ever compromised by a baddie, the only solution is to ban the entire domain.

Re:Forget HTTP. (1)

Dr. Evil (3501) | about 10 years ago | (#9837347)

I think trust systems are the best option. Non-technical users can have blobs of trust created by their ISP: e.g., MSN trusts that an MSN user is trustworthy, that AOL users are trustworthy, and so on for other trustworthy providers. Technical users can trust friends, trust major service providers, trust friends of friends, and revoke trust as abuses occur.

So AOL or MSN or whatever can establish the one account to one owner relationship. Randomly generated emails, even from valid addresses would be ignored since they're not signed by a trusted source (MS or AOL), and we all know the story for the rest of the web of trust.

People who sign up for verified accounts only to send spam must have accounts terminated promptly by providers. It's not futile anymore since the provider controls who gets their signature.

Simplifying the interface to the system without compromising its security is by far the most important aspect of such a system.

I have to agree that SPF doesn't make a lot of sense.

How are the licenses for PGP and GPG? I have to wonder why web mail environments haven't tried stuff like this. You could completely hide all the complexity within the service and between known PGP/GPG-happy services.

Re:Forget HTTP. (1)

0x0d0a (568518) | about 10 years ago | (#9837503)

How are the licenses for PGP and GPG? I have to wonder why web mail environments haven't tried stuff like this. You could completely hide all the complexity within the service and between known PGP/GPG-happy services.

The problem is that the trust system bundled with GPG (not that you couldn't build something on top of GPG's trust system) is binary -- you trust someone or you don't. There's no concept of "sorta trusting persona A, and therefore trusting persona B, which persona A trusts, somewhat less".

Re:Forget HTTP. (1)

clambake (37702) | about 10 years ago | (#9840501)

Spoof integrity will always come down to two factors:


I think as long as there is a valid from address I'd be happy. As long as I can send back a 5 gig /dev/random attachment to you, you can send me spam until you are blue in the face.

Verifying this is as simple as having a two way handshake protocol before delivering mail.

Re:Forget HTTP. (2, Funny)

Scarblac (122480) | about 10 years ago | (#9834481)

Forget about replacing HTTP - let's deal with the real problem protocol first: SMTP.

What, work on SMTP, while there are children starving somewhere in the world?

If we listened to people like you, nothing would ever get done. Well, perhaps some starving people would be saved. But that's beside the point, sheesh.

There is no problem with SMTP (2, Interesting)

Anonymous Coward | about 10 years ago | (#9835872)

If you think there's a problem with SMTP, then you don't understand what it's doing.

Claiming that there's a 'spoofing' problem with SMTP is like saying there's a 'spoofing' problem with HTTP, because *anyone* can put up a website claiming to be anyone else.

It's *NOT* a problem with the delivery protocol.

There already is a way of preventing address spoofing with email - it's called PGP, and using it doesn't require any change of SMTP.

Rewriting? (4, Insightful)

Ianoo (711633) | about 10 years ago | (#9834222)

Why is it that developers feel the need to periodically scrap everything they've been working on, then reimplement it, usually in a more half-assed way than the original? (I'm talking to you, Apache programmers! ;)

But seriously, where's the need to dump HTTP? It's not exactly a complicated protocol, and can be adapted to do many different things. Pretty much any protocol can be tunneled over HTTP, even those you'd normally consider to be connection-orientated socket protocols.

As for HTML, again - why the need? By using object tags and plug-ins, the browser is almost infinitely extensible. Flash and Java bring more interactive content, streaming brings sound and video, PDF brings exact display of a document to any platform, and people are using all sorts of different XML-type markups every day now, such as RSS, XML-RPC, SOAP, and so on to do all kinds of interesting things like Web Services and RPC.

Microsoft and the open source community are both working on markup-like things that will enable applications to operate over the web (all via HTTP). XAML and XUL's descendants might well have a big future, especially if the way documents should be displayed is more rigorously specified than HTML.

Re:Rewriting? (1, Insightful)

Anonymous Coward | about 10 years ago | (#9834981)

Why is it that developers feel the need to periodically scrap everything they've been working on, then reimplement it [...]?

Are you a developer? There are lots of reasons, but they are not very good ones. It sounds like it might be discouraging, but it's really quite fun. You know the basic idea of how to do it, because you've done it once already, so you get to think about how to do it better. On a small scale, it is called refactoring. On a large scale it is probably a waste of time. But a lot of people are tempted to do it anyway.

Programmers are often idealists. They implement something and then feel bad about it, so they later go ahead and reimplement it because it is "ugly" or "crufty". Even if its interface seems to work well, the internal implementation probably feels to us like it is crufty and liable to break down at any moment. So we overengineer it, layer after layer, to ensure that no ugly, interface-defacing code ever needs to be introduced anywhere. It's kind of compulsive for me, although at least I can see myself doing it and decide whether it is really necessary.

Now, I don't know the HTTP protocol very well myself, so I don't feel compulsive about reimplementing it. It feels pretty clean from what I've seen.

As for HTML, on the other hand, it would be beautiful to start with a clean slate on that one. Force XHTML+CSS, force browser rendering standards, force everybody to respect MIME types. If you do that, you feel like you want to change HTTP, just to force everybody to start from scratch so you don't have any partial-compatibilities.

Re:Rewriting? (2, Insightful)

miyako (632510) | about 10 years ago | (#9835054)

Why is it that developers feel the need to periodically scrap everything they've been working on
The reason is that often the original design of something does not facilitate adding newer features in a structured way. Mainly, when $foo is first developed, nobody has any idea that people will want to be doing $bar 10 years down the road. Finally someone finds a way to allow $bar by tacking a few things onto $foo with superglue and duck tape. At first this is no big deal: $bar is just a small little thing and it doesn't invalidate the design of $foo. Eventually more people use duck tape and superglue to add things to $foo, and use duck tape and superglue to add more things to $bar, until what you're left with is a big ball of tape and glue supported precariously by popsicle sticks and rubber bands. In this case it can be better to redesign $foo to provide a better structure for things like $bar to be added without so much cruft. Other times it's decided that all the things like $bar should just be given a separate program/protocol/whatever, and $foo should go back to what it was originally.
Let's look at all this in the case of HTTP. Things like Java applets, Flash, even Javascript, are all hacks to get around the limitations of HTTP. Of course I don't think that we are nearing critical mass of things being added onto HTTP, but the problem is certainly coming along. I think the latter of the above solutions is preferable in this case. HTTP is a good protocol and still serves a useful purpose; what we need, though, is a second protocol for dynamic content.

Re:Rewriting? (3, Funny)

pete-classic (75983) | about 10 years ago | (#9835327)

Job security.

Now ask me a hard one.

-Peter

Don't get rid of statelessness (4, Insightful)

self assembled struc (62483) | about 10 years ago | (#9834256)

The fact that HTTP is stateless is one of the reasons that Apache and its kin scale so effectively. The instant they're done dealing with the request, they can do something else without thinking about the consequences. Why do I need state on my personal home site? I don't. Let your application logic deal with state. Let the protocol deal with data transmission, period.
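To see why that scales, here is a minimal sketch using Python's standard http.client against a placeholder host: every request carries everything the server needs, and the server can forget the client the moment the response goes out.

import http.client

# Placeholder host; any HTTP/1.1 server behaves the same way.
conn = http.client.HTTPConnection("www.example.com", 80)

# Each request is self-contained: method, path, and headers say it all.
# Nothing from any previous request is assumed by the server.
conn.request("GET", "/index.html")
resp = conn.getresponse()
print(resp.status, resp.reason)
resp.read()   # drain the body

conn.close()  # the server kept no per-client state to clean up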

Re:Don't get rid of statelessness (3, Insightful)

AuMatar (183847) | about 10 years ago | (#9836960)

Of course, if we added state, we'd get rid of the need for cookies (and their privacy issues), and make writing web applications one hell of a lot easier.

If you're not going to make it stateful, don't bother replacing it. As a stateless protocol it's about as lightweight as you're going to get.

XML + XForms + XMLHttpRequest + canvas (1, Insightful)

OmniVector (569062) | about 10 years ago | (#9834276)

If all those things in the title were used to develop a website, I think the things one could accomplish are amazing. As it stands you can already use XHTML and XMLHttpRequest to do highly dynamic websites. Sometimes I wish so much emphasis wasn't put on backwards compatibility on the web. I wish browsers could automatically detect what version of HTML the webpage requires, and generate warnings if your browser's too old to properly render it, with a handy "update here" link.

PS: Canvas is a new tag from Apple, used to draw things into an img-like component. Apple's working with Opera and Mozilla to integrate it into their browsers. Hopefully this will go somewhere. I've always wanted something like that directly javascript accessible, but have never had the luck. It requires hack extensions like Java and Flash, which don't communicate well with the underlying JavaScript without using some kludge like LiveConnect.

Re:XML + XForms + XMLHttpRequest + canvas (0)

Anonymous Coward | about 10 years ago | (#9834324)

ruby -e 'puts "U3RlcCByaWdodCB1cC4gTWFyY2guIFB1c2gu".unpack("m") [0]'

so just what does that do? compute the value of pi to 20 digits or something?

Re:XML + XForms + XMLHttpRequest + canvas (1)

blacklite001 (453622) | about 10 years ago | (#9837540)

perl -e 'use MIME::Base64; print decode_base64("U3RlcCByaWdodCB1cC4gTWFyY2guIFB1c2gu");'
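And, for completeness, the same decode as a Python one-liner (standard library only):

python -c 'import base64; print(base64.b64decode("U3RlcCByaWdodCB1cC4gTWFyY2guIFB1c2gu").decode())'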

Re:XML + XForms + XMLHttpRequest + canvas (0)

Anonymous Coward | about 10 years ago | (#9835495)

javascript accessible
These words are mutually exclusive; please continue to provide noscript tags for the unfortunate and the well-informed alike.

Re:XML + XForms + XMLHttpRequest + canvas (3, Interesting)

NoMoreNicksLeft (516230) | about 10 years ago | (#9835904)

Canvas? Try SVG, in a SVG aware browser. Not javascript accessible, rather ECMAscript accessible.

Re:XML + XForms + XMLHttpRequest + canvas (2, Insightful)

0x0d0a (568518) | about 10 years ago | (#9836875)

If all those things in the title were used to develop a website, I think the things one could accomplish are amazing. As it stands you can already use XHTML and XMLHttpRequest to do highly dynamic websites.

"highly dynamic websites". Hmm. What specifically do you mean by this?

I wish browsers could automatically detect what version of HTML the webpage requires, and generate warnings if your browser's too old to properly render it, with a handy "update here" link.

Browsers and website designers already have the ability to do this. The reason they don't is that it's a pain in the ass for the user.

Re:XML + XForms + XMLHttpRequest + canvas (0)

Anonymous Coward | about 10 years ago | (#9846663)

Try to build a Web site that uses all of the above at once, and all you'll be likely to accomplish is getting special mention from Vince Flanders.

Start by learning... (0)

Anonymous Coward | about 10 years ago | (#9834382)

...what the words you are using mean.

smart caching (P2P)

"P2P" isn't smart caching. And many common implementations actually use HTTP.

rich metadata (XML)

XML is an easy to parse syntax for hierarchical data. It has nothing to do with "rich metadata".

Any ideas on how you would develop a post-HTTP/HTML internet?"

Sure. I'd refactor XHTML to include more useful element types (e.g. <navigation>). I'd switch protocols like SMTP and formats like RFC2822 over to Unicode/XML. I'd make maintaining state something intrinsic to HTTP. But first of all, I'd beat people who use buzzwords without understanding what they mean with one gigantic, fuck-off-big cluestick.

Critique (1)

0x0d0a (568518) | about 10 years ago | (#9836826)

"P2P" isn't smart caching.

Almost all existing P2P filesharing-oriented servents reshare downloaded files. From that standpoint, the statement is not unreasonable.

And many common implementations actually use HTTP.

Not that I'm aware of. Gnutella uses an HTTP-like protocol, which is as close as I can think of.

Sure. I'd refactor XHTML to include more useful element types (e.g. ).

I disagree. The current behavior of navigation controls operates on a meta-level -- the operator never gets control over what they do. In the past, giving website designers control over web browser controls has widely resulted in poor decisions being made -- IE has taken a "web designer has control" approach, Mozilla a "user has control" one. Plus, it only takes a small percentage of poor usage or abuse of controls ("back" keeping people on the same ad, for instance) to make a control not worth it.

I could see a *new* set of interface controls "recommended-back", "recommended-forward", "recommended-up", "recommended-help", etc being introduced, but not overloading the existing controls.

I'd switch protocols like SMTP and formats like RFC2822 over to Unicode/XML.

Why do you dislike the existing MIME-encoded method of handling Unicode data?

What would be the benefit of XML usage?

I'd make maintaining state something intrinsic to HTTP.

Why? Demands for state maintenance vary widely across HTTP-using systems -- web browsers can be pigeonholed, sure, but cookies are already there and work nicely. What if you need five megs of state (different mechanism from cookies), or need to have the client aware of the content of the state? There are a lot of systems that can't afford to maintain state, like embedded systems.

But first of all, I'd beat people who use buzzwords without understanding what they mean with one gigantic, fuck-off-big cluestick.

I really don't think that he was all that awful, really.

Re:Critique (0)

Anonymous Coward | about 10 years ago | (#9837712)

> "P2P" isn't smart caching.

Almost all existing P2P filesharing-oriented servents reshare downloaded files. From that standpoint, the statement is not unreasonable.

It is as long as clients download from other clients indiscriminately, which is what the majority, if not all, do. HTTP already provides caching - and it's caching that works well, as you almost always get resources from a computer that is closer to you on the network. Typical P2P clients retrieve resources from all over the place, leading not only to slower downloads, but to unnecessary transfers.

Gnutella uses an HTTP-like protocol, which is as close as I can think of.

It is actually HTTP. Not "like" HTTP. HTTP.

> I'd refactor XHTML to include more useful element types (e.g. <navigation>).

I disagree. The current behavior of navigation controls operates on a meta-level -- the operator never gets control over what they do. In the past, giving website designers over control of web browser controls

I think you misunderstood. When I mentioned a <navigation> element type, I was thinking of a container for navbars and the like, not a replacement for browser controls. For instance, browsers could automatically provide a "skip to navigation" function and hide the navigation when printing documents out.

I could see a *new* set of interface controls "recommended-back", "recommended-forward", "recommended-up", "recommended-help", etc being introduced, but not overloading the existing controls.

HTML 4.01 already provides this. Look into the <link> element type.

> I'd switch protocols like SMTP and formats like RFC2822 over to Unicode/XML.

Why do you dislike the existing MIME-encoded method of handling Unicode data?

It's not MIME per se, it's the irregular mixture of ASCII + whitespace significance in the protocol (e.g. EHLO ...), followed by a different syntax for ASCII metadata (the RFC 2822 headers), and then a completely arbitrary format following. Don't forget ASCII-armored PGP sigs. Throw in the fact that implementations can and do vary quite widely, and it turns into a tangled mess you just want to go away.

Chuck it all through XML and you don't have to worry about encodings, you don't have to worry about parsing; all you have to do is pull out the addressing etc. with a couple of XPath statements. The rest is handled by the XML parser you use, and will be consistent across all implementations. No ASCII armoring of digital signatures required; that can be handled in a standard way by XML Signature. It's much more regular.
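As a rough sketch of what that buys you, here is how pulling the addressing out of a hypothetical XML-wrapped message might look with Python's standard XML parser; the element names are invented for illustration, not any real standard.

import xml.etree.ElementTree as ET

message = """<message>
  <from>alice@example.org</from>
  <to>bob@example.net</to>
  <subject>Lunch?</subject>
  <body>Noon at the usual place.</body>
</message>"""

root = ET.fromstring(message)
sender = root.findtext("from")                      # "alice@example.org"
recipients = [e.text for e in root.findall("to")]   # XPath-style query
print(sender, recipients)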

> I'd make maintaining state something intrinsic to HTTP.

Why? Demands for state maintenance vary widely across HTTP-using systems

With all due respect, I disagree. The basic mechanisms in use today don't vary much at all, but cookies are a kludge with all sorts of special cases. Netscape's original design was decent for its time, but hasn't been significantly improved upon since its introduction.

What if you need five megs of state (different mechanism from cookies), or need to have the client aware of the content of the state?

You can maintain state on the server side. All you need is a unique ID. If the client needs to be aware of the content, you do what you normally do and provide it with the content.

There are a lot of systems that can't afford to maintain state, like embedded systems.

I disagree. Where are the HTTP clients that can't store a few KB in memory during a website visit?

Re:Critique (1)

0x0d0a (568518) | about 10 years ago | (#9840858)

It is as long as clients download from other clients indiscriminately, which is what the majority, if not all, do. HTTP already provides caching - and it's caching that works well, as you almost always get resources from a computer that is closer to you on the network. Typical P2P clients retrieve resources from all over the place, leading not only to slower downloads, but to unnecessary transfers.

I guess it comes down to what you define as "smart caching".

It certainly caches, the question is whether it's considered to be smart or not. I guess you do not.

It is actually HTTP. Not "like" HTTP. HTTP.

Try sending "GET /" to your favorite Gnutella client, and see whether you get "file not found" or "syntax not understood".

HTML 4.01 already provides this. Look into the <link> element type.

I'm aware of this; this is currently provided in browsers by overloading existing controls instead of adding new "recommended navigation" controls.

No ASCII armoring of digital signatures required, that can be handled in a standard way by XML signature.

I don't know why there's any reason to armor signatures at all, frankly, outside of archaic mail clients -- the modern format is to slap the signature in as an attachment, which means that it'd be just as easy to base64-encode the attached signature as it would a .gif that's being sent.

I mean, there *is* some parsing code involved, and theoretically one could rework headers and whatnot to be in XML, but there is a huge amount of deployed software that uses the existing parsing code, and it's not as if an XML-based storage of them would be significantly better in any way that I can see. If anything, I view base64 encoding as more regular than the encoding XML does (IIRC).

With all due respect, I disagree. The basic mechanisms in use today don't vary much at all, but cookies are a kludge with all sorts of special cases. Netscape's original design was decent for its time, but hasn't been significantly improved upon since its introduction.

Okay, I guess I should say ... what issues do you have with cookies being used to store state?

I disagree. Where are the HTTP clients that can't store a few KB in memory during a website visit?

Embedded devices. Sure, each one isn't expensive -- you're talking maybe a few cents of additional cost to add some more onboard RAM -- but it is per-unit, which sucks.

Re:Critique (0)

Anonymous Coward | about 10 years ago | (#9843595)

Try sending "GET /" to your favorite Gnutella client

That hasn't been a correct HTTP request since HTTP 0.9. Try actually reading the protocol specification. It uses HTTP.

> HTML 4.01 already provides this. Look into the <link> element type.

I'm aware of this; this is currently provided in browsers by overloading existing controls instead of adding new "recommended navigation" controls.

Not in any browser I know of; it's usually provided as a supplementary toolbar.

I mean, there *is* some parsing code involved, and theoretically one could rework headers and whatnot to be in XML, but there is huge amounts of deployed software that uses the existing parsing code

Err... the whole point of this article is what we would do differently if we had a brand-new Internet to work on. Legacy code isn't an issue.

what issues do you have with cookies being used to store state?

The main issues I have are that it is far too tightly coupled with DNS and that it is something that web developers cannot rely upon or even not rely upon. What I mean is, you can't rely on it: people switch off cookies, you have to go to annoying lengths to figure out whether your cookies are being ignored, your cookies can be ignored on a case-by-case basis (making any previous determinations invalid), cookies can be "read-only", throwing systems completely out of whack, and so on.

Re:Critique (1)

0x0d0a (568518) | about 10 years ago | (#9844185)

That hasn't been a correct HTTP request since HTTP 0.9.

Which all newer HTTP specifications require backwards compatibility with.

Try actually reading the protocol specification. It uses HTTP.

Try actually using the servents. The document does not reflect how the GnutellaNet operates. Given the way the GDF operates (mostly trying to formalize existing practices rather than coming up with new protocol specs from scratch), it is unlikely that it ever will.

Not in any browser I know of; it's usually provided as a supplementary toolbar.

Okay, you've piqued my interest. Which browser have you used that provides separate buttons specially for use of this tag, be it in a separate toolbar or what? I don't see any such controls in Firefox, Konqueror, or Mozilla, which is all that I have installed on my system.

Err... the whole point of this article is what we would do differently if we had a brand-new Internet to work on. Legacy code isn't an issue.

But we'd need to provide benefits for that rewrite.

The main issues I have are that it is far too tightly coupled with DNS and that it is something that web developers cannot rely upon or even not rely upon.

The issues you state are privacy issues, and deliberately chosen. The fact that websites other than the granting one generally may not retrieve state from the browser (I assume that's what you mean by "tied to DNS") is pretty much necessary to avoid data leakage about your browsing habits to other websites. I feel quite strongly that administrators of one website should not be able to monitor what users do off of their website. The other issue, that cookies may not be available, falls into the same category. I used to deny cookies and whitelist particular sites, because I did not like the privacy implications (a webmaster has no need to see me as anything other than a series of requests -- he can tack a CGI argument representing my ID onto the end of an URL, if he feels it necessary to maintain server-side state, like a shopping cart). I currently just blacklist some sites, and convert all permanent cookies into session cookies, which keeps me reasonably happy and makes it reasonably difficult for folks like DoubleClick to associate my different identities together. Should I have to give up this privacy because a webmaster feels that it is necessary to use cookies instead of embedded identifiers in URLs?

Re:Critique (0)

Anonymous Coward | about 10 years ago | (#9844961)

> That hasn't been a correct HTTP request since HTTP 0.9.

Which all newer HTTP specifications require backwards compatibility with.

That's definitely untrue. [w3.org] The spec writers deliberately state that they won't address backwards compatibility.

> Try actually reading the protocol specification. It uses HTTP.

Try actually using the servents. The document does not reflect how the GnutellaNet operates.

Okay, I just fired up gtk-gnutella, opened a connection and issued an HTTP request by hand. It responded with a correct HTTP response conforming to RFC 2616. The software and the specifications appear to agree with me, not you.
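For anyone who wants to repeat the experiment, here is a minimal sketch of issuing an HTTP request by hand over a raw socket in Python. The host and port are placeholders (6346 is just the traditional Gnutella default); point it at wherever your servent is listening.

import socket

HOST, PORT = "127.0.0.1", 6346  # placeholder servent address

request = (
    "GET / HTTP/1.1\r\n"
    "Host: 127.0.0.1:6346\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    sock.sendall(request.encode("ascii"))
    reply = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        reply += chunk

# The first line of the reply shows whether it really speaks HTTP,
# e.g. "HTTP/1.1 404 Not Found".
lines = reply.decode("latin-1", errors="replace").splitlines()
print(lines[0] if lines else "no response")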

Which browser have you used that provides separate buttons specially for use of this tag, be it in a separate toolbar or what? I don't see any such controls in Firefox, Konqueror, or Mozilla, which is all that I have installed on my system.

It's an element type, not a tag. Mozilla calls it the "site navigation toolbar", and you can switch it on in the View menu. I think Firefox only supports it when you install the extension. Opera calls it the "navigation bar" and you can also switch it on in the View menu. I don't think Konqueror supports it.

But you stated "this is currently provided in browsers by overloading existing controls" - which browsers do this?

The issues you state are privacy issues, and deliberately chosen.

Oh, I know that. But they are solved in a very simplistic manner that causes lots of problems and isn't very robust. For instance, you state that one domain can't retrieve cookies that another domain set for privacy reasons. Of course, I understand that. But why can't a website supply a list of domains that are permitted to do so? The current types of workarounds employed by the likes of Passport are numerous, very fragile, and should not be necessary.

The other issue, that cookies may not be available, falls into the same category. I used to deny cookies and whitelist particular sites, because I did not like the privacy implications (a webmaster has no need to see me as anything other than a series of requests -- he can tack a CGI argument representing my ID onto the end of an URL, if he feels it necessary to maintain server-side state, like a shopping cart).

This kind of hack is exactly why current cookies are a problem.

For instance, what happens when you click a link that takes you to an external website? That ID shows up in their logs. You want to restrict it to a certain IP address? Sorry, clients and IPs don't have a one-to-one relationship, that doesn't work.

I'm not saying you should give up your privacy, just that if they are disabled, there should be a real way of finding that out, instead of the *nasty* kludges currently in use.

Furthermore, what difference does it make to you whether your session is being tracked by in-URL ID or in-cookie ID? Your session is still being tracked, either way.

The question indicates misunderstanding (2, Informative)

SpaceLifeForm (228190) | about 10 years ago | (#9834411)

The Internet is not just HTTP.

Please study TCP/IP better before you ask such a question again.

Re:The question indicates misunderstanding (0)

Anonymous Coward | about 10 years ago | (#9834966)

I blame the /. editors for allowing such a stupid question to be posted.

The comment indicates mis-reading (1)

gray peter (539195) | about 10 years ago | (#9835620)

Or maybe just reading too quickly. The question posed is "...how you would develop a post-HTTP/HTML internet?". There's nothing at all wrong with the question, and in fact it most certainly does indicate that the author is distinguishing between the "Internet" and HTTP (HTTP being 1 protocol which happens to run over the Internet).

So instead of trying to prove that you're smarter than the average /.er by playing with semantics, how 'bout putting that noggin to better use and answering the question. Clearly you are an expert on the field in question and must have some good ideas.

My suggestion? Think of a browser-website connection as analogous to a client-server database setup. Where is the latency? It's in establishing connections. What if HTTP had connection pooling? Seems like it would speed things up significantly.
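(HTTP/1.1's persistent connections already allow this kind of reuse; here's a minimal sketch with Python's standard http.client, the host name being a placeholder, of several requests sharing one TCP connection instead of paying the handshake each time.)

import http.client

conn = http.client.HTTPConnection("www.example.com")
for path in ("/", "/about.html", "/contact.html"):
    conn.request("GET", path)   # reuses the same TCP connection each time
    resp = conn.getresponse()
    resp.read()                 # must drain the body before the next request
    print(path, resp.status)
conn.close()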

Re:The comment indicates mis-reading (1)

Dachannien (617929) | about 10 years ago | (#9841486)

So instead of trying to prove that you're smarter than the average \.er by playing with semantics,

Actually, I think it's a fair comment. The question becomes somewhat ambiguous when the line between the World Wide Web (which is ostensibly what the article poster meant) and the Internet as a whole is blurred. Is the intention to redevelop IP and/or TCP/UDP to be better suited for the distribution of web content, to the possible detriment of other forms of Internet content? Or is the question what it appears to be to the uncareful reader, that being to arrive at a new application-level protocol suite for more efficient/effective distribution of web content without breaking the levels below?

Re:The question indicates misunderstanding (0)

Anonymous Coward | about 10 years ago | (#9836365)

Yeah, you're basically an idiot.

Don't be nasty (3, Insightful)

0x0d0a (568518) | about 10 years ago | (#9836626)

You know exactly what he meant, and simply couldn't pass up the opportunity to bash him to demonstrate your maximum geekiness.

Please study TCP/IP better before you ask such a question again.

You know what I've found? Professors and people who generally understand a subject are generally not assholes towards people who make an error in it (maybe if they're frustrated) -- they try to correct errors. It's the kind of people who just got their MCSE who feel the need to demonstrate how badass they are by insulting others.

The question was not unreasonably formatted. The most-frequently used application-level protocol on the Internet is HTTP. The only other protocols directly used much by almost all Internet users are the mail-related ones. The main way that people retrieve data and interact with servers on the 'Net is HTTP. Often, the HTTP-associated well-known ports 80 and 443 are the only non-firewalled outbound ports allowed to Internet-connected desktop machines. You're using a Web browser to read this at the moment. Other protocols are increasingly tunneled over HTTP. Saying that we have an "HTTP Internet" is entirely reasonable.

Re:Don't be nasty (0)

Anonymous Coward | about 10 years ago | (#9839269)

The main way that people retrieve data and interact with servers on the 'Net is HTTP. Often, the HTTP-associated well-known ports 80 and 443 are the only non-firewalled outbound ports allowed to Internet-connected desktop machines. You're using a Web browser to read this at the moment. Other protocols are increasingly tunneled over HTTP. Saying that we have an "HTTP Internet" is entirely reasonable.

not so much.... if you wanted to extend your logic to its full, and obviously wrong, conclusion, then it would be more correct to say that we have a "cdp Internet", as I would be willing to bet that that is what most of the traffic is.. ... well... that and RIP/OSPF/(insert favorite routing protocol here).... course, adding all that up, it would probably be more correct (thinking both physically and uh... protocol-wise) to say we have a "Cisco Internet"... which is probably rather close to the truth...

The internet is more than the sum of its protocols ;-) .... anywhore, insofar as the really important hardware goes, HTTP is barely a protocol; if you don't like the way it works, simply design a new protocol. Honestly, there's only what, a dozen or so distinct messages that can be sent by HTTP? GET, POST, and various error numbers.

HTML is where the *REAL* complication comes into play... and I would agree with just about anyone who tried to tell me it's inherently broken. Competing browsers that all have slightly different methods of adhering to (or creating) "standards" ... bah... HTML is the real target of the submitter's question... leave HTTP out of it.

Unification (4, Interesting)

Cranx (456394) | about 10 years ago | (#9834514)

First, I would re-design IP to take variable-length addresses, so IPv4, IPv6 and everything else to come are all compatible and interchangeable.

Then I would re-design DNS so that you have to provide not just a domain name to resolve to an IP number, but a "resource type" such as SMTP, HTTP, etc. (similar to MX records, but generic). Each resource type would have its own associated IP number and port.

I would unify all the protocols under a single HTTP-like protocol and make everything, FTP, SMTP, NNTP, etc. a direct extension of it.

I would merge CGI and SMTP DATA into a single "data" mechanism that could be used with any of the protocols uniformly.

I would clean up the protocol so it's possible to concatenate multiple lines together without ambiguity, and uniformly, so the method for multiple line headers is the same as multiple lines of data.

I would also move SSL authentication into that protocol, rather than having it at the TCP level. This would make shared hosting simpler and would save us a LOT of IPv4 numbers.

I would peel the skin off of anyone who suggests that XML become an integral part of that protocol. XML is wordy, wasteful, hard to read and should be a high-level choice, not a low-level foundation.

That's not all I can think of, but that's all I'm going to bother with right now. =)

Re:Unification (2, Funny)

avalys (221114) | about 10 years ago | (#9835164)

Why don't you cure cancer, solve the world's energy problems, and establish world peace while you're at it?

Re:Unification (1)

The Master Control P (655590) | about 10 years ago | (#9835880)

"First, I would re-design IP to take variable-length addresses, so IPv4, IPv6 and everything else to come are all compatible and interchangable."

That's been considered before, and was rejected because handling variable length addresses would place an enormous strain on routers and DNS servers.

Re:Unification (1)

Cranx (456394) | about 10 years ago | (#9836158)

I disagree with whomever feels that way, then.

If you kept the same model of bit-patterning the numbers (network bits high, host bits low), a single byte (or smaller bit pattern) could be added to the packet to represent the number length (00000100 for IPv4 and 00000110 for IPv6).

Lookups could be sped up because you could pre-hash the router lookup table by separating IP networks by the length of their numbers. If a packet came in with an IP number length of 7, you could search for a routing solution straight out of the "size 7 table", skipping all the networks listed in the size 4, 5 and 6 tables.

I see no fundamental barrier to something like this.
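A toy sketch of the pre-hashing idea in Python, with addresses treated as raw byte strings and everything invented for illustration: one longest-prefix table per address length, so a lookup only ever consults the table matching the packet's address size.

# Per-length routing tables; prefixes and interface names are made up.
ROUTES = {
    4:  {b"\x0a\x00": "if0",              # 10.0.0.0/16    -> if0 (4-byte IPv4)
         b"\xc0\xa8": "if1"},             # 192.168.0.0/16 -> if1
    16: {b"\x20\x01\x0d\xb8": "if2"},     # 2001:db8::/32  -> if2 (16-byte IPv6)
}

def lookup(dest):
    table = ROUTES.get(len(dest))         # jump straight to the right-size table
    if table is None:
        return None
    # naive longest-prefix match within that table
    for prefix in sorted(table, key=len, reverse=True):
        if dest.startswith(prefix):
            return table[prefix]
    return None

print(lookup(bytes([10, 0, 3, 7])))       # -> "if0"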

Constant-size addresses (1)

0x0d0a (568518) | about 10 years ago | (#9836330)

I see no fundamental barrier to something like this.

No, but it is easier for a chip engineer to make optimizations with constant-length addresses.

And, honestly, as long as we're using IPv6 addresses as actual addresses, as they're intended to be used, I just cannot see length being an issue again. (Problems will come up if some idiot tries ramming additional data into the thing, like a MAC address.)

Re:Constant-size addresses (1)

Cranx (456394) | about 10 years ago | (#9837898)

I just cannot see length being an issue again

I guess this is the core of the issue for me: I can imagine not being able to imagine, right now, needing any more than what IPv6 allows.

It's not just a matter of having enough addresses for all the hosts we may have; there's an allocation issue. Every network is going to want giant swaths of address space, so the ceiling that IPv6 provides is much lower than the sum of hosts that can fit in it. Lots of addresses are going to go to waste in one network while other networks starve for addresses.

I just think "fixed"-length addresses are going to bite humans in the butt again one day.

Critique (1)

0x0d0a (568518) | about 10 years ago | (#9836509)

First, I would re-design IP to take variable-length addresses, so IPv4, IPv6 and everything else to come are all compatible and interchangable.

As I go into more detail in my post elsewhere in this thread, I don't think that this is a good idea. It makes optimizations harder, and IPv6 should never need to be extended as long as it is properly used. Furthermore, unless a new protocol uses the *exact same* routing mechanisms and *only* changes address length, compatibility gets broken anyway. I think the gain may not be what you're hoping for -- you couldn't just slap IPX on an IP network, for example.

I do think that the fact that the BSD sockets API was designed to deal so poorly with long addresses is a real disaster, though. The *endpoints* of a connection-oriented address generally only care about the length of the address.

Then I would re-design DNS so that you have to provide not just a domain name to resolve to an IP number, but a "resource type" such as SMTP, HTTP, etc. (similar to MX records, but generic). Each resource type would have its own associated IP number and port.

SMTP/HTTP are not resource types. They are protocols.

You could have a "WWW" resource type, I guess.

This is already done, with well-known ports -- the advantage of using well-known ports is that the additional network traffic and latency is avoided.

I would unify all the protocols under a single HTTP-like protocol and make everything, FTP, SMTP, NNTP, etc. a direct extension of it.

Hmm. I dunno. I guess you could wrap these in HTTP, but where's the benefit of doing so? You can't really reuse any significant functionality and you'd slightly increase complexity (since everything would need to be linked to an HTTP library).

I would merge CGI and SMTP DATA into a single "data" mechanism that could be used with any of the protocols uniformly.

Hmm. I'm not quite sure what the benefit to doing so would be.

I would clean up the protocol so it's possible to concatenate multiple lines together without ambiguity, and uniformly, so the method for multiple line headers is the same as multiple lines of data.

Wouldn't this just impose overhead on protocols that use an eight-bit-clean and non-line-oriented interface, like FTP DATA?

I would also move SSL authentication into that protocol, rather than having it at the TCP level.

Not sure what you mean ... I guess you'd make every protocol SSL-tunneled? I mean, I think it's a good idea (more plaintext services becoming encrypted == better), but you can already do that, and the main reason that people don't is because of (now archaic) patent issues and because of CPU load. Also, SSL adds overhead, not just on the CPU, but in latency.

This would make shared hosting simpler and would save us a LOT of IPv4 numbers.

I don't see why this would be the case.

I would peel the skin off of anyone who suggests that XML become an integral part of that protocol. XML is wordy, wasteful, hard to read and should be a high-level choice, not a low-level foundation.

Thank you. I've seen one utterly idiotic proposal for doing something like what you're proposing and ramming everything through XML, which is ridiculous.

XML may be the most overused and oversold format ever. It's neat for a certain set of tasks, but it has no benefit for many things that it is used for.

Re:Critique (1)

DLWormwood (154934) | about 10 years ago | (#9836903)

SMTP/HTTP are not resource types. They are protocols.
You could have a "WWW" resource type, I guess.
This is already done, with well-known ports -- the advantage of using well-known ports is that the additional network traffic and latency is avoided.

I think you misread what the original poster meant. He wanted a given DNS name to resolve to completely different IPs depending on intended use. For example, "tempuri.org" could resolve to one IP if being accessed in the "Web" domain, while the DNS server would return a different address if it were being used in a "Database" domain. This could potentially have reduced name disputes, if organizations didn't have the pig-headed need to lay claim to any name that merely resembles or contains their valued "trademarks" and "servicemarks."

Relying on a port number would require either server (or, most likely, a third server) to dispatch requests to a single IP, then route traffic to other IPs based on intended use. He wanted to shift the burden of traffic differentiation up a level.

Not that I agree with this. It would put too much of a burden on the DNS system, only to make the lives of a select few domain admins easier.

Re:Critique (1)

Cranx (456394) | about 10 years ago | (#9837994)

You understood my DNS idea, I just wanted to add to your comments.

Relying on a port number would require either server (or, most likely, a third server) to dispatch requests to a single IP, then route traffic to other IPs based on intended use. He wanted to shift the burden of traffic differentiation up a level.

This isn't how it would work. When a client resolves a domain name, it would provide a domain name and "use ID" and would get, in return, an IP address and port and would go directly to the IP/port.

It would put too much of a burden on the DNS system, only to make the lives of a select few domain admins easier.

It wouldn't be any more difficult than what all mail clients have to do right now to determine the MX record for a domain name. All software would have to provide that "use ID" and then connect to an IP/port, rather than how things are done now where there is no "use ID" and the port is assumed. It wouldn't burden anyone very much.

Re:Critique (1)

funky womble (518255) | about 10 years ago | (#9842370)

This isn't how it would work. When a client resolves a domain name, it would provide a domain name and "use ID" and would get, in return, an IP address and port and would go directly to the IP/port.
We've already got one of those: rfc2782 [roxen.com] ... It's in use already, but mainly in-site as part of DNS service discovery (rendezvous/zeroconf) and ActiveDirectory - it's not supported by e.g. standard web browsers, email clients etc.
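
For the curious, a lookup against such a record looks roughly like this -- a minimal sketch assuming the third-party dnspython package, with _http._tcp.example.com as a purely illustrative service name:

import dns.resolver  # third-party dnspython package

# ask for the SRV record set of a hypothetical HTTP service
for rr in dns.resolver.resolve("_http._tcp.example.com", "SRV"):
    # each record carries a priority, weight, target host and port
    print(rr.priority, rr.weight, rr.target, rr.port)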

There are problems with using site-variable port numbers: it makes identifying traffic types a little tricky, having implications for e.g. traffic prioritisation, blocking malicious/unwanted traffic. As such it's probably more useful on a network within one administrative domain than on the internet. There's no corresponding method for looking up service types given a port number and IP address (e.g. additional records to in-addr.arpa) to help out, possibly because it would be rather difficult to place any degree of trust in that data anyway: you can't really have an unknown DNS server controlling your firewall policy. This is a bit of a different thing than MX records, where port numbers can't be defined.

It wouldn't be any more difficult than what all mail clients have to do right now to determine the MX record for a domain name. All software would have to provide that "use ID" and then connect to an IP/port, rather than how things are done now where there is no "use ID" and the port is assumed. It wouldn't burden anyone very much.
For protocols already in common use, it would add delays and/or place more load on DNS in the changeover period (which is likely to be protracted). There are other problems too. Someone types in example.com - what do you need to look up? www.example.com A, example.com A, example.com SRV? What about sites where these are different - which address do you connect to? Then, do you send them off all at once (which reduces delays in the common case but has a tendency to increase delays overall)? How long do you wait for replies? - they could come back out of order. Or do you send them serially, which will add some delay to the majority of lookups but is on the whole friendlier to the networks and DNS servers.

A web browser vendor is unlikely to be particularly happy to add and default-enable a feature that adds to the time taken to resolve the majority of names - but realistically, you won't have very high takeup on the server side until most clients threaten not to connect unless it's done. For newer protocols it's simpler, since there often need be no fallback to A record, and indeed SRV records are being used on some newer protocols. Still, the traffic identification problem is still there.

Adding this to HTTP is a bit different than the case with adding MX to email: many more people will notice the increased time to resolve the name. With email, the delay is in the background, after the message has left the end-user's mail client and enters the transport system: it's almost invisible. With interactive requests such as HTTP, any delay is immediately obvious.

Since it doesn't really buy you anything you don't get from an address-translating device on the IP address of example.com, and given the complexities and problems it adds, who's going to use it?

Re:Critique (1)

Cranx (456394) | about 10 years ago | (#9843181)

There are problems with using site-variable port numbers: it makes identifying traffic types a little tricky

This IS a real problem with the idea, but I think it could be worked out with some creative thinking.

Someone types in example.com - what do you need to look up? www.example.com A, example.com A, example.com SRV? What about sites where these are different - which address do you connect to? Then, do you send them off all at once (which reduces delays in the common case but has a tendency to increase delays overall)? How long do you wait for replies? - they could come back out of order. Or do you send them serially, which will add some delay to the majority of lookups but is on the whole friendlier to the networks and DNS servers.

You can still map IPs to any canonical name you want; "use IDs" wouldn't change that. You just need to provide the "use ID" to get the IP for the server you want to connect to.

One nice thing about "use IDs" though is you can STOP using canonical names like www.domainname.com to map IPs to protocols, and use them only for mapping to hosts. You would only have domainname.com and a "use ID" of HTTP. You would also have "use IDs" for other protocols used with the domain name, and you could return a different IP+port for each of them.

A web browser vendor is unlikely to be particularly happy to add and default-enable a feature that adds to the time taken to resolve the majority of names - but realistically, you won't have very high takeup on the server side until most clients threaten not to connect unless it's done. For newer protocols it's simpler, since there often need be no fallback to A record, and indeed SRV records are being used on some newer protocols. Still, the traffic identification problem is still there.

Remember, this is a re-design. I wouldn't take into account any difficulty they have converting their software to the new way of doing things. We're designing to "get it right."

Adding this to HTTP is a bit different than the case with adding MX to email: many more people will notice the increased time to resolve the name. With email, the delay is in the background, after the message has left the end-user's mail client and enters the transport system: it's almost invisible. With interactive requests such as HTTP, any delay is immediately obvious.

Why would it take more time? The only difference is that you're giving the DNS server a "use ID" and it looks up the domain name, then finds the IP+port which is mapped to that ID. That shouldn't take any noticeable additional time.

Since it doesn't really buy you anything you don't get from an address-translating device on the IP address of example.com, and given the complexities and problems it adds, who's going to use it?

Everyone, it's how things would be done in the re-design; you would have no choice.

Re:Critique (1)

Cranx (456394) | about 10 years ago | (#9837777)

SMTP/HTTP are not resource types. They are protocols.

You could have a "WWW" resource type, I guess.


My idea was to change DNS to allow IP numbers to be returned for arbitrary identifiers the way MX works, but more generically; not "resource types" per se. You can store numbers for HTTP, WWW, MAIL, TELNET, PORN, whatever you want.

This is already done, with well-known ports -- the advantage of using well-known ports is that the additional network traffic and latency is avoided.

Well-known ports are very problematic in that they assume there is a fixed number of protocols to assign standard ports to, and they assume everyone is cooperating. By allowing arbitrary identifiers to determine the port, you can drop "well-known ports" altogether.

Hmm. I dunno. I guess you could wrap these in HTTP, but where's the benefit of doing so? You can't really reuse any significant functionality and you'd slightly increase complexity (since everything would need to be linked to an HTTP library).

Lots of reasons, but the two main ones I can think of are: combined code base (as quality of code increases, it increases for all protocols) and it would make it easier to implement protocols and work with them. All the different internet protocols, while very similar, have their own little quirks.

Also, any time you added some desired new feature to the protocol, it is immediately available to all the descendant protocols.

(DATA MERGE) Hmm. I'm not quite sure what the benefit to doing so would be.

Same as above; uniformity and a central code base simply makes it easier to implement, debug, etc. It's just a code quality, efficiency thing.

I would clean up the protocol so it's possible to concatenate multiple lines together without ambiguity, and uniformly, so the method for multiple line headers is the same as multiple lines of data.

Wouldn't this just impose overhead on protocols that use an eight-bit-clean and non-line-oriented interface, like FTP DATA?


I would actually keep the data channel FTP has and make that part of the protocol; it's very efficient. Although perhaps I would make it part of the new data scheme, where you would provide metadata through the protocol, and deliver the data either through the protocol or, optionally, through an 8-bit data connection.

Not sure what you mean ... I guess you'd make every protocol SSL-tunneled? I mean, I think it's a good idea (more plaintext services becoming encrypted == better), but you can already do that, and the main reason that people don't is because of (now archaic) patent issues and because of CPU load. Also, SSL adds overhead, not just on the CPU, but in latency.

This would make shared hosting simpler and would save us a LOT of IPv4 numbers.

I don't see why this would be the case.


Hmm...hard to explain briefly. Virtual hosting is done at the HTTP layer; when a connection is made, browsers request resources by providing a domain name and resource. A web server can "switch" off to the appropriate virtual host using the domain name given. SSL is negotiated before it gets to HTTP, and the domain name is not part of the negotiation. Certificates are matched by both domain name and IP address, but there's no way to hand off a different certificate based on the domain name because SSL negotiation doesn't use domain name information during the handshake. That doesn't happen until the HTTP layer. So you need a unique IP address for each virtual web host which has an SSL certificate. If you could move SSL negotiation into the protocol layer and out of TCP, you could switch off based on the domain name given. You could, actually, switch off based on lots of different criteria, but that's how I think most hosters would do it.
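
The TLS Server Name Indication extension does essentially this: the client sends the host name as part of the handshake, so the server can pick the matching certificate before any HTTP is spoken. A minimal client-side sketch using Python's standard ssl module (example.com is just a placeholder host):

import socket, ssl

ctx = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as raw:
    # server_hostname is carried in the handshake (SNI), letting one IP
    # serve many certificate-bearing virtual hosts
    with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
        print(tls.version(), tls.getpeercert()["subject"])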

XML may be the most overused and oversold format ever. It's neat for a certain set of tasks, but it has no benefit for many things that it is used for.

Way overused. I boggle sometimes at how many people simply don't understand its strengths and weaknesses and simply want to apply it everywhere. It really offers so little that simple character encoding doesn't already offer, and the structure is so unlike typical programming structures (arrays, maps, etc.). I saw an XML document the other day that applied metadata in the form of a tag to another tag, but required that the metadata appear immediately before the other tag to work properly. They weren't wrapped together in a tag, and there was no ID linking them. One had to be right after the other for the information in one tag to apply to the other tag. Ridiculous.

Re:Critique (1)

boneshintai (112283) | about 10 years ago | (#9838062)

Well-known ports are very problematic in that they assume there is a fixed number of protocols to assign standard ports to, and they assume everyone is cooperating. By allowing arbitrary identifiers to determine the port, you can drop "well-known ports" altogether.

No, you don't. You simply move the problem from "well-known ports" to "well-known labels".

Lots of reasons, but the two main ones I can think of are: combined code base (as quality of code increases, it increases for all protocols) and it would make it easier to implement protocols and work with them. All the different internet protocols, while very similar, have their own little quirks.

Not all "internet protocols" are sufficiently similar to be wedged into a single, request-response-oriented protocol. Consider X11 (6000 + display #) or VNC (5800 + display #, 5900 + display #), for instance. Or IRC (6667), to pick something a little closer to the canonical set of 'core' (text-based) internet protocols. All of these protocols have been designed with a specific task in mind, and not one of them maps well to HTTP's request-response structure: they're all asynchronous.

Re:Critique (1)

Cranx (456394) | about 10 years ago | (#9838330)

Well-known ports are very problematic in that they assume there is a fixed number of protocols to assign standard ports to, and they assume everyone is cooperating. By allowing arbitrary identifiers to determine the port, you can drop "well-known ports" altogether.
No, you don't. You simply move the problem from "well-known ports" to "well-known labels".
If they move to "well-known labels", isn't that "dropping well-known ports?" Ports and labels are two different animals. Well-known ports are a finite number of numbers, and labels are a nearly infinite number of text IDs. I wasn't advocating eliminating "well-known" anything, but re-designing DNS to have generic MX-like "use IDs" that return IP+port.
Not all "internet protocols" are sufficiently similar to be wedged into a single, request-response-oriented protocol. Consider X11 (6000 + display #) or VNC (5800 + display #, 5900 + display #), for instance. Or IRC (6667), to pick something a little closer to the canonical set of 'core' (text-based) internet protocols. All of these protocols have been designed with a specific task in mind, and not one of them maps well to HTTP's request-response structure: they're all asynchronous.
Which is why I only mentioned unifying protocols similar to HTTP, such as SMTP, FTP, NNTP, etc. Not all protocols could be unified that way, although you could perhaps initiate connections through a unified protocol that then handed off to tighter, more efficient protocols.

Obviously (1)

0x54524F4C4C (712971) | about 10 years ago | (#9834875)


The only thing that makes any difference in the internet is pr0n. If XML, SOAP, cryptography and the like can provide more pr0n, that's good. Otherwise piss off.

REST (2, Insightful)

StupidEngineer (102134) | about 10 years ago | (#9834878)

Forget ditching HTTP, it's good even with its quirks. It's easy to use... And it's near perfect for applications designed with the REST philosophy in mind.

Instead of ditching HTTP, let's ditch SOAP-RPC.

Flash (1, Interesting)

beholder77 (89716) | about 10 years ago | (#9835334)

Macromedia did a great presentation to my org on the idea of turning websites into live applications with Flash. As a web developer I found the whole idea to be quite cool. Flash seems to give a heck of a lot more flexibility and control than any HTML/JavaScript hackery I've seen. The apps I saw demo'ed were even communicating with a DB server using web services.

Flash has its drawbacks, of course (proprietary and non-indexable being pretty critical), but if opened up to a standards body, it could very well be the next HTML.

Re:Flash (0)

Anonymous Coward | about 10 years ago | (#9836192)

Try SVG. It does most of flash and is an open standard.

I don't like Flash (3, Insightful)

0x0d0a (568518) | about 10 years ago | (#9836281)

I really hate Flash.

I hate Flash for a lot of reasons.

*) Lots of web designers think animation is a good idea. They tend to use it more than a user would like, especially since the "is it cool" metric (where users are asked for their initial impressions of a site rather than asked to use the thing for a month and report on its usability) is wildly tilted toward novelty. Animation is almost never a good idea from a usability standpoint on a website.

*) Lots of people doing Flash try to do lots of interface design, going so far as to bypass existing, well-tested and mature interface work with their own pseudo-widgets. They usually don't know what they're doing.

*) Flash is slow to render.

*) Flash is complex, and it's hard to secure the client-side Flash implementation compared to, say, a client-side HTML rendering engine.

*) The existing Flash implementation chews up as much CPU time as it can get.

*) Flash does not allow user-resizability of font sizes.

*) Flash does not allow for meta-level control over some things, like "music playing in the background". Some websites provide a button for this. I don't want to have control only if the designer chooses to give it to me -- I never want that software playing music if I choose not to have it do so.

*) Flash does not allow user-configurable font colors (and for some reason, too many Flash designers seem to think that because ten-pixel-high light blue text on dark blue looks great to them, everyone else should be able to read their site just as easily).

*) Because Flash maintains internal state that is not exposed via URL, it's not possible to link to a particular state of a Flash program -- this means that you can only link to a Flash program, not a particular section of one. This is very annoying -- I can link to any webpage on a site, but sites that are simply one Flash program disallow deep linking. (I'm sure that concept gets a number of designers up somewhere near orgasm, but it drives users bananas.)

*) The existing Flash implementation is not nearly as stable as the other code in my web browser, and takes down the web browser when it goes down.

*) As you pointed out, I can't search for a "page" in a Flash program.

Really, the main benefit of avoiding Flash to me is that it keeps web designers from doing a lot of things that seem appealing to them but are actually Really Bad Ideas from a user standpoint. Almost without exception, Flash has made sites I've used worse (the only positive example I can think of was either a JavaScript or Flash demo in which the manufacturer of a hardware MP3 player demoed their interface to website users).

I *have* seen Flash used effectively as a "vector data movie format", for which it is an admirable format -- I suspect most Slashdotters have seen the Strong Bad cartoons at some point or another. But I simply do not like it as an HTML replacement.

Re:I don't like Flash (1)

whatever3003 (536979) | about 10 years ago | (#9843696)

Absolutely ~ all those points hit the nail on the head. However, not all websites are designed with the intention of communicating information; some instead aim to create some sort of environment, an experience, though this really borders on an interactive movie of sorts.

But if the designer gets the tickle to make your browsing experience something of a movie and doesn't provide a (point-for-point) site map alternative ~ you're screwed and they've screwed themselves.

I browse with plug-ins off personally; Flash ads are a pet hate.
*hugs Opera*

~ LSH

Re:Flash (1)

file cabinet (773149) | about 10 years ago | (#9836377)

What makes a site on the internet successful? Its content. IMO, Flash limits content capabilities. JavaScript has its uses... you don't overuse it and you don't underuse it. JavaScript and Flash can be effective tools for improving usability, but that is about all they are good for.

Everything changes (0)

Anonymous Coward | about 10 years ago | (#9835577)

Those of us who have been using the "internet" for a LONG time realize this. Everything changes.

Gopher used to be the most popular way to get information on the internet; that has been replaced by HTTP. Who uses gopher anymore? It's still out there, and it's still usable.

Look at the history: the internet was started to exchange data with colleges. OK, great; email was pretty much its first major use, then along came FTP, and then along came gopher, and then along came HTTP.

HTTP will never be replaced or "go away"; something new may simply come out and be used the most.

Personally I want a protocol to replace IP. I want verified connection lists, basically a firewall built into the protocol, to only allow verified connections and warn on others.

I want IP privacy masking, meaning if I connect to a server, it won't record my IP, and my IP will never be seen on the public network. The phone company can "block caller ID"; why can't an ISP block "host IP"?

The list could go on and on

Privacy (1)

0x0d0a (568518) | about 10 years ago | (#9836099)

Personally I want a protocol to replace IP. I want verified connection lists, basically a firewall built into the protocol, to only allow verified connections and warn on others.

IPSec? At the application level, SSL?

I want IP privacy masking, meaning if I connect to a server, it won't record my IP, and my IP will never be seen on the public network. The phone company can "block caller ID"; why can't an ISP block "host IP"?

Oh, it can. Lots of ISPs provide web proxies, in particular (they'd probably be tickled pink if people would regularly use proxies, and save them bandwidth costs). However, it makes it even easier for the ISP to track you. The majority of people I've talked to interested in masking are primarily interested in not having illegal activities tracked, and if anything, it's easier for police to find out an identity by going to the ISP and asking for their logs.

There's already been a full anonymity service provided by Zero Knowledge Systems. They even provided onion routing within their own network to make external traffic analysis on their network a pain in the ass. Nobody bought it -- people don't understand or want to pay for anonymity. ZKS went on to do other things.

What about the non-HTTP Internet? (2, Informative)

Gothmolly (148874) | about 10 years ago | (#9835581)

You gloss over, with a sweep of your clueless wand, the rest of us who rely on the Internet for things like SMTP, SSH, Muds, Usenet, IM and VPNs.
Please don't assume that my Internet is the same as your Intarweb.

Come up with something people want. (1)

Captain Rotundo (165816) | about 10 years ago | (#9835964)

Do you remember the Net prior to HTTP and the web? If so, you can easily see why "the web" took off like it did.

You need to develop a new protocol/app that provides something people actually want without added complexity, and you'll replace the web as quickly as the web replaced usenet/gopher/ftp/irc (I know it didn't replace all of those things, but for the majority of uses and people it did, to some degree, render them obsolete).

Of course, if your new system were really a wanted thing and open enough to become a worldwide standard, someone would probably just patch it into a browser in some way so as to make it accessible from current websites, the way Flash is accessible from HTML. And then not only would people call your thing "the web" anyway, they would only be using the bare minimum needed to get the new *feature* they wanted.

The Web took off because it worked. You couldn't really patch images and hyperlinks onto FTP the way you can seamlessly access email/usenet/(*insert here*) over HTTP with the proper server. It just wouldn't work.

Oh, yeah (4, Insightful)

0x0d0a (568518) | about 10 years ago | (#9835995)

Let's see:

* The primary addressing mechanism would be content-based addressing (like SHA1 hashes of the content being addressed). We have problems with giving reliable references for things like bibliographies. We are gradually moving in this direction. P2P networks are now largely content-addressed, and bitzi.com provides one of the early centralized databases for content-based addressing. (A minimal hashing sketch follows this list.)

* We would have a global trust mechanism, where people can evaluate things, and others can take advantage of those evaluations in proportion to how much they trust the evaluators. Right now, web sites have very minimal trust mechanisms (lifetime of domain, short domain names, and the generally-ignored x.509 certs). This would apply not just to domains, but would be more finely grained and apply to the content beneath them.

* The concept of creatable personas would exist. Possibly data privacy laws would end up requiring companies not to associate personas, or perhaps we would just make it extremely difficult to associate such personas. You would maintain different personas which may, if so desired, be separate. Such personas would be persistent, and could be used to evaluate how trustworthy people are -- e.g. if Mr. Torvalds joins a coding forum and makes some comments about OS design, he can simply and securely export his persona (a pubkey and some other data) from the other locations where he has been using that persona (like LKML, etc) and benefit from the reputation that has accrued to that persona. This would eliminate impersonation ("this is the *real* Linus Torvalds website", etc.).

* Such trustable, persistent personas would allow for the creation of systems that provide persistent contact information ('snot that hard). This means no more dead "email addresses".

* Domain names would not be used as the primary interface mechanism to users for finding and identifying data providers. This is halfway handled already -- most people Google for things like "black sabbath" instead of looking for the official Black Sabbath website by typing out a single term. It's still possible for people to "choose their visual appearance", though, and Visa looks very much like "visa-checking.com", so long as end users have no control over how domains are presented to them.

* P2P becomes a primary transport mechanism for data -- from an economic standpoint, this means that consumers of data are responsible for subsidizing continued distribution of that content, and shifts the burden from the publisher of the content -- one step removed from consumers funding the production of their content. It solves many of the economic issues associated with data distribution. For this to happen, P2P protocols will have to be strongly abuse-resistant, even if that means a lesser degree of performance or efficiency. Many existing systems have severe flaws -- Kazaa, for instance, allows corrupted data to be spread to users, and conventional eDonkey (sans eMule extensions) does not provide any mechanism to avoid leeching, which destroys the economic benefits. Sadly, one of the few serious attempts to address the stability of the system -- Mojo Nation, which Bram Cohen of BitTorrent fame worked on -- was abandoned; it used a free-market economic system to determine resource allocation and was fairly abuse-resistant. I have some efforts in this direction, but they don't use a free-market model.

* Email and instant messaging will merge to a good degree (or perhaps one will largely "take over"). Up until now, it has mostly been technical limitations in existing software that have kept one from supplanting the other -- email provides poor delivery-time guarantees, instant messaging imposes message size limitations. Email uses a strictly thread-based model, instant messaging uses a strictly linear model. Probably someone will coin a new, stupid term for the mix of the twain (like "instant mail").

* Personas and global trust networks (not extremely limiting binary-style trust, a la PGP/GPG), as mentioned above, will interact with mail. They will be the antispam and anti-joe-job tool of the future, the final fix.

* IP will evolve to better deal with QoS. Currently, QoS has largely been a failure on the Internet. However, with the spread of P2P and broadband, we have major bandwidth consumers that could purchase tiered service, where there is non-time-critical delivery (USENET and email were the last places where it was possible to slap low-priority flags on a large chunk of Internet bandwidth). It should be possible to pay different rates for high, normal, and low-priority bandwidth (or buy "50MB of high-priority bandwidth/month in the gamer package", etc). We have pretty much only dealt with this for RT-guaranteed delivery, like major providers that are selling VoIP-capable tunnels.

* Data addressed by content addressing will allow the association of metadata in a standardized fashion -- in particular, relationships like "derived-from" between two content-addressed pieces of data will be expressible and publishable. I'm working in my spare time on code to implement this.

* HTML will finally fail. Ultimately, the move of the general public to the Internet, and the corresponding de-emphasis of computer scientists, failed to produce a population that understands abstract markup, as many had hoped. Instead, there is simply a mass of people who demand to use traditional publishing methods on web pages, like non-rewrapping and pixel-level layout. HTML was never designed for this, despite extensive work to retrofit it. I don't think that PDF will necessarily supplant HTML, but I think that something closer to PDF will -- something intended for static layout. It's too bad that most web designers are incapable of designing around content that changes size/shape/layout, but there's no sense crying over spilled milk.

* We will finally have a sane RPC mechanism. SOAP is a hack to ram things through HTTP, CORBA is complicated, and sunrpc is complicated and ugly. I think one of the most important things that Java did was try to address the problem of distributed systems, and the fact that interfaces to talk across hosts are generally either complicated or a serious pain in the ass to use. Older languages were never designed to natively run on distributed systems. Distributed applications (including commercial ones) will become more common, now that broadband is more common and the industry is more familiar with the idea of providing remote services. Distributed applications, where part of the application lives on the server, can be largely piracy-proof, which gives them a tremendous boost over traditional software in the revenue department. Incidentally, distributed apps are also probably not effectively bound by the GPL, since functionality of GPLed software can be provided without distributing source code. A major problem with this has been the lack of micropayments (see below).

* Micropayments, or at least more efficient electronic financial transaction systems, will come to the fore. The existing "standard electronic financial transaction system" is the credit card, which is a horrible, easily attacked system, controlled by a few companies, and expensive. PayPal is a partial move in the right direction, eliminating some of the people from the equation. Technically advanced systems (like the use of smartcards, especially those with a keypad and a calculator-style LCD strip on them) can be made that are largely fraud-proof. The only reason that e-cash hasn't caught on yet is because the existing financial services companies are the ones that have backed existing attempts, and demand perks similar to the ones they already enjoy. For example, record-keeping and statistical analysis are necessary to avoid credit-card fraud, so credit card vendors have come to enjoy the benefits of having a complete database of people's purchases (and the lucrative marketing possibilities associated with this). It is possible to provide anonymous e-cash systems, but no provider has any interest in doing so -- people don't value this. Credit card companies take a significant chunk of transactions -- about 3%. No e-cash proposal has attempted to take a flat fee, or simply much less. Nobody is going to use a system where someone is steadily taking cuts off the top as the standard mechanism of money interchange (especially person-to-person, where fees are particularly obvious). Such systems, once in place, will allow a number of services that have not previously existed.

* The concept of "channels" or "packages" will be applied to commercial websites, in the short term, to deal with the lack of micropayment systems. On TV, nobody wants to deal with paying for a single TV show -- pay-per-view is not the dominant TV paradigm. Instead, they buy packages of channels of many shows, which brings the price up to a quantity that people are comfortable dealing with. If people want to use thirty commercial websites, they aren't going to want to hassle with thirty monthly bills. Plus, there's a marketing benefit -- people can only be *using* one website at once, but by selling "access to *fifty* websites!", it *looks* like a more valuable product to the consumer. I can see, say, "Edutech, Inc" providing an "educational reference package" that sells access to a ton of reference sites. Maybe a parent won't buy service to just britannica.com, but would they buy access to m-w.com, britannica.com, and so forth? Same goes for other sites -- entertainment, B2B transaction, etc. Packagers add value to service -- it means that domain experts can evaluate products (and drop ones that are becoming problematic or not treating customers well) whereas individual customers are not in a position to get reactions from website developers -- they are, after all, only one person. Use of personas will again be extremely useful -- account maintenance has gotten out of hand, where people reuse passwords and have to hassle with too many passwords and so forth, to the point where it is a significant impediment to commercial websites existing. The "package" approach is not without technical precedent -- various AVS services have done this, and I believe gamespy has set up a similar such service, but they are largely limited by the use of cookies or another password to remember -- an issue which personas would fix. If credit cards are still used in such a scenerio, it means *one* (presumably more reputable) company involved with billing you instead of a horde of tiny websites.

Re:Oh, yeah (1)

isj (453011) | about 10 years ago | (#9846048)

I have to add a few things on your comments about RPC.

SOAP is a hack to ram things through HTTP

I completely agree. It was borne out of the need to tunnel RPC through HTTP due to misguided and zealous firewall administrators, combined with the then-current hype: XML. The result is a bloated protocol.

sunrpc complicated and ugly

It isn't. The interface specification is close to C:

struct Foo {
    int x;
    string y<>;
};

program BOO {
    version BOOVERS {
        void BAR(Foo) = 1;
    } = 1;
} = 0x20000000;
and after the stubs have been generated by rpcgen you can do stuff like:
CLIENT *client = clnt_create(host, BOO, BOOVERS, "udp");
Foo f;
...
void *result = bar_1(&f, client);   /* client stub generated by rpcgen */
However, sunrpc is lacking authentication, and its age is showing because it has not been developed in the past few years. But the on-the-wire protocol is lightweight (XDR), the encoding/decoding routines are very fast, and everything is documented. And every Unix system supports it. You can also get it for Windows, Java, etc.

CORBA is complicated

Yes. The OMG (Object Management Group) unnecessarily tied it to a naming service, evangelized UML, and assumed that you had full control of the environment where it is running. It doesn't help that the IDL-to-C++ mapping uses non-standard classes (no STL) -- though, to be fair, the OMG did not have a choice at the time. But CORBA IDL is very nice:

module Boo {
    exception BarFailed { ... };
    interface XYZ {
        struct Foo {
            long x;
            string y;
        };
        void Bar(in Foo f) raises (BarFailed);
    };
};
And it can be used in C++ like this:
try {
    servant->Bar(f);
} catch (const Boo::BarFailed &e) {
    ...
}
But the memory ownership rules in the IDL-to-C++ mapping are complicated. It does map rather nicely to Java, though...


Ideally, I would like to see an IDL like CORBA, with stub generation like rpcgen or idl, with an on-the-wire format like XDR. And without the tie-ins to other components. And forget about objects being portable across the network. The possibility of fine-grained access control would also be nice (defaulting to full access for ease of testing).

XML (1)

jesboat (64736) | about 10 years ago | (#9837912)

All documents should be XML (or some other data description language). CSS's successor should be used to assign presentation to elements, possibly by converting them to other element trees.

The XML should be pseudo-standardized, so browsers would be able to recognize TV-Listing-ML/Search-Result-ML and present it in an alternate form if you wanted, with headers and footers added (to make advertisers happy -- unfortunately necessary for a new Web protocol to succeed).

Re:XML (1)

hsoft (742011) | about 10 years ago | (#9842214)

I agree with you, but on this point:

CSS's sucessor should be used to assign elements presentation. Possibly by converting them to other element trees.

I would like to point out that XSLT already exists, and it is not a replacement for CSS but a complement: XSLT handles the data format, CSS handles the style.

HTTP is fine (2, Interesting)

billcopc (196330) | about 10 years ago | (#9838306)

HTTP is fine, a stream-transfer protocol can only do so much.

HTML, however, feels rather clunky now with all these bloated, half-supported standards tacked onto it. We still don't have consistent rendering across the board, and it's still a pain in the posterior to publish anything. CSS, that wretched hammer of aborted salvation, is yet another limited hack.

We used to have HTML glitches and workarounds; now we have CSS glitches and workarounds: design compromises in a system that was supposed to break the boundaries of visual layout. Well, here we are five years later and the graphic artists are still using Flash instead of CSS... I even collapsed and learned the dark art of PHP-generated Flash to do some things that just weren't worth the trouble in CSS. Content is king, but we have 256 MB video cards and we want to use them!

It's been tried (1)

ehvoy (696364) | about 10 years ago | (#9839216)

It's been tried but it requires Windows 2000 Professional or better, Microsoft Internet Explorer 6, IIS 6, and SQL Server 2000 or better. Version 2 requires Microsoft Longhorn.

No XML please. (1)

TheOnlyCoolTim (264997) | about 10 years ago | (#9839561)

I think the nicest thing I can say about XML is that sometimes it isn't blatantly inferior to any other solution. Sometimes.

Tim

Pro Jax! (1)

Graymalkin (13732) | about 10 years ago | (#9841524)

I don't really see a need to wholly move away from HTTP and HTML. They're both extremely flexible and will likely be very useful for many years to come. They're both relatively basic systems that aren't terribly difficult to implement with a modicum of programming talent. They're also extremely lightweight, which makes it much easier to use them on equipment with very little power or memory.

Because HTML is fairly verbose and well-formed HTML is laid out regularly, it isn't terribly difficult to parse. Computers with a fraction of the processing power and memory available to modern PCs can easily handle HTML files. Well-formed HTML and especially XHTML documents are actually very useful because they can not only be easily parsed but also easily handled by a variety of output devices. What is doubly useful about HTML is that it can also point, in an implementation-neutral way, to other documents. This is what hypertext is all about: text that can describe itself to anyone who is listening.
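
As a small illustration of how approachable the format is, here is a minimal sketch that pulls the hyperlinks out of an HTML fragment using Python's standard html.parser module:

from html.parser import HTMLParser

class LinkLister(HTMLParser):
    # collect href targets -- the "pointing at other documents" part of hypertext
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    print(value)

LinkLister().feed('<p>See <a href="http://example.com/">this page</a>.</p>')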

HTTP as a protocol is extremely useful because it is stateless and as such has very low overhead. Because of this it is very easy to implement and maintain. These features also make the protocol very robust and extensible. It can handle binary and textual data equally well and even tunnel other types of protocols inside of itself. These are features that don't need to be replaced or even necessarily enhanced.
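
To illustrate how little machinery a request actually needs, here is a minimal sketch of a raw HTTP/1.1 exchange over a plain socket in Python (example.com is just a placeholder host):

import socket

# a complete, valid HTTP/1.1 request is nothing more than a few lines of text
request = b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
with socket.create_connection(("example.com", 80)) as s:
    s.sendall(request)
    print(s.recv(4096).decode("latin-1", errors="replace"))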

The future (1)

DocUK (794395) | about 10 years ago | (#9841887)

I think the future of the internet will use technology such as the BitTorrent idea to take advantage of the high client-to-server ratio and distribute content more effectively. This would truly make the web (the World Wide Web, that is) interconnected.