Smarter Clients Via ReverseHTTP and WebSockets

kdawson posted more than 4 years ago | from the edges-are-not-dumb dept.

igrigorik writes "Most web applications are built with the assumption that the client / browser is 'dumb,' which places all the scalability requirements and load on the server. We've built a number of crutches in the form of Cache headers, ETags, and accelerators, but none has fundamentally solved the problem. As a thought experiment: what if the browser also contained a Web server? A look at some of the emerging trends and solutions: HTML 5 WebSocket API and ReverseHTTP."
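
For readers who haven't met it yet, the HTML5 WebSocket API mentioned in the summary boils down to the browser opening one persistent, outbound connection over which the server may push data at any time. A minimal sketch in browser JavaScript follows; the ws:// endpoint and the "subscribe" message are invented for illustration.

    // One persistent, client-initiated connection; the endpoint is hypothetical.
    var ws = new WebSocket("ws://example.com/updates");

    ws.onopen = function () {
        // The browser dialed out, so NAT and firewalls are not in the way;
        // from here on the server can push frames whenever it likes.
        ws.send("subscribe:frontpage");
    };

    ws.onmessage = function (event) {
        // Server-initiated data arrives here, with no polling involved.
        console.log("pushed: " + event.data);
    };

    ws.onclose = function () {
        console.log("connection closed");
    };

Note that even here the browser is still the connecting party; ReverseHTTP is the more radical half of the idea, letting a page claim a URL and answer HTTP requests sent to it.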

235 comments

A problem that I can see. (5, Insightful)

ls671 (1122017) | more than 4 years ago | (#29111435)

A problem that I can see is that web browsers already contain enough security holes; imagine if they contained a web server ;-)

Re:A problem that I can see. (1)

palegray.net (1195047) | more than 4 years ago | (#29111541)

What if the web server were limited to only communicating with external web servers to which a connection was already made, refusing any unknown connections from the network?

Re:A problem that I can see. (2, Interesting)

jellomizer (103300) | more than 4 years ago | (#29111577)

So you're trusting the site you connect to? That's the ActiveX mentality: if you went to the site, then it must be OK.

Re:A problem that I can see. (-1, Troll)

Anonymous Coward | more than 4 years ago | (#29111635)

It's Windows that you can't trust for good security. I bet if this ran on Linux or OpenBSD it would be no problem to integrate a web server into the web browser. Let's face it, all the virii are for Windows and not running it is the best way to avoid them.

Re:A problem that I can see. (0)

Anonymous Coward | more than 4 years ago | (#29111815)

Stop the presses! Newsflash! Slashdot poster claims Windows is inherently insecure! Only way to ensure security is to not run Windows!

Re:A problem that I can see. (1)

jellomizer (103300) | more than 4 years ago | (#29111883)

So let's have it create a 120-gig file called "-rf /" and let's see how many amateur Linux users wipe out their drives.

Re:A problem that I can see. (2, Insightful)

cripeon (1337963) | more than 4 years ago | (#29112235)

And pray tell me how exactly you're going to encode a "/" [wikipedia.org] in the file name?

Re:A problem that I can see. (0)

Anonymous Coward | more than 4 years ago | (#29112277)

I've done it before. Accidentally. It wasn't easy.

Re:A problem that I can see. (1)

palegray.net (1195047) | more than 4 years ago | (#29111761)

Nobody ever said the web server in the browser should have any privileges beyond the browser. That's not the ActiveX mentality.

Re:A problem that I can see. (1)

wealthychef (584778) | more than 4 years ago | (#29111781)

What if the web server were limited to only communicating with external web servers to which a connection was already made,

How is it a server if it is the one requesting a connection? This is semantic abuse! I'm calling the definition police.

Re:A problem that I can see. (1)

0racle (667029) | more than 4 years ago | (#29111863)

Opera wants to explore that particular 'what if.'

Re:A problem that I can see. (1)

TeXMaster (593524) | more than 4 years ago | (#29112341)

I wonder if anybody has tried targeting the Opera Unite services for holes? Or is Opera still too irrelevant market-wise?

Re:A problem that I can see. (1)

Adm.Wiggin (759767) | more than 4 years ago | (#29112283)

BLOAT. Firefox is "the great hog". Why would we want to add MORE? Yes, I love it, but that doesn't change the fact that Firefox is a pig.

Re:A problem that I can see. (1)

TeXMaster (593524) | more than 4 years ago | (#29112357)

Firefox is a hog because it's coded to be one, not because of the number of features. Opera provides hundreds more features than FF (including a web server) and it's much leaner.

Re:A problem that I can see. (1)

Adm.Wiggin (759767) | more than 4 years ago | (#29112391)

But they need to hire a user interface designer who has never done web design work. I feel like I'm using a poorly designed "Killer Web 2.0 Application" when I try to navigate the Opera preferences screens. The browser underneath all that is fantastic, but the UI is a deal-breaker for me.

A problem that I can't see. (2)

paxcoder (1222556) | more than 4 years ago | (#29112329)

Why is this any different from the classic thing?
All it does is enable the "server" to send data even when the "client" doesn't request it each time (doesn't refresh). Instead of trusting AJAX code fetched from the server that "refreshes" parts of pages at set intervals, why not just trust the socket that provides the info necessary for the "refresh"? I don't see any new problem introduced here.
Please explain if you do.
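
(To make the contrast concrete, here is roughly what is being replaced and what replaces it, as a sketch. The /inbox URL, the ws:// endpoint, and the render() helper are all invented.)

    // Classic AJAX: poll a hypothetical /inbox resource every 5 seconds,
    // whether or not anything has changed.
    setInterval(function () {
        var xhr = new XMLHttpRequest();
        xhr.open("GET", "/inbox", true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                render(xhr.responseText);  // render() is a hypothetical page updater
            }
        };
        xhr.send(null);
    }, 5000);

    // The socket version: one connection, and data arrives only when it exists.
    var ws = new WebSocket("ws://example.com/inbox");
    ws.onmessage = function (event) {
        render(event.data);
    };

Either way the client initiated the connection, which is the parent's point: the trust relationship is the same as with the AJAX code.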

Connection, yes. Server, no. (4, Insightful)

Animats (122034) | more than 4 years ago | (#29111495)

There's nothing wrong with a browser establishing a persistent connection to a server which uses a non-HTTP protocol. Java applets have been doing that for a decade. That's how most chat clients work. Many corporate client/server apps work that way. In fact, what the article is really talking about is going back to "client-server computing", with Javascript applications in the browser being the client-side application instead of Java applets (2000) or Windows applications (1995).

But accepting incoming HTTP connections in the browser is just asking for trouble. There will be exploits that don't require any user action. Not a good thing. Every time someone puts something in Windows clients that accepts outside connections, there's soon an exploit for it.

Re:Connection, yes. Server, no. (3, Interesting)

Scootin159 (557129) | more than 4 years ago | (#29111645)

While I agree with the parent that accepting incoming connections is a bad thing, it may also be the "killer feature" that gets IPv6 implemented. Auto-configuring clients to support incoming connections is inherently difficult with NAT.

Re:Connection, yes. Server, no. (2, Informative)

raddan (519638) | more than 4 years ago | (#29111893)

I don't think you're going to see people give up NATs easily. NAT is a bona fide security feature, not just a consequence of having a LAN. It's the same principle that makes detecting bad segmentation faults possible in an operating system, and from that perspective, a separate address space is very desirable.

Any kind of 'fundamental change' that happens on the Internet needs to accept that NATs are part of good architecture. Do you really want your toaster on the same address space as your Cray?

Re:Connection, yes. Server, no. (1)

Tony Hoyle (11698) | more than 4 years ago | (#29112085)

No, they're really not. They don't add any security except a tiny bit of obscurity.

Re:Connection, yes. Server, no. (1)

Korin43 (881732) | more than 4 years ago | (#29112503)

More like a huge amount of obscurity. Let's say you install Windows 2000 on a computer and don't install any patches or service packs. Connect it directly to a cable modem and you'll have viruses instantly. Do the same install on another computer, put it behind a router, and you'll find that even without any patches you're fine. My point isn't that patches aren't necessary; it's that the obscurity of being hidden behind a router protects you from threats that haven't been discovered yet, and those are the hardest ones to protect against.

Re:Connection, yes. Server, no. (1)

zn0k (1082797) | more than 4 years ago | (#29112111)

NAT is not a bona fide security feature. NAT by itself can be traversed, NAT keeps no traffic information other than port numbers - NAT at best obfuscates network space, which is a welcome side effect but doesn't make things secure. A firewall is a security feature, and it's perfectly possible to firewall IPv6 traffic. Also, it's of course perfectly possible to subnet an IPv6 network and separate your Crays and your toasters, if you so desire.

Re:Connection, yes. Server, no. (4, Insightful)

Dragonslicer (991472) | more than 4 years ago | (#29112149)

NAT is a bona fide security feature, not just a consequence of having a LAN.

What security does it provide that a REJECT ALL firewall rule wouldn't?

Re:Connection, yes. Server, no. (3, Insightful)

DiegoBravo (324012) | more than 4 years ago | (#29112499)

> What security does it provide that a REJECT ALL firewall rule wouldn't?

The security that most users don't have any idea about how to configure a REJECT rule, even if they have a firewall at all.

Re:Connection, yes. Server, no. (1)

PitaBred (632671) | more than 4 years ago | (#29112513)

The only thing I can think of is some opaqueness of the network behind the NAT, but that's not a huge win.

Re:Connection, yes. Server, no. (1)

profplump (309017) | more than 4 years ago | (#29112599)

NAT provides exactly the same security as a connection-tracking firewall -- there is no further benefit to address translation over a dynamic firewall with the same rules. Dropping the NAT part makes it about 11,000 times easier to run services on the inside, particularly if they use multiple connections (e.g. FTP, SIP) in the course of a session, and it removes the "only 1 person can run a service on the default port" limitation introduced when you put more than one system behind a single address.

Re:Connection, yes. Server, no. (1)

gumbi west (610122) | more than 4 years ago | (#29111673)

I would actually expect more security problems on both sides! There is both a new server on the client and a new client on the server. Each will take some time to secure and inevitably open up vulnerabilities.

Re:Connection, yes. Server, no. (2, Interesting)

ls671 (1122017) | more than 4 years ago | (#29111783)

> But accepting incoming HTTP connections in the browser is just asking for trouble.

Exactly. Transparent caching proxies would seem to solve the issue in a simpler, easier-to-manage way. Then again, the providers I know of that tried to implement caching proxies have all abandoned the idea: their customers complained too much, and it brings all sorts of problems with the transactional behavior of an application.

So people do not like caching proxies; why would they like one in their browser? Why would they like getting content from another user's browser instead of from the original source?

Also, we live in an economy where we try to boost demand. I observed this trend many years ago: the further into the future we go, the less we seem to be concerned about bandwidth. Bandwidth is getting cheaper and cheaper, and providers want to sell more of it ;-)

Bandwidth usage is meant to go up anyway, so I do not see this concept flying. If we are really that concerned about bandwidth, let's start by designing applications properly and using what is already in place (caches, expiry headers, etc.), which is seldom the case in the applications I have seen.


Re:Connection, yes. Server, no. (1)

metaconcept (1315943) | more than 4 years ago | (#29111849)

So people do not like caching proxies; why would they like one in their browser? Why would they like getting content from another user's browser instead of from the original source?

What if the other user's browser is the original source? Think of something like TiddlyWiki instances running in the browser and synchronising peer-to-peer.

Re:Connection, yes. Server, no. (1)

ls671 (1122017) | more than 4 years ago | (#29112163)

How would this peer-to-peer-style static content propagation really be useful in a modern web application? It is not worth the security implications (basically the same as running any p2p client).

Most web applications have dynamic content nowadays, you know.

Would you rather get stale Slashdot articles from the browser running on your neighbor's computer, or get them fresh from Slashdot?

Again, there's nothing here that caching proxies don't already do, and people do not like caching proxies, as I explained in the GP.

Re:Connection, yes. Server, no. (1)

metaconcept (1315943) | more than 4 years ago | (#29112247)

It'd be useful in the same way decentralised version control systems are useful compared with their centralised counterparts: the replication topologies are no longer constrained to the simple hub-and-spoke model. Think Google Wave, without the centralised server. Think Google Docs, without the centralised server.

Re:Connection, yes. Server, no. (1)

ls671 (1122017) | more than 4 years ago | (#29112577)

I understand your point; you are basically saying that we should incorporate fancy p2p capabilities into web browsers ;-)

My point is that it is a bad idea security-wise. I use p2p clients/servers to do p2p (note: p2p clients are also servers). I admire p2p algorithms just as much as you might, but let's use the right tools for the right job.

A lot of people, especially big corporations, won't like the idea of their employees running p2p clients/servers built into their web browsers either ;-)

People running p2p clients/servers should understand the issues and risks. Computers are not sold with p2p clients/servers pre-installed; they are sold with pre-installed web browsers. Think about the "Grandma" type of user.

Cheers ;-)

TCP, by any other name, would thou smell as sweet? (1)

TiggertheMad (556308) | more than 4 years ago | (#29111955)

Regardless of whether you are maintaining a persistent connection via a non-HTTP protocol or setting up dual web servers to chatter back and forth, you are still missing the point.

This is yet another stupid patch for the fundamental design flaw of HTTP: it was never supposed to be used to mimic a persistent connection. Why keep running through ever more complicated mazes instead of just building a freakin' browser that maintains a connection with the host? (Rhetorical; I'm not really asking this as a question...)

If you want plain HTML, use a browser. If you want a TCP/IP connection, use an HTTP connection with AJAX, Java applets, server state, cookies, keep-alive headers, hidden form values, and viewstate to SIMULATE ONE...

Re:Connection, yes. Server, no. (1)

Abcd1234 (188840) | more than 4 years ago | (#29112041)

It's also a stupid, stupid idea. On top of the security concerns, it's a waste of resources, both along the network route, and at the endpoints (mmmm... even more sockets for the web server OS to keep track of). And it's a huge hassle for firewalls.

Honestly, I've been a defender of the whole thick-ish-web-client revolution, but this is just getting ridiculous. HTTP is a request-response protocol. If you need something interactive, use a frickin' interactive protocol. Why the hell would you shoehorn it into HTTP, save to prove that you can?

In short: re-inventing non-passive FTP using HTTP is stupid. Very very stupid.

Problem? (4, Insightful)

oldhack (1037484) | more than 4 years ago | (#29111517)

I thought dumping the load on the server was the desired design feature. What is the problem they are trying to solve? Good old rich client model has been around for some time now.

Re:Problem? (5, Funny)

ceoyoyo (59147) | more than 4 years ago | (#29111575)

Ah, but the young 'uns have forgotten that anybody did anything before they came along. So "the network" is synonymous with "the web" and if you want to send any information it better be over HTTP. So bidirectional HTTP means you can communicate in both directions!

Next they're going to figure out that if you move the web app out of the browser you can have a much richer GUI experience.

Re:Problem? (1)

WinterSolstice (223271) | more than 4 years ago | (#29111707)

This wouldn't be so funny if it weren't so true :D

At least I no longer have to use a teletype for my batch output...

Re:Problem? (1)

ceoyoyo (59147) | more than 4 years ago | (#29112279)

You know, I've heard this cool idea being kicked around of using a printer to automatically produce a hard copy of the console output. Sort of like a log, but on paper. It seems it might just be the next big thing!

Re:Problem? (1)

WinterSolstice (223271) | more than 4 years ago | (#29112607)

That would be especially useful whenever the system has a problem - then you would be able to read back the whole issue, and it wouldn't get deleted on reboot!

Re:Problem? (2, Informative)

digsbo (1292334) | more than 4 years ago | (#29112227)

The problem is that this is not funny, but painfully true. The problems of cross-platform client GUIs are all being solved again, but on top of several new layers of APIs with little value-add and a big performance hit. If you want to write a client-server app, please do so! Stop trying to make the browser the cross-platform widget toolkit.

I've seen good developers try to use UDP instead of TCP to save a few bytes of overhead, and they failed. How will web developers without any concept of network programming theory manage to recreate (or even use) a persistent, robust, and high-performance connection over HTTP? I doubt it will work well. We'll end up with more kludge layers between the browser, OS, browser plug-ins, etc....

Re:Problem? (2, Insightful)

ceoyoyo (59147) | more than 4 years ago | (#29112349)

I once had a PhD supervisor who had a problem. He was setting up a database for a group who needed to enter a few very simple bits of information for a clinical trial. I told him it would be no problem: whip up a little app in Python using Wx or Qt that does nothing but display a few fields and create a new row in the database when you hit submit. Maybe do a little bit of checking and pop up a dialog box on a duplicate.

But no... it had to be a web app. So after hiring a database guy, setting up a hefty content management system, and writing a bunch of code, the original group decided to use Access.

Re:Problem? (0)

Anonymous Coward | more than 4 years ago | (#29112401)

They are trying to use your PCs to push the "cloud" computing model forward, get it? They upload their website to your "client" and completely offload all the power, computing, and bandwidth liability to you, the end-user.

Why should Google have to pay for data centers and concentrators anyway, when your Core2Duo sits mostly idle and everyone knows P2P is the best way to "deliver content"?

This sounds awesome! (0)

Anonymous Coward | more than 4 years ago | (#29111523)

Imagine if IE contained IIS...that would be awesome!

imagine indeed...

only in the mind of microsoft

Going backwards (1)

nurb432 (527695) | more than 4 years ago | (#29111535)

The whole point of 'the web' was to move processing out to the 'cloud' (sorry for the buzzword). Ideas like this would only continue the backwards trend of moving processing back onto the client, which personally I feel is the wrong direction.

Re:Going backwards (3, Insightful)

Tynin (634655) | more than 4 years ago | (#29111649)

I mostly agree, but I believe much of the initial push to move processing out to the 'cloud' was because clients had limited hardware. Nowadays client hardware is rather beefy and could handle some of the load the server doesn't need to carry. That said, a web browser that opens ports and listens for connections on my computer would make me more than slightly wary.

Re:Going backwards (3, Insightful)

Locklin (1074657) | more than 4 years ago | (#29111687)

The point of the web was not to move processing out to the cloud, it was to build a multi-way communications medium (hence web) that anyone on the Internet could participate in. Moving processing to "the cloud" (i.e., someone else's computer) is the point of Google, not the web.

Re:Going backwards (2, Insightful)

tomhudson (43916) | more than 4 years ago | (#29112105)

The point of the web was not to move processing out to the cloud, it was to build a multi-way communications medium (hence web) that anyone on the Internet could participate in. Moving processing to "the cloud" (i.e., someone else's computer) is the point of Google, not the web.

Exactly. The original web was supposed to be read-write. A combination of companies that wanted to ream people for $50/month for hosting a simple hobby site and ISPs that wanted to upsell you a $100/month "business internet package with 1 static IP" are the guilty parties.

Of course, the Internet routes around such damage, so we have home servers operating on alternate ports, ultra-cheap hosting plans, and dynamic dns.

Re:Going backwards (1)

iron-kurton (891451) | more than 4 years ago | (#29111837)

Sorry, I have to disagree. There is no right or wrong as far as thin vs. thick clients are concerned -- it's really whatever is best for the job. Processing on the client side can be a good thing, as long as it's not abused (like it is with AJAX).

Perhaps I just don't understand TFA, but... (1)

St.Creed (853824) | more than 4 years ago | (#29111607)

I've been reading the blog article and the linked WebSockets API description. The WebSockets proposal specifically states that the protocol does not give access to the raw network and does not allow an IRC client without an intermediate server. So where's all the enthusiasm coming from? Even with WebSockets, it doesn't look all that different from the Opera implementation.

As far as I understand WebSockets from the description, you still have to point the browser at the right place to connect to. Once connected, you can then accept incoming messages from the other side. Well, color me pink and tie me down, but I'm not sure the rapture has arrived just because someone wrote up an API.

WebSockets shouldn't have much of a problem with firewalls, though, since you could use the existing tunnel. I wonder what this would do for security inside a company.

Scenario: I'm pointing my browser at a server I run at home. In my browser I run a small webserver that can access a command shell. Cool, now I can work from home despite a firewall and a lack of software :) Sorry, dear sysadmin, your firewall just got a few new holes in it :)

Re:Perhaps I just don't understand TFA, but... (0)

Anonymous Coward | more than 4 years ago | (#29111665)

Scenario: I'm pointing my browser at a server I run at home. In my browser I run a small webserver that can access a command shell. Cool, now I can work from home despite a firewall and a lack of software :) Sorry, dear sysadmin, your firewall just got a few new holes in it :)

Glad that nobody has figured out how to do that with ssh -R on port 80.

Already possible with httptunnel (0)

Anonymous Coward | more than 4 years ago | (#29111771)

http://www.nocrew.org/software/httptunnel.html

Re:Perhaps I just don't understand TFA, but... (2, Informative)

Locklin (1074657) | more than 4 years ago | (#29111785)

Scenario: I'm pointing my browser at a server I run at home. In my browser I run a small webserver that can access a command shell. Cool, now I can work from home despite a firewall and a lack of software :) Sorry, dear sysadmin, your firewall just got a few new holes in it :)

I used to do that on my university's network with a reverse SSH tunnel. You could even do it over port 80 if they blocked outgoing 22. The only issue would be whether you could install that webserver on a locked-down machine that has no SSH client.

I thought this was called Flash? (1)

ACMENEWSLLC (940904) | more than 4 years ago | (#29111615)

Flash 10 has the ability to do advanced client-side things. For example, it can update the screen with information from a server by posting to a website, via XML, etc. It's pretty good at doing this. 8e6 and SurfControl use this type of capability in their admin GUIs, for example.

Beyond just nice GUIs, one can serve up a special Flash document on a website. When the user opens the web page, a reverse proxy tunnel can be established, allowing access into the client's LAN through Flash and bypassing any firewall restrictions. I think that was a previous /. article.

It's got a lot of features many folks don't use.

Re:I thought this was called Flash? (1)

Per Wigren (5315) | more than 4 years ago | (#29112141)

It's also a proprietary binary blob, and an insanely buggy one at that. At least all the versions that are currently usable in practice (Gnash/swfdec are only usable in theory).

HTTP isn't dumb, it's just misunderstood. (4, Insightful)

EvilJohn (17821) | more than 4 years ago | (#29111669)

So really....

HTTP isn't dumb, it's (mostly) stateless. What about building net applications around stateful protocols instead of some stupid hack and likely security nightmare?

Re:HTTP isn't dumb, it's just misunderstood. (3, Insightful)

iron-kurton (891451) | more than 4 years ago | (#29111951)

Here's the thing: stateless works everywhere there is any Internet connectivity. Imagine having to define a long-lasting stateful protocol around slow and unreliable Internet connections. But I do agree that the current model is inherently broken, and maybe we can get away with defining short-term stateful protocols that could revert back to stateless...?

Re:HTTP isn't dumb, it's just misunderstood. (1)

BitterOak (537666) | more than 4 years ago | (#29112345)

Here's the thing: stateless works everywhere there is any Internet connectivity. Imagine having to define a long-lasting stateful protocol around slow and unreliable Internet connections.

That's exactly what TCP was designed for: persistent 2-way connections over possibly unreliable networks. But I agree with your basic point, given that firewalls may be configured to only allow HTTP and other basic protocols through.

Re:HTTP isn't dumb, it's just misunderstood. (1)

lennier (44736) | more than 4 years ago | (#29112121)

I've wondered recently how come we can't get a protocol like HTTP, but 1) not based on 'pages' but arbitrarily small/large and recursively nestable chunks of data, and 2) not pull and client-driven but publish/subscribe and persistent, where you'd attach to a data chunk and then be notified with the new value whenever that chunk changes. The rise of social services like Twitter and Facebook (and particularly the use of both by applications as a sort of generic publish-subscribe information bus) seems to be indicating that there is a need for such a thing, and building it on top of proprietary websites designed for other purposes entirely seems like a waste of time.

I'd like to get away from the 'client/server' approach of the Web back to the 'every endpoint is a host' of the underlying Net, because that's important for the end-to-end principle (and I think it was a big mistake to ever lose it). Moving just one step up the protocol stack from 'fire and forget datagram (IP)' to 'stream (TCP)' to 'subscribable chunk' seems like it would obviate the need for a lot of AJAX hackery. We could keep HTML as a display layer on top, but we really need a way to visualise and connect a whole lot of little tiny paragraph-size chunks of data - like, say, each post in a blog, each comment in a forum, each edit in a wiki, or each row in a table.

If we make this middleman protocol stateful, such that the equivalent of 'web proxy' for it is required to keep a cache and only transfer data into the internal network from outside when it changes... we could still keep a really simple, policy-free network, but reduce the insane amount of duplication of packets that we do now. If someone inside your home network pulls down a movie from a given URL, fine, it should get transferred once from your ISP's network to your home proxy/cache... and then sit on the cache there and not get re-downloaded. The same idea should work down to the level of individual Slashdot comments.

If we then add a very simple pure-functional language into this protocol, so that every 'chunk' could also be a function over other chunks, then we could get a generic RESTful computing model for mashups and the like.

Google Wave seems to be heading sort of in this direction, so maybe it might evolve into a generic replacement for the Web.
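
No such protocol exists, but the subscribe-and-notify model described above can be sketched. Here it is layered over a WebSocket purely for illustration; the endpoint, the wire format, and the chunk IDs are all invented.

    // Hypothetical "subscribable chunk" client; nothing here is a real protocol.
    var ws = new WebSocket("ws://example.com/chunks");
    var handlers = {};

    // Attach to a chunk: ask for its current value, then receive a push
    // whenever that chunk changes, however small the chunk is.
    function subscribe(chunkId, onChange) {
        handlers[chunkId] = onChange;
        ws.send(JSON.stringify({ op: "subscribe", id: chunkId }));
    }

    ws.onmessage = function (event) {
        var msg = JSON.parse(event.data);  // assume { id: ..., value: ... }
        if (handlers[msg.id]) {
            handlers[msg.id](msg.value);
        }
    };

    ws.onopen = function () {
        // e.g. re-render a single forum comment whenever it is edited:
        subscribe("comment/29112121", function (value) {
            document.getElementById("c29112121").textContent = value;
        });
    };

A caching proxy for such a protocol would just be another subscriber that re-publishes chunks inward, which is exactly the de-duplication described above.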

The web-application-forever-trend? (1, Interesting)

SplatMan_DK (1035528) | more than 4 years ago | (#29111681)

May I politely ask WHY anyone would want to continue making browsers "heavier" and "thicker" all the time, instead of simply making a good old-fashioned rich client (thick client, if you prefer)?

I am not looking to start a "web-app-vs-client-app" war here. I think there is a time and a place for both thick client applications and web applications. And I am a very happy Gmail user (among other web-app things). But sometimes I am REALLY amazed when I see the lengths some web developers will go to in order to achieve PRECISELY the same goals that thick clients have been able to meet for literally decades!

The platforms and standards for making web applications are continuously MOLESTED in order to give them primitive abilities which, at the end of the day, are STILL only a shadow of the power a rich client has.

Stuff like AJAX hits the scene, and people call it a "milestone" or a "revolution". Wow. Now a user can get his screen updated asynchronously without hitting a "submit" button. Big stuff there.

Next thing will be... what? Better graphics? Actual integration between applications? Easy third-party data integration? Ah, wait, maybe it will be a continuous (and actually working) user session? No no... wait... I've got it... it will be model-based programming. Yes. The revolutionary new "model-view-controller" design will totally change the landscape of web applications. It will be ground-breaking stuff to any web developer! Yeah!

The finest achievement any web application can aspire to is being described as "just as good as a rich client". Hasn't anybody stopped for a moment to think about WHY that is? Perhaps it would be better just to use web clients where they make sense, and rich clients where they make sense?

Why on earth do some people continue to abuse the thin (read: skinny and bone-rattling) web standards for tasks that are clearly better suited to a traditional rich client application?

This is an honest question - technical answers are more than welcome. I genuinely want to understand what is going on in the minds of all these "progressive" web developers who are seriously proposing the introduction of advanced server processes as part of a browser...

- Jesper

Re:The web-application-forever-trend? (0)

Anonymous Coward | more than 4 years ago | (#29111777)

You have to remember that HTTP is a stateless protocol, and as such has been a near failure. It's only through blind luck that HTTP is as popular as it is.

It's very important (mostly for buzzword compliant PHBs) that the world do whatever it can to make HTTP a pseudo-stateful protocol, so people can pretend that web pages are real applications (that magically disappear when you accidentally hit backspace while your input's focus is in the wrong spot).

Yes, the crippled (read: simple and easy to implement) HTTP protocol is slowing down the internet (read: stupid ideas) so we need to add 6-7 layers on top of it so it "works".

The OSI model is only a few layers. Wouldn't everything be better if it was a lucky number, like 13 layers?

HTTP + AJAX + GWT + HTML5 + Flash + ... + ... + ... = just write a desktop application already.

Re:The web-application-forever-trend? (0)

Anonymous Coward | more than 4 years ago | (#29112043)

HTTP is successful precisely because it is stateless and simple. I would not call it a near failure when the whole WWW was built using it.

Re:The web-application-forever-trend? (1)

SplatMan_DK (1035528) | more than 4 years ago | (#29112147)

True and false.

It is a success for the things it was originally created to solve.

It is a failure (although very widespread) for the more modern things it is used for in advanced web applications... ;-)

Re:The web-application-forever-trend? (3, Insightful)

Algan (20532) | more than 4 years ago | (#29111869)

With thin clients, people already have a (somewhat) standardized client. You don't have to worry about deployment issues, software updates, system compatibility issues, etc. It's there and it mostly works. If you can develop your application within the constraints of a thin client, you have already bypassed a huge pile of potential headaches. What's more, your users won't have to go through the trouble of installing yet another piece of software just to try your app.

Re:The web-application-forever-trend? (1)

SplatMan_DK (1035528) | more than 4 years ago | (#29112117)

I totally disagree. Respectfully :-)

Your view is typical of a techie or CTO responsible for a software roll-out. The basic thought seems to be avoiding rich clients at all costs in order to make the whole thing "simpler".

But there are a ton of disadvantages to web clients. And in many (but not all) cases they outweigh the advantages!

- They require MUCH more advanced back-end infrastructure, often including several servers and lots of monitoring/management
- Much more complicated maintenance and upgrades
- Much more complicated backup and disaster-recovery procedures
- Much more complex code to accomplish even the simplest things
- Inferior user experience (this becomes more visible as the complexity of the application increases)
- Inferior 3rd-party integration with the app (also more visible as complexity increases)
- A single point of failure (in fact a whole pile of server processes stacked on top of each other)

I have deployed both web applications and classic client/server applications in medium-sized enterprises (500-5000 seats) for business use. I can honestly tell you that the most complex ones have been the web-based ones. They often depend on a ton of existing technology that has to match very precise specifications, and they very seldom work "out of the box". Because of the complex back end, diagnosing a problem is virtually a nightmare, and you have several technology vendors all pointing fingers at each other over problems that (according to all of them) are not even supposed to exist.

So yes, you DO have to worry about deployment issues. You DO have to worry about software updates (which are often very complex because of the extensive technology stack). And it most certainly DOES NOT "just work".

You are correct in stating that users don't need to install anything. But hey - the same goes for the rich client. The roll-out of any decent client application can be totally automated very easily. :-)

- Jesper

Re:The web-application-forever-trend? (0)

Anonymous Coward | more than 4 years ago | (#29112327)

You phrased it truthfully and insightfully...

But all of that said -- how else can our sales guy be waiting in the airport, see somebody, and "sell" them a demo account of our product? They don't have to install anything, don't have to circumvent IT policy, don't need admin rights. "Just point your browser at this web page and click 'launch'." Yeah, it's *huge* maintenance for our dev team to get every possible browser working... possibly not even worth it. But it lets us sell to... anything.

No hassles about DLLs, no installers, no JVM or .NET version crap. If by some chance your system is really, really dated, we can recommend Firefox for "enhanced" performance. It just works on any system you can get a decent browser on.

Well -- it used to, anyway. Until the marketing folks discovered Flash and had a design firm splash it all over the site and try to "layer" it into the application. Well... we didn't actually have any users running Linux other than the devs anyway. And I'm sure Mac support will never lag behind... And I'm going to fall over laughing at the first marketer who calls to tell me he can't demo the app on his iPhone anymore.

sigh.

Re:The web-application-forever-trend? (1, Insightful)

Anonymous Coward | more than 4 years ago | (#29111961)

Because getting the obscenely large percentage of the world's population familiar with the World Wide Web to switch from the web they know to some new (or old) software uniformly across all platforms is a fool's errand?

Re:The web-application-forever-trend? (1)

tomhudson (43916) | more than 4 years ago | (#29112155)

Because getting the obscenely large percentage of the world's population familiar with the World Wide Web to switch from the web they know to some new (or old) software uniformly across all platforms is a fool's errand?

3 words: Java Web Start.

Re:The web-application-forever-trend? (1)

fuzzyfuzzyfungus (1223518) | more than 4 years ago | (#29112051)

"some web-developers will go to"

If you are a web developer, you pretty much have the choice of switching careers or doing your best to bludgeon as much functionality out of your toolset as possible.

More generally, though, "web" apps do have advantages; they are just almost universally in areas outside of their power as programs (and, to be fair, areas where conventional client apps could be made to match, if a whole lot of groundwork were done).

Assuming you haven't already, looking at some users' home machines is a genuinely edifying experience. Check the desktop. Not infrequently, you'll find it littered with old installer packages, sometimes multiple copies of the same thing, scattered there by the user's frantic clicking. They Just. Don't. Get. the installation procedure, even when the installer is an executable that, once triggered, pretty much leaps out and installs itself. Don't even dream about getting the user to make any configuration changes besides entering a username and password. And updates, of course. You can either add yet another voice to the chorus of horrible auto-update tray apps crying for attention, or bug the user when they start the program, or just deal with having 26 different versions in the wild.

If you want to appeal to such users, you have two options: try to solve ignorance, by maintaining a good-sized helpline for the entire life of your product, or try to solve installation, by doing a bunch of webapp hacking up front. This solves updating as well. Next time they refresh, there they are.

This all goes double for users at work, at a web cafe, on a friend's computer, or whatever. There are, in fact, some very sophisticated offerings that allow full client applications to be added and removed freely from a system, without harming it. Essentially none of them are available to, or usable by, average users. Public or corporate computers generally run in some flavor of lockdown, so client apps can't be installed, and any time joe user installs something on somebody else's machine, he risks munging it, or leaving his config details all over the system. This, again, makes webapps attractive.

None of these issues are insurmountable: with sufficient cleverness and effort, an OS could offer sandboxed, optionally persistent access to full (or nearly full) client-app power with webapp ease of loading, updating, and purging. Until that actually happens, though, webapps are here to stay.

FAT CLIENT is NOT the right FIX (2, Interesting)

Tablizer (95088) | more than 4 years ago | (#29112199)

What's needed is a "GUI Browser" standard that is meant for C.R.U.D. screens (business screens and forms) instead of what HMTL is: An e-brochure format bent like hell to fit everything else. You can only bend the e-brochure model so far. We don't need "fat clients" really to solve most of the problem, but really just a better GUI browser so that we are not sending over 100,000 lines of version-sensitive JavaScript just to emulate (poorly) a combo-box, data grid, or folder/file outline tree widget. That's like inventing a cell-phone just to call your next-door neighbor.

Re:FAT CLIENT is NOT the right FIX (1)

SplatMan_DK (1035528) | more than 4 years ago | (#29112319)

So why not make a client/server solution, and make good clients for each of the platforms you want to support?

Compared to the absolutely massive resources needed to build an advanced web application with roughly the same capabilities as rich clients on those same OSes, it should not be too hard to do!

Why do we need a "GUI browser"? How would you describe such a thing anyway? The presentation layer of any modern OS already has all the things we need, PLUS a ton of very useful local integration with other applications.

- Jesper

Re:FAT CLIENT is NOT the right FIX (1)

Tablizer (95088) | more than 4 years ago | (#29112419)

Because people don't want the "install" step, for both time and security reasons.

How would you describe such a thing anyway?

Kind of like HTML, but with more widgets, more declarative "events" for common actions, standard s-expression-based field validation, an MDI option, and a "stay until removed" way of drawing screens instead of the redraw-page-per-post of regular HTML.

The presentation layer of any modern OS already has all the things we need [for GUIs]

So do the Tk family of toolkits and Java, for the most part. A "generic" GUI browser could probably be built with them.
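
Purely to make this concrete, one screen in such a hypothetical GUI browser might be declared roughly as below. Every name and field is invented; no such standard exists.

    // Invented, declarative notation for one C.R.U.D. screen in a
    // hypothetical GUI browser. Nothing here is a real standard or API.
    var screen = {
        title: "Customer entry",
        persistent: true,  // "stay until removed": no redraw-per-post
        widgets: [
            { type: "combobox",  id: "region", source: "/regions" },
            { type: "textfield", id: "name",
              // a standard s-expression validation rule, per the idea above
              validate: "(and (not-empty name) (max-length name 80))" },
            { type: "datagrid",  id: "orders", source: "/orders?cust={id}" }
        ],
        // declarative event wiring instead of hand-written JavaScript
        events: { onSubmit: "POST /customers" }
    };

The point of the sketch: the browser would ship the combo box, grid, and tree natively, and the wire format would merely name them, instead of shipping a widget toolkit in JavaScript on every page load.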

Re:FAT CLIENT is NOT the right FIX (1)

SplatMan_DK (1035528) | more than 4 years ago | (#29112645)

I am sorry, but I still don't understand the concept of this "GUI browser". It sounds like you are focusing a lot on the presentation layer and ignoring all the other aspects a rich client has.

What about 3rd-party integration? Or a visual layout which matches the operating system? Or hotkeys and interactive behavior matching the OS? Or the ton of rich functionality provided by the OS to all local applications (security features, networking, local system access, core functionality like cross-application object embedding, etc.)?

What exactly makes this "GUI browser" any better than just making a Flash 11 application, and what are the benefits of using it? Which issues/problems does it solve?

- Jesper

Re:FAT CLIENT is NOT the right FIX (0)

Anonymous Coward | more than 4 years ago | (#29112547)

...so that we are not sending over 100,000 lines of version-sensitive JavaScript just to emulate (poorly) a combo-box, data grid, or folder/file outline tree widget.

Isn't that what Java applets are for?

I run www.reversehttp.net (1)

metaconcept (1315943) | more than 4 years ago | (#29111759)

From www.reversehttp.net:

"[...] we need to be able to act as if being able to respond to HTTP requests was within easy reach of every program, so that we can notify interested parties of changes by simply sending an HTTP request to them. This is the core idea of web hooks. We need to push the messy business of dealing with long-polling away from the core and toward the edge of the network, where it should be. We need to let programs dynamically claim a piece of URL space [...] and handle requests sent to URLs in that space [...]. Once that's done suddenly asynchronous notification of events is within reach of any program [...], and protocols and services layered on top of HTTP no longer have to contort themselves to deal with the asymmetry of HTTP. They can assume that all the world's a server, and simply push and pull content to and from whereever they please."

The details of getting the requests through the gateway to the serving application are pretty trivial and interchangeable. The new idea is in registering endpoint names. UPnP and STUN cover some of the same space, but IP's addressing is fundamentally different from HTTP's URL-based addressing, in that URLs allow recursive delegation of portions of the namespace -- something that IP just can't do (because it's flat).
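
The spec and real client libraries live at reversehttp.net; the sketch below only illustrates the claim-a-URL-and-serve idea with an invented API, so the actual library's names and signatures will differ.

    // Invented API illustrating the reversehttp model: a plain HTTP client
    // long-polls a gateway, claims a label under the gateway's URL space,
    // and then handles ordinary HTTP requests addressed to that space.
    var app = reverseHttpSketch.claim("http://gateway.example.com/", "myapp");

    app.onrequest = function (request, response) {
        // Requests to http://gateway.example.com/myapp/... are relayed down
        // the long-polled connection and answered from here -- even from
        // behind NAT, even from inside a browser tab.
        response.setStatus(200);
        response.setHeader("Content-Type", "text/plain");
        response.send("served from the network's edge");
    };

The gateway is what makes short-lived, dynamically named services viable without hand-editing Apache, DNS, or firewall configuration.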

Please explain ... (1)

SplatMan_DK (1035528) | more than 4 years ago | (#29111933)

Can you please explain WHY you would want to molest the primitive HTTP protocol in this way? Seriously: why not use another protocol that is actually suited to the task at hand, and then let the client initiate the connection (thereby automatically solving a gazillion security issues and technical challenges imposed by NAT-based consumer routers)? Why - please - WHY do you want to take "web application concepts" and transmogrify them (read: totally molest them) into something they are not... while clearly ignoring that A LOT of existing technology has already solved most of these issues? Why choose a web/HTTP implementation BEFORE analysing the task at hand, instead of getting a clear picture of the task and THEN choosing an appropriate technology afterwards?

- Jesper

Re:Please explain ... (1)

metaconcept (1315943) | more than 4 years ago | (#29111983)

Huh? Did you read the spec draft [reversehttp.net]? The metacircular use of HTTP is a cute hack, but in no way central to the idea. If you like, think of reversehttp as a kind of "remote CGI" -- like what people already do with Apache's reverse proxying, but dynamically configurable.

Re:Please explain ... (1)

SplatMan_DK (1035528) | more than 4 years ago | (#29112249)

I think I understand it. I will admit that I am not a network expert, just a plain and simple business app developer.

And I still don't understand why I would want my own machine to answer HTTP calls of any form, or why I should allow my (hardware) firewall to accept incoming requests of that nature. The security implications alone are a nightmare.

I would never want to give any outside party the ability to execute "remote CGI" on my machine(s), nor would I want my machines to have the properties of a web server.

Can you explain a scenario where such a set-up makes sense (from a business or usability perspective) and where other protocols are unable to get the job done?

- Jesper

Re:Please explain ... (1)

metaconcept (1315943) | more than 4 years ago | (#29112641)

Can you explain a scenario where such a set-up makes sense (from a business or usability perspective) and where other protocols are unable to get the job done?

Anywhere you'd otherwise be configuring Apache/DNS/firewall/CGI/reverse-proxy rules by hand. There aren't any protocols for that other than reversehttp yet. The goal was to make it as easy to get a name/URL and become a full server in the HTTP network as it is to participate anonymously as a client. Particularly interesting is the way it makes short-lived services suddenly viable: previously, you'd either have to reconfigure your gateway Apache instance (or similar) each time a service came and went (or moved!), or ad-hoc up some way of doing roughly what reversehttp does, but on a case-by-case basis.

(It's also, by the way, no more difficult to secure the gateway than it is to secure any other web service -- the demo server hands out URL space pretty freely, but obviously if you were deploying it yourself you'd apply normal HTTP access-control policies to limit who is allowed to register which names and so on.)

Might as well just use Jabber (0)

Anonymous Coward | more than 4 years ago | (#29111807)

If you're going to build an httpd into the browser, and the NAT people need some fixed proxy out there somewhere anyway, then you might as well just make it Jabber.

Totally off-topic, but I have to ask (-1, Offtopic)

antifoidulus (807088) | more than 4 years ago | (#29111829)

after being endlessly frustrated with Apple's nonstop bug-parade, which OS is buggier, OS X or Windows? Up until Leopard I would have said Windows, but Apple is trying SOOO very hard to prove that they can be even buggier than Microsoft. My infinitely restarting login window says "mission accomplished" guys, good job Steve!

Opera Browser Has Web Server (0)

Anonymous Coward | more than 4 years ago | (#29111913)

Here is a /. story about Opera's inclusion of a web server:
http://apache.slashdot.org/story/09/06/19/0227249/Opera-Unite-Web-Server-Benchmarked

You might want to look at RESTful Web Services, too.

Re:Opera Browser Has Web Server (1)

tomhudson (43916) | more than 4 years ago | (#29112205)

You might want to look at RESTful Web Services, too.

Thanks, but I'd rather slit my wrists. Why not learn Java and build a real application, where you don't have to deal with the browser at all?

Will never happen (1)

nulled (1169845) | more than 4 years ago | (#29112089)

This is the same concept as BitTorrent, where you spread the load out to anyone downloading the software: BitTorrent uploads the parts you have downloaded, thereby spreading the load to the clients.

1) BitTorrent is already getting a lot of blacklisting from major cable companies, because people are downloading DVD movies instead of paying for them via pay-per-view.

2) ISPs do not allow a home user to set up a web server, and will block or ban your account if they find port 80 or HTTP/HTTPS in use (among others, like FTP, when used excessively).

3) How is your browser going to access a database like MySQL with PHP, locally on your machine? The pages would have to be cached and static, or done in some other wizardly way.

I am by no means saying that the idea isn't possible, or even that it's a bad one. It sounds like a nice idea... however, too many factors are at play, including co-location centers and the notion of a 'server' itself being discounted in favor of a 'super browser'.

The only reason this subject has come up is to try to combat DDoS attacks -- the Twitter, Facebook, and whitehouse.gov attacks.

No, what needs to happen is to get rid of the Microsoft Windows ZOMBIE botnet computers, which allow DDoS attacks to happen with greater ease and frequency.

Re:Will never happen (1)

metaconcept (1315943) | more than 4 years ago | (#29112165)

3) How is your browser going to access a database like MySQL with PHP, locally on your machine? The pages would have to be cached and static, or done in some other wizardly way.

There's nothing that limits reversehttp to the browser. Any HTTP client library can use it. The demo implementation has clients not just for browser-hosted Javascript but also for Python and Java. Besides the clients included with the demo implementation, Paul Jones has written hookout [lshift.net], for Ruby programs, and Tatsuhiko Miyagawa has written AnyEvent-ReverseHTTP [github.com] for exposing HTTP services via reversehttp from programs written in Perl.

Re:Will never happen (1)

Pop69 (700500) | more than 4 years ago | (#29112459)

2) ISPs do not allow a home user to set up a web server, and will block or ban your account if they find port 80 or HTTP/HTTPS in use (among others, like FTP, when used excessively).

Your ISP may not allow home users to do this; mine allows web servers, mail servers, the whole shooting match on home accounts. They'll even allocate you a block of 8 static IPs to do it properly.

FTP anyone (1)

Imagix (695350) | more than 4 years ago | (#29112441)

Doesn't anyone remember FTP, and why passive-mode FTP was developed? All of the same reasons apply to why this isn't a good idea. Your web browser ends up behind a NAT firewall and, poof, this no longer works (without some deep packet inspection on the firewall to automatically open the ports, or UPnP, or SOCKS, or some other protocol for the web client to negotiate with the firewall to allow the connections).

Violates many ISP terms of use (0)

Anonymous Coward | more than 4 years ago | (#29112633)

Many ISPs prohibit running any kind of server on a home customer connection, and they will regularly port scan you and disconnect you if they find one.
