
OAuth 2.0 Standard Editor Quits, Takes Name Off Spec

Soulskill posted more than 2 years ago | from the none-of-us-are-as-dumb-as-all-of-us dept.

Security 101

New submitter tramp writes "The Register reports, 'Eran Hammer, who helped create the OAuth 1.0 spec, has been editing the evolving 2.0 spec for the last three years. He resigned from his role in June but only went public with his reasons in a blog post on Thursday. "At the end, I reached the conclusion that OAuth 2.0 is a bad protocol," Hammer writes. "WS-* bad. It is bad enough that I no longer want to be associated with it."' At the end of his post, he says, 'I think the OAuth brand is in decline. This framework will live for a while, and given the lack of alternatives, it will gain widespread adoption. But we are also likely to see major security failures in the next couple of years and the slow but steady devaluation of the brand. It will be another hated protocol you are stuck with.'"


101 comments


Chick-fil-A spokesman dies (-1)

Anonymous Coward | more than 2 years ago | (#40802099)

The chief spokesman for Chick-fil-A died early today amid the furor sparked by his boss' biblical opposition to same-sex marriage.

The Georgia-based fast-food giant did not cite a cause of death for 60-year-old Don Perry, vice president of public relations, but local news reports said he died of a heart attack, the Atlanta Journal-Constitution says. He had worked for the fast-food giant for 29 years.

"Don was a member of our Chick-fil-A family for nearly 29 years," the company said in a statement. "For many of you in the media, he was the spokesperson for Chick-fil-A. He was a well-respected and well-liked media executive in the Atlanta and University of Georgia communities, and we will all miss him. Our thoughts and prayers are with his family."

In the past week, Perry found himself on the front lines of the controversy that erupted after Chick-fil-A President Dan Cathy, whose father founded the business, reiterated the company's belief in "the biblical definition of the family unit." That prompted an outcry from gay-rights advocates, politicians and some businesses, but also drew praise and support from evangelical Christians and traditionalists.

Perry issued a statement last Thursday expressing the company's desire to "not proactively being engaged in the dialogue" on gay marriage.

"Going forward, our intent is to leave the policy debate over same-sex marriage to the government and political arena," his statement said.

Perry noted Cathy's father's "biblically-based principles to managing his business."

Re:Chick-fil-A spokesman dies (-1, Offtopic)

Anonymous Coward | more than 2 years ago | (#40802159)

A clear, concise sign from God.

Re:Chick-fil-A spokesman dies (1, Insightful)

Anonymous Coward | more than 2 years ago | (#40803367)

I bet he died from a heart attack while plowing his twink boytoy.

oh man (-1, Offtopic)

Anonymous Coward | more than 2 years ago | (#40802107)

Woke up this morning around 3 AM, ran to the bathroom and left a massive havana pancake all over the place. I've been farting and shitting since (this is posted from the can on my Google Nexus 7" tablet). I don't know if it was the mexican food I had for lunch or the indian food I had for dinner. God damn I wish I could stop pissing out my asshole for 5 minutes.

Re:oh man (-1)

Anonymous Coward | more than 2 years ago | (#40802125)

You've picked up a virus from your Android device. It's called the Eric Schmidt Virus - he talks out his arse, and you're in the early stages.

Re:oh man (-1)

Anonymous Coward | more than 2 years ago | (#40802235)

Tried to mod this up, but there was no "more informative than the summary" option.

Also, drink lots of Gatorade or another electrolyte drink... Electrolyte imbalance can lead to stroke-like symptoms.

Re:oh man (1)

flimflammer (956759) | more than 2 years ago | (#40803995)

I rarely laugh at these offtopic comments. The original MyCleanPC ones were kinda funny but the later ones just got too ridiculous. But this one was just short and sweet. Got a good laugh out of me.

WordStar? (3, Funny)

jabberw0k (62554) | more than 2 years ago | (#40802147)

What's WS-* supposed to mean... WordStar? I almost thought, some geek reference to a VMS error message... (%WS-X-XYZZY) but surely not?

Re:WordStar? (-1, Flamebait)

luis_a_espinal (1810296) | more than 2 years ago | (#40802169)

What's WS-* supposed to mean... WordStar? I almost thought, some geek reference to a VMS error message... (%WS-X-XYZZY) but surely not?

You are kidding, right?

Re:WordStar? (1)

jabberw0k (62554) | more than 2 years ago | (#40802189)

I have never seen "ws-*" before... reference please?

Re:WordStar? (4, Informative)

luis_a_espinal (1810296) | more than 2 years ago | (#40802245)

I have never seen "ws-*" before... reference please?

Ask and ye shall receive.

http://en.wikipedia.org/wiki/WS-* [wikipedia.org]

http://lmgtfy.com/?q=ws-* [lmgtfy.com]

Courtesy of wikipedia and google.

Re:WordStar? (1)

Christopher Fritz (1550669) | more than 2 years ago | (#40802349)

Google gives me a bunch of .ws domain web sites with that search, but nothing about WS-*.[1] Including -inurl:.ws helps, but only very little.

A search for WS-* oauth returns more relevant results.

Bing (which I don't use as Google usually gives me more useful results) on a search of "ws-*" has "List of web service specifications - Wikipedia, the free encyclopedia" as the fourth result.

Both Bing and Google give useful suggestions in the dropdown when typing "ws-*" into the search box.

[1] Google results will of course vary by user's location.

Re:WordStar? (0, Flamebait)

Anonymous Coward | more than 2 years ago | (#40802421)

Oh please, you arrogant twats.
This web services sector is such a huge over-engineered mess of enterprisey consultant circle-jerking,
I'm actually *proud* I'm not having any relationship with it.

In practice, it's one of the dumbest things out there.
Because it's mostly protocols based on XML over HTTP over TCP over IP, when a direct binary markup TCP protocol would have done it, and usually already existed decades before.

Add Java "frameworks" in the spirit of EJB to web services, and you've got a consultant's wet dream. (Hint: It will contain lots of money.)

Re:WordStar? (1)

luis_a_espinal (1810296) | more than 2 years ago | (#40802577)

Oh please, you arrogant twats. This web services sector is such a huge over-engineered mess of enterprisey consultant circle-jerking,

Talk about going off on a fucking tangent. Who the hell says I or anyone else is proud of the WS-* shit? Do you have to love a stupid acronym to know how to google it? It's not about whether WS-* is good or bad. It's about posters on a site whose motto is 'News for Nerds' needing 3rd parties to google acronyms for them.

I'm actually *proud* I'm not having any relationship with it.

In practice, it's one of the dumbest things out there.

Preaching to the choir, buddy. You ain't the first one who found out the flaws of it. Though don't let that get in the way of making you feel intelligent by repeating what most people know already (that WS-* is crap.)

Because it's mostly protocols based on XML over HTTP over TCP over IP, when a direct binary markup TCP protocol would have done it, and usually already existed decades before.

Add Java "frameworks" in the spirit of EJB to web services, and you've got a consultant's wet dream. (Hint: It will contain lots of money.)

And you figured that out all by yourself? Here, have a cookie for building an excellent strawman.

Re:WordStar? (-1)

Anonymous Coward | more than 2 years ago | (#40802653)

Did you actually look at the fucking results from what you googled? Or were you just in such a hurry to be an arrogant twat that you couldn't bother?

Re:WordStar? (1)

luis_a_espinal (1810296) | more than 2 years ago | (#40802843)

Did you actually look at the fucking results from what you googled? Or were you just in such a hurry to be an arrogant twat that you couldn't bother?

Yes, and the results right on top contain, among other things... tada... web services. Shit, let's forget about google. What about wikipedia, that oh-so-not-new and wonderful site that lists almost every type of shit, including... tada... an entry for WS-*.

So what's your gripe anyway, that people think WS-* is a good thing (in which case, you are building a strawman because no one is making that claim here, certainly not me), or that the google results didn't spoon-feed you the precise answer of your liking?

Re:WordStar? (1)

UnknownSoldier (67820) | more than 2 years ago | (#40802599)

Mods, lay off the crack pipe. The parent answered the question (indirectly.)

Re:WordStar? (0)

Anonymous Coward | more than 2 years ago | (#40810761)

I guess a 62554 Id means you think in terms of wordstar and vms :-)

Re:WordStar? (5, Informative)

Anonymous Coward | more than 2 years ago | (#40802173)

It references the plethora of crappy standards created during the SOAP era. (WS-Security, WS-Routing, WS-Addressings, WS-YourMom)

Re:WordStar? (0, Offtopic)

Anonymous Coward | more than 2 years ago | (#40802195)

I believe he is referring to the Tungsten Monosulfide Anion, WS-

Re:WordStar? (0)

Anonymous Coward | more than 2 years ago | (#40805401)

Come on mods, that's pretty funny.

Re:WordStar? (-1)

Anonymous Coward | more than 2 years ago | (#40802203)

I do hope that was a joke I missed, but just in case let me wiki that for you http://en.m.wikipedia.org/wiki/List_of_web_service_specifications

Re:WordStar? (5, Informative)

dkf (304284) | more than 2 years ago | (#40802249)

What's WS-* supposed to mean...

It refers to the plethora of web-services specifications, most of which take a fairly complicated protocol (XML over HTTP) and add huge new layers of mind-boggling complexity.

You don't ever need WS-*, except when you find you do because you're dealing with the situations that the WS-* protocol stack was designed to deal with. When that happens, you'll reinvent it all. Badly. JSON isn't better than XML, nor is YAML; what they gain in succinctness and support for syntactic types, they lose at the semantic level. REST isn't better than SOAP, it's just different, and security specifications in the REST world are usually hilariously lame. Then there's the state of service description, where WSDL is the only spec that's ever really gained really wide traction. WS-* depresses me; I believe we should be able to do better, but the evidence of what happens in practice doesn't support that hunch.

Re:WordStar? (0)

Anonymous Coward | more than 2 years ago | (#40802327)

That's what happens when you let Americans loose on a grammar: they 'morder' it.

Re:WordStar? (1)

Anonymous Coward | more than 2 years ago | (#40802383)

The irony here is that your sentence is punctuationally deficient.

Re:WordStar? (1)

Anonymous Coward | more than 2 years ago | (#40810297)

One does not simply walk into Morder.

Re:WordStar? (4, Insightful)

Anonymous Coward | more than 2 years ago | (#40802533)

REST is better than soap because it uses the features of the transport instead of ignoring and duplicating them in an opaque fashion. SOAP is like having every function in your program take a single argument consisting of a mapping of arguments. Or a relational database schema with only three tables: objects, attributes, and values. In other words, SOAP is an implementation of the Inner Platform antipattern.
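The parent's point can be sketched in a few lines. This is an illustrative comparison (the endpoints and operation names are made up, not from any real API): REST maps operations onto HTTP's own verbs, while a SOAP-style service tunnels everything through one POST endpoint and re-encodes the "verb" inside the payload, which is the inner-platform shape.

```python
# Illustrative: the same three operations in REST style versus tunneled style.
rest_requests = [
    ("GET",    "/users/42", None),              # read
    ("PUT",    "/users/42", {"name": "Ada"}),   # update
    ("DELETE", "/users/42", None),              # delete
]

soap_style_requests = [
    ("POST", "/service", {"operation": "GetUser",    "id": 42}),
    ("POST", "/service", {"operation": "UpdateUser", "id": 42, "name": "Ada"}),
    ("POST", "/service", {"operation": "DeleteUser", "id": 42}),
]

# An HTTP cache or proxy can distinguish reads from writes by verb alone in
# the REST case; in the tunneled case every request looks identical outside.
assert {m for m, _, _ in rest_requests} == {"GET", "PUT", "DELETE"}
assert {m for m, _, _ in soap_style_requests} == {"POST"}
```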

Re:WordStar? (0)

Anonymous Coward | more than 2 years ago | (#40803655)

Key word is "opaque".

Re:WordStar? (5, Insightful)

Anonymous Coward | more than 2 years ago | (#40802643)

As a regretful author of several WS-* specs, after I got sucked into the vortex of IBM and MS when they passed too close to our academic lab, I felt exactly as Eran Hammer stated in his blog. He wrote, "There wasn’t a single problem or incident I can point to in order to explain such an extreme move. This is a case of death by a thousand cuts, ... It is bad enough that I no longer want to be associated with it. It is the biggest professional disappointment of my career." I have used so many of those same phrases in reflecting on my experience with other veterans of that period!

And I'll tell you, XML and SOAP have no semantics either. They simply have a baroque shell game where well intentioned people confuse themselves with elaborate syntax. XML types and type derivation are syntactic shorthands for what amounts to regular expressions embedded in a recursive punctuation tree. There is absolutely no more meaning there than when someone does duck typing on a JSON object tree, particularly after the WS-* style "open extensibility" trick is added everywhere, allowing any combination of additional attributes or child elements to be composed into the trees via deployment-time and/or run-time decisions.

As a result, I am rather enjoying the current acceptance of REST and dynamically typed/duck typed development models. It is much more honest about the late-binding, wild west nature of the semantics involved in our everyday web services.

Re:WordStar? (1, Interesting)

jklappenbach (824031) | more than 2 years ago | (#40802983)

Ignore all concerns but scalability, and REST becomes far preferable to SOAP. The overhead of XML -- usually an order of magnitude in data size -- can have a huge, undesirable impact. That said, there's one aspect of SOAP that popular REST specs are missing: a definition language. With the help of the WSDL, SOAP gained cross-platform client generation and type safety. REST protocols would do well to leverage this concept, at least for invocation parameter definitions. In most cases, REST result messages are encoded in JSON, where a Javascript interpreter for parsing and object model translation can be leveraged. But even then, having a documented result schema would be a huge improvement over forcing developers to inspect result sets at runtime to divine structure and content.
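The "documented result schema" idea above can be sketched minimally. A real service would publish a JSON Schema and use a validating library; this hand-rolled checker and the field names are purely illustrative assumptions.

```python
# Sketch: a declared shape for a JSON result, checked at runtime.
import json

USER_SCHEMA = {"id": int, "name": str, "active": bool}  # hypothetical result type

def validate(payload: str, schema: dict) -> dict:
    """Parse JSON and verify every schema field is present with the right type."""
    obj = json.loads(payload)
    for field, expected in schema.items():
        if not isinstance(obj.get(field), expected):
            raise TypeError(f"{field!r} should be {expected.__name__}")
    return obj

user = validate('{"id": 1, "name": "Ada", "active": true}', USER_SCHEMA)
print(user["name"])  # prints "Ada"
```

Even this toy version gives a consumer something better than inspecting result sets at runtime: the expected structure lives in one documented place.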

But, back on topic, having evaluated OAuth 2.0, I agree with Hammer's assessment. It's not a protocol, and the inability of this team to produce a viable solution will only lead to fragmentation and the failure of OAuth.

Ignore nothing, SOAP is awful (4, Insightful)

SuperKendall (25149) | more than 2 years ago | (#40803271)

Ignore all concerns but scalability, and REST becomes far preferable to SOAP.

You don't have to ignore any concerns. SOAP was always a bad idea, as there is nothing to be gained from it you cannot work out by the combination of the HTTP protocol with REST style access.

This was obvious even in the very earliest days of SOAP, when people at that time were noting that REST was so much more practical. I had to use it off and on with various internal IT projects, but it was always a bad deal, and just about always was eventually moved to a REST-style service so people could get work done.

That said, there's one aspect of SOAP that popular REST specs are missing: a definition language.

As you note, it's called JSON, and we've been using it for years. It doesn't "need to be in the spec" when everyone is doing it that way.

But even then, having a documented result schema would be a huge improvement

No, it's really not useful. It's overhead. It takes more effort to maintain such a formal interface than to have people simply consume JSON as they will. And often the parts of the system that are supposed to process those formal definitions fail. All around just a horrible block to getting things working the way you like.

Re:Ignore nothing, SOAP is awful (1)

durdur (252098) | more than 2 years ago | (#40803653)

SOAP is quite widely deployed and yes, it is more complex for the client, but a lot of people have made it work for them. There is not one right way to build a web interface.

Re:Ignore nothing, SOAP is awful (1)

shutdown -p now (807394) | more than 2 years ago | (#40804743)

SOAP got popular because Java and especially .NET promoted it as the way to write web services. So, like XML, it's another case of an overengineered design-by-committee solution becoming popular simply because using it was the path of least resistance due to it being in the standard library. Most people using it that way don't actually have a clue about how it works, and they certainly didn't pick it because of the way it's designed.

Re:Ignore nothing, SOAP is awful (1)

Pinky's Brain (1158667) | more than 2 years ago | (#40806995)

XML is overengineered?

Re:Ignore nothing, SOAP is awful (1)

SuperKendall (25149) | more than 2 years ago | (#40805899)

Yes, I know SOAP is quite widespread. This is due to Java and C# making valiant efforts to build enough tooling around it to reduce the pain, or at least building a system where you have even odds of making a client that can communicate with a server...

But that does not change the fact that underneath it is a nightmare, things can still go wrong, and everyone's life becomes SO much easier when you go REST with JSON.

The real death of SOAP was the rise of mobile clients, which do NOT have the processing power nor the libraries to handle the overhead SOAP imposes. I mean yes, they can of course technically manage it but it's stupid to waste precious network, CPU and battery resources on SOAP overhead when REST works just as well.

Re:Ignore nothing, SOAP is awful (1)

jklappenbach (824031) | more than 2 years ago | (#40804183)

No, it's really not useful. It's overhead. It takes more effort to maintain such a formal interface than to have people simply consume JSON as they will. And often the parts of the system that are supposed to process those formal definitions fail. All around just a horrible block to getting things working the way you like.

Couldn't disagree more. Frameworks and protocols are meant to make life easier. What I see with many implementations based on REST are frameworks that, through the lack of a published schema, encourage half-baked, undocumented APIs that often result in developer headaches and lost time. Personally, I think we can do much better.

Re:Ignore nothing, SOAP is awful (1)

SuperKendall (25149) | more than 2 years ago | (#40805987)

Frameworks and protocols are meant to make life easier.

I agree. In this regard, SOAP is a dismal failure.

What I see with many implementations based on REST are frameworks that, through the lack of a published schema, encourage half-baked, undocumented APIs

To some extent, yes.

Is there the possibility of something that might hold a little more definition than the very loose combo of JSON over REST? I will not deny that is possible, but SOAP is way, way too far off the edge.

As it stands, simply documenting well the JSON you can expect is pretty robust. The fact that the definition of what data to expect in the JSON cannot be processed programmatically by the receiver means very little, as usually great pain comes with a system that can arbitrarily change the schema for the data it is sending on the fly... downstream at some point the consumer of that data is expecting things to be a certain way and will break if you drift off the path.

Since the receiver does not need to adapt automatically to changes in schema, what does it matter if a human or a computer is reading them? In fact having the human being the one processing the definition is far more valuable because you can then raise concerns about the data being transmitted.

Again I would not argue JSON is the end-solution for all time, but it's a far more practical one than what has come before.

Re:Ignore nothing, SOAP is awful (1)

l0ungeb0y (442022) | more than 2 years ago | (#40805115)

As you note, it's called JSON, and we've been using it for years. It doesn't "need to be in the spec" when everyone is doing it that way.

FFS! JSON IS NOT A DATA DEFINITION LANGUAGE!!!

Just get a fucking clue. JSON is a syntax, nothing less, nothing more. It is up to the client to inspect the packet, and it has NO WAY to validate that the contents of the packet are indeed correct. Contrast this with an XSD, which would outline which elements could exist, which attributes they had, where they could exist, what they could contain, and even limit exactly how many could exist.

JSON provides none of that. Also, JavaScript, which is what JSON is based on, is a dynamically typed language, so you can't even give clues as to what the data types of your values are, so it's impossible to nil out invalid values without doing it programmatically.

And having a Schema is NOT WASTEFUL -- it's a condom to prevent asswipes like you who know jackshit about Service Architecture from going all willynilly over a well-designed system and clogging up the internals with shit you just slapped in place because you can't be bothered to understand how the environment works. To me, making Devs conform to an XSD, Strong Typing and MVC layers is fucking awesome on the service side where underlings like you can't be trusted to do your work without mucking things up.

Because as an architect, my greatest enemy is not the consumer; it's asswipe developers who don't know what the hell they are doing.

With that said, I think RoR with a JSON REST API is the way to go for a basic web service. Ruby is a strongly typed language, so you can't be throwing strings or bools into floats or vice-versa, and Rails is a decent enough MVC layer that prevents the clueless from going willy-nilly with their "ace software design skills", but is flexible enough that it's easy to get your work done.

But is it suitable for an enterprise roll out to handle all the various internal systems? I think not.
And that's where you bring things in like Java or Python that can handle these things and that's where you get real nit-picky about validation, since those are vital internal processes and not JSON crap you feed consumers.
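The kind of constraint the comment above credits to XSDs (which children may appear, and how many times) can be sketched with a toy checker. A real schema would be enforced by a validating parser; the element names and rules here are invented for illustration.

```python
# Illustrative: cardinality rules of the sort an XSD expresses.
import xml.etree.ElementTree as ET

RULES = {"order": {"item": (1, 3), "note": (0, 1)}}  # child tag: (min, max)

def check(doc: str) -> bool:
    """Accept the document only if each child element respects its occurrence bounds."""
    root = ET.fromstring(doc)
    allowed = RULES[root.tag]
    counts = {}
    for child in root:
        if child.tag not in allowed:
            return False  # element not permitted under this parent
        counts[child.tag] = counts.get(child.tag, 0) + 1
    return all(lo <= counts.get(tag, 0) <= hi
               for tag, (lo, hi) in allowed.items())

assert check("<order><item/><item/><note/></order>")
assert not check("<order><note/><note/></order>")  # too many notes, no item
```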

Re:Ignore nothing, SOAP is awful (0)

Anonymous Coward | more than 2 years ago | (#40805243)

What are you smoking? Sure, there's supposed to be a spec, but machine validating the input & output against the spec is too expensive (shows up in the profiler as so). It's also not worth it. If you have a problem so bad that your developers cannot get the request syntax right, can them. No, really. If they struggle with the syntax to the point of needing that much handholding as to require verification in production for every message, then they have no prayer of getting the semantics right.

  Want to speed up SOAP by 2x? Remove the DTD and organize the XML so it doesn't need one (very possible). JSON will give you more speed even than that at no more practical cost besides the retooling itself.

Re:Ignore nothing, SOAP is awful (0)

Anonymous Coward | more than 2 years ago | (#40805341)

Because as an architect, my greatest enemy is not the consumer, it's asswipe developers

I certainly hope you're not an architect, based on the amount of stupidity displayed in your post. As an architect, your greatest enemy is yourself for being so ignorant.

Hint: validation does not require an XSD.

Re:Ignore nothing, SOAP is awful (1)

SuperKendall (25149) | more than 2 years ago | (#40805953)

JSON IS NOT A DATA DEFINITION LANGUAGE!!!

Of course not, but it IS a loosely typed means of transferring data.

What I was arguing against is NEEDING a data definition language. That has ALWAYS been needless overhead for any web service I have ever seen, and in fact you are limiting clients by mandating a single possible data type for a field when a client might want to treat something differently.

And having a Schema is NOT WASTEFUL -- it's a condom to prevent asswipes like you

In my experience with over a decade of corporate IT development and now many years of independent use of web services from a huge range of companies - the only asswipes are the ones who think developers need a FORMAL schema to make use of data.

Real developers and teams figure this out rather easily from simply documentation or, even easier, just looking at examples of the JSON data.

People like you are in fact responsible for clogging the toilet that is IT development until the plunger of REST/JSON comes and inevitably forces you aside.

Because as an architect

I have been an architect as well, for many years in IT and now independently. The difference, I guess, is that I focus on practical systems that deliver on time and function while still being flexible.

But is it suitable for an enterprise roll out to handle all the various internal systems? I think not.

Ruby itself may not be (at least for all uses), but REST/JSON can handle any degree of complexity a modern enterprise offers.

And since you'll have to build a REST service anyway for the inevitable mobile clients, who on earth would ALSO want to have to define/build/maintain a SOAP service too?

There is NO web service so internal you cannot expect a call for some mobile app to be built to access it directly, and zero good reasons why you should not build a system with this in mind.

since those are vital internal processes and not JSON crap you feed consumers.

As if crap cannot also get through SOAP... no matter how tightly you lock down the schema, in the end each service must be defensively coded against bad input anyway; the fact that input is typed and well formed means very little in the end as far as data integrity goes. So don't spend your effort on a part of the system that makes life harder for everyone.

Re:Ignore nothing, SOAP is awful (0)

Anonymous Coward | more than 2 years ago | (#40807061)

Because as an architect,

Architects design buildings, and (over here at least) train for 7 years in order to qualify to do so. You are either a programmer or a data monkey.

Re:Ignore nothing, SOAP is awful (0)

Anonymous Coward | more than 2 years ago | (#40809411)

The assumption of the OP is that Data Definition Languages are unnecessary overhead. The proof is evident: a lot of webservices (even "enterprise-level" ones) do seem to work without having to rely on a DDL to survive.

You claim that there is certain "vital internal processes" where "JSON crap" is not acceptable, since "you get real nit-picky" about validation.

Well, don't just say it. Prove it. So far those processes are just hypothetical. Give your argument some meat.

Re:WordStar? (1)

jmerlin (1010641) | more than 2 years ago | (#40803849)

You can do that with REST over HTTP using media types and JSON Schema, which are starting to gain more popularity with API developers. I'd argue there's nothing those systems have over what REST+JSON can provide if used properly. The problem is that most things that claim to be RESTful aren't really. The community is starting to move away from using the term "REST" to describe things, especially application APIs, because it has those connotations attached to it (see the facebook, twitter, etc. APIs; they're just HTTP-based RPC with loads of coupling and out-of-band information). Instead, they're now referring to things that are much more RESTful as "hypermedia", because their solutions conform to the hypermedia constraint (the most important one, so says Fielding) in Fielding's paper. These use hypermedia and media types to define an interchange format without the need for coupling, and often support schemas for type validation.
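The hypermedia constraint described above can be sketched with one response. The resource, the `_links` convention (HAL-style is one of several), and the link names are all illustrative: the point is that the representation carries the URLs a client may follow next, instead of the client hard-coding URL patterns out of band.

```python
# Illustrative hypermedia-style response: next steps are discoverable links.
order = {
    "id": 1001,
    "status": "unpaid",
    "_links": {
        "self":    {"href": "/orders/1001"},
        "payment": {"href": "/orders/1001/payment"},
        "cancel":  {"href": "/orders/1001"},
    },
}

def follow(resource: dict, rel: str) -> str:
    """A client resolves the next URL from the representation at runtime."""
    return resource["_links"][rel]["href"]

print(follow(order, "payment"))  # -> /orders/1001/payment
```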

Re:WordStar? (3, Informative)

shutdown -p now (807394) | more than 2 years ago | (#40804723)

The problem with SOAP and WS-* stuff isn't XML. It's rather that it takes, IIRC, five levels of nesting of said XML to call a simple web service that takes an integer and returns another one. In other words, it's ridiculously overengineered for the simple and common cases, while supposedly covering some very complicated scenarios better - a claim that I cannot really verify since I've never in my life seen system architecture, even in the "enterprise", where that complexity was actually useful.
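The framing overhead described above is easy to make concrete. This sketch builds a SOAP-style envelope for a hypothetical one-integer call next to the equivalent bare JSON body; the `Increment` operation and `value` element are invented, though the envelope namespace is the standard SOAP 1.1 one.

```python
# Illustrative: the same one-integer call as an envelope versus plain JSON.
import json
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_call(value: int) -> bytes:
    """Wrap a single integer argument in an Envelope/Body/operation tree."""
    env = ET.Element(ET.QName(SOAP_NS, "Envelope"))
    body = ET.SubElement(env, ET.QName(SOAP_NS, "Body"))
    op = ET.SubElement(body, "Increment")  # hypothetical operation name
    arg = ET.SubElement(op, "value")
    arg.text = str(value)
    return ET.tostring(env)

def json_call(value: int) -> bytes:
    """The same call as a plain JSON object."""
    return json.dumps({"value": value}).encode()

soap = soap_call(7)
rest = json_call(7)
print(len(soap), len(rest))  # the envelope is several times larger
```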

Re:WordStar? (0)

Anonymous Coward | more than 2 years ago | (#40805353)

This is actually completely wrong, in the performance aspect, and is the kind of misunderstanding that led us to the poor options we have now for service frameworks.

SOAP is not tied to XML. It is not even tied to HTTP. The standard, which is certainly complex, defines the documents and messages in terms of the infoset. Without modifying application code - with some frameworks, without even recompiling - you can switch the transport to use a binary format. You can switch from HTTP to raw TCP, also. You can make your services auto-negotiate. I have done this in production, with extremely high performance compared to more-common SOAP or any REST implementation for the same problem.

The complexity was never intended to be seen by application code. Frameworks were intended to abstract and encapsulate all of that. SOAP was complex because it was supposed to allow for wide interoperability, the ability to switch to high-performance cross-platform transports as mentioned above, automatic client generation, etc. The problem is, we ended up with many mediocre but very loud developers who didn't even understand the problems or current solutions ranting about "how hard could this be?" and spewing out their own pathetic half-baked answers. Instead, they should have either contributed to the work that already existed, or gotten out of the way so that better engineers could prevail.

A big problem with SOAP is that it only ever had a good framework implementation on .NET, and many developers never encountered that. The Java frameworks were shoddy and weak, and still are. It wasn't even included in the core framework for a long time. The first time I made a SOAP service in Java, I said, "Wait... You mean, after all that tedious work to get it running, I can't make a call with a simple REST-like HTTP GET with the parameters in the URL? And when I connect to the service with a browser, it doesn't give me a description and show the available methods, and give me forms where I can enter arguments to call each? Seriously?"

SOAP had a lot of cruft, and could use cleaning up. Platform support never got built. But it was a generation ahead of anything else around, even now. Applications shouldn't have to care about the transport. Systems should be able to switch transports without changing. Efficient binary representations should be easy and normal. Services should be no harder to write for basic usage than identifying normal class implementations as the service. Services should describe themselves when contacted, for a human, and provide an interface to invoke their functions. Clients should be able to be automatically generated, and using them should be just like using a local class.

Maybe someday we'll get back there with some REST+WADL+... reinventing of the wheel, but I somehow doubt it.
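The transport-independence the comment above describes can be sketched as a codec boundary. Everything here is illustrative (real stacks negotiate this inside the framework, not in application code): the calling code is unaware of whether a verbose text format or a compact binary one is on the wire.

```python
# Illustrative: application code stays fixed while the wire format is swapped.
import json
import struct

class JsonCodec:
    def encode(self, value: int) -> bytes:
        return json.dumps({"value": value}).encode()
    def decode(self, data: bytes) -> int:
        return json.loads(data)["value"]

class BinaryCodec:
    def encode(self, value: int) -> bytes:
        return struct.pack("!i", value)  # 4 bytes, no markup at all
    def decode(self, data: bytes) -> int:
        return struct.unpack("!i", data)[0]

def round_trip(codec, value: int) -> int:
    """The 'application code': it never inspects the wire format."""
    return codec.decode(codec.encode(value))

for codec in (JsonCodec(), BinaryCodec()):
    assert round_trip(codec, 42) == 42
```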

Re:WordStar? (0)

Anonymous Coward | more than 2 years ago | (#40806475)

With the help of the WSDL, SOAP gained cross-platform client generation and type safety.

I've only been involved with applications using SOAP very briefly, a number of years back. Web services provided by IBM mainframe applications were used by a .NET application. Because the mainframe applications used decimal types for numbers, the WSDL generated by the tools IBM provided specified the valid ranges, say -999 through 999 for a three digit signed packed decimal number. The Visual Studio wizard used to generate the client code ignored the ranges specified, and the .NET side would happily pass on values the mainframe side couldn't process. That's the easy client generation and type safety across platforms as I experienced it.

I don't know if it was IBM using non-standard extensions or Microsoft ignoring part of the standard, my involvement with SOAP/WSDL was too brief and superficial to get round to finding that out.

It wasn't a big issue in practice, as the numbers were already validated when humans entered them in a web application, but then the question becomes why you need the type safety web services are supposed to provide at all in that context.

Re:WordStar? (1)

toriver (11308) | more than 2 years ago | (#40806565)

WADL [wikipedia.org] is the REST equivalent of the WSDL of SOAP, though apparently REST services can be described using WSDL 2.0 as well.

Re:WordStar? (1)

andsens (1658865) | more than 2 years ago | (#40806855)

I'm gonna stop you right there. You should get a big slap in the face for saying REST and SOAP are on the same level!
SOAP sucks big monkeyballs and REST doesn't, period.

Re:WordStar? (1)

dkf (304284) | more than 2 years ago | (#40808577)

I'm gonna stop you right there. You should get a big slap in the face for saying REST and SOAP are on the same level!

You're right about that, they're not the same thing. They're fundamentally different ways of viewing an application on the web (one is about describing things beforehand, the other at runtime; one is about factoring verbs first, the other is nouns first). But from the perspective of the big picture, they're really not that different.

SOAP sucks big monkeyballs and REST doesn't, period.

That's what it seems like to you, but when you're working with applications that you're building on top of these webapps, SOAP works better. The tooling is better. The separation of concerns is better. The message characterization is better. You might claim that that is because the messages can change over time, but the alternative that the REST architectural style tends to propose isn't a win.

My webapps expose both SOAP and (strict, HATEOAS) RESTful interfaces. Neither is superior to the other in all areas.

Re:WordStar? (0)

Anonymous Coward | more than 2 years ago | (#40803281)

Apparently, WS-* refers to web services; OAUTH is some DRM standard allowing one to constrain sharing digital stuff. I got bored about it after that.

So what? (0)

Anonymous Coward | more than 2 years ago | (#40802167)

It doesn't have to be perfect - only "good enough".
Look at all the technologies we're currently using: The X Server, HTTP, and so on. None of it is perfect, but "good enough".

So instead of moaning, do something to improve it!

Re:So what? (0)

Anonymous Coward | more than 2 years ago | (#40802261)

Sometimes, murdering it is the best approach. He did something; he aired the dirty laundry. They say sunshine cleans things; hopefully this will help.

Re:So what? (1)

luis_a_espinal (1810296) | more than 2 years ago | (#40802269)

It doesn't have to be perfect - only "good enough". Look at all the technologies we're currently using: The X Server, HTTP, and so on. None of it is perfect, but "good enough".

So instead of moaning, do something, to improve it!

Improvement can only take place when things can be salvaged at a reasonable cost. When the architecture is bad enough to cross that point, it is best to start over. The software industry has accumulated plenty of live examples of this over the last 30-40 years.

Re:So what? (1)

Jonner (189691) | more than 2 years ago | (#40804575)

Eran Hammer seems to be saying that OAuth 1 is "good enough" and few will benefit from OAuth 2.

Re:So what? (2)

Deorus (811828) | more than 2 years ago | (#40802333)

Nobody uses X Servers for what they were designed for (though I don't dislike the concept), and the only problem with HTTP is that people are abusing it for things it shouldn't be used for. By design, HTTP is a stateless pull protocol, and people are abusing it by forcing state, streaming, and pushing for no good reason.

Lack of perfection is not the problem; the problem is high-level idiots with influence reinventing high-level wheels full of compromises because they don't know better and should never have been engineers.

Re:So what? (1)

honkycat (249849) | more than 2 years ago | (#40803555)

Don't say "nobody," I use them for what they were designed for at least a few times a year.

Re:So what? (1)

sjames (1099) | more than 2 years ago | (#40803463)

Once a spec has spent too long trying to get from good enough to perfect, often by gluing on so many options, exceptions, and extensions that nearly anything can be said to comply but nothing can be said to implement it comprehensibly, there can be no good enough any more. The closest you can get is to carve a bunch of it away and call a cleaned up subset of it good enough.

So is OAuth 2.0 (-1)

Anonymous Coward | more than 2 years ago | (#40802177)

now lead-edited by Alan Smithee [hollywoodlexicon.com]?

Sounds familiar (2)

An Ominous Coward (13324) | more than 2 years ago | (#40802243)

The resulting specification is a designed-by-committee patchwork of compromises that serves mostly the enterprise. To be accurate, it doesn't actually give the enterprise all of what they asked for directly, but it does provide for practically unlimited extensibility. It is this extensibility and required flexibility that destroyed the protocol. With very little effort, pretty much anything can be called OAuth 2.0 compliant.

Sounds familiar. For anyone following the Smart Grid work, this is exactly why Smart Energy 2.0 is a fiasco. All of our major standards organizations (IEEE, ANSI, IETF, etc.) have been taken over by bureaucratic-minded industry and government consultants -- parasites that feed first on the drawn-out work within the standards organization that results in a "flexible" specification (meaning that it's not a specification at all), then feed on any group that tries to implement the standard because they'll need the "expert" insight in order to make the "flexible" damn thing work at all.

Re:Sounds familiar (0)

Anonymous Coward | more than 2 years ago | (#40802301)

Yeah. Seems to happen a lot. SIP is nearly as bad.

Re:Sounds familiar (1)

JonySuede (1908576) | more than 2 years ago | (#40808521)

SIP is nearly as bad.

SIP is not only nearly as bad; I would say that SIP is an abomination and that the well-thought-out, well-designed H.323 should have won the soft-phone protocol war. But as usual, the Worse is Better [wikipedia.org] approach won...

Re:Sounds familiar (2, Insightful)

Anonymous Coward | more than 2 years ago | (#40802395)

To be fair, it's a hard problem. Let's take the analogous example of a word processor. Surely we can come up with something less bloated than Microsoft Word? Let's just get rid of all the arcane features that only 1 percent of the user base wants. That sounds good, until you find that entire industries (such as legal) run their business on Word and depend on those arcane features. Another user base (such as sci pubs) might need an entirely different subset of arcane features. Then there are the globalization features needed to support those who don't speak or write English as their primary language. So what are your options?

- only support the common subset that everyone wants. This would work at some level, but it's no longer a commercial product. This would perhaps be the mindset of an Open Source developer.
- support more specialized features via plug-ins or add-on packs. This is a maintenance, deployment and security nightmare, as we've seen with web browsers.
- support everything, which is what Microsoft does. It's a very unwieldy package.

Re:Sounds familiar (1)

Nurgled (63197) | more than 2 years ago | (#40804399)

Option 4: Focus on a specific use case and let others focus on other use cases, rather than trying to make one product that is a jack of all trades and a master of none. There's no rule that says all problems must be solved with one piece of software.

Re:Sounds familiar (1)

abirdman (557790) | more than 2 years ago | (#40806803)

There's no rule that says all problems must be solved with one piece of software.

There is such a rule. It's called monopoly capitalism.

Re:Sounds familiar (1)

arglebargle_xiv (2212710) | more than 2 years ago | (#40806193)

All of our major standards organizations (IEEE, ANSI, IETF, etc.) have been taken over by bureaucratic-minded industry and government consultants

Sad but true. About a decade ago I was part of an IETF standards effort that was turning into crap fast; when someone finally decided to run an interop test on implementations, the conclusion was "this protocol does not work". The working group chair's comment on this was "we'll push it through as a standard anyway and then someone will have to figure out how to make it work". My (private) reaction to this was "The IETF has now become the ISO/OSI". In other words, it had become the very thing that it was created as a reaction against.

In terms of bureaucracy, there are standards groups in there whose composition is 95% conslutants feeding off the (mainly) US government like intestinal parasites and 5% academics trying to push their pet toy protocol, with 0% actual implementers providing input (in the abovementioned standards group, there hasn't been anyone who cuts code involved in the process for more than a decade). Occasionally some new guy will come along and ask why it's so hard to do X in whatever standards document exists to tell you how to do X, and has to be quietly informed that the "standard" for X is merely the document for some US Navy procurement contract dressed up as an RFC and not any real standard at all.

Twenty years ago we had the IETF come in to kill off OSI. Now that significant parts of the IETF have become what they were supposed to get rid of, we need something else to come in and play the role that the IETF originally played. Unfortunately I don't think there's anything out there...

Who still falls for "frameworks" in 2012? (0)

Anonymous Coward | more than 2 years ago | (#40802251)

A framework is like a set of libraries that you embed your code in, instead of embedding them into your code.
The problem is that usually:
1. it doesn't give you the option to use only parts of it,
2. it is a vastly over-engineered mess because of that same lack of separation,
3. you can't use it with other things, because those other things don't fit into the fixed framework, and
4. it doesn't let you model your big picture, for the same reason.

I've seen enough of them; I won't fall for that again.

See also: "Enterprisey", "inner platform anti-pattern", "TheDailyWTF"

Re:Who still falls for "frameworks" in 2012? (1)

oakgrove (845019) | more than 2 years ago | (#40802401)

I gather you don't write Android or iOS apps, as the process can only really be described as plugging your code into the framework.

Re:Who still falls for "frameworks" in 2012? (1)

icebraining (1313345) | more than 2 years ago | (#40803541)

Wrong kind of framework. They're talking about a framework of concepts and ideas, not a software framework.

a few excerpts (3, Interesting)

anarcat (306985) | more than 2 years ago | (#40802305)

Good article, quite interesting to see the problems a community is faced when going through standards processes.

Our standards making process is broken beyond repair. This outcome is the direct result of the nature of the IETF, and the particular personalities overseeing this work. To be clear, these are not bad or incompetent individuals. On the contrary – they are all very capable, bright, and otherwise pleasant. But most of them show up to serve their corporate overlords, and it’s practically impossible for the rest of us to compete. Bringing OAuth to the IETF was a huge mistake.

That is a worrisome situation. With the internet's openness resting so heavily on open standards, the idea that the corporate world is taking over those standards and sabotaging them to serve its own selfish interests is quite problematic, to say the least.

As for the actual concerns he is raising about OAuth 2.0, this one is particularly striking:

Bearer tokens - 2.0 got rid of all signatures and cryptography at the protocol level. Instead it relies solely on TLS. This means that 2.0 tokens are inherently less secure as specified. Any improvement in token security requires additional specifications and as the current proposals demonstrate, the group is solely focused on enterprise use cases.

I don't know much about oauth, but this sounds like a stupid move.

Re:a few excerpts (2)

Trepidity (597) | more than 2 years ago | (#40802551)

The enterprise-use-cases problem is partly for structural reasons. The IETF process makes it most natural to participate if you're a representative of a company, because it is very long, requires many meetings (some of them in-person), and therefore is most feasible to participate in if someone is paying your salary and travel to spend 3 years standardizing a protocol. Sometimes academics participate as well, if it's a proposed standard that is very close to their interests, enough so that it makes sense to take time that could be spent doing new research, and spend it on the IETF process instead. If you aren't in either of those positions, participation in an IETF process is likely to be economically challenging.

Re:a few excerpts (1)

naasking (94116) | more than 2 years ago | (#40803145)

I don't know much about oauth, but this sounds like a stupid move.

No, it's how it should have been to begin with. Bearer tokens are now pure capabilities supporting arbitrary delegation patterns. This is exactly what you want for a standard authorization protocol.

Tying crypto to the authorization protocol is entirely redundant. For one thing, it immediately eliminates web browsers from being first-class participants in OAuth transactions. Bearer tokens + TLS make browsers first-class, and the pattern is already used on the web quite a bit, albeit not as granularly as it should be.

His criticisms against bearer tokens [hueniverse.com] are based on the ideals of authenticating identity, but bearer tokens in OAuth are about authorization. These are very different problems, and authentication actually impedes the delegation patterns that people want to use OAuth for.

Giving someone a bearer token authorizes them to use a resource on your behalf. That third party shouldn't have to authenticate with the resource as well. It could be a person or service that's entirely unknown, so authentication requirements actually prevent work from getting done. This just leads to awkward workarounds, which OAuth was supposed to prevent!

I'm glad someone sees things clearly (1)

redengin (1171623) | more than 2 years ago | (#40804647)

Thanks for stating it so well.

OAuth (3, Interesting)

bbroerman (715822) | more than 2 years ago | (#40802339)

Having implemented OAuth1.0 and 2.0 services for communicating with various platforms, I was amazed at the lack of any security in Oauth 2.0. As mentioned by others, it completely relies on SSL/TLS, which is itself somewhat broken. From what I have gathered, it's simpler. That's about it. Actually, I prefer OAuth 1.0 and have modeled many of my own APIs after it.

Re:OAuth (1)

schlesinm (934723) | more than 2 years ago | (#40802573)

Having implemented OAuth1.0 and 2.0 services for communicating with various platforms, I was amazed at the lack of any security in Oauth 2.0. As mentioned by others, it completely relies on SSL/TLS, which is itself somewhat broken. From what I have gathered, it's simpler. That's about it. Actually, I prefer OAuth 1.0 and have modeled many of my own APIs after it.

1.0 had some issues when you moved beyond web apps (JavaScript or mobile apps), but I am much more confident of its security.

Re:OAuth (3, Interesting)

icebraining (1313345) | more than 2 years ago | (#40803567)

There's nothing wrong with SSL/TLS for this. Software doesn't fall for SSL stripping and you can even copy the service's certificate over and validate against that, bypassing CA issues.
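The "copy the service's certificate over" approach the parent describes is certificate pinning. A minimal sketch with Python's stdlib ssl module; the certificate path and URL are hypothetical, and the load/fetch lines are left commented since the files don't exist here:

```python
import ssl

# Trust ONLY an explicitly provided service certificate instead of the
# system CA bundle; any other cert, even one signed by a public CA,
# would then fail verification.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # client-side TLS
ctx.verify_mode = ssl.CERT_REQUIRED            # reject unverified peers
ctx.check_hostname = True                      # still match the hostname
# ctx.load_verify_locations("/etc/myapp/service.pem")  # the pinned cert
# urllib.request.urlopen("https://api.example.com/", context=ctx)
print(ctx.verify_mode == ssl.CERT_REQUIRED)
```

Software clients get to make this choice per-connection, which is exactly why SSL stripping and rogue-CA attacks that hit browsers don't have to apply here.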

Re:OAuth (1)

chrb (1083577) | more than 2 years ago | (#40803863)

Having implemented OAuth1.0 and 2.0 services for communicating with various platforms, I was amazed at the lack of any security in Oauth 2.0. As mentioned by others, it completely relies on SSL/TLS

Hammer has been saying similar things for years now: OAuth 2.0 (without Signatures) is Bad for the Web [hueniverse.com]

Re:OAuth (1)

Jonner (189691) | more than 2 years ago | (#40804631)

Having implemented OAuth1.0 and 2.0 services for communicating with various platforms, I was amazed at the lack of any security in Oauth 2.0. As mentioned by others, it completely relies on SSL/TLS, which is itself somewhat broken. From what I have gathered, it's simpler. That's about it. Actually, I prefer OAuth 1.0 and have modeled many of my own APIs after it.

TLS is not broken at all. Using it properly can be difficult. This, as well as lack of redundant security mechanisms is the reason Eran Hammer didn't like relying on TLS solely. If you think TLS is broken, you may be confusing it with the public key infrastructure everyone uses for HTTPS. The problems with poorly run signing authorities are not fundamentally technological but administrative. Outside of accessing public HTTPS sites with a browser, you can take more control over the certificates and policies used for TLS authentication.

Re:OAuth (1)

dkf (304284) | more than 2 years ago | (#40808785)

TLS is not broken at all. Using it properly can be difficult. This, as well as lack of redundant security mechanisms is the reason Eran Hammer didn't like relying on TLS solely. If you think TLS is broken, you may be confusing it with the public key infrastructure everyone uses for HTTPS. The problems with poorly run signing authorities are not fundamentally technological but administrative. Outside of accessing public HTTPS sites with a browser, you can take more control over the certificates and policies used for TLS authentication.

To be more exact, the key to using TLS well is controlling the code that determines whether a particular chain of certificates (the ones authorizing a connection) is actually trusted. HTTPS does this one particular way (a fairly large group of root CAs that can delegate to others, coupled with checking that the host is claiming to act for the hostname that was actually requested) but it isn't the only way; having a list of X.509 certificates that you trust and denying all others is far more secure (though annoying from a deployment perspective). There are a number of other policies that you could use; there's no reason you couldn't build a web-of-trust model, or use cunningness in certificate extensions, or apply path length constraints, or ...

The deep problem is that there are two key parts; authentication, and authorization. They're totally different things. Authentication gives you a token that describes someone or something with varying levels of detail, which could be anywhere from a bare UUID to a long list of descriptive information. Authorization is the process of taking the token, plus possible extra information supplied by various parties, and working out what you want to actually allow them to do. In the simple case, where you've got clients and servers and the servers don't talk to each other, it's very easy. The problem comes when the client wants to tell Server A to do something on Server B on their behalf (or even A controls something on B which controls something on C; I have situations where that depth of complexity is relevant, A being a portal, B a workflow engine, and C a filestore or database); that's the delegation problem, and it's been causing people trouble for many years.

The root of the delegation problem is that you can't do a zero-trust delegation; the parties in the delegation have to trust each other to not be evil and misuse the tokens being passed around. There are ways to minimize the trust required, but that's surprisingly difficult to get right. It's not helped at all by the fact that there aren't any real standards for doing delegation (specifications, yes; accepted standards, no), and the fact that it can't be zero-trust makes the deep security hackers hate the whole idea. That in turn encourages them to work on the wrong parts of the problem; we don't need Yet Another scheme for limiting the amount of trust being delegated or for limiting who it can be delegated to (I've yet to see a scheme which can work in a real, messy deployment).

I prefer to avoid security as an area. It's a real mess, and the community isn't very cohesive.

Webdevs and Sign in... (0)

Anonymous Coward | more than 2 years ago | (#40802535)

Eran Hammer on Twitter: "The new and improved signing proposal for mac tokens was sweet and simple. One JS line. But they wanted JWT and assertions." Emphasis mine.

Some information (2)

wonkey_monkey (2592601) | more than 2 years ago | (#40802575)

Yeah yeah, I know, if you don't already know and can't be bothered to go looking, you must therefore be a dribbling buffoon who should not dare to even use the internet let alone visit the hallowed and sacred Slashdot, but:

OAuth is an open standard for authorization. It allows users to share their private resources (e.g. photos, videos, contact lists) stored on one site with another site without having to hand out their credentials, typically supplying username and password tokens instead. Each token grants access to a specific site (e.g., a video editing site) for specific resources (e.g., just videos from a specific album) and for a defined duration (e.g., the next 2 hours). This allows a user to grant a third party site access to their information stored with another service provider, without sharing their access permissions or the full extent of their data.
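Concretely, an OAuth 2.0 "bearer token" request is nothing more than an HTTP call with an Authorization header. A sketch using Python's stdlib; the API URL is made up, and the token is the example value from RFC 6750:

```python
import urllib.request

# Whoever holds the token can use it, so the request must go over TLS.
token = "2YotnFZFEjr1zCsicMWpAA"  # example token from RFC 6750
req = urllib.request.Request(
    "https://api.example.com/albums/42/videos",  # hypothetical endpoint
    headers={"Authorization": "Bearer " + token},
)
# urllib.request.urlopen(req)  # would perform the actual call
print(req.get_header("Authorization"))  # → Bearer 2YotnFZFEjr1zCsicMWpAA
```

The scoping and expiry described above live server-side, keyed off the token; the wire format itself is this simple, which is both the appeal and (per Hammer) the risk.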

Re:Some information (0)

Anonymous Coward | more than 2 years ago | (#40802801)

Awesome that you went through the trouble of Googling that info and cutting and pasting it into a post, instead of just reading the linked article. But this is /. I guess.

v1 was bullshit too (2)

CockMonster (886033) | more than 2 years ago | (#40802663)

I tried to implement OAuth v1 on a mobile device. What a pain in the hole. And it all fell down once you had to get the user to fire up the browser to accept the request. There was no way (I could figure out) to handle the callback so instead it seems to have been implemented via a corporate server thereby defeating the whole purpose of it. The easiest to work with was DropBox. I never got what extra level of security sorting the parameters provided the signature would show up any tampering, it just means you gobble up memory unnecessarily.

Re:v1 was bullshit too (1)

Anonymous Coward | more than 2 years ago | (#40803253)

I never got what extra level of security sorting the parameters provided the signature would show up any tampering, it just means you gobble up memory unnecessarily.

Well it's good that someone else understood it and forced you to do it, then.

But in actual response to your answer: it allows the request signature to be calculated by the server you're sending the request to so that it can ensure that the parameters have not been tampered with.

Re:v1 was bullshit too (0)

Anonymous Coward | more than 2 years ago | (#40803421)

Someone didn't read what the guy said...

"...provided the signature would show up any tampering..."

Re:v1 was bullshit too (0)

Anonymous Coward | more than 2 years ago | (#40803521)

Preempt the requests so they don't pop up; I do that with Basic auth headers to avoid the 401.
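For reference, "preempting" here just means attaching the Authorization header on the first request instead of waiting for the server's 401 challenge. A sketch in Python; the credentials are made up:

```python
import base64

# Build the Basic auth header ourselves and send it on the FIRST
# request, avoiding the 401 challenge round-trip entirely.
user, password = "alice", "s3cret"  # hypothetical credentials
creds = base64.b64encode(f"{user}:{password}".encode()).decode()
headers = {"Authorization": "Basic " + creds}
print(headers["Authorization"])  # → Basic YWxpY2U6czNjcmV0
```

This only makes sense over TLS, since Base64 is an encoding, not encryption.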

Re:v1 was bullshit too (5, Informative)

Mark Atwood (19301) | more than 2 years ago | (#40803531)

I was there, I helped write v1.

The reason you had to sort the parameters etc etc was because OAuth 1.0 was designed to be implementable by a PHP script running under Apache on Dreamhost. Which meant you didn't get access to the HTTP Authentication header, and you didn't get access to the complete URL that was accessed. So we had to work out a way to canonicalize the URL to be signed from what we could guarantee you'd have: your hostname, your base URL path, and an unsorted bag of URL parameters. Believe me, we *wished* for a straightforward URL canonicalization standard we could reference. None existed. So we cussed a lot, bit the bullet, and wrote one that was as fast and simple as possible: sort the parameters and concatenate them.

Go yell at the implementors of Apache and of PHP. If we could have guaranteed that you'd have access to an unmangled Authentication: HTTP header, the OAuth 1.0 spec would have been 50% shorter and a hell of a lot easier to implement.
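The canonicalization he describes (percent-encode, sort, concatenate, then sign) can be sketched roughly like this in Python. This is a simplification of RFC 5849 (it omits the nonce, timestamp, and scheme/port normalization), and the request values are made up:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def oauth_escape(s):
    # OAuth's stricter percent-encoding: only unreserved chars stay bare
    return quote(s, safe="-._~")

def signature_base_string(method, base_url, params):
    # Encode each key and value, THEN sort, then join with '=' and '&'
    pairs = sorted((oauth_escape(k), oauth_escape(v)) for k, v in params)
    normalized = "&".join(k + "=" + v for k, v in pairs)
    return "&".join([method.upper(), oauth_escape(base_url),
                     oauth_escape(normalized)])

def hmac_sha1_signature(base_string, consumer_secret, token_secret=""):
    # The signing key is the two secrets, each escaped, joined by '&'
    key = (oauth_escape(consumer_secret) + "&" +
           oauth_escape(token_secret)).encode()
    digest = hmac.new(key, base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

base = signature_base_string(
    "GET", "http://example.com/photos",  # hypothetical request
    [("size", "large"), ("oauth_consumer_key", "key")])
print(base)
print(hmac_sha1_signature(base, "secret"))
```

The server rebuilds the same base string from the pieces it can see and recomputes the HMAC; any mismatch means a parameter was tampered with in transit.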

Re:v1 was bullshit too (0)

Anonymous Coward | more than 2 years ago | (#40803651)

OAuth 1.0 worked great for what it did. It also forced me to learn why the hashing and signature required such things. I'm glad I had to get OAuth working, and as long as client tokens weren't open to everyone (see Twitter), it worked great for API usage.

Re:v1 was bullshit too (1)

CockMonster (886033) | more than 2 years ago | (#40804149)

Hi Mark, thanks for replying. Do you not think it was a flaw to target a spec towards a specific language/architecture? Another thing that really pissed me off was the complete lack of help testing my implementation. I'd have given up far sooner if it hadn't been for this site: http://term.ie/oauth/example/client.php [term.ie]

Re:v1 was bullshit too (1)

equex (747231) | more than 2 years ago | (#40806897)

Well, half the world runs on Apache + PHP, but you are right in asking why.

Re:v1 was bullshit too (1)

dkf (304284) | more than 2 years ago | (#40808817)

Do you not think it was a flaw to target a spec towards a specific language/architecture?

From the perspective of someone on the outside of the process, it was both a mistake and not a mistake. It was a mistake in that it causes too many compromises to be made. It was not a mistake in that it allowed a great many deployments to be made very rapidly. IMO, they should have compromised a bit less and pushed back at the Apache devs a bit harder to get them to support making the necessary information available.

But I wasn't there, so I've very little room to criticize.

Re:v1 was bullshit too (1)

Admiral Burrito (11807) | more than 2 years ago | (#40804251)

Speaking of sorting parameters, there is at least one issue I still see in a lot of libraries. The spec says encode things, then sort them. Many of the libs I've seen do it the other way around. Sorting first is the most obvious way to do it, but I guess the spec was trying to avoid issues with locale-specific collations by forcing everything to ASCII first. Most sites use plain alphanumeric parameter names, so people get away with doing it either way.

Still, it goes to show how developers can completely fail to RTFSpec, even when developing a library for use by lots of other people. Seems to be exactly what worries Mr. Hammer.
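The difference only bites when a parameter name contains a reserved character. A small illustration in Python; the parameter names are made up, using a PHP-style `tag[]` key:

```python
from urllib.parse import quote

def oauth_escape(s):
    return quote(s, safe="-._~")  # OAuth-style percent-encoding

keys = ["tag[]", "tagA"]  # hypothetical parameter names

# What the spec says: encode first, then sort the encoded strings.
encode_then_sort = sorted(oauth_escape(k) for k in keys)
# What many libraries do: sort the raw strings, then encode.
sort_then_encode = [oauth_escape(k) for k in sorted(keys)]

print(encode_then_sort)   # ['tag%5B%5D', 'tagA']
print(sort_then_encode)   # ['tagA', 'tag%5B%5D']
```

Because '%' sorts before 'A' but '[' sorts after it, the two orders diverge, and a client and server that disagree will compute different signatures for the same request.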

Re:v1 was bullshit too (1)

CockMonster (886033) | more than 2 years ago | (#40804519)

IIRC, you have to encode the key, encode the parameter, append them with '&' and encode again, and then sort them, generate the signature, and encode the signature key and the signature itself. Or something. Oh, and the encoding routine is urlencode plus some extra characters, so that has to be written from scratch too.

Re:v1 was bullshit too (1)

gbjbaanb (229885) | more than 2 years ago | (#40807451)

Go yell at the implementors of Apache and of PHP

then why didn't you? Last time I checked Apache was open source so you could have submitted your required changes. I'm not quite so sure of PHP, but maybe there is a way to add an extension to it that grabs the unmangled header from your newly customised Apache.

Re:v1 was bullshit too (1)

petermgreen (876956) | more than 2 years ago | (#40809583)

The problem is, AIUI, that the goal was to make things work on shitty webhosts. So working on an up-to-date Apache/PHP with the right settings is not enough; you have to work on whatever old version of Apache/PHP and whatever crummy config the webhost offers.

Re:v1 was bullshit too (1)

gbjbaanb (229885) | more than 2 years ago | (#40810331)

sure, but if you don't fix things, they'll never get fixed. The OP seemed to just be too whiny about how things were difficult, boohoo.

In a year or two all those old Apache webhosts would be upgraded - or TBH, if he'd made the patch and added it they would pretty much all get upgraded in the next update release. And those that didn't, would be really insecure anyway due to other unpatched vulnerabilities. I think webhosts tend to update their servers reasonably regularly.

Eran Hammer became part of the problem (0)

Anonymous Coward | more than 2 years ago | (#40808907)

The guy may have been bright, but he was a royal pain in the butt to work with.
Maybe he was hoping everyone else would resign and leave him to it.
He despised any enterprise bigger than three people and a dog, and always bit the hands that fed him.
Standards-setting involves understanding the needs and constraints of big and small players, not pissing on them with the holier-than-thou tone that was Eran's trademark.

Keep it simple stupid (0)

Anonymous Coward | more than 2 years ago | (#40803117)

Sorry this guy had a rotten experience after trying to help the web community.

Simplicity is the absolute bedrock of software. Every layer of the stack that adds more complexity, adds more time and resources to support. Which means at the end of the day you can't build a larger more powerful app because you are wasting time with unnecessary complexity.

The enterprise is fucked, because they don't understand the importance of simplicity. This is why they are being replaced by cloud service providers at all levels, and this trend will continue. It is an order of magnitude increase in productivity to have a giant like Google handle a million companies' emails vs. a million individual IT departments that already have too much on their plate.

Thanks Eran! (1)

dtrainopain (1168077) | more than 2 years ago | (#40805153)

I’ve worked on related standards and I can identify with much of Eran’s frustration. Eran’s a smart, dedicated, passionate person who has worked very hard to make OAuth work for everyone, not just those looking to profit from it. And OAuth is currently the best open-standard option for securing REST-based web services. I hope that when he thinks about OAuth, he thinks primarily about the huge contribution he has made, and not with regret.

The standardization process ultimately brings a lot of competing interests to the table, often from vendors. Vendors are increasingly focused on identity as it facilitates the ‘de-perimeterization’ trend in approaches to securing networks. In the identity standards process these different interests are often addressed by creating different ‘profiles’ within the standard, to address specific use cases and concerns like the ones mentioned by him and in some posts here.

Once the standard is ratified (and often before), everyone goes off and creates implementations of those profiles, but usually not all of them, to suit their needs. That makes the products more complex to deploy, and to deploy securely, a lament that Eran expresses. Ultimately the market will decide which profiles were the most important, based on their adoption. I believe that much of Eran’s vision has been and will continue to be realized as adoption increases and OAuth profiles mature.

I prefer XML-RPC over SOAP and REST (0)

Anonymous Coward | more than 2 years ago | (#40808099)

Everyone hates SOAP, and we all understand why. REST + OAuth is a good combination, but the fact that every single service needs a different implementation sucks. You can say it's all JSON, or in some cases XML, or even HTTP POST variables; but the fact is, there is no standard for REST. On the other hand, more than 10 years ago there was XML-RPC. It was dead simple to implement and very flexible. Once you had an XML-RPC implementation, you could call any service without recoding your message-handling stuff. I still don't know why it never gained traction.
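For anyone who never saw it, the whole appeal of XML-RPC was one fixed wire format for every service; Python still ships it in the stdlib. A sketch marshalling a call and unmarshalling it again (the method name and arguments are made up):

```python
import xmlrpc.client

# Marshal a call to XML and unmarshal it again, stdlib only; every
# XML-RPC service speaks exactly this format, so one client library
# covers them all.
payload = xmlrpc.client.dumps((3, 4), methodname="sample.add")
params, method = xmlrpc.client.loads(payload)
print(method, params)  # → sample.add (3, 4)
```

Against a live service you'd just use `xmlrpc.client.ServerProxy(url)` and call methods on it as if they were local, which is the "no per-service recoding" point the parent is making.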
