
IETF Starts Work On Next-Generation HTTP Standards

alphadogg writes "With an eye towards updating the Web to better accommodate complex and bandwidth-hungry applications, the Internet Engineering Task Force has started work on the next generation of HTTP, the underlying protocol for the Web. HTTP Strict Transport Security (HSTS) is a security mechanism designed to protect Internet users from hijacking. HSTS is an opt-in security enhancement whereby web sites signal browsers to always communicate with them over a secure connection. If the user's browser complies with the HSTS policy, it will automatically switch to the secure version of the site, using 'https' without any intervention from the user. 'It's official: We're working on HTTP/2.0,' wrote IETF Hypertext Transfer Protocol working group chair Mark Nottingham in a Twitter message late Tuesday."
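For reference, the HSTS mechanism described in the summary boils down to a single response header. A minimal TypeScript/Node.js sketch (the key/cert paths are placeholders; browsers only honor the header when it arrives over TLS):

    import * as https from "https";
    import { readFileSync } from "fs";

    // Minimal HTTPS server opting into HSTS. key.pem/cert.pem are
    // placeholder paths for the site's TLS key and certificate.
    const server = https.createServer(
      { key: readFileSync("key.pem"), cert: readFileSync("cert.pem") },
      (req, res) => {
        // "Remember HTTPS-only for one year, including subdomains."
        res.setHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains");
        res.end("hello over TLS\n");
      }
    );
    server.listen(443);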

Comments Filter:
  • The summary seems a bit confused, like they've misinterpreted the proposed standardisation of HSTS and the beginning of work on HTTP 2.0 as the same thing.

    • Right? I had to read it a few times to make sense of it. I'm still not quite clear on what HSTS has to do with HTTP/2.0...

  • by Bananatree3 ( 872975 ) on Wednesday October 03, 2012 @10:20PM (#41545291)
    The EFF has plugins [eff.org] for Chrome and Firefox that force HTTPS on as many sites as they can. It will be nice to have this formally in HTTP 2.0, but the feature is already available for many sites via the plugin.
  • There's going to be push-back from corporations on this one unless they break it so it's insecure. Truly secure browser-to-server communication resistant to man in the middle attacks would mean IT can't record and document what information is being sent from employees' computers. Legal will put the kibosh on the use of any tech that prevents them from papering over their asses by saying they did everything possible to prevent transmission of confidential/proprietary data. Note: Everything in a corporation i

    • by Anonymous Coward on Wednesday October 03, 2012 @10:47PM (#41545397)

      You can already install a local certificate and proxy HTTPS traffic, and there are commercial solutions allowing such corporate monitoring. It also "adds security" by removing the desktop or mobile device's need for certificate authentication and management, moving it to the proxy instead. In short, monitoring HTTPS traffic is routine in the enterprise and has been standard practice for many years.
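      For the skeptical, a minimal sketch of the client side of such a setup, assuming the corporate root certificate has been exported to a hypothetical /etc/ssl/corp-root.pem (the interception proxy re-signs every site's certificate with that root, and managed machines are configured to trust it):

        import * as https from "https";
        import { readFileSync } from "fs";

        // Hypothetical root CA the interception proxy uses to re-sign
        // certificates on the fly. Passing `ca` replaces Node's default
        // trust store, so only chains rooted in the corporate CA are accepted.
        const corpRoot = readFileSync("/etc/ssl/corp-root.pem");

        https.get("https://example.com/", { ca: corpRoot }, (res) => {
          // To the application this looks like ordinary HTTPS;
          // the proxy in the middle sees the plaintext.
          console.log("status:", res.statusCode);
        });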

    • by DragonWriter ( 970822 ) on Thursday October 04, 2012 @12:30AM (#41545767)

      There's going to be push-back from corporations on this one unless they break it so it's insecure. Truly secure browser-to-server communication resistant to man in the middle attacks would mean IT can't record and document what information is being sent from employees' computers.

      Untrue. MITM-proof communication doesn't protect you from someone who has control over either endpoint, so it doesn't prevent monitoring of corporate computers.

    • by mcrbids ( 148650 )

      I guess you aren't familiar with *cough* proxy servers *cough*, which work just fine with SSL: very secure from the organization outward, but able to keep convenient logs of all traffic flowing in and out.

      Combine that with blocking outbound port 443 from all computers except the proxy. For the truly paranoid, a deep-packet-inspection firewall can be trivially configured to drop all SSL packets.

      Problem (mostly) solved, for the sufficiently ethically compromised organization. Private smart phones represent a sig

    • It's illegal for companies to spy on their employees in such a manner in most sane countries anyway, so I don't see any issues with this.

  • Not everything in this wide world can be represented as static state. There are lots of dynamic, parallel, and long-running actions happening all around us. It sure would be nice to trigger a processing operation with an EXECUTE verb because PUT and POST just don't make sense in that context.
    • How about "DO" instead? Much shorter.

      Anyway, browsers have GET and POST, but does anybody know one that has PUT and DELETE? These should be relatively simple to implement, but the last time I looked, none had any, meaning that if you wanted to use REST APIs from your browser (as opposed to server-to-server), you'd have to do awkward things like
      GET "/account/12345/delete"
      instead of
      DELETE "/account/12345"

      Which is a problem because GET is supposed to be "idempotent" (not supposed to have any side effects no matter how many times you run it).

      • If it's not, you're taking away a whole class of web scalability (caches like Varnish/Squid). And yet people abuse the shit out of GET instead of POST.
      • Which is a problem because GET is supposed to be "idempotent" (not supposed to have any side effects no matter how many times you run it).

        (1) you could just use POST and deny GET requests for URLs with side effects. You just end up encoding the "delete key" into the form's target URL instead of using a field, but for most applications that's fine. (2) "Idempotent" means that running once is identical to running many times. Deleting an account is usually idempotent so it's valid (albeit crazy/stupid) fo

      • by aneroid ( 856995 )

        Wrong. GET is supposed to be "nullipotent" [wikipedia.org]. You're correct, though, that GET is not supposed to have any side effects. (There's a toy sketch of the distinction below.)

        PUT and DELETE are idempotent [wikipedia.org] - "multiple identical requests should have the same effect as a single request"

        The reason browsers don't have them is the HTML/XHTML spec: "HTML forms (up to HTML version 4 and XHTML 1) only support GET and POST as HTTP request methods."[1] So if they implemented them anyway, it would most likely be done differently by each browser, and more so in IE as usual.

        1: http://sta [stackoverflow.com]
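        To make the distinction concrete, here's a toy TypeScript sketch (in-memory only, not HTTP; the store and IDs are made up):

          // A toy in-memory store standing in for server state.
          const store = new Map<string, string>([["12345", "alice"]]);

          // Nullipotent, like GET: no side effects at all, however often it runs.
          const get = (id: string) => store.get(id);

          // Idempotent, like DELETE: the first call may change state; repeats don't.
          const del = (id: string) => store.delete(id);

          // Not idempotent, like POST: every call creates another resource.
          let nextId = 12346;
          const post = (name: string) => store.set(String(nextId++), name);

          get("12345"); get("12345"); // state untouched either time
          del("12345"); del("12345"); // gone after the first; the second is a no-op
          post("bob"); post("bob");   // two new accounts: 12346 and 12347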

        • by aneroid ( 856995 )

          (They are of course present in XMLHttpRequest.)
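          For instance, a sketch issuing a "real" DELETE from page script (the endpoint is hypothetical):

            // No form workaround needed: XMLHttpRequest can send the standard verbs.
            const xhr = new XMLHttpRequest();
            xhr.open("DELETE", "/account/12345");
            xhr.onload = () => console.log("server answered", xhr.status);
            xhr.send();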

        • Highly interesting, never knew about nullipotent.

          >The reason browsers don't have them is because of the HTML/XHTML spec

          Well, OK. But XHTML was from about 2000 or so. Now we're at HTML5 and still don't have browser-supported DELETE and PUT? What I mean to say is, if it's not in the spec, they should PUT it there, already.

      • My thought: a browser is for viewing content, not performing raw operations. You probably don't want people to be able to delete content nodes on your server just by issuing a DELETE request; you'd want to POST a request to server-side code that performs the operation on the user's behalf so it can do proper filtering (e.g. not permitting deletion of "/"). A browser isn't the only client around, and some things are just not things you really want to be doing in a browser. There's too little validation of what's

        • by dkf ( 304284 )

          You probably don't want people to be able to delete content nodes on your server just by issuing a DELETE request

          It's not a problem in reality; just because someone asks to delete something doesn't mean you have to say "yes".
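          A minimal TypeScript/Node.js sketch of that point (the authorization check is a placeholder):

            import * as http from "http";

            // The verb is just a request; the server decides whether to honor it.
            http.createServer((req, res) => {
              if (req.method !== "DELETE") {
                res.writeHead(405); // only DELETE is handled in this sketch
                res.end();
                return;
              }
              const authorized = req.headers.authorization === "Bearer let-me-in"; // placeholder
              if (!authorized || req.url === "/") {
                res.writeHead(403); // asked to delete; the answer is still "no"
                res.end();
                return;
              }
              // ...perform the actual deletion here...
              res.writeHead(204);
              res.end();
            }).listen(8080);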

        • Regarding security: You, of course, always have a security setup anyway.
          E.g., you can only DELETE items that you created.

          >You probably don't want people to be able to delete content nodes on your server just by issuing a DELETE request, you'd want to POST a request to server-side code

          Well, it's always going to be handled by server-side code, even if it's a simple GET. In fact, the server doesn't even have to respond to a GET if you don't have the right security clearance.

          Say you have a project mana

      • Most browsers support DELETE and PUT through AJAX.

      • by ianare ( 1132971 )

        When implementing RESTful APIs, I've found this Firefox plugin to be quite useful. It allows you to use DELETE and PUT requests (amongst others) from your browser.

        https://addons.mozilla.org/en-US/firefox/addon/restclient [mozilla.org]

      • The "browser" doesn't "have" GET and POST. Those are used in the HTML forms. You can use PUT and DELETE just fine - but nobody does.

        • No, they are "in" the browser. First of all, only GET and POST are supported as values of a form's method attribute in HTML4 [w3.org].

          Secondly, the difference between GET, POST, PUT, etc. is not that the browser requests a URL and merely passes along the method value, whatever it is: "get", "put", "mickymouse", "goofy".

          Rather, the browser makes an entirely different type of HTTP request depending on the method.

          GET /path/to/file/index.html HTTP/1.0

          POST /path/script.cgi HTTP/1.0
          Content-Type: application/x-www-form-urlencoded
          Content-Length: 13

          say=Hi&to=Mom

    • Can't you POST an executionRequest?
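      Sure; one common pattern is to model the action as a "job" resource created with POST (a sketch, all names hypothetical):

        // POST creates a job resource representing the long-running action;
        // the Location header tells the client where to poll for progress.
        const xhr = new XMLHttpRequest();
        xhr.open("POST", "/reports/12345/jobs");
        xhr.setRequestHeader("Content-Type", "application/json");
        xhr.onload = () => console.log("job created at", xhr.getResponseHeader("Location"));
        xhr.send(JSON.stringify({ action: "regenerate" }));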

  • Will this work in IE 6?

    If IE 6 doesn't support it then I am not interested. We do not want to turn away 0.01% of our visitors, as that would cost hundreds!! Now get your ass back to work spending thousands to support these hundreds of dollars' worth of users.

  • Isn't that what TLS is for?

  • Any movement on hardware standards? For example, a GFX hardware interface? Any hope for an open GigE-like standard for cameras?
  • CA (Score:3, Interesting)

    by fa2k ( 881632 ) <pmbjornstad@noSPAm.gmail.com> on Thursday October 04, 2012 @05:36AM (#41546801)

    Please, can HSTS also get an option to limit the acceptable certificates for a domain?
    We have this:
    - There have been multiple breaches of CAs already.
    - Any CA can sign a certificate for any domain name.

    How about these options:
    - parent: accept any certificate that is signed by a certificate given in the "HSTS" header and stored on the user's system. Option to require a direct descendant.
    - direct: specify just one allowable certificate.
    - You can specify multiple alternative certificates in the "HSTS" headers.
    If the parent or direct certificate expired and the browser didn't know about an alternative, it would fall back to accepting any valid certificate. Thus, people who forgot to update their "HSTS" headers wouldn't be SOL. There could be another flag to reject servers which didn't have any HSTS headers, even after all known certs expired.

    Big companies could have an internal CA and require that as their parent; they would thus be completely immune to CA breaches. Small-time users could use the direct mode, and thus also be immune to all CA breaches. One could also set the CA root (e.g. VeriSign) as the parent, in which case they would be immune to all breaches except at the CA they chose, and it wouldn't require intervention unless they changed CA. My proposal should also work for self-signed certs, with the normal caveats.
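    For a rough idea, a TypeScript/Node.js sketch of what the "direct" check could look like on the client side (the header syntax is imagined here and the fingerprint is a placeholder):

      import * as tls from "tls";

      // Hypothetical pin previously delivered via the proposed "HSTS" header:
      // the SHA-256 fingerprint of the one allowed certificate.
      const PINNED_FINGERPRINT = "AA:BB:CC:..."; // placeholder

      const socket = tls.connect(443, "example.com", { servername: "example.com" }, () => {
        const cert = socket.getPeerCertificate();
        if (cert.fingerprint256 !== PINNED_FINGERPRINT) {
          // The chain may be valid per the CA system, but it isn't the pinned cert.
          socket.destroy(new Error("certificate does not match the pin"));
          return;
        }
        // Pin matched: proceed with the HTTP request over this socket.
      });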

    Now where do I post my suggestion? ;)

    • I would like to see multiple CAs; I don't know if this is possible now, because I've only ever seen single-cert configs on my old server.

      I'm less concerned with CA breaches than I am with con men, who can often easily buy CA certs. I think the local government should be a CA for every business that incorporates with them (have you seen the paper certificates they give? You could make them yourself, and the business ID numbers are not secure either...). It was harder to incorporate without showing a ton of legit identifi

    • by atisss ( 1661313 )

      You're getting redundant. How can you secure and verify the HSTS origin (to transfer info about the allowed CAs) if you don't know with whom you established HSTS in the first place (there is no authority that has signed it)?

      The current CA scheme works as it does because CA information is shipped with the browser; to replace that, there has to be some other means of transferring authority information (DNSSEC could theoretically be usable).

"What man has done, man can aspire to do." -- Jerry Pournelle, about space flight

Working...