Black Hat Presentation Highlights SSL Encryption Flaws 152

nk497 writes "Hackers at the Black Hat conference have shown that SSL encryption isn't as secure as online businesses would like us to think. Independent hacker Moxie Marlinspike showed off several techniques to fool the tech behind the little padlock on your screen. He claimed that by using a real world attack on several secure websites such as PayPal, Gmail, Ticketmaster and Facebook, he garnered 117 email accounts, 16 credit card numbers, seven PayPal logins and 300 other miscellaneous secure logins."
This discussion has been archived. No new comments can be posted.

  • Oh god (Score:4, Funny)

    by LordKaT ( 619540 ) on Thursday February 19, 2009 @11:54AM (#26917389) Homepage Journal

    Someone fix the summary before my brain melts.

    • Re: (Score:2, Offtopic)

      Well, if the hacker types like the submitter, I'm not too worried about my login credentials.
      • ...hackers and phishers ever take a third-grade English class.

        Typos, grammar errors, and awkward Google transalations probably do more to alert average users to scams than SSL certificate warnings.

        • Re:God forbid... (Score:5, Insightful)

          by Anonymous Monkey ( 795756 ) on Thursday February 19, 2009 @12:38PM (#26917961)
          Reminds me of the first lesson in hacking: Social Engineering is More Powerful than Passwords. Only the other way around. If you learn what hackers do, you can avoid them (most of the time). And if there is a Master Hacker who can dupe me, I doubt there is much I can do to stop him. Thankfully I'm not important enough to be a target.
          • Re: (Score:3, Funny)

            by Joebert ( 946227 )
            The only question is, are you attempting to look unimportant so you will be looked over, or because you know someone will look at that as a dead giveaway that you're trying to "look" unimportant because you actually have a lot to hide and want to draw attention to yourself because you actually are, unimportant?
            • Part time job as an accountant. Volunteer work with Deaf people the rest of the time. Live in a one bedroom apartment with my wife. Nope, not important. That is unless you like to harass people without money.
              • Re: (Score:3, Insightful)

                by Vellmont ( 569020 )


                Part time job as an accountant. Volunteer work with Deaf people the rest of the time. Live in a one bedroom apartment with my wife. Nope, not important.

                A hacking program designed to find exploits in your computer, at your ISP, or in any of the internet infrastructure between you and your destination doesn't really care how "important" you are. It only cares about the logic with which it was written.

                Your internet connection is only the easiest thing to steal. If an automated program (you can call it a "virus" or "worm" if

                • Good point, and as my ATM card has been eaten by the machine at the bank because my account got hacked perhaps I should restate the reason for my lack of concern.

                  I'm not very exposed to risk online. My bank account and credit cards shut down and lock up tight if they suffer a breach, and I tend to carry enough cash to get back home, so I'm not worried about that. If an odd charge shows up on my bank statement I go to the bank and they reverse it without any question (it's good to be old friends with m

          • Re:God forbid... (Score:5, Insightful)

            by Vellmont ( 569020 ) on Thursday February 19, 2009 @01:09PM (#26918465) Homepage


            Thankfully I'm not important enough to be a target.

            A common myth, based on a belief that "hacking" is done by some smart guy sitting around thinking about which "important person" to go after next.

            The answer (if you're smart enough and slightly lazy) is "why not everyone?" or at least "anyone that falls into the trap". An automated program doesn't really care who you are, if you're "important" or not. Only that it can trick you into losing some money. Personally I think that's why a lot of people fall for 419 scams.

            • Re: (Score:3, Interesting)

              by KillerBob ( 217953 )

              I think he was talking about people specifically trying to break into his box, presumably a server. Like him, I don't really think my server is that juicy a target. A determined hacker *can* break into it, no argument. But it's got enough of a deterrent in place, in the form of frequent updates by a sysadmin who subscribes to the mailing lists for all the software she's running, requiring SSH to log in, the non-existence of any remote administration tools except for SSH, only allowing one user shell access

            • by msimm ( 580077 )
              I think he's saying he's probably smarter than the average program. A targeted attack, unlike a lazier automated attack, still has a better chance of success. What you present is a low-hanging fruit argument.

                I think he's saying he's probably smarter than the average program.

                An attitude that can get you in a lot of trouble. Richard Feynman once said "I'm smart enough to know that I'm dumb." In other words, don't think you're so smart you can't be fooled.

                What you present is a low-hanging fruit argument.

                There's some of that to be sure. But anyone can get hacked, even by an automated program. Do _YOU_ check the security link every time you login to your bank website? The biggest problem with ALL of this damn s

                • by msimm ( 580077 )
                  Do _YOU_ check the security link every time you login to your bank website?

                  I do. And I certainly don't mean to suggest being smart == impervious; in fact I believe it's more likely that an educated user will avoid many of the low-hanging fruit scenarios precisely because of their increased vigilance.
          • Thankfully I'm not important enough to be a target.

            Bullshit. You don't get attacked for being important, you get attacked because you're there.

            • Right, I said "Master Hacker," the kind in movies that go after one person and then take over everything about them. Not the kind that sets up goofy phishing schemes.
        • Re: (Score:1, Offtopic)

          Transalations? How well did your third grade go? :p
    • Re:Oh god (Score:5, Funny)

      by gnick ( 1211984 ) on Thursday February 19, 2009 @12:10PM (#26917623) Homepage

      You simply misunderstood the summary - It's fine the way it is.

      Independent hacker Moxie Marlinspike showed off several techniques to get fool the tech behind the little padlock on your screen.

      "fool the tech" is a little bot that hides behind the padlock on your browser, watches what you're typing, and reports it back to Moxie. Moxie has several techniques for getting Fool behind the padlock. Why Moxie named the little tech Fool, I have no idea.

    • Re:Oh god (Score:4, Informative)

      by Lord Ender ( 156273 ) on Thursday February 19, 2009 @12:20PM (#26917755) Homepage

      OK: "Some implementations of SSL encryption are flawed. These can be fixed. SSL encryption itself is not flawed."

      • SSL is flawed, at least for the web. Usability studies have shown time and time again that the vast majority of people do not understand and will ignore the bad cert error dialogs. That's a pretty fundamental problem, and why Firefox now makes it really hard to bypass them, but all it's doing is putting a sticking plaster on a bullet wound.

        • Re:Oh god (Score:4, Insightful)

          by Lord Ender ( 156273 ) on Thursday February 19, 2009 @03:03PM (#26920187) Homepage

          You are absolutely wrong. SSL is not flawed. The UI browsers have implemented regarding SSL is flawed. The UI should make it clear to the users exactly where they are sending their information. It should also make it clear when they are submitting a password over plain text.

          • Re: (Score:2, Interesting)

            Firefox even has everything needed to defeat this already built in - it's just not enabled by default. By setting browser.identity.ssl_domain_display to 1 in about:config, it displays a blue strip left of the URL with the last two parts of the domain name, similar to the green strip with the registrant's name for EV certs.

            They should enable this by default, and whoops, the iiijk.cn attack described in the PDF is instantly obvious.

        • SSL is flawed, at least for the web. Usability studies have shown time and time again that the vast majority of people do not understand and will ignore the bad cert error dialogs. That's a pretty fundamental problem, and why Firefox now makes it really hard to bypass them, but all it's doing is putting a sticking plaster on a bullet wound.

          To put it bluntly: if you ignore a warning saying that you might be being scammed when visiting your banking site, the flaw is in your brain, not in SSL. Although I sup

      • Not even that. The issue identified in the conference talk isn't with SSL at all. It's with X.509 certificate validation.

        And to be precise, there is nothing wrong with certificate validation itself, just with the particular combination of (a) certificate authorities which erroneously issue certs which permit signing, and (b) broken implementations which don't check the X.509v3 constraints while traversing the certificate chain.
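
        For illustration, here is a minimal sketch of the check those broken implementations skip, written with Python's third-party cryptography package (not any browser's actual code, and omitting everything else real chain validation does):

        ```python
        # Sketch only: the basicConstraints check that vulnerable implementations
        # skipped. Real validation also covers signatures, expiry, key usage,
        # name constraints, revocation, and more.
        from cryptography import x509
        from cryptography.x509.oid import ExtensionOID

        def intermediates_may_sign(chain_pems):
            """chain_pems: PEM-encoded certificates, leaf first, root last."""
            certs = [x509.load_pem_x509_certificate(pem) for pem in chain_pems]
            for cert in certs[1:]:  # everything above the leaf must be a real CA
                try:
                    constraints = cert.extensions.get_extension_for_oid(
                        ExtensionOID.BASIC_CONSTRAINTS).value
                except x509.ExtensionNotFound:
                    return False  # no basicConstraints at all: don't treat it as a CA
                if not constraints.ca:
                    return False  # an end-entity cert was used to sign: the attack case
            return True
        ```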
  • Independent hacker Moxie Marlinspike showed off several techniques to get fool the tech behind the little padlock on your screen.

    What do you command, master...

  • Hacking (Score:3, Insightful)

    by eclectro ( 227083 ) on Thursday February 19, 2009 @11:57AM (#26917433)

    It's always about getting the fool.

  • by Seth Kriticos ( 1227934 ) on Thursday February 19, 2009 @12:02PM (#26917499)
    Come on, this does not highlight vulnerabilities of SSL, but errors in implementing it for specific platforms. This was always a weak point.
    • Worse than that, it seems to be about the vulnerabilities of not using SSL, not anything inherent in SSL itself.

      Unless I'm mistaken, the only vulnerability in redirecting all HTTP traffic to HTTPS is that users might type paypal.com, end up on https://paypals.com/ [paypals.com] and not notice.

      Sensationalist, as usual.

      Granted, most sites are still quite a bit worse off than that -- forms served over HTTP that send your password over HTTPS -- but as usual, there are simple workarounds. For example, bookmark https://mail.go [google.com]

  • Maybe the bad guys are busy elsewhere... wait...

  • by MadMidnightBomber ( 894759 ) on Thursday February 19, 2009 @12:06PM (#26917559)

    It's a problem with sites that start out with http://example.com/ [example.com] and then transition to https://secure.example.com/ [example.com].

    If I read it right, encrypt it all, turn off http except as a 301 redirect to https and you should be fine. Anyone confirm this?

    Course, you still should check the certificate is the one you're expecting.
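
    For what it's worth, the 301 part of that setup is trivial to sketch; a toy example using only the Python standard library (hostnames are hypothetical), with the caveat raised in the replies that a man in the middle can intercept or rewrite this redirect anyway:

    ```python
    # Toy sketch: an HTTP listener whose only job is to 301 every request to
    # the HTTPS version of the same URL. As the replies point out, a MITM who
    # controls the plain-HTTP leg can rewrite or suppress this redirect.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedirectToHTTPS(BaseHTTPRequestHandler):
        def do_GET(self):
            host = self.headers.get("Host", "secure.example.com")
            self.send_response(301)
            self.send_header("Location", "https://" + host + self.path)
            self.end_headers()

        do_POST = do_GET  # redirect POSTs too (their bodies are dropped)

    if __name__ == "__main__":
        HTTPServer(("", 80), RedirectToHTTPS).serve_forever()
    ```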

    • Re: (Score:2, Interesting)

      by Anonymous Coward
      They did say in the video they rewrite the http->https redirects, so I don't think that's the way. The only solution is to turn off HTTP completely, but that'd mean your users would have to type https:/// [https] to use port 443 and https all the time.
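
      To make the rewriting concrete: the core of what is being described is just string substitution on the plain-text leg. A deliberately over-simplified sketch (not sslstrip's actual code):

      ```python
      # Over-simplified illustration of the stripping idea (NOT sslstrip itself):
      # the proxy fetches the real page over HTTPS, then rewrites anything that
      # would upgrade the victim's plain-HTTP connection before relaying it.
      def strip_https_upgrades(headers, body):
          if "Location" in headers:  # downgrade an http -> https redirect
              headers["Location"] = headers["Location"].replace("https://", "http://", 1)
          # Rewrite links/form targets too; a real tool remembers which URLs it
          # rewrote so it can forward those requests to the server over HTTPS.
          return headers, body.replace(b"https://", b"http://")
      ```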
      • by zappepcs ( 820751 ) on Thursday February 19, 2009 @12:36PM (#26917947) Journal

        What exactly is wrong with that? I'm sure that someone can write a script for FF that will detect the error and automatically add the 's' and resend. People had to get used to typing http://www/ [www] in the first place. It's not such a huge jump to add the 's'.

        This is the same argument that I see with switching to Linux: oh, users will have to relearn things, it's different than Windows. Yet those same users have to relearn when they get a new cable box and remote. They have to relearn when they get a new microwave. They have to relearn when they get a new television. They have to relearn when they change banks, and on and on and on. It's a lame argument.

        In the end, users in general are uninformed, lazy, and lack the drive to become well versed in computer security. SSL encryption issues are hardly the biggest security flaw in computing today. The biggest security flaw is between the ears of the end user. SSL issues hardly register on the list of problems behind the spread of the most devastating malware we know about today.

        meh

        • People had to get used to typing http://www/ [www] in the first place. It's not such a huge jump to add the 's'.

          Most people don't actually type that "http://", as all browsers I've seen add it automatically. Including Lynx.

          It doesn't help that a lot of ads these days don't bother to include "http://" in the URLs they display, either.

      • If I turn http off for my domain, but users type in http, then your malicious hacker can intercept the http request (even though it would never succeed), and respond with a redirect to https.

        So turning off http does not solve this problem. It's still not a bug with SSL though.

    • by Qzukk ( 229616 ) on Thursday February 19, 2009 @12:27PM (#26917833) Journal

      It looks like there are a couple of things, but their main one is a man-in-the-middle attack based on the user not paying attention to the browser's SSL flags. See the difference between page 61 and 62 of their presentation: https://www.blackhat.com/presentations/bh-dc-09/Marlinspike/BlackHat-DC-09-Marlinspike-Defeating-SSL.pdf [blackhat.com]

      They show on page 69 how it looks once they substitute a lock image for the favicon (if they had wanted to be Extra Evil, they'd have given their fake favicon a blue background, which would have made firefox 3 look exactly like it was SSL protected, except for the S missing in the URL)

      They then proceed to show how allowing unicode in the hostname continues to confuse and confound people. Register a cert for *.foo.com, then set up a hostname of www.google.com[unicodeslashlike]login[unicodeslashlike]blah[unicodeslashlike]blah[unicodeslashlike]blah.foo.com and presto, you have a valid certificate for a site that looks more or less like https://www.google.com/login/blah/blah/blah.foo.com [google.com], except that it's not hosted by google.

      Basically all of these are attacks on the end user; what you do or don't do on the server won't change a thing.
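
      A small sketch of why the hostname trick reads so convincingly: only the trailing labels decide where the browser actually connects and what the wildcard certificate has to match, while the eye parses the whole string as a Google URL. (The hostnames below are made up, and U+2044 merely stands in for whichever slash-lookalike was actually used; current IDN rules reject many such characters.)

      ```python
      # Made-up example: what the user reads vs. where the browser connects.
      hostname = "www.google.com\u2044login\u2044accounts.attacker-example.com"

      # The registrable domain is decided by the trailing labels only...
      print(".".join(hostname.split(".")[-2:]))   # attacker-example.com

      # ...so a certificate for *.attacker-example.com is "valid" here, while the
      # address bar shows something that reads like a path on www.google.com.
      print("https://" + hostname + "/")
      ```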

      • They show on page 69 how it looks once they substitute a lock image for the favicon (if they had wanted to be Extra Evil, they'd have given their fake favicon a blue background, which would have made firefox 3 look exactly like it was SSL protected, except for the S missing in the URL)

        WTF? No. The box where that icon is shown in FF3 isn't 16x16 pixels. Having a blue background would look weird and out of place.

      • by naasking ( 94116 )

        Exactly. Of course, if you use the Petname Toolbar [mozilla.org], none of these attacks work.

      • by Khopesh ( 112447 ) on Thursday February 19, 2009 @05:22PM (#26921969) Homepage Journal

        My biggest gripe about these black hat papers is that they aren't as useful to non-black hats; there are no proposed solutions or workarounds.

        I think the most important trick in the paper is that first one you mentioned, of MITM translating server-side SSL to client-side plain-text and assuming the reader won't notice (or care). The easiest workaround is to get Firefox to return the yellow background [cnet.com]. You still have to train users to mentally require it, but it's a step in the right direction.

        On to the second hack you noted. The article specifically mentions that .com and several other top level domains (TLDs) are purposefully punycoded (see page 90). However, the logic is still sound and the actual TLD doesn't matter. The example Moxie used was *.ijjk.cn.

        A solution proposal (off the top of my head): In the specific case of IDN-valid characters that approximate slash and question-mark, the simple solution is to propose a feature in Firefox that recognizes them. Specifically, anything that appears to be forging a protected TLD, so punycoding IDN domains matching a regex like \w\W+(com|net|org)\W (and perhaps additionally a search for any of the proposed confusing characters), would cover a lot of ground. In the meantime, you could put the domain up in Firefox's blue SSL box [cnet.com].
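
        As a rough illustration of that regex idea (purely hypothetical; nothing like this shipped in Firefox), the heuristic would fire on hostnames where a protected TLD shows up mid-string wrapped in non-word characters:

        ```python
        # Hypothetical sketch of the heuristic proposed above: a protected TLD
        # appearing mid-hostname, surrounded by non-word characters, suggests the
        # hostname is trying to read like a full URL, so show punycode instead.
        import re

        SUSPICIOUS = re.compile(r"\w\W+(com|net|org)\W", re.IGNORECASE)

        def should_force_punycode(hostname):
            return bool(SUSPICIOUS.search(hostname))

        # U+2044 stands in for a slash-lookalike; domains are made up.
        print(should_force_punycode("www.google.com\u2044login.ijjk.example"))  # True
        print(should_force_punycode("www.google.com"))                          # False
        ```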

        The final vulnerability discussed in the paper (the first one in the paper's ordering) was that of standard certificates acting as intermediate certificates in the chain. This has an obvious solution and the paper even implies (but doesn't verify ... freaking black hats) that Firefox already has it implemented.

    • Re: (Score:3, Insightful)

      by Lord Ender ( 156273 )

      You are almost right. It is a combined flaw of both browsers and web site implementations. If just one of the two were flawed, it wouldn't be a major issue. But since both are, even security-conscious users are likely to get duped by this.

      So many engineering disasters rely on multiple little things going wrong simultaneously...

      • by mcgrew ( 92797 ) *

        "Disaster" is a pretty strong word. The Titanic was an engineering disaster, the Challenger and Columbia accidents were engineering disasters, that bridge that collapsed a few years ago was an engineering disaster, there was a type of cancer radiation machine a while back that was killing people because of a software bug that was an engineering disaster, the Ford/Firestone automobile rollovers were engineering disasters. To call getting hacked a disaster is a bit out of proportion.

        • If you read more carefully, you would notice that I did not specifically refer to this issue as a "disaster." My point was about small problems leading, unpredictably, to a much larger problem.

          However, since this could be used to empty the life savings of thousands of people, it has the potential to lead to disaster.

          • by mcgrew ( 92797 ) *

            That's a good point. When depleted life savings end in suicide, that is indeed a disaster.

    • Re: (Score:3, Interesting)

      by Deanalator ( 806515 )

      No, that will not fix this attack. I have not been able to find a copy of his tool online yet, but I am going to assume that he did it right.

      This tool should still be able to pull down the HTML from the https website and present it to the user as an http site. No amount of javascript, HTTP redirects, or a href="https:// ... is going to save you in this case. The MITM proxy is always going to be able to strip any of that out, and replace it with something that keeps the clear session alive.

      The way to


      • Once firefox has visited a website using SSL, firefox needs to automatically connect to SSL, and never trust unencrypted data from that site again.

        There's at least one problem with that approach. The one I can think of off the top of my head is that the initial landing site might be http only, and the login site https. So your browser goes to http://www.nameofmybank.com/ [nameofmybank.com], and you click on a link to https://login.nameofmybank.com/ [nameofmybank.com]. If the browser only cares about the whole site name, it'll only go to the http site w
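
        A tiny sketch of the "never trust plain HTTP again" policy being debated (a hypothetical client-side store, keyed on exact hostname, which is precisely why the www. vs login. objection above bites):

        ```python
        # Hypothetical client-side store of hosts already seen over HTTPS. Because
        # it keys on the exact hostname, http://www.nameofmybank.com and
        # https://login.nameofmybank.com are unrelated as far as it knows.
        from urllib.parse import urlsplit

        https_only_hosts = set()

        def record_response(url):
            parts = urlsplit(url)
            if parts.scheme == "https":
                https_only_hosts.add(parts.hostname)

        def rewrite_request(url):
            parts = urlsplit(url)
            if parts.scheme == "http" and parts.hostname in https_only_hosts:
                return "https://" + parts.netloc + parts.path  # refuse to downgrade
            return url
        ```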

    • Re: (Score:3, Interesting)

      by Vellmont ( 569020 )


      If I read it right, encrypt it all, turn off http except as a 301 redirect to https and you should be fine. Anyone confirm this?

      Not really. You've only shifted the problem into one of intercepting and modifying the 301 redirect, from intercepting the individual links.

      You could turn off http entirely, but then you'll get the vast majority of people complaining that your website doesn't work (hell, including me really).

      This is really a browser problem, and a user problem. One way to fix this wou

      • by XanC ( 644172 )

        That sounds like a good idea. There could be a DNS option on the domain for secure-only. Of course DNS has had its own issues, but every bit helps.

    • Re: (Score:3, Informative)

      by AusIV ( 950840 )
      The problem is that a MITM can modify that 301 redirect from https://secure.example.com/ [example.com] to http://secure.example.com/ [example.com]. Since they're in the middle of the transaction, they capture your packets and encrypt them before forwarding them on to https://secure.example.com/ [secure.example.com]. The only indicator that something is amiss is the lack of an 's' in the protocol, which lots of people won't notice.

      Alternatively, he might redirect from https://secure.example.com/ [example.com] to https://secure.example.com/.ijj.cn [example.com], except that the slash

    • As others have mentioned, sslstrip already handles any redirects you do. The user would have to explicitly type 'https://' every time. Further, there are certain things that are just no good over SSL. For instance, caching proxies aren't supposed to cache SSL connections. Doing everything over SSL sounds nice, but doesn't really work in practice.

      The first use of sslstrip was against implementations that didn't do enough checking on a chain of certificates. Some implementations still don't do it right, an

  • by russotto ( 537200 ) on Thursday February 19, 2009 @12:17PM (#26917709) Journal

    If you don't implement the security, you're not secure. The author claims that some browsers don't check to see that an intermediate certificate is actually authorized to sign other certificates. So naturally there's a simple attack based on that, but it doesn't really show a flaw in SSL.

    The author also complains about companies which post secure forms on non-secure pages, which is a valid complaint but is also a case of "You're using it wrong" rather than a problem with the protocols. Most users are never going to check for the lock (or whatever), so the basic problem will be with us forever, but banks don't have to make it worse by putting login forms on non-secure pages. Yes, it's convenient to have a login on a home page, and yes it would consume too many resources to make every home page hit into an https hit, but security ought to count for something, particularly with a bank.

    • No, the problem does NOT have to be with us forever. If browser makers simply gave pop-ups whenever a form with a password control were submitted: "Do you really want to send your password to asdfasdf.cn?" for ssl or "You are sending a password unencrypted! It could be intercepted by hackers. Are you sure you want to do this?" for http, then the problem would go away.

      • by MobyDisk ( 75490 )

        But with his trick of using SSL + unicode characters, it would say:
        "Do you really want to send your password to https://www.google.com/SecureLogin.asp?ijkll [google.com]"
        Which looks perfectly valid.

      • And let's put a checkbox underneath that says 'Do not show this message in the future' so we don't annoy the hell out of all of our users!

        Oh wait, browsers already do that. And then users check the box. And then they get suckered into these sites....


        • Sorry, but which browsers warn users about sending POST variables from password fields over unencrypted channels?

          Maybe you are thinking of the "you are leaving an encrypted web page" warning that IE does.

          • Not password fields per se, but any unencrypted POST can result in a warning dialog with pretty much any popular browser.

            Firefox 3.x has the config on the Security tab in the options dialog...the "Warnings" section.

            IE has the config on the Security tab in the options dialog...click the zone you want to affect, then click "Custom Settings" and find the "Submit non-encrypted form data" under "Miscellaneous".

            As you can tell from my descriptions of how to find these, they aren't that "in your face", so maybe th

      • by Tx ( 96709 )

        I can't tell if this is humour or not. Are you on the Microsoft UAC team, or are you having a laugh?

      • by jonaskoelker ( 922170 ) <`jonaskoelker' `at' `yahoo.com'> on Thursday February 19, 2009 @01:47PM (#26919043)

        If browser makers simply gave pop-ups

        No. No no no! Death to pop-ups.

        And here's why: they interrupt you in what you're trying to do. If they surprise you, you feel less in control of your environment, which is bad (see http://en.wikipedia.org/wiki/Learned_helplessness [wikipedia.org] and http://en.wikipedia.org/wiki/Locus_of_control [wikipedia.org]). If they don't, they're pointless, because you'll already know in advance what your answer is going to be, so why can't you just tell the program what your answer is when you tell it to go do whatever made it interrupt and annoy you?

        A better solution is the slide-down bar which you probably know from using firefox. Instead of being in your way, it steals a little screen real estate near the edge and uses a color to tell you "you might want to pay attention here" without being in the way of what you really want to look at. Something similar happens when gedit and evince encounter an error.

        They're much better than pop-ups, in the cases where you have enough room for the text you need to display to the user.

        But you-the-browser probably should tell the user "Your password will be sent to $OTHER_DOMAIN. This is likely to be a security problem", so use a slide-down bar for this.

      • by dave562 ( 969951 )
        Browsers already do that. The first time you use IE or Firefox and submit form data over HTTP it will tell you something to the effect of, "You are about to submit unencrypted data over the Internet where it might be intercepted. Do you want to do this?" Then right under that, there is a check box to disable the feature.
        • What I propose is specific to password controls, and would not have a 'permanently disable' button.

          Would this annoy users? Yes. Would it save them from being hijacked? Yes. This is called a trade-off.

          • Until they installed the Firefox extension that turns this feature off, and if that can't be done, until they download a version of Firefox with this hacked out.

            People care more about a smooth experience than they do about security, which is why Microsoft will never do this, especially after the UAC debacle.

            • You've got it wrong again. UAC annoyed people every time they did things they were supposed to do. That's not the same thing. People aren't supposed to send passwords in plain text. People don't actually mind getting alerted when they do something they are not supposed to do... example: lights/gates at railroad crossings.

              You are essentially arguing that there is no point to having SSL at all, because nobody cares who has access to their credit cards and passwords. That's incorrect.

        • by bugg ( 65930 ) *
          Perhaps what browsers should do is have a separate class of errors for whenever there's a password field in the form. Given how often people google, comment on blogs, or what-have-you, I'm not about to tolerate an additional click for every POST. But I will tolerate an additional click for every POST where one of the fields was a password.
      • by redJag ( 662818 )
        Pop-ups? You mean those annoying things I have to click OK to before my computer will do what I want? Putting those in doesn't make the problem go away, just shifts the liability even more on to the user.
  • by gzipped_tar ( 1151931 ) on Thursday February 19, 2009 @12:40PM (#26917997) Journal

    One of the claims from the presentation (linked in TFA: https://www.blackhat.com/presentations/bh-dc-09/Marlinspike/BlackHat-DC-09-Marlinspike-Defeating-SSL.pdf [blackhat.com], PDF file) is "people don't type https:/// [https]" -- they reach SSL-enabled URLs either by submitting a form (from a non-SSL page!) or as the result of an HTTP redirect. And "that has made all the differences" according to the hacker.

    Maybe we need a special TLD for HTTPS-only traffic. Let's say ".s". For a given URL, if the hostname is in the ".s" domain but the protocol part is not "https:" (or another secure protocol), then the URL is invalid by standard. A browser should be mandated to use HTTPS for such a host if the URL is given incomplete (e.g. the user typing "example.s" rather than "https://example.s/" in the Awesome Bar). It should also refuse to use a non-secure protocol for a ".s" site during any phase of communication, even if one is available.
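
    A sketch of what that rule would mean for URL handling (the ".s" TLD is the poster's hypothetical, as is everything below):

    ```python
    # Sketch of the hypothetical ".s" rule: hosts under ".s" may only be used
    # with secure schemes, and an incomplete URL for such a host defaults to https.
    from urllib.parse import urlsplit

    def normalize(url_or_host):
        if "://" not in url_or_host:
            scheme = "https://" if url_or_host.split("/")[0].endswith(".s") else "http://"
            url_or_host = scheme + url_or_host
        parts = urlsplit(url_or_host)
        if parts.hostname and parts.hostname.endswith(".s") and parts.scheme != "https":
            raise ValueError("URLs for .s hosts must use a secure scheme")
        return url_or_host

    print(normalize("example.s"))      # https://example.s
    # normalize("http://example.s/")   # raises ValueError by design
    ```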

    I don't think this idea is good enough, but it's the first thing that comes to mind.

    Also I'd like to know more about another exploit mentioned in the presentation: the failure to check the "Basic Constraints" field of an SSL cert. Is Firefox vulnerable?

    • Re: (Score:3, Interesting)

      by imemyself ( 757318 )
      I kind of agree with you about having something in DNS to tell the client that it must use SSL. When I read through the Powerpoint, I was wondering about using TXT records, or SRV records or some other type of DNS records to tell the client that it must connect using SSL.

      I wonder how practical this would be? I think it would be easier to "bolt-on" than using a new TLD, but would it be more vulnerable to DNS spoofing than using a new TLD?
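
      For instance, a hedged sketch of the TXT-record variant, using the third-party dnspython package and a completely made-up record convention (and, as the replies note, DNS answers can themselves be spoofed):

      ```python
      # Made-up convention: a TXT record "https-only=1" on the domain tells the
      # client to refuse plain HTTP. Uses the third-party dnspython package.
      import dns.resolver  # pip install dnspython

      def requires_https(domain):
          try:
              answers = dns.resolver.resolve(domain, "TXT")
          except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
              return False
          return any(b"https-only=1" in rdata.strings for rdata in answers)
      ```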
      • It can start as a specially-formed TXT and transition to its own field, like SPF did. DNS spoofing is its own problem; if they own DNS they have you anyway.

        • DNS spoofing is its own problem; if they own DNS they have you anyway.

          To some extent that's true, but at least in theory proper SSL could warn the user that something's wrong, because the cert the attackers have wouldn't be valid.
          • by XanC ( 644172 )

            Well, right, but I think that's an orthogonal problem to the one that an SSL-only DNS option would solve.

  • The same guy. (Score:2, Informative)

    by Anonymous Coward

    This is the same guy who published the infamous basic constraints IE vulnerability [thoughtcrime.org] a few years ago. His website and the software is www.thoughtcrime.org [thoughtcrime.org]

  • by master_p ( 608214 ) on Thursday February 19, 2009 @12:49PM (#26918137)

    End-to-end encryption is required at all levels of the internet. Until that is available, the internet will never be secure, because someone will be able to read the non-encrypted data you send and reply with a fake response.

  • People click fake urls in their email, and provide their bank credentials to phishers.

    The problem is that the bank's website forces the user to authenticate themselves, but currently there's no mechanism to force the website to authenticate itself to the user.

    The solution: Smart Cards (e.g. Credit Card with a chip) and Smart Card readers. Or a USB device doing the same; i.e. a fob

    Of course the banks will have to spend a few bucks to provide that to their customers; currently it probably is cheaper for the b

    • by Joebert ( 946227 )
      What's the difference between data sent from the keyboard and data sent from a smart card?

      If it still has to be transferred, it doesn't matter what peripheral created the signal.
      • Re: (Score:2, Insightful)

        by gilado ( 1197463 )

        RSA encryption and authentication is the difference.
        You can't expect the user to do RSA authentication using a keyboard.

        But the chip in the smart card does exactly that.
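
        A minimal sketch of the challenge-response the card's chip performs, using Python's third-party cryptography package in place of actual smart-card hardware (key handling here is for illustration only, nothing like a real deployment):

        ```python
        # Illustration of challenge-response: the bank sends a fresh random
        # challenge, the card signs it with a private key that never leaves the
        # chip, and the bank verifies with the public key it has on file. A
        # phisher who only captured a password has nothing they can replay.
        import os
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import rsa, padding

        card_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        bank_public_key = card_key.public_key()          # enrolled with the bank

        challenge = os.urandom(32)                       # bank -> customer
        signature = card_key.sign(challenge, padding.PKCS1v15(), hashes.SHA256())

        # Raises InvalidSignature if the response wasn't produced by the real card.
        bank_public_key.verify(signature, challenge, padding.PKCS1v15(), hashes.SHA256())
        print("challenge verified")
        ```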

    • by dave562 ( 969951 )

      Fraud is so prevalent that the banks have written it off as a cost of doing business. I had my account compromised a couple of months ago. I called the bank on the same day that I noticed the fraud and by the end of the day they had credited my account for the fraud, opened an investigation, and set up a new account for me. I didn't even need to redo any of my direct deposits, or automatic billing, because it all transferred over to the new account. Wells Fargo calls it a "lost stolen transfer". I'm sure that

  • I'm going back to pen and paper to send mail, then there's no encryption for hackers to break!
  • by Garse Janacek ( 554329 ) on Thursday February 19, 2009 @01:53PM (#26919133)

    SSL encryption isn't as secure as online businesses would like us to think.

    What? I mean, are online businesses down in their underground lairs, laughing at the misinformation perpetrated on an unsuspecting public? "Hah! They believe that SSL encryption is secure!"....

    Maybe it should be "...isn't as secure as online businesses would like it to be." I think that it is in the interests of businesses as well as their customers for SSL transactions to remain secure. We can address incompetently implemented security protocols without treating it like a conspiracy on the part of the sites...

    • by daemonburrito ( 1026186 ) on Thursday February 19, 2009 @08:55PM (#26924141) Journal

      It's not a conspiracy theory. It appears that a lot of businesses have concluded that occasionally eating the loss on a fraudulent transaction is cheaper than fixing problems.

      Maybe it should be "...isn't as secure as online businesses would like it to be."

      If they "would like it to be" secure all they would have to do is spend more money on their infrastructure to encrypt everything. So, while it's not a "conspiracy", users who trust sites like paypal or their bank should be upset that these businesses have decided that security is too expensive. Users should be upset that big sites that handle money have decided that it is cheaper to wait for you to notice that money is missing, contact them, and then credit your account (maybe). And if you don't notice, well... it's not their responsibility.

      I think that it is in the interests of businesses as well as their customers for SSL transactions to remain secure.

      I would think so, too. However, people who run these companies' IT appear to have come to a different conclusion: Spend a certain amount of money on a somewhat secure system, and then put the responsibility on the customer to notice fraud. If noticed, credit the customer's account. Since the problems with mixing secure and non-secure elements have been known and exploited for years, we can conclude that these companies have done their cost-benefit analysis on the current way of doing things and found it to be acceptable.

  • by 93 Escort Wagon ( 326346 ) on Thursday February 19, 2009 @02:30PM (#26919677)

    I remember a few years ago banks (and others) were trying to "educate" people that there was no need to force https connections on their main pages for login purposes. Their explanation was "our login forms submit to a processing script that runs on https, so there's no problem". Well, one thing Moxie demonstrated is an effective way to attack this exact sort of situation via MITM.

    I do take issue with his statement "no one types in https (or http for that matter)". With many people he's correct; but I know I do pay attention to this, and I try to get my family and friends to do so as well. Also (especially nowadays) people need to start paying attention to whether they're in situations where MITM is made much easier, such as on unencrypted wireless networks.

  • This is actually not a problem with SSL. It's a basic flaw in the design of X.509 (the certification spec that SSL uses), and has been known and talked about from the beginning. You have critical policy information (e.g., the "basic constraints" certificate extension) being expressed in a credential, but the consumer of that credential may or may not interpret the information correctly. The lesson here (which gives the lie to all the PKI hype) is that you cannot separate certification policy from applica
  • Mobile phone companies are looking to do exactly this to HTTPS traffic transiting the GPRS network:

    http://blog.masabi.com/2009/01/how-do-transcoders-affect-https.html [masabi.com]

    It won't be long before ISPs that provide dial-up connections do the same with their "web accelerator" products.

    Oh, and Opera Mini does this as a matter of course.
