Security

Full-Disclosure Wins Again 122

twistedmoney99 writes "The full-disclosure debate is a polarizing one. However, no one can deny that disclosing a vulnerability publicly often results in a patch — and InformIT just proved it again. In March, Seth Fogie found numerous bugs in EZPhotoSales and reported them to the vendor, but nothing was done. In August the problem was posted to Bugtraq, which pointed to a descriptive article outlining numerous bugs in the software — and guess what happens? Several days later a patch appears. Coincidence? Probably not, considering the vendor stated "...I'm not sure we could fix it all anyway without a rewrite." Looks like they could fix it, but just needed a little full-disclosure motivation."
  • by InvisblePinkUnicorn ( 1126837 ) on Wednesday August 15, 2007 @11:46AM (#20237481)
    A bug only exists if the public knows about it.
    • Re: (Score:3, Insightful)

      by Billosaur ( 927319 ) *

      Incorrect. A bug exists if a bug exists. A bug only gets fixed if the public knows about it, specifically the computer savvy segment of the population, since the average user can't tell a bug from a feature.

      • My statement was made from the point of view of a company. I thought that was obvious.

        I thought wrong.
        • by xappax ( 876447 ) on Wednesday August 15, 2007 @12:48PM (#20238253)
          It's unfortunately not that hard to imagine that your sarcastic remark was serious - we constantly hear the same sentiment echoed very seriously in relation to computer security, electronic voting machines, even terrorism and criticism of the Iraq War.

          Sadly, we live in a world where most people in power actually believe that anyone who points out problems is just as bad as someone who causes and exploits problems.
          • Re: (Score:3, Insightful)

            by TubeSteak ( 669689 )

            Sadly, we live in a world where most people in power actually believe that anyone who points out problems is just as bad as someone who causes and exploits problems.
            Look at it from their point of view:
            Anyone who points out problems is creating a problem.

            A lot of times, if you don't officially know about it, you don't have to officially do anything about it.
            • Re: (Score:2, Insightful)

              by dmpyron ( 1069290 )
              Except that they officially knew about the problem, assuming he had taken the time to sign his email. When they said they didn't know if they could fix it without a major rewrite, that was a tacit admission that they had known about it.

              At least he went to the company first and sat on it for a while. Lots of people publish first, then notify the maker. That definitely makes him a white hat in my book.
            • by mgblst ( 80109 )
              Congratulations, you just got the point he was trying to make. Is that really worth a post, though?

              This all comes down to responsibility, and the simple fact that people got into positions of power by avoiding taking any responsibility for things that have gone wrong. All the people at the top of most governments and organisations are masters at avoiding responsibility.
          • by Thuktun ( 221615 ) on Wednesday August 15, 2007 @03:10PM (#20240095) Journal

            Sadly, we live in a world where most people in power actually believe that anyone who points out problems is just as bad as someone who causes and exploits problems.
            NARRATOR: Fortunately, our handsomest politicians came up with a cheap, last-minute way to combat global warming. Ever since 2063 we simply drop a giant ice cube into the ocean every now and then. Of course, since the greenhouse gases are still building up, it takes more and more ice each time. Thus solving the problem once and for all.

            GIRL: But--

            NARRATOR: Once and for all!
          • Re: (Score:1, Funny)

            by Anonymous Coward
            Sadly, we live in a world where most people in power actually believe that anyone who points out problems is just as bad as someone who causes and exploits problems.

            IMHO, knowing about specific software flaws is an advantage to everyone but the company that makes the flawed software. The people in power only "think" the way you describe because they get their power from the same companies that lose out when someone finds a flaw in that company's software.
            Hand a policeman a $20 bill and help you ar
        • by mrchaotica ( 681592 ) * on Wednesday August 15, 2007 @12:58PM (#20238373)

          He fell into the sarchasm.

      • by Morgor ( 542294 )
        That is probably the closest to the truth. However, I keep wondering whether, by releasing the full details of a bug or security hole to the public, you force the developers to make patches that fix that specific exploit but leave the underlying hole open, thus protecting the software from script kiddies browsing through security sites in search of exploits, but not from being cracked again in the future. What if the company did have some truth in saying that the bug could only be thoroughly fixed by rewriting the softwa
        • Sure, but the patch buys them time so that they can fix the actual hole. Usually a hole is a sign of a bigger problem, and certainly any developer would want to rewrite vulnerable sections to close the hole up permanently. Of course, the other issue is the development cycle: if you're coming out with a new version of the software, do you really want to invest that much time in rewriting the old code to eliminate the bug? Probably not. You'd want to patch the hole and then make sure it did not recur in

    • Re: (Score:2, Funny)

      A time bomb doesn't exist if it hasn't exploded yet.
  • by toQDuj ( 806112 ) on Wednesday August 15, 2007 @11:48AM (#20237503) Homepage Journal
    I believe there is a system that forces a company into action if it delivers faulty products.

    Why then, should software be any different? Do we have to force companies to take action once a bug is submitted to them?

    B.
    • The difference (Score:4, Insightful)

      by InvisblePinkUnicorn ( 1126837 ) on Wednesday August 15, 2007 @11:50AM (#20237529)
      Somehow I don't think that too many lives are being put at risk if EZPhotoSales has a bug in its software. Now a seat buckle on a car, that's a different story...
      • Re: (Score:3, Interesting)

        by toQDuj ( 806112 )
        Sure, seatbelts are a prime example, but I've also seen recalls for much more mundane stuff, such as Ikea furniture and kiddie toys. A bug in software could really cause problems, albeit probably indirectly.

        B.
        • Many software products contain many bugs. Because software systems contain so many lines of source code, you can be almost certain there are bugs. This is especially true if languages like C++ are used to create relatively undemanding applications. Many, many of these bugs will never show up; if they were going to, they would have been discovered during testing. And if they do show up, they may not do much harm. For example, a memory leak or buffer overflow in a graphics application won't matter too much.

          I don't thi
          • Many software products contain many bugs. Because software systems contain so many lines of source code, you can be almost certain there are bugs. This is especially true if languages like C++ are used to create relatively undemanding applications. Many, many of these bugs will never show up; if they were going to, they would have been discovered during testing. And if they do show up, they may not do much harm. For example, a memory leak or buffer overflow in a graphics application won't matter too much.

            Yeah, that's the reason the highly important security-bugs do not exist: if they were important, they would have surfaced during the testing phase.

            • Re: (Score:3, Insightful)

              by owlstead ( 636356 )
              Those "highly important security-bugs" will most likely be found in OS or server components. Sure, that's a lot of software, but they aren't general purpose applications. I tried to exclude those products by writing "undemanding applications" and "memory leak or buffer overflow in a graphics application". So either I was still not clear enough or you were misunderstanding me for some other reason.

              Buffer overflows and such tend to not surface in normal applications, because you would have to go out of your w
          • by toQDuj ( 806112 )
            But in a way, a piece of software is just like a complicated design document for an actual product. Faults could slip into either. I think programs should be more like Volkswagen designs than Yugos: instead of only looking at the exterior look and feel of the software, more attention needs to be paid to designing the internal components well.

            That means software companies should put more manpower into coding beyond the outer shell. Well thought-out interior functions are just as importa
            • I won't argue with that; you got right to the point. Nowadays I do sometimes program "out of hand". The thing is, if the software is small enough, or when there is time to redo it from scratch, programming out of hand makes some sense. This is also the major reason why I am not programming (supporting, but not programming) open source solutions. The reason is that the design is not really available most of the time, and it takes way too much time to figure everything out from source. I know there are people that ca
        • Re: (Score:1, Interesting)

          by xmarkd400x ( 1120317 )
          The difference might not be as big as you think. Most hardware that can harm people has a very specific application. If you modify it or use it outside of its intended use, the manufacturer has no liability. For instance: your seatbelt won't fit around your belly, so you cut it and sew some cloth in to make it longer. You get in an accident, and you die because the seatbelt broke. This is by no means the fault of the manufacturer. Now, how this applies to software: If software was to become liable for
      • by Hemogoblin ( 982564 ) on Wednesday August 15, 2007 @01:22PM (#20238689)
        I was thinking of moderating, but I'll reply instead:

        It's possible to be injured in ways other than just physically. What about fraud and identity theft? It could be very damaging to thousands of people if one of the software applications that your company is using has flaws that allow fraud or identity theft to occur on a massive scale.

        To quote "Going Postal" by Terry Pratchett: "You have stolen, embezzled, defrauded, and swindled without discrimination. You have ruined businesses and destroyed jobs. When banks fail, it is seldom bankers who starve. In a myriad small ways you have hastened the deaths of many. You do not know them, but you snatched bread from their mouths and tore clothes from their backs."

        There's a reason why fraud and theft can carry as harsh a punishment as assault. (In Canada, at least.)

        Maybe EZPhoto Editor isn't going to put anyone at great risk if it fails, but I'm sure you could think of some software that might.
        • by Afecks ( 899057 )

          It's possible to be injured in ways other than just physically. What about fraud and identity theft?
          If you're driving along, hit a bump, and your trunk flies open, ejecting all your personal belongings into the street*, I think you'll have a hard time suing for those damages. Even if some identity thief finds your wallet and ruins you financially.

          *I'm not implying you live in your car, just for the sake of argument.
      • What if the security vulnerability ended up compromising the personal information of someone in the witness protection program?

        Certainly that would qualify as a life at risk.
      • Re: (Score:3, Insightful)

        by db32 ( 862117 )
        OK, let's up the stakes. MS loves to tout its big successes, such as its claims about how machines running Windows run the stock market. Boy, I bet it would suck if they got hit with something obscure and nasty. Not enough? OK, let's go up again. There are tons of medical devices that run software, from home-grown code to Windows. Now...you can't go blindly patching a Windows box that runs complex medical equipment without intensive testing, not like "will it cause problems with our 'mission critical' i
    • by hateful monkey ( 1081671 ) on Wednesday August 15, 2007 @12:07PM (#20237729)
      The biggest reason this wouldn't work well right now is that there are so many pieces of software written by small companies that couldn't afford a massive change in liability laws. This would turn software into a business that needs an enormous amount of money to enter, which would essentially destroy small startups and leave the business to large, well-funded corporations. Open source software would never be usable outside of a very narrow range of applications that present little to no legal liability, unless a large company were willing to absorb their liability costs (insurance, etc.).

      As it stands, even Microsoft states in its EULA that it does not warrant Windows or Office to be good for any purpose. If every student or business person could sue Microsoft for losing their important document minutes before their presentation, even Microsoft, with their billions in the bank, would not be able to stay in business long.

      In addition, the reason companies fix publicly disclosed bugs is not liability; it is that a known bug makes them look bad to prospective customers. If they had to worry about the sort of liability you are talking about, they would be hesitant to fix any bug that didn't open them to a lawsuit, just in case the FIX created an issue they could be sued for.
    • by Anonymous Coward
      Here is a PERFECT example where
      a) change was needed
      b) public was unaware
      c) individual wanted change
      d) individual alerted a portion of the public
      e) change was made.

      No lawyers, no State, no violation of freedoms, no taxes, no fines.

    • Call me sceptical (Score:3, Interesting)

      by RingDev ( 879105 )
      I'm not familiar with the software in question, but are they saying that the company did nothing for months, then the vulnerabilities were posted publicly, and in less than 7 days the company became aware of the post, tested the vulnerabilities, designed a solution, corrected the code, and had a software update tested and ready for deployment?

      If so, that is some AMAZING response time. But I would venture a guess that they had already been working on the corrections. The public posting may have made
  • by Ckwop ( 707653 ) * on Wednesday August 15, 2007 @11:49AM (#20237521) Homepage

    In the threat-models used by cryptographers, the attacker is assumed to know everything except cryptographic keys and other pre-defined secrets. These secrets are small in number and small in size. Their size and their limited distribution means we can trust protocols based on these secrets.

    Software that is used by millions of people is the very antithesis of a secret. Compiled source is routinely reverse engineered by black hats. Web sites are routinely attacked using vectors such as SQL injection. In short, you can't assume that any of the source code is secret. Taken to its logical conclusion, you must therefore assume the worst; that the black-hats know of far more bugs than you do. In fact, strictly speaking you assume they know every bug that exists in your software.

    In light of adopting such a severe threat model, the argument over full disclosure is a non-debate. Black hats with sufficient resources probably already know of the bug. The only people aided by disclosing it widely and publicly are the people who run the software, who can take evasive action. In contrast, you have only told the black hats what they already knew.

    Simon

    • It's still a debate. Business is about likelihood not absolute truth and definitely not the idealized world that cryptographers make up. Sure, if someone tells you about a bug in your software you risk your software being responsible for damages to your customers. That's a potential cost. If you fix it, that's also a cost. Perhaps you simply disagree with the company's assessment of the relative costs. Keep in mind, from the company's viewpoint, for the bugs to have a true effect, someone has to do so
    • by Otter ( 3800 ) on Wednesday August 15, 2007 @12:17PM (#20237905) Journal
      Taken to its logical conclusion, you must therefore assume the worst; that the black-hats know of far more bugs than you do. In fact, strictly speaking you assume they know every bug that exists in your software.

      But that's a ridiculous assumption! It makes sense in the context of cryptography research, but you're turning it into an assertion that publicizing software vulnerabilities doesn't have any negative consequences, which is absurd. There *are* two genuine conflicting sides here and you can't just wave one of them away.

      • by Ckwop ( 707653 ) * on Wednesday August 15, 2007 @12:39PM (#20238145) Homepage

        But that's a ridiculous assumption! It makes sense in the context of cryptography research, but you're turning it into an assertion that publicizing software vulnerabilities doesn't have any negative consequences, which is absurd. There *are* two genuine conflicting sides here and you can't just wave one of them away.

        It's a ridiculous assumption until you try to work out how you can usefully weaken it! Ask yourself this: how do you know how good the attacker is? They're not going to share their successes with you; in fact, they will probably never make contact with you.

        You are only as strong as your weakest link, but with the vast distribution that's possible these days you have to expect to be up against the very best attackers. So what, then, is the plausible attacker you're meant to be up against?

        Incidentally, this is why cryptographers choose such a harsh threat-model in which to place their protocols and ciphers. Only by designing against an attacker who is effectively omniscient can you truly get security. You need to look no further than Diebold to see what happens when you don't do this.

        Sure in the real world, disclosing vulnerabilities has an impact! Of course it does, but to say it decreases the security of the users of the software is simply nonsense. It may well do in the very short term, but in the longer term it is absolutely vital that full disclosure occurs if security is to improve.

        Simon

        • by Otter ( 3800 )
          Sure in the real world, disclosing vulnerabilities has an impact! Of course it does, but to say it decreases the security of the users of the software is simply nonsense. It may well do in the very short term, but in the longer term it is absolutely vital that full disclosure occurs if security is to improve.

          Yes, that'd be the entire point! When you're talking about the field of cryptography research that calculation is obvious. But users of software can't be expected to put up with increased vulnerability

        • I think a very old quote sums up the response to this quite well: Security through obscurity is NOT security.
      • How many? (Score:3, Interesting)

        by benhocking ( 724439 )

        There *are* two genuine conflicting sides here and you can't just wave one of them away.

        I can count at least 3, and I wouldn't be surprised if there are a lot more. Between only telling the company about a discovered security flaw and immediately announcing it to the entire world is a whole range of possibilities. To name a few:

        • Initially tell only the company. If they do nothing, then release it to everyone.
        • Initially tell only the company, but tell them that you will release it to everyone in X days
        • There *are* two genuine conflicting sides here and you can't just wave one of them away.
          I can count at least 3, and I wouldn't be surprised if there are a lot more.
          That's ridiculous! This is slashdot, where there are only 2 ways to do something... your way, and the wrong way.
          • by mgblst ( 80109 )
            Maybe you are trying to be funny, but the real problem is people who treat Slashdot like it is a singular entity. There are a large number of people who have difficulty dealing with complex systems, like Slashdot. These people see the world broken up into two distinct groups: me and not me. You are making this mistake with Slashdot. Every story I have read here has consisted of a variety of different opinions - but this is more complex to argue against. Much easier to reduce the world to two, and
            • I was half-heartedly trying to be funny, since my comments were in response to the first
              somewhat reasonable post I came across whose author could actually see that it's possible
              to have some middle ground on the whole full disclosure issue.

              Maybe I should have just posted "You must be new here" and basked in the +5 funny
              • by mgblst ( 80109 )
                Ah, OK. Simple rookie mistake, then. The trick to writing something funny is: don't forget to include the funny bit! What you did was not much different from the "you must be new here" meme; it's just the "Slashdot is black and white" meme.
      • Re: (Score:3, Interesting)

        I went back and looked at some statistics from my Subversion logs and bug tracker. I find that roughly 11% of bugs were "discovered" (that is, filed first) by me. That means a whopping 89% of programming errors went unnoticed by me and were found by the community. Now, I may be a lone maintainer of code, but even in a team, bugs will still get past. The assumption that the public, or at a minimum the black-hat community, knows more about your bugs than you do is not unreasonable. It is just as valid in the
        • For example, read up on the ongoing attacks on AACS. The black hats (and yes, they are black hats) working on breaking AACS have exploited all kinds of software and hardware bugs and shortcomings in order to gather more information and cryptographic secrets. They have the upper hand because they are not fully disclosing their work. If they were to fully disclose the bugs in various tabletop HD-DVD players and software tools that they use to garner keys, you can bet that the problems would be fixed. As is, though, they are still ahead of the AACSLA.

          I'm not sure I'd go so far as to say that. DRM is a poor example for any security model, because there's no real security there, just obscurity. In the long term, it doesn't really matter what the hackers release, because there's no long-term way for the AACSLA to stop them (well, aside from putting them all in jail, which is doubtless what they'd love to do). You can't give someone both enciphered information, and the key to the cipher, and expect them to not be able to combine the two -- that's exactly w

      • Taken to its logical conclusion, you must therefore assume the worst; that the black-hats know of far more bugs than you do. In fact, strictly speaking you assume they know every bug that exists in your software.

        But that's a ridiculous assumption! It makes sense in the context of cryptography research, but you're turning it into an assertion that publicizing software vulnerabilities doesn't have any negative consequences, which is absurd. There *are* two genuine conflicting sides here and you can't just w

        • by Otter ( 3800 )
          Yeah, that's a great plan.

          I can't remember if I turned the stove off when I left for work this morning -- I'd better call my neighbor and ask him to set my house on fire!

    • I saw the vulnerability page. They don't restrict access to subdirectories.

      Here's how I've solved this problem:

      1) Modify the .htaccess (or even better, the httpd.conf) files so that ANY access to any of the subdirectories of the main app is forbidden. The only exceptions are: a) submodule directories, whose PHP files do a login check, or b) common images (e.g. logos)/CSS/XSLT/JavaScript dirs.

      2) The only way to view your files is through the web application's PHP file lister and downloader. This should be child's play for anyone with PHP knowledge: PHP has the fpassthru function, or if you're memory-savvy, use standard fopen. Make sure the lister doesn't accept directories above the ones you want to list, and for the files use the basename() function to strip off any directory components.

      3) Any file in the PHP application MUST include() your security file (which checks if the user has logged in and redirects them to the login page otherwise). For publicly-available pages, add an anonymous user by default.

      4) For login (if not for the whole app), require https.

      4a) If you can't implement https, use a salted login, with SHA-256 or at least MD5 for the password hashing.

      5) Put the client's IP in the session variables, so that any access to the session from a different IP gets redirected to the login page (with a different session id, of course).

      6) After login, regenerate the session id.

      7) Put ALL the session variables in the SESSION array, don't use cookies for ANYTHING ELSE.

      I consider these measures to be the minimum standard for web applications. It shocks me that commonly used apps still fail to implement them properly.
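
      For illustration, here is a bare-bones PHP sketch of points 2, 3, 4a, 5, 6 and 7. The file names, the /var/photos path and the lookup_user() helper are invented; it's a sketch of the approach, not the vendor's actual fix.

        <?php
        // security.php -- point 3: every page require()s this.
        session_start();
        if (empty($_SESSION['user_id'])) {
            header('Location: /login.php');
            exit;
        }
        // Point 5: the session is only valid from the IP it was created on.
        if ($_SESSION['login_ip'] !== $_SERVER['REMOTE_ADDR']) {
            session_destroy();
            header('Location: /login.php');
            exit;
        }

        <?php
        // login.php (fragment) -- point 4a: salted SHA-256 check; point 6: fresh id after login.
        session_start();
        $user = lookup_user($_POST['username']);   // hypothetical DB helper
        if ($user && hash('sha256', $user['salt'] . $_POST['password']) === $user['password_hash']) {
            session_regenerate_id(true);           // point 6
            $_SESSION['user_id']  = $user['id'];   // point 7: state lives only in $_SESSION
            $_SESSION['login_ip'] = $_SERVER['REMOTE_ADDR'];
        }

        <?php
        // download.php -- point 2: the only way to reach a stored file.
        require 'security.php';
        $file = basename($_GET['file']);           // strips any directory components
        $path = '/var/photos/' . $file;            // storage directory outside the web root
        if (!is_file($path)) { header('HTTP/1.0 404 Not Found'); exit; }
        header('Content-Type: application/octet-stream');
        header('Content-Length: ' . filesize($path));
        fpassthru(fopen($path, 'rb'));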
      • by Dirtside ( 91468 )
        This is a good general method, but there are some problems in certain environments. My company, for example, runs a massive load-balanced server farm; we can't really use PHP sessions because two successive requests from the same user may go to separate servers.

        Locking to IP address is a non-starter because there are ISPs who will rotate their visible IP range dynamically, so that user A might appear to be coming from IP X on one request and from IP Y on the subsequent request. Then that user's screwed
        • Re: (Score:3, Informative)

          by Rich0 ( 548339 )
          If you store your session IDs in a central database you'd be covered. Maybe under extremely high load this might be an issue, but often these bugs crop up in software that doesn't face these sorts of high-demand applications.
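
          A rough PHP sketch of that, using the callback form of session_set_save_handler(); the sessions table, its columns, and the connection details are assumptions here, not something from the thread:

            <?php
            // Shared session store: any web server in the farm reads the same rows.
            $pdo = new PDO('mysql:host=dbhost;dbname=app', 'user', 'pass');   // hypothetical DSN

            session_set_save_handler(
                function ($path, $name) { return true; },                     // open
                function () { return true; },                                 // close
                function ($id) use ($pdo) {                                   // read
                    $stmt = $pdo->prepare('SELECT data FROM sessions WHERE id = ?');
                    $stmt->execute([$id]);
                    $row = $stmt->fetch(PDO::FETCH_ASSOC);
                    return $row ? $row['data'] : '';
                },
                function ($id, $data) use ($pdo) {                            // write
                    // REPLACE INTO is MySQL-specific; assumes id is the primary key.
                    return $pdo->prepare('REPLACE INTO sessions (id, data, updated_at)
                                          VALUES (?, ?, NOW())')->execute([$id, $data]);
                },
                function ($id) use ($pdo) {                                   // destroy
                    return $pdo->prepare('DELETE FROM sessions WHERE id = ?')->execute([$id]);
                },
                function ($maxlife) use ($pdo) {                              // gc
                    return $pdo->prepare('DELETE FROM sessions WHERE updated_at < FROM_UNIXTIME(?)')
                               ->execute([time() - $maxlife]);
                }
            );
            session_start();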
          • I haven't dabbled on this end in a while, but even there, if your session ID is stolen, you've got a problem.

            I wonder about using unique keys for each pass. That is, during logon, the client and server pass random numbers back and forth. Those are used to seed additional random number generation. If your request doesn't match up with the other's response, then your session has been hijacked and should be cancelled/redirected.
            If the thief manages to sneak in with a valid set of numbers, then your next requ
            • by Rich0 ( 548339 )
              Mix in SSL, and it should get even harder.

              Uh, you essentially just described SSL - except SSL is already a lot more thorough than this. About the only attack it is vulnerable to is man-in-the-middle, which isn't an issue with good CAs. Your session definitely isn't getting hijacked with SSL.
        • Locking to IP address is a non-starter because there are ISPs who will rotate their visible IP range dynamically

          AOL doesn't count :-P

          Anyway, an option would be that at login, the user has the option to set a flag like "my ISP changes my IP randomly" (something like the login screens with the option "This is a shared computer"). Best of both worlds :)

          For intranet sites inside a company, this is a non-issue, since all computers have a fixed IP.
      • 7) Put ALL the session variables in the SESSION array, don't use cookies for ANYTHING ELSE.

        There are a bunch of things you can do with cookies and still be on the safe side. As long as it's only for read access and something that's only visible to the user, you could save the time it takes to look stuff up in the DB, e.g. a nickname or the current quota limit.

        Also you can use mcrypt with Blowfish or AES for small chunks of data and store it in a cookie.

        As long as you aren't storing too much data in cookies, you can use the client as session-based data storage - it even scales directly with your clients -
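
        A sketch of that encrypted-cookie idea, using openssl_encrypt() as a stand-in for the mcrypt call mentioned above, plus an HMAC so the client can't tamper with the value (the helper names, cookie layout and key handling are made up; assumes PHP 7+ for random_bytes()):

          <?php
          // $key is a server-side secret, e.g. loaded from config; it never reaches the client.
          function set_encrypted_cookie($name, $value, $key) {
              $iv  = random_bytes(16);                                       // AES block size
              $ct  = openssl_encrypt($value, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);
              $mac = hash_hmac('sha256', $iv . $ct, $key, true);             // tamper detection
              setcookie($name, base64_encode($iv . $mac . $ct), 0, '/', '', true, true);
          }

          function get_encrypted_cookie($name, $key) {
              if (!isset($_COOKIE[$name])) return null;
              $raw = base64_decode($_COOKIE[$name]);
              $iv  = substr($raw, 0, 16);
              $mac = substr($raw, 16, 32);
              $ct  = substr($raw, 48);
              if (!hash_equals(hash_hmac('sha256', $iv . $ct, $key, true), $mac)) return null;
              return openssl_decrypt($ct, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);
          }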

  • Incentives (Score:5, Insightful)

    by gusmao ( 712388 ) on Wednesday August 15, 2007 @11:51AM (#20237537)
    It was always clear to me that full disclosure is a better option, simply because people react to incentives, and bad publicity creates a strong incentive for vendors to fix and patch their systems.
    Nothing like fear of losing sales and yearly bonus to motivate higher management.
    • Re: (Score:1, Insightful)

      by Anonymous Coward
      Well not exactly. Publicly reporting a security bug simply changes an engineering group's priorities. Other bugs don't get fixed, new features won't get added. We can debate whether or not that's a bad thing, but that's all it is - the publicly disclosed bug will just get fixed first.
  • Full disclosure could mean finding a bug and then posting it onto the first 1337 haxxor forum you can find - which most people would agree is wrong - but full disclosure after giving the software company warning can't do any harm, 'cos either they'll have fixed it, or they won't bother fixing it until forced, or not at all.
    • by xappax ( 876447 )
      but full disclosure after giving the software company warning

      It's debatable whether that's considered true, good-faith full disclosure. If you discover a security vulnerability, you are suddenly burdened with a moral imperative. You know that many people are in danger, but the people don't. Every day that you delay telling them is another day they're in danger, and quite possibly being exploited. It's important to remember that by denying people the knowledge of the insecurities in their systems, you
      • by dgatwood ( 11270 )

        It's important to remember that by denying people the knowledge of the insecurities in their systems, you are effectively protecting the interests of the attackers, regardless of your intentions.

        IMHO, the proper thing to do is to do a partial disclosure with selective full disclosure. Disclose all the details to the company. Disclose all details to CERT, who will issue a bulletin available only to certified vendors and will wait a period of time before public disclosure. Simultaneously post a publ

        • by xappax ( 876447 )
          disclosure---even delayed full disclosure---preserves that negative cost. Immediate full disclosure amplifies that cost to dangerous proportions.

          In a way, you're right. Full disclosure makes the cost to the company significant enough that it's a danger to the company's interests. This is as it should be - there's no reason to make selling insecure code less dangerous.

          Everyone makes mistakes, including you.

          Damn right, and when I make a mistake, I have to face the consequences. When I mess up, I do
          • by dgatwood ( 11270 )

            Probably? What's the probability here? "Probably" doesn't apply in security design. A system is not secure because there's only a 2% chance that someone knows the secret to breaking in; a system is secure because there is no secret to breaking in. I don't want to play odds with my security, and other people playing those odds for me without my knowledge is unethical

            Wow, that's twisting reality. Which would you rather have: a 2% chance of dying today or a 100% chance of dying? Because that's the choi

    • That isn't considered full disclosure. Posting on a script kiddie forum is not the same as disclosure to a public reporting service like Bugtraq. The former is considered malicious disclosure of security threats.
  • The Government (Score:2, Insightful)

    It works in software, and it works in government too. Only slimy bastards hide behind a veil of secrecy from their customers/the public. Maybe one day we will have open source voting machines.
    • by Pojut ( 1027544 )
      ENTIRELY off topic, I know, but why is it so difficult to make a secure (both digitally and physically) electronic voting machine that actually WORKS? We can put people on the moon, travel miles below the ocean, build computers the size of fingernails, and yet can't create an electronic voting machine that doesn't break when you so much as look at it?
      • Because, it turns out, humans are pretty smart when we put our minds to things.

        In the examples you cited the people who have exposure to those systems are motivated to see them succeed. I imagine the space shuttle would be easy to break if malicious individuals had access to it.

        If ALL of the users of voting machines were motivated to see them succeed- what we have would work wonderfully. Unfortunately finding solutions that other people can't break when they are trying hard is not so easy.

        Of course there is
        • by Pojut ( 1027544 )
          My point is that it really cannot be that hard to make a system that is physically and digitally secure. I know that people are smart and that they will circumvent something if they want to, but seriously. Come on. Is it really that difficult to make something that people can't fuck with within the 2 minutes that they are standing there?
      • by caerwyn ( 38056 )
        Why? Because, as it stands now, it's much harder to build something unbreakable than it is to break something. This applies to digital and physical security alike - especially when you have the perpetually weak link - human interaction - in the mix.

        We give the electronic voting machine makers a lot of crap for making insecure systems, and rightly so - knowing they're insecure, they shouldn't put them on the market to be used in something so important. But it's easy to forget that it really is a hard problem. The fact
        • by Pojut ( 1027544 )
          That's just it...it's not hard. No ports except for power and for the little card, and make the port for the cards in the voting machines write-only...only have the readers in one central location for each district where the cards are sent to. That solves your physical problem AND your electronic problem (short of the cards being hijacked on the way to the readers, of course)
          • by caerwyn ( 38056 )
            That's just it. Many of the hacks of existing voting machines don't necessarily have to do with the machine itself - they have to do with grabbing the card and modifying it externally in some fashion. So now you have to worry about physical card security at all times - and you also have the same unsolvable problem that the RIAA is dealing with when it comes to DVDs: you have to have an encrypted card, but you've got to have the key to decryption buried somewhere in the machine...

            It really isn't as trivial as
            • by Pojut ( 1027544 )
              Again, easily solved. The folks sitting in the voting booths insert a card into the machine that they remove from a sealed package, and the card stays completely internal in the machine. After the person is done selecting their votes, the card remains in the machine and is not retrieved until the voting booth is closed.

              I know that the human element always exists, but it is drastically reduced if the person voting A. sees the card coming out of a sealed package into the machine and B. never actually touche
    • Re: (Score:1, Troll)

      by adisakp ( 705706 )
      It works in software, it works in government too. Only slimy bastards hide behind their veil of secrecy to their customers/public.

      But the current administration has held all their policy meetings in secrecy and has failed to provide disclosure of the details of its inner workings to Congress, even in numerous private sessions, due to "executive privilege". Are you calling our great leader a slimy bastard?
    • We don't even have 'open' elections anymore. No matter what the deal is with machines, punch ballots, and the like, the ballots will just be destroyed after a judge says not to [enquirer.com].

      Welcome back to the USSA!
  • Two basic problems (Score:3, Interesting)

    by cdrguru ( 88047 ) on Wednesday August 15, 2007 @12:02PM (#20237673) Homepage
    Full disclosure results in announcing a bug not to the world, but only to people that are paying attention. Does this include all the users of that software? No, not even most of them. So who gets informed? People looking for ways to do bad things. The users do not hear about the defect, the potential exploit, or the fix that corrects it.

    They are just left in their ignorance with the potential for being exploited.

    The "I want to do bad things" community has the information and is paying attention. Their community gets advance information before there even is a fix and they get to evaluate if it is worth their efforts to exploit it.

    The other group that gets to benefit from full disclosure is the media. Starved for news of any sort, bad news is certainly good news for them.

    All in all, full disclosure is simply blackmail. Unfortunately, no matter what the result is, the user of the product affected gets all of the negative attributes. Their instance of the product isn't fixed because, unless they are paying attention, they don't know about it. They get to lose support if the company decides to pull the product rather than kneel to the blackmail. If the bug is exploited, the end user gets to suffer the consequences.

    You might think this would justify eliminating exclusions of damages for software products. There isn't any way this would fly in the US, because while we like to think we're as consumer-friendly as the next country, the truth is this would expose everyone to unlimited liability for user errors. Certainly unlimited litigation, even if it was finally shown to be a user error, which is by no means certain. And do not believe for a moment that you could somehow exclude software given away for free from damages. If you had an exclusion for that, you would find all software being free - except it would be costly to connect to the required server for a subscription or something like that. Excluding free software would be a loophole you could drive a truck through.
    • by garett_spencley ( 193892 ) on Wednesday August 15, 2007 @12:14PM (#20237869) Journal
      This is a very odd point of view.

      First of all, if the users of the software aren't paying attention, whose fault is that?

      Secondly, you would think and hope that the software manufacturers would be paying attention and that they would inform their users, who may or may not be paying attention.

      Full disclosure doesn't just imply disclosure to a small, specific group of people. It means making information PUBLICLY available to EVERYONE. If someone isn't paying attention, then that's their own fault. But if you don't feel like end users who are too worried about other things to be paying attention to Bugtraq are getting a fair break, then point the finger at the software manufacturer instead. After all, they're the ones who sold faulty software, and they're often the ones who continue to sell faulty software when bugs are not disclosed to the public, because they take the mindset of "what they don't know can't hurt them".

      Unfortunately, what "they" don't know CAN hurt them. Because those same people you were talking about who are "interested in doing harm" are usually the ones to find the bugs to begin with. So they already know and those end users that you are so adamant about protecting are already at risk.

      So IMO it's the responsibility of the software manufacturers to pay attention, fix bugs, release patches and inform their users that they need to apply said patches ASAP.

      I mean, are you really advocating keeping information from people? What if you had cancer? Would you prefer that your doctor not inform you? As I already stated, full disclosure is all about making information publicly available to absolutely everyone, so that absolutely everyone can make whatever choices they feel like with that information. Your argument is that full disclosure is selective about who it makes the information available to. I have to disagree. At the very least it makes the information available to the developers who made the buggy software to begin with, and to competent admins who follow those lists so they know what kind of bugs are running on their servers (I used to be one of those).
      • I'm glad you said that and not me. I was about to write a dissertation on security and disclosure based on the SEC's stance and requirements. Citations and everything.
    • Full disclosure results in announcing a bug not to the world, but only to people that are paying attention.

      Yes, but the group that is paying attention includes the people with the greatest need to maintain security.

      The "I want to do bad things" community has the information and is paying attention. Their community gets advance information before there even is a fix and they get to evaluate if it is worth their efforts to exploit it.

      True, although sometimes this community already knows some of it.

      The other group that gets to benefit from full disclosure is the media. Starved for news of any sort, bad news is certainly good news for them.

      This is a good thing. First it informs the people. Second, it gives people a bad impression of vendors who have security holes and encourages them to move to more secure vendors. That's the free market improving security.

      All in all, full disclosure is simply blackmail.

      Nope. Offering to not release the vulnerability for cash is blackmail.

      Unfortunately, no matter what the result is, the user of the product affected gets all of the negative attributes.

      Look, I'm a huge advocate of resp

  • by Lord Ender ( 156273 ) on Wednesday August 15, 2007 @12:12PM (#20237803) Homepage
    This is not about full disclosure. This is responsible disclosure. Full disclosure would be if he went to Bugtraq before contacting the vendor. Responsible disclosure is where a responsible security researcher goes to the vendor FIRST, and only goes public after the vendor has had a reasonable amount of time to fix the problem.

    Responsible disclosure allows responsible companies to get a fix before a flaw is used maliciously, but the researchers still get credit. With responsible disclosure everyone wins except black hats.

    Full disclosure benefits black hats more than it does anyone else.
    • by griffjon ( 14945 )
      This is spot on. Many companies don't see the business interest in responding to security flaws until it hits full disclosure. It doesn't logically follow that we should just go straight to full disclosure. Let the company know that there's a flaw, and that you will disclose said flaw in some reasonable timeframe that balances the patch time against the severity of the flaw. Insightful companies will get to work patching; the rest will be gruff or nonresponsive ... and then you disclose and they get around
    • ...responsible disclosure would also include:

      - the timeline for full disclosure being given to the vendor (I don't know whether that did or didn't happen in this case), and

      - reaching some mutual or community agreement on what a "reasonable amount of time to fix the problem" is for the problem in question.

      That said, I definitely agree this wasn't "full disclosure", since the vendor was informed, but it wasn't necessarily responsible disclosure, either. To me, "responsible disclosure" implies that a patch is
      • Re: (Score:3, Informative)

        by Lord Ender ( 156273 )

        To me, "responsible disclosure" implies that a patch is made available BEFORE the detailed disclosure of the vulnerability happens

        No. Wrong. It's not a matter of opinion. With responsible disclosure, a security researcher notifies a vendor before publishing his research. It absolutely DOES NOT imply that a patch is made available before the researcher publishes his findings. A vendor is still free to shoot itself in the foot under responsible disclosure.

        The only gray area is determining just how much time i

        • Re: (Score:2, Insightful)

          Someone mod the parent up!
        • No. Wrong. It's not a matter of opinion. With responsible disclosure, a security researcher notifies a vendor before publishing his research. It absolutely DOES NOT imply that a patch is made available before the researcher publishes his findings. A vendor is still free to shoot itself in the foot under responsible disclosure.

          I didn't say it implied that; I said, "To me, "responsible disclosure" implies that a patch is made available BEFORE the detailed disclosure of the vulnerability happens". And it is a
          • I didn't say it implied that; I said, "To me, "responsible disclosure" implies that

            This is a contradiction. The phrase "to me" prepended to a factual predicate does not change the meaning of the statement. If you aren't a native English speaker, and I am misunderstanding what you mean to say, I apologize.

            It is every vendor's dream to have security researchers work as free consultants, hand-holding them through fixing security problems. The reality is that researchers are under no obligation to do anything o

            • This is a contradiction. The phrase "to me" prepended to a factual predicate does not change the meaning of the statement.

              No, it is not. It means that that is what "responsible disclosure" implies to me, which is exactly what I said. That isn't necessarily what it means to everyone, or what I think it should mean to everyone, and I understand that.

              That is what responsible disclosure means to me, and that is a valid viewpoint; it most certainly is a matter of opinion.

              Tied to that, obviously, is the
              • "Two plus two equals four" and "To me, two plus two equals for" are equivalent statements.

                The word "responsible" refers ENTIRELY to the researcher, not to the vendor. Any definition of full disclosure which depends on whether or not a vendor choses to act is therefore an invalid definition.

                In your own cursory examination of articles and blogs, what term did you find the industry uses for disclosures in which the researcher gave a company advance notice of a publication, but not as much lead time as some wou
                • "Two plus two equals four" and "To me, two plus two equals for" are equivalent statements.

                  Only because "two plus two equals four" is a provably correct factual statement.

                  What constitutes responsible disclosure, in the context in which I was speaking, is a matter of opinion.

                  Therefore, it is perfectly reasonable that someone might say "To me, [term] means [definition]," and at the same time not believe that the statement universally applies or is accepted by everyone. Quite the opposite, actually.

                  The word "responsible" refe
    • by xappax ( 876447 ) on Wednesday August 15, 2007 @01:42PM (#20238943)
      With responsible disclosure everyone wins except black hats.

      Black hats win too. You ask 4cId_K1LL3R whether he'd like you to "fully" or "responsibly" disclose the 0day buffer overflow that he discovered a week ago and has been using to break into systems. I'm sure he'd far prefer that you keep the public in the dark about the issue for a month or so while the company leisurely gets around to patching it.

      Black hats win, but software companies win most of all - which, after all, is why software companies invented and promoted "responsible disclosure" in the first place. "Responsible" disclosure allows a company to improve its reputation and its software at little to no cost, thanks to volunteers who fix its security problems without telling the public. This, in turn, enables them to continue using the same irresponsible software engineering practices as they always have, with no impact on their bottom line.
      • Realistically, if two people discover a vulnerability independently, one of them is likely to have known about it long before the other. In such cases, one additional month is a negligible amount of time compared to the overall time the initial discoverer had free rein over the affected systems.

        Additionally, most companies can't immediately implement work-arounds on the day of a 0-day publication. They have to wait until a patch is released from a vendor. In such cases, the black hat has the same amount of time to
        • Additionally, the scenario given is typical of the /. responses I see:

          This, in turn, enables them to continue using the same irresponsible software engineering practices as they always have, with no impact on their bottom line.

          Compiling and testing doesn't find everything. It's easy to accuse an ace coder or a crack team of programmers of sloppiness when you don't know the people. Sure some companies push an overly aggressive time frame, but not all of them do and (from what I can tell) not most of them

        • by xappax ( 876447 )
          most companies can't immediately implement work-arounds on the day of a 0-day publication

          There's always an easy, immediate workaround. Sometimes it's as trivial as adding a firewall or IDS rule; sometimes it's as extreme as physically unplugging the affected machines until the issue is patched.

          If you're an institution with servers containing a lot of highly sensitive information, you'll probably be willing to do extreme things to protect your data if it's really, truly necessary. The problem is, you
  • "Got a good reason for taking the easy way out..." - Daytripper
  • by mfh ( 56 ) on Wednesday August 15, 2007 @12:13PM (#20237841) Homepage Journal
    1. Bug is reported.
    2. Secretly, a team of crack programmers (or programmers on crack) develop the patch.
    3. The patch sits in a repository until public outcry.
    4. Public outcry.
    5. Patch released... LOOK HOW FAST WE ARE!
  • Is it just me or (Score:1, Insightful)

    by Anonymous Coward
    have Slashdot stories become more openly biased? I wouldn't even call this a story; it's an opinion.
  • Coincidence? Probably not considering the vendor stated "..I'm not sure we could fix it all anyway without a rewrite." Looks like they could fix it, but just needed a little full-disclosure motivation.

    They might not have been lying. Fixing it properly might have required a rewrite, and instead they may have been forced to include a number of slapped-together kludges with Lord-knows-what side-effects under extreme time pressure. I know what kind of code *I* write when I'm under that kind of time constrai

  • False assumptions? (Score:3, Interesting)

    by mmeister ( 862972 ) on Wednesday August 15, 2007 @01:10PM (#20238543)
    There seem to be some false assumptions here. It is assumed the company did not look at the bug and potential fixes until after it was "fully disclosed". If they released a fix a couple days later, the more likely scenario is that they've been looking at the problem and assessing what options they had to address the problem.

    Ironically, the full disclosure probably forced them to put out the solution before it was ready, leaving the risk of new bugs. IMHO, forcing a company to rush a fix is not the answer. If you work for a real software company, you know that today's commercial software often has thousands of bugs lurking, although many are very edge case and are often more dangerous to fix than not fix (esp if there is a workaround).

    There should be enough time given to a company to address the issue. Some can argue whether or not 5 months is enough time, but that's a different argument. I think forcing companies to constantly drop everything for threat of full disclosure will end up doing more harm than good.
    • Re: (Score:3, Insightful)

      by Minwee ( 522556 )

      If the company was indeed looking at the problem, then they lied about it. Their response to being notified of the problems, as described in the article, was to say "Gee, we're not going to bother fixing that. Instead we're going to work on a new product and just sell it as an upgrade to everybody."

      When someone tells you flat out that they aren't going to do anything, why is assuming that they aren't doing anything false?

    • There seem to be some false assumptions here. It is assumed the company did not look at the bug and potential fixes until after it was "fully disclosed".

      I don't think you RTFA.
      He told them about the problems.
      Their response: We're not fixing it because we have a new client coming up.
      5 months later, no new client, so he went public.

      If you read the other article linked in the summary, it seems like they could have trivially done a lot to secure things server side. Like not making the password hash file readable and not allowing user uploaded scripts to run on the server.
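
      For example, a minimal PHP sketch of the second point: store user uploads outside the web root under a server-generated name, so nothing the user sent can ever be requested or executed directly (the paths and the extension whitelist are invented for illustration; the same "keep it outside the web root" idea applies to the password hash file):

        <?php
        // upload.php (fragment): the client-supplied filename never touches the filesystem.
        $allowed = ['jpg' => 'image/jpeg', 'png' => 'image/png', 'gif' => 'image/gif'];
        $ext = strtolower(pathinfo($_FILES['photo']['name'], PATHINFO_EXTENSION));

        if (isset($allowed[$ext]) && is_uploaded_file($_FILES['photo']['tmp_name'])) {
            $stored = bin2hex(random_bytes(16)) . '.' . $ext;   // server-generated name
            move_uploaded_file($_FILES['photo']['tmp_name'], '/var/photo-uploads/' . $stored);
            // Record $stored in the DB; a login-checked download script later streams it
            // back with fpassthru(), so /var/photo-uploads is never reachable (or
            // executable) directly through the web server.
        }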

      • Re: (Score:3, Interesting)

        by mmeister ( 862972 )
        Sorry, I was trying to make a more generic argument, and clearly flubbed that. My original point is that we will likely do more long-term damage if all we do is bully companies. Believe it or not, there is more going on than just folks sitting around waiting to fix bug reports that come in from some random guy. And with smaller companies, they don't have a team that is on the lookout for reported vulnerabilities.

        I didn't see the original email he sent to the company. Nor did I see mention of followups to try and push
  • by Trillan ( 597339 ) on Wednesday August 15, 2007 @02:02PM (#20239201) Homepage Journal

    "Coincidence? Probably not considering..."

    Yeah, everyone knows that patching security holes is an instant process. What other explanation could there possibly be? The public found out about the bugs, and the vendor waved a magic wand, and presto-changeo, they were fixed.

    Okay, now let's be real here.

    That the patch appeared almost immediately after is the surest sign that the vendor was already working on it. It probably also indicates the vendor wasn't confident the fixes were finished, and rushed to get them out after only a couple of days of public disclosure.

    So enjoy your half-baked patch.

  • True economics (Score:1, Interesting)

    by Anonymous Coward
    There seems to be this strange notion that blackhats benefit from full disclosure.

    The thinking seems to be something like this: when a bug is disclosed, blackhats that were unaware of the bug become informed and have a window of exploitation until the bug is patched.

    This seems absurd to me. As soon as the bug is disclosed, users become aware and can immediately block the vulnerability. If there is no other solution, they could at least stop using the vulnerable software. So the window of exploitation is the
  • morality (Score:2, Funny)

    by Anonymous Coward
    Forget morality for a minute... Making the bigwigs at some major company cry out "OH SHIT" in unison is one of the few sources of free entertainment I have left.
