
Comments


Microsoft Makes It Harder To Avoid Azure

dhavleak Re:Options are good but... (164 comments)

AC here. I think you've started a new meme in your comment by calling things memes to discredit them :)

I always hear the general economies-of-scale and specialization meme, but I don't understand what prevents the knowledge and expertise from being bottled up and applied to physical hardware.

The cloud guys (Amazon, Microsoft, VMware, others) employ the top talent in the world to work on this around the clock and will never stop. Company X selling widgets (let's say golf clubs) will just buy machines and employ people to run the data center as efficiently as possible. How far do you think they will go in terms of maxing out the power efficiency of their data center, to name just one metric on which individual data centers simply cannot compete? How do they even build the expertise in that? How can they afford to revamp it when progress renders their once-awesome data center obsolete in just 5 years or so?

Besides, from my experience the real costs are not the hardware... not even close...

Correct. And that's one of the other sources of cost efficiency for the cloud guys. Their data centers are humongously vast arrays of machines managed by very few humans.

The "elastic" meme I've heard before and I am not buying it. All problems solvable by throwing more hardware at the problem are by definition easy problems to solve.

Spikes can happen when you don't expect them. Hardware purchases don't happen automatically as your traffic spikes; 'elastic' can do that. Even if you buy hardware ahead of time -- you now have to own and maintain that hardware. There's even scope for handling daily spikes -- prime-time traffic spikes across time zones can be balanced in the cloud, so you have less infrastructure serving more clients than would be the case if each client owned their own datacenters. It's actually weird that you question these concepts and disparagingly call them memes. A wait-and-see approach is certainly warranted. Questions regarding portability and trust issues with the providers are warranted. Questioning the cost-effectiveness of cloud vs. on-premise is a little silly IMO.
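The time-zone balancing point can be sketched with arithmetic; the clients, loads, and 6-hour slot granularity below are all invented for illustration:

```python
# Hypothetical loads for two clients whose daily peaks fall in
# different time-zone slots (all numbers invented for illustration).
client_a = [10, 10, 90, 10]  # requests/sec per 6-hour slot, peak in slot 2
client_b = [90, 10, 10, 10]  # peak in slot 0

# On-premise: each client must buy capacity for its own peak.
dedicated = max(client_a) + max(client_b)

# Shared elastic pool: capacity is only needed for the combined peak.
combined = [a + b for a, b in zip(client_a, client_b)]
shared = max(combined)

print(dedicated, shared)  # 180 100
```

The shared pool serves the same traffic with 100 units of capacity instead of 180, and the gap widens as more clients with offset peaks are pooled.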

about a year ago

Google Security Expert Finds, Publicly Discloses Windows Kernel Bug

dhavleak Re:But not to give them a chance to correct it fir (404 comments)

That is constricted thinking. If brute-force becomes cheap your scheme is broken

Errr... you need to upgrade your crypto knowledge. If brute-force means a million times the lifetime of the universe, and it becomes a million times cheaper, it is still a tiny bit impractical.

My crypto knowledge is fine. What do you think I meant by "cheap" above? And you're accusing me of arguing semantics. Do you measure the cost of brute-forcing RSA-512 in terms of the lifetime of the universe? Please upgrade your crypto knowledge if that's the case.

Citing a real-world example, what is the difference between RSA-512 vs. RSA-2048 that you would consider one secure and the other insecure?

Well, using a real-world metaphor, if RSA-512 is a grain of sand (let's say 10 micrograms) then RSA-2048 is the entire mass of the universe, put into a grain of sand in yet another universe where every grain of sand represents an entire universe, multiplied by a couple trillion. That's not a matter of "brute-force becoming cheap". If every grain of sand in the entire universe were a supercomputer that could break one RSA-512 per second, RSA-2048 would still be secure against brute-force attacks.
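The scale gap behind that metaphor can be checked with plain integer arithmetic. As a deliberate simplification, this treats raw keyspace as the work factor (real attacks on RSA factor the modulus, which is far cheaper than exhaustive search but still grows astronomically with key size):

```python
# Ratio of naive keyspaces (a simplification -- see lead-in above).
ratio = 2**2048 // 2**512
assert ratio == 2**1536

# Generous upper bound on attacker work: ~10**80 atoms in the observable
# universe, each testing 10**9 keys/sec for 10**18 seconds (~30x the age
# of the universe).
work_done = 10**80 * 10**9 * 10**18   # = 10**107 keys tried

print(work_done < 2**1536)  # True: nowhere near 2**1536 (~10**462)
```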

This is just dissembling now. So many words, because you refuse to acknowledge a simple point: algorithmically there is no difference between the two.

Obscuring the version info increases the cost (time, bandwidth) of the attack.

Yes, but it is not a security measure. It's a one-time tiny benefit. We recommend it because it doesn't really cost you much and thus it's a net-gain. Real-world example: If you have a server running at a version that has a known exploit in the wild, then you wouldn't consider obscuring the version number as a mitigating action, would you? It does not provide security.

Same issue again. So many words to explain why you prescribe it, and so many words to then disown it as a security measure. The point is simple and clear. Obscuring that version info is a tiny little security measure. No individual thing provides security -- so your line at the end does not apply.

In ASLR you *obscure* addresses from an attacker. It's an important security mechanism, any modern OS is incomplete without it, and there's just no escaping the fact that it is nothing but a form of obscurity.

No, you don't. ASLR has nothing to do with "security through obscurity". Please stop playing tricks with semantics. You won't be able to quote even a single serious source that puts ASLR even in the vicinity of "security through obscurity".

Playing tricks with semantics? After you ramble on with some nonsense about grains of sand instead of just discussing the core of the issue? As an attacker, you know there's a freaking stack (for example), don't you? With ASLR the only difference is that you don't know where it is anymore -- because it has been *obscured* from you. That's not a semantic trick -- you are the one dancing around with endless verbiage to defend a stupid and pointless stigma over the word 'obscurity'.
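The cost-raising effect of randomizing addresses can be modelled as a guessing game; the 16 bits of entropy and the one-shot attacker below are toy assumptions (real ASLR implementations randomize more bits, and real attackers have better strategies than blind guessing):

```python
import random

ENTROPY_BITS = 16        # toy figure; real ASLR varies by OS and region
TRIALS = 100_000

rng = random.Random(42)  # seeded so the simulation is reproducible
hits = 0
for _ in range(TRIALS):
    base = rng.getrandbits(ENTROPY_BITS)  # defender's randomized base
    guess = 0x0000                        # attacker's hardcoded address guess
    if guess == base:
        hits += 1

# Without randomization the hardcoded guess succeeds every time; with it,
# the expected hit rate drops to 2**-ENTROPY_BITS (~0.0015%).
print(f"hit rate: {hits / TRIALS:.6f}")
```

Obscuring the location multiplies the expected number of attempts, i.e. the attacker's cost, by 2**ENTROPY_BITS.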

You're missing my point again, which is that you *should* really be talking about *cost*.

Negative. Cost is one factor, but not the only one.

What are the other factors? Everything translates into cost eventually.

Plus you don't generally know the cost equations of your attacker.

You know the relative costs inside your system. What is the cheapest point of attack in your system currently? That is your first priority. Once that's dealt with, ask and answer that question again and again. You will *raise* the *cost* of breaking your system by doing so. You don't need to project the attacker's costs to do that. You are deliberately being dense now.

Sometimes, work time is expensive, sometimes it is cheap.

Sometimes people ramble on when they lose the plot. This was a great discussion until this post.

Sometimes acquiring an exploit is expensive, sometimes it is cheap.

Somebody originally developed it at some cost. Whatever the economics of them selling it, if you raise that initial cost, you improve the chances of them looking at someone else's product for holes instead of yours.

Sometimes all you need is to be a less easy target than the guy next door

My exact words were "As a security professional your job is to make their cost of doing business in your neighborhood prohibitively high." You're saying the exact same thing back to me and lecturing me on how wrong I am.

and sometimes, costs don't even matter to your attacker as long as they can afford it at all.

Actually, that's just a scenario where the attacker is willing to pay such a high cost that nothing you do cryptographically will deter them. If they are willing to hold a gun to your head or a loved one's head to make you give up your passwords, you better make sure they can't find you (i.e., make your location obscure).

You should be talking about security. Cost is one factor, since most security measures are imperfect. But sometimes, you have one that isn't. One-time pads cannot be cryptographically cracked, for example. Cost of cryptanalysis stops being a factor if you use one-time pads, and the efforts shift to stuff like key distribution.

We are talking about security. The currency in this trade is *cost*. One-time pads are a great example of that -- for an effectively unbreakable system, it's pretty seldom used. Why do you think that is? Because its manual nature means that the *cost* is not worth it to the good guys.
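A minimal one-time-pad sketch shows both sides of this: XOR with a truly random, message-length, never-reused key is information-theoretically unbreakable, and the operational burden (a fresh key as long as every message) is exactly where the cost moves:

```python
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so one function both encrypts and decrypts.
    assert len(key) == len(data), "key must be exactly as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))  # truly random; must never be reused

ciphertext = otp(msg, key)
recovered = otp(ciphertext, key)
print(recovered == msg)  # True
```

Every possible plaintext of the same length corresponds to *some* key, so the ciphertext alone carries no information about the message; the whole game becomes generating and distributing that key.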

Established on Slashdot, and in journalism circles perhaps.

Nice strawman, I'm not biting. You are trying to re-define the meaning of the term in order to win an argument.

Whatever man... This shit is getting pointless...

Real world example: would you ever let your clients share even the slightest information about a DB schema publicly? Even if you authenticate all entry points? C'mon man!

There's a difference between "don't tell them if it doesn't serve a purpose" and "the security of this system lies with this being kept a secret". If a client asked me whether they should publish their DB schema on their website, I'd not tell them the sky is falling, I'd ask them why. Because the security of their DB should not rest with the schema being a secret.

Who said anything about the security of their DB resting with the schema being a secret? Did I even imply that this will be your only security mechanism? And you're accusing me of strawmen? WTF man?

I used to work with the SELinux crowd, contributed a couple patches, held a couple talks. For demo purposes, I once put the IP address and root password of my notebook on a piece of paper. With a proper policy, you can do that on an SELinux system. I wouldn't recommend it, but once again, the security of the system rests with the policy and the role authentication, not the root account.

You're lecturing me about stupid shit that adds nothing of value again (see grains of sand above). Stop doing this! See your above strawman -- did I suggest that hiding your IP address is a single point of defense you should rely on? WTF is this logic?

Maybe it's all semantics with you. I don't consider just any tiny bit that might or might not make the attacker's job a little bit more difficult to be a security feature.

You accuse me of semantics, and then argue semantics in the very next sentence.

Countercheck: Would you consider re-arranging the keys on your keyboard a security feature? It certainly makes things more difficult, but c'mon!

Another strawman -- citing a silly example doesn't make your point valid. It just means you used a silly example as a strawman. Do you consider re-arranged keys obscurity? If yes, then why?

Obscurity is an inextricable part of security. Anyone claiming otherwise has never worked in the profession.

I do and I have and still do. Obscurity is not a part of security. It's an additional low-cost measure you can take if it makes you feel better, but your system should be just as secure without it.

Low-cost measure of what? Answer: security.

That is why I don't consider it a part of security - it should not affect the security of your system in any measurable way. It's more obvious in crypto: Your attacker can know which algorithm you use. Whether or not he knows should not affect the security of your encrypted data. If it does, your algorithm is broken.

This is a pitfall people fall into too easily. The algorithm has to be public knowledge else even the good guys wouldn't be able to use it. But you do obscure something -- the key. And it still comes down to cost. If something else in your system is easy (cheap) to bypass, will you object to your attacker "but, but, I was using crypto!!"?
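The split between a public algorithm and an obscured key is easy to see with Python's standard `hmac` module; the key and message below are placeholders:

```python
import hmac
import hashlib

key = b"hypothetical-secret-key"      # the one obscured thing
msg = b"transfer $100 to account 42"

# The algorithm (HMAC-SHA256) is completely public knowledge;
# security rests only on the key staying secret.
tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

# An attacker who knows everything except the key cannot reproduce it.
forged = hmac.new(b"wrong-guess", msg, hashlib.sha256).hexdigest()
print(tag != forged)  # True
```

Publishing the algorithm costs nothing; publishing the key costs everything. The "obscurity" is concentrated entirely into the key, which is what makes its cost-to-break quantifiable.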

about a year ago

Google Security Expert Finds, Publicly Discloses Windows Kernel Bug

dhavleak Re:But not to give them a chance to correct it fir (404 comments)

In the sense that SHA has been broken. To cryptographers, that was old news 10 years ago. "Broken" in cryptography means "there is a way to break it that is considerably easier than brute-force". It could still take 10 billion years.

That is constricted thinking. If brute-force becomes cheap your scheme is broken. Citing a real-world example, what is the difference between RSA-512 vs. RSA-2048 that you would consider one secure and the other insecure? That alone should illustrate to you that your above definition of "broken" in cryptography is incorrect. The longer (more obscure) key increases the cost of the attack.

I disagree with that. Security through obscurity is no security. If your system's security relies on obscurity, then it is broken by design.

In ASLR you *obscure* addresses from an attacker. It's an important security mechanism, any modern OS is incomplete without it, and there's just no escaping the fact that it is nothing but a form of obscurity.

Now we do tell our clients to not display the version number of the webserver and stuff, but that's not because it would make it any more secure. It's because a large number of attacks these days are automated, untargeted and to save time and bandwidth they often scan for targets first. It's the old "I don't have to run faster than the bear" approach.

You actually supported my point there. Obscuring the version info increases the cost (time, bandwidth) of the attack. Also remember -- a slightly more resourceful attacker could even catalog this info and save it for a day when an attack against this version falls into their lap. However intangible the difference, you did recommend obscurity to your client (rightly so) and you did make them ever so slightly more secure by doing so.
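The "scan for targets first" economics can be sketched as a toy banner-matching scanner; all hostnames and version strings below are invented:

```python
# Untargeted attack model: only hosts whose banner matches a
# known-vulnerable version get an (expensive) exploit attempt.
hosts = {
    "web1.example.com": "Apache/2.2.3",   # advertises a vulnerable version
    "web2.example.com": "Apache/2.4.41",  # advertises a patched version
    "web3.example.com": "",               # banner suppressed
}
KNOWN_VULNERABLE = {"Apache/2.2.3"}

targets = [host for host, banner in hosts.items()
           if banner in KNOWN_VULNERABLE]
print(targets)  # ['web1.example.com']
```

web3 may or may not be running a vulnerable version, but it stays off the cheap target list; an attacker must spend extra time and bandwidth probing it directly, which is exactly the cost increase being described.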

I'm sorry, I won't follow there. You're just playing semantic tricks there and re-defining a word. "Obscurity" is not a synonym for "cost". Never was, never will be. If you want to talk about cost, then talk about cost, and not about obscurity.

You're missing my point again, which is that you *should* really be talking about *cost*. By any means necessary (obscurity, dancing naked, whatever helps), increasing the *cost* of a successful attack is what's important. Writing malware is a for-profit business. As a security professional your job is to make their cost of doing business in your neighborhood prohibitively high.

Because you are using words in meanings that are private to yourself and contrary to established meaning.

Established on Slashdot, and in journalism circles perhaps.

Maybe you need to step back and ask yourself why we security professionals - otherwise not exactly known for making anything simple and straightforward - apply such a strict one-liner? It's because too many scammers and idiots have done so much damage to IT security by labelling something as "security" that really was just obscurity.

I don't know what to tell you at this point. You yourself recommend obscuring version info to your customers. If you're truly a security professional, then you know very well that absolutely any unnecessary piece of information you give an attacker about your system is one piece too many. Real world example: would you ever let your clients share even the slightest information about a DB schema publicly? Even if you authenticate all entry points? C'mon man!

And obscurity is not security. Anyone claiming otherwise is trying to sell you snake-oil.

Obscurity is an inextricable part of security. Anyone claiming otherwise has never worked in the profession.

about a year ago

Google Security Expert Finds, Publicly Discloses Windows Kernel Bug

dhavleak Re:But not to give them a chance to correct it fir (404 comments)

First, all your examples are looking at crypto which has long been broken.

RSA has long been broken? This is news to me. And the 'fix' for RSA-1024 being 'broken' even by conventional means is what -- a longer key, right? That's a higher level of obscurity, that's all.

Next, of course circumvention is the technique to use. That's the whole point of a good crypto algorithm: Making sure that the actual encryption is too tough to crack.

Not necessarily too tough to crack -- it just needs to be harder to crack than anything else in your system. I'm *not* advocating the use of weak crypto though -- my point is simple -- there's a ton of other techniques that go into creating a secure system. Once you have a good crypto scheme, you need to concentrate on everything else, and some of those things will fall under what people love to call security through obscurity. Thinking of obscurity as bad is not helpful. Thinking of security in terms of "cost to break things" and "cheapest thing to break" is what's helpful. High cost = high obscurity, if you will. Good crypto just represents the best obscurity there is (and therefore has the highest cost to break). Then you move on to the next cheapest thing and make it more expensive to break, and so on, until all options for breaking your system are more expensive than the value of what your system is protecting. This is a pretty standard rule of evaluating threat models -- I don't know why people are resisting it so much.
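That "fix the cheapest attack, then re-ask" loop can be sketched directly; the attack names, dollar costs, and 10x mitigation factor are all invented for illustration:

```python
# Invented threat model: estimated attacker cost per avenue.
attack_cost = {
    "exploit unpatched CMS": 300,
    "phish an employee": 500,
    "brute-force VPN creds": 2_000,
}
ASSET_VALUE = 5_000  # what a successful break is worth to the attacker

hardened = []
while min(attack_cost.values()) <= ASSET_VALUE:
    # Always work on the current cheapest point of attack.
    weakest = min(attack_cost, key=attack_cost.get)
    hardened.append(weakest)
    attack_cost[weakest] *= 10  # assume each mitigation raises cost 10x

# The loop exits only when every avenue costs more than the prize.
print(all(cost > ASSET_VALUE for cost in attack_cost.values()))  # True
```

Note that some avenues get revisited: after the first pass the CMS and phishing costs are still below the asset value, so they come up again as the new cheapest points, which is the "ask and answer that question again and again" part.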

My initial point was merely this: When security issues are discussed on Slashdot some idiot (several of them actually) will come along and put in a one-liner about security through obscurity being no good -- and that spells the end of rational discussion on the topic. The fact of the matter is that there has always been a balancing act between obscurity, and transparency, and cost when it comes to security, and that clichéd effing line contributes absolutely nothing of value to the conversation.

about a year ago

Google Security Expert Finds, Publicly Discloses Windows Kernel Bug

dhavleak Re:But not to give them a chance to correct it fir (404 comments)

Asymmetric keys are not "better" obscurity. You can't break a good encryption algorithm with even a huge cluster. That's the whole point - that I don't need obscurity. I can tell you what algorithm I used, what size my key is, absolutely everything except the key itself - and it'd still take you a century with all the current computing power on the planet to break it.

Completely dead wrong! First, let's look at some expensive methods:
http://www.h-online.com/security/news/item/Cracking-WPA-keys-in-the-cloud-1168636.html (WPA)
http://www.darkreading.com/authentication/cloud-based-crypto-cracking-tool-to-be-u/229000423 (SHA-1)
http://web.archive.org/web/20121115112940/http://people.ccmr.cornell.edu/~mermin/qcomp/chap3.pdf (RSA)

Next: less expensive is to circumvent that altogether and look for other weaknesses (less expensive, and much more common -- and Ormandy's exploit is an example of that -- he gains root by some other weakness rather than cracking crypto to get passwords and then authenticating as root -- get it?). And lastly, depending on the value of the protected data (or the desperation of the attacker), a gun can be held to your head, or other such extreme measures can be taken. The point is not what you stated. The point is that everything can be broken by some means! Make sure the cost of breaking is greater than the value of the thing being protected. If you disagree with that line, re-read Applied Cryptography -- you'll see it mentioned in that book. And in any case, my larger point to GP was to discard this idiotic one-liner that hinders meaningful discussion of security issues.

about a year ago

Google Security Expert Finds, Publicly Discloses Windows Kernel Bug

dhavleak Re:aiding and abetting 8 computer fraud and abuse (404 comments)

You can't prove that anyone else *did* know about it.

Yep, you were the one hanging your hat on an unsupportable assertion - I didn't say anything beyond that. If your arguments start with unsupportable hyperbole, don't expect anyone to take you seriously.

Unsupportable assertion?? You're being overly pedantic. Before he went public, Ormandy's knowledge was either exclusive to him, or exclusive to an incredibly small number of people -- as opposed to common knowledge to all -- it's a simple point really.

But if a spear-phishing attack occurs, we will know immediately that this is now in the wild.

It doesn't sound like you understand how spear-phishing works.

Really now? Care to explain?

about a year ago

Google Security Expert Finds, Publicly Discloses Windows Kernel Bug

dhavleak Re:But not to give them a chance to correct it fir (404 comments)

I have nothing -- I share your interest in this if anyone else has some stats.

One word regarding disclosure though -- a happy medium has been reached in the industry -- incidents like this are the outlier these days. Everyone agrees that disclosure is good -- both in terms of doing right by users, and in terms of maintaining a credible threat that vendors better take security seriously or else they will get publicly pantsed. Everyone agrees pure obscurity (basically a cover-up) is bad. Almost everyone agrees pure transparency is bad (since vendors & users are sitting ducks that way). I say almost only because every once in a while (for example today) you get a guy trying to make a name for himself, or trying to make an example out of someone, or just generally being a dick.

about a year ago

Google Security Expert Finds, Publicly Discloses Windows Kernel Bug

dhavleak Re:aiding and abetting 8 computer fraud and abuse (404 comments)

If he is the smartest person on the planet, he's likely to be the only one knowing about it. If not, it's likely that someone smarter than him found it before him and didn't tell anybody publicly (they may however still have sold it privately to Russian malware writers - months ago).

That's an artificial condition you're placing there (he could merely be the only whitehat who happened to be researching this particular approach at the time, or this particular module). And the way you speculate that the worst case scenario could be true, anyone can speculate that the best-case scenario could have been true. How many times do we play out that nonsensical speculation before people tire of pointing it out and accept it as an unknown for which the best guess has to be made? The illogical part of your scenario btw is that the malware writers got access to this months ago and yet we know of no exploits -- that part does not make sense.

With 7 billion people, the chance of being the smartest person on the planet is pretty low, though.

Two things: first, as I already pointed out, this is an extremely arbitrary requirement you are placing; and second, if he is not the smartest person in the world, he should pause before acting unilaterally as he did.

I for one don't want to bet my security on someone's ego. I want to be able to work around the problem (unplugging being the last resort, but still up to me to choose), as soon as possible.

You're always welcome to your preferences -- just note that Ormandy's ego made you less secure, because whatever % of blackhats knew about this hole, that % just got elevated to 100%, and there is still no patch and no mitigations aside from stupid ones (pull-the-network-cable / shut-down-the-system type nonsense -- I'd love to see you recommend that to a hospital or a business with a straight face). MS (software vendors in general -- could be anybody) doesn't have the liberty of catering to just your preferences anyway. They have to consider the world at large, which is full of users that will never even know of this event having transpired, so they will not resort to any of the actions that were prescribed above. Many of them wouldn't even know how to, even if they stumbled upon the news of this exploit.

Anyone who finds a security problem and keeps it hidden from the public is therefore a bad guy, whether or not they sell the information privately to malware writers.

The choice is not quantized in this manner, my friend. Between "disclose" and "hidden from the public" lie infinite shades of grey that you fail to mention. Absolutely nobody -- not one single person of the several hundred posts on this issue -- is asking him to keep it hidden from the public *forever*. Just that he give MS a fighting chance to fucking patch it before he fucks them (or rather their users) over.

about a year ago

Google Security Expert Finds, Publicly Discloses Windows Kernel Bug

dhavleak Re:aiding and abetting 8 computer fraud and abuse (404 comments)

Nope. The only thing unique here is his public disclosure of his knowledge. You have no way of knowing who else knew of this bug.

This is a pointless line. I can't prove that *nobody else* knew about it. You can't prove that anyone else *did* know about it. But if a spear-phishing attack occurs, we will know immediately that this is now in the wild. In the meantime, a patch is not ready, no mitigating instructions are ready, but the exploit is known to world+dog now -- so the likelihood of such an attack has gone up if anything.

The flaw in the argument of anyone defending Ormandy is that once you consider the generic case in which a security hole can be found in anyone's product (not just MS's), you'll realize that supporting Ormandy's actions here is the same as saying that responsible disclosure serves no purpose. That's a really extreme stance and it is absolutely not what the industry as a whole has agreed to (Google included, AV vendors and the vast majority of security researchers included). And if that's not what you're advocating, then why does MS not deserve the same courtesy afforded to the rest of the industry? Why does Ormandy think this hole he found is so much more deserving of glory than all the thousands of holes found by researchers each day, who do disclose responsibly? What was so special about this one particular issue that necessitated this extreme step that left users vulnerable?

about a year ago

Google Security Expert Finds, Publicly Discloses Windows Kernel Bug

dhavleak Re:But not to give them a chance to correct it fir (404 comments)

Allow me to explain then:

You might use every security mechanism and precaution in the book, and still get 'hacked' if you are a sufficiently interesting target. If you're some random obscure individual (which most of us are) you can get away by merely using good passwords, and keeping your OS and apps updated, not visiting compromised sites, not opening random-ass attachments (basic measures like that). Does it mean the software and processes you use are secure? Hell no. Does it mean you are effectively secure? Could be. But even so, you're relying on the fact that you are a random, uninteresting, obscure individual that nobody would bother to single out and specifically target.

Same thing applies to GP's point about asymmetric keys. You can use PKI -- it won't matter if you do careless shit with it and cheaply cough up the symmetric key you exchanged with it. Or if you chose some idiotic block cipher. Or if you do something silly like saving unencrypted data to a temp file. It's a house of cards and attackers are not necessarily looking at it top-down the way you are -- they'll exploit anything they can. You can't say you use encryption therefore you are truly secure and you do not rely on security through obscurity. No -- you have to discard useless clichés like that and actually take the trouble to follow secure development practices.

What that means is -- you draw up your threat models, find your weak points, eliminate the issues you find, mitigate what can't be eliminated, follow other secure practices (banned APIs, code reviews, static analysis tools, fuzz testing, pen testing) etc. etc. -- it's a long list. You do everything you possibly can, and THEN, you still acknowledge that you cannot possibly have found all flaws. There will be exploits found that you'll need to patch. So you prepare an infrastructure for developing, testing, deploying said patches, and a notification system for people to let you know about these holes when they find them.

All of the above exists. All that effort has been taken. Ormandy just defeated it all because he doesn't give a fuck.

about a year ago

Google Security Expert Finds, Publicly Discloses Windows Kernel Bug

dhavleak Re:aiding and abetting 8 computer fraud and abuse (404 comments)

If they/he can be prosecuted, Microsoft should be prosecuted too. They made the bug and a whole lot of other bugs. That's basically aiding and abetting AND neglect.

Let he who has written absolutely secure software cast the first stone.

In other words, that's a pretty dumb response. No software can be guaranteed to be secure -- it's just not provable that you have no security bugs, no matter how good your processes and testing (never mind the fact that new *classes* of exploits are found on a decently regular basis). Besides, if you look at the details of Ormandy's bug, it's actually an extremely esoteric issue, so it's not like you can call MS negligent for not finding it first.

about a year ago

Google Security Expert Finds, Publicly Discloses Windows Kernel Bug

dhavleak Re:But not to give them a chance to correct it fir (404 comments)

Secrets that cost substantially less to discover than the value of whatever they're protecting are merely "obscured".

Well put. This is exactly what I was explaining to OP. This is also why the obscurity mantra is irritating -- think about the cost/effort of breaking the scheme instead of repeating tired slogans.

Just hoping nobody deciphers your corporate login's minified .js or throws a fuzzer at your kernel isn't going to cut it.

Sure -- but MS isn't doing that.

Him telling you rather than selling it to the highest bidder actually does put him on your team ... unless what you're trying to protect isn't what the actual system is ostensibly there to protect, but is instead your image.

Incorrect -- he told (sold, whatever) it to everyone (including the so-called highest bidder) for a price of $0. He did not achieve the "on your team" scenario. He could arguably have abetted the "highest bidder", and therefore could arguably have hurt MS's customers. He did hurt MS's image, which I do not care about -- but it seems like the only rationale for his insistence on going this route. I repeat -- I don't give a damn about MS's image. But I can't help but think Ormandy likes to hurt them, and that influenced his choice. Because as I said, he certainly achieved no goal of protecting users, and he arguably has hurt them, with the route he chose.

about a year ago

Google Security Expert Finds, Publicly Discloses Windows Kernel Bug

dhavleak Re: But not to give them a chance to correct it fi (404 comments)

The non sequitur there is in asserting that because the whitehat hasn't disclosed his findings, others haven't also independently found the hole and been more mum about it.

Blindly assuming there is no exploit in the wild is silly. Blindly assuming there is an exploit in the wild is equally silly. You have to examine each case as you encounter it.

Using an exploit that has been publicly disclosed, and thus everyone is super paranoid about it and actively trying to plug it -- OR -- a nice little treasure trove of privately discovered exploits that aren't public knowledge, that you can quietly switch to once the hole you are currently using gets discovered?

For the 99% of users that don't read Slashdot, vendor-sec, arstechnica, cnet, etc. etc., how would they even know about the exploit? I'm sick of people making this point without thinking it through for even a moment. Public disclosure will reach the black hats -- guaranteed! Public disclosure will not reach the 99% of non-technical computer users in the world -- also guaranteed! How effing complicated is this point that you seem unable to grasp??

"Saving face" for the company facilitates the real blackhats by keeping admins and users ignorant of the threat.

Nobody gives a shit about saving face. Responsible disclosure can save *users* from encountering exploits before patches are ready -- if there wasn't an active exploit out there, there damn sure is one right now, and there damn well isn't a patch. Give it one patch cycle, two at the most. How fucking hard is that? You keep on and on and on treating it as if the choice is "disclose now" vs. "never disclose". Why do you insist on being so dense?

All public disclosure does is make real blackhat attackers silently move to their next vector, and cause a spike in script kid activities. (And of course, make the software vendor look bad.)

*This* is a non sequitur and shows a lack of understanding as well. First, it assumes there were already active exploits, so it does not account for having put users in danger if there were none. Second, there is no patch yet, so attackers will not "move on" as you put it. Third, attackers do not move to their next vector. They generally use a broad spectrum of attacks and assume some low percentage of success (which is all they need for creating botnets). They just had an easy one drop into their laps, and they know that a defense doesn't exist for now. That's all that happened. Blackhats are not a homogeneous entity either. Even if a small number of them knew about this exploit earlier, now they all do and the race is on.

about a year ago
top

Google Security Expert Finds, Publicly Discloses Windows Kernel Bug

dhavleak Re:But not to give them a chance to correct it fir (404 comments)

I never said I believed in "unbeatable protection". That's a strawman. I basically said that "out of sight, out of mind!" is not a proper risk mitigation practice. Most certainly NOT the same thing as professing a belief in perfect security.

"out of sight, out of mind!" is a bigger strawman than anything I said. Responsible disclosure, so MS has at least a chance to respond -- that's all people are calling for. And the point wasn't about unbeatable protection -- the point was to dispel this silly one-liner that only serves to hinder meaningful discussion of security issues.

Shitty obscurity-based half-assery fakes being strong to deter attempts, but fails easily on inspection. Something like using a password to XOR a file and calling it "encrypted", or doing what Sony did and reusing the same salt over and over again, completely defeating the purpose of the salt in the process.

*This* is a strawman. Don't point out stupid shit that other people did, and claim that it makes your point valid. Remember again the general recommendation -- the cost of breaking your scheme must be greater than the value of what you're protecting. If you're using the scheme above, you should be using it to protect minesweeper scores at best.
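To make concrete why the two schemes mentioned above are theater rather than protection, here is a minimal Python sketch (hypothetical values, purely for illustration): a repeated-key XOR cipher leaks its key to anyone who can guess a fragment of plaintext, and a reused salt makes identical passwords hash identically, so one precomputed table cracks every account at once.

```python
import hashlib
from itertools import cycle

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """'Encrypt' by XORing data against a repeating key -- obscurity, not security."""
    return bytes(a ^ b for a, b in zip(data, cycle(key)))

# 1) XOR "encryption": ciphertext XOR plaintext == key stream, so any known
#    plaintext fragment hands the attacker the repeating key directly.
secret = b"attack at dawn"
key = b"pw"
ciphertext = xor_bytes(secret, key)
recovered = xor_bytes(ciphertext[:2], secret[:2])  # attacker guesses "at"
assert recovered == key  # key fully recovered

# 2) Salt reuse: two accounts with the same password produce the same hash,
#    defeating the entire point of salting.
fixed_salt = b"same-salt-everywhere"
h1 = hashlib.sha256(fixed_salt + b"hunter2").hexdigest()
h2 = hashlib.sha256(fixed_salt + b"hunter2").hexdigest()
assert h1 == h2  # identical hashes -- the "salt" added nothing
```

With a unique random salt per account (e.g. `os.urandom(16)`) the two hashes would differ even for identical passwords, which is exactly the property the reuse destroys.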

Relying on "don't tell anybody! We'll get to it eventually, and if you don't tell, nobody will find out!" is bullshit, which is what typically happens with so-called "responsible disclosure." I have heard of serious exploits hanging around for YEARS after being "responsibly disclosed."

This is a strawman again. Simply disclose responsibly. The patch cycle is well documented. If one cycle goes by without a patch, you can remind them. If the second one goes by and no patch, disclose. How hard is that? Answer -- not hard at all. When you're not out to fuck people over, and don't have some agenda you're trying to further, it's really not that hard to be reasonable.

I understand that you can't fix the hole instantly, and that the patch needs to be tested to make sure it doesn't poke another hole elsewhere.

It's not just that. The patch needs to be tested to ensure that it actually works! That was an issue the last time Ormandy did this -- he provided a binary patch that did not fix the issue! In addition to that, it has to not cause other bugs (not necessarily exploits -- but bugs -- because those too can cause work stoppage etc.). When the hole is being exploited already, all this goes out the window -- exchange information openly and get that shit fixed ASAP. When it's not yet being exploited actively, you can spare users a lot of headache, and a lot of lost productivity by simply following responsible disclosure guidelines that are well documented and well-known to Ormandy himself.

However, informing the people at the most risk, (customers), that they need to take some mitigating actions to reduce the threat, and to watch for signs of exploit until the patch is ready is what is the responsible thing for the software vendor to do.

Dude, you can drop the veneer about caring about MS's customers. Ormandy can drop that too. There's a clear course of action by which Ormandy and MS could have done right by them together. Ormandy made sure that's no longer an option, and they are in greater danger now than was strictly necessary. And you are defending his actions out of glee that MS is looking like an idiot.

NOT hide the exploit and try to forget about it, while less scrupulous crackers silently use it in combination with other exploits to commit fraud, steal company privileged information, steal user personal data, build botnets, and worse, while pretending that "it won't happen, because nobody squealed!"

Nobody is asking to HIDE anything! You complained about a strawman earlier??? Responsible disclosure does not imply infinite time. Ormandy works for Google, right? He can follow their guidelines if he wants to be pedantic about it -- 60 days normally, 1 week if there are known active exploits. He's being an asshole, and people like you are encouraging asshole behavior.

informing the people at the most risk, (customers), that they need to take some mitigating actions to reduce the threat

What mitigating actions?? There's a fugging exploit out there, the customer is the whole fugging world, and 99% of them do not read Slashdot, arstechnica, etc -- how do you get the word out to all of them??

about a year ago
top

Google Security Expert Finds, Publicly Discloses Windows Kernel Bug

dhavleak Re: But not to give them a chance to correct it fi (404 comments)

Exactly. MS has a well-documented monthly patch cycle. Give them until the next patch release date if you don't think there are exploits in the wild. Give them 1-week if there are already exploits. Similar rules for any other vendor depending on their patch cycles etc. Little common sense is all it takes.

about a year ago
top

Google Security Expert Finds, Publicly Discloses Windows Kernel Bug

dhavleak Re:But not to give them a chance to correct it fir (404 comments)

Yes, sure, soon they'll come after me with their quantum computers.

They won't -- because you are obscure -- get it?

about a year ago
top

Google Security Expert Finds, Publicly Discloses Windows Kernel Bug

dhavleak Re:Target Microsoft (404 comments)

* Know that the threat exists and be more careful

Be more careful how? Know that it exists how? I am Joe Public and I do not read Slashdot, Arstechnica or the other places where this bug will be mentioned. The black hats do and they are in the know and actively creating exploits. 99% of users have no idea that this happened, and therefore are not being 'more careful'.

* Disconnect the system from the internet

Why? Remember, people do not even know of this event! Second -- you will recommend people disconnect from the internet before you censure Ormandy for precipitating the event that made that necessary? Am I in the twilight zone?

* Disable the system.

Another practical solution -- wonderful! Pause to remember for a second that responsible disclosure can lead to getting a patch before such drastic measures are needed.

* Patch the machine code using a debugger.

What does Joe Public mean to you, and what do you think Joe Public's proficiency level is with debuggers and machine code?

about a year ago
top

Google Security Expert Finds, Publicly Discloses Windows Kernel Bug

dhavleak Re:aiding and abetting 8 computer fraud and abuse (404 comments)

TFA states nothing about him giving MS 4 weeks. AFAIK that was the previous time he went public on MS's ass; this time he just went out guns blazing as soon as he discovered the issue.

In terms of obligations (to release secure software), I disagree. You don't even need to look at EULAs (for Windows or for commercial Linux distros). There is no such thing as "absolutely secure" software. You simply cannot release X software and make the statement "X is secure -- I guarantee it". What you can do (and what Microsoft does better than most -- this is documented -- here's a random citation in support of that) is you can follow secure development practices, use defense in depth, and have a good patching mechanism.

I'm not going to bash Ormandy for publicizing the bug any more than I would bash somebody for publicizing a bug in the Linux kernel. Come to think of it, aren't all the bugs for Linux (and other open source projects) public on a bug site?

This is plain incorrect. You have a responsible disclosure mechanism for Linux just as much as you do for MS/Windows (or any other product/entity/whatever). Disclosing an exploit on Linux without first giving the maintainers a chance to patch is fucking them over just the same as this is fucking MS over. The fact that he's done this twice now just shows that he's doing it out of spite.

about a year ago
top

Google Security Expert Finds, Publicly Discloses Windows Kernel Bug

dhavleak Re:But not to give them a chance to correct it fir (404 comments)

Except that he's right. The "security through obscurity is no security at all" mantra is the first thing that people who know nothing about security fall back on again and again. Asymmetric keys are merely *better* obscurity than most other means. You're still just counting on not being a sufficiently interesting target -- on your keys never being put to the test by somebody with access to a proper compute cluster (or maybe a quantum computer), or on attackers not bypassing them and exploiting you some other way.

You should know this already. Speaking generally, all security mechanisms can be broken, so you need to ensure the cost of exploiting is greater than the thing you get access to after exploiting.
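The cost-versus-value point above can be made concrete with a back-of-envelope sketch. The guess rate below is an assumption for illustration, not a real benchmark; the takeaway is the shape of the numbers, not their exact values.

```python
# Back-of-envelope sketch (assumed rate, purely illustrative): security is
# economics -- compare the attacker's expected cost against the asset's value.
GUESSES_PER_SECOND = 1e12          # assumed: a well-funded GPU/ASIC cluster
SECONDS_PER_YEAR = 31_557_600      # Julian year

def years_to_exhaust(bits: int) -> float:
    """Worst-case years to brute-force a keyspace of `bits` bits at the assumed rate."""
    return (2 ** bits) / GUESSES_PER_SECOND / SECONDS_PER_YEAR

for bits in (40, 64, 128):
    print(f"{bits}-bit keyspace: ~{years_to_exhaust(bits):.3g} years")

# At this (generous) rate: 40-bit falls in about a second, 64-bit in roughly
# seven months, while 128-bit needs on the order of 1e19 years -- far beyond
# any attacker's budget, which is why key size, not secrecy of the algorithm,
# carries the security.
```

The same arithmetic explains the "not a sufficiently interesting target" point: an attacker with finite compute spends it where expected payoff exceeds expected cost, so raising the cost above the asset's value is the whole game.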

about a year ago
top

Google Security Expert Finds, Publicly Discloses Windows Kernel Bug

dhavleak Re:Target Microsoft (404 comments)

Really? And what do you expect Joe Public to do differently now that they are informed? Tavis Ormandy, and you, have zero regard for the party at risk, so please drop the veneer. You (and he) are only interested in damaging MS -- MS's users are collateral damage, that's all.

about a year ago

Submissions

top

Carriers And FCC Talk ETFs, Main Issue Overlooked

dhavleak dhavleak writes  |  more than 4 years ago

dhavleak (912889) writes "The four major wireless carriers, as well as Google, on Tuesday responded to an inquiry from the Federal Communications Commission regarding early termination fees (ETFs). There were no major surprises in the findings, and no discussion as to why consumers who provide their own device and are not under contract pay the same monthly charges as those who are under contract, who should in theory have higher monthly charges representing payments towards the ETF."
Link to Original Source
top

What's cooking in MSR: Microsoft Tag

dhavleak dhavleak writes  |  more than 5 years ago

dhavleak writes "Some interesting looking stuff from the boffins in MSR: Microsoft Tag: "With the Microsoft Tag application, just aim your camera phone at a Tag and instantly access mobile content, videos, music, contact information, maps, social networks, promotions, and more. Nothing to type, no browsers to launch!" Device support is fairly extensive (iPhone, WinMo, Blackberry and more), and tag scanning appears to work quickly and reliably from different distances and angles.

Long Zheng has an overview on his site. The tag is similar to a barcode, but has obvious visual differences — colored vs. black and white, and triangles vs. squares or lines. Perhaps this was required to optimize the area in which the tag data was encoded, while also yielding better results for pattern recognition software. The technology looks interesting — but will it get the adoption necessary for it to be successful? What applications do you see for such technology?"

Link to Original Source
top

Anti-Obama Email Hoaxes Still Fool Dumb White Guys

dhavleak dhavleak writes  |  more than 6 years ago

dhavleak writes "The hoax e-mails that paint Democratic presidential front-runner Barack Obama as unpatriotic and as a clandestine Muslim are still having an impact on voters' perception of the Illinois senator, reports a Wednesday story in the New York Daily News.

This kind of sentiment expressed through voter interviews — however remote — could play a factor in the elections. What's Slashdot's take on how Obama can counter the whisper campaign?"

Link to Original Source
top

Microsoft Denies Putting 'Copyright Cop' in Zune

dhavleak dhavleak writes  |  more than 6 years ago

dhavleak writes "The New York Times suggested Wednesday that future versions of the Zune might come with a tiny cop capable of catching digital lawbreakers.

Ina Fried on CNET reports that Microsoft denied this with a statement saying "Microsoft has no plans or commitments to implement content filtering features in the Zune family of devices as part of our content distribution deal with NBC".

Microsoft spokesman Adam Sohn echoed the sentiment. "We've agreed to work with these guys on a number of issues, but we have no plans or commitment to put filtering technology as part of this arrangement with NBC.""
top

High Tech Attempt at Suppressing The NC Black Vote

dhavleak dhavleak writes  |  more than 6 years ago

dhavleak writes "A Washington DC advocacy group called Women's Voices, Women's Vote is being accused of waging a high-tech voter suppression campaign, after voters in predominantly black districts in North Carolina began receiving automated phone calls implying that they hadn't properly registered to vote in the upcoming Democratic primary.

It would be interesting to know how many votes have already been lost to these tactics, if the media will actually pick up this story, and if there will be any cases filed against the group behind this considering this is a Class 1 felony in North Carolina."

Link to Original Source
top

'CSI' sleuths out Microsoft's latest technology

dhavleak dhavleak writes  |  more than 6 years ago

dhavleak (912889) writes "A guidance counselor at a Manhattan prep school is murdered while the prom is taking place in the gymnasium. Forensic scientists for the New York police attempt to recreate the crime scene by uploading hundreds of camera phone thumbnail photos snapped at the dance onto a computer. The PC screen fills up in a concentric square pattern, revealing a wide shot of the gym at the center. Investigators can manipulate the images to show close-ups of the scene from every angle.

This episode of the CBS crime drama CSI: NY, scheduled to run Wednesday night, is fiction. But the technology at its core, Microsoft's Photosynth software, is real. It analyzes scores of images for similarities and stitches them into a three-dimensional reconstruction."
top

Bruce Schneier Weighs in on Lock-in Strategies

dhavleak dhavleak writes  |  more than 6 years ago

dhavleak (912889) writes "Wired has an article from Bruce Schneier on the intersection of security technologies and vendor lock-ins in IT. From TFA: With enough lock-in, a company can protect its market share even as it reduces customer service, raises prices, refuses to innovate and otherwise abuses its customer base. It should be no surprise that this sounds like pretty much every experience you've had with IT companies: Once the industry discovered lock-in, everyone started figuring out how to get as much of it as they can."
top

Gates Fndatn. & Rotary pledge $200M to fight P

dhavleak dhavleak writes  |  more than 6 years ago

dhavleak writes "From TFA:
Aiming to inject $200 million into the global campaign to eradicate polio, the Bill & Melinda Gates Foundation announced Monday that it is awarding a $100 million challenge grant to the Evanston-based Rotary Foundation.

Scientists and public health professionals have been debating whether eradication is possible. Some have argued that resources should be directed at trying to contain the disease, which would be far less costly than trying to eliminate it.

That idea was dismissed during Monday's announcement."

Link to Original Source
top

Liberal Democracy Becoming Corporate Dictatorship

dhavleak dhavleak writes  |  more than 7 years ago

dhavleak writes "John Pilger (a reputed investigative journalist and documentary film maker who acted as a war correspondent in conflicts in Vietnam, Cambodia, Egypt, India, Bangladesh and Bahrain) recently gave a stirring speech at the Socialism 2007 conference in Chicago. The speech is a startling reminder of how mainstream journalism is just an extension of government, and it encourages people to keep reading between the lines to see the concealed role of the media. From the transcript:

Real information, subversive information, remains the most potent power of all — and I believe that we must not fall into the trap of believing that the media speaks for the public. That wasn't true in Stalinist Czechoslovakia and it isn't true of the United States.
and..

We now know that the BBC and other British media were used by the British secret intelligence service MI-6. In what they called Operation Mass Appeal, MI-6 agents planted stories about Saddam's weapons of mass destruction, such as weapons hidden in his palaces and in secret underground bunkers. All of these stories were fake. But that's not the point. The point is that the work of MI-6 was unnecessary, because professional journalism on its own would have produced the same result.
"

Link to Original Source

Journals

dhavleak has no journal entries.
