
Security By Obscurity — a New Theory

Soulskill posted more than 2 years ago | from the can-we-try-this-at-airports dept.

Security 265

mikejuk writes "Kerckhoffs' Principle suggests that there is no security by obscurity — but perhaps there is. A recent paper by Dusko Pavlovic suggests that security is a game of incomplete information and the more you can do to keep your opponent in the dark, the better. In addition to considering the attacker's computing power limits, he also thinks it's worth considering limits on their logic or programming capabilities (PDF). He recommends obscurity plus a little reactive security in response to an attacker probing the system. In this case, instead of having to protect against every possible attack vector, you can just defend against the attack that has been or is about to be launched."
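The summary's "defend against the attack that has been or is about to be launched" idea can be sketched as a toy model (all names here are hypothetical, not from Pavlovic's paper): the defender patches only the vectors an attacker has actually probed, so defensive work scales with observed activity rather than with the whole attack surface.

```python
def reactive_defense(observed_probes):
    """Toy model of reactive security: patch only what is probed.

    Instead of hardening every possible vector up front, the defender
    responds to each observed probe, so effort tracks the attacker's
    actual activity rather than the full theoretical attack surface.
    """
    patched = set()
    for vector in observed_probes:
        patched.add(vector)  # respond to the probe on this vector
    return patched

# Two distinct vectors probed -> only two vectors need patching,
# even if the system has hundreds of theoretical vectors.
patched = reactive_defense(["xss", "xss", "open_port"])
```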


Remember it only talks about cryptography (5, Informative)

tech4 (2467692) | more than 2 years ago | (#37579776)

I hate it when people always seem to take the phrase out of context and apply it to mean any kind of security, like network security or the old Windows/Linux battle. It's a completely different kind of situation, and in the latter it's especially true that security by obscurity is a hardening layer. It's also why Linux has managed to stay as (consumer) malware-free to date, even though it still has a fair share of its own worms and other security problems.

Re:Remember it only talks about cryptography (5, Funny)

davester666 (731373) | more than 2 years ago | (#37579914)

This part of the summary is just great: "... is about to be launched"

Yes, having somebody sitting there as the attack is taking place and somehow guessing how the attacker will try to compromise your system makes it much easier to defend against the attack. Of course, just correctly guess sooner, and then you can fix the system beforehand and then you don't need someone sitting there....

Re:Remember it only talks about cryptography (2)

elucido (870205) | more than 2 years ago | (#37580378)

This part of the summary is just great: "... is about to be launched"

Yes, having somebody sitting there as the attack is taking place and somehow guessing how the attacker will try to compromise your system makes it much easier to defend against the attack. Of course, just correctly guess sooner, and then you can fix the system beforehand and then you don't need someone sitting there....

It also assumes we can determine the capability or the resources the enemy is willing to employ. It's a lot safer to assume you don't know than to try and assume you know.

I don't think they understood. (2, Insightful)

khasim (1285) | more than 2 years ago | (#37579788)

Obscurity only makes your security "brittle". Once broken, it is completely broken. Like hiding your house key under a flower pot.

Which means that the real security is the lock on the door. All you've done is allow another avenue of attacking it.

Re:I don't think they understood. (3, Interesting)

jhoegl (638955) | more than 2 years ago | (#37579824)

There is another way to look at this.

Imagine you have gold behind a locked door. Now imagine you have 50 locked doors.

This is your security through obscurity.

Re:I don't think they understood. (3, Insightful)

Cryacin (657549) | more than 2 years ago | (#37579842)

Well, if you had them behind 2^128 doors you'd have a trust certificate :P

Re:I don't think they understood. (1)

jhoegl (638955) | more than 2 years ago | (#37579912)

Hahah, good point.

Although these days CAs are becoming the weak link.
They will have to rethink centralized security, big time.

Re:I don't think they understood. (0)

TwinkieStix (571736) | more than 2 years ago | (#37580010)

I'm just splitting hairs here, but 2^128 bits... Each of those bits is a boolean, on or off. Each of those locks is nothing more than a light switch. What makes those bits work is that flipping any one doesn't provide the feedback of a further open door. So it's actually more like a lock on a door with 2^128 light switches that must all be flipped into just the right positions before only ONE door opens.
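The switch analogy works out numerically (a purely illustrative sketch): n independent on/off switches with no per-switch feedback give 2^n settings, and a brute-force search expects to try half of them before hitting the right one.

```python
def keyspace(bits):
    """Number of distinct settings of `bits` independent on/off switches.

    With no feedback per switch, an attacker must guess whole settings,
    not individual switches, so the search space is exponential in bits.
    """
    return 2 ** bits

def expected_guesses(bits):
    """A brute-force search expects to try half the keyspace on average."""
    return keyspace(bits) // 2

# 128 switches -> 2**128 settings, far beyond exhaustive search.
combinations = keyspace(128)
```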

Re:I don't think they understood. (2)

buchner.johannes (1139593) | more than 2 years ago | (#37580048)

Think about it a little more and you'll see that it's the same thing. A number and its representation in a numeral system share a duality. Also, it's not 2^128 bits, it's 128 bits, but you probably meant that anyways.

Re:I don't think they understood. (1)

jbengt (874751) | more than 2 years ago | (#37580058)

I, for one, don't trust certificate "authorities"

Re:I don't think they understood. (1)

RoFLKOPTr (1294290) | more than 2 years ago | (#37579868)

There is another way to look at this. Imagine you have gold behind a locked door. Now imagine you have 50 locked doors. This is your security through obscurity.

You hid the gold under the floorboards. Consider your security broken.

Re:I don't think they understood. (1)

jhoegl (638955) | more than 2 years ago | (#37579888)

This isn't Rook.

Series or parallel? (1)

khasim (1285) | more than 2 years ago | (#37579872)

Does the attacker have to get through 50 doors to get the gold, not all locked with the same key (etc.)? This is good security (unless they're locked with the same key and so forth).
..or..
Does the attacker have to get through ONE door that is NOT locked (the security depends upon the attacker not finding the right door)?
..or..
Does the attacker just have to check the doors for recent fingerprints to guess which door to attack?

Re:Series or parallel? (1)

jhoegl (638955) | more than 2 years ago | (#37579906)

Well, there are many methods. One would be honeypotting; another, in line with the "Security through Obscurity" thinking, is making the attacker choose which door to attack. The point being, the hacker doesn't know which because of security through obscurity. What you can do is honeypot all the other doors and know about the attempt, or set up an alert and know about the attempt.

Frankly, if it is that important to be connected to the internet, but requires high security, the cost is justified.

You can even set up a "wag the dog" approach where you let it slip that Site A is how everyone accesses things, and have a few tricks there, while site B or C is where it actually is.

My point is that security through obscurity is a valid approach, but under the right direction and/or policies.
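The "honeypot all the other doors" idea can be sketched as follows (door names and the alert mechanism are made up for illustration): decoy doors never open, but every attempt on one is recorded, so the obscurity of which door is real is backed by detection.

```python
def make_doors(real_door, decoys):
    """Build a door-trying function plus an alert log.

    Only `real_door` admits anyone; every attempt on a decoy door is
    recorded, turning the attacker's guessing into exposure.
    """
    alerts = []

    def try_door(name):
        if name == real_door:
            return "access"          # the one real entrance
        if name in decoys:
            alerts.append(name)      # honeypot: log the probe
            return "alert"
        return "no such door"

    return try_door, alerts

# 50 doors; only door_17 is real, the other 49 are honeypots.
decoys = {"door_%d" % i for i in range(50)} - {"door_17"}
try_door, alerts = make_doors("door_17", decoys)
```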

I don't think that's correct. (1)

khasim (1285) | more than 2 years ago | (#37579968)

One would be honeypotting; another, in line with the "Security through Obscurity" thinking, is making the attacker choose which door to attack.

Just as in my house key example. The attacker has to know WHICH flower pot has the house key.

The problem is that once that piece of information is uncovered, the entire security implementation is broken.

The point being, the hacker doesn't know because of security through obscurity.

Yes, I understand the concept. I just don't agree with it. Again with the house key example: the work of putting a decent lock on the door is negated by having an easier, alternative avenue of attacking the door.

My point is that security through obscurity is a valid approach, but under the right direction and/or policies.

My point is that it is not, because all it does is allow another, easier avenue of attack.

If it does not, then it is not "security through obscurity".

Re:I don't think that's correct. (1)

jhoegl (638955) | more than 2 years ago | (#37580054)

I am not suggesting leaving it open and just not telling anyone. That would be crazy.

What you want to do is keep it as secure as possible, but give the potential intruder something else to work on that yields no results but increases their risk of exposure.
Security through obscurity does not automatically assume that it is a door left wide open, just that no one knows about it.

Consider things that are currently unknown to the public, such as Air Force One. Only a few people know about its defenses and potential. However, they do not leave it out in the open devoid of guards and security. So, in addition to the security surrounding it, you also have obscurity of its potential.

Do you understand the thinking now?

Nope. That would be "obscurity". (3, Informative)

khasim (1285) | more than 2 years ago | (#37580156)

I am not suggesting leaving it open and just not telling anyone. That would be crazy.

No, that would be "security through obscurity".

What you want to do is keep it as secure as possible, but give the potential intruder something else to work on that yields no results but increases their risk of exposure.

But that does nothing to improve the security of the system. If the attacker chooses the correct door (or whatever) then you're left with only the defenses of that door.

Security through obscurity does not automatically assume that it is a door left wide open, just that no one knows about it.

No. The "security THROUGH obscurity" means that the door IS unlocked (or unlockable with the hidden key) and that the "security" comes from no one KNOWING that it is a way in. That's what the "through" part of that statement means.

Do you understand the thinking now?

I've always understood it. And you're making a very common mistake. Obscurity != Secret in "security through obscurity".

Re:Nope. That would be "obscurity". (0)

Anonymous Coward | more than 2 years ago | (#37580294)


No. The "security THROUGH obscurity" means that the door IS unlocked (or unlockable with the hidden key) and that the "security" comes from no one KNOWING that it is a way in. That's what the "through" part of that statement means.

Says who? And who uses only one kind of security? I'm using security through obscurity right now, because I'm posting anonymously, obfuscating my identity, but it doesn't mean my computer is in the DMZ or that my account has no password.

obfuscate, verb
1. Render obscure, unclear, or unintelligible.

Re:I don't think they understood. (1)

Pence128 (1389345) | more than 2 years ago | (#37580172)

Or you could add 6 bits to your key.

Re:I don't think they understood. (1)

bondsbw (888959) | more than 2 years ago | (#37579862)

Put up more doors with more locks... that'll fix it! (Just don't tell them about the hidden door into the basement...)

Re:I don't think they understood. (2, Insightful)

jmerlin (1010641) | more than 2 years ago | (#37579892)

And once you guess their encryption password, their encryption isn't completely broken? Your analogy is flawed; fundamentally, you are assuming someone leaves a key lying around in an easily accessible area. No security we have isn't fundamentally based on obscurity. None.

Re:I don't think they understood. (1)

jhoegl (638955) | more than 2 years ago | (#37579952)

Exactly.

In fact, viruses are developed based on obscurity. I mean, it is in our everyday lives. To believe that obscurity is somehow the Achilles heel is just crazy thinking.

You have it wrong. (3, Informative)

khasim (1285) | more than 2 years ago | (#37580006)

And once you guess their encryption password, their encryption isn't completely broken?

You're confusing the "obscurity" portion of that statement.

Passwords should rely upon the difficulty in cracking them due to their complexity. The system is known. The password is not known.

Security through obscurity refers to the workings of the system being hidden. Such as the key under the flower pot opening the door. Once that information is discovered, the system is cracked.
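The distinction drawn here, the system is known and only the password is secret, is exactly how modern password storage works. A minimal sketch using PBKDF2 (the salt size and iteration count are illustrative choices): the algorithm, salt, and iteration count can all be public without weakening the scheme, because the security rests entirely on the secrecy and strength of the password.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Derive a password hash; everything but the password is public."""
    salt = salt or os.urandom(16)  # random, non-secret salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify(password, salt, digest, iterations=200_000):
    """Recompute the derivation and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

Publishing this code costs an attacker nothing; hiding the algorithm (obscurity) would add nothing either, which is Kerckhoffs' point.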

Re:You have it wrong. (1)

jhoegl (638955) | more than 2 years ago | (#37580060)

So once someone gets your password, the access is granted?

So how is this different?

Time. (1)

khasim (1285) | more than 2 years ago | (#37580190)

In the end, it all comes down to time.

If it takes you 20,000 years to crack my password with a password cracker, then the system is secure for 20,000 years. After which it is cracked (until I change my password again).

If the password is hidden on a post-it under my keyboard, then there is an easier, alternative avenue of attack. And the system is cracked in a minute.

So, having the "security through obscurity" resulted in a less secure system that was cracked a lot quicker than the original system.
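The arithmetic behind figures like "20,000 years" is straightforward; a small sketch (the attacker's guessing rate is an assumption, here a billion guesses per second):

```python
def years_to_crack(entropy_bits, guesses_per_second=1e9):
    """Expected brute-force time for a secret with the given entropy.

    On average the attacker searches half the keyspace before hitting
    the right value; the post-it under the keyboard bypasses all of it.
    """
    seconds = (2 ** entropy_bits / 2) / guesses_per_second
    return seconds / (365 * 24 * 3600)

# A ~70-bit password holds off a billion-guess-per-second cracker for
# tens of thousands of years; a hidden post-it reduces that to minutes.
strong = years_to_crack(70)
```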

That is why you do not use "security through obscurity".

Re:Time. (1)

hazem (472289) | more than 2 years ago | (#37580346)

> That is why you do not use "security through obscurity".
Well, if you define "security through obscurity" to such an absurd point, then of course there's no value to obscurity.

However, obscurity is an important part of any security system, but only an idiot would rely on obscurity as the only source of security, and only someone being obtuse would assume that that's what others mean.

Soldiers use "security through obscurity" by wearing camouflage. It's by no means their only means of security. It helps prevent observers from seeing them, but it doesn't prevent a motion sensor from detecting them. Does that mean the use of camouflage is invalid? Of course not. They use it for its intended purpose, obscuring them from observation (an important part of their security), but they must rely on other methods for securing against other threats.

You also have to take into consideration the threats you are securing against. I can use many obscurity methods to hide the fact that I'm running a spy network out of my house this week. And all those methods may suffice to prevent the casual police officer or even citizen from finding me out. Of course, that won't protect me from a double-agent who already knows where my house is, nor will it protect me from deliberate surveillance once I'm already a suspect. I have to use other methods against that. However just because those things are possible, it doesn't mean I should give up the idea of obscurity and just hang out a sign that says, "spies meeting here". There is still value in the obscurity. However it's only part of the security puzzle.

Of course if you want to narrowly define it into absurdity by a scheme where you put a key to your door under a flower pot, tell everyone you have gold in your house and then say you put a key under your flower pot, then of course, that's stupid. But who would think otherwise?

Which is the whole point. (1)

khasim (1285) | more than 2 years ago | (#37580414)

Well, if you define "security through obscurity" to such an absurd point, then of course there's no value to obscurity.

You may view it as "absurd" but it having no value is the whole point.

In these SPECIFIC instances, obscurity only REDUCES the security of a system.

Soldiers use "security through obscurity" by wearing camouflage.

The problem is that we're discussing computer security. Physical security is a different matter and has very limited usefulness as an analogy.

Of course if you want to narrowly define it into absurdity by a scheme where you put a key to your door under a flower pot, tell everyone you have gold in your house and then say you put a key under your flower pot, then of course, that's stupid.

No. You misunderstood that. The "obscure" part is where you do NOT tell everyone that the key is under the flower pot.

The key is the "secret". Just like a password is a "secret".

No matter how good the lock is, once the "obscure" part is found, the security is cracked.

It may SOUND "absurd" but there are a LOT of people arguing for exactly that in this thread.

Re:You have it wrong. (0)

jmerlin (1010641) | more than 2 years ago | (#37580084)

You're claiming known (for now) calculable difficulty to crack is better than a system where the difficulty is not easily calculated. Perhaps, but not necessarily. Both are fundamentally based on obscurity, though. The difficulty with which you can correctly guess the obscurity I would term "strength" and that matters, of course. But it's still obscurity.

Re:I don't think they understood. (1)

burris (122191) | more than 2 years ago | (#37580176)

No, because you can change the key, which is much easier than changing the cryptosystem. With a good source of entropy, I can generate large numbers of good keys all day long. Good cryptosystems are much harder to come by, so the cryptosystem is designed to make changing keys easy. Cryptosystems are also designed to minimize the impact of a single key being discovered. Forward secrecy, for instance, where stealing a key might not get you anything at all.
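The "generate good keys all day long" point is one line with a cryptographically secure random source; a sketch (the 128-bit size is an illustrative choice):

```python
import secrets

def new_key(bits=128):
    """Generate a fresh random key from the OS entropy source.

    Keys are cheap and disposable; the cryptosystem is not. If a key
    leaks, rotation is a single call rather than a redesign.
    """
    return secrets.token_bytes(bits // 8)

# Rotating after a suspected compromise:
old_key = new_key()
current_key = new_key()  # old_key is now retired
```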

Re:I don't think they understood. (0)

jmerlin (1010641) | more than 2 years ago | (#37580350)

If I get a big chunk of data that's encrypted, YOU cannot change the key anymore. It's the same issue, but I agree, the basis of strength should rely solely on obscurity that is very easily and rapidly changed. A desirable trait in any security system.

Re:I don't think they understood. (2)

rainsford (803085) | more than 2 years ago | (#37580184)

No, the encryption ISN'T completely broken. If I have an encryption system that uses passwords for security, and you guess my password, the security is broken for this instance of the system...but I can just pick another password and security is restored. "Security through obscurity" doesn't mean security based on ANY secret, it means security through secrecy in some fundamental element of the system, especially when such a secret makes the system brittle. If you steal my key, I can simply rekey a lock and I'm just as secure as before. But if I ALWAYS leave a spare key in the same spot, once you figure that out the entire system is fundamentally broken. That's security through obscurity.

Re:I don't think they understood. (0)

jmerlin (1010641) | more than 2 years ago | (#37580308)

Now you're talking about modularization of the "security mechanism" so that the weak piece is fundamentally simple to exchange. This is definitely a strength and a huge asset to security, but it is still based on obscurity.

Re:I don't think they understood. (2)

jbengt (874751) | more than 2 years ago | (#37580266)

Your analogy is flawed, fundamentally you are assuming someone leaves a key lying around in an easily accessible area. No security we have isn't fundamentally based on obscurity. None.

Secrecy is not identical to obscurity. The meaning of obscurity in "Security Through Obscurity" refers to the overall scheme and methods. The secrecy of keys and the like is assumed, and does not mean that the security system is based on obscurity as the term is understood in the context of discussing security through obscurity.

From the Wikipedia article linked in TFS:

Using secure cryptography is supposed to replace the difficult problem of keeping messages secure with a much more manageable one, keeping relatively small keys secure. A system that requires long-term secrecy for something as large and complex as the whole design of a cryptographic system obviously cannot achieve that goal. It only replaces one hard problem with another. However, if a system is secure even when the enemy knows everything except the key, then all that is needed is to manage keeping the keys secret.

Think of going to two banks to decide where to store some irreplaceable valuables.
In one bank, they tell you about their armed security guards; they show you the vault and describe how thick the steel is and how it operates on a time clock and a combination. They detail how they give you one key to the safety deposit box and how they keep the other, and that you need both keys to open the box. They tell you that before they let you past the armed guards they require you to show identification and sign in, and only then will they accompany you to your box to turn their key while you turn yours to open the box. They even give you the blueprints to the bank to assure you how well it's built.
The other bank tells you that they can't say what they do with your valuables, because they need to keep it a secret in order to maintain security.
Which bank would you prefer?

Of course, if you are handling your own security, adding multiple layers, including obscurity, can help. But at the core, you need to implement protections similar to the first bank's, or you are just fooling yourself into thinking you are as secure.

Re:I don't think they understood. (1)

burris (122191) | more than 2 years ago | (#37580332)

Here is a real-world example where getting a key gets you nothing. Let's say you're targeting someone specific to get their secret cookie recipe or their confession, and you've installed a wiretap on their net connection and have been recording all of the traffic. The target has been chatting with their friends over some encrypted chat thing and you're sure they've been discussing the recipe/crime. So one day your goons stop the mark, steal their laptop which contains their private keys, and beat them with a hose until they give up the password that unlocks them. You type in the password right there and make sure it works. Maybe you just try a password cracker and get lucky.

Now you're golden, you can go back and decrypt all that old traffic and get the recipe, right? No, the private keys stored on the hard drive were only used to authenticate the exchange of randomly generated temporary keys used to do the actual encryption and do you no good at all.

Let's say you manage to steal the key material undetected and guess the passphrase protecting it. Now you can passively watch all of the traffic that goes by? No, you must do an active "man in the middle" attack.

Let's say you are very powerful and are capable of doing an active attack during the conversation. Now you're all set to get your mark's secrets as soon as they discuss them again, right? No, because your mark is using voice or video chat, recognizes whom they are speaking with, and compares the hashes of the temporary keys being used to encrypt the conversation before talking about anything sensitive.
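That last step, comparing hashes of the temporary keys over a channel the attacker cannot impersonate, can be sketched like this (the word list and truncation length are made up for illustration; real protocols such as ZRTP use a similar "short authentication string"):

```python
import hashlib

WORDS = ("alpha", "bravo", "charlie", "delta",
         "echo", "foxtrot", "golf", "hotel")

def key_fingerprint(session_key):
    """Render a short, speakable fingerprint of an ephemeral session key.

    Both parties read theirs aloud over the voice/video channel. A man
    in the middle holds two *different* session keys (one per victim),
    so the two fingerprints won't match and the attack is exposed.
    """
    digest = hashlib.sha256(session_key).digest()
    return " ".join(WORDS[b % len(WORDS)] for b in digest[:4])
```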

Re:I don't think they understood. (5, Interesting)

thegarbz (1787294) | more than 2 years ago | (#37579940)

Which means that the real security is the lock on the door.

But that is also just obscurity in another form. The obscure part is that the attacker doesn't know the combination to the lock, or doesn't know how the tumblers specifically are keyed. Otherwise a key could be made up.

All security is obscurity, just different levels of it. In some schemes the obscure value is shared (a hidden directory on the server that isn't crawled but can nonetheless be accessed by a direct link). Some obscure values aren't (public key encryption).

The hiding the key under the rock is analogous to using a weak form of obscurity to hide a strong one. Which in this case is no better than the obscurity of not letting anyone know that the door lock doesn't actually work anyway.

Secret != Obscure in this instance. (1)

khasim (1285) | more than 2 years ago | (#37580100)

But that is also just obscurity in another form.

Nope. Similar to the use of "theory" in science. The common usage of the word is not exactly the same as the usage in this context.

The system is designed so that it can only be opened by the correct secret (the key in this case). That does not mean that the key is "obscure" even though it is the "secret".

Obscurity refers to the system. The key is still the secret. The obscurity is the fact that you're hiding (obscuring) the secret under a flower pot.

To put it another way, using a password cracker to "find" a password and spending 2^128 years doing so is very different from "finding" a password hidden under a keyboard.

Re:I don't think they understood. (1)

Dr. Tom (23206) | more than 2 years ago | (#37580272)

The key is a secret. If it gets loose you have no security. However, the security protocol (if it is a good one) will allow rekeying; keys are one-time only, and if a key is revealed you can immediately switch to a new key the attacker doesn't know (keys are just random numbers).

Re:I don't think they understood. (0)

Anonymous Coward | more than 2 years ago | (#37579962)

Now for a protip that I fear only a few will be able to fully comprehend, and hence may be modded down:

All security is security by obscurity.
It's just how obscure it is.

Obscurity is how much you know and don't know about it.
So "completely unknown" is just the most obscure you can go. Hence XORing with that is the most secure you can go.
Well, except for "physically impossible to know", as may be possible in quantum encryption. (I'm not an expert in that field.)

Re:I don't think they understood. (1)

TwinkieStix (571736) | more than 2 years ago | (#37579966)

But isn't the pattern of the very lock you describe a "secret", or obscure, inasmuch as the lack of knowledge about how to duplicate that key is what keeps intruders out?

Most forms of security rely on some form of obscurity to decide which group of people is allowed access and which group of people is not. A password or a private key, if known to everybody would allow everybody into the system. Only those who hold that extra piece of information are able to access the system through the means by which it was intended to be accessed.

I believe that the point of contention is whether obscuring the system in some way prevents people from entering the system in ways it was NOT intended to be accessed. We could make an argument either way here: Does holding back information on a vulnerability until the vendor has a few days to release a patch make the system more secure in that period of time? Maybe, because fewer people (good and bad) know about this exposed surface. Does keeping ALL of the source code to an application away from open peer review make the system less secure? Maybe, but perhaps the answer depends on whether that specific system has more security brain-power put behind breaking into it or making it better. There is probably a lot more brain-power behind keeping popular security libraries secure, so open peer review is surely better. But I suppose there exists at least one piece of software with no open source community that, if it suddenly showed up on github, would see the black-hats use it negatively before the white-hats start contributing patches.

Re:I don't think they understood. (1)

Tasha26 (1613349) | more than 2 years ago | (#37580014)

I agree. Security in CompSci has to be a bit more than putting up safeguards (firewall, AV, encryption) or going from single DES to triple DES just to make brute-force attacks more difficult. Surely the only solution is to develop a language or maths for it. This way we can reason about security problems and be able to say for sure: this is provably secure, just like 1+1=2. After the logic comes the implementation details.
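The DES-to-3DES point is quantifiable: keyspace grows exponentially in key bits, so two-key 3DES's roughly 112-bit effective strength makes exhaustive search 2^56 times harder than single DES's 56-bit key. A sketch of that arithmetic:

```python
def brute_force_ratio(bits_a, bits_b):
    """How many times larger keyspace b is than keyspace a.

    Single DES: 56-bit key. Two-key 3DES: ~112-bit effective strength
    against brute force, so exhaustive search becomes 2**56 times
    harder -- more difficult, but still not a proof of security.
    """
    return 2 ** (bits_b - bits_a)

des_vs_3des = brute_force_ratio(56, 112)
```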

Re:I don't think they understood. (1)

gatkinso (15975) | more than 2 years ago | (#37580390)

Yes, well, what if they can't find the lock?

Sure (3, Insightful)

EdIII (1114411) | more than 2 years ago | (#37579798)

That's fine and all. If you want to create your security through incomplete information, or different tactics and strategy, that is a choice.

Just don't be a childish whining little bitch and run to the FBI to stop the big bad anti-social "hackers" from revealing your used-to-be incomplete information in security conventions and trying to have them arrested.

You get double whiny bitch points trying to invoke copyright to prevent the "leakage" of your incomplete information.

I certainly get the point of the article, but a system that is secured through well thought out and tested means will always trump a system where, "Golly Gee Willickers Bat Man.... I hope they don't find the secret entrance to our bat cave that is totally unprotected and unmonitored".

Re:Sure (1)

icebraining (1313345) | more than 2 years ago | (#37579932)

What's a password - or even a private key - if not incomplete information?

Re:Sure (4, Insightful)

EdIII (1114411) | more than 2 years ago | (#37580008)

I don't think that is what they mean by incomplete information.

In the context of security through obscurity it has always, to me, seemed to mean that your method and process of providing security is not well understood and it is this fact that is providing the majority of the security. If somebody figures out the method or process, your security is greatly compromised.

A password, or private key, is not a good example in this case. I think a better example would be that passwords and private keys protect documents created by a certain well known company, but that their methods and processes were so laughable that you could create a program to bypass the keys themselves.

Or in other words... the only thing keeping Wile E. Coyote (Super Genius) from getting to Bugs Bunny through the locked door is his complete lack of awareness that there is nothing around the door but the desert itself. Take two steps to the right, two steps forward, turn to your left, and there is Bugs Bunny. You did not even have to get an ACME locksmith to come out.

Re:Sure (1)

circletimessquare (444983) | more than 2 years ago | (#37580070)

you attempted to redefine his terms, and then you attempted to change the topic. in other words, you don't have an answer

aka, incomprehensibility by affability

because the real answer would be to concede that icebraining is correct: it's just a matter of perspective of what security is, and what obscurity is, and, on some philosophical level, they are indeed the same concept after all. not that this is a mighty thunderclap of a realization, and not that it completely changes security paradigms. but it is indeed an interesting, noteworthy platitude, a musing you might have while sitting on the toilet: security IS obscurity, after everything is said and done

so just admit the simple platitude is true on some abstract, unuseful and inconsequential level, and move on with your life

Re:Sure (4, Informative)

EdIII (1114411) | more than 2 years ago | (#37580152)

Uhhhhhh..... okay

I am not redefining terms here at all.

Granted, this is from Wikipedia:

Security through (or by) obscurity is a pejorative referring to a principle in security engineering, which attempts to use secrecy (of design, implementation, etc.) to provide security. A system relying on security through obscurity may have theoretical or actual security vulnerabilities, but its owners or designers believe that the flaws are not known, and that attackers are unlikely to find them. A system may use security through obscurity as a defense in depth measure; while all known security vulnerabilities would be mitigated through other measures, public disclosure of products and versions in use makes them early targets for newly discovered vulnerabilities in those products and versions. An attacker's first step is usually information gathering; this step is delayed by security through obscurity. The technique stands in contrast with security by design and open security, although many real-world projects include elements of all strategies.

icebraining is not correct here, and your assertion that I am changing the definition from the norm and widely accepted one is false. Security through obscurity, as a concept, is not something vague and a matter of perspective. It is a very well defined term in security and has been for quite some time.

According to the definition above, a password is not incomplete information, or information being obscured, as it is being presented in the context of the article and the principle of security through obscurity.

Making this a philosophical debate that a password is also obscurity at some level has nothing to do with the principles that are mentioned.

Re:Sure (0)

circletimessquare (444983) | more than 2 years ago | (#37580232)

you keep resisting son. all you have to do is admit the abstract and inconsequential truth

Re:Sure (1)

EdIII (1114411) | more than 2 years ago | (#37580278)

Whatever man. I am not resisting anything.

Passwords and secret keys don't have anything to do with the principles of security through obscurity.

I am getting the distinct impression I am feeding a troll, so the kitchen is closed. Come back tomorrow.

Re:Sure (0)

Anonymous Coward | more than 2 years ago | (#37580326)

Maybe you can make a movie about resisting zombies. That would be great.

Re:Sure (0)

Anonymous Coward | more than 2 years ago | (#37580186)

Maybe you could make a movie about an incomprehensible zombie. That would be great.

Yet... (-1, Troll)

dev635 (2474860) | more than 2 years ago | (#37579808)

The vast majority of DRM schemes are broken, and I think that says something about security through obscurity....
Yet the VideoGuard [evenweb.com] satellite/cable TV DRM scheme has remained unbroken for 8 years, despite the fact that it is very popular and really sucks (you need their set-top-box to view it),
and it's really frightening that one day they will create a scheme that can't be broken....

Re:Yet... (0)

Anonymous Coward | more than 2 years ago | (#37579866)

link is goatse, just warning you all

Re:Yet... (4, Funny)

RoFLKOPTr (1294290) | more than 2 years ago | (#37579882)

A new kind of goatse troll in which the troll commenter hides his actions by contributing to the thread in a positive manner.

*golfclap*

Re:Yet... (2)

NoSleepDemon (1521253) | more than 2 years ago | (#37579904)

Goatse through obscurity?

Re:Yet... (2)

RoFLKOPTr (1294290) | more than 2 years ago | (#37579950)

Wow that didn't even cross my mind. So in addition to contributing to the thread positively, the goatse troll is actually relevant to the topic at hand. Absolutely amazing. A technological marvel.

Re:Yet... (1)

Shoe Puppet (1557239) | more than 2 years ago | (#37579972)

On top of that, he was logged in while an AC pointed out it's goatse.

Re:Yet... (0)

Anonymous Coward | more than 2 years ago | (#37580160)

This being a leftists' web site, figures that just regurgitating PC platitudes passes for "contributing to the thread in a positive manner."

Re:Yet... (0)

Anonymous Coward | more than 2 years ago | (#37580194)

I was able to view it without a set-top-box. As a matter of fact, it was wide open.

Nature disagrees (3, Interesting)

Anonymous Coward | more than 2 years ago | (#37579816)

Camouflage is the oldest and most natural form of security on the planet.

Re:Nature disagrees (0)

Anonymous Coward | more than 2 years ago | (#37579990)

Any sufficiently high amount of obscurity becomes "real security".
AES for example is just very, very obscure, as there are many, many things you don't know. (Like: the bytes of the key.)

While for a key under a flower pot, there is just one thing to know.
If you have enough pots and enough keys, you can achieve AES-equivalent security. (Taking as much time to try pots as it takes to try keys.)
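The pot-counting arithmetic above can be sketched in a few lines; the central assumption here is the comment's own (that checking one flower pot costs about as much as testing one AES key), which is a thought experiment, not a benchmark:

```python
import math

# Back-of-the-envelope sketch: measure obscurity as the log2 of the number of
# equally likely hiding places an attacker must search. Assumes (as the
# comment does) that trying one pot costs about as much as trying one key.
def obscurity_bits(hiding_places: int) -> float:
    return math.log2(hiding_places)

print(obscurity_bits(1))         # a single flower pot: 0.0 bits -- no security
print(obscurity_bits(2 ** 128))  # AES-128-equivalent: 128.0 bits
```

So "enough pots" is literal: you'd need 2^128 of them, which is why hiding places in practice only ever add a handful of bits on top of real security.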

Re:Nature disagrees (1)

RenHoek (101570) | more than 2 years ago | (#37580024)

Carrying a bigger stick than your opponent is the oldest and most natural form of security.

Re:Nature disagrees (1)

perpenso (1613749) | more than 2 years ago | (#37580336)

Camouflage is the oldest and most natural form of security on the planet.

Carrying a bigger stick than your opponent is the oldest and most natural form of security.

Actually it's camouflage *plus* the bigger stick. The camouflage gives one the potential advantage of deciding if and when the bigger stick comes into play.

Re:Nature disagrees (0)

Anonymous Coward | more than 2 years ago | (#37580094)

Or being brightly coloured and covered in toxic poison...the other extreme.

Half the story (1)

Anonymous Coward | more than 2 years ago | (#37579822)

Obscurity is good when backed up by good code that takes time and effort to break. That is often not the case; instead, obscurity is used to hide the large holes caused by badly written code. Obscurity buys you nothing when those holes can be broken into through blind attacks. And it is largely for this reason that we don't like obscurity, as it also motivates companies not to fix these holes, since looking for them continuously costs time and money (which they have to spend themselves if they want obscurity).

So in actuality, open code is the best compromise in general.

Obvious (0)

Anonymous Coward | more than 2 years ago | (#37579828)

Of course obscurity brings extra security. If you, for example, leave a note on your desk with the password written down, you get some extra security by obscuring it.
The expression that "obscurity isn't security" comes from the idea that relying on only obscurity for security is a bad design choice.
Ideally you do both. First you encrypt your information for security. Then you not only hide your key but also what encryption algorithm you used, for obscurity.
Obscurity will not protect you from the people who know what they are doing, but it might protect you from the script kiddies, and that is a lot better than nothing.

Misapplication of Kerckhoff's Principle (3, Interesting)

telekon (185072) | more than 2 years ago | (#37579840)

Kerckhoff's Principle specifically applies to cryptosystems. Not only does TFA describe more of a generalized application to systems and code, but it's not really describing 'security through obscurity.' It's describing informational arbitrage, i.e., profiting (not necessarily financially) from an imbalance of knowledge on one side of a two-participant game.

The dynamic adaptive approach has its merits, particularly as it is increasingly clear that most security is only the illusion of security, maintained until it is breached. But traditional 'security through obscurity' refers to systems for which the only security measure in place is maintaining the secrecy of a protocol, algorithm, etc.

It seems to me the ideal approach is a balanced one that embraces the UNIX philosophy: cover the 90% of most common attack vectors with proven security measures (and update practices as needed), and take a dynamic adaptive approach to the edge cases, because those are the ones most likely to be breached if you've done the first 90% correctly.

Re:Misapplication of Kerckhoff's Principle (0)

Anonymous Coward | more than 2 years ago | (#37580354)

But traditional 'security through obscurity' refers to systems for which the only security measure in place is maintaining the secrecy of a protocol, algorithm, etc.

"etc."? You mean like a... key? ^^
Yes, for "real security" you can just as well say that "the only security measure in place is maintaining the secrecy of a" key.

Obscurity can be cured (1)

Gothmolly (148874) | more than 2 years ago | (#37579846)

Once you're no longer obscured, you're done.

Luck (1)

El_Muerte_TDS (592157) | more than 2 years ago | (#37579848)

Call it luck, or educated guess, call it fate for all I care. One miss, and you're screwed.

This is new? (1)

denshao2 (1515775) | more than 2 years ago | (#37579874)

I thought it was obvious.

SbO: lame (2)

Dr. Tom (23206) | more than 2 years ago | (#37579878)

Security by Obscurity is lame. The REAL test of a good security protocol is when you publish ALL the details and the bad guys STILL can't get in. If you are merely relying on somebody, somewhere, not saying anything, you are asking for it. All the real security products that people actually trust are open source. I will never, ever, ever, ever, trust anything that is closed source. There could be a back door, and you can't argue with that. Again, and again, and again, the ONLY security algorithms worth talking about are OPEN. If you can publish your work in public and STILL be secure, THAT is security. That is quite possible, it has been done many times. If you can't do that, you are just making excuses for your lame security that relies on a secret. Look at history. Your secret will be published, and then your product will be dead.

Re:SbO: lame (1, Insightful)

jmerlin (1010641) | more than 2 years ago | (#37579920)

Someone else can get in -- all they need is a little bit of information you've left out (like a key). Obscurity. Right there. Self defeating posts are self defeating.

Re:SbO: lame (0)

Dr. Tom (23206) | more than 2 years ago | (#37579996)

derp read wikipedia you are wrong

Re:SbO: lame (1)

jmerlin (1010641) | more than 2 years ago | (#37580188)

It's a little scary someone from the NIH with a doctorate in a field is so short sighted. Never mind, that's really, really scary. It explains a lot, really.

Re:SbO: lame (-1, Flamebait)

Dr. Tom (23206) | more than 2 years ago | (#37580288)

I've written security code. Have you? Check my slashdot id number. I'm an oldfag. I've been here a LONG time. I know what I'm talking about, and you don't.

Re:SbO: lame (1)

jmerlin (1010641) | more than 2 years ago | (#37580370)

Writing code does not imply intelligence nor skill, similarly, neither does duration of residence. Nice try, though.

Re:SbO: lame (0)

Dr. Tom (23206) | more than 2 years ago | (#37580382)

and your credentials are what, internet troll

Re:SbO: lame (0)

Anonymous Coward | more than 2 years ago | (#37580164)

You can still unpack this a bit. Anything server-side is never truly "open" in the sense that you can't truly know that it has your open software, and anything client-side is always "open" in the sense that at the end of the day the compiled code is exposed to you. The only thing preventing you from auditing the closed-source code is "security by obscurity", because compiled code is much more difficult to understand than well-written source code.

Even if the enemy knows the system other than the key, they shouldn't be able to crack the system -- but it does not follow that if the enemy doesn't know the system then if the enemy did know the system they'd be able to crack it. Obscurity isn't inherently a flaw in a security system, it's just not sufficient to provide security.

The idea behind open source being a security feature is that the chances of the bad guy finding the critical flaw in closed code before the good guys find it, is greater than the chances of the bad guy finding a critical flaw in open source code before the good guys find it. I'm not really convinced that's true. The "with many eyes, all bugs are shallow" maxim may apply when all parties are equally invested in eliminating bugs, but with security you have a team that wants to exploit bugs (directly or indirectly), and a team that wants to fix them.
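That race between fix-first and exploit-first can be made concrete with a toy simulation. To be clear, this is my own illustrative model, not anything from TFA: the "independent eyeballs" assumption and all the rates are made up:

```python
import random

# Toy model: flaws are found by independent "eyeballs". Each time step the
# defenders get `defender_eyes` chances to find the flaw, then the attackers
# get `attacker_eyes` chances. Whoever hits it first wins the race.
def first_finder(defender_eyes, attacker_eyes, p_find=0.001, rng=None):
    rng = rng or random.Random()
    while True:
        for _ in range(defender_eyes):
            if rng.random() < p_find:
                return "defender"
        for _ in range(attacker_eyes):
            if rng.random() < p_find:
                return "attacker"

rng = random.Random(42)  # fixed seed so the run is repeatable
trials = 2000
defender_wins = sum(first_finder(50, 5, rng=rng) == "defender"
                    for _ in range(trials))
print(defender_wins / trials)  # with 10x the eyes, defenders win most races
```

Whether real defenders actually field more effective eyeballs than attackers is exactly the part being doubted above, so treat the 50:5 ratio as the knob under debate, not a finding.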

Missing the point? (3, Interesting)

nine-times (778537) | more than 2 years ago | (#37579898)

Well maybe I'm wrong, but I always thought the complaints of "security by obscurity" were not that obscurity couldn't be helpful to security, but that it was a bad idea to rely on obscurity.

It seems obvious to me that the more complete the attacker's knowledge, the greater the chance of a successful attack. If an attacker knows which ports are opened, which services are running, which versions of which software are running which services, and whether critical security patches have been applied, for example, it's much easier for them to find an attack vector if there is one. You're more secure if attackers don't know that information about your systems, because it forces them to discover it. That takes additional time and effort, and they may not be able to discover that information at all.

However (and here's the point), it's not a good idea to leave your systems wide open and insecure and hope that attackers don't discover the holes in your security. It's not smart to rely on the attacker's ignorance as the chief (or only) form of protection, because a lot of times that information can be discovered. It's true that "obscurity" is a form of security, but it's a fairly weak form that doesn't hold up over time. The truth tends to out.

Rational, but flawed. (1)

RyanFenton (230700) | more than 2 years ago | (#37579902)

Past performance IS a proper indication of how the future will be, if everything stays as expected. But reality is rarely fully what we expect it to be.

Defending against known threats is certainly part of the task of securing something - but the other part is observing what makes up the thing you're defending, looking for weaknesses, and working out from that how to react when those weaknesses are exploited. Skipping those last steps is one of the worst symptoms of groupthink: complacency.

One of the best ways to develop groupthink? Pretend that everything you're doing is a crucial secret, and cut yourself off from the entire outside world - and thus never invite outside input to help you adapt to anything new.

Obscurity works by default - because it's all about protecting your precious secret from any experimentation. But once it becomes important to someone to test your secret, your obscurity is a very limited defense.

Ryan Fenton

Re:Rational, but flawed. (1)

siride (974284) | more than 2 years ago | (#37579908)

I don't think you even read the article.

Sure, it's a strategy... (0)

Anonymous Coward | more than 2 years ago | (#37579938)

...but not one which is effective enough to be your sole strategy. Obscurity should be your first line of defence and your final fallback, but between those points you should have the best defence you can muster.

"Security by obscurity" is misleading. (2)

ZouPrime (460611) | more than 2 years ago | (#37580004)

As an information security professional, I've always found the whole "security by obscurity" issue somewhat misleading. By repeating the mantra, I feel many people have forgotten its true meaning.

Security shouldn't RELY on obscurity. That's true. But it doesn't mean obscurity, by itself, doesn't provide security benefits.

There are many examples where this is obvious. For example, would you publish your network topology on your public website? Of course not. Even if you were convinced that its security and access control are air tight, the cost of keeping such documentation "obscure" is negligible versus its usefulness to a potential attacker.

The problem arises when obscurity is used in lieu of proper security. Unfortunately, it still happens too often. But while the presence of obscurity may be seen as suspicious by an outside party trying to evaluate the security of a system, it shouldn't be considered evidence of its insecurity, as it sometimes is.

Finally, I understand the "many eyes" argument, and how public disclosure of the security details of a system can help improve it. After all, nobody would think about trusting a crypto algorithm that hasn't been made public and scrutinized accordingly. But this logic cannot be generalized to all systems in all contexts.

anything that can be made by a man (1)

circletimessquare (444983) | more than 2 years ago | (#37580018)

can be unmade by another man

it's that simple

the rest is just an arms race to keep one slight step ahead in constant effort and constant motion

Re:anything that can be made by a man (1)

Anonymous Coward | more than 2 years ago | (#37580218)

An arms race, indeed. And a war. But as Sun Tzu noted, one of the most important strategies of war is misleading your opponent.

OK, great, but not at the expense of users (1)

bersl2 (689221) | more than 2 years ago | (#37580044)

The entire concept of security by obscurity acts as a justification for keeping secrets. It often sweeps up information whose release will help users much more than it will help attackers. Once it becomes a sanctioned tool of security, instead of an objective of the security, those who set up and maintain the security lean on obscurity like a crutch.

I realize my argument is an appeal to the slippery slope, but I see it everywhere in society. People, organizations, and governments can get into frames of mind wherein they lose focus of the overall goal of information security and just start obscuring everything, which makes their interactions with others difficult and sometimes hostile.

In fairness, the article itself says as much:

Typing and profiling are frowned on in security. Leaving aside the question whether gathering information about the attacker, and obscuring the system, might be useful for security or not, these practices remain questionable socially. The false positives arising from such methods cause a lot of trouble, and tend to just drive the attackers deeper into hiding.

On the other hand, typing and profiling are technically and conceptually unavoidable in gaming, and remain respectable research topics of game theory. Some games cannot be played without typing and profiling the opponents. Poker and the bidding phase of bridge are all about trying to guess your opponents' secrets by analyzing their behaviors. Players do all they can to avoid being analyzed, and many prod their opponents to sample their behaviors. Some games cannot be won by mere uniform distributions, without analyzing opponents' biases.

Both game theory and immune system teach us that we cannot avoid profiling the enemy. But both the social experience and immune system teach us that we must set the thresholds high to avoid the false positives that the profiling methods are so prone to. Misidentifying the enemy leads to auto-immune disorders, which can be equally pernicious socially, as they are to our health.

But inevitably, this kind of caveat is thoroughly ignored by most people. They will only hear something like "Security by Obscurity Now Considered Useful", and a whole new set of administrative roadblocks will be thrown up in the name of security, when in fact it's helping very little, if any; furthermore, those who try to circumvent the new measures to do something they consider to be within the permitted use of the network may be considered security risks (or even malicious entities outright) and will be dealt with as such, when nothing of the sort was intended.

Security thru (1)

JustOK (667959) | more than 2 years ago | (#37580072)

Security thru absurdity is just crazy enough to work.

Re:Security thru (0)

Anonymous Coward | more than 2 years ago | (#37580142)

Hey, that was supposed to be a big secret in the security community!

But back to the subject. Security by obscurity can be a good idea, if you have active intrusion detection. No need to tell your opponent where you have placed your honeypots.

Secrecy != Obscurity (1)

ewanm89 (1052822) | more than 2 years ago | (#37580086)

In information security, secrecy does not equal obscurity.

Obscurity is when I give out access cards for the doors of my building, but all the "magic" in the card is a single magnet, so just changing the magnetic field at the reader will unlock the door.

Another example of obscurity: I give out access cards but encode them all with the same code, and just tell people that a given card only works for particular non-restricted zones (this is more like DRM systems).


Excerpts from article summary (0)

Anonymous Coward | more than 2 years ago | (#37580104)

ARE what AMAZON uses basically: Does ANYONE know what or how their overall schema & OS they use are?

I don't & last I knew of/checked on, it was some proprietary thing I'd never heard of, OR I could NOT get an answer!

(Which struck me as odd actually, could be totally their "own" but I doubt it actually - why rebuild the wheel in other words, but - when you have billions? Then again... why not!)

"Dusko Pavlovic suggests that security is a game of incomplete information and the more you can do to keep your opponent in the dark, the better." - Posted by Soulskill on Saturday October 01, @06:12PM
from the can-we-try-this-at-airports dept.

Anyhow - that seemed more like security by obscurity to me actually, ala this report -> http://uptime.netcraft.com/up/graph?site=amazon.com [netcraft.com]

AND, heck, like Microsoft?? You can't even DDoS them... that's right - ever wondered WHY you don't hear that MS or AMAZON get DDoS'd? They can't be is why.

How/Why? Well - They've "overbuilt" their network capacity hugely, & to SUCH an extent, you're not going to "overload" or "saturate" connections to them, and IF you try?

"plus a little reactive security in response to an attacker probing the system." - Posted by Soulskill on Saturday October 01, @06:12PM
from the can-we-try-this-at-airports dept.

Heh - "Yes Kids" - They take proactive & reactive measures, in that they monitor for it, & close such connections past a certain point/threshold...

(The unroutable types that DDoS use, think 10.x.x.x for example, that do NOT go "outbound online" to the public net (which also used to make the IP Stack go nuts & thus, the CPU too, until most OS' patched for it))...

Then, the IP stack (MS example here) also has settings of:

SynAttackProtect, EnableDynamicBacklog, MaximumDynamicBacklog, MinimumDynamicBacklog, TcpMaxHalfOpen, TcpMaxHalfOpenRetried

Those of you that have MS' based OS can do the same via those IP stack settings, mind you, & those ALL work IN COMBINATION with one another @ THE OPERATING SYSTEM'S IP STACK LEVEL to stave off DDoS/DoS attacks too!
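For reference, a hedged sketch of where those legacy knobs live. The registry paths and value names below are as given in old Windows Server hardening guides; many are ignored or removed on Vista and later, so verify against current Microsoft docs for your OS version. This sketch only *reads* the values (writing them requires admin rights and is not being recommended here):

```python
# Windows-only sketch: inspect the legacy TCP/IP anti-SYN-flood registry
# values named above. Prints None for any value not set (OS default applies).
import winreg

TCPIP = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"
AFD = r"SYSTEM\CurrentControlSet\Services\AFD\Parameters"

def read_dword(path, name):
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            value, _type = winreg.QueryValueEx(key, name)
            return value
    except OSError:
        return None  # key/value absent -- the OS default applies

for name in ("SynAttackProtect", "TcpMaxHalfOpen", "TcpMaxHalfOpenRetried"):
    print(name, read_dword(TCPIP, name))
for name in ("EnableDynamicBacklog", "MinimumDynamicBacklog",
             "MaximumDynamicBacklog"):
    print(name, read_dword(AFD, name))
```

(The split between the Tcpip\Parameters and AFD\Parameters keys is how the legacy guides documented it: the SYN-attack values belong to the TCP/IP driver, the dynamic backlog values to the Winsock AFD driver.)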

(Of course, in combination with hardware measures noted above both MS & Amazon do, to stall off "the unstoppable attack method" (the DoS/DDoS)).

APK

P.S.=> It's the "how & why" you NEVER see Amazon OR Microsoft getting news that "anonymous/lulzsec" (& the like) "took down MS/Amazon via DoS/DDoS"... both companies use some "security-by-obscurity" (MS in closed source code for the MOST part to most folks), & also precautionary settings as well as unknowns (AMAZON'S OS TYPE USED, from above).

(Because you KNOW that'd be "big news" IF either went down to a DDoS/DoS, of course... & especially around here with all the "Pro-*NIX" sentiment regarding Microsoft (from the sockpuppet FUD spreading trolls that keep 100 user accounts to attempt to fool others with that bullshit is more like the real truth of it though - using "jump on the bandwagon" puny marketing ploys @ psychological manipulation of the weak-minded who don't check into things themselves))... apk

Layers (1)

lowy (91366) | more than 2 years ago | (#37580106)

People, many of your implementation examples aren't "either/or" situations. From a practical standpoint you are usually better off with a layer of each: security and obscurity. For example, a strong vault that is hidden is better than the same one exposed. A steganographically-encrypted file is safer than that same file in the public domain. How much safer is open for debate, but you are probably safer with both layers in most individual *implementation* situations.

Where the debate comes alive is in two main areas:

1) Design. An open system design tends to be more trustworthy for reasons explained elsewhere. Obscurity in the *design* of any particular layer is usually a bad idea (but obscurity in the choice of layers may be a good thing, e.g. which vault you chose or which tested open source encryption algorithm you picked).

2) Testing. If many people use the same system it becomes obvious if a vulnerability is found, and more people are looking for cracks. That same system in a one-off implementation is less obviously secure, even though (paradoxically) it may have been made more secure through obscurity.

Camouflage vs compartmentalisation (0)

Anonymous Coward | more than 2 years ago | (#37580130)

I think anyone using camouflage must understand the attacker's level of sophistication - machines are excellent augmenters of 'stone-turning'. Compartmentalisation of programs would be useful - from other existing programs, and of course from the system itself. But flexibility in the upgrade system and user interaction make the shared software environment a bit like cooking - proper procedures will always work, barring untidiness or the completely unexpected.

Real security by obscurity (1)

ShooterNeo (555040) | more than 2 years ago | (#37580228)

What about true obscurity? What kind of OS or software runs on the computers in a nuclear missile silo? Do those computers even use an OS? The point is, with little or nothing published, an attacker who was able to access systems like those would have little realistic hope of hacking them. There are no 0-day lists, no marketplace to pick up working cracks, no books describing the internals of such a system.

Re:Real security by obscurity (1)

Dr. Tom (23206) | more than 2 years ago | (#37580424)

You are young. You don't know. Eventually they'll figure out the secret, if it's valuable. Your security is flawless if nobody wants your data. You are a script kiddie. Pro hackers can figure out what OS is being used by the way it responds to packets. The point is that if you are relying on secrets like what OS version you are running, then you lose.

This shit counts as CS paper? (1)

Alex Belits (437) | more than 2 years ago | (#37580246)

Seriously?

Complexity and utilisation (0)

Anonymous Coward | more than 2 years ago | (#37580362)

I always thought that the main problem at the start of high-speed internet transmission and multiple ports was that the technology was under-utilised, with consumer e-commerce still nascent and the hackable potential not appreciated or considered worthwhile. It's becoming a public space with a depth lent by the technology, so you get people exploring all aspects and being territorial where acquisition without consequence (both money and hijacked PCs) is possible. I suppose just as a computer hacker possesses part or all of a PC when they manage to infect it with a virus (until removed), so a graffiti scribbler owns the advertising space of public spaces they have written on until someone can remove the paint.

I agree (0)

Anonymous Coward | more than 2 years ago | (#37580412)

The more obscure a system is, the harder it is to crack. Take gravity, for instance: we know gravity exists, we can feel and see its presence everywhere, but due to its obscurity, our understanding of it is quite limited. Only through peeking and poking can we truly begin to understand systems, and this tends to take time, a lot of time on the most obscure of systems. Besides, in all the white noise, you have to know a system even exists to begin any sort of probing... Your attack vector may be way off is all I am saying.
