
Keeping Passwords Embedded In Code Secure?

Cliff posted more than 7 years ago | from the execute-only dept.


JPyObjC Dude asks: "When designing any system that requires automated privileged access to databases or services, developers often rely on hard coding (embedding) passwords within the source code. This is obviously a bad practice, as the password is then available to anybody who has access to the source code (e.g., via source control). Putting the passwords in configuration files is another common practice, but it is still quite insecure, as cracking hashed passwords from a text file is a trivial exercise. What do you do to manage your application passwords so that your system can run completely automated and yet make it difficult for hackers to get their hands on this precious information?"

130 comments

Passwords suck (4, Informative)

kunwon1 (795332) | more than 7 years ago | (#17406658)

Use SSL with certificates. It's more easily automated and just about anything worth running has the option.
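A minimal sketch of the client-certificate idea using Python's stdlib `ssl` module. The file names are hypothetical, and note that the key file itself becomes the new secret you must protect:

```python
import ssl

# Sketch of client-certificate (mutual TLS) setup. "client.pem" and
# "client.key" are hypothetical paths; loading a missing file raises
# an OSError, so real PEM files must exist at these paths in practice.
def make_mtls_context(certfile: str, keyfile: str) -> ssl.SSLContext:
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
    return ctx

# A caller would then wrap its socket with this context:
#   ctx = make_mtls_context("client.pem", "client.key")
#   tls = ctx.wrap_socket(sock, server_hostname="db.example.org")
# and the certificate is presented during the TLS handshake.
```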

Re:Passwords suck (2, Insightful)

xenocide2 (231786) | more than 7 years ago | (#17406744)

But what do you do when you need to revoke the cert? The problem is that they want authentication without the rigor of... authentication.

Re:Passwords suck (1)

swillden (191260) | more than 7 years ago | (#17407138)

Use SSL with certificates. It's more easily automated and just about anything worth running has the option.

Makes little difference from a security standpoint, though. If the attacker can get at the file system, then he can read the private key.

Re:Passwords suck (0)

Anonymous Coward | more than 7 years ago | (#17407278)

False. Ever heard of a CRL [wikipedia.org]?

Re:Passwords suck (2, Insightful)

theshowmecanuck (703852) | more than 7 years ago | (#17407316)

It is kind of moot anyway... if the attacker can get at the file system, they probably can do whatever they want.

Re:Passwords suck (1)

swillden (191260) | more than 7 years ago | (#17408598)

It is kind of moot anyway... if the attacker can get at the file system, they probably can do whatever they want.

Ah, but that's exactly the scenario that people are often trying to defend against when they try to hide or "encrypt" passwords needed by applications.

The point is that it's impossible.

Re:Passwords suck (1)

efalk (935211) | more than 7 years ago | (#17409708)

Not impossible at all. Store MD5(password) (or other secure hash) in the program. Have I missed anything?

Re:Passwords suck (1)

swillden (191260) | more than 7 years ago | (#17410060)

You're solving the wrong problem. Storing hashes is useful to avoid keeping copies of passwords that are presented to your app in order to authenticate. What we're talking about is an application that needs to authenticate to something else. For example, to log into a database. Storing a hash of the password you need to log into the database doesn't help because you can't reverse the hash to recover the password. And if you could reverse the hash, well, then so could the attacker.
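A short sketch of why the two problems differ, using Python's stdlib `hashlib` (the function names are illustrative):

```python
import hashlib
import hmac
import os

# Inbound authentication: store only salt + hash. Verification never
# needs the plaintext back, which is why hashing works here.
def make_record(password: str):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

# Outbound authentication (logging *in* to a database) is the opposite
# direction: the client must present the actual password, so a stored
# hash is useless to it -- and if the hash were reversible, it would be
# just as reversible for the attacker.
```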

Re:Passwords suck (1)

Magic Fingers (1001498) | more than 7 years ago | (#17411220)

Like swordfish,
"Once you know the password you can go anywhere!"

Re:Passwords suck (1)

swillden (191260) | more than 7 years ago | (#17408582)

A CRL is only useful if you know the key has been compromised. I think OCSP is a better solution, anyway.

Easy; encrypt the private key (1)

Dion (10186) | more than 7 years ago | (#17411106)

Ok, so you use SSL with client certificates for authenticating against the server, but you are worried that the evil doers might simply read the keys right off the disk.

In that case you can do what Apache and openssh do to protect that kind of information:

Encrypt the keys and store them on disk.

When the program starts (maybe it's just a keyholder process), the user is prompted for a passphrase and the decrypted key is kept only in memory. If the keyholder sees any process it doesn't like on the machine (such as a debugger), it simply shuts down.

This means that the keyholder will need to be started by the user every time the system has been booted, but after that everything can be automated.

This would at least mean that an attacker needs access to the machine with all the software running intact to get at the key, so if he is forced to reboot the machine, you have won.
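A toy sketch of the seal/unseal pattern in Python's stdlib. This illustrates only the flow (passphrase typed once at startup unlocks a secret that then lives only in memory); it is NOT production cryptography, where an authenticated cipher such as AES-GCM should be used instead:

```python
import hashlib
import os

# Toy keystream built from SHA-256 in counter mode -- illustration only.
def _keystream(key: bytes, n: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(secret: bytes, passphrase: str):
    """Encrypt a secret under a passphrase-derived key for storage on disk."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
    blob = bytes(a ^ b for a, b in zip(secret, _keystream(key, len(secret))))
    return salt, blob

def unseal(salt: bytes, blob: bytes, passphrase: str) -> bytes:
    """Recover the secret at startup; the result should stay in memory only."""
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
    return bytes(a ^ b for a, b in zip(blob, _keystream(key, len(blob))))
```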

Re:Easy; encrypt the private key (1)

swillden (191260) | more than 7 years ago | (#17411244)

When the program starts (maybe it's just a keyholder process), the user is prompted for a passphrase and the decrypted key is kept only in memory

And thus you're right back to the initial problem: Either you have to have an attended startup (and restart) process, or else you have to store a password somewhere on the system.

Re:Easy; encrypt the private key (1)

Dion (10186) | more than 7 years ago | (#17411480)

Well, the OP didn't say that you weren't allowed to request a password at boot.

I think most people are missing the point of this question anyway. It seems as though the OP wants to let applications access services with the same credentials while keeping those credentials from the user; at that point you have already lost, as that's simply impossible (see DRM).

A better way would be to write a (trusted) server that the (untrusted) clients can talk to, instead of letting the clients talk directly to the backend database and services, with all the security problems that creates.

Re:Passwords suck (1)

ryanr (30917) | more than 7 years ago | (#17407148)

I don't see how an SSL certificate is going to help here. Since he's talking about authenticating clients, it would be a client certificate, which would have to be embedded in the app, same as a static password.

Re:Passwords suck (1)

plalonde2 (527372) | more than 7 years ago | (#17410604)

Really, the thing to do to address this is some trivial googling. Use a secure password store and a decent protocol: http://plan9.bell-labs.com/sys/doc/auth.pdf [bell-labs.com]

Re:Passwords suck (1)

ryanr (30917) | more than 7 years ago | (#17410710)

That's no different than any other SSO design. Read the rest of the thread about that. That won't help unless the environment is already set up for that kind of authentication.

You don't even need the source code (1)

Anonymous Crowhead (577505) | more than 7 years ago | (#17406662)

Just `strings` and patience.

Re:You don't even need the source code (3, Interesting)

mark-t (151149) | more than 7 years ago | (#17407192)

As a developer who has hardcoded passwords into applications before, I can safely say that using 'strings' would NOT have worked, as I never actually created a string for such a password. Instead, I would implement a backdoor password as an FSM, with each state having its own separate case code that compares one character of the entered string against a single character of the actual password. Any deviation from the FSM's path would fall through to the normal password-handling facility, using the characters entered so far as the input string. The passwords in such a case were non-trivial (between 20 and 40 characters, including combinations of letters, numbers, punctuation, and blanks), so the likelihood of stumbling across one accidentally was extremely remote. Changing the password was only possible with access to the source code, and was done in a way that was simple to maintain. In the more than 10 years these programs were used by the companies they were written for, the security of these hard-coded passwords was never compromised (given the industry we were in, if it had been, we would have heard about it; the way we wrote it, a breach would have caused a panic).

It's probably not something I'd ever do these days, but back in the '80s and early '90s, it worked very well.
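A toy Python sketch of the per-character FSM idea; the three-character "password" here is made up for illustration:

```python
# Toy sketch: the password "s3!" never exists as a contiguous string
# in the program, so a `strings` scan of the binary finds nothing to
# grep for. Each state compares exactly one character.
def backdoor_check(candidate: str) -> bool:
    state = 0
    for ch in candidate:
        if state == 0 and ch == "s":
            state = 1
        elif state == 1 and ch == "3":
            state = 2
        elif state == 2 and ch == "!":
            state = 3
        else:
            return False  # deviation: fall through to normal password path
    return state == 3
```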

Re:You don't even need the source code (1)

ryanr (30917) | more than 7 years ago | (#17407200)

You don't think that's actually safe, do you? You've just made it more complicated (aka fun to crack.)

Re:You don't even need the source code (1)

mark-t (151149) | more than 7 years ago | (#17407264)

Well, in the environment we were in, it was as safe as the physical security on the building, and nobody else would ever have had reason to suspect that there was such a backdoor in the first place (we never told anyone until long after the software fell into disuse, and even then nobody deemed it a security risk, because physical access to the computer was necessary to enter the password anyway). In the 12 years the software was used, we only ever needed the backdoor once... and we used it to successfully restore a system with damaged data files, so they were not upset when they found out.

As I said, it's probably not something I'd ever do today though.

Re:You don't even need the source code (1)

SillyNickName4me (760022) | more than 7 years ago | (#17407328)

and nobody else would ever have had reason to suspect that there was such a backdoor in the first place (we never told anyone until long after the software fell into disuse, and even then nobody deemed it a security risk, because physical access to the computer was necessary to enter the password anyway)

Excuse me?

Whenever I get a piece of software for which I cannot verify the source, I suspect a backdoor password is there. This is basic security, and it has been documented at least since the first edition of the DoD Orange Book on secure computing systems. Anyone who claims to know about security but does not assume this is ignoring at least three decades of best practices.

Of course in an environment where security doesn't matter, this is not a concern.

Re:You don't even need the source code (1)

mark-t (151149) | more than 7 years ago | (#17407402)

Whenever I get a piece of software for which I cannot verify the source, I suspect a backdoor password is there

Yeah... and...?

I mean, so what if you suspect a backdoor being there, what do you do about it? Not use the software?

This wasn't an option for the companies who contracted us to write the software for them... and no, we didn't tell them about the backdoor. Neither, however, did we ever actually use it except the one time in the 12 years that the software was being used that it was necessary to restore working functionality to an otherwise inoperative system due to damaged data files.

As I said, I can't imagine I'd do it this way if I were to write an application for anyone today, but I am saying that this mechanism has worked in the past, and it has worked very well.

Re:You don't even need the source code (1)

SillyNickName4me (760022) | more than 7 years ago | (#17407940)

I mean, so what if you suspect a backdoor being there, what do you do about it? Not use the software?

Generally speaking, that is indeed the correct answer.

This wasn't an option for the companies who contracted us to write the software for them...

Sure it was, they could have contracted someone else who gave them the possibility to review the source code.

and no, we didn't tell them about the backdoor. Neither, however, did we ever actually use it except the one time in the 12 years that the software was being used that it was necessary to restore working functionality to an otherwise inoperative system due to damaged data files.

Sure, and that is a valid use of a 'backdoor'. But it also means you created an extra security risk for your customer. Telling them beforehand, instead of a decade after the fact, would have given them a choice, which they did not have.

With all respect, I don't doubt your intentions, but anyone who ever pulls such a thing on the company I am responsible for can count on:
1. never ever getting me as a customer again
2. meeting me in court

As I said, I can't imagine I'd do it this way if I were to write an application for anyone today, but I am saying that this mechanism has worked in the past, and it has worked very well.

Re:You don't even need the source code (3, Funny)

Vellmont (569020) | more than 7 years ago | (#17408416)


2. meeting me in court

So you're really going to spend tens of thousands of dollars to recover non-existent damages to prove a point? The conversation might go something like this:

Judge: I see you're suing for 10 million dollars, but you don't list your damages. How did the defendants actions hurt your business? Was there a security breach? Did the defendant not meet the terms of the contract?

You: Well not really. The contract didn't say anything about what I'm suing about. Nobody broke in and we had a lot of means to prevent it, but someone COULD have broken in. Basically this guy just made me real mad because I didn't agree with his security procedures. Dag-nab-it, the guy slightly increased my risks! We don't have any damages, I just assumed that whenever I don't like something, I just sue the pants off them.

Judge: Umm.. Right. Well sorry, civil courts operate on damage to one party caused by another. Criminal courts operate where criminal laws have been broken. Since there's no damages you can show, and no laws have been broken I'm throwing this case out. Didn't your lawyer tell you all this?

You: Only the first 10 lawyers. Then I found this really good one... or at least so I thought at the time. He charged me $20,000 and told me it'd be thrown out at the first hearing. I guess I should have gotten a better lawyer.

Re:You don't even need the source code (1)

SillyNickName4me (760022) | more than 7 years ago | (#17408770)

So you're really going to spend tens of thousands of dollars to recover non-existent damages to prove a point? The conversation might go something like this:

Since the company I work for, and for which I am responsible for security, handles a lot of sensitive information from customers, the risk of losing their trust is very real, even more so if it became publicly known that such a backdoor existed. In that case there would be real damage even if no actual security breach ever took place.

Re:You don't even need the source code (1)

tzanger (1575) | more than 7 years ago | (#17408942)

Since the company I work for, and for which I am responsible for security, handles a lot of sensitive information from customers, the risk of losing their trust is very real, even more so if it became publicly known that such a backdoor existed. In that case there would be real damage even if no actual security breach ever took place.

So essentially you're saying that your job is to recommend against closed-source software. That's great. That is not, however, what this guy is talking about at all. If his customers required access to the source, they would have made that apparent in the software requirements specification, and the guy would have priced it accordingly or submitted a no-bid.

Really, this entire discussion is stupid. "If I don't have the source, I assume a backdoor" "Yeah, so?" "Well your software would never be allowed in my company." "Ok, I don't remember ever selling it to you, or having a quote request come from you." "Yeah, because I'd never allow your software in my company."

Re:You don't even need the source code (1)

SillyNickName4me (760022) | more than 7 years ago | (#17409280)

So essentially you're saying that your job is to recommend against closed-source software. That's great.

No it is not. Giving my company access to the source code can be done under an NDA, and in no way requires you to produce open source software.
We want to be able to verify that no backdoor exists. Alternatively, we could arrange a contractual guarantee that no such thing exists, with a very hefty penalty attached if it turns out otherwise.

That is not, however, what this guy is talking about at all. If his customers required access to the source, they would have made that apparent in the software requirements specification, and the guy would have priced it accordingly or submitted a no-bid.

It is people like him who actually substantiate our demand to be able to verify that no such backdoors exist, instead of accepting a statement to that extent.

Really, this entire discussion is stupid. "If I don't have the source, I assume a backdoor" "Yeah, so?" "Well your software would never be allowed in my company." "Ok, I don't remember ever selling it to you, or having a quote request come from you." "Yeah, because I'd never allow your software in my company."

If you believe it is stupid to point out bad and potentially very harmful practices...

I guess I now have good reason to not care much about your opinion on those matters.

Re:You don't even need the source code (2, Informative)

mark-t (151149) | more than 7 years ago | (#17409726)

The danger of it ever having become publicly known that there was a backdoor was negligible... the number of companies that we wrote software for was countable on one hand, and being vertical market software, there was no danger of it being used elsewhere.

Re:You don't even need the source code (1)

SillyNickName4me (760022) | more than 7 years ago | (#17410838)

The danger of it ever having become publicly known that there was a backdoor was negligible... the number of companies that we wrote software for was countable on one hand, and being vertical market software, there was no danger of it being used elsewhere.

Well, you just made it public...

Re:You don't even need the source code (1)

mark-t (151149) | more than 7 years ago | (#17411382)

Yep... some years after the software is no longer used... so it's not an issue. But I can virtually guarantee that nobody on slashdot knows which companies used it or even what software I am talking about. As I said, the number of companies that used the programs was countable on one hand and the other programmer and I personally knew every employee of the companies that contracted us to write software for them, which is why we were able to contain the security risks involved.

Like I said before though, it's not a mechanism I would use today.

Monopoly? (1)

tepples (727027) | more than 7 years ago | (#17408932)

I mean, so what if you suspect a backdoor being there, what do you do about it? Not use the software?
Generally spoken, that is indeed the correct answer.

So what do you do when you suspect a backdoor in the software published by a monopoly or by each member of an oligopoly? Do you put your business on hold for 20 years waiting for the patent to run out?

Sure it was, they could have contracted someone else who gave them the possibility to review the source code.

Unless it is not commonplace for the monopoly or among the members of the oligopoly to allow third parties to review the source code.

Re:Monopoly? (1)

SillyNickName4me (760022) | more than 7 years ago | (#17409198)

So what do you do when you suspect a backdoor in the software published by a monopoly or by each member of an oligopoly? Do you put your business on hold for 20 years waiting for the patent to run out?

Pay someone to write an alternative, and live in a place where software patents are not valid to begin with. And yes, we did the first, and yes, I am living in a place where software patents are not valid.

On top of that, making sure third parties do not get access to data about our customers is actually a legal requirement we have under EU law. So, as an alternative to giving us access to the source, you could of course give a contractual guarantee that no such backdoor exists. That would be looked at on a case-by-case basis.

The number of incidents caused by disgruntled former employees of software development companies is a bit too high not to take the existence of backdoors seriously.

Re:You don't even need the source code (1)

mark-t (151149) | more than 7 years ago | (#17409786)

Sure it was, they could have contracted someone else who gave them the possibility to review the source code.
I guess they could have done... but nobody else who worked for any of the companies that contracted us would have had even the slightest clue how to do that, so they would have had to hire somebody else. If other programmers were going to review our source code (which we would know about, since the code was at a site we controlled, with no remote access to it), it would DEFINITELY have been done differently.

Hashed passwords for database access? (4, Funny)

ari_j (90255) | more than 7 years ago | (#17406668)

I wasn't aware that it was a common practice to store database passwords as hashed strings in configuration files. Does your program run a brute-force attack against the hash every time it needs to create a database connection?

Re:Hashed passwords for database access? (1, Insightful)

Anonymous Coward | more than 7 years ago | (#17407076)

You type in a normal password, which is then hashed. Then you compare the hashed password to the saved hashed password.

Re:Hashed passwords for database access? (1)

eluusive (642298) | more than 7 years ago | (#17407174)

That still doesn't make any sense. We're talking about making database connections.

Re:Hashed passwords for database access? (2, Insightful)

Proud like a god (656928) | more than 7 years ago | (#17408200)

For the program's own users' passwords, yes, it works; but for a stored password to be passed on to another system, as you say, it does not.

No answer (1)

Sancho (17056) | more than 7 years ago | (#17406684)

On-disk passwords simply aren't secure. If you need automation, you want to secure the systems as much as possible.

That said, salted hashes are pretty tough to crack. Changing the passwords regularly will make it unrealistic for a cracker to obtain the passwords through brute force.

Re:No answer (4, Insightful)

FooAtWFU (699187) | more than 7 years ago | (#17406762)

That said, salted hashes are pretty tough to crack. Changing the passwords regularly will make it unrealistic for a cracker to obtain the passwords through brute force.
I don't think this is really the problem. The problem is that you have something like a fairly standard call for connecting to a MySQL database. You might get the strings from a config file, but you need to pass the password as plaintext:

mysql_connect('dbserver.foo.org','apache', 'z*UIYD!0');
or similar credentials.

And you know what? That's not secure. But then again, the database it's connecting to should be as firewalled as all get-out, and even if it's NOT firewalled, it should have host-based authentication so that you can only access it with that password from the appropriate machine (your web server). At that point, if someone can hook into your LAN to sniff traffic or spoof things, you're probably in deep trouble anyway - but perhaps you could configure the database server to only accept connections over a VPN of some sort with appropriate authentication certificates.
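A hedged sketch of the config-file half of this approach in Python. The path, the section name, and the refuse-if-world-readable policy are all assumptions, and the mode check is POSIX-only:

```python
import configparser
import os
import stat

# Sketch: credentials live in a config file, outside source control.
# "db.conf" and the [database] section are hypothetical names. The
# check refuses files readable by group/other, mirroring `chmod 600`.
def load_db_credentials(path: str):
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & 0o077:
        raise PermissionError(f"{path} must not be group/world readable")
    cfg = configparser.ConfigParser()
    cfg.read(path)
    return cfg["database"]["user"], cfg["database"]["password"]
```

This does not make the password secret from anyone who can read the file as the service user; it only narrows who that is, which is the thread's point about securing everything around the credential.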

Re:No answer (3, Interesting)

theshowmecanuck (703852) | more than 7 years ago | (#17407344)

Someone mod parent up more. As stated, DB access usually happens over an internal network (99% of the time), and only the outside interface of the web server is open to the public (assuming it's an app accessible to the world and not an internal one). On bigger apps, only the model components on the backing app server(s) should be doing the DB access, and those should definitely be behind the firewall along with the DB. In all cases, if the firewalled internal network is compromised, you are really screwed anyway, so what does encryption matter? Unless you don't trust the people who administer your apps, who could wreck your business more easily by skipping backups and taking a baseball bat to the hard drives, or something equally brutal.

If it bleeds we can kill it. (1)

Luke727 (547923) | more than 7 years ago | (#17406716)

GET TO DA CHOPPA!

You can't (1)

ryanr (30917) | more than 7 years ago | (#17406726)

You can't secure a client-side password without another password to protect it. Which is contrary to what you're trying to accomplish. If you could give a bit more detail about what you're trying to accomplish, we could probably better enumerate the trade-offs.

Re:You can't (0)

Anonymous Coward | more than 7 years ago | (#17406834)

Right, without a threat model it is pointless to speculate on what he should do.
How about telling us what capabilities the people who might try to gain access to the database are assumed to have?

Public-key crypto? (2, Insightful)

FlyByPC (841016) | more than 7 years ago | (#17406732)

IANRAProgrammer, but...

I believe public-key cryptography could do this. Embed the public key (several kilobits, if you're paranoid) in the source, and have the program use it to authenticate the secret key supplied by the user. Publish the source code on YouTube for all the good it will do an adversary, right?

Re:Public-key crypto? (2, Informative)

strider44 (650833) | more than 7 years ago | (#17406908)

Nope, still won't work against a cracker. This is just another form of DRM and DRM is fundamentally flawed. (If you don't believe me show me a major game that hasn't yet been cracked.)

In short, if a cracker has full access to a program or system, and the system has access to the passwords (even if it does some fiddling around before revealing them), then the cracker has full access to the passwords. There's no way to protect against that except by not allowing any access to the passwords (by simply not shipping the files the passwords are in) or by not having the passwords in the program at all and requiring the user to type them in. Anything more is a delaying tactic that provides some obfuscation, not a high level of security...

Re:Public-key crypto? (1)

strider44 (650833) | more than 7 years ago | (#17406928)

Apologies: I obviously misinterpreted what you said (I thought you meant communication between processes, not someone actually inputting a password). Your idea will work against a cracker, and I just made a fool of myself; damn the lack of an edit key. It's not the most practical solution, since there would be only one password that every user must have, and it won't give the automation the summary wants, but it will still work against someone without that password.

Re:Public-key crypto? (1)

Short Circuit (52384) | more than 7 years ago | (#17407050)

What you're describing is effective--and as easy as wrapping your communication with your db with libssl.

What the user appears to be trying to accomplish is allowing db access without querying the user for a password. To do this, he believes he needs to embed the authentication credentials in the application or its configuration files. To that end, he's asking how Slashdot folk do this securely.

If it's assumed that a person using the software is authorized to access the DB because the person has access to the software, then it's a fair request. Perhaps the best way to go about it isn't to depend on secret DB passwords, but to use another authentication mechanism. A couple of ideas come to mind, including host-based authentication (the simplest way would be by IP address) and user-based authentication (if the user has a unique username on the computer or network, authenticate against that; he had to know his own password to log in).

Another possibility could be that anonymous usage of the software is allowed, but anonymous usage of the DB is not. As in, there are tables or fields in the database that contain confidential information that shouldn't normally be accessed. Worse, you may not want some idiot intern writing his own software to make changes to tables in the database.

In this case, I'd consider how the database is structured and see whether it's possible to grant access to some tables and fields and not to others. If it's a freshly created DB, it might be worthwhile structuring it with that aim. I'm not a DB expert, though; I couldn't tell you all the RDBMSs that support table-based permissions (though I happen to know IBM's UDB/DB2 does). In such a system, one could have different username/password combinations depending on how much access should be granted. Read-only access to certain tables could be possible with a non-secret username/password combination, while higher degrees of security would require more secure authentication (bringing us back to a password prompt or a non-password authentication technique).

However, if the software may be run by both authorized and unauthorized users, then you probably don't want to make access to the DB automatically possible just from running the software. Requiring a prompted password or other individual-based authentication system becomes necessary.

Just my 25 cents. (Inflation, you know...)

The question is based on a false premise (1)

Ignorant Aardvark (632408) | more than 7 years ago | (#17406742)

Putting the passwords in configuration files is another practice, but it is still quite insecure as cracking hashed passwords from a text file is a trivial exercise.

This simply isn't true. If salting is used (which is quite commonplace these days), it's pretty much going to be impossible to recover the password from the hash.
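A small Python illustration of the salting point (the KDF and parameters here are arbitrary choices for the sketch): the same password under two different salts produces two unrelated hashes, so one precomputed table cannot attack every account at once.

```python
import hashlib
import os

# Same password, two random salts -> two unrelated digests, which is
# what defeats precomputed (rainbow-table) attacks.
def salted_hash(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt_a, salt_b = os.urandom(16), os.urandom(16)
hash_a = salted_hash("hunter2", salt_a)
hash_b = salted_hash("hunter2", salt_b)
```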

Re:The question is based on a false premise (3, Insightful)

Nos. (179609) | more than 7 years ago | (#17406880)

The problem with this is: how does the program get the password it needs? If it's hashed with a salt... well, that's one-way, so the program would have to do a brute force every time it wanted to use that password.

There's little point in encrypting a locally stored password, as the decryption technique must be simple enough for the program to use unattended. The idea is to secure everything around it, including the system being connected to. Use host-based authentication, firewalls, etc. to reduce the risk.

Re:The question is based on a false premise (0)

Anonymous Coward | more than 7 years ago | (#17407176)

> The problem with this is.... how does the program get the password it needs?

You do it like UNIX stores /etc/passwd: you store the salt with the password hash. You can also use a separate piece of information, like an ID number.

Re:The question is based on a false premise (1)

Proud like a god (656928) | more than 7 years ago | (#17408220)

Does that work? Isn't the salt meant to be kept as secure as an unsalted password hash? Otherwise you still know you need to reverse hash(salt+password), just as without a salt you needed to reverse hash(password).

Only by not knowing the salt (i.e., any password you hash won't match the stored hash unless it happens to be the unknown salt+password) would the attacker face the harder task of reversing hash(?????password) into salt+password.

Re:The question is based on a false premise (1)

Nos. (179609) | more than 7 years ago | (#17408426)

UNIX passwords are one-way hashes. When I type in my password to log in, the login process takes the password I typed, gets the salt from /etc/passwd, and hashes the two together. If the result matches what is in /etc/passwd, I get logged in; otherwise I don't. The password is never decrypted.

Kerberos (3, Informative)

forlornhope (688722) | more than 7 years ago | (#17406766)

Kerberos was built for just this situation. Read up on it. I think it's even available as Active Directory for MS.

Re:Kerberos (1)

swillden (191260) | more than 7 years ago | (#17407160)

Kerberos was built for just this situation. Read up on it. I think it's even available as Active Directory for MS.

You're right that MS provides a bastardized version of KERBEROS, but wrong that it helps.

In order to get an authentication ticket from the ticket-granting server, you have to authenticate to the ticket-granting server. If the machine can start up completely unattended, that means it has the KERBEROS authentication credentials stored on disk somewhere, which means the attacker can get them, and can then authenticate to the ticket-granting server and get whatever authentication tokens he needs.

Re:Kerberos (1)

ryanr (30917) | more than 7 years ago | (#17407242)

Under exactly the right circumstances (i.e. all of your userbase always logs into the domain before running the database client app in question), this pushes the authentication problem to exactly where he wants it. Unfortunately, the original poster hasn't given nearly enough information to tell if Kerberos/AD/any-other-SSO will help his situation.

But as you've indicated, for other situations this won't help.

Re:Kerberos (1)

swillden (191260) | more than 7 years ago | (#17408640)

Under exactly the right circumstances (i.e. all of your userbase always logs into the domain before running the database client app in question), this pushes the authentication problem to exactly where he wants it. Unfortunately, the original poster hasn't given nearly enough information to tell if Kerberos/AD/any-other-SSO will help his situation.

But as you've indicated, for other situations this won't help.

Yeah, it seemed to me he was talking about completely unattended startup of a server that requires access to a database (or whatever).

Permissions (1)

andy753421 (850820) | more than 7 years ago | (#17406832)

'chown apache:apache database.conf && chmod 600 database.conf' That's good enough for me. Generally I'm not concerned with people accessing the physical hardware in order to bypass permissions, that's another issue entirely.

Re:Permissions (1)

georgewilliamherbert (211790) | more than 7 years ago | (#17406864)

Ever worked anywhere where security concerns meant that the UNIX Admins aren't supposed to have access to the database contents?

It gets much more fun.

Re:Permissions (1)

Proud like a god (656928) | more than 7 years ago | (#17408158)

So as admins do they have access to the database program binary? What happens if they alter it to allow themselves access, or just dump the data elsewhere when launched? Checksums and IDS against your own admins?

Re:Permissions (1)

raphae1 (695666) | more than 7 years ago | (#17408566)

Re:Permissions (1)

Blackknight (25168) | more than 7 years ago | (#17409952)

That's assuming the conf file is actually in a web accessible directory. If you put the file somewhere else and just read it using an include statement it's fairly secure.

Re:Permissions (1)

user24 (854467) | more than 7 years ago | (#17410034)

unless there's a vulnerability elsewhere, eg:
yourserver.com/show_page.php?page=../../../../database.conf

the permissions will do nothing to secure the config in this case.

Assume they know the password (2, Insightful)

mattfata (1038858) | more than 7 years ago | (#17406872)

The better practice would be to make raw access a non-issue. Don't give the user account the privileges to accomplish anything that wouldn't be possible through the application itself. If you're using some sort of SQL database, consider limiting the account's permissions to stored procedures that correspond to your application's features.

Re:Assume they know the password (0)

Anonymous Coward | more than 7 years ago | (#17406912)

I agree - although you could also do a three-tier approach. The client application talks to a dedicated server program, which then talks to the database. The server program can be on a trusted, locked-down host with its own configuration file, and responds only to valid requests.

But, yes, this assumes that whoever has access to the configuration file is permitted full database access (as they can impersonate the server program itself). I suspect this isn't that big a deal though; you could easily have only one trusted person maintaining the 'live' system, separate from the test/development staging system.

You can use web services for that, if you wish to be web2.0 compliant.

Wrong Question (4, Interesting)

eric.t.f.bat (102290) | more than 7 years ago | (#17406898)

First: only an idiot would put a password into source code. That's what configuration files are for. What, you want to have to edit a script every time the password changes? Second, there's no point encoding, encrypting or otherwise "securing" the configuration file. If a user has access to your configuration files, he has access to everything else, and all your security is useless. So really the question is: I don't want the neighbours to see me naked. What should I tattoo on my butt-cheeks to make me safe?

Re:Wrong Question (0)

Anonymous Coward | more than 7 years ago | (#17408206)

I don't want the neighbours to see me naked. What should I tattoo on my butt-cheeks to make me safe?
A lifesized tattoo of you fully clothed -- and not just over your buttcheeks, over your whole body.

Well... (1)

vga_init (589198) | more than 7 years ago | (#17406914)

If you want to do something quick and dirty without bothering to code in some robust password mechanism (let's say you want to use the same old password every time, hard coded as you say), why not create a function to generate the static password using deterministic methods? People with access to the source code will be able to spot what you've done, but at least they won't know the password without actually running the code. You could try to obfuscate the function as much as possible and store the generated password in dynamically allocated memory. This way someone who merely disassembles the code can't read the password plainly.

Re:Well... (1)

flonker (526111) | more than 7 years ago | (#17407046)

More along these lines, create a seed password, set it in the source and in the database. Have the application randomly (this is the hard part) change the password in a non-deterministic manner, changing it first in a backup config file, then on the database server, then in the main config file. (In case of failure, the admin can copy the password from the backup config file to the main config file.) Possibly have the application rotate the password every so often. This protects the password from someone who has access to the source in some repository, but not to the machine the app is running on.
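A rough sketch of that rotation order in Python. The config format, file paths, and the change_on_server callback are all invented for illustration; a real system would issue something like an ALTER USER statement in that callback.

```python
import json
import os
import secrets
import tempfile

def rotate_password(conf_path, backup_path, change_on_server):
    # Order matters: backup config first, then the server, then the main
    # config. If the process dies mid-rotation, the admin can recover the
    # new password from the backup file, as described above.
    new_pw = secrets.token_urlsafe(24)
    with open(backup_path, "w") as f:
        json.dump({"db_password": new_pw}, f)
    change_on_server(new_pw)  # stand-in for e.g. an ALTER USER statement
    with open(conf_path, "w") as f:
        json.dump({"db_password": new_pw}, f)
    return new_pw

# Demo against a fake "server" (a dict) and throwaway files.
tmpdir = tempfile.mkdtemp()
conf = os.path.join(tmpdir, "db.conf")
backup = os.path.join(tmpdir, "db.conf.bak")
server = {"db_password": "initial"}
pw = rotate_password(conf, backup, lambda p: server.update(db_password=p))
with open(conf) as f:
    print(json.load(f)["db_password"] == server["db_password"])  # True
```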

Of course, I may be answering the wrong question. If the application has access to the database, and a person has admin access to the machine the application is running on, there is no way that you can stop a determined adversary from getting access to the database with the same credentials as the application. (This can be logically proven quite simply.) At best, you can make it more difficult with obfuscation techniques all along the path the password must travel. (Obfuscate the password on disk, and obfuscate the unobfuscation code. Keep the password in memory for the shortest amount of time possible. When sending the password to the network, encrypt it in some form first. And using something like a dynamically linked libssl means you send the password in plaintext to the library, which is then trivial to capture. Not to mention this violates the shortest-amount-of-time-possible-in-memory rule.)

If the person you are hiding the password from has access to the machine the app is running on, but does not have admin access, the answer is trivial. Put the password someplace they don't have access. Of course, then you must secure the machine, but that's an entirely different story.

Re:Well... (1)

ryanr (30917) | more than 7 years ago | (#17407190)

People try to do this sort of thing all the time. It's not actually secure, of course, but it makes for some entertainment. If you want some to play with, Google "crackme."

You must trust root (1)

Gothmolly (148874) | more than 7 years ago | (#17407008)

w/o trusting root, your whole application comes crashing down. All the chmodding in the world won't save you from root.

The only other way to do this would be to have your app retrieve the key from a trusted remote location via SSL, then use it on the remote app... which is sounding more and more like a kerberos or mutual SSL key thing anyways.

Defense in depth (0)

Anonymous Coward | more than 7 years ago | (#17407088)

You (or your application) don't hash the password; the authenticating system (in this case the database) does. Possibly. Depends on the database. In any case, the app is acting as a user in this context, and so presumably needs the password in plaintext, as a user would.

The appropriate solution for something like this is defense in depth, so that a compromise of one element won't necessarily invalidate your entire protection scheme.

1. Lock down the database user. Grant only permissions that are required for the app to work correctly.
2. Lock down the database and app servers. Make sure your app is running under a restricted account, and that said account is the only one with access to your password file/application code.
3. Lock down the database connection port. Only allow connections from the app server's IP. Ideally, the database should be on a non-routable subnet with access only through port forwarding.
4. Lock down the data stream. If your database supports a PKI implementation, use it to authenticate and encrypt the connection.

Can't be done, no way, no how. (5, Informative)

swillden (191260) | more than 7 years ago | (#17407114)

First, let me dispose of one issue:

This is obviously a bad practice as the password is then made available to anybody who has access to the source code (eg. software source control).

It's much, much worse than that, because the password is also available to anybody who has access to the binary. "man strings".

Others have suggested various options, but absolutely none of them work.

  • "Shrouding" passwords, whether in code or in config files. Don't make me laugh. No matter how you try to obfuscate the password, all of the code needed to recover the password (or the hash, or whatever needs to be submitted to perform the authentication) is there, just waiting to be dug out. You can make it obscure, but you can't make it secure.
  • Public key authentication? Bzzzzt. The private key has to be present on the file system, where an attacker can grab it. "So, encrypt it!", I hear. Umm, you have to have the passphrase to decrypt it somewhere in your code or config files.
  • Kerberos? You still have to have some mechanism for authenticating to the ticket-granting server, and if the attacker can get that, then he can also authenticate, just like you.
  • Host security module? TPM with auth credentials bound? Well, these do protect against some attacks, but if the attacker can own the server, he can use the hardware token to do the authentication for him, just as though he were the server. These do prevent him from being able to take advantage of physical access to the machine to reboot it with another OS and then dig through the drive contents. Assuming the system is configured tightly enough that booting a different configuration is the only way in, then a TCPA TPM actually does the job. This of course, requires that the system have no exploitable security holes (ha!).

The bottom line is: If the machine has all of the information needed to perform the authentication without human intervention, then an attacker who gains control of that machine has all of the information needed to perform the authentication. Period. No getting around it. The best you can do is limit the damage in the case where the attacker has only partial access.

What is that best? For a network-accessible machine, do the following:

  • Lock down the system as tightly as possible. Standard system security stuff, but be as hardcore about it as you can.
  • Use an authentication protocol that can be performed between a highly-secure HSM and the remote resource, using the main machine as a passthrough only.
  • Secure the HSM with a password or authentication key, so that the HSM won't do its authentication job without first being authorized.
  • Use a TPM to bind the HSM authentication data to the system state. This will make patches a PITA, but we're going for maximum security here, so that's okay.
  • Put the whole assemblage in a secure facility, ensuring (hopefully) that no potential attacker gains physical access to the machine.

That's a lot of work, and it's still not completely secure. Luckily, very little needs even that level of security. Oh, and there aren't any OSes available that make good use of a TPM yet, so it's not really possible.

For most systems, what I'd really recommend is: Put the auth credentials in plaintext in a config file and limit access to that file to the bare minimum. If you have Mandatory Access Controls (e.g. SELinux), configure them to allow only the server process to read that file. Then, lock the whole system down as tightly as possible (within existing constraints). Ensure that a bare minimum number of people have logins on the machine, and that they all have minimum permissions, firewall it as completely as possible, and keep it up to date on security patches. Finally, put it in a locked room and tightly control physical access to it.
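As a concrete illustration of that plaintext-config-plus-tight-permissions approach, here is a hypothetical Python sketch. The path and config keys are invented, and the permission check mimics the one OpenSSH applies to private key files:

```python
import configparser
import os
import stat
import tempfile

def load_credentials(path):
    # Refuse to start if the file is group- or world-accessible, the same
    # sanity check OpenSSH applies to private key files.
    mode = os.stat(path).st_mode
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise SystemExit(f"{path}: permissions too open; chmod 600 it")
    cp = configparser.ConfigParser()
    cp.read(path)
    return cp["database"]["user"], cp["database"]["password"]

# Demo with a throwaway file; a real service would read something like
# /etc/myapp/db.conf, owned by and readable only by the service account.
with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
    f.write("[database]\nuser = app\npassword = s3cret\n")
    path = f.name
os.chmod(path, 0o600)
creds = load_credentials(path)
print(creds)  # ('app', 's3cret')
os.unlink(path)
```

The check is only a tripwire, of course -- it catches misconfiguration, not a root-level attacker -- which is exactly the compromise being described.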

Of course, even this reduced-security approach is too onerous in many cases, so you have to make compromises. That's where a good understanding of security and plenty of hard thinking about what compromises can be made come in.

There ain't no silver bullet.

Re:Can't be done, no way, no how. (3, Informative)

swillden (191260) | more than 7 years ago | (#17407172)

Responding to myself... Uh oh.

It occurs to me that I may be answering the wrong question. If the assumption is that the attacker won't have access to the server, but may have access to the development team's source code, then the answer is simple: put the password in a config file that the developers don't have access to.

Re:Can't be done, no way, no how. (1)

KermodeBear (738243) | more than 7 years ago | (#17407256)

Wrong question or not, your answer was awesome. It confirmed a lot of what I already believed to be true and gave me some new tidbits of information as well. Many thanks. (o:

Re:Can't be done, no way, no how. (1)

chthon (580889) | more than 7 years ago | (#17407672)

A very good summary of what I found out myself.

I have the same problem, and what I did was just use no password at all, but create different roles for the system.

Our programs have only a certain role in which they can insert or update only certain parts of the database. Really sensitive tasks must always be done by an operator, who has to log in manually.

Unfortunately, we are using MySQL, which is not as rigorous. For update actions the restricted role must also have query capabilities.

I think that by using PostgreSQL or Oracle it is much easier to restrict this role even further, so that no query capabilities are needed, probably through a view or by using stored procedures.

Re:Can't be done, no way, no how. (1)

Bishop (4500) | more than 7 years ago | (#17408730)

Parent is correct (unlike so many other posts). Storing the password in the clear in a config file is good enough in most cases. Obviously you want to restrict access to that file. Attempts to obfuscate the password are pointless. If an attacker can read the config file, then they can probably read the processes memory.

Re:Can't be done, no way, no how. (2, Insightful)

Anonymous Coward | more than 7 years ago | (#17408756)

Shrouding passwords is terrific, as it makes customers, QA and marketing shut the hell up.

Where I work, we have a product that needs to store a shared encryption key for communications. The interaction with customers, QA, and marketing went like this:

Them: OMG, the password is there in plain text

Us: The password is in a file readable by root only, as is the install directory. If you can read it, you already pwn the box

Them: OMG, the password is there in plain text

Us: The product has to run unattended as root. There's nothing sensible we can do about it.

Them: OMG, the password is there in plain text

End result: we changed the program to encrypt the password using a fixed key. Customers, QA and marketing finally shut the hell up.

Re:Can't be done, no way, no how. (1)

swillden (191260) | more than 7 years ago | (#17408812)

Yes, shrouding passwords can have benefits unrelated to security. As long those who are evaluating security realize this, there's no problem. In order to avoid looking stupid for the occasional customer who *does* understand security, I hope your manual includes a statement like "The obfuscation of the password prevents casual viewing by system administrators, but provides no real security for the password. Security is provided by proper system configuration, limiting access to the file containing the password. If access cannot be sufficiently limited, then the attended startup mode should be used, with the password provided by a trusted administrator during startup."

Re:Can't be done, no way, no how. (1)

uradu (10768) | more than 7 years ago | (#17408838)

Great statement of the facts, should be required reading for any middle and upper management. Sadly, this scenario is extremely common in enterprise environments, where there are tons of unattended custom gateway and batch processing type applications running on various servers, transferring data from one system to another and manipulating/massaging it while doing so. Typically these apps are either boot time services or stand-alone apps that get kicked off at boot time, without any user intervention. As such, their security scenario is exactly what you described: completely unattended execution yet needing authenticated access (more often than not using simple user id/password combinations) to production systems such as databases, file servers, FTP servers, EDM systems, etc.

It can be very hard to impossible to impart to management the notion that this scenario is inherently insecure and insecurable: since the applications can obtain the credentials in completely automated fashion from plainly readable data (i.e. embedded in the executable or in a config file), any third party can perform the same computations to obtain the credentials. The request is invariably always: ok, so encrypt the credentials. Argh!!!

One theoretically reasonably secure approach would be to prompt for credentials at application startup, so cracking would at least require access to the machine's RAM, but this approach is quite unrealistic in large installations that often involve nightly unattended reboots. The next best approach seems to be to use NT based authentication for access to resources that allow that sort of authentication (e.g. file shares and such that can limit access to domain authenticated users), and configuring the apps to run under an authorized account. This approach doesn't work for all resources though, since many require simple user id/password authentication. A work-around would be to store credentials in files on NT authenticated file shares, as long as communication is not in plain text. But this leads to extra configuration tedium and also to a single point of failure, which sends many people into convulsive fits.

For many apps credentials storage is approached in quite pragmatic fashion: store them in a config file using some basic obfuscation (e.g. XORing against a reproducible pad), which eliminates the vast majority of "attacks" (in a "secure" enterprise environment mostly accidental discovery by semi-authorized personnel such as various admins that shouldn't all have access to that resource). Use OS access restrictions to the application and config file location. If possible, also implement host-based authentication. Run the whole thing in an intranet environment that is as physically secure as possible. Lastly, say your nightly prayers to your most trusted deity, preferably in a secure and sound-proofed location.
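For what it's worth, that XOR-against-a-reproducible-pad obfuscation is only a few lines. This is a sketch; the seed is whatever reproducible value you choose (hostname plus application name, say), and as stated above it only deters accidental discovery -- anyone with the code can reverse it:

```python
import hashlib
from itertools import cycle

def xor_obfuscate(data: bytes, seed: str) -> bytes:
    # Derive a pad from the seed and XOR against it. XOR is its own
    # inverse, so the same call both obfuscates and de-obfuscates.
    pad = hashlib.sha256(seed.encode()).digest()
    return bytes(b ^ p for b, p in zip(data, cycle(pad)))

blob = xor_obfuscate(b"s3cret", "myapp@dbhost01")  # what lands on disk
print(xor_obfuscate(blob, "myapp@dbhost01"))       # b's3cret'
```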

What I do... (1)

GWBasic (900357) | more than 7 years ago | (#17407334)

It's all about managing your risk: This is what I do:

  • The password is loaded from a configuration file
  • The password (in the configuration file) is encrypted
  • The encryption key is stored in a script that's only accessable by a generic system account
  • The automated job runs in a system that has to store the password; the stored password is only readable by trusted employees

Is the system 100% secure? No. Is the system secure enough? Yes! The key is risk management; the probability of our system being compromised is incredibly low. We're more likely to be compromised by a disgruntled employee; besides, anyone trying to get sensitive data would have to spend an entire week figuring out the damn thing!

The key to scenarios where a password needs to be stored in a machine-readable format is risk management; you reduce the risk of being hacked such that the overall value makes the system beneficial.

Re:What I do... (1)

Proud like a god (656928) | more than 7 years ago | (#17408182)

Why not cut out the stored encrypted password bit and have the password only stored where those trusted employees can access? Sounds like a simple user/file permission system really.

Makes sense if... (1)

DimGeo (694000) | more than 7 years ago | (#17407700)

... if the code with the password in it runs on your server, and never ever leaves it, and no one but you has any access to it. This is a completely valid way to deal with things, although it's ugly, and if your server ever gets stolen, you'd better be using some kind of full-disk encryption. And if the server ever gets pwned, then your password will be pwned, too.

A simple solution. (1)

Lethyos (408045) | more than 7 years ago | (#17408020)

I see a lot of elaborate answers, but we all seem to be forgetting something obvious. When the service comes up, have it prompt an administrator for the password, then store it in memory. Ultimately this is only obfuscation, but passwords get stored in memory all the time, and the rate of compromise remains fairly low. At any rate, it is a lot less likely an attacker will find it there than in a plaintext file on the disk. Apache HTTPD and all the MTA services I use do this when using SSL certificates with encrypted private keys. Seems like a good start, at least. If you want to get a little more elaborate, your service can generate a random key on some interval which may be used to encrypt that password in memory. Advantages of all this? No plaintext storage, it becomes easy to change the password of your database (simply inform the administrators), none of the developers need to be concerned with the credentials, and your code keeps the password (and the key for encrypting it in memory) a moving target.
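A minimal sketch of the prompt-once, hold-in-memory idea in Python. Here getpass stands in for whatever the service's start-up hook is, and the XOR scrambling against a per-process random key is the "moving target" obfuscation described -- not real encryption:

```python
import getpass
import os

class CredentialHolder:
    # Keep the secret XOR-scrambled against a random per-process key so the
    # plaintext only exists in memory at the moment it is actually used.
    def __init__(self, secret: str):
        raw = secret.encode()
        self._key = os.urandom(len(raw))
        self._blob = bytes(a ^ b for a, b in zip(raw, self._key))

    def reveal(self) -> str:
        return bytes(a ^ b for a, b in zip(self._blob, self._key)).decode()

# At service start-up an administrator would type the password once:
#     holder = CredentialHolder(getpass.getpass("DB password: "))
holder = CredentialHolder("s3cret")  # stand-in for the prompted value
print(holder.reveal())  # s3cret
```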

Re:A simple solution. (1)

Proud like a god (656928) | more than 7 years ago | (#17408260)

Are you trolling or missing the point? If it is to be an automated system, you can't have the admin manually put in the password each time it starts. So how do you replace that? Having either the program itself read from a config file or even another program supply it doesn't solve the problem of how/where to securely store the password for these methods to work.

Eventually you seem to have to trust root and file permissions that the programs and config files can only be accessed by those you trust to do so, and not altered to give up their secure password to whoever.

Re:A simple solution. (0)

Anonymous Coward | more than 7 years ago | (#17408688)

I think his basic point (which gets lost in implementation details) is, "An automated solution isn't secure." You have to have a human in the loop to be any more secure than a private configuration file, with security enforced by the operating system. There's no getting around that; as another poster mentioned, if the system can do it automatically, an attacker can make the system do it.

For really critical systems, I'd say you should put the sensitive info on a USB key or something, and hire people 24/7 to plug it in and provide a password whenever the system requires the credentials (two factor auth). That's obviously a very expensive solution, though, even at minimum wage (and people making minimum wage, or even significantly more, obviously aren't trustworthy, either).

If you can't do this, you might as well just go with the private configuration file approach. A few levels of obfuscation might discourage casual inspection, but don't fool yourself into thinking it provides any real security against compromise.

Of course, you don't need someone around 24/7 on five minute notice if you don't have high uptime requirements. A number of Web servers I informally administer require passwords for the SSL keys, and we just type them in once on those rare occasions that the servers need to be restarted.

Downtime requires attendance. (1)

Lethyos (408045) | more than 7 years ago | (#17409004)

To complement the comments made in the first response, there are only two situations where you need an administrator to supply the password: once when the system is first brought online, and then every time afterwards the system experiences a critical fault or scheduled maintenance that requires services to be restarted. In both cases, there has to be staff available. Especially if a system goes down (which it should not typically do), there is likely a problem that demands attention. Otherwise, under nominal operating conditions, the service requiring the credentials runs without any additional attention.

Avoid storing it on disk (0)

Anonymous Coward | more than 7 years ago | (#17408356)

Have it cached in memory only (no swapping), meaning you have to enter it each time the machine restarts.

Commercial product solution (1)

PopHollywood (770077) | more than 7 years ago | (#17408424)

My company is considering using the Cloakware Server Password Manager (CSPM) [cloakware.com] to solve this problem.

I've done some preliminary testing and here's basically what it does:

  • Stores usernames and passwords for applications, databases, etc. in a centralized database - encrypted with AES.
  • Central database managed via a web interface.
  • An application makes a call (via API or script) to CSPM daemon running on local machine to obtain desired username/password to target database. For example, the payroll application might ask CSPM for username/password to "PayrollDB".
  • Daemon gathers evidence about requesting application (local username, current application path, hash of application executable, machine-specific fingerprint, etc.) and sends to CSPM server which uses the data to determine if the requestor is legitimate.
  • Daemon caches server responses so that future password requests are serviced by local cache.

My only concern is that Cloakware licenses CSPM per-application ID (the software is "free").

Re:Commercial product solution (0)

Anonymous Coward | more than 7 years ago | (#17408646)

Sounds a lot like Kerberos, except there's an extra daemon that tries to figure out whether the requestor is legitimate. Of course, that can be owned, too... personally, though, I'd try making Kerberos do that, rather than buy a commercial product that might be snake oil. Kerberos is open and heavily analyzed.

Some possibilities (1)

digitalhermit (113459) | more than 7 years ago | (#17408452)

First thing, storing passwords is a bad idea but sometimes cannot be avoided. There are a few things that can be done. None can really prevent someone from dumping the memory contents because, unless you use more sophisticated client/server validation (based on IP, MAC, host auth, etc.), someone with the right privileges can core dump the system or strace the process. Yes, if someone has access to strace a process you probably have bigger issues, but it's conceivable in a DMZ environment where a particular host is compromised.

1) With most implementations of crypt() you can specify a salt value. This salt can be hardcoded or based on some property of a secondary file. For example, store the hash in a config file, but use the MD5SUM of another file as the salt. This prevents someone from just running the script/binary elsewhere and extracting your passwords.

2) Use a proxy mechanism that you can control. I.e., you may not be able to modify the server side, but you could set up a secondary server with restricted privileges that acts as a gateway to the database. Instead of the DB being accessible from the DMZ, the accessible machine authenticates against the proxy. You can set up many types of authentication on the proxy.

3) When possible, keep the stored password in memory only as long as is necessary to build the connection. I.e., clear the memory immediately after auth to prevent a dump from showing the plaintext of the password.
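Point 1 above might look something like this in Python. It's a hypothetical sketch: SHA-256 stands in for crypt(), and the companion file is whatever deployment-specific artifact you pick.

```python
import hashlib
import os
import tempfile

def salt_from_file(path):
    # The salt is the MD5 digest of a secondary file, so a copied script
    # can't recreate the hash on another machine without that file too.
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

def hash_password(password, salt):
    return hashlib.sha256((salt + password).encode()).hexdigest()

# Demo: the companion file stands in for some deployment-specific artifact.
fd, path = tempfile.mkstemp()
os.write(fd, b"contents unique to this deployment")
os.close(fd)
stored = hash_password("hunter2", salt_from_file(path))
print(stored == hash_password("hunter2", salt_from_file(path)))  # True
os.unlink(path)
```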

smart cards (1)

oohshiny (998054) | more than 7 years ago | (#17408510)

The only secure way on current hardware for automated authentication is not to embed passwords in source code. If you're willing to use extra hardware, your best bet is a smart card.

Perl Rocks! (0)

Anonymous Coward | more than 7 years ago | (#17408876)

I remember seeing something to solve this elegantly on CPAN a while ago... The module is called Data::Encrypted and does almost exactly what you ask.

http://search.cpan.org/~amackey/Data-Encrypted-0.07/Encrypted.pm [cpan.org]

DESCRIPTION
===========

Often when dealing with external resources (database engines, ftp, telnet, websites, etc), your Perl script must supply a password, or other sensitive data, to the other system. This requires you to either continually prompt the user for the data, or to store the information (in plaintext) within your script. You'd rather not have to remember the connection details to all your different resources, so you'd like to store the data somewhere. And if you share your script with anyone (as any good open-source developer would), you'd rather not have your password or other sensitive information floating around.

Data::Encrypted attempts to fill this small void with a simple, yet functional solution to this common predicament. It works by prompting you (via Term::ReadPassword) once for each required value, but only does so the first time you run your script; thereafter, the data is stored encrypted in a secondary file. Subsequent executions of your script use the encrypted data directly, if possible; otherwise it again prompts for the data. Currently, Data::Encrypted achieves encryption via an RSA public-key cryptosystem implemented by Crypt::RSA, using (by default) your own SSH1 public and private keys.

RSA Authentication
==================

Data::Encrypted uses RSA authentication to encrypt and decrypt its data. It achieves this by reading the user's public and private RSA keys. By default, Data::Encrypted assumes these files are stored in the .ssh subdirectory of their home directory (found using File::HomeDir), but you can provide alternative key files yourself, either by supplying alternative key filenames, or by building Crypt::RSA::Key's yourself:

Microsoft to the rescue (1)

TheOtherChimeraTwin (697085) | more than 7 years ago | (#17408884)

As much as it pains me to admit this, Microsoft provides a nice solution to this problem.

For example, Keeping secrets in ASP.NET 2.0 [microsoft.com] or Wrap the Data Protection API [bluevisionsoftware.com]

The trick is that they use the user's password to encrypt the data. Tight integration with the operating system has the occasional benefit.

Did this once....GPLed source available. (1)

Zurk (37028) | more than 7 years ago | (#17408992)

http://freshmeat.net/projects/sentinel/ [freshmeat.net]
I did this for Sentinel. I used an executable packer and a number of different means of keeping the salt secure, such as splitting it up and randomizing it at compile time. You can look at the source as it compiles... it's strings-proof but not perfect.

End-to-End Authentication (1)

forsetti (158019) | more than 7 years ago | (#17409130)

Don't authenticate the application to the database. Authenticate the user to the database. The user supplies a credential (password, certificate, biometric, etc.), and the application, acting on the user's behalf, forwards the credential to the database. The database consumes the credential, performs authorization, and delivers data to the user, again through the application.

Kerberos provides a great mechanism for this. Using pkinit, you can use various credential types. Or, stick to the basics and use passwords. Most databases support GSSAPI (or SSPI) now, and using GSSAPI in your app shouldn't be too difficult.

So, the user supplies credentials on demand, nothing is embedded in source, binary, or external configs.

Truly no easy answer (1)

cpct0 (558171) | more than 7 years ago | (#17409392)

Alas, there is no really good way to keep things secure in source code. But there are a few good ways to keep things afloat anyway:

Q. Your coders are not to be trusted
A. Put the security token in a file ("token" here is generic; depending on what you use, it may be a certificate or something else). Open the file, read the token, send it to your server
A2. Use SSL tunneling, using aforementioned certificate, add another file for server details
A3. Create a "mirror database" with all important information replaced with random equivalents, send a new security token file for your coders using that mock-up database.
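A minimal sketch of A/A2 above: the token lives in a file outside the source tree (so it never reaches version control), with an environment-variable override for deployments. The path and variable names are invented:

```python
import os

def load_token(path="~/.myapp/token", env_var="MYAPP_TOKEN"):
    """Fetch the credential from the environment or from a file outside the repo."""
    if env_var in os.environ:          # deployment / CI override
        return os.environ[env_var]
    with open(os.path.expanduser(path)) as f:
        return f.read().strip()        # coders can be handed a mock-DB token here
```

For A3, you simply hand your coders a token file pointing at the mirror database; the code never changes.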

Q. Your clients are not to be trusted
A. Have fun with the password. A good XOR is usually enough to keep it from being recognized if the user opens the executable.
A2. If your code can be hacked, you have bigger problems than that. Add anti-tamper protection to your software, then apply nontrivial encryption to your password; beware that what looks clever in your source may not look clever in assembly, and may be very easy to follow there, especially with modern optimizing compilers and linkers.
A3. If you use 3rd-party external tools to access your database (dynamically linked libraries, like DLLs in Windoze), game over: don't, because the calls can be sniffed. If your database is on another computer and your SSL code lives outside your code, again, game over. Everything must be embedded inside your own executable, and you must protect your executable.
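The "good XOR" from A is a one-liner. It keeps the literal password out of a `strings` dump of the binary and does nothing more; the key and password here are invented:

```python
KEY = b"\x5a\x13\x7e"  # invented obfuscation key, scattered in a real build

def xor(data: bytes, key: bytes = KEY) -> bytes:
    """Repeating-key XOR; applying it twice restores the original."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

OBFUSCATED = xor(b"s3cretpw")   # what actually sits in the executable
password = xor(OBFUSCATED)      # recovered only at the moment of use
```

Anyone who finds both the blob and the routine recovers the password instantly, which is exactly the limitation A2 and A3 go on to address.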

As you can see, the "good" answer is nontrivial in both cases, and no matter what you do, someone with nothing else to do and much to gain (if only the challenge) _WILL_ get access to your database. That's not the way to go, in my opinion. Trust is the way to go, and if they can open up the database, well, good for them. If you are the DoD or some whiznut organization that needs to keep its eggs secure at all costs, and you don't trust your operatives or anyone else: create a portal application that makes allowance and disallowance possible; define a heavily encrypted protocol for accessing that portal; then hand out (at least) a 2-way encoding system using a (potentially obfuscated) hardware key and a nontrivial password, hashed with an inner application key that contains the version and specific build of the software; and create an encrypted tunnel using these values. Even then, someone with enough means will be able to get through.

Remember, once you get all the pieces of the puzzle, you get all the pieces of the puzzle. End of transmission.

To end up as a rant, RTFM, Applied Cryptography is there as your friend. Especially the first few chapters explaining what can be done and not done.

Encrypted File System and other tricks (2, Interesting)

Midnight Warrior (32619) | more than 7 years ago | (#17409468)

Encrypted file systems have a similar problem. They need to decrypt the filesystem for authorized boots or mounts, but need to stay encrypted otherwise. One common trick here is to make the decryption key available only once, at start up, after which it is kept in memory, preferably with a small amount of obfuscation to slow down memory walkers. You could then use something like FUSE [freshmeat.net] to mount the encrypted filesystem with your plaintext password.

As other folks have wisely pointed out, though, the best posture is to use mandatory access control and restrict access to the configuration file. If you have the privileges, other good practices involve removing all compilers from the machine, firewalling all FTP traffic in or out, firewalling egress (outbound) HTTP traffic (pull in files to process), and restricting SSH traffic to pre-defined nodes, enforced with a firewall ruleset. Preferably, you'd make all the firewall stuff happen on a separate box. What this does is restrict what tools will be available to an attacker. You can also remove fun programs like strings, ldd, od, *hexedit, and so on. "But I need to modify these tools!" you say. Leave SVN or CVS clients on the node, check your changes into SVN/CVS on your test-bed machine, and then just check out the latest stable branch on your exposed machine. Then you get good protection and good configuration management in one swoop.

Other tricks involve establishing a proxy process or strictly limiting what can be done with the compromised username/password. A proxy process might be a setuid C program that only does one thing and accepts no user input. If you must accept user input, be extremely strict (use sscanf on all inputs and limit the size of the accepted buffer) and then have an experienced C developer review your code for improper bounds handling. This proxy process might do things like move files to a read-only directory structure (static web pages in a DMZ), or it might be a CGI script that updates rows in a database. We've actually used the CGI script idea because it a) is a cross-platform way of talking to the database, b) is a good decoupler of otherwise complex code, and c) strongly limits what can be done as an attack. Be careful of the venerable SQL injection attack there, though.
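The "extremely strict" input handling for such a proxy might look like this sketch (in Python rather than setuid C; the filename rule and directories are invented — the point is a whitelist, one action, and rejection of everything else):

```python
import os
import re
import shutil

SAFE_NAME = re.compile(r"[A-Za-z0-9_-]{1,64}\.html")  # whitelist, never a blacklist
SOURCE_DIR = "/home/builder/out"
PUBLISH_DIR = "/var/www/static"   # served read-only

def publish(filename: str) -> bool:
    """Copy one validated file to the web root; refuse anything else."""
    if not SAFE_NAME.fullmatch(filename):
        return False               # no path separators, no shell metacharacters
    shutil.copy(os.path.join(SOURCE_DIR, filename),
                os.path.join(PUBLISH_DIR, filename))
    return True
```

Because the pattern must match the entire argument, path traversal and injection attempts simply fail validation before any privileged action runs.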

A good use of a proxy process might be the transparent mounting/unmounting of an external USB drive, perhaps against a hidden partition on the stick. The drive would have your key. Sure it's obfuscation, but it's complicated enough to decode that it will slow somebody down for a while.

The last trick is to limit what can be accomplished with the username/password that is obtained. We have some processes whose job is to inject data into the database for the backend to all of our tools. That database user is limited to select, insert, and update operations. With Oracle, I could even restrict which specific tables get which privileges.

The best thing to do is to write a document that some folks call the Security Design Document to define your security posture, what you are known to protect against, and where you are vulnerable. Assign a risk mitigation matrix (vulnerability, threat, countermeasure, residual risk) row to each vulnerability. Be honest and then let your manager understand the position you've left them in and try to assign a cost to each countermeasure/mitigation so they can make a decision on what to close or leave open.

You are always going to have vulnerabilities. Everyone does, even the best systems. What makes the difference is those who analyze, understand, and counter that risk in a way that is appropriate to the situation. Direct exposure to the Internet is a situation that should warrant better risk analysis, but rarely does.

The mysqlinfo file (1)

suso (153703) | more than 7 years ago | (#17409624)

A long time ago, I started this method of doing this on suso.org. It caught on and now I encourage all my customers to do it:

http://www.suso.org/docs/databases/mysqlinfo.sdf [suso.org]
http://www.suso.org/docs/databases/saferdbpasswords.sdf [suso.org]

I've thought about trying to spread the word about it and even making an RFC, but I don't have the time for that.
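The mechanics are simple either way: credentials sit in a mode-0600 file in the home directory, and scripts read them instead of embedding them. A sketch — the real file format is whatever the linked docs specify; a plain `key value` layout is assumed here:

```python
import os
import stat

def read_dbinfo(path="~/.mysqlinfo"):
    """Parse a key/value credentials file, refusing group/world-readable ones."""
    full = os.path.expanduser(path)
    mode = stat.S_IMODE(os.stat(full).st_mode)
    if mode & 0o077:
        raise PermissionError("credentials file must be mode 0600, got %o" % mode)
    info = {}
    with open(full) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                key, _, value = line.partition(" ")
                info[key] = value.strip()
    return info
```

Refusing to read a loosely-permissioned file is the important part: it turns a silent misconfiguration into an immediate, visible error.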

LOGIN (1)

sciop101 (583286) | more than 7 years ago | (#17409646)

This thread is about LOGIN. Routers/Switches and other networked boxes have embedded passwords.

1. Do a cryptographic hash of the stored password for obfuscation

2. Secure the login connection

Is this over-simplifying?

No really good solutions (2, Interesting)

rlp (11898) | more than 7 years ago | (#17410852)

You've got a machine A on the interior LAN, that needs credentials to access a DB on machine B on the interior LAN. You've got two choices:

1) You can store the credentials somewhere on machine A.
2) The service (typically a Web server) on machine A can run with an account that either has privileges to access the DB, or has privileges to access credentials stored somewhere else that grant access to the DB.

If an intruder gets access to machine A and gets root / admin privileges, then they can gain access to the DB. Obviously, your first priority is to make sure that this does not happen! Use good firewalls and firewall rules. Make proper use of a DMZ. Check your application for security problems (buffer overflows, SQL injection, etc). Keep up to date on patches. Your second line of defense is to:

1) Try to ensure that an intruder is detected.
2) Make them work for it (access to DB)
3) Have a good audit trail
4) Monitor your network and application

I'll address item #2. Assume that you put the credentials in the configuration file or a separate file on machine A. You should encrypt the credentials (using an encryption application NOT kept on machine A). The key can be hard-coded in the (web) application. If you want, you can use layers of keys (encrypted key B decodes the key in the config file, encrypted key C decodes key B, encrypted key D ...), but this quickly reaches a point of diminishing returns and can become a maintenance nightmare. You can obfuscate the key or even build it on the fly to make it more difficult to extract from the application (for binary apps, it helps to strip the symbol table). Use the OS permissions to restrict access to the config / key file and the (web) application. This won't stop a determined intruder, but they'll have to work for access, and it will slow them down.
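"Build it on the fly" can be as simple as scattering the key through the program as fragments that only meet at runtime, so no single string literal survives into the binary. A sketch — all values below are invented, and the XOR stands in for whatever cipher the encryption application used:

```python
PARTS = ([0x64, 0x62], [0x2d, 0x6b], [0x65, 0x79])  # key fragments, scattered

def runtime_key() -> bytes:
    """Assemble the key only at runtime; it is never stored whole."""
    return bytes(b for part in PARTS for b in part)

def decrypt(blob: bytes, key: bytes) -> bytes:
    """Repeating-key XOR; encryption and decryption are the same operation."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))

# What the (off-machine) deploy step wrote into the config file:
config_value = decrypt(b"s3cretpw", runtime_key())
# What the application recovers when it needs to log in:
password = decrypt(config_value, runtime_key())
```

As the comment says, this only raises the cost of extraction; combine it with OS permissions on the config file rather than relying on it alone.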