
Honeywords — Honeypot Passwords

Soulskill posted about a year and a half ago | from the oh-bother dept.

Security 110

CowboyRobot writes "Businesses should seed their password databases with fake passwords and then monitor all login attempts for use of those credentials to detect if hackers have stolen stored user information. That's the thinking behind the 'honeywords' concept first proposed this month in 'Honeywords: Making Password-Cracking Detectable (PDF),' a paper written by Ari Juels, chief scientist at security firm RSA, and MIT professor Ronald L. Rivest (the 'R' in 'RSA'). Honeywords aren't meant to serve as a replacement for good password security practices. But as numerous breaches continue to demonstrate, regardless of the security that businesses have put in place, they often fail to detect when users' passwords have been compromised."
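For those who don't read TFA: the paper's "Gen" step produces decoy passwords ("honeywords") that look like the real one. A minimal sketch of its chaffing-by-tweaking-digits flavor, assuming the real password contains digits; the function names are mine, not the paper's:

```python
import random
import string

def tweak_digits(password: str, num_decoys: int) -> list[str]:
    """Generate decoy passwords by replacing each digit in the real
    password with a random digit (chaffing-by-tweaking-digits).
    Assumes the password contains at least one digit."""
    decoys = set()
    while len(decoys) < num_decoys:
        candidate = "".join(
            random.choice(string.digits) if ch.isdigit() else ch
            for ch in password
        )
        if candidate != password:
            decoys.add(candidate)
    return sorted(decoys)

# The login server stores salted hashes of all "sweetwords" (real +
# decoys), shuffled; only a separate hardened "honeychecker" records
# which index is the real one.
sweetwords = tweak_digits("hunter42", 9) + ["hunter42"]
random.shuffle(sweetwords)
```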




This... is a very good idea. (3, Insightful)

Nadaka (224565) | about a year and a half ago | (#43668441)

It really is.

Re:This... is a very good idea. (4, Interesting)

marcosdumay (620877) | about a year and a half ago | (#43668553)

It's an interesting and intriguing idea, yeah. But I still haven't settled on "good".

How is an attacker supposed to get such passwords? He certainly can't phish them or get them in transit or while in memory. We are protecting against the password database leaking, but then, it's a set of salted hashes, so it's useless for the attacker... Unless it's something so easy to crack that you can be sure that it'll get cracked, but then, you are probably receiving several login attempts with those passwords already.

Re:This... is a very good idea. (0)

Anonymous Coward | about a year and a half ago | (#43668623)

If you know your DB is secure, why bother doing anything more? And if you won't do anything unless you're certain it's 100% secure, you'll never put anything up at all. This is just a technique to let you know if your salt sucks.

No, it's a miner's canary (3, Informative)

dutchwhizzman (817898) | about a year and a half ago | (#43672591)

You do this so you can tell that somehow your design and security measures have failed. If these accounts get used, whether it is with the proper password or just the username (or other user data in your databases) you can be sure that you have a data leak somewhere. By smart placement of the data and adding new "honey data" regularly, you should be able to predict where and when you had a breach. Don't just use user/password combinations for this concept, but also put other "honey data" that might get stolen in, so someone that steals your address database or entire customer data (internal theft by employees) will get caught. Depending on how your system is built up and used and the type of data, you can even use it to pinpoint the employee or exact server that has been compromised.

Techniques like this have been in use for many, many years. Most maps have deliberate flaws in them so illegal copies are identifiable. Most address databases for sale commercially have fake addresses in them as well. I've used this sort of technique before on large customer databases. I'm surprised this is getting so much attention; I thought it had been "industry best practice" for a while now.

Re:This... is a very good idea. (5, Insightful)

djmurdoch (306849) | about a year and a half ago | (#43668669)

It's exactly intended to detect theft of your password database. If you salt in a known way, then it's inconvenient for the attacker, but it's still possible to brute force it. And if there's a bug in whatever hashing scheme you used, it might be easy.

Wouldn't you like to know when someone has done that?

Re:This... is a very good idea. (1)

SolitaryMan (538416) | about a year and a half ago | (#43670807)

I think you are far more likely to screw up this scheme than password hashing + salting.

The idea may be good, but it's more complicated, and more complicated = more buggy = less secure.

Re:This... is a very good idea. (2)

djmurdoch (306849) | about a year and a half ago | (#43671217)

Yes, but even if you don't screw up the hashing, brute force attacks are possible. This approach discourages those, because an attacker won't know which of the broken passwords is safe to use without being immediately detected.

Re:This... is a very good idea. (1)

mozumder (178398) | about a year and a half ago | (#43671501)

I agree this is good.

I'd go a step further and put in words with known difficulty levels of computation, all the way from "password" to "Password1234" to "April01,2001" to "gh89w$5ag", etc., to see what level of cracking effort the attackers were able to reach.

Re:This... is a very good idea. (2)

hairyfeet (841228) | about a year and a half ago | (#43668863)

Nice to see I'm not the only one scratching my head trying to figure out WHY you would want to do this instead of... you know, actually doing it smart by using salted hashes. Correct me if I'm wrong, but I thought the whole point was to make it so that even if they got the database they couldn't use it, because it would take too long to crack. So how exactly would these "honeywords" help, unless you suspect somebody on the inside is stealing from you, in which case I think you have bigger problems.

Re:This... is a very good idea. (1)

JonySuede (1908576) | about a year and a half ago | (#43668995)

Thing is, most salted password tables I've seen in open-source products (no reason to believe proprietary ones are different) look like this:

TABLE PASSWORD (
    INTEGER user_id,
    CHARACTER VARYING salt,
    CHARACTER VARYING hash,
    CHARACTER VARYING algorithm
)

If the attacker gets your database, you're still screwed.

Re:This... is a very good idea. (0)

Anonymous Coward | about a year and a half ago | (#43669273)

In your example the salt is still doing what it is supposed to. I can't start generating a rainbow table instead of trying to brute force an individual password. It's tempting to claim that you are missing out on a bonus effect of effectively increasing your keylength by the length of your salt, but that's really kind of dumb. If you were going to hide bits of information necessary to validate a password in some mystical other "safe" place, why wouldn't you just stick half of your hash there, and make it impossible to guess at passwords using this database instead of just harder?

Re:This... is a very good idea. (1)

JonySuede (1908576) | about a year and a half ago | (#43670915)

It's a small part of defense in depth; any sensitive information that is not atomic should be stored separately. Every speed bump you put in an attacker's road is an opportunity for detection, a point for auditing.

Generating those rainbow tables is only going to get faster; see the post on GPUs further down...
The true solution is proper key derivation and management using dedicated security equipment, e.g. a Java card with a keypad to enter the master key. Re-keying capability is a must, along with a currently safe algorithm like AES-256 in CBC with PKCS7 padding; have someone random from the company enter a new key each year and now you're approaching password storage security. From there, calculate MD5/SHA1/RC4... using a daily one-time-use salt to populate your identity database across your systems that refuse to be federated.
The keys in the Java card are quite safe; those cards are not like the plugin...

Re:This... is a very good idea. (1)

mattpalmer1086 (707360) | about a year and a half ago | (#43683243)

Well, that's an interesting proposal, but has weaknesses all of its own. I don't understand what you mean by using daily one time use salts with MD5 or SHA1. RC4 is a stream cipher, not a hash algorithm, and PKCS7 is cryptographic message syntax, not a padding specification. Given that a password is likely to be less than 16 characters in length, you are only going to have a single encrypted block, so I'm not sure what CBC mode gets you. So I'll ignore the cryptographic buzzword part of your proposal - please feel free to elaborate on it.

For the rest, complexity is the enemy of security. Using reversible encryption certainly lets you change the key every now and again, but now your super secret key must be present in the process that validates passwords, archived securely, etc. It can't just reside on a java card.

What additional security does using reversible encryption buy you? It prevents offline brute force attacks on the password database, but on the other hand, compromise of the key automatically compromises all passwords in that database. What additional security does changing the key buy you? It lets you decrypt and re-encrypt existing passwords, changing the value recorded in the database without the user having to change their password. Someone who had compromised your password database could now... what? Strong encryption already prevented offline brute force attacks, so changing the key regularly is only useful if someone has compromised your key, or you suspect they have. If they have done that, they already have all your users' existing passwords, requiring you to issue new passwords to all your users anyway. So key changing only mitigates the vulnerability that the key can decrypt all passwords, something salting doesn't suffer from in the first place.

Salting has the great advantage that it is simple, keeping cryptographic secrets is not required and is still good enough for most practical purposes. Compromise of a salt only lets an attacker mount a brute force attack against a single password. If you also use a tunable iterative password hashing algorithm, you can selectively increase the strength every year to keep up with advances in hardware. There are even hash algorithms designed not to work well on parallel GPU architectures.
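The tunable iterative hashing mentioned above can be sketched with PBKDF2 from Python's standard library. The iteration count below is illustrative, not a recommendation; raise it over time as hardware improves.

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 200_000) -> tuple[bytes, bytes, int]:
    """Per-user random salt plus a tunable iteration count. Storing the
    count alongside the hash lets you increase it for new passwords
    without breaking old ones."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest, iterations

def verify(password: str, salt: bytes, digest: bytes, iterations: int) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison avoids leaking match length via timing.
    return hmac.compare_digest(candidate, digest)
```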

By the way, I'm not saying that salting is the only way to do this, or that reversible encryption should never be used. Just that your proposal doesn't give me any confidence that my security would improve a lot, but does give me a lot of extra complexity and cost to manage.

Re:This... is a very good idea. (4, Informative)

LordLimecat (1103839) | about a year and a half ago | (#43669309)

Salts and hashing algorithms aren't supposed to be secret. The only requirement for the salt is that it be unique, and the only requirement for the hashing algorithm is that it be secure.

Re:This... is a very good idea. (3, Insightful)

eddy (18759) | about a year and a half ago | (#43669003)

You'd do it because salted passwords are falling to increasing GPU power. It's a brand new world.

Preemptive comment (2)

eddy (18759) | about a year and a half ago | (#43669141)

Doesn't mean you SHOULDN'T use a good KDF like scrypt [wikipedia.org] of course, but remember that these "honey accounts" can give you more information. If your data is stolen you want to know about it as soon as possible for all sorts of reasons. If someone breaks in and steals your super-safe account database, you still have a problem you want to detect and fix; the break-in itself. This can be one more layer in that protection.
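For reference, scrypt is available in Python's hashlib (3.6+, OpenSSL-backed). A minimal sketch; the cost parameters are illustrative only (n is the CPU/memory cost, r the block size, p the parallelism):

```python
import hashlib
import os

def scrypt_hash(password: str) -> tuple[bytes, bytes]:
    """Memory-hard KDF: these parameters cost roughly 16 MiB of memory
    per hash, which is what makes massively parallel GPU cracking
    uneconomical compared to plain SHA1."""
    salt = os.urandom(16)
    key = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, key
```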

Re:This... is a very good idea. (1)

cheater512 (783349) | about a year and a half ago | (#43670773)

Actually, good secure password-hashing algorithms (i.e. not plain SHA1) can't fit into a GPU.
They deliberately chew up more memory to create one hash than a GPU core can provide, but a CPU can provide it easily.
It's one of their features.

Re:This... is a very good idea. (0)

Anonymous Coward | about a year and a half ago | (#43673367)

When you can get graphics cards with gigabytes of RAM, how does that work? I guess it is something to do with the design of a GPU being very different to a CPU, but a little more detail would be nice.

Re:This... is a very good idea. (1)

hairyfeet (841228) | about a year and a half ago | (#43673155)

Citation please? Because the only failures I have heard of are those using already deprecated crypto like DES or SHA; I have yet to hear of anybody breaking into 512-bit AES. In fact, last I heard, the length of time required to brute force 512-bit AES with an 8-character password of the usual numbers, characters, and symbols, properly hashed, was longer than the sun will survive; we're talking tens of billions of years.

Throwing more hardware at it only gets you to a certain point; after that, if you don't find a hole in the crypto, you can throw every GPU on the planet at it and hit a brick wall. This of course doesn't mean you shouldn't think ahead; given how powerful systems are today, 1024-bit or even 2048-bit is doable, and the number of years required to break that with 100 Blue Gene supercomputers is frankly insane. Remember, friend, you don't have infinite time here; that database won't be good forever. So unless you have a citation, I don't see how one could crack it even given, say, a year to do so; the numbers we are talking about are just too insanely huge.

Re:This... is a very good idea. (4, Insightful)

Cormacus (976625) | about a year and a half ago | (#43669259)

I think the point is that this method doesn't actually prevent any of the breaches that best practices (salting, using a strong hash algorithm, etc.) protect against; rather, it provides early warning that your best practices failed. If any one of your honeypot passwords gets used, immediately shut everything down a la Madagascar, then find and fix the hole the hackers used.

Re:This... is a very good idea. (2)

GLMDesigns (2044134) | about a year and a half ago | (#43669625)

I think this is an excellent idea. I've been using versions of this for years and it's come in useful.

This is an early warning system in case things go wrong. And things do go wrong, even when you're competent and paying attention to best practices.

Re:This... is a very good idea. (5, Interesting)

rickb928 (945187) | about a year and a half ago | (#43670223)

I've done this for more than a decade. I first heard about it in database development: seeding the subscription table, for instance, with fake subscribers, both to test that delivery was made (I and my address were among the fakes) and to catch thieves using the list. Virtually every mailing list I've handled has had trap users in it. Every mail server I've built has had traps in it, both to verify spam and to catch the thieves.

This is virtually BAU for me and my fellow admins on servers that we maintain. Trap users and such are very handy. I usually have a few users with no shell or anything on the server(s) just to catch this, and log analyzers that watch and report.

And I expect we'll get pwned again some day. It used to be script kiddies pretending to be ninja haxors, but nowadays it's mostly random attackers that hate me, or generic botnets and compromisers by the tens of thousands. Sometimes I would rather not run a mail server.

Fortunately, the last few times we've had trouble, I was able to trace back close to the offenders. The university network guys were marginally interested, but the ISP in the southeastern US took action. I don't expect them to do that again, so I just watch and wait.

But trap users, seeding honeywords, very good ideas.

Re:This... is a very good idea. (2)

ShanghaiBill (739463) | about a year and a half ago | (#43669889)

I mean correct me if I'm wrong but

You are wrong. Good security is characterized by defense in depth. Adding another layer of defense is usually better than just trying to strengthen an existing layer. Of course you should use salted passwords. That is good practice, but there are a number of ways that it could fail.

unless you suspect somebody on the inside is stealing from you in which case i think you have bigger problems.

Really? What is a "bigger problem" than the cost of the lawsuits, lost customers, and PR disaster that would result from an insider stealing your customers' sensitive data?

Here's why Strongbox does it (1)

raymorris (2726007) | about a year and a half ago | (#43671409)

I can tell you two reasons to do it. Often, the billing company manages the password list. The top billing companies including Paypal, CCBill, etc. do not use strong hashes by default. Instead, they use DES, which was secure in 1972. Hundreds of thousands of sites use a third party security package like Strongbox to provide password security for those passwords which are generated by the biller(s). Strongbox or a similar third party solution can use "honeywords" to detect breaches even if they can't control the algorithm Paypal uses.

Secondly, when SB DOES manage the passwords, it was designed to use a highly secure, yet highly portable hash. With glibc supporting crypt($1$), the best algorithm at the time was salted MD5. While still secure in THIS CONTEXT, salted MD5 could fall given the fall of MD5 in other contexts. In case salted MD5 is ever broken, this system would alert SB users if their list was compromised next year or in 2016 or whatever.

Due to the popularity of untrained programmers writing web apps in an inherently insecure language, PHP, breaches are VERY common.

Re:Here's why Strongbox does it (1)

marcosdumay (620877) | about a year and a half ago | (#43675759)

So, the idea is that programmers who aren't competent enough to choose a good authentication library will implement this and discover they are not competent?

If you are going to change your authentication routines, why would you just put an alarm instead of making them secure?

Apparently you completely missed the point (1)

raymorris (2726007) | about a year and a half ago | (#43677829)

Apparently I completely failed to communicate the points.

First, most often, the programmers choosing the algorithm are NOT employed by the owner of the site. The owner of the site wants to protect themselves from dumb decisions made by the billing company. Most sites use a third party biller such as Paypal, CCBill, Zombaio, etc. If you run geektutorials.com, billing through Paypal, you have no control over what algorithm Paypal uses to store your passwords*. You do, however, want to be notified when your passwords, which were hashed by Paypal, get compromised. So the person or company wanting the fix is not the person or company who made the bad programming decision.

The other scenario is when the programmers DID choose a good, secure algorithm. Ten years ago, many good programmers chose MD5 or salted MD5, which were secure at the time. Millions of passwords were hashed with MD5 and salted MD5. Later, MD5 was cracked. Suddenly, the properly hashed passwords were at risk. Strongbox or another system with this feature would alert you when code that was understood to be secure when written is later compromised.**

* You actually CAN replace the biller's script, and we do that too. That doesn't fix the bad algorithm already used for existing users, though. So even if the biller's bad choices are fixed, an alarm is still required.

** MD5 is currently broken _for_certain_purposes_. _Salted_ MD5 _for_passwords_ is still secure today, assuming reasonable input validation. It might be broken next month, though, so an alarm is wise.

Re:Apparently you completely missed the point (1)

marcosdumay (620877) | about a year and a half ago | (#43680895)

But if an attacker gets your PayPal password by breaching PayPal's database, he'll use it to log into PayPal's site, not yours, and it is PayPal that must implement the fix, not you. Of course, the person who wants the fix isn't the one who wrote the flaw; the problem is that the person with the capacity to actually create the fix is the same person who created the flaw (and sometimes doesn't want it exposed).

Your second scenario is useful: if your code stops being updated, you still get an alarm. The problem now is how that compares with false positives. Maybe you want strong dummy passwords, and not weak ones like I was assuming.

Re:This... is a very good idea. (1)

Charliemopps (1157495) | about a year and a half ago | (#43671539)

This is like a fire alarm in a dorm. Everyone knows the building's made out of cinder block, but you just don't know what could happen... and fire alarms are cheap, so why the hell not?

Re:This... is a very good idea. (1)

marcosdumay (620877) | about a year and a half ago | (#43675719)

It's probably more like a fire alarm in a concrete building that goes off every time someone strikes a match or smokes a cigarette.

Re:This... is a very good idea. (2)

Nemyst (1383049) | about a year and a half ago | (#43668913)

It is a good idea. It takes very little effort, it's compatible with every possible backend, and it's an additional warning for when things go awry. Remember, you can never plan for every way things might go wrong.

Re:This... is a very good idea. (1)

kasperd (592156) | about a year and a half ago | (#43669243)

Unless it's something so easy to crack that you can be sure that it'll get cracked, but then, you are probably receiving several login attempts with those passwords already.

You could use a difficult-to-guess but plausible-looking username, for example trillyps658. Then combine that with an easy-to-guess password, for example password12345. The username would be stored in plaintext in the database; the password would be salted and hashed like the rest of the database, but could be one of the first thousand passwords attempted.

In that case, what you are detecting is really that the usernames are leaked, not the passwords. But if the passwords are too difficult to break, you may never learn that the database has been leaked. You could also let the database contain email addresses and look for any email sent to some of those addresses.
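The scheme above might be sketched like this; the name fragments and the weak-password list are invented for illustration:

```python
import secrets
import string

# Deliberately weak: any cracking run against the stolen hashes will
# recover one of these quickly.
COMMON_PASSWORDS = ["password12345", "letmein1", "qwerty123"]

def make_honey_account() -> tuple[str, str]:
    """Username is unguessable (so a hit means the DB leaked, not a
    lucky guess); password is weak enough to fall to any cracker."""
    name = secrets.choice(["trilly", "marvo", "kestra"])
    suffix = "".join(secrets.choice(string.digits) for _ in range(3))
    username = f"{name}ps{suffix}"
    password = secrets.choice(COMMON_PASSWORDS)
    return username, password
```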

Re:This... is a very good idea. (1)

marcosdumay (620877) | about a year and a half ago | (#43675707)

In that case, what you are detecting is really that the usernames are leaked and not the passwords.

That's why I'm not sure about it. The article got me thinking about how unused usernames could leak. At most sites they are simply public data, but even if they're not published outright, this is far more likely to lead to false positives than to real breach detection. And false positives make people lenient.

Besides, as I said, it does not detect the most likely situations where your passwords may leak.

This is an ok idea, definitely not a great one (5, Insightful)

TiggertheMad (556308) | about a year and a half ago | (#43669039)

Ok, for those who didn't RTFA, or don't know anything about security, you have a list of users and encrypted passwords in a DB. They log on and their password is checked against the DB. The problem is how do you know if someone has stolen your DB so they can crack it offline? (Offline brute force attacks are much more effective since they are thousands of times faster) So the author proposes that you give each user several possible passwords in the DB, only one of which is the correct one. If other passwords are used to logon, a danger alarm goes off, and you know someone has stolen your DB.

There are several problems with this idea. To make it work, you have to have a second DB listing all the passwords, and some sort of marker indicating which ones are real and which are fakes. You can't put this in the main DB, because then the hackers would have stolen this info too, and can tell which passwords are real. So you have a second, more secure system for this. Aside from the problems in maintaining a separate parallel system, one might ask the question, "why isn't your primary DB as secure as the secondary DB?". If attackers can breach your main defenses how do you know they cannot breach your backup network? What happens if your secondary system goes down?

More insidious, there is the recursive security problem. The point of doing this is for the assurance that your password DB is secure. How will you know if an attacker has gained access to your secondary password DB? Well, that would require a third password DB.......

Agreed. (0)

Anonymous Coward | about a year and a half ago | (#43669509)

How to determine if an incoming account+password combo is a honeytoken is definitely the problem here.

One idea would be to add a column to the database where you store an internal hash of some account info that is available at the point of verification, where all honey accounts are hashed using one key and all normal ones using another, and then hard-code this check into the application, working on the assumption that the separation between your source and the database is fairly good. It would also require the attacker to actually look, even if he also got access to the source (or reverse-engineered binaries/byte code). The column might rouse suspicion, but...

Heck, you could just add a boolean column with some innocuous name to achieve the same thing, I guess. Set "no_newsletter" on all fake accounts.
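The keyed-marker idea in this comment can be sketched with HMAC: a keyed hash over account data, with one key for honey accounts and another for real ones, checked in application code. The keys and fields below are placeholders, not a vetted design:

```python
import hashlib
import hmac

REAL_KEY = b"key-for-real-accounts"    # placeholder secrets, kept in the
HONEY_KEY = b"key-for-honey-accounts"  # application, not the database

def marker(user_id: int, email: str, key: bytes) -> str:
    """The value stored in the extra column for each account."""
    msg = f"{user_id}:{email}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def is_honey_account(user_id: int, email: str, stored_marker: str) -> bool:
    """Checked at login; an attacker holding only the database cannot
    tell which accounts are traps without the keys."""
    expected = marker(user_id, email, HONEY_KEY)
    return hmac.compare_digest(expected, stored_marker)
```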

Re:This is an ok idea, definitely not a great one (4, Informative)

gamanimatron (1327245) | about a year and a half ago | (#43669587)

Some responses (informed by the actual paper [mit.edu] ):

The second DB doesn't have any of the password hashes; it just knows which one is correct. It's a single table of (userid, hashid) where hashid is just some small integer.

The idea seems to be that the second system can be a smaller, less complicated single-function server, easier to harden and could be running a different OS/Webserver/DB stack. You could (by sacrificing real-time validation) even have the second system entirely firewalled off and unreachable to an attacker, just polling the login servers to validate the sessions at some small interval.

If the second system goes down, one approach would be to just accept any of the passwords until it comes back up. Then check the logs of what happened while it was offline and act accordingly (invalidate sessions, raise alarms, whatever).

Overall, I like the idea tremendously. It seems like it's not quite all there yet, but we're probably going to start implementing some variant of it immediately.
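The (userid, hashid) table described here amounts to a tiny single-purpose service. A toy sketch of the honeychecker's two operations from the paper (Set and Check), with the alarm handling stubbed out:

```python
class HoneyChecker:
    """Holds only small integers: which sweetword index is real for
    each user. No hashes, no passwords; stealing this alone is
    useless to an attacker."""

    def __init__(self) -> None:
        self._real_index: dict[str, int] = {}

    def set(self, user_id: str, index: int) -> None:
        """Called when a password is created or reset."""
        self._real_index[user_id] = index

    def check(self, user_id: str, index: int) -> bool:
        """Called after a submitted password matched some stored hash."""
        if self._real_index.get(user_id) == index:
            return True
        self.raise_alarm(user_id)  # a honeyword matched: likely DB theft
        return False

    def raise_alarm(self, user_id: str) -> None:
        print(f"ALARM: honeyword login attempt for {user_id}")
```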

I patch the patch! (1)

TiggertheMad (556308) | about a year and a half ago | (#43669935)

The idea seems to be that the second system can be a smaller, less complicated single-function server, easier to harden and could be running a different OS/Webserver/DB stack. You could (by sacrificing real-time validation) even have the second system entirely firewalled off and unreachable to an attacker, just polling the login servers to validate the sessions at some small interval.

And how are you going to implement password resets in any sort of timely fashion on this magic one-way ultra-secure box? I can pretty much respond to any answer you will give me with either a) won't scale or b) new security vulnerability.

Re:I patch the patch! (1)

gamanimatron (1327245) | about a year and a half ago | (#43670799)

*shrug*

In my hypothetical offline-validator scenario, it doesn't have to scale because it's not running at transaction time. Go ahead and reset the password, generate a bunch of new fake hashes and store the index of the "real" one in the same log that will be picked up for validation later on. With asymmetric encryption, the log could be stolen outright and be of no use at all to an attacker.

That said, I'd probably lean towards an online validator just so I could stick attackers in a honeypot and keep them from messing with my users. Though, as someone else pointed out here, by far the most likely use for the stolen passwords is not on my site, but to use them to log into bank accounts.

Re:This is an ok idea, definitely not a great one (0)

Anonymous Coward | about a year and a half ago | (#43669809)

I would have thought that you would add fake users - if anyone tries to log in as "Rollo Tomassi" you know your DB has been compromised.

Re:This is an ok idea, definitely not a great one (1)

Patman64 (1622643) | about a year and a half ago | (#43672057)

To make it work, you have to have a second DB listing all the passwords, and some sort of marker indicating which ones are real and which are fakes. You can't put this in the main DB, because then the hackers would have stolen this info too, and can tell which passwords are real. So you have a second, more secure system for this. Aside from the problems in maintaining a separate parallel system, one might ask the question, "why isn't your primary DB as secure as the secondary DB?". If attackers can breach your main defenses how do you know they cannot breach your backup network? What happens if your secondary system goes down?

You don't necessarily need a second DB. Just make which-one-is-the-right-one be a function of some other data, like the username.
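One way to make the real index "a function of some other data": a keyed hash of the username, reduced modulo the number of stored hashes, with the key living in application code rather than the database. A sketch; the key is a placeholder:

```python
import hashlib
import hmac

APP_KEY = b"placeholder-secret-kept-out-of-the-db"

def real_index(username: str, num_sweetwords: int = 10) -> int:
    """Deterministically picks which of the user's stored hashes is the
    real one; without APP_KEY, the database alone doesn't reveal it."""
    digest = hmac.new(APP_KEY, username.encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big") % num_sweetwords
```

The obvious caveat, raised in the replies below this comment, is that an attacker who also steals the application code gets the key too.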

Re:This is an ok idea, definitely not a great one (0)

Anonymous Coward | about a year and a half ago | (#43677321)

...and if an attacker can get your password database, surely they can't get to your code that makes that decision.

Re:This is an ok idea, definitely not a great one (1)

Tom (822) | about a year and a half ago | (#43673027)

Yeah, like all great ideas, it needs polishing.

I wouldn't put the real/fake distinction into the database at all. I would put it into the code. For example, a simple noise function (pseudo-random, but deterministic) using a never-changing part of the user data such as the ID as input determines which of the n passwords is the valid one.

But frankly, the fact that a single user has more than one password at all would be a dead giveaway to any skilled attacker that there's some kind of trap. If you're not interested in securing individual accounts, but in detecting mass hacking, you could just insert fake accounts and make them trigger the alarm. Again, the distinguisher doesn't have to be in the database at all.

Re:This... is a very *academic* idea. (0)

Anonymous Coward | about a year and a half ago | (#43669311)

Well, it assumes that you have a "separate hardened computer system where secret information can be stored". It uses this information to allow one to know which passwords are valid.

If you have such a system, you already can store and check salted secure hashes in a good manner. So really, it assumes that a system exists that already would solve the original problem...

Re: This... is a very good idea. (0)

Anonymous Coward | about a year and a half ago | (#43671371)

No. Didn't RTFA (sorry, already using something similar that's not broken), so it might be a bad summary...

Simple explanation: if your passwords are properly stored, there's no way you could detect a "failed attempt" with a stolen password without a successful login. Unless the proposal includes fake users. Even then... your login/customer DB has been stolen. This is the "well, shut everything down, we're done" alarm for a good number of companies.

Re:This... is a very good idea. (0)

Anonymous Coward | about a year and a half ago | (#43671771)

It really is.

It's also nothing new.

Re:This... is a very good idea. (1)

mwvdlee (775178) | about a year and a half ago | (#43672661)

So now hackers must compromise the login tracking database as well, before deciding on which passwords to use?

First proposed this month, first used years ago. (1)

Anonymous Coward | about a year and a half ago | (#43668463)

This is new news? I implemented this same thing 4 years ago and it's hard to imagine nobody else has considered it.

Re:First proposed this month, first used years ago (-1)

Anonymous Coward | about a year and a half ago | (#43668701)

I implemented first posts decades ago and appears you hacked your way around it.

Re:First proposed this month, first used years ago (0)

Anonymous Coward | about a year and a half ago | (#43668771)

The difference is, you didn't tell anyone. http://xkcd.com/664/

Ron Rivest stole my idea! (1)

Anonymous Coward | about a year and a half ago | (#43668467)

I'm actually a bit annoyed right now: I've been working on this concept for about a month now. I guess I should be honored by the "great minds think alike" thing, but damnit, I wanted to finally get my name out there.

It's a good start and part of the technique I've been working with... great way to catch exfiltrations in progress, but we could go a bit further. Patches to critical services like SSH could be developed that would accept lists of common bruteforced passwords and automatically block and alert, or even pass the connecting client over to a honeypot.

Re:Ron Rivest stole my idea! (2)

Em Adespoton (792954) | about a year and a half ago | (#43668595)

I'm actually a bit annoyed right now: I've been working on this concept for about a month now. I guess I should be honored by the "great minds think alike" thing, but damnit, I wanted to finally get my name out there.

It's a good start and part of the technique I've been working with... great way to catch exfiltrations in progress, but we could go a bit further. Patches to critical services like SSH could be developed that would accept lists of common bruteforced passwords and automatically block and alert, or even pass the connecting client over to a honeypot.

I've been doing this for years via fail2ban; just doing blacklisting, not honeynet redirecting, but still...

One thing I used to have set up was a redirect to a secondary firewall table for hosts entering the wrong passwords; the secondary firewall table had redirects to a dummy server that was configured with a completely fake network and service topography... so if someone started attacking using an IP, the information they gleaned would be completely misleading without actually providing an active honeypot. Dummy server has since been repurposed and now I just block though; don't have the time to waste examining what people/bots are up to these days.

Re:Ron Rivest stole my idea! (0)

Anonymous Coward | about a year and a half ago | (#43668705)

I've been doing this for years via fail2ban; just doing blacklisting, not honeynet redirecting, but still...

fail2ban is great, but it has flaws. Slow crawl attacks evade it pretty easily, and real users sometimes get locked out because they forget their password and commit too many failures. This approach has none of those flaws, and gives you significantly more flexibility.

One thing I used to have set up was a redirect to a secondary firewall table for hosts entering the wrong passwords; the secondary firewall table had redirects to a dummy server that was configured with a completely fake network and service topography...

Yep, that's a high interaction honeypot. Great and valuable stuff, but slightly different methodology with some different applications than this. One nifty thing with honeywords is the theoretical capability to detect actual data exfiltrations in progress.

Re:Ron Rivest stole my idea! (1)

Em Adespoton (792954) | about a year and a half ago | (#43668813)

I've been doing this for years via fail2ban; just doing blacklisting, not honeynet redirecting, but still...

fail2ban is great, but it has flaws. Slow crawl attacks evade it pretty easily, and real users sometimes get locked out because they forget their password and commit too many failures. This approach has none of those flaws, and gives you significantly more flexibility.

One thing I used to have set up was a redirect to a secondary firewall table for hosts entering the wrong passwords; the secondary firewall table had redirects to a dummy server that was configured with a completely fake network and service topography...

Yep, that's a high interaction honeypot. Great and valuable stuff, but slightly different methodology with some different applications than this. One nifty thing with honeywords is the theoretical capability to detect actual data exfiltrations in progress.

Indeed. I've been thinking of tweaking my fail2ban jails to add a script checking for specific username login attempts. If I then seed my password file with these accounts, it should be similar to what they're doing with honeywords, with minimal effort on my part.

My setup wasn't a high interaction honeypot though; there was no actual network, just a set of canned responses for the scanners. I had a tarpit going for a while, but got bored with that, as my assets have never had enough visibility for that to be at all educational or a deterrent for attackers.

Re:Ron Rivest stole my idea! (1)

mattventura (1408229) | about a year and a half ago | (#43669235)

I've done something like that before. Set up some fake usernames and passwords (common ones like "test" and "password"), and if anyone logs in with those credentials, send them to a honeypot, notify a reporting system, or just firewall their IP. It keeps legitimate users unaffected (unless a hacker is specifically targeting your server and knows a valid username) and catches most of the typical script kiddies/brute force attacks.

Re:Ron Rivest stole my idea! (1)

Sloppy (14984) | about a year and a half ago | (#43670853)

denyhosts (a program very similar to fail2ban) has explicit built-in support for that sort of thing. You can trivially configure it so that if anyone tries to log in as any of a certain list of users, that triggers the ban.

Wanna brute force my mysql user's password? Ok. I don't have a mysql user anyway, but your first attempt to be him, is also your last attempt .. to be anyone.

Re:Ron Rivest stole my idea! (1)

Cenan (1892902) | about a year and a half ago | (#43668689)

As opposed to just straight up monitoring if someone is pilfering with your shit?
C'mon now, are you suggesting this as a fix for the flawed philosophy that a user account can have any sort of privileges against a database that stores user information? There are certain methods within certain processes that might need those kinds of privileges, but the problem only arises when you equate a user and a process, and assign permissions to them from the same set.

Looking at a finer grain than [yes|no] assignment on a user/process level would make for a much better use of your time, than trying to patch yet another hole in the whole concept of "security" in IT right now.

Re:Ron Rivest stole my idea! (0)

Anonymous Coward | about a year and a half ago | (#43669171)

As opposed to just straight up monitoring if someone is pilfering with your shit?

This is a way of monitoring if someone is pilfering your shit that's pretty easy and compatible with virtually every definition and form of "shit" out there.

C'mon now, are you suggesting this as a fix for the flawed philosophy that a user account can have any sort of privileges against a database that stores user information?

I don't assume that I have every possible breach point identified or sealed, even though I try my damnedest. There are classes of attacks and methods of pivoting that we won't catch on to until an attacker uses them, and there are misconfigurations that won't be identified by us before they're exploited. That's just reality, and not acknowledging it is fooling ourselves. That's where defense in depth comes in.

Looking at a finer grain than [yes|no] assignment on a user/process level would make for a much better use of your time, than trying to patch yet another hole in the whole concept of "security" in IT right now.

Role based authentication is great. It's another layer. But depending on any one layer for protection is never an optimal place to be. I think the honeywords concept has a lot of merit as an additional layer in a hedgehog defense.

Re:Ron Rivest stole my idea! (0)

Anonymous Coward | about a year and a half ago | (#43680379)

Please watch your language. He didn't "steal" your idea, you may have had the same idea independently.

How does this work? (1)

ZombieBraintrust (1685608) | about a year and a half ago | (#43668469)

If I make a copy of the password database and place it on my machine then how will an alarm reach the admins?

Re:How does this work? (1)

Cluelessthanzero (1885004) | about a year and a half ago | (#43668519)

When you try logging in with a honeyword password in that database is when the alarm goes off, methinks.

Re:How does this work? (0)

Anonymous Coward | about a year and a half ago | (#43668599)

This ain't new --- I did this back in the 80's

Re:How does this work? (0)

Anonymous Coward | about a year and a half ago | (#43668793)

If not entirely new, it's still way underleveraged.

Why is there no /etc/honeywd file? Why don't any service daemons support this approach out of the box? Why don't network and host level IDSes come with "customize me plz!" honeyword rules built in?

Re:How does this work? (1)

allo (1728082) | about a year and a half ago | (#43669133)

If it were implemented that way, the attackers would steal passwd, shadow AND honeywd. Nothing gained.
It only gains you something as long as the honeyword response is handled by a black box that can raise an alarm when it's asked about a honeyword.

Re:How does this work? (5, Informative)

zindorsky (710179) | about a year and a half ago | (#43668537)

When you use one of the fake ID and passwords to try to log in. That will set off an alarm in the system that someone has stolen the database. Think about it - it's really quite clever.

Re:How does this work? (2)

rsborg (111459) | about a year and a half ago | (#43669061)

When you use one of the fake ID and passwords to try to log in. That will set off an alarm in the system that someone has stolen the database. Think about it - it's really quite clever.

Isn't this kind of like ISP-based spam detection, where you create a list of honeypot email addresses that no one would ever email on purpose? Anyone who sends to those addresses is likely a spammer and should be added to the spam sender blacklist.

Re:How does this work? (0)

Anonymous Coward | about a year and a half ago | (#43669653)

You are right: fake IDs. For whatever reason, this topic talks about fake passwords, but they are real passwords. It's the accounts that are fake, at least in that they don't correspond to real users.

Re:How does this work? (1)

im_thatoneguy (819432) | about a year and a half ago | (#43670911)

To be clear, this isn't a Fake ID and Password. I was thrown for a moment too, since honeyaccounts are already used.

Problem #1 with a honey account is that you have to have many many many accounts to increase your odds of the attacker happening to try one of your accounts and logging in with it. Ideally you would have as many fake accounts as real ones to increase the odds of them testing a honey account early on instead of them potentially accessing dozens of accounts (which ones?) before triggering an alarm.

Problem #2 which TFA addresses is that attackers could theoretically identify the traits of a honeyaccount (weird password, no google results for name/location/email etc).

Between these two problems you might not know if an attacker is just cautiously only accessing accounts that are verifiable and real. You could have a data leak and an attacker tiptoeing around the 'laser sensors' if you will.

What this does is different. If a new user registers "John Doe" and they choose a password "JohnsYourUncle3901"

The system immediately generates several "alternative" passwords.

Pass1: "YourUncleJohn1931"
Pass2: "JohnsYourUncle481"
Pass3: "BobsYourUncle3810"
Pass4: "UncleJohn3994"

Now the attacker has to brute force not one but 4 passwords just to compare them. And with a sufficiently 'smart' system of generating random passwords it should be all but impossible to identify a real password from a fake one.

You now have a second database which is simply:
[John Doe GUID] | "3"

That way you know if anyone tries to use password 1,2 or 4 (or 4 - 31) the account is probably compromised. Another advantage is that instead of creating huge numbers of fake accounts you can just create huge numbers of fake passwords.
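A minimal sketch of that scheme (the username, password, and digit-tweaking decoy generator are illustrative stand-ins; the index table, which the paper calls a "honeychecker", would live on a separate hardened machine, and a real deployment would use bcrypt/scrypt/argon2 with per-hash salts rather than bare PBKDF2 with one shared salt):

```python
import hashlib
import os
import random

def tweak_digits(password: str) -> str:
    """Derive a decoy by randomizing the digits (the paper's 'tweaking')."""
    if not any(ch.isdigit() for ch in password):
        return password + "".join(random.choices("0123456789", k=2))
    return "".join(random.choice("0123456789") if ch.isdigit() else ch
                   for ch in password)

def make_honeywords(real_password: str, n: int = 4):
    """Return (list of n candidate passwords, index of the real one)."""
    decoys = set()
    while len(decoys) < n - 1:
        candidate = tweak_digits(real_password)
        if candidate != real_password:
            decoys.add(candidate)
    candidates = list(decoys)
    index = random.randrange(n)
    candidates.insert(index, real_password)
    return candidates, index

def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# The main credential store holds ALL the hashes; only the separate
# honeychecker knows which index is the real password.
salt = os.urandom(16)
candidates, real_index = make_honeywords("JohnsYourUncle3901")
stored_hashes = [hash_password(p, salt) for p in candidates]
honeychecker = {"john.doe": real_index}  # kept on the hardened machine

def check_login(user: str, attempt: str) -> str:
    h = hash_password(attempt, salt)
    for i, stored in enumerate(stored_hashes):
        if h == stored:
            if i == honeychecker[user]:
                return "ok"
            return "ALARM: honeyword used, credential database likely stolen"
    return "bad password"
```

An attacker who cracks all four hashes offline still has only a 1-in-4 chance per guess of picking the real one instead of tripping the alarm.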

Re:How does this work? (2, Interesting)

Anonymous Coward | about a year and a half ago | (#43668549)

There are a couple of ways:

1) You attempt to log in to a service using the honeyword. This trips routines in the code that recognize the account as a honeypot account and not something real, which throws alerts. This seems to be the method being suggested in the paper, though I've only glanced at it so far.

2) A better way that I don't see mentioned: using an IDS. Set up custom rules at host and network layers to look for the occurrence of a given set of 'honeyword' strings and their encrypted variants. Chances are when an IDS throws a flag on this rule, it's because your auth database is being exfiltrated over a plaintext protocol (which will often be the case with SQL injection type attacks). If you're unrolling SSL or using something like mod_security to do the scanning, you can catch even the exfiltrations that are using HTTPS.

Re:How does this work? (0)

Anonymous Coward | about a year and a half ago | (#43668559)

When you log in using those credentials.

Re:How does this work? (0)

Anonymous Coward | about a year and a half ago | (#43668563)

When you try to log into bobsmith23 with password test123 ?

Re:How does this work? (2)

ZombieBraintrust (1685608) | about a year and a half ago | (#43668571)

An adversary who steals a file of hashed pass-words and inverts the hash function cannot tell if he has found the password or a honeyword.

Never mind, it was in the second link. Basically the attacker gets a 1 in 3 chance on each login of tripping the alarm. Strong passwords would stand out from the honeywords, though, if the honeywords are weak passwords. If honeywords are strong passwords, then weak passwords would stand out.

Re:How does this work? (1)

N1AK (864906) | about a year and a half ago | (#43672763)

Should be reasonably easy to handle. Put the real password through a password rating system and then generate alternative passwords with similar rankings.

Re:How does this work? (1)

marcosdumay (620877) | about a year and a half ago | (#43668583)

The alarm will reach them when you try to use the fake passwords to access their servers.

Re:How does this work? (1)

Parafilmus (107866) | about a year and a half ago | (#43668629)

If I make a copy of the password database and place it on my machine then how will an alarm reach the admins?

It won't, if all you do with the passwords is keep them on your own machine.

But if you try to use one of the passwords to access the machine you took them from, that's when you risk alerting the admins.

Re:How does this work? (2)

Em Adespoton (792954) | about a year and a half ago | (#43668651)

If I make a copy of the password database and place it on my machine then how will an alarm reach the admins?

I'll answer the "How does this work?" part, as your comment has nothing to do with the information provided.

If you have a copy of the password db, good for you. If you crack the accounts and try to use an elevated account to access something, and that elevated account is a dummy account, alarm bells will go off everywhere. That's how this works. And everyone should be doing it.

Back in the day, I did something "similar" where I created a bunch of "default" accounts using common usernames and passwords, and gave those accounts no access. Syslog was set to send an alert and blacklist the incoming IP for 3 hours. In the 3 years I had this running, I only caught a handful of attackers, but hey... that shows that they would have got through had those accounts actually been in use.

Now I just use fail2ban.

Re:How does this work? (1)

Purpleslog (1645951) | about a year and a half ago | (#43668675)

I think this is the scenario: 1) Honeywords (userid/password pairs) are seeded into an organization's passwd file. 2) Bad guys acquire the passwd file (the "how" doesn't matter). 3) Bad guys use offline resources to crack the stolen passwd file. 4) Bad guys attempt to access the organization's systems/applications using a now-cracked userid/password pair, aka a honeyword. 5) The use of the honeyword (exact userid with the exact seeded password) sets off an alert. 6) The org can deny access and perhaps do other stuff like start countermeasures, auto-generate forensic info, pull the bad guys into a fake system to track and study them, etc.

Honeywords? No... (1)

snarfies (115214) | about a year and a half ago | (#43668473)

A better name?

Power Words! As in, Power Word Stun, Power Word Kill, etc.

See also http://www.d20srd.org/srd/spells/powerWordStun.htm [d20srd.org]

Re:Honeywords? No... (1)

eudas (192703) | about a year and a half ago | (#43669553)

"I prepared Explosive Runes today."

Would this actually detect breaches? (0)

Anonymous Coward | about a year and a half ago | (#43668503)

Seems fine so long as someone is actually interested in the passwords for the original target site. I bet a lot of hackers capture the passwords and e-mail addresses from vulnerable sites, then try them on more valuable ones. Unless sites are co-ordinating breach detection, there won't be detection in these cases.

Why stop there? (0)

140Mandak262Jamuna (970587) | about a year and a half ago | (#43668545)

They should also create fake user accounts, with fake social security numbers, fake credit card numbers, etc. Then the thieves would waste so much time pursuing this fake data that they'd eventually give up, unable to tell a really dumb user with a dumb password from a fake account with a dumb password.

But it is not new. I have done something similar. But my lips are sealed about that project.

Claimed attacks (0)

Anonymous Coward | about a year and a half ago | (#43668555)

First, let me say: this is a fantastic idea. It should be in everyone's best practices for dealing with any such databases.

This reminds me of how often we see high profile targets claimed to have been successfully hacked. I remember some claim regarding hacking nuclear reactors a while ago. Often all they mean by this is they were vulnerable to known exploits. If I ran a nuclear reactor (or similar high security system), I'd put up software and hardware that was susceptible to known exploits, but controlled nothing other than a lot of write-only logging (read via a different physical connection, or requiring a jumper). Combine tactics like that with the one mentioned here, and you get both a nice record of people trying attacks and indications of real successful breaches, and the attackers may not be able to tell the difference. That's fantastic!

A problem with this is... (2)

xxxJonBoyxxx (565205) | about a year and a half ago | (#43668575)

When you "seed your authentication databases with fake passwords", you've really just added a bunch of accounts with the same username/password across multiple systems. A smarter (less invasive) approach might be to compare actual hack attempts against existing or recent lists of known usernames; if they're close, that's a tip-off that someone knows more about your authentication store than he or she should.

Re:A problem with this is... (2, Interesting)

Anonymous Coward | about a year and a half ago | (#43668661)

When you "seed your authentication databases with fake passwords", you've really just added a bunch of accounts with the same username/password across multiple systems.

Not necessarily. The username/password combinations don't have to be the same, and they can be trapped higher up the chain in the code that processes authentication requests so that they can't actually be used to gain access to systems. Better yet, they can be used to redirect attackers to higher interaction honeypots where their nefarious deeds can be monitored. Imagine your SSH daemon integrated with honeywords for the root account and other common ones, that redirect attackers to a heavily sandboxed kippo session. Now you're able to get a lot more intelligence about the attacker's methods.

A side benefit: these act to dissuade attackers in the same way as "sting operations" act to dissuade Johns and car thieves... attackers have to think twice when they run across what seem to be juicy targets. Combined with sophisticated deception techniques, you could end up feeding your competitor industrial espionage "secrets" that only served to delay and misdirect, "punishing" them for trying to steal your secrets.

Re:A problem with this is... (1)

xxxJonBoyxxx (565205) | about a year and a half ago | (#43668969)

>> username/password combinations don't have to be the same

If you've implemented SSO on even groups of systems, they will be the same. :)

>> can be trapped higher up the chain in the code that processes authentication requests so that they can't actually be used to gain access to systems

To do that, you need to set a "fake" flag on the credentials, and bad guys can use that to filter out the fake creds from the store.

>> these act to dissuade attackers in the same way as "sting operations" act to dissuade Johns and car thieves

In other words...they mostly don't? As I said earlier, if your attacker will be trying multiple valid sets of credentials, you can detect them without needing this extra complexity. A smart attacker would also snoop your activity logs before using any stolen credentials to avoid locked or dormant accounts, and to see if he/she can figure out which accounts are automated, maintenance, or otherwise frequently used enough to be of interest. Even that low level of recon would evade the control you seek to introduce.

Re:A problem with this is... (2)

rsborg (111459) | about a year and a half ago | (#43669097)

To do that, you need to set a "fake" flag on the credentials, and bad guys can use that to filter out the fake creds from the store.

Clearly the "fake cred" would never be a flag in the users table (or even in the same database/system). For example, it could be a process that scans your logfiles and alerts based on username.

If someone has pwnd your system and can rewrite logs you have a *much* bigger problem than stolen passwords.

Re:A problem with this is... (1)

xxxJonBoyxxx (565205) | about a year and a half ago | (#43669555)

>> Clearly the "fake cred" would never be a flag in the users table (or even in the same database/system). For example, it could be a process that scans your logfiles and alerts based on username.

That's my point. If you're already doing this, you don't need to inject fake credentials into your databases to detect unusually accurate snooping.

Re:A problem with this is... (2)

rsborg (111459) | about a year and a half ago | (#43669721)

>> Clearly the "fake cred" would never be a flag in the users table (or even in the same database/system). For example, it could be a process that scans your logfiles and alerts based on username.

That's my point. If you're already doing this, you don't need to inject fake credentials into your databases to detect unusually accurate snooping.

We're not on the same page.

How would you know if a login using a valid credential set is legitimate or from a stolen password? Answer: you don't. However, if you have fake users in your system that *no one* would ever log in with, then you can know your system credentials have been compromised.

And this is to detect if your password db has been stolen, not "snooping". Using this method to uncover snooping would never find your fake credentials unless you constantly test those fakes.

Re:A problem with this is... (1)

MozeeToby (1163751) | about a year and a half ago | (#43669811)

The "fake" flag, as you put it can be stored on a separate server and since it is storing such a small, tiny fraction of your user data (a map of usernames to an integer indicating the correct hash to look for) it can be much more tightly restricted.

Re:A problem with this is... (1)

MozeeToby (1163751) | about a year and a half ago | (#43669767)

The whole purpose of this system is to detect "actual hack attempts". No one is going to brute force a good password directly on the service; they're going to get a leaked/stolen copy of the password database and try to crack the passwords locally. With this system, the attacker doesn't know which hash is the one that will actually grant access. You could have 100 hashes for each user; enter the correct password and access is granted, enter a random password and access is denied, enter a password that generates any of the other 99 hashes and warnings are instantly sent to the admins of the system telling them that their password DB has been compromised, allowing them to respond appropriately.

The advantage of adding multiple fake passwords for each user rather than seeding fake username/password combinations is that it can detect the attack even if it is directed at a single user as opposed to the entire user base.

+1 for linking to actual paper in summary (0)

Anonymous Coward | about a year and a half ago | (#43668753)

Too often summaries only link to spammy blogs instead of the actual sources, so good job here.

Re:+1 for linking to actual paper in summary (1)

fisted (2295862) | about a year and a half ago | (#43668873)

You must be new here. While reading TFS is already discouraged, it's an absolute no-no to read TFA. You couldn't be breaking /. netiquette in a worse way.

Phishing (1)

Martin S. (98249) | about a year and a half ago | (#43668909)

A better idea would be to feed honeypot passwords to phishing attempts while waiting for the host to shut them down.

Honeytoken (4, Interesting)

ZouPrime (460611) | about a year and a half ago | (#43669043)

Isn't this just a special case of a honeytoken?

http://en.wikipedia.org/wiki/Honeytoken

Similar to an Email Canary (1)

harrigan (539413) | about a year and a half ago | (#43669905)

Re:Similar to an Email Canary (1)

Sloppy (14984) | about a year and a half ago | (#43670905)

Ugh. I hate that idea. It's based on the presumption that people read email with HTML rendering turned on. I wouldn't trigger your canary while snooping through your emails, not because I'm clever or anything, but just because I wouldn't think to turn HTML on. Worse, if I did think to turn on HTML email, I would also have to enable external images, yet another non-default in most IMAP clients.

I'll Raise You.. (1)

SuperCharlie (1068072) | about a year and a half ago | (#43670133)

How about a monster whopping database full of salted hashed fake accounts with a tiny percentage of viable logins that notify you when they have been used.

There.

uhh (1)

Anonymous Coward | about a year and a half ago | (#43670265)

why not have fake logins? Then you don't need a separate list. If someone tries to login with the fake login, then you know there's a problem. A fake login with view-only permissions. True, they might not pick the fake logins...

But then, what if you had more fake logins than real logins? Then you'd have a better chance they'd pick the fake login.

Obviously there's some number which balances performance to security that is a sweet spot.

Why? (0)

Anonymous Coward | about a year and a half ago | (#43670637)

Why would any business want to do this? One of their canned statements after a breach is always "although security was compromised, we have no evidence that customer accounts have been accessed". Implementing such a policy means they lose the ability to spew that piece of meaningless (yet evidently popular, given its prevalence) reassurance.

For added security (1)

Riceballsan (816702) | about a year and a half ago | (#43671397)

This is a good measure for high tech solutions, but why not also take it further? Create new honeywords and put them on Post-it notes under people's keyboards, etc.

Strange failsafe (1)

ls671 (1122017) | about a year and a half ago | (#43672571)

From TFA:
"The researchers acknowledge that attackers might subvert their system by launching a denial-of-service attack against a honeychecker server. In such an event, they recommend using a failsafe: if a honeychecker server becomes unavailable, temporarily allow honeywords to become valid logins."

Letting everybody in seems like a weird way to failsafe;-)

brilliant (1)

Tom (822) | about a year and a half ago | (#43672999)

This is one of those small ideas that are so simple and seem so obvious that upon reading them your first thought is "why didn't I think of that?".
