
Schneier: We Don't Need SHA-3

Unknown Lamer posted more than 2 years ago | from the just-what-nobody-wanted dept.

Encryption 143

Trailrunner7 writes with this excerpt from Threatpost: "For the last five years, NIST, the government body charged with developing new standards for computer security, among other things, has been searching for a new hash function to replace the aging SHA-2 function. Five years is a long time, but this is the federal government and things move at their own pace in Washington, but NIST soon will be announcing the winner from the five finalists that were chosen last year. Despite the problems that have cropped up with some versions of SHA-2 in the past and the long wait for the new function, there doesn't seem to be much in the way of breathless anticipation for this announcement. So much so, in fact, that Bruce Schneier, a co-author of one of the finalists not only isn't hoping that his entry wins, he's hoping that none of them wins. ... It's not because Schneier doesn't think the finalists are worthy of winning. In fact, he says, they're all good and fast and perfectly capable. The problem is, he doesn't think that the world needs a new hash function standard at all. SHA-512, the stronger version of the SHA-2 function that's been in use for more than a decade, is still holding up fine, Schneier said, which was not what cryptographers anticipated would be the case when the SHA-3 competition was conceived. 'I expect SHA-2 to be still acceptable for the foreseeable future. That's the problem. It's not like AES. Everyone knew that DES was dead — and triple-DES was too slow and clunky — and we needed something new. So when AES appeared, people switched as soon as they could. This will be different,' Schneier said via email."


Too slow? (0, Insightful)

Anonymous Coward | more than 2 years ago | (#41447475)

As I understood, it has to be slow to be hard to break via dictionary attacks etc. ...

Re:Too slow? (4, Insightful)

gigaherz (2653757) | more than 2 years ago | (#41447507)

Disclaimer: I'm not a security expert so don't expect what I'm saying to be accurate.

Dictionary attacks have nothing to do with breaking hashes. If you mean stuff like rainbow tables, that's specific to hashes used to store passwords, which doesn't even need anything > SHA-256, because passwords don't have that much entropy to begin with.

What you need for security are essentially two properties: the entropy in the hash system (how random the values seem to be, in relation to the input), and the collision resistance (how hard it is to generate two inputs that result in the same hash, AND how hard it is to generate an input for a given hash value).

Cryptographic hashes are used for a lot of other purposes, and many of them DO need to be fast and have very high collision resistance. The most notable may be generating signatures for cryptographic purposes (to verify a message was sent by the entity that claims to have sent it, generally).
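As a rough sketch of the fast-hash use case (digesting a download or message before verifying or signing it), using Python's standard hashlib; the chunk size is arbitrary:

    import hashlib

    def sha256_of_file(path: str) -> str:
        # Digest the file in 64 KiB chunks; for this use case a fast hash
        # is exactly what you want.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

The resulting digest would then be compared against a published value, or fed into a signature scheme.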

Re:Too slow? (4, Informative)

wisty (1335733) | more than 2 years ago | (#41447607)

> Dictionary attacks have nothing to do with breaking hashes.

There are two kinds of hashes you should use: those that are meant to be slow (for password hashing) and those that are meant to be fast (for message signing). SHA is meant to be fast.

Re:Too slow? (2, Informative)

gigaherz (2653757) | more than 2 years ago | (#41447741)

If you rely on hashing speed to hash passwords, you are doing it wrong. Computers get faster, constantly. It's not speed that matters, it's the number of possible combinations making it exponentially too large to brute force, relative to the time to calculate each hash. Who cares if you can calculate millions of hashes in one second, if you still need to spend longer than the age of the universe to get a reasonable number of inputs to use as a dictionary? It's just simpler to use a plain-text dictionary and perform the hashing element by element, in which case the hashing speed does not matter AT ALL; what matters is how many attempts the site allows before either locking you out or increasing the time between attempts too much.

As I understand it, that's why you salt the passwords AND use a user-specific string (based on username, email and/or similarly constant data) to introduce more variation so that they can't use generic rainbow tables or even site-specific rainbow tables.

Re:Too slow? (0)

Anonymous Coward | more than 2 years ago | (#41447841)

If you get hacked and the password DB gets stolen, it is very likely they also know how you salt and/or create the user-specific string. So in that case, trying to find the password of a user still comes down to trying all possible passwords until you find one that matches.

If the hash algorithm is extremely fast, this means they can very speedily check a large number of possible passwords. Since most people use pretty short passwords, a fast algorithm makes it pretty doable (like 50-50, mostly based on password length) if you want the password for a specific user. Otoh, if you have a very slow hash algorithm, each single try takes a long time, making brute force checking a lot less doable, maybe even impossible.

And yes, computers become faster and faster, but this is meant only to delay the crackers, giving you enough time to block all passwords and announce to all your customers that they should change their password on all services where they use the same password.

Re:Too slow? (0)

Anonymous Coward | more than 2 years ago | (#41449457)

If you get hacked and the password DB gets stolen, it is very likely they also know how you salt and/or create the user-specific string. So in that case, trying to find the password of a user still comes down to trying all possible passwords until you find one that matches.

If the hash algorithm is extremely fast, this means they can very speedily check a large number of possible passwords. Since most people use pretty short passwords, a fast algorithm makes it pretty doable (like 50-50, mostly based on password length) if you want the password for a specific user. Otoh, if you have a very slow hash algorithm, each single try takes a long time, making brute force checking a lot less doable, maybe even impossible.

And yes, computers become faster and faster, but this is meant only to delay the crackers, giving you enough time to block all passwords and announce to all your customers that they should change their password on all services where they use the same password.

This is not correct. Brute-force/dictionary attacks are completely infeasible for any but weak passwords. It might be feasible if no salts are used and the database is large (high probability of weak passwords; compute hashes and attempt to match against the entire database of hashes). A strong, random password is not affected by such an attack. GP has it right: it's the preimage resistance and second preimage resistance that matter. For hashing, you want these security properties, plus speed and simplicity of implementation in hard- and software.

Re:Too slow? (2)

ewanm89 (1052822) | more than 2 years ago | (#41447967)

I just found an SQL injection attack and downloaded the whole password database. I can now crack it at my own leisure. Now I can come back any time and use those usernames and passwords. What is the bet that some of those usernames and passwords are used somewhere else by some of the users? When salting you need to be very specific about it: you do not want to use the same salt as another system, and you do want your salts to all be unique to a given user on your system; the suggestion is random data from a PRNG (technically, for salting it doesn't need to be cryptographically secure randomness, though it doesn't hurt). Finally, when salting, don't just append the salt to the password and hash, as that may open up other avenues of collision attacks; prepend the password length too.
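A minimal sketch of one reading of that scheme (per-user random salt, password length prepended), using only Python's standard library; a single SHA-256 pass is used just to keep the example short:

    import hashlib
    import os

    def make_salt() -> bytes:
        # Unique random salt per user; os.urandom is a CSPRNG, which doesn't
        # hurt even though salts don't strictly need cryptographic randomness.
        return os.urandom(16)

    def hash_password(password: str, salt: bytes) -> bytes:
        pw = password.encode("utf-8")
        # Prepend the password length and the salt rather than just appending the salt.
        material = len(pw).to_bytes(2, "big") + salt + pw
        return hashlib.sha256(material).digest()

    # Store (salt, digest) per user; the salt does not need to be kept secret.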

Re:Too slow? (5, Funny)

AlXtreme (223728) | more than 2 years ago | (#41448031)

I just found an SQL injection attack and downloaded the whole password database. I can now crack it at my own leisure.

If the passwords are decently salted and the salt is unknown, good luck with that. Remember to switch planets when the Sun goes nova.

Re:Too slow? (2, Informative)

Anonymous Coward | more than 2 years ago | (#41448757)

The Sun will not go nova. It will turn into a red giant and then a white dwarf.

Re:Too slow? (1)

amorsen (7485) | more than 2 years ago | (#41449189)

The salt is difficult to keep unknown. Every part of the web application which needs access to the salted hashed password also needs access to the salt, so if your security fails and allows access to the salted password, it probably allows access to the salt as well.

Sometimes you get lucky and the attackers get only the salted hashed password, but you cannot design your security around getting lucky.

Re:Too slow? (4, Informative)

Hast (24833) | more than 2 years ago | (#41449337)

Perhaps I'm misunderstanding your point, but the idea of the salt is not to keep it secret. The idea is that each user's password is combined with a unique string (the salt), so that if you try to attack the password database with a dictionary attack you have to process each password individually.

Re:Too slow? (3, Informative)

aaron552 (1621603) | more than 2 years ago | (#41449341)

What a salt does do, however, is make rainbow tables impractical. It doesn't need to be private, you can store it in plaintext in the same table as the password hashes.

Re:Too slow? (1)

sjames (1099) | more than 2 years ago | (#41450589)

The salt is known if you have the password file. The point is to use enough salt that rainbow tables are infeasible.

Re:Too slow? (1)

TheLink (130905) | more than 2 years ago | (#41449293)

I just found an SQL injection attack and downloaded the whole password database. I can now crack it at my own leisure.

Sure, but while the site is exploitable can't you pwn the rest of the site? You probably can pwn the rest of the database.

The solution it seems is to use different passwords for every site (or at least sites that matter). It doesn't even matter if the passwords are short. Once the hacker has enough access to get the passwords they normally have enough access to get the rest of the juicy data, or even change it.

Given the vast number of sites with weak security, it seems a waste of time to use very long passwords. Just use passwords long enough that they can't be brute forced via HTTP (which will probably look like a DoS/DDoS attack).

Re:Too slow? (1)

ewanm89 (1052822) | more than 2 years ago | (#41449601)

Of course, but I'm betting even your educated users do not do that. And yeah, you need about 12 characters before brute force is out against salted MD5; this is why slower algorithms like bcrypt help (Blowfish/SHA-1/SHA-256 run multiple times, with some special stuff thrown in to make it hard to build hardware accelerators for it).

Re:Too slow? (1)

fast turtle (1118037) | more than 2 years ago | (#41449611)

Very valid point on pw length, as it's what I tend to follow. For those sites with a "critical" pw, I tend to use as high an entropy and length as possible. For places like /. and other forums that are not important, I use a lower quality pw, as I can and will replace the account if the forum is important enough to me. Otherwise, I post AC if I'm just sticking my nose in the tent to see if there's anything interesting.

Re:Too slow? (4, Insightful)

DrXym (126579) | more than 2 years ago | (#41448059)

The purpose of slow hashes and salts is not to make passwords crack proof but to force the attacker to spend an inordinate amount of time extracting each and every password. The salt prevents reverse hash lookups. The slow hash imposes a penalty on every lookup in a dictionary / brute force attack. It's about damage limitation and buying time to warn users about the break.

Hashes like bcrypt are configurable too: the number of rounds is a power-of-two parameter, so it can be made more secure / slower if necessary as time progresses. With 2^10 rounds it's approximately 8000x slower to compute a hash than SHA1, which server side isn't a big deal but it is for someone running through a dictionary.

It's so bad that attackers would probably only bother to try a subset of common passwords (e.g. top 1000 passwords) before moving onto the next one. Enforcing password quality during signup would probably block these from hitting too.
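For reference, the tunable cost looks roughly like this with the third-party bcrypt package for Python (the cost value is illustrative):

    import bcrypt

    # rounds=12 means a work factor of 2^12; raise it as hardware gets faster.
    hashed = bcrypt.hashpw(b"correct horse battery staple", bcrypt.gensalt(rounds=12))

    # Later, at login time:
    assert bcrypt.checkpw(b"correct horse battery staple", hashed)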

Re:Too slow? (1)

petermgreen (876956) | more than 2 years ago | (#41448235)

Users will use weak passwords*, web servers will get compromised.

Ideally you would have a separate "password checking server" that did nothing but store and check passwords and was locked down very tight, but most sites can't really justify the cost of that. So on most sites the password database and any related secrets, such as a fixed part of the salt, are just one bug in a PHP webapp away from being revealed to an attacker.

Using a deliberately slow hashing technique will increase the time taken for the hacker to crack passwords in that scenario potentially buying you time to warn your users.

*and if you make rules to try and stop them they will do the minimum needed to comply with those rules.

Re:Too slow? (0)

Anonymous Coward | more than 2 years ago | (#41449057)

Ideally you would have a separate "password checking server" that did nothing but store and check passwords and was locked down very tight, but most sites can't really justify the cost of that.

I would suggest you lookup "LDAP" and "Active Directory." And, if you are factoring in hardware cost, "virtualization."

Re:Too slow? (4, Informative)

swillden (191260) | more than 2 years ago | (#41448253)

If you rely on hashing speed to hash passwords, you are doing it wrong.

If you rely only on hashing speed to protect your passwords, you're doing it wrong.

The problem with fast hashing is that it facilitates brute force password searches. Salt prevents rainbow table attacks, but targeted brute force attacks against a specific password can be quite feasible given typical user password selection. There are two solutions to this: Make users pick better passwords or find a way to slow down brute force search. The best approach is to do both; do what you can to make users pick good passwords (though there are definite limits to that approach), and use a parameterized hash algorithm that allows you to tune the computational effort.

The common way to slow down the hash is simple enough: iterate it. Then as computers in general get faster you can simply increase the number of iterations. In fact, you can periodically go through your password table and apply a few hundred more iterations to each of your stored password hashes. The goal is to keep password hashing as expensive as you can afford, since whatever your cost is, the cost to an attacker is several orders of magnitude higher (since the attacker has to search the password space). Oh, and it's also a good idea to try to keep attackers from getting hold of your password file. Layered defense.
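A minimal sketch of that iterate-and-strengthen idea, assuming SHA-256 underneath and that the total iteration count is stored alongside each hash:

    import hashlib

    def hash_password(password: bytes, salt: bytes, iterations: int) -> bytes:
        digest = hashlib.sha256(salt + password).digest()
        for _ in range(iterations):
            digest = hashlib.sha256(digest).digest()
        return digest

    def strengthen(stored_digest: bytes, extra_iterations: int) -> bytes:
        # Apply more iterations to an already-stored hash; no password needed.
        # Remember to add extra_iterations to the stored iteration count.
        for _ in range(extra_iterations):
            stored_digest = hashlib.sha256(stored_digest).digest()
        return stored_digest

In practice a reviewed construction such as PBKDF2, bcrypt or scrypt (discussed further down the thread) is preferable to hand-rolling the loop.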

As I understand it, that's why you salt the passwords AND use a user-specific string (based on username, email and/or similarly constant data)

User-specific strings are good too, as another layer to the defense, but you have to assume that an attacker who gets access to your password file probably gets that data as well.

to introduce more variation so that they can't use generic rainbow tables or even site-specific rainbow tables.

Salt is sufficient to eliminate rainbow table attacks.

Re:Too slow? (0)

Anonymous Coward | more than 2 years ago | (#41449523)

The common way to slow down the hash is simple enough: iterate it. Then as computers in general get faster you can simply increase the number of iterations. In fact, you can periodically go through your password table and apply a few hundred more iterations to each of your stored password hashes. The goal is to keep password hashing as expensive as you can afford, since whatever your cost is, the cost to an attacker is several orders of magnitude higher (since the attacker has to search the password space). Oh, and it's also a good idea to try to keep attackers from getting hold of your password file. Layered defense.

This. Don't make a slow algorithm: a generic hash function like the SHA standard should be as fast as possible. Cryptographically, this isn't even necessary at all: the only reason to have a slower hash function is to protect stupid (/forgetful/otherwise incapable of choosing good passwords) users.

to introduce more variation so that they can't use generic rainbow tables or even site-specific rainbow tables.

Salt is sufficient to eliminate rainbow table attacks.

The reference to site-specific rainbow tables implies the same salt was used for all passwords. While that helps, a capable attacker can still generate a rainbow table for this particular website (instead, salts should be unique and random for every user, as this requires a rainbow table for every distinct salt).

Re:Too slow? (1)

kasperd (592156) | more than 2 years ago | (#41450101)

a capable attacker can still generate a rainbow table for this particular website

No need for the additional overhead of using a rainbow table for this. Just apply a generic brute force dictionary attack without using rainbow tables. It will be much faster, if there is just a single leak.

The rainbow table only helps if it can be precomputed, which you can only do if the salt is leaked before the database. If a site repeatedly has leaks of its password database, and the salt remains unchanged between leaks, then for any subsequent leak the salt is previously known, and a rainbow table can be used. But after the second leak the only users still using the site will be those who don't care about security anyway.

salts should be unique and random for every user

That is correct, and it should change whenever the user updates their password. You can combine the two and have one value which is constant for the site and one that is unique for every user. It is the salt, which is unique for every user, that provides the majority of the security. The other value, which is fixed for all users on the site, is sometimes referred to as pepper. If the pepper value is leaked, it provides no additional security, but you still have the security provided by the unique salt values. But if the pepper value is not leaked, then nobody can even start brute forcing the password hashes or computing a rainbow table, even if the entire password database is leaked. That gives more time to get passwords changed, if you are lucky and only the password database is leaked and not the pepper.

For example if the pepper value is a hardcoded value in the source code for the site, then it doesn't have to be present anywhere in the password database. Some attacks cause a leak of only one of the two, and in those cases you have more time to get passwords changed. If they are leaked simultaneously, you are left with only the security of the salt, which is what you'd have, if you weren't using pepper in the first place.

If either the pepper or the database is leaked, you have to change the pepper value (as soon as you have closed the hole, which allowed the leak in the first place). But until all passwords have been changed, you still have to support the old pepper value. That means to properly use this, you need to have a hardcoded array of pepper values in the application, and store an array index in the user entry in the password database.

The use of such a pepper value makes sense if you believe in defence in depth with many layers of protection. Using the pepper without simultaneously using a per-user salt means you don't know what you are doing. The per-user salt is easier to implement than a pepper value stored in the application code itself, and the per-user salt also provides more security.
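A minimal sketch of that salt-plus-pepper layering (the pepper value, iteration count and helper names are hypothetical; a real deployment would also keep the array of pepper values mentioned above):

    import hashlib
    import hmac
    import os

    # Hypothetical pepper: hardcoded in the application, never stored in the DB.
    PEPPER = b"application-wide secret value"

    def hash_password(password: bytes, salt: bytes) -> bytes:
        # The per-user random salt provides the main protection; the pepper only
        # adds anything when the DB leaks but the application code does not.
        peppered = hmac.new(PEPPER, password, hashlib.sha256).digest()
        return hashlib.pbkdf2_hmac("sha256", peppered, salt, 100_000)

    salt = os.urandom(16)          # unique per user, stored next to the hash
    stored = hash_password(b"hunter2", salt)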

Changing pepper after breach (1)

DragonWriter (970822) | more than 2 years ago | (#41450687)

If either the pepper or the database is leaked, you have to change the pepper value (as soon as you have closed the hole, which allowed the leak in the first place). But until all passwords have been changed, you still have to support the old pepper value. That means to properly use this, you need to have a hardcoded array of pepper values in the application, and store an array index in the user entry in the password database.

Or you drop support for the old pepper and mass-expire all the passwords and require users to use a password reset mechanism (equivalent to the "lost password" functionality you should have) after the breach before they can log in.

This is somewhat less convenient to users, but simpler and -- assuming the security of your password reset mechanism -- more secure than continuing to support passwords with the old pepper until they are changed.

Re:Too slow? (1)

The Atog Lord (230965) | more than 2 years ago | (#41448599)

You're missing the distinction between an online attack and an offline attack. In an online attack, where the attacker goes to the website and starts typing in passwords, then lockout will do just fine. But when the attacker has stolen the password file, he gets as many guesses as he wants, bounded only by computing power. And in that case, the hashing speed will be a limiting factor in how long it takes him to break the passwords.

Re:Too slow? (0)

Anonymous Coward | more than 2 years ago | (#41448459)

> Dictionary attacks have nothing to do with breaking hashes.

There are two kinds of hashes you should use: those that are meant to be slow (for password hashing) and those that are meant to be fast (for message signing). SHA is meant to be fast.

Perhaps I'm being pedantic, but I don't think a "hash" is the proper term for the former (storing passwords). It would probably be better to call it a key derivation function (e.g., PBKDF2).

However, that isn't too accurate (IMHO) for something like bcrypt either. While PBKDF2 was designed to stretch passwords/phrases so that they could be used as the input of things like AES, bcrypt's intent was never to be fed into another algorithm, but simply to be a way to store passwords for checking on logins. Of course PBKDF2 can be used for the same thing, but that's not its main purpose AFAICT.

So I think the nomenclature should be normalized to something like the following:
  * hash: a one-way algorithm that takes arbitrary input and has a fixed-length output; usually fast
  * key-derivation function: an algorithm that takes input and expands it to a desired size for input into another function; usually slow; may use a hash in its operation
  * password storage function: an algorithm that takes input and produces an output whose primary purpose is to be compared on login; usually very, very slow; may use a hash or KDF in its operation

While something like PBKDF2 can be used to store passwords, that's not its forte, and something like bcrypt or scrypt would be better.

In a related note: most general programmers should not be using hashes directly, but should be calling higher level functions (PBKDF2, HMAC, bcrypt, etc.) to accomplish a specific goal. If you're calling SHA-1/2/MD5 directly, there's a good chance you're making a mistake of some kind.
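For instance, a message-authentication check done through the hmac module rather than a bare hash call (a minimal sketch; key management is elided):

    import hashlib
    import hmac

    def sign(key: bytes, message: bytes) -> bytes:
        # HMAC-SHA256 rather than sha256(key + message), which would be open
        # to length-extension attacks.
        return hmac.new(key, message, hashlib.sha256).digest()

    def verify(key: bytes, message: bytes, tag: bytes) -> bool:
        # Constant-time comparison to avoid leaking how many bytes matched.
        return hmac.compare_digest(sign(key, message), tag)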

Re:Too slow? (1)

ewanm89 (1052822) | more than 2 years ago | (#41447625)

There are two types of dictionary attacks. One is used for breaking passwords: take a dictionary of likely passwords and try them one by one, hoping that you find the password in the dictionary. This can be coupled with bruteforcing techniques to try things like adding a number to the end, replacing e with 3, etc., and some crackers will even start running brute force through combinations not in the dictionary when the dictionary runs out. Now in the hash world a collision exists; this is where a different input (like a different password) also yields the same hash (so would allow authentication). Dictionaries don't just apply to passwords, but to any data set one could build into a dictionary to help find collisions.

The other is used to break certain block ciphers using ECB mode (building a dictionary of what a particular block maps to, since in ECB each block is encrypted entirely independently, so if another block happens to be the same pattern, it gets encrypted to exactly the same ciphertext).

The first is related to hashing, as that is usually the way passwords are hidden in database tables, and so is relevant here, though cryptographic hashes are used for many other things besides passwords, so there are other attacks against cryptographic hashes. The ECB dictionary attack is totally irrelevant and out of context in this case and can be discarded.

Re:Too slow? (1)

gigaherz (2653757) | more than 2 years ago | (#41447777)

As I was trying to explain in the reply below, the time it takes to calculate the hash is meaningless. Relying on that time as a way to prevent intrusions would be like a bank using a maths puzzle to lock the safe, and then claiming that it takes too long to solve, so they would notice the attempt before it happens. It just doesn't work that way.

You have two strengths in preventing such intrusions: first is the exponential complexity of reversing the hashing process (brute forcing, unless the algorithm is proven broken), and second is the artificial delay used to prevent mass attempts at the password. There are attacks for everything, but if either of those two fails, everything fails.

Re:Too slow? (4, Informative)

ewanm89 (1052822) | more than 2 years ago | (#41447811)

Not strictly true. One reason bcrypt/scrypt/PBKDF2 is recommended over straight salting and hashing is that it is slower to hash, and in the case of bcrypt it is also deliberately designed to be harder to build dedicated accelerators for or to speed up with parallel processing, therefore slowing down a brute force attack. Yes, time shouldn't be the only factor, but most cryptography has a time element; given enough time one can break the whole bank's password database through brute force, so don't you want to make it as slow as possible to even make attempts (offline as well as online)? If I can break this diplomatic cable, great, but if it takes 70 years it's already declassified before I broke it anyway, so does it matter that I could break it given 70 years?

Re:Too slow? (2)

loufoque (1400831) | more than 2 years ago | (#41448023)

It's not at the scale of 70 years. Brute forcing a 128-bit space would take at best millions of years and require that most of the planet's mass be converted to energy.

Re:Too slow? (0)

Anonymous Coward | more than 2 years ago | (#41448797)

What if the first "guess" is right.... yay I saved the planet!

Re:Too slow? (3, Funny)

fuzzyfuzzyfungus (1223518) | more than 2 years ago | (#41448857)

Conveniently, converting most of the planet's mass into energy serves as an effective substitute for diplomacy in many situations.

Re:Too slow? (0)

Anonymous Coward | more than 2 years ago | (#41449017)

70 years seems like a long time, but the question is how many years improved hardware and other small gains will reduce that to.

Re:Too slow? (2, Informative)

Anonymous Coward | more than 2 years ago | (#41448211)

would be like a bank using a maths puzzle to lock the safe, and then claiming that it takes too long to solve

Um, isn't that *exactly* how encryption works? :p

The point is, the timescales are exponential. A hash that's 100 times faster to compute only knocks 2 orders of magnitude off the time it takes to crack the hash (10^10 universe-lifetimes instead of 10^12, w00t), but it makes it 100 times more usable in the golden path case of computing a hash on an in-core string.

Re:Too slow? (1)

Anonymous Coward | more than 2 years ago | (#41449503)

would be like a bank using a maths puzzle to lock the safe, and then claiming that it takes too long to solve

Um, isn't that *exactly* how encryption works? :p

Well, it's certainly exactly how safes work.

Re:Too slow? (0)

Anonymous Coward | more than 2 years ago | (#41448583)

dude, it's 'two', not 'too'. Seriously?

Re:Too slow? (2)

Chrisq (894406) | more than 2 years ago | (#41447539)

As I understood, it has to be slow to be hard to break via dictionary attacks etc. ...

No - you use long, cryptographically random, salts to avoid dictionary attacks. Any hash or cryptographic function that is fast enough to use will be fast enough to attack with a dictionary unless you do this. Of course user education and password rules forbidding short alpha-only words are important too!

Re:Too slow? (2)

ewanm89 (1052822) | more than 2 years ago | (#41447639)

No, they avoid certain classes of dictionary attack, like rainbow table attacks, where the dictionary has the hash each entry matches to precalculated. Me taking a dictionary and salting and hashing each word and seeing if it matches is still a dictionary attack.

Re:Too slow? (4, Informative)

petermgreen (876956) | more than 2 years ago | (#41448269)

No - you use long, cryptographically random, salts to avoid dictionary attacks.

There are basically two types of salt, fixed salts stored in the server configuration and per-password salts stored in the password database. They defend against different things.

Fixed salts stored in the server configuration defend against someone who has got your password DB but not your server configuration.
Per-password salts stored in the password DB defend against precomputed attacks.

Neither provides a defense against someone who has both your password DB and server configuration and is going after an individual password. That is where deliberately slow hash functions come in.

Re:Too slow? (1)

sjames (1099) | more than 2 years ago | (#41450731)

Salt also protects against rainbow tables in itself. If I add 4 bytes of salt, your rainbow table becomes 4 billion times larger to have the same effectiveness as it did when there is no salt (assuming the 4 bytes can hold any value).
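For the arithmetic: 4 bytes of salt means 2^32 possible salt values, which is where the "4 billion times larger" comes from:

    >>> 2 ** 32
    4294967296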

Re:Too slow? (0)

Anonymous Coward | more than 2 years ago | (#41447677)

Depends what it's used for. For a digest of a mail message or a download, for example, you want it to be very fast; a dictionary attack modifying the message is prevented by the size of the data. For a server serving lots of clients you also want it to be fast.

Speed is only a factor when cracking a stored password, which is a small minority of the use cases. Digests as a checksum are far more common.

Re:Too slow? (4, Interesting)

kasperd (592156) | more than 2 years ago | (#41447769)

As I understood, it has to be slow

This is a common misconception. The source of this misconception is the way people have tried to protect weak passwords through the use of hashing. If you take a strong password and hash it with a hash function satisfying all the requirements for a cryptographic hash function, then that password is well protected. That construction doesn't work for weak passwords. If you apply a salt while hashing, you move the threshold for the strength of passwords which can be brute forced. This is quite clearly an improvement over plain hashing. I know of no cryptographer who has disputed that it is a good idea to use salts while hashing passwords.

But there are still some passwords which are too weak to be protected by a salted hash. This has led to some people saying this hash function is insecure because it is too fast. What they should have been saying is: this password mechanism is insecure because it is using the wrong kind of primitive. This is an important distinction. Even if the hash function satisfies all the requirements of a cryptographic hash, a salted hash cannot protect a weak password.

When building cryptographic systems you often need to apply different classes of primitives. Common primitives are hash functions, block ciphers, asymmetric encryption, and digital signatures. Examples of primitives in each of these four classes are SHA512, AES128, RSA, and RSA (yes, RSA does fall in two different classes; there are other primitives which fall in only one of those two classes). If you want to send an encrypted and signed email, you typically use all four classes of primitives.

To protect semiweak passwords better than through a salted hash you really need a new class of primitive. For lack of better term I'll call that primitive a slow function. Claiming that a hash function is insecure because it is fast would be like claiming AES128 is secure because you can derive the decryption key from the encryption key.

The formal properties I would say a slow function should have are first of all that it is a (mathematical) function mapping bitstrings to bitstrings, and that it requires a certain amount of computing resources to compute the full output from any single input. Properties that I would not require a slow function to have includes collision resistance and fixed size outputs. Those are properties you expect from a hash function, which is a different kind of primitive.

People have tried to squeeze both kinds of properties into a single primitive, which if they succeeded, would be both a cryptographic hash and a slow function. But they haven't always paid attention to the differences in requirements. And often the result has been called a hash function, which confuses people, since it is different from a cryptographic hash.

One nice property of slow functions as I would define them is that given multiple candidate functions, you can just compute all of them and concatenate the results. And you will have another slow function, which is guaranteed to be at least as strong as the strongest of the functions you started with.
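A minimal sketch of that concatenation idea, assuming PBKDF2 and scrypt as the two candidate slow functions (parameters are illustrative; hashlib.scrypt needs a reasonably recent Python/OpenSSL):

    import hashlib

    def combined_slow_function(data: bytes, salt: bytes) -> bytes:
        # Concatenating the outputs costs the sum of the two computations, so the
        # result is at least as hard to compute as the stronger of the two.
        a = hashlib.pbkdf2_hmac("sha256", data, salt, 200_000)
        b = hashlib.scrypt(data, salt=salt, n=2**14, r=8, p=1)
        return a + b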

Once you have all the primitives you need, you can combine them into a cryptographic system that achieves some goal. If you want to protect passwords, I think you are going to need both a slow function and a hash function. For the combined system, you can actually give a formal proof of the security. The proof of course assumes that the underlying primitives satisfy the promised criteria. I guess the password protection you would implement given the above primitives would compute the slow function on the password, and then compute a hash of the password, the salt and the output of the slow function.

Additionally you could prove that, regardless of the strength of the slow function, the password is as well protected as it would have been with just a salted hash. That way, by handling those as separate primitives, you can reason about the security assuming the failure of one of the primitives. Such a construction would eliminate the main argument against some of the existing slow functions, which is that they haven't had sufficient review.

Re:Too slow? (4, Interesting)

kasperd (592156) | more than 2 years ago | (#41447815)

Claiming that a hash function is insecure because it is fast would be like claiming AES128 is secure because you can derive the decryption key from the encryption key.

That obviously should have said "Claiming that a hash function is insecure because it is fast would be like claiming AES128 is insecure because you can derive the decryption key from the encryption key."

Put differently. If you use AES when you should have used RSA, you don't blame AES for being insecure. If you use a SHA512 when you should have used a slow function, you shouldn't blame SHA512 for being insecure.

When you reason about the security of cryptographic systems, you usually show how many times a certain primitive must be invoked to break it. And if that number is large (usually meaning 2^128 or more), then you consider the system to be secure. It is not a threat if the primitive itself is fast, because once you have to execute it 2^128 times, it will still take "forever".

But for protection of weak passwords you can't give such guarantee. Those can be broken with a relatively small number of invocations of the primitive. At that point the resources it takes to compute the primitive matters, and adding another requirement to a primitive means it is no longer the same kind of primitive.

Re:Too slow? (5, Informative)

BCoates (512464) | more than 2 years ago | (#41447883)

The proper name for these "Slow functions" is Key Derivation Function. They've been around a long time and are what OSes use to protect login credentials and what encrypted archive formats like RAR use.

Some examples are crypt (obsolete, vulnerable), PBKDF2 (repeated application of salt-and-hash), bcrypt (repeated rounds of a special extra-slow variant of Blowfish), and scrypt (an attempt to defeat GPU and custom hardware attacks by requiring lots of low-latency RAM).

Single-round salted hash is only a "better than plaintext" hack solution, it's never been the correct way to store passwords.

Re:Too slow? (2)

kasperd (592156) | more than 2 years ago | (#41448421)

The proper name for these "Slow functions" is Key Derivation Function.

They are related, but not exactly the same. The slow functions that I was suggesting do not require every bit of their output to be unpredictable. They just require that the output as a whole not be easily computable. It doesn't matter if it turns out some long subsequence of the slow function output is easily computable. There is also no requirement that the output of the slow function be random looking. It could start with a sequence of 1MB of zeros, as long as it is followed by something that cannot be computed as quickly. There is also no requirement that the slow function isn't reversible.

So a slow function as I suggest it is not guaranteed to be usable as a key derivation function. OTOH it may be that any key derivation function would satisfy my suggestion for a slow function. But I am not entirely sure about that either. It boils down to the question of whether a key derivation function is required to be slow.

Can you point to a formal definition which shows the difference between a key derivation function and a cryptographic PRNG?

Some examples are crypt (obsolete, vulnerable), PBKDF2 (repeated application of salt-and-hash), bcrypt (repeated rounds of a special extra-slow variant of Blowfish), and scrypt (an attempt to defeat GPU and custom hardware attacks by requiring lots of low-latency RAM).

What would you do if you are required to design an ultra secure password protection based on the above? You have four suggestions, but you might then be faced with the requirement that every stored password must be secure against an attacker who can defeat three of the four candidates. You need something that is secure as long as one of the four is not broken, but you don't know in advance which of the four.

If the formalization I was suggesting were being used, then you could just compute all four and concatenate them. You'd still need a hash function, and composing multiple candidate hash functions into one secure hash function is harder. But you'd just need a cryptographic hash function, which means simpler requirements than for the more complicated primitive.

Re:Too slow? (0)

Anonymous Coward | more than 2 years ago | (#41447895)

I think one problem with existing ways to create slow hashes by composing them out of fast ones is that they decrease the entropy. (Admittedly, a theoretical vulnerability so far, but one you should worry about.)

Re:Too slow? (1)

kasperd (592156) | more than 2 years ago | (#41450447)

I think one problem with existing ways to create slow hashes by composing them out of fast ones is that they decrease the entropy.

The main point of my proposal to split the hashing and slowness into separate primitives is that each of them has a much smaller set of requirements, and thus a much smaller set of possible vulnerabilities. The specific problem you mention is not considered a major threat, but my proposal still protects against it.

In my suggested model, the slow function is where you could choose to use an iterated hash function. But there is no requirement that the slow function preserves all of the entropy. If the slow function were to lose a bit of entropy, that would not be a major problem. If it loses so much entropy that the output is completely predictable, then it is a real problem. But you'd be able to demonstrate the entropy leak by using the birthday paradox to find an example collision a long time before the lost entropy became a real problem.

The point of my construction is, that you can still argue about the security of the system even if the security of the slow function completely breaks down. The worst case for the slow function would be if the output was constant (if every input results in the same output, you just need to compute it once). But even in that case, you can make a full system, which requires one invocation of the cryptographic hash for every password that is attempted.

Protecting against the entropy loss in iterated hashing is by no means a new idea. What I think is a new idea is splitting the slow part and the hashing into separate primitives and providing two proofs of the security of the system: one with a broken slow function and one with an unbroken slow function.

Re:Too slow? (1)

jellomizer (103300) | more than 2 years ago | (#41448159)

If the algorithm is slow, it doesn't really help prevent it from being cracked, because the hacker can just put more computing power into it. It also means the person who is trying to protect the data now has to get a faster computer to keep the load and the application running at the right speed... thus an extra cost for the defender, while the hardware speed increase negates the disadvantage to the hacker. It is just a losing proposal for the system owner.

However, if the algorithm is fast, that means we can increase the performance on the box and give more resources to extra security measures, such as adding a salt (and pepper) to the data to be hashed, or putting in a system that idles/sleeps if the value is incorrect...

1x 2x 100x slower isn't a big deal. Big O is.

Re:Too slow? (1)

Zero__Kelvin (151819) | more than 2 years ago | (#41448635)

"As I misunderstood, it has to be slow to be hard to break via dictionary attacks etc. ...

FTFY

Useful replacement (3, Insightful)

l2718 (514756) | more than 2 years ago | (#41447479)

Faster computation of hash functions is very important, especially to low-power devices. In other words, even if the improvements in cryptographic strength are irrelevant I'd expect the new standard to be adopted quickly.

Re:Useful replacement (2)

ewanm89 (1052822) | more than 2 years ago | (#41447651)

Faster computation of cryptographic hashes adds weaknesses, as it makes brute-force collision finding faster: one can try possibilities quicker.

Re:Useful replacement (1)

Goaway (82658) | more than 2 years ago | (#41447713)

This is only a problem for one single use of hashes, namely storing passwords in a database, and there are perfectly satisfactory solutions to it.

It is not a problem for most other uses of hashes.

Re:Useful replacement (2)

ewanm89 (1052822) | more than 2 years ago | (#41447831)

There are a few uses, but yes, it only affects certain types of collision. But it is a weakness in those use cases. Does it matter from a security standpoint if the hashing is slightly slower when checking an HMAC? From a usability standpoint, yes, I don't want to be waiting 5 minutes while the computer decrypts a webpage, but it doesn't detract from or add to the security of the algorithm in such use cases.

Re:Useful replacement (1)

nzac (1822298) | more than 2 years ago | (#41447787)

I think all the finalists are 512-bit or larger hashes, which makes collisions far harder than at the current bit lengths.

If you are just meaning passwords then chose a more suited hash function as this is not what SHA-3 is for.

Re:Useful replacement (1)

Anonymous Coward | more than 2 years ago | (#41447801)

No they don't. Hashes that can be brute-forced if you can only calculate them fast enough are weak per se. 512-bit hashes cannot be brute-forced even if you can calculate 2^64 per second, so it is advantageous that they can be evaluated quickly for the mainline case of using them to hash things.

Re:Useful replacement (1)

delt0r (999393) | more than 2 years ago | (#41448007)

At 512 bits for a hash we need on average 2^256 hashes to find a collision using the birthday paradox/trick. Let's assume you have 100,000 cores, each of which can do 4 billion hashes per second (far faster than we can do right now). It will take billions of trillions of zillions of gazillions of times longer than the current age of the universe to find this hash collision. Never mind that there are not enough fundamental particles in the universe for the required storage, even at a hash per particle.

Hashes should be fast. If you want it to be slower, rehash a million times.
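A back-of-the-envelope check of those numbers (taking the 100,000 cores at 4 billion hashes/second from the comment at face value):

    hashes_needed = 2 ** 256                 # birthday bound for a 512-bit hash
    rate = 100_000 * 4_000_000_000           # hashes per second
    years = hashes_needed / rate / (3600 * 24 * 365)
    print(years / 13.8e9)                    # multiples of the universe's age: roughly 7e44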

Re:Useful replacement (1)

fatphil (181876) | more than 2 years ago | (#41449287)

Not *brute force* attacks - dictionary attacks. The search-space is too large for brute force to be a realistic method for finding collisions.

Assuming you're salted and not in plain text (big assumption, alas), nothing can sensibly defend against weak keys - but that's because all the security is in the key. Weak keys are weak, end of.

Re:Useful replacement (1)

amorsen (7485) | more than 2 years ago | (#41449319)

Go right ahead then, pick any of the contestants and bruteforce a collision. You'll be famous.

We cannot even design a computer to COUNT to 2^128, so for any even minimally secure hash function of 256 bits or more, brute force won't happen. Not even if your GPU with 10000 cores can do a hash in one clock cycle.

Re:Useful replacement (1)

ewanm89 (1052822) | more than 2 years ago | (#41449719)

That's the point, it doesn't have to. Each of those 10000 cores does a hash attempt in a few clock cycles; my nvidia gpu (GTX 465), which is 2 years old now, can handle 2.75 million sha1 hashes a second (I tested it). Being the geforce consumer model for graphics, only half the processing power is available for use in CUDA general purpose computing. Salted md5 passwords up to 12 characters can be brute force cracked in about a month with ~$40,000 worth of off-the-shelf hardware (I dread to think how fast NSA or GCHQ could do it on their top secret supercomputers with classified performance specs).

Re:Useful replacement (1)

bill_mcgonigle (4333) | more than 2 years ago | (#41450515)

2.75 million sha1 hashes a second (I tested it)

Now compare the 2^n complexity of SHA1 to that of SHA512.

(I dread to think how fast NSA or GCHQ could do it on their top secret supercomputers with classified performance specs).

NSA is a government agency. Figure their costs to do anything are 3x that of industry (I'm being generous).

Now, figure out what they need to spend to out-R&D Intel and look at their budget. If they have working quantum supercomputers, are they building that massive western data center just for disinformation?

Compare this to the 1970s, when they literally could spend 10x Intel's R&D budget, and that may have been a different picture. But the People win again, through standard value-based economics. Speaking of which, none of the 5 SHA3 entrants is from the NSA.

Re:Useful replacement (4, Funny)

commlinx (1068272) | more than 2 years ago | (#41447671)

True, I normally use an 8-bit checksum for my hashing for best performance. On passwords in particular some people think hashing and password recovery are incompatible, but on the server I simply maintain a list of 256 complex-looking passwords so a match can be quickly looked up and e-mailed back.

Does anyone know if that idea has been thought of before? Maybe I should take out a patent.

Re:Useful replacement (0)

Anonymous Coward | more than 2 years ago | (#41447695)

Er I think you need to re-think your implementation there...

Re:Useful replacement (0)

Anonymous Coward | more than 2 years ago | (#41448527)

I hope you were trying to be funny, because your system effectively offers only 1/40th of the combinations a 4-digit PIN offers.

Re:Useful replacement (1)

beezly (197427) | more than 2 years ago | (#41448315)

SHA-512 is a cryptographic hash function. Faster computation of hashes is exactly what you *don't* want.

Re:Useful replacement (1)

amorsen (7485) | more than 2 years ago | (#41449367)

Why wouldn't you want faster cryptographic hashes? It is trivial to slow a hash down as much as you want, but when you need it to go fast, it is very difficult to speed it up.

Re:Useful replacement (1)

fatphil (181876) | more than 2 years ago | (#41449447)

So, what do you know about crypto that NIST doesn't?

They *specifically* stated that "computational efficiency" was important in their official announcement of the FIPS 180-3 competition. That was second on their list, right after "security".

gigaherz (0)

gigaherz (2653757) | more than 2 years ago | (#41447483)

Some people will switch regardless, because newer is better. Others will not, because if it's not broken, don't fix it.

Then you can go guess which political parties they favour based on the choice!

Re:gigaherz (-1)

Anonymous Coward | more than 2 years ago | (#41447615)

You are a fucking retard.
Politics has nothing to do with whether a person thinks that a newer hash function is better than an older one.

I'm going to guess you're a conservitarad. A stupid moron who thinks that conservatism is better, even though it is worse for you and all your family. You probably also think that gay people are going to hell. Well, news for you, if there is a hell, it will be populated by those who oppress and deny a brother.

GNAA.

Re:gigaherz (-1)

Anonymous Coward | more than 2 years ago | (#41447803)

Same gigaherz from pcsx2? You ever gonna release source for that PS2 dev9 module that actually supported online play?

Future proofing (1)

WillerZ (814133) | more than 2 years ago | (#41447487)

However, SHA-2 could be broken tomorrow, and this time we won't have a decade's wait while a suitable replacement is designed.

Re:Future proofing (0)

Anonymous Coward | more than 2 years ago | (#41447519)

SHA-3 could be broken the day after it gets released, and then we wouldn't have time to design a new one.

Re:Future proofing (2)

AHuxley (892839) | more than 2 years ago | (#41447529)

When the US gov says it does not "need" a new code for the unwashed masses, a good reason usually gets hinted at tens of years later.
http://en.wikipedia.org/wiki/Data_Encryption_Standard#NSA.27s_involvement_in_the_design [wikipedia.org]

Re:Future proofing (5, Interesting)

jd (1658) | more than 2 years ago | (#41447701)

To be fair, the NSA doesn't seem to have caused problems with the S-Boxes, and differential analysis doesn't seem to have worked too well. On the other hand, COPACOBANA et al. were glorified 1940s-era Colossus machines: cracking codes via a massively parallel architecture. To me, that's the scary part. Turing's work on cryptography and massively parallel code breakers was 100% applicable to the design of DES because the key length was so incredibly short. You could build enough machines to effectively break it.

How many DES engines do you think someone could have crammed onto a wafer in the 1980s? (Remember, each die can have multiple engines, and then the dies that work can be hooked together.) Link up a bunch of such wafers and you end up with a crypto engine from hell. It would have been VERY expensive, but I would imagine it perfectly plausible that a sufficiently determined and rich organization (I would imagine the NSA might have been one such) could have potentially built such a machine when the rest of us still thought the 6502 was a really neat idea.

Doesn't mean anyone ever did. People could have reached Mars in the 1980s, so "could have" and "did" are obviously very different things. What people actually did is anyone's guess, though "nothing" sounds about right.

Had they built such a device, though, then near-real-time breaking of DES would have been possible at the time it was in mainstream use. Certainly, there were claims circulating that such devices existed, but a claim like that without proof is hard to accept. All I can say is that it's demonstrably not impossible, merely unlikely.

Back to SHA-2. Are we in the same boat? Are there ways to build something today, even if nobody is likely to have actually built it yet, that could endanger SHA-2? (To me, THAT is the measure of security, not whether anyone actually has, because they're not likely to tell you when they have.) Quantum computing is the obvious threat, since 512 bits is a lot of security, too much to attack in parallel with a classical architecture. Quantum computing, though, should let you scale up non-linearly. The question is whether it's enough. (I'm assuming here that there are no issues with preimages or timing that can be exploited to reduce the problem to a scale QC can solve even if classical machines can't.)

There have been a few murmurs that suggest SHA's security isn't as strong as the bitlength implies. Would that be enough? If Japan can build a vector machine the size of a US football stadium, then it is not physically impossible to scale a machine to those sizes. Nobody has scaled a quantum computer beyond a few bits, but I repeat, I don't care what people have publicly done, it is what is within the capacity of people TO build whether publicly or not that matters.

If you're not 100% certain that not even a quantum computer on such a scale, where all nodes were designed at the hardware level to perform JUST the task of trying to break the hash, could do it, then the hash is not safe for 20+ years. It may be unlikely, but there's nothing to say it might not be vulnerable right now. There's nothing physically impossible about it (as shown), it's merely a hard problem. And hard problems get solved. What you need in a crypto hash is something you can be sure WILL be impossible to break in a 20 year window, which means what you need is a crypto hash that is beyond anything where the components can be prototyped today. For a 30 year window, it needs to be beyond detailed theory. A 50 year window can be achieved if it's beyond any machine ANY existing theory can describe.

(It takes time to go from theory to prototype to working system to working system on the right scale. The intervals seem to be fairly deterministic in each subject. I believe this to indicate a mathematical model that underpins things like Moore's Law and which is independent of field. Know that model and you know when Moore's Law will fail. Moore's Law is merely the equivalent of Hooke's Constant for computing, failure is inevitable, and if I'm correct then just as QM explains why Hooke's model worked over the interval that it did, there is a model in Information Theory which will explain why Moore's Law works and when it will not. However, that's for another time, when I show how since the underpinnings can be modeled and since the practice is social in nature rather than technical, something non-physical like societies nonetheless obey QM-like laws and thus a deeper theory must exist that describes sufficiently large societies in a model that could legitimately be called Psychohistory. For now, it is sufficient to say that if you want security for a period of X years, certain things must not have been discovered/built.)

SHA-3 doesn't increase keylength, but it DOES make things considerably less vulnerable to a massively distributed attack on scales we now know to be possible using non-traditional technologies we now know can be used.

Re:Future proofing (1)

buglista (1967502) | more than 2 years ago | (#41448095)

Only in this case the US government (NSA) is saying they want a new one... and Schneier is saying he doesn't think one is strictly necessary at this point.

Re:Future proofing (1)

Chrisq (894406) | more than 2 years ago | (#41447545)

However, SHA-2 could be broken tomorrow, and this time we won't have a decade's wait while a suitable replacement is designed.

And SHA-3 could be broken the day after. Or some mathematical breakthrough could undermine an assumption that both use.

Re:Future proofing (1)

ewanm89 (1052822) | more than 2 years ago | (#41447657)

All remaining SHA-3 candidates rely on different mathematical assumptions from the SHA-2 algorithms, so breaking one won't automatically break the other.

Re:Future proofing (5, Interesting)

jd (1658) | more than 2 years ago | (#41447729)

Very true. Which is why I'm anxious that SHA-3 should have as little as possible (ideally nothing) in common with SHA-2, whether algorithmically or in terms of the underlying mathematical problems that are assumed to be hard.

I would have preferred Blue Midnight Wish to be still in the running (well, it's got a cool name, but more importantly it has a very different design).

I ALSO wish Bruce and the others would pay attention to those of us on the SHA-3 mailing list advocating a SHA-3a and SHA-3b, where -3a has the best compromise between speed and security, and -3b has absolutely b. all compromise and is as secure as you can get. Why? Because that meets Bruce's objections. -3a may well be broken before SHA-2 is so threatened that it is unusable, because of all the compromises NIST want to include. -3b, because it refuses to bow to such compromises, should remain secure for much longer. You can afford to stick it in the freezer and let it sit there for a decade, because it should still be fresh BECAUSE no compromises were made. By then, computers would be able to run it as fast as, or faster than, -3a can be run now.

So I have ZERO sympathy with Schneier. He is complaining about a problem that he is, in part, responsible for making. Other views WERE expressed, he thought he knew better, but his path now leads to a solution he believes useless. So, to NIST, Bruce, et al, I say "next time, leave your bloody arrogance at home, there's no room for it, doubly so when you've got mine to contend with as well".

Re:Future proofing (1)

Chrisq (894406) | more than 2 years ago | (#41447861)

Because that meets Bruce's objections. -3a may well be broken before SHA-2 is so threatened that it is unusable, because of all the compromises NIST want to include. -3b, because it refuses to bow to such compromises, should remain secure for much longer. You can afford to stick it in the freezer and let it sit there for a decade, because it should still be fresh BECAUSE no compromises were made.

There are some applications where this is very important, for example the electronic signing of documents for copyright purposes (i.e. a hash published to prove authorship), public time-stamping of documents, etc. If someone can come back in 10 years' time with an alternative document that produces the same hash, you no longer have absolute proof!
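
As a minimal sketch of that kind of scheme (Python with the standard hashlib module; the filename and the idea of simply publishing the hex digest are illustrative, real timestamping services add more machinery around this):

    import hashlib

    def document_digest(path):
        # Hash the document; publishing this digest (in a newspaper, a public
        # ledger, a timestamping service) is the "proof" that this exact
        # content existed at publication time.
        with open(path, "rb") as f:
            return hashlib.sha512(f.read()).hexdigest()

    def verify_claim(path, published_digest):
        # The proof only holds while nobody can produce a different document
        # with the same digest, which is exactly the long-horizon property
        # being discussed above.
        return document_digest(path) == published_digest

    digest = document_digest("manuscript.pdf")  # hypothetical filename
    print(digest)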

Re:Future proofing (2)

chill (34294) | more than 2 years ago | (#41449199)

Is this still possible? Considering SHA-2 is really a take-your-pick suite of SHA-224, -256, -384 & -512, NIST could do the same with SHA-3 and create a family.

Hell, SHA-1 is still kosher according to FIPS 180-4 as of March 2012. I expect SHA-2 to hang around for many years to come.

I admit I have not been following the mailing lists and they might have nixed this idea totally. Thus, my question to you which is probably quicker than trying to dig thru the archives.
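
For what it's worth, the SHA-2 family is already exposed as exactly that kind of take-your-pick suite in common libraries; a quick Python sketch (the sha3_* names assume a Python new enough to ship them, 3.6 or later):

    import hashlib

    msg = b"abc"

    # The SHA-2 family: one design, four output lengths.
    for name in ("sha224", "sha256", "sha384", "sha512"):
        print(name, hashlib.new(name, msg).hexdigest())

    # Later Python versions expose a SHA-3 family in the same way.
    for name in ("sha3_224", "sha3_256", "sha3_384", "sha3_512"):
        if name in hashlib.algorithms_available:
            print(name, hashlib.new(name, msg).hexdigest())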

Re:Future proofing (4, Interesting)

plover (150551) | more than 2 years ago | (#41447997)

Bruce's argument is essentially "the devil you know." Five years ago it seemed like SHA-2 might succumb to an attack. However, it's been five years, and those attacks never materialized on SHA-512. That's five more years of convincing evidence that SHA-2 is fairly secure. None of the SHA-3 finalists have had the same level of scrutiny. Sure, they've been looked at seriously, but nothing like the widespread amount of attention that an actual standard gets.

Another consideration is practicality. If a new standard is published, millions of dollars will be expended changing systems around the world. Will all that money have been well spent? If there was no cryptographic reason for the change, all that money and effort was wasted.

And what about security? Will every replacement be perfect? I personally doubt it; mistakes are made and people screw up implementations all the time. An organization that hired a cryptographer to design and implement a secure solution in 2007 might feel they can do the work themselves today. But we know that cryptographic protocols are notoriously finicky when it comes to unintended information leakage and subtle security failures. If a secure design is modified in any way, the potential to introduce a security bug means the risk of change is much higher than the risk of sticking with SHA-2.
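
One way to keep that migration risk down is to treat the hash algorithm as a parameter instead of hard-coding it, so a later switch doesn't mean touching the surrounding protocol logic. A minimal sketch under that assumption (Python; the function names are illustrative):

    import hashlib
    import hmac

    def make_digest(data, algorithm="sha512"):
        # Record which algorithm produced the digest alongside the digest
        # itself, so verifiers don't have to migrate on the same day.
        return algorithm, hashlib.new(algorithm, data).hexdigest()

    def check_digest(data, algorithm, expected_hex):
        _, actual_hex = make_digest(data, algorithm)
        # Constant-time comparison, to avoid introducing a timing side channel
        # while changing the code: the kind of mistake described above.
        return hmac.compare_digest(actual_hex, expected_hex)

    alg, digest = make_digest(b"payload")
    assert check_digest(b"payload", alg, digest)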

Re:Future proofing (0)

Anonymous Coward | more than 2 years ago | (#41448461)

None of the SHA-3 finalists have had the same level of scrutiny. Sure, they've been looked at seriously, but nothing like the widespread amount of attention that an actual standard gets.

I would tend to disagree. While there are a lot of people 'looking' at SHA-2, the pool of researchers who really *know* hash function design -- and are likely to develop practical attacks against it -- is relatively small. And for the most part, those researchers have been seriously invested in looking at the dozens of SHA-3 candidates, rather than attacking SHA-2. To see this in action, take a look at the publication lists of the last few CRYPTO, FSE and EUROCRYPTs.

I wouldn't say this phenomenon is 100% responsible for the slowdown in work on SHA-2, but it certainly accounts for a good fraction.

Re:Future proofing (0)

Anonymous Coward | more than 2 years ago | (#41448713)

Bruce's argument is essentially "the devil you know." Five years ago it seemed like SHA-2 might succumb to an attack. However, it's been five years, and those attacks never materialized on SHA-512. That's five more years of convincing evidence that SHA-2 is fairly secure. None of the SHA-3 finalists have had the same level of scrutiny. Sure, they've been looked at seriously, but nothing like the widespread amount of attention that an actual standard gets.

It's better to have something and not need it, than need something and not have it.

SHA-3 is a nice Plan B. It will take time to be comfortable with it, but we currently have time given the current (known) strength of SHA-2/512. The competition was also a good way for people to flex their intellectual muscles in a focused way. It threw some new ideas out there, and we were able to see what worked and what didn't.

AES was started in 1997, and the winner announced in 2000. SHA-3 was started in 2007, and the winner announced in 2012. Perhaps a new algorithm for ciphers and hashes should be created every twenty years? So the AES replacement would be started in 2017, and the SHA-3 replacement in 2027. Whether it's "needed" or not?

Release early and release often in a way.

Name (1)

Anonymous Coward | more than 2 years ago | (#41447499)

I think the next hash should be called B455 DR0PP3R 2K12

I have an idea (5, Informative)

diamondmagic (877411) | more than 2 years ago | (#41447537)

How about we link to Schneier's actual blog post? https://www.schneier.com/blog/archives/2012/09/sha-3_will_be_a.html [schneier.com]

Re:I have an idea (4, Funny)

Walterk (124748) | more than 2 years ago | (#41447717)

You must be new here..

Re:I have an idea (4, Funny)

dkleinsc (563838) | more than 2 years ago | (#41448257)

Besides, Bruce Schneier doesn't need his blog entries linked from anywhere - he just breaks into webservers and puts links wherever he wants.

for the uninitiated [schneierfacts.com]

Re:I have an idea (0)

Anonymous Coward | more than 2 years ago | (#41448519)

Why?
Why do people listen to Schneier at all? He's a loud idiot and his books are terrible.
His blog entries as well for that matter.

Well, maybe there's one reason NIST should choose Skein. Skein isn't just a hash function, it's the large-block cipher Threefish and a mechanism to turn it into a hash function.

Mmh, a mechanism he says, oh my! That there mechanism wouldn't be a simple Merkle-Damgaard construction, using either Wide-Pipe or Fast-Wide-Pipe for proper security, eh?
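
For readers who haven't met the idea: "a mechanism to turn a block cipher into a hash" usually means a compression function such as Davies-Meyer iterated in a Merkle-Damgaard-style chain. Below is a deliberately toy Python sketch of that generic pattern, with a throwaway Feistel "cipher" standing in for a real one. It is not secure, and it is not how Skein actually works (Skein chains Threefish through its own UBI mode); it only shows the shape of the construction:

    import struct

    MASK32 = 0xFFFFFFFF

    def toy_cipher(key8, block8):
        # A throwaway 64-bit Feistel permutation keyed by an 8-byte block.
        # Purely illustrative and trivially breakable.
        k = struct.unpack(">2I", key8)
        l, r = struct.unpack(">2I", block8)
        for i in range(8):
            f = (((r << 5) | (r >> 27)) + k[i % 2] + i) & MASK32
            l, r = r, l ^ f
        return struct.pack(">2I", l, r)

    def compress(h8, m8):
        # Davies-Meyer: encrypt the chaining value under the message block as
        # the key, then feed the chaining value forward with XOR.
        enc = toy_cipher(m8, h8)
        return bytes(a ^ b for a, b in zip(enc, h8))

    def toy_hash(message, iv=b"\x00" * 8):
        # Merkle-Damgaard chaining over 8-byte blocks with length padding.
        padded = message + b"\x80"
        padded += b"\x00" * (-(len(padded) + 8) % 8)
        padded += struct.pack(">Q", len(message) * 8)
        h = iv
        for i in range(0, len(padded), 8):
            h = compress(h, padded[i:i + 8])
        return h.hex()

    print(toy_hash(b"hello world"))

Real designs pick the cipher, the feed-forward, and the padding far more carefully; the only point here is that hashing an arbitrarily long message reduces to repeatedly invoking a fixed-size primitive.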

I.e. valid but irrelevant (-1)

Anonymous Coward | more than 2 years ago | (#41447589)

Like so many other studies. Only the reverse is worse.

tl;dr a solution looking for a problem

Why the unneccessary government bashing? (5, Insightful)

Certhas (2310124) | more than 2 years ago | (#41447603)

Is it really necessary to have a snide remark at supposed government inefficiency there? Can't we bury these ideological attacks, which are not really supported by facts or data, add nothing to the point, and are in fact grossly misleading?

This is a hard mathematical problem. Ordinary research papers in mathematics often spend a year or more in peer review in order to verify their correctness. If you're building a key component of security infrastructure a couple of years of review is not at all unreasonable.

Re:Why the unneccessary government bashing? (4, Insightful)

Goaway (82658) | more than 2 years ago | (#41447725)

Yeah, that bit of snark really showed the author has no clue at all what goes into a process like this. Those years are there to give researchers time to really try and break all the candidates. You don't want to rush that part only to find out someone managed to break the winner the next year.

Re:Why the unneccessary government bashing? (-1, Troll)

khallow (566160) | more than 2 years ago | (#41448435)

Is it really necessary to have a snide remark at supposed government inefficiency there?

I would have thought that it was unnecessary as well, but given your subsequent remarks, it appears some of us need reminding of it. Sure, math is hard. Observing profound inefficiency in government is not.

Re:Why the unneccessary government bashing? (0)

Anonymous Coward | more than 2 years ago | (#41449315)

USPS, Social Security, and Medicare are all bankrupt.

Here on slashdot you constantly see people complain about the huge wastes in the military.

Here on slashdot you constantly see people complain about how poor our internet service is.

Here on slashdot you constantly see people complain about how bad the TSA is.
... how the FAA rules on electronics should have been revised a decade ago.

It sometimes takes hours to take care of renewing a driver's license.

It was only last year that my local mass-transit system started using less gas per passenger-mile than the national average for cars. (Mostly because few people were using it; I don't use it, for instance, because the closest stop is a mile and a half from my house.)

My mother sent me a letter Sep. 11 (as postmarked) and I didn't receive it until yesterday.

We spend a few billion on intelligence, and yet we are caught off-guard in an attack on our embassy.

We are 16 trillion in debt.

Yes, some of these are private (military contractors, ISPs) or semi-private (USPS), but they are all subject to great government interference. Military contractors have to put up with bad rules, such as not being permitted to transfer digital CAD files among themselves (citation: my brother worked for L3), and their contracts should be canceled if project spending gets out of control; and that still ignores the concerns about us running 11 aircraft carriers, etc. The ISPs are granted monopoly status by the government. The USPS is a state monopoly that has unreasonable restrictions, such as not being able to raise rates, placed on it by Congress.
 
I agree, in this case, it is reasonable for NIST to take its time. As long as SHA-2 meets most needs, there is no need to rush the next standard. If SHA-2 had weakened as we thought it would in 2006-7, then the argument that they are taking too long could be made, but Schneier is making the OPPOSITE point, so it's a bit of a wash. Still, it'll be nice to have the next standard in hand so we can begin implementing it in our next-gen technology.

Why stop now? (1)

nzac (1822298) | more than 2 years ago | (#41447761)

So much work from everyone involved and we just throw it away??

This is a standard for many years in the future. SHA-1 is still used in some current applications and is still considered acceptable for many uses, and people are still using MD5.

Everyone can just ignore the new standard, and researchers can have a decade or two to try to break it before it's needed. Where is the harm?

But Seriously Folks (1)

Crypto Gnome (651401) | more than 2 years ago | (#41447869)

SHA-2 Veriditas, Neato!

Aesop said it best... (0)

Anonymous Coward | more than 2 years ago | (#41448021)

Those grapes are too sour anyways...

If it aint broke... (1)

GeekWithAKnife (2717871) | more than 2 years ago | (#41448127)

As the saying goes. Why don't we continue to evaluate the candidates in parallel with SHA-2 usage, and when there are signs that SHA-2 is about to be compromised or made obsolete, we just switch to whichever candidate has held up best? Naturally, if SHA-3 is significantly better in speed, security, compression, etc., then we have already made SHA-2 obsolete and need not wait until it "breaks". My two pence.

NIST 800 is a waste of time (1)

gelfling (6534) | more than 2 years ago | (#41448289)

Has anyone ever actually read NIST 800? I just had to review 800-30 and 800-39 yesterday. Hand to god, they're designed to put you in a coma. There is not enough Ritalin on the planet for that.

Quantum Botnets (1)

canadiannomad (1745008) | more than 2 years ago | (#41448311)

Really, any increase in key length or change in algorithm ought to be done to save us from the issues that could arise from things like quantum computers, supercomputer botnets, or, further into the future, quantum-computer botnets. I mean, we don't have those things yet, but we can kind of see them coming, and we ought to be thinking long and hard about how to head off that issue permanently.

Geography Problem (0)

Anonymous Coward | more than 2 years ago | (#41448443)

Gaithersburg, MD. Not Washington DC.

Let's keep them on standby (1)

iceco2 (703132) | more than 2 years ago | (#41450519)

NIST started the SHA-3 competition when SHA-1 was proven weak and no one was sure how long SHA-2 would last;
no one liked the idea of relying solely on the wide-pipe SHA-512 when the underlying building blocks had been shown to be weak (using SHA-512 is a bit like using triple-DES).
However, it is difficult to predict advances in cryptography, and though SHA-512 is not nearly as weak as we predicted it would be a few years ago, we don't know what new cryptanalysis will show up tomorrow, forcing us to leave the SHA-2 family in a hurry.
So it is very good we have 5 new well-studied hash functions. Choosing one now would do little good, because it could prove weaker tomorrow just like SHA-2 could.
If we don't pick a winner now and instead keep them all on ice, we can pick a replacement from among them quickly and easily when we need it.

Yes we do need it, just maybe not yet (1)

cwoollard (620228) | more than 2 years ago | (#41450669)

The thing about security is that it is an ongoing fight with those who want to cause mischief (or worse). You always need to do your best. That is why we need SHA-3. Maybe it isn't required right now, but I am fairly sure that at some point we will need a new solution, either because of flaws that have become known in a current system, increases in computing power that make a new solution necessary, or maybe something much worse. I would rather spend the time coming up with a new solution now, ironing out any flaws or issues in an open way, than have to knock something up quickly to get out of a tricky situation. So we should all support this rather than sticking our heads in the sand and thinking everything will be alright.