Slashdot: News for Nerds


New SHA Functions Boost Crypto On 64-bit Chips

Soulskill posted more than 3 years ago | from the t-rkwmwxtnv-swc-yregtqx-jwdc-gtav-ytgu-gute-pcjkgwxcra dept.

Encryption 60

An anonymous reader writes "The National Institute of Standards and Technology, guardian of America's cryptography standards, has announced a new extension to the SHA-2 hashing algorithm family that promises to boost performance on modern chips. Announced this week, two new standards — SHA-512/224 and SHA-512/256 — have been created to directly replace the SHA-224 and SHA-256 standards. They take advantage of the speed improvements inherent in SHA-512 on 64-bit processors to produce checksums more rapidly than their predecessors — but truncate them at a shorter length, reducing the overall timespan and complexity of the digest." Further details are available from NIST (PDF).
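As a rough illustration of the truncation idea in Python's hashlib (note: per the draft standard, the SHA-512/t variants also use distinct initial hash values, so a plain truncation of SHA-512 is not byte-for-byte identical to the standardized digests):

```python
import hashlib

# SHA-512 produces a 64-byte digest; SHA-512/256 keeps the leftmost 32 bytes.
# NOTE: the real SHA-512/256 also starts from different initial hash values,
# so this truncation only illustrates the output-shortening, not the standard.
full = hashlib.sha512(b"The quick brown fox jumps over the lazy dog").digest()
truncated = full[:32]  # 256 bits

print(len(full), len(truncated))  # 64 32
```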


60 comments

Tiny penises! (-1)

Anonymous Coward | more than 3 years ago | (#35249300)

CmdrTaco and kdawson have tiny penises!

Re:Tiny penises! (0)

Anonymous Coward | more than 3 years ago | (#35249498)

Do you know that from personal experience?

Re:Tiny penises! (1)

Anonymous Coward | more than 3 years ago | (#35250482)

Yes I do ;-) Want to be next, sweetie?

faster?? (2)

Firkragg14 (992271) | more than 3 years ago | (#35249322)

Wasn't there an article recently complaining that the speed of SHA made it relatively useless as a hashing algorithm to protect passwords? Surely the increase in speed would have a greater effect on cracking speed than on the speed of legitimate authentication.

Re:faster?? (3, Interesting)

sl3xd (111641) | more than 3 years ago | (#35249392)

I thought this as well - you'd think being able to compute a hash faster makes it a bit easier to compute a rainbow table with the hash.

Then again, there are many other perfectly reasonable ways you'd want the hash to be faster - for instance, how git uses the sha1 hash throughout - or any hash-summing of a file to verify the contents are unchanged.

So the 'faster hash' really only means that it might be something to consider when using it for a password hash - but for data integrity checking, it can be a real boon.

Re:faster?? (1, Interesting)

parlancex (1322105) | more than 3 years ago | (#35249490)

Does git seriously use SHA1 for file integrity verification? Different hashes are for different purposes. The CRC family of hash functions actually makes certain statistical guarantees about the longest run of errant bytes it can detect in the source data, and is far faster, making it far more suitable for file integrity checks.

Re:faster?? (3, Informative)

petermgreen (876956) | more than 3 years ago | (#35249606)

IIRC the CRC hashes are only designed to protect against accidental changes while secure hashes are designed to protect against both accidental and malicious changes. This makes them more suited to distributed systems where not every participant is trustworthy.

Re:faster?? (1)

Kjella (173770) | more than 3 years ago | (#35249964)

Yep, CRC doesn't deserve being called a hash at all. It's just a checksum really, like the control digit on your credit card number. Good against random corruption, no good against a maliciously inserted payload.
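The credit-card control digit mentioned above is the Luhn checksum; a minimal sketch shows why it is good against typos but useless against an attacker, who can always recompute the digit to match tampered data:

```python
def luhn_valid(number: str) -> bool:
    """Luhn check: double every second digit from the right, sum, test mod 10."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:       # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9       # same as summing the two digits of d
        total += d
    return total % 10 == 0

print(luhn_valid("79927398713"))  # True  (classic test number)
print(luhn_valid("79927398710"))  # False (one digit corrupted)
```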

Re:faster?? (0)

Anonymous Coward | more than 3 years ago | (#35249994)

To be pedantic, it's halfway between the two... it's really a message digest :)

CRC has its limits. (3, Informative)

jhantin (252660) | more than 3 years ago | (#35249638)

Different hashes are for different purposes.

No argument there.

The CRC family of hash functions actually makes certain statistical guarantees about the longest run of errant bytes it can detect in the source data, and is far faster, making it far more suitable for file integrity checks.

CRC is great for packet-sized input, but not so great over larger chunks of data; also, the way its design targets burst errors means that widely separated point errors aren't as effectively caught. There's a reason Ethernet jumbo frames haven't gone much over 9000 bytes -- Ethernet's CRC-32 is much less effective at message sizes over 12000 bytes [wareonearth.com] or so. Cryptographically strong hashes tend to be less sensitive to input length.
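For packet-sized data, the error detection is easy to see with zlib's CRC-32 (though, as noted above, these guarantees weaken as messages grow):

```python
import zlib

packet = bytearray(b"a typical packet-sized payload of a few dozen bytes")
good_crc = zlib.crc32(packet)

packet[10] ^= 0x04                       # flip a single bit in transit
assert zlib.crc32(packet) != good_crc    # CRC-32 reliably catches this
```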

citation needed (0)

Anonymous Coward | more than 3 years ago | (#35253982)

your citation on the limit of ethernet frame size doesn't actually
address why 9000 is a upper bound. it just states this as a fact.
i couldn't quickly find a good reference either.

Re:CRC has its limits. (1)

Circuit Breaker (114482) | more than 3 years ago | (#35254438)

That's apples and oranges.

The 12k limit is more related to just having 32 bits than to cryptographic/non-cryptographic nature.

64k × 64k ≈ 2^32, so at 64 kbit messages you are guaranteed that at least one two-bit error goes undetected. At 2048 bits you are also virtually guaranteed that some 3-bit error goes undetected -- and that's true whether the hash is a plain CRC or a cryptographic one.

A CRC-128 would have been as good for this purpose as MD5 or SHA1.

Re:faster?? (1, Informative)

sexconker (1179573) | more than 3 years ago | (#35249432)

Wasn't there an article recently complaining that the speed of SHA made it relatively useless as a hashing algorithm to protect passwords? Surely the increase in speed would have a greater effect on cracking speed than on the speed of legitimate authentication.

Yes and no.

Yes there was such an article.
No it doesn't mean shit - that's what salts and multiple rounds of an algorithm are for.

But then again, yes, this is bad news bears, because nobody can seem to keep their password file out of reach of hackers, nobody can seem to figure out why and how they should use a salt, and no one ever configures their crypto to do anything but the bog-standard shit. This is a result of idiots mindlessly screaming "Don't roll your own crypto!!!!" and forgetting that the last word of that sentence is supposed to be "algorithm". You absolutely should have a non-standard crypto routine using a standard algorithm. This isn't security through obscurity; it's security through making script kiddies' rainbow tables useless.
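One common shape of "standard algorithm, non-standard configuration" as described above is a per-user random salt plus an iterated hash. This is only a sketch (names and the round count are illustrative); purpose-built KDFs such as PBKDF2, bcrypt, or scrypt are preferable in practice:

```python
import hashlib
import os

def hash_password(password, salt=None, rounds=100_000):
    """Salted, iterated SHA-256: the salt defeats precomputed rainbow tables,
    and the rounds impose a linear slowdown on brute force."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.sha256(salt + password).digest()
    for _ in range(rounds):
        digest = hashlib.sha256(digest).digest()
    return salt, digest

def verify_password(password, salt, expected):
    return hash_password(password, salt)[1] == expected
```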

Re:faster?? (2)

Firkragg14 (992271) | more than 3 years ago | (#35249704)

Yes, but a) assuming your password file is invulnerable is stupid, otherwise we would all just use plaintext; b) multiple rounds are great, but they really only exist because current crypto algorithms are so fast that you need multiple cycles to slow cracking; and c) salts are great, but with a fast algorithm, factoring them into your cracking process isn't impossible. Most of these issues boil down to the fact that MD5 and SHA are poor choices for protecting passwords; take that use out of the equation, and a faster SHA is great for file integrity checking.

Re:faster?? (2)

blair1q (305137) | more than 3 years ago | (#35249772)

My password file is in plaintext.

Its location, however, is knowable only by breaking a 4096-bit key that changes daily.

Re:faster?? (1)

reiisi (1211052) | more than 3 years ago | (#35249982)

I assume, then, that you have plenty of (probably dynamically generated) decoys scattered around so that it really can't be found without knowing the location in advance?

Re:faster?? (1)

blair1q (305137) | more than 3 years ago | (#35255740)

It looks exactly like a /. posting. So yeah.

Re:faster?? (3, Insightful)

Goaway (82658) | more than 3 years ago | (#35249440)

Cryptographic hashes are used for a huge number of things besides protecting passwords, which indeed they are somewhat poorly suited for.

Re:faster?? (0)

Anonymous Coward | more than 3 years ago | (#35249618)

Use key stretching.
Do 10,000 or 1,000,000 rounds of SHA if you want.

Re:faster?? (1)

shish (588640) | more than 3 years ago | (#35249812)

If you want it to be slow, just hash password + salt + 10 gigabytes of pseudo-random noise

Re:faster?? (1)

VortexCortex (1117377) | more than 3 years ago | (#35250600)

Hmm, pseudo-random noise? Really? Then the output could depend on which pseudo-random generator you use (which may be non-portable unless you implement the PRNG yourself).

I would just use more iterations:

digest = HMAC( pass, salt );
for ( i = 2048; i --> 0; ) digest = H( digest );
return digest;
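The parent's pseudocode, made runnable in Python (assuming HMAC-SHA-256 for HMAC and SHA-256 for H; the 2048-iteration count is kept from the original):

```python
import hashlib
import hmac

def stretched_digest(password: bytes, salt: bytes) -> bytes:
    # digest = HMAC( pass, salt );
    digest = hmac.new(salt, password, hashlib.sha256).digest()
    # for ( i = 2048; i --> 0; ) digest = H( digest );
    for _ in range(2048):
        digest = hashlib.sha256(digest).digest()
    return digest
```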

Re:faster?? (0)

Anonymous Coward | more than 3 years ago | (#35250150)

No, for two reasons.

One, it's easy to make things slower. Every cryptographer and most systems security people have known for a long time that making password verification slow is good for security, so they use PBKDF2 or whatever.

Two, SHA-512 has a larger block size, so it's only faster for long messages. It's actually slower for hashing passwords.

Besides, hash functions are used for more than just passwords. They're also used to check the integrity of files and SSL data. Especially with AES on-chip, the hash function can easily become the bottleneck. Most new message authentication is built on other primitives for this reason, but some sort of hashing is still required for signatures.

Finally, this is standardizing something that was done for a while as a cryptographically sound but non-standard hack. With SHA-3 coming out soon, not many people will adopt it that weren't using something like this already.

Re:faster?? (0)

Anonymous Coward | more than 3 years ago | (#35250324)

Wasn't there an article recently complaining that the speed of SHA made it relatively useless as a hashing algorithm to protect passwords? Surely the increase in speed would have a greater effect on cracking speed than on the speed of legitimate authentication.

Lets talk about rainbow tables for a second, because they are the best brute force tool for cracking hashes (Mathematics being much faster in theory but much less attainable for mere mortals).

Rainbow tables work thusly: Take some input that fits into the range of passwords you are looking for -> hash it -> use a custom rolled function to re-map that hash back into the password space -> hash it again -> ...many cycles later... --> store the resultant hash and the starting password. Do this many times till you have run out of disk space.

You then have a list of the starting point and end point for each chain. There are additional tricks to make this work better, be more resistant to chains merging, but that is the Big Idea (tm).

To use the table to "look up" the password you follow the same pattern as computing the chains, but at each step you check if the hash you have is in the list of "ends" you have handy. If it is, take the starting password and re-compute the chain until you find the hash you started with. You now have the password.
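A toy sketch of the chain construction described above, over a deliberately tiny 4-digit PIN keyspace (the reduce function is the "custom rolled" remapping mentioned; all names are illustrative):

```python
import hashlib

def H(pw: str) -> str:
    return hashlib.sha256(pw.encode()).hexdigest()

def reduce_to_pin(digest: str, step: int) -> str:
    """Map a hash back into the 4-digit PIN keyspace; mixing in the step
    index is one of the tricks that resists chains merging."""
    return f"{(int(digest[:8], 16) + step) % 10000:04d}"

def build_chain(start_pin: str, length: int = 1000):
    """Walk hash -> reduce -> hash -> ... and keep only the two endpoints."""
    pw = start_pin
    for step in range(length):
        pw = reduce_to_pin(H(pw), step)
    return start_pin, H(pw)
```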

Studious readers will note that you map the hashes back into the keyspace. The more reasonably sized the keyspace the more effective a rainbow table will be. Salts combat this by making every password essentially much, much, much, much, much larger. This means that you will need a rainbow table that is not only large but impossibly difficult to compute.

This works, and it works well. However, salts and hashes are generally stored together. That makes sense, because you need the salt to verify the password. If the hacker has the salt and the hash, they may be able to recover the password with old-fashioned brute force. This is where the speed of SHA can be a problem, and why password encryption/hashing functions are iterated thousands of times.

Iteration is only a linear slowdown, though. It does not add to the complexity fast enough to be truly effective. Real security comes from exponential gains in complexity.

I don't know if it is provable, but I do not believe it is possible to achieve the exponential complexity gains required to verify a string without the use of additional secrets that must be present at verification time on one or both of the sides that need to authenticate (for example, a key on the server that encrypts the salts and hashes, or an RSA private key on the user's side).

In short: salt and iteratively hash your passwords (who cares which algorithm you use, so long as it is not home-grown; I'll leave THAT holy war to smarter men than I), use a passphrase with lots of complexity, and for the love of god support non-password authentication schemes like OpenID (still usually a password, but you get to pick whom to trust and can even run your own if you prefer) and public key authentication.

Re:faster?? (1)

Jessta (666101) | more than 3 years ago | (#35251878)

If your goal is to hide the original data that was hashed, then SHA on small amounts of data is not a good idea.
If your goal is to verify that the data you have matches the hash you have, then a faster SHA is good.

Re:faster?? (1)

Cato (8296) | more than 3 years ago | (#35252678)

Using password salt and multiple iterations of SHA-xxx is enough to defeat rainbow tables, particularly if you choose a non-standard number of iterations - see http://slashdot.org/comments.pl?sid=1987632&cid=35150388 [slashdot.org] for a bit more.

Why? (1)

parlancex (1322105) | more than 3 years ago | (#35249416)

Unless I'm missing something why would we ever want performance improvements in a secure hash function? SHA isn't for verifying data, there are superior hashes in that respect with regards to performance and certain statistical guarantees. A secure hash is supposed to have 2 properties: 1) It needs to be irreversible 2) It should be slow so as to reduce the feasibility of brute force attacks.

A very slow hash function that takes maybe 5ms to process would be extremely usable for authentication in practical usage, but almost impossible to run dictionary attacks against. So why is faster better here?

Re:Why? (1)

Goaway (82658) | more than 3 years ago | (#35249454)

The single use for hashes where you want them to be slow is protecting passwords in databases. This is a tiny fraction of the use cases for cryptographic hashes.

Re:Why? (1)

parlancex (1322105) | more than 3 years ago | (#35249528)

The people using cryptographic hashes for non-cryptographic purposes are idiots. As already noted in the comments above, the CRC family of hash functions actually makes certain statistical guarantees about the longest run of errant bytes it can detect in the source data, and is far faster, making it far more suitable for file integrity checks and other similar tasks.

Re:Why? (1)

profplump (309017) | more than 3 years ago | (#35250158)

There are plenty of uses for cryptographic hashes that do not involve passwords.

For one thing, many people's definition of "integrity" includes protection against deliberate tampering, not just protection against bit rot/transmission errors, and CRC's linear nature makes it completely unsuitable for such use. For another, CRC's "statistical guarantees for the longest run of possible errant bytes" make it good at detecting burst errors, but also make it possible for certain patterns of scattered errors to go completely unnoticed.
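The "linear nature" point can be demonstrated directly: for equal-length inputs, CRC-32 satisfies crc(a XOR b) = crc(a) XOR crc(b) XOR crc(zeros), which is exactly the structure a tamperer exploits to patch data while keeping the checksum valid. No cryptographic hash admits such a relation:

```python
import zlib

a = b"transfer $0000100 to account 12345"
b = b"transfer $9999999 to account 99999"
assert len(a) == len(b)

xored = bytes(x ^ y for x, y in zip(a, b))
zeros = bytes(len(a))

# CRC-32 is affine over XOR -- its output is fully predictable under tampering.
assert zlib.crc32(xored) == zlib.crc32(a) ^ zlib.crc32(b) ^ zlib.crc32(zeros)
```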

Not to mention there are lots of uses that even you would consider worthy of cryptographic hashes -- like SSL/TLS negotiation -- that some people perform on the order of billions of times a day. They'd probably like fast hash functions. At the other end of the scale, some of us would like to speak HTTPS to our embedded systems running at ~100 MHz, and deliberately slow hash functions are right out for that sort of application.

But hey, if your whole world is unprotected serial-line data transmission and password files then I guess cryptographic hash functions are only useful for password files.

Re:Why? (1)

VortexCortex (1117377) | more than 3 years ago | (#35250846)

Your view is too narrow. Algorithms designed for one purpose may also suit another.

I created a library that uses any cryptographic hash function (MD5, Whirlpool, SHA1-512, etc), in a special Cipher Block Chaining (CBC) mode to provide a flexible strong encryption & decryption system.

Having a more secure & faster encryption/decryption system for real-time encryption is great (bonus, without rewriting any of the encryption code I can take advantage of the new hash algorithms as they become available instead of being chained to a few dated AES algorithms).

Cryptographic hash functions can (and are) used in many areas of cryptography, not just for password hashing and file CRCs. See: using one way encryption (hashes) to perform two-way (reversible) encryption.

Re:Why? (1)

jvonk (315830) | more than 3 years ago | (#35256574)

See: using one way encryption (hashes) to perform two-way (reversible) encryption.

Okay, I have been wondering about this since last evening.

I'll bite: how does one perform reversible encryption using hashes, given that hashes are not bijective from the domain to the fixed hash output length codomain? Even if you form an algorithmic construct that limits the domain such that the hash function could potentially be injective in the codomain, how do you ensure that it is surjective? Furthermore, isn't the entire point of cryptographic hash function design to make it so that the inverse function is Very Hard To Determine (ie. how would 'decryption' work)?

Perhaps I misinterpreted your statement...

Re:Why? (1)

Goaway (82658) | more than 3 years ago | (#35256760)

Read up on modes of operation of ciphers. (For instance, see http://en.wikipedia.org/wiki/Block_cipher_modes_of_operation [wikipedia.org] )

Specifically, you will notice that cipher-feedback, output-feedback and counter modes (CFB, OFB, CTR) all use only the encryption half of the symmetric cipher they are based on. Thus, you can trivially replace the symmetric cipher with a one-way hash function.
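A toy sketch of the CTR idea with a hash in place of the block cipher (illustrative only: no authentication, not a standardized construction, not production crypto):

```python
import hashlib

def hash_ctr_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR data against a keystream of SHA-256(key || nonce || counter) blocks.
    As in real CTR mode, the same call both encrypts and decrypts."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ k for d, k in zip(data, stream))

ciphertext = hash_ctr_xor(b"secret key", b"nonce-0001", b"attack at dawn")
plaintext = hash_ctr_xor(b"secret key", b"nonce-0001", ciphertext)
```

As with any stream construction, reusing a (key, nonce) pair for two messages is fatal.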

Thank you (1)

jvonk (315830) | more than 3 years ago | (#35257172)

I always appreciate the opportunity to learn... I hadn't considered the approach of using XOR symmetry in this particular way.

Re:Why? (0)

Anonymous Coward | more than 3 years ago | (#35249900)

Even if you are doing passwords, consider the processor side of things. You have 10,000,000 users all logging in and out randomly. You want to spend as little CPU time as possible hashing the passwords so you use a fast algorithm. If you want it to be slow anyway, you then pause the thread for a bit and let some other thread handle some other user, before returning the result.

Fast is almost always good especially when you are on a server.

Re:Why? (1)

elmartinos (228710) | more than 3 years ago | (#35249506)

A fast hash can be made slow very easily: just pipe its result through the function again, and do this a million times, and use this as the hash.

Re:Why? (2)

Goaway (82658) | more than 3 years ago | (#35249536)

Somewhat easily, yes, but not quite that easily. You should never use a cryptographic algorithm carelessly like that. Always look up the recommended ways to do these things, because naïve algorithms like the one you suggest tend to have unexpected weaknesses.

Re:Why? (0)

Anonymous Coward | more than 3 years ago | (#35249644)

Integrity Management. google it.

Re:Why? (1)

Bengie (1121981) | more than 3 years ago | (#35251502)

I'm curious why slowness should even be considered useful. Even with a fast hash, brute forcing a random 10-character password at a billion combinations per second would take over a century.

Even if they made hashing 10x faster, it wouldn't accomplish much.

The real question is how easily can a collision be found.
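The back-of-envelope figure above checks out for a truly random 10-character password (assuming the ~95 printable ASCII symbols and a billion guesses per second); passwords drawn from dictionaries fall far faster:

```python
keyspace = 95 ** 10                     # random 10-char printable-ASCII passwords
seconds = keyspace / 1e9                # at one billion guesses per second
years = seconds / (365 * 24 * 3600)
print(f"{years:,.0f} years")            # about 1,900 years
```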

Re:Why? (1)

pjt33 (739471) | more than 3 years ago | (#35252658)

What's in view isn't brute-forcing over the universe of possible passwords but over the universe of probable passwords - i.e. /usr/share/dict/words.

Re:Why? (0)

Anonymous Coward | more than 3 years ago | (#35252318)

Actually you are missing something.

Cryptographic hash functions are used for verifying data and faster is in fact better. The properties of a cryptographic hash function include:
1. It is easy to compute the hash value for a given message (fast).
2. It is infeasible to find a message that matches a given hash value, i.e. given D you cannot find m such that HASH(m) = D.
3. It is infeasible to modify a message m such that the hash stays the same, i.e. given m and D you cannot find m' such that D = HASH(m) = HASH(m').
4. It is infeasible to find two messages with the same hash, i.e. you cannot find messages a and b such that HASH(a) = HASH(b).

Cryptographic hash functions have a wide variety of uses. They are used in every SSL connection to hash all the data transferred, to ensure that there is no modification over the wire.

Because of these properties, cryptographic hashes should not be used to store passwords. If you want to store passwords securely it might be beneficial to look into key derivation functions and key stretching. PBKDF2, bcrypt, and scrypt are all key derivation functions that should be used instead of a cryptographic hash to verify passwords.
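Of the functions named above, PBKDF2 ships in Python's standard library (the iteration count here is illustrative; tune it so a single derivation takes tens of milliseconds on your own hardware):

```python
import hashlib
import os

salt = os.urandom(16)
# 200,000 iterations of HMAC-SHA-256; store the salt and count with the result.
key = hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple", salt, 200_000)
print(key.hex())
```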

Encryption on chip approved by (1)

santax (1541065) | more than 3 years ago | (#35249592)

Just what I would want to use to encrypt that really sensitive data... no thanks. Let's keep encryption software-based. Hardware can't be checked for backdoors by the 99.999% of people depending on it; let's not make that 100%.

Re:Encryption on chip approved by (1)

maxume (22995) | more than 3 years ago | (#35249626)

If you don't trust the hardware to be secure for some activity, it hardly matters what software you are running on it.

Re:Encryption on chip approved by (1)

noidentity (188756) | more than 3 years ago | (#35249658)

I read this as an algorithm that is better-suited for modern 64-bit processors, NOT one which is implemented specially in hardware. At the very minimum, this would mean that it can easily be calculated using 64-bit integers (and using the entire 64 bits, not just the low 32 bits), and perhaps also easily implemented using SSE2, and allow lots of parallelism, etc.

Re:Encryption on chip approved by (2)

bk2204 (310841) | more than 3 years ago | (#35250034)

SHA-512 is indeed faster than SHA-256 on 64-bit processors. SHA-512 uses 80 rounds using 64-bit variables on block sizes of 128 bytes, and SHA-256 uses 64 rounds using 32-bit variables on block sizes of 64 bytes. Since on most 64-bit machines 64-bit operations are roughly as fast as 32-bit operations, you see a speed increase because you're processing twice as much data and doing only a little more work (80 rounds versus 64). Both algorithms are very similar internally, so a round in each algorithm generally performs the same amount of computation.
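The round counts are internal, but the block and digest sizes cited above can be checked straight from hashlib:

```python
import hashlib

h256 = hashlib.sha256()
h512 = hashlib.sha512()
print(h256.block_size, h256.digest_size)  # 64 32  (64-byte blocks, 256-bit digest)
print(h512.block_size, h512.digest_size)  # 128 64 (128-byte blocks, 512-bit digest)
```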

The traditional way to make a longer hash value into a shorter one is to truncate it, using the leftmost bits. This is used with DSA and is generally considered suitable for most purposes. I don't therefore really see a need for SHA-512/t; at best it seems like this is an effort to improve performance.

Now wait a moment (0)

Anonymous Coward | more than 3 years ago | (#35249726)

Aren't hashing algorithms meant to be inefficient? If they're efficient, it just makes brute force attacks a whole lot easier, and makes it easier to build rainbow tables. IIRC, the perfect hash algorithm would take something like 0.5 sec to hash anything.

Re:Now wait a moment (2)

pjt33 (739471) | more than 3 years ago | (#35249984)

I suppose that would be one way to make Bittorrent CPU-bound rather than IO-bound.

Re:Now wait a moment (1)

Em Adespoton (792954) | more than 3 years ago | (#35250040)

...and you want an inefficient algorithm to check the integrity of your disk image why? It's not like someone's going to reverse engineer a 200MB file out of a tiny hash. It's also unlikely that they'll be able to force a collision.

Hashes have many uses; hashing an access key (such as an ascii password string) is only one, and there are hashing algorithms designed for that (the SHA family is not included). For that matter, there are meta-algorithms designed to safely use SHA-style algorithms cryptographically. Faster generation just means more iterations through the meta-algorithm. Time to calculate final hash tends to stay fairly constant.

Re:Now wait a moment (1)

F.Ultra (1673484) | more than 3 years ago | (#35250882)

No, they are not meant to be inefficient. For the specific case of storing passwords, yes, you need a cryptographic construct that is inefficient, but the hash is simply a component of that construct; you make it inefficient with salt and key stretching.

Hashes like SHA-256 get their strength from the fact that no matter how fast you make them run, brute forcing the entire key space would still take more time and energy than is left in the universe.

What's the point of this? SHA-3 is next year. (1)

Myoukochou (1817718) | more than 3 years ago | (#35249974)

This is absolutely silly. I can't see why anyone, let alone NIST, would want this. They should know better:

- SHA-512 is only faster than SHA-256 in pure x86-64 versus x86; add SSE to the mix and start doing four SHA-256 blocks in parallel, and SHA-512 is about the same speed, or slower!
- SHA-256 is not particularly slow, overall: 150MB/s is quite possible with it. Half the speed of SHA-1, yes, but still not bad. That is gigabit on one core, and more than the sustained read speed of a hard disk (although not an SSD).
- Of the five SHA-3 finalists, really only JH is slower than SHA-256 on x86, or SHA-512 on x86-64 (though perhaps that's down to a bad implementation); most of the other candidates run around twice as fast as SHA-512 at its fastest (i.e. not too far off SHA-1 speed, and some can be parallelised to run much faster), especially in 64-bit. They can probably be made to run faster still.
- The SHA-3 winner (Advanced Hash Standard?) will be announced next year - and will at that time already have faster, more secure drop-in replacements for SHA-224, SHA-256, SHA-384, and SHA-512 (and anyone using SHA-1 or, God forbid, MD5 will need a stern talking-to).

Why, then, would we want a kludge for more speed - in such a limited scenario - when an established, relatively well-analysed hash exists right now and can go at the same speed, and in a year or so, it will then be near-instantly obsoleted by a faster, better-designed hash function?

Re:What's the point of this? SHA-3 is next year. (1)

owlstead (636356) | more than 3 years ago | (#35250352)

What I don't get is the focus on 64 bit x86 computers. They take the fastest personal computers and see how well they run a hash function. It's way more interesting to see what happens when using smaller chips. Personally I think that the reference platform for the latest hash method should have been a 32 bit ARM or something, not a little endian 64 bit Intel chip.

Re:What's the point of this? SHA-3 is next year. (1)

skyraker (1977528) | more than 3 years ago | (#35250926)

You need to remember the concept of secret intelligence. The US government won't release any information on something unless they already have something better. So, I would assume that our intelligence communities allowed this 'release' because they already have something that may or may not be better than even SHA-3.

Link to the standard (3, Informative)

owlstead (636356) | more than 3 years ago | (#35250522)

If anyone is interested in the source material, here it is:

http://csrc.nist.gov/publications/drafts/fips180-4/Draft-FIPS180-4_Feb2011.pdf [nist.gov]

Fresh from the press, it seems.

By the way, SHA-512/224, SHA-512/256, SHA-384 and SHA-512 differ only in their initial hash values, so these algorithms are very easy to implement: just change the constants and cut the output to the required number of bits. Personally, I think that is at least two hash functions too many.

Faster rainbow tables? (0)

Anonymous Coward | more than 3 years ago | (#35251142)

Yay! Faster performance to create rainbow tables even faster!

Hash calculations for password security are supposed to be slow.

Re:Faster rainbow tables? (0)

Anonymous Coward | more than 3 years ago | (#35252954)

Yay! Faster performance to create rainbow tables even faster!

Hash calculations for password security are supposed to be slow.

Hash functions are used for many purposes other than for password protection in a database. In the case where you care about hashing your password, define your hash function as calling SHA 1000 times.

Making things slow is easy. Making things fast or secure is hard.

Ability to Spymore (2)

NSN A392-99-964-5927 (1559367) | more than 3 years ago | (#35251276)

Remember, governments want their cake and they want to eat it, plus more. I recall, many years ago before 9/11, being part of the movement that sent mass "keywords" to force the FBI into admitting the existence of Carnivore. These were the days before Wikileaks, and the source code was leaked onto planetsourcecode.com for 8 hours.

Eventually the FBI admitted what they were doing, then scrapped the code and set about an entire overhaul of systems, including at the CIA, DHS, etc. The new code name was Magic Lantern -- still the same code, but under a different name and with variations added in, shipped into systems as part of Microsoft products.

You might ask why this is important. Well, since the Carnivore fiasco, it has taken 11 years for the government to catch up, and they have spent billions of dollars belonging to the good, honest people of America to spy more.

Only SHA-512 is acceptable nowadays. Put it together with 3DES http://en.wikipedia.org/wiki/Triple_DES and AES http://en.wikipedia.org/wiki/AES, Serpent http://en.wikipedia.org/wiki/Serpent_encryption_algorithm and Blowfish http://en.wikipedia.org/wiki/Blowfish_%28cipher%29 and you will really piss off governments royally and set them back 200 years!

She's real fine my 404... (1)

evilviper (135110) | more than 3 years ago | (#35251612)

Further details are available from NIST (PDF).

Unfortunately, no, they aren't... Too bad, since there's so little detail in the summary I have no idea what this is actually about.

Not Found

  The requested URL /publications/drafts/fips180-4/Draft-FIPS180-4_Feb2011.pdf was not found on this server.

Re:She's real fine my 404... (1)

Fnord666 (889225) | more than 3 years ago | (#35254102)

The requested URL /publications/drafts/fips180-4/Draft-FIPS180-4_Feb2011.pdf was not found on this server.

Please see owlstead's post [slashdot.org] which contains a good link to the document.

Does SHA2 still produce the same results? (1)

joeyadams (1724334) | more than 3 years ago | (#35252080)

Does the new standard change the actual hash function, or does it merely provide a faster way to compute SHA2?

Re:Does SHA2 still produce the same results? (1)

msaavedra (29918) | more than 3 years ago | (#35252316)

I'm not an expert on crypto, but it seems to me that, for instance, SHA-512/256 would not produce the same digest from the same input as SHA-256. I just conducted the following test on the linux command line:

$ echo hello | sha512sum
e7c22b994c59d9cf2b48e549b1e24666636045930d3da7c1acb299d1c3b7f931f94aae41edda2c2b207a36e10f8bcb8d45223e54878f5b316e7ce3b6bc019629 -

$ echo hello | sha256sum
5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 -

The first is the SHA-512 hash of the word "hello" and the second is the SHA-256 hash. I don't see any way to truncate the 512-bit output and get one that matches the 256-bit output. Therefore SHA-512/256 would not be compatible with plain SHA-256.

I don't see much utility in these new algorithms. Since we would already be calculating the 512-bit hash, why not just use it instead of truncating it? I suppose there are a few situations where for externally imposed reasons you just need a value of a certain length, but that's about it.

Re:Does SHA2 still produce the same results? (2)

butlerm (3112) | more than 3 years ago | (#35254172)

Since we would already be calculating the 512-bit hash, why not just use it instead of truncating it?

Because there are many applications where carrying the extra 256 bits either breaks compatibility or is storage/transmission cost prohibitive for some reason or another. ZFS style block checksums, for example. Hashed authentication of network packets is another.
