
Comments


MtGox's "Transaction Malleability" Claim Dismissed By Researchers

kasperd Re:Dear slashdot, (92 comments)

Transaction fees prevent DoS attacks too, even with infinite block size.

I don't think so. Suppose somebody wants to perform a DoS attack while spending as few bitcoins as possible. They can take a tiny amount of bitcoins and spend it all on transaction fees, one satoshi at a time. With transactions paying a one-satoshi fee and not actually transferring any bitcoins anywhere, miners would have an incentive to include those transactions in their blocks. After all, if there is no limit on the block size, a miner may as well collect that additional fee.

That being said, I still think that off-chain transactions are a bit of a kluge.

I absolutely agree.

Some way of infinitely scaling in-chain transactions, while still providing an incentive to mine long-term, would be awesome.

This I also agree with, except for one detail. The current proof-of-work approach is wasteful and must be replaced by something else. There are some ideas about proof-of-stake, which may be suitable at some point.

about a week ago

MtGox's "Transaction Malleability" Claim Dismissed By Researchers

kasperd Re:Dear slashdot, (92 comments)

Sorry to reply off-topic, but this part isn't true. We'll just start using more off-chain transactions.

That's actually not off-topic at all. The description of off-chain transactions mentions that one way to do it is through the use of trusted third parties such as Mt. Gox! It does proceed to describe how a system could potentially be designed with auditing that can prove whether fraud is happening, which would be an improvement, but it does not suggest any way to avoid such fraud.

If we forked every time transaction volume neared the limit then there would be no point in any limit at all

Sure there would. Requiring manual action to increase the transaction volume could protect against some kinds of DoS attacks that would be possible if there were no limit.

You can validate the chain of block headers without ever seeing the content of the blocks. The signatures on individual transactions and their ancestors can be validated without ever seeing the full blocks; you just need a path from the block header to the transaction, which is only logarithmic in size. There are two reasons this is insufficient to solve the scalability problem. First of all, the number of ancestors of a transaction could grow exponentially over time. Secondly, checking for double spending requires a complete view of all the transactions in all the blocks. Solve those two problems, and you have solved the scalability problem.
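To make the logarithmic path concrete, here is a minimal sketch of validating such a path (a Merkle branch) against the Merkle root committed to by a block header. It assumes Bitcoin's double-SHA256 tree; the function names are my own, and the byte-order details of real headers are glossed over.

<ecode>
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin hashes everything with two rounds of SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_branch(txid: bytes, branch: list[tuple[bytes, str]],
                         merkle_root: bytes) -> bool:
    """Walk from the transaction hash up to the root using one sibling
    hash per tree level, concatenating on the side the sibling sits on."""
    h = txid
    for sibling, side in branch:
        if side == 'L':
            h = double_sha256(sibling + h)   # sibling is the left child
        else:
            h = double_sha256(h + sibling)   # sibling is the right child
    return h == merkle_root
</ecode>

For a block with a million transactions, the branch holds only 20 sibling hashes.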

about a week ago

MtGox's "Transaction Malleability" Claim Dismissed By Researchers

kasperd Re:Dear slashdot, (92 comments)

No, there is no intention to tighten the blockchain rules at this time. This would cause a hard fork, and breaking compatibility with old versions is not considered lightly.

And it should not be taken lightly. But as I understand it, such forks have been done in the past, and another one will be needed due to transaction volume approaching a hard limit imposed by the current rules. The particular tightening of the rules about signatures could piggyback on another update, which would cause a fork. Is there any reason not to piggyback it on the next fork?

Mtgox's software is unique. The reference client, for example, can not be fooled by changing transaction IDs.

And of course changing the reference implementation to mitigate security bugs in alternative implementations has far lower priority than getting the actual bugs in those alternative implementations fixed.

There are two values, each with a 1 in 256 chance. 1/256 + 1/256 = 1/128.

That makes sense. So the success probability is about 0.8%.

But the paper is written to make a much broader claim, and I haven't seen the authors going out of their way to mitigate that misunderstanding in the press, much the opposite.

The news sites I follow haven't picked up anything beyond the original paper.

I believe their research is incomplete, but is there anything incorrect in the research they did perform? And is there anything wrong with the conclusion they reached, which was that transaction malleability cannot explain the bitcoins disappearing from MtGox?

about a week ago

MtGox's "Transaction Malleability" Claim Dismissed By Researchers

kasperd Re:Dear slashdot, (92 comments)

The bitcoin software started refusing to relay improperly padded transactions, even though they are still valid if they make it into a block.

Are there any plans to stop accepting them in blocks?

The claimed attack is that people took these transactions, fixed them, and broadcast them.

I guess we can agree that the article is not covering this attack, but rather a very different one.

but they don't work very often, since it involves accepting a transaction over the p2p network, changing it, then broadcasting your version in hopes of winning the race to reach a miner first.

The paper says the success rate is about 20%.

But they aren't particularly useful for scamming mtgox (or anyone else).

Why not? If they have a 20% success rate, compared to the 0.4% success rate of the other attack, why not try it?

profiting on roughly one cycle out of every 128.

How do you get to 128? One out of every 256 would sound more likely to me.

Either way, the conclusion appears to be that money was not stolen from MtGox using any version of the malleability attack. The paper only argued that they weren't attacked with one particular variant; that conclusion would still be correct, though it makes for an incomplete investigation.

about two weeks ago

MtGox's "Transaction Malleability" Claim Dismissed By Researchers

kasperd Re:Dear slashdot, (92 comments)

The transactions did happen by malleability attack. What makes you think they did not?

The paper suggested they happened due to a malleability attack, and I have no reason to think otherwise. It was not me who said that was nonsense.

It would look like any other transaction.

The paper carefully explained the differences in the appearance of the involved transactions. By saying an attack would look like any other transaction, you are contradicting the paper while providing less evidence to support your case than the paper did. Hence the paper is more trustworthy than your statement.

They failed to steal anything, hence proving the MtGox story is bullshit.

First of all, the paper did not say anything about who those attacks were targeted at, nor whether they succeeded. It is likely that they failed to steal anything, but unless the attacks were targeted at you, you cannot know whether they succeeded.

Even if we assume those copy-cats failed to steal anything, that doesn't prove anything.

Remember that the spike happened after MtGox closed withdrawals.

Yes, I already quoted that from the paper.

The observation in the paper was that if MtGox's announcement that they had closed withdrawals was true, then those attacks could not have been directed at MtGox. So they could be excluded from the set of attacks that could have stolen money from MtGox.

The other observation made in the paper was that the total volume of attempted malleability attacks across the entire bitcoin network, during the period when the alleged thefts happened, was much smaller than the amount of bitcoins allegedly stolen that way.

I can't figure out who you are trying to say is right: MtGox or the researchers. And I don't see much in your comment pointing one way or the other. For now, the methodology used in the paper appears sound to me. I haven't seen the raw data, though, and due to the nature of the attacks, only half of the raw data will be in the blockchain. If they did publish the raw data, I don't know whether it is possible to independently verify its validity.

about two weeks ago

MtGox's "Transaction Malleability" Claim Dismissed By Researchers

kasperd Re:Dear slashdot, (92 comments)

Standard bitcoin community response to any bad news: it's not really bad.

Except the comment you are replying to said the opposite. It was denying the statement made by these researchers, which said that the alleged theft did not happen. (I know that's a lot of negations; better count them before replying.)

about two weeks ago

Isolated Tribes Die Shortly After We Meet Them

kasperd Re:Correlation != Causation (351 comments)

Correlation is not causation. It's entirely possible that dying natives cause visiting Europeans.

How can we even be sure there is a correlation? We can measure the mortality of the tribes that we do find. But then we need to compare that number to the mortality of the tribes that we do not find. Measuring the mortality of tribes that we do not find sounds tricky.

about two weeks ago

MtGox's "Transaction Malleability" Claim Dismissed By Researchers

kasperd Re:Dear slashdot, (92 comments)

Just that this paper is nonsense.

Care to answer a few questions then?

  • How did the transactions found by these researchers happen, if not by a malleability attack?
  • If a malleability attack would not result in transactions looking like what these researchers found, then what would one look like?
  • What is the explanation for the spike found just after the announcement, if that was not due to copy-cats attempting malleability attacks?

about two weeks ago

Why Are We Made of Matter?

kasperd Re:Matter, anti-matter... (392 comments)

Are we sure there were equal amounts?

The way I have understood what's been said so far is this: The universe started with equal amounts of matter and antimatter. Matter and antimatter can only be produced and annihilated in equal amounts. Today we have reached a state where there is much more matter than antimatter.

This is obviously inconsistent. So one of those three statements has to be wrong. I for one don't know which one of them is wrong. And I also haven't come across a physicist who had solid evidence for which of them is wrong.

One possibility I have been wondering about is that of antimatter galaxies. Seen from a distance, wouldn't an antimatter galaxy look exactly like one made of matter? I have been told this is not a possibility either, since it would imply that somewhere there would have to be a boundary between matter and antimatter, where a lot of annihilation would be going on, producing gamma radiation we have not observed. I wonder if the reason we are not observing this boundary is that those regions of space are by now so empty that no significant amount of annihilation is going on anymore. Or could it be that those boundaries are so far apart that there just isn't any such boundary within our event horizon? That would imply that the antimatter is out there somewhere beyond the event horizon, and maybe 10^12 years from now it will become visible.

about two weeks ago

NYU Group Says Its Scheme Makes Cracking Individual Passwords Impossible

kasperd Re:He pretty much agrees with you on page 12. (277 comments)

If you have a list of ten million passwords, and you hash each password and then compare to the password database, you're just generating a rainbow table on the fly. There's no difference between that and doing the ten million hashes beforehand, or getting the list from somebody who already did.

Rainbow tables don't work that way. A rainbow table is not based on a dictionary. When generating a rainbow table you will be hashing pseudorandom inputs (chosen according to a probability distribution). And you are not hashing every input just once; you may end up reaching the same input multiple times. Also, a rainbow table does not store all the computed hashes.
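To illustrate, here is a toy sketch of the chain structure, using MD5 as a stand-in for the attacked hash and a made-up reduction function. Real rainbow tables use much longer chains and carefully chosen parameters; this is only the shape of the idea.

<ecode>
import hashlib
import string

ALPHABET = string.ascii_lowercase
CHAIN_LENGTH = 1000

def target_hash(password: str) -> bytes:
    return hashlib.md5(password.encode()).digest()  # stand-in for the attacked hash

def reduce_to_password(digest: bytes, column: int, length: int = 6) -> str:
    """Map a hash back into the password space. Mixing in the column index
    gives each column its own reduction function, which limits chain merges."""
    n = int.from_bytes(digest, 'big') + column
    chars = []
    for _ in range(length):
        n, r = divmod(n, len(ALPHABET))
        chars.append(ALPHABET[r])
    return ''.join(chars)

def build_chain(start: str) -> tuple[str, str]:
    """Alternate hashing and reducing; the intermediate values are thrown
    away, and the same input may well be visited by more than one chain."""
    p = start
    for col in range(CHAIN_LENGTH):
        p = reduce_to_password(target_hash(p), col)
    return p, start

# Only the endpoints are stored, keyed by the chain's end for later lookup:
table = dict(build_chain(s) for s in ('aaaaaa', 'qwerty', 'zzzzzz'))
</ecode>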

Case one: the bad guy wants to crack any account, and doesn't care which. The bad guy benefits from large numbers, because it increases the odds of somebody using a lame password.

I did not say having a large number of users made the system harder to attack. I said the slowdown salting imposes on the attack is proportional to the number of users. If salted hashes are used, there are two factors involved as the number of users increases. More users means a higher probability of somebody using a really lame password; this benefits the attacker, and I am making no claims about the exact size of this factor. But salting means each password from the dictionary has to be hashed more times, which is a disadvantage to the attacker. In an ideal world these two factors cancel out. In the real world they probably don't cancel out exactly. Nevertheless, I stand by my statement about the slowdown introduced by salting, as it is the other factor there is most uncertainty about.

So let's assume an attacker wants to find just one valid password for one user. And let's assume there are n users, and that in order to find one valid password the attacker needs a dictionary containing m passwords. So far those assumptions say nothing about how passwords are stored, and they are general enough to cover any such scenario. We don't know what n and m will be in a concrete scenario. What I stated is that the number of hashes an attacker needs to compute is n times larger if the password database is salted than if plain unsalted hashes are used.

If the passwords are not salted, the attacker needs to compute just m hashes and compare those against the password database. That comparison is easy to perform by simply sorting the hashes. If OTOH the passwords are salted, the attacker needs to compute m*n different hashes in order to find the one combination where there is a match.

If n is reasonably large, and if there is no strict password policy, it is likely that m will be just 1. But even in that case, the calculations are still valid.
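A minimal sketch of the two attack loops, with SHA-256 standing in for whatever hash function the system uses and all names being illustrative:

<ecode>
import hashlib

def attack_unsalted(dictionary, stored_hashes):
    """m hash computations in total: hash each candidate once, then match
    by lookup (the sorting/lookup step is cheap compared to hashing)."""
    table = {hashlib.sha256(p.encode()).hexdigest(): p for p in dictionary}
    return {h: table[h] for h in stored_hashes if h in table}

def attack_salted(dictionary, records):
    """Up to m*n hash computations: each candidate must be rehashed for
    every user, because each record carries its own salt."""
    cracked = {}
    for user, salt, stored in records:      # n records
        for p in dictionary:                # m candidates each
            if hashlib.sha256(salt + p.encode()).hexdigest() == stored:
                cracked[user] = p
                break
    return cracked
</ecode>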

about two weeks ago

NYU Group Says Its Scheme Makes Cracking Individual Passwords Impossible

kasperd Re:WTF? (277 comments)

An old-school salted hash == partial verification for the whole entry. So the old-school solution is strictly worse than this.

You are right. I misunderstood that detail the first time around. The two bytes that are leaked are not two bytes of the password, but rather two bytes of the salted hash.

An attacker could still utilize those two bytes in an offline attack to reduce the length of a dictionary by a factor of 65536, followed by online attempts at logging in using this much shorter dictionary. However, the article did mention how that attack can be detected on the server side.
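A sketch of that pruning step, assuming the attacker has the per-user salt and the two leaked bytes of the salted hash (plain salted SHA-256 here, purely for illustration):

<ecode>
import hashlib

def prefilter(dictionary, salt, leaked_two_bytes):
    """Offline step: keep only candidates whose salted hash begins with the
    leaked bytes. On average 1 in 65536 survive; only the survivors need to
    be tried against the live login service, which is where the server-side
    detection described in the article would kick in."""
    return [p for p in dictionary
            if hashlib.sha256(salt + p.encode()).digest()[:2] == leaked_two_bytes]
</ecode>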

about two weeks ago

NYU Group Says Its Scheme Makes Cracking Individual Passwords Impossible

kasperd Re:He pretty much agrees with you on page 12. (277 comments)

You do not understand what you are talking about. Salting has absolutely no influence on brute-forcing.

I give up. You have clearly demonstrated that you do not know what you are talking about, and that you are not willing to learn. I don't know why you think you can convince me of something by repeating a statement which I know is not true.

If you are not willing to accept that you were mistaken, there is no point in continuing this thread any further.

The number of users has absolutely no influence on the time it takes to brute-force one. You clearly do not know what "brute-force" means. Maybe read up on the concepts before spouting utter nonsense?

  • You should read what I wrote instead of making something up, which I did not write.
  • I'd say taking a university degree in cryptography does count as reading up on the concepts.

about two weeks ago

NYU Group Says Its Scheme Makes Cracking Individual Passwords Impossible

kasperd Re:He pretty much agrees with you on page 12. (277 comments)

Either it is insecure, or it is vulnerable to DoS. So what is your point?

If you use a salted hash based on a cryptographic hash with no known weaknesses, then you won't be as vulnerable to DoS attacks. And security-wise it is a justifiable solution. Hashing and salting add a lot of security: they slow down an attack significantly without a significant cost for legitimate usage. That's what you expect from good cryptography. Iterating the hash, OTOH, slows down legitimate usage and attacks by the same factor. Slowing down legitimate usage by the same factor that you slow down attacks is not good cryptography.

Instead of slowing down legitimate usage without being able to slow down attacks by even more, you should be looking at adopting protocols that provide real security improvements. For example, it is entirely possible to perform password authentication without the server ever having a chance of picking up the password in cleartext. Such protocols provide a real security improvement. You can also shift the computational cost to the client side rather than the server side, and slow down brute-forcing of a leaked password database that way. The latter is still not great, because you are still only slowing down the attacks by the same factor as legitimate usage. But at least you don't make yourself vulnerable to DoS attacks if those extra computations happen on the client rather than the server.

If you go with a salted hash using only 1 or 2 iterations of the hash function to protect yourself against DoS attacks, and you push for adoption of protocols that hide the password from the server, then you are doing more for security than most sites. And should those salted hashes leak, only the very weakest passwords will be brute-forced. In that situation, if a user's password is broken, the user bears the responsibility for choosing such a weak password in the first place.

about two weeks ago

NYU Group Says Its Scheme Makes Cracking Individual Passwords Impossible

kasperd Re:He pretty much agrees with you on page 12. (277 comments)

The ideal would be some form of client certificate. That way, the server either stores a copy of the key, or just stores a hash of it so it can recognize the key material when presented with it.

A certificate means a trusted third party has signed a statement that this particular public key belongs to this user. I'm not hooked on the idea of a trusted third party for this. Having the server store the public key or a hash of it, like you suggest, is a better approach. But then it is not really a client certificate.

That approach is sort of similar to what I describe, except that in my scenario the private key is computed on the fly, and in your case it is stored on the computer. Each approach has advantages. It is possible to design the protocol such that the client can choose whichever of the two approaches it prefers, and the server won't know which of the two is in use.

One drawback of storing the private key on the computer is that there is now a file you can lose, and if you do, you lose access to all sites. My approach would only require you to remember a password, and then you can always get a new computer and use it. That may itself be a drawback in some scenarios: if someone learned your password, they could authenticate as you. OTOH, if only the private key is required, somebody stealing your device could authenticate as you (though the private key could be encrypted using a password).

Another drawback of storing the private key is that you would be using the same key with many sites, which could then violate your privacy by letting the sites deduce that accounts on different sites belong to the same person. My approach would use a different private key for each site, since the key would depend on the salt. The protocol for setting the password in the first place could enforce uniqueness of the salt by requiring it to be a hash combining inputs from both client and server.

If this doesn't work, maybe a system where an ephemeral key is generated and used, which is signed by the user's real key (which is kept offline.)

If you do go with the stored key approach, then this additional layer of indirection would be beneficial to the security.

but it would get rid of passwords altogether.

I don't believe in getting rid of passwords. If you don't have any passwords at all, then anybody stealing your hardware could authenticate as you. For me the goal is not to get rid of passwords, but to ensure you never need to present your password to an untrusted device.

about two weeks ago

NYU Group Says Its Scheme Makes Cracking Individual Passwords Impossible

kasperd Re:He pretty much agrees with you on page 12. (277 comments)

What really needs to happen is separation of duties and storing the hashes the same way companies store private keys used for signing... a physically secure, hardened appliance with a limited interface out. Backups are done to a USB port physically on the appliance, and the data never is exposed on the network, only calls to use it.

I say the effort is better spent on new protocols where the server will never be able to learn the password, even if an administrator decided to install software to capture data after it has been decrypted by SSL. Such protocols are possible, but not widely deployed.

How many users wouldn't want a system where the administrator couldn't leak the users' passwords even if they wanted to? As an added bonus, you could safely use the same password on all sites that make use of such a more secure protocol. The implication would be that you only have to remember one password, which would hopefully get users to choose a slightly stronger password than they do today.

about two weeks ago

NYU Group Says Its Scheme Makes Cracking Individual Passwords Impossible

kasperd Re:He pretty much agrees with you on page 12. (277 comments)

Salting actually provides no security at all against brute-forcing. Salting helps against rainbow-tables, but that is a different attack.

You clearly did not understand what I wrote. Salting slows down attacks in proportion to the number of users. The only way you can attack salted hashes as fast as unsalted hashes is if you are attacking a system which is only ever used by a single user.

Rainbow tables are just a way to start the attack before the leak happens. If you already have the leaked hashes, there is no point in using a rainbow table, since rainbow tables are slower than an ordinary brute-force attack.

If there are many small leaks from an identical hash function, you would have to decide when there are enough to bother with a brute-force attack. If you use rainbow tables, most of the computation can be reused for each leak, because that part can be computed before the leak happens. But overall you end up spending more CPU time on the attack than you would have if you had just waited for all the leaks to happen and only then started a brute-force attack.

about two weeks ago

NYU Group Says Its Scheme Makes Cracking Individual Passwords Impossible

kasperd Re:He pretty much agrees with you on page 12. (277 comments)

But there are other ways, for example requiring users to solve a captcha in addition or rate-limiting individual IP addresses.

Rate-limiting individual IP addresses is of limited value. It is not that hard for an attacker to attack you through different IP addresses. And in many cases an attacker would even be able to get a few requests coming through the same IPs as some legitimate users. If an attacker has access to a botnet, the situation gets even worse.

I predict that killing the login service due to lack of CPU capacity requires a much smaller botnet than flooding the network connection would.

Using a captcha may help. But the implication would be that if you are under attack, you are going to require a captcha from lots of legitimate users. That's not very user-friendly. I am not aware of any formal analysis of the security of captcha schemes, but in cases like this, breaking the captcha only enables a DoS attack and does not gain actual access, so it is not that bad. And you can even adjust the difficulty of the captcha dynamically to keep server load under control.

about two weeks ago

NYU Group Says Its Scheme Makes Cracking Individual Passwords Impossible

kasperd Re:He pretty much agrees with you on page 12. (277 comments)

All of those methods for slowing down password validation are DoS attack vectors.

You can protect against that by moving part of the computation to the client side. Once upon a time, I wrote a proof-of-concept in JavaScript. Much better solutions are possible if you design a new protocol and can get clients to support it.

A rough idea goes like this. The client sends a username to the server. The server responds with a salt. The client uses salt + password to seed a PRNG. Output from the PRNG is used to generate an asymmetric keypair. The client then signs a session ID (from the SSL layer) using the generated secret key. The client sends the public key and signature to the server. The server then validates the public key against a salted hash (using a different salt value), and it validates the signature.
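A minimal sketch of that flow, with Ed25519 as the keypair and PBKDF2 as the seeded derivation; both are my illustrative choices (using the third-party cryptography package), not necessarily what the proof-of-concept used, and a real protocol would need to be specified far more carefully.

<ecode>
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def derive_keypair(password: str, salt1: bytes) -> Ed25519PrivateKey:
    # Salt + password seed a deterministic, deliberately slow derivation.
    # An Ed25519 private key is just a 32-byte seed, so the PBKDF2 output
    # can be used directly; the expensive work happens on the client.
    seed = hashlib.pbkdf2_hmac('sha256', password.encode(), salt1, 100_000)
    return Ed25519PrivateKey.from_private_bytes(seed)

def client_respond(password: str, salt1: bytes, session_id: bytes):
    # Client: derive the keypair on the fly and sign the session ID.
    key = derive_keypair(password, salt1)
    pub = key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    return pub, key.sign(session_id)

def server_verify(salt2: bytes, stored_hash: bytes,
                  pub: bytes, sig: bytes, session_id: bytes) -> bool:
    # The server stores only salt1, salt2 and sha256(salt2 + pub) per user,
    # never the password itself.
    if hashlib.sha256(salt2 + pub).digest() != stored_hash:
        return False                     # wrong or unknown public key
    try:
        Ed25519PublicKey.from_public_bytes(pub).verify(sig, session_id)
        return True
    except InvalidSignature:
        return False
</ecode>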

For each user, the server would need to store two salts and a hash value. The most expensive part of the above calculation is the generation of the asymmetrical keypair, which happens on the client. Thus you are better protected against DoS attacks. And that computation actually requires more CPU time than a typical iterated hash for password validation. The second most expensive part of the calculation is the signing using the secret key, which also happens on the client.

The validation of the signature does require a bit of CPU time on the server. But that happens only after you have validated the public key. For an attacker, even getting hold of the public key is actually harder than breaking the password protection schemes we are using today. Effectively, you could remove the signature validation step from the protocol above, and it would still be more secure than what we are currently using.

about two weeks ago

NYU Group Says Its Scheme Makes Cracking Individual Passwords Impossible

kasperd Re:How many times ... (277 comments)

The real question you should be asking is: how many times do you have to reference it before it turns true? In the real world, people gaining unauthorized access due to weak passwords happens a lot more often than anybody getting tortured into handing out their password.

about two weeks ago

NYU Group Says Its Scheme Makes Cracking Individual Passwords Impossible

kasperd Re:He pretty much agrees with you on page 12. (277 comments)

Salts do not help that much today, as brute-forcing is faster than generating rainbow-tables.

Salting provides lots of security against brute-force attacks. Let's assume you have a system with one million users and you have a list of the 10 million most common passwords. If the system uses unsalted hashes, you only have to hash those 10 million passwords once to know which users have been using a password from that list. If OTOH the hashes are salted, you have one million salts and 10 million passwords. That's 10^6 * 10^7 = 10^13 (ten trillion) combinations you have to try in order to know which users used which passwords from your list.

about two weeks ago

Submissions


Was NAT responsible for Skype outage?

kasperd writes  |  more than 3 years ago

kasperd (592156) writes "Skype has published a post-mortem that explains some details about the incident. It still leaves a few questions unanswered.

The outage was caused by overloaded supernodes; the article hints that less than one percent of the Skype clients act as supernodes. If supernodes are this prone to getting overloaded, why did Skype not use more clients as supernodes?

The article says supernodes help to establish connections between regular nodes. Does that mean the supernodes are responsible for NAT hole punching?

Would the outage still have happened if none of the Skype clients were behind NAT? Or would a situation where all Skype clients had a direct Internet connection have meant less load on supernodes due to lack of need for hole punching and more nodes available to act as supernodes?"
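For context, hole punching in its UDP form amounts to both peers sending datagrams at each other at the same time, so that each side's own NAT opens a mapping the other side's packets can come in through. A rough, illustrative sketch, assuming a rendezvous node (a supernode, in Skype's terms) has already exchanged the peers' public addresses; this is not Skype's actual mechanism:

<ecode>
import socket

def udp_hole_punch(local_port: int, peer: tuple[str, int]) -> None:
    """Both peers run this at roughly the same time. Each outgoing datagram
    creates (or refreshes) a mapping in the sender's own NAT, which is what
    lets the other side's datagrams in. The peer's public (ip, port) is
    assumed to have been learned from a rendezvous node beforehand."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(('', local_port))
    sock.settimeout(1.0)
    for _ in range(10):              # keep packets in flight from both sides
        sock.sendto(b'punch', peer)
        try:
            data, addr = sock.recvfrom(1500)
            print('hole punched; heard', data, 'from', addr)
            return
        except socket.timeout:
            continue
    print('punching failed (the NAT may be symmetric)')
</ecode>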

Journals


Secure wireless mice

kasperd writes  |  more than 9 years ago

Most of you probably already know how annoying the wire on the mouse sometimes may be. That is why the wireless mouse was invented, and now I'm looking for one. But as with any other wireless equipment, security is an important issue. Sometimes these devices work over longer ranges than expected.

The possibility of sniffing the input is not my only concern. Authenticity is also important: I don't want anybody within a range of 100m to be able to control my computer. So any product that doesn't do both encryption and MACs (message authentication codes) is out of the question.

It wouldn't be difficult to produce a secure product. Good ciphers and MACs exist, and key exchange can safely be done while the mouse is placed in the recharger. But finding a product that actually does this proves to be difficult.
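As a sketch of how little would be needed: a modern AEAD construction provides the cipher and the MAC in a single primitive. This illustrative example assumes a 32-byte key already shared while the mouse sat in its charger, and uses a message counter as both nonce and replay protection.

<ecode>
import struct
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

class SecureMouseChannel:
    """The AEAD supplies both the cipher and the MAC. A monotonically
    increasing counter serves as the nonce and lets the receiver reject
    replayed reports."""

    def __init__(self, key: bytes):
        self.aead = ChaCha20Poly1305(key)   # 32-byte shared key
        self.send_ctr = 0
        self.recv_ctr = -1

    def seal(self, report: bytes) -> bytes:
        nonce = struct.pack('>IQ', 0, self.send_ctr)   # 12-byte nonce
        self.send_ctr += 1
        return nonce + self.aead.encrypt(nonce, report, b'')

    def open(self, packet: bytes) -> bytes:
        nonce, ciphertext = packet[:12], packet[12:]
        ctr = struct.unpack('>IQ', nonce)[1]
        if ctr <= self.recv_ctr:
            raise ValueError('replayed or reordered report')
        report = self.aead.decrypt(nonce, ciphertext, b'')  # raises on forgery
        self.recv_ctr = ctr                                 # advance only on success
        return report
</ecode>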

I searched for wireless mice satisfying most of my needs (that is, optical wireless mice with at least three buttons) and picked five well-known manufacturers from the list. None of the information I could find online answered my questions, so I decided to contact the companies and ask. The results were depressing.

  • The first company had a wide range of wireless mice, but only one product with encryption. And even this product wasn't trustworthy, as it was based on proprietary algorithms. Security through obscurity is generally considered a sign of weakness, and is advised against in more than one place.
  • The second company did not know what encryption and MACs are, and did not consider them necessary.
  • The third company never replied to my email.
  • The fourth company replied to my email, but did not try to answer my questions. Instead I was referred to a reseller, who had never heard of the product.
  • The fifth company did not provide any contact information on their webpage.

So I am starting to worry that maybe secure wireless mice simply do not exist. Where should I look for one? And if I find a manufacturer that can provide a good description of a secure product, how should I verify that the implementation actually matches the description?

Of course my considerations about wireless mice also apply to keyboards. The keyboard may in fact be even more sensitive than the mouse, and since I don't move my keyboard as much as I move my mouse, I have decided to stick with wired keyboards.
