
Comments

Microsoft Takes Down No-IP.com Domains

kasperd Re:Time For Decentralized DNS (495 comments)

Using blockchain technology for decentralized consensus.

If you are thinking about using bitcoin-style proof of work, then I'd say that is a poor choice. It is an extreme waste of processing power, and it is not even needed for DNS. The purpose of the proof of work is to prevent double spending. But if you tried to perform a double-spending-like action on a DNS system built on similar principles, the only damage you'd cause would be to your own domain.

But by all means, let's get data and hosting decoupled. DNSSEC provides the ability to validate records, wherever you got them from. But it still has the centralized authority. I'd rather see that once a zone hands over authority over a subdomain to a different public key, a signature with that key has to be used to hand authority back or transfer it to a new key.

about a month ago
Winners of First Seized Silk Road Bitcoin Auction Remain Anonymous

kasperd Re:Can bitcoins be blacklisted? (88 comments)

is it possible or even practical to identify a bitcoin as having been a "direct descendant" of a coin involved in a given transaction and/or as a coin that has been "co-mingled" with such a coin?

Definitely. That is easy to do. However, since each transaction can have multiple inputs and outputs, the set of descendants is likely to grow over time, until eventually most bitcoins are descendants of that transaction.
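
To make that concrete, here is a minimal sketch in Python (the transaction representation is made up; a real implementation would walk the actual blockchain) of how the descendant set could be computed:

    def tainted_descendants(transactions, seized_outputs):
        """transactions: (inputs, outputs) pairs in block order, where
        inputs and outputs are sets of outpoint identifiers.
        seized_outputs: the outputs of the seizure transaction."""
        tainted = set(seized_outputs)
        # One pass in block order suffices: a transaction can only
        # spend outputs that were created earlier.
        for inputs, outputs in transactions:
            if any(i in tainted for i in inputs):
                # A single tainted input taints every output; this
                # co-mingling is what makes the set grow over time.
                tainted.update(outputs)
        return tainted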

it may make it practical for major players and for that matter anyone who uses BC to "locally blacklist" seized bitcoins.

If there isn't any consensus in the "community", then such a blacklist is unlikely to have any effect.

If some miners decide to blacklist transactions involving certain coins, then other miners are just going to pick them up. If only a minority of miners are in on the blacklisting, this is going to cause a fork in the blockchain. Other miners have to decide which fork they are going to bet their resources on. If there isn't consensus on what to blacklist, there could be so many forks blacklisting different subsets that each fork becomes irrelevant, leaving only the chain with no blacklisting as viable.

Even if you could manage to get a majority of miners to agree on exactly what should be blacklisted, it is of questionable value to the miners to attempt blacklisting. It could be seen as setting a dangerous precedent for blacklists, which would introduce a new and even more unpredictable danger to anybody owning bitcoins.

Traders could decide to blacklist certain bitcoins, meaning they would refuse to accept blacklisted coins. But if you are selling goods for bitcoins, you'd have to announce in advance which coins you consider blacklisted; otherwise you'd have disputes where the buyer says they have paid, but the seller says the received bitcoins are no good. And as a receiver of bitcoins you'd also have to decide how diluted the blacklisted bitcoins would have to be before you'd accept them. All in all, there'd have to be consensus about both the set of blacklisted bitcoins and the dilution threshold. Otherwise nobody would know whether the bitcoins they are accepting are good or not, and without such knowledge blacklisting wouldn't have the intended effect; you'd just be rejecting arbitrary payments, and might as well flip a coin to decide whether to say no thanks to a given payment.

I think the only consensus that has a real chance of being reached is that bitcoins are not blacklisted.

about a month ago
Bitcoin Security Endangered By Powerful Mining Pool

kasperd Re:Ghash.IO is not consistently over 51%, yet anyw (281 comments)

Keep in mind; if the miners did have to communicate with the pools constantly and synchronously with their mining, it could slow down their mining, and therefore give them a competitive disadvantage.

True. I was assuming it was obvious that the communication had to be asynchronous. And I can't see any reason to communicate with other pools more often than once per block.

Once a node has started computing, it should be able to go on for quite a while without any communication. If the node doesn't hear anything else, it should just keep doing whatever it was doing. The only thing that can render the continued computation completely pointless for the node is if a node somewhere (in the same pool or any other pool) successfully mines a block. If communication has been totally dead for an hour, it is probably a waste of energy to keep trying to mine a block, since somebody else likely mined it already. But if you haven't heard anything for five minutes, just keep trying to mine the same block you were already working on.

This means the most important information to get synchronized between nodes is the fact that somebody mined a block. This should be totally independent of the pool, so this can be communicated between nodes even if they are in separate pools.

The other information a node needs to receive is information about which transactions to include in the block. It's no big deal if that information is lagging a bit behind. You could update the list of transactions multiple times while trying to complete a block, but if it lags a couple of blocks behind, nothing is going to break.
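
A minimal sketch of such a loop in Python (the event format and the work unit are hypothetical stand-ins, not any real mining software's API):

    import queue, time

    GIVE_UP_AFTER = 3600  # after an hour of silence, mining on is likely wasted energy

    def mining_loop(notifications: queue.Queue, work, try_nonces):
        last_heard = time.time()
        while True:
            try:
                event = notifications.get_nowait()  # asynchronous: never block the hashing
                last_heard = time.time()
                if event["kind"] == "block_found":     # from our pool or any other
                    work = event["next_work"]          # the only thing that must restart the search
                elif event["kind"] == "transactions":  # may lag a couple of blocks; that's fine
                    work = event["updated_work"]
            except queue.Empty:
                if time.time() - last_heard > GIVE_UP_AFTER:
                    return  # somebody else almost certainly mined the block by now
            try_nonces(work)  # otherwise just keep hashing what we have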

about a month and a half ago
Unicode 7.0 Released, Supporting 23 New Scripts

kasperd Re:Linear A? (108 comments)

But why? We couldn't understand Linear B

That shouldn't be a prerequisite for including it. After all, having the text represented on a computer would be a useful tool in getting to understand it.

about a month and a half ago
Unicode 7.0 Released, Supporting 23 New Scripts

kasperd Re:Fragmentation - Ghost of Steve Jobs, is that yo (108 comments)

It's a set of numbers from zero to 2^32 - 1 that map to symbols

Actually it only goes from 0 to 1114111, mainly because that's the range you can represent with UTF-16.
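
For the curious, that number falls straight out of the surrogate-pair arithmetic; a quick check in Python:

    # A high surrogate (0xD800-0xDBFF) and a low surrogate (0xDC00-0xDFFF)
    # each carry 10 bits, so one pair addresses 2**20 code points beyond
    # the Basic Multilingual Plane.
    bmp = 0x10000            # code points reachable without surrogates
    supplementary = 2 ** 20  # code points reachable with a surrogate pair
    print(bmp + supplementary - 1)       # 1114111
    print(hex(bmp + supplementary - 1))  # 0x10ffff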

about a month and a half ago
Bitcoin Security Endangered By Powerful Mining Pool

kasperd Re:Ghash.IO is not consistently over 51%, yet anyw (281 comments)

I believe 98% of miners are using standard mining tools which communicate with the selected pool only

So, we are dealing with a (minor) weakness in the standard mining tools.

What i'd like to see happen is a pool cross-submission scheme, where: instead of miners having just one pool configured, they have at least 3 configured, and: while they may only be requesting work units from 1 pool; they could send a 'heads up' to all the secondary pools, when a new block is detected...

Sounds like a reasonable solution.

about a month and a half ago
Bitcoin Security Endangered By Powerful Mining Pool

kasperd Re:Ghash.IO is not consistently over 51%, yet anyw (281 comments)

A miner connected to the bitcoin network AND the pool, could in theory foil the attack.

If you are mining without communicating with the rest of the bitcoin network, you are putting somebody else in charge of that communication, which means you are giving somebody the power to cheat. Any miner not intending to cheat should consider that a vulnerability in the mining software.

In other words, any miner not intending to cheat has an interest in running mining software that does communicate with the rest of the bitcoin network, even if the rest of the mining pool doesn't.

about a month and a half ago
Civilians Try to Lure an Abandoned NASA Spacecraft Back to Earth

kasperd Re:Hack (53 comments)

Its mission ended in 1997 and it was sent a shutdown signal.

What's the purpose of sending a shutdown signal to an abandoned probe? If it is abandoned, does it matter if you shut it down or not?

about a month and a half ago
Bitcoin Security Endangered By Powerful Mining Pool

kasperd Re:Ghash.IO is not consistently over 51%, yet anyw (281 comments)

Take steps to prevent accumulating 51% hashing power, including: not accepting new miners

Why is this even necessary? I was under the impression that a mining pool would not be able to pull off an attack without it being immediately visible to the miners in the pool. Doesn't that mean that having a pool with a majority of the processing power isn't enough to pull off an attack, and that you also need all the miners in the pool to conspire to perform it?

about a month and a half ago
When will large-scale IPv6 deployment happen?

kasperd Re:Fuck IPv6 (305 comments)

To not make the IP addresses overly lengthy.

The size of the IPv6 address was chosen carefully. But you can never predict everything, and a few use cases have shown up for more than 128 bits. But we'll just have to make do with the 128 bits we got, because nobody wants to go through this entire upgrade process one more time.

So, why was it set at 128 bits in the first place? First of all, the IPv6 address, just like the IPv4 address, consists of a network part and a host part. Because the IPv4 address was too short, the boundary between the two parts was first made variable at byte boundaries, and when that turned out not to be enough to avoid running out, the boundary was permitted to be at any bit. Even that was not enough to avoid running out. With IPv6 this mistake was not to be repeated. Hence each of the two parts had to be made large enough.

From IPv4 deployments we learned that 32 bits was not enough. In fact we have more or less removed the host part of the address (with lots of complications) and forced utilization way beyond the reasonable, and 32 bits is still not enough. 36 bits for the network part might be enough, if utilization were at 100%. However, research has led to the concept of an HD ratio, which indicates what percentage of the bits in an address can be used effectively when the address needs a hierarchical structure that can be utilized in routing. Research shows that a reasonable HD ratio is in the range of 80% to 90%. If we have 45 bits and an 80% HD ratio, we have 36 bits effectively.

Instead of making the network part 45 bits, which is an awful size for a computer to work with, it was rounded up to 64 bits. Those additional bits were put to reasonable use. In front of the 45 bits were put 3 bits, which split the addresses into 8 blocks, of which the first and last are used for addresses that need special handling in the protocols. The other 6 blocks are there such that we have 6 chances to get the address allocation right in order to avoid running out again. After the 45 bits was put a group of 16 bits that can be used for subnetting within a site.
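
To make the layout concrete, here is a worked example in Python composing an address from those four fields (all field values made up):

    import ipaddress

    prefix3 = 0b001               # the 3-bit block selector
    routing = 0x0123456789        # up to 45 bits handed down the provider hierarchy
    subnet  = 0x0042              # 16 bits for subnetting within the site
    host    = 0x0123456789abcdef  # 64-bit host part

    # 3 + 45 + 16 + 64 = 128 bits
    addr = (prefix3 << 125) | (routing << 80) | (subnet << 64) | host
    print(ipaddress.IPv6Address(addr))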

Some ISPs are so scared about those 45 bits running out that they have already commandeered some or all of the 16 bits intended for subnetting within a site. This is most likely a reflex reaction caused by too many years of being forced to be extremely careful with the allocation of IPv4 addresses. It is not as if those 45 bits are going to run out.

For the host part there was a desire to introduce auto-configuration, which could generate the host part from a MAC address. If you also wanted room for addresses not generated from a MAC address, that meant the host part had to be at least 49 bits (the 48 bits of the MAC plus one bit to distinguish the two kinds of addresses). This too was rounded up to 64 bits. Is it wasteful to round up from the 94 bits of documented need to 128 bits of actual address size? I'd say it would have been wasteful to require CPU time to be spent on the bit operations needed to save a mere 34 bits on the size of the address.
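
For illustration, the standard derivation (modified EUI-64) splits the 48-bit MAC, inserts FF:FE in the middle, and flips the universal/local bit; a short sketch:

    def eui64_from_mac(mac: str) -> bytes:
        octets = bytes(int(x, 16) for x in mac.split(":"))
        eui = octets[:3] + b"\xff\xfe" + octets[3:]  # 48 -> 64 bits
        return bytes([eui[0] ^ 0x02]) + eui[1:]      # flip universal/local bit

    print(eui64_from_mac("00:11:22:33:44:55").hex(":"))
    # -> 02:11:22:ff:fe:33:44:55, the 64-bit host part of the address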

Saving CPU time by rounding up the size of the addresses makes sense to me. Saving CPU time by eliminating needless fields from the header also makes sense to me. In fact three different 16 bit fields that routers would need to process when forwarding an IPv4 packet got removed such that routers no longer need to waste processing time on those on IPv6.

Why did it suddenly turn out that 128 bits was not quite enough? Once we got the chance to work with the much larger addresses, people suddenly realized that it is possible to apply cryptographic operations to part of the IP address. With IPv4 that was unthinkable due to there being only 32 bits in total. But with 128 bits, cryptography suddenly came within reach. However, cryptographic primitives with only 128 bits are considered to be on the weak side by now, and we can't even use the entire IPv6 address for cryptographic operations. So where cryptographic data in the IP address makes sense, we have to compromise on security, but it still provides some benefit compared to not being able to do that cryptography in the first place.

This is not the only reason 128 bits is not quite enough. RFC 4193 defines a way to generate local prefixes with a low risk of collisions; this is to replace RFC 1918, where collisions are a real problem. RFC 4193 leaves 16 bits for subnetting. But with RFC 1918 you could use 10.0.0.0/8, in which you had 24 bits and could realistically use up to 21 of them for subnetting. This is not to say RFC 4193 puts you in a worse position than RFC 1918 did, but we are just 5 bits short of saying that it is unconditionally better.
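
As an illustration, the RFC 4193 generation boils down to hashing a timestamp together with an EUI-64 and keeping the 40 least significant bits; a rough sketch (the RFC specifies a 64-bit NTP timestamp, for which a plain Unix timestamp stands in here):

    import hashlib, time

    def ula_prefix(eui64: bytes) -> str:
        timestamp = int(time.time()).to_bytes(8, "big")  # stand-in for NTP format
        digest = hashlib.sha1(timestamp + eui64).digest()
        g = digest[-5:].hex()                            # low 40 bits = the Global ID
        return f"fd{g[:2]}:{g[2:6]}:{g[6:]}::/48"        # fd00::/8 + Global ID

    print(ula_prefix(bytes.fromhex("021122fffe334455")))  # e.g. fdxx:xxxx:xxxx::/48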

Then you can look at protocols such as 6to4 and Teredo. 6to4 needed to embed an IPv4 address inside the network portion of the IPv6 address. That fits just fine. But due to deployments of NAT on IPv4, 6to4 is not usable on all IPv4 networks. Along came Teredo to solve that problem. Teredo, however, uses UDP and needs to embed both an IPv4 address and a port number, and it needs both client and server addresses to be embedded, along with a few flag bits. In total that's 112 bits you would want to embed inside the network part, preferably with bits to spare for subnetting. So on top of the 112 bits you need a prefix on the order of 16 to 32 bits and 16 bits for subnetting; that's about 144 bits that would have to fit inside the network portion of the IPv6 address.

That was obviously not possible. So first of all, Teredo used not just the network part of the address but also the host part. That means Teredo is not suitable for connecting an entire network, only single hosts, and it means the bits for subnetting go unused. Even this was not quite enough to make all the embedded data fit inside the IPv6 address, so the server port number was hardcoded in the protocol such that it would not have to be embedded in the IPv6 address.
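
To see how tight the packing is, here is a decode of the resulting layout in Python, using the well-known example address from the Teredo documentation:

    import ipaddress

    def decode_teredo(addr: str):
        n = int(ipaddress.IPv6Address(addr))
        server = ipaddress.IPv4Address((n >> 64) & 0xFFFFFFFF)  # Teredo server IPv4
        flags  = (n >> 48) & 0xFFFF                             # flag bits
        port   = ((n >> 32) & 0xFFFF) ^ 0xFFFF                  # client UDP port, stored inverted
        client = ipaddress.IPv4Address((n & 0xFFFFFFFF) ^ 0xFFFFFFFF)  # client IPv4, inverted
        return server, flags, port, client

    print(decode_teredo("2001:0:4136:e378:8000:63bf:3fff:fdd2"))
    # -> (IPv4Address('65.54.227.120'), 32768, 40000, IPv4Address('192.0.2.45'))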

If only ISPs had deployed IPv6 in time, there wouldn't have been any need for contraptions like Teredo.

about a month and a half ago
When will large-scale IPv6 deployment happen?

kasperd Re:Why IPv6? (305 comments)

Why does my ISP issue me with only a 32 bit address?

Not enough competition. You only get to choose among those companies who are actually in the area and can get a physical wire to your address. Plus, most consumers don't see the connection between the problems they experience and the lack of IPv6 connectivity on their internet connection. But things are moving forward; I might actually get native IPv6 at home next week, and I live in a country which is lagging far behind the rest of the world.

Why does my server host only give me 32bit addresses?

For the same reason you haven't moved to a competitor that does have IPv6 support. For hosting there is more competition, because it is easier to move. And I believe that is part of the reason why the percentage of hosting companies with IPv6 support is larger than the percentage of ISPs with IPv6 support.

You can get dual stack hosting, if you make it a large enough priority that you are willing to switch hosting provider to get it. That's the positive side. The number of customers actually switching hosting provider to get dual stack is small, but I am one of those who have done it. We don't need 100% of customers ready to switch hosting provider to get IPv6. I think that if just 30% of customers were ready to switch hosting provider, then 90% of the hosting providers would deploy IPv6.

the default settings in IPTables are 32bit?

iptables is for IPv4, ip6tables is for IPv6.

but there seems to be no more forward motion.

There is forward motion. It is happening 13 years too late. If we keep being 13 years behind schedule compared to my calculations, then by 2020 we'll have 86% of users on IPv6.

it strikes me that some group has dropped the ball; but which group?

I would say the ball was dropped in 1999, when the technical spec wasn't followed up with policy adjustments. The introduction of CIDR as a stop-gap measure in the early 90's meant changes in how IPv4 addresses were handed out. Once the IPv6 spec was finalized, there should have been another change. A new policy ensuring that those deploying IPv6 would get easier access to IPv4 addresses than those not deploying IPv6 could have made a difference. Did IANA drop the ball? Or were they simply following a policy set by policymakers, who had dropped the ball?

The last /8 in APNIC is being rationed as is the last /8 in RIPE. But those account for only about 2% of the total pool, not something that can give a strong incentive. Imagine if 30% of the IPv4 pool could have been handed out according to a policy set to give incentive to deploy IPv6. That didn't happen, and by the time IANA ran out of addresses, IPv6 deployment had hardly gotten started.

I think the problem now is that nobody knows how to set the right incentives to deploy IPv6. The benefits you get from deploying IPv6 at this time are not great, because only a minority of those you need to communicate with have IPv6 at all, and they still have IPv4 as well. Those who are hurt most by the lack of IPv6 deployment are those who don't have IPv4 addresses; those who can do something about the deployment are those who do have IPv4 addresses. It will have to get a lot worse before it starts to get better.

I find it interesting that 25% of people in the poll have chosen "When we build a new internet" as the answer as to when IPv6 will arrive.

One could argue that by deploying IPv6 we are building a new internet. Just like the previous internet was built on top of infrastructure originally intended to support telephone calls, the new internet will be built on top of infrastructure originally intended to support the old internet. But really this is just a play on words. What's more interesting is the games being played with peering. I get the feeling providers are in two camps: those who think getting in early on the IPv6 deployment means you get a better place in the hierarchy, versus those who think that whatever place you had in the IPv4 hierarchy is the place you are entitled to in the IPv6 hierarchy when you finally decide to get started with it. It will be interesting to see which of those camps "wins". And it could change the structure of the internet, because it is peerings that make up the internet.

I suspect some are joking but that others, like myself, have a gut feeling that the entire internet needs an overhaul.

I can think of plenty of other areas where an overhaul could be needed.

  • We need to get rid of protocols that can be abused for amplification attacks, or we need to squeeze a spoofing protection layer in between IP and UDP
  • We need to be able to track down the source of a flood of packets from the receiving end without involving administrators of intermediate routers. And we need to be able to push filters all the way across the internet to the source of those packets. And we need to achieve that while maintaining the principle of keeping all intelligence at the edge of the network. And all the while each intermediate router must only need a constant amount of memory to support this operation.
  • We need opportunistic end-to-end encryption with optional validation of the identity of the peer after the encrypted channel has been established. Making the validation optional is a key point to security.
  • We need to get rid of the overloading of meaning of IP addresses. Today IP addresses are related to your physical location, but they are simultaneously used to track reputation, and ISPs are enforcing limitations on what their customers can do with IP addresses belonging to the ISP.

about a month and a half ago
When will large-scale IPv6 deployment happen?

kasperd Re:Fuck IPv6 (305 comments)

I agree with this, 0.0.0.0.0 - 255.255.255.255.255 is much easier

That's it. I have now officially heard that suggestion too many times.

I have seen it come in two variations. Extending the IP address from four octets to five octets has been suggested frequently. It was funny when it was mentioned in the IPv4.1 spec published as an April Fools' joke a few years back. It was funny then because it was written as a suggestion by somebody with enough of a clue to include the diagrams making it blindingly obvious why it is a non-solution (one which would only be slightly more work to deploy than IPv6).

Another variation is the suggestion to increase the maximum for each octet from 255 to 999, to fully utilize all three digits. Increasing the range to 0-999 would give almost 40 bits of address space, slightly less than the extra octet, which would give exactly 40 bits. But how much address space do we really need? Calculations based on population growth and HD ratios have shown 45 bits to be on the safe side, and based on that, the recommendation to assign a /48 to each site out of the /3 assigned to IANA was approved.

But each of the two suggestions above gives us only about 40 bits, which is less than 45. But if we combine the two, we get almost 50 bits. That should be enough, right? Well, what we have discussed is only notation. The suggestion tends to be made by people who haven't bothered looking at what wire formats actually look like. The only exception was the IPv4.1 spec, which did specify a wire format (and that was one of the primary hints telling the reader that it was a joke. Another hint was the name 4.1 for something published April 1st. That the IP address was extended from 4 bytes to 4+1 bytes just made it extra fun.)

So if we were to accept the notation with five groups of numbers ranging from 0 to 999, what wire format could we use? The IPv4 wire format is a no-go, because there are not enough address bits. We could invent a new format. But even if we managed to come up with a new format that was obviously better than both IPv4 and IPv6, we would still have a 20-year deployment task to complete and a deadline that passed three years ago, which makes a new wire format a no-go as well. This leaves us with only one possible wire format to apply that notation to, which is IPv6.

Is such a contraption possible? Sure, have a patch. And as you can see, it works:

$ ssh 256.93.800.0.1 uname
Linux

And it can use familiar looking addresses such as 127.0.0.0.1 for localhost, 192.168.273.35.102 to address a host on your LAN, or 203.0.113.42.789 to address a host using 6to4 behind a router with IPv4 address 203.0.113.42.
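
For the curious, one purely illustrative way such a notation could map onto IPv6 is to pack each 0-999 group into 10 bits below a fixed prefix (the actual patch may well use a different mapping):

    import ipaddress

    def five_groups_to_ipv6(dotted: str, prefix: int = 0xfd << 120) -> ipaddress.IPv6Address:
        groups = [int(g) for g in dotted.split(".")]
        assert len(groups) == 5 and all(0 <= g <= 999 for g in groups)
        n = 0
        for g in groups:  # 5 groups x 10 bits = 50 bits
            n = (n << 10) | g
        return ipaddress.IPv6Address(prefix | n)

    print(five_groups_to_ipv6("256.93.800.0.1"))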

This may not be exactly what you had in mind, but it is as close as you can get when you missed the 1998 deadline for improving the official IPv6 spec.

about a month and a half ago
When will large-scale IPv6 deployment happen?

kasperd Re:IPv6 Addresses (305 comments)

I didn't write that list, but I can explain to what extent the points are true or not.

  1. With IPv6 you don't have to deal with NAT and other workarounds due to the shortage of IP addresses. This can lead to a simpler and cleaner network topology with IPv6. This makes the topology easier to learn for administrator and attacker alike. It also makes it easier for the administrator to secure. If the administrator forgets to put a firewall where one is needed, then on IPv4 they may have been saved by a NAT being in place. But in that case, leaking information about network topology would be the least of your concerns. Also, a NAT doesn't prevent an attacker from performing a traceroute into your network; they just have to wait for outgoing connections, which they can utilize to trace back into the network.
  2. It is true, the IPv6 stacks are not as well tested as IPv4 stacks. But you are not going to solve that problem by simply waiting. You need to give others incentive to move ahead with native IPv6 and get those stacks hardened. One way you could move ahead would be to keep your LAN IPv4 only and deploy translation at the edge of your network to connect to an IPv6 backbone. That will still give you many of the benefits of native IPv6 while giving others incentive to deploy IPv6 and get the stacks tested. Bear in mind that most of the weaknesses are link-local only. The implication is that enabling IPv6 on your backbone connection doesn't put you at risk, and disabling IPv6 on your backbone connection doesn't protect you against an insider attack, since IPv6 is enabled by default on every modern OS; for some, IPv4-only is no longer even an officially supported configuration.
  3. IPSec was originally developed as part of the IPv6 spec, and at some point IPSec support was mandatory in IPv6. IPSec did get backported to IPv4, where it is not mandatory. Moreover, in IPv6 it was later changed to optional but strongly recommended. This means that through the years the security advantage of IPv6 in this field has been reduced to the point of being almost exactly the same as IPv4. But the advantage was never that significant in the first place, because in spite of IPSec support being mandatory, the IPSec design is overly complicated and difficult to get right. Plus, it is not mandatory to perform any key distribution: two fully compliant IPv6 stacks with full IPSec support still communicate in the clear by default.

about a month and a half ago
When will large-scale IPv6 deployment happen?

kasperd Re:IPv6 Addresses (305 comments)

If the system complains that the passwords are similar, not merely identical, then it must be storing unhashed passwords.

Comparing the password you are changing from and the new password is trivial, since you have to type in both in order to change your password. If they accept your new password, they could store the old password encrypted using the new password as the encryption key. That way, every time you change your password, the current password can be used to decrypt the previous one, which can be used to decrypt the one before that, and so on. With that approach it is trivial to compare the similarity to every one of your older passwords.
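
A minimal sketch of that chain, assuming PBKDF2 for key derivation and Fernet from the Python cryptography package for the encryption (the storage layout is made up):

    import base64, hashlib, os
    from cryptography.fernet import Fernet

    def fernet_key(password: str, salt: bytes) -> bytes:
        raw = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return base64.urlsafe_b64encode(raw)

    def change_password(record: dict, old: str, new: str) -> None:
        # Encrypt the outgoing password under a key derived from the new
        # one; whoever knows the current password can walk the chain back.
        record["salt"] = os.urandom(16)
        key = fernet_key(new, record["salt"])
        record["prev_encrypted"] = Fernet(key).encrypt(old.encode())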

Or they could do like Cacadril suggested and compare the most obvious variations of your new password with stored hashes. This is, however, going to require a lot more CPU time. One could use a combination of the two approaches and store a salted hash of the current password plus an encrypted version of hashes of all the older passwords, with a weaker salting that remains constant per user and uses no iteration. You still can't extract the older passwords, even if you know the decryption key, but you can generate variations of the new password and efficiently check them against the decrypted hashes.

Or you can drop all of that complexity and simply check only for similarity with the most recent password and for exact matches further back.

about a month and a half ago
When will large-scale IPv6 deployment happen?

kasperd Re:IPv7 (305 comments)

I expect most people will just wait for IPv7

That would be pretty stupid. Moving from IPv6 to IPv7 would be a downgrade. IPv6 is so much better than IPv7 that even the people who wrote the IPv7 spec have given up on IPv7 and moved to IPv6.

about a month and a half ago
When will large-scale IPv6 deployment happen?

kasperd Re:Why IPv6? (305 comments)

I know about the IPs running out, that is 100% clear; but does IPv6 have any other benefits?

Do you need any other benefits? The workarounds currently used because of the shortage of IP addresses are causing problems. Those problems will go away once IPv6 is deployed. Most users have never seen an Internet without NAT. They simply haven't seen what the Internet is supposed to be capable of.

There are a few other changes. There was a little bit of cleanup in the protocol as well. The experience with IPv4 had shown a few things were more complicated than they needed to be. Those simplifications mean that hardware designed to route IPv6 packets can be slightly faster than hardware designed to route IPv4. Current performance measurements actually show a greater benefit from IPv6 than can be explained by those simplifications alone. Nobody really knows why there is that difference, because you only see a clear tendency when you look at the Internet at large; if you look at individual networks, there is too much variation due to other causes.

If you want to know if there are any more advantages, the answer is going to be too technical for the average user to understand.

about a month and a half ago
When will large-scale IPv6 deployment happen?

kasperd Re:We have large-scale deployment already (305 comments)

I would consider anybody choosing any option other than "before 2020" as misinformed.

Some of them are misinformed, but I think there is also some fraction of them who are deliberately spreading misinformation. I believe there are people who have an economic incentive to spread misinformation. For example, by pretending IPv6 isn't going to be deployed, you may be able to save some investments now and convince customers to choose your services even though you don't offer IPv6. Anybody who understands where the world is going would choose a vendor with dual stack support rather than an IPv4-only vendor.

The Internet is growing, as is the percentage of IPv6 deployment, and the growth of the Internet is even speeding up. If you could return every single IPv4 address to the available pool, all of them would be consumed again before 2020; that's how fast the Internet is growing.

Even though some people try to hold IPv6 deployment back, it just won't be possible. By 2020 there will be more users with IPv6 access than users without. I am thinking whoever chose the answers for the poll was misinformed enough to believe there is any way large-scale IPv6 deployment could be postponed.

about a month and a half ago
When will large-scale IPv6 deployment happen?

kasperd Re:What do you mean Large Scale Deployment? (305 comments)

That will happen whenever a major OS vendor ( apple, microsoft, or google ) or browser developer decides to default to IPv6 and failing over to IPv4 instead of the reverse.

All of them are using IPv6 by default and falling back to IPv4 if IPv6 does not work. They have been doing that for many years.

about a month and a half ago
AMD Preparing To Give Intel a Run For Its Money

kasperd Re:Just like Bulldozer? (345 comments)

Think of it this way, when you've worked on code that's 10 years old and you think "this would be so much better if we could throw it away and start from scratch" imagine that Intel thinks the same way with x86 only it's dealing with a 40 year long chain of incremental improvements.

At least AMD64 did do some preparations for ditching some of the cruft. The 64-bit mode of the AMD64 architecture left out some of the features of the original x86 design. If we can get the 16-bit BIOS interfaces replaced with 64-bit interfaces, then it would make sense for the next generation of CPUs to power on in 64-bit mode. After that, it won't be long before you can completely drop the 16- and 32-bit support from the CPUs. Support for 32-bit user mode on a 64-bit kernel may need to live a little longer, though.

about 2 months ago
AMD Preparing To Give Intel a Run For Its Money

kasperd Re:Just like Bulldozer? (345 comments)

Yup, and the BS about them being first to 64-bit...maybe in the consumer sector, but Intel, IBM and DEC all had 64-bit chips before the Athlon was even designed let alone shipped.

That is true. However, AMD was the first to make a 64-bit architecture that was x86 compatible. It was also the first 64-bit CPU in a price range acceptable to the average consumer. But most importantly, AMD designed an architecture so successful that Intel decided to make its own AMD-compatible CPU. Today Intel probably earns most of its money on CPUs using AMD's 64-bit design.

But if AMD now wants to go and build an entirely new design that is nothing like x86, it may very well be repeating the exact same mistake Intel made that let AMD64 get the lead.

By now it might be safe to ditch all 8, 16, and 32 bit backwards compatibility with the x86 family. But AMD64 compatibility is too important to ignore.

about 2 months ago

Submissions

Was NAT responsible for Skype outage?

kasperd kasperd writes  |  more than 2 years ago

kasperd (592156) writes "Skype has published a post-mortem that explains some details about the incident. It still leaves a few questions unanswered.

The outage was caused by overloaded supernodes; the article hints that less than one percent of Skype clients act as supernodes. If supernodes are this prone to getting overloaded, why did Skype not use more clients as supernodes?

The article says supernodes help to establish connections between regular nodes. Does that mean the supernodes are responsible for NAT hole punching?

Would the outage still have happened if none of the Skype clients were behind NAT? Or would a situation where all Skype clients had a direct Internet connection have meant less load on supernodes due to lack of need for hole punching and more nodes available to act as supernodes?"
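
For background, the hole punching a supernode would help with amounts to both NATed peers sending to each other's public address at the same time, so that each NAT opens a mapping for the other's packets; a minimal sketch (the rendezvous step, where the supernode tells each peer the other's public address, is elided):

    import socket

    def punch(local_port, peer_addr):
        """peer_addr: the peer's public (IP, port) as observed by the
        rendezvous server, i.e. the supernode."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("0.0.0.0", local_port))
        for _ in range(5):
            # The first outgoing packet opens a mapping in the local NAT
            # that the peer's packets can then come in through.
            s.sendto(b"punch", peer_addr)
        return s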

Journals

Secure wireless mice

kasperd kasperd writes  |  about 10 years ago

Most of you probably already know how annoying the wire on a mouse can sometimes be. That is why the wireless mouse was invented, and now I'm looking for one. But as with any other wireless equipment, security is an important issue. Sometimes these devices work over longer ranges than expected.

The possibility of sniffing the input is not my only concern. Authenticity is also important: I don't want anybody within a range of 100 m to be able to control my computer. So any product that doesn't do both encryption and MACs (message authentication codes) is out of the question.

It wouldn't be difficult to produce a secure product. Good ciphers and MACs exist, and key exchange can safely be done while the mouse is placed in the recharger. But finding a product that actually does this proves to be difficult.
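
For the record, a minimal sketch of what encryption plus MAC could look like for a mouse report, using AES-GCM (an authenticated mode, i.e. cipher and MAC in one primitive) from the Python cryptography package; the report format and key handling are made up:

    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)  # exchanged while docked in the recharger

    def seal_report(aead: AESGCM, counter: int, report: bytes) -> bytes:
        nonce = counter.to_bytes(12, "big")    # a counter also stops replayed reports
        return nonce + aead.encrypt(nonce, report, b"mouse-report")

    aead = AESGCM(key)
    packet = seal_report(aead, 1, b"\x01\x05\xfe")  # buttons, dx, dy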

I searched for wireless mice satisfying most of my needs (that is, optical wireless mice with at least three mouse buttons), and I picked five well-known manufacturers from the list. None of the information I could find online answered my questions, so I decided to contact the companies and ask. The results were depressing.

  • The first company had a wide range of wireless mice, but only one product with encryption. And even this product wasn't trustworthy, as it was based on proprietary algorithms. Security through obscurity is generally considered a sign of weakness, and is advised against in more than one place.
  • The second company did not know what encryption and MACs are, and did not consider them necessary.
  • The third company never replied to my email.
  • The fourth company replied to my email, but did not try to answer my questions. Instead I was referred to a reseller. The reseller had never heard about the product.
  • The fifth company did not provide any contact information on their webpage.

So I am starting to worry that maybe secure wireless mice simply do not exist. Where should I look for a secure wireless mouse? And if I find a manufacturer that can provide a good description of a secure product, how should I verify that the implementation actually matches the description?

Of course my considerations about wireless mice also apply to keyboards. A keyboard may in fact be even more sensitive than a mouse, and since I don't move my keyboard as much as I move my mouse, I have decided to stick with wired keyboards.
