
Security Focus Interviews Damien Miller

ScuttleMonkey posted more than 8 years ago | from the bread-and-butter dept.

Software 80

An anonymous reader writes "The upcoming version 4.3 of OpenSSH will add support for tunneling, allowing you to make a real VPN using OpenSSH without the need for any additional software. This is one of the features discussed in SecurityFocus' interview of OpenSSH developer Damien Miller. The interview touches on, among other things, public key crypto protocol details, timing-based attacks, and anti-worm measures."


Effective cryptography is a hard problem. (5, Informative)

Sheetrock (152993) | more than 8 years ago | (#14306914)

As suggested in the article, the better security gets, the more it will interfere with usability.

For example, if you create a VPN with this latest OpenSSH, a lossy network will hold up your traffic. Despite the fact that TCP/IP will try to continue operating with dropped packets, with OpenSSH if you miss one packet the loss cascades into succeeding packets until the client and server are able to resync or the packet is delivered. This accumulation of tolerances is not a problem with IPsec, which is designed cipherwise to work around occasional packet loss.

Most experts agree the product of the best cryptography will be indistinguishable from random noise. This means that it is difficult to share the benefits of compression with file encryption because random noise compresses very poorly, as anyone who attempts to archive their MP3s of today's artists will attest. Additionally, if you accidentally store your encrypted files amongst files containing random noise you run the risk of generating new data during decryption.

The secret is to understand the technology before you use the technology. The problem with encryption is twofold -- some people are overconfident in what they're using and either lose data or risk more than they would if they were fully informed, and others think it's too difficult a topic to broach and leave themselves open to exploitation by network explorers. Certainly when I was in the second category I became convinced of the problem once I saw tools like 'tcpdump' and 'ethereal'.

Re:Effective cryptography is a hard problem. (5, Insightful)

blahnana (124569) | more than 8 years ago | (#14306942)

Most experts agree the product of the best cryptography will be indistinguishable from random noise. This means that it is difficult to share the benefits of compression with file encryption


Surely not if you compress it and _then_ encrypt it?
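Right, and the reason the order matters is easy to demonstrate with nothing but Python's standard library. In this sketch, `os.urandom` stands in for well-encrypted ciphertext (an assumption justified by the "indistinguishable from random noise" point above; no real cipher is invoked):

```python
import os
import zlib

# Highly redundant "plaintext": compresses very well.
plaintext = b"the quick brown fox jumps over the lazy dog\n" * 1000

# Stand-in for ciphertext: a good cipher's output looks like random
# noise, and random noise does not compress.
random_noise = os.urandom(len(plaintext))

compressed_plain = zlib.compress(plaintext)
compressed_noise = zlib.compress(random_noise)

print(len(plaintext), len(compressed_plain))     # dramatic reduction
print(len(random_noise), len(compressed_noise))  # no reduction at all
```

So compress-then-encrypt can still shrink the data, while encrypt-then-compress gains nothing (and typically adds a few bytes of compression overhead).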

Swapping the order introduces other problems (0)

Harmonious Botch (921977) | more than 8 years ago | (#14307083)

You have to use randomly chosen types of compression. If you use the same type of compression repeatedly, it will tend to induce similarities into the plaintext, and thus the ciphertext is vulnerable if the eavesdropper can acquire multiple transmissions for a comparison attack. But why bother to compress? Bandwidth, memory, and disk space are cheap and getting cheaper, whereas governments are becoming more intrusive. Better yet, add some random garbage.

Re:Swapping the order introduces other problems (0, Informative)

Anonymous Coward | more than 8 years ago | (#14307489)

Wrong, wrong, wrong, wrong, wrong. Please, amateurs, this is one area where you can do the most harm by offering your "opinion" as fact. Please, whenever you find yourself wanting to tell people how smart you are about crypto, just post "Sorry, I'm a bit stupid about this, but I wanted to post" instead of whatever nonsense (like the above) you were going to choose.

Re:Swapping the order introduces other problems (2, Informative)

Thundersnatch (671481) | more than 8 years ago | (#14311123)

You're spouting complete nonsense. A secure block cipher in a secure mode of operation reveals nothing about the similarities between files. Look up CBC mode on Google - a large random initialization vector is used to ensure that identical (or similar) plaintext blocks encrypt completely differently. I also suggest a thorough reading of Applied Cryptography by Bruce Schneier.

OpenPGP, for example, uses gzip compression before encryption with every file. Yet PGP is widely considered very secure. Why? Because a secure mode of operation for the cipher (AES, 3DES, whatever) is used, with a random IV that ensures even identical files produce completely different ciphertext.
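The effect of the random IV is visible even with a deliberately insecure stand-in cipher. The sketch below uses a toy XOR "block cipher" (an illustration only; real systems use AES or the like) purely so the CBC chaining is easy to follow:

```python
import os

BLOCK = 16

def toy_block_encrypt(block, key):
    # NOT a real cipher -- a stand-in so the CBC chaining is visible.
    return bytes(b ^ k for b, k in zip(block, key))

toy_block_decrypt = toy_block_encrypt  # XOR is its own inverse

def cbc_encrypt(plaintext, key):
    assert len(plaintext) % BLOCK == 0
    iv = os.urandom(BLOCK)              # fresh random IV per message
    prev, out = iv, [iv]
    for i in range(0, len(plaintext), BLOCK):
        mixed = bytes(p ^ c for p, c in zip(plaintext[i:i + BLOCK], prev))
        prev = toy_block_encrypt(mixed, key)
        out.append(prev)
    return b"".join(out)

def cbc_decrypt(ciphertext, key):
    iv, rest = ciphertext[:BLOCK], ciphertext[BLOCK:]
    prev, out = iv, []
    for i in range(0, len(rest), BLOCK):
        mixed = toy_block_decrypt(rest[i:i + BLOCK], key)
        out.append(bytes(m ^ c for m, c in zip(mixed, prev)))
        prev = rest[i:i + BLOCK]
    return b"".join(out)

key = os.urandom(BLOCK)
msg = b"identical block!" * 4           # four identical plaintext blocks
c1, c2 = cbc_encrypt(msg, key), cbc_encrypt(msg, key)
print(c1 != c2)                         # different ciphertexts...
print(cbc_decrypt(c1, key) == msg)      # ...but the same plaintext back
```

Encrypting the same message twice yields entirely different ciphertexts, so an eavesdropper comparing transmissions learns nothing from repetition.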

Re:Effective cryptography is a hard problem. (0)

Anonymous Coward | more than 8 years ago | (#14307028)

Additionally, if you accidentally store your encrypted files amongst files containing random noise you run the risk of generating new data during decryption.

eh? Can you elaborate on this?

Re:Effective cryptography is a hard problem. (3, Funny)

HermanAB (661181) | more than 8 years ago | (#14307042)

"Grammer tip" - Looks like you need a "Speling tip" too. :-) Oh well, what the hell...

Re:Effective cryptography is a hard problem. (1)

TheLetterPsy (792255) | more than 8 years ago | (#14308930)

If you give up, you will never effect change.

Yes and no. (4, Interesting)

jd (1658) | more than 8 years ago | (#14307119)

You are correct, but only as far as you go. It is possible to compress first and then encrypt. Indeed, this is generally regarded as the superior method, precisely because the compression will disguise a lot of the information that cryptography will leave behind.


Secondly, cryptography is generally expensive on the CPU, but cryptographic processors exist. Motorola's processor unit (before they spun it off) had a very nice chip called the S1, which could encrypt or decrypt four streams in parallel. They had a very nice manual describing the complete protocol for communicating with it. Despite this, I have never seen a Linux driver for it. A pity, regardless of what you think of the S1, simply because it would have been a good opportunity to win over those who do use such chips.


TCP offload engines are also beginning to come into the picture. When TCP stacks didn't do a whole lot, it cost more to offload than you'd gain by having a co-processor. These days, with the multitude of QoS protocols defined in papers, the staggering range of TCP congestion algorithms in Linux, and the complex interleaving of the Netfilter layers, it almost has to be better to have all that shoved onto a network processor.


(Notice that I'm including more than just the basic operations here. It's the ENTIRE multitude of layers that is expensive. Linux supports Layer 7 filtering, virtual servers, DCCP. There's even an MPLS patch, if anyone cares to forward-port it to a recent kernel. IGMPv3 isn't cheap, cycle-wise. Nor is IPSec.)


There is also the crypto method to consider. RSA is expensive, but ECC and NTRU are considerably cheaper. SHA-1 is much slower than TIGER and is not clearly better. Whirlpool also beats SHA-1 on both speed and strength.


I'll also mention that OpenSSH is suboptimal in its implementation, and that there are patches out there to make it faster. I mentioned those the last time OpenSSH became a hot topic. Even if the patches themselves aren't "good enough", they are surely evidence that the code can be tightened a great deal in places. If nothing else, slow code is more vulnerable to DoS attacks.

Re:Yes and no. (1)

anethema (99553) | more than 8 years ago | (#14307365)

Wicked reply, all great points.

Re:Yes and no. (0)

Anonymous Coward | more than 8 years ago | (#14307518)

Both IGMPv3 and MLDv2 are intended to be very cheap for hosts. You keep a table of group memberships, and every few seconds you receive one packet from the network and send one in reply. If that's "too expensive" then you've got a serious problem somewhere.

And, the existence of IGMPv3 and MLDv2 demonstrates a very important limitation of off-load. All today's users have to do in order to get MLDv2 is to upgrade to a new enough OS, for example a recent Linux 2.6.x or Vista. With off-load they may just find that there's a service bulletin which says "Sorry, versions up to and including 2A of the hardware cannot support MLDv2". Your expensive "accelerator" is now a brick.

You make it sound as if off-load is new, something that only came into the picture now because this is exactly the moment when it's needed. That isn't true; you've been able to buy this type of thing for a decade or so, and each model has quickly become obsolete.

Re:Yes and no. (3, Interesting)

rodac (580415) | more than 8 years ago | (#14307528)

You don't understand the problem. The problem is not the TCP overhead (which is negligible), nor can it be solved by TCP offload engines.
The problem is that TCP over TCP just doesn't work, and it has well understood and well documented performance characteristics.

IPsec, which does work, and CIPE, and things like IPIP and GRE, all have in common that they do NOT use TCP as a transport. If you use TCP as the transport for the tunnel, and you transport TCP atop said tunnel, it will just not work.

When tail packet loss occurs in TCP there will be a retransmission timeout; this will stall the tunnel and wreak havoc with the retransmission clock of the TCP layer above it, causing even worse stalls. Something that was well documented and understood many years ago.

You just can not run TCP over TCP. It just doesn't work. An offload engine will not change this.
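A back-of-envelope model shows the compounding effect. The timer values below are illustrative, not taken from any real TCP implementation; the point is only that every retransmission the inner connection fires during an outer-layer stall is pure waste, while its backoff timer keeps inflating:

```python
def spurious_inner_retransmits(outer_stall, inner_rto=1.0):
    """Count how many times a tunnelled (inner) TCP connection times
    out and retransmits while the carrier (outer) TCP is stalled
    recovering a single lost packet.

    The inner RTO doubles on each timeout (exponential backoff), but
    none of those retransmissions can get through: the outer connection
    delivers nothing until its own recovery completes.
    """
    t, rto, count = 0.0, inner_rto, 0
    while t + rto < outer_stall:
        t += rto
        count += 1
        rto *= 2
    return count

# The longer the carrier stalls (its own backoff also doubles on
# repeated loss), the more wasted traffic and the larger the inner
# RTO left behind once the tunnel finally recovers.
for stall in (2.0, 8.0, 30.0):
    print(stall, spurious_inner_retransmits(stall))
```

This is the "meltdown" scenario: the two backoff clocks interact, so after recovery the inner connection may still sit idle waiting out a hugely inflated timeout.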

TCP over TCP (2, Insightful)

chosner (940487) | more than 8 years ago | (#14308075)

>> You just can not run TCP over TCP. It just doesnt work.

Actually, this is not true. TCP over TCP is a problem when you have packet delay and the backoff timers on the redundant layers cause a meltdown that stops your connection. When congestion is at a reasonable level, this will not happen. So TCP over TCP works fairly well if you don't have a near-capacity link.

SHA-1 Encryption - What's that? (0)

Anonymous Coward | more than 8 years ago | (#14311270)

Interesting comments, but SHA-1 isn't an encryption algorithm. It is a digest algorithm. Also, RSA and ECC aren't normally used for bulk encryption so speed is not a real issue there either. So, what is your point?

Re:SHA-1 Encryption - What's that? (1)

jd (1658) | more than 8 years ago | (#14311850)

SHA-1 is not used to encrypt, but digest algorithms are often used in block encryption as part of the chaining algorithm. Generally, the key used for encryption is varied between blocks. The algorithm used - the "mode" - is a significant determinant of the speed and strength of the encryption. Digests are often used in strong encryption modes to adjust the key between each block, essentially signing every block.


RSA and ECC are not used for bulk encryption because they are computationally too expensive. If they were computationally cheap enough, nobody would use two-stage encryption. It adds vulnerability risks. Rather, RSA and ECC are used to encrypt keys for block and stream algorithms, which increases the risks (you now have to contend with the possibility of flaws in ANY of the algorithms used) and increases overhead (you have to have the code for every algorithm stored somewhere AND you have to contend with having a multi-layer system). It doesn't help that keys are going to be changed every so often, which means you also have to pick the right algorithm for every packet.


My point? I'd have thought that obvious. The entire process is unnecessarily slow and unnecessarily at risk of vulnerabilities.
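The digest-based re-keying idea described above can be sketched in a few lines with hashlib. This illustrates the general technique only (deriving a distinct key per block from a master key), not any specific standardized mode:

```python
import hashlib

def per_block_keys(master_key, n_blocks, key_len=16):
    """Derive a distinct key for each block by hashing the master key
    together with the block index. Purely illustrative of using a
    digest inside a chaining scheme."""
    keys = []
    for i in range(n_blocks):
        # Hash (master || block counter); truncate to the cipher's key size.
        h = hashlib.sha1(master_key + i.to_bytes(8, "big")).digest()
        keys.append(h[:key_len])
    return keys

keys = per_block_keys(b"master secret", 4)
print(len(set(keys)))   # every block gets its own key
```

The derivation is deterministic given the master key, so both ends compute identical per-block keys, yet no two blocks are encrypted under the same key.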

Re:Effective cryptography is a hard problem. (4, Informative)

interiot (50685) | more than 8 years ago | (#14307124)

the better security gets, the more it will interfere with usability
What does that have to do with TCP-over-SSH? Secure or not, TCP-over-TCP is always considered harmful [sites.inka.de] (PDF [www.ispl.jp] ).

On the other hand, if TCP-over-TCP is your only option (eg. due to the lame firewall my employer set up), then SSH is a great option.

But what does that have to do with increasing security again?

NEXT ARTICAL (-1, Offtopic)

Anonymous Coward | more than 8 years ago | (#14307203)

More Delays for Ender Movie
Posted by ScuttleMonkey in The Mysterious Future!
from the wiggin-out dept.
  Arramol writes "IGN reports that difficulties in hammering out a screenplay have resulted in more delays for the Ender's Game movie. Despite attempts by several teams of writers, no script has yet been written that meets necessary standards in the minds of Warner Brothers or author Orson Scott Card. The latest plan involves an entirely new script written by Card himself."
This story is currently under construction.

( Read More... )

Re:NEXT ARTICAL (-1, Redundant)

Anonymous Coward | more than 8 years ago | (#14307419)

thanks dude!

Re:Effective cryptography is a hard problem. (1)

johnw (3725) | more than 8 years ago | (#14307543)

Further tip - "Effect" and "Affect" are both verbs, but they don't mean the same thing. ("Effect" is also a noun.)

Re:Effective cryptography is a hard problem. (1)

QuietLagoon (813062) | more than 8 years ago | (#14308281)

Informative? It looks more like "funny" to me.

To wit: Additionally, if you accidentally store your encrypted files amongst files containing random noise you run the risk of generating new data during decryption. Didn't anyone read this post before they moderated it?

Re:Effective cryptography is a hard problem. (1)

tdemark (512406) | more than 8 years ago | (#14308466)

Most experts agree the product of the best cryptography will be indistinguishable from random noise

So you are saying it will be written in Perl?

Re:Effective cryptography is a hard problem. (1)

gkuz (706134) | more than 8 years ago | (#14308903)

Grammer tip: 'Effect' is used as a noun. 'Affect' is used as a verb.

ef-fect tr.v. 1. To bring into existence. 2. To produce as a result. 3. To bring about.

af-fect n. 1. Feeling or emotion, especially as manifested by facial expression or body language: "The soldiers seen on television had been carefully chosen for blandness of affect" (Norman Mailer).

Re:Effective cryptography is a hard problem. (0)

Anonymous Coward | more than 8 years ago | (#14312333)

Since it was mentioned that the vpn feature supports layer 2/3, non-tcp packets can be forwarded as well as tcp packets.

Re:Effective cryptography is a hard problem. (0)

Anonymous Coward | more than 8 years ago | (#14315134)

The secret is to understand the technology before you use the technology.

In your case, the secret should be to understand the technology before you start spouting off meaningless drivel. You are one clueless SOB.

Re:Effective cryptography is a hard problem. (1)

HermanAB (661181) | more than 8 years ago | (#14315461)

You may effect change - verb. So much for your smart quote. English is a complex language...

Part 1 of the Interview (0, Redundant)

uchihalush (898615) | more than 8 years ago | (#14306928)

Could you introduce yourself?

Damien Miller: I am one of the developers of OpenSSH and OpenBSD. I have been working on OpenSSH since starting the project to port it to other platforms (initially Linux) back in 1999, but found myself working more and more on the native OpenBSD version of OpenSSH and on the OpenBSD operating system itself as time went on. I also maintain a couple of other free software projects, most notably a collection of NetFlow tools (pfflowd, flowd and softflowd).

The upcoming OpenSSH version 4.3 will add support for tunneling. What type of uses is this feature suited for?

Damien Miller: Reyk and Markus' new tunneling support allows you to make a real VPN using OpenSSH without the need for any additional software. This goes well beyond the TCP port forwarding that we have supported for years - each end of an ssh connection that uses the new tunnel support gets a tun(4) interface which can pass packets between them. This is similar to the type of VPN supported by OpenVPN or other SSL-VPN systems, only it runs over SSH. It is therefore really easy to set up, and it automatically inherits the ability to use all of the authentication schemes supported by SSH (password, public key, Kerberos, etc.)

The tunnel interfaces that form the endpoints of the tunnel can be configured as either a layer-3 or a layer-2 link. In layer-3 mode you can configure the tun(4) interfaces with IP or IPv6 addresses and route packets over them like any other interface - you could even run a dynamic routing protocol like OSPF over them if you were so inclined. In layer-2 mode, you can make them part of a bridge(4) group to bridge raw ethernet frames between the two ends. A practical use of this might be securely linking back to your home network while connected to an untrusted wireless net, being able to send and receive ICMP pings and to use UDP-based services like DNS.

Like any VPN system that uses a reliable transport like TCP, an OpenSSH tunnel can alter packet delivery dynamics (e.g. a dropped transport packet will stall all tunnelled traffic), so it probably isn't so good for things like VoIP over a lossy network (use IPsec for that), but it is still very useful for most other things.

Some companies have included crypto features in their hardware; for example, Intel included a PRNG in some chipsets, and VIA bundled a full hardware set of crypto functions in its recent CPUs. How and when can OpenSSH take advantage of specific types of hardware like these?

Damien Miller: OpenSSH depends on OpenSSL for cryptographic services and therefore depends on OpenSSL to take advantage of hardware facilities. On OpenBSD at least, this support is seamless - OpenSSL has hooks to directly use Via Padlock instructions (which are amazingly fast) or go via the crypto(4) device to use co-processors like hifn(4) or ubsec(4). On other operating systems, OpenSSL needs some application support to tell it to load "engine" modules to provide access to hardware services. Darren Tucker has posted patches to portable OpenSSH to get it to do this, but we haven't received any test reports back yet.

Why did you increase the default size of new RSA/DSA keys generated by ssh-keygen from 1024 to 2048 bits?

Damien Miller: Firstly, increasing the default size of DSA keys was a mistake (my mistake, corrected in the next release) because unmodified DSA is limited by a 160-bit subgroup and SHA-1 hash, obviating most of the benefit of using a larger overall key length, and because we don't accept modified DSA variants with this restriction removed. There are some new DSA standards on the way that use larger subgroups and longer hashes, which we could use once they are standardized and included in OpenSSL. We increased the default RSA key size because of recommendations by the NESSIE project and others to use RSA keys of at least 1536 bits in length. Because host and user keys generated now will likely be in use for several years, we picked a longer and more conservative key length. Also, 2048 is a nice round (binary) number.

Do you plan to add any other algorithm to generate/exchange keys? For example, why didn't you include an implementation of ECC, used by the NSA?

Damien Miller: ECC (Elliptic Curve Cryptography) has some speed and key size advantages, but there are two impediments to using it. First, no ECC key exchange method has been specified for the SSH protocol. This isn't too much of a problem, as the protocol has a great extension mechanism that allows us to define new methods without breaking other implementations or having to go begging to the IANA for a number reservation. The second reason is more of a killer: many ECC methods are patented. The NSA made the press recently for licensing these patents, something that we have neither the means nor the desire to do. There are ECC methods that are not patented, but the whole area is a minefield that we don't really want to navigate. Also, some of the patented ECC methods are the optimizations that give ECC its performance advantage. On modern machines, the key exchange isn't much of a delay anyway, and it can often be avoided by using the connection multiplexing support that has been in OpenSSH since 3.9 (reusing the one SSH connection for multiple commands, file transfers or login sessions).

The recent version 4.2 "added support for the improved arcfour cipher modes from draft-harris-ssh-arcfour-fixes-02. This improves the cipher's resistance to a number of attacks by discarding early keystream output". Could you tell us something more?

Damien Miller: Remember that RC4 is a stream cipher, generating a stream of random-looking bytes (based on the key you feed it) that you XOR with the data that you want to encrypt. Fluhrer, Mantin and Shamir found that the early keystream can be correlated with the original key. This unfortunate property may be used to construct an attack that recovers the original key. In its strongest form, this attack is devastating (it is the basis of the 802.11 WEP crack, for example) - fortunately the use of RC4 in the SSH protocol has been better engineered, but it still needed to be fixed. An easy and computationally cheap way to avoid this attack is to simply discard this early keystream. These new cipher modes discard the first 1.5KB of keystream. This doesn't slow down the cipher at all, so these modes are recommended for people who want to use a faster, but weaker, cipher than AES. I.e. use these in favour of the original 'arcfour' cipher. A future release of OpenSSH will probably remove the old method from the default list of accepted ciphers.
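The keystream-discard fix described in the interview is simple enough to sketch. RC4 itself is a public algorithm, and the "improved" arcfour modes amount to throwing away the first 1536 bytes (the 1.5KB mentioned above) of keystream before any real data is touched. A toy illustration (RC4 should not be used for anything new):

```python
from itertools import islice

def rc4_keystream(key):
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = 0
    while True:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        yield S[(S[i] + S[j]) % 256]

def arcfour_encrypt(key, data, drop=0):
    ks = rc4_keystream(key)
    # The "improved" modes simply discard the early, key-correlated
    # keystream before using any of it.
    for _ in islice(ks, drop):
        pass
    return bytes(d ^ k for d, k in zip(data, ks))

key, msg = b"secret key", b"attack at dawn"
ct = arcfour_encrypt(key, msg, drop=1536)
# A stream cipher decrypts by XORing with the same keystream again:
print(arcfour_encrypt(key, ct, drop=1536) == msg)
```

The discard costs a fixed 1536 PRGA steps at setup and nothing per byte afterwards, which is why the fixed modes run at the same speed as plain arcfour.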

Thanks guys (4, Informative)

pchan- (118053) | more than 8 years ago | (#14306952)

OpenSSH just keeps getting better. Not just a great shell client and server, but support for multiple streams, secure tunnels, SCP, SFTP, every authentication method you could want, and finally VPN (the next logical extension). OpenSSH ships with every Linux distribution I can name (well, except embedded ones), the BSDs, and MacOS, and is available for Windows (under Cygwin) and every other major UNIX and UNIX-like OS out there. The code is all available to anyone for any purpose with no real restrictions (other than giving some credit to the developers), so you could include it in any app you make, regardless of license (GPL included). Thanks, everyone who works on this valuable tool. I think I'll go buy a T-shirt [openbsd.org]

FINALLY! (2)

dteichman2 (841599) | more than 8 years ago | (#14306969)

Thank you devs! I've been waiting forever for the ability to do VPN like that. /me builds shrine

Hacker Summary (5, Informative)

this great guy (922511) | more than 8 years ago | (#14307012)

For those hackers who are already familiar with the forwarding features of ssh (the -L, -R and -D options), and who are wondering what the hell this new "support for tunneling" is, here is a hacker summary. Quoting TFA:

[This] new tunneling support allows you to make a real VPN using OpenSSH without the need for any additional software. This goes well beyond the TCP port forwarding that we have supported for years - each end of a ssh connection that uses the new tunnel support gets a tun(4) interface which can pass packets between them.

Tun(4) interfaces are indeed very convenient. That's all, folks!

Re:Hacker Summary (3, Insightful)

interiot (50685) | more than 8 years ago | (#14307129)

Holy cow, that's very convenient indeed. Though, most likely this will only make IT firewall admins scowl even more at the mention of SSH forwarding.

Great detailed article - way to go OpenSSH'ers (1)

xmas2003 (739875) | more than 8 years ago | (#14307033)

As someone who uses and deploys OpenSSH in a fairly large environment as part of my 'day job', hats-off to the OpenSSH team. Great interview by Federico Biancuzzi (who apparently is a freelancer) as some nice questions were asked and some interesting, detailed answers were provided by Damien - this is not your usual fluff writeup - RTFA highly recommended.

telnet forever! (5, Funny)

Gravis Zero (934156) | more than 8 years ago | (#14307055)

$ telnet 293.myremotepc.com
login: mr_moo
password: moowoo
> lynx slashdot.org


ssh is great and all but telnet is secure enough for me as far as __ALL_YOUR_BASE_ARE_BELONG_TO_US__ wha? who typed that? what's __H4X0RZ_4EVA!__

CONNECTION TERMINATED.

Re:telnet forever! (-1, Troll)

Anonymous Coward | more than 8 years ago | (#14307085)

gravis, you are a fucking moron.

Re:telnet forever! (-1, Redundant)

Anonymous Coward | more than 8 years ago | (#14307583)

not cool.

Re:telnet forever! (0)

Anonymous Coward | more than 8 years ago | (#14308594)

Who fucking modded this up??! Lame.

Re:telnet forever! (1)

Guido von Guido (548827) | more than 8 years ago | (#14309057)

I believe this person works for one of my customers.

Tunneling with servers (1)

XNormal (8617) | more than 8 years ago | (#14307077)

This new tunneling mode requires upgrading both the client and server. It should be possible to get more-or-less the same functionality using only client-side support: capture the packets sent to the tun interface, decode individual TCP streams (similar to slirp [sourceforge.net] ) and convert them to port forwarding requests compatible with old servers.

Re:Tunneling with servers (1)

Phatmanotoo (719777) | more than 8 years ago | (#14307407)

Except that older sshds don't have the ability to honor requests to forward to any random port on the fly, AFAIK. You specify which ports you're gonna forward at connection time.

Re:Tunneling with servers (1)

!equal (938339) | more than 8 years ago | (#14307618)

ssh's "-D" option lets you do "dynamic port forwarding."

I disagree on one point. (4, Interesting)

Z00L00K (682162) | more than 8 years ago | (#14307135)

There is actually a point in locking out (blacklisting) IP addresses from which brute-force attempts originate, since those bots often try one site at a time and scan for known logins/passwords. It isn't that common for an attacker to use several different sources at the same time when attacking a site, unless it's a DoS attack.

Blacklisting will at least make it harder for stupid bots.

Re:I disagree on one point. (1)

MichaelSmith (789609) | more than 8 years ago | (#14307172)

Blacklisting will at least make it harder for stupid bots.

Getty does it, and for good reasons. I don't see why sshd should not.

Re:I disagree on one point. (2, Informative)

EngMedic (604629) | more than 8 years ago | (#14307239)

For those of you in search of a solution to exactly this problem that doesn't involve iptables hackery or whatnot, check out DenyHosts: http://denyhosts.sourceforge.net/ . It's a cronjob/daemon that watches ssh logs and updates hosts.deny based on rules you specify. Simple, quick, and it gets rid of most of the annoying sshd bots.
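The core of what such a tool does fits in a few lines. A sketch (the log format and threshold here are illustrative; real sshd log lines vary by syslog configuration, and a real tool also handles whitelisting and entry expiry):

```python
import re
from collections import Counter

# Sample OpenSSH auth-log lines (format varies by system).
LOG = """\
sshd[811]: Failed password for invalid user admin from 10.0.0.7 port 4022 ssh2
sshd[811]: Failed password for invalid user admin from 10.0.0.7 port 4023 ssh2
sshd[811]: Failed password for root from 10.0.0.7 port 4025 ssh2
sshd[812]: Accepted publickey for alice from 192.168.1.5 port 50022 ssh2
sshd[813]: Failed password for bob from 192.168.1.9 port 50100 ssh2
"""

FAILED = re.compile(r"Failed password for .* from (\S+) port")

def blacklist(log, threshold=3):
    """Return hosts.deny-style lines for IPs with too many failures."""
    counts = Counter(m.group(1) for m in FAILED.finditer(log))
    return [f"sshd: {ip}" for ip, n in counts.items() if n >= threshold]

print(blacklist(LOG))   # only the repeat offender gets blocked
```

One legitimate typo from a real user stays under the threshold; a bot hammering the box crosses it quickly.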

Re:I disagree on one point. (1)

neomunk (913773) | more than 8 years ago | (#14310412)

Mod parent up.

That is a handy piece of code you've pointed out there.... thanks.

kick arse vpn (3, Informative)

marcushnk (90744) | more than 8 years ago | (#14307285)

Anyone seen this before?:
http://www.hamachi.cc/ [hamachi.cc]

Looks like a better way of doing VPN... though ssh with built-in VPN is going to be nice...

it's good indeed (0)

Anonymous Coward | more than 8 years ago | (#14307362)

solid and documented (!) security architecture,
built-in NAT-to-NAT tunnelling,
zero-configuration,
slick gui (where available),
stable, supported and rapidly maturing

it is really f$cking amazing

Like OpenVPN, only proprietary (1)

Phatmanotoo (719777) | more than 8 years ago | (#14307463)

It looks like a UDP-based, SSL-based VPN, only it uses the "third server" trick to get NAT-to-NAT traversal. From the site:
How Hamachi Works: Hamachi is a UDP-based virtual private networking system. Its peers utilize the help of a 3rd node called a mediation server to locate each other and to bootstrap the connection between themselves. The connection itself is direct, and once it's established no traffic flows through our servers.

Looks nice, but nothing spectacularly new. It might be handy if you need to set up a VPN through NAT-to-NAT, but I'm sure there are many other (open source) systems which achieve the NAT-to-NAT thing.

Since I don't need NAT-to-NAT, my ssl VPN of choice is OpenVPN. Moreover, it can be used over udp or tcp.

Re:Like OpenVPN, only proprietary (1)

M'Barr (149352) | more than 8 years ago | (#14310026)

A nice plus of OpenVPN is HTTP proxy support - you can tunnel through an HTTP proxy. The worst networks I've been on lock down everything but HTTP, and that goes through a proxy server. No arbitrary UDP packets going out...

OpenVPN and NAT-to-NAT (1)

Phatmanotoo (719777) | more than 8 years ago | (#14307517)

I couldn't resist doing a quick search... and here it is:
nat-traverse [sourceforge.net]
They even give specific examples of how to use it in combination with OpenVPN.

Moreover, this technique looks like it should work with any kind of NAT, whether full-cone, restricted-cone, or symmetric. On the other hand, the "third-node" (mediator) technique will not work with symmetric NAT.

Hamachi and NAT-to-NAT (1)

apankrat (314147) | more than 8 years ago | (#14307621)

Moreover, this technique looks like it should work with any kind of NAT,

Looks can be deceiving. Hamachi's main strength IS its NAT traversal capabilities. In addition to symmetric, cone-this, cone-that types, it supports traversing a handful of completely obscure NAT types. Like reverse sequential NAT (external ports are allocated in decreasing order), burst overloaded NAT (ports are incremented in random increments), and random port NAT. Statistically it can connect 95% of all UDP-capable peers. The rate of standard NAT traversal techniques (including nat-traverse thingy) is about 80%.

And, yes, I am involved with the Hamachi project, so I do know what I am talking about.
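To make the difference between those NAT types concrete, here is a toy port-prediction helper of the kind a traversal engine builds on (entirely illustrative; Hamachi's actual logic is proprietary and surely far more involved):

```python
def predict_next_port(observed):
    """Guess the next external port a NAT will allocate, from the ports
    observed on two or more probe packets. Handles the easy cases:
    fixed mapping (cone-style), sequential up, sequential down.
    Random-increment NATs defeat this, which is why they are hard."""
    if len(set(observed)) == 1:
        return observed[0]                # cone-style: mapping is reused
    deltas = {b - a for a, b in zip(observed, observed[1:])}
    if len(deltas) == 1:                  # constant stride, up or down
        return observed[-1] + deltas.pop()
    return None                           # no pattern: fall back to a relay

print(predict_next_port([4312, 4312, 4312]))  # cone
print(predict_next_port([4312, 4313, 4314]))  # sequential
print(predict_next_port([9008, 9007, 9006]))  # reverse sequential
print(predict_next_port([4312, 7781, 5129]))  # random allocation
```

The parent's point is that real traversal engines go well beyond this sketch, recognizing burst and random allocators statistically rather than giving up at the first non-constant stride.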

Re:kick arse vpn (2, Informative)

gfilion (80497) | more than 8 years ago | (#14308708)

Anyone seen this before?: http://www.hamachi.cc/ [hamachi.cc]

Loos like a better way of doing VPN.. though ssh with in built vpn is going to be nice...

Here's my not so humble opinion about Hamachi:

Software review: Hamachi [filion.org]

In short: some good, some bad, some really great, some horrible.

Your review is strange, to say the least (1)

apankrat (314147) | more than 8 years ago | (#14310777)

.. yes, a public, routable, full blown, IP address.

No, it is not. It's exactly the opposite. The address is private and globally UNroutable.

Who the fuck do they think they are distributing IP addresses like that?

Hamachi uses 5.0.0.0/8 for *private* networking. We are not distributing Internet addresses; we are distributing IPs used in Hamachi's own routing domain, which of course is fully isolated from the Internet.

The only problem Hamachi can run into later on is if IANA starts assigning IPs from this subnet to Internet nodes. These nodes will need to employ some creative routing to get Hamachi going. That's it, no dead end. No broken Internet.

IPv6 is not an option. At least not now. Try talking your parents through setting up an IPv6 stack on Windows, and then making the poor box work again.

PS: In case anyone has any doubts - I'm involved with the Hamachi project. /apankrat

Re:kick arse vpn (2, Insightful)

Wesley Felter (138342) | more than 8 years ago | (#14310831)

"Whenever someone thinks that they can replace [IPSec] with something much better that they designed this morning over coffee, their computer speakers should generate some sort of penis-shaped sound wave and plunge it repeatedly into their skulls until they achieve enlightenment." -- Peter Gutmann [auckland.ac.nz]

Re:kick arse vpn (1)

apankrat (314147) | more than 8 years ago | (#14311252)

Wesley, I am fully aware of this quote. Hamachi was not designed 'over the morning coffee'. Please have a look at http://hamachi.cc/security [hamachi.cc] .

Alex (ap@hamachi.cc)

Re:kick arse vpn (1)

Wesley Felter (138342) | more than 8 years ago | (#14311355)

Hamachi's security looks pretty good, but I just couldn't resist using the quote. If Hamachi is as secure (and thus as complex) as IPSec, why isn't it IPSec? And even so, Hamachi's protocols and code haven't gotten as much peer review as IPSec. Rather than reinventing the wheel, why not put your talents into (for example) designing a usable opportunistic mode for IPSec?

Re:kick arse vpn (1)

apankrat (314147) | more than 8 years ago | (#14311582)

Hamachi is a second attempt at a zero-conf system.

I dealt with IPsec *very* extensively in the past, and in the first revision we were in fact using IKE for p2p and SSL for client-server security. The issue was essentially that IKE was overkill and SSL added 4 extra messages to the login sequence. The former affected the development schedule and the latter affected server performance under load (yes, I am aware of HW SSL acceleration; it was not an option at that moment).

For the second revision we stripped down all unused IKE features (like proposals, notification messages, etc), at which point it started to look more like JFK. We then reused this security handshake for replacing SSL and also added support for PSK (preshared key) authentication. And that's what exists at the moment.

Bulk p2p traffic is essentially ESP in AES256-HMAC-SHA1 mode. I think the only difference is that the IV goes *after* the data and not in front of it. Same padding method, replay protection, etc.

Basically we were going after an IKE/SSL variant that is highly optimized for our specific needs. We didn't do it from scratch and we didn't mix security paradigms from different protocols. We combined independent parts, so technically we didn't re-invent the wheel; we assembled it from prebuilt parts.

I completely agree that it needs a peer review. I would be more than willing to engage in one if there are people interested in doing this.

Requires root at both ends (0)

Anonymous Coward | more than 8 years ago | (#14307311)

This requires you to be the admin at both ends to install this new version, which isn't always possible. If it were, you could simply run pppd over an ssh session, creating a ppp0 interface on each end (I do exactly this at the moment, but obviously may investigate tun0 further if I can actually find an advantage). This has been possible since they added pty support to the pppd daemon; otherwise you'd have to kludge it with silly looped-back cables from ttyS0 -> ttyS1 physically :)

With pppd at one end you can use slirp at the remote end without needing root or an upgrade of sshd; with this you gain a NATted VPN.

My reason for using this is to connect out to gain an Internet interface from behind SOCKS: my socksified ssh client locally, plus root on an Internet-based host, gives me a high-speed ppp0 for direct access for things like online gaming or poorly written applications that can't cope with a SOCKS firewall or squid proxy. I'm not convinced companies will open an ssh hole into a host on their network which will then gain multiple tun* interfaces for external VPN staff, especially when the norm is IPsec. For connections IN to work, a single ssh session is obviously all you need anyway, unless you're some kind of Windows user or something that needs a host of open ports and GUI things running to be able to work.
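For the curious, the pppd-over-ssh trick described above can be sketched roughly like this (the hostname, addresses, and exact pppd option set are placeholders; both ends need root and pppd installed):

```shell
# Local end: run pppd on a pty whose "modem line" is an ssh session
# that starts a second pppd on the remote host. 10.0.0.1/10.0.0.2
# are arbitrary private addresses for the two ppp0 interfaces.
pppd updetach noauth pty \
    "ssh root@remote.example.com pppd nodetach notty noauth" \
    10.0.0.1:10.0.0.2

# Then route whatever traffic you like over the new interface, e.g.:
# route add -net 192.168.10.0/24 ppp0
```

The TCP-over-TCP performance caveats apply to this setup just as much as to any SSH-based tunnel.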

SSH tunnels is a stupid idea. (3, Interesting)

rodac (580415) | more than 8 years ago | (#14307498)

SSH tunnels and VPNs can already be done today using ssh and pppd; I have used this for many years. It is still a stupid idea and useless for anything other than toy networks.

SSH uses TCP as its transport. You should NOT transport TCP/IP on top of TCP. TCP over TCP has well-known and well-documented poor performance characteristics.

Google for TCP over TCP to find any number of research papers on why this just doesn't work, or try running IP traffic yourself across an SSH tunnel and find out first hand why TCP over TCP just doesn't work well.

Maybe, I hope, they plan to add a new SSH mode that uses UDP and will use UDP-SSH as the basis for the tunnel. That would work. But you can never use more than one single TCP layer in any stack. If not (i.e. they plan to tunnel traffic atop a TCP ssh session), it will fail and they will learn.

VPN over TCP (1, Redundant)

apankrat (314147) | more than 8 years ago | (#14307688)

Running a VPN over TCP is bad for another major reason, which seems to completely escape the attention of people promoting this type of VPN.

TCP is an UNAUTHENTICATED sessioned transport, and the state of the entire VPN DEPENDS on it. Anyone capable of closing the TCP session can bring the VPN down. Moreover, the VPN nodes may not even get a chance to exchange a single packet if an attacker proactively resets all connection attempts.

This is drastically different from standard VPNs that use IP or UDP for data delivery: in order for a packet to alter VPN state, it must first be authenticated.

Essentially, TCP-based VPNs are not resilient. They might be OK for occasional use, but deploying them in production is far too risky.

Re:SSH tunnels is a stupid idea. (0)

Anonymous Coward | more than 8 years ago | (#14356577)

You didn't get the point.

OpenSSH tunneling is not intended for permanent VPN connections.

It's probably the simplest way to establish "ad hoc" and temporary layer 2/3 tunnels.

Yes, you can always drop TCP connections, and thus you can drop SSH VPN connections. This is true for all TCP traffic; OpenBSD even provides a tool, tcpdrop(8), in the default install.

But it is not a "security" problem if you _understand_ the scenario for SSH VPNs (or even SSH) and if you don't assert that this is intended as a replacement for fixed VPNs. And btw., you could also authenticate your TCP sessions with TCP MD5 signatures (RFC 2385).
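As a concrete illustration of how routine this is, OpenBSD's tcpdrop(8) just takes the two endpoints of an established session (the addresses and ports below are made up):

```shell
# Drop the TCP session between these two endpoints; any SSH VPN
# riding on that session goes down with it.
tcpdrop 192.0.2.10 22 198.51.100.7 40123
```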

SSH VPN tunneling is great for legitimate reasons.

Indeed, PPP over SSH is the useless toy, because it's much more complicated and you could use something better with less effort.

OpenSSH tunneling is focused on simplicity, and many network administrators are happy about this powerful and easy-to-use tool. For everything else, use IPsec. OpenBSD is doing some great work on improving their IPsec support, for example with the new ipsecctl(8) tool.

Thanks!

sections of interview are hidden / commented (1)

mieses (309946) | more than 8 years ago | (#14307516)

view source on the 2nd page of the interview.

Re:sections of interview are hidden / commented (2, Interesting)

bodger_uk (882864) | more than 8 years ago | (#14307910)

Those hidden bits in full:

Another statistic suggests that more than 80% of the SSH servers on the Internet run OpenSSH. I'm wondering if you have ever verified which version they are running, and what the average behaviour of an OpenSSH administrator is. Do people update the server as soon as a new release is available?

Damien Miller: Funny you mention this, we just completed another version survey with the assistance of Mark Uemura from OpenBSD Support Japan. The results of this should be going up on OpenSSH.com [openssh.com] soon.

I don't have detailed OpenSSH version histories for usage surveys before last year's. Certainly the use of paleolithic versions (such as 2.x) is very infrequent, but beyond this it is difficult to tell how quickly users update - many vendors will keep relatively ancient versions (such as 3.1p1) on life-support with spot security fixes. This will avoid known security problems, but it doesn't give their users the benefit of any of the proactive work that we do, nor any of the new features.

It is worth noting that OpenBSD, which has a very conservative policy on its stable trees, typically updates supported OpenBSD releases to the latest OpenSSH version when it is released.

Being very popular means also being a good platform for a worm. Did you adopt any specific measures to fight automated attacks?

Damien Miller: Privilege separation alone probably makes a worm targeting a bug in sshd impractical. An attacker would need to break into the unprivileged sshd process that deals with network communications and, because this just gives them access to an unprivileged and chrooted account, then exploit a second vulnerability to either break the privileged monitor sshd or escalate privilege via a kernel bug. This would add a fair amount of complexity, fragility and size to a worm - it would probably need to implement a fair chunk of the SSH protocol just to propagate.

We also implemented self re-execution at the c2k4 Hackathon. This changes sshd so that instead of forking to accept a new connection, it executes a separate sshd process to handle it. This ensures that any run-time randomizations are reapplied to each new connection, including ProPolice/SSP stack canary values, shared library randomizations, malloc randomizations, stack gap randomizations, etc.

Without re-exec, all sshd child processes would share the same randomizations. This would allow an attacker to exhaustively search for the right offsets and values for their exploit by making many connections (millions probably) to the server. With re-exec, each time they connect the values will all be different so there is no guarantee that they will ever stumble upon the right combination.

Another security improvement, just introduced in openssh-4.2 was the "zlib@openssh.com" compression method. This was an idea that Markus Friedl had after the last zlib vulnerability was published.

The SSH protocol has supported zlib compression for a long time, but the standard "zlib" protocol method requires this to be started early in the protocol: after key exchange, but (critically) before user authentication successfully completed. This exposes the compression code to unauthenticated users.

Our solution is to define a new compression method that still performs zlib compression, but delays its start until after user authentication has finished, so only authenticated users get to see it. This is another significant reduction in attack surface with effectively zero performance impact. This also makes the writing of a worm that targets the zlib code in OpenSSH impossible.

Did you develop any measure to fight timing based attacks?

Damien Miller: There are two classes of timing attacks, one of which matters and the other is not so important.

The not so important timing attacks allow active detection of which usernames are valid by differing timings in authentication failure, e.g. a valid username might take a little while to return (as the authentication method does the work of verifying their supplied credentials) where an invalid username might return quickly (as the authentication method returns early because it knows the username is invalid and destined to fail). We implement defences against these attacks by sending a fake username and credentials to the authentication backends. This hasn't been 100% effective when we delegate authentication to external libraries (e.g. PAM in portable OpenSSH) as they can do their own checks which return early anyway. I don't think these "attacks" matter that much because all they do is reveal the existence of something that isn't much of a secret anyway.

The other class of timing attacks are more scary - these are attacks that allow a passive observer to recover information relating to authentication secrets such as passwords. Attacks of this type have been found by Solar Designer and independently by Song, Wagner and Tian.

A simple attack of this type is watching the early parts of the protocol for a packet which contains a response to a password request. With some knowledge of the protocol these are fairly easy to spot or guess and once an attacker has obtained one, they can directly recover the length of the password. To prevent this, OpenSSH pads passwords up to a minimum of 64 characters. After 64 characters, brute forcing attacks are infeasible anyway, unless you have picked an utterly stupid password like "a" x 64.

A stronger attack involves watching the protocol, *after* the user has authenticated and established a session for occasions where they type a password in, such as running "su" or ssh'ing to another host. Without countermeasures, these can be clearly distinguished in the protocol as a one-way stream of short packets (keystrokes) without replies because the server will disable TTY echo. This will give a passive observer information about password length and inter-keystroke timing. To defeat this attack, OpenSSH sends back fake replies when TTY echo is turned off.

Re:sections of interview are hidden / commented (1)

SpinJaunt (847897) | more than 8 years ago | (#14308131)

Pesky paranoids..

maybe a new form of steganography? or maybe it's elliptic curve control to aid with the slashdotting?

I wonder, do you check your logs as vigorously as you do websites' HTML source?

Re:sections of interview are hidden / commented (0)

Anonymous Coward | more than 8 years ago | (#14310197)

Dude, read the printable version. It's all there.

Lets do a bit of SCHADENFREUDE! (1)

xquark (649804) | more than 8 years ago | (#14307619)

Is that the correct use?

Re:Lets do a bit of SCHADENFREUDE! (0)

Anonymous Coward | more than 8 years ago | (#14310550)

Well, we have an equivalent word in Swedish (skadeglädje), and no, that's not how you use it.
You generally use it like "to feel skadeglädje", or more commonly to be "skadeglad".

OK but ... (1)

FishandChips (695645) | more than 8 years ago | (#14307765)

Yes, ssh is a tool used daily by huge numbers of people, and hats off to the development team for that gift to us. However, a serious black mark for the standards of documentation. In reality, if you check the website, there is no documentation at all that is easy to find apart from the man pages and an FAQ that assumes fairly high-level knowledge. There are plenty of third-party how-tos, but how do I know I can trust what a third party says? It's 2005. I just find it incredible that this of all program suites is still in the 1970s hairy-beardy geek era with regard to providing clear and comprehensible information to the end-user.

Re:OK but ... (0)

Anonymous Coward | more than 8 years ago | (#14307814)

Fuck end users. Seriously, three-fourths don't even know where OpenSSH comes from, and the development team and OpenBSD never get any fucking credit. Unlike other free operating system products, few, if any, of the developers are actually employed working full-time on this stuff. (e.g. see HP and their public domain claim; kudos to them for rectifying it. They're not the only one, and are notable only because they actually fixed it.)

Re:OK but ... (0)

Anonymous Coward | more than 8 years ago | (#14307927)

>I just find it incredible that this of all program suites
>is still in the 1970s hairy-beardy geek era with regard
>to providing clear and comprehensible information to the
>end-user.

http://openssh.org/manual.html [openssh.org], and this covers both the man pages for the application, as well as the RFCs on the protocol and some additional extensions.

What is your problem? The documentation is top-notch, written by mostly the same team that creates OpenBSD (which prides itself in maintaining the most up-to-date, comprehensive manuals).

>However, a serious black mark for the standards of
>documentation. In reality, no documentation at all
>that is easy to find apart from the man pages and
>an FAQ that assumes fairly high-level knowledge,
>if you check the website.

In the true tradition of Unices, the manual pages ARE the documentation. If you can't understand the man pages, then you should get another job or hobby. Seriously. If you think 'high-level knowledge' is a prerequisite for reading a bloody manual, you're not even trying to understand that manual. What did you expect? Some Linux-like 'how-to', written by an unknown 13-year-old, which shows you exactly what keys to punch on your keyboard in order to configure the very specific setup that particular 13-year-old envisioned?

It's a sad, sad world, full of blighted idiots who refuse to think for themselves..

Re:OK but ... (1)

FishandChips (695645) | more than 8 years ago | (#14308146)

Thanks for proving my point about the 1970s hairy-beardy geek era. I think you'll find that the world has moved on a little since the era of "the manuals are the documentation". Boy, was that a crap era. But as I said, this is 2005, and folks look for a little more these days. You may not like it or even understand it, but folks are folks; that's just what they do.

STFU & RTFM (0)

Anonymous Coward | more than 8 years ago | (#14311027)

And stop whining too.

If they keep going they can make another OpenVPN (1)

chosner (940487) | more than 8 years ago | (#14308128)

It's nice to see OpenSSH following in the footsteps of OpenVPN. They are using the TUN interface and the OpenSSL library, just like OpenVPN started doing three years ago. I think this is a cool addition and will be fun to play with, but if you are thinking of using it to build a serious VPN, there are a lot better, more mature VPN products out there that have robust feature sets built on top of this kind of tunneling, like OpenVPN. Oh, and OpenVPN runs TCP-over-UDP, unless you really want TCP-over-TCP, in which case it can do that too.
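For comparison, a minimal OpenVPN point-to-point tunnel (UDP transport, static shared key) is about this small (the hostname, addresses, and key path are placeholders):

```shell
# Generate a shared static key once and copy it to both hosts:
openvpn --genkey --secret static.key

# Server side: tun interface with a private point-to-point pair.
openvpn --dev tun --ifconfig 10.8.0.1 10.8.0.2 --secret static.key

# Client side: same key, addresses mirrored.
openvpn --remote vpn.example.com --dev tun \
    --ifconfig 10.8.0.2 10.8.0.1 --secret static.key
```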

Software free tunneling (1)

HeWhoRoams (895809) | more than 8 years ago | (#14308223)

When I read this my thoughts immediately drifted to network infiltration. If someone doesn't have to take the additional time to find the third-party software necessary for VPNing into my network, wouldn't that remove an entire step from breaking in? Granted, it would be nice to VPN into work from anywhere using a protocol I can use almost anywhere, but what would prevent an entire internet cafe from launching a brute force attack using the same method?

Re:Software free tunneling (0)

Anonymous Coward | more than 8 years ago | (#14309907)

Umm... Not enabling the VPN functionality in the server?

Re:Software free tunneling (1)

RazzleDazzle (442937) | more than 8 years ago | (#14310432)

These are two options in the sshd_config file:

#AllowTcpForwarding yes
#PermitTunnel no

You can disable TCP forwarding if you want.
You have to manually enable tunneling, as it appears it is not on by default.
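For what it's worth, once PermitTunnel is enabled on the server, the new tun(4) tunnel can be brought up with something along these lines (addresses are placeholders, both ends need root, and the BSD-style ifconfig syntax is assumed):

```shell
# Client: request tunnel device 0 on both ends (-w local:remote),
# background after auth (-f), then configure the local tun interface.
ssh -f -w 0:0 root@server.example.com true
ifconfig tun0 10.1.1.1 10.1.1.2 netmask 255.255.255.252

# Server side would mirror the addresses:
#   ifconfig tun0 10.1.1.2 10.1.1.1 netmask 255.255.255.252
```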

A brute force attack is no more feasible than it was before. Don't use password auth (or use good passwords) and you should be just fine.
Also, if you have SSH access to an SSH server, you can likely already access most devices that the SSH server can access.

I would trust an OpenSSH-based VPN more than OpenVPN for a million reasons, or for however many lines of code OpenSSH actually is. Nothing against OpenVPN (I have never used it), but I just trust the OpenBSD folks a lot more than most when it comes to my security.

Just my $.02

Damien (2, Funny)

blair1q (305137) | more than 8 years ago | (#14308780)

Damien, do you think that catchers will start using the inside protector any time soon?

chroot (1)

fusionsquared (865008) | more than 8 years ago | (#14308876)

when will sftp chroot become integrated instead of a patch/hack?

Re:chroot (2, Informative)

Nimrangul (599578) | more than 8 years ago | (#14309895)

When the code is good, clean, free and something the developers want.

Miller? (1)

d_54321 (446966) | more than 8 years ago | (#14308894)

Did anybody else first read this as interview of Dennis Miller?

SSH is SSH and VPN is VPN (1)

queenb**ch (446380) | more than 8 years ago | (#14312632)

I really don't see the need for it. SSH is, for probably 90+% of its users, a console for a remote box. I really don't feel the need to establish a VPN from my desk to the server down the hall. I'd rather not have that built into my SSH by default, thank you kindly. I'm from the "old school": if it ain't installed, it ain't a problem.

Frankly, we're pretty happy using SSH as it is right now. I'd like to see something like easier tunneling of X over an SSH session. Other than that, unless you can spank the already rather well done open source VPNs that are out there, I couldn't care less about VPN as a part of SSH.

2 cents,

Queen B

Re:SSH is SSH and VPN is VPN (1, Informative)

Anonymous Coward | more than 8 years ago | (#14313401)

"I'd like to see something like easier tunneling of X of an SSH session"

WHAT???

You, "olde scholar", find it too difficult to just `ssh -X user@host` and then launch your X application?

Now: how can it be any easier!?