
GA Tech: Internet's Mid-Layers Vulnerable To Attack

timothy posted more than 2 years ago | from the soft-creamy-underbelly dept.

Programming 166

An anonymous reader writes "Evolution has ossified the middle layers of the Internet, leaving them vulnerable, but security breaches could be countered by diversification of protocols, according to Georgia Tech, which recommends new middle-layer protocols whose functionality does not overlap, thus preventing 'unnatural selection.' Extinction sucks, especially when it hits my favorite protocols, like FTP."

166 comments

It's hard to take seriously... (4, Insightful)

msauve (701917) | more than 2 years ago | (#37173698)

an article which discusses "the six [sic] layers..."

I understand that IP protocols predate the 7 layer ISO/OSI model, but that's what everything is mapped to in modern terms.

The article seems even more confused, when it reverses the layers, claiming that "at layers five and six, where Ethernet and other data-link protocols such as PPP (Point-to-Point Protocol) communicate..."

What are they teaching at GA Tech? This is networking 101.

Re:It's hard to take seriously... (-1, Offtopic)

BootysAsses (2443734) | more than 2 years ago | (#37173704)

A few days ago, I was overexerting myself trying to get rid of a virus that held my entire PC hostage. It wouldn't give it back to me unless I paid $50! I tried all of the usual software, but nothing could remove it! My gigabits were slower than ever!

That's when I found MyCleanPC [mycleanpc.com]. I went to their website, ran a free scan, and that virus vanished off of my PC right this minuteness! I couldn't believe how fast my gigabits became afterwards!

From then on, my PC ran like new. Not only did MyCleanPC [mycleanpc.com] totally clean up my system, but it increased my speed! Everything was overclocking and running at maximum gigabits!

MyCleanPC [mycleanpc.com] is outstanding! My computer is running faster than ever! MyCleanPC [mycleanpc.com] came through with flying colors where no one else could! MyCleanPC [mycleanpc.com] totally cleaned up my system and increased my speed!

If you're having problems with your PC, then I sincerely recommend using MyCleanPC [mycleanpc.com]. It'll not only totally clean up your system, but your computer will be running faster than ever!

But even if you're not having any visible problems, you could still be infected or have errors on your PC. So get MyCleanPC [mycleanpc.com] and run a scan right this minuteness so your system will be totally clean like mine!

Watch their commercial! [youtube.com]

MyCleanPC: For a Cleaner, Safer PC. [mycleanpc.com]

Re:It's hard to take seriously... (0)

Anonymous Coward | more than 2 years ago | (#37173756)

N.B. This advertisement has been brought to you by your friendly neighbourhood Microsoft evangelist.

* MyCleanPC requires Microsoft Windows.
* Most viruses require Microsoft Windows.

Re:It's hard to take seriously... (0)

Anonymous Coward | more than 2 years ago | (#37173886)

Microsoft evangelist?

Hardly. That spam was brought to you by your local depraved zombie bot, which has been pushing the same thing for the past few stories.

I hope somebody in the admin circle adds it to the block filter shortly.

Re:It's hard to take seriously... (1)

retchdog (1319261) | more than 2 years ago | (#37174386)

zombie bot? maybe, but it's probably some third-world peon doing this for pennies an hour.

but maybe that's what you meant by zombie bot.

Re:It's hard to take seriously... (1)

PopeRatzo (965947) | more than 2 years ago | (#37174518)

zombie bot? maybe, but it's probably some third-world peon doing this for pennies an hour

Same thing.

Re:protocols (1)

kakarote (2294232) | more than 2 years ago | (#37174966)

One really has to wonder how the process of transferring files is going to "evolve"; the FTP protocol pre-dates TCP/IP itself and is as useful as ever.

Re:protocols (1)

leenks (906881) | more than 2 years ago | (#37175498)

People still use FTP? I exclusively use SFTP and/or SCP these days. I can't remember when I last used FTPS, let alone plain FTP.

Re:It's hard to take seriously... (-1)

Anonymous Coward | more than 2 years ago | (#37174442)

GOSO [goso.com] is a social media and brand reputation management suite based on Microsoft's in-house tools and designed to help businesses manage multiple identities at multiple locations. Those identities include but are not limited to, social networks, geo-location application check-ins, customer review websites and online brand mentions.

Re:It's hard to take seriously... (1, Insightful)

postbigbang (761081) | more than 2 years ago | (#37174014)

It's pretty freshman-ish stuff. FTP hasn't been used in a long time. Glass-screen protocols went the way of the 386 long ago. I'm surprised these guys don't understand various secure protocols, key exchange methods, and so forth. Nice fluffy stuff, but very dated for the reality check. Show me someone using FTP and I'll show you a password theft followed by a crack. Ye gawds.

Re:It's hard to take seriously... (2)

JMZero (449047) | more than 2 years ago | (#37174102)

Variants of FTP are used widely in business-to-business transfers - sometimes secured with SSL, but often just by plaintext passwords, obscurity and/or IP whitelists. FTP is consistent across a large variety of platforms, and lots of sysadmins like the simplicity of scripting, for example, a nightly FTP file transfer.
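
For instance, a minimal sketch of such a scripted nightly upload using Python's standard ftplib; the host, credentials, and filename here are made-up placeholders, not a recommendation:

    # Minimal sketch of a scripted nightly FTP upload using Python's standard ftplib.
    # Host, credentials, and filename are placeholders.
    from ftplib import FTP

    with FTP("ftp.example.com") as ftp:
        ftp.login("nightlyuser", "notverysecret")   # plaintext credentials, as discussed above
        with open("daily_extract.csv", "rb") as f:
            ftp.storbinary("STOR daily_extract.csv", f)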

Are there better solutions? Of course. But FTP is still very common - and lots of businesses still employ much more arcane tech than it. For a lot of businesses, terminal servers were a real boon, because now they could all connect to a single old desktop (which in turn has a much more arcane connection to some mainframe in a basement that everyone's scared of).

It'll be a long time before FTP dies.

Re:It's hard to take seriously... (1)

Skal Tura (595728) | more than 2 years ago | (#37174182)

Our customers demand FTP; no matter how much we educate them about SFTP and show how easy it is, they still insist on using FTP.
If FTP goes down, that's likely to get complaints faster than HTTP being down. Loss of SSH access they barely even notice oO;

Re:It's hard to take seriously... (0)

postbigbang (761081) | more than 2 years ago | (#37174202)

Go on, gimme that loaded .357 so that I can swing it around. And you give it to them.

SFTP is one idea. SCP. Notice the S?

Re:It's hard to take seriously... (1)

tepples (727027) | more than 2 years ago | (#37174246)

SFTP is one idea. SCP. Notice the S?

Which of these two protocols' client comes with Microsoft Windows brand operating systems? And which budget shared web host supports file uploads using such protocols?

Re:It's hard to take seriously... (1)

Tanktalus (794810) | more than 2 years ago | (#37174272)

First question: Dunno. Probably neither. Not hard to get, though. Second question: I switched to dreamhost.com because I can use rsync over ssh. I'm not posting any referral link, to avoid any suggestion that I have a financial reason to say this. Also: I don't work for them; I merely use them. I understand they have a less-than-stellar reputation, but for my purposes it's been nearly nothing but positives.

Re:It's hard to take seriously... (1)

Dhalka226 (559740) | more than 2 years ago | (#37174292)

And which budget shared web host supports file uploads using such protocols?

Dreamhost [dreamhost.com]. Being able to SSH in and pull down something with their pipes using wget has come in handy a number of times as well.

The client thing, meh. If people are mucking around in command line FTP programs they're savvy enough to download one; if they're using a GUI an awful lot of them have SFTP support these days, including FileZilla (free/Free). I guess I could see an argument if they're just entering an FTP URL into their Explorer window.

Re:It's hard to take seriously... (0)

Anonymous Coward | more than 2 years ago | (#37175568)

If you check, he is also heavily funded by Cisco, amongst others.

Re:It's hard to take seriously... (0)

Anonymous Coward | more than 2 years ago | (#37175300)

So? Require them to tunnel it inside an IPsec tunnel; problem solved.

Re:It's hard to take seriously... (1)

maxwell demon (590494) | more than 2 years ago | (#37175596)

So what is the advantage of sftp over ftps?

Re:It's hard to take seriously... (4, Informative)

Alioth (221270) | more than 2 years ago | (#37175664)

FTP (and FTPS) uses two ports: one fixed port number and the other random. You also have passive mode and "active" mode for FTP (but everyone these days uses passive, except one particularly backward vendor I had to deal with).

This causes firewall headaches because now the packet filter must understand FTP and selectively punch holes in the firewall for the data connection, and close them when the data connection finishes. Either the packet filter in the OS kernel must understand FTP, or you must use an FTP proxy that can dynamically modify your packet filter rules.

SFTP requires none of this. It works on a single port and this port doesn't change with each file you want to transfer or directory listing you want to see. You can also use the scp command which is much cleaner for scripting than writing FTP scripts. SFTP is a *lot* easier and cleaner to support, and the encryption is built right into the protocol, not added ad-hoc some time later.
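
To illustrate the single-port point, here's a minimal SFTP upload sketch assuming the third-party paramiko library; the host, credentials, and paths are placeholders, and everything rides over the one SSH connection:

    # Minimal sketch: SFTP upload over a single SSH connection (port 22 only).
    # Assumes the third-party "paramiko" library; host/credentials/paths are placeholders.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only; verify host keys properly in production
    client.connect("sftp.example.com", port=22, username="user", password="secret")

    sftp = client.open_sftp()
    sftp.put("report.tar.gz", "/incoming/report.tar.gz")  # transfers and directory listings all reuse the same port
    sftp.close()
    client.close()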

Re:It's hard to take seriously... (1)

garyebickford (222422) | more than 2 years ago | (#37174248)

Haha. One system I had to build and maintain at a previous employer, not that long ago (1999):
PC .BAT job runs a Qualcomm application that dials up Qualcomm periodically to connect to their satellite truck monitoring system, capture session into a file in a special directory
PC .BAT job looks periodically to see if a new file has come in; uses TFTP to transfer it to a Sun workstation - call it Sun-1.
Sun-1 shell script mails the file to a special email account on another workstation
Sun-2 uses fetchmail and procmail to filter the incoming data (now timestamped by dint of having been emailed) into a Perl script that logs into a database server and inserts the data.
Sun-2 cron job runs all the Perl scripts that collect, doing the inserts.
Sun-3 (a web server on the DMZ supported with an outbound-only tunnel from the database server) runs queries on the database and tells the user where the truck is, where it's been and what it's doing, displayed as a graphic layer on a mapping system (we got our map data directly from the feds).

All this mostly because the Qualcomm application only ran in DOS, and only worked via dial-up. For the web user it was really sexy, but underneath it was a complete kluge, of necessity given the available tools.

FTP over TLS (1)

tepples (727027) | more than 2 years ago | (#37174258)

Show me someone using ftp and I'll show you a password theft followed by a crack.

Crack this: FTP over TLS [wikipedia.org].
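
For what it's worth, Python's standard ftplib speaks it out of the box; a rough sketch (the host and credentials are made up):

    # Rough sketch of FTP over TLS (FTPS) using Python's standard ftplib.
    # Host and credentials are placeholders.
    from ftplib import FTP_TLS

    ftps = FTP_TLS("ftp.example.com")
    ftps.login("user", "secret")   # login() negotiates TLS on the control channel first
    ftps.prot_p()                  # switch the data channel to TLS as well
    ftps.retrlines("LIST")         # directory listing now travels encrypted
    ftps.quit()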

Re:FTP over TLS (0)

postbigbang (761081) | more than 2 years ago | (#37174290)

Gimme a tasty weak hash and I'll have you in about 20min. Less if you did something stupid. About several million years if you used a thick seed and 256 (now 251) bit encryption.

Re:FTP over TLS (1)

errandum (2014454) | more than 2 years ago | (#37174464)

Because that makes a lot more sense than just using SFTP or SCP.

And something I noticed: files I transfer with SCP either fail or actually get done right. With FTP and others I've lost count of the times files got corrupted in transit without any kind of warning.

That, added to the security concerns, should be enough to force the switch in an enterprise environment.

Shell account (1)

tepples (727027) | more than 2 years ago | (#37175246)

Don't SFTP, SCP, and anything else tunneled over SSH require a shell account? A lot of budget web hosting services provide FTP but no shell account.

Re:It's hard to take seriously... (1)

colinrichardday (768814) | more than 2 years ago | (#37174282)

Show me someone using ftp and I'll show you a password theft followed by a crack.

Does that include anonymous FTP? Or using FTP between two computers in my apartment?

Re:It's hard to take seriously... (-1, Troll)

postbigbang (761081) | more than 2 years ago | (#37174324)

>>Does that include anonymous FTP?

Smartass, aren't ya?

>>Or using FTP between two computers in my apartment?

Yeah, because you probably use WEP for security.

Re:It's hard to take seriously... (1)

colinrichardday (768814) | more than 2 years ago | (#37174512)

No, it's a wired connection. And why is mentioning anonymous FTP being a smartass?

Re:It's hard to take seriously... (1)

TheRealGrogan (1660825) | more than 2 years ago | (#37175624)

I wasn't going to pay any attention to that silliness, but I feel like saying that I use FTP all the time as well.

Not for server work (SSH protocols for that), but I use FTP between computers here. It's a fast and reliable way to transfer data. If it's a lot of small files I tar it up first, though. (I would want to archive that kind of stuff for any method of data transfer anyway.)

I still use FTP clients to download stuff where I can too. (e.g. kernel and other source tarballs, distro mirrors for ISOs etc.)

I don't want FTP to go away. However, I don't think that's got anything to do with the premise of the article... it's just used as an example of the "evolution" he's talking about. It's got nothing to do with the "middle protocols" in question, it's one of the application protocols and I doubt it's going anywhere. Reliable resuming of transfers (by inserting markers in the data stream) means less wasted bandwidth.

Re:It's hard to take seriously... (1)

Opportunist (166417) | more than 2 years ago | (#37174682)

Correction. FTP should not be used anymore. It is used. Widely. Why? Because it works, and because the person who could change it left the company years ago. But slowly.

Turn back the clock a decade. We're in the downturn after the dot-com bubble blew up, a lot of more or less sane IT people are out of a job (along with all the duds who got their jobs by spelling TCP/IP halfway correctly and knowing that it ain't the Chinese secret service), and all of them are looking for work; any kind of work will do. So they're cheap, and that's where companies buy in and get their IT infrastructure up to speed. Back then, people cared even less about security than they do today; what they wanted was an IT infrastructure that works.

In comes FTP, along with scripts to grab and send files around.

And these servers still exist out there; they have never been touched, they were never secured, and I'm even sure that the passwords are still the same as they were before 9/11 happened.

Re:It's hard to take seriously... (1)

mikael_j (106439) | more than 2 years ago | (#37174922)

Back then, people cared even less about security than they do today, what they wanted was an IT infrastructure that works.

Of course, I've seen ISP environments that used FTP heavily (as well as TFTP for a bunch of automated stuff). Why? Because when you're running an encrypted tunnel through another encrypted tunnel between two trusted hosts, on a segment of the network that does not allow incoming traffic from anywhere but the NOC, it just seems silly to add another layer of encryption, and the potential issues that could come with it, for daily log transfers...

Re:It's hard to take seriously... (0)

Anonymous Coward | more than 2 years ago | (#37174834)

I use FTP on a daily basis to transfer files between home and work. Of course... I do tunnel over SSH. It's not a security problem to use FTP; it's how you use it.

What are you talking about? (3, Informative)

reiisi (1211052) | more than 2 years ago | (#37174196)

ARPANET predates the OSI model, and the current Internet Protocols came after the definition of the OSI stuff. (That's a little hard to see in the current wikipedia articles, but it's there.) The IETF in fact deliberately chose to combine two of the OSI layers.

The article does have some issues. I'm not sure if the author actually doesn't understand the paper he or she is trying to summarize. Maybe the intent was to make it easier for the lay person to understand. But there is some creativity going on, and parts of the summary don't really reflect the paper.

The paper itself is offering a framework of analysis of the evolution of the Internet Protocols. It might have been interesting to see a bit more analysis of ARPANET and some of the other protocols the IP protocols eventually replaced. It might have been interesting to see them address the OSI model a bit more, but the OSI model never was really implemented fully, and might be considered not part of the evolution.

I see that they take IPv6 up as a competitor of IPv4 instead of the heir apparent, which is probably a useful thing to do, if we want to understand why so many IT managers are still failing to move in a timely manner.

I'm not sure I understand their work well enough to either agree or disagree, but I think it offers food for thought, including the idea that IPv4/6 doesn't actually have to be the only protocol existing at that layer.

Re:It's hard to take seriously... (1)

Charliemopps (1157495) | more than 2 years ago | (#37174358)

You're missing the point. A good example would be fast food restaurants. There used to be a Mexican-style fast food chain called Taco Bell. It used to be the only place to get burritos, but then McDonald's introduced their breakfast burrito and drove Taco Bell nearly extinct, like FTP. Please ignore the fact that you drove by three of them this morning, or that it's impossible to update your website without using FTP.

Re:It's hard to take seriously... (3, Informative)

mgiuca (1040724) | more than 2 years ago | (#37174438)

I've never really been a fan of the OSI model. The idea of the hierarchy is great; sandwiching it into discrete layers seems problematic.

Wikipedia's definition of the OSI model [wikipedia.org] states that "there are seven layers, each generically known as an N layer. An N+1 entity requests services from the layer N entity." Makes sense. So, why are both ICMP and IP considered to be in layer 3? ICMP is built on top of IP, so it should be in the layer above IP, but it doesn't actually provide transport (or at least, isn't meant to). HTTP is in layer 7, but it can be sent directly on top of TCP, which is in layer 4, skipping over two layers. (Or it can be tunnelled over SSL, but still skipping layer 5.)

I prefer to think of the IP stack as a directed acyclic graph of technologies, each depending on another, rather than an explicit linear division into layers.
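
As a toy illustration of that view, here's a hypothetical sketch in Python; the edges are just the examples mentioned in this thread, nothing exhaustive:

    # Toy sketch of the "protocols as a DAG" view; an edge means "can run directly on top of".
    # Only the example protocols mentioned in this thread are included.
    runs_over = {
        "HTTP": ["SSL", "TCP"],   # HTTP can be tunnelled over SSL or sit directly on TCP
        "SSL":  ["TCP"],
        "ICMP": ["IP"],           # syntactically carried inside IP packets
        "TCP":  ["IP"],
        "UDP":  ["IP"],
        "IP":   ["Ethernet", "PPP"],
    }

    def stacks(proto, graph=runs_over):
        """Enumerate the possible stacks beneath a protocol."""
        below = graph.get(proto, [])
        if not below:
            return [[proto]]
        return [[proto] + rest for b in below for rest in stacks(b, graph)]

    print(stacks("HTTP"))  # e.g. HTTP->SSL->TCP->IP->Ethernet, HTTP->TCP->IP->PPP, ...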

Re:It's hard to take seriously... (4, Informative)

Pentium100 (1240090) | more than 2 years ago | (#37174874)

Well, you can imagine a "null" layer that does nothing, just passes the data unmodified to the next layer.

For example, HTTPS would be HTTP over SSL, and SSL would be level 6 (presentation). If you use HTTP without SSL, then level 6 is empty or uses the "null" protocol.

ICMP is part of IP; while you could say that the ICMP packet is inside an IP packet, it is easier to imagine ICMP as just a part of IP, because it is used that way (for example, to signal that some other packet could not be delivered).

Just because I can send the HTTP packet inside an Ethernet frame (without IP or TCP), does not mean that the model is broken, it's just that "null" is a valid protocol.

Re:It's hard to take seriously... (1)

mgiuca (1040724) | more than 2 years ago | (#37174956)

Good point about the null. I see that it works that way for non-SSL traffic, but I still don't see how the "session layer" sits in between HTTP and TCP (even if you consider it to be "null"). It seems like session layer protocols are an entirely different sort of connection.

As for ICMP, I see what you mean that it's sort of part of the IP protocol (IP wouldn't work without ICMP), but it is syntactically formed inside an IP packet, and I do believe it is constructive to think of ICMP as being "on top of" IP and not part of it (that's certainly how you'd implement it -- your ICMP code would certainly be calling your "construct IP packet" code at some point).

Re:It's hard to take seriously... (4, Informative)

lennier (44736) | more than 2 years ago | (#37175092)

So, why are both ICMP and IP considered to be in layer 3?

Because the Internet protocols are not in fact part of the OSI model, despite lots of teaching materials claiming this. The neat little OSI layer diagrams you see with all the layers filled in are mostly retcons invented long after OSI was dead.

The actual Internet protocol suite is not part of the OSI model but the 4-layer Internet model [wikipedia.org] (Link, Internet, Transport, Application). Link is like OSI layers 1 and 2, Internet is like OSI Layer 3, Transport is like OSI Layer 4, Application is like OSI Layer 7, but there is no actual Internet equivalent of OSI's layers 5 and 6. Pretty much everything above 4 runs at Layer 7.

In the Internet model, it makes perfect sense for DHCP, IP and ICMP and routing protocols like RIP and OSPF to be at the Internetworking level, because they are all protocols dealing with datagram transmission between interconnected, disparate packet-switched services, while TCP and UDP are in the Transport layer because they make dealing with raw datagrams somewhat more pleasant.

It would perhaps be sensible to invent a whole new layer model now that we have a lot more protocols. HTTP, for instance, should be a layer of its own, since so many things are now tunnelled over it. That would be sensible, though, so good luck.

Re:It's hard to take seriously... (1)

mgiuca (1040724) | more than 2 years ago | (#37175154)

Thank you. Yes, the four-layer Internet Protocol Suite thing makes a lot more sense. Rather than trying to say "there are seven layers stacked on top of each other," it seems like here, the protocols are arranged into four logical "protocol groups" with clearly-defined roles, and no sense of "protocols in layer N run on top of those in layer N-1". In the IP suite, it seems valid for protocols in the same group to run on top of each other (e.g., HTTP runs over SSL; ICMP runs over IP).

Re:It's hard to take seriously... (4, Informative)

FireFury03 (653718) | more than 2 years ago | (#37175552)

It would perhaps be sensible to invent a whole new layer model now that we have a lot more protocols. HTTP, for instance, should be a layer of its own, since so many things are now tunnelled over it. That would be sensible, though, so good luck.

Thinking of a fixed set of layers stops being useful as soon as you get moderately complex network setups because these days encapsulations tend to happen at all sorts of layers. Modern networks can probably be thought of more as a stack of protocols with the link layer at the bottom, application at the top and chopped up repetitive bits of the stack in the middle.

Take, for example, a modern connection to a website, and we probably see this kind of stack:
HTTP
SSL
TCP
IP
PPP
PPPoE
Ethernet
ATM VC-Mux
ATM
G.992.5 data link layer
Physical ADSL

And that's just for a plain home ADSL connection. In more complex networks it is common to encapsulate stuff further, for example using GRE tunnels or IPSEC tunnels, and it isn't uncommon to see something more like:

HTTP
SSL
TCP
IP
IPSEC ESP
IPSEC AH
IP
Ethernet
GRE
IP
GRE
IP
PPP
PPPoE
Ethernet
ATM VC-Mux
ATM
G.992.5 data link layer
Physical ADSL

And you can keep adding encapsulation layers at pretty much any point in the stack.

Re:It's hard to take seriously... (2)

Animats (122034) | more than 2 years ago | (#37175504)

So, why are both ICMP and IP considered to be in layer 3? ICMP is built on top of IP.

The real answer to that is that it's a Berkeley UNIXism. Some early TCP/IP implementations, including the one I worked on, had ICMP at a layer above IP, in the same layer with TCP and UDP. The Berkeley UNIX kernel, like other UNIX versions of the period, had real trouble communicating upward within the kernel, because this was before threads, let alone kernel threads.

To get around that kernel limitation, ICMP was crammed in with IP. This had some downsides, including the demise of ICMP Source Quench for congestion control, which didn't fit well into the mode of ICMP as an error-reporting mechanism for IP.

How to mod article? (3)

whoever57 (658626) | more than 2 years ago | (#37173710)

Surely this article should be modded "massive ignorance"! It's the simplicity of the middle layers that enables the development of the upper and lower levels. It also makes the middle layers much more immune to security issues.

So the internet is just like a human being then? (4, Funny)

antifoidulus (807088) | more than 2 years ago | (#37173764)

Well, I know for myself a good swift "attack" on my "middle layer" does cause me to fall to the ground and writhe around for a while, so I guess the internet and I do have a lot in common, really vulnerable mid-sections.

How did this article make it? (3, Insightful)

norpy (1277318) | more than 2 years ago | (#37173772)

Not only did they combine the presentation and application layers from the OSI model, they completely misunderstand WHY the transport layer is less diverse in number of protocols.

They propose that we should create new transport protocols that do not overlap with existing ones.... The reason we only have a handful of them is that there are not many ways to differentiate a transport protocol.

Re:How did this article make it? (1)

after.fallout.34t98e (1908288) | more than 2 years ago | (#37173940)

There are very many ways to differentiate them (I would hypothesize an infinite number of different ways); most of them are just not efficient.

The trouble I have is understanding why we need so many. A transport protocol is a mechanism for ensuring the delivery of data from one point to another. Surely it would be far simpler to optimize a network (or write faster software for routers/switches) if there were only a few standard protocols (granted there are only a few in wide use and most network hardware is designed for that fact).

Fail (0)

Anonymous Coward | more than 2 years ago | (#37173788)

Yes, because it's very difficult to understand that protocols which aren't end-to-end require more standardisation than other protocols, since they have to cross many nodes, which leads to relying on a select few tried-and-true protocols. Yes, very difficult.

Unstated, and important, assumptions? (4, Insightful)

fuzzyfuzzyfungus (1223518) | more than 2 years ago | (#37173790)

There seems to be the unstated (but vital to the conclusion asserted) assumption that competition actually makes protocols more secure and that competition must occur at the protocol level, rather than the implementation level. Without those assumptions holding, all this article really says is that people use TCP and UDP a lot. Yup. That they do.

This seems like it might be true in the (not necessarily all that common) case of a protocol whose security is fucked-by-design competing with a protocol that isn't fundamentally flawed, in a marketplace with buyers who place a premium on security, rather than price, features, time-to-market, etc.

Outside of that, though, much of the competition and security polishing seems to be at the level of competing implementations of the same protocols (and, particularly in the case of very complex ones, the de-facto modification of the protocol by abandonment of its weirder historical features). It also often seems to be the case that (unless you are in the very small formally-proven-systems-written-in-Ada market, or something of that sort) v1.0 of snazzynewprotocol is a bit of a clusterfuck, and is available in only a single implementation, also highly dubious, while the old standbys have been polished considerably and have a number of implementations available...

Re:Unstated, and important, assumptions? (1)

masterwit (1800118) | more than 2 years ago | (#37174788)

(unless you are in the very small formally-proven-systems-written-in-Ada market, or something of that sort) v1.0 of snazzynewprotocol is a bit of a clusterfuck, and is available in only a single implementation, also highly dubious, while the old standbys have been polished considerably and have a number of implementations available...

Careful that we do not open Pandora's box here... (You know exactly what I am talking about, heh)

But on another note, you're exactly right. This article seems to talk about how protocols "evolved", but this is just as useful as painting a picture of the internet:
Time and time again I see models looking at a picture of the internet "all at once", but without knowing the what and why of each individual link, protocol, implementation, etc., this is a complete waste of time.

As you have said in so many words above, what these "researchers" did is a complete waste of time. Maybe they need to do some research on peering 101?

Disclaimer: I have not read the actual paper just a poorly written article linked by Slashdot. Perhaps there is much more to their work; if so, I do apologize.

Re:Unstated, and important, assumptions? (2)

fuzzyfuzzyfungus (1223518) | more than 2 years ago | (#37174930)

As best I can tell, after going back and reading the paper, TFA is a miserable hatchetjob that has almost nothing to do with the paper.

The paper dealt with modeling the survival or culling of protocols at various layers, under various selection criteria, from a sort of evolutionary-biology standpoint. This did entail examining what conditions resulted in monoculture end states, and what conditions might result in stable multiple-protocols-at-each-layer end states; but all at the level of a fairly abstract model, not an empirical examination of the State of The Intertubes, or much specifically security-related material(In TFA's defense, the paper did suggest that, if you wanted a stable-state outcome with multiple middle layer protocols, they would have to be non-overlapping, which TFA managed to at least parrot accurately, and both agree that the internet as it exists is pretty much an IP monoculture; but the two otherwise bear surprisingly little resemblance to one another.)

TFA seems to be the result of picking the page with the least math, skimming it, and then adding some security-related alarmism...

Really? Why not link to the original paper? (5, Informative)

Anonymous Coward | more than 2 years ago | (#37173794)

It's the very first Google hit, is still on a public server, and doesn't obviously distort the conclusions like TFSA in an effort to get more clicks. A+ for poorly crafted summaries, Slashdot.

http://www.cc.gatech.edu/~sakhshab/evoarch.pdf [gatech.edu]

As long as... (1)

v(*_*)vvvv (233078) | more than 2 years ago | (#37173806)

... there is human error there will be weakness. Before innovation, there is caution and upkeep. Careless server admins just leave their gates open, a la Sony. A simple misconfiguration and the East goes dark, a la Amazon.

But like all things founded on good democratic freedoms, we are free to be idiots. And unless we add socialized security, the internet will always be full of gaping weaknesses. And all of us, including those that serve responsibly, will suffer their consequences. A la the United States of America.

Not that either is good or bad, but just sayin' this is the world we surf in.

Use mutt (1)

jrumney (197329) | more than 2 years ago | (#37173810)

Evolution always seemed too much like MS Outlook to me; this article just seems to confirm that, judging by the odd intelligible snippet I can make out from the overused metaphors and confused language of the summary. But fear not: mutt does not suffer these problems, and nor does Thunderbird, if you need the middle layers of your internet client to have pretty icons.

SCTP (0)

Anonymous Coward | more than 2 years ago | (#37173818)

They forgot a major, new, "middle layer" protocol. Next.

Alrighty (2)

khallow (566160) | more than 2 years ago | (#37173828)

security breaches could be countered by diversification of protocols, according to Georgia Tech, which recommends new middle layer protocols whose functionality does not overlap, thus preventing 'unnatural selection.'

Let's have a lot of protocols right, but to prevent too much diversity (that is, stuff that doesn't work) we'll need to make sure these comply with one or two protocols that everyone will use...

Hmmm, "Middle layer protocols whose functionality does not overlap"... does that mean that we prune the vast abundance of current protocols with sometimes overlapping functionality? I guess we could call that "diversification" though at this level of semantic mismatch, we could call it "Frank" with equal justification.

I guess I'm not quite sold on the argument presented here.

Other things hampering evolution (2)

jhantin (252660) | more than 2 years ago | (#37173916)

Evolution at the middle layers is also hampered by the proliferation of middleboxes [wikipedia.org]: monkeying with packet headers for policy-enforcement and profit. It's also pretty well de rigueur for IT departments to configure both middleboxes and "smart" switches to drop any unrecognized middle layer packets.

Let FTP die already (1)

Dwedit (232252) | more than 2 years ago | (#37173918)

Let FTP die already. Clear text passwords suck.
The only legitimate use of FTP is a way of transferring files over a LAN to something which doesn't have a good implementation of a CIFS or SSH server.

Re:Let FTP die already (2)

colinrichardday (768814) | more than 2 years ago | (#37173990)

Let FTP die already. Clear text passwords suck.

How do clear text passwords suck for anonymous FTP?

Re:Let FTP die already (1)

wmbetts (1306001) | more than 2 years ago | (#37174296)

Anonymous runs an ftp server? Aren't they worried about the FBI?

Re:Let FTP die already (1)

KiloByte (825081) | more than 2 years ago | (#37174478)

FTP has more flaws than just clear text passwords. Requiring multiple connections, often in opposite directions, for one.

the paper is rubbish (0)

Anonymous Coward | more than 2 years ago | (#37173942)

Oh good lord. This paper was rubbish. I was at the conference presentation. Be assured that no one is taking it seriously. Their model can produce any kind of hourglass, and has essentially nothing to do with the internet. It can't account for any of the actual, observed diversity at the waist of the hourglass, and has zero predictive power (which *should* be a test for any model). It isn't grounded in anything particular about protocols or networks. Please just ignore this junk.

analyze this bullshit (0)

Anonymous Coward | more than 2 years ago | (#37173952)

>Anyone who has used the Internet for very long knows about its evolution by the number of extinct protocols that are no longer used.

No, I know about its evolution by fitness for a purpose. Like easy identification of a resource by a URL while being able to serve many different server names transparently on a single IP (web hosting).

>For instance, FTP (File Transfer Protocol) used to be the only way to transmit files too large for SMTP (Simple Mail-Transfer Protocol),

Wrong. FTP was not the only way to transmit files "too large for SMTP" (did I somehow miss a magical size limit in SMTP?). I could name a few others, like UUCP, TFTP, SMB, the Novell filesystem, ZMODEM, XMODEM, NFS, etc.

> but clever programmers have devised ways of using server-side algorithms to deliver large files using HTTP (Hypertext Transfer Protocol).

It was always my impression that serving a large file via HTTP does not require an especially clever programmer. Somehow it just works.

> As a result, FTP has become virtually extinct on all but legacy systems.

It's the result of not being able to combine many customers' FTP servers onto a single IP.

>Researchers at the Georgia Institute of Technology wondered if these evolution and extinction phenomena on the Internet were in any way similar to evolution and extinction in nature.

Well - yes?

> After all, protocols could be viewed as species that compete for resources, with the weaker ones eventually becoming extinct. Similarly, the evolution of the Internet's architecture could be described as a competition among protocols, with some thriving and others becoming extinct.

Weaker ones?

> To test their theory, the group headed by computer science professor Constantine Dovrolis crafted a research program that tracks the evolution of architectures, called EvoArch. The overall goal was to help understand how protocols evolve in order to develop better ones that protect the Internet from the wide variety of threats it is facing today and to prevent extinctions that ossify the Internet, making it more vulnerable to attacks.

All right. So supporting a large number of protocols makes the internet safer? Linux kernel bugs seem to tell another story. It's good if unused protocols become so extinct that you can turn them off on your server.

> The general conclusion derived from EvoArch was that unless new protocols are crafted to avoid competition, they will inevitably lead to extinctions.

Yes, it's orthogonality. But it does not have anything to do with protocols becoming extinct. These guys make it sound like the extinction is the problem; in reality it's the lack of orthogonality in the designs (and, when it comes to security, also in the layer functions - how many layers have half-assed attempts to authenticate?).

Re:analyze this bullshit (0)

Anonymous Coward | more than 2 years ago | (#37174040)

Did i somehow miss a magical size limit in SMTP

It sure seems that way, see rfc2821 [ietf.org] - servers weren't actually required to support individual messages larger than 64k for a long time, though were encouraged to do so of course. 64k is not really a lot of binary data, but is quite a lot of plain text.

message content
      The maximum total length of a message content (including any
      message headers as well as the message body) MUST BE at least 64K
      octets. Since the introduction of Internet standards for
      multimedia mail [12], message lengths on the Internet have grown
      dramatically, and message size restrictions should be avoided if
      at all possible. SMTP server systems that must impose
      restrictions SHOULD implement the "SIZE" service extension [18],
      and SMTP client systems that will send large messages SHOULD
      utilize it when possible.

Plus, SMTP AFAIK still doesn't require 8-bit cleanliness, meaning everything sent by SMTP gets encoded inefficiently into 7-bit ASCII, which is incredibly wasteful.
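
A quick back-of-the-envelope check of that overhead, using only the Python standard library:

    # Rough measure of the cost of base64-encoding binary data for 7-bit transport.
    import base64, os

    payload = os.urandom(30_000)              # 30 kB of arbitrary binary data
    encoded = base64.b64encode(payload)
    print(len(encoded) / len(payload))        # ~1.33, i.e. roughly a third larger before any line breaks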
 

What we need is a P2P (0)

Anonymous Coward | more than 2 years ago | (#37173968)

file sharing / distributed FS protocol that lives outside tcp/ip!

Re:What we need is a P2P (3, Informative)

spauldo (118058) | more than 2 years ago | (#37174154)

There are plenty of those already. NetBIOS is an example of a non-TCP/IP peer-to-peer filesharing protocol (I'm talking LANMAN style NetBIOS, not NetBIOS over TCP/IP). It doesn't route outside your local network though. There's the good ol' IPX/SPX, which can actually be routed if your router supports them - while not filesharing protocols in themselves, they do support some very well-established filesharing protocols. You could probably adapt bittorrent to work on IPX/SPX.

The problem is we can't even get IPv6 routed on the internet, much less some obscure non-IP protocol. Hell, we never even really got all of IPv4 - multicast would have been great for streaming video if anyone had bothered to set up their routers for it.

That being said, you don't need to use TCP and UDP. You can create new protocols to run over IP, and the internet will generally pass them (your local firewall might be a different story). They'll stick out like a sore thumb to anyone searching for them, though.
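
A minimal sketch of that last point, assuming Linux and root privileges; the destination address is a placeholder, and protocol number 253 is the value reserved for experimentation:

    # Sketch: sending a datagram with a non-TCP/UDP IP protocol number (Linux, requires root).
    # Protocol 253 is reserved for experimentation (RFC 3692); 192.0.2.1 is a documentation address.
    import socket

    EXPERIMENTAL_PROTO = 253
    s = socket.socket(socket.AF_INET, socket.SOCK_RAW, EXPERIMENTAL_PROTO)
    s.sendto(b"hello, custom protocol", ("192.0.2.1", 0))   # kernel builds the IP header around our payload
    s.close()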

ossified? (1)

Cyko_01 (1092499) | more than 2 years ago | (#37174008)

forgive me, but nothing useful turned up on Google or urban dictionary. what does this word mean? (I am a native English speaker)

Re:ossified? (1)

Cyko_01 (1092499) | more than 2 years ago | (#37174016)

...unless this is some strange reference to bone formation

Re:ossified? (0)

Anonymous Coward | more than 2 years ago | (#37174168)

You're right. A better word in this context for English speakers would have been "fossilized." Kudos on your latin / spanish :)
I'm pretty sure the metaphor was supposed to be that those layers were left behind to die, unmaintained, rather than "evolving" along with the outer layers. Not that I agree that the 7 layers have changed at all, since it's the individual protocols that reach their own "extinction," like Appletalk and IPX did.

Re:ossified? (0)

Anonymous Coward | more than 2 years ago | (#37174046)

How 'bout a regular dictionary?

http://dictionary.reference.com/browse/ossified

Re:ossified? (1)

Livius (318358) | more than 2 years ago | (#37174084)

They're trying to say 'petrified' (in its figurative meaning) but they think it will sound more impressive if they incorrectly use a somewhat similar word.

Re:ossified? (5, Informative)

JMZero (449047) | more than 2 years ago | (#37174146)

No - the figurative sense of ossified is correct and common. Petrified is usually used figuratively to mean something like "scared stiff". Ossified, in common figurative use, means that something has become stiff and inflexible (often through disuse or rot) - like tissue that has become bone.

If you check a reasonable dictionary (eg. http://dictionary.cambridge.org/dictionary/british/ossify_1?q=ossified [cambridge.org]) you'll find this definition.

Re:ossified? (0)

Anonymous Coward | more than 2 years ago | (#37174684)

They're trying to say 'petrified' (in its figurative meaning) but they think it will sound more impressive if they incorrectly use a somewhat similar word.

Just like you tried to sound more intelligent by making up the word "figurative" which clearly doesn't exist. Snark Snark!

Of course one [thefreedictionary.com] exists in the dictionary, and the other one [thefreedictionary.com] does not. Oh wait...

It should have been read "OSIfied" (0)

Anonymous Coward | more than 2 years ago | (#37175260)

This has such a nice ring and the twist-in-tongue is really beautiful. But I'm afraid that almost no one here ever read X.200, the OSI reference model ...

Re:ossified? (1)

Alioth (221270) | more than 2 years ago | (#37175688)

I recommend WordReference:

English definition: http://www.wordreference.com/definition/ossified

Synonyms: http://www.wordreference.com/thesaurus/ossified

(WordReference will also give you the definition in a variety of languages).

What did the GT grad say to the VT grad? (0)

Anonymous Coward | more than 2 years ago | (#37174054)

Do you want fries with that?
The premise and solution provided seem a little whimsical.

According to Georgia Tech? (0)

Anonymous Coward | more than 2 years ago | (#37174092)

Maybe researchers at Georgia Tech?

Or did some idiot named Mr. Tech name his kid Georgia?

More outstanding editing...

seriously (0)

Anonymous Coward | more than 2 years ago | (#37174164)

Let FTP die? go f__k yourself

Network effect? (1)

michael_cain (66650) | more than 2 years ago | (#37174416)

Having skimmed the article, I am concerned that they seem to ignore the well-known network effect: the value of a network to those attached to it increases at a rate faster than linear as a function of the number of others attached. This property has generally meant that once a network-layer protocol is sufficiently well established, it is hard to displace; a winner-take-all situation. Telegraph network. Telephone network. In the data world, IP, ATM, and a handful of others slugged it out, and eventually IPv4 reached critical mass and "won".


We all like to party but ur there 2 learn dammit (0)

Anonymous Coward | more than 2 years ago | (#37174888)

"Anyone who has used the Internet for very long knows about its evolution by the number of extinct protocols that are no longer used"

I'd have to think real hard to name any besides gopher.

"For instance, FTP (File Transfer Protocol) used to be the only way to transmit files too large for SMTP (Simple Mail-Transfer Protocol), but clever programmers have devised ways of using server-side algorithms to deliver large files using HTTP (Hypertext Transfer Protocol). "

LOL It takes more ingenuity to send a large file via HTTP.

"Researchers at the Georgia Institute of Technology wondered if these evolution and extinction phenomena on the Internet were in any way similar to evolution and extinction in nature"

I often wonder: could a more useless question evolve from a group of monkeys armed with glitter bombs?

All successful protocols have the following traits in common:

1. They fulfill a real need.
2. They do not require disruptive change unless absolutely necessary.
3. They are simple and low cost.

The IETF is full of morons who disregard the above for their own academic reasons. As a result their work never sees the light of day.

"In particular, the six layers of the Internet have evolved into an hour-glass shape where protocols at the very top and bottom continue to evolve, but where those toward the middle have become stagnant, leaving unnecessary security-risk opportunities open for exploitation."

Your mom is an unnecessary security risk.

"In the middle layers, however, extinction has left only a few survivors, ossifying its structure. At the transport layer (layer three), TCP (Transmission Control Protocol) competes with only a few other alternatives, such as UDP (User Datagram Protocol),"

Let me guess: you found the list of registered IP protocols at IANA and drew some ridiculous conclusion about the "decline" of all those protocols that have never actually been used by anyone.

"and at layer five, the network protocol, IP (Internet Protocol) and ICMP (Internet Control Message Protocol) are used almost exclusively." ...
"Diversity resurfaces at layers five and six, where Ethernet and other data-link protocols such as PPP (Point-to-Point Protocol) communicate "

It is actually layer 73. When you "ossify" something you set its value to 73 just because.

"From running simulations with the EvoArch program these researchers have concluded that the only way to reintroduce diversity into the middle layers without inevitable extinctions is to create protocols that do not overlap with the others. By thus eliminating competition for the same resources, a rich set of middle layer protocols with increased security should be able to survive"

The reason we don't see new L4 protocols is that TCP and UDP are good enough, compared to the crap you have to go through to get end-to-end support for a new protocol implemented at the socket layer by all operating system vendors.

Our middle-layer animals have become ossified (0)

Anonymous Coward | more than 2 years ago | (#37174916)

The lynx, the tuna, and the lemming have become seriously ossified. They have overlapping functionality. Both the lynx and the lemming have legs. This is not acceptable. We must create a new lynx-lemming hybrid and kill off all remaining lemmings-only and lynxes-only. The tuna is an even bigger abomination. Much like the lynx and the lemming and probably the lynx-lemming hybrid, it has a brain and a central nervous system. However, it can swim. We must remove its brain. That way the tuna will swim and the lynx-lemming hybrids can follow each other off of cliffs, but will drown. This means the brainless tuna and the lynx-lemming hybrid will not be competing for the same ocean.

It really wasn't designed for security. (1)

NicknamesAreStupid (1040118) | more than 2 years ago | (#37175044)

More for integrity, but the service layer architecture is purely based on trust. It turns out that you can more readily do the most when you have trust, which partly explains the rapid growth of the Internet. However, a bunch of trusting souls makes an irresistible target for those who are willing to exploit their trust. I believe the only way to deal with them is to move faster than they can. FTP should have been enhanced to the point that few would use the older version, hence a smaller target. I don't mean secure FTP; I refer to features and functionality. There should be no reason to use HTTP for file transfers, but that is now more common than FTP. Perhaps it has evolved after all, into HTTP.

Too academic to be useful (0)

Anonymous Coward | more than 2 years ago | (#37175256)

Another study by academics who have no real world experience. Move along, nothing to see here.

End-to-End (0)

Anonymous Coward | more than 2 years ago | (#37175690)

These guys aren't aware of the end-to-end argument, I take it. Essentially, it's not possible to secure the mid-points of data communication as the mid-points have no idea of what they're transmitting. Only at the endpoints do you have enough information to properly secure the communication.

In essence, securing the middle layers can only give you a small amount of protection, at best, and at worst they can introduce a large overhead.
