
Engineers Ponder Easier Fix To Internet Problem

Soulskill posted more than 2 years ago | from the have-you-tried-turning-it-off-and-then-on-again dept.

The Internet 75

itwbennett writes "The problem: Border Gateway Protocol (BGP) enables routers to communicate about the best path to other networks, but routers don't verify the route 'announcements.' When routing problems erupt, 'it's very difficult to tell if this is fat fingering on a router or malicious,' said Joe Gersch, chief operating officer for Secure64, a company that makes Domain Name System (DNS) server software. In a well-known incident, Pakistan Telecom made an error with BGP after Pakistan's government ordered in 2008 that ISPs block YouTube, which ended up knocking Google's service offline. A solution exists, but it's complex, and deployment has been slow. Now experts have found an easier way."
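The flaw the summary describes is easy to sketch: a BGP router applies longest-prefix match and, absent any origin validation, believes whatever announcement it hears, so a bogus more-specific prefix beats the legitimate one. A toy Python illustration (the AS numbers and prefixes follow the widely reported 2008 YouTube incident; the routing-table code itself is purely illustrative, not real BGP):

```python
import ipaddress

# Toy routing table: prefix -> origin AS. A real router applies
# longest-prefix match and, without origin validation, believes
# any announcement a peer sends -- that is the flaw.
routes = {}

def announce(prefix, origin_asn):
    """Accept an announcement with no verification whatsoever."""
    routes[ipaddress.ip_network(prefix)] = origin_asn

def lookup(addr):
    """Longest-prefix match, as a router would do."""
    ip = ipaddress.ip_address(addr)
    matches = [n for n in routes if ip in n]
    return routes[max(matches, key=lambda n: n.prefixlen)] if matches else None

# The legitimate owner announces its block...
announce("208.65.152.0/22", 36561)   # YouTube's AS
# ...then a hijacker announces a more-specific prefix, and wins.
announce("208.65.153.0/24", 17557)   # Pakistan Telecom's AS
print(lookup("208.65.153.238"))      # traffic now goes to AS 17557
```

The hijack wins not because it is trusted more, but simply because /24 is longer than /22, which is why the bogus route propagated worldwide within minutes.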


Well???? (3, Funny)

Anonymous Coward | more than 2 years ago | (#39826619)

1. Tell everyone routing is broken.
2. Break it.
3. ???
4. Profit.

Please tell us so we can get to 4.

Re:Well???? (1)

Anonymous Coward | more than 2 years ago | (#39826827)

3. Start a laundry service that goes to network admins' offices and cleans their soiled underwear onsite?

After all, if their entire company is knocked off line, you know they'll shit their pants.

Re:Well???? (1)

Eponymous Hero (2090636) | more than 2 years ago | (#39826895)

but then the underpants gnomes will stop stealing them and the cycle will break

Re:Well???? (1)

Anonymous Coward | more than 2 years ago | (#39826939)

but then the underpants gnomes will stop stealing them and the cycle will break

Ugh! You environmentalists have always gotta spoil a great business!

Re:Well???? (4, Funny)

dgatwood (11270) | more than 2 years ago | (#39826965)

Or crawl through the barrage of bullets muttering something about uptime (obligatory xkcd [xkcd.com] ).

Re:Well???? (1)

Dwonis (52652) | more than 2 years ago | (#39830291)

3. Fake virus attack.

How do you think I lasted 30 years in IT?

Problem (4, Insightful)

girlintraining (1395911) | more than 2 years ago | (#39826635)

So they've finally solved the problem of repressive governments disconnecting citizens from the internet, preventing the free flow of information, being co-opted by large corporations, and a litany of jurisdictional issues that have caused many people's lives to be ruined?

"No, they just made it so this can only be done by those people, and not your people. Our people are, of course, better than your people, being authoritative, responsible, and all of that."

Exactly (-1)

Anonymous Coward | more than 2 years ago | (#39830061)

+1 exactly

Let's tell the story of Kemal Ataturk. A gay small-time colonel in the Ottoman army, at a time when being gay was much less popular among Muslims than today. He was a small (well, for a Turk he is kinda small), bitter, vengeful man, killing men under his own command because of where they came from. When he did amass power, it was immediately clear to everyone exactly what this man was: a genocidal racist maniac. And yet over 40 million Muslims followed him. The very victims of his genocide followed him. And they were right in doing so. Why? Because a racist genocidal maniac organizing a society around a genocide... is still better than Islam and its caliph. Alternatives matter.

And frankly, this man, genocidal maniac and racist traitor all packaged up in a small skin shouting for massacres ... is one of the greatest forces for good that the 20th century knew. He deserves the reverence most of the world still has for him.

It does not matter if someone satisfies some absolute standard of morality, flawless and perfect. If you want that, the only place for you to go is to be a religious extremist. For the rest of us, the worst monster in the world can easily be better than the alternatives available.

I don't really know of comparable figures from the history of America, so I can't give you examples of them. But I'm sure they exist. Evil, racist maniacs who are heroes because they improved the situation of everyone, surely you Americans can find someone from history class.

In this case: there is no large, perfect state that actually has some power and resources. The UN, aside from being a bunch of powerless, castrated, whining kids, routinely condones genocide, even aids and abets it when it suits its purposes, as in Sudan or China. That organisation's purpose is to lay claim to the world as a whole, in an extremely undemocratic manner. And the other alternatives? China?

Demanding perfect morality from everyone involved in such a world ... is lunacy, and will totally defeat anything you might want to accomplish.

Instead let's find a powerful organisation that itself is as free as possible. Powerful in the sense that, if need be, it can beat any other organisation, manu militari. Better yet if that organisation has financial interests in remaining so free, and a land filled with people who believe in freedom. A land with companies spreading the internet, and content on the internet paying taxes to it. And frankly, the fact that said organisation doesn't let you download Harry Potter without paying for it, and is looking for a way to enforce that rule, doesn't really register on the radar.

Freedom is the end goal, and America is not the end goal, though it may yet turn out to be just that some time in the future. But it is the next step. The internet under the American government's control is a good place, and there just simply is no alternative. Rejecting the entire world because everyone is flawed will not yield anything useful.

The summary didn't explain the whole story... (1)

Anonymous Coward | more than 2 years ago | (#39826641)

....so why read TFA?

Solution is called Rover, Uses Reverse DNS (4, Insightful)

billstewart (78916) | more than 2 years ago | (#39832539)

TFA wasn't very detailed either, but it mentions that the new protocol is called Rover. Project website is here. [secure64.com] The short summary is that you can use Reverse DNS to advertise the BGP Autonomous System Number (ASN) that's authoritative for your block of address space, and use DNSSEC to protect the Reverse DNS tree. If somebody else starts advertising that they've got a route to your address block, routers (or route servers sitting next to the routers, because your standard router doesn't actually know how to do this) can verify whether that's correct.
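The scheme can be sketched roughly as follows. The mapping from a prefix to its in-addr.arpa name is standard reverse DNS; the record contents here are a hypothetical stand-in for Rover's actual DNS record format, which the project site defines:

```python
import ipaddress

def reverse_zone(prefix):
    """Map an IPv4 prefix to its in-addr.arpa reverse-DNS name,
    one label per whole octet covered by the mask."""
    net = ipaddress.ip_network(prefix)
    octets = str(net.network_address).split(".")[: net.prefixlen // 8]
    return ".".join(reversed(octets)) + ".in-addr.arpa"

# A Rover-style deployment publishes a DNSSEC-signed record under this
# name saying which ASN may originate the block. The dict below stands
# in for that signed data; the real record type/format is Rover's own.
print(reverse_zone("192.0.2.0/24"))            # 2.0.192.in-addr.arpa
authorized = {"2.0.192.in-addr.arpa": 64496}   # hypothetical signed data

def plausible(prefix, origin_asn):
    """Route-server check: does the announced origin match the rDNS claim?"""
    return authorized.get(reverse_zone(prefix)) == origin_asn

print(plausible("192.0.2.0/24", 64496))   # True  -- announcement accepted
print(plausible("192.0.2.0/24", 64511))   # False -- possible hijack
```

Because the check is just a (DNSSEC-validated) DNS lookup, it can run on a route server beside the router rather than in the router itself, which is the "easier" part of the fix.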

Bad summary (3)

jeffasselin (566598) | more than 2 years ago | (#39826653)

", but routers don't verify that the route 'announcements.'" what?

Please fix this sentence, it hurts when I try to read it :-(

Re:Bad summary (2)

fahrbot-bot (874524) | more than 2 years ago | (#39826957)

It's a partial direct quote from TFA:

But the routers do not verify that the route "announcements," as they are called, are correct. Mistakes in entering the information -- or worse yet, a malicious attack -- can cause a network to become unavailable.

Re:Bad summary (2)

msauve (701917) | more than 2 years ago | (#39827055)

Bad summary - there are two links, both with text which describes problems with the current system, and neither of which would seem to point to this "Easier Fix" mentioned in the headline. Why not link some text which at least makes minimal sense, like "...experts have found an easier way."?

", but routers don't verify that [sic] the route 'announcements.'"

It's easy to understand your confusion - you're not reading what was written.

Playing grammar nazi today (1)

wbr1 (2538558) | more than 2 years ago | (#39826671)

...but routers don't verify that the route 'announcements.'

Verify that the announcements what? Are legit, make toast, or perhaps fart in their sleep?
My spelling and grammar are not good, but come on, edit those submissions and look professional!

Re:Playing grammar nazi today (0)

Anonymous Coward | more than 2 years ago | (#39826849)

Don't verify the route announcements. There. Are you happy? The sentence should not have quoted the term 'announcements', because that is the term used to describe the messages generated by the BGP protocol and the router-borne application. Next silly question?

Re:Playing grammar nazi today (2)

Belial6 (794905) | more than 2 years ago | (#39827431)

Do you really not know the answer to your question?

Re:Playing grammar nazi today (1)

PPH (736903) | more than 2 years ago | (#39829331)

Don't worry. We've just demoted itwbennett to the IT department. In charge of maintaining the routing tables.

The big fix... (3, Interesting)

icebike (68054) | more than 2 years ago | (#39826693)

The solution is to have routers verify that the IP address blocks announced by other routers actually belong to their networks. One method, Resource Public Key Infrastructure (RPKI), uses a system of cryptographic certificates that verify an IP address block indeed belongs to a certain network.

Well duh! You would have thought this was the case already. Why are we worrying about state sponsored cyber attacks if we leave a hole this big wide open?

Can any network gurus out there tell me if this problem still hangs around after ipv6? Does it get bigger?
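For reference, the RPKI origin-validation check quoted above works roughly like this (a simplified sketch after RFC 6811; real ROAs are cryptographically signed objects fetched from the registries, not plain tuples, and the example prefix and AS numbers are documentation/private-use values):

```python
import ipaddress

# Each ROA authorizes one ASN to originate a prefix, up to a max length.
roas = [
    (ipaddress.ip_network("203.0.113.0/24"), 24, 64496),
]

def validate(prefix, origin_asn):
    """Classify an announcement as 'valid', 'invalid', or 'not-found'."""
    net = ipaddress.ip_network(prefix)
    covering = [(maxlen, asn) for (p, maxlen, asn) in roas if net.subnet_of(p)]
    if not covering:
        return "not-found"      # no ROA covers this prefix at all
    if any(asn == origin_asn and net.prefixlen <= maxlen
           for (maxlen, asn) in covering):
        return "valid"
    return "invalid"            # covered, but wrong origin or too specific

print(validate("203.0.113.0/24", 64496))   # valid
print(validate("203.0.113.0/25", 64496))   # invalid: exceeds maxLength
print(validate("203.0.113.0/24", 64511))   # invalid: wrong origin AS
```

The "not-found" state is why deployment has been slow: until most address space is covered by ROAs, routers can't safely drop unvalidated announcements.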

Re:The big fix... (2)

zAPPzAPP (1207370) | more than 2 years ago | (#39826869)

So what if the IP does not belong to the router, but to another router behind this one, which is where the info originated?

This is a network after all.

If the internet was only 2 routers head-to-head, the problem would be trivial.

Re:The big fix... (1)

the_B0fh (208483) | more than 2 years ago | (#39827401)

Seriously? Go look up how DNSSEC works. You understand DNSSEC also covers subdomains right? And the DNS server that covers apple.com may not cover *.security.apple.com

How the hell are you upvoted, I have no clue.

Re:The big fix... (0)

Anonymous Coward | more than 2 years ago | (#39827821)

Seriously? Go look up how DNSSEC works. You understand DNSSEC also covers subdomains right? And the DNS server that covers apple.com may not cover *.security.apple.com

You propose making IP networking hierarchical like DNS? gg

IP is already hierarchical (1)

billstewart (78916) | more than 2 years ago | (#39832793)

IPv4 and IPv6 address space assignment is already hierarchical. IANA owns the whole space, and delegates it to the regional registries (ARIN, RIPE, etc.) which hand out blocks to customers, whether it's an IPv4 /8 to a big ISP (which hands out /24s, /29s, etc. to its customers) or an IPv6 /48 to an end user company (which assigns subnets to different buildings and LAN segments.)

And the Reverse DNS tree maps those hierarchies.

Re:The big fix... (2)

Trahloc (842734) | more than 2 years ago | (#39828983)

Did you just explain your analogy poorly, or have you never actually worked with BGP? You can announce your IPs over one uplink, switch them to another uplink, then move them to a third, all in a few minutes if you feel like it. You could tunnel your traffic across the internet and announce them from Japan if you're bored enough. DNSSEC and BGP have nothing to do with each other and should never be compared to one another. BGP is proof positive that anarchistic systems DO work, and trying to make it fall in line with some sort of structure is worse than the occasional screw-up that can happen by some fat fingers.

Re:The big fix... (1)

the_B0fh (208483) | more than 2 years ago | (#39829307)

I think it's me explaining it poorly. I was trying to point out you can send more shit along than just what's behind the current router.

How Rover (the fix) Works (1)

billstewart (78916) | more than 2 years ago | (#39832765)

Rover [secure64.com] uses the Reverse DNS tree to advertise records that say that some address block [e.g. 0.192.in-addr.arpa] belongs to some ASN [e.g. 65535]. And you can use DNSSEC to verify that the rDNS advertisement for the address block is valid. This lets your routers (or at least the router-server you've got sitting next to your routers) validate whether a BGP announcement they receive is plausible.

And BGP's not at all anarchistic - the ASN assignments and IP address block assignments are both owned by IANA or its delegates (ARIN, RIPE, etc.), which is why it's meaningful to discuss whether a route advertisement is legitimate. The problem this is trying to solve is that people have been announcing routes they don't legitimately own, whether it's the kind of fat-fingers classful addressing autosummarization mistake that takes your two Class C subnets and announces the Class A that contains them, or whether it's Pakistan's PTT advertising YouTube's address blocks to keep Pakistanis (and the rest of the world) from watching politically incorrect videos on YouTube. (The fat-finger version happens more often, which is why you'll see ISPs that own a /8 advertising it as two /9s, so they can use longest-match to protect their space.)
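The longest-match defence mentioned at the end can be shown in a few lines of Python (the prefixes here are private-use examples, not real allocations): by announcing its /8 as two /9s, the owner guarantees that an accidental announcement of the bare /8 always loses the longest-prefix comparison:

```python
import ipaddress

def best(prefixes, addr):
    """Longest-prefix match over (network, who-announced-it) pairs."""
    ip = ipaddress.ip_address(addr)
    hits = [(n, who) for (n, who) in prefixes if ip in n]
    return max(hits, key=lambda t: t[0].prefixlen)[1]

table = [
    (ipaddress.ip_network("10.0.0.0/9"),   "owner"),
    (ipaddress.ip_network("10.128.0.0/9"), "owner"),
    (ipaddress.ip_network("10.0.0.0/8"),   "fat-finger"),  # bogus announcement
]
print(best(table, "10.1.2.3"))      # owner -- a /9 always beats the /8
```

It's a purely defensive trick: it doubles the owner's table entries but makes the common fat-finger /8 announcement harmless (a hijacker announcing /10s, of course, would still win).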

Re:The big fix... (3, Interesting)

jd (1658) | more than 2 years ago | (#39827429)

Poisoned router tables will indeed "infect" other routers, radiating out until the correct route has a preferred weighting over the toxic route.

A wonderful example of this occurred in 1995 in England, when Manchester University's computer centre decided that it WAS America. (Now, I know they tend to have an ego problem there, but this was impressive.) Because redirecting traffic to Manchester required fewer hops and offered greater bandwidth than any other route to America, you can guess what happened next. It took quite some time for the engineers to clean up the mess, because the newly discovered Northwest Corridor^wWormhole had been discovered by so many routers and the information was being gossiped around. Just as with humans, once gossip starts it is very hard to stop - even when the source admits it was false.

There's not a lot you can do in a case like that. Once an authenticated router starts having delusions due to buggy software/hardware, there's not much any other router can do to determine that it truly is a delusion. Multipath helps (if you support dividing traffic between multiple routes, according to viability, you'll only lose a percentage of traffic, not all of it) but you'd need active path monitoring to go any further. Which would reduce bandwidth (which is already excessively limited) and increase complexity (the primary cause of hallucinating hardware).

Re:The big fix... (2)

Anaerin (905998) | more than 2 years ago | (#39826919)

Wouldn't an easier way be to attempt a trace to the advertised destination through the router that advertised it? Then if Router A says "You can get to 192.x through me!", and probes sent to the 192.x range through that router fail, it's obvious that Router A is sending false/misleading advertisements. Then it doesn't matter at all who owns the group, just that packets go through correctly. It would also let the network self-manage optimal routes, as it can measure the return speed for the (newly discovered) route and compare it with its existing route - if the newer route is faster, it becomes the preferred route; if not, it gets filed in a list in order of speed, fastest first. When the current route fails, the router can go back through its list to get the "next fastest route" and try that instead. To verify that it really is the correct route, and that it leads to the same servers, the router gets its packets to return through its existing route, thus verifying that the packets travelled to the same machines on the same range, even if they did go through the new route.

Re:The big fix... (1)

Urza9814 (883915) | more than 2 years ago | (#39827259)

I'm no network engineer, but if it's a malicious attack, wouldn't you have to assume that the attacker has at least some level of control over the router in question? And if so, if they're trying to force connections to a certain IP through that router, couldn't they just route those probe packets to themselves and spoof a response? Even if you encode information giving it a new route to go through in order to verify, couldn't the attacker just direct their response through that route as well (while maybe adding a few of their own hops first)?

Re:The big fix... (1)

Anaerin (905998) | more than 2 years ago | (#39827905)

Yes, they could. But adding in those extra hops would delay packets sent through, making connections via the malicious route slower and thus less preferable. If the router in question is your only option then you're screwed, but if you're multi-homed, this will give you some way of verifying that differing available routes lead to the same range. If it's a malicious spoofing then you won't be able to get a response from the real site from the malicious router.

Re:The big fix... (1)

the_B0fh (208483) | more than 2 years ago | (#39827501)

You understand that BGP is used to manage routes right? Someone like Google would have tons of peers. That means they advertise their network to those peers.

That means Comcast gets those route advertisements.

So does Time Warner.

So does Verizon.

And so on and so forth.

Now, you're in some dumbfucktown, but you're cool and hip and have redundant links to a couple of local ISPs. Guess where those ISPs get their links from? Time Warner, Verizon, etc. All of whom have a bunch of peering points...

Following how the route advertisement comes to you is like reading one of those "choose-your-own-adventure" books that used to be popular in the 80s, where you have to follow the story by jumping from page to page based on the decisions you made.

Re:The big fix... (2)

Anaerin (905998) | more than 2 years ago | (#39827825)

Okay, so you run dumbfucktown.net, and you have connections through comcast and vzw.

vzw starts advertising that they can get to thisnet.com faster with an announcement.

Your router:

  • Sends a traceroute to thisnet.com through vzw
  • Sends a traceroute to thisnet.com through comcast
  • Compares the two results, to see which completes, and which is faster.
  • Pings thisnet.com through vzw, with a comcast return route
  • Pings thisnet.com through comcast, with a vzw return route

If any of these fail, the new route is rejected completely. If they succeed, the route is classified and entered into the preference list based on its performance.

If a route starts failing/returning errors, the requests are retried on the next available route in the list, continuing down the list until it is exhausted.

All dumbfucktown.net cares about is if the packets get there, and how fast. It doesn't care who owns the route, how new or old it is, or anything else.

This isn't foolproof, but it would work, I believe.

Disclaimer: IANANE
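As a sketch only - this is the poster's proposal, not how BGP actually selects routes, and the probe functions are stubs standing in for real traceroute/ping - the ranking logic above might look like:

```python
def evaluate_route(dest, new_link, links, probe):
    """probe(dest, via_link) -> round-trip time in ms, or None on failure.
    Returns a preference list (fastest first), or None if the newly
    advertised link fails its checks and must be rejected."""
    results = {}
    for link in links + [new_link]:
        rtt = probe(dest, link)
        if rtt is None and link == new_link:
            return None          # new route fails: reject it outright
        if rtt is not None:
            results[link] = rtt
    # Preference list, fastest first; on failure, fall down the list.
    return sorted(results, key=results.get)

# Toy probe: pretend the new peer answers in 12 ms, the old one in 30 ms.
fake_rtts = {("thisnet.example", "vzw"): 12, ("thisnet.example", "comcast"): 30}
order = evaluate_route("thisnet.example", "vzw", ["comcast"],
                       lambda d, l: fake_rtts.get((d, l)))
print(order)                     # ['vzw', 'comcast']
```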

Re:The big fix... (2)

karnal (22275) | more than 2 years ago | (#39828323)

One of the issues with this is that you don't get a choice in the network that packets return on. For instance, let's say I have a comcast circuit and a tw circuit. Let's also say I start a conversation with someone.tw.com. In my limited experience, someone on the tw network will always send their packets back to me using my tw circuit - regardless of which circuit I use to initiate the traffic on.

There are ways to fix this issue - one way is to make sure the device that issued the traffic out to someone.tw.com is only advertised out of your comcast circuit and not your tw circuit. Usually when having multiple circuits I've not seen this - unless you do something like nat the outgoing traffic on the egress router. That has it's own sort of issues depending on what you're trying to accomplish.

Re:The big fix... (0)

Anonymous Coward | more than 2 years ago | (#39829217)

Disclaimer: IANANE

Yeah, we know.

Your plan will not work as too many variables are out of your control and subject to change at any time. Specifically:

  • You can't control the return routing. You can't control any routing once the packet leaves your network.
  • Other networks' filtering rules will likely break this scheme enough to cause trouble.
  • Traceroute and ping are not reliable enough (imho) to make this scheme work predictably

Your plan will add additional complexity and won't scale nicely without melting down some core routers.

The scheme does work on really small scales - like a multilinked end-point site can use something like this in a script to control routing policies etc.

Re:The big fix... (1)

sjames (1099) | more than 2 years ago | (#39830475)

Nobody allows source routing anymore. You CAN'T ping with a set return route. The reply will come back through whatever route the far end and its upstreams choose.

Re:The big fix... (1)

wwbbs (60205) | more than 2 years ago | (#39831601)

You're mixing up the different layers of networking. BGP operates at a much lower level than ping and/or traceroute, such that your statement is more misleading than informative.

It's more complex than that (1)

billstewart (78916) | more than 2 years ago | (#39833719)

First of all, the destinations aren't individual addresses, they're blocks of addresses, so you don't necessarily know a working address in the block. And even if you do know an address of a machine in that block, you don't know if it's willing to answer pings from you. (Both of which are really annoying, when you're trying to debug by hand :-)

And as Urza9814 points out below, if it's a malicious attack, the Bad Guy can be sure to answer (e.g. the Pakistan PTT probably put up a web page saying "You're not allowed to watch YouTube in Pakistan!", even though their route advertisement went to the whole world.) On the other hand, the random misconfigured router that's advertising a route to a whole /8 probably didn't do that, even before all the traffic for that /8 melted its T1 or E1 line.

And ping/traceroute response times aren't very predictable, especially on routers which make those low priority processes, so if a router takes an extra 10ms to answer because its CPU is busy with more important work, that doesn't mean it's 1000 miles farther away.

And routing isn't symmetric, it's often asymmetric, especially if you're messing around with testing routes to see if they're valid - if Router Z's best route to you goes through Router B, and you sent your test ping out Router C, you're still going to get the answer on Router B if you get it at all.

BGP has lots of metrics - raw speed isn't the only one, and often it's not the best one. Maybe your T1 line is the shortest-ping-time route to YouTube, but you want to use your 100 Mbps Ethernet for YouTube traffic anyway because the T1 doesn't have enough capacity, or maybe you want to use the cheap DSL line for web browsing traffic and save the T1 for database queries. Using raw speed as a routing metric is highly likely to lead to route instability and congestion collapse, as well as routers spending all their time calculating route changes (and thus being slower to respond to pings), chaos, anarchy, dogs and cats living together...

There are boxes out there which actually do track packet response times and reliability and use that to do routing, most commonly in hosting environments, and some people really like them. But they're a game for multi-homed end-users, not Internet backbones.

Re:The big fix... (3, Informative)

Vancorps (746090) | more than 2 years ago | (#39827021)

Problem is the same size. If I have two or more routes to the same network, then multiple routers are responsible for a given IP block. It's not really an attack vector, because you create peering agreements with your providers and they are each responsible for holding up their own end of the deal. As disruptive as BGP errors are, whether malicious or through fat-fingering, it's not really that big of a deal to fix once the problem is identified.

I would think a DNSSEC-like infrastructure could help remove the possibility of malicious route modifications, but in the end, if it's state-sponsored, then any system can be broken, even the proposed solution.

Re:The big fix... (1)

tearmeapart (674637) | more than 2 years ago | (#39827141)

BGP is at a layer higher than IP, so whether you are using IPv4 or IPv6 (or any other protocol), it does not matter.

(And yes, i realize that the implementations of BGP often involve sending TCP/IP and UDP/IP packets around, but the point is that switching to IPv6 is not going to change your BGP configuration or BGP tables.)

Re:The big fix... (4, Informative)

jd (1658) | more than 2 years ago | (#39827255)

BGP for IPv6 is essentially the same as BGP for IPv4, so if the protocol has a security hole then it will appear on both. However, because IPv6 is designed from the outset to be a hierarchical addressing scheme, address tables should end up being much smaller (even though each entry is longer) which in turn means that accidents should be less common. If it's easier to see the consequences of your actions, you (in theory) should be less likely to make mistakes.

Back in the days when IPv6 mandated IPSec, the problem of malicious router table poisoning simply wouldn't have existed -- all router protocol traffic would be encrypted and every link would be encrypted distinctly, where the keys used for encryption are securely exchanged in an encrypted form via IKE or IKE2 and where the key exchange encryption key is either a shared secret or a public/private key pair. It would not eliminate accidental corruption, but attacks would be out of the question.

Also back then, automatic address assignment, router and service discovery (via anycasting) and router-level IP mobility (the routers automatically redirected packets if you moved between networks) meant that manual router configuration was almost unnecessary. Virtually everything could be discovered - including MTU - and so nothing really needed to be configured. This would have eliminated manual errors. In fact, that was the whole point of all these automated mechanisms. There would be no manual entry and therefore there would be no manual errors.

Telebit added a nice touch, creating a routing protocol that permitted segments of the network to be transparent (essentially the same as NAT, only far more fine-grained and flexible), although it seems they made the grievous error of not making their protocol public. Certainly I've seen nobody attempt to use it and there has been no reference to it since Telebit went under. Further, the lack of NAT is something that has held back IPv6. Given that Telebit had a working NAT equivalent in 1996, this is incredibly annoying. (Apologies if they did make it public, but it is still true that it's not used and that complaints about a lack of NAT have been a serious issue - made all the more serious precisely because the problem was solved and the solution deployed very very early on.)

So the answer is "if IPv6 is deployed as close to originally intended as possible, the problem simply doesn't exist - in any form; but that if IPv6 is deployed as it is currently used, the hole will hang around although it will be a little smaller".

Re:The big fix... (1)

Lennie (16154) | more than 2 years ago | (#39827527)

"However, because IPv6 is designed from the outset to be a hierarchical addressing scheme, address tables should end up being much smaller (even though each entry is longer) which in turn means that accidents should be less common."

The hierarchical addressing scheme for IPv6 never happened in practice. IPv6 is routed the same way as IPv4.

But fewer prefixes are needed, because with IPv4 much smaller networks are allocated to providers. With IPv6, every provider network basically needs only a few prefixes, because the address space is so large.

Re:The big fix... (1)

jd (1658) | more than 2 years ago | (#39827587)

*SIGH*

*Beats ISPs over the head with a wet herring*

Ok, correction noted, though it's good to see that at least fewer prefixes are needed so I'm at least half-right.

*Beats ISPs over the head with a smoked herring, then lets the cats loose*

Re:The big fix... (1)

WaffleMonster (969671) | more than 2 years ago | (#39828331)

BGP for IPv6 is essentially the same as BGP for IPv4, so if the protocol has a security hole then it will appear on both. However, because IPv6 is designed from the outset to be a hierarchical addressing scheme, address tables should end up being much smaller (even though each entry is longer) which in turn means that accidents should be less common. If it's easier to see the consequences of your actions, you (in theory) should be less likely to make mistakes.

I don't think this will play out. Yes, there will be some aggregation thanks to generous allocation policies, but it won't do anything about the proliferation of shit routes by every dinky little corporation in the world and rich tech geeks who insist on being multi-homed. I would be very surprised to see the situation change much with IPv6.

Re:The big fix... (1)

unixisc (2429386) | more than 2 years ago | (#39829931)

Isn't IPSec still mandatory under IPv6? If it ain't, what made the IETF drop that requirement?

Re:The big fix... (0)

Anonymous Coward | more than 2 years ago | (#39830087)

IPSec is required to be implemented in IPv6 stacks. It is not, however, a requirement that applications running over IPv6 use IPSec. It's there if they want it; most of them don't.

Re:The big fix... (1)

dissy (172727) | more than 2 years ago | (#39830055)

Further, the lack of NAT is something that has held back IPv6.

Whaa? Why?

Instead of using a private IP block behind a NAT box, just use a real IP block and a stateful firewall box.
You get the identical effect NAT would have, with the bonus that you are guaranteed no other machine on the internet will be on a private network with the same IPs you use, so VPN tunnels won't break.

You also gain the bonus feature that with a single config line change, you can put one of your private "NATed" machines out in your DMZ and don't have to reconfigure anything else but one entry on the firewall. No renumbering, no rerouting, no vlan adjustments, no NAT/PAT forwarding exceptions.

Why on earth would you want to cripple your network so badly when a stateful firewall gives an identical effect?

Re:The big fix... (1)

TheLink (130905) | more than 2 years ago | (#39830911)

You also gain the bonus feature that with a single config line change, you can put one of your private "NATed" machines out in your DMZ and don't have to reconfigure anything else but one entry on the firewall

To people who care about security and know their stuff that is a bug not a feature. Think about what happens if one day someone fat-fingers the firewall config. The DMZ servers would be hardened so they might survive the exposure. The other machines on your private network are unlikely to be safe when accidentally exposed to the world. In many real world corporations there are usually servers that can't be locked down that tightly.

IPv6 proponents using such "features" as arguments for adopting IPv6 gives me the impression that they have no idea how things really work in the real world. Heck, the IPv6 bunch even took the trouble to reinvent DHCP poorly (talking about SLAAC, not DHCPv6) - they even left out DNS till 2007! Clueless idiots living in ivory towers.

Re:The big fix... (1)

Tacvek (948259) | more than 2 years ago | (#39832419)

You also gain the bonus feature that with a single config line change, you can put one of your private "NATed" machines out in your DMZ and don't have to reconfigure anything else but one entry on the firewall

To people who care about security and know their stuff, that is a bug, not a feature. Think about what happens if one day someone fat-fingers the firewall config. The DMZ servers would be hardened, so they might survive the exposure. The other machines on your private network are unlikely to be safe when accidentally exposed to the world. In many real-world corporations there are servers that can't be locked down that tightly.

Really? That's your argument?

If you are using a many-to-many NAT setup (as many reasonably sized companies would require), you are able to place up to one machine in the DMZ per external IP. So the mistake in question is already possible without giving up NAT.

Furthermore, many large companies have never used NAT, and they don't have these problems. They have only ever used public IP addresses and a stateful firewall. They avoid the issues you are talking about by being careful and having defense in depth. For example, having multiple firewalls can prevent accidentally placing a machine in the DMZ with a single mistake. You could make it such that an IP address must be explicitly listed in the edge firewall to be in the DMZ. If you also configure the inner firewall to require stateful connections for all machines, then the only way to accidentally expose a machine is to make two mistakes: placing an internal machine in the DMZ VLAN and also adding its IP address to the edge firewall, or managing to mess up the configuration of both firewalls simultaneously.

Re:The big fix... (0)

Anonymous Coward | more than 2 years ago | (#39833037)

In most NAT setups you can't accidentally expose entire private networks in the way you mentioned since you would not have enough public IP addresses.

SLAAC vs. DHCP (1)

billstewart (78916) | more than 2 years ago | (#39833323)

Harrumph. SLAAC wasn't a poor reimplementation of DHCP, kid. DHCP was that new stuff defined in 1993, though it was based on BOOTP and RARP (from 1984-1985, which let a workstation look up the IP address that was manually assigned to its MAC address.) SLAAC was based on the autoconfiguration capabilities in IPX and XNS, which were also around in the early 80s. If your office equipment didn't need to talk to anybody else, you could just plug everything in and it would Just Work. If you needed to talk between multiple subnets, either because you had multiple offices or distance limitations, you'd announce the subnet blocks, and everything would still Just Work. Compared to typing MAC addresses into RARP/BOOTP server tables, which is what we had to do for diskless workstations, or even compared to typing IP addresses into individual client PCs or diskful workstations, Autoconfiguration was really cool!

And NAT wasn't around until the mid-90s either; it took a while before it only broke lots of things instead of everything, and people stopped doing lots of the cool things that it broke.

That's not to say that SLAAC doesn't have its problems, such as protecting against bogus router advertisements, but bogus DHCP servers are also theoretically a threat.

Most of your assertions are wrong (1)

billstewart (78916) | more than 2 years ago | (#39833209)

First of all, IPSEC or its IPv6 equivalents don't help you here. The main problem is some router advertising "I've got a great route to ASN12345" or "My ASN 67890 owns 12.0.0.0/8" when that's not true, and some other router it's connected to believing those bogus advertisements and passing them along. If BGP were wrapped in IPSEC, that would mean that nobody could eavesdrop on the bogus advertisements, but the problem isn't protecting the transport layer for the bogus advertisements, it's verifying the advertisements themselves. IPSEC might be able to protect you against forged BGP messages on trusted connections, but that's a much less important attack. IPv6 itself does eliminate the "classful route autosummarization" version of fat-fingering that lets your antique Cisco router decide that since it sees two /24 subnets from a Class A /8 block, it should be efficient and advertise the Class A block to its upstream, which has resulted in things like MAE-East's traffic all pointing at one little incompetent ISP's T1 line or a small ISP in South America grabbing any connections from the rest of South America to AT&T (which is why most ISPs that do have a /8 block will also advertise a pair of /9s.)
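The /9 trick mentioned in the comment above - a /8 holder also advertising the two more-specific halves so that longest-prefix match beats a bogus or fat-fingered covering announcement - can be sketched with Python's `ipaddress` module. The prefixes here are just illustrative examples:

```python
import ipaddress

def more_specific_covers(prefix, depth=1):
    """Split a block into its 2**depth more-specific subnets, as an
    ISP holding a /8 might advertise a pair of /9 covers."""
    net = ipaddress.ip_network(prefix)
    return [str(s) for s in net.subnets(prefixlen_diff=depth)]

# Longest-prefix match means the legitimate /9 routes attract the
# traffic even if someone else announces the whole /8.
print(more_specific_covers("12.0.0.0/8"))
# -> ['12.0.0.0/9', '12.128.0.0/9']
```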

The "everybody's going to happily use hierarchical addressing" concept never played out, because it didn't address business reality or other end user needs. Businesses need their incoming connections to be dual-homed for reliability, and even if they don't have their own Provider-Independent address space, they still need to advertise their PA space from ISP A on their ISP B connection and vice-versa, so the route tables are going to be almost as large from subnet advertisements even without PI space. Reliability was critical even before the whole world moved onto the Internet, and even though ISPs have become much more reliable than they were in 1995, IPv6 support was pretty much experimental and spotty until, well, yesterday or maybe tomorrow, so you still can't be sure that your connection to ISP A will reach everybody in the world when your connection to ISP B is down. Businesses had three main reasons for wanting to own their own IPv4 address space - dual-homing for reliability, convenience when switching ISPs for business/pricing/etc. reasons, and being sure they could get enough address space if they switched ISPs. IPv6 doesn't help the dual-homing issue, though it does fix the getting-enough-space problems, and makes renumbering easier (though RFC1918, NAT, and DNS have eliminated 99% of that problem.) Switching ISPs happens on a timescale of months or at least weeks, so you can deal with the time it takes for DNS caches to expire and browser sessions to close, and IPv6 lets you use multiple IP addresses on the same interface, so maybe a coordinated early effort by the big ISPs, ICANN, IANA, and IETF could have helped the politics, but the ISPs and ICANN were dragging their heels for decades, Jon Postel was dead, and Cisco was way late in supporting IPv6. So that didn't happen.

Eventually some people realized that dual-homing really really was important, and that no amount of "But IPv6 was meant to be used hierarchically" would get people to give up their PI space or route advertisements, and tried to do something about it, giving us the appallingly ugly breakage called shim6, and don't hold your breath waiting for every piece of software in the world to adopt it before anybody can give up dual-homing. Meanwhile, the equipment that's been most stubborn about renumbering is IPSEC tunnel servers and clients, which tend to have hard-coded addresses instead of using DNS or other servers to find out who to talk to - I'm not convinced that IPv6 gear of the same vintage wouldn't have had the same problems.

IPv6 didn't eliminate manual router configuration, though it theoretically lets you move some functions from routers onto other servers. Autoconfiguration meant that you didn't have to assign IP addresses manually onto end-user PCs, and since DHCP hadn't yet taken over the world when that stuff was first written, that was pretty cool, and of course nobody had to worry about forged RA announcements or MITM attacks under XNS or Netware. But you still have to do manual address assignments if you're going to connect to the outside world, because you really do need to use your own address space and not somebody else's, and you still need to use BGP to advertise routes, and you need to use some mechanism for getting the IPv6 addresses for your servers into your DNS servers.

Telebit - yeah, they did lots of amazingly cool stuff back in the day.

Re:Most of your assertions are wrong (1)

jd (1658) | more than 2 years ago | (#39833935)

It would not eliminate accidental corruption, but attacks would be out of the question.

That deals with your comments about erroneous advertisements. If you don't read my posts, don't expect me to bother with more of a reply than what I've already said.

router-level IP mobility (the routers automatically redirected packets if you moved between networks)

That deals with your comments about multi-home. Transient IP addressing assumed everyone could be multi-homed. Again, it really helps if you read what you're replying to. As I've noted before, I was one of the first IPv6 adopters, I know about the early concepts because I was there.

Re:Most of your assertions are wrong (1)

billstewart (78916) | more than 2 years ago | (#39874735)

Sorry, didn't see your reply until today.

> > It would not eliminate accidental corruption, but attacks would be out of the question.
> That deals with your comments about erroneous advertisements. If you don't read my posts, don't expect me to bother with more of a reply than what I've already said.

I read it. I contradicted it, and explained why you were wrong. IPSEC and its equivalents protect you from an attacker forging the network-layer origin of a message. They don't protect you from an attacker giving you a message with bogus contents. If your router 2001:1111:: is talking BGP-over-IPSEC with router 2001:2222::, the IPSEC will protect you against router 2001:3333:: impersonating your friend 2001:2222::, as you say. But it's not going to protect you against 2001:2222:: sending you a BGP announcement that's deliberately bogus, and it's not going to protect you against 2001:3333:: sending a bogus announcement to 2001:2222::, who's dumb enough to believe everything he reads, and 2001:2222:: appending his ASN to the bogus announcement and passing it on to you. So you're still going to get that message about YouTube's address block being hosted in Pakistan, and you're going to get it across a nice secure IPSEC connection that nobody can eavesdrop on.

I also spent a lot of time reading IPv6 standards docs back in the early 90s, and again from the mid-2000s on, though I didn't actually implement any of it until recently. When you're talking about

(the routers automatically redirected packets if you moved between networks)

are you talking about special-case mobile IPv6, or are you talking about real implementations of IPv6 that a typical business web-hosting server might use, or are you talking about the hopelessly optimistic science fiction that was being written into early standards documents? If you're talking about Mobile IPv6, some of that did get developed, and maybe even deployed, though I haven't seen much of it in a while.

But if you're talking about the kinds of commercial service that JoesGarage.com can buy today from BigISP1 and BigISP2, and Joe's access line to BigISP1 gets run over by a backhoe or either his premises router or his ISP's access router fails, are you saying that packets addressed to JoesGarage.com's BigISP1 /48 address will automagically get rerouted over to his access line from BigISP2? And that it'll happen without Joe buying some Provider Independent address space? And without BigISP1 building a GRE or VPN tunnel from some server to Joe's port on BigISP2, or BigISP2 having a special arrangement with BigISP1 to carry each other's subnet routes for customers that want to pay extra for it? Tell me more, because I'm not seeing that in the market.

And yes, I picked BigISP1 and BigISP2, not SmallISP3, because I realize that you can get tunnels from Hurricane Electric across both of your big slow-moving ISPs. On the other hand, the kinds of failure scenarios we worried about in the early 90s included not only backhoes and access router failures, but also backbone failures on the part of the ISPs, which were one of the main reasons people dual-homed back then. Sprint and MCI occasionally just failed and fell off the rest of the net, and they were a large fraction of the commercial part of "the net". Trusting a single ISP meant you had to worry about technical failure and Chapter 11 Fade, though you might not be as scared of those today.

Re:The big fix... (1)

arglebargle_xiv (2212710) | more than 2 years ago | (#39829021)

The solution is to have routers verify that the IP address blocks announced by others routers actually belong to their networks. One method, Resource Public Key Infrastructure (RPKI), uses a system of cryptographic certificates that verify an IP address block indeed belongs to a certain network.

Well duh! You would have thought this was the case already. Why are we worrying about state sponsored cyber attacks if we leave a hole this big wide open?

The issue here is that trying to solve any problem using PKI leaves you with two problems. What's new in this case is that it doesn't require successfully deploying a PKI, which means it's not predestined to failure.

Another issue in this case is that a large percentage of BGP issues are due to glitches caused by legitimate ASes (misconfiguration, that sort of thing). Authenticating an erroneous message from a legit AS isn't going to help the problem.

Re:The big fix... (0)

Anonymous Coward | more than 2 years ago | (#39834151)

>> a large percentage of BGP issues are due to glitches caused by legitimate ASes(misconfiguration, that sort of thing).

Amen to that. The claim of 30% of sites being (in some way) unreachable, and the even more fantastic insinuation that involving the cert-granting industry in this will somehow solve the problem, are a little far-fetched. BGP is not for idiots. Brandishing a cert will not magically cure that Monday-morning sticky-fingers problem. Nor will it stop a determined attacker. BGP, to the initiated, is simple, elegant, and set & forget. This proposal is none of these. Granted, in this case a solution may be worth pursuing, but this ain't it. And by the time the various committees have co-opted all the money and power "interests" on the solution, we'll probably end up with something at least as horrendously broken as IPv6. Blegh.

huh (0)

Anonymous Coward | more than 2 years ago | (#39826731)

WUT?! =O

Those IETF Punks (1)

Anonymous Coward | more than 2 years ago | (#39826975)

FTFA - "The specifications are currently in "internet daft" status before the Internet Engineering Task Force."

But will it make the Internet Harder, Faster, and Stronger?

-AC

Re:Those IETF Punks (0)

Anonymous Coward | more than 2 years ago | (#39827993)

FTFA - "The specifications are currently in "internet daft" status before the Internet Engineering Task Force."

But will it make the Internet Harder, Faster, and Stronger?

-AC

I guess the next step is "internet not completely crazy" status.

Congressional Approval (4, Funny)

ComputerInsultant (722520) | more than 2 years ago | (#39827025)

Do these engineers have approval from the US government to make these changes? Changes like this could break the ability to break the Internet. Can't have that.

Piggy backing on DNS is not a good idea. (2, Informative)

Anonymous Coward | more than 2 years ago | (#39827283)

Their suggested solution "stores the legitimate route information within the DNS". They think a centralized DB is better than BGPSec!? How stupid.

With the current situation there is at least a trust relationship: if a router consistently provides bad origins, you can remove it from your list of routers to listen to announcements from. Storing the routing data in DNS will give the core internet routing technology all the problems DNS has (more government control; SOPA, PIPA, anyone?) and will eliminate much of the benefit of a distributed trust network.

No, the right way to do this is to make the ISPs bite the bullet and implement BGPSec. Unfortunately there is little incentive for ISPs to implement this.

Re:Piggy backing on DNS is not a good idea. (1)

Lennie (16154) | more than 2 years ago | (#39827631)

Actually there is a lot of incentive, but the problems with BGPSec are:
- you need to change the software on the router
- the vendor has to add it
- it will use a lot more CPU on your router, which can cause more problems than it solves
- DNSSEC and BGPSec are brittle, many mistakes have been made, just look up

and so on.

Re:Piggy backing on DNS is not a good idea. (0)

Anonymous Coward | more than 2 years ago | (#39831957)

All ROVER is going to do is inform a router that a specific path is good. Let's say some oppressive country advertises a route claiming that all (US) .gov and .mil traffic goes through its routers (which has actually happened). ROVER isn't going to fix BGP as it stands today. It's going to validate the path, just like DNSSEC validates a name-to-IP-address mapping.

An invisible man-in-the-middle is a serious problem.

Re:Piggy backing on DNS is a good idea. (1)

billstewart (78916) | more than 2 years ago | (#39833475)

You're misunderstanding how this works. It's not using DNS for routing. It's using the Reverse DNS tree to hang ASN ownership records onto IP address blocks, so you've got a mechanism for validating announcements about ASNs owning those blocks, and that ownership is already hierarchical, which is why there's a concept of "legitimate" here at all. The Reverse DNS tree has records about names such as 2.0.192.in-addr.arpa, which are about IP addresses 192.0.2.*, and is maintained (or not:-) by the people who actually own that address space. And ISPs do need to bite the bullet and administer their rDNS and DNSSEC already.

The problem we're trying to solve is that a "distributed trust network" makes it too easy to trust any random advertisement that comes down the wire, which breaks things if that advertisement is bogus. For instance, if some router says they've got a route to 192.0.2.0/24, you're going to look at it, see if it's better or worse than the route you've been using, and pass it along. But if they're lying, and they've got a better metric than your alternative routes, and you trust them, suddenly all your traffic for example.com is going into their black hole or to their man-in-the-middle server or whatever. This gives you a way to check whether their advertisement is plausible, and reject it if it's not. Every couple of years this happens with a big chunk of IP address space and some big ISP loses connectivity to South America or MAE-East or whatever, but it's probably happening on a small scale all the time.

This doesn't affect what governments and their henchpersons can do to the net. While we'd all like to believe that IP/IPv6 address space comes from the Internet Gods, in fact IANA, ARIN, RIPE, etc. are subject to interference by governments, though IP addresses and the corresponding rDNS names like 2.0.192.in-addr.arpa aren't very interesting to the Trademark Mafias and usually get left alone. But if they do decide to have a trademark lawsuit about who owns 31.3.3.7, or IPv6 2001:face:b00c::/48, it's going to affect the IP address block, not just the DNS names 7.3.3.31.in-addr.arpa or c.0.0.b.e.c.a.f.1.0.0.2.ip6.arpa.
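To make the reverse-tree naming concrete, here is a minimal sketch of the mapping the comment above describes. The actual ROVER drafts use a more elaborate record naming that also encodes the prefix length, so treat this octet-aligned version as illustration only:

```python
import ipaddress

def reverse_dns_name(prefix):
    """Map an octet-aligned IPv4 prefix to the in-addr.arpa name
    under which ROVER-style origin records would hang."""
    net = ipaddress.ip_network(prefix)
    if net.prefixlen % 8:
        raise ValueError("this sketch handles /8, /16, /24 only")
    octets = str(net.network_address).split(".")[: net.prefixlen // 8]
    return ".".join(reversed(octets)) + ".in-addr.arpa"

print(reverse_dns_name("192.0.2.0/24"))  # -> 2.0.192.in-addr.arpa
print(reverse_dns_name("12.0.0.0/8"))    # -> 12.in-addr.arpa
```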

Ok people, I'll explain. (0)

Anonymous Coward | more than 2 years ago | (#39828049)

It's just like DomainKeys and other systems that use DNS as a distributed database. And just as DNSSEC prevents untrusted DNS caches from interfering with DNS information, this prevents untrusted routers from interfering with routes.

It's all rather simple if you think about it and keep saying 'distributed database' plus 'signed route relationships'.
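The "signed route relationships" idea can be sketched in a few lines. This toy uses stdlib HMAC with a shared key as a stand-in for the public-key signatures DNSSEC actually uses, and the key, prefix, and AS numbers are all made up:

```python
import hashlib
import hmac

# The address-block holder publishes a record binding a prefix to the
# AS allowed to originate it, plus a signature over that record.
ZONE_KEY = b"holder-zone-signing-key"  # hypothetical key material

def sign_record(prefix, origin_asn):
    msg = f"{prefix} origin AS{origin_asn}".encode()
    sig = hmac.new(ZONE_KEY, msg, hashlib.sha256).hexdigest()
    return msg, sig

def verify_record(msg, sig):
    good = hmac.new(ZONE_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(good, sig)

record, sig = sign_record("192.0.2.0/24", 64500)
print(verify_record(record, sig))                              # True
print(verify_record(record.replace(b"64500", b"64666"), sig))  # False
```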

fat fingering or malicious? (1)

Anonymous Coward | more than 2 years ago | (#39828307)

"Any sufficiently advanced incompetence is indistinguishable from malice."

Dependency inversions (1)

WaffleMonster (969671) | more than 2 years ago | (#39828497)

This does not really seem to solve much of anything. It just makes detecting and chasing down potential routing problems much easier.

What good does querying a DNS server do if the routes have been hijacked and the hijackers took the extra step of making DNS inaccessible? Is a router really going to pull a route if it can't access a DNS server? And even if it would, good luck beating down the complex failures that result.

I think a combination of sane filtering practices and centrally managed but globally duplicated authoritative routing data is a better approach, if your goal is to actually solve the problem rather than make it easier to detect and mitigate after the fact.

Re:Dependency inversions (0)

Anonymous Coward | more than 2 years ago | (#39828751)

What do you think the routers would be querying the DNS for? Authoritative routing data stored in a centrally-managed, globally-duplicated system would be my guess.

Re:Dependency inversions (0)

Anonymous Coward | more than 2 years ago | (#39832341)

What do you think the routers would be querying the DNS for? Authoritative routing data stored in a centrally-managed, globally-duplicated system would be my guess

DNS is not centrally managed nor is it globally duplicated.

Story summary (1)

Impy the Impiuos Imp (442658) | more than 2 years ago | (#39828521)

"Gee, hardly anybody is using the new, super-duper-complicated system to securely verify this stuff. Oh wait, we already have a secure way to do this in place. Let's use that!"

What if...? (2)

thepacketmaster (574632) | more than 2 years ago | (#39828651)

The existing BGP protocol definitely has some security issues that need to be fixed, and a PKI solution sounds great. However, is it wise to use a technology to verify route announcements when that technology relies on proper routing being in place so it can communicate with DNS servers? On the surface this seems like a catch-22 or chicken-and-egg situation. I haven't had a chance to read the draft yet, but hopefully this has been taken into account. Perhaps the article just didn't explain it well enough.

Too good to be true? (0)

Anonymous Coward | more than 2 years ago | (#39830403)

The advantages with ROVER are that no changes need to be made to existing routers, and it can work alongside RPKI. "The whole infrastructure of securing the answer [of whether the route is legitimate] already exists," said Gersch, who has authored two specifications for how to name a route and the type of record that could be inserted into the DNS.

I read statements like this from an engineering perspective and all I can think is: this sounds too good to be true. We should use this because we won't need to do anything to make it work? I suddenly feel like I'm talking to someone in sales and marketing. It may be a bad transcription by the article's author, but it sounds like someone trying to sell me something.

The first statement is wrong on the face of it. Changes will have to be made to routers in order to make decisions based on the new data; magic doesn't just make it work.

The second statement is partially true. But it would be more accurate to say that a distributed database (DNS) already exists in which we can try to store this data. The way it is stored and the software for pulling the data, however, don't already exist. So parts of the infrastructure exist, but the implied whole infrastructure certainly does not. At a minimum, software changes, and possibly hardware changes, will have to be in place throughout the internet for this to work. I'm not sure how that is any different from any other solution to this problem.

There's a working solution out there: RPKI (2)

8-Track (581029) | more than 2 years ago | (#39830441)

I think the paragraph that says RPKI is complex and deployment has been slow is a lie, quite frankly. The five Regional Internet Registries (RIRs) have been heavily involved in the RPKI system, because they are the authoritative source on who the legitimate holder of a certain IP address block is. They launched a service to facilitate RPKI on January 1st, 2011, and adoption has been incredibly good for such a cutting-edge technology (compared to IPv6 and DNSSEC, for example). Since the launch, more than 1500 ISPs and large organizations world-wide have opted in to the system and requested a resource certificate.

The service that the RIRs offer, along with several open source packages by third parties for management, ensures that network operators only have to worry about entering data and not any of the crypto, making it robust and easy to use. With their certificate, an ISP can make a validatable claim – known as a Route Origin Authorisation (ROA) – about their prefixes, stating "As the holder of these IP prefixes, I authorize this Autonomous System to originate them".

There are over 800 ROAs in the global system, describing more than 2000 prefixes ranging from /24s to /10s, totaling almost 80 million IPv4 addresses. All in all, RPKI has really good traction, and with native router support in Cisco, Juniper and Quagga, this is only getting better. Global deployment statistics can be found here: http://certification-stats.ripe.net/ [ripe.net]
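The ROA semantics the comment describes can be sketched roughly as follows. This is a simplified version of origin validation (a ROA is modeled as a prefix, a maximum permitted length, and an authorized AS); the prefixes and AS numbers are documentation examples, not real allocations:

```python
import ipaddress

def validate_origin(announced, origin_asn, roas):
    """Return 'valid' if some ROA authorizes this origin at this length,
    'invalid' if a covering ROA exists but none matches, and 'unknown'
    if no ROA covers the announced prefix at all."""
    net = ipaddress.ip_network(announced)
    covered = False
    for roa_prefix, max_length, asn in roas:
        if net.subnet_of(ipaddress.ip_network(roa_prefix)):
            covered = True
            if asn == origin_asn and net.prefixlen <= max_length:
                return "valid"
    return "invalid" if covered else "unknown"

roas = [("192.0.2.0/24", 24, 64500)]                    # hypothetical ROA
print(validate_origin("192.0.2.0/24", 64500, roas))     # valid
print(validate_origin("192.0.2.0/24", 64666, roas))     # invalid: wrong origin AS
print(validate_origin("198.51.100.0/24", 64500, roas))  # unknown: no covering ROA
```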

There's one problem with current plans ... (1)

garry_g (106621) | more than 2 years ago | (#39830511)

One reason ISP experts are pretty skeptical about current plans for securing BGP (apart from the fact that any such plan has to be deployed widely to actually work) is that they use a central key for signing the routes. That means governmental agencies could easily shut down whole parts of the internet by revoking a signed key, effectively removing a prefix/route from the global routing table. Any technically sound protocol (one safe from "the powers that be") would therefore need some public repository that cannot be controlled.

ROVER is ridiculous (1)

Anonymous Coward | more than 2 years ago | (#39830829)

I saw ROVER presented a couple weeks ago at an Internet conference (RIPE-64) and I hope any engineer that actually looks at the protocol realizes it's hacks upon hacks and doesn't solve anything.

It's really not worth a news article, so I'm just trying to warn anybody before they get their hopes up. I'll just leave you with the slides if you want to check for yourself:

https://ripe64.ripe.net/presentations/57-ROVER_RIPE_Apr_2012.pdf [ripe.net]

what is the problem? (0)

Anonymous Coward | more than 2 years ago | (#39831045)

I don't see anyone here mentioning the fact that there are already policy tools at the BGP level. Hell, there is even a routing policy database where you can find which routes belong to which AS or group of ASNs, and filter the received announcements accordingly. The Pakistan route injection was NOT a BGP fault; it was caused by a relaxed routing policy at some Tier 1 ISPs. So I only see RPKI as another way of making money off suckers rather than a real solution. End of story, nothing to see here, move along.
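The filtering this comment advocates amounts to generating a prefix filter for each peer from the registry's route objects. Here is a toy sketch; the route objects and the Cisco-like output syntax are made up for illustration (real operators pull this data from the RIR whois/IRR databases, e.g. with tools like bgpq3):

```python
# IRR-style route objects: (prefix, origin AS). Hypothetical data.
route_objects = [
    ("192.0.2.0/24", 64500),
    ("198.51.100.0/24", 64500),
    ("203.0.113.0/24", 64501),  # belongs to a different AS
]

def prefix_filter(asn, objects):
    """Emit prefix-list lines permitting only the prefixes registered
    to the given origin AS."""
    name = f"AS{asn}-IN"
    return [f"ip prefix-list {name} permit {prefix}"
            for prefix, origin in objects if origin == asn]

for line in prefix_filter(64500, route_objects):
    print(line)
```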

ROVER is a SPOF! (0)

Anonymous Coward | more than 2 years ago | (#39831617)

If DNS fails, ROVER fails too. This is a very bad idea!!! We have already seen the problems law enforcement has caused for DNSSEC. RPKI is a nightmare for managing a network. Both solutions are prone to cause more trouble than they prevent. The best way to gain some protection is to use prefix filter lists generated from the routing objects in the whois databases of the RIRs. Also set prefix limits. Have some kind of sign-off for larger changes in routing objects.

The problem is not a lack of security solutions, it's the lack of use of the available solutions. A lot of ISPs still don't use prefix filter lists. Shame on them! Do you believe those ISPs will implement RPKI or ROVER? It's too complex for them.

And don't forget that the routers have to handle the chosen security solution for 400k+ routes. It's not hard to overload a router's CPU with a lot of routing changes. If you add complex security solutions it will be easier than ever to do so, by mistake ;-)
