Is the Internet Shutting Out Independent Players?

ikekrull asks: "After looking to see how I could set up my company's LAN to be multi-homed, I found that it would be next-to-impossible for me to do this. 'Providerless' IP addresses are no longer allocated to anybody in this part of the world (New Zealand) by APNIC, unless you meet requirements (financial and political) that are pretty much unmeetable by anyone but a large ISP. Does this put control of the entire internet further and further into the hands of large corporate players, and is anyone particularly interested in changing this situation?"

"ISPs aren't advertising routes for competing ISPs, and since IP blocks are heavily filtered upstream, this won't do much good anyway. The reasons for this are clear (routing table growth was getting way out of hand), hence the introduction of CIDR, and the allocation of IPs to ISPs, with a resulting lockout on availability of routable IP space to individuals or smaller groups.

With the availability of IPv6, and the cost of RAM, I find it somewhat hard to believe that either IP address blocks are scarce, or that the size of routing tables is unmanageable any more. This might have been true with an 8MB Cisco 10 years ago, but surely it would be a negligible cost to put 1-2GB of RAM on even a reasonably budget router at today's prices.

Obviously, IPv6 isn't really here yet, but I would like to think that when (if) it arrives, we will see a more open routing system.

Is anybody working on returning some kind of equal standing to 'the little guys' when it comes to internet routing infrastructure, and how a more 'open' system could work in practice on tomorrow's (or today's) internet?"

This discussion has been archived. No new comments can be posted.

  • At least in the states - my employer (AT&T) offers multi-homed and backup connections at T1 speed and above. (Routing is via BGP4.) You need to accept IPs from one ISP or another, so they're not really "yours," but it still works. I presume Aussie ISPs do the same thing, but I may be wrong.
  • by Anonymous Coward on Friday November 30, 2001 @01:09PM (#2637103)

    Here - 217.53.98.174 - doesn't seem to be responding; use that one.
    • Neither does 192.168.10.73 -- in fact, you could have all of 192.168.10!
    • Better yet, while trying to prove to a manager that some of our NT (MCSE) admins don't have a clue, this was heard:

      NT guy: "Something's wrong with the network, I can't access my share drive."

      LAN guy: "Can you ping your default gateway?"

      NT guy: "What address is that?"

      LAN guy: (mumbling something about bodily functions and low SAT scores) "It's 172.358.44.261"

      NT guy: (remember, he passed Microsoft's TCP/IP course) "Nope, it doesn't respond."
  • uhm... (Score:2, Troll)

    by Anonymous Coward
    The person that wrote this has zero clue of what's involved with routing. He needs to go read books before submitting stuff like this.

    "just add a gig or two of ram to a cisco router"
    hahahaha

    Also, IPv4 is running out of IPs. Plain and simple. Therefore, these IPs need to be given to people that have a clue what to do with them and not piss them away. I work for a major webhosting company and we have to fight for our IPs every time we need more. It's getting harder and harder for us. Luckily we own our entire Class B now, but I know soon a time will come when we don't... heh

    Research before whining to /.
    • I was hired once to supervise a Windows admin who used a private class B for everything... we were on 172.x.x.x addresses, which allowed for 16 Class B's. So we had a class B for our main office (70 nodes), a class B for the branch office (15 nodes) ... a class B for the colo (6 nodes)... a class B for each VP's home PC.... say what?

      He was convinced that they'd be faster if we didn't subnet 'em.
    • Re:uhm... (Score:2, Informative)

      by boog3r ( 62427 )
      you are a webhosting company and you need a /16?

      holy crap! have you guys ever heard of http1.1? the reason you have such a hard time getting ips is that arin wants to cut down on webhosting companies that do not use http1.1.

      i have to agree with arin on that too, with correct dns handling, http1.1 is a very viable method for webhosting and reduces both need and use of ip addresses.
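To make the HTTP/1.1 point concrete, here's a minimal sketch of name-based virtual hosting using Python's standard library. The hostnames and page bodies are made up for illustration; the mechanism is simply that the client's Host header (mandatory in HTTP/1.1) selects the site, so many sites can share one IP address.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical hostnames; with HTTP/1.1 all of them share one IP address.
SITES = {
    "www.example-a.com": b"Site A",
    "www.example-b.com": b"Site B",
}

class VirtualHostHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The Host header names the virtual host the client wants,
        # so a single address can serve many independent sites.
        host = self.headers.get("Host", "").split(":")[0]
        body = SITES.get(host)
        if body is None:
            self.send_error(404, "unknown virtual host")
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

# To serve for real: HTTPServer(("", 8080), VirtualHostHandler).serve_forever()
```

A real deployment would of course use Apache-style name-based virtual hosts rather than a hand-rolled server, but the dispatch logic is the same.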

      btw, ipv4 is not exactly running out of ips soon. the ips are still there. they are running out of _allocatable_blocks_ of ips. if you look at the lower networks (4.0.0.0/8 is one) the utilization of ips is horrendous. older companies and organizations have been camped on huge amounts of ip addresses for the last 10-15 years. if arin bit the bullet and forced these internet first-comers (and heavy wallets) to relinquish ip space we would see the 'ipv4 crisis' go away.

      like you said, "Research before whining to /."
      • Yep. For example... (Score:2, Informative)

        by Wntrmute ( 18056 )
        older companies and organizations have been camped on huge amounts of ip addresses for the last 10-15 years. if arin bit the bullet and forced these internet first-comers (and heavy wallets) to relinquish ip space we would see the 'ipv4 crisis' go away.

        I'll say...

        arachne:ckloote {101} whois -a 40.0.0.0
        Eli Lilly and Company (NET-LILLY-NET)
        Lilly Corporate Center
        Indianapolis, Indiana 46285
        US

        Netname: LILLY-NET
        Netblock: 40.0.0.0 - 40.255.255.255

        Coordinator:
        Eli Lilly and Company (ZE16-ARIN) hostmaster@lilly.com
        317-277-7000

        Domain System inverse mapping provided by:

        DNS1I.XH1.LILLY.COM 40.255.22.1
        NS1.IQUEST.NET 198.70.36.70
        AUTH40.NS.UU.NET 198.6.1.18
        AUTH62.NS.UU.NET 198.6.1.19

        Record last updated on 17-Jul-2001.
        Database last updated on 29-Nov-2001 19:56:47 EDT.

        Yeah, Eli Lilly is a big company, but please tell me why they need their own class A? They don't, but they managed to get it back in the early days, and won't give it up. I'm sure there are many more cases like this.
        • HP has 15.x.y.z as well, along with a number of smaller class 'B's and some class 'C's.

          Considering HP hype their 'citizenship': ("To honor our obligations to society by being an economic, intellectual and social asset to each nation and each community in which we operate."), and the fact that they're already proxied and firewalled to buggery, I think they really should consider giving net 15 back.

      • I personally like Virtualhosts and HTTP/1.1; of course my clients have a much different idea. We have servers using thousands of IPs, one for each webpage hosted on the machine. Why? Because ignorant business owners (clients) want it.

        These people go, "I'm sharing what with who?" and decide they need their own IP address. What's even worse, IMHO, are those who run shell boxen and need an IP address for every person because they want to have reverse dns on IRC.

        At work, I must admit it is nice to have an IP address for each of my servers.. but really, I should set up NAT. Why should I waste IP address space for a laptop?

        I really think hosting companies should tighten up on IP usage more. Of course, they offer them and people will keep buying them as long as their clients beg for them.. even if they don't really need it.
      • Re:uhm... (Score:4, Interesting)

        by NoBeardPete ( 459617 ) on Friday November 30, 2001 @03:05PM (#2637797)


        Here's an example of the kind of ridiculousness that results from some institutions having lots of IP addresses. I'm a student at MIT, which has all of net 18. I've been the network administrator for my fraternity for a couple years, which uses all of 18.216.xxx.xxx. That's right, we've got some 64k IP addresses, of which maybe 60 are assigned, and 40 actually point to a running computer. That means 99.9% are being wasted.
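As a quick sanity check on that figure, a /16 like 18.216.0.0/16 holds 2^16 addresses, and with roughly 40 of them answering, the wasted fraction works out as follows (a back-of-envelope sketch, using the numbers from the comment above):

```python
# A /16 netblock holds 2^16 addresses; about 40 are in use per the parent.
total = 2 ** 16          # 65,536 addresses in the fraternity's /16
in_use = 40              # machines actually answering
wasted = 100 * (total - in_use) / total
print("%.2f%% wasted" % wasted)   # -> 99.94% wasted
```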

    • Well, the lookup part should be doable. It's entirely reasonable to keep a route for every single possible /24 in an array. No fancy CEF lookup table or B-tree, nothing fancy. Just allocate an array for every single /24. There are only 16M of them! Let's say you need 64 bytes per route to keep the state you need (next hop, outbound interface, route source, timeout, etc.) and you are only using 1GB of RAM! 1GB of DDR RAM is worth less than the power cords for a high end router. In fact, I think it will be realistic to store host routes for all 2^32 addresses within a few years. Sure, 512GB or so seems like a lot of RAM today, but 512MB seemed like a lot only a few years ago.
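The arithmetic in that flat-array idea checks out; a sketch of the index scheme (the 64-byte entry size is the poster's assumption, not a real router's):

```python
# Back-of-envelope version of the flat-array route table: one slot per
# possible /24, indexed directly by the top 24 bits of the address.
ROUTE_ENTRY_BYTES = 64      # next hop, interface, source, timeout, ... (assumed)
SLASH24_COUNT = 2 ** 24     # every possible a.b.c.0/24

table_bytes = SLASH24_COUNT * ROUTE_ENTRY_BYTES   # exactly 2^30 = 1 GiB

def slash24_index(ip: str) -> int:
    # Lookup is a single array index: drop the host byte of the address.
    a, b, c, _ = (int(x) for x in ip.split("."))
    return (a << 16) | (b << 8) | c
```

Whether a line-rate forwarding path could afford a DRAM hit per packet is a separate question, as the reply below the original comment points out.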

      Getting rid of the larger net blocks will make better use of available address space, not worse. The addresses are not being 'pissed away'. From an allocation point of view, if I have a /24 it doesn't matter if I got it from ARIN or from my ISP. A /24 is being used either way. Any ISP that's going to last is going to sit on a significant portion of its allocated addresses for future growth. I would argue that the waste involved in every major ISP keeping its own 'reserved' pool is greater than the waste involved in having only ARIN keep a 'reserved' pool and allocate /24s out dynamically. Hell, you admit yourself you are sitting on a /16! How much of that is actually being used by customers, and how much is being 'pissed away'? It's just like older MacOS where each application had a fixed amount of memory allocated to it. It led to huge amounts of waste as each app had to have memory allocated for the worst case scenario. This is like today where huge fixed chunks are handed out to ISPs to manage. They are all going to sit on a bunch of unused space 'just in case'. On the other hand, any decent OS allocates memory a page at a time on demand, leading to better use. The same could be done with address space. Give each organization the /24s it needs. If they are not using them, yank them back.

      Now, whether or not BGP can keep up with all the updates is a different story. But with the vast amounts of bandwidth between core routers and GHz processors cheaply available, I think a box could be built to handle it. Especially given that most routing is done by ASICs and the CPUs sit around at 2% utilization most of the time.
      • A fast router typically will not use DRAM due to the high latency involved in the lookups. There's also more to it than just looking up the destination address and forwarding. There's also access control lists (ACLs), multicast routing, and so on, which do not work in your scenario. Also, how long does it take to populate a class A route into the table? There's also overlapping routes and source routing as well.

        I'm sorry, but routing is often not as simple as just looking up the destination address and forwarding the packet, especially when you're trying to do this to 10+ million packets/second.

        I'm working on a product now that handles well over a million packets per second and has to perform some rather complex routing, besides handling many different encapsulations and mapping each source to a potentially different routing table (there can be multiple routing tables internally).
  • by Anonymous Coward on Friday November 30, 2001 @01:13PM (#2637133)
    Having a multi-homed network is extremely stressful on the rest of the Internet, and you're going to have to pay for the privilege.

    Yes, routers have gotten a lot more advanced, but if every Tom, Dick, and Harry wants to have their own APNIC-assigned IP block, it is going to cost a lot of money for the backbone providers and everybody else to accommodate the routing tables. Unless you're big enough to make a reasonably large dent in their bottom lines, they aren't going to care about making you happy because it's just too damn expensive. (And guess who would wind up paying for your pleasure? Every user of consumer-grade connections, that's who.)

    You should be quite satisfied that you can even get high-speed connectivity (not to mention, connectivity from multiple providers at once) where you're at. Here in the USA, the most technologically advanced society in the world, it's difficult if not impossible to get *any* high speed service outside a major metropolitan area. Before my cable monopoly upgraded its network, I couldn't get any service at all that wasn't long distance dialup.

    My advice to you: count your blessings, and find a different way to solve the problem.

    Just my 2c.

    ~wally
    • "Here in the USA, the most technologically advanced society in the world".

      I think you mean Finland.
      • Before my cable monopoly upgraded its network, I couldn't get any service at all that wasn't long distance dialup. My advice to you: count your blessings, and find a different way to solve the problem.

      Wait... do you think that your cable monopoly upgraded its network because:

      • A: You sat on your arse and counted your blessings.
      • B: You, or people like you, kept asking and expecting more from them.

      No, I don't think counting your blessings is a particularly useful way of dealing with this issue long term. It's been my experience that whining and griping like a spoilt bitch is the only way to get action. The very same people who will berate you for doing that will be the first ones to jump onto the new services that you help to create through your demands.

    • Having a multi-homed network is extremely
      stressful on the rest of the Internet, and you're going to have to pay for the privilege.


      Whatever happened to eliminating single points of failure? Did that philosophy die out with ARPAnet?

      You should be quite satisfied that you can even get high-speed connectivity (not to mention,
      connectivity from multiple providers at once) where you're at. Here in the USA, the most
      technologically advanced society in the world, it's difficult if not impossible to get *any* high
      speed service outside a major metropolitan area. Before my cable monopoly upgraded its network, I couldn't get any service at all that wasn't long distance dialup.


      Well, that's residential internet access... if you've got the money to pay for commercial connectivity, you'll have more options.
    • by mj6798 ( 514047 ) on Friday November 30, 2001 @04:06PM (#2638101)
      Here in the USA, the most technologically advanced society in the world, it's difficult if not impossible to get *any* high speed service outside a major metropolitan area.

      I'm not sure whether the first part of your sentence is an attempt at irony or reflects an actual belief. In the US, you can get the most high-tech gadgets if you are willing to pay for it and put in the effort. But US society on average is pretty low-tech and relies on pretty outmoded technology, in just about every area of life. In part that's because Americans can get away with it (if energy is cheap and homes are large, for example, you can live with inefficient and bulky appliances), in part it's because the government is reluctant to set high-tech standards.

      The US free-market approach doesn't work for communications networks: the average and short-term market forces determine what you can get at any price. If your cable provider only wants to sell you MSN-tied-in asymmetric marketing-driven pseudo-Internet-access because that's what 95% of the US population is satisfied with, then that's the only thing you are going to get at any reasonable price.

  • Woah. (Score:5, Insightful)

    by SuiteSisterMary ( 123932 ) <slebrunNO@SPAMgmail.com> on Friday November 30, 2001 @01:14PM (#2637140) Journal
    but surely it would be a negligible cost to put 1-2GB of RAM on even a reasonably budget router at todays prices.
    Paper is cheap. I'm going to give you a list of 1 million names and phone numbers. Quick! Find Mr. Smith's phone number!
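The phone-book analogy is about lookup cost, not storage cost; a small sketch of the contrast, with made-up names and numbers:

```python
import bisect

# Storing 100,000 (name, number) pairs is cheap -- the question is how
# fast you can find one, which is what a router must do per packet.
entries = sorted((("person%07d" % i), i) for i in range(100_000))
names = [name for name, _ in entries]

def linear_lookup(target):
    # The unsorted-pile approach: O(n) comparisons per query.
    for name, number in entries:
        if name == target:
            return number
    return None

def indexed_lookup(target):
    # The phone-book approach: sorted order gives O(log n) per query.
    i = bisect.bisect_left(names, target)
    if i < len(names) and names[i] == target:
        return entries[i][1]
    return None
```

Real routers go further still (longest-prefix-match tries and TCAMs), but the point stands: raw memory capacity alone says nothing about lookup speed.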
  • by alphaque ( 51831 ) <dinesh&alphaque,com> on Friday November 30, 2001 @01:15PM (#2637146) Homepage
    It's the scarcity of IP addresses (then) and the growth of the routing tables which forced the situation we are in today. You're not alone in New Zealand suffering from it, most of us in Asia outside of Japan are too.

    These methods and models of doling out IP addresses leave some of our internet data centres hopelessly inadequate at providing something as trivial as fault-tolerant links through two or more ISPs within the same country, as each ISP would refuse to route blocks belonging to other ISPs.

    However, I don't think that arguing that the increased RAM capacity of routers is capable of storing the huge routing tables is the answer.

    CIDR and its ilk were developed partly to address huge routing tables, but the key point they address is propagation of new route changes, which would need to be sent to more routers and thus generate more traffic instead of being confined to just the edge (in context) routers as used now.

    If the propagation of new and changed routes could be addressed without generating additional traffic, and believe me when I say bandwidth isn't cheap in Asia, then I would agree with utilizing larger RAM in routers to store these tables.

    Incidentally, I was a couple of minutes short of FP. :)

  • You don't want every Tom, Dick, & Harry setting up networks like Loose Cannons. And Domain names, Darn-It! There are no more left, except of course www.clownpenis.fart.
  • Old routers? (Score:5, Informative)

    by kneecap ( 4947 ) on Friday November 30, 2001 @01:18PM (#2637163)
    Even in the new routers from Cisco you can't put 1 to 2 gigabytes of RAM; most top out at 256 or 512MB. RAM for PCs might be cheap, but most of the RAM for routers and such has not come down in price like the RAM for PCs.

    Here in the US there are similar requirements; backbone providers often filter routes at a /19 level. ARIN's minimum block size is /20, or multi-homed ISPs that qualify for a /21 also get a /20. But if you want your routes (and IPs) to be globally distributed with no problems, then you need a /19 or bigger.
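The /19 filtering described here is just a prefix-length test on each announcement; a minimal sketch (the prefixes are documentation/test ranges, not real allocations):

```python
import ipaddress

# Sketch of upstream route filtering: drop any announcement more
# specific than /19, as the comment above describes.
def accepted(announcements, max_prefixlen=19):
    return [p for p in announcements
            if ipaddress.ip_network(p).prefixlen <= max_prefixlen]
```

So a small network announcing its /24 into the global table simply never makes it past such a filter, which is exactly the squeeze on small multihomers.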
    • The reason RAM for PCs is so amazingly cheap comes down to two factors that don't really apply to things like Cisco routers. The first is that the supply is huge and the demand has been relatively low. The second is that because of the vastness of the PC market, the components are more readily made in bulk and thus can be made more cheaply. If you look at RAM for just about anything else, the price for what you get has fallen a little over time but has stayed pretty consistent. That is, if it cost you $200 to have an adequate amount of RAM before, it still costs you $200 to have an adequate amount of RAM.

      Now, I'm not a network engineer, but another factor to possibly consider is the specifications required for router memory. Does it require a higher level of performance, error correction, etc, than the average PC? If it does, then that will also raise the cost.
    • Router Memory II (Score:3, Informative)

      by NetJunkie ( 56134 )
      Router memory is cheap, UNLESS you buy it from Cisco. Viking and Kingston both make excellent memory for Cisco routers at a *MUCH* cheaper cost than Cisco. It's not like Cisco memory is anything amazing, it's just OEM memory.
  • Let's pretend you're APNIC. Now let's pretend you've got 100 million geeks clamoring for IP's. How much of your resources do you spend on customer-service and hand-holding before you throw up your hands in despair and start setting some limits?
    Perzackly.
    Now, consider that Joe and Jane Geek have to have a connection to use those nice shiny new IP addresses. And you soon see why we have the present hierarchy of telcos and ISPs.

    • I don't know about APNIC in particular.. but in general, it's getting harder and harder to get provider-independent IP space, and more importantly, the AS# to go with it, unless you are a big, huge provider yourself.

      So.. what about some company that wants to set up a datacenter online. They NEED multi-homing, but they don't need thousands of addresses... they are basically shut out of the system. It's getting basically impossible for a small network to multi-home on the internet.
  • by Xenopax ( 238094 ) <xenopax.cesmail@net> on Friday November 30, 2001 @01:19PM (#2637173) Journal
    Not to be blunt or anything, but hasn't it occurred to you that eventually we will end up with a few major ISPs? We watched for years as small ISPs struggled and went out of business, while the large players sucked up the business.

    Nope, I'm sure as hell not surprised we're going down this road. All this new policy will do is speed up the natural selection of companies until a few monster ISPs (probably run by an existing monster like AOL/Time Warner/Nullsoft) run everything.
  • NAT? (Score:4, Interesting)

    by bartle ( 447377 ) on Friday November 30, 2001 @01:20PM (#2637175) Homepage

    An idea that I had been toying with was to buy 2 internet connections, say DSL and cable modem, then use NAT to use them both simultaneously. In a simple scenario, it seems like it could be accomplished by picking up 2 of those cheap home gateways and setting up a non-routable network. Internally the machines would be set to use one of the gateways by default; if that connection went down you could switch to the other one. Externally, multiple DNS records could be used to distribute the traffic among multiple IPs, all of which point back at the non-routable network.

    Even though I conceived this idea for a low-end home network, the basic idea should be applicable to a business that really wants a redundant connection. Just buy multiple connections from multiple sources, keep your machines in a non-routable network, then use some fancy equipment (a Cisco PIX for example) to make everything work. Bit of a kludge, but I think it's a viable solution.
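The "switch to the other one" step can be automated with a simple health check. A minimal sketch, with made-up gateway addresses: ping each gateway, pick the first live one, and (on Linux, say) repoint the default route at it with something like `ip route replace default via <gw>`.

```python
import subprocess

# Hypothetical gateway addresses for the DSL and cable links.
GATEWAYS = ["10.0.0.1", "10.0.1.1"]

def is_alive(gateway: str) -> bool:
    # One ping with a short timeout; exit code 0 means the gateway answered.
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", gateway],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def pick_gateway(alive=is_alive):
    # First healthy gateway wins; a caller would then repoint the
    # default route at it. Returns None if everything is down.
    for gw in GATEWAYS:
        if alive(gw):
            return gw
    return None
```

Note this only handles outbound traffic; inbound reachability still depends on DNS pointing at whichever connection is up, which is exactly the kludge the comment admits to.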

    • by swb ( 14022 ) on Friday November 30, 2001 @01:27PM (#2637229)
      Someone tried selling me on a box that did that, except it would take several high speed connections (like 4 or 8 ethernet ports on the box, you supply the other end) and then use NAT to intelligently load-balance the traffic across those connections. I think it had the ability to transparently redirect traffic based on protocol to these presumably cheap broadband connections.

      The idea was that instead of buying another expensive T1 because everyone's reloading Slashdot all the time, you buy cheapie DSL connectivity as needed and run your "unimportant" traffic out this box and the business-critical gets more of the T1.

      It's a neat idea.
    • Re:NAT? (Score:3, Interesting)

      by Junta ( 36770 )
      Make it even better: use a full-fledged PC with three interfaces to serve as the router (one address for each connection, and one internally). Though I don't know of any way to do it now, I would assume it could be a logical extension of NAT to NAT over two interfaces rather than one and use load balancing on outgoing traffic to figure out where to NAT the traffic through. If one went down, automatically put everything on the remaining connection. Higher throughput dynamically managed (more efficient than manual allocation) and failover, all without you needing to do a lot of manual work to keep things balanced and working right. All of this assumes non-routable private subnets, which many companies out there would find unacceptable...
      • I should've asked this in my comment, but does anyone know if there is a NAT implementation that allows you to specify more than one interface/address for a NAT rule?
    • OS X Multihoming (Score:2, Interesting)

      by WiseWeasel ( 92224 )
      I just wanted to voice my support for MacOS X when it comes to multihoming. It automatically detects the fastest connection available from the different ones set up in the Network System Pref. This is great when an Airport (802.11b) network becomes available, or one of your providers goes down at any time. It will even trigger a dialup connection if the broadband goes down, or switch broadband providers if you're lucky enough to have several. This truly works very well, and for laptop owners, it's a crucial capability.
    • Re:NAT? (Score:3, Informative)

      by GiMP ( 10923 )
      What you are looking for is speed, not multihoming. What you are talking about is having 2 ips, one for each connection... and then balancing the load across them.

      Linux can do this, it has the ability to "shotgun" ethernet connections into a larger one.

      However, this is not what this person wants. The problem is IP addresses and routing. In your configuration, if one of your connections dies you lose an IP address. If one of the connections in a multi-homed environment dies, you still want the traffic for the IPs on the 2nd line to be routed to your network.

      What this means is, you need cooperation by your ISPs if you wish to be multihomed. Sure, for a home-connection where you are just looking for speed, shotgunning your data is fine.. but it just isn't the solution this person needs.
      • Re:NAT? (Score:3, Interesting)

        by bartle ( 447377 )

        What you are looking for is speed, not multihoming.

        I'm looking for redundancy and I can't think of a better way to get this than using two completely different ISPs.

        However, this is not what this person wants.

        Perhaps, but what the submitter wants is very difficult to achieve. Using dual IPs is less than ideal, but it allows outgoing traffic and incoming email to continue to flow without interruption.

        What I most like about this solution is that you're not overly dependent on a single ISP for anything. Not only are you protected in case of a temporary failure, but you can dump an ISP overnight if they make some policy changes you don't like. While I realize this idea might not appeal to a monolithic corporation, a smaller one might want to consider this level of control and redundancy.

  • by Cutriss ( 262920 ) on Friday November 30, 2001 @01:20PM (#2637176) Homepage
    Unfortunately, the very reasons you're eagerly awaiting IPv6 are probably the reasons that you won't ever see it, and you probably already know those reasons.

    The Internet stopped being about information about five years ago (or at least that wasn't the point anymore) and it's now all about eCommerce and BS like that. The very same companies that got on the Internet in the first place to deliver information are now delivering information only from their marketing departments, and not from engineers or researchers. Commercial interests have all but drowned out its original spirit, and are also partially the reason for the inception of Abilene (Internet2). Of course, it probably won't be long before that new promised land gets pillaged and raped. The Internet as we know it seems to be in an eternal state of loss of innocence, I'm afraid. I don't think the solution is to supplant or supersede the original 'net, but to just have a user-maintained network...kinda like what the network-area neighborhoods are designed to accomplish, except on a much grander scale. When the corporate interests don't exist, then the public can do with it as they see fit.
    • perhaps unis could all connect up to Internet 2, and make it just for information/education. then you can pay a uni a $20 connection fee so you can point yourself to their Internet 2 server farm and go!!
    • Geez. The reason the internet is about eCommerce and business and stuff like that is because that is what is paying for it. It's companies like Sprint and Teleglobe which have invested in creating the backbones and the pipes which keep the internet running.

      These are not like the rivers and valleys which create themselves. The internet needs to be created and needs to be paid for. Yes, the government did get involved in it in the beginning but the large percentage of capital investment on the internet is by private interests.
    • Please remove your rose colored glasses and back away slowly :) Like you, I was on the internet five years ago (and before). Unlike you, I do not recollect it so fondly. NNTP newsgroups were not appreciably more interesting or informative than they are today. In many cases, newsgroups only become valuable once they have been around long enough to develop a kind of "lore" and archive of all the discussions that have taken place in them over the years. This, necessarily, was absent or sparse back when the groups were newer.

      It may have been true that the spirit on the WWW was more collegial and the participants less concerned with personal benefit, but mainly the web was populated by dabblers who wanted to put up a scanned picture of their dorm room or girlfriend. "Valuable" sites were mainly collections of interesting links to the small number of pages that contained data of real use. And the simple fact that very few websites existed and that they were mainly the creations of students or professors in educational institutions (or porn) meant that the content of the web was limited. It was not possible in those days to perform a search for "half-hitch" and get back dozens of valuable, instructive hits about how to tie knots. The attitude of the web may have been different but the breadth was sorely lacking.

      In other words, the things you remember fondly are not only still present on the net, but improved in almost all cases! In addition, we have access now to commercial sites that simply didn't exist in those days. It may be that those sites speak more loudly than your favorite nook of the web, but they are at worst hogging the limelight. They are not pushing out the original spirit. Not by any means.
  • by yakfacts ( 201409 ) on Friday November 30, 2001 @01:21PM (#2637183)
    One real problem is that IPv6 is still not ready for prime time.

    There are many high-end routers that cannot deal with IPv6 and will not be able to without a hardware upgrade, as they use ASICs to store tables of IP addresses and those ASICs expect four bytes.
    • Juniper routers can all handle IPv6 fine. The latest release of JUNOS (5.1) includes support for this, and it runs on any M series router (Juniper router) without any hardware upgrades necessary. So when you say many high-end routers can't handle IPv6 you must be referring to Cisco :-)
  • Peer to Peer (Score:4, Interesting)

    by horster ( 516139 ) on Friday November 30, 2001 @01:21PM (#2637184)
    yes, but I believe the solution rests with a layer on top of the internet - namely something like peer to peer systems of today where nodes can shift more easily, appear and disappear without hurting the overall network.

    the real problem is with NAT (network address translation). how do two peers behind such a NAT firewall announce their presence to each other and then communicate without the assistance of a 3rd peer with a proper IP address and place on the internet? if anyone knows the answer to this question, I'd love to hear it!

    really, how do you announce a service behind a firewall? that seems to be the question of the day.
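As far as anyone knows, you can't do it with *no* third party: two NATed peers have no way to learn each other's mapped addresses without some publicly reachable introducer. What the introducer does is tiny, though; here is a minimal rendezvous sketch (the general idea behind NAT hole punching, not any particular product's protocol): it records each peer's apparent (IP, port) as seen from outside the NAT, then cross-mails the two observations.

```python
import socket

def rendezvous(server: socket.socket) -> None:
    # Wait for one registration datagram from each of two peers. The
    # (IP, port) recvfrom reports is the peer's address as mapped by
    # its NAT -- exactly the information the peers can't learn alone.
    _, peer1 = server.recvfrom(64)
    _, peer2 = server.recvfrom(64)
    # Tell each peer where the other one appears to be.
    server.sendto(("%s:%d" % peer2).encode(), peer1)
    server.sendto(("%s:%d" % peer1).encode(), peer2)
```

After the exchange, both peers send UDP to each other's mapped address; with many (not all) NATs, those outbound packets open the mappings and direct traffic flows with no further help from the introducer.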
  • IPs for the elite? (Score:5, Informative)

    by Thornbury ( 540039 ) on Friday November 30, 2001 @01:24PM (#2637205)

    It's true, you can't get portable IPs of your own anymore. The advent of CIDR and the segregation of netblocks were in an effort to reduce global routing tables.

    Putting 1-2 GB of memory in a router is still prohibitively expensive. It just can't be done in the mainstream (common) routers.

    You can still be multi-homed with netblocks from one ISP to be received by another. This happens this way in the US, and I'm sure it happens with APNIC and RIPE-issued blocks. You get the same effect, without all of the hassles of truly having your own blocks. At least we don't have the /19 barrier for advertising that used to be prevalent in larger ISPs. There is some give and take. The give on that is that the larger ISPs have gone to regional aggregates.

    For instance, I don't want to have to pay for my addresses in the US now thanks to ARIN. (Don't get me started.) My ISP takes care of that. The justification process of getting addresses isn't fun, but it's a lot better than the Inquisition your provider has to go through. I'm not saying that economy is bad, but it's a fact of life with IPv4.

    It's possible that controls will be loosened in an IPv6 world, but I don't think so. We've been down that path before. With tiny fragmented blocks of IPv6, we're creating a nightmare of routing tables the likes of which we've only imagined with IPv4. Aggregation is here to stay, and I believe the days of the portable netblock are long gone.

    Of course, if you can justify your need for your own blocks, you can go directly to your registry. If not, isn't it enough to have your networks SWIPed to you?

    The days for "vanity" addresses are long gone. Maybe you should think up a clever .com domain name instead while you still can.

    • by figment ( 22844 ) on Friday November 30, 2001 @03:50PM (#2638002)
      Thank you. All the comments I am reading assume that PI-space is required to run BGP. That is not required; all you need are two semi-cooperative ISPs, one that's willing to punch holes in its aggregate and another that'll relay your advertisements.

      Again, just as he said:
      You can still be multi-homed with netblocks from one ISP to be received by another.

      PI-space only makes it a bit easier in transition, but it doesn't make it anywhere near as impossible as the question implies.
  • IPv6 (Score:3, Interesting)

    by MosesJones ( 55544 ) on Friday November 30, 2001 @01:25PM (#2637214) Homepage

    WTF is it? It solves all of these problems, increases security, increases reliability, and adds predictability to networking.

    It's been trialed and used on long-haul cables and backbones. Most decent OSes support it. IPv4 would still work over IPv6.

    Isn't it time to flick the switch ?
  • Sure, you can STORE lots of routes in that much RAM, but how are you going to search that many routes to find the *right* one, in real-time, to route millions (or billions) of packets per second?
  • Use a WAN (Score:3, Informative)

    by the_2nd_coming ( 444906 ) on Friday November 30, 2001 @01:30PM (#2637251) Homepage
    If I understand your needs correctly,
    Why waste an entire set of IPs when you can NAT off your network and pay the local phone company to connect both sites over a leased line? Then you get access to the 10.x.y.z reserved IPs and can have as big a network as you want. You could also put another NAT at the other end so as not to overload the first.
  • Why go multihomed? (Score:4, Insightful)

    by Colin ( 1746 ) on Friday November 30, 2001 @01:31PM (#2637259)

    I'm not sure why you want to go multihomed, with all the attendant problems that it brings. If this is a corporate connection, that's not got services (other than mail) being provided to the outside world, then I don't really see the point. I think you can provide the redundancy in other ways - here are some ideas, using 2 ISPs (and PA IP addresses allocated by each of them).

    Put a mail server on each connection (or map an IP address from each connection through your firewall to the mail server). MX records will do your load balancing and redundancy for you.
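    As an editorial illustration of that MX trick, here is a minimal zone fragment (all names and addresses are hypothetical; the documentation ranges stand in for each ISP's block). Equal preference values let remote MTAs balance across both links; unequal values would give primary/backup failover instead:

```
; hypothetical zone fragment: one mail host address per connection
example.co.nz.           IN MX 10 mail-a.example.co.nz.
example.co.nz.           IN MX 10 mail-b.example.co.nz.
mail-a.example.co.nz.    IN A  203.0.113.25   ; address from ISP A's block
mail-b.example.co.nz.    IN A  198.51.100.25  ; address from ISP B's block
```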

    Use NAT/PAT for users to connect to the Internet. If one connection goes down, remove the internal routing to that connection - all your sessions will now go out of the other connection. I find that this is quicker than waiting for BGP to reroute connections via a backup/alternate path. It also gives you more flexibility in internal network numbering, and makes it easier to move ISPs.

    Host services with colocation providers - not internally. Colo service providers have already solved most of the service provision problems, and are well connected to the Internet - I don't think it's worth trying to do this in house.

  • Sounds a bit silly (Score:2, Informative)

    by rnicey ( 315158 )
    Are you really sure that competing ISPs over there are not advertising each other's routes?

    I've just had some first-hand experience of this with Worldcom, ESpire and AT&T. Worldcom were more than happy to allocate us a 'class C' so we could run BGP without getting filtered upstream. (This appears to be the smallest block that gets routed these days.)

    Each and every one of these ISPs sold us dedicated connections boasting how many peering arrangements they had with each other and when it came time to route, no problem.

    Maybe that's the cutthroat ISP biz in the US, I'm quite surprised that it's not the case in NZ.

    The size of routing tables is quite big. In fact you generally require the entire use of a T1 just to manage the updates of a full table. That's why it's typically ISPs that do this kind of thing.

    One other solution they all put forward was to purchase connectivity from each of them and let them do the BGP over the lines. I thought this was quite cooperative of them, to send your traffic via another provider if their link went down.

    Hmmm.
  • by Phizzy ( 56929 ) on Friday November 30, 2001 @01:32PM (#2637269)
    How many computers do you have on this LAN? Why do you think you need to 'own' the IP addresses? First off, you don't even need to own ANY IP addresses to do multihoming. You could NAT all of your LAN boxes up into the single /30 advertisement that your ISP(s) are going to give you for the serial interface on your router, and then have the ISP advertise that out to the 'net, and voila, you have multihoming. When one provider goes down, you can use your IGP to route across the other, OR, if you wanted to go a little more high-class, you could buy a large router, take full BGP tables from both providers, and differentiate intelligently based on the preferences sent on the routes.

    Now, if you don't want to do NAT, and there are a whole slew of good reasons you wouldn't, why are you hung up on ownership of these IP addresses? Why won't you let the IP-allocation process work like it's supposed to? If APNIC had to allocate IPs to every small business in the region it's responsible for, it would take 3 years to get IPs from them. Buy a block of IPs from your ISP(s), and if you transition to another ISP, re-number your network. Or, if you don't wanna go the cheap way, you CAN buy portable IP space from providers. Many of them buy whole Class As just for this purpose; it's just that you're going to have to pay more for these IPs than you would otherwise, as you should, since the ISP's netblocks can become non-contiguous if you leave.

    As far as your questions about IPv6 and router memory, the internet routing table is well up above 100k routes already, and there are many routers out there that are already having problems dealing with tables of this size. Many Cisco boxes will die in the near future if not upgraded, as their old routing engines run out of memory, and despite the fact that PC memory is cheap, router memory often is not. Especially when you have to install it on the tens of thousands of routers any decently sized ISP will have.

    IPv6 isn't really even a factor yet... and when it is, many routers are going to need heavy upgrading (software, hardware, etc) to deal with it, which is why so many ISPs aren't rushing out to do it. So buy some portable IP space, get yourself multihomed, and go buy a good BGP book.

    //Phizzy
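    For readers who have never seen the "full tables from both providers" setup in practice, here is a rough editorial sketch in Cisco IOS style. Everything in it is made up for illustration: the AS numbers are from the private range, the neighbor addresses and the advertised /24 are documentation prefixes, and a real deployment would need prefix filters and more:

```
router bgp 64512
 ! hypothetical upstreams: ISP A (AS 65001) and ISP B (AS 65002)
 neighbor 203.0.113.1 remote-as 65001
 neighbor 198.51.100.1 remote-as 65002
 ! prefer routes learned from ISP A when both advertise a prefix
 neighbor 203.0.113.1 route-map PREFER-A in
 ! advertise only our own block to both upstreams
 network 192.0.2.0 mask 255.255.255.0
!
route-map PREFER-A permit 10
 set local-preference 200
```

    With full tables on both sessions, the router picks the best exit per destination; local-preference is just one of several knobs for tie-breaking.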
    • Doesn't address failover for incoming traffic. Neither do DNS tricks or other such kludgery: this is a layer 3 problem. The solution is to use IP as it was intended -- true end to end connectivity and routing issues handled by, of all things, routing protocols.

      BGP with aggressive route aggregation works well. Something better running on top of IPv6 would go a long way towards getting rid of the convoluted "solutions" that a lot of organizations are setting up.

      Blatant karma plug: http://www.nanog.org/ -- anyone interested in these sorts of routing issues should join the mailing list and lurk
  • by uslinux.net ( 152591 ) on Friday November 30, 2001 @01:33PM (#2637274) Homepage
    First of all, RAM on a router is not the issue anymore. The issue is bandwidth. If your router has to maintain 100,000,000 routes instead of 100,000, you have a 1,000-fold increase in routing-table update traffic on the network.

    Second, IPv6 will solve this, at least for a while. Despite IPv6 having enough addresses for all the particles in the universe, I'm sure we'll run out again in a few years :-)

    Finally, how many companies actually need their own IPs? Small ISPs just get their IP range from a larger player, who is providing them with bandwidth. Under normal circumstances, a mom & pop ISP doesn't need an OC-192 - they're probably happy with a T-3. It's cheaper for them to sublet a fraction of a big player's bandwidth than to go it alone.
  • In my experience working for the US government I have never seen them use a private IP range. They would have Class B subnets and use only a fraction of the available IP's. The rest are pretty much wasted. So if you can't beat them, join them. Become a government agency and you'll have all the IP's you could want.
  • by Jordy ( 440 ) <jordan.snocap@com> on Friday November 30, 2001 @01:39PM (#2637316) Homepage
    Oh so many answers, so little time.

    First of all, one should note that IPv6, while supported in newer versions of Cisco IOS, has the slight problem that in BFRs, the hardware accelerated routing hardware has four times more work to do to look up a 128 bit IP address making performance somewhat of a problem. Add to the fact that a lot of the routers out there simply can not be upgraded past 128 MB of RAM and you run into a slight problem when you go to make your $150k router IPv6 capable.

    Then there is the little problem of client operating systems and the "migration" to IPv6. As there are only a handful of people on this planet who use IPv6 exclusively, routers will have to support both until all the client software of the world moves over. Now, it is bad enough getting full IPv4 BGP updates, but getting them *AND* IPv6 updates?

    Of course, next comes all the little hardware out there. From the terminal servers people dial up to, to the layer 4 load balancers, there is a lot of hardware that doesn't support IPv6.

    So, as a large network service provider, one would have to justify the costs associated with IPv6 against the benefits. The benefits are pretty slim right now unfortunately. Ideas like a single roaming IP (pipe dream if you ask me), mandatory multicast/anycast support, fixed sized headers and IP level security are all fine and dandy, but when you are talking about replacing (or at least supplementing) millions of dollars in infrastructure to allow a handful of people to use IPv6 for years until the REST of the world follows, it starts becoming hard to justify.

    Don't get me wrong, IPv6 has some lovely attributes, but until Cisco enables IPv6 by default on all the hardware they make, everyone upgrades their copies of Windows and MacOS to support it and all of a sudden the terminal servers of the world (remember dialup still exists) all start learning how to route IPv6 packets, it is an uphill battle.

    So the question really becomes: how long will it all take? IPv6 really needs a killer application to make the general public aware that they *need* it and ask their providers to provide it. Once enough demand is generated, ISPs will start asking their upstreams for it and the ball will start rolling.

    The same problems have plagued multicast for some time and still, very few providers support it and even fewer have customers who use it.

    Of course, that's just my opinion, I could be wrong.
  • by paulbort ( 9372 ) on Friday November 30, 2001 @01:44PM (#2637334)
    Here's how we solved the multi-home problem despite CIDR. We wanted to make a web service (Citrix [citrix.com] ALE) available over our T-1, or over our DSL (from a different provider) if the T-1 fails. The solution was to get a cheap Web hosting service that will use our (already registered) domain name to host a couple of static pages that point to our servers by IP address. One set of pages points to the address we got from the T-1 provider, the other points to the DSL address.

    When Big Brother [bb4.com] thinks the main connection is down, we ftp over the backup connection to the off-site web host, make the other set of pages the default, and our users now come in on the other circuit. We change the Alternate Address on the Citrix servers, and we're back in business.
  • multihoming defined (Score:5, Informative)

    by mdouglas ( 139166 ) on Friday November 30, 2001 @01:46PM (#2637350) Homepage
    for those of you who are confused about the nature of multihoming :

    multihoming involves connecting to 2 or more isps and BGP publishing your ip space through both of them. this (ideally) involves having your own ARIN assigned ip space & AS number.

    the point of multihoming is redundancy for inbound as well as outbound connections. you can use 2 isps + nat + creative outbound routing to handle outbound traffic, but that does nothing for a potential web server you're trying to give multiple inbound paths to.

    read the multihoming faq :
    http://www.netaxs.com/~freedman/multi.html
  • by Anonymous Brave Guy ( 457657 ) on Friday November 30, 2001 @01:52PM (#2637390)
    Does this put control of the entire internet further and further into the hands of large corporate players, and is anyone particularly interested in changing this situation?

    Not really, and no I'm not.

    The Internet already is, always has been, and must be, run by large players. You cannot have an interconnecting network that spans the world and has that many users without someone very big to put the infrastructure (hardware and software) in place, and to maintain it afterwards. The only people capable of doing that are major corporations, and a few very large not-so-commercial bodies (the academic community, for example).

    I'm sorry, but if keeping things efficient and practical for these essential big players means you can't play with precious IP address space, then that's the price you're going to have to pay. There just isn't space for everyone to play with their own blocks of IPs any more, and there isn't time for everyone further up the chain to account for them even if the space was there.

    Yes, it's unfortunate that some of these big players have a monopoly, which is rarely a good thing. Yes, it's unfortunate that little fish get eaten by big fish. But unless you have a better suggestion, there are only two choices: (a) leave the big fish alone, accept that for now there will be issues, and have an Internet, or (b) get on your high horse about monopoly abuse, civil liberties, and any other subject of pontification you can find, and kill the Internet. Me, I think that's a pretty easy choice.

  • by cowboy junkie ( 35926 ) on Friday November 30, 2001 @01:57PM (#2637421) Homepage
    There's a good article at onlamp [onlamp.com] that talks about where all the IP's went and why things have gotten so stingy. A sad story about misallocation in the early days of the net (do companies like GE or Xerox really need 16 million addresses?)
    I'm moving my company over to a pair of T-1s multihomed right now. We're doing it through Bellsouth and having the T-1s go to separate POPs and our router will run BGP. Sure, we still rely on Bellsouth but it's very unlikely ALL of Bellsouth will go down at once. Doing this between major telcos would be a real issue I don't think we can afford.

    The dual-homing aspect of this didn't cost us any extra. We're just paying for two separate T-1s. To do this you need a somewhat sizeable router. They suggest a Cisco 3640 with 128MB, which is exactly what I'm implementing.

    No, you can't do this at home, but why would you? It's not that unreasonable for a business. We're looking at like $2K-2.5K/month for everything and a one time charge for the router unless we lease it.
  • by apilosov ( 1810 ) on Friday November 30, 2001 @02:04PM (#2637462) Homepage
    This was an extremely oversimplified view, more like "I think I want to do foo so I need bar, but I'm clueless about everything else".

    There are many issues at work:
    a) Assignment of PI (Provider-Independent) addresses:
    Back in '94, as an end user, you were able to get a netblock directly from ARIN. Then this block could be advertised (by BGP4) by your upstream[s], and thus you got connectivity. The problem here is that these IP addresses were nonaggregatable and led to exponential growth in routing table size (see http://www.telstra.net/ops/bgptable.html up to 1994). Thus, CIDR was born, and hierarchical assignment became the rule. Your upstream (call it foo) gets its IPs from its upstream (call it bar), and the whole internet needs only one routing table entry to reach all of bar's customers.
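    The aggregation win described in (a) can be sketched with Python's ipaddress module; the prefixes here are entirely made up for illustration:

```python
import ipaddress

# Hypothetical prefixes: the upstream ("bar") holds 10.16.0.0/16 and
# carves customer blocks out of it.  Under CIDR the rest of the net
# needs only the one aggregate route, not one route per customer.
aggregate = ipaddress.ip_network("10.16.0.0/16")
customers = [ipaddress.ip_network(p) for p in
             ("10.16.5.0/24", "10.16.6.0/23", "10.16.32.0/20")]

# Every customer block falls inside the single aggregate announcement
print(all(c.subnet_of(aggregate) for c in customers))  # True
```

    A customer with PI space from somewhere else would fail that subnet test, which is exactly why it needs its own global routing table entry.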

    b) ingress filtering (filtering of traffic from customers to make sure only source IPs that are assigned to them are used). Yes, most ISPs do ingress filtering now, and it is now considered a BCP (best current practice) to do this (there's an RFC on that). Again, this is for a damn good reason: without filtering, DoS attacks cannot be traced to their source if one is spoofing the source addresses. With filtering, at least you know that the source IP address is likely to be the one the attack is launched from (or one of the 0wned machines attacking you).

    Its well known that ingress filtering makes multihoming harder, as your upstream has to open up their ingress filter for the IPs that are assigned to you by entities OTHER than your upstream (say, your other upstream).

    Since apparently you intend to advertise your network via BGP4, all ISPs who will talk BGP4 to you will have no problem relaxing their ingress filters. If all you have is a DSL line, you'll have fat chance of getting your upstream to talk BGP4 in the first place. See below for strategies to do this without BGP.

    c) Even if you managed to get your upstreams to turn off ingress filtering and advertise your network via BGP4, you still may run into problems because many ISPs do not listen to network announcements smaller than /20 (Sprint and Verio are two notable cases). (Thus, if you have an IP range IP_A from ISP A and IP range IP_B from ISP B, and both ISPs advertise both ranges, you can still run into problems when one of them goes down.) Fortunately, lately the wind has started to change, and I think Sprint has already relaxed their requirement to /24.

    Bottom line is: if you want to have your "own" IP address range, you must advertise it via BGP4. If you can get your upstream to do that, you can get them to relax their ingress filters, thus your original complaint is silly.

    Now, if all you have is two DSL lines and no cooperation with your upstream you can do the following (sometimes called DNS-based multihoming), _for inbound traffic_:

    You set up two nameservers (A and B), one on each of the IP ranges that you have (range_a and range_b). Make all of the entries given out by nameservers have TTL of 5 minutes.

    Make each nameserver have a DIFFERENT zone, containing only IP addresses on that range (e.g., nameserver A will have an entry for www pointing to an IP from range_a, nameserver B will point to an IP from range_b). Both nameservers can actually run on the same machine, bound to different interfaces.

    Then, whenever someone tries to reach www.yourdomain.com, they'll hit one of the nameservers. If the one they hit first is down, they'll hit the other one, and get an IP address from the _working_ network. Voila, you are still reachable when one connection goes down.
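    A sketch of the two zone variants (all addresses hypothetical, using documentation ranges in place of range_a and range_b); each nameserver serves only the copy naming its own range:

```
; zone as served by nameserver A, which lives on range_a
$TTL 300                      ; the 5-minute TTL mentioned above
www  IN A  203.0.113.80       ; address out of range_a

; zone as served by nameserver B, which lives on range_b
$TTL 300
www  IN A  198.51.100.80      ; address out of range_b
```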

    Then, if you don't want your servers to actually have two IP addresses (one on each net), you can do some trickery with iptables/ipchains to redirect traffic to a single IP (probably on private network).

    For the outbound traffic: All you have to do is to NAT your traffic to the correct interface/IP range (the one that's currently working). That is not very hard to do with a bit of shell scripting.

    Actually, things are a bit more complicated because of this: your machine (main firewall or whatever) that contains all these interfaces normally has one routing table. Choosing the correct interface is done by lookup of the DESTINATION IP. Now, assume a packet comes in to IP_B. You _must_ make sure that the reply goes out BACK on interface B (if you send a return packet with an IP_B source address over ISP_A, ISP_A will discard it because of ingress filtering). This is hard: again, remember, routing does not depend on your _source_ address, it depends only on the destination address.

    So, how do you solve it?
    Luckily, Linux has policy routing, which allows you to have multiple routing tables and choose between them based on some criteria, in your case, it will be source IP. You'll set up two routing tables, one with default route pointing to ISP A, one to ISP B, and a rule saying "If a packet has a source on IP_A, use routing table A, if not, use routing table B"

    (see iproute2 documentation for details)
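    For concreteness, here is a minimal iproute2 sketch of that source-based rule setup. Interface names, gateways and local addresses are all hypothetical (documentation ranges stand in for the two ISPs' assignments), and the commands would need root on the firewall box:

```
# eth0 = link to ISP A (our address 203.0.113.2, gateway 203.0.113.1)
# eth1 = link to ISP B (our address 198.51.100.2, gateway 198.51.100.1)

# two extra routing tables, each with its own default route
ip route add default via 203.0.113.1 dev eth0 table 100
ip route add default via 198.51.100.1 dev eth1 table 200

# pick the table by SOURCE address, so replies leave on the same
# link they arrived on and survive upstream ingress filtering
ip rule add from 203.0.113.2 table 100
ip rule add from 198.51.100.2 table 200
```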

    Well, I think I should write a HOWTO on that...I glossed over quite a lot of details here.
  • Is it just my inbox, or does the spam (mostly foreign language) originating from APNIC-area domains outweigh all the other spam being created combined?

    With the exception of @home (are they finally dead yet?), it seems that all the major spam domains are now located in Asia, including:

    Kornet.net

    Dreamx.net/cjdream.net/thrunet.net

    Chinanet.net

    Hinet.net (though they MIGHT be improving; I haven't seen anything in my box in almost a week)

    Moreover, it always seems to be impossible to reach someone in these domains (we're talking 50 or more LARTs to every valid contact address I can find), and sometimes the contact addresses in APNIC's database have been invalid for weeks, if not months.

    Anyone else have these problems?

    IPv6 could allow easy access to multihoming. (Actually, IPv6 could solve a real problem here, but it doesn't do that either.)

    There are organizations (ARIN in North America) that handle IP allocations. Their policies have been created with one stated goal: keep the number of routes down so that routers don't blow up. With IPv6, they seem to be following the same policies.

    How do you keep the number of routes low? You make it really hard to get IP addresses. That's what they do and they do it fairly well. Personally, I'm not convinced that keeping the number of routes down actually helps anyone. The routers that carry full routing tables are all large and expensive and if they don't have the capacity for much larger routing tables already then it's because the router manufacturers knew that the number of routes was being kept low.

    IPv6 could change all this. With 128 bits of address, one could allow real multi-homing without making huge routing tables. This could be accomplished by splitting off multiple sections of the IP address as Service Provider IDs (SPIDs). An actual address would then contain multiple SPIDs and an end-user address. To have a full routing table, you would need routes to all the service providers and to all of your own customers. Just an idea.
  • Not that simple... (Score:5, Informative)

    by jbroom ( 263580 ) on Friday November 30, 2001 @02:43PM (#2637674) Homepage
    I'm Tech Director for a Caribbean ISP, so I know the problems in getting bandwidth AND multihoming.

    To be multihomed correctly you will generally need:

    -a decent router that can do BGP.
    -more than one connection to providers who will talk BGP with you.
    -your own AS number and an allocated block of IP addresses

    The expensive part is not really "paying the fees" of (ARIN, RIPE, APNIC), or complying with their conditions, but in fact having someone tech enough that also understands the POLITICS (yes POLITICS) involved in running BGP, and the ongoing cost of keeping your network in fact running in this type of situation.

    You are just looking at the tip of the iceberg and saying "wow that's expensive JUST for a block of IP's", which on the surface might look correct, however:
    -just about anyone can say "gimme a block please" (cheap).
    -checking on who can actually utilise them or not is expensive.

    Memory in routers is easily scalable (it isn't, but let's pretend it is), but the problem is not lack of memory; it's actually wading through all those blocks of IP addresses.
    Most of the main tier 1 providers have serious filters in place to avoid filling their routing tables up with junk due to mistakes or due to people who just haven't made a transit deal with them, so even if you were "given" a block of addresses, it wouldn't always be that easy for you to get it routed.

    My advice: as you are "small" (compared to a Tier 1 provider), my guess is that there are ISPs down there that will do a better job than you for getting redundancy. Spend a bit more money on linking up to one of these, and backup your link to them somehow, and trust THEM for your link instead of trying to do it yourself. It will probably cost you just about the same, but your uptime will probably be HIGHER, because when you do BGP yourself, you are adding in extra weak spots that you may at this moment not be thinking of (your internal routing policies and how they get propagated, the people you will need to make sure this runs, etc...).

    Just my own opinion. Add salt.
    • In north america, you have to prove that you need at least half of a /19 to buy your own IP block for your AS to route.

      Most internet BGP4 routers are now configured to ignore routes smaller than /19 anyway (stupid people can't upgrade to better routers).
  • IPv6 (Score:2, Interesting)

    by PineHall ( 206441 )
    Routers will not be upgraded to IPv6 until people are forced to. We want more IP addresses and the US government wants a secure (private) internet. To me the answer is for the US government to switch over to IPv6 because it is more secure. It would force the upgrades, and perhaps the US government would save some money and drop the idea of building their own private network for all their computers. This would get the process of the switchover started.
  • Multihoming (Score:2, Interesting)

    by Haywood68 ( 531262 )
    No need, most of the features provided by ISP multihoming can be provided by a linux box with balance http://sourceforge.net/projects/balance/
    When I did router support for IBM's (now defunct) Network Hardware Division, I had my very own /24 just for my office, which had all of a dozen boxes in it... Even though that isn't my job anymore, there are definitely no address restrictions here...

    Life is so very fine,
    when your corp. is class A number nine.

    SirWired
  • While I agree that the providerless ip blocks make routing tables more complex, you can still multihome without them. This is the easy way...

    Get yourself a domain name. Simple enough. Get yourself two internet connections, with two separate banks of IP addresses (however many you need). Now you have two separate networks, but with linux boxen you can alias both those networks over the same physical hardware on all your machines. Simply configure a primary outgoing gateway machine to forward half its packets to one router and half to the other; this will load-balance your upstream.

    For the two nameserver IP addresses you provide your registrar, give one IP on one network, and the other IP address on the other. This will ensure that half the incoming connections will come in on each of the two networks. If one of your providers goes down, all your incoming connections will default to the working network.

    -Restil
  • by jroysdon ( 201893 ) on Friday November 30, 2001 @03:19PM (#2637862)
    As nice as it is to have Provider Independent IP Space, as you've found out it's virtually impossible to get without paying through the nose (you can just BS how many hosts you have, if you want to fork over the cash to pay US$2,500/year for a /20 block from ARIN [arin.net] here in the USA). Then there are less clueful organizations that don't even know they have some, because the current IT staff didn't get along with their predecessor (for instance this block [switch.ch] I found for my own local City).

    However, it's not required to multihome. Really what you require to multihome is an Autonomous System Number (ASN [arin.net]) and a /24 block from either traditional Class C space, or the 63/8 or 64/8 Class A blocks that were returned a bit ago. No one with a clue should be filtering a /24 from either location.

    The biggest downside to using your upstream provider's IP space is that it pins you to a single ISP, and leaving them requires renumbering (but that can be done without downtime within a reasonable transition timeframe of a few days). What we did was pick the largest ISP out there (UUNET [uunet.com]) and one of the top 10 (Sprint [sprintlink.net]) and use both IP spaces (although we could have chosen to only use UUNET's). We use both providers' IP space on any important box (email, mainly) so that if we were to disconnect from one ISP (not likely), we only have to remove their IPs from our DNS, and the other ISP's IPs are already there and live (plus it gets around odd local routing problems outside of our control, where one remote site can reach one ISP but not the other).

    We announce both blocks out both ISPs. (To announce UUNET's blocks out Sprint and have traffic come back the shortest route, we had to get UUNET to "punch a hole" in their larger block and announce the smaller block we had, so that both UUNET and Sprint would be announcing equally specific blocks for us. The same is true of Sprint announcing their own assignment to us more specifically, so traffic will route to Sprint or UUNET; if we were only announcing the smaller block out UUNET, then all traffic would go that way unless our UUNET connection was down.)

    Anyway, not to write a HOW-TO (see Halabi's Internet Routing Architectures [bookpool.com] ISBN: 157870233X), but that's how to do it.

    You don't need a huge router to be multihomed. Even a 2501 would work (as you just take default routes announcements from both ISPs, with the point being to advertise out your own blocks). If you want to take full routes from two ISPs, a 2650 with 128mb of RAM will work fine. If you want to take defaults + ISP-direct-customers, a 2610 with 64mb of RAM will work (it handles ISP-direct-customers from Sprint and UUNET just fine for us).

    Lastly, never forget that site redundancy is just as important as internet redundancy. If a backhoe takes out the fiber or copper pairs going to your neck of the woods, more than likely it'll be both ISPs.

    Normally I'd never mention my certs, but here they're relevant:
    I'm a CCNP (next step past CCNA) and CCDP (next step past CCDA). I've been working for an IT Consulting/Integration firm for 4 years (help desk positions 3 years before), and we also have our own little ISP [switch.ch] on the side. I've worked with all the top 10 ISPs (and plenty of the Tier2/Tier3 folks), and set up a couple hundred multihomed sites, so I'm not just quoting what I read in a book somewhere.
    It IS hurting the Internet... most definitely.

    If we look back at the way things used to work...

    Firstly, there was enough address space to go around.
    Because of that, IP addresses were not a commodity. You didn't hoard them; you didn't have to, you could get them if you needed them without too much hassle.

    And you did NOT have to be networked to anyone else to get IP addresses assigned to you; it was more like the assignment of MAC addresses. The whole concept was that you had unique address space, period, so that if you ever wanted to internetwork, you could.

    This has now gone out the window, because the Internet has become a product unto itself. Things may be restored with IPv6, but I doubt it: big business will carry the current policies over into the new address space, or at least try to.

    We attempted to do multi-homing in Europe. Now, it IS possible to do, but it's hard to find information on how. The IP assignment authority won't hand a netblock straight to you; you need the cooperation of your neighboring AS numbers to do it, but you can get an AS number assigned and some space allocated. They just make it obscure.
    • It's really not that hard to find info. Get Halabi's Internet Routing Architectures book to start with the fundamentals. Then find LISTSERVs for your local ISPs. They're out there, you just have to look. Here are some generic vendor-specific provider lists: http://puck.nether.net/lists/ [nether.net]

      To start with, I'd connect with UUNET, as they're everywhere worldwide, easy to work with, and very professional. Once you've been through the process one time, you can work your way through less helpful ISPs.
  • There was a period when routing table growth was a real problem. And in a sense it still is: there are lots of tiny networks (including huge numbers of two-address networks for DSL), and if all of them got class C addresses, we'd probably run out of space.

    But another reason is that there is no incentive for changing the status quo. Letting the routers handle large tables means more work and more downtime, and for what? Increased competition and less customer loyalty. It's not surprising that the people who could open things up have little interest in doing so, and I wouldn't expect that to change with IPv6.
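    The "we'd run out of space" worry checks out arithmetically. There are only 2^24 possible /24 (class C) networks in all of IPv4, so even a hypothetical 20 million DSL endpoints (an illustrative number, ignoring reserved ranges) each wanting their own routable block would more than exhaust it:

```python
# How many class C (/24) networks does IPv4 even contain?
total_slash24s = 2 ** 24     # 16,777,216 possible /24 networks
dsl_endpoints = 20_000_000   # illustrative assumption, not a real count

print(total_slash24s)                  # 16777216
print(dsl_endpoints > total_slash24s)  # True: not enough /24s to go around
```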

  • by thogard ( 43403 ) on Friday November 30, 2001 @08:43PM (#2639462) Homepage
    Several people have explained why the route tables are so big, but they could be reduced if groups like APNIC started allocating shared space. APNIC also allocates IP addresses for Australia, and here we only have a few big ISPs. So the next time Telstra wants more address space, APNIC should allocate them a block that is assigned to both them and another ISP such as Optus or Connect. This would keep the routing tables smaller and allow the large ISPs to provide dual homing to their customers, but it's not in their best interest to do so, and it's not going to happen unless APNIC forces them to.
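    The table-size win from shared allocation is just route aggregation. A minimal sketch with Python's `ipaddress` module, using made-up allocations: if two ISPs held adjacent halves of one parent block, the rest of the world could carry one route instead of two:

```python
import ipaddress

# Illustrative only: suppose APNIC gave two ISPs adjacent halves of one block
# (these prefixes are made up, not real Telstra/Optus allocations).
isp_a = ipaddress.ip_network("10.8.0.0/17")
isp_b = ipaddress.ip_network("10.8.128.0/17")

# Announced separately, distant routers carry two entries; collapsed into
# the shared parent block, they carry one.
merged = list(ipaddress.collapse_addresses([isp_a, isp_b]))
print(merged)  # [IPv4Network('10.8.0.0/16')]
```

    Locally the two ISPs would still need the more-specific routes, but everyone far away only needs the aggregate, which is the table savings the comment is after.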

"Look! There! Evil!.. pure and simple, total evil from the Eighth Dimension!" -- Buckaroo Banzai

Working...