
A Peek At Google's Software-Defined Network

samzenpus posted about a year ago | from the check-it-out dept.

Google

CowboyRobot writes "At the recent 2013 Open Networking Summit, Google Distinguished Engineer Amin Vahdat presented 'SDN@Google: Why and How', in which he described Google's 'B4' SDN network, one of the few actual implementations of software-defined networking. Google has deployed sets of Network Controller Servers (NCSs) alongside its switches; the switches run an OpenFlow agent providing a 'thin level of control with all of the real smarts running on a set of controllers on an external server but still co-located.' By using SDN, Google hopes to increase efficiency and reduce cost: unlike computation and storage, which benefit from economies of scale, Google's network is getting much more expensive each year."

75 comments

I got this post in fast (0, Offtopic)

Nyder (754090) | about a year ago | (#43738627)

through Google's software-defined network.

Re:I got this post in fast (0)

Anonymous Coward | about a year ago | (#43739439)

I saw it as you were typing, thanks to Google Glass.

centralized = fault-tolerant? (4, Interesting)

Anonymous Coward | about a year ago | (#43738641)

"it provides logically centralized control that will be more deterministic, more efficient and more fault-tolerant."

I'll agree with deterministic and efficient, and perhaps even less likely to fault, but more fault-tolerant seems like a stretch. SDN might get you better fault tolerance, but that is not because the control is centralized. I suspect the controller has more information about non-local requirements and loads, and that can get you better responses to faults. That happens because the controllers can communicate more complex information more easily, since it is pure software, not because it's centralized. You can get these fault-tolerance gains from non-centralized SDN too.

Re:centralized = fault-tolerant? (2)

Alex Belits (437) | about a year ago | (#43739033)

Logically centralized topology planning and monitoring are OK.

A fixed centralized control path, one instance of the configuration database, a single controlling entity, crappy authentication, the requirement for a separate secure channel to do anything and everything, and nonexistent resistance to DoS, however, are not.

gb2school

Re:centralized = fault-tolerant? (3, Interesting)

bbn (172659) | about a year ago | (#43739183)

Compare it to the alternative such as the good old spanning tree protocol. You have a number of independent agents who together have to decide how to react to a fault. This is complex and requires clever algorithms that can deal with timing issues and what not.

With a centralised controller the problem is much easier. One program running on one CPU decides how to reconfigure the network. This can be faster and possibly find a better solution.

Of course you need redundant controllers and redundant paths to the controllers. Apparently Google decided you need a controller per location.

Re:centralized = fault-tolerant? (3, Insightful)

bill_mcgonigle (4333) | about a year ago | (#43739337)

With a centralised controller the problem is much easier. One program running on one CPU decides how to reconfigure the network. This can be faster and possibly find a better solution.

I can see how centralizing the control can be easier. But if the history of Internet networking has taught us anything, we should expect somebody to come up with a more clever distributed algorithm (perhaps building on OpenFlow) that will make SDNs a footnote in history while the problem gets distributed back out to the network nodes, making it more resilient.

That's not to say that trading off resiliency for performance today isn't worthwhile in some applications.

Re:centralized = fault-tolerant? (0)

Anonymous Coward | about a year ago | (#43740053)

OpenFlow is an SDN protocol, and everyone is moving towards OpenFlow.

Re:centralized = fault-tolerant? (1)

Alomex (148003) | about a year ago | (#43740667)

But if the history of Internet networking has taught us anything, we should expect somebody to come up with a more clever distributed algorithm

The internet has moved from centralized to decentralized to centralized again. It is not the case that it has moved one-directionally towards a distributed system. Currently big parts of the internet are centrally managed (e.g. SuperDNS/GoogleDNS, IBGP, MPLS routing, most of network provisioning).

Current view is that centralizing BGP would be a "good thing" (TM).

Re:centralized = fault-tolerant? (1)

AK Marc (707885) | about a year ago | (#43749149)

Networks connected to the Internet have always been centrally managed individually. You are right for non-Internet things (NNTP), but DNS is just as distributed as always, MPLS doesn't cross network boundaries, and BGP *is* somewhat centralized, as it always was. You can't just make up your own AS to use (well, you can, but only from the private range).

There may be a move to concentrate traffic in fewer large networks, but that's not the same as the Internet getting more central management.

Re:centralized = fault-tolerant? (1)

Alomex (148003) | about a year ago | (#43751545)

but DNS is just as distributed as always

Google DNS is centralized.

BGP *is* somewhat centralized, as it always was

The change is that now many organizations drop centrally computed routing tables on the routers, as opposed to the OSPF-plus-manual-tweaks that used to dominate.

Re:centralized = fault-tolerant? (1)

AK Marc (707885) | about a year ago | (#43766847)

Google DNS is centralized.

Well, yes. Every network has "centralized DNS"; it's how DNS operates. That this is a sudden and startling discovery to you indicates nobody should listen to you.

The change is that now many organizations drop centrally computed routing tables on the routers

That's always been relatively common. Especially if you have only one or two peers, dynamically learning the entire Internet routing table was a massive waste of resources. Many holders of a single class-C run BGP to advertise their route, not to learn routes. They default out, and advertise, so that their block is reachable if a link goes down, without concern of link optimization for both-links-up, and symmetrical routing more important than load balancing or optimum performance.

Re:centralized = fault-tolerant? (1)

Alomex (148003) | about a year ago | (#43767403)

It is clear you do not know what Google DNS is. It is not the DNS that serves the "Google network" but a global provider of DNS services for everyone, and people are encouraged to use it instead of their local DNS. This makes your comment

That this is a sudden and startling discovery to you indicates nobody should listen to you

rather ironic.

That's always been relatively common. Especially if you have only one or two peers, dynamically learning the entire Internet routing table was a massive waste of resources.

I'm talking AS level organizations including internal routers as well as border routers.

Re:centralized = fault-tolerant? (1)

AK Marc (707885) | about a year ago | (#43768799)

It is clear you do not know what Google DNS is. It is not the DNS that serves the "Google network" but a global provider of DNS services for everyone, and people are encouraged to use it instead of their local DNS.

Ah yes, the traditional "you must not have all the information, or you'd agree with me" argument. It's proof your logic is flawed, not proof of my ignorance. You do realize that "back in the day" there were people encouraging others to use things like 198.6.1.3, the DNS server for the largest (by volume, not reach) and fastest-growing (by dollars per day spent on infrastructure) ISP on the planet, rather than local ones, because local resolvers were much more prone to failure than the link to 198.6.1.3, right? You speak as if cross-provider DNS is something you discovered for the first time yesterday. Others of us have used it for 20+ years. Your problem is that you just don't get it: everything you talk about as being "new" has been done before.

It's like virtualization. We had that in the '60s; it was called a "mainframe" then. Then we had PCs. Then we had "terminal servers", which were more virtualization. Then PCs/tablets again. Nothing is new. It's cyclic, and if you are dumb you might think the next big thing is new, but if you aren't, you recognize it as a re-marketing of something that has been done multiple times in different ways over the past 50 years.

Re:centralized = fault-tolerant? (1)

Alomex (148003) | about a year ago | (#43771013)

You are funny trying to play the I'm older and wiser card. You are likely to lose that one too.

And all you prove with your 198.6.1.3 example is what I said in my original posting: there have been waves of centralization (such as that one) and waves of decentralization and back again (e.g. Google DNS).

Re:centralized = fault-tolerant? (1)

AK Marc (707885) | about a year ago | (#43771141)

I never played the "I'm older and wiser" card. I played the "you're dumb" card.

Re:centralized = fault-tolerant? (0)

Anonymous Coward | about a year ago | (#43777373)

> It is clear you do not know what Google DNS is.

It is clear you are an idiot.

Re:centralized = fault-tolerant? (1)

Anonymous Coward | about a year ago | (#43739345)

When I asked around about what the main thing about SDN is, the answer I got back was that it's programmable. If there is a bug 10 years from now, it can be fixed. With a regular router, you're stuck hoping for support from the manufacturer.

Cool! Now does this all mean (0, Offtopic)

Anonymous Coward | about a year ago | (#43738701)

that I can finally download more RAMs?

Hype (0, Insightful)

Anonymous Coward | about a year ago | (#43738915)

SDN is hype, just like CLOUD.

Re:Hype (1)

squiggleslash (241428) | about a year ago | (#43740271)

Whether it is or it isn't, it sure feels like it. It feels, from the description, like stuff we've been doing for decades, except now it has a fancy name and people are doing more of it.

I'm genuinely interested to know whether my impression of SDN is totally off base and whether it is radically new and different.

Re:Hype (1)

citizenr (871508) | about a year ago | (#43741715)

For decades you had to buy specialized routers/expansion cards to do certain things. Now you can reconfigure those things on the fly.

Re:Hype (0)

Anonymous Coward | about a year ago | (#43742629)

Those old protocols that seem similar have ever-so-slight differences which make or break their usefulness. Next we'll start comparing FAT16 to ZFS and asking why we even need ZFS.

Re:Hype (1)

AK Marc (707885) | about a year ago | (#43749219)

I finally had someone tell me something that can be done with SDN but not with regular networking: programmable STP. Currently there's STP, and some proprietary replacements, like FSPF. But with SDN you can program your own primary and secondary links network-wide, without having to rely on some more generic protocol to do it for you.

All the other things I've seen mentioned were SDN within a NIC for CPU offload. But if you are putting a computer in a NIC, you can do other things with it anyway.

Re:Hype (1)

jon3k (691256) | about a year ago | (#43758079)

SDN is separating the control-plane functions from your network devices and centralizing them. Yes, there is a lot of hype around it.

How can you have a software defined network? (4, Interesting)

Viol8 (599362) | about a year ago | (#43739213)

A network is physical infrastructure - software isn't going to be rerouting cables or installing new wifi nodes anytime soon.

If all they mean is that routing tables are dynamically updated, then how is this anything new?

This isn't a troll, I genuinely don't see where the breakthrough is.

Re:How can you have a software defined network? (1)

lobiusmoop (305328) | about a year ago | (#43739235)

You're missing the point. The summary describes it as a 'Software Defined Network Network', a true innovation.

Huh what is IOS and DDWRT then (1)

mjwalshe (1680392) | about a year ago | (#43739877)

You know, switches and routers already run software - having a single controller goes against the design goals of the Internet.

Re:Huh what is IOS and DDWRT then (0)

Anonymous Coward | about a year ago | (#43740175)

A central logical controller, not a central physical controller. Not to mention there is a fallback in case the controller cannot be contacted. The result is a better average case, with a failure case that can limp along.

Re:Huh what is IOS and DDWRT then (1)

jon3k (691256) | about a year ago | (#43758105)

This is for a network under a single administrative control, not the entire Internet. E.g., an ISP, a datacenter, an enterprise campus, etc.

Re:How can you have a software defined network? (5, Informative)

DarkOx (621550) | about a year ago | (#43739297)

It's not exactly what they are doing here, but there is no reason you can't have a logical topology on top of a physical one. Actually it's very useful, especially when combined with a virtual machine infrastructure. Perhaps you want two machines in separate data centers to participate in software NLB; they need network adjacency, yet I doubt you want a continuous layer-2 link stretched across the country. Sure, if it's just two DCs, maybe a leased line between them will work, but what if you have sites all over the place and potentially want to migrate the hosts to any of them at any time? That would allow for maintenance at a facility, or perhaps you power on facilities during off-peak local electrical use and migrate your compute there.

People are doing these things today, but once you get beyond a single VM host cluster it gets pretty manual, with admins doing lots of work to make sure all the networks are available where they need to be: hard-coded GRE tunnels, persistent Ethernet-over-IP bridges, etc. They all tend to be static - minimal overhead when not in use, sure, but overhead and a larger attack surface nonetheless. A really good soup-to-nuts SDN might make the idea of LAN and WAN as separate entities an anachronism. Being able to have layer-2 topology appear automatically wherever needed would be very cool.

Re:How can you have a software defined network? (1)

Alex Belits (437) | about a year ago | (#43739865)

Actually it's very useful, especially when combined with a virtual machine infrastructure.

The "network" between VMs running on the same host, is by definition flat. You can use this flat network (your RAM) to run a protocol over it that is designed for high-latency, low-reliability wires, but that's because your OS has no IPC because it's Windows. And you can set up firewall and routing rules for that network because this is the only way to implement any kind of access restrictions, because your OS security model is designed for anything but isolating processes' access to each other, because it's Windows and it's written by Windows programmers.

A freaking dbus is a better "network" than this. dbus, the swiss army sledgehammer of interprocess communication.

Re:How can you have a software defined network? (5, Informative)

bbn (172659) | about a year ago | (#43739333)

There is no routing as such. For each new "flow" the switch needs to ask a computer (the controller) what to do. The controller will then program the switch with instructions for the new flow.

You claim that the flow table is just a glorified routing table. Maybe it is, but it is much more fine-grained: you can match on any field in the packet, from layer 2 up, such as MAC addresses, IP addresses, port numbers, and TCP packet types (SYN packets etc.). Also you can mangle the packets, for example modifying the MAC or IP address before forwarding the packet.
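
To make that concrete, here is a minimal sketch of such a match-and-mangle rule from the controller side, written against the Python Ryu framework's OpenFlow 1.3 bindings (the addresses, ports and output port are invented for illustration):

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class RewriteExample(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_up(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Match on fields across layers 2-4: IPv4 TCP traffic to 10.0.0.1:80.
        match = parser.OFPMatch(eth_type=0x0800, ip_proto=6,
                                ipv4_dst='10.0.0.1', tcp_dst=80)
        # Mangle the packet (rewrite the destination IP), then forward it.
        actions = [parser.OFPActionSetField(ipv4_dst='192.0.2.10'),
                   parser.OFPActionOutput(2)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                             actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))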

With this you can build some amazing things. The switch can be really dumb and yet it can do full BGP routing: RouteFlow: https://sites.google.com/site/routeflow/ [google.com]

The other canonical use case is virtualisation. No, it will not reroute physical cables, but it can pretend to. Combine it with VMs and you can have a virtual network that can change at any time. If you migrate a VM to another location, the network will automatically adapt. And still the switches are dumb; all the magic is in the controllers.

Before OpenFlow you would need a vlan (or MPLS). When moving the VM to a new location, you would need to reconfigure a number of switches to carry that vlan around, and there is no standard protocol to do so.

OpenVSwitch supports OpenFlow so you can pretend your virtual network with virtual switches includes the VM host itself: http://openvswitch.org/ [openvswitch.org]

Re:How can you have a software defined network? (1)

Alex Belits (437) | about a year ago | (#43739825)

Also you can mangle the packets, for example modify the MAC or IP address before forwarding the packet.

And it's all stateless, and therefore worthless for anything but cheap tricks and trivial operations that are easier to implement with what already exists. It's the same mistake that caused NFS to go through four generations.

Re:How can you have a software defined network? (0)

Anonymous Coward | about a year ago | (#43740281)

OpenFlow solves problems current systems cannot. If you can't see the difference between OpenFlow and current systems, then that's because you don't understand the problem domains.

Google has already shown that using their system you can increase bandwidth and resiliency while reducing latency. Once you understand that Google is working with hundreds of non-blocking 10Gb ports with sometimes asymmetric properties, like route latency and load, you will find that routing hundreds of gigabits over these links starts to cause problems with current routing and link-teaming setups.

There is also the whole problem of device support, where bugs have been found in equipment but the parts are no longer supported. Having a way to reprogram and implement your own logic alleviates the dependency on manufacturer support.

Re:How can you have a software defined network? (1)

Alex Belits (437) | about a year ago | (#43742235)

OpenFlow solves problems current systems cannot. If you can't see the difference between OpenFlow and current systems, then that's because you don't understand the problem domains.

Translation: Let's reinvent SNMP and try to administer the network with it, like people tried in the '90s! A modern equivalent of This time it's different! [google.com]

Google has already shown that using their system, you can increase bandwidth and resiliency, while reducing latency. Once you understand that Google is working with hundreds of non-blocking 10Gb ports with sometimes asyemtric properties, like route latency and load, you will find routing hundreds of Gigabits over these links start to have problems with current routing and link teaming setups.

Translation: Google Big.

There is also the whole problem of device support, where bugs have been found in equipment but the parts are no longer supported. Having a way to reprogram and implement your own logic alleviates the dependency on manufacturer support.

Translation: Firmware Is Magic.

Re:How can you have a software defined network? (0)

Anonymous Coward | about a year ago | (#43742465)

Don't forget, IPv6 sucks and IPv4 should have just been extended to 128 bits. Don't forget that ASM is the best language, so why use anything else?

Re:How can you have a software defined network? (2)

swillden (191260) | about a year ago | (#43743295)

Translation: Google Big.

Yep. And there comes a point when you're scaling up that quantitative differences become qualitative differences that demand completely different solutions to the old problems.

Translation: Firmware Is Magic.

No, firmware is static, and the code it contains must fit in limited capacity storage devices and run on low-end CPUs, unless you want to pay big money for your switches. Much better to make the switch firmware simple and the switches cheap, and put your logic in a few much more powerful machines with visibility into the bigger picture.

Re:How can you have a software defined network? (1)

Alex Belits (437) | about a year ago | (#43743447)

Yep. And there comes a point when you're scaling up that quantitative differences become qualitative differences that demand completely different solutions to the old problems.

There is one little problem -- Google still has no evidence that this crap works or provides any benefit.

No, firmware is static, and the code it contains must fit in limited capacity storage devices and run on low-end CPUs, unless you want to pay big money for your switches.

No. Everyone but the old router companies with massive amounts of legacy code now runs Linux on their switches, with huge RAM and NAND flash available. Both performance and storage are far greater than what your controller server provides per unit of network traffic (or total size of tables) it is supposed to maintain. The train of large CPUs on remote controllers and small CPUs on network devices left the station a very long time ago, and it's absolutely ridiculous that people have now started to optimize their network architecture for this now-obsolete assumption.

Re:How can you have a software defined network? (0)

Anonymous Coward | about a year ago | (#43743745)

Translation: Google Big.

Quantity has a quality all its own [goodreads.com]. Maybe you're just whistling in the dark, hoping that all your experience in networking isn't going to be made irrelevant in the next 5 years when competing with young turks? OpenFlow is not going to be necessary for most organizations; traditional hardware and routing will suffice. But for organizations that run more than a handful of racks of hardware in their data centre, this is going to save them money.

Re:How can you have a software defined network? (1)

Alex Belits (437) | about a year ago | (#43744839)

Quantity has a quality all its own

Dear great-grandson, I studied this back when it was properly attributed to Hegel.

Re:How can you have a software defined network? (1)

bbn (172659) | about a year ago | (#43743069)

"it's all stateless" - no not exactly. First OpenFlow has counters and flow rules can apply to those counters. You can use this to rate limit flows or you can use it to sample packets (copy every 500th packet etc). Or to load balance.

But most important, the whole point of OpenFlow is that you do not upload the whole set of rules to the switch. Indeed, the full rule set might be too complex for the switch to hold or to compute.

Take the BGP implemented by RouteFlow as an example. The global BGP table has about half a million routes. Your cheap OpenFlow-enabled switch might not be able to hold half a million OpenFlow rules. Is all lost? No, because you need not upload all the routes; in fact you will upload no routes. Instead, the first time the switch sees an IP packet with a new destination, it will ask the controller. The controller will consult the BGP tables and program the switch with a new rule. Now the switch knows how to deliver to this destination. In the process the switch might need to flush an older rule to make space for the new rule in memory. This is made possible by the aforementioned counters - there are also timers - so we can remove the least-used rule and avoid removing any recently active rules.
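
The miss-and-fill cycle looks roughly like this in a Ryu-style controller (a plain dict stands in for RouteFlow's BGP table, and longest-prefix matching plus the packet-out of the initial packet are elided; purely illustrative):

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.lib.packet import packet, ipv4
from ryu.ofproto import ofproto_v1_3

class ReactiveRouter(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]
    ROUTES = {'203.0.113.5': 3}  # invented: destination -> output port

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def on_table_miss(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        ip = packet.Packet(ev.msg.data).get_protocol(ipv4.ipv4)
        if ip is None or ip.dst not in self.ROUTES:
            return
        # Consult the route table and program a per-destination rule.
        # idle_timeout makes the switch expire rarely used rules on its
        # own, so the small flow table behaves like a route cache.
        match = parser.OFPMatch(eth_type=0x0800, ipv4_dst=ip.dst)
        inst = [parser.OFPInstructionActions(
            ofp.OFPIT_APPLY_ACTIONS,
            [parser.OFPActionOutput(self.ROUTES[ip.dst])])]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=1,
                                      idle_timeout=60, match=match,
                                      instructions=inst))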

OpenFlow turns cheap switches into advanced devices that can solve many tasks that previously required expensive equipment. The cheap switch can become your BGP border router. It can be your HTTP load balancer. It can be your carrier NAT device. It can support the full range of protocols even if the maker didn't bother to implement them in firmware.

Re:How can you have a software defined network? (1)

Alex Belits (437) | about a year ago | (#43743525)

The global BGP table has about half a million routes.

And 256MB of dynamic RAM costs how much exactly? Can you even buy a smaller part and actually save any money on it? And how many SoCs now come WITHOUT built-in Ethernet that could update that whole RAM in seconds?

It's pointless. The resources this scheme is saving are now the cheapest ones.

Re:How can you have a software defined network? (1)

bbn (172659) | about a year ago | (#43743629)

Can you point to any cheap switch that can hold 500,000 BGP routes in the data plane? I didn't think so.

You are also missing the point: Do you really want to pay extra for software features? Software that has been done way better in open source controllers?

A Juniper router with 6x 10 Gbit/s ports is $50,000. An OpenFlow-enabled switch with four times as many 10-gig ports is only one tenth of that. I do not know where you work, but in my shop that is some savings that we will take.

Re:How can you have a software defined network? (1)

Alex Belits (437) | about a year ago | (#43744789)

Can you point to any cheap switch that can hold 500,000 BGP routes in the data plane? I didn't think so.

But I could build one if a customer asked me to.

You are also missing the point: Do you really want to pay extra for software features? Software that has been done way better in open source controllers?

Then maybe it would be a better idea to write real open source switch firmware instead of remote-control toys? Don't give me the "it's all proprietary" crap; the OS is usually Linux, and hardware switching devices all use pretty simple definitions for packet processing.

A Juniper router with 6x 10 Gbit/s ports is $50,000. An OpenFlow-enabled switch with four times as many 10-gig ports is only one tenth of that.

That's probably because the Juniper router has a tiny fraction of your cheap switch's latency while doing more work. And, of course, because Juniper likes jacking up prices because there is no open source alternative.

I do not know where you work, but in my shop that is some savings that we will take.

First of all, I have to loudly proclaim that I do not speak for my employer, because I work for a hardware company that makes network equipment. I likely don't even know what my employer's position on this is, or if it has one. I believe I can talk about this because the approximate processing capabilities of the hardware used in modern switches can easily be derived from published descriptions, so it's not anyone's secret.

Personally I am just as frustrated about the lack of fully functional open source switch components as the users are, but seeing the humongous amount of processing power available in "cheap" devices every day right in front of my face, I also recognize that this "SDN" craze is a distraction that sets up a narrowly defined "open interface" as a barrier to the development of open standards and open source solutions.

Someone will inevitably implement it at the level of the chip, stuff the "controller" back into the CPU on the switch where it always belonged, and then everyone will find out that the mechanism and protocol are crippled and require a huge number of extensions. The extensions will all be proprietary, and here we go again: another cycle of pseudo-open standards at least as bad as ACPI, and even more cemented in hardware, so no one will be able to fix them. Then Microsoft, or worse, will jump in with their "solution" that breaks everything but at the moment works better than the now-crippled "open standard", and everyone, like sheep, will jump onto stuffing Windows into their network switches. When that blows up, it will be too late to pick up the pieces, so Cisco and Juniper will remain secure in their monopoly on fast and reliable, and Microsoft (or whoever is playing that game at the moment) will take over the market for "small business" routers that require daily reboots.

Does it sound like something we already went through in each and every aspect of computing OTHER THAN NETWORKING?

Alternatively, if all this crap is abandoned and switch chip manufacturers adopt a standard for compiling rules from a text source in some language, they will be able to have a (likely proprietary) compiler load rules generated by open source components, or written by people, and those compilers will run on the switches' large CPUs. Stateless, stateful, with an external protocol having access to internal state, etc. Not unlike what happens with shaders when they are loaded into GPUs.

But then no one needs micromanaged "SDN" protocols with fixed structures containing extra-special fields for every protocol and feature known to man. Everything just downloads lists of objects from hardware (over HTTPS if over the network), generates a text representation of the rules, sends them to devices (over HTTPS if they have to go over the network), and in the same manner compiles rules implemented on the processors, if any are needed, all while running whatever routing protocol it has to run, etc. All you need is a common format that describes objects, a somewhat standardized language that defines processing rules, and a library of rules for common procedures.

Want a centralized system? Query everything from one controller box, generate everything there, and upload from it everywhere. Want a locally cached distribution mechanism? Trivial; just place keys and cache rules (just don't mess up transactions). Want locally implemented routing protocols? Just make the protocol daemons generate the rules and send them to the local compilers. Admins modifying things manually? All simple, as long as there is a common human-writable language one level above the rule language, because humans like clarity.

Now please, someone tell me why no one thought of going in that direction. Ignorance? Stupidity? Sabotage?

Re:How can you have a software defined network? (1)

bbn (172659) | about a year ago | (#43745079)

I will give you that the OpenFlow system is stupid in some ways. For example, I can push an MPLS label onto a packet, but I cannot push a LISP header. Why not? Because they made separate instructions such as "push VLAN tag" and "push MPLS label" instead of a generic "push N bytes".

OpenFlow is two things. First, it is a language for the data plane, not much different from what you are asking for. It is not Turing-complete, probably by design: you cannot make the data plane do just anything, but on the other hand you can guarantee that it will not enter an infinite loop or use up all memory. It is possible to have OpenFlow in the data plane and still guarantee that your data plane will switch at line speed; that would be impossible with a stronger, Turing-complete data plane language. Still, they could have made it more generic, like having a generic push and pop.

OpenFlow is also a protocol. Currently we think the controller speaking the OpenFlow protocol must be external to the switch, but nothing prevents a switch manufacturer from granting access to the built-in control-plane computer in the switch. If it is just a Linux computer, as you say, I could just log in and upload my controller software there. My software would still speak OpenFlow with the data plane, because that is the standard for how to program data planes. It would also allow my program to be the same regardless of whether it is used on a switch that allows uploading controller software or run on an external server.

One thing is for sure though - the big players like Cisco and Juniper do not want to go in this direction. You say the $50k Juniper router provides lower latency than the cheap OpenFlow switch - but that is just BS. Both will switch at line speed, and if the open source world gains access to program these things, there will be nothing left to sell the expensive hardware on. We will be down to the pure specs of the hardware. Right now I see a lot of line-rate 10G switches coming out at a very attractive price point - some of them made by these same brands but artificially limited in the software.

Re:How can you have a software defined network? (1)

AK Marc (707885) | about a year ago | (#43749335)

You say the $50k Juniper router provides lower latency than the cheap OpenFlow switch - but that is just BS.

That's proof you don't know what you are talking about. The expensive routers/switches have dedicated ASICs per port: RAM and processing at the port to get the packet in, processed, and back out as soon as possible. Pull something through a "linux" router (often a PC with more networking cards) and you have to pull it through the cards, across the bus/motherboard to the CPU, process it, and send it back to the card on the way out.

Re:How can you have a software defined network? (1)

bbn (172659) | about a year ago | (#43749555)

How the hell did you manage to conclude anyone here was talking about PCs with networking cards? The "cheap" switches I am talking about are products such as the Juniper EX4550, which has 32x 10G ports and 960 Gbps of bandwidth for $19k. Compare that with the Juniper M320, which is twice as expensive with only half as many 10G ports and 320 Gbps of bandwidth.

Sure, the M320 can do more in the data plane, but people are using it for stuff that the EX4550 would do just fine, if the software allowed it.

Or you could go for an HP 5820X-24XG-SFP+ switch with 24x 10G ports and 488 Gbps of bandwidth for just $5k.

If you believe the HP 5820X is a "linux router, just a PC with more networking cards", then you are truly an idiot.

Re:How can you have a software defined network? (1)

AK Marc (707885) | about a year ago | (#43749687)

The next clue that you are clueless is that you are comparing L3 switches to routers.

Re:How can you have a software defined network? (1)

bbn (172659) | about a year ago | (#43749701)

No, that is the point of OpenFlow: the switches become routers.

Re:How can you have a software defined network? (1)

AK Marc (707885) | about a year ago | (#43749801)

I thought it was to do "new" and "innovative" things, not the same old thing we've been doing for almost 50 years, but, this time, at a lower cost!

Re:How can you have a software defined network? (1)

Alex Belits (437) | about a year ago | (#43764615)

The switches become stateless routers. A stateless router is to a router what an unmanaged switch is to a managed switch.

Re:How can you have a software defined network? (1)

bbn (172659) | about a year ago | (#43764629)

Please elaborate on what you mean by stateless. I already told you how it is not stateless.

Re:How can you have a software defined network? (1)

Alex Belits (437) | about a year ago | (#43764835)

A router is supposed to be able to generate and keep state for groups of protocol sessions (sometimes individual sessions), make decisions about forwarding and queuing immediately on a packet's arrival, maintain multiple queues with different delay limits, etc. It is not allowed to forward traffic to the admin's iPhone for NAT, QoS, tunneling, load balancing, etc. - it has to implement that logic locally.

On top of that it may have external logic that reacts to major changes in traffic statistics or changes in network topology/connectivity, and those reactions may be implemented externally; however, that is not all the router's control logic does. As I have mentioned before, nothing is really saved by moving processors around, as the switch still needs one, and once it has a processor, that processor can do everything a remote controller would ever do, without the low reliability and enormous latency that come with any form of remote controller.

It's true that in its simplest form routing can be implemented as stateless filtering, header rewriting and switching; however, those devices already exist, and they did not replace routers.

Re:How can you have a software defined network? (1)

bbn (172659) | about a year ago | (#43765259)

An OpenFlow switch will:

Update counters and timers, and make decisions based on those counters and timers. Support multiple queues with different delay limits, etc. - QoS. Rewrite source and destination IP addresses and UDP/TCP port numbers, allowing the switch to do NAT without querying any external entity on a per-packet basis. Add and remove VLAN, MPLS, etc. tags, modify the tags, modify the MAC and much more. Automatically drop flow rules on certain events, such as the last packet in a TCP flow, or based on counters and timers. Allow rules that recognise a missing rule and query the controller to add it.

It will basically do anything routers can do in the data plane without querying a controller.

I fail to see by what property you can call the above "stateless". On the contrary, it is a little programming language with state updates such as counters, timers and queue lengths, and the ability to make decisions based on those.

I recognize your belief that the controller software should run on a CPU in the same chassis as the data plane. This, however, does not necessarily make the controller react any faster. Many switches have only limited bandwidth between the data plane and the control plane. It is assumed that most of the brunt work will be done in the data plane and that any work that needs to go through the control plane will have higher latency and less bandwidth. It is this property that makes it possible to move the control plane out of the chassis.

Is it perfect? No, but it is a good start. As to having the controller in the same chassis: why don't you talk your employer into allowing OpenFlow controllers to be uploaded and run on the control CPU? That is actually a good idea and might help sales of your product...

To implement NAT with OpenFlow you would need a rule that recognizes new connections and lets the controller add a new rule for each connection. The controller will not actually route or modify any packets, not even the initial one.

Re:How can you have a software defined network? (1)

Alex Belits (437) | about a year ago | (#43765589)

It will basically do anything routers can do in the data plane without querying a controller

And once this is all implemented, it will become yet another implementation of a router.

I fail to see by what property you can call the above "stateless". On the contrary, it is a little programming language with state updates such as counters, timers and queue lengths, and the ability to make decisions based on those.

And once it is all implemented, you will have a router, except all control is done in a limited "language", and it will have to control resources (queues, and tables that now correspond to rules, not just physical ports) that will require ASIC and FPGA chips not unlike those used in expensive routers. It won't be cheaper, better, or more flexible - in fact, it will cement the state of technology and protocol support at the moment this "standard" was created, and provide no method for extension other than asking the controller for a forwarding decision for each new address.

I recognize your belief that the controller software should run on a CPU in the same chassis as the data plane.

And then a design that relies on a fixed-format exchange between the controller and a "dumb" switch is pointless, because router/switch implementations will have to accommodate new features, new hardware, and the passing of new kinds of protocols with appropriate kinds of QoS.

Re:How can you have a software defined network? (1)

Alex Belits (437) | about a year ago | (#43765597)

This, however, does not necessarily make the controller react any faster.

Of course it does. The latency of a PCI Express connection is negligible compared to any network interface. Properly organized DMA can perform multiple transfers and get decisions from the local CPU while the packet is waiting in the queue.

Many switches have only limited bandwidth between the data plane and the control plane. It is assumed that most of the brunt work will be done in the data plane and that any work that needs to go through the control plane will have higher latency and less bandwidth.

The problem is not bandwidth, it's latency. A tiny fraction of packets are supposed to reach CPU to trigger protocol-based decisions for their forwarding. However the decision (without data, no need to transfer it back if it's already queued) must come back fast. PCI Express can do that. Local DMA-capable bus interface to built-in core can do that. Management Ethernet to another device, passed through other switches can't no matter how fast it is.

It is this property that makes it possible to move the control plane out of the chassis.
Is it perfect? No, but it is a good start.

It's not a start. It's the start and the end, because once a standard like this is implemented, it is set in stone and requires rebuilding ASICs to add anything. It's completely inflexible, designed for the convenience of software developers, not for extensibility. Once there are devices with the first version of the protocol, everyone will have to support it forever, and can only add a completely different protocol that only new devices will understand.

Re:How can you have a software defined network? (1)

bbn (172659) | about a year ago | (#43766725)

OpenFlow will only pass as much of the packet as you need to. For most cases that is just the headers. Say the controller is on a 10G interface: 100 bytes need to be transferred out, and the reply will be about 100 bytes too. The time to process the packet will be the same or less compared to the switch's built-in controller (external controllers will generally be more powerful servers than the controller CPU in a switch or router). The time to transfer 200 bytes on a 10G link is on the order of 200 ns.

Of course there might be multiple hops to reach the controller, but that is the network designer's choice. Google apparently put the controllers adjacent to the switches, so they have a direct connection.

Extra delay of this order, and only for the first packet in a new flow, is negligible. For a standard 1500-byte packet, it is roughly 200 ns to query the controller and then 1.2 microseconds to actually forward the packet.
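
To show the arithmetic (serialization delay only; switch hops and controller processing come on top):

# Time to clock N bytes onto a 10 Gbit/s link, in nanoseconds.
def wire_ns(nbytes, gbps=10):
    return nbytes * 8 / gbps  # bits divided by Gbit/s gives ns

print(wire_ns(200))   # 160 ns for the ~200 bytes of query plus reply
print(wire_ns(1500))  # 1200 ns (1.2 us) for a full-size packet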

By the way, there are multiple commercially available switches with OpenFlow support already. HP is retrofitting their entire product line with OpenFlow support, and Juniper has experimental support too. Both companies seem to be doing it without rebuilding any ASICs or other hardware, considering that adding OpenFlow is just a firmware update.

Nothing stops you from adding your own proprietary solution. But we need standards if we are to write software that will work on multiple brands and models.

Re:How can you have a software defined network? (0)

Anonymous Coward | about a year ago | (#43767163)

OpenFlow will only pass as much of the packet as you need to. For most cases that is just the headers.

It does not matter if it sends one bit per packet -- latency is per packet, not per byte. Packets must sit in a queue while the switch waits for the response -- so the allowable response time is bounded by the time it takes the queue to overflow, or the packet will have to be dropped. It will never work.

By the way, there are multiple commercial available switches with OpenFlow support already. HP is retrofitting their entire product line with OpenFlow support. Juniper has experimental support too. Both companies seem to be doing it without rebuilding any ASIC or other hardware, considering adding OpenFlow is just a firmware update.

And which subset of the functionality is actually implemented there? Those companies rely on ASICs to provide a huge chunk of the functionality in their switches and routers, and OpenFlow won't be able to use them. So they will either have to drop that functionality or replace it with "almost QoS", "almost NAT", "almost load balancing", etc. that always drops a part of the traffic on every transition, has limits imposed by the controller's resources, and so on. They will produce exactly what they want - an inferior solution with limited applicability that does not compete with their high-end products. The users will get some dubious benefits in trivial cases, and all directions of future progress will have to go through destroying this new standard (not unlike how all directions of progress in general-purpose computing went through destroying Win32 as a de-facto standard for all software development).

Nothing stops you from adding your own proprietary solution. But we need standards if we are to write software that will work on multiple brands and models.

It's a shit standard. I would rather have no standard at all than a crippled standard, made with fundamentally wrong assumptions, with all the bigwigs pushing it on everyone and destroying all attempts to create a better one.

A network management protocol must be low-bandwidth. An admin with an iPhone, called while on vacation, can now perform emergency maintenance of a large company's network with every single link initially down (but terminals working over cellular modems). If someone wants to standardize protocols, standardize that protocol; current solutions are pretty poorly done and can easily be improved. A common standard here would be great.

Internal mechanisms for switch ASIC control could also use some standardization. But the principles of that interface have absolutely nothing in common with remote management, and it cannot be made completely network-transparent because of a complete mismatch in latency requirements. It's not a user interface, where X11 can use the same protocol for local and remote access and just choose how to pass the data, because the local network is always faster than the user's eyes. It's a situation that requires DMA, multiple forms of operation offloading, protocol-specific hash algorithms, etc.

They can be standardized, and can be implemented as open source projects (this is all nothing compared to, say, gcc), but the people who write those standards should abandon the idea that they are writing a remote management protocol, because they are writing a local control mechanism that should be usable in a tightly coupled hardware system that involves ASICs, FPGAs and CPUs connected by multiple buses. For all I care, they may decide to compile rules into Verilog and reload a chunk of FPGA every time the real management protocol changes something important - but they would probably choose a more limited but faster "almost FPGA" inside their ASICs.

But then I would rather prefer that the control mechanisms and the upper-level representation of rules were common and human-readable, as they are going through a compiler that generates the representation for particular hardware anyway. OpenFlow goes in the opposite direction, and would only make such systems less feasible for anyone but a few large companies.

Re:How can you have a software defined network? (1)

bbn (172659) | about a year ago | (#43767203)

It does not matter if it sends one bit per packet -- latency is per packet, not per byte. Packets must sit in a queue while the switch waits for the response -- so the allowable response time is bounded by the time it takes the queue to overflow, or the packet will have to be dropped. It will never work.

So you are saying my estimate of 200 ns delay is wrong? Give me your own calculations.

Yes, the incoming packet is in a queue while the switch waits for a response from the controller. That response can be there within 200 ns. In the meantime the switch is not blocked from processing further packets.

A 200 ns delay on the first packet in a flow of packets is so little that it is barely measurable. You will be dealing with delays much larger than that simply because you want to send out a packet on a port that is already busy transmitting.

I am not going to comment on the rant about management protocols. OpenFlow is not a management protocol.

Re:How can you have a software defined network? (1)

Alex Belits (437) | about a year ago | (#43771911)

So you are saying my estimate of 200 ns delay is wrong? Give me your own calculations.

This would work if data were transmitted instantly and there were no packets. A 200 ns round trip could only be achieved if the network interface on the controller could handle ten million packets per second. Network cards can only do 1-2 million, and this does not count the latency added by the network stack of a general-purpose OS on the controller, or any delays in the switches along the way.

Yes, the incoming packet is in a queue while the switch waits for a response from the controller. That response can be there within 200 ns. In the meantime the switch is not blocked from processing further packets.

It is blocked, because it does not yet have the rule that will determine the processing of any packet that follows, so any packet may happen to match it and have to sit in the queue until the new rule arrives from the controller.

A 200 ns delay on the first packet in a flow of packets is so little that it is barely measurable. You will be dealing with delays much larger than that simply because you want to send out a packet on a port that is already busy transmitting.

Switches always delay packets - this is what queues are for. In this case, however, any packet that has to be processed by a controller means instantly blocking the input queue, something that network switch designers avoid like the plague. And just imagine what will happen if the response is lost and everyone has to wait for a timeout and retransmission. The loss of traffic will make those large routers look cheap in comparison.

Re:How can you have a software defined network? (1)

bbn (172659) | about a year ago | (#43772249)

That is bullshit. Here is a guy who benchmarked the Intel X520 10G NIC and wrote a small piece titled "Packet I/O Performance on a low-end desktop": http://shader.kaist.edu/packetshader/io_engine/benchmark/i3.html [kaist.edu]

His echo service manages to do between 10 and 18 Gbit/s of traffic even at a packet size of 60 bytes. And there are plenty of optimizations he could do to improve on that: the NIC supports CPU core affinity, so he could have spread the load across multiple cores, and the memory bandwidth issue could have been addressed with NUMA. But even without really trying, we are hitting the required response time on desktop hardware.
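
To put that against the "network cards can only do 1-2 million packets per second" claim above, the implied packet rates are easy to compute (raw serialization, ignoring Ethernet preamble and inter-frame gap):

# Millions of packets per second needed to fill a link of a given
# speed at a given frame size.
def mpps(gbits, frame_bytes):
    return gbits * 1e9 / (frame_bytes * 8) / 1e6

print(mpps(10, 60))    # ~20.8 Mpps at 60-byte packets
print(mpps(18, 60))    # ~37.5 Mpps at the 18 Gbit/s peak
print(mpps(10, 1500))  # ~0.83 Mpps at full-size packets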

The simple fact is that after the packet has been transferred over the 10G link it will go through a PCI Express (x8) bus and be processed by the Linux OS - the same OS that you earlier claimed to be running on the control plane of the switches designed by your company. The only difference here is that I would probably get a faster system CPU than would be in your hardware.

As to the blocking issue, only packets from the same (new) flow would be queued. Say this was a NAT implementation: all other existing connections would continue with no blocking. Or if it was a BGP implementation, all already-cached destinations would continue to be routed. Also, given that it is possible for the controller to reply in less time than it takes to actually receive a full-sized 1500-byte packet, this blocking idea is a bit far-fetched.

Also given that protocols like TCP do not just suddenly burst out 10G of packets, the next packet following the initial SYN packet is not likely to arrive before the SYN has been processed by both switch and controller and forwarded long ago. And again, packets to other destinations will not be blocked while we wait for the controller, and somehow I get the impression that you think they would.

Re:How can you have a software defined network? (1)

Alex Belits (437) | about a year ago | (#43773237)

That is bullshit. Here is a guy who benchmarked the Intel X520 10G NIC and wrote a small piece titled "Packet I/O Performance on a low-end desktop": http://shader.kaist.edu/packetshader/io_engine/benchmark/i3.html [kaist.edu]

His echo service manages to do between 10 and 18 Gbit/s of traffic even at a packet size of 60 bytes. And there are plenty of optimizations he could do to improve on that: the NIC supports CPU core affinity, so he could have spread the load across multiple cores, and the memory bandwidth issue could have been addressed with NUMA. But even without really trying, we are hitting the required response time on desktop hardware.

That's without the data ever being accessed from userspace, with no protocol stack, and with the average packet size being half of the maximum; and there is a good possibility that the measurements are wrong, because otherwise it would be easier to implement the whole switch by just stuffing multiple CPU cores into the device, and the whole problem would not exist.

The simple fact is that after the packet has been transferred over the 10G link it will go through a PCI Express (x8) bus and be processed by the Linux OS - the same OS that you earlier claimed to be running on the control plane of the switches designed by your company. The only difference here is that I would probably get a faster system CPU than would be in your hardware.

Actually, in that driver it is not processed by anything - copying is performed in the kernel. Data is mapped to userspace-accessible memory, but userspace does not even touch it; everything happens in the kernel, with the network stack completely bypassed. The only thing less relevant (but perfectly suitable for a switch, if it were indeed that fast) would be forwarding done entirely in interrupt handlers.

Another important question is what the latency of that thing is - with enough buffers it can get very high, and we have not even taken into account all the delays in hardware, sitting in the queues of the switches that forward packets over the management network.

As to the blocking issue, only packets from the same (new) flow would be queued.

Except there is no new flow yet, because the flow is only created after the controller produces a rule to identify it, so any packet may belong to it. And there is the matter of somehow expiring old rules based on the lack of traffic that matches them. This works very well when the whole NAT or load-balancing task can be reduced to a predefined static mapping with a hash - which is already done with managed switches, even cheap ones. There is no benefit in making fancy controllers do that.

Also given that protocols like TCP do not just suddenly burst out 10G of packets

But protocols like UDP (and the convoluted monstrosities on top of it with multiple dependent mappings, like SIP) do.

the next packet following the initial SYN packet is not likely to arrive before the SYN has been processed by both switch and controller and forwarded long ago.

Except there may be a DoS in progress that fakes those packets. On a host, you can have an algorithm that prevents them from eating your memory, so that only real mappings (with both sides completing the handshake) end up being persistent. On a switch you have no way to implement such an algorithm, because it does not fit into the rules your protocol can define, which means either your simple switch has to become a router, or it will not be able to provide such functionality without falling victim to a simple SYN flood.

And again, packets to other destinations will not be blocked while we wait for the controller, and somehow I get the impression that you think they would.

There is also the matter of the number of rules, the size of the internal RAM in the ASICs that holds them (and it's a different kind of RAM, not cheap and not large - think CPU cache), and the time it takes to update it. All for nothing, because the CPU in the switch can do the same, better, without any "globally standardized" formats for a rule-passing protocol, without data ever showing up on management interfaces, and without any of those hare-brained schemes. As I said, there is a place for standardization of switch ASIC control, and it would be a great way to make open source firmware for high-end switches and routers feasible. But that standard should not be crippled or shoehorned into a management protocol that is supposed to work over the network.

Re:How can you have a software defined network? (1)

bbn (172659) | about a year ago | (#43773541)

That's without the data ever being accessed from userspace, with no protocol stack, and with the average packet size being half of the maximum; and there is a good possibility that the measurements are wrong, because otherwise it would be easier to implement the whole switch by just stuffing multiple CPU cores into the device, and the whole problem would not exist.

The article was written by the guy who wrote the driver; I think we can assume he knows his stuff.

No, it appears that if you want to switch more than 10-18 Gbit/s the computer would have a memory bandwidth problem. Trying to use multiple cores and NUMA might improve on that, but I do not think you would manage to build a 24-port switch that switches at line speed this way :-).
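A back-of-the-envelope check of why line rate is out of reach (Python; all numbers are rough assumptions, not measurements):

<ecode>
ports = 24
line_rate_gbps = 10
traffic_bytes_per_s = ports * line_rate_gbps * 1e9 / 8   # 30 GB/s of payload

# Every forwarded packet crosses main memory at least twice (DMA in, DMA out):
needed_bw = traffic_bytes_per_s * 2
print(f"{needed_bw / 1e9:.0f} GB/s")                      # -> 60 GB/s

# Dual-channel DDR3 peaks around 20-25 GB/s, and descriptor rings, cache
# misses and per-packet bookkeeping multiply the effective cost well beyond
# the raw payload figure -- so the achievable rate lands far below 240 Gbit/s.
</ecode>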

But if you could somehow get an external switch to do 99% of the work, this might work...

I am not sure how much more we can get out of this discussion. From my side, I believe you are going too far in trying to make a problem out of something that actually works quite well for some very large companies (Google and HP!). Packets need to be delayed when the controller needs to be queried, and that is true for both OpenFlow and traditional switches. We are just fighting over some nano- or possibly microseconds here, with no one showing that it actually matters. It very likely does not matter for the use case Google uses it for, or they wouldn't be doing it. At my company we are using it too, and it works very well for us. We are an ISP, by the way.

There might indeed exist a use case where a 10G flow just pops into existence out of nowhere and where even a 1 microsecond delay in forwarding that stream is not acceptable. I am just having a real hard time imagining that case.

Re:How can you have a software defined network? (1)

Alex Belits (437) | about a year ago | (#43779711)

The article was written by the guy who wrote the driver; I think we can assume he knows his stuff.

Most of the driver is just a copy of the Intel driver, with additional functionality bolted on top. Whatever the author's abilities, the goal was not to produce a working protocol stack, and benchmarks of this hack can't be used to predict anything but the behavior of this hack.

No, it appears that if you want to switch more than 10-18 Gbit/s the computer would have a memory bandwidth problem. Trying to use multiple cores and NUMA might improve on that, but I do not think you would manage to build a 24-port switch that switches at line speed this way :-).

But if you could somehow get an external switch to do 99% of the work, this might work...

And then they would inevitably slow down this hack too, which makes me doubt the validity of the measurements.

I am not sure how much more we can get out of this discussion. From my side, I believe you are going too far in trying to make a problem out of something that actually works quite well for some very large companies (Google and HP!).

Those companies have merely announced that they intend to use this "technology" somewhere. They are not throwing out the routers they have. They are likely replacing some layer 2 and layer 3 switches ("almost routers") and treating the whole thing as a fancier management protocol for the simple, mostly flat or statically configured networks that they have in abundance. For all we know, Google may already have no routers at all except on the links between their data centers -- they are famous for customizing their hardware and network infrastructure around their own unique software infrastructure, and would probably gain more from multi-port servers connected by very primitive switches into clusters, with VLAN or even physical topology following the topology of their applications' interfaces.

Packets need to be delayed when the controller needs to be queried, and that is true for both OpenFlow and traditional switches.

Except traditional switches never have high-latency, unreliable links between their components, and the data formats follow the optimized design of ASICs and not someone's half-baked academic paper.

We are just fighting over some nano- or possibly microseconds here, with no one showing that it actually matters.

Then why don't people just place Ethernet between a CPU and RAM? It's "nano- or possibly microseconds", right?
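For scale, the gap behind the quip is about three orders of magnitude (ballpark figures, not measurements):

<ecode>
ram_access_ns = 100        # typical DRAM access
lan_rtt_ns = 100_000       # ~100 microseconds round trip across a switched LAN
print(lan_rtt_ns / ram_access_ns)   # -> 1000.0
</ecode>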

It very likely does not matter for the use case Google uses it for, or they wouldn't be doing it.

See above.

At my company we are using it too, and it works very well for us. We are an ISP, by the way.

If it works, then the way you use it did not require anything complex to begin with, and you are using it as yet another management protocol. You could have bought cheap layer 3 switches before and configured them to do exactly the same thing from the command line, just with fewer buzzwords.

Re:How can you have a software defined network? (0)

Anonymous Coward | about a year ago | (#43765619)

As to having the controller in the same chassis: why don't you talk your employer into allowing OpenFlow controllers to be uploaded and run on the control CPU? That is actually a good idea and might help sales of your product.

I can't make comments about anything like that. However, take into account that I also support an implementation of SNMP despite being opposed to its design.

To implement NAT with OpenFlow you would need a rule that recognizes new connections and lets the controller add a new rule for that connection. The controller will not actually route or modify any packets, not even the initial one.

Thank you for stating the obvious. The problem is, once someone implements NAT that way over a low-latency link with a high-performance CPU in the switch, the whole idea of OpenFlow as a management protocol that can work over the network goes out the window, as networked controllers suddenly acquire the requirement of handling every SYN passing over the network. That also creates the possibility of a denial of service attack that targets not hosts or routers but controllers.
The network configuration interface, now something with negligible bandwidth requirements, will then grow into a monstrosity that requires management (and QoS, and resistance to denial of service) of its own. Everything will be the same as before, except worse.
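To put numbers on that attack surface, for one fully loaded 10G link of minimum-size frames (Python; standard Ethernet framing constants, with an invented 1% punt rate):

<ecode>
link_gbps = 10
frame_bits = (64 + 20) * 8           # 64B minimum frame + 20B preamble/gap
pps = link_gbps * 1e9 / frame_bits
print(f"{pps / 1e6:.2f} Mpps")       # -> 14.88 Mpps

# If even 1% of those are SYNs with no matching rule, the controller sees
# ~150k packet-in events per second per link, and must answer each with a
# rule install over the same management network:
print(f"{pps * 0.01:,.0f} packet-ins/s")
</ecode>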

Re:How can you have a software defined network? (2)

swb (14022) | about a year ago | (#43740259)

Sometimes it seems that SDN is just a new dress on an old pig, sometimes it starts to make sense.

When I'm feeling enlightened or charitable about the concept, I envision it as an encapsulation system for layer 2 over layer 3, allowing layer 2 networks to be created independently of the physical constraints of actual layer 1/2 topologies.

I imagine the goal is to define a layer 2 switching domain (ports, VLANs, etc.) and connect systems to it regardless of how or where the systems are physically connected. This all seems fine and dandy -- draw a network diagram, connect systems, and voila, you have an SDN.

But when you actually start to think about it, it seems kind of problematic...

It seems hard to separate an SDN implementation from virtualization, though. If I have an SDN, how do I connect VMs to it if the SDN isn't part of the virtualization environment? Do you install a virtual network adapter in your OS to configure SDN network membership?

Or is it a switch-level system? I feel somewhat less enthusiastic about that as a concept, as it just seems like more configuration for the same basic product (VLAN or VLAN trunk membership), with benefits only for the largest and most complex networks with maximum bandwidth, trying to re-solve problems already more or less solved other ways (like LAN bridging over WAN links).

Since encapsulation appears to me to be an inherent part of it, I also worry about performance -- but I suppose everyone in the SDN world is a go-fast, low-drag operator on fully meshed, aggregated 10-gig Ethernet end-to-end and doesn't care about encapsulation penalties.
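Those penalties are easy to quantify. Taking a VXLAN-style overlay as an example (Python; the usual 50-byte outer-header figure, treated here as an assumption):

<ecode>
overhead = 14 + 20 + 8 + 8                       # outer Eth + IPv4 + UDP + VXLAN = 50B

for payload in (1500, 512, 64):                  # inner frame sizes
    efficiency = payload / (payload + overhead)
    print(f"{payload}B frames: {efficiency:.1%} of line rate")
# -> 1500B: 96.8%, 512B: 91.1%, 64B: 56.1% -- small packets pay dearly
</ecode>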

And then there's my inherent skepticism about the value payoff relative to the level of complexity added, as well as the question: isn't that why we have layer 3 protocols -- to define networks above and beyond their layer 2 memberships?

Re:How can you have a software defined network? (1)

jader3rd (2222716) | about a year ago | (#43741115)

And then there's my inherent skepticism about the value payoff relative to the level of complexity added, as well as the question: isn't that why we have layer 3 protocols -- to define networks above and beyond their layer 2 memberships?

What once was old is now new again.

Re:How can you have a software defined network? (1)

Anonymous Coward | about a year ago | (#43742505)

What is the payoff? No Cisco support contracts on gazillions of switches and interconnects (support is really purchased for firmware updates -- replacement is, or should be, quite rare). This will have a very fast payoff, despite the initial complexity curve.

A lab should be quite cheap for proof-of-concept testing of production changes.

Re:How can you have a software defined network? (1)

Alex Belits (437) | about a year ago | (#43743315)

If all you care about is Cisco contracts, you could have replaced all your routers with a mesh of WRT54Gs (or a modern equivalent) at any point in the last decade. That would also provide better redundancy than anything Cisco can sell you. Now, if you have performance requirements... that's why Cisco can force idiotically expensive contracts.

The problem is not the management protocol. The problem is DEFINITELY not where the management functionality or protocol is implemented, considering that CPUs are now literally dirt cheap, so there is no excuse to move the controller to some external server, even if that would have made sense 15 years ago. You will save $1-$5 on a $5,000-$10,000 device by moving from a CPU capable of running the full switch control plane (or both the OpenFlow switch agent and the OpenFlow controller, with enough CPU time left to mine bitcoins or something if you really, really want to) to a CPU that can only run the OpenFlow switch agent.

Speaking of which, there are a lot of FPGAs in modern network switches... I have an idea! Let's all use those FPGAs to mine bitcoins! That will reduce network maintenance costs!!! Someone, give me a couple hundred million, and I will sell this idea to Cisco and VMware -- they seem to be buying all the stupid ideas now. Oh, and Citrix! How could I forget Citrix! All their technology is obsolete again, so I will convince them to become the giant of router-based bitcoin mining software! BWAHAHAHAHAHAHA!!!

Re:How can you have a software defined network? (1)

evilviper (135110) | about a year ago | (#43743773)

A network is physical infrastructure

No, it isn't. Sure, there's one Ethernet cable connecting a server to the rack switch, but even there, the packets coming in could carry hundreds of different VLAN tags.

Everywhere else, you have multiple redundant links from everything to everything else, and deciding which one to use for each packet is the complex part.
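The classic answer to that decision is an ECMP-style hash, which keeps one flow's packets on one link (avoiding reordering) while spreading flows across all of them. A sketch (Python; link names invented):

<ecode>
import zlib

UPLINKS = ["spine1", "spine2", "spine3", "spine4"]   # redundant paths, invented

def choose_uplink(src_ip, dst_ip, src_port, dst_port):
    """Hash the flow so its packets stay on one link while flows spread out."""
    flow = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return UPLINKS[zlib.crc32(flow) % len(UPLINKS)]

print(choose_uplink("10.1.1.1", "10.2.2.2", 33000, 443))  # deterministic pick
</ecode>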

Re: How can you have a software defined network? (0)

Anonymous Coward | about a year ago | (#43744761)

+1 for the need for coherent network virtualisation.

Re:How can you have a software defined network? (1)

AK Marc (707885) | about a year ago | (#43749245)

Do you know what VLANs are? A VLAN is a logical imposition on a physical network. SDN is an extension of that idea. There's no reason you couldn't put every computer in its own VLAN, with ARP and DHCP forwarded to the correct server, or configure a full mesh of connections and disable all but the best route, spanning-tree style, with your own explicit rules -- no third-party decisions required.
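As a toy illustration of the per-host VLAN idea (Python; all values invented):

<ecode>
PORT_VLAN = {1: 101, 2: 102, 3: 103}    # each access port gets its own VLAN
HELPER_VLAN = 900                       # where the DHCP/ARP relay lives

def tag_ingress(port):
    """Tag untagged host traffic with that host's private VLAN."""
    return PORT_VLAN[port]

def relay_target(frame_type):
    """Broadcast ARP/DHCP is not flooded; it is steered to the helper."""
    return HELPER_VLAN if frame_type in ("arp", "dhcp") else None

print(tag_ingress(2), relay_target("dhcp"))   # -> 102 900
</ecode>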

Google is the Amazon of networking (0)

Anonymous Coward | about a year ago | (#43744685)

It will be interesting to watch this evolve. Facebook is another one to watch, driving the change with OCP. Bet it doesn't lower my cell phone bill from AT&T.
