Everybody has their own twist and take on software-defined networks, or SDN, these days as virtualization moves on from servers and storage and into the still crufty world of networking. Server vendors who bought into networking to bolster themselves are now scrambling to be players in virtual networks when all they thought they …
what the hell does that mean
""The promise of SDN is that you have an abstracted control plane that allows you to tie your applications to the network,"
I keep seeing people talking about SDN but have yet to come across an example that lets me see the value in such a thing. I'm sure there is some, but I'm betting it's pretty niche compared to the amount of hype it's been getting recently. Just now I browsed two more articles on SDN and they told me nothing other than that it makes the network more programmable and can tie into things like OpenStack. I don't get what there is to be excited about. I have seen some people say it allows you to experiment with different ways of routing data without making changes to the OS code on the switch - again, I see no point unless you're in the specific field of needing to do that (e.g. developing the next BGP or something). I've never had a problem with today's methods of routing, and don't know anyone personally who has.
When I tie applications into the network I do it with load balancers (oh sorry, I think they are called application delivery controllers or something - in any case, full layer 7 traffic routing). What applications need to be tied into switches?
Re: what the hell does that mean
SDN is largely about scalability and how to manage bandwidth allocation efficiently between the different applications and services being provided.
Imagine a large data centre running a lot of virtual hosts in a multi-tenant environment (i.e. a public cloud). Clients will be paying different rates for different services, and having them share uplink bandwidth will either restrict what you can offer or require more bandwidth than you want to provision to each rack. SDN would allow you to manage the bandwidth allocation between racks, potentially requiring virtual hosts to be moved to less busy racks (e.g. during backups or data-intensive applications such as Hadoop).
Is it niche? I believe so, although the total number of ports that will ship will be significant due to the scale of locations where it is likely to be used.
The thing I don't understand is why the network equipment providers are chasing this - the OpenFlow switches will be commodity hardware based around a management processor (probably x86) and a switch ASIC. There won't be any money in the ports side of it.... On the OpenFlow controller side, I'm expecting someone (one of the big cloud providers) to release an OpenFlow controller which doesn't leave a lot of money to chase.
I'm less certain how this will affect enterprise networks, where an organisation has control over what is being done inside the data centre and may see over-provisioning as a cheaper option than managing an SDN.
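The rack-level bandwidth idea above can be sketched as a toy placement policy. All the names and numbers here (rack uplink size, tenant rates, the `place_tenants` helper) are invented for illustration; a real controller would enforce this with queues and meters on the switches, not a Python dict.

```python
# Toy sketch of the bandwidth-allocation idea: each tenant pays for a
# guaranteed rate, and we check whether a rack's uplink can honour it
# or the tenant's hosts should land on a less busy rack.
RACK_UPLINK_GBPS = 10.0

def place_tenants(tenants, racks):
    """Greedily assign each tenant's guaranteed rate to the least-loaded rack."""
    load = {rack: 0.0 for rack in racks}
    placement = {}
    # Place the biggest consumers first so they get the emptiest racks.
    for name, rate in sorted(tenants.items(), key=lambda kv: -kv[1]):
        rack = min(load, key=load.get)            # least busy rack
        if load[rack] + rate > RACK_UPLINK_GBPS:
            raise RuntimeError(f"no rack can host {name} at {rate} Gbps")
        load[rack] += rate
        placement[name] = rack
    return placement, load
```

With tenants at 6, 3 and 2 Gbps across two 10 Gbps racks, the big tenant gets a rack to itself and the other two share the second uplink - the same decision an SDN controller could make (and re-make) live.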
Re: Re: what the hell does that mean
"SDN is largely about" VMware bought it so hp have to play nice as VMware will no doubt be building it into future Vsphere products, and no matter what anyone else says the market is still going to carry on buying VMware. End of.
Re: what the hell does that mean
OK, I think I can see that use case, but yeah, it doesn't sound like something that would be a killer app in an enterprise environment.
Yeah, it wouldn't surprise me if some sort of OpenFlow controller was integrated into OpenStack as part of the rest of that solution.
thanks for the info
Re: what the hell does that mean
yeah I can see the massive cloud organizations wanting this kind of thing if you have 10s of thousands or more physical machines.
But as you say in several of these cases the big massive cloud companies make their own switches and don't use the gear from the enterprise companies.
For the first time last week I heard the term "software defined storage" as well. I guess they need to keep inventing new concepts and terms to keep the air in those cloud bubbles.
It's about centralisation and easier management. If you have a large virtualised environment with 50+ switches from various manufacturers, it means you can manage and make changes from one central location. At present, to add a new global VLAN you may need to log into each and every switch individually, which could take hours. With things like OpenFlow, such a change can be made within seconds.
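The "one change, many switches" point above is easy to sketch. Note this is a hedged illustration: `Controller` and `add_vlan` are invented names standing in for whatever API a real controller exposes; a real one would push the change to each switch over OpenFlow rather than record it in memory.

```python
# Stub controller: one call fans a VLAN change out to every managed
# switch, replacing 50 individual SSH/console sessions.
class Controller:
    def __init__(self, switches):
        self.switches = switches
        self.vlans = {sw: set() for sw in switches}   # per-switch VLAN state

    def add_vlan(self, vlan_id):
        # One central call instead of logging in to each box by hand.
        for sw in self.switches:
            self.vlans[sw].add(vlan_id)
        return len(self.switches)

ctl = Controller([f"switch-{n:02d}" for n in range(1, 51)])
touched = ctl.add_vlan(100)   # 50 switches updated in one operation
```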
This this this this this!
I run two datacenters with equipment from Dell, HP, Cisco, Extreme, 3Com (redundant, I know), A10, Brocade, Palo Alto and Juniper. Plus I'm responsible for the networks in a number of local SMBs. This adds up to tens of thousands of touch points for configuration. Reducing this to one, or even 20, touch points would majorly impact my bottom line. Heck, plug this into OpenStack and the cloud operators will be doing network stuff without even knowing it, since the local SDN controller will do their bidding, but under my conditions and constraints. I may not have as much billable time per client, but I will have a lot more time for clients I currently turn away.
All good from my pov.
Isn't that what SNMP was supposed to solve many years ago? The management issue? I thought SDN was more than just management.
If SNMP didn't fix the management issues, what makes anyone think SDN will? And what are the expectations for a timeline for it to be fleshed out, interoperable, etc.?
Myself, I don't use SNMP for anything other than polling counters and stuff, though of course there is the ability to send SNMP write requests, and it can even be secure with SNMPv3.
Sounds like a topic El Reg could get some credit for exploring, if they've got the expertise to do it.
I don't recall SNMP being sold as the end of all management but I won't argue it either. But SDN isn't just about device management, it's also about traffic management. Right now you have to design layer 1, 2 and 3 for any enterprise scale design. When doing that you have to take into account redundant pathing, spanning-tree topologies, routing topologies and convergence time of all of these protocols in a failure situation.
With SDN you can actually slap your switches together any way that works with your physical topology (rings, stars, daisy chains, triangles, etc). You can string as many lines between devices as you would care to and you can create as many loops as you want. The SDN controller manages flows so that all paths from A to B can be utilized and load shared. Even better you don't have blocked or unused links a la STP, you don't have to set up port LAGs on each pair of devices with multiple links, and this is the best part: no convergence time ever! Just like there's no convergence time on a LAG group when a link fails, with OpenFlow there's no routing or STP convergence time for node or link failure.
That bears repeating: no convergence time. That's huge. Many a career was built by engineers who knew how to tweak routing protocols and STP down to a gnat's ass in order to get the lowest possible convergence times. With SDN that special skill is no longer required.
Oh, and you can ignore STP, or TRILL, or whatever the new loop-prevention protocol of the day is.
At least that's how I see it.
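The flow-placement argument above boils down to the controller seeing the whole topology as a graph: extra links and loops are just more paths, not problems to block. A minimal sketch, with a made-up square topology and a plain BFS (this is the general graph idea, not any particular controller's algorithm):

```python
# The controller's view: any wiring (rings, stars, triangles) is a
# graph, and every equal-cost path is usable for load sharing.
from collections import deque

def all_shortest_paths(links, src, dst):
    """BFS over an undirected topology; returns every equal-cost shortest path."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    paths, best = [], None
    queue = deque([[src]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break                         # only longer paths remain
        node = path[-1]
        if node == dst:
            best = len(path)
            paths.append(path)
            continue
        for nxt in adj.get(node, ()):
            if nxt not in path:           # no loops within a path
                queue.append(path + [nxt])
    return paths

# A square with two equal-cost routes from A to D: A-B-D and A-C-D.
square = [("A", "B"), ("B", "D"), ("A", "C"), ("C", "D")]
paths = all_shortest_paths(square, "A", "D")
```

Both paths come back and can carry traffic at once; if the A-B link dies, the A-C-D path is already computed, which is the "no convergence time" point in miniature.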
Re: Re: This
"......But SDN isn't just about device management, it's also about traffic management....." Erm, isn't the traffic management part just OpenFlow, as already used by hp, Brocade, CISCO and many others?
Um, no. SDN is a concept wherein all the network parts are stupid and the brains/control are centralised. OpenFlow is an implementation of SDN. Various switch/router vendors have implemented (at least parts of) the OpenFlow client specification, and some vendors have created devices or software which implement the controller specification. Clients are dumb boxes with lots of ports that get all of their configuration from the controller. Both controller and client are required for a working installation.
If you are familiar with WLAN evolution it is very similar. At one point, all wireless LANs were made up of stand-alone access points where each AP had to be configured independently. As you can imagine, this scales very poorly in environments with hundreds of APs. Nowadays you can still deploy that way or you can use the newer methods where the actual APs are just dumb radios which load their configuration from the WLAN controller. Here the radios are non-functional without the controller; in OpenFlow the switches are equally useless without a controller.
Of course there are variations to this theme, but you get the general idea.
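The dumb-switch/smart-controller split described above can be sketched in a few lines. This mimics OpenFlow's table-miss behaviour in spirit only; the class and method names (`DumbSwitch`, `ToyController`, `packet_in`) are invented for the example.

```python
# Dumb box: only matches packets against its flow table. Anything it
# doesn't recognise goes to the controller, which installs a rule.
class DumbSwitch:
    def __init__(self, controller):
        self.flow_table = {}            # dst MAC -> output port
        self.controller = controller

    def packet_in(self, dst_mac):
        if dst_mac in self.flow_table:
            return self.flow_table[dst_mac]        # fast path: table hit
        port = self.controller.decide(dst_mac)     # table miss: ask the brains
        self.flow_table[dst_mac] = port            # cache the installed rule
        return port

# The brains: holds the network-wide view and makes the decisions.
class ToyController:
    def __init__(self, mac_to_port):
        self.mac_to_port = mac_to_port

    def decide(self, dst_mac):
        return self.mac_to_port.get(dst_mac, "flood")

ctl = ToyController({"aa:bb:cc:dd:ee:ff": 3})
sw = DumbSwitch(ctl)
first = sw.packet_in("aa:bb:cc:dd:ee:ff")    # miss: controller decides
second = sw.packet_in("aa:bb:cc:dd:ee:ff")   # hit: answered from flow table
```

Unplug the controller and the switch can only serve flows it has already learned, which is the WLAN-controller analogy from the post above.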
<Yawn> @ TPM
Come on, TPM, try saying something about an hp product without trying to bash hp. Go on, just try for once.
"....Server vendors who bought into networking to bolster themselves are now scrambling to be players in virtual networks when all they thought they needed to do was sell switches along with their servers and storage. Hewlett-Packard is one of them...." Really? So you just forgot about all that Flex technology that hp has been shifting for years, much earlier than IBM had a comparable converged network product?
For me, OpenFlow and SDN are not mature enough to get into my core network. MPLS services with OpenFlow sound interesting to me, but today there are only some case studies, some testbeds and presentations. Please give me a product number - I want to buy this now!
Talking about HP: I looked at their release notes and switches.
1. HP supports OpenFlow 1.0. We are at 1.1, so version 1.2 will be here soon.
2. HP promoted its "A"-series to be put into the core. Well, those switches use the former H3C OS, whereas the HP 3500/8200/3800 use the ProCurve OS. The latter should be placed at the edge, but have OpenFlow onboard. The H3C ones belong in the core, but do not have OpenFlow. Somehow strange...