As network topologies and data access patterns have evolved, load profiles can change so quickly that a completely new approach to networking is required. That approach is OpenFlow. According to Renato Recio, IBM Fellow and system networking CTO, life before the advent of x86 virtualisation was simple: client computers did most …
Finally! SecureFast is back.
I wonder if it will work as well.
If I say this often enough, maybe it will actually happen.
Re: Finally! SecureFast is back.
My thoughts exactly. For those of you scratching your heads, SecureFast was a very similar system (maybe even patented?) designed by Cabletron and running on their MMAC+ SmartSwitches back in the late 1990s: centrally managed, resilient, path-oriented, distributed Layer 2 VLAN forwarding. Worked brilliantly, just never caught on because it was perceived as proprietary and non-standard.
So is this about Ethernet switching without spanning tree?
A lot like a telco-style network then...
It is a protocol for managing the switching hardware. Rather than individual devices deciding which paths to use, a management system "programs" all of your switches to form your network. Spanning tree allows individual "intelligent" nodes to communicate path information and create a network; OpenFlow creates the network in a management system (the controller) and passes the details down to the "dumb" nodes (the hardware). OpenFlow allows the hardware to feed information back, but limits what the hardware can do without the controller's approval (e.g. re-route traffic from a failed path to a backup path, but don't use an additional path if one becomes overloaded).
The idea is to replace expensive single-vendor networks with cheap multi-vendor hardware and a control system. I believe Google's network resembles this approach, although they may or may not use OpenFlow.
As a future step, OpenFlow would expose an API so that applications can feed information back to the controllers (e.g. VMotion requesting additional bandwidth to move VMs between machines, perhaps at the expense of backup bandwidth).
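To make the controller/switch split above concrete, here is a minimal toy sketch of the match-action idea: the switches hold only flow tables, and the central controller computes a path and installs rules into each hop. This is illustrative pseudocode in Python, not the real OpenFlow wire protocol; the class names, field names and port numbers are all made up.

```python
from dataclasses import dataclass

@dataclass
class FlowRule:
    """One match-action entry in a switch's flow table."""
    match: dict      # header fields to match, e.g. {"dst_mac": "aa:bb:..."}
    actions: list    # what to do on a match, e.g. ["output:2"]
    priority: int = 0

class Switch:
    """A 'dumb' node: it only applies whatever rules the controller installs."""
    def __init__(self, name):
        self.name = name
        self.table = []

    def install(self, rule):
        self.table.append(rule)
        self.table.sort(key=lambda r: -r.priority)  # highest priority first

    def forward(self, packet):
        for rule in self.table:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.actions
        return []  # table miss: in real OpenFlow this would be punted to the controller

class Controller:
    """The central brain: computes paths and programs every switch on the path."""
    def __init__(self, switches):
        self.switches = {s.name: s for s in switches}

    def program_path(self, dst_mac, hops):
        # hops: [(switch_name, out_port), ...] along the chosen path
        for name, port in hops:
            self.switches[name].install(
                FlowRule(match={"dst_mac": dst_mac},
                         actions=[f"output:{port}"],
                         priority=10))

s1, s2 = Switch("s1"), Switch("s2")
ctl = Controller([s1, s2])
ctl.program_path("aa:bb:cc:dd:ee:ff", [("s1", 2), ("s2", 1)])
print(s1.forward({"dst_mac": "aa:bb:cc:dd:ee:ff"}))  # ['output:2']
```

Note how re-routing around a failure is just the controller calling `program_path` again with a different hop list; the switches themselves never make that decision.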
There are already devices that do all of this: they are called load-balancers. Local or global server load-balancing can achieve all of these requirements.
Setting specific 'probes' lets you monitor conditions and adjust/vary the 'answers' to the load-balancer query.
The IP answer, URI, or DNS answer can all be based on conditions including:
- load characteristics
- resolved failures and promotions
So all this stuff is already available to the network designer using purpose-built, industrial-strength network devices... the idea of bunging a load of crappy software in the middle of the network flows just seems like backwards thinking. Capacity to the 'decision maker' would be the vital planning problem/consideration.
Security boundaries from the decision maker might also have to be compromised to allow communications to network devices in other security zones/DMZs.
I won't hold my breath for this crap to become mainstream. Sounds like a load of programmers who think they know networking better than network engineers/designers to me.
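For reference, the probe-and-answer selection described in the comment above can be sketched in a few lines: a toy model only, in which the backend pool, the probe function and the load figures are all invented for illustration.

```python
# Hypothetical backend pool: address -> probe state and current load (0..1).
backends = {
    "10.0.0.1": {"healthy": True,  "load": 0.35},
    "10.0.0.2": {"healthy": True,  "load": 0.80},
    "10.0.0.3": {"healthy": False, "load": 0.10},  # failed its last probe
}

def probe(addr):
    """Stand-in for an active health check (ICMP/TCP/HTTP probe)."""
    return backends[addr]["healthy"]

def answer(query_name):
    """Return the least-loaded healthy backend as the DNS/GSLB answer."""
    alive = [a for a in backends if probe(a)]
    if not alive:
        raise RuntimeError("no healthy backends for %s" % query_name)
    return min(alive, key=lambda a: backends[a]["load"])

print(answer("www.example.com"))  # 10.0.0.1
```

A failed probe removes a backend from consideration ("resolved failures"), and the `min` over load implements the "load characteristics" condition; real load-balancers layer many more policies on top of this.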
Useful until your server crashes
So basically instead of letting a network device make decisions based on information it receives, the decision is pushed to a server or set of servers. Great theory, but terrible execution. If the server(s) have a problem, your entire network could conceivably crash. A coder has a bad day? Crash. From a service provider perspective (my industry), I do not see this as a good thing.
"Sounds like a load of programmers who think they know networking better than network engineers/designers to me"
'Cos it's CLI monkeys that code IOS/NX-OS/EOS/etc...?
I don't know about the specific products in this article (because I CBA to check right now), but most of the people developing in this space (I am referring to SDN as a whole) are the same people who were, a few weeks/months/years back, developing the stuff you are so keen on today!
@ahtung!: centralised (vs. distributed) control does bring its own challenges. Fortunately, making application services highly available is not a new concept. Also, this doesn't have to be an all-or-nothing affair: there are possible architectures/designs that can "fail open", or the software could simply augment existing technologies. Big-bang switch-overs are rarely a good idea in any field...