One of the reasons for the rise of the Internet was that it was stupid: by throwing buckets of bandwidth at any problem, and attaching the intelligence to the network edge, it could ship bits around vastly cheaper than telco carrier networks. That competition was documented back in 1998 in David Isenberg's famous paper, The Dawn …
Eventually everyone becomes right wing, it's just a question of age!
The siren voices of "centralised control makes things more efficient" have proved wrong for every system.
Central control always leads to corruption!
Re: Old Age
You realise he was talking about the networks and not the socio-political landscape don't you?
Re: Old Age
Nobody mentioned centralised control, the article is about separation of control. Out-of-band versus in-band. It's a sound principle, look at how Bell got bitten by in-band control of the phone network when the Blue Box came on the scene.
Re: Old Age
It's a bad idea to put SCADA on the public Internet too. People still do it, increasingly.
Re: Old Age
Right wing does not equal centralised control.
Stalin was hard left wing in economic and social ideology, and exercised total central control. Similarly, Mussolini's Italian fascist government was hard right wing in economic and social ideology, but also applied total centralised control.
The axis of political thought that you are looking for is totalitarian <--> libertarian, with the power being vested totally in the state or the individual.
This is orthogonal to both economic and social ideology.
Debasing the meaning of words does no one any favours.
Re: Old Age
Hmm, I wonder what the implications would be of control and data being run by two separate companies/entities? Y'know, the arrangement where the ISPs get to run the control part and the Government takes over the data part?
Re: Old Age
Italian Fascism also leaned left, drawing most of its tenets from socialism, which is by definition hard left wing. Fascism was regarded as right wing by International Socialists because Fascism was a nationalist socialism. Essentially, the national socialists just weren't pure enough for the international socialists.
There are totalitarian examples on the right wing, but they are monarchists. And absolute monarchists (as opposed to constitutional monarchists) have mostly disappeared from the world, with the exception of a few regimes in the Middle East.
@Tom 13 - Re: Old Age
You forgot to consider banana republic totalitarian regimes. Are they left, right or centre?
Software Defined Network. (Information the article didn't supply.)
He sounded like a man with a fully functioning crystal ball once. I stopped paying attention when he sold his soul to Google.
Show me an efficient system...
... and I'll show you a single-point failure.
When one goes looking for the "Internet Kill Switch", it will be on that control plane.
Re: Show me an efficient system...
You have that one correct. How the security happens is going to be a break-point IMNSHO. I've been running my own CA since Windows Server 2003 Enterprise (in a VM no less) as I did not want that outside my control. And lo and behold, one of the main vectors of attack these days is the certificate chain by both criminal and national actors. Now we want to (re-)centralize the control plane, which is just as stupid as not having a heterogeneous network, subject to that same kind of attack.
It's a nice idea, it's the implementation(s) that are going to be nasty.
could someone explain this (network newbie here)
It seems like it's important but I can't make sense of it.
> throwing buckets of bandwidth at any problem, and attaching the intelligence to the network edge, it could ship bits around vastly cheaper than telco carrier networks.
How could putting intelligence in the network reduce bandwidth, other than by improving the routing protocols (which I thought were pretty good anyway)?
> SDN's separation of the data plane and the control plane.
I take it this means 2 networks, the biggie for user data + a mini on top for control, is that right? In which case one can have dedicated QOS 'channels' and be back to one large data network again, with the control network threaded through logically, yes?
> That separation means that forwarding decisions – the control plane – can be abstracted away from the switch elements and hosted in external (and more generic) servers.
I thought that's what big switches/routers/whatever were doing, though I don't understand the comment about 'more generic servers'
> The abstraction of the control plane into software is also seen as an opportunity to pull service creation and definition back into the network, away from the elements
WTF is 'service creation and definition'? HTF could they live in a network? And what are 'the elements'?
> giving carriers a chance to recoup some of the value lost when they were turned into big, dumb bit-pipes.
How, how, how, with magic or something?
And how is anyone who doesn't do heavy duty networking supposed to decipher this article?
Any explanation happily received, TIA
Re: could someone explain this (network newbie here)
OK, I'll no doubt grossly over-simplify here, but stick with me.
Up until the 1960's, telephone exchanges (broadly) used the same switch-plane for traffic and control/signalling. The problem with that is that it's blind to what's ahead. If something further along in the chain is broken, an element earlier in the chain probably won't know and will blindly keep sending calls to it, all of which will fail. (Massive over-simplification).
To use an analogy, it's like driving to a destination you've never been to before without consulting a map. Instead of planning a route, you stop at every junction and ask the way.
With packet as opposed to circuit switching you can take that hit - if the bridge over the river to the place where you're going is down, you can just send out someone else in another car - and keep doing so until someone gets there. Eventually the people giving directions will know the bridge is down and give you an alternate route.
Separating traffic and signalling in telephone exchanges began with Ericsson's Crossbar and the BPO's trio of experimental electronic/digital exchanges in the 60's - Highgate Wood, Empress and the TXE1. That separation allows you to plan a route in advance, to know that the elements in the chain are working - with (most) digital exchanges the switch path isn't created until the signalling path has made the distant phone ring and the called party has answered. It's efficient with bandwidth - especially compared with setting up a path from Exeter to Edinburgh just to have the exchange in Edinburgh return engaged tone. If you separate traffic from signalling/control you achieve the same result without taking up bandwidth for no benefit. Separating the planes means that the traffic bandwidth is just that - traffic - with no bits given up for signalling or control - payload increases for a given throughput.
SDN is heading in broadly the same direction - to use intelligence to plan and set up a route end-to-end, instead of making a routing decision at every switching element, every time, for every packet. Best route instead of first available route, and only set up the route once you know that information exchange can happen. Better route decision making is needed in IP networks for the same reason it was needed 50 years ago in telephony networks. Done right, it makes it cheaper.
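To make that "plan the route once, centrally" idea concrete, here's a toy sketch (names and topology entirely made up, nothing to do with any real SDN controller): a controller runs shortest-path over its view of the network, then installs one forwarding entry per switch on the path, instead of every switch making a per-packet routing decision.

```python
import heapq

# Toy topology: switch -> {neighbour: link cost}. Purely illustrative.
topology = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 1, "D": 5},
    "C": {"A": 4, "B": 1, "D": 1},
    "D": {"B": 5, "C": 1},
}

def shortest_path(graph, src, dst):
    """Dijkstra: the controller computes one best end-to-end path."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nbr, weight in graph[node].items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + weight, nbr, path + [nbr]))
    return None

def install_flows(path, flow_id):
    """Push a forwarding entry to each switch on the path, once,
    so the switches themselves just do dumb table lookups."""
    tables = {}
    for here, nxt in zip(path, path[1:]):
        tables.setdefault(here, {})[flow_id] = nxt
    return tables

path = shortest_path(topology, "A", "D")
print(path)                         # ['A', 'B', 'C', 'D']
print(install_flows(path, "flow-1"))
```

The point of the sketch is the division of labour: the expensive thinking happens once, in generic software, and the switch elements are left holding nothing but simple match-to-next-hop tables.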
To me, this introduces a conundrum. The logical outcome of such a step, longer term, is a return to circuit switching. If you've gone to the effort of setting up a 'best' end-to-end path, why not leave it set up for the duration of the information exchange? That gets chaps with beards dusting off research papers with B-ISDN written on them - the telecom world's proposed approach to high-speed global networking before TCP/IP won the networking war.
It starts with a bit of a false premise-
"throwing buckets of bandwidth at any problem, and attaching the intelligence to the network edge, it could ship bits around vastly cheaper than telco carrier networks"
This is not strictly true given the 'net is fundamentally reliant on telco/carrier networks. It also depends on a definition of 'edge'. At the extreme edge, ie customer premises, the network doesn't need to be very intelligent. If you've only got 1 connection to the 'net, deciding where your traffic goes next isn't difficult. If you've got an IP connection to a PoP, that PoP may typically only have 2 connections to the rest of the 'net, so the routing decision is perhaps left or right, or maybe back out another interface for P2P or local traffic. So for basic networking, you don't really need much intelligence.
Problem is, and was, that router vendors make a lot of money selling routers, which are 'intelligent' devices that end up spending most of their lives doing simple forwarding. Their business is and was challenged by dumber networks (eg Ethernet) in the volume space at the edges of the network. Use cheap Ethernet to aggregate and forward to somewhere with some brains, if it needs a routing decision. Adding expensive routers into the network just adds unnecessary costs. Putting more intelligence into the network can reduce bandwidth by using it more efficiently.
"I take it this means 2 networks, the biggie for user data + a mini on top for control, is that right? "
Kind of, but it's more about communication between the network and the service layer/user data. At the moment there's little communication between those layers. Suppose a customer has 2x 10G pipes. One's full because their router says it's the best route. The other is practically empty. With better signalling, the network could potentially shift traffic onto the idle link or increase capacity. The same could be done if a circuit was degraded. QoS gets more contentious given the net neutrality arguments.
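That "shift traffic onto the idle link" decision is trivially simple once the controller can actually see link utilisation. A minimal sketch, with entirely made-up figures for the 2x 10G example above:

```python
# Hypothetical link-utilisation view a controller might hold.
# Figures are illustrative assumptions, not measurements.
links = {
    "link-1": {"capacity_gbps": 10, "load_gbps": 9.5},   # nearly full
    "link-2": {"capacity_gbps": 10, "load_gbps": 0.3},   # nearly idle
}

def pick_link(links):
    """Steer the next flow onto the least-utilised link."""
    return min(links,
               key=lambda l: links[l]["load_gbps"] / links[l]["capacity_gbps"])

print(pick_link(links))   # link-2
```

The hard part isn't this arithmetic, of course; it's getting timely, trustworthy utilisation data out of the network and into the control plane in the first place.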
"I thought that's what big switches/routers/whatever were doing, though I don't understand the comment about 'more generic servers'"
If you're switching or forwarding most of the traffic, then intelligence in most of the network is less complex, so cheaper. So with an MPLS network as an example, put the Internet into a VRF and if it needs to make a routing decision, forward that to a route server. Which doesn't need to be an expensive router.
"How, how, how, with magic or something?"
By offering value-added services, or simply better performing services. Challenge would be overcoming user perception that networks are big, dumb, bit pipes that should be cheap.
"And how is anyone who doesn't do heavy duty networking supposed to decipher this article?"
Same way you would any marketing.. use caution, examine the claims. Try and figure out how adding cost reduces price :p
Re: some explanation. @Terry Barnes @Jellied Eel
Much appreciated, gents, that nudges things towards comprehensibility. I'll read over your replies a couple more times *very* slowly.
Thanks for taking the time.
(btw @Terry Barnes, "If you've gone to the effort of setting up a 'best' end-to-end path, why not leave it set up for the duration of the information exchange?"
I guess that best, or even 'working', path can change such that it becomes significantly cheaper during transmission to reroute. I guess an even better reason is that maintaining such a path at the logical per-connection level (ie. between arbitrary communicating machines A and B) would require additional state in each of the intermediate network boxes, which with large numbers of communicating pairs, could become... well, you tell me! Current routing is stateless at this level, I think. Perhaps these days with cheap memory that might work, but 20 years ago?)
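For what it's worth, a quick back-of-envelope on that per-connection state. All figures below are guesses for illustration, not measurements of any real box:

```python
# Rough per-flow state a circuit-style core switch might hold:
# 5-tuple, next hop, counters, timers. Assumed figures only.
flows = 10_000_000        # concurrent flows through a busy core box (guess)
bytes_per_flow = 64       # state per flow (guess)

total_mb = flows * bytes_per_flow / 1_000_000
print(total_mb)           # 640.0 (MB)
```

Trivial by today's standards, as you say; ruinous for a mid-90s line card. Which may be part of why stateless hop-by-hop forwarding won the first time round.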