Reminds me so much of Microsoft's forced updates in Windows 10, and the blue screens and angst those have caused.
Time to be leaving AT&T methinks.....
AT&T has launched an audacious attempt to push the networking industry towards software-defined networking and white-box hardware. Revealed in a white paper titled Towards an Open, Disaggregated Network Operating System [PDF], the carrier's plan called for the creation of a “Disaggregated Network Operating System” (dNOS) as …
It's Tuesday morning; let's hope BT are already analyzing the white paper "Towards an Open, Disaggregated Network Operating System", because this is far more important to them, in terms of allocating R&D resources/teams, than any 'snake-oil' pointless, bamboozling, obfuscated copper-based G.fast*, if we are truly to have ubiquitous full-fibre FTTP cheaply throughout the UK.
If BT need a new direction, as legacy copper-based pointless G.fast looks more and more like a 'can of worms' in the making, they wouldn't go far wrong following this route (too) instead, to cut the rollout costs of a full fibre-optic FTTP network.
And yes, BT still need to agree a date with Ofcom for the last new copper installation.
*G.fast with the usual "up to" caveat of 'zero (a not spot) and up'-sweated to the max copper.
(Following on from the comments yesterday, there seems to be a theme here ;)
If you look at the costs of providing home Internet connectivity, the cost of the routers at the home end is almost negligible (likely £10/unit in bulk) and the costs for the central offices are significant (typically £500,000-£1,000,000/exchange, mostly down to hardware and support costs rather than initial software costs), but both are insignificant compared to the cost and time taken to install additional cabling. I.e. if there are around 3,000 households connected to an exchange, you have around £30,000 of CPE (home) routers, £1,000,000 of exchange routers/DSLAMs etc., the infrastructure connecting central communication hubs to individual houses at at least £1,000 per household, and a few million for the exchange infrastructure (land/power/backhaul), for a total of at least £4-5 million - plus around 20% of that per year for maintenance and management. Moving to "free" software might reduce those costs by 1% or 2%, and the chances of that being passed on to customers are negligible.
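The arithmetic above can be sketched as a toy cost model. All figures are the comment's own rough estimates (the "few million" of exchange infrastructure is taken as £2m here), not real BT numbers:

```python
# Back-of-the-envelope exchange cost model using the figures in the comment.
# Every number is a loose estimate for illustration, not a real BT cost.

HOUSEHOLDS = 3000
CPE_COST = 10                 # £ per home router, in bulk
EXCHANGE_HW = 1_000_000       # £ exchange routers/DSLAMs etc.
PER_HOME_INFRA = 1000         # £ cabling/connectivity per household
EXCHANGE_INFRA = 2_000_000    # £ land/power/backhaul ("a few million")

capex = (HOUSEHOLDS * CPE_COST        # ~£30k of CPE routers
         + EXCHANGE_HW
         + HOUSEHOLDS * PER_HOME_INFRA
         + EXCHANGE_INFRA)

opex_per_year = 0.20 * capex          # ~20% p.a. maintenance/management
software_saving = 0.02 * capex        # "free" software saves 1-2% at best

print(f"capex: £{capex:,}")                       # £6,030,000 with these inputs
print(f"opex/yr: £{opex_per_year:,.0f}")
print(f"max software saving: £{software_saving:,.0f}")
```

The point the numbers make: the CPE is a rounding error, and even the most optimistic software saving is dwarfed by the cabling and civil-works cost per household.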
The open router standards will hurt existing providers of "enterprise class" routers like Cisco and Juniper, where a £1000-£5000 CPE router will be replaced by an equivalent device costing a tenth of the price.
Note: I suspect all the estimates of exchange costs are on the low side based on an urban telephone exchange in the UK.
> *G.fast with the usual "up to" caveat of 'zero (a not spot) and up'-sweated to the max copper.
What on earth has this got to do with distributed network operating systems in the operator's IP core?
But on the subject of G.fast, presumably you missed that Openreach have committed to a guaranteed minimum speed of 100Mbps - i.e. if it delivers any less than that, you will be able to raise a fault.
I'm rather interested in something faster (yet cheap) to replace VDSL2 LAN boxes in mostly rural cases where barns and other buildings are connected only with telephone cables and cabling everything with fibre just isn't worthwhile. The same goes for indoor cabling in many buildings, because VDSL is mostly plug-and-play, and G.fast is just the next generation.
Power-line Ethernet is hit-and-miss - usually miss unless the adapters are situated close to each other, and it won't work between buildings anyway.
barns and other buildings are connected only with telephone cables and cabling everything with fibre just isn't worthwhile
Something like this? Not as expensive as you might think, and perfectly usable for point-to-point links. I dare say other vendors have similar options.
You're quite a bit out on exchange costs. A typical DSLAM is something like a Nokia (Alcatel) 7330 or in BT's case, a Huawei MA5300/MA5600, which cost a LOT less than £1m a PoP. Most of the time the FM cost is well & truly sunk, give or take BT's sale & leaseback deal.
There is, but then that's always been one of the challenges with SDN. If you're a Google, Facebook or Amazon then it's vaguely OK to roll your own 'white box' and buy/build/lease capacity to plumb them together. If things go wrong, it's your network, your risk. Ok, so that doesn't necessarily apply to Amazon, but contracts can be wonderful things to limit liability.
So the challenge for SDN in an SP environment mixing internal, retail and wholesale is how to extend control-plane capabilities to normally data-plane dwellers, especially if some of those are TLAs and paranoid about security. That's mentioned in AT&T's paper and would be something they'd have looked at closely in their trial. If customer data are encrypted and salted, that reduces the risk of eavesdropping or traffic analysis, leaving the potential for DoS via control-plane attacks - hence why SPs that want to stay in business take care to protect that plane. And the classic router vendors haven't been immune to security problems; when they crop up it's a major ball-ache to patch a 100K+ node network. Being 'open' rather than proprietary may reduce security risks.
The hardest part here is probably convincing a loyal bunch of well-paid Cisco Certified Network Engineers, in a similar vein to persuading Microsoft Certified folk to see Linux Desktop/LibreOffice 5 as a viable alternative to Windows 10/Office 365 deployments.
There is absolutely nothing wrong (and an awful lot that is right) with the Linux open-source alternative to Microsoft's desktop in 2017 "to do the drudge", but getting traction behind an alternative idea so that it becomes mainstream is always hard - even Microsoft struggle with launching alternatives; just look at Windows Phone.
AT&T are giving the signal, though: they are ready for a change away from Cisco. They are probably looking at Facebook, because Facebook, it has to be said, is leading the way away from vendor lock-in.
Two problems I see here:
1. 'well paid Cisco Certified ... Engineers' - this is an oxymoron; the good pay packets in the Cisco world were during 2000-2008
2. AT&T are no Facebook - they are a behemoth incumbent as opposed to a 'disruptor'. It takes a lot more than desire to shave off a few percent in order to actually achieve something good. It's a culture thing and judging by opinions I read the culture may not be there.
But you never know, I could be wrong.
I'm curious which incarnation. Post consent-decree, some Baby-Bells (apologies to La Vache Qui Rit) maintained a level of service that was quite good, while others were at best surly about responding to complaints, and at worst outright fraudulent in re. SLAs and the like. Then, as any student of capitalism would predict, the worst of the worst saved so much money by skimping on maintenance and infrastructure (unlike those dolts who remembered they had customers to serve, even if most customers had little choice), that they were able to buy up all the others. Friends of mine who worked for one acquired Telco referred to their new overlords as Sodomized By Cowboys.
Anyway, this sterling example of Free Enterprise eventually bought the husk of AT&T, and switched to that name. When a company changes its name to one whose logo was commonly referred to as The Death Star, you know even they recognize their image problem.
That might be why AT&T are doing this. Cisco or Juniper release their latest bleeding edge code, operator needs to test it before hitting the 'I feel lucky' button and rolling it out.
But this isn't new. Back in the mists of time Demon used gated on Unix boxes for routing. Some years later, I looked into doing the same for another very large network and had some meetings with a company whose name escapes me, but which develops and maintains a commercial gated, which I think Deutsche Telekom used.
The problem with IOS and JunOS is that both have bloated to include a huge range of bells & whistles that typically aren't required on a core router and can just create vulnerabilities - plus the cost of licences and maintenance. And if you look at what a typical core router does, it's mostly running BGP, ISIS or OSPF and holding a bunch of routing tables, aka VPNs or VRFs.
And there's been routing table bloat, both with the plain'ol Internet and also customer VRFs. Traditional routers have often had very limited memory onboard, which limits the services that can be offered, ie max routes in a VRF for a customer VPN due to memory limitations.
And then there's Ethernet. So now a core router is something that can connect to a bunch of 100Gbps interfaces and run a pile of routing instances. A smart CTO might be looking at the cost/features/performance of a 'cloud' VM platform and then looking at the cost of a core Cisco or Juniper, and wondering why one is so much more expensive than the other. Especially as the VM platform is likely to be more expandable, offer backup/rollback/auto-deployment features that make offering customers a VPN+services play cheaper and easier to automate. Oh, and be 'network ready' for SDN.
The same holds true at the edge, so a consumer router needs an Ethernet in, and an Ethernet out. Why would you need a router when there's only 2 interfaces?
Define 'Core router' please?
For example Cisco ASR9000 is not a core router, by a (nautical) mile.
True core router doesn't need to know anything about a VRF. If it must absolutely positively speak BGP all it sees is another AF prefix in BGP, not in the forwarding table.
Speaking of code bloat - (again, for example) the ASR9000 can have selective packages loaded; you can cook your own (smaller) installation from a large tarball. Also, every flavour of line card out there needs code to support it. And imagine every MPLS flavour (there is more to life than VRF and EoMPLS), OTV, VXLAN, LISP, etc. - that is all code which not every ISP might use, but someone out there does. It adds up.
Add things like HQoS and the like and see why the vendor might want to offer those as different line cards (ServiceEdge vs Transport) and charge appropriately. The NP/NPU on a router line card can't simply be replaced by a Xeon chip in the hope it will do. Not likely.
Yes - agreed - the memory thing is a shambles. Why I need 8GB RAM on an ASR1000 in order for the RE to have only 3GB allocated to the IOS-XE process is beyond me. (I know, it runs Linux underneath and that takes RAM, but still.)
Cloud VM - good luck getting throughput of 120Gbps full duplex from a VM. There is a place for VM-based routers, but they can't replace proprietary or merchant silicon when it comes to performance. Also, Juniper vMX, Cisco XRv and Arista vEOS are nothing new, and Cisco CSR is quite a few years old.
As regards automation - Ansible, NETCONF and YANG, NAPALM, or an expect script over SSH if you're desperate and the kit is old - nothing new here. But that's the easy bit; the hard bit is the orchestration, and making network engineers think like DevOps (yuck, hate the word).
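The "easy bit vs orchestration" point can be illustrated with a minimal sketch: rendering and pushing one device's config is trivial, while the orchestration problem is keeping thousands of devices consistent from a single source of truth. The device names, template text and `render`/`plan` helpers here are all hypothetical - plain Python rather than any real Ansible or NAPALM call:

```python
# Toy illustration: per-device config rendering is the easy bit; the hard
# bit is orchestrating one source of truth across a whole fleet.
# Device names and template text are made up for illustration.

FLEET = [{"name": f"core{n:03d}", "asn": 64512, "loopback": f"10.0.0.{n}"}
         for n in range(1, 4)]

TEMPLATE = (
    "hostname {name}\n"
    "router bgp {asn}\n"
    " bgp router-id {loopback}\n"
)

def render(device: dict) -> str:
    """The 'easy bit': turn one device's intent into config text."""
    return TEMPLATE.format(**device)

def plan(fleet: list) -> dict:
    """The hard bit in miniature: every device rendered from one source
    of truth, so a fix to the template lands on all of them at once."""
    return {d["name"]: render(d) for d in fleet}

configs = plan(FLEET)
print(configs["core001"])
```

Scale the fleet list from 3 to 100,000 entries and the rendering step barely changes - what changes is everything around it: inventory, staged rollout, rollback, and verification, which is where the orchestration effort actually goes.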
Disclaimer: I do not own shares in Network vendors, distributors, etc.
I'm not a tin shifter either, so a core router increasingly.. isn't. But then AT&T's paper isn't about just 'core' routers. It's about turning 100,000 routers into something cheaper, flexible and manageable. But assume a classical 'P' router in an MPLS world, that's mostly a transit router. I mean switch. Well, it could be a switch if certain vendors let you slap SP code onto their switches.. And switches can switch fast, and support things like SPB/PBB(TE), and so does the real core, ie the transmission layer. Internet, IPVPNs, MPLS VPNs, P2P or P2MP are just service instances on the transport infrastructure.
But having a simplified services architecture makes it easier to use the automation tools, and the orchestration. But I'd agree with the challenge of making network engineers think like network engineers, rather than an extended vendor salesforce. Real engineers should also understand what's happening in the packet/frame transport world and be vendor agnostic. Then you avoid things like selling EoMPLS delivered over a 'router' infrastructure that's connected via GFP-capable packet/frame switches.. So one element is largely redundant, and expensive.
(and of course being able to ditch 100,000 routers would save a bunch of space/heat/power as well as the OAM costs)
So now a core router is something that can connect to a bunch of 100Gbps [ethernet] interfaces and run a pile of routing instances. A smart CTO might be looking at the cost/features/performance of a 'cloud' VM platform and then looking at the cost of a core Cisco or Juniper, and wondering why one is so much more expensive than the other. Especially as the VM platform is likely to be more expandable...
Simple: Hardware offload. Sure, you could put a couple of 100Gb/s ethernet cards in a standard server chassis (Haven't seen any blurb on cloud VMs with 100Gb/s interfaces). But could you actually move traffic at 100Gb/s between them? There are various articles out there about how Linux is struggling to support a full 10Gb/s traffic flow, let alone a 100Gb/s flow.
Then add in another 50 or 100 network cards/ports, all at 100Gb/s, and route traffic between them at wire speed.
That's why you pay Cisco, Juniper, et al a lot of money: For the hardware to move lots of traffic very quickly. Have you seen the amount of silicon on router line cards? It's not there just for looks!
Yup. Some of the silicon's probably there for licence management.. :p
So ok, a core router in vendor J-world is the T4000. 3.8Tbps in a half-rack package. Supports 16x100Gbps line cards, which shows some things never change. Router vendors have always had a problem with the concept of full duplex. So if you're looking to sell 100Gbps across a (inter)national 'core', using an IP/MPLS router gets extremely expensive compared to say, dropping an ODU4. Boxen that can do that (Nokia, Infinera, Huawei) typically have a far higher port count and lower port cost.
That's kinda going back to what is a 'core' device in an increasingly Ethernet(ish) world, and the old adage to switch when you can, route if you have to. If you want to offer customers fluffy stuff, offload to an appropriate service 'white box'. I tend to steer clear of the really fluffy stuff, like specifying server hardware, but have designed 100Gbps networks for HPC users and large corporates. That's the kind of kit I'm assuming AT&T would like to turn into its 'white boxes'.. Or encourage vendors like Broadcom to support in their silicon.
But there'll probably always be a place for some big iron, ie peering routers. They've been fun ever since AS7007 days. Not sure any router wrangler would trust peering to any 'white box' just yet..
But this is the Unix philosophy.
1 unit doing 1 job very well.
This is not really about the little boxes sitting under home users' desks.
This is about the racks of hardware at the other end, and the much beefier cards sitting in the racks next to those, that handle the terabits of bandwidth needed for a tier 1 backbone supplier. Why would you p**s about configuring a router through a GUI when you want to configure 1000 of them (or patch them all when a vulnerability is found)?
Performance says this is a job for a monolithic kernel. But maybe the time has come for a cleaner, layered, message-passing approach (keep in mind Erlang is like this, but passes pointers rather than copying chunks of memory, for performance, and it was designed by Ericsson to program PBXs).
Interesting that they will only consider x86 and ARM architectures - a real recognition of who the main players in high-performance embedded computing are.
I see this as the old white-box switch argument, version 3. Version 1, as pioneered by the web guys, was to buy some no-name device and stick their own control sw on it - BGP or whatever. Version 2 was to make the software on the device a little dumber and have the smarts centralized somewhere, in other words SDN. Neither of those really break the dependence chain: you are tied to your hw platform (or at least, the HAL and drivers) in the first and in the second your controller needs to speak every variant of OS your devices provide.
So in that sense, ATT are right to want to drive towards interfaces: let the RIB expose an API that can be programmed, and let it in turn program the FIB, etc all the way down into the hardware. I can't see Cisco and Juniper being ecstatically pleased about that. But equally, ATT need to really look at themselves in the mirror and ask if they are truly ready to buy like this. Take BGP for an example. It's not your grandpa's BGP any more. It has a bazillion extensions and features. If ATT insists when buying that all of them be supported, then frankly they'll be buying from Cisco and Juniper again and the architecture is moot. If they are prepared to start with stripped down minimal services, and (big if) if they are prepared to trust the control plane to a bunch of brand new companies (some of whom will fail), then things get much more interesting. That's a hell of a bet.
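The "let the RIB expose an API that programs the FIB" idea can be sketched in miniature. This is an illustrative toy under assumed semantics (lowest administrative distance picks the best route; the FIB only does longest-prefix match), not Cisco's, Juniper's or dNOS's actual interface:

```python
# Minimal sketch of the RIB -> FIB layering described above: protocols
# (or a controller) feed routes into the RIB via an API; the RIB selects
# best paths and programs the FIB, which only does longest-prefix match.
# Illustrative only - not any vendor's real programming interface.

import ipaddress

class Fib:
    def __init__(self):
        self.table = {}                     # prefix -> next hop

    def install(self, prefix, next_hop):
        self.table[ipaddress.ip_network(prefix)] = next_hop

    def lookup(self, dst):
        """Longest-prefix match, as forwarding hardware would do."""
        addr = ipaddress.ip_address(dst)
        matches = [n for n in self.table if addr in n]
        if not matches:
            return None
        return self.table[max(matches, key=lambda n: n.prefixlen)]

class Rib:
    def __init__(self, fib):
        self.routes = {}                    # prefix -> [(distance, next hop)]
        self.fib = fib

    def add_route(self, prefix, next_hop, distance):
        """Northbound API a protocol or SDN controller would call."""
        self.routes.setdefault(prefix, []).append((distance, next_hop))
        best = min(self.routes[prefix])[1]  # lowest admin distance wins
        self.fib.install(prefix, best)      # program the forwarding layer

fib = Fib()
rib = Rib(fib)
rib.add_route("10.0.0.0/8", "ge-0/0/1", 200)   # e.g. a BGP route
rib.add_route("10.1.0.0/16", "ge-0/0/2", 10)   # e.g. a more specific IGP route
print(fib.lookup("10.1.2.3"))   # -> ge-0/0/2 (longest prefix wins)
```

The appeal of the disaggregated model is exactly this clean boundary: anything that can speak the `add_route`-style API can steer the hardware, without the forwarding layer caring whether it was BGP with a bazillion extensions or a stripped-down controller on the other end.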
Cisco, Juniper and other cultist operations offer the usual advantages of cults: by the use of rote learning and faithful repetition it's possible to make fully functioning acolytes (IT personnel) out of staff barely able to connect one synapse to another. That is, as long as the horizon doesn't extend past the doors of the temple (or brand of networking gear).
Does AT&T really want a lot of thinkers on its hands?
I left a Regional Bell (RBOC) before the great suck toward San Antonio (I don't know what replaced Bell Labs/Bellcore). However, this has to be a super architecture project, and an opportunity to remake the NOS architecture(s). As such, it will take years to define and implement, with lots of jobs of all kinds created.
Biting the hand that feeds IT © 1998–2018