Better late than never: Cisco's software-defined networking platform ACI finally lands on AWS

Networking overlord Cisco has punted its Application Centric Infrastructure (ACI) platform into AWS-hosted public cloud. Available on-premises since 2014, ACI is Switchzilla's home-brewed software-defined networking platform that automates management of connectivity and security policies, with a focus on keeping the …

  1. CheesyTheClown

    If you need ACI in AWS or Azure, you're just doing it wrong

    So... the year is 2019 and well... software defined is... um... in software.

    ACI has one of the most horrible management and configuration systems ever to be presented on earth. It started off as a solution to support an "intelligent" means of partitioning services within data centers running VMware. This is because VMware really, really needed it: VMware, even with NSX, is still networking like it's 1983. So companies invested heavily in ACI, which would allow them to define services based on port-groups, describe policies to connect the services together, and even support service insertion.

    Well, if you're in the 21st century and using Hyper-V, or far better yet OpenStack, or better still Docker/Kubernetes, all of these features are simply built in. In Docker Swarm mode it's even possible to do all of this with full end-to-end encryption between all services. And since you can free up about 98% of your bandwidth from storage in a VM environment, you have lots of extra bandwidth and also extra CPU... and I mean LOTS of extra CPU. A well-written FaaS function uses about 0.0001% of the resources that a similar routine on a VM would use... no exaggeration, that's the actual number: for FaaS we measure resource consumption in micro-CPUs (as in one millionth of a CPU) as opposed to vCPUs. For PaaS on Docker, we think in terms of milli-CPUs for similar functions.
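    To illustrate the Swarm point above: the encrypted overlay is a built-in option on the `docker network create` command. This is only a sketch; the network and service names are made up, and it assumes `docker swarm init` has already been run on a manager node.

    ```shell
    # Overlay network with IPsec encryption of data-plane traffic between nodes
    docker network create --driver overlay --opt encrypted app-net

    # Services attached to app-net then talk to each other over the encrypted overlay
    docker service create --name web --network app-net nginx:alpine
    ```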

    So, we use all that idle CPU power for networking functions. And since we can truly micro-segment (not VMware NSX crap segmentation or ACI brainless segmentation), we can have lots of load balancers and encryption engines and firewalls, etc... and still not use a hundredth of what ACI would waste in resources, or a millionth of what it would waste in money.

    The best thing a company can do in the 21st century is start moving its systems more and more to proper modern networking and virtualization, rather than wasting all that money trying to come up with ways of scaling even further up using solutions like ACI.

    What's worse is that if you're considering using ACI in the cloud, what you're really saying is that you think none of the pretty damn awesome SDN solutions that are integral parts of the cloud provider's platform work. And instead you're willing to spend A LOT more money to add networking that doesn't do anything their offerings don't, but at least creates a bunch of new jobs for engineers who don't really understand how it works to begin with.

    Having reviewed ACI in the cloud in extreme detail... the only thing I could come up with is "Why the hell would anyone want that?". I was just at a job interview with a major multinational financial clearing house where they wanted to hire me as an architect to recover from their failed attempt at ACI. I explained that the first thing I'd do is delete ACI from the Nexus 9000 switches, upgrade to NX-OS (the legacy networking platform), set up layer-3 connectivity between nodes, and use their OpenShift environment to manage the networking and handle all the software-defined networking, as it's far better suited for it. They loved the idea... we could easily reduce the complexity of the networking infrastructure by a substantial amount. In fact, by using a simple layer-3 topology (all that's needed for real SDN, which operates entirely on tunnels over layer-3), we could cut costs on people and equipment by millions per year.

    Cisco has spent the last 10 years trying to make new technologies which don't actually solve problems but add complexity, and therefore errors and management headaches, at up to 100 times the cost of its other solutions which are actually more suitable. And I really only wish I was exaggerating those numbers. ACI actually increases costs DRASTICALLY with absolutely no chance of a return on investment.

    On the other hand, if your company has a VMware data center and A LOT of VMs which will take years (if ever) to replace with intelligent solutions, I would recommend buying two small HyperFlex stacks (retail cost with VMware licenses and ACI: about $1.6 million for a minimum configuration), which should let you cut the operations overhead substantially... possibly down to 3-5 people... until you can move more and more systems off the legacy platform.

    1. Anonymous Coward
      Anonymous Coward

      Re: If you need ACI in AWS or Azure, you're just doing it wrong

      Agree with the above, totally.

      We've gone the ACI route, and I was AMAZED to see the amount of hardware we had to install! I call it a hardware-defined network! Insane, really, vs. the classical solution. Those dudes at Cisco surely know how to shift boxes!

      And yes, it is a lot more complex to manage.

      1. Anonymous Coward
        Anonymous Coward

        Re: If you need ACI in AWS or Azure, you're just doing it wrong

        You are an idiot. How are three 1RU servers amazing?

        This place is going to shit; you can't have a discussion without a bunch of losers shitting on the competition.

        1. CheesyTheClown

          Re: If you need ACI in AWS or Azure, you're just doing it wrong

          Shitting on the competition?

          What competition? NXOS vs ACI?

          ACI does try to solve software problems using hardware solutions. This can’t be argued. In fact, it could be its greatest feature. In a world like VMware where adding networking through VIBs can be a disaster (even NSX blows up sometimes with VUM... which no one sets up properly anyway), moving as much networking as possible out of the software is probably a good thing.

          Using a proper software-defined solution such as Docker/K8S, OpenFlow, the Hyper-V extensible switch, or even NSX (if you just can't escape VMware), with a solid layer-3 underlay like NXOS... or any other BGP-capable layer-3 switch... is generally a much better design than using a solution like ACI, which separates networking from the software.

          It's 2019; we don't deploy VMs using OVFs and next-next-next-finish wizards anymore. We create description files in YAML or AWS/Azure-specific formats, automate the deployment, and define the network communication of the system as part of a single description.
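          The "single description" idea above might look something like this hypothetical Kubernetes manifest, where the workload and the traffic it is allowed to receive are declared together in one file. All names, labels, and the image are made up for illustration.

          ```yaml
          # Applied with: kubectl apply -f billing-api.yaml
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: billing-api
          spec:
            replicas: 2
            selector:
              matchLabels: {app: billing-api}
            template:
              metadata:
                labels: {app: billing-api}
              spec:
                containers:
                  - name: api
                    image: example/billing-api:1.0
                    ports:
                      - containerPort: 8080
          ---
          # Only the frontend tier may reach billing-api on 8080; other traffic is dropped.
          apiVersion: networking.k8s.io/v1
          kind: NetworkPolicy
          metadata:
            name: billing-api-ingress
          spec:
            podSelector:
              matchLabels: {app: billing-api}
            ingress:
              - from:
                  - podSelector:
                      matchLabels: {tier: frontend}
                ports:
                  - port: 8080
          ```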

          ACI didn't work for this. So Cisco made Contiv, and by the time the market started looking at ACI+Contiv as a solution, Cisco had basically abandoned the project... which left us all with Calico or OpenFlow, for example... which are not ACI-friendly.

          Of course, NSX doesn’t control ACI since they are different paradigms.

          The Hyper-V extensible switch doesn't do ACI, so Cisco released an ACI integration, showed it off at Live! a few years back, and then promptly abandoned it.

          NXOS works well with all these systems, and most of them document clearly how they recommend it be configured. Microsoft even publishes Cisco switch configurations as part of its SDN Express git repo.

          So... which competition are you referring to?

    2. baspax

      Re: If you need ACI in AWS or Azure, you're just doing it wrong

      Two "small" HX systems are not 1.6m, you dolt

      1. CheesyTheClown

        Re: If you need ACI in AWS or Azure, you're just doing it wrong

        Servers + fabric + VMware licenses + HyperFlex storage licenses + Windows Server Enterprise licenses + backup licenses (Veeam?) + firewall + load balancer + server engineering hours + network engineering hours + backup engineering hours + Microsoft hours...

        You need two stacks of (three servers + two leaf and two spine switches + 2 ASR1000s or 2 border leafs + 2 firewall nodes + 2 load balancers) and whatever else I'm forgetting.

        If you can get a reliable HyperFlex environment up with VMware and Microsoft licenses and all the hours involved for less than $1.6 million, you probably have no clue what you're doing... and I specifically said retail. Architecting, procuring, implementing, testing, etc.... a redundant HyperFlex environment requires several hundred hours of what I hope are skilled engineers.

        I've done the cost analysis on this multiple times. We came in under $1.2 million a few times, but that was by leaving out things like connecting the servers to the UPS management system, and by cutting corners with hacked fabric solutions: skipping the border leafs, or doing something stupid like trading in core switches and trying to make the ACI fabric double as a core-switch replacement. Or leaving out location independence, etc...

    3. mikus

      Re: If you need ACI in AWS or Azure, you're just doing it wrong

      Agreed. I've been working with or around ACI since its launch, and it's been a perpetual disaster in almost every case. Their micro-segmentation strategy fell apart quickly when adding almost any sort of filtering between segments exhausted the TCAM on their switches, and it blew up at least one large biotech company I had to clean up after. I recommended the same: "upgrade" to normal NX-OS and use the switches that way, as it was mostly a giant L2 network anyway. They ended up turning ACI off and putting it in a corner instead, simply leaving their cat6ks and old nx5ks to bleed for a few more years until maybe something better comes along.

      More recently I've been pinged about helping one of the big three credit card companies with an ACI-to-Arista migration that was becoming painful, with outages even while trying to migrate away from it. Same thing: no one wanted to deal with the complexity once it was in, and it quickly lost any value.

      In every case I've seen it put in, network engineers retch at the fact that they have to click through 90 places to try and set up a basic VLAN and layer-2 connectivity. Why not do it programmatically? Because old network engineers don't program, and never will. They're just hoping to retire before someone makes them learn.

      Good news is with disasters like ACI, there will always be a need for traditional network engineers.

      1. jeffty

        Re: If you need ACI in AWS or Azure, you're just doing it wrong

        Network engineers retch at ACI because, instead of 90 click-throughs for a basic VLAN and L2 connectivity, it takes a handful of lines of config on any other switch, whether it's running NX-OS, IOS, Comware, or whatever your poison.

        Usually a couple of lines for the VLAN (name and number), a line on every port you want the VLAN assigned or trunked to (which can usually be applied with a range command), plus an additional couple if you want to configure an SVI/gateway interface for routing.
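        Concretely, that handful of lines might look something like this on an NX-OS box (the VLAN number, name, port range, and address here are purely illustrative):

        ```
        vlan 210
          name app-tier

        interface ethernet1/1-12
          switchport mode trunk
          switchport trunk allowed vlan add 210

        interface vlan210
          ip address 10.1.210.1/24
          no shutdown
        ```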

        Most network guys I've worked with can script, and can easily and quickly stand up stuff like this in a matter of minutes. ACI has been a massive step backwards in terms of speed and user-friendliness.
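        The kind of script meant here can be tiny: a sketch in Python that renders the config described above for a range of ports. The function name, VLAN details, and interface naming are illustrative, not any particular vendor tool.

        ```python
        def vlan_config(vlan_id, name, ports, svi_ip=""):
            """Render switch config for a VLAN trunked to a range of ports.

            ports is a Python range; svi_ip, if given, adds an SVI/gateway interface.
            """
            lines = [
                f"vlan {vlan_id}",
                f"  name {name}",
                f"interface ethernet1/{ports.start}-{ports.stop - 1}",
                "  switchport mode trunk",
                f"  switchport trunk allowed vlan add {vlan_id}",
            ]
            if svi_ip:  # optional SVI/gateway interface for routing
                lines += [
                    f"interface vlan{vlan_id}",
                    f"  ip address {svi_ip}",
                    "  no shutdown",
                ]
            return "\n".join(lines)

        print(vlan_config(210, "app-tier", range(1, 13), "10.1.210.1/24"))
        ```

        Paste the output into a terminal session, or feed it to whatever config-push tool you already use.
        
        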

  2. Tom -1

    Cisco is evidently still its usual self

    I totally agree with the first two comments above. The last time I had to deal with Cisco (about 15 years ago) I had enough clout in the company I worked for that I could ban all further acquisition of Cisco gear for internal use and, most importantly, also ban including any Cisco stuff in what we delivered to our customers. I've been retired for nearly 10 years now and haven't kept in touch with what's going on in the comms and networking world, but from the first comment above I deduce that Cisco is still Cisco and hasn't changed a bit.


Biting the hand that feeds IT © 1998–2019