No more tiers for flatter networks

There is a disconnect between data centre networks and modern distributed applications, and it is not a broken wire. It is a broken networking model. The traditional three-tier, hierarchical data centre networks as defined and championed by Cisco Systems since the commercialisation of the internet protocol inside the glass …

COMMENTS

This topic is closed for new posts.
  1. NoneSuch Silver badge
    Boffin

    No single company...

    ...should ever dictate a standard for the entire community, for just this reason. The RFC process is well established, community-supported, and it works.

  2. Matt Banks

    10GigE

    We went with 10GigE (Cisco Nexus) quite early on in our "cloud" environment.

    We'll complete that transition this year when we replace our 4507s with 5596s (with FEXes and two ASRs already in place).

    GigE just isn't fast enough for a datacenter any more - especially if you're using NFS like you probably should be...

  3. Anonymous Coward
    Anonymous Coward

    The Cisco model is flexible

    You can collapse the core and distribution layers together, and a lot of large companies I have worked for/with (BT/HP/EDS) often do, and you can decide where your layer 3 boundary is. A few years ago the recommendation was to put the layer 3 boundary as close to the access layer as possible (normally at the distribution layer), due to the advent of high-speed layer 3 switching, but this is changing with the popularisation of "non-blocking fabric" over the more traditional spanning tree.

    Also, if you have one DC where you need to migrate a guest VM through the core, that's just bad design; the core should only be involved in pure layer 3 routing (i.e. access external to the DC/campus/whatever). You would normally have the VM migrate within a single layer 2 domain, otherwise you'd have to change IP addresses, DNS records, etc. Between DCs this is normally achieved through a layer 2 link (VPLS/DWDM etc) and hence still would not normally involve the core.
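
    A quick back-of-the-envelope sketch (addresses made up, nothing from the article) of why the guest needs re-addressing the moment it leaves its layer 2 domain:

        # Toy illustration: a guest keeps its IP only while it stays inside its
        # own layer 2 domain/subnet; cross a layer 3 boundary and the address
        # (and DNS) has to change. Addresses below are invented.
        from ipaddress import ip_address, ip_network

        guest_ip = ip_address("10.1.20.57")
        same_l2_domain = ip_network("10.1.20.0/24")   # stretched VLAN, e.g. over VPLS/DWDM
        other_dc_subnet = ip_network("10.9.40.0/24")  # behind a layer 3 boundary

        for net in (same_l2_domain, other_dc_subnet):
            if guest_ip in net:
                print(f"{guest_ip} fits in {net}: migrate without touching IP/DNS")
            else:
                print(f"{guest_ip} not in {net}: would need re-addressing and DNS changes")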

    It is true, though, that the new standards for this "non-blocking fabric" will improve the use of available bandwidth by using all links, rather than just the links on the best path to the root switch, but this is being embraced by Cisco and all the other major network kit manufacturers; if they didn't embrace it, they would probably lose a huge amount of business.
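
    To put rough, made-up numbers on the "all links" point (purely illustrative, not anyone's actual design):

        # Back-of-the-envelope: usable uplink bandwidth from one access switch
        # under classic spanning tree versus a fabric/ECMP approach that
        # forwards over every link. Numbers are invented.
        uplinks = 4        # uplinks from the access switch
        link_gbps = 10     # 10GigE each
        installed = uplinks * link_gbps

        # Spanning tree blocks every uplink except the one on the best path to
        # the root switch, so only a single link forwards traffic.
        stp_usable = 1 * link_gbps

        # A non-blocking fabric (TRILL/SPB/ECMP style) hashes flows across all
        # equal-cost uplinks, so every installed link carries traffic.
        fabric_usable = uplinks * link_gbps

        print(f"spanning tree: {stp_usable} of {installed} Gbps usable")
        print(f"fabric/ECMP  : {fabric_usable} of {installed} Gbps usable")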

    It seems, though, that some people are fixated on the Cisco three-tier design and think that means Cisco isn't still at the forefront of this; if you actually read up-to-date design documentation and make the effort to understand it, you'll see this is not true.

  4. RudeBuoy

    Makes No Sense

    You level your criticism at the Hierarchical Model, which Cisco stopped promoting almost six years ago, and then outline solutions which are only relevant to the core, not the entire model from end nodes (access layer) to core.

    I am no longer active in network design, but I remember the three-level Hierarchical Model from my CCNA about 14 years ago. The last time I looked at the Cisco literature, about five years ago, Cisco had moved to what they call the Enterprise Composite Network Model, which was a bit more functional and, I think, a bit similar to the flat model you defined.

    Your top-of-rack switch scenario sounds like it would fit into what Cisco calls the Server Farm Block in its current enterprise model.

    I am sure the Hierarchical Model was never meant to be the solution for every circumstance. I am also sure that if it was relevant to network design in most enterprises over the past few decades, it is still as relevant in all but a few today. Also, being just a conceptual framework, I am sure it can be used in the scenario you described, since all you describe is what should happen at the core. In a sense you are saying that at the core we need to implement a specific design for today's unpredictable and data-hungry virtualization and other applications.

    All from memory, because I have not lifted a finger to do any networking for almost five years, but answer these questions, because they were not answered by your model: where in your model will you aggregate access devices? Where will you implement ACLs, QoS and security? Where will you converge VLANs and broadcast domains?

    Cisco moved to a more functional enterprise model a while back. Your article would be more useful if it truly critiqued the Hierarchical Model in some coherent way rather than just focusing on networking within a server rack or data center.

    Since the critique was leveled at Cisco generally, it would have been more useful if it had been leveled at the Enterprise Composite Network Model, which Cisco currently promotes (or was promoting FIVE YEARS AGO), rather than the older Three-Layer Model.

    Cisco has seen better times, but I am sure their misfortune of late has more to do with the fact that we got tired of buying their overpriced kit that offers worse performance than their competitors' than with network design models.

  5. Anonymous Coward
    Anonymous Coward

    Not a Standard

    The Hierarchical Three-Layer Model is just a conceptual framework; it is not a standard. Furthermore, the article describes the type of networking required within a data center, not within the enterprise end to end, as the Hierarchical Model does.

  6. E 2
    Happy

    I foresee

    billions times billions times billions of tiny pigeons, an adaptation of RFC 1149.

  7. Anonymous Coward
    Anonymous Coward

    Snake Oil!

    There isn't much difference in the physical topology here; both the Cisco model and the "New Super DataCenter Model" are fat-tree topologies. Fat-tree topologies are basically multistage (typically three-stage) folded Clos networks. Both are hierarchical, although the Cisco model tends to provide for link diversity at each stage; e.g. two second-stage switches are connected together, likely to share trunking information for VLANs across the distribution or core.

    The real differences between the two approaches are the hardware used (smaller, more modular chassis) and routing and forwarding logic that is multipath, with an emphasis on a non-blocking end-to-end topology without the constraints of spanning tree. However, this has little to do with flatness. It has been possible to run a non-blocking datacenter fabric on layer 3 protocols without using spanning tree. TRILL et al. allow you to do similar things using what is basically a spanning tree replacement, but even these approaches might be leapfrogged by things like OpenFlow-enabled networks.
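
    As a rough sketch of the point (switch counts and speeds invented), here is a toy three-stage folded Clos / leaf-spine model; the "new" designs differ mainly in having a control plane that can actually use all of those equal-cost paths:

        # Toy model of a folded Clos (leaf/spine) fabric, to make the "both are
        # fat trees" point concrete. Switch counts and link speeds are made up.
        import itertools

        spines = 4        # second-stage switches
        leaves = 8        # first-stage / top-of-rack switches
        link_gbps = 10    # one uplink from every leaf to every spine

        # Full mesh between stages: every leaf has a link to every spine.
        links = list(itertools.product(range(leaves), range(spines)))

        # Any two leaves are joined by one two-hop path through each spine, so
        # a multipath control plane (TRILL, ECMP, OpenFlow-programmed, ...) has
        # 'spines' equal-cost paths where spanning tree would leave just one.
        paths_per_leaf_pair = spines
        uplink_capacity_per_leaf = spines * link_gbps

        print(f"fabric links                  : {len(links)}")
        print(f"equal-cost paths per leaf pair: {paths_per_leaf_pair}")
        print(f"uplink capacity per leaf      : {uplink_capacity_per_leaf} Gbps")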

  8. Mat Mathews

    Are “Flatter” Networks Simply Rearranging the Deck Chairs?

    Timothy, great article. You really home in on the fundamental problem in data center networking today. The entire model is broken and it’s all too often glossed over because of how much the large incumbents have riding on it. The problem has been masked with solutions that continue making the infrastructure more and more complex and less responsive to current application needs. More protocols and abstractions don’t fix the problem; they just put off the real pain for another day. Until we start with a clean slate and understand that we can't build sensible networks in an OSI stack vacuum, we're just rearranging the proverbial deck chairs. Here’s my take: http://www.plexxi.com/index.php?option=com_content&view=article&id=42:flatter-networks-&catid=14:blog&Itemid=27
