It's all in the fabric for the data centre network

IT infrastructure is worth exactly nothing if the network doesn't work. The network designs we have grown so comfortable with over the past 15 years or so are wholly inadequate if you are building a cloud or merely undergoing a refresh that will see more virtual machines packed into newer, more capable servers. The traditional …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward

    Virtualization? No, PCI.

    I think, perhaps, that PCI-DSS has had more to do with the trend of 'one process per machine' than the widespread adoption of virtualization has. Virtualization means that instead of having to lease 6 racks to run 150 machines, I can do it in 2, with space left over.

  2. theblackhand

    SDN

    I can see the benefit of SDN for large data centres, although I believe the value is in pulling the control-plane software off the switch, allowing switches to become white boxes rather than expensive single-vendor solutions. Look at the standardisation in network hardware among 10GbE vendors already - under the covers they are pretty much all Broadcom Trident switching hardware tied to an Intel processor managing the control plane.

    What I don't see is the value for smaller environments where the cost of the SDN management platform or expertise to run it is likely to exceed any saving in the network hardware.

    If you have fewer than 10 racks (that's enough to be supported by a "core" 40+ port 10GbE switch stack with aggregated links to each 10GbE top-of-rack switch) connecting to your VM farm, over-subscription is unlikely to be an issue. In such an environment there are likely to be a handful of server VLANs that rarely change, so once a VM server is connected to the network, ongoing server deployment will be automated via your VM management tools.
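    For a sense of the scale involved, a back-of-the-envelope sizing sketch in Python; the servers-per-rack and uplink counts below are illustrative assumptions, not figures from any real deployment:

# Rough sizing for the "fewer than 10 racks" scenario described above.
# servers_per_rack and uplinks_per_rack are illustrative assumptions.
racks = 10
uplinks_per_rack = 4           # aggregated 10GbE links from each top-of-rack switch
servers_per_rack = 16          # assumed: one 10GbE NIC per server
link_gbps = 10

core_ports_needed = racks * uplinks_per_rack
oversubscription = (servers_per_rack * link_gbps) / (uplinks_per_rack * link_gbps)

print(f"Core 10GbE ports needed: {core_ports_needed}")            # 40 -> a "40+ port" core stack
print(f"Top-of-rack oversubscription: {oversubscription:.0f}:1")  # 4:1, rarely a problem at this scale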

    I know there will be complex smaller environments that will benefit from SDN, but I don't see it becoming mainstream.

    Unless I have missed some feature.....

    1. theblackhand

      Re: SDN

      To clarify - what would SDN provide for a small-scale network that implemented TRILL in switch hardware over a "dumb network", with a management application managing the device control planes? I.e. your "core switch" manages a flat network, ensuring the maximum bandwidth possible because all ports are used, rather than having spanning-tree block redundant links.
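      A minimal sketch of the "all ports used" point, with made-up link counts:

# Spanning-tree vs TRILL/ECMP across a set of parallel inter-switch links.
# Link count and speed are made-up illustrative figures.
parallel_links = 4
link_gbps = 10

stp_usable = 1 * link_gbps                 # spanning-tree blocks all but one link to avoid loops
trill_usable = parallel_links * link_gbps  # TRILL load-balances across every link

print(f"Spanning-tree usable: {stp_usable} Gb/s")    # 10 Gb/s
print(f"TRILL usable:         {trill_usable} Gb/s")  # 40 Gb/s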

      In large scale environments I can see the benefits of dumb hardware where the cost of the network equipment is significant.

      In small scale environments a pretty good solution that is affordable and plug and pray is likely to be preferable to the ideal solution that is harder or more expensive to manage.

      1. Trevor_Pott Gold badge

        Re: SDN

        As a general rule SDN is easier to manage than traditional networking. SDN interfaces are often modern, up-to-date GUI affairs that can be addressed via the command line or scripts, but also take into account the rest of the human race who are visual learners and/or only modify the network a few times a year.

        That doesn't cover all implementations, naturally, but in general SDN has been used as a chance to break free from the IOS tyranny and open switching administration to those who excel at things other than rote memorization.
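        As a sketch of the "address it via scripts" point, pushing a change to a controller's northbound REST API looks roughly like this; the controller URL, endpoint and payload shape are hypothetical placeholders, not any particular vendor's API:

# Hypothetical example: push a network segment change to an SDN controller's
# northbound REST API. Host, endpoint and payload shape are placeholders.
import json
import urllib.request

CONTROLLER = "https://sdn-controller.example.local:8443"    # hypothetical host

change = {
    "name": "vm-farm-segment",
    "vlan_id": 120,
    "ports": ["tor01:eth1/10", "tor01:eth1/11"],             # made-up port names
}

req = urllib.request.Request(
    f"{CONTROLLER}/api/v1/segments",                         # hypothetical endpoint
    data=json.dumps(change).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())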

        1. Roland6 Silver badge

          Re: SDN

          >As a general rule SDN is easier to manage than traditional networking. SDN interfaces are often modern, up-to-date GUI affairs

          Whilst I agree SDN does permit the introduction of management consoles that break free from IOS, are SDNs actually easier to manage, or are the tools (and underlying management applications) now available just so much better?

  3. Random Q Hacker

    SDN

    The entire Internet runs on routing, but god forbid enterprises and their shitty software cope with anything but layer 2 bridged everywhere. Software's failings have led to all of these infrastructure "innovations".

    1. Trevor_Pott Gold badge

      Re: SDN

      "The Internet" is low bandwidth, high latency. Datacenters are high bandwidth, low latency.

      A fabric allows me to talk horizontally across a datacenter without bandwidth contention. Explain to me exactly how I accomplish that in a routed scenario without bottlenecking on the router. Or, for that matter, how your very Cisco view of networking is going to be cheaper than a mesh fabric with dynamic layer-2 packet routing?
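      To put rough numbers on the contention point, an illustrative comparison; the leaf/spine counts and the routed design's capacity are assumptions, not figures from this thread:

# East-west capacity: traffic hairpinning through a router pair vs a
# leaf-spine fabric with ECMP. All figures are illustrative assumptions.
leaves = 8                    # top-of-rack (leaf) switches
spines = 4                    # spine switches
uplink_gbps = 40              # each leaf has one 40GbE link to every spine

routed_ceiling_gbps = 2 * 40  # assumed: two 40GbE router ports carrying all inter-rack traffic

total_leaf_spine_gbps = leaves * spines * uplink_gbps
fabric_bisection_gbps = total_leaf_spine_gbps // 2    # rough bisection estimate

print(f"Routed east-west ceiling:   {routed_ceiling_gbps} Gb/s")    # 80 Gb/s
print(f"Fabric bisection bandwidth: {fabric_bisection_gbps} Gb/s")  # 640 Gb/s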

  4. Jason Ozolins
    Meh

    Infiniband, anyone?

    Funny how Infiniband has been offering a switched fabric network with separated control and data planes for about a decade, at a price per port that for a long time was way lower than comparable Ethernet (once the Ethernet specs were even drawn up for the link speeds that IB was supporting). Plenty of supercomputers have had single IB connections from compute nodes to a converged data/storage fabric.

    Not sure how much 40Gb Ethernet switches are going for, but considering that a basic unmanaged 8 port QDR (== 40Gb/sec signalling, 32Gb/sec data) switch can be had in the USA for less than $250/port, and a 36 port top-of-rack QDR switch with redundant power for about $140/port, I'd be surprised if there were such low entry points for Ethernet switches with comparable bandwidths and software defined networking capability. Even that tiny 8-port QDR switch can be connected into a mesh fabric, and toroidal IB networks with peer-to-peer links to adjacent and nearby racks can allow some degree of horizontal per-rack scaling for deployments growing from small beginnings that cannot justify more expensive core switching.
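    Working through the per-port figures just quoted (the usable 32Gb/sec data rate reflects QDR's 8b/10b encoding):

# Total cost and cost per usable Gb/s for the two QDR switches quoted above.
# Prices are the US street prices given in the comment.
switches = {
    "8-port unmanaged QDR":    {"ports": 8,  "usd_per_port": 250},   # "less than $250/port"
    "36-port top-of-rack QDR": {"ports": 36, "usd_per_port": 140},   # "about $140/port"
}

QDR_USABLE_GBPS = 32   # 40Gb/s signalling minus 8b/10b encoding overhead

for name, sw in switches.items():
    total = sw["ports"] * sw["usd_per_port"]
    per_gbps = sw["usd_per_port"] / QDR_USABLE_GBPS
    print(f"{name}: ~${total} total, ~${per_gbps:.2f} per usable Gb/s per port")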

    Granted, this last point is making a virtue of necessity, in that you pretty much *need* to run an Infiniband subnet manager on an external host once you get to a decent size network. The subnet manager that was supplied on an embedded management host with our modular Voltaire DDR IB switch was not much use, as it tended to lock up... it's easier to restart the subnet manager, or switch to a failover backup, if it's running on hosts you fully control. =:^/

  5. Roland6 Silver badge

    Are we approaching the problem from the wrong angle?

    Back in the late '70s and early '80s I worked on microcomputer systems that made use of Multibus and (Blue Book) Ethernet to link dozens of single-board computers together. What struck me then was that the LAN cable was effectively just a system bus extender. Looking at the modern grid data centre, it would seem that it is just a very large computer, hence we should be trying to link it together with enhanced bus technology, not network technology.

    1. Trevor_Pott Gold badge

      Re: Are we approaching the problem from the wrong angle?

      Whose extended bus technology? QPI? Hypertransport? Infiniband? Something even more proprietary? Who owns the patents? Who makes the money? Who makes the kit? What are the standards?

      Ethernet is never the best option for anything. It is, however, something that everyone can play with. This is IT; the best technologies wither due to greed while mediocre technologies that were opened up for the entire world to innovate upon flourish.

      If you don't believe me, do some research on USB...

      1. Roland6 Silver badge

        Re: Are we approaching the problem from the wrong angle?

        I was referring to the general approach, rather than a specific solution or product. I came to this article after reading a related article on re-architecting the datacenter (http://www.theregister.co.uk/2013/11/06/put_racks_on_chips_or_the_data_centre_will_die_say_boffins/).

        Yes 'Ethernet' has become the de facto LAN standard, but as you intimate we are coming to a turning point with datacentre area networks, where neither traditional 'LAN' technology nor 'telco'-style switch solutions really fit the bill. Which would seem to indicate that the needs of datacentre area networks have evolved beyond these approaches.

        You raise an important set of issues around standards etc.; perhaps what is needed is a joint IETF/IEEE working group dedicated to the DCAN, to try to avoid potential battles like the one brewing between TRILL and SPB supporters.

        The catch, as you allude to with Ethernet, is that from a marketing viewpoint any DCAN solution probably still needs to be able to carry the 'Ethernet' label...
