Bechtolsheim races Arista to zero latency

Arista Networks, Andy Bechtolsheim's creation, is in a race to zero network latency. That's the situation he presented at IP Expo in a keynote pitch today. Andy Bechtolsheim is a heavyweight IT industry maverick who has consistently punched far above his weight. He was a co-founder of Sun, has been credited with inventing the …

COMMENTS

This topic is closed for new posts.
  1. Steven Jones

    Zero Latency?

    This guy must know something about the speed of light that Einstein didn't. Unless he's managed to construct space-time wormholes in a data centre, he's going to have to put up with the same 300 metres per microsecond propagation speed as the rest of creation. And that's only if he's found a way to route photons through a vacuum. Signals over copper or fibre don't travel faster than about 200 metres per microsecond, which means a 50 metre link is never going to have a round-trip time of less than 0.5 microseconds, whatever you do with switches, adapters and the like.

    There are very low latency interconnects, of course, but they all have very short reach for this reason. Indeed, propagation delays inside computers are a fundamental limiting factor, let alone on external comms links.
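
    To put numbers on that, here's a rough back-of-the-envelope sketch in Python. The 200 metres per microsecond figure and the 50 metre link are the ones above; the fixed_us parameter is just a placeholder for whatever switch/adapter delay you want to assume, not a vendor figure.

        # Rough lower bound on round-trip time for a short copper/fibre link.
        # Propagation is roughly 200 metres per microsecond (~5 ns per metre).

        def min_round_trip_us(link_metres, prop_m_per_us=200.0, fixed_us=0.0):
            """Lower bound on round-trip time, in microseconds.

            fixed_us is any assumed per-hop delay (switches, adapters);
            it is an illustrative placeholder, not a measured figure.
            """
            return 2.0 * link_metres / prop_m_per_us + fixed_us

        print(min_round_trip_us(50))                 # 0.5 us: the cables alone
        print(min_round_trip_us(50, fixed_us=1.0))   # 1.5 us with an assumed 1 us of kit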

    1. Nightkiller
      WTF?

      Where do these guys come from?

      Here's a guy who runs a company that supplies product that is six times faster than the competition, and you are whining about what? That it's too good? Latency is cumulative, and Mr. Bechtolsheim is lending his expertise towards a solution to part of the problem. If other manufacturers had the vision to strive for the best like Andy, we'd be a lot better off. Globally.

      Spare us your pedantic ministrations.

      1. Anonymous Coward
        WTF?

        "other manufacturers had the vision"

        In the mil/aero market our switches have 2 µs cut-through latency.

        Now who's the daddy?!

        Obviously, the laws of nature do get in the way a bit when running 10G over 100m! (In ships, that is - and before you say it, the ISS, tanks and planes are not that long!)

    2. Adam Nealis

      It was "the race to zero latency"

      "This guy most know something about the speed of light that Einstein didn't. Unless he's managed to construct space/time worm holes in a data centre, "

      Like trying to get to absolute zero.

  2. David Halko
    Thumb Up

    10GigE on the motherboard

    Andy Bechtolsheim said, "Almost no server has 10 gig on the motherboard."

    Sun included 10GbitE on the UltraSPARC T2 processor.

    Oracle included 10GbitE on the SPARC T3 processor.

    Network latency has a huge impact on overall system performance; some companies understand that.

    1. Anonymous Coward
      Anonymous Coward

      Exactly what I was thinking.

      But then, sunacle doesn't really want to sell servers. And perhaps we're too mired and entrenched in x86, this guy included, to realise other systems might be available. I would expect the low-latency junkies at the robotraders to have insisted on stacks and stacks of the latest UltraSPARC gear.

      But maybe they haven't. For all those trades rely heavily on middleware and might even be done in java or ghod forbid "dotnet". "Whaddayamean low latency? We like losing all our hardware gains in sky high software stacks!" So very enterprise. Oh well.

      On another note, I recall Hirschmann doing its level best to achieve low latency and sub-microsecond failover for its switches. The UI (especially the CLI) was somewhat horrible, but the hardware was quite reasonable. Don't know if they've managed 10GbE switches yet; I haven't really kept up. Probably not, because they're more in the industrial sector.

  3. Anonymous Coward
    Anonymous Coward

    Zero Latency?

    If you are directly connected, maybe. Tending toward zero. Directly-connected to the trading system, with a cable and not a VLAN, that is.

    Anything else is just PIE IN THE SKY. And the delay is not the limit - it's the shit that is on the end of the cable. The operating system. The WAN link, perhaps, which most traders will have, and which is routed over the internet. Unless you are a bank and have some fibre into the trading system. From afar.

    Arista are quite big in that trading space, as the article declares, but also, I think, at Google (don't quote me - he may have been an original investor in Google and they returned the favour by buying his gear for their DCs; recall Google were talking about building their own router some time ago...).

    Anyway. His gear is quite specialised and competitively priced. Plus he knows all of the pitfalls of Cisco OS design because he worked with them. But with Cisco, people know it and don't have to wait for a reply from San Francisco for a new update to support standard stuff that Cisco has been doing for years.

    In the end, you put this in your datacenterz and then you lose all that latency gain when you have to upgrade every five minutes to support new (barely tested) L2/3 security features coming out of the door.

    FWIW - Just chose CISCO Nexus over ASTARO for a DC and those are the reasons why.

    1. Anonymous Coward
      FAIL

      Whoa, dude: Prop 19 hasn't passed yet!

      Easy, there, cowboy!

      Prop 19 (http://en.wikipedia.org/wiki/California_Proposition_19_(2010)) hasn't passed yet!

  4. Anonymous Coward
    Unhappy

    Latency is now the decider

    I work for a mega telco/ISP (not BT or Virgin - I'm talking a BIG telco!) and on almost every circuit we install now, financial services or not, the customer is more sophisticated, often carrying out detailed latency checks and, if it isn't good enough, getting us to investigate alternatives.

    Latency is so key that the customer who once would not have noticed an outage, or even any impact on a cross-continent service, from a traditional fibre break on an SDH ring now reports that an increase of 10-15 milliseconds, because their traffic is going the "long" way round the SDH network, makes the service unusable.

    The circuit is working without any errors but that (to us at any rate) small increase in latency because of a ring switch has now rendered the service unusable.

    We know there is never going to be a zero latency network, even if the kit is directly connected in the data centre (barring some new physics discovery) but customers are making more demands on (comparatively) old fibre networks and not taking into account the physical distances involved!
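
    For a sense of scale, a quick sketch in Python, assuming roughly 5 microseconds per kilometre of one-way propagation in fibre and treating the reported increase as one-way delay (regeneration and switching delays ignored):

        # How much extra path does a 10-15 ms latency increase imply?
        # Light in fibre covers roughly 200 km per millisecond (~5 us per km).

        FIBRE_US_PER_KM = 5.0  # approximate one-way propagation delay in fibre

        def extra_km(extra_latency_ms):
            """Extra path length implied by an added one-way delay."""
            return extra_latency_ms * 1000.0 / FIBRE_US_PER_KM

        for ms in (10, 15):
            print(f"{ms} ms extra one-way delay ~ {extra_km(ms):,.0f} km of extra fibre")
        # 10 ms ~ 2,000 km, 15 ms ~ 3,000 km - the "long" way round really is long.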

  5. AceRimmer1980
    Boffin

    Coming soon, zero latency networks

    *All CAT5 must be supercooled.

    **Price does not include liquid nitrogen.

  6. Steven Jones

    No excuse for claiming the impossible

    OK - so you clearly admit zero latency isn't possible. I get heartily sick of overblown hype about physical impossibilities wowing the credulous. The reason this matters is that over and over again I come across senior managers who swallow this sort of stuff without any conception of what is physically possible.

    Of course the latencies are accumulated, but if you are suffering most of the latency on the links and adapters, then you are into the law of diminishing returns by looking only at the switch.

    Where things are better, then fine - tell us what the realistic numbers are. Vendors are always issuing unachievable figures, but when it comes down to hyping up physical impossibilities, I smell a rat. This stuff matters - a lot.

    Another thing - this industry is all about precision. Computers and systems don't work on fluffy sales speak. They work through what in other areas might be called pedantry, but which in hard-core IT is engineering precision.

  7. Anonymous Coward
    Happy

    Bandwidth vs. Latency

    It always cracks me up that people so often don't understand latency and why it's important. I had a WAN admin (a "senior" one, if I recall) tell one of my guys that their MPLS network ran "at the speed of light", while not understanding (or ignoring) that 200-300 ms of latency between the west coast and their centralized infrastructure in the midwest *was* an actual problem when their pipe was sitting around at 50-80% utilization. They had bandwidth to spare, no doubt, but there was still a problem as far as their users were concerned.
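
    The effect is easy to show with a toy single-flow TCP bound in Python: one flow can move at most one window per round trip, however fat the pipe is. The 64 KB window and the RTT values are illustrative, not measurements from that network.

        # Single-flow TCP throughput is capped by window_size / round_trip_time,
        # regardless of how much bandwidth the pipe has.

        def max_throughput_mbps(window_bytes, rtt_ms):
            """Upper bound on one TCP flow's throughput, in Mbit/s."""
            return (window_bytes * 8) / (rtt_ms / 1000.0) / 1e6

        WINDOW = 64 * 1024  # a classic 64 KB window, chosen for illustration
        for rtt_ms in (5, 50, 250):
            print(f"RTT {rtt_ms:3d} ms -> at most {max_throughput_mbps(WINDOW, rtt_ms):7.1f} Mbit/s")
        # 5 ms ~ 104.9, 50 ms ~ 10.5, 250 ms ~ 2.1 Mbit/s - latency, not bandwidth, is the cap.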

    I find it amazing that many corporate network buyers don't seem to get this concept. To their credit, everyone seems to check for throughput minimums these days. Unfortunately a dense few still haven't gotten the memo on latency.

    In a way, though, I think 10GigE - a little like the new SATA and USB specs - and especially the future 40 and even 100 GigE mentioned are a little ridiculous for where we are now. For aggregated backbone links it's great, don't get me wrong, and I'm sure there are other specialized uses for it... but the average server in a data center isn't anywhere near capable of making appropriate use of a 10GigE NIC, in just the same way that a new USB3 external HDD isn't any faster than the USB2 version (i.e. bandwidth isn't the issue).

    For where we're at today... with most of the environments I see having adequate bandwidth for their needs, we'd be much better off focusing on component latency to improve our performance.

    1. Anonymous Coward
      Boffin

      Server can't drive 10G!?!

      The average modern server (think dual-socket, quad-core, for 8 cores total) can absolutely drive *well* over 1G, and if it doesn't saturate a 10G link, it will get bloody close.

      This is why people aggregate multiple 1G links (usually 4-6), which is a major PITA, and expensive, because you burn multiple 1G switch ports, and you're probably buying a multi-port NIC (most MoBos include a single port, or maybe 2).

      It depends on your bottlenecks, of course. If you're downloading stuff that you're writing to a local disk, then your disk will be your bottleneck, even if you've got a RAIDed pair. But SSDs are eliminating some of that bottleneck. (Bechtolsheim is a fan of SSD, by the way, probably for this reason.)

      And if your machine is out of memory, and constantly paging out to disk, you're screwed, too. But if you've got a workload that matches your machine's CPU/memory/IO characteristics, your network IO can become a bottleneck in a hurry. 10G is a solution there.
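
      If you want to sanity-check where the bottleneck sits, a crude min-of-the-stages estimate in Python is enough. The figures below are illustrative guesses (roughly 2 x 150 MB/s spinning disks, ~500 MB/s for an SSD), not benchmarks.

          # End-to-end transfer rate is capped by the slowest stage in the chain.
          # All figures are illustrative guesses, in Mbit/s.

          stages_mbps = {
              "10G NIC": 10_000,
              "RAIDed disk pair": 2_400,   # ~2 x 150 MB/s spinning disks
              "single SSD": 4_000,         # ~500 MB/s
          }

          def bottleneck(selected):
              """Return the slowest stage and its rate for a given data path."""
              rates = {name: stages_mbps[name] for name in selected}
              slowest = min(rates, key=rates.get)
              return slowest, rates[slowest]

          print(bottleneck(["10G NIC", "RAIDed disk pair"]))  # disks cap you at ~2.4 Gbit/s
          print(bottleneck(["10G NIC", "single SSD"]))        # SSD helps; the NIC still isn't the limit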

      1. Anonymous Coward
        Thumb Up

        I don't disagree with what you wrote Mr. Other AC

        ...a properly tuned/sized/utilized server is capable of doing amazing things, but I would venture that the "average" server in one of our data centers averages less than 30% utilization, if even that much... and would hit other bottlenecks way before maxing out its network. In fact, I can't even recall the last time I saw a LAN throughput issue that wasn't caused by the old auto-negotiate mismatch... even on my biggest servers (which, admittedly, are *very* modestly loaded P-Series boxes).

        IME the servers you're speaking of do exist - of that I have no doubt... but they're not the "average" server in the data centers I work with. Performance tuning is ultimately a game of whack-a-mole - you address one bottleneck then move on to the next. For my guys, we have to get past upstream limitations (mainly software and disk I/O) before we seriously need 10+ GigE.

  8. Slimster

    Follow the bear

    What's with all this wibbling about latency? If Andy is involved, then there is a high chance he's onto a winner. This guy is a genius and was involved in the early days of many techs we now take for granted. The real question is what happens next with Arista? Who will buy them?

    <Hofmeister ad>

    If you want great products follow the bear

    </Hofmeister ad>

  9. Anonymous Coward
    Grenade

    what a bunch of whining wankers.....

    The RACE to zero latency.

    Just like the race to absolute zero

    The race to perpetual motion

    The race to perfection

    It's about who can get closest, and at what speed. Yes, it may not be possible, but isn't it worth trying to get as close as you can, as fast as you can?

    FFS, if some of the morons here had been around at the dawn of man, they would be moaning about the pointless waste of trees making fires and the pointless round turny thing attached to a box with handles.

  10. Robert E A Harvey
    Stop

    Hang on a minute

    So we have all this glorious technology, and the objective is what?

    To reduce suffering in the world?

    To eliminate animal testing from drug trials?

    To improve agricultural production, or to provide home automation for quadriplegics?

    No. We want it to make very rich people fractionally richer!

    I am so pleased about that.

  11. LawLessLessLaw
    Boffin

    Obvious press release is obvious

    Join us after the break when a CEO will tell us why his kit is the best.

  12. Anonymous Coward
    Thumb Down

    Where's the 10GigE for home use? Still not available years after 1GigE

    "10gigE, although: 'Almost no server has 10 gig on the motherboard.'"

    LOL. Again, after YEARS of crappy 1GigE, where's the 10GigE for consumers at a consumer price?

    Hell, your average consumer TODAY would be SO glad to get 2, 4 or 6 GigE in a single PCIe card and a basic, cheap 10GigE router or switch to connect all the family PCs together, rather than even think about installing several 1GigE cards per machine and direct "bonding" (the word you're looking for, not "aggregating multiple 1G links") - not to mention Windows doesn't do bonding too well, as the drivers don't seem to actually exist for the cheaper 1GigE cards today.

    If you're really bothered that your HDs can't keep up, as the OEMs are overpricing SSD-assisted read/write kit, there's always the basic old-school outline to follow for a bonding driver. That driver could also include making an automatic, basic, large RAM disk to write to, and auto-saving that content to the slow HDs over time without you needing to think about it or set up a script, of course.
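
    The RAM-cache half of that idea is simple enough to sketch in Python: writes land in memory and a background thread trickles them out to the slow disk. Everything here (the class name, the target path) is made up for illustration; a real bonding or caching driver would live in the kernel, not in a script.

        # Toy write-back buffer: accept writes into RAM, flush to slow storage
        # in the background. Illustration only - not a real driver.
        import queue
        import threading

        class WriteBackBuffer:
            def __init__(self, target_path):
                self.target_path = target_path
                self.pending = queue.Queue()
                self.worker = threading.Thread(target=self._flush_loop, daemon=True)
                self.worker.start()

            def write(self, data: bytes):
                """Fast path: just park the data in RAM."""
                self.pending.put(data)

            def _flush_loop(self):
                """Slow path: drain the queue onto the disk as it keeps up."""
                with open(self.target_path, "ab") as slow_disk:
                    while True:
                        chunk = self.pending.get()
                        slow_disk.write(chunk)
                        slow_disk.flush()

        # Usage sketch: "slow_disk.bin" stands in for the spinning HD.
        # (The flush happens in the background; a real driver would drain on shutdown.)
        buf = WriteBackBuffer("slow_disk.bin")
        buf.write(b"data arriving faster than the slow disk can take it")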

This topic is closed for new posts.
