Must be getting old
When I saw 'more doughnuts' my first thought was 'surely we're not going back to the days of magnetic core storage'!
The hierarchy of adapters, switches, routers and directors involved in storage networking is unwieldy, complex and costly and needs replacing with a flatter scheme of direct connections between servers and storage devices. That’s the networking message from start-up Rockport Networks. It says that current fat tree and spine- …
...but if you want to expand later on, you need to pay up front for a configuration that can handle your largest expected number of nodes. Given how quickly capacity needs grow in most shops, that's going to be a hard sell unless you're aiming at large, specific configurations (e.g. HPC) rather than general-purpose compute.
The torus approach works best in fixed installations: imagine needing to add a few more devices to an existing, running torus. Even with a basic ring back in the early days of Fibre Channel (Fibre Channel Arbitrated Loop, or FC-AL), the industry quickly learned to wire that loop as a star through wiring hubs to make adding, removing, failing, and repairing devices straightforward. Even then, loop reconfiguration was disruptive enough that low-cost switching (exactly the wiring needed for leaf-spine) eventually won out.
The torus approach also needs its failure cases (immediate workaround, repair, return to normal) thought through carefully. A torus whose hardware forwards through a node in 80ns is doing so based on tables loaded into each ASIC by some sort of software control plane. There are a lot of "interesting" cases (black holes, forwarding loops, non-simultaneous changes to forwarding tables by software, ...) beyond the basic operation described in this article.
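To make the "non-simultaneous table update" worry concrete, here is a minimal sketch (my own illustration, not Rockport's actual design) of dimension-order routing on a 4x4 2D torus. When one node's forwarding entry is patched around a failed link but its neighbour still holds the old entry, the packet ping-pongs between them until its hop budget runs out:

```python
# Illustrative sketch: "X then Y" dimension-order routing on an N x N torus.
# All names and the failure scenario are hypothetical, for illustration only.
N = 4

def step(coord, dst):
    """One hop of shortest-path dimension-order routing toward dst."""
    x, y = coord
    dx, dy = dst
    if x != dx:
        # move along X, taking the shorter wrap-around direction
        forward = (dx - x) % N
        x = (x + 1) % N if forward <= N // 2 else (x - 1) % N
    elif y != dy:
        forward = (dy - y) % N
        y = (y + 1) % N if forward <= N // 2 else (y - 1) % N
    return (x, y)

def route(src, dst, override=None, max_hops=16):
    """Follow per-node tables hop by hop; `override` maps node -> forced
    next hop (a stale or patched table entry). Returns the path, or
    'LOOP' if the hop budget runs out (black hole / forwarding loop)."""
    path = [src]
    node = src
    for _ in range(max_hops):
        if node == dst:
            return path
        node = override.get(node, step(node, dst)) if override else step(node, dst)
        path.append(node)
    return "LOOP"

# Healthy network: (0,0) -> (2,2) in four hops.
print(route((0, 0), (2, 2)))

# Suppose the link (2,1)->(2,2) fails and node (2,1) is patched to detour
# via (2,0), but (2,0) still has the old table pointing back at (2,1):
# the packet bounces between the two nodes until the hop budget expires.
print(route((0, 0), (2, 2), override={(2, 1): (2, 0)}))
```

A real control plane has to sequence table pushes (or version routes) so that no such transient window exists, which is exactly the hard part this comment is pointing at.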
Will be interesting to see where this torus design finds its home, and among what customers.
Biting the hand that feeds IT © 1998–2019