Kazan is in Auburn, California.
Congrats Joe and company on the funding round. Go for it!
Kazan Networks just got $4.5m A-round funding from Intel, Samsung Ventures and Western Digital Corp for its storage array access acceleration technology. So what? So Fibre Channel and iSCSI external array access are poised to be devastated by NVMe over Fabrics (NVMeF), and Kazan is developing a hot little ASIC number that it …
Does it implement capabilities to replicate what zoning/masking can provide? If not, it will live in a very specialised niche while FC carries on. If it does, it may replace FC someday, but it would suffer from the same problem as FCoE: since you need very high-speed Ethernet to get the full benefit, and those high-speed Ethernet NICs and switches are expensive, you don't actually save much money.
Of course you would save money if you could use the same Ethernet NICs and switches you already have, but then you have to worry about QoS, and traditional 'tower' models of management, where the storage team and network team don't overlap, create the potential for turf wars and finger pointing.
OK, so NVMe (or "envy me" as the marketeers seem to like calling it) gives you a much faster interface than SAS to your SSD. Great! So, if all I want is to attach lots of flash to a server, then it looks like a good way to stack lots of JBOF shelves onto a server. Fine, I can rip out all my old SAS JBODs and replace them with NVMeF JBOFs. Problem is, I hardly see any SAS JBODs or JBOFs. I mostly see flash in big arrays where the bottleneck is usually the array head, and the value is the centralised consolidation, management, HA, easy presentation and backup offered by the array's software. I suspect the real winners will be the vendors that get the best NVMe features into their arrays, not the JBOF shelf vendors.
NVMeF is a great replacement for a SAN or SAS JBOD, good for the legacy stack using a local (unshared) file system.
Meanwhile, most of the world's data is moving to shared-file, object, and NoSQL/NewSQL types of solutions that are based on DAS or OSDs (simply because data and metadata must be updated simultaneously; with a SAN that means shared locks and journals, and that just doesn't scale; lots of write-ups from AWS, Google, Azure, etc. about it).
Even VM images, which typically use a local FS, are now 1% of their former size with Docker.
I think the storage camp needs to spend more time with the apps camp and focus on the right problems to solve; e.g. adding a k/v notion to those NVMeF devices would make them way more useful.
BTW, I implemented the first NVMeF prototypes, before it even had a name, and thought it was the best thing since sliced bread, but the world around us has changed since, and we need to adapt the infrastructure to the app stack.
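For what it's worth, the "k/v notion" mentioned above can be pictured as the target exposing key/value commands to the application instead of raw block addresses, so data and metadata land in one round trip. A toy in-memory sketch of what that interface might look like (all names here are hypothetical, not taken from any NVMe specification):

```python
# Hypothetical sketch of a key/value command set on an NVMeF namespace.
# The method names (kv_store, kv_retrieve, kv_delete) are illustrative
# only; they do not come from a real NVMe spec.

class KVNamespace:
    """Toy in-memory stand-in for a KV-capable NVMeF namespace."""

    def __init__(self, max_value_size: int = 4096):
        self.max_value_size = max_value_size
        self._store: dict[bytes, bytes] = {}

    def kv_store(self, key: bytes, value: bytes) -> None:
        # One command carries the whole object: no LBA maths on the host,
        # no shared locks or journal for a separate metadata update.
        if len(value) > self.max_value_size:
            raise ValueError("value exceeds namespace limit")
        self._store[key] = value

    def kv_retrieve(self, key: bytes) -> bytes:
        return self._store[key]

    def kv_delete(self, key: bytes) -> None:
        self._store.pop(key, None)


ns = KVNamespace()
ns.kv_store(b"user:42", b'{"name": "joe"}')
assert ns.kv_retrieve(b"user:42") == b'{"name": "joe"}'
```

The point of the sketch is the contrast with a plain block namespace: an app-facing store/retrieve pair lets the object and NoSQL stacks mentioned above talk to the fabric directly, rather than layering locks and journals over shared blocks.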
NVMe and NVMeF are both so immature that literally anything could happen before they come of age. FC has *decades* of evolution and multivendor support behind it and will have a very long tail. A better question is will external arrays of any kind exist 5 years down the road? SSD and other NV technologies are getting significantly denser and faster at a rapid pace and one likely scenario is compute and storage (NV) becoming a common platform with interconnect between nodes becoming the only external 'component'. Whether it's an evolution of Ethernet, Infiniband or NVMe is still an open question.
"....A better question is will external arrays of any kind exist 5 years down the road?..." Not all storage scenarios require ultimate speed; some require large amounts of space at low cost (such as archives). Arrays still offer the simplest, most cost-effective and space-effective provisioning of storage. Local storage has advantages in speed but leads to inefficient distribution of storage (lots of wasted space in each server, as opposed to little wasted in a centralised array). Whilst scalable architectures exist where server nodes are both compute and storage nodes (AKA the Intel dream), they do not offer the simplicity-of-scale advantages of monolithic arrays. I predict arrays are going to be around for a while yet.
NVMe is a very high-speed, very low-latency protocol that is optimised for on-board, short-distance data transfers. Just like you can use an airplane to drive down the highway, you can use this protocol over longer distances by putting bridge circuitry at each end of the connection to compensate for the different physics of communicating over longer distances. But now you've slowed it down and added costs, like clipping the wings and tail off the airplane to make it fit through overpasses. It would go really fast over some stretches of the highway but then have to slow way down (go through bridge chips) in other places.
TCP/IP is a great protocol for long distances, and NVMe, QuickPath and HyperTransport are great protocols for short distances. Fibre Channel and InfiniBand are great protocols for medium distances. Many attempts have been made over the years to jam some of these protocols into different roles, and they've generally wound up providing similar (or worse) performance at higher cost (like FC over IP).
Maybe this time it will be different.
Biting the hand that feeds IT © 1998–2020