Huawei developing NVMe over IP SSD

Analysis Huawei is developing an NVMe over IP SSD with an on-drive object storage scheme, promising radically faster object storage and a re-evaluation of object storage's very purpose. At the Huawei Connect 2017 event in Shanghai, Guangbin Meng, Storage Product Line President for Huawei, told El Reg Huawei is developing an …

  1. Blotto Silver badge

    Cloud first networking?

    This could be huge for cloud providers; of course, all these SSDs need connecting somehow.

    What a way to sell more network kit. Prob a better way to sell more ports than building your own servers.

  2. CheesyTheClown

    What the!?!?!?

    What is the advantage of perpetuating protocols optimized for system-board-to-storage access as fabric or network protocols?

    Bare metal systems may, under special circumstances, benefit from traditional block storage simulated by a controller: it allows remote access and centralized storage for booting systems. This can be pathetically slow, and as long as there is a UEFI module or Int13h BIOS extension, there is absolutely no reason why either SCSI or NVMe should be used. Higher latencies introduced by cable lengths and centralized controllers make their use dependent on unusual extensions to SCSI or NVMe, which are less-than-perfect fits for what they are being used for.

    A simple encrypted drive emulation in hardware that supports device enumeration, capability enumeration, read block(s) and write block(s) is all that is needed for a network protocol for remote block device access. With little extra effort, the rest can be done with a well-written device driver and BIOS/UEFI support that is either native (as is more common today) or added via a flash chip on a network controller. Another option is to put the loader onto an SD card, as part of GRUB for instance.
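    As a sketch of how small that protocol surface could be, here is a toy version in Python; every opcode, field size and name is invented for illustration and is not any real wire format:

    ```python
    # Toy remote-block protocol: device enumeration, capability enumeration,
    # read block(s) and write block(s). Purely illustrative field layout.
    import struct

    OP_ENUM_DEVICES = 0x01   # list exported drives
    OP_ENUM_CAPS    = 0x02   # block size, block count, flags for one drive
    OP_READ_BLOCKS  = 0x03   # read N contiguous blocks
    OP_WRITE_BLOCKS = 0x04   # write N contiguous blocks

    # Fixed 24-byte request header: opcode, device id, starting LBA, block count.
    HDR = struct.Struct("<B3xIQQ")

    def encode_read(device_id: int, lba: int, count: int) -> bytes:
        """Build a read request for `count` blocks starting at `lba`."""
        return HDR.pack(OP_READ_BLOCKS, device_id, lba, count)

    def decode(header: bytes) -> dict:
        """Parse a request header back into its fields."""
        op, dev, lba, count = HDR.unpack(header)
        return {"op": op, "device": dev, "lba": lba, "count": count}

    req = encode_read(device_id=0, lba=2048, count=8)
    print(decode(req))   # {'op': 3, 'device': 0, 'lba': 2048, 'count': 8}
    ```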

    The only reason block storage is needed for a modern bare metal server is to boot the system. We no longer compensate for lack of RAM with swapping as the performance penalty is too high and the cost of RAM is so low. In fact, swapping to disk over fabric is so slow that it can be devastating.

    As for virtual machines: they make use of drivers which translate SCSI, NVMe or ATA protocols (in poorly designed environments) or implement paravirtualization (in better environments), translating block operations into read and write requests within a virtualization storage system which can be VMFS based, VHDX based, etc. That translation is then converted back into block calls relative to the centralized storage system, where they are translated back to block numbers, cross-referenced against a database, and then translated back again to local native block calls (possibly with an additional file system or deduplication hash database in between). Blocks are then read from native devices in different places (hot, cold, etc.) and the translation game begins again on the way back.
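    A toy sketch of that translation chain in Python; every layer, name and number below is a simplification invented for illustration, not real VMFS/VHDX or array code:

    ```python
    # One VM block write travelling the chain described above:
    # guest LBA -> virtual disk file offset -> datastore/LUN block -> dedup lookup -> physical block.

    def guest_to_vdisk_offset(guest_lba: int, block_size: int = 512) -> int:
        """Paravirt driver: guest block number becomes a byte offset in a virtual disk file."""
        return guest_lba * block_size

    def vdisk_offset_to_lun_block(file_offset: int, extent_start: int, block_size: int = 512) -> int:
        """Datastore (VMFS/VHDX-like) layer: file offset becomes a block on the shared LUN."""
        return extent_start + file_offset // block_size

    def lun_block_to_physical(lun_block: int, dedup_index: dict) -> int:
        """Array layer: the LUN block is cross-referenced against a metadata/dedup
        table and redirected to wherever the data actually lives (hot or cold tier)."""
        return dedup_index.get(lun_block, lun_block)

    dedup_index = {135164: 900000}                   # toy metadata table
    off  = guest_to_vdisk_offset(4096)               # guest block 4096 -> byte 2097152
    lun  = vdisk_offset_to_lun_block(off, 131068)    # -> LUN block 135164
    phys = lun_block_to_physical(lun, dedup_index)   # -> physical block 900000
    print(off, lun, phys)
    ```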

    NVMe and SCSI are great systems for accessing local storage. But using them in a centralized manner is slow, inefficient and, in the case of NVMe... insanely wasteful.

    Instead, implement device drivers for VMware, Windows Server, Linux, etc... which provide the same functionality while eliminating the insane overhead and inefficiency of SCSI or NVMe over the cable, and focus instead on things like security, decentralized hashing, etc...

    Please please please stop perpetuating the "storage stupid" which is what this is and focus on making high performance file servers which are far better suited to the task.

  3. NVMe or bust

    Samsung already announced a similar SSD at the Flash Memory Summit

  4. Anonymous Coward
    Anonymous Coward

    NVMe over IP, facepalm

    Indeed, double facepalm.

    Sub-half-millisecond latency, multiple GB/s throughput, all over IP... Really curious how this will fail.

    1. HPCJohn

      Re: NVMe over IP, facepalm

      Weeeellll..... how about NVMe over RDMA?

      If the network controllers on these things did proper RDMA this might be very interesting.

      http://searchsolidstatestorage.techtarget.com/definition/NVMe-over-Fabrics-Nonvolatile-Memory-Express-over-Fabrics

  5. Anonymous Coward
    Anonymous Coward

    By the time you've wrapped it in IP, I would expect SCSI-over-IP, Fibrechannel-over-IP and NVMe-over-IP to perform essentially the same. A block access is a block access.
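    Back-of-envelope arithmetic makes the point; the figures below are illustrative assumptions, not measurements:

    ```python
    # Rough latency budget for one remote block read over IP. Once the media,
    # the network round trip and the IP stack are paid for, the choice of block
    # protocol carried inside the packets barely moves the total.
    flash_read_us  = 80   # assumed NAND read time
    network_rtt_us = 30   # assumed NIC + switch hops, round trip
    ip_stack_us    = 20   # assumed kernel TCP/IP processing at both ends
    protocol_us    = 2    # assumed encap/decap difference between block protocols

    total = flash_read_us + network_rtt_us + ip_stack_us + protocol_us
    print(f"{total} us per access; protocol choice is ~{100 * protocol_us / total:.0f}% of it")
    ```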

    1. HPCJohn

      https://www.openfabrics.org/images/eventpresos/2017presentations/407_ExperiencesNVMeoF_PPandit.pdf

    2. CheesyTheClown

      Nope... block access is file/database access

      No storage subsystem (unless it's designed by someone truly stupid) stores blocks as blocks anymore. It stores records into blocks, which may or may not be compressed, and those compressed, referenced blocks are stored in files. The files may be preallocated into somewhat disk-sector-aligned pools of blocks, but it would be fantastically stupid to store blocks as blocks.
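      A toy version of that layout in Python; the container file, index and compression choice are all invented for illustration:

      ```python
      # The array doesn't store "block 42" at physical block 42: it compresses
      # the data, appends it to a container file, and keeps an index entry
      # mapping the logical block to (offset, compressed length) in that file.
      import zlib

      container = bytearray()   # stands in for an on-disk container file
      index = {}                # logical block number -> (offset, compressed length)

      def write_block(lba: int, data: bytes) -> None:
          compressed = zlib.compress(data)
          index[lba] = (len(container), len(compressed))
          container.extend(compressed)

      def read_block(lba: int) -> bytes:
          offset, length = index[lba]
          return zlib.decompress(bytes(container[offset:offset + length]))

      write_block(42, b"A" * 4096)              # a highly compressible 4 KiB block
      print(index[42], len(read_block(42)))     # tiny stored size, full 4096 bytes back
      ```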

      As such, NVMe is being used as a line protocol and instead of passing it through to a drive, it's being processed (probably in software) at fantastically low speeds which even SCSI protocols could easily saturate.

      There will be no advantage in extended addressing, since FCoE and iSCSI already supported near-infinite address spaces to begin with. There will be no advantage in features, as NVMe would have to issue commands almost identically to SCSI. There will be no advantage in software support, because drivers took care of that anyway... or at least any system with NVMe support can do pluggable drivers. Those which can't will have to translate SCSI to NVMe.

      They should have simply created a new block protocol designed to scale properly across fabrics without any stupid buffering issues that would require super stupid solutions like MMIO and implemented the drivers.

      Someone will be dumb enough to pay for it
