Western Digital CTO Martin Fink refused El Reg's questions, but did write this sweet essay

When "retired" HP Labs head Martin Fink surprisingly joined Western Digital as CTO we were interested how this memory-driven computing proponent would affect WD's solid state tech strategy. After a few weeks we sent him some questions, to which he responded: "Some of your questions need a bit of context around them. In some …

  1. Steve Chalmers

    Well said, Martin!

    Chris, Martin's comments are well said and get to the heart of how we need to think about the next decade. Looking forward to what he and his team will accomplish at WD, and to what the industry can and hopefully will do to evolve to the best and highest use of byte addressable persistent memory, rather than just looking at it as a new component and jamming it into an existing slot in an existing server or storage system.

    (For context, the first time Kirk shared the concepts here with me almost a decade ago, I was very skeptical and tried to force the ideas into the existing concept of what a server is, what a storage system is, and how the two talk to each other. That's not the best and highest use of byte addressable persistent memory. It's OK for it to take a year or two for you to "get it". When you really grok what a memory fabric like Gen-Z is for, and how the application does kernel bypass for storage reads and writes, and instead of thinking in today's paradigm are asking how the industry can get to an affordable plan to evolve to the kind of end Martin has in mind, then you really get it.)

    The best and highest use of byte addressable persistent (storage class) memory does not respect today's boundary between server, storage, and network in the data center.
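    To make the kernel-bypass point above concrete, here is a minimal sketch, assuming a Linux DAX-capable filesystem mounted at a hypothetical /mnt/pmem: the application maps the persistent memory once, then reads and writes it with ordinary loads and stores, with no read()/write() syscall on the data path.

    ```c
    /* Minimal sketch of kernel-bypass access to byte-addressable persistent
     * memory. The path /mnt/pmem/example is a hypothetical DAX mount point. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        const size_t len = 4096;
        int fd = open("/mnt/pmem/example", O_CREAT | O_RDWR, 0644);
        if (fd < 0 || ftruncate(fd, len) != 0) {
            perror("open/ftruncate");
            return 1;
        }

        /* With a DAX mapping the application gets direct load/store access to
         * the media; the kernel is only involved in setting up the mapping. */
        uint64_t *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        p[0] = 42;              /* a store: no syscall, no block I/O stack */
        msync(p, len, MS_SYNC); /* portable stand-in for a persist barrier */
        printf("read back: %lu\n", (unsigned long)p[0]);

        munmap(p, len);
        close(fd);
        return 0;
    }
    ```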

    @FStevenChalmers

    ("retired" from HPE last fall, still too excited about this area to take a job doing something else)

    1. Ian Michael Gumby

      @Steve Re: Well said, Martin!

      Absolutely.

      Chris did write a good article, and Martin was right to sidestep answering his questions.

      For the convergence to occur there has to be parity in speed and, at the same time, the ability to match current storage density. Imagine a small server having 4-8 TB of SCM per core. (That's roughly what you'd see in terms of storage per core on a Linux server today, if not higher given improving density; rough arithmetic in the sketch below.)

      But that convergence point is still a while away.
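      A rough worked example of that per-core figure; the server sizes below are assumptions purely for illustration, not anyone's product specs.

      ```c
      /* Rough arithmetic behind a "4-8 TB of SCM per core" figure.
       * Drive capacity, bay count and core count are assumed examples. */
      #include <stdio.h>

      int main(void)
      {
          const double tb_per_drive = 16.0; /* assumed top-end SSD capacity */
          const int drives = 24;            /* assumed 2U server, 24 bays   */
          const int cores = 64;             /* assumed 2-socket, 32c each   */

          double total_tb = tb_per_drive * drives;
          printf("%.0f TB total -> %.1f TB per core\n", total_tb, total_tb / cores);
          /* 384 TB / 64 cores = 6 TB per core, i.e. within the 4-8 TB range */
          return 0;
      }
      ```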

      1. Steve Chalmers

        Re: @Steve Well said, Martin!

        There are a lot of ways to build product. We don't know whether a distributed approach (put a modest amount of SCM in each server box, and use a lot of server boxes) or a centralized approach (looks like an EMC Symmetrix of 20 years ago, or more recently a DSSD box) will make the most business sense for the most applications. The market is probably big enough for both, so long as they use the same access and security mechanism on the wire (data plane), and the software agrees on who gets to read/write what the same way (control plane).

        @FStevenChalmers

  2. Dave 126 Silver badge

    No mention of the unpatched security holes in the WD My Cloud series of NAS drives (as reported by the Reg this week), then? No great surprise: their website makes no mention of it, either.

  3. luis river

    Fink disappointed me

    The Register's questions were very appropriate, and Fink's failure to answer them was a great error. The topic of NV memory is very interesting, and I don't doubt that in the coming years it will be the most important tech in IT.

    1. Ian Michael Gumby

      @Luis Re: Fink disappointed me

      Fink sidestepped the questions for a couple of reasons.

      Most important is that you have to understand and accept the concept of a convergence between storage and memory.

      He's hedging his bets because there are several competing ideas, each with its own strengths and weaknesses. Regardless of the individual tech, it's the ultimate goal of SCM that is important.

    2. luis river

      Re: Fink disappointed me

      After Western Digital's promises, here at the end of 2018 there is nothing new about SCM. Is it time for M Fink to get back to The Register?

  4. Pascal Monett Silver badge

    "In a world where I might want Exabytes of memory..."

    64TB should be enough for everyone.

    1. Steve Chalmers

      Re: "In a world where I might want Exabytes of memory..."

      64TB is four top-end SSDs right now. That would be a good ceiling for storage in a client (a PC), but not for a server.

      When we are talking about memory as storage, a suitable ceiling for a single very large database server is probably several thousand SSDs (perhaps 64PB). For a SAN of a thousand servers (not that SAN is the right technology to share memory), the right measure is exabytes. I would not want to lock in a shared address space smaller than 2^72 bytes or so for a shared memory semantic fabric for use in the 2020s.

      This is a nuanced topic, and what I've said here is only a tiny part of what it will take to successfully design in this space. Assuming any of us has a crystal ball clear enough to see that far into the future...
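      Putting rough numbers on the capacities above; the 16 TB-per-SSD figure and the drive and server counts are assumptions for illustration only.

      ```c
      /* Back-of-the-envelope check on the capacity ladder above. */
      #include <math.h>
      #include <stdio.h>

      int main(void)
      {
          const double TB = 1e12, PB = 1e15, EB = 1e18;
          const double ssd = 16 * TB;   /* assumed top-end SSD */

          printf("4 SSDs (a PC ceiling)      = %.0f TB\n", 4 * ssd / TB);        /* 64 TB */
          printf("4000 SSDs (one big server) = %.0f PB\n", 4000 * ssd / PB);     /* 64 PB */
          printf("1000 such servers          = %.0f EB\n", 1000 * 64 * PB / EB); /* 64 EB */
          printf("2^72-byte address space    = %.0f EB\n", pow(2, 72) / EB);     /* ~4722 EB */
          return 0;
      }
      ```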

  5. Anonymous Coward

    Not so much a Von Neumann bottleneck

    More a complete chokehold - see the cables at the top of the picture for the old version: http://www.manchester.ac.uk/discover/news/article/?id=3750

  6. Ian Michael Gumby

    If SCM possible... then ...

    You have to go back to the bottleneck of the network.

    It also changes the current paradigm where storage is cheap.

    1. Steve Chalmers

      Re: If SCM possible... then ...

      That's what Gen-Z or some other memory fabric is for: delivering low enough latency for an SCM read or write across a fabric the size of a SAN today, that the software can rationally wait for the response, rather than releasing the processor to go do something else (whether by blocking the thread or by using async I/O and doing the something else in the application / database / filesystem).

      The basic action of an Ethernet switch -- packet processing -- takes too long, even in the fastest switch made today, relative to an instruction execution time in a processor, for software to wait while a request and a reply each get packet-processed. This doesn't mean shared SCM access over Ethernet/RDMA makes no sense; it just means it's for cases where the application knows it's reaching for "far" data and is willing to wait for it.
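      A rough latency budget behind that argument; every figure below is an order-of-magnitude assumption, not a measurement of Gen-Z, any switch, or any processor.

      ```c
      /* Why software can sensibly wait for a fabric-attached SCM access but
       * not for a switched Ethernet/RDMA round trip. Figures are assumed. */
      #include <stdio.h>

      int main(void)
      {
          const double instr_ns  = 0.3;    /* ~one instruction on a ~3 GHz core       */
          const double fabric_ns = 500.0;  /* assumed SCM read across a memory fabric */
          const double rdma_ns   = 5000.0; /* assumed Ethernet/RDMA round trip        */

          printf("fabric read ~ %.0f instruction times stalled\n", fabric_ns / instr_ns);
          printf("RDMA access ~ %.0f instruction times stalled\n", rdma_ns / instr_ns);
          /* A few hundred ns is tolerable to wait out synchronously; several
           * microseconds is where software would rather switch to other work. */
          return 0;
      }
      ```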

      @FStevenChalmers

  7. Down not across

    Thanks Martin (and Chris)

    I really enjoyed the "essay".

    Martin was quite clear, and explained why the questions were wrong, whilst answering them to the extent it made sense to. An interesting way to look at things.

    To get an idea of where this is heading, just think of a simpler step like Oracle's Exadata, where the controllers on the storage understand WHERE clauses, and how that benefits performance. SCM would of course take it to another level entirely.
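    A toy sketch of that storage-side filtering idea (predicate pushdown); the row layout and function names are invented for illustration and are not Exadata's actual interface.

    ```c
    /* The storage node evaluates the WHERE-style predicate and returns only
     * matching rows, instead of shipping every block to the database host. */
    #include <stddef.h>
    #include <stdio.h>

    struct row { int id; double amount; };

    /* Runs "on the storage side": keep only rows with amount > threshold. */
    static size_t smart_scan(const struct row *in, size_t n,
                             double threshold, struct row *out)
    {
        size_t kept = 0;
        for (size_t i = 0; i < n; i++)
            if (in[i].amount > threshold)
                out[kept++] = in[i];
        return kept;
    }

    int main(void)
    {
        struct row table[] = { {1, 10.0}, {2, 250.0}, {3, 75.0}, {4, 900.0} };
        struct row result[4];

        /* Only the two matching rows cross the "wire" back to the host. */
        size_t n = smart_scan(table, 4, 100.0, result);
        for (size_t i = 0; i < n; i++)
            printf("id=%d amount=%.1f\n", result[i].id, result[i].amount);
        return 0;
    }
    ```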
