Future storage tech should KILL all-in-one solutions, says CEO

El Reg had a conversation with DataCore president, CEO and co-founder George Teixeira about what's likely to happen at DataCore in 2014. Software-defined storage is a trend his company is well placed to take advantage of, and he reckons DataCore could soar this year. Some of his replies have been edited for …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward

    >The concept is simple. Keep the disks close to the applications, on the same server and add flash for even greater performance. Don’t go out over the wire to access storage for fear that network latency will slow down I/O response.

    Until your server goes down and you need to fail over to another one. But the storage was in the original server, so it went down too, and you've now lost all of your applications. Well done.

    There's a very good reason we went to external storage, and it's still valid today.

    1. ToddR

      @AC

      No. If it's architected correctly (multiple parities), then multiple servers and/or discs can fail and the data is still available.
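
      A minimal sketch of the idea (purely illustrative, not DataCore's implementation): with single XOR parity striped across servers, any one of them can fail and its block is rebuilt from the survivors; "multiple parities" (RAID 6-style P+Q, for instance) extends this to multiple simultaneous failures.

      ```python
      # Toy single-parity example: three data "servers" plus one parity.
      def xor_blocks(blocks):
          """XOR equal-length byte strings together."""
          out = bytearray(len(blocks[0]))
          for block in blocks:
              for i, b in enumerate(block):
                  out[i] ^= b
          return bytes(out)

      data = [b"AAAA", b"BBBB", b"CCCC"]   # blocks on three servers
      parity = xor_blocks(data)            # parity on a fourth

      # The server holding b"BBBB" dies; rebuild its block from the rest.
      rebuilt = xor_blocks([data[0], data[2], parity])
      assert rebuilt == b"BBBB"
      ```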

  2. Daniel B.

    The SAN

    We have SANs for two things: sharing resources and reliability. The resource sharing is basically to be able to increase total storage space and just allocate it to the zillion servers we have. The reliability is for failover capabilities; if a server crashes, the data can still be accessed from another server without much hassle. The SAN itself might even be made to have HA capabilities. Local storage? If the server goes poof, your data might do so as well.
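
    A toy model of that failover argument (illustrative only; the class names are made up): a volume on shared storage outlives the server using it, while a volume on local disks goes down with the box.

    ```python
    # Illustrative only: why shared storage helps failover.
    class SanVolume:
        """Lives on the array, independent of any one server."""
        def __init__(self, data):
            self.data = data

    class Server:
        def __init__(self, name, local_data=None, san_volume=None):
            self.name = name
            self.local_data = local_data   # dies with the server
            self.san_volume = san_volume   # survives the server

        def crash(self):
            self.local_data = None         # local disks go down with the box

    shared = SanVolume("app data on the SAN")
    primary = Server("primary", local_data="app data on local disks",
                     san_volume=shared)
    primary.crash()

    # Failover: a standby server attaches the same SAN volume...
    standby = Server("standby", san_volume=shared)
    assert standby.san_volume.data == "app data on the SAN"
    # ...but the primary's local copy is gone.
    assert primary.local_data is None
    ```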

    1. ToddR

      @Daniel B

      Same functionality is a SAN, (if it's designed correctly). DataCore may or may not be, but several are.

      1. Anonymous Coward

        Re: @Daniel B

        Is it just me, or does this last reply make no sense whatsoever? What does "Same functionality is a SAN" mean?

        Do you mean to say that these products offer the same functionality as an external disk system? If so, then yes, they can do, but you need to have external disks. You can't offer the same functionality as external disks with internal disks, because they're not external: they're internal.

        1. ToddR

          Re: @Daniel B

          But you treat (or can treat) the server which hosts the discs as just a box of discs, i.e. the same as an iSCSI disc array.
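
          A rough sketch of that point (names are illustrative): to the client, a block target is just "read/write by block number", so it cannot tell whether the box answering is a dedicated array or an ordinary server exporting its internal discs.

          ```python
          # Illustrative: the initiator-side view of any block target.
          BLOCK_SIZE = 512

          class BlockTarget:
              """Anything that answers block reads/writes (e.g. over iSCSI)."""
              def __init__(self):
                  self.blocks = {}

              def write(self, lba, data):
                  assert len(data) == BLOCK_SIZE
                  self.blocks[lba] = data

              def read(self, lba):
                  return self.blocks.get(lba, b"\x00" * BLOCK_SIZE)

          # One "dedicated array", one "server full of discs" -- the
          # client code is identical either way.
          for target in (BlockTarget(), BlockTarget()):
              target.write(0, b"x" * BLOCK_SIZE)
              assert target.read(0) == b"x" * BLOCK_SIZE
          ```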

  3. bdj

    Local storage

    DSSV can use local storage or remote storage; I believe the only requirement is that Windows Server recognises it as an HDD. So it can be a local disk inside the server attached via SAS, or at the other end of a building via FC/iSCSI.

    DSSV is only a single point of failure if you have a single server with a single pool of storage accessible to it. Volumes are built in DSSV storage pools and can be (but do not have to be) mirrored between multiple nodes in the cluster. Volumes therefore live on multiple servers and multiple disks, similar to how VSAN / ScaleIO functions. Access to these volumes is Active/Active: each node housing a copy of the volume can serve live IO. This keeps the IO inside the box and local to it (though, again, that is not required). IO does need to be replicated, though, and writes have to be acknowledged by all nodes mirroring the volume.
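
    Roughly what that behaviour looks like (an illustrative sketch, not DataCore's code; the class and node names are made up): a write completes only once every live node mirroring the volume has acknowledged it, while any node holding a copy can serve reads.

    ```python
    # Illustrative sketch of synchronous mirroring with Active/Active reads.
    class Node:
        def __init__(self, name):
            self.name = name
            self.store = {}
            self.alive = True

        def write(self, lba, data):
            self.store[lba] = data
            return True                      # acknowledgment

        def read(self, lba):
            return self.store[lba]

    class MirroredVolume:
        def __init__(self, nodes):
            self.nodes = nodes

        def write(self, lba, data):
            # Write completes only when all surviving mirrors acknowledge.
            live = [n for n in self.nodes if n.alive]
            if not live:
                raise IOError("no surviving mirror nodes")
            assert all(n.write(lba, data) for n in live)

        def read(self, lba, preferred):
            # Active/Active: any live node holding a copy can serve IO,
            # ideally the node local to the application.
            node = preferred if preferred.alive else next(
                n for n in self.nodes if n.alive)
            return node.read(lba)

    a, b = Node("node-a"), Node("node-b")
    vol = MirroredVolume([a, b])
    vol.write(0, b"payload")

    a.alive = False                                  # one node fails...
    assert vol.read(0, preferred=a) == b"payload"    # ...the mirror serves it
    ```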

    So everyone saying they use a SAN for resiliency, and that a DSSV node failure means data loss, is not understanding the concept: a properly configured DSSV implementation virtualises the physical storage, so a volume does not live or die with any one node.
