El Reg had a conversation with DataCore president, CEO and co-founder George Teixeira about what’s likely to happen in 2014 with DataCore. Software-defined storage represents a trend that his company is well-positioned to take advantage of and he reckons DataCore could soar this year. Some of his replies have been edited for …
>The concept is simple. Keep the disks close to the applications, on the same server and add flash for even greater performance. Don’t go out over the wire to access storage for fear that network latency will slow down I/O response.
Until your server goes down and you need to fail over to another one. But the storage was in the original server, so it went down too, and you've now lost all of your applications. Well done.
There's a very good reason we went to external storage, and it's still valid today.
No: if it's architected correctly (multiple parities), then multiple servers and/or discs can fail and the data is still available.
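To make the parity point concrete, here is a minimal sketch (illustrative only, not DataCore's actual algorithm) of how a single XOR parity block lets you rebuild any one lost data block from the survivors; a second, differently computed parity (as in RAID 6) extends the same idea to two simultaneous failures:

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Hypothetical contents of three data discs.
data_blocks = [b"\x01\x02", b"\x10\x20", b"\x0f\xf0"]

# Parity is the XOR of all data blocks.
parity = xor_blocks(data_blocks)

# Simulate losing disc 1, then rebuild it from the survivors plus parity:
# XOR-ing everything that remains cancels out all blocks except the lost one.
surviving = [data_blocks[0], data_blocks[2], parity]
rebuilt = xor_blocks(surviving)
assert rebuilt == data_blocks[1]
```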
We have SANs for two things: sharing resources and reliability. The resource sharing is basically being able to increase total storage space and just allocate it to the zillion servers we have. The reliability is for failover capabilities: if a server crashes, the data can still be accessed from another server without much hassle. The SAN itself might even be built with HA capabilities. Local storage? If the server goes poof, your data might do so as well.
Same functionality is a SAN, (if it's designed correctly). DataCore may or may not be, but several are.
Re: @Daniel B
Is it just me, or does this last reply make no sense whatsoever? What does "Same functionality is a SAN" mean?
Do you mean to say that these products offer the same functionality as an external disk system? If so, then yes, they can do, but you need to have external disks. You can't offer the same functionality as external disks with internal disks, because they're not external: they're internal.
Re: @Daniel B
But you treat (or can treat) the server which hosts the discs as a box of discs, i.e. the same as an iSCSI disc array.
DSSV can use local or remote storage; I believe the only requirement is for Windows Server to recognise it as an HDD. So it can be a local disk inside the server attached via SAS, or at the other end of a building via FC/iSCSI.
DSSV is only a single point of failure if you have a single server with a single pool of storage accessible to it. Volumes are built in DSSV storage pools and can be (but do not have to be) mirrored between multiple nodes in the cluster. Volumes therefore live on multiple servers and multiple disks, similar to how VSAN / ScaleIO functions. Access to these volumes is active/active, and each node housing a copy of a volume can serve live IO. This keeps the IO inside the box and local to it (but again, that's not required). IO does need to be replicated, though, and writes have to be acknowledged by all nodes mirroring the volume.
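The mirrored-write behaviour described above can be sketched as follows. This is a hedged illustration of the general synchronous-mirroring pattern, not DataCore's API; the `MirrorNode` class and function names are invented for the example. A write completes only once every node holding a copy has acknowledged it, and afterwards any surviving node can serve the read:

```python
class MirrorNode:
    """Toy stand-in for a storage node holding one copy of a mirrored volume."""
    def __init__(self, name):
        self.name = name
        self.store = {}   # lba -> data
        self.up = True

    def write(self, lba, data):
        if not self.up:
            raise IOError(f"{self.name} is down")
        self.store[lba] = data
        return True  # acknowledgement back to the writer

def mirrored_write(nodes, lba, data):
    """Synchronous mirroring: succeed only when ALL mirror nodes acknowledge."""
    return all(node.write(lba, data) for node in nodes)

def read_any(nodes, lba):
    """Active/active read: any surviving node with a copy can serve the IO."""
    for node in nodes:
        if node.up and lba in node.store:
            return node.store[lba]
    raise IOError("no surviving copy")

a, b = MirrorNode("node-a"), MirrorNode("node-b")
mirrored_write([a, b], lba=42, data=b"payload")
a.up = False                                # one node fails...
assert read_any([a, b], 42) == b"payload"   # ...the mirror still serves the data
```

The key design point is that acknowledgement waits for every mirror, which is why a single node failing is not a data-loss event: a fully acknowledged copy is guaranteed to exist elsewhere.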
So everyone saying they use a SAN for resiliency, and that a DSSV node failure is a data-loss situation, is not understanding that a properly configured DSSV implementation is virtualising the physical storage.