Feeds

* Posts by Dave@SolidFire

5 posts • joined 27 Jul 2011

SolidFire brings out new Carbon, says it'll make data centres more like clouds

Dave@SolidFire

Re: "Real time" replication?

Generally, it would be considered async - it's real-time, but without waiting for the remote confirmation. We don't use "async" because that term is far from standardized - many vendors still call snapshot-based replication "async" (e.g. Compellent), and other async schemes end up with large RPOs, which isn't the case here. "Semi-synchronous" is sometimes used for this type of replication, but that's not standardized either and is even more confusing. "Real-time replication" was the best description.
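To illustrate the distinction (a toy sketch, not our actual implementation - the Volume class, queue, and timings are purely illustrative): synchronous replication acks only after the remote copy is durable, while real-time async replicates each write immediately but acks after the local commit, so the RPO is just whatever is in flight, not a snapshot interval.

```python
import queue
import threading
import time

class Volume:
    """Stand-in for a storage volume; commit() makes a write durable."""
    def __init__(self, name):
        self.name = name
        self.log = []
    def commit(self, data):
        self.log.append(data)

local = Volume("local")
remote = Volume("remote")
inflight = queue.Queue()   # writes replicated but not yet remotely durable

def replicator():
    # Background thread ships writes to the replica as they arrive.
    while True:
        data = inflight.get()
        time.sleep(0.005)          # simulated WAN round trip
        remote.commit(data)
        inflight.task_done()

threading.Thread(target=replicator, daemon=True).start()

def write_sync(data):
    # Synchronous: ack only after BOTH copies are durable.
    # Every single write pays the full WAN round trip.
    local.commit(data)
    time.sleep(0.005)              # simulated WAN round trip
    remote.commit(data)

def write_realtime_async(data):
    # Real-time async: replication starts immediately, but the ack
    # follows the local commit alone - no waiting on the remote.
    local.commit(data)
    inflight.put(data)

write_realtime_async(b"block-42")  # returns at local-commit speed
inflight.join()                    # replica catches up moments later
```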


Why storage needs Quality of Service

Dave@SolidFire

Re: Indeed, delivering good QoS isn't easy

What I said was that only SolidFire had built the architecture from the ground up to support Guaranteed QoS. 3par has bolted QoS functionality onto a 15-year-old ASIC-based controller -- a good architecture, but not one designed with QoS from the start.

I expect you'll see every other major vendor follow suit, just as they bolted on thin provisioning after 3par innovated in that area. If I've learned anything from 3par's marketing over the years, it's that a bolt-on is never as good as designing it in from the start :)

Dave@SolidFire

Indeed, delivering good QoS isn't easy

Good to see some of the incumbent vendors acknowledging that QoS is essential in large-scale, multi-application & multi-tenant environments. But as the article alludes to at the end, it's not such a simple task on most systems today. Between juggling tiers, RAID levels, and noisy neighbors, it's nearly impossible to guarantee a minimum level of performance... which is really the key.

Despite the article's claims to the contrary, Netapp's QoS features today are just rate limiting ( http://www.ntapgeek.com/2013/06/storage-qos-for-clustered-data-ontap-82.html ).
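To be concrete, a pure rate limit is just a cap - think of a toy token bucket like the one below (an illustrative sketch, not Netapp's code). It can throttle a noisy tenant's maximum IOPS, but it promises nothing about a floor when the system is overloaded.

```python
import time

class RateLimiter:
    """Caps a tenant at max_iops; refuses excess but guarantees no floor."""
    def __init__(self, max_iops):
        self.max_iops = max_iops
        self.tokens = float(max_iops)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill in proportion to elapsed time, never above one second's worth.
        self.tokens = min(float(self.max_iops),
                          self.tokens + (now - self.last) * self.max_iops)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True   # IO admitted
        return False      # IO throttled - note: no minimum is ever promised
```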

Fujitsu has made a few references to "automating" QoS, but there doesn't appear to be any real detail on what that entails.

Only SolidFire has built its architecture from the ground up for Guaranteed QoS, including the ability to easily specify and deliver minimum performance guarantees, and adjust performance in real-time without data movement. ( http://solidfire.com/technology/qos-benchmark-architecture/ )
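A guaranteed minimum, by contrast, is a scheduling and admission-control problem. Here's a rough sketch of the idea (purely illustrative - this is not SolidFire's actual algorithm, and the volume names and numbers are made up): satisfy every volume's floor first, then share out the surplus without exceeding any volume's cap.

```python
def allocate(volumes, total_iops):
    """volumes: {name: (min_iops, max_iops)} -> {name: granted_iops}"""
    floors = sum(mn for mn, mx in volumes.values())
    assert floors <= total_iops, "admission control should reject this mix"
    grant = {name: mn for name, (mn, mx) in volumes.items()}
    surplus = total_iops - floors
    hungry = [n for n, (mn, mx) in volumes.items() if mx > mn]
    while surplus > 1e-9 and hungry:
        share = surplus / len(hungry)
        for name in list(hungry):
            mn, mx = volumes[name]
            extra = min(share, mx - grant[name])
            grant[name] += extra
            surplus -= extra
            if grant[name] >= mx:
                hungry.remove(name)   # capped out; stop feeding it
    return grant

# Noisy neighbors can grab surplus, but nobody drops below their floor:
print(allocate({"db":    (5000, 15000),
                "web":   (1000,  8000),
                "batch": ( 500, 20000)}, total_iops=10000))
```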

Going forward, high-quality QoS, including guaranteed performance, is going to be essential in enterprise-class storage systems.


How to tell if your biz will do a Kodak

Dave@SolidFire

The media yes.. but the storage systems.. no

While the physical media may be the same locally or in the cloud, it would be a mistake to think that the storage systems Amazon uses look anything like the EMC & Netapp arrays in most enterprises, or the Drobo and Netgear NAS boxes in SMB and home deployments.

The reality is that the migration to cloud will drive significant change in storage systems architecture, if not the media itself (and media is actually a small portion of storage $ spent).


Why should storage arrays manage server flash?

Dave@SolidFire

It may work, but it's just a stopgap

There are certainly advantages to server-side SSD caching, the biggest of which is that it reduces load on storage arrays that are these days taxed far beyond what they were originally designed for. In the long run, though, I think we'll see server-side SSD caching as nothing but a complex stopgap making up for deficiencies in current array designs.
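To make the mechanism concrete, here's a minimal sketch of what a server-side read cache does (the class and function names are hypothetical, not any vendor's API): absorb repeat reads on local flash so they never touch the array.

```python
from collections import OrderedDict

class ServerSideCache:
    """Toy LRU read cache sitting in front of a slower array read path."""
    def __init__(self, backend_read, capacity_blocks=1024):
        self.backend_read = backend_read   # e.g. the iSCSI/FC read path
        self.capacity = capacity_blocks
        self.cache = OrderedDict()         # LRU order: block -> data

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)  # hit: local-flash latency,
            return self.cache[block]       # zero load on the array
        data = self.backend_read(block)    # miss: full network + array trip
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False) # evict least-recently used
        return data

# Usage: wrap whatever function performs the real array read.
array_reads = []
def read_from_array(block):
    array_reads.append(block)              # stand-in for the array round trip
    return f"data-{block}"

cache = ServerSideCache(read_from_array, capacity_blocks=2)
cache.read(1); cache.read(1); cache.read(2); cache.read(1)
print(array_reads)   # [1, 2] - the repeat reads never hit the array
```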

If you look at "why" server-side cache is claimed to be necessary, it basically boils down to:

-The array can't handle all the IO load from the servers, particularly when flash is used with advanced features like dedupe

-The reduction in latency from a local flash cache

The first is a clear indication that current array designs aren't going to scale to cloud workloads or to the performance levels of all- (or mostly-) solid-state storage. Scale-out architectures are going to be required to deliver the controller performance needed to really benefit from flash.

The second is based on the assumption that the network or network stack itself is responsible for the 5-10ms of latency that he's reporting. The reality is that a 10G or FC storage network and network stack will introduce well under 1ms of latency - the bulk of the latency is coming from the controller and the media. Fix the controller issues and put in all-SSD media, and suddenly network storage doesn't seem so "slow". Architectures designed for SSD like TMS, Violin, and SolidFire have proven this. Local flash, particularly PCI-attached, will still be lower latency, but that microsecond-level performance is really only needed for a small number of applications.
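To put rough numbers on that argument (illustrative figures consistent with the ranges above, not measurements of any particular system):

```python
# Back-of-envelope latency budget: the network leg is well under 1 ms,
# so a 5-10 ms read must be dominated by the controller and the media.
# All figures below are assumptions for illustration, not benchmarks.
network_ms = 0.3            # 10GbE/FC fabric + network stack
disk_controller_ms = 2.0    # disk-era controller under heavy load
disk_media_ms = 5.0         # spinning media seek + rotation
ssd_controller_ms = 0.3     # controller designed for flash
ssd_media_ms = 0.1          # SSD read

print(f"disk-era array:  {network_ms + disk_controller_ms + disk_media_ms:.1f} ms")
print(f"flash-era array: {network_ms + ssd_controller_ms + ssd_media_ms:.1f} ms")
# Fixing the controller and media leaves well under 1 ms total - networked
# storage stops looking "slow" without any server-side cache at all.
```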

EMC and Netapp have huge investments in their current architectures, and are going to try every trick they can to keep them relevant as flash becomes more and more dominant in primary storage, but eventually architectures designed for flash from the start will win out.
