* Posts by PaulHavs

11 posts • joined 1 Jun 2013

It's wall-to-wall Huawei: Chinese behemoth hogs five of six top spots in SPC-1 array benchmark

PaulHavs

Not really relevant any more

I think it simply comes down to the relevance of SPC-1 in an all-flash era.

Every all-flash array delivers more IOPS than is needed and more than the app workloads can drive...

The common bottlenecks are now outside the storage ... at long last

They are now in the apps and the humans!

...so... it doesn't have the relevance it had in the HDD era...

RIP HDD ... we loved you once!

Enterprise storage sitrep: The external array party is over

PaulHavs

AFA is just External - a bit modernised

Paul Havs from HPE here...

Ummmm, I fail to see why the industry, including El Reg, doesn't just recognise that the AFA industry is simply an evolution of the (yes, traditional) external array industry.

For sure, the external array industry is flat to small growth; however, virtually ALL of that fleet will be replaced with AFA over the next 5 years - that's a fantastic opportunity for all, customers and vendors alike.

The reason for this is that structured data growth is minimal while unstructured growth is massive. External shared array storage is best suited for structured data - that's what it is designed for.

The TPC-C/SPC-1 storage benchmarks are screwed. You know what we need?

PaulHavs
Stop

The age-old problem with POCs is the focus on perfect-world scenarios.

Whereas - through life - what customers will remember, and what will bite them, are the imperfect scenarios.

How does the system perform under all the failure situations likely to be encountered over a 3-to-5-year lifespan? How will operating in a degraded state affect performance? How much failure can be sustained before the system stops servicing IO requests? ...and so on...

Good luck with another new synthetic workload... we don't need it. We just need customers to run their own workload in a POC and do some realistic failure testing.
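The failure testing described above can be sketched as a simple test matrix. A minimal sketch, assuming hypothetical scenario names and entirely made-up IOPS figures (not measurements from any real system): record throughput for each induced failure and compare it against the healthy baseline.

```python
# Hypothetical POC failure-test matrix; scenario names and numbers are
# illustrative assumptions only.
results = {}

def record(scenario, iops, servicing_io=True):
    """Log one failure-test run: measured IOPS and whether IO kept flowing."""
    results[scenario] = {"iops": iops, "servicing_io": servicing_io}

def degraded_penalty(scenario):
    """Fraction of baseline performance lost while running in this state."""
    return 1 - results[scenario]["iops"] / results["baseline"]["iops"]

# Example runs with made-up numbers:
record("baseline", iops=100_000)
record("one drive failed + rebuild", iops=80_000)
record("one controller down", iops=50_000)
record("double drive failure", iops=0, servicing_io=False)  # system stopped
```

With these assumed numbers, `degraded_penalty("one controller down")` comes out at 0.5 - exactly the kind of through-life answer a perfect-world POC never produces.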

/Paul H - HPE Storage.

HPE bolts hi-capacity SSD support onto StoreServ

PaulHavs
Happy

Re: wherefore art thou, so low-cap SolidFire?

@JohnMartin,

You do make some correct points there, John; however, it takes referencing two very differently architected product solutions to make them. Yes - SolidFire has a good QoS implementation. Yes - ONTAP has features (file services) which 3PAR has built out.

The overwhelming beauty of 3PAR, which customers are voting for (re: buying it), is its near-perfect balance of cost-effectiveness, performance, and mature enterprise data services.

SolidFire has some of these but not all.

ONTAP / FAS has some of these but not all.

Customers want to simplify - not complexify. I'd imagine the NetApp answer to this is a FAS gateway in front of a SolidFire backend? - managed at some point in the future with "an extra pain of glass"?

regards,

Paul Haverfield

Storage CTO, HPE APJ region

HPE’s StorServ filer is very speedy. So for best $/IOPS, get a DataCore

PaulHavs

The two DataCore SPC-1 results as submitted do not have any HA; they are akin to a single-controller storage device. It is much easier to obtain high performance when there is no controller resiliency, and the single-controller configuration also reduces cost dramatically. I don't know any customers running application workloads on single controllers!

DataCore is a software-defined storage solution and suits very different workloads to shared external arrays with HA and controller resiliency. So I think it is reasonable not to make direct comparisons between the two DataCore results and external shared array results.
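The $/IOPS effect being argued here can be shown with back-of-envelope arithmetic. All the numbers and the write-mirroring penalty below are assumptions for the sake of the argument, not any vendor's figures: a non-HA, single-controller config posts a better $/IOPS than an HA pair built from the same hardware whenever resiliency costs any performance.

```python
# Illustrative $/IOPS model; all inputs are made-up assumptions.
def price_per_iops(controllers, cost_per_controller, iops_per_controller,
                   ha_mirror_penalty=0.0):
    """$/IOPS for a config; HA pairs lose some IOPS to write-cache mirroring."""
    cost = controllers * cost_per_controller
    iops = controllers * iops_per_controller * (1 - ha_mirror_penalty)
    return cost / iops

single = price_per_iops(1, 50_000, 200_000)                          # no resiliency
ha_pair = price_per_iops(2, 50_000, 200_000, ha_mirror_penalty=0.2)  # resilient
# single < ha_pair: the HA pair pays for its resiliency in $/IOPS.
```

With these assumed inputs the single-controller box lands at $0.25/IOPS versus $0.3125/IOPS for the pair, which is why comparing the two classes head-to-head misleads.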

Paul Haverfield.

HPE Storage APJ.

Flash storage: Has the hype become reality?

PaulHavs

Enrico Signoretti sums it up perfectly

Enrico Signoretti's summary is perfect - insofar as it represents the use cases and requirements for the transition to the all-flash datacenter...

If I flip it around to a capability view, I see:

1. All flash all the time

2. Tiered flash / fast SAS / slow SAS

3. Flash cache acceleration

4. Flash cache + Tiering

5. Converged Flash - Datasets on all flash + Datasets on HDD - WITHIN same array

As far as I know, there is only one solution available addressing these 5 use cases with one architecture, one common set of native enterprise data services, one management framework, and so on...

...3PAR StoreServ!

Anybody disagree??

Paul Haverfield, HPE Storage APJ CTO

HPE trots out benchmark blaster flash array as PCs become distant memory

PaulHavs

Re: Translation

RE: "I would expect a NetApp 8080 cluster to trounce this...."

I wouldn't be so confident. NetApp have never submitted a FAS-based SPC-2 result. Their first ever SPC-2 result was submitted in Sep/15 on the E-Series @ 8,236 MBPS. If they could submit a trouncing result... my experience in this stuff suggests to me they would!

/Paul Haverfield [HPE Storage]

NetApp's all-flash FAS array is easily in the top, er, six SPC-1 machines

PaulHavs
Thumb Up

Re: Really??

I'd love to see an SPC-1 from EMC on XtremIO or Pure on an FA-xxx - it would make my day / week / month!!

SPC-1 is still as relevant for all-flash systems as for all-HDD or even hybrid, now that the test specs allow tiering systems to do their magic.

The benchmark is still the most standardised way of comparing systems with a realistic and reasonably hard workload - one that actually resembles virtual server IO blenders very, very well.

Paul H [HP]

In-array compute ....

PaulHavs
Go

But what's the difference? Servers --> Storage; or Storage --> Servers...

"..... then bring it closer to the servers. ...."

It will happen, not this year or next, but 5 - 10 years out we will have converged systems which are modern day open-system looking 'mainframes'. The technical argument is whether "servers will move into storage arrays" or whether "storage will return to servers". ...and the internal networking will be totally transparent.

It's happening today in the form of the converged system stacks from all mainstream vendors - storage + servers + networks in a rack or a few racks - sized as small, medium, large for a range of guest VMs in a hypervisor of choice (choice only from some vendors!).

Look in your lounge room at home. How many of us still have 'best of breed' component hi-fi systems? Not many, I think. Most have smart TVs with integrated DVD / BluRay / DVR / PVR connected to a surround sound system... maybe an external set-top box, but that's a transition issue!! It's all about integration and elimination of 'best of breed' components.

Why otherwise would EMC, NetApp and Cisco be so hell-bent on getting into the converged systems market? Early days, yes, but we will have enterprises and service providers running "open systems mainframes" in the foreseeable future, before I start pushing up daisies (well, I hope so anyway!)

Paul Haverfield

Storage Principal Technologist, HP APJ region.

Are SPC Benchmarks useful?

PaulHavs

Re: Are SPC Benchmarks useful?

Totally reasonable line to take Chris.

The founding premise for the SPC organisation was to provide a level playing field to allow comparisons, judgements, and informed opinion to be made - as you pointed out.

All vendors contributing SPC results should be applauded and recognised for what they are doing. An SPC submission "opens the kimono" on many details of a storage system's design and architecture which would normally not be easily available until a customer has purchased, implemented and experienced many months of operations.

A vendor will not submit an SPC result for publishing if they think they could get a better headline number by tweaking this or that - changing the balance between CPU, capacity and hosts, etc. It is all a fine balance between cost and ensuring that all resources are utilised efficiently.

This is why I love the SPC-1 Full Disclosure Reports - the detail and nuances disclosed there are amazing, and accurate enough to form an informed opinion and position on the storage array solutions available on the market.

For example: why do some vendors go to extreme lengths to implement super-complex host-side file-system wide-striping? Answer: because to achieve high performance you need to distribute hot spots, and wide-striping is the best method today to do this. That's why industry-leading arrays do the wide-striping inside the array - eliminating the complexity from the customer's daily life.
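The wide-striping idea above can be sketched in a few lines. A minimal sketch, assuming an illustrative chunklet size and drive names (not any array's actual layout algorithm): the volume is carved into fixed-size chunklets laid out round-robin across every drive, so any hot region of the logical address space is spread over many devices instead of hammering one.

```python
# Toy wide-striping layout; chunklet size and drive names are assumptions.
def place_chunklets(volume_mb, chunklet_mb, drives):
    """Map chunklet index -> drive, round-robin across all drives."""
    n = -(-volume_mb // chunklet_mb)  # ceiling division
    return {i: drives[i % len(drives)] for i in range(n)}

layout = place_chunklets(volume_mb=8192, chunklet_mb=1024,
                         drives=["d0", "d1", "d2", "d3"])
# Four consecutive "hot" chunklets land on four different drives:
hot_drives = {layout[i] for i in range(4)}
```

Doing this inside the array means the host sees one volume and none of the placement complexity.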

Second example: why do some vendors configure their array groups, volume pools, or whatever they call them, without any sparing? Answer: because configuring sparing would reduce (worsen) their capacity utilisation efficiency, possibly increase cost, and potentially decrease performance. I have never met a customer who uses a RAID array without configuring sparing... doh!!
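The capacity-efficiency incentive behind skipping sparing is plain arithmetic. Illustrative numbers only - the raw capacity, RAID overhead and spare fraction below are made-up assumptions: reserving spare capacity shrinks the usable capacity a benchmark reports, which is exactly the efficiency number a no-sparing config inflates.

```python
# Illustrative capacity arithmetic; all inputs are made-up assumptions.
def usable_tb(raw_tb, raid_overhead, spare_fraction):
    """Usable TB after RAID protection overhead and spare reservation."""
    return raw_tb * (1 - raid_overhead) * (1 - spare_fraction)

with_sparing = usable_tb(100, raid_overhead=0.25, spare_fraction=0.10)  # 67.5 TB
no_sparing   = usable_tb(100, raid_overhead=0.25, spare_fraction=0.0)   # 75.0 TB
```

Dropping the spare reservation flatters the utilisation figure by 7.5 TB on these assumed numbers - while leaving the array with nothing to rebuild onto.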

Above are examples - and there are many more - as to why the SPC-1 benchmark is such a good "level playing field" to make informed judgements and decisions (by customers) regarding storage array choices.

Disclosure: I'm an HP Storage employee.


Biting the hand that feeds IT © 1998–2019