E-mail reply from Chuck Hollis of VMware
Chuck Hollis of VMware has read this comment thread and sent me an e-mail. His opinion and views on the matter - and on my comments above - are valid and deserve to be included in this discussion. I am reproducing the e-mail chain here.
Chuck Hollis to Trevor Pott
It was interesting to read your recent comments on The Register regarding the latest Nutanix snafu.
But I think you've completely misunderstood (and misrepresented) our stance on performance testing. We encourage it, not discourage it.
We've published oodles of our own data. We've published data from customers. We've encouraged StorageReview.com to publish. Etc. etc. etc. The more the merrier.
All we ask is a chance to review the configs and methodologies prior to publication -- which has been VMware's policy for many, many years. Lots of people are new to this testing thing.
We plan to release an easy-to-use testing tool (based on VDbench) to help make it easier for folks to test hyperconverged clusters with a variety of IO profiles. You, of course, are free to use it -- as will anyone else.
Or use your own tools. Have at it -- really!
However, we don't have much of a budget to send people free hardware. We're tapped out for the year, unfortunately, so you'd have to round up your own four-node config that conformed to the VMware VSAN HCL and design guidelines. Dell may be willing to play, or perhaps HP or similar.
Nor do we generally pay for reviews, as that's a slippery slope.
I hope you understand our position here, and can perhaps soften some of your comments to more accurately reflect reality?
Trevor Pott to Chuck Hollis (reply)
Your take on this does not reflect my experiences with VMware in this regard; we appear to have dramatically different understandings of the meaning of "chance to review the configs and methodologies prior to publication". I view independent reviewing – especially of software solutions like VSAN – to be fair game if you test multiple options on the same hardware. Doubly so if the individual components are on the HCL.
VMware seems to disagree, and has insisted that individual components being supported isn't good enough: the whole of the thing must meet the desired qualities. Slower CPUs, for example, are apparently not okay.
That said, I don't have to agree with your take on this for it to be valid. I have my view and I have expressed it. It is entirely possible that my views or my understanding are wrong, and I'm willing to admit that possibility.
I will publish your e-mail in the comments, along with this response, as it is entirely valid that you get the chance to rebut what I have said. The readers will decide.
For the record: I never wanted – and don't really want – extra hardware to do testing. I will absolutely test whatever hardware comes my way, but for the love of $deity I have 10x as much server widgetry as I could ever conceivably use. I've also not asked to be paid for reviews by you or by Nutanix. I've offered several times to do independent testing for free in order to help put this debate to rest.
What I want – all I've ever wanted – is the chance to test hardware, software and services that I think my readers or my clients (or preferably both) will care about. I want to dig to find the truth of the gear that real systems administrators use, because it is those sysadmins that I feel a kinship with, and it is those sysadmins that I feel I serve.
It is worth discussing the issues surrounding vendor control over reviews via an exercise of their legal rights. I believe it is perfectly valid for VMware to want to review the configuration and methodology of a review of their software. I don't believe, however, that they should have the opportunity to deny things just because they won't show that software in the best possible light.
It is absolutely valid to test non-optimal configurations and report the results of that testing. In the real world, lots of people live outside pre-canned, certified solutions. HCLs exist for a reason: they are a recognition of this fact and a publicly visible list of not just entire servers that are certified, but individual components, for those who are colouring outside the lines a little.
I view VMware's VSAN team as spectacularly hard to work with in a way that the rest of VMware isn't, specifically because of the level of control they insist on having over reviews. VMware's VSAN team don't seem to view their efforts as an attempt at control, but as an attempt at quality assurance and review integrity.
If I am being honest, then I cannot say that I have the answer to which view is right. My views are deeply rooted in my own past as an SMB sysadmin, which is tied to a need to know how things work when you can't afford to pay top dollar (and high margins) for everything. I feel that is a world that needs to be quantified, and I spend most of my year trying to answer those questions for other sysadmins.
VMware's views are influenced by their own needs, but I must admit their take is objectively no less valid. I think readers should read all of this. Not just this thread, but many of the other threads that are associated on various blogs across the virtualization blogosphere.
I am one voice with one set of experiences. There are other voices with other points of view. Decide for yourselves. Test for yourselves.
I look forward to using both VMware's and Nutanix's testing tools in my future HCI testing just as soon as they become generally available.