* Posts by Clockspeedy

5 posts • joined 21 Apr 2012

Nutanix consciously uncouples with move into software-only sales


So here's the thing...

For more than 15 years now I have been making the observation that for any new technology, the immature (stupid) phase is the ‘appliance’ phase. When was the last time any of us asked for a ‘NAS appliance’? (not counting the sub-$1,000 market on Amazon) 

The mature phase of NAS happened when the software stack was disaggregated from the hardware, and historically, disaggregation ALWAYS wins in the end. Like the old saying goes, customers want to ‘date’ their hardware vendors but want to ‘marry’ their software vendors. Once the ‘gee-whiz’ factor wore off and customers realized that NAS is a relatively simple software stack that runs on any server, nobody liked the hardware lock-in that the NAS appliance represents.

We’re seeing this play out again in the crashing-and-burning of Nutanix, the poster-child for the 'appliantization' of HCI.

Now, as customers are figuring out hyper-converged technology, realizing that it's JUST SOFTWARE that runs on ANY hardware, here comes a desperate Nutanix with a software-only stack…

The good news for customers is that the market preference for the disaggregated HCI model is clearly starting to emerge. It would seem that the companies that have been selling the disaggregated model of HCI all along are in the catbird seat.

Told you so ;-)


DataCore scores fastest ever SPC-1 response times. Yep, a benchmark


When you're hot, you're hot... (was Re: NetApp comments -- nothin' but FUD, dispelled here...)

...and when you're not, you're not.

Dimitri, if adding 2+2 and coming up with 3.14159265... were all there was to math, you'd be a genius.

Every real-world application has relative IO intensities that are associated with the application's IO stream requirements. SPC-1 models this with a pretty fair degree of accuracy. Your thinking resembles the ancient logic of IOmeter -- which we all know is pretty much useless.

SPC-1 models these relative IO stream intensities accurately, more than any other available benchmark. If you can suggest a better benchmark for customers to use, please do.

Your employer (on the other hand) seems to think SPC-1 is pretty good. I understand Nimble's Mr. Daniel is a big fan of SPC-1...have you talked to him?

"Naturally, I’d love to see IT architects simplify their purchasing problems by requiring “SPC-IOPS”, not merely “IOPS” in their requests for proposals (RFPs)."



Million-plus IOPS: Kaminario smashes IBM in DRAM decimation


Too funny...

I was wondering how Kaminario could possibly have come up with an 'architecture' that makes DRAM go thousands of times slower than it should. 3.4 milliseconds? That's ridiculous.

I looked at the benchmark docs...this thing is nothing but a rack full of forty-seven Dell server blades (the cheapest available) connected to an FC SAN, with RAMdisk software, software RAID and an APC UPS to make it "non-volatile". Oh yeah, and a 1,900% markup piled on for good measure.

No, really. I'm not kidding.


So what? Anyone can slap this together with off-the-shelf parts in an hour...


Take an Intel E7 box off the shelf (the IBM x3850 X5 is nice), load it with DRAM and RAMdisk software (e.g. SuperSpeed) and voilà, you have up to 3TB of DRAM-speed disk...from a real vendor...with real support...for a third of the price.

FYI, this is what SAP uses for their HANA in-memory database.
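You can sketch the same idea on a stock Linux box without any special software at all, since tmpfs (mounted at /dev/shm on most distributions) already exposes DRAM as a filesystem. The file name and size below are arbitrary choices for the demo; a commercial RAMdisk product like the one above layers a proper block-device driver and UPS-backed persistence on top of this same basic trick:

```shell
# Minimal sketch, assuming a Linux box with tmpfs mounted at /dev/shm
# (standard on most distributions; no root required).
RAMFILE=/dev/shm/ramdisk-demo.img   # arbitrary name for this demo

# Carve out 64 MB of RAM-backed storage.
dd if=/dev/zero of="$RAMFILE" bs=1M count=64 status=none

# Reads and writes to $RAMFILE now hit DRAM, not spinning disk.
ls -lh "$RAMFILE"

# Clean up.
rm -f "$RAMFILE"
```

With root, `losetup` can even turn that file into a real block device you can put a filesystem on -- which, at rack scale and with a UPS bolted on, is essentially what the benchmarked rig is doing.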



WTF is... scale-in?


Explanation of Scale-In

Scale-in relates to elasticity. It's a concept that's been around for a few years, and it's actually quite relevant to IBM's new systems, although not at all in the way their marketing folks have spun it.




Tinyurl for IBM link:


And there's a very nice picture here (Fig. 5):




Biting the hand that feeds IT © 1998–2017