Unfortunately ...
... shifting targets mid-stream leads to a mission profile that is contrary to the build/launch profile and generally makes the kit unfit for either purpose. Mark my words, it'll all end in tiers ...
NVMe storage is becoming denser and faster than other forms of storage, and will therefore become a capacity tier, according to Cisco's chief technology officer for UCS, Raghunath Nambiar. “Right now people are looking at NVMe from a performance point of view,” Nambiar told The Register in Sydney last week, “but the real game …
Starship + UCS + NVMe + VMware + Nexus + Windows + Linux + Hyperflex storage etc...
This is a tub of rubbish, delivered with CVDs that take weeks to months to deploy.
If you have a validated design and an automation platform, then you plug it in, answer some questions, and let it rip and it's done.
Or you can buy Cisco UCS with Azure Stack, turn it on, answer a few questions, and you're running in an hour without having to pay $60,000 a blade for licenses plus the Windows tax. Or you can install Ubuntu on a VM on a laptop, point it to a UCS, and get a full OpenStack up and running with containers and automation.
Come on, guys... Microsoft and Ubuntu have nailed full data center automation, have app stores, and eliminate the need for server, storage, or network guys in the data center. TCO on HyperFlex is close to $150,000 more per blade than on Azure Stack or OpenStack. Why the hell would anyone invest so heavily in VMware, which is great for legacy... but we already have legacy sorted. Run that, and as more services move to Azure Stack or OpenStack, shut down more legacy VMware blades.
Cheesy, I love reading your comments, but you are quite wrong here. First, Azure Stack has a pitifully small CVD that is weirdly limited and dependent on MS validation. It is deliberately limited in sizes and increments.
Second, without arguing about VMware's limitations or adoption, the cost of VMW EPL per node is somewhere around $8k for 3 years. HyperFlex is ~$8k for 3 years per controller, which is not bad for 100,000 IOPS and 100TB usable. Perhaps you counted the hardware costs as well, but you can't run Azure Stack without hardware, can you?
Since compute nodes have no licensing costs most customers are fine with 3-4 controllers, setting the cost at $24-32k. That's not bad at all for storage software licensing.
Current nodes come with up to 56 cores and 3TB of RAM; in other words, a 4+4-node HX cluster can have ~450 cores, 24TB of RAM, 400TB usable, and ~400k IOPS. The HX software costs $32k; VMware costs $64k. That's peanuts compared to the cost of the RAM alone.
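To sanity-check those cluster figures, here is a quick back-of-envelope calculation using the per-node specs and per-licence prices quoted above (the prices are the commenter's approximate numbers, not official list pricing):

```python
# Back-of-envelope check of the 4+4 HX cluster figures quoted above.
# All per-node specs and licence prices are the commenter's approximations.
NODES = 8                            # 4 converged + 4 compute-only nodes
CORES_PER_NODE = 56
RAM_TB_PER_NODE = 3
CONTROLLERS = 4                      # only converged nodes carry storage licences
USABLE_TB_PER_CONTROLLER = 100
IOPS_PER_CONTROLLER = 100_000
HX_LICENCE_PER_CONTROLLER = 8_000    # ~$8k per controller for 3 years
VMW_EPL_PER_NODE = 8_000             # ~$8k per node for 3 years

cores = NODES * CORES_PER_NODE                         # 448, i.e. "~450 cores"
ram_tb = NODES * RAM_TB_PER_NODE                       # 24 TB
usable_tb = CONTROLLERS * USABLE_TB_PER_CONTROLLER     # 400 TB
iops = CONTROLLERS * IOPS_PER_CONTROLLER               # 400,000 IOPS
hx_cost = CONTROLLERS * HX_LICENCE_PER_CONTROLLER      # $32,000
vmw_cost = NODES * VMW_EPL_PER_NODE                    # $64,000

print(cores, ram_tb, usable_tb, iops, hx_cost, vmw_cost)
```

The arithmetic does line up with the figures in the comment: ~450 cores, 24TB RAM, 400TB usable, ~400k IOPS, $32k for HX, $64k for VMware.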
https://www.servethehome.com/nvme-falling-below-sata-pricing-get-ready/
If you think NVMe is only for performance and has a big price premium attached, get ready. NVMe is reaching parity with SATA drives and from what we hear, NVMe cost per GB below SATA is next. You read that right. Not only is NVMe reaching price parity with SATA, it is headed below SATA pricing in the not so distant future. In fact, we are already seeing retail examples of where this is happening today.
The article talks about using NVMe drives for just-in-time analytics. Its example of a recommender engine is a poor one.
Recommender engines will be based on batch jobs that run off-peak, local to the user, and are then used for the next 'day'. (Note that for global companies, night and day become relative.) Recommender engines do not need to provide near-real-time calculations, only the ability to retrieve the calculated value, which does not need to be precise.
More unnecessary hype, when the plain facts are enough: these drives will be disruptive in terms of overall IT performance.
There are simple recommendation systems (for example, customers who bought “War and Peace” also bought “Anna Karenina”), but there are more complex, context-based recommendation systems that require searching through large volumes of dynamically generated information to provide personalized recommendations.
@AC,
I suggest you think about it. Most of the heavy lifting is done in batch jobs, where you can compute most of the work.
Then you can do lighter calcs in near real time.
And yes, I've done this type of system (e.g. a recommender engine that also takes into consideration your geographical location at the time of the query).
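The batch-plus-light-real-time pattern described in this thread can be sketched roughly as follows. This is a hypothetical illustration only: the data, the distance-based score decay, and all names are made up for the example, not taken from any real system.

```python
# Minimal sketch of the pattern above: a nightly batch job precomputes
# candidate recommendations per user; query time only does a light
# re-rank using the user's current geographical location.
# All data and names here are hypothetical illustrations.
from math import radians, sin, cos, asin, sqrt

# Output of the hypothetical nightly batch job: candidates per user,
# each tagged with the location it is most relevant to.
BATCH_RECS = {
    "user42": [
        {"item": "umbrella", "score": 0.9, "lat": 51.5, "lon": -0.1},    # London
        {"item": "sunscreen", "score": 0.8, "lat": -33.9, "lon": 151.2}, # Sydney
    ],
}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def recommend(user, lat, lon, top_n=1):
    """Light real-time step: re-rank precomputed candidates by proximity."""
    candidates = BATCH_RECS.get(user, [])

    def adjusted(c):
        # Illustrative decay: halve the batch score per 10,000 km of distance.
        return c["score"] / (1 + haversine_km(lat, lon, c["lat"], c["lon"]) / 10_000)

    ranked = sorted(candidates, key=adjusted, reverse=True)
    return [c["item"] for c in ranked[:top_n]]

# A user querying from Sydney sees the Sydney-tagged item ranked first,
# even though its batch score is lower.
print(recommend("user42", -33.9, 151.2))  # → ['sunscreen']
```

The expensive scoring happens entirely in the batch layer; the query-time work is a lookup plus a trivial re-rank, which is why the recommendations themselves don't need the storage tier to be especially fast.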