6ms+ writes with NVMe
The only way to achieve such abysmal write latency is to have the world’s slowest data protection algos.
Are they sending the acks via carrier pigeon?
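Napkin math, with an assumed round number for the raw device (the 6ms figure is the one quoted; everything else is my guess):

```python
# Napkin math: how much of a 6 ms write is the device, and how much is overhead.
# RAW_NVME_WRITE_US is an assumed, generous figure for a raw NVMe flash write.
RAW_NVME_WRITE_US = 100      # ~100 microseconds, worst case for a decent drive
REPORTED_WRITE_US = 6_000    # the 6 ms+ number quoted above

overhead = REPORTED_WRITE_US - RAW_NVME_WRITE_US
print(f"Overhead: {overhead} us, i.e. the data path is "
      f"{REPORTED_WRITE_US / RAW_NVME_WRITE_US:.0f}x slower than the raw device")
```

Sixty times the raw device latency has to come from somewhere, and it isn't the flash.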
What a complete bullshit story.
Where in the world do a signed purchase order and invoice change their value? FOREX fluctuations are completely irrelevant to you buying from Cisco or Arista or Juniper; that's something your procurement guys and the CFO worry about.
You sign the PO, that price gets invoiced according to the T&C after the merchandise ships, then you pay within 30 days or whatever your payment terms are. Alternatively, you have some sort of financing in place to spread the payments over time.
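To put numbers on it (all made up, pick your own currency pair):

```python
# Toy example: the signed PO fixes the invoice amount; FX moves afterwards only
# change what that amount costs you in your local currency. Numbers are made up.
po_price_usd = 1_200_000     # price on the signed PO, in the invoice currency
fx_at_po = 1.10              # USD per EUR when the PO was signed (assumed)
fx_at_payment = 1.18         # USD per EUR on day 30 (assumed)

invoice_usd = po_price_usd   # invoiced per T&C; the PO value never moves
print(f"Invoice: ${invoice_usd:,} at signing, ${invoice_usd:,} at payment")
print(f"Local cost: EUR {po_price_usd / fx_at_po:,.0f} -> "
      f"EUR {po_price_usd / fx_at_payment:,.0f} (treasury's problem, not the vendor's)")
```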
And LOL A BEEEEEELLION DOLLARS
NVMe has no SAS stack and hence no RAID. You have a bunch of individual devices which are very, very fast. The usual limitation is 10 per server due to lack of PCIe lanes.
I need to take a closer look at this box and see how they upped the number of available lanes for a 2-socket system; if they didn't, it will be a world of pain.
This box needs a fast SDS layer to control all those individual devices, VSAN and Nutanix for example.
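For reference, the lane math behind that "10 per server" limit, using typical (assumed) Xeon numbers rather than this box's actual spec:

```python
# Rough PCIe lane budget for a 2-socket server. Lane counts are typical
# Xeon Scalable figures and the reservation is an assumption, not this box's spec.
LANES_PER_SOCKET = 48    # per-CPU PCIe lanes on a typical Xeon Scalable part
SOCKETS = 2
LANES_PER_NVME = 4       # each NVMe drive wants an x4 link
RESERVED = 56            # NICs, boot device, chipset uplink, slots, etc. (assumed)

free_lanes = LANES_PER_SOCKET * SOCKETS - RESERVED
print(f"{free_lanes} lanes free -> {free_lanes // LANES_PER_NVME} NVMe drives at full x4")
```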
Fair enough. The issue is who carries the burden of support and code maintenance. Is it *you* writing and maintaining drivers for Linux, Windows, Solaris, AIX, VMware, KVM, etc. (in all their glorious versions and patch levels), multiplied by all possible array models and versions, or is it those vendors writing and maintaining their code to adhere to this standard? The difference is monumental.
And yes, it's true, TCP is everywhere. But NVMe's biggest advantage is getting rid of the SAS stack. Why would I introduce the even crappier TCP stack unless it's just for tier 2 and 3? And then why use NVMe at all?
Right. I am totally going to install a KERNEL EXTENSION from some small outfit. Into all my tier 0 and tier 1 app servers; change management will wave this through, no problemo.
And then, instead of a standards-based architecture (RoCE) which is actively supported and maintained by all my vendors (Oracle, Red Hat, Microsoft, VMware, Dell, Pure, and Cisco), I would have to rely on these guys to maintain code and drivers in a stable and timely fashion for all of the above. Yeah, right.
The reason for custom firmware, or rather validated and certified firmware, is interoperability. No one really cares if your desktop PC goes on the fritz and you lose your photo collection.
If that happens to a corporation, someone will be held liable, with public egg on face to boot.
What I find more interesting lately is that Cisco is apparently really testing the crap out of their component suppliers and finding all these bugs. Someone mentioned below that this is an Intel bug, so not only kudos to Cisco for finding it, but more importantly: shouldn't Dell and HPE, with their much larger server orgs, be the ones catching it?
What's the point of this article? Is this some sort of fncked-up gotcha journalism? What exactly is being concealed? It was done relatively quietly, without a big press release or false virtue signaling, and then someone thought the wording could be better and changed it.
What exactly are you implying?
Cisco took the high road and did the right thing. Instead of reinforcing good behavior, and I think we can all freely admit this is exactly what should be done to force Google to finally police the cesspit that is YouTube, you guys are piling on the first major company that dared to do it. Are you out of your minds?
I am appalled by your behavior. You should be ashamed of yourself, Mr Sharwood. Have you no decency?
MDS Diagnostics is Virtual Instruments, OEMed into Cisco MDS switches. It doesn't require a TAP and SAN rover, which is a huge deal.
We ran a massive array of VI instrumentation across all our VMAX arrays and it's a storage admin's dream. We could pinpoint individual FC frames and IOs, correlate them with DB IO, and determine application performance on the SAN in real time. Cache hits, cache misses; we could identify misconfigured RAID sets just by looking at FC frame statistics. Ridiculously good stuff.
If I understand correctly, it's now available in all 16G and 32G MDS switches.
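To give a flavor of what that looks like in practice, here's a minimal sketch of the kind of outlier check we ran, with hypothetical per-LUN completion times standing in for what the probes actually report:

```python
import statistics

# Hypothetical per-LUN FC exchange completion times in ms, standing in for
# the per-frame statistics a VI probe reports.
samples = {
    "lun_01": [0.4, 0.5, 0.6, 0.4, 0.5],
    "lun_02": [0.5, 0.4, 0.6, 0.5, 0.4],
    "lun_03": [4.8, 5.1, 4.9, 5.3, 5.0],  # the misconfigured RAID set sticks out
}

fleet = statistics.median(t for ts in samples.values() for t in ts)
for lun, ts in samples.items():
    med = statistics.median(ts)
    if med > 3 * fleet:
        print(f"{lun}: median {med:.1f} ms vs fleet {fleet:.1f} ms - check the RAID config")
```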
"NetApp on the other, who finally figured out how to get scale-out clustering and all-flash working"
Oh man, this statement really bends the truth lol
First, it took NetApp only FOURTEEN YEARS to finally get Spinnaker working. Congrats, I guess?
Second, "working". Working it does, but scale-out it is not. It's a federation, not scale-out.
No dog in this fight, but $500m in the bank? Not really. I was looking to buy some stock as a long-term investment and have been following their fundamentals for quite a while.
It's a $500m line of credit to cover negative cash flow, and it gets disbursed quarterly. Unfortunately, their financial health is atrocious and unsustainable. The company is profitable only if stock payments and liabilities magically disappear, which they won't and can't. This is the same problem Snapchat had and has; GAAP accounting always catches up with you. Uber they are not.
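The arithmetic is brutally simple (numbers made up for illustration):

```python
# Made-up numbers showing why "profitable if stock comp disappears" isn't profitable.
revenue = 1_000                  # $m, illustrative
cash_opex = 900                  # $m, illustrative
stock_based_comp = 180           # $m, a real GAAP expense however you slice it

non_gaap = revenue - cash_opex                    # the investor-deck number
gaap = revenue - cash_opex - stock_based_comp     # the number that catches up with you
print(f"non-GAAP: {non_gaap:+} $m, GAAP: {gaap:+} $m")
```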
I really like how much they disrupt the status quo and speed up innovation even if I disagree with the style due to my advanced age. But at the same time I feel there is an almost cultlike following which ignores reality and facts. This won’t end well.
"We have more than enough CPU to run things inefficiently and not care about it if it means we can cut a network admin."
That's a surefire way to have a massive network outage.
SDN does not make network admins obsolete; it only means that admins require more architectural skills than before, when flying by the seat of your pants was kind of OK.
No one is denying that NetApp and others are growing. There is also definitely a decline in VMAX and XtremIO. But ponder this:
VSAN/VxRail teams are cannibalizing Unity by snatching up VNX refreshes. NetApp does not have this problem yet but might once SolidFire HCI starts replacing FAS.
As of the time of that IDC report, HPE hadn't pushed SimpliVity into their 3PAR, EVA, Nimble, and MSA base yet, but I am hearing that changes in the compensation structure are driving exactly that behavior.
I agree on all points except the continued growth for the others. I am convinced we will see the same erosion EMC is experiencing in the other vendors' installed bases, because lower-revenue HCI solutions are replacing legacy SAN/NAS.
This is actually quite impressive, who would have thought?
For what it's worth, the chatter I am hearing (from Pure reps and SEs as well as channel) is that they are -very- successful in healthcare. The Pure SEs are also bragging about a large number of VxRail VDI implementations which they fixed by adding a FlashArray.
I fondly remember deploying ScaleIO on 500 t2.micro EC2 instances and (jokingly) needling my EMC rep about why my VMAX AF couldn't keep up with the IOPS.
But a bit more seriously, does Dell have ARM servers? One of the great advantages of ScaleIO is/was its capability to run on ARM. This seems to be gone now.
Bullshit. During layoffs, the most talented leave on their own because they have options. Everyone wants to change jobs on their own terms, not by decree of some anonymous director/VP far away.
Case in point, in my region all the talented Nimble sales reps and systems engineers have already left.
I was engaged with Scality in a number of opportunities over the years. For a while they kept ignoring us because our HPE relationship is lukewarm at best. We worked a lot with SwiftStack and lately also with Cloudian.
Frankly, we saw Scality as a soon-to-be-acquired-by-HPE company and didn't want to bring them into our accounts, as we feared we would suddenly have an HPE footprint. To us it also looked like they were only working with HPE reps on HPE opportunities. Then, suddenly, a year or so ago we noticed them knocking at our door (we are a large regional SI/VAR), which always struck us as weird. All their enablement decks were still full of HPE products, logos, success stories, etc.
Unfortunately for them, we don't trust them anymore.
Don't know, but the data centers I see have a lot of UCS. Walk down any of the Switch NAPs and the larger sections are mostly commodity or UCS (still plenty of Dell, HP not so much). Of course, I've only walked a few NAPs and SuperNAPs; for all I know the other NAPs contain a million sq ft of Dell and HPE all stacked 60RU high.
It's a total mess. I've read the complete report now (thank you Nutanix for spamming my inbox) and pinged two buddies at VCE/Dell and Cisco. Quite some interesting opinions.
The software-only requirement is completely ludicrous, IMHO. One poster below already said that this should disqualify HPE SimpliVity. I would add that it should also disqualify VxRack (requires high-performance Nexus 3000 switches) and theoretically also Cisco UCS, as it requires UCS fabric interconnects (a high-speed network fabric).
But if we accept that as a criterion, then we also have to disqualify Nutanix. While their new "Turbo" feature is theoretically available for all platforms, it only makes sense with RDMA, and for that you have to use specific Mellanox or Arista switches and RDMA adapters.
This "hardware independent" requirement is so 2016, to be honest. After years of general-purpose computing we are seeing more and more specialization in hardware, especially for encryption, compression, and low-latency data transport. We will have vendors like Nutanix who will try to do everything in software (because they have to) and vendors like Dell, HPE, and Cisco adding custom ASICs and FPGAs (or specialized merchant silicon from the likes of Nvidia and Qualcomm). Perhaps SimpliVity was just ahead of its time.
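And if you want to check whether a given node can even take the RDMA path, here's a rough Linux-side check (the sysfs location is standard; whether your stack actually uses the adapter is another matter):

```python
import os

# RDMA-capable adapters (Mellanox et al.) register under /sys/class/infiniband
# on Linux. No entries there means no RDMA, and "Turbo"-style features fall
# back to plain TCP on this node.
RDMA_SYSFS = "/sys/class/infiniband"

devices = os.listdir(RDMA_SYSFS) if os.path.isdir(RDMA_SYSFS) else []
if devices:
    print("RDMA devices:", ", ".join(sorted(devices)))
else:
    print("No RDMA devices found; software-only it is, at TCP speeds")
```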
Worse, he assumes that companies won't update for 10 years. Literally no one does that. The longest you see is in infrastructure software like VMware, where there are still idiots running 5.0 because they can't be assed to upgrade to 5.5 or 6.x.
When it comes to development platforms, especially containers or anything else cloud native, 9 months is like 5 years. This fellow definitely does not grok the concept of microservices.
You have obviously never deployed S2D, which is a complete and utter disaster. So much so that MSFT has pulled ALL validated designs and is forcing OEMs to submit plans for S2D Ready Nodes. They want the OEMs to hire headcount for that as well. So this whole initiative is DOA, buddy.
Thanks for playing.