Please define SAN
Do you mean some nonsense dedicated Fibre Channel crap, or just iSCSI and NVMe-oF?
Because with 400GbE NICs around the corner, I'm not doubling up on legacy SANs with their WWNs etc.
If you mention Storage Area Network and Logical Unit Numbers to IT professionals of a certain vintage, you are likely to encounter eye rolling and invective. The least unprintable of the epithets you'll hear may include "outdated," "pain to manage," and "an impediment to getting real work done." Are they, though? SANs are …
When older array hardware is out of support, or no longer economical to operate, you remove it. All data is live migrated between physical arrays in the pool so that apps never have to be taken down.
I've recently come across a company that literally panicked any time we did any SAN work.
Reasons? Eleven-year-old SAN kit and eleven years of complete mismanagement. (Heard of zoning, and how it's been best practice for 15 years? The previous admin team never had; it wasn't used at all, nor were VSANs, on Cisco kit.) And all the important kit booted from SAN! Or failed to, depending on weather conditions and the alignment of the planets. I still don't understand how it ever worked... kind of... We had to run a project to fix all that shit. Results? Everything fine; we can even re-cable storage, one fabric at a time, during working hours, with no impact.
SAN can be a nightmare if not managed properly, even on semi-recent kit: lots of bizarre bugs, mishaps, etc. But when managed properly, it's very effective and reliable.
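For anyone who hasn't met zoning: on Cisco kit it's only a handful of lines. A rough single-initiator-zoning sketch (the VSAN number, zone names and WWPNs below are all made up):

```
vsan database
  vsan 10 name FABRIC_A_PROD
! one zone per initiator/target pair
zone name HOST1_HBA0__ARRAY1_P0 vsan 10
  member pwwn 10:00:00:00:c9:aa:bb:01
  member pwwn 50:06:01:60:88:00:00:01
zoneset name FABRIC_A vsan 10
  member HOST1_HBA0__ARRAY1_P0
zoneset activate name FABRIC_A vsan 10
```

Without something like that, every HBA can see (and upset) every array port on the fabric, which is presumably how the kit got into the state described above.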
Anon 'cos the said company :)
Talk about "wrong on so many levels". Death of mainframes? Umm... check with IBM. They're still alive and kicking, because they are still the champs when it comes to massive data processing operations (think printing all those government checks).
NVMe is a drive interconnect technology; it doesn't compete with SANs. In fact, NVMe drives in an array serve nicely as a SAN component. Unless your goal is data corruption, you cannot share an NVMe drive among hosts without some sort of array technology, and that array connects through a SAN, no matter what type of connection you use: FC, IB, or whatever.
Oh, and in case your next question is "Why would you want to share an NVMe drive among hosts?" you might want to Google "server cluster".
Or is that too centralized for you as well?
they are still the champs when it comes to massive data processing operations
No, they're hanging on because it's difficult to eliminate legacy systems. A few modern servers can decidedly out-perform a mainframe on any workload you care to name.
NVMe drives in an array serve nicely as a SAN component
You're wasting their performance potential, and wasting money, but yes, you CAN put them in a SAN.
you cannot share an NVMe drive among hosts
That was the point. Such central storage devices will soon be unable to compete with distributed storage.
you might want to Google "server cluster".
There are many ways to cluster servers. They do not require centralized storage. Google's own servers are one such example.
Various technologies are enabling a SAN model with distributed drives, such as vSAN, DRBD, StoreVirtual, etc. However, a higher-level, NAS model can be considerably more efficient.
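As a concrete example of the distributed model: DRBD replicates a block device between two hosts over plain Ethernet, no fabric required. A minimal resource definition looks roughly like this (hostnames, devices and addresses are invented, and the exact syntax varies between DRBD versions):

```
resource r0 {
  device    /dev/drbd0;
  disk      /dev/sdb1;
  meta-disk internal;
  on alpha { address 10.0.0.1:7788; }
  on bravo { address 10.0.0.2:7788; }
}
```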
I've managed several mainframes, large clusters of servers with and without local storage, FCoE SANs, etc. What you're mistaking for my ignorance is your own short-sightedness as the world changes.
The reliability and performance of mainframes _on specific workloads_ still dramatically outpaces that which you can easily achieve on PC hardware. That's the whole point - instead of declaring a technology "dead" because someone says it is, use that technology for what it does best, and use "something else" for what "something else" does best.
Is it better to have the drives in individual servers, where the performance of a specific drive set is maximized? Or to have the drives in arrays, where sharing and other features (like replication) are available? Or a hybrid, where some storage -- e.g. system disks and local data caches -- is on the server and some is on the array? Of course, the answer is "Yes".
Again, the point is that different workloads require different solutions, and that's my objection to your original missive. Mainframes are the best choice for some. SANs are the best choice for some. Locally-attached NVMe drives are the best for some. Clustering -- with or without shared disks -- is best for some.
SANs are far from dead. Mainframes are still alive and kicking. Newer technologies _complement_ rather than replace older ones if you evaluate them on their merits instead of the latest-and-greatest spin doctor outputs.
"NVMe will blow away any SAN at any price, because the interconnect just can't be fast enough to compete. It'll be a long, slow process, but I look forward to the death of SANs, like the death of mainframes before them. Even throwing money at the problem, centralization just can't compete."
I hate to break it to you, but SANs are here to stay. They will only adapt to NVMe by running NVMe over FC.
https://standards.incits.org/apps/group_public/project/details.php?project_id=1705 amongst many other papers.
I've been reading this back-and-forth with interest. It does not appear that you are using the term "NVMe" correctly.
Namely, to wit: "NVMe speeds blow away the fastest interconnects FC has to offer, so you're paying a lot of extra money to get that SAN bottleneck."
NVMe itself doesn't have any speed separate from its transport. Whether it be PCIe, InfiniBand, FC, RoCE, iWARP, NVMe/TCP - all of these are going to have a profound impact on the relative latency and performance envelope for any NVMe solution.
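You can see that transport-agnosticism directly in nvme-cli: the same subsystem is reached over whichever transport you pass with `-t`. (Addresses and the NQN below are invented, and these commands need an actual NVMe-oF target to do anything.)

```shell
# discover NVMe-oF targets over TCP
nvme discover -t tcp -a 10.0.0.5 -s 4420

# connect to a subsystem over TCP...
nvme connect -t tcp -n nqn.2019-01.org.example:subsys1 -a 10.0.0.5 -s 4420

# ...or over RDMA: only the transport flag changes
nvme connect -t rdma -n nqn.2019-01.org.example:subsys1 -a 10.0.0.5 -s 4420
```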
NVMe itself is completely agnostic to its underlying transport layer. That is by design. In fact, as we (the NVM Express organization) began refactoring the specification for 2.0, we noted that there are some table stakes that exist regardless of transport, so we are making it cleaner and easier to develop to that type of architecture.
It sounds as if you are confusing NVMe with PCIe, which is without question the most latency-performant transport for the protocol. However, PCIe fails in terms of scalability. What SANs (assuming you are referring to FC SANs here) provide is scalability and consistency at scale. That is, your 10th node is going to perform the same as your 10,000th node. Other transports cannot even get to 10k nodes, much less deliver predictable performance at that level.
When you are talking about performance at the sub-100µs level, your big concern is not latency, it's jitter. The *variability* in latency is where you are going to get into big problems. That's where the hero numbers fall apart, because the +/- variability is rarely (if ever) mentioned.
So, while the hero numbers do show varying degrees of point-to-point latency, the question will always need to be addressed: What's it like in production?
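To make the jitter point concrete, here's a toy sketch (made-up numbers, not measurements from any real transport): two sets of latency samples with an identical 100µs mean, where only the variability differs. The mean tells you nothing about the tail.

```python
import statistics

# Hypothetical latency samples in microseconds.
# Both transports average exactly 100us; only the jitter differs.
steady  = [98, 101, 99, 100, 102, 100, 99, 101, 100, 100]
jittery = [40, 60, 50, 300, 45, 55, 250, 48, 52, 100]

def summarize(samples):
    """Return (mean, standard deviation, p99) for a list of latencies."""
    s = sorted(samples)
    mean = statistics.mean(s)
    stdev = statistics.stdev(s)
    p99 = s[min(len(s) - 1, int(0.99 * len(s)))]
    return mean, stdev, p99

for name, samples in (("steady", steady), ("jittery", jittery)):
    mean, stdev, p99 = summarize(samples)
    print(f"{name:8s} mean={mean:6.1f}us  stdev={stdev:6.1f}us  p99={p99:4d}us")
```

In production it's the p99 (and worse) column that pages you at 3am, not the mean.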
Either way, however, NVMe will inherit every advantage and disadvantage of its underlying transport, no matter how good or how bad, and does not compensate in any way for either.
Biting the hand that feeds IT © 1998–2019