Just two months into the job and IBM's newest storage general manager Ambuj Goyal is putting his stamp on the business. He told El Reg that Big Blue plans to move all transaction data away from disk to all-flash arrays; that he's not that keen on object storage; and that he envisages an IBM that sells "less storage". He gave …
This sounds like the mainframe problem again. It's a story of more, faster arrays, but don't change anything else.
I sort of get a feeling of deja vu. That was the story that led IBM to back the mainframe while the rest of the world moved on.
Re: Mainframes again?
Check the IBM share price vs, say, HP. The z Series has worked out amazingly well ever since the death of the mainframe was proclaimed.
So long, SAN
SAN was in any case a fad that delivered storage at exorbitant cost. And if a completely separate network fabric wasn't expensive enough, the stakes were raised again with even more layered implementations like SVC. Over time we found that SAN servers (like some data warehouse systems) were nothing more than Linux servers with nice management software, and then NAND flash storage entered the market. The combination of the two was a game changer: we could use direct-attached storage again, which we could cluster, thin-provision, replicate etc. using the same Xeon chips we were already using to virtualise our Linux servers. We used the same 10GEth network between the physical servers and saw no performance degradation, as the storage was direct-attached and barely hit the LAN. Only storage replication and backup traffic did, and that was there before the reintegration anyway. Flash storage also called the existence of SATA and SAS into question, as the NAND chips could be put on a PCIe card, leading to further simplification and acceleration. No more separate support and maintenance costs for one component of the infrastructure.
With that, we asked, what was SAN anyway? A marketing fad that successfully made one component more expensive than it should be?
Re: So long, SAN
I thought a SAN was there to consolidate storage -- DAS has traditionally carried with it very low utilization rates.
On top of that, the emergence of virtualization and the ability to move VMs between physical servers online has required shared storage of some kind for a while. Relatively recently some folks have come up with ways to do that with DAS -- though you still have the availability problem: you can't migrate the VM in the event the host that VM is on is down. So you're back to good HA shared storage (of course not all shared storage is created equal) for the more mission-critical things at least. Not only that, but there is a massive amount of overhead if you're having to transmit the data of a VM from one host to another.
SANs also offer things like snapshots -- for me I use this feature a lot: snapshot a LUN (e.g. a database), present a writable version of that LUN to another host (or another VM -- as opposed to, say, VMware's own snapshots, which I find mostly useless) so it can do things with it. The snapshot consumes almost no space (only deltas, of course), so it provides a valuable means to improve data flexibility vs doing full-on replication. I can wipe the snapshot out and refresh it from the master LUN in a matter of say 90 seconds (95% of that time spent doing host-based tasks to ensure consistency).
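The snapshot-then-present workflow described above can be sketched with Linux LVM standing in for an array-level snapshot. This is only an illustration of the pattern, not the commenter's actual array: the volume group (`vg0`), LV (`dblun`) and backstore names are made up, and the export step assumes the targetcli/LIO stack.

```shell
# Hypothetical names throughout: vg0, dblun, dblun_snap, snaplun.
# Assumes root and an existing LVM volume group.

# Host-side consistency step first (e.g. quiesce/flush the database,
# or fsfreeze the filesystem) -- the bulk of the 90 seconds.

# Take a writable, copy-on-write snapshot; it stores only deltas,
# so 10G of delta space may cover a much larger master LUN.
lvcreate -L 10G -s -n dblun_snap /dev/vg0/dblun

# Present the snapshot to another host as a block device over iSCSI
# so that host can mount and work with the copy independently.
targetcli /backstores/block create snaplun /dev/vg0/dblun_snap

# Refresh cycle: drop the snapshot and re-take it from the master.
lvremove -y vg0/dblun_snap
lvcreate -L 10G -s -n dblun_snap /dev/vg0/dblun
```

On a real array the snapshot and the LUN mapping are single management operations, but the shape of the workflow -- quiesce, snap, present, later discard and refresh -- is the same.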
Being able to re-stripe existing data online (w/o application impact) over new resources is also a feature not found in the vast majority of DAS platforms. Nor is it available in, say, Linux LVM (last I checked).
Certainly the idea of DAS has returned to some extent, especially with the crappy cloud providers -- the whole concept of everything being throwaway has come back for some. But for many others their applications are not designed to handle that (and I don't see that changing for the vast majority of cases -- most of the time it is vastly cheaper and much simpler to solve a problem in infrastructure than in application architecture), so they need higher reliability, and that often means SAN.
Some folks can even turn DAS into a makeshift SAN, though at the end of the day I'd still consider that a SAN - even if it's a shitty one.
There is certainly a place for DAS - it is most useful in situations where you have good knowledge of what the workload is, and have a predictable growth pattern. Or if you are using leading edge applications that handle fault tolerance and the like at the application layer.
For the rest of folks - the more traditional HA storage arrays are here to stay for some time to come (at least a decade I'd wager).
Re: So long, SAN
SAN is for consolidating storage. You'd have to read tech articles from the early 2000s, when the concept was first floated.
Hell, even my makeshift home SAN served its purpose for a time. I was able to just plug HDDs into my NSLU2, make it an iSCSI target and then expose a couple of LUNs to my PCs. Instead of growing the PCs' HDDs, I would simply expand the LUNs themselves and add more HDDs to the NSLU2 when needed. Unfortunately, I also overclocked the NSLU2 and it died a horrible overheating death sometime around 2011. But while it was running I got a lot out of it, especially on the PCs that couldn't cope with the larger, TB-range HDDs.
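A rough sketch of that kind of home iSCSI setup on a modern Linux box, using tgtd on the target side and open-iscsi on the PC side. The device path, IQN and hostname here are illustrative, not the commenter's actual NSLU2 configuration (which would have used whatever target software its firmware shipped):

```shell
# On the "NAS" (hypothetical device /dev/sdb, hypothetical IQN).
# Assumes root and a running tgtd daemon.
tgtadm --lld iscsi --op new --mode target --tid 1 \
       -T iqn.2013-01.home.nas:store1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
       -b /dev/sdb
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

# On the PC: discover the target and log in; the LUN then appears
# as a local block device. Growing it later is done on the NAS side,
# followed by a filesystem resize on the PC -- no disk swap needed.
iscsiadm -m discovery -t sendtargets -p nas.home:3260
iscsiadm -m node -T iqn.2013-01.home.nas:store1 -p nas.home:3260 --login
```

The `-I ALL` binding accepts any initiator, which is fine on a home LAN but not something you'd do on a shared network.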
Re: So long, SAN
"...massive amount of overhead if your having to transmit the data of a VM from one host to another." - replication of storage and servers is still required when you deploy a SAN. That's when you buy a second SAN, put it in a second data room/centre and replicate using Data Guard or SRDF. The overhead is already there; I just don't see the need for a second fibre network when we have 10GEth already.
"crappy cloud providers" - I agree with you here: the cloud providers, albeit hideously expensive, owe their business success only to departments not getting any new IT resources from their CIO. The business has to continue, so the stationery budget gets burned on buying some cloud servers.
Yes, SAN had its advantages 10 years ago. But when I see internal cost charging like £1/day for 1GB of Tier 1 storage, this simply cannot stand. A 300MB MS Exchange inbox is just pathetic. Today we have 2/3/4TB hard drives, and it is easier than ever to buy high-density, modular systems. And if you compare the cost of a £500K SAN system with a £60K piggyback that adds HDD/SSD storage to the existing Xeon servers, then SAN has had its day.
So, by the way, has the network, though you don't want to say that too loud unless you want 500 downvotes: when you have 60 - 200 VMs in one physical server, you have a great deal of the network inside the box. And it's faster, too, as it transmits packets via memory. So the router sits only between the physical machines, for which you use your 10GEth or EtherChannel. And with that you have enough bandwidth for your DR replication.