What's going on with Oracle storage? Besides tape, which seems to be doing fine, does the IT giant have a long-term, viable external storage array product line at all? Oracle's general hardware business is in decline. Witness the chart below showing quarterly overall sales including hardware product revenues; storage sales …
"Fourthly, Oracle needs a storage array roadmap that responds to a gradual but growing migration of enterprise's stored data into the cloud, which will include some primary data, thus reducing the market for shared storage arrays. "
Wouldn't this beg the question of what "the cloud" actually stores stuff on, and shouldn't Oracle get in on that market?
Re: Paul Re: So?
"Wouldn't this beg the question what "the cloud" actually stores stuff on and get in on that market?" Well, I think you're only thinking of the external cloud (like AWS), whereas you can also have a private cloud (cloud, but inside your own environment) and a hybrid cloud (very nasty, as it means you have to bridge between your environment and an external cloud supplier). So really three markets.
Why does Oracle need storage
I guess the question here is whether Oracle needs to get involved in storage at all. Why is it assumed that every hardware vendor has to cover all (or most) of the bases? Why couldn't Oracle just stick with servers? Sun pretty much just rebadged other people's arrays, so maybe Oracle could do the same?
I'm not sure that 'the cloud' is actually something that is going to grow as some expect. Internal, company clouds, maybe. General, shared clouds, for some users yes. For many, no. Too many legal and other difficulties.
The recent Oracle server announcements are interesting, but I'm not sure they're groundbreaking. Maybe just bringing them into contention again, but I don't think it puts them ahead. Also, it's somewhat confusing as the UK Oracle website is showing the Fujitsu servers, as well as T5-x and M5-x servers!! What's the strategy?
Re: Mad Mike Re: Why does Oracle need storage
Well, if you believe the Oracle hype, it's so they can build a better, more tightly integrated stack. It's also one of the few areas of hardware that is growing and looks to keep on growing. A more pertinent question would be why does Oracle think it needs to be in servers?
Re: Mad Mike Why does Oracle need storage
I agree. Data keeps on growing and it might be the storage marketplace is better suited to producing profits. Is the server marketplace offering the same profit potential? Don't know. However, my point still stands. Why does a hardware manufacturer need to be in all hardware markets as the article suggests?
Re: Why does Oracle need storage
Oracle can't just rebadge other people's hardware for two reasons:
1. They already burned their bridges with HDS and LSI (now NetApp) when they terminated their OEM relationships.
2. They spent hundreds of millions of dollars on Pillar. They aren't going to just bail on the technology. I mean... they could, but Larry might finally be shown the door if they did, when all his shareholders realized he was just bailing out his personal bank account.
Re: Why does Oracle need storage
One thing you learn quite early in business is that bridges are rarely burnt. At the end of the day, if money can be made, HDS and LSI will come to the table. Yes, they might smart over Oracle's previous dealings with them, but if there's profit in a deal, they'll be there. It takes a very strong leader to refuse to do profitable business for some perceived moral or personal reason. Quite often they don't last that long either, as their companies' shareholders generally don't like turning away profitable business.
Re: Why does Oracle need storage
Contrary to popular belief, they never terminated their OEM partnership with NetApp. They still sell the 2500-M, though you'll probably pay a heck of a lot more for it than you would if you bought it from Dell and others. Which is the real problem with Oracle storage: it costs too much.
Oracle's Plan 'B'
Must be Exadata.
They seem to be pushing this as a solution to getting rid of a number of Unix boxes.
Re: Oracle's Plan 'B'
Try getting some 'engineered systems' techies and Sparc techies in the same room. Ask a few pertinent questions and watch them try to justify their own systems without directly criticising the others!! It's great fun. In truth, Oracle don't really know what their direction is. The engineered systems people will tell you their product is the best for everything, but the Sparc people (and disk people etc.) will claim theirs is the best for a lot.
In truth, they both have a place. Also, if Oracle are going to start building accelerators into their chips (can only be Sparc), then Sparc will have to start appearing in engineered systems. They don't tell you that!! In truth, I suspect the only reason Oracle has kept Sparc is that they need a chip they can control to add this sort of function and tie companies into their technology stack.
Re: Oracle's Plan 'B'
They already have SPARC as an engineered system. It's called the SPARC SuperCluster, and it can be used for general purpose work, run Exadata software (and by default it even comes with the storage nodes for it), run Exalogic software, or some combination of them all.
And yes, chances are highly likely they'll push the SPARC side more heavily once accelerators specific to their apps are added into the CPU. I also expect this will start to happen once they add one of the newer SPARC processors into the SuperCluster.
Re: Oracle's Plan 'B'
If they're really serious about accelerators in the chip, this is their only option. Intel isn't going to oblige them, and anyway, everyone else would have access to the same, so no lock-in. I'm sure Sparc will become far more common in their engineered systems. However, this will be an interesting change of trend in the industry. For ages, everything has moved towards using general purpose hardware and putting the effort into the software. Hardware became simply a commodity. If Oracle do this and others respond, hardware will become as important as the software once more, as the accelerators etc. built in become more and more important. IBM could add some to Power and do the same thing.
Re: Oracle's Plan 'B'
Power already has a bunch of accelerators for crypto, compression, variable off-loading, and the like.
It supports NFS, CIFS, iSCSI and FC... It uses Solaris' COMSTAR which supports all those protocols and more
Might be, but COMSTAR ain't what it used to be since they lost Terra to the Word of Blake...
I don't know who wrote this article. We use our ZFSSAs for FC, but it supports NFS, CIFS, and iSCSI just fine as you said, as well as InfiniBand. On price, we tested several other storage vendors, and in ease of use and performance it beat or was on a par with everything, including the VNX line, and it was cheaper. Not to mention that if you're using it with Oracle DB and ASM, you can get Hybrid Columnar Compression, which is only available on Exadata or Oracle DB with a ZFSSA.
Yes, but HCC primarily benefits OLAP or other data that arrives in bulk loads with common columnar data, not OLTP or unstructured... which would be a bunch of random inserts or reads.
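The reason columnar compression favours bulk-loaded, low-cardinality data can be shown with a toy experiment: compress the same hypothetical table stored row-by-row versus column-by-column. All the data here is made up for illustration; this is a sketch of the general principle, not of HCC's actual on-disk format.

```python
import random
import zlib

random.seed(0)

# Hypothetical table: 10,000 rows of (customer_id, country, status).
# The last two columns have very few distinct values.
rows = [(random.randrange(10**6),
         random.choice(["UK", "US", "DE"]),
         random.choice(["active", "closed"]))
        for _ in range(10_000)]

# Row-oriented layout: values from different columns are interleaved.
row_major = "".join(f"{i},{c},{s};" for i, c, s in rows).encode()

# Column-oriented layout: each column's values stored contiguously,
# so the repetitive country/status values sit next to each other.
col_major = ("".join(str(i) for i, _, _ in rows) +
             "".join(c for _, c, _ in rows) +
             "".join(s for _, _, s in rows)).encode()

print("row-major compressed:", len(zlib.compress(row_major)))
print("col-major compressed:", len(zlib.compress(col_major)))
```

The column-major layout compresses noticeably smaller, which is why bulk-loaded OLAP data benefits while random OLTP inserts (which arrive one row at a time) do not.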
Buy a better future
Some of the most recent TPC/VMware benchmarks run on Oracle databases and Cisco, Dell, Fujitsu and HP servers with Violin storage. Why doesn't Oracle just buy Violin?
The biggest problem we had...
...when we looked at Oracle storage wasn't the technology so much as it was the software, the packaging and, to some extent, the marketing. As far as the marketing goes, the ZFS Storage Appliance IS a unified storage offering. You can add 8Gb FC HBAs to the appliances, connect them to a SAN fabric and provision LUNs on top of ZFS. You can provision LUNs through iSCSI, or you can provision network shares through NFS or CIFS. You know, kind of like NetApp with its FAS heads. Those are pretty good, right? They should be bragging about their protocol support: 8Gb FCP, 1Gb/10Gb iSCSI, IP and RDMA over InfiniBand, NFS, CIFS, HTTP, WebDAV, FTP. Yeah, it does those.
The packaging problem that I see is the entry system, the 7120, doesn't support a cluster option. It's a single head system. But it supports 177TB of capacity. NetApp's FAS2220 entry system will grow up to 180TB, yet you get HA with the system. Those capacity levels call for HA and make them attractive to businesses who are outgrowing entry networked storage systems (QNAPs and DLINKs and junk). Sure, the FAS2220 doesn't support FC, but the FAS2240 does, and it's still cheaper than the 7120 as of the last series of quotes I received. If the 7120 were built on a shelf unit like the Supermicro two node units built into a chassis for HA with SAS-connected drives out front, I think they'd do better with this product.
In the midrange, the product starts looking a bit better, especially for Oracle shops cashing in on Hybrid Columnar Compression. It also does inline deduplication and compression, snaps and clones, local and remote replication, SSD read/write acceleration, phone home capability, various RAID options along with data/metadata checksum verification, etc. High availability is an option but it's a no-brainer when you're talking about 432TB of data, at least in my opinion. Two nodes should offer enough front end ports and performance to handle that kind of capacity, so this one seems to hit a sweet spot.
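The inline deduplication mentioned above boils down to hashing each block as it is written and storing identical blocks only once. A minimal sketch of the idea (not ZFS's actual implementation, which works at the pool level with configurable checksums):

```python
import hashlib

def dedup_store(data: bytes, block_size: int = 4096):
    """Model of inline dedup: split data into fixed-size blocks,
    keep one physical copy per unique block hash, and record the
    logical order of blocks as a list of hashes."""
    blocks, order = {}, []
    for off in range(0, len(data), block_size):
        block = data[off:off + block_size]
        digest = hashlib.sha256(block).hexdigest()
        blocks.setdefault(digest, block)  # store only the first copy
        order.append(digest)
    return blocks, order

# Twenty identical 4 KiB blocks deduplicate down to one stored copy.
blocks, order = dedup_store(b"\x00" * 4096 * 20)
print(len(order), "logical blocks,", len(blocks), "unique stored")
```

The catch, in any real array, is that the hash table must be consulted on every write, which is why dedup tends to demand a lot of RAM on the controllers.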
In the high end it stumbles again with a packaging problem: while its peers are scaling up and out, it is still stuck at two nodes. 2.6PB is a lot of disk for two nodes to keep up with, especially with all the bells and whistles turned on and a heavy workload slamming the system. It certainly has the capacity for big workloads.
Then there is the software problem. You could do quite well with Linux, Windows, and VMware integration tools for path management on block storage and snapshot/backup utilities with application awareness. But aside from the Oracle Snap Management Utility, I can't find anything else that ties it in to anything else. DTrace looks pretty and the mobile app is neat. If they would open their focus and develop tools for more systems, the appeal would grow.
They have a potentially good product with the ZFS Storage Appliance. This should be their take-aways:
- Rebrand it as an Oracle Unified Storage Appliance built on ZFS technology, make some noise about its protocol support;
- Develop more tools and integration pieces with other OSes and applications (why not at least make a good storage product to sell into the MSSQL and DB2 shops?);
- Re-engineer the low end system to do HA in a single shelf (I'm sure SUN can put together a Supermicro-like two node chassis with SAS-connected disks out front);
- Scale out the high end model to support more controllers;
- Add all- or mostly-SSD options to the mix.
Lots of folks out there think ZFS is the future. Lots of folks want a big name attached to their mission-critical storage for when things go pear-shaped, as opposed to buying a "validated design" based on whitebox hardware from other, smaller ZFS vendors. Oracle has the opportunity to really build something here if they would simply broaden their horizons and expand their focus. But I don't think they will; they want you to buy engineered systems and they want you to use them to store your data in their database. There's nothing wrong with that, but their storage sales won't be booming anytime soon with that approach. Until they do turn it around, people looking for an all-in-one HA storage array with some space-saving features and broad protocol and application support will continue to buy NetApp.
Re: The biggest problem we had...
I agree 2.6PB is a lot to keep up with, but if you compare their controllers to other vendors', they put pretty extreme horsepower in theirs. Granted, I'm not sure I would trust it at that level either, but we are using the 7420s with 4x 8-core CPUs (10-core are available) and 512GB of RAM per controller, as well as another 1TB of flash cache per controller. You can go up to 1TB of RAM in each head with the 10-core systems. I can't think of any other storage controller I have looked at recently that's even close.
Re: The biggest problem we had...
The biggest problem we have with Sun/Oracle storage is the main administrative system (the 'akd' process) sucks donkey balls like there is no tomorrow.
Sure, when you have it running and don't try to change things it works well; for our load (large sequential reads/writes) the use of ZFS with HDD storage and flash journalling is very cost-effective.
But heaven help you if you try to configure the system or, say, use the backup/restore feature! It breaks. And as far as we can tell, the whole Oracle support organisation exists to stop you actually dealing with the engineers who can fix things. Assuming they didn't all leave when Oracle took over.
Maybe if we had several £M per year to spend they might just do things, but then again, we doubt it. Fix your organisation Larry!
Larry only cares about engineered systems
They don't need a separate independent storage system. Larry himself said he doesn't care if the regular Sun hardware business goes to $0 (http://video.cnbc.com/gallery/?video=3000119877&play=1). There's no margin in it. They will only develop stuff that can directly contribute to engineered systems, and I believe that means tight integration with the applications (e.g. Oracle ASM), which means less work needs to happen on the storage end of things, which in turn means those storage solutions are not suitable for anything other than engineered systems.
The rest of the stuff will just wither and die on the vine.
The title made me think of the story of the "Foo" Bird
Hadoop and SSDs
They already have something for Hadoop. It's called the Oracle Big Data Appliance. I know nothing about it, but they are clearly looking at that market, contrary to what the article suggests.
As for SSDs in the ZFS storage appliances: I don't think it's a code issue that keeps them from fully populating the systems with SSDs. I suspect it's more to do with how they're using flash, in that they have not been able to show enough of a performance benefit from an all-SSD configuration to justify the added cost. That may change as SSD prices continue to drop and capacities increase.
In general, I think their storage plan is in the server (most of their servers offer 8+ drives, which works quite well with ZFS), the midrange with the ZFS storage appliances, and then pushing the Axiom arrays for the high end. Ideally they'd try to merge the latter two. That being said, I think they'll stay a niche player, not try to be the next EMC.
Re: Hadoop and SSDs
SSDs have a beautiful benefit in hardware way beyond their speed increases etc. They define a lifespan for the hardware. Rotating disks do have a failure rate, but not really a lifespan as such. You only have to speak to companies with some very old disks still running. Therefore, if the array keeps working and you have some spare disks for failure, you can keep using it. However, SSDs are another thing. They have a potentially much shorter lifespan, especially when used in high access rate, low latency applications, their current niche. So, the more people use SSDs, the more they have to keep refreshing their storage on a regular basis. This is even better if the SSD is built into the server as well.
All keeps the hardware sales going quite nicely and tends to depress the used hardware market.
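The lifespan argument above is just arithmetic on the vendor's endurance rating. A back-of-the-envelope sketch (the function name and all numbers are illustrative, not from any particular drive's datasheet):

```python
def ssd_lifespan_years(capacity_tb: float, dwpd: float,
                       daily_writes_tb: float,
                       warranty_years: float = 5.0) -> float:
    """Estimate how long an SSD lasts under a given write load.
    Vendors rate endurance as DWPD (drive writes per day) sustained
    over the warranty period; total bytes written (TBW) is therefore
    capacity * dwpd * days-in-warranty. Dividing TBW by the actual
    yearly write volume gives the expected lifespan in years."""
    tbw = capacity_tb * dwpd * 365 * warranty_years
    return tbw / (daily_writes_tb * 365)

# Illustrative: a 1 TB drive rated at 1 DWPD over a 5-year warranty,
# hammered with 5 TB of writes per day, wears out in about a year.
print(round(ssd_lifespan_years(1.0, 1.0, 5.0), 1))  # → 1.0
```

Which is exactly the point: a high-access-rate niche burns through rated endurance years ahead of the warranty clock, forcing the regular refresh cycle described above.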
Oracle cannot fix its ZFSSA
Talk with managers at Oracle who are still there from Sun, and you soon hear that the launch of the ZFSSA has been plagued with issues, many of them still unresolved. Managers there have also complained that it was a great idea that Cantrill, Shapiro & co launched, but with no plan to support or maintain it.
Nobody anywhere on the web is saying how great their Oracle/Sun ZFSSA appliance is. The only positive for ZFS storage is that a Nexenta-based solution is cheap, so storage resellers can compete with Amazon.
We have a ton of oracle zfs storage appliances here.
1. Usable vs raw capacity is horrendous
2. If you utilise more than 80% of the box, the performance goes downhill
3. Because it's ZFS/software RAID, adding additional shelves requires you to add them in even numbers
4. They produce hilarious amounts of heat
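The usable-vs-raw and 80%-utilisation complaints above can be put into rough numbers. A sketch under stated assumptions: RAID-Z2-style double parity, hot spares, and a ~20% free-space reserve to avoid the performance cliff when a ZFS pool runs near full (the function and its figures are illustrative, not any ZFSSA's actual accounting):

```python
def raidz_usable_tb(disks: int, disk_tb: float, parity: int,
                    spare_disks: int = 0,
                    reserve_frac: float = 0.2) -> float:
    """Rough usable capacity of a RAID-Z pool: subtract parity
    and spare disks from the raw disk count, then hold back a
    fraction of the pool as the free-space reserve needed to
    keep ZFS performance from degrading near full."""
    data_disks = disks - parity - spare_disks
    return data_disks * disk_tb * (1 - reserve_frac)

# Illustrative: 24 x 3 TB disks, double parity, 2 hot spares.
# 72 TB raw shrinks to about 48 TB comfortably usable.
print(raidz_usable_tb(24, 3.0, parity=2, spare_disks=2))  # → 48.0
```

Two-thirds of raw ending up usable is not unusual for parity RAID with spares, but layering the free-space reserve on top is what makes the ratio feel "horrendous" on an expensive array.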