that's a joke, right?
they're about to get blown out of the water on car audio/navigation systems too, whether it's Ford likely dropping Windows or the new Android/iOS stuff that is coming out...
burn baby burn
What does compatibility look like for existing phones? Or will this only work in newer (yet to be released) phones? The Samsung Note 3, for example, uses SDXC but specifically mentions a 64GB limit. Not sure if that's just because that was (assuming it was) the largest card available at the time or what.
It looks like the SDXC spec supports up to 2TB, so hopefully it'll work with the Note 3 at least.
and a screen resolution that takes you allllllllll the way back to what, 1992 ?
didn't they do this for netbooks
a while back?
Looks like it was called Windows 7 Starter Edition, though I don't see any mention of what the cost was.
To the five-year-old 3PAR F400.
NetApp performance: 86k IOPS
3PAR performance: 93k IOPS
NetApp usable capacity: 32TB (450GB disks)
3PAR: 27TB (147GB disks)
NetApp unused storage ratio: 42% (this is fairly typical for NetApp systems on SPC-1 from what I've seen)
3PAR unused storage ratio: 0.03% (numbers available in the big full disclosure document; the full disclosure document is not available for the NetApp system yet)
NetApp Price: $495k (this is list)
3PAR Price: $548k (I assume this is discounted, though there is no specific reference to list or discounted pricing in the disclosure that I can see readily. Obviously the pricing is 5 years old, and the F400 is an end-of-life product no longer available to purchase as of November 2013).
Last I saw, the NetApp clustering was not much more than what I'd consider workgroup clustering, sort of like how a VMware cluster is: a volume doesn't span more than a single node (or perhaps a node pair, but in NetApp world I think it's still a single node). I believe if you're using NFS you could perhaps use a global namespace across cluster nodes and span that, but that's more of a hack than tightly integrated clustering.
I admit I do not keep up to date on the latest and greatest out of NetApp, but about 18 months ago I was able to ask a lot of good questions of a NetApp architect (I think he was one at the time at least), specifically around their clustering, and got good responses -
Of course that is ONTAP 8.1; according to this article on El Reg the latest is 8.2, so I'd wager there can't be anything too revolutionary in a 0.1 version increment, from an architecture perspective at least.
I don't mean to start a flame war or anything, but I found the comparison interesting myself. Having dug a bit into SPC-1 results over the past few years, the disclosures are quite informative, which is why I find it a useful test that goes beyond the headline numbers.
3par still growin strong
converged storage revenue up 42% year over year, last I saw HP considered that to be mostly 3PAR.
Traditional storage revenue down 17%.
Re: Those are great numbers
they did demonstrate AFA performance with respectable numbers last year -
Not that I'd use their stuff in any case (any more than I'd use something like LSI or Infortrend; I group Huawei in that tier of service).
Numbers are a far cry from the first hybrid system to post SPC-1 which was IBM a few years back.
how different is the design?
I mean, aren't most NetApp arrays basically just x86-64 servers with RAM, NVRAM, PCI Express, etc.? How does this system differ in design so that it is more "built for the clustering"? Is the software that runs on top somehow different from other NetApp arrays with the clustering software? I wouldn't expect it to be, since that is one of NetApp's claims to fame.
Just seems like it is the same with just more powerful hardware...
devil in the details
for the windows comparison.
They say UNPLANNED downtime...
Take planned and unplanned into account please.
Or just don't bother we all know the answer.
I have no doubt that modern Windows Server is quite stable, but the frequent reboots for updates are, well, still quite a problem, at least on Win2k8 (as well as Win7). I'll be trying Win2k12 in the next month or so.
my spindles are already not being interrupted with reads, given my 94-97% write ratio (average) on my 3PAR arrays.
We were close to getting NetApp a couple of years ago; really glad we didn't after seeing those ratio numbers. We didn't have the numbers at the time since we moved from a public cloud (the company had no infrastructure of its own at the time), which has shit for performance metrics (those it did have you couldn't rely upon).
read caching is handled upstream from the array in my case (application layer).
Been hammering on 3PAR myself for nearly 5 years to get SSD write caching. Don't care about fancy SSD read caching, that won't do shit for me.
SDN is horse shit
Really too much to put in a comment box, so google "techopsguys sdn" for my analysis of SDN. I was able to ask the creator of SDN a couple key questions which I used to confirm my own beliefs of what SDN is, and I rip the SDN concept to shreds in a 2,000 word blog post complete with pictures and diagrams.
SDN is to networking as FCoE was to storage. It's all marketing.
Same goes for cloud but that is another rant.
Really the only people that can truly benefit from SDN are service providers that have massive scale (e.g. 50-100k systems and up) and have very, very frequent changes. If you're operating at, say, a few thousand systems or less, SDN is just stupid. It fails to address the core problems of networking complexity.
If SDN helps you at smaller scale then you've probably done something horribly wrong with your network design before deploying SDN. Or you picked the wrong gear. I cover that in the post as well.
Perhaps the Juniper stuff looks cool because otherwise their gear is just too complicated to manage (there have been solutions on the market to handle that aspect for 15 years).
Re: HP's answer
I'm confused by what you say about LeftHand being for really small clients; do you mean Synology or QNAP are for really small clients? It looks like StoreVirtual VSA scales to 50TB per instance, which is quite a lot of storage for an entry-level appliance. Performance I think varies depending on what kind of hardware you're using. I have been told what might be called horror stories about performance on LeftHand with network RAID 5 (at least relative to 3PAR RAID 5 performance). Though having a shared-nothing design does have a couple of advantages over 3PAR (or any other SAN that is not shared-nothing) from a simplicity/availability standpoint.
I mean as far as I know HP's own openstack cloud uses Lefthand storage (in part because it was the first HP storage platform to support Openstack, 3PAR support came about a year or two later).
I think StoreVirtual is an interesting product at least (I only used it once for a few minutes a couple of years ago). I still believe HP should/will kill off the LeftHand hardware (which is just HP servers) in favor of entry-level hardware arrays based on 3PAR instead, and make LeftHand exclusively a software-only solution. They won't come out and say that though (much like they didn't come out and say they were killing off EVA for quite some time after they bought 3PAR). I think they will kill off the P2000 line as well for the same reason.
I agree their marketing needs some work. I have talked directly with the marketing folks in HP storage on this topic on several occasions, and they finally got my point last August I think, though I am not sure if they've done anything about it yet (I haven't seen marketing info for their stuff since, in the event they updated anything). I wrote specifics on the conflicting messages HP is sending out about storage here last year:
(jump to the Storevirtual section)
An alternative to those little boxes is the HP StoreVirtual VSA. Every Gen8 ProLiant comes with a free 1TB license (of course more capacity is available, licensed based on capacity I think). Install it on your server hardware on top of VMware or Hyper-V and have full fault tolerance between servers with network RAID (real "HA"). Not only block replication but thin provisioning, online expansion, sub-LUN auto tiering, snapshots, live movement of data between storage systems, blah blah, you know the rest. All the software features come enabled (I believe) out of the box. Don't need a full-fledged SAN? Then don't buy one. I don't think the HP LeftHand stuff is capable of online data migrations to upgrade to a 3PAR transparently (yet anyway) like you can with HP EVA.
Or if StoreVirtual is too complicated for some reason HP has StoreEasy as well.. again no SAN required. I'm certain Dell has equivalents to StoreEasy on their end, I believe StoreEasy uses Windows Storage server(and I think Dell uses that too on their entry level gear).
Not sure if IBM/Dell have something equivalent. I know NetApp has a VSA, but last I checked it had no high availability (it did do replication, I'm sure).
i was part of a DDOS last week
I have a 1U server at a colo and my ISP contacted me last week saying they got reports that I was part of a NTP DDOS and that I need to fix my shit..
Which had me confused because the IP they claim that participated in the attack was the IPMI interface of my server.. (since I'm fairly limited in what I can put at the DC it's hard to put the IPMI device behind a firewall)
Upon further investigation it seems that the NTP client on the IPMI interface was less of a client and more of a client with a server attached.
After I disabled the NTP client the vulnerability was closed. I'm not expecting the vendor (Supermicro) to ever release a fix (the server is a few years old); fortunately not having NTP on IPMI is not a big deal. The IPMI interface has a built-in poor man's firewall, though I'm not sure if it would affect inbound NTP requests, and I'm too worried to enable it in case I need to connect to it from a network it is not configured to recognize.
The support team at my ISP gave me a handy command to verify whether or not you could be impacted(not sure if this means you are vulnerable or if it means there is just a possibility)
ntpdc -n -c monlist <IP>
And sure enough with the NTP *client* enabled on the IPMI interface (well web-based IPMI) the system responded, like a server would respond (I guess, haven't spent any time researching this)
Anyway found it strange/stupid that something that claimed to be a client only would be vulnerable enough to participate in an attack.
At my company we were indirect victims of a NTP based DDOS on 1/2/14 when our upstream ISP got hit by a 100Gbps attack for another customer (am assuming it was a gaming company). They handled it pretty well, not a big impact to us(spotty VPN connections, occasional site connectivity errors) but our bandwidth usage is pretty small.
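For a normal ntpd that you do control (unlike the one baked into IPMI firmware), my understanding is that the fix is to disable the mode 7 monitoring facility that monlist depends on. A sketch of the relevant ntp.conf lines (exact restrict flags may vary by distro):

```
# /etc/ntp.conf -- disable the mode 7 monitor facility that
# 'monlist' queries (the amplification vector) rely on
disable monitor

# belt and braces: default-deny remote queries entirely
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
```

After a restart the ntpdc monlist check above should come back empty.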
servers are little more than building blocks. Last I heard you can go out and get Pernix software and put it on most any server, so what is the issue here. Servers are about hardware. There's not a lot of software with them outside of management functions, and there should not be.
Same goes for fusion IO's acceleration software, go out and buy it and slap it on a HP or Dell or whatever server, or have your VAR do it for you if you don't want to do it yourself.
Next thing you'll be seeing this Reg author wondering why the server vendors don't make their own hypervisors too. The hypervisor has had orders of magnitude more impact on servers than any fancy storage caching scheme.
Now what I'd like to see is better integration between the various guest operating systems (Linux I suppose is the one I care about most) and the hypervisor. For example: automatically shutting off vCPUs when they are not required (making it impossible to schedule anything on a vCPU from within the guest until the other vCPU(s) are heavily loaded, instead of trying to load balance); freeing up buffer cache automatically when it is not being actively used; perhaps even some sort of control-plane communication between guests (coordinated by the hypervisor) so they can tell each other what they are doing and make more intelligent decisions about resource utilization. I'm talking kernel-level stuff here; I don't think this sort of thing can be done with what VMware Tools has, for example.
don't rely on vendor studies for FCoE
Just make it simpler.
Don't use FCoE. It's been a market failure since day one. I remember sitting through, what was it, 5-6 years ago now, various presentations from the NetApp folks (and one or two from Brocade) talking about how great FCoE was. I never bought into it and I still don't. The added cost of a real FC network IMO is quite trivial in the grand scheme of things for the benefits you get (greater stability, more maturity, an isolated network, etc.). It's pretty crazy even now the sheer number of firmware updates and driver fixes going out for various converged network adapters (and I'm sure the aggregators a la UCS, as well as HP FlexFabric and any others).
Applications often do fine if there are network issues (the last round of issues I had was with a manufacturing flaw in a line of 10GbE NICs about two years ago; fortunately, since I had two cards in each server, it never caused an outage on any system when they failed). The network goes down, no big deal; things recover when they come back.
Storage, of course, is unforgiving: any little glitch and shit goes crazy. File systems get mounted read-only, applications crash hard, operating systems crash hard, etc. My last major storage issue was with a shitty HP P2000 storage system (since replaced with 3PAR) which on a couple of occasions decided to stop accepting writes on both of its controllers until I manually rebooted them. Each time it was at least an hour of downtime to recover the various systems that relied on it. Fortunately it is a very small site.
Keep it simple. If you really really want to use storage over ethernet, I suppose you could go the iSCSI route, and/or NFS though that'd certainly be a lower tier of service in my book. I have a friend who has done nothing but QA for a major NIC manufacturer on iSCSI offload for the past decade and he has just a ton of horror stories that he's told me over the years. That combined with the wide range of quality of various iSCSI implementations has kept me from committing to it for anything critical. I still do use it though mainly for non production purposes to leverage SAN snapshots to bypass VMware's storage layer and export storage directly to the guests to work around bullshit UUID storage mappings in vSphere since 4.0.
Now if you're using UCS, I'm sorry; from what I've seen/read/heard those blade systems have very limited connectivity options, so you may be stuck with Ethernet-only storage. At least HP (and others I assume) give you options to use whatever you want.
When a new good VM server can cost well over $30k a pop with vSphere enterprise+ licensing (and a few hundred gigs of ram) - the cost associated with FC is totally worth it. I'm sad that Qlogic is getting out of the FC switch business.. though they seem to continue to sell their 8Gbps stuff which I will use for as long as I can. I always found the Brocade stuff more complicated than it needed to be.
leaving at end of the month
vs staying until March
What's the diff?
March of next year maybe?
Re: Never mind Google and Amazon, I want one.
They'll get it for you - just be aware the minimum quantity to order is 10,000
Re: Dell has an R&D divisiion? Interesting...
They seemed to start doing some decent R&D going back a few years when they acquired a bunch of storage assets and of course Force10, probably others too... taking some time to integrate it all though.
Re: Well duh
FCoE has been all marketing for the past 5 years, and across the board it has thus far failed in the marketplace. UCS deployments may use it, but that's because they are hamstrung by limited connectivity options in those solutions.
Not likely you need those firmware updates either, if all you're looking for is spare parts.
Most of the el cheapo places will end up just putting one server on support and using that contract to get to the goods, since HP has for a while at least distributed firmware update CDs that support a massive swath of server gear. Time will tell if HP goes to the trouble of making life difficult for those folks; I suspect that is more trouble than it's worth for them. This will catch most of the low-hanging fruit.
I haven't applied any firmware updates myself on my HP gear in about 18 months now (all of it is 20-24 months old). Everything is still under 24x7 4-hour on-site support (and will be extended to at least a 4th year). I haven't had to use that either in 16 months, since I had to get all of our NC523SFP 10GbE cards replaced to fix the manufacturing flaw in the Qlogic chipset. All of the equipment is 3-5,000 miles away from me in remote data centers, so I require on-site support. HP support was quite masterful in replacing our NICs; I was surprised they managed to re-plug all 11 cables into each server correctly across all of the systems. Not even I'm that careful.
still record setting
Apparently a record 115.5 million viewers tuned in for the SB..
as a Seahawk fan I am quite satisfied. Though I do like the Broncos as well. Like most everyone else I did not see that coming. I was astonished how well Seattle played.
Downloaded the Super Bowl from my Tivo this mornin, going to strip the commercials and re-encode to H.264 for archiving.
Re: Amazon is NOT cheaper than self-run CO-LO
absolutely agree mr IOPS.
I tell folks amazon has one use case and one use case only
- your software stack is built to handle failure ("built to fail")
- your application load is very highly variable, to the point where stuff is going up and coming down all the time.
I'd wager a TINY fraction of 1% of workloads out there are like that.
Anything else and you're using it for the wrong reasons. That doesn't mean you can't limp along and get it to work, but more often than not it ends up with massive cost overruns, availability problems, performance problems, or a combination of them.
I've talked to bunches of companies over the past 2-3 years that have moved out of various clouds (often Amazon), all of them for the same reasons that everyone lists. Unfortunately for folks like myself those often don't make the news headlines. I moved my company out of Amazon's cloud two years ago (I was hired to do just that; I wouldn't have taken another job working with Amazon, and still won't), and cost savings aside, the improved availability, performance, flexibility, and perhaps most important of all EASE OF USE has made everyone across the board more productive.
By contrast I tell folks it's like building a Hadoop cluster backed by a tier 1 Fibre channel SAN for storage. Sort of.
It's the wrong solution for the problem at hand. Though ironically the FC SAN will do the hadoop job better than amazon cloud can do just about everything else. You'll just pay a lot for it.
because of dumb shits like this
They are spending 25 % of their REVENUE on cloud services. REVENUE -- for them that comes to over $7 million in 2013.
There are a lot of clueless companies out there... it makes me sick.
Who are EMC and NetApp going to partner with for servers? To them I think Cisco is the least bad to deal with, even though they have Whiptail. I suppose Lenovo with IBM's x-series is a possibility, but I really can't imagine Lenovo having a whole lot of success, at least locally in the U.S.
so is this
what MS is deploying in their IT PACs ? Are they still using IT PACs ? Those were pretty neat.
My contacts have told me in the past there's no way DDN is going to IPO anytime soon, I suppose the only thing that might change that is some sort of bubble like valuation, something so stupidly high that the owners couldn't pass on it.
The owners have sometimes hinted at an IPO, sort of leading the employees on: "work your ass off for the next year and we'll do good and probably IPO!" The longer-term employees have seen that too many times and have long lost faith in it ever being a possibility. The owners are happy with their fast cars and bonuses; they have what they want. Unless someone offers them, say, multiple billions apiece, I don't see them changing much of anything.
The relatively poor quality of DDN (and other products in that space) I think keeps them from bumping up the margins on their products. I was (and still am) not well versed in their product lines, but I was talking to a former employee about a year ago who was a sales engineer (I think, or maybe a sales rep, though he was a highly technical person), and he was telling me how much of their product line is not designed for 24/7 operation: they expect you to have scheduled downtime to do maintenance. Some of their customers were not aware of that before purchasing and weren't happy when the time came. Honestly I was just shocked; I did not expect that from something that apparently has redundant components throughout.
One of my friends over there tried to sell me on DDN a few years ago when we were buying storage for a new VMware build-out. I have been on 3PAR for many years. I like the guy a lot, he is great, but there is no way I would touch DDN for what was to me a mission-critical deployment, even if he is a good friend.
I remember being told the stories about their largest customer and how much they were buying back then it was pretty nuts(not gonna mention any names!).
probably be a low take up
(Debian user for the past 16 years)
Much of the Debian development base seems pretty hard-core about free software... Given that Valve and their games are not anywhere close to that, I would be surprised if more than a handful picked up on it.
My own expectations of Steam on Linux/Steambox/SteamOS being a success(relative to other platforms) at least in the near term (next 3 years) are very very low. Beyond that I don't know.
I'm not a gamer so I suppose it doesn't matter to me either way.
I still have some Loki games I bought back in the day that I never got round to installing. Unfortunately, thanks to Linux's wonderful backwards compatibility, they have absolutely no chance in hell of running on any distribution released in the past 5-6 years (meanwhile, somehow, I can still play Windows games that came out in 1995 on Windows 7).
That doesn't stop me from using Linux though.
not sure about now
But as of August of 2013 the express query stuff on HP's StoreAll did NOT support searching objects(they didn't admit to it but when questioned directly they confirmed that fact), it was file services searching only. Maybe that is in place now I am not sure.
my tivos get lots of use
I retired my first series 1 tivo back in October, gave it to my sister - it had been running since summer of 2001. I bought a Series 3 back in 2007 I think it was, and it still gets used every day (60+ season passes/wish lists). Series 1 tivo had one set of HDs go bad (I replaced em). Series 3 so far has had two sets of HDs go bad(most recently December). None of my Tivos have ever NOT been connected to a good UPS(same goes for all of my electronics at home).
Bought a series 4 on sale in September and it gets lighter use(replaced Series 1) but still used every day.
Ran into a NASTY bug on series 4 a couple weeks ago where it thinks all of the space is consumed and it deletes all of the recordings (including those set to "don't delete until I tell you so"). At the worst it was recording and deleting that recording 7 minutes into the recording because it thought it was out of space. Then it decided to get stuck on reboot every time. After working with support they decided I needed to RMA. I decided to try to use some of those kickstart tivo codes and managed to repair the system by doing an emergency re-install and a wipe of all data. Not sure if the problem will recur or not... that is the biggest issue I've had with tivo in the past 14 years.
A side effect of using tivo for so long is I'm always out of the loop as to what new shows are coming out or what new movies are coming out etc, sometimes don't find out till long after the show is canceled.
All of the podcasts I get all come through tivo. I watch NFL football, lots of CNBC(entertainment purposes only), and quite a bit of other stuff, so I don't see myself cutting the cord anytime soon. I cut off Netflix when they jacked up their prices after looking back and seeing how little I used them over the preceding year(hours viewing was literally single digits for the year, though the year before that I used it a lot, until I ran out of things that interested me to watch). Never used any of the other premium streaming services, content selection is just not good enough. I'd be happy to pay $150/mo for a service that had everything, but such a thing doesn't appear to exist (that and peak times for bandwidth use is still often problematic for HD content streaming).
I bought a WD TV Live over the weekend, but only for streaming my own media files via DLNA from my linux server. So far it works well.
red hat does similar
Red Hat uses similar tech (perhaps the same; I recognize some of it from their presentations) in their PaaS product, which I think is called OpenShift.
Sounds interesting, but I really don't see it going anywhere outside of highly specialized shops that really can't bear any overhead at all from virtualization (that % is going down as time goes on and CPUs get better). Perhaps if this had been out 12-14 years ago (VMware GSX days) things would be different.
Re: RAID rebuilds
sub disk distributed raid
what IBM might use for their billion dollar investment in their cloud group with softlayer? Perhaps some special pricing deal with whomever buys X series?
interesting to see the contrast
Between the seemingly struggling OpenBSD Foundation and the seemingly flourishing FreeBSD Foundation, which seems to have gotten four donations of $50,000 or more in 2013. NetApp alone has donated $100k+ to FreeBSD three years in a row.
I hope OpenBSD makes it, though my usage of it is limited to firewalls (not fond of BSD user space, much like BSD folks hate linux user space). I think pf is awesome (after having gone through ipfwadm, ipchains, iptables, ipf, and I think one more firewalling tool on FreeBSD 12-14 years ago). I know pf has been ported to FreeBSD, though haven't needed to move from OpenBSD for this purpose.
I had to re-install my home Soekris firewall recently as the CF card died, and tried to install Debian kFreeBSD to get pf, but after 30 mins of fighting it I could not get it to PXE boot, so I went back to OpenBSD. I did manage to get a beta of Debian kFreeBSD installed on the Soekris a few years ago; I don't remember what I did to get it to work, and it was really slow at the time, so I stuck with OpenBSD.
Linux for the rest of my systems though. Linux has infiltrated pretty much all of the enterprise gear I have at work short of my Citrix Netscalers which are FreeBSD (F5 uses Linux though, I like F5 too). I was sort of surprised not to see Citrix on the FreeBSD donations page given their usage of the platform.
FreeBSD atlanta-netscaler 6.3-NETSCALER-9.3 FreeBSD 6.3-NETSCALER-9.3 #0: Wed Jul 3 14:58:06 PDT 2013
Not exactly a recent release of FBSD (2008).
Oh, and side note: I believe we have OpenBSD to thank for OpenSSH, which of course is very widely used. There are a couple of other major products they re-wrote from scratch due to licensing; the names escape me at the moment (maybe pf could be considered one, a re-write of ipf? I seem to recall licensing issues with ipf back in the day).
never experienced that myself
I hear stories about ad tracking, ads following users around and stuff. For about the past 8 years or so I've had Firefox prompt me for each and every cookie; checking the permissions.sqlite database, I have 7,103 sites I have banned cookies from in my browser, and about 450 that I have accepted.
I used to work for an ad targeting company a few years ago that tracked folks based on cookies. They were pretty above board, no funky stuff.. If you opted out they didn't track you(except once I remember a bug being reported and the developer told me he had it fixed the same day it was reported).
It's been pretty effective...and is interesting sometimes to see the sheer number of cookies that some sites try to set, in the off chance I go look at gaming sites I just flat out disable all cookies there's just so many.
I could use ad blockers and stuff but for some reason never did use them.
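Out of curiosity about where those counts come from: a minimal sketch of tallying banned vs accepted cookie sites the way Firefox's permissions.sqlite stores them. The moz_perms table and column names are an assumption on my part (the schema has changed across Firefox versions; older builds used moz_hosts), so this uses an in-memory stand-in database rather than pointing at a real profile.

```python
import sqlite3

# Stand-in for ~/.mozilla/firefox/<profile>/permissions.sqlite --
# table/column names are assumptions, check your own profile's schema.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE moz_perms (origin TEXT, type TEXT, permission INTEGER)")
db.executemany("INSERT INTO moz_perms VALUES (?, ?, ?)", [
    ("http://ads.example.com", "cookie", 2),   # 2 = deny
    ("http://cdn.example.net", "cookie", 2),
    ("http://bank.example.org", "cookie", 1),  # 1 = allow
])

banned = db.execute(
    "SELECT COUNT(*) FROM moz_perms WHERE type = 'cookie' AND permission = 2"
).fetchone()[0]
accepted = db.execute(
    "SELECT COUNT(*) FROM moz_perms WHERE type = 'cookie' AND permission = 1"
).fetchone()[0]
print(banned, accepted)  # -> 2 1
```

Against a real profile you'd just swap the `:memory:` path for the profile file and drop the CREATE/INSERT lines.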
looks like it worked
Lifetime just meant the lifetime of the hosting provider.. read the fine print
my switches used to reboot after 497 days of uptime, due to the Linux uptime counter rolling over, though that bug was fixed about 6 years ago I think. The Linux uptime counter doesn't roll over at 497 days anymore either.
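The 497-day figure isn't arbitrary; it falls straight out of a 32-bit tick counter at the old HZ=100 rate. A quick back-of-the-envelope check:

```python
# Older kernels counted uptime in jiffies at HZ=100 (100 ticks/second)
# in a 32-bit variable, which overflows after 2^32 ticks.
HZ = 100
seconds_to_wrap = 2**32 / HZ            # ticks / ticks-per-second
days_to_wrap = seconds_to_wrap / 86400  # 86400 seconds per day
print(round(days_to_wrap, 1))           # -> 497.1
```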
Don't get me wrong, I've been a Linux user for about 16 years now, and on Linux on the desktop for at least 12 of those. But I suspect a decent part of the problem is changing (breaking) the driver interfaces in the kernels as they come out, significantly increasing the amount of work required to support a new Android release, instead of just dropping the older binary driver on top of the newer kernel. Hard-core open source folks say just release the source; a lot of the time that is not feasible (and even with source it only solves part of the problem).
*I am not complaining* (I've learned to live with it); I've been an Android user for almost 5 days now (webOS before that). But I do believe it was a sufficiently large contributor to Linux never making any inroads in desktop market share (because you really had to rely on drivers that came with the distribution, and distributions for the most part seemed bad about back-porting driver updates to support new hardware). Their "solution" is to upgrade to a newer distro (and take whatever other downsides come with that, when all I want is a newer driver). For me it's not a big deal, I just compile the driver(s) myself. Obviously that doesn't work for a normal user :)
An example I use is the Intel e1000e driver (which is fully open source) and Ubuntu 10.04 LTS. They haven't updated the e1000e driver, I suspect, since 10.04 came out, and each time a kernel update occurred I had to compile a new driver. On my desktop/laptop I've just disabled kernel upgrades now (even though desktop LTS is end of life, the kernel and non-desktop things still get updates). I have absolutely no interest in going to a newer Ubuntu; maybe I'll go to Mint or something when the time comes (which at this point I think is when I do my next hardware refresh), or maybe generic Debian, which I use on all of my personal servers.
I suspect not many folks outside of the tech community care if their android devices get major OS upgrades.
It seems like the track record in general for major OS upgrades on mobile devices isn't that hot anyway, seem to get lots of breakage.
of course something like a stable binary driver interface is probably a boring, hard thing to get right, so developers shift the blame to someone else, because they are lazy (and in some cases not being paid). I had hoped a decade ago this problem would have been solved by now, but it seems it'll never get fixed.
Re: legal? We shall see
HP and Dell (IBM too and others I'm sure) have long blocked 3rd party HDs from their servers via firmware.
El reg had an article on it for Dell a few years back since they were a hold out.
A couple of jobs ago we had a bunch of DL585s that were using entirely 3rd party memory (32x2GB chips each); the HP hardware fault light was lit up on all of them for years, though there was never an issue. Maybe coincidence or maybe not, I don't know (the systems and memory were installed long before I started at the company).
In my experience at least, the number of times I *need* to upgrade system firmware is really, really rare (barring other changes, like installing new CPU types or something that may trigger a supportability thing). I often upgrade firmware regardless if I haven't heard of any loud complaints, but rarely has it been something I've needed to do. My current production servers haven't seen a firmware update in 18 months, and I have no immediate plans to upgrade them further (the servers themselves are ~3.5-year-old tech at this point, so fairly mature). Oh, and we will be renewing the 24x7 4-hour support contracts on these servers when they expire, for at least another year regardless; the first round expires next October.
Most (all?) of HP's servers come with a three-year warranty by default, which should entitle you to firmware updates and such. Beyond that, if critical bugs are still being found after three years, that's kind of sad.
I'm sure people looking to run their servers on the cheap will have no trouble finding copies of the latest HP firmware DVD ISO images if they want them regardless.
eva customers got away with a lot for free
I've never talked with any EVA users myself, but one of the VARs I used to use told me stories about how, to lower costs, a lot of customers would keep support on the controllers but not maintain support on the disk drives, and just keep some spares and/or buy on demand/from eBay etc. The HP support system wasn't very sophisticated (not sure if it was ever fixed), and HP would often offer to replace drives on systems that did not have drive support, because for some reason they could not distinguish between systems with controller-only support and those with full support. Another strategy was a high level of support on the controllers and a low level (perhaps warranty only) on the drives.
3PAR sort of fixes that problem, as their support policy is really strict: no splitting of support types, and either the system is fully supported or it is not supported at all. I bet this VAR that he would not be able to get EVA-style support on a 3PAR system; he tried for something like a month, to no avail. He still owes me...
It sounds to me that, from a lock-in standpoint, the Nutanix solution is far more lock-in than a software-only solution like VSAN. They admit it themselves: they want to be an end-to-end solution provider, so you are locked into their platform.
Not that there is a problem with lock-in; it just seems sort of hypocritical to call out VMware for lock-in when, guess what, Nutanix is lock-in too.
The approach these two players are taking is certainly interesting. Myself, I still have tons of concerns about how well it works; what they are trying to do is incredibly complicated to get right, assuming it's even possible (or, more specifically, even a worthwhile exercise).
It would not surprise me if, in the end, both of these companies toss their integrated compute capabilities (or relegate them to the low end of the market, à la the HP StoreVirtual VSA) and shift focus almost entirely to distributed grid storage, with most customers ending up with separate (but more optimized) compute and storage stacks.
They would still have a very compelling solution (on paper), and theoretically there would be significantly less risk and complexity in managing deployments. Likely higher margins too; storage tends to carry much higher margins than compute servers, and companies seem to like margin.
sounds pretty cool
and useful tech to have. I wonder what the cost is? On a kinda-sorta-semi-related-but-maybe-not note: I wrote a blog post last year about astonishing WAN performance with scp and rsync-over-ssh through a Dell SonicWall VPN. On highly compressed files, no less, I was able to sustain greater than 10 megabytes a second between Atlanta and Amsterdam (~95ms of latency) on a single connection. Outside of the VPN, throughput was closer to what one might expect, around 600-800 kilobytes a second. No WAN optimization functionality was enabled on the SonicWalls (and support confirmed that even if it were, there are no protocol optimizations for SSH, which of course is encrypted in itself).
Both sides have a gigabit link. I've never been able to get a good answer out of Dell as to how this is possible, but I've repeated it again and again over many months. I can transfer ~250GB in a matter of hours between the sites, which is literally ~3x faster than the Atlanta facility can transfer to the Amazon cloud on the east coast (throughput is of course limited by latency). And it's far simpler: I just scp <filename> or rsync -ave ssh <filename>.
I'm sure if I was able to dive into the tcp packets I would discover the answer, but I'm not that tech oriented when it comes to networking. I confirmed with multiple network gurus that know a lot more than me that this performance is unexpected.
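For what it's worth, the ~600-800 KB/s baseline is roughly what a single TCP stream with a small effective window gives at that latency. A quick bandwidth-delay-product calculation (my own back-of-the-envelope reasoning, nothing confirmed by Dell) shows why the in-VPN number is the surprising one:

```python
# Single-stream TCP throughput is capped by effective window / round-trip time.
# Numbers below are illustrative, using the ~95ms Atlanta<->Amsterdam path.

def tcp_ceiling_bytes_per_sec(window_bytes: int, rtt_seconds: float) -> float:
    """Max single-stream throughput for a given effective window and RTT."""
    return window_bytes / rtt_seconds

rtt = 0.095  # ~95ms round trip

# A 64 KiB effective window (e.g. a constrained SSH channel buffer):
print(tcp_ceiling_bytes_per_sec(64 * 1024, rtt) / 1024)  # → ~674 KiB/s

# Sustaining 10 MiB/s at the same RTT needs roughly a 1 MiB window in flight:
needed_window = 10 * 1024 * 1024 * rtt
print(needed_window / (1024 * 1024))  # → ~0.95 MiB
```

That ~674 KiB/s ceiling lines up suspiciously well with the 600-800 KB/s seen outside the VPN, which is why my guess has always been that the tunnel somehow changes the effective windowing or buffering behavior, though I've never proven it.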
If you're interested in reading it, I won't post the link directly, but you can find it with a Google search for "freakish performance sonicwall". It attracted the attention of Dell themselves at one point, but they were never able to get me in touch with someone senior enough to explain the situation (they promised they would on several occasions).
On another slightly-related-but-not-really note: a few years ago I wrote a distributed file transfer system for a company that, among other things, leveraged HPN-SSH, a WAN-optimized SSH. Combined with the (at the time unique, I think) ability to disable encryption for data transfers (while keeping encryption for authentication), it made for a very scalable and incredibly reliable (much more so than I was expecting) system. The files being transferred were basically compressed Apache-style access logs with tons of cookie info for an advertising company, and it moved probably 10TB (~3TB post-compression) a day from multiple sites. The files were split up based on customer ID, so the distribution system was built to automatically send files in parallel, making for even better throughput. Load-balanced SSH/rsync servers with a shared common key at the central storage area received the files. It was a pretty fun project.
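The fan-out idea is simple enough to sketch. Everything below is a stand-in, not the actual system: the filename convention, function names, and the stubbed-out transfer step (which in the real thing shelled out to rsync over HPN-SSH) are all hypothetical:

```python
# Sketch: bucket log files by customer ID, then push each bucket on its own
# parallel stream. transfer() is a stub; the real worker would run something
# like subprocess.run(["rsync", "-ae", "ssh", *files, dest]).
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def group_by_customer(paths):
    """Bucket files by a customer-ID filename prefix (hypothetical naming)."""
    buckets = defaultdict(list)
    for p in paths:
        customer_id = p.split("_", 1)[0]
        buckets[customer_id].append(p)
    return buckets

def transfer(customer_id, files):
    # Stub: pretend the batch was sent; report how many files went out.
    return (customer_id, len(files))

def push_all(paths, workers=4):
    buckets = group_by_customer(paths)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(transfer, cid, files) for cid, files in buckets.items()]
        return dict(f.result() for f in futures)

print(push_all(["acme_access.1.gz", "acme_access.2.gz", "globex_access.1.gz"]))
# → {'acme': 2, 'globex': 1}
```

Splitting by customer ID meant one slow or failed stream never blocked the others, which is a big part of why it was so reliable.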
So if you're in the market for doing large data transfers over SSH across WAN connections, consider Dell SonicWall and/or HPN-SSH...
They don't seem to believe that given their engineered solutions.
If they did then they would not be rapidly exiting the commodity server business.
They want high margin; they don't care about selling a million boxes if it's at a tiny margin (IBM is similar, I think, hence the rumors of them wanting to sell some/most of their x86 System x line to someone).
I suppose the only explanation is that the "commodity" cloud has much higher margins than the term indicates. Given typical cloud pricing from Amazon or whomever, that doesn't surprise me much. But the use of the word commodity was somewhat surprising.
they're getting out of commodity hardware, and into commodity cloud?
seems kind of stupid... I would have thought they would have gotten out of anything commodity. No margins, after all.
Re: Cheap = Business differentiation?
Since the first time I spoke to Nimble, a couple of years ago now, it really seemed like they were gunning for Equallogic as their target market. I'm sure their tech has since grown a bit beyond that, but back then they seemed equipped to battle Equallogic. On paper at least, the tech looked impressive. Though the first time I logged into one of the demo systems (maybe 18 months ago), I was incredibly underwhelmed by the management UI and the lack of ability to do many things. We ended up getting a system for a particular deployment (not sure which model; it's not managed by my group, and as of today it has been working fine, though the workload is pretty trivial). All of our most important data still resides on HP 3PAR, and I don't see that changing any time soon.
My experiences with storage in general (both 3PAR and other) over the past 12 years have made me far more careful and paranoid when it comes to managing and choosing which storage system to use for critical production stuff (and yes, I have had some bugs in 3PAR as well, some minor, others less so).
Nimble has a ways to go before they reach that state for me (if they ever get there -- Equallogic is a good example of a technology/organization which has never been there for me and never will be -- same goes for HP Lefthand while I'm here).
I remember one storage company where we had a nine-hour outage because they didn't have something as simple as a solid escalation plan in place. The on-site engineer was stuck twiddling his thumbs until one of my co-workers made some calls to people he knew at the vendor to (try to) get it escalated. Their CEO later apologized to us in a letter, and they implemented an escalation policy after that. They were later acquired by a three-letter storage company. Their technology was good; the organization just wasn't quite mature enough.
That same organization installed some demo equipment at a company where one of my friends works (a really, really big company), maybe a year or so ago. The storage vendor decided to do some unscheduled upgrades on the demo equipment (apparently without telling enough people) and got their products banned from the customer as a result (for interfering with testing).
Re: Why not just use SSD or flash
you may be surprised by this discussion; by the same token, I'm NOT surprised you're not in charge of any serious storage (sorry).
can netapp+fujitsu enforce minimums?
From the article:
"Another route, and the one chosen by a number of leading-edge developers such as Fujitsu, NetApp and NexGen (now part of FusionIO), is to enforce minimum application data throughput levels rather than maximum."
I wasn't aware NetApp had QoS (though I don't follow them closely at all). A quick search suggests that as of October 2013 they supported rate limiting, but I see no mention of enforcing minimums:
"[..]you set throughput limits expressed in terms of MB/sec (for sequential workloads) or I/O operations per second (for transactional workloads) to achieve fine-grained control. When a limit is set on an SVM, the limit is shared for all objects within that SVM. "
(no solid indication that you can specify both IOPS and MB/sec levels simultaneously for a given workload)
Fujitsu seems similar; again, I don't follow them, but a quick search seems to indicate they only support limits, not minimums:
"The Quality of Service (QoS) function [..] setting of an upper load limit for each application enables stable performance and reduces the impact on other applications due to load changes. "
Perhaps that info is just out of date, though NetApp's seems pretty recent. Not sure when Fujitsu's was last updated, but since it is on their product page (rather than a blog post) I'd have to assume it's current.
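The distinction matters because a cap and a floor are very different engineering problems: a maximum is just a rate limiter in front of each workload, while a minimum requires the array to actively schedule everyone else around the guarantee. A toy token-bucket limiter (my own illustration, nothing to do with either vendor's implementation) shows how little machinery "limits only" QoS needs:

```python
import time

class TokenBucket:
    """Toy IOPS cap: an op is admitted only if a token is available."""
    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec      # tokens refilled per second
        self.capacity = burst         # max tokens that can accumulate
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the cap: the array would delay or queue the op

bucket = TokenBucket(rate_per_sec=100, burst=10)
allowed = sum(bucket.allow() for _ in range(50))
print(allowed)  # roughly the burst of 10 passes; the rest are throttled
```

Enforcing a *minimum*, by contrast, means throttling other tenants when a guaranteed workload falls behind, which needs global scheduling state rather than a per-workload bucket, and that's presumably why vendors ship caps first.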
maybe if they just used cloud
it would work, because cloud fixes everything right?