don't trust mobile, period
I don't do any online banking or e-commerce transactions on my mobile devices, outside of the Google Play store for a few apps (no movies/music/whatever else they have). My devices are all Android (a Samsung Note 3 and a Toshiba tablet), though I wouldn't do it on iOS either.
I am also very, very cautious not to install any privacy-invading applications.
Anything that needs security gets done on my Linux laptop.
The exception is that I do occasionally log in to the work VPN from my phone.
In general I think the risk is quite low for me, but I don't do it anyway. There's never been a time when I felt "I have to do this *now* (and can't wait till I get to my own computer)".
It maybe goes without saying, but I don't use public wifi hotspots either (except the very occasional hotel, and even that is rare; I prefer to use my phone's MiFi, which I'm already paying $50/mo for).
Re: Won't upgrade
what is "the VPN bug"? Does it impact the Note 3? I occasionally use the Dell SonicWALL VPN on my Note 3 (with their app) without any issues.
Re: Why so late?
I can drive my Note 3's battery down by 50% in about 45 minutes with intensive usage. Light usage gets me through a day. I think I've lost 20% of the battery capacity over the past 11 months; it goes from 100% to 80% pretty quickly.
to me anyway, looping a video isn't a good indicator of battery life, since most of the work is done by the GPU. Drive the CPU hard and tap the screen a bunch, and see what you get; a crude burner like the sketch below is enough to saturate the cores.
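Purely illustrative, a minimal CPU burner (Python; running it on Android under something like Termux is my assumption, not something from the post):

```python
# crude CPU burner for battery testing: saturate every core until interrupted
import multiprocessing

def burn():
    while True:
        pass  # busy-wait so this process pins one core at 100%

if __name__ == "__main__":
    for _ in range(multiprocessing.cpu_count()):
        # daemon processes die automatically when the main process exits
        multiprocessing.Process(target=burn, daemon=True).start()
    input("burning CPU on all cores; press Enter to stop\n")
```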
going to buy a new battery soon.
Trading one lock-in for another
Title says it all
absolutely, nail on the head there.
this "supercomputer" didn't cost $5,500 to build; it cost $5,500 to rent. Big difference (duh). Obviously 99.9%+ of the workloads out there aren't suited to one-off runs of a few hours, never to be needed again.
You've been able to "rent" supercomputer time for a long time; no news here.
This is one of the very, very few good use cases for public cloud computing (IaaS, anyway; SaaS is a good model, and I'm not sold on PaaS either).
Re: Could it be something stupid and simple?
maybe he pulled a SAVVIS and spent tens of $ at a good strip club then refused to pay the bill.
Re: Not that uncommon
this is my own personal favorite traceroute, from just over 10 years ago. ISPs claimed a fiber break in the eastern US caused routes to fail over and opened a hole in which Russian ISP(s) were able to advertise our IP space (no idea if it was intentional or not), resulting in some ISPs routing to Russia before getting to us, with packet loss in excess of 95%. Sometimes the routes would bounce back to normal.
It took AT&T (our ISP) and friends a good 8-9 hours to get it fixed (I assume by installing route filters or something). Due to the packet loss we had to shut our website down for several hours.
As soon as the packets hit Russian routers, the packet loss went from around 0 to more than 40%, and higher from there.
That is a traceroute that in theory should only go about 30 miles from source to destination (my apartment to my company's colo); instead: 32 hops, 200ms+ latency and 98% packet loss.
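If you want to reproduce that kind of per-hop measurement yourself, mtr does it interactively, or a throwaway script works too. A rough sketch (the hop addresses are placeholders from the RFC 5737 documentation range, not the real hops from the incident):

```python
# probe each hop of a traceroute with ping and report where loss appears
import re
import subprocess

hops = ["192.0.2.1", "198.51.100.7"]  # placeholder IPs; take yours from `traceroute <dest>`

for hop in hops:
    # 50 pings at 0.2s intervals; parse the summary line for the loss figure
    out = subprocess.run(["ping", "-c", "50", "-i", "0.2", hop],
                         capture_output=True, text=True).stdout
    m = re.search(r"([\d.]+)% packet loss", out)
    print(hop, m.group(1) + "% loss" if m else "no reply")
```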
purchase price is not the full cost
Of course, having used Nexenta for the past couple of years in a very limited role, I would never, ever deploy it for anything truly mission critical; I wouldn't even consider it.
Big IT firms buy things lightly all the time; they have the budget to do so. They can buy something, have it work or not work, toss it out and go to something else.
As for data integrity on ZFS: it probably works great when there are no issues, but once you get data corruption (as I did several times with Nexenta, due to their clustering software messing up) you are SOL. The diagnostic/repair tools for ZFS in such a situation are basically useless. Even something as simple as "zero out the corrupt data and fsck the file system so at least it mounts cleanly without panicking the OpenSolaris kernel and triggering an automatic reboot" (something I've done with Linux on one or two occasions with ease) wasn't possible at the time (about 1.5 years ago). Support said "restore from backup". I didn't care too much about the data that was lost; I just didn't want to have to rebuild the entire file systems, but there didn't seem to be a way around it. I spent hours with the ZFS repair tools to no avail (at some points it seemed like they would work, then after some time the system would panic again).
Nexenta also tries to do too much for a small company; too many features make me nervous in general. We were using only a few of the most basic ones (mainly I was using it as an NFS platform for under 1TB of data, because it supported snapshots and high availability, until we turned the HA off to stabilize the system). Oh, and don't try to click on the snapshots in the GUI: it will spin for about 75 minutes before timing out, because it iterates through each and every snapshot on the system, in serial, to get info on it. (Yes, the Nexenta software we have is old. I refuse to update it until we are migrated off, out of fear it will break more; at least the issues we have now are known and I can fix them if they come up. Support was useless.)
Nexenta is not much more than a toy for white-box system builders to be able to say "hey, we do storage cheaper than the big guys!". For that it does the job; it's just not a solution I'm in the market for.
I'm sure it can work for some folks, but I feel sorry for anyone being forced into using it for an important project: you get what you (don't) pay for. Storage is complicated, and generally pretty fragile (unlike networking, which is generally stateless); it's hard to get right, and it's not cheap to get right. Cutting corners will likely cost you more in the end (or maybe you'll be lucky and not be impacted).
Re: Avoid at all costs
I just got my 7450 installed last week and upgraded to 3.2.1 MU1 (as far as I can tell, that is the first release that supports dedupe, and it is *brand* new; the ISO on HP's FTP site is dated Oct 30).
From the PDF:
"After upgrading to HP 3PAR OS 3.2.1 MU1, customers with an SSD CPG can provision TDVVs(Thinly Deduped Virtual Volume)."
the 3.2.1 release notes don't mention dedupe; only the 3.2.1 MU1 notes do (the link above has release notes for both versions in one PDF).
So I find it kind of hard to believe you've had much time to test dedupe at this point. Dedupe would obviously get its best value on something like VDI (I've never used VDI myself and don't foresee that changing anytime soon).
As a customer since 2006 I have been very happy, even more so to have such a mature platform offering all-flash and dedupe etc. The initial footprint of our 7450 (4-node) can fit almost 200TB of raw flash alone (and that assumes they don't release larger drives in the future, which I'm sure they will). Cost was really good too. Even with no dedupe it will pay for itself quite easily in performance and scalability.
I exchanged a couple of emails with a friend of mine who works at Pure Storage (and who used to work at 3PAR), and he said they do get beaten by 3PAR on price (though not on data reduction; 3PAR doesn't support compression yet). Just hearing about 3PAR beating someone (especially a smaller player) on price is foreign to me. Most of it comes down to the very large SSDs and the much lower $/GB they allow.
I don't have anything connected to my 7450 yet; I've been busy configuring new VMware servers and other things, but in the next few days I will be storage-vMotioning 100-200 VMs over to it. I don't expect much savings from dedupe until we move some databases over. Short term it's just playing around with it to see how it runs; I probably won't have time to move much production stuff onto it until our holiday "freeze" starts at the end of next week.
I have never used AO. I never really liked the concept of sub-LUN tiering (on any platform; I was never sold on it). I always wanted to see a real array-based flash cache (one that can cache writes), and it seems that after 5+ years of me pleading for it they are getting closer. Caching reads on a 90% write workload isn't going to do shit for me.
that's not going to stop a lot of idiotic management types from trying... fortunately I don't work for those types at the moment. Having worked with them in the past, though: so frustrating.
Re: New partner?
seems like a fairly typical article. I don't see any opinion in it saying "oh, go buy this, it is better than the rest". They are just reporting on what the vendor is announcing; El Reg has been doing that for as long as I can remember (I've been a reader for about 14 years now, though my memory doesn't go that far back).
El Reg does seem to favor sponsors, though I suppose that is understandable. One vendor I use for networking doesn't get mentioned much here in articles; when I spoke with the vendor's PR folks (who do send info out to network-oriented sites), they speculated it is because they do not advertise on El Reg.
why would you want to put your storage at risk by running workloads on the storage system? It's not like you're going to benefit from ultra-low latency on such a system; it's distributed object storage. Even if you did benefit, spend the extra $ and get dedicated compute hardware; keep them separate.
my VAR has latched onto the idea of a Cleversafe back end + Avere front end (NAS + tiering/caching), which seems like an interesting combination. I have warned him against Ceph; it's not mature enough for most customers out there. (Not sure how much better Cleversafe is, but they do seem to be the market leader, so that says something at least.) I believe Ceph has NAS as well, but that's another thing I wouldn't put much trust in at this stage of their game.
I'm not in the market for such a system and don't imagine that changing in the next 2-3 years anyway.
might it be easy
to make a firefox (or other browser) plugin to get rid of this header? Or perhaps just bake it directly into the browser; enough users would benefit from it.
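For what it's worth, the header-stripping idea is trivial to prototype as a local proxy rule. A minimal sketch as a mitmproxy addon (the header name X-UIDH is my assumption about which header is at issue; note a header injected upstream by the carrier would be re-added after traffic leaves the proxy):

```python
# strip_header.py: remove a tracking header from outgoing requests
# run with: mitmproxy -s strip_header.py
def request(flow):
    # drop the header if present; no-op otherwise
    flow.request.headers.pop("X-UIDH", None)
```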
would it kill ya
to include some pics?
I was curious, so I ended up going to ZDNet to see pics.
that this is limited to the credit line of the card? If I have a $30k credit line, there's no way the card/system could approve a $999,999 transfer?
I don't get out much
but I was kind of surprised to see an official Delta Airlines computer running XP (it had the XP screensaver, at least) a couple of weeks ago at the gate I was at. Or maybe it was a generic computer managed by the airport and just used by whatever airline is at the gate; I am not sure.
What kind of generators spin up in 20 milliseconds? Maybe 20 seconds... but even then, that is cutting it way too close for me; I want at least 5 minutes of runtime in case something goes wrong with the transfer (the same reason I won't put gear in a data center that uses flywheel UPSes).
hybrid not tiering
IMO anyway, the more traditional setups that have SSD+HDD in the same system and do sub-LUN tiering aren't hybrid; I would call that tiering. I would call hybrid something more transparently integrated, like Nimble, or a few others whose names escape me right now because I'm really tired.
I got my 7450 installed on Monday though, so I'm pretty happy. Maybe next week I'll have time to play with it.
dropping Lumia too?
the headline says Lumia, but the article says Nokia.
Of course we knew MS had a limited time frame to use the Nokia name, though I don't recall seeing any restrictions on Lumia.
Not that it matters to me, but it is interesting that they would brand it Microsoft, given the extent they went to in the early days to hide the Microsoft brand on the Xbox.
should be easy
to beat public cloud object storage when you're talking a PB or more of data; that's quite a bit of (initial) volume.
object storage players have been doing this for years, so I'm not sure why this might be news. I was thinking maybe it was news because it was going to be small scale or something, but a PB of data is significant to all but the largest orgs in the world.
Re: US will still get your data
Even if they do win, the threat of having future rulings overturned etc. will probably always be there. So if you're concerned about such a thing, there's no point in even considering such a cloud company. You can encrypt a lot of your data, but obviously most active data has to be in a decrypted state in some form to be read. (I suppose one exception is if you're doing nothing but storing and retrieving encrypted data and decrypting it off the cloud for reading, in which case this new data center buys you nothing new from a security standpoint.)
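The store-and-retrieve-only case is simple to do client-side; a minimal sketch with the pyca/cryptography library (key management is the hard part and is out of scope here):

```python
# client-side encryption: the cloud only ever sees ciphertext
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # keep this key off the cloud
f = Fernet(key)

ciphertext = f.encrypt(b"customer record")  # upload this opaque blob
plaintext = f.decrypt(ciphertext)           # decrypt locally after retrieval
```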
Re: VM Migration
I remember about 4 years ago I met with the head of EC2 and their chief scientist (the head of EC2 was the brother of the CEO of the company I was at at the time). We vented our frustrations with their platform (it wasn't the first time my boss had met with them to say how unhappy he was as a customer), and they acknowledged the problems (but never fixed most of them, likely to this day).
ANYWAY, the topic of vMotion came up in the discussion and they said "well, when you move a VM you get a spike in latency for a few seconds; we don't want our customers to endure that, so we don't have it". I think that was more of a cheap excuse, because they probably weren't technically able to do it, due to the outdated and hacked Xen that they had (and probably still have).
As a customer I'd be more than happy to endure some latency during a VM migration if it means avoiding hard downtime of that VM; easy choice to make.
I moved my company out of the EC2 cloud about 3 years ago. About a year afterwards, a developer was doing something, and when I asked why (I forget what it was), they said it was in case we had to rebuild a server (not too uncommon in public cloud). I laughed; to this day, 3 years later, we have not had to rebuild a server for any reason.
Lots of apps have single points of failure; not everything is a stateless web server. One of my big arguments against such cloud providers is just that: most developers don't build for failure, even the dozens I have worked with who built their apps from the ground up in cloud environments. It's simply not a priority. That makes sense to me: you have to choose between building features for the users and making the app really robust, and I'd wager more than 95% of orgs choose the former (in all of the "web" startups I've worked at over the past 15 years, that number is 100%). Only when the latter becomes really, really painful do most orgs invest in it (I've yet to work for a company that did), because it's quite complicated to get right, and in many cases it is simpler and more cost effective to provide higher levels of availability in the infrastructure than in the app. (That usually flips around when you get to really large scale, with systems numbering in the thousands at least, but most orgs aren't at that scale and never will be.)
most management types see cloud and think of it as a utility that will never go down, some magical thing that just works... there's a ton of work that has to be done to leverage such cloud providers properly, and most folks never get around to doing it. It certainly was never, and is not, worth the effort for my org, or for any company I have worked for in the past; the return is simply not worth it given all the other compromises (both in cost and tech) that must be made to make things work.
Re: Where are the 2TB SSD's?
There are 1.6TB SSDs in the enterprise space (they seem to start in the $2-3k range).
I imagine 2TB SSDs will come when they are more cost effective for the consumer space. Right now a 1TB drive seems to run well north of $500, so perhaps when a 2TB can be sold in that price band it will come.
For one, supply and demand: if flash were that cheap, people would be buying flash and not disk, and there is not enough manufacturing capacity in the world to come close to satisfying anything remotely resembling that demand.
Two, the cost of spinning rust has always been going down and continues to do so. I don't understand the logic behind "flash is getting cheaper, so it should be cheaper than spinning rust" (which is getting cheaper too).
Flash involves a lot of precision as well; just look at the massive quality issues many of the lesser brands have had over recent years. And ask Samsung how much it cost them to develop their 3D NAND technology; it probably wasn't cheap. (My laptop has an 850 Pro in it, so far averaging 1TB of writes per month over the first two months of usage.)
The best solution will vary, of course. For me, the priority goes to highly reliable, proven systems. Server-side SAN sounds nice on paper, but to me it's still too complex to build a solid system from; in a few years maybe it'll get there.
Forgive typos, I am on my phone.
Re: Sustained writes flash sucks
How does 8PB of writes per SSD sound? That's what's behind HP's 5-year unconditional warranty on 3PAR SSDs (8PB over 5 years works out to roughly 4.4TB of writes per day, every day).
Deploying my 7450 in about a week or two.
Violin is 2 years too little too late
why compete against yourself?
EMC owns, what, 75% of VMware? What's the point of trying to compete against them aggressively when the profits from either go to the same place?
EMC has this too, which I didn't know about until recently (I don't follow them too closely)
(you won't find any special array hardware sauce there)
so it seems to me EMC has already been thinking along these lines; nothing new for them to learn here.
Cut to the chase
Just show me ads for porn and I'll be happy.
Most don't need it
None of the companies I've worked at over the past 15 years were in a position to do such a move. VM or server lives are measured in years. It is not difficult to repair a failed one. In fact we have not had to rebuild a single VM since we left Amazon's shit cloud 3 years ago. VM failures are very rare: I've seen maybe 4 in 3 years, all fixed by power cycling the VM (kernel panic or something), and one VM host panic in 8 years of using ESX.
Cloud is mostly hype; the amount of effort required to do it right is absolutely huge, and for most orgs it's a waste of time because they aren't at that kind of scale and probably never will be (be realistic!).
Sure it sounds cool, but I'll argue till I'm purple that utility computing fits the needs of greater than 90% of orgs better than true cloud. But utility computing is not hip.
All of the companies I have worked at have been web-style startups. The current one and the previous one were built on cloud (badly). The first one collapsed. The current one moved out 3 years ago, and even if you toss out the cost savings, the improved functionality, flexibility, performance and availability reign supreme.
Excuse typos, I am on my phone.
disable secure shell
enable telnet ..
i don't care about IoT
I'm more concerned about the "smart grid".
IoT is easy to secure: just don't connect the damn things (i.e. don't use them). The smart grid, on the other hand, is likely to be forced upon us whether we want it or not (at least that seems to be the trend). I am not concerned about someone "hacking" the power in my apartment; I'm much more concerned, no matter how "secure" the technology claims to be, that bad stuff can feed back to the power sources or distribution centers and shut them down (or worse).
40 or 400
400km doesn't sound metro to me
expensive I think
One of my friends lived and worked there for some time (and is one-third Bermudian, or whatever they are called). One tip he gave me: assume each bag of groceries will cost you $100. The salary one gig there was offering me seemed tempting until my friend told me about the cost of living.
My commute is a 20-minute walk (it used to be 10 minutes before the company moved further away; the new office has a long lease, so it won't be going anywhere anytime soon) in the Bay Area. At one company I was at four years ago (in WA state), the walk was literally about 500 feet, across the street. I had co-workers who parked further away than I lived. I was overjoyed when they said to come in for an interview; it wasn't until that point that I realized how close they actually were (I knew they were close, just not across the street!).
Some people have asked whether I get called in all the time because I am so close; I have not been, perhaps twice in the past 3 years.
It's nice to be able to work from home in the morning and then get to the office around noon (I only go to the office because I want to, not because I need/have to).
It's one of the reasons I don't even talk to the companies that try to recruit me (it seems like 2-3x/week): quality of life is too high, and I don't want to be in a pressure situation where some other company is pressuring me to leave because I went down the interview path with them, etc. Nice to feel loved though :)
I had a 1+ hour commute once, 14 years ago; it lasted about two months before I moved closer to work (down to a 3-minute drive in that case). I just can't put up with that kind of shit, though I suppose maybe I could if it were just driving a long distance with no traffic.
I understand the point of this article. The kinks are in large part why my systems still run on a SAN (and diskless; they boot from SAN too). I don't see this changing in the next 1-2 years anyway.
Lots of companies out there have been trying to integrate the server side with arrays. It's not easy to do, and it gets almost exponentially harder when dealing with writes and clustering; that's a lot of logic required, a lot of complexity.
I was pretty excited when QLogic announced their Mt. Rainier technology, mainly because they said they were going to offer cached writes too. My workload is 90% write from a storage perspective, so stuffing SSDs into the systems as read caches isn't going to do crap.
For a while at least, my talks with QLogic didn't get too far, and they did not have the write caching ready (with no ETA at the time; this was 1-2 years ago). Looking now, it's hard to tell whether that is done yet or not (they still talk about it in their documentation), and in any case I'd feel most comfortable deploying something like that only if it had support from the SAN vendor as well (array-based snapshots, data consistency, etc.).
But for me, at least for now, it is a moot point: my all-flash 3PAR 7450 arrives in a couple of weeks, and that will allow our systems to scale nicely without complex, sophisticated clustering software on the hosts to offset the performance penalty of spinning rust. Our data sets are small and are not growing very quickly.
So, people have been working on this for years already...
us dear readers
will continue to vMotion our VMs across hosts to patch the hosts with zero downtime, like many of us have been doing for over 10 years now.
Haven't used it much, but I dislike the object-orientedness behind it. I understand it's mainly because Windows is just a big bunch of binaries...
Having "grown up" in the Linux world over the past 20 years, I latched on to the strings-based approach to scripting and never had much (or any) interest in object-level stuff. It's a totally different line of thinking. I have a few scripts that use really basic objects, but if you look at the tens of thousands of lines of scripts I've written, probably less than 0.1% of them have anything to do with objects. I don't see that changing, so I don't see myself ever really using PowerShell. Not that it's a big deal, since Windows is not an area of interest for me.
Of course, with Linux you can work with objects for some things if you please (Perl etc.), though most underlying system components are likely still string-based (maybe with systemd that will change; I haven't looked at systemd, am trying not to think about it, and likely won't have to deal with it for another couple of years). The contrast looks something like the sketch below.
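A toy illustration of the two styles (Python here for neutrality; on the PowerShell side, something like Get-PSDrive hands you objects rather than text):

```python
# string style (classic Unix): run a command and parse its text output
import subprocess

out = subprocess.run(["df", "-P", "/"], capture_output=True, text=True).stdout
used_pct = out.splitlines()[1].split()[4]  # Use% column, as a string like "42%"

# object style (PowerShell-like): ask an API for structured values instead
import shutil

usage = shutil.disk_usage("/")  # named tuple: total, used, free (bytes)
used_pct_obj = 100 * usage.used / usage.total
```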
As a very casual user of Windows (I manage about 15 Windows servers and around 600 Linux systems), I am not fond of the new UI in Windows 2012 (I've yet to use Windows 8, and maybe if I'm lucky I never will; Windows 7 works OK). Once I install Classic Shell it is a bit better. Our usage of Windows is pretty unsophisticated, so I'm sure we are not leveraging any of the fancy new features of 2012 (outside of 2012 Storage Server clustering, which is our new low-end NAS platform, and where I have been fighting known MS bugs for the past 4 months while working with HP support; they basically say it's a known issue, MS is not going to fix it, and even if they did it would take forever, so use this workaround instead).
The thing I dislike most about the UI in 2012 (other than Metro, anyway) is how the automatic update prompt consumes the whole screen, like it's punching you in the face to read the message; really terrible. Maybe acceptable for a consumer OS, but not for a server OS. Keep it in the system tray, and maybe have a little bubble pop up out of the tray icon as a reminder.
While StoreAll is a scale-out filer, it does not compete with Isilon; HP repositioned it as an archive platform. It's not geared towards the same market as Isilon (HP used to try to sell it that way, but it didn't work).
StoreEasy may be their scale-out filer for transactional workloads: up to 8 controllers, I believe, running Windows 2012 Storage Server software. Having used it for the past few months, I am not too fond of it.
StoreAll is also what powers the underlying storage of StoreOnce.
HP offers Helion OpenStack (downloadable) to cover the virtualization angle.
I think HP Data Protector should go on that list, perhaps next to Avamar.
HP ArcSight and TippingPoint are both security products.
I was thinking more along the lines of
"I felt a great disturbance in the Force, as if thousands of routers suddenly cried out in terror, and were suddenly silenced"
Re: Was WiMax better or worse than LTE?
Rarely have issues here in the Bay Area on AT&T. Data-wise, Sprint 4G was lucky to get me 1.5Mbit; 3G was maybe 2Mbit on a good day, while 4G was more often than not maybe 500kbit on WiMAX. In the end I just locked my MiFi on Sprint to 3G, because 20kB/sec speeds just sucked on 4G.
I am a fairly new AT&T customer (2 years or so); maybe it was worse before that. There was one big outage a few months ago, when a tower was down for a day or two, knocking out all data services. Voice worked fine, though.
I switched to AT&T mainly to use the GSM HP Pre3 at the time. 10 months ago I upgraded to a Note 3 and didn't have a reason to look elsewhere at the time.
Re: Was WiMax better or worse than LTE?
Purely tech-wise I am not sure, but in real-world experience I had a Sprint MiFi 3G/4G WiMAX thing for a couple of years, and 3G was almost always consistently faster than 4G. Once I moved to AT&T, HSPA+ was waaay faster than Sprint 3G or 4G. I rarely got good reception with 4G; the MiFi never really showed higher than 25-30% signal strength (and this was in pretty major urban areas). I'm sure if I could have landed in a spot with 75-80% signal strength, 4G WiMAX would have looked a lot better.
curious how it works
in case someone here knows: with flooding wifi with deauths, how does that not impact the hotel's own wifi? Unless the hotel wifi is on a single channel and the deauths are flooding all the other channels (in which case, couldn't you work around it by using the same channel as the hotel?)
I rarely use hotel wifi myself, whether it is free or not. For some reason I feel safer using the MiFi on my phone, and I'm paying something like $50/mo for MiFi anyway, so I might as well use it (unless cell coverage is bad).
Hotel wifi is generally bad in my experience anyway.
On that note, I've never used other public wifi spots like coffee shops (I don't drink coffee, so I'm rarely in one anyway), airports (I don't fly often, and have never used wifi on a plane) or whoever else seems to offer "free" wifi; I generally don't trust them either (not that I feel the urge to use them in the first place, so it's not like it's hard to resist).
don't like windows NFS
Windows 2012 NFS sucks, badly. I thought it might be an OK solution for my sub-TB of NFS data (migrated from Nexenta ZFS), but it's been almost nothing but issues. I haven't tried R2, but HP says there is really nothing related to NFS in R2 that makes it worth upgrading, and nothing that will fix any of the outstanding problems, which MS is likely never to fix, including:
- I often get an I/O error when accessing a snapshot for the first time (over NFS); the second time works fine (still working a support case on this; it sounds like another issue that won't get fixed).
- when dedupe is turned on, the 'du' command does not return accurate information the first time: it typically reports less than 20kB for files that are multiple GB in size. Forcing the NAS to re-read the file results in correct size calculations (for about 60 seconds, then the incorrect sizes come back). Known MS dedupe issue, no fix, so I disabled dedupe on most of the volumes.
- dedupe operates at a block size of 32kB, which is pretty coarse. The first volume I migrated had literally 20+ copies of 5,000+ images, many coming in at 20kB and less, so dedupe didn't do well (I didn't realize the 32kB thing until after this migration). Dedupe is also not inline. So I figure once our deduplicating 3PAR 7450 is in place, I will shut off dedupe on Windows and just use the 16kB inline dedupe on the 3PAR instead.
- the cluster is FAR too sensitive to DNS configuration.
- there is an issue where any new volume I add to the cluster will by default take over the existing E: volume, knocking its drive letter away and messing up the NFS (and, I assume, CIFS) shares. Another known MS issue that is not likely to get fixed, but at least I know to be very careful and can prevent the system from making this mistake in the future.
- the snapshot scheduling in Windows sucks. It schedules fine, but the retention policy amounts to "whenever I happen to reach the max number of snapshots, which I think is 64". I want to define retention periods like hourly snapshots retained for 24 hours, daily snapshots retained for a week, etc. I could write something myself, I guess (a sketch of the idea follows this list).
- NTFS is not case sensitive by default. This one took support a couple of weeks to track down: I kept getting errors with rsync because I had some directories containing the same file names in multiple case variants; once we made NTFS case sensitive via the registry, it worked better.
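The retention logic I have in mind is simple enough; a hypothetical sketch (the actual create/delete calls would depend on whatever snapshot tool is driving it):

```python
# given snapshot creation times (newest first), pick which ones to delete:
# keep every snapshot under 24h old, plus one per day for the past week
from datetime import datetime, timedelta

def snapshots_to_delete(snaps, now):
    keep = set()
    for ts in snaps:  # snaps: list of datetime objects, newest first
        age = now - ts
        if age <= timedelta(hours=24):
            keep.add(ts)  # hourly tier: keep everything from the last day
        elif age <= timedelta(days=7):
            # daily tier: keep only the newest snapshot seen for each day
            if not any(k.date() == ts.date() for k in keep):
                keep.add(ts)
    return [ts for ts in snaps if ts not in keep]
```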
I've probably logged a good 18 hours of support calls on this Windows 2012 storage cluster for NFS to date, and I'm not even fully deployed yet.
One good thing about Windows 2012, though, is that at least I can write zeros to the volume and reclaim the space on the back end; I couldn't do that with Nexenta (the zeros would get compressed and never make it to the back end).
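For anyone unfamiliar with the trick, it's just filling free space with zeros so a thin-provisioning array can reclaim the blocks. A minimal sketch (the E: path is an example; on Windows, Sysinternals sdelete -z does the same job properly):

```python
# fill free space with zeros, then delete the file so the thin array
# can detect the zeroed blocks and reclaim them
import os

CHUNK = b"\0" * (1024 * 1024)      # write 1 MiB of zeros at a time
path = "E:\\zerofill.tmp"          # example path on the thin volume

f = open(path, "wb", buffering=0)  # unbuffered, so disk-full surfaces as a write error
try:
    while True:
        f.write(CHUNK)
except OSError:
    pass  # volume full: all free blocks have now been zeroed
finally:
    f.close()
    os.remove(path)
```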
Maybe it works great for CIFS, but for NFS stay far away from Windows 2012 Storage Server. My requirements are REALLY basic: file serving to servers, less than 500GB of data (currently), snapshots, and high availability. Oh, and something that is supported (the 3PAR support matrix is very strict). I also didn't want to pay very high $$ for sub-1TB of data. I love Linux and have been using it for 18 years, but the state of file systems on Linux means there isn't a solution out there that is good enough for me (e.g. no snapshots; and no, I'm not going to use btrfs or ZFS in production, though I do use ZFS on Linux at home). That, and a lot of Linux HA involves something like DRBD, which is just stupidly wasteful when you have a highly available SAN on the back end.
Inline compression would be nice too. NFS v3; I don't care about any NFS version other than 3. I had heard good things about Windows Server NFS in recent years, and HP claims they have a lot of customers using it with success (I have been in direct contact with the product manager for many months now).
Maybe it's just me; I do find ways to break things in unusual ways.
HP announced integrated file serving support for 3PAR last year; maybe when that comes out it will be good. I plan to deploy it at another, smaller site whenever it becomes available.
Nexenta was bad, and this might just be worse, though at least I get supported high availability this time. With Nexenta, we originally got it to run HA within VMware (Nexenta VMs); they said they supported it, we tested it and it worked, but when it came time to really test it, it failed badly, with lots of data corruption and reboot loops (until we disabled HA; not many problems since, but I don't trust it anymore).
Red Hat Storage Server doesn't meet my needs either, so that's out.
glad to stay away
apple reaches out to me, it seems, 2-3 times a year to come work for them; I've never been interested. Never liked the idea of working for a big company.
maybe too early
vmware seems to have yanked the blog post you pointed to.
I assume when you say mobile broadband you mean LTE... I could have sworn my latency even on LTE was well above 100ms on every occasion, but I could be wrong.
I certainly see less latency between our main data centers in Amsterdam and Atlanta, though; it hovers at around 90-95ms most of the time. Not that I need to vMotion between them.
Perhaps I don't get out enough, because I don't believe I've ever had a conversation about VDI with any person, at any company, at any time. I see these new storage startups touting VDI left and right, but it seems the market for VDI is pretty small.
I came across this blog post a few years ago on VDI and thought it was a good read:
Also, the thought of running a hundred users on a single server, when any given user can lock up a full CPU core (assuming they are limited to a single core) by running something crazy in their VM, seems kind of scary to me.
But perhaps for helpdesk/service types of workloads it's fine. I've never dealt with that kind of stuff, though; hell, I haven't dabbled in internal IT in 12 years.
Re: I can see why you would simply patch the VM
because these public clouds suck ass and in many cases do not support live migration, even though the tech has been available for over a decade now (I just checked: VMware vMotion came out no later than 2004).
customer buys half a rack of equipment; we now take you to our reporter Ollie Williams for his assessment of this new cluster. What's this all about, Ollie?
design it right
And you may not have an issue. HP recently came out and claimed there has never been a software update that required data migration on 3PAR, ever. I do recall being told there have been, I believe, two 3PAR updates (a looooong time ago, certainly before I became a customer in 2006) that required some degree of hard downtime, but no data loss or migration. The end-to-end virtualization of the platform means data migrations, if ever needed, should be possible on the fly; you're not dealing with disks, after all, but with virtualized chunklets. Simply replicate from one format to the other and flip the switch (that technology has been on the platform for many years already; it's how they convert between RAID levels and move volumes between tiers, totally non-disruptively).
On the topic of upgrades: just about 3 weeks ago I did a minor software upgrade on my dual-controller 3PAR. The first node upgraded fine; the second failed halfway through (the internal disk on the controller died). The system was in a degraded state for a long time as support struggled to get the new controller to join the existing one with mismatched code revisions; eventually, once I got the case escalated to engineering, they figured out a way to do it. Performance was not good during this time, of course, made worse by our 90% write workload. I had been trying to get a 4-controller system in for many years, but the company didn't want to pay for it originally. Our new all-SSD platform, which I just submitted the order for yesterday, is 4-controller though (nothing to do with the recent upgrade; this was approved 10 months ago). So it will be nice to be on a 4-node 3PAR platform once again. It doesn't solve world hunger, but it really makes me not want to even consider an architecture that can't go beyond 2 nodes in a unified cluster (e.g. I don't view NetApp clustering as real clustering; it's more like a workgroup of systems you can move volumes between, similar to how you can move VMs between VMware hosts but can't have a single VM span more than one host).
(unlike many 3PAR software features, that one is included at no charge)
Public cloud migrations are of course generally far, far worse. They often go something like: "you have 30 days to move your data before we delete it; oh, and we don't offer support or help, you should know how to do this".
maybe someone will actually buy this
I was told a while back that a former Pillar sales rep was brought back to Oracle to sell Pillar to internal Oracle users/groups. NOBODY wanted the tech, not even inside Oracle itself!
On paper this seems OK, but I really don't see any reason to give Oracle consideration for serious storage (outside, I guess, of their engineered systems, if I were in the market for such a thing). The capabilities of this new system look like catch-up for the past ~6+ years of being MIA, which is fine, but more than anything it means the platform is not mature, and won't be for a while.
I don't know about you, but I was running Linux in a mission-critical role back in 2003 (RHEL AS 2.1, if I recall right; before RHEL existed it was "Red Hat 7", I want to say, and RHEL 2.1 was a simple migration, basically the same software if I remember right) in a data center handling mobile e-commerce transactions (easily half a million $/day in sales, I think, in the early days). It wasn't native Linux code but rather a combination of Tomcat and BEA WebLogic on Linux, with an Oracle back end on HP-UX (PA-RISC, then Itanium) and EMC, then HDS, storage at the time.
I don't miss having to hack kickstart install discs to get compatible network drivers inserted into them so we could install our systems; stupid Broadcom. Though a few years later the Intel e1000 went down the tubes too (drivers changing constantly in incompatible ways).
I remember deploying my first VMware in production (GSX 3.0, if I recall right), I want to say in 2004, on the same application stack. It was an interesting weekend: a last-minute deployment to try to save the company's reputation from bugs in their software. After working 50 hours over 3 days (most of that configuring the in-house application stack), it worked, and everyone loved it.
ODMs do well for the cloud folks, I suppose, but the lack of support and poor quality make them a poor choice for most folks out there. Not everyone has the time to babysit the systems and write custom code to keep the hardware running. I did one fairly decent-scale white-box deployment back in 2005 (cost was the main driving factor), and, well, I was glad when I got to use HP ProLiant again. You certainly pay more, but you get a lot more too. I love iLO 4 (even more than iLO 3): the remote KVM console is SO FAST now, plus email alerting for system events, failures, etc.
Firmware updates are painless too. My personal Supermicro server's remote KVM management card has been offline for months because the last update required that I wipe its configuration, and I haven't gone back on site to reconfigure it yet. Yeah: not for serious business use. OK for personal use.