I think you mean
StoreVirtual, not StoreAll.
I think it's been close to 2 years since one of my CCs was last compromised. I use them in many places, including hotels (usually Best Western or Holiday Inn), airlines, and online (using BofA "ShopSafe" virtual CCs). For a year or two it seemed like I was getting nailed 2-3 times a year, with most of those being ShopSafe compromises, which don't matter to me because nobody else can use those numbers. But the BofA fraud system, last I dealt with it anyway, couldn't distinguish between a ShopSafe compromise and a main card compromise, so I had to talk to a rep who would manually clear the fraud alert after verifying it was ShopSafe.
Maybe I've just been lucky, I don't know. Outside of a brief trip to Europe, Target is the only place I've been to that accepts the "chip" on my cards (which have no PIN); other places tell me their chip readers (the few that have them) don't work (yet). I still like swipe myself, much faster.
OK, having the private key there is not good (didn't notice that bit).
I still maintain it is overblown. This is an honest mistake, I am sure, and I'm also sure Dell will be releasing an update to fix it. No malicious intent (again assuming this cert is from Dell and is not just forged to look like it is from them).
Whether it is this, or Heartbleed, or Shellshock, etc. (I'm more familiar with those, since I don't deal in end user computing systems), so many alarmists out there behave like the world is going to come to an end over every little thing.
(For another poster who captured my post: I don't delete my posts, with one exception. I deleted one in a thread about el reg's new Android app, where I wrote a long post on how it didn't work and then later realized I was not using the new app, since you had to jump through hoops to get it. The post wasn't relevant because it was not about the software version in question.)
While I didn't notice that the private key was included, my opinion still stands: the situation is overblown. Dell will fix it.
It's a CA. People seem to be worried that if the CA gets compromised then their systems can be abused, but how is that different from any other CA? Dell probably signs their software with this CA. Now, if this cert doesn't come from Dell and is on Dell computers, that may be more of an issue.
It looks like newer Firefoxes use the Windows SSL keystore; at least my Firefox 38 ESR seems to.
On the list of things that may keep me up at night, this Dell thing (even if I had a Dell system running Windows) doesn't even register.
Some random folks trying to get free press, I suppose.
Open source etc.? No, it doesn't interest me in the slightest, because I run much more than just OpenBSD in VMs. I'd rather have a more common platform that can run everything (or close to it), especially when my non-OpenBSD VMs outnumber the OpenBSD VMs by a huge margin.
(On a similar note I have no interest in KVM or Xen either; VMware has worked very well for me for 16 years now.)
OpenBSD VMs run fine on VMware already. Nothing here to get me interested in changing.
What is this bullshit you are spewing again? Amazon suffers from power outages too:
"But despite all the precautions – and there were many – the US East-1 region of Amazon's cloud was brought down after an electrical surge.
And not because of the power outage, but because of faulty generators and some bugs in some of the control software that Amazon has created to deal with failover and recovery of various cloudy services."
I've had equipment hosted at one data center that suffered a power outage (I mean an equipment failure which resulted in an outage of both power feeds at the same time), one data center in the past roughly 15 years. We moved out of that data center shortly after (I was a new employee at the time); 2-3 years later that facility suffered a 30-40 hour outage due to a fire ( http://projects.registerguard.com/csp/cms/sites/web/news/cityregion/16561418-57/story.csp ).
I wonder why they kept ShareFile; it seems like a logical thing to associate with the "cloud" (hosted) GoTo products.
That's nothing. One cert expired? Pfft. I remember, what was it, 2004 perhaps, a big CA cert expired. Warnings had been coming for years that this cert was going to expire. I was working at a company with Nextel as one of our clients. They had outsourced much (all? I don't know) of their IT to IBM Global Services. This cert expired, and from what was explained to me, virtually every single cert at Nextel was hit as a result. All certs went bad globally at the same time. It was pretty amusing. I don't remember how long it took them to recover.
I don't use Mongo, but if costs for dev are a problem, just give discounted or free dev licenses to customers paying for production. Pretty common, hell, almost universal practice.
It was mainly white hats doing this. They exposed flaws, which Tor fixed (maybe not all, I don't know).
Never used Tor myself, but this doesn't seem like much to get upset over. Could have been far, far worse.
62% off list is not extreme by any stretch for HP 3PAR. After HP bought 3PAR, one of the first things they did was jack up pricing so they could show higher discounts. I heard part of it was agreements with some distributors that required a certain level of discounting. Prior to that, 3PAR regularly walked away from deals that needed more than, I think, a 47% discount.
They used 96 ports and 8 switches (it's in the disclosure doc).
Scales to 160 x 16Gb FC ports.
Rated for 75GB/sec (read). Some of the SPC-2 tests showed higher throughput, but it fell relatively short(er) on the large file test, so 63GB/sec overall. Probably 3-5x what any earlier gen 3PAR could do.
First SPC-2 ever for 3PAR, I believe.
The system will go faster in the future as the software matures to the new hardware (they talked about this during Discover).
is "100W IOPS" ???
when I see 100W I think 100 watts. But seeing "100 Watts IOPS" of course makes absolutely no sense..
40GbE has been available for a while (in 1-2U form factor switches).
The people that can't afford 10GbE very likely have no need for such storage to be connected to the network. Regular old SAS SSD platforms driving gigabytes a second (in some cases tens of gigabytes/second) and a million+ IOPS are enough for just about anything.
The 500 or so VMs connected to my org's flash array combined typically pull under 100 megabytes a second on average, bursting to maybe 400-500MB/sec. We went all flash because it was cheap enough. The system is connected by 6 x 8Gbps Fibre Channel to about 20 HP DL38x servers.
I was talking recently to a friend who is the CEO of an NVMe startup from Israel. Their tech sounds pretty good, but the use cases are pretty limited, until costs come down and the standards mature enough that using NVMe instead of SAS for SSDs just becomes a "natural progression", because most workloads won't see any benefit.
Most of my own workloads have seen no benefit even from going all flash. Our workloads are 90%+ write, which means cache hits 99% of the time, which means low latency 99% of the time (under 2 milliseconds on average for writes) even on spinning disk, because the I/O is going to the cache. For me, sub-5ms response times for storage are excellent; I have nothing that benefits from sub-millisecond. Sub-5ms becomes a rounding error, in my experience anyway.
Flash of course gives us the capacity to scale up more; there's only so much data cache in the controllers (though the new flash array has 5.3x more cache than the previous spinning disk array). I also have to be much less concerned about unusual I/O patterns, since the flash can take it, though in general our I/O profiles are very predictable.
Actively refusing the Android 5 upgrade by having wifi off most of the time.
The only reason I have a passcode is that I had to install a custom SSL cert for my personal ownCloud server for address book sync, because Android does not support the CA I bought it from. Otherwise I would have no passcode.
I've never lost my phone nor had it stolen (I've had cell phones since 1998). I'm very careful what apps I install. Based on history I believe my vulnerability is very, very low even without a passcode on the phone.
Also, I don't do anything like online banking on the phone either. I do use the company VPN on occasion with Duo Security two factor (though it uses the phone for the 2nd factor).
Their tech is really squarely aimed at general file storage: no transactional stuff, nothing performance sensitive. Think file shares for end users and document storage, that sort of thing. Nasuni reps are pretty up front about this (I talked with them a few months ago; one of my 3PAR reps went there). They seem to have pretty neat tech on paper, and it is probably a decent fit for a lot of IT organizations (if for nothing else, a nice way of providing "WAN optimization" and portability for the data). For my workloads, which have nothing to do with document storage (mostly application VMs and MySQL databases), it's not useful in any capacity.
I'm not a storage expert, but I have been managing 3PAR systems for 9 years and have never once used Excel (or any other spreadsheet).
I do remember back in 2004 or 2005 I was at a company that had EMC (later HDS, and later NetApp) storage, and I saw the storage admin using Excel a lot for laying out LUNs etc. I still recall that moment to this day: I saw that and decided I had no interest in storage (I was doing servers/networking/etc at the time).
From what I read in the article, nothing much excites me. Probably because only one of the companies I have worked for had data in files that was worth some sort of analytics. However, that company was in the business of analytics and wrote their own stuff (at the time entirely custom, though they were trying to move to Hadoop), so I don't think they would give up their own stuff for some fancy storage array that is too limiting in what it's capable of doing (or not enough scale, etc). Some people called them big data at the time (5-6 years ago), but I didn't; they were processing maybe 10-15TB of fresh data a day (generally with a rolling 30 day window).
But when it comes to the vast bulk of my storage, which is entirely block based, whether it is VMware VMs or MySQL databases, there's really no way for any storage array (that I can think of anyway) to have any insight into that kind of stuff. What's it going to do, start up a MySQL instance on itself and go through the tables without any insight as to what the app is or how it works? Mount VMFS volumes and look inside VMDKs? What value does that have?
We have some file data, but in the grand scheme of things it's probably less than 0.5% of our overall data set. Backups excluded, of course: we have 50TB+ of deduped backups stored as files (NFS) on a pair of HP StoreOnce appliances (~33:1 dedupe ratio) at different data centers.
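For scale, if that dedupe ratio multiplies straight out (a rough sketch, assuming the 33:1 applies evenly across the whole 50TB):

    # rough logical size behind the deduped backups
    print(50 * 33, "TB logical")   # 1650 TB, roughly 1.6PB of protected data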
I also suspect many "cloud" companies will ignore this kind of trend, because they tend to like to do things on their own rather than bring in canned solutions.
Sorry, I should have been more clear: has there ever been one reported to the public is more accurately what I was asking.
Obviously bugs can exist in open or closed source.
I've thought about it from time to time, and I really can't recall any security issues in ESX that allowed this kind of thing to happen. I just did a quick check and I see one related to VMware Workstation (which I do recall), though obviously Workstation and ESX are totally different beasts.
I see an issue related to VMware cloud (vCloud I assume?) from a few years ago that allowed someone to upload a VMDK and apparently read any file on the system (never have/will use vCloud anyway).
But I can't think of any time where, if you were in a guest, you could get to the host somehow. I do recall issue(s) related to the kernel file system driver that grants access to the host file systems via the guest, though I believe that again was on the desktop products (never used that driver either).
This is obviously not the first time such a bug has been exposed in Xen; a quick search shows one more from earlier in the year and apparently another back in 2012, maybe more, I didn't spend too long on it.
How about ditching the 90s era concept of hard allocations for resources and provisioning cloud resources based on resource pools? Give me W GHz of CPU, X amount of memory, Y IOPS and Z disk space, and let me carve it up however I want (any number of CPUs, dynamic allocations, etc). Bonus points if you charge only for the capacity that is used.
Like many enterprises have been doing for a decade now with virtual infrastructure in their own data centers.
Yeah, not holding my breath. Public cloud, ugh.
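Something like this, as a sketch (the pool shape, names and units are mine, not any vendor's actual API):

    # a resource pool carved into arbitrarily shaped VMs; bill only what's carved
    from dataclasses import dataclass, field

    @dataclass
    class Pool:
        ghz: float
        ram_gb: int
        iops: int
        disk_gb: int
        carved: list = field(default_factory=list)

        def carve(self, name, ghz, ram_gb, iops, disk_gb):
            # the only limit is the pool itself -- no fixed instance sizes
            if (ghz > self.ghz or ram_gb > self.ram_gb or
                    iops > self.iops or disk_gb > self.disk_gb):
                raise ValueError("pool exhausted")
            self.ghz -= ghz; self.ram_gb -= ram_gb
            self.iops -= iops; self.disk_gb -= disk_gb
            self.carved.append((name, ghz, ram_gb, iops, disk_gb))

    pool = Pool(ghz=64.0, ram_gb=512, iops=100_000, disk_gb=10_000)
    pool.carve("db01", ghz=16.0, ram_gb=128, iops=40_000, disk_gb=2_000)
    pool.carve("web01", ghz=1.5, ram_gb=4, iops=1_000, disk_gb=20)  # any shape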
I don't really see this as an issue; VMware has demonstrated for years that the system scales well for I/O. There is a really old white paper showing how they configured an ESXi 5.0 system to pull 1 million IOPS.
(For me, even if I got 1/10th of that level of performance per host on ESXi 5.5, I would be more than happy.)
Here's something else that is years old:
"But this year, we achieved 1 million IOPs from a single Virtual Machine. And guess what storage the VM was running on? Yep, a Violin Memory Flash Array."
The use cases for even faster I/O to me seem realllllly few and far between.
A solution looking for a problem.
Maybe someday KVM will be mature enough to be a good solution for me; for now it's not (I say this as a Linux user for 19 years and a VMware customer for 16 years).
There was an article recently, here on el reg I think, that said Intel designed the NVMe interface specifically for those flash killers. So when they arrive, products like DSSD won't really be impacted; I think they can just swap their flash chips for whatever the new thing is, and the software and hardware built around the super fast storage should stay the same.
Just another day as a VMware customer.
What seems like about 9 years ago, I had a Hoover and its "self cleaning" filter got clogged. I took it to a shop to get fixed, and while there asked what the repair guy recommended; he couldn't speak highly enough of Sebo. I had never heard of them myself. It took at least a month, if not more, to get the Hoover repaired. I used it for another year or so until it broke down (or the filter got clogged again and I destroyed it trying to replace it myself; it wasn't designed to be user replaceable and I am not a handyman, so I expected it might very well not go back together if I took it apart).
I forked over the $700 or $800 for the Sebo online. Seems like a quality product; I've since noticed them used in a few hotels. Obviously I have never had any issues with it. No idea how much power it draws, but it works great for me. I'm somewhat expecting it to last another 9-10 years (if not more).
I bought a bunch of vacuum bags with the cleaner itself, not realizing how infrequently I'd have to replace them. I replace them a little more than once a year (I've never seen the light come on asking me to replace one; the bags just get pretty heavy, and I have tons of 'em), so at this rate I have enough bags to last at least another 10 years.
It shocks me that so many folks push to just kill stuff without any sort of graceful failure. In the case of SHA1, for example, browsers and servers should be able to present a coherent message to the user about why things are not working.
Firefox, for example, gives this kind of error:
The connection to XXX was interrupted while the page was loading.
The page you are trying to view cannot be shown because the authenticity of the received data could not be verified. (<-- "could not be verified" to me sounds like an SSL cert issue, e.g. self signed certs. With self signed certs, browsers allow me to EASILY override and accept the cert; it should allow me to do the same here!!!)
Please contact the website owners to inform them of this problem.
which to me is not sufficient. It should give details (or have an option to get details) as to specifically what the problem was. Is SHA1 now disallowed? Or some other encryption standard? If I got that message, perhaps I could contact the site owner and tell them specifically what is wrong. As-is, it looks like a browser bug, and I use another browser to get to the site (in this case it happens to be a PDU; I assume perhaps it is using SSLv3, which maybe Firefox doesn't like anymore, but honestly I don't know, because it doesn't tell me).
It pisses me off to no end to see a browser update all of a sudden break sites that were perfectly usable in the previous version, because someone decided that some security standard was no longer valid. If you are going to break it, the least these people can do is show a more useful error message.
The same goes for web servers / load balancers. I shouldn't get some obscure SSL connection error when connecting via SSLv3 if the remote server doesn't support it. It should accept the connection and show me something that says "sorry, I can't serve you data because you are using SSLv3 and that is not secure anymore, please upgrade your browser" (to date I haven't seen anything like that).
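In the meantime, here's a quick way to get the detail the browser won't give you; a diagnostic sketch using only the Python standard library (host and port are placeholders for whatever device you're debugging, like that PDU):

    # probe which TLS versions a server will actually accept
    import socket, ssl

    HOST, PORT = "192.0.2.1", 443   # placeholder: the device's web UI

    for ver in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1,
                ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False       # only testing the handshake
        ctx.verify_mode = ssl.CERT_NONE
        ctx.minimum_version = ctx.maximum_version = ver
        try:
            with socket.create_connection((HOST, PORT), timeout=5) as s:
                with ctx.wrap_socket(s, server_hostname=HOST) as tls:
                    print(ver.name, "accepted:", tls.cipher()[0])
        except (ssl.SSLError, OSError) as e:
            print(ver.name, "rejected:", e)

Note that modern OpenSSL builds usually refuse to even offer SSLv3 on the client side anymore, which is part of why the browser can't (or won't) tell you much.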
RAM failure hasn't been an issue for a long time if you use the right hardware (no, it doesn't have to be mainframe or other "big iron").
"To improve memory protection beyond standard ECC, HP introduced Advanced ECC technology in
Online spare memory, memory mirroring... quite a few companies offer this kind of tech, though HP seems by far the most advanced and mature in my experience (I tried it with Dell a few years ago and it required disabling something like half the DIMM slots just for Advanced ECC).
Have too much time on their hands
Companies that are growing faster buy more IT?
Though that wouldn't make a good sales pitch.
There is a big difference between something like SaaS and IaaS. The article seems to interchange the use cases pretty freely, which is misleading, and it is very vague on provider capabilities. With IaaS, for example, the provider may very well have their "code" up to date, but it's still up to the customer to configure (or misconfigure) the "firewall".
The 20k series, which is bigger, came in June; then the 8k series, which is smaller, came in September.
I too agree there does not seem to be a bright future for Pure; I can't help but think of Violin and Fusion-io.
My ass it is.
I spoke to Opscode leadership (5 years ago, when it was still called Opscode) and told them that to their faces. They told me "oh, you know Apache, Postfix, Sendmail, etc. configurations, you'll get used to ours pretty quick too" (I had been using CFEngine v2 for the previous 6 years, so I was already using automation tools).
Fast forward 5 years (I've been a Chef customer for the past 5 years, though fortunately I don't have to deal with it too much; other people on my team handle it). My motto for Chef has been "Chef makes the easy things hard, and the hard things possible". You can Google that set of terms for a 5-8 page in-depth blog post of mine from 3 and a half years ago where I get into more details.
I tell people, as a systems admin/engineer/architect who has done servers/storage/networking for the past 19 years (I think I do a very good job too), that I still routinely struggle with how Chef does things. I'm not a programmer, never have been, never will be (some developers I know do call me a programmer based on the scripts and stuff I write). I can (and have) taught people the ins and outs of CFEngine in literally an afternoon. Chef's complexity is so far beyond anything that most anyone needs that it's absurd to recommend it as a general purpose tool. I begged and pleaded with Opscode for an "idiot mode" for people like me, but they didn't (and probably still don't) care to cater to that market.
I haven't tried to rip Chef out of my current place, because I just don't care enough to redo everything in something else, given that there are other people on staff who deal with Chef on a more regular basis. But if I was starting fresh, Chef is not a tool I would use, for sure. Not for the type of companies I have worked at for the past 15 years, anyway. I'm sure it does a great job for the hyperscale, totally dynamic, always changing environments, but those types of environments are very few and far between.
In that one Chef blog post from a few years ago, someone from CFEngine wrote a comment which I liked: they said CFEngine treats "infrastructure as documentation" rather than "infrastructure as code". I'm very much in the documentation camp myself (probably obvious by now). I believe in sane, consistent naming schemes, good stable designs and other things along those lines. No random stuff. I want to be able to look at the name of a server, or a storage volume, or something, and get some idea of what its purpose is. Chef is built more for a world where shit is random and you have to maintain databases of mappings of what does what (inside Chef) in order to configure things.
Oh, and operationally Chef has been terrible too. They have gotten better, but for a while they were taking full scale production outages in the middle of the business day (both theirs and mine, as we are in the same time zone). I complained directly to their VP of services (a former boss of mine; I know a few people that work at Chef), and he said that was the only time they could make sure they had all hands on deck for such operations. Which made no sense to me, because I worked for him at another company where all of our big deployments were done after hours (usually 10PM-4AM) and we made sure all hands were on deck. But recently they have gotten somewhat better with their maintenance windows.
I think Chef in general is a great idea, a great tool -- FOR THE RIGHT PEOPLE. It is absolutely not a system that should be adopted by organizations not willing or able to commit serious work to making it work right. I'm talking an order of magnitude more work than competing tools. There are many automation tools in the space; Chef is one of the most advanced, but with that power comes complexity and (for me) pain.
Until mobile folks give us an easy way to roll back major upgrades, I'd prefer to get such upgrades only when I buy a new device. Too much shit breaks. Give me security updates only on a given major version, please; yes, I'm willing to pay.
In the meantime I must be vigilant and keep wifi off most of the time to prevent my phone from auto-upgrading to Android 5.
I recently rolled back an upgrade to Samsung S Health because the new version could not do landscape mode, and lost the last 18 months of data in the process. Yay.
Tell them you are moving, and give them an address that isn't in Comcast territory. When I moved, I did this. I was happy to remain a Comcast customer, but they didn't serve my new location. I told them this; they didn't believe me and asked for the address, so I gave it, and they immediately admitted defeat and said thanks for being a customer. My new cable company is municipal and serves only the city I am in. For the most part they are fine, I just want more upload speed.
Next year I plan to move to a new apartment in a different city and will be a Comcast customer again. I actually look forward to it; I was a pretty happy customer at the time. I'm specifically looking forward to faster upload speeds on a business class connection. The fastest I can get right now is 5Mbps, and that is if I sign up for a 100Mbps plan; I'm currently on a lower plan which maxes out at about 2Mbps upload. I have a server at a local co-lo (unlimited 100Mbit on a good backbone) and like to be able to transfer data to/from it. It's painful to upload from my home connection today, of course.
Comcast business internet seems to run from $109/mo to $169/mo for between 20-50Mbit down and 10-15Mbit up. I'd be happy with 20 down / 10 up, though I may go for the bigger plan for the extra upload.
I was on a cross country flight a couple of weeks ago and Barracuda was advertising on the in-flight TV/media systems. Seemed like a *really* strange place for such a company to advertise. Similarly, several months ago, in a taxi on the way to the airport, I heard a radio ad for Dell Compellent storage; another reeeeeally strange place to advertise that kind of product.
To me, anyway, "cloud" implies a large amount of automation. I think this automation is way overkill for most organizations; it generally requires a significant amount of work to get going and generally isn't very flexible (to quote a recent el reg article, Gartner said something like less than 5% of users of in-house IaaS were happy with the setup).
Virtualization on the other hand, by itself, is utility computing to me: pools of resources that can be carved up into VMs (compute) or volumes (storage). Some automation may be there (vSphere DRS or HA or other types of things), but it's not fully automated from VM to application(s), totally integrated and "aware".
Which is why, to me, a lot of users of public clouds end up paying so much: they implement utility computing in a cloud environment that is not built to support it, and when you do that, not only do costs explode but problems expand by orders of magnitude, because things don't (or can't) self heal, etc.
In the voice of Nelson:
I thought Nimble came out with all flash over a year ago, though it looks like that was just an all flash shelf. Their website implies you can pin a workload into all flash if you want, so I don't see coming out with an all flash version as doing much for them, other than perhaps getting into Gartner's MQ for "all flash" or something. I'd expect the cost of having a few extra nearline drives in the existing solution geared towards all flash is a rounding error.
What can possibly be evolving so quickly that someone feels the need to constantly update their office suite, short of security updates? I'd wager most people are quite comfortable with features of Office that are 5-8 years old at this point.
I realize this won't be the only way to track versions of Office, and that most will opt for the more traditional slow moving route, but I just think to myself: what could possibly be so useful that someone feels the need to constantly upgrade (short of maybe a developer making stuff that integrates with Office or something)?
Yeah, sure, the high end phones do, but those are small market share compared to the rest. The original commenter seemed to imply that all Android devices have NFC or will have it in the near future.
How many Android phones have NFC? Not many, I think.
This gives me no reason to turn on NFC on my phone; I'd rather use my credit card(s), which work fine.
I think most people would not consider that SanDisk product "commodity" hardware (myself included). When I think of commodity flash, I think of 2.5" form factor drives.
Looking for the form factor of the SanDisk product, I see this picture:
which leads me to believe it is not compatible with probably any other system out there. Maybe it is at or near commodity pricing, I don't know, but it doesn't seem like commodity hardware.
Just doing a quick search on what others think commodity hardware is, the results seem to match my expectations.
They will make Chef usable by mere mortals... nah, they won't do that.
(Chef customer for the past 5 years; my Chef slogan is "Chef makes the easy stuff hard, and the hard stuff possible".)
It's not as if this new technology is going to be proprietary to them. What's stopping other hyper-converged players from doing the same thing?
I remember lusting after ATI in the mid 90s, though I ended up with Number Nine cards. I switched to Nvidia about 1999 and have never considered anything but them since; a very satisfied customer after 16 years (mostly Linux), though I've never been on the bleeding edge or close to it.
I keep seeing people gripe about their drivers and stuff, but in my experience they have always been very solid.
I have one ATI video chip (integrated; the only PCI slot is used by a 3ware card) on a small Athlon desktop. It basically acts as a screen saver. It works fine, though for anything serious, nothing but Nvidia for me.
The negative experiences I've seen with ATI drivers vs Nvidia seem to run about 30:1 (against ATI).
It's also sad that AMD abandoned all hope on their high end x86 years ago.
Written from my phone so excuse typos
1) I've hosted my own email for the past 19 years, currently on a 1U server in a co-location facility (for the past 4 years or so), running Postfix + Cyrus IMAP on Debian.
2) I have at least a half dozen domain names where my email addresses are scattered about.
3) I have roughly 340 email addresses, of which about 120 are currently disabled since they haven't been used in some time; disabling is simply a comment character in the Postfix virtual file (see the sketch after this list). All 340 addresses are accessed using a single login (none of them have their own authentication credentials). I have about 75 different inboxes, again all accessible with one login. Inboxes continue to receive email whether or not I am "subscribed" (IMAP) to them; some go months or longer without being checked (like my inbox for el reg). I used to have a 1:1 mapping of inbox:address. Last year or the year before I started cutting down on the inboxes and just directing more addresses at fewer inboxes, to reduce the labor involved; it seems to work pretty well.
4) I retired the email address associated with my email login ID (due to 10+ years of spam build up), so even if someone wanted to try, you can't determine what my username might be from my email addresses (I didn't do it for security reasons, just coincidence).
5) I don't do the thing where people use my_email+someuniquestring@mydomain. My email addresses are all <some unique string usually tied to the organization I am dealing with>@(one of my domains).
6) I have never worked at a company that did invasive internet monitoring. My first job was the closest: they watched HTTP URLs (well, technically my friend who was in IT watched them; the software was on his desktop, and the most frequent abuser was one of the VPs trying to find European porn sites to get around the filter, this was 15+ years ago). Obviously at every company I have worked for I knew the IT staff well, even though I have not been in internal IT in 13 years (and never will be again).
7) I have never had a CxO come ask for a root password to anything. Sometimes they have as a joke, but they know they should not have that information and so do not pursue it beyond the joke phase.
8) With one exception, at all of my jobs I was responsible for installing the operating system on my own computer. At that one exception I was not, but I had full admin rights (the job ended more than a decade ago). It ran Windows XP I think; I replaced the shell and other things to make it more Linux-like, which worked really well for me.
9) My Linux systems run Firefox as a different user using sudo (transparently: I just click the icon and it launches under a different user id; roughly how that is wired up is sketched below). A little more secure, though it does take some coordination managing downloads (e.g. I usually can't edit a document I download from Firefox without changing permissions on it or copying it to another location).
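Regarding item 3, a rough sketch of what the Postfix virtual file looks like (addresses invented for illustration); a leading "#" is all it takes to disable an address:

    # /etc/postfix/virtual -- illustrative entries only
    elreg@example.org        inbox-news@localhost
    vendor-hp@example.org    inbox-vendors@localhost
    #old-shop@example.net    inbox-misc@localhost    (disabled: commented out)

Rebuild the map afterwards (postmap /etc/postfix/virtual, assuming a hash: map).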
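And for item 9, roughly how that launch-as-another-user setup tends to be wired (user names here are placeholders; the desktop icon just points at the sudo command):

    # one-time: let the browser user draw on your X display
    xhost +SI:localuser:ffox

    # sudoers entry (edit with visudo) so the icon launches without a password:
    #   nate ALL=(ffox) NOPASSWD: /usr/bin/firefox

    # what the desktop icon actually runs:
    sudo -u ffox -H firefox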
I think the author is arguing that the high end market you talk about (which continues to shrink as x86 power grows) isn't going to be enough to make investment in SPARC worthwhile for Oracle, and that they want to reach into the more general purpose server market, like Sun tried to do, as I remember, with at least the T series. On paper at the time those seemed to me (from what I recall) to be decent chips, but they never went anywhere (relatively speaking, market share wise).
Not even Intel has been able to convince customers to get off of x86. Myself, I think ARM is a dead end for servers; Intel will stomp on them. I do miss AMD's high(er) end CPUs (I have a bunch of HP DL385 G7s with Opteron 6100/6200s), but AMD abandoned them years ago and bet the farm on ARM as well (too bad; they burned their engineering talent and product pipelines, so getting back into high end x86 at this point is probably impossible for them).
There will be a market for high end CPUs, though I suspect SPARC will go away; Oracle is pretty ruthless about profits and such. IBM will probably keep Power running for the foreseeable future; I don't see them changing strategy, especially since they sold off their x86 server line.
I've really yet to work for a company that plans more than maybe a year or two in advance (some set broad goals that may stretch out a bit further, but with little actual planning; this experience goes back just about 20 years now, across almost 10 companies).
The landscape changes so much it doesn't make sense to ask those sorts of questions, in my opinion. New things come along that may change the way we approach stuff. I remember VMware once said that server virtualization was somewhat of an accident; not even they believed there was a market for it at the time they were building the tech (I was a VMware 1.0.2 for Linux customer back in 1999).
You also have to analyze the risk and return of automation. For some things it makes sense, for others it does not. If it takes significantly more time to automate something than the time it saves, then it makes more sense not to do it. Don't automate everything just because you can; you'll be wasting your time.
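A back-of-the-envelope way to frame that tradeoff (the numbers are invented for illustration):

    # crude automation break-even check
    hours_to_automate = 40           # build + debug the automation
    minutes_saved_per_run = 10       # manual effort it replaces
    runs_per_year = 52               # how often the task actually happens
    hours_saved_per_year = minutes_saved_per_run / 60 * runs_per_year
    print("breaks even in %.1f years" % (hours_to_automate / hours_saved_per_year))
    # ~4.6 years here -- probably not worth automating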
I remember I interviewed some dumb shit for a director of operations role (the guy was interviewing to be my boss; NO IDEA how he got past the phone screening, I think because he had met our then CIO at some conference). He was one dangerous guy. He wanted to automate everything; he wanted everything custom. He felt having our own data center was a "risk", because if I or other critical members of the team left, who would be able to operate it? I didn't have the words at the time to give him the response, because he kept shifting gears every 30 seconds (the IT manager was interviewing this guy with me and after about 10 minutes stopped asking questions, because the responses were incomprehensible).
But in the end I would have said it's much easier for our company to continue operating in the event I leave than if you go and build some really complicated custom stack of shit that nobody knows how to use other than you, because it's fully custom. With what we do, we can go to our suppliers like VMware, like HP, etc., and they would help our company (if we needed), or we can go to independent consultants; people can get up to speed very quickly because it's not obscenely customized and automated.
I told my CIO, even before I started talking to this nut job, that his resume had tons of red flags all over it. I revised that statement the next morning, saying those were not red flags, they were land mines. Obviously he didn't progress past that interview.
Take some crazy custom automation stack, and you have to recruit someone to come in and read the code, and hopefully understand how stuff works. Not quick. Also hope they don't declare the system terrible and go on an expedition to re-write it.
I spent two years simply migrating a company from one bad implementation of CFEngine to a good one (it wasn't my full time responsibility, and I was working in a "four nines" environment, so I had to be careful). I am absolutely confident that if my team quit, the replacements would have a FAR harder time grasping what is going on inside the Chef automation tool than the infrastructure components that are used as "utility computing" (not "cloud computing"). I can count on one hand the number of system admins I know personally who would get up to speed quickly on fancy Chef stuff, let alone any of the newer bleeding edge automation tool sets.
The more I see Trevor write, the less I feel he actually knows. It seems like he just gets briefings, talks to marketing people, and lives on the hype machine. Having a little lab to play around with doesn't count as experience; I don't care what kind of workload you think you can generate, sorry, it doesn't matter.
The SLA from my ISP (Internap; customer for 6 years) reads in part:
"What our SLA covers*
Our SLA supports the following North American performance metrics:
100% network availability
Less than 45 milliseconds latency
Less than 0.3% packet loss
Less than 0.5 millisecond jitter"
Only 99.9% availability for the best? From what I can see, that allows almost 9 hours of downtime per year. I'd figure more like six nines of availability at a minimum; with fancy multi-site replication, multiple data centers, multiple ISPs, etc., object storage shouldn't find that hard to achieve.
Cleversafe claims nine 9s of reliability (less than 32ms per year). I've never used them, but I've heard they are a leader in the space, for on-site object storage anyway.
Not sure how it compares to other object storage providers.
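For anyone checking the math on those nines (a quick sketch, nothing vendor-specific):

    # allowed downtime per year for a given number of nines of availability
    SECONDS_PER_YEAR = 365.25 * 24 * 3600        # ~31,557,600
    for nines in (3, 6, 9):
        downtime = SECONDS_PER_YEAR * 10 ** -nines
        print(f"{nines} nines: {downtime:,.4f} seconds/year allowed")
    # 3 nines: ~31,558 s (~8.8 hours); 6 nines: ~31.6 s; 9 nines: ~0.032 s (~32ms)

Which lines up with both the "almost 9 hours" for 99.9% and Cleversafe's 32ms figure.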