There's no such thing as the cloud, it's just someone else's computer.
The concept of cloud repatriation – shifting systems back in-house from the cloud – is nothing new. For as long as there have been cloud services, there have been those who have hosted applications and workloads off-premises before bringing them back in. The past few years have served up some high-profile cases: file storage …
Every small-to-medium-sized company is no bigger than gnat's poo to the cloud service companies. That means they don't care at all about any issues you're having. If you are a brewing company with wort in the tank and need to shift ingredients around, it's rather important that your systems are up and running, or you may wind up with thousands of gallons of money down the drain. If your control systems are in-house, you can at least throw money at the problem to get it fixed fast.
Too many "business" people don't run the big "what-if" when it comes to outsourcing critical systems. Cloud backups of certain data can be a handy way to get data off the premises in case of fire, asteroid strike, locusts, etc., but there IS risk that you have absolutely no control over. Somehow I don't think the formula for Coke or the Colonel's list of 11 herbs and spices are ever going to be hosted on a cloud computer.
That's not how blame works in the public sector.
"It's not my fault that personal private data is now public / lost. I outsourced all the NHS data / HMRC data / Social Services data to UKCloud in accordance with the government's Cloud First policy."
is just as effective as:
"It's not my fault the service is shite. I outsourced it to Crapita."
At a previous employer, I worked on systems for controlling cool-houses and slaughter lines. For the cool-house sorting, the time between an RFID sensor read and a decision was well under 100ms - that is, round-trip network time plus processing time.
Like the brewing company, it just isn't possible to outsource that to a cloud or data centre. We had a few customers with hosted SAP, but we always ended up with an on-premises production line server, which gathered information from SAP, made decisions on the production line and sent the results back to SAP. The information from SAP came down as a batch process before production began, with corrections being sent during the shift; the results going back to SAP weren't time-critical (if it took half a second to get to SAP, it didn't matter), but all the important decision-making work was done on the local server.
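As a rough sketch of that split (invented names, nothing from the real system): the latency-critical decision is a purely local lookup, while all SAP traffic is batched and non-critical.

```python
import time

# Hypothetical local decision loop: latency-critical work stays on the
# on-premises server; "SAP" only supplies batch data before the shift
# and receives queued, non-time-critical results afterwards.

class LocalLineServer:
    def __init__(self, batch_from_sap):
        # Sorting rules per RFID tag, loaded from SAP before production.
        self.rules = dict(batch_from_sap)
        self.results = []  # queued for a later, non-critical upload

    def on_rfid_read(self, tag_id):
        # The sub-100ms decision: a local dict lookup, no network hop.
        decision = self.rules.get(tag_id, "reject")
        self.results.append((tag_id, decision))
        return decision

server = LocalLineServer({"TAG-001": "line-A", "TAG-002": "line-B"})

start = time.perf_counter()
decision = server.on_rfid_read("TAG-001")
elapsed_ms = (time.perf_counter() - start) * 1000

print(decision, f"{elapsed_ms:.3f}ms")  # local lookup: well under the budget
```

Route the same read through a cloud API and you've burned most of the 100ms budget on the round trip before any decision logic even runs.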
The primary rule of outsourcing: put your data in the cloud only if you don't need access to it to run your business. So it's probably all right if whatever you outsource is fungible, like office cleaning, routine printing, etc. Look in the Yellow Pages - if there are lots of local companies doing "whatever", consider outsourcing.
I've been making pretty good money pulling companies out of the so-called "cloud" for over ten years now. It's actually quite nice, in that most of the time I get to start from scratch because they've sold/scrapped all their prior server infrastructure. If you're a consultant, try it. You might like it.
And before anyone asks, no I'm not afraid of losing business if more folks jump into it. There has been quite a bit more work of this nature out there over the last ten years than The Press would give you reason to believe ... and the quantity of work seems to be increasing as people discover that clouds are a scam, at least for the most part.
Back in the 1980s our Senior Lecturer said there are decade-long cycles in the Data Processing industry to keep the computer company sales force in work. A key one is the centralisation / decentralisation flip-flop. He predicted that would continue throughout our careers.
You are proving him right, by decentralising from the cloud.
He was a very clever man, that Mr Milner.
"Cloud done right", with appropriate automation and elasticity, can save a boatload of cash - but not in every circumstance, and that saving can very quickly be wiped out if you have mismatched IO or processing that bursts in the wrong place. Buyer beware, as always. I've seen some pretty "grown-up" behaviour where n00bs have left VMs running and racked up costs that ended up being refunded, but there's always the flip side...

So you use a cloud backup service and you think that gives you DR-like capabilities in the cloud for any server that breaks - only you have to spend way more than you expected to get the data back on-prem, or to run everything in the cloud. Bet you didn't factor that in when you signed up! You spunk everything into Glacier because it's cheap, but don't pay attention to how often you need to read the data back. You move everything into public cloud and rely on the cool only-in-cloud features - now you're locked in...

Cloud will win out over time but right now isn't the 100% solution to all people.
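The cold-storage gotcha is just arithmetic. A back-of-envelope sketch, with invented round numbers standing in for cold-tier storage, retrieval and egress prices (none of these are actual AWS figures):

```python
# Back-of-envelope: cheap cold storage vs. the cost of reading it back.
# All per-GB prices below are invented for illustration only.

STORE_PER_GB_MONTH = 0.004   # assumed cold-tier storage price
RETRIEVE_PER_GB = 0.03       # assumed bulk retrieval price
EGRESS_PER_GB = 0.09         # assumed data-transfer-out price

data_gb = 100_000  # a 100 TB archive

monthly_storage = data_gb * STORE_PER_GB_MONTH                   # ~$400/month
one_full_restore = data_gb * (RETRIEVE_PER_GB + EGRESS_PER_GB)   # ~$12,000

print(f"storage: ${monthly_storage:,.0f}/month, "
      f"one full restore: ${one_full_restore:,.0f}")
```

The shape is what matters more than the made-up numbers: the monthly bill looks tiny until the first full restore, which can cost as much as years of quiet storage - fine for a true archive, painful if you read the data back often.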
"Cloud will win out over time but right now isn't the 100% solution to all people."
It appears very unlikely that any single way of organizing your computing needs and resources will ever be the 100% solution to all people - at least not for very long.
In all healthy, complex ecosystems you invariably find multiple species occupying specific niches, each with nearly perfect efficiency. Most of these specialist species start out as generalists, who moved into a new niche, adapted to it, and outcompeted their generalist brethren. Conversely, an ecosystem with a single, dominant species occupying all available niches is not healthy, and liable to be disrupted by an aggressive newcomer.
There is no reason to expect computing is different - after all, we do have many historical examples of dominant players (or at least dominant ways of doing things) being upstaged by new technologies and new ideas. Sometimes they disappear completely; sometimes they transform themselves into a specialist, addressing a specific niche. Historically, the only thing that has been certain is that things will change if you wait long enough.
Um, you're bringing that back in-house, right? And who was the genius who thought to put that in The Cloud (TM) in the first place? And how many numbskulls signed off on that?
I can't believe you'd be stupid enough to imagine basing an industrial process on a server that is not on-site. You want to throw your accounting to the wind? Go ahead - you'll only have to re-enter it if/when it goes pear-shaped, and if your connection to the server goes down, the entire company is not blocked.
But basing a mission-critical component of your company on the vagaries of a data line and the uncertainty of someone else's server is just insane.
> And how many numbskulls signed off on that?
Not a new thing. I asked the question years ago (at my employer at the time) why we were paying for multiple (costly) leased lines to a warehouse/distribution center located in the middle of nowhere. "Redundancy" was the response. "Um... you realize that, because of the site's location, those redundant lines are all on the same poles, don't you?" This was at a company who, on my first day, was running the computer at headquarters using several long extension cords running from a neighboring office because a truck had backed into the pole supplying power to the building.
That's why they have increased the cost of running MS software on any cloud but their own.
Next will come hefty uplifts for on-premises workloads. They'll say it is your own private cloud so cough up.
Some companies will do the sums and carry on bringing workloads back in-house. Others will buckle under the pressure and move to Azure.
Then MS will increase their prices as well. You just can't win with this cloud stuff. The sooner it all blows away the better if you ask me.
Tux because even MS can't yet charge me for running it on my own hardware.
'Yet' being the operative word.
Now all we need is for the utilities to do it.
The lagging "phase shift" between whatever is cool and shiny and the "decision makers" pretty much guarantees that businesses, utilities and government will buy heavily into something right about the time the shine wears off and the concerns begin to show up :)
Why is anyone surprised by this?
Running yer shizzle in SOMEONE ELSE'S datacenter will always cost more, especially if your operation is large enuff to afford a datacenter....
Remember, all the FOG (as in "Lost in the") vendors pretty much do the same thing you would do to build a datacenter then drop Hadoop-derived software on top. EXCEPT, they do NOT buy large scale hardware like is needed for large scale databases and suchlike...
So after all the futility, management will report that nothing bad happened at any point. The wise guiding hand has been at work all along.
The move to being completely hosted in the cloud had the substantial benefit of revealing all the company processes and uncovering data relationships, enabling optimization of both, cutting operational costs. The move back to being completely hosted internally has again cut operational costs and responded to the now-proven long-term needs of the company.
"We knew what we were doing during the entire course of events." (So that's bonus 1 for moving to the cloud, bonus 2 for moving internal again, and bonus 3 for demonstrating incredible depth of ... leadership.)
This has been SOP for a certain not-an-oil-company-at-all, Oh-no-the-P-doesn't-stand-for-anything - it's-just-a-P of my acquaintance for many years.
"We're not a software company. We'll outsource all the development & support work"
"Brilliant! Have a bonus! Trebles all round!"
then a few years later
"Our supplier doesn't understand our business, is not nearly responsive enough and charges an arm and a leg for everything. I say, let's set up our own internal IT department and do our development in house. We'll save millions!"
"Brilliant! Have a bonus! Trebles all round!"
Rinse and repeat, apparently forever
Eventually - and this will happen - the whole 'cloud' thing will burst, whether through continued outages, security breaches or increasing costs for enterprises. The cloud is about nothing but control, lock-in and revenue for the ISPs. Any enterprise selling its soul to a cloud ISP is just asking for trouble. Costs will often be higher than on-prem, your security boundaries will be shot to bits, and you will be entirely dependent on your internet connection, meaning you'll need faster and diverse links for resilience, which will cost more (normal ISPs will obviously fall over themselves to provide this!).
One final thing, no matter what the cloud ISPs say: as someone who works in IT, knows the company's systems inside and out and is employed by them directly, I'm trusted to manage the data, but I could, if I wanted, bypass any security in place. I therefore know that anyone working at a cloud ISP could also do this if they wanted to, and has access to any data they host, and that's scary.
So that companies that didn't have the budget for secure server rooms and a range of IT skills could still obtain the benefits.
Companies going back to on-site are presumably large enough to have those things and so don't need to pay someone else for their billing and profit overhead.
The IT manager for the charity I'm involved with is me. I know my limitations, thank you.
"Storing the same information on tape on the other hand, added up to just $107k."
Interesting. The total cost of storing a petabyte of data on tape came to less than the wages of the staff? It also didn't mention how long it was going to be stored for, and that figure likely doesn't include re-cycling tapes every year or two as required by manufacturers. And tape drive maintenance... I still have nightmares about tape drive maintenance.
Having plenty of experience in this technology, I doubt you could buy the two tape autoloaders, drives, tapes, UPS, fibre switches, fibre installation, cabinets, AC Units for $107k so I call BS on that particular study.
It seems a bit low, but any server room will already have the UPS and AC, and unlike discs or SSDs, tapes do not need to be stored in strongly climate-controlled conditions (although protection from extremes of temperature and humidity is essential) and don't need power or cooling when not being read or written, so the running and physical storage costs are where the big savings can be made.
I'd agree on the costs of autoloaders though; perhaps if you can keep them working for a long time the cost might be amortised, but high-volume ones that actually work, and continue working day in, day out, are not cheap.
Conversely, most autoloaders will allow the drives to be upgraded, so when tape technology advances the autoloaders are not made redundant - some saving in the upgrade cycle might be made there.
So call it $250k instead of $107k. Tape media can last 30 years, but drives/media should be upgraded every 6-8 years or so to keep up with new technology - still a longer cycle than disk/flash. Tape isn't always the right solution, but do your due diligence. Heck, aren't the cloud providers offering long term storage on tape? Not that they will admit it - wink, wink.
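To put the amortisation argument in numbers, here's a toy 10-year cost model. Every figure below is invented purely to illustrate the refresh-cycle point - none of it is a quote from the study being discussed:

```python
# Toy 10-year comparison: tape (idles unpowered, mid-life drive/media
# refresh) vs. keeping the same archive on always-on disk (continuous
# power/cooling, full hardware replacement roughly every 5 years).
# All dollar figures are invented for illustration.

YEARS = 10

def tape_cost(initial=250_000, refresh=100_000, refresh_every=7,
              power_per_year=2_000):
    # Main recurring tape cost is the periodic drive/media refresh
    # needed to stay on supported technology (every 6-8 years or so).
    refreshes = (YEARS - 1) // refresh_every
    return initial + refreshes * refresh + YEARS * power_per_year

def disk_cost(initial=400_000, refresh_every=5, power_per_year=40_000):
    # Disk draws power and cooling 24/7 and turns over hardware faster.
    replacements = (YEARS - 1) // refresh_every
    return initial * (1 + replacements) + YEARS * power_per_year

print(f"tape: ${tape_cost():,}  disk: ${disk_cost():,}")
```

Plug in your own numbers; the point is that tape's longer refresh cycle and near-zero idle power are where the gap opens up, not the initial hardware bill.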
"Tape media can last 30 years"
It's plastic, so it'll pretty much last forever for all intents and purposes. On the other hand, the data stored on it is only good for around 2-3 years, so some poor employee has to periodically re-write it to new media or overwrite the old media.
You did read the package the tape came in, didn't you?
Yes jake, some will be readable as with all old things. We're not talking about luck here though, we're talking about a proper archive where data recall needs to be reliable. For that you need to follow manufacturer guidelines and re-cycle tapes regularly. And that's expensive and usually manual work.
Properly stored tape lasts a lot longer than you seem to think it does.
By properly stored, I mean not underwater a couple times per week, not alternately baked and frozen, not accessible to your .fav toddler or pets/rodents/insects, in a decent form of dust protection, not subject to excess vibration, etc. In other words, in a closet in space conditioned for human activity.
Under the above conditions, I have never seen a 40 year old tape be unreadable. I don't get paid to recover data from old tapes, that part is usually trivial. What I get paid for is having the hardware to read those old tapes.
It's not what I think that matters, it's what the manufacturer supports and recommends. As such your experience is also irrelevant and any sane employer would ignore your "wisdom" and follow recommendations too. Paper will last forever in a lot of conditions too, doesn't mean it's wise to trust that either.
The problem with "usually" is that when the shit hits the fan you're fully responsible for the loss. If you follow recommended practices, especially from a manufacturer, then you've done all you could to preserve the data. For clarity, one of the above is sufficient for regulatory compliance and the other isn't. When you're in court defending your company, "It's never failed before" tends to be a poor argument.
Keep telling yourself that, Lusty, if it helps you sleep at night. I'll keep making money reading old tapes, though, if you don't mind.
I'll keep using tape for my own long-term archives, too ... and recommending it to various clients on a per job basis. There is quite simply no archive medium that I am aware of that lasts as long with such a minimal requirement for storage.
I'm certain you're making money, and I believe you when you say you get data back most of the time. I'm telling you that the manufacturer REQUIRES you to re-write the data at specified intervals in order to guarantee readability. I'm also telling you that a court couldn't give a shit about your previous luck, if you don't follow manufacturer directions you'll be at fault and out of compliance. If a tape fails due to age and you can demonstrate you've followed directions - you're all good. If it fails and you didn't, your company is screwed.
It's not about whether you think you can get the data back, it's about whether you can show you did everything correctly. Tossing it in a safe for 30 years and hoping is not doing everything correctly.
I once had a chat with archivists from a government archive, and the only magnetic or optical media they considered 'archival', with a projected useful life of at least 50 years, was a specific type of magnetic tape.
This was a while ago, and I haven't seen that product in years.
Between bit rot and the death of compatible systems, we may leave less recorded history than the Victorian era.
Well this is a balanced set of comments!
I have not spoken to a single customer in the past 12 months who isn't looking to do more in the cloud. Very few, if any, think everything will end up in the cloud, but they are almost exclusively operating with a cloud-first strategy, with the acceptance that what they will end up with will be hybrid. In that time we've helped one customer to repatriate, so I accept they exist, but they are far outnumbered by customers going the other way - and if anything they'd just made a hash of moving that workload to the cloud, and we could just as easily have remediated it and fixed their issues.

Some of the criticisms are valid - done badly it can cost more, and bad engineering is bad engineering whether it's in your DC or someone else's - but done properly these issues go away. The issue is that so many people have darted into the cloud without proper thought, not only around engineering but governance, security, culture, etc. These are the things that typically make cloud projects fail.
Cloud done right can save a lot of money and hassle. The problem is that people often want to control through tinkering and micromanaging, which does not work well with cloud.
Instead of using cloud to distribute tasks differently, many will choose devices as enablers and then look for hands to get the tasks done.
Let's be realistic here... most "cloud" migration outside of core-IT companies was done because bean-counters and senior management were sold the idea of a single monthly charge (amazingly lower than the depreciation of owning their own hardware plus recurring costs like networking and maintenance), and of course savings from not having their own specialist IT staff. A few years down the line they have saved nothing on staff, and that "fixed" monthly fee has attracted amazing unexpected and unbudgeted extras, as well as growing magically as usage comes in well above what was forecast.
Maybe those companies would have been better getting rid of their "specialist IT staff" at the start of the project and getting some people in that understood cloud. I rarely see a failed cloud project where the people understand how cloud works. I regularly see them fail when the people use a massive crowbar to make cloud look like on-prem...firewall appliance anyone?
"A few years down the line they have saved nothing in staff and that "fixed" monthly fee has attracted amazing unexpected and unbudgeted extras, as well as growing magically as usage is well above what was forecast."
This is exactly what happens.
This is what happened the last time buying computing as a service was the cool thing to do, and the time before.
I've been on the vendor end both times around, as well as the customer end on the first iteration, and while there are some things for which the export of your computing facilities to a vendor/service provider will work, almost invariably the problems you mention arise... along with complaints that the vendor is unresponsive and unwilling to make changes for free, on your schedule, and has an insufficient understanding of your business and unique needs.
If you have a constant, unchanging, well-defined standard application that can be down unexpectedly without crippling your work, or if you have a simple website that you constantly play with, then exported services may well work.
For larger, more mission-critical, highly interconnected ecosystems from different vendors, running less common applications and requiring unique tuning and/or modification... it's less likely, unless you are willing to part with boatloads of money... and they'll still not be as responsive as the in-house resources you used to have.
Just another of those long term cycles in IT that keep coming around every decade or three, with new names and somewhat different enabling technology.
Thank feck I'm not the only one, based on the prior comments. I may have been slow in the uptake of vSphere after a few nasty encounters with Hyper-V, but this cloudy crap is on a whole different level.
I'm really glad I stuck to my guns on this. I.e. if it's not mission-critical, stick it in the cloud. If it is, look after it and keep it close.
It's about time ElReg stopped promoting this cloud shenanigans.
Just last week, a whole town in New Zealand was cut off from the Internet due to a contractor with a JCB accidentally cutting the one fibre out of the place.
Business owners were on TV complaining that they couldn't get access to their bookings data or accounts (which were in the cloud), couldn't accept EFTPOS and couldn't run their businesses without their clouded functionality. Schools couldn't run without Google Docs, GMail and their websites.
I was yelling at the TV like a grumpy old man (so SWMBO tells me) about the idiocy of (a) making your business unable to operate without the Internet link, and (b) having no plan for disaster recovery. I mean, if you have to keep your calendar online, then at least have a printout of the day's bookings!
I would like to think that the people affected would now reconsider their dependence on the Internet, but I bet they won't.
But she's not a tech specialist.
When you set up a small business and you get sold this Shiny New Thing and are assured it is The Only Way To Go, you have to trust what you are being told. If GoogleDocs and DropBox and tethering are what it takes to make your mobile business work, you'll do it.
So she knows she is tied in to a cloud solution, but has not got the time, money, experience, knowledge or staff to fix it. It's just another unmanageable risk, like assuming your Northern Rock Bank won't go bust, your Eagle Star endowment won't go down in value or your pension fund won't get robbed.
No, she's not a tech specialist. I thought she was a Realtor. Turned out she was a kiddie with her head in the clouds.
"It's just another unmanageable risk"
Oh, horseshit. It is quite easily managed, if one has even a modicum of wit. Her quite able replacement has seen the cloud for what it is, and has all her computer related stuff in-house (except the obvious listings & such that are handled by her parent company). If the Internet goes down for a week, she'll keep on working with barely a wibble.
Biting the hand that feeds IT © 1998–2019