The future, or at least this version of it, sounds boring.
Microsoft, Amazon, Google and others are all telling us to move server loads to the cloud. Most of us with on-premise servers haven't designed things in a cloud-friendly way and - call us old sentimentalists - have heaped time, love and workloads on our tin. Thus, today, we talk of such servers as "pets" while cloud providers talk of …
Cloud providers will hack your systems until you move to the cloud, then they set up rival businesses, cherry-pick your best customers, coerce suppliers to match the best prices and basically put you out of business, or steal your IP.
Only the rich elites could come up with a blue sky way of taking over the world, whilst you are financially cleansed out of existence!
Got a mortgage to pay still?????
Plus they can shut down dissent even more easily as all communication moves even further online, and now your business is in their hands.
Unless it's for a temporary test or for the Internet public to access, outsourcing to the "cloud" is madness. Even for public-facing server loads, the database, financials, stock etc. should be in house; only the web server should be in the cloud.
Otherwise, given the automatic patching, lack of transparency on security and backup and increasing monoculture, a cloud based apocalypse will come.
What happens when all ATMs, POS, Mobile Billing is on Cloud and it "goes down" due to an inept patch of edge routers or load balancing servers or actual servers on a Friday evening?
It's the plot of a believable book, "No Silver Lining", where one character likens the Cloud to the potato, which allowed massive cheap food production and then population growth, but then famine, not just in Ireland but in many countries. Monoculture.
Hence Cloud & Apocalypse icon.
The Cloud operators need to make profit. Right now they are in market acquisition mode, hence the pricing.
"The same thing can happen with tin. Indeed, tin could have a better chance of a Failsafe Failure."
No, not necessarily. I have tin out there which can (and has) better(ed) any cloud solution for uptime. I have one client with 2 custom built servers which have logged 2 hours downtime in the last 7 years - and that includes 1.5 hours for moving the servers from one room to another! We all see the stories (some here), where something has happened that has taken out the cloud provider. Whilst the in-house tin just keeps chugging along, enabling the workforce to continue working. There are many more "potential points of failure" with the cloud compared to in-house tin.
"where something has happened that has taken out the cloud provider."
The ENTIRE cloud provider, INCLUDING the secondary storage located in another city? How do you handle failover if your entire block loses power all night and you don't have a second server outside the blackout zone?
Remember the Adobe farce of 2014? Their Worldwide operation was offline for over 24 hours. Many EU users were left swinging in the breeze for almost a week! Adobe had multiple sites and still everything failed.
My point is that the cloud CAN fail, just like any in-house tin. And, as I pointed out, cloud-based systems have more potential points of failure. Like someone taking out the local exchange and trashing everyone's broadband. Or power failure. Or someone hacking the cloud provider. It's all happened before.
Clearly - faults happen with the big cloud (eurgh, I hate the word) providers, just remember the DynamoDB brouhaha.
However - a few random thoughts:
CFOs like predictability, i.e. Opex vs Capex.
Can you guarantee your bit barn will run close to capacity and be cost efficient?
That you will never lose electricity, with power feeds from different substations? (OK, Global Switch 2 and Harbour Exchange have shown it can happen to a big bit barn.)
That the £/$XX,000,000 storage array is used well enough to justify the price?
That you will have enough spares, or a good enough support contract, to deal with that patch which managed to irreversibly bork the firmware as well as the easily reversible OS on your edge router?
That you have a three-site environment for a stretched cluster and witness?
On the flip side - not saying there aren't some small(er) professional outfits who can make it work!
But they can hit the same issues you pointed out, just as the big hyperscalers can, even if they are as good.
Time to stop denying the fact that the inevitable will happen.
I'm a sysadmin. I have to work with tin, but I will work with cloud.
For now, because of the apps, and the cost of running these apps in the cloud, tin is still in. However, there are plenty of cloudy app developers out there. Will most businesses engage, change and take the risk to enable this move? Not if they can avoid it, and not quickly.
I'm not a complete sociopath because I still need to interact with the parts of the business that are not. Well, not until they get promoted to management, anyway.
A shared open-plan office in Western Europe is a sterile, featureless desert where all pets have been banned on a Consluttant's advice and a Health and Safety Order. The hired goons have come and euthanised Greebo (the office cat), emptied the office fish tank into the toilet and even replaced all plants with agile motivational posters. There is occasionally a single exemption in the form of a half-dead, contracted-out "low allergen" specimen in reception.
After that they also dare tell us that we have an issue with team bonding and social skills. The office pet, like it or hate it, is a phenomenal stress reliever and/or team-bonding centre.
So if we continue the pet analogy, removing ALL IT pets from the office is counterproductive. It is no different from that consluttant's "modern managerial practice raid" which emptied the fish tank into the toilet. You have to leave one or two so there is a stress-relief and team-bonding point. Sure, you cannot have a horde of cats roaming an office. There is, however, a clear psychological benefit in having one to keep the place from going into sterile desert mode.
At my last full-time IT gig, all the names were boring and functional, so one beery Friday evening I really had to rename the poor things: the server names became a mixed bag of Celtic/Romano-Celtic gods, oh, and the requisite Mythos beasties.
13 years on from my departure, a quick check of the DNS and the Mythos names still survive...Iä! Iä! Cthulhu fhtagn!
" For all I know he came to work stoned :-)"
You say that like it's a bad thing?
I guess if he didn't mind the chat with HR about the being-drunk-or-drugged-at-work policy, and he was having a good time, it could be a good thing.
If, on the other hand, he fancied being usefully employed, then maybe not ...
Having done some recent testing on cloud-based replacements for local implementations, we came to the conclusion that cloud performance for real-time applications is crap.
And that's with the cloud providers trying to grab market share by not caring about profitability. Imagine what will happen when they have to make money.
If Spot01 is a snail, this will probably work.
Even if your business decides to go all in on "this cloud thing", there'll still be old Rex down in accounts, running some obsolete bit of software on NT4, that is essential to the department, but has no budget for it to be moved, or migrated. It might be running on an old Compaq desktop, and the PSU might have started buzzing two years ago, (and is getting worse), but you'll never be allowed to touch it.
Or more likely, you'll have a replacement all set up, fully patched, backups scheduled and tested, modern hardware all redundant, with hot spares available, but the department will baulk at the hour of downtime that will be required to switch over, and so they'll keep putting it off indefinitely.
Why yes I am bitter, how can you tell?
... a hailstorm begin?
I mean, like everything, things start nice and relatively small. You are a valued customer, and there's usually enough resource for your workloads. But as more and more customers are added, you become just a number among many, and resources may start not to be enough for everybody, while competition may shrink the margins.
Then you'll face the choice of becoming a valued "premium" customer (if you pay a lot of $$$$$$$$$$$$$$), or you will be just a (cash) cow among the cattle, and your workloads will be moved to more crowded, less reliable nodes (probably offshored to cheaper countries too, and running on older hardware...) - and your contract will tell you so - and you'll have no way to change it but to pay a lot more. Understanding issues will become difficult because smokescreens will be deployed to hide the problems, as usual.
After all, that's what already happened with the big suppliers like IBM and HP(E). When the "computer business" was relatively small, you got excellent service and good value for your money - then when it became large, and competition required cutting costs, what you got was what many of us experience every day. Telco services and airlines are other examples.
Are we sure the cloud will be different?
I'd be very careful, and plan carefully what is best served from the cloud, and what is not.
For all of this push to the cloud no one talks about security.
Security comes in many forms.
1) is the data in the cloud safe from hacking or governments wanting access?
2) are your critical applications going to survive someone cutting through the cable to the bit barn?
3) can you guarantee that your cloud data will always be available when you want it?
4) is the cloud provider going to be there for as long as your data?
These, and others, are all things that need to be considered when thinking of moving to the cloud.
It becomes a pain in the ass when you have very restrictive licensing agreements with your vendors.
Like with EC2, where you pay a full license per vCore if hyperthreading is not enabled, and 0.5 licenses if it is - and a vCore is really just a Xeon thread. Running on your own hardware, you need 0.5 licenses per Xeon core (the two threads count as "1 core") according to Oracle's Core Factor Table.
So effectively you pay double the licensing on EC2 compared to running on your own hardware.
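A quick sanity check of that doubling, sketched in Python using the per-core and per-vCPU factors from the comment above (an 8-core/16-thread Xeon host is assumed, and the function names are illustrative, not any real licensing API):

```python
# Illustrative licence arithmetic only - the factors come from the comment
# above, not from an Oracle price list.

def on_prem_licences(physical_cores, core_factor=0.5):
    # Core Factor Table: 0.5 licences per physical Xeon core,
    # with both threads of a core counting as "1 core".
    return physical_cores * core_factor

def ec2_licences(vcpus, hyperthreading=True):
    # Per the comment: 0.5 licences per vCPU with hyperthreading
    # enabled, a full licence per vCPU without it - and a vCPU is
    # really just one Xeon thread.
    return vcpus * (0.5 if hyperthreading else 1.0)

cores, threads = 8, 16
print(on_prem_licences(cores))   # 4.0 licences on your own tin
print(ec2_licences(threads))     # 8.0 licences on EC2 - double
```

Same silicon, twice the licences, purely because the cloud side counts threads rather than cores.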
That's the entire purpose (and challenge). That data is resiliently held, so you can kill one cow, but as long as you maintain the herd, you'll always have milk. Going from pets to cattle is not simply putting servers in the cloud with boring names; it's taking technological steps to ensure that adding/killing servers does not lose data or require deep knowledge or effort.
"That's the entire purpose (and challenge). That data is resiliently held, so you can kill one cow, but as long as you maintain the herd, you'll always have milk."
What happens if there's a cash flow problem, the cloud fees can't be paid for a couple of months and the whole herd gets slaughtered? You may have exchanged CAPEX for OPEX but suddenly that OPEX starts to gain priority over a lot of other stuff.
Not necessarily. With fewer servers on site sucking up the juice and making your HVAC work its compressors off, your electric bill would drop. Depending on the other things you wouldn't have to pay (because you may not need to lease so much space and so on), it could more than offset the cloud costs. It depends.
" it could more than offset the cloud costs."
Rule of thumb: it doesn't. Few people have found cloud migrations actually reduce their costs over time - I wring about 10 years of front-line duty out of an on-site server, and over that period a cloud-based alternative costs about 3 times as much (including all setup and running costs).
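A back-of-envelope version of that rule of thumb; every figure below is made up purely to illustrate how a roughly 3x gap can arise over a ten-year server life, and none of them are real quotes:

```python
# Toy TCO comparison over one server lifetime - illustrative numbers only.
years = 10
tin_capex = 8_000           # one-off purchase of the on-site box
tin_opex_per_year = 1_200   # power, cooling, support share
cloud_per_month = 500       # comparable VM + storage + bandwidth

tin_total = tin_capex + tin_opex_per_year * years   # 20,000 over 10 years
cloud_total = cloud_per_month * 12 * years          # 60,000 over 10 years
print(cloud_total / tin_total)                      # 3.0
```

The exact ratio obviously swings with the workload; the point is that a box you sweat for a decade amortises its capex in a way a monthly bill never does.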
There's good reasons for going cloud; price is almost never one of them.
My former boss never said it, but I was fairly certain the only way that the enormous OPEX increase from moving everything to IaaS was going to balance out was to make one of the engineers redundant. I hated that.
My main gig for the past several years has been that of security practitioner. Lately, I've started bulking up on security compliance skills, because it seems that many companies are starting to outsource their practitioner staffs. If I'm not needed to do the job, then at least I can make sure that those doing it are doing it correctly.
I guess my point is that we do well for ourselves if we try to adjust with the times. I will always miss my big, beautiful racks of humming tin, but wish as I may I don't think those days are likely to return.
Is that 25 years in IT supposed to make me take notice?
To respect you?
To think that number of years in IT = expertise?
I was born in the early 70s so my sympathy for you is exactly zero.
I do not bang on about how many years I have worked in IT; neither should you.
HTFU and start studying!
"In the 1990s, it was commonplace for the single IT guy to name their servers after planets"
Damn, found out at last!
Well, almost - my WiFi is called Saturn, and many of my servers and computers are named after moons such as Ganymede, Europa, Callisto, Titan (a very big box!), Mimas, Dione, Hyperion, Prometheus, Daphnis, Kari, Pandora, Pallene, Tethys and Enceladus!
Before that some early ones were called Ermintrude (it was a Gateway!) and Zebedee.
At work, we've been through the following set of servers names: simpsons, futurama, star wars, the muppets, planets, greek gods, roman gods, egyptian gods*, eve online ship classes... thankfully,
At home, everything is a character from Lost. The 6U 32 disk disk array is hurley, because he's so fat :)
* All our egyptian gods were Dell 2950s, and for some reason they were all flakey as shit, down to our administrator who backported the network driver himself... they would fall over every 2 weeks or so. I still get angry when I hear the name "anubis"
So many wrong takeaways from this - having cattle makes running things in the cloud easier, but it actually makes running things anywhere easier - even if every single service you host is in house.
If you treat each commissioned box as cattle, with no precious local state on the machine, then a failure of that machine can be handled simply and easily by provisioning a new instance, that instance joining the cluster of servers and being prepared for service without any manual configuration.
This can be achieved by anyone with ops knowledge, and so doesn't require specialist team knowledge. In a medium/large organisation, the failure of a single physical machine can result in VMs needing to be re-deployed for a bunch of teams. I think there are 3 indications your servers are cattle and not pets:
* Creating a new one requires no manual configuration of the server nor anything related to the server. Eg, if you create a new server, you shouldn't have to manually configure monitoring of it.
* Destroying a server shouldn't require any special treatment of data. Eg, you should not need to transfer data off before destroying it. It should not matter if a server dies and is completely unrecoverable.
* Creating N running instances should take as much effort for the operator as creating 1 instance
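The three tests above can be sketched with a toy in-memory fleet. `Fleet`, `provision` and `destroy` are made-up names rather than any real provisioning API, but they show the shape: one golden image, instances that self-register on creation, and no data-evacuation step on destroy.

```python
import uuid

class Fleet:
    """Toy model of a cattle-style server fleet (no real cloud calls)."""

    def __init__(self, image):
        self.image = image       # one golden image - no per-box config
        self.instances = {}

    def provision(self, n=1):
        # Criteria 1 and 3: creating N instances is one call, and each
        # instance "self-registers" for monitoring from the image alone.
        names = [uuid.uuid4().hex[:8] for _ in range(n)]
        for name in names:
            self.instances[name] = {"image": self.image, "monitored": True}
        return names

    def destroy(self, name):
        # Criterion 2: no data to transfer off first - state lives
        # elsewhere, so losing a box unrecoverably should not matter.
        del self.instances[name]

fleet = Fleet(image="web-v42")
a, b, c = fleet.provision(3)
fleet.destroy(b)                 # no special treatment needed
print(len(fleet.instances))      # 2
```

If any of those three operations needs a runbook and a named admin, you're still keeping pets.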
I thought I covered those points in the article. Yes, you can achieve cattle without cloud, but you won't have super elasticity without buying a lot more tin to cover way beyond your best guess at a worst-case scenario.
The cattle cloud approach really applies to treating your entire infrastructure the same way as the servers wherever possible.
>you won't have super elasticity without buying a lot more tin to cover way beyond your best guess at a worst-case scenario.
I was with you up to here.
Tin is pretty cheap, especially if you compare it to the costs of getting the cattle model to work.
" having cattle makes running things in the cloud easier, but it actually makes running things anywhere easier"
Exactly this; cattle v pets was a (long settled - cattle are better) virtualization argument from 15 years ago and is only very tangentially related to cloud (and even then only because cloud is just the logical extension of virtualization).
Saying that because cattle are better than pets I must buy an AWS subscription and store all my compute and data at the end of a phone line doesn't really add up. Saying that because cattle are better than pets I ought to virtualize my server instances on a host cluster does. If you want to push cloud, then come up with your own arguments.
The short version is:
Really, El Reg. 'On premise'? We expect better.
The long version is:
WTF is "on premise"? The only way that phrase is correct is if the author (who is alas not alone in this hideous misappropriation) is a steaming cretin. It's premiseS. The terminal S is important. Speaking of 'on-premise' IT makes no sense whatsoever. Each and every person who uses this intensely cretinous phrase just appears illiterate, irredeemably dumb or that they don't care. Whichever it is, it's a terrible, enormous shame.
There's a difference between language evolving and the spreading of stupidity: I guarantee this phrase was started by someone who wasn't bright enough to realise the difference between the words. I correct everyone I meet who uses this phrase; I implore you all to do the same.
"WTF is "on premise"?"
It means on one, a single, property. Premises, a plural, should properly point to multiple properties. Language changes over time, and the use of the singular is an evolution of the term, which originally refers to the collective of the land and the buildings, but the logic falls apart for an empty tract of land (thus nothing BUT the land, a singular; you wouldn't use a plural noun for an individual thing now, would you?). The term "on premise" (meaning on the same property on which it was sold) has been written into state laws concerning alcohol consumption for some time now, so it has legal precedent attached.
It means on one, a single, property.
No, it just does not. You can start trying to influence language to make its usage into that, but it is just a wrong neologism. As you rightly point out, premises is plural because it refers not just to a building, but to the combination of a building and land. Show me a building which occupies no land..
You can be of the opinion that you are marking the brave new way of saying it, and think we are fuddy-duddies for wanting the distinct meanings of premise and premises not to be confused, and we probably can't change your mind. However, using "premise" to mean a building would be, to me and many others, a mark of low intellect, and you also can't change our opinions.
So you agree with me since you used a double negative, turning it into a right (wronging a wrong).
"Show me a building which occupies no land.."
A FLYING building, concepts for which are being developed (like a floating warehouse). Plus what about space stations? Meanwhile, I've demonstrated the fact of land with no building.
And yes, one CAN change another's opinion. It's called drilling it continuously from all angles until you give up.
"It means on one, a single, property."
No. It doesn't.
A 'premise' is the basis of an argument. It has no other special meanings, and that meaning is the one being used in 'premises' as well.
When using the term 'premises', you're not referring to the land itself, but the legally agreed dimensions of a given property. In early title documents, the 'premises' were laid out at the start in order to define the thing which the title refers to. So 'on premises' doesn't mean 'it's in the building' or even 'it's on the driveway outside'. It means 'it lies within a legal box of dimensions X + Y + Z, at location X,Y,Z as agreed between myself and the state'. So 'on premise' could only mean 'in a location' if that location were a zero-dimensional dot.
The reason we use the word 'premises' for it is from Norman French, which is the language the early title documents were being written in.
Hope that clears that one up. It's actually pretty obvious when you think logically about it.
Although I do fundamentally agree with you, surely we have better things to do than argue about terms when, whichever is used, the meaning is clear. If it was a term that had ambiguity, sure, because that's not clear communication. I use both terms and rarely think about it.
I'm not a fan of the term 'serverless' but I know what it means, so I let that go too.
AC for obv reasons as discussing work stuff. Can only really comment on use of Azure though... I've not used other services such as AWS or Google enough to have expertise.
Azure cloud is simple for someone who wants their SQL Server database in the cloud: it's as simple as altering the connection string to point to the Azure "instance" and you're good to go.
Can be handy for customers with multiple sites who previously had to use lots of replication as that can all be ditched and the Azure instance used.
Easy to set up backups etc.
Some functionality not supported.
Not good for anything time critical as slow compared to local network (obv!).
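The "alter the connection string" step above can be sketched like this. The server names, database and credentials are placeholders, and the keyword format is the common SQL Server ODBC style (as used by e.g. pyodbc), not any specific customer's setup:

```python
# Illustrative only: moving from on-prem SQL Server to an Azure SQL
# "instance" is, at minimum, a connection-string change. All names below
# are made up.

def conn_string(server, database, azure=False):
    base = ("DRIVER={ODBC Driver 17 for SQL Server};"
            f"SERVER={server};DATABASE={database};")
    if azure:
        # Azure SQL requires TLS and (typically) SQL authentication
        return base + "UID=appuser;PWD=<secret>;Encrypt=yes;"
    # On-prem box using Windows integrated auth
    return base + "Trusted_Connection=yes;"

on_prem = conn_string("sqlbox01.internal", "Stock")
cloud = conn_string("tcp:example-srv.database.windows.net,1433", "Stock",
                    azure=True)
# Either string is passed to e.g. pyodbc.connect(); the application
# code around it is unchanged.
```

The surrounding caveats still apply, of course: the string change is the easy bit, and latency and unsupported features are what you discover afterwards.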
Things for customer to ponder.
Security - for lots of little companies, chances are cloud security is way better than on premises.
Availability - again, probably better than managing your own kit, but if it goes belly-up then it's out of your control how long before it's back up and running.
The big grey area is confidentiality (which countries might slurp your data).
Pricing - ideally you need to get the system up and running on real kit and get metrics on usage so you can calculate likely costs; guesstimates can be expensive.
For lots of small companies that are mainly storing data, cloud is probably great (as long as the data is nothing too confidential).
Just as many small companies have long gone the hosted-server route, cloudy options are a similarly good solution when you lack the staff/expertise to manage in-house hardware.
As for larger companies, small companies with lots of IT skills, or companies with very confidential/sensitive data, then the fun starts in assessing things...
I am seeing pressure to stop doing specific upgrades or bespoke work to improve things. We go for bigger, homogenised solutions and have to lock-step with everyone else. There's less chance of doing something a little better than your competitor, or of getting that great new idea working, when we have to sail along as a standardised raindrop in the cloud. I am certainly finding this out as a web person. Things start looking the same and working the same and, while that gives a nice, universal user experience, it also blands everything to a pabulum.
It's a two-edged sword. A common base means lots of experience dealing with problems when they arise, but it also means when a problem arises, it's likely to hit more of them at once. Sort of the difference between repairing a stock, mass-produced car and a custom-built one.
It's great to treat servers and such as a disposable thing easily substituted in case of failure. In general terms I absolutely agree that the 'thing' itself isn't important, only what it does.
The big issue is cost. Between the platform and the application license costs the things are just too expensive to treat casually or to scale to X redundant instances just because.
I have no problem building and throwing away as and when I need to. But when I see the size of the bill attached I do start to think twice. Especially when you start to have licensing schemes which are actively hostile to cloud use except say if you run on the vendors platform.
What people forget is that cattle ranches assume some requirement to scale, while pets come in small numbers and live in the house.
If you are keeping one cow as a pet in order to have milk on your cornflakes then you are being inefficient and would be better buying your milk from a large dairy herd.
If you are keeping a cat as a pet to kill the mice in your kitchen then the dairy herd is not providing the service you want and not providing it where you want it.
It's also not as if you have mice every day; the lazy moggy spends most of its time asleep anyway. It's there as a capability rather than a capacity resource.
Even if you go to the cloud for your infrastructure many applications are likely to remain pets because they address a problem that only your organisation has and have fairly modest resource requirements.
The thing is that you're not comparing it at the right point.
With a "pet", you have the matter of constant upkeep. You have to keep it in tune yourself since it's your hardware. If something goes wrong, it falls to you to find a way to fix the problem, even if it means being forced to put your "pet" down and get another one.
Whereas with "cattle", your job isn't the cow itself but rather the things you put on the cow. If the cow falls over, the hands pick up your things and put them on the next cow. Even if it's a problem unique to your organization, unless it's extremely custom (like requiring specialist hardware), what more does your job need except a CPU, some RAM, and some storage, which any VM can provide? Put another way, why go out of your way for a draft horse when you can rent the mule team nearby?
"what more does your job need except a CPU, some RAM, and some storage, which any VM can provide"
Depends on what the application does. Physical location, access to internal networks, availability when network access is lost; are all requirements that mean you might be better with a cat in your kitchen than a cow in the dairy.
However, I'd moved on to talk about applications, not servers, at that point anyway. My point then was that the software applications you run on your servers are probably still pets, not cattle. You were probably running them virtualised in-house, and outsourcing the virtualisation to some cloud provider probably does not change anything fundamental in the way you treat them. Just because the cat actually sleeps next door does not make it a cow; it's still a pet.
For the most part we make the rules our servers operate by. For servers owned by somebody else, somebody else makes the rules. And those rules will --always-- prioritize the objectives of those who own those servers, not our objectives. "Their" business plan is not "our" business plan.
The ultimate objective of any private enterprise is to serve the needs of customers only --just-- enough to retain those customers, at minimal possible cost, while diverting as much money as possible from customer objectives. This is the ideal convergence point for a given enterprise. That objective is not our objective, as customers.
Adding more layers of business plans with unaligned objectives does not fundamentally seem like a good idea.
So you port your app to the cloud. Yay! It works! But the app is dead, missing, or spewing errors an average of 10 minutes a day. The solution is a new distributed and redundant cloud design. Eventual consistency, redundancy, and distributed locking seemed so much easier at those cheery developer conferences than they do now. All the developers start hooking up open source libraries that will surely solve all the problems. That needs just a few more servers to get it working. Add something to sift through the gigabytes of new logs. The app has to be re-designed to eliminate a few features that can't work well with a distributed architecture. As time passes, you start to wonder why the code for recovery scenarios is getting larger than the happy path. OK, it runs! Your app is never dead or missing, but it's slow, glitchy, and never stops spewing warnings. You slay bug after bug, but there are still some lurking designs needing immediate consistency or singleton tasks to work reliably. You yearn for the days of in-house servers that were fast, lean, simple, and ran for months without a moment of downtime.
I know when you're running Chaos Monkey. Please stop it when I'm trying to find a movie.
Stop thinking about cows and instead become a cattle investor...
Sounds like Apple without Steve Jobs. Sounds like General Motors when all the "car guys" were replaced with "automotive business executives". I could go on.
Any company which does not deeply understand and functionally work on the base elements of their business operation is a pattern trending towards zero. You don't make money by not sweating the small stuff.
Hey, cattle investors think a cow is a $1000 depreciable asset, and if it costs $200 in vet bills to avoid shooting it and selling the carcass for $100 in pet food instead of getting a bigger sale of beef or 5-N years of milk, they'll generally pay for it. Chicken farm investors might not think the labor's worth it, because a chicken's more like a $5 depreciable asset and the vet still charges per visit, so it may be cheaper to dispose of one chicken than risk the whole flock getting sick.
The "Cloud", that fluffy diagram that's been used in internet-explanation-diagrams (and sales pitches) for as long as I can remember the internet as a "thing". Ahhh makes me feel all warm and fuzzy ...
The thing is machines in the "Cloud" are just someone else's servers where you have less control. Maybe some of the conveniences suit some people, but it comes at a loss of total control and a set of backup and security implications where you are trusting someone else a lot more. Now in some cases the "cloud" company could be better than in house IT on security/backup and in others not so. Personally, and maybe I'm a bit of a control freak, I don't trust really important stuff to be stored in the "cloud".
There are things that I can never see being allowed in the cloud, PCI credit card stuff for starters, there must be a lot more examples.
For the kind of "real" workload most of us actually have, which is more like a set of small independent systems than a "netflix webscale system", the whole "cloud" approach isn't quite so relevant. And if you have a reasonably predictable workload, (rather than the "bursty, scalable" cloud model), then we priced it up, and the cost of buying tin and hosting it... is about 8x cheaper than provisioning cloud VMs.
"So flip side, what if your work load is inconsistent, subject to peaks and troughs where local iron can be under-utilized a lot?"
The example which comes to mind here is a large bank which had dedicated servers doing little for most of the month then running flat out for 2-3 days to consolidate data into the group balance sheet.
But that was actually consistent...
Consistent in terms of a pattern, but not in terms of actual usage (high activity for short periods followed by lengthy idleness isn't exactly steady), which becomes a factor in the figures, since idle iron costs but doesn't return. But that example creates a dilemma. As a financial institution, its data would be considered highly confidential and under legal restraints: a significant reason you have to stick with pets. Yet the job isn't run for very long (10% or less of total uptime), making it difficult to justify pets (lots of upkeep, little actual work for the time period). It's something like this that calls for a third solution: a rented physical machine that can be sited and a physical drive plugged in to run the job, then unplugged and the machine returned without any storage when the job's done. Use it only when you need it; the data stays with you on site, so confidentiality is kept, and so on.
Biting the hand that feeds IT © 1998–2019