"SQL Server", not "SQL"
"The default database is, naturally, MS SQL"
That should be "MS SQL Server".
In case you hadn't heard, Microsoft is trifling with this "cloud" thing. It even has a new strategy, Cloud OS, discussed in the first part of this three-part series. Cloud service providers, the focus of part two, are important to Microsoft's plans as well. But Microsoft's plans do not end there: the company has gone and built …
"The default database is, naturally, MS SQL"
That should be "MS SQL Server".
The default database on Azure is not "MS SQL Server", it's "Windows Azure SQL Database"
Nah, not really. SQL by itself I would say is wrong, but if you're going to qualify with the company name I don't see a problem.
Don't do it on a fucking leap year.
'The default database on Azure is not "MS SQL Server", it's "Windows Azure SQL Database"'.
OK, now I know the proper product name. Thanks!
But "MS SQL" is just wrong. Even if that's what MS calls it.
It's not about whether people put the MS in front of the name; it's that the default database Microsoft offers in Azure is not the product known as "Microsoft SQL Server", although they do seem to say that it is built on the same tech :P
...just as soon as my backend starts working as a zero-knowledge service that simply routes and stores encrypted blobs, and an encrypted search index. That work is taking time, but it's progressing. And from that point on I couldn't give a damn where my data is stored, so long as it's cheaper than doing it myself.
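What the poster describes is sometimes called a blind-index scheme: the server stores only opaque ciphertext blobs plus keyed-hash search tokens it can match but not invert. The sketch below is a hypothetical toy illustration of that shape, not anyone's actual backend; the hash-counter "cipher" is for demonstration only, and a real system would use a vetted AEAD such as AES-GCM.

```python
import hashlib
import hmac
import os
import secrets

def derive_keys(passphrase: bytes, salt: bytes):
    """Derive separate encryption and index keys from one passphrase."""
    master = hashlib.scrypt(passphrase, salt=salt, n=2**14, r=8, p=1, dklen=64)
    return master[:32], master[32:]  # (encryption key, index key)

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy stream cipher (SHA-256 in counter mode) -- illustration only."""
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        pad = hashlib.sha256(key + nonce + block.to_bytes(8, "big")).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

def blind_token(index_key: bytes, word: str) -> str:
    # Deterministic keyword token: the server can match it, not invert it.
    return hmac.new(index_key, word.lower().encode(), hashlib.sha256).hexdigest()

# --- the "zero-knowledge" server side: routes and stores opaque data ---
blob_store: dict[str, tuple[bytes, bytes]] = {}  # blob_id -> (nonce, ciphertext)
search_index: dict[str, set[str]] = {}           # token   -> blob_ids

def store(enc_key: bytes, index_key: bytes, text: str) -> str:
    """Encrypt client-side, then hand the server only ciphertext + tokens."""
    blob_id, nonce = secrets.token_hex(8), os.urandom(16)
    blob_store[blob_id] = (nonce, keystream_xor(enc_key, nonce, text.encode()))
    for word in text.split():
        search_index.setdefault(blind_token(index_key, word), set()).add(blob_id)
    return blob_id

def search(enc_key: bytes, index_key: bytes, word: str) -> list[str]:
    """Look up by token, decrypt matches client-side."""
    hits = search_index.get(blind_token(index_key, word), set())
    return [keystream_xor(enc_key, *blob_store[b]).decode() for b in hits]

salt = os.urandom(16)
ek, ik = derive_keys(b"correct horse battery staple", salt)
store(ek, ik, "meeting notes about Azure pricing")
print(search(ek, ik, "azure"))  # decryption happens client-side only
```

With this shape, the host never holds the keys, so where the blobs physically live matters much less.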
Plan for downtime about once a week because of
1) DNS cockup*
2) Routing cockup*
3) SSL Certificate cockup*
* Delete as applicable. Or not.
Once a week my arse. Three times in two years, more like. Anyway, what's to say that companies running in house - particularly small, non-IT-centric operations - will have better uptime?
Yes, I know, you can shout at someone if you employ them, but it's hardly going to make things get fixed faster, is it?
I keep toying with the idea of moving my servers to Azure. As part of my Action Pack sub, I get a budget for cloud.
1) It seems to become very expensive very quickly
2) I get a weekly email about scheduled downtime - I'd normally arrange downtime of my own servers around my work. Not possible with cloud.
I'm just using it for demo installations at the moment. That brings up the other problem: you can't use Save/Resume as in Hyper-V, so you get the choice of running all the time (expensive) or allowing time for machines to start.
Nearly got caught out the other day - if you shut down from within the server rather than the console, you keep paying!
You are correct to be unsure.
It does become very expensive very quickly and mistaken deployments could have a severe financial impact.
I am not sure where you are that you have weekly downtime. It is not nearly as good as they claim, but it is not that bad. It is true, though, that some parts of scheduling are out of your control. I am actually going to have to leave a data center in a few months because of a scheduled change I can't live with.
Re: "if you shut down from within the server rather than the console,"
There are a bunch of gotchas like that, and they differ from vendor to vendor. You have to be extremely wary. I was charged by Amazon for a machine that was not even running. Mercifully it was small, but I got dinged for more than a month's worth before I caught it, figured out how to fix it and then fixed it. It was due to a few hassles like this that I idled my stuff at Amazon.
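For the Azure case specifically, the gotcha comes down to "stopped" versus "deallocated": a VM shut down from inside the guest OS ends up stopped but still allocated, and still billed for compute. A sketch using the Azure CLI (resource group and VM names below are placeholders):

```shell
# Shutting down from inside the guest leaves the VM "stopped" but
# allocated -- compute charges continue. "az vm stop" does the same.
az vm stop --resource-group myGroup --name myVM

# Deallocating releases the hardware, which is what stops the meter:
az vm deallocate --resource-group myGroup --name myVM

# Check which state you are actually in:
# "VM stopped" is still billed; "VM deallocated" is not.
az vm show --resource-group myGroup --name myVM -d --query powerState
```

The same power-off action can therefore cost very different amounts depending on which path you took to it.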
"Oracle databases are still apparently a thing."
In five years' time, people will still know about Oracle.
In five months' time, will the "is that a thing" thing still be a thing, 'cos it's getting kind of mainstream now?
Just askin, like. Workin on internet time, an all dem tings.
"Is a thing" is an expression that significantly predates the hipsterati discovering it and converting it into an image macro.
I have been at this for many years now, starting with virtual servers in the late 1990s. Over the years I have gone for vanilla stuff at every turn and that has meant until recently things like CentOS.
I currently maintain servers in data centers in Florida, Washington and Toronto, as well as servers in my local area. I have idled servers on Amazon's cloud system and at a company called 'GoGrid'. Most of what is running is still vanilla Linux of one flavor or another, such as CentOS and Ubuntu. However, a few machines are now running Windows Server 2003 at some client sites and 2008 and 2008R2 at others.
I have an account on Azure and have done for at least a couple of years. However, I have never even gone through the effort of deploying test machines for more than a short while (and on Microsoft's dime) because the cost structure has never been close enough to viable to seriously consider. I will be returning to this in the spring, but I am not optimistic.
I have also maintained a Google Office system and written test code in their Go language. There have been issues of one sort or another, but I will be revisiting this again. I am optimistic that Google will come up with something viable, but thus far they are not the answer.
The servers that I have committed to are generally Linux and generally run open source software exclusively. It would have been easy to move clients over to cloud offerings at Amazon, GoGrid or Microsoft's Azure except that the costs simply did not make sense.
I am confident that the cloudy universe will be the eventual winner and I have voted by keeping a number of live servers on various systems. It is only for limited production, though.
To the extent that it is prudent to move a client on to the cloud entirely, I need to make sure I am using a common vanilla subset that exists across multiple vendors so that I can set up adequate fail-over and so that clients don't end up captives of a single vendor. To a large extent, this rules out Microsoft because they have no real interest in supporting what we need.
Going forward, I am hoping to be able to cost-effectively have limited Windows Terminal Services facilities so that I can comfortably transition legacy applications. However, I am not confident that there will ever come a time when this makes financial sense to a client.
I will definitely be shifting function into the cloud on Linux. The direction is to do vanilla, browser-hosted stuff where possible and otherwise to use XDMCP or VNC to log into X-Windows.
My focus is on small businesses with fewer than 50 employees. Some of what I do may not scale well to thousands or millions of users, but I am fine with that. I would rather have a competent special-purpose platform than an incompetent general-use platform.
People who are serious about providing working systems at a cost that is viable should avoid the siren song of the many interesting but unusual applications available on these platforms. Certainly, you should avoid like the plague anything that would lock you into an ecosystem like Windows, MS Office, Outlook, Visual Studio, etc.
For the foreseeable future, mission critical applications such as banking systems do not belong on these cloud platforms. They cannot be sufficiently secured and do not properly support the types of sophisticated disaster recovery scenarios required by large companies who depend upon their LOB applications staying up.
YMMV. It is fairly easy and cheap to put up a cloud environment for testing. You can do that and if it seems to make sense you can prove it out cheaply on local systems and then deploy. Deployment is not as easy as they make it out to be, but it is still pretty easy to bring a live network up quickly.
You backup your data onto a remote site.
The hosting supplier you are using is HQ'ed in the USA
The NSA can snoop your data at their leisure.
Would not be surprised if your data is available for use by the hoster or its customers
If their cloud goes down, then you lose access to the data
*and* you have to pay them for the privilege.
Seems like the worst of all possible worlds.
You are completely incorrect. Your data is not available to the hoster, nor its customers, nor the NSA (unless they listen on the pipe).
A) The NSA do listen on the pipe.
B) The NSA can - and do - demand the data via secret letters from a secret court using tangled (and secret) interpretations of purposefully obfuscated laws paired with secret letters to keep everything secret on penalty of all involved going to secret prisons for a secret amount of time after a secret (or no) trial.
If your data is at any point exposed in an unencrypted format* to a company or individual that has a US legal attack surface - no matter how small - then your data belongs to the NSA. That simply cannot be contested at this point.
Nothing Microsoft, Google, Amazon or anyone else can do will ever make their clouds secure unless they find a way to offer 100% end-to-end (at flight and at rest) encryption that nobody but the keyholder can decrypt. No man-in-the-middle attacks. No secret give-us-the-key-now attacks. No hidden backdoors in the crypto algorithm. (It must be developed openly and audited by multiple independent reviewers with divergent - or opposing - allegiances.)
Neither Microsoft, Google, Amazon nor any other American provider has the slightest interest in providing that level of security, as it would eat a few points of margin to develop and maintain it. So, unless you can secure it yourself, you can't trust cloud computing for anything but the most trivial of workloads.
I'll certainly not be putting any personally identifiable information for any of my customers in the cloud...
*Or encrypted with an algorithm the NSA has broken/compromised.
(sorry, but someone had to - I'll read the article properly later, arf)
That was the answer I almost went with myself. Too true.
Trevor - excellent explanation of how you think and how this influences what you do as a sysadmin. I am much like you, having involved myself in that paradigm (for want of a better, less wank-word-bingo term) for a long time and it's sometimes difficult to break out of that thought process.
I think the important thing to note is that such bias is experienced by all sysadmins, and those who promote cloud-based systems are certainly not immune.
More directly, for some situations, it makes sense to deal with 'workloads' but for others it does not. Those situations and jobs that can be effectively provisioned and managed in a per-workload paradigm (bingo!) are the ones most likely to benefit from being run in a commodity cloud environment such as Azure or Amazon - though obviously that is not the sole criterion.
Trying to force things to fit a per-workload model, however, is a recipe, if not for disaster, then at least for pain - either through reduced functionality, increased costs, or both.
I once had an argument with someone who claimed that relational databases were outdated and no longer relevant, 'NoSQL' being the future and the only logical path. His bias was born of his long-standing involvement with building back-end data crunching systems. Though he never conceded the point (and I didn't expect him to), it was amusing to me that his current project involved processing data generated by SAP. The fact that his 'big data' project would be pointless without the data generated by an application requiring a relational database seemed not to inspire him to rethink his position.
Long story short, the pain people experience when moving to the cloud, where not due to sheer incompetence or rubbish luck, is usually due to trying to shoehorn a system/client/application into a model that just doesn't fit it.
Unfortunately, vendors like MS view cloud services - specifically the "delicious monthly subscription[s]" - as essential for their business to succeed. The result is that many smaller companies are effectively being forced into the cloud as MS are quite simply structuring their software offerings that way.
The brutal murder of SBS is the prime example, but the removal of the 3-user 'family pack' Office license in 2013 is another. Neither are in the same sphere as people who would be using the Azure platform but both are part of Microsoft's plan to move as many users as possible to a subscription model and eventually do away with perpetual licenses altogether. Both encourage people to adopt this model not by making the model itself desirable, but by increasing the cost or difficulty of not adopting it.
For some users and businesses, it actually makes sense and will represent an improvement or a saving, but not for everyone, and those people will end up paying more for a less suitable product.
Couldn't agree more. Well put.
Why bother trying to compare costs if you don't factor in two of the most expensive parts of your infrastructure?
A basic Azure account gives you 20 storage accounts, and each one can hold 200TB of storage; the on-prem equivalent would cost you $$millions. Granted, your 100 VMs are unlikely to need that much storage, but if you're going to compare cost, at least try comparing apples with apples.
And you missed one of the major advantages of Azure: the ability to automate scale. We turn 80 percent of our servers off during less busy periods; the process is automated and costs us nothing. We don't pay for the servers once they are off. This is almost impossible to achieve on-prem, as you are paying for the hardware and licences regardless of their state.
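The sort of off-peak automation described above can be sketched with a couple of scheduled Azure CLI calls. This is a hypothetical crontab fragment (the `autoscale=offpeak` tag and the times are placeholders, not the poster's actual setup); note that it is deallocation, not a guest shutdown, that actually stops the compute meter:

```shell
# 20:00 -- deallocate every VM tagged autoscale=offpeak
0 20 * * * az vm deallocate --ids $(az vm list --query "[?tags.autoscale=='offpeak'].id" -o tsv)

# 07:00 -- bring them back before the working day
0 7 * * * az vm start --ids $(az vm list --query "[?tags.autoscale=='offpeak'].id" -o tsv)
```

Tag-driven selection like this means new VMs join the schedule just by being tagged, with no script changes.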
This one I am going to respond to in article form.
"This is almost impossible to achieve on-prem as you are paying for the hardware and licences regardless of their state."
Really? I thought there used to be at least a couple of vendors that would sell/rent you "power by the hour". May not have been applicable for the Window box market though.
"Why bother trying to compare costs if you don't factor in two of the most expensive parts of your infrastructure?"
The most expensive parts of whose infrastructure?
Is storage and bandwidth always the most expensive parts for everyone? It seems the question could be better worded:
"Why bother trying to compare costs if you don't factor in two of the most expensive parts of MY infrastructure?"