Cloud computing holds the promise of someone else taking care of automating your IT, but just because that gives you a fleet of power tools doesn't mean you should throw out your hammer, wrench set and screwdrivers. An Anonymous Coward comment on a recent article of mine, What you need to know about moving to the Azure public …
"A large quantity of the world's IT needs to idle, waiting for something to do. When it senses something to do, it wakes up and does it."
Rubbish, IT hardware, software and especially the IT workers are all working flat out 100% of the time...
*Checks ticket system* no messages *clicks next reg article*
The article is pretty much spot on in how I see the utility of the cloud. It works very well for new applications that have been designed for it - spin up the smallest possible instance for new web service X and now you've got a much cheaper platform than you could build yourself.
Data costs are a red herring - if they are growing in the cloud, they would be growing locally too. Whether the cost is too high depends on how much you want to store and what infrastructure you have. Amazon S3 is $85 per TB per month. Depending on your scale, that's either a lot more than you pay locally or not a bad deal. If you've got no infrastructure, then you have to buy a server, maintain it and power it. If you already have a server, then another couple of disks for your RAID array is much cheaper, but that might not suit your workload.
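The trade-off above can be put into rough numbers. A back-of-the-envelope sketch, using the $85/TB/month S3 figure quoted in the comment; the local-disk price, service life and overhead factor are illustrative assumptions, not quoted prices:

```python
# Rough cloud-vs-local monthly storage cost, per the commenter's argument.
# Only the $85/TB/month S3 figure comes from the thread; the rest are
# made-up but plausible assumptions for the "already have a server" case.
S3_PER_TB_MONTH = 85.0

def s3_monthly_cost(tb):
    """Monthly S3 storage bill (storage only; request and egress fees extra)."""
    return tb * S3_PER_TB_MONTH

def local_monthly_cost(tb, disk_cost_per_tb=60.0, lifetime_months=36,
                       overhead_factor=2.0):
    """Amortised local cost: disk price spread over an assumed 3-year life,
    doubled (overhead_factor) to cover power, cooling and RAID redundancy."""
    return tb * disk_cost_per_tb * overhead_factor / lifetime_months

for tb in (1, 10, 100):
    print(f"{tb:>4} TB: S3 ${s3_monthly_cost(tb):>8.2f}/mo, "
          f"local ~${local_monthly_cost(tb):>7.2f}/mo")
```

Under these (debatable) assumptions the extra disks win by a wide margin, which is the commenter's point: the comparison only flips if you have no existing infrastructure to amortise against.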
cloud != cloud
Most smaller/local providers offering "cloud" are basically giving you a few Dell PowerEdge nodes, in the same country, in a single datacentre, the same datacentre as all their other stuff, the same network as all their other stuff, and in the same racks as all their other stuff.
Just because the ESX nodes have been configured as a "cluster", that doesn't make it a cloud. That's just some dedicated servers with a "mission critical" premium added to the pricing. In those cases, you're better off doing it yourself and spreading the risk around multiple DCs.
Proper cloud to me is the abstraction of resources through APIs and geographic redundancy.
Re: cloud != cloud
When you dig a bit deeper, you realise that the big companies' clouds are less ‘cloudy’ and more ‘clustery’ than you think. Quite scary, really, what people think they are getting compared to what they are actually getting. Good marketing, though.
Re: cloud != cloud @AC
I suggest that all big cloud services will tend to become more 'clustery' due simply to the way performance optimisers work.
"A basic Azure account gives you 20 storage accounts and each one can have 200TB of storage. The on-prem equivalent would cost you millions."
According to http://www.windowsazure.com/en-us/documentation/articles/storage-whatis-account/, you only get 5 accounts. Still, that is a whole lot of storage. Wonder how cheap it is in the cloud? Local replication only, as that is the cheapest option:
200TB * 5 accounts = 1000TB. At the tiered per-GB rates (first 1TB at £0.045, next 49TB at £0.042, next 450TB at £0.039, next 500TB at £0.036): £0.045*1*1024 + £0.042*49*1024 + £0.039*450*1024 + £0.036*500*1024 = £38,556.67 per month for storage, plus a fee per read or write, plus data transfer fees if you're going out of the cloud.
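The tiered sum above can be reproduced mechanically. A small sketch using the tier sizes and per-GB rates exactly as quoted in the comment (no other pricing is assumed):

```python
# Reproduce the commenter's tiered Azure storage pricing calculation.
# Tiers and per-GB rates (£) are taken verbatim from the comment above.
TIERS = [  # (band size in TB, £ per GB per month)
    (1,   0.045),
    (49,  0.042),
    (450, 0.039),
    (500, 0.036),
]

def monthly_storage_cost(total_tb):
    """Walk the price tiers, charging each band at its own per-GB rate."""
    cost, remaining = 0.0, total_tb
    for size_tb, rate_per_gb in TIERS:
        band = min(remaining, size_tb)
        cost += band * 1024 * rate_per_gb  # 1 TB = 1024 GB, as in the comment
        remaining -= band
        if remaining <= 0:
            break
    return cost

print(f"£{monthly_storage_cost(1000):,.2f} per month")  # → £38,556.67 per month
```

Note this is storage only; per-operation and egress fees would sit on top, as the comment says.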
As a slightly silly comparison, for the same money you could buy 1080TB of new storage each month ($64,357 at $59.54 per TB in Backblaze Storage Pod 3.0s), and then it is yours forever (after month two, you have local redundancy). Even with the power and cooling costs, you're not going to save money putting bulk data into the cloud.
The advantages of the cloud are correctly sizing instances to loads, rapid scaling, ease of management, and reliability. As long as it is easier and cheaper to spin up a new instance than it is to provision a new VM, and reliability is comparable, cloud makes sense. Mass storage doesn't, so far, seem sensible (even ignoring the vendor lock-in potential of having all your data held by a third party).
And I'm thinking that, if you are using 200TB of data per user, you are not going to want to wait for Internet speeds to get to it. Locally you can have fibre for accessing all that data at gigabyte-per-second speeds, but the Internet is going to knock you down to 100Mbps in the very best of cases. Don't argue; if you have the money for a gigabit Internet connection, then you have the money to lay a 100Gbps line in your own company, with all the infrastructure around it.
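The speed gap is easy to quantify. A quick sketch of raw transfer time per terabyte at the link speeds mentioned above, ignoring protocol overhead (so real transfers would be slower still):

```python
# Raw time to move 1 TB at various link speeds. Ignores TCP/protocol
# overhead and contention, so these are best-case figures.
def transfer_hours(terabytes, link_mbps):
    """Hours to move the given volume over a link of link_mbps megabits/s."""
    bits = terabytes * 1024**4 * 8          # TB -> bytes -> bits
    return bits / (link_mbps * 1e6) / 3600  # seconds -> hours

for mbps in (100, 1000, 10000):
    print(f"{mbps:>6} Mbps: {transfer_hours(1, mbps):6.2f} h per TB")
```

At 100Mbps a single terabyte takes roughly a day; local 10Gbps fibre brings that down to about a quarter of an hour, which is the commenter's point about 200TB datasets.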
Plus I doubt very much that having a petabyte of data available is going to cost you "millions". A few hundred thousand for sure: purchasing the various bits and bobs that make a petabyte of data available (NAS and RAID), redundancy and cooling and whatnot, plus an admin to make sure everything ticks along nicely. That will take you into the six-digit range for sure, but seven?
I think dynamic provisioning (auto scaling) is more applicable than the article suggests. Most websites have a daily traffic cycle where more resources are required at certain times of the day. In the typical on-premises situation, the statically provisioned compute resource (servers) is likely to be quite over-provisioned, so there is a lot of wastage (even at peak times of the day). Auto scaling is likely to use far less resource on average.
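The saving from tracking a daily cycle instead of provisioning for its peak can be sketched with a toy model. The sinusoidal load curve and 20% headroom figure below are illustrative assumptions, not measurements:

```python
# Toy model of a daily traffic cycle: static provisioning must cover the
# peak (plus headroom) all day, while autoscaling tracks the curve hourly.
import math

# Assumed load: base 50 units, +/-40 over a 24-hour sine, peaking at noon.
hourly_load = [50 + 40 * math.sin(2 * math.pi * (h - 6) / 24)
               for h in range(24)]

HEADROOM = 1.2  # assumed 20% safety margin in both cases

static_hours = max(hourly_load) * HEADROOM * 24          # pay for peak, 24h
autoscaled_hours = sum(l * HEADROOM for l in hourly_load)  # pay hour by hour

print(f"static: {static_hours:.0f} capacity-hours, "
      f"autoscaled: {autoscaled_hours:.0f}, "
      f"saving: {1 - autoscaled_hours / static_hours:.0%}")
```

With this particular curve, autoscaling pays for roughly 44% fewer capacity-hours; the exact figure depends entirely on how peaky the real traffic is.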
Re: Dynamic provisioning
The majority of companies have similar daily/weekly I/O usage patterns, and the cloud providers have to build for peak usage. A bit like highways: not everyone works 8-5, but enough do that the highways are built to accommodate that. You'll be paying for cloud providers to overbuild as well, just like the rest of the IT world does. No free lunches.
Re: A bit like highways
It costs millions to lay one mile of highway.
It costs millions to build a data center, for sure, but providing the bandwidth is going to be a side issue compared to scalability, reliability and redundancy costs.
The comparison is not entirely wrong, but I doubt that the scale is comparable.
Another weak link in the chain
"even if I have to pull down terabytes of data every day to my premises to drive industrial equipment."
That's going to need a fat Internet connection to move that much data. One also has to consider that the data may be needed with very low latency. One errant cable cut and your factory is going to be idle until it's fixed.
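"Terabytes every day" translates directly into a sustained link-speed requirement. A quick sketch, assuming the transfer is spread evenly over 24 hours (bursty access would need an even fatter pipe):

```python
# Sustained link speed needed to pull N terabytes down every single day,
# assuming the transfer is spread perfectly evenly over 24 hours.
def required_mbps(tb_per_day):
    bits_per_day = tb_per_day * 1024**4 * 8  # TB -> bytes -> bits
    return bits_per_day / 86400 / 1e6        # per-second rate in Mbps

for tb in (1, 5, 10):
    print(f"{tb:>2} TB/day needs ~{required_mbps(tb):,.0f} Mbps sustained")
```

Even a single terabyte a day saturates a 100Mbps line around the clock; at 10TB/day you are into dedicated multi-gigabit territory, before allowing for latency or that errant cable cut.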
I don't see many articles on purging data or procedures to limit data sprawl. I spent some time working for an aerospace company and the technical documentation repository was stuffed with useless garbage. Spreadsheets with no labels or explanations, memos to long gone employees reminding them of a weekly department meeting and all sorts of other debris that had no use or historical value. The biggest problem was that management was happy with just throwing more storage space on the server. What do they do now? Get a cloud account and upload thousands of unneeded files to a service they will be paying monthly?
It's not the technology that improves the bottom line, but a well thought out data system that leverages the use of company data and saves time finding relevant documents. A cloud service MAY be a part of that system, but just going to the cloud because everyone is doing it is a waste of money. From an operational and security standpoint, it may be more efficient to keep things in-house.
The BOFH had a relevant observation
THE CLOUD: You want to keep data which is local, only ever going to be local, only needed locally, never accessed remotely, not WANTED to be made available outside our building, which can only WEAKEN our security by being off site, hosted offsite.
BOFH: Simon Travaglia
There are some data situations which definitely ought to remain within the walls of the company (plus secure offsite backups), accessible only by the company.
And the NSA.