“Cloud computing does not mean the end of the IT professional.” So says Marin Litoiu, research professor at York University in Canada, erstwhile IBM research director and now one of the world’s foremost thinkers on cloud. This may seem a strange statement - coming from a man who has predicted that cloud computing will …
The "IT profession" has many meanings - probably one for everybody in it. At its broadest, pretty much anyone who earns their pay by sitting in front of a keyboard is an IT professional: from online porn workers to city traders (though you could argue that they both screw people for money, so the difference is small) to programmers, to CEOs.
When the cloud takes off (i.e. stops being merely fog) then it's reasonable to assume that almost all the jobs performed by people who sit in front of keyboards can and will be done by semi-AI-enabled chat bots, with or without avatars - depending on how much the "john" is paying. This includes all call centres and telesales operations.
The question that arises naturally is what all the people these technologies displace will be able to do for a living. These were the people who originally worked the land, then worked in factories, then in chicken-sheds with headsets attached - then what, exactly? And how long will this "revolution" take?
Maybe it's time to stop educating the next generation for jobs that won't be around for long: certainly gone by the time they retire and probably by the time they've paid off their student loans (maybe even by the time they graduate). Maybe we need to look at the jobs that only people can do - although just how many hairdressers does a country need?
How, exactly, does adding another (or more) ...
... layer(s) to the stack make for better/cheaper/faster/more secure computing?
The very concept of "cloud" is daft to the core; that's why PLATO never went anywhere. Real IT professionals have been moving away from the idea since the days of Xerox's Alto.
 Note I said "professionals"; I'm not talking Redmond or Cupertino here ...
About "...adding another (or more) layer(s)..."
Just testing myself whether I'm able to sell the cloud idea to Jake....
Q1: How, exactly, does adding another (or more) layer(s) to the stack make for better computing?
The concept of cloud as I currently understand it is to offer a pool of resources (CPU/RAM/storage/network interfaces/etc.).
From this pool we can choose the components required to complete a specific job.
For example, building a router requires multiple network interfaces and the right amount of RAM, but little CPU and storage.
In contrast, building an email server requires a large amount of storage and RAM, but we can probably manage with a single network interface (not best practice though).
Cloud infrastructure enables us to easily build a machine that fits a specific purpose; thus I think it makes for better computing.
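The "pick components from a pool" idea above can be sketched in a few lines. This is a toy illustration only - the instance shapes, names and sizes below are invented, not any real provider's catalogue:

```python
# A toy "resource pool": pick an instance shape that fits a workload.
# Shape names and sizes are invented for illustration.
SHAPES = {
    "net-heavy":     {"cpus": 2,  "ram_gb": 4,  "disk_gb": 20,  "nics": 4},  # router-like
    "storage-heavy": {"cpus": 2,  "ram_gb": 16, "disk_gb": 500, "nics": 1},  # mail-server-like
    "cpu-heavy":     {"cpus": 16, "ram_gb": 32, "disk_gb": 50,  "nics": 1},
}

def pick_shape(cpus=1, ram_gb=1, disk_gb=10, nics=1):
    """Return the smallest shape in the pool that satisfies the request."""
    candidates = [
        (name, spec) for name, spec in SHAPES.items()
        if spec["cpus"] >= cpus and spec["ram_gb"] >= ram_gb
        and spec["disk_gb"] >= disk_gb and spec["nics"] >= nics
    ]
    if not candidates:
        raise ValueError("no shape in the pool fits this workload")
    # Crude "cost" metric: sum of all resources, so we don't over-provision.
    return min(candidates, key=lambda kv: sum(kv[1].values()))[0]

print(pick_shape(nics=4))                  # router needs many NICs -> net-heavy
print(pick_shape(ram_gb=8, disk_gb=300))   # mail server -> storage-heavy
```

A real provider exposes the same decision through its catalogue of instance types; the point is that the fit-for-purpose choice happens at request time, not at hardware purchase time.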
Q2: How, exactly, does adding another (or more) layer(s) to the stack make for cheaper computing?
In terms of CPU cycles I don't think it makes for cheaper computing, but from a costs perspective it can save you money.
For example let's assume I work for an agile software company that makes a new software release once a month.
On the release day the company's web servers are working hard as customers download the new version of the software.
For the rest of the month servers are sitting mostly idle.
Being able to scale up/down as demand goes up/down can save you money in environments where charging is done per CPU hour (e.g. Amazon EC2).
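The release-day scenario above is easy to put numbers on. The $0.10/hour rate and the demand profile below are invented for illustration, but the shape of the saving is the point:

```python
# Back-of-the-envelope cost comparison: fixed capacity vs scaling with demand.
# The hourly rate and demand profile are invented for illustration.
HOURLY_RATE = 0.10        # per server-hour, EC2-style billing (assumed)
HOURS_IN_MONTH = 30 * 24

def fixed_cost(peak_servers):
    """Provision for the release-day peak and run it all month."""
    return peak_servers * HOURS_IN_MONTH * HOURLY_RATE

def elastic_cost(baseline_servers, peak_servers, peak_hours):
    """Run a small baseline, scale up only for the release-day spike."""
    extra = peak_servers - baseline_servers
    return (baseline_servers * HOURS_IN_MONTH + extra * peak_hours) * HOURLY_RATE

# 2 servers normally, 20 for a 24-hour release-day spike:
print(round(fixed_cost(20), 2))            # 1440.0
print(round(elastic_cost(2, 20, 24), 2))   # 187.2
```

Same CPU cycles on release day either way; the saving comes from not paying for idle capacity the other 29 days.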
Q3: How, exactly, does adding another (or more) layer(s) to the stack make for faster computing?
Virtualisation used in cloud computing doesn't directly offer performance gains in comparison to conventional bare-metal computing.
What cloud infrastructure enables us to do differently is to increase capacity when additional computing resources are needed.
As an example, when doing CPU-intensive parallel processing in the cloud, one can quite quickly add 10+ nodes to the cluster, which together will get the job done quicker than 1 bare-metal computer. Deploying 10 new nodes in a cloud environment is 100x quicker than deploying 10 new bare-metal computers, thus it saves time.
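It's worth hedging the "10 nodes get it done quicker" claim: the speedup is bounded by how much of the job actually parallelises. A quick sketch using Amdahl's law (the 95% figure below is illustrative, and real clusters also pay coordination and data-transfer costs on top):

```python
# Amdahl's law: ideal speedup from n workers when a fraction p of the job
# parallelises. Values are illustrative, not measurements.
def speedup(p, n):
    """Ideal speedup with n workers when fraction p of the work is parallel."""
    return 1.0 / ((1.0 - p) + p / n)

# A 95%-parallel job on 10 cloud nodes gets nowhere near 10x:
print(round(speedup(0.95, 10), 2))   # 6.9
```

So the cloud's real win here is provisioning speed (nodes in minutes rather than weeks), not raw per-node performance.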
Q4: How, exactly, does adding another (or more) layer(s) to the stack make for more secure computing?
It does not make for more secure computing.
In my opinion security is the number one common concern at the moment that slows down the adoption of cloud computing.
"Just testing myself whether I'm able to sell the cloud idea to Jake...."
Good luck with that ...
"Cloud infrastructure enables us to easily build a machine that fits the specific purpose thus I think it makes for better computing."
I've been doing this with un*x and VMS for around 40 years, in-house.
"For example let's assume I work for an agile software company that makes a new software release once a month."
Here's a hint, kid: "Agile" is another marketard term that only exists to separate management from internal funding, kinda like "cloud". Once a month software releases are, by definition, un-tested and as a result not trustworthy in a corporate environment.
"As an example when doing cpu-intesive parallel processing in cloud one can quite quickly add 10+ nodes into the cluster which together will get the job done quicker than 1 bare-metal computer."
So, basically, what you are saying is that when consuming massive amounts of computing resources, it's OK to not really understand the complexity of the issue until run-time, at which point we can throw money at the problem? You've really drunk the Kool-Aid, haven't you?
"It does not make for more secure computing."
No. It does not. Which should end this discussion, permanently.
"Once a month software releases are, by definition, un-tested and as a result not trustworthy in a corporate environment."
I understand you are not a great fan of Windows but many of us (individuals as well as corporates) are using this operating system which has a monthly release cycle for software patches.
Also the trend in the open source space seems like moving towards more frequent release development cycle, as per the announcement by the Mozilla Foundation here: http://blog.mozilla.com/blog/2011/04/13/new-channels-for-firefox-rapid-releases/.
"So, basically, what you are saying is that when consuming massive amounts of computing resources, it's OK to not really understand the complexity of the issue until run-time, at which point we can throw money at the problem?"
No, you still have to do capacity planning and estimation, but for unexpected situations, such as OperationPayback that took down Visa and Mastercard sites, it helps if you can quickly deploy additional capacity.
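The "plan for normal, burst for abnormal" point above is essentially what an autoscaling rule does. A minimal sketch, with invented thresholds and capacities (real autoscalers also add cooldowns and rate limits so they don't flap):

```python
import math

# Toy autoscaling rule of the kind that helps absorb an unexpected traffic
# spike. Target load, defaults and limits are invented for illustration.
def desired_servers(current, load_per_server, target_load=0.75, max_servers=50):
    """Scale the fleet so average load per server returns to the target."""
    needed = math.ceil(current * load_per_server / target_load)
    # Never scale to zero, and never past the budget cap.
    return min(max(needed, 1), max_servers)

print(desired_servers(current=5, load_per_server=1.5))    # overloaded -> 10
print(desired_servers(current=5, load_per_server=0.25))   # mostly idle -> 2
```

Capacity planning sets the baseline and the cap; the rule only handles the gap between plan and reality.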
"I understand you are not a great fan of Windows but many of us (individuals as well as corporates) are using this operating system"
::shrugs:: That's not my issue, it's yours. And has nothing to do with my argument.
"which has a monthly release cycle for software patches."
Patches for software that has been around for years is not "Agile Development"; rather, it is patching software that has been broken since inception.
"Also the trend in the open source space seems like moving towards more frequent release development cycle"
Do you understand what the term "development cycle" means? This is test-build, not official release software. That's how FOSS works ... The initial code is available to all and sundry, who are encouraged to expose its faults & foibles. Unlike certain more commercial offerings I could mention. Comprende the difference, compadre?
"such as OperationPayback that took down Visa and Mastercard sites"
The mere fact that the annonytwatskiddies could take down *any* computer owned by a multibillion dollar international corporation tells me more about the lack of ability of the IT staff in the multinationals than it does the "usefulness" of clod computing.
Said ability is probably more managerial cluelessness over where to properly spend money than a lack of technical ability amongst the IT staff.
 Typo ... but I'll assume it's Freudian and leave it ;-)
Customer IT Expertise
I think the main point to take from this is the need to answer an important question: is it smarter for the cloud customer to outsource all IT infrastructural expertise? Should they have no one involved whose annual review hinges on ensuring that some or all aspects of the customer's IT provisioning are not only providing the necessary services, but also adequately minimizing all risks inherent in such infrastructure? Is business continuity guaranteed? Best of all, are trends or even innovation being evaluated for the potential to enable competitive advantage? How much do you trust your cloud provider(s) to put the interests of your business ahead of the interests of their business(es) when incongruence exists? Indeed, how can you be sure you will even recognize such incongruence? How willing are you to bet your business on one or more cloud providers?
That's my job stuffed
I'm finding it very difficult, working for an SME, to keep up with the skills required for the systems users expect. To be frank, they don't need much more than they had 15 years ago: some simple word processing, email and basic spreadsheets, with the odd (very) bit of browsing. However, systems have become so complex that I need more and more support, and I feel that systems suppliers just keep upping the ante to get money for support packages that I really should not need for the same (used) functionality. We only update because we have to (versions "expire"), not out of vanity.
I can see standard "packages" for small businesses becoming the norm in the same way as telephony is for households. That will be the death of in-house IT, with everyone on thin PCs and someone from "maintenance" popping round on a scooter only if there is a local networking issue. This will also impact on all those software suppliers as many of the SME just stop running their own stuff, so they've cooked their own goose, for the most part, as well as mine.
We thought in the '80s that we could be a knowledge economy (including finance) and sell our skills globally. Then we invented the internet, and suddenly all the equally or better qualified workers in cheaper economies took over much of that, in the same way they did with manufacturing.
It's deja vu all over again.
@dlc.usa Very astute comment
How many of our Windows "engineering" buddies will be sucked into the cloud? Lowest common denominator (LCD) clouds for the LCD masses. Cheers!
Oh no you don't
Cloud? Still fog, as previously said. As for speeding up provisioning: <spew> New keyboard.
Fully 80% of IT professionals' time is now spent killing trees, wasting oxygen and feeding drones in the latest ITIL/Prince/PHB scam - in short, being part of a "managed process", not actually doing IT!
The cloud simply adds much more complexity (SSL everywhere, anyone? And we know how secure that is, don't we?) to achieving what a VAX in the back office used to do quietly and reliably 25 years ago. Who owns the data, where is it, how secure is it, how do we meet the latest privacy fictions, how do PHBs meet the lawyers' gobbledygook requirements, who is responsible for local thin clients, and for the network so users can see something, the gateways, firewalls, updating apps? In short, cloud just abstracts already-abstracted devices (VMs, LPARs, containers et al.), so there is an increase in job dissatisfaction.
In short, service path techs have a bright future, while it palls and greys for the rest of us.
Lastly, the poor techs in the cloud data centre will probably have to fill in forms to go for a leak, let alone "touch" any hardware, so nothing much will happen there either. Cue angry users, as usual, and it's all the technical staff's fault for not following standard practice/process.