More dark linings have been exposed in the cloud computing craze, this time by web security expert Russ McRee, who demonstrates how a flaw in a single provider can spell trouble for numerous customers it serves. In this case, the provider is software-as-a-service, or SaaS, provider Baynote, which offers search and other online …
This is the most inane "critical" article I've read on cloud computing in a while.
Traditional software running on web servers:
1. customers all install a piece of software.
2. vulnerability is discovered
3. vulnerability is exploited, with the help of Google
4. over the course of the next year or so, customers patch the vulnerability
SaaS:
1. customers sign up to the service
2. vulnerability is discovered
3. vulnerability is exploited
4. SaaS vendor patches vulnerability and _all_ of the customers are instantly patched
I'm failing to see how SaaS is somehow worse than the traditional route, except in the very narrow scenario where you've got the in-house technical resources to roll out your own security patches before your vendor does (good luck with that if it's not open source, eh?).
Oh this just has to be FUD.
From what I understood from the article, this isn't a cloud-specific vulnerability, it's a problem with a virtual directory being available on all customer domains. This is commonplace; try /server-info on most Apache servers.
This is a minor security issue, exacerbated by a config fuckup.
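For what it's worth, the /server-info leak mentioned above is mod_info's handler, and the usual fix is a one-stanza config restriction. A minimal sketch, assuming Apache 2.4 directives and the default path:

```apache
# Enable mod_info's handler but keep it off customer-facing hosts:
# only requests from the local machine are allowed through.
<Location "/server-info">
    SetHandler server-info
    Require local    # Apache 2.4; 2.2 used Order/Deny,Allow instead
</Location>
```

Same idea applies to /server-status (mod_status) - exposing either on every vhost is exactly the kind of "config fuckup" being described.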
It's your BOFH vs. their BOFH...
This has happened several times in the past with web server farms hosting multiple clients, and, indeed, is an issue that has been around since the days of time-share and mainframes. A problem with a "core" application or service that impacts everything built on that "core" will ALWAYS be a shared vulnerability.
The impact of these types of flaws has to be weighed against the cost and benefit of using the shared resource in the first place. While a larger number of clients WILL be affected by a problem on the common server platform, the cost of mitigation and the impact of any loss of service will, in all likelihood, be lower than for a similar issue hitting stand-alone servers - simply because while ALL clients on a server may share the impact, remediation only needs to be done once. And discovery of a problem will likely be quicker, due to the number of clients that may report it vs. the total number of clients, some of whom will NOT notice the failure.
To use a recent example: if Conficker hit one of these "cloud" systems, all the clients would be affected by it. However, remediation would also occur to all clients at the same time - including those that, with a stand-alone server, would NOT apply the fix in a timely manner.
No matter HOW or WHERE you host services, there is a probability that you will suffer an outage, that the outage WILL negatively impact business, and that repair will take time to implement. It's up to YOU to decide if you trust the service provider to adequately support you during an interruption, or if you trust your own IT apes to get it fixed or prevented any better.
So, as I see it, it's six-to-five-and-you-pick-'em.
Google et al
This is why I stopped using GMail and the rest of Google's integrated apps / labs. I became increasingly concerned that one day there will be a flaw in, say, Google Docs which allows someone to access the rest of the platform by, say, inserting a specific string into a cell. You get the idea. Since Google have no clear support route, and since Google are developing more and more half-done 'cool' projects, the potential for cross-application vulnerability appears to be expanding exponentially. Combine that with multiple vendors implementing various apps on the same platform and it's quite scary. One day there will be a large public exposure in one of these platforms and people will start to wake up a bit. Or probably not.
Who'd have thunk it?
Luckily this never happens when people own their software. Can't ever recall hearing about a software bug that affected thousands of businesses like this.....
"An oversight by a single vendor..."
You mean like an oversight by a single vendor such as Adobe (Flash), Microsoft (...), etc? For non-cloud-computing in-house software?
I'm not saying there's not a vulnerability here. But non-cloud software can have security issues too.
"No matter HOW or WHERE you host services, there is a probability that you will suffer an outage, that the outage WILL negatively impact business, and that repair will take time to implement."
My clients don't have that issue ... Some of us know how to run teh intrawebs software properly.
"Since Google have no clear support route"
If you use paid-for Google Apps, you get dedicated e-mail and telephone support.
if your response to "what is the resolution time for outages?" is "never gunna happen", well i won't even get as far as asking you any more questions
what happens when one of your transit links goes down? do you have a time machine to go back in time to a few minutes before the JCB started digging and send out a withdraw so it has finished propagating before the line goes down? if so, could i borrow it for a couple of hours this evening? i have to send myself a couple of numbers...
Finally. The same applies to any shared security credentials, particularly passwords. Worse than that are the unofficial sites springing up around some of the bigger companies, which means vulnerabilities that will be hidden from the companies themselves. The further a system extends, the weaker it is, if by nothing else then the fulcrum effect. No one wants to be "odd man out", and there have been secrets revealed under nothing more than dares. Not that we males aren't all infinitely wise. We are. We just look stupid sometimes.
"if your response to "what is the resolution time for outages?" is "never gunna happen", well i won't even get as far as asking you any more questions"
Most corporate clients who need it see well over six nines, a couple are getting close to seven nines ... My Internet-facing home network and data has been up and available online for over a quarter century, non-stop, with no losses & no break ins.
"what happens when one of your transit links goes down?"
You do know what redundancy means, right?
"do you have a time machine to go back in time to a few minutes before the JCB started digging and send out a withdraw so it has finished propagating before the line goes down?"
JCB is a trademark, not a bit of kit. The standard unit you are looking for is "backhoe". HTH.
"if so, could i borrow it for a couple of hours this evening?"
No. But you can rent me for a few weeks. If you have to ask how much, you can't afford me.
"i have to send myself a couple of numbers..."
Eggs and 1 basket come to mind
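For scale, the "six nines" and "seven nines" claimed above translate into very tight downtime budgets. A quick sketch of the arithmetic (straight maths, not a claim about anyone's network):

```python
# Translate "N nines" of availability into an allowed downtime budget.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31,557,600 s

def downtime_per_year(nines: int) -> float:
    """Seconds of downtime permitted per year at N nines of availability."""
    unavailability = 10 ** (-nines)
    return SECONDS_PER_YEAR * unavailability

for n in (5, 6, 7):
    print(f"{n} nines -> {downtime_per_year(n):7.2f} s/year")
# Six nines allows roughly half a minute of downtime per year;
# seven nines, about three seconds.
```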
You might be very competent in networking and well funded to boot, both of which can substantially reduce the likelihood of failure, but that likelihood is still some nonzero value.
Further, if the cost of achieving this bullet proof level of uptime exceeds the potential losses, it's a waste of money, hence the need to evaluate. Example: spending $1M per year to hit five nines would make sense if you lose a million bucks for every hour you're offline. A greater ratio of prevention/incident cost wouldn't make sense.
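That evaluation boils down to a one-line comparison. A sketch, with the function name and all figures purely illustrative:

```python
# Rough break-even test for uptime spending: prevention is only worth it
# if it costs less than the outage losses it is expected to avoid.
def uptime_spend_is_worthwhile(annual_cost: float,
                               loss_per_hour: float,
                               expected_outage_hours: float) -> bool:
    """True if the yearly prevention spend is below the expected loss avoided."""
    return annual_cost < loss_per_hour * expected_outage_hours

# $1M/year spend vs. $1M/hour losses: worthwhile only if it prevents
# more than one hour of outage per year.
print(uptime_spend_is_worthwhile(1_000_000, 1_000_000, 2.0))  # True
print(uptime_spend_is_worthwhile(1_000_000, 1_000_000, 0.5))  # False
```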
"My Internet-facing home network and data has been up and available online for over a quarter century, non-stop, with no losses & no break ins."
It's good to hear that you survived the massive attacks that home Internet connections suffered in 1985.
My house has been up and facing the street for over a quarter of a century, non-stop, with no losses or break ins. I guess that means I should be a security consultant for banks and other high profile targets, huh?
...Mine's the one with a BGP reference manual in one pocket, and papers with some generator/UPS/transfer switch calculations in the other.
The difference is that... IF... you own, and run, your own servers, or systems/software... AND, a "common vulnerability" exists, and is exploited... You MAY be vulnerable... you MAY have a security issue... you MAY be targeted... you MAY not have adequately protected your system... you MAY be hit by the problem... you MAY have issues, and losses... possibly.
If, however, you are dependent upon any, EXTERNAL, single point-of-attack/vulnerable-point... then you WILL be hit... you WILL be affected... you WILL have losses... and you WILL be totally-dependent upon EXTERNAL-interests in "fixing", and recovering... based upon THEIR competence, and on THEIR time-table... and, to suit THEIR perception of THEIR interests.
In other words, ALL YOUR EGGS in [SOMEONE ELSE'S] basket.
"You might be very competent in networking and well funded to boot, both of which can substantially reduce the likelihood of failure, but that likelihood is still some nonzero value."
No shit. I'm not perfect, and I don't claim to be. At some point I will probably lose data and/or become compromised. Hasn't happened yet ... but I keep my eyes & ears open. Paranoid? Not quite, but I'm getting there ;-)
"It's good to hear that you survived the massive attacks that home Internet connections suffered in 1985."
If you are referring to the Morris worm, that was November of '88 ... it didn't affect my systems at home. It DID affect my systems at work ...
If you are wondering, the generator runs on PG&E supplied gas, the fall-backs run on propane, and the battery is made up of several 8D "fire engine" lead-acid batteries, which get rotated out & rebuilt regularly. Data is backed up twice daily, both locally & to a couple of remote locations. Why all this trouble for a home network? Research. I do this for a living ...