If you're building systems to run at a large scale, then rather than waste time and money trying to avoid any failure, you need to suck it up and accept that faults will happen – and make sure you've got enough cheap gear to recover. So says Salesforce, which argued that this strategy can save you a bunch of cash you'd otherwise …
But not practical for most folks. Web scale obviously doesn't apply to most orgs, which is one of my biggest beefs with some public clouds like Amazon. Most folks don't understand it's built to fail, and they don't build their apps in that manner. Very few do. I've worked at a bunch of places, including two that launched their production in the Amazon cloud from day 1, and none of them built their apps to such standards. Even after having massive issues in the cloud as a result of built-to-fail, they still don't change their models. Look at how many websites go down when Amazon has a hiccup. It's a widespread issue.
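To make "building your app to fail" concrete, here's a minimal sketch of the idea: the client assumes any single backend can vanish and fails over instead of falling over. The backend names and the `fetch` callable are hypothetical placeholders, not a real cloud API.

```python
# Minimal "built to fail" client: try each backend in turn instead of
# assuming the first one is always up. Backend names here are
# hypothetical placeholders, not a real cloud API.
class AllBackendsDown(Exception):
    pass

def fetch_with_failover(backends, fetch):
    """Return the first successful result, skipping past dead backends."""
    last_error = None
    for backend in backends:
        try:
            return fetch(backend)
        except ConnectionError as err:
            last_error = err  # this node failed; treat it as routine, move on
    raise AllBackendsDown(f"all {len(backends)} backends failed") from last_error

# Usage: the second backend answers even though the first is down.
def flaky_fetch(backend):
    if backend == "us-east-1":
        raise ConnectionError("region hiccup")
    return f"data from {backend}"

print(fetch_with_failover(["us-east-1", "us-west-2"], flaky_fetch))
```

The point of the pattern is that a node failure is handled as an expected, routine event rather than an outage.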
It makes sense at big scale - though most orgs will never get there.
Built to fail is just a whole lot of fail for, I'd wager, 99.9% of organizations. Really the only way I see this changing in a big way is if the built-to-fail model is further abstracted away by the platforms. In Salesforce's case, the interfaces they expose to the end users/developers writing stuff for Salesforce may be quite reliable because the heavy lifting is done by Salesforce itself. For those writing their own apps at a lower level, though, not using a PaaS or something, I don't see built to fail gaining a lot of traction. Organizations would rather write code that brings in more users than write code that is more robust to failure.
One of the folks I know had a pretty priceless quote recently: "This code is so bad that I need to go to the ocean to make my tears look small."
What I want to know...
Is how much Duck Tape your average Salesforce.com datacenter goes through in one year. Then again, considering our CRM and digital marketing is on SFDC, maybe I don't want to know!!
(I agree that you can build to fail if you have a web-scale deployment, then it just gets to be a laws of probability thing. In smaller deployments you can't afford to lose this or that component along the way and still meet your SLAs.)
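The laws-of-probability point can be put in numbers: with enough independent, redundant components, the chance that all of them are down at once shrinks geometrically. A rough sketch (the uptime figure and replica counts are made-up illustration, not Salesforce data):

```python
# Probability that a service survives when any one of n independent
# replicas is enough to serve traffic. Numbers are illustrative only.
def availability(per_node_uptime: float, replicas: int) -> float:
    """Chance that at least one replica is up, assuming independent failures."""
    p_all_down = (1.0 - per_node_uptime) ** replicas
    return 1.0 - p_all_down

# A cheap, flaky node that is up only 99% of the time...
single = availability(0.99, 1)   # 0.99
# ...becomes six nines once you run three of them.
triple = availability(0.99, 3)   # 0.999999
print(f"{single:.2f} -> {triple:.6f}")
```

This is exactly why cheap gear plus redundancy can beat expensive gear at web scale, and why a small deployment with only one or two of each component can't play the same game.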
This shall be called the "Nostromo" approach.
Nothing to do with Conrad, everything with Ridley.
I want to see repairmen on the lower decks!
Wait a minute...
Didn't they recently permanently lose a load of customer data when it turned out they were only doing bi-hourly backups to save time and not mirroring the data?
Oracle sits at the core, but after that things get ugly
and no SLAs as standard!
Perhaps this is why information such as the following appears about Salesforce, in addition to a number of outages they have incurred in the past two years. There are many strong alternatives to Salesforce, and an independent comparison can be found on the G2 Crowd analysis website.
Surprisingly, Salesforce.com provides the least effective Service Level Agreement (SLA) in the SaaS CRM industry. In fact, Salesforce.com generally doesn't even provide an SLA unless the customer requests or negotiates one. Even then, the SLAs are not strong, as their uptime thresholds fall below competitors' and their maintenance windows are plentiful.
I'll just leave this out here:
> Salesforce has a preference for buying "the shittiest SSDs money can buy."
I was wondering who OCZ's last major customer was.
btrfs has an option for that
btrfs has options where it keeps an index like this to support data duplication (triplication? 8-) ) and deduplication-type stuff. I wanted to use it just for deduplication, but found the index's overhead is quite high. This type of tech is quite effective, though, at ensuring data integrity.