AOL has been operating a trial datacenter that runs without any on-site staff since the start of the month, and reports that the system is resilient and cuts costs. Dubbed ATC, the datacenter uses off-the-shelf, pre-racked, vendor-integrated gear with open-source code, and is run as a 100 per cent lights-out facility (no BOFH …
That's how many people laid off?
elimination of overlaps and AOL’s datacenter prowess, could save $1.5bn.
Glad I don't work there.
MTBF? How do disks get swapped? Do they wait until a certain % fail, then swap the shipping container?
Simple: the vendor takes care of it. "AOL has been operating a trial datacenter that runs without any on-site staff." The vendor could have someone on-site and AOL could still say it runs without any on-site staff, since the vendor isn't counted as staff.
So how's this different to other forms of outsourcing where some external party looks after the hardware?
Seems to me it's just outsourcing the maintenance of a data centre on a larger scale.
.. is that it comes with a free shipping container :-).
But yes, you're right. They just cut down on the number of reset buttons..
I think it's more likely that the DC runs without staff, say, 98% of the time, and a weekly maintenance crew goes to site and swaps disks et al. for 12-24 hours, then theoretically moves on to the next DC in the rotation, with a serious amount of remote monitoring of the site.
With possibly an emergency crew to attend the site should the excrement hit the air con.
Either way I would be watching this with interest.
I'm sure they can get some RAID with 10 spare drives which will last a lifetime for them. I mean how many customers do they still have? How many of those still dial in?
You have no idea..
.. how many people still have an AOL email account. I'm astonished at the number myself; I come across them in the most surprising places. Even long-term Internet entrepreneurs..
I Would Have...
...nightmares featuring unemployed PFYs roaming the streets like extras in a George Romero movie.
Next week's headline
AOL Datacenter burglarized.
No witnesses found.
I don't want to stop progress, but as an industry, IT has managed to help offshore many unrelated jobs, to the general detriment of the economy; to automate more (and yes, the convenience and lower unit cost are nice, but the higher national welfare budget isn't); and now we're finding means to put ourselves out of work as well, between things like this and cloud-based SaaS.
All we need to do is perfect the robots to hammer the nails that can't be done over the internet and we're finally screwed, if you'll pardon the pun.
The human free datacentre
For the human free ISP. Otherwise known as a field.
Hmmmm still have people monitoring it all remotely
So given they have people monitoring remotely and the like, to all intents they have moved the NOC to head office. OK, so they will save on a lot of costs, but those costs indirectly cover other areas, security being one of them. Now, if memory prices rise, we all know what happened last time with RAM (sic) raid attacks. An empty, unstaffed data centre would be more open to such issues than a peopled one.
A setup like this would have a lot of redundancy by design, but it will only be as good as the monitoring, configuration management and automated support scripts. I've seen enough people run at the mere prospect of running skulker on a *nix box, so it's nice to see things move on. But you have to cater for all exceptions, otherwise automation can compound things. Take RIM, whose core switch failed to fail over and utterly failed, yet was apparently tested for exactly that situation! That's an exception, and an example of why you can't automate everything.

As a rule, the more people you have interacting with your system, directly or indirectly, the more possible exceptions you're exposed to. You need people to balance out that factor. I've walked into a room and had a server smell like it was about to die (a solder-burn kind of smell), with nothing showing on system health monitoring, and that was good, state-of-the-art monitoring on no cheap server. A day or so later that server died, and I got alerted by monitoring. That's an example where human interaction was able to spot a potential problem and plan ahead in case it happened, which it did. But with enough spares in the racks and VMs, things can be easier, if they run right; then again, you can automate that pile of poo into a complete heap if you're not careful.
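To put the "automation can compound things" point another way: a minimal sketch (hypothetical names, not anything AOL has described) of a remediation loop that caps its own retries and escalates to a human instead of hammering a broken service forever:

```python
# Hypothetical sketch: blind automated remediation can compound a fault.
# A simple attempt cap acts as a circuit breaker and escalates instead.

def remediate(check, restart, max_attempts=3):
    """Try automated recovery; give up and page a human after max_attempts."""
    for attempt in range(1, max_attempts + 1):
        if check():
            return f"healthy after {attempt - 1} restart(s)"
        restart()  # one automated recovery action per attempt
    # The exception case the automation can't handle: stop looping, escalate.
    return "escalated to on-call human"

# Simulated flaky service that recovers after two restarts.
state = {"restarts": 0}
result = remediate(
    check=lambda: state["restarts"] >= 2,
    restart=lambda: state.update(restarts=state["restarts"] + 1),
)
print(result)  # healthy after 2 restart(s)
```

The point isn't the ten lines of code; it's that the `max_attempts` branch, the one that hands off to a person, is exactly the path the RIM-style failures show you can't leave out.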
But a true automated datacentre, IMHO, is one which nobody knows about until it retires. Then again, anything that uses electricity or has moving parts needs love every now and then, and only a BOFH can give the kind of love that is needed.
The opening scene from Colossus: The Forbin Project (http://www.imdb.com/title/tt0064177/)
Next: AOL datacentre discovers Google's servers, they link up and all our data are belong to them, forever, for our own good. This concludes the broadcast from the Cloud Control.
Didn't Dilbert cover this recently?
... this should be under ROtM?
So what happened during the earthquake was that idle, unused capacity was made to do something useful, probably involving a phone call/intarweb chat to temporarily open up some latent licenses, followed by running a few scripts.
I mean, the way the spokesdroid froths you'd think it involved sparkly Star Trek replicator technology precipitating machines out of thin air to boldly supply bandwidth where no-one had done so before.