I am glad I am not that man (or woman)!
How an Amazon engineer's slip-up started a 20-hour Netflix cock-up
An Amazon engineer hit the wrong button on Christmas Eve, deleting critical data in its load balancers and ultimately knackering vid streaming biz Netflix for 20 hours. The Netflix outage hit customers in the US, Canada and Latin America on 24 December, particularly those using games consoles and mobiles to watch films, while …
-
Thursday 3rd January 2013 13:12 GMT Field Marshal Von Krakenfart
No personal experience of such an incident, but...
It was probably the computer operators playing Frisbee with the tape container covers in the computer room and accidentally hitting the tape drive reset button with the cover.
Not that I have any actual experience of such a thing happening <cough> <cough>, but I have heard that it happens sometimes.
I'm innocent, I promise...
-
Wednesday 2nd January 2013 17:20 GMT asdf
easy
>What went wrong at Amazon/Netflix that allowed this to happen?
Poor management, which is almost always the case in these incidents. It's much easier to blame some peon, but for mission-critical infrastructure like this, not only should it have been impossible for the peon to do this accidentally, he should not have been able to affect service even if he tried maliciously (yes, I know, a pipe dream in most reactive-only crap corporate environments). If the oftentimes sociopaths in charge are going to take the big salaries, then they should occasionally be responsible for something.
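For what it's worth, even a crude guard in the tooling would catch the "accidental" half of that. A minimal sketch in Python, with every name invented (this is not any real AWS interface), of a wrapper that refuses destructive actions against production unless a second, different person has signed off:

```python
# Hypothetical guard around destructive maintenance actions.
# All names here are invented for illustration, not a real AWS API.

PRODUCTION_PREFIXES = ("prod-", "elb-prod-")

class ApprovalRequired(Exception):
    pass

def delete_state_data(resource_id, operator, approver=None):
    """Refuse to touch production unless a second, different person approved."""
    if resource_id.startswith(PRODUCTION_PREFIXES):
        if approver is None or approver == operator:
            raise ApprovalRequired(
                f"{resource_id} is production: a second person must approve")
    print(f"deleting state for {resource_id} (by {operator}, approved by {approver})")

# The accidental case is stopped cold:
# delete_state_data("elb-prod-netflix-us", operator="bob")  # raises ApprovalRequired
```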
-
Thursday 3rd January 2013 02:00 GMT Fatman
Re: Sadly I work in a stupid company where all major things happen on a Sunday morning
Are you so sure that is all bad?
WROK PALCE had to """fix""" a telco-related """fire hazard""" involving a shitload of phone lines that were not "plenum-rated cable" (according to the """fire marshal"""). On a Sunday morning, WROK PALCE is manned only by security, and no one else. So, I have to ask: do you want phone lines going down during the business day, with employees at WROK, or do you want the phone lines going down when most employees are at church???
Let me see, I will take Sunday morning, any time for this kind of downtime.
-
Thursday 3rd January 2013 13:55 GMT Tom 13
Re: Sunday morning 6am>10am
Granted, 6pm to 10pm on Friday would probably be better depending on the business; either way, that's still better than 4am to 8am on a Monday morning.
Of course, the guys I really feel sorry for are the point-of-sale vendors for fast food joints. My friend's migration schedule is always 3am until done, with training before and after.
-
Wednesday 2nd January 2013 17:27 GMT Anonymous Coward
I guess it went like this.
Netflix top honcho moans to Amazon top honcho about wanting everything to be as fast as possible and super-dooper for Christmas, and everything seems a bit "slow".
Head Amazon honcho moans to AWS top honcho, going "Why is Netflix complaining everything is slow? RAR RAR RAR".
AWS head guy goes to the ops manager: "RAR RAR RAR, I just had our head honcho moan at me that the system's slow. Clean it up!"
Ops manager sighs, goes to the team: "I know it's bollocks, but Bob, you need to run the maintenance on the nodes for Netflix, coz everyone is moaning."
Bob, wanting to go home and start drinking, runs the processes but against the wrong object ID, then goes home to the family / to the pub.
That is one possible option. Given Netflix isn't really a mature outfit and AWS will do what they're told, I can imagine that being the situation.
-
Thursday 3rd January 2013 12:25 GMT Psyx
"What went wrong at Amazon/Netflix that allowed this to happen?"
They made a techie work on Xmas eve. That was their first mistake. He probably didn't want to be there and wanted to get home.
The second mistake was being too tight to pay the overtime for TWO guys to watch each other's backs and spot mistakes.
-
Thursday 3rd January 2013 15:55 GMT Van
24x7 ?
The poster claiming it was operators playing frisbee is the closer guess. I would expect the data center to be manned by a large team of operators 24x7, and with a 25% shift allowance plus extra holidays, they most certainly would want to be there: eating pizza and watching TV in between housekeeping tasks.
-
This post has been deleted by its author
-
Friday 4th January 2013 06:30 GMT P. Lee
> we usually have a ban on major changes on Fridays or just before a public holiday,
Yep, changes are Tuesdays (to leave Monday for final planning and cleaning up after the weekend, and to avoid "Monday-itis") and Thursdays (because no-one wants to work weekends and it's cheaper on overtime payments).
Would it be rude to point out that torrents are naturally fault-tolerant and cheaper than F5s?
-
Thursday 3rd January 2013 13:26 GMT Field Marshal Von Krakenfart
Re: WHY, did an image of Steve Ballmer ...
-
Thursday 3rd January 2013 17:29 GMT Tom 13
Re: Huh ?
Even AT&T has problems building elasticity that can handle losing a large chunk of their normal bandwidth. The expectation is for random single failures that account for maybe 1% of the load, and they get good at dealing with those. But kill 25% instantaneously and the cascade failures start taking down the rest of the system. Sure, they stress test it in a VM lab, but for some reason the real world never seems to work that way. And you rarely get real-world load in a test environment.
No it shouldn't be that way, but all too often it is.
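The cascade arithmetic behind that is easy to demonstrate. A toy simulation (the server counts and capacities are invented, purely to show the shape of the failure):

```python
# Toy cascade model: invented numbers, just to show the shape of the failure.

def surviving_servers(servers, load, capacity_per_server, killed):
    """Kill some servers at once, then let overload take out the rest one by one."""
    alive = servers - killed
    while alive > 0 and load / alive > capacity_per_server:
        alive -= 1  # the most loaded survivor tips over, shifting load to the rest
    return alive

# 100 servers carrying 80 units of load ride out a couple of random failures...
print(surviving_servers(servers=100, load=80.0, capacity_per_server=1.0, killed=2))   # 98
# ...but losing 25% at once pushes the survivors past capacity and the rest cascade.
print(surviving_servers(servers=100, load=80.0, capacity_per_server=1.0, killed=25))  # 0
```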
-
Thursday 3rd January 2013 14:44 GMT Field Marshal Von Krakenfart
"good. people shouldn't be watching movies on the eve of
Jesus's birthdayDies Natalis Solis InvictiFixed it for you.
There's also the god of wine, Dionysus, also called Bacchus, also called Iacchus: born December 25th to a virgin mother, performing miracles such as changing water into wine, died and was resurrected after three days, and ascended into heaven.
If I remember correctly, there was also a minor cult in the Roman army that worshipped a dead Roman soldier who was born on December 25th, died/was killed, and was resurrected after three days.
-
Wednesday 2nd January 2013 17:51 GMT Paul Hovnanian
Change processes themselves have to be tested and controlled. That errant maintenance process should have been run against a test environment prior to being used on production sites.
And then a test suite needs to be included and run to ensure that the test/production sites are still up and running after the change is applied.
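The shape of that is only a few lines. A sketch, assuming a hypothetical apply_change/smoke_test pair; the point is simply that the same automated check gates both the staging and the production step:

```python
# Illustrative only: apply_change() and the health URLs are hypothetical.
import urllib.request

def smoke_test(base_url):
    """Post-change check: is the service still answering?"""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def run_change(apply_change, staging_url, production_url):
    apply_change(target="staging")
    if not smoke_test(staging_url):
        raise RuntimeError("change broke staging; do not touch production")
    apply_change(target="production")
    if not smoke_test(production_url):
        raise RuntimeError("production smoke test failed; roll back now")
```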
-
Wednesday 2nd January 2013 18:09 GMT Rick Giles
Now that I know
Netflix is using Amazon's cloud I may just have to drop them. This is the kind of shit that is going to happen more and more as these idiots give up control of their data and infrastructure.
Besides, the fact that Netflix doesn't have a Linux app is probably the main reason I want to drop them anyway.
-
Thursday 3rd January 2013 22:00 GMT Mike VandeVelde
"going to happen more and more"
At current monthly subscription rates, that must have been almost $0.25 worth of service we each lost there. No joke, if they keep that up the whole economy will soon grind to a screeching halt! One less option for ignoring friends and family, on Christmas of all days, when you all know we need it most!! Think of the children!!!
-
Wednesday 2nd January 2013 19:51 GMT Anonymous Coward
I know I'm going to be unpopular but what the heck...
I'm getting rather fed up with the argument from some that this is all down to the change processes and that heads should not roll. I know my organisation's efficiency would improve immensely if I were allowed to fire some asses now and then, rather than just shuffle them off to the side into some role where (I hope) they cannot do any damage. I often wish that management had not downsized HR quite so much, so that there were actually some warm bodies who would help me satisfy all the regs for sacking someone, and I could use the money to hire someone decent instead...
-
Wednesday 2nd January 2013 20:16 GMT Hooksie
Re: I know I'm going to be unpopular but what the heck...
No wonder you were AC on that comment. You think it should be OK to sack people because managers like you keep asking them to do things that they aren't trained for, don't have time to finish, and that aren't their responsibility, after you already outsourced or downsized the team that was SUPPOSED to do that job. Oh, and on top of that you give them a 2.5% pay 'increase' and then blame market conditions.
To err is human, to really fuck things up requires a computer, a tired engineer and piss poor management.
-
Thursday 3rd January 2013 19:19 GMT asdf
Re: I know I'm going to be unpopular but what the heck...
> there were actually some warm bodies who would help me satisfy all the regs for sacking someone
Wow, definitely not an American in a right-to-work state then. "Right to work someplace else, no questions asked" is what it should be called. It sounds worse than it is, though, in that it is generally easier to find a job, as there is less risk in hiring someone; but you are lucky if you find a place that treats you as anything but an asset.
-
Wednesday 2nd January 2013 20:48 GMT Anonymous Coward
Making bad assumptions
Actually, I'm the kind of manager who fights tooth and nail to get my team trained, proper pay rises and promotions, and who fights against outsourcing and downsizing. I have hated it in the past when I have had to make good people redundant. I don't ask any of my team to do anything that I cannot do myself. All of which is why I'll never rise any further. However, the propensity of some people to take the piss does make life worse for everyone else. If you know your UK employment law and your employer stints on HR, you can be almost unsackable.
And 2.5%. I'd love to be able to secure that kind of rise for the best people in my team.
-
Thursday 3rd January 2013 13:42 GMT Anonymous Coward
Re: Making bad assumptions
" I don't ask any of my team to do anything that I cannot do myself"
You are either the most talented person in the world, run the least-skilled IT department in the world, or are the best bullshitter in the world. Or, since you don't seem to expect your staff to do stuff you can't do yourself, perhaps that explains your own lack of promotion.
-
This post has been deleted by its author
-
Wednesday 2nd January 2013 22:36 GMT pstones578
Change Control / Change Freeze
Would it not make sense to have some proper change control? Then Amazon could have reviewed their change documentation and, hey presto, noticed that a change had happened around the time of the problem. Also, while they're at it, wouldn't it be a good idea to have a change freeze around such a critical time of year? Unbelievable.
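Even the freeze check is trivial to automate. A toy sketch, with the freeze windows made up:

```python
# Toy change-freeze check; the freeze windows are made-up examples.
from datetime import date

FREEZE_WINDOWS = [
    (date(2012, 12, 21), date(2013, 1, 2)),  # Christmas / New Year freeze
]

def change_allowed(today=None):
    """A change is allowed only if today falls outside every freeze window."""
    today = today or date.today()
    return not any(start <= today <= end for start, end in FREEZE_WINDOWS)

assert not change_allowed(date(2012, 12, 24))  # Christmas Eve: frozen
```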
-
Wednesday 2nd January 2013 23:36 GMT Vince
Blame the engineer, ignore the cause.
So the problem is...
(a) Netflix have poor business continuity planning and rely on a single supplier (AWS) for its systems.
Cause: Poor management decisions/understanding
(b) AWS have poor processes that allow a single point of failure
Cause: Poor management decisions/understanding
(c) Netflix believed the "cloud" of Amazon would be redundant against anything and assumed they had covered the issues in (a)
Cause: Poor management decisions/understanding
The real issue isn't the engineer that "made an error" but the AWS system that can fail despite supposedly being uber-geo-redundant and so on, and the Netflix management who decided to put the eggs in one basket.
As I understand it, Netflix have local content caches with various ISPs, so I assume the issue was the database/account side and not the underlying content availability. It would therefore be a *relatively* less expensive task to put a better system in place (I'm not pretending it is trivial, but it's obviously "less tricky" when you haven't got to replicate what I assume is a huge amount of content, which would be costly to store/stream en masse).
A better fix would have been to have multiple providers and the ability to have the Netflix client(s)/website(s) detect/choose/forced etc.
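A minimal sketch of that client-side detect/choose logic, with the provider endpoints entirely made up:

```python
# Client-side provider failover sketch; the endpoints are hypothetical.
import urllib.request

PROVIDERS = [
    "https://api.provider-a.example.com",
    "https://api.provider-b.example.com",
]

def pick_provider():
    """Return the first provider whose health endpoint answers."""
    for base in PROVIDERS:
        try:
            with urllib.request.urlopen(f"{base}/health", timeout=3) as resp:
                if resp.status == 200:
                    return base
        except OSError:
            continue
    raise RuntimeError("all providers down")
```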
Of course this would require more expenditure and at £5.99 (or it seems a penny more if you subscribed more recently) it's unlikely there's enough margin I guess.
-
Wednesday 2nd January 2013 23:47 GMT Anonymous Coward
Change control
It really says something about the Amazon change control process. It also says volumes about their support staff: both the person who did the deleting and the ones who did the subsequent troubleshooting. When they encountered missing data, the first thing should have been to look at who made a change, what the change was, and what was actually changed. I think Amazon needs to invest in an AAA solution.
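"Who changed what, and when" is exactly the question an audit trail answers. A sketch, assuming change events have already been collected as a list of dicts (the record format is illustrative, not any particular vendor's):

```python
# Sketch of the "what changed just before the outage?" query.
# The event records are illustrative, not a real vendor's log format.
from datetime import datetime, timedelta

def changes_before(events, incident, window_hours=2):
    """Return change events in the window leading up to the incident."""
    cutoff = incident - timedelta(hours=window_hours)
    return [e for e in events if cutoff <= e["when"] <= incident]

events = [
    {"when": datetime(2012, 12, 24, 12, 24),
     "who": "engineer-x", "what": "delete ELB state data"},
]
print(changes_before(events, incident=datetime(2012, 12, 24, 12, 30)))
```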
-
Thursday 3rd January 2013 00:40 GMT John H Woods
As with the NatWest disaster ...
... it should not be possible for a single engineer to wreak this kind of havoc: systems like this should be resistant even to deliberate malice. Your engineer could be tired, inexperienced or unwell. But they could also be a saboteur working for a competitor, an employee with a grudge, a criminal who is going to hold your system to ransom or even an out-and-out terrorist.
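One way to get part of the way there is to scope every maintenance session to an explicit resource list, so that even a hostile operator cannot reach beyond it. A toy sketch, with all names invented:

```python
# Toy least-privilege session: operations outside the granted scope fail,
# whether the mistake is accidental or malicious. All names are invented.

class ScopeViolation(Exception):
    pass

class MaintenanceSession:
    def __init__(self, operator, allowed_resources):
        self.operator = operator
        self.allowed = frozenset(allowed_resources)

    def delete(self, resource_id):
        if resource_id not in self.allowed:
            raise ScopeViolation(f"{self.operator} has no grant for {resource_id}")
        print(f"deleted {resource_id}")

session = MaintenanceSession("bob", {"elb-test-42"})
session.delete("elb-test-42")         # fine: inside the granted scope
# session.delete("elb-prod-netflix")  # raises ScopeViolation
```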
-
Thursday 3rd January 2013 15:36 GMT Anonymous Coward
Thats Netflix fixed.....now for MSFT Media Center?
Now all we need is for the engineer, PFY or intern who went on Xmas vacation, forgetting to flick the switch to update the TV Guide data in Media Center, to sort out the updates there (we know BDS Ltd have sent data packages to MSFT for upload), then everyone will be happy :-) (For reference, UK data ended Jan 1, so we're stuck with dead-tree TV guides and ending up with loads of "Manual Recordings" :-( )
-
Monday 14th January 2013 14:46 GMT Anonymous Coward
maybe, just maybe
The idiot who didn't check his/her work should shoulder some responsibility for this. Perhaps, before initiating a change that could cause a major service outage during a peak usage period, they should take a minute to really look at what they've instructed the system to do before they hit the go button.
I can't see how this is management's fault: it's just poor workmanship.
I'm assuming it's all techies who have blamed the managers. Well, I am a techie, and this is just someone doing a shit job because it's Xmas Eve and they're not paying attention.