Capacity planning and disaster recovery?
There are a lot of assumptions in the comments above, and apart from a couple of sensible ones, plenty of ridiculous remarks. I'm curious how many of the people above have actually used cloud computing at any real scale. Cloud and grid computing both have their positive and negative points, but you need to be smart enough to take those into account.
It effectively comes down to capacity planning and disaster recovery. Nineteen hours is obviously a significant outage, which would make you ask why Nøhr's disaster-recovery plan wasn't sufficient: why not fail over to an off-site location, or split traffic between Amazon EC2's data centers?
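Splitting traffic between data centers doesn't have to mean anything fancy: even a basic health-checked failover would have kept the site reachable. A minimal sketch of that idea (the endpoint URLs are hypothetical, and real setups would do this at the DNS or load-balancer level):

```python
import urllib.request

# Hypothetical front ends running in two separate EC2 data centers.
ENDPOINTS = [
    "http://us-east.example.com/health",
    "http://eu-west.example.com/health",
]

def is_healthy(url, timeout=2):
    """Return True if the endpoint answers its health check with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_endpoint(endpoints, healthy=is_healthy):
    """Route traffic to the first endpoint that passes its health check."""
    for url in endpoints:
        if healthy(url):
            return url
    raise RuntimeError("all endpoints down - time to invoke disaster recovery")
```

The point isn't this particular script; it's that the decision "where do requests go when one location dies?" should be made before the outage, not 19 hours into it.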
Throwing software at the cloud will not prevent (D)DoS attacks; there are many forms of them. In addition, Amazon's turnaround time on support is excellent. I have not spoken to any incompetent engineers, nor had serious delays in getting any matter resolved, from routing issues to instance problems.
A little more planning could have avoided the issue altogether, and I'm sure Amazon will be happy to help put the matter to rest.