Amazon Web Services has turned on a new facility that allows migration of data from its S3 cloud storage service to its new Glacier cloud archive service. Glacier was launched a few weeks ago, and offers cloud storage at $0.01/Gb/month for most US regions ($0.011 in some Northern California areas and Ireland, and $0.012 in Tokyo), …
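The migration facility works through S3 lifecycle rules: you attach a rule to a bucket telling S3 to transition objects to the Glacier storage class after a given age. A minimal sketch of such a rule, assuming a hypothetical bucket and prefix (the payload shape follows the current `put_bucket_lifecycle_configuration` API; the boto3 call itself is shown in a comment since it needs credentials):

```python
import json

# Hypothetical names for illustration only.
BUCKET = "my-archive-bucket"

# Lifecycle rule: objects under logs/ move to the Glacier
# storage class 30 days after they are created.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-old-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }
    ]
}

# With boto3 installed and AWS credentials configured, this payload
# would be applied with:
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket=BUCKET, LifecycleConfiguration=lifecycle_config)
print(json.dumps(lifecycle_config, indent=2))
```

Once the rule is in place, S3 applies the transition automatically; no per-object migration calls are needed.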
Launch the rocket, then write the guidance software...
Amazon are doing some really cool things with their infrastructure, but this policy of "launch, then write the software to allow us to control it" is becoming a bit of a pesky habit. Whilst rudimentary tooling is generally available from the get-go, the useful bits (like a handle for a hammer head) often follow along at a later date. It seems AWS don't want to (or can't) press the pause button to make sure the tool sets and features do what you might naturally expect them to.
"Data resident in S3, by contrast, is accessible in real time."
Define "real time"?
"Define "real time"?"
With S3, I can give you a URL to a file stored there, and you can view it straight in your web browser: no separate retrieval step, no waiting for a batch process to complete. With Glacier, you ask for the file to be brought back from tape (or whatever Amazon actually use underneath), and it pops up a few hours later.
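That retrieval step is an explicit API call: you post a restore request against the archived object, then poll until S3 reports the temporary copy is ready. A sketch of the request payload, using hypothetical bucket and key names (the boto3 `restore_object` call is shown in a comment because it needs credentials and a real Glacier-class object):

```python
import json

# Hypothetical names for illustration only.
BUCKET = "my-archive-bucket"
KEY = "logs/2012-08.tar.gz"

# Restore request: ask for a temporary readable copy of the
# archived object, kept in S3 for 7 days after the job finishes.
restore_request = {"Days": 7}

# With boto3 and credentials, the retrieval job would be started with:
#   boto3.client("s3").restore_object(
#       Bucket=BUCKET, Key=KEY, RestoreRequest=restore_request)
# The object is NOT readable immediately; you poll head_object and
# wait for the x-amz-restore header to show ongoing-request="false".
print(json.dumps(restore_request))
```

This is exactly the "no real time" complaint above: between the restore call and the object becoming readable there is a multi-hour retrieval job, with nothing for the client to do but wait and poll.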
I've been using S3 for static web hosting for a while (as have some rather bigger outfits, like Twitter!) - the response times are pretty reasonable, nothing to complain about on that front.
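For anyone who hasn't tried it, static hosting on S3 is just a per-bucket configuration: you nominate an index document and an error document, and the bucket's website endpoint serves them over plain HTTP. A minimal sketch of that configuration, with hypothetical document names (the `put_bucket_website` call is commented out as it needs credentials):

```python
import json

# Website configuration for S3 static hosting: requests to the
# bucket's website endpoint serve index.html, and requests for
# missing keys fall back to error.html.
website_config = {
    "IndexDocument": {"Suffix": "index.html"},
    "ErrorDocument": {"Key": "error.html"},
}

# With boto3 and credentials, applied via:
#   boto3.client("s3").put_bucket_website(
#       Bucket="my-site-bucket", WebsiteConfiguration=website_config)
print(json.dumps(website_config, indent=2))
```

The objects themselves still need to be publicly readable (via a bucket policy or object ACLs) for the site to serve.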
They're pricing storage by the gigabit?