Haven't we seen more AWS outage issues than "east coast shutdown"-level storms over the past, oh, say, 5 years?
EDIT: Thinking now this may be a temporary move, not a permanent one. As such, it probably makes sense to have a tested process to be able to move between cloud/dedicated/whatever as quickly and painlessly as possible.
We've seen many people who failed to read AWS' documentation and put all of their services in one data center. If you followed Amazon's guidelines and used multiple AZs (e.g. if you use RDS, check the Multi-AZ box), you saw very little downtime (roughly 20 minutes); if you went multi-region it was even less.
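For reference, that RDS checkbox maps to the MultiAZ flag in the API. A minimal boto3 sketch, assuming placeholder names, sizes, and credentials (none of these identifiers come from Amazon's docs):

    import boto3

    # Hypothetical example: create an RDS instance with a synchronous
    # standby in a second Availability Zone (the "Multi-AZ" checkbox).
    rds = boto3.client("rds", region_name="us-east-1")

    rds.create_db_instance(
        DBInstanceIdentifier="example-db",   # placeholder name
        DBInstanceClass="db.m1.small",       # placeholder size
        Engine="mysql",
        AllocatedStorage=20,                 # GiB
        MasterUsername="admin",
        MasterUserPassword="change-me",      # placeholder credential
        MultiAZ=True,  # standby in another AZ, automatic failover
    )

Failover to the standby is automatic, and your application keeps pointing at the same DNS endpoint.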
It's not like this is a well-hidden secret - head over to AWS's whitepaper section:
“As the AWS web hosting architecture diagram in this paper shows, we recommend that you deploy EC2 hosts across multiple Availability Zones to make your web application more fault-tolerant.”
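In code terms, that recommendation just means spreading your instances over several AZs instead of launching them all in one. A rough boto3 sketch (the AMI ID, instance type, and AZ names are placeholders):

    import boto3

    # Hypothetical example: spread web hosts across several
    # Availability Zones instead of launching them all in one.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    zones = ["us-east-1a", "us-east-1b", "us-east-1c"]  # placeholder AZs

    for zone in zones:
        ec2.run_instances(
            ImageId="ami-12345678",      # placeholder AMI
            InstanceType="m1.small",     # placeholder type
            MinCount=1,
            MaxCount=1,
            Placement={"AvailabilityZone": zone},
        )

Put those behind a load balancer spanning the same AZs and a single-AZ failure costs you a slice of capacity rather than the whole site.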
Avoiding EBS - or at least planning seriously for how you'll handle its failures, as with any storage system - is a good idea, but it's unrelated to my point.
But... didn't we also have some comments before about how Amazon's own control panel systems are all in one data center? If Amazon themselves don't 'get this right', perhaps it's because it's too hard, and they need to take steps to make this easier (or indeed automatic, and charge a premium)?
Those comments were wrong; they were posted because parts of the control panel stopped working during the last outage, which was actually due to API throttling, as mentioned in Amazon's write-up.
AWS is resilient enough so long as you don't park all of your environment in the same AZ/region.