A major argument for moving computing resources into a public cloud is to remove a troublesome burden from your business and let someone else worry about data back-ups and failover systems. But unless your cloud computing provider has fully redundant systems in multiple geographical locations and can explain exactly how it recovers from disasters, with evidence of successful test recoveries, you should still worry. One example of complete data loss occurred in October 2009, when many T-Mobile Sidekick users lost their contacts data stored in a cloud service run by Microsoft’s Danger unit (Fried, 2009). Another occurred in September 2007, when the deployment of new monitoring software caused some Amazon EC2 virtual machine instances to be deleted, affecting a small number of Amazon customers (Miller, 2007).
So what can you do to prevent data loss in public clouds? One solution is a hybrid approach in which only non-critical business applications and data are stored in public clouds. Another is to use a secondary public cloud as a back-up for your primary one, assuming you are not locked into a particular technology – see the next section. But if you do decide to put critical business data in a public cloud, it is your responsibility to ensure that your provider’s disaster recovery processes are tried and tested. Your business can survive occasional system outages, but very few businesses survive the loss of their data.
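The secondary-cloud strategy and the insistence on tested recoveries can be sketched in a few lines. The following is a minimal illustration, not a production tool: local directories stand in for the two cloud providers, and the file names (`contacts.csv`, `dr_demo`) are invented for the example. The point is the pattern – write to both stores, verify checksums on the way in, and rehearse a full restore before you ever need one.

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def back_up(source: Path, stores: list[Path]) -> None:
    """Copy a file to every backup store, verifying each copy's checksum."""
    expected = sha256(source)
    for store in stores:
        store.mkdir(parents=True, exist_ok=True)
        copy = store / source.name
        shutil.copy2(source, copy)
        if sha256(copy) != expected:
            raise RuntimeError(f"backup to {store} is corrupt")

def restore(name: str, stores: list[Path], target: Path) -> Path:
    """Recover a file from the first store that still holds a copy."""
    for store in stores:
        copy = store / name
        if copy.exists():
            shutil.copy2(copy, target)
            return target
    raise FileNotFoundError(f"{name} not found in any store")

# Rehearsal: back up to two "clouds", lose the primary, restore, verify.
work = Path("dr_demo")
work.mkdir(exist_ok=True)
original = work / "contacts.csv"
original.write_text("alice,555-0100\nbob,555-0101\n")

primary = work / "primary_cloud"
secondary = work / "secondary_cloud"
back_up(original, [primary, secondary])

shutil.rmtree(primary)  # simulate total loss of the primary provider
recovered = restore("contacts.csv", [primary, secondary],
                    work / "recovered.csv")
assert sha256(recovered) == sha256(original)
```

In practice the two stores would be buckets at independent providers accessed through their own APIs, but the discipline is the same: a backup you have never restored from is not a backup, it is a hope.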