Pro tip: Go old-school for cloud data transfer

In a world of high-speed data connections, loading up disks is still the best way to transfer huge volumes of data

So you’re looking to the cloud to take care of your storage needs, including moving an on-premises database that holds more than a petabyte? You have a few options.

First, you can use the open internet as a path to upload the data. However, do the math: It will take five years and cost $100,000 in bandwidth. OK, that option is out.

Second, you can use a cloud gateway appliance that facilitates bigger and faster transfers, some using dedicated connections into the cloud provider. But do the math again: It will take one and a half years, plus the cost of the gateway appliance.
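
Here’s a rough sketch of that math, just to make the scale concrete. The throughput figures below are illustrative assumptions, not measurements; a petabyte over roughly 50 Mbps of sustained effective throughput lands in the five-year range, and even a much faster pipe still takes months to years.

```python
# Back-of-the-envelope transfer time for 1 PB at different sustained rates.
# The rates below are illustrative assumptions chosen to show the scale of
# the problem, not measured or quoted figures.

PB_BITS = 1_000_000_000_000_000 * 8  # 1 PB = 10^15 bytes = 8 x 10^15 bits

def transfer_years(data_bits: float, rate_bps: float) -> float:
    """Years needed to move data_bits at a sustained rate of rate_bps."""
    return data_bits / rate_bps / (60 * 60 * 24 * 365)

print(f"{transfer_years(PB_BITS, 50e6):.1f} years")    # ~5.1 years at 50 Mbps
print(f"{transfer_years(PB_BITS, 170e6):.1f} years")   # ~1.5 years at 170 Mbps
print(f"{transfer_years(PB_BITS, 1e9):.2f} years")     # ~0.25 years at 1 Gbps
```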

Third, you can do what everyone else is doing: Load your data onto sets of hard drives and ship them to the cloud provider, hoping they will be loaded in the correct order and none will be damaged in transit. This approach -- call it integration by Federal Express -- long predates networks.

Yes, this is a primitive process -- I mean, shipping disks? However, you have very little choice given the amounts of data that enterprises typically need to load. In this case, technologically primitive works better than technologically advanced.

Fortunately, Amazon Web Services has a better twist on this primitive solution, called AWS Snowball. AWS ships you a storage device with software and instructions on how to load it. You load it from your own network, and the volumes are automatically labeled so they can be loaded in the correct order when they arrive at AWS. Other public cloud providers have similar approaches for handling data transfer via disk.
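
To see why the disk route wins, compare the end-to-end timelines. This is only a sketch under assumed numbers -- the device capacity, local load speed, and shipping turnaround below are placeholders for illustration, not AWS specifications -- but the gap is hard to miss.

```python
import math

# Rough end-to-end comparison: shipping appliance-style devices vs. pushing
# data over the wire. Every parameter here (device capacity, local load
# speed, turnaround time) is an illustrative assumption, not an AWS spec.

DATA_TB = 1000          # 1 PB to migrate
DEVICE_TB = 80          # assumed usable capacity per shipped device
LOAD_GBPS = 10          # assumed local network speed while filling a device
TURNAROUND_DAYS = 10    # assumed ship-and-ingest time for a batch of devices

def transfer_days(data_tb: float, rate_gbps: float) -> float:
    """Days to move data_tb terabytes at a sustained rate of rate_gbps."""
    return data_tb * 1e12 * 8 / (rate_gbps * 1e9) / 86400

devices = math.ceil(DATA_TB / DEVICE_TB)
ship_days = transfer_days(DATA_TB, LOAD_GBPS) + TURNAROUND_DAYS
print(f"{devices} devices, ~{ship_days:.0f} days door to door")
print(f"vs. ~{transfer_days(DATA_TB, 1):.0f} days over a dedicated 1 Gbps link")
```

The point isn’t the exact numbers; it’s that loading devices happens at local network speed, so the bottleneck becomes shipping time measured in days rather than transfer time measured in months or years.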

The good news is you typically only have to do this once. The bad news is you have to do it once. 

You can’t change the laws of physics, and you can get only so much down a network pipe. That pipe will get bigger in the future, but so will your data. Keep those disks handy!

Copyright © 2016 IDG Communications, Inc.