As critical as data backups are for maintaining continuity of operations, backups themselves have been known to fail on occasion. Other mishaps occur when organizations fail to keep a copy of that data offsite or don’t document their recovery procedures. However, IT managers can avoid these problems with some careful planning.
It’s common to hear of backups that seem to finish without errors, but that later turn out to be unreadable or incomplete when someone attempts to restore the files. Test restores regularly to confirm that backups are actually usable.
Restore a single file chosen at random once a week, a folder or directory structure once a month and a server once a quarter. If there are multiple backup locations, such as an onsite appliance and an offsite cloud-based archive, test both.
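A restore test is only meaningful if the restored file matches the original byte for byte. As a minimal sketch (the function names and directory layout here are illustrative, not from any particular backup product), a weekly spot check could pick a random file from the live tree and compare checksums against the restored copy:

```python
import hashlib
import random
from pathlib import Path

def sha256(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_random_restore(live_root: Path, restore_root: Path) -> bool:
    """Pick a random file under live_root and confirm that the copy
    restored under restore_root exists and matches it exactly."""
    candidates = [p for p in live_root.rglob("*") if p.is_file()]
    sample = random.choice(candidates)
    restored = restore_root / sample.relative_to(live_root)
    return restored.is_file() and sha256(restored) == sha256(sample)
```

The same check can be pointed at each backup location in turn, so the onsite appliance and the offsite archive are both exercised.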
New technologies are always in development, and many have the potential to greatly improve the quality and frequency of backups without a large increase in cost. For example, migrating from tape to disk-based backup systems can significantly speed backup and restoration processes and improve reliability. Improvements in storage interface speeds can also shrink the backup window. Using a physical-to-virtual backup, or vice versa, eases the task of migrating servers.
IT managers may employ deduplication to reduce the size of backup files. Deduplication replaces duplicate data with a pointer to the first copy, so only the parts of a file that have changed need to be stored again. Deduplication also makes it cheaper to keep multiple point-in-time snapshots of a system, because each snapshot needs to contain only the data that changed since the previous one.
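The pointer-to-first-copy idea can be illustrated with a toy content-addressed store (a simplified sketch, not how any specific backup product is implemented): data is split into fixed-size chunks, each chunk is stored once under its hash, and a file becomes a list of hash pointers.

```python
import hashlib

class DedupStore:
    """Toy deduplicating store: identical chunks are kept once, and
    each file is recorded only as a list of chunk-hash pointers."""

    def __init__(self, chunk_size: int = 4096):
        self.chunk_size = chunk_size
        self.chunks: dict[str, bytes] = {}     # hash -> data, stored once
        self.files: dict[str, list[str]] = {}  # file name -> chunk pointers

    def put(self, name: str, data: bytes) -> None:
        pointers = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # duplicate chunks cost nothing
            pointers.append(digest)
        self.files[name] = pointers

    def get(self, name: str) -> bytes:
        """Reassemble a file by following its chunk pointers."""
        return b"".join(self.chunks[d] for d in self.files[name])
```

Storing a second identical file adds only a few pointers, not a second copy of the data, which is why repeated snapshots of a mostly unchanged system stay small.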
Older backup systems may use a weekly full backup with nightly incremental backups. This approach aims to provide a workable level of protection without the overhead of a full backup every night. But when a restore is needed, IT managers often need to restore from the last full backup, then apply each incremental backup in turn to ensure that all changes are captured. Newer systems consolidate the incremental backups into the full backup so that only one restore operation is required.
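The restore-order logic can be sketched in a few lines. Here backups are modeled simply as mappings from file names to contents (an illustrative simplification, not a real backup format): the full backup is restored first, then each incremental is applied in chronological order so that later changes win. Consolidating the incrementals into a new "synthetic" full is the same merge, done ahead of time.

```python
def restore(full: dict[str, bytes],
            incrementals: list[dict[str, bytes]]) -> dict[str, bytes]:
    """Rebuild system state from the last full backup, then apply
    each incremental in chronological order."""
    state = dict(full)
    for inc in incrementals:
        state.update(inc)  # later backups overwrite earlier versions
    return state

def consolidate(full: dict[str, bytes],
                incrementals: list[dict[str, bytes]]) -> dict[str, bytes]:
    """Merge incrementals into the full backup ahead of time, so a
    future restore is a single operation on the synthetic full."""
    return restore(full, incrementals)
```

Applying the incrementals out of order would silently roll files back to stale versions, which is exactly the kind of mistake the consolidated approach removes.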
Current systems also retain every version of a file as it changes over time, which is useful for, say, pinpointing when a file was encrypted by ransomware and restoring the most recent version created before that happened.
Typically implemented in a storage area network or network-attached storage system rather than a backup system, snapshots and replication provide a high level of redundancy and data protection. IT managers can use snapshots to make backup copies of systems that can’t be shut down and that always have files open, such as databases and email systems.
The snapshot can then be used to create a more traditional backup. Because the snapshot is effectively a separate copy of a given volume, it can be mounted and used to restore files directly if necessary.
Replication creates another copy of a given volume to another SAN in the data center or to a remote location. Whether performed in real time or after hours, replication provides the ultimate in data protection because even if a data center goes offline, the replicated data is still available at the remote location. Many cloud backup services offer this functionality as well.
The data most commonly restored by IT managers is data that’s lost due to user error. Many backup solutions now provide a self-service portal so that users can find and restore their own files rather than requiring administrator intervention. This can save an enormous amount of administrative time. The backup portal generally allows end users only to restore data, not to delete or change it, so there’s minimal danger in providing users with portal access.