Availability is essential to smooth business operation. Many organizations think of availability in terms of network access. After all, if you can’t access your servers, you can’t do business. Availability in this sense is measured by how long the network is down per year. The cost of such downtime can be measured in millions of dollars per hour for some companies. To be enterprise-class, availability needs to be four or five 9s, meaning the network is up 99.99% or 99.999% of the time.
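The relationship between "nines" and downtime is simple arithmetic; a minimal sketch (the helper name is illustrative, not from any particular library) converts an availability percentage into the yearly downtime budget it implies:

```python
# Convert an availability percentage ("nines") into the maximum
# downtime it permits per year, expressed in minutes.
def downtime_per_year_minutes(availability_pct: float) -> float:
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a non-leap year
    return (1 - availability_pct / 100) * minutes_per_year

for pct in (99.9, 99.99, 99.999):
    print(f"{pct}% uptime allows {downtime_per_year_minutes(pct):.1f} min/year down")
```

Three nines permits roughly 8.8 hours of downtime per year, four nines about 53 minutes, and five nines only about 5 minutes, which is why each additional nine is dramatically harder (and costlier) to deliver.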
But availability involves far more than just whether you can get onto the network. If your data or applications become unavailable for any reason – storage corruption, hacking, or accidental deletion – it doesn’t matter whether the servers are blinking green. Similarly, if there is a local disaster, such as a storm taking out the power to your data center for several hours, you aren’t going to get much work done.
For the highest robustness, consider taking a three-level approach to availability – data/applications, servers, and site. A disruption at any one of these levels can shut down your business operations, so your availability plan needs to protect your organization at all three. This requires a blend of technologies, from simple data backup to the ability to recover critical applications at a secondary site during a disaster.
In “The 3-2-1-0 Rule to High Availability,” Doug Hazelman from Veeam describes how backup is one of the foundations of availability. As a foundation, however, it is not the entire solution. Rather, it is a starting point upon which a comprehensive availability plan is built.
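The rule's name encodes its requirements: keep at least 3 copies of your data, on 2 different media types, with 1 copy offsite, and 0 errors after restore verification. As a minimal sketch, a backup plan could be checked against those four requirements like this (the `BackupCopy` fields are illustrative assumptions, not a real backup product's API):

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    media: str      # e.g. "disk", "tape", "cloud"
    offsite: bool   # stored away from the primary site?
    verified: bool  # did the most recent restore test succeed?

def satisfies_3_2_1_0(copies: list[BackupCopy]) -> bool:
    """Check a set of data copies against the 3-2-1-0 rule."""
    return (
        len(copies) >= 3                          # 3: at least three copies
        and len({c.media for c in copies}) >= 2   # 2: two different media types
        and any(c.offsite for c in copies)        # 1: one copy offsite
        and all(c.verified for c in copies)       # 0: zero verification errors
    )

plan = [
    BackupCopy("disk", offsite=False, verified=True),
    BackupCopy("tape", offsite=False, verified=True),
    BackupCopy("cloud", offsite=True, verified=True),
]
print(satisfies_3_2_1_0(plan))  # this plan meets all four requirements
```

The "0" is the piece organizations most often skip: a backup that has never been test-restored is an assumption, not a guarantee.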
Consider how traditional backup technology offers only limited protection against data loss. Because system backups run off-hours, users can lose up to an entire day of data and work. Today, virtualization technology makes it possible to back up at the virtual machine level. In addition, backup can be implemented without agents, eliminating any negative impact on application performance or availability and significantly increasing how often critical data and applications can be backed up. Finally, the cloud provides a cost-effective, flexible, and highly scalable way to implement a secondary data center without a large CAPEX outlay.
Together, these technologies decrease the recovery time and time to productivity for any restoration. Because it is the virtual machine that is backed up, not just the data or application, the system is able to quickly and automatically restore the working environment itself. The result is that IT can achieve a 15-minute recovery for all data and applications, regardless of whether a single email is being recovered or an entire site restored.
The key to operating a reliable network is to focus on the end result rather than any of its underlying technologies. Organizations that continue to think in terms of backing up user data limit themselves to a 24-hour recovery point (a full day of potentially lost work) and slow, manual recovery when a problem does arise.
Instead, focus on what is important to your organization: the availability of your data and applications and the infrastructure you need to access them. In this way you can take advantage of multiple technologies that, when integrated together, protect your organization at every level.
Learn more about how you can protect your organization by taking a 3-level approach to availability.