
Lose your data, lose your business

April 28, 2009, 02:18 PM —  Via Network World — 

Disasters, by definition, strike with little or no warning. Whether it's an extended power outage, a devastating storm, or some other unforeseen disruption, the most nerve-wracking part of owning a business is the unknown. The good news: businesses can prepare effectively to weather even the worst of storms. A solid disaster-recovery (DR) plan can mean the difference between a business bouncing back from a catastrophe and closing for good.

In past Network World contributions, my colleagues have underscored the importance of having DR plans, citing studies that show data losses stemming from IT outages can prove fatal to small businesses more often than most of us would guess. For example, a U.S. National Archives and Records Administration study found that 25% of companies experiencing an IT outage of two to six days went bankrupt immediately, with even more following in the longer term.

The question for every business is, if that kind of disruption happened to you, would you be one of the survivors or one of the casualties?

The core of almost all DR plans is data replication in some form -- duplication and storage of vital data in a safe, secure place where you can retrieve it if some catastrophe destroys or damages the primary location. There are essentially two different data replication strategies: host-based and controller-based.

If your organization has not committed to either yet, keep in mind that it's very difficult to switch from a host-based solution to a controller-based one because the two aren't compatible: each is implemented differently and relies on a different mix of hardware and software. If you're unsure which type is right for your business, be sure to seek the guidance of a trusted adviser.

Host-based data replication
Host-based solutions usually are recommended for small businesses because they are the most cost-effective and "easiest" systems to adopt. Replication happens in software at the operating-system level: two separate servers are paired, and each saves a copy of the data, ensuring redundancy. Servers in a host-based system can be paired one-to-one, or multiple servers can replicate to one location, depending on the needs and capabilities of the organization.

A host-based solution is effective because the backup server can be deployed remotely, potentially eliminating any need to restart the primary server should an event occur. It is also very efficient and has a limited footprint, both in terms of office space and energy consumption. However, keep in mind that host-based solutions employ a variety of software packages, all of which likely will require a license.

Controller-based data replication
Controller-based data replication, by comparison, is typically used by larger organizations. It replicates data at the byte level onto a storage-area network (SAN), which connects remote storage devices to servers while making them appear locally attached. Often more expensive than host-based solutions, controller-based replication can be implemented in two ways, each of which has advantages and disadvantages.

 Synchronous replication: Synchronous replication, commonly referred to as mirroring, writes every piece of data to two different sites, and the write is acknowledged only once both sites have stored it. If one of the storage systems fails, the system can switch to the second without any loss of data or service, whether that second site is in the same data center or across the country. While this immediacy appears to be a significant advantage over the other options, there are other issues to consider. Because every write must wait for the remote site, synchronous replication between sites a significant geographical distance apart demands a high-speed, low-latency link between the two locations; without one, write latency is sure to creep into the system.
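A minimal model of that acknowledgement rule, with made-up class names, looks like this -- the key point is that the caller's write does not complete until both sites hold the data:

```python
class Site:
    """In-memory stand-in for one storage site (hypothetical)."""
    def __init__(self):
        self.blocks = {}

    def write(self, block_id, data):
        self.blocks[block_id] = data

class SynchronousMirror:
    """Toy model of synchronous replication (mirroring).

    A write is acknowledged only after *both* sites have stored the
    block, so the two sites never diverge and a failover loses no
    acknowledged data.
    """
    def __init__(self, primary, secondary):
        self.primary = primary
        self.secondary = secondary

    def write(self, block_id, data):
        self.primary.write(block_id, data)
        self.secondary.write(block_id, data)  # in reality, blocks until the remote site confirms
        return "ack"                          # acknowledged only after both commits
```

The cost of that guarantee is exactly the latency issue described above: each write takes as long as the slower, more distant site.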

 Asynchronous replication: The alternative controller-based method is asynchronous replication. The biggest difference is that data on an asynchronous system is replicated to the second site at user-prescribed intervals. That gap creates data latency and carries a higher level of business risk: should an event occur, any data still waiting to be replicated will be lost.
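The interval-driven behavior, and the risk window it creates, can be sketched like so (class and method names are illustrative; a real system flushes on a timer over the network):

```python
class AsyncReplica:
    """Toy model of asynchronous replication.

    Writes are acknowledged immediately at the primary and shipped to
    the replica in batches. Anything still in the pending queue when
    disaster strikes is lost -- that window is the exposure the
    article describes.
    """
    def __init__(self):
        self.primary = {}
        self.replica = {}
        self.pending = []

    def write(self, key, data):
        self.primary[key] = data
        self.pending.append((key, data))  # acknowledge now, replicate later

    def flush(self):
        """Ship queued writes to the replica (runs at user-prescribed intervals)."""
        for key, data in self.pending:
            self.replica[key] = data
        self.pending.clear()
```

Any write made after the last flush exists only on the primary; that queue is exactly the data an outage would destroy.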

When determining which data replication solution best fits your organization, the first objective should be to define the recovery point objective (RPO) and recovery time objective (RTO). The RPO is the amount of data loss the organization is willing to sustain, while the RTO is the amount of time it is willing to live without its business-critical applications -- the maximum tolerable outage.

If a disaster occurs, how much time can your business afford to lose? An hour? A day? A week? An organization that requires immediate recovery time will need to budget significantly more funds for data replication than an organization that can afford to be down for a few days or a week.

Similarly, a tight RPO is expensive, but small-to-midsize businesses must weigh preventive expenditures against the potentially exorbitant cost of significant data loss. Identifying the RPO and RTO will help you allocate the appropriate resources and move forward accordingly (see "Is Your Business Playing Russian Roulette With System Availability?").
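One way to make the trade-off concrete -- the parameter names here are ours, not an industry standard -- is to check a candidate plan against both objectives. With asynchronous replication every N minutes, up to N minutes of data can be lost, so the interval must not exceed the RPO, and the estimated recovery time must not exceed the RTO:

```python
def meets_objectives(rpo_min: int, rto_min: int,
                     replication_interval_min: int,
                     est_recovery_min: int) -> bool:
    """True only if the plan satisfies both objectives.

    Worst-case data loss equals the replication interval, so it must
    fit inside the RPO; estimated recovery must fit inside the RTO.
    """
    return (replication_interval_min <= rpo_min
            and est_recovery_min <= rto_min)

# Example: a 1-hour RPO and 4-hour RTO tolerate 15-minute replication
# with a 2-hour recovery, but not a 90-minute replication interval.
```

Tightening either number pushes you toward the more expensive end of the spectrum -- ultimately toward synchronous replication and hot standby sites.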

Do your homework and evaluate which system is best for you -- the right data replication solution varies case by case. When planning for your worst-case scenario, think about your most critical data and how keeping it replicated could save you from suffering the full consequences of an emergency.

» posted by ITworld staff
