
EMC RecoverPoint Family Overview A Detailed Review

Abstract

This white paper provides an overview of EMC® RecoverPoint, establishing the basis for a functional understanding of the product and technology. This information includes the primary design concepts, basic architecture, components, and data flow.

February 2010


Copyright © 2006, 2007, 2008, 2009, 2010 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com

All other trademarks used herein are the property of their respective owners.

Part Number H2346.10


Table of Contents

Executive summary
Introduction
    Audience
Overview of EMC RecoverPoint
    Consistency groups and replication policies
        Continuous replication
    System architecture
        RecoverPoint appliance
        Configuration
        Repository volume
        Journal volume
    Write splitters
        CLARiiON splitter
        Intelligent-fabric splitter
        Host-splitter driver (KDriver)
    Replication modes
        Asynchronous replication mode
        Synchronous replication mode
        Dynamic synchronous mode
    Data flow
        Continuous remote replication
        Continuous data protection
        Concurrent local and remote data protection
        Extensions to the basic system architecture
        Management interface
Overview of EMC RecoverPoint/SE
Conclusion
    Advantages of EMC RecoverPoint
    System highlights
References


Executive summary

EMC® RecoverPoint is an enterprise-scale solution designed to protect application data on heterogeneous SAN-attached servers and storage arrays. RecoverPoint runs on an out-of-band appliance and combines industry-leading continuous data protection technology with a bandwidth-efficient, no-data-loss replication technology, allowing it to protect data both locally and remotely.

Innovative data change journaling and application integration capabilities enable customers to address their pressing business, operations, and regulatory data protection concerns. Customers implementing RecoverPoint will see dramatic improvements in application protection and recovery times as compared to traditional host and array snapshots or disk-to-tape backup products.

This white paper is designed to give technology decision-makers a deeper understanding of RecoverPoint design, features, and functionality, and how its capabilities can be applied within their environments. Additionally, it describes the functional differences between EMC RecoverPoint/SE for CLARiiON® and EMC RecoverPoint.

Introduction

This white paper provides an overview of EMC RecoverPoint and helps the reader develop a deeper functional understanding of the product and technology. Information includes:

• Primary design concepts
• Basic architecture, components, and data flow
• Alternatives to the basic architecture
• Advantages of the architecture
• Introduction to the management interface, through which administrators carry out most of their system administration tasks
• RecoverPoint licensing
• Users and permissions

Audience

This white paper is targeted to corporate management and technical decision-makers, including storage and server administrators, IT managers, and application engineers, as well as storage integrators, consultants, and distributors.

Overview of EMC RecoverPoint

EMC RecoverPoint provides local and remote data protection, enabling reliable replication of data over any distance; that is, locally within the same site, and/or remotely to another site, even halfway around the globe. Specifically, RecoverPoint protects and supports replication of data that applications are writing over Fibre Channel to local SAN-attached storage. RecoverPoint uses existing Fibre Channel infrastructure to integrate seamlessly with existing host applications and data storage subsystems. For long distances, it uses either Fibre Channel or an IP network to send the data over a WAN.

Consistency groups and replication policies

Replication by RecoverPoint is based on a logical entity called a consistency group. SAN-attached storage volumes at the primary and secondary sites, called replication volumes by RecoverPoint, are assigned to a consistency group to define the set of data to be replicated. An application, such as Microsoft Exchange, typically has its storage resources defined in a single consistency group so there is a mapping between an application and a consistency group. RecoverPoint ensures that data consistency and dependent write-order fidelity are maintained across all the volumes defined in a consistency group, including any volumes that are accessed by different hosts or reside on different storage systems.

Replication by RecoverPoint is policy-driven. A replication policy, based on the particular business needs of your company, is uniquely specified for each consistency group. The policy comprises a set of parameters that collectively governs the way in which replication is carried out. Replication behavior changes dynamically during system operation in light of the policy, level of system activity, and availability of network resources.
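To make the idea of a per-group policy concrete, here is a minimal Python sketch of a consistency group and its replication policy. The field names are illustrative only (they echo concepts discussed later in this paper, such as replication mode, lag, and snapshot granularity) and are not RecoverPoint's actual parameter names.

```python
# A minimal sketch of a per-consistency-group replication policy as a plain
# data structure. Field names here are illustrative, not RecoverPoint's actual
# parameter names; they mirror concepts discussed later in this paper.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReplicationPolicy:
    mode: str = "async"                    # "async", "sync", or "dynamic"
    max_lag_seconds: float = 30.0          # acceptable lag for asynchronous replication
    snapshot_granularity: str = "dynamic"  # "per_write", "per_second", or "dynamic"
    compression_enabled: bool = True       # bandwidth reduction before transfer
    required_protection_window_hours: int = 72  # how far back the journal must reach

@dataclass
class ConsistencyGroup:
    name: str
    replication_volumes: List[str]         # LUNs replicated as one consistent set
    policy: ReplicationPolicy = field(default_factory=ReplicationPolicy)

# Example: an Exchange application mapped to a single consistency group.
exchange_cg = ConsistencyGroup(
    name="Exchange",
    replication_volumes=["LUN_10", "LUN_11", "LUN_12"],
    policy=ReplicationPolicy(mode="dynamic", max_lag_seconds=10.0),
)
```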

Throughout this paper, the two ends of the data replication process in a consistency group are normally designated as follows:

• Source site — the location from which data is replicated
• Target site — the location to which data is replicated

In some instances, users may need or want to execute a failover, to facilitate replication in the opposite direction. In these instances, the designations of source and target sites switch.

Continuous replication

The great advantage of asynchronous continuous replication is its ability to provide synchronous-like replication without degrading the performance of the host applications. For asynchronous remote replication, RecoverPoint pioneered the use of high frequency, or small-aperture, image captures. By reducing the lag between writing data to storage at the source site and writing the same data at the target site, the extent to which the copy is not up to date is dramatically reduced. For local replication, the lag is zero and every change is replicated and retained in the local journal. For remote synchronous replication, the lag is also zero.

Among the other advantages inherent in RecoverPoint's support for synchronous or asynchronous remote replication is that only the writes are transferred, and then only after applying powerful bandwidth reduction and compression technologies. This results in a significant savings in bandwidth. Moreover, because the quantity of data that comprises a change is small, RecoverPoint can maintain a journal containing many images, which is useful in the event rollback is necessary. Hence RecoverPoint replication offers an intelligent and effective remote replication solution.

EMC RecoverPoint automatically optimizes replication performance based on current conditions, including the replication type (local, remote, or both), application load, throughput capacity, and replication policy. Regardless of the replication optimization, EMC RecoverPoint is unique in its ability to guarantee a consistent copy at the target site under all circumstances, and in its ability to maintain the distributed write-order fidelity in multi-host heterogeneous SAN environments.


System architecture

The specific components of EMC RecoverPoint are shown in Figure 1 and are described in detail later in this paper.

Figure 1. EMC RecoverPoint architecture

RecoverPoint appliance

The RecoverPoint appliance (RPA) is an EMC-supplied, intelligent hardware platform that runs RecoverPoint software on top of a custom-built 64-bit Linux kernel environment. The RPA manages all aspects of the local and remote data replication and recovery at both sites.

During replication for a given consistency group, an RPA at the source site makes intelligent decisions regarding when and what data to transfer to the target site. It bases these decisions on its continuous analysis of replication load and resource availability, balanced against the need to prevent degradation of host-application performance and to deliver maximum adherence to the specified replication policy. The RPA at the target site distributes the data to the target-site storage.

In the event of failover, these roles can be reversed. Moreover, RecoverPoint supports simultaneous bi-directional replication, where the same RPA can serve as both the source and target of replication.

In a RecoverPoint installation, there is a minimum of two RPAs at each site, which constitute a RecoverPoint cluster. Physically, a RecoverPoint cluster is located in the same facility as the host and production storage subsystems, though in a stretch-CDP or CLR configuration the RPAs may be located in a bunker site some distance from the host and production storage subsystems.

All RPAs in a cluster have identical functionality and are active all of the time. If one of the RPAs in a cluster goes down, RecoverPoint immediately switches to one of the other RPAs with no loss of replication or recovery data.

Configuration

An RPA cluster comprises at least two RPA nodes per site, with each node active. Additional RPAs can be added to an existing cluster, with up to eight RPAs supported per site. For remote replication, additional RPAs must be added in pairs, one at each site, before they are available for use. For single-site configurations, additional RPAs can be added one at a time.

During installation, each RPA is installed and configured individually; however, once an RPA is configured, it can be managed as one of the nodes in an RPA cluster. Regardless of the site used to work from, the administrator can perform all management activities for the nodes in the local RPA cluster, as well as the RPA cluster at the other site. In other words, once configured, the entire RecoverPoint installation can be managed from a single location.

Throughout this paper, the system configuration is based on hosts on which a host splitter driver, also known as a KDriver, has been installed to handle the write splitting function.

The following storage entities reside on the local and remote storage subsystems and are used by RecoverPoint during its operation.

Repository volume

A repository volume is defined on the SAN-attached storage at each site for each RecoverPoint cluster. The repository volume serves all RPAs of the particular cluster and the splitters associated with that cluster. It stores configuration information about the RPAs and consistency groups; this enables a properly functioning RPA to seamlessly assume the replication activities of a failing RPA from the same cluster. Additional copies of the repository volume are stored on the local hard disks of the first two RecoverPoint appliances. This means that if the repository volume is unavailable, there will not be any impact on the RecoverPoint system, and it will continue to replicate normally.

Journal volume

The journal volume (or set of volumes) is provisioned on the storage at both sites for each consistency group. The journal holds images waiting to be distributed, or that have already been distributed, to the target volume(s). Each consistency group has either two or three journals: one or two at the source site, and one at the target site. For local and remote replication being performed in the same consistency group, there will be three journals: the local site will have two journals, and the remote site will have one journal. A production journal is required in order to support failover from the production site to the other site; additionally, the production journal volume stores information about the replication process (marking information) that is used to make resynchronization of the replication volumes at the two sites, when required, much more efficient. Each journal holds as many images as its capacity allows, after which the oldest image, provided that it has already been distributed, is removed to make room for the newest one; that is, in a first-in, first-out (FIFO) manner.

Users can also consolidate older images so that a much longer history can be saved in the journal. These images can be consolidated automatically using the Management Application GUI or the CLI. Images can also be consolidated manually through the CLI. The user specifies the period of time during which all images are retained. After this period, a lower granularity can be specified for older images: daily images, weekly images, and monthly images.

The actual number of images in the journal is variable, depending on the size of the image and the capacity of the storage dedicated to the journal. Storage efficiency is maintained in the journal by retaining only changes between an image and its predecessor. Additionally, the journal is also compressed, resulting in even more storage savings. Source and target data on the replication volumes is always consistent upon completing the distribution of each image. Journal snapshots can also be consolidated by policy, which saves journal space and enables longer retention periods but with less granular recovery points.

Individual snapshots can be addressed in the journal. Hence, if required due to a disaster, the stored data image can be rolled back to an earlier snapshot unaffected by the disaster. Frequent small-aperture snapshots provide high granularity for achieving maximum data recovery in the event of such a rollback.
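The journal behavior described above, FIFO removal of already-distributed images and optional consolidation of older images to a coarser granularity, can be sketched in a few lines of Python. The structures and sizes below are invented for illustration and are not RecoverPoint's on-disk format.

```python
# Illustrative sketch of journal FIFO retention and image consolidation.
from dataclasses import dataclass

@dataclass
class Image:
    timestamp: float       # seconds since epoch
    size_gb: float         # changed blocks only (delta from the previous image)
    distributed: bool = False

class Journal:
    def __init__(self, capacity_gb: float):
        self.capacity_gb = capacity_gb
        self.images = []   # oldest first

    def used_gb(self) -> float:
        return sum(img.size_gb for img in self.images)

    def add_image(self, image: Image) -> None:
        self.images.append(image)
        # FIFO pruning: drop the oldest image, provided it has been distributed.
        while self.used_gb() > self.capacity_gb and self.images[0].distributed:
            self.images.pop(0)

    def consolidate_older_than(self, cutoff: float, bucket_seconds: float = 86400.0) -> None:
        """Collapse images older than `cutoff` into one image per time bucket
        (for example, one per day), trading recovery granularity for a longer
        retention window."""
        buckets, kept = {}, []
        for img in self.images:
            if img.timestamp >= cutoff:
                kept.append(img)
            else:
                key = int(img.timestamp // bucket_seconds)
                if key in buckets:
                    buckets[key].size_gb += img.size_gb   # merge consecutive deltas
                else:
                    buckets[key] = Image(img.timestamp, img.size_gb, img.distributed)
        self.images = sorted(buckets.values(), key=lambda i: i.timestamp) + kept
```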

Write splitters

RecoverPoint is an out-of-band solution that utilizes write splitters to monitor writes and ensure that a copy of every write to a protected volume is tracked and sent to the local RecoverPoint appliance. RecoverPoint supports three different types of write splitters: the CLARiiON-based write splitter, the intelligent-fabric-based write splitter, and the host-based write splitter. RecoverPoint supports all three write splitters, and RecoverPoint/SE supports the CLARiiON splitter and the Windows host-based write splitter. A RecoverPoint configuration requires at least one type of splitter at each site, though all three can be used simultaneously if required.


CLARiiON splitter

RecoverPoint supports a CLARiiON-resident splitter that runs inside the storage processors on CLARiiON CX3 and CX4 arrays. In this case, the splitter function is carried out by the CLARiiON storage processor; a KDriver is not installed on the host. The CLARiiON splitter is supported with FLARE® 26, 28, and 29 and requires the installation of the no-charge RecoverPoint enabler. Unlike the other splitters, the CLARiiON splitter supports LUNs up to 32 TB in size; the other splitters are limited to LUNs up to 2 TB minus 512 MB in size. The CLARiiON splitter is also supported by RecoverPoint/SE, which enables RecoverPoint/SE to support non-Windows hosts such as AIX, HP-UX, Linux, OpenVMS, Solaris, and VMware. A CLARiiON splitter can be shared between multiple clusters, enabling the use of a single CLARiiON CX3 or CX4 by up to four RecoverPoint or RecoverPoint/SE clusters. A LUN cannot span clusters, which means that the same LUN cannot be used by more than one cluster.

Intelligent-fabric splitter

Note: Intelligent-fabric splitters are not supported by RecoverPoint/SE.

RecoverPoint is designed to support storage services APIs available on intelligent-fabric switches, such as the Brocade Storage Application Services API for the Connectrix® AP-7600B switch application platform and the PB-48K-AP4-18 blade application platform. It also supports the Cisco SANTap API for the Connectrix MDS 18/4 Multi-Services Blade, the Connectrix Storage Services Module blade installed in a Connectrix MDS 9000 intelligent switch, and the Connectrix MDS-9222i fabric switch. In this case the splitter function is carried out by an intelligent switch using the switch vendor’s APIs. When using an intelligent-fabric splitter, a KDriver is not used and is not installed on the host. Figure 2 shows intelligent-fabric splitting.

Figure 2. Intelligent-fabric splitting

The system behaves basically the same as it does when using a KDriver on the host to perform the splitting function. The Transfer and Distribution data flows described later in this paper are unchanged; only the Write data flow is different (a minimal sketch of this sequence follows the numbered steps):

1. The host writes data to the volume through the switch fabric. At the switch, the write is split, with one copy sent to the RPA, and the other sent to the source volume.

2. The storage system returns an ACK upon successfully writing the data to storage.


3. Immediately upon receiving the data, the RPA returns an ACK to the switch.

4. The switch sends an ACK to the host that the write has been completed successfully.
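The following is a minimal Python sketch of the split-write sequence above. The switch, RPA, and array classes are stand-ins invented for illustration; the point is the ordering of acknowledgements: the host receives its ACK only after both the storage system and the RPA have acknowledged their copies of the write.

```python
# Illustrative sketch of fabric-based write splitting and acknowledgement order.
class Rpa:
    def receive_write(self, lun: str, data: bytes) -> str:
        # Step 3: the RPA buffers its copy and acknowledges immediately.
        return "ACK"

class StorageArray:
    def write(self, lun: str, data: bytes) -> str:
        # Step 2: the array acknowledges once the data is written to storage.
        return "ACK"

class IntelligentSwitch:
    def __init__(self, rpa: Rpa, array: StorageArray):
        self.rpa, self.array = rpa, array

    def host_write(self, lun: str, data: bytes) -> str:
        # Step 1: split the write; one copy to the source volume, one to the RPA.
        storage_ack = self.array.write(lun, data)
        rpa_ack = self.rpa.receive_write(lun, data)
        # Step 4: acknowledge the host only after both ACKs are in hand.
        if storage_ack == "ACK" and rpa_ack == "ACK":
            return "ACK"
        raise IOError("write was not acknowledged by both paths")

switch = IntelligentSwitch(Rpa(), StorageArray())
assert switch.host_write("LUN_10", b"payload") == "ACK"
```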

Host-splitter driver (KDriver)

Note: RecoverPoint/SE supports the CLARiiON splitter and the Windows host-splitter.

The KDriver is system software installed on all hosts that have access to protected volumes that are locally replicated using continuous data protection and/or remotely replicated using continuous remote replication. The KDriver supports AIX, Solaris, and Windows hosts; other hosts are supported with the CLARiiON splitter or intelligent-fabric splitter. The primary function of a splitter driver is to “split” application writes so that they are sent not only to their normally designated storage volumes, but also to the RecoverPoint appliance. The host-splitter driver carries out this activity efficiently, with little perceptible impact on host performance, since all CPU-intensive processing necessary for replication is performed by the RPA.

Replication modes

RecoverPoint is unique in its ability to guarantee a consistent replica at the target site under all circumstances, and in its ability to retain write-order fidelity in multi-host heterogeneous SAN environments. RecoverPoint replicates data in one of two replication modes, asynchronous or synchronous. It also offers dynamic synchronous replication, which enables you to establish policies that automatically switch between synchronous and asynchronous replication.

Asynchronous replication mode

In asynchronous replication mode, the host application initiates a write and does not wait for an acknowledgement from the remote RPA before initiating the next write. Asynchronous replication is supported over Fibre Channel and an IP network. A copy of every write is stored in buffers in the local RPA and acknowledged at the local site. The RPA decides, based on the lag policy, system loads, and available resources, when to transfer the writes stored in the RPA to RPAs at the other site that have access to the replica storage. The primary advantage of asynchronous replication is its ability to provide synchronous-like replication without regulating the write activity of host applications.

If the link between the sites goes down, writes are held in the buffers in the RPAs until those buffers fill. If the link comes up before the buffers fill, the pending writes are transferred to the remote site. If the link is still down when the buffers fill, the RPA moves into marking mode and uses the production journal to mark the blocks that were written during the link outage. When the link comes back up, this marking information is used to identify the blocks written during the outage that need to be sent to the remote site.

In asynchronous replication mode, a Snapshot Granularity policy is used to regulate data transfer:

• Fixed (per write): send the changes from every write operation.
• Fixed (per second): send changes once per second.
• Dynamic: have the system determine the snapshot granularity according to available resources.

New consistency groups are created with Snapshot Granularity set to dynamic. By default, new consistency groups are created with asynchronous mode enabled, and must be set to replicate synchronously through the RecoverPoint Management Application. A minimal sketch of the buffering and marking behavior described above follows.
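The Python sketch below illustrates the buffering and marking behavior. The buffer size, block tracking, and method names are invented for illustration and are not RecoverPoint internals; the sketch only shows the transition from buffering full writes to marking changed blocks, and the resynchronization that uses the marks once the link returns.

```python
# Illustrative sketch of asynchronous buffering and "marking mode".
class AsyncSourceRpa:
    def __init__(self, buffer_capacity: int):
        self.buffer_capacity = buffer_capacity
        self.buffer = []            # pending (block, data) writes awaiting transfer
        self.marked_blocks = set()  # "marking information" kept in the production journal
        self.marking_mode = False
        self.link_up = True

    def host_write(self, block: int, data: bytes) -> None:
        if self.marking_mode:
            self.marked_blocks.add(block)          # track only which block changed
        elif len(self.buffer) < self.buffer_capacity:
            self.buffer.append((block, data))      # normal asynchronous buffering
        else:
            # Buffer full (in this sketch that only happens while the link is down,
            # since transfer() drains the buffer whenever the link is up): switch to
            # marking mode and record every buffered block plus this one.
            self.marking_mode = True
            self.marked_blocks.update(b for b, _ in self.buffer)
            self.marked_blocks.add(block)
            self.buffer.clear()

    def transfer(self, read_block_from_storage) -> list:
        """Called when the link is available; returns what must be sent to the target."""
        if not self.link_up:
            return []
        if self.marking_mode:
            # Resynchronize: re-read only the marked blocks from production storage.
            to_send = [(b, read_block_from_storage(b)) for b in sorted(self.marked_blocks)]
            self.marked_blocks.clear()
            self.marking_mode = False
            return to_send
        to_send, self.buffer = self.buffer, []
        return to_send
```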


Synchronous replication mode

In synchronous replication mode, the host application initiates a write and then waits for an acknowledgement from the remote RPA before initiating the next write. Synchronous replication is not the default and must be specified by the user. Synchronous replication mode is supported over Fibre Channel.

Replication in synchronous mode produces a replica that is always up to date with its production source. Synchronous replication mode is efficient for replication within the local SAN environment (as in CDP configurations), as well as for replication over Fibre Channel (as in CRR configurations). However, when replicating synchronously, the longer the distance between the production source and the replica copy, the greater the latency.

In synchronous replication mode, all host application write activity must be regulated by RecoverPoint to ensure that no subsequent writes are made until an acknowledgement is received from the remote RPA. Because of this regulation, the application does not receive an acknowledgement of its write until a copy of the write has been sent to the other site and acknowledged as received there, which will impact the application's performance under heavy write loads. If your applications cannot be regulated for any reason, choose asynchronous replication mode.

Users can also configure RecoverPoint to dynamically alternate between synchronous and asynchronous replication modes, according to predefined lag and/or throughput conditions.

Dynamic synchronous mode

When remotely replicating data, users can set RecoverPoint to replicate in dynamic synchronous mode. In this mode, users define group protection policies that enable the consistency group to start out replicating synchronously and then automatically switch to replicating asynchronously whenever the group's data throughput or latency reaches a maximum threshold. Once the group's throughput or latency falls below a minimum threshold, it automatically switches back to synchronous mode.

When the replication mode is controlled dynamically by both throughput and latency (both Dynamic by latency and Dynamic by throughput are enabled), it is enough that one of the two maximum thresholds (Max latency for sync or Max throughput for sync) is met for RecoverPoint to automatically start replicating asynchronously to a replica. However, both minimum thresholds (Min latency for sync and Min throughput for sync) must be met before RecoverPoint will automatically revert to synchronous replication mode. To prevent jittering, the values specified for the minimum thresholds must be lower than the values specified for their corresponding maximum thresholds, or the system will issue an error.

Groups undergo a short initialization phase every time the replication mode changes. During this initialization phase, data is transferred asynchronously. The user can also manually switch between replication modes using the RecoverPoint CLI. This is useful, for example, if the user generally requires synchronous replication but wishes to use CLI scripts and a system scheduler to switch between replication modes at different times of day, such as during backups.
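The threshold logic described above can be illustrated with a short Python sketch. The class and parameter names mirror the policy settings mentioned in the text (Max/Min latency and throughput for sync), but the code is only an illustration of the hysteresis behavior, not RecoverPoint's implementation.

```python
# Illustrative sketch of dynamic synchronous mode threshold hysteresis.
class DynamicSyncPolicy:
    def __init__(self, max_latency_ms, min_latency_ms, max_throughput_mbs, min_throughput_mbs):
        # To prevent jittering, each minimum must be lower than its maximum.
        if min_latency_ms >= max_latency_ms or min_throughput_mbs >= max_throughput_mbs:
            raise ValueError("minimum thresholds must be lower than their maximums")
        self.max_latency_ms = max_latency_ms
        self.min_latency_ms = min_latency_ms
        self.max_throughput_mbs = max_throughput_mbs
        self.min_throughput_mbs = min_throughput_mbs
        self.mode = "sync"   # the group starts out replicating synchronously

    def update(self, latency_ms: float, throughput_mbs: float) -> str:
        if self.mode == "sync":
            # Breaching either maximum threshold is enough to go asynchronous.
            if latency_ms > self.max_latency_ms or throughput_mbs > self.max_throughput_mbs:
                self.mode = "async"
        else:
            # Both minimums must be satisfied before reverting to synchronous.
            if latency_ms < self.min_latency_ms and throughput_mbs < self.min_throughput_mbs:
                self.mode = "sync"
        return self.mode

policy = DynamicSyncPolicy(max_latency_ms=5.0, min_latency_ms=2.0,
                           max_throughput_mbs=200.0, min_throughput_mbs=100.0)
assert policy.update(latency_ms=6.0, throughput_mbs=50.0) == "async"  # latency spike
assert policy.update(latency_ms=3.0, throughput_mbs=50.0) == "async"  # still above the minimum
assert policy.update(latency_ms=1.0, throughput_mbs=50.0) == "sync"   # both minimums met
```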

Data flow

Figure 3 shows data flow in the basic system configuration for data written by the host, where the system replicates in snapshot mode to a remote site.


Figure 3. RecoverPoint data flow for continuous remote replication

Continuous remote replication

For replication, data originates as a write from a host at the source site. The data is then transferred to the target site, and then distributed to the appropriate volume(s). This data flow is described in detail next.

Write

The flow of data for a write transaction is as follows:

1. The host writes data to the volume through the KDriver. The KDriver sends it to the RPA and to the source replication volume.

2. Immediately upon receiving the data, the RPA returns an ACK to the KDriver. The storage system holding the source replication volume returns an ACK to the KDriver upon successfully writing the data to storage.

3. The KDriver sends an ACK to the host that the write has been completed successfully.

This sequence of events 1-3 can be repeated multiple times before the data is transferred.

Transfer

The flow of data for transfer is as follows:

1. After processing the image data (for example, applying the various compression techniques), the RPA sends the image over the WAN or Fibre Channel to its peer RPA at the target site.

2. The RPA at the target site writes the image to the journal.

3. Upon successful writing of the complete image to the journal, an ACK is returned to the target RPA.

4. The target RPA returns an ACK to its peer at the source site.

Upon receiving this ACK, the source RPA removes the associated marking information for the completed transfer from the repository volume.


Distribution

RecoverPoint proceeds at the first opportunity to distribute the image to the appropriate location on the target-site storage. The logical flow of data for distribution follows:

1. The target RPA reads the image from the journal.

2. The RPA then reads existing information from the relevant target replication volume.

3. The RPA writes “undo” information (that is, information that can support a rollback, if necessary) to the journal.

4. The RPA then writes the image to the appropriate target replication volume.

In total, the remote journal and replica volume must handle five I/Os: two writes and one read for the journal (the image write during transfer, plus the image read and the undo write during distribution), and a read and a write to the replica volume. A sketch of this distribution sequence follows.
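The Python sketch below walks through this sequence with invented data structures to show where each of the five target-side I/Os falls and how the stored undo record supports rollback. It is an illustration of the flow described above, not RecoverPoint's on-disk format.

```python
# Illustrative sketch of target-side transfer, distribution, and undo/rollback.
class TargetSide:
    def __init__(self):
        self.journal_images = []   # images received from the source RPA
        self.journal_undo = []     # undo records supporting rollback
        self.replica = {}          # block -> data on the target replication volume

    def transfer(self, image: dict) -> None:
        self.journal_images.append(image)                      # journal write #1

    def distribute(self) -> None:
        image = self.journal_images.pop(0)                      # journal read
        undo = {blk: self.replica.get(blk) for blk in image}    # replica read
        self.journal_undo.append(undo)                          # journal write #2
        self.replica.update(image)                              # replica write

    def rollback(self) -> None:
        """Undo the most recent distribution using the stored undo record."""
        undo = self.journal_undo.pop()
        for blk, old in undo.items():
            if old is None:
                self.replica.pop(blk, None)
            else:
                self.replica[blk] = old

target = TargetSide()
target.transfer({"blk7": b"new"})
target.distribute()
assert target.replica == {"blk7": b"new"}
target.rollback()
assert target.replica == {}
```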

Continuous data protection

RecoverPoint can be used to perform replication within the same local building using continuous data protection technology. For CDP, the data is continuously written to the journal and to the replica image. Other than this, the operation of the system is the same, including the ability to use the journal to recover back to a point in time, and the ability, if necessary, to fail over to the target volume(s).

Every write is kept in the history volume, allowing recovery to any point in time. In Figure 4, there is no WAN, the target volume(s) are part of the storage at the same site, and the same RPA appears in each of the segments. The data flow is described in detail next.

Figure 4. Data flow for CDP

Write

The flow of data for a write transaction follows:

1. The host writes data to the volume through the KDriver. The KDriver sends it to the RPA and to the source replication volume.

2. Immediately upon receiving the data, the RPA returns an ACK to the KDriver; if synchronous replication is selected, the RPA must write the data to the journal before returning the ACK.


3. The KDriver sends an ACK to the host that the write has been completed successfully.

Transfer

The flow of data for transfer for asynchronous replication is as follows:

1. RPA writes the image to the journal.

2. Upon successful writing of the complete image to the journal, an ACK is returned to the RPA.

Upon receiving this ACK, the RPA removes the associated marking information for the completed image from the repository volume.

Distribution

RecoverPoint proceeds at the first opportunity to “distribute” the image to the appropriate location on the target-site storage. The logical flow of data for distribution is as follows:

1. The target RPA reads the image from the journal.

2. The RPA then reads existing information from the relevant target replication volume.

3. The RPA writes “undo” information (that is, information that can support a rollback, if necessary) to the journal.

4. The RPA then writes the image to the appropriate target replication volume.

For continuous data protection every write is captured and resides either in the RPA memory or on the journal. In the event of a failure the latest changes are always available.

Concurrent local and remote data protection

RecoverPoint can be used to perform both local replication using CDP and remote replication using CRR for the same set of production volumes. This type of replication is called concurrent local and remote (CLR) data protection. A single copy of the write is sent to the RPA by the splitter; at that point, it is divided into two streams, with one stream being handled as a CRR stream and the other stream being handled as a CDP stream. The flow for these two streams is identical to the CRR and CDP flows described previously, with each stream independent of the other. If local replication is paused, this does not affect the remote replication stream, which will continue. Similarly, if remote replication is paused, the local replication will continue.

RecoverPoint and RecoverPoint/SE support a simultaneous mix of groups for CRR, CDP, or CLR. Certain policy parameters do not apply for CDP and will not be visible in the management interface. Consistency groups that are managed and controlled by VMware vCenter Site Recovery Manager (SRM) can be either CRR or CLR consistency groups; however, only the remote replicas will be utilized by VMware vCenter SRM for failover. Consistency groups that are managed and controlled by Cluster Enabler can only be CRR consistency groups.
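As a small illustration of the stream independence described above, the Python sketch below fans a single split write into separate local (CDP) and remote (CRR) streams. The class names, and the marking of blocks while a stream is paused, are invented for illustration only.

```python
# Illustrative sketch of CLR: one split write fanned into two independent streams.
class ReplicationStream:
    def __init__(self, name: str):
        self.name = name
        self.paused = False
        self.pending = []    # writes waiting to be journaled/transferred
        self.marked = set()  # blocks to resynchronize after a pause

    def handle(self, write) -> None:
        lun, block, _data = write
        if self.paused:
            self.marked.add((lun, block))   # remember the block for later resync
        else:
            self.pending.append(write)

class ClrRpa:
    def __init__(self):
        self.cdp = ReplicationStream("local CDP")
        self.crr = ReplicationStream("remote CRR")

    def receive_split_write(self, write) -> None:
        # The splitter sends one copy; the RPA divides it into two streams.
        self.cdp.handle(write)
        self.crr.handle(write)

rpa = ClrRpa()
rpa.crr.paused = True                        # e.g., remote replication is paused
rpa.receive_split_write(("LUN_10", 42, b"data"))
assert len(rpa.cdp.pending) == 1             # local protection continues unaffected
assert rpa.crr.marked == {("LUN_10", 42)}    # remote stream tracks the change for later
```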

Extensions to the basic system architecture

The following are additional configurations that build upon the basic architecture defined previously.

Distributed consistency groups

A consistency group can be handled by more than one RecoverPoint appliance. The default is to have a single appliance manage all the writes for a consistency group. However, there are some instances when the write activity of a single consistency group can exceed the throughput of a single RecoverPoint appliance. In such a case you define the consistency group as a distributed consistency group, and select a primary RPA and up to three secondary RPAs. All writes for a distributed consistency group are split and the copy is sent to the primary RPA. The primary RPA identifies the block range of the write and, depending on the range, may handle the write or may send all or part of the write to one or more of the secondary RPAs. Splitting the write into specific ranges helps avoid an RPA being overwhelmed by writes that occur over a narrow section of the LUN. Figure 5 helps describe this operation. In this example, RPA 1 is the primary RPA, and RPA 2, RPA 3, and RPA 4 are the secondary RPAs.

Figure 5. Distributed consistency group architecture
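The Python sketch below illustrates range-based routing of the kind described above: the primary RPA keeps the block ranges it owns and forwards the rest to the secondary RPAs. The even striping of the block address space and the class names are invented for this illustration; RecoverPoint's actual range assignment is not documented here, and a write spanning a range boundary could be split across RPAs.

```python
# Illustrative sketch of routing writes in a distributed consistency group by block range.
from bisect import bisect_right

class DistributedGroup:
    def __init__(self, rpa_names, lun_size_blocks):
        # Split the LUN's block address space evenly across the participating RPAs.
        self.rpa_names = rpa_names
        stripe = lun_size_blocks // len(rpa_names)
        self.boundaries = [stripe * i for i in range(1, len(rpa_names))]

    def route(self, start_block: int) -> str:
        """Return the RPA responsible for the range containing start_block."""
        return self.rpa_names[bisect_right(self.boundaries, start_block)]

group = DistributedGroup(["RPA 1", "RPA 2", "RPA 3", "RPA 4"], lun_size_blocks=4000)
assert group.route(100) == "RPA 1"    # handled by the primary itself
assert group.route(1500) == "RPA 2"   # forwarded to a secondary
assert group.route(3999) == "RPA 4"
```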

Management interface

RecoverPoint management activities for configuring, managing, and monitoring the complete RecoverPoint cluster are performed via the virtual site-management IP address, using either the command line interface (CLI) or the RecoverPoint Management Application graphical user interface (GUI). The management interfaces provide access to all nodes in the local RecoverPoint cluster, as well as to the RecoverPoint cluster at the other site for CRR and CLR.

Command line interface

The command line interface is accessed by using a secure shell (SSH) login. The CLI supports two operational modes: interactive and command line. Interactive mode allows users to enter a command name, after which the system prompts for the mandatory and optional parameters. In command line mode, all of the information for the command is entered in a single statement. Command line mode is valuable for automation, allowing complete CLI sessions to be run using CLI scripts. By using SSH to establish the necessary connection between the relevant RPA and a designated script, the system runs the session automatically and securely.
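As an example of the scripted, command-line mode described above, the Python sketch below opens an SSH session to the cluster's site-management address and runs a single CLI command. The IP address, user name, and command string are placeholders, not verified RecoverPoint CLI syntax; consult the RecoverPoint CLI reference for actual command names.

```python
# Illustrative sketch of running a CLI command over SSH from a script.
import subprocess

SITE_MGMT_IP = "10.0.0.100"   # placeholder virtual site-management IP address
CLI_USER = "admin"            # placeholder management account

def run_cli(command: str) -> str:
    """Run a single CLI command over SSH and return its output."""
    result = subprocess.run(
        ["ssh", f"{CLI_USER}@{SITE_MGMT_IP}", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Example: a scheduler (such as cron) could call this script to switch a group's
# replication mode before nightly backups; the command text is hypothetical.
if __name__ == "__main__":
    print(run_cli("get_system_status"))
```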

EMC RecoverPoint Management Application GUI

Management activities can also be performed using the RecoverPoint Management Application GUI, invoked through a standard web browser. The GUI is started by initiating an HTTP or secure HTTPS session to the virtual site-management IP address. The GUI is Java-based and automatically downloads the necessary components upon its first invocation.


All functions that can be accomplished through the GUI can also be executed through the CLI, either in interactive or command line mode. Additionally, up to four GUI sessions can be open on different workstations, allowing multiple users to simultaneously monitor or manage RecoverPoint, as well as automate operations.

Figure 6. RecoverPoint Management Application GUI


Overview of EMC RecoverPoint/SE

EMC RecoverPoint/SE software provides customers with a comprehensive solution to protect and replicate data against loss due to failures or errors, for CLARiiON CX array environments only. RecoverPoint/SE is a data protection solution that supports a local continuous data protection module and a continuous remote replication module. These modules run on their own RecoverPoint appliances to reduce cost and ease management, and can be deployed separately or together, based on user needs for end-to-end data protection.

RecoverPoint/SE offers bidirectional replication between two CLARiiON arrays with no distance limitation, guaranteed data consistency, and advanced bandwidth reduction technology designed to dramatically reduce WAN bandwidth requirements and associated costs. RecoverPoint/SE supports only one CLARiiON array for local replication and two CLARiiON arrays (for instance, one CX4-480 at the source site and one CX3-40 at the target site) for local and remote replication. Table 1 summarizes the differences between the two products.

Table 1. Comparison of RecoverPoint/SE and RecoverPoint

Feature | RecoverPoint/SE | RecoverPoint
--------|-----------------|-------------
Operating systems supported | Heterogeneous* with CLARiiON splitter; Windows with host splitter (native or VMware) | Heterogeneous*
Storage arrays supported | CLARiiON AX4-5, CX, CX3, CX4; Celerra NX4, NS20, NS40, NS80, NS-120, NS-480, NS-960 | Heterogeneous*
Number of arrays | 1 per site | Unlimited
Concurrent local and remote data protection | Included | Included
Licensing | Per replicated capacity | Per replicated capacity
Number of RPAs | 2 to 8 per site | 2 to 8 per site
Synchronous replication | Included | Included
Journal compression | Not included | Included
Capacity | Licensed up to 150 TB | Licensed up to 600 TB
Splitters supported | CLARiiON, Windows | AIX, Solaris, Windows, Brocade FAP or Cisco SANTap, CLARiiON

* Refer to the EMC Support Matrix for an exact list of supported OS, storage, and multipathing combinations.


Conclusion

Advantages of EMC RecoverPoint

EMC RecoverPoint provides significant advantages over typical host- or array-based snapshot and replication technology. Placing the intelligence in an out-of-band appliance located at the junction between the WAN and SAN enables RecoverPoint to monitor the SAN and WAN behavior on an ongoing basis, and then to use the information to support policy-driven dynamic system behavior.

Using RecoverPoint enables customers to achieve synchronous-level protection at the source site without the associated degradation of application performance possible with host-based solutions. Additionally, RecoverPoint’s policy-based replication management and data compression algorithms dramatically reduce the storage and WAN bandwidth required as compared to host- or array-based local and remote replication solutions.

System highlights

• RecoverPoint protects data locally and remotely, enhancing the operational recovery and disaster recovery of your data.
• RecoverPoint provides maximum protection against data corruption due to human error and rolling disasters. Moreover, the replicated data remains consistent across any type of failure at the local or remote site.
• RecoverPoint uses an out-of-band appliance, not the host server (where it would use memory and CPU cycles), and not the storage subsystem (where it would use storage resources). This ensures that local and remote replication does not impact production applications.
• The replicated copy is up to date, with multiple copies of the data accessible at all times, enabling failover with no data loss. RecoverPoint provides a suite of technologies that ensures the most reliable, up-to-date, consistent copy of the data possible.
• Data transfer continues without interruption even during concurrent processing of one or both replicated copies. RecoverPoint minimizes use of bandwidth, while reacting dynamically to changing conditions in real time.
• Use of additional storage resources for data replication is minimized since RecoverPoint leverages existing software, hardware, and operating system infrastructure, without compromising its solution for disaster protection.

References

Visit the EMC RecoverPoint page for more information. White paper titles include:

• Introduction to EMC RecoverPoint 3.3: New Features and Functions
• Improving Microsoft Exchange Server Recovery with EMC RecoverPoint
• Using EMC RecoverPoint Concurrent Local and Remote for Operational and Disaster Recovery
• Improving VMware Disaster Recovery with EMC RecoverPoint
• Solving Data Protection Challenges with EMC RecoverPoint
