
EMC WHITE PAPER

EMC APPSYNC PERFORMANCE AND SCALABILITY GUIDELINES

ABSTRACT This document contains performance guidelines that will help you configure EMC® AppSync® to optimize for performance and scalability of your application protection solution. Use this document in conjunction with other AppSync documents that contain supplemental information as referenced in the “References” section on page 23.

July 2016


To learn more about how EMC products, services, and solutions can help solve your business and IT challenges, contact your local representative or authorized reseller, visit www.emc.com, or explore and compare products in the EMC Store

Copyright © 2016 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided “as is.” EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

All other trademarks used herein are the property of their respective owners.

Part Number h11153.1


TABLE OF CONTENTS

EMC APPSYNC PERFORMANCE AND SCALABILITY GUIDELINES ................ 1

ABSTRACT ....................................................................................................... 1

EXECUTIVE SUMMARY ..................................................................................... 6

Audience ...................................................................................................... 6

USING THE PERFORMANCE AND SCALABILITY GUIDELINES ........................... 6

HARDWARE RESOURCE GUIDELINES ............................................................... 7

Hardware considerations ................................................................................. 7

Available disk space for AppSync protection data ................................................ 7

MICROSOFT EXCHANGE SERVER PROTECTION PERFORMANCE GUIDELINES ... 7

Hardware specifications that were used for Microsoft Exchange Server testing ........ 8

Deployment time estimate for AppSync Microsoft Exchange Server Protection ........ 8

Supported AppSync Microsoft Exchange mailbox database throughputs ................. 8

AppSync internal workload distribution mechanisms for Microsoft Exchange Server . 9

Microsoft Exchange database protection service cycle time and resource utilization estimates (VMAX VPSnap, Clone) ..................................................................... 9

Microsoft Exchange Database protection service cycle time and resource utilization estimates (XtremIO Snap) ............................................................................ 10

Microsoft Exchange Database protection service cycle time and resource utilization estimates (VPLEX with XtremIO as backend Array) ........................................... 10

MICROSOFT SQL SERVER PROTECTION PERFORMANCE GUIDELINES ............ 11

Hardware specifications that were used for Microsoft SQL Server testing (RecoverPoint) ............................................................................................ 11

Deployment time estimate for AppSync Microsoft SQL Server Protection .............. 12

Supported AppSync Microsoft SQL Server database throughputs ........................ 12

AppSync internal workload distribution mechanisms for Microsoft SQL Server ....... 12

Microsoft SQL Server database protection service cycle time and resource utilization estimates (RecoverPoint CDP) ....................................................................... 13

Microsoft SQL Server database protection service cycle time and resource utilization estimates (XtremIO Snap) ............................................................................ 13


Microsoft SQL Server database restore service cycle time and resource utilization estimates (RecoverPoint CDP) ....................................................................... 13

SQL Database protection service cycle time and resource utilization estimates (VMAX Snap, VMAX Clone) ...................................................................................... 14

SQL Database protection service cycle time and resource utilization estimates (XtremIO Snap) ........................................................................................... 14

SQL Database protection service cycle time and resource utilization estimates (VPLEX XtremIO) ......................................................................................... 15

FILESYSTEM PERFORMANCE GUIDELINES ..................................................... 15

Deployment time estimate for AppSync FileSystem Protection ............................ 15

Supported AppSync FileSystem throughputs .................................................... 16

AppSync internal workload distribution mechanisms for FileSystems ................... 16

VMWARE VMFS DATASTORE (NON-NFS) PROTECTION PERFORMANCE GUIDELINES .................................................................................................. 16

Hardware specifications that were used for VMware Datastore testing ................. 16

Deployment time estimate for AppSync VMware Datastore Protection .................. 17

Supported AppSync VMware Datastore throughputs .......................................... 17

AppSync internal workload distribution mechanisms for VMware Datastores ......... 17

VMware Datastores protection service cycle time estimates (VNX Snapshots) ....... 18

VMware Datastores restore service cycle time estimates (VNX Snapshots) ........... 18

VMware Datastores protection service cycle time and resource utilization estimates (VMAX Snap, VMAX Clone) ............................................................................ 18

VMware Datastores Database protection service cycle time and resource utilization estimates (XtremIO Snap) ............................................................................ 19

VMware Datastores Database protection service cycle time and resource utilization estimates (VPLEX XtremIO) ........................................................................... 19

ORACLE DATABASE PERFORMANCE GUIDELINES .......................................... 20

Hardware specifications that were used for Oracle Database testing .................... 20

Deployment time estimate for AppSync Oracle Database Protection .................... 20

AppSync Oracle Database Protection Key Performance Factor: Source Storage .... 21

Supported AppSync Oracle Database throughputs ............................................ 21

Oracle Database protection service cycle time estimates (VNX Snapshots) ........... 22


APPSYNC SERVER RE-INITIALIZATION TIME AND RESOURCE UTILIZATION ESTIMATES .................................................................................................... 22

END USER RESPONSE TIMES WITHIN APPSYNC USER INTERFACE ................ 22

BEST PRACTICES ........................................................................................... 22

APPSYNC SERVER SETTINGS FOR PERFORMANCE FINE TUNE ....................... 23

CONCLUSION ................................................................................................ 23

REFERENCES ................................................................................................. 23

White papers ............................................................................................... 23

Product documentation ................................................................................. 23

EMC ONLINE SUPPORT .................................................................................. 23


EXECUTIVE SUMMARY This document contains measured performance data, guidelines, and descriptions of internal workload distribution mechanisms in the EMC AppSync Server. This document provides guidance for planning, deploying, and configuring EMC® AppSync™ protected Microsoft Exchange Server, Microsoft SQL Server, Oracle Database, VMware VMFS Datastores (Non-NFS), and FileSystem software environments. The information and guidelines described in this document will help you maximize performance and scalability, and therefore realize the full potential of the EMC AppSync product.

AUDIENCE This document is intended for EMC internal and field personnel, and technically qualified EMC customers who will be deploying and operating EMC AppSync.

USING THE PERFORMANCE AND SCALABILITY GUIDELINES The guidelines and performance measurements described in this document were established by EMC AppSync engineering. Every effort has been made to provide clearly defined guidelines; however, many external environmental factors make firm guidelines difficult to establish. Overall system performance is determined by the following factors:

• Count and/or size of environmental items like arrays, appliances, hosts, application objects, and number of LUNs for source storage.

• Number of service plans scheduled to be concurrently active and the number of application objects that each plan is protecting.

• Level of network latency between the AppSync Server and hosts.

• Level of network latency between the AppSync Server and vCenter Servers.

• Level of network latency between the AppSync Server and the VNX arrays and RecoverPoint appliances.

• Number of concurrent active consoles.

• Hardware specifications and overall performance of the virtual machine or physical server hosting the AppSync Server.

• Hardware specifications and overall performance of the virtual machine or physical servers hosting a Microsoft Windows OS used as Microsoft Exchange Servers, Microsoft SQL Servers, and Utility hosts. (This includes each individual host’s performance of the Microsoft VSS writers, Virtual Disk Service, and Volume Shadow Copy Service).

• Hardware specifications and overall performance of the virtual machine or physical servers hosting a UNIX/Linux OS used as Oracle Database or FileSystem Servers. (This includes performance of disk managers and disk services).

• Hardware specifications and overall performance of the virtual machine or physical servers used as VMware vCenter Servers.

• Hardware specifications and overall performance of the virtual machine or physical servers used as SMI-S Servers.

• Hardware specifications and overall performance of ESX Servers hosting the VMFS Datastores that are protected.

• The number of powered-on virtual machines in a datastore (when VM-Consistent copies are created).

• Hardware specifications and overall performance of your EMC storage arrays and RecoverPoint appliances that the protected applications reside on.

• Running the user interface directly on the machine hosting the AppSync Server versus from a remote machine. For best performance, do not use the EMC AppSync console on the machine hosting the AppSync Server once the AppSync Server has been put into production.

• Using TimeFinder Clone replication technology for VMAX Array storage can require long-duration sync or re-sync of target devices, causing longer create-copy time durations versus other replication technologies.

• When AppSync provisions replication targets for VMAX replication technologies, the time duration for the create-copy phases will run longer until the maximum number of copies for application objects is initially reached. The maximum number of copies is determined by the expiration rotation policy service plan setting.

• When using XtremIO Snap replication technology, see the release notes for your storage systems’ software version to learn the following limits:

o Number of snapshots per production volume


o Number of members in snapshot set operation

o Number of folders

o Number of volumes in folder

These limits will affect the scalability of snapshot retention, the number of source volumes for an app object, and the number of protected app objects within your AppSync application protection environment.
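As an illustration only, a planned retention configuration can be checked against such limits before subscribing application objects. The limit values below are placeholders, not figures from this document; substitute the real numbers from your XtremIO release notes:

```python
# Placeholder limits -- substitute the values from your storage system's
# release notes. These numbers are NOT from the document.
LIMITS = {
    "snapshots_per_production_volume": 512,
    "members_per_snapshot_set": 64,
}

def check_xtremio_plan(source_volumes_per_app: int,
                       retained_copies: int) -> list:
    """Return a list of limit violations for a planned AppSync
    retention policy using XtremIO Snap replication."""
    problems = []
    if retained_copies > LIMITS["snapshots_per_production_volume"]:
        problems.append("retention exceeds snapshots per production volume")
    if source_volumes_per_app > LIMITS["members_per_snapshot_set"]:
        problems.append("too many source volumes for one snapshot set")
    return problems
```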

The best practice when adopting a new product is to build up your configuration incrementally, testing as you go. This action ensures that configuration issues can be resolved promptly, and not affect the performance of the rest of your large-scale software environment.

HARDWARE RESOURCE GUIDELINES Learn about the system resources your EMC AppSync software environment requires.

HARDWARE CONSIDERATIONS Note: EMC requires that the AppSync Server is the only application running on the system that is hosting it.

Minimum AppSync Server hardware:

• Memory: 6 GB

• Number of CPUs: 2

• Disk space: 2 GB

Recommended AppSync Server hardware:

• Memory: 8 GB

• Number of CPUs: 4

• Disk space: 10 GB (includes space for application protection data for 50 application objects)

Minimum AppSync Host Plug-In hardware:

• Memory: 6 GB

• Number of CPUs: 2

• Disk space: 2 GB

Note: When using a virtual machine to host your AppSync Server, ensure that it is able to utilize 100 percent of the CPU and memory resources that are allocated to it.

AVAILABLE DISK SPACE FOR APPSYNC PROTECTION DATA For every 50 databases, datastores, or filesystems protected, allocate 4.25 GB of available disk space for the AppSync datastore (which is installed on the AppSync Server). This guarantees enough space for data growth requirements.
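As a sketch of this sizing rule (the 4.25 GB per 50 objects ratio comes from the guideline above; the function name and the round-up-to-a-full-group behavior are illustrative assumptions):

```python
import math

GB_PER_GROUP = 4.25  # disk space per group of 50 protected objects (from this guideline)
GROUP_SIZE = 50

def appsync_datastore_space_gb(protected_objects: int) -> float:
    """Estimate disk space (GB) to reserve for the AppSync datastore.

    Allocates 4.25 GB for every started group of 50 databases,
    datastores, or filesystems, rounding up to a whole group.
    """
    groups = math.ceil(protected_objects / GROUP_SIZE)
    return groups * GB_PER_GROUP

# 120 protected objects -> 3 groups of 50 -> 12.75 GB
print(appsync_datastore_space_gb(120))
```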

MICROSOFT EXCHANGE SERVER PROTECTION PERFORMANCE GUIDELINES Learn about the Microsoft Exchange Server hardware specifications, deployment time estimates, throughputs, service cycle time and resource utilization estimates, and internal workload distribution mechanisms.


HARDWARE SPECIFICATIONS THAT WERE USED FOR MICROSOFT EXCHANGE SERVER TESTING All systems were virtual machines hosted on VMware ESX Servers. 100% reservation for CPU and memory was enabled to ensure there was not a machine performance bias.

AppSync Server

• OS: Windows Server 2008 Service Pack 2

• Memory: 6 GB

• CPU: 2 (2.40 GHz cores)

Microsoft Exchange Server Hosts

• OS: Windows Server 2008 R2 Enterprise

• Memory: 4 GB

• CPU: 1 (2.50 GHz core)

• Microsoft Exchange Server version: Exchange Server 2010

• Number of volumes per mailbox database: 2

• RDM devices were used

Mount Hosts

• OS: Windows Server 2008 R2 Enterprise

• Memory: 4 GB

• CPU: 1 (2.50 GHz core)

• RDM devices were used

Storage Hardware

• VMAX array model: VMAX 40K

• VMAX Enginuity version: 5876.229.145

• VMAX pool storage: thin pools (TDEVs) used

EMC VPLEX Metro 5.4.1

• XtremIO Storage System configuration: 1 X-Brick

• XtremIO Storage System software version: 4.0.2 Build 80

DEPLOYMENT TIME ESTIMATE FOR APPSYNC MICROSOFT EXCHANGE SERVER PROTECTION Table 1 shows time estimates for deploying and configuring the EMC AppSync product for Microsoft Exchange Server Protection.

Table 1 Deployment time estimates

DEPLOYMENT TYPE | TIME ESTIMATE
Installation of the AppSync Server | 5 Minutes
Push installing 8 hosts for use with AppSync | 7 Minutes
Configuring 2 service plans, each protecting 25 Microsoft Exchange databases across 2 Microsoft Exchange Servers and using 2 utility hosts for mounting copies | 10 Minutes

SUPPORTED APPSYNC MICROSOFT EXCHANGE MAILBOX DATABASE THROUGHPUTS AppSync is designed for the following throughputs for Microsoft Exchange Server mailbox database protection:

• 50 Mailbox databases replicated per hour

• 25 Mailbox databases replicated and mounted per hour


The throughputs do not include the time that is required for consistency checking the databases, because that time can be variable depending on the utility host performing the check, the size of the databases, and the number of logs that must be checked. For more information about Microsoft Exchange database consistency checking, see the EMC AppSync User Guide.

The throughputs stated are maximums recommended by EMC. They may not be achievable for all AppSync Protected Microsoft Exchange Server environments, due to the factors listed in “Using the Performance and Scalability Guidelines” on page 6. Protection of active or passive copies within a Microsoft Exchange Server DAG cluster should not have a major effect on performance. The size of the mailbox databases and log files has little effect on copy creation and mounting a copy.
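For rough planning against these stated maximums, the required protection window can be estimated as follows. The function and the example numbers are illustrative, and consistency-check time is excluded, as noted above:

```python
REPLICATE_PER_HOUR = 50        # mailbox databases replicated per hour (stated maximum)
REPLICATE_MOUNT_PER_HOUR = 25  # replicated and mounted per hour (stated maximum)

def protection_window_hours(replicate_only: int, replicate_and_mount: int) -> float:
    """Minimum hours needed to protect a workload at the stated maximum
    throughputs; consistency-check time is excluded, as in the document."""
    return (replicate_only / REPLICATE_PER_HOUR
            + replicate_and_mount / REPLICATE_MOUNT_PER_HOUR)

# 100 replicate-only plus 50 replicate-and-mount databases -> at least 4 hours
print(protection_window_hours(100, 50))
```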

APPSYNC INTERNAL WORKLOAD DISTRIBUTION MECHANISMS FOR MICROSOFT EXCHANGE SERVER A workload for AppSync Microsoft Exchange Server protection is defined as the total number of mailbox databases that are protected by AppSync. In the EMC AppSync Server, the workload is divided into efficiency groupings for optimal performance.

When distributing protected application objects into groupings, consider the following:

• Databases within the same service plan.

• Databases on the same host.

• Databases with storage sourced on the same storage array or appliance.

Each efficiency grouping can have a maximum of 12 Microsoft Exchange mailbox databases. AppSync is optimized to be concurrently working on a service plan phase for three of these efficiency groupings, with the maximum of 12 databases per grouping. If the efficiency groupings in the AppSync configuration contain a smaller number of databases (for example, two databases per efficiency grouping), then up to five efficiency groupings will be concurrently worked on. When the optimal AppSync Server protection activity level is reached, other work that is currently scheduled is queued to be run as soon as possible to allow for maximum protection performance.
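The grouping and concurrency behavior described above can be sketched as follows. The grouping key and the handling of intermediate group sizes are assumptions for illustration, since the document states only the 12-database maximum with 3 concurrent groupings and the small-group example with up to 5:

```python
from typing import Dict, Iterable, List, Tuple

MAX_DBS_PER_GROUP = 12  # stated maximum databases per efficiency grouping

def efficiency_groups(databases: Iterable[Tuple[str, str, str, str]]) -> List[List[str]]:
    """Partition (name, service_plan, host, array) records into efficiency
    groupings: databases sharing plan, host, and source array are grouped
    together, and each grouping holds at most 12 databases."""
    buckets: Dict[Tuple[str, str, str], List[str]] = {}
    for name, plan, host, array in databases:
        buckets.setdefault((plan, host, array), []).append(name)
    groups: List[List[str]] = []
    for members in buckets.values():
        for i in range(0, len(members), MAX_DBS_PER_GROUP):
            groups.append(members[i:i + MAX_DBS_PER_GROUP])
    return groups

def concurrent_groups(group_size: int) -> int:
    """Concurrency stated in the text: 3 groupings at the 12-database
    maximum, up to 5 for small groupings (e.g. 2 databases). The cutoff
    for intermediate sizes is an assumption."""
    return 3 if group_size >= MAX_DBS_PER_GROUP else 5
```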

AppSync is optimized to protect large numbers of application objects subscribed to a service plan. EMC does not recommend a one-to-one mapping of application objects to service plans unless it is a requirement.

Note: EMC recommends a maximum workload of 300 mailbox databases per AppSync Server.

MICROSOFT EXCHANGE DATABASE PROTECTION SERVICE CYCLE TIME AND RESOURCE UTILIZATION ESTIMATES (VMAX VP SNAP, CLONE) Table 2 shows the observed service cycle time and resource utilization (AppSync Server and Client) estimates for Microsoft Exchange DAG mailbox database protection and restore with the Bronze service plan.

Database Layout:

12 Microsoft Exchange mailbox databases, each with data and logs on separate LUNs, 4 GB in size.


Table 2 Protection and Restore service cycle time and resource utilization (VMAX VP Snap, TF Clone)

SERVICE CYCLE TYPE | DURATION* (HH:MM:SS) | SERVER AVERAGE CPU USAGE | SERVER MAXIMUM MEMORY (GB) | HOST PLUG-IN AVERAGE CPU USAGE | HOST PLUG-IN MAXIMUM MEMORY (MB)
Replication of 12 databases (VP Snap) | 00:03:00 | 22% | 3.67 | 11% | 165
Replication and Mount of 12 databases (VP Snap) | 00:21:00 | 35% | 4.34 | 15% | 280
Unmount of 12 databases (VP Snap) | 00:23:15 | 27% | 4.20 | 12% | 220
Restore of 1 Microsoft Exchange Database (VP Snap) | 00:03:25 | 5% | 2.34 | 3% | 57
Replication of 12 databases (TF Clone) | 00:03:30 | 24% | 5.50 | 13% | 190
Replication and Mount of 12 databases (TF Clone) | 00:22:00 | 25% | 4.60 | 14% | 315
Unmount of 12 databases (TF Clone) | 00:24:30 | 18% | 4.10 | 11% | 190
Restore of 1 Microsoft Exchange Database (TF Clone) | 00:03:28 | 6% | 2.55 | 5% | 65

*Duration does not include the initial automatic provisioning of target devices. In the test environment, the required target devices were created in advance and placed in a storage group configured in AppSync. When AppSync must provision the target devices, replication takes approximately one additional hour.

MICROSOFT EXCHANGE DATABASE PROTECTION SERVICE CYCLE TIME AND RESOURCE UTILIZATION ESTIMATES (XTREMIO SNAP) Table 3 shows the observed service cycle time and resource utilization (AppSync Server and Client) estimates for Microsoft Exchange DAG mailbox database protection and restore with the Bronze service plan.

Database Layout:

12 Microsoft Exchange mailbox databases, each with data and logs on separate LUNs, 4 GB in size.

Table 3 Protection and Restore service cycle time and resource utilization (XtremIO Snap)

SERVICE CYCLE TYPE | DURATION (HH:MM:SS) | SERVER AVERAGE CPU USAGE | SERVER MAXIMUM MEMORY (GB) | HOST PLUG-IN AVERAGE CPU USAGE | HOST PLUG-IN MAXIMUM MEMORY (MB)
Replication of 12 databases | 00:01:20 | 22% | 5.87 | 18% | 324
Replication and Mount of 12 databases | 00:13:30 | 32% | 5.50 | 22% | 468
Unmount of 12 databases | 00:17:00 | 28% | 4.35 | 23% | 432
Restore of 1 Microsoft Exchange Database | 00:01:40 | 23% | 6.66 | 13% | 281

MICROSOFT EXCHANGE DATABASE PROTECTION SERVICE CYCLE TIME AND RESOURCE UTILIZATION ESTIMATES (VPLEX WITH XTREMIO AS BACKEND ARRAY)

Table 4 shows the observed service cycle time and resource utilization (AppSync Server and Client) estimates for Microsoft Exchange DAG mailbox database protection and restore with the Bronze service plan, using RAID-0 VPLEX virtual volumes.

Database Layout:

12 Microsoft Exchange mailbox databases, each with data and logs on separate LUNs, 4 GB in size.

Table 4 Protection and Restore service cycle time and resource utilization (VPLEX Snap)

SERVICE CYCLE TYPE | DURATION (HH:MM:SS) | SERVER AVERAGE CPU USAGE | SERVER MAXIMUM MEMORY (GB) | HOST PLUG-IN AVERAGE CPU USAGE | HOST PLUG-IN MAXIMUM MEMORY (MB)
Replication of 12 databases | 00:03:20 | 26% | 5.15 | 20% | 127
Replication and Mount of 12 databases | 00:15:17 | 24% | 5.18 | 17% | 130
Unmount of 12 databases | 00:17:00 | 18% | 5.20 | 16% | 122
Restore of 1 Microsoft Exchange Database | 00:02:00 | 24% | 4.45 | 12% | 130

VNXe Array Storage Performance Factors
EMC recommends that Microsoft Exchange Server mailbox databases sourced on VNXe block LUNs have either:

• All of the data and log files contained within one VNXe LUN group.

• All of the data files contained within one LUN group and all of the log files contained within a separate LUN group.

MICROSOFT SQL SERVER PROTECTION PERFORMANCE GUIDELINES Learn about the Microsoft SQL Server hardware specifications, deployment time estimates, throughputs, service cycle time and resource utilization estimates, and internal workload distribution mechanisms.

HARDWARE SPECIFICATIONS THAT WERE USED FOR MICROSOFT SQL SERVER TESTING (RECOVERPOINT) All systems were virtual machines hosted on VMware ESX Servers. 100% reservation for CPU and memory was enabled to ensure there was not a machine performance bias.

AppSync Server

• OS: Windows Server 2008 R2 Service Pack 1

• Memory: 6 GB

• CPU: 2 (2.3 GHz cores)

Microsoft SQL Server Hosts

• OS: Windows Server 2008 R2 Service Pack 1

• Memory: 6 GB

• CPU: 2 (2.3 GHz cores)

• Microsoft SQL Server version: SQL Server 2008 R2 Service Pack 1

• Number of volumes per database: 2

Utility Hosts

• OS: Windows Server 2008 R2 Service Pack 1

• Memory: 6 GB

• CPU: 2 (2.3 GHz cores)

• Microsoft SQL Server version: SQL Server 2008 R2 Service Pack 1

• Number of volumes per database: 2

Storage Hardware

• VNX array model: VNX 7600

• VNX Block OE version: 5.33.000.5.038

• VNX pool storage: thick LUNs used

• RecoverPoint hardware version: Generation 4

• RecoverPoint software version: RecoverPoint 3.5 Service Pack 2

• Number of RPAs: 2

• Number of RecoverPoint sites: 1

• XtremIO Storage System configuration: 1 X-Brick

• XtremIO Storage System software version: 4.0.2 Build 80

DEPLOYMENT TIME ESTIMATE FOR APPSYNC MICROSOFT SQL SERVER PROTECTION Table 5 shows time estimates for deploying and configuring the EMC AppSync product for Microsoft SQL Server database protection.

Table 5 Deployment time estimates

DEPLOYMENT TYPE | TIME ESTIMATE
Installation of the AppSync Server | 5 Minutes
Push installing 4 hosts for use with AppSync | 5 Minutes
Configuring 2 service plans, each protecting 25 Microsoft SQL Server databases across 2 Microsoft SQL Server instances and using 2 additional hosts for mounting copies | 7 Minutes

SUPPORTED APPSYNC MICROSOFT SQL SERVER DATABASE THROUGHPUTS AppSync is designed for the following throughputs for Microsoft SQL Server database protection:

• 50 databases replicated per hour

• 25 databases replicated and mounted per hour

The throughputs stated are maximums recommended by EMC. They may not be achievable for all AppSync Protected Microsoft SQL Server environments, due to the factors listed in “Using the Performance and Scalability Guidelines” on page 6. Most of the time, the size of the database files and log files has little effect on copy creation and mounting a copy.

APPSYNC INTERNAL WORKLOAD DISTRIBUTION MECHANISMS FOR MICROSOFT SQL SERVER A workload for AppSync Microsoft SQL Server is defined as the total number of databases that are protected by AppSync. In the EMC AppSync Server, the workload is divided into efficiency groupings for optimal performance.

When distributing protected application objects into groupings, consider the following:

• Databases within the same service plan.

• Databases on the same host.

• Databases with storage sourced on the same storage array or appliance.

Each efficiency grouping can have a maximum of 12 Microsoft SQL Server databases. AppSync is optimized to concurrently work on a service plan phase for three of these efficiency groupings, with the maximum of 12 databases per grouping. If the efficiency groupings in the AppSync configuration contain a smaller number of databases (for example, two databases per efficiency grouping), then up to five efficiency groupings will be concurrently worked on. When the optimal AppSync Server protection activity level is reached, other scheduled work is queued to run as soon as possible to allow for maximum protection performance.

AppSync is optimized to protect large numbers of application objects subscribed to a service plan. EMC does not recommend a one-to-one mapping of application objects to service plans unless it is a requirement.

Note: EMC recommends a maximum workload of 300 Microsoft SQL Server databases per AppSync Server.

MICROSOFT SQL SERVER DATABASE PROTECTION SERVICE CYCLE TIME AND RESOURCE UTILIZATION ESTIMATES (RECOVERPOINT CDP) Table 6 shows the observed service cycle time and resource utilization (AppSync client) estimates for Microsoft SQL Server databases that are protected with the Bronze service plan.

Database Layout:

12 SQL databases, each with data and logs on separate LUNs, on a single SQL instance.

Table 6 Protection service cycle time and resource utilization (RecoverPoint CDP)

SERVICE CYCLE TYPE | DURATION (HH:MM:SS) | HOST PLUG-IN AVERAGE CPU USAGE | HOST PLUG-IN MAXIMUM MEMORY (MB)
Replication of 12 databases | 0:03:59 | 3% | 63
Replication and mount of 12 databases | 0:12:09 | 2% | 58
Unmount of 12 databases | 0:09:32 | 1% | 51

MICROSOFT SQL SERVER DATABASE PROTECTION SERVICE CYCLE TIME AND RESOURCE UTILIZATION ESTIMATES (XTREMIO SNAP) Table 7 shows the observed service cycle time and resource utilization (AppSync client) estimates for Microsoft SQL Server databases that are protected with the Bronze service plan.

Database Layout:

12 SQL databases, each with data and logs on separate LUNs, on a single SQL instance.

Table 7 Protection service cycle time and resource utilization (XtremIO Snap)

SERVICE CYCLE TYPE | DURATION (HH:MM:SS) | HOST PLUG-IN AVERAGE CPU USAGE | HOST PLUG-IN MAXIMUM MEMORY (MB)
Replication of 12 databases | 0:02:20 | 5% | 60
Replication and mount of 12 databases | 0:06:14 | 2% | 60
Unmount of 12 databases | 0:03:43 | 1% | 30

MICROSOFT SQL SERVER DATABASE RESTORE SERVICE CYCLE TIME AND RESOURCE UTILIZATION ESTIMATES (RECOVERPOINT CDP) Table 8 shows the observed service cycle time and resource utilization (AppSync client) estimates for a Microsoft SQL Server database copy that is restored after it is protected with the Bronze service plan.*

Table 8 Restore service cycle time and resource utilization (RecoverPoint CDP)

SERVICE CYCLE TYPE | DURATION* (HH:MM:SS) | HOST PLUG-IN AVERAGE CPU USAGE | HOST PLUG-IN MAXIMUM MEMORY (MB)
Full restore of 1 database | 0:03:41 | 1% | 77

*This time does not include synchronization or replay of data on the RecoverPoint appliance and Microsoft SQL Server.


SQL DATABASE PROTECTION SERVICE CYCLE TIME AND RESOURCE UTILIZATION ESTIMATES (VMAX SNAP, VMAX CLONE) Table 9 shows the observed service cycle time and resource utilization (AppSync Server and Client) estimates for Microsoft SQL Server database protection and restore with the Bronze service plan.

Database Layout:

12 SQL databases, each with data and logs on separate LUNs, on a single SQL instance.

Table 9 Protection and Restore service cycle time and resource utilization (VMAX Snap, VMAX Clone)

SERVICE CYCLE TYPE | DURATION* (HH:MM:SS) | SERVER AVERAGE CPU USAGE | SERVER MAXIMUM MEMORY (GB) | HOST PLUG-IN AVERAGE CPU USAGE | HOST PLUG-IN MAXIMUM MEMORY (MB)
Replication of 12 databases (VP Snap) | 00:02:45 | 18% | 3.67 | 12% | 172
Replication and Mount of 12 databases (VP Snap) | 00:22:00 | 40% | 4.34 | 15% | 290
Unmount of 12 databases (VP Snap) | 00:23:30 | 32% | 4.20 | 16% | 232
Restore of 1 SQL Database (VP Snap) | 00:03:40 | 7% | 2.34 | 3% | 82
Replication of 12 databases (TF Clone) | 00:03:50 | 27% | 5.50 | 17% | 210
Replication and Mount of 12 databases (TF Clone) | 00:24:00 | 28% | 4.60 | 24% | 345
Unmount of 12 databases (TF Clone) | 00:23:00 | 15% | 4.10 | 18% | 170
Restore of 1 SQL Database (TF Clone) | 00:02:50 | 5% | 2.55 | 6% | 85

* Duration does not include the initial automatic provisioning of target devices. In the test environment, the required target devices were created in advance and placed in a storage group configured in AppSync. When AppSync must provision the target devices, replication takes approximately one additional hour.

SQL DATABASE PROTECTION SERVICE CYCLE TIME AND RESOURCE UTILIZATION ESTIMATES (XTREMIO SNAP) Table 10 shows the observed service cycle time and resource utilization (AppSync Server and Client) estimates for Microsoft SQL Server database protection and restore with the Bronze service plan.

Database Layout:

12 SQL databases, each with data and logs on separate LUNs, on a single SQL instance.


Table 10 Protection and Restore service cycle time and resource utilization (XtremIO Snap)

SERVICE CYCLE TYPE | DURATION (HH:MM:SS) | SERVER AVERAGE CPU USAGE | SERVER MAXIMUM MEMORY (GB) | HOST PLUG-IN AVERAGE CPU USAGE | HOST PLUG-IN MAXIMUM MEMORY (MB)
Replication of 12 databases | 00:01:45 | 15% | 6.63 | 31% | 280
Replication and Mount of 12 databases | 00:12:30 | 13% | 6.54 | 22% | 342
Unmount of 12 databases | 00:18:00 | 22% | 5.47 | 19% | 267
Restore of 1 SQL Database | 00:02:15 | 11% | 3.46 | 5% | 160

SQL DATABASE PROTECTION SERVICE CYCLE TIME AND RESOURCE UTILIZATION ESTIMATES (VPLEX XTREMIO) Table 11 shows the observed service cycle time and resource utilization (AppSync Server and Client) estimates for Microsoft SQL Server database protection and restore with the Bronze service plan, using RAID-0 VPLEX virtual volumes.

Database Layout:

12 SQL databases, each with data and logs on separate LUNs, on a single SQL instance.

Table 11 Protection and Restore service cycle time and resource utilization (VPLEX Snap)

SERVICE CYCLE TYPE | DURATION (HH:MM:SS) | SERVER AVERAGE CPU USAGE | SERVER MAXIMUM MEMORY (GB) | HOST PLUG-IN AVERAGE CPU USAGE | HOST PLUG-IN MAXIMUM MEMORY (MB)
Replication of 12 databases | 00:03:10 | 24% | 5.06 | 10% | 230
Replication and Mount of 12 databases (VPLEX Mount) | 00:14:16 | 22% | 5.05 | 20% | 185
Unmount of 12 databases | 00:15:00 | 20% | 5.05 | 15% | 127
Restore of 1 SQL Database | 00:02:00 | 22% | 4.47 | 23% | 135

VNXe Array Storage Performance Factors

EMC recommends that Microsoft SQL Server databases sourced on VNXe block LUNs have either:

• All of the data and log files contained in one VNXe LUN group.

• All of the data files contained in one LUN group and all of the log files contained in a different LUN group.

FILESYSTEM PERFORMANCE GUIDELINES

Learn about the FileSystem deployment time estimates, throughputs, and internal workload distribution mechanisms.

DEPLOYMENT TIME ESTIMATE FOR APPSYNC FILESYSTEM PROTECTION

Table 12 shows time estimates for deploying and configuring the EMC AppSync product for FileSystem protection.


Table 12 Deployment time estimates

| DEPLOYMENT TYPE | TIME ESTIMATE |
|---|---|
| Installation of the AppSync Server | 5 Minutes |
| Push installing 4 hosts for use with AppSync | 5 Minutes |
| Configuring 2 service plans, each protecting 25 FileSystems across 2 servers and using 2 additional hosts for mounting copies | 7 Minutes |

SUPPORTED APPSYNC FILESYSTEM THROUGHPUTS

AppSync is designed for the following throughputs for FileSystem protection:

• 100 FileSystems replicated per hour

• 50 FileSystems replicated and mounted per hour

The throughputs stated are maximums recommended by EMC. They may not be achievable in all AppSync-protected FileSystem environments, due to the factors listed in “Using the Performance and Scalability Guidelines” on page 6. In most cases, the size of the filesystem has little effect on creating or mounting a copy.

APPSYNC INTERNAL WORKLOAD DISTRIBUTION MECHANISMS FOR FILESYSTEMS

A workload for FileSystem protection is defined as the total number of filesystems that are protected by AppSync. In the EMC AppSync Server, the workload is divided into efficiency groupings for optimal performance. When distributing protected application objects into groupings, consider the following:

• FileSystems within the same service plan.

• FileSystems on the same host.

• FileSystems with storage sourced on the same storage array or appliance.

AppSync is optimized to work concurrently on a service plan phase for one or more efficiency groupings, totaling up to 50 filesystems. For example, two efficiency groupings that contain 25 filesystems each could have phases running concurrently. However, if the efficiency groupings in your AppSync Server configuration contain a small number of filesystems, AppSync may not be able to work on 50 filesystems concurrently, because the AppSync Server allows at most five efficiency groupings to be worked on at once. When the optimal AppSync Server protection activity level is reached, other scheduled work is queued to run as soon as possible, to allow for maximum protection performance.
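These concurrency limits can be illustrated with a small scheduling sketch. This is hypothetical code, not AppSync's internal implementation; the function name and the list-of-lists representation of efficiency groupings are assumptions for illustration only.

```python
# Hypothetical model of the limits described above: at most five efficiency
# groupings active at once, and at most 50 filesystems in flight across them.
# This is an illustration, not AppSync's actual scheduler.

def select_concurrent_groupings(groupings, max_groupings=5, max_filesystems=50):
    """Pick which efficiency groupings can run a phase concurrently; queue the rest."""
    active, queued, in_flight = [], [], 0
    for grouping in groupings:
        if len(active) < max_groupings and in_flight + len(grouping) <= max_filesystems:
            active.append(grouping)
            in_flight += len(grouping)
        else:
            queued.append(grouping)
    return active, queued

# Two groupings of 25 filesystems each can run concurrently (50 total in flight).
two_large = select_concurrent_groupings([["fs"] * 25, ["fs"] * 25])

# Ten groupings of 5 filesystems are capped at five concurrent groupings,
# so only 25 filesystems are worked on at once; the rest are queued.
many_small = select_concurrent_groupings([["fs"] * 5] * 10)
```

The sketch shows why many small groupings leave concurrency on the table: the grouping-count cap binds before the filesystem cap does.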

AppSync is optimized to protect large numbers of application objects subscribed to a service plan. EMC does not recommend a one-to-one mapping of application objects to service plans unless it is a requirement.

Note: EMC recommends a maximum workload of 300 FileSystems per AppSync Server.

VMWARE VMFS DATASTORE (NON-NFS) PROTECTION PERFORMANCE GUIDELINES

Learn about the VMware Datastore hardware specifications, deployment time estimates, throughputs, service cycle time and resource utilization estimates, and internal workload distribution mechanisms.

HARDWARE SPECIFICATIONS THAT WERE USED FOR VMWARE DATASTORE TESTING

AppSync Server

• OS: Windows Server 2008 R2 Service Pack 1

• Memory: 6 GB

• CPU: 2 (2.3 GHz cores)

VMware ESX Server

• VMware ESXi version: 5.0 Update 1

• Memory: 6 GB

• CPU: 8 (2.3 GHz cores)

• Number of datastores: 25

• Average number of virtual machines per datastore: 4

• Number of virtual machines registered: 92

• ESX Cluster: No

VMware vCenter Server environment

• vCenter Server version: 5.5.0

• Memory: 16 GB

• CPU: 4 (2.0 GHz cores)

• Total number of datastores in tested datacenter: 100

• Total number of virtual machines in tested datacenter: 350

• Total number of ESXi Servers in tested datacenter: 5

• Total number of datacenters under vCenter control: 8

• Total number of virtual machines under vCenter control: 400

• Total number of ESXi Servers under vCenter control: 16

Storage Hardware

• VNX array model: VNX 7600

• VNX Block OE version: 5.33.000.5.038

• VNX Pool Storage: Thick LUNs used

DEPLOYMENT TIME ESTIMATE FOR APPSYNC VMWARE DATASTORE PROTECTION

Table 13 shows time estimates for deploying and configuring the EMC AppSync product for VMware Datastore Protection.

Table 13 Deployment time estimates

| DEPLOYMENT TYPE | TIME ESTIMATE |
|---|---|
| Installation of the AppSync Server | 5 Minutes |
| Registering and Discovery of VMware VirtualCenter Server | 1 Minute |
| Configuring 2 service plans, each protecting 25 datastores | 2 Minutes |

SUPPORTED APPSYNC VMWARE DATASTORE THROUGHPUTS

AppSync is designed to replicate 50 VMware VMFS datastores per hour.

This is a maximum recommended by EMC and may not be achievable for all AppSync VMware Datastore protection environments, due to the factors that are listed in “Using the Performance and Scalability Guidelines” on page 6. The size of the datastores has little effect on copy creation and mounting a copy.

APPSYNC INTERNAL WORKLOAD DISTRIBUTION MECHANISMS FOR VMWARE DATASTORES

A workload for EMC AppSync VMware Datastore Protection is defined as the total number of datastores that are protected by AppSync. In the EMC AppSync Server, the workload is divided into efficiency groupings for optimal performance.


When distributing protected application objects into groupings, consider the following:

• Datastores within the same service plan.

• Datastores within the same VMware vCenter Server.

• Datastores with storage sourced on the same storage array or appliance.

Each efficiency grouping can have a maximum of 12 datastores. AppSync is optimized to be concurrently working on a service plan phase for three of these efficiency groupings, with the maximum of 12 datastores per grouping. If the efficiency groupings in the AppSync configuration contain a smaller number of datastores (for example, two datastores per efficiency grouping), then up to five efficiency groupings will be concurrently worked on. When the optimal AppSync Server protection activity level is reached, other work that is currently scheduled is queued to be run as soon as possible to allow for maximum protection performance.
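The 12-datastores-per-grouping limit can be sketched as a simple partition. This is illustrative only; AppSync's internal grouping logic is not public, and the function below is an assumption.

```python
# Illustrative sketch of the 12-datastores-per-grouping limit described above;
# not AppSync's actual partitioning logic.

def partition_into_groupings(datastores, max_per_grouping=12):
    """Split a protected-datastore workload into efficiency groupings."""
    return [datastores[i:i + max_per_grouping]
            for i in range(0, len(datastores), max_per_grouping)]

# 25 protected datastores fall into groupings of 12, 12, and 1.
groupings = partition_into_groupings([f"ds{i}" for i in range(25)])
```

Under this model, a workload of 25 datastores yields three efficiency groupings, of which AppSync could work on up to three concurrently.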

AppSync is optimized to protect large numbers of application objects subscribed to a service plan. EMC does not recommend a one-to-one mapping of application objects to service plans unless it is a requirement.

Note: EMC recommends a maximum workload of 300 datastores per AppSync Server.

VMWARE DATASTORES PROTECTION SERVICE CYCLE TIME ESTIMATES (VNX SNAPSHOTS)

Table 14 shows the observed service cycle time estimate for VMware VMFS datastores that are protected with the Bronze service plan.

Table 14 Protection service cycle time (VNX Snapshots)

| SERVICE CYCLE TYPE | MEASURED DURATION* (HH:MM:SS) |
|---|---|
| Replication of 12 datastores | 0:01:20 |

*This time does not include the creation of VMware Snapshots for the virtual machines.

VMWARE DATASTORES RESTORE SERVICE CYCLE TIME ESTIMATES (VNX SNAPSHOTS)

Table 15 shows the observed service cycle time estimate for a VMware VMFS datastore copy that is restored after it is protected with the Bronze service plan.

Table 15 Restore service cycle time (VNX Snapshots)

| SERVICE CYCLE TYPE | MEASURED DURATION* (HH:MM:SS) |
|---|---|
| Full restore of 1 VMware Datastore | 0:01:18 |

*This time does not include synchronization or replay of data on the VNX Array, or any manipulation of virtual machines, such as powering them on after restore.

VMWARE DATASTORES PROTECTION SERVICE CYCLE TIME AND RESOURCE UTILIZATION ESTIMATES (VMAX SNAP, VMAX CLONE)

Table 16 shows the observed service cycle time and resource utilization (AppSync Server and Client) estimates for VMware Datastores Protection and Restore with the Bronze service plan.

Datastore Layout:

12 VMware Datastores, each on a separate 10 GB LUN.


Table 16 Protection and Restore service cycle time and resource utilization (VMAX VP Snap, TF Clone)

| SERVICE CYCLE TYPE | DURATION* (HH:MM:SS) | SERVER AVERAGE CPU USAGE | SERVER MAXIMUM MEMORY (GB) |
|---|---|---|---|
| Replication of 12 datastores (VP Snap) | 00:02:00 | 22% | 3.47 |
| Replication and Mount of 12 datastores (VP Snap) | 00:24:00 | 35% | 4.45 |
| Unmount of 12 datastores (VP Snap) | 00:22:00 | 42% | 4.00 |
| Restore of 1 VMware Datastore (VP Snap) | 00:02:35 | 10% | 3.54 |
| Replication of 12 datastores (TF Clone) | 00:03:00 | 34% | 5.30 |
| Replication and Mount of 12 datastores (TF Clone) | 00:27:00 | 26% | 4.20 |
| Unmount of 12 datastores (TF Clone) | 00:25:00 | 18% | 4.30 |
| Restore of 1 VMware Datastore (TF Clone) | 00:02:00 | 8% | 2.25 |

* Duration does not include initial automatic provisioning of target devices. In the test environment, required target devices are created and placed in a storage group, which is configured in AppSync. Provisioning the target devices takes approximately one additional hour.

VMWARE DATASTORES PROTECTION SERVICE CYCLE TIME AND RESOURCE UTILIZATION ESTIMATES (XTREMIO SNAP)

Table 17 shows the observed service cycle time and resource utilization (AppSync Server and Client) estimates for VMware Datastores Protection and Restore with the Bronze service plan.

Datastore Layout:

12 VMware Datastores, each on a separate 10 GB LUN.

Table 17 Protection and Restore service cycle time and resource utilization (XtremIO Snap)

| SERVICE CYCLE TYPE | DURATION (HH:MM:SS) | SERVER AVERAGE CPU USAGE | SERVER MAXIMUM MEMORY (GB) |
|---|---|---|---|
| Replication of 12 datastores | 00:01:30 | 18% | 6.43 |
| Replication and Mount of 12 datastores | 00:10:30 | 17% | 6.34 |
| Unmount of 12 datastores | 00:17:00 | 32% | 5.00 |
| Restore of 1 VMware Datastore | 00:02:45 | 17% | 3.20 |

VMWARE DATASTORES PROTECTION SERVICE CYCLE TIME AND RESOURCE UTILIZATION ESTIMATES (VPLEX XTREMIO)

Table 18 shows the observed service cycle time and resource utilization (AppSync Server and Client) estimates for VMware Datastores Protection and Restore with the Bronze service plan and VPLEX RAID-0 virtual volumes.

Datastore Layout:

12 VMware Datastores, each on a separate 10 GB LUN.

Table 18 Protection and Restore service cycle time and resource utilization (VPLEX Snap)

| SERVICE CYCLE TYPE | DURATION (HH:MM:SS) | SERVER AVERAGE CPU USAGE | SERVER MAXIMUM MEMORY (GB) |
|---|---|---|---|
| Replication of 12 datastores | 00:03:30 | 34% | 5.40 |
| Replication and Mount of 12 datastores (VPLEX Mount) | 00:15:16 | 42% | 5.30 |
| Unmount of 12 datastores | 00:16:00 | 30% | 5.20 |
| Restore of 1 VMware Datastore | 00:02:30 | 18% | 4.00 |


ORACLE DATABASE PERFORMANCE GUIDELINES

Learn about the Oracle Database hardware specifications, deployment time estimates, throughputs, service cycle time and resource utilization estimates, and source storage as the key performance factor.

HARDWARE SPECIFICATIONS THAT WERE USED FOR ORACLE DATABASE TESTING

AppSync Server

• OS: Windows Server 2008 R2 Service Pack 1

• Memory: 8 GB

• CPU: 4 (2.3 GHz cores)

Oracle Database Production Hosts

• OS: Oracle Enterprise Linux 6.5

• Kernel: Red Hat Compatible Kernel

• Memory: 16 GB

• CPU: 16 (2.3 GHz cores)

• Oracle Database version: Oracle Database 11g Enterprise Edition Release 11.2.0.3

• Volume Group: Native lvm2

• Number of volume groups per database: 2

• Number of logical volumes per database: 8

• Number of physical volumes per database: 128 (64 per volume group)

Mount Hosts

• OS: Oracle Enterprise Linux 6.5

• Kernel: Red Hat Compatible Kernel

• Memory: 16 GB

• CPU: 16 (2.3 GHz cores)

Storage Hardware

• VNX array model: VNX 7600

• VNX Block OE version: 5.33.000.5.035

• VNX Pool Storage: Thin LUNs used

DEPLOYMENT TIME ESTIMATE FOR APPSYNC ORACLE DATABASE PROTECTION

Table 19 shows time estimates for deploying and configuring the EMC AppSync product for Oracle Database protection.

Table 19 Deployment time estimates

| DEPLOYMENT TYPE | TIME ESTIMATE |
|---|---|
| Installation of the AppSync Server | 5 Minutes |
| Push installing 4 hosts for use with AppSync | 5 Minutes |
| Configuring 1 service plan protecting an Oracle RAC Database and also scheduling a Repurposing Copy | 7 Minutes |


APPSYNC ORACLE DATABASE PROTECTION KEY PERFORMANCE FACTOR: SOURCE STORAGE

The main performance and scalability factor of an AppSync Oracle Database protection environment is the number of storage devices that the Oracle Database resides on. The time to create and mount a copy scales linearly with the number of source devices. For example, if it takes five minutes to create the copy for an Oracle Database sourced on 25 devices, then it would take around 10 minutes to create the copy for an Oracle Database sourced on 50 devices.

EMC recommends protecting databases with AppSync that reside on a maximum of 128 storage devices.
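Given this linear relationship, copy-creation time can be estimated with simple arithmetic. The 25-device/5-minute baseline comes from the example above; the function itself is an illustrative sketch, not an EMC-provided formula.

```python
# Back-of-the-envelope estimate: copy-creation time scales roughly linearly
# with the number of source storage devices. Baseline taken from the example
# above (25 devices -> ~5 minutes); illustrative only.

def estimate_copy_minutes(num_devices, baseline_devices=25, baseline_minutes=5.0):
    """Linear extrapolation of copy-creation time from a measured baseline."""
    return baseline_minutes * num_devices / baseline_devices

fifty_devices = estimate_copy_minutes(50)    # ~10 minutes, matching the example
at_maximum = estimate_copy_minutes(128)      # ~25.6 minutes at the 128-device maximum
```

At the recommended 128-device maximum, the same baseline extrapolates to roughly 25 minutes per copy.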

VNX Array Storage Scalability Factors

For crash consistent copies of an Oracle Database on VNX array storage, the number of LUNs that the database files can reside on is bound by the maximum number of LUNs allowed within a VNX Consistency Group. All the database files must be contained within the same VNX Consistency Group. For app consistent copies of an Oracle Database on VNX, there is no limit on how many LUNs the data files reside on, but the LUNs sourcing the archive logs must all reside within the same VNX Consistency Group.

Note: Oracle databases that are sourced on VNX array storage and that are protected by AppSync, must have all source LUNs contained within VNX Consistency Groups.

VNXe Array Storage Scalability Factors

For crash consistent copies of an Oracle Database on VNXe array storage, the number of LUNs that the database files can reside on is bound by the maximum number of LUNs allowed within a VNXe LUN Group. All the database files must be contained within the same VNXe LUN Group. For app consistent copies of an Oracle Database on VNXe, there is no limit on how many LUNs the data files reside on, but the LUNs sourcing the archive logs must all reside within the same VNXe LUN Group.

Note: Oracle databases that are sourced on VNXe array storage and that are protected by AppSync, must have all source LUNs contained within VNXe LUN Groups.

Unity Array Storage Scalability Factors

For crash consistent copies of an Oracle Database on Unity array storage, the number of LUNs that the database files can reside on is bound by the maximum number of LUNs allowed within a Unity Consistency Group. All the database files must be contained within the same Unity Consistency Group. For app consistent copies of an Oracle Database on Unity, there is no limit on how many LUNs the data files reside on, but the LUNs sourcing the archive logs must all reside within the same Unity Consistency Group.

Note: Oracle databases sourced on Unity array storage protected by AppSync must have all source LUNs contained within Unity Consistency Groups.

SUPPORTED APPSYNC ORACLE DATABASE THROUGHPUTS

AppSync supports the following throughputs for Oracle database protection:

• 128 Oracle Database Source Storage Devices replicated per hour

• 64 Oracle Database Source Storage Devices replicated and replication target devices mounted per hour

The throughput in number of databases can be calculated from the number of source devices of each database. For example, eight Oracle databases could be replicated per hour if each database had around 16 source storage devices. The throughputs stated are guidelines provided by EMC. They may not be achievable in all AppSync Oracle Database protection environments due to the factors listed in “Using the Performance and Scalability Guidelines” on page 6. The number of source storage devices that Oracle databases reside on can impact the protection throughputs. For more information, see “AppSync Oracle Database Protection Key Performance Factor: Source Storage”.
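The worked example above reduces to simple division. The constants below are the stated throughput guidelines; the helper function is illustrative arithmetic, not part of AppSync.

```python
# Converting the device-based throughput guidelines above into a
# databases-per-hour figure. Constants are the stated guidelines;
# the helper function is an illustration only.

REPLICATE_DEVICES_PER_HOUR = 128
REPLICATE_AND_MOUNT_DEVICES_PER_HOUR = 64

def databases_per_hour(devices_per_database, device_throughput):
    """How many databases fit into the hourly device throughput budget."""
    return device_throughput // devices_per_database

replicated = databases_per_hour(16, REPLICATE_DEVICES_PER_HOUR)
mounted = databases_per_hour(16, REPLICATE_AND_MOUNT_DEVICES_PER_HOUR)
```

With 16 source devices per database, this gives eight databases replicated per hour, or four if the replicas are also mounted.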


ORACLE DATABASE PROTECTION SERVICE CYCLE TIME ESTIMATES (VNX SNAPSHOTS)

Table 20 shows the observed service cycle time estimate for a very large-scale Oracle Database protected with the Bronze service plan. For more information about the specifications of the Oracle Database used for testing, see “Hardware Specifications that were used for Oracle Database Testing” on page 20.

Table 20 Protection service cycle time (VNX Snapshots)

| SERVICE CYCLE TYPE | MEASURED DURATION (HH:MM:SS) |
|---|---|
| Replication of database | 0:11:58* |
| Replication and mount of database | 1:05:28* |

*Does not include time for rotation of copies.

APPSYNC SERVER RE-INITIALIZATION TIME AND RESOURCE UTILIZATION ESTIMATES

Table 21 shows the observed AppSync Server startup time estimate when the application is re-initialized after a reboot of the system that is hosting it.

Table 21 Re-initialization time

| ACTION | MEASURED DURATION |
|---|---|
| AppSync Server re-initialization | 55 seconds |

END USER RESPONSE TIMES WITHIN APPSYNC USER INTERFACE

Table 22 lists the estimated end user response times for common actions. These estimates are based on AppSync protecting 20 application objects, each of which has 100 successful copies. The AppSync Server was under a moderate workload while the UI response times were gathered.

Table 22 End User Response times within AppSync UI

| UI TASK | MEASURED DURATION |
|---|---|
| Clicking to see a list of Application Servers, Datacenters, or Servers hosting FileSystems | 3 seconds |
| Clicking to see a list of Databases, Datastores, or FileSystems owned by a parent application instance | 3-4 seconds |
| Clicking to see a list of 100 copies for a specific Database, FileSystem, or Datastore | 3-4 seconds |
| Clicking on a specific copy for Events data | 3-4 seconds |
| Clicking on a Service Plan | 3-4 seconds |
| Clicking on a specific Service Plan Cycle within the Events tab for a service plan | 10 seconds |

BEST PRACTICES

• To avoid nslookup resolution delays when registering in AppSync, provide IP addresses instead of full DNS names for AppSync clients, Application Servers, and Storage Arrays.

• For quick name resolution, provide the IP addresses and host names of AppSync Clients in the AppSync Server hosts file at C:\Windows\System32\drivers\etc\hosts.

• For quick AppSync Server host name resolution, provide the IP address and the AppSync Server name on AppSync Clients in one of the following files:

o For UNIX Servers: /etc/hosts

o For Windows Servers: C:\Windows\System32\drivers\etc\hosts
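A hosts-file entry of the kind described above might look like the following. The IP address and host names are placeholders, not values from this document.

```
# UNIX clients: /etc/hosts
# Windows clients: C:\Windows\System32\drivers\etc\hosts
# Placeholder address and names; substitute your AppSync Server's values.
10.0.0.10    appsync-server.example.com    appsync-server
```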


XtremIO:

• For faster copy operation responses, configure one XMS Server for one XtremIO Cluster. This step reduces the load on the XMS Server.

• Include the XMS Server and AppSync Server as part of one DNS Domain.

• Configure the XMS Server IP address and the XMS Server name in the following AppSync Server file: C:\Windows\System32\drivers\etc\hosts

VMAX:

• Provide a dedicated SMI-S Host Provider per VMAX array for better performance of SMI-S calls to the array.

• Follow the SMI-S Provider Configuration recommendations.

APPSYNC SERVER SETTINGS FOR PERFORMANCE FINE-TUNING

Contact EMC Customer Support to configure AppSync Server settings for performance fine-tuning based on deployment size.

CONCLUSION

EMC AppSync Servers are optimized and proven to achieve a high level of performance for a discrete number of application objects, ensuring that end users can meet the Recovery Point Objectives offered by all the service plans bundled with the product.

REFERENCES

This section lists related documentation.

WHITE PAPERS

• Advanced Protection for Microsoft Exchange 2010 on EMC VNX Storage — EMC VNX and EMC AppSync

• EMC AppSync Transition Guide — Transition from EMC Replication Manager to AppSync

• Enhancing the Performance and Protection of Microsoft SQL Server 2012 — EMC Next-Generation VNX, EMC FAST Suite, EMC AppSync, EMC PowerPath/VE, VMware vSphere

• AppSync 2.2.2 Integration with RecoverPoint and XtremIO

• Implementing FAST VP and Storage Tiering on Oracle Database 11g and EMC SYMMETRIX VMAX

PRODUCT DOCUMENTATION

For additional information, see the following documentation:

• AppSync Installation and Administration Guide

• AppSync User Guide

• AppSync REST API Reference

EMC ONLINE SUPPORT

Go to EMC Online Support at http://support.emc.com