EMC ControlCenter 6.1 Performance and Scalability Guidelines
P/N 300-006-358
Rev A07
January, 2009

This document contains guidelines to help you design and configure EMC ControlCenter for performance and scalability. The guidelines apply to ControlCenter components and products and the networks that connect them.

Note: This document assumes that you are familiar with ControlCenter and have reviewed the EMC ControlCenter 6.1 installation and configuration documentation, especially EMC ControlCenter 6.1 Release Notes, EMC ControlCenter 6.1 Overview and EMC ControlCenter 6.1 Planning and Installation Guide, Volume 1.

Topics include:

◆ Revision History ................................................ 2
◆ Introduction .................................................... 3
◆ Planning guidelines for the infrastructure ...................... 6
◆ Infrastructure installation guidelines ......................... 13
◆ Data Collection Policy guidelines .............................. 17
◆ Upgrade guidelines ............................................. 21
◆ Networking guidelines .......................................... 22
◆ Console guidelines ............................................. 25
◆ Agent guidelines ............................................... 27
◆ StorageScope Guidelines ........................................ 53
◆ Hardware configuration examples ................................ 69



Revision History

Note: If you are using StorageScope functionality, refer to the "StorageScope Guidelines" on page 53 for specific information.

Table 1  Performance and scalability content updates by revision

Revision  What Changed                                          Where Changed
A07       Removed reference to CNT InRange.                     "Configure FCC Agent DCPs to run more
                                                                efficiently." on page 35
          Added Note before table for StorageScope parameters.  Table 3 on page 9
A06       Added StorageScope FLR information.                   Throughout entire document.
A05       Editorial changes.
A04       Editorial changes.
A03       Editorial changes.
A02       Added SPEC information.                               Throughout entire document.
A01       New document for ControlCenter 6.1 release.


Introduction

This document contains guidelines for planning, installing, and configuring EMC ControlCenter® 6.1. The guidelines can help you achieve maximum performance and scalability and therefore realize the full potential of ControlCenter.

Before you begin using the guidelines, read this section to gain a high-level understanding of ControlCenter. Also, read the following publications for details on ControlCenter installation, configuration, administration, and operation:

◆ EMC ControlCenter 6.1 Overview

◆ EMC ControlCenter 6.1 Release Notes

◆ EMC ControlCenter 6.1 Administration Guide

◆ EMC ControlCenter 6.1 Planning and Installation Guide, Volume 1

◆ EMC ControlCenter 6.1 Planning and Installation Guide, Volume 2

◆ EMC ControlCenter 6.1 Upgrade Guide (for upgrade only)

Key ControlCenter concepts

The physical and logical elements managed by ControlCenter are known as managed objects. They include storage arrays, hosts, virtual machines, switches, databases, fabric zones, and so on. One of the major tasks in configuring ControlCenter for optimal performance and scalability is determining the overall size of a ControlCenter configuration, which involves considering the size, number, and types of the managed objects that ControlCenter will manage.

Data collection policies (DCPs) specify the data to be collected by ControlCenter agents and the frequency of collection. Each agent has predefined DCPs and collection policy templates that can be managed through ControlCenter Administration.

When planning and installing ControlCenter, you will be asked to choose between the following configuration types:

◆ Single-host infrastructure configuration — the infrastructure components (ECC Server, Store, Repository and conditionally StorageScope Repository, and Web Server) reside on one host.

◆ Distributed infrastructure configuration — the infrastructure components listed above reside on two or more hosts.

◆ Single-host infrastructure with agents — the infrastructure components and agents reside on one host.


The listed configuration types, and the ControlCenter components that comprise the infrastructure, are described in detail in EMC ControlCenter 6.1 Planning and Installation Guide, Volume 1.

The following interfaces allow users to monitor and manage the ControlCenter environment:

◆ The ControlCenter Console, a Java application through which you can view and manage your ControlCenter environment. It uses a basic set of common functions, such as discovery, to perform monitoring, management, and ControlCenter administration (for authorized users only).

◆ The Web Console, a Web-based interface where you can perform a limited number of ControlCenter functions such as monitoring, reporting, and alert management. Unlike the ControlCenter Console, no installation of the Web Console is necessary—users can access it through a Web browser. The Web Console is best suited to users who monitor ControlCenter remotely, or local users who do not require full ControlCenter Console capabilities.

◆ The StorageScope FLR Console, a browser-based interface that provides policies and reports. Console pages can be customized to include snapshots of information from other areas within the product.

◆ The Performance Manager Console, a browser-based interface that is invoked after data collection is complete. Performance Manager uses the data collections to create performance and configuration displays of components in your EMC ControlCenter environment.


Using the guidelines

Specifically, the performance and scalability guidelines in this document can help you determine:

◆ The optimal ControlCenter configuration type (single-host or distributed infrastructure).

◆ The sizes of storage arrays, switches, hosts, and other managed objects.

◆ When to upgrade a ControlCenter configuration to the next size classification (refer to "Guideline P2" on page 7 for details).

◆ Optimal settings for data collection policies (DCPs).

◆ Where to place ControlCenter components so network latency is minimized.

◆ The minimum number of gatekeepers (GKs) required to manage and monitor a Symmetrix.

When reading the guidelines, keep in mind that the following factors affect ControlCenter performance and scalability:

◆ Type, number, and size of managed objects

◆ Type, number, and frequency of DCPs

◆ System resources available to the agent host and ControlCenter infrastructure components (ECC Server, Repository, Store, StorageScope Repository, StorageScope Server, and Web Server)

◆ Number of active ControlCenter Consoles (and optionally, Web Consoles)

◆ Number of available Stores


Planning guidelines for the infrastructure

Guideline P1  Determine managed object sizes, using Table 2.

Managed object size is determined by the number of managed object resources (not the total capacity).

A Symmetrix logical volume (also known as a Symmetrix® logical device or hypervolume) is a virtual disk drive in a Symmetrix array.

If the size of a managed object exceeds Large (other than an extra-large Symmetrix of up to 64K devices, which is addressed in the Storage Agent guidelines section of this document), contact the EMC Solutions Validation Center ([email protected]).

Table 2  Managed object sizes

Managed Object        Resource                              Small    Medium     Large
Symmetrix             Logical volumes                       1-800    801-2000   2001-8192
                      Front-end mappings                    1-1600   1601-4000  4001-16000
CLARiiON®             Disks                                 1-90     91-120     121-240
                      LUNs                                  1-512    513-1024   1025-2048
Celerra® (a)          Logical devices                       1-512    513-1024   1025-2048
                      Data movers                           2        4          8
Centera               Nodes                                 8        16         32
Switch                Ports                                 1-16     17-64      65-256
Oracle or DB2         Sum of data files and tablespaces     1-100    101-200    201-400
  database
SQL Server or         Sum of databases, data files,         1-100    101-200    201-400
  Sybase                log files, file groups
ESX Server            Virtual machines                      8        16         32
                      Storage array devices                 16       32         64
                      File systems (vmfs3)                  8        16         32
                      SAN mapped LUNs                       8        16         32
                      Redundant paths                       2        2          2

a. All logical devices are mapped to all Data Movers.
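The size bands in Table 2 lend themselves to a simple lookup. The sketch below is a hypothetical helper (not part of ControlCenter) that classifies a Symmetrix array by its logical volume count:

```python
def symmetrix_size(logical_volumes):
    """Classify a Symmetrix array by logical volume count,
    following the Small/Medium/Large bands in Table 2."""
    if logical_volumes <= 0:
        raise ValueError("logical volume count must be positive")
    if logical_volumes <= 800:
        return "Small"
    if logical_volumes <= 2000:
        return "Medium"
    if logical_volumes <= 8192:
        return "Large"
    # Beyond Large: contact the EMC Solutions Validation Center
    return "Extra large"
```

The same band-lookup pattern applies to any resource row in Table 2 by substituting that row's thresholds.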


Guideline P2  Determine the configuration size.

Using Table 3 on page 9 and Table 4 on page 11, determine the configuration size. The numbers in the tables are based on medium-size managed objects and five-hour data collection intervals. Refer to "Hardware configuration examples" on page 69 for additional information on the specifications of the hosts used in the lab.

Note: The performance and scalability of the ControlCenter infrastructure, StorageScope, the Console, and the agents have been measured on two distinct hardware types: single-core and multi-core. A single-core host (one core per processor) should have at least two processors, whereas a multi-core host (two or more cores per processor) can have one or more processors (2 x dual-core or 1 x quad-core). Refer to page 69 for the specifications of the single-core and multi-core hosts used in the lab. To help you compare the performance of your hosts to the hosts used in the lab, the SPEC score is used as an index. Throughout this document these two hardware types are referred to as single-core and multi-core. The minimum SPEC CINT2000 Rate baseline score for a single-core host should be at least 26. Multi-core hardware should have a minimum SPEC CINT2000 Rate baseline score of 123 or a SPEC CINT2006 Rate baseline score of 61. For more information on SPEC scores, refer to "Understanding SPEC" on page 70.
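The SPEC minimums above can be expressed as a small check. This is an illustrative sketch (the function name and signature are assumptions, not part of ControlCenter or SPEC tooling):

```python
def meets_spec_minimum(cores_per_processor, cint2000_rate=0.0, cint2006_rate=0.0):
    """Check a host against the minimum SPEC Rate baseline scores quoted
    in this document: single-core hosts need CINT2000 Rate >= 26;
    multi-core hosts need CINT2000 Rate >= 123 or CINT2006 Rate >= 61."""
    if cores_per_processor <= 1:
        return cint2000_rate >= 26
    return cint2000_rate >= 123 or cint2006_rate >= 61
```

For example, a dual-core host with a CINT2006 Rate baseline of 61 qualifies, while one with only a CINT2000 Rate baseline of 100 does not.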

When using the sizing tables, keep the following in mind:

◆ If a host is to be connected to a storage array, use only dedicated server-class hosts listed in the EMC 6.1 Support Matrix on Powerlink™. Otherwise, select a server-class host that is comparable to those listed in Table 43 on page 69.

◆ Disk space is the amount required for installing both ControlCenter and the operating system on the target host. If you are upgrading, the amount of disk space required is greater. Refer to the EMC ControlCenter Upgrade Guide for information.

◆ The ControlCenter Components column shows the installed components—although not listed, it is assumed that the Master Agent and Host Agent are also installed.

◆ The Storage Arrays column lists the total number of medium-sized storage arrays of any type (EMC, HDS, HP, Sun, IBM, etc.).

◆ The numbers of managed objects published in a table row are the totals supported. For example, in Table 3 on page 9, the last row lists 200 arrays, 2,500 hosts, and so on. ControlCenter supports a range of up to 4,400 for a combination of hosts, arrays, ESX servers, databases, and switches together.


◆ Various tables in this document (for example, Table 3 on page 9) provide minimum or recommended specifications of hosts for installing ControlCenter infrastructure components or agents.

Note: In Table 3, a medium installation size means up to a medium size and includes the small installation size.

The number of hosts that a ControlCenter instance can manage is now defined by the number of host device logical paths or the number of hosts, whichever limit is reached first.

Host Device Logical Paths - The total number of logical paths to a storage device accessible from a host. Refer to Figure 1, “Host Device Logical Paths,” for an example.

Figure 1  Host Device Logical Paths

[Figure: a host with multiple HBA ports connected through a switch (ports 1-4, zones Z1-Z4) to Symmetrix front-end ports FA-2C0 and FA-16C0 and device 020.]

The number of host device logical paths equals the number of zones created. In this example, Host Device Logical Paths = 4:

◆ HBA0 P0 – port1 – port3 – FA2C0

◆ HBA0 P0 – port1 – port4 – FA16C0

◆ HBA1 P0 – port2 – port3 – FA2C0

◆ HBA1 P0 – port2 – port4 – FA16C0
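Because capacity is bounded by either the path total or the host count, a planned host population can be checked against a sizing row with a short calculation. The helper below is an illustrative sketch, not a ControlCenter utility:

```python
def within_sizing_limits(num_hosts, paths_per_host, max_paths, max_hosts):
    """Check a planned host population against a sizing row: the
    configuration must stay within both the total host device logical
    path limit and the host count limit, whichever is reached first."""
    total_paths = num_hosts * paths_per_host
    return total_paths <= max_paths and num_hosts <= max_hosts
```

For example, against the multi-core single-host Medium row of Table 3 (179,200 paths, 1,400 hosts), 1,000 hosts with 200 paths each exceed the path limit even though the host count is within bounds.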


Note: StorageScope may be deployed on ControlCenter infrastructure hosts provided some conditions are met. Refer to "Determine the appropriate placement of StorageScope." on page 55 and "Allocate disk space for the StorageScope host." on page 56 for specific recommendations on component placement and disk space requirements.

Tables 3 and 4 support StorageScope. For specific StorageScope information, refer to "StorageScope Guidelines" on page 53.

Table 3  Infrastructure sizing: typical configurations

Configuration #1: Single-core infrastructure host (minimum SPEC CINT2000 Rate baseline = 26)

◆ Medium, Single-Host: one host with 2 GB memory and 72 GB disk running ECC Server, Store, and Repository. Supports 45 storage arrays; 70,400 host device logical paths (550 hosts); 25 ESXs; 280 Oracle DBs; 28 switches (1,792 ports).

◆ Large (1 Store), Distributed: ECC Server and Repository host with 2 GB memory and 72 GB disk; Store 1 host with 2 GB memory and 36 GB disk. Supports 60 storage arrays; 87,040 host device logical paths (680 hosts); 35 ESXs; 380 Oracle DBs; 44 switches (2,816 ports).

◆ Large (2 Stores), Distributed: ECC Server and Repository host with 2 GB memory and 72 GB disk; Store 1 and Store 2 hosts with 2 GB memory and 36 GB disk each. Supports 120 storage arrays; 179,200 host device logical paths (1,400 hosts); 50 ESXs; 750 Oracle DBs; 88 switches (5,632 ports).

◆ Large (3 Stores), Distributed: ECC Server and Repository host with 2 GB memory and 72 GB disk; Store 1, Store 2, and Store 3 hosts with 2 GB memory and 36 GB disk each. Supports 175 storage arrays; 262,400 host device logical paths (2,050 hosts); 75 ESXs; 1,200 Oracle DBs; 132 switches (8,448 ports).

Configuration #2: Multi-core infrastructure host (minimum SPEC CINT2000 Rate baseline = 123, or SPEC CINT2006 Rate baseline = 61)

◆ Medium, Single-Host: one host with 2 GB memory and 72 GB disk running ECC Server, Store, and Repository. Supports 120 storage arrays; 179,200 host device logical paths (1,400 hosts); 50 ESXs; 750 Oracle DBs; 88 switches (5,632 ports).

◆ Large (2 Stores), Distributed: ECC Server, Repository, and Store 1 host with 2 GB memory and 72 GB disk; Store 2 host with 2 GB memory and 72 GB disk. Supports 200 storage arrays; 320,000 host device logical paths (2,500 hosts); 100 ESXs; 1,500 Oracle DBs; 156 switches (9,984 ports).



Table 4  Infrastructure sizing: infrastructure with agent configuration

Configuration #1: Single-core host with infrastructure and agents (minimum SPEC CINT2000 Rate baseline = 26)

◆ Small, Single-Host: 2 GB memory, 36 GB disk. Components: ECC Server, Store, Repository, Performance Manager, and any three of the following: WLA Archiver, FCC Agent, VMWare Agent, Common Mapping Agent, Oracle Agent. Supports 30 storage arrays; 42,240 host device logical paths (330 hosts); 25 ESXs; 180 Oracle DBs; 24 switches (1,536 ports).

◆ Small, Single-Host: 3 GB memory, 36 GB disk. Components: ECC Server, Store, Repository, Performance Manager, ControlCenter Console, and any five of the following: Storage Agent for Symmetrix, Symmetrix SDM Agent, Storage Agent for CLARiiON, WLA Archiver, FCC Agent, VMWare Agent, Common Mapping Agent, Oracle Agent.

Configuration #2: Multi-core host with infrastructure and agents (minimum SPEC CINT2000 Rate baseline = 123, or SPEC CINT2006 Rate baseline = 61)

◆ Small, Single-Host: 3 GB memory, 72 GB disk. Components: ECC Server, Store, Repository, Performance Manager, ControlCenter Console, and any seven of the following: Storage Agent for Symmetrix, Symmetrix SDM Agent, Storage Agent for CLARiiON, WLA Archiver, FCC Agent, VMWare Agent, Common Mapping Agent, Oracle Agent, Storage Agent for NAS, Storage Agent for Centera, Storage Agent for HDS. Supports 60 storage arrays; 87,040 host device logical paths (680 hosts); 35 ESXs; 380 Oracle DBs; 44 switches (2,816 ports).

When deploying StorageScope on the Configuration #1 (2 GB RAM) host or the Configuration #2 (3 GB RAM) host, provide an additional 1 GB of RAM on the infrastructure host for the new StorageScope Repository and other related components. Refer to "StorageScope Guidelines" on page 64 for additional information. Also ensure a minimum of 10 GB of free space on the drive where ControlCenter is installed, for use by the StorageScope Repository.

Note: Refer to "Agent guidelines" on page 27 for restrictions on agent configuration.

Note: When deploying Storage Agent for Centera on an infrastructure host like those shown in "Configuration #1: Single-core host with infrastructure and agents" or "Configuration #2: Multi-core host with infrastructure and agents" in Table 4, do not include it in the count of agents such as "Any three of the following:", "Any five of the following:", or "Any seven of the following:". Storage Agent for Centera has a much smaller footprint than any of the other agents listed.

Infrastructure installation guidelines

This section contains guidelines for planning, installing, and maintaining the ControlCenter infrastructure.

Guideline I1  Install the Web Server only when required.

The Web Console is an optional Web-based interface through which you can perform a limited number of ControlCenter functions, such as monitoring, reporting, and alert management. The Web Console is ideally suited to remote ControlCenter users and to local ControlCenter users who require only monitoring functions.

The Web Server, an infrastructure component that provides server-side communications for the Web Console, requires additional resources on the infrastructure host where it resides—therefore it should be installed only when you plan to use the Web Console interface. If users do not require Web Console access, skip the Web Server during ControlCenter infrastructure installation (you can always install the Web Server later if desired).

Guideline I2  Install the Web Server for optimal performance and scalability.

Using Tables 2, 3, and 4 (on page 6, page 9, and page 11), determine the configuration type that is most suitable for your storage environment.

The following are the recommended options for installing the Web Server (“new” installation or upgrade).

◆ Add 1 GB of RAM to the infrastructure host to allow for Web Server and ECCAPI components.

◆ Uninstall Console and install Web Server component.

◆ Install EMC ControlCenter Web Server and ECCAPI components on a separate host.

If you decide to install Web Server on a dedicated host, depending on the configuration, you can deploy some ControlCenter agents on that same host, as described in Table 5.


Table 5  Dedicated Web Server host configuration

Single-core host (minimum SPEC CINT2000 Rate baseline = 26): 2 GB minimum memory, 8 GB minimum disk space. Software components: Web Server and ECCAPI, plus any two of the following: Workload Analyzer Archiver (a), FCC Agent, Storage Agent for CLARiiON, Common Mapping Agent, Storage Agent for NAS, VMWare Agent, Console, Storage Agent for Centera.

a. See "Guideline A22" on page 51 for additional disk space requirements when installing WLA Archiver.

Note: When deploying Storage Agent for Centera on an infrastructure host like the one shown in Table 5, do not include it in the count of agents such as "Any three of the following:", "Any five of the following:", or "Any seven of the following:". Storage Agent for Centera has a much smaller footprint than any of the other agents listed.

Guideline I3  When a Store is overloaded or the configuration size increases to the next classification, install another Store or redistribute DCPs.

A Store is considered overloaded when the alert with the display name Store Load Alert appears in the Alerts View and does not clear for about an hour. The alert triggers when the workload of a Store exceeds a threshold value, indicating that you might need to add another Store. A Store requires a minimum of 600 MB of disk space. If no other ControlCenter components (other than a Store) are to reside on a disk drive, you can install the Store on a smaller drive (for example, 9 GB).

To determine the cause of the Store being overloaded:

◆ Investigate whether DCPs are properly distributed. If they are scheduled to start concurrently, reschedule them so they are staggered. Refer to "Guideline D1" on page 17 and use the DCP Schedule Utility on Powerlink for help with tuning your DCP schedules.


Note: Be aware that DCPs run based on the system time of the agent host. The agent host system time and the Store system time may be different (for example, in different time zones).

Note: WLA DCP frequency does not impact Store processing. Transactions generated by WLA DCPs are either saved on the local disk of the agent host running WLA policies or sent to WLA Archiver bypassing Store.

◆ Determine if managed objects were added to ControlCenter over time without being redistributed or without DCPs being reconfigured.

After determining which DCPs caused the overload, address the overload conditions by doing one or all of the following:

◆ Decrease the frequency of the DCPs (for example, change the frequency from one minute to five minutes) so that bottlenecks are not created.

◆ Move from a single-host configuration to a distributed configuration by uninstalling the Store and installing it on a new host, or by adding more Stores on new hosts. Migration from a single-host to a distributed configuration on dual-core hosts does not require uninstalling the Store on the Repository host; the faster CPU and additional memory of the dual-core host configuration allow the Store to be collocated with the Repository in a distributed configuration.

Guideline I4  Follow these recommendations when providing high availability for infrastructure hosts.

When allocating disk space on infrastructure hosts, consider these recommendations:

◆ For high availability, use a mirrored (RAID 1) disk (keep in mind that this doubles the number of required physical hard drives).

◆ For system performance, a RAID controller is the minimum recommendation for disk mirroring. Other options for increased system performance include using disk arrays such as Symmetrix (metavolume) or CLARiiON (RAID-10 LUN).

◆ Host-based mirroring is strongly discouraged.

◆ Do not use RAID 5 when installing the Repository. ControlCenter is considered an OLTP application, and RAID 5 may degrade performance.


Guideline I5  Follow these recommendations to ensure optimal host operating system performance.

Windows provides tools and programs designed to keep your computer safe, manage your system, and perform regularly scheduled maintenance that keeps it running at optimal performance. Recommended maintenance tasks include:

◆ Clean up temporary disk space.

◆ Shut down all ControlCenter components on the host and run a disk defragmentation utility on the drive.

◆ Using a Microsoft performance tuning guide, shut down any unnecessary services, leaving the required ControlCenter services running.

Data Collection Policy guidelines

Guideline D1  Distribute daily data collection (polling) policies evenly.

By default, all agents with the same system time perform daily discovery data collection concurrently. Schedule multiple DCPs in one-hour, off-peak time slots to avoid resource contention. Use Table 6 to determine the maximum number of hosts to rediscover. Remember that agent polling cycles are based on the system time of the agent host; when creating DCPs, consider the agent host's time zone.
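The staggering described above can be sketched as a round-robin assignment of hosts to one-hour slots. This is an illustrative helper; the start hour and slot count are example choices, not ControlCenter defaults:

```python
def stagger_daily_dcps(agent_hosts, first_hour=22, num_slots=4):
    """Assign each agent host's daily discovery DCP to a one-hour,
    off-peak slot in round-robin order (Guideline D1), returning a
    mapping of host name to start hour (0-23, agent-host local time)."""
    return {host: (first_hour + i % num_slots) % 24
            for i, host in enumerate(agent_hosts)}
```

Spreading starts this way avoids the default behavior in which all agents with the same system time begin collection at once.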

Guideline D2  Schedule daily data collection (polling) policies.

Customers with more than two Stores (a distributed configuration) should ensure that the daily Discovery policy is evenly distributed, to avoid overloading the ControlCenter Repository.

Table 6  Guidelines for rediscovery of hosts

Values are the number of host device logical paths, with the number of hosts per hour in parentheses.

Single-core processor host (minimum SPEC CINT2000 Rate baseline = 26):

◆ Medium, Single-Host: 179,200 (40)

◆ Large (1 Store), Distributed: 21,760 (170)

◆ Large (2 Stores), Distributed: 44,800 (350)

◆ Large (3 Stores), Distributed: 65,280 (510)

Multi-core processor host (minimum SPEC CINT2000 Rate baseline = 123, or SPEC CINT2006 Rate baseline = 61):

◆ Medium, Single-Host: 44,800 (350)

◆ Large (2 Stores), Distributed: 80,000 (625)

The following are best practices:


◆ Avoid overlapping DCPs with critical application tasks running on the agent hosts. For example, avoid scheduling an agent host DCP execution at midnight if the agent host has to run a mission-critical backup task at that time.

◆ Recommendations for File Level Collection (FLR) DCPs are discussed in "Data Collection Policy guidelines" on page 59.

◆ If possible, avoid overlapping DCP execution with ControlCenter maintenance tasks such as those listed in Table 7 on page 18.
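The per-hour limits in Table 6 translate directly into a duration estimate for a full rediscovery. A minimal sketch (an illustrative helper, not a ControlCenter tool):

```python
import math

def rediscovery_hours(num_hosts, hosts_per_hour):
    """Hours needed to rediscover a host population without exceeding
    a Table 6 per-hour guideline (for example, 350 hosts per hour for
    a medium multi-core single-host configuration)."""
    if hosts_per_hour <= 0:
        raise ValueError("hosts_per_hour must be positive")
    return math.ceil(num_hosts / hosts_per_hour)
```

For instance, rediscovering 1,400 hosts at 350 hosts per hour takes four one-hour slots, which can then be scheduled into off-peak windows per Guideline D1.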


Guideline D3  Disable the WLA Revolving data collection policy to conserve resources on the agent host.

The performance statistics collected by the WLA Daily, Revolving, and Analyst DCPs are identical, but the policies operate at different frequencies. Typically, the WLA Revolving DCP collects statistics for the last two hours and saves them on the local disk of the agent host. Data collected by the WLA Revolving DCP is used during the day, whereas WLA Daily data is for short- and long-term trending.

Note: The WLA Daily, Revolving, and Analyst policies are disabled by default.

Beginning with ControlCenter 5.2 Service Pack 3, data collected by the WLA Daily DCP is processed every hour by the WLA Archiver agent, instead of waiting until midnight before it begins to process WLA Daily data for the previous day. Automation reports will continue to be produced starting at midnight on a daily basis. This change provides several benefits, including but not limited to:

◆ Performance statistics data gathered by the WLA Daily DCP is available for immediate use.

Table 7  ControlCenter maintenance task scheduling

Task Name                                              When Scheduled
Database Export                                        10 p.m. daily
Hot Backup of the Repository                           2 a.m. daily
Computation of the free tablespaces in the Repository  9 p.m. daily
Database backup to external media                      Defined by the customer
Analyze tables                                         11 p.m. daily


◆ The WLA Revolving DCP may no longer be needed, since WLA Daily data can be used instead. Disabling the WLA Revolving policy helps conserve system resources on agent hosts running agents such as the Storage Agent for CLARiiON and the FCC Agent.


Guideline D4  Schedule the WLA Analyst DCP for short durations only.

The WLA Analyst policy, designed to collect granular performance data for troubleshooting a performance problem on one or a few managed objects at a time, should run only for time periods that sufficiently capture the events necessary to identify problems. It is not intended to run routinely for all managed objects.

In order to achieve the desired level of data granularity, the data collection frequency may be set at a higher rate than other DCPs. This high data collection frequency can have a potential performance impact on the target managed object, as well as the ControlCenter component hosts for the collecting agent and the WLA Archiver. The degree of impact depends on the type and size of the managed object and many other factors.

Only experienced users, who understand the effects of data collection frequency on the performance of a particular managed object, should enable this policy for use. Furthermore, it is expected that this DCP will be used when a performance issue has been identified using other tools or DCPs, and further detailed performance data is required to characterize the problem or identify its root cause. The WLA Daily DCP should be used routinely to collect round-the-clock performance data for managed objects.

When running concurrent WLA Analyst DCPs, each collecting data from a different managed object through the same agent, restrict the concurrent policies to two managed objects per agent for 3-6 hours, or only as long as required to capture the necessary data, whichever is shorter, to minimize the performance impact on the affected managed objects.

Scenario: Symmetrix #37 experiences an I/O rate spike between 2:00 and 3:30 p.m. This Symmetrix is managed by a Storage Agent for Symmetrix installed on host #46. Create one WLA Analyst policy with a 2-minute interval for this Symmetrix. Apply a custom schedule to this policy for the period 1:00 p.m. to 4:00 p.m. At 4:00 p.m., the policy closes automatically and the data becomes available in Performance Manager for analysis.
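As a quick sanity check on a schedule like this, the sample count and spike coverage can be computed up front. The helper below is a hypothetical sketch, not part of ControlCenter:

```python
from datetime import datetime, timedelta

def analyst_samples(start, end, interval_min):
    """Number of data points a WLA Analyst policy collects in its window."""
    return int((end - start) / timedelta(minutes=interval_min))

# Illustrative window matching the scenario: 1:00 p.m.-4:00 p.m., 2-minute interval
day = datetime(2009, 1, 15)
start = day.replace(hour=13)
end = day.replace(hour=16)
samples = analyst_samples(start, end, 2)  # 90 samples over 3 hours

# Confirm the window brackets the 2:00-3:30 p.m. spike
spike_covered = (start <= day.replace(hour=14)
                 and end >= day.replace(hour=15, minute=30))
```

Ninety 2-minute samples comfortably bracket a 90-minute spike without running the high-frequency policy all day.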


Upgrade guidelines

This section contains guidelines for upgrading to ControlCenter 6.1 from a previous version.

Guideline U1 Upgrade agents in batches.

When upgrading multiple agents, ControlCenter begins discovery of managed objects assigned to agents that are already deployed. To avoid system-level performance problems during that time, restrict concurrent agent upgrade sessions as recommended in Table 8. If an agent host is selected for upgrade, all agents on that host are upgraded automatically, so limiting the number of agent hosts upgraded concurrently also limits the number of concurrent agent upgrades. Wait at least one hour between agent upgrade sessions so the Infrastructure can finish discovering managed objects assigned to upgraded agents. Do not run multiple agent upgrade sessions at the same time; doing so can slow ControlCenter Console response time and the refreshes resulting from configuration commands (SDR, Meta Device Configuration, etc.).

Note: Upgrade only one Storage Agent for Symmetrix at a time when the agent manages a Symmetrix with 32K devices or more. Up to two Storage Agents for Symmetrix that manage Symmetrix arrays with fewer than 32K devices can be upgraded at a time.

Table 8 Limitations of Concurrent Agent Upgrades

Managed Object Type (Agents)                      Medium   Large
Host (Host or Common Mapping Agent)               200
Storage Array (Symmetrix, CLARiiON, HDS, etc.)    20       10
Database (Oracle or Common Mapping Agent)         20       5
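The batching rule above can be sketched as a small helper. The host names and the limit of 20 (the Table 8 value for medium storage arrays) are illustrative assumptions, not part of ControlCenter:

```python
def upgrade_batches(agent_hosts, limit):
    """Yield batches of agent hosts no larger than the Table 8 limit,
    so each upgrade session stays within the concurrency guideline."""
    for i in range(0, len(agent_hosts), limit):
        yield agent_hosts[i:i + limit]

# Hypothetical example: 45 medium storage-array agent hosts, limit of 20
hosts = [f"array-host-{n}" for n in range(45)]
batches = list(upgrade_batches(hosts, 20))
# Three sessions (20, 20, 5); wait at least one hour between sessions.
```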


Networking guidelines

This section provides guidelines related to ControlCenter components in a distributed network.

Guideline N1 Minimize the effect of network latency on ControlCenter.

As network latency increases, so does execution time. In situations where latency is high, some ControlCenter operations may time out, especially real-time actions. Do the following to reduce the effect of latency on a ControlCenter configuration:

◆ Ensure that latency between infrastructure components (ECC Server, Store, Repository, StorageScope Repository, StorageScope, WLA Archiver, and Web Server) is less than 8 ms (milliseconds). Try to keep infrastructure components on the same LAN segment.

◆ Ensure that latency between ControlCenter agents and the infrastructure does not exceed 200 ms. The minimum bandwidth is 386 Kbps with 50 percent utilization.

Note: Because data traffic between agents and the infrastructure is far less than between infrastructure components or between ControlCenter Console interfaces and the infrastructure, agents can tolerate a higher latency.

◆ Limit the network latency between the SMI provider and the FCC Agent to 100 ms. Network latency between the SMI provider and connectivity devices should not exceed 150 ms.

◆ Ensure that latency between the Web Console and the infrastructure does not exceed 50 ms.

◆ Ensure that latency between the ControlCenter Console and the infrastructure does not exceed 15 ms. Optimally, latency should be at 8 ms or less. If network latency exceeds 8 ms:

• Consider using the Web Console instead of the ControlCenter Console. Keep in mind that the Web Console does not have all the capabilities of the ControlCenter Console (refer to ”Key ControlCenter concepts” on page 3 for details).

• If you require the extended capabilities of the ControlCenter Console, use Citrix MetaFrame XP for Windows or Microsoft Terminal Services to access the ControlCenter Console.


Note: Do not access the ControlCenter Console through Citrix or Terminal Services running on an infrastructure host.

Using Citrix MetaFrame XP allows you to:

– Access the ControlCenter Console over a high-latency, low-bandwidth network.

– Access the ControlCenter Console through a Network Address Translation (NAT) firewall—for example, on a virtual private network (VPN) over cable or DSL. Note that this only applies when the ControlCenter Console is accessed through Citrix MetaFrame XP.

– Operate with an additional layer of security.
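A minimal sketch of checking measured round-trip latencies against the ceilings in this guideline; the link names and measured values are hypothetical placeholders:

```python
# Guideline N1 latency ceilings in milliseconds (from the text above)
LATENCY_LIMITS_MS = {
    "infrastructure": 8,    # between infrastructure components
    "agent": 200,           # agent to infrastructure
    "smi_provider": 100,    # SMI provider to FCC Agent
    "web_console": 50,      # Web Console to infrastructure
    "console": 15,          # ControlCenter Console to infrastructure
}

def violations(measured_ms):
    """Return the links whose measured latency exceeds the guideline."""
    return [link for link, ms in measured_ms.items()
            if ms > LATENCY_LIMITS_MS[link]]

# Hypothetical measurements (for example, from ping averages)
measured = {"infrastructure": 5, "agent": 230, "console": 12}
print(violations(measured))  # ['agent']
```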

Guideline N2 Consider the impact of network latency on various DCPs.

High latency is generally associated with low bandwidth. ControlCenter agents are very efficient at data transfer. However, when deploying many ControlCenter agents in a remote location, consider the following:

◆ A host agent managing a medium or large host generates a burst of data nightly when Discovery or FLR DCPs are executed.

◆ By default, Discovery DCPs are scheduled to occur at the same time each day. This results in all agents transmitting data concurrently and possibly overloading the network segment/connection.

◆ Disable WLA DCPs if the agent-to-WLA Archiver network latency is more than 75 ms.

◆ Concurrent execution of Discovery DCPs by many remote host agents, over a low-bandwidth network with latency under 200 ms, can cause latency to increase to over 200 ms.

◆ As previously recommended, avoid latency above 200 ms. High latencies may cause agent transactions to the Store to be incomplete and rejected, resulting in unpredictable behavior.

◆ The ECC Server dynamically assigns agents to Stores, taking Store load into account. There is no benefit in adding a Store in a remote location with the intention of having all the remote agents in that location assigned to that remote Store.


Table 9 on page 24 lists some DCPs and the resulting data volume that the agent sends to either Store or WLA Archiver. Using Table 9 on page 24, consider adjusting the DCPs of the remote agents so that they are staggered. In this way, the agents do not execute concurrently and create a bottleneck in the network.

Table 9 Data Volumes sent to the Store or WLA Archiver

Data Collection Policy                                             Approximate Data Volume
Full configuration load of a medium Symmetrix array                1 MB
Rediscovery of a medium Solaris host                               250 KB
Topology validation of a medium switch                             40 KB
WLA Daily policy data for a medium Symmetrix array                 600 KB
WLA Daily DCP data for a host with 128 host device logical paths   15 KB
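One way to stagger remote agents' Discovery DCP start times is sketched below; the 30-minute step and host names are illustrative assumptions, not ControlCenter defaults:

```python
def stagger_schedule(agents, base_hour=0, step_min=30):
    """Offset each remote agent's Discovery DCP start time so the nightly
    data bursts do not hit the shared network segment at the same moment."""
    schedule = {}
    for i, agent in enumerate(agents):
        minutes = (base_hour * 60 + i * step_min) % (24 * 60)
        schedule[agent] = f"{minutes // 60:02d}:{minutes % 60:02d}"
    return schedule

# Hypothetical remote agents sharing one low-bandwidth link
plan = stagger_schedule(["host-a", "host-b", "host-c"])
print(plan)  # {'host-a': '00:00', 'host-b': '00:30', 'host-c': '01:00'}
```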


Console guidelines

These guidelines apply to installing and configuring the ControlCenter Console, as well as using the ControlCenter Console and Web Console simultaneously.

Guideline C1 When installing a stand-alone ControlCenter Console, ensure that the client host meets minimum specifications.

Table 10 shows the minimum hardware requirements for stand-alone ControlCenter Console hosts (a stand-alone ControlCenter Console host does not contain any ControlCenter infrastructure components). The requirements in Table 10 are for one instance of the ControlCenter Console. Refer to the EMC ControlCenter 6.1 Support Matrix on Powerlink for supported operating system versions.

Table 10 Host requirements for stand-alone consoles with logging

Operating System   # of Processors   Minimum Speed   Minimum Memory   Minimum Disk Space
Windows            1                 500 MHz         512 MB           550 MB (a)
Solaris            1                 360 MHz         512 MB           550 MB (a)

a. 250 MB if logging is turned off.

Guideline C2 When running the Web Console, ensure that the client host meets minimum specifications.

Table 11 lists the minimum hardware requirements for ControlCenter Web Console hosts.

Table 11 Host requirements for Web Consoles

Operating System             # of Processors   Minimum Speed   Minimum Memory
Windows                      1                 500 MHz         128 MB
AIX, HP-UX, Linux, Solaris   1                 360 MHz         128 MB


Guideline C3 Limit the combined number of active ControlCenter Consoles and Web Consoles to ten during peak ControlCenter use.

Limiting the combined number of active ControlCenter Consoles and Web Consoles to ten helps maintain performance. ControlCenter Consoles and Web Consoles are considered active if they are running user commands, including views that update real-time information.

An active Web Console session is one that is processing a user request. Additional Web Console sessions can be open as long as they are not simultaneously processing user requests.

Contact the Solutions Validation Center ([email protected]) for assistance.

Guideline C4 Use terminal emulation or the Web Console when latency between the ControlCenter Console and infrastructure exceeds 8 ms.

When latency between the ControlCenter Console and infrastructure components exceeds 8 ms, consider using terminal emulation software to access the ControlCenter Console, or switch to the Web Console if limited functionality is acceptable. Refer to "Guideline N1" on page 22 for details.


Agent guidelines

This section contains guidelines for deploying and managing ControlCenter agents.

Note: The FCC and NAS agents cannot coexist on the same host because of a port conflict. Refer to the EMC ControlCenter Planning and Installation Guide, Volume 1 for more information on port assignments.

Guideline A1 When deploying agents, limit the initial discovery of managed objects.

ControlCenter agents consume more resources during initial discovery than during rediscovery of the same managed objects. Therefore, restrict concurrent initial discovery of managed objects per hour per Store as recommended in Table 12. Do this by limiting the number of concurrent agent installations.

Do not run multiple agent deployment sessions through the ControlCenter Console Agent Installation Wizard.

When deploying agents using tasks in the Agent Installation Wizard, ensure that you wait at least one hour between large tasks (those that involve more than 20 agents). This is because after the wizard reports successful completion of each task, the Infrastructure is still busy processing recently discovered managed objects. Starting consecutive installation processes may overload the Infrastructure.

ControlCenter Console response time may be degraded during the agent deployment process because the Infrastructure is busy processing recently discovered managed objects. Do not overlap the agent installation process with configuration management commands (SDR, Meta Device Configuration, etc.); the ControlCenter Console might not refresh in this situation.

Table 12 Limitations of initial discovery

Managed Object Type   Medium Managed Objects per Store per Hour   Large Managed Objects per Store per Hour
Host                  100
Symmetrix             10                                          5


Guideline A2 Follow these guidelines when deploying agents on stand-alone Store hosts.

Note: Stand-alone Store hosts are hosts installed only with Store components in a distributed configuration.

◆ Master and Host agents can be deployed unconditionally on stand-alone hosts.

◆ Deploy any two of the agents listed in Table 13.

◆ Deploy only one of the agents listed in Table 13 if you plan to deploy the Console on the stand-alone host.

◆ 3 GB of RAM is required if Storage Agent for Symmetrix and Symmetrix SDM Agent are installed on a Multi-core processor host. Otherwise, the minimum requirement is 2 GB of RAM.

◆ 2.5 GB of RAM is required if Storage Agent for Symmetrix and Symmetrix SDM Agent are installed on a Single-core processor host. Otherwise, the minimum requirement is 2 GB of RAM.

◆ Avoid executing Symmetrix configuration change commands during nightly discovery processing period (typically, midnight through 5 a.m.).

Table 13 Guidelines for agents and various host configurations

Agent Type (pick any 2)                               Single-core or multi-core processor host
Storage Agent for Symmetrix and Symmetrix SDM Agent   Refer to "Guideline A10" on page 38.
Fibre Channel Connectivity (FCC) Agent                Follow "Guideline A9" on page 35.
WLA Archiver                                          Follow "Guideline A22" on page 51.
Storage Agent for CLARiiON                            Follow "Guideline A15" on page 44.
Common Mapping Agent                                  Follow "Guideline A20" on page 50.
Storage Agent for NAS                                 Follow "Guideline A16" on page 46.
VMware Agent                                          Follow "Guideline A24" on page 52.
Oracle                                                Follow "Guideline A23" on page 52.
HDS                                                   Follow "Guideline A17" on page 48.
Storage Agent for Centera                             Follow "Guideline A18" on page 49.


Guideline A3 Follow these guidelines when setting up a dedicated ControlCenter agent host.

Table 14 lists the agents that can be deployed on a dedicated agent host, that is, a host running ControlCenter agents only.

◆ Master and Host agents can be deployed unconditionally on stand-alone hosts.

Guideline A4 On agent hosts, allocate 300 MB additional disk space for each agent that runs WLA DCPs.

The WLA Revolving DCP saves collected data to the local host disk and does not send it to the WLA Archiver unless the user requests it through the Console. The maximum amount of data that this policy can save to disk is 100 MB per policy per managed object.

If ControlCenter agents that run the WLA Daily and WLA Analyst DCPs (for example, Storage Agent for Symmetrix) cannot communicate with the Workload Analyzer Archiver (because of a network failure, for example), those agents save up to 100 MB of WLA data per policy on the agent host. This allows WLA data collection to continue until connectivity to the Workload Analyzer Archiver is restored. To support this function, on hosts where agents that run WLA Daily and WLA Analyst DCPs reside, allocate 200 MB additional disk space for each agent that runs those DCPs.

Table 14 Agent deployment recommendation on a dedicated agent host

Agent Type                                            Single-core or multi-core processor host
Storage Agent for Symmetrix and Symmetrix SDM Agent   Refer to "Guideline A10" on page 38.
Fibre Channel Connectivity (FCC) Agent                Follow "Guideline A9" on page 35.
WLA Archiver                                          Follow "Guideline A22" on page 51.
Storage Agent for CLARiiON                            Follow "Guideline A15" on page 44.
Common Mapping Agent                                  Follow "Guideline A20" on page 50.
Storage Agent for NAS                                 Follow "Guideline A16" on page 46.
VMware Agent                                          Follow "Guideline A24" on page 52.
Oracle                                                Follow "Guideline A23" on page 52.
HDS                                                   Follow "Guideline A17" on page 48.
Storage Agent for Centera                             Follow "Guideline A18" on page 49.

For example, if WLA Daily, WLA Analyst and WLA Revolving DCPs are planned to be enabled for Host Agent for Windows and FCC Agent on a single host, allocate 600 MB (300 MB for each agent) additional disk space on that host.

Note: Concurrent processing of all WLA DCPs on the same host will likely cause data gaps. Refer to ”Guideline D3” on page 18 and ”Guideline D4” on page 20 for more details.

Note: If the amount of locally collected data for WLA DCPs exceeds 100 MB, local data collection will continue (if connectivity to the Workload Analyzer Archiver host is not restored)—however, some of the collected data might be lost due to the 100 MB limit.
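The allocation rule can be expressed as a trivial calculation matching the 600 MB example above; the function name is hypothetical:

```python
WLA_DISK_MB_PER_AGENT = 300  # Guideline A4 allowance per agent running WLA DCPs

def extra_wla_disk_mb(agents_running_wla):
    """Additional local disk (MB) to reserve on a host for WLA spill data."""
    return len(agents_running_wla) * WLA_DISK_MB_PER_AGENT

# The example above: Host Agent for Windows plus the FCC Agent on one host
print(extra_wla_disk_mb(["Host Agent for Windows", "FCC Agent"]))  # 600
```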

Guideline A5 Use agent resource utilization to help schedule DCPs.

The impact of an agent on system resources depends on the agent type, polling frequency, and the number and size of the managed objects the agent manages.

◆ For agents with polling frequencies that are less than one hour, the average Steady State (over 24 hours) CPU resource utilization of the agent should preferably remain under 1 percent when managing one medium-size managed object.

◆ For agents having a daily polling cycle, small spikes of resource utilization occur for a few minutes each day.

Table 15 on page 32 shows the CPU utilization for ControlCenter agent polling with one medium-size managed object. These duration and average CPU usage values were determined by tests conducted on the following agent host configurations.

Note: Consider these differences in hardware configuration when using performance data for agent host comparison.

◆ Windows: 2x3.0 GHz processors with 2 GB of RAM, running Windows 2003 Enterprise Edition, SP1.

◆ Solaris: 2x1.5 GHz processors with 4 GB of RAM, running Solaris 10 (SunFire v240).

EMC ControlCenter 6.1 Performance and Scalability Guidelines

Page 31: ECC 6 1 Performance Scalability

Agent guidelines

◆ HP-UX: 2x1.0 GHz processors with 4 GB of RAM, running HP-UX B11.11 (rp 3440).

◆ AIX: 2x1.89 GHz processors with 4 GB of RAM, running AIX 5.3 (eServer P5 model 510).

Note: These performance measurements reflect maximum expected performance for hosts that are dedicated to agent processing with no other applications running. Your actual performance may vary if agents are run on the same host with other applications or other ControlCenter agents.

Use Table 15 to determine resource requirements for ControlCenter agents. When using the table, keep the following in mind.

◆ The information in the Default Polling Cycles column does not address functionality associated with user commands.

◆ Average CPU Usage By Platform includes Master Agent CPU usage. No real-time commands were processed during testing. Agents detected a very low number of alerts during testing.

◆ Estimated Memory By Platform shows an estimated memory allocation for an agent managing one medium-size managed object. Memory consumption increases proportionally when the size or count of the managed object increases.


Table 15 Resource requirements for ControlCenter agents

Storage Agent for Symmetrix (2,000 devices)
  Default polling cycles: Alert Polling, 2 minutes; CLI Generation, once daily at midnight; Configuration, 10 minutes; Historical Data, once daily at midnight; Local Discovery, once daily at midnight; Performance Statistics, 2 minutes; BCV/RDF Status, 5 minutes; Real-time BCV/RDF Status, 1 minute; WLA Daily, 15 minutes; WLA Revolving, 2 minutes
  Steady-state CPU usage: Windows <1%, Solaris 1.5%, HP-UX 1.7%, AIX 2.7%; with WLA DCPs: Windows <1%, Solaris 1.9%, HP-UX 1.9%, AIX 3.1%
  Estimated memory: Windows 120 MB, Solaris 81 MB, HP-UX 26 MB, AIX 81 MB
  Required disk space: Windows 130 MB, Solaris 105 MB, HP-UX 113 MB, AIX 109 MB

Database Agent for Oracle
  Default polling cycles: Discovery, once daily at 6:00 a.m.; WLA Daily, 15 minutes; WLA Revolving, 15 minutes
  Discovery duration (mm:ss): Windows 01:10, Solaris 01:57, HP-UX 01:10, AIX 01:43
  Discovery CPU usage: Windows 6%, Solaris 54%, HP-UX 8%, AIX 13%
  Steady-state CPU usage: Windows 3.0%, Solaris 1.0%, HP-UX 1.0%, AIX 10%
  Estimated memory: Windows 24 MB, Solaris 45 MB, HP-UX 26 MB, AIX 18 MB
  Required disk space: Windows 97 MB, Solaris 103 MB, HP-UX 104 MB, AIX 107 MB

Fibre Channel Connectivity Agent
  Default polling cycles: Fabric Validation, 1 hour; Device Validation, 1 hour; Performance Stats, 15 minutes; WLA Daily, 15 minutes; WLA Revolving, 15 minutes
  Steady-state CPU usage: Windows <1.0%, Solaris <1.0%
  Estimated memory: Windows 22 MB, Solaris 42.3 MB
  Required disk space: Windows 150 MB, Solaris 220 MB


Host Agent
  Default polling cycles: Discovery, once daily (at midnight, 2:00 a.m., 2:00 a.m., 4:00 a.m., and 4:00 a.m., per discovery type); WLA Daily Collection, 15 minutes; WLA Revolving Collection, 2 minutes; Watermarks for File Systems, 15 minutes; Watermarks for Logical Volumes, 15 minutes; Watermarks for Volume Groups, 15 minutes
  Discovery duration (mm:ss): Windows 01:27, Solaris 00:28, Linux 00:34, HP-UX 00:43, AIX 00:27
  Discovery CPU usage: Windows 14%, Solaris 6%, Linux 6%, HP-UX 7%, AIX 12%
  Steady-state CPU usage: Windows 1%, Solaris 1%, Linux 1.0%, HP-UX <1.0%, AIX 2.0%
  Estimated memory: Windows 12 MB, Solaris 17 MB, Linux 10 MB, HP-UX 22 MB, AIX 23 MB
  Required disk space: Windows 113 MB, Solaris 82 MB, Linux 78 MB, HP-UX 93 MB, AIX 85 MB

Storage Agent for CLARiiON
  Default polling cycles: Discovery (120 disks, 1,024 LUNs), once daily at midnight; WLA Daily (a), 15 minutes; WLA Revolving, 2 minutes
  Discovery duration (mm:ss): Windows 1:30, Solaris 1:10, HP-UX 1:32, AIX 00:55
  Discovery CPU usage: Windows 6.5%, Solaris 10.3%, HP-UX 10.0%, AIX 21.0%
  Steady-state CPU usage: Windows 1.2%, Solaris 2.5%, HP-UX 1.5%, AIX 2.0%
  Estimated memory: Windows 30.5 MB, Solaris 49 MB, HP-UX 66 MB, AIX 24 MB
  Required disk space: Windows 85 MB, Solaris 70 MB, HP-UX 75 MB, AIX 80 MB

Storage Agent for HDS
  Default polling cycles: Discovery (512 devices), once daily at midnight; WLA Daily, 15 minutes; WLA Revolving, 2 minutes
  Discovery duration (mm:ss): Windows 01:10, Solaris 01:15, HP-UX 01:00, AIX 01:05
  Discovery CPU usage: Windows 3.8%, Solaris 6.2%, HP-UX 3.0%, AIX 4.3%
  Steady-state CPU usage: Windows 2.0%, Solaris 1.5%, HP-UX 2.0%, AIX 2.2%
  Estimated memory: Windows 11 MB, Solaris 12 MB, HP-UX 8 MB, AIX 12 MB
  Required disk space: Windows 60 MB, Solaris 60 MB, HP-UX 70 MB, AIX 65 MB


Symmetrix SDM Agent
  Default polling cycles: Re-Discovery (4,000 masking entries), every 12 hours; Masking Configuration (4,000 masking entries), every 6 hours
  Re-Discovery duration (mm:ss): Windows 00:10, Solaris 00:13, HP-UX 00:13, AIX 00:18
  Re-Discovery CPU usage: Windows 2%, Solaris 6%, HP-UX 16%, AIX 14%
  Masking Configuration duration (mm:ss): Windows 00:06, Solaris 00:06, HP-UX 00:07, AIX 00:14
  Masking Configuration CPU usage: Windows 3%, Solaris 6%, HP-UX 2%, AIX 3.2%
  Estimated memory: Windows 31 MB, Solaris 34 MB, HP-UX 20 MB, AIX 32 MB
  Required disk space: Windows 108 MB, Solaris 89 MB, HP-UX 88 MB, AIX 89 MB

Storage Agent for NAS
  Default polling cycles: Celerra Discovery, once daily at midnight; WLA Daily, 15 minutes; WLA Revolving, 10 minutes; NetApp Discovery (1 filer, 12 file systems, 23 devices, 2 volumes), once daily at midnight
  Celerra Discovery: duration Windows 06:14; CPU usage Windows 8.2%; estimated memory Windows 10.5 MB
  WLA Daily steady-state CPU usage: Windows 0.1%
  NetApp Discovery: duration Windows 0:05; CPU usage Windows <1%; estimated memory Windows 8.2 MB
  Required disk space: Windows 80 MB

Storage Agent for Centera™
  Default polling cycles: Discovery (16 nodes), once daily at midnight
  Discovery: duration Windows 00:30; CPU usage Windows 1.0%; estimated memory Windows 15 MB
  Required disk space: Windows 80 MB

Common Mapping Agent
  Host Discovery (proxy), once daily at midnight: duration Windows 01:14, Solaris 01:39; CPU usage Windows 1%, Solaris 4%; estimated memory Windows 22.6 MB, Solaris 23 MB
  Informix Discovery (proxy), once daily at midnight: duration Solaris 01:38; CPU usage Solaris <1%; estimated memory Solaris 15.5 MB
  Sybase Discovery (proxy), once daily at midnight: duration Solaris 04:20, HP-UX 02:51, AIX 04:25; CPU usage Solaris 11.38%, HP-UX 12.50%, AIX 12.2%; estimated memory Solaris 25 MB, HP-UX 20 MB, AIX 50 MB
  SQL Server Discovery (proxy), once daily at midnight: duration Windows 00:45; CPU usage Windows 2%; estimated memory Windows 14 MB
  IBM UDB Discovery (proxy), once daily at midnight: duration AIX 19:22; CPU usage AIX 6.21%; estimated memory AIX 17 MB
  Required disk space: Windows 73 MB, Solaris 95 MB

VMware Agent
  Default polling cycles: Discovery, once daily; CheckVCForServer, 1 hour; Initial Discovery, 5 minutes
  Discovery: duration 1:00; CPU usage Windows 5%
  Steady-state CPU usage: Windows <1%
  Estimated memory: Windows 8.6 MB
  Required disk space: Windows 130 MB

a. To reduce CPU usage by WLA policies, adjust the policy frequency as described in "Guideline A20" on page 50.


FCC Agent Guidelines

Fibre Channel Connectivity (FCC) Agent discovers and monitors connectivity devices and fabrics. This section contains guidelines for deploying and managing FCC Agent.

Guideline A6 Deploy two FCC Agents to considerably improve DCP processing speed.

Two FCC Agents process DCPs faster than one. The agents share the workload associated with processing DCPs and other user-initiated tasks, and in a failover situation either agent can take on the workload of the other.

Guideline A7 Before deploying FCC Agent and Storage Agent for NAS on the same host, update the agent port configuration.

Unless you perform configuration updates, the Fibre Channel Connectivity Agent and Storage Agent for NAS cannot reside on the same host. Refer to Planning for Agents in the EMC ControlCenter 6.1 Planning and Installation Guide, Volume 1 for details.

Guideline A8 Perform single-switch discovery whenever possible.

In cases where the configuration of only one switch has changed, rediscover only that switch instead of the entire fabric. Rediscovery of a single switch typically takes less than 2 minutes.

Guideline A9 Configure FCC Agent DCPs to run more efficiently.

DCP execution time depends on the policy type, availability of the Store, switch size (number of ports, number of zones in zonesets/VSANs, nicknames/aliases, and so on), and network latency between the agent and the switches. In addition, the performance of different DCPs varies across switch types. Considering these factors, Tables 16, 17, and 18 show the recommended DCP collection intervals for Brocade, Cisco, and McDATA switches, and Table 19 shows the DCP settings for a mixed-switch environment.

Note: The WLA Daily DCP interval must be a factor of 60 minutes (for example, 10, 15, 20, or 30 minutes, or 1 hour). Do not set the WLA Daily DCP to 45 minutes or to any interval longer than 1 hour (for example, 2 or 3 hours), because WLA processing occurs at 1-hour intervals and requires at least one data point per interval.

Table 16 Recommended FCC Agent DCP Frequencies for Brocade Switches

Data Collection Policy   Agents   1-512 ports   513-1024   1025-2048   2049-3072   3073-4096   4097-8192
Fabric Validation        1        15 min        20 min     30 min      1 hr        2 hr        3 hr
                         2        15 min        20 min     20 min      45 min      1 hr        2 hr
Device Validation        1        15 min        30 min     1 hr        2 hr        2 hr        4 hr
                         2        15 min        20 min     45 min      1 hr        1 hr        2 hr
WLA Daily                1        15 min        20 min     1 hr        1 hr        Disable     Disable
                         2        15 min        15 min     30 min      1 hr        Disable     Disable
WLA Revolving            1 or 2   Disable
Performance Statistics   1 or 2   Monitor up to 10 switches at a time

Table 17 Recommended FCC Agent DCP Frequencies for Cisco Switches

Data Collection Policy   Agents   1-512 ports   513-1024   1025-2048   2049-3072   3073-4096   4097-8192
Fabric Validation        1        15 min        20 min     30 min      45 min      1 hr        2 hr
                         2        15 min        15 min     20 min      30 min      45 min      1 hr
Device Validation        1        15 min        30 min     45 min      1 hr        2 hr        4 hr
                         2        15 min        20 min     30 min      45 min      1 hr        3 hr
WLA Daily                1        15 min        15 min     15 min      30 min      30 min      1 hr
                         2        15 min        15 min     15 min      30 min      30 min      1 hr
WLA Revolving            1 or 2   Disable
Performance Statistics   1 or 2   Monitor up to 10 switches at a time


Table 18 Recommended FCC Agent DCP Frequencies for McDATA Switches

Data Collection Policy   Agents   1-512 ports   513-1024   1025-2048   2049-3072   3073-4096   4097-8192
Fabric Validation        1        15 min        20 min     30 min      1 hr        2 hr        3 hr
                         2        15 min        15 min     30 min      45 min      1 hr        2 hr
Device Validation        1        15 min        30 min     1 hr        2 hr        3 hr        4 hr
                         2        15 min        20 min     45 min      1 hr        2 hr        3 hr
WLA Daily                1        20 min        30 min     1 hr        Disable     Disable     Disable
                         2        15 min        30 min     1 hr        Disable     Disable     Disable
WLA Revolving            1 or 2   Disable
Performance Statistics   1 or 2   Monitor up to 10 switches at a time

Table 19 Recommended FCC Agent DCP Frequencies for Mixed-Vendor Switches

Data Collection Policy   Agents   1-512 ports   513-1024   1025-2048   2049-3072   3073-4096   4097-8192
Fabric Validation        1        15 min        30 min     45 min      1 hr        2 hr        4 hr
                         2        15 min        20 min     30 min      45 min      1 hr        3 hr
Device Validation        1        15 min        30 min     1 hr        2 hr        3 hr        4 hr
                         2        15 min        20 min     45 min      1 hr        2 hr        3 hr
WLA Daily                1        30 min        1 hr       1 hr        Disable     Disable     Disable
                         2        20 min        30 min     1 hr        Disable     Disable     Disable
WLA Revolving            1 or 2   Disable
Performance Statistics   1 or 2   Monitor up to 10 switches at a time


Storage Agent guidelines

This section contains guidelines for deploying and configuring these agents:

◆ Storage Agent for Symmetrix

◆ Storage Agent for CLARiiON

◆ Storage agents for third-party arrays

◆ Storage Agent for NAS

◆ Storage Agent for Centera

Guideline A10 Follow these guidelines when deploying Storage Agent for Symmetrix/Symmetrix SDM Agent on any dedicated, infrastructure, or production host.

◆ Master and Host agents can be installed unconditionally.

◆ An appropriate number of gatekeepers must be assigned, as described in "Guideline A11" on page 40.

◆ The number of HBAs required to support the configuration depends on the fan-out ratio. If an HBA has a 12:1 fan-out ratio (12 Symmetrix to 1 HBA), 2 HBAs are required on an agent host managing 20 Symmetrix arrays.

◆ If you plan to deploy Symmetrix Management Console on the host installed with Storage Agent for Symmetrix and Symmetrix SDM agent, refer to ”Guideline A11” on page 40 and ”Guideline A14” on page 44.
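The HBA count in the fan-out example follows from a ceiling division; a minimal sketch, assuming the 12:1 ratio from the text:

```python
import math

def hbas_required(symmetrix_count, fan_out=12):
    """HBAs needed on the agent host at a given Symmetrix-per-HBA fan-out."""
    return math.ceil(symmetrix_count / fan_out)

print(hbas_required(20))  # 2, matching the 12:1 example in the guideline
```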


Table 20 Single-core systems: Storage Agent for Symmetrix/Symmetrix SDM Agent deployment

Installed on a dedicated host as described in "Guideline A3" on page 29:
• 1 to 12,000 devices: manage up to 6 Symmetrix arrays.
• 12,001 to 24,000 devices: manage up to 12 Symmetrix arrays.
• 24,001 to 40,000 devices: set the Configuration DCP to a 15-minute interval, the BCV/RDF Status DCP to a 10-minute interval, and the BCV/RDF Real-time Status DCP (a) to a 3-minute interval.
• 40,001 to 64,000 devices: manage only one 64,000-device, two 32,000-device, or three 16,000-device Symmetrix arrays (64,000 devices spread over 20 Symmetrix arrays is not supported on a single-core host). Install on a dedicated agent host; not supported on AIX and HP-UX. Set the Configuration DCP to a 20-minute interval, the BCV/RDF Status DCP to 15 minutes, the BCV/RDF Real-time Status DCP (a) to 4 minutes, the Performance Statistics DCP to 5 minutes, and the WLA Revolving DCP to 10 minutes.

Installed on a stand-alone Store host as described in "Guideline A2" on page 28:
• 1 to 12,000 devices: manage up to 6 Symmetrix arrays.
• 12,001 to 24,000 devices: manage up to 12 Symmetrix arrays.
• 24,001 devices or more: not supported.

Installed on a production host or infrastructure host as described in "Configuration #1: Single-core infrastructure host" on page 9:
• 1 to 12,000 devices: manage up to 6 Symmetrix arrays.
• 12,001 devices or more: not supported.

a. The BCV/RDF Real-time Status DCP interval controls how frequently the state shown in data protection task views is updated (example states: restored, sync in progress, and split). When this DCP is set to a 3-minute interval, the Console refreshes only every 3 minutes. Even though the state value displayed in the Console may be out of sync, ControlCenter checks the state value upon new task submission and prevents inappropriate tasks from executing.


Note: Storage Agent for Symmetrix installed on an HP-UX host may consume 1.2 GB of virtual memory when managing 40,000 devices. The HP-UX operating system limits the amount of virtual memory a process can consume, so kernel tuning may be necessary to allow the agent process to grow to 1.2 GB. Setting maxdsize larger than the memory size of the process, and setting maxssize to 10 MB, may be required. The limit for the login session may also need to be increased to match the maxdsize and maxssize kernel parameters.

Guideline A11 Follow these minimum gatekeeper recommendations.

Storage Agent for Symmetrix/Symmetrix SDM Agent:

◆ Gatekeeper requirements for Storage Agent for Symmetrix/Symmetrix SDM Agent depend on the size of the Symmetrix and the agent functionality in use.

◆ A larger Symmetrix requires a gatekeeper to be held open longer to complete a DCP. The longer a DCP keeps a gatekeeper open, the more likely a different DCP will require an additional gatekeeper.

Table 21  Multi-core systems - Storage Agent for Symmetrix/Symmetrix SDM Agent deployment

Installed on a dedicated host as described in ”Guideline A3” on page 29:

• 1 to 12,000 devices: Manage up to 6 Symmetrix arrays.

• 12,001 to 24,000 devices: Manage up to 12 Symmetrix arrays.

• 24,001 to 40,000 devices: Manage up to 20 Symmetrix arrays.

• 40,001 to 64,000 devices: Manage up to 20 Symmetrix arrays. Support for 64,000 devices is available on the Windows platform only.

Installed on a Stand Alone Store as described in ”Guideline A2” on page 28:

• 1 to 12,000 devices: Manage up to 6 Symmetrix arrays.

• 12,001 to 24,000 devices: Manage up to 12 Symmetrix arrays.

• 24,001 to 40,000 devices: Manage up to 20 Symmetrix arrays.

• 40,001 to 64,000 devices: Not supported.

Installed on a production host or infrastructure host as described in ”Configuration#2: Multi-core infrastructure host” on page 10:

• 1 to 12,000 devices: Manage up to 6 Symmetrix arrays.

• 12,001 to 24,000 devices: Manage up to 12 Symmetrix arrays.

• More than 24,000 devices: Not supported.


◆ Storage Agent for Symmetrix/Symmetrix SDM Agent functionalities, as they correspond to gatekeeper usage, are categorized as follows:

• Monitoring/reporting - All DCPs enabled by default (not including WLA)

• Workload Analyzer functionality - WLA default policies, Daily DCP (15 minute)

• Management - Any active management of Symmetrix (SDR, device configuration, feature controls, masking, and so on)

Symmetrix Management Console:

◆ The size of the Symmetrix being managed by SMC has little effect on SMC gatekeeper requirements.

Refer to Table 22 to determine gatekeeper requirements for the Storage Agent for Symmetrix/Symmetrix SDM Agent and SMC.

Table 22  Total gatekeeper requirements for Storage Agent for Symmetrix/Symmetrix SDM Agent and SMC

Each column adds functionality to the one before it.

                            Monitoring/    + WLA          + Config     + SMC              + CLI scripts
Symmetrix size              Reporting (a)  Policies (a)   Change (a)   (6 gatekeepers)(a) (2 gatekeepers)(b)
6,001 to 16,000 devices     3              4              6            12                 14
16,001 to 32,000 devices    4              5              7            13                 15
32,001 to 64,000 devices    5              6              8            14                 16

a. Gatekeeper requirements are for the Solutions Enabler processes required by Storage Agent for Symmetrix/Symmetrix SDM Agent (storapid, storsrvd, storevntd). Other running Solutions Enabler processes may require additional gatekeepers.

b. Gatekeeper requirements are for the Solutions Enabler processes required by Storage Agent for Symmetrix/Symmetrix SDM Agent and SMC (storapid, storsrvd, storevntd, storsrpd). Other running Solutions Enabler daemons may require additional gatekeepers.
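The lookup in Table 22 can be sketched as a small table-driven function; the tier keys and level indices below are illustrative helpers, not product settings:

```python
# Gatekeeper counts from Table 22, keyed by the upper bound of each
# device-count tier. Levels: 0 = monitoring/reporting only,
# 1 = + WLA policies, 2 = + config change, 3 = + SMC, 4 = + CLI scripts.
GATEKEEPERS = {
    16_000: (3, 4, 6, 12, 14),   # 6,001 to 16,000 devices
    32_000: (4, 5, 7, 13, 15),   # 16,001 to 32,000 devices
    64_000: (5, 6, 8, 14, 16),   # 32,001 to 64,000 devices
}

def gatekeepers_required(devices: int, level: int) -> int:
    """Return the Table 22 gatekeeper count for a device count and level."""
    for tier in sorted(GATEKEEPERS):
        if devices <= tier:
            return GATEKEEPERS[tier][level]
    raise ValueError("more than 64,000 devices: consult EMC")

print(gatekeepers_required(20_000, 3))  # 13 (16,001-32,000 devices, with SMC)
```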


Guideline A12 Know the scalability limit for the open systems Storage Agent for Symmetrix proxy.

One Storage Agent for Symmetrix can act as a proxy for up to four hosts (each running Solutions Enabler) that are connected to three medium Symmetrix arrays. The recommended dedicated hardware configuration is a single-core host.

Table 23 lists the recommended DCP frequencies when a Storage Agent for Symmetrix in proxy mode is managing a varying number of Symmetrix arrays.

Guideline A13 Plan for at least two dedicated agent hosts to manage Symmetrix arrays with more than 32K devices.

The Storage Agent for Symmetrix, when managing a Symmetrix array with 32K devices or more, has a resource footprint that is best managed by installing it on a dedicated agent host. By default, the Storage Agent for Symmetrix participates in failover, and if proper safeguards are not in place, management of a 32K-device or 64K-device Symmetrix array may be assigned to agent hosts that are ill equipped to handle the workload.

Table 23  Recommended DCP frequency for Storage Agent for Symmetrix in proxy mode

Recommended frequencies are shown for three configurations: (A) 12 Symmetrix arrays or 12K logical volumes, whichever is smaller; (B) 12 Symmetrix arrays but no more than 24K logical volumes; (C) 1 Symmetrix with 64K logical volumes.

Data Collection Policy Name   Default      (A)          (B)          (C)
Alert Polling                 2 Minutes    2 Minutes    2 Minutes    2 Minutes
BCV/RDF Status                5 Minutes    5 Minutes    10 Minutes   10 Minutes
CLI Generation                Daily        Daily        Daily        Daily
Configuration                 10 Minutes   10 Minutes   20 Minutes   20 Minutes
Historical Data               Daily        Daily        Daily        Daily
Local Discovery               Daily        Daily        Daily        Daily
Performance Statistics        2 Minutes    2 Minutes    2 Minutes    2 Minutes
BCV Real-time                 1 Minute     1 Minute     2 Minutes    5 Minutes
WLA Daily                     15 Minutes   15 Minutes   15 Minutes   15 Minutes
WLA Revolving                 2 Minutes    2 Minutes    10 Minutes   10 Minutes


Ensure that only one 64K-device Symmetrix array or two 32K-device Symmetrix arrays are zoned-in to two hosts with the proper hardware specification (”Guideline A10” on page 38). For example, HostA and HostB are both zoned-in to a 64K-device Symmetrix array. HostA is the primary for the Symmetrix array while HostB is secondary (active/passive). If HostA goes down, the ECC Server is forced to assign responsibility for managing the 64K-device Symmetrix array to HostB, as that is the only surviving host that can communicate with the array.

It is likely that a 32K- or 64K-device Symmetrix array will have remote Symmetrix arrays (R2). Use techniques like symavoid to exclude remote Symmetrix arrays from being discovered via the primary agent managing 32K- or 64K-device Symmetrix arrays.
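One way to apply the symavoid technique is to list the remote R2 serial numbers, one per line, in a symavoid file. A minimal sketch; the serial numbers below are placeholders, and the file's location (the Solutions Enabler SYMAPI configuration directory on the agent host) should be confirmed for your installation:

```python
# Hypothetical R2 Symmetrix serial numbers to exclude from discovery.
remote_sids = ["000190101234", "000190105678"]

# One serial number per line; place the resulting file in the SYMAPI
# configuration directory on the agent host managing the large array.
symavoid_content = "\n".join(remote_sids) + "\n"
with open("symavoid", "w") as f:
    f.write(symavoid_content)
```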

Increase the number of agent log files for a Storage Agent managing a Symmetrix with 64K devices.

To maintain 36 hours of historical log files, especially when running WLA DCPs against a 64K-device Symmetrix array, increase the log file count from 10 to 25. The log file size should remain the same. Increasing the log file count requires 45 MB of additional disk space.
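The 45 MB figure follows from the 15 additional log files; the per-file size used below (3 MB) is an inference from that figure, not a documented value:

```python
# Going from 10 to 25 log files at an inferred 3 MB per file.
default_count, new_count = 10, 25
log_file_mb = 3  # implied by 45 MB for 15 extra files; an assumption
extra_mb = (new_count - default_count) * log_file_mb
print(extra_mb)  # 45
```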


Guideline A14 Provide additional memory when deploying SMC and Storage Agent for Symmetrix on a host.

The following options are available to deploy Symmetrix Management Console (SMC) with ControlCenter:

◆ Provide additional gatekeepers for SMC as per ”Guideline A11” on page 40.

◆ For more information regarding SMC scalability and additional disk space requirements, refer to the SMC documentation.

Guideline A15 Follow these guidelines when deploying Storage Agent for CLARiiON on any infrastructure, dedicated, or production host.

The agent DCP settings must comply with the guidelines shown in Table 26 on page 46, which shows recommended DCP frequencies for configurations that contain all small, all medium, and all large CLARiiONs. If your configuration contains a mixture of small, medium, and large CLARiiONs, use the DCP settings for the highest recommended configuration in Table 26 on page 46. For example, if your configuration has 5 small and 3 medium CLARiiONs, use the recommended DCP settings for 8 medium CLARiiONs. Refer to the discussion on disabling the WLA Revolving policy for the Storage Agent for CLARiiON in ”Guideline D3” on page 18.
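The mixed-size rule described here (5 small + 3 medium treated as 8 medium) can be sketched as follows; the function and constant names are illustrative:

```python
# Treat a mixed population as the largest size present, at the
# combined array count, then look that row up in the DCP table.
SIZE_RANK = {"small": 0, "medium": 1, "large": 2}

def dcp_lookup_key(arrays: dict) -> tuple:
    """arrays maps size name -> count; returns (total count, largest size)."""
    total = sum(arrays.values())
    largest = max((s for s, n in arrays.items() if n),
                  key=SIZE_RANK.__getitem__)
    return total, largest

print(dcp_lookup_key({"small": 5, "medium": 3}))  # (8, 'medium')
```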

Table 24  Additional host memory requirements for SMC and Storage Agent for Symmetrix

Single-core host with 2 GB RAM (minimum SPEC CINT2000 Rate (baseline) = 26):

• 12,000 devices or 6 Symmetrix arrays, whichever is smaller. Conditions: installed on a production host or infrastructure host as described in ”Configuration #1: Single-core host with infrastructure and agents”; add 512 MB of memory to support SMC.

• Or 24,000 devices or 12 Symmetrix arrays, whichever is smaller. Conditions: add 512 MB of memory to support SMC.

• Or 40,000 devices or 20 Symmetrix arrays, whichever is smaller. Conditions: add 768 MB of memory to support SMC.

Multi-core host with 2 GB RAM (minimum SPEC CINT2000 Rate (baseline) = 123, or SPEC CINT2006 Rate (baseline) = 61):

• 24,000 devices or 12 medium Symmetrix arrays, whichever is smaller. Conditions: installed on an infrastructure host as described in ”Configuration #2: Multi-core host with infrastructure and agents”; add 512 MB of memory to support SMC.

• Or 64,000 devices or 20 Symmetrix arrays, whichever is smaller. Conditions: support for 64,000 devices is available on the Windows platform only; add 768 MB of memory to support SMC.


If WLA Revolving data is required for troubleshooting, consider creating individual WLA Revolving DCPs for each CLARiiON array. Set the frequency to 5 minutes for these DCPs.

The Storage Agent for CLARiiON managing medium and large CLARiiON arrays consumes significant CPU resources on the agent host. For example, on a single-core Windows host (with 2 GB of RAM), the Storage Agent for CLARiiON consumed 15% CPU and 160 MB of memory while managing 10 large or 14 medium CLARiiON arrays executing DCPs at recommended frequency, as provided in Table 26 on page 46.

If this level of CPU utilization is unacceptable on a production host, install the Storage Agent for CLARiiON on a dedicated agent host. Managing small CLARiiONs results in 2-10% CPU utilization on an agent host, depending on the number of CLARiiON arrays being managed, so in that case the agent may be installed on a production host.

Table 25  Recommended number of managed objects for Storage Agent for CLARiiON deployment on hosts

Configuration #1: Single-core host with infrastructure and agents:  Large: 4, Medium: 8, Small: 14
Configuration #2: Multi-core host with infrastructure and agents:   Large: 4, Medium: 8, Small: 14
Dedicated or shared single-core or multi-core host:                 Large: 10, Medium: 14, Small: 18


Guideline A16 Follow these guidelines when deploying Storage Agent for NAS (Celerra) on any infrastructure, dedicated, or production host.

The agent DCP settings must comply with the guidelines shown in ”Table 28” on page 47, which shows recommended DCP frequencies for configurations that contain all small, all medium, and all large Celerras. If your configuration contains a mixture of small, medium, and large Celerras, use the DCP settings for the highest recommended configuration in ”Table 28” on page 47. For example, if your configuration has 5 small and 3 medium Celerras, use the recommended DCP settings for 8 medium Celerras. Refer to the discussion on disabling the WLA Revolving policy for the Celerra agent in ”Guideline D3” on page 18. If WLA Revolving data is required for troubleshooting, consider creating individual WLA Revolving DCPs for each Celerra. Set the frequency to five minutes for such DCPs.

Table 26  Recommended DCP collection settings for CLARiiON arrays

Discovery DCP, all sizes (small, medium, or large): once a day.

WLA Daily frequency by array size:

Number of          Small (up to 90     Medium (up to 120    Large (up to 240
CLARiiON arrays    disks, 512 LUNs)    disks, 1024 LUNs)    disks, 2048 LUNs)
1                  5 Minutes           10 Minutes           10 Minutes
2                  5 Minutes           10 Minutes           10 Minutes
4                  10 Minutes          15 Minutes           20 Minutes
6                  10 Minutes          20 Minutes           30 Minutes
8                  15 Minutes          30 Minutes           60 Minutes
10                 20 Minutes          30 Minutes           60 Minutes
12                 30 Minutes          60 Minutes           Disabled
14                 60 Minutes          Disabled             Disabled
16                 Disabled            Disabled             Disabled
18                 Disabled            Disabled             Disabled


The NAS (Celerra) agent managing medium and large Celerras consumes significant CPU resources on the agent host. For example, on a single-core Windows host with 2 GB of RAM, the NAS agent consumed 10-12% CPU and 300 MB of memory while managing 10 large or 14 medium Celerras executing DCPs at the recommended frequencies, as provided in Table 28. If this level of CPU utilization is unacceptable on a production host, install the NAS agent on a dedicated agent host. Managing small Celerras results in 2-12% CPU utilization on an agent host, depending on the number of Celerras being managed, so in that case the agent may be installed on a production host.

Table 27  Recommended number of managed objects for Storage Agent for NAS (Celerra) deployment on hosts

Configuration #1: Single-core host with infrastructure and agents:  Large: 8, Medium: 12, Small: 14
Configuration #2: Multi-core host with infrastructure and agents:   Large: 10, Medium: 14, Small: 16
Dedicated or shared single-core host:                               Large: 8, Medium: 12, Small: 14
Dedicated or shared multi-core host:                                Large: 10, Medium: 14, Small: 16

Table 28  Recommended DCP collection settings for Celerra arrays

Discovery DCP, all sizes (small, medium, or large): once a day.

WLA Daily frequency by array size:

Number of         Small (2 Data Movers,    Medium (5 Data Movers,    Large (14 Data Movers,
Celerra arrays    512 logical devices)     2048 logical devices)     4096 logical devices)
1                 5 Minutes                5 Minutes                 10 Minutes
2                 5 Minutes                5 Minutes                 10 Minutes
4                 10 Minutes               10 Minutes                15 Minutes
6                 10 Minutes               15 Minutes                30 Minutes
8                 10 Minutes               15 Minutes                60 Minutes
10                15 Minutes               20 Minutes                Disabled
12                15 Minutes               20 Minutes                Disabled
14                15 Minutes               30 Minutes                Disabled
16                15 Minutes               Disabled                  Disabled


Guideline A17 Follow these guidelines when deploying Storage Agent for HDS on any infrastructure, dedicated, or production host.

The agent DCP settings must comply with the guidelines shown in Table 30 on page 49, which shows recommended DCP frequencies for configurations that contain all small, all medium, and all large HDS arrays. If your configuration contains a mixture of small, medium, and large HDS arrays, use the DCP settings for the highest recommended configuration in Table 30 on page 49. For example, if your configuration has 5 small and 3 medium arrays, use the recommended DCP settings for 8 medium HDS arrays. Refer to the discussion on disabling the WLA Revolving policy for the Storage Agent for HDS in ”Guideline D3” on page 18. If WLA Revolving data is required for troubleshooting, consider creating individual WLA Revolving DCPs for each array. Set the frequency to five minutes for such DCPs.

The Storage Agent for HDS managing medium and large arrays consumes significant CPU resources on the agent host. For example, on a single-core Windows host with 2 GB of RAM, the Storage Agent for HDS consumed 10-15% CPU while managing 8 large or 14 medium HDS arrays executing DCPs at the recommended frequencies, as provided in Table 30. If this level of CPU utilization is unacceptable on a production host, install the Storage Agent for HDS on a dedicated agent host.

Table 29  Recommended number of managed objects for Storage Agent for HDS deployment on hosts

Configuration #1: Single-core host with infrastructure and agents:  Large: 4, Medium: 8, Small: 16
Configuration #2: Multi-core host with infrastructure and agents:   Large: 8, Medium: 14, Small: 20
Dedicated or shared single-core host:                               Large: 8, Medium: 14, Small: 20
Dedicated or shared multi-core host:                                Large: 10, Medium: 16, Small: 20


Guideline A18 Follow these guidelines when deploying Storage Agent for Centera on any infrastructure, dedicated, or production host.

Storage Agent for Centera can manage up to 20 Centera content-addressed storage systems of any size when the agent is deployed according to Table 31.

Table 30  Recommended DCP collection settings for HDS arrays (minimum SPEC CINT2000 Rate (baseline) = 26)

Discovery DCP, all sizes (small, medium, or large): once a day.

WLA Daily frequency by array size:

Number of HDS arrays    Small         Medium        Large
1                       5 Minutes     5 Minutes     10 Minutes
2                       5 Minutes     5 Minutes     10 Minutes
4                       10 Minutes    10 Minutes    15 Minutes
6                       10 Minutes    15 Minutes    15 Minutes
8                       15 Minutes    15 Minutes    30 Minutes
10                      20 Minutes    30 Minutes    Disabled
12                      30 Minutes    30 Minutes    Disabled
14                      30 Minutes    Disabled      Disabled
16                      Disabled      Disabled      Disabled

Table 31  Recommended number of managed objects for Storage Agent for Centera deployment on hosts

All configurations (Configuration #1: Single-core host with infrastructure and agents; Configuration #2: Multi-core host with infrastructure and agents; dedicated or shared single-core host; dedicated or shared multi-core host): Large: 20, Medium: 40, Small: 60.


Common Mapping Agent guidelines

The Common Mapping Agent can discover and monitor (but not manage) databases and host configuration information locally on a single host or on remote hosts (via proxy). One Common Mapping Agent can replace several individual host and database agents if limited functionality is needed for the associated managed objects. However, to ensure proper performance and scalability, limit the number of managed objects per Common Mapping Agent and consider using host and database agents for large managed objects, as described in this section.

Guideline A19 Do not discover large hosts or large databases via proxy with the Common Mapping Agent.

Discovery and rediscovery of large hosts and databases (refer to Table 2 on page 6 for details) using the Common Mapping Agent via proxy can take a very long time; for example, daily rediscovery of large hosts and databases with the Common Mapping Agent can take up to an hour. When large hosts are to be monitored, use the Host agent for the appropriate platform. Avoid monitoring large SQL Server, Sybase, Informix, and IBM DB2 database instances with the Common Mapping Agent. For details on deploying those agents, refer to Plan for Agents in EMC ControlCenter 6.1 Planning and Installation Guide, Volume 1.

Guideline A20 Limit the number of managed objects per Common Mapping Agent.

When the number of managed objects per Common Mapping Agent exceeds the counts listed in Table 32, consider deploying additional Common Mapping Agents to sustain optimum performance.

For example, one Common Mapping Agent can manage up to a combined total of 140 managed objects before a second Common Mapping Agent should be added. Discovery polling completes within five off-peak hours.

Note: The limitation is based on an agent host with minimum SPEC rate = 24 and 2 GB of memory.

Table 32  Managed objects per Common Mapping Agent

Managed object type (medium-sized objects)       Managed object count
Hosts (Windows, Solaris, HP-UX, and AIX)         100
Databases (Sybase, Informix, and SQL Server)     40
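Combining Table 32 with the 140-object combined ceiling from the text, a sketch of how many Common Mapping Agents a given load needs (the function and constant names are illustrative):

```python
import math

# Per-agent limits from Table 32, plus the 140-object combined ceiling.
MAX_HOSTS, MAX_DATABASES, MAX_TOTAL = 100, 40, 140

def cma_agents_needed(hosts: int, databases: int) -> int:
    """Minimum number of Common Mapping Agents for the given load."""
    return max(math.ceil(hosts / MAX_HOSTS),
               math.ceil(databases / MAX_DATABASES),
               math.ceil((hosts + databases) / MAX_TOTAL))

print(cma_agents_needed(100, 40))  # 1
print(cma_agents_needed(150, 60))  # 2
```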


Host, VMWare, Oracle, and WLA Archiver Agent guidelines

Host Agent for Windows allows you to effectively manage Windows servers. Workload Analyzer Archiver generates Performance Manager Automated Reports and processes performance data collected by individual ControlCenter agents. The Database Agent for Oracle allows you to manage the Oracle database. The VMWare Agent manages the ESX Servers.

Guideline A21 Ensure that Watermark DCPs are not set below the default of 15 minutes for hosts having more than 1024 volumes and file systems.

Starting with the 5.2 Service Pack 3 release, the Host agent has three Watermark DCPs that collect capacity utilization information for volume groups, volumes, and file systems every 15 minutes (default); these policies are enabled by default. The capacity utilization data for the day is sent to the Store along with the Daily Discovery transactions, which are typically scheduled once a day. Watermark data gathered from hosts is reported in various StorageScope reports. Ensure that Watermark DCP frequencies are not faster than 15 minutes for hosts having more than 1024 volumes and file systems.

On a test Solaris 10 host (SunFire V240R with two 1.5 GHz CPUs and 4 GB RAM) with 256 devices, 1024 volumes, and 1024 file systems, the WLA Daily, WLA Revolving, and Watermark DCPs (all at default frequencies) consumed a steady 4% CPU utilization. If this level of CPU utilization is not acceptable on a production host, disable the Watermark policies or reduce their frequency.

Guideline A22 Limit the number of managed objects per Workload Analyzer Archiver.

Table 33 on page 52 shows the number of medium-sized managed objects that can be managed by a single Workload Analyzer Archiver when installed on a dedicated host or shared with other components on a single-core or multi-core host. Provide for 36 GB of storage.


If more managed objects are being managed by one WLA Archiver than is shown in Table 33, assign the excess to a new dedicated (or shared) WLA Archiver. On the WLA Archiver host, search for yesterday's BTP file (for example, 20080708.btp) to get a count of the managed objects that the WLA Archiver is managing. If the number of managed objects on the WLA Archiver host exceeds the managed object counts in Table 33, contact the Solutions Validation Center ([email protected]) for assistance.
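A sketch of locating yesterday's BTP file, given that the files are named by date (for example, 20080708.btp); the archive directory below is a placeholder to replace with the actual WLA Archiver data directory:

```python
import datetime
import glob
import os

archive_dir = "."  # placeholder: set to the WLA Archiver data directory

# BTP files are named YYYYMMDD.btp, so build yesterday's filename.
yesterday = (datetime.date.today() - datetime.timedelta(days=1)).strftime("%Y%m%d")
matches = glob.glob(os.path.join(archive_dir, f"{yesterday}.btp"))
print(matches or f"no {yesterday}.btp found")
```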

Guideline A23 Know the scalability limit of the Oracle agent (local and proxy).

Manage no more than 50 medium instances when the Oracle agent is installed on the database host (local agent). It is assumed that the database server is a single-core host. An Oracle agent running on such a database server needs approximately 25 MB of virtual memory.

In proxy mode, a single Oracle agent can manage up to 30 medium database instances running on a database server.

This limitation takes into account processing of WLA Daily.

Guideline A24 Limit the number of ESX Servers managed by the VMWare Agent.

◆ 50 medium ESX servers when the agent is deployed on the infrastructure hosts (for example, single-host infrastructure with agents)

◆ 100 medium ESX servers when the agent is deployed on an external Store or on dedicated agent hosts

Table 33  Managed objects per Workload Analyzer Archiver

Managed object type (medium-sized objects)    Single-core or multi-core host
Storage arrays                                50
Hosts                                         600
Databases                                     500
Switches                                      50
Total of all listed managed objects           1200


StorageScope Guidelines

This section provides guidelines specifically for StorageScope technology. It is designed to answer common performance and scalability questions and to provide recommendations that improve the user experience and the appropriate placement of StorageScope components.

EMC StorageScope technology changed significantly in ControlCenter 6.0. StorageScope data is now stored in a second Oracle database, the StorageScope Repository, separate from the ControlCenter Repository. Capacity and utilization data for managed objects is moved nightly from the ControlCenter Repository to the StorageScope Repository via a process called Extract-Transform-Load (ETL).

Figure 1 StorageScope 6.1 Architecture

StorageScope 6.0 offers the following:

◆ Reporting on detailed file-level storage utilization metrics and attributes including file type, owner, size, path, folder, volume, and host. This allows easy identification of hosts with large, rarely used, or non-business related files which may be candidates for storage reclamation or migration.


◆ Unified drill-down views that combine the built-in reports of VisualSRM with the custom reporting flexibility of StorageScope 5.x in a single unified GUI.

The StorageScope FLR license remains purchasable separately from the StorageScope base license. If the StorageScope FLR license has been enabled, the Data Collection Policy called File Level Collection (referred to as the FLR DCP in the rest of this document) for Host agents can be scheduled to collect file/folder statistics on a nightly basis. The amount of data that the FLR DCP collects and stores in the StorageScope Repository depends on the selection that the ControlCenter administrator makes in the DCP wizard.

While installing StorageScope, keep in mind that the following factors affect its performance and scalability.

◆ ControlCenter Installation size (small, medium or large)

◆ ControlCenter Configuration type (single-host, distributed or single-host with agents)

◆ Number of files and folders in file systems being processed by the FLR DCPs.

◆ Concurrency and types of FLR DCPs

• Scope of data collection. All Files and Folders retrieves much more information for the target file systems than Folders only. In addition, All Files and Folders greatly increases the time consumed by DCP processing on the agent host and StorageScope server, the ETL time, and the demand for disk space. Refer to Table 42 on page 68 for DCP performance information.

• Exceptional Files and Folders scans files and folders that are in the top <n> of each category, where <n> is any value specified, among files greater than 1 MB and files not accessed in more than 60 days. These file statistics are used for file summary reports in StorageScope. The categories are: oldest by create date, oldest by modified date, oldest by access date, largest by actual size, and largest by allocated size.

◆ System resources available on StorageScope server host

◆ Time to complete ETL process

◆ Data retention period of StorageScope data


Planning guidelines

Guideline P1 Determine managed object size based on Table 34.

The managed objects for the FLR DCP are file systems. System resource utilization for the FLR DCP on the agent and StorageScope hosts depends on the count of files and folders in the file systems and on the type of the DCP. The FLR DCP may be used to collect folder-level or file-and-folder-level information. The host on which StorageScope is installed requires significantly more system resources (CPU and disk space) to process and record a file-and-folder-level DCP than a folder-only DCP. Use Table 34 on page 55 to determine the size of the managed objects as applicable to StorageScope. If the size of a managed object exceeds Large, contact the EMC Solutions Validation Center ([email protected]) for additional recommendations.

Note: Classification of file servers is based on the size and count of the file systems that are mounted on them:
◆ Small file server: 6 small file systems
◆ Medium file server: 4 medium file systems
◆ Large file server: 2 large file systems
Each folder is configured as follows, irrespective of the size of the file system or the file server:
◆ There are 50 files per folder
◆ 6 to 10 distinct file types (.txt, .jpg, etc.) are present in each folder

Guideline P2 Determine the appropriate placement of StorageScope.

Based on the factors discussed earlier, the following recommendations apply to placing StorageScope on an appropriate host, along with the associated deployment requirements.

Table 34  Managed object sizes

Managed object   Resource        Small           Medium          Large
File System      Files/Folders   124,000/2,480   376,000/7,520   1,125,000/22,500
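Using the Table 34 file counts as upper bounds, a sketch of classifying a file system by its file count (the thresholds are taken directly from the table; the function name is illustrative):

```python
# Upper bounds on file count per size class, from Table 34.
def filesystem_size(file_count: int) -> str:
    if file_count <= 124_000:
        return "small"
    if file_count <= 376_000:
        return "medium"
    if file_count <= 1_125_000:
        return "large"
    return "beyond large: contact the EMC Solutions Validation Center"

print(filesystem_size(200_000))  # medium
```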


Note: When StorageScope is installed on a dedicated host, the number of file systems that it can scan on a nightly basis is independent of installation size or configuration type. The current scalability of StorageScope is a maximum of 2000 file systems on a nightly basis. Of the total 2000 file systems, All Files and Folders scans should be limited to a maximum of 160 file systems. Refer to Guideline D5, ”Distribute FLR DCP evenly”, for how to schedule DCPs.

Guideline P3 Allocate disk space for the StorageScope host.

The disk space requirement for StorageScope depends on three factors:

◆ Number of managed objects present in the ControlCenter Repository.

◆ Number of file systems being scanned by FLR DCPs on a nightly basis.

◆ FLR Data Removal Schedule of the FLR StorageScope data.

Use Table 36 on page 57 to estimate the disk space requirement for StorageScope. Note that this estimate includes the StorageScope Repository, database backup, database export, and the StorageScope application with the default 1 year of trending data.

Table 35  Installation size and configuration type

Small installation, minimum ”Configuration #1: Single-core host with infrastructure and agents”:
  StorageScope: install on the infrastructure host (a)
  StorageScope with FLR: install on the infrastructure host (a)
  Maximum medium file systems: 500

Small installation, minimum ”Configuration #2: Multi-core host with infrastructure and agents”:
  StorageScope: install on the infrastructure host (a)
  StorageScope with FLR: install on the infrastructure host (a)
  Maximum medium file systems: 1000

Medium installation, minimum ”Configuration #1: Single-core host with infrastructure and agents”, managing up to 45 arrays, 550 hosts, 280 databases, and 28 switches:
  StorageScope: install on the infrastructure host (a)
  StorageScope with FLR: dedicated StorageScope host
  Maximum medium file systems: 2000

Medium installation, minimum ”Configuration#2: Multi-core infrastructure host”, managing up to 120 arrays, 1400 hosts, 750 databases, and 88 switches:
  StorageScope: install on the infrastructure host (a)
  StorageScope with FLR: dedicated StorageScope host
  Maximum medium file systems: 2000

Large installation, minimum ”Configuration#2: Multi-core infrastructure host”, managing up to 200 arrays, 2500 hosts, 1500 databases, and 156 switches:
  StorageScope: dedicated StorageScope host
  StorageScope with FLR: dedicated StorageScope host
  Maximum medium file systems: 2000

a. An additional 1 GB of RAM is required to support deployment of StorageScope on these configurations.


To compute disk space requirement for the FLR, use the following formula:

◆ All Files and Folders: One medium host (4 file systems with a total of 1.5 million files) requires 900 MB of disk space (400 MB in the SRM_SCANDATA tablespace and 500 MB in the cold backup folder).

Note: If more than 250 million file records are to be stored in the database as the result of All Files and Folders scans, you need to add data files to the SRM_SCANDATA tablespace.

◆ Folders only: One medium host (4 file systems with a total of 30,080 folders) requires 280 MB of disk space (130 MB in SRM_SCANDATA, 150 MB in the cold backup folder).

Note: If more than 15 million folder records are to be stored in the database as the result of Folders only scans, you need to add data files to the SRM_SCANDATA and SRM_SCANIDX tablespaces.

Refer to ”Guideline M1” on page 68 to monitor disk space usage of data and index tablespaces.

◆ Exceptional Files and Folders: Disk space requirement is somewhere between Folders only and All Files and Folders scans. If # of exceptional files/folders to collect per category is kept low, disk space usage will lean more towards the formula of Folder only.
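The per-host figures above can be turned into a rough capacity estimate. The sketch below is illustrative only (not an EMC utility); it assumes disk usage scales linearly with the number of medium hosts, which holds only approximately in practice.

```python
# Illustrative sketch (not an EMC utility): estimates FLR disk space from the
# per-host figures quoted above, scaled linearly by host count.

# Per medium host (4 file systems): MB for the tablespace + MB for cold backup.
PER_HOST_MB = {
    "all_files_and_folders": 400 + 500,   # SRM_SCANDATA + cold backup = 900 MB
    "folders_only":          130 + 150,   # SRM_SCANDATA + cold backup = 280 MB
}

def flr_disk_estimate_mb(scan_type: str, medium_hosts: int) -> int:
    """Rough linear estimate of FLR disk usage in MB for a number of
    medium hosts (4 file systems, ~1.5 million files each)."""
    return PER_HOST_MB[scan_type] * medium_hosts

print(flr_disk_estimate_mb("folders_only", 10))           # 2800 MB for 10 hosts
print(flr_disk_estimate_mb("all_files_and_folders", 10))  # 9000 MB for 10 hosts
```

Because the document quotes only single-host data points, treat the result as an order-of-magnitude planning figure, not a guarantee.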

Table 36 Disk space requirements for StorageScope only

Number of managed objects | Disk space for StorageScope only (no FLR)

30 arrays, 330 hosts, 180 databases, 25 ESX's, and 24 switches | 15 GB

45 arrays, 550 hosts, 280 databases, 25 ESX's, and 28 switches | 17 GB

60 arrays, 680 hosts, 380 databases, 25 ESX's, and 44 switches | 22 GB

120 arrays, 1400 hosts, 750 databases, 50 ESX's, and 88 switches | 50 GB

175 arrays, 2050 hosts, 1200 databases, 75 ESX's, and 132 switches | 95 GB

200 arrays, 2500 hosts, 1500 databases, 100 ESX's, and 156 switches | 110 GB
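The rows of Table 36 can be treated as a lookup: pick the smallest tested configuration whose counts cover your environment. The function below is an illustrative sketch, not an EMC tool; environments beyond the largest tested row return None rather than extrapolating.

```python
# Illustrative sketch: pick the smallest Table 36 row whose managed-object
# counts cover a given environment, and return its disk space figure.
# Rows are (arrays, hosts, databases, esx, switches, disk_gb) from Table 36.

ROWS = [
    (30, 330, 180, 25, 24, 15),
    (45, 550, 280, 25, 28, 17),
    (60, 680, 380, 25, 44, 22),
    (120, 1400, 750, 50, 88, 50),
    (175, 2050, 1200, 75, 132, 95),
    (200, 2500, 1500, 100, 156, 110),
]

def storagescope_disk_gb(arrays, hosts, databases, esx, switches):
    """Return the disk space (GB) of the smallest covering tested config."""
    needs = (arrays, hosts, databases, esx, switches)
    for row in ROWS:
        if all(need <= cap for need, cap in zip(needs, row)):
            return row[5]
    return None  # beyond the largest tested configuration

print(storagescope_disk_gb(50, 600, 300, 25, 30))  # 22 GB row covers this
```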


◆ StorageScope FLR data is, by default, removed from the StorageScope Repository nightly. Use the formula above to estimate the total disk space if you wish to retain StorageScope FLR data for a longer period for reporting (for example, Folders only data for 500 file systems on Monday, All Files and Folders data for 100 file systems on Tuesday, and Folders only data for 1000 file systems on Wednesday).

Installation guidelines

Guideline I1 Follow these recommendations when selecting hosts for a dedicated StorageScope installation.

Guideline I2 Follow these recommendations to provide high availability for a dedicated StorageScope host. When allocating disk space for the StorageScope Repository, consider these recommendations:

◆ For high availability, use mirrored (RAID 1) disks (keep in mind that this doubles the amount of required disk space).

Table 37 Hardware specifications and component placement

Single-core host with 2 GB RAM (minimum SPEC CINT2000 Rate (baseline) = 26):

• StorageScope Repository
• StorageScope Web Application
• Master Agent
• Host Agent

Multi-core host with 2 GB RAM (minimum SPEC CINT2000 Rate (baseline) = 123 or SPEC CINT2006 Rate (baseline) = 61):

• StorageScope Repository
• StorageScope Web Application
• Master Agent
• Host Agent
• With 1 additional GB of RAM (total of 3 GB):
  - Store or Storage Agent for Symmetrix and the Symmetrix SDM Agent, managing up to 40,000 devices or 20 Symmetrix arrays, whichever is lower
• Any three of the following: Storage Agent for CLARiiON, FCC Agent, WLA Archiver Agent, VMware Agent, Common Mapping Agent, Oracle Agent, Storage Agent for HDS, Storage Agent for Centera, Storage Agent for ESS, Storage Agent for HP StorageWorks, Storage Agent for Invista, Storage Agent for NAS, Storage Agent for SMI


◆ Other options for increasing system performance include use of disk arrays such as Symmetrix and CLARiiON.

◆ Host-based mirroring is strongly discouraged.

Guideline I3 Follow these recommendations to ensure an optimal operating environment. Perform these maintenance best practices at least once a month:

◆ Clean up temporary disk space

◆ Shut down the StorageScope components and run a disk defragmentation utility

Data Collection Policy guidelines

There are two data feeds into StorageScope. Metrics, configuration, status, and usage data for the objects that ControlCenter agents manage are extracted from the ControlCenter Repository on a nightly basis and stored in the StorageScope Repository; this is the first data feed, and the ETL (extract, transform, load) process performs the data migration. If the StorageScope FLR license is enabled, the Host Agent collects file/folder data for file systems on a nightly basis. These transactions are processed by the StorageScope Web Application (bypassing the ControlCenter infrastructure) and stored in the StorageScope Repository; this is the second data feed. This section provides recommendations for the various DCPs that impact StorageScope.

Guideline D1 Schedule discovery DCPs so that they complete before ETL starts. The ETL process extracts managed object information from the ControlCenter Repository, then processes and loads it into the StorageScope Repository for trending and reporting. To obtain the most current information on the various managed objects (CLARiiON, third-party arrays, hosts, databases, virtual machines, and so on), schedule their nightly discovery DCPs so that they all complete before the scheduled start time of the ETL process. For a large installation size, it may be necessary to delay the default start time (4:00 a.m.) of the ETL process.

A quick and easy way to identify the duration of nightly discovery is to track the CPU utilization of the oracle.exe process on the ControlCenter Repository host using a utility such as Windows Performance Monitor. Discovery DCPs are typically scheduled between midnight and 5 a.m. daily, and all discovery DCPs update the ControlCenter Repository. If you observe that CPU utilization of oracle.exe 'flattens out' after 4:30 a.m., you know that scheduling ETL at 5:00 a.m. would allow the most up-to-date managed object information to be presented in the various StorageScope reports.

Note: Track the CPU utilization of the oracle.exe process before you schedule an ETL for the first time.
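The 'flattens out' check above can also be automated. The sketch below is illustrative: it assumes CPU samples for the oracle.exe process have already been collected (for example, with Windows Performance Monitor) as (hour, percent) pairs, and the 10% idle threshold is an assumption, not an EMC-specified value.

```python
# Illustrative sketch: given (hour, cpu_percent) samples for the oracle.exe
# process, find the first hour after which utilization stays below a
# threshold -- a candidate ETL start time.

def etl_start_hour(samples, idle_threshold=10.0):
    """Return the first sampled hour from which all later CPU readings are
    below idle_threshold, i.e. discovery has 'flattened out'."""
    for i, (hour, _) in enumerate(samples):
        if all(cpu < idle_threshold for _, cpu in samples[i:]):
            return hour
    return None  # discovery never settled in the sampled window

# Synthetic overnight samples: discovery stays busy until ~04:30.
samples = [(0.0, 80), (1.0, 75), (2.0, 90), (3.0, 85), (4.0, 60),
           (4.5, 5), (5.0, 3), (6.0, 2)]
print(etl_start_hour(samples))  # 4.5 -> schedule ETL at 05:00
```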

Guideline D2 Schedule processing of custom reports after ETL completes. Completion of the ETL process updates the StorageScope Repository with up-to-date managed object data and file/folder metrics. The ETL process duration depends on the number of managed objects in the ControlCenter Repository and the number of file systems being scanned by the various FLR DCPs. Use the following table to estimate how long the ETL process takes to complete.

Table 38 ETL Processing time estimate based on dedicated StorageScope Server installed on single-core host and multi-core host

Configuration size | ETL processing time (hh:mm), single-core host (minimum SPEC CINT2000 Rate (baseline) = 26) | ETL processing time (hh:mm), multi-core host (minimum SPEC CINT2000 Rate (baseline) = 123 or SPEC CINT2006 Rate (baseline) = 61)

Small (up to 30 arrays, 330 hosts, 25 ESX's, 180 databases and 24 switches) | 00:15 | 00:14

Medium (up to 45 arrays, 550 hosts, 25 ESX's, 280 databases and 28 switches) | 00:26 | 00:22

Large (up to 60 arrays, 680 hosts, 25 ESX's, 380 databases and 44 switches) | 00:30 | 00:27

Large (up to 80 arrays, 1400 hosts, 50 ESX's, 750 databases and 64 switches) | 00:59 | 00:58

Large (up to 175 arrays, 2050 hosts, 75 ESX's, 1200 databases and 132 switches) | 01:30 | 01:15

Large (up to 200 arrays, 2500 hosts, 100 ESX's, 1500 databases and 156 switches) | 01:54 | 01:27


Note: StorageScope currently scales to a maximum of 2000 file systems scanned nightly. Add 6 hours to the ETL time for processing Folders only transactions for 2000 file systems. All Files and Folders ETL for 160 file systems is estimated to take an additional 4 hours. Add these durations to the processing times in Table 38 to arrive at the estimated total ETL time.
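The note above amounts to simple arithmetic. The sketch below is illustrative and assumes the FLR overheads scale linearly from the two quoted data points (6 hours per 2000 Folders only file systems, 4 hours per 160 All Files and Folders file systems), which the document does not guarantee.

```python
# Illustrative arithmetic: total ETL time = base time from Table 38 plus FLR
# processing overhead, scaled linearly from the quoted reference points.
# Linear scaling below the quoted maximums is an assumption.

def total_etl_minutes(base_minutes, folders_only_fs=0, all_files_fs=0):
    """Estimate total ETL duration in minutes."""
    folders_overhead = 6 * 60 * (folders_only_fs / 2000)   # 6 h per 2000 FS
    all_files_overhead = 4 * 60 * (all_files_fs / 160)     # 4 h per 160 FS
    return base_minutes + folders_overhead + all_files_overhead

# Largest config (01:54 base) scanning 1000 Folders only and 80 All Files FS:
print(total_etl_minutes(114, folders_only_fs=1000, all_files_fs=80))  # 414.0
```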

Guideline D3 To conserve system resources on agent hosts, schedule only one type of FLR DCP per file system. The FLR DCP definition wizard allows the same file system to be assigned to more than one DCP. For example, if file system 1 on Host1 is already assigned to the Folders only FLR DCP, the user may also assign the same file system (file system 1 on Host1) to the All Files and Folders FLR DCP. If this happens, these consequences occur:

◆ The File Server (agent host) would incur the processing cost of multiple FLR DCPs. Refer to Table 42 on page 67 for the system resource footprint of FLR DCPs.

Note: The StorageScope FLR license enables FLR functionality on host agents running ControlCenter version 6.0 and higher.

◆ The StorageScope Repository stores only the last transaction, overwriting all previous transactions for the same file systems. If you have scheduled summary and detailed FLR at 3 a.m. and 1 a.m. respectively for the same file system (file system 1 on Host1), the StorageScope Repository records the summary-level information, overwriting the detailed information collected earlier in the night.

If these consequences are undesirable, do not assign multiple FLR DCPs to the same file system.

Guideline D4 Avoid overlapping FLR DCPs with scheduled maintenance tasks. The StorageScope installation utility configures various maintenance tasks that ensure data recovery and routine upkeep of the StorageScope Repository. Avoid overlapping StorageScope processing tasks (ETL, FLR DCPs, and so on) with these important maintenance tasks so that they can complete on time.


The StorageScope Repository export process starts at 11 p.m. daily and may take 15-60 minutes depending on the size of the repository. During this time, the StorageScope Repository is available for transaction processing, but response time could degrade. Once the export has finished, the resulting .DMP file is compressed. Depending on the size of the export file, compression can run for 1-3 hours. Because compression runs on an already exported external database file, the StorageScope Repository remains available to process transactions during this time. Schedule FLR DCPs to start at midnight.

By default, a cold backup of the StorageScope Repository is scheduled every Sunday at 2:30 a.m. During the cold backup process, the StorageScope Repository is not available for transaction processing. Do not schedule FLR DCPs on Sunday (uncheck Sunday in the DCP definition wizard; it is enabled by default).

Guideline D5 Distribute FLR DCPs evenly. An FLR DCP collects a different type and volume of data depending on the selections made in the DCP definition wizard. To make it easy to assign file systems to a specific type of FLR DCP, create a separate collection definition for each type.

Table 39 StorageScope maintenance tasks

Task Name When scheduled

StorageScope Repository export Daily 11:00 pm

Database backup to external media Defined by the customer

Cold backup of the StorageScope Repository Weekly Sunday 2:30 am

Recompile invalid objects in StorageScope Repository Weekly Sunday 6:00 pm

Rebuild StorageScope Repository index Weekly Sunday 12:05 am

Metrics partition process Weekly Sunday 1:00 am

Trending service of StorageScope Repository Weekly Saturday 7:30 am

Trend retention service of StorageScope Repository Weekly Saturday 10:30 pm

Trending service of StorageScope Repository Last day of each month 9:30 pm
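A proposed FLR DCP window can be checked against the maintenance windows in Table 39 before it is scheduled. The sketch below is illustrative and models only two of the Table 39 tasks; since the table gives only start times, the window durations (one hour for the export, two hours for the cold backup) are assumptions.

```python
# Illustrative sketch: check a proposed DCP window against maintenance windows
# derived from Table 39. Durations are assumptions (the table lists only start
# times). Times are expressed as minutes since midnight.

# (weekday, start_min, end_min) -- only two Table 39 tasks modeled here.
MAINTENANCE = [
    ("daily", 23 * 60, 24 * 60),           # repository export, 11 pm (~1 h)
    ("sunday", 2 * 60 + 30, 4 * 60 + 30),  # cold backup, 2:30 am (~2 h)
]

def overlaps(day, start_min, end_min):
    """True if the window [start_min, end_min) on `day` intersects any
    modeled maintenance window."""
    for m_day, m_start, m_end in MAINTENANCE:
        if m_day in ("daily", day) and start_min < m_end and m_start < end_min:
            return True
    return False

print(overlaps("sunday", 2 * 60, 5 * 60))  # True: collides with cold backup
print(overlaps("monday", 0, 3 * 60))       # False: midnight-3 am is clear
```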


As an example, you may create the following three distinct FLR DCPs from the standard template and assign appropriate file systems to them.

◆ Summary-only: Scope of data collection = Folders only; Collect summary information on file types = No. Appropriate for utilization reporting, trending, and quick identification of under- or over-utilized file systems.

◆ Summary with File Type: Scope of data collection = Folders only; Collect summary information on file types = Yes. In addition to the information collected above, information on utilization by file type (multimedia, audio, video, log, executable, and so on) is collected. Information retrieved by this type of FLR DCP can assist in planning storage provisioning or reclamation.

◆ Detailed: Scope of data collection = All Files and Folders; Collect summary information on file types = Yes. This allows collection of all the information necessary to troubleshoot file system issues.

The FLR DCP for all supported Host agent platforms is scheduled at 2 a.m. by default. To avoid resource contention on the StorageScope host (either dedicated or collocated with ControlCenter Infrastructure), schedule FLR DCPs evenly over an extended period of time starting at midnight.

Evenly schedule summary level DCPs start time (Folders only with or without file type summary) between midnight and 3 a.m., and schedule detail DCPs (All Files and Folders with file type summary) to start at 4 a.m. By following these recommendations, you spread the processing load over an extended period of time. This applies to StorageScope collocated on the ControlCenter infrastructure hosts as well as installed on a dedicated host.

Schedule a maximum of 250 hosts (or 1000 file systems) in a Folders only DCP. The StorageScope Server is expected to take about an hour to process these transactions. If more than 250 hosts are to be scheduled for Folders only DCPs, create multiple DCPs, assigning up to 250 hosts (or 1000 file systems) to each.

Note: To scan 2000 file systems nightly, schedule 1000 file systems in a first DCP and the remaining 1000 file systems in a second DCP.
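The batching rule above (at most 250 hosts or 1000 file systems per Folders only DCP) can be sketched as a simple chunking function. Names and structure below are illustrative, not part of ControlCenter.

```python
# Illustrative sketch: split hosts into Folders only DCP batches that respect
# both caps from Guideline D5 (250 hosts, 1000 file systems per DCP).

def batch_dcps(hosts, max_hosts=250, max_file_systems=1000):
    """hosts: list of (host_name, file_system_count) tuples. Returns a list
    of batches (lists of host names), each within both caps."""
    batches, current, fs_total = [], [], 0
    for name, fs in hosts:
        if current and (len(current) >= max_hosts
                        or fs_total + fs > max_file_systems):
            batches.append(current)      # current batch is full; start anew
            current, fs_total = [], 0
        current.append(name)
        fs_total += fs
    if current:
        batches.append(current)
    return batches

hosts = [(f"host{i}", 4) for i in range(500)]  # 500 hosts x 4 file systems
print([len(b) for b in batch_dcps(hosts)])     # [250, 250]
```

Each resulting batch would then be assigned to its own DCP, with start times staggered between midnight and 3 a.m. as recommended above.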


Schedule a maximum of 40 hosts (or 160 file systems) in an All Files and Folders DCP. The expected processing time on the StorageScope Server is 2 hours.

StorageScope Server processing time does not include the time an FLR DCP takes on agent hosts. Refer to Table 42 on page 68 for an estimate of the processing time of the various FLR DCPs on agent hosts.

Distributing FLR DCPs over time spreads the transaction payload evenly across the customer network.

Guideline D6 Use filters to reduce the volume of information processed and stored in the StorageScope Repository. Consider the following FLR DCP settings to reduce processing and storage costs:

◆ Set the Ignore files smaller than option to a reasonable value (say, 1 MB).

◆ Use Collect file owner information only when needed. On the Windows platform, collecting file owner information takes additional time for a medium file system (03:23 min compared to 01:42 min). On UNIX platforms, the additional cost is only marginal.

Guideline D7 Consider alternatives to All Files and Folders FLR DCPs whenever possible. The All Files and Folders DCP can drastically impact system performance, gathering data that is not very useful except in specific circumstances. It is best to run the Folders only DCP on a regular basis and create an All Files and Folders scan only when trouble spots are identified. It may be preferable to perform an Exceptional Files and Folders scan instead of an All Files and Folders scan. The Exceptional Files and Folders scan reports on large and rarely used files based on the criteria defined in the DCP. These files may be candidates for storage reclamation or migration.

The five Exceptional categories are pre-defined in the system. If the same file belongs to multiple categories, only one record is written to the StorageScope Repository.

For example, suppose an Exceptional Files and Folders DCP is assigned to Host A, which has file systems C:, D:, and E:, each containing 100,000 files, and the number of exceptional files per category is set to 1000. At the end of DCP execution, the following counts of files are stored in the StorageScope Repository:

◆ Host A, file system C: 1000-5000 files
◆ Host A, file system D: 1000-5000 files
◆ Host A, file system E: 1000-5000 files


By using Exceptional Files and Folders scans on these three file systems, a maximum of only 15,000 file records are written to the StorageScope Repository, instead of the 300,000 an All Files and Folders scan would produce. The processing workload on the StorageScope Server for an Exceptional FLR DCP is very close to, or slightly higher than, that of a Folders only DCP. Refer to "Guideline D5" on page 62 for the Folders only scheduling recommendations, which also apply to the Exceptional Files and Folders DCP.
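The record-count comparison above is simple arithmetic; the sketch below just makes the upper bound explicit. The five-category count comes from the text; the function name is illustrative.

```python
# Worked arithmetic for the example above (illustrative only): upper bound on
# records written by an Exceptional Files and Folders scan. A file matching
# multiple categories is written only once, so this is a worst case.

CATEGORIES = 5  # pre-defined Exceptional categories

def exceptional_record_cap(file_systems, per_category_limit):
    """Maximum records an Exceptional scan can write for N file systems."""
    return file_systems * CATEGORIES * per_category_limit

print(exceptional_record_cap(3, 1000))  # 15000, vs. 300000 for a full scan
```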

Upgrade Guidelines

This section contains guidelines for upgrading to StorageScope 6.1 from previous versions. Review the StorageScope Data Migration Utility section in the EMC ControlCenter 6.1 Upgrade Guide for a discussion of the prerequisites, usage, and troubleshooting of the migration utility.

Guideline U1 Upgrade from previous versions of StorageScope in batches. StorageScope 6.1 includes a utility (STSMigrate.bat) that migrates the old 5.2.x XML repository to the new Oracle-based StorageScope Repository. If you are migrating a very large amount of data (say, 3 years of XML data for a medium/large ControlCenter configuration size), migrate the data in stages (for example, one year at a time).

Guideline U2 Allow sufficient time to complete the migration. Migration of XML data to the StorageScope Repository is a one-time process. It may take up to 5 hours to complete for a medium/large installation size with 3 years of StorageScope data.

Networking Guidelines

This section provides guidelines on StorageScope component placement in a distributed environment.

Guideline N1 Minimize the impact of network latency on StorageScope. As network latency among the ControlCenter Infrastructure, agents, and StorageScope increases, processing time goes up, transactions may time out, and application reliability may degrade.

Follow these recommendations while deploying StorageScope for a distributed configuration:


◆ Ensure that network latency between the ControlCenter Repository and the dedicated StorageScope host does not exceed 8 ms (milliseconds). Try to keep these two hosts on the same LAN segment.

◆ Ensure that network latency between the dedicated StorageScope host and the hosts executing FLR DCPs does not exceed 200 ms. The minimum bandwidth is 386 Kbps with 50% average utilization.

◆ Keep network latency between the SRM Console and the StorageScope host at or below 50 ms for optimal user response time.

Guideline N2 Consider the impact of FLR DCPs on network latency and bandwidth. Table 40 lists the resulting payload when the FLR DCPs execute on a medium file system.

*This volume of data is transferred between the Host agent and StorageScope Server per medium file system.

Depending on the time synchronization of the FLR DCPs and the number of file systems being polled, a sudden surge of payload can be placed on the network. If this is undesirable, spread the DCPs evenly over time. This is especially recommended when Host Agents are deployed at locations connected by networks with high latency and low bandwidth.

Table 40 FLR DCP payload

Data Collection Policy | Approximate Payload*

FLR (Scope of data collection = Folders only; Collect summary information on file types = No) | 1 MB

FLR (Scope of data collection = Folders only; Collect summary information on file types = Yes) | 2 MB

FLR (Scope of data collection = All Files and Folders; Collect summary information on file types = Yes) | 443 MB

FLR (Scope of data collection = Exceptional Files and Folders; Collect summary information on file types = Yes; # of exception files/folders to collect per category = 50000) | 77 MB
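The payloads in Table 40, combined with the minimum bandwidth from Guideline N1, give a rough feel for transfer times. The sketch below is illustrative; it ignores protocol overhead and assumes the stated average utilization is already consumed by other traffic.

```python
# Illustrative sketch: rough transfer time for an FLR payload over a link.
# Assumption: only (1 - avg_utilization) of the link is free for StorageScope.

def transfer_seconds(payload_mb, link_kbps, avg_utilization=0.5):
    """Seconds to move payload_mb (MB) over a link_kbps link, of which a
    fraction avg_utilization is assumed busy with other traffic."""
    available_kbps = link_kbps * (1 - avg_utilization)
    return payload_mb * 8 * 1024 / available_kbps  # MB -> kilobits

# 443 MB (All Files and Folders, one medium file system) over the minimum
# 386 Kbps link from Guideline N1:
print(round(transfer_seconds(443, 386) / 3600, 1))  # roughly 5.2 hours
```

Numbers like this are why the document recommends Folders only scans for hosts behind low-bandwidth links.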


SRM Console Guidelines

Guideline C1 When running the SRM Console, ensure that the client meets the minimum specifications. Table 41 lists the minimum hardware requirements for hosts running the SRM Console.

Guideline C2 Limit the number of concurrently active SRM Consoles to ten. An SRM Console session is considered active if it is being used by a user to request or view snapshots, reports, and so on.

Note: Active ControlCenter Java and Web Console sessions do not influence the performance of StorageScope. Likewise, the SRM Console does not impact the performance of the ControlCenter Infrastructure, agents, or Consoles.

Agent Guidelines

Guideline A1 Use agent resource utilization to help schedule DCPs. Table 42 shows the system resources used by a Host Agent executing FLR DCPs on a medium file server. Use this information to schedule FLR DCPs efficiently based on your resources.

Tests were conducted on hosts having the following configurations:

◆ Windows: 2x3.0 GHz processors with 2 GB RAM, running Windows 2003 Enterprise Edition, SP1.

◆ Solaris: 2x1.5 GHz processors with 4 GB of RAM, running Solaris 10 (SunFire v240).

◆ HP-UX: 2x1.0 GHz processors with 4 GB of RAM, running HP-UX B11.11 (rp 3440).

◆ AIX: 2x1.89 GHz processors with 4 GB RAM, running AIX 5.3 (eServer P5 model 510).

Table 41 SRM hardware requirements

Operating System | CPU # | Speed | Memory

Windows | 1 | 500 MHz | 128 MB


Maintenance Guidelines

Guideline M1 Routinely review storage utilization by the StorageScope Repository. The emcsts_tbspusage.bat file runs automatically daily between 7:00 p.m. and 7:00 a.m. Check the rpt_emcsts_tbspusage.log file routinely to ensure that your StorageScope Repository is not running out of storage. Use the ETL Data Removal Schedule to strike an appropriate balance between demand for disk space and availability of file-level data for reporting. You can also use this scheduler to purge data on demand, freeing disk space to avoid potential 'out of disk space' situations.

Table 42 Host agent system resource utilization during FLR data collection

Agent type: Host Agent. Default polling cycle for all FLR DCPs: once daily at 2 a.m.

FLR (Scope of data collection = Folders only; Collect summary information on file types = No)
  Duration (mm:ss): Windows 01:57, Solaris 06:06, HP-UX 04:16, AIX 05:00, Linux 08:49
  Avg. CPU usage: Windows 34%, Solaris 63%, HP-UX 72%, AIX 68%, Linux 31%
  Estimated memory usage: Windows 157 MB, Solaris 94 MB, HP-UX 93 MB, AIX 107 MB, Linux 95 MB

FLR (Scope of data collection = Folders only; Collect summary information on file types = Yes)
  Duration (mm:ss): Windows 02:09, Solaris 05:49, HP-UX 04:35, AIX 05:14, Linux 07:32
  Avg. CPU usage: Windows 36%, Solaris 66%, HP-UX 73%, AIX 71%, Linux 33%
  Estimated memory usage: Windows 155 MB, Solaris 94 MB, HP-UX 93 MB, AIX 112 MB, Linux 116 MB

FLR (Scope of data collection = All Files and Folders; Collect summary information on file types = Yes)
  Duration (mm:ss): Windows 08:00, Solaris 07:47, HP-UX 09:37, AIX 06:09, Linux 14:06
  Avg. CPU usage: Windows 12%, Solaris 65%, HP-UX 61%, AIX 68%, Linux 31%
  Estimated memory usage: Windows 206 MB, Solaris 123 MB, HP-UX 157 MB, AIX 134 MB, Linux 141 MB

FLR (Scope of data collection = Exceptional Files and Folders; Collect summary information on file types = Yes)
  Duration (mm:ss): Windows 04:18, Solaris 07:00, HP-UX 04:53, AIX 05:05, Linux 11:23
  Avg. CPU usage: Windows 37%, Solaris 63%, HP-UX 78%, AIX 74%, Linux 48%
  Estimated memory usage: Windows 247 MB, Solaris 257 MB, HP-UX 221 MB, AIX 146 MB, Linux 207 MB


Hardware configuration examples

Table 43 lists examples of the recommended specifications for single-core and multi-core processor hosts. These hosts were used in the lab to measure ControlCenter performance and scalability.

Table 43 System components for dual processor configurations

Part | Multi-core processor host | Single-core processor host

SPEC Score | SPEC CINT2000 Rate (baseline) = 123; SPEC CINT2006 Rate (baseline) = 61 | SPEC CINT2000 Rate (baseline) = 26

Base Unit Processor | Dual-core Intel Xeon 5160 3.0 GHz processor, 4 MB cache, 1333 MHz FSB | Intel Xeon, 3.06 GHz with 1 MB cache

Second Processor | Dual-core Intel Xeon 5160 3.0 GHz processor, 4 MB cache, 1333 MHz FSB | Intel Xeon, 3.06 GHz with 1 MB cache

Memory | 4 GB, 533 MHz Dual Ranked DIMMs (4x1 GB) | 2 GB DDR SDRAM 266 MHz (2x1 GB)

Hard Drive | 146 GB, SAS, 3.5", 10K RPM | 73 GB, 10K RPM, Ultra 320 SCSI hard drive

Hard Drive Controller | Dell PERC 5/i Integrated RAID Controller | RAID on motherboard, PERC3-DI 128 MB

Floppy Disk Drive | 1.44 MB, 3.5", internal | 1.44 MB, 3.5" floppy drive

Operating System | Windows 2000 Server or Windows 2003 Enterprise Edition, SP2 | Windows 2000 Server or Windows 2003 Enterprise Edition, SP2

NIC | Intel PRO 1000PT Dual Port Server Adapter, Gigabit NIC, CU, PCIe x4 | Dual on-board NICs

CD-ROM or DVD-ROM Drive | 24x IDE CD-RW/DVD-ROM drive | 24x IDE internal CD-ROM drive

Additional Storage Products | 146 GB, SAS, 3.5", 10K RPM | 73 GB, 10K RPM, Ultra 320 SCSI hard drive

Miscellaneous | N/A | N/A


Understanding SPEC

What is SPEC The Standard Performance Evaluation Corporation (SPEC) is a non-profit corporation formed to establish, maintain, and endorse a standardized set of relevant benchmarks that can be applied to the newest generation of high-performance computers. SPEC develops benchmark suites and also reviews and publishes submitted results from its member organizations and other benchmark licensees.

What is a benchmark

A computer benchmark is typically a computer program that performs a strictly defined set of operations (a workload) and returns some form of result (a metric) describing how the tested computer performed. Computer benchmark metrics usually measure speed (how fast the workload was completed) or throughput (how many workload units were completed per unit of time). Running the same benchmark on multiple computers allows a comparison to be made.

How does it work The basic SPEC methodology provides the benchmarker with a standardized suite of source code based upon existing applications that has already been ported to a wide variety of platforms by its membership. The benchmarker then takes this source code, compiles it for the system in question and then can adjust the system for the best results. The use of already accepted and ported source code greatly reduces the problem of making comparisons of non-like items.

Available SPEC benchmarks

SPEC has benchmarks for CPU, Mail Server, Web Server, File Systems, etc. Refer to www.spec.org for more information.

Note: ControlCenter uses the CPU benchmark.


How do I find SPEC score?

If you plan to compare your host's performance with the equipment used in the lab to measure ControlCenter performance and scalability, the information provided on the SPEC website (www.spec.org) can help. Two distinct CPU benchmark results are available on this website. Hosts using older technology (for example, single-core hosts) are benchmarked using SPEC CPU2000. That benchmark was retired in February 2007, and a newer benchmark (SPEC CPU2006) is now used by most hardware vendors.

Follow these steps to run the comparison:

1. Navigate to the SPEC website at www.spec.org.

2. From the top navigation bar choose "Result", "CPU2006" then "Search CPU2006 Results."

3. From the drop down list for Available Configurations, select "SPECint2006 Rates."

4. Select "Advanced" radio button for Search Form Request and click on "Go!"

5. Fill out the form with appropriate information (e.g. Hardware Vendor, System, # Cores, # Chips, # Cores Per Chip).

Note: In SPEC nomenclature, the term for "Processor" is "Chip".

6. Submit the search by clicking "Fetch Results" at the bottom of the page.

7. The value listed in the Baseline column for your host is the value to compare against the minimum specified in this document.

If the CPU2006 benchmark is not available, try the CPU2000 benchmark. The steps to find the result for CPU2000 benchmark are the same as CPU2006.

Note: Results for CPU2000 cannot be compared with or extrapolated to CPU2006, and vice versa.


Copyright © 2009 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

All other trademarks used herein are the property of their respective owners.
