
White Paper

    EMC Solutions Group

    Abstract

    This solution illustrates the benefits of deploying EMC FAST Suite for Oracle OLTP databases in an optimized, scalable virtual environment. An Oracle Real Application Clusters (RAC) 11g database accesses an EMC VNX7500 array using Oracle Direct NFS (dNFS) client. This enables simplified configuration, improved performance, and enhanced availability. EMC SnapSure technology and the Oracle dNFS clonedb feature enable rapid provisioning of Oracle databases. VMware vSphere provides the virtualization platform.

    December 2012

    EMC VNX7500 SCALING PERFORMANCE FOR ORACLE 11gR2 RAC ON VMWARE VSPHERE 5.1 EMC VNX7500, EMC FAST Suite, EMC SnapSure, and Oracle RAC

    Automate performance Scale OLTP workloads Rapidly provision Oracle databases


    Copyright 2012 EMC Corporation. All Rights Reserved.

    EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

    The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

    Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

    For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

    VMware, VMware vSphere, vCenter, ESX, and ESXi are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions. All other trademarks used herein are the property of their respective owners.

    Part Number H11210


    Table of contents

Executive summary
    Business case
    Solution overview
    Key results

Introduction
    Purpose
    Scope
    Audience
    Terminology

Technology overview
    Introduction
    EMC VNX7500
    EMC FAST Suite (FAST VP, FAST Cache)
        EMC FAST Cache
        EMC FAST VP
    EMC SnapSure
    VMware vSphere
    Oracle RAC
    Oracle Direct NFS client

Solution architecture
    Introduction
    Hardware resources
    Software resources
    Oracle storage layout
    Oracle file system allocation on VNX7500
    Oracle dNFS client configuration

Configuring Oracle databases
    Database and workload profile
    Oracle database schema
    Enable HugePages

Configuring FAST Cache on EMC VNX7500
    Overview
    Analyze the application workload
    FAST Cache best practices for Oracle

Configuring FAST VP on EMC VNX7500
    Overview
        Tiering policies
        Start high then auto-tier (default policy)
        Auto-tier
        Highest available tier
        Lowest available tier
        No data movement
    Configure FAST VP

VMware ESX server configuration
    Overview
    Step 1: Create virtual switches
    Step 2: Configure the virtual machine template
    Step 3: Deploy the virtual machines
    Step 4: Enable access to the storage devices
    Step 5: Enable Jumbo frames
        Data mover
        vDS
        Linux Server

Node scalability test
    Test objective
    Test procedure
    Test results

FAST Suite test
    FAST Suite and manual tiering comparison
    FAST Cache test
        FAST Cache warm-up
        FAST Cache test procedure
    FAST VP test
        FAST VP moving data across tiers
        FAST VP test procedure
    FAST Suite test
        FAST Suite test procedure
    Test results
        FAST Suite effects on database transactions per minute
        FAST Suite effects on read response time
        Wait statistics from Oracle AWR reports
        Statistics from Unisphere for VNX

dNFS clonedb test
    Test objective
    Test procedure
    Test results

Resilience test
    Test objective
    Test procedures
        Physical NIC failure
        Data mover panic
    Test results
        Physical NIC failure
        Data mover panic

Conclusion
    Summary
    Findings

References
    White papers
    Product documentation
    Other documentation


    Executive summary

Business case

Mission-critical Oracle applications have service levels that require high performance, a fast end-user experience (low latency), and resilience. As a result, Oracle environments must address an increasingly broad range of business demands, including the ability to:

    Scale Oracle online transaction processing (OLTP) workloads for performance.

VMware vSphere 5.1 enables efficient use of the physical server hardware (database servers) by providing extensibility and scalability of the virtual environment in the following ways:

Larger virtual machines: Virtual machines can grow two times larger than in any previous release to support the most advanced applications.

    Virtual machines can now have up to 64 virtual CPUs (vCPUs) and 1TB of virtual RAM (vRAM).

    Maximize performance while reducing the cost of ownership of the system.

    The Oracle Database 11g Direct NFS (dNFS) client enables both resilience and performance for Oracle databases as a standard feature of the Oracle Database stack.

    The Oracle Database 11g dNFS client is optimized for Oracle workloads and provides a level of load-balancing and failover that significantly improves the availability and performance in a deployed NAS database architecture.

    Performance is further improved by load balancing across multiple network interfaces (if available).

    Free database administrators (DBAs) from the complex, repetitive, and disruptive manual processes associated with traditional methods of using Flash drive technology.

    EMC FAST Suite automatically and nondisruptively tunes an application, based on the access patterns.

    FAST Cache services active data with fewer Flash drives, while Fully Automated Storage Tiering for Virtual Pools (FAST VP) optimizes disk utilization and efficiency with Serial Attached SCSI (SAS) and Near-Line SAS (NL-SAS) drives.

Deploying an Oracle NAS solution with a 10 Gb Ethernet fabric on the EMC VNX7500 delivers both infrastructure cost efficiencies and people-and-process cost efficiencies compared with a block-based storage architecture.

    Meet rapid on-demand Oracle provisioning requirements to create, deploy, and manage numerous production, development, and testing environments.

Solution overview

This solution addresses all these challenges for a scalable virtualized Oracle Real Application Clusters (RAC) 11g database deployment.



    This solution uses the following technologies to support the demands of the growing enterprise infrastructure:

    EMC VNX7500 series

    EMC Unisphere

    EMC Fully Automated Storage Tiering for Virtual Pools (FAST VP)

    EMC FAST Cache

    EMC SnapSure checkpoint

    VMware vSphere

    Oracle Direct NFS (dNFS) client

    Oracle dNFS clonedb

    Technologies such as simplified storage management and fully automated storage tiering provide an infrastructure foundation that meets the following business needs:

Efficiency: Automate Oracle performance tuning

    With FAST VP and FAST Cache enabled, the storage array continuously tunes an application, based on the access patterns.

Cost savings: Improve the total cost of ownership (TCO)

    FAST Cache can manage active data with fewer Flash drives, while FAST VP optimizes disk utilization and efficiency across SAS and NL-SAS drives.

Scalability: Support growing Oracle workloads that require increasingly high I/Os per second (IOPS) by scaling out virtual Oracle RAC nodes with the Oracle dNFS client and the latest 10 Gigabit Ethernet (GbE) data center technology.

Agility: Rapidly clone Oracle environments, such as test, development, and patching databases, by using Oracle dNFS clonedb technology.

    vSphere 5.1 provides the following virtual machine-related enhancements:

Support for up to 64 vCPUs per virtual machine, doubling the number of supported vCPUs from vSphere 5.0 (32 vCPUs)

    Enhanced CPU virtualization, enabling the passing of low-level CPU counters and other physical CPU attributes directly to the virtual machine, where they can be accessed by the guest OS

Key results

This solution demonstrates the following key results:

    Performance improvement with FAST Suite:

    2 times improvement in transactions per minute (TPM)

    3.5 times improvement in IOPS

92 percent hit ratio after the FAST Cache warm-up period

Simple management: Only a few steps are required to configure FAST VP and FAST Cache. Customers can enable or disable FAST Cache and FAST VP without affecting system operation.



Nondisruptive performance: FAST VP and FAST Cache can identify hot data automatically and nondisruptively. This frees Oracle database administrators (DBAs) from the complex, repetitive, and manual processes of tuning the storage.

Scalability: Customers can easily and nondisruptively scale out Oracle virtual RAC nodes as application needs evolve, enabling them to take an incremental approach to address growing workload needs.

Agility: EMC SnapSure checkpoint and the Oracle dNFS clonedb feature enable Oracle DBAs to rapidly deploy additional database copies from a production database for testing, development, or other purposes, while minimizing the storage capacity requirements for those additional database instances.

Resilience: The EMC VNX standby data mover and the Oracle dNFS client enable high availability for Oracle RAC databases. The database remains up during a physical NIC failure or a data mover panic, providing a resilient database with automatic failover.


    Introduction

Purpose

This white paper describes how Oracle OLTP applications can use EMC FAST technology with RAC databases to achieve scalability, performance, and resilience in a virtual environment running VMware vSphere 5.1 on EMC VNX storage.

Scope

The scope of the white paper is to:

    Introduce the key solution technologies.

    Describe the solution architecture and design.

    Describe the solution scenarios and present the results of validation testing.

    Identify the key business benefits of the solution.

Audience

This white paper is intended for chief information officers (CIOs), data center directors, Oracle DBAs, storage administrators, system administrators, virtualization administrators, technical managers, and any others involved in evaluating, acquiring, managing, operating, or designing Oracle database environments.

Terminology

This paper includes the following terminology.

    Table 1. Terminology

    Acronym Term

    AWR Automatic Workload Repository

    dNFS Direct NFS

    FAST VP Fully Automated Storage Tiering for Virtual Pools

    FC Fibre Channel

    IOPS I/Os per second

    LUN Logical unit number

    NIC Network interface card

    NFS Network file system

    ODM Oracle Disk Manager

    OLTP Online transaction processing

    PFS Production file system

    vDS vNetwork Distributed Switch

    RAC Real Application Clusters

    SAS Serial Attached SCSI

    SCSI Small Computer System Interface

    SGA System global area



    TCO Total cost of ownership

    TPM Transactions per minute

    VNX OE VNX operating environment


    Technology overview

Introduction

The solution uses the following hardware and software components:

    EMC VNX7500

    EMC FAST Suite

    EMC SnapSure

    VMware vSphere

    Oracle Database 11g Release 2 Enterprise Edition with Oracle Clusterware

    Oracle dNFS client

EMC VNX7500

The VNX7500 is a member of the VNX series next-generation storage platform, which is powered by Intel quad-core Xeon 5600 series processors and delivers five-nines (99.999 percent) availability. The VNX series is designed to deliver maximum performance and scalability for enterprises, enabling them to dramatically grow, share, and cost-effectively manage multiprotocol file and block systems.

    The VNX operating environment (VNX OE) allows Microsoft Windows and Linux/UNIX clients to share files in multiprotocol NFS and Common Internet File System (CIFS) environments. VNX OE also supports iSCSI, FC, and Fibre Channel over Ethernet (FCoE) access for high-bandwidth and latency-sensitive block applications.

EMC FAST Suite (FAST VP, FAST Cache)

The FAST Suite for VNX arrays includes FAST Cache and FAST VP.

    EMC FAST Cache

    FAST Cache uses Flash drives to add an extra layer of cache between the dynamic random access memory (DRAM) cache and rotating disk drives, thereby creating a faster medium for storing frequently accessed data. FAST Cache is an extendable, read/write cache. It boosts application performance by ensuring that the most active data is served from high-performing Flash drives and can reside on this faster medium for as long as is needed.

    EMC FAST VP

    FAST VP is a policy-based, auto-tiering solution for enterprise applications. FAST VP operates at a granularity of 1 GB, referred to as a "slice". The goal of FAST VP is to efficiently use storage tiers to lower TCO by tiering colder slices of data to high-capacity drives, such as NL-SAS, and to increase performance by keeping hotter slices of data on performance drives, such as Flash drives. This process occurs automatically and transparently to the host environment.

EMC SnapSure

SnapSure enables you to create point-in-time, logical images of a production file system (PFS). SnapSure uses a "copy on first modify" principle. When a block within the PFS is modified, SnapSure saves a copy of the block's original contents to a separate volume called the SavVol. Subsequent changes made to the same block in the PFS are not copied into the SavVol. SnapSure reads the original blocks preserved in the SavVol, and the unchanged blocks remaining in the PFS, according to a bitmap and blockmap data-tracking structure. These blocks combine to provide a complete point-in-time image called a checkpoint.

VMware vSphere

VMware vSphere provides the virtualization platform for the virtual machines hosting the Oracle RAC nodes in this environment.

VMware vSphere abstracts applications and information from the complexity of the underlying infrastructure. It is the industry's most complete and robust virtualization platform, virtualizing business-critical applications with dynamic resource pools for unprecedented flexibility and reliability.

    VMware vCenter provides the centralized management platform for vSphere environments, enabling control and visibility at every level of the virtual infrastructure.

Oracle RAC

Oracle RAC extends Oracle Database so that you can store, update, and efficiently retrieve data using multiple database instances on different servers at the same time. Oracle RAC provides the software that manages multiple servers and instances as a single group.

Oracle Direct NFS client

Oracle Direct NFS Client (dNFS) is an alternative to using kernel-managed NFS. With Oracle Database 11g release 2 (11.2), instead of using the operating system kernel NFS client, you can configure an Oracle Database to access NFS V3 servers directly using an Oracle internal dNFS client. This native capability enables direct I/O with the storage devices, bypassing the operating system file cache and reducing the need to copy data between the operating system and database memory. The dNFS client also enables asynchronous I/O access to NFS appliances.

    Oracle dNFS uses simple Ethernet for storage connectivity. This eliminates the need for expensive, redundant host bus adaptors (such as FC HBA) or FC switches. In addition, since Oracle dNFS implements multipath I/O internally, there is no need to configure bonded network interfaces (such as EtherChannel or 802.3ad Link Aggregation) for performance or availability. This results in additional cost savings, as most NIC bonding strategies require advanced Ethernet switch support.



    Solution architecture

Introduction

This virtualized Oracle Database 11g NFS solution is designed to test and document:

    Node scalability

    Performance of FAST Suite

    Provisioning of test/development environments

    Resilience of an Oracle OLTP RAC database configured using dNFS

We[1] carried out the testing on an Oracle RAC 11g database using a VNX7500 array as the underlying storage. VMware vSphere was used as the virtualization platform. The VNX array was configured as an NFS server, and the Oracle RAC nodes were configured to access the NFS server directly using the Oracle internal dNFS client.

    Figure 1 depicts the architecture of the solution. With VMware vSphere version 5.1 installed, the ESXi server farm for the Oracle database consists of two ESXi servers; four virtual machines (two on each ESXi server) were deployed as a four-node RAC database. At Oracle Support's suggestion, we deployed Oracle RAC 11.2.0.3 for this virtualized solution. The storage and cluster interconnect networks used 10 Gigabit Ethernet.

    Figure 1. Architecture overview

[1] In this white paper, "we" refers to the EMC solutions engineering team that deployed and validated the solution.



Hardware resources

Table 2 details the hardware resources for the solution.

    Table 2. Hardware resources

    Hardware Quantity Configuration

Storage array      1    EMC VNX7500 with:
                        2 storage processors, each with 24 GB of cache
                        75 x 300 GB 10k 2.5-inch SAS drives
                        4 x 300 GB 15k 3.5-inch SAS drives (vault disks)
                        11 x 200 GB 3.5-inch Flash drives
                        4 data movers (2 primary and 2 standby)
                        Dual-port 10 GbE for each data mover

ESXi server        2    4 x 8-core CPUs, 256 GB RAM, 2 x dual-port 1 Gb/s Ethernet NICs, 2 x dual-port 10 Gb/s CNA NICs

Ethernet switch    2    10 Gb/s Ethernet switches (for the interconnect/storage network)
                   2    1 Gb/s Ethernet switches (for the public network)

Software resources

Table 3 details the software resources for the solution.

    Table 3. Software resources

    Software Version Purpose

    EMC VNX OE for block 05.32.000.5.011 VNX operating environment

    EMC VNX OE for file 7.1.55-3 VNX operating environment

    Unisphere 1.2.0.1.0556 VNX management software

    Oracle Grid Infrastructure 11.2.0.3 Oracle ASM, Oracle Clusterware, and Oracle Restart

    Oracle Database 11.2.0.3 Oracle Database and Oracle RAC

    Oracle Enterprise Linux 6.3 Database server OS

    VMware vSphere 5.1 Hypervisor hosting all virtual machines

    VMware vCenter 5.1 Management of VMware vSphere

Swingbench 2.4 TPC-C-like benchmark tool



Oracle storage layout

The disk configuration uses four back-end 6 Gb/s SAS ports within the VNX7500 storage array. Figure 2 illustrates the disk layout of the environment.

    Figure 2. Disk layout

Note: A Cluster Ready Services (CRS) pool was deployed on the vault disks because of their low I/O activity.

    Figure 3 shows a logical representation of the layout of the file system used for the Oracle datafiles. We used four data movers in a 2+2 active/standby configuration. Two active data movers were used to access the file systems, which were distributed evenly across the four SAS ports. The back-end configuration was based on the I/O requirements.



    Figure 3. Datafile system logical view

Unisphere provides a simple GUI to create and manage the file systems. Figure 4 shows the usage of each file system and its serving data mover. The layout is well balanced for the workload.

    Figure 4. The file system information panel in Unisphere

Oracle file system allocation on VNX7500

Table 4 details the Oracle file system storage allocation on the VNX7500. All the storage pools were created on 300 GB 10k SAS drives.

    Table 4. Oracle file system allocation on VNX7500

File type                  RAID type     No. of LUNs   Disk volumes (dVols)   Size     Data mover
Datafiles, control files   4+1 RAID 5    10            D1 to D10              2.5 TB   Server2
                                                                              2.5 TB   Server3
Temp files                 4+1 RAID 5    2             D22                    133 GB   Server2
                                                       D23                    133 GB   Server3
Redo logs                  2+2 RAID 10   2             D24                    100 GB   Server2
                                                       D25                    100 GB   Server3
FRA files                  4+1 RAID 5    10            D11 to D20             4 TB     Server2
CRS files                  2+2 RAID 10   1             D21                    5 GB     Server2

Oracle dNFS client configuration

The Oracle dNFS client is a standard feature of Oracle Database 11g and provides improved performance and resilience over OS-hosted NFS. It can fail over automatically across the 10 GbE fabric and performs concurrent I/O that bypasses operating system caches and OS write-order locks.

    dNFS also performs asynchronous I/O that allows processing to continue while the I/O request is submitted and processed.

The Oracle database needs to be configured to use the Oracle dNFS client ODM disk libraries. This is a one-time operation and, once set, the database uses the Oracle-optimized, native Oracle dNFS client rather than the operating system's hosted NFS client.

    The standard ODM library was replaced with one that supports the dNFS client. Figure 5 shows the commands that enable the dNFS client ODM library.

    Figure 5. Enable the dNFS client ODM library
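The commands in Figure 5 are a screenshot and are not reproduced in this extract. For reference, the documented way to switch an Oracle 11g R2 home to the dNFS ODM library is shown below (run as the oracle user, with the database instances on that home shut down); the dnfs_off target reverts the change:

[oracle@vm-01 ~]$ cd $ORACLE_HOME/rdbms/lib
[oracle@vm-01 lib]$ make -f ins_rdbms.mk dnfs_on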

We configured the Oracle dNFS client for the virtual environment. We mounted the Oracle file systems and made them available over regular NFS mounts. The Oracle dNFS client used the oranfstab configuration file to determine the mount point settings for the NFS storage devices. Figure 6 shows an extract from the oranfstab file used for this solution.
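As background, before dNFS takes over I/O the file systems are mounted through the kernel NFS client with the Oracle-recommended mount options for datafiles on Linux. The following /etc/fstab entry is an illustrative sketch only; the data mover hostname, export path, and mount point are hypothetical:

vnx-dm2:/oradata_fs1  /u02/oradata1  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0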



    Figure 6. Extract from oranfstab configuration file
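The oranfstab extract in Figure 6 is an image in the original document. The following is an illustrative sketch of what such an entry looks like for one data mover, based on the documented oranfstab format; the server name, IP addresses, export, and mount point are hypothetical:

server: vnx-dm2
local: 192.168.2.21
path: 192.168.2.10
local: 192.168.4.21
path: 192.168.4.10
export: /oradata_fs1 mount: /u02/oradata1

The file is placed in $ORACLE_HOME/dbs/oranfstab (per database home) or /etc/oranfstab (server-wide), with one such block per NFS server.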

    Once configured, the management of dNFS mount points and load balancing is controlled from oranfstab and not by the OS.
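One way to confirm that the database is actually using dNFS rather than kernel NFS (not shown in the original figures) is to query the dNFS dynamic performance views after the instance has opened files on the NFS storage, for example:

SQL> select svrname, dirname from v$dnfs_servers;
SQL> select distinct path from v$dnfs_channels;

The instance alert log also records a message similar to "Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 3.0" when the dNFS ODM library is in use.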


    Configuring Oracle databases

Database and workload profile

Table 5 details the database and workload profile for this solution.

    Table 5. Database and workload profile

    Profile characteristic Details

    Database type OLTP

    Database size 2 TB

    Oracle RAC 4 nodes

    Oracle SGA for each node 12 GB

    Database performance metric TPM

    Database read/write ratio 60/40

Oracle database schema

This solution applied a simulated OLTP workload by scaling users with Swingbench. We populated a 2 TB database. One TB of data was accessed by sessions running on the four nodes: vm-01, vm-02, vm-03, and vm-04. The other 1 TB of schema data was left idle to simulate a more realistic skew in the dataset.

Enable HugePages

HugePages is crucial for Oracle database performance on Linux when the server has a large amount of RAM and a large SGA. Configure HugePages if the combined database SGAs are large (more than 8 GB); it can also be important for smaller SGA sizes. The advantages of enabling HugePages include:

    Larger page size and fewer pages

    Better overall memory performance

    No swapping

    No 'kswapd' operations

    See Oracle MetaLink Note ID 361468.1 for details about HugePages on Oracle Linux 64 bit.

    We performed the following steps to tune the HugePages parameters for optimal performance:

1. Ran the script hugepages_settings.sh to calculate the values recommended for Linux HugePages. The database must be running when this script is run.

    For more information, see Oracle MetaLink Note ID 401749.1.

    2. Set the vm.nr_hugepages parameter in /etc/sysctl.conf to the recommended size. In this solution, we used 6145 to accommodate an SGA of 12 GB.

    3. Restarted the database.

    4. Checked the values of the HugePages parameters using the following command:

    [oracle@vm-01 ~]$ grep Huge /proc/meminfo




    On our test system, this command produced the following output:

AnonHugePages:    903168 kB
HugePages_Total:    6145
HugePages_Free:     4956
HugePages_Rsvd:     4956
HugePages_Surp:        0
Hugepagesize:       2048 kB
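For reference, the corresponding kernel and shell-limit settings are sketched below. The memlock values are illustrative and assume the 12 GB SGA and 2 MB huge page size used in this solution:

# /etc/sysctl.conf -- 12 GB SGA / 2 MB page size = 6144 pages; one extra page gives 6145
vm.nr_hugepages = 6145

# /etc/security/limits.conf -- allow the oracle user to lock the SGA in memory (values in KB)
oracle soft memlock 12582912
oracle hard memlock 12582912

Apply the kernel setting with sysctl -p (or a reboot) before restarting the database so that the SGA is allocated from huge pages.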


    Configuring FAST Cache on EMC VNX7500

Overview

FAST Cache uses Flash drives to add an extra layer of cache between the dynamic random access memory (DRAM) cache and rotating disk drives, thereby creating a faster medium for storing frequently accessed data. FAST Cache is an extendable, read/write cache. It boosts application performance by ensuring that the most active data is served from high-performing Flash drives and can reside on this faster medium for as long as is needed.

    FAST Cache is most effective when application workloads exhibit high data activity skew. This is where a small subset of data is responsible for most of the dataset activities. FAST Cache is more effective when the primary block reads and writes are small and fit within the 64 K FAST Cache track. The storage system is able to take advantage of such data skew by dynamically placing data according to its activity. For those applications whose datasets exhibit a high degree of skewing, FAST Cache can be assigned to concentrate a high percentage of application IOPS on Flash capacity.

    This section discusses using FAST Cache and outlines the main steps we carried out to configure and enable FAST Cache for this solution. You can perform the configuration steps using either the Unisphere GUI or the Unisphere command line interface (CLI). For further information about configuring FAST Cache, see Unisphere Help in the Unisphere GUI.

Analyze the application workload

Before you decide to implement FAST Cache, you must analyze the application workload characteristics. Array-level tools are available to EMC field and support personnel for determining both the suitability of FAST Cache for a particular environment and the right cache size to configure. Contact your EMC sales team for guidance.

Whether a particular application can benefit from using FAST Cache, and what the optimal cache size should be, depends on the size of the application's active working set, the access pattern, the IOPS requirement, the RAID type, and the read/write ratio. As indicated in the EMC FAST Cache section of the Technology overview, the workload characteristics of OLTP databases make them especially suitable for FAST Cache. For further information, see the white papers EMC FAST Cache: A Detailed Review and Deploying Oracle Database 11g Release 2 on EMC Unified Storage.

    For this solution, we performed an analysis using the EMC array-level tools, which recommended using FAST Cache and four 200 GB Flash drives as the optimal configuration.
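As an illustration of the CLI path mentioned above, FAST Cache built from four Flash drives can be created and then enabled on the datafile pool with naviseccli. This is a sketch only: the storage processor IP, disk IDs, and pool name are hypothetical, and option syntax can vary between VNX OE releases, so verify against the Unisphere CLI reference for your array:

# Create FAST Cache from four Flash drives (RAID 1 mirrored pairs)
naviseccli -h <SP_A_IP> cache -fast -create -disks 0_1_6 0_1_7 1_1_6 1_1_7 -mode rw -rtype r_1

# Verify the FAST Cache state
naviseccli -h <SP_A_IP> cache -fast -info

# Enable FAST Cache for the pool that holds the Oracle datafiles
naviseccli -h <SP_A_IP> storagepool -modify -name "Oracle_Data_Pool" -fastcache on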

FAST Cache best practices for Oracle

The following are recommended practices:

    Disable FAST Cache on pool/LUNs that do not require it.

    Size FAST Cache appropriately, depending on the applications active dataset.

    Disable FAST Cache on pool/LUNs where Oracle online redo logs reside.

    Never enable FAST Cache on archive logs, because these files are never overwritten and are rarely read back.



EMC recommends that you enable FAST Cache for the Oracle datafiles only. Oracle archive files and redo log files have a predictable workload composed mainly of sequential writes. The array's write cache and the assigned HDDs can efficiently handle these archive files and redo log files. Enabling FAST Cache on these files is neither beneficial nor cost effective.


    Configuring FAST VP on EMC VNX7500

Overview

FAST VP is a game-changing technology that provides compelling advantages over traditional tiering options. It combines the advantages of automated storage tiering with Virtual Provisioning to optimize performance and cost while radically simplifying management and increasing storage efficiency.

    Like FAST Cache, FAST VP works best on datasets that exhibit a high degree of skew. FAST VP is very flexible and supports several tiered configurations, such as single tiered, multitiered, with or without a Flash tier, and FAST Cache support. Adding a Flash tier can locate hot data on Flash storage in 1 GB slices.

FAST VP can be used to aggressively reduce TCO, to increase performance, or both. A target workload that requires a large number of performance-tier drives can be serviced with a mix of tiers and a much lower drive count. In some cases, you can achieve an almost two-thirds reduction in drive count. In other cases, performance throughput can double by adding less than 10 percent of a pool's total capacity in Flash drives.

    You can use FAST VP in combination with other performance optimization software, such as FAST Cache. A common strategy is to use FAST VP to gain TCO benefits while using FAST Cache to boost overall system performance. There are other scenarios where it makes sense to use FAST VP for both purposes. This paper discusses considerations for an optimal deployment of these technologies.

For further information on the FAST VP algorithm and policies, see EMC FAST VP for Unified Storage Systems.

Tiering policies

FAST VP includes the following tiering policies:

    Start high then auto-tier (default policy)

    Auto-tier

    Highest available tier

    Lowest available tier

    No data movement

    Start high then auto-tier (default policy)

Start high then auto-tier is the default setting for all pool LUNs on their creation. Initial data placement is on the highest available tier, and data movement is subsequently based on the activity level of the data. This tiering policy maximizes initial performance and takes full advantage of the most expensive and fastest drives first, while providing subsequent TCO benefits by allowing less active data to be tiered down, making room for more active data in the highest tier.

    When a pool has multiple tiers, the start high then auto-tier design is capable of relocating data to the highest available tier regardless of the drive type combination. Also, when adding a new tier to a pool, the tiering policy remains the same and there is no need to manually change it.



    Auto-tier

    FAST VP relocates slices of LUNs based solely on their activity level after all slices with the highest/lowest available tier settings have been relocated. LUNs specified with the highest available tier setting have precedence over LUNs set to Auto-tier.

    Highest available tier

    Select the highest available tier setting for those LUNs which, although not always the most active, require high levels of performance whenever they are accessed. FAST VP prioritizes slices of a LUN with the highest available tier selected above all other settings.

Slices of LUNs set to the highest available tier are rank-ordered against each other according to activity. Therefore, in cases where the total LUN capacity set to the highest available tier is greater than the capacity of the pool's highest tier, the busiest slices occupy that capacity.

    Lowest available tier

    Select the lowest available tier for LUNs that are not performance-sensitive or response time-sensitive. FAST VP maintains slices of these LUNs on the lowest storage tier available, regardless of activity level.

    No data movement

    The no data movement policy may be selected only after a LUN has been created. FAST VP will not move slices from their current positions once the no data movement selection has been made. Statistics are still collected on these slices for use if and when the tiering policy is changed.

Configure FAST VP

In this solution, we set the Auto-Tiering policy to Scheduled.

For demonstration purposes, we configured the Data Relocation Schedule as Monday to Sunday, from 00:00 to 23:45. This determines the time window during which FAST VP moves data between tiers.

Note: The Data Relocation Rate and Data Relocation Schedule are highly dependent on the real workload in a customer environment. Usually, setting the Data Relocation Rate to Low has less impact on the currently running workload.

    Set the tiering policy for all LUNs containing datafiles to Auto-tier, so that FAST VP can automatically move the most active data to Flash drive devices.

For details of the FAST VP configuration, refer to EMC FAST VP for Unified Storage Systems: A Detailed Review.



    VMware ESX server configuration

Overview

As virtualization is now a critical component of an overall IT strategy, it is important to choose the right vendor. VMware is the leading business virtualization infrastructure provider, offering the most trusted and reliable platform for building private clouds and federating to public clouds.

    For the virtual environment, we configured two ESXi servers on the same server hardware. Two virtual machines were created on each ESXi server to form a four-node Oracle RAC cluster.

    We created the virtual machines using a VMware template. First we created an Oracle Linux 6.3 virtual machine and installed Oracle prerequisites and software. We then created a template of this virtual machine and used this to create the other virtual machines to be used as cluster nodes.

    We performed the following main steps to configure the ESXi servers:

    1. Created virtual switches for the cluster interconnects and the connection to the NFS server.

    2. Configured the virtual machine template.

    3. Deployed the virtual machines.

    4. Enabled virtual machine access to the storage devices.

Step 1: Create virtual switches

One standard vSwitch and four vNetwork Distributed Switches (vDS) were created on the ESXi servers. The standard vSwitch was a public network configured with two 1 Gb NICs for fault tolerance, as shown in Figure 7.

    Figure 7. Standard vSwitch configuration

    The vDS was used to manage the network traffic between different virtual machines and to manage the connections from the virtual machine to external data movers. Each vDS was configured with next generation 10 Gb Ethernet connectivity.



    As shown in Figure 8, a total of four virtual distributed switches were created.

    dvSwitch_interconnect was the private network dedicated to the Oracle cluster interconnects.

    dvSwitch_storage_1 and dvSwitch_storage_2 were private networks serving the two data movers of the NFS storage array.

    dvSwitch_storage_resil was created for storage redundancy to demonstrate the multipath function of Oracle dNFS.

    Figure 8. vDS configuration

    Each switch was created with a dvPort group and an uplink port group. The uplink port group was served by two uplinks. Each uplink used one physical NIC from each ESXi server, as shown in Figure 9.


    Figure 9. Detailed vDS configuration

Step 2: Configure the virtual machine template

The virtual machine template was configured in VMware vSphere Client according to the requirements and prerequisites for the Oracle software (see Table 6), including:

    Operating system and rpm packages

    Kernel configuration

    OS users

    Supporting software

    Table 6. Virtual machine template configuration

    Part Description

    CPU 8 vCPUs

    Memory 32 GB

    Operating system Oracle Linux Server release 6.3 (Santiago) 64-bit

    Kernel 2.6.39-200.24.1.el6uek

    Network interfaces Eth0: public/management IP network

    Eth1 (10 Gb): dedicated to cluster interconnect

    Eth2 (10 Gb): dedicated to NFS connection to Data Mover 2

    Eth3 (10 Gb): dedicated to NFS connection to Data Mover 3

    Eth4 (10 Gb): dedicated to NFS connection to Data Mover 2 as redundancy

    Eth5 (10 Gb): dedicated to NFS connection to Data Mover 3 as redundancy

    OS user (user created and password set)

    Username: oracle

    UserID: 1101

    OS groups Group: oinstall

    GroupID: 1000

    Group: dba

    GroupID: 1031



    Software pre-installed The script sshUserSetup.sh was copied from the Oracle Grid Infrastructure 11g R2 binaries to /home/oracle/sshUserSetup.sh.

    rpm packages installed (as Oracle prerequisites)

    See the relevant Oracle installation guide.

    Disk configuration 30 GB virtual disk for root, /tmp, and the swap space

    15 GB virtual disk for Oracle 11g R2 Grid and RAC Database binaries

Note: As of Oracle Grid Infrastructure 11.2.0.2, allow for an additional 1 GB of disk space per node for the Cluster Health Monitor (CHM) Repository. By default, this resides within the Grid Infrastructure home.

    System configuration (Oracle prerequisites)

    See the relevant Oracle Installation Guide:

    Oracle Real Application Clusters Installation Guide 11g Release 2 (11.2) for Linux

    Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux

Step 3: Deploy the virtual machines

We deployed three virtual machines from the template image stored in VMware vCenter. The Deploy Template wizard was used to specify the name and location of the new virtual machines and to select the option for customizing the guest operating system.

    We chose an existing customization specification (held in vCenter) to define the configuration of the network interfaces for new virtual machines, as shown in Figure 10.

    Figure 10. Deploy Template wizard



Step 4: Enable access to the storage devices

To enable host access using the Unisphere GUI, select the Create NFS Export option under Storage > Shared folder > NFS, and type the host IP addresses for each NFS export, as shown in Figure 11.

    Figure 11. Configure host access
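The same export can also be created from the VNX Control Station command line. The following is an illustrative sketch only; the file system name, client IP addresses, and data mover are hypothetical, so adapt them to your environment and confirm the options against the VNX command reference:

$ server_export server_2 -Protocol nfs -option rw=192.168.2.21:192.168.2.22,root=192.168.2.21:192.168.2.22 /oradata_fs1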

Step 5: Enable Jumbo frames

For Oracle RAC 11g installations, jumbo frames are recommended for the private RAC interconnect and storage networks. Jumbo frames boost throughput and can also lower the CPU utilization caused by the software overhead of bonding devices. Jumbo frames increase the device MTU to a larger value (typically 9,000 bytes).

    Jumbo frames are configured for four layers in a virtualized environment:

    VNX Data Mover

    vDS

    Oracle RAC 11g servers

    Physical switch

    Configuration steps for the switch are not covered here, as that is vendor-specific. Check your switch documentation for details.



    Data mover

    Figure 12 shows how to configure Jumbo frames on the data mover.

    Figure 12. Configure Jumbo frames on data mover
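Figure 12 is a screenshot; as a reference, the MTU can also be set from the VNX Control Station with server_ifconfig. The data mover interface name below is hypothetical:

$ server_ifconfig server_2 fxg-1-0 mtu=9000
$ server_ifconfig server_2 -all          # verify the new MTU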

    vDS

    Figure 13 shows how to configure Jumbo frames on a vDS.

    Figure 13. Configure Jumbo frames on vDS

    Linux Server

    To configure Jumbo frames on a Linux server, run the following command:

    ifconfig eth2 mtu 9000

Alternatively, to make the setting persistent, add the following line to the interface configuration file under /etc/sysconfig/network-scripts (for example, ifcfg-eth2):

    MTU=9000
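To verify end-to-end jumbo frame support from a database node (a quick check we suggest; the data mover IP address is hypothetical), send a non-fragmented 8972-byte ICMP payload, which is 9000 bytes minus 28 bytes of IP and ICMP headers:

ping -M do -s 8972 192.168.2.10

If any device in the path does not support jumbo frames, the ping fails with a fragmentation error instead of a reply.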


    Node scalability test

Test objective

The objective of this test was to demonstrate performance scalability as both nodes and users were scaled out on an Oracle RAC database with dNFS and 10 GbE in a virtualized environment. We ran an OLTP-like workload against a single node and then added users and nodes to show the scalability of both.

Test procedure

We used Swingbench to generate the OLTP-like workload (an illustrative Swingbench command line follows these steps). The testing included the following steps:

    1. Ran the workload on the first node by gradually increasing the number of concurrent users from 50 to 250 in increments of 50.

    2. Added the second node into the workload, and ran the same workload as in the previous step on each node separately. This means the total users scaled from 100 (50 on each node) to 500 (250 on each node).

    3. Repeated the previous two steps after adding the third and fourth nodes.

    4. For each user iteration, we recorded the front-end IOPS from Unisphere, the TPM from Swingbench, and the performance statistics from Oracle Automatic Workload Repository (AWR) reports.
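For reference, a Swingbench run of this kind is driven from the charbench command-line front end. The following invocation is a hypothetical sketch; the configuration file, connect string, credentials, and run time are placeholders, and option names are as described in the Swingbench 2.4 documentation:

./charbench -c oltp_config.xml -cs //scan-address/orcl -u soe -p soe -uc 250 -rt 00:30 -v users,tpm,tps

AWR snapshots can be taken at the start and end of each iteration (for example, with exec dbms_workload_repository.create_snapshot;) so that an AWR report covers exactly one user iteration.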

    Notes

Benchmark results are highly dependent on workload, specific application requirements, and system design and implementation. Relative system performance varies based on many factors. Therefore, you cannot use this workload as a substitute for a specific environment's application benchmark when making critical capacity planning or product evaluation decisions.

The testing team obtained all performance data in a rigorously controlled environment. Results obtained in other operating environments can vary significantly.

EMC Corporation does not guarantee that a user can achieve the TPM performance demonstrated here.

Test results

The Cache Fusion architecture of Oracle RAC immediately uses the CPU and memory resources of the new node(s). Thus, we can easily scale out system CPU and memory resources without affecting online users. The architecture provides a scalable computing environment that supports the application workload.

    Figure 14 shows the TPM that Swingbench recorded during the node scalability testing, scaling both nodes and concurrent users. We scaled the RAC database nodes from one to four. In each RAC configuration, we ran the Swingbench workload with 50, 100, 150, 200, and 250 users on each node. We observed a near-linear scaling of TPM from Swingbench as the concurrent user load increased along with the scale of nodes. The chart illustrates the benefits of using EMC VNX7500 storage with Oracle RAC and dNFS for achieving a scalable OLTP environment. Oracle RAC provides not only horizontal scaling, but also guaranteed continuous availability.



    Figure 14. Node scalability test

    EMC FAST Suite automatically optimized the storage to ensure the highest system performance at all times, thus helping to improve the system efficiency. FAST Cache working with FAST VP not only boosted the application performance but also provided improved TCO of the whole system. See the FAST Suite test section for the test results of enabling FAST Suite on the OLTP workload.


    FAST Suite test

    FAST Suite and manual tiering comparison

    Manual tiering involves a repeated process that can take nine hours or more to complete each time. In contrast, both FAST VP and FAST Cache operate automatically, eliminating the need to manually identify and move or cache the hot data. As shown in Figure 15, configuring FAST Cache is a one-off process that can take 50 minutes or less; hot and cold data is then cached in and out of FAST Cache continuously and automatically.

    Figure 15. FAST Suite and manual tiering comparison

    Note The time stated for configuring FAST VP is a conservative estimate. For details about configuring FAST VP, see the Configuring FAST VP on EMC VNX7500 section of this white paper.

    FAST Cache test

    FAST Cache boosts the overall performance of the I/O subsystem and works very well with Oracle dNFS in a virtualized Ethernet architecture. FAST Cache enables applications to deliver consistent performance by absorbing heavy read/write loads at Flash drive speeds.

    We configured four 200 GB Flash drives for FAST Cache. This provided a cache of 400 GB. We enabled FAST Cache for the storage pool that contains the database datafiles.
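FAST Cache can be configured through Unisphere or scripted with the VNX Block CLI. The following is a sketch only, with a hypothetical SP address and disk IDs; the exact naviseccli syntax should be confirmed against the Command Line Interface Reference for Block:

    # Create FAST Cache (RAID 1) from four Flash drives and check its state
    naviseccli -h <SP_IP> cache -fast -create -disks 0_0_4 0_0_5 1_0_4 1_0_5 -mode rw -rtype r_1
    naviseccli -h <SP_IP> cache -fast -info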

    FAST Cache warm-up

    FAST Cache requires some warm-up time before the I/O subsystem can achieve high performance. Figure 16 tracks the FAST Cache read/write hit ratio of the storage pool that stores the datafiles.



    Figure 16. FAST Cache warm-up period

    FAST Cache was empty when it was initially created. During the warm-up period, as more hot data was cached, the FAST Cache hit rate increased gradually. In this test, the write hit ratio increased to 92 percent and the read hit ratio increased gradually to 89 percent after a warm-up period of approximately four and a half hours. When the locality of the active data changes, the new data must be warmed up; this is normal, expected behavior and is fully automatic.

    FAST Cache test procedure

    To test the performance enhancement provided by FAST Cache, we ran the Swingbench workload on all of the four RAC nodes concurrently, with and without FAST Cache enabled.

    The test procedure included the following steps:

    1. Baseline testing:

    a. Ran the workload against the database from four RAC nodes at the same time without FAST Cache, and scaled it from 250 concurrent users to 750 on each node. The active data size was 1 TB, which was deployed on SAS drives only.

    b. Monitored the performance statistics, including average front-end IOPS and database TPM for each user iteration, from Oracle AWR reports and Unisphere.


    2. FAST Cache testing:

    a. Enabled FAST Cache on the storage array after the baseline testing, then ran the same workload and collected the same performance statistics as for the baseline. The running workload increased along with the number of users.

    b. After all the FAST Cache testing finished, we compared the performance data with the baseline to determine how much performance enhancement FAST Cache can offer.

    The results of the test are detailed in the Test results section.

    FAST VP test

    We created a two-tier FAST VP configuration with a mixed storage pool consisting of five Flash drives and 40 SAS drives on the VNX7500. FAST VP automatically relocated the LUN data from one disk tier to another within the pool.

    FAST VP moving data across tiers

    Initially, all datafiles were placed on SAS devices, as shown in Figure 17.

    Figure 17. Tier status before data movement

    With the workload running against the database from all four RAC nodes at the same time for a few hours, FAST VP monitored the I/O and relocated data according to the applied tiering policy. The load continued until the storage pool reached a steady state, sustaining the performance levels observed, as shown in Figure 18.

    Figure 18. FAST VP in a steady state



    FAST VP test procedure

    To test the performance enhancement provided by FAST VP, we enabled FAST VP and then ran the same workload as that used in the FAST Cache testing.

    The test procedure included the following steps:

    1. Ran the same workload and collected the same performance statistics as we did on the FAST Cache testing baseline.

    2. After all the FAST VP testing finished, we compared the performance data with the baseline to determine how much performance enhancement FAST VP can offer.

    The results of the tests are detailed in the Test results section.

    Note The synthetic benchmark used during testing is more uniformly random than real-world applications, which tend to have greater locality of reference. Customer environments have more inactive data. As a result, we believe that most organizations are able to use less Flash drive capacity to achieve similar, if not better, performance benefits with FAST VP. Customers can achieve additional cost savings if NL-SAS drives are added to create the third layer, as inactive data is down-tiered to NL-SAS. In this solution, the 1 TB of inactive data is automatically moved to the third layer if that layer has been configured.

    FAST Suite test

    FAST Suite is the combination of FAST Cache and FAST VP. To demonstrate the advantages of FAST Cache in absorbing random I/O bursts and the benefits of FAST VP's auto-tiering feature in improving performance and saving cost, we designed the following two test scenarios:

    Five Flash drives for FAST VP and four Flash drives for FAST Cache

    Five Flash drives for FAST VP and two Flash drives for FAST Cache

    Note Refer to the Analyze the application workload section to appropriately size the Flash drives for FAST Cache. To understand why we used five Flash drives for FAST VP, refer to Applied Best Practices Guide: EMC VNX Unified Best Practices for Performance. The rule of thumb for constructing the extreme performance (Flash) tier is 4+1 RAID 5, which yields the best balance of performance versus capacity.

    To test the performance of FAST Suite, we ran the same workload as that used in the FAST Cache and FAST VP test scenarios.



    FAST Suite test procedure

    Initially, we placed all datafiles on SAS devices and tested the FAST Suite with four Flash drives for FAST Cache and five Flash drives for FAST VP, as follows:

    1. Enabled FAST Cache using four Flash drives and enabled FAST VP using five Flash drives.

    2. Generated the workload against the database to warm up the FAST Cache so that it could reach a stable read/write hit ratio and to ensure that FAST VP's data movement was also stable.

    3. Generated the workload against the database to ensure that FAST VP was monitoring and moving data.

    4. Increased the number of users running transactions at intervals to determine how the database performed.

    5. Monitored the performance of the database, and recorded the average front-end IOPS and database TPM for each user iteration.

    Then we tested the FAST Suite with two Flash drives for FAST Cache and five Flash drives for FAST VP:

    6. Destroyed FAST Cache and enabled FAST Cache again using two Flash drives.

    7. Restored the database.

    8. Repeated step 2 to step 5.

    Test results

    FAST Suite effects on database transactions per minute

    This section compares the database TPM for each test scenario mentioned previously, which includes the following:

    Baseline testing

    FAST Cache-only testing using four 200 GB Flash drives, configured with RAID 1

    FAST VP-only testing using five 200 GB Flash drives configured with RAID 5

    FAST Suite combination testing using four 200 GB Flash drives for FAST Cache and five 200 GB Flash drives for FAST VP

    FAST Suite combination testing using two 200 GB Flash drives for FAST Cache and five 200 GB Flash drives for FAST VP

    Figure 19 shows the TPM recorded during the period that the Swingbench workload scaled from 250 to 750 users on each node. This chart shows that the number of transactions processed was much higher when we introduced EMC FAST Suite.



    Figure 19. TPM with and without FAST Suite

    When enabling FAST VP, we added five Flash drives to the data pool as RAID 5. The TPM increased by about 20 percent and stabilized at around 290,000, and the read response time was reduced by 42 percent.

    When enabling FAST Cache, we used four Flash drives. The TPM surged to around 510,000 and stabilized at that level. The response time was dramatically decreased to less than 10 ms. See the Wait statistics from Oracle AWR reports section for detailed analysis from the database side.

    The other two test results in Figure 19 show the performance of combining the two complementary technologies of FAST Cache and FAST VP. When using four Flash drives for FAST Cache and five Flash drives for FAST VP, the TPM was slightly higher than when using four Flash drives for FAST Cache only, and the read response time was reduced by a further 14 percent. When using two Flash drives for FAST Cache and five Flash drives for FAST VP, the TPM was slightly lower than when using four Flash drives for FAST Cache only, and the read response time tripled.

    Notes

    When using FAST VP, customers can achieve additional cost savings if NL-SAS drives are added to create the third layer. The higher tiers fill to 90 percent of their available capacity, and cooler data is automatically migrated down to the lower tier as a result of hotter data displacing it.

    In this solution, we had 1 TB of inactive data, which would automatically be moved to the third layer if the layer was configured, thus freeing up capacity on higher performing SAS tiers for more demanding workloads.


    Figure 20 shows a different view of TPM comparison. The performance improvement offered by using FAST Suite is clear.

    Figure 20. TPM comparison

    FAST Suite effects on read response time

    Figure 21 shows the significant improvement in read response time provided by EMC FAST Suite when compared with the baseline:

    When we used FAST VP, the response time decreased from 96.49 ms to 55.51 ms. In addition, if we had used a number of NL-SAS drives as a capacity layer, we could have reduced TCO by moving cold data to this layer.

    When we enabled FAST Cache, the response time decreased from 96.49 ms to 5.57 ms.

    When we used both FAST Cache and FAST VP, we reduced the read response time from 96.49 ms to 18.4 ms when using seven Flash drives, and we reduced the read response time from 96.49 ms to 4.78 ms when using nine Flash drives.


    Figure 21. Read response time comparison

    Wait statistics from Oracle AWR reports

    Oracle foreground wait statistics highlight potential bottlenecks in Oracle RAC environments. Figure 22 and Figure 23 show the top wait events from the RAC AWR reports, comparing the baseline with the FAST Cache-only test and the baseline with the FAST Suite combination test, respectively.

    The figures show that I/O performance was greatly improved when using FAST Cache or the FAST Suite combination: the average wait time for the db file sequential read event decreased dramatically. Because more concurrent user transactions were supported, commit operations grew rapidly.


    Figure 22. AWR reports comparison between baseline and FAST Cache-only tests

    As the RAC-level wait statistics in Figure 22 show, the total wait time of the db file sequential read event dropped by 85 percent, and its percentage of DB time decreased by 63 percent, when FAST Cache was enabled.

    Figure 23. AWR reports comparison between baseline and FAST Suite combination tests

    As the RAC-level wait statistics in Figure 23 show (five Flash drives for FAST VP and four Flash drives for FAST Cache), the total wait time of the db file sequential read event dropped by 87 percent, and its percentage of DB time decreased by 67 percent, when FAST Suite was enabled.


    Statistics from Unisphere for VNX

    Figure 24 shows the increase in average IOPS for the datafile systems. The IOPS increased over 250 percent when we enabled FAST Suite.

    Figure 24. IOPS comparison

    The I/O statistics generated from Unisphere, the TPM from Swingbench (Figure 20), and the read response time (Figure 21) demonstrate the advantages of enabling FAST Suite from different perspectives.


    dNFS clonedb test

    Test objective

    Customers often need to clone a production database to develop and test new application patches. The objective of this test was to clone a production database almost instantaneously by using the dNFS clonedb feature.

    Test procedure

    To quickly provision a test database from a snapshot of the database file systems created by EMC VNX SnapSure, using the dNFS clonedb feature, we performed the following steps:

    1. Installed Oracle 11.2.0.3 database software in the test environment.

    2. Ran the command to enable dNFS in the test/development environment and create a dNFS configuration file, as shown in the Oracle dNFS client configuration section.

    3. To take a hot backup:

    a. Put the database in hot backup mode with the following command in SQL*PLUS:

    alter database begin backup;

    b. Created the SnapSure checkpoint against the database file systems with the following commands:

    fs_ckpt data1 -name ck_data1 -Create pool=Save_pool
    fs_ckpt data2 -name ck_data2 -Create pool=Save_pool

    Note If using SnapSure to create user checkpoints of the primary file system, place SavVol on separate disks when possible and avoid enabling FAST Cache on SavVol.

    For details, see Applied Best Practices Guide: EMC VNX Unified Best Practices for Performance.

    c. Took the database out of hot backup mode with the following command in SQL*PLUS:

    alter database end backup;

    4. Mounted the SnapSure checkpoint to the target virtual database server.

    5. Generated the backup control file script from the production database with the following command in SQL*PLUS:

    alter database backup controlfile to trace;

    6. Copied the spfile and the backup control file from the production database to the test environment and made the necessary changes.

    Note To avoid failure of the dbms_dnfs.clonedb_renamefile procedure, we set clonedb=true in the initialization parameter file for the cloned database.

    7. Started up the cloned database instance with the nomount option and ran the modified backup control file script to create the control file manually.



    8. Ran the dbms_dnfs.clonedb_renamefile procedure for each datafile in the cloned database. For example:

    declare
    begin
      dbms_dnfs.clonedb_renamefile('/u02/oradata/racdb784/soe3_13.dbf', '/clonedb/uc784/dnfs784/soe3_13.dbf.dbf');
    end;

    9. Recovered the database with the following command in SQL*PLUS:

    recover database using backup controlfile until cancel;

    This command prompts you to specify the archive logs for the period when the backup was taken and then apply those log files.

    10. Opened the cloned database with the resetlogs option.
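For reference, a minimal sketch of the initialization parameter file for the cloned instance, as implied by steps 6 and 7 (the names and paths are hypothetical; all other parameters are copied from the production spfile and adjusted for the test environment):

    db_name=racdb                  # name used when re-creating the control file
    control_files=/clonedb/uc784/ctrl/control01.ctl
    clonedb=true                   # required for dbms_dnfs.clonedb_renamefile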

    Test results

    When the cloned database was up and running, we could perform read and write activities on the test database. As the workload ran, storage consumption of the cloned database grew at the rate at which data was modified.

    To verify the function of the dNFS clonedb database, we used Swingbench to generate the workload against the cloned database, as shown in Figure 25.

    Figure 25. Workload against the dNFS clonedb database
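To confirm that the cloned database was served through the dNFS client, the dNFS dynamic performance views can be queried from the clone instance, for example:

    -- NFS servers and datafiles visible to the dNFS client
    select svrname, dirname from v$dnfs_servers;
    select filename from v$dnfs_files;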



    Resilience test

    Test objective

    The objective of this test was to demonstrate the availability and resilience of the dNFS architecture by showing that the database remained available during a physical NIC failure and a data mover panic.

    Oracle dNFS can use up to four network paths defined in the oranfstab file for each NFS server. The dNFS client load balances I/O across all specified paths; if one path fails, dNFS reissues I/O commands over the remaining paths.
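For example, the oranfstab entries for one data mover in this configuration would look similar to the following sketch (the server name, IP addresses, and export paths are hypothetical):

    server: datamover2
    local: 192.168.10.21
    path: 192.168.10.11
    local: 192.168.20.21
    path: 192.168.20.11
    export: /data1 mount: /u02/oradata/data1

After the instance restarts, the active connections over each path can be checked through the v$dnfs_channels view.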

    Test procedures

    Physical NIC failure

    We manually shut down the NIC to simulate physical NIC failure. The test procedure included the following steps:

    1. Configured the database with two paths to each of two data movers separately.

    2. Ran the Swingbench workload against the first node with 100 users.

    3. Shut down the physical NIC on the virtual machine server to disconnect one of the resilient paths to the two data movers, as shown in Figure 26.

    Figure 26. Physical NIC shutdown

    4. Monitored the alert log for warnings such as those shown in Figure 27.

    Figure 27. dNFS path down messages

    5. After a few seconds, started up the physical NIC, as shown in Figure 28.



    Figure 28. Physical NIC startup

    6. Monitored the alert log for warnings such as those shown in Figure 29.

    Figure 29. dNFS path up messages

    7. Waited for the Swingbench workload to be completed.

    Data mover panic

    We manually failed over one data mover to the standby data mover to simulate a data mover panic by following these steps:

    8. Deployed the database datafiles on two file systems across two data movers: server_2 and server_3.

    9. Ran the Swingbench workload against the first node with 100 users.

    10. Failed the data mover server_2 over to server_5, as shown in Figure 30.

    Figure 30. Data mover failover

    This command activated server_5 (standby data mover) to take over from server_2 (the primary data mover).

    11. Verified that the standby data mover server_5 replaced the primary data mover server_2, as shown in Figure 31.


    Figure 31. Data mover status check after failover

    12. Failed back the data mover server_2 as shown in Figure 32.

    Figure 32. Data mover failback

    13. Verified that the data mover server_2 was successfully restored as the primary data mover, as shown in Figure 33.

    Figure 33. Data mover status check after failback

    14. Waited for the Swingbench workload to be completed.

    Test results

    Physical NIC failure

    When simulating a physical NIC failure, we observed no database outages because Oracle dNFS provided proactive failover operations when using multiple paths. In this solution, we configured two paths to each data mover. When one path was down, the other path was still available.

    When we shut down one of the physical NICs, Oracle dNFS automatically completed the failover operation in two minutes. When we started up the physical NIC, the second path reconnected automatically and rebalanced the workload across available paths within one minute.

    Data mover panic

    The data mover failover and failback were completed in one minute or less and no database outage was observed. We checked the database status as well as the Swingbench status and found no error in the database log or the Swingbench log.



    Conclusion

    Summary

    VMware vSphere 5.1 enables efficient use of the server hardware platform (the RAC database servers) by scaling the virtual environment to the following:

    Larger virtual machines: Virtual machines can grow two times larger than in any previous release of VMware vSphere to support the most advanced applications.

    Virtual machines can now have up to 64 virtual CPUs (vCPUs) and 1 TB of virtual RAM (vRAM).

    Oracle RAC 11g can easily scale out the nodes to increase the resources (CPU and memory) of the database server as application needs grow, enabling customers to take an incremental approach to address increases in the Oracle workload.

    Oracle dNFS client technologies provide both resiliency and performance with the ability to automatically fail over on the 10 Gb Ethernet fabric and to perform concurrent I/O that bypasses any operating system caches and OS write-order locks. dNFS also performs asynchronous I/O that allows processing to continue while the I/O request is submitted and processed. Performance is further improved by load balancing across multiple network interfaces (if available).

    EMC FAST Suite, which includes FAST Cache and FAST VP, is ideal for the Oracle database environment. FAST Cache and FAST VP complement each other, can boost storage performance, and can lower TCO if used together. FAST Cache can improve performance immediately for burst-prone Oracle data workloads, while FAST VP optimizes TCO by moving Oracle data to the appropriate storage tier, based on sustained data access and demands over time.

    Additionally, deploying NAS with a 10 Gb Ethernet fabric on the VNX7500 (NFS, CIFS, and pNFS) delivers cost efficiencies with regard to infrastructure, people, and processes versus a block-deployed storage solution. The VNX7500 platform provides consistent, optimal performance scalability for the Oracle workload. By deploying an Oracle RAC database on a VNX7500 array, performance scales in a near-linear manner when additional storage network and RAC nodes are introduced, providing higher throughput based on the configuration in this solution.

    With the combination of the EMC SnapSure checkpoint and the Oracle dNFS clonedb feature, Oracle DBAs can replicate their production environments for test/development purposes in less than 30 minutes. This offers near immediate access to the newly provisioned database.

    Findings

    The key findings of the testing performed for the solution demonstrate:

    Efficiency

    Automated Oracle performance tuning: Compared with the baseline, the performance enhancements offered by FAST Suite include:

    FAST Cache: Creating a FAST Cache with four Flash drives improves TPM performance by 133 percent and reduces the average read response time from 96.49 ms to 5.57 ms. Using FAST Cache as a secondary cache delivers a 247 percent improvement in IOPS.

    FAST VP: Enabling FAST VP by using only five Flash drives can improve performance by 38 percent while reducing the average read response time from 96.49 ms to 55.51 ms. It delivers a 62 percent improvement in IOPS.

    FAST Suite (configuration 1): Combining FAST Cache and FAST VP when using seven Flash drives increases performance by 124 percent and decreases the average response time from 96.49 ms to 18.4 ms. It delivers a 213 percent improvement in IOPS.

    FAST Suite (configuration 2): Combining FAST Cache and FAST VP when using nine Flash drives increases performance by 135 percent and decreases the average response time from 96.49 ms to 4.78 ms. It delivers a 260 percent improvement in IOPS.

    Performance

    Scale OLTP workloads: The TPM increased almost linearly when adding RAC nodes. Customers can take this solution as a baseline or foundation and scale it in a flexible, predictable, and near-linear way, by adding storage network ports, front-end ports, and RAC nodes, to provide higher throughput based on the configuration in this solution.

    Performance improvement with FAST Suite:

    2 times improvement in transactions per minute (TPM)

    3.5 times improvement in IOPS

    92 percent hit ratio after the FAST Cache warm-up period

    Agility

    Rapid provisioning of Oracle databases: Compared with traditional database cloning methods, combining EMC SnapSure checkpoints with the Oracle dNFS clonedb feature quickly and simply provisions database clones for test/development purposes, minimizing the impact on production database performance. This also saves DBA time and reduces storage requirements.

    Resilience

    Automatic failover: The dNFS client uses multiple network paths not only to load balance I/O across all available storage paths but also to provide high availability. The EMC VNX7500 integrates seamlessly with the Oracle dNFS feature to provide high database availability. The database remained available during both the physical NIC failure and the data mover panic.


    References

    White papers

    For additional information, see the following EMC white papers:

    Deploying Oracle Database Applications on EMC VNX Unified Storage

    EMC CLARiiON, Celerra Unified, and FAST Cache – A Detailed Review

    EMC FAST Cache – A Detailed Review

    Leveraging EMC FAST Cache with Oracle OLTP Database Applications – Applied Technology

    Optimizing EMC Celerra IP Storage on Oracle 11g Direct NFS – Applied Technology

    EMC FAST VP for Unified Storage Systems – A Detailed Review

    Product documentation

    For additional information, see the following EMC product documents:

    Unisphere Help in the Unisphere GUI

    Configuring Standbys on VNX

    EMC VNX Series Release 7.0 – Configuring and Managing Network High Availability on VNX

    EMC VNX Series Release 7.0 – Command Line Interface Reference for File

    EMC VNX Series Release 7.0 – Command Line Interface Reference for Block

    Other documentation

    For additional information, see the following documents:

    Oracle Real Application Clusters Installation Guide 11g Release 2 (11.2) for Linux

    Oracle Real Application Clusters Administration and Deployment Guide 11g Release 2 (11.2)

    Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux

    Oracle Clusterware Administration and Deployment Guide 11g Release 2 (11.2)

    Support Position for Oracle Products Running on VMware Virtualized Environments [ID 249212.1]

