Copyright © 2006 Quest Software

Who Needs Benchmarking…You Do!

Mike Ault, Domain Specialist, Oracle, Quest Software


TRANSCRIPT

Page 1:

Copyright © 2006 Quest Software

Who Needs benchmarking…You Do!

Mike Ault, Domain Specialist, Oracle, Quest Software

Page 2:

Michael R. Ault, Oracle Domain Specialist

- Nuclear Navy, 6 years
- Nuclear Chemist/Programmer, 10 years
- Kennedy Western University graduate, Bachelor's degree in Computer Science
- Certified in all Oracle versions since 6
- Oracle DBA and author, 16 years

Page 3:

Books by Michael R. Ault

Page 4:

Who Needs benchmarking? You Do!

When dealing with databases, we need to predict things such as:

• Operating system memory needs

• Operating system CPU needs

• Operating system storage requirements

• Growth

Page 5:

What Needs Analysis?

The analysis may be simple:

• “When will this table run out of space?”

Or complex:

• “How many CPUs will be required to support this database when the data has grown by a factor of 20 and the user load has increased tenfold?”
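Even the simple question can be answered with a quick trend extrapolation. The sketch below (not from the original slides; the table sizes and growth figures are hypothetical) fits a linear growth rate to periodic space measurements and projects when a table will exhaust its allocation:

```python
# Hypothetical example: forecast when a table exhausts its allocated space
# by fitting a linear trend to evenly spaced size measurements.

def days_until_full(sizes_mb, allocated_mb, interval_days=1):
    """Fit a least-squares growth rate (MB/day) to evenly spaced size
    samples and project the days until the allocation is exhausted."""
    n = len(sizes_mb)
    xs = [i * interval_days for i in range(n)]   # sample times in days
    mean_x = sum(xs) / n
    mean_y = sum(sizes_mb) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, sizes_mb)) \
            / sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None  # not growing; no exhaustion forecast
    return (allocated_mb - sizes_mb[-1]) / slope

# Weekly measurements of a hypothetical SALES table, growing ~10 MB/week:
samples = [100, 110, 120, 130]   # MB, one sample per week
print(round(days_until_full(samples, 200, interval_days=7), 1))  # 49.0
```

With steady weekly growth the forecast is exact; with noisy real measurements the least-squares fit smooths the trend, which is the point of controlling the load while measuring.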

Page 6:

How?

We must be able to control two specific parts of the database environment:

• User load

• Transaction mix

If you cannot control the users or the transactions impacting the database, it becomes impossible to accurately predict trends.

Page 7:

What is a Normal Load?

If we are concerned with average loads and normal data growth, we must ensure that the transaction mix and user load are “normal” when we take our measurements. If we don’t know what a normal load is, our efforts will probably produce inaccurate forecasts based on biased trends.

Page 8:

Use Standard Tools

• One way to completely control both the users and the transaction load is to utilize benchmarking tools.

Page 9:

Types of Benchmarks

• AS3AP

• Scalable Hardware

• TPC-B

• TPC-C

• TPC-D

• TPC-H

• Home Grown

• Results from the “standard” benchmarks are posted on the www.tpc.org website.

Page 10:

AS3AP

• The AS3AP benchmark is a scalable, portable ANSI SQL relational database benchmark. It provides a comprehensive set of tests for database processing power; has built-in scalability and portability to test a broad range of systems; minimizes the human effort in implementing and running benchmark tests; and provides a uniform metric for straightforward interpretation of benchmark results.

Page 11:

Scalable Hardware

• The Scalable Hardware benchmark measures relational database systems. It is a subset of the AS3AP benchmark and tests CPU, disk, network, and combinations of these three.

Page 12:

TPC-B

• The TPC-B benchmark stresses databases and is characterized by significant disk input/output, moderate system and application execution time, and transaction integrity. It targets database management system (DBMS) batch applications and back-end database servers. TPC-B is not an OLTP benchmark.

Page 13:

TPC-C

• TPC-C is an online transaction processing (OLTP) benchmark. It involves a mix of five concurrent transactions of different types and complexity, executed either online or queued for deferred execution. The database comprises nine types of tables with a wide range of record and population sizes. TPC-C is measured in transactions per minute (tpmC).

Page 14:

TPC-D

• The TPC-D benchmark represents a broad range of decision support (DS) applications that require complex, long-running queries against large, complex data structures. Real-world business questions were written against this model, resulting in 17 complex queries.

Page 15:

TPC-H

• TPC-H is a decision support benchmark. It consists of a suite of business-oriented ad hoc queries and concurrent data modifications.

Page 16:

Home Grown

• You select the transactions

• You select the number of users

• Can be exact for your environment

Page 17:

Tests and Databases

• Usually you see TPC-C and TPC-H

• TPC-C is OLTP, small transactions

• TPC-H is more decision support

• You must choose the right test for your system

Page 18:

An Example TPC-H Database Benchmark

Scalability study of a data warehouse on a 64-bit Dell | Oracle 10g R2 RAC Linux solution using industry-standard grid components

Zafar Mahmood, Dell Inc.; Anthony Fernandez, Dell Inc.; Bert Scalzo, Quest Software Inc.

Page 19:


Page 20:

Overview

Dell Database and Applications Team

• The Dell Oracle Solutions Engineering team has complete ownership of the product design and development cycle: integrating, validating, bundling, and sustaining Dell’s Oracle DB and RAC solutions, based on PE servers and Dell|EMC FC I/O subsystems.

• Performs comprehensive Oracle solution integration testing to detect defects that affect the database, OS, servers, interconnect, or I/O subsystems (one-pass solution test).

• Continuously tests with the latest version of OS (Linux and Windows x64), OS kernel updates, driver updates, and Oracle patch-sets/ASM/OCFS to verify continuous functionality of Oracle RAC solutions (rolling test).

• Solution bundles listed with HW and SW component requirements at: http://www.dell.com/10g

Page 21:

Standard Grid Components

• Database: Oracle Database release N (10g) R1 & R2 – EE, SE & SE1

• OS/OCFS versions: Red Hat AS updates/major releases N-1, OCFS N-1 & raw devices N; Red Hat AS/ES, W2K3-SP1-SE release N, OCFS N, and ASM N

• Servers: PE1750, 1850, 2600, 2650, 2800, 2850, 4600, 6600, 6800, 6650 & 6850

• HBA/RAID: QLogic QLA2340/2342, QLA200, QLE 2360, 2460/2462 (4G); Emulex LP982/9802, LP10K, LP1050e, LP1150e (4G)

• Switch: Brocade SW3200, 38x0, 4100 (4G), 200E (4G); McData Sph 4500, 4400 (4G), 4700 (4G)

• Storage: DA; CX200/400/600, AX100, CX300/500/700

• Pub Net: Intel GigE, Broadcom GigE, LOM GigE

• Prvt Net: Intel GigE, Broadcom GigE

Page 22:

Dell | EMC Fibre Channel Storage

Server, Storage and Network Hardware

• Cluster Nodes
  – CPU: Dual-Core Intel® Xeon™ processor, 2 x 2 MB cache, 2.8 GHz, 800 MHz FSB
  – Memory: 8 GB
  – IO slots: 2 x PCI-X

• Shared Storage
  – Storage processors: x2
  – IO ports: x4 per SP
  – DAEs/spindles: 2x4/120

• Private interconnect: 2 x onboard Intel Gigabit

• Public LAN: 1 x PCI-X Intel Gigabit NIC

• IO channels and HBAs: 1 x QLA2342 dual port

Page 23:

Server, Storage and Network Configuration

• Host configuration
  – IO load balancing
  – Kernel parameters
  – User limits
  – RAW device bindings

• Shared storage configuration
  – FC switch configuration
  – Storage processor load balancing across two CX700s
  – Storage cache settings

• Network configuration
  – Dual bonded interconnect
  – Jumbo frames

Page 24:

Oracle 10g R2 RAC stack

[Stack diagram: applications and RDBMS management utilities run over Automatic Storage Management (ASM), which sits on PowerPath above the HBA driver/SCSI mid-layer and dual host bus adapters connecting to the Dell | EMC CLARiiON]

• PowerPath multi-path software for IO load balancing

• Automatic path failover

• Automatic detection and restore of failed components

• Dynamic load balancing

• User-selectable application priorities

• Online configuration and management

• Common HBA-driver support

Page 25:

Oracle 10g R2 RAC stack

• PowerPath multi-path software for IO load balancing on the Dell | EMC CLARiiON

• Dual-port disks, redundant FC-AL loops
  – All disks have paths to both storage processors

• CLARiiON LUNs are “owned” by one storage processor
  – One storage processor services I/O for a LUN
  – The second path to the LUN is passive

* Asymmetrical volume (LU) access: an LU is accessible (active) on one storage processor at a time

[Diagram: two hosts with two HBAs each, connected through redundant switches to two CLARiiON storage processors (four ports each, caches and state info linked via CMI) serving shared SCSI disks]

Page 26:

Oracle 10g R2 RAC stack

• Linux optimizations

• Oracle parameters

• TPC-H benchmark
  – Test description
  – Database sizing
  – Data model (ERD)
  – 22 TPC-H queries
  – Sample query SQL
  – Sample TPC-H results

• Shared storage configuration
  – Physical disk layout
  – Tablespace configuration

• Database partitioning scheme

Page 27:

Linux Optimizations

sysctl.conf

• kernel.shmmax = 8589934592

• kernel.sem = 250 32000 100 128

• fs.file-max = 65536

• net.ipv4.ip_local_port_range = 1024 65000

• net.core.rmem_default = 262144

• net.core.rmem_max = 262144

• net.core.wmem_default = 262144

• net.core.wmem_max = 262144

Page 28:

Oracle Parameters

Oracle spfile

• *._slave_mapping_enabled=FALSE (due to a Metalink Oracle bug)
• *.db_block_size=8192
• *.db_file_multiblock_read_count=128 (maxes out at 1 MB or 128)
• *.db_writer_processes=4
• *.open_cursors=600
• *.optimizer_index_caching=80 (default 0)
• *.optimizer_index_cost_adj=40 (default 100)
• *.parallel_execution_message_size=16384 (default 4096)
• *.parallel_max_servers=128
• *.pga_aggregate_target=2G
• *.processes=4000
• *.sga_target=6442450944
• *.star_transformation_enabled='TRUE'
• *.undo_management='AUTO'
• *.undo_retention=800000

Page 29:

TPC-H: Test Description

Transaction Processing Performance Council – defines database benchmarks

The TPC Benchmark™H (TPC-H) is a decision support benchmark. It consists of a suite of business oriented ad-hoc queries and concurrent data modifications. The queries and the data populating the database have been chosen to have broad industry-wide relevance. This benchmark illustrates decision support systems that examine large volumes of data, execute queries with a high degree of complexity, and give answers to critical business questions.

The performance metric reported by TPC-H is called the TPC-H Composite Query-per-Hour Performance Metric (QphH@Size), and reflects multiple aspects of the capability of the system to process queries. These aspects include the selected database size against which the queries are executed, the query processing power when queries are submitted by a single stream, and the query throughput when queries are submitted by multiple concurrent users. The TPC-H Price/Performance metric is expressed as $/QphH@Size.

http://www.tpc.org/tpch/default.asp
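The composite metric combines the single-stream power run and the multi-stream throughput run. As a rough sketch (the per-query geometric-mean details of the Power metric are omitted, and the component values below are hypothetical; see the TPC-H specification for the full definition), QphH@Size is the geometric mean of the two component metrics:

```python
import math

def qphh(power, throughput):
    """TPC-H composite metric: geometric mean of the Power@Size and
    Throughput@Size component metrics (queries per hour)."""
    return math.sqrt(power * throughput)

def price_performance(total_system_cost, qphh_value):
    """TPC-H price/performance metric, expressed as $/QphH@Size."""
    return total_system_cost / qphh_value

# Hypothetical component results for a given database size:
composite = qphh(power=40000.0, throughput=25000.0)
print(round(composite, 1))                               # 31622.8
print(round(price_performance(500000.0, composite), 2))  # 15.81
```

The geometric mean keeps a system from leaning entirely on one strength: a high single-stream power score cannot hide a weak multi-user throughput score, and vice versa.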

Page 30:

TPC-H: Database Sizing

TPC-H Benchmarking Scale Factors

• 1 = 1 GB

• 10 = 10 GB

• 30 = 30 GB

• 100 = 100 GB

• 300 = 300 GB (our tests; a very common size)

• 1000 = 1 TB

• 3000 = 3 TB (tried this size; too few disks)

• 10000 = 10 TB

• 30000 = 30 TB

• 100000 = 100 TB

Page 31:

TPC-H: Data Model (ERD)

[ERD figure. Row counts per scale factor (SF), with totals at SF = 300; table names follow the standard TPC-H schema:

• REGION: 5 rows (fixed)
• NATION: 25 rows (fixed)
• SUPPLIER: 10K x SF = 3,000,000
• CUSTOMER: 150K x SF = 45,000,000
• PART: 200K x SF = 60,000,000
• PARTSUPP: 800K x SF = 240,000,000
• ORDERS: 1.5M x SF = 450,000,000
• LINEITEM: 6M x SF = 1,800,000,000]

Page 32:

TPC-H: 22 TPC-H Queries

Page 33:

TPC-H: Sample Query SQL

Page 34:

TPC-H: Sample TPC-H Results

http://www.tpc.org/tpch/results/tpch_perf_results.asp?resulttype=cluster&version=2%&currencyID=0

Page 35:

Shared Storage Configuration - Physical Disk Layout

[Physical disk layout across two EMC CX700 arrays, each with DAE-0 through DAE-3:

• Voting/CRS

• EMC CX700 #1: Backup, DATA1_1 – 1 TB, DATA2_1 – 1 TB, DATA3_1 – 800 GB, INDX – 400 GB

• EMC CX700 #2: Backup, DATA1_2 – 1 TB, DATA2_2 – 1 TB, DATA3_2 – 800 GB, INDX – 400 GB]

Page 36:

Shared Storage Configuration - Tablespace Configuration

*Note: Undotbs1–10 are created for the quest user

• QUEST_DAT1: DATA1_1 + DATA1_2 → +DG1/quest_dat1.dbf

• QUEST_DAT2: DATA2_1 + DATA2_2 → +DG2/quest_dat2.dbf

• QUEST_DAT3: DATA3_1 + DATA3_2 → +DG3/quest_dat3.dbf

• QUEST_INDX: INDX_1 + INDX_2 → +INDX/quest_indx.dbf

Page 37:

Database Partitioning Scheme

Goal: spread IO across as many spindles as possible via partitioning

Page 38:

Test Case Strategy

• Strategy

Objective: determine the scalability of a 10g R2 8-node RAC with a TPC-H workload

– Test 1: Establish a single-node, single-stream baseline
– Test 2: Enable node-level parallelism running 4 streams on a single node – PARALLEL (DEGREE 4 INSTANCES 1)
– Test 3: Enable node- and cluster-level parallelism running 4 streams on 4 nodes – PARALLEL (DEGREE 4 INSTANCES 4)
– Test 4: Run Test 3 using Intel Xeon dual-core processors
– Test 5: Enable node- and cluster-level parallelism running 4 streams on 8 nodes – PARALLEL (DEGREE 4 INSTANCES 8)
– Test 6: Enable node- and cluster-level parallelism running 8 streams on 8 nodes – PARALLEL (DEGREE 4 INSTANCES 8)

• Compare average query response times for each test case, expecting equal or better query response times for each test case

Page 39:

Test Case Tools

• Monitoring and test tools

• Quest Software – Benchmark Factory
  – Simulates the TPC-H environment (DSS environment)
  – Used to test scalability of up to an 8-node Oracle 10g R2 RAC cluster

• Quest Spotlight on RAC
  – Provides monitoring and diagnostic information about cluster interconnect latency, throughput, ASM performance, database bottlenecks, and overall locking and wait-event information

• Quest TOAD
  – Provides database monitoring, management and object creation, query performance analysis, explain plan, and AWR and ADDM report generation facilities

• Oracle Enterprise Manager

• AWR, ADDM reports

Page 40:

Benchmark Factory

Page 41:

Spotlight on RAC

Page 42:

Toad DBA

Page 43:

Scalability Results

• The results
  – Total run time
  – Average response time

• Lessons learned

• Best practices for a data warehouse on RAC

Page 44:

The Results: Total Run Time

[Bar chart of total run time in minutes (y-axis 0–500), with successive deltas of -13.39%, -13.60%, and -17.59% across:

• Test 1: 4 node, 2 single core, Degree 4, Instances 1
• Test 2: 4 node, 2 single core, Degree 4, Instances 4
• Test 3: 4 node, 2 dual core, Degree 4, Instances 4
• Test 4: 8 node, 2 dual core, Degree 4, Instances 8]
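The improvement percentages on this chart are simple relative deltas between successive runs. A minimal sketch of how such deltas are computed (the run times below are hypothetical, not the measured Dell results):

```python
def pct_change(previous, current):
    """Relative change of the current run versus the previous one,
    in percent; negative values mean the run got faster."""
    return (current - previous) / previous * 100.0

# Hypothetical total run times in minutes for four successive tests:
runs = [480.0, 420.0, 360.0, 300.0]
deltas = [round(pct_change(a, b), 2) for a, b in zip(runs, runs[1:])]
print(deltas)  # [-12.5, -14.29, -16.67]
```

Each delta is measured against the immediately preceding configuration, so the chart shows incremental gains from each change, not cumulative improvement over the baseline.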

Page 45:

The Results: Average Response Time

[Bar chart of average response time in seconds (y-axis 0–30,000), with successive deltas of -25.57% and -15.14% across:

• Test 1: 4 node, 2 single core, Degree 4, Instances 1
• Test 2: 4 node, 2 single core, Degree 4, Instances 4
• Test 3: 4 node, 2 dual core, Degree 4, Instances 4
• Test 4: 8 node, 2 dual core, Degree 4, Instances 8]

Page 46:

The Results: Jumbo Frames

• The default Ethernet frame size is 1500 bytes

• It can be increased to 9000 bytes for better cluster interconnect performance

• Enable jumbo frames on the private network switch (some switches may have them disabled)

• Enable jumbo frames on the bonded interface (bond0) for the private cluster interconnect

• Perform the same frame-length settings on all cluster nodes

• Rerun Test 6

Page 47:

Lessons Learned

• Parallel across the cluster performs better than simply parallel within a node

• Dual-core processors may often or usually improve total run time results regardless of other resource utilization

• Dual-core processors will only significantly improve average response time results when you are not already IO bound

• TPC-H results are almost universally governed by the physical number of disk drives the data is spread across

• For the 8-node test (Test 6), the private interconnect started to exhibit larger latency. It is a good idea to consider another high-speed interconnect technology, such as InfiniBand running RDS, for scaling out to more than 8 nodes.

Page 48:

Best Practices for a Data Warehouse on RAC

• Establish a baseline on a single node with a single stream

• Tune and optimize the IO subsystem for parallelism using Parallel Execution
  – Add additional IO paths (HBAs) if IO waits are high
  – Use multi-path software such as PowerPath or MPIO for IO load balancing and failover across IO paths

• Enable node-level parallelism using Parallel Execution

• Add more processing power (dual-core CPUs) to existing nodes before adding additional nodes

• Add more nodes and enable cluster-level parallelism if using Parallel Execution

• Use NIC bonding to provide interconnect scalability; consider a lower-latency, higher-throughput cluster interconnect if scaling the data warehouse beyond 8 nodes

• Use jumbo frames for the cluster interconnect

Page 49:

What About Home Grown?

Let’s take a look at a test case to determine:

• Whether a particular set of transactions, specifically data manipulation language (DML) transactions against the base tables of an Oracle materialized view, has an effect on the number of users that can select from the view

• In addition, how many users can perform DML operations while other users are performing selects against the materialized view

Page 50:

Is a Specific Architecture Useful?

• One of the suggested architectures for rapid reporting without stressing the base tables is to use partitioned, refresh-on-commit materialized views within Oracle. It is hoped this test will help show the effects of user load on such an architecture.

• To test this architecture, Quest Benchmark Factory was used with two GUI installations: one to perform the INSERTs into the base tables, the other to perform the SELECT activity against the refresh-on-commit materialized view.

Page 51:

Test Phases

The testing was performed in three phases:

Phase 1:
• Both the INSERT and SELECT portions of the test were cycled simultaneously, from 1–60 users in 5-user increments on the INSERT side and 1–30 users in 5-user increments on the SELECT side.

Phase 2:
• The INSERT side was cycled from 1–60 users in 5-user increments until the response time exceeded 6 seconds, while the SELECT side was run at a single constant user level during individual INSERT runs. The SELECT side was run at constant user levels of 5, 10, and 20 users during the INSERT tests.

Phase 3:
• The materialized view was recreated as a single table, and the constant SELECT user level of 20 was used to test the difference between partitioned and single tables.
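The Phase 2 ramp (increase the INSERT user count in 5-user steps until response time passes 6 seconds) can be sketched as a simple driver loop. This is an illustrative skeleton, not Benchmark Factory's actual API; `run_insert_test` is a hypothetical callback that runs the workload at a given user count and returns the measured average response time:

```python
def ramp_until_threshold(run_insert_test, max_users=60, step=5,
                         threshold_secs=6.0):
    """Ramp the user count in fixed increments, stopping once the
    measured response time exceeds the threshold (Phase 2 logic)."""
    results = []
    for users in [1] + list(range(step, max_users + 1, step)):
        rt = run_insert_test(users)        # run workload, measure avg RT
        results.append((users, rt))
        if rt > threshold_secs:
            break                          # response-time ceiling reached
    return results

# Hypothetical response-time model: RT grows linearly with user count.
history = ramp_until_threshold(lambda users: users / 8)
print(history[-1])  # (50, 6.25) -- first load whose RT exceeds 6 s
```

The returned history gives the response-time curve up to the breaking point, which is exactly the trend data needed for the capacity forecasts discussed at the start of the presentation.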

Page 52:

All Phases

• In all phases the SALES table was used as the update target, with ON COMMIT processing for the materialized view causing selects from all the base tables in the PUBS schema (SALES, AUTHOR, BOOK, AUTHOR_BOOK, PUBLISHER, STORE) to publish data into the MV_AUTHOR_SALES materialized view.

• In Oracle, ON COMMIT processing means just that: whenever there is a commit on the base tables, the affected materialized view records are updated, inserted, or deleted.

• Prior to each test the MV_AUTHOR_SALES materialized view and the SALES table were both truncated.

Page 53:

The Test System

[Test system diagram:

• Benchmark Factory clients connected via Ethernet: a VIO PCG-GRT250P using ODBC over a 100 MBit line (processing INSERTs) and a Gateway 9300 using SQL*Net over 1 GBit lines (processing SELECTs)

• Workload: 1–60 user INSERTs in 5-user increments; SELECTs at constant levels of 5, 10, and 20 users

• Server (HFDW minicomputer): single 3 GHz hyperthreaded CPU, 2 gigabytes memory, Red Hat OS, Oracle 10.1.0.3 Enterprise

• Storage: NStore 8-disk RAID 5 array, 8 x 19 GB 10K RPM drives, fast-wide SCSI

• Database: PUBS schema with the MV_AUTHOR_SALES materialized view, either 12 monthly partitions or a single table]

Page 54:

Create the Partitioned MV base table

CREATE TABLE mv_author_sales
PARTITION BY RANGE (order_date) (
  PARTITION p1  VALUES LESS THAN (to_date('012002','mmyyyy')),
  PARTITION p2  VALUES LESS THAN (to_date('022002','mmyyyy')),
  PARTITION p3  VALUES LESS THAN (to_date('032002','mmyyyy')),
  …
  PARTITION p12 VALUES LESS THAN (to_date('122002','mmyyyy')),
  PARTITION p13 VALUES LESS THAN (MAXVALUE))
AS (
  SELECT d.order_date,
         a.rowid idrowa, b.rowid idrowb, c.rowid idrowc,
         d.rowid idrowd, e.rowid idrowe, f.rowid idrowf,
         a.author_last_name, a.author_first_name, f.pub_name,
         a.author_contract_nbr, e.store_state, d.quantity
  FROM   author a, book_author b, book c, sales d, store e, publisher f
  WHERE  a.author_key = b.author_key
  AND    b.book_key = c.book_key
  AND    c.book_key = d.book_key
  AND    e.store_key = d.store_key
  AND    c.pub_key = f.pub_key)
/

Page 55:

Create the Indexes

create index mv_rida on mv_author_sales(idrowa);
create index mv_ridb on mv_author_sales(idrowb);
create index mv_ridc on mv_author_sales(idrowc);
create index mv_ridd on mv_author_sales(idrowd);
create index mv_ride on mv_author_sales(idrowe);
create index mv_ridf on mv_author_sales(idrowf);

Page 56:

Create the Materialized View

CREATE MATERIALIZED VIEW mv_author_sales
ON PREBUILT TABLE
REFRESH ON COMMIT
AS
SELECT d.order_date,
       a.rowid idrowa, b.rowid idrowb, c.rowid idrowc,
       d.rowid idrowd, e.rowid idrowe, f.rowid idrowf,
       a.author_last_name, a.author_first_name, f.pub_name,
       a.author_contract_nbr, e.store_state, d.quantity
FROM   author a, book_author b, book c, sales d, store e, publisher f
WHERE  a.author_key = b.author_key
AND    b.book_key = c.book_key
AND    c.book_key = d.book_key
AND    e.store_key = d.store_key
AND    c.pub_key = f.pub_key
/

Page 57:

Statistics

After creation and refresh, the MV_AUTHOR_SALES and SALES tables were analyzed using a command similar to:

dbms_stats.gather_table_stats('PUBS','MV_AUTHOR_SALES',cascade=>true);

The dynamic sampling feature of 10g was utilized to maintain statistics for the test since the table and materialized view were growing during the entire test period for each test.

Page 58:

Transaction Details

• Two basic transactions were used to test the effect of locking on the INSERT and SELECT activities. The SALES table formed the base of the materialized view MV_AUTHOR_SALES, so the INSERT transaction focused on inserts into the SALES table.

• The inserts into the SALES table force the materialized view refresh (REFRESH ON COMMIT) to select records from all of the base tables.

• The following Benchmark Factory function scripts were used to populate random values into the INSERT statement:

– $BFRandList – Insert one of the provided list values into the statement at this point, with frequency based on the provided integer (“val”:f); if no integer is provided, use 1.

– $BFRandRange – Insert a random integer in the range specified.

– $BFDate – Insert a random date in the range specified.

Page 59:

Example SQL

Insert transaction:

INSERT INTO sales VALUES (
  '$BFRandList("S101","S103","S103","S104","S105","S106","S107","S108","S109","S110")',
  '$BFRandList("B101","B102","B103","B104","B105","B106","B107","B108","B109","B110","B111","B112","B113","B114","B115","B116")',
  'O'||to_char(order_number.nextval),
  to_date('$BFDate("01/01/2002","12/31/2002")','mm/dd/yyyy'),
  $BFRandRange(1,100));

The SELECT transaction was designed to fully access the materialized view, placing as much stress on the view as possible.

Select transaction:

SELECT to_number(to_char(order_date,'mmyyyy')) month_of_sales,
       author_first_name, author_last_name, sum(quantity)
FROM   mv_author_sales
GROUP BY to_number(to_char(order_date,'mmyyyy')),
         author_first_name, author_last_name;

Page 60:

Resulting Partition Loading

PARTITION   COUNT(*)
---------  ---------
012002           831
022002           765
032002           805
042002           799
052002           885
062002           788
072002           896
082002           864
092002           871
102002           843
112002           888
122002           857

Page 61:

Phase 1: Both Insert and Select Varying

In phase one, both Benchmark Factory tests were made to scale: from 1–30 users for selects in 5-user increments (1, 5, 10, 15, 20, 25, 30) and from 1–60 users for inserts in 5-user increments. During testing, locks were monitored using the procedure shown below.

create or replace procedure get_locks(tim_in_min number) as
  iterations number;
  i integer;
begin
  iterations := floor(tim_in_min*60/4)+1;
  for i in 1..iterations loop
    insert into perm4_object_locks
      select sysdate, b.object_name, count(*)
      from v$locked_object a, dba_objects b
      where a.object_id = b.object_id
        and b.object_name != 'PERM4_OBJECT_LOCKS'
      group by b.object_name;
    commit;
    dbms_lock.sleep(4);
  end loop;
end;

Page 62:

Example Locking Results

Locking was monitored at 4-second intervals. The results below are for Phase 1 (1-30 user SELECT processes in 5-user increments versus 1-60 INSERT processes in 5-user increments). Lock profile for the MV_AUTHOR_SALES materialized view:

MEAS_  OBJECT_NAME      SUM(NUM_LOCKS)
-----  ---------------  --------------
21:10  MV_AUTHOR_SALES               2
21:11  MV_AUTHOR_SALES               4
21:12  MV_AUTHOR_SALES               2
21:13  MV_AUTHOR_SALES               2
21:14  MV_AUTHOR_SALES               4
21:17  MV_AUTHOR_SALES               2
21:18  MV_AUTHOR_SALES               2
21:19  MV_AUTHOR_SALES               2
21:21  MV_AUTHOR_SALES               3
21:22  MV_AUTHOR_SALES               4
21:23  MV_AUTHOR_SALES               2
21:25  MV_AUTHOR_SALES               2

Page 63:

Results for the Insert Side

[Charts: Insert TPS versus user load (0-20 users, TPS 0-7) and insert response time versus number of INSERT processes (1-15 processes, 0-8 seconds).]

Page 64:

Results for the Select Side

[Charts: Select TPS versus user load (0-35 users, TPS 0-18) and select response time versus number of SELECT processes (1-30 processes, 0-2 seconds).]

Page 65:

Phase 1 Results Summary

• The results show that locking affects INSERT processing, causing the average insert time to climb above 6 seconds within 15 user processes, while SELECT processing shows little effect beyond what can be expected from the growth of the materialized view table.

• However, the effects are hard to characterize when both INSERT and SELECT process counts are varying.

Page 66:

Phase 2: SELECT transaction level constant

• In Phase 2 the number of SELECT user processes is held constant (at 5, 10 and 20 users) while the number of INSERT processes is increased in 5-user increments until insert response time rises above 6 seconds (or 60 users is reached).

• The TPS and response time for the SELECT processes were recorded at each upward increment in the number of INSERT processes to gauge the effect of increased locking on SELECT processing.

Page 67:

5 Concurrent SELECTs

[Charts: Insert TPS versus user load (0-40 users, TPS 0-7) and insert response time with 5 SELECT users (1-30 INSERT processes, 0-7 seconds).]

Page 68:

10 Concurrent SELECTs

[Charts: Insert TPS versus user load (0-40 users, TPS 0-7) and insert response time with 10 SELECT users (1-30 INSERT processes, 0-8 seconds).]

Page 69:

20 Concurrent Select Users

[Charts: Insert TPS versus user load (0-30 users, TPS 0-7) and insert response time with 20 SELECT users (1-25 INSERT processes, 0-7 seconds).]

Page 70:

Phase 2 Summary

• Overall, the Phase 2 testing shows that locking has little or no effect on SELECT operations, while the number of SELECT processes does affect how many INSERT processes can operate with a less-than-6-second response time, and the TPS that can be processed at each user level.

Page 71:

Phase 3: Materialized View with No Partitions

• Phase 3 measures the effect of using a single base table versus multiple partitions at the maximum number of SELECT processes (20).

Page 72:

20 Users, No Partitions

[Charts: Insert TPS versus user load (0-30 users, TPS 0-7) and insert response time with 20 SELECT users and no partitions (1-30 INSERT processes, 0-10 seconds).]

Page 73:

Value Distribution

The total row count for the single-table test averaged 9299 rows versus 10092 in the partitioned testing. The distribution of the values in the single table is shown below.

ORDER   COUNT(*)
------  --------
012002       786
022002       692
032002       733
042002       737
052002       810
062002       722
072002       828
082002       803
092002       803
102002       781
112002       824
122002       780

Page 74:

Phase 3 Summary

• Phase 3 shows that while partitions are good for SELECT processing, they may have a slightly detrimental effect on INSERT processing.

• The INSERT-processing effects may be mitigated by changing how rows are stored in the table, for example by large PCTFREE allocations that limit the rows per block.

Page 75:

Combined Results

• It is easier to see the effects of the increasing number of SELECT processes by combining the results from the various tests into a series of graphs.

• In the first graph we examine the effect on transactions per second (TPS).

• The combined TPS graphs cover the 5, 10 and 20 SELECT user tests and the 20 SELECT users with no partitions test.

• Notice how the TPS for the 20 SELECT user, no-partitions test is lower than for the 20 SELECT user partitioned results.

• All of the other results show the effect of the increased stress of the SELECT processing on the INSERT users, and the lack of effect of the INSERT processes on the SELECT users.

Page 76:

Combined Results

Constant Select TPS

[Chart: TPS versus number of INSERT processes (1-30), 0-25 TPS, with series for 5, 10 and 20 SELECT users and for 20 SELECT users with no partitions.]

Page 77:

Select Response Times

Constant Select Response Times

[Chart: Response time versus number of INSERT processes (1-30), 0-1.4 seconds, with series for 5, 10 and 20 SELECT users and for 20 SELECT users with no partitions.]

Page 78:

Insert TPS

[Chart: Insert TPS versus number of INSERT processes (1-30), 0-7 TPS, with series for 5, 10 and 20 SELECT users and for 20 SELECT users with no partitions.]

Page 79:

Response Time

[Chart: Insert response time versus number of INSERT processes (1-30), 0-9 seconds, with series for 5, 10 and 20 SELECT users and for 20 SELECT users with no partitions.]

Page 80:

Combined Results Summary

• Again, these results show that locking, as expected, has little effect on SELECT processing: with Oracle's single-row (fine-grained) locking and multi-version concurrency model, readers are not blocked by writers and writers are not blocked by readers.

• It also shows that using REFRESH ON COMMIT materialized views should not adversely affect INSERT or SELECT processing.

• The tests seem to indicate that partitions are beneficial for SELECT processing, but for INSERT processing, at least at the single-row-per-transaction level, partitions may have a slightly negative effect on TPS and response time.

Page 81:

Recommendations

• Based on the data in this report, partitioned materialized views using the REFRESH ON COMMIT refresh mechanism are recommended to reduce the strain on the underlying OLTP tables when the same database instance is used for both OLTP and reporting.

• While partitioned materialized views show a slight increase in response times on INSERTs, the benefits of their use outweigh the potential downsides.

• In this section we have seen an example of using a benchmarking tool to check whether a particular architecture was right for our application, and to see how the application would scale on a particular hardware setup.

• But how can we determine if a particular hardware setup is correct?

• In the next section we show an example of the use of a benchmark tool to determine projected hardware needs based on user load and projected data size.

Page 82:

Planning Future Hardware & Software Needs

• Projecting future hardware and software needs requires you to establish what the current hardware is capable of, then project based on your criteria what hardware will be needed.

• For our example we will use an actual user test case, in which we were tasked with determining the number of CPUs and amount of memory a production server with 20 times more data would require to give performance comparable to what we currently experience.

• First let’s look at the architecture we will be testing.

Page 83:

Architecture

• Sunfire V480, Solaris 2.9, 2 x 900 MHz CPUs, 4 gigabytes memory

• Oracle9i - 9.2.0.1

• EMC disk array attached through a switch (CLARiiON shown; actual EMC type not known)

• Database allocated 4.7 gigabytes of space, using 1.2 gigabytes

• Apache webserver on a Sunfire V210, connected through a switch

Page 84:

Test Summary

• Performance testing against the test table in the test database instance was accomplished during the period April 3 - April 7, 2006.

• The Benchmark Factory tool from Quest was utilized to simulate loads against the test database for various user loads, with queries similar to those that will be generated by the reporting system during normal operations.

• Two general types of queries were tested during this time period: an issues-type query set and a parts-type query set.

• A basic template for each of the queries was provided by site personnel, and Benchmark Factory scripts were utilized to insert random values into the queries to generate the query loads.

• The random values used in the tool were selected from the test database instance to provide a varying load for each type of query presented.

Page 85:

Phase 1

• In Phase 1 only issues queries were utilized to perform a SQL scalability test.

• In Phase 1 the user load was ramped in 6 user increments from 6 users to a maximum of 84 users.

• Each user was able to run any of the six queries at any time and no “think”, “keyboard” or other delays were programmed into the scenario.

• In Phase 1 operating system statistics and statspack were used to collect additional statistics.

• NOTE: For the purposes of this example, phases 2 and 3 are not needed and are omitted for brevity.

Page 86:

Limitations and Caveats

• The testing described in this session was performed on a shared system where the team had no control over what other teams were doing on the server; therefore there are variations in transactions per second, transactions, and bytes per second that would not be present in an isolated testing environment.

• The server memory configuration could not be adequately tuned due to the limitations of a shared environment, so some physical IOs occurred that would not have happened, or would have been greatly reduced, in a properly tuned environment.

• Oracle9i Release 2 was used for the test environment. There are several bugs in Release 2 that caused ORA-00600 and other errors when large numbers of bind variables were utilized, the bind variables were considered "unsafe" (such as replacing a string value in a LIKE clause), and CURSOR_SHARING was set to SIMILAR.

Page 87:

Phase 1: Issues Query Testing

• Phase 1 is a standalone test of the Issues queries. The user load was ramped from 6 to 84 users (note that at 78 users the system began giving resource-related errors and refused further connections).

Page 88:

Randomization of the Issues Queries

USER      Issue Count
-------   -----------
GEORGEB           230
FRANKL            225
MIKER            2673
SAMMYJ            417
BILLYB            460
OZZYO             354

Page 89:

Product

PRODUCT   Issue Count
-------   -----------
RADIOX           3172
RADIOY           1479
DVDPX            1210
CDPY             7880
CAMCDR1          3187
CAMCDR2          2270
VCRX             1510

Page 90:

Randomization of Values

• The FINISHED column had the values of either NULL or ‘LATE’ so both of these conditions were utilized in various queries.

• At any time during the test a user process would be executing a query using any of the above values, providing a random number of return values for each unique query.

Page 91:

Graph of Transaction Times by Query by User Load

Transaction Time By Query

[Chart: Transaction time by query versus user load (0-90 users, 0-1200 milliseconds), with series id_and_product, Id_and_late, ID_and_product_and_late, ID_and_late_is_null, ID_and_Product_and_Late_is_null and just_ID.]

Page 92:

Average Transaction Times

[Chart: Average and 90th-percentile transaction times versus user load (0-90 users, 0.01-100 seconds, log scale) for three test runs.]

Page 93:

Response Time

[Chart: Average and 90th-percentile response times versus user load (0-90 users, 0-0.25 seconds) for three test runs.]

Page 94:

TPS

[Chart: Transactions per minute versus user load (0-100 users, 0-350 transactions) for three test runs.]

Notice that one test has significantly higher TPS than the other two after the throttling effect at 18-24 users; this is due to variances in server availability in the shared user environment, and it makes it difficult to accurately predict overall performance for scaling purposes.

Page 95:

Database Activity

col meas_date format a19
set pages 0 numwidth 12
spool logical_reads_q2
select a.instance_number,
       to_char(a.begin_interval_time,'yyyymmdd hh24:mi') meas_date,
       b.value
from dba_hist_snapshot a, dba_hist_sysstat b
where b.stat_name = 'session logical reads'
  and a.begin_interval_time > sysdate-7
  and b.snap_id = a.snap_id
order by a.instance_number, a.begin_interval_time
/
spool off
set numwidth 10 pages 22

Page 96:

Logical Reads

[Chart: Session logical reads per snapshot interval, 0-9,000,000.]

Page 97:

Physical Reads

[Chart: Physical reads per snapshot interval, 1-100,000, log scale.]

Page 98:

Scattered and Sequential Reads

[Chart: DB file scattered reads and DB file sequential reads over time, 1-10,000, log scale.]

Page 99:

Results

The Issues queries perform primarily in memory, which is why their performance is excellent (sub-second on average). If this could be maintained for the production environment, performance for these queries would be optimal, at least as far as the database is concerned.

However, it is projected that adding the requirement to allow searching for issues in which the current user has no role (e.g. a supervisor checking on his subordinates' issues) would increase the number of table entries by a factor of around 20.

This would increase the table to nearly a gigabyte in size, driving up physical reads if the current database memory size is maintained.

If the system shifts from predominately logical to predominately physical reads to satisfy the queries, processing time could increase by a factor of 17 to 100 times.
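That order-of-magnitude range can be sanity-checked from rough per-read costs. The figures below are assumptions for illustration (a buffer-cache hit costs on the order of 0.05 ms of CPU; a physical disk read 1 to 5 ms on storage of that era), not numbers from the slides:

```python
# Assumed per-read costs (illustrative, not measured):
logical_ms = 0.05            # buffer-cache (logical) read
physical_ms_fast = 1.0       # physical read, best case
physical_ms_slow = 5.0       # physical read, slower disk

# If a query's reads shift from logical to physical, the per-read
# cost rises by roughly these factors:
slowdown_low = physical_ms_fast / logical_ms    # about 20x
slowdown_high = physical_ms_slow / logical_ms   # about 100x
```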

Page 100:

Table/Index Reads

[Chart: Table scan rows and table-by-ROWID rows over time, 1-100,000,000, log scale.]

Page 101:

Operating System Activity

[Chart: CPU activity (user, system, wait and total percent, with a 7-period moving average of the total) at 10-second intervals, 0-100 percent.]

Page 102:

CPU Results

• We see what appears to be a great number of processing peaks; these are due to the startup of the test users and can generally be disregarded. The 7-point moving average is a better indicator of actual CPU activity in this case, and shows that even with 78 users hammering the system we only reached 67 percent of CPU.

• The biggest issue, which stopped testing at 78 users, was memory related, not CPU.
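The 7-point moving average used to smooth the CPU graph is a simple sliding window; a minimal sketch (Python, with made-up sample data) shows how isolated start-up spikes get damped:

```python
def moving_average(samples, window=7):
    # Trailing moving average: each point is the mean of the current
    # sample and up to window-1 samples before it.
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Spiky 10-second CPU readings (illustrative): the isolated peaks from
# test-user start-up are damped in the smoothed series.
cpu = [10, 90, 12, 11, 95, 13, 12, 11, 88, 12]
smooth = moving_average(cpu)
```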

Page 103:

Memory Usage

[Chart: Free swap in 1K pages (log scale) at 10-second readings.]

Page 104:

Phase 1 Conclusions

• Phase 1 shows that for the current data size (42 megabytes in the ISSUES base table) the data is completely cached in the database buffer area, leading to excellent query performance at the database level. However, if system data volume increases by a factor of 20, as predicted, the memory will no longer be able to fully cache the ISSUES data, and increased physical reads will seriously impact performance of the Issues queries.

• Should the data size increase by a factor of 20, the database cache area would also need to be increased by a similar amount (from 500 megabytes to 10 gigabytes) to get the same performance, unless some form of partitioning on the ISSUES table is used to reduce the working data set size.

Page 105:

Phase 1 Conclusions

• In addition, increasing the data set size will increase the amount of logical IO and CPU usage.

• If CPU usage increases by a factor of 20 due to increases in data set size then to support the same 78 users with the same level of performance 10 CPUs would be required.

• Using ratios, the current configuration utilizes 0.87 percent of the available CPUs (2) for each user at peak load (68 percent of CPU at 78 users). Increasing the workload by a factor of 20 would drive CPU usage to 1740 percent; allowing that this is for 2 CPUs, each CPU would be doing 870 percent. Thus at least 9 CPUs (assuming the data is fully cached) would be required just to handle the Issues-type queries at a 78-user load with a factor-of-20 data size increase.

Page 106:

Maintaining Service Level Agreements (SLAs)

• An SLA (Service Level Agreement) is a contractual agreement, usually between a service provider, such as a hosting service, and a client.

• SLAs may also exist between other departments and the IT department.

• Generally, SLAs related to databases call for a specific response time; for example, a particular screen may need to be populated within 7 seconds, or a particular report must return results within 3 seconds.

• The IT department must define specific tests that are performed at specific intervals to verify SLA compliance, or lack of compliance, before the users notice.

• The results from the SLA performance tests are usually graphed or trended.

Page 107:

Determining SLA Test Queries

• If a particular screen or report is the basis of the SLA, then the test queries should be the queries that fill the screen with data or the query that pulls the data for the report.

• In previous sections we saw queries that responded with data in sub-second times, yet the application response time was over the SLA of 3 seconds.

• The problem in the system in the previous section was the downstream reporting system.

• Even though the database responded in sub-second time, the downstream reporting system and web server introduced delays that made the response time the user saw longer than the 3-second SLA. Be sure your SLA covers only items you have control over!

Page 108:

Determining SLA Test Queries

• You must make sure that an SLA is meaningful for your part of the system.

• Did I mention that the system with the 3 second SLA also had to service clients in Asia from a server in the Midwestern portion of the USA where the network latency was 500 milliseconds for each leg of the network round trip?

• You must make sure that not only do you have a meaningful SLA, but that the queries you choose to test with are of sufficient complexity and quantity to fully test the important parts of the SLA.

Page 109:

Ok, I have the SLA and Queries, What Now?

• Once you have a properly defined SLA and the queries to test it, you have to set up periodic testing routines that verify the SLA timing criteria are met.

• The tests must be run not just during off hours with a low load, but also during peak loads; after all, your users obviously don't use the system only at off-peak times.

• Your test scripts can range from a simple SQL test harness that uses the pre-chosen SQL and is run periodically during the day, to a mini-benchmark that is run automatically on a scheduled basis.

Page 110:

Issues with Generating Your Own Scripts

If you generate your own SQL test harness you face the following issues:

• SQL may change, requiring recoding

• It is very difficult to randomize code variables

• Capturing timing values can be problematic

Page 111:

Issues with Generating Your Own Scripts

• Without a benchmark utility you are limited to either manually running the scripts and capturing the timings to verify your SLA, or developing your own SQL test harness to inject code into your database.

• However, if you inject the identical SQL statements in each test run, you may get artificially good performance due to caching at the database level.

• You must introduce randomness into SQL variables and run queries multiple times, then average the timings to get valid results.

Page 112:

The pseudo-code for such a SQL test harness:

Open procedure with "X" (number of times for each SQL iteration)
  Loop 1: Choose template SQL statement from SQL table
    Loop 2: SQL processing (iterate X times)
      Read example SQL string from test SQL table
      Parse variables from code
      Loop 3:
        Read variable types and random values from variable table
        Replace variables in SQL string with proper variable
      End Loop 3
      Capture start timing
      Execute parsed and variable-loaded SQL into cursor
      Capture end timing
      Calculate total time spent executing
      Load result table for executed SQL with timing
    End Loop 2
  End Loop 1
  Calculate averages for all SQL
  Compare calculated averages to SLAs
  Send alert if any SLA exceeded
End Procedure

Page 113:

The Easy Way

• If you are using Oracle, the Grid Control and Database Control interfaces allow you to enter new procedures to calculate metrics; the results can generate server alerts that send you emails when SLAs are exceeded.

• Another easy method is to use a Benchmark tool that allows you to enter the test SQL and program in randomization of variables.

• Then the most difficult part is scheduling the test to run.

• Tools like Benchmark Factory will even email you with results from SQL testing.

Page 114:

Supporting Server & Storage Consolidations

Page 115:

Supporting Server & Storage Consolidations

• Determining memory needs is a matter of looking at current usage during peak times across the various platforms to be consolidated and adding the results.

• Determining disk needs can seem easier than determining CPU and memory needs, but you must not only look at disk capacity; you must also consider IO capacity and user concurrency.

• Always look at IO rates and concurrency needs first. Allow no more than 90 IO/sec per drive (less if you use RAID 5). Concurrency is harder to figure, but if you allow for the needed IO rates you usually come close to allowing for concurrency as well.

• The RAID calculations for RAID 10 and RAID 5 are shown in the spreadsheet on the next page.
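The spreadsheet arithmetic comes down to RAID write penalties; the sketch below uses the common figures (2 back-end IOs per write for RAID 10, 4 for RAID 5) and the 90 IO/sec-per-drive ceiling from the slide, with a made-up workload for illustration:

```python
import math

def drives_needed(read_iops, write_iops, raid, iops_per_drive=90):
    # Back-end IOs: a read costs 1 IO; a write costs 2 on RAID 10
    # (data + mirror) or 4 on RAID 5 (read data and parity, then
    # write data and parity).
    penalty = {"raid10": 2, "raid5": 4}[raid]
    backend = read_iops + write_iops * penalty
    return math.ceil(backend / iops_per_drive)

# Illustrative workload: 600 reads/sec and 200 writes/sec.
r10 = drives_needed(600, 200, "raid10")   # (600 + 400) / 90 -> 12
r5 = drives_needed(600, 200, "raid5")     # (600 + 800) / 90 -> 16
```

The same workload needs noticeably more spindles under RAID 5 because of the parity write penalty, which is why the slide suggests allowing fewer IO/sec per drive there.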

Page 116:

Disk IO and RAID

Page 117:

Conclusions

• In this presentation we have examined the uses of benchmark tools to perform capacity analysis and prediction.

• We have seen examples using the Benchmark Factory tool to demonstrate the use of such tools for trending and planning for future needs.

• We have also examined the use of manual tools such as spreadsheets to predict the CPUs, memory and disks needed in server consolidations.