10.2 Streams Recommendations



Contents

OVERVIEW
CONFIGURATION
  1. Software Version
  2. Database Parameters
  3. Database Storage
     1. Tablespace for Streams Administrator queues
     2. Separate queues for capture and apply
  4. Privileges
  5. Source Site Configuration
     1. Streams and Flash Recovery Area (FRA)
     2. Archive Logging must be enabled
     3. Supplemental logging
     4. Implement a Heartbeat Table
     5. Perform periodic maintenance
     6. Capture Process Configuration
     7. Propagation Configuration
     8. Additional Configuration for RAC Environments for a Source Database
  6. Target Site Configuration
     1. Privileges
     2. Instantiation
     3. Conflict Resolution
     4. Apply Process Configuration
     5. Additional Configuration for RAC Environments for an Apply Database
OPERATION
  Global Name
  Apply Error Management
  Backup Considerations
  Batch Processing
  DDL Replication
  Propagation
  Source Queue Growth
  Streams Cleanup/Removal
  Automatic Optimizer Statistics Collection
MONITORING
  Monitoring Utility STRMMON
  Alert Log
  Streams Healthcheck Scripts

OVERVIEW

Streams can be used for such diverse tasks as data replication, data warehousing, database migration between platforms, application upgrade or migration, and queuing applications.

Streams can be configured in a number of different ways depending on the business requirements. For example, Streams can be configured to

· Replicate data from a database to one or more databases, even if those databases have different structures or naming conventions.

· Replicate data between hardware platforms and/or database releases and/or character sets.

· Consolidate data from multiple sources with varying structures into a single database.

· Provide high availability while performing database or application upgrades or while migrating between hardware platforms.

Below are some of the key recommendations for successful configuration of Streams in the 10.2 release of the Oracle Database.

For 10.2.0.4 recommendations, please review the following technical white papers available on OTN.

Oracle Streams Configuration Best Practices: Oracle Database 10g Release 10.2 -- Technical White Paper (pdf) -- July 2008

Oracle Streams Performance Tuning Best Practices: Oracle Database 10g Release 10.2 -- Technical White Paper (pdf) -- November 2008

CONFIGURATION

To ensure a successful Streams implementation, use the following recommendations when constructing a Streams environment in the 10.2 release of the Oracle database.

· Software Version

· Database Settings: Parameters, Storage, and Privileges

· Source Site Configuration

· Target Site Configuration

1.0 Software Version

Oracle recommends 10.2.0.3 and the following critical patches on top of 10.2.0.3. Please assess if any recommended patch conflicts with existing patches on your system.

The list of recommended patches on top of 10.2.0.3 is: 6081550, 6081547, 6267873, 6043052, 6375952, and 5093060. The order in which you apply these patches is extremely

important. Please follow these steps:

· Apply patch 6081550

· Remove patches for 6067600 and 6067611 from 6081550 using the Opatch utility rollback command. Patches 6267873 and 6375952 (listed below) supersede the patches for 6067600 and 6067611, respectively.

· Apply patches 6267873, 6375952, 6081547, 6043052, and 5093060

2.0 Database Parameters

For best results in a Streams environment, set the following init.ora parameters, as necessary, at each participating instance: global_names, _job_queue_interval, sga_target, streams_pool_size.

Parameter recommendations are listed below in the format Parameter / Description / Considerations.

_JOB_QUEUE_INTERVAL = 1
Description: Scan rate interval (seconds) of the job queue. Default is 5.
Considerations: This improves the scan rate for propagation jobs to every second, rather than every 5 seconds.

COMPATIBLE = 10.2.0.0
Description: Specifies the release with which the Oracle server must maintain compatibility. Oracle servers with different compatibility levels can interoperate.
Considerations: To use the new Streams features introduced in Oracle Database 10g Release 2, this parameter must be set to 10.2.0.0 or higher. To use 10.2 downstream capture on logs from a 10.1 source, this parameter must be set to 10.1.0.0 at the source database and 10.2.0.0 at the downstream capture database.

GLOBAL_NAMES = TRUE
Description: Specifies whether a database link is required to have the same name as the database to which it connects. Default is FALSE.
Considerations: This parameter should be set to TRUE at each database that is participating in your Streams environment to eliminate errors resulting from incorrect database connections. This parameter setting is honored by database links.

JOB_QUEUE_PROCESSES > number of propagations defined
Description: Specifies the number of Jn job queue processes for each instance (J000 ... J999). Job queue processes handle requests created by DBMS_JOB.
Considerations: This parameter controls the maximum number of jobs that can run concurrently within the instance and should be set to a value greater than the number of propagations configured for the database. Be sure to increase this parameter if there are any automated jobs configured for the database.

LOG_ARCHIVE_DEST_n
Description: Defines up to ten log archive destinations, where n is 1, 2, 3, ... 10.
Considerations: A specific archive log destination should be specified if this database is the source for a Streams capture process. Specify a destination other than the flash recovery area (FRA) for storing archived logs if a local capture process is enabled. To use downstream capture and copy the redo log files to the downstream database using log transport services, at least one log archive destination must be to the site running the downstream capture process. Avoid copying log files to a remote flash recovery area for downstream capture processing. See also: Oracle Data Guard Concepts and Administration.

LOG_ARCHIVE_DEST_STATE_n
Description: Specifies the availability state of the corresponding destination. The parameter suffix (1 through 10) specifies one of the ten corresponding LOG_ARCHIVE_DEST_n destination parameters.
Considerations: Enable archive logging to the specified destination for both local and downstream capture. To use downstream capture and copy the redo log files to the downstream database using log transport services, make sure the destination that corresponds to the LOG_ARCHIVE_DEST_n destination for the downstream database is set to ENABLE.

PARALLEL_MAX_SERVERS
Description: Specifies the maximum number of parallel execution processes and parallel recovery processes for an instance. As demand increases, Oracle will increase the number of processes from the number created at instance startup up to this value. Default: derived from the values of CPU_COUNT, PARALLEL_ADAPTIVE_MULTI_USER, and PARALLEL_AUTOMATIC_TUNING. Range: 0 to 3599. Modifiable: yes.
Considerations: In a Streams environment, each capture process and apply process may use multiple parallel execution servers. Set this initialization parameter to an appropriate value to ensure that there are enough parallel execution servers. For each defined Streams process (capture or apply), increase this parameter by 2 + parallelism, where parallelism is the value of the capture or apply parallelism parameter.

REMOTE_ARCHIVE_ENABLE
Description: Enables or disables the sending of redo archival to remote destinations and the receipt of remotely archived redo.
Considerations: To use downstream capture and copy the redo log files to the downstream database using log transport services, this parameter must be set to TRUE at both the source database and the downstream database. This parameter is not required for a local capture configuration.

SGA_MAX_SIZE
Description: Specifies the maximum size of the SGA for the lifetime of a database instance.
Considerations: To run multiple Streams processes on a single database, you may need to increase the size of this parameter.

SGA_TARGET = 0
Description: Specifies the total size of all System Global Area (SGA) components. If this parameter is set to a nonzero value, then the size of the Streams pool is managed by Automatic Shared Memory Management.
Considerations: For best results, size the shared_pool and streams_pool explicitly, and tune the streams_pool_size using V$STREAMS_POOL_ADVICE.

STREAMS_POOL_SIZE
Description: Specifies (in bytes) the size of the Streams pool. The Streams pool contains buffered queue messages. In addition, the Streams pool is used for internal communications during parallel capture and apply. Refer to V$STREAMS_POOL_ADVICE to determine the correct size to avoid excessive spills.
Considerations: This parameter is modifiable. If this parameter is reduced to zero when an instance is running, then Streams processes and jobs will not run. The size of the Streams pool is affected by each of the following factors:

· Capture process parallelism. Increase the Streams pool size by 10 MB for each capture process. In addition, if the capture parameter PARALLELISM is set greater than 1, increase the Streams pool size by 10 MB * parallelism. For example, if parallelism is set to 3 for a capture process, then increase the Streams pool by 30 MB.

· Apply process parallelism. Increase the Streams pool size by 1 MB for each apply process. In addition, if the apply parameter PARALLELISM is set greater than 1, increase the Streams pool size by 1 MB * parallelism. For example, if parallelism is set to 5 for an apply process, then increase the Streams pool by 5 MB.

· Logical Change Records (LCRs) are stored in the buffered queue. Increase the size of the Streams pool to handle the volume of replicated data managed at both the source and target databases.

Minimally set the Streams pool size to 256 MB on low-activity databases or 500 MB on more active OLTP configurations. Adjust the Streams pool size to an appropriate value using the V$STREAMS_POOL_ADVICE view to avoid excessive spill from the buffered queue to disk.
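For example, a query along these lines against the V$STREAMS_POOL_ADVICE view can be used to find the smallest pool size whose estimated spill count approaches zero:

SELECT streams_pool_size_for_estimate,
       estd_spill_count, estd_spill_time
  FROM v$streams_pool_advice
 ORDER BY streams_pool_size_for_estimate;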


3. Database Storage

1. Tablespace for Streams Administrator queues

Create a separate tablespace for the streams administrator schema (STRMADMIN) at each participating Streams database. This tablespace will be used for any

objects created in the streams administrator schema, including any spillover of messages from the in-memory queue.

For example:

CREATE TABLESPACE &streams_tbs_name DATAFILE '&db_file_directory/&db_file_name'
  SIZE 25M REUSE AUTOEXTEND ON NEXT 25M MAXSIZE UNLIMITED;

ALTER USER strmadmin DEFAULT TABLESPACE &streams_tbs_name QUOTA UNLIMITED ON &streams_tbs_name;

2. Separate queues for capture and apply

Configure separate queues for changes that are captured locally and for receiving captured changes from each remote site. This is especially important when

configuring bi-directional replication between multiple databases. For example, consider the situation where Database db1.net replicates its changes to databases

db2.net, and Database db2.net replicates to db1.net. Each database will maintain 2 queues: one for capturing the changes made locally and the other for receiving changes from the other database.

Similarly, for 3 databases (db1.net, db2.net, db3.net) replicating the local changes directly to each other database, there will be 3 queues at each database. For

example at db1.net, queue1 for the capture process, and queue2 and queue3 for receiving changes from each of the other databases. The two apply processes on

db1.net (apply_from_db2, apply_from_db3) apply the changes, each associated with a specific queue (queue2 or queue3).

Queue names should not exceed 24 characters in length. Queue table names should not exceed 24 characters in length. To pre-create a queue for Streams, use the

SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package. If you use the MAINTAIN_TABLES, MAINTAIN_SCHEMAS, or MAINTAIN_GLOBAL

procedures to configure Streams and do not identify specific queue names, individual queues will be created automatically.

Example: To configure a site (SITEA) that is capturing changes for distribution to another site, as well as receiving changes from that other site (SITEB), configure

each queue at SITEA with a separate queue_table as follows:

dbms_streams_adm.set_up_queue(queue_table_name => 'QT_CAP_SITEA', queue_name => 'CAP_SITEA');

dbms_streams_adm.set_up_queue(queue_table_name => 'QT_APP_FROM_SITEB', queue_name => 'APP_FROM_SITEB');

If desired, the above set_up_queue procedure calls can include a storage_clause parameter to configure separate tablespace and storage specifications for each

queue table. Typically, Logical Change Records (LCRs) are queued to an in-memory buffer and processed from memory. However, they can be spilled to disk if they remain in memory too long due to an unavailable destination or under memory pressure (streams_pool memory is too low). The storage clause parameter can be

used to preallocate space for the queue table or specify an alternative tablespace for the queue table without changing the default tablespace for the Streams

Administrator.

4. Privileges


In order to create capture and apply processes, the Streams Administrator must have DBA privilege. This privilege must be explicitly granted to the Streams Administrator.

GRANT DBA to STRMADMIN;

In addition, other required privileges must be granted to the Streams Administrator schema (strmadmin) on each participating Streams database with the

GRANT_ADMIN_PRIVILEGE procedure:

DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');

5. Source Site Configuration

The following recommendations apply to source databases, ie, databases in which Streams capture is configured.

1. Streams and Flash Recovery Area (FRA)

Configure a separate log archive destination independent of the Flash Recovery Area for the Streams capture process for the database. Archive logs in the FRA can be

removed automatically on space pressure, even if the Streams capture process still requires them. Do not allow the archive logs for Streams capture to reside solely in the

FRA.

2. Archive Logging must be enabled

Verify that each source database is running in ARCHIVE LOG mode. For downstream capture sites (ie, databases in which the Streams capture is configured for

another database), the database at which the source redo logs are created must have archive logging enabled.

3. Supplemental logging

Confirm supplemental logging is enabled at each source site. In 10gR2, supplemental logging is automatically configured for tables on which primary, unique, or

foreign keys are defined when the database object is prepared for Streams capture. The procedures for maintaining streams and adding rules in the

DBMS_STREAMS_ADM package automatically prepare objects for a local Streams capture. For downstream capture sites (ie, databases in which the Streams capture is configured for another database), the database at which the source redo logs are created must have supplemental logging for the database objects of interest to the downstream capture process.

All target site indexed columns, including the primary key, unique index, and foreign key columns of a replicated table or database, must be logged at the source site. Primary key columns must be unconditionally logged; unique index and foreign key columns can be conditionally logged. This supplemental logging is enabled automatically when the source table is prepared for capture with DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION.

Any columns specified in rule-based transformations or used within DML handlers at the target site must be unconditionally logged at the source site. Supplemental logging for these columns must be configured explicitly by the database administrator, using the SQL syntax: ALTER TABLE ... ADD SUPPLEMENTAL LOG ...

To verify that supplemental logging has been specified at the source, either at the database level or for the individual replicated table:


Database level logging:

SELECT supplemental_log_data_pk, supplemental_log_data_ui FROM V$DATABASE;

Table level logging:

SELECT supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_data_fk FROM dba_capture_prepared_tables UNION
SELECT supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_data_fk FROM dba_capture_prepared_schemas UNION
SELECT supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_data_fk FROM dba_capture_prepared_database;

Check supplemental log groups:

SELECT log_group_name, table_name,
       DECODE(always, 'ALWAYS', 'Unconditional', NULL, 'Conditional') always
  FROM dba_log_groups;

Check columns in supplemental log groups:

SELECT log_group_name, column_name, position
  FROM dba_log_group_columns
 WHERE table_name = 'DEPARTMENTS' AND owner = 'HR';

4. Implement a Heartbeat Table

To ensure that the applied_scn of the DBA_CAPTURE view is updated periodically, implement a "heart beat" table. A "heart beat" table is especially useful for

databases that have a low activity rate. The streams capture process requests a checkpoint after every 10Mb of generated redo. During the checkpoint, the

metadata for streams is maintained if there are active transactions. Implementing a heartbeat table ensures that there are open transactions occurring regularly

within the source database enabling additional opportunities for the metadata to be updated frequently. Additionally, the heartbeat table provides quick feedback

to the database administrator as to the health of the streams replication.

To implement a heartbeat table: create a table at the source site that includes a date or timestamp column and the global name of the database. Add a rule to capture changes to this table and propagate the changes to each target destination. Make sure that the target destination will apply changes to this table as well. Set up an automated job to update this table at the source site periodically, for example every minute.
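The following is a minimal sketch of such a setup, assuming a capture process named capture_ex and a queue strmadmin.streams_queue; all object names are illustrative, and the corresponding propagation and apply rules for the table are still required at each destination:

-- Illustrative heartbeat table owned by the Streams administrator
CREATE TABLE strmadmin.heartbeat (
  src_global_name VARCHAR2(128),
  update_time     TIMESTAMP);

INSERT INTO strmadmin.heartbeat
  SELECT global_name, SYSTIMESTAMP FROM global_name;
COMMIT;

-- Capture DML changes to the heartbeat table
BEGIN
  dbms_streams_adm.add_table_rules(
    table_name   => 'strmadmin.heartbeat',
    streams_type => 'capture',
    streams_name => 'capture_ex',
    queue_name   => 'strmadmin.streams_queue',
    include_dml  => TRUE,
    include_ddl  => FALSE);
END;
/

-- Update the heartbeat once a minute
BEGIN
  dbms_scheduler.create_job(
    job_name        => 'strmadmin.heartbeat_job',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN UPDATE strmadmin.heartbeat SET update_time = SYSTIMESTAMP; COMMIT; END;',
    repeat_interval => 'FREQ=MINUTELY',
    enabled         => TRUE);
END;
/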

5. Perform periodic maintenance

A. Confirm checkpoint retention. Periodically, the mining process checkpoints itself for quicker restart. These checkpoints are maintained in the SYSAUX tablespace by default. The capture parameter checkpoint_retention_time controls the amount of checkpoint data retained by moving the FIRST_SCN of the capture process forward. The FIRST_SCN is the lowest possible SCN available for capturing changes. When the checkpoint_retention_time is exceeded (default = 60 days), the FIRST_SCN is moved, and the Streams metadata tables prior to this SCN (FIRST_SCN) can be purged and space in the SYSAUX tablespace reclaimed. To alter the checkpoint_retention_time, use the DBMS_CAPTURE_ADM.ALTER_CAPTURE procedure.

B. Dump a fresh copy of the dictionary to redo. Issue a DBMS_CAPTURE_ADM.BUILD command to dump a current copy of the data dictionary to the redo logs. Doing this will reduce the amount of logs to be processed in case of additional capture process creation or process rebuild.

C. Prepare database objects for instantiation. Issue DBMS_CAPTURE_ADM.PREPARE_*_INSTANTIATION, where * indicates the level (TABLE, SCHEMA, GLOBAL), for the database objects captured by Streams. This is used in conjunction with the BUILD in B above for new capture creation or rebuild purposes.
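Hedged examples of tasks A, B, and C, assuming a capture process named capture_ex and a captured schema HR (both illustrative):

-- A. Reduce checkpoint retention from the 60-day default to 7 days
BEGIN
  dbms_capture_adm.alter_capture(
    capture_name              => 'capture_ex',
    checkpoint_retention_time => 7);
END;
/

-- B. Dump a fresh copy of the data dictionary to the redo logs
SET SERVEROUTPUT ON
DECLARE
  scn NUMBER;
BEGIN
  dbms_capture_adm.build(first_scn => scn);
  dbms_output.put_line('Dictionary build at SCN ' || scn);
END;
/

-- C. Re-prepare the captured objects, here at the schema level
exec dbms_capture_adm.prepare_schema_instantiation(schema_name => 'HR');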

6. Capture Process Configuration

A. Configuring Capture

As much as possible, use the MAINTAIN_* (where * = TABLE, SCHEMA, GLOBAL, TTS) procedures to configure Streams. These procedures automate the entire

configuration of the streams processes between databases, following the Streams best practices. For local capture, the default behavior of these procedures is to

implement a separate queue for capture and apply. If you are configuring a downstream capture and applying the changes within the same database, override this

behavior by specifying the same queue for both the capture_queue_name and apply_queue_name.

If the maintain_* procedures are not suitable for your environment, please use the ADD_*_RULES procedures (ADD_TABLE_RULES, ADD_SCHEMA_RULES

for DML and DDL, ADD_SUBSET_RULES for DML only, and ADD_GLOBAL_RULES for DDL only). These procedures minimize the number of steps required

to configure Streams processes. It is also possible to create rules for non-existent objects, so be sure to check the spelling of each object specified in a rule carefully.

The Streams capture process requires a rule set with rules. The ADD_GLOBAL_RULES procedure can be used to capture DML changes for the entire database, as long as a negative rule set is created for the capture process that includes rules for objects with unsupported datatypes. ADD_GLOBAL_RULES can be used to capture all DDL changes for the database.

A single Streams capture can process changes for multiple tables or schemas. For best performance, rules for these multiple tables or schemas should be simple.

Rules that include LIKE clauses are not simple and will impact the performance of Streams.

To eliminate changes for particular tables or objects, specify the include_tagged_lcr clause along with the table or object name in the negative rule set for the Streams process. Setting this clause will eliminate ALL changes, tagged or not, for the table or object.
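A minimal sketch, reusing the illustrative capture process capture_ex and queue strmadmin.streams_queue; inclusion_rule => FALSE places the generated rules in the negative rule set:

BEGIN
  dbms_streams_adm.add_table_rules(
    table_name         => 'hr.job_history',
    streams_type       => 'capture',
    streams_name       => 'capture_ex',
    queue_name         => 'strmadmin.streams_queue',
    include_dml        => TRUE,
    include_ddl        => TRUE,
    include_tagged_lcr => TRUE,   -- eliminate ALL changes, tagged or not
    inclusion_rule     => FALSE); -- add the rules to the negative rule set
END;
/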

B. Capture Parameters

Set the following parameters after a capture process is created:

Parameter & Recommendation: PARALLELISM = 1
Default: 1
Comment: Number of parallel execution servers to configure one or more preparer processes used to prefilter changes for the capture process. Recommended value is 1.

Parameter & Recommendation: _CHECKPOINT_FREQUENCY = 500
Default: 10 (before 10.2.0.4); 1000 in 10.2.0.4
Comment: Modify the frequency of logminer checkpoints, especially in a database with significant LOB or DDL activity. Larger values decrease the frequency of logminer checkpoints; smaller values increase it. Logminer checkpoints are not the same as database checkpoints. Availability of logminer checkpoints impacts the time required to recover/restart the capture after database restart. In a low-activity database (ie, small amounts of data or the data to be captured is changed infrequently), use a lower value, such as 100. A logminer checkpoint is requested by default every 10 MB of redo mined. If the value is set to 500, a logminer checkpoint is requested after every 500 MB of redo mined. Increasing the value of this parameter is recommended for active databases with significant redo generated per hour. It should not be necessary to configure _CHECKPOINT_FREQUENCY in 10.2.0.4 or higher.

Parameter & Recommendation: _SGA_SIZE
Default: 10
Comment: Amount of memory (in MB) available from the Streams pool for logminer processing. The default amount of streams_pool memory allocated to logminer is 10 MB. Increase this value especially in environments where large LOBs are processed. This parameter should not be increased unless the logminer error ORA-1341 is encountered. Streams pool memory allocated to logminer is unavailable for other usage.

Capture parameters can be set using the SET_PARAMETER procedure from the DBMS_CAPTURE_ADM package. For example, to set the checkpoint

frequency of the streams capture process named CAPTURE_EX, use the following syntax while logged in as the Streams Administrator to request a logminer

checkpoint after processing every gigabyte (1000 MB) of redo:

dbms_capture_adm.set_parameter('capture_ex','_checkpoint_frequency','1000');

7. Propagation Configuration

A. Configuring Propagation

If the maintain_* procedures are not suitable for your environment, please use the ADD_*_PROPAGATION_RULES procedures (ADD_TABLE_PROPAGATION_RULES, ADD_SCHEMA_PROPAGATION_RULES, and ADD_GLOBAL_PROPAGATION_RULES for both DML and DDL; ADD_SUBSET_PROPAGATION_RULES for DML only). These procedures minimize the number of steps required to configure Streams processes. Also, it is possible to create rules for non-existent objects, so be sure to check the spelling of each object specified in a rule carefully.

The rules in the rule set for propagation can differ from the rules specified for the capture process. For example, to configure that all captured changes be propagated to a target site, a single ADD_GLOBAL_PROPAGATION_RULES procedure can be specified for the propagation even though multiple ADD_TABLE_RULES might have been configured for the capture process.

B. Propagation mode

For new propagations configured in 10.2, set the queue_to_queue propagation parameter to TRUE. If the database is RAC enabled, an additional service is created, typically named in the format sys$schema.queue_name.global_name, when the Streams subscribers are initially created. A Streams subscriber is a defined propagation between two Streams queues or an apply process with the apply_captured parameter set to TRUE. This service automatically follows the ownership of the queue on queue ownership switches (ie, instance startup, shutdown, etc). The service name can be found in the NETWORK_NAME column of the DBA_SERVICES view.

If the maintain_* (TABLE, SCHEMA, GLOBAL) procedures are used to configure Streams, queue_to_queue is automatically set to TRUE, if possible. The database link for this queue_to_queue propagation must use a TNS service name (or connect name) that specifies the GLOBAL_NAME in the CONNECT_DATA clause of the descriptor. See the section Additional Configuration for RAC Environments for a Source Database below.

Propagations configured prior to 10.2 continue to use the dblink mode of propagation. In this situation, if the database link no longer connects to the owning instance of the queue, propagation will not succeed. You can continue to use the 10.1 best practices for this propagation, or recreate the propagation during a maintenance window. Make sure that the queue is empty with no unapplied spilled messages before you drop the propagation. Then recreate the propagation with the queue_to_queue parameter set to TRUE, as sketched below.
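A hedged sketch of the recreation step, with illustrative propagation, queue, and link names; verify the source queue is empty first:

BEGIN
  dbms_propagation_adm.drop_propagation(
    propagation_name => 'prop_to_db2');
  dbms_propagation_adm.create_propagation(
    propagation_name   => 'prop_to_db2',
    source_queue       => 'strmadmin.cap_sitea',
    destination_queue  => 'strmadmin.app_from_sitea',
    destination_dblink => 'db2.net',
    queue_to_queue     => TRUE);
END;
/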

Queues created prior to 10.2 on RAC instances should be dropped and recreated in order to take advantage of the automatic service generation and

queue_to_queue propagation. Be sure to perform this activity when the queue is empty and no new LCRs are being enqueued into the queue.

C. Propagation Parameters

Parameter & Recommendation: latency = 5
Default: 60
Comment: Maximum wait, in seconds, in the propagation window for a message to be propagated after it is enqueued. The default value is 60. Caution: if latency is not specified for this call, then latency will overwrite any existing value with this default value (60). For example, if the latency is 60 seconds, then during the propagation window, if there are no messages to be propagated, messages from that queue for the destination will not be propagated for at least 60 more seconds. It will be at least 60 seconds before the queue will be checked again for messages to be propagated for the specified destination. If the latency is 600, then the queue will not be checked for 10 minutes, and if the latency is 0, then a job queue process will be waiting for messages to be enqueued for the destination, and as soon as a message is enqueued it will be propagated.


Propagation parameters can be set using the ALTER_PROPAGATION_SCHEDULE procedure from the DBMS_AQADM package. For example, to set the

latency parameter of the streams propagation from the STREAMS_QUEUE owned by STRMADMIN to the target database whose global_name is DEST_DB for

the queue Q1, use the following syntax while logged in as the Streams Administrator:

dbms_aqadm.alter_propagation_schedule('strmadmin.streams_queue','DEST_DB',destination_queue=>'Q1',latency=>5);

D. Network Connectivity

When using Streams propagation across a Wide Area Network (WAN), increase the session data unit (SDU) to improve the propagation performance. The

maximum value for SDU is 32K (32767). The SDU value for network transmission is negotiated between the sender and receiver sides of the connection: the

minimum SDU value of the two endpoints is used for any individual connection. In order to take advantage of an increased SDU for Streams propagation, the

receiving side sqlnet.ora file must include the default_sdu_size parameter. The receiving side listener.ora must indicate the SDU change for the SID. The sending

side tnsnames.ora connect string must also include the SDU modification for the particular service.
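A sketch of the three pieces, assuming default Oracle Net file locations; host names, SIDs, and paths are illustrative:

# Receiving side: sqlnet.ora
DEFAULT_SDU_SIZE=32767

# Receiving side: listener.ora (SDU for the SID)
SID_LIST_LISTENER=
  (SID_LIST=
    (SID_DESC=
      (SDU=32767)
      (SID_NAME=db)
      (ORACLE_HOME=/u01/app/oracle/product/10.2.0)))

# Sending side: tnsnames.ora connect descriptor
DEST_DB=
  (DESCRIPTION=
    (SDU=32767)
    (ADDRESS=(PROTOCOL=tcp)(HOST=dest-host)(PORT=1521))
    (CONNECT_DATA=(SERVICE_NAME=dest_db.mycompany.com)))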

Tuning the TCP/IP networking parameters can significantly improve performance across the WAN. Here are some example tuning parameters for Linux. These parameters can be set in the /etc/sysctl.conf file and applied by running sysctl -p. When using RAC, be sure to configure this at each instance.

net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# increase Linux autotuning TCP buffer limits
# min, default, and max number of bytes to use
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

In addition, the SEND_BUF_SIZE and RECV_BUF_SIZE sqlnet.ora parameters can increase the performance of propagation on your system. These parameters increase the size of the buffer used to send or receive the propagated messages. These parameters should only be increased after careful analysis of their overall impact on system performance.

For further information, please review the Oracle Net Services Guide.

8. Additional Configuration for RAC Environments for a Source Database

Archive Logs

The archive log threads from all instances must be available to any instance running a capture process. This is true for both local and downstream capture.

Queue Ownership

When Streams is configured in a RAC environment, each queue table has an "owning" instance. All queues within an individual queue table are owned by the same

instance. The Streams components (capture/propagation/apply) all use that same owning instance to perform their work. This means that

· a capture process is run at the owning instance of the source queue.

· a propagation job must run at the owning instance of the queue

· a propagation job must connect to the owning instance of the target queue.

Ownership of the queue can be configured to remain on a specific instance, as long as that instance is available, by setting the PRIMARY_INSTANCE and/or SECONDARY_INSTANCE parameters of DBMS_AQADM.ALTER_QUEUE_TABLE. If the primary_instance is set to a specific instance (ie, not 0), the queue ownership will return to the specified instance whenever the instance is up.
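For example, a hedged sketch using the illustrative queue table strmadmin.qt_cap_sitea from earlier:

BEGIN
  dbms_aqadm.alter_queue_table(
    queue_table        => 'strmadmin.qt_cap_sitea',
    primary_instance   => 1,
    secondary_instance => 2);
END;
/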

Capture will automatically follow the ownership of the queue. If the ownership changes while capture is running, capture will stop on the current instance and

restart at the new owner instance.

For queues created with Oracle Database 10g Release 2, a service will be created with the service name schema.queue and the network name SYS$schema.queue.global_name for that queue. If the global_name of the database does not match the db_name.db_domain name of the database, be sure to include the global_name as a service name in the init.ora.

For propagations created with the Oracle Database 10g Release 2 code with the queue_to_queue parameter set to TRUE, the propagation job will deliver only to the specific queue identified. Also, the source dblink for the target database connect descriptor must specify the correct service (global name of the target database) to connect to the target database. For example, the tnsnames.ora entry for the target database should include the CONNECT_DATA clause in the connect descriptor for the target database. This clause should specify (CONNECT_DATA=(SERVICE_NAME='global_name of target database')). Do NOT include a specific INSTANCE in the CONNECT_DATA clause.

For example, consider the tnsnames.ora file for a database with the global name db.mycompany.com. Assume that the alias name for the first instance is db1 and that the alias for the second instance is db2. The tnsnames.ora file for this database might include the following entries:

db.mycompany.com=
  (description=
    (load_balance=on)
    (address=(protocol=tcp)(host=node1-vip)(port=1521))
    (address=(protocol=tcp)(host=node2-vip)(port=1521))
    (connect_data=
      (service_name=db.mycompany.com)))

db1.mycompany.com=
  (description=
    (address=(protocol=tcp)(host=node1-vip)(port=1521))
    (connect_data=
      (service_name=db.mycompany.com)
      (instance_name=db1)))

db2.mycompany.com=
  (description=
    (address=(protocol=tcp)(host=node2-vip)(port=1521))
    (connect_data=
      (service_name=db.mycompany.com)
      (instance_name=db2)))

Use the first alias (db.mycompany.com), which does not name a specific instance, in the target database link USING clause.

DBA_SERVICES lists all services for the database. GV$ACTIVE_SERVICES identifies all active services for the database. In non-RAC configurations, the service name will typically be the global_name. However, it is possible for users to manually create alternative services and use them in the TNS connect_data specification. For RAC configurations, the service will appear in these views as SYS$schema.queue.global_name.

Propagation Restart

Use the procedures START_PROPAGATION and STOP_PROPAGATION from DBMS_PROPAGATION_ADM to enable and disable the propagation

schedule. These procedures automatically handle queue_to_queue propagation.

Example:

exec DBMS_PROPAGATION_ADM.stop_propagation('name_of_propagation');
-- or
exec DBMS_PROPAGATION_ADM.stop_propagation('name_of_propagation', force=>true);

exec DBMS_PROPAGATION_ADM.start_propagation('name_of_propagation');

6. Target Site Configuration


The following recommendations apply to target databases, ie, databases in which Streams apply is configured.

1. Privileges

Grant Explicit Privileges to APPLY_USER for the user tables

Examples:

Privileges for table-level DML: INSERT, UPDATE, DELETE.

Privileges for table-level DDL: CREATE (ANY) TABLE, CREATE (ANY) INDEX, CREATE (ANY) PROCEDURE.

2. Instantiation

Set instantiation SCNs manually if not using export/import. If manually configuring the instantiation SCN for each table within the schema, use the RECURSIVE=>TRUE option on the DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN procedure, as sketched below.

For DDL, set the instantiation SCN at the next higher level (ie, SCHEMA or GLOBAL level).
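A hedged sketch, run at the destination database; it assumes a source global name DB1.NET, a replicated schema HR, and a database link from the destination to the source named DB1.NET (RECURSIVE=>TRUE requires such a link). All names are illustrative.

DECLARE
  iscn NUMBER;
BEGIN
  -- current SCN, obtained from the source database over the database link
  iscn := dbms_flashback.get_system_change_number@DB1.NET;
  dbms_apply_adm.set_schema_instantiation_scn(
    source_schema_name   => 'HR',
    source_database_name => 'DB1.NET',
    instantiation_scn    => iscn,
    recursive            => TRUE);
END;
/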

3. Conflict Resolution

If updates will be performed in multiple databases for the same shared object, be sure to configure conflict resolution. See the Streams Replication Administrator's Guide, Chapter 3, Streams Conflict Resolution, for more detail.

To simplify conflict resolution on tables with LOB columns, create an error handler to handle errors for the table. When registering the handler using the

DBMS_APPLY_ADM.SET_DML_HANDLER procedure, be sure to specify the ASSEMBLE_LOBS parameter as TRUE.
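As an illustration of the general case above, a hedged sketch of a prebuilt update conflict handler that keeps the row version with the latest timestamp; the table and column names are assumptions:

DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'salary';
  cols(2) := 'update_time';
  dbms_apply_adm.set_update_conflict_handler(
    object_name       => 'hr.employees',
    method_name       => 'MAXIMUM',      -- keep the values whose resolution column is greater
    resolution_column => 'update_time',
    column_list       => cols);
END;
/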

4. Apply Process Configuration

A. Rules

If the maintain_* procedures are not suitable for your environment, please use the ADD_*_RULES procedures (ADD_TABLE_RULES, ADD_SCHEMA_RULES, ADD_GLOBAL_RULES (for DML and DDL), and ADD_SUBSET_RULES (DML only)). These procedures minimize the number of steps required to configure Streams processes. Also, it is possible to create rules for non-existent objects, so be sure to check the spelling of each object specified in a rule carefully.

APPLY can be configured with or without a rule set. ADD_GLOBAL_RULES can be used to apply all changes in the queue for the database. If no rule set is specified for the apply process, all changes in the queue are processed by the apply process.


A single Streams apply can process rules for multiple tables or schemas located in a single queue that are received from a single source database. For best performance, rules should be simple. Rules that include LIKE clauses are not simple and will impact the performance of Streams.

To eliminate changes for particular tables or objects, specify the include_tagged_lcr clause along with the table or object name in the negative rule set for the

Streams process. Setting this clause will eliminate all changes, tagged or not, for the table or object.

B. Parameters

Set the following parameters after an apply process is created:

Parameter & Recommendation: DISABLE_ON_ERROR = N
Default: Y
Comment: If Y, then the apply process is disabled on the first unresolved error, even if the error is not fatal. If N, then the apply process continues regardless of unresolved errors.

Parameter & Recommendation: PARALLELISM = 4
Default: 1
Comment: Parallelism configures the number of apply servers available to the apply process for performing user transactions from the source database. Choose a value of 4, 8, 12, or 16 based on the concurrent replicated workload generated at the source AND the number of CPUs available at the target.

Apply parameters can be set using the SET_PARAMETER procedure from the DBMS_APPLY_ADM package. For example, to set the DISABLE_ON_ERROR

parameter of the streams apply process named APPLY_EX, use the following syntax while logged in as the Streams Administrator:

exec dbms_apply_adm.set_parameter('apply_ex','disable_on_error','n');

In some cases, performance can be improved by setting the following hidden parameters. These parameters should be set when the major workload is UPDATEs and the updates are performed on just a few columns of a many-column table.

Parameter & Recommendation: _DYNAMIC_STMTS = Y
Default: N
Comment: If Y, then for UPDATE statements, the apply process will optimize the generation of SQL statements based on required columns.

Parameter & Recommendation: _HASH_TABLE_SIZE = 1000000
Default: 80 * parallelism
Comment: Set the size of the hash table used to calculate transaction dependencies to 1 million.

5. Additional Configuration for RAC Environments for an Apply Database

Queue Ownership


When Streams is configured in a RAC environment, each queue table has an "owning" instance. All queues within an individual queue table are owned by the same

instance. The Streams components (capture/propagation/apply) all use that same owning instance to perform their work. This means that

· the database link specified in the propagation must connect to the owning instance of the target queue.

· the apply process is run at the owning instance of the target queue

Ownership of the queue can be configured to remain on a specific instance, as long as that instance is available, by setting the PRIMARY_INSTANCE and SECONDARY_INSTANCE parameters of DBMS_AQADM.ALTER_QUEUE_TABLE. If the primary_instance is set to a specific instance (ie, not 0), the queue ownership will return to the specified instance whenever the instance is up.

Apply will automatically follow the ownership of the queue. If the ownership changes while apply is running, apply will stop on the current instance and restart at

the new owner instance.

Changing the GLOBAL_NAME of the Database

See the OPERATION section on Global_name below. The following are some additional considerations when running in a RAC environment. If the

GLOBAL_NAME of the database is changed, ensure that the queue is empty before changing the name and that the apply process is dropped and recreated with

the apply_captured parameter = TRUE. In addition, if the GLOBAL_NAME does not match the db_name.db_domain of the database, include the

GLOBAL_NAME in the list of services for the database in the database parameter initialization file.

OPERATION

A Streams process will automatically restart after a database startup, assuming that the process was in a running state before the database shut down. No special startup or

shutdown procedures are required in the normal case.

Global Name

Streams uses the GLOBAL_NAME of the database to identify changes from or to a particular database. Do not modify the GLOBAL_NAME of a Streams database after capture has been created. Changes captured by the Streams capture process automatically include the current global name of the source database. This means that if the global name is modified after a capture process has been configured, the capture process will need to be dropped and recreated following the GLOBAL_NAME modification. In addition, the system-generated rules for capture, propagation, and apply typically specify the global name of the source database. These rules will need to be modified or recreated to adjust the source_database_name. Finally, if the GLOBAL_NAME does not match the db_name.db_domain of the database, include the GLOBAL_NAME in the list of services for the database in the database parameter initialization file.

If the global name must be modified on the database, do it at a time when NO user changes are possible on the database and the Streams queues are empty with no outstanding changes to be applied, so that the Streams configuration can be recreated. Keep in mind that all subscribers (propagations to target databases and the target apply processes) must also be recreated if the source database global_name is changed. Follow the directions in the Streams Replication Administrator's Guide for changing the DBID or GLOBAL_NAME of a source database.

It is also strongly recommended that the database init.ora parameter global_names be set to TRUE to guarantee that database link names match the global name of the target database.

Apply Error Management

The view DBA_APPLY_ERROR includes the message_number within the transaction on which the reported error occurred. Use this message number in conjunction with the procedures in the Streams documentation (Chapter 17, Streams Monitoring, "Displaying detailed information about error transactions") to print out the column values of each logical change record within the failed transaction.

Backup Considerations

1. Ensure that any manual backup procedures that include any of the following statements set a non-null Streams tag (see the sketch after this list):

ALTER TABLESPACE ... BEGIN BACKUP
ALTER TABLESPACE ... END BACKUP

The tag should be chosen such that these DDL commands will be ignored by the capture rule set. To set a Streams tag, use the DBMS_STREAMS.SET_TAG procedure. A non-null tag should be specified to avoid capturing these commands. Backups performed using RMAN do not need to set a Streams tag.

2. Do not allow any automated backup of the archived logs to remove necessary archive logs. It is especially important in a Streams environment that all necessary archived logs remain available online and in the expected location until the capture process has finished processing them. If a log required by the capture process is unavailable, the capture process will abort. Force a checkpoint (capture/logminer) before beginning the manual backup procedures. To force a checkpoint, explicitly reset the hidden capture parameter _CHECKPOINT_FORCE to 'Y'. The REQUIRED_CHECKPOINT_SCN column of the DBA_CAPTURE view specifies the lowest SCN required to restart capture. A procedure to determine the minimum archive log necessary for successful capture restart is available in the Streams health check script.

3. Ensure that all archive logs (from all threads) are available. Database recovery depends on the availability of these logs, and a missing log will result in incomplete

recovery.

4. Ensure that the APPLY process parameter, COMMIT_SERIALIZATION, is set to the default value, FULL.

5. Implement a "heartbeat" table. To ensure that the applied_scn of the DBA_CAPTURE view is updated periodically, implement a "heart beat" table. Implementing a

heartbeat table ensures that the metadata is updated frequently. Additionally, the heartbeat table provides quick feedback as to the health of streams replication.

6. In situations that result in incomplete recovery (point-in-time recovery) at the source site, follow the instructions in Chapter 9 of the Streams Replication Administrator's Guide:

Performing Point-in-Time Recovery on the Source in a Single-Source Environment
Performing Point-in-Time Recovery in a Multiple-Source Environment

7. In situations that result in incomplete recovery at the destination site, follow the instructions in Chapter 9 of the Streams Replication Administrator's Guide


Performing Point-in-Time Recovery on a Destination Database
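The tag sketch referenced in item 1 above, assuming the hot backup runs in the same session; the tablespace name and tag value (hex '11') are illustrative, and the tag must be one your capture rule set ignores:

-- Set a non-null tag so the BEGIN/END BACKUP DDL is not captured
exec dbms_streams.set_tag(tag => HEXTORAW('11'));

ALTER TABLESPACE users BEGIN BACKUP;
-- ... copy the datafiles ...
ALTER TABLESPACE users END BACKUP;

-- Restore the session tag afterwards
exec dbms_streams.set_tag(tag => NULL);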

Batch Processing

For best performance, the commit point for batch processing should be kept low. It is preferable that excessively large batch processing be run independently at each site. If this technique is utilized, be sure to use DBMS_STREAMS.SET_TAG to skip the capture of the batch processing session. Setting this tag is valid only in the connected session issuing the set_tag command and will not impact the capture of changes from any other database sessions.

DDL Replication

When replicating DDL, keep in mind the effect the DDL statement will have on the replicated sites. In particular, do not allow system generated naming for constraints or indexes,

as modifications to these will most likely fail at the replicated site. Also, storage clauses may cause some issues if the target sites are not identical.

If you decide NOT to replicate DDL in your Streams environment, any table structure change must be performed manually. Review the Best Practice: Manual DDL in a Streams Environment web page for tips on how to effectively perform DDL in a Streams environment.

Propagation

At times, the propagation job may become "broken" or fail to start after an error has been encountered or after a database restart. The typical solution is to disable the propagation

and then re-enable it.

exec dbms_propagation_adm.stop_propagation('propagation_name');

exec dbms_propagation_adm.start_propagation('propagation_name');

If the above does not fix the problem, perform a stop of propagation with the force parameter and then start propagation again.

exec dbms_propagation_adm.stop_propagation('propagation_name',force=>true);

exec dbms_propagation_adm.start_propagation('propagation_name');

An additional side effect of stopping the propagation with the force parameter is that the statistics for the propagation are cleared.

Source Queue Growth

The source queue may grow if one of the target sites is down for an extended period, or if propagation is unable to deliver the messages to a particular target site (subscriber) due to network problems for an extended period.

· Automatic flow control minimizes the impact of this queue growth. Queued messages (LCRs) for unavailable target sites will spill to disk storage while messages for

available sites are processed normally.

· Propagation is implemented using the DBMS_JOB subsystem. If a job is unable to execute 16 successive times, the job will be marked as "broken" and become disabled.

Be sure to periodically check that the job is running successfully to minimize source queue growth due to this problem.


Streams Cleanup/Removal

Dropping the Streams administrator schema with DROP USER ... CASCADE can be used to remove the entire Streams configuration.

Automatic Optimizer Statistics Collection

Oracle Database 10g has the Automatic Optimizer Statistics Collection feature, which runs every night and gathers optimizer stats of tables whose stats have become stale. The problem with volatile tables, such as the Streams queue tables, is that when the stats collection job runs, these tables may not contain data that is representative of their full load period. For this reason, we recommend that for volatile tables, customers run the dbms_stats gather job manually when the tables are at their fullest and then immediately lock the stats using the PL/SQL APIs provided (dbms_stats.lock_table_stats and related procedures), as sketched below. This ensures that when the nightly Automatic Optimizer Statistics Collection job runs, these volatile tables will be skipped and hence not analyzed.
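A minimal sketch for one queue table, reusing the illustrative strmadmin.qt_cap_sitea name:

BEGIN
  -- gather stats while the queue table is at its fullest ...
  dbms_stats.gather_table_stats(ownname => 'STRMADMIN', tabname => 'QT_CAP_SITEA');
  -- ... then lock them so the nightly job skips this table
  dbms_stats.lock_table_stats(ownname => 'STRMADMIN', tabname => 'QT_CAP_SITEA');
END;
/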

These volatile AQ/Streams tables are created through a call to dbms_aqadm.create_queue_table(qtable_name, ...) or the dbms_streams_adm.set_up_queue() procedure with a user-defined queue table (qtable_name). In addition to the queue table, the call internally creates the following tables, which also tend to be volatile:

aq$_{qtable_name}_i

aq$_{qtable_name}_h

aq$_{qtable_name}_t

aq$_{qtable_name}_p

aq$_{qtable_name}_d

aq$_{qtable_name}_c

Oracle has the ability to restore old stats on tables, including data dictionary tables, using the dbms_stats.restore_* APIs. This feature can be used for short-term resolution, but the real solution is the first one, where you lock the optimizer stats of volatile tables.

MONITORING

All Streams processing is done at the "owning instance" of the queue. To determine the owning instance, use the query below:

SELECT q.owner, q.name, t.queue_table, t.owner_instance
  FROM dba_queues q, dba_queue_tables t
 WHERE t.object_type = 'SYS.ANYDATA'
   AND q.queue_table = t.queue_table
   AND q.owner = t.owner;

To display the monitoring view information, either query the monitoring views from the owning instance or use the GV$ views for dynamic streams views.

Dynamic Streams views

The views listed below are the most commonly monitored runtime views in Streams. See the view descriptions in the Oracle Database 10g Release 2 Database Reference manual.


Streams View Name / View Name from any RAC instance

V$STREAMS_CAPTURE GV$STREAMS_CAPTURE

V$STREAMS_APPLY_COORDINATOR GV$STREAMS_APPLY_COORDINATOR

V$STREAMS_APPLY_READER GV$STREAMS_APPLY_READER

V$STREAMS_APPLY_SERVER GV$STREAMS_APPLY_SERVER

V$STREAMS_POOL_ADVICE GV$STREAMS_POOL_ADVICE

V$STREAMS_TRANSACTION GV$STREAMS_TRANSACTION

V$BUFFERED_PUBLISHERS GV$BUFFERED_PUBLISHERS

V$BUFFERED_QUEUES GV$BUFFERED_QUEUES

V$BUFFERED_SUBSCRIBERS GV$BUFFERED_SUBSCRIBERS

V$PROPAGATION_RECEIVER GV$PROPAGATION_RECEIVER

V$PROPAGATION_SENDER GV$PROPAGATION_SENDER

V$RULE GV$RULE

V$RULE_SET GV$RULE_SET

V$RULE_SET_AGGREGATE_STATS GV$RULE_SET_AGGREGATE_STATS

Static Streams Views

The views listed below are the most commonly monitored configuration views in Streams. See the view descriptions in the Oracle Database 10g Release 2 Database Reference manual.

Capture Views

DBA_CAPTURE
DBA_CAPTURE_EXTRA_ATTRIBUTES
DBA_CAPTURE_PARAMETERS
DBA_CAPTURE_PREPARED_DATABASE
DBA_CAPTURE_PREPARED_SCHEMAS
DBA_CAPTURE_PREPARED_TABLES

Apply Views

DBA_APPLY
DBA_APPLY_CONFLICT_COLUMNS
DBA_APPLY_DML_HANDLERS
DBA_APPLY_ENQUEUE
DBA_APPLY_ERROR
DBA_APPLY_EXECUTE
DBA_APPLY_INSTANTIATED_GLOBAL
DBA_APPLY_INSTANTIATED_OBJECTS
DBA_APPLY_INSTANTIATED_SCHEMAS
DBA_APPLY_KEY_COLUMNS
DBA_APPLY_OBJECT_DEPENDENCIES
DBA_APPLY_PARAMETERS
DBA_APPLY_PROGRESS
DBA_APPLY_SPILL_TXN
DBA_APPLY_TABLE_COLUMNS
DBA_APPLY_VALUE_DEPENDENCIES

Propagation & Queue Views

DBA_PROPAGATION
DBA_QUEUE_SCHEDULES
DBA_QUEUE_SUBSCRIBERS
DBA_QUEUE_TABLES
DBA_QUEUES
DBA_REGISTERED_ARCHIVED_LOG

Streams Views

DBA_RECOVERABLE_SCRIPT
DBA_RECOVERABLE_SCRIPT_BLOCKS
DBA_RECOVERABLE_SCRIPT_ERRORS
DBA_RECOVERABLE_SCRIPT_PARAMS
DBA_STREAMS_ADD_COLUMN
DBA_STREAMS_ADMINISTRATOR
DBA_STREAMS_DELETE_COLUMN
DBA_STREAMS_GLOBAL_RULES
DBA_STREAMS_MESSAGE_CONSUMERS
DBA_STREAMS_MESSAGE_RULES
DBA_STREAMS_NEWLY_SUPPORTED
DBA_STREAMS_RENAME_COLUMN
DBA_STREAMS_RENAME_SCHEMA
DBA_STREAMS_RENAME_TABLE
DBA_STREAMS_RULES
DBA_STREAMS_SCHEMA_RULES
DBA_STREAMS_TABLE_RULES
DBA_STREAMS_TRANSFORM_FUNCTION
DBA_STREAMS_TRANSFORMATIONS
DBA_STREAMS_UNSUPPORTED
DBA_RULE_SET_RULES
DBA_RULE_SETS
DBA_RULES
DBA_HIST_BUFFERED_QUEUES
DBA_HIST_BUFFERED_SUBSCRIBERS
DBA_HIST_RULE_SET
DBA_HIST_STREAMS_APPLY_SUM
DBA_HIST_STREAMS_CAPTURE
DBA_HIST_STREAMS_POOL_ADVICE


Monitoring Utility STRMMON

STRMMON is a monitoring tool focused on Oracle Streams. Using this tool, database administrators get a quick overview of the Streams activity occurring within a database, reported in a single-line display. The reporting interval and number of iterations to display are configurable. STRMMON is available in the rdbms/demo directory in $ORACLE_HOME. The most recent version of the tool is available from MetaLink article 290605.1.

Alert Log

Streams capture and apply processes report long-running and large transactions in the alert log.

Long-running transactions are open transactions with no activity (ie, no new change records, rollback, or commit) for an extended period (20 minutes). Large transactions are open transactions with a large number of change records. The alert log will report the fact that a long-running or large transaction has been seen every 20 minutes. Not all such transactions will be reported: only one per 10-minute period. When the commit or rollback is received, this fact will be reported in the alert log as well.

Streams Healthcheck Scripts

The Streams health check script is a collection of queries to determine the configuration of the Streams environment. This script should be run at each participating database in a Streams configuration. In addition to configuration information, analysis of the rules specified for Streams is included to enable quicker diagnosis of problems. A guide to interpreting the output is provided. The healthcheck script is an invaluable tool for troubleshooting customer issues. The Streams Healthcheck script is available from MetaLink.
