Tuesday, January 15, 2008

Oracle Streams step by step setup and configuration

Oracle Streams

Oracle Streams enables the sharing of data and events in a data stream, either within a database or from one database to another. This article is intended to assist Replication DBAs in setting up and configuring Oracle Streams Replication.

Streams performs three basic tasks:

  1. Capture
  2. Staging
  3. Apply

In a nutshell, replication using streams is implemented in the following way.

  1. A background capture process is configured to capture changes made to tables, schemas, or the entire database. The capture process captures changes from the redo log and formats each captured change into a logical change record (LCR). The capture process uses LogMiner to mine the redo/archive logs to build the LCRs.
  2. The capture process enqueues the LCR events into a specified queue.
  3. A propagation schedule moves the events from this queue to a queue in a different database.
  4. A background apply process dequeues the events and applies them at the destination database.

STREAMS SETUP

The Setup is divided into the following sections.

Section 1: Initialization Parameters Relevant to Streams

Section 2: Database privileges, Database storage

Section 3: Steps to be carried out at the Destination Database “prod2”

Section 4: Steps to be carried out at the Source Database “prod1”

Section 5: Export, import and instantiation of tables from Source to Destination Database

Section 6: Start the Capture process

Section 1: Initialization Parameters Relevant to Streams

At each participating Streams site, confirm that the following required init.ora parameters are set appropriately for each database:

  1. COMPATIBLE: To use Streams, COMPATIBLE must be set to 10.1.0 or higher.
  2. GLOBAL_NAMES: This parameter has to be set to TRUE at each database if you want to use Streams to share information between databases.
  3. JOB_QUEUE_PROCESSES: This parameter specifies the number of processes that can handle requests created by DBMS_JOB. Ensure that it is set to 4 or higher.
  4. PARALLEL_MAX_SERVERS: Each capture process and apply process may use multiple parallel execution servers. The apply process by default needs two parallel servers, so this parameter needs to be set to at least 2 even for a single non-parallel apply process. Specify a value that ensures there are enough parallel execution servers.
  5. SHARED_POOL_SIZE: Each capture process needs 10 MB of shared pool space, but Streams is limited to using a maximum of 10% of the shared pool. This 10% of SHARED_POOL_SIZE also determines how large the buffered queue can grow before spillover to disk occurs.
  6. OPEN_LINKS: Specifies the maximum number of concurrent open connections to remote databases in one session. Ensure that it is set to 4 or higher.
  7. PROCESSES: Make sure the value of this parameter allows for all background processes, such as locks, job queue processes, and parallel execution processes. In Streams, capture and apply processes use background processes and parallel execution processes, and propagation jobs use job queue processes.
  8. SESSIONS: If you plan to run one or more capture processes or apply processes in a database, then you may need to increase the size of this parameter. Each background process in a database requires a session.
  9. SGA_MAX_SIZE: If you plan to run multiple capture processes on a single database, then you may need to increase the size of this parameter. Note: _SGA_SIZE should only be increased if a LogMiner error is returned indicating a need for more memory. Any memory allocated to LogMiner is used solely by LogMiner; it is not returned to the shared pool after it has been allocated by capture until the capture process is restarted.

  10. TIMED_STATISTICS: If you want to collect elapsed time statistics in the data dictionary views related to Streams, then set this parameter to TRUE. The views that include elapsed time statistics are V$STREAMS_CAPTURE, V$STREAMS_APPLY_COORDINATOR, V$STREAMS_APPLY_READER, and V$STREAMS_APPLY_SERVER.
  11. LOG_ARCHIVE_DEST_n: Defines up to ten log archive destinations, where n is 1, 2, 3, ... 10. To use downstream capture and copy the redo log files to the downstream database using log transport services, at least one log archive destination must point to the site running the downstream capture process.
  12. LOG_ARCHIVE_DEST_STATE_n: To use downstream capture and copy the redo log files to the downstream database using log transport services, make sure the destination that corresponds to the LOG_ARCHIVE_DEST_n destination for the downstream database is set to ENABLE. The parameter suffix (1 through 10) specifies one of the ten corresponding LOG_ARCHIVE_DEST_n destination parameters.
  13. REMOTE_ARCHIVE_ENABLE: Enables or disables the sending of redo archival to remote destinations and the receipt of remotely archived redo. To use downstream capture and copy the redo log files to the downstream database using log transport services, this parameter must be set to TRUE at both the source database and the downstream database.
  14. STREAMS_POOL_SIZE (recommended > 200M): Specifies (in bytes) the size of the Streams pool. The Streams pool contains captured events and is also used for internal communications during parallel capture and apply. If the size of the Streams pool is greater than zero, then any SGA memory used by Streams is allocated from the Streams pool. If it is set to zero, then SGA memory used by Streams is allocated from the shared pool and may use up to 10% of the shared pool. This parameter is modifiable; however, if it is set to zero when an instance starts, then increasing it beyond zero has no effect on the current instance because the instance is using the shared pool for Streams allocations. Conversely, if it is set to a value greater than zero when an instance starts and is then reduced to zero while the instance is running, then Streams processes and jobs will not run. Increase the size of the Streams pool by each of the following factors: 10 MB for each capture process parallelism, 1 MB for each apply process parallelism, and 10 MB or more for each queue staging captured events. For example, if parallelism is set to 3 for a capture process, then increase the Streams pool by 30 MB; if parallelism is set to 5 for an apply process, then increase it by 5 MB.
  15. UNDO_RETENTION (recommended > 900): Specifies (in seconds) the amount of committed undo information to retain in the database. For a database running one or more capture processes, make sure this parameter specifies an adequate undo retention period. If you are running one or more capture processes and are unsure about the proper setting, start with at least 3600. If you encounter "snapshot too old" errors, increase the setting until the errors cease. Make sure the undo tablespace has enough space to accommodate the UNDO_RETENTION setting.
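Before moving on, it is handy to review the current values in one pass. The query below is a minimal sketch against V$PARAMETER (SHOW PARAMETER works equally well); the list of names simply mirrors the parameters discussed above:

SQL> SELECT name, value
     FROM v$parameter
     WHERE name IN ('compatible', 'global_names', 'job_queue_processes',
                    'parallel_max_servers', 'shared_pool_size', 'open_links',
                    'processes', 'sessions', 'sga_max_size', 'timed_statistics',
                    'remote_archive_enable', 'streams_pool_size', 'undo_retention');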

Section 2: Database privileges, Database storage.

Create the “stradmin” user on both the source and destination database and grant the necessary privileges.

SQL> create user stradmin identified by stradmin;

SQL> grant connect, DBA, imp_full_database, exp_full_database, aq_administrator_role to stradmin;

SQL> exec DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRADMIN');

As an option, it is possible to create a separate tablespace for the Streams administrator schema (STRADMIN) at each participating Streams database. This tablespace will be used for any objects created in the Streams administrator schema, including any spillover of messages from the in-memory queue.

SQL> create tablespace streams datafile '/disk1/oradata/prod/streams.dbf' size 40m autoextend on;

SQL> alter user stradmin default tablespace streams;
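To confirm that the grant succeeded, the DBA_STREAMS_ADMINISTRATOR view lists users granted Streams administrator privileges through GRANT_ADMIN_PRIVILEGE. A quick sanity check, run as a DBA on each database (exact columns vary slightly by release):

SQL> SELECT username, local_privileges, access_from_remote FROM dba_streams_administrator;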

Section 3: Steps to be carried out at the Destination Database “prod2”

1) Create the streams apply queue

connect STRADMIN/STRADMIN

BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'STRM_APPLY_QUEUE_TABLE',
    queue_name  => 'STRM_APPLY_QUEUE',
    queue_user  => 'STRADMIN');
END;
/
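Before adding rules, it is worth checking that the queue and its queue table were created and are enabled. A minimal sketch against DBA_QUEUES:

SQL> SELECT owner, name, queue_table, enqueue_enabled, dequeue_enabled
     FROM dba_queues WHERE owner = 'STRADMIN';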

2) Add Apply Rules for the schema at the destination database

BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name     => 'SCOTT',
    streams_type    => 'APPLY',
    streams_name    => 'STRMADMIN_APPLY',
    queue_name      => 'STRADMIN.STRM_APPLY_QUEUE',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'prod1.com');
END;
/
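The rules created by ADD_SCHEMA_RULES (and the implicitly created apply process) can be reviewed afterwards. A sketch using DBA_STREAMS_SCHEMA_RULES, which lists schema-level rules per Streams client:

SQL> SELECT streams_name, streams_type, schema_name, rule_type, rule_name
     FROM dba_streams_schema_rules;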

3) Specify an Apply user at the destination database

This is the user that applies all DML and DDL statements. The user specified in the apply_user parameter must have the necessary privileges to perform DML and DDL changes on the apply objects.

BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'STRMADMIN_APPLY',
    apply_user => 'SCOTT');
END;
/

4) Start the Apply Process

DECLARE
  v_started NUMBER;
BEGIN
  SELECT DECODE(status, 'ENABLED', 1, 0) INTO v_started
    FROM dba_apply WHERE apply_name = 'STRMADMIN_APPLY';
  IF (v_started = 0) THEN
    DBMS_APPLY_ADM.START_APPLY(apply_name => 'STRMADMIN_APPLY');
  END IF;
END;
/
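Once the apply process is started, its status and dequeue activity can be checked. A quick sketch using DBA_APPLY and V$STREAMS_APPLY_READER:

SQL> SELECT apply_name, status FROM dba_apply;

SQL> SELECT apply_name, state, total_messages_dequeued FROM v$streams_apply_reader;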

Section 4: Steps to be carried out at the Source Database “prod1”

1) Move the LogMiner tables from the SYSTEM tablespace to another tablespace

By default, all LogMiner tables are created in the SYSTEM tablespace. It is a good practice to create an alternate tablespace for the LogMiner tables.

SQL> conn / as sysdba

SQL> CREATE TABLESPACE LOGMNRTS DATAFILE 'logmnrts.dbf' SIZE 25M AUTOEXTEND ON MAXSIZE UNLIMITED;

SQL> BEGIN
  DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS');
END;
/
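To confirm the move, the LogMiner tables can be checked against the new tablespace. A sketch, assuming the default SYSTEM-owned tables with names starting with LOGMNR:

SQL> SELECT owner, table_name, tablespace_name
     FROM dba_tables WHERE table_name LIKE 'LOGMNR%';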

2) Turn on supplemental logging at the database level or at the table level. The statements below show the available options (minimal, all columns, or key columns at the database level, and a named log group at the table level); use the ones appropriate for the objects being replicated.

connect SYS/password as SYSDBA

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE, FOREIGN KEY) COLUMNS;

SQL> ALTER TABLE scott.emp ADD SUPPLEMENTAL LOG GROUP pk_emp (empno) ALWAYS;
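Supplemental logging can be verified at both levels. A short sketch: V$DATABASE shows the database-wide settings, and DBA_LOG_GROUPS shows the table-level log groups:

SQL> SELECT supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui FROM v$database;

SQL> SELECT log_group_name, table_name, always FROM dba_log_groups WHERE owner = 'SCOTT';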

3) Create a private database link as stradmin to the destination database

SQL> connect stradmin/stradmin@prod1.com

SQL> create database link prod2.com connect to stradmin identified by stradmin using 'prod2.com';
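A quick test that the link resolves (and that the GLOBAL_NAMES requirement is met) before building the propagation on top of it:

SQL> SELECT * FROM dual@prod2.com;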

4) Create Streams capture queue

connect STRADMIN/STRADMIN

BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'STRM_CAPTURE_QUEUE_TABLE',
    queue_name  => 'STRM_CAPTURE_QUEUE',
    queue_user  => 'STRADMIN');
END;
/

5) Add capture rules for the schema SCOTT at the source database:

BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name     => 'SCOTT',
    streams_type    => 'CAPTURE',
    streams_name    => 'STRMADMIN_CAPTURE',
    queue_name      => 'STRADMIN.STRM_CAPTURE_QUEUE',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'prod1.com');
END;
/
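At this point the capture process exists but is not yet started (that happens in Section 6). Its definition can be confirmed with a simple query against DBA_CAPTURE:

SQL> SELECT capture_name, queue_name, status, start_scn FROM dba_capture;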

6) Add propagation rules for the schema SCOTT at the source database. This step will also create a propagation job to the destination database.

BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name            => 'SCOTT',
    streams_name           => 'STRMADMIN_PROPAGATE',
    source_queue_name      => 'STRADMIN.STRM_CAPTURE_QUEUE',
    destination_queue_name => 'STRADMIN.STRM_APPLY_QUEUE@prod2.com',
    include_dml            => true,
    include_ddl            => true,
    source_database        => 'prod1.com');
END;
/
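The propagation created by this call can be confirmed against DBA_PROPAGATION. A minimal sketch:

SQL> SELECT propagation_name, source_queue_name, destination_queue_name, destination_dblink
     FROM dba_propagation;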

Section 5: Export, import and instantiation of tables from Source to Destination Database

If the objects are not present in the destination database, perform an export of the objects from the source database and import them into the destination database.

Export from the Source Database:

Specify the OBJECT_CONSISTENT=Y clause on the export command. By doing this, an export is performed that is consistent for each individual object at a particular system change number (SCN).

exp USERID=SYSTEM/manager@prod1.com OWNER=SCOTT FILE=scott.dmp LOG=exportTables.log OBJECT_CONSISTENT=Y STATISTICS=NONE

Import into the Destination Database:

Specify the STREAMS_INSTANTIATION=Y clause in the import command. By doing this, the Streams metadata in the destination database is updated with the instantiation SCN recorded in the export file.

imp USERID=SYSTEM@prod2.com FULL=Y CONSTRAINTS=Y FILE=scott.dmp IGNORE=Y COMMIT=Y LOG=importTables.log STREAMS_INSTANTIATION=Y

If the objects are already present in the destination database, there are two ways of instantiating the objects at the destination site.

1. By means of metadata-only export/import:

Specify ROWS=N during export.

Specify IGNORE=Y during import, along with the import parameters shown above.

2. By manually instantiating the objects

Get the Instantiation SCN at the source database:

connect STRADMIN/STRADMIN@source

set serveroutput on

DECLARE
  iscn NUMBER; -- Variable to hold the instantiation SCN value
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_OUTPUT.PUT_LINE('Instantiation SCN is: ' || iscn);
END;
/

Instantiate the objects at the destination database with this SCN value. The instantiation SCN controls which LCRs for the schema (SET_SCHEMA_INSTANTIATION_SCN) or for an individual table (SET_TABLE_INSTANTIATION_SCN) are applied by the apply process: if the commit SCN of an LCR from the source database is less than or equal to the instantiation SCN, the apply process discards the LCR; otherwise, it applies the LCR.

connect STRADMIN/STRADMIN@destination

BEGIN
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    source_schema_name   => 'SCOTT',
    source_database_name => 'prod1.com',
    instantiation_scn    => &iscn);
END;
/

Enter value for iscn:
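The recorded instantiation SCN can then be confirmed at the destination. A sketch using DBA_APPLY_INSTANTIATED_SCHEMAS:

SQL> SELECT source_schema, source_database, instantiation_scn
     FROM dba_apply_instantiated_schemas;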

The objects must not only exist at both the source and destination sites, they must also be in sync; otherwise the apply process will raise ORA-1403 (no data found) errors.

Note: In 9i, you must instantiate each table individually. In 10g, the recursive => true parameter of DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN can be used to instantiate all tables in the schema in one call.

Section 6: Start the capture process

connect STRADMIN/STRADMIN@prod1.com

BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'STRMADMIN_CAPTURE');
END;
/
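Once started, the capture process state and throughput can be watched from V$STREAMS_CAPTURE; the state typically moves through DICTIONARY INITIALIZATION to CAPTURING CHANGES. A quick sketch:

SQL> SELECT capture_name, state, total_messages_captured, total_messages_enqueued
     FROM v$streams_capture;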

The setup is now ready to replicate data between the two databases using Oracle Streams.

Any DML and DDL changes to objects in the SCOTT schema will now be propagated to the destination database.
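A simple end-to-end test: make a change in the SCOTT schema at the source, commit it, and confirm it arrives at the destination. The row values below are purely illustrative.

At prod1 (source):

SQL> INSERT INTO scott.emp (empno, ename, deptno) VALUES (9999, 'STREAMS', 10);

SQL> COMMIT;

At prod2 (destination), after allowing a few seconds for propagation and apply:

SQL> SELECT empno, ename, deptno FROM scott.emp WHERE empno = 9999;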

Streams Administration tools

Oracle-Supplied PL/SQL packages

DBMS_STREAMS_ADM

DBMS_CAPTURE_ADM

DBMS_PROPAGATION_ADM

DBMS_APPLY_ADM

Streams Data Dictionary views

DBA_APPLY

V$STREAMS_CAPTURE

Oracle Enterprise Manager
