Category Archives: 11g DataGuard

11G DATAGUARD : AUTOMATIC JOB FAILOVER

In this post, I will demonstrate automatic failover of a job to the standby database, built on the automatic client failover capability introduced in 11g Data Guard.
We can now create a service for the database which can run on both the primary and standby databases. Following a switchover/failover, the Data Guard broker can automatically
- start the appropriate services on the new primary database,
- disconnect clients from the failed database, and
- redirect them to the new primary database; no manual intervention is required.
For a demonstration of automatic client failover, see the post 11G Dataguard: Automatic Client Failover below.
Using this feature, we can also enable jobs running on the primary to fail over to the standby database in the event of a switchover/failover. To implement this, we create a job class that maps to a service and assign the job to that job class. On failover/switchover the service fails over to the standby database, and with it all jobs assigned to the job classes mapped to that service.
Job 1 ---+
Job 2 ---+----> Job class ----> Service
Job 3 ---+
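The mapping can be verified from the dictionary once the job class has been created (a minimal sketch; DBA_SCHEDULER_JOB_CLASSES is the standard view, and the TEST_CLASS/orcls names come from the demonstration below):

SQL>-- list job classes and the service each one is bound to
    select job_class_name, service
    from   dba_scheduler_job_classes
    where  job_class_name = 'TEST_CLASS';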
OVERVIEW:
- Create a service orcls for the database and start it
- Create a net service name for the service which points to both the primary and standby databases
- Create a trigger which starts the service on the primary database
- Connect to the primary database (orcl) using the service orcls
- Create a job class TEST_CLASS mapped to service orcls
- Create and start a job belonging to TEST_CLASS
- Check that the job is running on the primary (orcl)
- Perform switchover
- Check that the job has failed over and is now running on the new primary (sby)
                                       —- CREATE SERVICE —-
– CHECK THE SERVICE_NAMES PARAMETER ON PRIMARY
PRI>sho parameter service_names
NAME                                 TYPE        VALUE
———————————— ———– ——————————
service_names                        string      orcl
– CREATE A SERVICE ORCLS 
PRI>exec dbms_service.create_service(-
       service_name => 'orcls',-
       network_name => 'orcls',-
       failover_method => 'BASIC',-
       failover_type => 'SELECT',-
       failover_retries => 180,-
       failover_delay => 1);
– CHECK THAT LISTENER DOES NOT KNOW ABOUT THE SERVICE AS IT HAS NOT BEEN STARTED YET
$ lsnrctl services |grep orcls
– START THE SERVICE
PRI>exec dbms_service.start_service('orcls');
– CHECK THAT SERVICE APPEARS IN PARAMETER SERVICE_NAMES
PRI>sho parameter service_names
NAME                                 TYPE        VALUE
———————————— ———– ——————————
service_names                        string      orcls
– CHECK THAT SERVICE IS AUTOMATICALLY REGISTERED WITH LISTENER
PRI>ho lsnrctl services |grep orcls
Service "orcls" has 1 instance(s).
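The service can also be confirmed from inside the database (a quick additional check; V$ACTIVE_SERVICES is the standard view):

PRI>-- the service should be listed while it is running on this instance
    select name, network_name from v$active_services where name = 'orcls';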
– CREATE AFTER STARTUP TRIGGER WHICH STARTS THE SERVICE  –
    ON THE DATABASE WHICH IS CURRENTLY IN PRIMARY ROLE
PRI>create or replace trigger manage_service after startup on database
       declare
          role varchar2(30);
       begin
          select database_role into role from v$database;
          IF role = 'PRIMARY' THEN
             DBMS_SERVICE.START_SERVICE('orcls');
          ELSE
             DBMS_SERVICE.STOP_SERVICE('orcls');
          END IF;
       END;
/
- CREATE NET SERVICE NAME FOR SERVICE ORCLS
——————————-
Please note that both the primary and standby databases should be registered statically with the listener in the database home. Here the database home listener runs on port 1521 for both the primary and standby databases.
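For reference, the static registration might look roughly like this in listener.ora (a sketch only; the listener name and ORACLE_HOME path are assumptions for this setup):

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = orcl)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/db_1)   # assumed path
      (SID_NAME = orcl)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = sby)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/db_1)   # assumed path
      (SID_NAME = sby)
    )
  )

Static registration keeps the instances reachable through the listener even when they are down or mounted, which dynamic registration cannot do.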
IF BOTH PRIMARY AND STANDBY DATABASES ARE ON SAME MACHINE
    . create net service name orcls
     – service name = orcls
     – add address on host1 port 1521 (representing both primary database orcl and standby database sby)
– NOW, WHEN THE SERVICE IS AVAILABLE ON ORCL (PRIMARY DATABASE), THE LISTENER WILL CONNECT CLIENTS TO THE ORCL DATABASE,
– AND WHEN THE SERVICE IS AVAILABLE ON SBY (STANDBY DATABASE FOLLOWING FAILOVER/SWITCHOVER), THE LISTENER WILL CONNECT CLIENTS TO THE SBY DATABASE
———————————–
IF  PRIMARY AND STANDBY DATABASES ARE ON DIFFERENT MACHINES
– USING NETMGR
   . create net service name orcls
     – service name = orcls
     – add address on host1 port 1521 (representing primary database orcl)
     – add address on host2 port 1521 (representing standby database sby)
– NOW, WHEN THE SERVICE RUNS ON THE ORCL PRIMARY DATABASE, IT IS REGISTERED AUTOMATICALLY WITH THE LISTENER ON PORT 1521 ON HOST1 AND THE APPLICATION GETS CONNECTED TO THE ORCL DATABASE.
– WHEN THE SERVICE RUNS ON THE SBY STANDBY DATABASE (FOLLOWING FAILOVER/SWITCHOVER), IT IS REGISTERED AUTOMATICALLY WITH THE LISTENER ON PORT 1521 ON HOST2 AND THE APPLICATION GETS CONNECTED TO THE SBY DATABASE.
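The resulting net service name would look roughly like this in tnsnames.ora (a sketch; host1/host2 are placeholders for the actual host names):

ORCLS =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = host1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = host2)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = orcls)
    )
  )

The client tries each address in turn, so it reaches whichever database currently publishes the orcls service.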
———————————
                            ———     JOB FAILOVER   ——————-
– check current configuration
DGMGRL> show configuration;
  Protection Mode: MaxPerformance
  Databases:
    orcl – Primary database
    sby  – Physical standby database
– create test  table on PRIMARY (ORCL)
ORCL>drop table hr.test purge;
           create table hr.test(id number, tim timestamp);
— create job class TEST_CLASS which is associated with service orcls
ORCL>exec dbms_scheduler.drop_job_class('TEST_CLASS');
       BEGIN
          sys.dbms_scheduler.create_job_class(
             logging_level  => DBMS_SCHEDULER.LOGGING_RUNS,
             service        => 'orcls',
             job_class_name => '"TEST_CLASS"');
       END;
/
– create a job which inserts records in table HR.TEST every 3 seconds
   and belongs to job_class TEST_CLASS
ORCL>exec sys.dbms_scheduler.drop_job(job_name=>'sys.test_job');
       BEGIN
          sys.dbms_scheduler.create_job(
             job_name        => '"SYS"."TEST_JOB"',
             job_type        => 'PLSQL_BLOCK',
             job_action      => 'begin
                                    insert into hr.test values (1, systimestamp);
                                    commit;
                                 end;',
             repeat_interval => 'FREQ=SECONDLY;INTERVAL=3',
             start_date      => systimestamp at time zone 'Asia/Calcutta',
             job_class       => '"TEST_CLASS"',
             auto_drop       => FALSE,
             enabled         => TRUE);
       END;
/
– check that job executes every 3 seconds
ORCL>select * from hr.test
        order by tim;
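Because the job class was created with logging_level => DBMS_SCHEDULER.LOGGING_RUNS, each run is also recorded in the scheduler log (a quick check; DBA_SCHEDULER_JOB_LOG is the standard view):

ORCL>-- one row per completed run of the job
     select log_date, status
     from   dba_scheduler_job_log
     where  job_name = 'TEST_JOB'
     order  by log_date;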
– check that job is currently running on primary
ORCL>select job_name
       from dba_scheduler_jobs
       where job_name='TEST_JOB';
JOB_NAME
——————————
TEST_JOB
– check that job is currently not running on standby
SBY>select job_name
       from dba_scheduler_jobs
       where job_name='TEST_JOB';
no rows selected
– Perform switchover so that 
     SBY becomes new primary and
    ORCL becomes new physical standby
DGMGRL>switchover to sby;
– Startup new standby (orcl)
ORCL>conn / as sysdba
       startup
– check that switchover successful
DGMGRL>connect sys/sys@sby
                show configuration;
  Protection Mode: MaxPerformance
  Databases:
    sby  – Primary database
    orcl – Physical standby database
– check that job is still running by checking the records in hr.TEST table
ORCL>select * from hr.test
        order by tim;
– check that the job is now running on the new primary, i.e. SBY, and not on the earlier primary (orcl),
   as it has failed over along with service orcls
– check that job is currently running on new primary(SBY)
SBY>select job_name
       from dba_scheduler_jobs
       where job_name='TEST_JOB';
JOB_NAME
——————————
TEST_JOB
– check that job is currently not running on new standby (ORCL)
ORCL>select job_name
       from dba_scheduler_jobs
       where job_name='TEST_JOB';
no rows selected
– RESTORE EARLIER STATUS
DGMGRL>switchover to orcl;
ORCL>exec dbms_scheduler.drop_job('TEST_JOB');
          drop table hr.test purge;
I hope you found this demonstration interesting. Please share your comments and suggestions.
                             ——————————————————————
 Related links:

11G Dataguard :  Automatic Client Failover 

                                                       ——————–

11G DATAGUARD : AUTOMATIC CLIENT FAILOVER

In this post, I will demonstrate automatic client failover, a new feature introduced in 11g Data Guard.
Prior to 11g, one or more user-written database triggers were required to automate client failover, depending upon the configuration.

Data Guard 11g Release 2 simplifies configuration significantly by eliminating the need for user-written triggers to automate client failover. During role transitions managed by the Data Guard broker, it can automatically

– fail over the database,

– start the appropriate services on the new primary database,

– disconnect clients from the failed database and

– redirect them to the new primary database; no manual intervention is required.

OVERVIEW:
- Create a service orcls for the database and start it
- Create a net service name for the service which points to both the primary and standby databases
- Create a trigger which starts the service on the primary database
- Connect to the primary database (orcl) using the service orcls
- Issue a long-running query
- While the query is running, perform a switchover to the standby
- After the switchover, check that the query has failed over to the new primary
IMPLEMENTATION
– CHECK THE SERVICE_NAMES PARAMETER ON PRIMARY
PRI>sho parameter service_names
NAME                                 TYPE        VALUE
———————————— ———– ——————————
service_names                        string      orcl
– CREATE A SERVICE ORCLS 

PRI>exec dbms_service.create_service(-
       service_name => 'orcls',-
       network_name => 'orcls',-
       failover_method => 'BASIC',-
       failover_type => 'SELECT',-
       failover_retries => 180,-
       failover_delay => 1);

– CHECK THAT LISTENER DOES NOT KNOW ABOUT THE SERVICE AS IT HAS NOT BEEN STARTED YET
$ lsnrctl services |grep orcls
– START THE SERVICE
PRI>exec dbms_service.start_service('orcls');
– CHECK THAT SERVICE APPEARS IN PARAMETER SERVICE_NAMES
PRI>sho parameter service_names
NAME                                 TYPE        VALUE
———————————— ———– ——————————
service_names                        string      orcls
– CHECK THAT SERVICE IS AUTOMATICALLY REGISTERED WITH LISTENER
PRI>ho lsnrctl services |grep orcls
Service "orcls" has 1 instance(s).
– CREATE AFTER STARTUP TRIGGER WHICH STARTS THE SERVICE ON
   THE PRIMARY DATABASE AND STOPS IT ON THE STANDBY DATABASE
PRI>create or replace trigger manage_service after startup on database
       declare
          role varchar2(30);
       begin
          select database_role into role from v$database;
          IF role = 'PRIMARY' THEN
             DBMS_SERVICE.START_SERVICE('orcls');
          ELSE
             DBMS_SERVICE.STOP_SERVICE('orcls');
          END IF;
       END;
/

 ——————————-

Please note that both the primary and standby databases should be registered statically with the listener in the database home. Here the database home listener runs on port 1521 for both the primary and standby databases.
IF BOTH PRIMARY AND STANDBY DATABASES ARE ON SAME MACHINE
    . create net service name orcls
     – service name = orcls
     – add address on host1 port 1521 (representing both primary database orcl and standby database sby)
– NOW, WHEN THE SERVICE IS AVAILABLE ON ORCL (PRIMARY DATABASE), THE LISTENER WILL CONNECT CLIENTS TO THE ORCL DATABASE,
– AND WHEN THE SERVICE IS AVAILABLE ON SBY (STANDBY DATABASE FOLLOWING FAILOVER/SWITCHOVER), THE LISTENER WILL CONNECT CLIENTS TO THE SBY DATABASE
———————————–
IF  PRIMARY AND STANDBY DATABASES ARE ON DIFFERENT MACHINES
– USING NETMGR
   . create net service name orcls
     – service name = orcls
     – add address on host1 port 1521 (representing primary database orcl)
     – add address on host2 port 1521 (representing standby database sby)
– NOW, WHEN THE SERVICE RUNS ON THE ORCL PRIMARY DATABASE, IT IS REGISTERED AUTOMATICALLY WITH THE LISTENER ON PORT 1521 ON HOST1 AND THE APPLICATION GETS CONNECTED TO THE ORCL DATABASE.
– WHEN THE SERVICE RUNS ON THE SBY STANDBY DATABASE (FOLLOWING FAILOVER/SWITCHOVER), IT IS REGISTERED AUTOMATICALLY WITH THE LISTENER ON PORT 1521 ON HOST2 AND THE APPLICATION GETS CONNECTED TO THE SBY DATABASE.
———————————
– CREATE A USER DGTEST
PRI>create user dgtest identified by dgtest;
        grant create session to dgtest;
        grant select_catalog_role to dgtest;
– PRESENTLY, I HAVE BOTH PRIMARY AND STANDBY ON THE SAME MACHINE
– CHECK CURRENT CONFIGURATION
   Primary – orcl
   Standby sby
DGMGRL>show configuration;
– CHECK THAT  SERVICE IS REGISTERED WITH LISTENER (1521) AND POINTS TO ORCL (PRIMARY DATABASE)
$lsnrctl services
– CONNECT USING SERVICE ORCLS AND CHECK THAT DB_UNIQUE_NAME=ORCL (PRIMARY)
PRI>connect dgtest/dgtest@orcls
        sho parameter db_unique_name
– CHECK THAT THE SESSION HAS NOT FAILED OVER SO FAR (FAILED_OVER = NO)
PRI>select failover_method, failover_type, failed_over
       from v$session
       where username='DGTEST';
– ISSUE A LONG-RUNNING QUERY IN THIS SESSION AND SIMULTANEOUSLY SWITCH OVER TO SBY IN ANOTHER DGMGRL SESSION.
PRI>select * from dba_objects, dba_tables;
DGMGRL>switchover to sby;
– WHILE THE SWITCHOVER IS GOING ON, CHECK THAT THE QUERY IS TEMPORARILY SUSPENDED
- AFTER THE SBY DATABASE IS STARTED, CHECK THAT SERVICE ORCLS NOW POINTS TO SBY (NEW PRIMARY)
$lsnrctl services
– CHECK THAT QUERY IS RESUMED AFTER SWITCHOVER
– CHECK CURRENT CONFIGURATION
DGMGRL>show configuration;
– CHECK THAT SESSION HAS FAILED OVER TO SBY DATABASE 
PRI>sho parameter db_unique_name
– CHECK THAT SESSION HAS FAILED OVER
PRI>select failover_method, failover_type, failed_over
       from v$session
       where username='DGTEST';
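Given the TAF settings used when the service was created (failover_method => 'BASIC', failover_type => 'SELECT'), the output after the switchover should look roughly like this (illustrative):

FAILOVER_METHOD FAILOVER_TYPE FAILED_OVER
--------------- ------------- -----------
BASIC           SELECT        YES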
– RESTORE EARLIER CONFIGURATION
DGMGRL>switchover to orcl;
We can also use the client failover feature of 11g Data Guard to automatically fail over a running job to the standby in the event of a switchover/failover.
—————————————————————————–
Related links:

11g Dataguard Index

                                                        ———————

11G DATAGUARD: AUTOMATIC BLOCK MEDIA RECOVERY (AUTO BMR)

                          
In this post, I will demonstrate automatic block media recovery, introduced in 11g Data Guard. This feature can be used only if Active Data Guard is enabled.
It enables automatic repair of corrupt blocks, transparently to the user and application. If corruption occurs on the primary database, block media recovery is performed automatically using a good copy of the block from the standby database, and vice versa.
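A quick way to confirm that Active Data Guard is enabled is to check the open mode of the standby while redo apply is running (V$DATABASE is the standard view):

SBY>-- should return READ ONLY WITH APPLY when active dataguard is in use
    select open_mode from v$database;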
OVERVIEW:
– Create and populate a test table on primary
– Corrupt two blocks containing table data
– Check that the blocks are corrupted using dbv
– Flush buffer cache and issue query for corrupt blocks
– The query succeeds as blocks have been repaired automatically
– Verify automatic BMR using
     . Alert log of Primary
     . dbv utility
— IMPLEMENTATION –
– CREATE A TEST TABLESPACE ON PRIMARY
PRI>drop tablespace test including contents and datafiles;
       create tablespace test datafile '/u01/app/oracle/oradata/orcl/test01.dbf' size 30M;
– CREATE A TEST USER ON PRIMARY
   
PRI> create user test  identified by test default tablespace test;
       grant connect, resource to test;
– CREATE AND POPULATE TABLE TEST_TAB IN TEST TABLESPACE ON PRIMARY
PRI> Create table test.test_tab as select * from hr.employees;
         insert into test.test_tab select * from test.test_tab;
        /
        /
        /
        /
        commit;
       select table_name, tablespace_name from dba_tables where table_name='TEST_TAB';
select count(*) from test.test_tab;
– CHECK THE LEAST BLOCK OCCUPIED BY THE TABLE IN DATAFILE
  PRI>select min(dbms_rowid.rowid_block_number(rowid))
           from test.test_tab;
       MIN(DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID))
—————————————–
                                      131
– corrupt two blocks
#dd if=/dev/zero of=/u01/app/oracle/oradata/orcl/test01.dbf bs=8192 conv=notrunc seek=136 count=2
– check that the datafile is corrupted
[oracle@host1 ~]$ dbv file=/u01/app/oracle/oradata/orcl/test01.dbf blocksize=8192
Page 136 is marked corrupt
Corrupt block relative dba: 0x01800088 (file 6, block 136)
Completely zero block found during dbv:
Page 137 is marked corrupt
Corrupt block relative dba: 0x01800089 (file 6, block 137)
Completely zero block found during dbv:
Total Pages Marked Corrupt   : 2
-- FLUSH BUFFER CACHE AND ISSUE QUERY ON CORRUPTED BLOCKS
- QUERY SUCCEEDS AS BLOCKS HAVE BEEN REPAIRED AUTOMATICALLY
PRI>alter system flush buffer_cache;
       select count(*) from test.test_tab;
– CHECK THE ALERT LOG OF PRIMARY DATABASE
# tailf /u01/app/oracle/diag/rdbms/orcl/orcl/trace/alert_orcl.log
——————
Tue Dec 18 12:43:01 2012
ALTER SYSTEM: Flushing buffer cache
Tue Dec 18 12:43:26 2012
– BLOCK 136 IS FOUND CORRUPT AND IS DUMPED IN TRACE FILE
Hex dump of (file 6, block 136) in trace file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_12608.trc
Corrupt block relative dba: 0x01800088 (file 6, block 136)
Completely zero block found during multiblock buffer read
Reading datafile '/u01/app/oracle/oradata/orcl/test01.dbf' for corruption at rdba: 0x01800088 (file 6, block 136)
Reread (file 6, block 136) found same corrupt data
– REQUEST FOR AUTO BMR SENT FOR BLOCK 136
Requesting Auto BMR for (file# 6, block# 136)
– BLOCK 137 IS FOUND CORRUPT AND IS DUMPED IN TRACE FILE
Hex dump of (file 6, block 137) in trace file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_12608.trc
Corrupt block relative dba: 0x01800089 (file 6, block 137)
Completely zero block found during multiblock buffer read
Reading datafile '/u01/app/oracle/oradata/orcl/test01.dbf' for corruption at rdba: 0x01800089 (file 6, block 137)
Reread (file 6, block 137) found same corrupt data
– REQUEST FOR AUTO BMR SENT FOR BLOCK 137
Requesting Auto BMR for (file# 6, block# 137)
– AUTO BMR SUCCESSFUL FOR BLOCK 136
Waiting Auto BMR response for (file# 6, block# 136)
Auto BMR successful
– AUTO BMR SUCCESSFUL FOR BLOCK 137
Waiting Auto BMR response for (file# 6, block# 137)
Auto BMR successful
– CHECK THAT BLOCKS HAVE BEEN REPAIRED
[oracle@host1 ~]$ dbv file=/u01/app/oracle/oradata/orcl/test01.dbf blocksize=8192
Total Pages Marked Corrupt   : 0
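As an additional check, the corruption view should now be empty (V$DATABASE_BLOCK_CORRUPTION is the standard view; rows disappear once the repaired blocks are re-validated):

PRI>-- no rows expected after the automatic repair
    select * from v$database_block_corruption;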
Thanks for your time. Your comments and suggestions are welcome !!!
————————————————————————————–
 

11g DATAGUARD SETUP USING ACTIVE DATAGUARD

In this post we’ll see how to configure 11g Data Guard using Active Data Guard, i.e. the standby database will be created from the running primary database. We do not need to take a backup of the database or controlfile.

Primary database: dg01 on node1.oracle.com (192.9.201.192)

Standby database: dg02 on node2.oracle.com (192.9.201.193)

Listeners :

primary : on port 1521 in grid home

on port 1522 in database home

standby : on port 1521 in grid home

on port 1522 in database home

TNS entries:

primary : dg01 on node1.oracle.com on ports 1521, 1522 (registered with both the listeners)

          dg02 on node2.oracle.com on port 1522

standby : dg02 on node2.oracle.com on ports 1521, 1522 (registered with both the listeners)

          dg01 on node1.oracle.com on port 1522

Configuration on primary site:-

——————————————-

- As root, assign
  . IP address 192.9.201.192
  . hostname node1.oracle.com

#service network restart

- Make entries in /etc/hosts

192.9.201.192 node1.oracle.com node1

192.9.201.193 node2.oracle.com node2

– Reboot machine (mandatory as hostname has been changed)

- Create the dg01 database in archivelog mode, with sample schemas and default FRA

- Invoke netmgr from db home
  • make service dg01 available on port 1522
  • create a listener in database home on port 1522
  • statically register dg01 with the listener on port 1522
  • create service for dg02 running on node2.oracle.com on port 1522
  • delete orcl database

Configuration on standby site:-

——————————————-

- As root, assign
  . IP address 192.9.201.193
  . hostname node2.oracle.com

#service network restart

- Reboot machine (mandatory as hostname has been changed)

- Make entries in /etc/hosts

192.9.201.192 node1.oracle.com node1

192.9.201.193 node2.oracle.com node2

- Test connectivity

#ping node1

#ping node2

– Invoke netmgr from db home
  • create service for dg01 running on node1.oracle.com on port 1522
  • create service for dg02 running on node2.oracle.com on port 1522
  • create a listener in database home on port 1522
  • statically register dg02 with it

—————————–
On primary node01

—————————–

— Configure the database dg01 in archivelog mode and enable force logging.

SQL>SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    ALTER DATABASE ARCHIVELOG;
    ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=6G;
    ALTER DATABASE FORCE LOGGING;
    select force_logging from v$database;
    ALTER DATABASE OPEN;

– Configure standby redo log files to enable fast failover and real-time apply. (Assume we have three redo log groups.)

– Find out no. and size of current redo log groups

SQL>select group#, bytes/1024/1024 MB   from v$log;

– Add standby log files –

Note: the number of standby redo log groups must exceed the number of online redo log groups (the recommended count is (number of logfiles per thread + 1) x number of threads), and the size of each standby logfile must be the same as the size of the online redo log files.

SQL>  ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 '/u01/app/oracle/oradata/dg01/sredo04.log' size 50m;
      ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 '/u01/app/oracle/oradata/dg01/sredo05.log' size 50m;
      ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 '/u01/app/oracle/oradata/dg01/sredo06.log' size 50m;
      ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 '/u01/app/oracle/oradata/dg01/sredo07.log' size 50m;

– check that standby logfiles have been created

SQL> col member for a30
           select group#, TYPE, MEMBER from v$logfile where type = 'STANDBY';

GROUP# TYPE    MEMBER
------ ------- ------------------------------
     4 STANDBY /u01/app/oracle/oradata/dg01/sredo04.log
     5 STANDBY /u01/app/oracle/oradata/dg01/sredo05.log
     6 STANDBY /u01/app/oracle/oradata/dg01/sredo06.log
     7 STANDBY /u01/app/oracle/oradata/dg01/sredo07.log

SQL> select group#, bytes/1024/1024 MB from v$standby_log;

GROUP#         MB
---------- ----------
         4         50
         5         50
         6         50
         7         50

– Make a staging folder

oracle@node1$mkdir /home/oracle/stage

— CREATE PFILE FROM SPFILE.

PRI>create pfile='/home/oracle/stage/initdg01.ora' from spfile;

- Now configure the following parameters for the primary database:

– add the following lines to initdg01.ora

oracle@node1$vi /home/oracle/stage/initdg01.ora

LOG_ARCHIVE_CONFIG='DG_CONFIG=(dg01,dg02)'
DB_UNIQUE_NAME='dg01'
SERVICE_NAMES='dg01'
DB_FILE_NAME_CONVERT='/u01/app/oracle/oradata/dg02','/u01/app/oracle/oradata/dg01'
LOG_FILE_NAME_CONVERT='/u01/app/oracle/oradata/dg02','/u01/app/oracle/oradata/dg01'
LOG_ARCHIVE_DEST_1='location=USE_DB_RECOVERY_FILE_DEST'
LOG_ARCHIVE_DEST_STATE_1=enable
LOG_ARCHIVE_DEST_2='service=dg02 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dg02'
LOG_ARCHIVE_DEST_STATE_2=enable
STANDBY_FILE_MANAGEMENT=auto
FAL_CLIENT='dg01'
FAL_SERVER='dg02'
LOG_ARCHIVE_FORMAT='%t_%s_%r.arc'

Note:

In case the source and destination databases are on ASM, the parameters db_file_name_convert and log_file_name_convert may be defined as follows:

db_file_name_convert=("+DATA/dg01","+DATA/dg02")

log_file_name_convert=("+DATA/dg01","+DATA/dg02","+FRA/dg01","+FRA/dg02")

– create spfile from pfile and start dg01 using new parameters

PRI>shu immediate;
    create spfile from pfile='/home/oracle/stage/initdg01.ora';
    startup;

—– Take a backup of the entire database to the pre-configured staging directory (optional with Active Data Guard)

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
      BACKUP FORMAT '/home/oracle/stage/%U' DATABASE PLUS ARCHIVELOG;
      BACKUP FORMAT '/home/oracle/stage/%U' CURRENT CONTROLFILE FOR STANDBY;
      sql "alter system archive log current";

– create staging folder on node2

oracle@node2$mkdir /home/oracle/stage

—- Copy the following items from node1 to node2.

PASSWORD FILE

[oracle@node1 ~]$scp /u01/app/oracle/product/11.2.0/db_1/dbs/orapwdg01 node2:/u01/app/oracle/product/11.2.0/db_1/dbs/orapwdg02

PARAMETER FILE and RMAN backups (optional with Active Data Guard)

[oracle@node1 ~]$ scp /home/oracle/stage/* node2:/home/oracle/stage/

Note: the RMAN backups must be copied to the same directory on node2, i.e. /home/oracle/stage

– Configure net services for dg01 (primary database) on node1 at port 1522 as well as for dg02 (standby database) on node2 at port 1522.

– Set the environment to dg01 and invoke netmgr.

DG02 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2.oracle.com)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = dg02)
    )
  )

DG01 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1.oracle.com)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dg01)
    )
  )

– Create a listener in database home on port 1522 and statically register dg01 with it.
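A minimal sketch of the corresponding listener.ora entries (the listener name LISTENER1522 and the ORACLE_HOME path are assumptions for this setup):

LISTENER1522 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1.oracle.com)(PORT = 1522))
  )

SID_LIST_LISTENER1522 =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = dg01)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/db_1)   # assumed path
      (SID_NAME = dg01)
    )
  )

Static registration matters here because connections must reach an instance that is down or in NOMOUNT state (as the standby will be during duplication), when dynamic registration is not available.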
—————————-

On standby node02

—————————–

 - Edit Pfile for Standby Database:
oracle@node2$mv /home/oracle/stage/initdg01.ora /home/oracle/stage/initdg02.ora
             vi initdg02.ora

— Final pfile should look like this

dg02.__db_cache_size=440401920
dg02.__java_pool_size=4194304
dg02.__large_pool_size=4194304
dg02.__shared_pool_size=163577856
dg02.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/dg02/adump'
*.control_files='/u01/app/oracle/oradata/dg02/control01.ctl'
*.core_dump_dest='/u01/app/oracle/admin/dg02/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='dg01'
*.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
*.db_recovery_file_dest_size=6G
*.dispatchers='(PROTOCOL=TCP) (SERVICE=dg02XDB)'
*.job_queue_processes=10
*.log_archive_format='%t_%s_%r.dbf'
*.open_cursors=300
*.pga_aggregate_target=204472320
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=613416960
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/app/oracle/admin/dg02/udump'
LOG_ARCHIVE_CONFIG='DG_CONFIG=(dg01,dg02)'
DB_UNIQUE_NAME='dg02'
SERVICE_NAMES='dg02'
DB_FILE_NAME_CONVERT='/u01/app/oracle/oradata/dg01','/u01/app/oracle/oradata/dg02'
LOG_FILE_NAME_CONVERT='/u01/app/oracle/oradata/dg01','/u01/app/oracle/oradata/dg02'
LOG_ARCHIVE_DEST_1='location=USE_DB_RECOVERY_FILE_DEST'
LOG_ARCHIVE_DEST_STATE_1=enable
LOG_ARCHIVE_DEST_2='service=dg01 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dg01'
LOG_ARCHIVE_DEST_STATE_2=enable
STANDBY_FILE_MANAGEMENT=auto
FAL_CLIENT='dg02'
FAL_SERVER='dg01'
LOG_ARCHIVE_FORMAT='%t_%s_%r.arc'

@node2: Create the required directory for standby database

[oracle@node2 ~]$ mkdir -p  /u01/app/oracle/admin/dg02/adump
                  mkdir -p  /u01/app/oracle/oradata/dg02/
                  mkdir -p  /u01/app/oracle/flash_recovery_area/dg02

@node2: Configure net services for dg01 (primary database) on node1 at port 1522 as well as for dg02 (standby database) on node2 at port 1522.

– Set ORACLE_HOME to the database home and start netmgr.
DG02 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2.oracle.com)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = dg02)
    )
  )

DG01 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1.oracle.com)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dg01)
    )
  )

@node2: – Create a listener on port 1522 in the database home and statically register dg02 with it.

@node2 : Start dg02 at nomount stage

 

[oracle@node2 ~]$ export ORACLE_SID=dg02
SBY> CREATE SPFILE FROM PFILE='/home/oracle/stage/initdg02.ora';
     STARTUP NOMOUNT ;
     EXIT

@node2: Now using RMAN duplicate command create standby database

[oracle@node2 ~]$ export ORACLE_SID=dg02
[oracle@node2 ~]$ rman target sys/oracle@dg01
RMAN> CONNECT AUXILIARY SYS/oracle@dg02
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE;

– Switch logs on primary

PRI>alter system switch logfile;

– start redo apply on standby

SBY>alter database recover managed standby database using current logfile disconnect;
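To confirm that redo apply has started, the managed recovery process can be checked on the standby (V$MANAGED_STANDBY is the standard view in 11g):

SBY>-- MRP0 should show APPLYING_LOG once real-time apply is active
    select process, status, sequence# from v$managed_standby;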

– Verify that standby database is performing correctly

SBY>select sequence#, first_time, next_time from v$archived_log order by sequence#;

– Perform a few log switches on primary

PRI>alter system switch logfile;
alter system switch logfile;
alter system switch logfile;
alter system switch logfile;

– On the standby database, verify that the new redo data was received, archived and applied

SBY>select sequence#, first_time, next_time,applied from v$archived_log order by sequence#;
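The standby's lag can also be checked directly (V$DATAGUARD_STATS is the standard view; values near zero indicate the standby is keeping up):

SBY>-- both lags should be close to +00 00:00:00
    select name, value from v$dataguard_stats
    where name in ('apply lag', 'transport lag');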

——————————

Related links:

Home
11g Dataguard Index

11g Dataguard: Automatic Block Media Recovery (BMR)

11g Dataguard: Automatic Client Failover

11g Dataguard: Automatic Job Failover

11g Dataguard: Flashback  Standby After  Resetlogs On Primary

ORA-12528: TNS:listener: all appropriate instances are blocking new connection

Recover Standby Datafile From Primary

Warning: ORA-16792: configurable property value is inconsistent with database setting

                                                   ————