Category Archives: 11g R2 RMAN

RECOVER STANDBY DATAFILE FROM PRIMARY

In this post, I will demonstrate how to recover a lost, corrupted, or inaccessible datafile on the standby database from the primary.

Overview:

— Simulate loss of a datafile on the standby database by renaming it.
— Restart the standby database – stops at the mount stage as the datafile is inaccessible.
— Check that redo apply to the standby has stopped.
— Connect to the primary database as target and the standby as auxiliary.
— Take a backup of the affected datafile from the primary so that the backup file is created on the standby.
— Recover the standby database.
— Open the standby database.
— Check that redo apply has been restarted on the standby and the configuration is successful again.

– Implementation –

– Check that dataguard configuration is running properly

DGMGRL> show configuration;

Configuration - dgconfig1

Protection Mode: MaxPerformance
Databases:
dg01 - Primary database
dg02 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

– Find out names of datafiles on standby

SBY>select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/dg02/system01.dbf
/u01/app/oracle/oradata/dg02/sysaux01.dbf
/u01/app/oracle/oradata/dg02/undotbs01.dbf
/u01/app/oracle/oradata/dg02/users01.dbf
/u01/app/oracle/oradata/dg02/example01.dbf

– To simulate loss of the datafile of the EXAMPLE tablespace, rename it

SBY>ho mv /u01/app/oracle/oradata/dg02/example01.dbf /u01/app/oracle/oradata/dg02/example.dbf

– Restart standby database
— Stops at mount stage due to missing datafile

SBY>startup force;
ORACLE instance started.

Total System Global Area  146472960 bytes
Fixed Size                  1335080 bytes
Variable Size              92274904 bytes
Database Buffers           50331648 bytes
Redo Buffers                2531328 bytes
Database mounted.
ORA-10458: standby database requires recovery
ORA-01196: file 1 is inconsistent due to a failed media recovery session
ORA-01110: data file 1: '/u01/app/oracle/oradata/dg02/system01.dbf'

-- Switch logs on primary and verify that redo apply has stopped on standby

PRI>alter system switch logfile;

DGMGRL>  show configuration;

Configuration - dgconfig1

Protection Mode: MaxPerformance
Databases:
dg01 - Primary database
dg02 - Physical standby database
 Error: ORA-16766: Redo Apply is stopped

Fast-Start Failover: DISABLED

Configuration Status:
ERROR

– Using RMAN, connect to primary as target and standby as auxiliary

[oracle@node1 ~]$ . oraenv
ORACLE_SID = [dg01] ?
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 is /u01/app/oracle

[oracle@node1 ~]$ rman target / auxiliary sys/oracle@dg02

connected to target database: DG01 (DBID=434142737)
connected to auxiliary database: DG01 (DBID=434142737, not open)

RMAN>

– Try to take a backup-set type backup of the EXAMPLE tablespace on the primary so that the backup file is created on the standby
— Fails, as only image copies can be transported over the network using RMAN

RMAN> backup tablespace example auxiliary format
2> '/u01/app/oracle/oradata/dg02/example01.dbf';

Starting backup at 26-DEC-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=48 device type=DISK
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 12/26/2013 14:37:04
RMAN-06955: Network copies are only supported for image copies.

– Take image copy backup of example tablespace on primary so that backup file is created on standby

RMAN> backup as copy  tablespace example auxiliary format
2> '/u01/app/oracle/oradata/dg02/example01.dbf';

Starting backup at 26-DEC-13
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
input datafile file number=00005 name=/u01/app/oracle/oradata/dg01/example01.dbf
output file name=/u01/app/oracle/oradata/dg02/example01.dbf tag=TAG20131226T143727
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
Finished backup at 26-DEC-13

– Check that the image copy has been created on the standby

SBY>ho ls /u01/app/oracle/oradata/dg02/example01.dbf
/u01/app/oracle/oradata/dg02/example01.dbf

– Recover standby database and open it

SBY>recover managed standby database disconnect;
alter database open;

– Check that redo apply is resumed again and configuration is successful

DGMGRL> show configuration;

Configuration - dgconfig1

Protection Mode: MaxPerformance
Databases:
dg01 - Primary database
dg02 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Note: In 12c, datafiles can be restored over the network using backup sets or compressed backup sets, as shown in the 12c post linked below.

—————————————————————————————————–

Related links:

Home
11g Dataguard Index

11g DataGuard: Flashback Standby After Resetlogs On Primary
11g DataGuard Setup Using Active DataGuard on RHEL/OEL-5
Flashback Through Role Transition For Physical Standby
How to reinstate the old primary as a standby after failover
12c Dataguard: Restore Datafile From Service

—————

Using RMAN Incremental Backups to Refresh Standby Database

In this post, I will demonstrate synchronizing a standby database from an incremental RMAN backup. This might be needed in the following scenarios:

1- The standby database is considerably lagging behind the primary. Copying and applying archived logs from the primary would be time consuming, as redo apply replays both the COMMITTED and the UNCOMMITTED transactions and then rolls back the uncommitted ones. An incremental backup recovers the standby database much faster than applying the archives, as it applies only the committed end state of the changed blocks to the standby database.

2- Some archived logs on the primary that have not been applied to the standby have been lost. In such cases, you can create an incremental backup of the primary database containing changes since the standby database was last refreshed. This incremental backup can then be applied to the standby database to synchronize it with the primary database.
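The speed argument in scenario 1 can be illustrated with a toy model. Everything here is illustrative, not Oracle internals: it only captures the idea that redo apply replays every change and then rolls back the uncommitted ones, while an incremental backup ships each changed block once, already at its committed end state.

```python
# Toy model of why an incremental backup can beat redo apply for a
# lagging standby. Purely illustrative; not Oracle's actual redo format.

def redo_apply_operations(changes, committed_txns):
    """Redo apply replays every change, then performs one rollback
    operation per change belonging to an uncommitted transaction."""
    rollbacks = sum(1 for txn, _block in changes if txn not in committed_txns)
    return len(changes) + rollbacks

def incremental_backup_operations(changes, committed_txns):
    """An incremental backup writes each changed block once, reflecting
    only the committed end state."""
    changed_blocks = {block for txn, block in changes if txn in committed_txns}
    return len(changed_blocks)

# Five changes: transactions "a" and "b" committed, "c" rolled back.
changes = [("a", 1), ("a", 2), ("b", 2), ("c", 3), ("c", 4)]
committed = {"a", "b"}
```

With these sample values, redo apply costs 7 operations (5 changes plus 2 rollbacks), while the incremental approach touches only the 2 distinct committed blocks.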

Overview:

  • Stop redo transport on primary
  • Switch logs on primary – results in some archived logs on primary which have not been sent to standby
  • Rename newly generated archived logs on primary to simulate their loss
  • Restart redo transport – gives error as gap cannot be resolved due to missing logs
  • Create a control file for standby database on primary
  • Take incremental backup on primary starting from the SCN# of standby database
  • Copy the incremental backup to the standby host and catalog it with RMAN
  • Mount the standby database with newly created standby control file
  • Cancel managed recovery of standby database and apply incremental backup to the standby database
  • Start managed recovery of standby database

Implementation:

– Check current log sequence on primary

PRI>archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     40
Next log sequence to archive   42
Current log sequence           42

- Check that all the archived logs prior to the current log have been sent to the standby

SQL> select max(sequence#) from v$archived_log;

MAX(SEQUENCE#)
--------------
41

– stop redo transport from primary (dg01)

DGMGRL> show database dg01

Database - dg01

Role:            PRIMARY
Intended State:  TRANSPORT-ON
Instance(s):
dg01

Database Status:
SUCCESS

DGMGRL> edit database dg01 set state='Transport-off';
Succeeded.

DGMGRL> show database dg01

Database - dg01

Role:            PRIMARY
Intended State:  TRANSPORT-OFF
Instance(s):
dg01

Database Status:
SUCCESS

– switch log on primary

PRI>alter system switch logfile;
alter system switch logfile;
alter system switch logfile;

archive log list;

Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     44
Next log sequence to archive   45
Current log sequence           46

– Check that logs after sequence# 42 are not being transported to the standby

SBY> select max(sequence#) from v$archived_log;
MAX(SEQUENCE#)
--------------
42

– Find out names of archived logs generated on primary which have not been transported to standby

PRI>set line 500
col name for a40
select sequence#, name from v$archived_log where sequence# > 42;

SEQUENCE# NAME
---------- ----------------------------------------
43 /u01/app/oracle/flash_recovery_area/DG01
/archivelog/2013_11_02/o1_mf_1_43_9780s1
ch_.arc

44 /u01/app/oracle/flash_recovery_area/DG01
/archivelog/2013_11_02/o1_mf_1_44_9780tl
t0_.arc

45 /u01/app/oracle/flash_recovery_area/DG01
/archivelog/2013_11_02/o1_mf_1_45_9780tl
vo_.arc

– To simulate loss of archived logs on primary, rename them

PRI>ho mv /u01/app/oracle/flash_recovery_area/DG01/archivelog/2013_11_02/o1_mf_1_43_9780s1ch_.arc  /home/oracle/arch_43.arc

ho mv /u01/app/oracle/flash_recovery_area/DG01/archivelog/2013_11_02/o1_mf_1_44_9780tlt0_.arc /home/oracle/arch_44.arc

ho mv /u01/app/oracle/flash_recovery_area/DG01/archivelog/2013_11_02/o1_mf_1_45_9780tlvo_.arc /home/oracle/arch_45.arc

– Restart redo transport

DGMGRL>  edit database dg01 set state='Transport-on';

– check that gap in redo logs on standby cannot be resolved as some logs are missing on primary

DGMGRL> show database dg01

Database - dg01

Role:            PRIMARY
Intended State:  TRANSPORT-ON
Instance(s):
dg01

Database Error(s):
 ORA-16783: cannot resolve gap for database dg02

Database Status:
ERROR

DGMGRL> show configuration;

Configuration - dgconfig1

Protection Mode: MaxPerformance
Databases:
dg01 - Primary database
Error: ORA-16724: cannot resolve gap for one or more standby databases

dg02 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
ERROR

– check current scn# of standby

SBY>select current_scn from v$database;

CURRENT_SCN
-----------
998647

– Create a standby control file:

PRI>ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/home/oracle/standby_control.ctl';

– copy the standby control file to standby host

PRI> !scp /home/oracle/standby_control.ctl node2:/home/oracle/standby_control.ctl

– Take incremental backup on primary starting from scn# of standby database
RMAN uses the selected SCN as the basis for this incremental backup. For all files being backed up, RMAN includes all data blocks that were changed at SCNs greater than or equal to the FROM SCN in the incremental backup.
Note:
• RMAN does not consider the incremental backup as part of a backup strategy at the source database. The backup is not suitable for use in a normal RECOVER DATABASE operation at the source database.
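The FROM SCN selection rule described above can be sketched as follows. The dictionary-of-block-SCNs structure and the function name are hypothetical, for illustration only:

```python
# Sketch of BACKUP INCREMENTAL FROM SCN block selection: include every
# data block whose last-change SCN is >= the FROM SCN.
# The block -> SCN mapping below is a made-up illustration.

def blocks_to_back_up(block_scns, from_scn):
    """Return the blocks RMAN would include in a FROM SCN incremental."""
    return {blk for blk, scn in block_scns.items() if scn >= from_scn}

# Hypothetical blocks with the SCN at which each was last changed.
block_scns = {
    "system.dbf:101": 990000,   # changed after the FROM SCN -> included
    "users.dbf:17":   987005,   # changed exactly at the FROM SCN -> included
    "example.dbf:3":  950000,   # unchanged since well before -> excluded
}
```

With from_scn = 987005, the value used in the backup command below, only the first two blocks are selected.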

RMAN> Backup incremental from SCN 987005 database tag='FOR_STANDBY' format '/home/oracle/%d_%t_%s_%p';

Starting backup at 02-NOV-13

channel ORA_DISK_1: starting full datafile backup set
...
channel ORA_DISK_1: starting piece 1 at 02-NOV-13
channel ORA_DISK_1: finished piece 1 at 02-NOV-13
piece handle=/home/oracle/DG01_830395300_22_1 tag=FOR_STANDBY comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03

– Move the backup pieces to the standby host

PRI> !scp /home/oracle/DG01_830395300_22_1 node2:/home/oracle/DG01_830395300_22_1

– Catalog the Incremental Backup Files at the Standby Database

RMAN>  catalog backuppiece  '/home/oracle/DG01_830395300_22_1';

list backup tag FOR_STANDBY;

List of Backup Sets
===================

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
9       Incr    9.38M      DISK        00:00:00     02-NOV-13
BP Key: 9   Status: AVAILABLE  Compressed: NO  Tag: FOR_STANDBY
Piece Name: /home/oracle/DG01_830395300_22_1
Keep: NOLOGS             Until: 09-NOV-13

List of Datafiles in backup set 9

– Find out names of current control files

SQL>  col value for a50
col name for a15
select name, value   from v$parameter where upper(name)= 'CONTROL_FILES';

NAME            VALUE
--------------- --------------------------------------------------
control_files   /u01/app/oracle/oradata/dg02/control01.ctl, /u01/a
pp/oracle/flash_recovery_area/dg02/control02.ctl

- Shut down the standby database and rename the original control files of the standby database:

SBY> Shu immediate;
!mv /u01/app/oracle/oradata/dg02/control01.ctl /u01/app/oracle/oradata/dg02/control01.bak
!mv /u01/app/oracle/flash_recovery_area/dg02/control02.ctl /u01/app/oracle/flash_recovery_area/dg02/control02.bak

– Restore the standby control file we just copied from the primary

SBY>    ! cp /home/oracle/standby_control.ctl /u01/app/oracle/oradata/dg02/control01.ctl
! cp /home/oracle/standby_control.ctl /u01/app/oracle/flash_recovery_area/dg02/control02.ctl

– Startup the Standby database with the new controlfile:

SBY> startup mount;

– Apply the Incremental Backup to the Standby Database
Use the RMAN RECOVER command with the NOREDO option to apply the incremental backup to the standby database. All changed blocks captured in the incremental backup are updated at the standby database, bringing it up to date with the primary database.

SBY> recover managed standby database cancel;
RMAN> RECOVER DATABASE from tag for_standby NOREDO;

Starting recover at 02-NOV-13
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: /u01/app/oracle/oradata/dg02/system01.dbf
destination for restore of datafile 00002: /u01/app/oracle/oradata/dg02/sysaux01.dbf
destination for restore of datafile 00003: /u01/app/oracle/oradata/dg02/undotbs01.dbf
destination for restore of datafile 00004: /u01/app/oracle/oradata/dg02/users01.dbf
destination for restore of datafile 00005: /u01/app/oracle/oradata/dg02/example01.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/DG01_830395300_22_1
channel ORA_DISK_1: piece handle=/home/oracle/DG01_830395300_22_1 tag=FOR_STANDBY
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01

Finished recover at 02-NOV-13

Now your standby database is refreshed from the incremental backup.
You can start the Managed Recovery process on the standby DB:

SBY>recover managed standby database disconnect;
Media recovery complete.

You can now resume managed recovery at the standby. Any redo logs required at the standby with changes since those contained in the incremental are automatically requested from the primary and applied.

— check that SCN# of the standby database has advanced.

SBY>select current_scn from v$database;

CURRENT_SCN
-----------
1005729

DGMGRL> edit database dg01 set state='transport-on';
DGMGRL> show configuration;

Configuration - dgconfig1

Protection Mode: MaxPerformance
Databases:
dg01 - Primary database
dg02 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Note:
This technique cannot be used to update a duplicate database.

References :

http://dba-tips.blogspot.in/2011/10/refresh-standby-database-from-rman.html

http://www.stanford.edu/dept/itss/docs/oracle/10gR2/backup.102/b14191/rcmdupdb008.htm

———————————————————————————————–

Related Links:

Home
11g Dataguard Index

 

Flashback through role transition for physical standby

In this post, I will demonstrate that database role does not change when a physical standby is flashed back through a switchover.

Overview:
– Note current SCN# (pre switchover)
– Check current configuration
pc00sby1 – Primary database
pc00prmy – Physical standby database
– Perform switchover
– check post switchover configuration
pc00prmy – Primary database
pc00sby1 – Physical standby database
– Flashback new primary (pc00prmy) to SCN# before switchover
– Check that there is an error in the configuration and the current standby (pc00sby1) has diverged from the primary, as the standby's SCN# is ahead of the primary's
– To bring the standby database back into the configuration, flash it back to an SCN# before the switchover
— Check that both databases retain their current roles after the flashback
pc00prmy – Primary database
pc00sby1 – Physical standby database

– Implementation –

- Check current configuration

pc00sby1 – Primary database
pc00prmy – Physical standby database

DGMGRL> show configuration;

Configuration - dgconfig1

Protection Mode: MaxPerformance
Databases:
pc00sby1 - Primary database
pc00prmy - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

– Find out current SCN# (before switchover)

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1100354

– Perform switchover

DGMGRL> switchover to pc00prmy;
Performing switchover NOW, please wait...
New primary database "pc00prmy" is opening...
Operation requires shutdown of instance "pc00sby1" on database "pc00sby1"
Shutting down instance "pc00sby1"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "pc00sby1" on database "pc00sby1"
Starting instance "pc00sby1"...
ORACLE instance started.
Database mounted.
Database opened.
Switchover succeeded, new primary is "pc00prmy"

– check that switchover has taken place successfully
pc00prmy – Primary database
pc00sby1 – Physical standby database

DGMGRL> show configuration;

Configuration - dgconfig1

Protection Mode: MaxPerformance
Databases:
pc00prmy - Primary database
pc00sby1 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

PC00SBY1>select database_role from v$database;

DATABASE_ROLE
----------------
PHYSICAL STANDBY

PC00PRMY>select database_role from v$database;

DATABASE_ROLE
----------------
PRIMARY

– Flashback new primary (pc00prmy) to SCN# before switchover (1100354)

PC00PRMY>shu immediate;
         startup mount;
         flashback database to scn 1100354;
         alter database open resetlogs;

         alter system switch logfile;

– Check that there is an error in the configuration and the current standby (pc00sby1) has diverged from the primary

DGMGRL> show configuration;

Configuration - dgconfig1

Protection Mode: MaxPerformance
Databases:
pc00prmy - Primary database
pc00sby1 - Physical standby database
Error: ORA-16810: multiple errors or warnings detected for the database

Fast-Start Failover: DISABLED

Configuration Status:
 ERROR

DGMGRL> show database pc00sby1;

Database - pc00sby1

Enterprise Manager Name: pc00sby1.us.oracle.com
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: (unknown)
Apply Lag: 0 seconds
Real Time Query: OFF
Instance(s):
pc00sby1

Database Error(s):
 ORA-16700: the standby database has diverged from the primary database
 ORA-16766: Redo Apply is stopped

Database Status:
 ERROR

— The reason for the divergence is that the SCN# of the standby (1101143) is ahead of the primary (1100582)

PC00PRMY>select current_scn from v$database;

CURRENT_SCN
-----------
1100582

PC00SBY1>select current_scn from v$database;

CURRENT_SCN
-----------
1101143

— To bring the standby database back into the configuration, we need to flash it back to an SCN# behind the primary.

– Let's flash the standby database back to the SCN# before the switchover (1100354), since we want to verify that the databases maintain their current roles after the flashback.

PC00SBY1>shu immediate;
startup mount;
flashback database to scn 1100354;

alter database open;

PC00SBY1>select current_scn from v$database;

CURRENT_SCN
-----------
1100354

– switch log on primary –

PC00PRMY>alter system switch logfile;

System altered.

– check configuration –
— Both the databases retain their current roles –
pc00prmy – Primary database
pc00sby1 – Physical standby database

DGMGRL> show configuration;

Configuration - dgconfig1

Protection Mode: MaxPerformance
Databases:
pc00prmy - Primary database
pc00sby1 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

I hope this post was useful.

Enjoy!

———————————————————————————–

Related links:

————————————————————————–

TUNING RMAN PART-II

In continuation of my earlier post Tuning RMAN Part-I, in this post I will demonstrate the use of asynchronous I/O to improve the performance of RMAN operations.

RMAN can perform I/O in two ways: synchronous and asynchronous. Synchronous I/O is a bottleneck, so we should always aim for asynchronous I/O for better performance. Some platforms support asynchronous I/O by default while others do not. On platforms that do not support asynchronous I/O, we can simulate it by setting the parameter dbwr_io_slaves.

If I/O is performed asynchronously, records appear in v$backup_async_io.
If I/O is performed synchronously, records appear in v$backup_sync_io.

Let's have a demonstration:

The parameter disk_asynch_io decides whether asynchronous I/O is enabled at the OS level. By default, its value is TRUE if the OS supports asynchronous I/O. Currently my database is running on Windows Server 2008, which supports asynchronous I/O. Let's verify this by looking at the default value of the disk_asynch_io parameter.
SYS>sho parameter disk_asynch_io

NAME            TYPE        VALUE
--------------- ----------- -----
disk_asynch_io  boolean     TRUE
When asynchronous I/O is performed at the OS level, the buffers needed by RMAN are allocated from the PGA. Let's verify this by checking the session PGA memory as a backup progresses.

– Take a backup using multiplexing level = 4 – this needs to allocate 16 MB of buffers from the PGA
– While the backup is going on, issue the next query repeatedly to monitor PGA usage by the RMAN session. Note that PGA consumption increases as the backup progresses.
rman>run {
       allocate channel c1 device type disk;
       backup datafile 1,2,3,4 filesperset 4;
     }
– Query the PGA memory usage by the RMAN session.
– Note that PGA consumption increases as the backup progresses.
– Also note that the size of buffers allocated = 18 MB (41 - 23), which is slightly > 16 MB

SQL> col name for a30
     set line 500
     select s.sid, n.name, s.value/1024/1024 session_pga_mb
     from   v$statname n, v$sesstat s
     where  s.sid = (select sess.sid
                     from   v$process p, v$session sess
                     where  p.addr = sess.paddr
                     and    client_info like '%rman%')
     and    n.name = 'session pga memory'
     and    s.statistic# = n.statistic#;
 

SID NAME                 SESSION_PGA_MB
--- -------------------- --------------
 14 session pga memory       23.1208076

SQL> /

SID NAME                 SESSION_PGA_MB
--- -------------------- --------------
 14 session pga memory       41.1833076
If the OS does not support asynchronous I/O, we can simulate it by setting the parameter dbwr_io_slaves to a non-zero value. 4 slave processes will be allocated irrespective of the value of the parameter dbwr_io_slaves. In this case, buffers for RMAN will be allocated from the large pool.
If the large pool is sized to a value lower than the size of the buffers required, RMAN will switch to synchronous I/O and write a message to the alert log.
Let’s verify this.
– Check current values of sga_target/memory_target which will be used later to return to the original state.
- Note the values
SQL>sho parameter sga_target
          sho parameter memory_target 
– Disable AMM/ASMM
– Disable asynchronous I/O at OS level –
– SImulate async IO by configuring slave processes –
– Size large pool to a size smaller than 16M (say 8M)
SQL>alter system set "_memory_imm_mode_without_autosga"=false scope=both;
    alter system set memory_target=0;
    alter system set sga_target=0;
    alter system set disk_asynch_io=false scope=spfile;
    alter system set dbwr_io_slaves=4 scope=spfile;
    alter system set large_pool_size=8m scope=spfile;
    startup force;
– Check that the parameters have been set to the specified values

SQL> sho parameter sga_target
     sho parameter memory_target
     sho parameter disk_asynch_io
     sho parameter dbwr_io_slaves
     sho parameter large_pool_size

NAME             TYPE        VALUE
---------------- ----------- -----
sga_target       big integer 0
memory_target    big integer 0
disk_asynch_io   boolean     FALSE
dbwr_io_slaves   integer     4
large_pool_size  big integer 8M
– Take a backup using multiplexing level = 4
– This needs to allocate 16 MB of buffers from the large pool –

rman>backup datafile 2,3,4,5 filesperset 4;

– Check that I/O is not asynchronous –

SQL>select filename, buffer_size, buffer_count, type, status from v$backup_async_io
    where status='IN PROGRESS';

no rows selected
– Check that synchronous I/O is taking place although asynchronous I/O was simulated –

SQL> col filename for a30 word_wrapped
     select filename, buffer_size, buffer_count, type, status
     from v$backup_sync_io
     where status = 'IN PROGRESS';

FILENAME                       BUFFER_SIZE BUFFER_COUNT TYPE   STATUS
------------------------------ ----------- ------------ ------ -----------
C:\APP\ADMINISTRATOR\ORADATA\O     1048576            4 INPUT  IN PROGRESS
RCL\SYSAUX01.DBF
C:\APP\ADMINISTRATOR\ORADATA\O     1048576            4 INPUT  IN PROGRESS
RCL\UNDOTBS01.DBF
C:\APP\ADMINISTRATOR\ORADATA\O     1048576            4 INPUT  IN PROGRESS
RCL\EXAMPLE01.DBF
C:\APP\ADMINISTRATOR\ORADATA\O     1048576            4 INPUT  IN PROGRESS
RCL\USERS01.DBF
– Check that the alert log indicates that synchronous I/O will be used –

ORA-04031: unable to allocate 1056812 bytes of shared memory ("large pool","unknown
object","large pool","KSFQ Buffers")
ksfqxcre: failure to allocate shared memory means sync I/O will be used

Now let us increase the size of the large pool to a value > 16 MB (say 20 MB) and then issue the same backup. This time asynchronous I/O should be used. We will verify this by checking that records appear in v$backup_async_io.
SQL>alter system set large_pool_size=20m scope=spfile;
startup force;

sho parameter large_pool_size

NAME             TYPE        VALUE
---------------- ----------- -----
large_pool_size  big integer 20M
– Take a backup using multiplexing level = 4
– This needs to allocate 16 MB of buffers from the large pool –

rman>backup datafile 2,3,4,5 filesperset 4;

– Check that I/O is now asynchronous –

SQL> select filename, buffer_size, buffer_count, type, status from v$backup_async_io
     where status='IN PROGRESS';
FILENAME                       BUFFER_SIZE BUFFER_COUNT TYPE   STATUS
------------------------------ ----------- ------------ ------ -----------
C:\APP\ADMINISTRATOR\ORADATA\O     1048576            4 INPUT  IN PROGRESS
RCL\SYSAUX01.DBF
C:\APP\ADMINISTRATOR\ORADATA\O     1048576            4 INPUT  IN PROGRESS
RCL\UNDOTBS01.DBF
C:\APP\ADMINISTRATOR\FLASH_REC     1048576            4 OUTPUT IN PROGRESS
OVERY_AREA\ORCL\BACKUPSET\2012
_09_14\O1_MF_NNNDF_TAG20120914
T100407_855DY50Z_.BKP
Conclusion:
- The parameter disk_asynch_io decides whether asynchronous I/O is enabled at the OS level. By default, its value is TRUE if the OS supports asynchronous I/O.
- When asynchronous I/O is performed at the OS level, the buffers needed by RMAN are allocated from the PGA.
- If the OS does not support asynchronous I/O, we can simulate it by setting the parameter dbwr_io_slaves to a non-zero value. 4 slave processes will be allocated irrespective of the value of the parameter dbwr_io_slaves.
- In the case of simulated asynchronous I/O, buffers for RMAN are allocated from the large pool.
  If the large pool is sized larger than the required buffers, asynchronous I/O will be performed.
  If the large pool is sized smaller than the buffers required, RMAN will switch to synchronous I/O and write a message to the alert log.
- The number of buffers allocated by RMAN per file depends upon the level of multiplexing.
Multiplexing level   Allocation rule

Level <= 4           1 MB buffers are allocated so that the total buffer size for all input files is 16 MB.
4 < Level <= 8       512 KB buffers are allocated so that the total buffer size for all files is less than 16 MB.
Level > 8            RMAN allocates four 128 KB disk buffers per channel for each file, so that the total size is 512 KB per channel for each file.
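As a rough sketch of the allocation rules in the table above (the function name is hypothetical and the arithmetic is an approximation of the documented rules, not an exact model of RMAN's internals):

```python
# Approximate per-channel input buffer memory for a given multiplexing
# level, following the three rules above. Hypothetical helper function,
# not an Oracle API.

MB = 1024 * 1024
KB = 1024

def input_buffer_bytes(level):
    if level <= 4:
        # 1 MB buffers, totaling roughly 16 MB across all input files
        per_file = (16 // level) * MB
    elif level <= 8:
        # 512 KB buffers, total kept under 16 MB
        per_file = (16 * MB // level // (512 * KB)) * 512 * KB
    else:
        # four 128 KB buffers per file = 512 KB per file
        per_file = 4 * 128 * KB
    return per_file * level
```

For the worked example below (level 4, one channel): 4 MB per file * 4 files = 16 MB.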
- We can calculate the size of the buffers required by RMAN as follows:
  Multiply the buffer size per data file by the number of data files that are being concurrently accessed by the channel, and then multiply this number by the number of channels.
  Assume that you use one channel to back up four data files with the settings shown above. In this case, multiply as follows to obtain the total size of the buffers allocated for the backup:
  4 MB per data file * 1 channel * 4 data files per channel = 16 MB
- If asynchronous I/O is supported by the OS, we should size the PGA to a value larger than the size of the buffers required by RMAN.
- If asynchronous I/O is not supported by the OS and we are simulating it, we should size the large pool to a value larger than the size of the buffers required by RMAN.
 
— CLEANUP —
SQL>drop tablespace rman1 including contents and datafiles;
    drop tablespace rman2 including contents and datafiles;
    drop tablespace rman3 including contents and datafiles;
    drop tablespace rman4 including contents and datafiles;
    alter system set disk_asynch_io=true scope=spfile;
    alter system set memory_target = <> scope=spfile;
    alter system set sga_target = <> scope=spfile;
    startup force;
 
————————————————————————————————

Related links:

Home


TUNING RMAN PART – I



In this post and the next one, I will discuss various methods by which we can improve the
performance of RMAN backup and recovery.
RMAN backup/recovery performance is influenced by various parameters:
– Parallelism
– Maxpiecesize – Maximum size of each backup piece
– Filesperset – The number of datafiles in each backup set
– Maxopenfiles – Maximum number of files that can be read from simultaneously
– Multiplexing level
– Asynchronous/Synchronous I/O
– Large pool size
- SETUP -
As part of the setup, I will create 4 more tablespaces. I already have 5 tablespaces in my database, so in all I will have 9 tablespaces.
sql>conn / as sysdba

create tablespace rman1 datafile 'C:\APP\ADMINISTRATOR\ORADATA\ORCL\rman1.dbf' size 100m;
create tablespace rman2 datafile 'C:\APP\ADMINISTRATOR\ORADATA\ORCL\rman2.dbf' size 100m;
create tablespace rman3 datafile 'C:\APP\ADMINISTRATOR\ORADATA\ORCL\rman3.dbf' size 100m;
create tablespace rman4 datafile 'C:\APP\ADMINISTRATOR\ORADATA\ORCL\rman4.dbf' size 100m;

create table hr.test1 tablespace rman1 as select * from hr.employees;
create table hr.test2 tablespace rman2 as select * from hr.employees;
create table hr.test3 tablespace rman3 as select * from hr.employees;
create table hr.test4 tablespace rman4 as select * from hr.employees;
Let us discuss each parameter one by one.
– PARALLELISM –

We can parallelize the backup/recovery operation by:

– Setting parallelism for the device
– Allocating multiple channels, and either

  • letting RMAN decide which files to back up/restore using which channel, or
  • backing up/restoring specified files using a specified channel

Each channel used will create its own backup set.

-- SET PARALLELISM FOR DEVICE –

RMAN>delete backup;
configure device type disk parallelism 3;
backup format 'c:\%U.bak' tablespace rman1, rman2, rman3;
list backup of tablespace rman1, rman2, rman3;
List of Datafiles in backup set 25

6       Full 1583594    12-SEP-12 C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN1.DBF

List of Datafiles in backup set 26

7       Full 1583595    12-SEP-12 C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN2.DBF

List of Datafiles in backup set 27

8       Full 1583596    12-SEP-12 C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN3.DBF
——————————————————————————————-
- Allocating multiple channels and letting RMAN decide which file to back up using which channel
———————————————————————————————-
— Note that RMAN decides to back up rman1 and rman3 using c1 and rman2 using c2
RMAN>configure device type disk parallelism 1;
delete backup;
RMAN>run{
allocate channel c1 device type disk format 'c:\%U.bak';
allocate channel c2 device type disk format 'c:\%U.bak';
backup tablespace rman1, rman2, rman3;
}

channel c1: specifying datafile(s) in backup set
input datafile file number=00006 name=C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN1.DBF
input datafile file number=00008 name=C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN3.DBF
channel c2: specifying datafile(s) in backup set
input datafile file number=00007 name=C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN2.DBF
-- Check that each channel created its own backupset

RMAN>list backup;
List of Datafiles in backup set 28
6       Full 1584178    12-SEP-12 C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN1.DBF
8       Full 1584178    12-SEP-12 C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN3.DBF

List of Datafiles in backup set 29
7       Full 1584179    12-SEP-12 C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN2.DBF
————————————————————————————–
- Allocating multiple channels and backing up specified files using specified channels
————————————————————————————–
— Note that RMAN backs up rman1 and rman2 using c1 and rman3 using c2 as specified
RMAN>delete backup;
run{
allocate channel c1 device type disk format 'c:\%U.bak';
allocate channel c2 device type disk format 'c:\%U.bak';
backup (tablespace rman1, rman2 channel c1)
(tablespace rman3 channel c2);
}

channel c1: specifying datafile(s) in backup set
input datafile file number=00006 name=C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN1.DB
input datafile file number=00007 name=C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN2.DB

channel c2: specifying datafile(s) in backup set
input datafile file number=00008 name=C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN3.DB

– Check that each channel created its own backupset
RMAN>list backup;
List of Datafiles in backup set 30
6       Full 1584550    12-SEP-12 C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN1.DBF
7       Full 1584550    12-SEP-12 C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN2.DBF
List of Datafiles in backup set 31
8       Full 1584551    12-SEP-12 C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN3.DBF
———————
— FILESPERSET —
———————
– This parameter decides how many files will be included in one backupset.
If more than one file is backed up and filesperset = 1, one backupset will be created for each file.

rman>delete backup;
rman>backup datafile 7,8 filesperset 1 format 'c:\%U.bak';

list backup ;
— Note that two backupsets are created with one datafile each
List of Backup Sets
===================
List of Datafiles in backup set 17

7       Full 1529096    11-SEP-12 C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN2.DBF
List of Datafiles in backup set 18

8       Full 1529098    11-SEP-12 C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN3.DBF

Using this parameter we can reduce the recovery time by limiting the no. of files per
backupset, since to restore a particular datafile we only have to read a smaller backupset.
This is also beneficial in case the backup fails in the middle. Next time, we can back up
only those datafiles which could not be successfully backed up.
————————–
— MAXPIECESIZE –
————————–
– This parameter controls the maximum size a backup piece can have.
– Even if we back up only one file, it can be split into multiple pieces.

– Check the size of the datafiles.
– Note that the datafile for tablespace sysaux is 570m in size

rman>report schema;
File Size(MB) Tablespace           RB segs Datafile Name
—- ——– ——————– ——- ————————
1    690      SYSTEM               ***     C:\APP\ADMINISTRATOR\ORADATA\ORCL\SYSTEM01.DBF
2    570      SYSAUX               ***     C:\APP\ADMINISTRATOR\ORADATA\ORCL\SYSAUX01.DBF

– Backup datafile for sysaux tablespace so that each piece size <= 200M
— Note that backup is spread over 3 backup pieces

rman>delete backup;
rman>run{
allocate channel c1 device type disk maxpiecesize 200m format 'c:\%U.bak';
backup tablespace sysaux ;
}
list backup of tablespace sysaux;
List of Backup Pieces for backup set 21 Copy #1
BP Key  Pc# Status      Piece Name
——- — ———– ———-
21      1   AVAILABLE   C:\0NNKUKP0_1_1.BAK
22      2   AVAILABLE   C:\0NNKUKP0_2_1.BAK
23      3   AVAILABLE   C:\0NNKUKP0_3_1.BAK

This option can be used
– to respect the operating system limit on file size.
– to split the backup of a datafile into multiple pieces if the backup does not fit on one tape.
– if the backup is to be transferred over the network.
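
The 200m limit above was set by allocating a channel manually inside a run block; as a sketch, the same limit can also be stored persistently in the RMAN configuration (200m is just an illustrative value), so that every automatically allocated disk channel honours it:

```
RMAN>configure channel device type disk maxpiecesize 200m;
```

With this setting in place, a plain backup command splits its output into pieces of at most 200 MB without an explicit allocate channel. The setting can be reverted with:

```
RMAN>configure channel device type disk clear;
```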
————————–
— MAXOPENFILES —
————————-
— This parameter decides how many files can be backed up simultaneously
i.e. data from how many files can be read at a time.
— Default value = 8

– Take backup of 2 datafiles with maxopenfiles = 1 so that the level of multiplexing = 1, i.e.
. datafile 1 will be backed up first
. datafile 2 will be backed up next
so that data from the 2 files will not be intermingled.
Rather, first all data from datafile 1 will be written and
then data from datafile 2 will be written.

– While the backup is still going on issue the next sql query
RMAN> run{
allocate channel c1 device type disk maxopenfiles 1;
backup datafile 1,2;
}

— Note the change in filename from system01.dbf to sysaux01.dbf
— Note that buffers are allocated to datafile 1 first;
when that has been backed up, buffers are allocated
to datafile 2
— Since the multiplexing level = 1, a total of 16 buffers of 1M each is
allocated to each file, one file at a time

sql>col filename for a25 word_wrapped
select type, status, filename, buffer_size, buffer_count
from v$backup_async_io
where type <> 'AGGREGATE'
and status = 'IN PROGRESS';
TYPE      STATUS      FILENAME                  BUFFER_SIZE BUFFER_COUNT
——— ———– ————————- ———– ————
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD     1048576           16
ATA\ORCL\SYSTEM01.DBF
OUTPUT    IN PROGRESS C:\APP\ADMINISTRATOR\FLAS     1048576            4
H_RECOVERY_AREA\ORCL\BACK
UPSET\2012_09_11\O1_MF_NN
NDF_TAG20120911T152958_84
Y2WZ7B_.BKP

SQL> /
TYPE      STATUS      FILENAME                  BUFFER_SIZE BUFFER_COUNT
——— ———– ————————- ———– ————
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD     1048576           16
ATA\ORCL\SYSAUX01.DBF
OUTPUT    IN PROGRESS C:\APP\ADMINISTRATOR\FLAS     1048576            4
H_RECOVERY_AREA\ORCL\BACK
UPSET\2012_09_11\O1_MF_NN
NDF_TAG20120911T152958_84
Y2WZ7B_.BKP

– Take backup of 2 datafiles with maxopenfiles = 2 so that the level of multiplexing = 2, i.e.
. datafiles 1 and 2 will be backed up simultaneously so that data from the 2 files will be intermingled.

– While the backup is still going on issue the next sql query
RMAN> run{
allocate channel c1 device type disk maxopenfiles 2;
backup datafile 1,2;
}

— Note that buffers are allocated simultaneously to datafiles 1 and 2
— Since the multiplexing level = 2 (<= 4), a total of 16 buffers of 1M each is allocated.
Each file gets 8 buffers of 1M each.

sql>col filename for a25 word_wrapped
select type, status, filename, buffer_size, buffer_count
from v$backup_async_io
where type <> 'AGGREGATE'
and status = 'IN PROGRESS';

TYPE      STATUS      FILENAME                  BUFFER_SIZE BUFFER_COUNT
——— ———– ————————- ———– ————
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD    1048576            8
ATA\ORCL\SYSTEM01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD     1048576            8
ATA\ORCL\SYSAUX01.DBF
OUTPUT    IN PROGRESS C:\APP\ADMINISTRATOR\FLAS     1048576            4
H_RECOVERY_AREA\ORCL\BACK
UPSET\2012_09_11\O1_MF_NN
NDF_TAG20120911T153330_84
Y33M40_.BKP
——————–
— MULTIPLEXING
——————–
Level of multiplexing means the no. of files whose data is intermingled
in the backup file. It depends on the following:
– # of files being backed up using one channel
– Filesperset
– Maxopenfiles
Level of multiplexing = Minimum of the above 3.
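
As a quick worked example (a sketch using the rman1–rman4 tablespaces created in the setup; file numbers may differ on your system), suppose all 4 datafiles go to a single channel with filesperset 2 and maxopenfiles 3:

```
RMAN>run{
allocate channel c1 device type disk maxopenfiles 3;
backup tablespace rman1, rman2, rman3, rman4 filesperset 2;
}
```

Here files per channel = 4, filesperset = 2 and maxopenfiles = 3, so the level of multiplexing = min(4, 2, 3) = 2, i.e. data from at most 2 files is intermingled in each backupset.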
We increase the level of multiplexing so that data can be read simultaneously from multiple
files: even if a process has to wait to read from one of the files being backed up due to
contention, it can still read from the other files, so that the output tape drive gets
continuous input. Otherwise, for lack of input data, the tape drive might overshoot
before stopping and will have to come back to its earlier position when it gets the next
stream of data.
Thus, increasing the level of multiplexing may increase the speed of the backup. On the other
hand, it can reduce the restore speed as data for a file has to be extracted from the
intermingled data.
Thus, one has to decide the level of multiplexing based on whether performance of the
backup or of the recovery is more critical.
In case files being backed up are ASM files, level of multiplexing may be reduced as ASM
automatically takes care of the contention by striping the data.
Input buffers are allocated for the backup. The no. of buffers allocated per file depends
upon the level of multiplexing.
Multiplexing level     Allocation rule

Level <= 4             1 MB buffers are allocated so that the total buffer size for all
                       input files is 16 MB.
4 < Level <= 8         512 KB buffers are allocated so that the total buffer size for all
                       files is less than 16 MB.
Level > 8              RMAN allocates four 128 KB disk buffers per channel for each file,
                       so that the total size is 512 KB per channel for each file.

For multiplexing levels 1 and 2 we have already seen that 1 MB buffers are allocated so that
the total buffer size for all input files is 16 MB.
—————————————–
Let’s verify for multiplexing level of 5 .
—————————————–

RMAN>delete backup;
— Note that 512 KB buffers are allocated so that the total buffer size for all files is less
than 16 MB.
— As the backup progresses, the buffers allocated to files that have already been backed up
get freed.
RMAN> run{
allocate channel c1 device type disk maxopenfiles 5;
backup datafile 1,2,3,4,5;
}

sql>/
TYPE      STATUS      FILENAME                  BUFFER_SIZE BUFFER_COUNT
——— ———– ————————- ———– ————
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\SYSTEM01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\SYSAUX01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\UNDOTBS01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\EXAMPLE01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\USERS01.DBF

sql>/

TYPE      STATUS      FILENAME                  BUFFER_SIZE BUFFER_COUNT
——— ———– ————————- ———– ————
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\SYSTEM01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\SYSAUX01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\UNDOTBS01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\EXAMPLE01.DBF
sql>/

TYPE      STATUS      FILENAME                  BUFFER_SIZE BUFFER_COUNT
——— ———– ————————- ———– ————
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD     524288            6
ATA\ORCL\SYSTEM01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\SYSAUX01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\EXAMPLE01.DBF
sql>/

INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\SYSTEM01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\SYSAUX01.DBF
sql>/

TYPE      STATUS      FILENAME                  BUFFER_SIZE BUFFER_COUNT
——— ———– ————————- ———– ————
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\SYSTEM01.DBF

—————————————–
Let’s verify for multiplexing level of 9 (> 8) .
—————————————–

RMAN>delete backup;
RMAN> run{
allocate channel c1 device type disk maxopenfiles 9;
backup datafile 1,2,3,4,5,6,7,8,9;
}
— Note that buffers are allocated simultaneously to all the 9 datafiles so that data from all the files will be intermingled in the output file.
— Since multiplexing level = 9 ( Level > 8), RMAN allocates four 128 KB disk buffers per channel for each file, so that the total size is 512 KB per channel for each file.

sql>col filename for a25 word_wrapped
select type, status, filename, buffer_size, buffer_count
from v$backup_async_io
where type <> 'AGGREGATE'
and status = 'IN PROGRESS';
TYPE      STATUS      FILENAME                  BUFFER_SIZE BUFFER_COUNT
——— ———– ————————- ———– ————
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD     131072            4
ATA\ORCL\SYSTEM01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      131072            4
ATA\ORCL\SYSAUX01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      131072            4
ATA\ORCL\UNDOTBS01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      131072            4
ATA\ORCL\EXAMPLE01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      131072            4
ATA\ORCL\RMAN1.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      131072            4
ATA\ORCL\RMAN2.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      131072            4
ATA\ORCL\RMAN3.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      131072            4
ATA\ORCL\RMAN4.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      131072            4
ATA\ORCL\USERS01.DBF
Conclusion :

- We can parallelize the backup/recovery operation by
   – setting parallelism for the device
   – allocating multiple channels and
      . letting RMAN decide which file to back up/restore using which channel
      . backing up/restoring specified files using specified channels
- Filesperset
  . Using this parameter we can reduce the recovery time by limiting the no. of files per
backupset, since to restore a particular datafile we only have to read a smaller backupset.
  . This is also beneficial in case the backup fails in the middle. Next time, we can back up
only those datafiles which could not be successfully backed up.
- Maxpiecesize
  This option can be used
   – to respect the operating system limit on file size.
   – to split the backup of a datafile into multiple pieces if the backup does not fit in one tape
   – if backup is to be transferred over network.
- Maxopenfiles
  This parameter decides how many files can be backed up simultaneously i.e. data from how many files can be read at a time.
- Multiplexing
  Level of multiplexing means the no. of files whose data is intermingled in the backup file. It depends on the following:
- # of files being backed up using one channel
- Filesperset
- Maxopenfiles
Level of multiplexing = Minimum of the above 3.
   – Increasing the level of multiplexing can effectively keep the tape streaming.
   – Increasing the level of multiplexing may increase the speed of the backup. On the other
hand, it can reduce the restore speed as data for a file has to be extracted from the
intermingled data.
   – Thus, one has to decide the level of multiplexing based on whether performance of the
backup or of the recovery is more critical.
   – In case files being backed up are ASM files, level of multiplexing may be reduced as ASM automatically takes care of the contention by striping the data.
- Input buffers are allocated for the backup. The no. of buffers allocated per file depends
upon the level of multiplexing.
Multiplexing level        Allocation rule
   Level <= 4           1 MB buffers are allocated so that the total buffer size for all
input files is 16 MB.
   4 < Level <= 8       512 KB are allocated so that the total buffer size for all files is
less than 16 MB.
   Level > 8            RMAN allocates four 128 KB disk buffers per channel for each file,
so that the total size is 512 KB per channel for each file.
- We can calculate the size of the buffers required by RMAN as follows:
    Multiply the total buffer bytes for each data file by the number of data files that are being concurrently accessed by the channel, and then multiply this number by the number of channels.
Assume that you use one channel to back up four data files, with the settings shown above. In this case, multiply as follows to obtain the total size of the buffers that are allocated for the backup:
4 MB per data file × 1 channel × 4 data files per channel = 16 MB
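
These totals can also be observed live from v$backup_async_io while a backup runs; a sketch reusing the columns queried earlier, where total_bytes is simply buffer_size * buffer_count computed per file:

```
sql>select type, status, buffer_size, buffer_count,
       buffer_size * buffer_count total_bytes
from v$backup_async_io
where type <> 'AGGREGATE'
and status = 'IN PROGRESS';
```

Summing total_bytes over the INPUT rows should agree with the allocation rules in the table above.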
In my next post,
Tuning RMAN Part-II, I will discuss the use of asynchronous I/O to improve the performance of RMAN backup/recovery.

Thanx for your time. Happy Tuning !!

References

http://docs.oracle.com/cd/B28359_01/backup.111/b28270/rcmcncpt.htm#BRADV89486
http://hemantoracledba.blogspot.in/2015/07/rman-5-useful-keywords-and-subclauses.html?utm_source=feedburner&utm_medium=email&utm_campaign=Feed:+blogspot/JcnHK+(Hemant%27s+Oracle+DBA+Blog)
http://hemantoracledba.blogspot.in/2015/07/rman-5b-more-useful-keywords-and.html?utm_source=feedburner&utm_medium=email&utm_campaign=Feed:+blogspot/JcnHK+(Hemant%27s+Oracle+DBA+Blog)
http://hemantoracledba.blogspot.in/2015/07/rman-5c-some-more-useful-keywords-and.html?utm_source=feedburner&utm_medium=email&utm_campaign=Feed:+blogspot/JcnHK+(Hemant%27s+Oracle+DBA+Blog)
Retention Policy and Control_file_record_keep_time
3 RMAN Parameters every DBA should know
Spreading RMAN Backups Across Multiple Disks
Triplexing RMAN Backups

———————————————————————————————————-

Related links:

Home

RMAN Index

                                                ——————-

 

MIGRATION OF A RAC DATABASE

In this post, I will demonstrate how we can transport a RAC database from one cluster to another
(e.g. from production to test environment).
Current scenario:
- Database to be converted : orcl
– Target cluster name : cluster01.example.com
– Target cluster SCAN name : cluster01-scan.cluster01.example.com
– No. of nodes in the target cluster : host01, host02, host03
– Name of the database after conversion : newdb
Overview:
———————-
On the source Server
———————-
- Check prerequisites
– Start the database in READ ONLY mode
– Verify the database is ready for migration
– Make a folder to hold converted datafiles
– Use RMAN Convert command to create
. Transport script
. Initialization parameter file
. Converted Datafiles
———————
On the target server
———————

– Create appropriate folders
– Transfer above files to the proper location on the target system.
– Edit initialization parameter file.
– Edit the transport script transport.sql
– Make an entry for the database in /etc/oratab
– Run transport script to create a single instance database on the target server.
– Register the database with clusterware
– Put the database in archivelog mode if not already
– Use rconfig to convert the single instance database to RAC database
– Create entries for the database in tnsnames.ora

Implementation:

———————-
On the source Server
———————

– Check prerequisites

– check the platform name of current database —

SQL>SELECT PLATFORM_NAME
FROM  V$DATABASE;

PLATFORM_NAME
——————————
Linux IA (32-bit)

— Start the database in READ ONLY mode

[oracle@host01]$srvctl stop database -d orcl

SQL> startup mount;
alter database open read only;

— Verify the database is ready for migration

SQL>   SET  SERVEROUTPUT ON;
DECLARE
db_ready BOOLEAN;
BEGIN
db_ready :=
DBMS_TDB.CHECK_DB('Linux IA (32-bit)',DBMS_TDB.SKIP_READONLY);
if db_ready
then
dbms_output.put_line('YES your database can be transported. ');
else
dbms_output.put_line('NO your database can not be transported. ');
end if;
end;
/

– Make a folder to hold converted datafiles

[oracle@host01]$mkdir /home/oracle/convertdb

- Use RMAN Convert Database command to create
. Transport script
. Initialization parameter file
. Converted Datafiles

The CONVERT DATABASE command specified in the example creates
– a transport script named /home/oracle/convertdb/transport.sql,
– a PFILE initnewdb.ora in the directory /home/oracle/convertdb/newdb
– a copy of all data files in the /home/oracle/convertdb/newdb directory in the format
of the target platform 'Linux IA (32-bit)'

RMAN>CONVERT DATABASE
NEW DATABASE 'newdb'
TRANSPORT SCRIPT '/home/oracle/convertdb/transport.sql'
TO PLATFORM 'Linux IA (32-bit)'
FORMAT '/home/oracle/convertdb/newdb/%U';

Starting conversion at source at 23-FEB-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=42 instance=orcl1 device type=DISK

External table SH.SALES_TRANSACTIONS_EXT found in the database

Directory SYS.SS_OE_XMLDIR found in the database
Directory SYS.DATA_PUMP_DIR found in the database
Directory SYS.SUBDIR found in the database
Directory SYS.MEDIA_DIR found in the database
Directory SYS.LOG_FILE_DIR found in the database
Directory SYS.DATA_FILE_DIR found in the database
Directory SYS.XMLDIR found in the database
Directory SYS.ORACLE_OCM_CONFIG_DIR found in the database

BFILE PM.PRINT_MEDIA found in the database

User SYS with SYSDBA and SYSOPER privilege found in password file
channel ORA_DISK_1: starting datafile conversion
input datafile file number=00001 name=+DATA/orcl/datafile/system.256.808138599
converted datafile=/home/oracle/convertdb/newdb/data_D-ORCL_I-1335919524_TS-SYSTEM_FNO-1_16o2n0vb
.
.
.
.
converted datafile=/home/oracle/convertdb/newdb/data_D-ORCL_I-1335919524_TS-USERS_FNO-4_1co2n13i
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:03

Edit init.ora file /home/oracle/convertdb/newdb/init_00o2n0vb_1_0.ora. This PFILE will be used to

create the database on the target platform

Run SQL script /home/oracle/convertdb/transport.sql on the target platform to create database

To recompile all PL/SQL modules, run utlirp.sql and utlrp.sql on the target platform
To change the internal database identifier, use DBNEWID Utility
Finished conversion at source at 23-FEB-13

———————
On the target server
———————

- Create appropriate folders

[oracle@host01]$mkdir -p /home/oracle/convertdb/newdb
mkdir -p /u01/app/oracle/admin/newdb/adump

- Transfer above files to the proper location on the target system
. transport.sql to /home/oracle/convertdb folder
. init….ora  and converted datafiles to /home/oracle/convertdb/newdb folder

– Edit the initialization parameter file

[oracle@host01]$vi  /home/oracle/convertdb/newdb/init_00o2n0vb_1_0.ora

# Please change the values of the following parameters:

# db_create_file_dest      = "/home/oracle/convertdb/newdb/+DATA"
db_create_file_dest      = "+DATA"

#  db_recovery_file_dest    = "/home/oracle/convertdb/newdb/+FRA"
db_recovery_file_dest    = "+FRA"

db_recovery_file_dest_size= 4039114752

# remote_listener          = "NEWDBcluster01-scan.cluster01.example.com:1521"
remote_listener          = "cluster01-scan.cluster01.example.com:1521"

#  cluster_database         = TRUE
cluster_database         = FALSE

#  audit_file_dest          = "/home/oracle/convertdb/newdb/adump"
audit_file_dest          = "/u01/app/oracle/admin/newdb/adump"
– Edit the transport script transport.sql

[oracle@host01]$vi  /home/oracle/convertdb/transport.sql

– Delete the following lines, which create redo threads 2 and 3

— Create log files for threads other than thread one.

SQL>ALTER DATABASE ADD LOGFILE THREAD 2
GROUP 3 SIZE 50M BLOCKSIZE 512 REUSE,
GROUP 4 SIZE 50M BLOCKSIZE 512 REUSE;
ALTER DATABASE ADD LOGFILE THREAD 3
GROUP 5 SIZE 50M BLOCKSIZE 512 REUSE,
GROUP 6 SIZE 50M BLOCKSIZE 512 REUSE;

- Make the following entry for the database in /etc/oratab

newdb:/u01/app/oracle/product/11.2.0/db_1:N

- Run transport script to create a single instance database on the target server

[oracle@host01]$. oraenv - newdb

sqlplus  / as sysdba

SQL>@/home/oracle/convertdb/transport.sql

– check that database has been transported —

[oracle@host01]$. oraenv - newdb

sqlplus  / as sysdba

SQL>startup

select * from hr.employees;

- Register the database with clusterware

– Check the database is currently not registered with clusterware

[oracle@host01 ~]$ srvctl status database -d newdb

PRCD-1120 : The resource for database newdb could not be found.
PRCR-1001 : Resource ora.newdb.db does not exist

[root@host01 trace]# crsctl stat res -t |grep newdb

— Register the database

[oracle@host01 ~]$ srvctl add database -d newdb -o $ORACLE_HOME -p /u01/app/oracle/product/11.2.0/dbhome_1/dbs/spfilenewdb.ora -a DATA,FRA

– Check the database is now registered with clusterware

[root@host01 trace]# crsctl stat res -t |grep newdb

ora.newdb.db

— Start the single-instance database using clusterware

[oracle@host01 ~]$ srvctl start database -d newdb

– Put the database in archivelog mode if not already
( needed to convert it to a RAC database)

SQL> shu immediate;
startup mount;
alter database archivelog;
alter database open;
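
Before handing the database to rconfig, it is worth confirming that the mode change took effect; a quick check:

```
SQL>select log_mode from v$database;

LOG_MODE
------------
ARCHIVELOG
```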

- Use rconfig to convert the single instance newdb database to a RAC database running
on three nodes host01, host02 and host03 with

. Datafiles on +DATA diskgroup
. Flash recovery area on +FRA diskgroup

— copy ConvertToRAC_AdminManaged.xml to another file my.xml

[oracle@host01]$cd $ORACLE_HOME/assistants/rconfig/sampleXMLs

cp ConvertToRAC_AdminManaged.xml my.xml

— Edit my.xml and make following changes :

. Specify current OracleHome of non-rac database for SourceDBHome
. Specify OracleHome where the rac database should be configured. It can be same as SourceDBHome
. Specify SID of non-rac database and credential. User with sysdba role is required to perform conversion
. Specify the list of nodes that should have rac instances running for the Admin Managed Cluster Database. LocalNode should be the first node in this nodelist.
. Instance Prefix tag is optional starting with 11.2. If left empty, it is derived from db_unique_name
. Specify the type of storage to be used by rac database. Allowable values are CFS|ASM
. Specify Database Area Location to be configured for rac database.
. Specify Flash Recovery Area to be configured for rac database.

— Run rconfig to convert newdb from a single instance database to a 3-instance RAC database

host01$rconfig my.xml

Converting Database “NEWDB” to Cluster Database. Target Oracle Home:

/u01/app/oracle/product/11.2.0/dbhome_1. Database Role: PRIMARY.
Setting Data Files and Control Files
Adding Database Instances
Adding Redo Logs
Enabling threads for all Database Instances
Setting TEMP tablespace
Adding UNDO tablespaces
Adding Trace files
Setting Flash Recovery Area
Updating Oratab
Creating Password file(s)
Configuring Listeners
Configuring related CRS resources
Starting Cluster Database

Operation Succeeded

<Oracle_Home>
/u01/app/oracle/product/11.2.0/dbhome_1

— Check the log file for rconfig while conversion is going on

oracle@host01$ls -lrt  $ORACLE_BASE/cfgtoollogs/rconfig/*.log

              tailf <logfilename>

— check that the database has been converted successfully

host01$srvctl status database -d newdb

Instance newdb1 is running on node host01
Instance newdb2 is running on node host02
Instance newdb3 is running on node host03

[root@host01 trace]# srvctl config database -d newdb

Database unique name: NEWDB
Database name: NEWDB
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/spfilenewdb.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: NEWDB
Database instances: newdb1,newdb2,newdb3
Disk Groups: DATA,FRA
Services:
Database is administrator managed

[root@host01 trace]# crsctl stat res -t

ora.newdb.db
1        ONLINE  ONLINE       host01                   Open
2        ONLINE  ONLINE       host02                   Open
3        ONLINE  ONLINE       host03                   Open

- Create entries for the database in tnsnames.ora

[oracle@host01]$vi $ORACLE_HOME/network/admin/tnsnames.ora

— Following is the entry I copied and added :

NEWDB =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = cluster01-scan.cluster01.example.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = newdb)
)
)

– check that database can be connected remotely from remote node.

host02$sqlplus system/oracle@newdb

— check that datafiles have converted to ASM

SQL>select name from v$datafile;

NAME
——————————————————————————–
+DATA/newdb/datafile/system.276.808311515
+DATA/newdb/datafile/sysaux.275.808311589
+DATA/newdb/datafile/undotbs1.272.808311655
+DATA/newdb/datafile/users.282.808311747
+DATA/newdb/datafile/example.273.808311721
+DATA/newdb/datafile/undotbs2.280.808311735
+DATA/newdb/datafile/undotbs3.281.808311743

I hope that you found this post useful. Your comments and suggestions are always welcome.

——————————————————————————————————-


Related links:

Home

11G R2 RAC Index
11g R2 RAC: Clone Database Home

 

11g R2 RAC: CONVERT NON RAC DATABASE TO RAC DATABASE USING RCONFIG

Oracle supports the following methods to convert a single-instance database to a RAC database, as long as the RAC and the standalone environments are running on the same operating system and use the same Oracle release:
• DBCA
• Oracle Enterprise Manager (Grid Control)
• RCONFIG
• Manual method
In this post, I will focus on and demonstrate the conversion using the rconfig command-line tool.
During the conversion, rconfig performs the following steps automatically:
• Migrating the database to ASM, if specified
• Creating RAC database instances on all specified nodes in the cluster
• Configuring the Listener and NetService entries
• Registering services with CRS
• Starting up the instances and listener on all nodes
In Oracle 11g R2, a single-instance database can be converted either to an administrator-managed cluster database or to a policy-managed cluster database.
When you navigate through the $ORACLE_HOME/assistants/rconfig/sampleXMLS, you will find two sample XML input files.
- ConvertToRAC_AdminManaged.xml
- ConvertToRAC_PolicyManaged.xml
While converting a single-instance database with filesystem storage to a RAC database with Automatic Storage Management (ASM), rconfig invokes RMAN internally to back up the database in order to migrate it from non-ASM to ASM storage. Therefore, configuring RMAN on the local node to use multiple channels in parallel may make the backup run faster, which eventually reduces the conversion duration. For example, you may configure the following in the RMAN settings of the orcl database on the local node.
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 6;
CURRENT SCENARIO:
- 3 node RAC setup
- Names of nodes : Host01, Host02, Host03
- Name of single instance database with filesystem storage : orcl
- Source Oracle home : /u01/app/oracle/product/11.2.0/dbhome_1
- Target Oracle home : /u01/app/oracle/product/11.2.0/dbhome_1
OBJECTIVE
- convert orcl to an Admin-managed RAC database running on two nodes host01 and host02.
- change storage to ASM with
   . Datafiles on +DATA diskgroup
   . Flash recovery area on +FRA diskgroup
IMPLEMENTATION:
– copy ConvertToRAC_AdminManaged.xml to another file my.xml
host01$cd $ORACLE_HOME/assistants/rconfig/sampleXMLs
host01$cp ConvertToRAC_AdminManaged.xml my.xml
– Edit my.xml and make following changes :
   . Specify Convert verify as “YES”
   . Specify current OracleHome of non-rac database for SourceDBHome
   . Specify OracleHome where the rac database should be configured. It can be same as SourceDBHome
   . Specify SID of non-rac database and credential. User with sysdba role is required to perform conversion
   . Specify the list of nodes that should have rac instances running for the Admin Managed Cluster Database. LocalNode should be the first node in this nodelist.
   . Instance Prefix tag is optional starting with 11.2. If left empty, it is derived from db_unique_name
   . Specify the type of storage to be used by rac database. Allowable values are CFS|ASM
   . Specify Database Area Location to be configured for rac database.
   . Specify Flash Recovery Area to be configured for rac database.
Note: The Convert verify option in xml file has three options:
  • YES : rconfig performs check that prerequisites for single-instance to RAC conversion have been met before it starts conversion
  • NO : rconfig does not perform checks and starts conversion
  • ONLY: rconfig performs only prerequisite checks; it does not perform conversion after completing checks
– Run rconfig to convert orcl from single instance database to 2 instance RAC database
host01$rconfig my.xml
– Check the log file for rconfig while conversion is going on
oracle@host01$ls -lrt  $ORACLE_BASE/cfgtoollogs/rconfig/*.log
                         tailf <logfilename>
– check that the database has been converted successfully
host01$srvctl status database -d orcl
Instance orcl1 is running on node host01
Instance orcl2 is running on node host02
– Note that rconfig adds the password file on all the nodes, but the tnsnames.ora entry needs to be modified (to reflect the SCAN name instead of the host IP) on the local node and added on the rest of the nodes.
– For all other nodes, copy the entry for the database (orcl) from tnsnames.ora on the local node to tnsnames.ora on the remote node(s).
– Following is the entry I modified on the local node and copied to the rest of the nodes :
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster01-scan.cluster01.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )
– check that database can be connected remotely from remote node.
host02$sqlplus system/oracle@orcl
– check that datafiles have converted to ASM
SQL>select name from v$datafile;
NAME
——————————————————————————–
+DATA/orcl/datafile/system.326.794838279
+DATA/orcl/datafile/sysaux.325.794838349
+DATA/orcl/datafile/undotbs1.305.794838405
+DATA/orcl/datafile/users.342.794838413
+DATA/orcl/datafile/undotbs2.348.794838
———————————————————————————————————–
Related links:

11g R2 RAC: Clone Database Home

 

                                                              ———————