Monthly Archives: February 2013

TUNING RMAN PART – I



In this post and the next one, I will discuss various methods by which we can improve the
performance of RMAN backup and recovery.
RMAN backup/recovery performance is influenced by various parameters:
– Parallelism
– Maxpiecesize – Maximum size of each backup piece
– Filesperset – The number of datafiles in each backupset
– Maxopenfiles – Maximum No. of files which can be read from simultaneously
– Multiplexing level
– Asynchronous / Synchronous I/O
– Large pool size
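Before tuning any of the parameters listed above, it is worth looking at the current persistent RMAN configuration and the large pool setting. This is just a quick check with standard commands; the values will of course differ in your environment:

RMAN> show all;

SQL> show parameter large_pool_size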
– SETUP –
As part of the setup, I will create 4 more tablespaces. I already have 5 tablespaces in my
database, so in all I will have 9 tablespaces.
sql>conn / as sysdba
create tablespace rman1
datafile 'C:\APP\ADMINISTRATOR\ORADATA\ORCL\rman1.dbf' size 100m;
create tablespace rman2
datafile 'C:\APP\ADMINISTRATOR\ORADATA\ORCL\rman2.dbf' size 100m;
create tablespace rman3
datafile 'C:\APP\ADMINISTRATOR\ORADATA\ORCL\rman3.dbf' size 100m;
create tablespace rman4
datafile 'C:\APP\ADMINISTRATOR\ORADATA\ORCL\rman4.dbf' size 100m;
create table hr.test1 tablespace rman1 as select * from hr.employees;
create table hr.test2 tablespace rman2 as select * from hr.employees;
create table hr.test3 tablespace rman3 as select * from hr.employees;
create table hr.test4 tablespace rman4 as select * from hr.employees;
Let us discuss each parameter one by one.
– PARALLELISM –

We can parallelize the backup/recovery operation by

– setting parallelism for the device
– allocating multiple channels and

  . letting RMAN decide which file to back up/restore using which channel
  . backing up/restoring specified files using specified channels

Each channel used will create its own backupset.

-- SET PARALLELISM FOR DEVICE –

RMAN>delete backup;
configure device type disk parallelism 3;
backup format 'c:\%U.bak' tablespace rman1, rman2, rman3;
list backup of tablespace rman1, rman2, rman3;
List of Datafiles in backup set 25

6       Full 1583594    12-SEP-12 C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN1.DBF

List of Datafiles in backup set 26

7       Full 1583595    12-SEP-12 C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN2.DBF

List of Datafiles in backup set 27

8       Full 1583596    12-SEP-12 C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN3.DBF
——————————————————————————————-
- Allocating multiple channels and letting RMAN decide which file to back up using which channel
———————————————————————————————-
— Note that RMAN decides to back up rman1 and rman3 using c1 and rman2 using c2
RMAN>configure device type disk parallelism 1;
delete backup;
RMAN>run{
allocate channel c1 device type disk format 'c:\%U.bak';
allocate channel c2 device type disk format 'c:\%U.bak';
backup tablespace rman1, rman2, rman3;
}

channel c1: specifying datafile(s) in backup set
input datafile file number=00006 name=C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN1.DBF
input datafile file number=00008 name=C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN3.DBF
channel c2: specifying datafile(s) in backup set
input datafile file number=00007 name=C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN2.DBF
-- Check that each channel created its own backupset

RMAN>list backup;
List of Datafiles in backup set 28
6       Full 1584178    12-SEP-12 C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN1.DBF
8       Full 1584178    12-SEP-12 C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN3.DBF

List of Datafiles in backup set 29
7       Full 1584179    12-SEP-12 C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN2.DBF
————————————————————————————–
- Allocating multiple channels and backing up specified files using specified channels
————————————————————————————–
— Note that RMAN backs up rman1 and rman2 using c1 and rman3 using c2, as specified
RMAN>delete backup;
run{
allocate channel c1 device type disk format 'c:\%U.bak';
allocate channel c2 device type disk format 'c:\%U.bak';
backup (tablespace rman1, rman2 channel c1)
(tablespace rman3 channel c2);
}

channel c1: specifying datafile(s) in backup set
input datafile file number=00006 name=C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN1.DB
input datafile file number=00007 name=C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN2.DB

channel c2: specifying datafile(s) in backup set
input datafile file number=00008 name=C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN3.DB

– Check that each channel created its own backupset
RMAN>list backup;
List of Datafiles in backup set 30
6       Full 1584550    12-SEP-12 C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN1.DBF
7       Full 1584550    12-SEP-12 C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN2.DBF
List of Datafiles in backup set 31
8       Full 1584551    12-SEP-12 C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN3.DBF
———————
— FILESPERSET –
———————
– This parameter decides how many files will be included in one backupset.
If more than one file is backed up and filesperset = 1, one backupset will be created for each file.

rman>delete backup;
rman>backup datafile 7,8 filesperset 1 format 'c:\%U.bak';

list backup ;
— Note that two backupsets are created with one datafile each
List of Backup Sets
===================
List of Datafiles in backup set 17

7       Full 1529096    11-SEP-12 C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN2.DBF
List of Datafiles in backup set 18

8       Full 1529098    11-SEP-12 C:\APP\ADMINISTRATOR\ORADATA\ORCL\RMAN3.DBF

Using this parameter we can reduce the recovery time by limiting the no. of files per
backupset, since to restore a particular datafile we will have to read a smaller backup file.
This is also beneficial in case the backup fails in the middle: next time, we can back up
only those datafiles which could not be successfully backed up, as sketched below.
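A hedged sketch of that resumption: the NOT BACKED UP SINCE TIME clause skips datafiles which already have a backup newer than the given time, so combined with filesperset 1 a failed run can be completed without repeating work (the one-day window is just an assumed example):

RMAN> backup filesperset 1 format 'c:\%U.bak'
      database not backed up since time 'sysdate-1';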
————————–
— MAXPIECESIZE –
————————–
– This parameter controls the maximum size a backup piece can have
– Even if we back up only one file, its backup will be split into multiple pieces if it exceeds this size

- Check the size of the datafiles.
– Note that the datafile for tablespace sysaux is 570m in size

rman>report schema;
File Size(MB) Tablespace           RB segs Datafile Name
—- ——– ——————– ——- ————————
1    690      SYSTEM               ***     C:\APP\ADMINISTRATOR\ORADATA\ORCL\SYSTEM01.DBF
2    570      SYSAUX               ***     C:\APP\ADMINISTRATOR\ORADATA\ORCL\SYSAUX01.DBF

– Backup datafile for sysaux tablespace so that each piece size <= 200M
— Note that backup is spread over 3 backup pieces

rman>delete backup;
rman>run{
allocate channel c1 device type disk maxpiecesize 200m format 'c:\%U.bak';
backup tablespace sysaux ;
}
list backup of tablespace sysaux;
List of Backup Pieces for backup set 21 Copy #1
BP Key  Pc# Status      Piece Name
——- — ———– ———-
21      1   AVAILABLE   C:\0NNKUKP0_1_1.BAK
22      2   AVAILABLE   C:\0NNKUKP0_2_1.BAK
23      3   AVAILABLE   C:\0NNKUKP0_3_1.BAK

This option can be used
– to respect the operating system limit on the file size,
– to split the backup of a datafile into multiple pieces if the backup does not fit on one
tape,
– if the backup is to be transferred over the network.
A persistent channel configuration for this is sketched below.
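Instead of allocating a channel manually each time, the piece-size limit can also be made persistent in the RMAN configuration. A minimal sketch, assuming a 2 GB limit chosen to suit the media:

RMAN> configure channel device type disk maxpiecesize 2g;
backup tablespace sysaux;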
————————–
— MAXOPENFILES -
————————-
— This parameter decides how many files can be backed up simultaneously
i.e. data from how many files can be read at a time.
— Default value = 8

– Take backup of 2 datafiles with maxopenfiles = 1 so that  level of multiplexing = 1 i.e.
. datafile 1 will be backed up first
. datafile 2 will be backed up next
so that data from 2 files will not be intermingled.
Rather first all data from datafile 1 will be written and
then data from datafile 2 will be written.

– While the backup is still going on, issue the following SQL query from another session
RMAN> run{
allocate channel c1 device type disk maxopenfiles 1;
backup datafile 1,2;
}

— Note the change in the input filename from system01.dbf to sysaux01.dbf between the two query outputs below
— Note that buffers are allocated to datafile 1 first;
when that has been backed up, buffers are allocated
to datafile 2
— Since multiplexing level = 1, a total of 16 buffers of 1M each is
allocated to each file, one file at a time

sql>col filename for a25 word_wrapped
select type, status, filename, buffer_size, buffer_count
from v$backup_async_io
where type <> 'AGGREGATE'
and status = 'IN PROGRESS';
TYPE      STATUS      FILENAME                  BUFFER_SIZE BUFFER_COUNT
——— ———– ————————- ———– ————
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD     1048576           16
ATA\ORCL\SYSTEM01.DBF
OUTPUT    IN PROGRESS C:\APP\ADMINISTRATOR\FLAS     1048576            4
H_RECOVERY_AREA\ORCL\BACK
UPSET\2012_09_11\O1_MF_NN
NDF_TAG20120911T152958_84
Y2WZ7B_.BKP

SQL> /
TYPE      STATUS      FILENAME                  BUFFER_SIZE BUFFER_COUNT
——— ———– ————————- ———– ————
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD     1048576           16
ATA\ORCL\SYSAUX01.DBF
OUTPUT    IN PROGRESS C:\APP\ADMINISTRATOR\FLAS     1048576            4
H_RECOVERY_AREA\ORCL\BACK
UPSET\2012_09_11\O1_MF_NN
NDF_TAG20120911T152958_84
Y2WZ7B_.BKP

– Take backup of 2 datafiles with maxopenfiles = 2 so that level of multiplexing = 2 i.e.
. datafiles 1 and 2 will be backed up simultaneously so that data from the 2 files will be intermingled.

– While the backup is still going on, issue the following SQL query from another session
RMAN> run{
allocate channel c1 device type disk maxopenfiles 2;
backup datafile 1,2;
}

— Note that buffers are allocated simultaneously to datafiles 1 and 2
— Since multiplexing level = 2 (< 4), a total of 16 buffers of 1M each is allocated.
Each file gets 8 buffers of 1M each

sql>col filename for a25 word_wrapped
select type, status, filename, buffer_size, buffer_count
from v$backup_async_io
where type <> 'AGGREGATE'
and status = 'IN PROGRESS';

TYPE      STATUS      FILENAME                  BUFFER_SIZE BUFFER_COUNT
——— ———– ————————- ———– ————
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD    1048576            8
ATA\ORCL\SYSTEM01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD     1048576            8
ATA\ORCL\SYSAUX01.DBF
OUTPUT    IN PROGRESS C:\APP\ADMINISTRATOR\FLAS     1048576            4
H_RECOVERY_AREA\ORCL\BACK
UPSET\2012_09_11\O1_MF_NN
NDF_TAG20120911T153330_84
Y33M40_.BKP
——————–
— MULTIPLEXING
——————–
Level of multiplexing means the no. of files whose data is intermingled
in the backup file. It depends on the following:
– # of files being backed up using one channel
– Filesperset
– Maxopenfiles
Level of multiplexing = Minimum of the above 3.
We increase the level of multiplexing so that data can be read simultaneously from multiple
files: even if the process has to wait to read from one of the files being backed up due to
contention, it can still read from the other files, so that the output tape drive gets a
continuous input stream. Otherwise, due to lack of input data, the tape drive might overshoot
before stopping and will have to come back to its earlier position when it gets the next
stream of data.
Thus, increasing the level of multiplexing may increase the speed of backup. On the other
hand, it can reduce the restore speed as data for a file has to be extracted from the
intermingled data.
Thus, one has to decide the level of multiplexing based on whether performance of the
backup or the recovery is more critical.
In case files being backed up are ASM files, level of multiplexing may be reduced as ASM
automatically takes care of the contention by striping the data.
Input buffers are allocated for the backup. The no. of buffers allocated per file depends
upon the level of multiplexing.
Multiplexing level        Allocation rule

Level <= 4           1 MB buffers are allocated so that the total buffer size for all
input files is 16 MB.
4 < Level <= 8       512 KB are allocated so that the total buffer size for all files is
less than 16 MB.
Level > 8            RMAN allocates four 128 KB disk buffers per channel for each file,
so that the total size is 512 KB per channel for each file.

For multiplexing level 1 and 2 we have already seen that 1 MB buffers are allocated so that
the total buffer size for all input files is 16 MB.
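Because the effective level is the minimum of the three values, filesperset and maxopenfiles can be combined to cap it no matter how many files the channel receives. The following is only a sketch (it was not run as part of this setup); here the channel gets 5 files, but the multiplexing level works out to min(5, 4, 2) = 2:

RMAN> run{
allocate channel c1 device type disk maxopenfiles 2;
backup filesperset 4 datafile 1,2,3,4,5;
}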
—————————————–
Let’s verify for multiplexing level of 5 .
—————————————–

RMAN>delete backup;
– Note that 512 KB buffers are allocated so that the total buffer size for all files is less than
16 MB.
— As the backup progresses, the buffers allocated to files which have already been backed up get
freed
RMAN> run{
allocate channel c1 device type disk maxopenfiles 5;
backup datafile 1,2,3,4,5;
}

sql>/
TYPE      STATUS      FILENAME                  BUFFER_SIZE BUFFER_COUNT
——— ———– ————————- ———– ————
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\SYSTEM01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\SYSAUX01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\UNDOTBS01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\EXAMPLE01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\USERS01.DBF

sql>/

TYPE      STATUS      FILENAME                  BUFFER_SIZE BUFFER_COUNT
——— ———– ————————- ———– ————
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\SYSTEM01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\SYSAUX01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\UNDOTBS01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\EXAMPLE01.DBF
sql>/

TYPE      STATUS      FILENAME                  BUFFER_SIZE BUFFER_COUNT
——— ———– ————————- ———– ————
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD     524288            6
ATA\ORCL\SYSTEM01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\SYSAUX01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\EXAMPLE01.DBF
sql>/

INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\SYSTEM01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\SYSAUX01.DBF
sql>/

TYPE      STATUS      FILENAME                  BUFFER_SIZE BUFFER_COUNT
——— ———– ————————- ———– ————
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      524288            6
ATA\ORCL\SYSTEM01.DBF

—————————————–
Let’s verify for multiplexing level of 9 (> 8) .
—————————————–

RMAN>delete backup;
RMAN> run{
allocate channel c1 device type disk maxopenfiles 9;
backup datafile 1,2,3,4,5,6,7,8,9;
}
— Note that buffers are allocated simultaneously to all the 9 datafiles so that data from all the files will be intermingled in the output file.
— Since multiplexing level = 9 ( Level > 8), RMAN allocates four 128 KB disk buffers per channel for each file, so that the total size is 512 KB per channel for each file.

sql>col filename for a25 word_wrapped
select type, status, filename, buffer_size, buffer_count
from v$backup_async_io
where type <> 'AGGREGATE'
and status = 'IN PROGRESS';
TYPE      STATUS      FILENAME                  BUFFER_SIZE BUFFER_COUNT
——— ———– ————————- ———– ————
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD     131072            4
ATA\ORCL\SYSTEM01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      131072            4
ATA\ORCL\SYSAUX01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      131072            4
ATA\ORCL\UNDOTBS01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      131072            4
ATA\ORCL\EXAMPLE01.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      131072            4
ATA\ORCL\RMAN1.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      131072            4
ATA\ORCL\RMAN2.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      131072            4
ATA\ORCL\RMAN3.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      131072            4
ATA\ORCL\RMAN4.DBF
INPUT     IN PROGRESS C:\APP\ADMINISTRATOR\ORAD      131072            4
ATA\ORCL\USERS01.DBF
Conclusion :

- We can parallelize the backup/recovery operation by
   – setting parallelism for the device
   – allocating multiple channels and
      . letting RMAN decide which file to back up/restore using which channel
      . backing up/restoring specified files using specified channels
- Filesperset
  . Using this parameter we can reduce the recovery time by limiting the no. of files per
backupset, since to restore a particular datafile we will have to restore a smaller file.
  . This is also beneficial in case the backup fails in the middle. Next time, we can back up
only those datafiles which could not be successfully backed up.
- Maxpiecesize
  This option can be used
   – to respect the operating system limit on the file size,
   – to split the backup of a datafile into multiple pieces if the backup does not fit on one tape,
   – if the backup is to be transferred over the network.
- Maxopenfiles
  This parameter decides how many files can be backed up simultaneously i.e. data from how many files can be read at a time.
- Multiplexing
  Level of multiplexing means the no. of files whose data is intermingled in the backup file. It depends on the following:
- # of files being backed up using one channel
- Filesperset
- Maxopenfiles
Level of multiplexing = Minimum of the above 3.
   – Increasing the level of multiplexing can effectively keep the tape streaming.
   – Increasing the level of multiplexing may increase the speed of backup. On the other
hand, it can reduce the restore speed as data for a file has to be extracted from the
intermingled data.
   – Thus, one has to decide the level of multiplexing based on whether performance of the
backup or the recovery is more critical.
   – In case files being backed up are ASM files, level of multiplexing may be reduced as ASM automatically takes care of the contention by striping the data.
- Input buffers are allocated for the backup. The no. of buffers allocated per file depends
upon the level of multiplexing.
Multiplexing level        Allocation rule
   Level <= 4           1 MB buffers are allocated so that the total buffer size for all
input files is 16 MB.
   4 < Level <= 8       512 KB are allocated so that the total buffer size for all files is
less than 16 MB.
   Level > 8            RMAN allocates four 128 KB disk buffers per channel for each file,
so that the total size is 512 KB per channel for each file.
- We can calculate the size of the buffers required by RMAN as follows:
    Multiply the total buffer bytes for each data file by the number of data files that are being concurrently accessed by the channel, and then multiply this number by the number of channels.
Assume that you use one channel to back up four data files, and use the settings that are shown above. In this case, multiply as follows to obtain the total size of the buffers that are allocated for the backup:
4 MB per data file × 1 channel × 4 data files per channel = 16 MB
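Applying the same allocation rule to the level-9 example shown earlier (four 128 KB buffers per file, one channel reading nine files) gives:
512 KB per data file × 9 data files per channel × 1 channel = 4.5 MB of input buffers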
In my next post,
Tuning RMAN Part-II, I will discuss the use of asynchronous I/O to improve the performance of RMAN backup/recovery.

Thanx for your time. Happy Tuning !!

References

http://docs.oracle.com/cd/B28359_01/backup.111/b28270/rcmcncpt.htm#BRADV89486
http://hemantoracledba.blogspot.in/2015/07/rman-5-useful-keywords-and-subclauses.html
http://hemantoracledba.blogspot.in/2015/07/rman-5b-more-useful-keywords-and.html
http://hemantoracledba.blogspot.in/2015/07/rman-5c-some-more-useful-keywords-and.html
Retention Policy and Control_file_record_keep_time
3 RMAN Parameters every DBA should know
Spreading RMAN Backups Across Multiple Disks
Triplexing RMAN Backups

———————————————————————————————————-


                                                ——————-

 

CLUSTERING FACTOR DEMYSTIFIED PART – II


In my last post, CLUSTERING FACTOR DEMYSTIFIED PART – I, I had explained what the clustering factor is, how and why it affects performance, and had demonstrated it with the help of an example. I had also mentioned various methods to resolve the problem of a high clustering factor.
Let’s explore the simplest one first, i.e. CTAS with order by:
Overview:
- Create another ‘intermediate’  table from ‘unorganized’ table using CTAS order by
- Create ‘reorganized’ table from ‘ intermediate’  table using CTAS
- Create index on id column of ‘reorganized’ table
- Trace  the same query and check that performance has improved
Implementation :
- Create another ‘intermediate’  table from ‘unorganized’ table using CTAS order by
SQL>create table intermediate as   select * from unorganized    order by id;

- Create ‘reorganized’ table from ‘ intermediate’  table using CTAS

SQL>create table reorganized as select * from intermediate;

drop table intermediate purge;

- Find out no. of blocks across which records of a key value are spread in the ‘reorganized’ table.
- Note that in ‘reorganized’  table,  records  are now clustered   i.e. rows for a key value are placed together in blocks rather than scattered across various blocks
SQL> select reorg.id, reorg.cnt reorganized_blocks
         from ( select id, count(distinct(dbms_rowid.ROWID_BLOCK_NUMBER(rowid))) cnt
                from reorganized
                group by id) reorg
         order by id;

ID        REORGANIZED_BLOCKS

     ———- ——————–
         1                    5
         2                    6
         3                    6
         4                    6
         ………
        98                    5
        99                    5
       100                   6
-- Create index on id column of ‘reorganized’  table and gather statistics for the table
SQL>  create index reorganized_idx on reorganized(id);

exec dbms_stats.gather_table_stats(USER, 'reorganized', estimate_percent => 100, method_opt=> 'for all indexed columns size 254');

– Check the index statistics

– Note that the index on table ‘reorganized‘ has a clustering factor (488) which is equal to the no. of blocks in the table i.e. to fetch all the records for various key values using the index, blocks need not be switched unless all the records in the earlier block have been fetched
SQL>set line 500
           col table_name for a15
           col index_name for a15 
          select blevel,  leaf_blocks, table_name, index_name, clustering_factor
         from user_indexes 
          where table_name like '%REORGANIZED%' 
         order by 1;
   BLEVEL    LEAF_BLOCKS       TABLE_NAME      INDEX_NAME      CLUSTERING_FACTOR
    ———- ———————– ——————- ——————      ————————–
         1           7                       REORGANIZED     REORGANIZED_IDX               488
- Trace  the same query on ‘reorganized’  table  and check that performance has improved
SQL> alter session set tracefile_identifier = 'cluster_factor'; 
           alter session set sql_trace=true; 
           select /*+ index(reorganized reorganized_idx) */ count(txt) from reorganized where id=id; 
           alter session set sql_trace=false;
– Find out the name of trace file generated
SQL> col trace_file for a100
           select  value trace_file from v$diag_info 
         where upper(name) like '%TRACE FILE%';
TRACE_FILE
—————————————————————————————————-
/u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_29116_cluster_factor.trc
– Run tkprof utility on the trace file generated 
$cd /u01/app/oracle/diag/rdbms/orcl/orcl/trace 
    tkprof orcl_ora_29116_cluster_factor.trc cluster_factor.out 
   vi cluster_factor.out
******************************************************************************
SQL ID: bmsnm1zh6audy
Plan Hash: 3677546845
select /*+ index(reorganized reorganized_idx) */ count(txt)
from
 reorganized where id=id
Rows     Row Source Operation
——-  —————————————————
      1  SORT AGGREGATE (cr=496 pr=328 pw=0 time=0 us)
   3400   TABLE ACCESS BY INDEX ROWID REORGANIZED (cr=496 pr=328 pw=0 time=42046 us cost=496 size=3073600 card=3400)
   3400    INDEX FULL SCAN REORGANIZED_IDX (cr=8 pr=0 pw=0 time=9945 us cost=8 size=0 card=3400)(object id 75125)
******************************************************************************
Note that :
Total no. of I/Os performed against the index on ‘reorganized’ table = 8 (cr=8 in the INDEX FULL SCAN REORGANIZED_IDX row source)
Total I/O’s performed by the query = 496 (cr = 496 in the TABLE ACCESS BY INDEX ROWID REORGANIZED row source)
Hence, no. of I/O’s made against the table = 496 – 8 = 488, which is equal to the clustering factor of the index.
We can verify the improvement in performance  by using autotrace on a query against the table:
SQL>set autotrace traceonly explain 
       select /*+ index(reorganized reorganized_idx) */ count(txt)  
         from      reorganized where id=id; 
          set autotrace off
————————————————————————————————
| Id  | Operation                    | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
————————————————————————————————
|   0 | SELECT STATEMENT             |                 |     1 |   904 |   496   (0)| 00:00:06 |
|   1 |  SORT AGGREGATE              |                 |     1 |   904 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| REORGANIZED     |  3400 |  3001K|   496   (0)| 00:00:06 |
|*  3 |    INDEX FULL SCAN           | REORGANIZED_IDX |  3400 |       |     8   (0)| 00:00:01 |
————————————————————————————————
Note that there is a cost of 8 for using the index for the REORGANIZED table and index – about 8 I/O’s against the index i.e. the query will hit one root block (1) and the leaf blocks (7).
Then the query will be doing 488 more I/Os against the table(= number of blocks in table), because the rows needed are all next to each other on a few database blocks, for a total cost of 496.
Hence, it can be seen that rows in a table can be physically resequenced by using CTAS with order by. Here I would like to point out that in this case, the table becomes unavailable while it is being recreated. Hence, if availability of the table can’t be compromised, this method is not preferable.
In my post Clustering Factor Demystified : Part – III, I have demonstrated the use of single table index clusters and hash clusters to improve the clustering factor of an unorganized table.
References:
http://www.praetoriate.com/t_op_sql_resequencing_clusters.htm

———————————————————————————————————



CLUSTERING FACTOR DEMYSTIFIED PART – I

In this post, I will explain what the clustering factor is, how and why it affects performance, and then demonstrate it with the help of an example.
What is  Clustering Factor?
The index clustering factor is a number which represents the degree to which the data in the table is synchronized with the entries in the index. It gives a rough measure of how many I/Os the database would perform if it were to read every row in that table via the index, in index order. If the rows of the table on disk are sorted in the same order as the index keys, the database will perform a minimum number of I/Os on the table to read the entire table via the index.
Let’s try to understand it by taking an analogy.  I want to locate all the books (records)  by (for)  an author (a key value)  in the library (table)  using the catalog card (index) .  If books (records) are arranged authorwise (sorted by key)  in racks (table blocks) , my catalog card (index)  will show me information like this :
Book1(Record1)    Rack1(Block1)
Book2(Record2)    Rack1(Block1)
Book3(Record3)    Rack1(Block1)
Bookn(Recordn)    Rack1(Block1)
It means that I need to visit only one rack (block), i.e. Rack1 (Block1), to get all the required books (records), provided all the books (records) fit in one rack (block). If one rack (block) is insufficient to hold all the books (records), then:
No. of racks (blocks) to be visited = number of distinct racks (blocks in rowids) listed in the card (index)
Consider a scenario where books (records) of the same author (key value) are scattered across various racks (blocks) . In the worst scenario i.e. each book is in a different rack, my catalog card (index)  will be  like this:
Book1(Record1)    Rack1(Block1)
Book2(Record2)    Rack2(Block2)
Book3(Record3)    Rack3(Block3)
Bookn(Recordn)    Rackn(Blockn)
Now, to get all the books (records) I will have to gather them from multiple racks (blocks) .
In the worst case,  no. of racks (blocks)  visited = no. of books  (records)
To calculate the clustering factor of an index during the gathering of index statistics, Oracle does the following :
For each entry in the index
(
   Compare the entry’s table rowid block with the block of the previous index entry.
If the block is different, Oracle increments the clustering factor by 1.
)
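The same walk can be approximated with plain SQL by ordering the rowids the way the index would and counting the block changes. This is only a rough illustration of the algorithm, using the ORGANIZED table and its ID column from the demo below as assumed names; the authoritative figure is the one computed by DBMS_STATS:

SQL> select count(*) approx_clustering_factor
     from (select dbms_rowid.rowid_relative_fno(rowid) fno,
                  dbms_rowid.rowid_block_number(rowid) blk,
                  lag(dbms_rowid.rowid_relative_fno(rowid)) over (order by id, rowid) prev_fno,
                  lag(dbms_rowid.rowid_block_number(rowid)) over (order by id, rowid) prev_blk
           from organized
           where id is not null)
     where prev_blk is null
        or blk <> prev_blk
        or fno <> prev_fno;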
The minimum possible clustering factor is equal to the number of distinct  blocks identified through the index’s list of rowid’s.  An index with a low clustering_factor is closely aligned with the table and related rows reside together inside each data block, making indexes very desirable for optimal access.
The maximum clustering factor is the number of entries in the index i.e. each rowid points to a different block in the table.  An index with a high clustering factor is out-of-sequence with the rows in the table and large index range scans will consume lots of I/O.
How to find the clustering factor of an index 
Oracle provides a column called clustering_factor in the dba_indexes view that provides information on how the table rows are synchronized with the index. A low value is desirable.
Let’s demonstrate the concept:
- Create a table organized which contains two columns  –  id(number) and txt (char)
- Populate the table: insert 34 records (rownum < 35) for each value of id, where id ranges from 1 to 100
- In this case as records are added sequentially,  records for a key value are stored together
SQL> drop table organized purge; 
            create table organized (id number, txt char(900));

           begin
           for i in 1..100 loop
               insert into organized select i, lpad('x', 900, 'x')
               from    dba_objects where rownum < 35;
           end loop;
          end;
          /
-  create another table unorganized which is a replica of the ‘organized’ table but records are inserted in a random manner so that records for a key value may be scattered across different blocks (order by dbms_random.random).
SQL> drop table unorganized purge;
           create table unorganized as select * from organized order by dbms_random.random;
- Find out no. of blocks across which records of a key value are spread in the two tables.
- Note that in the ‘unorganized’ table, records are scattered, whereas in the ‘organized’ table, records are clustered
SQL> select org.id,  org.cnt organized_blocks , unorg.cnt unorganized_blocks
         from
            (select id, count(distinct(dbms_rowid.ROWID_BLOCK_NUMBER(rowid))) cnt
              from organized
              group by id)org ,
           ( select id,  count(distinct(dbms_rowid.ROWID_BLOCK_NUMBER(rowid))) cnt
             from unorganized
              group by id) unorg
        where org.id = unorg.id
        order by id;
     ID    ORGANIZED_BLOCKS UNORGANIZED_BLOCKS
———- —————- ——————
         1                5                 34
         2                6                 34
         3                6                 32
                      …..
        98                5                 32
        99                5                 34
       100               6                 33
-- Create index on id column on both the tables and gather statistics for both the tables
create index organized_idx on organized(id);
           create index unorganized_idx on unorganized(id);
          exec dbms_stats.gather_table_stats(USER, 'organized', estimate_percent => 100, method_opt=> 'for all indexed columns size 254');
        exec dbms_stats.gather_table_stats(USER, 'unorganized', estimate_percent => 100, method_opt=> 'for all indexed columns size 254');
– Check that both the tables contain the same no. of rows and almost the same no. of blocks.
SQL> select table_name, blocks, num_rows
           from user_tables
           where table_name like '%ORGANIZED'
           order by 1;
TABLE_NAME                         BLOCKS   NUM_ROWS
——————————      ———- ———-
ORGANIZED                                 488       3400
UNORGANIZED                           486       3400
– Check the index statistics
– Note that the index on table ‘organized‘ has a clustering factor (488) which is equal to the no. of blocks in the table i.e.  to fetch all the records for various key values using index,  blocks need not be switched unless all the records in the earlier block have been fetched
– Note that the index on table ‘unorganized‘ has a clustering factor (3299) which is close to the no. of rows   in the table (3400) i.e. almost every record for a key  value is in a different block and  to fetch all the records for a  key values using index,  block  needs to be switched almost for every record
SQL>set line 500
          col table_name for a15
          col index_name for a15
          select blevel,  leaf_blocks, table_name, index_name, clustering_factor
          from user_indexes
          where table_name like '%ORGANIZED%'
          order by 1;
    BLEVEL LEAF_BLOCKS TABLE_NAME      INDEX_NAME      CLUSTERING_FACTOR
———- ———– ————— ————— —————–
         1           7 ORGANIZED              ORGANIZED_IDX                 488
         1           7 UNORGANIZED     UNORGANIZED_IDX              3299
The  clustering factor is a measure of the no. of I/Os the database will perform against the table in order to read every row via the index. We can verify this fact by 
 executing a query with tracing enabled that will, in fact, read every row of the table via the index. We’ll do that by using an index hint to force the optimizer to use the index and count the non-null occurrences of a nullable column (text) that is not in the index. That will force the database to go from index to table for every single row:
SQL> alter session set tracefile_identifier = 'cluster_factor';
           alter session set sql_trace=true;
           select /*+ index(organized organized_idx) */ count(txt) from organized where id=id;
           select /*+ index(unorganized unorganized_idx) */ count(txt) from unorganized where id=id;
           alter session set sql_trace=false;
– Find out the name of trace file generated
SQL>col trace_file for a100
           select  value trace_file from v$diag_info
           where upper(name) like '%TRACE FILE%';
TRACE_FILE
—————————————————————————————————-
/u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_29116_cluster_factor.trc
– Run tkprof utility on the trace file generated 
cd /u01/app/oracle/diag/rdbms/orcl/orcl/trace
   tkprof orcl_ora_29116_cluster_factor.trc cluster_factor.out
   vi cluster_factor.out
– let’s analyze the tkprof output 
********************************************************************************
SQL ID: c3r8241qas3rn
Plan Hash: 2296712572
select /*+ index(organized organized_idx) */ count(txt)
from
 organized where id=id
Rows     Row Source Operation
——-  —————————————————
      1  SORT AGGREGATE (cr=496 pr=0 pw=0 time=0 us)
   3400   TABLE ACCESS BY INDEX ROWID ORGANIZED (cr=496 pr=0 pw=0 time=37892 us cost=496 size=3073600 card=3400)
   3400    INDEX FULL SCAN ORGANIZED_IDX (cr=8 pr=0 pw=0 time=9189 us cost=8 size=0 card=3400)(object id 75116)
********************************************************************************
As you can see
Total no. of I/Os performed against the index on ‘organized’ table = 8 (cr=8 in the INDEX FULL SCAN ORGANIZED_IDX row source)
Total I/O’s performed by the query = 496 (cr = 496 in the TABLE ACCESS BY INDEX ROWID ORGANIZED)
Hence, no. of I/O’s made against the table = 496 – 8 = 488, which is equal to the clustering factor of the index
********************************************************************************
SQL ID: d979q0sqdbabb
Plan Hash: 3317439903
select /*+ index(unorganized unorganized_idx) */ count(txt)
from
 unorganized where id=id
Rows     Row Source Operation
——-  —————————————————
      1  SORT AGGREGATE (cr=3307 pr=272 pw=0 time=0 us)
   3400   TABLE ACCESS BY INDEX ROWID UNORGANIZED (cr=3307 pr=272 pw=0 time=40536 us cost=3308 size=3073600 card=3400)
   3400    INDEX FULL SCAN UNORGANIZED_IDX (cr=8 pr=0 pw=0 time=9693 us cost=8 size=0 card=3400)(object id 75117)
********************************************************************************
Similarly, if I do the same analysis on the UNORGANIZED index,
Total no. of I/Os performed against the index on ‘unorganized’ table = 8 (cr=8 in the INDEX FULL SCAN UNORGANIZED_IDX row source)
Total I/O’s performed by the query = 3307 (cr = 3307 in the TABLE ACCESS BY INDEX ROWID UNORGANIZED row source)
Hence, no. of I/O’s made against the table = 3307 – 8 = 3299, which is equal to the clustering factor of the index
So, for one table, the database performs 496 total I/O’s to retrieve exactly the same data as for the other table—which needed 3307 I/O’s.
Obviously, one of these indexes is going to be more useful for retrieving a larger number of rows than the other.
We can verify this  by using autotrace on a query against both  the tables:
SQL>set autotrace traceonly explain
          select /*+ index(organized organized_idx) */ count(txt)
          from  organized where id=id;
          set autotrace off
———————————————————————————————-
| Id  | Operation                                    | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
———————————————————————————————-
|   0 | SELECT STATEMENT             |               |     1 |   904 |   496   (0)| 00:00:06 |
|   1 |  SORT AGGREGATE                |               |     1 |   904 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| ORGANIZED     |  3400 |  3001K|   496   (0)| 00:00:06 |
|*  3 |    INDEX FULL SCAN           | ORGANIZED_IDX |  3400 |       |    8   (0)| 00:00:01 |
———————————————————————————————-
Note that there is a cost of 8 for using the index for the ORGANIZED table and index—about 8 I/O’s against the index i.e. the query will hit one root block (1) and the leaf blocks (7).
Then the query will be doing 488 more I/Os against the table(= number of blocks in table), because the rows needed are all next to each other on a few database blocks, for a total cost of 496.
SQL>set autotrace traceonly explain
         select /*+ index(unorganized unorganized_idx) */ count(txt)
         from      unorganized where id=id;
         set autotrace off;
————————————————————————————————–
| Id  | Operation                    | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
————————————————————————————————
|   0 | SELECT STATEMENT             |                 |     1 |   904 |  3308   (1)| 00:00:40 |
|   1 |  SORT AGGREGATE              |                 |     1 |   904 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| UNORGANIZED     |  3400 |  3001K|  3308   (1)| 00:00:40 |
|*  3 |    INDEX FULL SCAN           | UNORGANIZED_IDX |  3400 |       |     8   (0)| 00:00:01 |
————————————————————————————————
In the case of the UNORGANIZED index, the plan still has the same 8 I/Os against the index. But because the rows needed from the table are not next to each other, the optimizer estimates that the query will have to switch table blocks for almost every row it retrieves, and its estimated cost for the 3400 rows is 3299 (the clustering factor) + 8 I/Os against the index, which comes to 3307 (almost equal to the listed cost of 3308).
As you can see, both the plans expect to return the same number of rows: 1. Both the plans are using an index full scan. But the two plans have radically different costs: one has a low cost of 496 and the other a much higher cost of 3308—even though both plans are going after exactly the same set of rows from two tables that contain the same data!
The reason for this observation is quite clear: the data in the ‘unorganized’ table is not sorted in the same fashion as the data in the index. This leads to an increase in the clustering factor, and the database must do reads and re-reads of the table, thereby increasing the cost. With the ‘organized’ table, the table data and the index data were sorted identically.
How to resolve the performance issues due to high clustering factor?
To improve the CF, the table must be rebuilt (and reordered). The data retrieval can be considerably speeded up by physically sequencing the rows in the same order as the key column. If we can group together the rows for a key value, we can get all of those rows with a single block read because the rows are together. To achieve this goal, various methods may be used:
    . Single table index clusters : cluster related rows together onto the same data block
    . Manual row re-sequencing (CTAS with order by) : pre-orders data to avoid expensive disk sorts after retrieval
    . Using dbms_redefinition, for an in-place table reorganization (a hedged sketch of this approach follows below)
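Of the three, the dbms_redefinition route reorganizes the table while it stays online. A minimal sketch, assuming an interim table named UNORGANIZED_INT and relying on the parameter defaults of the DBMS_REDEFINITION package (verify both against the documentation for your release before using this):

SQL> create table unorganized_int as select * from unorganized where 1=2;

     declare
        l_errors pls_integer;
     begin
        dbms_redefinition.can_redef_table(user, 'UNORGANIZED', dbms_redefinition.cons_use_rowid);
        dbms_redefinition.start_redef_table(user, 'UNORGANIZED', 'UNORGANIZED_INT',
                                            options_flag => dbms_redefinition.cons_use_rowid,
                                            orderby_cols => 'ID');
        dbms_redefinition.copy_table_dependents(user, 'UNORGANIZED', 'UNORGANIZED_INT',
                                                num_errors => l_errors);
        dbms_redefinition.finish_redef_table(user, 'UNORGANIZED', 'UNORGANIZED_INT');
     end;
     /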
In my post Clustering Factor Demystified : Part – III, I have demonstrated the use of single table index clusters and hash clusters to improve the clustering factor of an unorganized table.
Note :
- Rebuilding of index cannot  improve the Clustering Factor.
- If the table has multiple indexes, careful consideration needs to be given to which index the table should be ordered by.
- Row re-sequencing does not help queries that perform full-scans or index unique scans, and careful attention must be given to the nature of the queries against the target table.
- The degree to which resequencing improves the performance depends on how far out of sequence the rows are when you begin and how many rows you will be accessing in sequence.
Summary:
- A good Clustering Factor is equal (or near) to the number of blocks in the table.
- A bad Clustering Factor is equal (or near) to the number of rows in the table.
– A low clustering factor is good and reflects strong clustering.
– A high clustering factor is bad and reflects weak clustering.
– The clustering factor may be lower than the number of blocks if there are empty blocks in the table below HWM and/or there are many rows that have null values for the indexed column(s).
– The clustering factor can never be greater than the no. of rows in the table.
In my next post, I will demonstrate how to use CTAS with order by to resequence the rows physically in the table.
Thanx for your time! I hope you found the post useful. Your comments and suggestions are always welcome!

 


ADD INSTANCE MANUALLY

We can add an instance to a cluster database by 3 methods :
- Enterprise Manager
- DBCA
- Manually
  In this post I will demonstrate the method to add an instance manually to a RAC database.
Current scenario
———————-
Total no. of nodes in the cluster : 3
Names of nodes                    : host01, host02, host03
Name of RAC database              : orcl
Instances of orcl database        : orcl1, orcl2
Nodes hosting orcl instances      : host01, host02
Now I want to add another instance orcl3 of orcl database on host03 manually.
Following are the steps which need to be taken:
- Login to the orcl database in SQL*Plus as sysdba on one of the existing nodes, say host01
- Create undo tablespace and redo log groups for instance 3
SQL>create undo tablespace undotbs3 datafile '+DATA';
alter database add logfile thread 3;
alter database add logfile thread 3;
– Set various parameters for the new instance –
SQL>alter system set instance_number = 3         scope=spfile sid='orcl3';
alter system set instance_name = 'orcl3'     scope=spfile sid='orcl3';
alter system set thread = 3                         scope=spfile sid='orcl3';
alter system set undo_tablespace=undotbs3 scope=spfile sid='orcl3';
alter database enable thread 3;
– Stop all instances of the database orcl
$srvctl stop database -d orcl
– Restart database orcl so that new parameters in spfile are read
$srvctl start database -d orcl
—– Add the instance to the database –
$srvctl add instance -d orcl -i orcl3 -n host03
– Start the instance –
$srvctl start instance -d orcl -i orcl3
– Check that all the instances (including the newly added orcl3) are running –
$srvctl status database -d orcl
– Copy the following entry for orcl in tnsnames.ora from host01 to tnsnames.ora on host03 -
ORCL =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = cluster01-scan.cluster01.example.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )
– Copy the password file from host01 to host03
host01$scp $ORACLE_HOME/dbs/orapwdorcl1
host03:/u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwdorcl3
— Test that remote connection can be made from host03 –
host03$sqlplus sys/oracle@orcl as sysdba 

 SQL>sho parameter db_name
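As an extra check, the instance-to-node mapping can also be confirmed from inside the database using the GV$INSTANCE view:

SQL>select inst_id, instance_name, host_name, status
    from gv$instance
    order by inst_id;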
I hope this article was useful. Your comments and suggestions are always welcome!
———————————————————————————————-

                                                     ——————-

MIGRATION OF A RAC DATABASE

In this post, I will demonstrate how we can transport a RAC database from one cluster to another
(e.g. from production to test environment).
Current scenario:
- Database to be converted : orcl
– Target cluster name : cluster01.example.com
– Target cluster SCAN name : cluster01-scan.cluster01.example.com
– No. of nodes in the target cluster : host01, host02, host03
– Name of the database after conversion : newdb
Overview:
———————-
On the source Server
———————-
- Check prerequisites
– Start the database in READ ONLY mode
– Verify the database is ready for migration
– Make a folder to hold converted datafiles
– Use RMAN Convert command to create
. Transport script
. Initialization parameter file
. Converted Datafiles
———————
On the target server

– Create appropriate folders
– Transfer above files to the proper location on the target system.
– Edit initialization parameter file.
– Edit the transport script transport.sql
– Make an entry for the database in /etc/oratab
– Run transport script to create a single instance database on the target server.
– Register the database with clusterware
– Put the database in archivelog mode if not already
– Use rconfig to convert the single instance database to RAC database
– Create entries for the database in tnsnames.ora

Implementation:

———————-
On the source Server
———————

– Check prerequisites

– check the platform name of current database —

SQL>SELECT PLATFORM_NAME
FROM  V$DATABASE;

PLATFORM_NAME
——————————
Linux IA (32-bit)

— Start the database in READ ONLY mode

[oracle@host01]$srvctl stop database -d orcl

SQL> startup mount;
alter database open read only;

— Verify the database is ready for migration

SQL>   SET  SERVEROUTPUT ON;
DECLARE
db_ready BOOLEAN;
BEGIN
db_ready :=
DBMS_TDB.CHECK_DB('Linux IA (32-bit)',DBMS_TDB.SKIP_READONLY);
if db_ready
then
dbms_output.put_line('YES your database can be transported. ');
else
dbms_output.put_line('NO your database can not be transported. ');
end if;
end;
/

– Make a folder to hold converted datafiles

[oracle@host01]$mkdir /home/oracle/convertdb

- Use RMAN Convert Database command to create
. Transport script
. Initialization parameter file
. Converted Datafiles

The CONVERT DATABASE command specified in the example creates
– a transport script named /home/oracle/convertdb/transport.sql,
– a PFILE initnewdb.ora in the /home/oracle/convertdb/newdb directory, and
– a copy of all data files in the /home/oracle/convertdb/newdb directory in the format
of the target platform ‘Linux IA (32-bit)’

RMAN>CONVERT DATABASE
NEW DATABASE 'newdb'
TRANSPORT SCRIPT '/home/oracle/convertdb/transport.sql'
TO PLATFORM 'Linux IA (32-bit)'
FORMAT '/home/oracle/convertdb/newdb/%U';

Starting conversion at source at 23-FEB-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=42 instance=orcl1 device type=DISK

External table SH.SALES_TRANSACTIONS_EXT found in the database

Directory SYS.SS_OE_XMLDIR found in the database
Directory SYS.DATA_PUMP_DIR found in the database
Directory SYS.SUBDIR found in the database
Directory SYS.MEDIA_DIR found in the database
Directory SYS.LOG_FILE_DIR found in the database
Directory SYS.DATA_FILE_DIR found in the database
Directory SYS.XMLDIR found in the database
Directory SYS.ORACLE_OCM_CONFIG_DIR found in the database

BFILE PM.PRINT_MEDIA found in the database

User SYS with SYSDBA and SYSOPER privilege found in password file
channel ORA_DISK_1: starting datafile conversion
input datafile file number=00001 name=+DATA/orcl/datafile/system.256.808138599
converted datafile=/home/oracle/convertdb/newdb/data_D-ORCL_I-1335919524_TS-SYSTEM_FNO-1_16o2n0vb
.
.
.
.
converted datafile=/home/oracle/convertdb/newdb/data_D-ORCL_I-1335919524_TS-USERS_FNO-4_1co2n13i
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:03

Edit init.ora file /home/oracle/convertdb/newdb/init_00o2n0vb_1_0.ora. This PFILE will be used to

create the database on the target platform

Run SQL script /home/oracle/convertdb/transport.sql on the target platform to create database

To recompile all PL/SQL modules, run utlirp.sql and utlrp.sql on the target platform
To change the internal database identifier, use DBNEWID Utility
Finished conversion at source at 23-FEB-13
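The convert log above points to the DBNEWID utility in case a new internal database identifier (DBID) is wanted for the transported database later on. The following is only a rough outline of such a run (the utility prompts for confirmation and the exact steps vary by release); the database must be mounted, and it has to be opened with RESETLOGS afterwards:

SQL> shutdown immediate
     startup mount

$ nid TARGET=SYS

SQL> startup mount
     alter database open resetlogs;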

———————
On the target server
———————

- Create appropriate folders

[oracle@host01]$mkdir -p /home/oracle/convertdb/newdb
mkdir -p /u01/app/oracle/admin/newdb/adump

- Transfer above files to the proper location on the target system
. transport.sql to /home/oracle/convertdb folder
. init….ora  and converted datafiles to /home/oracle/convertdb/newdb folder

Edit initialization parameter file

[oracle@host01]$vi  /home/oracle/convertdb/newdb/init_00o2n0vb_1_0.ora

# Please change the values of the following parameters:

# db_create_file_dest      = "/home/oracle/convertdb/newdb/+DATA"
db_create_file_dest      = "+DATA"

#  db_recovery_file_dest    = "/home/oracle/convertdb/newdb/+FRA"
db_recovery_file_dest    = "+FRA"

db_recovery_file_dest_size= 4039114752

# remote_listener          = "NEWDBcluster01-scan.cluster01.example.com:1521"
remote_listener          = "cluster01-scan.cluster01.example.com:1521"

#  cluster_database         = TRUE
cluster_database         = FALSE

#  audit_file_dest          = "/home/oracle/convertdb/newdb/adump"
audit_file_dest          = "/u01/app/oracle/admin/newdb/adump"
– Edit the transport script transport.sql

[oracle@host01]$vi  /home/oracle/convertdb/transport.sql

– Delete the following lines, which create log files for redo threads 2 and 3 (redo threads for the other instances will be added again later by rconfig)

— Create log files for threads other than thread one.

SQL>ALTER DATABASE ADD LOGFILE THREAD 2
GROUP 3 SIZE 50M BLOCKSIZE 512 REUSE,
GROUP 4 SIZE 50M BLOCKSIZE 512 REUSE;
ALTER DATABASE ADD LOGFILE THREAD 3
GROUP 5 SIZE 50M BLOCKSIZE 512 REUSE,
GROUP 6 SIZE 50M BLOCKSIZE 512 REUSE;

- Make following entry for the database in /etc/oratab

newdb:/u01/app/oracle/product/11.2.0/db_1:N

- Run transport script to create a single instance database on the target server

[oracle@host01]$. oraenv - newdb

sqlplus  / as sysdba

SQL>@/home/oracle/convertdb/transport.sql

– check that database has been transported —

[oracle@host01]$. oraenv - newdb

sqlplus  / as sysdba

SQL>startup

select * from hr.employees;

- Register the database with clusterware

– Check the database is currently not registered with clusterware

[oracle@host01 ~]$ srvctl status database -d newdb

PRCD-1120 : The resource for database newdb could not be found.
PRCR-1001 : Resource ora.newdb.db does not exist

[root@host01 trace]# crsctl stat res -t |grep newdb

— Register the database

[oracle@host01 ~]$ srvctl add database -d newdb -o $ORACLE_HOME -p /u01/app/oracle/product/11.2.0/dbhome_1/dbs/spfilenewdb.ora -a DATA,FRA

– Check the database is now registered with clusterware

[root@host01 trace]# crsctl stat res -t |grep newdb

ora.newdb.db

— Start the database (it is still a single-instance database at this point)

[oracle@host01 ~]$ srvctl start database -d newdb

– Put the database in archivelog mode if not already
(needed to convert it to a RAC database)

SQL> shu immediate;
startup mount;
alter database archivelog;
alter database open;

- Use rconfig to convert the single instance newdb database to a RAC database running
on three nodes host01, host02 and host03 with

. Datafiles on +DATA diskgroup
. Flash recovery area on +FRA diskgroup

— copy ConvertToRAC_AdminManaged.xml to another file my.xml

[oracle@host01]$cd $ORACLE_HOME/assistants/rconfig/sampleXMLs

cp ConvertToRAC_AdminManaged.xml my.xml

— Edit my.xml and make following changes :

. Specify current OracleHome of non-rac database for SourceDBHome

. Specify OracleHome where the rac database should be configured. It can be same as

SourceDBHome

. Specify SID of non-rac database and credential. User with sysdba role is required to perform

conversion

. Specify the list of nodes that should have rac instances running for the Admin Managed

Cluster Database. LocalNode should be the first node in this nodelist.

. Instance Prefix tag is optional starting with 11.2. If left empty, it is derived from

db_unique_name

. Specify the type of storage to be used by rac database. Allowable values are CFS|ASM

. Specify Database Area Location to be configured for rac database.

. Specify Flash Recovery Area to be configured for rac database.

— Run rconfig to convert newdb from a single-instance database to a 3-instance RAC database

host01$rconfig my.xml

Converting Database “NEWDB” to Cluster Database. Target Oracle Home:

/u01/app/oracle/product/11.2.0/dbhome_1. Database Role: PRIMARY.
Setting Data Files and Control Files
Adding Database Instances
Adding Redo Logs
Enabling threads for all Database Instances
Setting TEMP tablespace
Adding UNDO tablespaces
Adding Trace files
Setting Flash Recovery Area
Updating Oratab
Creating Password file(s)
Configuring Listeners
Configuring related CRS resources
Starting Cluster Database

Operation Succeeded

<Oracle_Home>
/u01/app/oracle/product/11.2.0/dbhome_1

— Check the log file for rconfig while conversion is going on

oracle@host01$ls -lrt  $ORACLE_BASE/cfgtoollogs/rconfig/*.log

              tailf <logfilename>

— check that the database has been converted successfully

host01$srvctl status database -d newdb

Instance newdb1 is running on node host01
Instance newdb2 is running on node host02
Instance newdb3 is running on node host03

[root@host01 trace]# srvctl config database -d newdb

Database unique name: NEWDB
Database name: NEWDB
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/spfilenewdb.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: NEWDB
Database instances: newdb1,newdb2,newdb3
Disk Groups: DATA,FRA
Services:
Database is administrator managed

[root@host01 trace]# crsctl stat res -t

ora.newdb.db
1        ONLINE  ONLINE       host01                   Open
2        ONLINE  ONLINE       host02                   Open
3        ONLINE  ONLINE       host03                   Open

- Create entries for the database in tnsnames.ora

[oracle@host01]$vi $ORACLE_HOME/network/admin/tnsnames.ora

— Following is the entry I copied and added :

NEWDB =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = cluster01-scan.cluster01.example.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = newdb)
)
)

– check that the database can be connected to remotely from a remote node.

host02$sqlplus system/oracle@newdb

— check that the datafiles are now stored in ASM

SQL>select name from v$datafile;

NAME
——————————————————————————–
+DATA/newdb/datafile/system.276.808311515
+DATA/newdb/datafile/sysaux.275.808311589
+DATA/newdb/datafile/undotbs1.272.808311655
+DATA/newdb/datafile/users.282.808311747
+DATA/newdb/datafile/example.273.808311721
+DATA/newdb/datafile/undotbs2.280.808311735
+DATA/newdb/datafile/undotbs3.281.808311743

I hope that you found this post useful. Your comments and suggestions are always welcome.

——————————————————————————————————-



 

TRACING SERVICES IN RAC

In this post, I will demonstrate how we can trace all the sessions that connect to a RAC database using a service. In this case, the trace files of the different sessions will be generated on the servers hosting the instances they are connected to. Hence, the approach is: enable tracing for the service, collect the
trace files generated on the various instances, combine them, and then run tkprof on the resulting file.

 

CURRENT SCENARIO:
Name of the admin managed RAC database : amdb
No. of nodes : 3
Host names   : host01, host02, host03
- Create a service amdbs with
  . preferred nodes : host01 and host02
  . available nodes : host03
  (a hedged sketch of the srvctl commands for this follows below)
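A sketch of those commands, assuming the instance names follow the usual amdb1/amdb2/amdb3 pattern (amdb1 and amdb3 do appear in the output further down):

$ srvctl add service -d amdb -s amdbs -r amdb1,amdb2 -a amdb3
$ srvctl start service -d amdb -s amdbs
$ srvctl status service -d amdb -s amdbs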
– enable tracing for sessions connected using amdbs service
SYS> exec dbms_monitor.SERV_MOD_ACT_TRACE_ENABLE('amdbs');
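The call above relies on the defaults for everything except the service name; dbms_monitor can also be asked to include wait events and bind values in the trace. The parameter names below are written from memory, so verify them against your release:

SYS> exec dbms_monitor.serv_mod_act_trace_enable(service_name => 'amdbs', waits => TRUE, binds => TRUE);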

 

– create 2 sessions using amdbs
[oracle@host01 ~]$ sqlplus system/oracle@amdbs

 

– Issue two different queries in two sessions –
SQL1>select * from hr.employees;
SQL2>select * from scott.emp;

 

– After queries have completed, disable tracing
SQL>  exec dbms_monitor.SERV_MOD_ACT_TRACE_disable('amdbs');

 

– Find out the instance names of the two sessions which issued
the queries. Trace files will be created on the corresponding servers
SQL1> sho parameter instance_name
NAME                                 TYPE        VALUE
———————————— ———– ——————————
instance_name                        string      amdb3
SQL2> sho parameter instance_name
NAME                                 TYPE        VALUE
———————————— ———– ——————————
instance_name                        string      amdb1

 

– Run trcsess on both the nodes to get the corresponding files containing   statements executed by sessions connected using service amdbs
[oracle@host01 trace]$ cd /u01/app/oracle/diag/rdbms/amdb/amdb1/trace
trcsess output=amdbhost01.trc service=amdbs *.trc
[oracle@host03 trace]$ cd /u01/app/oracle/diag/rdbms/amdb/amdb3/trace
trcsess output=amdbhost03.trc service=amdbs *.trc

 

– combine trcsess output files from both nodes –

[oracle@host03 trace]$scp amdbhost03.trc host01:/u01/app/oracle/diag/rdbms/amdb/amdb1/trace/

[oracle@host01 trace]$ cat amdbhost01.trc amdbhost03.trc > amdbhost.trc

[oracle@host01 trace]$ tkprof amdbhost.trc amdbhost.out

 

– check that the tkprof output file amdbhost.out contains the statements issued by both the sessions connected using amdbs (search for hr.employees and scott.emp)
 [oracle@host01 trace]$vi amdbhost.out
—————————————————————————————————————————-
