Monthly Archives: December 2012

DIRECT READ ENHANCEMENTS IN 11G

 
In this post, I will discuss how direct read behaviour can be influenced in 11g by means of some undocumented parameters. But before coming to that, a few words about direct reads.
Whenever a server process needs some data blocks and does not find them in the buffer cache, it reads them from disk into memory. Blocks can be read into either:
- The buffer cache (buffered read), in which case:
  . latches must be obtained in the buffer cache
  . some existing blocks in the buffer cache may need to be aged out to make space for the new blocks
  . the blocks can also be accessed by other server processes
- The PGA of the server process (direct read), in which case:
  . the blocks are private to the reading process and cannot be accessed by other server processes
  . no latches need to be obtained, so the operation is faster
  . no blocks in the buffer cache need to be aged out to make space for the new blocks
Hence, direct reads may be used when:
  . the data blocks being read will not be needed by any other process
  . we don't want a large number of blocks in the buffer cache to be aged out
The mechanism for direct path reads has been modified in 11g as compared to 10g.
In 10g:
 – All parallel queries performed direct reads irrespective of the size of the data, on the assumption that parallel queries are normally used in data warehouses where the amount of data read is large.
 – All serial queries were executed using buffered reads, on the assumption that the amount of data read is small.
Implications:
1. A parallel query fetching even a small amount of data bypassed the buffer cache, so any other server process needing the same data had to go to disk.
2. A serial query fetching a massive amount of data could age a lot of data out of the buffer cache.
In 11g we can modify this behaviour using some undocumented parameters:
1. We can set a threshold on the number of blocks read (_small_table_threshold) so that even parallel queries fetching less data than the threshold perform buffered I/O; the fetched data then sits in the buffer cache where other server processes can access it too.
2. For serial queries, we can enable direct I/O (_serial_direct_read) for all full table scans irrespective of the amount of data read, so that full table scans (FTS) read the data into the PGA and data in the buffer cache is not aged out.
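By default, _small_table_threshold is sized at about 2% of the buffer cache, expressed in blocks. As a rough cross-check (my own addition, not part of the original demo, and assuming an 8 KB block size), you can estimate that default from V$SGAINFO:
-- Approximate default _small_table_threshold: ~2% of the buffer cache, in blocks
SELECT ROUND(0.02 * bytes / 8192) "Approx threshold (blocks)"
FROM   v$sgainfo
WHERE  name = 'Buffer Cache Size';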
                  PRACTICAL IMPLEMENTATION
OVERVIEW:
—————
SCENARIO – I
- Set
 _serial_direct_read = FALSE (Default) i.e. disable direct I/O for serial queries and
 _small_table_threshold = 10 (Default value  = 2% of the buffer cache)
so that
    . parallel queries reading less than 10 blocks of data will perform buffered I/O
    . parallel queries reading more than 10 blocks of data will perform direct I/O
    . serial queries always perform buffered I/O whether FTS or otherwise
———————-
SCENARIO – II
- Set _serial_direct_read = TRUE  i.e. enable direct I/O for serial queries
so that
    . serial FTS perform direct I/O
    . serial queries not performing FTS perform buffered I/O
——————————————————————————————
———————–  SCENARIO – I  ————————-
– Use the following query to find the initial values of
   . _serial_direct_read
   . _small_table_threshold
– Enter the name of the parameter when prompted
SET linesize 235
col Parameter FOR a50
col SESSION FOR a28
col Instance FOR a55
col S FOR a1
col I FOR a1
col D FOR a1
col Description FOR a90
SELECT
  a.ksppinm  "Parameter",
  decode(p.isses_modifiable,'FALSE',NULL,NULL,NULL,b.ksppstvl) "Session",
  c.ksppstvl "Instance",
  decode(p.isses_modifiable,'FALSE','F','TRUE','T') "S",
  decode(p.issys_modifiable,'FALSE','F','TRUE','T','IMMEDIATE','I','DEFERRED','D') "I",
  decode(p.isdefault,'FALSE','F','TRUE','T') "D",
  a.ksppdesc "Description"
FROM x$ksppi a, x$ksppcv b, x$ksppsv c, v$parameter p
WHERE a.indx = b.indx AND a.indx = c.indx
  AND p.name(+) = a.ksppinm
  AND UPPER(a.ksppinm) LIKE UPPER('%&1%')
ORDER BY a.ksppinm;
Enter value for 1: small_table
old  12:   AND UPPER(a.ksppinm) LIKE UPPER('%&1%')
new  12:   AND UPPER(a.ksppinm) LIKE UPPER('%small_table%')
Parameter                 Session  Instance  S I D  Description
------------------------  -------  --------  -----  -----------
_small_table_threshold             59               lower threshold level of table size for direct reads
SQL> /
old  12:   AND UPPER(a.ksppinm) LIKE UPPER('%&1%')
new  12:   AND UPPER(a.ksppinm) LIKE UPPER('%_serial_direct%')
Parameter                 Session  Instance  S I D  Description
------------------------  -------  --------  -----  -----------
_serial_direct_read                FALSE            enable direct read in serial
– SET PARAMETERS
  . _serial_direct_read = FALSE (default), i.e. disable direct I/O for serial queries
  . _small_table_threshold = 10
SQL>CONN / AS SYSDBA
    ALTER SYSTEM SET "_serial_direct_read" = FALSE SCOPE=SPFILE;
    ALTER SYSTEM SET "_small_table_threshold" = 10 SCOPE=SPFILE;
    STARTUP FORCE;
– Use the following query to verify that
   . _serial_direct_read = FALSE
   . _small_table_threshold = 10
SET linesize 235
col Parameter FOR a50
col SESSION FOR a28
col Instance FOR a55
col S FOR a1
col I FOR a1
col D FOR a1
col Description FOR a90
SELECT
  a.ksppinm  "Parameter",
  decode(p.isses_modifiable,'FALSE',NULL,NULL,NULL,b.ksppstvl) "Session",
  c.ksppstvl "Instance",
  decode(p.isses_modifiable,'FALSE','F','TRUE','T') "S",
  decode(p.issys_modifiable,'FALSE','F','TRUE','T','IMMEDIATE','I','DEFERRED','D') "I",
  decode(p.isdefault,'FALSE','F','TRUE','T') "D",
  a.ksppdesc "Description"
FROM x$ksppi a, x$ksppcv b, x$ksppsv c, v$parameter p
WHERE a.indx = b.indx AND a.indx = c.indx
  AND p.name(+) = a.ksppinm
  AND UPPER(a.ksppinm) LIKE UPPER('%&1%')
ORDER BY a.ksppinm;
Enter value for 1: small_table
old  12:   AND UPPER(a.ksppinm) LIKE UPPER('%&1%')
new  12:   AND UPPER(a.ksppinm) LIKE UPPER('%small_table%')
Parameter                 Session  Instance  S I D  Description
------------------------  -------  --------  -----  -----------
_small_table_threshold    10       10               lower threshold level of table size for direct reads
SQL> /
old  12:   AND UPPER(a.ksppinm) LIKE UPPER('%&1%')
new  12:   AND UPPER(a.ksppinm) LIKE UPPER('%_serial_direct%')
Parameter                 Session  Instance  S I D  Description
------------------------  -------  --------  -----  -----------
_serial_direct_read       TRUE     FALSE            enable direct read in serial
– CREATE A LARGE TEST TABLE NAMED TEST
SQL>conn / as sysdba
    @?/sqlplus/admin/plustrce
    grant plustrace to hr;
    grant select on sh.sales to hr;
    grant execute on dbms_stats to hr;
    grant execute on dbms_monitor to hr;
    conn hr/hr
HR>drop table test purge;
   create table test as select * from sh.sales;
   exec dbms_stats.gather_table_stats(USER, 'TEST');
– Check that the number of blocks in TEST > 10 and in EMPLOYEES < 10
HR>select table_name, blocks from user_tables
   where table_name in ('TEST', 'EMPLOYEES');
TABLE_NAME                         BLOCKS
------------------------------ ----------
EMPLOYEES                               5
TEST                                 4513
– SET CLIENT-ID FOR THE SESSION (to find the trace files later)
HR>exec dbms_session.set_identifier ('PARALLEL');
– ENABLE TRACING FOR THE SESSION TO TRACE WAIT EVENTS (10046) AND DIRECT READS (10949)
HR>alter session set events '10046 trace name context forever, level 8';
   alter session set events '10949 trace name context forever, level 1';
– ISSUE A PARALLEL QUERY ON BOTH THE TABLES, TEST (>10 BLOCKS) AND EMPLOYEES (<10 BLOCKS)
   SET AUTOTRACE ON TO VERIFY THAT THE QUERIES ARE EXECUTED IN PARALLEL
HR>set autotrace traceonly
   select /*+ parallel(2) */ * from employees;
   select /*+ parallel(2) */ * from test;
– ISSUE A SERIAL FTS ON BOTH THE TABLES AND
   A SERIAL NON-FTS (INDEXED READ) ON EMPLOYEES
HR>select * from employees;
   select * from test;
   select * from employees where employee_id=100;
– CHECK THE WAIT EVENTS FOR THE ABOVE STATEMENTS IN THE TRACE FILES
$cd /u01/app/oracle/diag/rdbms/orcl/orcl/trace
 trcsess output=parallel.out clientid='PARALLEL' *.trc
 tkprof parallel.out parallel.txt sys=no
 vi parallel.txt
– PARALLEL QUERY ON EMPLOYEES DOES NOT RESULT IN DIRECT READS AS BLOCKS < _SMALL_TABLE_THRESHOLD (10)
select /*+ parallel(2) */  * from  employees
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------    Waited  ---------  ------------
  PX Deq: Execution Msg                            1        0.01          0.01
  Disk file operations I/O                         2        0.00          0.00
  db file sequential read                          6        0.04          0.04
  os thread startup                                2        0.06          0.12
  PX Deq: Join ACK                                 2        0.00          0.01
  PX Deq: Parse Reply                              2        0.01          0.01
  SQL*Net message to client                        9        0.00          0.00
  PX Deq: Execute Reply                            8        0.04          0.05
  SQL*Net message from client                      9        0.00          0.00
  PX Deq: Signal ACK RSG                           1        0.00          0.00
  PX Deq: Signal ACK EXT                           4        0.01          0.01
  PX qref latch                                    1        0.00          0.00
  PX Deq: Slave Session Stats                      3        0.00          0.00
  enq: PS - contention                             1        0.00          0.00
***************************************************************
– PARALLEL QUERY ON TEST RESULTS IN DIRECT READS AS BLOCKS > _SMALL_TABLE_THRESHOLD (10)
select /*+ parallel(2) */  * from  test
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------    Waited  ---------  ------------
  PX Deq: Join ACK                                 2        0.04          0.05
  PX Deq: Parse Reply                              2        0.00          0.01
  SQL*Net message to client                    61258        0.00          0.21
  PX Deq: Execute Reply                         1021        0.01          1.36
  SQL*Net message from client                  61258        0.78         27.58
  Disk file operations I/O                         1        0.00          0.00
  direct path read                              1143        0.00          0.07
  PX Deq Credit: send blkd                      1104        0.92         38.77
  PX qref latch                                 1115        1.36         56.25
  PX Deq Credit: need buffer                      15        0.15          0.49
  PX Deq: Execution Msg                           26        0.10          0.73
  asynch descriptor resize                        36        0.00          0.00
********************************************************************************
– ALL THE SERIAL QUERIES RESULT IN BUFFERED READS (NO DIRECT READ WAIT EVENTS)
   AS _SERIAL_DIRECT_READ=FALSE
select *
from
 employees
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------    Waited  ---------  ------------
  SQL*Net message to client                        9        0.00          0.00
  db file sequential read                          1        0.00          0.00
  db file scattered read                           1        0.00          0.00
  SQL*Net message from client                      9        0.02          0.02
********************************************************************************
select * from  test
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------    Waited  ---------  ------------
  SQL*Net message to client                        1        0.00          0.00
  SQL*Net message from client                      1        0.14          0.14
********************************************************************************
select * from  employees where employee_id=100
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------    Waited  ---------  ------------
  SQL*Net message to client                        2        0.00          0.00
  db file sequential read                          2        0.03          0.03
  SQL*Net message from client                      2        0.00          0.00
********************************************************************************
—————————————————————————————————–
———————–  SCENARIO – II  ————————-
- SET _serial_direct_read = TRUE i.e. enable direct I/O for serial queries so that
    . serial FTS perform direct I/O
    . serial queries not performing FTS perform buffered I/O
– SET PARAMETERS
  . _serial_direct_read = TRUE, i.e. enable direct I/O for serial queries
SQL>CONN / AS SYSDBA
    ALTER SYSTEM SET "_serial_direct_read" = TRUE SCOPE=SPFILE;
    STARTUP FORCE;
– Verify that
   . _serial_direct_read = TRUE
   . _small_table_threshold = 10
  by running the same x$ksppi parameter query used in Scenario I.
– SET CLIENT-ID FOR THE SESSION
SQL>CONN hr/hr
HR>exec dbms_session.set_identifier ('SERIAL');
– ENABLE TRACING FOR THE SESSION TO TRACE WAIT EVENTS (10046) AND DIRECT READS (10949)
HR>alter session set events '10046 trace name context forever, level 8';
   alter session set events '10949 trace name context forever, level 1';
– ISSUE A SERIAL FTS ON BOTH THE TABLES AND
   A SERIAL NON-FTS ON EMPLOYEES
HR>select * from employees;
   select * from test;
   select * from employees where employee_id=100;
– CHECK THE WAIT EVENTS FOR THE ABOVE STATEMENTS IN THE TRACE FILES
$cd /u01/app/oracle/diag/rdbms/orcl/orcl/trace
 trcsess output=serial.out clientid='SERIAL' *.trc
 tkprof serial.out serial.txt sys=no
 vi serial.txt
– FTS ON EMPLOYEES/TEST RESULTS IN DIRECT READS (THE DIRECT PATH READ WAIT EVENT IS PRESENT) AS _SERIAL_DIRECT_READ=TRUE
select * from employees
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------    Waited  ---------  ------------
  SQL*Net message to client                        9        0.00          0.00
  Disk file operations I/O                         1        0.00          0.00
  db file sequential read                          1        0.00          0.00
  direct path read                                 2        0.00          0.00
  SQL*Net message from client                      9        0.00          0.00
********************************************************************************
select * from test;
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------    Waited  ---------  ------------
  SQL*Net message to client                    61258        0.00          0.17
  Disk file operations I/O                         1        0.00          0.00
  asynch descriptor resize                         2        0.00          0.00
  direct path read                              1129        0.00          0.07
  SQL*Net message from client                  61258        0.14         17.70
********************************************************************************
– NON-FTS ON EMPLOYEES DOES NOT RESULT IN DIRECT READS (NO DIRECT PATH READ WAIT EVENT)
select * from  employees where employee_id=101
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------    Waited  ---------  ------------
  SQL*Net message to client                        2        0.00          0.00
  db file sequential read                          2        0.00          0.00
  SQL*Net message from client                      2        0.00          0.00
********************************************************************************
– CLEANUP
SYS>alter system reset "_serial_direct_read" scope=spfile;
    alter system reset "_small_table_threshold" scope=spfile;
    drop table hr.test purge;

11g R2 RAC: CONVERT NON RAC DATABASE TO RAC DATABASE USING RCONFIG

Oracle supports the following methods to convert a single-instance database to a RAC database, as long as the RAC and standalone environments run on the same operating system and use the same Oracle release:
• DBCA
• Oracle Enterprise Manager (Grid Control)
• RCONFIG
• Manual method
In this post, I will focus on the rconfig command-line tool and demonstrate the conversion with it.
During the conversion, rconfig performs the following steps automatically:
• Migrating the database to ASM, if specified
• Creating RAC database instances on all specified nodes in the cluster
• Configuring the Listener and NetService entries
• Registering services with CRS
• Starting up the instances and listener on all nodes
In Oracle 11g R2, a single-instance database can be converted to either an administrator-managed or a policy-managed cluster database.
When you navigate to $ORACLE_HOME/assistants/rconfig/sampleXMLs, you will find two sample XML input files:
- ConvertToRAC_AdminManaged.xml
- ConvertToRAC_PolicyManaged.xml
While converting a single-instance database on filesystem storage to a RAC database on Automatic Storage Management (ASM), rconfig internally invokes RMAN to back up the database as part of the non-ASM to ASM migration. Configuring RMAN on the local node to use multiple channels in parallel can therefore make the backup, and hence the whole conversion, run faster. For example, you may configure the following in the RMAN settings of the orcl database on the local node:
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 6;
CURRENT SCENARIO:
- 3-node RAC setup
- Names of the nodes: host01, host02, host03
- Name of the single-instance database with filesystem storage: orcl
- Source Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
- Target Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
OBJECTIVE:
- Convert orcl to an admin-managed RAC database running on two nodes, host01 and host02.
- Change storage to ASM with
   . datafiles on the +DATA diskgroup
   . flash recovery area on the +FRA diskgroup
IMPLEMENTATION:
– copy ConvertToRAC_AdminManaged.xml to another file my.xml
host01$cd $ORACLE_HOME/assistants/rconfig/sampleXMLs
host01$cp ConvertToRAC_AdminManaged.xml my.xml
– Edit my.xml and make the following changes (a trimmed sketch of the edited file appears after the note below):
   . Specify Convert verify as "YES"
   . Specify the current OracleHome of the non-RAC database for SourceDBHome
   . Specify the OracleHome where the RAC database should be configured (it can be the same as SourceDBHome)
   . Specify the SID of the non-RAC database and credentials; a user with the SYSDBA role is required to perform the conversion
   . Specify the list of nodes that should have RAC instances running for the admin-managed cluster database; LocalNode should be the first node in this node list
   . The InstancePrefix tag is optional starting with 11.2; if left empty, it is derived from db_unique_name
   . Specify the type of storage to be used by the RAC database; allowable values are CFS|ASM
   . Specify the Database Area Location to be configured for the RAC database
   . Specify the Flash Recovery Area to be configured for the RAC database
Note: The Convert verify option in the XML file accepts three values:
  • YES: rconfig checks that the prerequisites for single-instance to RAC conversion have been met before it starts the conversion
  • NO: rconfig does not perform the prerequisite checks and starts the conversion directly
  • ONLY: rconfig performs only the prerequisite checks; it does not carry out the conversion after completing them
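For reference, here is a trimmed sketch of what the edited my.xml might look like for this scenario. The element names follow the ConvertToRAC_AdminManaged.xml sample shipped in the Oracle home, but treat the values (credentials, paths, node names) as placeholders and verify the structure against your own copy of the sample file:
<n:RConfig xmlns:n="http://www.oracle.com/rconfig">
  <n:ConvertToRAC>
    <!-- verify="YES" runs prerequisite checks before converting -->
    <n:Convert verify="YES">
      <n:SourceDBHome>/u01/app/oracle/product/11.2.0/dbhome_1</n:SourceDBHome>
      <n:TargetDBHome>/u01/app/oracle/product/11.2.0/dbhome_1</n:TargetDBHome>
      <n:SourceDBInfo SID="orcl">
        <n:Credentials>
          <n:User>sys</n:User>
          <n:Password>oracle</n:Password>
          <n:Role>sysdba</n:Role>
        </n:Credentials>
      </n:SourceDBInfo>
      <!-- local node (host01) must come first -->
      <n:NodeList>
        <n:Node name="host01"/>
        <n:Node name="host02"/>
      </n:NodeList>
      <n:InstancePrefix>orcl</n:InstancePrefix>
      <n:SharedStorage type="ASM">
        <n:TargetDatabaseArea>+DATA</n:TargetDatabaseArea>
        <n:TargetFlashRecoveryArea>+FRA</n:TargetFlashRecoveryArea>
      </n:SharedStorage>
    </n:Convert>
  </n:ConvertToRAC>
</n:RConfig>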
– Run rconfig to convert orcl from single instance database to 2 instance RAC database
host01$rconfig my.xml
– Check the log file for rconfig while the conversion is going on
oracle@host01$ls -lrt  $ORACLE_BASE/cfgtoollogs/rconfig/*.log
oracle@host01$tailf <name of the latest log file>
– check that the database has been converted successfully
host01$srvctl status database -d orcl
Instance orcl1 is running on node host01
Instance orcl2 is running on node host02
– Note that rconfig adds the password file on all the nodes, but the tnsnames.ora entry for the database needs to be modified on the local node (to reflect the SCAN name instead of the host IP) and added on the rest of the nodes.
– For all other nodes, copy the entry for the database (orcl) from tnsnames.ora on the local node to tnsnames.ora on the remote node(s).
– Following is the entry I modified on the local node and copied to the rest of the nodes:
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster01-scan.cluster01.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )
– Check that the database can be connected to from a remote node
host02$sqlplus system/oracle@orcl
– Check that the datafiles have been converted to ASM
SQL>select name from v$datafile;
NAME
--------------------------------------------------------------------------------
+DATA/orcl/datafile/system.326.794838279
+DATA/orcl/datafile/sysaux.325.794838349
+DATA/orcl/datafile/undotbs1.305.794838405
+DATA/orcl/datafile/users.342.794838413
+DATA/orcl/datafile/undotbs2.348.794838

SHARED POOL ARCHITECTURE

                 
In this first article in the series, I will discuss the purpose of the shared pool, its architecture, and its hash buckets. In the next article(s), I will discuss latches, locks, pins and mutexes, and tuning of the shared pool.
WHAT IS SHARED POOL?
The shared pool is a memory area within the System Global Area (SGA) that is created at instance startup. After the data buffer cache, it is the most important area of the SGA, and it is usually the second-largest SGA memory component.
  PURPOSE OF SHARED POOL
 An important purpose of the shared pool is to cache the executable versions of SQL and PL/SQL statements. This allows multiple executions of the same SQL or PL/SQL code to be performed without the resources required for a hard parse, which results in significant reductions in CPU, memory, and latch usage.

                   WHAT DOES SHARED POOL CONTAIN?

The shared pool is primarily a repository for shared SQL and PL/SQL, so that users can share statements, reducing the need to parse a similar statement again. Much of the shared pool usage supports the execution of shared SQL and PL/SQL packages; but in order to build a cursor or compile a PL/SQL procedure, we need to know all about the database objects referenced by the SQL or PL/SQL being compiled. For example, to execute a query on a table, we need metadata about the table, including column names, data types, indexes and optimizer statistics. All of this additional metadata is also cached in the shared pool, independently of the cursors or program units. Since this metadata is stored independently, it can be used to build any number of cursors. Besides this, several features like the shared server, parallel query, and RMAN make large memory allocations in the shared pool.

A number of components are fixed in size and space is allocated for them at instance startup. There is usually very little that the DBA can do with regard to these permanent allocations.

- Session arrays

- Process arrays

- Segmented arrays

- Other static structures

Generally, of far greater interest to the DBA are the memory allocations that can be aged in and out of the cache, since building new objects in the shared pool is expensive and impacts scalability and performance. Memory allocations for objects that can be rebuilt are referred to as 'recreatable'.

The various recreatable components in the shared pool are:

Library cache
   Oracle's library cache is responsible for collecting, parsing, interpreting, and executing all of the SQL statements that go against the Oracle database. It contains:

   1. SQL (cursors): the executable representation of a SQL statement that may be used repeatedly by many sessions.

   2. PL/SQL: the executable representation of PL/SQL packages, procedures and functions that may be used repeatedly by many sessions. It also holds stored trigger code.

   3. Objects of various types required to parse and execute SQL statements, including tables, indexes, types, methods etc., and Java classes.

   The library cache maintains the relationships between tables and SQL statements (more specifically, child cursors). This way, if a table is altered, Oracle knows which cursors and child cursors to invalidate.

   Library cache handles point to memory that contains information about procedures, functions, tables, views, synonyms, cursors and child cursors.

   The library cache references and contains metadata about Oracle objects: not all objects defined in Oracle's data dictionary, but only those that have been recently referenced and have some cached information. V$DB_OBJECT_CACHE reports (namespace, name, sharable_mem) all objects currently in the library cache.
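As a quick illustration (my own example, not from the original article), you can list the biggest consumers currently cached in the library cache like this:
-- Top 10 library cache objects by sharable memory
SELECT * FROM (
    SELECT namespace, name, sharable_mem
    FROM   v$db_object_cache
    ORDER  BY sharable_mem DESC
) WHERE rownum <= 10;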

– Row cache or dictionary cache

 The dictionary cache stores environmental information, which includes referential integrity, table definitions, indexing information, and other metadata stored within Oracle's internal tables. It is also known as the row cache, and it caches data dictionary information in memory for quick access. The dictionary cache is like the buffer cache, except that it holds Oracle data dictionary information instead of user data. Oracle Database uses this metadata when parsing SQL cursors and during the compilation of PL/SQL programs.

 Result cache

It holds query result sets and query fragments, so that subsequent queries can retrieve the result directly from the cache.

              HOW IS MEMORY ALLOCATED IN SHARED POOL?

The shared pool holds many hundreds of objects, where an object can be, for example, a cursor, a table, a view or a PL/SQL package.

Usually these individual objects are not large, but they can grow to several MB depending on the object type and its defining attributes; for example, a package body containing many procedures is likely to consume more space than a simple SELECT statement.

Each individual object is not a single large allocation but is further partitioned into independent memory allocations called 'heaps'. The number of heaps for an object depends on the object type. For example, a SQL cursor has two heaps: a smaller heap for the library cache metadata and a larger heap containing the executable representation of the cursor (usually called the sqlarea). Although heaps themselves may be fairly large, each heap is composed of one or more chunks of memory of standard allocation sizes. The standard allocation sizes reduce the number of problems arising from memory fragmentation.

When an object is allocated memory, the memory is not allocated as a single contiguous area; rather, it comes as one or more chunks of memory. The chunks comprising an object may not be contiguously located, but the memory within each chunk must be contiguous. This chunking enables large objects (over 5 KB) to be loaded into the cache without requiring a single contiguous area, reducing the possibility of running out of contiguous memory due to fragmentation.

The most prevalent chunk sizes in use are 1 KB and 4 KB, creating more uniform memory allocations. As a result, at steady state the same kinds of objects and chunk sizes are allocated and aged out. This allocation method reduces fragmentation and total memory usage. Note that smaller units of allocation may be used for components like the row cache.

Large allocations (over 5 KB) are usually made by the shared server, parallel query, or RMAN. Oracle recommends segregating the SGA memory used by these features into a distinct memory area called the large pool.

Infrequently, Java, PL/SQL or SQL cursors may make allocations out of the shared pool that are larger than 4 KB. If a large memory allocation exceeding 5 KB is requested, Oracle might have to age out a large number of objects to satisfy that request, and this operation could hold the latch resource for a detectable period of time, causing minor disruption to other concurrent attempts at memory allocation. To allow these allocations to occur more efficiently, an area of the shared pool is set aside and used only for allocations larger than 4 KB. This is known as the reserved pool, and by default it occupies 5% of the shared pool. The reserved pool is used only if insufficient space is available in the ordinary shared pool. Allocations from the reserved pool are made as a single contiguous extent of the size of the object, unlike the main shared pool where allocations are made in chunks of 1 KB or 4 KB; thus, the reserved pool makes allocation of large chunks more efficient. Small allocations are not allowed in the reserved area, so smaller objects will not fragment the reserved list, helping to ensure that it retains large contiguous chunks of memory. The database can use this memory for operations such as PL/SQL and trigger compilation or for temporary space while loading Java objects. After memory allocated from the reserved pool is freed, it returns to the reserved pool.
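To see how the reserved pool is behaving on a live system (again my own illustration, not part of the original article), query V$SHARED_POOL_RESERVED; a non-zero REQUEST_FAILURES suggests the reserved area is undersized:
-- Reserved pool health check
SELECT free_space, used_space, requests,
       request_misses, request_failures
FROM   v$shared_pool_reserved;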

Whenever an object is to be allocated memory in the shared pool, the memory for each chunk has to be contiguous. If the requested amount of contiguous memory is not available, the heap manager iterates through the shared pool LRU list, attempting to free space (by aging out objects) to create a contiguous chunk of the requested size. ORA-4031 may therefore be raised even when a large amount of free space is available, simply because the available free memory does not yield a contiguous extent large enough to satisfy the current request. If ASMM/AMM is enabled, an additional granule of memory is requested if sufficient contiguous memory can't be found.

                      HOW ARE OBJECTS ORGANIZED IN SHARED POOL?

The library cache stores the text of SQL statements and their execution plans in cursors. Whenever a SQL statement is issued, the library cache is searched for an existing cursor for that statement. If the whole library cache had to be searched for the cursor, it would take very long. If we think of the library cache as a library of books and a cursor as a book, it would obviously take very long to locate a book in a huge library. Hence, we assign a code to every book and keep a catalog card that records the book's code and its rack number in the library. We search the catalog for the desired card and go straight to the listed rack to get the book.

In a similar manner, pointers to the cursors are placed in doubly linked lists called hash chains. We scan a hash chain to get the pointer to the cursor of our SQL statement; once found (a soft parse), the information in the cursor is used to execute the statement. In order to reduce the length of the hash chain to be scanned, hash chains are grouped into hash buckets (like books in a library grouped by subject), i.e. one hash bucket contains the hash chain with pointers to the cursors of SQL statements hashing to a particular set of values.

For example:

      Hash bucket        Hash chain (circular, doubly linked)
      -----------        -------------------------------------
      H1                 S1 <-> S2 <-> S5 <-> S9
      H2                 S3 <-> S8
      H3                 S4 <-> S6
      H4                 S7
Note that multiple SQL statements can hash to the same value. In the above diagram, H1 contains statements which hash to certain values only; statements S1, S2, S5 and S9 hash to one of those values, hence the pointers to their cursors are in the same hash bucket H1. So, if statement S5 is issued, only the hash chain corresponding to hash bucket H1 is searched, instead of searching the whole library cache. If the corresponding cursor is not found, the statement is hashed to get the hash bucket number, its cursor is constructed, and a pointer to the cursor is linked into the hash chain for that bucket (a hard parse).
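You can observe the hash value Oracle computes for each cursor (an illustration of mine, not from the original post) in V$SQLAREA; statements with the same HASH_VALUE hang off the same hash bucket:
-- Hash values of cached cursors matching a text pattern
SELECT sql_text, hash_value
FROM   v$sqlarea
WHERE  sql_text LIKE 'select salary from hr.employees%';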

That was an overview of the shared pool's architecture. In my next post in this series, I will talk about the various serialization structures used in the library cache to protect memory allocations, i.e. latches, locks, pins and mutexes.

 


11G DATAGUARD: FLASHBACK STANDBY AFTER RESETLOGS ON PRIMARY

If the primary database is opened with the RESETLOGS option following an incomplete recovery, the redo apply service might halt on the standby database when it encounters the OPEN RESETLOGS command in the redo. But if the physical standby database's SCN is far enough behind the primary database's SCN, the redo apply service can interpret the OPEN RESETLOGS command without stopping.
Hence, after the primary has been opened with the RESETLOGS option, if we flash back the standby database to an SCN earlier than the RESETLOGS SCN of the primary, we won't have to recreate the standby database.
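Note (my addition, not in the original demo): flashing back the standby requires that flashback logging is enabled on the standby database, or that a suitable guaranteed restore point exists there. A quick check while the standby is mounted:
SBY>select flashback_on from v$database;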
Let's implement:
OVERVIEW:
   — Backup primary database
   — Create restore point before_update
   — Perform update
   — Restore and recover primary database to restore point before_update
   — Check configuration – standby database has diverged from primary
   — Flashback standby database to SCN = primary (resetlogs_scn) – 1
   — check that primary is back into configuration
IMPLEMENTATION:
–backup database
RMAN>backup database;
– Create restore point before_update
PRI>create restore point before_update;
– Check initial salary
PRI>select salary from hr.employees where employee_id=100;
    SALARY
----------
      2000
– update salary
PRI> update hr.employees set salary = 1000 where employee_id = 100;
       commit;
       alter system switch logfile;
       /
        /
– On the primary, restore from backup and recover until the restore point before_update
PRI>shutdown immediate;
    startup mount;
$rman target sys/sys@orcl catalog rcowner/rcowner@rcat
RMAN> restore database;
      recover database until restore point before_update;
      alter database open resetlogs;
– Check that recovery was successful, i.e. the salary on the primary database is the same as before the update
PRI>select salary from hr.employees where employee_id=100;
    SALARY
----------
      2000
– Check the configuration
   Note that there is an error, as the standby database has diverged from the primary
DGMGRL>show configuration;
    orcl – Primary database
    sby  – Physical standby database
      Error: ORA-16810: multiple errors or warnings detected for the database
DGMGRL> show database sby;
Database – sby
  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   (unknown)
  Apply Lag:       0 seconds
  Real Time Query: OFF
  Instance(s):
    sby
  Database Error(s):
    ORA-16700: the standby database has diverged from the primary database
    ORA-16766: Redo Apply is stopped
– Check the resetlogs SCN on the primary database
PRI>select resetlogs_change# from v$database;
RESETLOGS_CHANGE#
-----------------
          1414101
– Check the current SCN on the standby database
– Note that the standby database is ahead of resetlogs_change# and hence needs
   to be flashed back
SBY>select current_scn from v$database;
CURRENT_SCN
-----------
    1414993
– Flash back the standby database to 2 SCNs earlier than resetlogs_change#,
   i.e. 1414101 - 2 = 1414099
SBY>shutdown immediate;
    startup mount;
    flashback database to scn 1414099;
    alter database open;
– Check that the standby database is ready to receive and apply logs from the primary
DGMGRL> show configuration;
Configuration – dgconfig
  Protection Mode: MaxPerformance
  Databases:
    orcl – Primary database
    sby  – Physical standby database
DGMGRL> show database sby;
Database – sby
  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds
  Apply Lag:       0 seconds
  Real Time Query: ON
  Instance(s):
    sby
Database Status:
SUCCESS
– Check that the salary on the standby database is the same as before the update
SBY>select salary from hr.employees where employee_id=100;
    SALARY
----------
      2000
  Thanks for your time !!!
   Keep visiting the blog….

11G DATAGUARD : AUTOMATIC JOB FAILOVER

In this post, I will demonstrate automatic failover of a job to the standby, building on the automatic client failover introduced in 11g Data Guard.
We can create a service for the database which can run on both the primary and standby databases, and following a switchover/failover, the Data Guard broker can automatically
- start the appropriate services on the new primary database,
- disconnect clients from the failed database, and
- redirect them to the new primary database; no manual intervention is required.
For a demonstration of automatic client failover, see the post 11g Dataguard: Automatic Client Failover below.
Using this feature, we can also enable jobs running on the primary to fail over to the standby database in the event of a switchover/failover. To implement this, we create a job class which maps to a service and assign the job to that job class. On failover/switchover, the service fails over to the standby database, and along with it, all the jobs assigned to the job classes mapped to the service also fail over.
Job 1 ---|
Job 2 ---+-----> Job class ---> Service
Job 3 ---|
OVERVIEW:
- Create a service orcls  for the database and start it
- Create net service name for the service which points to both the primary and standby database
- Create a trigger which starts the service on the primary database
- Connect to the primary database (orcl) using the service orcls
- Create a job class TEST_CLASS mapped to service orcls
- Create and start a job belonging to TEST_CLASS
- Check that job is running on primary (orcl)
- Perform switchover
- Check that job has failed over and is now running on new primary (sby)
                                       ---- CREATE SERVICE ----
– CHECK THE SERVICE_NAMES PARAMETER ON PRIMARY
PRI>sho parameter service_names
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
service_names                        string      orcl
– CREATE A SERVICE ORCLS
PRI>exec dbms_service.create_service(-
       service_name => 'orcls',-
       network_name => 'orcls',-
       failover_method => 'BASIC',-
       failover_type => 'SELECT',-
       failover_retries => 180,-
       failover_delay => 1);
– CHECK THAT THE LISTENER DOES NOT KNOW ABOUT THE SERVICE YET, AS IT HAS NOT BEEN STARTED
$ lsnrctl services |grep orcls
– START THE SERVICE
PRI>exec dbms_service.start_service('orcls');
– CHECK THAT THE SERVICE APPEARS IN THE SERVICE_NAMES PARAMETER
PRI>sho parameter service_names
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
service_names                        string      orcls
– CHECK THAT THE SERVICE IS AUTOMATICALLY REGISTERED WITH THE LISTENER
PRI>ho lsnrctl services |grep orcls
Service "orcls" has 1 instance(s).
– CREATE AN AFTER STARTUP TRIGGER WHICH STARTS THE SERVICE
   ON THE DATABASE WHICH IS CURRENTLY IN THE PRIMARY ROLE (AND STOPS IT OTHERWISE)
PRI>create or replace trigger manage_service after startup on database
    declare
       role varchar2(30);
    begin
       select database_role into role from v$database;
       IF role = 'PRIMARY' THEN
          DBMS_SERVICE.START_SERVICE('orcls');
       ELSE
          DBMS_SERVICE.STOP_SERVICE('orcls');
       END IF;
    END;
/
– CREATE NET SERVICE NAME FOR SERVICE ORCLS
-------------------------------
Please note that both the primary and standby databases should be registered statically with the listener in the database home. Here, the database home listener is running on port 1521 for both the standby and primary databases.
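For reference, static registration means SID_LIST entries like the following in the listener.ora of the database home on each host (a sketch of mine with assumed names and paths, not from the original post; on the standby host, use sby for GLOBAL_DBNAME and SID_NAME):
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = orcl)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
      (SID_NAME = orcl)
    )
  )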
IF BOTH PRIMARY AND STANDBY DATABASES ARE ON THE SAME MACHINE
    . create net service name orcls
     – service name = orcls
     – add an address on host1, port 1521 (representing both the primary database orcl and the standby database sby)
– NOW, WHEN THE SERVICE IS AVAILABLE ON ORCL (PRIMARY DATABASE), THE LISTENER WILL CONNECT CLIENTS TO THE ORCL DATABASE
– AND WHEN THE SERVICE IS AVAILABLE ON SBY (STANDBY DATABASE FOLLOWING FAILOVER/SWITCHOVER), THE LISTENER WILL CONNECT CLIENTS TO THE SBY DATABASE
-----------------------------------
IF PRIMARY AND STANDBY DATABASES ARE ON DIFFERENT MACHINES
– USING NETMGR
   . create net service name orcls
     – service name = orcls
     – add an address on host1, port 1521 (representing the primary database orcl)
     – add an address on host2, port 1521 (representing the standby database sby)
– NOW, WHEN THE SERVICE RUNS ON THE ORCL PRIMARY DATABASE, IT WILL BE REGISTERED AUTOMATICALLY WITH THE LISTENER ON PORT 1521 ON HOST1, AND THE APPLICATION WILL GET CONNECTED TO THE ORCL DATABASE.
– AND WHEN THE SERVICE RUNS ON THE SBY STANDBY DATABASE (FOLLOWING FAILOVER/SWITCHOVER), IT WILL BE REGISTERED AUTOMATICALLY WITH THE LISTENER ON PORT 1521 ON HOST2, AND THE APPLICATION WILL GET CONNECTED TO THE SBY DATABASE.
———————————
                            ———     JOB FAILOVER   ——————-
– check current configuration
DGMGRL> show configuration;
  Protection Mode: MaxPerformance
  Databases:
    orcl – Primary database
    sby  – Physical standby database
– Create a test table on the PRIMARY (ORCL)
ORCL>drop table hr.test purge;
     create table hr.test(id number, tim timestamp);
– Create job class TEST_CLASS, which is associated with service orcls
ORCL>exec dbms_scheduler.drop_job_class ('TEST_CLASS');
     BEGIN
       sys.dbms_scheduler.create_job_class(
         logging_level  => DBMS_SCHEDULER.LOGGING_RUNS,
         service        => 'orcls',
         job_class_name => '"TEST_CLASS"');
     END;
/
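To confirm that the job class is mapped to the service (an extra verification step of mine, not in the original post), you can query DBA_SCHEDULER_JOB_CLASSES:
ORCL>select job_class_name, service
     from dba_scheduler_job_classes
     where job_class_name='TEST_CLASS';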
– Create a job which inserts a record into table HR.TEST every 3 seconds
   and belongs to job class TEST_CLASS
ORCL>exec sys.dbms_scheduler.drop_job(job_name=>'sys.test_job');
     BEGIN
       sys.dbms_scheduler.create_job(
         job_name        => '"SYS"."TEST_JOB"',
         job_type        => 'PLSQL_BLOCK',
         job_action      => 'begin
                               insert into hr.test values (1, systimestamp);
                               commit;
                             end;',
         repeat_interval => 'FREQ=SECONDLY;INTERVAL=3',
         start_date      => systimestamp at time zone 'Asia/Calcutta',
         job_class       => '"TEST_CLASS"',
         auto_drop       => FALSE,
         enabled         => TRUE);
     END;
/
– Check that the job executes every 3 seconds
ORCL>select * from hr.test
     order by tim;
– Check that the job is currently running on the primary
ORCL>select job_name
     from dba_scheduler_jobs
     where job_name='TEST_JOB';
JOB_NAME
------------------------------
TEST_JOB
– Check that the job is currently not running on the standby
SBY>select job_name
    from dba_scheduler_jobs
    where job_name='TEST_JOB';
no rows selected
– Perform a switchover so that
     SBY becomes the new primary and
     ORCL becomes the new physical standby
DGMGRL>switchover to sby;
– Start up the new standby (orcl)
ORCL>conn / as sysdba
     startup
– Check that the switchover was successful
DGMGRL>connect sys/sys@sby
                show configuration;
  Protection Mode: MaxPerformance
  Databases:
    sby  – Primary database
    orcl – Physical standby database
– Check that the job is still running by checking the records in the HR.TEST table
ORCL>select * from hr.test
     order by tim;
– Check that the job is now running on the new primary, i.e. SBY, and not on the earlier primary (orcl),
   as it has failed over along with service orcls
– Check that the job is currently running on the new primary (SBY)
SBY>select job_name
    from dba_scheduler_jobs
    where job_name='TEST_JOB';
JOB_NAME
------------------------------
TEST_JOB
– Check that the job is currently not running on the new standby (ORCL)
ORCL>select job_name
     from dba_scheduler_jobs
     where job_name='TEST_JOB';
no rows selected
– RESTORE EARLIER STATUS
DGMGRL>switchover to orcl;
ORCL>exec dbms_scheduler.drop_job('TEST_JOB');
     drop table hr.test purge;
I hope you found this demonstration interesting. Please leave your valuable comments/suggestions.

11G DATAGUARD : AUTOMATIC CLIENT FAILOVER

In this post, I will demonstrate automatic client failover, a new feature introduced in 11g Data Guard.
Prior to 11g, one or more user-written database triggers were required to automate client failover, depending upon the configuration.
Data Guard 11g Release 2 simplifies configuration significantly by eliminating the need for user-written triggers to automate client failover. During role transitions managed by the Data Guard broker, it can automatically
– fail over the database,
– start the appropriate services on the new primary database,
– disconnect clients from the failed database, and
– redirect them to the new primary database; no manual intervention is required.

OVERVIEW:
- Create a service orcls  for the database and start it
- Create net service name for the service which points to both the primary and standby database
- Create a trigger which starts the service on the primary database
- Connect to the primary database (orcl) using the service orcls
- Issue a long running query
- While the query is running, perform switchover to standby
- After the switchover, check that query has failed over to the new primary
IMPLEMENTATION
– CHECK THE SERVICE_NAMES PARAMETER ON PRIMARY
PRI>sho parameter service_names
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
service_names                        string      orcl
– CREATE A SERVICE ORCLS
PRI>exec dbms_service.create_service(-
       service_name => 'orcls',-
       network_name => 'orcls',-
       failover_method => 'BASIC',-
       failover_type => 'SELECT',-
       failover_retries => 180,-
       failover_delay => 1);

– CHECK THAT THE LISTENER DOES NOT KNOW ABOUT THE SERVICE YET, AS IT HAS NOT BEEN STARTED
$ lsnrctl services |grep orcls
– START THE SERVICE
PRI>exec dbms_service.start_service('orcls');
– CHECK THAT THE SERVICE APPEARS IN THE SERVICE_NAMES PARAMETER
PRI>sho parameter service_names
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
service_names                        string      orcls
– CHECK THAT THE SERVICE IS AUTOMATICALLY REGISTERED WITH THE LISTENER
PRI>ho lsnrctl services |grep orcls
Service "orcls" has 1 instance(s).
– CREATE AN AFTER STARTUP TRIGGER WHICH STARTS THE SERVICE ON
   THE PRIMARY DATABASE AND STOPS IT ON THE STANDBY DATABASE
PRI>create or replace trigger manage_service after startup on database
    declare
       role varchar2(30);
    begin
       select database_role into role from v$database;
       IF role = 'PRIMARY' THEN
          DBMS_SERVICE.START_SERVICE('orcls');
       ELSE
          DBMS_SERVICE.STOP_SERVICE('orcls');
       END IF;
    END;
/

-------------------------------
Please note that both the primary and standby databases should be registered statically with the listener in the database home. Here, the database home listener is running on port 1521 for both the standby and primary databases.
IF BOTH PRIMARY AND STANDBY DATABASES ARE ON THE SAME MACHINE
    . create net service name orcls
     – service name = orcls
     – add an address on host1, port 1521 (representing both the primary database orcl and the standby database sby)
– NOW, WHEN THE SERVICE IS AVAILABLE ON ORCL (PRIMARY DATABASE), THE LISTENER WILL CONNECT CLIENTS TO THE ORCL DATABASE
– AND WHEN THE SERVICE IS AVAILABLE ON SBY (STANDBY DATABASE FOLLOWING FAILOVER/SWITCHOVER), THE LISTENER WILL CONNECT CLIENTS TO THE SBY DATABASE
-----------------------------------
IF PRIMARY AND STANDBY DATABASES ARE ON DIFFERENT MACHINES
– USING NETMGR
   . create net service name orcls
     – service name = orcls
     – add an address on host1, port 1521 (representing the primary database orcl)
     – add an address on host2, port 1521 (representing the standby database sby)
– NOW, WHEN THE SERVICE RUNS ON THE ORCL PRIMARY DATABASE, IT WILL BE REGISTERED AUTOMATICALLY WITH THE LISTENER ON PORT 1521 ON HOST1, AND THE APPLICATION WILL GET CONNECTED TO THE ORCL DATABASE.
– AND WHEN THE SERVICE RUNS ON THE SBY STANDBY DATABASE (FOLLOWING FAILOVER/SWITCHOVER), IT WILL BE REGISTERED AUTOMATICALLY WITH THE LISTENER ON PORT 1521 ON HOST2, AND THE APPLICATION WILL GET CONNECTED TO THE SBY DATABASE.
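Concretely, the orcls net service name might look like this in tnsnames.ora (a sketch of mine; host1 and host2 are assumed host names, and with both databases on one machine you would keep only a single ADDRESS line):
ORCLS =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = host1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = host2)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = orcls)
    )
  )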
———————————
– CREATE A USER DGTEST
PRI>create user dgtest identified by dgtest;
    grant create session to dgtest;
    grant select_catalog_role to dgtest;
PRESENTLY, I HAVE BOTH PRIMARY AND STANDBY ON THE SAME MACHINE
– CHECK THE CURRENT CONFIGURATION
   Primary – orcl
   Standby – sby
DGMGRL>show configuration;
– CHECK THAT THE SERVICE IS REGISTERED WITH THE LISTENER (1521) AND POINTS TO ORCL (PRIMARY DATABASE)
$lsnrctl services
– CONNECT USING SERVICE ORCLS AND CHECK THAT DB_UNIQUE_NAME = ORCL (PRIMARY)
PRI>connect dgtest/dgtest@orcls
    sho parameter db_unique_name
– CHECK THAT THE SESSION HAS NOT FAILED OVER YET (failed_over = NO)
PRI>select failover_method, failover_type, failed_over
    from v$session
    where username='DGTEST';
– ISSUE A LONG-RUNNING QUERY IN THIS SESSION AND SIMULTANEOUSLY SWITCH OVER TO SBY IN ANOTHER DGMGRL SESSION
PRI>select * from dba_objects, dba_tables;
DGMGRL>switchover to sby;
– WHILE THE SWITCHOVER IS GOING ON, CHECK THAT THE QUERY IS TEMPORARILY SUSPENDED
– AFTER THE SBY DATABASE IS STARTED, CHECK THAT SERVICE ORCLS NOW POINTS TO SBY (NEW PRIMARY)
$lsnrctl services
– CHECK THAT THE QUERY RESUMES AFTER THE SWITCHOVER
– CHECK CURRENT CONFIGURATION
DGMGRL>show configuration;
– CHECK THAT THE SESSION HAS FAILED OVER TO THE SBY DATABASE
PRI>sho parameter db_unique_name
PRI>select failover_method, failover_type, failed_over
    from v$session
    where username='DGTEST';
– RESTORE EARLIER CONFIGURATION
DGMGRL>switchover to orcl;
We can also use the client failover feature of 11g Data Guard to automatically fail over a running job to the standby in case of a switchover/failover.

11G DATAGUARD: AUTOMATIC BLOCK MEDIA RECOVERY (AUTO BMR)

                          
In this post, I will demonstrate automatic block media recovery, introduced in 11g Data Guard. This feature can be used only if Active Data Guard is enabled.
It enables the automatic repair of corrupt blocks, transparent to the user and application. If corruption occurs on the primary database, block media recovery is performed automatically using a good copy of the block from the standby database, and vice versa.
OVERVIEW:
– Create and populate a test table on the primary
– Corrupt two blocks containing table data
– Check that the blocks are corrupted, using dbv
– Flush the buffer cache and issue a query against the corrupt blocks
– The query succeeds, as the blocks are repaired automatically
– Verify the automatic BMR using
     . the alert log of the primary
     . the dbv utility
– IMPLEMENTATION –
– CREATE A TEST TABLESPACE ON PRIMARY
PRI>drop tablespace test including contents and datafiles;
    create tablespace test datafile '/u01/app/oracle/oradata/orcl/test01.dbf' size 30M;
– CREATE A TEST USER ON PRIMARY
PRI>create user test identified by test default tablespace test;
    grant connect, resource to test;
– CREATE AND POPULATE TABLE TEST_TAB IN THE TEST TABLESPACE ON PRIMARY
PRI>create table test.test_tab as select * from hr.employees;
    insert into test.test_tab select * from test.test_tab;
    /
    /
    /
    /
    commit;
    select table_name, tablespace_name from dba_tables where table_name='TEST_TAB';
    select count(*) from test.test_tab;
– CHECK THE LOWEST BLOCK OCCUPIED BY THE TABLE IN THE DATAFILE
PRI>select min(dbms_rowid.rowid_block_number(rowid))
    from test.test_tab;
MIN(DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID))
-----------------------------------------
                                      131
– corrupt two blocks
#dd if=/dev/zero of=/u01/app/oracle/oradata/orcl/test01.dbf bs=8192 conv=notrunc seek=136 count=2
– Check whether the datafile is corrupted
[oracle@host1 ~]$ dbv file=/u01/app/oracle/oradata/orcl/test01.dbf blocksize=8192
Page 136 is marked corrupt
Corrupt block relative dba: 0x01800088 (file 6, block 136)
Completely zero block found during dbv:
Page 137 is marked corrupt
Corrupt block relative dba: 0x01800089 (file 6, block 137)
Completely zero block found during dbv:
Total Pages Marked Corrupt   : 2
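As an additional cross-check (my own addition; the original demo relies on dbv alone), you could also have RMAN validate the datafile and then query V$DATABASE_BLOCK_CORRUPTION, which records the corrupt block ranges:
RMAN> validate datafile 6;
SQL> select file#, block#, blocks, corruption_type
     from v$database_block_corruption;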
– FLUSH THE BUFFER CACHE AND ISSUE A QUERY ON THE CORRUPTED BLOCKS
– THE QUERY SUCCEEDS AS THE BLOCKS HAVE BEEN REPAIRED AUTOMATICALLY
PRI>alter system flush buffer_cache;
    select count(*) from test.test_tab;
– CHECK THE ALERT LOG OF PRIMARY DATABASE
# tailf /u01/app/oracle/diag/rdbms/orcl/orcl/trace/alert_orcl.log
——————
Tue Dec 18 12:43:01 2012
ALTER SYSTEM: Flushing buffer cache
Tue Dec 18 12:43:26 2012
– BLOCK 136 IS FOUND CORRUPT AND IS DUMPED IN TRACE FILE
Hex dump of (file 6, block 136) in trace file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_12608.trc
Corrupt block relative dba: 0x01800088 (file 6, block 136)
Completely zero block found during multiblock buffer read
Reading datafile '/u01/app/oracle/oradata/orcl/test01.dbf' for corruption at rdba: 0x01800088 (file 6, block 136)
Reread (file 6, block 136) found same corrupt data
– REQUEST FOR AUTO BMR SENT FOR BLOCK 136
Requesting Auto BMR for (file# 6, block# 136)
– BLOCK 137 IS FOUND CORRUPT AND IS DUMPED IN TRACE FILE
Hex dump of (file 6, block 137) in trace file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_12608.trc
Corrupt block relative dba: 0x01800089 (file 6, block 137)
Completely zero block found during multiblock buffer read
Reading datafile '/u01/app/oracle/oradata/orcl/test01.dbf' for corruption at rdba: 0x01800089 (file 6, block 137)
Reread (file 6, block 137) found same corrupt data
– REQUEST FOR AUTO BMR SENT FOR BLOCK 137
Requesting Auto BMR for (file# 6, block# 137)
– AUTO BMR SUCCESSFUL FOR BLOCK 136
Waiting Auto BMR response for (file# 6, block# 136)
Auto BMR successful
– AUTO BMR SUCCESSFUL FOR BLOCK 137
Waiting Auto BMR response for (file# 6, block# 137)
Auto BMR successful
– CHECK THAT BLOCKS HAVE BEEN REPAIRED
[oracle@host1 ~]$ dbv file=/u01/app/oracle/oradata/orcl/test01.dbf blocksize=8192
Total Pages Marked Corrupt   : 0
Thanks for your time. Your comments and suggestions are welcome !!!