Category Archives: Tuning

AUTOMATIC DEGREE OF PARALLELISM (DOP) – PART – I

In a serial (non-parallel) execution environment, a single process or thread undertakes all the operations required to process your SQL statement, and each action must complete before the succeeding action can commence. Parallel SQL allows a SQL statement to be processed by multiple threads or processes simultaneously. Oracle supports parallel processing for a wide range of operations, including queries, DDL and DML:
  •  Queries that involve table or index range scans.
  •  Bulk insert, update or delete operations.
  •  Table and index creation.
In earlier versions of the Oracle Database, we had to determine the DOP more or less manually, either with a PARALLEL hint or by setting a parallel degree with ALTER TABLE.
There was an automatic computation of the DOP available for objects with a dictionary DOP of DEFAULT, derived from the simple formula:
 CPU_COUNT * PARALLEL_THREADS_PER_CPU.
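As a quick sanity check (a minimal sketch, not part of the original article), the two inputs to this formula can be inspected directly:

SQL> show parameter cpu_count
SQL> show parameter parallel_threads_per_cpu

-- e.g. with CPU_COUNT = 8 and PARALLEL_THREADS_PER_CPU = 2 (illustrative values),
-- an object decorated with PARALLEL DEFAULT would get a default DOP of 8 * 2 = 16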
If there were insufficient parallel servers to satisfy the requested DOP, one of three things could occur:
  • The SQL would be run at a reduced DOP (be downgraded)
  • The SQL would run in serial mode (be serialized)
  • If PARALLEL_MIN_PERCENT was specified and less than the nominated percentage of the DOP was achievable, then the SQL statement might terminate with "ORA-12827: insufficient parallel query slaves available".
Oracle 11g Release 2 introduced automatic computation of the DOP to parallelize SQL. Automatic DOP is, as the name suggests, a way of having Oracle determine the degree of parallelism depending on the nature of the operations to be performed and the sizes of the objects involved. The decision is based on various initialization parameter settings.
1. PARALLEL_DEGREE_POLICY: It can have 3 values: MANUAL, LIMITED and AUTO.
  • MANUAL – This is the default. It disables Auto DOP, statement queuing and in-memory parallel execution, reverting the behaviour of parallel execution to what it was prior to Oracle Database 11g Release 2 (11.2).
  • LIMITED – Assigns a system-computed degree of parallelism (DOP) only to tables and indexes decorated with a parallel degree of DEFAULT.
  1. Tables and indexes that have a specific DOP specified use that specified DOP.
  2. Parallel statement queuing and in-memory parallel execution are disabled.
  3. The optimizer computes the optimal degree-of-parallelism value based on estimated execution statistics.
  • AUTO
  1. Considers assigning a system-computed DOP to all tables.
  2. Enables parallel statement queuing: if the requested or required DOP is not possible because parallel servers are busy, Oracle defers statement execution rather than downgrading or serializing the SQL, until enough parallel slaves become available.
  3. Enables in-memory parallel execution.
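A minimal sketch of switching between these policies, either system-wide or, for experimentation, at session level (the values are just the three documented options above):

SQL> alter system set parallel_degree_policy = AUTO;
SQL> -- or, to test the behaviour in a single session without affecting other users:
SQL> alter session set parallel_degree_policy = LIMITED;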

2. PARALLEL_MIN_TIME_THRESHOLD : This specifies the minimum estimated execution time (in seconds) a statement must exceed before Oracle 11gR2 considers it for automatic parallelization; statements estimated to finish faster are simply run serially. Statements above the threshold are parallelized and, with statement queuing enabled, may be delayed until sufficient parallel resources exist. This helps prevent a single parallel query from consuming excessive resources at the cost of other non-parallelizable operations. The default of this parameter is 10 seconds.
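For example, assuming the default does not suit your workload, the threshold can be lowered so that shorter statements also become candidates for Auto DOP (a sketch; 5 seconds is an illustrative value):

SQL> alter system set parallel_min_time_threshold = 5;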

3. PARALLEL_ADAPTIVE_MULTI_USER : If it is set to TRUE, then Oracle will adjust the degree of parallelism based on the overall load on the system. When the system is more heavily loaded, the degree of parallelism will be reduced.

4. If PARALLEL_IO_CAP is set to TRUE in 11g or higher, then Oracle will limit the Degree of  Parallelism to that which the IO subsystem can support. The IO subsystem limits can be calculated by using the procedure DBMS_RESOURCE_MANAGER.CALIBRATE_IO.
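A minimal sketch of running that calibration (the disk count and latency are illustrative values; the procedure runs an actual I/O workload, so it is normally done off-peak):

SQL> set serveroutput on
DECLARE
  l_max_iops  PLS_INTEGER;
  l_max_mbps  PLS_INTEGER;
  l_latency   PLS_INTEGER;
BEGIN
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 4,    -- illustrative
    max_latency        => 10,   -- illustrative, in milliseconds
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_latency);
  DBMS_OUTPUT.PUT_LINE('max_iops='||l_max_iops||', max_mbps='||l_max_mbps);
END;
/

The results should also be visible afterwards in DBA_RSRC_IO_CALIBRATE.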

5. PARALLEL_MAX_SERVERS : To prevent an arbitrary number of parallel processes from running on a system, which may overload it, the parameter PARALLEL_MAX_SERVERS provides a hard upper boundary. Regardless of any other setting, the degree of parallelism cannot exceed what PARALLEL_MAX_SERVERS can support. For most SQL statements, the number of servers required will be twice the degree of parallelism.

6. PARALLEL_DEGREE_LIMIT : This is the maximum DOP that can be used. It can take various values:

  •  CPU (default) : the total CPU count of the system
  •  IO : the limit is determined by the I/O capability of the database host (as measured by I/O calibration)
  •  Any integer greater than 0.

Note that even if you increase the parameter value, the automatically computed degree of parallelism may not increase, because the database may determine that the computed degree of parallelism is sufficient. The DOP that the statement actually runs with is the minimum of the computed (ideal) DOP and the PARALLEL_DEGREE_LIMIT value.

• A degree of parallelism can be specified at the table or index level by using the PARALLEL clause of CREATE TABLE, CREATE INDEX, ALTER TABLE or ALTER INDEX.

• The PARALLEL hint can be used to specify the degree of parallelism for a specific table within a query.
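A minimal sketch of both ways of specifying a DOP (the table name SALES and the DOP of 8 are purely illustrative):

SQL> alter table sales parallel 8;                -- dictionary DOP on the object

SQL> select /*+ parallel(s, 8) */ count(*)        -- statement-level DOP via a hint
     from sales s;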

In my next article in this series, I will practically demonstrate Oracle's behaviour for the various values of the parameter PARALLEL_DEGREE_POLICY, i.e. MANUAL, LIMITED and AUTO.

References:
http://searchitchannel.techtarget.com/feature/Using-parallel-SQL-to-improve-Oracle-database-performance
http://docs.oracle.com/cd/E11882_01/server.112/e25523/parallel002.htm

http://kerryosborne.oracle-guy.com/2014/01/does-parallel_degree_limit-work-with-parallel_degree_policymanual/
http://www.databasejournal.com/features/oracle/oracle-parallel-processing-new-and-improved.html
http://www.pythian.com/blog/secrets-of-oracles-automatic-degree-of-parallelism/
http://hemantoracledba.blogspot.in/2015/02/parallel-execution-1-parallel-hint-and.html?utm_source=feedburner&utm_medium=email&utm_campaign=Feed:+blogspot/JcnHK+(Hemant%27s+Oracle+DBA+Blog)
https://hourim.wordpress.com/2015/02/24/parallel-query-broadcast-distribution-and-temp-space/
http://www.oracle.com/technetwork/articles/datawarehouse/twp-parallel-execution-fundamentals-133639.pdf
https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1657271217898
What Is Auto DOP?
Configuring And Controlling Auto DOP
https://blogs.oracle.com/datawarehousing/entry/parallel_degree_limit_parallel_max
In Memory Parallel execution in Oracle Database 11g R2
Parameter Changes For Parallel Execution In Oracle Database 12cR2

———————————————————————————————

Related links :


Home

Database Index
Automatic Degree Of Parallelism – Part-II
Oracle 11g : Automatic DOP – Parallel Threshold
Oracle 11g :  Parallel Statement Queueing
Parallel_Adaptive_Multi_User
Parallel_Min_Percent
Parallel_Min_Servers

                                           —————-

TUNING PGA : PART – III

In my first article on PGA tuning, I had explained the concept of manual/automatic tuning of PGA. In my second article, I had given guidelines to size the PGA in case of automatic tuning.

In the third and last article on PGA tuning, I will demonstrate both manual and automatic tuning of PGA.

BRIEF OVERVIEW OF THE DEMONSTRATION:

MANUAL TUNING : EFFECT OF VARIOUS SORT AREA SIZES ON SORTING OPERATIONS

— Disable AMM

— Enable manual PGA memory management

— Set sort area size to  different values (64k, 1M, 1G)

— Note the effect on sorts : As sort area size is increased, temporary tablespace usage decreases

AUTOMATIC TUNING

— Enable automatic PGA memory management with pga_aggregate target set to minimum (10M)

— execute query

— Sort spills to disk

— use Pga advisor to estimate correct size of pga

— Increase size of PGA as per advice

— Re-execute query

— Sort does not spill to disk

————————————————–

– SAVE CURRENT PARAMETER SETTINGS IN PFILE

conn / as sysdba

create pfile='/u01/app/oracle/initorcl.ora' from spfile;

– DISABLE AMM

sho parameter memory_target;

alter system set memory_target=0;

sho parameter memory_target;

– ENABLE MANUAL PGA MEMORY MANAGEMENT

alter system set workarea_size_policy=manual;

alter system set pga_aggregate_target=0 scope=spfile;

startup force;

sho parameter workarea

sho parameter pga_agg

@?/sqlplus/admin/plustrce.sql

grant plustrace to hr;

– CREATE TEST TABLE, GATHER STATS

conn hr/hr

create table t as select * from all_objects;

exec dbms_stats.gather_table_stats( user, 'T' );

– END CURRENT SESSION AND START A NEW SESSION TO GET A CONSISTENT ENVIRONMENT IN WHICH NO WORK HAS BEEN DONE YET

conn hr/hr

– FIND OUT SID FOR CURRENT SESSION

HR>select sid from v$mystat where rownum = 1;

sid = 37

– START ANOTHER SESSION FROM WHERE WE CAN MONITOR

MEMORY USAGE FOR SORTING IN EARLIER SESSION

sqlplus / as sysdba

— NOTE DOWN CURRENT MEMORY USED FOR UGA/PGA

AND TEMPORARY TABLESPACE USAGE BY HR’s session

 NOTE THAT TEMPORARY TABLESPACE USAGE = 0

SYSDBA>

select a.name, b.value
from v$statname a, v$sesstat b
where a.statistic# = b.statistic#
and b.sid = 37
and (a.name like '%ga %'
or a.name like '%direct temp%');

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
session uga memory                                                   147180
session uga memory max                                               147180
session pga memory                                                   841832
session pga memory max                                               841832
physical reads direct temporary tablespace                                0
physical writes direct temporary tablespace                               0

– NOTE THAT DISK SORTS = 0

SYSDBA>

select name, value
from v$sysstat
where name like '%sort%';

NAME                                                                  VALUE

—————————————————————- ———-

sorts (memory)                                                         3687
sorts (disk)                                                              0
sorts (rows)                                                          22489

– NOTE THAT TEMPORARY SEGMENT USAGE BY HR = 0 AS NO SORTS HAVE BEEN MADE IN HR SESSION.

SYSDBA>

select username, segtype, blocks, tablespace
from   v$tempseg_usage
where username='HR';

no rows selected

———————————————-

– SET SORT AREA SIZE = 64K IN HR SESSION AND PERFORM THE SORT

———————————————-

HR>

alter session set sort_area_size = 65536;

– NOTE THAT SORT HAS BEEN PERFORMED ON DISK

set autotrace traceonly statistics

select * from t order by 1, 2, 3, 4;

Statistics

———————————————————-

8  recursive calls
296  db block gets
808  consistent gets
3737  physical reads
0  redo size
3064341  bytes sent via SQL*Net to client
41240  bytes received via SQL*Net from client
3713  SQL*Net roundtrips to/from client
0  sorts (memory)
    1  sorts (disk)

55666  rows processed

— NOTE DOWN CURRENT MEMORY USED FOR UGA/PGA AND TEMPORARY TABLESPACE USAGE BY HR’s SESSION

– NOTE THAT TEMPORARY TABLESPACE HAS BEEN USED

. UGA expanded and then shrank (session uga memory < session uga memory max)

. PGA (1406016) is more than earlier value (841832)

SYSDBA>

select a.name, b.value
from v$statname a, v$sesstat b
where a.statistic# = b.statistic#
and b.sid = 21
and (a.name like '%ga %'
or a.name like '%direct temp%');

NAME                                                                  VALUE

—————————————————————- ———-

session uga memory                                                   274344

session uga memory max                                               331568
session pga memory                                                  1406016
session pga memory max                                              1406016
physical reads direct temporary tablespace                             3737
physical writes direct temporary tablespace                            3737
– NOTE THAT DISK SORTS HAVE INCREASED (FROM 0 TO 10)

select name, value
from v$sysstat
where name like '%sort%';

NAME                                                                  VALUE

—————————————————————- ———-

sorts (memory)                                                         8495
sorts (disk)                                                             10
sorts (rows)                                                         221576

———————————————-

– SET SORT AREA SIZE = 1M IN HR SESSION AND PERFORM THE SORT

———————————————-

HR>

alter session set sort_area_size = 1048576;

– NOTE THAT SORTS HAVE AGAIN SPILLED TO DISK AS SORT AREA SIZE IS STILL INSUFFICIENT.

HR>

set autotrace traceonly statistics;

select * from t order by 1, 2, 3, 4;

Statistics

———————————————————-

8  recursive calls
9  db block gets
808  consistent gets
825  physical reads
0  redo size
3064341  bytes sent via SQL*Net to client
41240  bytes received via SQL*Net from client
3713  SQL*Net roundtrips to/from client
0  sorts (memory)
  1  sorts (disk)

55666  rows processed
— NOTE DOWN CURRENT MEMORY USED FOR UGA/PGA AND TEMPORARY TABLESPACE USAGE BY HR’s SESSION

– NOTE THAT TEMPORARY TABLESPACE USAGE IS REDUCED

(current value – earlier value = 4562 – 3737 = 825, which is less than the earlier sort's temporary I/O of 3737)

— UGA EXPANDED FROM EARLIER VALUE OF 274344 TO 1309592 (= MAX VALUE)

— PGA EXPANDED FROM EARLIER VALUE OF 1406016 TO 2483236(= MAX VALUE)

SYSDBA>

select a.name, b.value
from v$statname a, v$sesstat b
where a.statistic# = b.statistic#
and b.sid = 21
and (a.name like '%ga %'
or a.name like '%direct temp%');

NAME                                                                  VALUE

—————————————————————- ———-

session uga memory                                                  1309592
session uga memory max                                              1309592
session pga memory                                                  2483236
session pga memory max                                              2483236
physical reads direct temporary tablespace                             4562
physical writes direct temporary tablespace                            4562

– NOTE THAT TOTAL DISK SORTS (SINCE INSTANCE STRATUP) HAVE INCREASED AS CURRENT SORT SPILLED TO DISK.

SYSDBA>

select name, value
from v$sysstat
where name like '%sort%';

NAME                                                                  VALUE

—————————————————————- ———-

sorts (memory)                                                         9781
sorts (disk)                                                             11
sorts (rows)                                                         280487

———————————————-

– SET SORT AREA SIZE = 1G IN HR SESSION AND PERFORM THE SORT

————————————————

HR>

alter session set sort_area_size = 1073741820;

– NOTE THAT ALL SORTS HAVE BEEN PERFORMED IN MEMORY AS SORT AREA SIZE IS SUFFICIENT

HR>

set autotrace traceonly statistics

select * from t order by 1, 2, 3, 4;

Statistics

———————————————————-

1  recursive calls
0  db block gets
808  consistent gets
0  physical reads
0  redo size
3064341  bytes sent via SQL*Net to client
41240  bytes received via SQL*Net from client
3713  SQL*Net roundtrips to/from client
1  sorts (memory)

  0  sorts (disk)

55666  rows processed
— NOTE DOWN CURRENT MEMORY USED FOR UGA/PGA AND TEMPORARY TABLESPACE USAGE BY HR’s SESSION

– NOTE THAT THE TEMPORARY TABLESPACE IS NOT USED AT ALL AND THE SORT IS PERFORMED COMPLETELY IN THE PGA

(Reads/writes to the temporary tablespace are the same as the earlier value (4562))

— UGA AND PGA EXPANDED

SYSDBA>

select a.name, b.value
from v$statname a, v$sesstat b
where a.statistic# = b.statistic#
and b.sid = 21
and (a.name like '%ga %'
or a.name like '%direct temp%');

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
session uga memory                                                  7404100
session uga memory max                                              7404100
session pga memory                                                  8556012
session pga memory max                                              8556012
physical reads direct temporary tablespace                             4562
physical writes direct temporary tablespace                            4562
– DISK SORTS SHOW THE SAME VALUE (11) AS EARLIER, AS THE SORT WAS PERFORMED COMPLETELY IN MEMORY.

SYSDBA>

select name, value
from v$sysstat
where name like '%sort%';

NAME               VALUE

—————–  ———

sorts (memory)     10641
sorts (disk)       11
sorts (rows)       337538

—————————————–

– ENABLE AUTOMATIC PGA MEMORY MANAGEMENT

SET pga_aggregate_target to its minimum value, i.e. 10M

SYSDBA>

alter system set pga_aggregate_target=10m;

alter system set workarea_size_policy=auto;



sho parameter pga_agg

sho parameter workarea

— EXECUTE QUERY FROM HR SESSION WITH NEW SETTING

NOTE THAT SORT IS SPILLING TO DISK AS THE VALUE OF 10M IS INSUFFICIENT

HR>

select * from t order by 1, 2, 3, 4;

Statistics

———————————————————-
8  recursive calls
9  db block gets
808  consistent gets
827  physical reads
0  redo size
3064341  bytes sent via SQL*Net to client
41240  bytes received via SQL*Net from client
3713  SQL*Net roundtrips to/from client
0  sorts (memory)
          1  sorts (disk)

55666  rows processed

— FIND OUT WHETHER ABOVE QUERY EXECUTION WAS OPTIMAL/ONE PASS/MULTI PASS

– NOTE THAT QUERY WAS NOT EXECUTED IN OPTIMAL MODE (ONE PASS / MULTIPASS EXECS > 0)

HR>

SQL> col sql_text for a25 word_wrapped

      Select SQL_text, sum(OPTIMAL_EXECUTIONS) OPTIMUM_EXECS,
             sum(ONEPASS_EXECUTIONS) ONE_PASS_EXECS, sum(MULTIPASSES_EXECUTIONS) MULTI_PASS_EXECS
      from v$sql s, v$sql_workarea w
      where s.sql_text like 'select * from t order by 1%'
       and s.sql_id=w.sql_id
      group by sql_text;

– LOOK AT THE PGA ADVISOR TO FIND OUT IF THE CURRENT SIZE OF PGA IS APPROPRIATE

SQL> SELECT round (PGA_TARGET_FOR_ESTIMATE/1024/1024) target_mb,
       ESTD_PGA_CACHE_HIT_PERCENTAGE cache_hit_perc,
       ESTD_OVERALLOC_COUNT
FROM   v$pga_target_advice;

TARGET_MB   CACHE_HIT_PERC   ESTD_OVERALLOC_COUNT
---------   --------------   --------------------
       10               92                      3
       12               92                      3
       14               92                      3
       16               92                      3
       18               92                      3
       20               92                      3
       30               92                      3
       40               92                      3
       60               92                      1
       80              100                      0
We should set the PGA_AGGREGATE_TARGET parameter to a value where we avoid any over-allocation, so the lowest PGA_AGGREGATE_TARGET value we can set is 80M (where ESTD_OVERALLOC_COUNT is 0). After eliminating over-allocations, the goal is to maximize the PGA cache hit percentage, based on your response-time requirement and memory constraints.

SYSDBA>alter system set pga_aggregate_target=80m;

– AGAIN EXECUTE QUERY IN HR SESSION AND NOTE THAT SORTS HAVE NOT SPILLED TO DISK.

HR> select * from t order by 1, 2, 3, 4;

55666 rows selected.

Statistics

———————————————————-

1  recursive calls
0  db block gets
808  consistent gets
0  physical reads
0  redo size
3064341  bytes sent via SQL*Net to client
41240  bytes received via SQL*Net from client
3713  SQL*Net roundtrips to/from client
1  sorts (memory)

          0  sorts (disk)

55666  rows processed

— FIND OUT WHETHER ABOVE QUERY EXECUTION WAS OPTIMAL/ONE PASS/MULTI PASS

– NOTE THAT THE QUERY WAS EXECUTED IN OPTIMAL MODE (OPTIMAL EXECS > 0)

SQL> col sql_text for a25 word_wrapped

      Select SQL_text, sum(OPTIMAL_EXECUTIONS) OPTIMUM_EXECS,
             sum(ONEPASS_EXECUTIONS) ONE_PASS_EXECS,
             sum(MULTIPASSES_EXECUTIONS) MULTI_PASS_EXECS
      from   v$sql s, v$sql_workarea w
      where  s.sql_text like 'select * from t order by 1%'
       and   s.sql_id=w.sql_id
      group by sql_text;
– RESTORE EARLIER STATUS

SYSDBA>

drop table hr.t purge;

create spfile from pfile='/u01/app/oracle/initorcl.ora';

startup force;

 


——————————————————————-

Related links:

Tuning Index
Tuning PGA : Part – I
Tuning PGA : Part – II

——————–

TUNING PGA : PART – II

In continuation with my earlier article (Tuning PGA Part-I), in this article I will discuss how we can decide on an appropriate value of PGA_AGGREGATE_TARGET for our database.

When configuring a brand new instance, it is hard to know precisely the appropriate setting for PGA_AGGREGATE_TARGET. You can determine this setting in three stages:
1. Make a first estimate for PGA_AGGREGATE_TARGET. By default, Oracle Database uses 20% of the SGA size. However, this initial setting may be too low for a large DSS system.
2. Run a representative workload on the instance and monitor performance, using PGA statistics collected by Oracle Database, to see whether the maximum PGA size is under-configured or over-configured.
3. Tune PGA_AGGREGATE_TARGET, using Oracle PGA advice statistics.

Setting PGA_AGGREGATE_TARGET Initially

The value of the PGA_AGGREGATE_TARGET initialization parameter should be set based on the total amount of memory available for the Oracle database instance. This value can then be tuned and dynamically modified at the instance level. The following example illustrates a typical situation.
Let total physical memory available on the system = 4 GB
Memory left for the OS and other non-Oracle applications running on the same system = 20%, i.e. 0.8 GB
Memory available for Oracle instance = 80% i.e. 3.2 GB
You must then divide the resulting memory between the SGA and the PGA.
For OLTP systems, the PGA memory = 20%, i.e. 3.2 * 0.2 = 0.64 GB, leaving 80% for the SGA.
For DSS systems running large, memory-intensive queries, PGA memory can typically use up to 70% of that total (i.e. 3.2 * 0.7 ≈ 2.2 GB).
Good initial values for the parameter PGA_AGGREGATE_TARGET might be:
For OLTP: PGA_AGGREGATE_TARGET = (total_mem * 80%) * 20%
For DSS: PGA_AGGREGATE_TARGET = (total_mem * 80%) * 50% (a conservative starting point, leaving room to grow towards the 70% maximum)
where total_mem is the total amount of physical memory available on the system.
In this example, with a value of total_mem equal to 4 GB, you can initially set PGA_AGGREGATE_TARGET to 1600 MB for a DSS system and to 655 MB for an OLTP system.

Monitoring the Performance of the Automatic PGA Memory Management

Before starting the tuning process, you need to know how to monitor and
interpret the key statistics collected by Oracle Database to help in assessing the performance of the automatic PGA memory management component. Several dynamic performance views are available for this purpose:
 V$PGASTAT
This view gives instance-level statistics on the PGA memory usage and the
automatic PGA memory manager. Various columns of interest are:
over allocation count:
This statistic is cumulative from instance startup. Over-allocating PGA memory can happen if the value of PGA_AGGREGATE_TARGET is too small to accommodate the untunable component of the PGA plus the minimum memory required to execute the work area workload. When this happens, Oracle Database cannot honor the initialization parameter PGA_AGGREGATE_TARGET, and extra PGA memory must be allocated. If over-allocation occurs, you should increase the value of PGA_AGGREGATE_TARGET using the information provided by the advice view V$PGA_TARGET_ADVICE.
extra bytes read/written:
When a work area cannot run optimally, one or more extra passes are performed over the input data. Extra bytes read/written represents the number of bytes processed during these extra passes since instance startup. This number is also used to compute the cache hit percentage. Ideally, it should be small compared to the total bytes processed.
cache hit percentage:
This metric is computed by Oracle Database to reflect the performance of the PGA memory component. It is cumulative from instance startup. A value of 100% means that all work areas executed by the system since instance startup have used an optimal amount of PGA memory.  When a work area cannot run optimally, one or more extra passes is performed over the input data. This reduces the cache hit percentage in proportion to the size of the input data and the number of extra passes performed.
 V$PROCESS
This view has one row for each Oracle process connected to the instance. The
columns PGA_USED_MEM, PGA_ALLOC_MEM, PGA_FREEABLE_MEM and PGA_MAX_MEM can be used to monitor the PGA memory usage of these processes.
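For example, a minimal sketch that aggregates these columns to show instance-wide PGA consumption across all processes (output will obviously vary):

SQL> select round(sum(pga_used_mem)/1024/1024)     used_mb,
            round(sum(pga_alloc_mem)/1024/1024)    alloc_mb,
            round(sum(pga_freeable_mem)/1024/1024) freeable_mb,
            round(sum(pga_max_mem)/1024/1024)      max_mb
     from   v$process;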
 V$SQL_WORKAREA_HISTOGRAM
This view shows the number of work areas executed with optimal memory size,
one-pass memory size, and multi-pass memory size since instance startup.
Statistics in this view are subdivided into buckets that are defined by the optimal memory requirement of the work area. Each bucket is identified by a range of optimal memory requirements specified by the values of the columns LOW_OPTIMAL_SIZE and HIGH_OPTIMAL_SIZE.
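A typical query of this view (a sketch along the lines of the standard documentation example), showing per optimal-size bucket how many work areas ran in optimal, one-pass and multi-pass mode:

SQL> select low_optimal_size/1024        low_kb,
            (high_optimal_size+1)/1024   high_kb,
            optimal_executions, onepass_executions, multipasses_executions
     from   v$sql_workarea_histogram
     where  total_executions <> 0
     order  by low_optimal_size;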
V$SQL_WORKAREA_ACTIVE
You can use this view to display the work areas that are active (or executing) in the instance. Small active sorts (under 64 KB) are excluded from the view. Use this view to precisely monitor the size of all active work areas and to determine if these active work areas spill to a temporary segment.
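A sketch for monitoring active work areas and spotting those that have spilled to a temporary segment (TEMPSEG_SIZE is populated only when a spill has occurred):

SQL> select sid, operation_type, policy,
            trunc(expected_size/1024)   expected_kb,
            trunc(actual_mem_used/1024) actual_kb,
            number_passes,
            trunc(tempseg_size/1024)    tempseg_kb
     from   v$sql_workarea_active
     order  by expected_size;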
 V$SQL_WORKAREA
Oracle Database maintains cumulative work area statistics for each loaded
cursor whose execution plan uses one or more work areas. Every time a work area is deallocated, the V$SQL_WORKAREA table is updated with execution statistics for that work area.
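For instance, a sketch that lists cursors whose work areas did not always run in optimal mode, so they can be investigated further:

SQL> select sql_id, child_number, operation_type,
            optimal_executions, onepass_executions, multipasses_executions,
            round(max_tempseg_size/1024/1024) max_temp_mb
     from   v$sql_workarea
     where  onepass_executions + multipasses_executions > 0;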

Tuning PGA_AGGREGATE_TARGET

To help you tune the initialization parameter PGA_AGGREGATE_TARGET, Oracle Database provides the V$PGA_TARGET_ADVICE  view.
V$PGA_TARGET_ADVICE
This view predicts how the statistics cache hit percentage and over
allocation count in V$PGASTAT will be impacted if you change the value of the initialization parameter PGA_AGGREGATE_TARGET. A typical query of this view:
SELECT round(PGA_TARGET_FOR_ESTIMATE/1024/1024) target_mb,
       ESTD_PGA_CACHE_HIT_PERCENTAGE cache_hit_perc,
       ESTD_OVERALLOC_COUNT
  FROM V$PGA_TARGET_ADVICE;
How to Tune PGA_AGGREGATE_TARGET ?
1.  If over allocation count > 0, it indicates that PGA_AGGREGATE_TARGET is too small to even meet the minimum PGA memory needs. If PGA_AGGREGATE_TARGET is set within the over allocation zone, the memory manager will over-allocate memory and actual PGA memory consumed will be more than the limit you set. Set PGA_AGGREGATE_TARGET so that there is no memory over-allocation; avoid setting it in the over-allocation zone.
2. After eliminating over-allocations, aim at maximizing the PGA cache hit percentage, based on your response-time requirement and memory constraints.
   – If available memory >= optimal memory for 100% PGA cache hit %,
        set PGA_AGGREGATE_TARGET = optimal memory as in PGA_TARGET_ADVICE ,
         if AMM is enabled
             Rest of the available memory will be allocated to SGA
         else
              Rest of the available memory will be returned to Operating system
   – If available memory < optimal memory for 100% PGA cache hit %,
        set PGA_AGGREGATE_TARGET = available memory and settle for a lower PGA cache hit %.
V$PGA_TARGET_ADVICE_HISTOGRAM 
This view predicts how the statistics displayed by the performance view will
be impacted if you change the value of the initialization parameter PGA_AGGREGATE_TARGET. You can use the dynamic view V$PGA_TARGET_ADVICE_HISTOGRAM to view detailed information on the predicted number of optimal, one-pass and multi-pass work area executions for the set of PGA_AGGREGATE_TARGET values you use for the prediction.
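A sketch (close to the standard documentation example) that predicts the execution mix for a PGA_AGGREGATE_TARGET twice the current value (PGA_TARGET_FACTOR = 2):

SQL> select low_optimal_size/1024       low_kb,
            (high_optimal_size+1)/1024  high_kb,
            estd_optimal_executions, estd_onepass_executions, estd_multipasses_executions
     from   v$pga_target_advice_histogram
     where  pga_target_factor = 2
     and    estd_total_executions <> 0
     order  by low_optimal_size;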
V$SYSSTAT and V$SESSTAT
Statistics in the V$SYSSTAT and V$SESSTAT views show the total number of work areas executed with optimal memory size, one-pass memory size, and
multi-pass memory size. These statistics are cumulative since the instance or the session was started.
The following query gives the total number and the percentage of times work areas were executed in these three modes since the instance was started:
SELECT name profile, cnt, decode(total, 0, 0, round(cnt*100/total)) percentage
 FROM (SELECT name, value cnt, (sum(value) over ()) total 
               FROM V$SYSSTAT 
               WHERE name like 'workarea exec%');


In the third and last article on PGA tuning, I will demonstrate both manual and automatic tuning of PGA.
———————————————————————————————————–

Related links:
Home

Tuning Index
Tuning  PGA : Part – I
Tuning  PGA : Part – III

                                                   ——————

TUNING PGA : PART – I

 In this article, I will discuss tuning of the PGA. First, we will look at the scenario where the PGA was tuned manually, find out what the problems were, and see how those problems were resolved by automatic tuning of the PGA.

Program Global Area or PGA as it is popularly called contains data and control information about a server process. This memory is specific to a single server process and is not accessible by any other server process. Because the PGA is process-specific, it is never allocated in the SGA.

Various  components of PGA are as follows:

  • Private SQL Area: Holds information about a parsed SQL statement and other session-specific information for processing. A private SQL area is further divided into the following areas:
    • Run-time area: Contains query execution state information, for example, the number of rows retrieved so far in a full table scan.
    • Persistent area: Contains bind variable values.
  • SQL Work Areas: Used for sorting, hash operations etc.
  • User Global Area (UGA): It essentially stores the session state. It contains session information such as logon information, buffers read as a result of direct path reads and other information required by a database session. The UGA must be available to a database session for the life of the session. In a dedicated server connection, since each session is associated with a dedicated server process having its own PGA, the UGA is stored in the PGA. In a shared server connection, since a session can use any one of the shared servers, the UGA must be accessible to each of the shared server processes and hence it cannot be stored in the PGA. Therefore, when using shared server connections, the UGA is stored in the SGA.

  For complex queries, work areas in PGA are required to perform memory intensive operations.  For example, a sort operator uses the sort area to sort a set of rows. Similarly, a hash join operator uses a hash area to build a hash table from its left input, whereas a bitmap merge uses the bitmap merge area to merge data retrieved from scans of multiple bitmap indexes.

Memory intensive operations which make use of PGA are

– Sort based operators (Order by, group by, rollup, Distinct etc.)
– Hash join
– Bitmap related operations
– Write buffers used by bulk load operations (Direct Path Load)

                              HOW DOES SIZE OF PGA AFFECT APPLICATION’S PERFORMANCE

If the amount of data to be processed fits in the work area, i.e. the size of the work area is optimal, all the operations are performed in memory.

If the memory required by the amount of data to be processed is more than the available work area, the input is divided into smaller pieces. Then, some pieces of data are processed in memory while the rest are spilled to the temporary tablespace.

When one extra pass has to be performed over all or some of the input data, the corresponding size of the available work area is called the one-pass size, i.e. if data of a size equal to the one-pass size is sorted with the currently available PGA work area, one extra pass over the data will have to be made.

When the available work area size is even less than the one-pass threshold, multiple passes over the input data are needed, causing a dramatic increase in response time. This is referred to as the multi-pass size of the work area.

In an OLTP system, the size of the input data is small and hence operations mostly run in optimal mode.

In DSS systems, where the input data is very large, it is important to size the work area for good performance, as in-memory operations are much faster than temporary disk operations. Generally, bigger work areas can significantly improve the performance of an operation at the cost of higher memory consumption.

Ideally, the size of the work area should be enough to accommodate the input data.

                        MANUAL PGA MEMORY MANAGEMENT

Earlier releases required DBA to manually specify the maximum workarea size for each type of SQL operator.

PGA has two components : Tunable and untunable

Untunable PGA : consists of

  • Context information of each session
  • Each open cursor
  • PL/SQL, OLAP or Java memory

This component of the PGA can't be tuned, i.e. whatever memory is needed will be consumed, else the operation fails.

Tunable PGA : consists of memory available to SQL work areas (used by various sort operations)

  •  approx. 90% of PGA in DSS systems
  •  approx. 10% of PGA in OLTP systems

This component is tunable in the sense that memory available and hence consumed may be less than what is needed  and the operation will still complete but may spill to disk. Hence, increasing available memory may improve performance of such operations.

                   MANUAL PGA MEMORY MANAGEMENT

In manual management, DBA manually specifies the maximum work area size for each type of SQL operator (Sort, hash ,etc.) i.e. when WORKAREA_SIZE_POLICY = Manual

SORT_AREA_SIZE, HASH_AREA_SIZE, CREATE_BITMAP_AREA_SIZE, BITMAP_MERGE_AREA_SIZE parameters decided the size of memory available for each of these operations per server process.
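A minimal sketch of what manual management looks like in practice (the sizes are illustrative; each setting applies per server process):

SQL> alter session set workarea_size_policy = manual;
SQL> alter session set sort_area_size = 1048576;    -- 1 MB per sort work area
SQL> alter session set hash_area_size = 2097152;    -- 2 MB per hash work area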

e.g. SORT_AREA_SIZE = Memory available for sorting to each server process (each user in dedicated server connection)

Let SORT_AREA_SIZE = 1M

As no. of users performing sort operations increase, the total PGA memory for the instance increases linearly as n * 1 M where n = no. of sessions

Implication :

– Each user session will use 1M of PGA for sorting, irrespective of the size of the data.

If the size of the data is > 1M, the session's sorts will spill to disk even though there might be PGA memory available to the instance, i.e. a session won't be able to use the available memory because the memory allocated to the session has been fixed by SORT_AREA_SIZE.

If the size of the data is << 1M, the sorting will take place completely in memory but will still consume 1M of memory, which is more than what is needed. The extra memory can't be transferred to another session needing it.

– If SORT_AREA_SIZE is small, sorts spill to disk.

If SORT_AREA_SIZE is made large and the number of sessions performing sorts is so large that the total memory required by them (n * 1M) is more than the available memory (total memory available to the Oracle instance – SGA), paging and swapping will take place to satisfy that memory requirement, resulting in heavily degraded performance.

Hence, DBA had to continuously monitor the user workload and decide on the appropriate value of SORT_AREA_SIZE such that neither the sorts spilled to disk nor swapping/paging took place.

                   AUTOMATIC PGA MEMORY MANAGEMENT

Automatic PGA memory management resolved above mentioned issues by allowing DBA to allocate an aggregate PGA to all the server processes for all the SQL operations which could be distributed as per the requirement. In this case, Oracle dynamically adapts the SQL memory allocation based on

  • PGA memory available
  • SQL operator needs
  • System workload

With automatic PGA memory management, sizing of SQL work areas for all dedicated server sessions is automatic and all *_AREA_SIZE initialization parameters are ignored for these sessions. At any given time, the total amount of PGA memory available to active work areas on the instance is automatically derived from the parameter PGA_AGGREGATE_TARGET. This amount is set to the value of PGA_AGGREGATE_TARGET minus the PGA memory allocated for other purposes (for example, session memory). The resulting PGA memory is then allotted to individual active work areas based on their specific memory requirements.

  To implement it, two parameters need to be set.

WORKAREA_SIZE_POLICY = AUTO
PGA_AGGREGATE_TARGET = Target size of PGA for the total instance.

Given the specified target PGA, Oracle itself tunes the size of individual PGA’s depending upon the workload.
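A minimal sketch of those two settings (the 2G target is purely illustrative):

SQL> alter system set workarea_size_policy = auto;
SQL> alter system set pga_aggregate_target = 2G;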

IMPLICATIONS:

– As the workload changes, memory available to each session changes dynamically while keeping the sum of all PGA allocations under the threshold PGA_AGGREGATE_TARGET.

e.g.

If PGA_AGGREGATE_TARGET = 10G,
let Tunable PGA = 5G
If one session is performing sort needing 5G workarea,
he will get workarea = 5G
If two sessions performing the same sort,
They will get 5/2 = 2.5 G each
and so on..

i.e. Sort area for each session is not fixed. Rather it changes dynamically.

– PGA_AGGREGATE_TARGET is a target upper limit, not a value that is allocated when the instance is started, i.e. memory will be allocated only when there is a sorting operation which needs it. Otherwise, it will be returned to the operating system or transferred to the SGA if AMM is enabled.

– HOW IS MEMORY ALLOCATED TO PGA – PRACTICAL IMPLEMENTATION

OVERVIEW

– Setup
– Disable AMM
– Create  3 test tables
. hr.small(9M),
. scott.medium(17M),
. sh.large (33M)
– Set workarea_size_policy = AUTO
– Check current value of  PGA_AGGREGATE_TARGET
– Set PGA_AGGREGATE_TARGET to current value
– To check that PGA memory allocated to the instance can exceed even the specified PGA_AGGREGATE_TARGET if fixed PGA requirement is more

– Check  current allocation/PGA used/overallocation count
– Create a  PL/SQL array requiring large fixed PGA
– Check that  PGA allocated > PGA_AGGREGATE_TARGET
PGA allocated > PGA in use
over allocation count increased

– To check that the overallocation count increases if the fixed memory requirement of the SQL work areas is not satisfied by the extra PGA memory allocated to the instance (PGA allocated > PGA in use)

– Execute queries on 3 test tables
– Check that overallocation count increases as specific requirement of the SQL workareas is not met by the extra memory allocated in PGA.

—————— IMPLEMENTATION ———————–

- SETUP

- Check if AMM enabled

SQL>CONN / AS SYSDBA

sho parameter memory_target

  – Disable AMM

ALTER SYSTEM SET MEMORY_TARGET = 0;

  – Create 3 test tables
. hr.small(9M),
. scott.medium(17M),
. sh.large (33M)

SQL>ALTER USER SCOTT IDENTIFIED BY tiger account unlock;

ALTER USER Sh IDENTIFIED BY sh account unlock;
GRANT SELECT_CATALOG_ROLE TO HR, SCOTT, SH;
GRANT EXECUTE ON DBMS_STATS TO HR, SCOTT, SH;

CONN HR/hr
drop table hr.small purge;
CREATE TABLE SMALL AS SELECT * FROM dba_OBJECTS;
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'SMALL');

col segment_name for a30
SELECT SEGMENT_NAME, BYTES/1024/1024 MB
FROM   USER_SEGMENTS
WHERE  SEGMENT_NAME = 'SMALL';

SEGMENT_NAME                           MB
—————————— ———-
SMALL                                   9

CONN SCOTT/tiger

drop table scott.medium purge;
CREATE TABLE MEDIUM AS SELECT * FROM dba_OBJECTS;
INSERT INTO MEDIUM SELECT * FROM MEDIUM;
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'MEDIUM');

col segment_name for a30

SELECT SEGMENT_NAME, BYTES/1024/1024 MB
FROM   USER_SEGMENTS
WHERE  SEGMENT_NAME = 'MEDIUM';
SEGMENT_NAME                           MB
—————————— ———-
MEDIUM                                 17

CONN Sh/sh

drop table sh.large purge;
CREATE TABLE LARGE AS SELECT * FROM dba_OBJECTS;
INSERT INTO LARGE SELECT * FROM LARGE;
INSERT INTO LARGE SELECT * FROM LARGE;
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'LARGE');

col segment_name for a30
SELECT SEGMENT_NAME, BYTES/1024/1024 MB
FROM   USER_SEGMENTS
WHERE  SEGMENT_NAME = 'LARGE';
SEGMENT_NAME                           MB
—————————— ———-
LARGE                                  33

 – Set WORKAREA_SIZE_POLICY = AUTO

SQL>CONN / AS SYSDBA
ALTER SYSTEM SET WORKAREA_SIZE_POLICY = AUTO;

- Check current value of PGA_AGGREGATE_TARGET

SHO PARAMETER PGA_AGGREGATE_TARGET

NAME                                 TYPE        VALUE
———————————— ———–
pga_aggregate_target                 big integer 496M

– Note that current PGA allocated <  PGA_AGGREGATE_TARGET

as currently only the PGA required for fixed areas has been allocated
COL NAME FOR A35

SELECT NAME, VALUE/1024/1024 VALUE_MB
FROM   V$PGASTAT
WHERE NAME IN ('aggregate PGA target parameter', 'total PGA allocated');

NAME                                  VALUE_MB
———————————– ———-
aggregate PGA target parameter             496
total PGA allocated                 56.3554688
- Set PGA_AGGREGATE_TARGET to current value
ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 496M;
———————

To check that PGA memory allocated to the instance can exceed even the specified PGA_AGGREGATE_TARGET if fixed PGA requirement is more
———————–

  – Check current allocation /  PGA used / overallocation count

– Note that
– PGA allocated (57M) > PGA in use (44M) i.e. some extra memory is allocated
– Overallocation count  = 0 because currently
the fixed memory requirement  < PGA_AGGREGATE_TARGET (496M)

COL NAME FOR A35

SELECT NAME, VALUE/1024/1024 VALUE_MB
FROM   V$PGASTAT
WHERE NAME IN ('aggregate PGA target parameter',
'total PGA allocated',
'total PGA inuse',
'over allocation count');

NAME                                  VALUE_MB
———————————– ———-
aggregate PGA target parameter             496
total PGA inuse                     44.9199219
total PGA allocated                 57.2304688
over allocation count                        0

   – Create a  PL/SQL array requiring large fixed PGA

SQL>create or replace package demo_pkg    As
type array is table of char(2000) index by binary_integer;
g_data array;
end;
/

- Fill up the char array (a CHAR datatype is blank-padded, so each of these array elements is exactly 2,000 characters in length):

SQL> begin
for i in 1 .. 200000
loop
demo_pkg.g_data(i) := 'x';
end loop;
end;
/

    – Check that 

      PGA allocated (509M)  > PGA_AGGREGATE_TARGET (496M)

       PGA allocated (509M) > PGA in use (493M)

over allocation count increased (0 to 2)

COL NAME FOR A35

SELECT NAME, VALUE/1024/1024 VALUE_MB
FROM   V$PGASTAT
WHERE NAME IN ('aggregate PGA target parameter',
'total PGA allocated',
'total PGA inuse')
union
SELECT NAME, VALUE
FROM   V$PGASTAT
WHERE NAME IN ('over allocation count');

NAME                                  VALUE_MB
———————————– ———-
aggregate PGA target parameter             496
over allocation count                        2
total PGA allocated                 509.044922
total PGA inuse                     493.198242

- To check that overallocation count increases if fixed memory requirement of SQL work area is not satisfied by the extra PGA memory allocated to the instance  (PGA allocated > PGA in use)

- Execute queries on 3 test tables

– Check that overallocation count increases as specific requirement of the SQL workareas is not met by the extra memory allocated in PGA.

HR>CONN HR/hr

set autotrace traceonly
select * from small order by 1,2,3,4,5,6;
SYS>COL NAME FOR A35
SELECT NAME, VALUE/1024/1024 VALUE_MB
FROM   V$PGASTAT
WHERE NAME IN ('aggregate PGA target parameter',
'total PGA allocated',
'total PGA inuse')
union
SELECT NAME, VALUE
FROM   V$PGASTAT
WHERE NAME IN ('over allocation count');

SCOTT>CONN scott/tiger

set autotrace traceonly
select * from medium order by 1,2,3,4,5,6;
SYS>COL NAME FOR A35
SELECT NAME, VALUE/1024/1024 VALUE_MB
FROM   V$PGASTAT
WHERE NAME IN ('aggregate PGA target parameter',
'total PGA allocated',
'total PGA inuse')
union
SELECT NAME, VALUE
FROM   V$PGASTAT
WHERE NAME IN ('over allocation count');

SH>CONN sh/sh

set autotrace traceonly
select * from large order by 1,2,3,4,5,6;

SYS>COL NAME FOR A35
SELECT NAME, VALUE/1024/1024 VALUE_MB
FROM   V$PGASTAT
WHERE NAME IN ('aggregate PGA target parameter',
'total PGA allocated',
'total PGA inuse')
union
SELECT NAME, VALUE
FROM   V$PGASTAT
WHERE NAME IN ('over allocation count');

————————————————————

— cleanup —

sql>conn / as sysdba

drop table hr.small purge;
drop table scott.medium purge;
drop table sh.large purge;

– enable AMM if it was initially enabled

SQL>ALTER SYSTEM SET MEMORY_TARGET= <inital value>;

In my next article (Tuning PGA Part-II)  in this series, I will discuss how to set an appropriate value for PGA_AGGREGATE_TARGET.

References:

Expert Oracle Database Architecture by Thomas Kyte
Oracle database operating system memory allocation management for PGA
https://docs.oracle.com/cd/E11882_01/server.112/e40540/memory.htm#CNCPT803
http://ksun-oracle.blogspot.in/2015/09/limit-pga-memory-usage.html

——————————————————————————————————

Related links:

Home

Tuning Index
Tuning PGA : Part – II
Tuning PGA : Part – III

                                                  ——————

SIMULATE BUFFER BUSY WAIT AND IDENTIFY HOT OBJECT

In my earlier post on Buffer Cache Wait Events, I had talked about buffer busy waits. 
 
Let's see a demonstration of how to simulate a buffer busy wait scenario and how to identify the object involved in the buffer busy wait.
Login to your database as sys user
SYS> create user test identified by test;
     grant dba to test;
     grant select on v_$session to test;
Now Connect to Test User and create a table with 10000 records .
SYS> conn test/test
TEST> create table t nologging
      as
      select rownum t1,rownum+1 t2
      from dual
      connect by level<=10000;
The next thing would be to create a package which will allow me to access the same set of blocks from multiple sessions at the same time.
TEST>CREATE OR REPLACE PACKAGE GEN_BUF_BUSY
    authid current_user
AS
TYPE t_RefCur IS REF CURSOR;
FUNCTION RetDSQL(p_sql IN VARCHAR2) RETURN t_RefCur;
PROCEDURE RUNSELECT;
PROCEDURE RUNUPDATE1;
PROCEDURE RUNUPDATE2;
procedure kill_session(p_username varchar2);
END GEN_BUF_BUSY;
/
The package has one select procedure and two update procedures updating different columns of the table at the same rows.
TEST>CREATE OR REPLACE PACKAGE BODY GEN_BUF_BUSY AS
FUNCTION RetDSQL(p_sql IN VARCHAR2) RETURN t_RefCur
as
v_RetCur t_RefCur;
BEGIN
OPEN v_RetCur FOR p_sql;
RETURN v_RetCur;
end RETDSQL;
PROCEDURE RUNSELECT
as
cursor dummy is select * from t;
c1 t_refcur;
rec dummy%rowtype;
begin
for i in 1..1000000
loop
c1:=retDSQL('select * from t');
loop
fetch c1 into rec;
exit when c1%notfound;
end loop;
close c1;
end loop;
end RUNSELECT;
PROCEDURE RUNUPDATE1
as
BEGIN
for i in 1..1000000
loop
update t set t1=rownum;
commit;
end loop;
END RUNUPDATE1;
PROCEDURE RUNUPDATE2
as
BEGIN
for i in 1..1000000
loop
update t set t2=rownum;
commit;
end loop;
END RUNUPDATE2;
procedure kill_session(p_username varchar2)
as
cursor c1 is select sid,serial# from sys.v_$session where username=p_username;
rec c1%rowtype;
begin
open c1;
loop
fetch c1 into rec;
exit when c1%notfound;
execute immediate 'alter system kill session '''||rec.sid||','||rec.serial#||'''';
end loop;
end;
end GEN_BUF_BUSY;
/
Now open Six sessions
Session 1 :  SYS User
Session 2 :  TEST
Session 3 :  TEST
Session 4 :  TEST
Session 5 :  TEST
Session 6 :  TEST
and run the following procedures
exec GEN_BUF_BUSY.runUPDATE1; — Run on SESSION 2
exec GEN_BUF_BUSY.runUPDATE2; — Run on SESSION 3
exec GEN_BUF_BUSY.runSELECT;  — Run on SESSION 4
exec GEN_BUF_BUSY.runSELECT;  — Run on SESSION 5
exec GEN_BUF_BUSY.runSELECT;  — Run on SESSION 6
From session one (SYS) monitor by following queries 
SYS> col event for a30
     select event,p1 file#,p2 block#
     from v$session_wait
     where sid in
         (select sid from v$session
          where username='TEST');
EVENT                               FILE#     BLOCK#
—————————— ———- ———-
log buffer space                        0          0
SQL*Net message from client    1650815232          1
buffer busy waits                      4     190828
buffer busy waits                       4     190828
enq: TX – row lock contention  1415053318     131096
SQL*Net message from client    1650815232          1
P1 stands for File id and P2 stands for Block id when you work with Buffer Busy Waits.
Now we have two methods to identify the object the wait belongs to :
1.Using the dump of the file/block
2. Using DBA_EXTENTS
Let’s use both of them :
1. Take a dump of the specified file/block
    Identify the seg/obj  in trace file . It is given in hexadecimal format in  trace file.
   Convert the hexadecimal no. to decimal no.
   Identify the object from dba_objects
- Take the dump of the specified file/block
SYS>alter system dump datafile 4 block 190828 ;
- Identify the tracefile of the session
SYS> col value for a70
          select  value from v$diag_info
          where name like '%Trace File%';
VALUE
———————————————————————-
/u01/app/oracle/diag/rdbms/orcl1/orcl1/trace/orcl1_ora_9088.trc
-    Identify the seg/obj  in trace file .
SYS>ho vi /u01/app/oracle/diag/rdbms/orcl1/orcl1/trace/orcl1_ora_9088.trc
Object id on Block? Y
 seg/obj: 0x128cf csc: 0x00.fdd256  itc: 3  flg: E  typ: 1 – DATA
     brn: 0  bdba: 0x102e968 ver: 0x01 opc: 0
     inc: 0  exflg: 0
-  The object-id in hexadecimal format is 128cf
-   Convert the hexadecimal no. to decimal no. – It can be done using simple maths or by creating a function to do it.
     . Using mathematics
sql>select 15 + 12*16 + 8*16*16 + 2*16*16*16 + 1*16*16*16*16
    from dual;
15+12*16+8*16*16+2*16*16*16+1*16*16*16*16
—————————————–
                                    75983
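As a side note (not part of the original method), Oracle can also do this conversion directly with a hexadecimal format mask, which avoids the manual arithmetic; the format string just needs at least as many x's as the hex string has digits:

SQL> select to_number('128cf', 'xxxxx') data_object_id from dual;

DATA_OBJECT_ID
--------------
         75983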
SQL>  col owner for a30
      col object_name for a25
  
      select owner,object_name
      from dba_objects
      where data_object_id = 75983;
OWNER                          OBJECT_NAME
—————————— ——————–
TEST                           T
OR
     . Create a function to convert hexadecimal to decimal and then use the function to find the object
CREATE OR REPLACE FUNCTION HEX2DEC (hexnum IN CHAR) RETURN NUMBER IS
  i                 NUMBER;
  digits            NUMBER;
  result            NUMBER := 0;
  current_digit     CHAR(1);
  current_digit_dec NUMBER;
BEGIN
  digits := LENGTH(hexnum);
  FOR i IN 1..digits LOOP
     current_digit := SUBSTR(hexnum, i, 1);
     IF current_digit IN ('A','B','C','D','E','F') THEN
        current_digit_dec := ASCII(current_digit) - ASCII('A') + 10;
     ELSE
        current_digit_dec := TO_NUMBER(current_digit);
     END IF;
     result := (result * 16) + current_digit_dec;
  END LOOP;
  RETURN result;
END hex2dec;
/
SYS>  select owner,object_name
      from dba_objects
      where data_object_id=hex2dec(upper('128cf'));
OWNER                          OBJECT_NAME
—————————— ——————–
TEST                           T
2. Use DBA_EXTENTS : Identify the object to which the specified  block belongs
SQL>col owner for a10
    col segment_name for a15
    select owner, segment_name
    from dba_extents
    where file_id = 4
      and 190828 between block_id and block_id+blocks-1;
OWNER      SEGMENT_NAME
———- —————
TEST       T

References:

http://www.oracledba.in/Articles/display_article.aspx?article_id=784

————————————————————————————————————

Related links: 

Home

Database Index

Tuning Index 
Buffer Cache Wait Events
Clustering Factor Demystified
Direct Read Enhancements in 11g

Oracle Checkpoints

 

 

———————–

TUNING SHARED POOL : A DEMONSTRATION

In my earlier post, Cursor Sharing Demystified, I had discussed and demonstrated the effects of various values of the parameter CURSOR_SHARING. In this post, I will demonstrate that hard parsing can be reduced by
  • Replacing literals with bind variables
  • Setting CURSOR_SHARING=SIMILAR

Overview

1. create a procedure to query sh.sales without using bind variables
2. set cursor_sharing to exact
3. Take begin snapshot for awr/statspack
4. run the procedure
5. Take end snapshot for awr/statspack
6. Generate awr/statspack report
7. Both reports indicate low soft parsing
8. Resolve the problem using

– set cursor_sharing to similar

OR

– set cursor_sharing to exact and use bind variables in the procedure

9. Run the procedure and generate awr/statspack report
10. Verify that soft parsing has increased considerably

————————————————————————-

Implementation

SQL>conn sh/sh

— create package test in which we are querying table sh.sales repeatedly (50000 times)  without using bind variables

sql>
CREATE OR REPLACE PACKAGE test AS
     PROCEDURE Workload;
     PROCEDURE Foo(CUSTID IN sh.sales.cust_id%TYPE);
END;
/

CREATE OR REPLACE PACKAGE BODY test AS
PROCEDURE Workload IS
BEGIN
FOR i in 1 .. 50000
LOOP
Foo(i);
END LOOP;
END Workload;
PROCEDURE Foo(CUSTID IN sh.sales.cust_id%TYPE) IS
BEGIN
DECLARE
l_stmt VARCHAR2(2000);
BEGIN
l_stmt := 'SELECT * FROM sh.sales s WHERE s.cust_id = ' ||
TO_CHAR(CUSTID);
EXECUTE IMMEDIATE l_stmt;
END;
END Foo;
END;
/

— Create begin snapshot for statspack report —

sql>conn perfstat/perfstat
       exec statspack.snap;

— create begin snapshot for awr report —

sql>conn / as sysdba
    exec dbms_workload_repository.create_snapshot();
    alter system set cursor_sharing = exact scope=memory;

— execute the workload —

sql>conn sh/sh
       exec test.workload;

— Create end snapshot for statspack report —

sql>conn perfstat/perfstat
      exec statspack.snap;

— create end snapshot for awr report —

sql>conn / as sysdba
       exec dbms_workload_repository.create_snapshot();

— create statspack report —

sql>conn perfstat/perfstat
    @?/rdbms/admin/spreport.sql

— create awr report —

sql>conn / as sysdba
    @?/rdbms/admin/awrrpt.sql

— The Instance Efficiency section of both the AWR and Statspack reports shows that soft parse % is very low, as cursor sharing is EXACT and bind variables have not been used.

— To solve this issue, we can take two actions :

  1.  Change cursor_sharing to similar
  2.  Use bind variables in the procedure

—————– Solution 1 —————————

       Change cursor_sharing to similar (if application code cannot be changed)

sql>conn / as sysdba
    alter system set cursor_sharing=similar scope=memory;

— create begin snapshot for awr report —

sql>conn / as sysdba
    exec dbms_workload_repository.create_snapshot();

— execute the workload —

sql>conn sh/sh
    exec test.workload;

— Create end snapshot for statspack report —

sql>conn perfstat/perfstat

       exec statspack.snap;

— create end snapshot for awr report —

sql>conn / as sysdba
    exec dbms_workload_repository.create_snapshot();

— create statspack report —

sql>conn perfstat/perfstat
    @?/rdbms/admin/spreport.sql

— create awr report —

sql>conn / as sysdba
    @?/rdbms/admin/awrrpt.sql

— The Instance Efficiency section of both the AWR and Statspack reports shows that soft parse % has increased substantially, as cursor sharing has been set to SIMILAR.

—————— Solution 2 —————————

      . Use bind variables in the procedure

— Reset cursor_sharing to exact

sql>conn / as sysdba
    alter system set cursor_sharing=exact scope=memory;

— alter the Foo procedure to use bind variables :

CREATE OR REPLACE PACKAGE BODY test AS
      PROCEDURE Workload IS
       BEGIN
             FOR i in 1 .. 50000     LOOP
                 Foo(i);
             END LOOP;
         END Workload;


       PROCEDURE Foo(CUSTID IN sh.sales.cust_id%TYPE) IS
       BEGIN
           DECLARE
              l_stmt VARCHAR2(2000);
            BEGIN
                   l_stmt := 'SELECT * FROM sh.sales s WHERE s.cust_id = :p_cust_id';
                  EXECUTE IMMEDIATE l_stmt USING CUSTID;
            END;
         END Foo;
END;
/

— Create begin snapshot for statspack report —

sql>conn perfstat/perfstat
       exec statspack.snap;

— create begin snapshot for awr report —

sql>conn / as sysdba
 exec dbms_workload_repository.create_snapshot();
        alter system set cursor_sharing = exact scope=memory;

— execute the workload —

sql>conn sh/sh
      exec test.workload;

— Create end snapshot for statspack report —

sql>conn perfstat/perfstat
       exec statspack.snap;

— create end snapshot for awr report —

sql>conn / as sysdba
       exec dbms_workload_repository.create_snapshot();

— create statspack report —

sql>conn perfstat/perfstat
    @?/rdbms/admin/spreport.sql

— create awr report —

sql>conn / as sysdba
      @?/rdbms/admin/awrrpt.sql

— The Instance Efficiency section of both the AWR and Statspack reports shows that soft parse % has increased substantially, as bind variables have been used.
————————————————————————————————————————-

Related links:
                                                                                                                                   ————–

CURSOR SHARING DEMYSTIFIED

As I discussed in my earlier post on parent and child cursors, multiple child cursors may be created for the same parent cursor if bind variables have different values.
In this post I will discuss the parameter CURSOR_SHARING, which controls the sharing of cursors when otherwise identical statements use different literal values.

The parameter CURSOR_SHARING can take 3 values :

  •  – EXACT
  •  – SIMILAR
  •  – FORCE

Let’s see the impact of different values :

CURSOR_SHARING = EXACT

– In this case when the same statement is issued with different literals, multiple parent cursors will be created.

— Create a test table containing:

     1 record with id1 = id2 = 1
     1000 records with id1 = id2 = 2
     2000 records with id1 = id2 = 3

— Create indexes on id1 and id2 of the table

HR> drop table test purge;
    create table test (id1 number, id2 number, txt char(1000));

    insert into test values (1, 1, 'one');

    begin
       for i in 1..1000 loop
          insert into test values (2, 2, 'two');
          insert into test values (3, 3, 'three');
       end loop;
    end;
    /

    insert into test select * from test where id1=3;
    commit;

    create index test_idx1 on test(id1);
    create index test_idx2 on test(id2);

    select id1, id2, count(*)
    from test
    group by id1, id2;
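For reference, given the inserts above (assuming the anonymous block and the insert-select each ran exactly once), the distribution query should show the intended skew, roughly:

       ID1        ID2   COUNT(*)
---------- ---------- ----------
         1          1          1
         2          2       1000
         3          3       2000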


CURSOR_SHARING=EXACT

  Parent   Parent   Parent
    |        |        |
  Child    Child    Child

— Set cursor_sharing=exact and flush the shared pool

SYS>alter system set CURSOR_SHARING='EXACT';
    alter system flush shared_pool;
    sho parameter CURSOR_SHARING

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cursor_sharing                       string      EXACT
— Issue identical statements with different values of literals

HR>conn hr/hr
select count(*) from test where id1=1;
select count(*) from test where id1=2;
select count(*) from test where id1=3;

— Check that the 3 parent cursors have been created

— Note that there is one record for each statement in v$sqlarea, as one parent cursor is created per SQL statement since each of these statements differs in its text.

  •  Each statement has a different SQL_ID/HASH_VALUE
  •  There is one child per parent cursor (VERSION_COUNT=1)
  •  Execution plans for id1 = 2 and 3 are the same (full table scan, same PLAN_HASH_VALUE)
  •  Execution plan for id1 = 1 is different (indexed access)

SYS>col sql_text for a30 word_wrapped

    SELECT SQL_TEXT, SQL_ID, VERSION_COUNT, HASH_VALUE, PLAN_HASH_VALUE
    FROM   V$SQLAREA
    WHERE  LOWER(SQL_TEXT) LIKE 'select count(*) from test%'
    AND    LOWER(SQL_TEXT) NOT LIKE '%hash%';

SQL_TEXT                       SQL_ID        VERSION_COUNT HASH_VALUE PLAN_HASH_VALUE
------------------------------ ------------- ------------- ---------- ---------------
select count(*) from test      1n09m564gh0q3             1 2297955011      4192825871
where id1=3

select count(*) from test      20nhaap8uxf7s             1 1370405112      3507950989
where id1=2

select count(*) from test      bavqx2mw26wg0             1 4163072480      3507950989
where id1=1

— Note that 3 child cursors have been created for the 3 statements

SYS>col child_number for 99

    SELECT SQL_TEXT, SQL_ID, CHILD_NUMBER CHILD#, HASH_VALUE, PLAN_HASH_VALUE
    FROM   V$SQL
    WHERE  LOWER(SQL_TEXT) LIKE 'select count(*) from test%'
    AND    LOWER(SQL_TEXT) NOT LIKE '%hash%';

SQL_TEXT                       SQL_ID            CHILD# HASH_VALUE PLAN_HASH_VALUE
------------------------------ ------------- ---------- ---------- ---------------
select count(*) from test      1n09m564gh0q3          0 2297955011      4192825871
where id1=3

select count(*) from test      20nhaap8uxf7s          0 1370405112      3507950989
where id1=2

select count(*) from test      bavqx2mw26wg0          0 4163072480      3507950989
where id1=1
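If desired, the indexed access for id1=1 versus the full scans for id1=2 and 3 can be confirmed by pulling each child's plan from the cursor cache with DBMS_XPLAN; a sketch using the sql_id values listed above (child number 0 in both cases, expected plans as claimed by the notes above):

SYS>SELECT * FROM TABLE(dbms_xplan.display_cursor('bavqx2mw26wg0', 0));  -- id1=1, expect an index range scan
    SELECT * FROM TABLE(dbms_xplan.display_cursor('1n09m564gh0q3', 0));  -- id1=3, expect a full table scan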

— We can see that, in all, 6 cursors have been created:
– 3 parent cursors and
– 3 child cursors

Each of these cursors occupies memory. A parent cursor contains the SQL text, whereas
a child cursor contains the execution plan, execution statistics and execution
environment. If we replace the literals with a bind variable, all 3 statements
become identical and hence only one parent cursor needs to be created. Multiple
child cursors can still be created for different values of the bind variable.

That's what CURSOR_SHARING=SIMILAR does. It replaces literals in otherwise
identical SQL statements with system-generated bind variables, so that only one
parent cursor is created.

If the histogram on a column has only one bucket (i.e. the optimizer does not know about the skew
in the data), only one child cursor will be created.

If the histogram on a column has more than one bucket (i.e. the optimizer knows about the skew in that
column), one child cursor will be created for each statement even if the execution plan is the same.

Thus CURSOR_SHARING=SIMILAR reduces the number of parent cursors. In summary:

If there is skew in the data
    if there is a histogram on the column containing the skewed data
        multiple child cursors may be created – one for each value of the bind variable
    else (histogram is not available)
        only one child cursor will be created
else (data is not skewed)
    only one child cursor will be created

Now, since id1 and id2 contain identical skewed data, we will create a histogram on id1
with one bucket and on id2 with 4 buckets and see the difference. (A quick check of the resulting column statistics is sketched right after this paragraph.)
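After each of the two gathers below, whether the optimizer actually "sees" the skew can be checked from the column statistics; a minimal sketch using user_tab_col_statistics (expect HISTOGRAM = NONE with one bucket for ID1, and a FREQUENCY histogram for ID2 once the SIZE 4 gather has run):

HR>SELECT column_name, num_distinct, num_buckets, histogram
   FROM   user_tab_col_statistics
   WHERE  table_name = 'TEST'
   AND    column_name IN ('ID1','ID2');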

CURSOR_SHARING=SIMILAR  WITHOUT HISTOGRAM

  Parent
    |
  Child

— Create histogram only on id1 with one bucket so that the optimizer does not
know about the skew —

HR>exec dbms_stats.gather_table_stats(OWNNAME => 'HR',-
   TABNAME => 'TEST',-
   ESTIMATE_PERCENT => null,-
   METHOD_OPT => 'FOR COLUMNS SIZE 1 ID1');

— Set cursor_sharing = similar —

— Flush the shared pool

SYS>alter system set CURSOR_SHARING='SIMILAR';
alter system flush shared_pool;
sho parameter CURSOR_SHARING

— Issue identical statements with different values of literals for the column on which histogram is not there (id1)

HR>conn hr/hr
select count(*) from test where id1=1;
select count(*) from test where id1=2;
select count(*) from test where id1=3;

— Check that only 1 parent cursor has been created and the literal has been replaced by a bind variable (1 record in V$SQLAREA).

— There is only one child cursor (VERSION_COUNT=1) since the optimizer does not know about the skew in the data

SYS>col sql_text for a30 word_wrapped

    SELECT SQL_TEXT, SQL_ID, VERSION_COUNT, HASH_VALUE, PLAN_HASH_VALUE
    FROM   V$SQLAREA
    WHERE  LOWER(SQL_TEXT) LIKE 'select count(*) from test%'
    AND    LOWER(SQL_TEXT) NOT LIKE '%hash%';

SQL_TEXT                       SQL_ID        VERSION_COUNT HASH_VALUE PLAN_HASH_VALUE
------------------------------ ------------- ------------- ---------- ---------------
select count(*) from test      07tpk6bm7j4qm             1 3866661587      3507950989
where id1=:"SYS_B_0"

— Note that only one child cursor has been created, i.e. the same execution plan will be used for different values of the bind variable

SYS>col child_number for 99

    SELECT SQL_TEXT, SQL_ID, CHILD_NUMBER CHILD#, HASH_VALUE, PLAN_HASH_VALUE
    FROM   V$SQL
    WHERE  LOWER(SQL_TEXT) LIKE 'select count(*) from test%'
    AND    LOWER(SQL_TEXT) NOT LIKE '%hash%';

SQL_TEXT                       SQL_ID            CHILD# HASH_VALUE PLAN_HASH_VALUE
------------------------------ ------------- ---------- ---------- ---------------
select count(*) from test      07tpk6bm7j4qm          0 3866661587      3507950989
where id1=:"SYS_B_0"

CURSOR_SHARING=SIMILAR  WITH HISTOGRAM

            Parent
              |
    +---------+---------+
    |         |         |
  Child     Child     Child

— Create histogram on id2 with 4 buckets so that the optimizer knows about the skew in the data —

HR>exec dbms_stats.gather_table_stats(OWNNAME => 'HR',-
   TABNAME => 'TEST',-
   ESTIMATE_PERCENT => null,-
   CASCADE => TRUE,-
   METHOD_OPT => 'FOR COLUMNS SIZE 4 ID2');

— Issue identical statements with different values of literals for the column  on which histogram is there (id2)

SYS>alter system flush shared_pool;

HR>conn hr/hr
select count(*) from test where id2=1;
select count(*) from test where id2=2;
select count(*) from test where id2=3;

— Check that only 1 parent cursor has been created and the literal has been replaced by a bind variable (1 record in V$SQLAREA).

— There are 3 child cursors (VERSION_COUNT=3)

SYS>col sql_text for a30 word_wrapped

    SELECT SQL_TEXT, SQL_ID, VERSION_COUNT, HASH_VALUE, PLAN_HASH_VALUE
    FROM   V$SQLAREA
    WHERE  LOWER(SQL_TEXT) LIKE 'select count(*) from test%'
    AND    LOWER(SQL_TEXT) NOT LIKE '%hash%';

SQL_TEXT                       SQL_ID        VERSION_COUNT HASH_VALUE PLAN_HASH_VALUE
------------------------------ ------------- ------------- ---------- ---------------
select count(*) from test      3tcujqmqnqs8t             3 3981140249      2432738936
where id2=:"SYS_B_0"

— Note that 3 child cursors have been created, as the optimizer realizes that the data is skewed and different execution plans may be more efficient for different values of the bind variable.
— 2 children have the same execution plan (PLAN_HASH_VALUE), for id2 = 2 and 3 (full table scan)

SYS>col child_number for 99

    SELECT SQL_TEXT, SQL_ID, CHILD_NUMBER CHILD#, HASH_VALUE, PLAN_HASH_VALUE
    FROM   V$SQL
    WHERE  LOWER(SQL_TEXT) LIKE 'select count(*) from test%'
    AND    LOWER(SQL_TEXT) NOT LIKE '%hash%';

SQL_TEXT                       SQL_ID            CHILD# HASH_VALUE PLAN_HASH_VALUE
------------------------------ ------------- ---------- ---------- ---------------
select count(*) from test      3tcujqmqnqs8t          0 3981140249      2432738936
where id2=:"SYS_B_0"

select count(*) from test      3tcujqmqnqs8t          1 3981140249      2432738936
where id2=:"SYS_B_0"

select count(*) from test      3tcujqmqnqs8t          2 3981140249      1489241381
where id2=:"SYS_B_0"
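The reason why each execution was given its own child rather than sharing child 0 can, if needed, be inspected in v$sql_shared_cursor, which exposes one Y/N mismatch flag per column for every non-shared child; a sketch using the sql_id above:

SYS>SELECT *
    FROM   v$sql_shared_cursor
    WHERE  sql_id = '3tcujqmqnqs8t';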

Hence, it can be seen that setting CURSOR_SHARING=SIMILAR:

  •  replaces literals with bind variables in otherwise identical SQL statements
  •  creates only one child cursor if the optimizer does not know about the skew in the data
  •  creates multiple child cursors, one for each distinct value of the bind variable, if the optimizer is aware of the skew, even when they have the same execution plan

Ideally we would like only one child cursor to be created if the execution plan is the same for different values of the bind variable.

Setting CURSOR_SHARING=FORCE in 11g does precisely this, but only if the optimizer is
aware of the skew in the data. Let's see:

CURSOR_SHARING=FORCE IN 11G WITHOUT HISTOGRAM

  Parent
    |
  Child

– Flush the shared pool and issue the query using the column without a histogram on
it so that the optimizer is not aware of the skew.
SYS>alter system set CURSOR_SHARING='FORCE';
    alter system flush shared_pool;

HR>conn hr/hr
select count(*) from test where id1=1;
select count(*) from test where id1=2;
select count(*) from test where id1=3;

— Note that only one parent cursor has been created

— One child cursor has been created (VERSION_COUNT=1)

SYS>col sql_text for a30 word_wrapped

    SELECT SQL_TEXT, SQL_ID, VERSION_COUNT, HASH_VALUE, PLAN_HASH_VALUE
    FROM   V$SQLAREA
    WHERE  LOWER(SQL_TEXT) LIKE 'select count(*) from test%'
    AND    LOWER(SQL_TEXT) NOT LIKE '%hash%';

SQL_TEXT                       SQL_ID        VERSION_COUNT HASH_VALUE PLAN_HASH_VALUE
------------------------------ ------------- ------------- ---------- ---------------
select count(*) from test      07tpk6bm7j4qm             1 3866661587      3507950989
where id1=:"SYS_B_0"

— Note that 1 child cursor has been created

SYS>col child_number for 99

    SELECT SQL_TEXT, SQL_ID, CHILD_NUMBER CHILD#, HASH_VALUE, PLAN_HASH_VALUE
    FROM   V$SQL
    WHERE  LOWER(SQL_TEXT) LIKE 'select count(*) from test%'
    AND    LOWER(SQL_TEXT) NOT LIKE '%hash%';

SQL_TEXT                       SQL_ID            CHILD# HASH_VALUE PLAN_HASH_VALUE
------------------------------ ------------- ---------- ---------- ---------------
select count(*) from test      07tpk6bm7j4qm          0 3866661587      3507950989
where id1=:"SYS_B_0"

CURSOR_SHARING=FORCE IN 11G WITH HISTOGRAM

       Parent
         |
    +----+----+
    |         |
  Child     Child

– Flush the shared pool and issue the query using the column with a histogram on it so that the optimizer is aware of the skew.

SYS> alter system flush shared_pool;

HR>conn hr/hr
select count(*) from test where id2=1;
select count(*) from test where id2=2;
select count(*) from test where id2=3;

— Note that only one parent cursor has been created

— Two child cursors have been created (VERSION_COUNT=2)

SYS>col sql_text for a30 word_wrapped

    SELECT SQL_TEXT, SQL_ID, VERSION_COUNT, HASH_VALUE, PLAN_HASH_VALUE
    FROM   V$SQLAREA
    WHERE  LOWER(SQL_TEXT) LIKE 'select count(*) from test%'
    AND    LOWER(SQL_TEXT) NOT LIKE '%hash%';

SQL_TEXT                       SQL_ID        VERSION_COUNT HASH_VALUE PLAN_HASH_VALUE
------------------------------ ------------- ------------- ---------- ---------------
select count(*) from test      3tcujqmqnqs8t             2 3981140249      2432738936
where id2=:"SYS_B_0"

— Note that 2 child cursors have been created and each child has a distinct execution plan (PLAN_HASH_VALUE)

SYS>col child_number for 99

    SELECT SQL_TEXT, SQL_ID, CHILD_NUMBER CHILD#, HASH_VALUE, PLAN_HASH_VALUE
    FROM   V$SQL
    WHERE  LOWER(SQL_TEXT) LIKE 'select count(*) from test%'
    AND    LOWER(SQL_TEXT) NOT LIKE '%hash%';

SQL_TEXT                       SQL_ID            CHILD# HASH_VALUE PLAN_HASH_VALUE
------------------------------ ------------- ---------- ---------- ---------------
select count(*) from test      3tcujqmqnqs8t          0 3981140249      2432738936
where id2=:"SYS_B_0"

select count(*) from test      3tcujqmqnqs8t          1 3981140249      1489241381
where id2=:"SYS_B_0"

Hence, setting CURSOR_SHARING=FORCE in 11g will reuse the same child cursor if the execution plan is the same for different values of the bind variable, which saves memory in the shared pool and saves the time spent scanning the hash chains in the library cache. This new feature of 11g is called ADAPTIVE CURSOR SHARING.
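Adaptive cursor sharing can also be observed directly: in 11g, v$sql exposes the columns IS_BIND_SENSITIVE, IS_BIND_AWARE and IS_SHAREABLE for each child cursor. A sketch against the cursor above:

SYS>SELECT sql_id, child_number, is_bind_sensitive, is_bind_aware,
           is_shareable, plan_hash_value
    FROM   v$sql
    WHERE  sql_id = '3tcujqmqnqs8t';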

Note: The behaviour of CURSOR_SHARING=FORCE in 11g is different from 9i/10g. Earlier, the optimizer would peek at the value of the bind variable during the first execution and decide on the execution plan. On subsequent executions of the same statement with different values of the bind variable, it would reuse the same plan irrespective of the skew in the data.

CURSOR_SHARING=FORCE IN 10G WITH/WITHOUT HISTOGRAM

  Parent
    |
  Child

Let's demonstrate this by simulating the 10g optimizer, setting the parameter optimizer_features_enable to 10.2.0.3.

SYS> alter system set optimizer_features_enable='10.2.0.3';

— Flush the shared pool and issue the query using the column with a histogram on it so that the optimizer is aware of the skew.

SYS> alter system flush shared_pool;

HR>conn hr/hr
select count(*) from test where id2=1;
select count(*) from test where id2=2;
select count(*) from test where id2=3;

— Note that only one parent cursor has been created

— Only one child cursor has been created (VERSION_COUNT=1)

SYS>col sql_text for a30 word_wrapped

    SELECT SQL_TEXT, SQL_ID, VERSION_COUNT, HASH_VALUE, PLAN_HASH_VALUE
    FROM   V$SQLAREA
    WHERE  LOWER(SQL_TEXT) LIKE 'select count(*) from test%'
    AND    LOWER(SQL_TEXT) NOT LIKE '%hash%';

SQL_TEXT                       SQL_ID        VERSION_COUNT HASH_VALUE PLAN_HASH_VALUE
------------------------------ ------------- ------------- ---------- ---------------
select count(*) from test      3tcujqmqnqs8t             1 3981140249      2432738936
where id2=:"SYS_B_0"

— Note that 1 child cursor has been created

SYS>col child_number for 99

    SELECT SQL_TEXT, SQL_ID, CHILD_NUMBER CHILD#, HASH_VALUE, PLAN_HASH_VALUE
    FROM   V$SQL
    WHERE  LOWER(SQL_TEXT) LIKE 'select count(*) from test%'
    AND    LOWER(SQL_TEXT) NOT LIKE '%hash%';

SQL_TEXT                       SQL_ID            CHILD# HASH_VALUE PLAN_HASH_VALUE
------------------------------ ------------- ---------- ---------- ---------------
select count(*) from test      3tcujqmqnqs8t          0 3981140249      2432738936
where id2=:"SYS_B_0"

– cleanup –

SYS>alter system set optimizer_features_enable='11.2.0.1';
    drop table hr.test purge;


CONCLUSION:

CURSOR_SHARING = EXACT

– Causes maximum memory usage in the library cache, as two cursors (one parent and one child) are created for each distinct literal value.
– Gives the best performance, as the optimizer creates a different execution plan for each value.

CURSOR_SHARING = SIMILAR

– Reduces memory usage in the library cache, as only one parent cursor is created.
– If the data is not skewed, or the optimizer is not aware of the skew, the optimizer peeks at the value of the bind variable on the first execution of the statement and that plan is used for all values of the bind variable. Thus only one child cursor is created, resulting in minimum memory usage by child cursors. In this case performance will suffer if there is skew in the data.
– If the data is skewed and the optimizer is aware of the skew, multiple child cursors are created, one for each distinct value of the bind variable. In this case performance will be the best, as the optimizer creates a different execution plan for each value, but we may end up with multiple child cursors for the same execution plan.

CURSOR_SHARING = FORCE IN 10g

– Causes minimum memory usage in the library cache, as only one parent cursor and only one child cursor are created.
– Performance will suffer if there is skew in the data.

CURSOR_SHARING = FORCE IN 11g (ADAPTIVE CURSOR SHARING)

– Reduces memory usage in the library cache, as only one parent cursor is created and child cursors are shared wherever possible.
– If the data is not skewed, or the optimizer is not aware of the skew, the optimizer peeks at the value of the bind variable on the first execution and that plan is used for all values of the bind variable. Thus only one child cursor is created, resulting in minimum memory usage by child cursors. In this case performance will suffer if there is skew in the data (same scenario as cursor_sharing=similar).
– If the data is skewed and the optimizer is aware of the skew, multiple child cursors are created for different values of the bind variable, one for each distinct execution plan. In this case performance will be the best, as the optimizer creates different execution plans for different values of the bind variable, and only one child cursor is created per execution plan, resulting in optimum memory usage by child cursors.

In my next post on Tuning Shared Pool, I will demonstrate how we can reduce hard parsing by
– replacing literals with bind variables
– setting cursor_sharing = similar

————————————————————————————
 
Related links: