Monthly Archives: October 2013

UNABLE TO ACCESS EM EXPRESS FOR NON-CDB

I recently faced an issue where I could not access EM Express for a non-CDB database.

- I found the HTTPS port for EM Express using the following query:

select dbms_xdb_config.gethttpsport from dual;

GETHTTPSPORT
------------
5500
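
(As a hedged aside, not part of the original scenario: if this query returns 0, no HTTPS port has been configured yet, and one can be set with the same package, using any port that is free on the host.)

SQL> exec dbms_xdb_config.sethttpsport(5500)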

- Tried to access EM Express by entering the following URL in the browser:

https://<hostname>:5500/em

- Got the following error message:

Unable to connect

Firefox can't establish a connection to the server at em12.example.com:5500.

- I checked whether my database was registered with the default listener

[oracle@em12 bin]$ lsnrctl stat

Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=em12.example.com)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Services Summary...
Service "em12rep" has 1 instance(s).
  Instance "em12rep", status READY, has 1 handler(s) for this service...
Service "em12repXDB" has 1 instance(s).
  Instance "em12rep", status READY, has 1 handler(s) for this service...
The command completed successfully

– The output clearly shows that my database orcl is not registered with the default listener running on port 1521.

- I checked the parameter local_listener for my database

SQL> sho parameter local_listener

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
local_listener                       string      LISTENER_ORCL

– Checked the entry for LISTENER_ORCL in $ORACLE_HOME/network/admin/tnsnames.ora

– It showed a listener named LISTENER_ORCL at port 1522

[oracle@em12 bin]$ vi $ORACLE_HOME/network/admin/tnsnames.ora

# tnsnames.ora Network Configuration File: /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

LISTENER_ORCL =
  (ADDRESS = (PROTOCOL = TCP)(HOST = em12.example.com)(PORT = 1522))

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = em12.example.com)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )

– Created a listener LISTENER_ORCL in $ORACLE_HOME on port 1522 by adding the following lines to listener.ora

[oracle@em12 bin]$ vi $ORACLE_HOME/network/admin/listener.ora

LISTENER_ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = em12.example.com)(PORT = 1522))
  )

– Started listener LISTENER_ORCL

[oracle@em12 bin]$ lsnrctl start listener_orcl

– Registered the database with LISTENER_ORCL

SQL> alter system register;

– Checked that the database had been successfully registered with LISTENER_ORCL

[oracle@em12 bin]$ lsnrctl services listener_orcl

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=em12.example.com)(PORT=1522)))
Services Summary...
Service "orcl" has 1 instance(s).
  Instance "orcl", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
Service "orclXDB" has 1 instance(s).
  Instance "orcl", status READY, has 1 handler(s) for this service...
    Handler(s):
      "D000" established:8 refused:0 current:0 max:1022 state:ready
         DISPATCHER <machine: em12.example.com, pid: 7014>
         (ADDRESS=(PROTOCOL=tcp)(HOST=em12.example.com)(PORT=59034))
The command completed successfully

- Tried to access EM Express again by entering the following URL in the browser:

https://<hostname>:5500/em

I could successfully access the database login page.

Conclusion:

If you want to access EM Express for a database registered with a non-default listener (say LISTENER1):
– Create an entry for the listener (LISTENER1) in tnsnames.ora
– Set the parameter LOCAL_LISTENER to that listener (LISTENER1) and register the database, as sketched below
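
A minimal sketch of the second step, assuming the alias LISTENER1 resolves through tnsnames.ora:

SQL> alter system set local_listener='LISTENER1';
SQL> alter system register;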


I hope this post was helpful.

Your comments and suggestions are always welcome.

References:
http://www.oracle.com/technetwork/database/manageability/emx-intro-1965965.html#A5
http://www.oracle.com/technetwork/database/manageability/emx-cdb-1965987.html

——————————————————————————————-

Related Links:

Home

Database 12c Index

12c: Access EM Express for CDB / PDB / Non-CDB

Error : NMO not setuid-root (Unix Only)


ERROR: NMO not setuid-root (Unix-only)

I got this error when I was trying to specify Host Credentials in EM 12c Cloud Control.

Cause: root.sh had not been executed from the agent home.

Solution: I executed root.sh from the agent home, and the problem was resolved.

[root@em12 dbhome_1]# /u01/app/oracle/product/agent/core/12.1.0.2.0/root.sh

I hope this information was useful.

—————————————————————————————–

Related Links:

Home

Database 12c Index

12c : Access EM Express for CDB / PDB / Non-CDB
12c: Unable To Access EM Express For Non-CDB


11g R2 RAC: REBOOT-LESS FENCING WITH MISSING DISK HEARTBEAT

In my earlier post, I discussed reboot-less node fencing, a new feature introduced in 11.2.0.2. In this post, I will demonstrate reboot-less node fencing when the disk heartbeat is lost.

– Check that clusterware version is 11.2.0.3

[root@host02 ~]# crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.3.0]

– Check that both nodes in the cluster are active

[root@host02 ~]# olsnodes -s
host01  Active
host02  Active

– Stop the iSCSI service on node2 (the voting disks reside on iSCSI storage, so this causes the disk heartbeat to be lost)

[root@host02 ~]# service iscsi stop
Logging out of session [sid: 1, target: iqn.2006-01.com.openfiler:tsn.e55ea88d0212, portal: 192.9.201.182,3260]
Logout of [sid: 1, target: iqn.2006-01.com.openfiler:tsn.e55ea88d0212, portal: 192.9.201.182,3260]: successful
Stopping iSCSI daemon:

– Alert log of node2:

– Note that instead of rebooting the node, the CRSD resources are cleaned up

[cssd(2876)]CRS-1649:An I/O error occured for voting file: ORCL:ASMDISK013; details at (:CSSNM00059:)

...

[cssd(2876)]CRS-1606:The number of voting files available, 0, is less than the minimum number of voting files required, 1, resulting in CSSD termination to ensure data integrity;

[cssd(2876)]CRS-1656:The CSS daemon is terminating due to a fatal error;

[cssd(2876)]CRS-1652:Starting clean up of CRSD resources.
2013-10-09 11:04:30.795

...

[cssd(2876)]CRS-1654:Clean up of CRSD resources finished successfully.
2013-10-09 11:04:31.914

– Check that the OHAS service is still up on host02

[root@host02 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager

– Check that the resources cssd, crsd and HAIP are down on host02

[root@host02 ~]# crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  OFFLINE                                                   
ora.cluster_interconnect.haip
      1        ONLINE  OFFLINE                                                   
ora.crf
      1        ONLINE  ONLINE       host02                                       
ora.crsd
      1        ONLINE  OFFLINE                                                   
ora.cssd
      1        ONLINE  OFFLINE                               STARTING            
ora.cssdmonitor
      1        ONLINE  ONLINE       host02                                       
ora.ctssd
      1        ONLINE  OFFLINE                                                   
ora.diskmon
      1        OFFLINE OFFLINE                                                   
ora.drivers.acfs
      1        ONLINE  ONLINE       host02                                       
ora.evmd
      1        ONLINE  OFFLINE                                                   
ora.gipcd
      1        ONLINE  ONLINE       host02                                       
ora.gpnpd
      1        ONLINE  ONLINE       host02                                       
ora.mdnsd
      1        ONLINE  ONLINE       host02

– Check that host02 is no longer a part of the cluster

[root@host01 cluster01]# olsnodes -s
host01  Active
host02  Inactive

– Restart the iSCSI service on host02

[root@host02 ~]# service iscsi start
iscsid dead but pid file exists
Turning off network shutdown.
Starting iSCSI daemon:                                     [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2006-01.com.openfiler:tsn.e55ea88d0212, portal: 192.9.201.182,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.e55ea88d0212, portal: 192.9.201.182,3260]: successful
                                                           [  OK  ]

- Alert log of host02:

– Note that as soon as the iSCSI service is started, CSSD starts immediately and host02 rejoins the cluster

[cssd(5481)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details ...

2013-10-09 11:10:43.897
[cssd(5481)]CRS-1707:Lease acquisition for node host02 number 2 completed

2013-10-09 11:10:47.629
[cssd(5481)]CRS-1605:CSSD voting file is online: ORCL:ASMDISK013; details in /u01/app/11.2.0/grid/log/host02/cssd/ocssd.log.

2013-10-09 11:10:54.652
[cssd(5481)]CRS-1601:CSSD Reconfiguration complete. Active nodes are host01 host02 .

– Check that the resources haip, cssd and crsd have started on host02

[root@host02 ~]# crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       host02                   Started             
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       host02                                       
ora.crf
      1        ONLINE  ONLINE       host02                                       
ora.crsd
      1        ONLINE  ONLINE       host02                                       
ora.cssd
      1        ONLINE  ONLINE       host02                                       
ora.cssdmonitor
      1        ONLINE  ONLINE       host02                                       
ora.ctssd
      1        ONLINE  ONLINE       host02                   OBSERVER            
ora.diskmon
      1        OFFLINE OFFLINE                                                   
ora.drivers.acfs
      1        ONLINE  ONLINE       host02                                       
ora.evmd
      1        ONLINE  ONLINE       host02                                       
ora.gipcd
      1        ONLINE  ONLINE       host02                                       
ora.gpnpd
      1        ONLINE  ONLINE       host02                                       
ora.mdnsd
      1        ONLINE  ONLINE       host02

– Check that host02 has rejoined the cluster

[root@host02 ~]# olsnodes -s
host01  Active
host02  Active
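
– As a hedged aside (this check is not part of the original transcript), the voting file status can also be confirmed directly once the storage is back:

[root@host02 ~]# crsctl query css votedisk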

References:

http://ora-ssn.blogspot.in/2011/09/reboot-less-node-fencing-in-oracle.html
http://www.trivadis.com/uploads/tx_cabagdownloadarea/Trivadis_oracle_clusterware_node_fencing_v.pdf
http://www.vmcd.org/2012/03/11gr2-rac-rebootless-node-fencing/

———————————————————————————————

Related Links:

Home

11g R2 RAC Index

11g R2 RAC: Node Eviction Due To Missing Network Heartbeat
11g R2 RAC :Reboot-less Node Fencing
11g R2 RAC: Reboot-less Fencing With Missing Network Heartbeat


————–

11g R2 RAC: REBOOT-LESS FENCING WITH MISSING NETWORK HEARTBEAT

In my earlier post, I discussed reboot-less node fencing, a new feature introduced in 11.2.0.2. In this post, I will demonstrate reboot-less node fencing when the network heartbeat is lost.

– Check that clusterware version is 11.2.0.3

[root@host02 ~]# crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.3.0]

– Check that both nodes in the cluster are active

[root@host02 ~]# olsnodes -s
host01  Active
host02  Active

– Find out the private interconnect

[root@host02 ~]# oifcfg getif
eth0  192.9.201.0  global  public
eth1  10.0.0.0  global  cluster_interconnect

– Stop the private interconnect on node2

[root@host02 ~]# ifdown eth1

– Alert log of node2:

– Note that instead of rebooting the node, the CRSD resources are cleaned up

[cssd(802)]CRS-1612:Network communication with node host01 (1) missing for 50% of timeout interval.
..
Removal of this node from cluster in 6.640 seconds
2013-10-09 10:45:53.924
..
[cssd(802)]CRS-1609:This node is unable to communicate with other nodes in the cluster and is going down to preserve cluster integrity;

[cssd(802)]CRS-1656:The CSS daemon is terminating due to a fatal error;
..
[cssd(802)]CRS-1652:Starting clean up of CRSD resources.
2013-10-09 10:46:01.918
...

[cssd(802)]CRS-1654:Clean up of CRSD resources finished successfully.
2013-10-09 10:46:03.794

– Check that the OHAS service is still up on host02

[root@host02 ~]# crsctl check crs

CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager

– Check that the resources cssd, crsd and HAIP are down on host02

[root@host02 ~]# crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  OFFLINE
ora.cluster_interconnect.haip
      1        ONLINE  OFFLINE
ora.crf
      1        ONLINE  ONLINE       host02
ora.crsd
      1        ONLINE  OFFLINE
ora.cssd
      1        ONLINE  OFFLINE                               STARTING
ora.cssdmonitor
      1        ONLINE  ONLINE       host02
ora.ctssd
      1        ONLINE  OFFLINE
ora.diskmon
      1        OFFLINE OFFLINE
ora.drivers.acfs
      1        ONLINE  ONLINE       host02
ora.evmd
      1        ONLINE  INTERMEDIATE host02
ora.gipcd
      1        ONLINE  ONLINE       host02
ora.gpnpd
      1        ONLINE  ONLINE       host02
ora.mdnsd
      1        ONLINE  ONLINE       host02

– Restart the private interconnect on host02

[root@host02 ~]# ifup eth1

- Alert log of host02:

– Note that as soon as the private network is restored, the CSSD, CRSD and EVMD services start immediately and host02 rejoins the cluster

[cssd(2876)]CRS-1713:CSSD daemon is started in clustered mode
2013-10-09 10:47:04.944
...
[cssd(2876)]CRS-1601:CSSD Reconfiguration complete. Active nodes are host01 host02 .
2013-10-09 10:55:22.403
..
[crsd(3973)]CRS-1012:The OCR service started on node host02.
2013-10-09 10:56:25.304
...
[evmd(2753)]CRS-1401:EVMD started on node host02.
2013-10-09 10:56:41.996
...
[crsd(3973)]CRS-1201:CRSD started on node host02.
2013-10-09 10:56:45.274

– Check that the resources haip, cssd and crsd have started on host02

[root@host02 ~]# crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       host02                   Started
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       host02
ora.crf
      1        ONLINE  ONLINE       host02
ora.crsd
      1        ONLINE  ONLINE       host02
ora.cssd
      1        ONLINE  ONLINE       host02
ora.cssdmonitor
      1        ONLINE  ONLINE       host02
ora.ctssd
      1        ONLINE  ONLINE       host02                   OBSERVER
ora.diskmon
      1        OFFLINE OFFLINE
ora.drivers.acfs
      1        ONLINE  ONLINE       host02
ora.evmd
      1        ONLINE  ONLINE       host02
ora.gipcd
      1        ONLINE  ONLINE       host02
ora.gpnpd
      1        ONLINE  ONLINE       host02
ora.mdnsd
      1        ONLINE  ONLINE       host02

– Check that host02 has rejoined the cluster

[root@host02 ~]# olsnodes -s
host01  Active
host02  Active
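
– As a hedged aside (this check is not part of the original transcript), overall cluster health across all nodes can also be verified with:

[root@host02 ~]# crsctl check cluster -all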

References:

http://ora-ssn.blogspot.in/2011/09/reboot-less-node-fencing-in-oracle.html
http://www.trivadis.com/uploads/tx_cabagdownloadarea/Trivadis_oracle_clusterware_node_fencing_v.pdf
http://www.vmcd.org/2012/03/11gr2-rac-rebootless-node-fencing/

———————————————————————————————

Related Links:

Home

11g R2 RAC Index

11g R2 RAC: Node Eviction Due To Missing Network Heartbeat 
11g R2 RAC :Reboot-less Node Fencing
11g R2 RAC :Reboot-less  Fencing With Missing Disk Heartbeat


————–

11g R2 RAC : REBOOT-LESS NODE FENCING

Prior to 11.2.0.2, during failures of certain subcomponents required by Oracle RAC (e.g. the private interconnect, voting disks, etc.), Oracle Clusterware tried to prevent a split-brain with a fast reboot of the affected server(s), without waiting for ongoing I/O operations or file-system synchronization to complete. As a result, non-cluster-aware applications running on the node would be forcibly shut down. Moreover, after a reboot, resources must be re-mastered across the surviving nodes; in a cluster with many nodes this can be a very expensive operation.

This mechanism was changed in version 11.2.0.2 (the first 11g Release 2 patch set).

After deciding which node to evict:

– The Clusterware first attempts to contain the failure by killing only the offending process(es) on that node; in particular, I/O-generating processes are killed.

– If all Oracle resources/processes can be stopped and all I/O-generating processes can be killed:

  • the Clusterware stack is stopped on the node,
  • the Oracle High Availability Services daemon (OHASD) keeps trying to restart the Cluster Ready Services (CRS) stack,
  • once the conditions to start the CRS stack are re-established, all relevant cluster resources on that node start automatically.

– If, for some reason, not all resources can be stopped or the I/O-generating processes cannot be killed completely (e.g. they are hanging in kernel mode or in the I/O path):

  • Oracle Clusterware still performs a reboot, or uses IPMI to forcibly evict the node from the cluster, as before.

This behavior change is particularly useful for non-cluster-aware applications, as their data is protected by shutting down only the cluster stack on the offending node rather than rebooting the node itself.

I will demonstrate this functionality in two scenarios:

– Failure of the network heartbeat
– Failure of the disk heartbeat
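
These two scenarios are governed by the CSS heartbeat timeouts: misscount for the network heartbeat (30 seconds by default on Linux) and disktimeout for the disk heartbeat (200 seconds by default). As a hedged aside, not part of the original post, the thresholds can be inspected on a running cluster with:

[root@host01 ~]# crsctl get css misscount
[root@host01 ~]# crsctl get css disktimeout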

References:

http://ora-ssn.blogspot.in/2011/09/reboot-less-node-fencing-in-oracle.html
http://www.trivadis.com/uploads/tx_cabagdownloadarea/Trivadis_oracle_clusterware_node_fencing_v.pdf
http://www.vmcd.org/2012/03/11gr2-rac-rebootless-node-fencing/
http://www.vitalsofttech.com/grid-11gr2-ipmi-based-failure-isolation/
http://www.orafaq.com/node/2504
http://www.unbreakablecloud.com/wordpress/2011/02/05/oracle-grid-infrastructure-reboot-less-node-fencing/

———————————————————————————————

Related Links:

Home

11g R2 RAC Index

11g R2 RAC: Node Eviction Due To Missing Network Heartbeat 
 11g R2 RAC: Reboot-less Fencing With Missing Network Heartbeat
11g R2 RAC :Reboot-less  Fencing With Missing Disk Heartbeat

————–


12c: SOLUTION TO PERFORMANCE ISSUE WITH IN-DATABASE ARCHIVING

In my earlier post, I demonstrated the performance issue that can arise with In-Database Archiving. In this post, I am going to provide the solution to it (as suggested by John Watson).

– Let's create a table of employees in which we would like to archive the records of employees who have retired.

SQL> create table hr.emp (id  number primary key,
name  varchar2(30),
retired  char(1)) row archival;

insert into hr.emp values (1, 'ONE', 'N');
insert into hr.emp values (2, 'TWO', 'N');
insert into hr.emp values (3, 'THREE', 'Y');

SQL> col ora_archive_state for a3
select id, name, retired, ora_archive_state from hr.emp;

ID NAME                           R ORA
---------- ------------------------------ - ---
1 ONE                            N 0
2 TWO                            N 0
3 THREE                          Y 0

– Archive the record of the retired employee by setting ORA_ARCHIVE_STATE to a non-zero value

SQL> update hr.emp set ora_archive_state=1 where id = 3;

– Let's create a concatenated index on the (id, ora_archive_state) columns

PDB1@CDB1> create index hr.emp_idx on hr.emp(id, ora_archive_state);

– Let's try to add a duplicate record for id = 3; it fails because, although the archived row is hidden from queries, the primary-key index still enforces uniqueness

PDB1@CDB1> insert into hr.emp values (3, 'ANOTHER THREE', 'N');
insert into hr.emp values (3, 'ANOTHER THREE', 'N')
*
ERROR at line 1:
ORA-00001: unique constraint (HR.SYS_C009861) violated

– Let's issue various queries and check their execution plans
-- Note that all the queries are satisfied by accessing only the index on (id, ora_archive_state) (emp_idx); no table access is needed

PDB1@CDB1> select count(*) from hr.emp;

Execution Plan
----------------------------------------------------------
Plan hash value: 660937672

--------------------------------------------------------------------
| Id  | Operation        | Name    | Rows  | Cost (%CPU)| Time     |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT |         |     1 |     1   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE  |         |     1 |            |          |
|   2 |   INDEX FULL SCAN| EMP_IDX |     3 |     1   (0)| 00:00:01 |
--------------------------------------------------------------------

PDB1@CDB1>  select count(*) from hr.emp where id = 1;

Execution Plan
----------------------------------------------------------
Plan hash value: 2081018037

-----------------------------------------------------------------------------
| Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |         |     1 |     3 |     1   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE   |         |     1 |     3 |            |          |
|*  2 |   INDEX RANGE SCAN| EMP_IDX |     1 |     3 |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("ID"=1)

PDB1@CDB1> select count(*) from hr.emp where id < 3;

Execution Plan
----------------------------------------------------------
Plan hash value: 2081018037

-----------------------------------------------------------------------------
| Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |         |     1 |     3 |     1   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE   |         |     1 |     3 |            |          |
|*  2 |   INDEX RANGE SCAN| EMP_IDX |     2 |     6 |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("ID"<3)

PDB1@CDB1> select count(*) from hr.emp where id between 1 and 3;

Execution Plan
----------------------------------------------------------
Plan hash value: 2081018037

-----------------------------------------------------------------------------
| Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |         |     1 |     3 |     1   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE   |         |     1 |     3 |            |          |
|*  2 |   INDEX RANGE SCAN| EMP_IDX |     3 |     9 |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("ID">=1 AND "ID"<=3)

Conclusion:

If In-Database Archiving is enabled for a table, the ORA_ARCHIVE_STATE column should be appended to indexes on the table's other columns. Every query against the table carries an implicit ORA_ARCHIVE_STATE = '0' filter, and with the column present in the index this filter can be evaluated without visiting the table.
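
A minimal sketch of the pattern, using a hypothetical index on the NAME column (not from the original post): an index that omits ORA_ARCHIVE_STATE forces a visit to the table for every candidate row just to apply the implicit filter, while appending the column allows index-only filtering.

SQL> -- without the archival state: table access is needed to filter out archived rows
SQL> create index hr.emp_name_idx on hr.emp(name);

SQL> -- with the archival state appended: the filter is evaluated inside the index
SQL> drop index hr.emp_name_idx;
SQL> create index hr.emp_name_idx on hr.emp(name, ora_archive_state);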

I hope this post was useful.

Your comments and suggestions are always welcome!

References:

http://www.orafaq.com/node/2824

———————————————————————————————-

Related links:

Home

Database 12c Index

12c: In-database Archiving
12c: Improve Backup Performance Using In-database Archiving
12c: Performance Issue With In-database Archiving


12c: IMPROVE BACKUP PERFORMANCE USING IN-DATABASE ARCHIVING

Traditionally, historical data needed only for query purposes was removed from the database and stored on tape, because

  •  Tape can store vast quantities of information at a very low cost.
  •  Huge amounts of old data are not backed up again along with database backups.

This solution has major disadvantages such as:

  • The data is not instantly available when needed.
  • Reloading the data into the database from tape is expensive and time-consuming.

In-Database Archiving stores operational and archival data in the same database, so the data remains online and available, at a cost only slightly higher than storing it on tape.

– Data is partitioned to distinguish active data from data in the archived state.
– The partitions containing historical data:
  • are compressed and located on low-cost storage tiers.
  • are made read-only so that subsequent backups are not required after the initial database backup.

Demonstration:

– Create two tablespaces

  • active_tbs – to store active rows
  • archived_tbs – to store archived rows

SQL> create tablespace active_tbs;

SQL> create tablespace archived_tbs;

– Create table to store employees’ details

  • Partition key is the column ORA_ARCHIVE_STATE
  • partition p_active – stores active rows (ora_archive_state = 0) in the active_tbs tablespace in uncompressed format
  • partition p_archived – stores archived rows (ora_archive_state <> 0) in the archived_tbs tablespace in compressed format

SQL> create table emp
     (id       number,
      name     varchar2(30),
      retired  char(1)) row archival
     partition by list (ora_archive_state)
     (partition p_active values(0) tablespace active_tbs,
      partition p_archived values (default) tablespace archived_tbs compress);

– Populate table –

SQL> begin
     for i in 1..5 loop
       insert into emp values (i, 'EMP'||i, 'N');
     end loop;
     for i in 6..10 loop
        insert into emp values (i, 'EMP'||i, 'Y');
     end loop;
     end;
     /

– Check that all the records are currently located in active_tbs, since ORA_ARCHIVE_STATE defaults to 0

SQL>  col name for a10
      col retired for a7
      col ora_archive_state for a15
      select id, name, retired, ora_archive_state
      from emp partition (p_active);

ID NAME       RETIRED ORA_ARCHIVE_STA
---------- ---------- ------- ---------------
1 EMP1       N       0
2 EMP2       N       0
3 EMP3       N       0
4 EMP4       N       0
5 EMP5       N       0
6 EMP6       Y       0
7 EMP7       Y       0
8 EMP8       Y       0
9 EMP9       Y       0
10 EMP10      Y       0

10 rows selected.

– Try to archive the records of retired employees (retired = Y)
-- We get an error because ora_archive_state is the partitioning key, and changing its value would cause rows to move from p_active to p_archived

SQL> update emp set ora_archive_state = 1 where retired='Y';
update emp set ora_archive_state = 1 where retired='Y'
*
ERROR at line 1:
ORA-14402: updating partition key column would cause a partition change

– Enable row movement and then archive the records of retired employees (retired = Y)

SQL> alter table emp enable row movement;

     update emp set ora_archive_state = 1 
     where retired='Y';

5 rows updated.

– Check that partition p_active now contains only the records of employees who have not retired (retired = 'N')

SQL> select id, name, retired, ora_archive_state from emp partition (p_active);

ID NAME       RETIRED ORA_ARCHIVE_STA
---------- ---------- ------- ---------------
1 EMP1       N       0
2 EMP2       N       0
3 EMP3       N       0
4 EMP4       N       0
5 EMP5       N       0

– Check whether the records of employees who have retired (retired = 'Y') have moved to partition p_archived
-- No records are displayed, because by default only active rows are visible

SQL> select id, name, retired, ora_archive_state from emp partition (p_archived);

no rows selected

– Enable visibility of archived rows also

SQL> alter session set row archival visibility = all;

– Check that the records of employees who have retired (retired = 'Y') have moved to partition p_archived

SQL> select id, name, retired, ora_archive_state from emp partition (p_archived);

ID NAME       RETIRED ORA_ARCHIVE_STA
---------- ---------- ------- ---------------
6 EMP6       Y       1
7 EMP7       Y       1
8 EMP8       Y       1
9 EMP9       Y       1
10 EMP10      Y       1

– Since archived records will be accessed read-only, change the tablespace archived_tbs to read only

SQL> alter tablespace archived_tbs read only;

-- Try to modify data in an archived record; it is disallowed because the tablespace is read only

SQL> update emp set name = 'emp6' where id = 6;
update emp set name = 'emp6' where id = 6
*
ERROR at line 1:
ORA-00372: file 12 cannot be modified at this time
ORA-01110: data file 12:

'+DATA/CDB1/E7CFA41503C950F1E0438602A8C06FD7/DATAFILE/archived_tbs.279.827852189'

– Back up tablespace archived_tbs after it has been made read only

RMAN> connect target sys/oracle@pdb1

      backup tablespace archived_tbs;

-- Exclude archived_tbs from whole database backups

RMAN> connect target sys/oracle@cdb1

      configure exclude for tablespace pdb1:archived_tbs;

Tablespace PDB1:ARCHIVED_TBS will be excluded from future whole database backups
new RMAN configuration parameters are successfully stored
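
As a hedged aside, not in the original post: should the tablespace ever need to be included in whole database backups again, the exclusion can be removed with the standard CLEAR syntax.

RMAN> configure exclude for tablespace pdb1:archived_tbs clear;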

I hope the information in this post was useful.

Your comments and suggestions are always welcome!

—————————————————————————————–

Related Links:

Home

Database 12c Index

12c : In-Database Archiving
12c : Performance Issue with In-Database Archiving
12c: Solution To Performance Issue With In-database Archiving


———————————————————————————————