Monthly Archives: November 2012

11g R2 RAC: NODE EVICTION DUE TO MISSING DISK HEARTBEAT

In this post, I will demonstrate node eviction due to missing disk heartbeat, i.e., a node will be evicted from the cluster if it cannot access the voting disks. To simulate this, I will stop the iscsi service on one of the nodes and then scan the alert logs and ocssd logs of the various nodes.

Current scenario:
No. of nodes in the cluster  : 3
Names of the nodes      : host01, host02, host03
Name of the cluster database : orcl

I will stop the iscsi service on host03 so that it is evicted.

— Stop the iscsi service on host03 so that it cannot access the shared storage and hence the voting disks

[root@host03 ~]# service iscsi stop
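– For reference, the voting files in use can be listed beforehand with crsctl (run as the grid owner on any node); in this setup it should report the three ASM disks that appear in the logs below:

[grid@host01 ~]$ crsctl query css votedisk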
scan alert log of host03

– Note that the I/O errors occur at 03:32:11

[root@host03 ~]# tailf /u01/app/11.2.0/grid/log/host03/alerthost03.log

— Note that the ocssd process of host03 is not able to access the voting disks

[cssd(5149)]CRS-1649:An I/O error occured for voting file: ORCL:ASMDISK01; details at (:CSSNM00059:) in /u01/app/11.2.0/grid/log/host03/cssd/ocssd.log.

2012-11-17 03:32:11.310

[cssd(5149)]CRS-1649:An I/O error occured for voting file: ORCL:ASMDISK03; details at (:CSSNM00059:) in /u01/app/11.2.0/grid/log/host03/cssd/ocssd.log.

2012-11-17 03:32:11.311

[cssd(5149)]CRS-1649:An I/O error occured for voting file: ORCL:ASMDISK03; details at (:CSSNM00060:) in /u01/app/11.2.0/grid/log/host03/cssd/ocssd.log.

2012-11-17 03:32:11.311

[cssd(5149)]CRS-1649:An I/O error occured for voting file: ORCL:ASMDISK01; details at (:CSSNM00060:) in /u01/app/11.2.0/grid/log/host03/cssd/ocssd.log.

2012-11-17 03:32:11.312

[cssd(5149)]CRS-1649:An I/O error occured for voting file: ORCL:ASMDISK02; details at (:CSSNM00060:) in /u01/app/11.2.0/grid/log/host03/cssd/ocssd.log.

2012-11-17 03:32:11.310

[cssd(5149)]CRS-1649:An I/O error occured for voting file: ORCL:ASMDISK02; details at (:CSSNM00059:) in /u01/app/11.2.0/grid/log/host03/cssd/ocssd.log.

– ACFS can’t be accessed

[client(8048)]CRS-10001:ACFS-9112: The following process IDs have open references on /u01/app/oracle/acfsmount/11.2.0/sharedhome:

[client(8050)]CRS-10001:6323 6363 6391 6375 6385 6383 6402 6319 6503 6361 6377 6505 6389 6369 6335 6367 6333 6387 6871 6325 6381 6327 6496 6498 6552 6373 7278 6339 6400 6357 6500 6329 6365

[client(8052)]CRS-10001:ACFS-9113: These processes will now be terminated.

[client(8127)]CRS-10001:ACFS-9114: done.

[client(8136)]CRS-10001:ACFS-9115: Stale mount point /u01/app/oracle/acfsmount/11.2.0/sharedhome was recovered.

[client(8178)]CRS-10001:ACFS-9114: done.

[client(8183)]CRS-10001:ACFS-9116: Stale mount point ‘/u01/app/oracle/acfsmount/11.2.0/sharedhome’ was not recovered.

[client(8185)]CRS-10001:ACFS-9117: Manual intervention is required.

2012-11-17 03:33:34.050

[/u01/app/11.2.0/grid/bin/orarootagent.bin(5682)]CRS-5016:Process “/u01/app/11.2.0/grid/bin/acfssinglefsmount” spawned by agent

“/u01/app/11.2.0/grid/bin/orarootagent.bin” for action “start” failed: details at “(:CLSN00010:)” in “/u01/app/11.2.0/grid/log/host03/agent/crsd/orarootagent_root/orarootagent_root.log”

– At 03:34, the voting disks still cannot be accessed and CSSD starts counting down the disk timeout

2012-11-17 03:34:10.718

[cssd(5149)]CRS-1615:No I/O has completed after 50% of the maximum interval. Voting file ORCL:ASMDISK01 will be considered not functional in 99190 milliseconds

2012-11-17 03:34:10.724

[cssd(5149)]CRS-1615:No I/O has completed after 50% of the maximum interval. Voting file ORCL:ASMDISK02 will be considered not functional in 99180 milliseconds

2012-11-17 03:34:10.724

[cssd(5149)]CRS-1615:No I/O has completed after 50% of the maximum interval. Voting file ORCL:ASMDISK03 will be considered not functional in 99180 milliseconds

2012-11-17 03:35:10.666

[cssd(5149)]CRS-1614:No I/O has completed after 75% of the maximum interval. Voting file ORCL:ASMDISK01 will be considered not functional in 49110 milliseconds

2012-11-17 03:35:10.666

[cssd(5149)]CRS-1614:No I/O has completed after 75% of the maximum interval. Voting file ORCL:ASMDISK02 will be considered not functional in 49110 milliseconds

2012-11-17 03:35:10.666

[cssd(5149)]CRS-1614:No I/O has completed after 75% of the maximum interval. Voting file ORCL:ASMDISK03 will be considered not functional in 49110 milliseconds

2012-11-17 03:35:46.654

[cssd(5149)]CRS-1613:No I/O has completed after 90% of the maximum interval. Voting file ORCL:ASMDISK01 will be considered not functional in 19060 milliseconds

2012-11-17 03:35:46.654

[cssd(5149)]CRS-1613:No I/O has completed after 90% of the maximum interval. Voting file ORCL:ASMDISK02 will be considered not functional in 19060 milliseconds

2012-11-17 03:35:46.654

[cssd(5149)]CRS-1613:No I/O has completed after 90% of the maximum interval. Voting file ORCL:ASMDISK03 will be considered not functional in 19060 milliseconds
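– The 50%/75%/90% warnings above count down against the CSS disktimeout, which defaults to 200 seconds in 11.2 (the ~99190 ms remaining at the 50% mark is consistent with that). The current value can be checked with crsctl; the quick check below is illustrative, output not shown here:

[root@host01 ~]# crsctl get css disktimeout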

– Voting files are offlined as they can’t be accessed

[cssd(5149)]CRS-1604:CSSD voting file is offline: ORCL:ASMDISK01; details at (:CSSNM00058:) in /u01/app/11.2.0/grid/log/host03/cssd/ocssd.log.

2012-11-17 03:36:10.596

[cssd(5149)]CRS-1604:CSSD voting file is offline: ORCL:ASMDISK02; details at (:CSSNM00058:) in /u01/app/11.2.0/grid/log/host03/cssd/ocssd.log.

2012-11-17 03:36:10.596

[cssd(5149)]CRS-1604:CSSD voting file is offline: ORCL:ASMDISK03; details at (:CSSNM00058:) in /u01/app/11.2.0/grid/log/host03/cssd/ocssd.log.

2012-11-17 03:36:10.596

– CSSD of host03 reboots the node as the no. of voting disks available (0) is less than the minimum required (2)

[cssd(5149)]CRS-1606:The number of voting files available, 0, is less than the minimum number of voting files required, 2, resulting in CSSD termination to ensure data integrity; details at (:CSSNM00018:) in /u01/app/11.2.0/grid/log/host03/cssd/ocssd.log

2012-11-17 03:36:15.645

[ctssd(5236)]CRS-2402:The Cluster Time Synchronization Service aborted on host host03. Details at (:ctsselect_mmg5_1: in /u01/app/11.2.0/grid/log/host03/ctssd/octssd.log.
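– The minimum of 2 voting files in the CRS-1606 message comes from the majority rule CSS applies: a node must be able to access more than half of the configured voting files. As a quick calculation for this setup:

  minimum required = floor(N/2) + 1 = floor(3/2) + 1 = 2   (for N = 3 voting files)

Since 0 is less than 2, CSSD terminates and the node is fenced.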

 scan ocssd log of host03

[root@host03 ~]# tailf /u01/app/11.2.0/grid/log/host03/cssd/ocssd.log

– I/O fencing for the ORCL database is carried out by CSSD at 03:32 (the same time at which host02 got the message that orcl has failed on host03)

2012-11-17 03:32:10.356: [    CSSD][997865360]clssgmFenceClient: fencing client (0xaa14990), member 2 in group DBORCL, no share, death fence 1, SAGE fence 0

2012-11-17 03:32:10.356: [    CSSD][997865360]clssgmUnreferenceMember: global grock DBORCL member 2 refcount is 7

2012-11-17 03:32:10.356: [    CSSD][997865360]clssgmFenceProcessDeath: client (0xaa14990) pid 6337 undead

…..

2012-11-17 03:32:10.356: [    CSSD][997865360]clssgmFenceClient: fencing client (0xaa24250), member 4 in group DAALL_DB, no share, death fence 1, SAGE fence 0

2012-11-17 03:32:10.356: [    CSSD][997865360]clssgmFenceClient: fencing client (0xaa6db08), member 0 in group DG_LOCAL_DATA, same group share, death fence 1, SAGE fence 0

2012-11-17 03:32:10.357: [    CSSD][864708496]clssgmTermMember: Terminating member 2 (0xaa15920) in grock DBORCL

2012-11-17 03:32:10.358: [    CSSD][864708496]clssgmFenceCompletion: (0xaa46760) process death fence completed for process 6337, object type 3

..

2012-11-17 03:32:10.358: [    CSSD][864708496]clssgmFenceCompletion: (0xaa05758) process death fence completed for process 6337, object type 2

2012-11-17 03:32:10.359: [    CSSD][852125584]clssgmRemoveMember: grock DAALL_DB, member number 4 (0xaa05aa8) node number 3 state 0x0 grock type 2

2012-11-17 03:32:11.310: [   SKGFD][942172048]ERROR: -15(asmlib ASM:/opt/oracle/extapi/32/asm/orcl/1/libasm.so op ioerror error I/O Error)

2012-11-17 03:32:11.310: [    CSSD][942172048](:CSSNM00059:)clssnmvWriteBlocks: write failed at offset 19 of ORCL:ASMDISK02

2012-11-17 03:32:11.310: [   SKGFD][973764496]ERROR: -15(asmlib ASM:/opt/oracle/extapi/32/asm/orcl/1/libasm.so op ioerror error I/O Error)

2012-11-17 03:32:11.310: [    CSSD][973764496](:CSSNM00059:)clssnmvWriteBlocks: write failed at offset 19 of ORCL:ASMDISK03

2012-11-17 03:32:11.349: [    CSSD][997865360]clssgmFenceClient: fencing client (0xaa38ae0), member 2 in group DBORCL, same group share, death fence 1, SAGE fence 0

2012-11-17 03:32:11.349: [    CSSD][997865360]clssgmFenceClient: fencing client (0xaa5e128), member 0 in group DG_LOCAL_DATA, same group share, death fence 1, SAGE fence 0

2012-11-17 03:32:11.354: [    CSSD][908748688]clssnmvDiskAvailabilityChange: voting file ORCL:ASMDISK01 now offline

2012-11-17 03:32:11.354: [    CSSD][973764496]clssnmvDiskAvailabilityChange: voting file ORCL:ASMDISK03 now offline

2012-11-17 03:32:11.354: [    CSSD][931682192]clssnmvDiskAvailabilityChange: voting file ORCL:ASMDISK02 now offline

2012-11-17 03:32:12.038: [    CSSD][810166160]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:ASMDISK02 sched delay 1610 > margin 1500 cur_ms 232074 lastalive 230464

2012-11-17 03:32:12.038: [    CSSD][810166160]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:ASMDISK01 sched delay 1640 > margin 1500 cur_ms 232074 lastalive 230434

….

2012-11-17 03:32:12.223: [    CLSF][887768976]Closing handle:0xa746bc0

2012-11-17 03:32:12.223: [   SKGFD][887768976]Lib :ASM:/opt/oracle/extapi/32/asm/orcl/1/libasm.so: closing handle 0xa746df8 for disk :ORCL:ASMDISK01:

2012-11-17 03:32:12.236: [    CLSF][921192336]Closing handle:0xa5cbbb0

2012-11-17 03:32:12.236: [   SKGFD][921192336]Lib :ASM:/opt/oracle/extapi/32/asm/orcl/1/libasm.so: closing handle 0xa644fb8 for disk :ORCL:ASMDISK02:

2012-11-17 03:32:13.825: [    CSSD][997865360]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:ASMDISK03 sched delay 3110 > margin 1500 cur_ms 233574 lastalive 230464

2012-11-17 03:32:13.825: [    CSSD][997865360]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:ASMDISK02 sched delay 3110 > margin 1500 cur_ms 233574 lastalive 230464

2012-11-17 03:32:13.825: [    CSSD][997865360]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:ASMDISK01 sched delay 3140 > margin 1500 cur_ms 233574

2012-11-17 03:36:10.638: [    CSSD][877279120]CALL TYPE: call   ERROR SIGNALED: no   CALLER: clssscExit

scan alert log of host01

– Note that the reboot advisory message from host03 is received at 03:36:16

[root@host01 host01]# tailf /u01/app/11.2.0/grid/log/host01/alerthost01.log

[ohasd(4942)]CRS-8011:reboot advisory message from host: host03, component: mo031159, with time stamp: L-2012-11-17-03:36:16.705

[ohasd(4942)]CRS-8013:reboot advisory message text: clsnomon_status: need to reboot, unexpected failure 8 received from CSS

2012-11-17 03:36:29.610

– After host03 reboots itself, network communication with host03 is lost

[cssd(5177)]CRS-1612:Network communication with node host03 (3) missing for 50% of timeout interval.  Removal of this node from cluster in 14.060 seconds

2012-11-17 03:36:37.988

[cssd(5177)]CRS-1611:Network communication with node host03 (3) missing for 75% of timeout interval.  Removal of this node from cluster in 7.050 seconds

2012-11-17 03:36:43.992

[cssd(5177)]CRS-1610:Network communication with node host03 (3) missing for 90% of timeout interval.  Removal of this node from cluster in 2.040 seconds

2012-11-17 03:36:46.441

– After network communication can’t be established for the timeout interval, the node is removed from the cluster

[cssd(5177)]CRS-1632:Node host03 is being removed from the cluster in cluster incarnation 232819906

2012-11-17 03:36:46.572

[cssd(5177)]CRS-1601:CSSD Reconfiguration complete. Active nodes are host01 host02 .
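– The 50%/75%/90% network warnings above count down against the CSS misscount (the network heartbeat timeout), which defaults to 30 seconds in 11.2; the ~14 seconds remaining at the 50% mark roughly matches that. It can be checked from any node, for example:

[root@host01 ~]# crsctl get css misscount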

scan ocssd log of host01

– Note that the ocssd process of host01 detects the impact of host03’s missing disk heartbeat as early as 03:32:16

[root@host01 cssd]# tailf /u01/app/11.2.0/grid/log/host01/cssd/ocssd.log



2012-11-17 03:32:16.352: [    CSSD][852125584]clssgmGrockOpTagProcess: clssgmCommonAddMember failed, member(-1/CLSN.ONS.ONSNETPROC[3]) on node(3)

2012-11-17 03:32:16.352: [    CSSD][852125584]clssgmGrockOpTagProcess: Operation(3) unsuccessful grock(CLSN.ONS.ONSNETPROC[3])

2012-11-17 03:32:16.352: [    CSSD][852125584]clssgmHandleMasterJoin: clssgmProcessJoinUpdate failed with status(-10)

….

2012-11-17 03:36:15.328: [    CSSD][810166160]clssnmDiscHelper: host03, node(3) connection failed, endp (0x319), probe((nil)), ninf->endp 0x319

2012-11-17 03:36:15.328: [    CSSD][810166160]clssnmDiscHelper: node 3 clean up, endp (0x319), init state 3, cur state 3

2012-11-17 03:36:15.329: [GIPCXCPT][852125584]gipcInternalDissociate: obj 0x96c7eb8 [0000000000001310] { gipcEndpoint : localAddr ‘gipc://host01:f278-d1bd-1509-2f25#10.0.0.1#20071′, remoteAddr ‘gipc://host03:gm_cluster01#10.0.0.3#58536′, numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x141d, pidPeer 0, flags 0x261e, usrFlags 0x0 } not associated with any container, ret gipcretFail (1)


scan alert log of host02

Note that the reboot advisory message is received at 03:36:16

[root@host02 ~]# tailf /u01/app/11.2.0/grid/log/host02/alerthost02.log

. At 03:32, the CRSD process of host02 receives the message that the orcl database has failed on host03, as the datafiles for orcl are on shared storage

[crsd(5576)]CRS-2765:Resource ‘ora.orcl.db’ has failed on server ‘host03′.

2012-11-17 03:32:44.303

. The CRSD process of host02 receives the message that ACFS has failed on host03, as the shared storage can’t be accessed

[crsd(5576)]CRS-2765:Resource ‘ora.acfs.dbhome_1.acfs’ has failed on server ‘host03‘.

2012-11-17 03:36:16.981

    . ohasd process receives reboot advisory message from host03

[ohasd(4916)]CRS-8011:reboot advisory message from host: host03, component: ag031159, with time stamp: L-2012-11-17-03:36:16.705

[ohasd(4916)]CRS-8013:reboot advisory message text: clsnomon_status: need to reboot, unexpected failure 8 received from CSS

2012-11-17 03:36:16.981

[ohasd(4916)]CRS-8011:reboot advisory message from host: host03, component: mo031159, with time stamp: L-2012-11-17-03:36:16.705

[ohasd(4916)]CRS-8013:reboot advisory message text: clsnomon_status: need to reboot, unexpected failure 8 received from CSS

2012-11-17 03:36:28.920

    . CSSD process of host02 identifies missing network communication from host03 as host03 has rebooted itself

[cssd(5284)]CRS-1612:Network communication with node host03 (3) missing for 50% of timeout interval.  Removal of this node from cluster in 14.420 seconds

2012-11-17 03:36:37.307

[cssd(5284)]CRS-1611:Network communication with node host03 (3) missing for 75% of timeout interval.  Removal of this node from cluster in 7.410 seconds

2012-11-17 03:36:43.328

[cssd(5284)]CRS-1610:Network communication with node host03 (3) missing for 90% of timeout interval.  Removal of this node from cluster in 2.400 seconds
– After network communication can’t be established for the timeout interval, the node is removed from the cluster

2012-11-17 03:36:46.297

[cssd(5284)]CRS-1601:CSSD Reconfiguration complete. Active nodes are host01 host02 .

2012-11-17 03:36:46.470

[crsd(5576)]CRS-5504:Node down event reported for node ‘host03′.

2012-11-17 03:36:51.890

[crsd(5576)]CRS-2773:Server ‘host03′ has been removed from pool ‘Generic’.

2012-11-17 03:36:51.909

[crsd(5576)]CRS-2773:Server ‘host03′ has been removed from pool ‘ora.orcl’.

[cssd(5284)]CRS-1601:CSSD Reconfiguration complete. Active nodes are host01 host02 host03 .

scan ocssd log of host02

– Note that the ocssd of host02 discovers that host03 is missing only after host03 has rebooted itself at 03:36

[root@host02 ~]# tailf /u01/app/11.2.0/grid/log/host02/cssd/ocssd.log

2012-11-17 03:36:15.052: [    CSSD][810166160]clssnmDiscHelper: host03, node(3) connection failed, endp (0x22e), probe((nil)), ninf->endp 0x22e

2012-11-17 03:36:15.052: [    CSSD][810166160]clssnmDiscHelper: node 3 clean up, endp (0x22e), init state 3, cur state 3

…..

2012-11-17 03:36:15.052: [    CSSD][852125584]clssgmPeerDeactivate: node 3 (host03), death 0, state 0x1 connstate 0x1e

….

2012-11-17 03:36:28.920: [    CSSD][841635728]clssnmPollingThread: node host03 (3) at 50% heartbeat fatal, removal in 14.420 seconds

2012-11-17 03:36:28.920: [    CSSD][841635728]clssnmPollingThread: local diskTimeout set to 27000 ms, remote disk timeout set to 27000, impending reconfig status(1)

2012-11-17 03:36:29.017: [    CSSD][810166160]clssnmvSchedDiskThreads: DiskPingMonitorThread sched delay 810 > margin 750 cur_ms 474884 lastalive 474074

2012-11-17 03:36:29.017: [    CSSD][810166160]clssnmvSchedDiskThreads: DiskPingMonitorThread sched delay 810 > margin 750 cur_ms 474884 lastalive 474074

2012-11-17 03:36:29.017: [    CSSD][810166160]clssnmvSchedDiskThreads: DiskPingMonitorThread sched delay 810 > margin 750 cur_ms 474884 lastalive 474074

2012-11-17 03:36:29.908: [    CSSD][852125584]clssgmTagize: version(1), type(13), tagizer(0x80cf3ac)

2012-11-17 03:36:29.908: [    CSSD][852125584]clssgmHandleDataInvalid: grock HB+ASM, member 1 node 1, birth 1

2012-11-17 03:36:30.218: [    CSSD][810166160]clssnmvSchedDiskThreads: DiskPingMonitorThread sched delay 810 > margin 750 cur_ms 475884 lastalive 475074

2012-11-17 03:36:30.218: [    CSSD][810166160]clssnmvSchedDiskThreads: DiskPingMonitorThread sched delay 810 > margin 750 cur_ms 475884 lastalive 475074

2012-11-17 03:36:30.218: [    CSSD][810166160]clssnmvSchedDiskThreads: DiskPingMonitorThread sched delay 810 > margin 750 cur_ms 475884 lastalive 475074

2012-11-17 03:36:31.408: [    CSSD][810166160]clssnmvSchedDiskThreads: DiskPingMonitorThread sched delay 790 > margin 750 cur_ms 476864 lastalive 476074

2012-11-17 03:36:31.408: [    CSSD][810166160]clssnmvSchedDiskThreads: DiskPingMonitorThread sched delay 790 > margin 750 cur_ms 476864 lastalive 476074

2012-11-17 03:36:31.408: [    CSSD][810166160]clssnmvSchedDiskThreads: DiskPingMonitorThread sched delay 790 > margin 750 cur_ms 476864 lastalive 476074

2012-11-17 03:36:32.204: [    CSSD][831145872]clssnmSendingThread: sending status msg to all nodes

….

2012-11-17 03:36:46.161: [    CSSD][810166160]clssnmHandleSync: local disk timeout set to 27000 ms, remote disk timeout set to 27000


Related links:

Home

11G R2 RAC Index
Node Eviction Due To Missing Network Heartbeat 
Node Eviction Due To Member Kill Escalation 
Node Eviction Due To CSSD Agent Stopping

11g R2 RAC: Reboot-less Node Fencing
11g R2 RAC: Reboot-less Fencing With Missing Disk Heartbeat
11g R2 RAC: Reboot-less Fencing With Missing Network Heartbeat
 

DUPLICATE DATABASE WITHOUT CONNECTION TO TARGET DATABASE

In this post, I will demonstrate how to duplicate a database from its backups without any connection to the source database. This method can be used if the source database is not available.
********************************
  source database  orcl
  Duplicate database  orclt
********************************
Overview:
on the source host
- BACKUP DATABASE PLUS ARCHIVELOG AND CONTROLFILE
- Copy these backup files to the server where you want to create the duplicate copy.
- CREATE PFILE FROM SOURCE DATABASE
on the target host
- Add a line in the file /etc/oratab to reflect the database instance you are going to copy
- Create folders
- Copy the backup files from the source host
- Copy the initialization parameter file from the source database and edit it.
- Copy the password file
- Startup the duplicate database (orclt) in nomount mode using the modified parameter file
- Using RMAN, connect to the duplicate database (orclt) as auxiliary instance
- Specify the location of the backups and duplicate the database orcl to orclt
Implementation
————–
*******************
on the source  host
*******************
- BACKUP DATABASE PLUS ARCHIVELOG AND CONTROLFILE
—-
—–
oracle@source$mkdir /home/oracle/stage
oracle@source$. oraenv orcl
              rman target /
RMAN>backup database format '/home/oracle/stage/%U.bak';
     backup archivelog all format '/home/oracle/stage/arch_%U.bak';
     backup current controlfile format '/home/oracle/stage/control.bak';
– CREATE PFILE FROM SOURCE DATABASE
SQL>CREATE PFILE='/home/oracle/stage/initsource.ora'
    FROM SPFILE;
*****************
 on the target host.
*****************
– Make a staging folder for backups and pfile
oracle@dest$mkdir -p /home/oracle/stage
– Create other required folders
 $mkdir -p /u01/app/oracle/oradata/orclt
  mkdir -p /u01/app/oracle/flash_recovery_area/orclt
  mkdir -p /u01/app/oracle/admin/orclt/adump
  mkdir -p /u01/app/oracle/admin/orclt/dpdump
– Copy backup files from the source host
 # scp source:/home/oracle/stage/*.bak /home/oracle/stage/
– Copy pfile of source database (orcl)
 # scp source:/home/oracle/stage/initsource.ora /home/oracle/stage/inittarget.ora
– Copy the password file as well 
 $ cp /u01/app/oracle/product/11.2.0/db_1/dbs/orapworcl  /u01/app/oracle/product/11.2.0/db_1/dbs/orapworclt
 
– Add a line in the file /etc/oratab to reflect the database instance you are going to copy:
orclt:/u01/app/oracle/product/11.2.0/db_1:N
– Edit the initialization parameter file from the main database.
$vi /home/oracle/stage/inittarget.ora
   – Change db_name = orclt
   – Edit it to reflect the new locations that might be appropriate,
     such as control file locations, audit dump destinations, datafile
     locations, etc.
   – Add these lines –
     db_file_name_convert = ("/u01/app/oracle/oradata/orcl",
                             "/u01/app/oracle/oradata/orclt")
     log_file_name_convert = ("/u01/app/oracle/oradata/orcl",
                             "/u01/app/oracle/oradata/orclt")
In case source and destination databases use ASM, the following lines can be added:
db_file_name_convert = ("+DATA/orcl","+DATA/orclt")
log_file_name_convert = ("+DATA/orcl","+DATA/orclt","+FRA/orcl","+FRA/orclt")
– Now set the Oracle SID as the duplicated database SID:
$ . oraenv
ORACLE_SID = [orclt] ?
– Startup the duplicate database (orclt) in nomount mode using the modified parameter file
$sqlplus sys/oracle as sysdba
SQL> startup nomount pfile='/home/oracle/stage/inittarget.ora';
     create spfile from pfile='/home/oracle/stage/inittarget.ora';
- Using RMAN, connect to the duplicate database (orclt) as auxiliary instance
$. oraenv
   orclt
$rman auxiliary /
– duplicate the database orcl to orclt
– the command performs the following steps:
    * Creates an SPFILE
    * Shuts down the instance and restarts it with the new spfile
    * Restores the controlfile from the backup
    * Mounts the database
    * Performs restore of the datafiles. In this stage it creates the files with the
      converted names.
    * Recovers the datafiles up to the time specified and opens the database
– If duplicate database has the same directory structure as source (on a different host)
RMAN>duplicate target database to orclt backup location '/home/oracle/stage/' nofilenamecheck;
OR
– If duplicate database has different directory structure from source
RMAN>duplicate target database to orclt backup location '/home/oracle/stage/' ;
– check that duplicate database is up
$sqlplus / as sysdba
sql>conn hr/hr
    select * from tab;
– Note that the DBID is different from the main database, so it can be backed up
   independently, even using the same catalog.
SQL> select dbid from v$database;
     conn sys/oracle@orcl as sysdba
     select dbid from v$database;

DUPLICATE DATABASE USING BACKUP WITH CONNECTION TO TARGET DATABASE

In this post, I will demonstrate how to duplicate a database using its backups.
This method also requires a connection to the target database, in order to read its controlfile
and get information about the backups.
********************************
  source database  orcl
  Duplicate database  orclt
***********************************
Overview:
on the source host
- Backup database, archivelogs and controlfile
- Copy these backup files to the server where you want to create the duplicate copy.
- Create pfile from source database

 

on the target host
- Add a line in the file /etc/oratab to reflect the database instance (orclt) you are going to copy
- Create folders
- Copy the initialization parameter file from the source database and edit it.
- Copy the password file
- Startup the duplicate database (orclt) in nomount mode using the modified parameter file
- Using RMAN, connect to the source database (orcl) as target database and the duplicate database
  (orclt) as auxiliary instance
- Duplicate the database orcl to orclt
Implementation :
———————-
on the source  host
———————-
– Make a folder to stage the backup
oracle$mkdir /home/oracle/stage
– Take the backup of the source database
oracle$. oraenv orcl
         rman target /
RMAN>CONFIGURE CONTROLFILE AUTOBACKUP ON;
     backup database plus archivelog format '/home/oracle/stage/%U.rmb';
The controlfile backup is also required. If you have configured controlfile autobackup,
the backup will contain the controlfile as well. If you want to be sure, or if you have not
configured controlfile autobackup, you can back up the controlfile explicitly.
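For example, an explicit controlfile backup into the same staging area could look like this (the format string is just an illustration):

RMAN>backup current controlfile format '/home/oracle/stage/control.rmb';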
——-
- Create pfile from source database
——-
SQL>Create pfile='/u01/app/oracle/oradata/orcl/initsource.ora'
    from spfile;
———————————————–
The rest of the steps occur on the target host.
———————————————–
———————————
 Copy the backup files to the server where you want to create the duplicate copy.
——————————-
$mkdir -p /home/oracle/stage
 scp sourcehost:/home/oracle/stage/*.rmb /home/oracle/stage/
– Add a line in the file /etc/oratab to reflect the database instance you are going to copy:
orclt:/u01/app/oracle/product/11.2.0/db_1:N
– Now set the Oracle SID as the duplicated database SID:
# . oraenv
ORACLE_SID = [orclt] ?
– create folders
 $mkdir -p /u01/app/oracle/oradata/orclt
  mkdir -p /u01/app/oracle/flash_recovery_area/orclt
  mkdir -p /u01/app/oracle/admin/orclt/adump
  mkdir -p /u01/app/oracle/admin/orclt/dpdump
– Copy the initialization parameter file from the main database.
– If you are duplicating on the same host
$cp /u01/app/oracle/oradata/orcl/initsource.ora /u01/app/oracle/oradata/orclt/inittarget.ora
OR
– If you are duplicating on a different host
$scp sourcehost:/u01/app/oracle/oradata/orcl/initsource.ora /u01/app/oracle/oradata/orclt/inittarget.ora
– Edit pfile
$vi /u01/app/oracle/oradata/orclt/inittarget.ora
   – Change db_name = orclt
   – Edit it to reflect the new locations that might be appropriate,
     such as control file locations, audit dump destinations, datafile
     locations, etc.
   – Add these lines –
     db_file_name_convert = ("/u01/app/oracle/oradata/orcl",
                             "/u01/app/oracle/oradata/orclt")
     log_file_name_convert = ("/u01/app/oracle/oradata/orcl",
                             "/u01/app/oracle/oradata/orclt")
In case source and destination databases use ASM, the following lines can be added:
db_file_name_convert = ("+DATA/orcl","+DATA/orclt")
log_file_name_convert = ("+DATA/orcl","+DATA/orclt","+FRA/orcl","+FRA/orclt")
————————————–
– Copy the password file as well
$ cp /u01/app/oracle/product/11.2.0/db_1/dbs/orapworcl  /u01/app/oracle/product/11.2.0/db_1/dbs/orapworclt
OR
$ scp sourcehost:/u01/app/oracle/product/11.2.0/db_1/dbs/orapworcl  /u01/app/oracle/product/11.2.0/db_1/dbs/orapworclt
$ . oraenv
ORACLE_SID = [orclt] ?
– Startup the duplicate database in nomount mode using modified parameter file
$sqlplus sys/oracle as sysdba
SQL> startup nomount pfile='/u01/app/oracle/oradata/orclt/inittarget.ora';
     create spfile from pfile='/u01/app/oracle/oradata/orclt/inittarget.ora';
- Using RMAN, connect to the source database (orcl) as target database and the duplicate database
  (orclt) as auxiliary instance
$. oraenv
   orclt
$rman target sys/oracle@orcl auxiliary /
– duplicate the database orcl to orclt
– the command performs the following steps:
    * Creates an SPFILE
    * Shuts down the instance and restarts it with the new spfile
    * Restores the controlfile from the backup
    * Mounts the database
    * Performs restore of the datafiles. In this stage it creates the files with the
      converted names.
    * Recovers the datafiles up to the time specified and opens the database
RMAN>duplicate target database to orclt;
– check that duplicate database is up
$sqlplus / as sysdba
sql>conn hr/hr
    select * from tab;
– Note that the DBID is different from the main database, so it can be backed up
   independently, even using the same catalog.
SQL> select dbid from v$database;
      DBID
———-
3779358555
     conn sys/oracle@orcl as sysdba
    
     select dbid from v$database;
      DBID
———-
1326904854

ORACLE 11G :ACTIVE DATABASE DUPLICATION

Oracle 11g introduced active database duplication, with which we can create a duplicate of the target database without any backups. Duplication is performed over the network.
Procedure :
Overview:
on the source host
- Create pfile from source database
- Create an entry in tnsnames.ora for the duplicate database on the target host on port 1522
on the target host
- Add a line in the file /etc/oratab to reflect the database instance you are going to copy
- Create folders
- Copy the initialization parameter file from the source database and edit it.
- Copy the password file
- Create a listener in the database home on port 1522 and register the duplicate database statically with it
- Startup the duplicate database (orclt) in nomount mode using the modified parameter file
- Using RMAN, connect to the source database (orcl) as target database and the duplicate database (orclt) as auxiliary instance
- Duplicate the target database
********************************
  source database  orcl
  Duplicate database  orclt
***********************************
Implementation
– On source host
– CREATE PFILE FROM SOURCE DATABASE
SQL>CREATE PFILE='/u01/app/oracle/oradata/orcl/initsource.ora' FROM SPFILE;
– On the source host, create a tnsnames.ora entry (service) for orclt pointing to the target host on port 1522
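A minimal tnsnames.ora entry for the auxiliary instance might look like the sketch below (the host name desthost is a placeholder for the target host; port 1522 as per this setup). The same entry is needed wherever RMAN will connect to orclt from:

ORCLT =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = desthost)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orclt)
    )
  )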
The rest of the steps occur on the target host.
– Add a line in the file /etc/oratab to reflect the database instance you are going to copy
     orclt:/u01/app/oracle/product/11.2.0/db_1:N
– Now set the Oracle SID as the duplicated database SID:
# . oraenv
ORACLE_SID = [orclt] ?
– create folders
 $mkdir -p /u01/app/oracle/oradata/orclt
  mkdir -p /u01/app/oracle/flash_recovery_area/orclt
  mkdir -p /u01/app/oracle/admin/orclt/adump
  mkdir -p /u01/app/oracle/admin/orclt/dpdump
– Copy the initialization parameter file from the main database.
$cp  /u01/app/oracle/oradata/orcl/initsource.ora /u01/app/oracle/oradata/orclt/inittarget.ora
– Edit the initialization parameter file
$vi /u01/app/oracle/oradata/orclt/inittarget.ora
   – Change db_name = orclt
   – Edit it to reflect the new locations that might be appropriate,
     such as control file locations, audit dump destinations, datafile
     locations, etc.
   – Add these lines –
     db_file_name_convert = ("/u01/app/oracle/oradata/orcl",
                             "/u01/app/oracle/oradata/orclt")
     log_file_name_convert = ("/u01/app/oracle/oradata/orcl",
                             "/u01/app/oracle/oradata/orclt")
In case source and destination databases use ASM, the following lines can be added:
db_file_name_convert = ("+DATA/orcl","+DATA/orclt")
log_file_name_convert = ("+DATA/orcl","+DATA/orclt","+FRA/orcl","+FRA/orclt")
– Copy the password file as well
$cp /u01/app/oracle/product/11.2.0/db_1/dbs/orapworcl  /u01/app/oracle/product/11.2.0/db_1/dbs/orapworclt
 
– Startup the target database in nomount mode using modified parameter file
$ . oraenv
ORACLE_SID = [orclt] ?
$sqlplus sys/oracle as sysdba
SQL> startup nomount pfile='/u01/app/oracle/oradata/orclt/inittarget.ora';
     create spfile from pfile='/u01/app/oracle/oradata/orclt/inittarget.ora';
– Create a listener on port 1522 in the database home on the target host and statically register the
    service orclt with it, as sketched below.
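A sketch of the static registration in listener.ora on the target host (the listener name LISTENER1522, the host name desthost and the ORACLE_HOME path are illustrative for this setup):

LISTENER1522 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = desthost)(PORT = 1522))
  )

SID_LIST_LISTENER1522 =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = orclt)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/db_1)
      (SID_NAME = orclt)
    )
  )

$ lsnrctl start LISTENER1522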
 
– connect to the auxiliary instance
$. oraenv
   orclt
$rman target sys/oracle@orcl auxiliary sys/oracle@orclt
– duplicate the database orcl to orclt from active database
– the command performs the following steps:
    * Creates an SPFILE
    * Shuts down the instance and restarts it with the new spfile
    * Creates the controlfile for standby database
    * Mounts the database
    * Performs restore of the datafiles. In this stage it creates the files with the
      converted names.
    * Recovers the datafiles up to the time specified and opens the database
RMAN>duplicate target database to orclt from active database;
– check that duplicate database is up
$sqlplus / as sysdba
sql>conn hr/hr
    select * from tab;
– Note that the DBID is different from the main database, so it can be backed up
   independently, even using the same catalog.
SQL> select dbid from v$database;
      DBID
———-
3779357884
     conn sys/oracle@orcl as sysdba
    
     select dbid from v$database;
      DBID
———-
1326904854

ORACLE 11G: AUTOMATIC DOP – PARALLEL THRESHOLD

Oracle 11g introduced automatic DOP, i.e. the Degree of Parallelism for a statement is computed automatically by the optimizer if the parameter PARALLEL_DEGREE_POLICY = AUTO.
Before deciding on the DOP, the optimizer first checks whether the statement needs to be parallelized at all. For this, the parameter PARALLEL_MIN_TIME_THRESHOLD is used. It can be set to an integer number of seconds. If the statement can be executed serially within the time specified by this parameter, it is not parallelized, i.e. it executes serially. If the estimated serial execution takes longer than this threshold, the statement is executed in parallel.
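As a quick contrast (assuming the standard HR sample schema is installed), a query on a small table keeps a serial plan even with PARALLEL_DEGREE_POLICY = AUTO, because its estimated serial run time stays below the threshold:

SQL> explain plan for select * from hr.employees order by 1;
     select * from table(dbms_xplan.display);

The Note section of the plan should report "automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold".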
Let’s demonstrate ….
Set parallel_min_time_threshold = 10 so that a statement will be parallelized if
    its serial execution takes more than 10 secs
SYS>alter system set parallel_degree_policy=auto;
    alter system set parallel_min_time_threshold=10;
    sho parameter parallel
NAME                                 TYPE        VALUE
———————————— ———– —————
fast_start_parallel_rollback         string      LOW
parallel_adaptive_multi_user         boolean     TRUE
parallel_automatic_tuning            boolean     FALSE
parallel_degree_limit                string      CPU
parallel_degree_policy               string      AUTO
parallel_execution_message_size      integer     16384
parallel_force_local                 boolean     FALSE
parallel_instance_group              string
parallel_io_cap_enabled              boolean     FALSE
parallel_max_servers                 integer     20
parallel_min_percent                 integer     0
NAME                                 TYPE        VALUE
———————————— ———– —————
parallel_min_servers                 integer     0
parallel_min_time_threshold          string      10
parallel_server                      boolean     FALSE
parallel_server_instances            integer     1
parallel_servers_target              integer     6
parallel_threads_per_cpu             integer     2
recovery_parallelism                 integer     0
– Find out the execution plan for the following statement
– Note that the execution plan uses parallel servers (automatic DOP: Computed Degree of Parallelism is 2), as sh.sales is a large table and the query would take more than 10 secs if executed serially
SYS>set line 500
           explain plan for
          select * from sh.sales
          order by 1,2,3,4;
 
          select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
—————————————————————————————————————————————–
Plan hash value: 2055439529
—————————————————————————————————————————————–
| Id  | Operation               | Name     | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
—————————————————————————————————————————————–
|   0 | SELECT STATEMENT        |          |   918K|    25M|       |   297  (11)| 00:00:04 |       |    |           |      |            |
|   1 |  PX COORDINATOR         |          |       |       |       |            |          |       |    |           |      |            |
|   2 |   PX SEND QC (ORDER)    | :TQ10001 |   918K|    25M|       |   297  (11)| 00:00:04 |       |    |  Q1,01 | P->S | QC (ORDER) |
|   3 |    SORT ORDER BY        |          |   918K|    25M|    42M|   297  (11)| 00:00:04 |       |    |  Q1,01 | PCWP |               |
|   4 |     PX RECEIVE          |          |   918K|    25M|       |   274   (3)| 00:00:04 |       |    |  Q1,01 | PCWP |               |
|   5 |      PX SEND RANGE      | :TQ10000 |   918K|    25M|       |   274   (3)| 00:00:04 |       |    |  Q1,00 | P->P | RANGE |
PLAN_TABLE_OUTPUT
—————————————————————————————————————————————–
|   6 |       PX BLOCK ITERATOR |          |   918K|    25M|       |   274   (3)| 00:00:04 |     1 | 28 |  Q1,00 | PCWC |               |
|   7 |        TABLE ACCESS FULL| SALES    |   918K|    25M|       |   274   (3)| 00:00:04 |     1 | 28 |  Q1,00 | PCWP |               |
—————————————————————————————————————————————–
Note
—–
   – automatic DOP: Computed Degree of Parallelism is 2
– Now let us set parallel_min_time_threshold=3600 so that a query will be parallelized only if its serial
    execution takes more than 1 hour (3600 secs)
SQL> alter system set parallel_min_time_threshold=3600;
           sho parameter parallel
NAME                                 TYPE        VALUE
———————————— ———– ——————————
fast_start_parallel_rollback         string      LOW
parallel_adaptive_multi_user         boolean     TRUE
parallel_automatic_tuning            boolean     FALSE
parallel_degree_limit                string      CPU
parallel_degree_policy               string      AUTO
parallel_execution_message_size      integer     16384
parallel_force_local                 boolean     FALSE
parallel_instance_group              string
parallel_io_cap_enabled              boolean     FALSE
parallel_max_servers                 integer     20
parallel_min_percent                 integer     0
NAME                                 TYPE        VALUE
———————————— ———– ——————————
parallel_min_servers                 integer     0
parallel_min_time_threshold          string      3600
parallel_server                      boolean     FALSE
parallel_server_instances            integer     1
parallel_servers_target              integer     6
parallel_threads_per_cpu             integer     2
recovery_parallelism                 integer     0
– Check that a serial execution plan (automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold) is generated, as serial execution of the statement will take less than 1 hour
SQL> set line 500
          explain plan for
          select * from sh.sales
           order by 1,2,3,4;
 
           select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
——————————————————————————————————–
Plan hash value: 3803407550
——————————————————————————————————
| Id  | Operation            | Name  | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
——————————————————————————————————
|   0 | SELECT STATEMENT     |       |   918K|    25M|       |  7826   (1)| 00:01:34 |       |       |
|   1 |  SORT ORDER BY       |       |   918K|    25M|    42M|  7826   (1)| 00:01:34 |       |       |
|   2 |   PARTITION RANGE ALL|       |   918K|    25M|       |   494   (3)| 00:00:06 |     1 |    28 |
|   3 |    TABLE ACCESS FULL | SALES |   918K|    25M|       |   494   (3)| 00:00:06 |     1 |    28 |
——————————————————————————————————
PLAN_TABLE_OUTPUT
——————————————————————————————————–
Note
—–
   – automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold


ORACLE 11G: PARALLEL STATEMENT QUEUEING

Oracle 11g introduced parallel statement queueing as a new feature.
Till 10g, if a running statement was already using all the available parallel servers and another statement requiring parallel servers was issued, the second statement would be downgraded to a lower DOP (possibly serial), or would fail if PARALLEL_MIN_PERCENT was set.
As of 11g R2, if the number of parallel servers specified by PARALLEL_SERVERS_TARGET is already in use, the second statement is placed in a queue and starts executing as soon as parallel servers are freed by the first statement.
This feature is influenced by the following parameters:
PARALLEL_DEGREE_POLICY – should be AUTO to enable parallel statement queueing
PARALLEL_MAX_SERVERS – the maximum no. of parallel servers that can be allotted
PARALLEL_SERVERS_TARGET – specifies the no. of parallel servers that can be in use before statements start being queued;
                          its upper limit is decided by PARALLEL_MAX_SERVERS
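For completeness, an individual statement can bypass the queue with the NO_STATEMENT_QUEUING hint (and STATEMENT_QUEUING forces queueing eligibility when the degree policy is not AUTO). For example:

SQL> select /*+ NO_STATEMENT_QUEUING */ * from sh.sales order by 1,2,3,4;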
Let’s demonstrate…
Set parallel_servers_target to 4 so that if one statement has already used up 4 parallel servers and another
   statement requiring parallel servers is issued, the second statement is queued
SQL>
alter system set parallel_degree_policy=auto;
alter system set parallel_servers_target=4;
alter system set parallel_max_servers=20;
SQL> sho parameter parallel
NAME                                 TYPE        VALUE
———————————— ———– ——————————
fast_start_parallel_rollback         string      LOW
parallel_adaptive_multi_user         boolean     TRUE
parallel_automatic_tuning            boolean     FALSE
parallel_degree_limit                string      CPU
parallel_degree_policy               string      AUTO
parallel_execution_message_size      integer     16384
parallel_force_local                 boolean     FALSE
parallel_instance_group              string
parallel_io_cap_enabled              boolean     FALSE
parallel_max_servers                 integer     20
parallel_min_percent                 integer     0
NAME                                 TYPE        VALUE
———————————— ———– ——————————
parallel_min_servers                 integer     0
parallel_min_time_threshold          string      AUTO
parallel_server                      boolean     FALSE
parallel_server_instances            integer     1
parallel_servers_target              integer     4
parallel_threads_per_cpu             integer     2
recovery_parallelism                 integer     0
– Let us see the execution plan of the following statement
– Note that the DOP as computed automatically = 2
    Since the query contains two sets of operations that need parallel servers, i.e. the full table scan and the sort order by, each runs with DOP = 2 and hence 4 parallel servers will be used by this query
SESS1>explain plan for
      select * from sh.sales
      order by 1,2,3,4;
 select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
——————————————————————————————————————————————–
Plan hash value: 2055439529
—————————————————————————————————————————————–
| Id  | Operation               | Name     | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
—————————————————————————————————————————————–
|   0 | SELECT STATEMENT        |          |   918K|    25M|       |   297  (11)| 00:00:04 |       |    |           |      |            |
|   1 |  PX COORDINATOR         |          |       |       |       |            |          |       |    |           |      |            |
|   2 |   PX SEND QC (ORDER)    | :TQ10001 |   918K|    25M|       |   297  (11)| 00:00:04 |       |    |  Q1,01 | P->S | QC (ORDER) |
|   3 |    SORT ORDER BY        |          |   918K|    25M|    42M|   297  (11)| 00:00:04 |       |    |  Q1,01 | PCWP |               |
|   4 |     PX RECEIVE          |          |   918K|    25M|       |   274   (3)| 00:00:04 |       |    |  Q1,01 | PCWP |               |
|   5 |      PX SEND RANGE      | :TQ10000 |   918K|    25M|       |   274   (3)| 00:00:04 |       |    |  Q1,00 | P->P | RANGE |
PLAN_TABLE_OUTPUT
——————————————————————————————————————————————–
|   6 |       PX BLOCK ITERATOR |          |   918K|    25M|       |   274   (3)| 00:00:04 |     1 | 28 |  Q1,00 | PCWC |               |
|   7 |        TABLE ACCESS FULL| SALES    |   918K|    25M|       |   274   (3)| 00:00:04 |     1 | 28 |  Q1,00 | PCWP |               |
—————————————————————————————————————————————–
Note
—–
   - automatic DOP: Computed Degree of Parallelism is 2
– Run the above query in two sessions –
– I have added a hint-style comment to identify the session from which each statement was issued
– The query in first session executes
SESS1>select /*+ sess1 */ * from sh.sales
      order by 1,2,3,4;
– The query in second session hangs
SESS2>select /*+ sess2 */ * from sh.sales
      order by 1,2,3,4;
– Check the status of the queries from a third session
– Note that the statement issued from the first session is executing, while the statement issued from the second session is queued, as all 4 available parallel servers are in use by the first query
SYS> col sql_text for a50
 
          select status, sql_text
          from v$sql_monitor
          where sql_text like '%sess%';
STATUS              SQL_TEXT
——————- ——————————————
EXECUTING           select /*+ sess1 */ * from sh.sales
                          order by 1,2,3,4
QUEUED              select /*+ sess2 */ * from sh.sales
                          order by 1,2,3,4
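– The queued session can also be spotted by its wait event in v$session; in 11.2 the event is reported as resmgr:pq queued (the LIKE filter below is kept loose on purpose):

SYS> select sid, event from v$session where event like 'resmgr%pq%';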
– Abort the query in session 1 by pressing Ctrl-C –
– The query starts executing in session 2 —
– The status of the first query changes from EXECUTING to DONE and
– the status of the second query changes from QUEUED to EXECUTING
SQL> col sql_text for a50
  
           select status, sql_text
           from v$sql_monitor
           where sql_text like '%sess%';
STATUS              SQL_TEXT
——————- ————————————————–
DONE (FIRST N ROWS) select /*+ sess1 */ * from sh.sales
                          order by 1,2,3,4
EXECUTING           select /*+ sess2 */ * from sh.sales
                          order by 1,2,3,4

11g R2 RAC: ADD BACK DELETED NODE

 
 
In 11g R2 RAC, when a node is deleted, only the OCR and the inventories of the remaining nodes are updated. The Grid software on the deleted node is not removed and the inventory of the deleted node is not updated
(/u01/app/oraInventory/ContentsXML/inventory.xml).
 
Hence, to add back a deleted node, we need not copy the Grid software to the node which was deleted earlier. Only the following steps need to be executed:
 
 
Current scenario:
 
I have two nodes in the cluster presently.
 
Host names :
- host01.example.com 
- host02.example.com
 
Node which was deleted earlier and has to be added back:
-  host03.example.com
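– Optionally, before running addNode.sh in the next step, the readiness of host03 can be verified with cluvfy from an existing node:

[grid@host01]$ cluvfy stage -pre nodeadd -n host03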
 
 
———————————————————-
– Run addNode.sh from a currently existing node (say host01) with the -noCopy option
———————————————————–
 
- login as grid
 
[grid@host01]$ cd /u01/app/11.2.0/grid/oui/bin
[grid@host01]$ ./addNode.sh -silent -noCopy ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NEW_NODES={host03}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={host03-vip}"
 
This step updates the inventories on the nodes and instantiates scripts on the local node.
 
———————————————–
– Run root.sh on the newly added node
———————————————-
 
– If root.sh is not present on the newly added node, copy it from any of the existing nodes
 
[root@host03]# /u01/app/11.2.0/grid/root.sh
 
——————————————–
– Check that the node has been added
———————————————
 
[grid@host01]$ olsnodes

[grid@host01]$ crsctl stat res -t
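– Optionally, the addition can also be verified with cluvfy:

[grid@host01]$ cluvfy stage -post nodeadd -n host03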
 
———————————————————–
 
