Category Archives: 11g R2 RAC

PROT-35: The configured Oracle Cluster Registry locations are not accessible

I received this error message while trying to restore the OCR from its backup.

Reason: The ASM compatibility of the diskgroup to which I was trying to restore the OCR was 11.0.0.0.

Solution: Advance the ASM compatibility of the destination diskgroup to a value >= 11.2.0.0 and then retry the OCR restore.
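
For example, the current setting can be checked and advanced with SQL along these lines (a sketch; DATA is just the diskgroup name used in my setup) before re-running ocrconfig -restore as root:

SQL> select name, compatibility, database_compatibility from v$asm_diskgroup;
SQL> alter diskgroup DATA set attribute 'compatible.asm' = '11.2.0.0.0';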

Hope this post was useful!

————————————————————————————————

Related links:

Home

11g R2 RAC Index

INS-08109 unexpected error occured while validating inputs at state ‘InstallOptions’
ORA-15040: diskgroup is incomplete
PRVF-5636 : The DNS response time for an unreachable node exceeded 15000 ms 

 

 

ORA-15040: diskgroup is incomplete

I received this error while trying to bring up the cluster on the second node of my 2-node cluster.

When I tried to start the cluster, the ora.asm resource was in the INTERMEDIATE state.

[root@host02 ~]# crsctl start cluster
CRS-5702: Resource 'ora.evmd' is already running on 'host02'
CRS-2800: Cannot start resource 'ora.asm' as it is already in the INTERMEDIATE state on server 'host02'
CRS-4000: Command Start failed, or completed with errors.

– Since ASM was not up, the OCR could not be read and hence the cluster could not come up

[root@host02 ~]# crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  INTERMEDIATE host02                   OCR not started

– The alert log showed that the DATA diskgroup could not be mounted.

2013-09-04 13:55:38.896
[/u01/app/11.2.0/grid/bin/oraagent.bin(28674)]CRS-5019:All OCR locations are on ASM disk groups [DATA], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0/grid/log/host02/agent/ohasd/oraagent_grid/oraagent_grid.log".

– I tried to mount the DATA diskgroup using SQL, but got ORA-15040 together with ORA-15042, which indicated that disk "0" could not be read

SQL> alter diskgroup data mount;
alter diskgroup data mount
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15040: diskgroup is incomplete
ORA-15042: ASM disk "0" is missing from group number "1" 

– When I listed the ASM disks on the second node, ASMDISK01 was not listed

[root@host02 bin]# oracleasm listdisks
ASMDISK010
ASMDISK011
ASMDISK012
ASMDISK013
ASMDISK014
ASMDISK02
ASMDISK03
ASMDISK04
ASMDISK05
ASMDISK06
ASMDISK07
ASMDISK08
ASMDISK09

– I issued oracleasm scandisks, which led to the discovery of ASMDISK01

[root@host02 bin]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "ASMDISK01"

– Now when I issued oracleasm listdisks, ASMDISK01 was listed as well.

[root@host02 bin]# oracleasm listdisks
ASMDISK01
ASMDISK010
ASMDISK011
ASMDISK012
ASMDISK013
ASMDISK014
ASMDISK02
ASMDISK03
ASMDISK04
ASMDISK05
ASMDISK06
ASMDISK07
ASMDISK08
ASMDISK09

— Now I could mount the DATA diskgroup successfully

SQL> alter diskgroup data mount;

Diskgroup altered.
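
As a final check (a sketch; v$asm_disk is the standard ASM view), you can confirm that all disks of the diskgroup are now visible and carry a MEMBER header:

SQL> select group_number, disk_number, name, path, header_status from v$asm_disk;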

I hope this post was helpful.

—————————————————————-

Related Links:

Home

11g R2 RAC Index

INS-08109 unexpected error occured while validating inputs at state ‘InstallOptions’
ORA-01102: cannot mount database in EXCLUSIVE mode
PROT-35: The configured Oracle Cluster Registry locations are not accessible
PRVF-5636 : The DNS response time for an unreachable node exceeded 15000 ms on following nodes: host01, host02

 

INS-08109 unexpected error occured while validating inputs at state ‘InstallOptions’

I received this error when I was trying to install 11.2.0.3 database software on my 2-node cluster running 11.2.0.3 clusterware.

Reason: The clusterware was not up.

Solution: Start the clusterware and attempt the database software installation again.
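
To verify the state of the stack and start it if needed, something like the following can be run as root on each node (shown only as a sketch; these are the standard crsctl commands):

[root@host01 ~]# crsctl check crs
[root@host01 ~]# crsctl start crs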

—————————————————————————————–

Related Links:

Home

11g R2 RAC Index

ORA-01102: cannot mount database in EXCLUSIVE mode
ORA-15040: diskgroup is incomplete
PROT-35: The configured Oracle Cluster Registry locations are not accessible

PRVF-5636 : The DNS response time for an unreachable node exceeded 15000 ms on following nodes: host01, host02

 ——————————————————————————————-

 

 

 

11g R2 RAC: Highly Available IP (HAIP)

In earlier releases, to minimize node evictions due to frequent private NIC down events, bonding, trunking, teaming, or similar technology was required to make use of redundant network connections between the nodes. Oracle Clusterware now provides an integrated solution, "Redundant Interconnect Usage", which supports IP failover.

Multiple private network adapters can be defined either during the installation phase or afterwards using oifcfg. The ora.cluster_interconnect.haip resource picks a highly available virtual IP (the HAIP) from the link-local IP range (169.254.0.0/16 on Linux/Unix) and assigns one to each private network. With HAIP, interconnect traffic is by default load balanced across all active interconnect interfaces. If a private interconnect interface fails or becomes non-communicative, Clusterware transparently moves the corresponding HAIP address to one of the remaining functional interfaces.

Grid Infrastructure can activate a maximum of four private network adapters at a time, even if more are defined. The number of HAIP addresses is decided by how many private network adapters are active when Grid comes up on the first node in the cluster. If there is only one active private network, Grid creates one HAIP; if two, Grid creates two, and so on. The number of HAIPs does not increase beyond four even if more private network adapters are activated. A restart of the clusterware on all nodes is required for newly added adapters to take effect.

Oracle RAC databases, Oracle Automatic Storage Management (clustered ASM), and Oracle Clusterware components such as CSS, OCR, CRS, CTSS, and EVM employ Redundant Interconnect Usage. Non-Oracle software, and Oracle software not listed above, cannot benefit from this feature.
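
Once the feature is active, each ASM and database instance exposes the link-local address(es) it is actually using; a quick way to verify this from SQL (a sketch based on the standard gv$cluster_interconnects view) is:

SQL> select inst_id, name, ip_address, is_public, source from gv$cluster_interconnects;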

Let’s demonstrate :

Current configuration :

Cluster name : cluster01
nodes : host01, host02

– Overview
– Check the current network configuration
– Check that a link-local HAIP (eth1:1) has been started for the only private interconnect eth1 on both the nodes
– Add another network adapter eth2 to both the nodes
– Assign an IP address to eth2 on both the nodes
– Restart the network service on both the nodes
– Check that eth2 has been activated on both the nodes
– Add eth2 as another private interconnect on one of the nodes
– Check that eth2 has been added to the cluster as another private interconnect
– Check that its HAIP has not been activated yet (clusterware needs to be restarted)
– Restart CRS on both the nodes
– Check that the resource ora.cluster_interconnect.haip has been restarted on both the nodes
– Check that link-local HAIPs (eth1:1 and eth2:1) have been started for both the private interconnects eth1 and eth2 on both the nodes, from the subnet 169.254.*.* reserved for HAIP
– Stop the private interconnect eth1 on node1
– Check that eth1 is not active and the corresponding HAIP has failed over to eth2
– Check that CRS is still up on host01

Implementation

- Check current network configuration

eth0 is the public interface
eth1 is the private interconnect

[root@host01 ~]# oifcfg getif
eth0  192.9.201.0  global  public
eth1  10.0.0.0  global  cluster_interconnect

- check that a link-local HAIP (eth1:1) has been started for the only private interconnect eth1 on both the nodes

[root@host01 ~]# ifconfig -a

(output trimmed to show only private interconnect)

eth1      Link encap:Ethernet  HWaddr 00:0C:29:69:3E:AA
inet addr:10.0.0.1  Bcast:10.0.0.255  Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe69:3eaa/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:134731 errors:0 dropped:0 overruns:0 frame:0
TX packets:116938 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:75265764 (71.7 MiB)  TX bytes:55228739 (52.6 MiB)
Interrupt:75 Base address:0x20a4

eth1:1    Link encap:Ethernet  HWaddr 00:0C:29:69:3E:AA
inet addr:169.254.4.103  Bcast:169.254.127.255  Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
Interrupt:75 Base address:0x20a4

[root@host02 network-scripts]# ifconfig -a

(output trimmed to show only private interconnects)

eth1      Link encap:Ethernet  HWaddr 00:0C:29:44:67:25
inet addr:10.0.0.2  Bcast:10.0.0.255  Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe44:6725/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:31596 errors:0 dropped:0 overruns:0 frame:0
TX packets:32994 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:16550162 (15.7 MiB)  TX bytes:17683576 (16.8 MiB)
Interrupt:75 Base address:0x20a4

eth1:1    Link encap:Ethernet  HWaddr 00:0C:29:44:67:25
inet addr:169.254.91.243  Bcast:169.254.127.255  Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
Interrupt:75 Base address:0x20a4

- Add another network adapter eth2 to both the nodes

- Assign IP address to eth2 on both the nodes
host01 : 10.0.0.11, subnet mask : 255.255.255.0
host02 : 10.0.0.22, subnet mask : 255.255.255.0

- Restart network service on both the nodes

#service network restart

- Check that eth2 has been activated on both the nodes

[root@host01 ~]# ifconfig -a |  grep eth2
eth2      Link encap:Ethernet  HWaddr 00:0C:29:69:3E:B4

[root@host02 network-scripts]# ifconfig -a |  grep eth2

eth2      Link encap:Ethernet  HWaddr 00:0C:29:44:67:2F

- Add eth2 as another private interconnect on one of the nodes

[root@host01 ~]# oifcfg setif -global eth2/10.0.0.0:cluster_interconnect

- check that eth2 has been added to the cluster as another private interconnect

[root@host01 ~]# oifcfg getif

eth0  192.9.201.0  global  public
eth1  10.0.0.0  global  cluster_interconnect
eth2  10.0.0.0  global  cluster_interconnect
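
As an aside, should the interface ever need to be removed from the cluster configuration again, oifcfg also supports deleting it (shown only as a sketch, not part of this demo):

[root@host01 ~]# oifcfg delif -global eth2/10.0.0.0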

- check that the HAIP for eth2 has not been activated yet (clusterware needs to be restarted)

[root@host01 ~]# ifconfig -a |  grep eth2
eth2      Link encap:Ethernet  HWaddr 00:0C:29:69:3E:B4

[root@host02 network-scripts]# ifconfig -a |  grep eth2

eth2      Link encap:Ethernet  HWaddr 00:0C:29:44:67:2F

- Restart crs on both the nodes

[root@host01 ~]# crsctl stop crs
crsctl start crs

[root@host02 network-scripts]# crsctl stop crs
crsctl start crs

- Check that the resource ora.cluster_interconnect.haip has been restarted on both the nodes
(Since it is a resource of the lower stack, the -init option has been used)

[root@host01 ~]# crsctl stat res ora.cluster_interconnect.haip -init

NAME=ora.cluster_interconnect.haip
TYPE=ora.haip.type
TARGET=ONLINE
STATE=ONLINE on host01

- check that link-local HAIPs (eth1:1 and eth2:1) have been started for both the private interconnects eth1 and eth2 on both the nodes, from the subnet 169.254.*.* reserved for HAIP

[root@host01 ~]# ifconfig -a

(output trimmed to show only private interconnects)

eth1      Link encap:Ethernet  HWaddr 00:0C:29:69:3E:AA
inet addr:10.0.0.1  Bcast:10.0.0.255  Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe69:3eaa/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:134731 errors:0 dropped:0 overruns:0 frame:0
TX packets:116938 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:75265764 (71.7 MiB)  TX bytes:55228739 (52.6 MiB)
Interrupt:75 Base address:0x20a4

eth1:1    Link encap:Ethernet  HWaddr 00:0C:29:69:3E:AA
inet addr:169.254.4.103  Bcast:169.254.127.255  Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
Interrupt:75 Base address:0x20a4

eth2      Link encap:Ethernet  HWaddr 00:0C:29:69:3E:B4
inet addr:10.0.0.11  Bcast:10.0.0.255  Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe69:3eb4/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:4358 errors:0 dropped:0 overruns:0 frame:0
TX packets:404 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1487549 (1.4 MiB)  TX bytes:76461 (74.6 KiB)
Interrupt:75 Base address:0x2424

eth2:1    Link encap:Ethernet  HWaddr 00:0C:29:69:3E:B4
inet addr:169.254.196.216  Bcast:169.254.255.255  Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
Interrupt:75 Base address:0x2424

[root@host02 network-scripts]# ifconfig -a

(output trimmed to show only private interconnects)

eth1      Link encap:Ethernet  HWaddr 00:0C:29:44:67:25
inet addr:10.0.0.2  Bcast:10.0.0.255  Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe44:6725/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:31596 errors:0 dropped:0 overruns:0 frame:0
TX packets:32994 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:16550162 (15.7 MiB)  TX bytes:17683576 (16.8 MiB)
Interrupt:75 Base address:0x20a4

eth1:1    Link encap:Ethernet  HWaddr 00:0C:29:44:67:25
inet addr:169.254.91.243  Bcast:169.254.127.255  Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
Interrupt:75 Base address:0x20a4

eth2      Link encap:Ethernet  HWaddr 00:0C:29:44:67:2F
inet addr:10.0.0.22  Bcast:10.0.0.255  Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe44:672f/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:7229 errors:0 dropped:0 overruns:0 frame:0
TX packets:2368 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:4288301 (4.0 MiB)  TX bytes:1163296 (1.1 MiB)
Interrupt:75 Base address:0x2424

eth2:1    Link encap:Ethernet  HWaddr 00:0C:29:44:67:2F
inet addr:169.254.174.223  Bcast:169.254.255.255  Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
Interrupt:75 Base address:0x2424

— stop private interconnect on node1

[root@host01 ~]# ifdown eth1

-- check that eth1 is not active and the corresponding HAIP (169.254.4.103) has failed over to eth2

[root@host01 ~]# ifconfig -a

(output trimmed to show private interconnect only)

eth1      Link encap:Ethernet  HWaddr 00:0C:29:69:3E:AA
BROADCAST MULTICAST  MTU:1500  Metric:1
RX packets:163401 errors:0 dropped:0 overruns:0 frame:0
TX packets:145495 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:89098576 (84.9 MiB)  TX bytes:69881778 (66.6 MiB)
Interrupt:75 Base address:0x20a4

eth2      Link encap:Ethernet  HWaddr 00:0C:29:69:3E:B4
inet addr:10.0.0.11  Bcast:10.0.0.255  Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe69:3eb4/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:11649 errors:0 dropped:0 overruns:0 frame:0
TX packets:4738 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:6370975 (6.0 MiB)  TX bytes:2033237 (1.9 MiB)
Interrupt:75 Base address:0x2424

eth2:1    Link encap:Ethernet  HWaddr 00:0C:29:69:3E:B4
inet addr:169.254.196.216  Bcast:169.254.255.255  Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
Interrupt:75 Base address:0x2424

eth2:2    Link encap:Ethernet  HWaddr 00:0C:29:69:3E:B4
inet addr:169.254.4.103  Bcast:169.254.127.255  Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
Interrupt:75 Base address:0x2424

– check that crs is still up on host01

[root@host01 ~]# crsctl stat res -t
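
To return to the original setup after the test, the downed interface can simply be brought back up; the relocated HAIP should then fail back to eth1 (output not shown here):

[root@host01 ~]# ifup eth1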

References:

http://www.oracle.com/technetwork/products/clusterware/overview/oracle-clusterware-11grel2-owp-1-129843.pdf

http://ora-ssn.blogspot.in/2011/09/redundant-interconnect-usage-in-11g-r2.html

http://oraschool.tistory.com/38

————————————————————————————————-

Related Links:

Home

11g R2 RAC Index
11g R2 RAC: NIC Bonding

 

———————————————————–

PRVF-5636 : The DNS response time for an unreachable node exceeded 15000 ms on following nodes: host01, host02

I received this error message when I was trying to install 11.2.0.3 grid infrastructure software on an OEL 5.4 32-bit machine. I resolved it as follows:

- Modified the /var/named/chroot/etc/named.conf file on the DNS server so that lookups for names it cannot resolve fail immediately instead of being forwarded to the (unreachable) root name servers.

Changed the entry for zone "." IN to

zone "." IN {
        type hint;
        file "/dev/null";
};

- Restarted named service

[root@server1 ~]# service named restart
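
To confirm that the change had the intended effect, a lookup for a non-existent host against the DNS server can be timed; it should now fail almost immediately (a sketch; the hostname is made up):

[root@host01 ~]# time nslookup nosuchnode.cluster01.example.com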

- Invoked runInstaller again.

[grid@host01 clusterware]$ ./runInstaller

This time I did not get the error.
Hope it helps!

———————————————————————————————

Related Links:

Home

11g R2 RAC Index

INS-08109 unexpected error occured while validating inputs at state ‘InstallOptions’
ORA-01102: cannot mount database in EXCLUSIVE mode
ORA-15040: diskgroup is incomplete
PROT-35: The configured Oracle Cluster Registry locations are not accessible

LOCAL ARCHIVE WITH NFS

 

In this post, I will demonstrate the use of NFS to perform recovery when archivelogs are stored locally by each instance.

In a cluster database we can store archivelogs in
– a shared location, e.g. ASM, OR
– a local archive log destination for each instance

If archivelogs are stored locally and an instance needs to perform recovery, it will need the logs from all the instances. Hence, it is recommended that a local archive log destination be created for each instance, with NFS read mount points to all the other instances. This is known as the local archive with network file system (NFS) scheme. During recovery, an instance can access the logs from any host without having to first copy them to its local destination.

– OVERVIEW –

– Enable local archiving on each of the 3 nodes in the cluster
– Take the example tablespace offline in immediate mode so that it will need recovery when it is brought online
– Trigger a manual checkpoint
– Switch logs on all the 3 nodes to generate archivelogs
– Try to bring the example tablespace online – will fail as it needs media recovery
– Try to recover the example tablespace – fails as archivelogs from the other instances are inaccessible
– Mount the archivelog folders from the other nodes using NFS
– Try to recover the example tablespace – succeeds as archivelogs from the other instances are now accessible
– Bring the example tablespace online – will succeed as it has been recovered

– IMPLEMENTATION –

– create folders to store archived logs on 3 nodes –

host01$mkdir /home/oracle/archive1
host02$mkdir /home/oracle/archive2
host03$mkdir /home/oracle/archive3

— Log in to the database from one of the nodes, say host01
– Set the archive log destinations of the three instances to the folders created above

SQL>set sqlprompt ORCL1>

ORCL1>sho parameter db_recovery

      alter system set log_archive_dest_1='location=/home/oracle/archive1' scope=both sid='orcl1';
      alter system set log_archive_dest_1='location=/home/oracle/archive2' scope=both sid='orcl2';
      alter system set log_archive_dest_1='location=/home/oracle/archive3' scope=both sid='orcl3';

ORCL1>archive log list;

– Put the database in archivelog mode if not already

host01$srvctl stop database -d orcl
                srvctl start instance -d orcl -i orcl1 -o mount

ORCL1>alter database archivelog;
      archive log list;
      shu immediate;

host01$srvctl start database -d orcl
                  srvctl status database -d orcl

— On node1 , switch logs and verify that archive logs are generated in specified location –

ORCL1>archive log list;
      alter system switch logfile;
      /
      /
      select name from v$archived_log;


      ho ls /home/oracle/archive1/

— On node2 , switch logs and verify that archive logs are generated in specified location –

SQL>set sqlprompt ORCL2>


ORCL2>archive log list;
      alter system switch logfile;
      /
      /
      select name from v$archived_log;


      ho ls /home/oracle/archive2/

— On node3, switch logs and verify that archive logs are generated in specified location –

SQL>set sqlprompt ORCL3>

ORCL3>archive log list;
      alter system switch logfile;
      /
      /
      select name from v$archived_log;

      ho ls /home/oracle/archive3/

– Take example tablespace offline immediate
–  Trigger a checkpoint
—   Switch logs on all the 3 instances

ORCL1>alter tablespace example offline immediate;
      alter system checkpoint;
      alter system switch logfile;
      /
      /
      select name from v$archived_log;

      ho ls /home/oracle/archive1/

ORCL2>alter system switch logfile;
      /
      /

      select name from v$archived_log;

      ho ls /home/oracle/archive2/

ORCL3>alter system switch logfile;
      /
      /
      select name from v$archived_log;

      ho ls /home/oracle/archive3/

– Try to bring example tablespace online – Needs media recovery

ORCL1>alter tablespace example online;

– Try to recover example tablespace from node1 – fails as local archived logs from node2 and node3 are not accessible;

host01RMAN>recover tablespace example;

– Make the folders containing archived logs on node2 and node3 sharable and start portmap and nfs service

— node2 —
– Add following line to /etc/exports

/home/oracle/archive2 *(rw,sync)

host02#service portmap restart
       service nfs restart

— node3 —
– Add following line to /etc/exports

/home/oracle/archive3 *(rw,sync)

– start portmap and nfs service on node3

host03#service portmap restart
       service nfs restart

– on node1, create folders where archivelog folders from node2 and node3 will be mounted

host01$mkdir /home/oracle/archive2
                 mkdir /home/oracle/archive3

– As root user on node1,
   – start portmap and nfs service
   – Mount the archive folders on node2 and node3

host01#service portmap restart
                  service nfs restart

host01#mount host02:/home/oracle/archive2 /home/oracle/archive2
       mount host03:/home/oracle/archive3 /home/oracle/archive3
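
If these mounts are to survive a reboot, equivalent entries can be added to /etc/fstab on node1 (a sketch; the mount options are illustrative and should be adapted to your standards):

host02:/home/oracle/archive2  /home/oracle/archive2  nfs  ro,hard,intr  0 0
host03:/home/oracle/archive3  /home/oracle/archive3  nfs  ro,hard,intr  0 0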

– check that archivelogs on node2 and node3 are accessible on node1 –

host01$ls /home/oracle/archive2
host01$ls /home/oracle/archive3

– Try to recover example tablespace from node1 – succeeds as local archived logs from node2 and node3 are accessible;

host01RMAN>recover tablespace example;

– Bring the example tablespace online – succeeds as it has been recovered

ORCL1>alter tablespace example online;

——————————————————————————————-
Related links:
HOME

11g R2 RAC : Add Instance Manually
11g R2 RAC : Autolocate Backups
11g  R2 RAC : Clone Database Home
11g  R2 RAC : NIC Bonding

 

————

ADD INSTANCE MANUALLY

We can add an instance to a cluster database by 3 methods:
- Enterprise Manager
- DBCA
- Manually
In this post I will demonstrate how to add an instance manually to a RAC database.
Current scenario
———————-
Total no. of nodes in the cluster : 3
Names of nodes                    : host01, host02, host03
Name of RAC database              : orcl
Instances of orcl database        : orcl1, orcl2
Nodes hosting orcl instances      : host01, host02
Now I want to add another instance orcl3 of the orcl database on host03 manually.
Following are the steps which need to be taken:
- Log in to the orcl database in SQL*Plus as sysdba on one of the existing nodes, say host01
- Create an undo tablespace and redo log groups for instance 3
SQL>create undo tablespace undotbs3 datafile '+DATA';
alter database add logfile thread 3;
alter database add logfile thread 3;
– Set various parameters for the new instance –
SQL>alter system set instance_number=3 scope=spfile sid='orcl3';
alter system set instance_name='orcl3' scope=spfile sid='orcl3';
alter system set thread=3 scope=spfile sid='orcl3';
alter system set undo_tablespace=undotbs3 scope=spfile sid='orcl3';
alter database enable thread 3;
– Stop all instances of the database orcl
$srvctl stop database -d orcl
– Restart database orcl so that new parameters in spfile are read
$srvctl start database -d orcl
—– Add the instance to the database –
$srvctl add instance -d orcl -i orcl3 -n host03
– Start the instance –
$srvctl start instance -d orcl -i orcl3
– Check that all the instances (including the newly added orcl3) are running –
$srvctl status database -d orcl
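– Optionally, double-check from SQL that the new instance has joined the database (a sketch using the standard gv$instance view) –
SQL>select inst_id, instance_name, host_name, status from gv$instance order by inst_id;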
– Copy the following entry for orcl in tnsnames.ora from host01 to tnsnames.ora on host03 -
ORCL =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = cluster01-scan.cluster01.example.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )
– Copy the password file from host01 to host03
host01$scp $ORACLE_HOME/dbs/orapworcl1 host03:/u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapworcl3
— Test that remote connection can be made from host03 –
host03$sqlplus sys/oracle@orcl as sysdba 

 SQL>sho parameter db_name
I hope this article was useful. Your comments and suggestions are always welcome!
———————————————————————————————-
Related links :

 Home
RAC Index
11gR2 RAC: Add A Node

                                                     ——————-