In my earlier post, I explained the concept of server pools, which was introduced in 11g R2 RAC. In this post, I will demonstrate how instance failover takes place in a policy-managed database in 11g R2 RAC.
Prior to 11g R2 RAC, the various instances of a database were mapped to specific nodes in the cluster. Imagine a cluster having 4 nodes: host01, host02, host03 and host04. I have a database DB configured to run on 3 nodes in the cluster, i.e.
Instance DB1 runs on host01
Instance DB2 runs on host02
Instance DB3 runs on host03
Let us say I have a service DB_SERV defined for the database with
Preferred instances : DB1, DB2
Available instances : DB3
As the clusterware starts up, the status is as follows:
Instance DB1 runs on host01
Instance DB2 runs on host02
Instance DB3 runs on host03
Service DB_SERV is supported by host01 and host02, i.e. a user connecting to the database using service DB_SERV will be connected to either host01 or host02.
DB1        DB2        DB3
DB_SERV    DB_SERV
  |          |          |
  V          V          V
host01     host02     host03     host04
Now, if host02 crashes:

DB1        DB2        DB3
DB_SERV    DB_SERV
  |          |          |
  V          V          V
host01     host02     host03     host04
             X
- Instance DB2 crashes.
- Service DB_SERV fails over from the crashed instance (DB2) to the available instance DB3, i.e. a user connecting to the database using service DB_SERV will now be connected to either host01 or host03.
DB1        DB3
DB_SERV    DB_SERV
  |          |
  V          V
host01     host03     host04
This implies that although we had an idle server host04 available, instance DB2 still crashed instead of failing over to host04, since instance DB2 had been configured to run on host02 only.
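This pre-11g R2 behavior can be sketched as a toy model (plain Python for illustration, not Oracle code; the instance, node and service names mirror the example above):

```python
# Toy model of admin-managed service failover (pre-11g R2 RAC).
# Each instance is pinned to one node, so a node crash takes its
# instance down for good; only the SERVICE relocates, from the
# crashed preferred instance to an available instance.

instances = {"DB1": "host01", "DB2": "host02", "DB3": "host03"}
preferred = ["DB1", "DB2"]   # instances where DB_SERV normally runs
available = ["DB3"]          # fallback instances for DB_SERV

def service_placement(crashed_nodes):
    """Return the instances currently supporting service DB_SERV."""
    up = [i for i, node in instances.items() if node not in crashed_nodes]
    placement = [i for i in preferred if i in up]
    spares = [i for i in available if i in up and i not in placement]
    # top the service back up to its preferred cardinality from the spares
    while len(placement) < len(preferred) and spares:
        placement.append(spares.pop(0))
    return placement

print(service_placement(set()))        # ['DB1', 'DB2']
print(service_placement({"host02"}))   # ['DB1', 'DB3']
```

Note that host04 never appears in the model at all: because DB2 can run only on host02, the idle server cannot help.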
ENTER SERVER POOLS:
The above problem has been resolved in 11g R2 RAC with the introduction of server pools. Now the database is configured to run in a server pool. The cardinality of the database is decided by the MIN_SIZE attribute of the server pool. The database runs on the servers assigned to the server pool. If a node hosting the database crashes, an idle node (if available) is moved from the Free pool to the server pool, and the instance running on the crashed node fails over to that node. All the services supported by the instance fail over along with it.
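The rebalancing just described can be sketched as a minimal model (plain Python for illustration, not Oracle internals; MIN_SIZE here stands in for the server pool's MIN_SIZE attribute):

```python
# Toy model of 11g R2 server-pool rebalancing: when a pool server fails,
# the clusterware refills the pool from the Free pool up to MIN_SIZE, and
# the instance from the failed node restarts on the newly added server.

MIN_SIZE = 2

def rebalance(pool, free, crashed):
    """Drop the crashed server, then refill the pool from Free up to MIN_SIZE."""
    pool = [s for s in pool if s != crashed]
    free = [s for s in free if s != crashed]
    while len(pool) < MIN_SIZE and free:
        pool.append(free.pop(0))   # idle server moves: Free -> pmpool
    return pool, free

# the scenario demonstrated below: host03 (running pmdb_2) reboots
pool, free = rebalance(["host02", "host03"], ["host01"], "host03")
print(pool)   # ['host02', 'host01'] -> pmdb_2 restarts on host01
print(free)   # []
```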
Let us see how.
Imagine a cluster having 3 nodes host01, host02 and host03
HOST01 HOST02 HOST03
– create a server pool pmpool with min = 2, max = 2, importance = 1
[grid@host01 ~]$ srvctl add srvpool -g pmpool -l 2 -u 2 -i 1
– check that 2 servers (host02, host03) have been allocated to this server pool
[grid@host01 ~]$crsctl status serverpool
                    PMPOOL
         +------------------------+
         |                        |
HOST01   |   HOST02     HOST03    |
         +------------------------+
– create a policy managed database pmdb which will run in the serverpool pmpool
– check that two servers (host02 and host03) have been allocated to serverpool pmpool
[root@host01 ~]#crsctl status serverpool
– check that two instances of pmdb are running on the 2 servers (host02, host03) in pmpool
[oracle@host01 ~]$ srvctl status database -d pmdb
Instance pmdb_1 is running on node host02
Instance pmdb_2 is running on node host03
                    PMPOOL
         +------------------------+
         |   PMDB_1     PMDB_2    |
HOST01   |   HOST02     HOST03    |
         +------------------------+
REBOOT HOST03 WHERE INSTANCE PMDB_2 IS RUNNING
                    PMPOOL
         +------------------------+
         |   PMDB_1     PMDB_2    |
HOST01   |   HOST02     HOST03    |
         |                 X      |
         +------------------------+
– Check the status of the database pmdb repeatedly –
– Note that instance pmdb_2 fails over to host01 –
[oracle@host01 ~]$ srvctl status database -d pmdb
Instance pmdb_1 is running on node host02
Instance pmdb_2 is running on node host03
[oracle@host01 ~]$
[oracle@host01 ~]$
[oracle@host01 ~]$ srvctl status database -d pmdb
Instance pmdb_1 is running on node host02
Instance pmdb_2 is not running on node host03
[oracle@host01 ~]$
[oracle@host01 ~]$
[oracle@host01 ~]$
[oracle@host01 ~]$
[oracle@host01 ~]$ srvctl status database -d pmdb
Instance pmdb_1 is running on node host02
Instance pmdb_2 is running on node host01
                    PMPOOL
         +------------------------+
         |   PMDB_1     PMDB_2    |
HOST03   |   HOST02     HOST01    |
         +------------------------+
– check that host01 has been moved to pmpool
                    PMPOOL
         +------------------------+
         |                        |
HOST03   |   HOST02     HOST01    |
   X     +------------------------+
[root@host02 ~]# crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=
NAME=Generic
ACTIVE_SERVERS=
NAME=ora.pmpool
ACTIVE_SERVERS=host01 host02
--------------------------------------------------------------------------------
Good article!!! Thanks for this blog. My question is: will the 4th server (node) join the cluster seamlessly and require no manual steps?
Absolutely Vicky. The 4th node will join the server pool seamlessly; no manual intervention is required. (Please note that the 4th node is already part of the cluster. It will become part of the server pool.)
Regards
Anju Garg
Please explain the below lines, I am a little bit confused here:
It implies that although we had an idle server host04 available but still instance DB2 crashed instead of failing over to host02 instance DB2 has been configured to run on host02 only.
Hi Odba2014
Sorry for the typo. It should read:
This implies that although we had an idle server host04 available, instance DB2 still crashed instead of failing over to host04, since instance DB2 had been configured to run on host02 only.
I have corrected it in the post.
Thanks for pointing out.
Regards
Anju
Hi,
I have configured a 2-node RAC (racdev1 & racdev2) with a single database (dev) running on it. Instances dev1 and dev2 are running on their respective nodes.
Now I have added another node (racdev3) – extended the clusterware + RDBMS from racdev1.
My question is: if I, let's say, stop dev2 running on racdev2, will dev2 automatically fail over to racdev3, considering all are part of the same server pool?
Hi Naveed,
Have you created database as a policy managed database?
Regards
Anju Garg