11g R2 RAC: SERVER POOLS

                                        

In this post, I will explain server pools. 11g R2 RAC introduced server pools, which revolutionize cluster management and make the DBA's life easier. To understand server pools, let us first see how things worked earlier:

 

ADMIN MANAGED DATABASES
 In prior releases of Oracle RAC, the DBA would explicitly manage which nodes of a cluster would be used to run the various instances associated with a RAC database. Whenever a node was added/removed from the cluster, DBAs were responsible for adding/removing instances from a RAC database, including the creation of redo log threads and undo tablespaces. Additionally, database services would be manually configured with preferred and available nodes, which facilitated the balancing of connections and failover of the services in a RAC environment. This scheme worked well for a cluster with a small number of nodes.

 

NEED FOR POLICY MANAGED DATABASES
- As the number of nodes grew large, the DBA's tasks became very complicated.
- It was not possible to add instances dynamically to newly added servers.
- It was not possible for additional instances of a database to start automatically as some servers in the cluster became idle.
- It was not possible for a service to become available on newly added nodes dynamically.

 

From Oracle 11g R2 RAC onwards, all these tasks are automated using server pools, a new feature of Oracle 11g RAC. Server pools logically divide the cluster into smaller segments that can be used for varying workloads. Databases and services can now be defined to run in server pools, and the servers in a server pool can change dynamically.

 

ENTER SERVER POOLS
Oracle 11g R2 introduced server pools, which let us divide the cluster into smaller groups of servers. Each pool can be assigned the following attributes:

 

IMPORTANCE: 0 (default) to 1000. Determines how servers are reconfigured across pools when a node joins or leaves the cluster. Using the IMPORTANCE attribute, you can ensure that a low-precedence workload does not eat up resources needed by more important workloads.

 

MAX_SIZE: The maximum number of servers a server pool can contain. The default value is -1, which means that all the servers in the cluster can be part of this pool.

 

MIN_SIZE: The minimum size of a server pool. If the number of servers contained in a server pool is below the number you specify in this attribute, then Oracle Clusterware automatically moves servers from other pools into this one until that number is met. The default value is 0.

 

ACL: A string in the following format: owner:user:rwx,pgrp:group:rwx,other::r-- . It defines the owner of the server pool and the privileges granted to various operating system users and groups:
  • r: Read only
  • w: Modify attributes of the pool or delete it
  • x: Assign resources to this pool
By default, the identity of the client that creates the server pool is the owner.
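To make the three colon-separated entries of the ACL string concrete, here is a purely hypothetical Python helper that decodes one (this is not an Oracle API; Clusterware interprets ACLs internally, and the `grid`/`oinstall` names are just example values):

```python
# Decode a server pool ACL string of the form
# "owner:user:rwx,pgrp:group:rwx,other::r--".
# Hypothetical helper for illustration only -- Clusterware
# interprets ACLs internally.
def parse_acl(acl):
    entries = {}
    for part in acl.split(","):
        kind, name, perms = part.split(":")
        entries[kind] = {"name": name,
                         "read": "r" in perms,    # r: read only
                         "modify": "w" in perms,  # w: modify/delete pool
                         "assign": "x" in perms}  # x: assign resources
    return entries

acl = parse_acl("owner:grid:rwx,pgrp:oinstall:rwx,other::r--")
print(acl["owner"])  # {'name': 'grid', 'read': True, 'modify': True, 'assign': True}
print(acl["other"])  # {'name': '', 'read': True, 'modify': False, 'assign': False}
```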
Clusterware can move servers between server pools automatically to maintain the configuration of important server pools. By default, two pools are created after installation of the cluster: the Generic pool and the Free pool.

 

Generic Pool: The servers in the pool are used to host Admin Managed databases.
  When you upgrade your existing version of the clusterware to 11.2, all the nodes get mapped to the built-in Generic pool. This is an internally managed server pool, and modification of its attributes is not allowed. When you create an Admin-managed database, its server pool also becomes part of the Generic pool, as a child pool of it.

 

Free Pool: All the servers that have not been assigned to any pool are part of this pool. When the clusterware starts, all the servers are initially assigned to this pool. As applications start, servers move dynamically from this pool to the Generic pool or to user-defined pools. Like the Generic pool, this is an internally managed pool, but some of its attributes, such as IMPORTANCE, can still be modified by the DBA.
In addition to the above pools, users can also define their own server pools.

 

Hosts are allocated to the pools in the following order:
     1. Generic Pool
     2. Explicitly created pool
     3. Free Pool

 

Servers will be assigned to explicitly created pools  as per the following rules:
1. Fill all server pools in order of importance until they meet their minimum.
2. Fill all server pools in order of importance until they meet their maximum.
3. Any leftover servers go to the Free pool.
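These three rules can be sketched in Python as a simplified model (an illustrative sketch only, not Oracle's actual implementation):

```python
# Simplified model of how servers are assigned to explicitly created
# pools (illustrative sketch, NOT Oracle's actual implementation):
# fill minimums in order of importance, then maximums, and any
# leftover servers go to the Free pool.
def assign(servers, pools):
    alloc = {p["name"]: [] for p in pools}
    remaining = list(servers)
    by_importance = sorted(pools, key=lambda p: -p["importance"])
    for target in ("min", "max"):          # rule 1, then rule 2
        for p in by_importance:
            while remaining and len(alloc[p["name"]]) < p[target]:
                alloc[p["name"]].append(remaining.pop(0))
    alloc["Free"] = remaining              # rule 3
    return alloc

# With three servers, the pool of higher importance (mypool1) is
# filled to its minimum first; mypool gets the remaining server.
pools = [{"name": "mypool",  "min": 2, "max": 2, "importance": 1},
         {"name": "mypool1", "min": 2, "max": 2, "importance": 2}]
print(assign(["host01", "host02", "host03"], pools))
```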

 

If servers leave the cluster for any reason, a server reallocation may take place. If there are servers in the Free Pool and another server pool falls below its maximum value, a free server is allocated to the affected server pool. If there are no free servers, then server reallocation takes place only if a server pool falls below its minimum level. If that occurs, a server will be sourced from one of the following locations in the following order:

1. The server pool with the lowest importance that has more than its minimum number of servers
2. The server pool with the lowest importance that has at least one server and has lower importance than the affected server pool

Using these mechanisms, server pools can maintain an optimal level of resources based on the number of servers currently available.
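The donor-selection order described above can also be sketched in Python (again, only an illustrative model of the documented ordering, not Oracle's code):

```python
# Sketch of the donor-selection order when a pool ("needy") falls
# below its minimum and no free servers exist (illustrative only,
# not Oracle's code).
def find_donor(pools, counts, needy):
    others = [p for p in pools if p["name"] != needy["name"]]
    # 1. Lowest-importance pool holding more than its minimum.
    over_min = [p for p in others if counts[p["name"]] > p["min"]]
    if over_min:
        return min(over_min, key=lambda p: p["importance"])["name"]
    # 2. Lowest-importance pool with at least one server and lower
    #    importance than the affected pool.
    weaker = [p for p in others
              if counts[p["name"]] >= 1
              and p["importance"] < needy["importance"]]
    if weaker:
        return min(weaker, key=lambda p: p["importance"])["name"]
    return None  # no eligible donor

pools = [{"name": "DB1", "min": 4, "importance": 10},
         {"name": "DB3", "min": 2, "importance": 5}]
# DB1 has lost a server; DB3 sits at its minimum but has lower
# importance, so rule 2 picks it.
print(find_donor(pools, {"DB1": 3, "DB3": 2}, pools[0]))  # DB3
```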

DEMONSTRATION:
 
Current scenario:
Nodes in the cluster : host01, host02, host03

 

- Let’s see the current status of server pools
[root@host02 host02]# crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=host01 host02 host03
NAME=Generic
ACTIVE_SERVERS=

 

- Create a server pool mypool with min = 2, max = 2, importance = 1
[root@host02 host02]# srvctl add srvpool -g mypool -l 2 -u 2 -i 1

 

- check that two servers have been allocated to mypool from free pool
[root@host02 host02]# crsctl stat serverpool
NAME=Free
ACTIVE_SERVERS=host01
NAME=Generic
ACTIVE_SERVERS=
NAME=ora.mypool
ACTIVE_SERVERS=host02 host03

 

- Add another server pool mypool1 with min = 2, max = 2, imp = 1
[root@host02 host02]# srvctl add srvpool -g mypool1 -l 2 -u 2 -i 1

 

- check that one  server has been allocated to mypool1 from free pool
 [root@host02 host02]# crsctl stat serverpool
NAME=Free
ACTIVE_SERVERS=
NAME=Generic
ACTIVE_SERVERS=
NAME=ora.mypool
ACTIVE_SERVERS=host02 host03
NAME=ora.mypool1
ACTIVE_SERVERS=host01

 

- Increase the importance of mypool1 to 2
[root@host02 host02]# srvctl modify srvpool -g mypool1 -i 2

 

- Check that one server has been moved from mypool (importance 1) to mypool1 (importance 2)
[root@host02 host02]# crsctl stat serverpool
NAME=Free
ACTIVE_SERVERS=
NAME=Generic
ACTIVE_SERVERS=
NAME=ora.mypool
ACTIVE_SERVERS=host03
NAME=ora.mypool1
ACTIVE_SERVERS=host01 host02

 

- Increase the importance of mypool to 3
[root@host02 host02]# srvctl modify srvpool -g mypool -i 3

 

- Check that one server has been moved from mypool1 to mypool
[root@host02 host02]# crsctl stat serverpool
NAME=Free
ACTIVE_SERVERS=
NAME=Generic
ACTIVE_SERVERS=
NAME=ora.mypool
ACTIVE_SERVERS=host02 host03
NAME=ora.mypool1
ACTIVE_SERVERS=host01

 

- Cleanup –
[root@host02 host02]# srvctl remove srvpool -g mypool1
[root@host02 host02]# srvctl remove srvpool -g mypool

 

CHILD POOLS
We can have multiple child pools under the same parent pool. Each child pool has its own attributes, and servers are allocated to the child pools based on the IMPORTANCE attribute assigned to them. For example, we can have a database that runs in a parent pool and assign its various services to different child pools, with the attributes of each child pool set according to the importance we want to give to each service. The hosts assigned to different parent pools are mutually exclusive, i.e. the same server cannot be assigned to more than one parent pool. For child pools, whether the same server can be assigned to different child pools depends on the value of their EXCLUSIVE_POOLS attribute: if two sibling child pools share the same string value for this attribute, a host assigned to one of them cannot be shared by the other.

 

– Create a parent pool pmpool3 with min = 3, max=3 and importance = 1
[root@host02 host02]# srvctl add srvpool -g pmpool3 -l 3 -u 3 -i 1

 

– Check that all the three servers have been assigned to pmpool3 –
[root@host02 host02]# crsctl stat serverpool
NAME=Free
ACTIVE_SERVERS=
NAME=Generic
ACTIVE_SERVERS=
NAME=ora.pmpool3
ACTIVE_SERVERS=host01 host02 host03

 

- Add a child pool ch1 with parent pool = pmpool3, min = 2, max =2 and importance = 1
[grid@host02 ~]$ crsctl add serverpool ch1 -attr "PARENT_POOLS=ora.pmpool3,MIN_SIZE=2,MAX_SIZE=2,IMPORTANCE=1"

 

- Check that two servers have been allocated to child pool ch1
[root@host02 ~]# crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=
NAME=Generic
ACTIVE_SERVERS=
NAME=ch1
ACTIVE_SERVERS=host01 host02
NAME=ora.pmpool3
ACTIVE_SERVERS=host01 host02 host03

 

- Add another child pool ch2 with min = 2, max = 2, importance = 2(higher than ch1(1))
 [grid@host02 ~]$ crsctl add serverpool ch2 -attr "PARENT_POOLS=ora.pmpool3,MIN_SIZE=2,MAX_SIZE=2,IMPORTANCE=2"

 

- Check that two servers have been allocated to child pool ch2. Since we haven’t set the EXCLUSIVE_POOLS attribute, the same hosts host01 and host02 are shared by both children.
[root@host02 ~]# crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=
NAME=Generic
ACTIVE_SERVERS=
NAME=ch1
ACTIVE_SERVERS=host01 host02
NAME=ch2
ACTIVE_SERVERS=host01 host02
NAME=ora.pmpool3
ACTIVE_SERVERS=host01 host02 host03
————————————————————————————

 

EXCLUSIVE_POOLS

 

– Delete both child pools ch1 and ch2
[grid@host02 ~]$ crsctl delete serverpool ch1 
[grid@host02 ~]$ crsctl delete serverpool ch2

 

– Add child pool ch1 with min = 2,max = 2, importance = 1, exclusive_pools=’x’
[grid@host02 ~]$ crsctl add serverpool ch1 -attr "PARENT_POOLS=ora.pmpool3,MIN_SIZE=2,MAX_SIZE=2,IMPORTANCE=1,EXCLUSIVE_POOLS='x'"

 

– check that 2 servers have been assigned to child pool ch1
[grid@host02 ~]$ crsctl stat serverpool
NAME=Free
ACTIVE_SERVERS=
NAME=Generic
ACTIVE_SERVERS=
NAME=ch1
ACTIVE_SERVERS=host01 host02
NAME=ora.pmpool3
ACTIVE_SERVERS=host01 host02 host03

 

- Add another child pool ch2 with min = 2, max = 2, importance = 2(higher than ch1(1)), exclusive_pools = x so that it won’t share any servers with ch1
[grid@host02 ~]$ crsctl add serverpool ch2 -attr "PARENT_POOLS=ora.pmpool3,MIN_SIZE=2,MAX_SIZE=2,IMPORTANCE=2,EXCLUSIVE_POOLS='x'"

 

- Check that one server has been moved from ch1 to ch2, as the importance of ch2 is higher and servers are not shared between ch1 and ch2.
[grid@host02 ~]$ crsctl stat serverpool
NAME=Free
ACTIVE_SERVERS=
NAME=Generic
ACTIVE_SERVERS=
NAME=ch1
ACTIVE_SERVERS=host01
NAME=ch2
ACTIVE_SERVERS=host02 host03
NAME=ora.pmpool3
ACTIVE_SERVERS=host01 host02 host03
=========

 

With the introduction of server pools, we can have two management styles for RAC databases.

 

Administrator-managed databases (similar to what we had prior to 11g R2)
  . The DBA defines which servers the database should run on.
  . The DBA decides the servers (preferred/available nodes) on which the services of the database should run.
  . The DBA decides which instance of the database will run on which server.
  . This database type assigns its instances to a child pool of the built-in Generic pool.

 

Although this type seems much simpler, it becomes impractical once the number of nodes grows beyond what a DBA can comfortably manage, for example 100. In that case, the other 11.2 database type is helpful: the policy-managed database.

 

Policy-Managed Databases
- The database is associated with a server pool rather than a specific set of nodes. The server pool defines the minimum and maximum number of servers needed by that resource (database, service, or third-party application).
- The database runs on the servers that have been allocated to the server pool it has been assigned to.
- Since the servers allocated to a server pool can change dynamically, this allows Oracle to dynamically deliver services based on the total number of servers available to the cluster.
  . The database will be started on as many servers as needed, subject to server availability.
  . We need not hardcode the servers a database should run on.
  . Any instance of the database can run on any node; there is no fixed mapping between instance number and node.
  . As servers are freed/added/deleted, they are allocated to existing pools as per the rules mentioned earlier.

 

For example, consider a cluster consisting of eight nodes in total and supporting three RAC databases. Each database is defined with a minimum and maximum number of servers. Let’s assume that

 

DB1 is defined with a minimum of 4 and a maximum of 6 with an importance of 10,
DB2 is defined with a minimum of 2 and maximum of 3 and an importance of 7, and
DB3 is set to a minimum of 2 and maximum of 3 with an importance of 5.

 

Initially the 8 nodes could be configured as nodes 1-4 allocated to DB1, nodes 5-6 allocated to DB2 and nodes 7-8 allocated to DB3. If node 3 failed for some reason, the system would allocate node 7 or 8 to DB1 because it has a higher importance than DB3 and a minimum requirement of 4 servers, even though it would cause DB3 to fall below the minimum number of servers. If node 3 is re-enabled it would be allocated immediately to DB3 to bring that database back to its minimum required servers.

 

If a 9th node were added to the cluster, it would be assigned to DB1 because of the importance factor and the fact that DB1 has not yet hit its maximum number of servers.
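The allocations in this example can be checked with a quick simulation of the min-then-max rules (an illustrative sketch, not Oracle's implementation):

```python
# Quick simulation of the eight-node example (illustrative sketch,
# not Oracle's implementation): fill minimums in order of importance,
# then maximums.
def allocate(n_servers, pools):
    counts = {p["name"]: 0 for p in pools}
    free = n_servers
    for target in ("min", "max"):
        for p in sorted(pools, key=lambda q: -q["importance"]):
            take = min(free, p[target] - counts[p["name"]])
            counts[p["name"]] += take
            free -= take
    return counts

pools = [{"name": "DB1", "min": 4, "max": 6, "importance": 10},
         {"name": "DB2", "min": 2, "max": 3, "importance": 7},
         {"name": "DB3", "min": 2, "max": 3, "importance": 5}]
print(allocate(8, pools))  # {'DB1': 4, 'DB2': 2, 'DB3': 2}
print(allocate(9, pools))  # 9th server goes to DB1: {'DB1': 5, 'DB2': 2, 'DB3': 2}
```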

Conclusion
As the number of nodes in a cluster increases, DBAs will benefit greatly from server pools, since the management of clusterware resources becomes easy and dynamic.
In my next post, I will write about instance failover in server pools.
References:
http://docs.oracle.com/cd/E11882_01/rac.112/e16794/admin.htm
 http://allthingsoracle.com/an-introduction-to-11-2-rac-server-pools/
http://www.mahir-quluzade.com/2013/05/converting-administrator-managed.html
Policy based cluster management in Oracle 12c


6 thoughts on “11g R2 RAC: SERVER POOLS”

  1. At the end of your example, if the 9th node was added and it was assigned to DB1, will Oracle create the fifth instance automatically? Or does the DBA have to do it? By the way, you have provided a very clear demo. Good job and thanks.

  2. Thanks Sitao!

    Yes, Oracle will automatically create the fifth instance. That’s the beauty of server pools. Since any server can be moved to the pool where a database has been configured to run, all the servers in the cluster are configured to run an instance of any of the databases in the cluster, i.e. the moment another server is added, redo threads and an undo tablespace for each policy-managed database in the cluster are automatically created for the new server.

  3. Is it possible for the database to be assigned to 2 or 3 pools and run on all of them?
    There is a -g "server_pool_list" parameter in the documentation.

    1. As per the documentation,
      “Databases are deployed in one or more server pools and the size of the server pools determine the number of database instances in the deployment.”

      I haven’t tried but it should be possible for the database to be assigned to 2 or 3 pools and run on all of them.

      Regards
      Anju

Your comments and suggestions are welcome!