Single Client Access Name (SCAN) is a new feature introduced in 11g R2 RAC. It lets clients use a single host name to access Oracle databases running in the cluster, so clients using SCAN do not need to change their TNS configuration when you add or remove nodes in the cluster.
To understand it better, let us go back a little to 10g/11gR1 RAC.
Let us say we have a database orcl running on 3 nodes in a cluster:
Name of the cluster: cluster01.example.com
Instance   Host name             IP address      VIP             Pvt network
orcl1      host01.example.com    192.9.201.181   192.9.201.182   10.0.0.1
orcl2      host02.example.com    192.9.201.183   192.9.201.184   10.0.0.2
orcl3      host03.example.com    192.9.201.185   192.9.201.186   10.0.0.3
DNS server
server1.example.com 192.9.201.59
Storage
openfiler1.example.com 192.9.201.9
Prior to 11g R2, when a user requested a connection to the database by issuing
$ sqlplus hr/hr@orcl
the entry in tnsnames.ora on the client side was read, which looked like:
ORCL =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = host01-vip.example.com)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = host02-vip.example.com)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = host03-vip.example.com)(PORT = 1521))
      (LOAD_BALANCE = yes)
    )
    (CONNECT_DATA =
      (SID = orcl)
    )
  )
The TNS entry contains a reference (address) for every node in the cluster, and the request could be sent to any one of the host names listed in ADDRESS_LIST.
To resolve the host name, two methods were available:
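The effect of LOAD_BALANCE = yes can be sketched as follows: for each new connection, the client picks one of the listed addresses at random, so over many connections the requests spread across all three VIPs. This is a minimal Python simulation using the host names from the example above, not the actual Oracle client code:

```python
import random

# Addresses listed in ADDRESS_LIST of the ORCL alias
address_list = [
    "host01-vip.example.com",
    "host02-vip.example.com",
    "host03-vip.example.com",
]

def pick_address(addresses):
    """With LOAD_BALANCE = yes the client chooses an address at random,
    so over many connections the load spreads across all three VIPs."""
    return random.choice(addresses)

# Simulate 1000 connection attempts and count how often each VIP is chosen
counts = {addr: 0 for addr in address_list}
for _ in range(1000):
    counts[pick_address(address_list)] += 1

for addr, n in sorted(counts.items()):
    print(addr, n)
```

Each VIP receives roughly a third of the connections, which is exactly the client-side load balancing the ADDRESS_LIST provides.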
- hosts file
- DNS Server
Let us discuss the hosts file first. The /etc/hosts file contained entries with which host names could be resolved to the corresponding addresses.
#Public Network
192.9.201.181 host01.example.com host01
192.9.201.183 host02.example.com host02
192.9.201.185 host03.example.com host03
#Private Network
10.0.0.1 host01-priv.example.com host01-priv
10.0.0.2 host02-priv.example.com host02-priv
10.0.0.3 host03-priv.example.com host03-priv
#Virtual Network
192.9.201.182 host01-vip.example.com host01-vip
192.9.201.184 host02-vip.example.com host02-vip
192.9.201.186 host03-vip.example.com host03-vip
#Storage
192.9.201.9 openfiler1.example.com openfiler
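Resolution through /etc/hosts is just a line-by-line lookup of the name against each entry's aliases. The sketch below is a hypothetical simplified parser, not how the OS resolver is actually implemented, using the VIP entries from the file above:

```python
# A fragment of the /etc/hosts content shown above
HOSTS = """\
#Virtual Network
192.9.201.182 host01-vip.example.com host01-vip
192.9.201.184 host02-vip.example.com host02-vip
192.9.201.186 host03-vip.example.com host03-vip
"""

def resolve(name, hosts_text):
    """Return the IP of the first line that lists the given name or alias."""
    for line in hosts_text.splitlines():
        fields = line.split()
        if not fields or fields[0].startswith("#"):
            continue  # skip blank lines and comments
        ip, names = fields[0], fields[1:]
        if name in names:
            return ip
    return None  # name not present in the hosts file

print(resolve("host02-vip", HOSTS))  # 192.9.201.184
```

Note that a name can only ever map to the single line that first matches, which is exactly the limitation discussed for SCAN with a hosts file further below.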
Once the VIP of the host was known, the request was sent to that IP address, where it was received by the local listener running there. The listener's information was stored in the listener.ora file on the server side.
With this method of host name resolution, whenever we added or deleted a node we needed to modify both tnsnames.ora and the hosts file on each client machine, which was a cumbersome task. The hosts-file problem was solved by storing the host name resolution entries in a centralised DNS server: on adding or deleting a node, the entries then needed to be modified in only one place, the DNS server. However, tnsnames.ora still had to be modified on each client machine.
To avoid modifying tnsnames.ora on each client machine, SCAN was introduced in 11gR2.
ENTER SCAN in 11g R2 RAC ….
Now we can specify just the cluster's SCAN name in tnsnames.ora, and the client will be connected to any of the nodes in the cluster. Let us see how it can be implemented.
Now tnsnames.ora entry looks like:
ORCL =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = cluster01-scan.cluster01.example.com)(PORT = 1521))
      (LOAD_BALANCE = yes)
    )
    (CONNECT_DATA =
      (SID = orcl)
    )
  )
When a user tries to connect to the orcl database, the SCAN name cluster01-scan.cluster01.example.com has to be resolved. To resolve it, we can again use either the hosts file or a DNS server.
SCAN USING HOSTS FILE
If we use the /etc/hosts file, the entry would look like this (IP address first, then the name):
192.9.201.190 cluster01-scan
Here we have the limitation that the SCAN name can resolve to only one SCAN VIP, i.e. all client connection requests will be directed to the same SCAN listener, so client-side load balancing won't take place. If we want client-side load balancing, the SCAN name should resolve to multiple SCAN VIPs, and for that we must use DNS.
SCAN USING DNS
Hence we include the following in /etc/resolv.conf to forward any name resolution request for example.com to the DNS server running at IP address 192.9.201.59:
/etc/resolv.conf
search example.com
nameserver 192.9.201.59
Various files associated with DNS on DNS server are :
- /var/named/chroot/etc/named.conf
- /var/named/chroot/var/named/forward.zone
- /var/named/chroot/var/named/reverse.zone
- /var/named/chroot/var/named/reverse1.zone
——————————————
- /var/named/chroot/etc/named.conf
——————————————
When a request comes to the DNS server, this file, /var/named/chroot/etc/named.conf, is looked into first. Here it says that domain example.com's information is in forward.zone:
zone "example.com" IN {
type master;
file "forward.zone";
};
In forward.zone, the following are specified:
– IP addresses of all the nodes in the cluster
– VIP addresses of all the nodes in the cluster
– Private network IP addresses of all the nodes in the cluster
– IP address of the DNS server
– IP address of the storage
– SCAN VIPs
——————————————
- /var/named/chroot/var/named/forward.zone
—————————————–
This file is used to resolve host names to IP addresses.
IN NS server1
IN A 192.9.201.59
server1 IN A 192.9.201.59
host01 IN A 192.9.201.181
host02 IN A 192.9.201.183
host03 IN A 192.9.201.185
openfiler1 IN A 192.9.201.9
host01-priv IN A 10.0.0.1
host02-priv IN A 10.0.0.2
host03-priv IN A 10.0.0.3
host01-vip IN A 192.9.201.182
host02-vip IN A 192.9.201.184
host03-vip IN A 192.9.201.186
cluster01-scan IN A 192.9.201.190
IN A 192.9.201.191
IN A 192.9.201.192
After getting an address, requests are distributed in round-robin fashion among the SCAN listeners listening on the SCAN VIPs.
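Round-robin DNS can be sketched like this: on each query the server returns the address list rotated by one position, so successive clients see a different SCAN VIP first. This is a simplified model of the behaviour, not BIND itself (real ordering depends on the server's rrset-order configuration):

```python
from collections import deque

# The three SCAN VIPs registered for cluster01-scan in forward.zone
SCAN_VIPS = deque(["192.9.201.190", "192.9.201.191", "192.9.201.192"])

def query_scan():
    """Return the current ordering of the SCAN VIPs, then rotate the
    list so the next query sees a different VIP first (round robin)."""
    answer = list(SCAN_VIPS)
    SCAN_VIPS.rotate(-1)
    return answer

print(query_scan()[0])  # 192.9.201.190
print(query_scan()[0])  # 192.9.201.191
print(query_scan()[0])  # 192.9.201.192
```

Since clients typically try the first address returned, three successive connection requests land on three different SCAN listeners.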
——————————————–
- /var/named/chroot/var/named/reverse.zone
——————————————–
This file is used to resolve IP addresses to host names.
$TTL 86400
@ IN SOA server1.example.com. root.server1.example.com. (
997022700 ; Serial
28800 ; Refresh
14400 ; Retry
3600000 ; Expire
86400 ) ; Minimum
IN NS server1.example.com.
59 IN PTR server1.example.com.
9 IN PTR openfiler1.example.com.
181 IN PTR host01.example.com.
183 IN PTR host02.example.com.
185 IN PTR host03.example.com.
182 IN PTR host01-vip.example.com.
184 IN PTR host02-vip.example.com.
186 IN PTR host03-vip.example.com.
190 IN PTR cluster01-scan.example.com.
191 IN PTR cluster01-scan.example.com.
192 IN PTR cluster01-scan.example.com.
——————————————–
- /var/named/chroot/var/named/reverse1.zone
——————————————–
This file is used to resolve private interconnect IP addresses to host names.
IN NS server1.example.com.
1 IN PTR host01-priv.example.com.
2 IN PTR host02-priv.example.com.
3 IN PTR host03-priv.example.com.
Thus, we no longer need to modify the tnsnames.ora file on the client machines whenever we add or delete node(s), as the file contains just a reference to the SCAN. Also, we need not modify the entries in DNS on adding or deleting a node, as the SCAN name in DNS resolves to SCAN VIPs and not node VIPs. But we still needed to assign VIPs to the various nodes manually. If we let DHCP assign VIPs to the nodes automatically, and let these VIPs be known to DNS somehow, we would just need to run a few commands to let the clusterware know about the node addition/deletion and we are done. No need to assign VIPs manually!
This feature of 11gR2 RAC is called GPnP (Grid Plug and Play). It is implemented by GNS (Grid Naming Service), which is similar to DNS but resolves names only within the cluster's subdomain of the corporate domain (e.g. cluster01.example.com within example.com).
SCAN WITH GNS
Let us see how DNS entries are modified now :
Various files associated with DNS are :
- /var/named/chroot/etc/named.conf
- /var/named/chroot/var/named/forward.zone
- /var/named/chroot/var/named/reverse.zone
- /var/named/chroot/var/named/reverse1.zone
——————————————
- /var/named/chroot/etc/named.conf
——————————————
When a request comes to the DNS server, this file is looked into first. Here, domain example.com's information is in forward.zone:
zone "example.com" IN {
type master;
file "forward.zone";
allow-transfer { 192.9.201.180; };
};
As per forward.zone, any request for a name under cluster01.example.com is to be forwarded (delegated via the NS record) to the address of cluster01-gns, i.e. 192.9.201.180.
——————————————
- /var/named/chroot/var/named/forward.zone
—————————————–
This file is used to resolve host names to ip addresses.
IN NS server1
IN A 192.9.201.59
server1 IN A 192.9.201.59
host01 IN A 192.9.201.181
host02 IN A 192.9.201.183
host03 IN A 192.9.201.185
openfiler1 IN A 192.9.201.9
host01-priv IN A 10.0.0.1
host02-priv IN A 10.0.0.2
host03-priv IN A 10.0.0.3
$ORIGIN cluster01.example.com.
@ IN NS cluster01-gns.cluster01.example.com.
cluster01-gns IN A 192.9.201.180
After getting the GNS address, the DNS server goes back to named.conf and checks whether transfer is allowed to that address. If it is allowed, the request is handed over to GNS at that address.
zone "example.com" IN {
type master;
file "forward.zone";
allow-transfer { 192.9.201.180; };
};
——————————————–
- /var/named/chroot/var/named/reverse.zone
——————————————–
This file is used to resolve IP addresses to host names.
$TTL 86400
@ IN SOA server1.example.com. root.server1.example.com. (
997022700 ; Serial
28800 ; Refresh
14400 ; Retry
3600000 ; Expire
86400 ) ; Minimum
IN NS server1.example.com.
59 IN PTR server1.example.com.
9 IN PTR openfiler1.example.com.
181 IN PTR host01.example.com.
183 IN PTR host02.example.com.
185 IN PTR host03.example.com.
——————————————–
- /var/named/chroot/var/named/reverse1.zone
——————————————–
This file is used to resolve private interconnect IP addresses to host names.
IN NS server1.example.com.
1 IN PTR host01-priv.example.com.
2 IN PTR host02-priv.example.com.
3 IN PTR host03-priv.example.com.
——————————————————————-
Once the request reaches GNS, it is multicast to all the nodes in the cluster. In all, three SCAN listeners run on nodes of the cluster. The mDNS processes running on the nodes hosting the SCAN listeners return the VIPs of the SCAN listeners to GNS, and GNS in turn returns the addresses of the three SCAN listeners to DNS. The client request is then forwarded to any one of the three SCAN listeners (say SCAN listener 1). All the SCAN listeners have the load information of all the nodes, so the SCAN listener that receives the client request forwards it to the local listener running on the least loaded node.
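The last step, the SCAN listener redirecting the connection to the least loaded node, can be sketched as below. The load figures are hypothetical; in reality the listeners work from load statistics that each instance registers with them:

```python
# Hypothetical load figures (e.g. current session counts) that the
# local listeners have registered with the SCAN listener
node_load = {"host01": 120, "host02": 45, "host03": 80}

def route_connection(loads):
    """Forward the request to the local listener of the least loaded node."""
    return min(loads, key=loads.get)

print(route_connection(node_load))  # host02
```

With the loads above, the connection is handed to host02's local listener, since it reports the fewest sessions.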
Thus, with SCAN and GNS, all the VIPs (host VIPs and SCAN VIPs) are assigned dynamically by DHCP. The only static VIP is the VIP for GNS. Note that we have to specify the range of IPs to be assigned by DHCP in the file /etc/dhcpd.conf. For example, the following entry specifies the IP range from 192.9.201.190 to 192.9.201.254:
range dynamic-bootp 192.9.201.190 192.9.201.254;
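In context, the range line sits inside a subnet declaration. A minimal /etc/dhcpd.conf for this network might look like the sketch below; the router address and lease times are illustrative assumptions, not values from the original setup:

```
subnet 192.9.201.0 netmask 255.255.255.0 {
    # Addresses DHCP may hand out to node VIPs and SCAN VIPs
    range dynamic-bootp 192.9.201.190 192.9.201.254;
    option routers 192.9.201.1;    # illustrative gateway address
    default-lease-time 21600;
    max-lease-time 43200;
}
```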
Now, to add a node, we just need to add an entry giving the physical IP of the node in the DNS server's files. The VIPs will be assigned automatically by DHCP.
I hope you found this post useful.
Your comments and suggestions are always welcome.
Hello Ma'am, kindly explain why we need a SCAN VIP. Why can't the SCAN listener run on a node VIP?
For load balancing and high availability of the SCAN listener, Oracle recommends having 3 SCAN listeners in a RAC setup. If the SCAN listener ran on node VIPs and we had a RAC setup with fewer than 3 nodes, the number of SCAN listeners would be limited to the number of nodes. Hence, each SCAN listener runs on its own VIP, so that if the number of nodes in the cluster is less than 3, more than one SCAN listener can run, each on its own VIP, on a single node.
Hi Ma'am,
As I understand, when a client request goes to a local listener running on a node, server-side load balancing takes place and the client request is forwarded to the least loaded instance. Then why do we need a SCAN listener?
Hello Aakash,
You are right. Server-side load balancing takes place even when the request goes to the local listener running on a node. Let's see how it is implemented:
If I have a 3-node setup and the request goes to the local listener of node1 while node1 is already overloaded, its listener will forward this request to one of the remaining nodes only if it knows their workload information. It will get this information only if the REMOTE_LISTENER parameter of node1's instance is set to the listeners of the other two nodes.
Similarly, the REMOTE_LISTENER parameter of the other instances should be set to the listeners running on all the nodes except their own. That means we have different values of REMOTE_LISTENER on different instances.
Additionally, if a node is added to the cluster, we need to modify the parameter on all the instances.
Now consider the case where a SCAN listener is running and all the local listeners are registered with it. REMOTE_LISTENER on all the instances is set to the SCAN listener. A request comes to the SCAN listener, which gets the load information of all the nodes from their respective local listeners and forwards the request to the least loaded node.
Moreover, when a node is added, its local listener is registered with the SCAN listener automatically, and we need not modify REMOTE_LISTENER.
Hence SCAN listener simplifies the management considerably.
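The difference described above can be sketched in init.ora terms; the parameter values are illustrative for the 3-node example, not taken from a real configuration:

```
# Without SCAN: a different REMOTE_LISTENER value on every instance,
# edited on each node add/delete
orcl1.remote_listener='host02-vip:1521,host03-vip:1521'
orcl2.remote_listener='host01-vip:1521,host03-vip:1521'
orcl3.remote_listener='host01-vip:1521,host02-vip:1521'

# With SCAN: one value shared by all instances, never edited
*.remote_listener='cluster01-scan.cluster01.example.com:1521'
```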
Very good Explanation!
hi,
Oracle recommends having 3 SCAN listeners. Is there any specific reason to have an odd number of SCAN listeners?
Thanks
Pavan
Hi Pavan,
3 SCAN listeners are recommended as they provide the required high availability and load balancing.
Regards
Anju
Thanks ma'am, I tried to change the permission as you suggested and it worked well.
Thank you for the valuable answer ma’am….
Hi, I read most of the posts in your blog. They are really awesome and very helpful for real-time environment troubleshooting. Thanks for sharing. GREAT WORK!
Thanks Sandhya!
It is just a humble effort to put together information and present it in an easily understandable manner. It is satisfying to know if my blog is able to serve its purpose.
Anju
You are doing a great service
Krishna
Hi Anju,
Thank you for an excellent blog. Could you please let me know what issues we may face if the SCAN name is set to resolve to one IP only, instead of 3 IPs?
Regards
Hi Rakesh,
Thanks for your feedback.
If the SCAN name resolves to one IP only, there will be no failover or client-side load balancing: all the client requests will always go through the same SCAN listener.
Hope it helps!
Your comments and suggestions are always welcome.
Regards
Anju Garg
Thank you Anju, I agree on failover. In the case of load balancing, if there is only one SCAN listener, I think all the instances are registered with it as well, so will it not be able to distribute the load to the other instances?
Regards
Hi Rakesh
Load balancing will not take place on the client side. Since the SCAN name resolves to only one IP address, all the client requests will be directed to the same SCAN listener. But, as you mentioned, as a result of server-side load balancing that SCAN listener can still redirect connections to other instances.
I haven’t tried it practically.
Regards
Anju
Anju,
Another great post on SCAN and how it simplifies listener administration. Thanks a lot.
Thanks for your time !
Your comments and suggestions are always welcome.
Regards
Anju Garg
Hello Ma’am,
What if one of the ip address mapped to the scan name is not accessible?
Hello Krunal
The SCAN VIP will fail over to one of the surviving nodes.
Regards
Anju Garg
Does that mean that the SCAN name will map to only 2 IPs until the third one is fixed?
Yes.
Many Thanks Ma’am!!
Excellent post. Immensely helpful
Thanks for your time Krunal!
Your comments and suggestions are always welcome.
Regards
Anju Garg
What about private IPs? Will they be assigned dynamically here too?
I see that in /etc/dhcpd.conf you are giving the range only for VIPs and SCAN VIPs.
Regards
Only the VIPs, i.e. host VIPs and SCAN VIPs, will be assigned dynamically by DHCP.
The only static VIP will be the GNS VIP.
Regards
Anju
Thanks Anju!!..Your blogs and replies are of significant help.
You are always welcome Krunal!
Hi Anju,
I am not a DBA, but I would like to understand the flow to the clustered application database (which runs on two nodes) in order to fix an issue.
I have a few questions here; could you please help me find answers for them?
1) When you say SCAN listener, does it mean this?
ORCL =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = cluster01-scan.cluster01.example.com)(PORT = 1521))
(LOAD_BALANCE = yes)
)
(CONNECT_DATA =
(SID = orcl)
)
)
2) If yes, then this is how we configured the DB cluster connection in our application. The load balancing happens in the cluster, so I am least bothered about how that works, but is there something here which could throw the below exception some of the time (but not all the time)?
3) This is the issue I am facing in my application: once in 100 times we get an "IO Error: The Network Adapter could not establish the connection" exception, but all other times the connection works just fine.
We understood that it is something to do with connection pooling and load balancing, but we are unable to find the exact issue.
Could you please help me to find the solution for this issue, thanks a lot for your time and help..
Thanks,
Mani.
Hi Mani,
Your interpretation of SCAN is absolutely correct, but not being able to connect at times has nothing to do with the SCAN configuration. It appears to be a network issue.
Regards
Anju Garg
When we configure RAC using SCAN with DNS, do we have to mention the DNS server's IP address in the /etc/hosts file?
You can mention the SCAN IP in /etc/hosts or in DNS. When it is mentioned in /etc/hosts, client-side load balancing won't take place, as the SCAN can resolve to only one IP. If it is mentioned in DNS, the SCAN can resolve to multiple IPs and hence client-side load balancing will take place.
Hope it helps.
Regards
Anju Garg
Thank you for sharing your valuable thoughts.
In this configuration
ORCL =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = cluster01-scan.cluster01.example.com)(PORT = 1521))
(LOAD_BALANCE = yes)
)
(CONNECT_DATA =
(SID = orcl)
)
)
To what IP address will the entry cluster01-scan.cluster01.example.com resolve? If this entry resolves to any of the IPs 192.9.201.190, 192.9.201.191 or 192.9.201.192, those addresses are not assigned to the network interfaces, so they will not be reachable.
In this case, the entry cluster01-scan.cluster01.example.com will be resolved via DNS or /etc/hosts. In DNS it can resolve to multiple IP addresses, but in /etc/hosts it will resolve to one IP address only. The SCAN VIPs themselves are brought up on the cluster nodes' interfaces by the clusterware, so they are reachable.
Regards
Anju
Anju, what an excellent article you have written. Awesome!
Thanks Ravi for your time and feedback.
Your comments and suggestions are always welcome.
Regards
Anju
Hi Ma'am,
Are the SCAN VIPs listed by the crsctl command the same as the IPs listed in the DNS server?
For example:
3 IPs mentioned in DNS and 3 SCAN IPs listed in the cluster.
Hi Satish,
If you use SCAN with DNS, the SCAN VIPs are the same as those listed in the DNS server.
If you use SCAN with GNS, the SCAN VIPs are dynamically assigned by the clusterware.
Regards
Anju Garg
Hi Anju – very clear and precise explanation!