Monthly Archives: November 2012
11g R2 RAC: ADD A NODE
After you have added a node, if you want to clone the database home onto that node, see the next post: 11g R2 RAC: CLONE DATABASE HOME.
———————–
11g R2 RAC: CLONE DATABASE HOME
- Two-node RAC setup.
- Both current nodes have a non-shared database home.
- The current two-node setup is not able to handle the increased workload.
- We want to add a third node to the setup. To do so, we need to:
  . Install Oracle Clusterware on the third node.
  . Install the database software on the third node (since the database home is non-shared).
- Prepare the new cluster node.
- Deploy the Oracle RAC database software.
- Run the clone.pl script on each node (a sketch follows the checklist below).
- Run the $ORACLE_HOME/root.sh script on each node.
Checklist for the new node before cloning:
· Specify the kernel parameters.
· Use short, non-domain-qualified names for all names in the hosts file.
· Verify that you can ping the public and interconnect names.
· Ensure Oracle Clusterware is active.
· Ensure that Oracle ASM is active and that at least one Oracle ASM disk group exists and is mounted.
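A rough sketch of the clone.pl run on the new node (the paths, home name, and node list are illustrative and must match your environment; clone.pl ships under $ORACLE_HOME/clone/bin):
[oracle@node3 ~]$ cd /u01/app/oracle/product/11.2.0/dbhome_1/clone/bin
[oracle@node3 bin]$ perl clone.pl ORACLE_BASE=/u01/app/oracle \
    ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 \
    ORACLE_HOME_NAME=OraDb11g_home1 \
    '-O"CLUSTER_NODES={node1,node2,node3}"' \
    '-O"LOCAL_NODE=node3"'
[root@node3 ~]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh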
– If we check the contents of the inventory.xml file, we will see that it now has information about the 11g Oracle home:
[oracle@node3 dbhome_1]$cd /u01/app/oracle/oraInventory/ContentsXML
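For example (home name, location, and index are illustrative; the point is that the NODE_LIST of the database home now includes node3):
[oracle@node3 ContentsXML]$ cat inventory.xml
...
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="node1"/>
      <NODE NAME="node2"/>
      <NODE NAME="node3"/>
   </NODE_LIST>
</HOME>
...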
11g R2 RAC : NODE EVICTION DUE TO MEMBER KILL ESCALATION
———————-
11g R2 RAC: NODE EVICTION DUE TO CSSDAGENT STOPPING
Related links:
Node Eviction Due To Missing Network Heartbeat
Node Eviction Due To Missing Disk Heartbeat
Node Eviction Due To Member Kill Escalation
11g R2 RAC: Reboot-less Fencing With Missing Network Heartbeat
——————-
11g DATAGUARD SETUP USING ACTIVE DATAGUARD
In this blog we'll see how to configure 11g Data Guard using active dataguard, i.e. the standby database will be created from the running primary database. We do not need to take a backup of the database or the controlfile.
Primary database dg01 on node1.oracle.com (192.9.201.192)
Standby database dg02 on node2.oracle.com (192.9.201.193)
Listeners :
primary : on port 1521 in grid home
on port 1522 in database home
standby : on port 1521 in grid home
on port 1522 in database home
TNS entries
primary : dg01 on node1.oracle.com on ports 1521, 1522
registered with both the listeners
dg02 on node2.oracle.com on port 1522
standby : dg02 on node2.oracle.com on ports 1521, 1522
registered with both the listeners
dg01 on node1.oracle.com on port 1522
Configuration on primary site:-
——————————————-
- As root, assign:
  . IP address 192.9.201.192
  . hostname node1.oracle.com
  #service network restart
- Make entries in /etc/hosts:
  192.9.201.192 node1.oracle.com node1
  192.9.201.193 node2.oracle.com node2
– Reboot the machine (mandatory, as the hostname has been changed).
– Create the dg01 database in archivelog mode, with sample schemas and the default FRA.
– Invoke netmgr from the database home and:
  - make service dg01 available on port 1522
  - create a listener in the database home on port 1522 (a listener.ora sketch follows this list)
  - statically register dg01 with the listener on port 1522
  - create a net service entry for dg02 running on node2.oracle.com on port 1522
– Delete the orcl database.
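A minimal listener.ora sketch for the listener and static registration above (the listener name LISTENER1522 is an assumption; the ORACLE_HOME must match your database home):
LISTENER1522 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1.oracle.com)(PORT = 1522))
  )
SID_LIST_LISTENER1522 =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = dg01)
      (SID_NAME = dg01)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/db_1)
    )
  )
Start it with lsnrctl start LISTENER1522. The standby side is identical, with dg02 and node2.oracle.com.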
Configuration on standby site:-
——————————————-
- As root, assign:
  . IP address 192.9.201.193
  . hostname node2.oracle.com
  #service network restart
– Reboot the machine (mandatory, as the hostname has been changed).
- Make entries in /etc/hosts:
  192.9.201.192 node1.oracle.com node1
  192.9.201.193 node2.oracle.com node2
– Test connectivity:
#ping node1
#ping node2
– Invoke netmgr from the database home and:
  - create a net service entry for dg01 running on node1.oracle.com on port 1522
  - create a net service entry for dg02 running on node2.oracle.com on port 1522
  - create a listener in the database home on port 1522 (same pattern as the listener.ora sketch above, using dg02 and node2.oracle.com)
  - statically register dg02 with it
—————————–
On primary node01
—————————–
— Configure the database dg01 in archivelog mode and enable force logging:
SQL>SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=6G;
ALTER DATABASE FORCE LOGGING;
select force_logging from v$database;
ALTER DATABASE OPEN;
– Configure standby redo log files to enable fast failover and real-time apply (assume we have three redo log groups).
– Find out the number and size of the current redo log groups:
SQL>select group#, bytes/1024/1024 MB from v$log;
– Add standby log files.
Note: There should be at least one more standby redo log group than the number of online redo log groups (per thread), and each standby logfile must be the same size as the online redo log files. With three 50 MB online groups in a single thread, we therefore add four 50 MB standby groups:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 '/u01/app/oracle/oradata/dg01/sredo04.log' size 50m;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 '/u01/app/oracle/oradata/dg01/sredo05.log' size 50m;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 '/u01/app/oracle/oradata/dg01/sredo06.log' size 50m;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 '/u01/app/oracle/oradata/dg01/sredo07.log' size 50m;
– check that standby logfiles have been created
SQL> col member for a45
select group#, TYPE, MEMBER from v$logfile where type = 'STANDBY';

    GROUP# TYPE    MEMBER
---------- ------- ---------------------------------------------
         4 STANDBY /u01/app/oracle/oradata/dg01/sredo04.log
         5 STANDBY /u01/app/oracle/oradata/dg01/sredo05.log
         6 STANDBY /u01/app/oracle/oradata/dg01/sredo06.log
         7 STANDBY /u01/app/oracle/oradata/dg01/sredo07.log
SQL> select group#, bytes/1024/1024 MB from v$standby_log;

    GROUP#         MB
---------- ----------
         4         50
         5         50
         6         50
         7         50
– Make a staging folder
oracle@node1$mkdir /home/oracle/stage
— CREATE PFILE FROM SPFILE.
PRI>create pfile='/home/oracle/stage/initdg01.ora' from spfile;
- Now configure the following parameters for the primary database.
– Add the following lines to initdg01.ora:
oracle@node1$vi /home/oracle/stage/initdg01.ora
LOG_ARCHIVE_CONFIG='DG_CONFIG=(dg01,dg02)'
DB_UNIQUE_NAME='dg01'
SERVICE_NAMES='dg01'
DB_FILE_NAME_CONVERT='/u01/app/oracle/oradata/dg02','/u01/app/oracle/oradata/dg01'
LOG_FILE_NAME_CONVERT='/u01/app/oracle/oradata/dg02','/u01/app/oracle/oradata/dg01'
LOG_ARCHIVE_DEST_1='location=USE_DB_RECOVERY_FILE_DEST'
LOG_ARCHIVE_DEST_STATE_1=enable
LOG_ARCHIVE_DEST_2='service=dg02 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dg02'
LOG_ARCHIVE_DEST_STATE_2=enable
STANDBY_FILE_MANAGEMENT=auto
FAL_CLIENT='dg01'
FAL_SERVER='dg02'
LOG_ARCHIVE_FORMAT='%t_%s_%r.arc'
Note:
In case the source and destination databases use ASM, the parameters db_file_name_convert and log_file_name_convert may be defined as follows:
db_file_name_convert=("+DATA/dg01","+DATA/dg02")
log_file_name_convert=("+DATA/dg01","+DATA/dg02","+FRA/dg01","+FRA/dg02")
– create spfile from pfile and start dg01 using new parameters
PRI>shu immediate;
create spfile from pfile='/home/oracle/stage/initdg01.ora';
startup;
—– Take a backup of the entire database to the preconfigured staging directory (optional with active dataguard):
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
BACKUP FORMAT '/home/oracle/stage/%U' DATABASE PLUS ARCHIVELOG;
BACKUP FORMAT '/home/oracle/stage/%U' CURRENT CONTROLFILE FOR STANDBY;
sql "alter system archive log current";
– create staging folder on node2
oracle@node2$mkdir /home/oracle/stage
—- Copy the following items from node1 to node2.
PASSWORD FILE
[oracle@node1 ~]$scp /u01/app/oracle/product/11.2.0/db_1/dbs/orapwdg01 node2:/u01/app/oracle/product/11.2.0/db_1/dbs/orapwdg02
PARAMETER FILE and RMAN backups (optional with active dataguard)
[oracle@node1 ~]$ scp /home/oracle/stage/* node2:/home/oracle/stage/
Note: The RMAN backups must be copied to the same directory on node2, i.e. /home/oracle/stage.
– Configure net service entries for dg01 (primary database) on node1 at port 1522 as well as for dg02 (standby database) on node2 at port 1522.
– Set the environment to dg01 and invoke netmgr. The resulting tnsnames.ora entries:
DG02 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node2.oracle.com)(PORT = 1522))
)
(CONNECT_DATA =
(SERVICE_NAME = dg02)
)
)
DG01 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1.oracle.com)(PORT = 1522))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = dg01)
)
)
– Create a listener in database home on port 1522 and statically register dg01 with it.
—————————-
On standby node02
—————————–
oracle@node2$mv /home/oracle/stage/initdg01.ora /home/oracle/stage/initdg02.ora
vi initdg02.ora
— The final pfile should look like this:
dg02.__db_cache_size=440401920
dg02.__java_pool_size=4194304
dg02.__large_pool_size=4194304
dg02.__shared_pool_size=163577856
dg02.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/dg02/adump'
*.control_files='/u01/app/oracle/oradata/dg02/control01.ctl'
*.core_dump_dest='/u01/app/oracle/admin/dg02/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='dg01'
*.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
*.db_recovery_file_dest_size=6G
*.dispatchers='(PROTOCOL=TCP) (SERVICE=dg02XDB)'
*.job_queue_processes=10
*.log_archive_format='%t_%s_%r.dbf'
*.open_cursors=300
*.pga_aggregate_target=204472320
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=613416960
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/app/oracle/admin/dg02/udump'
LOG_ARCHIVE_CONFIG='DG_CONFIG=(dg01,dg02)'
DB_UNIQUE_NAME='dg02'
SERVICE_NAMES='dg02'
DB_FILE_NAME_CONVERT='/u01/app/oracle/oradata/dg01','/u01/app/oracle/oradata/dg02'
LOG_FILE_NAME_CONVERT='/u01/app/oracle/oradata/dg01','/u01/app/oracle/oradata/dg02'
LOG_ARCHIVE_DEST_1='location=USE_DB_RECOVERY_FILE_DEST'
LOG_ARCHIVE_DEST_STATE_1=enable
LOG_ARCHIVE_DEST_2='service=dg01 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dg01'
LOG_ARCHIVE_DEST_STATE_2=enable
STANDBY_FILE_MANAGEMENT=auto
FAL_CLIENT='dg02'
FAL_SERVER='dg01'
LOG_ARCHIVE_FORMAT='%t_%s_%r.arc'
@node2: Create the required directories for the standby database:
[oracle@node2 ~]$ mkdir -p /u01/app/oracle/admin/dg02/adump
mkdir -p /u01/app/oracle/oradata/dg02/
mkdir -p /u01/app/oracle/flash_recovery_area/dg02
@node2: Configure net service entries for dg01 (primary database) on node1 at port 1522 as well as for dg02 (standby database) on node2 at port 1522.
– Set ORACLE_HOME to the database home and start netmgr. The resulting tnsnames.ora entries:
DG02 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node2.oracle.com)(PORT = 1522))
)
(CONNECT_DATA =
(SERVICE_NAME = dg02)
)
)
DG01 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1.oracle.com)(PORT = 1522))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = dg01)
)
)
@node2: Create a listener on port 1522 in the database home and statically register the dg02 service with it.
@node2 : Start dg02 at nomount stage
[oracle@node2 ~]$ export ORACLE_SID=dg02
SBY> CREATE SPFILE FROM PFILE='/home/oracle/stage/initdg02.ora';
STARTUP NOMOUNT ;
EXIT
@node2: Now create the standby database using the RMAN DUPLICATE command:
[oracle@node2 ~]$ export ORACLE_SID=dg02
[oracle@node2 ~]$ rman target sys/oracle@dg01
RMAN> CONNECT AUXILIARY sys/oracle@dg02
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE;
– Switch logs on primary
PRI>alter system switch logfile;
– start redo apply on standby
SBY>alter database recover managed standby database using current logfile disconnect;
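Note: strictly speaking, "Active Data Guard" also lets you open the standby read-only while redo apply keeps running (this requires the Active Data Guard option license). A minimal sketch, assuming redo apply is already running as above:
SBY>alter database recover managed standby database cancel;
alter database open read only;
alter database recover managed standby database using current logfile disconnect;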
– Verify that the standby database is performing correctly:
SBY>select sequence#, first_time, next_time from v$archived_log order by sequence#;
– Perform a few log switches on primary
PRI>alter system switch logfile;
alter system switch logfile;
alter system switch logfile;
alter system switch logfile;
– On the standby database, verify that the new redo data was received, archived, and applied:
SBY>select sequence#, first_time, next_time,applied from v$archived_log order by sequence#;
——————————
Related links:
11g Dataguard: Automatic Block Media Recovery (BMR)
11g Dataguard: Automatic Client Failover
11g Dataguard: Automatic Job Failover
11g Dataguard: Flashback Standby After Resetlogs On Primary
ORA-12528: TNS:listener: all appropriate instances are blocking new connections
Recover Standby Datafile From Primary
Warning: ORA-16792: configurable property value is inconsistent with database setting
————
HOW DOES 11G R2 CLUSTERWARE START ASM WHEN ASM SPFILE IS STORED ON ASM ITSELF?
To bootstrap this, the GPnP profile on each node stores:
a. The ASM_DISKSTRING parameter (if specified), used to locate the disks on which ASM is configured.
b. The ASM SPFILE location: the name of the disk group containing the ASM spfile.
– CSSD scans the headers of all ASM disks (as indicated by the ASM_DISKSTRING in the GPnP profile) to identify the disk containing the voting file. Using the pointers in the ASM disk headers, CSSD accesses the voting file locations on the ASM disks, completes initialization, and starts or joins an existing cluster.
– To read the ASM spfile during ASM instance startup, it is not necessary to open the disk group: all information necessary to access it is stored in the device's header. OHASD reads the header of the ASM disk containing the ASM spfile (as read from the GPnP profile) and, using the pointers in the disk header, reads the contents of the spfile. Thereafter, the ASM instance is started.
– With an ASM instance operating and its disk groups mounted, access to Clusterware's OCR is available to CRSD.
– OHASD starts CRSD with access to the OCR in an ASM disk group.
– Clusterware completes initialization and brings up the other services under its control.
Demonstration:
Once the voting disk is accessible and ASM has been started using the spfile read as described above, the rest of the resources on the node can be started after reading the Oracle Local Registry (OLR) on the node.
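We can see both pieces of bootstrap information with standard tools. A minimal sketch (the device path, disk group name, and header values shown are illustrative; gpnptool and kfed ship with 11g R2 Grid Infrastructure):
– ASM discovery string and spfile location in the GPnP profile:
[grid@node1 ~]$ gpnptool get
...
<orcl:ASM-Profile id="asm" DiscoveryString="/dev/sd*" SPFile="+DATA/cluster01/asmparameterfile/registry.253.789123456"/>
...
– spfile pointer in the header of a disk of that disk group:
[grid@node1 ~]$ kfed read /dev/sdb1 | grep -iE 'spfile|spfflg'
kfdhdb.spfile:                       58 ; 0x0f4: 0x0000003a
kfdhdb.spfflg:                        1 ; 0x0f8: 0x00000001
Here kfdhdb.spfflg=1 indicates that this disk holds a copy of the ASM spfile, and kfdhdb.spfile is the allocation unit where it starts.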
References:
http://aychin.wordpress.com/2011/01/22/oracle-11gr2-asm-spfile-eng/
http://asmsupportguy.blogspot.in/2014/03/asm-spfile-in-disk-group.html
http://community.oraclepressbooks.com/downloads/tcoug-top-10-asm-questions.pdf
11g R2 RAC: GPNP Profile Demystified
11g R2 RAC: Let’s Edit GPNP Profile
11g R2 RAC: Let’s Lose GPNP Profile
11g R2 RAC : Voting Disk Demystified