All posts by Anju Garg

About Anju Garg

Working as a Senior Oracle Corporate Trainer (DBA) with Koenig Solutions Pvt. Ltd.

I Am an Oracle ACE

It gives me immense pleasure to share the news that I have been honored with the prestigious Oracle ACE award. I am grateful to the Oracle ACE Program for accepting my nomination, and I would like to thank Murali Vallath Sir, who nominated me for this award. I am also thankful to my family members, without whose support and motivation this would not have been possible. Thanks a lot to all the readers of my blog, whose comments and suggestions have helped me learn and share whatever little knowledge I have.

I will do my best to participate in the Oracle ACE program.


I Am Speaking at OTN Yathra 2016

The Oracle ACE Directors and Oracle volunteers in the region are organizing their third evangelist event, ‘OTNYathra 2016’, from 23rd April 2016 to 1st May 2016. This yathra, or tour, will be a series of six conferences across six major cities (Chennai, Bangalore, Hyderabad, Pune, Mumbai and Delhi), managed by the ACE Directors and Oracle volunteers of the region.

I will be speaking at this year’s OTNYathra about an Oracle Database 12c new feature: Highly Available NFS (HANFS) over ACFS.

HANFS over ACFS enables highly available NFS servers to be configured using Oracle ACFS clusters. The NFS exports are exposed through Highly Available VIPs (HAVIPs), and this allows Oracle’s Clusterware agents to ensure that HAVIPs and NFS exports are always available. If the node hosting the export(s) fails, the corresponding HAVIP and hence its corresponding NFS export(s) will automatically fail over to one of the surviving nodes so that the NFS client continues to receive uninterrupted service of NFS exported paths.
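
In 12c, the HAVIP and the NFS export are registered as Clusterware resources with srvctl. A minimal sketch (the HAVIP id, address and export path below are hypothetical):

[root@host01 ~]# srvctl add havip -id havip1 -address havip1.example.com
[root@host01 ~]# srvctl add exportfs -name export1 -id havip1 -path /acfs_mounts/app_data
[root@host01 ~]# srvctl start havip -id havip1

Once started, Clusterware manages the HAVIP and its attached exports as a unit and fails them over together.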

My session will be held on Sunday, 1st May 2016, from 3:00 pm to 3:50 pm in
Room 1, BirlaSoft, H-9, Sector 63, NOIDA - 201306, NCR Delhi.
Hope to meet you there!!

One Million Page Views

Yesterday the page views of my blog touched the magical figure of 1 million. I had started blogging in November 2012 with the aim of saving whatever I learn on the web, in a language simple enough to be understood by even a beginner. Well, I myself was a beginner then, and even now I am a beginner. I think I am a bit skeptical, not easily convinced until I get proof. So, in most of my posts, I try to verify the facts mentioned in the Oracle documentation, and I presume that is what has clicked with the readers of my blog.

I would like to thank all the readers of my blog who spent their precious time going through my posts and giving their valuable feedback.

Keep visiting my blog…

ORA-39070: Unable to open the log file

I received this error message when I was trying to perform a Data Pump export of the SH schema in parallel in a RAC database. I proceeded as follows:

Current scenario:
Name of the cluster: cluster01
Number of nodes: 3 (host01, host02, host03)
RAC database version: 11.2.0.3
Name of RAC database: orcl
Number of instances: 3

  • Created a directory object pointing to shared storage which is accessible by all three instances of the database
SQL>drop directory dp_shared_dir;
SQL>create directory DP_SHARED_DIR as '+DATA/orcl/';
SQL>grant read, write on directory dp_shared_dir to public;
  • Issued the command to export the SH schema in parallel across all active Oracle RAC instances with parallelism = 6, which resulted in error ORA-39070:
[oracle@host01 root]$ expdp system/oracle@orcl schemas=sh directory=dp_shared_dir parallel=6 cluster=y dumpfile='expsh%U.dmp' reuse_dumpfiles=y

Export: Release 11.2.0.3.0 - Production on Tue Dec 8 14:45:39 2015
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation

Cause:
The error message indicates that the log file could not be opened. The directory parameter points to a shared location in an ASM disk group; Data Pump can write its dump files to ASM, but not its log file, hence the error.
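
As an aside, if no log file is needed at all, Data Pump’s NOLOGFILE parameter avoids the error without changing the directory setup; a sketch of the same export with logging suppressed:

[oracle@host01 root]$ expdp system/oracle@orcl schemas=sh directory=dp_shared_dir parallel=6 cluster=y dumpfile='expsh%U.dmp' reuse_dumpfiles=y nologfile=y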

Solution:
I modified my command and explicitly specified that the log file be created on the local file system pointed to by the directory object DATA_PUMP_DIR. The export was then performed successfully.

[oracle@host01 root]$ expdp system/oracle@orcl schemas=sh directory=dp_shared_dir parallel=6 cluster=y logfile=data_pump_dir:expsh.log dumpfile='expsh%U.dmp' reuse_dumpfiles=y

Export: Release 11.2.0.3.0 - Production on Tue Dec 8 15:14:11 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_10": system/********@orcl schemas=sh directory=dp_shared_dir parallel=6 cluster=y logfile=data_pump_dir:expsh.log dumpfile=expsh%U.dmp reuse_dumpfiles=y
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 273.8 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
.....
.....
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/INDEX/DOMAIN_INDEX/INDEX
Processing object type SCHEMA_EXPORT/MATERIALIZED_VIEW
Processing object type SCHEMA_EXPORT/DIMENSION
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_10" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_10 is:
+DATA/orcl/expsh01.dmp
+DATA/orcl/expsh02.dmp
+DATA/orcl/expsh03.dmp
+DATA/orcl/expsh04.dmp
+DATA/orcl/expsh05.dmp
+DATA/orcl/expsh06.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_10" successfully completed at 15:20:49
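
If in doubt, you can always confirm where a directory object points by querying DBA_DIRECTORIES (the paths returned will of course differ per installation):

SQL> select directory_name, directory_path from dba_directories where directory_name in ('DP_SHARED_DIR', 'DATA_PUMP_DIR');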

I hope it helps!!!

—————————————————————————————————————-


Webinar: Histograms: Pre-12c and Now

To improve optimizer estimates in the case of skewed data distributions, histograms can be created. Prior to 12c, two types of histograms could be created, based on the number of distinct values (NDV) in a column:

If the number of buckets >= NDV, a frequency histogram is created and the optimizer makes accurate estimates.

If the number of buckets < NDV, a height-balanced histogram is created, and the accuracy of optimizer estimates depends on whether a key value is an endpoint or not.

The problem of optimizer mis-estimates with height-balanced histograms is resolved to a large extent in Oracle Database 12c by the introduction of top-frequency and hybrid histograms, which are created when the number of buckets < NDV.
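
As a quick illustration (the SALES table and CUST_ID column are just placeholders), you can request a histogram while gathering statistics and then check which type the database actually chose:

SQL> exec dbms_stats.gather_table_stats(user, 'SALES', method_opt => 'for columns cust_id size 254');
SQL> select column_name, histogram from user_tab_col_statistics where table_name = 'SALES';

On 12c, the HISTOGRAM column reports FREQUENCY, TOP-FREQUENCY, HYBRID or HEIGHT BALANCED, depending on how the NDV compares with the requested number of buckets (254 being the pre-12c maximum).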

I will present a webinar on “Histograms: Pre-12c and Now” on Saturday, November 7th, from 10:00 AM to 11:00 AM (IST), organized by the All India Oracle User Group – North India Chapter.

This webinar explores pre-12c as well as post-12c histograms, while highlighting the top-frequency and hybrid histograms introduced in Oracle Database 12c.

Everyone can join this Live Webinar @

https://plus.google.com/u/0/events/cgrgqlm5f7nuecdpjoc85d1u6eo
or
https://www.youtube.com/watch?v=xfwbDczWFXo

Hope to meet you at the webinar!!!

Speaking at SANGAM 2015

The AIOUG meet “SANGAM – Meeting of Minds” is the largest independent Oracle event in India, organized annually in the month of November. This year’s Sangam (Sangam15, the 7th Annual Oracle Users Group Conference) will be held at the Hyderabad International Convention Centre, Hyderabad, on Saturday 21st and Sunday 22nd November 2015.

I will be speaking at this year’s SANGAM about an Oracle Database 12c new feature: Highly Available NFS (HANFS) over ACFS.

HANFS over ACFS enables highly available NFS servers to be configured using Oracle ACFS clusters. The NFS exports are exposed through Highly Available VIPs (HAVIPs), and this allows Oracle’s Clusterware agents to ensure that HAVIPs and NFS exports are always available. If the node hosting the export(s) fails, the corresponding HAVIP and hence its corresponding NFS export(s) will automatically fail over to one of the surviving nodes so that the NFS client continues to receive uninterrupted service of NFS exported paths.

My session will be held on Saturday, 21st November 2015, from 5:10 pm to 6:00 pm in
Hall 5 (Ground Floor).

Hope to meet you there!!


Oracle 12.1.0.2 Cluster: New Location / Name For Alert Log

In my last post, I had presumed there was a bug, since I had discovered an empty Clusterware alert log in its conventional location, i.e. $ORACLE_HOME/log/<hostname>, in a 12.1.0.2 standard cluster.

[grid@host01 ~]$ crsctl query crs activeversion;
Oracle Clusterware active version on the cluster is [12.1.0.2.0]

[root@host01 host01]# ls -l /u01/app/12.1.0/grid/log/host01/alerthost01.log
-rw-rw-r-- 1 grid oinstall 0 Jun 15 14:10 /u01/app/12.1.0/grid/log/host01/alerthost01.log

But as commented by Ricardo Portillo Proni, in Oracle Clusterware 12c the location of the alert log has been changed to $ORACLE_BASE/diag/crs/<hostname>/crs/trace/.

Hence, I could successfully locate the alert log on node host01 in the directory $ORACLE_BASE/diag/crs/host01/crs/trace/:

[grid@host01 trace]$ ls -l $ORACLE_BASE/diag/crs/host01/crs/trace/alert*

-rw-rw---- 1 root oinstall 812316 Aug 11 10:22 /u01/app/grid/diag/crs/host01/crs/trace/alert.log

Another noticeable change is that the name of the Clusterware alert log is now alert.log, as compared to alert<hostname>.log in 11g.
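
If you would rather not hunt for the path, adrci can list the ADR homes; the Clusterware alert log lives under the trace directory of the crs home it reports. A minimal sketch, assuming the ADR base is the Grid Infrastructure $ORACLE_BASE:

[grid@host01 ~]$ adrci exec="show homes"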

I would like to mention that I have verified the above only in a 12.1.0.2 standard cluster.

In a 12.1.0.1 flex cluster, though, the location and name of the alert log are the same as in 11g, i.e. $ORACLE_HOME/log/<hostname>:

[root@host01 host01]# crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.1.0]

[root@host01 host01]# ls -l $ORACLE_HOME/log/host01/alert*
-rw-rw-r-- 1 grid oinstall 497364 Aug 11 11:00 /u01/app/12.1.0/grid/log/host01/alerthost01.log

Conclusion:
12.1.0.2 standard cluster

  • Name of alert log: alert.log
  • Location of alert log: $ORACLE_BASE/diag/crs/<hostname>/crs/trace

12.1.0.1 flex cluster

  • Name of alert log: alert<hostname>.log
  • Location of alert log: $ORACLE_HOME/log/<hostname>

Hope it helps!

Please refer to the comments for further information.

References:
Oracle RAC 12c (12.1.0.2) Operational Best Practices (Oracle Presentation)
Oracle 12c RAC: Clusterware logs are now centralized

———————————————————————————————————————–

Related Links:

Home

12c RAC Index

Oracle 12.1.0.2 Standard Cluster: Empty Alert Log