Saturday, December 1, 2012

AP653APP.us.oracle.com in R12 installation

Introduction

After you perform an Oracle E-Business Suite R12 (12.1.1) installation using Rapid Install, the concurrent managers may fail to start.  The node may be shown as AP653APP.us.oracle.com, which appears to be an internal Oracle Corporation server name, and the status message may read "System Hold, Fix Manager before resetting counters".

Solution

There is no apparent reason why the 12.1.1 installation refers to the non-existent node name AP653APP.us.oracle.com, and the reference does not go away even after applying the 12.1.3 maintenance pack.  You can use the solution described in http://www.appsdba.info/docs/others/Troubleshoot_cm.pdf.  This invalid node name can even cause the database listener startup to fail with the error messages below when it appears in the tcp.invited_nodes parameter of sqlnet.ora.

TNS-12560: TNS:protocol adapter error
TNS-00584: Valid node checking configuration error

Before (in sqlnet.ora):

tcp.invited_nodes=(AP653APP.us.oracle.com, dbserver.yourcompany.com, appserver.yourcompany.com)

After:

tcp.invited_nodes=(dbserver.yourcompany.com, appserver.yourcompany.com)
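
After removing the invalid node name from sqlnet.ora, restart the database listener so that it re-reads the valid node checking settings.  Assuming the default listener, something like:

lsnrctl stop
lsnrctl start
lsnrctl status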

Monday, October 22, 2012

Linux server load average history


Introduction

Have you ever received an email in the morning saying that the database was slow during the night, or particularly slow between, say, 2:00 and 4:00 am? How would you approach and find the cause of performance problems that occurred in the past?

AWR report from Database


Oracle introduced the AWR report in 10g.  Because it automatically collects performance statistics every hour by default (snapshots, or "snaps") with a default retention of seven days, one can quickly generate an AWR report for the specified duration and analyze it.

1) sqlplus '/ as sysdba'
2) @?/rdbms/admin/awrrpt.sql   

Note: ? is a shortcut for the $ORACLE_HOME location.
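
If the exact snapshot IDs covering the slow period are needed before running awrrpt.sql, they can be looked up from the dba_hist_snapshot view, for example:

select snap_id, begin_interval_time, end_interval_time
from dba_hist_snapshot
order by snap_id;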


sar report from operating system

If the database is hosted on Linux, one can also use the sar command to find the server load average at up to a 10-minute granularity.  This is particularly useful if a root-owned process or non-database processes (such as a long-running gzip, gunzip, or cp command) caused the high load average on the server, which in turn slowed other processes on the server, including the database in question.

The example below shows a high server load average on October 11th.  The server keeps the load average history for the last few days in /var/log/sa (sa11 here refers to the 11th of the month).

cd /var/log/sa
sar -q -f sa11
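
To narrow the output to the reported window (say 2:00 to 4:00 am), sar also accepts start and end times:

sar -q -f sa11 -s 02:00:00 -e 04:00:00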





The sar command can be executed by the oracle OS user

One does not need system administrator privileges (root access) to execute the sar command; it can be run directly by the oracle OS user.

Saturday, September 22, 2012

datafile migration from filesystem to ASM

Introduction

Oracle documents a standard procedure in the RMAN Backup and Recovery guide to migrate all the datafiles of a database (including the redo log files) from filesystem storage to ASM storage.  But sometimes a database may have datafiles in traditional filesystem storage as well as in ASM diskgroups, for example when a lack of free space in the ASM disk groups forces some datafiles to be stored on the filesystem.

This post explains how to clone or restore such a database (with datafiles both on a filesystem and in ASM disks) onto a destination host with all datafiles in ASM disks.

Steps

Let the filesystem location for the datafiles in the source database be '/data/filesystem' with 10 datafiles in this directory, and let the destination ASM diskgroup be '+ASMDG'.

1) On the source database, prepare a simple select statement to generate output as below.

select 'set newname for datafile '||''''||file_name||''''||' to '||''''||'+ASMDG'||''''||';'
from dba_data_files
where file_name not like '%+%';


2) On the destination host, use the generated output in the RMAN run block. For example, if the data is to be restored until Sep 06 2012, the syntax below can be used.

run
{
set until time "to_date('Sep 06 2012 08:00:00','Mon DD YYYY HH24:MI:SS')";
allocate channel d1 type disk;
allocate channel d2 type disk;
allocate channel d3 type disk;
allocate channel d4 type disk;
set newname for datafile '/data/filesystem/file1.dbf' to '+ASMDG';
set newname for datafile '/data/filesystem/file2.dbf' to '+ASMDG';
set newname for datafile '/data/filesystem/file3.dbf' to '+ASMDG';
set newname for datafile '/data/filesystem/file4.dbf' to '+ASMDG';
set newname for datafile '/data/filesystem/file5.dbf' to '+ASMDG';
set newname for datafile '/data/filesystem/file6.dbf' to '+ASMDG';
set newname for datafile '/data/filesystem/file7.dbf' to '+ASMDG';
set newname for datafile '/data/filesystem/file8.dbf' to '+ASMDG';
set newname for datafile '/data/filesystem/file9.dbf' to '+ASMDG';
set newname for datafile '/data/filesystem/file10.dbf' to '+ASMDG';
restore database;
switch datafile all;
}

Note 

The main goal of this post is to show how to easily generate the "set newname for datafile" commands from the source database and use them in the RMAN run block on the destination database.  The other RMAN commands for the restore/clone can differ based on the actual situation.
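
For instance, a restore until a point in time is typically followed by a recovery to the same point and an open with resetlogs; a minimal sketch (to be adapted to the actual restore/clone scenario) is below.

run
{
set until time "to_date('Sep 06 2012 08:00:00','Mon DD YYYY HH24:MI:SS')";
recover database;
}
sql 'alter database open resetlogs';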


Wednesday, June 27, 2012

RMAN Duplicate UNTIL TIME



If one uses RMAN to perform database backups, then the RMAN DUPLICATE command can be used to create new databases from these backups (also called clones, copies, or duplicates of the source database).  If no additional options are provided in the RUN block, Oracle builds the new database as an exact duplicate of the source database up to the current time.

But sometimes IT application teams request a copy that contains data only up to a point in time in the past.  In the user-managed backup and recovery strategy, this is called "time-based recovery".  One can achieve the same result with the RMAN DUPLICATE command by using the "set until time" option just before specifying the auxiliary channels.

The following example creates a new database called PRODTEST which contains the same data present in PRODDB as of "Jul 07 2012 08:00:00".

rman
connect target sys/*****@PRODDB
connect auxiliary /
run
{
set until time "to_date('Jul 07 2012 08:00:00','Mon DD YYYY HH24:MI:SS')";
allocate auxiliary channel c1 type disk;
allocate auxiliary channel c2 type disk;
allocate auxiliary channel c3 type disk;
allocate auxiliary channel c4 type disk;
duplicate
target database to PRODTEST
pfile=/sw/app/oracle/product/db_1/dbs/initPRODTEST.ora;
}
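
Note that the auxiliary (PRODTEST) instance is expected to be started in NOMOUNT mode with its own parameter file before the duplicate is run, for example:

$ export ORACLE_SID=PRODTEST
$ sqlplus '/ as sysdba'
SQL > startup nomount pfile=/sw/app/oracle/product/db_1/dbs/initPRODTEST.ora
SQL > exit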

Saturday, May 5, 2012

How to drop a database


Introduction

The drop database command is the recommended way to drop an Oracle database. Though one can remove database files with operating system commands instead of using "drop database", the latter is required if the database is stored in Oracle ASM (Automatic Storage Management).


DROP DATABASE command

It may be rare that one is asked to drop a database, but it is usually required when refreshing existing test/development databases, i.e. cloning them again from the latest production database backups. One may also have to drop production databases, for example when the front-end application is retired/decommissioned or when the existing data is migrated and merged into a different database. Exercise a lot of caution and get the necessary approvals prior to dropping a production database, and take a full database backup before dropping it.

  1. sqlplus '/ as sysdba'
  2. shutdown  immediate;
  3. startup mount restrict;
  4. exit
  5. rman
  6. connect target  /
  7. drop database;


Do you really want to drop the database (enter YES or NO)?


The drop database command deletes all the datafiles, tempfiles, online redo logfiles, the spfile (server parameter file), and the control files.

DROP DATABASE command restrictions

The drop database command does not completely remove all references to the dropped database on the server. Some related files, such as the diagnostic files (trace files, alert logfile, etc.), are not deleted by the drop database command. The Flash Recovery Area files are also not deleted (in case the dropped database was using the Flash Recovery Area feature).

DROP DATABASE command is a must for ASM

The operating system remove commands (for example, the rm command in Linux/Unix) work only for databases stored on filesystems, not for database files stored in ASM disks.  When an ASM diskgroup/disk contains files from multiple databases, the drop database command selectively drops only the files associated with the database being dropped from the ASM disks.

DROP DATABASE command works for ASM and non-ASM based databases

One can use the drop database command for all databases, whether they use traditional filesystem storage or ASM storage.

Dropping (removing) a database using operating system commands

Though the drop database command is the recommended approach, one can also drop a database that uses traditional filesystem storage using operating system commands such as the rm command in Linux/Unix. Because this is a manual approach, one has to ensure that all the files that are part of the database are removed manually (the queries after this list can help generate the file lists):

  1. Datafiles
  2. Tempfiles
  3. Online redo logfiles
  4. Archived redo logfiles (if any)
  5. Flash Recovery area files(if any)
  6. pfile,spfile (if any)
  7. diagnostic files
  8. control files

Saturday, March 31, 2012

Database lock checks

Introduction

If one wants to quickly identify the Oracle database sessions that are blocking other sessions, here is one easy method.  There are various ways to find both the blocking (offending) sessions and the waiting sessions; this post describes one such procedure.

DBA_BLOCKERS and DBA_WAITERS

The first task an Oracle DBA has to do after executing a CREATE DATABASE statement (the manual database creation method) is to run $ORACLE_HOME/rdbms/admin/catalog.sql and $ORACLE_HOME/rdbms/admin/catproc.sql, which create the data dictionary objects.  However, a few data dictionary objects have to be created separately by running other scripts.  Two important locking-related views, dba_blockers and dba_waiters, are created by running the catblock.sql script. One can execute it as part of the initial database creation or later; running it does not cause any database downtime, so it can be run at any time.

  1. sqlplus '/ as sysdba'
  2. @?/rdbms/admin/catblock.sql


DBA_BLOCKERS gives only the blocking session information, whereas DBA_WAITERS gives details about both the blocking session and the waiting session.
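
For example, a quick check of the current blockers and waiters can be run as below.

select holding_session, waiting_session, lock_type, mode_held, mode_requested
from dba_waiters;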

v$session.blocking_session column

The v$session view also has a column named blocking_session.  If a database session is stuck waiting, one can easily find the session causing the wait by querying the blocking_session column and then take suitable action. This column can also help when the relevant information is not present in DBA_BLOCKERS and DBA_WAITERS.
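
A simple query along these lines lists every waiting session together with the session blocking it (all columns below are standard v$session columns):

select sid, serial#, username, blocking_session, event, seconds_in_wait
from v$session
where blocking_session is not null;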

Wednesday, February 15, 2012

sqlplus tips

Introduction

Most Oracle Database Administrators (DBAs) rely on the SQL*Plus (sqlplus) utility provided in $ORACLE_HOME/bin for various administration tasks on the database server.  Though Oracle provides the web-based Enterprise Manager Grid Control for database administration and dedicated command line interfaces for ASM (asmcmd) and Data Guard, many experienced DBAs still rely on sqlplus alone.  In this article, I will write about three useful sqlplus tips.

spool append

Many of you may already be aware of the spool command.  It captures the output of the commands executed in a session into an output file that can be used for purposes such as record keeping, auditing, and uploading query output to Oracle Support.

Until Oracle 9i, the contents of a spool file were erased if the same file name was specified again later; there was no append option.  The append option is available from Oracle 10g onwards, as shown in the example below.  This option is available on both Windows and Unix/Linux platforms.

$ sqlplus '/ as sysdba'

SQL > spool dba_output.txt
SQL > select count(*)
        from dba_tables;            
      1057

SQL > spool off;
SQL > exit

$ sqlplus '/ as sysdba'

SQL > spool dba_output.txt append
SQL > select count(*)
        from dba_data_files;
      7

SQL > spool off;
SQL > exit

$ cat dba_output.txt   (The file contains output saved from both sessions above, thanks to the append option used in the second one.)

SQL > select count(*)
        from dba_tables;             
      1057

SQL > spool off;

SQL > select count(*)
        from dba_data_files;
      7

SQL >  spool off;


SQLPATH environment variable

Many of you are already aware of the PATH environment variable. Oracle provides a similar environment variable for SQL scripts.  DBAs who rely heavily on sqlplus usually keep a few local directories on the server where the commonly run SQL scripts are stored, for example /home/oracle/admin_scripts and /home/oracle/monitoring_scripts.

The SQLPATH environment variable gives DBAs the flexibility to run scripts from these two directories even if sqlplus is started from a different directory, *without* giving the full path of the script.  An example is shown below.

$ export SQLPATH=/home/oracle/admin_scripts:/home/oracle/monitoring_scripts

$ ls  /home/oracle/admin_scripts
   admin1.sql admin2.sql

$ ls /home/oracle/monitoring_scripts
   monitoring1.sql monitoring2.sql

One can currently be in a different working directory (say /tmp) and still execute any of the above scripts *without* specifying the full path; sqlplus automatically searches for the script in the directories referenced by $SQLPATH.

$ pwd
/tmp

$ sqlplus '/ as sysdba'
SQL > @admin1.sql

System altered.


glogin.sql  (Global Login)

Similar to the operating system login profile (such as .bash_profile), one can set up a global sqlplus profile by customizing $ORACLE_HOME/sqlplus/admin/glogin.sql.  For example, if several databases run on a server and one wants to see which database sqlplus is connected to, one can put a "select name from v$database" command in glogin.sql.  Because glogin.sql is the global profile file, Oracle executes it automatically every time sqlplus starts.
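
For instance, a minimal glogin.sql along those lines could look like the below (the sqlprompt line is an optional extra that puts the user and connect identifier in the prompt):

-- $ORACLE_HOME/sqlplus/admin/glogin.sql
set sqlprompt "_user'@'_connect_identifier > "
select name from v$database;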


Conclusion

An Oracle DBA who relies heavily on sqlplus can make his/her work life much easier by knowing these advanced capabilities.  For more information, refer to the "SQL*Plus Quick Reference" and the "SQL*Plus User's Guide and Reference" in the Oracle documentation.

