Friday, February 29, 2008

tar gzip and gunzip command examples

Introduction
 
This article gives you some examples of tar, gzip and gunzip commands on Linux operating system.
 
How do you take an export of a database that is 100GB in size when the disk space available to store the export dump is only 20GB? Similarly, how do you make a tar ball of an E-Business Suite APPL_TOP that is 50GB in size when only 15GB of disk space is available? We are in the process of migrating our existing Oracle installations from the HP-UX platform to Linux, and much of our time goes into transferring files between the platforms. 'Create and compress' is one option that I have used successfully, and I will illustrate the syntax in this post.

Full database export using parallel compression
  1. Log in to the source HP-UX database host
  2. mknod db_full_exp.dmp p
  3. gzip < db_full_exp.dmp > db_full_exp.dmp.gz & nohup exp system/***** parfile=params.lst file=db_full_exp.dmp &
     (exp writes into the named pipe while gzip reads from the other end, so the uncompressed dump never lands on disk)
  4. Copy the resulting db_full_exp.dmp.gz to the target Linux database server
  5. gunzip db_full_exp.dmp.gz
  6. imp system/***** parfile=params.lst file=db_full_exp.dmp
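The steps above can be tried end to end without an Oracle database. In this minimal sketch, a plain `cat` of a scratch file stands in for the `exp` utility; everything else (the named pipe, the background gzip) works exactly as in the real procedure:

```shell
# Demonstrate export-through-a-pipe in a scratch directory.
# 'cat source.dat' is a stand-in for 'exp ... file=db_full_exp.dmp'.
workdir=$(mktemp -d)
cd "$workdir"
printf 'row1\nrow2\nrow3\n' > source.dat

mkfifo db_full_exp.dmp                        # same effect as: mknod db_full_exp.dmp p
gzip < db_full_exp.dmp > db_full_exp.dmp.gz & # reader: compresses whatever arrives
cat source.dat > db_full_exp.dmp              # writer: the stand-in for exp
wait                                          # let the background gzip finish

gunzip -c db_full_exp.dmp.gz                  # reproduces the original rows
rm db_full_exp.dmp                            # the pipe itself holds no data
```

Only the compressed db_full_exp.dmp.gz ever occupies disk space, which is what makes a 100GB export fit in 20GB of free space (assuming the dump compresses well).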
Tar file and parallel compression
 
The Oracle Applications 11i E-Business Suite Linux migration involves copying several source directories from the source platform to the target Linux platform. The example below deals with the $APPL_TOP copy; the same syntax can be used for the other directories that have to be migrated.
  1. On the source HP-UX node, cd $APPL_TOP
  2. tar cvf - . | gzip > /any_directory_having_free_space/appl_top_files.tar.gz
  3. Copy appl_top_files.tar.gz to the target Linux node
  4. On the target Linux node, cd $APPL_TOP (the directory should be empty)
  5. gunzip -c appl_top_files.tar.gz | tar xvf -
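The same pattern can be sketched in a self-contained way, using throwaway directories in place of the source and target $APPL_TOP (all paths here are stand-ins):

```shell
# Stand-ins for $APPL_TOP on the source and target nodes
src=$(mktemp -d)
dst=$(mktemp -d)
archive=$(mktemp -u).tar.gz      # "any directory having free space"
mkdir -p "$src/bin"
echo 'sample' > "$src/bin/file.txt"

# Source node: tar the tree and compress in one pass,
# never writing an uncompressed intermediate tar file
( cd "$src" && tar cf - . | gzip > "$archive" )

# Target node: the directory should be empty; decompress and untar in one pass
( cd "$dst" && gunzip -c "$archive" | tar xf - )

cat "$dst/bin/file.txt"          # prints: sample
```

Because tar writes to stdout (`-f -`) and gzip compresses the stream on the fly, the free space needed is only for the compressed archive, not for a full uncompressed copy of the tree.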
Conclusion

A database administrator should automate processes and tasks, and extra programming knowledge goes a long way toward accomplishing this. Mastery of advanced Unix commands such as awk and sed also helps eliminate most manual work. Fluency in shell or Perl scripting means less time spent on routine tasks, leaving more time for other important and higher-priority activities.

Tuesday, February 19, 2008

Oracle Index Monitoring Benefits

Introduction

There are many new features in the Oracle 9i and 10g database releases. Of these, a few stand out from the crowd, and 'Index Monitoring' is one of them. It was introduced in the 9i release and is one of my favorite Oracle features: the benefits are enormous, while the risks are minimal to zero.

All database indexes need to be maintained by Oracle whether or not they are used by any SQL execution plan. The primary purpose of creating indexes is to reduce the cost of SQL operations, thereby improving performance and response times in the database. Sometimes, however, database developers create indexes that will never be used by Oracle.

Example 1 :- Creating an index on a column that is used only in the SELECT clause and never in the WHERE clause of a SQL statement (a non-driving column).

Example 2 :- The cost-based optimizer decides that a full table access, or an alternate index or composite index, is a cheaper access path than the index in question.

One of the easiest Oracle 9i (and 10g) features to implement has been 'Index Monitoring'. The feature identifies unused indexes in the database and records this information in a data dictionary view. A DBA, in coordination with the development team, can review these results and proceed to drop such indexes.

Benefits of dropping unused indexes
1. Faster DML operations
Table insert/update/delete statements execute faster because Oracle no longer has to maintain the corresponding index entries in the background.
2. Reduced database size
Indexes occupy the next highest amount of storage after tables in most transactional databases. If large indexes are found to be unused, dropping them reduces the overall database size.
3. Less redo log consumption
Every index insert/update/delete is logged in the redo logs (if the index was created with the default LOGGING option, which is the norm rather than the exception). Dropping unwanted or redundant indexes means fewer redo log switches per day and better overall database performance.
4. Reduced archive space
More redo generated by unused indexes means more archivelog files.
5. Faster database backups
Dropping unused indexes decreases the database size, and a smaller database takes less time to back up.
6. Reduced recovery time
Fewer archive logs translates to fewer logs to apply during database recovery operations.
Implementation Procedure
1. Identify all hot custom indexes larger than a threshold size (e.g., 100MB)
2. For each of these indexes, execute the following command, substituting the index name:
alter index <index_name> monitoring usage;


3. Let Oracle monitor index usage for 24 hours. The data dictionary view v$object_usage is updated if an index has been used at least once during that period. (24 hours is just an example; some sites may need 48 hours or even up to 7 days of monitoring.)

4. Turn off index monitoring by executing the following command for each index:
alter index <index_name> nomonitoring usage;

5. Get all unused indexes reported in the view v$object_usage
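The whole procedure can be sketched in SQL*Plus; the schema and index name below (scott.my_idx) are placeholders:

```sql
-- Start monitoring (repeat for each candidate index)
ALTER INDEX scott.my_idx MONITORING USAGE;

-- ... let the normal workload run for the chosen window (24 hrs, 7 days, ...) ...

-- Stop monitoring
ALTER INDEX scott.my_idx NOMONITORING USAGE;

-- Report indexes that were never used during the window
-- (v$object_usage shows rows only for indexes owned by the current schema)
SELECT index_name, table_name, used, start_monitoring, end_monitoring
  FROM v$object_usage
 WHERE used = 'NO';
```

Note that v$object_usage only lists indexes owned by the schema you are logged in as, so run the report connected as each index owner.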
Are there any dependencies ? Is downtime needed for Index Monitoring ?
Index Monitoring has no dependencies on other database operations. It can be implemented dynamically with very little overhead, and no downtime is required for the implementation.
Caveats

1. Implementing Index Monitoring in development and test databases may not yield good results and should not be used as the basis for production decisions. Several indexes may show as unused in a development database yet be used frequently in production because of the higher transaction rate.

2. Only custom indexes should be considered for index monitoring. It is not advisable to monitor indexes that ship with a pre-seeded database such as the Oracle 11i E-Business Suite database.

3. The actual monitoring is done entirely by Oracle without DBA intervention. In an ideally designed database, Oracle may report no unused or redundant indexes at all, so the exercise can end without finding anything to drop; in practice this is very rare.

4. Sometimes an index exists exclusively to speed up a weekend or month-end job. Exercise sufficient caution before dropping unused indexes reported in v$object_usage: confirm with the index owner that the index serves no special purpose.
Conclusion
Database upgrades and migrations to new releases should be performed with a proactive frame of mind rather than a reactive one. I have seen many Oracle upgrades and migrations performed just to meet Oracle support deadlines or to satisfy a prerequisite for other software components. Features like Index Monitoring carry the least risk and a lot of benefits; if you are starting a 'reorg' exercise in your database, Index Monitoring can be a good starting point.

Friday, February 1, 2008

Oracle Applications 11i Middle Tiers Linux Migration

We successfully migrated our Oracle Applications 11i (11.5.10.2) middle tiers (forms and web) from HP-UX to Linux over the weekend, following the standard Metalink Note 238276.1 for the Linux migration. Our admin tiers and database instances were on HP-UX and were left alone during the migration. Overall it was a smooth cutover; however, we did face a SEV 1 issue during the GOLIVE activities. We were able to get help within the maintenance window and released the instance on time. The details are below.

ORA-01403: No Data Found... 

After completing all the tasks and starting the applications on the target Linux servers, none of the forms would open. Every click resulted in an "ORA-01403: No Data Found" error. We reviewed all the tasks and everything had run fine; still, we were convinced the error could be the result of a single file or one wrong parameter. A thorough search on the Oracle Metalink site turned up an article saying that an incorrect CUSTOM.pll can cause this error when opening any form or clicking any form button. With this vital hint, we were able to demonstrate that it was indeed the custom code causing the problem. The following method was used to confirm this.

  1. Log in to forms
  2. If "ORA-01403: No Data Found" occurs on every form click or button click, dismiss the error window, then go to the Help menu => Diagnostics => Custom Code => the option here will be NORMAL; choose OFF.
  3. Navigate the forms again. The error no longer showed up after turning off custom code.

Having narrowed the problem down, we soon found that a .pll file, added alongside CUSTOM.pll for some changes, had not been compiled on Linux. The problem went away after compiling this .pll as well.

Conclusion

The adadmin program compiles only the standard code, so custom code compilation has to be taken care of separately after the standard code compilation. Failure to compile any custom code file can result in the problem described above.
