Monday, December 29, 2008

How to create a lean Oracle Internet Directory

Introduction


A Directory Service such as Microsoft Active Directory (AD) stores a variety of information about an enterprise viz. User Accounts, Organization Units, Printer names, Computer names, Fax Machine names and other network resources. If AD is the master source of truth for Oracle Internet Directory (OID), the Oracle Directory Integration Assistant provides a default filter to retrieve only the needed resources i.e. user accounts (objectclass=user), groups and Organization Units (objectclass=organizationalunit). Below is the default filter to retrieve information from a Microsoft Active Directory. It can be viewed by launching the dipassistant -gui tool on the Oracle Internet Directory (OID) host (dipassistant -gui => ActiveChgImp => Filtering).

"searchfilter=((objectclass=group)(objectclass=organizationalunit)(&(objectclass=user)(!(objectclass=computer))))"

Custom Filter


One can add additional conditions in the searchfilter to block unwanted resources such as disabled user accounts, special user accounts, generic user accounts, or even organization units if a FLAT DIT is implemented. In a huge enterprise, Active Directory can contain thousands of unwanted user account names, and if a custom filter is not used, all these unwanted entries will be loaded into the FND_USER table in the E-Business Suite database if "Unconditional Provisioning" is enabled.

A few advantages of a custom filter:

  1. A lean OID (fewer rows in the ODS schema and fewer entries processed during the initial bootstrap operation that copies data from AD to OID).
  2. Less work performed by the AD to OID synchronization jobs during every scheduled interval.
  3. A lean FND_USER table if Unconditional Provisioning is turned on.

How to write a custom filter


Conditions written with boolean operators usually take the form (condition_1 and condition_2), (condition_1 and (condition_2 or condition_3)) and so on.

But an LDAP searchfilter is written differently: the boolean operator comes first (prefix notation). Note the position of the boolean operators in the examples below.


Example 1 : (condition_1 and condition_2) is written as (&(condition_1)(condition_2))

Example 2 : (condition_1 and (condition_2 or condition_3)) is written as (&(condition_1)(|(condition_2)(condition_3)))


In our enterprise's Active Directory, special account names contain a . (DOT), such as ADMIN.PAYROLL, and inactive users are flagged with an "I" in the Org-UserStatus attribute. The custom search filter for our case is


searchfilter=(&(objectclass=user)(!((cn=*.*)(sAMAccountName=*.*)(Org-UserStatus=I)(objectclass=computer))))

If one wants to write a custom filter, one should work with the AD administrator to obtain the conditions and attribute names that can be used in the filtering process, then follow the rules above to place the boolean operators in the right positions. The filter can be tested with ldapsearch before it is added to the profile, as sketched below.
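A minimal sketch of such a test, run directly against Active Directory with the ldapsearch utility shipped with OID; the AD host name and bind credentials below are placeholders:

ldapsearch -h ad_srv.mycompany.com -p 389 -D "cn=binduser,cn=Users,dc=ad_srv,dc=mycompany,dc=com" -w ***** -b "dc=ad_srv,dc=mycompany,dc=com" -s sub "(&(objectclass=user)(!(|(cn=*.*)(sAMAccountName=*.*)(Org-UserStatus=I)(objectclass=computer))))" dn

If the command returns exactly the accounts you expect to synchronize, the filter is ready for the profile.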

Thursday, December 4, 2008

FLATTENED Directory Information Tree

Introduction


By default, Oracle creates the Directory Information Tree (DIT) in an identical structure during the initial data load (bootstrap) and subsequent data synchronization from an external LDAP directory (such as MS Active Directory) to the Oracle LDAP directory (Oracle Internet Directory). However, several entries failed to get loaded into OID when we tried this default method. Our bootstrap.log file contained the below errors for the entries that failed to get processed in OID.

LDAP: error code 32 - Parent entry not found in the directory
Error occurred while loading - cn=emp_2,ou=aurora,ou=IL,dc=oid_srv,dc=mycompany,dc=com

DN in Active Directory (AD)

cn=emp_2,ou=aurora,ou=IL,dc=ad_srv,dc=mycompany,dc=com

DN in Oracle Internet Directory (Identical to the DN in AD)

cn=emp_2,ou=aurora,ou=IL,dc=oid_srv,dc=mycompany,dc=com





Solution


This error is at best misleading because the parent entry (ou=aurora) did get created in OID. We could not find a solution for this error and hence altered the design of our DIT on the target OID: we FLATTENED it. A FLAT DIT does not have any sub-trees such as OUs; all the entries are loaded directly under a single container. To create a FLAT DIT, the Domain Mapping Rule has to be configured in the AD->OID synchronization profile (a sketch follows the DN examples below).

DN in Active Directory (AD)

cn=emp_2,ou=aurora,ou=IL,dc=ad_srv,dc=mycompany,dc=com

DN in Oracle Internet Directory (FLATTENED without any OUs)

cn=emp_2,dc=oid_srv,dc=mycompany,dc=com
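A sketch of such a mapping rule, assuming the standard three-field DIP mapping format (source container : destination container : DN construction rule) and the example naming contexts above; verify the exact syntax against your profile's mapping file before applying it:

DomainRules
dc=ad_srv,dc=mycompany,dc=com : dc=oid_srv,dc=mycompany,dc=com : cn=%,dc=oid_srv,dc=mycompany,dc=com

The cn=% rule constructs every target DN directly under dc=oid_srv,dc=mycompany,dc=com, regardless of the OU the entry came from in AD.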




Conclusion


Oracle supports both regular DITs and flat DITs. If one wants to check the source directory's DN value, one can use the orclsourceobjectdn attribute returned by the ldapsearch command in OID. Furthermore, a flat DIT is simple in design and makes it easy to search for entries in OID using the oidadmin utility. All the LDAP synchronization functions such as ADD, MODIFY and DELETE are supported in a FLAT DIT. We have designed a flat DIT and we are happy with it.
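For example, a hypothetical lookup of the flattened emp_2 entry's original AD DN (the host name and password are placeholders):

ldapsearch -h oid_host_name -p 389 -D "cn=orcladmin" -w ***** -b "dc=oid_srv,dc=mycompany,dc=com" -s sub "(cn=emp_2)" orclsourceobjectdn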

Monday, December 1, 2008

OCA CA Root certificate warning in the browser

Introduction

We went LIVE with Single Sign-On integration for Oracle Applications during the Thanksgiving weekend. For the production system, we configured SSL for network encryption. For the E-Business Suite SSL configuration, we purchased the security certificate from Verisign (a third-party certificate authority). But this time we wanted to try Oracle Certificate Authority (OCA) for the Single Sign-On web server.

Some of the advantages of OCA are
  1. Strong encryption algorithms.
  2. Cost effective. Since OCA is Oracle's own certificate issuing system, it eliminates the need for IT administrators to purchase certificates from third-party authorities such as Verisign.
  3. Automatic storage of the certificate in OID, because OCA is tightly integrated with the latter. Policies can be configured in OID for renewal alerts and expiration dates of the security certificates.


However, popular browsers such as Internet Explorer, Mozilla Firefox etc. trust certificates only from a few well-established authorities, and Oracle is not one of them yet. Because of this limitation, websites configured with SSL certificates issued by OCA show a warning.


Solution


One way to avoid such a warning is to ask the system administrators to push the OCA CA Root Certificate to the trusted certificate store of all browsers in the enterprise.
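On Windows clients, one hypothetical way to do this is the certutil utility (Internet Explorer reads the Windows certificate store; Firefox maintains its own store and needs a separate import). The certificate file name below is a placeholder:

REM Add the exported OCA root certificate to the machine's Trusted Root store
certutil -addstore Root ca_root.cer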


Conclusion


To save a few $$, Oracle Certificate Authority is a good alternative to Verisign for securing Oracle resources, provided your system administrator agrees to push the CA Root Certificate to the trusted list of certificates of the standard browsers. We saved this hassle for our administrator and purchased a certificate from Verisign (it's already trusted by Internet Explorer). However, OCA is a good bet for test and UAT systems. By the way, OCA is part of the Identity Management software. Following is the warning message shown by Internet Explorer when the certificate issued by OCA is not yet trusted by the browser.




Monday, November 24, 2008

How to fix BULKDELETE performance problem

For several reasons, one may have to delete all the bootstrapped 3rd party user data from Oracle Internet Directory (OID). For example, thousands of entries may have been loaded from MS Active Directory into OID during the initial bootstrap process. If one wants to bulkdelete all these entries from OID, one can notice a performance problem during the delete operation. This problem currently exists in the 10.1.4.2 Oracle Identity Management software. When I tried the bulkdelete operation with 60,000 entries, it took almost 20 hours to finish.

The fix for this performance problem is to create a custom index on one of the OID tables. The steps are below, followed by a consolidated script sketch.

  1. Log in as ods/***@ssodb
  2. CREATE INDEX TEMP_UBS_CT_HRCH_QUERY_IDX ON CT_HRCH_QUERY(ENTRYID);
  3. exit
  4. $ORACLE_HOME/ldap/bin/bulkdelete connect="ssodb" basedn="cn=Users,dc=oid_srv,dc=mycompany,dc=com"
  5. Log in as ods/***@ssodb
  6. DROP INDEX TEMP_UBS_CT_HRCH_QUERY_IDX;
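A minimal sketch of the whole sequence as one shell script, assuming the ODS password is supplied via the ODS_PWD environment variable (a placeholder):

# Create the temporary index, bulk-delete the entries, then drop the index
sqlplus -s ods/"$ODS_PWD"@ssodb <<'EOF'
CREATE INDEX temp_ubs_ct_hrch_query_idx ON ct_hrch_query(entryid);
EXIT
EOF
$ORACLE_HOME/ldap/bin/bulkdelete connect="ssodb" basedn="cn=Users,dc=oid_srv,dc=mycompany,dc=com"
sqlplus -s ods/"$ODS_PWD"@ssodb <<'EOF'
DROP INDEX temp_ubs_ct_hrch_query_idx;
EXIT
EOF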

This index is only for fixing the performance problem and can be dropped after the bulkdelete operation completes successfully. With the custom index in place, I could complete the bulkdelete of the 60,000 entries from OID in less than 20 minutes. Compare that with the 20 hours it took without the index!

Wednesday, November 12, 2008

Restriction on Automatic User Provisioning

Introduction


I explained the different methods of user provisioning in the previous article here. If one were to go with 'Unconditional Provisioning' (automatic provisioning) of new users from OID to the FND_USER table, then one has to consider an important restriction on where such users are placed in the Directory Information Tree in OID.




New user entries that are created in OID and have to be automatically provisioned (created) in the FND_USER table must be placed either in the default USERS container or in any container that is a sub-tree of the USERS container. If the user entries are in a sub-tree at the same level as the default USERS container, such entries will not be automatically created in the FND_USER table even if the 'Applications SSO Enable OID Identity Add Event' system profile option is ENABLED. Hypothetical examples follow.
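For a hypothetical user jdoe in the example realm used elsewhere on this blog:

Provisioned automatically: cn=jdoe,cn=Users,dc=oid_srv,dc=mycompany,dc=com
Provisioned automatically: cn=jdoe,ou=consultants,cn=Users,dc=oid_srv,dc=mycompany,dc=com
NOT provisioned: cn=jdoe,ou=staff,dc=oid_srv,dc=mycompany,dc=com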




Conclusion


This restriction should go away in future releases of Oracle Identity Management, but as of now, the latest 10.1.4.2 does have it. One can load all users in OID into the default USERS container; however, for better management and administration purposes, it is always recommended to create normal users in a sub-tree container.

Thursday, October 23, 2008

'Applications SSO Auto Link User' profile and bootstrapping again

Introduction 

After integrating Oracle E-Business Suite with SSO, setting the system profile 'Applications SSO Auto Link User' = 'Enabled' lets users log in to Oracle using only their SSO username/pwd (their Windows NT username/pwd if the 3rd party LDAP directory is MS Active Directory) WITHOUT the need to enter their local Oracle E-Business username/pwd again on the next page. This profile 'automatically links' the Windows NT username and the existing Oracle username internally (provided there is a user_name in FND_USER that EXACTLY matches the Windows NT username) and directly shows the Oracle E-Business responsibilities navigation pane. It does this by updating FND_USER.USER_GUID to a value that matches the corresponding ORCLGUID value in OID for the logged-in user.

Problem

In some circumstances, one may have to bulkdelete all the OID users and reload them from AD to OID using the bootstrap process. Because it is a new bootstrap, a new ORCLGUID may get generated in OID, making it out of sync with the target FND_USER.USER_GUID value. In such a scenario, Oracle will prompt again for the E-Business local username/pwd despite the system profile 'Applications SSO Auto Link User' having an 'Enabled' value. One can verify the mismatch after the reload by running the following command.

ldapsearch -v -h oid_host_name -p 389 -D "cn=orcladmin" -w ***** -b "" -s sub "sn=RAMINENI*" uid orclguid orclactivestartdate orclactiveenddate orclisenabled
 
ldap_open( oid_host_name, 389 )
filter pattern: sn=RAMINENI*
returning: uid orclguid orclactivestartdate orclactiveenddate orclisenabled
filter is: (sn=RAMINENI*)
cn=c123456,ou=consultants,ou=users,IdentityRealm
orclguid=844AB8C88EE56B74E040810A0CBC6249
1 matches

The orclguid shown above may NOT match FND_USER.USER_GUID for the same user because the former is newly generated by the reload process. In this case, Oracle prompts for the Oracle E-Business username/pwd after the Windows NT username/pwd is entered on the login page, even though the system profile 'Applications SSO Auto Link User' is set to 'Enabled', often confusing the user.

Solution

The way to 'auto link' the SSO username and the FND_USER user_name again after a reload of data from the 3rd party LDAP directory to OID is to update the FND_USER.USER_GUID value to NULL.

update fnd_user set user_guid = NULL where user_guid is NOT NULL;
 
A better alternative is to run the $FND_TOP/patch/115/sql/fndssouu.sql USER_NAME script (apart from setting the FND_USER.USER_GUID value to NULL, this script also flushes the workflow entity cache for the user).
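A hypothetical invocation for a single user JDOE (the apps password is a placeholder):

sqlplus apps/***** @$FND_TOP/patch/115/sql/fndssouu.sql JDOE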

Friday, October 17, 2008

Different methods of provisioning users from Oracle LDAP to E-Business Suite

Introduction

While working on the Oracle E-Business Suite login integration project with MS Active Directory through Oracle Internet Directory (OID), I have come across several ways to provision (add) new users in the E-Business Suite. Without Single Sign-On, the only way to add new users is through the FND_USER_PKG or by using Security => User => Define in Oracle E-Business 11i. Depending on the setup in OID, I have identified five different ways to add new users in 11i after integration with MS Active Directory.

Assumptions
  1. Oracle Identity Management 10.1.4.2 and Oracle Applications E-Business Suite 11.5.10.2
  2. Uni-directional synchronization from Active Directory to Oracle Internet Directory
  3. Uni-directional provisioning from Oracle Internet Directory to the E-Business Suite

Options

On-Demand User Creation

On-Demand User Creation allows any enterprise user who has access to the SSO login page of the E-Business Suite to get an account created automatically. The user has to enter his/her Windows username/pwd on the SSO login page. A click on the 'OK' button will create a user in the FND_USER table of the E-Business Suite with a default 'Preferences SSWA' responsibility and will let the user access Oracle immediately.

Procedure

Set the profile 'Applications SSO Auto Link User' to 'Create a new user and link to OID user'

Advantages
  • Account Creation in E-Business Suite is self-service and automatic.

Disadvantages

  • Lets any user in the enterprise have an Oracle account. Anyone who has the login webpage address can get an account created in 11i.

ldifwrite on OID and LDAPUserImport on 11i

Procedure

  1. Use the ldifwrite command on the OID server to create a dump file containing the user's LDAP attributes and other information (a sketch of this step follows the list).
  2. Copy the ldif file to one of the 11i middle-tiers
  3. Run the LDAPUserImport java command to import the user into FND_USER table.
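A minimal sketch of step 1, assuming the OID instance name ssodb and a user jdoe (both placeholders); the exact LDAPUserImport invocation for step 3 is not shown here.

$ORACLE_HOME/ldap/bin/ldifwrite connect="ssodb" basedn="cn=jdoe,cn=Users,dc=oid_srv,dc=mycompany,dc=com" ldiffile="jdoe.ldif"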

Advantages

  • Control on user creation. Only genuine and approved users will be allowed to have an account in Oracle.

Disadvantages

  • Manual process. However, it can be automated as well.

provsubtool on OID and Workflow Subscription Event on 11i

Procedure

  1. Run the provsubtool command in OID to add new users to an account subscription list
  2. On the 11i side, a workflow subscription event oracle.apps.fnd.subscription.add will be triggered at an appropriate time to add these users in the list to the FND_USER table in 11i

Security => User => Define form in 11i

The good old way of using Security => User => Define can still be used for provisioning users in 11i after an integration with an LDAP directory.
Procedure
  1. Verify the user exists in OID using an ldapsearch command or by using the oidadmin tool.
  2. Disable the 'Applications SSO LDAP Synchronization' profile option
  3. Create the user using the Security => User => Define form

Unconditional Provisioning

By enabling the 'Applications SSO Enable OID Identity Add Event' system profile, the provisioning profile will add to the E-Business Suite every user account that is synchronized from MS Active Directory to OID. Exercise caution before enabling this system profile, because not every employee or consultant in an organization will need an Oracle account, and this can lead to a proliferation of users in the FND_USER table.

Conclusion

Sufficient thought has to be given before enabling Unconditional Provisioning or On-Demand User Creation as described above. Use one of the other three options for maximum control over user provisioning.

Tuesday, September 16, 2008

Oracle E-Business Suite Login Integration with Corporate LDAP

Introduction

I am currently working on a new project to integrate Oracle E-Business Suite login information with the corporate LDAP directory (in our case, Microsoft Active Directory). This will accomplish Single Sign-On (SSO) functionality for Oracle Applications. Instead of storing user login credentials in a local Oracle database table, users can log in directly to the Oracle Applications website using their Windows NT credentials. SSO lets users log in to their enterprise assets using a single username/password across the enterprise. This eliminates the need for the user to register multiple times for multiple IT applications, greatly increasing simplicity. In our organization, the enterprise user credentials are stored in the corporate LDAP directory (MS Active Directory). Oracle E-Business Suite uses OID (Oracle's LDAP implementation) to integrate with MS Active Directory (Microsoft's LDAP implementation). However, external users and vendors who use applications such as iSupplier and Procurement will continue to be authenticated using the local Oracle database table, because they are not enterprise users.

Architecture

All the details are available in the Proof-of-Concept architecture diagram attached above. Click on it for the full-sized image. Currently I am still working on the Proof-of-Concept setup before finalizing the architecture. The final architecture will have advanced features such as High Availability through Real Application Clusters for the OID database, OID LDAP replication, an Oracle AS Cluster for the SSO login servers etc.

Monday, September 1, 2008

Oracle DBA Activities

1) What is a typical day at your job?

I start my day checking any system alerts such as database performance problems, backup failures etc. We are using Oracle's Enterprise Manager which is a web-based software that sends email alerts to us automatically whenever it detects a problem based on certain criteria. I spend most of the day working on current projects such as database upgrades, migrations, new installations etc. I also help application developers and end-users whenever they have a database related question or problem.
2) What does it take to be a successful Oracle DBA?
Most of today's E-Business and IT applications are entirely web-based and hence the underlying databases have to be highly available 24*7. Responsibility, proactive attitude and emergency preparedness are some of the key characteristics that can make a successful Oracle DBA. IT application developers and the end-user communities rely heavily on the database administrator for their day-to-day database issues, questions and projects. An Oracle DBA should be polite and must treat every one in the organization with courtesy and respect.
3) Has your job description evolved over time?
Yes indeed! The definition of an Oracle DBA has a much broader scope today. I started with just "database work" in my first job. Today my responsibilities include Oracle systems design and architecture, including Oracle E-Business Suite administration, Oracle Application Server setup and administration, setting up Continuity of Business systems (Disaster Recovery preparedness), and setup and administration of Oracle Fusion Middleware components such as Oracle Portal Server, Identity Management etc. I am also expected to work on hardware specifications and requirements for upgrading existing Oracle installations or setting up new ones. Whereas the traditional "Oracle DBA" designation has remained the same, it has a much wider scope and responsibility today.
4) How do you keep up with new features and changes & advancements in database technology?
Every major Oracle database release comes with a lot of exciting new features which can be leveraged for simplicity, automation or better database management.
a) I am an avid reader of the bi-monthly Oracle Magazine. The subscription is free and it is available online as well. The magazine covers the latest in Oracle and contains a lot of expert articles with a practical outlook on tackling business problems.

b) I have also subscribed to RSS feeds on http://otn.oracle.com/ so that I get updated whenever there is a new knowledge-base article. This is a popular site for the Oracle community, and most of the technology articles are posted by Oracle ACEs and Oracle ACE Directors, individuals proven and recognized by Oracle Corporation.

c) I also recommend aspiring DBAs register on the official Oracle Forum. Thanks to the many experts who generously contribute to this discussion board, virtually any database related question can get answered there.
5) What is the best feature you like about the Oracle DB, and what needs improvement compared to other databases in the market?
My favorite Oracle database feature is Real Application Clusters (RAC). Using RAC technology, Oracle databases can be setup for high availability and virtually unlimited scalability. I did not get a chance to fully evaluate other databases in the market vis-a-vis the Oracle database. Oracle is the recognized Industry leader as per various results published by market research companies such as IDC and Gartner.
6) Have any of the following major macro trends affected you personally, and what's your opinion?
a. Outsourcing & Offshoring

No. Oracle DBA is one of the few jobs that has been less affected by outsourcing. A DBA is critical to the success of an IT department; the job requires a lot of technical understanding, emotional maturity, the ability to handle pressure and crises, and it comes with a lot of responsibility. In fact, all the Dice Reports this year show the Oracle database as one of the top technology skills in the US market.

b. Virtualization

Remote service and telecommuting are only for low-profile work such as after-hours support. Most managers prefer Oracle DBAs to work onsite and under direct supervision.

c. Moving from client-server to web-based

The Oracle DBA is usually less affected by client-server to web-based migrations. Oracle databases work with both client-server systems and web-based systems.
7) Your advice to people who are evaluating Oracle DB administration as a career.

The IT industry is facing a shortage of quality Oracle DBAs. Oracle database administration is a good career option with long-term benefits. I have been working as an Oracle database administrator for more than 6 years and the experience is very rewarding. It has also given me the confidence to architect and build large-scale IT systems. I have been able to positively impact the experience of the end-user community and contribute to various IT departments.

I recommend all aspiring Oracle DBAs get trained in a classroom atmosphere. Oracle University offers a lot of online and classroom courses in many countries across the world.

Wednesday, August 20, 2008

SSO Login failure: Is it a problem with the username or the password?

We are working on a proof-of-concept instance to evaluate integrating E-Business Suite login information with the corporate LDAP directory (in our case, MS Active Directory). Users are currently authenticated using the local fnd_user table in the E-Business Suite database. The project aims to use the corporate user repository (MS Active Directory) for authenticating the E-Business Suite users, enabling Single Sign-On (SSO) for all Oracle E-Business suite applications in the enterprise.

I had a tough time testing login failures after the SSO integration with E-Business Suite. The login page does not reveal whether it is an invalid User Name or an invalid Password. All I get is "Error: Authentication failed. Please try again".

I turned to $ORACLE_HOME/sso/log/ssoServer.log for help. The ssoServer.log provides additional error information only for invalid User Names. In the error message below, the stack trace reveals the problem is with the username sramineni@mycompany.com, meaning this username does not exist in Oracle Internet Directory (and MS Active Directory).

Wed Aug 20 23:47:41 CDT 2008 [ERROR] AJPRequestHandler-ApplicationServerThread-7 Could not get attributes for user, sramineni@mycompany.com oracle.ldap.util.NoSuchUserException: User does not exist - SIMPLE NAME = sramineni@mycompany.com at oracle.ldap.util.Subscriber.getUser_NICKNAME(Subscriber.java:1160)

Now, if I enter a valid User Name but an invalid password, Oracle fails to update ssoServer.log with any error/stack trace. Oracle keeps you guessing on this aspect.
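A hypothetical way to catch the invalid-username case as it happens is to watch the log while the user retries:

tail -f $ORACLE_HOME/sso/log/ssoServer.log | grep -i NoSuchUserException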

Monday, August 18, 2008

Database and archive log backup in one RUN command

Introduction


We went live with 10.2.0.4 Oracle CRS this weekend, completing the 5-month 9i to 10g database upgrade project. I also verified the first RMAN backup job after the upgrade and it completed successfully. A known issue with the backup job in 9i no longer exists after the 10g upgrade, an additional bonus of the upgrade!

Problem


We always take full database backups, without any incremental backups. Though the below run block contains two backup commands, one for the database backup and the other for the archive log backup, the latter never used to get picked up in 9i. In that older version, Oracle always skipped the second backup command in the run block below without reporting any error messages or warnings.

connect rcvcat 'rmanprod/*****@catdb'
connect target '/'


run {
allocate channel t1 type 'SBT_TAPE';
allocate channel t2 type 'SBT_TAPE';
backup
format 'prod_bk_%s_%p_%t'
(database include current controlfile);
# backup all archive logs
backup
skip inaccessible
format 'prod_al_%s_%p_%t'
(archivelog all
delete input);
release channel t1;
release channel t2;
}
allocate channel for maintenance type 'SBT_TAPE';
crosscheck backup of database;
delete expired backup;
release channel;


Solution

Since we have a different backup job that backs up only archive logs, we did not pursue this problem with Oracle Support. I am not sure if this is a known issue in the 9i version, but I was pleasantly surprised to find the problem go away in 10g. The first backup job that we ran after the 10g upgrade successfully processed both the database backup command and the archive log backup command in the same RUN block. The 9i mystery seems to have gone away in 10g. I am interested to know if any of you have observed the same issue with 9i before.

Sunday, July 13, 2008

APP-FND-01508: Failed to Connect

Introduction

The other day I spent almost two hours tracking down the source of an APP-FND-01508: Failed to Connect error occurring in all forms associated with some custom responsibilities in the Applications 11i E-Business Suite. The problem started after changing the password of a custom database user 'amarchive' that is used to connect to and open the forms from the applications' responsibility navigation pane.


Problem

We use Informia Archive software to archive historical financial data from the E-Business Suite. For reporting purposes, we have created a couple of custom responsibilities in the E-Business Suite so that users can run concurrent requests. All forms associated with these custom responsibilities started displaying the APP-FND-01508: Failed to Connect error. An event trace for the ORA-01017 error (invalid username/password) showed the custom database user 'amarchive' in the trace file whenever the above connect error was displayed. Upon further investigation, I found that the password of this custom database user had been changed with the 'alter user amarchive identified by newpass' command. In normal circumstances, such a command would seem the right way to change the password of a custom database schema user. However, that was not the case in our database. The custom user was registered as an Oracle Applications module and hence has an entry in the fnd_oracle_userid view. An alter user command only updates the password in the dba_users view, so the password in fnd_oracle_userid does not get changed. A successful forms connection needs both the fnd_oracle_userid and dba_users views to contain the same password. This can be accomplished only by using the FNDCPASS command.

FNDCPASS apps/**** 0 Y system/***** ORACLE AMARCHIVE newpass

Solution

I used the FNDCPASS command to change the password again (using the same password specified in the alter user command), and this command synchronized both the fnd_oracle_userid and dba_users views, enabling successful forms navigation again.

Conclusion
Many Oracle E-Business Suite 11i databases have custom database schema users. These schema users have to be checked to see whether they are registered as application modules. If they are, the users have entries in the fnd_oracle_userid view as well, and the only way to change their passwords is through the FNDCPASS command.
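A hypothetical pre-check before changing any custom schema password; if a row comes back, the schema is a registered module and FNDCPASS must be used:

-- Is the custom schema registered as an Applications module?
SELECT oracle_username FROM fnd_oracle_userid WHERE oracle_username = 'AMARCHIVE';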

Saturday, June 14, 2008

RMAN DUPLICATE - Not everything is made using the CATALOG alone?

Introduction

We just wrapped up our 10g RMAN testing. To test recovery using our RMAN backup, we used the RMAN DUPLICATE command to create a duplicate database from this backup. I noticed something about the DUPLICATE command that I had missed so far: the DUPLICATE command creates the duplicate database's controlfile using information from BOTH the existing target database's controlfile and the backup controlfile, not just the latter alone.

Test Case

1) dev10g is an existing development database that was upgraded to the 10g version
2) dev10g is in noarchivelog mode (as most development databases are)
3) dev10g is converted to archivelog mode to take an RMAN backup, including a full database and archivelog backup
4) dev10g archiving is disabled (noarchivelog mode again)
5) dup10g is the proposed new database that will be created using the RMAN backup of dev10g
6) The DUPLICATE command is run on the dup10g database's host
7) The DUPLICATE command fails after restoring the datafiles

From the logfile.....

contents of Memory Script:
{
set until scn 8321617961105;
recover
clone database
noredo
,
delete archivelog
;
}


..
..
..


RMAN-00571: =======================================
RMAN-00569: ===== ERROR MESSAGE STACK FOLLOWS =====
RMAN-00571: =======================================
RMAN-03002: failure of Duplicate Db command at 06/09/2008 23:41:15
RMAN-03015: error occurred in stored script Memory Script
RMAN-06136: ORACLE error from auxiliary database:
ORA-01194: file 1 needs more recovery to be consistent

ORA-01110: data file 1: '/dbfs/oracle/oradata/dev10g_sys01.dbf'

Problem

The target database dev10g was converted to noarchivelog mode in the 4th step above, after the RMAN backup was taken. However, the DUPLICATE command *considers* there are no archivelog files to be applied to dup10g because the target database's controlfile shows the database is in noarchivelog mode. Though dev10g's archivelog files were backed up when the backup commands were run in the 3rd step above, DUPLICATE generates an internal script that forbids the archivelog apply because the target database's current controlfile shows it's in noarchivelog mode.

Solution

Enable archiving in the target database while DUPLICATE is running, even though it's a development database that does not require archiving. It can be disabled once the duplicate database is created successfully.
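A minimal sketch of the archiving toggle on the target, using standard SQL*Plus commands:

-- On dev10g, before running DUPLICATE from the dup10g host
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
-- After the duplicate completes, repeat the sequence with ALTER DATABASE NOARCHIVELOG;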

One may not encounter this problem when the target database is a production one, because production databases are usually in archivelog mode; however, one may encounter it when the target is a development database whose archiving was disabled after an RMAN backup.

Thursday, June 12, 2008

RMAN debug

RMAN automates the Oracle database's backup and recovery processes. If you would like to know what happens inside an RMAN command (RMAN internals), then try the debug option. Also, when you have a case open on any RMAN issue, the debug logfile can help a support analyst quickly locate the source of your RMAN problem.

It's very easy to turn on the debug feature.

/home/oracle/backup_testing> rman log=output.log trace=debug_info.log
RMAN> debug on
RMAN> RMAN Command 1;
RMAN> RMAN Command 2;
....
....
....
RMAN> exit

The debug_info.log contains all the internal details. By the way, you will not find information about this feature in the Oracle documentation.

Tuesday, June 3, 2008

central tnsnames.ora

Introduction

For hassle-free management of enterprise-wide database connection information, the connect descriptors can be stored in a centralized repository. There are several solutions for implementing a centralized repository, ranging from a basic tnsnames.ora stored on an NFS share to an advanced LDAP-compliant directory services implementation through OID (Oracle Internet Directory).

Problem

Several of our Oracle developers and end users rely on 3rd party tools like TOAD and SQL*Navigator to communicate with Oracle databases. We have put the tnsnames.ora containing all the Oracle database connection information for these utilities on a shared network drive in Windows. This solves the problem for clients using Windows. However, our system administrators do not allow NFS access on Unix servers. Due to this restriction, it has become a hassle to maintain a separate tnsnames.ora on every Unix server that hosts Oracle installations (client or server). If we create a new database or change an existing database's connection settings, we have to update the tnsnames.ora on all other Unix servers with the new connection information.

Solution
While discussing this in the Oracle forums, I stumbled upon a specialized tool, tnsManager, built by Andrew Barry, that acts as a simple LDAP-compliant server serving just TNS information. It is the kind of solution I was looking for to solve the problem. It is quite easy to use and has a very small footprint. The 'HOW TO' guide is here.
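A sketch of the client-side configuration that points Oracle clients at such an LDAP server; the host name, port and context below are placeholders to be verified against the tnsManager guide:

# sqlnet.ora : resolve names via LDAP first, then a local tnsnames.ora
NAMES.DIRECTORY_PATH = (LDAP, TNSNAMES)

# ldap.ora : the tnsManager host and the directory context it serves
DIRECTORY_SERVERS = (tnsmgr_host:389:636)
DIRECTORY_SERVER_TYPE = OID
DEFAULT_ADMIN_CONTEXT = "dc=mycompany,dc=com"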

Conclusion
We will be implementing Oracle Identity Management (which will contain OID) by the end of the year; it provides a much more sophisticated mechanism for organizing and managing centralized directory services, like application login accounts and centrally managed database connection information. The tnsManager utility can be a good alternative that satisfies our basic need of storing just the database connection information in a central tnsnames.ora until the full-blown Identity Management project is implemented. Best of all, it's just one executable that Andrew supplies, and it takes 5 minutes to install and get going! As far as a DBA is concerned, all he or she has to do is edit the central tnsnames.ora that is read by the tnsManager utility whenever an existing database's connection settings change or a new database is added in the enterprise. No extra configuration is necessary.

Thursday, May 29, 2008

RMAN DUPLICATE - automatic temp files recreation

As part of our 10g upgrade project, we are testing our existing RMAN scripts, especially the DUPLICATE command that we use to clone databases. In one of my previous posts, I have already outlined the procedure to clone a database using the DUPLICATE command.

It works fine in 10g too; however, an added benefit in this release is that it recreates the temp files automatically. The temp files for the destination database are created in the directory specified by the DB_FILE_NAME_CONVERT parameter, with file names and file sizes similar to the source database. This eliminates the manual recreation of the temp files after every database clone performed with the DUPLICATE command.
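For illustration, hypothetical auxiliary-instance parameters for the dev10g/dup10g pair from my earlier DUPLICATE post; the same DB_FILE_NAME_CONVERT pattern now drives temp file creation as well:

DB_FILE_NAME_CONVERT = ('/dbfs/oracle/oradata/dev10g', '/dbfs/oracle/oradata/dup10g')
LOG_FILE_NAME_CONVERT = ('/dbfs/oracle/oradata/dev10g', '/dbfs/oracle/oradata/dup10g')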

With this new feature in 10g (specifically it is an Oracle 10g R2 new feature) the DUPLICATE command eliminates one more manual step performed during the database cloning process.

Monday, May 19, 2008

How to become an Oracle DBA

I was on vacation for three weeks in April and hence could not blog for more than a month. We are also currently busy upgrading our E-Business Suite 11.5.10.2 on a 9.2.0.6 database to the 10g R2 version, and also completing Linux migrations of the rest of the Oracle systems that we have here. The upgrade is still in the development phase and we are planning to go live in July this year. We are all very excited about the upgrade, especially because of all the goodies and new features that come with the 10g release. So far the upgrade looks smooth, but I will post my experiences with issues (if any) once we are ready to GOLIVE.

Yesterday I got an email from a regular visitor of my blog who happens to be a fresh computer science engineering major, and he wanted to know how he can become a good Oracle DBA with no prior experience.

One has to start with a professional training course in database administration. I recommend DBA University courses. Learning by oneself is also a good idea, especially for a computer science major, because one usually picks up RDBMS concepts in college. I recommend getting trained on Oracle Real Application Clusters (RAC) directly. A useful book that I came across is 'Personal Real Application Clusters' from Rampant Press (well known for their specialized Oracle concentration). Though I did not go through the book, the topics and chapters look quite interesting.

It will be a very good experience setting up your own personal RAC system. The industry is converting more and more Oracle single instances into multi-node clusters (Real Application Clusters) for high availability and scalability. The favorite platform for RAC is Linux, and of course it will become the platform of the future. The book seems to be a good handbook for building your own Oracle RAC on the Linux platform easily and in a cost-effective manner. Increased confidence and a sense of accomplishment will be the result of this exercise.

Wednesday, April 2, 2008

Oracle DBA interview questions

Introduction
Last night I got a call from my friend who wants to "job-hop" to get a 200% pay raise, and he wanted answers to some popular interview questions.

There is no one-line objective answer to these questions. The answers get better as you gain more experience in the art called database administration. If I were asked the questions below in an interview, my answers would be like the ones in this post. For simplicity, I will assume the database in question is a 9i or a 10g database and an OLTP one. Some of the answers may not be applicable, or may have changed, in other versions.
1) Suppose I have 30GB of RAM. What would be the size for the SGA? What is the ideal calculation for configuring the SGA?
A) I will go with the assumption that you already have a database of a particular size and now want to use the newly provided 30GB of RAM to configure the SGA for this existing database. I will go with 1% of the DB size for the SGA. So if the database size is 200GB, the SGA I will recommend is 2GB. Similarly, if the database size is 1000GB, I will go with a 10GB SGA. I follow this formula up to 1000GB (a 1 terabyte database). It is not directly proportional afterwards. For very large databases (more than 1 terabyte), SGA sizing has to be done based on demand and requirements, after getting valuable inputs from performance reports like Statspack reports. If I had a 10TB database, I would stop at a 10GB SGA and check performance reports to decide whether to increase the SGA any further. This formula is purely based on my experience; there is no hard and fast rule. You are always free to add more SGA if you have the physical memory, but read the next question below to see why it may not be of much benefit if the existing SGA is already well configured.

2) If I increase/decrease the SGA size, what would be the effect on the database?
A) In the majority of cases there may not be any effect. Here is why. You should take Statspack or other performance reports and check whether the hit ratios are above 98% (I like 99%+ hit ratios). If the LIBRARY CACHE HIT RATIO and BUFFER CACHE HIT RATIO are below 95%, it's better to increase the SGA. However, if there are full table scans on big tables in your queries, any amount of increase of the SGA (DB_CACHE_SIZE) will be futile, because the LRU algorithm puts data accessed from table scans in the "Least Recently Used" category, causing it to get flushed out of the DB CACHE more frequently than blocks accessed through an index path. Similarly for a query that executes frequently using literals instead of bind variables: such statements fill up the SHARED POOL immediately. I have seen databases getting ORA-04031 (out of shared pool memory) with a 5GB SHARED_POOL_SIZE, and I have also seen databases running fine with just an 80MB SHARED_POOL_SIZE! Statspack reports are going to be your bible in a 9i database, though you may rely on other components like ADDM in 10g. In short, I would not increase the SGA unless I had gone through the performance reports.

3) Say my select query is running, but somehow my db buffer cache got full. What will happen to the query? Under this scenario, from where will it read the data?
A) It will read data from disk. There will not be any failure. You may see a slow response (usually only the first time) from the database because it will read data from the disk. But the buffer cache will get flushed soon because of the LRU algorithm, and soon your select query will have free space in the buffer cache (unless your select query is the culprit here, doing a lot of full table scans).
4) Under what condition would I need to increase the SGA max size?
A) SGA_MAX_SIZE is a "nice to have" init.ora parameter. If you want to experiment with different SGA sizes while keeping the database up and running, you should configure SGA_MAX_SIZE. This parameter allows you to change individual SGA component sizes like DB_CACHE_SIZE and SHARED_POOL_SIZE dynamically while your database is online. You can increase these SGA components during peak load, e.g., when there is a quarter-end or month-end reporting job that accesses a lot of data.
5) Say a query is not performing well today, but it was running well so far; somehow today it is taking much more time. As a DBA, what basic strategy is needed to check the performance of the query?

A) First get the current SQL of the query. You can join the v$session and v$sqltext dynamic performance views on the SQL address and get the current SQL that is running (if it's active); every SQL statement gets a unique address in the SHARED POOL (think of it as a unique identifier throughout its existence there). Or else you can get it from GUI tools like OEM, SQL Navigator or Embarcadero tools. Then it depends on why your query suddenly has a higher response time today. Maybe there are other jobs currently running that are taking a lot of resources, or there could have been a code change last night by a database developer, or a sudden data load into the underlying tables since last night. There are several possibilities. I will check the num_rows in dba_tables against the actual number of rows using a count(*) and then decide whether I want to analyze the tables. Taking a trace (level 12) is a good place to start to get to the bottom of the problem. There could also be an I/O problem at the system level, so I will also check with the system administrators.
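A minimal sketch of that join (the SID 123 is a placeholder for the session you are investigating):

-- Current SQL text of a given session, assembled in order
SELECT t.sql_text
FROM v$session s, v$sqltext t
WHERE t.address = s.sql_address
AND t.hash_value = s.sql_hash_value
AND s.sid = 123
ORDER BY t.piece;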
6) Say I have executed a delete on a table (it contains only 5 records), but it is not processing the delete. I check and see there are some locks in the database. How much time do I need to wait for my delete operation to succeed? If Oracle does not release the locks, do I have to bounce the database to resolve the problem?

A) You should find out what kind of lock it is. If it's an enqueue lock, you have to look at the other, offending session that is blocking this session. Enqueue locks are usually a result of bad code. To let the delete statement pass through, I can call the user who is running the offending session and then remove that session from the database. However, the root cause lies in the code design; in the future the same delete statement may hit the problem again. Therefore fixing the code is the long-term fix for enqueue-type locks.
7) Whenever we run Statspack, we see some top 5 wait events like "log file sync" and "db file scattered read". When do these events occur and how do we resolve them?

A) I will not worry about familiar events like "db file sequential read" or "SQL*Net message from client". However, I will be concerned if the events are "log file sync", "enqueue" or "db file scattered read". If there is huge transactional activity in the database, you should get a trend of your expected log switches per day, and if the latest Statspack report has "log file sync" at the top, you should find out whether any long-running DML jobs were causing too many waits on it. Some of the reasons could be a disk I/O problem that is causing slow log file syncs, or too-frequent commits in your long-running jobs causing frequent log buffer flushes to the online redo logs. DB FILE SCATTERED READ is a scary wait event. If it's in the top 5, it means user sessions are frequently waiting on data read from disk, and it also indicates that the database is undergoing a lot of full table scans or index fast full scans, which will slow down the database.

Friday, March 14, 2008

Oracle Database is the skill with the biggest demand

For the last few days I have been hearing only discouraging news about the US economic slowdown and a possible recession in the coming days, but there is one news item that will bring a smile to all the Oracle DBAs!

According to The Dice Report, the latest snapshot of the technology job market published by dice.com for the month of March 2008, Oracle Database is the skill that is most in demand. This is certainly heartening, especially for the Oracle DBA community.

Click here for more details....

Oracle Business Intelligence (Discoverer 10g) Linux Migration

Introduction

As part of the ongoing Linux migration project, we successfully migrated Oracle Business Intelligence (Discoverer 10g) from HP-UX to Linux last weekend. The Discoverer migration is straightforward for the most part, and many steps can be done well in advance to avoid increased system downtime during the cutover. Most of the procedure involves a new Linux installation and configuration; only a part of it involves the "actual migration" of files from the existing source HP-UX platform to the target Linux platform. Below are the high-level steps that we followed.
Advance Steps
  1. Our existing version on HP-UX is 10.1.2.0.2. We downloaded the same version for Linux x86 from the OTN site. Oracle Discoverer 10g is also called Oracle Business Intelligence SE (Standard Edition).
  2. Install the Linux 10.1.2.0.2 software on the target Linux host.
  3. Create the $ORACLE_HOME/discoverer/secure directory.
  4. To provide transparency and a friendlier URL to end users, we got a DNS alias for the Linux host. After the installation, $ORACLE_HOME/chgip/scripts/chgiphost.sh -mid is run to update the Discoverer installation's configuration files with the DNS alias (compare it with adautocfg.sh in Oracle E-Business Suite 11i).
Cutover Steps (Downtime)
  1. Shutdown the discoverer processes on the existing HPUX platform.
  2. Migrate the Discoverer preferences from the source HP-UX to the target Linux platform. Discoverer preferences are settings that define the Discoverer environment and control the behavior of Discoverer Plus and Discoverer Viewer. Whenever an individual user changes preferences in Discoverer, they are stored in a hidden file, $ORACLE_HOME/discoverer/.reg_key.dc. This file contains values in a platform-specific format and therefore needs to be converted to the target platform's format.
  • Backup the target linux platform’s $ORACLE_HOME/discoverer/.reg_key.dc
  • Copy the $ORACLE_HOME/discoverer/.reg_key.dc from source platform to Linux as hpux_reg_key.dc
  • On Linux, cd to $ORACLE_HOME/discoverer/util and run the below conversion script
  • perl convertreg.pl hpux_reg_key.dc $ORACLE_HOME/discoverer/.reg_key.dc

3. Apply any one-off patches on Linux that were applied on the source platform

4. Migrate tnsnames.ora and dbc files from source to Linux.

5. Start the linux discoverer processes (opmnctl startall and emctl start iasconsole)

6. Oracle Discoverer is now LIVE on Linux !

Friday, February 29, 2008

tar gzip and gunzip command examples

Introduction
 
This article gives you some examples of tar, gzip and gunzip commands on Linux operating system.
 
How do you take an export of a database that is 100GB in size when the disk space available to store the export dump is only 20GB? Similarly, how do you make a tar ball of an E-Business Suite APPL_TOP that is already 50GB in size when the disk space available is only 15GB? We are in the process of migrating our existing Oracle installations from the HP-UX platform to Linux, and for the most part we spend time transferring files between the platforms. 'Create and compress' is one option that I have used successfully, and I will illustrate the syntax in this post.

Full database export using parallel compression
  1. Login to source HPUX database host
  2. mknod db_full_exp.dmp p
  3. gzip < db_full_exp.dmp > db_full_exp.dmp.gz & nohup exp system/***** parfile=params.lst file=db_full_exp.dmp &
  4. Copy the above db_full_exp.dmp.gz to the target Linux database server
  5. gunzip db_full_exp.dmp.gz
  6. imp system/***** parfile=params.lst file=db_full_exp.dmp (a variant that avoids the uncompressed dump on the target is sketched below)
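A hypothetical variant of steps 5 and 6 that uses a named pipe on the target as well, so the full-size dump never lands on disk:

# gunzip feeds the pipe in the background while imp reads from it
mknod db_full_exp.dmp p
gunzip -c db_full_exp.dmp.gz > db_full_exp.dmp &
imp system/***** parfile=params.lst file=db_full_exp.dmp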
Tar file and parallel compression
 
The Oracle Applications 11i E-Business Suite Linux migration involves copying several source directories from the source platform to the target Linux platform. The below example deals with the $APPL_TOP copy. The same syntax can be used for the other directories that have to be migrated.
  1. On source HPUX node, cd $APPL_TOP
  2. tar cvf - . | gzip > /any_directory_having_free_space/appl_top_files.tar.gz
  3. Copy the appl_top_files.tar.gz to the target Linux node
  4. On target Linux node, cd $APPL_TOP (directory should be empty)
  5. gunzip -c appl_top_files.tar.gz | tar xvf -
Conclusion

A database administrator should automate processes and tasks, and that extra knowledge of programming goes a long way in accomplishing these goals. Mastery of advanced Unix commands like awk and sed will also help eliminate manual tasks for the most part. Ease with shell scripting or Perl scripting will help one spend less time on routine tasks, thereby giving more time to other important and priority activities.

Tuesday, February 19, 2008

Oracle Index Monitoring Benefits

Introduction

There are many new features available in the Oracle 9i and 10g database releases. Of these many new features, a few stand out from the crowd, and 'Index Monitoring' is one of them. It was introduced in the 9i release and it's one of my favorites in Oracle. Whereas the benefits are enormous, the risks are minimal to zero.

All database indexes need to be maintained in Oracle whether or not they are being used by any SQL query execution plan. The primary purpose of creating indexes is to reduce the cost of an SQL operation, thereby increasing performance and response times in the database. Sometimes database developers create indexes that will never be used by Oracle.

Example 1 :- Creating an index on a column that will be used only in the select clause and never in the where clause of an SQL statement (a non-driving column).

Example 2 :- Another common example is when the cost based optimizer decides that a table access or an alternate index/composite index is a better access path than using the index in question.

One of the easiest to implement Oracle 9i (and 10g) features has been 'Index Monitoring'. The feature identifies all unused indexes in the database and updates a data dictionary view with this information. A DBA, in coordination with the development team, can review these results and proceed to drop these indexes.

Benefits of dropping unused indexes
1. Faster DML operations
Database table insert/update/delete statements execute faster because Oracle no longer has to insert/update/delete the corresponding index entries as well.
2. Reduced Database size
Indexes occupy the next highest storage after tables in most transactional databases. If large indexes are found unused, dropping them can reduce the overall database size.
3. Less RedoLog consumption
All index updates/inserts/deletes get logged (if the indexes are created with the default LOGGING option, which is the norm rather than the exception) in the redologs. If unwanted/redundant indexes are dropped, fewer redologs are switched per day, improving overall database performance.
4. Reduced archive space
More redologs caused by unused indexes mean more archivelog files.
5. Faster database backups
Dropping unused indexes will decrease the database size. Smaller databases take less time to back up.
6. Reduced recovery time
Fewer archive logs translate to fewer archive logs to apply in database recovery operations.
Implementation Procedure
1. Identify all hot custom indexes that are greater than a threshold size (e.g. 100MB)
2. For each of the above indexes, execute the following command
alter index <index_name> monitoring usage;


3. Let Oracle monitor the index usage for 24 hrs. The data dictionary view v$object_usage gets updated if an index has been used at least once in the last 24 hrs. (Here 24 hrs is just an example; some sites may need 48 hrs of monitoring, or even up to 7 days.)

4. Turn off index monitoring by executing the following command
alter index <index_name> nomonitoring usage;

5. Get all unused indexes reported in the view v$object_usage (a sample query follows)
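A minimal sketch of the step 5 query; note that v$object_usage reports only indexes owned by the connected schema, so run it as the index owner:

-- Indexes never used during the monitoring window
SELECT index_name, table_name, start_monitoring, end_monitoring
FROM v$object_usage
WHERE used = 'NO';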
Are there any dependencies? Is downtime needed for Index Monitoring?
Index Monitoring has no dependencies on any database operations. It can be implemented dynamically with very little overhead. There is also no downtime required for the implementation.
Caveats

1. Implementing Index Monitoring in development and test databases may not yield good results and may not be usable as a basis for production implementation. Whereas several indexes may show as unused in development databases, the same indexes may be used more frequently in production because of the higher transaction rate.

2. Only custom indexes need to be considered for index monitoring. It's not advisable to monitor indexes that come in a pre-seeded database such as the Oracle 11i E-Business Suite database.

3. The actual index monitoring is done entirely by Oracle without DBA intervention. In ideal cases, Oracle may not report any unused/redundant indexes at all; this is rare and happens only with perfect database designs, so the exercise may end without finding any unused/redundant indexes in such databases.

4. Sometimes an index is used exclusively by a weekend job or a month-end job to improve its performance. Exercise sufficient caution before dropping unused indexes reported in v$object_usage. Confirmation has to be obtained from the index owner that there is no special purpose for the index concerned.
Conclusion
Database upgrades and migrations to new releases have to be performed with a proactive frame of mind instead of a reactive one. I have seen many Oracle upgrades and migrations performed just to meet Oracle support deadlines or a prerequisite for other software components. New features like Index Monitoring come with the least risk and a lot of benefits. If you are starting a 'reorg' exercise in your database, Index Monitoring can be a good starting point.

Friday, February 1, 2008

Oracle Applications 11i Middle Tiers Linux Migration

We successfully migrated our Oracle Applications 11i (11.5.10.2) middle tiers (forms and web) from HP-UX to Linux during the weekend. We followed the standard Metalink Note 238276.1 for the Linux migration. Our admin tiers and database instances were on HP-UX and were left alone during the migration. Overall it was a smooth cutover; however, we did face a SEV 1 issue during the GOLIVE activities. We were able to get help within the maintenance window and thereby release the instance on time. The details are below.

ORA-01403: No Data Found... 

After completing all the tasks and starting the applications on the target Linux servers, none of the forms were opening. Every click resulted in an "ORA-01403: No Data Found" error. We reviewed all the tasks and everything had been run fine. However, we were convinced that this error could be the result of just one file or one wrong parameter. Soon, after a thorough search on the Oracle Metalink site, we found an article that says an incorrect custom.pll can cause this error when trying to open any form or click on any form button. After getting this vital hint, we were able to demonstrate that it was indeed the custom code causing the problem. The following method was used to confirm this.

  1. Login to forms
  2. If "ORA-01403: No Data Found" occurs on every form click or form button click, then escape the error window, Goto Help menu => Diagnostics => custom code => The option here will be NORMAL. Choose OFF.
  3. Now navigate forms again. The error no longer showed up after turning off custom code.

After narrowing down to the bottom of the problem, we soon found that a pll file, added along with CUSTOM.pll for some changes, had not been compiled on Linux. The problem went away after compiling this pll too.
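A hypothetical recompile of such a library with the Forms 6i generator on the new platform (the library name XXCUSTOM.pll and the apps password are placeholders):

# Regenerate the custom library on the new platform
cd $AU_TOP/resource
f60gen module=XXCUSTOM.pll userid=apps/***** module_type=LIBRARY batch=yes compile_all=special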

Conclusion

The adadmin program compiles only the standard code, so custom code compilation has to be taken care of separately after the standard code compilation. Failure to compile any custom code file can result in the above problem.

Friday, January 11, 2008

Oracle STATSPACK recreation

Introduction

The PERFSTAT schema (and tablespace) contains all the database's performance statistics. Over a period of time, this tablespace can grow to a large size, and the stored statistical data can be purged to make room for more free space. Occasionally, a complete reorganization of the PERFSTAT tablespace can be performed to delete all the historical statistics data, thus reclaiming the disk space consumed by this tablespace.

At first glance, it is easy to think that this reorg can be done hot (while the database applications are up and running). But that's not quite true. This post explains the "behind the scenes" part of STATSPACK reorganization and why it's not a straightforward procedure.

Background

Our 9.2.0.6 Oracle Applications database's PERFSTAT tablespace grew to more than 50GB with all the thousands of Statspack snaps in the schema, and it was just the right time for a purge and reorganization of this tablespace. Since the perfstat schema is not used by any jobs except the statspack.snap cronjob, it looked like a straightforward operation that could be done online. Based on this flawed reasoning, the reorg was completed in our development instance. Soon users were complaining that they were getting Java exceptions once they logged in to Oracle Applications 11i. All the exceptions were due to the DBMS_UTILITY package getting invalidated. Because statspack could not have done this (this was the flawed assumption until then), we guessed it could have been the result of a bad code migration from the applications team.

What's unexpected
There were no changes, nor were there any code migrations from the applications team. Upon further investigation, it was identified that the reinstallation of the STATSPACK environment (spcreate.sql) invalidates critical DBMS packages including DBMS_UTILITY, DBMS_SNAPSHOT, DBMS_IREFRESH, UTL_RECOMP, DBMS_STATS, DBMS_PRVTAQIP, DBMS_PCLXUTIL etc., and we recompiled them immediately to solve the problem. Metalink note 392352.1 mentions this is an expected outcome, but nothing of that sort is written in the official guide. For future reorgs, the safer procedure is the following (a helper query for steps 1 and 3 follows the list):
  1. Take a list of the invalid objects from dba_objects view
  2. Reorg the statspack environment (drop it and recreate it)
  3. Take a new list of the invalid objects and compare it with the list taken above. Recompile the objects that are newly invalid after the reorg.
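A minimal sketch of the snapshot taken in steps 1 and 3:

-- Run before and after the STATSPACK reorg, then compare the two lists
SELECT owner, object_name, object_type
FROM dba_objects
WHERE status = 'INVALID'
ORDER BY owner, object_name;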
Conclusion

There is no "big change" or "small change". Every change should be given due respect and tested properly before GOLIVE or else expect the unexpected !