Upgrade RAC database to 10.2.0.4 in Oracle


Upgrading a two-node RAC to 10.2.0.4 on Red Hat 5.4

Configuration of RAC Environment
Node Names: RAC1, RAC2
Database Name: orcl
Database Version:
Cluster Home: /u01/app/crs
ASM Home: /u01/app/asm
Database Home: /u01/app/10g

Steps involved in Upgrading the RAC Environment
In an Oracle 10g RAC upgrade, the Clusterware software must be upgraded first, then the ASM home, and finally the Database home.
1.      Upgrade Clusterware software (CRS_HOME)
2.      Upgrade ASM Home (This may or may not be a home separate from the RDBMS home)
3.      Upgrade Database Home (RDBMS_HOME)
4.      Then finally upgrade the RAC database (manually or with DBCA, etc.)
Note: The patch set does not allow you to upgrade Oracle RAC before you upgrade Oracle Clusterware.

Download the patch set p6810189_10204_Linux-x86.zip for 10.2.0.4 (or P8202632_10205_LINUX.zip for 10.2.0.5).

Pre-check Steps for the RAC Upgrade

1. Check for invalid objects in the database.

select object_name, status from dba_objects where status='INVALID';

2. Check the time zone of the database.

SELECT version FROM v$timezone_file;
If this gives 4, you may simply proceed with the upgrade even if you have TZ data.
If this gives higher than 4, see Metalink Note 553812.1.
If this gives lower than 4, perform the following steps:


Run the Oracle-supplied check script, which populates sys.sys_tzuv2_temptab:

SQL> @?/rdbms/admin/utltzuv2.sql

Sample output:

Your current timezone version is 2!
Do a select * from sys.sys_tzuv2_temptab; to see if any TIMEZONE data is affected by version 4 transition rules.
Any table with YES in the nested_tab column (last column) needs a manual check as these are nested tables.
PL/SQL procedure successfully completed.
Commit complete.

Then query the results:

SQL> select * from sys.sys_tzuv2_temptab;

If it returns no rows, there is nothing to be done; simply proceed with the upgrade.
If it returns the details of columns that contain TZ data which may be affected by the upgrade, see Metalink Note 553812.1.
The Note 553812.1 states that if you see SYS owned SCHEDULER objects then it is safe to ignore them and proceed with the upgrade. But if you see user data or user created jobs here then you need to take a backup of data before upgrade and restore it back after the upgrade. Remove any user created jobs and re-create them after the upgrade.
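If user tables do appear in sys_tzuv2_temptab, a simple pattern is to preserve the affected values before the upgrade and compare them afterwards. A minimal sketch, assuming a hypothetical table app_events with an affected TIMESTAMP WITH TIME ZONE column event_ts (substitute the table and column names actually reported by the query):

```sql
-- Hypothetical names: app_events / event_ts stand in for the table and
-- column reported by sys.sys_tzuv2_temptab
CREATE TABLE app_events_tz_bkp AS
  SELECT rowid AS src_rowid, event_ts
  FROM   app_events;
```

After the upgrade, join app_events_tz_bkp back on src_rowid to verify (or restore) the time zone values.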

Two types of upgrade:
* Rolling Upgrade (no downtime)
* Non-Rolling Upgrade (complete downtime)
Start with the Clusterware upgrade:

1. Check the cluster services:

[root@rac1 bin]# ps -ef|grep d.bin
root      1467 23423  0 06:03 pts/2    00:00:00 grep d.bin
root     31285     1  0 05:17 ?        00:00:01 /u01/app/crs/bin/crsd.bin reboot
oracle   31716 31303  0 05:19 ?        00:00:00 /u01/app/crs/bin/evmd.bin
oracle   31832 31806  0 05:19 ?        00:00:01 /u01/app/crs/bin/ocssd.bin
2. Perform the upgrade on one node first.
Note: Shut down all processes in the Oracle home on the node that might be accessing a database.

-- Stop enterprise manager
emctl stop dbconsole

-- Stop iSQL*Plus
isqlplusctl stop
--Shut down all services in the Oracle home on the node that might be accessing a database:
srvctl status service -d orcl
srvctl stop service -d orcl 
-- Shut down all Oracle RAC instances on the node on which you intend to perform the rolling upgrade.
srvctl status instance -d db_name -i instance_name
srvctl stop instance -d db_name -i inst_name
-- Shut down the Automatic Storage Management instance on the node on which you intend to perform the rolling upgrade.
srvctl status asm -n node
srvctl stop asm -n node
-- Stop all node applications on the node on which you intend to perform the rolling upgrade.
srvctl status nodeapps -n node
srvctl stop nodeapps -n node
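Putting the sequence together for the first node, using the names from the configuration section (the instance name orcl1 is an assumption; confirm yours with srvctl status database -d orcl):

```shell
# Stop everything on rac1 before patching, top-down:
srvctl stop service  -d orcl              # database services
srvctl stop instance -d orcl -i orcl1     # the local RAC instance
srvctl stop asm      -n rac1              # the local ASM instance
srvctl stop nodeapps -n rac1              # VIP, listener, GSD, ONS
```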
If the database is running on instance 1, relocate it to the second instance:
./crs_relocate   ora.racdb.db

3. Oracle recommends that you create a backup of the Oracle Inventory, the Oracle 10g homes, and the Oracle 10g database before you install the patch set:

# cp -r oraInventory /oracle/oraInventory_bkp
# tar czf /oracle/OraCRSHomebkp.tar.gz crs
# tar czf /oracle/OraASMHomebkp.tar.gz asm
# tar czf /oracle/OraDBHomebkp.tar.gz db_1

4. Back up the Clusterware components:

OCR backup
# ocrconfig -export /oracle/ocrexpbkp.dump
# ocrconfig -showbackup (shows the automatic backup location for the OCR)
# ocrcheck
[root@rac1 bin]# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :    9775400
         Used space (kbytes)      :       3796
         Available space (kbytes) :    9771604
         ID                       :  861912775
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
                                    Device/File not configured
 Cluster registry integrity check succeeded
$ crsctl query css votedisk

[root@rac1 bin]# ./crsctl query css votedisk
 0.     0    /dev/raw/raw2
located 1 votedisk(s).
$ dd if=votedisk_name  of=backup_votedisk_name  bs=4k
Eg: dd if=/dev/raw/raw2 of=/oracle/votebkp.bak bs=4k
5. Database backup: take an RMAN full backup of the RAC database.
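A minimal sketch of the full backup, run from either node (this assumes the database is in ARCHIVELOG mode and that RMAN connects to the target with OS authentication):

```shell
# Full database backup plus archived logs, taken before patching
rman target / <<EOF
BACKUP DATABASE PLUS ARCHIVELOG;
EOF
```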
6. Start installing the Clusterware patch set on the first node:

6.1: Check that all services are down on the RAC1 server:
./crs_stat -t

6.2: cd to patchset_directory/Disk1 and run the runInstaller.
6.3: By default, the installer selects all nodes to copy the patch to; this cannot be changed, so simply click Next.
Note: The following instructions are displayed on the Oracle Universal Installer screen.

7. Perform the additional steps on each node.

On Node1:
7.1: Log in as the root user and enter the following command to shut down the Oracle Clusterware:
# CRS_home/bin/crsctl stop crs
[root@rac1 bin]# ./crsctl stop crs
Stopping resources.
Successfully stopped CRS resources 
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
7.2: Run the root102.sh script to automatically start the Oracle Clusterware on the patched node:
# CRS_home/install/root102.sh
Node1 will come back up with the cluster, database, and ASM services running.
[root@rac1 install]# ps -ef|grep smon
oracle    6768     1  0 06:45 ?        00:00:00 asm_smon_+ASM1
oracle    7298     1  0 06:45 ?        00:00:00 ora_smon_orcl1
root      7335 27696  0 06:45 pts/3    00:00:00 grep smon
On node2:
7.3: Log in as the root user and enter the following command to shut down the Oracle Clusterware:
# CRS_home/bin/crsctl stop crs

7.4: Run the root102.sh script to automatically start the Oracle Clusterware on the patched node:
# CRS_home/install/root102.sh
After patching node2, all resources are back in the ONLINE state on both nodes. You have successfully applied the Oracle Clusterware patch set in a rolling fashion.

[root@rac1 bin]# ./crsctl query crs softwareversion
CRS software version on node [rac1] is []

[root@rac1 bin]# ./crs_stat -t
Name           Type           Target    State     Host        
ora.orcl.db    application    ONLINE    ONLINE    rac1        
ora....l1.inst application    ONLINE    ONLINE    rac1        
ora....l2.inst application    ONLINE    ONLINE    rac2        
ora....SM1.asm application    ONLINE    ONLINE    rac1        
ora....C1.lsnr application    ONLINE    ONLINE    rac1        
ora.rac1.gsd   application    ONLINE    ONLINE    rac1        
ora.rac1.ons   application    ONLINE    ONLINE    rac1        
ora.rac1.vip   application    ONLINE    ONLINE    rac1        
ora....SM2.asm application    ONLINE    ONLINE    rac2        
ora....C2.lsnr application    ONLINE    ONLINE    rac2        
ora.rac2.gsd   application    ONLINE    ONLINE    rac2        
ora.rac2.ons   application    ONLINE    ONLINE    rac2        
ora.rac2.vip   application    ONLINE    ONLINE    rac2   

Apply the Patch on the ASM and Database Homes of the RAC Environment

Note: For the RAC database patch set, the whole database must be shut down on both nodes; there is no rolling upgrade for the RAC database itself.
1. Shut down all processes in the Oracle home on each node that might be accessing a database, for example Oracle Enterprise Manager Database Control or iSQL*Plus (on both nodes):

$ emctl stop dbconsole
$ isqlplusctl stop
[oracle@rac1 bin]$ ./emctl stop dbconsole
TZ set to US/Pacific
OC4J Configuration issue. /u01/app/10g/oc4j/j2ee/OC4J_DBConsole_rac1_orcl1 not found.
[oracle@rac1 bin]$ ./isqlplusctl stop
Copyright (c) 2003, 2005, Oracle.  All rights reserved.
iSQL*Plus instance on port 5560 is not running ...

2. Shut down all services in the Oracle home on each node that might be accessing a database:

$ srvctl stop service -d db_name [-s service_name_list [-i inst_name]]
E.g: srvctl stop service -d orcl -s orcl

3. Shut down all Oracle RAC instances on the nodes, which run from the Oracle home on which you are going to apply the patch set. To shut down all Oracle RAC instances for a database, enter the following command where db_name is the name of the database:

srvctl stop database -d db_name

4. Shut down the Automatic Storage Management instance on each node:

$ srvctl stop asm -n node
$ srvctl stop asm -n rac1
$ srvctl stop asm -n rac2

5. Stop listeners that are running from the Oracle home that you are patching on all nodes.

$ srvctl stop listener -n node [-l listenername]
$ srvctl stop listener -n rac1
$ srvctl stop listener -n rac2

6. Start the patch set installation on the ASM home first:

Note: select the ASM home during installation.

7. Run the root.sh script on each node when the installer prompts for it:

On node1:
Log in as the "root" user and run root.sh from the home being patched, e.g. # /u01/app/asm/root.sh
On node2:
Log in as the "root" user and run root.sh from the home being patched, e.g. # /u01/app/asm/root.sh

8. Check the ASM home patch level with the opatch command.
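A quick way to check the patch level, assuming OPatch is present under the ASM home from the configuration section:

```shell
# List the inventory for the ASM home; the patch set level appears
# in the "Installed Top-level Products" section of the output
export ORACLE_HOME=/u01/app/asm
$ORACLE_HOME/OPatch/opatch lsinventory
```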

9. Start the patch set installation on the Database home:


10. Verify the patch level on the Database home with the opatch command.


1. For Oracle RAC installations, start the listener and ASM on one node of the cluster as follows:

srvctl start listener -n RAC1
srvctl start asm -n RAC1

2. Log in to the database and start the instance in NOMOUNT state:

sqlplus / as sysdba
startup nomount

3. Disable the cluster parameter in the spfile:

SQL> ALTER SYSTEM SET CLUSTER_DATABASE=FALSE SCOPE=SPFILE;

4. Shut down the database:

Shutdown immediate;

5. Use SQL*Plus to start the RAC database in upgrade mode and run the upgrade script:

SQL> STARTUP UPGRADE
SQL> SPOOL patch.log
SQL> @?/rdbms/admin/catupgrd.sql
SQL> SPOOL OFF

6. Check the patch.log file.


Note: When the patch set is applied to an Oracle Database 10g Standard Edition database, there may be 54 invalid objects after the utlrp.sql script runs. These objects belong to the unsupported components and do not affect the database operation.
Ignore any messages indicating that the database contains invalid recycle bin objects similar to the following: BIN$4lzljWIt9gfgMFeM2hVSoA==$0
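After catupgrd.sql completes, run the standard Oracle recompile script and re-check for invalid objects (the @? path resolves to the Database home's rdbms/admin directory):

```sql
-- Recompile all invalid objects left by the catalog upgrade
@?/rdbms/admin/utlrp.sql

-- Re-check the invalid-object list afterwards
SELECT owner, object_name, object_type
FROM   dba_objects
WHERE  status = 'INVALID';
```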
Post-Upgrade Steps: Check the Status of the Upgrade
1. Check the DBA registry and status of components:

select comp_name, version, status from sys.dba_registry;

2. Change the cluster database parameter value for the RAC database:

-- Set the CLUSTER_DATABASE initialization parameter back to TRUE:
SQL> ALTER SYSTEM SET CLUSTER_DATABASE=TRUE SCOPE=SPFILE;

3. Restart the database:

-- Shut down the database
SQL> SHUTDOWN IMMEDIATE
-- Start ASM on the remaining node
./srvctl start asm -n rac2
-- Start the database on all nodes
./srvctl start database -d orcl -o open

4. Start any database services that you want to use:

$ srvctl start service -d db_name -s service_name
5. To configure and secure Enterprise Manager, follow these steps.

In the case of Oracle Real Application Clusters (RAC), execute:
$ emca -upgrade db -cluster

