Sunday 16 September 2018

Installation and upgrade

Ref:
http://www.oraclenext.com/p/installationpatchupgrade.html

https://asanga-pradeep.blogspot.com/2015/03/upgrading-single-instance-on-asm-from.html

Upgrading Grid Infrastructure 11g to 12c


In this post I will demonstrate the steps to upgrade Oracle Grid Infrastructure (GI) 11g (11.2.0.4) to Grid Infrastructure 12c (12.1.0.2). Since ASM runs from the GI home, ASM is upgraded automatically during the GI upgrade. The first release of 12c was 12.1.0.1; for this example I will be using 12.1.0.2. Oracle now provides full installation binaries for patchset releases, so the 12.1.0.2 binaries must be downloaded from MOS as patch 17694377 (Disks 3 and 4). The upgrade in this example is performed on Oracle Linux 6 (x86-64) running on Oracle VirtualBox. An Oracle 11g (11.2.0.4) database ("db11g") is also running on this host with its files on ASM. After the GI upgrade, the 11g database files will remain on the upgraded ASM.

Please also note that the upgrade from 11.2.0.3 to 11.2.0.4 is very similar, and the same article can be used to perform that upgrade. Likewise, the same article can be used if you want to upgrade from 11.2.0.4 Grid Infrastructure to 12.2.0.x.

Before moving forward, make sure that all 12c pre-installation tasks have been completed as described in the Oracle official documentation: https://docs.oracle.com/database/121/LADBI/pre_install.htm


Installation/Upgrade Steps

Log in as the Grid Infrastructure software owner to perform this installation. Stop the database (db11g) that depends on the 11g GI (ASM).
$srvctl stop database -d db11g
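Before and after stopping the database, it is worth confirming its state and what else is registered with the 11g GI stack; a quick sketch using the standard srvctl/crsctl status commands (the resource names here match this environment):

```shell
# Check the current state of the 11g database
srvctl status database -d db11g

# List all resources managed by the 11g GI (Oracle Restart) stack,
# including the ASM instance and listener that will be upgraded
crsctl stat res -t
```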

Extract the binaries from the 12.1.0.2 GI patchset.
Before starting the installer, unset the ORACLE_HOME and ORACLE_BASE environment variables, as these will be set automatically during the installation.

$unset ORACLE_BASE
$unset ORACLE_HOME
$./runInstaller 
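The extract-and-launch steps above can be sketched end to end; the zip file names and staging directory below are illustrative only (they depend on how patch 17694377 was downloaded from MOS):

```shell
# Unzip both GI disks of patch 17694377 into a staging area
# (file names are examples; adjust to your actual downloads)
unzip p17694377_121020_Linux-x86-64_3of8.zip -d /u01/stage
unzip p17694377_121020_Linux-x86-64_4of8.zip -d /u01/stage

# Unset the variables pointing at the old 11g home, then start the OUI
unset ORACLE_BASE ORACLE_HOME
cd /u01/stage/grid
./runInstaller
```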

Select the 3rd option (Upgrade) and click Next

Click Next

Check the checkbox and provide the OMS details if you have an OEM with which you would like your database to be registered. Click Next

Select appropriate OS groups. Click Next

Enter appropriate values for Oracle Base and Oracle Home directories. Click Next

Click Next

Resolve any warnings or errors during the Prerequisite Check. In my case, my VirtualBox VM lacked sufficient RAM, so I saw the following warning, which I chose to "Ignore". Click Next

Click Install

Monitor the progress

When prompted, open a new terminal window, log in as the root user, and execute rootupgrade.sh

The output of rootupgrade.sh should be similar to the following:
[root@salman1 ~]# /u02/app/grid/product/12.1.0/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u02/app/grid/product/12.1.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u02/app/grid/product/12.1.0/grid/crs/install/crsconfig_params

ASM Configuration upgraded successfully.

Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node salman1 successfully pinned.
2014/12/29 15:54:20 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'

2014/12/29 15:55:23 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'

2014/12/29 15:57:07 CLSRSC-482: Running command: 'upgrade model  -s 11.2.0.4.0 -d 12.1.0.2.0 -p first'

2014/12/29 15:57:16 CLSRSC-482: Running command: 'upgrade model  -s 11.2.0.4.0 -d 12.1.0.2.0 -p last'


salman1     2014/12/29 15:57:17     /u02/app/grid/product/12.1.0/grid/cdata/salman1/backup_20141229_155717.olr     0

salman1     2014/12/26 09:40:42     /u01/app/grid/product/11.2.0/grid/cdata/salman1/backup_20141226_094042.olr     -
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'salman1'
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'salman1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'salman1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'salman1'
CRS-2677: Stop of 'ora.CRS.dg' on 'salman1' succeeded
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'salman1' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'salman1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'salman1'
CRS-2677: Stop of 'ora.asm' on 'salman1' succeeded
CRS-2673: Attempting to stop 'ora.evmd' on 'salman1'
CRS-2677: Stop of 'ora.evmd' on 'salman1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'salman1'
CRS-2677: Stop of 'ora.cssd' on 'salman1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'salman1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2014/12/29 16:00:44 CLSRSC-327: Successfully configured Oracle Restart for a standalone server

After executing rootupgrade.sh, click OK and the installer will move forward to execute a few more steps. Keep monitoring.
Once done, click Close to finish the install/upgrade process.

Now you can update the .bash_profile of the GI software owner (grid) to reflect the new ORACLE_HOME and other environment variables. Once done, you can check how your GI is doing after the upgrade and check the versions of the different GI components.

[grid@salman1 ~]$ echo $ORACLE_HOME
/u02/app/grid/product/12.1.0/grid
[grid@salman1 ~]$ echo $PATH
/u02/app/grid/product/12.1.0/grid/bin:/usr/sbin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/grid/bin
[grid@salman1 ~]$ crsctl query has softwareversion
Oracle High Availability Services version on the local node is [12.1.0.2.0]
[grid@salman1 ~]$ srvctl config asm
ASM home: <CRS home>
PRCA-1057 : Failed to retrieve the password file location used by ASM asm
PRCR-1097 : Resource attribute not found: PWFILE

When using the "srvctl config asm" command, you will see the error message highlighted above in red. This is because from 12.1 onward, the ASM instance resource (you can see the ora.asm resource in the output of "crsctl stat res -t") has an attribute named PWFILE, which is currently not configured because the ora.asm resource was upgraded from 11g, and 11g did not have this attribute. You can find the solution to this problem here.
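One possible fix for the missing PWFILE attribute, sketched below under the assumption that the ASM password file should live in a disk group named +DATA (adjust the disk group name for your environment), is to create a password file inside ASM and register it with the ora.asm resource:

```shell
# Create an ASM password file inside a disk group
# (the disk group name +DATA is an example for this environment)
orapwd file='+DATA/orapwASM' asm=y

# Register the new password file location with the ora.asm resource
srvctl modify asm -pwfile '+DATA/orapwASM'

# Verify that "srvctl config asm" no longer raises PRCA-1057
srvctl config asm
```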

Now your GI home and all services are upgraded and running from the new 12c home. You may remove the 11g GI home using the deinstall utility. You can also now start up your 11g database, but remember that the srvctl command from the 11g database home should be used to manage this 11g database.

$/u01/oracle/product/11.2.4/db_1/bin/srvctl start database -d db11g

RMAN backup

run {
allocate channel D1 type disk;
change archivelog all validate;
backup as compressed backupset database include current controlfile
format '/u02/oracle/backups/db1_%t_%s_%p';
sql 'alter system archive log current';
backup archivelog all delete input
format '/u02/oracle/backups/db1_arch_%t_%s_%p';
release channel D1;
}
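The run block above can be saved to a command file and executed non-interactively; a minimal sketch, assuming the backup destination /u02/oracle/backups exists and the environment points at the database home (the file names below are illustrative):

```shell
# Save the run { ... } block to a command file, e.g. /home/oracle/full_backup.rcv,
# then execute it against the target database, capturing a log
rman target / cmdfile=/home/oracle/full_backup.rcv log=/home/oracle/full_backup.log
```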

How to find the latest Oracle Database patchset

$ORACLE_HOME/OPatch/opatch lsinventory

$ORACLE_HOME/OPatch/opatch lsinventory|grep "Patch description"
Patch description:  "Database Patch Set Update : 11.2.0.3.7 (16619892)"

$ORACLE_HOME/OPatch/opatch lsinventory -details
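Newer OPatch releases also provide a more compact listing of applied patches; if your OPatch version supports it:

```shell
# One line per applied patch: patch number followed by its description
$ORACLE_HOME/OPatch/opatch lspatches
```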


Which Patch has been applied?


SET linesize 200 pagesize 200
col action_time FOR a28
col version FOR a10
col comments FOR a35
col action FOR a25
col namespace FOR a12
SELECT * FROM registry$history;
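Note that registry$history is populated by catbundle.sql, so it covers PSUs applied through 11g; from 12c onward, patches applied with datapatch are recorded in DBA_REGISTRY_SQLPATCH instead, so on 12c it is worth checking both. A quick sketch of querying the 12c view from the shell:

```shell
# Query the 12c patch registry (populated by datapatch) as SYSDBA
sqlplus -s / as sysdba <<'EOF'
SET linesize 200 pagesize 200
col action FOR a15
col description FOR a60
SELECT patch_id, action, action_time, description
FROM   dba_registry_sqlpatch
ORDER  BY action_time;
EOF
```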



To unlock the Grid home, perform the following steps:


cd /u01/app/11.2.0/grid/crs/install 
perl rootcrs.pl -unlock -crshome /u01/app/11.2.0/grid

cd /u01/app/11.2.0/grid/crs/install 
perl rootcrs.pl -patch
$ CRS_home/bin/srvctl stop database -d sales
CRS_home/crs/bin/crsctl stop cluster -all
$ Grid_home/bin/crsctl status resource -t
$ cd Oracle_home/OPatch/4519934/4519934
$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/dbhome_1
opatch apply

Grid_home/bin/crsctl start cluster -all
$ Grid_home/bin/crsctl status resource -t
$ Oracle_home/bin/srvctl start instance -d sales -i "sales1"
# Grid_home/bin/crsctl start resource myResource -n docrac2
$ sqlplus /nolog 
SQL> connect sys/password@sales1 AS SYSDBA
SQL> @Oracle_home/rdbms/admin/catbundle.sql cpu apply
SQL> exit
To apply a patch using the rolling patch method:
$ cd Oracle_home/OPatch/12419331/12419331
$ opatch query -is_rolling_patch [unzipped patch location]
$ Oracle_home/bin/emctl stop dbconsole
$ Oracle_home/bin/srvctl stop instance -d sales -i "sales1" -f
$ Grid_home/crs/bin/crsctl stop cluster -n docrac1
$ Grid_home/bin/crsctl status resource -t
$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/dbhome_1
$ opatch apply -local
$ opatch apply -remote_nodes docrac2
$ opatch apply [-local_node docrac1] -remote_nodes docrac2,docrac3
$ Oracle_home/bin/srvctl start instance -d sales -i "sales1"
$ Oracle_home/bin/emctl start dbconsole
# Grid_home/bin/crsctl start cluster -n docrac1
$ Grid_home/bin/crsctl status resource -t
$ Oracle_home/bin/srvctl start instance -d sales -i "sales1"
# Grid_home/bin/crsctl start resource myResource -n docrac
$ sqlplus /nolog 
SQL> connect sys/password@sales1 AS SYSDBA
SQL> @Oracle_home/rdbms/admin/catbundle.sql cpu apply
SQL> exit

To apply a patch to your cluster database using the minimum downtime method:
  1. Change to the directory where the unzipped patch is staged on disk, for example:
    $ cd Oracle_home/OPatch/12419331/12419331
    
  2. Stop all user applications that use the Oracle RAC home directory for the group of nodes being patched. For example, to stop Enterprise Manager Database Control on the local node, use the following command, where Oracle_home is the home directory for your Oracle RAC installation:
    $ Oracle_home/bin/emctl stop dbconsole
    
  3. Shut down all Oracle RAC instances on the local node. To shut down an instance for an Oracle RAC database, enter a command similar to the following example, where Oracle_home is the home directory for your Oracle RAC database installation, sales is the name of the database, and sales1 is the name of the instance:
    $ Oracle_home/bin/srvctl stop instance -d sales -i "sales1" -f
    
  4. Make sure the ORACLE_HOME environment variable points to the software directory you want to patch, for example:
    $ echo $ORACLE_HOME
    /u01/app/oracle/product/11.2.0/dbhome_1
    
  5. Use the following command from within the patch directory:
    $ opatch apply -minimize_downtime
    
    If you run the OPatch command from the directory where the patch is staged on disk, then you do not need to specify the patch ID.
    OPatch asks if you are ready to patch the local node. After you confirm that the Oracle RAC instances on the local node have been shut down, OPatch applies the patch to the Oracle home directory on the local node. You are then asked to select the next nodes to be patched.
  6. After you shut down the Oracle RAC instances on the other nodes in the cluster, you can restart the Oracle RAC instance on the local node. Then, instruct OPatch that you are ready to patch the remaining nodes.
  7. After all the nodes have been patched, restart the Oracle RAC instances on the other nodes in the cluster. The following command shows how to start the orcl2 instance for the Oracle RAC database named orcl:
    $ Oracle_home/bin/srvctl start instance -d orcl -i "orcl2"
     
    
  8. Verify that all the Oracle Clusterware resources were restarted on all the nodes in the cluster.
    $ crsctl check cluster
    
    If any of the cluster resources did not restart, then use either the CRSCTL or SRVCTL utility to restart them. For example, you can use commands similar the following to restart various cluster resources, where Grid_home is the home directory of your Oracle Grid Infrastructure for a cluster installation and Oracle_home is the home directory of your Oracle RAC database:
    $ Oracle_home/bin/srvctl start instance -d sales -i "sales1"
    # Grid_home/bin/crsctl start resource myResource -n docrac1
    
  9. Run any post-patch scripts that are mentioned in the patch instructions, for example:
    $ sqlplus /nolog 
    SQL> connect sys/password@sales1 AS SYSDBA
    SQL> @Oracle_home/rdbms/admin/catbundle.sql cpu apply
    SQL> exit
