Monday 29 January 2018

Upgrade from RAC 11.2.0.1 to 11.2.0.3 (Part II OS preparation, Grid Infrastructure and Database software installation)

In Part I I've installed the Linux OS and prepared the shared filesystem (ISCSI configuration & OCFS2)
In this part I'll prepare the Linux OS for the RAC installation, install Grid Infrastructure 11.2.0.3 and install Database software.

Package Requirements: (install the same package versions or later)
==================
OEL 5 Required packages for 11.2.0.2 and later versions:
--------------------------------------------------------------------
rpm -qa | grep binutils-2.17.50.0.6
rpm -qa | grep compat-libstdc++-33-3.2.3
rpm -qa | grep elfutils-libelf-0.1
rpm -qa | grep elfutils-libelf-devel-0.1
rpm -qa | grep gcc-4.1.2
rpm -qa | grep gcc-c++-4.1.2
rpm -qa | grep glibc-2.5
rpm -qa | grep glibc-common-2.5
rpm -qa | grep glibc-devel-2.5
rpm -qa | grep glibc-headers-2.5
rpm -qa | grep ksh-2
rpm -qa | grep libaio-0.3.106
rpm -qa | grep libaio-devel-0.3.106
rpm -qa | grep libgcc-4.1.2
rpm -qa | grep libstdc++-4.1.2
rpm -qa | grep libstdc++-devel-4.1.2
rpm -qa | grep make-3.81
rpm -qa | grep sysstat-7.0.2
rpm -qa | grep unixODBC-2.2.11       #=> (32-bit) or later
rpm -qa | grep unixODBC-devel-2.2.11 #=> (64-bit) or later
rpm -qa | grep unixODBC-2.2.11       #=> (64-bit) or later
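
To spot any missing packages from the list above in one pass, here is a small sketch (run as root; it checks package base names only, so versions still need to be compared against the list):

for pkg in binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
           gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh \
           libaio libaio-devel libgcc libstdc++ libstdc++-devel make sysstat \
           unixODBC unixODBC-devel; do
  rpm -q $pkg > /dev/null 2>&1 || echo "MISSING: $pkg"
done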

In case you have missing packages try installing them from the Linux installation DVD:
e.g.
cd /media/OL5.9\ x86_64\ dvd\ 20130429/Server/
rpm -ivh numactl-devel-0.9.8-12.0.1.el5_6.i386.rpm

The easiest way to download & install the 11gR2 required packages & OS settings is to install the oracle-rdbms-server-11gR2-preinstall package:
http://www.oracle.com/technetwork/articles/servers-storage-admin/ginnydbinstallonlinux-488779.html
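
If the server is registered with ULN or the public yum repository is configured, a minimal sketch of the one-command approach (assuming the preinstall package is available in the repository for your OL release; on OEL 5 the equivalent package is oracle-validated):

# yum install oracle-rdbms-server-11gR2-preinstall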

Make sure to install Oracle on a NON Tainted Kernel:
-------------------------------------------------------------
What does a tainted kernel mean:
 -A special module has changed the kernel.
 -That module has been force loaded by insmod -f.
 -A successful Oracle installation, and Oracle support for both the database and Linux, will depend on the module that tainted the kernel.
Oracle Support may not support your system (Linux, database) if a main module in the kernel has been tainted.
How to check if the kernel is tainted or not:
# cat /proc/sys/kernel/tainted
1
If the output is 1, the kernel is tainted; contact Oracle Support and ask whether it is safe to proceed with the Oracle installation or not.
If the output is 0, the kernel is not tainted and you are good to go ahead and install the Oracle software.

Network Requirements:
=================
> Each node must have at least two NICs.
> Recommended to use NIC bonding for public NIC, use HAIP for private NIC (11.2.0.2 onwards).
> Recommended to use redundant switches along with NIC bonding.
> Public & private interface names must be identical on all nodes (e.g. eth0 is the public NIC on all nodes).
> Crossover cables between private RAC NICs are NOT supported (a gigabit switch is the minimum requirement). Crossover cables limit the expansion of RAC to two nodes, cause bad performance due to excessive packet collisions, and cause unstable negotiation between the NICs.
> Public NICs and VIPs / SCAN VIPs must be on the same subnet. Private NICs must be on a different subnet.
> For private interconnect use non-routable addresses:
   [From 10.0.0.0    to  10.255.255.255 or
    From 172.16.0.0  to  172.31.255.255 or
    From 192.168.0.0 to  192.168.255.255]
> Default GATEWAY must be on the same Public | VIPs | SCAN VIPs subnet.
> If you will use SCAN VIPs, the SCAN name is recommended to resolve via DNS to a minimum of 3 IP addresses.
> /etc/hosts or DNS must include PUBLIC & VIP IPs with the host names.
> SCAN IPs should not be in /etc/hosts; those not willing to use SCAN can still resolve the SCAN name there, just to let the Grid Infrastructure installation succeed.
> NIC names must NOT include DOT "."
> Every node in the cluster must be able to connect to every private NIC in each node.
> Host names for nodes must NOT have underscores (_).
> Linux Firewall (iptables) must be disabled, at least on the private network. If you plan to enable the firewall, I recommend keeping it disabled until you finish installing all Oracle products so you can easily troubleshoot installation problems; once you finish, you can enable the firewall, then feel free to blame the firewall if something doesn't work :-).
> Oracle recommends disabling network zeroconf (as it can cause node evictions):
  # route -n  => If you find a line with 169.254.0.0, zeroconf is enabled on your OS (the default); the next step is to disable it by doing the following:
  # vi /etc/sysconfig/network
  #Add this line:
  NOZEROCONF=yes
Restart the network:
  # service network restart
> Recommended to use JUMBO frames for the interconnect: [Note: 341788.1]
  Warning: although it's available in most network devices, it's not supported by some NICs (especially Intel NICs) and switches. JUMBO frames should also be enabled on the interconnect switch device (doing a test is mandatory).
  # suppose that eth3 is your interconnect NIC:
  # vi /etc/sysconfig/network-scripts/ifcfg-eth3
  #Add the following parameter:
  MTU=9000
  # ifdown eth3; ifup eth3
  # ifconfig -a eth3  => you will see the value of MTU=9000 (The default MTU is 1500)
  Testing JUMBO frames using the traceroute command (during the test we shouldn't see anything like "Message too long" in the output):
  =>From Node1:
  # traceroute -F node2-priv 8970
    traceroute to n2-priv (192.168.110.2), 30 hops max, 9000 byte packets
    1  node2-priv (192.168.110.2)  0.269 ms  0.238 ms  0.226 ms
  =>This test was OK
  =>In case you got this message "Message too long" try to reduce the MTU untill this message stop appear.
  Testing JUMBO frames using ping: (With MTU=9000 test with 8970 bytes not more)
  =>From Node1:
  # ping -c 2 -M do -s 8970 node2-priv
    1480 bytes from node2-priv (192.168.110.2): icmp_seq=0 ttl=64 time=0.245 ms
  =>This test was OK.
  =>In case you got this message "Frag needed and DF set (mtu = 9000)" reduce the MTU till you get the previous output.
> Stop avahi-daemon: recommended by Oracle, as it causes node evictions and prevents the node from re-joining the cluster [Note: 1501093.1]
  # service avahi-daemon stop
  # chkconfig avahi-daemon off

Create new Grid & Oracle home:
========================
mkdir -p /u01/grid/11.2.0.3/grid
mkdir -p /u01/oracle/11.2.0.3/db
chown -R oracle:dba /u01
chown oracle:oinstall /u01
chmod 700 /u01
chmod 750 /u01/oracle/11.2.0.3/db

Note: Oracle user, DBA and OINSTALL groups are created during Oracle Enterprise Linux installation.
Note: I'll install Grid & Oracle with oracle user, I'll not create a new user to be the grid installation owner.

Adding environment variables to Oracle profile:
-----------------------------------------------------
I use a lot of command aliases inside the oracle user profile to speed up my administration work; I think they may be helpful for you too. Some aliases refer to helpful shell scripts, like checking the locking sessions on the DB and more, which I'll share with you in future posts.

# su - oracle
# vi .bash_profile 


# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
 . ~/.bashrc
fi
if [ -t 0 ]; then
   stty intr ^C
fi

umask 022
# User specific environment and startup programs
unset USERNAME
ORACLE_SID=pefms1
export ORACLE_SID
ORACLE_BASE=/u01/oracle
export ORACLE_BASE
ORACLE_HOME=/u01/oracle/11.2.0.3/db; export ORACLE_HOME
GRID_HOME=/u01/grid/11.2.0.3/grid
export GRID_HOME
LD_LIBRARY_PATH=$ORACLE_HOME/lib; export LD_LIBRARY_PATH
TNS_ADMIN=$ORACLE_HOME/network/admin
export TNS_ADMIN
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
CLASSPATH=$CLASSPATH:$ORACLE_HOME/network/jlib; export CLASSPATH
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin:$GRID_HOME/bin:/usr/ccs/bin:/usr/bin/X11/:/usr/local/bin:$ORACLE_HOME/OPatch
export PATH
export ORACLE_UNQNAME=pefms
export ORACLE_HOSTNAME=ora1123-node1
alias profile='cd;. ./.bash_profile;cd -'
alias viprofile='cd; vi .bash_profile'
alias catprofile='cd; cat .bash_profile'
alias tnsping='$ORACLE_HOME/bin/./tnsping'
alias pefms='export ORACLE_SID=pefms; echo $ORACLE_SID'
alias sql="sqlplus '/ as sysdba'"
alias alert="tail -100f $ORACLE_HOME/diagnostics/pefms/diag/rdbms/pefms/pefms1/trace/alert_pefms1.log"
alias vialert="vi $ORACLE_HOME/diagnostics/pefms/diag/rdbms/pefms/pefms1/trace/alert_pefms1.log"
alias lis="vi $ORACLE_HOME/network/admin/listener.ora"
alias tns="vi $ORACLE_HOME/network/admin/tnsnames.ora"
alias sqlnet="vi $ORACLE_HOME/network/admin/sqlnet.ora"
alias sqlnetlog='vi $ORACLE_HOME/log/diag/clients/user_oracle/host_2245657081_76/trace/sqlnet.log'
alias network=" cd $ORACLE_HOME/network/admin;ls -rtlh;pwd"
alias arc="cd /ora_archive1/pefms/; ls -rtlh|tail -50;pwd"
alias p="ps -ef|grep pmon|grep -v grep"
alias oh="cd $ORACLE_HOME;ls;pwd"
alias dbs="cd $ORACLE_HOME/dbs;ls -rtlh;pwd"
alias pfile="vi $ORACLE_HOME/dbs/initpefms1.ora"
alias catpfile="cat $ORACLE_HOME/dbs/initpefms1.ora"
alias spfile="cd /fiber_ocfs_pefms_data_1/oracle/pefms; cat spfilepefms1.ora"
alias bdump='cd $ORACLE_HOME/diagnostics/pefms/diag/rdbms/pefms/pefms1/trace;ls -lrt|tail -10;pwd'
alias udump='cd $ORACLE_HOME/diagnostics/pefms/diag/rdbms/pefms/pefms1/trace;ls -lrt;pwd';
alias cdump='cd $ORACLE_HOME/diagnostics/pefms/diag/rdbms/pefms/pefms1/cdump;ls -lrt;pwd'
alias rman='cd $ORACLE_HOME/bin; ./rman target /'
alias listenerlog='tail -100f $ORACLE_BASE/diag/tnslsnr/ora1123-node1/listener/trace/listener.log'
alias vilistenerlog='vi $ORACLE_BASE/diag/tnslsnr/ora1123-node1/listener/trace/listener.log'
alias listenerpefms1log='tail -100f $ORACLE_HOME/log/diag/tnslsnr/ora1123-node1/listener_pefms1/trace/listener_pefms1.log '
alias listenerpefms2log='tail -100f $ORACLE_HOME/log/diag/tnslsnr/ora1123-node2/listener_pefms2/trace/listener_pefms2.log'
alias listenertail='tail -100f $ORACLE_BASE/diag/tnslsnr/ora1123-node1/listener/trace/listener.log'
alias cron='crontab -e'
alias crol='crontab -l'
alias df='df -h'
alias ll='ls -rtlh'
alias lla='ls -rtlha'
alias l='ls'
alias patrol='sh /home/oracle/patrol.sh'
alias datafiles='sh /home/oracle/db_size.sh'
alias locks='sh /home/oracle/locks.sh'
alias objects='sh /home/oracle/object_size.sh'
alias jobs='sh /home/oracle/jobs.sh'
alias crs='$GRID_HOME/bin/crsstat'
alias crss='crs|grep -v asm|grep -v acfs|grep -v gsd|grep -v oc4j|grep -v ora.cvu'
alias raclog='tail -100f $GRID_HOME/log/ora1123-node1/alertora1123-node1.log'
alias viraclog='vi $GRID_HOME/log/ora1123-node1/alertora1123-node1.log'
alias datafile='sh /home/oracle/db_size.sh'
alias invalid='sh /home/oracle/Invalid_objects.sh'
alias d='date'
alias dc='d;ssh n2 date'
alias aud='cd $ORACLE_HOME/rdbms/audit;ls -rtl|tail -200'
alias lastdb='/home/oracle/lastdb.sh'
alias sessions='/home/oracle/sessions.sh'
alias spid='sh /home/oracle/spid.sh'
alias spidd='sh /home/oracle/spid_full_details.sh'
alias session='/home/oracle/session.sh'
alias killsession='/home/oracle/kill_session.sh'
alias unlock='/home/oracle/unlock_user.sh'
alias sqlid='/home/oracle/sqlid.sh'
alias parm='/home/oracle/parm.sh'
alias grid='cd /u01/grid/11.2.0.3/grid; ls; pwd'
alias lsn='ps -ef|grep lsn|grep -v grep'

When adding the variables to the Oracle profile on the other node, change the node name from ora1123-node1 to ora1123-node2.

Configure SYSTEM parameters:
========================

All of the following parameters should be set to the same values or greater on the OS:
----------------------------------------------------
# /sbin/sysctl -a | grep sem           #=> semaphore parameters (250 32000 100 142).
# /sbin/sysctl -a | grep shm           #=> shmmax, shmall, shmmni (536870912, 2097152, 4096).
# /sbin/sysctl -a | grep file-max     #=> (6815744).
# /sbin/sysctl -a | grep ip_local_port_range  #=> Minimum: 9000, Maximum: 65500
# /sbin/sysctl -a | grep rmem_default  #=> (262144).
# /sbin/sysctl -a | grep rmem_max      #=> (4194304).
# /sbin/sysctl -a | grep wmem_default #=> (262144).
# /sbin/sysctl -a | grep wmem_max     #=> (1048576).
# /sbin/sysctl -a | grep aio-max-nr    #=> (Minimum: 1048576) limits concurrent requests to avoid I/O Failures.

Note:
If the current value of any parameter is higher than the value listed above, then do not change the value of that parameter.
If you will change any parameter on /etc/sysctl.conf then issue the command: sysctl -p
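
For reference, a minimal /etc/sysctl.conf sketch using the minimum values listed above (keep any existing value that is already higher):

# vi /etc/sysctl.conf
kernel.sem = 250 32000 100 142
kernel.shmmax = 536870912
kernel.shmall = 2097152
kernel.shmmni = 4096
fs.file-max = 6815744
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
# sysctl -p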

Check limit.conf values:
vi /etc/security/limits.conf 

oracle   soft   nofile    131072
oracle   hard   nofile    131072
oracle   soft   nproc    131072
oracle   hard   nproc    131072
oracle   soft   core    unlimited
oracle   hard   core    unlimited
oracle   soft   memlock    50000000
oracle   hard   memlock    50000000
# Adjust MAX stack size for 11.2.0.3 => Original was 8192:
oracle   soft   stack    10240

After updating limits.conf file, oracle user should logoff & logon to let the new adjustments take effect.
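
A quick sketch to confirm the new limits took effect for the oracle user (the flags map to open files, processes, stack and locked memory respectively):

# su - oracle
$ ulimit -n   #=> 131072 (open files)
$ ulimit -u   #=> 131072 (max user processes)
$ ulimit -s   #=> 10240 (stack size in KB)
$ ulimit -l   #=> 50000000 (max locked memory in KB)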

Ensure mounting /usr in READ-WRITE mode:
------------------------------------------------
# mount -o remount,rw /usr

>For security reasons sys admins prefer to mount /usr in READ ONLY mode; during Oracle installation, /usr must be in RW mode.

Restart the internet services daemon (xinetd):
----------------------------------------------
# service xinetd restart

Edit the /etc/securetty file and append it with the relevant service name:
------------------------------------------------------------------------
ftp
rlogin
rsh
rexec
telnet

Create ".rhosts" file:
This file provides user equivalence between the servers; it should be created under the Oracle user's home:
su - oracle
cd
vi .rhosts
# Add the following lines
ora1123-node1 oracle
ora1123-node2 oracle
ora1123-node1-priv oracle
ora1123-node2-priv oracle
ora1123-node1-vip oracle
ora1123-node2-vip oracle

Create hosts.equiv file:
vi /etc/hosts.equiv
#add these lines:
ora1123-node1 oracle
ora1123-node2 oracle
ora1123-node1-priv oracle
ora1123-node2-priv oracle
ora1123-node1-vip  oracle
ora1123-node2-vip  oracle

chmod 600 /etc/hosts.equiv
chown root.root /etc/hosts.equiv

Configure Host equivalence between Nodes:
-----------------------------------------------
on Both Nodes:
----------------
mkdir -p /home/oracle/.ssh
cd /home/oracle/.ssh
ssh-keygen -t rsa
ssh-keygen -t dsa

cat id_rsa.pub > authorized_keys
cat id_dsa.pub >> authorized_keys

On Node1:
cd /home/oracle/.ssh
scp authorized_keys oracle@ora1123-node2:/home/oracle/.ssh/authorized_keys_nod1

on Node2:
cd /home/oracle/.ssh
mv authorized_keys_nod1 authorized_keys

cat id_rsa.pub >> authorized_keys
cat id_dsa.pub >> authorized_keys

Copy the authorized_keys file to Node1:
scp authorized_keys oracle@ora1123-node1:/home/oracle/.ssh/


From Node1: Answer each question with "yes"
ssh ora1123-node1 date
ssh ora1123-node2 date
ssh n1 date
ssh n2 date
ssh ora1123-node1-priv date
ssh ora1123-node2-priv date

From Node2: Answer each question with "yes"
ssh ora1123-node1 date
ssh ora1123-node2 date
ssh n1 date
ssh n2 date
ssh ora1123-node1-priv date
ssh ora1123-node2-priv date

Enable rsh on both Nodes:
------------------------------
First verify that rsh & rsh-server packages are installed

rpm -qa|grep rsh

rsh-server-0.17-40.el5
rsh-0.17-40.el5

If the packages are not installed install them:
you can find rsh package in CD1 under "Server" directory
you can find rsh-server package in CD3 under "Server" directory

Add rsh to PAM:
------------------
vi /etc/pam.d/rsh:
#Add the following line
auth sufficient pam_rhosts_auth.so no_hosts_equiv


Enable rsh in xinetd:
--------------------
vi /etc/xinetd.d/rsh
#Modify this line:
disable=no

-Test rsh connectivity between the cluster nodes:
From Node1: rsh n2 date
From Node2: rsh n1 date

Enable rlogin:
---------------
vi /etc/xinetd.d/rlogin
#add this line:
disable=no

Configure Hangcheck-timer:
------------------------------
If a hang occurs on a node, the module will reboot it to avoid database corruption.

*To Load the hangcheck-timer module for 2.6 kernel:

# insmod /lib/modules/`uname -r`/kernel/drivers/char/hangcheck-timer.ko  hangcheck_tick=1 hangcheck_margin=10 hangcheck_reboot=1

->hangcheck_tick: Defines how often, in seconds, the hangcheck-timer checks the node for hangs. The default is 60; Oracle recommends 1 second.
->hangcheck_margin: Defines how long, in seconds, the timer waits for a response from the kernel. The default is 180; Oracle recommends 10.
->hangcheck_reboot: 1 = reboot when a hang occurs, 0 = do not reboot when a hang occurs.

*To confirm that the hangcheck module is loaded, enter the following command:
# lsmod | grep hang
# output will be like below
hangcheck_timer         2428  0

*Add the service in the startup by editing this file:

vi /etc/rc.d/rc.local
#add this line
insmod /lib/modules/`uname -r`/kernel/drivers/char/hangcheck-timer.ko  hangcheck_tick=1 hangcheck_margin=10 hangcheck_reboot=1

You have to put the real value in place of `uname -r` which is your kernel version.
e.g.
insmod /lib/modules/2.6.32-300.32.2/kernel/drivers/char/hangcheck-timer.ko  hangcheck_tick=1 hangcheck_margin=10 hangcheck_reboot=1

Prepare for using Cluster Time Synchronization Service - (CTSS)
----------------------------------------------------------
Oracle Grid Infrastructure 11gR2 provides a new service called Cluster Time Synchronization Service (CTSS) that can synchronize the time between cluster nodes automatically without any manual intervention. If you want CTSS to handle this job for you, de-configure and de-install the Network Time Protocol (NTP); during the installation, when Oracle finds that NTP is not active, it will automatically activate CTSS to handle the time synchronization between the RAC nodes, and no more steps are required from you during the GI installation.

Disable NTP service:
# service ntpd stop
# chkconfig ntpd off
# mv /etc/ntp.conf /etc/ntp.conf.original
# rm /var/run/ntpd.pid

Disable SELINUX:
--------------
Note: Starting with 11gR2, SELinux is supported, but I'll continue disabling it. Disabling SELinux is easier than configuring it :-) it's a nightmare :-)

vi /etc/selinux/config

SELINUX=disabled
SELINUXTYPE=targeted

#################
Extra Configurations:
#################

Configure HugePages: [361468.1]
================
What is HugePages:
--------------------
HugePages is a feature that allows using larger pages to manage memory, as an alternative to the small 4KB page size.
HugePages is crucial for faster Oracle database performance on Linux if you have a large RAM and an SGA > 8G.
HugePages are not only for 32-bit systems; they also improve memory performance on 64-bit kernels.

HugePages Pros:
------------------
-Doesn't allow memory to be swapped.
-Less Overhead for Memory Operations.
-Less Memory Usage.

Huge Pages Cons:
--------------------
-You must set  MEMORY_TARGET and MEMORY_MAX_TARGET = 0 as Automatic Memory Management (AMM) feature is incompatible with HugePages:
ORA-00845: MEMORY_TARGET not supported on this system

Implementation:

1-Make sure that MEMORY_TARGET and MEMORY_MAX_TARGET = 0 on All instances.
2-Make sure that all instances on the server are up.
3- Set these parameters equal or greater than SGA size: (values are in KB)

# vi /etc/security/limits.conf
oracle   soft   memlock    20971520
oracle   hard   memlock    20971520

Here I'll set the SGA to 18G, so I set memlock to 20G (in KB) in the limits.conf file.

Re-login to oracle user and check the value:
# ulimit -l

4- Create this script:

# vi /root/hugepages_settings.sh

#!/bin/bash
#
# hugepages_settings.sh
#
# Linux bash script to compute values for the
# recommended HugePages/HugeTLB configuration
#
# Note: This script does calculation for all shared memory
# segments available when the script is run, no matter it
# is an Oracle RDBMS shared memory segment or not.
#
# This script is provided by Doc ID 401749.1 from My Oracle Support
# http://support.oracle.com
# Welcome text
echo "
This script is provided by Doc ID 401749.1 from My Oracle Support
(http://support.oracle.com) where it is intended to compute values for
the recommended HugePages/HugeTLB configuration for the current shared
memory segments. Before proceeding with the execution please note following:
 * For ASM instance, it needs to configure ASMM instead of AMM.
 * The 'pga_aggregate_target' is outside the SGA and
   you should accommodate this while calculating SGA size.
 * In case you changes the DB SGA size,
   as the new SGA will not fit in the previous HugePages configuration,
   it had better disable the whole HugePages,
   start the DB with new SGA size and run the script again.
And make sure that:
 * Oracle Database instance(s) are up and running
 * Oracle Database 11g Automatic Memory Management (AMM) is not setup
   (See Doc ID 749851.1)
 * The shared memory segments can be listed by command:
     # ipcs -m
Press Enter to proceed..."
read
# Check for the kernel version
KERN=`uname -r | awk -F. '{ printf("%d.%d\n",$1,$2); }'`
# Find out the HugePage size
HPG_SZ=`grep Hugepagesize /proc/meminfo | awk '{print $2}'`
if [ -z "$HPG_SZ" ];then
    echo "The hugepages may not be supported in the system where the script is being executed."
    exit 1
fi
# Initialize the counter
NUM_PG=0
# Cumulative number of pages required to handle the running shared memory segments
for SEG_BYTES in `ipcs -m | cut -c44-300 | awk '{print $1}' | grep "[0-9][0-9]*"`
do
    MIN_PG=`echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q`
    if [ $MIN_PG -gt 0 ]; then
        NUM_PG=`echo "$NUM_PG+$MIN_PG+1" | bc -q`
    fi
done
RES_BYTES=`echo "$NUM_PG * $HPG_SZ * 1024" | bc -q`
# An SGA less than 100MB does not make sense
# Bail out if that is the case
if [ $RES_BYTES -lt 100000000 ]; then
    echo "***********"
    echo "** ERROR **"
    echo "***********"
    echo "Sorry! There are not enough total of shared memory segments allocated for
HugePages configuration. HugePages can only be used for shared memory segments
that you can list by command:
    # ipcs -m
of a size that can match an Oracle Database SGA. Please make sure that:
 * Oracle Database instance is up and running
 * Oracle Database 11g Automatic Memory Management (AMM) is not configured"
    exit 1
fi
# Finish with results
case $KERN in
    '2.4') HUGETLB_POOL=`echo "$NUM_PG*$HPG_SZ/1024" | bc -q`;
           echo "Recommended setting: vm.hugetlb_pool = $HUGETLB_POOL" ;;
    '2.6') echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;;
     *) echo "Unrecognized kernel version $KERN. Exiting." ;;
esac

5-Run script hugepages_settings.sh to help you get the right value for vm.nr_hugepages parameter:
# chmod 700 /root/hugepages_settings.sh
# sh /root/hugepages_settings.sh
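
The script prints its recommendation in the following format; the number depends on the shared memory segments of the instances that are up when the script runs (9220 is just the value from my setup, used again in step 6):

Recommended setting: vm.nr_hugepages = 9220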


6-Edit the file /etc/sysctl.conf and set the vm.nr_hugepages parameter as per the script output value:
# cat /etc/sysctl.conf|grep vm.nr_hugepages
# vi /etc/sysctl.conf
vm.nr_hugepages = 9220

7-Reboot the server.

8-Check and Validate the Configuration:
# grep HugePages /proc/meminfo
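
The output lists the HugePages counters; HugePages_Total should match the vm.nr_hugepages value you set (the numbers below are placeholders, not from my system):

HugePages_Total:    9220
HugePages_Free:      200
HugePages_Rsvd:      180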

Note: Any of the following changes should be followed by re-running the hugepages_settings.sh script and updating the value of the vm.nr_hugepages parameter accordingly:
      -Amount of RAM installed for the Linux OS changed.
      -New database instance(s) introduced.
      -SGA size / configuration changed for one or more database instances.


Increase vm.min_free_kbytes system parameter: [Doc ID 811306.1]
================================
In case you enabled regular HugePages on your system (the thing we did above), it's recommended to increase the system parameter vm.min_free_kbytes from 51200 to 524288. This will cause the system to start reclaiming memory at an earlier time than it would have before, therefore it can help to decrease LowMem pressure, hangs and node evictions.

# sysctl -a |grep min_free_kbytes
vm.min_free_kbytes = 51200

# vi /etc/sysctl.conf
vm.min_free_kbytes = 524288

# sysctl -p

# sysctl -a |grep min_free_kbytes
vm.min_free_kbytes = 524288


Disable Transparent HugePages: [Doc ID 1557478.1]
=====================
Transparent HugePages are different from regular HugePages (the ones we configured above); Transparent HugePages are set up dynamically at run time.
Transparent HugePages are known to cause unexpected node reboots and performance problems with RAC and single-node databases, so Oracle strongly recommends disabling them.
Note: For the UEK2 kernel, starting with 2.6.39-400.116.0, Transparent HugePages have been removed from the kernel.

Check if Transparent HugePages Enabled:
--------------------------------------
# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] never

Disable Transparent HugePages:
-----------------------------
Add "transparent_hugepage=never" to boot kernel:

# vi /boot/grub/grub.conf
kernel /vmlinuz-2.6.39-300.26.1.el5uek ro root=LABEL=/ transparent_hugepage=never
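
To disable Transparent HugePages at runtime without waiting for a reboot (a sketch; the grub.conf change above is still what makes the setting persistent):

# echo never > /sys/kernel/mm/transparent_hugepage/enabled
# echo never > /sys/kernel/mm/transparent_hugepage/defrag
# cat /sys/kernel/mm/transparent_hugepage/enabled   #=> "never" should now appear in brackets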


Configure VNC on Node1:
===================
VNC will help us log in to the Linux machine with a GUI session; from this GUI session we can run the Oracle installer to install Grid Infrastructure and the Database software, eliminating the need to go to the server room and do the installation on the server itself.

Make sure that VNCserver package is already installed:
# rpm -qa | grep vnc-server
vnc-server-4.1.2-14.el5_6.6

Modify the VNC config file:
# vi /etc/sysconfig/vncservers
Add these lines at the bottom:
VNCSERVERS="2:root"
VNCSERVERARGS[2]="-geometry 800x600 -nolisten tcp -nohttpd -localhost"

Set a password for VNC:
# vncpasswd
Password:
Verify:

Run a VNC session just to generate the default config files:
# vncserver :1

Configure VNC to start an Xsession when connecting:
# vi ~/.vnc/xstartup
#Uncomment (un-hash) these two lines:
 unset SESSION_MANAGER
 exec /etc/X11/xinit/xinitrc

Now start a VNC session on the machine:
# vncserver :1

Now you can log in from any machine (your Windows PC) using VNCviewer to access that remote server using port 5900 or 5901; make sure these ports are not blocked by the firewall.
VNC Viewer can be downloaded from this link:
http://www.realvnc.com/download/viewer/

Download Oracle 11.2.0.3 installation media:
================================

Note [ID 753736.1] has all Patch Set + PSU reference numbers.

11.2.0.3 (for Linux x86_64) is patch# 10404530; we only need the first 3 zip files (1-3).
 (1&2 for database, 3 for grid, 4 for client, 5 for gateways, 6 examples CD, 7 for deinstall).

I'll extract the first 3 zip files which have Grid and Database binaries under /u02/stage

###########################
Grid Infrastructure installation:
###########################

Setup Cluverify:
===========
Cluverify is a tool that checks the fulfillment of RAC and database installation prerequisites.

cd /u02/stage/grid/rpm
rpm -ivh cvuqdisk-1.0.9-1.rpm

Check the fulfillment of Grid Infrastructure setup prerequisites: (using Cluverify tool)
------------------------------------------------------------
cd /u02/stage/grid
./runcluvfy.sh stage -pre crsinst -n ora1123-node1,ora1123-node2  -verbose

Grid installation:
============
On Node1:
Start a VNC session on the server to be able to open a GUI session with the server and run Oracle Installer:
# vncserver :1

Login to the server from your PC using VNCviewer, then from the GUI session execute the following:
# xhost +
# su - oracle
# cd /u02/stage/grid
# chmod +x runInstaller
# ./runInstaller

During the installation:
================
Click "skip software updates":
 >Install and configure Grid Infrastructure for a cluster.

 >Advanced Installation

 >Grid Plug and Play:
   Cluster Name:  cluster
    SCAN Name: cluster-scan
    SCAN Port: 1523

 >Cluster Node Information:
   Add:
   ora1123-node2
   ora1123-node2-vip

 >Network Interface Usage:
   eth0 Public
   eth3 Private
   eth1 Do Not Use
   eth2 Do Not Use

Note: Starting with Oracle 11.2.0.2, you are no longer required to use the NIC bonding technique to configure interconnect redundancy. You can now define at most four interfaces for the redundant interconnect (private network) during the installation phase.

 >Storage Option: Shared File System

 >OCR Storage: Normal Redundancy
   /ora_ocr1/ocr1.dbf
   /ora_ocr2/ocr2.dbf
   /ora_ocr3/ocr3.dbf

Note: Oracle strongly recommends setting the number of voting disks to an odd number like 3 or 5 and so on, because the cluster must be able to access more than half of the voting disks at any time.


 >Voting Storage:Normal Redundancy
   /ora_voting1/voting1.dbf
   /ora_voting2/voting2.dbf
   /ora_voting3/voting3.dbf

 >Do not use IPMI

Oracle Base:   /u01/oracle/
RAC Installation path:  /u01/grid/11.2.0.3/grid
OraInventory path:      /u01/oraInventory

At the End of the installation run:
---------------------------------
Run orainstRoot.sh On Node1 then run it on Node2:
# /u01/oraInventory/orainstRoot.sh

Run root.sh on Node1; once it finishes, run it on Node2:
# /u01/grid/11.2.0.3/grid/root.sh

Just hit ENTER when you get this message:
Enter the full pathname of the local bin directory: [/usr/local/bin]:

Note: root.sh may take from 5 to 15 minutes to complete.

Once root.sh finish, go back to the Execute Configuration Scripts window and press "OK".

I've uploaded the screenshots to this link:
http://imgur.com/a/lccmR

 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
 In case you are doing an in-place upgrade of an older release and you are installing the GI on a different home, the following should be done within a downtime window:       
 At the End of installation by root user run:
 -------------------------------------------
 Note: In case of doing an in-place upgrade Oracle recommends that you leave Oracle RAC instances running from Old GRID_HOME.
 Execute this script:
 # /u01/grid/11.2.0.3/grid/rootupgrade.sh
   =>Node By Node (don't run it in parallel).
   =>rootupgrade will restart cluster resources on the node which is run on.
   =>Once you finish with rootupgrade.sh ,click OK on the OUI window to finish the installation.
 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

The outputs of executed commands:
---------------------------
#/u01/oraInventory/orainstRoot.sh

Changing permissions of /u01/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/oraInventory to oinstall.
The execution of the script is complete.

#/u01/grid/11.2.0.3/grid/root.sh
Node1 outputs:
-------------
[root@ora1123-node1 /u01]#/u01/grid/11.2.0.3/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/grid/11.2.0.3/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/grid/11.2.0.3/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'ora1123-node1'
CRS-2676: Start of 'ora.mdnsd' on 'ora1123-node1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'ora1123-node1'
CRS-2676: Start of 'ora.gpnpd' on 'ora1123-node1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'ora1123-node1'
CRS-2672: Attempting to start 'ora.gipcd' on 'ora1123-node1'
CRS-2676: Start of 'ora.cssdmonitor' on 'ora1123-node1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'ora1123-node1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'ora1123-node1'
CRS-2672: Attempting to start 'ora.diskmon' on 'ora1123-node1'
CRS-2676: Start of 'ora.diskmon' on 'ora1123-node1' succeeded
CRS-2676: Start of 'ora.cssd' on 'ora1123-node1' succeeded
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting disk: /ora_voting1/voting1.dbf.
Now formatting voting disk: /ora_voting2/voting2.dbf.
Now formatting voting disk: /ora_voting3/voting3.dbf.
CRS-4603: Successful addition of voting disk /ora_voting1/voting1.dbf.
CRS-4603: Successful addition of voting disk /ora_voting2/voting2.dbf.
CRS-4603: Successful addition of voting disk /ora_voting3/voting3.dbf.
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   205267b4e4334fc9bf21154f92cd30fa (/ora_voting1/voting1.dbf) []
 2. ONLINE   83217239b9c84fe9bfbd6c5e76a9dcc1 (/ora_voting2/voting2.dbf) []
 3. ONLINE   41a59373d30b4f6cbf6f41c50dc48dbd (/ora_voting3/voting3.dbf) []
Located 3 voting disk(s).
Configure Oracle Grid Infrastructure for a Cluster ... succeeded


Node2 outputs:
-------------
[root@ora1123-node2 /u01/grid/11.2.0.3/grid]#/u01/grid/11.2.0.3/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/grid/11.2.0.3/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/grid/11.2.0.3/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node ora1123-node1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

> On the last page the installer will show an error message about the Oracle Clusterware verification utility failing; just ignore it, our installation is indeed successful.

Test the installation:
================
-> Check the logs under: /u01/oraInventory/logs

By oracle:
# cluvfy stage -post crsinst -n ora1123-node1,ora1123-node2 -verbose
# crsctl check cluster -all
# crsctl check crs
Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy

# olsnodes -n
# ocrcheck
# crsctl query crs softwareversion
# crsctl query crs activeversion
# crs_stat -t -v

Confirm clusterware time synchronization service is running (CTSS):
--------------------------------------------------------------------
# crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0

Create a crsstat script to show a nicely formatted output of the crs_stat command:
-----------------------------------------------------------------------
cd /u01/grid/11.2.0.3/grid/bin
vi crsstat

#--------------------------- Begin Shell Script ----------------------------
#!/bin/bash
##
#Sample 10g CRS resource status query script
##
#Description:
# - Returns formatted version of crs_stat -t, in tabular
# format, with the complete rsc names and filtering keywords
# - The argument, $RSC_KEY, is optional and if passed to the script, will
# limit the output to HA resources whose names match $RSC_KEY.
# Requirements:
# - $ORA_CRS_HOME should be set in your environment
RSC_KEY=$1
QSTAT=-u
AWK=/usr/bin/awk # if not available use /usr/bin/awk
# Table header:
echo ""
$AWK \
'BEGIN {printf "%-75s %-10s %-18s\n", "HA Resource", "Target", "State";
printf "%-75s %-10s %-18s\n", "-----------", "------", "-----";}'
# Table body:
/u01/grid/11.2.0.3/grid/bin/crs_stat $QSTAT | $AWK \
'BEGIN { FS="="; state = 0; }
$1~/NAME/ && $2~/'$RSC_KEY'/ {appname = $2; state=1};
state == 0 {next;}
$1~/TARGET/ && state == 1 {apptarget = $2; state=2;}
$1~/STATE/ && state == 2 {appstate = $2; state=3;}
state == 3 {printf "%-75s %-10s %-18s\n", appname, apptarget, appstate; state=0;}'
#--------------------------- End Shell Script ------------------------------

chmod 700 /u01/grid/11.2.0.3/grid/bin/crsstat
scp crsstat root@node2:/u01/grid/11.2.0.3/grid/bin

Now you can use "crs" command that been included in the oracle profiles to execute crs_stat -t in a cute format.

Change OCR backup location:
=========================
# ocrconfig -showbackup
# ocrconfig -backuploc /u01/grid/11.2.0.3/grid/cdata/cluster11g


Modify RAC configurations:
#######################

=Configure CSS misscount:
 ====================
 The CSS misscount parameter represents the maximum time, in seconds, that a network heartbeat can be missed before kicking out the problematic node...

Check current configurations for css misscount :
# crsctl get css misscount

It's recommended to back up the OCR disks before running the following command.
Configure css misscount: -From one node only-
# crsctl set css misscount 60


#################################
Install Oracle Database Software 11.2.0.3: 
#################################

Note: It's recommended to backup oraInventory directory before starting this stage.
Note: Ensure that the clusterware services are running on both nodes.

Run cluvfy to check database installation prerequisites:
========
# cluvfy stage -pre dbinst -n ora1123-node1,ora1123-node2 -verbose
-->Ignore cluster scan errors.

Execute runInstaller:
===============
Connect to the server using VNCviewer to open a GUI session that enables you to run the Oracle installer.
Note: The Oracle installer can also be run in command-line mode using the -silent and -responseFile options; you should prepare the response file that will hold all the installation selections.

# xhost +
# su - oracle
# cd /u02/stage/database
# ./runInstaller

During the installation:
==================
Select "skip software updates"
Select "Install database Software only"
Select "Oracle Real Application Clusters database installation" -> Select both nodes (selected by default).
Select "Enterprise" -> Selected options like (Partitioning, Data Mining, Real Application Testing)
 =>From security perspective it's recommended to install only the options you need.
 =>From licensing perspective there is no problem if installed an options you are not using it, as Oracle charges only on the options are being used.
Select "dba" group for OSDBA, leave it blank for OSOPER (I never had a need to login to the database with SYSOPER privilege).
Ignore SCAN warning in the prerequisite check page
ORACLE_BASE: /u01/oracle
ORACLE_HOME (Software Location): /u01/oracle/11.2.0.3/db

At the end of installation: By root user execute /u01/oracle/11.2.0.3/db/root.sh on Node1 first then execute it on Node2:
# /u01/oracle/11.2.0.3/db/root.sh

Go back to the Oracle Installer:
click OK.
click Close.

I've uploaded Oracle software installation snapshots to this link:
http://imgur.com/a/OwKQw

Post Steps:
########
Installation verification:
=================
# cluvfy stage -post crsinst -n ora1123-node1,ora1123-node2 -verbose
  =>All checks passed except the SCAN check, which I'm not using in my setup.

Do some backing up:
==============
Query Voting disks:
------------------
crsctl query css votedisk

Backing up voting disks manually is no longer required; the dd command is not supported in 11gR2 for backing up voting disks. Voting disks are backed up automatically into the OCR as part of any configuration change, and voting disk data is automatically restored to any added voting disks.
http://download.oracle.com/docs/cd/E11882_01/rac.112/e16794/votocr.htm#BABDJDHI

Backup the OCR: (while the clusterware is up and running)
------------------
# ocrconfig -export /u01/grid/11.2.0.3/grid/cdata/cluster11g/ocr_after_DB_installation.dmp
# ocrconfig -manualbackup

Backup oraInventory directory:
---------------------------------
# cp -r /u01/oraInventory /u01/oraInventory_After_DBINSTALL

Backup root.sh:
-----------------
# cp /u01/grid/11.2.0.3/grid/root.sh /u01/grid/11.2.0.3/grid/root.sh._after_installation
# cp /u01/oracle/11.2.0.3/db/root.sh /u01/oracle/11.2.0.3/db/root.sh_after_installation

Backup ORACLE_HOME:
---------------------------
# tar cvpf /u01/oracle/11.2.0.3/db_After_DB_install.tar /u01/oracle/11.2.0.3/db

Backup GRID_HOME:
----------------------
# tar cvpf /u01/grid/11.2.0.3/grid_after_DB_install.tar /u01/grid/11.2.0.3/grid

Note: Although clusterware services are up and running, GI Home can be backed up online.

Backup the following files:
--------------------------
# cp /usr/local/bin/oraenv  /usr/local/bin/oraenv.11.2.0.3
# cp /usr/local/bin/dbhome  /usr/local/bin/dbhome.11.2.0.3
# cp /usr/local/bin/coraenv /usr/local/bin/coraenv.11.2.0.3

-Restart the RAC servers more than once and ensure that the RAC processes are starting up automatically.

July SPU Patch Apply:
##################
-Since October 2012 Oracle has renamed the CPU (Critical Patch Update) to SPU (Security Patch Update); both are the same, it's just a renaming.
-SPU patches are cumulative; once you apply the latest patch, there is no need to apply the older patches.
-To avoid making a big change to my environment, and to minimize the downtime when applying security patches, I prefer to apply the SPU (formerly CPU) patch over applying PSU patches (which contain the SPU patch plus common bug fixes that affect a large number of customers).
The OPatch utility version must be 11.2.0.3.0 or later (OPatch is the tool used to apply SPU patches):
  >> To download the latest OPatch utility: go to Metalink and search for Patch# 6880880.
   >Back up the original OPatch directory under ORACLE_HOME and just unzip the patch file under ORACLE_HOME.
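
A sketch of refreshing OPatch under ORACLE_HOME (the zip file name and staging path below are examples; use the file you downloaded for your platform):

  # su - oracle
  $ cd $ORACLE_HOME
  $ mv OPatch OPatch.orig
  $ unzip -q /u02/stage/p6880880_112000_Linux-x86-64.zip
  $ $ORACLE_HOME/OPatch/opatch version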

> $PATH must refer to /usr/ccs/bin
  # export PATH=$PATH:/usr/ccs/bin

> Unzip the Patch:
  # cd $ORACLE_HOME
  # unzip p16742095_112030_Linux-x86-64.zip

Patch Installation:
=============
Remember we still don't have any running database for the time being.
Shutdown Nodeapps or crs:
# srvctl stop nodeapps -n ora1123-node1

Patch Installation:
# cd $ORACLE_HOME/16742095
# opatch napply -skip_subset -skip_duplicate -local
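
Once opatch finishes, a quick verification sketch to confirm the patch is registered in the inventory (16742095 is the patch number used in this example):
# $ORACLE_HOME/OPatch/opatch lsinventory | grep 16742095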

Go to Node2 and do the same steps for installing the latest SPU patch...

Upgrade RAC 11.2.0.1 to 11.2.0.3 (Part I Software Installation)

In this post I'll discuss the full implementation of upgrading Oracle RAC 11.2.0.1 to 11.2.0.3; the new RAC 11.2.0.3 will be installed on new hardware (out-of-place upgrade).
To make this lengthy post more beneficial, I've divided it into FOUR major posts:
  Part I   OS Installation, Filesystem preparation (OCFS2 on ISCSI)
             ->Covers Oracle Enterprise Linux 5.9 x86_64 installation, preparation of ISCSI storage, using OCFS2 to format the shared filesystem.
  Part II  OS preparation, Grid Infrastructure and Database software installation.
             ->Covers Preparation of Linux OS for Oracle ,11.2.0.3 Grid Infrastructure & database software installation.
  Part III Standby Database Creation.
             ->Covers the creation of a standby database refreshed from the primary 11.2.0.1 DB, taking advantage of a new feature whereby a standby DB on a higher 11gR2 release can be refreshed from a lower 11gR2 release.
  Part IV Database Upgrade from 11.2.0.1 to 11.2.0.3
             ->Covers switching over the new standby DB residing on the 11.2.0.3 server to act as the primary DB, then upgrading the new primary DB from 11.2.0.1 to 11.2.0.3.

Feel free to click on the part you are interested in.

Parts I, II and III don't require downtime, as they are done on different hardware as part of our out-of-place upgrade; only Part IV requires a downtime window.
The whole implementation may take less than two hours of downtime if everything goes smoothly, but it will take long hours of DBA work.

In this post I tried to provide a reference for each point, and I also followed many recommendations from the Maximum Availability Architecture (MAA).

Part I   OS Installation, Filesystem preparation (OCFS2 on ISCSI)


The following are "good to know" information before installing Oracle 11.2.0.3 on any hardware:

Knowledge Requirements:
===================
RAC and Oracle Clusterware Best Practices and Starter Kit (Linux) [Metalink Doc ID 811306.1]
RAC and Oracle Clusterware Best Practices and Starter Kit (Platform Independent) [Metalink Doc ID 810394.1]
Configuring raw devices (singlepath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5 [Metalink Doc ID 465001.1]

Product Support Lifetime:
===================
This document indicates that premier support for database release 11.2 ends in Jan 2015 and extended support ends in Jan 2018.
http://www.oracle.com/us/support/library/lifetime-support-technology-069183.pdf

Patching Support Lifetime:
===================
This document indicates that Oracle will continue to provide security patches for the 11.2.0.3 version until 27-Aug-2015.
Security patches means (PSU, CPU, SPU patches). [Metalink Doc ID 742060.1]

Hardware Certification:
=================
RAC Technologies Matrix for Linux Platforms:
http://www.oracle.com/technetwork/database/clustering/tech-generic-linux-new-086754.html

The main link for certification Matrix for other platforms: (Linux, Unix, Windows)
http://www.oracle.com/technetwork/products/clustering/overview/index.html

In this implementation I'll install RAC on ISCSI NAS storage.

Now let's move from the theoretical part to the technical steps...

Linux Requirements:
===============
If installing 11.2.0.3 on RHEL 5 x86_64:
The minimum requirement is Red Hat Enterprise Linux 5 Update 5 with the Unbreakable Enterprise Kernel 2.6.32 or later.

Partitioning requirement on the server’s local hard disk: [Minimum!]
=======================================
Local hard disk will contain Linux, Grid Infrastructure and database installed binaries.
/u01  => 10G Free space to hold the installation files (GI+DB). I recommend at least 30G to hold future generated logs.
/tmp  => 1G Free space.
SWAP  => RAM=32G which is >8G, SWAP= 75% of RAM = 24G
/dev/shm => must be greater than the sum of MEMORY_MAX_TARGET for all instances if you will use the new 11g feature Automatic Memory Management, i.e. setting the memory_max_target & memory_target parameters to specific values, which handles the memory sizes of the SGA & PGA together.

More about /dev/shm:
-According to Oracle Support, /dev/shm cannot be greater than 50% of the RAM installed on the server.
-Make the /dev/shm size equal to the memory size installed on the server, or at least the sum of all DBs' memory_max_target values on the server.
-/dev/shm must exist if you will use the 11g new feature Automatic Memory Management by setting the memory_max_target parameter.
-If /dev/shm doesn't exist or isn't properly sized, the database will raise this error on startup if the memory_max_target parameter has been set:
 ORA-00845: MEMORY_TARGET not supported on this system.
-Oracle will create files under /dev/shm upon instance startup; they are removed automatically after instance shutdown.
-Oracle will use these files to manage the memory size dynamically between the SGA and PGA.
-It's recommended to have /dev/shm configured with the "tmpfs" option instead of "ramfs", as ramfs is not supported for Automatic Memory Management (AMM):
 # df -h /dev/shm
 Filesystem            Size  Used Avail Use% Mounted on
 tmpfs                 16G     0  16G   0%   /dev/shm
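 If /dev/shm needs to be resized, a minimal sketch of the /etc/fstab entry followed by a remount (the 16g size here is just an example; size it per the notes above):
 # vi /etc/fstab
 tmpfs   /dev/shm   tmpfs   size=16g   0 0
 # mount -o remount /dev/shm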

BTW I'm not using this feature; I'm still sticking with sga_target & pga_aggregate_target. :-)

------------------------------
Linux OS Installation: Both Nodes (Estimated time: 3 hours)
------------------------------
Note: Install a fresh Linux installation on all RAC nodes; DON'T clone the installation from one node to another to save time.

FS Layout:
>>>>>>>>>
The whole disk space is 300G

Filesystem   Size(G)   Size(M) used in setup
----------   -------   ---------------------
/boot        1G        1072    --Force to be a primary partition.
Swap         24G       24576   --Force to be a primary partition, 75% of RAM.
/dev/shm     30G       30720   --Force to be a primary partition.
/            20G       20480
/u01         70G       73728
/home        10G       10547
/tmp         5G        5240
/var         10G       10547
/u02         95G       The rest of the space

Note:
 If you will use an ISCSI drive, avoid making a separate partition for /usr on it; an ISCSI /usr will prevent the system from booting.
 Ref: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/s2-diskpartrecommend-x86.html
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/s1-x86-bootloader.html

Packages selection during Linux installation:
>>>>>>>>>>>>>

Desktop Environment:
 # Gnome Desktop Environment
Applications:
 #Editors -> VIM
Development:
 # Development Libraries.
 # Development Tools
 # GNOME software development
 # Java Development
 # Kernel Development
 # Legacy Software Development
 # X Software Development
Servers:
 # Legacy Network Server -> Check only: rsh-server,xinetd
 # PostgreSQL -> Check only: UNIXODBC-nnn
 # Server Configuration Tools -> Check All
Base System:
 # Administration Tools.
 # Base -> Un check bluetooth,wireless packs Check-> Device mapper multipath
 # Java
 # Legacy Software Support
 # System Tools -> Check also: OCFS2 packages
 # X Window System

FIREWALL & SELINUX MUST BE STOPPED. [Note ID 554781.1]

I've uploaded OEL 5.9 installation snapshots in this link:
http://imgur.com/a/LpwMw

populate /etc/hosts with the IPs and resolved names:
=======================================
# vi /etc/hosts

#You must keep 127.0.0.1  localhost, if removed VIP will not work !!!
#cluster_scan,Public and VIP should be in the same subnet.

127.0.0.1       localhost localhost.localdomain

#Public:
172.18.20.1  ora1123-node1  node1 n1
172.18.20.2  ora1123-node2  node2 n2

#Virtual:
172.18.20.3  ora1123-node1-vip node1-vip n1-vip
172.18.20.4  ora1123-node2-vip node2-vip n2-vip

#Private:
192.168.10.1      ora1123-node1-priv n1-priv node1-priv
192.168.10.2      ora1123-node2-priv n2-priv node2-priv

#Cluster:
172.18.20.10  cluster-scan

#NAS
172.20.30.100   nas nas-server

#11.2.0.1 Servers:
10.60.60.1  ora1121-node1 old1 #the current 11.2.0.1 Node1
10.60.60.2  ora1121-node2 old2 #the current 11.2.0.1 Node2

I've added the RAC node names, VIPs and private IPs with their resolved names for both nodes, and guess what, I'm also resolving the cluster SCAN in /etc/hosts; keep it a secret, don't tell Larry :-)
Actually I'm still not convinced by the SCAN feature; if you will use it in your setup, just ask the network admin to resolve at least three SCAN IPs in the DNS to the cluster SCAN name you will use.
This document will help you understanding the SINGLE CLIENT ACCESS NAME (SCAN):
http://www.oracle.com/webfolder/technetwork/jp/obe/fmw/wls/11g/WLS_11g_RAC_gridlink/files/SCAN.pdf

Upgrade the KERNEL:
==================
-Subscribe the new servers to the ULN network.
-Upgrade the Kernel to the latest version.
Ensure that /etc/resolv.conf is equipped with the DNS entry and that you are connected to the internet. Once this task is done, if you don't need the servers to stay connected to the internet, make sure they are no longer connecting to the internet, for security reasons.

On linux server:
-------------------
# up2date --register

Install key? Yes

put this information:

login: xxxxxx
pass:  xxxxxx
CSI#: xxxxxx


In case you still cannot establish a connection with ULN, you can use the IP 141.146.44.24 instead of the address linux-update.oracle.com in the "Network Configuration" button.
Also: in  /etc/sysconfig/rhn/up2date :
      You can change this line:
      noSSLServerURL=http://linux-update.oracle.com/XMLRPC to  noSSLServerURL=http://141.146.44.24/XMLRPC
      and this line:
      serverURL=https://linux-update.oracle.com/XMLRPC  to  serverURL=https://141.146.44.24/XMLRPC

Then proceed with updating the kernel from the same GUI or from command line as shown below:

up2date -d @  --> To download the updated packages
up2date @     --> To install the updated packages

I'm putting the symbol @ to skip the GUI mode and continue with the CLI.

Configure YUM with ULN:
--------------------------------
# cd /etc/yum.repos.d
# wget http://public-yum.oracle.com/public-yum-el5.repo
# vi public-yum-el5.repo
Modify the following:
Under both paragraphs: [el5_latest] & [ol5_UEK_latest] modify enabled=0 to enabled=1
An excerpt:

[el5_latest]
name=Oracle Linux $releasever Latest ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL5/latest/$basearch/
gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-el5
gpgcheck=1
enabled=1

[ol5_UEK_latest]
name=Latest Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL5/UEK/latest/$basearch/
gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-el5
gpgcheck=1
enabled=1
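
To confirm the two repositories are now enabled, a quick check:
# yum repolist enabled | grep -E "el5_latest|ol5_UEK_latest"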

Network configuration:
================
Node1:
--------
# cat /etc/sysconfig/network-scripts/ifcfg-eth0   #=>Public
DEVICE=eth0
BOOTPROTO=static
BROADCAST=172.18.20.255
IPADDR=172.18.20.1
NETMASK=255.255.255.0
NETWORK=172.18.20.0
ONBOOT=yes

# cat /etc/sysconfig/network-scripts/ifcfg-eth1   #=>ISCSI NAS
DEVICE=eth1
BOOTPROTO=static
BROADCAST=172.20.30.255
IPADDR=172.20.30.101
NETMASK=255.255.255.0
NETWORK=172.20.30.0
ONBOOT=yes

# cat /etc/sysconfig/network-scripts/ifcfg-eth3   #=>Private
DEVICE=eth3
BOOTPROTO=static
BROADCAST=192.168.10.255
IPADDR=192.168.10.1
NETMASK=255.255.255.0
NETWORK=192.168.10.0
ONBOOT=yes


Node2:
--------
# cat /etc/sysconfig/network-scripts/ifcfg-eth0   #=>Public
DEVICE=eth0
BOOTPROTO=static
BROADCAST=172.18.20.255
IPADDR=172.18.20.2
NETMASK=255.255.255.0
NETWORK=172.18.20.0
ONBOOT=yes

# cat /etc/sysconfig/network-scripts/ifcfg-eth1   #=>ISCSI NAS STORAGE
DEVICE=eth1
BOOTPROTO=static
BROADCAST=172.20.30.255
IPADDR=172.20.30.102
NETMASK=255.255.255.0
NETWORK=172.20.30.0
ONBOOT=yes

# cat /etc/sysconfig/network-scripts/ifcfg-eth3   #=>Private
DEVICE=eth3
BOOTPROTO=static
BROADCAST=192.168.10.255
IPADDR=192.168.10.2
NETMASK=255.255.255.0
NETWORK=192.168.10.0
ONBOOT=yes

-----------------------------
Filesystem Preparation:
-----------------------------

RAC servers will connect to the NAS shared storage using ISCSI protocol.

ISCSI Configuration:

Required packages:
# rpm -q iscsi-initiator-utils
# yum install iscsi-initiator-utils

To make ISCSI aware that the LUNs are being accessed simultaneously by more than one node at the same time, and to avoid LUN corruption, use one of the following ways (A or B):
A) Generate IQN number
or
B) Setup username and password of ISCSI storage

I'll explain both of them:

A) Generate an IQN number (ISCSI Qualified Name) in Linux for each node, to be saved inside the NAS configuration console:

On Node1:

Generate an IQN number:
# /sbin/iscsi-iname
iqn.1988-12.com.oracle:9e963384353a

Note: the last portion of the IQN after the colon ":" is editable and can be changed to the node name; I mean instead of "9e963384353a" you can rename it to "node1". No spaces are allowed in the name.

Now insert the generated IQN to /etc/iscsi/initiatorname.iscsi
# vi /etc/iscsi/initiatorname.iscsi
#Note that the last portion of the IQN is modifiable (modify it to a meaningful name)
InitiatorName=iqn.1988-12.com.oracle:node1

Do the same on Node2:

On Node2:

# /sbin/iscsi-iname
iqn.1988-12.com.oracle:18e6f43d73ad

# vi /etc/iscsi/initiatorname.iscsi
#Note that the last portion of the IQN is modifiable (modify it to a meaningful name)
InitiatorName=iqn.1988-12.com.oracle:node2

Put the same IQN you already inserted in /etc/iscsi/initiatorname.iscsi on both nodes in the NAS administration console for each LUN will be accessed by both nodes.

(This should be done by the Storage Admin)


B) Set up a username and password for ISCSI storage:

# vi /etc/iscsi/iscsid.conf
node.session.auth.username =
node.session.auth.password =
discovery.sendtargets.auth.username =
discovery.sendtargets.auth.password =

Start the iscsi service:
# /etc/init.d/iscsi start

Same Username & password should be configured in the NAS administration console.(This should be done by the Storage Admin)

Continue configuring the ISCSI:
=======================
Enable the iscsi service to start automatically at boot:
# chkconfig iscsi on

Discover the target LUNs:

# service iscsi start
# iscsiadm -m discovery -t sendtargets -p 172.20.30.100
# (cd /dev/disk/by-path; ls -l *iscsi* | awk '{FS=" "; print $9 " " $10 " " $11}')

Whenever iscsid discovers a new target, it adds the corresponding information under the following directory:
# ls -lR /var/lib/iscsi/nodes/
# service iscsi restart
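To quickly confirm that the node has logged in to the discovered targets after the restart (a simple sanity check; the output varies per environment):

# iscsiadm -m session    #=> lists the active iSCSI sessions, one line per logged-in target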

Create Persistent Naming: (NON Multipath Configuration)
===================
Every time the machine or the iSCSI service restarts, the device names /dev/sd* can change, e.g. /data1 may end up pointing to /dev/sdc instead of /dev/sda, which is something we cannot live with at all.

Note: I only have one physical path "NIC" connecting to the NAS storage, so I apply a non-multipath configuration.

1) Whitelist all SCSI devices:
-- -----------------------------
# vi /etc/scsi_id.config
#Add the following lines:
vendor="ATA",options=-p 0x80
options=-g


2) Get the LUN names and their device names:
-- --------------------------------------------------
# (cd /dev/disk/by-path; ls -l *iscsi* | awk '{FS=" "; print $9 " " $10 " " $11}')

ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-archive1-lun-0 -> ../../sdn
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-archive2-lun-0 -> ../../sdr
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-backupdisk-lun-0 -> ../../sdi
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-control1-lun-0 -> ../../sdl
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-control2-lun-0 -> ../../sda
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-data1-lun-0 -> ../../sdp
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-index1-lun-0 -> ../../sdo
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-ocr1-lun-0 -> ../../sdq
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-ocr2-lun-0 -> ../../sde
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-ocr3-lun-0 -> ../../sdf
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-redo1-lun-0 -> ../../sdb
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-redo2-lun-0 -> ../../sdm
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-temp1-lun-0 -> ../../sdh
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-undo1-lun-0 -> ../../sdj
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-undo2-lun-0 -> ../../sdc
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-voting1-lun-0 -> ../../sdk
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-voting2-lun-0 -> ../../sdg
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-voting3-lun-0 -> ../../sdd

3) Get the drives UUID:
-- ------------------------
scsi_id -g -s /block/sdn
scsi_id -g -s /block/sdr
scsi_id -g -s /block/sdi
scsi_id -g -s /block/sdl
scsi_id -g -s /block/sda
scsi_id -g -s /block/sdp
scsi_id -g -s /block/sdo
scsi_id -g -s /block/sdq
scsi_id -g -s /block/sde
scsi_id -g -s /block/sdf
scsi_id -g -s /block/sdb
scsi_id -g -s /block/sdm
scsi_id -g -s /block/sdh
scsi_id -g -s /block/sdj
scsi_id -g -s /block/sdc
scsi_id -g -s /block/sdk
scsi_id -g -s /block/sdg
scsi_id -g -s /block/sdd

These UUIDs are the consistent identifiers for the devices, we will use them in the next step.

4) Create the file /etc/udev/rules.d/04-oracle-naming.rules with the following format:
-- --------------------------------------------------------------------------------------
# vi /etc/udev/rules.d/04-oracle-naming.rules

#Add a line for each device specifying the device name & its UUID:
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d003000000000", NAME="archive1"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d004000000000", NAME="archive2"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d002000000000", NAME="backupdisk"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d001000000000", NAME="control1"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d005000000000", NAME="control2"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d006000000000", NAME="data1"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d007000000000", NAME="index1"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d008000000000", NAME="ocr1"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d009000000000", NAME="ocr2"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d010000000000", NAME="ocr3"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d011000000000", NAME="redo1"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d012000000000", NAME="redo2"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d013000000000", NAME="temp1"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d014000000000", NAME="undo1"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d015000000000", NAME="undo2"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d016000000000", NAME="voting1"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d017000000000", NAME="voting2"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d018000000000", NAME="voting3"

# service iscsi restart

5) Check the configuration:
-- -----------------------------
Now the device names under /dev should be e.g. /dev/archive1 instead of /dev/sdn
Note: fdisk -l will not show the new NAS devices anymore; don't worry, use the following instead:

#(cd /dev/disk/by-path; ls -l *iscsi* | awk '{FS=" "; print $9 " " $10 " " $11}')

ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-archive1-lun-0 -> ../../archive1
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-archive2-lun-0 -> ../../archive2
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-backupdisk-lun-0 -> ../../backupdisk
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-control1-lun-0 -> ../../control1
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-control2-lun-0 -> ../../control2
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-data1-lun-0 -> ../../data1
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-index1-lun-0 -> ../../index1
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-ocr1-lun-0 -> ../../ocr1
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-ocr2-lun-0 -> ../../ocr2
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-ocr3-lun-0 -> ../../ocr3
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-redo1-lun-0 -> ../../redo1
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-redo2-lun-0 -> ../../redo2
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-temp1-lun-0 -> ../../temp1
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-undo1-lun-0 -> ../../undo1
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-undo2-lun-0 -> ../../undo2
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-voting1-lun-0 -> ../../voting1
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-voting2-lun-0 -> ../../voting2
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-voting3-lun-0 -> ../../voting3

Also test the udev rule:
# udevtest /block/sdb | grep udev_rules_get_name

udev_rules_get_name: rule applied, 'sdb' becomes 'ocr2'
...


OCFS2 Configuration:

Required Packages:
OCFS2 packages should be installed during Linux installation, if you selected the right packages.
If you didn't do so, you can download and install the required OCFS2 packages using the following commands:
# up2date --install ocfs2-tools ocfs2console
# up2date --install ocfs2-`uname -r`

1) populate /etc/ocfs2/cluster.conf settings:
-  -------------------------------------
In the OCFS2 configuration I'll use the heartbeat NICs (private) not the public ones.

# mkdir -p /etc/ocfs2/
# vi /etc/ocfs2/cluster.conf
node:
        ip_port = 7000
        ip_address = 192.168.10.1
        number = 0
        name = ora1123-node1
        cluster = ocfs2

node:
        ip_port = 7000
        ip_address = 192.168.10.2
        number = 1
        name = ora1123-node2
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2

Options:
ip_port:    The default port; can be changed to any unused port.
ip_address: Using the private interconnect is highly recommended, as it's supposed to be a private network between the cluster nodes only.
number:     A unique node number from 0-254.
name:       The node name; it needs to match the hostname without the domain name.
cluster:    Name of the cluster.
node_count: Number of nodes in the cluster.

BEWARE: When editing this file be careful: parameter lines must start after a tab, and a blank line must separate each stanza.

2) Timeout Configuration:
-  -----------------------------
The O2CB cluster stack uses these timings to determine whether a node is dead or alive. Keeping default values is recommended.

# /etc/init.d/o2cb configure

Load O2CB driver on boot (y/n) [n]: y
Cluster stack backing O2CB [o2cb]:
Cluster to start on boot (Enter "none" to clear) [ocfs2]:
Specify heartbeat dead threshold (>=7) [31]: 61
Specify network idle timeout in ms (>=5000) [30000]: 60000
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:

Heartbeat Dead Threshold: the number of two-second iterations before a node is considered dead. 61 is recommended for multipath users; for my setup I set it to 61, i.e. a timeout of roughly 120 seconds.
Network Idle Timeout: the time in milliseconds before a network connection is considered dead; 60000 ms is recommended.

Configure the cluster stack to load on boot:
-------------------------------------------
# chkconfig --add o2cb
# chkconfig --add ocfs2
# /etc/init.d/o2cb load
# /etc/init.d/o2cb start ocfs2
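To double-check that the cluster stack came up, the o2cb init script also provides a status option (a quick sanity check; the exact output wording differs between ocfs2-tools versions):

# /etc/init.d/o2cb status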


Filesystem Partitioning: OCFS2
==================
As per the labels on the NAS disks, I'll assign the same names to the OCFS2 volume labels.

# fdisk -l |grep /dev
# (cd /dev/disk/by-path; ls -l *iscsi* | awk '{FS=" "; print $9 " " $10 " " $11}')

Formatting:
--------------
# mkfs.ocfs2 -F -b 4K -C 32K -N 2 -L ocr1  /dev/ocr1
# mkfs.ocfs2 -F -b 4K -C 32K -N 2 -L ocr2  /dev/ocr2
# mkfs.ocfs2 -F -b 4K -C 32K -N 2 -L ocr3  /dev/ocr3
# mkfs.ocfs2 -F -b 4K -C 32K -N 2 -L voting1  /dev/voting1
# mkfs.ocfs2 -F -b 4K -C 32K -N 2 -L voting2  /dev/voting2
# mkfs.ocfs2 -F -b 4K -C 32K -N 2 -L voting3  /dev/voting3
# mkfs.ocfs2 -F -b 4k -C 8k -N 2 -L redo1  -J size=64M /dev/redo1
# mkfs.ocfs2 -F -b 4k -C 8k -N 2 -L redo2  -J size=64M /dev/redo2
# mkfs.ocfs2 -F -b 4k -C 8k -N 2 -L control1  -J size=64M /dev/control1
# mkfs.ocfs2 -F -b 4k -C 8k -N 2 -L control2  -J size=64M /dev/control2
# mkfs.ocfs2 -F -b 4k -C 8k -N 2 -L archive1  -J size=64M /dev/archive1
# mkfs.ocfs2 -F -b 4k -C 8k -N 2 -L archive2   -J size=64M /dev/archive2
# mkfs.ocfs2 -F -b 4k -C 8k -N 2 -L undo1  -J size=64M /dev/undo1
# mkfs.ocfs2 -F -b 4k -C 8k -N 2 -L undo2  -J size=64M /dev/undo2
# mkfs.ocfs2 -F -b 4k -C 8k -N 2 -L data1  -J size=64M /dev/data1
# mkfs.ocfs2 -F -b 4k -C 8k -N 2 -L index1   -J size=64M /dev/index1
# mkfs.ocfs2 -F -b 4k -C 8k -N 2 -L temp1   -J size=64M /dev/temp1
# mkfs.ocfs2 -F -b 4k -C 1M -N 2 -L backupdisk  -J size=64M /dev/backupdisk

Options:
-F Force: overwrite the device even if it was previously formatted by OCFS2.
-b Block size, from 512 bytes to 4k (default); 4k is recommended (a smaller block size means a smaller max size: maxsize=2^32*blocksize, so with blocksize=4096 the maxsize=16T).
-C Cluster size, from 4k (default) to 1M; 4k is recommended EXCEPT for the DB files partitions, where it should equal the database block size = 8k.
   For the backup storage filesystem holding RMAN backups and dump files, use a bigger cluster size.
   128k is recommended as a default cluster size if you're not sure what cluster size to use.
-N Number of node slots, i.e. the number of nodes that can mount the volume concurrently; it's recommended to set it bigger than required, e.g. if you have two nodes set it to 4. This parameter can be increased later using tunefs.ocfs2, but that practice can lead to bad performance.
-L Label name; labeling the volume allows consistent "persistent" naming across the cluster, even if you're using iSCSI.
-J Journal size, 256MB (default); recommended: 64MB for datafiles, 128MB for vmstore and 256MB for mail.
-T Filesystem type (datafiles, mail, vmstore):
    (datafiles) recommended for database FS; will set (blocksize=4k, clustersize=128k, journal size=32M).
    (vmstore)   recommended for backup FS; will set (blocksize=4k, clustersize=128k, journal size=128M).

Note: For the filesystems that hold the database files set the cluster size -C 8k
      For the filesystems that hold backup files set the cluster size -C 1M
      If you're not sure what cluster size to use, use 128k; it has proven to be a reasonable trade-off between wasted space and performance.
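Before mounting, you can verify that each device got the intended label; mounted.ocfs2 (part of ocfs2-tools) lists the OCFS2 devices along with their UUIDs and labels:

# mounted.ocfs2 -d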

Mounting the partitions:
-----------------------------
mkdir /ora_redo1
mkdir /ora_backupdisk
mkdir /ora_undo1
mkdir /ora_undo2
mkdir /ora_control2
mkdir /ora_control1
mkdir /ora_archive1
mkdir /ora_redo2
mkdir /ora_temp1
mkdir /ora_index1
mkdir /ora_archive2
mkdir /ora_data1
mkdir /ora_ocr1
mkdir /ora_ocr2
mkdir /ora_ocr3
mkdir /ora_voting1
mkdir /ora_voting2
mkdir /ora_voting3

chown -R oracle:oinstall /ora*
chmod 750 /ora*

Mounting the partitions automatically when system restart:
-----------------------------------------------------------------
vi /etc/fstab

LABEL=ocr1  /ora_ocr1 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=ocr2  /ora_ocr2 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=ocr3  /ora_ocr3 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=voting1  /ora_voting1 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=voting2  /ora_voting2 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=voting3  /ora_voting3 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=control1  /ora_control1 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=control2  /ora_control2 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=redo1  /ora_redo1 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=redo2  /ora_redo2 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=archive1  /ora_archive1 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=archive2  /ora_archive2 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=temp1  /ora_temp1 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=undo1  /ora_undo1 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=undo2  /ora_undo2 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=index1  /ora_index1 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=data1  /ora_data1 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=backupdisk /ora_backupdisk ocfs2   _netdev         0   0

Partitions mount options:
>>>>>>>>>>>>>>>>>
_netdev: mandatory, prevents attempting to mount the filesystem until the network has been enabled on the system.
datavolume: forces direct I/O; used with filesystems containing Oracle data files, control files, redo/archive logs and voting/OCR disks. Same behavior as setting the init.ora parameter filesystemio_options.
            The datavolume mount option MUST NOT be used on volumes hosting the Oracle home, Oracle E-Business Suite or any other use.
nointr: default, blocks signals from interrupting certain cluster operations (disables interrupts).
rw: default, mounts the FS in read-write mode.
ro: mounts the FS in read-only mode.
noatime: default, disables access time updates, improving performance (important for DB/cluster files).
atime_quantum=: updates atime of files every 60 seconds (default); degrades performance.
commit=: optional, syncs all data every 5 seconds (default); degrades performance. In case of failure you will lose the last 5 seconds of work (the filesystem will not be damaged, thanks to journaling). A higher value improves performance at the cost of more data-loss risk.

After adding the entries to /etc/fstab you can mount the partitions using this command:
# mount -a

OR:
# mount -L "temp1" /ora_temp1

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/cciss/c1d0p6      20G  3.9G   15G  22% /
/dev/cciss/c1d0p10     98G  7.0G   86G   8% /u02
/dev/cciss/c1d0p9     5.0G  139M  4.6G   3% /tmp
/dev/cciss/c1d0p8      10G  162M  9.4G   2% /home
/dev/cciss/c1d0p7      10G  629M  8.9G   7% /var
/dev/cciss/c1d0p2      16G     0   16G   0% /dev/shm
/dev/cciss/c1d0p5      70G  180M   66G   1% /u01
/dev/cciss/c1d0p1    1003M   76M  876M   8% /boot
tmpfs                  16G     0   16G   0% /dev/shm
/dev/sdm              1.0G  143M  882M  14% /ora_ocr1
/dev/sdd              1.0G  143M  882M  14% /ora_ocr2
/dev/sde              1.0G  143M  882M  14% /ora_ocr3
/dev/sdk              1.0G  143M  882M  14% /ora_voting1
/dev/sdi              1.0G  143M  882M  14% /ora_voting2
/dev/sdg              1.0G  143M  882M  14% /ora_voting3
/dev/sdl               10G  151M  9.9G   2% /ora_control1
/dev/sda               10G  151M  9.9G   2% /ora_control2
/dev/sdb               10G  151M  9.9G   2% /ora_redo1
/dev/sdr               10G  151M  9.9G   2% /ora_redo2
/dev/sdp              300G  456M  300G   1% /ora_archive1
/dev/sdn              300G  456M  300G   1% /ora_archive2
/dev/sdf               60G  205M   60G   1% /ora_temp1
/dev/sdj               40G  184M   40G   1% /ora_undo1
/dev/sdc               40G  184M   40G   1% /ora_undo2
/dev/sdo              200G  349M  200G   1% /ora_index1
/dev/sdq              400G  563M  400G   1% /ora_data1
/dev/sdh              500G  674M  500G   1% /ora_backupdisk

Performance Tip: Ensure updatedb does not scan the OCFS2 partitions, by adding the "ocfs2" keyword to the "PRUNEFS =" list in /etc/updatedb.conf.
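A minimal sketch of that change (the PRUNEFS value below is only an illustration; keep whatever filesystem types are already listed in your file and just append ocfs2):

# vi /etc/updatedb.conf
#Append ocfs2 to the existing PRUNEFS list, e.g.:
PRUNEFS = "NFS nfs nfs4 iso9660 ocfs2"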


 ///////////////////////////////////////////////////////////////
 In case of using ASM for the shared storage (very quick guide)
 ///////////////////////////////////////////////////////////////
 Note: Don't use persistent naming unless you finish configuring ASM first.

 Install ASMLib 2.0 Packages:
 ---------------------------
 # rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n"| grep oracleasm | sort
 oracleasm-2.6.18-348.el5-2.0.5-1.el5 (x86_64)
 oracleasmlib-2.0.4-1.el5 (x86_64)
 oracleasm-support-2.1.7-1.el5 (x86_64)

 Configure ASMLib:
 ----------------
 # /usr/sbin/oracleasm configure -i
 Default user to own the driver interface []: oracle
 Default group to own the driver interface []: dba
 Start Oracle ASM library driver on boot (y/n) [n]: y
 Scan for Oracle ASM disks on boot (y/n) [y]: y

 # /usr/sbin/oracleasm init

 Use FDISK to create RAW partition for each disk:
 -----------------------------------------------
 # fdisk /dev/sdn
  n        #=> new partition
  p        #=> primary partition
  1        #=> partition number 1
           #=> press Enter to accept the default first cylinder
           #=> press Enter to accept the default last cylinder (use the whole disk)
  w        #=> write the partition table and exit

 Do the same for other disks....
 Commit your changes without the need to restart the system using this command:
 # partprobe

 Create ASM Disks:
 ----------------
 # /usr/sbin/oracleasm createdisk OCR1 /dev/sdn1
 # /usr/sbin/oracleasm createdisk OCR2 /dev/sdd1
 # /usr/sbin/oracleasm createdisk OCR3 /dev/sde1
 # /usr/sbin/oracleasm createdisk voting1 /dev/sdk1
 # /usr/sbin/oracleasm createdisk voting2 /dev/sdi1
 # /usr/sbin/oracleasm createdisk voting3 /dev/sdj1
 ... and so on

 SCAN ASM Disks:
 --------------
 # /usr/sbin/oracleasm scandisks
 Reloading disk partitions: done
 Cleaning any stale ASM disks...
 Scanning system for ASM disks...
 Instantiating disk "OCR1"
 Instantiating disk "OCR2"
 ....

 # /usr/sbin/oracleasm listdisks
 OCR1
 ...

 # oracleasm querydisk /dev/sdn1

 Diskgroup creation will be done from the installer.

 ////////////////////////////////////////////////////////////////////

In case you want to use RAW DEVICES for the shared storage:
Note that starting with 11gr2 using DBCA or the installer to store Oracle Clusterware or Oracle Database files on block or raw devices is not supported.

Statistics In Oracle


Ref : http://dba-tips.blogspot.in/search/label/Performance.

This is for my reference..

In this post I'll try to summarize all sorts of statistics in Oracle. I strongly recommend reading the full article, as it contains information you may find valuable in understanding Oracle statistics.

#####################################
Database | Schema | Table | Index Statistics
#####################################

Gather Database Statistics:
=======================
SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(
     ESTIMATE_PERCENT=>100,METHOD_OPT=>'FOR ALL COLUMNS SIZE SKEWONLY',
     CASCADE => TRUE,
     degree => 4,
     OPTIONS => 'GATHER STALE',
     GATHER_SYS => TRUE,
     STATTAB => 'PROD_STATS');

CASCADE => TRUE :Gather statistics on the indexes as well. If not used Oracle will decide whether to collect index statistics or not.
DEGREE => 4 :Degree of parallelism.
options:
       =>'GATHER' :Gathers statistics on all objects in the schema.
       =>'GATHER AUTO' :Oracle determines which objects need new statistics, and determines how to gather those statistics.
       =>'GATHER STALE':Gathers statistics on stale objects; will return a list of the stale objects.
       =>'GATHER EMPTY':Gathers statistics on objects that have no statistics; will return a list of the objects without statistics.
        =>'LIST AUTO' : Returns a list of objects to be processed with GATHER AUTO.
        =>'LIST STALE': Returns a list of stale objects as determined by looking at the *_tab_modifications views.
        =>'LIST EMPTY': Returns a list of objects which currently have no statistics.
GATHER_SYS => TRUE :Gathers statistics on the objects owned by the 'SYS' user.
STATTAB => PROD_STATS :Table will save the current statistics. see SAVE & IMPORT STATISTICS section -last third in this post-.

Note: All above parameters are valid for all kind of statistics (schema,table,..) except Gather_SYS.
Note: Skewed data means the data inside a column is not uniform; one or more particular values are repeated much more often than the other values in the same column. For example, take the gender column in an employee table with two values (male/female): in a construction or security services company, where most of the workforce is male, the gender column is likely to be skewed, but in an entity like a hospital, where the number of male and female employees is almost equal, the gender column is likely not skewed.
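As a quick illustration, you can eyeball the skewness of a column before deciding on histograms (hypothetical EMPLOYEES table; a very uneven distribution of counts indicates a skewed column):

SQL> select gender, count(*)
     from employees
     group by gender
     order by 2 desc;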

For faster execution:

SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(
ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE,degree => 8);

What's new?
ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE => Let Oracle estimate the sample size; this almost always gives excellent results (DEFAULT).
Removed "METHOD_OPT=>'FOR ALL COLUMNS SIZE SKEWONLY'" => Histograms are not recommended to be gathered on all columns.
Removed "cascade => TRUE" => Let Oracle determine whether index statistics should be collected or not.
Doubled the "degree => 8", but this depends on the number of CPUs on the machine and the CPU overhead you can accept while gathering DB statistics.

Starting from Oracle 10g, Oracle introduced an automated task that gathers statistics on all objects in the database that have stale or missing statistics. To check the status of that task:
SQL> select status from dba_autotask_client where client_name = 'auto optimizer stats collection';

To Enable Automatic Optimizer Statistics task:
SQL> BEGIN
     DBMS_AUTO_TASK_ADMIN.ENABLE(
     client_name => 'auto optimizer stats collection',
     operation => NULL,
     window_name => NULL);
     END;
     /

In case you want to Disable Automatic Optimizer Statistics task:
SQL> BEGIN
     DBMS_AUTO_TASK_ADMIN.DISABLE(
     client_name => 'auto optimizer stats collection',
     operation => NULL,
     window_name => NULL);
     END;
     /

To check the tables having stale statistics:

SQL> exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
SQL> select OWNER,TABLE_NAME,LAST_ANALYZED,STALE_STATS from DBA_TAB_STATISTICS where STALE_STATS='YES';

[update on 03-Sep-2014]
Note: In order to get accurate information from DBA_TAB_STATISTICS or the (*_TAB_MODIFICATIONS, *_TAB_STATISTICS and *_IND_STATISTICS) views, you should manually run the DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO procedure to refresh its parent table mon_mods_all$ with the recent data from the SGA, or you have to wait for the Oracle internal job that refreshes that table (once a day in 10g onwards [except 10gR2], every 15 minutes in 10gR2, or every 3 hours in 9i and earlier), or until you manually run one of the GATHER_*_STATS procedures.
[Reference: Oracle Support and MOS ID 1476052.1]

Gather SCHEMA Statistics:
======================
SQL> Exec DBMS_STATS.GATHER_SCHEMA_STATS (
     ownname =>'SCOTT',
     estimate_percent=>10,
     degree=>1,
     cascade=>TRUE,
     options=>'GATHER STALE');


Gather TABLE Statistics:
====================
Check table statistics date:
SQL> select table_name, last_analyzed from user_tables where table_name='T1';

SQL> Begin DBMS_STATS.GATHER_TABLE_STATS (
     ownname => 'SCOTT',
     tabname => 'EMP',
     degree => 2,
     cascade => TRUE,
     METHOD_OPT => 'FOR COLUMNS SIZE AUTO',
     estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
     END;
     /

CASCADE => TRUE : Gather statistics on the indexes as well. If not used Oracle will determine whether to collect it or not.
DEGREE => 2: Degree of parallelism.
ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE : (DEFAULT) Auto set the sample size % for skew(distinct) values (accurate and faster than setting a manual sample size).
METHOD_OPT=>  :  For gathering Histograms:
 FOR COLUMNS SIZE AUTO :  You can specify one column between quotes instead of all columns.
 FOR ALL COLUMNS SIZE REPEAT :  Prevents deletion of histograms; collects them only for columns that already have histograms.
 FOR ALL COLUMNS  :  Collects histograms on all columns.
 FOR ALL COLUMNS SIZE SKEWONLY :  Collects histograms for columns that have skewed values (you should test for skewness first).
 FOR ALL INDEXED COLUMNS :  Collects histograms for indexed columns only.


Note: Truncating a table will not update the table statistics; it will only reset the High Water Mark. You have to re-gather statistics on that table.
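A quick way to see this (illustrative, using SCOTT.EMP): after the truncate, the dictionary still reports the old figures until you re-gather.

SQL> truncate table scott.emp;
SQL> select num_rows, last_analyzed from dba_tab_statistics
     where owner='SCOTT' and table_name='EMP';
     --NUM_ROWS and LAST_ANALYZED still reflect the pre-truncate statistics
SQL> exec dbms_stats.gather_table_stats('SCOTT','EMP');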

Inside "DBA BUNDLE", there is a script called "gather_stats.sh", it will help you easily & safely gather statistics on specific schema or table plus providing advanced features such as backing up/ restore new statistics in case of fallback.
To learn more about "DBA BUNDLE" please visit this post:
http://dba-tips.blogspot.com/2014/02/oracle-database-administration-scripts.html


Gather Index Statistics:
===================
SQL>
BEGIN
DBMS_STATS.GATHER_INDEX_STATS(ownname => 'SCOTT',indname => 'EMP_I',estimate_percent =>DBMS_STATS.AUTO_SAMPLE_SIZE);
END;
/

####################
Fixed OBJECTS Statistics
####################

What are Fixed objects:
----------------------------
-Fixed objects are the x$ tables (loaded into the SGA during startup) on which the V$ views are built (V$SQL etc.).
-If statistics are not gathered on fixed objects, the Optimizer will use predefined default values for them. These defaults may lead to inaccurate execution plans.
-Statistics on fixed objects are not gathered automatically or as part of gathering DB stats.

How frequent to gather stats on fixed objects?
-------------------------------------------------------
Only once for a representative workload, unless you hit one of these cases:

- After a major database or application upgrade.
- After implementing a new module.
- After changing the database configuration. e.g. changing the size of memory pools (sga,pga,..).
- Poor performance/Hang encountered while querying dynamic views e.g. V$ views.


Note:
- It's recommended to gather the fixed object stats during peak hours (when the system is busy), or just after the peak hours while the sessions are still connected (even if they are idle), to guarantee that the fixed object tables have been populated and the statistics represent the DB activity well.
- Also note that performance degradation may be experienced while the statistics are being gathered.
- Having no statistics is better than having non-representative statistics.

How to gather stats on fixed objects:
---------------------------------------------

First Check the last analyzed date:
------ -----------------------------------
SQL> select OWNER, TABLE_NAME, LAST_ANALYZED
        from dba_tab_statistics where table_name='X$KGLDP';
Second Export the current fixed stats in a table: (in case you need to revert back)
------- -----------------------------------
SQL> EXEC DBMS_STATS.CREATE_STAT_TABLE
        ('OWNER','STATS_TABLE_NAME','TABLESPACE_NAME');

SQL> EXEC dbms_stats.export_fixed_objects_stats
        (stattab=>'STATS_TABLE_NAME',statown=>'OWNER');
Third Gather the fixed objects stats:
-------  ------------------------------------
SQL> exec dbms_stats.gather_fixed_objects_stats;

Note:
In case you experience bad performance on fixed tables after gathering the new statistics:

SQL> exec dbms_stats.delete_fixed_objects_stats();
SQL> exec DBMS_STATS.import_fixed_objects_stats
        (stattab =>'STATS_TABLE_NAME',STATOWN =>'OWNER');


#################
SYSTEM STATISTICS
#################

What is system statistics:
-------------------------------
System statistics are statistics about CPU speed and IO performance, it enables the CBO to
effectively cost each operation in an execution plan. Introduced in Oracle 9i.

Why gathering system statistics:
----------------------------------------
Oracle highly recommends gathering system statistics during a representative workload,
ideally at peak workload time, in order to provide more accurate CPU/IO cost estimates to the optimizer.
You only have to gather system statistics once.

There are two types of system statistics (NOWORKLOAD statistics & WORKLOAD statistics):

NOWORKLOAD statistics:
-----------------------------------
This will simulate a workload (not the real one, but a simulation) and will not collect full statistics. It's less accurate than "WORKLOAD statistics", but if you can't capture the statistics during a typical workload you can use noworkload statistics.
To gather noworkload statistics:
SQL> execute dbms_stats.gather_system_stats();

WORKLOAD statistics:
-------------------------------
This will gather statistics during the current workload [which is supposed to be representative of the actual system I/O and CPU workload on the DB].
To gather WORKLOAD statistics:
SQL> execute dbms_stats.gather_system_stats('start');
Once the workload window ends after 1,2,3.. hours or whatever, stop the system statistics gathering:
SQL> execute dbms_stats.gather_system_stats('stop');
You can use time interval (minutes) instead of issuing start/stop command manually:
SQL> execute dbms_stats.gather_system_stats('interval',60);

Check the system values collected:
-------------------------------------------
col pname format a20
col pval2 format a40
select * from sys.aux_stats$;

cpuspeedNW: The noworkload CPU speed (average number of CPU cycles per second).
ioseektim:  The sum of seek time, latency time, and OS overhead time.
iotfrspeed: I/O transfer speed; tells the optimizer how fast the DB can read data in a single read request.
cpuspeed:   The CPU speed during workload statistics collection.
maxthr:     The maximum I/O throughput.
slavethr:   Average parallel slave I/O throughput.
sreadtim:   The Single Block Read Time: average time for a random single-block read.
mreadtim:   The average time (seconds) for a sequential multiblock read.
mbrc:       The average multiblock read count in blocks.

Notes:
-When gathering NOWORKLOAD statistics it will gather (cpuspeedNW, ioseektim, iotfrspeed) system statistics only.
-The above values can be modified manually using the DBMS_STATS.SET_SYSTEM_STATS procedure (see the sketch after these notes).
-According to Oracle, collecting workload statistics doesn't impose an additional overhead on your system.
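A minimal sketch of setting one of these values manually (the figures below are made up; use values measured on your own environment):

SQL> exec dbms_stats.set_system_stats('MBRC', 16);
SQL> exec dbms_stats.set_system_stats('SREADTIM', 5);
SQL> exec dbms_stats.set_system_stats('MREADTIM', 10);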

Delete system statistics:
------------------------------
SQL> execute dbms_stats.delete_system_stats();


####################
Data Dictionary Statistics
####################

Facts:
-------
> Dictionary tables are the tables owned by SYS and residing in the system tablespace.
> Normally data dictionary statistics in 9i are not required unless performance issues are detected.
> In 10g, statistics on the dictionary tables are maintained by the automatic statistics gathering job run during the nightly maintenance window.

If you choose to switch off that job for application schemas, consider leaving it on for the dictionary tables. You can do this by changing the value of AUTOSTATS_TARGET from AUTO to ORACLE using this procedure:

SQL> Exec DBMS_STATS.SET_PARAM('AUTOSTATS_TARGET','ORACLE');

When to gather Dictionary statistics:
---------------------------------------------
-After DB upgrades.
-After creation of a new big schema.
-Before and after big datapump operations.

Check last Dictionary statistics date:
---------------------------------------------
SQL> select table_name, last_analyzed from dba_tables
     where owner='SYS' and table_name like '%$' order by 2;

Gather Dictionary Statistics: 
-----------------------------------
SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
->Will gather stats on 20% of SYS schema tables.
or...
SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS ('SYS');
->Will gather stats on 100% of SYS schema tables.
or...
SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(gather_sys=>TRUE);
->Will gather stats on the whole DB+SYS schema.



################
Extended Statistics "11g onwards"
################

Extended statistics can be gathered on columns based on functions or column groups.

Gather extended stats on column function:
====================================
If you run a query with a function like UPPER/LOWER on a column in the WHERE clause, the optimizer's estimates will be off and the index on that column will not be used:
SQL> select count(*) from EMP where lower(ename) = 'scott';

In order to make the optimizer work well with function-based predicates, you need to gather extended statistics:

1-Create extended stats:
>>>>>>>>>>>>>>>>>>>>
SQL> select dbms_stats.create_extended_stats('SCOTT','EMP','(lower(ENAME))') from dual;

2-Gather histograms:
>>>>>>>>>>>>>>>>>
SQL> exec dbms_stats.gather_table_stats('SCOTT','EMP', method_opt=> 'for all columns size skewonly');

OR
----
*You can do it also in one Step:
>>>>>>>>>>>>>>>>>>>>>>>>>

SQL> Begin dbms_stats.gather_table_stats
     (ownname => 'SCOTT',tabname => 'EMP',
     method_opt => 'for all columns size skewonly for
     columns (lower(ENAME))');
     end;
     /

To check the Existence of extended statistics on a table:
----------------------------------------------------------------------
SQL> select extension_name,extension from dba_stat_extensions where owner='SCOTT'and table_name = 'EMP';
SYS_STU2JLSDWQAFJHQST7$QK81_YB (LOWER("ENAME"))

Drop extended stats on column function:
------------------------------------------------------
SQL> exec dbms_stats.drop_extended_stats('SCOTT','EMP','(LOWER("ENAME"))');

Gather extended stats on column group: -related columns-
=================================
Certain columns in a table that are used together in join or WHERE conditions are correlated, e.g. (country, state). You want to make the optimizer aware of the relationship between two or more columns instead of it using separate statistics for each column. By creating extended statistics on a group of columns, the optimizer can determine more accurately the relation between the columns when they are used together in the WHERE clause of a SQL statement. For example, columns like country_id and state_name have a relationship: a state like Texas can only be found in the USA, so the value of state_name is always influenced by country_id.
If extra columns are referenced in the WHERE clause along with the column group, the optimizer will still make use of the column group statistics.

1- create a column group:
>>>>>>>>>>>>>>>>>>>>>
SQL> select dbms_stats.create_extended_stats('SH','CUSTOMERS', '(country_id,cust_state_province)')from dual;
2- Re-gather stats|histograms for table so optimizer can use the newly generated extended statistics:
>>>>>>>>>>>>>>>>>>>>>>>
SQL> exec dbms_stats.gather_table_stats ('SH','customers',method_opt=> 'for all columns size skewonly');

OR
---

*You can do it also in one Step:
>>>>>>>>>>>>>>>>>>>>>>>>>

SQL> Begin dbms_stats.gather_table_stats
     (ownname => 'SH',tabname => 'CUSTOMERS',
     method_opt => 'for all columns size skewonly for
     columns (country_id,cust_state_province)');
     end;
     /

Drop extended stats on column group:
--------------------------------------------------
SQL> exec dbms_stats.drop_extended_stats('SH','CUSTOMERS', '(country_id,cust_state_province)');


#########
Histograms
#########

What are Histograms?
-----------------------------
> Hold data about the values within a column in a table: the number of occurrences of a specific value/range.
> Used by the CBO to decide how best to access the data for a query, e.g. use an index (fast full scan) or a full table scan.
> Usually used on columns whose data is repeated frequently, like a country or city column.
> Gathering histograms on a column whose values are all distinct (e.g. a PK) is useless, because the values are not repeated.
> Two types of histograms can be gathered:
  -Frequency histograms: used when the number of distinct values (buckets) in the column is less than 255 (e.g. the number of countries is always less than 254).
  -Height-balanced histograms: similar in purpose, but used when the number of distinct values is > 254.
    See an example: http://aseriesoftubes.com/articles/beauty-and-it/quick-guide-to-oracle-histograms
> Collected by DBMS_STATS (which by default doesn't collect histograms; it deletes them if you didn't use the METHOD_OPT parameter).
> Mainly gathered on foreign key columns and columns used in WHERE clauses.
> Help in SQL multi-table joins.
> Column histograms, like statistics, are stored in the data dictionary.
> If the application exclusively uses bind variables, Oracle recommends deleting any existing histograms and disabling Oracle histogram generation (a minimal sketch follows).
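A minimal sketch of that approach, assuming the application schema is SCOTT and a known bind-sensitive column EMP.DEPTNO (first stop future histogram creation for the schema, then drop the existing histogram on that column):

SQL> exec dbms_stats.set_schema_prefs('SCOTT','METHOD_OPT','FOR ALL COLUMNS SIZE 1');
SQL> exec dbms_stats.delete_column_stats(ownname=>'SCOTT', tabname=>'EMP', colname=>'DEPTNO', col_stat_type=>'HISTOGRAM');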

Cautions:
   – Do not create them on Columns that are not being queried.
   – Do not create them on every column of every table.
   – Do not create them on the primary key column of a table.

Verify the existence of histograms:
---------------------------------------------
SQL> select column_name,histogram from dba_tab_col_statistics
     where owner='SCOTT' and table_name='EMP';

Creating Histograms:
---------------------------
e.g.
SQL> Exec dbms_stats.gather_schema_stats
     (ownname => 'SCOTT',
     estimate_percent => dbms_stats.auto_sample_size,
     method_opt => 'for all columns size auto',
     degree => 7);


method_opt:
FOR COLUMNS SIZE AUTO => Fastest; you can specify one column instead of all columns.
FOR ALL COLUMNS SIZE REPEAT => Prevents deletion of histograms; collects them only for columns that already have histograms.
FOR ALL COLUMNS => Collects histograms on all columns.
FOR ALL COLUMNS SIZE SKEWONLY => Collects histograms for columns that have skewed values.
FOR ALL INDEXED COLUMNS => Collects histograms for indexed columns.

Note: AUTO & SKEWONLY will let Oracle decide whether to create the Histograms or not.

Check the existence of Histograms:
SQL> select column_name, count(*) from dba_tab_histograms
     where OWNER='SCOTT' and table_name='EMP' group by column_name;

Drop Histograms: 11g
----------------------
e.g.
SQL> Exec dbms_stats.delete_column_stats
     (ownname=>'SH', tabname=>'SALES',
     colname=>'PROD_ID', col_stat_type=>'HISTOGRAM');


Stop gather Histograms: 11g
------------------------------
[This will change the default table options]
e.g.
SQL> Exec dbms_stats.set_table_prefs
     ('SH', 'SALES','METHOD_OPT', 'FOR ALL COLUMNS SIZE AUTO,FOR COLUMNS SIZE 1 PROD_ID');
>Will continue to collect histograms as usual on all columns in the SALES table except for PROD_ID column.

Drop Histograms: 10g
----------------------
e.g.
SQL> exec dbms_stats.delete_column_stats(user,'T','USERNAME');


################################
Save/IMPORT & RESTORE STATISTICS:
################################
====================
Export /Import Statistics:
====================
In this way statistics are exported into a table and can be imported later from that table.

1-Create STATS TABLE:
-  -----------------------------
SQL> Exec dbms_stats.create_stat_table(ownname => 'SYSTEM', stattab => 'prod_stats',tblspace => 'USERS');

2-Export statistics to the STATS table: [Backup Statistics]
---------------------------------------------------
The following will backup the statistics into PROD_STATS table which we just created under SYSTEM schema.

For Database stats:
SQL> Exec dbms_stats.export_database_stats(statown => 'SYSTEM', stattab => 'prod_stats');
For System stats:
SQL> Exec dbms_stats.export_SYSTEM_stats(statown => 'SYSTEM', stattab => 'prod_stats');
For Dictionary stats:
SQL> Exec dbms_stats.export_Dictionary_stats(statown => 'SYSTEM', stattab => 'prod_stats');
For Fixed Tables stats:
SQL> Exec dbms_stats.export_FIXED_OBJECTS_stats(statown => 'SYSTEM', stattab => 'prod_stats');
For Schema stats:
SQL> EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(ownname=>'SCHEMA_NAME',stattab=>'STATS_TABLE',statown=>'STATS_TABLE_OWNER');
e.g.
SQL> EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(ownname=>'SCOTT',stattab=>'prod_stats',statown=>'system');
For Table:
SQL> Conn scott/tiger
SQL> Exec dbms_stats.export_TABLE_stats(ownname => 'SCOTT',tabname => 'EMP',statown => 'SYSTEM', stattab => 'prod_stats');
For Index:
SQL> Exec dbms_stats.export_INDEX_stats(ownname => 'SCOTT',indname => 'PK_EMP',statown => 'SYSTEM', stattab => 'prod_stats');
For Column:
SQL> Exec dbms_stats.export_COLUMN_stats (ownname=>'SCOTT',tabname=>'EMP',colname=>'EMPNO',statown => 'SYSTEM', stattab=>'prod_stats');

Parameters:
ownname: The owner of the object that will have its statistics backed up.
tabname: The table name which will have its stats backed up.
indname: The index name which will have its stats backed up.
statown: The owner of the table which stores the backed up statistics.
stattab: The table which stores the backed up statistics.

3-Import statistics from PROD_STATS table to the dictionary:
---------------------------------------------------------------------------------
For Database stats:
SQL> Exec DBMS_STATS.IMPORT_DATABASE_STATS
     (stattab => 'prod_stats',statown => 'SYSTEM');
For System stats:
SQL> Exec DBMS_STATS.IMPORT_SYSTEM_STATS
     (stattab => 'prod_stats',statown => 'SYSTEM');
For Dictionary stats:
SQL> Exec DBMS_STATS.IMPORT_Dictionary_STATS
     (stattab => 'prod_stats',statown => 'SYSTEM');
For Fixed Tables stats:
SQL> Exec DBMS_STATS.IMPORT_FIXED_OBJECTS_STATS
     (stattab => 'prod_stats',statown => 'SYSTEM');
For Schema stats:
SQL> Exec DBMS_STATS.IMPORT_SCHEMA_STATS
     (ownname => 'SCOTT',stattab => 'prod_stats', statown => 'SYSTEM');
For Table stats and its indexes:
SQL> Exec dbms_stats.import_TABLE_stats
     ( ownname => 'SCOTT', stattab => 'prod_stats',tabname => 'EMP');
For Index:
SQL> Exec dbms_stats.import_INDEX_stats
     ( ownname => 'SCOTT', stattab => 'prod_stats', indname => 'PK_EMP');
For COLUMN:
SQL> Exec dbms_stats.import_COLUMN_stats
     (ownname=>'SCOTT',tabname=>'EMP',colname=>'EMPNO',stattab=>'prod_stats');

Parameters:
ownname: The owner of the object whose statistics will be imported.
tabname: The table whose stats will be imported.
indname: The index whose stats will be imported.
statown: The owner of the table which stores the backed up statistics.
stattab: The table which stores the backed up statistics.

4-Drop STAT Table: 
--------------------------
SQL> Exec dbms_stats.DROP_STAT_TABLE (stattab => 'prod_stats',ownname => 'SYSTEM');

===============
Restore statistics: -From Dictionary-
===============
Old statistics are saved automatically in SYSAUX for 31 days.

Restore Dictionary stats as of timestamp:
------------------------------------------------------
SQL> Exec DBMS_STATS.RESTORE_DICTIONARY_STATS(sysdate-1);

Restore Database stats as of timestamp:
----------------------------------------------------
SQL> Exec DBMS_STATS.RESTORE_DATABASE_STATS(sysdate-1);

Restore SYSTEM stats as of timestamp:
----------------------------------------------------
SQL> Exec DBMS_STATS.RESTORE_SYSTEM_STATS(sysdate-1);

Restore FIXED OBJECTS stats as of timestamp:
----------------------------------------------------------------
SQL> Exec DBMS_STATS.RESTORE_FIXED_OBJECTS_STATS(sysdate-1);

Restore SCHEMA stats as of timestamp:
---------------------------------------
SQL> Exec dbms_stats.restore_SCHEMA_stats
     (ownname=>'SYSADM',AS_OF_TIMESTAMP=>sysdate-1);
OR:
SQL> Exec dbms_stats.restore_schema_stats
     (ownname=>'SYSADM',AS_OF_TIMESTAMP=>'20-JUL-2008 11:15:00AM');

Restore Table stats as of timestamp:
------------------------------------------------
SQL> Exec DBMS_STATS.RESTORE_TABLE_STATS
     (ownname=>'SYSADM', tabname=>'T01POHEAD',AS_OF_TIMESTAMP=>sysdate-1);


Delete Statistics:
==============
For Database stats:
SQL> Exec DBMS_STATS.DELETE_DATABASE_STATS ();
For System stats:
SQL> Exec DBMS_STATS.DELETE_SYSTEM_STATS ();
For Dictionary stats:
SQL> Exec DBMS_STATS.DELETE_DICTIONARY_STATS ();
For Fixed Tables stats:
SQL> Exec DBMS_STATS.DELETE_FIXED_OBJECTS_STATS ();
For Schema stats:
SQL> Exec DBMS_STATS.DELETE_SCHEMA_STATS ('SCOTT');
For Table stats and it's indexes:
SQL> Exec dbms_stats.DELETE_TABLE_stats(ownname=>'SCOTT',tabname=>'EMP');
For Index:
SQL> Exec dbms_stats.DELETE_INDEX_stats(ownname => 'SCOTT',indname => 'PK_EMP');
For Column:
SQL> Exec dbms_stats.DELETE_COLUMN_stats(ownname =>'SCOTT',tabname=>'EMP',colname=>'EMPNO');

Note: These deletions can be rolled back by restoring the statistics using the DBMS_STATS.RESTORE_* procedures.


Pending Statistics:  "11g onwards"
===============
What is Pending Statistics:
Pending statistics is a feature that lets you test newly gathered statistics without letting the CBO (Cost Based Optimizer) use them "system wide" unless you publish them.

How to use Pending Statistics:
Switch on pending statistics mode:
SQL> Exec DBMS_STATS.SET_GLOBAL_PREFS('PUBLISH','FALSE');
Note: Any new statistics gathered on the database will be marked PENDING unless you change the previous parameter back to TRUE:
SQL> Exec DBMS_STATS.SET_GLOBAL_PREFS('PUBLISH','TRUE');

Gather statistics: "as you used to do"
SQL> Exec DBMS_STATS.GATHER_TABLE_STATS('sh','SALES');
Enable using pending statistics on your session only:
SQL> Alter session set optimizer_use_pending_statistics=TRUE;
Then any SQL statement you will run will use the new pending statistics...

When proven OK, publish the pending statistics:
SQL> Exec DBMS_STATS.PUBLISH_PENDING_STATS();

Once you finish don't forget to return the Global PUBLISH parameter to TRUE:
SQL> Exec DBMS_STATS.SET_GLOBAL_PREFS('PUBLISH','TRUE');
>If you don't, all newly gathered statistics on the database will be marked as PENDING, which may confuse you or any other DBA working on this DB who is not aware of that parameter change.


Lock Statistics:
=============
Gathering new statistics is not always a good approach; it may change your applications' queries'/reports' execution plans for the worse, and it's not guaranteed that gathering new statistics will lead to better execution plans! I've learned this lesson the hard way, but having a backup of the old statistics before gathering new ones has saved my day!
This is why you may want to avoid a scenario where one of the DBAs in your team accidentally gathers new statistics on the whole DB, scrambling most of the execution plans of the application queries in the hope of generating better ones. In such cases you need to lock the statistics on one or more schemas, or on key tables, in order to prevent their statistics from being refreshed by such unattended maintenance activities.

To lock the statistics on all tables under a specific schema:
SQL> exec dbms_stats.lock_schema_stats('SCHEMA_NAME');
e.g. exec dbms_stats.lock_schema_stats('SCOTT');


To lock the statistics on a specific table:

SQL> exec dbms_stats.lock_table_stats('OWNER','TABLE_NAME');
e.g. exec dbms_stats.lock_table_stats('SCOTT','EMP');

Note: This will lock the table's statistics and its indexes.

When you need to gather new statistics on tables that have their statistics locked, you must first unlock the statistics, then gather new statistics as usual (see the example below).
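Illustrative sequence, using SCOTT.EMP and re-locking afterwards so the new stats stay protected:

SQL> exec dbms_stats.unlock_table_stats('SCOTT','EMP');
SQL> exec dbms_stats.gather_table_stats('SCOTT','EMP');
SQL> exec dbms_stats.lock_table_stats('SCOTT','EMP');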

To check all tables that have their statistics locked:
SQL> select OWNER, TABLE_NAME, LAST_ANALYZED, STATTYPE_LOCKED from DBA_TAB_STATISTICS
where STATTYPE_LOCKED is not null
and OWNER not in ('SYS','SYSTEM','SQLTXPLAIN','WMSYS')
order by OWNER, TABLE_NAME;

To unlock all tables under a specific schema:
SQL> exec dbms_stats.unlock_schema_stats('SCHEMA_NAME');
e.g. exec dbms_stats.unlock_schema_stats('SCOTT');

To unlock a specific table:
SQL> exec dbms_stats.unlock_table_stats('OWNER','TABLE_NAME');
e.g. exec dbms_stats.unlock_table_stats('SCOTT','EMP');

Note: This will unlock the table's statistics and its indexes.

=========
Advanced:
=========

To Check current Stats history retention period (days):
-------------------------------------------------------------------
SQL> select dbms_stats.get_stats_history_retention from dual;
SQL> select dbms_stats.get_stats_history_availability from dual;
To modify current Stats history retention period (days):
-------------------------------------------------------------------
SQL> Exec dbms_stats.alter_stats_history_retention(60);

Purge statistics older than 10 days:
------------------------------------------
SQL> Exec DBMS_STATS.PURGE_STATS(SYSDATE-10);

Procedure to claim space after purging statistics:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Space will not be claimed automatically when you purge stats; you must claim it manually using the following procedure:

Check Stats tables size:
>>>>>>
col Mb form 9,999,999
col SEGMENT_NAME form a40
col SEGMENT_TYPE form a6
set lines 160
select sum(bytes/1024/1024) Mb,
segment_name,segment_type from dba_segments
where  tablespace_name = 'SYSAUX'
and segment_name like 'WRI$_OPTSTAT%'
and segment_type='TABLE'
group by segment_name,segment_type
order by 1 asc
/

Check Stats indexes size:
>>>>>
col Mb form 9,999,999
col SEGMENT_NAME form a40
col SEGMENT_TYPE form a6
set lines 160
select sum(bytes/1024/1024) Mb, segment_name,segment_type
from dba_segments
where  tablespace_name = 'SYSAUX'
and segment_name like '%OPT%'
and segment_type='INDEX'
group by segment_name,segment_type
order by 1 asc
/

Move Stats tables in same tablespace:
>>>>>
select 'alter table '||segment_name||'  move tablespace
SYSAUX;' from dba_segments
where tablespace_name = 'SYSAUX'
and segment_name like '%OPT%'
and segment_type='TABLE'
/

Rebuild stats indexes:
>>>>>>
select 'alter index '||segment_name||'  rebuild online;'
from dba_segments
where tablespace_name = 'SYSAUX'
and segment_name like '%OPT%'
and segment_type='INDEX'
/

Check for un-usable indexes:
>>>>>
select di.index_name,di.index_type,di.status
from dba_indexes di , dba_tables dt
where  di.tablespace_name = 'SYSAUX'
and dt.table_name = di.table_name
and di.table_name like '%OPT%'
order by 1 asc
/
