Wednesday 13 December 2023

Script to generate DDL for tables (read only / read write metadata)

export ORACLE_SID="$1"

export ORACLE_HOME=/u01/db/oracle19c/oraHome_1
export PATH=$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=/u01/db/oracle19c/oraHome_1/lib

$ORACLE_HOME/bin/sqlplus /nolog << End_SQL >> r_tables.log

connect / as sysdba

SET head OFF

SET feed OFF
SET trimspool ON
SET linesize 32767
SET pagesize 32767
SET echo OFF
SET termout OFF
SET verify OFF

 

@r_tables.sql

 

End_SQL


+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

more r_tables.sql

spool readwrite_tables.sql
select 'alter table ' || owner || '.' || table_name || ' read write;' from dba_tables where read_only='YES';
spool off

 

spool readonly_tables.sql
select 'alter table ' || owner || '.' || table_name || ' read only;' from dba_tables where read_only='YES';
spool off

Create the public synonyms in the target database.

 

  1. Prepare the script to create the public synonyms in the target database.

cat header.txt > 13_b_get_public_synonym.pomdir01.sql.out

grep "CREATE PUBLIC SYNONYM" 13_b_get_public_synonym.pomdir01.sql.out | grep -v SELECT >> 13_create_public_synonym.sql

echo "quit" >> 13_create_public_synonym.sql



HugePages report script

 


awk '/Hugepagesize:/{p=$2} / 0 /{next} / kB$/{v[sprintf("%9d GB %-s",int($2/1024/1024),$0)]=$2;next} {h[$0]=$2} /HugePages_Total/{hpt=$2} /HugePages_Free/{hpf=$2} {h["HugePages Used (Total-Free)"]=hpt-hpf} END{for(k in v) print sprintf("%-60s %10d",k,v[k]/p); for (k in h) print sprintf("%9d GB %-s",p*h[k]/1024/1024,k)}' /proc/meminfo|sort -nr|grep --color=auto -iE "^|( HugePage)[^:]*"

Tuesday 12 December 2023

Analyze a particular SQL ID and see the trends for the past day


set pages 50000 lines 32767

select
s.snap_id,
to_char(s.begin_interval_time,'HH24:MI') c1,
sql.executions_delta c2,
sql.buffer_gets_delta c3,
sql.disk_reads_delta c4,
sql.iowait_delta c5,
sql.cpu_time_delta c6,
sql.elapsed_time_delta c7
from
dba_hist_sqlstat sql,
dba_hist_snapshot s
where
s.snap_id = sql.snap_id
and s.begin_interval_time > sysdate -1
and
sql.sql_id='&sqlid'
order by c7
/


Do we have multiple plan hash values for the same SQL ID? If so, a changed execution plan may be causing the bad performance.

set pages 50000 lines 32767

select
SQL_ID
, PLAN_HASH_VALUE
, sum(EXECUTIONS_DELTA) EXECUTIONS
, sum(ROWS_PROCESSED_DELTA) CROWS
, trunc(sum(CPU_TIME_DELTA)/1000000/60) CPU_MINS
, trunc(sum(ELAPSED_TIME_DELTA)/1000000/60)  ELA_MINS
from DBA_HIST_SQLSTAT
where SQL_ID in (
'&sqlid')
group by SQL_ID , PLAN_HASH_VALUE
order by SQL_ID, CPU_MINS
/

Friday 10 March 2023

Shell script to send SQL query output as HTML in the mail body

#!/bin/bash

# Oracle credentials

# set Oracle environment variables here

# Email details

TO="<mail id>"

SUBJECT="report"

# get tablespace details

sqlplus -S / as sysdba << EOF > /tmp/tablespace_details.html

SET MARKUP HTML ON

SET FEEDBACK OFF

SET VERIFY OFF

SET UNDERLINE OFF

SET TRIMSPOOL ON

SET LINESIZE 200

SET LONG 50000


-- SQL query here

EOF

#get email content

body=$(cat /tmp/tablespace_details.html)

#send email

export CONTENT_TYPE=text/html

echo -e "To: $TO\nSubject: $SUBJECT\nContent-Type: $CONTENT_TYPE\n\n$body" | /usr/sbin/sendmail -t


# cleanup


rm /tmp/tablespace_details.html



Tuesday 7 February 2023

schema refresh from source to target(prod to qa)

 


Source Database Side :



Step 1:


Check whether the schema exists. The schema must be present in the source database.


SQL> select username, account_status, created from dba_users where username='ORACLE';


Step 2: 


Check the schema size 


SQL> select owner, sum(bytes)/1024/1024 "MB" from dba_segments where owner='ORACLE' group by owner;


Step 3:


Take a count of the schema objects; this will be useful after the refresh completes, to compare the target against the source.


SQL> select object_type, count(*) from dba_objects where owner='ORACLE' group by object_type;


Step 4:


Before taking the export, check the size of the mount point where you are going to store the export dump file; if the mount point does not have sufficient space, the export job will fail.


example :


]$  cd /d01/exports


exports]$ df -h .    (Linux; on AIX and Sun Solaris use df -g .)



Step 5 :


Create a Data Pump directory at the database level; the default Data Pump directory location is "/app/oracle/product/10.2.0/rdbms/log/".


First create the directory at the OS level


]$ mkdir -p  /d01/exports 


Next create it at the database level


SQL>  create or replace directory DATAPUMP as '/d01/exports';


Step 6 :


Now take the export of schema.


]$ expdp \'/ AS SYSDBA\' directory=DATAPUMP dumpfile=ORACLE_DBNAME.dmp logfile=ORACLE_DBNAME.log schemas=ORACLE



Step 7 :


If you are going to import on the same server, there is no need to move the dump file anywhere. If you are going to import this dump file on some other server, copy it over with the scp command.


expdp]$ scp -p ORACLE_DBNAME.dmp username@servername:/d01/targetlocation   (it will ask you for the target server password)



We have completed almost all steps in source database end, now we are moving to target database side,


Target Database :


Step 1 :


Check the mount point size; it should be more than the schema size.


]$  cd /d01/exports


exports]$ df -h .    (Linux; on AIX and Sun Solaris use df -g .)


Step 2 :


Create a directory the same way we created it for the source database.


First create the directory at the OS level


]$ mkdir -p  /d01/exports 


Next create it at the database level


SQL>  create or replace directory DATAPUMP as '/d01/exports';



Step 3:


Take a backup of the target schema with the export command before importing (to be on the safe side).


]$ expdp \'/ AS SYSDBA\' directory=DATAPUMP dumpfile=ORACLE_DBNAME_bkp.dmp logfile=ORACLE_DBNAME_bkp.log schemas=ORACLE   (use a different dump file name so you do not overwrite the source dump in the same directory)



Step 4 :


SCRIPTS: Find the complete list of objects in the schema


Use this script when you do not need to drop the schema itself, but only the objects in it; for example, to preserve DB links, grants, and privileges.

The scripts below generate the drop statements needed to drop almost all objects from a specified schema (it prompts you for the schema name); occasionally one or two objects will have problems and you will have to drop those manually.


EXECUTION

Log in to the database where the schema to be dropped exists. Copy and paste the following script, double-checking that you are in the correct database:


--Initializations

set linesize 1000;

SET VERIFY OFF

col owner format a15;

col object_name format a30;

col object_type format a20;



--Select Non-system object owners

SELECT OWNER,OBJECT_TYPE,COUNT(*)

FROM SYS.DBA_OBJECTS

WHERE OWNER NOT IN ('SYS','SYSTEM','TSMSYS','ORACLE_OCM','WMSYS','PATMAN','OUTLN','PUBLIC','DBSNMP','XDB','APPQOSSYS','CTXSYS')

GROUP BY OWNER,OBJECT_TYPE

ORDER BY OWNER,OBJECT_TYPE;


--Select specific Owner, Object_Type & Count

SELECT OWNER,OBJECT_TYPE,COUNT(*)

FROM SYS.DBA_OBJECTS

WHERE OWNER = '&&USER'

GROUP BY OWNER,OBJECT_TYPE

ORDER BY OWNER,OBJECT_TYPE;


--Drops: Tables (indexes and triggers are dropped along with them)

SELECT 'DROP TABLE '||OWNER||'.'||OBJECT_NAME||' CASCADE CONSTRAINTS PURGE;'

FROM SYS.DBA_OBJECTS

WHERE OWNER = '&&USER'

AND OBJECT_TYPE IN ('TABLE');


--Drops: Sequences, Views, Packages, Functions & Procedures, Synonyms, Triggers

SELECT 'DROP '|| OBJECT_TYPE || ' ' ||OWNER||'.'||OBJECT_NAME||';'

FROM SYS.DBA_OBJECTS

WHERE OWNER = '&&USER'

AND OBJECT_TYPE IN ('PACKAGE','SEQUENCE','VIEW','FUNCTION','PROCEDURE','SYNONYM','TRIGGER');


--Drops: Types

SELECT 'DROP '|| OBJECT_TYPE || ' ' ||OWNER||'.'||OBJECT_NAME||' FORCE;'

FROM SYS.DBA_OBJECTS

WHERE OWNER = '&&USER'

AND OBJECT_TYPE IN ('TYPE');


--DO NOT DROP OR IMPORT DBLINKS, EXCLUDE=DATABASE_LINK

SELECT OWNER, OBJECT_TYPE, COUNT(*) 

FROM SYS.DBA_OBJECTS

WHERE OWNER = '&&USER'

AND OBJECT_TYPE IN ('DATABASE LINK')

GROUP BY OWNER, OBJECT_TYPE;


OR


Drop the Schema at destination database:


SQL> drop user <schema_name> cascade;

(It is better to drop only the schema objects instead of dropping the whole schema.)


Step 5 :


Import the dumpfile into target schema


impdp \'/ AS SYSDBA\' directory=DATAPUMP dumpfile=ORACLE_DBNAME.dmp logfile=ORACLE_DBNAME_imp.log schemas=ORACLE


Step 6 :


Compare the Object Count with source database.

SQL> select object_type, count(*) from dba_objects where owner='ORACLE' group by object_type;


If all the object counts match and everything looks good, go ahead and run utlrp.sql.


Step 7 :


Check the invalid object count


SQL> select owner, object_type, status, count(*)

from sys.dba_objects

where status = 'INVALID'

group by owner, object_type, status

order by owner, object_type, status;



SQL> @?/rdbms/admin/utlrp.sql

Schema Refresh in QA/DEV step by step




Below are the detailed steps for a schema refresh. These steps are very helpful when you want to refresh schemas in QA/DEV and keep all the previous grants and privileges after the refresh.


1. Take the backup of the schema in target database.



expdp directory=<directory_name> dumpfile=backup_%U.dmp logfile=Backup.log parallel=<n> compression=all schemas=schema_name


2. sys_privs.sql :

--------------- 


set echo off

set head off

set pagesize 0

set feedback off

set linesize 200

spool sys_privs.sql

select 'grant '||privilege||' to '||grantee||' ; '  from dba_sys_privs where grantee in ('schema_name');

spool off


3.owner.sql :

-----------


spool owner.sql

select 'grant '||privilege||' on '||OWNER||'.'||table_name||' to  '||grantee||' ; ' from dba_tab_privs WHERE OWNER in ('schema_name');

spool off


4.pre_refresh_role_privs.sql :

----------------------------


set echo off

set head off

set pagesize 0

set feedback off

set linesize 200

spool pre_refresh_role_privs.sql

select 'grant '||granted_role||' to '||grantee||' ; '  from dba_role_privs where grantee in ('schema_name');

spool off


5.Check the object count in the database under that schema

 SELECT object_type,COUNT(*) FROM DBA_OBJECTS WHERE OWNER in ('schema_name') group by object_type ;


6. check the table count

SELECT COUNT(*) FROM DBA_tables WHERE OWNER in ('schema_name');


7.drop all the object under that schema

select 'drop '||object_type||' '||owner||'."'||object_name||'";' from dba_objects where owner in ('schema_names');


8.drop all the tables under that schema

select 'drop table  '||owner||'.'||table_name||' cascade constraints;' from dba_tables where owner in('schema_names');


9.purge the recyclebin

select 'purge table '||owner||'."'||object_name||'";' FROM dba_RECYCLEBIN where owner='schema_name' ;


10.now import data into target database from source dumpfile.

 impdp dumpfile=<dumpfile_name> logfile=import.log directory=<directory_name> schemas=schema_name parallel=<n>


11.check the count of objects after refresh for each schema

SELECT COUNT(*) FROM DBA_OBJECTS WHERE OWNER in ('schema_name');


12.check the table count

SELECT COUNT(*) FROM DBA_tables WHERE OWNER='schema_name';


13.run the spool file

@sys_privs.sql


14. run the spool file

@owner.sql


15.run the spool file

@pre_refresh_role_privs.sql
