Tuesday, 7 February 2023

Schema refresh from source to target (prod to QA)

 


Source Database Side :



Step 1:


Check whether the schema exists. The schema must be present in the source database.


SQL> select username, account_status, created from dba_users where username='ORACLE';
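
It can also help to note the schema's default and temporary tablespaces at this point, since the import will fail if a referenced tablespace does not exist on the target (these are standard DBA_USERS columns; ORACLE is just the example schema name used throughout this post):

SQL> select username, default_tablespace, temporary_tablespace from dba_users where username='ORACLE';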


Step 2: 


Check the schema size 


SQL> select owner, sum(bytes)/1024/1024 "MB" from dba_segments where owner='ORACLE' group by owner;


Step 3:


Take a count of the schema objects; this will be useful after the refresh is complete to compare the source and target schema objects.


SQL> select object_type, count(*) from dba_objects where owner='ORACLE' group by object_type;
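
Optionally (an extra check, not part of the refresh itself), you can also snapshot per-table row counts from the optimizer statistics; NUM_ROWS reflects the last statistics gather, so treat it as an approximate comparison only:

SQL> select table_name, num_rows from dba_tables where owner='ORACLE' order by table_name;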


Step 4:


Before taking the export, check the size of the mount point where you are going to store the export dumpfile; if the mount point does not have sufficient space, the export job will fail.


Example:


]$  cd /d01/exports


exports]$ df -h .     (Linux)
exports]$ df -g .     (AIX and Sun Solaris)
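
If you want a rough idea of how big the dump will be before reserving space, expdp can estimate the size without writing a dumpfile (ESTIMATE_ONLY and NOLOGFILE are standard Data Pump parameters; NOLOGFILE=YES is used here so no directory object is needed yet):

]$ expdp \'/ AS SYSDBA\' schemas=ORACLE estimate_only=yes nologfile=yes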



Step 5 :


Create a Data Pump directory at the database level. The default Data Pump directory location is "/app/oracle/product/10.2.0/rdbms/log/".


First, create the directory at the OS level:


]$ mkdir -p  /d01/exports 


Next, create it at the database level:


SQL>  create or replace directory DATAPUMP as '/d01/exports';
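
Optionally, verify the directory object, and if a non-SYSDBA user will run the export, grant it read/write on the directory (DBA_DIRECTORIES and this GRANT syntax are standard; the user name below is just a placeholder):

SQL> select directory_name, directory_path from dba_directories where directory_name='DATAPUMP';
SQL> grant read, write on directory DATAPUMP to <export_user>;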


Step 6 :


Now take the export of the schema.


]$ expdp \'/ AS SYSDBA\' directory=DATAPUMP dumpfile=ORACLE_DBNAME.dmp logfile=ORACLE_DBNAME.log schemas=ORACLE
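
While the export is running, you can monitor its progress from another session (DBA_DATAPUMP_JOBS is a standard view):

SQL> select owner_name, job_name, operation, job_mode, state from dba_datapump_jobs;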



Step 7 :


If you are going to import on the same server, there is no need to move the dumpfile anywhere. If you are going to import this dumpfile on some other server, copy it over with the scp command:


exports]$ scp -p ORACLE_DBNAME.dmp username@servername:/d01/targetlocation/   (it will prompt for the target server password)
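
As an optional sanity check (a suggestion, not part of the original procedure), compare checksums on both servers to confirm the dumpfile arrived intact:

exports]$ md5sum ORACLE_DBNAME.dmp     (run on both source and target; the sums should match)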



That completes the steps on the source database side; now we move on to the target database side.


Target Database :


Step 1 :


Check the mount point size; it should be more than the schema size.


]$  cd /d01/exports


exports]$ df -h .     (Linux)
exports]$ df -g .     (AIX and Sun Solaris)


Step 2 :


Create a directory just as we did for the source database.


First, create the directory at the OS level:


]$ mkdir -p  /d01/exports 


Next, create it at the database level:


SQL>  create or replace directory DATAPUMP as '/d01/exports';



Step 3:


Take a backup of the target schema with an export before importing (just to be safe). Use a different dumpfile name so it does not clash with the dump copied from the source.


]$ expdp \'/ AS SYSDBA\' directory=DATAPUMP dumpfile=ORACLE_DBNAME_target_bkp.dmp logfile=ORACLE_DBNAME_target_bkp.log schemas=ORACLE



Step 4 :


SCRIPTS: Find the complete list of objects in the schema


Use these scripts when you do not want to drop the whole schema, but only the objects in the schema, for example to preserve DB links, grants, and privileges.

The scripts below generate the DROP statements needed to drop almost all objects from the specified schema (they prompt you for the schema name; occasionally one or two objects will have problems and you will have to drop them manually).


EXECUTION

Log in to the database where the schema to be dropped exists. Copy and paste the following scripts, double-checking that you are in the correct database:


--Initializations

set linesize 1000;

SET VERIFY OFF

col owner format a15;

col object_name format a30;

col object_type format a20;



--Select Non-system object owners

SELECT OWNER,OBJECT_TYPE,COUNT(*)

FROM SYS.DBA_OBJECTS

WHERE OWNER NOT IN ('SYS','SYSTEM','TSMSYS','ORACLE_OCM','WMSYS','PATMAN','OUTLN','PUBLIC','DBSNMP','XDB','APPQOSSYS','CTXSYS')

GROUP BY OWNER,OBJECT_TYPE

ORDER BY OWNER,OBJECT_TYPE;


--Select specific Owner, Object_Type & Count

SELECT OWNER,OBJECT_TYPE,COUNT(*)

FROM SYS.DBA_OBJECTS

WHERE OWNER = '&&USER'

GROUP BY OWNER,OBJECT_TYPE

ORDER BY OWNER,OBJECT_TYPE;


--Drops: Tables, Indexes, & Triggers

SELECT 'DROP TABLE '||OWNER||'.'||OBJECT_NAME||' CASCADE CONSTRAINTS PURGE;'

FROM SYS.DBA_OBJECTS

WHERE OWNER = '&&USER'

AND OBJECT_TYPE IN ('TABLE');


--Drops: Sequences, Views, Packages, Functions & Procedures, Synonyms, Triggers

SELECT 'DROP '|| OBJECT_TYPE || ' ' ||OWNER||'.'||OBJECT_NAME||';'

FROM SYS.DBA_OBJECTS

WHERE OWNER = '&&USER'

AND OBJECT_TYPE IN ('PACKAGE','SEQUENCE','VIEW','FUNCTION','PROCEDURE','SYNONYM','TRIGGER');


--Drops: Types

SELECT 'DROP '|| OBJECT_TYPE || ' ' ||OWNER||'.'||OBJECT_NAME||' FORCE;'

FROM SYS.DBA_OBJECTS

WHERE OWNER = '&&USER'

AND OBJECT_TYPE IN ('TYPE');


--DO NOT DROP OR IMPORT DBLINKS, EXCLUDE=DATABASE_LINK

SELECT OWNER, OBJECT_TYPE, COUNT(*) 

FROM SYS.DBA_OBJECTS

WHERE OWNER = '&&USER'

AND OBJECT_TYPE IN ('DATABASE LINK')

GROUP BY OWNER, OBJECT_TYPE;
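
A convenient way to run the generated statements (a suggestion, not part of the original scripts) is to spool them into a file and then execute that file:

set heading off feedback off
spool drop_schema_objects.sql
REM  re-run the generator SELECT statements above here
spool off
@drop_schema_objects.sql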


OR


Drop the Schema at destination database:


SQL> drop user <schema_name> cascade;

(It is better to drop only the schema objects instead of dropping the schema itself.)


Step 5 :


Import the dumpfile into the target schema.


]$ impdp \'/ AS SYSDBA\' directory=DATAPUMP dumpfile=ORACLE_DBNAME.dmp logfile=ORACLE_DBNAME_imp.log schemas=ORACLE
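
If the target schema name differs from the source, or some objects were left in place, the standard REMAP_SCHEMA and TABLE_EXISTS_ACTION parameters can be added to the same command (the ORACLE_QA schema name below is only a placeholder):

]$ impdp \'/ AS SYSDBA\' directory=DATAPUMP dumpfile=ORACLE_DBNAME.dmp logfile=ORACLE_DBNAME_imp.log schemas=ORACLE remap_schema=ORACLE:ORACLE_QA table_exists_action=replace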


Step 6 :


Compare the object count with the source database.

SQL> select object_type, count(*) from dba_objects where owner='ORACLE' group by object_type;


If all the object counts match and everything looks good, go ahead and run utlrp.sql.


Step 7 :


Check the invalid objects count:


SQL> select owner, object_type, status, count(*)

from sys.dba_objects

where status = 'INVALID'

group by owner, object_type, status

order by owner, object_type, status;



SQL> @?/rdbms/admin/utlrp.sql
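
After utlrp.sql completes, re-run the invalid objects query above; anything still invalid can be recompiled manually with the standard ALTER ... COMPILE syntax (the object names below are placeholders):

SQL> alter procedure ORACLE.<procedure_name> compile;
SQL> alter package ORACLE.<package_name> compile body;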
