Wednesday 26 September 2018

Create Database on target Exadata Machine using DBCA


Step 1 : Create database using DBCA
You can also use the DBCA silent option to create the database, as shown below, but I strongly recommend using the DBCA GUI to create databases on an Exadata Machine.
dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbname mart -sid mart -responseFile NO_VALUE -characterSet AL32UTF8 -sysPassword OraPasswd1 -systemPassword OraPasswd1 -createAsContainerDatabase false -pdbAdminPassword OraPasswd1 -databaseType MULTIPURPOSE -automaticMemoryManagement false -totalMemory 5000 -storageType ASM -diskGroupName DATAC1 -redoLogFileSize 50 -emConfiguration NONE -nodeinfo ex01dbadm01,ex01dbadm02,ex01dbadm03 -ignorePreReqs
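Once DBCA finishes, a quick sanity check that the instances are up on all three nodes (standard srvctl usage; the database name matches the one created above):
srvctl status database -d mart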
Step 2 : Create tablespaces based on source database
I have to enable encryption and compression for my migration, but on an Exadata Machine I recommend enabling at least OLTP compression. Generate the tablespace DDL from the source database:
select 'create bigfile tablespace '||tablespace_name
       ||' DATAFILE ''+DATAC1'' SIZE '||sum(bytes)/1024/1024
       ||'M ENCRYPTION USING ''AES256'' DEFAULT COMPRESS FOR OLTP STORAGE(ENCRYPT);'
  from dba_data_files
 group by tablespace_name;
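The query emits one CREATE statement per tablespace; the output looks roughly like this (the tablespace name and size here are illustrative, and the TDE keystore must already be open on the target before encrypted tablespaces can be created):
create bigfile tablespace FIN_MART_DATA DATAFILE '+DATAC1' SIZE 204800M ENCRYPTION USING 'AES256' DEFAULT COMPRESS FOR OLTP STORAGE(ENCRYPT);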
Step 3 : Create public database link to source database  
Note: The DB link will be used for the network Data Pump import and for comparing objects between the source and target databases.
CREATE PUBLIC DATABASE LINK src_db CONNECT TO umair IDENTIFIED BY "password!" USING 'MART';
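A quick test that the link resolves and logs in:
select * from dual@src_db;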
Step 4 : Create migration directory for logs and dump files
Create a directory object for Data Pump use on both the source and target databases:
create or replace directory migration as '/etc/migration/mart';
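The directory object is only a pointer; the OS path itself must exist on both servers (create it as the oracle OS user, using the same path as above):
mkdir -p /etc/migration/mart
You can confirm the mapping with select directory_name, directory_path from dba_directories;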
Step 5 : Import full database metadata only using the DB link
nohup impdp / full=y content=metadata_only network_link=src_db directory=migration PARALLEL=16 logfile=imp_full_mart.log &
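You can watch the job from another session using the standard Data Pump views:
select job_name, operation, job_mode, state from dba_datapump_jobs;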
Step 6 : Import Data schema by schema (FIN_MART)
Note: I am using the network option for my data migration, but if you don't have enough network bandwidth, or you have a lot of unsupported objects, you should first export and then import the data using dump files (see the sketch after the command below).
nohup impdp / schemas=FIN_MART network_link=src_db exclude=index,constraint TABLE_EXISTS_ACTION=REPLACE directory=migration parallel=16 logfile=impdpFIN_MART.log &
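For reference, the dump-file route would look roughly like this (a sketch reusing the same directory object; the dump file name and the %U wildcard for parallel pieces are my choices). On the source:
expdp / schemas=FIN_MART exclude=index,constraint directory=migration dumpfile=FIN_MART_%U.dmp parallel=16 logfile=expdpFIN_MART_data.log
Copy the dump files to the target, then:
impdp / schemas=FIN_MART TABLE_EXISTS_ACTION=REPLACE directory=migration dumpfile=FIN_MART_%U.dmp parallel=16 logfile=impdpFIN_MART_data.log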
Step 7 : Generate indexes & constraints scripts
First create an export dump file (metadata only):
expdp / schemas=FIN_MART content=metadata_only directory=migration dumpfile=FIN_MART.dmp logfile=expdpFIN_MART.log
Then generate separate SQL scripts for indexes and constraints:
impdp / schemas=FIN_MART include=index sqlfile=FIN_MART_IDX.sql dumpfile=FIN_MART.dmp  directory=migration logfile=imp_FIN_MART_IDX.log
impdp / schemas=FIN_MART include=constraint sqlfile=FIN_MART_CONS.sql dumpfile=FIN_MART.dmp directory=migration logfile=imp_FIN_MART_CONS.log
Step 8 : Create indexes using SQL script
Update the FIN_MART_IDX.sql script, replace PARALLEL 1 with PARALLEL 16, and then execute it:
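One way to make the substitution (assuming the generated DDL uses the literal clause PARALLEL 1):
sed -i 's/PARALLEL 1/PARALLEL 16/g' FIN_MART_IDX.sql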
nohup sqlplus umair/password! @FIN_MART_IDX.sql &
Step 9 : Enable constraints using SQL script
Update the FIN_MART_CONS.sql script, replace ENABLE with ENABLE NOVALIDATE, and then execute it:
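Again sed can do the edit (an illustrative one-liner assuming each constraint statement ends with the ENABLE keyword; review the result before running the script). NOVALIDATE skips re-checking the rows that were just imported, so the constraints come online quickly; you can run ALTER TABLE ... MODIFY CONSTRAINT ... VALIDATE later if full validation is needed:
sed -i 's/ENABLE;/ENABLE NOVALIDATE;/g' FIN_MART_CONS.sql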
nohup sqlplus umair/password! @FIN_MART_CONS.sql &
Step 10 : Validate objects
Lastly, validate the objects using the following SQL and create any objects that are missing:
select owner, object_type, count(*) missing_objects
  from (select owner, object_type, object_name
          from dba_objects@src_db
         where owner = 'FIN_MART'
        minus
        select owner, object_type, object_name
          from dba_objects
         where owner = 'FIN_MART')
 group by owner, object_type
 order by owner, object_type;
You can also compare used space per schema and tablespace on both databases:
select substr(owner,1,10) owner,
       substr(tablespace_name,1,10) tablespace,
       sum(bytes)/1024/1024/1024 "SIZE_IN_GB"
  from dba_segments
 where owner not in ('SYS','SYSTEM','APEX_04020','WMSYS','XDB','MDSYS','AUDSYS','DBSNMP','SCOTT')
 group by owner, tablespace_name
 order by 3 desc, 1;

