Sunday 31 December 2017

Elastic Block Store (EBS) -- AWS

Amazon Elastic Block Store (EBS) provides block level storage volumes for use with EC2 instances. 

EBS volumes are highly available and reliable storage volumes that can be attached to any running instance that is in the same Availability Zone. 

With Amazon EBS, you pay only for what you use. 

You can create EBS Provisioned IOPS (SSD) and Throughput Optimized (HDD) volumes of up to 16 TiB.

You can mount these volumes as devices on your Amazon EC2 instances. 



EBS volumes that are attached to an EC2 instance are exposed as storage volumes that persist independently from the life of the instance. 


You can mount multiple volumes on the same instance, but each volume can be attached to only one instance at a time.

You can dynamically change the configuration of a volume attached to an instance.

EBS volumes behave like raw, unformatted block devices. We can create a file system on top of these volumes.
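A typical flow after attaching a volume looks like the sketch below (the device name /dev/xvdf and the /data mount point are assumptions; actual device names vary by instance and volume type):

sudo mkfs -t ext4 /dev/xvdf    # create a file system on the raw block device
sudo mkdir /data               # create a mount point
sudo mount /dev/xvdf /data     # mount the volume on the instance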






Saturday 30 December 2017

Blocking on SQL Server

To check blocking


select * from sys.sysprocesses where blocked <> 0
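
A DMV-based variant (a sketch using sys.dm_exec_requests, which also reports the blocker and the current wait) returns the same information:

select session_id, blocking_session_id, wait_type, wait_time, status
from sys.dm_exec_requests
where blocking_session_id <> 0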


head blocker

++++++++++

declare @handle binary(20), @spid int

select top 1 @spid = blocked
from master..sysprocesses a
where a.blocked != 0 
and a.blocked not in (select spid from master..sysprocesses b where blocked != 0)

if @spid is not null
begin
  select @handle = sql_handle from master..sysprocesses where spid = @spid

  exec sp_who2 @spid

  dbcc inputbuffer (@spid)

select lastwaittype from sysprocesses where spid=@spid
  select * from ::fn_get_sql(@handle)

end  

Last successful backup in SQL Server - script

SET quoted_identifier OFF
DECLARE @dbname AS VARCHAR(80)
DECLARE @msgdb AS VARCHAR(100)
DECLARE @dbbkpname AS VARCHAR(80)
DECLARE @dypart1 AS VARCHAR(2)
DECLARE @dypart2 AS VARCHAR(3)
DECLARE @dypart3 AS VARCHAR(4)
DECLARE @currentdate AS VARCHAR(10)
DECLARE @server_name AS VARCHAR(30)
SELECT @server_name = @@servername
SELECT @dypart1 = DATEPART(dd,GETDATE())
SELECT @dypart2 = DATENAME(mm,GETDATE())
SELECT @dypart3 = DATEPART(yy,GETDATE())
SELECT @currentdate= @dypart1 + @dypart2 + @dypart3

SELECT SUBSTRING(s.name,1,50) AS 'DATABASE Name',
b.backup_start_date AS 'Full DB Backup Status',
c.backup_start_date AS 'Differential DB Backup Status',
d.backup_start_date AS 'Transaction Log Backup Status'
FROM MASTER..sysdatabases s
LEFT OUTER JOIN msdb..backupset b
ON s.name = b.database_name
AND b.backup_start_date =
(SELECT MAX(backup_start_date)AS 'Full DB Backup Status'
FROM msdb..backupset
WHERE database_name = b.database_name
AND TYPE = 'D') -- full database backups only, not log backups
LEFT OUTER JOIN msdb..backupset c
ON s.name = c.database_name
AND c.backup_start_date =
(SELECT MAX(backup_start_date) AS 'Differential DB Backup Status'
FROM msdb..backupset
WHERE database_name = c.database_name
AND TYPE = 'I') -- differential database backups only
LEFT OUTER JOIN msdb..backupset d
ON s.name = d.database_name
AND d.backup_start_date =
(SELECT MAX(backup_start_date) AS 'Transaction Log Backup Status'
FROM msdb..backupset
WHERE database_name = d.database_name
AND TYPE = 'L') -- transaction log backups only
ORDER BY s.name

Procedure to attend to SQL Server memory issues

Check SQL Server is installed on the server


Confirm SQL Server is installed on the server through services.msc or SQL Server Configuration Manager.


Start → Run → services.msc and verify SQL services are visible, like SQL Server, SQL Server Agent, etc.


SQL services configuration

Start → Programs → Microsoft SQL Server 20xx → Configuration Tools → SQL Server Configuration Manager


Verify SQL maintenance jobs status


Verify whether SQL maintenance jobs like Reindex, Update Stats, or CheckDB are running during business hours. These jobs are created and scheduled to run on weekends or during off-business hours. If any maintenance job is found running during business hours, decide whether to let it finish or stop it and reschedule it.
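
Currently executing jobs can be listed with msdb's job helper (a quick sketch):

exec msdb.dbo.sp_help_job @execution_status = 1  -- 1 = currently executing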

Check blocking

select * from sysprocesses where blocked > 0

If many blocked sessions are found, the head blocker can be determined using the query below.


declare @handle binary(20), @spid int

select top 1 @spid = blocked
from master..sysprocesses a
where a.blocked != 0
and a.blocked not in (select spid from master..sysprocesses b where blocked != 0)

if @spid is not null
begin
  select @handle = sql_handle from master..sysprocesses where spid = @spid

  exec sp_who2 @spid

  dbcc inputbuffer (@spid)

--exec ('kill ' + @spid)

  select lastwaittype from sysprocesses where spid = @spid
  select * from ::fn_get_sql(@handle)

end

Verify SQL Server Max. Memory settings

The minimum and maximum memory values are important, especially since we gave the SQL Server account permission to lock its pages in memory. If other applications are running on this server, we need to specify how much memory we want SQL Server to take.
SQL Server is a memory-consuming application, so restrict the instance with minimum and maximum memory settings, leaving some amount of memory for the operating system.
Go into SQL Server Management Studio, right-click on the server name and click Properties, go into Memory, and verify the max server memory set for the SQL Server instance.
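
If the value needs adjusting, it can also be set with T-SQL. A minimal sketch (the 12288 MB cap is an assumption sized for a 16 GB server, leaving memory for the operating system):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 12288;  -- assumption: 12 GB cap
RECONFIGURE;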




Check Memory Pressure


These queries provide information about how many queries are requesting memory, how many grants have been given, and how many are still waiting to be granted.

--Query 3:
SELECT * FROM sys.dm_exec_query_resource_semaphores

--Query 4:
SELECT TEXT,* FROM sys.dm_exec_query_memory_grants
cross apply sys.dm_exec_sql_text (sql_handle)
order by requested_memory_kb desc

During memory pressure, sessions waiting for a memory grant accumulate the RESOURCE_SEMAPHORE wait type.
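
A quick check for such sessions (a sketch against sys.dm_exec_requests):

--Query 5:
SELECT session_id, wait_type, wait_time, blocking_session_id
FROM sys.dm_exec_requests
WHERE wait_type LIKE 'RESOURCE_SEMAPHORE%'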






File movement in SQL Server

USE master;
GO

ALTER DATABASE <Database name>
MODIFY FILE (NAME = <database>_Data,
 FILENAME = 'J:\sqldb03\MSSQL12.MSSQLSERVER\data\<database>.mdf');

go

ALTER DATABASE  <Database name>
SET SINGLE_USER WITH ROLLBACK IMMEDIATE

go

ALTER DATABASE  <Database name>
SET OFFLINE
GO

Move the files physically at the OS level, from
U:\SQL01\MSSQL11.SQLE\MSSQL\Data to U:\SQL07\MSSQL11.SQLE\MSSQL\Data

ALTER DATABASE  <Database name> SET ONLINE
go

ALTER DATABASE  <Database name> SET MULTI_USER

Go
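
After the database is back online, confirm the catalog points at the new location (a sketch; substitute your database name):

SELECT name, physical_name, state_desc
FROM sys.master_files
WHERE database_id = DB_ID('<Database name>');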

Thursday 21 December 2017

AWS

Amazon Elastic Compute Cloud (Amazon EC2)

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable
compute capacity in the cloud. It allows organizations to obtain and configure virtual servers
in Amazon’s data centers and to harness those resources to build and host software systems.
Organizations can select from a variety of operating systems and resource configurations
(memory, CPU, storage, and so on) that are optimal for the application profile of each
workload. Amazon EC2 presents a true virtual computing environment, allowing
organizations to launch compute resources with a variety of operating systems, load them
with custom applications, and manage network access permissions while maintaining
complete control.

AWS Lambda

AWS Lambda is a zero-administration compute platform for back-end web developers that
runs your code for you on the AWS Cloud and provides you with a fine-grained pricing
structure. AWS Lambda runs your back-end code on its own AWS compute fleet of Amazon
EC2 instances across multiple Availability Zones in a region, which provides the high
availability, security, performance, and scalability of the AWS infrastructure.

Auto Scaling

Auto Scaling allows organizations to scale Amazon EC2 capacity up or down automatically
according to conditions defined for the particular workload (see Figure 1.3). Not only can it be
used to help maintain application availability and ensure that the desired number of Amazon
EC2 instances are running, but it also allows resources to scale in and out to match the
demands of dynamic workloads. Instead of provisioning for peak load, organizations can
optimize costs and use only the capacity that is actually needed.



ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT


Sometimes we face this error when we run batch jobs with a very high degree of parallelism. Oracle is unable to allocate more PGA, so it kills the session and the batch job fails. This can happen even when PGA_AGGREGATE_LIMIT is set very high.

Batch job sessions fail with the errors below.

ORA-00028: your session has been killed
ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT
PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT of 2048 MB

To avoid this error: select the correct degree of parallelism

Selecting the degree of parallelism is very important; the number of parallel sessions depends on the number of server CPU cores.

The degree of parallelism can be calculated with the equations below.

For a single instance, DOP = PARALLEL_THREADS_PER_CPU x CPU_COUNT.
For an Oracle RAC configuration, DOP = PARALLEL_THREADS_PER_CPU x CPU_COUNT x INSTANCE_COUNT

For example, with PARALLEL_THREADS_PER_CPU = 2 (a typical value), an 8-core server gives DOP = 2 x 8 = 16, so running more than 16 parallel sessions on such a server is not a good idea.

You may have a look at this oracle document for more details.

https://docs.oracle.com/cd/E11882_01/server.112/e25523/parallel002.htm

Check whether the degree of parallelism is within the recommended value. If it is, check whether the PGA_AGGREGATE_LIMIT parameter is set to a small value; you may increase it according to the degree of parallelism.
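
The inputs to the DOP formula and the current limit can be checked with a quick query (a sketch; run with a privileged account):

select name, value from v$parameter
where name in ('parallel_threads_per_cpu', 'cpu_count', 'pga_aggregate_limit');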

If you still face the issue and your database is 12c, you may revert to pre-12c behaviour by setting pga_aggregate_limit to 0.

alter system set pga_aggregate_limit=0 scope=both;

Setting a higher value for pga_aggregate_limit:

alter system set pga_aggregate_limit=<Value> scope=both;

This parameter change does not require an instance restart.
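
To see how much PGA the instance is actually using against that limit, query v$pgastat (a sketch):

select name, round(value/1024/1024) as mb
from v$pgastat
where name in ('total PGA allocated', 'maximum PGA allocated');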

Log miner in oracle



LogMiner is an Oracle utility. Using LogMiner one can query the contents of online redo log files and archived log files. 

·         It can be used as a powerful data audit tool, as well as a tool for sophisticated data analysis.
·         The LogMiner tool can help the DBA find changed records in redo log files by using a set of PL/SQL procedures and functions.
·         LogMiner extracts all DDL and DML activity from the redo log files for viewing by a DBA via the dynamic performance view V$LOGMNR_CONTENTS.



  Steps:

1.      Enable supplemental logging.
SQL> alter database add supplemental log data;

2.      Create a directory to store the dictionary file.
$ mkdir /u01/user/demo/

3.      Specify the location of the dictionary file at the OS level.
SQL> alter system set utl_file_dir='/u01/user/demo' scope=spfile;

4.      Bounce the database.
SQL> startup force;

5.      Create the dictionary file.
SQL> exec dbms_logmnr_d.build('dfile','/u01/user/demo');

If you get the error below:
ERROR at line 1:
ORA-01371: Complete LogMiner dictionary not found
ORA-06512: at "SYS.DBMS_LOGMNR", line 58
ORA-06512: at line 1
Solution: cross-check the UTL_FILE_DIR parameter and verify that the directory '/u01/user/demo' physically exists on your system.

6.      List the redo log files, then connect as SYS and register all log files with the LogMiner session.
SQL> select member from v$logfile;
SQL> exec dbms_logmnr.add_logfile('/u01/user/prod/redo/redo01.log');
SQL> exec dbms_logmnr.add_logfile('/u01/user/prod/redo/redo02.log');
SQL> exec dbms_logmnr.add_logfile('/u01/user/prod/redo/redo03.log');

7.      Start the mining process and query the results.
SQL> exec dbms_logmnr.start_logmnr(dictfilename=>'/u01/user/demo/dfile');
SQL> spool abc.sql
SQL> select sql_undo, sql_redo from v$logmnr_contents where seg_owner='USER1' and seg_name='EMP';
SQL> spool off

8.      End the LogMiner session when finished.
SQL> exec dbms_logmnr.end_logmnr;


      

 

LogMiner-related data dictionary views




V$LOGMNR_CONTENTS - Shows changes made to user and table information.

V$LOGMNR_DICTIONARY - Shows information about the LogMiner dictionary file, provided the dictionary was created using the STORE_IN_FLAT_FILE option.

V$LOGMNR_LOGS - Shows information about specified redo logs. There is one row for each redo log.


V$LOGMNR_PARAMETERS - Shows information about optional Log Miner parameters, including starting and ending system change numbers (SCNs) and starting and ending times.

Oracle Database 12c New Features


Let us discuss the new features of Oracle Database 12c Release 1 (12.1). Oracle says there are 500+ new features incorporated in 12c. We will look at a few that are going to make a difference in the database world.


Multitenant architecture:


Pluggable databases (PDBs) and the container database (CDB) make up the new architecture for 12c. We can have many PDBs in a single database occurrence (the CDB), and PDBs are just like ordinary 11g or 10g databases. This architecture reduces the total cost of operations and improves functionality. It has other benefits too: it is very easy to clone a database or create a new one, a single round of patching upgrades all PDBs, and we can patch an individual PDB by unplugging it and plugging it into a different CDB. A quick PDB-creation sketch follows the list below.
  • There is one set of background processes shared by the root and all PDBs. 
  • There is a single redo log and a single control file for an entire CDB 
  • There is a single SGA shared by all pluggable databases. However, we can determine SGA consumption by container, i.e., root and each PDB. 
  • We can monitor PGA usage on PDB by PDB basis. 
  • There is one active undo tablespace for a single-instance CDB. For an Oracle RAC CDB, there is one active undo tablespace for each instance. Only a common user who has the appropriate privileges and whose current container is the root can create an undo tablespace 
  • There is a separate SYSTEM tablespace for the root and for each PDB. 
  • There is a separate SYSAUX tablespace for the root and for each PDB. 
  • There is one default temporary tablespace for the entire CDB. However, you can create additional temporary tablespaces in individual PDBs. 
  • You can specify a separate default tablespace for the root and for each PDB
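
As a quick illustration of how lightweight PDB creation is, here is a minimal sketch (the PDB name, admin user, password, and file paths are assumptions):

CREATE PLUGGABLE DATABASE pdb2 ADMIN USER pdbadmin IDENTIFIED BY pdbadmin123
  FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/', '/u01/oradata/cdb1/pdb2/');
ALTER PLUGGABLE DATABASE pdb2 OPEN;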

DataGuard: 


Far sync standby database


A Far Sync standby database is a cascading standby database which acts as a redo log repository for a terminal database. It does not contain any datafiles; only log transport services are active on a Far Sync standby database. The advantage of a Far Sync standby database is that it can be a local archivelog repository for the primary database operating in Maximum Protection mode, while the physical or logical standby database can be on a far remote site.
New Options for Cascaded Standby Databases

In 12c it is possible to cascade a standby database in real time, i.e. the first standby database can send redo from its standby redo logs to the cascaded standby database. The Data Guard Broker now supports cascaded standby databases, too.

SYSDG Administration Privilege

SYSDG is a new administration privilege for Data Guard; it can be granted to a user in order to perform Data Guard-related operations. The full SYSDBA privilege is no longer required to manage a Data Guard environment, but it can still be used.

Online Movement of Online Datafiles

We can now move online datafiles without stopping managed recovery and without manually copying and renaming files. This can even be used to move datafiles from or to ASM. That is on the standby; what about production? Yes, it is possible to move datafiles online in production as well.
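
A minimal sketch of the 12c online move command (the source path and the +DATA disk group are assumptions):

ALTER DATABASE MOVE DATAFILE '/u01/oradata/prod/users01.dbf' TO '+DATA';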

Multitenant Database Support

Data Guard supports a physical standby database for a multitenant database, of course.

RMAN: 


Recover table option

The RECOVER TABLE command is a new feature in 12C that allows point in time recovery of a table or a table partition.
The table is recovered into an auxiliary instance, with the option either to import the recovered table into a new table or partition using the REMAP option, or to create an expdp dump of the recovered table only, for import at a later time of your choosing.
In 10g, what did we do when we needed to recover only one table using RMAN? We restored into a UAT environment, took an export, and imported it back. RMAN uses the same concept here: it creates an auxiliary database, restores the tablespace, and then takes the export and import.
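
A sketch of the command (the schema, timestamp, auxiliary destination, and remapped name are assumptions):

RMAN> RECOVER TABLE scott.emp
  UNTIL TIME "TO_DATE('2017-12-20 09:00:00','YYYY-MM-DD HH24:MI:SS')"
  AUXILIARY DESTINATION '/u01/aux'
  REMAP TABLE scott.emp:emp_recovered;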

Performance: 


Multi-Process Multi-Threaded Oracle

Multi-process multi-threaded Oracle uses multiple processes and multiple threads within each process to provide a new execution model for Oracle Database. Support for multi-process multi-threaded Oracle provides improved performance and manageability through more efficient sharing of system and processor resources.

Advanced Network Compression

New parameters, SQLNET.COMPRESSION and SQLNET.COMPRESSION_SCHEME_LIST, allow compression of data travelling over Oracle Net Services between client and server. Compression can be enabled at the connection level (connect string, URL), the service level (tnsnames.ora, ldap.ora), and the database level (sqlnet.ora).
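
A minimal sqlnet.ora sketch that turns compression on (SQLNET.COMPRESSION_LEVELS is the companion tuning parameter documented in the Net Services reference):

SQLNET.COMPRESSION = on
SQLNET.COMPRESSION_LEVELS = (low, high)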

Multiple Indexes 

Create multiple indexes on the same set of columns. In 11.2 and below, if we try to create an index using the same columns, in the same order, as an existing index, we get an error. In some cases we may require two different types of index on the same data (for example, in a data warehouse you might want a bitmap index on the leading edge of a set of columns that already exists in a b-tree index).
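
A sketch of the 12c behaviour (table and index names are assumptions; only one of the two indexes may be visible at a time, hence the INVISIBLE clause):

CREATE INDEX emp_dept_ix ON emp (deptno, job);
CREATE BITMAP INDEX emp_dept_bix ON emp (deptno, job) INVISIBLE;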

Adaptive Execution Plans

12c supports changing the execution plan at run time.

Security: 

Auditing Enabled By Default

The new unified auditing architecture can be used in Oracle Database with no changes required to the database initialization parameters. This feature enables audit policies to be created and enabled in the database with no production database downtime, providing flexibility and ease of administration for database auditing.

ASM


Oracle Flex ASM

Oracle Flex ASM decouples the Oracle ASM instance from the database servers. Oracle Flex ASM instances may be run on separate physical servers from Oracle Database 12c instances.

RAC:


Oracle Flex Cluster is a new Oracle Clusterware based topology utilizing two types of cluster nodes: Hub Nodes and Leaf Nodes. Hub Nodes represent traditional nodes, tightly coupled using network and storage. Leaf Nodes are a new type of node that runs a lighter weight stack and does not require direct shared storage connectivity.
Combining tightly coupled Hub Nodes and lightweight Leaf Nodes in one cluster allows the running of a variety of workload and applications across hundreds of nodes without additional overhead, while maintaining the ability to create dependencies between them. Centralized, cluster-wide management and standardized resource allocation policies further facilitate workload consolidation on Oracle Flex Cluster.

Oracle Cluster Registry Backup in ASM Disk Group Support

The Oracle Cluster Registry (OCR) backup mechanism enables storing the OCR backup in an Oracle ASM disk group.

Support for IPv6 Based IP Addresses for Oracle RAC Client Connectivity

Cluster nodes can be configured to use either IPv4 or IPv6 based IP addresses for the Virtual IPs (VIP) on the public network, while more than one public network can be defined for the cluster.


These are a few of the Oracle 12c features for DBAs. Expecting your comments...

SQL Server 2017 Installation on Linux


1.  Introduction


This document describes how to install and configure SQL Server on an RHEL server. It provides a step-by-step procedure for configuring SQL Server on a Linux platform.

2.  Requirements of Packages for Installation on RHEL 7


  • Minimum of 3.25 GB of RAM.
  • Minimum of 4 GB of disk space
  • XFS (default on RHEL) or EXT4 file system
  • Network connectivity to the Internet to download the package
  • 1433 TCP port open on the firewall for SQL Server connections
  • A hostname of less than 15 characters.
  • wget is a required package to run the configuration script.


3.     INSTALL SQL SERVER ON RHEL

  • Connect to the RHEL virtual machine or Linux server and log on by entering the username and password
  • Switch to the root user (superuser)
Step 1:

su root
Password: *****

  • Download the Microsoft SQL Server Red Hat repository configuration file onto the RHEL server

Step 2:

sudo curl -o /etc/yum.repos.d/mssql-server.repo https://packages.microsoft.com/config/rhel/7/mssql-server-2017.repo

Step 3:

Install SQL Server on RHEL by using the command below:

sudo yum install -y mssql-server



The above command installs the required packages and completes the SQL Server installation.


Step 4:

Once the SQL Server installation completes successfully, run the configuration script. During the configuration setup, it will ask you to:

  • Select the SQL Server edition you want to install
  • Accept the SQL Server license terms
  • Enter the "sa" password

       sudo /opt/mssql/bin/mssql-conf setup



Once you issue the above command, it will ask you to choose the edition of the server (Express, Standard, Enterprise, and so on); enter the edition that matches your license. Then accept the license terms. Finally, it will prompt for the SQL Server system administrator (sa) password.


Step 5:

Now verify whether the SQL Server instance is running on the RHEL server:

      systemctl status mssql-server

Step 6:
  • To allow remote connections, open the SQL Server port on the firewall on RHEL. The default SQL Server port is TCP 1433. If you are using FirewallD for your firewall, you can use the following command:

      sudo firewall-cmd --zone=public --add-port=1433/tcp --permanent

Step 7:
  • Let's apply the firewall configuration right now at runtime.

               sudo firewall-cmd --reload






4.    INSTALL SQL SERVER COMMAND-LINE TOOLS ON RHEL



The mssql-tools package contains Command Line utilities & Bulk Import-Export utilities. The SQLCMD utility allows users to connect to, send Transact-SQL batches from, and output rowset information from SQL Server instances. The BCP utility bulk copies data between an instance of Microsoft SQL Server and a data file in a user-specified format. Installing SQL Server Tools will also install the Microsoft ODBC driver and all its dependencies by default.



su - root


Step 1:
Download the Microsoft Red Hat repository configuration file.



sudo curl -o /etc/yum.repos.d/msprod.repo https://packages.microsoft.com/config/rhel/7/prod.repo


Step 2:

  • Install the SQL Server command-line tools along with the unixODBC developer package by running the following command:

sudo yum install -y mssql-tools unixODBC-devel

During the installation of the tools, you will be prompted to accept the license terms for mssql-tools and the Microsoft ODBC driver.

  • By default, the sqlcmd program is not in your PATH environment variable, so add /opt/mssql-tools/bin to the PATH in your shell profile by running the commands below. This enables you to run the tools without specifying the full path.

echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile
echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc
source ~/.bashrc
Step 3:

After a successful installation, verify database connectivity to the Linux machine using the command-line tools:




sqlcmd -S <Host Name OR IP> -U <Username> -P <Password>


Output: on a successful connection, sqlcmd displays its interactive prompt (1>).
5.    Install SQL Server Agent


       sudo yum install mssql-server-agent




  • Once you install the SQL Server Agent package successfully, you need to restart SQL Server to enable the SQL Server Agent.

         sudo systemctl restart mssql-server


  • Reconnect/Refresh the SQL Server instance to check the current status of SQL Server Agent.



Creating a database on the Linux server:

sqlcmd -S <hostname or IP address> -U sa -Q 'create database sampledb'


List of databases on SQL Server.

Command

sqlcmd -S <hostname> -U sa -P <password>
1> select name from sys.databases
2> go


