How to set up Log Shipping as HA or DR solution for SQL Server on Linux

Rajendra Gupta

Log Shipping is a convenient disaster recovery method without complex infrastructure requirements. It lets you maintain a warm standby replica of a database that stays in sync with the primary instance. It is available in the Enterprise, Standard, and Developer editions of SQL Server.

Log Shipping works on transaction log backups, and at a high level, the process performs the following tasks:

  • Backup: Take a transaction log backup of the primary database to a shared directory on, or accessible from, the source server. 
  • Copy: Copy the transaction log backup files to the destination server. 
  • Restore: Restore the log backups to the destination database in NORECOVERY or STANDBY mode. 
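At the T-SQL level, each cycle is equivalent to the following commands (the database name and file paths are illustrative):

```sql
-- 1. Backup: take a transaction log backup of the primary database
BACKUP LOG TestDB TO DISK = '/var/opt/mssql/tlogs/TestDB_1.trn';

-- 2. Copy: the copy job moves the .trn file to the secondary server
--    (an operating-system file copy, not a T-SQL operation)

-- 3. Restore: apply the log backup on the secondary, leaving the
--    database ready to accept further log restores
RESTORE LOG TestDB FROM DISK = '/var/opt/mssql/tlogs/TestDB_1.trn'
WITH NORECOVERY;
```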

At a high level, the log shipping process requires the following components:

  • A File Share to store the primary database transaction log files.
  • A backup job to take primary database log backups on the share.
  • A copy job on the secondary to copy the transaction log files from the share.
  • A restore job on the secondary to restore the transaction log backups to the secondary database.

SQL Server Agent jobs automate the backup, copy, and restore tasks on the primary and secondary servers. Log shipping supports multiple configurations:

  • Log shipping on the same server
  • Log shipping on a different server
  • Log Shipping across domains or workgroups
  • Log Shipping between different versions of SQL Server
  • Log Shipping on Windows or on Linux
  • Log Shipping between Windows and Linux systems 

Prerequisites for SQL Server Log Shipping

The databases must use the full or bulk-logged recovery model. You can run the following T-SQL to list each database and its recovery model:

SELECT name AS [Database Name], recovery_model_desc AS [Recovery Model] FROM sys.databases;
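If a database reports the SIMPLE model, you can switch it to FULL; note that a new full backup is then needed to start the log backup chain (TestDB and the backup path are illustrative):

```sql
ALTER DATABASE TestDB SET RECOVERY FULL;
-- Take a full backup afterwards to initialize the transaction log chain
BACKUP DATABASE TestDB TO DISK = '/var/opt/mssql/tlogs/TestDB_full.bak';
```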

SQL Server Agent on Linux: You can enable the SQL Server Agent on Linux using the following code:

sudo /opt/mssql/bin/mssql-conf set sqlagent.enabled true 
sudo systemctl restart mssql-server
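To confirm the Agent is running after the restart, you can query the service state from T-SQL:

```sql
-- Lists the SQL Server and SQL Server Agent services and their status
SELECT servicename, status_desc
FROM sys.dm_server_services;
```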

Configure a File Share using Samba 

SQL Server on Linux supports network shares via CIFS + Samba or NFS. Run the following yum command to install Samba on RHEL/CentOS:

sudo yum -y install samba #For RHEL/CentOS

Use apt-get if you are running SQL Server on Ubuntu: 

sudo apt-get install samba #For Ubuntu

Next, create a directory for storing the transaction log backups. The directory must be owned by the mssql user, with permissions restricted accordingly: 

sudo mkdir /var/opt/mssql/tlogs
sudo chown mssql:mssql /var/opt/mssql/tlogs
sudo chmod 0700 /var/opt/mssql/tlogs

In the /etc/samba/smb.conf file, add the following lines:

[tlogs]
path=/var/opt/mssql/tlogs
available=yes
read only=yes
browsable=yes
public=yes
writable=no

You must also create an mssql user for Samba using the following code: 

sudo smbpasswd -a mssql

Once done, restart the Samba service to activate the changes in the config file. 

sudo systemctl restart smbd.service nmbd.service

Configure Secondary Server 

Install the CIFS client on the log shipping secondary servers.

sudo apt-get install cifs-utils #For Ubuntu
sudo yum -y install cifs-utils #For RHEL/CentOS

Create a .tlogcreds file to store your credentials. In this file, specify the domain and the password for the mssql Samba user we created earlier.

sudo vim /var/opt/mssql/.tlogcreds

#Paste the following in .tlogcreds
username=mssql
domain=<domain>
password=<password>

Next, create the directory that stores the transaction log backups copied from the primary server, and secure both the directory and the .tlogcreds file: 

sudo mkdir /var/opt/mssql/tlogs
sudo chown root:root /var/opt/mssql/tlogs
sudo chmod 0550 /var/opt/mssql/tlogs
sudo chown root:root /var/opt/mssql/.tlogcreds
sudo chmod 0660 /var/opt/mssql/.tlogcreds

Add the following line to /etc/fstab to persist the share. Here, specify the IP address of the primary server: 

//<ip_address_of_primary_server>/tlogs /var/opt/mssql/tlogs cifs credentials=/var/opt/mssql/.tlogcreds,ro,uid=mssql,gid=mssql 0 0

Mount the file share using sudo mount -a.

Configure SQL Server Log Shipping via T-SQL

Suppose you want to configure log shipping for a database named TestDB. Follow the steps below:

Step 1: Take a full backup of TestDB on the primary instance.

BACKUP DATABASE TestDB
TO DISK = '/var/opt/mssql/tlogs/TestDB.bak'
GO

Step 2: Restore TestDB on the secondary instance in NORECOVERY mode. 

RESTORE DATABASE TestDB FROM DISK = '/var/opt/mssql/tlogs/TestDB.bak'
WITH NORECOVERY;
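If you want the secondary database to be readable between log restores, you can use STANDBY mode instead of NORECOVERY; the undo file path below is illustrative:

```sql
-- STANDBY keeps the database read-only between restores;
-- uncommitted transactions are saved to the undo file
RESTORE DATABASE TestDB FROM DISK = '/var/opt/mssql/tlogs/TestDB.bak'
WITH STANDBY = '/var/opt/mssql/tlogs/TestDB_undo.bak';
```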

Step 3: To add the primary database, use the sp_add_log_shipping_primary_database procedure on the primary instance. It creates the transaction log backup job on the primary SQL instance.

DECLARE @LS_BackupJobId	AS uniqueidentifier 
DECLARE @LS_PrimaryId	AS uniqueidentifier 
DECLARE @SP_Add_RetCode	As int 
EXECUTE @SP_Add_RetCode = master.dbo.sp_add_log_shipping_primary_database 
         @database = N'TestDB' 
        ,@backup_directory = N'/var/opt/mssql/tlogs' 
        ,@backup_share = N'/var/opt/mssql/tlogs' 
        ,@backup_job_name = N'LSBackup_TestDB' 
        ,@backup_retention_period = 4320
        ,@backup_compression = 2
        ,@backup_threshold = 60 
        ,@threshold_alert_enabled = 1
        ,@history_retention_period = 5760 
        ,@backup_job_id = @LS_BackupJobId OUTPUT 
        ,@primary_id = @LS_PrimaryId OUTPUT 
        ,@overwrite = 1 

IF (@@ERROR = 0 AND @SP_Add_RetCode = 0) 
BEGIN 
DECLARE @LS_BackUpScheduleUID	As uniqueidentifier 
DECLARE @LS_BackUpScheduleID	AS int 
EXECUTE msdb.dbo.sp_add_schedule 
        @schedule_name =N'LSBackupSchedule' 
        ,@enabled = 1 
        ,@freq_type = 4 
        ,@freq_interval = 1 
        ,@freq_subday_type = 4 
        ,@freq_subday_interval = 15 
        ,@freq_recurrence_factor = 0 
        ,@active_start_date = 20170418 
        ,@active_end_date = 99991231 
        ,@active_start_time = 0 
        ,@active_end_time = 235900 
        ,@schedule_uid = @LS_BackUpScheduleUID OUTPUT 
        ,@schedule_id = @LS_BackUpScheduleID OUTPUT 

EXECUTE msdb.dbo.sp_attach_schedule 
        @job_id = @LS_BackupJobId 
        ,@schedule_id = @LS_BackUpScheduleID  

EXECUTE msdb.dbo.sp_update_job 
        @job_id = @LS_BackupJobId 
        ,@enabled = 1 

END 

EXECUTE master.dbo.sp_add_log_shipping_alert_job 

EXECUTE master.dbo.sp_add_log_shipping_primary_secondary 
        @primary_database = N'TestDB' 
        ,@secondary_server = N'<ip_address_of_secondary_server>' 
        ,@secondary_database = N'TestDB' 
        ,@overwrite = 1

Step 4: On the secondary server, use sp_add_log_shipping_secondary_primary and sp_add_log_shipping_secondary_database to add the log shipping secondary database. This creates the transaction log copy and restore jobs on the secondary SQL instance. 

DECLARE @LS_Secondary__CopyJobId	AS uniqueidentifier 
DECLARE @LS_Secondary__RestoreJobId	AS uniqueidentifier 
DECLARE @LS_Secondary__SecondaryId	AS uniqueidentifier 
DECLARE @LS_Add_RetCode	As int 

EXECUTE @LS_Add_RetCode = master.dbo.sp_add_log_shipping_secondary_primary 
        @primary_server = N'<ip_address_of_primary_server>' 
        ,@primary_database = N'TestDB' 
        ,@backup_source_directory = N'/var/opt/mssql/tlogs/' 
        ,@backup_destination_directory = N'/var/opt/mssql/tlogs/' 
        ,@copy_job_name = N'LSCopy_TestDB' 
        ,@restore_job_name = N'LSRestore_TestDB' 
        ,@file_retention_period = 4320 
        ,@overwrite = 1 
        ,@copy_job_id = @LS_Secondary__CopyJobId OUTPUT 
        ,@restore_job_id = @LS_Secondary__RestoreJobId OUTPUT 
        ,@secondary_id = @LS_Secondary__SecondaryId OUTPUT 

IF (@@ERROR = 0 AND @LS_Add_RetCode = 0) 
BEGIN 

DECLARE @LS_SecondaryCopyJobScheduleUID	As uniqueidentifier 
DECLARE @LS_SecondaryCopyJobScheduleID	AS int 

EXECUTE msdb.dbo.sp_add_schedule 
        @schedule_name =N'DefaultCopyJobSchedule' 
        ,@enabled = 1 
        ,@freq_type = 4 
        ,@freq_interval = 1 
        ,@freq_subday_type = 4 
        ,@freq_subday_interval = 15 
        ,@freq_recurrence_factor = 0 
        ,@active_start_date = 20170418 
        ,@active_end_date = 99991231 
        ,@active_start_time = 0 
        ,@active_end_time = 235900 
        ,@schedule_uid = @LS_SecondaryCopyJobScheduleUID OUTPUT 
        ,@schedule_id = @LS_SecondaryCopyJobScheduleID OUTPUT 

EXECUTE msdb.dbo.sp_attach_schedule 
        @job_id = @LS_Secondary__CopyJobId 
        ,@schedule_id = @LS_SecondaryCopyJobScheduleID  

DECLARE @LS_SecondaryRestoreJobScheduleUID	As uniqueidentifier 
DECLARE @LS_SecondaryRestoreJobScheduleID	AS int 

EXECUTE msdb.dbo.sp_add_schedule 
        @schedule_name =N'DefaultRestoreJobSchedule' 
        ,@enabled = 1 
        ,@freq_type = 4 
        ,@freq_interval = 1 
        ,@freq_subday_type = 4 
        ,@freq_subday_interval = 15 
        ,@freq_recurrence_factor = 0 
        ,@active_start_date = 20170418 
        ,@active_end_date = 99991231 
        ,@active_start_time = 0 
        ,@active_end_time = 235900 
        ,@schedule_uid = @LS_SecondaryRestoreJobScheduleUID OUTPUT 
        ,@schedule_id = @LS_SecondaryRestoreJobScheduleID OUTPUT 

EXECUTE msdb.dbo.sp_attach_schedule 
        @job_id = @LS_Secondary__RestoreJobId 
        ,@schedule_id = @LS_SecondaryRestoreJobScheduleID  

END 
DECLARE @LS_Add_RetCode2	As int 
IF (@@ERROR = 0 AND @LS_Add_RetCode = 0) 
BEGIN 

EXECUTE @LS_Add_RetCode2 = master.dbo.sp_add_log_shipping_secondary_database 
        @secondary_database = N'TestDB' 
        ,@primary_server = N'<ip_address_of_primary_server>' 
        ,@primary_database = N'TestDB' 
        ,@restore_delay = 0 
        ,@restore_mode = 0 
        ,@disconnect_users	= 0 
        ,@restore_threshold = 45   
        ,@threshold_alert_enabled = 1 
        ,@history_retention_period	= 5760 
        ,@overwrite = 1 

END 

IF (@@error = 0 AND @LS_Add_RetCode = 0) 
BEGIN 

EXECUTE msdb.dbo.sp_update_job 
        @job_id = @LS_Secondary__CopyJobId 
        ,@enabled = 1 

EXECUTE msdb.dbo.sp_update_job 
        @job_id = @LS_Secondary__RestoreJobId 
        ,@enabled = 1 

END

Step 5: Run the log shipping backup job on the primary SQL instance.

USE msdb ;  
GO  
EXECUTE dbo.sp_start_job N'LSBackup_TestDB' ;  
GO

Step 6: Run the log shipping copy and restore jobs on the secondary SQL instance.

USE msdb ;  
GO  
EXECUTE dbo.sp_start_job N'LSCopy_TestDB' ;  
GO  
EXECUTE dbo.sp_start_job N'LSRestore_TestDB' ;  
GO

You can use the Transaction Log Shipping Status report in SQL Server Management Studio to monitor log shipping backup, copy and restore activities, and their status.
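You can also query the log shipping monitor tables in msdb directly, which is convenient when connecting to the Linux instances with sqlcmd:

```sql
-- On the primary: last transaction log backup per log-shipped database
SELECT primary_database, last_backup_file, last_backup_date
FROM msdb.dbo.log_shipping_monitor_primary;

-- On the secondary: last copied and restored files, and restore latency
SELECT secondary_database, last_copied_file, last_restored_file,
       last_restored_date, last_restored_latency
FROM msdb.dbo.log_shipping_monitor_secondary;
```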

Advantages of Log Shipping in SQL Server on Linux

  • Log shipping is easy to set up as it works on existing transaction log backups.
  • It does not require any changes to the source database; any existing SQL Server instance with SQL Server Agent enabled can use log shipping.
  • It is a time-tested disaster recovery mechanism, and failures in the log shipping jobs do not affect transactions on the primary database. 
  • Log shipping provides both Disaster Recovery and High Availability solutions.
  • No extra license costs.
  • You can configure multiple databases in log shipping.
  • It is available in Standard, Developer, and Enterprise SQL Server editions. 

Disadvantages of Log Shipping in SQL Server on Linux

  • The failover process in log shipping is manual and does not support automatic failover. 
  • Log shipping can involve data loss if the primary server fails. Transactions that occurred after the last transaction log backup copied to the secondary server are lost. For example, suppose the primary server fails at 6:00 a.m. and cannot be reached. If the last backup copied to the secondary was taken at 5:30 a.m., all transactions between 5:30 a.m. and 6:00 a.m. are lost.
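Manual failover follows from the same mechanics: back up the tail of the log on the primary if it is still reachable, restore all remaining log backups on the secondary, and then recover the secondary database. A minimal sketch (file paths are illustrative):

```sql
-- On the primary (if accessible): capture the tail of the log,
-- leaving the old primary in a restoring state
BACKUP LOG TestDB TO DISK = '/var/opt/mssql/tlogs/TestDB_tail.trn'
WITH NORECOVERY;

-- On the secondary: restore the tail and any remaining log backups,
-- then bring the database online
RESTORE LOG TestDB FROM DISK = '/var/opt/mssql/tlogs/TestDB_tail.trn'
WITH NORECOVERY;
RESTORE DATABASE TestDB WITH RECOVERY;
```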

Wrapping Up

Log shipping is a convenient and cost-effective way to synchronize two databases, primary and secondary, and provide high availability and disaster recovery for your SQL Server database. Remember, implementing log shipping requires manual failover with possible data loss in case of failure of the primary database.

If you’re looking for a more automated high availability/disaster recovery solution for SQL Server, check out this post on Always On Availability Groups. If you’re unsure which solution makes the most sense for you, check out this post on the differences between Log Shipping and Always On Availability Groups.

As always, if you want to stay up to date on the latest trends and features for SQL Server or any open source databases, be sure to follow us on LinkedIn and Twitter, and subscribe to our newsletter.
