Immutable Backup for SAP databases using Azure NetApp Files and BlueXP


Immutable/Indelible Backups for SAP databases

Why immutable/indelible backups?

ANF snapshots are point-in-time, read-only copies of the data stored in an ANF volume. They are by definition immutable, but it is still possible to delete those snapshots. To protect the snapshots from deletion, we can copy the “daily” snapshot (created by azacsnap) to an immutable and indelible Azure blob space. This Azure blob space must be configured with a data protection policy that prevents the snapshot from being changed or deleted before its retention period expires.
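For reference, such a “daily” snapshot is typically created with azacsnap along these lines; the config file name, prefix, and retention below are placeholders, not values from this setup:

# Create the daily ANF snapshot of the data volume(s) via azacsnap
azacsnap -c backup --volume data --prefix daily --retention 3 --configfile azacsnap.json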

Immutable backups are backups that cannot be changed or deleted for a certain period of time. They offer several benefits for data protection, such as:

  • Ransomware protection: Immutable backups are safe from malicious encryption or deletion by ransomware attacks.
  • Threat prevention: Immutable backups are also resistant to internal or external threats that may try to tamper with or destroy backup data.
  • Regulatory compliance: Immutable backups can help businesses meet data regulations that require preserving data integrity and authenticity.
  • Reliable disaster recovery: Immutable backups can ensure fast and accurate recovery of data in case of any data loss event.

Overview of immutable storage for blob data - Azure Storage | Microsoft Learn

Configure immutability policies for blob versions - Azure Storage | Microsoft Learn

 

Scenario

An ANF snapshot is created on the production/primary side of your deployed SAP system(s). ANF CRR (Cross-Region Replication) copies the volume (including its snapshots) over to the DR region. In the DR region, BlueXP automatically copies the .snapshot directory to an immutable and indelible (WORM) Azure blob. The lifecycle period of the immutable Azure blob determines the lifetime of the backup.

RalfKlahr_0-1709728618323.png

Preparation

Create an Azure storage account for the blob space

RalfKlahr_1-1709728696681.png


RalfKlahr_2-1709728722128.png

 

RalfKlahr_3-1709728771420.png

Here it is very important to select “Enable version-level immutability”, as this is the setting that makes the immutable backups possible.

RalfKlahr_0-1709731061666.png

RalfKlahr_1-1709731100931.png
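As a sketch, the resource group and storage account can also be created with the Azure CLI; the names and region below are placeholders, and the version-level immutability checkbox from the screenshots above is set in the portal during creation:

# Placeholder resource group for the immutable backup target
az group create --name sap-backup-rg --location westeurope

# General-purpose v2 storage account that will hold the WORM backup containers
az storage account create \
  --name sapbackupworm \
  --resource-group sap-backup-rg \
  --location westeurope \
  --sku Standard_LRS \
  --kind StorageV2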

 

Configure the access network for the storage account

RalfKlahr_2-1709731100941.png
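The same network restriction can be expressed with the CLI; a minimal sketch, assuming placeholder names for the account, VNet, and subnet:

# Deny access by default, then allow the subnet where the data broker will run
az storage account update \
  --name sapbackupworm \
  --resource-group sap-backup-rg \
  --default-action Deny

az storage account network-rule add \
  --account-name sapbackupworm \
  --resource-group sap-backup-rg \
  --vnet-name dr-vnet \
  --subnet databroker-subnet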


Go to the Azure storage account

RalfKlahr_3-1709731100945.png

 

Add a container

RalfKlahr_0-1709731199685.png

 

Add a directory where the backups will be stored

RalfKlahr_1-1709731199692.png

 

RalfKlahr_2-1709731199693.png
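Both of these steps (container and directory) can also be done with the CLI; a sketch with placeholder names:

# Create the backup container; in a flat-namespace account a "directory"
# is just a blob name prefix and is created implicitly with the first upload
az storage container create \
  --name hana-data-backup \
  --account-name sapbackupworm \
  --auth-mode login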

Data container

RalfKlahr_3-1709731844402.png

RalfKlahr_4-1709731266976.png

 

Create the BlueXP account

NetApp BlueXP is a unified control plane that lets you build, protect, and govern your hybrid multicloud data estate across multiple clouds and on-premises. It offers storage mobility, protection, analysis and control features for any workload, any cloud, and any data type.

Some of the benefits of NetApp BlueXP are:

  • Simplified management: You can discover, deploy, and operate storage resources on different platforms with a single interface and common policies.
  • Enhanced security: You can protect your data from ransomware, data tampering, and accidental deletion with immutable backups and encryption.
  • Cost efficiency: You can optimize your data placement and consumption with intelligent insights and automation.
  • Sustainability: You can reduce your carbon footprint and improve your data efficiency.

https://console.bluexp.netapp.com

Please create a user or log in with your account.

RalfKlahr_5-1709731308394.png

 

Create your “Working Environment”

RalfKlahr_6-1709731334047.png

 

Create the Credentials

RalfKlahr_0-1709731779006.png


RalfKlahr_1-1709731779010.png

 

Create the Azure credentials for the communication between BlueXP and the Azure storage account

RalfKlahr_2-1709731816938.png

The easiest way to get this information is from the azacsnap authentication file (service principal). It is also possible to use a managed identity for the connection.

RalfKlahr_3-1709732092497.png

RalfKlahr_4-1709731857562.png
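If you do not have the azacsnap authentication file at hand, a service principal with the required values can be created as a sketch like this; the name and scope are placeholders, and BlueXP asks for the client ID, client secret, tenant ID, and subscription ID from the JSON output:

# Create a service principal; --sdk-auth prints JSON containing
# clientId, clientSecret, subscriptionId, and tenantId
az ad sp create-for-rbac \
  --name "bluexp-backup" \
  --role Contributor \
  --scopes /subscriptions/<subscription-id> \
  --sdk-auth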

 

Return to Working Environments and create a new working environment

RalfKlahr_5-1709731885535.png

RalfKlahr_6-1709731907329.png

 

RalfKlahr_7-1709731932119.png

 

If you don’t have a data broker already, create one. I think it is better to create the data broker manually: this gives you the chance to select the OS vendor and also to integrate the broker into your monitoring and management framework.

RalfKlahr_8-1709731932120.png

 

RalfKlahr_0-1709731995922.png

Simply deploy a D4s_v5 VM with Ubuntu 20.04 or similar in your environment and run the installation procedure on that VM. This may be the better option, because you can define all the “required” settings for the VM yourself.

RalfKlahr_1-1709732027146.png
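A minimal sketch of such a VM deployment with the Azure CLI; the names, image URN, and sizing are assumptions to adapt to your own standards:

# Deploy a D4s_v5 Ubuntu 20.04 VM to host the data broker
az vm create \
  --resource-group sap-backup-rg \
  --name databroker01 \
  --image Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:latest \
  --size Standard_D4s_v5 \
  --admin-username azureuser \
  --generate-ssh-keys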

 

After the data broker is created, we can specify the volume and directory which we would like to back up.

For performance and high availability reasons it is highly recommended to create a data broker group of three or more data brokers.

RalfKlahr_2-1709732054671.png

 

Now create the relationship for the backup.

RalfKlahr_3-1709732250220.png

RalfKlahr_5-1709732118305.png

 

Create the relationship: drag and drop the relevant storage tiles into their places in the configuration.

RalfKlahr_4-1709732092506.png

This displays the configured relationship.

RalfKlahr_0-1709732176932.png

 

Configure the source.

RalfKlahr_1-1709732204692.png


RalfKlahr_2-1709732204705.png

 

We now need to specify the .snapshot directory of the data volume. We only want to back up the snapshots, not the active data volume itself.

RalfKlahr_3-1709732425056.png

RalfKlahr_5-1709732284487.png

 

RalfKlahr_4-1709732250222.png
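For orientation, a minimal sketch of what the broker will see, assuming an illustrative ANF export IP/path and that the volume’s snapshot path is visible:

# Mount the replicated data volume and list the snapshots in the hidden .snapshot directory
sudo mount -t nfs -o vers=4.1 10.0.0.4:/hana-data-dr /mnt/hana-data-dr
ls /mnt/hana-data-dr/.snapshot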

 

Now select the storage account we created at the beginning.

RalfKlahr_6-1709732308961.png

 

Select and copy the connection string.

RalfKlahr_7-1709732335991.png
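Alternatively, the connection string can be fetched with the CLI (account and resource group names as in the earlier sketches):

# Print the storage account connection string to stdout
az storage account show-connection-string \
  --name sapbackupworm \
  --resource-group sap-backup-rg \
  --output tsv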

Paste the connection string into the blob target configuration.

RalfKlahr_0-1709732386496.png

RalfKlahr_1-1709732425047.png

We would recommend creating one container for each data volume (if you have more than one) and one for the log backups.

RalfKlahr_2-1709732425054.png

 

RalfKlahr_4-1709732553771.png

 

Create the schedule for the sync. We would recommend syncing the data backups once a day and the log backups every 10 minutes.

RalfKlahr_5-1709732663577.png

 

You can exclude snapshot names here in this section. Note that you must specify what you don’t want instead of what you want. I know… this might get changed.

RalfKlahr_6-1709732746417.png

 

This is the sync relationship we just created.

RalfKlahr_0-1709732812164.png

 

The dashboard will show the latest information. Here it is also possible to download the sync log from the data broker.

RalfKlahr_1-1709732835866.png

 

This is the data container with all the synced backups.

RalfKlahr_2-1709732861902.png

 

Set the Deletion Policy

To set up the deletion protection policy, go to the storage account and select “Data Protection”.

RalfKlahr_3-1709733054601.png

RalfKlahr_5-1709732900229.png

 

Now select “Manage Policy”.

Here you set up the indelible time frame:

RalfKlahr_4-1709733101719.png

RalfKlahr_6-1709732921835.png

For my system I protect against deletion for only 2 days. Normally we see 14, 30, or 90 days.
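As a CLI sketch, a container-scoped time-based retention policy with a 14-day period would look like this (the portal steps above configure the version-level variant; names are placeholders):

# Prevent deletion/modification of blobs in the backup container for 14 days
az storage container immutability-policy create \
  --resource-group sap-backup-rg \
  --account-name sapbackupworm \
  --container-name hana-data-backup \
  --period 14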

 

Automatic Lifecycle Management of the Blob

Normally you will want automatic deletion of the backups in place; this makes the housekeeping much easier.

To set up the deletion policy, go to “Lifecycle Management” and create a new deletion rule.

RalfKlahr_0-1709732981548.png

 

+Add Rule

RalfKlahr_1-1709733014665.png

RalfKlahr_2-1709733033611.png

RalfKlahr_0-1709733179310.png

RalfKlahr_1-1709733219956.png

 

Now the new lifecycle management rule is created.
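The same rule can be created with the CLI; a sketch, assuming a policy.json with a placeholder prefix and a 30-day retention:

# policy.json: delete block blobs under the backup prefix 30 days after last modification
{
  "rules": [
    {
      "enabled": true,
      "name": "delete-old-backups",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": { "delete": { "daysAfterModificationGreaterThan": 30 } }
        },
        "filters": { "blobTypes": [ "blockBlob" ], "prefixMatch": [ "hana-data-backup/" ] }
      }
    }
  ]
}

# Apply the policy to the storage account
az storage account management-policy create \
  --account-name sapbackupworm \
  --resource-group sap-backup-rg \
  --policy @policy.json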

 

Restore

The easiest way to restore a backup is to create a new BlueXP relationship, but in the reverse order: from the blob to a new volume. Then you do not have to deal with azcopy or anything else. This is a very easy, but time-consuming, process.
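If you do want a manual copy instead, an azcopy sketch (account, container, path, and SAS token are placeholders):

# Recursively copy the synced backup blobs to a locally mounted restore volume
azcopy copy "https://sapbackupworm.blob.core.windows.net/hana-data-backup/<path>?<sas-token>" "/mnt/restore" --recursive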

 

Update the Data Broker

Normally the data broker will run an automatic update once a new version is available.

In rare cases you may have to run the update manually, like this:

curl https://cf.cloudsync.netapp.com/e07d33f7-6ac5-470e-989a-2df33b463ad4_installer -o data_broker_installer.sh

 

sudo -s
pm2 stop all                                             # stop the data broker processes
chmod +x data_broker_installer.sh
./data_broker_installer.sh > data_broker_installer.log   # run the installer and capture its output
pm2 start all                                            # start the data broker processes again

 

Files > 4 TB

With the default configuration a data broker can only copy files smaller than 4 TB to the blob: Azure block blobs are limited to 50,000 blocks per blob, so the maximum file size is 50,000 times the broker’s block (buffer) size. With HANA it can happen that data files grow much larger. In this case we need to increase the data broker buffer size; with a 120 MB buffer the ceiling rises to roughly 6 TB.

Check Data Broker status:

https://console.bluexp.netapp.com/sync/manage

  • Expand the Data Broker Group and then expand the Data Broker.
  • Check Data Broker version.

 

The Data Broker should automatically be updated to the latest version as features are released.

To manually update the Data Broker version:

  • From the Data Broker VM
  • pm2 update

OR

  • pm2 stop all, then pm2 start all

 

Data Broker config files location:

cd /opt/netapp/databroker/config

 

To make changes to the buffer-size setting (this file is normally empty):

vi local.json

 

add the following:

{
    "protocols": {
        "azure": {
            "buffer-size": "120MB"
        }
    }
}
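The data broker processes then have to be restarted to pick up the new buffer size:

pm2 stop all     # stop the data broker processes
pm2 start all    # start them again with the new buffer-size setting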

