Protecting HANA databases configured with HSR on Azure NetApp Files with AzAcSnap

This post has been republished via RSS; it originally appeared at: Microsoft Tech Community - Latest Blogs.

Table of Contents

Version

Authors

Overview

Assumptions

Terms and Definitions

Setting up AzAcSnap with an HSR synchronous HANA database

Hdbuserstore key configuration

AzAcSnap configuration

Scheduling AzAcSnap backups

Summary

Links to additional information

Appendix – SAP HANA data volume locations

 

Version

This document is for SAP HANA on Azure NetApp Files using Microsoft AzAcSnap version 5.0 or later.

 

Authors

Phil Jensen, Principal Software Engineer at Microsoft

Scott McCullough, SAP Solutions Architect at NetApp

 

Overview

Most customers running production HANA databases in Azure want to achieve the highest SLA possible (99.99%). To reach this SLA, HANA can be set up with HANA System Replication (HSR). This technology replicates data (either asynchronously or synchronously) at the database level from one HANA instance to another, providing a highly available database setup that achieves the 99.99% SLA. There are other pieces to this solution (Pacemaker cluster, load balancer, and so on), but for the purpose of this discussion we focus only on the HANA database layer and synchronous replication.

 

[Image: GeertVanTeylingen_2-1666001741510.png]

 

Another very popular choice of storage for the HANA database is Azure NetApp Files (ANF). Azure NetApp Files has built-in data management abilities that are truly unique and, if leveraged correctly, can dramatically reduce backup time and load on the HANA database regardless of database size. One of these data management abilities is storage snapshots.

 

The important point to understand is that Azure NetApp Files snapshots are created nearly instantly, without a performance impact on the application or storage. You would be correct in thinking that a storage snapshot of structured data, in this case a HANA database, may be worthless when not taken in close concert with the application. A scheduled Azure NetApp Files snapshot taken on a volume hosting an operational HANA database, without application coordination, is worthless. Trying to recover from a snapshot taken in this way will only lead to frustration and, more importantly, a failed HANA recovery.

 

This is why Microsoft developed the AzAcSnap snapshot tool. To summarize briefly, AzAcSnap takes a HANA snapshot (not to be confused with a storage snapshot) and an Azure NetApp Files storage snapshot in alignment with each other. More information on SAP HANA storage snapshots can be found in SAP Note 2039883. The tool orchestrates the workflow to provide an application-consistent picture of the HANA database that is fully application aware (the backups are visible in the HANA backup catalog). In addition, just like a HANA file backup or a third-party 'backint' tool, it can be used to recover the HANA database with forward recovery.
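The sequence AzAcSnap orchestrates can be sketched as follows. This is a conceptual illustration with stub steps, not AzAcSnap's actual implementation; the SQL in the comments is the standard HANA storage-snapshot syntax described in SAP Note 2039883:

```shell
#!/bin/sh
# Conceptual sketch of the workflow AzAcSnap orchestrates (stubs only).
step() { echo "step $1: $2"; }
step 1 "prepare HANA snapshot"      # BACKUP DATA FOR FULL SYSTEM CREATE SNAPSHOT
step 2 "take ANF storage snapshot"  # near-instant snapshot of the data volume(s)
step 3 "confirm HANA snapshot"      # BACKUP DATA ... CLOSE SNAPSHOT ... SUCCESSFUL
```

The key design point is that the HANA snapshot brackets the storage snapshot, which is what makes the resulting storage snapshot application consistent.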

 

Regardless of database size, the backup typically takes only minutes to complete. AzAcSnap is included within your Azure subscription, so there are no additional costs associated with using the tool and, more importantly, it can co-exist with your existing backup solution. The tool has been generally available since April 2021 and is fully supported by Microsoft. If you are using Azure NetApp Files for the HANA database, there is no reason not to leverage AzAcSnap.

 

Assumptions

The administrator following this guide has experience with SAP HANA and HANA Studio, because not all steps are shown in detail as screenshots (e.g., logging in to HANA Studio).

The administrator is familiar with SAP HANA backup processes, including the Backup Catalog and Storage Snapshots.

The administrator has the appropriate permissions at a Linux shell to copy files as the <sid>adm user into the SAP HANA Data Area.

 

Terms and Definitions

Terms used in this documentation:

  • SID: A system identifier for an SAP HANA installation, typically three characters long.
  • ANF: Azure NetApp Files.
  • VM: Virtual machine.

 

Setting up AzAcSnap with an HSR synchronous HANA database

Setting up and configuring AzAcSnap within your landscape is clearly documented here. This document details how to set up AzAcSnap within an HSR synchronous HANA database deployment.

 

As previously mentioned, most customers want to build to the highest SLA within Azure. To do this, HANA is installed and configured to replicate synchronously to another HANA database instance deployed on a separate VM (cluster node). For production environments, AzAcSnap is most often installed on a simple standalone Linux VM that has connectivity to all the HANA database instances as well as the ANF storage. In this way, all HANA backups can be centrally managed from a single VM.

 

The AzAcSnap centrally installed VM is configured with hdbuserstore keys for both HANA database nodes. It is very important that you configure the hdbuserstore keys to point to the physical hostnames of the HANA database nodes and not the virtual IP (VIP) or hostname of the cluster. AzAcSnap has built-in logic to determine what system is the primary and what system is the secondary, including situations where the secondary might be configured with read access to the node (logreplay_readaccess).

 

The tool also has built-in logic to determine if there is a split-brain scenario where both HANA databases believe they are the primary, an important reason to configure AzAcSnap to point to the cluster nodes and not the cluster virtual IP.

 

(i) Important

 

A HANA snapshot can only be created on the primary HANA node.

 

Hdbuserstore key configuration

 

HDBUSERSTORE keys configured on the AzAcSnap central VM:

 

azacsnap@azacsnap-opensuse:~/bin> hdbuserstore list
DATA FILE    : /home/azacsnap/.hdb/azacsnap-opensuse/SSFS_HDB.DAT
KEY FILE    : /home/azacsnap/.hdb/azacsnap-opensuse/SSFS_HDB.KEY

KEY AZACSNAP_NODE1
 ENV : saphana1:30013
 USER: AZACSNAP
KEY AZACSNAP_NODE2
 ENV : saphana2:30013
 USER: AZACSNAP

As can be seen, the hostname of each HANA database node is used rather than the IP address or the cluster's virtual hostname.

 

azacsnap@azacsnap-opensuse:~/bin> cat /etc/hosts | grep saphana
172.16.7.20 saphana1
172.16.7.26 saphana2
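For reference, keys like those listed above are created with the hdbuserstore Set command, run as the AzAcSnap user on the central VM; '&lt;password&gt;' below is a placeholder for the AZACSNAP database user's password:

```shell
# Create one key per physical HANA node (never the cluster virtual IP).
# '<password>' is a placeholder for the AZACSNAP database user's password.
hdbuserstore Set AZACSNAP_NODE1 saphana1:30013 AZACSNAP '<password>'
hdbuserstore Set AZACSNAP_NODE2 saphana2:30013 AZACSNAP '<password>'
hdbuserstore List
```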

 

AzAcSnap configuration

Once hdbuserstore keys have been configured using the physical hostnames, the subsequent AzAcSnap configuration files for each instance can be created. 

 

(!) Note

 

The AzAcSnap “serverAddress” is configured with the hostname.

 

azacsnap@azacsnap-opensuse:~/bin> cat NODE1_HANA.json
{
 "version": "5.0",
 "logPath": "./logs",
 "securityPath": "./security",
 "comments": [
 ],
 "database": [
  {
   "hana": {
    "serverAddress": "saphana1",
    "sid": "PR1",
    "instanceNumber": "00",
    "hdbUserStoreName": "AZACSNAP_NODE1",
    "savePointAbortWaitSeconds": 1200,
    "hliStorage": [],
    "anfStorage": [
     {
      "dataVolume": [
       {
        "resourceId": "/subscriptions/00aa000a-aaaa-0000-00a0-00aa000aaa0a/resourceGroups/saphanaazacsnaptest/providers/Microsoft.NetApp/netAppAccounts/saphanatestinganf/capacityPools/Premium/volumes/HANADATA_P",
        "authFile": "azureauth.json"
       },
       {
        "resourceId": "/subscriptions/00aa000a-aaaa-0000-00a0-00aa000aaa0a/resourceGroups/saphanaazacsnaptest/providers/Microsoft.NetApp/netAppAccounts/saphanatestinganf/capacityPools/Premium/volumes/HANASHARED_P",
        "authFile": "azureauth.json"
       }
      ],
      "otherVolume": [
       {
        "resourceId": "/subscriptions/00aa000a-aaaa-0000-00a0-00aa000aaa0a/resourceGroups/saphanaazacsnaptest/providers/Microsoft.NetApp/netAppAccounts/saphanatestinganf/capacityPools/Premium/volumes/HANALOGBACKUP_P",
        "authFile": "azureauth.json"
       }
      ]
     }
    ]
   }
  }
 ]
}

 

azacsnap@azacsnap-opensuse:~/bin> cat NODE2_HANA.json
{
 "version": "5.0",
 "logPath": "./logs",
 "securityPath": "./security",
 "comments": [
 ],
 "database": [
  {
   "hana": {
    "serverAddress": "saphana2",
    "sid": "PR1",
    "instanceNumber": "00",
    "hdbUserStoreName": "AZACSNAP_NODE2",
    "savePointAbortWaitSeconds": 1200,
    "hliStorage": [],
    "anfStorage": [
     {
      "dataVolume": [
       {
        "resourceId": "/subscriptions/00aa000a-aaaa-0000-00a0-00aa000aaa0a/resourceGroups/saphanaazacsnaptest/providers/Microsoft.NetApp/netAppAccounts/saphanatestinganf/capacityPools/Premium/volumes/HANADATA_P",
        "authFile": "azureauth.json"
       },
       {
        "resourceId": "/subscriptions/00aa000a-aaaa-0000-00a0-00aa000aaa0a/resourceGroups/saphanaazacsnaptest/providers/Microsoft.NetApp/netAppAccounts/saphanatestinganf/capacityPools/Premium/volumes/HANASHARED_P",
        "authFile": "azureauth.json"
       }
      ],
      "otherVolume": [
       {
        "resourceId": "/subscriptions/00aa000a-aaaa-0000-00a0-00aa000aaa0a/resourceGroups/saphanaazacsnaptest/providers/Microsoft.NetApp/netAppAccounts/saphanatestinganf/capacityPools/Premium/volumes/HANALOGBACKUP_P",
        "authFile": "azureauth.json"
       }
      ]
     }
    ]
   }
  }
 ]
}
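As the two listings show, the configuration files differ only in serverAddress and hdbUserStoreName. When maintaining many node pairs, they can be generated from a template; the following is a minimal sketch, where the __HOST__/__KEY__ placeholders, the template, and the output file names are invented for illustration and the real files also carry sid, instanceNumber, volume resource IDs, and so on:

```shell
#!/bin/sh
# Sketch: generate per-node AzAcSnap config fragments from one template.
cat > hana_template.json <<'EOF'
{ "serverAddress": "__HOST__", "hdbUserStoreName": "__KEY__" }
EOF
sed -e 's/__HOST__/saphana1/' -e 's/__KEY__/AZACSNAP_NODE1/' hana_template.json > NODE1_sketch.json
sed -e 's/__HOST__/saphana2/' -e 's/__KEY__/AZACSNAP_NODE2/' hana_template.json > NODE2_sketch.json
grep '"serverAddress"' NODE1_sketch.json
```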

 

Scheduling AzAcSnap backups

Now that the hdbuserstore keys and AzAcSnap configuration files have been created, properly referencing the physical hostnames of the cluster nodes, scheduling the backups is all that remains. The AzAcSnap backup schedule must include both systems (primary and secondary). In this setup, cron on the central AzAcSnap VM is used to manage the backup schedules.

 

A best practice with AzAcSnap is to schedule three to six backups a day; anything more than six backups yields diminishing value. Whether you take one snapshot or six snapshots per day, the daily ANF snapshot storage consumption will be similar. However, taking more than one snapshot per day reduces RTO, because fewer logs must be replayed to recover the database.
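The effect on recovery time can be reasoned about with simple arithmetic: assuming evenly spaced snapshots, each additional snapshot per day shrinks the worst-case amount of log that must be replayed:

```shell
#!/bin/sh
# Worst-case hours of log to replay, for N evenly spaced snapshots per day.
for n in 1 3 6; do
  echo "$n snapshot(s)/day -> up to $((24 / n)) hours of log replay"
done
```

So moving from one to six snapshots per day cuts the worst-case replay window from roughly 24 hours of log to roughly 4.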

 

The following are the cron entries on the AzAcSnap central VM for the two HANA databases.

0 */4 * * * ( . ~/.profile ; cd /home/azacsnap/bin ; azacsnap -c backup --volume data --prefix PR1_cluster_hourly --retention 6 --configfile NODE1_HANA.json --trim )
0 */4 * * * ( . ~/.profile ; cd /home/azacsnap/bin ; azacsnap -c backup --volume data --prefix PR1_cluster_hourly --retention 6 --configfile NODE2_HANA.json --trim )

The time and date fields of the crontab are "0 */4 * * *", which means: at 0 minutes past the hour, every fourth hour, every day of the month, every month, and every day of the week, run the command enclosed in parentheses.
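To confirm the schedule, the daily fire times implied by "0 */4 * * *" can be enumerated, giving six runs per day, in line with the three-to-six backup guidance:

```shell
#!/bin/sh
# List the daily fire times of the cron expression "0 */4 * * *".
for h in $(seq 0 4 20); do
  printf '%02d:00\n' "$h"
done
```

This prints 00:00, 04:00, 08:00, 12:00, 16:00, and 20:00.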

 

The command within the parentheses sources the user's profile (to set up the correct PATH, etc., as per the installation), changes into the user's 'bin' directory, and runs azacsnap with the parameters shown. Refer to AzAcSnap's command options.

 

AzAcSnap will query the HANA database and determine the primary node. A backup operation is performed only on the primary node (as noted earlier, an SAP HANA snapshot can only be issued on the primary node). When the backup command runs against the secondary node, AzAcSnap determines that it is currently a secondary, and the backup operation ends without further action.

 

In the rare case that AzAcSnap determines there is a split-brain scenario, where both HANA nodes claim to be the primary, the backup operation on both nodes will terminate and user intervention will be required. This is a safety mechanism; HSR must be corrected so that a single primary node is properly selected.
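The decision logic described above can be sketched as a small shell function. This is a simplified illustration only: query_mode is a stub standing in for the HSR status query that AzAcSnap actually issues against each node:

```shell
#!/bin/sh
# Sketch of AzAcSnap's node-selection safety logic (stubbed query).
query_mode() {  # stub: returns the HSR role a node would report
  case "$1" in
    saphana1) echo "primary" ;;
    saphana2) echo "secondary" ;;
  esac
}

decide() {
  m1=$(query_mode saphana1); m2=$(query_mode saphana2)
  if [ "$m1" = "primary" ] && [ "$m2" = "primary" ]; then
    echo "split-brain: abort, user intervention required"
  elif [ "$m1" = "primary" ]; then echo "backup saphana1"
  elif [ "$m2" = "primary" ]; then echo "backup saphana2"
  else echo "no primary: skip backup"
  fi
}
decide   # -> backup saphana1
```

Because the schedule runs against both configuration files, exactly one of the two cron jobs performs a backup on any given cycle, whichever node is currently primary.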

 

More information can be found in the Manual Recovery Guide for SAP HANA on Azure VMs from Azure NetApp Files snapshot with AzAcSnap, available here.

 

Summary

Using AzAcSnap in conjunction with Azure NetApp Files provides unique data management abilities for SAP customers found only in Azure. The combination of SAP HANA snapshots with ANF storage snapshots gives Basis teams the assurance of an application-consistent backup of the HANA database without the time and system load of a traditional file-based backup. The orchestration of this workflow is handled entirely by AzAcSnap.

 

Links to additional information

 

Appendix – SAP HANA data volume locations

 

A detailed explanation of persistent data storage can be found in the “SAP HANA Administration Guide for SAP HANA Platform” - “Persistent Data Storage in the SAP HANA Database” section.

 

The following diagram is taken from the "Data and Log Volumes" sub-section. It shows the directory hierarchy for persistent data storage (system with multitenant database containers) for SAP HANA. Note the separation of System DB and Tenant DB files into logically grouped sub-directories. The volume names of tenant databases carry a suffix representing the database. For example, Tenant DB 1 data storage is grouped into the "hdb00002.00003" and "hdb00003.00003" sub-directories for the indexserver and xsengine respectively.

 

[Image: GeertVanTeylingen_3-1666002949785.png]
