Deploying Microsoft Sentinel Threat Monitoring for SAP agent into an AKS/Kubernetes cluster

This post has been republished via RSS; it originally appeared at: Microsoft Tech Community - Latest Blogs.

Quick Intro

Effectively monitoring SAP environments has traditionally been very difficult to achieve.

Microsoft recently released the Microsoft Sentinel Threat Monitoring for SAP solution, which helps protect your SAP environments.


To see how you can deploy our solution and the scenarios it covers, check out the product documentation.

One of the common questions the Sentinel team gets asked is "How do we make the Microsoft Sentinel Threat Monitoring for SAP solution highly available?"


The original deployment scenarios, available in the product documentation, outline deployment of the data connector agent to a VM running Docker. If this VM runs in Azure, we get a 99.9% SLA (provided certain criteria are met).

Well, what if you want an even better SLA, or better manageability than a single Docker instance?

The answer is "run this container in a Kubernetes cluster". We have Azure Kubernetes Service available in Azure, so in this article we'll review how to get the agent running there.


Technology in use

Before we get started, let's look at what technologies, apart from AKS, we'll be using to achieve the goal.


Firstly, we need to remember that the Microsoft Sentinel Threat Monitoring for SAP data collector utilizes the SAP NetWeaver SDK (available for download from SAP; an SAP account is required), which will need to be presented to the container.

Secondly, we need a location to store the configuration file, which will then be mounted into the container.

We’ll achieve that through Azure File Shares.

Next, we'll be storing secrets in Azure Key Vault, and we'll need an Azure Active Directory pod-managed identity to connect to it.


We will be performing all our actions from a Bash shell in Azure Cloud Shell.


Enabling necessary features

Since we'll be using some preview features, we first need to activate them and get the latest version of az command (commands borrowed from Use Azure Active Directory pod-managed identities in Azure Kubernetes Service (Preview))


# The following two commands enable the Managed Pod Identity feature
az feature register --name EnablePodIdentityPreview --namespace Microsoft.ContainerService
az provider register -n Microsoft.ContainerService

# The following commands add the aks-preview features to az command and update the aks-preview extension to the latest version
az extension add --name aks-preview
az extension update --name aks-preview


Next we need to deploy an AKS cluster that has the identity management feature activated. Also, since our AKS cluster will be talking to an SAP system, we'll set it up in an Azure virtual network that you can later peer with the network that hosts your SAP deployment (or that is connected to on-premises, in case your SAP resides there).


Creating an AKS cluster

We’ll carry out the steps outlined in Quickstart: Deploy an Azure Kubernetes Service cluster using the Azure CLI, with a minor change.

For this demo we'll be creating everything in the East US Azure region; make sure to change the location parameter if deploying in a different region. Also note that we're using some variables that contain the names of resources; make sure you change them so they are unique to you.

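The commands throughout this article rely on a handful of shell variables for resource names that are not defined anywhere above. A minimal sketch follows; every value is a hypothetical example, except the two share names (nwrfcsdk and work), which should be kept as-is to match the rest of the walkthrough:

```shell
# Hypothetical example values -- replace with names unique to your environment
RGNAME=sentinel4sap-rg                            # resource group
AKSNAME=sentinel4sap-aks                          # AKS cluster
AKS_PERS_STORAGE_ACCOUNT_NAME=sentinel4sapstore   # storage account (must be globally unique)
AKS_PERS_SHARE_NAME=nwrfcsdk                      # file share for the NetWeaver SDK
AKS_PERS_SHARE_NAME2=work                         # file share for configuration and metadata
IDENTITY_NAME=sentinel4sap-identity               # managed identity
POD_IDENTITY_NAME=sentinelforsap                  # must match the aadpodidbinding label in the yaml
POD_IDENTITY_NAMESPACE=default
KVNAME=sentinel4sap-kv                            # key vault (must be globally unique)
```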

# Create a resource group
az group create --name $RGNAME --location eastus

# Create AKS cluster
# Notice the --network-plugin azure and --enable-pod-identity parameters. For some reason the order of these parameters must be exactly as displayed; swapping them around will result in a failure
az aks create --resource-group $RGNAME --name $AKSNAME --node-count 1 --enable-addons monitoring --generate-ssh-keys --network-plugin azure --enable-pod-identity

#Connect to cluster
az aks get-credentials --resource-group $RGNAME --name $AKSNAME

# Verify connection to the AKS cluster
kubectl get nodes



Creating storage account, file shares and granting access

The next step is to create a storage account and two file shares.

We’ll borrow some of the steps from the Manually create and use a volume with Azure Files share in Azure Kubernetes Service (AKS) guide.

The following sample can be used to create a storage account and necessary file shares.


# The storage account and file share names come from the variables defined earlier; change them as needed for your own environment

# Create a storage account
az storage account create -n $AKS_PERS_STORAGE_ACCOUNT_NAME -g $RGNAME --sku Standard_LRS

# Export the connection string as an environment variable, this is used when creating the Azure file share
export AZURE_STORAGE_CONNECTION_STRING=$(az storage account show-connection-string -n $AKS_PERS_STORAGE_ACCOUNT_NAME -g $RGNAME -o tsv)

# Create the file shares
az storage share create -n $AKS_PERS_SHARE_NAME --connection-string $AZURE_STORAGE_CONNECTION_STRING
az storage share create -n $AKS_PERS_SHARE_NAME2 --connection-string $AZURE_STORAGE_CONNECTION_STRING

# Get storage account key
STORAGE_KEY=$(az storage account keys list --resource-group $RGNAME --account-name $AKS_PERS_STORAGE_ACCOUNT_NAME --query "[0].value" -o tsv)

# Echo storage account name and key
echo Storage account name: $AKS_PERS_STORAGE_ACCOUNT_NAME
echo Storage account key: $STORAGE_KEY


That should create a storage account and two file shares, `nwrfcsdk` and `work`, and also output the storage account name and key (we'll need them in the next steps).


The next task is to allow the Kubernetes cluster to access the file shares.

Borrowing steps from the Manually create Azure Files share guide:


kubectl create secret generic sentinel4sap-fileshare-secret --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME --from-literal=azurestorageaccountkey=$STORAGE_KEY


Next, upload the NetWeaver SDK zip file to the nwrfcsdk share.

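If you prefer the CLI to the portal, the upload can be sketched with az storage file upload; the zip filename here is a hypothetical placeholder for the SDK archive you downloaded from SAP:

```shell
# Hypothetical archive name -- use the NetWeaver SDK zip you downloaded from SAP
SDK_ZIP=nwrfcsdk.zip

# Upload to the nwrfcsdk share, using the connection string exported earlier
az storage file upload \
  --share-name nwrfcsdk \
  --source "$SDK_ZIP" \
  --connection-string "$AZURE_STORAGE_CONNECTION_STRING"
```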


Creating systemconfig.ini

Create a systemconfig.ini file (see the Microsoft Sentinel Continuous Threat Monitoring for SAP container configuration file reference on Microsoft Docs).


The sample systemconfig.ini file below uses Azure Key Vault for storing secrets. You *can* (but shouldn't) configure it to store secrets right in the systemconfig.ini file in plain text; however, this is a disaster for security, so let's take the long, secure way.

[Secrets Source]
secrets = AZURE_KEY_VAULT
# Replace with the name of your own key vault
keyvault = <your key vault name>

[ABAP Central Instance]
# Uncomment and replace with your own System ID, for example A4H
# sysid = A4H

# Uncomment and replace with your own Client ID, for example 001
# client = 001

# Uncomment and replace with your own System Number, for example 00
# sysnr = 00

# Uncomment and replace with your own ABAP server IP address, for example
# ashost =

[Azure Credentials]

[File Extraction ABAP]

[File Extraction JAVA]

[Logs Activation Status]
ABAPAuditLog = True
ABAPJobLog = True
ABAPSpoolLog = True
ABAPSpoolOutputLog = True
ABAPChangeDocsLog = True
ABAPAppLog = True
ABAPWorkflowLog = True
ABAPCRLog = True
ABAPTableDataLog = False

[Connector Configuration]
extractuseremail = True
apiretry = True
auditlogforcexal = False
auditlogforcelegacyfiles = False
timechunk = 60

[ABAP Table Selector]
USR01_FULL = True
USR02_FULL = True
AGR_1251_FULL = True
UST04_FULL = True
USR21_FULL = True
ADR6_FULL = True
USR05_FULL = True

Upload the systemconfig.ini file to the work share.

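The configuration file can likewise be uploaded from the shell; this sketch assumes systemconfig.ini is in the current directory and that the connection string exported earlier is still set:

```shell
# Upload systemconfig.ini to the work file share created earlier
az storage file upload \
  --share-name work \
  --source systemconfig.ini \
  --connection-string "$AZURE_STORAGE_CONNECTION_STRING"
```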


Creating an identity and assigning it to the AKS cluster

The next steps are again borrowed from the Use Azure Active Directory pod-managed identities in Azure Kubernetes Service (Preview) guide.


Create an identity and assign it to a namespace in the AKS cluster:



az identity create --resource-group $RGNAME --name ${IDENTITY_NAME}
export IDENTITY_CLIENT_ID="$(az identity show -g ${RGNAME} -n ${IDENTITY_NAME} --query clientId -otsv)"
export IDENTITY_RESOURCE_ID="$(az identity show -g ${RGNAME} -n ${IDENTITY_NAME} --query id -otsv)"
export IDENTITY_PRINCIPAL_ID="$(az identity show -g ${RGNAME} -n ${IDENTITY_NAME} --query principalId -otsv)"

az aks pod-identity add --resource-group $RGNAME --cluster-name $AKSNAME --namespace ${POD_IDENTITY_NAMESPACE}  --name ${POD_IDENTITY_NAME} --identity-resource-id ${IDENTITY_RESOURCE_ID}



Create and configure the keyvault

These steps are more or less from our own deployment guide.


az keyvault create --name $KVNAME --resource-group $RGNAME

az keyvault set-policy -n $KVNAME -g $RGNAME --object-id $IDENTITY_PRINCIPAL_ID --secret-permissions get


Now the easy part: let's populate the key vault with the secrets.


# Define the SID of your SAP system, for example A4H
SID=<SID>

# Replace the placeholder values below to match your SAP and Log Analytics setup
USERNAME=<SAP user name>
PASSWORD=<SAP user password>
LOGWSID=<Log Analytics workspace ID>
LOGWSPUBLICKEY=<Log Analytics workspace primary key>

az keyvault secret set --name "$SID"-ABAPUSER --value "$USERNAME" --vault-name "$KVNAME"
az keyvault secret set --name "$SID"-ABAPPASS --value "$PASSWORD" --vault-name "$KVNAME"
az keyvault secret set --name "$SID"-LOGWSID --value "$LOGWSID" --vault-name "$KVNAME"
az keyvault secret set --name "$SID"-LOGWSPUBLICKEY --value "$LOGWSPUBLICKEY" --vault-name "$KVNAME"



Constructing the yaml file

Finally, let’s create the yaml file which will be used to deploy our pod (borrowed, with some editing, from Manually create Azure Files share).

A couple of things to point out:

The "aadpodidbinding" label must match the pod identity name from the previous step. It is also crucial to have the nobrl option on the work file share; otherwise the metadata.db file that the data collector generates to track its progress will fail to initialize.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: Sentinel4SAP
  name: sentinel4sap-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: Sentinel4SAP
  template:
    metadata:
      labels:
        app: Sentinel4SAP
        aadpodidbinding: sentinelforsap
      name: deployment-azurefile
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
        - name: sentinel4sap-agent
          # Verify the image reference against the current deployment guide
          image: mcr.microsoft.com/azure-sentinel/solutions/sapcon:latest
          volumeMounts:
            - name: sentinel4sap-sdk
              mountPath: "/sapcon-app/inst"
              readOnly: false
            - name: sentinel4sap-work
              mountPath: "/sapcon-app/sapcon/config/system"
              readOnly: false
          resources:
            limits:
              memory: "2048Mi"
              cpu: "500m"
      volumes:
        - name: sentinel4sap-work
          csi:
            driver: file.csi.azure.com
            volumeAttributes:
              secretName: sentinel4sap-fileshare-secret
              shareName: work
              mountOptions: "dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks,cache=strict,nosharesock,nobrl"
        - name: sentinel4sap-sdk
          csi:
            driver: file.csi.azure.com
            volumeAttributes:
              secretName: sentinel4sap-fileshare-secret
              shareName: nwrfcsdk
              mountOptions: "dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks,cache=strict,nosharesock,nobrl"

Upload this yaml file to Azure Cloud Shell.


Creating Network Peering

One last thing to do before we deploy the actual container is to peer the VNet that was created for AKS with the VNet where SAP resides (unless you're accessing your SAP system through a public IP address, which would be very odd). You can navigate through the portal and create a new peering between the networks.

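If you'd rather script the peering than click through the portal, it can be sketched as below; the SAP-side resource group and VNet names are hypothetical, so substitute your own:

```shell
# Hypothetical names for the SAP-side network -- replace with your own
SAP_VNET_RG=sap-rg
SAP_VNET_NAME=sap-vnet

# The AKS node resource group contains the cluster VNet; look up its name and IDs
NODE_RG=$(az aks show -g $RGNAME -n $AKSNAME --query nodeResourceGroup -o tsv)
AKS_VNET_NAME=$(az network vnet list -g $NODE_RG --query "[0].name" -o tsv)
AKS_VNET_ID=$(az network vnet show -g $NODE_RG -n $AKS_VNET_NAME --query id -o tsv)
SAP_VNET_ID=$(az network vnet show -g $SAP_VNET_RG -n $SAP_VNET_NAME --query id -o tsv)

# Peer in both directions
az network vnet peering create -g $NODE_RG -n aks-to-sap --vnet-name $AKS_VNET_NAME --remote-vnet $SAP_VNET_ID --allow-vnet-access
az network vnet peering create -g $SAP_VNET_RG -n sap-to-aks --vnet-name $SAP_VNET_NAME --remote-vnet $AKS_VNET_ID --allow-vnet-access
```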


Make magic happen

Run the command below and we're done (assuming you saved the yaml as aks-deploy.yml) :)


kubectl apply -f aks-deploy.yml


That should be it. Verify the container is running by reviewing:


kubectl get pods
NAME                                 READY   STATUS             RESTARTS         AGE
sentinel4sap-agent-cdf5fd8fd-w2nrv   1/1     Running            0 (4m25s ago)    4m

kubectl logs sentinel4sap-agent-cdf5fd8fd-w2nrv


That's it, we've now deployed the data collector agent onto a Kubernetes cluster. Simple, right? :)


For more information on Microsoft Sentinel Threat Monitoring for SAP, be sure to check the product documentation page.


P.S. Did you find the easter egg in this post?
