Deploying Microsoft Sentinel Threat Monitoring for SAP agent into an AKS/Kubernetes cluster


Quick Intro

Effectively monitoring SAP environments has traditionally been very difficult to achieve.

Microsoft recently released the Microsoft Sentinel Threat Monitoring for SAP solution, which helps protect your SAP environments.

 

To see how you can deploy our solution/scenarios, check out https://aka.ms/sentinel4sapintro

One of the common questions the Sentinel team gets asked is "How do we make the Microsoft Sentinel Threat Monitoring for SAP solution highly available?"

 

The original deployment scenarios (available at https://aka.ms/sentinel4sapdocs) outline deployment of the data connector agent to a VM running Docker. If this VM runs in Azure, we have a 99.9% SLA, provided certain criteria are met (https://azure.microsoft.com/en-us/support/legal/sla/virtual-machine).

Well, what if you want an even better SLA, or better manageability than a single Docker instance?

The answer is "run this container in a Kubernetes cluster". We have Azure Kubernetes Service (AKS) available in Azure, so in this article we'll review how to get it running.

 

Technology in use

Before we get started, let's look at which technologies, apart from AKS, we'll be using to achieve the goal.

 

Firstly, we need to remember that the Microsoft Sentinel Threat Monitoring for SAP data collector uses the SAP NetWeaver SDK (download at https://aka.ms/sentinel4sapsdk, SAP account required), which needs to be presented to the container.

Secondly, we need a location to store the configuration file, which then needs to be mounted into the container.

We’ll achieve both through Azure File Shares.

Next, we'll be storing secrets in Azure Key Vault, and we'll need an Azure Active Directory pod-managed identity to connect to it.

 

We will be performing all our actions from a Bash shell in Azure Cloud Shell.

 

Enabling necessary features

Since we'll be using some preview features, we first need to activate them and get the latest version of the az command's aks-preview extension (commands borrowed from Use Azure Active Directory pod-managed identities in Azure Kubernetes Service (Preview)).

 

# The following two commands enable the Managed Pod Identity feature
az feature register --name EnablePodIdentityPreview --namespace Microsoft.ContainerService
# Feature registration can take a few minutes; check progress with:
# az feature show --name EnablePodIdentityPreview --namespace Microsoft.ContainerService
# Once it shows "Registered", propagate the change:
az provider register -n Microsoft.ContainerService

# The following commands add the aks-preview features to az command and update the aks-preview extension to the latest version
az extension add --name aks-preview
az extension update --name aks-preview

 

Next we need to deploy an AKS cluster that has the identity management feature activated. Also, since our AKS cluster will be talking to an SAP system, we'll set it up in an Azure virtual network that you can later peer with the network hosting your SAP deployment (or one that is connected to on-premises, in case your SAP resides there).

 

Creating an AKS cluster

We’ll carry out the steps outlined in Quickstart: Deploy an Azure Kubernetes Service cluster using the Azure CLI, with a minor change.

For this demo we'll be creating everything in the East US Azure region; make sure to change the location parameter if deploying in a different region. Also note that we're defining some variables containing resource names; make sure you change them so they are unique to you.

 

# Create a resource group
RGNAME=sentinelforsapRG
AKSNAME=sentinelforsapaks
az group create --name $RGNAME --location eastus

# Create AKS cluster
# Notice the --network-plugin azure and --enable-pod-identity flags. For some reason these parameters must appear in this exact order; rearranging them results in a failure
az aks create --resource-group $RGNAME --name $AKSNAME --node-count 1 --enable-addons monitoring --generate-ssh-keys --network-plugin azure --enable-pod-identity

#Connect to cluster
az aks get-credentials --resource-group $RGNAME --name $AKSNAME

# Verify connection to the AKS cluster
kubectl get nodes

 

 

Creating storage account, file shares and granting access

The next step is to create a storage account and two file shares.

We’ll borrow some of the steps from the Manually create and use a volume with Azure Files share in Azure Kubernetes Service (AKS) guide.

The following sample can be used to create a storage account and necessary file shares.

 

# Change these three parameters as needed for your own environment
AKS_PERS_STORAGE_ACCOUNT_NAME=sentinelforsap$RANDOM
AKS_PERS_SHARE_NAME=nwrfcsdk
AKS_PERS_SHARE_NAME2=work

# Create a storage account
az storage account create -n $AKS_PERS_STORAGE_ACCOUNT_NAME -g $RGNAME --sku Standard_LRS

# Export the connection string as an environment variable, this is used when creating the Azure file share
export AZURE_STORAGE_CONNECTION_STRING=$(az storage account show-connection-string -n $AKS_PERS_STORAGE_ACCOUNT_NAME -g $RGNAME -o tsv)

# Create the file shares
az storage share create -n $AKS_PERS_SHARE_NAME --connection-string $AZURE_STORAGE_CONNECTION_STRING
az storage share create -n $AKS_PERS_SHARE_NAME2 --connection-string $AZURE_STORAGE_CONNECTION_STRING

# Get storage account key
STORAGE_KEY=$(az storage account keys list --resource-group $RGNAME --account-name $AKS_PERS_STORAGE_ACCOUNT_NAME --query "[0].value" -o tsv)

# Echo storage account name and key
echo Storage account name: $AKS_PERS_STORAGE_ACCOUNT_NAME
echo Storage account key: $STORAGE_KEY

 

That should create a storage account and two file shares, `nwrfcsdk` and `work`, and also output the storage account name and key (we'll need them in the next steps).

 

The next task is to allow Kubernetes to access the file shares.

Borrowing steps from the Manually create Azure Files share guide:

 

kubectl create secret generic sentinel4sap-fileshare-secret --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME --from-literal=azurestorageaccountkey=$STORAGE_KEY

 

Next, upload the NetWeaver SDK zip file to the nwrfcsdk share, so that the result looks like this:

[Screenshot: the nwrfcsdk file share containing the uploaded NetWeaver SDK zip file]
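If you prefer the CLI over the portal for the upload, a sketch like the following should work; note that the archive name `nwrfcsdk.zip` is a placeholder, not the real SDK file name, so substitute the name of the zip you actually downloaded from SAP:

```shell
# Upload the NetWeaver SDK archive to the nwrfcsdk share.
# SDK_FILE is a placeholder - substitute the name of the zip you downloaded from SAP.
SDK_FILE=nwrfcsdk.zip

az storage file upload \
  --share-name nwrfcsdk \
  --source "$SDK_FILE" \
  --connection-string "$AZURE_STORAGE_CONNECTION_STRING"
```

This reuses the `AZURE_STORAGE_CONNECTION_STRING` variable we exported when creating the shares.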

 

Creating systemconfig.ini

Create a systemconfig.ini file (see the Microsoft Sentinel Continuous Threat Monitoring for SAP container configuration file reference on Microsoft Docs).

 

The sample systemconfig.ini file below uses Azure Key Vault for storing secrets. You *can* (but shouldn't) configure it to store secrets right in the systemconfig.ini file in plain text; however, that is a disaster for security, so let's take the longer, secure way.

[Secrets Source]
secrets = AZURE_KEY_VAULT

[ABAP Central Instance]
# Uncomment and replace with your own System ID, for example A4H
# sysid = A4H

# Uncomment and replace with your own Client ID, for example 001
# client = 001

# Uncomment and replace with your own System Number, for example 00
# sysnr = 00

# Uncomment and replace with your own ABAP server IP address, for example 192.168.1.1
# ashost = 192.168.1.1

[Azure Credentials]

[File Extraction ABAP]

[File Extraction JAVA]

[Logs Activation Status]
ABAPAuditLog = True
ABAPJobLog = True
ABAPSpoolLog = True
ABAPSpoolOutputLog = True
ABAPChangeDocsLog = True
ABAPAppLog = True
ABAPWorkflowLog = True
ABAPCRLog = True
ABAPTableDataLog = False

[Connector Configuration]
extractuseremail = True
apiretry = True
auditlogforcexal = False
auditlogforcelegacyfiles = False
timechunk = 60

[ABAP Table Selector]
AGR_TCODES_FULL = True
USR01_FULL = True
USR02_FULL = True
USR02_INCREMENTAL = True
AGR_1251_FULL = True
AGR_USERS_FULL = True
AGR_USERS_INCREMENTAL = True
AGR_PROF_FULL = True
UST04_FULL = True
USR21_FULL = True
ADR6_FULL = True
ADCP_FULL = True
USR05_FULL = True
USGRP_USER_FULL = True
USER_ADDR_FULL = True
DEVACCESS_FULL = True
AGR_DEFINE_FULL = True
AGR_DEFINE_INCREMENTAL = True
PAHI_FULL = True
AGR_AGRS_FULL = True
USRSTAMP_FULL = True
USRSTAMP_INCREMENTAL = True

Upload the systemconfig.ini file to the work share, so the result looks like this:

 

[Screenshot: the work file share containing the uploaded systemconfig.ini file]
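As with the SDK upload, this can also be done from the CLI; the sketch below assumes systemconfig.ini sits in your current directory in Cloud Shell:

```shell
# Upload systemconfig.ini (assumed to be in the current directory) to the work share
az storage file upload \
  --share-name work \
  --source systemconfig.ini \
  --connection-string "$AZURE_STORAGE_CONNECTION_STRING"
```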

Creating an identity and assigning it to the AKS cluster

The next steps are again borrowed from the Use Azure Active Directory pod-managed identities in Azure Kubernetes Service (Preview) guide.

 

Create an identity and assign it to a namespace in the AKS cluster:

 

IDENTITY_NAME=sentinelforsappodidentity
POD_IDENTITY_NAME="sentinelforsap"
POD_IDENTITY_NAMESPACE="default"

az identity create --resource-group $RGNAME --name ${IDENTITY_NAME}
export IDENTITY_CLIENT_ID="$(az identity show -g ${RGNAME} -n ${IDENTITY_NAME} --query clientId -otsv)"
export IDENTITY_RESOURCE_ID="$(az identity show -g ${RGNAME} -n ${IDENTITY_NAME} --query id -otsv)"
export IDENTITY_PRINCIPAL_ID="$(az identity show -g ${RGNAME} -n ${IDENTITY_NAME} --query principalId -otsv)"

az aks pod-identity add --resource-group $RGNAME --cluster-name $AKSNAME --namespace ${POD_IDENTITY_NAMESPACE}  --name ${POD_IDENTITY_NAME} --identity-resource-id ${IDENTITY_RESOURCE_ID}
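As a quick sanity check (my addition, not part of the original guide), the pod identity addon creates AzureIdentity and AzureIdentityBinding custom resources in the namespace, which you can list:

```shell
# Namespace used when creating the pod identity (same as POD_IDENTITY_NAMESPACE above)
POD_IDENTITY_NAMESPACE="default"

# Both resource types should appear if the pod identity was added successfully
kubectl get azureidentity,azureidentitybinding -n ${POD_IDENTITY_NAMESPACE}
```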

 

 

Create and configure the keyvault

These steps are more or less from our own deployment guide.

 

# Key vault names must be globally unique - change this name if it's already taken
KVNAME=sentinelforsapkv
az keyvault create --name $KVNAME --resource-group $RGNAME

az keyvault set-policy -n $KVNAME -g $RGNAME --object-id $IDENTITY_PRINCIPAL_ID --secret-permissions get

 

Now the easy part: let's populate the key vault with the secrets.

 

#Define the SID
SID=A4H

# Replace values below to match your SAP and Log Analytics setup
USERNAME="SENTINELUSER"
PASSWORD="P@ssw0rd1"
LOGWSID="8a7e2369-7a53-442f-a264-b1f98e1b1baa"
LOGWSPUBLICKEY="Q29uZ3JhdHosIHlvdSBmb3VuZCB0aGUgZWFzdGVyIGVnZyA6KSBOb3cgZ28gYW5kIGZpbmlzaCB0aGUgc2V0dXA="

az keyvault secret set --name "$SID"-ABAPUSER --value "$USERNAME" --vault-name "$KVNAME"
az keyvault secret set --name "$SID"-ABAPPASS --value "$PASSWORD" --vault-name "$KVNAME"
az keyvault secret set --name "$SID"-LOGWSID --value "$LOGWSID" --vault-name "$KVNAME"
az keyvault secret set --name "$SID"-LOGWSPUBLICKEY --value "$LOGWSPUBLICKEY" --vault-name "$KVNAME"
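To double-check that all four secrets landed in the vault (a verification step I'm adding, not part of the original flow), list the secret names:

```shell
KVNAME=sentinelforsapkv   # same vault name as above

# Should print the four secret names, e.g. A4H-ABAPUSER, A4H-ABAPPASS,
# A4H-LOGWSID and A4H-LOGWSPUBLICKEY for SID A4H
az keyvault secret list --vault-name "$KVNAME" --query "[].name" -o tsv
```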

 

 

Constructing the yaml file

Finally, let’s create the yaml file which will be used to deploy our pod (borrowed, with some editing, from Manually create Azure Files share).

A couple of things to point out:

The "aadpodidbinding" label must match the pod identity name from the previous step. It is also crucial to have the nobrl option on the work file share; otherwise the metadata.db file, which the data collector generates to track its progress, will fail to initialize.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: Sentinel4SAP
  name: sentinel4sap-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: Sentinel4SAP
  template:
    metadata:
      labels:
        app: Sentinel4SAP
        aadpodidbinding: sentinelforsap
      name: deployment-azurefile
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
        - name: sentinel4sap-agent
          image: mcr.microsoft.com/azure-sentinel/solutions/sapcon:latest
          volumeMounts:
            - name: sentinel4sap-sdk
              mountPath: "/sapcon-app/inst"
              readOnly: false
            - name: sentinel4sap-work
              mountPath: "/sapcon-app/sapcon/config/system"
              readOnly: false
          resources:
            limits:
              memory: "2048Mi"
              cpu: "500m"
      volumes:
      - name: sentinel4sap-work
        csi:
            driver: file.csi.azure.com
            volumeAttributes:
                secretName: sentinel4sap-fileshare-secret
                shareName: work
                mountOptions: "dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks,cache=strict,nosharesock,nobrl"
      - name: sentinel4sap-sdk
        csi:
            driver: file.csi.azure.com
            volumeAttributes:
                secretName: sentinel4sap-fileshare-secret
                shareName: nwrfcsdk
                mountOptions: "dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks,cache=strict,nosharesock,nobrl"

Upload this yaml file to Azure Cloud Shell.

 

Creating Network Peering

One last thing to do before we deploy the actual container is to peer the VNet that was created for AKS with the VNet where SAP resides (unless you're accessing your SAP system through a public IP address, which would be very odd). Just navigate through the portal and create a new peering between the networks:

[Screenshot: creating a virtual network peering in the Azure portal]
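The same peering can be sketched out in the CLI. The resource group and VNet names below are hypothetical placeholders; the AKS node VNet lives in the managed resource group AKS creates (named MC_&lt;resource group&gt;_&lt;cluster&gt;_&lt;region&gt;), and you'll need to substitute your own SAP VNet details:

```shell
# Hypothetical names - substitute your own. Find the AKS VNet with:
# az network vnet list -g $AKS_VNET_RG
AKS_VNET_RG=MC_sentinelforsapRG_sentinelforsapaks_eastus
AKS_VNET_NAME=aks-vnet
SAP_VNET_RG=sap-rg
SAP_VNET_NAME=sap-vnet

AKS_VNET_ID=$(az network vnet show -g $AKS_VNET_RG -n $AKS_VNET_NAME --query id -o tsv)
SAP_VNET_ID=$(az network vnet show -g $SAP_VNET_RG -n $SAP_VNET_NAME --query id -o tsv)

# Peerings are directional - create one in each direction
az network vnet peering create -g $AKS_VNET_RG -n aks-to-sap \
  --vnet-name $AKS_VNET_NAME --remote-vnet $SAP_VNET_ID --allow-vnet-access
az network vnet peering create -g $SAP_VNET_RG -n sap-to-aks \
  --vnet-name $SAP_VNET_NAME --remote-vnet $AKS_VNET_ID --allow-vnet-access
```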

 

Make magic happen

Run the command below and we're done (assuming you saved the yaml as aks-deploy.yml) :)

 

kubectl apply -f aks-deploy.yml

 

That should be it. Verify the container is running by reviewing:

 

kubectl get pods
NAME                                 READY   STATUS    RESTARTS        AGE
sentinel4sap-agent-cdf5fd8fd-w2nrv   1/1     Running   0 (4m25s ago)   4m

kubectl logs sentinel4sap-agent-cdf5fd8fd-w2nrv

 

That's it, we've now deployed the data collector agent onto a Kubernetes cluster. Simple, right? :)

 

For more information on Microsoft Sentinel Threat Monitoring for SAP be sure to check the product documentation page https://aka.ms/sentinel4sapdocs

 

P.S. Did you find the easter egg in this post?
