Configure MSDTC to Run Distributed Transactions for SQL Server Linux Containers on Azure Kubernetes


It's been a while since I've had the opportunity to write and share a blog post about SQL Server containers and Linux. Today, I'd like to show you how to set up and use MSDTC (Microsoft Distributed Transaction Coordinator) to execute distributed transactions for SQL Server containers running on a Kubernetes platform.

 

For more information on MSDTC and SQL Server on Linux, please see How to configure MSDTC on Linux - SQL Server | Microsoft Docs.

 

The first step is to set up a Kubernetes cluster and connect to it. As usual, I'll be using Azure Kubernetes Service (AKS) as my Kubernetes platform. Please see Quickstart: Deploy an AKS cluster by using Azure CLI - Azure Kubernetes Service | Microsoft Docs, which describes how to deploy an AKS cluster with the Azure CLI. After the cluster has been deployed, we connect to it using the steps outlined in that same quickstart.
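For reference, a minimal sketch of the commands from that quickstart; the resource group and cluster names below are placeholders, so substitute your own:

# Create a resource group and a small AKS cluster (names are illustrative)
az group create --name myResourceGroup --location eastus
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 2 --generate-ssh-keys

# Download the cluster credentials and point kubectl at the new cluster
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# Verify connectivity
kubectl get nodes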

 

I'm going to use the cluster's default namespace to deploy my SQL Server containers and other supporting objects. Before I deploy the SQL Server containers, I need to save the 'sa' password as a secret in Kubernetes, so I use the following command:

 

kubectl create secret generic mssql --from-literal=SA_PASSWORD="MyC0m9l&xP@ssw0rd"

 

Please note that I will use the same 'sa' password to connect to both of the SQL Server instances that I deploy.
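If you want to confirm the secret exists before moving on, a quick check against the secret name created above (the password value itself stays encoded and is not displayed):

kubectl get secret mssql
kubectl describe secret mssql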

 

Now that we've created the secret, we're ready to deploy SQL Server containers. I prefer and recommend deploying SQL Server containers as 'statefulset' deployments, which ensure that the container name and hostname remain unchanged even after the pods are deleted and recreated. For more information on statefulset deployments, see StatefulSets | Kubernetes.

 

The following YAML script deploys these objects: one storage class, two SQL Server containers, and two load balancer services to connect to the respective SQL Server instances.

 

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-disk
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS
  kind: Managed
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mssql
  labels:
    app: mssql
spec:
  serviceName: "mssql"
  replicas: 2
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      securityContext:
        fsGroup: 10001
      containers:
        - name: mssql
          image: mcr.microsoft.com/mssql/server:2019-latest
          ports:
            - containerPort: 1433
              name: tcpsql
            - containerPort: 13500
              name: dtcport
            - containerPort: 51000
              name: dtctcpport
          env:
            - name: ACCEPT_EULA
              value: "Y"
            - name: MSSQL_ENABLE_HADR
              value: "1"
            - name: MSSQL_AGENT_ENABLED
              value: "1"
            - name: MSSQL_RPC_PORT
              value: "13500"
            - name: MSSQL_DTC_TCP_PORT
              value: "51000"
            - name: SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mssql
                  key: SA_PASSWORD
          volumeMounts:
            - name: mssql
              mountPath: "/var/opt/mssql"
  volumeClaimTemplates:
    - metadata:
        name: mssql
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 8Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-0
spec:
  type: LoadBalancer
  loadBalancerIP: 40.88.213.209
  selector:
    statefulset.kubernetes.io/pod-name: mssql-0
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
      name: tcpsql
    - protocol: TCP
      port: 51000
      targetPort: 51000
      name: dtctcpport
    - protocol: TCP
      port: 135
      targetPort: 13500
      name: nonrootport
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-1
spec:
  type: LoadBalancer
  loadBalancerIP: 20.72.137.129
  selector:
    statefulset.kubernetes.io/pod-name: mssql-1
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
      name: tcpsql
    - protocol: TCP
      port: 51000
      targetPort: 51000
      name: dtctcpport
    - protocol: TCP
      port: 135
      targetPort: 13500
      name: nonrootport
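To create these objects, save the manifest to a file and apply it against the cluster; the file name below is just a placeholder:

kubectl apply -f mssql-dtc.yaml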

 

Let me explain each section of the script above. The first section, shown below, creates the Azure disk-based storage class that is used as persistent storage for the SQL Server containers.

 

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-disk
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS
  kind: Managed

 

The script's second section deploys multiple SQL Server instances as part of a single statefulset. The number of SQL Server containers deployed is determined by the value of the replicas field; in this case, we deploy two. We also configure two environment variables that are required for MSDTC:

  • MSSQL_RPC_PORT -> 13500: This is the TCP port to which the 'RPC endpoint mapper' process binds.
  • MSSQL_DTC_TCP_PORT -> 51000: This is the port on which the MSDTC service listens. If this option is not set, the MSDTC service uses a random ephemeral port on each restart, and firewall exceptions would need to be reconfigured after every restart to ensure that the MSDTC service can communicate.

For these two environment variables you can use any ports of your choice. We also expose ports 1433, 13500, and 51000 at the container level.
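For comparison, on a non-containerized SQL Server on Linux installation the same two ports are set with mssql-conf. This is a sketch based on the MSDTC-on-Linux documentation linked above, so verify the option names there:

# Bind the RPC endpoint mapper and the MSDTC listener to fixed ports
sudo /opt/mssql/bin/mssql-conf set network.rpcport 13500
sudo /opt/mssql/bin/mssql-conf set distributedtransaction.servertcpport 51000

# Restart SQL Server so the settings take effect
sudo systemctl restart mssql-server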

 

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mssql
  labels:
    app: mssql
spec:
  serviceName: "mssql"
  replicas: 2
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      securityContext:
        fsGroup: 10001
      containers:
        - name: mssql
          image: mcr.microsoft.com/mssql/server:2019-latest
          ports:
            - containerPort: 1433
              name: tcpsql
            - containerPort: 13500
              name: dtcport
            - containerPort: 51000
              name: dtctcpport
          env:
            - name: ACCEPT_EULA
              value: "Y"
            - name: MSSQL_ENABLE_HADR
              value: "1"
            - name: MSSQL_AGENT_ENABLED
              value: "1"
            - name: MSSQL_RPC_PORT
              value: "13500"
            - name: MSSQL_DTC_TCP_PORT
              value: "51000"
            - name: SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mssql
                  key: SA_PASSWORD
          volumeMounts:
            - name: mssql
              mountPath: "/var/opt/mssql"
..
..

 

The script concludes with the creation of the load balancer services, which allow us to connect to the SQL Server instances externally. You'll notice in the script that we expose three ports: 1433 for connecting to SQL Server, and 13500 and 51000 for DTC communication. We also configure port routing between port 135 (the MSDTC RPC port) and port 13500 using the 'port' and 'targetPort' options in the script. I am also configuring the load balancers to use static IPs, which ensures that the external IP address of each service does not change when the service is deleted and recreated. To learn more and to create static IPs in AKS, please refer to Use static IP with load balancer - Azure Kubernetes Service | Microsoft Docs.

 

..
      port: 1433
      targetPort: 1433
      name: tcpsql
    - protocol: TCP
      port: 51000
      targetPort: 51000
      name: dtctcpport
    - protocol: TCP
      port: 135
      targetPort: 13500
      name: nonrootport
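For reference, a rough sketch of how such a static public IP can be created in the cluster's node resource group before it is referenced in loadBalancerIP; the resource group, cluster, and IP names below are placeholders taken from the static IP documentation linked above:

# Find the node resource group that AKS created for the cluster
az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv

# Create a static public IP in that node resource group and print its address
az network public-ip create --resource-group MC_myResourceGroup_myAKSCluster_eastus --name mssql-0-public-ip --sku Standard --allocation-method static --query publicIp.ipAddress -o tsv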

 

After you've used the YAML to create the necessary objects, running kubectl get all to list them should show something like this.

 

C:\>kubectl get all
NAME          READY   STATUS    RESTARTS   AGE
pod/mssql-0   1/1     Running   0          61m
pod/mssql-1   1/1     Running   0          61m

NAME                 TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)                                        AGE
service/kubernetes   ClusterIP      10.0.0.1      <none>          443/TCP                                        7d1h
service/mssql-0      LoadBalancer   10.0.18.186   40.88.213.209   1433:31875/TCP,51000:31219/TCP,135:30044/TCP   3d1h
service/mssql-1      LoadBalancer   10.0.16.180   20.72.137.129   1433:30353/TCP,51000:32734/TCP,135:31239/TCP   3d1h

NAME                     READY   AGE
statefulset.apps/mssql   2/2     5d20h

 

You can now connect to the SQL Servers, add linked servers, and run the distributed transactions shown below.

I connect to mssql-1, then add mssql-0 as a linked server and run the distributed transaction to list mssql-0's sysprocesses.
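As an example of making that connection, a minimal sqlcmd invocation against the external IP of the mssql-1 service (substitute the 'sa' password stored in the Kubernetes secret):

sqlcmd -S 20.72.137.129 -U sa -P "<sa_password>"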

 

USE [master]
GO
EXEC master.dbo.sp_addlinkedserver @server = N'40.88.213.209', @srvproduct = N'SQL Server';
GO
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname = N'40.88.213.209', @rmtuser = 'sa', @rmtpassword = 'xxxx', @useself = N'False';
GO
SET XACT_ABORT ON
BEGIN DISTRIBUTED TRANSACTION
    SELECT * FROM [40.88.213.209].master.dbo.sysprocesses
COMMIT
GO
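Beyond the read shown above, a distributed transaction that writes to both instances follows the same pattern. This sketch assumes a hypothetical table dbo.orders in a database named demo that exists on both servers:

SET XACT_ABORT ON;
BEGIN DISTRIBUTED TRANSACTION;
    -- Local write on mssql-1
    INSERT INTO demo.dbo.orders (id, note) VALUES (1, 'local write');
    -- Remote write on mssql-0 via the linked server; MSDTC coordinates both
    INSERT INTO [40.88.213.209].demo.dbo.orders (id, note) VALUES (1, 'remote write');
COMMIT TRANSACTION;
GO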

 
