Containers Basics: Providing Persistent Storage to Containers

One of the main differences you will notice when comparing containers with Virtual Machines (VMs) is that containers use ephemeral storage by default. That means containers write to what we call “scratch space”, and this data is not persisted.

 

VMs are associated with one or more virtual disks, and the state of the OS, the application, and its data are stored on those disks. Containers don’t have that by default. They do write their state to disk (by default on the container host), and you can actually bring a container up and down and still see that data. However, containers are supposed to be stateless by nature, and in many cases orchestrators (like Kubernetes, AKS, etc.) simply replace a container if the state of that container is degraded.
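
To see the default behavior in action, here is a minimal sketch, assuming the mcr.microsoft.com/windows/servercore:ltsc2019 image and a throwaway container named statetest (both names are placeholders, not from the original post). Data written to the scratch space survives a stop and start of the same container, but is gone once the container is removed:

docker run -d --name statetest mcr.microsoft.com/windows/servercore:ltsc2019 ping -t localhost
docker exec statetest cmd /c "echo hello > C:\data.txt"
docker stop statetest
docker start statetest
docker exec statetest cmd /c type C:\data.txt
docker rm -f statetest

The last exec still prints the file’s contents after the restart; removing the container deletes the scratch space along with it.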

 

The question then becomes: how do we present persistent storage to a container, so that a new container can always write to a location it can recover from if something happens to the previous container instance? There are a few ways you can solve this.

Bind Mounts

The simplest way to provide persistent storage to a container is a bind mount, which makes a path on the host visible inside the container. For example, C:\AppData on the host is mounted as C:\AppData inside the container. Here is how you do that when instantiating a container:

docker run --name testcontainer -v C:\AppData:C:\AppData testcontainerimage

The command above creates a new container called testcontainer from testcontainerimage, with the host folder C:\AppData mounted inside it as C:\AppData.
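
Assuming testcontainer is running, a quick way to confirm the bind mount (this check is mine, not from the original post) is to list the folder from inside the container; you should see the same files as on the host:

docker exec testcontainer cmd /c dir C:\AppData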
 

One thing to keep in mind is that the permissions for mounted volumes in containers are a bit different from regular access paths. We will explore this in an upcoming blog post; in the meantime, the Microsoft doc entitled Persistent Storage in Containers provides a good introduction to this concept.
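
As an illustration only, and assuming the container’s processes run as a user covered by the Authenticated Users group (the exact account to grant depends on your image and user configuration), granting modify rights on the host folder before mounting it could look like this:

icacls C:\AppData /grant "Authenticated Users:(OI)(CI)M" /T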

SMB Mounts

The option above is excellent for testing, but realistically it won’t scale to production. For production scenarios, you should look into using SMB shares that can be mounted into containers. One of the benefits of SMB mounts is that you can mount the same share into multiple containers with read/write access. SMB Global Mapping is needed to enable this, and it is available starting with Windows Server, version 1709. Run the following on the container host via PowerShell to enable the functionality:

$creds = Get-Credential
New-SmbGlobalMapping -RemotePath \\server01\share01 -Credential $creds -LocalPath D:

The first command stores the credentials, and the second maps the share to the D: drive on the container host. From there, you can run the same command as you did for the bind mount:

docker run --name testcontainer -v D:\AppData:C:\AppData testcontainerimage

One important point is that the AppData folder already exists in \\server01\share01, so we are just mapping it to C:\AppData inside the container. Access from inside the container is completely transparent and works as if it were local storage. Note that all containers will access this folder using the credentials provided in the first step. On the server side, the share can be hosted on any compatible SMB server, which enables interesting scenarios such as Azure Files.
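
As an example of the Azure Files scenario, here is a minimal sketch, assuming a hypothetical storage account named mystorageacct with a file share named appdata (replace the key placeholder with your own storage account key):

$key = ConvertTo-SecureString -String "<storage-account-key>" -AsPlainText -Force
$creds = New-Object System.Management.Automation.PSCredential -ArgumentList "AZURE\mystorageacct", $key
New-SmbGlobalMapping -RemotePath \\mystorageacct.file.core.windows.net\appdata -Credential $creds -LocalPath D:
Get-SmbGlobalMapping

Get-SmbGlobalMapping lists the active mappings, so you can verify the share is in place before starting your containers.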

 

If you have a scenario that we did not discuss here, let us know. We’re always looking to improve our container support based on customer requirements.

 

Be sure to also check out my previous blog posts on containers, which cover PowerShell and Dockerfile, working interactively, containers and Windows Admin Center, and resource limits.
