The technology: what is a container?
According to the Cloud Computing Dictionary, a container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.
In Linux, a container is a sandboxed process isolated from all other processes on the host machine. That isolation leverages kernel namespaces and cgroups, features that have been in Linux for a long time. Docker has worked to make these capabilities approachable and easy to use. To summarize, a container:
- It is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI (see the short example after this list).
- It can run on local machines or virtual machines, or be deployed to the cloud.
- It is portable. Containers can run natively on Linux and Windows operating systems. Windows containers run only on Windows hosts (a host server or a virtual machine), and Linux containers run only on Linux hosts. However, recent versions of Windows Server can also run Linux containers natively by using Hyper-V isolation technology.
- It is isolated from other containers and runs its own software, binaries, and configurations.
- It is intended to be stateless and immutable: you should not change the code of a container that is already running. If you have a containerized application and want to make changes, the correct process is to build a new image that includes the change, then recreate the container from the updated image.
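For example, here is a short sketch of that lifecycle with the Docker CLI, assuming Docker is installed locally and using the public nginx image:

```bash
# Create and start a container from an image, publishing port 8080 on the host
docker run --detach --name web -p 8080:80 nginx:alpine

# List, stop, and delete the container; the image itself is unchanged
docker ps --filter name=web
docker stop web
docker rm web
```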
What is a container image?
When running a container, the root filesystem is mounted in an isolated namespace. The content of the root filesystem is provided by a container image. Since the image contains the container’s filesystem, it must contain everything needed to run an application - all dependencies, configurations, scripts, binaries, etc. The image also contains other settings for the container, such as environment variables, a default command to run, and other metadata.
Choose the right Azure service
When building or scaling a solution, one of the key things to consider is choosing the most appropriate hosting platform and service on Azure. The right approach is not the same for everyone; it depends on:
- The goals of your organization
- The size of your team
- The complexity of the product, and the time and budget you have to build it
Azure offers a wide range of services to host containers. Some products, like AKS or Arc-enabled Kubernetes, offer a higher level of flexibility, allowing you to customize your containerized applications to meet your specific needs. These products may require more skilled personnel to operate effectively, but can be a great fit for organizations with the resources and expertise to take advantage of their advanced capabilities. Other products, like Azure Container Instances or Azure Container Apps, may have less flexibility but offer a more streamlined and easy-to-use experience, making them a good fit for organizations looking for a quick and easy way to get started with containerization. It is important to determine why you are going for containers: containers and Kubernetes are not silver bullets that solve all problems just by adopting them. It is always a good idea to understand the actual goals of your organization before selecting a product.
Examples of goals could be:
- Rapid scalability: scale your application cost-effectively
- Agility: scale the number of developers
- Reduced time to market
Factors to consider before choosing containers
Take an example: Alice wants to build a shopping website to sell carpets, and she has a team of two people. She decides to use containers, since she knows they will allow her to build the website quickly. Would this be the best choice for her?
When evaluating container technologies, there are a few factors to consider. Let's see some of these factors.
How will the container infrastructure be operated?
Kubernetes is more popular than ever: according to the 2021 CNCF survey, 96% of organizations are either using or evaluating Kubernetes. It is therefore fair to say that it would be an easy choice for any company building a cloud-native application. Even though Kubernetes is the industry standard for container orchestration and is offered on Azure as a managed product (AKS), operating Kubernetes requires at least a dedicated engineer. AKS is a managed service and makes Kubernetes easier, but you will still need to make design choices on the following (see the sketch after this list):
- Networking: CNI plugin and IP address planning
- Networking: load balancing and ingress technology
- Virtual machine SKUs for node pools
- Storage options
- Kubernetes observability
- Kubernetes Security and Secret Management
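Several of these choices surface directly as flags when the cluster is created. A hedged sketch (resource names, region, and VM size are illustrative placeholders, not recommendations):

```bash
# Sketch: create an AKS cluster making the network plugin, network policy,
# node pool VM size, and identity choices explicit up front.
az aks create \
  --resource-group my-rg \
  --name my-aks \
  --location westeurope \
  --network-plugin azure \
  --network-policy azure \
  --node-vm-size Standard_D4s_v5 \
  --node-count 3 \
  --enable-managed-identity
```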
From the above, we can see that Kubernetes requires a dedicated engineer to implement the design and carry out the "day 2" operations. Cluster changes driven by the product's evolution, and testing the product against new versions of the Kubernetes APIs, will require at a minimum a dedicated engineer to operate the cluster. There is an interesting blog post from Matt Rickard that suggests team size as a data point for deciding between Kubernetes and a serverless container runtime. John Savill also published a YouTube video, Upgrading and Maintaining your AKS Cluster, explaining the work required. Suppose your startup does not yet have a dedicated engineer or is a small team. In that case, you could benefit from starting your journey to containers using a serverless compute platform that supports containers, like Azure Functions, or a serverless container orchestration service, like Azure Container Instances or Azure Container Apps. For a full list of the Azure hosting platforms that support containers, see Comparing Container Apps with other Azure container options.
What is the complexity of your product?
One of the critical benefits for ISVs and startups of building a product with containers on Azure is the velocity at which you can build it. Containers are an excellent option for creating and scaling a product quickly. However, if your product design is not cloud-native, you might face challenges containerizing your software. We recommend reviewing the 12-factor app methodology to understand whether your product is cloud-native. If not, you might consider refactoring your product to make it cloud-native; this will allow you to build your product faster and scale it more easily. If your application requires a complex network architecture or a particular storage type, or you can't afford to deal with all the complexity of managing a full-fledged AKS cluster, you can consider alternative solutions. In this case, you can review the container options available on Azure and compare them, as they can offer a faster serverless option for building your application and scaling it. For example, Azure Container Apps is a serverless container runtime powered by Kubernetes and open-source technologies like Dapr, KEDA, and Envoy that allows you to develop and scale your application without managing the underlying infrastructure.
How much time do you have to build the product?
Kubernetes is a complex service and requires a dedicated engineer to manage it. If you are building a solution that requires a quick time to market, there might be better options than this. As mentioned above, you would need to make design choices that would require adding configurations after selecting Kubernetes as your container orchestration tool. This can lead to a longer timeline to build your product.
Do you need to scale your product quickly?
One of the key benefits of Kubernetes is scalability. If you are building a product that doesn't need to scale from the start, you can containerize the application and use a serverless container runtime; you can always switch to Kubernetes when you are ready to adopt it. This will allow you to build your product quickly without having to manage the underlying infrastructure. For example, Azure App Service allows you to quickly deploy your containerized application and scale it as needed. The Azure Container Apps documentation contains a comparison with the other Microsoft container-related products.
What happens if you do not choose the right container service?
If you do not choose the right container service, you might end up with a solution that is hard to maintain and difficult to scale in the future. This can lead your project to fail. It is essential to understand the pros and cons of each option and make the right decision for your organization. Let's see some of the challenges you might face if you do not choose the right container service.
Complexity in managing containers
If you choose Kubernetes as your container orchestration tool, you must plan its configuration carefully. If you make the wrong decisions during the design phase, this could lead to an unnecessarily complex infrastructure and network topology that are hard to maintain and scale in the future. We recommend reviewing the above factors before selecting Kubernetes and aligning it with the organization's goals.
Product complexity and a longer timeline to build the product
Building cloud-native applications requires at least knowing how to build containers and deploy them. If you are not familiar with this, it can lead to a longer timeline to build the product, and the complexity of your product will increase. We recommend reviewing the Cloud Native Infrastructure with Azure book to learn how to build and manage cloud-native applications.
Challenges with upgrades and maintenance
The Kubernetes ecosystem is constantly evolving and rapidly changing. Kubernetes follows a one-year deprecation cycle, which means that the team responsible for managing the cluster should periodically monitor the Kubernetes release notes for API changes. Your application should be tested with any new Kubernetes version in a quality assurance environment to ensure it continues to work as expected and does not use deprecated features. In addition, if you are using other open-source tools such as Helm, Istio, or the NGINX ingress controller in your operations, note that these tools also have their own release cycles. Keeping all of these components up-to-date is necessary to maintain a secure and functional infrastructure; failing to do so can lead to security issues or outages in the long term. Kubernetes is a distributed system, and an upgrade is a process that updates the components during a maintenance window. We recommend reviewing the AKS upgrade documentation to understand the process of upgrading your cluster.
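For example, a hedged sketch of that upgrade workflow with the Azure CLI (cluster name, resource group, and version number are placeholders):

```bash
# List the Kubernetes versions the cluster can be upgraded to
az aks get-upgrades --resource-group my-rg --name my-aks --output table

# Upgrade the control plane and node pools during a maintenance window
az aks upgrade --resource-group my-rg --name my-aks --kubernetes-version 1.27.7
```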
Challenges with hiring the platform team
Selecting Kubernetes requires at least a dedicated engineer to manage it. In this case, you would need to hire a dedicated engineer to manage the Kubernetes APIs and the underlying infrastructure, which can lead to a higher cost for the organization. We recommend reviewing some of the container options available in Azure and comparing them, as they can offer a serverless option for building your application without the need to manage the underlying infrastructure. If you need access to the Kubernetes APIs, you can always switch to Kubernetes when you are ready to adopt it, ensuring you have the right team to manage it.
Unnecessary complexity from choosing the wrong product
Kubernetes has many features that can help you build and scale your product. However, if you are not using most of these features, you might end up operating a complex product without getting the full benefit of it. This can potentially lead to a deployment that is not secure, requires manual work, and is hard to debug when the cluster isn't working as expected. The effort for the cluster configuration, maintenance, and monitoring needs to be factored in when deciding the hosting platform for your containerized and micro-services-based applications. Kubernetes has a steep learning curve, and configuring it can be time-consuming and not worth it, especially if you are not using most of the functionalities.
Cost optimization
One of the primary benefits of using cloud computing is the ability to pay only for the resources you need, when you need them. The cloud model provides significant cost savings compared to traditional IT infrastructure. Total Cost of Ownership (TCO) is a financial estimate of the direct and indirect costs of using a service.
There is a very extensive article from @paolosalvatori on how to reduce the TCO of your AKS cluster. The key concepts of cost optimization are the following (a brief CLI sketch follows the list):
- Autoscaling: this helps ensure you only pay for the resources you need, minimizing waste and unused capacity. You have to learn how to do it correctly, but it is a powerful tool.
- Spot instances: these allow you to take advantage of unused capacity at discounted prices, as long as you can tolerate the risk of your instances being terminated when the capacity is needed elsewhere. This is a good option for non-critical workloads.
- Azure Reservations and Azure Savings Plans: if you know you will need a certain amount of resources for an extended period of time, you can commit to Azure Reservations and Azure Savings Plans to receive discounted prices. This is a cost-effective choice when you have clear plans. For more information, see decide between a savings plan and a reservation.
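A hedged sketch of the first two levers on AKS (resource group, cluster, and pool names are placeholders):

```bash
# Enable the cluster autoscaler on an existing node pool
az aks nodepool update \
  --resource-group my-rg --cluster-name my-aks --name nodepool1 \
  --enable-cluster-autoscaler --min-count 1 --max-count 5

# Add a spot node pool for interruptible, non-critical workloads
# (--spot-max-price -1 means "pay up to the current on-demand price")
az aks nodepool add \
  --resource-group my-rg --cluster-name my-aks --name spotpool \
  --priority Spot --eviction-policy Delete --spot-max-price -1 \
  --node-count 1
```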
Storage options for containers
The introduction to Azure Storage provides information about all the available storage options in Microsoft Azure. When running containers on Azure that need access to persistent storage, it is essential to understand the differences between the following storage services:
- Azure Disks: These are block-level, highly available storage volumes that can be replicated across multiple regions for increased durability and reliability. Managed disks are like a physical disk in an on-premises server, but virtualized. To create a managed disk, you specify the disk size and type and provision the disk; once provisioned, Azure handles the rest. The available types of disks are the following:
- Ultra disks
- Premium SSD v2
- Premium SSDs (solid-state drives)
- Standard SSDs
- Standard HDDs (hard disk drives)
- Azure Files: This service offers fully managed file shares in the cloud, accessible via the industry-standard Server Message Block (SMB) protocol, the Network File System (NFS) protocol, and the Azure Files REST API. It was designed explicitly for use cases that require shared file storage: Azure Files is highly available, can be accessed from multiple locations, and shares can be mounted concurrently by cloud or on-premises deployments. SMB Azure file shares are accessible from Windows, Linux, and macOS clients, while NFS Azure file shares are accessible from Linux clients. Additionally, SMB Azure file shares can be cached on Windows servers with Azure File Sync for fast access near where the data is being used.
- Azure Blob Storage: This is a highly scalable, distributed object storage service for unstructured data. Blob Storage is optimized for storing massive amounts of unstructured data that doesn't adhere to a particular data model or definition, such as text or binary data.
Containers make it easy to scale an application horizontally by quickly scaling out the number of containers to handle increasing load. This is particularly easy for stateless applications, as there is no need to worry about persistent storage to maintain the application's state when processing incoming requests. Containers are ephemeral, meaning they do not persist data when they are terminated or moved to another agent node: any data stored in a container's local filesystem is lost when the container is terminated.
You can attach a temporary disk to the container when an application only requires temporary storage. This allows the application to store and access data during the container's lifetime. The disk can be attached to the container's agent node, usually with better performance than a storage solution attached over a network connection. For this use case, you can use Azure Disks to attach a disk to a container in AKS.
When an application is stateful, it requires persistent storage, and the design of a storage solution that scales horizontally becomes more complex. Azure offers several storage abstractions that are specifically designed for use with containers. Azure Blob Storage provides a way to store data without attaching managed disks to the container's agent node. This means that data can be stored in a central location, where it can be accessed by multiple containers, even if they are running on different hosts.
Azure Blob Storage provides a simple API for storing and retrieving data. When using Azure Blob Storage, developers need to shift their mindset away from traditional storage methods, such as attaching disks to virtual machines and accessing files from a local disk, and move towards the object storage API. In object storage, data is read and written using HTTP calls, rather than through a file system. One of the key benefits of using object storage, such as Azure Blob Storage, is that it can scale with the number of containers. As more containers are added to a deployment, the storage backend will automatically scale to accommodate the increased demand in storage.
If you have an existing application designed to read and write from a file system, there are multiple options to mount a network drive over NFS.
- Azure Container Apps and Azure Container Instances are integrated with Azure Files to mount shared drives (a CLI sketch follows below).
- With AKS, you can mount a shared drive backed by Azure Blob Storage. This works thanks to the Azure Blob Storage CSI driver for Kubernetes, which provides the capability of mounting Azure Blob Storage as a file system to a Kubernetes pod or application, using either the BlobFuse or NFS 3.0 options.

In conclusion, Microsoft Azure offers many storage services for containers, including Azure Disks, Azure Files, and Azure Blob Storage. When building a stateless application, it is possible to use temporary storage by attaching a block device with Azure Disks. But when building a stateful application, choosing a storage solution that can scale with the number of containers is essential. Azure Blob Storage provides a simple API for storing and retrieving data and can scale with the number of containers. Remember to check the available storage options when using Azure Container Apps, Azure Container Instances, and AKS.
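As a concrete illustration of the Azure Files integration mentioned above, here is a hedged sketch of mounting a file share into Azure Container Instances (account, share, and resource names are placeholders; the sample image is from the Microsoft docs):

```bash
# Fetch the storage account key so the share can be mounted
STORAGE_KEY=$(az storage account keys list \
  --resource-group my-rg --account-name mystorageacct \
  --query "[0].value" --output tsv)

# Create a container instance with the Azure Files share mounted at /mnt/data
az container create \
  --resource-group my-rg --name myapp \
  --image mcr.microsoft.com/azuredocs/aci-hellofiles \
  --azure-file-volume-account-name mystorageacct \
  --azure-file-volume-account-key "$STORAGE_KEY" \
  --azure-file-volume-share-name myshare \
  --azure-file-volume-mount-path /mnt/data
```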
Containerized Applications Lifecycle
This section provides a series of references to resources, tutorials, and quickstarts that can help you get started creating and testing containerized applications locally before deploying them to Azure.
Docker
Containers are compact virtualized environments, like virtual machines, that provide a platform for building and running apps. Containers are immutable and don't require the size and overhead of a complete operating system. Docker is a third-party, industry-standard container provider and container management system. You can install Docker on your machine to create, debug, and test containerized applications locally before deploying them to Azure. For more information on Docker, see the following resources:
- Get Started: this guide contains step-by-step instructions on how to get started with Docker. Some of the things you’ll learn and do in this guide are:
- Build and run an image as a container
- Share images using Docker Hub
- Deploy Docker applications using multiple containers with a database
- Run applications using Docker Compose
- Best practices for writing Dockerfiles: this article covers recommended best practices and methods for building efficient container images. Docker builds images automatically by reading the instructions from a Dockerfile, a text file that contains all the commands, in order, needed to build a given image. A Dockerfile adopts a specific format and set of instructions, which you can find explained in the Dockerfile reference.
- Containerize an application: this tutorial shows how to build and containerize a simple todo list application running in Node.js (a minimal sketch of this build-and-run workflow follows this list).
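As a minimal, hedged sketch of that workflow (the app, Dockerfile contents, and Docker Hub namespace are illustrative placeholders):

```bash
# Write a minimal Dockerfile for a Node.js app (contents are illustrative)
cat > Dockerfile <<'EOF'
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
EOF

# Build the image, run it as a container, and share it on Docker Hub
docker build -t myuser/todo-app:1.0 .
docker run --detach -p 3000:3000 myuser/todo-app:1.0
docker push myuser/todo-app:1.0
```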
Docker Extension for Visual Studio Code
If you use Visual Studio Code to develop and test your containerized applications, you can install and use the Docker extension that makes it easy to build, manage, and deploy containerized applications in Visual Studio Code. The Docker extension for Visual Studio Code provides the following features:
- Docker Explorer: The Docker extension contributes a Docker Explorer view to Visual Studio Code. The Docker Explorer lets you examine and manage Docker assets: containers, images, volumes, networks, and container registries. If the Azure Account extension is installed, you can browse your Azure Container Registries as well.
- Docker Commands: Many of the most common Docker commands are built right into the Command Palette of Visual Studio Code.
- Docker Compose: Docker Compose lets you define and run multi-container applications with Docker. The Compose Language Service in the Docker extension gives you IntelliSense and tab completions when authoring docker-compose.yml files. Press Ctrl+Space to see a list of valid Compose directives.
- Image registries: the Docker extension lets you display the content of, and push, pull, or delete images from, Azure Container Registry, Docker Hub, GitLab, and more.
For more information on how to install and use the Docker extension for Visual Studio Code, see Docker in Visual Studio Code.
Create and test containerized applications using Visual Studio Code
You can use Visual Studio Code and Docker extension to build, test, and deploy containerized applications. For more information, see the following resources:
- Node.js in a container: this tutorial shows how to:
- Create a Dockerfile file for an Express Node.js service container
- Build, run, and verify the functionality of the service
- Debug the service running within a container
- Python in a container: In this tutorial, you will learn how to:
- Create a Dockerfile file describing a simple Python container.
- Build, run, and verify the functionality of a Django, Flask, or General Python app.
- Debug the app running in a container.
- ASP.NET Core in a container: in this guide you will learn how to:
- Create a Dockerfile file describing a simple .NET Standard service container.
- Build, run, and verify the functionality of the service.
- Debug the service running as a container.
- Debug containerized apps: the Docker extension provides support for debugging applications within Docker containers, such as scaffolding launch.json configurations for attaching a debugger to applications running within a container. The Docker extension currently supports debugging Node.js, Python, and .NET applications within Docker containers.
- Use Docker Compose: Docker Compose provides a way to orchestrate multiple containers that work together. Examples include a service that processes requests and a front-end web site, or a service that uses a supporting function such as a Redis cache. If you are using the microservices model for your app development, you can use Docker Compose to factor the app code into several independently running services that communicate using web requests. This article helps you enable Docker Compose for your apps, whether they are Node.js, Python, or .NET, and also helps you configure debugging in Visual Studio Code (a minimal Compose sketch follows this list).
- Using container registries: a container registry is a storage and content delivery system, holding named Docker images, available in different tagged versions. Users can connect to Docker registries from the following sources:
- Azure Container Registry
- Docker Hub
- GitLab container registry
- Any generic private registry that supports the Docker V2 API
- Deploy to Azure App Service: in this guide you will learn how to:
- Create a container image for your application.
- Push the image to a container registry.
- Deploy the image to Azure App Service.
- Deploy the image to Azure Container Instances (ACI)
- Your development environment: when using Visual Studio Code, you can choose whether to develop a container-based service in the local environment or in a remote environment.
- The local environment is the operating system of your developer workstation; using the local environment means you build and run your service container(s) using Docker installed on your workstation. Docker is supported on Windows, macOS, and various Linux distributions; for system and hardware requirements, refer to the Docker installation page.
- A remote development environment is different from your developer workstation. It can be a remote machine accessible via SSH, a virtual machine running on your developer workstation, or a development container. A remote environment can have advantages over the local environment, the main one being the ability to use the same operating system during development and when your service is running in production. To use a remote environment, you need to ensure that the docker command (Docker CLI) is available and functional within that environment.
command (Docker CLI) is available and functional within that environment. - Custom development environments: your development environment is where you do your coding. Visual Studio Code allows you to use a development environment different than your local computer through a container, a separate (or remote) machine, or the Windows Subsystem for Linux (WSL). These configurations are known as remote development.
- Customize the Docker extension: the Docker extension includes several Visual Studio Code tasks that you can customize to control the behavior of Docker build and run, and form the basis of container startup for debugging.
- Use Bridge to Kubernetes (VS Code): you can use the Bridge to Kubernetes to run and debug code on your development computer, while still connected to your Kubernetes cluster with the rest of your application or services. In this guide, you will learn how to use Bridge to Kubernetes to redirect traffic between your Kubernetes cluster and code running on your development computer. For more information, see How Bridge to Kubernetes works.
- Docker Tools Tips and Tricks: this article covers troubleshooting tips and tricks for the Visual Studio Code Docker extension.
- Tutorial: Create and share a Docker app with Visual Studio Code: this tutorial is the beginning of a three-part series introducing Docker using Visual Studio Code (VS Code). You'll learn to create and run containers, persist data, and deploy your containerized application to Azure.
- Tutorial: Create multi-container apps with MySQL and Docker Compose: In this tutorial, you learn how to create multi-container apps. This tutorial builds on the getting started tutorials, Create and share a Docker app with Visual Studio Code.
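As a minimal, hedged sketch of a Compose file for a web service with a Redis cache (the service layout and images are illustrative):

```bash
# Write a minimal docker-compose.yml and start both services together
cat > docker-compose.yml <<'EOF'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - cache
  cache:
    image: redis:7-alpine
EOF

docker compose up --detach
```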
Developing inside a Container
The Visual Studio Code Dev Containers extension lets you use a Docker container as a full-featured development environment. It allows you to open any folder inside (or mounted into) a container and take advantage of Visual Studio Code's full feature set. A devcontainer.json file in your project tells Visual Studio Code how to access (or create) a development container with a well-defined tool and runtime stack. This container can be used to run an application or to separate tools, libraries, or runtimes needed for working with a codebase.
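As a hedged sketch, a minimal devcontainer.json could look like the following (the base image, extension, port, and post-create command are illustrative assumptions, not prescriptions):

```bash
# Sketch: scaffold a minimal .devcontainer/devcontainer.json in a project
mkdir -p .devcontainer
cat > .devcontainer/devcontainer.json <<'EOF'
{
  "name": "node-dev",
  "image": "mcr.microsoft.com/devcontainers/javascript-node:18",
  "customizations": {
    "vscode": {
      "extensions": ["dbaeumer.vscode-eslint"]
    }
  },
  "forwardPorts": [3000],
  "postCreateCommand": "npm install"
}
EOF
```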
Workspace files are mounted from the local file system or copied or cloned into the container. Extensions are installed and run inside the container, where they have full access to the tools, platform, and file system. This means that you can seamlessly switch your entire development environment just by connecting to a different container.
This lets Visual Studio Code provide a local-quality development experience including full IntelliSense (completions), code navigation, and debugging regardless of where your tools (or code) are located. For more information, see the following resources:
- Overview: this article provides an introduction to how to leverage the Visual Studio Code Dev Containers extension to create a Docker container as a full-featured development environment.
- Use a Docker container as a development environment with Visual Studio Code: this module explains how to create and configure a container-based development environment with Visual Studio Code and the Dev Containers extension. The Dev Containers extension lets you use a Docker container as a full-featured development environment.
- Dev Containers tutorial: this tutorial walks you through running Visual Studio Code in a Docker container using the Dev Containers extension.
- Attach to a running container: Visual Studio Code can create and start containers for you but that may not match your workflow and you may prefer to "attach" Visual Studio Code to an already running Docker container, regardless of how it was started. Once attached, you can install extensions, edit, and debug like you can when you open a folder in a container using devcontainer.json.
- Create a Dev Container: this article explains how to create a Dev container using the Dev Containers extension.
- Advanced container configuration: the articles in this section of the Visual Studio Code documentation cover advanced container configuration when working with the Visual Studio Code Dev Containers extension.
- Dev Containers CLI: this article covers the development container command-line interface (dev container CLI), which allows you to build and manage development containers, and is a companion to the Development Containers Specification.
- Dev Containers Tips and Tricks: this article includes some tips and tricks for getting the Dev Containers extension up and running in different environments.
- Dev Containers FAQ: This article includes some of the common questions for getting the Dev Containers extension up and running in different environments.
Deployment Process for Containerized Applications
After developing the application locally, the next step is to deploy the application in Azure. It is important to have a CI/CD pipeline to automate the deployment process. The CI/CD process ensures that your containerized application is deployed to production in a repeatable and reliable way. Here are some of the CI/CD systems that you can leverage to build and deploy your solution to the target environment.
Azure DevOps
Azure DevOps is a cloud-based DevOps solution that provides a set of services to support the entire DevOps lifecycle. Azure DevOps provides services for planning, tracking, and automating software development and delivery.
Azure Pipelines is a cloud-based continuous integration and continuous delivery (CI/CD) service that you can use to build, test, and deploy your code to any platform and cloud. Azure Pipelines is a fully managed CI/CD service that runs on Microsoft-hosted agents. You can also run your CI/CD pipelines on your own private agents.
With regard to containers, Azure Pipelines provides the following options, depending on the selected container service:
- Deploy Azure Web App for Containers
- Build and push Docker images
- Deploy to Azure Kubernetes Service
- Deploy to Azure Container Apps
It's highly recommended to use YAML pipelines instead of classic pipelines, as they are the more modern product feature. YAML pipelines are versioned along with the code in the repository, which provides a unified source of truth for the pipeline and makes it easier to replicate in different environments.
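A minimal, hedged sketch of such a pipeline that builds an image and pushes it to a registry (the service connection name my-acr-connection and the repository myapp are placeholders):

```bash
# Sketch: create a minimal azure-pipelines.yml using the Docker@2 task
cat > azure-pipelines.yml <<'EOF'
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: Docker@2
    inputs:
      command: buildAndPush
      containerRegistry: my-acr-connection
      repository: myapp
      tags: |
        $(Build.BuildId)
EOF
```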
GitHub Actions
GitHub Actions is a feature of GitHub that allows you to automate your software development workflows. You can write individual tasks, called actions, and combine them to create a custom workflow. Workflows are custom automated processes that you can set up in your repository to build, test, package, release, or deploy any code project on GitHub.
These are some of the ways you can use GitHub Actions to deploy your containerized application to Azure (a workflow sketch follows this list):
- GitHub Actions for Kubernetes
- You can simplify the process of using GitHub Actions with Kubernetes by configuring automated deployments
- Deploy to Azure Container Apps as documented in Deploy to Azure Container Apps with GitHub Actions
- Deploy a single container to Azure Container Instances as documented in configuring GitHub Actions to create a container instance
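A hedged sketch of a workflow that builds an image with ACR and deploys it to AKS (secrets, registry, cluster, and manifest paths are placeholders; azure/login, azure/aks-set-context, and azure/k8s-deploy are actions published by Azure):

```bash
# Sketch: write a minimal GitHub Actions workflow for an AKS deployment
mkdir -p .github/workflows
cat > .github/workflows/deploy.yml <<'EOF'
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - name: Build and push image with ACR
        run: az acr build --registry myregistry --image myapp:${{ github.sha }} .
      - uses: azure/aks-set-context@v3
        with:
          resource-group: my-rg
          cluster-name: my-aks
      - uses: azure/k8s-deploy@v4
        with:
          manifests: k8s/deployment.yaml
          images: myregistry.azurecr.io/myapp:${{ github.sha }}
EOF
```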
GitOps
The term 'GitOps' was coined by Weaveworks in 2017. GitOps is a way to do Kubernetes application delivery. It works by using Git as a single source of truth for declarative infrastructure and applications. With Git at the center of your delivery pipelines, developers can make pull requests to accelerate and simplify application deployments and operations tasks to Kubernetes.
In an ideal GitOps workflow, developers make changes to the desired state of their applications and infrastructure in Git. The Git repository is configured to automatically deploy those changes to Kubernetes. This allows developers to operate their applications and clusters with the same pull request workflows they use for their code, ensuring a fully automated and auditable system that enforces security best practices. According to the OpenGitOps project, the following are the key GitOps principles to be aware of when implementing GitOps:
- Declarative: the desired state must be expressed declaratively; examples include Kubernetes manifests, Helm charts, and Terraform configurations.
- Versioned and immutable: the desired state is stored in a way that enforces immutability and versioning, which allows you to roll back to a previous state if needed.
- Pulled automatically: the desired state is pulled automatically from where it is stored.
- Continuously reconciled: the actual state of the system is continuously reconciled with the desired state.
Tools such as Flux and Argo CD, covered in the next section, can be used to implement GitOps in Azure. For a more comprehensive list of GitOps tools, see the Awesome-GitOps project.
GitOps for AKS
If you are using AKS, you can refer to the following resources (a CLI sketch follows the list):
- GitOps with Flux and AKS
- GitOps with Flux, GitHub, and AKS to implement CI/CD
- GitOps with Argo CD, GitHub repository and AKS
- Use GitOps with Argo CD, GitHub Actions, and AKS to Implement CI/CD
- Use Syncier Tower and GitOps operator to enforce policies
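For example, AKS supports a managed Flux (GitOps) extension that can be configured from the CLI. A hedged sketch (repository URL, names, and paths are placeholders; this assumes the k8s-configuration CLI extension is installed):

```bash
# Sketch: attach a Git repository as the source of truth for an AKS cluster
# using the managed Flux extension
az k8s-configuration flux create \
  --resource-group my-rg \
  --cluster-name my-aks \
  --cluster-type managedClusters \
  --name my-gitops-config \
  --namespace flux-system \
  --url https://github.com/myorg/my-cluster-config \
  --branch main \
  --kustomization name=apps path=./apps prune=true
```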
GitOps and IaC
It is important to have a clear separation of concerns between Infrastructure as Code (IaC) and GitOps: IaC is used to provision the infrastructure, while GitOps is used to manage the applications and the configuration of the infrastructure. IaC is an important part of GitOps, ensuring that the infrastructure is defined as code and is versioned and immutable. Make sure this separation is in place early as you adopt GitOps. Tools you can use to implement IaC on Azure include ARM templates, Bicep, and Terraform.
In conclusion, it is important to have an automated deployment process so that you can deliver applications and infrastructure changes quickly and reliably. This is achieved by using a combination of tools like IaC, GitOps, and CI/CD.
Introduction to Observability and Security for Containers
This section will provide a high-level overview of observability and security for containers in Azure. The reader is encouraged to read the Cloud Native Infrastructure with Azure book for more details.
Observability
Observability is the ability to understand the state of a system at any given time. This is achieved by collecting and analyzing metrics, logs, and traces. Chapter 6 of the free Microsoft book Cloud Native Infrastructure with Azure describes in detail, with code examples, how to implement observability in Azure.
The difference between observability and monitoring is that monitoring is a subset of observability. Monitoring is the process of collecting metrics, logs, and traces, but collecting alone does not let you understand the state of the system; observability comes from analyzing that data.
When using a managed product like Azure Container Instances or Azure Container Apps, you don't need to worry about observability for the underlying infrastructure, as the service takes care of it for you. When using Kubernetes, you need to implement observability for the infrastructure yourself. In both cases, you still need to implement observability for your application.
Monitoring is a key part of managing a Kubernetes cluster. You should monitor the health of the cluster itself, the health of the workloads, and the health of the underlying infrastructure. Practically speaking you will have metrics and events from the Kubernetes control plane services, metrics and logs generated by your workloads, and metrics and diagnostic logs from the cloud infrastructure such as disk I/O, network I/O, and CPU and memory usage.
Prometheus is a popular monitoring solution for Kubernetes and Grafana is a popular visualization tool for Prometheus.
Installing and operating Prometheus and Grafana is a complex task. On Azure, you can use Azure Monitor managed service for Prometheus and Azure Managed Grafana to avoid the burden of operations and focus on monitoring and visualization.
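A hedged sketch of enabling the managed offerings on an existing cluster (names are placeholders; az grafana requires the amg CLI extension, and flag availability may depend on your CLI version):

```bash
# Enable Azure Monitor managed service for Prometheus on an AKS cluster
az aks update --resource-group my-rg --name my-aks --enable-azure-monitor-metrics

# Create an Azure Managed Grafana instance for visualization
az grafana create --resource-group my-rg --name my-grafana
```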
Security
Working with containers in a cloud-native environment requires a completely new approach to security. The Kubernetes documentation introduces the concept of the 4C's of Cloud Native Security. How to approach security in Kubernetes is also covered in great detail in the book Kubernetes Security and Observability. For the purpose of this article, we will focus on the following three aspects of security:
- The workloads are ephemeral. Containers run everywhere over a network of nodes. Because of the declarative way of deploying the workloads, you don't control exactly which server a workload runs on.
- The networking is flat, and traditional network segmentation techniques are not applicable. Pods have ephemeral IP addresses, and the network is not secure by default. Pod-to-Pod traffic sometimes stays within the same node and sometimes is routed over the network to another node, increasing the challenges of network observability.
- Container images are updated frequently, even several times per day. The release process is automated, and the images are rebuilt from scratch every time. This means that images are not patched but rebuilt, which is a completely different approach to security.
When you run your workload on AKS, you need to be aware of Kubernetes security measures. When using managed container products like Azure Container Apps, Microsoft helps protect the attack surface by patching the operating system and managing the underlying infrastructure. Nonetheless, you must still secure the application code, container images, and network access to ensure a secure environment. It's important to understand the shared responsibility model in the cloud and take appropriate measures to secure your workloads.
Image scanning
Container images are built frequently, even if your application does not change, the base image could receive regular daily updates. A best practice is to use a distroless image to reduce the attack surface. The distroless image is a minimal image that contains only your application and its runtime dependencies. It does not contain package managers, shells or any other programs you would expect to find in a standard Linux distribution.
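As a hedged sketch of this practice (the Go app and image names are illustrative, using Google's distroless base images), a multi-stage build compiles in a full-featured image and ships a distroless one:

```bash
cat > Dockerfile <<'EOF'
# Build stage: full toolchain available
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: distroless image with no shell or package manager
FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
USER nonroot:nonroot
ENTRYPOINT ["/app"]
EOF

docker build -t myapp:distroless .
```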
In this very dynamic environment, you should scan images for vulnerabilities both at build time and at run time.
On Azure, you can use Microsoft Defender for Containers to scan images for vulnerabilities. It is integrated with Azure Container Registry and Azure DevOps, so images can be scanned at build time when they are pushed to the registry, and it is integrated with Azure Kubernetes Service to assess images at run time and detect vulnerabilities in running workloads. You can also use the Azure CLI to scan images locally.
Runtime security
Containers are ephemeral, and you don't control precisely which server a workload runs on. Containers should run with the least privileges possible, and you should not run containers as root. In particular, in Kubernetes you should not run your Pods with the default service account; instead, use a dedicated service account for each Pod, and avoid the default namespace.
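A minimal sketch of these hardening settings in a Pod spec (the namespace, image, and service account names are placeholders and are assumed to already exist):

```bash
# Sketch: run a Pod as non-root with a dedicated service account
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  namespace: myapp-ns
spec:
  serviceAccountName: myapp-sa
  containers:
    - name: myapp
      image: myregistry.azurecr.io/myapp:1.0
      securityContext:
        runAsNonRoot: true
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
EOF
```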
In Kubernetes, the Pod Security Admission controller is a good way to enforce the least-privilege principle. It is a built-in admission controller and is enabled by default in AKS. Azure Policy for AKS, built on OPA Gatekeeper, provides a policy engine for the cluster; you can use Azure Policy to enforce the least-privilege principle as well.
You should use a runtime security solution that can detect suspicious behavior and alert you. On Azure, you can use Microsoft Defender for Containers to detect suspicious behavior at run time.
Network security
Azure's containers and microservice hosting platforms other than Azure Kubernetes Service (AKS), such as Azure Container Apps, do not provide the same level of flexibility and network plugin options as Kubernetes. Typically, running containers within a subnet of an existing Virtual Network (VNET) is feasible using a straightforward flat network setup that enables easy access to other Azure services. The design of the network multi-tenancy is usually built-in into the service, and the network is secure by default. In general, hosting platforms like Azure Container Apps support multitenancy. For more information, see Considerations for using Container Apps in a multitenant solution.
Kubernetes has a flat network where all the containers run in the same virtual network or virtual address space. AKS provides multiple networking plugins:
- Kubenet
- Azure CNI
- Azure CNI Powered by Cilium
- Azure CNI Overlay
- Bring your own CNI, an option that makes it possible to customize Kubernetes networking by installing a third-party CNI like Cilium.

In addition, containers are ephemeral, meaning that an IP address has little meaning. Traditional firewall rules based on IP addresses and ports don't work in this environment. Network policies allow you to restrict network traffic using label selectors, rather than IP addresses, to define the rules.
On AKS, you can use Azure Network Policies or Calico Network Policies to restrict network traffic and implement micro-segmentation. For more information, see Secure traffic between pods using network policies in Azure Kubernetes Service (AKS).
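A minimal, hedged sketch of such a label-based NetworkPolicy (namespace, labels, and port are illustrative): it allows ingress to backend Pods only from frontend Pods, regardless of Pod IPs:

```bash
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: myapp-ns
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
EOF
```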
Secrets management
Secrets are sensitive information such as passwords, API keys, certificates, or cryptographic keys. Best practices are to avoid secrets sprawl and to encrypt secrets at rest. Keep a clear separation between secrets and configuration: secrets should be stored in a secure location, accessed by the application only when needed, and rotated regularly. When possible, use ephemeral secrets. On Azure, you can use Azure Key Vault to securely store and control access to secrets, keys, and certificates.
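For example, a hedged sketch with the Azure CLI (the vault name must be globally unique; names and the secret value are placeholders):

```bash
# Create a Key Vault and store a secret in it
az keyvault create --resource-group my-rg --name my-kv-unique-name
az keyvault secret set --vault-name my-kv-unique-name --name db-password --value 'S3cr3t!'

# Retrieve the secret value at deployment or run time
az keyvault secret show --vault-name my-kv-unique-name --name db-password \
  --query value --output tsv
```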
Identity management
Azure Managed Identities are a secure way to authenticate to other Azure services without the need to manage and store secrets. They allow for simple and secure access to resources, reducing complexity and improving security. Azure Container Instances and Azure Container Apps are integrated with Azure Managed Identities. Containerized applications running on these hosting platforms can authenticate to other Azure services like Azure Container Registry or Azure Key Vault using a system-assigned or user-assigned managed identity. This eliminates the need to manage explicit secrets or credentials, reducing the complexity of the authentication process and improving security.
Azure Active Directory is a cloud-based identity and access management service that provides secure authentication and authorization for users, applications, services, and resources. It offers a comprehensive set of capabilities to manage and secure identities, enabling organizations to easily control access to their resources. AKS-managed Azure Active Directory integration simplifies the Azure AD integration process for users that need to authenticate to the Kubernetes API, for example using kubectl.

Azure AD workload identity is a feature of Azure Kubernetes Service that allows you to use Azure AD identities to access Azure resources. The identities are federated and mapped to Kubernetes service accounts, so that you can assign identities at the Pod level. You can use Azure AD workload identity to access Azure Key Vault, Azure Container Registry, and Azure Storage using Azure AD authentication and authorization and Azure RBAC.

The AKS cluster itself can run with a system-assigned or a user-assigned managed identity. The system-assigned identity is automatically created by AKS, while a user-assigned identity is created by you; a user-assigned identity is useful when you want to use the same identity for multiple clusters. The cluster identity is used to create resources in the subscription, such as IP addresses, load balancers, and managed disks. A separate identity, the kubelet identity, is used by the kubelet to access the Azure API; when using Azure Container Registry, the kubelet identity is used to pull images from the registry.

In addition to Kubernetes RBAC, AKS offers Azure RBAC, a role-based access control service that provides fine-grained access management to Azure resources. You can also use Azure RBAC to define access to resources in Azure Kubernetes Service.
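A hedged sketch of enabling the workload identity feature described above and federating a managed identity with a Kubernetes service account (all names are placeholders, and exact flags may vary with your CLI version):

```bash
# Enable the OIDC issuer and workload identity on an existing cluster
az aks update --resource-group my-rg --name my-aks \
  --enable-oidc-issuer --enable-workload-identity

# Create a user-assigned managed identity for the workload
az identity create --resource-group my-rg --name myapp-identity

# Federate the identity with a Kubernetes service account
ISSUER=$(az aks show --resource-group my-rg --name my-aks \
  --query oidcIssuerProfile.issuerUrl --output tsv)
az identity federated-credential create \
  --resource-group my-rg --identity-name myapp-identity \
  --name myapp-federation \
  --issuer "$ISSUER" \
  --subject system:serviceaccount:myapp-ns:myapp-sa
```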
Cluster security
In the Azure products where the container orchestration platform is completely managed by Microsoft, the security posture is easier to manage. Security baseline articles are available for Azure Container Instances and Azure Container Apps.
When using Azure Kubernetes Service, review the AKS security best practices and the AKS security baseline.
The AKS cluster should be upgraded regularly to receive the security patches. Upgrading and maintaining the AKS cluster is a customer responsibility.
You can watch the video Upgrading and Maintaining your Azure Kubernetes Service (AKS) Cluster to learn more about the Kubernetes release process and how this is reflected in the AKS releases.
AKS offers an auto-upgrade option that is provided via multiple channels. The AKS node operating system is hardened by default. If you prefer to perform Kubernetes upgrades manually, you should at least use the node-image channel, which automatically applies security updates to the node operating system.
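For example, a sketch of enabling that channel (names are placeholders):

```bash
# Automatically apply node OS image updates, which include security patches
az aks update --resource-group my-rg --name my-aks --auto-upgrade-channel node-image
```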
Conclusion
In this article, we have tried our best to provide you with the essential knowledge required to make an informed decision regarding the Azure container product that suits your needs. We have explained the trade-off between flexibility and ease of use and how it affects the level of customization and management involved. Additionally, we have given you an overview of the differences in development, deployment, security, and operations in a containerized environment. By understanding these concepts, you can make an informed decision based on your specific requirements.